text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
Blank Project, import Mirror from Asset Store/ Discord Releases unity package.
Open up your scene, for this guide we will use Mirror/Examples/Tanks
You should be familiar with the examples, and default NetworkManager HUD, they look something like this:.
Then create and attach a new script to the canvas, I have named it CanvasHUD.
Open up this new script, and open up Mirrors NetworkManagerHUD (for reference).
Add the following code as the starting template to CanvasHUD.
CanvasHUD.csusing Mirror;using UnityEngine;using UnityEngine.UI;public class CanvasHUD : MonoBehaviour{public Button buttonHost;public void Start(){buttonHost.onClick.AddListener(ButtonHost);}public void ButtonHost(){NetworkManager.singleton.StartHost();}}
Create a button inside the main Canvas, and drag it into the Canvas “ButtonHost” variable. We will not focus too much on layout and looks of the canvas for this guide, but go wild and position the contents where you please :)
Test! Start the game, and press your own “Host Button”, the game should start.
Congratulations, this is the first step to using Unity Canvas with Mirror, and upgrading from the NetworkManagerHUD OnGUI.
If you check the old HUD, it can be summarised into 2 parts. The ‘Start’ (before connecting) and ‘Stop’ (after connecting).
Create 2 UI panels inside the canvas, rename them Panel Start and Panel Stop, remove the image component from Panel Stop, this way we can tell them apart.
Drag your “Button Host” into Panel Start.
Add the following variables to your CanvasHUD script, these variables cover most of what is needed.
CanvasHUD.cspublic GameObject PanelStart;public GameObject PanelStop;public Button buttonHost, buttonServer, buttonClient, buttonStop;public InputField inputFieldAddress;public Text serverText;public Text clientText;
Next, add more UI ! Exciting right! :D
Don’t worry about the code yet, check the image below to see what is needed.
Inside “Panel Start” there should be 3 buttons, inputField and optional title text. Panel Stop should contain one button, and 2 texts, you can remove, add, and adjust things after, but for now follow this guide so everything matches up.
Drag all the new UI into the CanvasHUD script variables, if you have labelled them all nicely when going along, it will be an easier task.
Now for the code to make it all work, various parts will have comments to explain. And this is it, you have now made your own Unity Canvas HUD UI, or upgraded the OnGUI NetworkManagerHUD! :D
CanvasHUD.csprivate void Start(){//Update the canvas text if you have manually changed network managers address from the game object before starting the game sceneif (NetworkManager.singleton.networkAddress != "localhost") { inputFieldAddress.text = NetworkManager.singleton.networkAddress; }//Adds a listener to the main input field and invokes a method when the value changes.inputFieldAddress.onValueChanged.AddListener(delegate { ValueChangeCheck(); });//Make sure to attach these Buttons in the InspectorbuttonHost.onClick.AddListener(ButtonHost);buttonServer.onClick.AddListener(ButtonServer);buttonClient.onClick.AddListener(ButtonClient);buttonStop.onClick.AddListener(ButtonStop);//This updates the Unity canvas, we have to manually call it every change, unlike legacy OnGUI.SetupCanvas();}// Invoked when the value of the text field changes.public void ValueChangeCheck(){NetworkManager.singleton.networkAddress = inputFieldAddress.text;}public void ButtonHost(){NetworkManager.singleton.StartHost();SetupCanvas();}public void ButtonServer(){NetworkManager.singleton.StartServer();SetupCanvas();}public void ButtonClient(){NetworkManager.singleton.StartClient();SetupCanvas();}public void ButtonStop(){// stop host if host modeif (NetworkServer.active && NetworkClient.isConnected){NetworkManager.singleton.StopHost();}// stop client if client-onlyelse if (NetworkClient.isConnected){NetworkManager.singleton.StopClient();}// stop server if server-onlyelse if (NetworkServer.active){NetworkManager.singleton.StopServer();}SetupCanvas();}public void SetupCanvas(){// Here we will dump majority of the canvas UI that may be changed.if (!NetworkClient.isConnected && !NetworkServer.active){if (NetworkClient.active){PanelStart.SetActive(false);PanelStop.SetActive(true);clientText.text = "Connecting to " + NetworkManager.singleton.networkAddress + "..";}else{PanelStart.SetActive(true);PanelStop.SetActive(false);}}else{PanelStart.SetActive(false);PanelStop.SetActive(true);// server / client status messageif (NetworkServer.active){serverText.text = "Server: active. Transport: " + Transport.activeTransport;}if (NetworkClient.isConnected){clientText.text = "Client: address=" + NetworkManager.singleton.networkAddress;}}} | https://mirror-networking.gitbook.io/docs/community-guides/unity-canvas-hud-guide | CC-MAIN-2021-31 | en | refinedweb |
Tutorial
Anatomy of an Angular Angular, we get modules, known as NgModules, to act as the unit that glues together an app or features within an app. All Angular apps have one root module, the app module, that deals with unifying the whole app together. It’s also a best practice to breakdown individual features of an app into their own module. This practice enables things such as lazy loading or preloading certain features.
This post covers NgModules in Angular 2+
Angular modules are not to be confused with ES6 modules. They are two distinct things.
Here’s what a barebones NgModule looks like:
// ...ES6 module imports @NgModule({ declarations: [ ... ], imports: [ ... ], providers: [ ... ], bootstrap: [ ... ], entryComponents: [ ... ], exports: [ ... ] }) export class MyModule { }
And here’s an example of what it can look like with actual members:
import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { FormsModule } from '@angular/forms'; import { MaterialModule } from '@angular/material'; import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; import { AppComponent } from './app.component'; import { VideoPlayerComponent } from './video-player/video-player.component'; import { ConfirmationDialogComponent } from './confirmation-dialog/confirmation-dialog.component'; import { VideoService } from './services/video.service';
Breaking It Down
Let’s briefly explain each entry in the @NgModule decorator:
declarations
This is to specify the components, pipes and directives that should be part of the module.
imports
This is to import other modules that have exported members to be used in templates of components that are part of the NgModule. For example, BrowserModule re-exports CommonModule, which makes available the built-in NgIf and NgFor directives.
RouterModule, BrowserModule, FormsModule, HttpModule and BrowserAnimationsModule are examples of commonly imported modules.
exports
If you want to export members of the module so that they can be used in component templates of other modules, these members would go in the exports array.
In the case of the CommonModule, for example,
COMMON_DIRECTIVES and
COMMON_PIPES are exported, which is what is made available to component templates when you import BrowserModule or CommonModule into your own NgModules.
bootstrap
This is to define the root component, which is often called AppComponent. This means that bootstrap should contain only one member, and that it should be defined only in the main app module.
providers
This is where injectables go. Things like services, route guards and http interceptors are example of injectables that should be part of the providers array. They will be made available everywhere in the app once they have been provided. This is why services are often provided in the root app module.
entryComponents
This is for components that can’t be found by the Angular compiler during compilation time because they are not referenced anywhere in component templates. It’s common for routed components not to be referenced at all in templates, but Angular takes care of adding those to entryComponents automatically, so there’s no need to add those yourself.
Components that should go into entryComponents are not that common. A good example would be Angular Material dialogs, because they are created dynamically, and the Angular compiler wouldn’t know about them otherwise. | https://www.digitalocean.com/community/tutorials/angular-anatomy-angular-module | CC-MAIN-2021-31 | en | refinedweb |
Re: [Zope-dev] [ZConfig] import inhomogenous
On Tuesday 17 February 2004 01:10 pm, Dieter Maurer wrote: It would be better if the type of the imported object (schema or component) were orthogonal to the location from where the object is found (in a package or via an URL). I agree, that does seem nicer. Another approach would be to
Re: [Zope-dev] server for new protocol?
On Thursday 26 February 2004 06:50 am, Nikolay Kim wrote: is there any way create server for new protocol without changing ZServer module? There's clearly a need for some more documentation here, but I'm not sure what to write yet, or where it should go. Contrary to many of the other
Re: [Zope-dev] server for new protocol?
On Sunday 29 February 2004 11:21 pm, Nikolay Kim wrote: P.S. i'm developing smtp server for zope and already have working code. Ooh! This is really cool. Will this be open source? I'll be a lot of people will be interested in this. -Fred -- Fred L. Drake, Jr. fred at zope.com
Re: [Zope-dev] Product directory?
Re: [Zope-dev] Product directory?
On Tuesday 09 March 2004 02:30 pm, Chris McDonough wrote: There is also a convenience function for this in Zope: from Globals import package_home here = package_home(globals()) Maybe I'm just weird, but I generally prefer the general approach when there's not a clear improvement in
Re: [Zope-dev] Re: test.py
On Thursday 11 March 2004 10:40 am, Tres Seaver wrote: #!/usr/bin/python2.3 distutils will munge it anyway, if it installs the scripts. That won't work for a lot of developers, I'll bet, who have python2.3 installed in /usr/local/bin. The env hack is more reasonable for developers; since
[Zope-dev] Re: test.py
On Friday 12 March 2004 05:40 am, yuppie wrote: I don't care much *how* this is resolved, but I'd like to have this consistent and up to date. If I always have to check if the python version is set correctly that line isn't helpful at all. Well, Tres appearantly hasn't had time to respond.
Re: [Zope-dev] ZPT for CSS, anyone?
On Tuesday 30 March 2004 01:40 pm, Dieter Maurer wrote: Furthermore, stylesheets often contain customization variables, e.g. for a color scheme. I think, this is useful. This is one of the most painful warts in CSS that would have been really easy to do right, I think. Being able to name
[Zope-dev] Re: [Zope-Coders] Re: CVS: Zope3/src/zope/tal - talparser.py:1.6
On Thursday 08 April 2004 10:00 am, Philipp von Weitershausen wrote: I would like to backport this patch (including tests) to Zope 2, since I need to i18n XML generated by ZPTs. None here. -Fred -- Fred L. Drake, Jr. fred at zope.com PythonLabs at Zope Corporation
[Zope-dev] ZConfig 2.1 released
I've posted a distribution for ZConfig 2.1 on the ZConfig page: This fixes a few bugs and improves the ability to set default values in schemas. It also adds some helpful schema building blocks, including a general mapping type and support for
Re: [Zope-dev] Zope Developers
On Tuesday 13 April 2004 01:44 pm, Paul Edward Brinich wrote: I was wondering if someone on this list could point me in the direction of an appropriate place to post Zope job openings. I am looking for an audience well-versed in Zope. Thanks for any guidance! There's the Python Job Board:
[Zope-dev] zLOG is dead
zLOG is dead The zLOG package used for logging throughout ZODB, ZEO, and Zope 2 has been declared obsolete. All logging for Zope products will use the logging package from Python's standard library. The zLOG package still exists in Zope 2 and the separate package for
[Zope-dev] Re: [Zope3-dev] Re: Zope and zope
Jim Fulton noted: Of course, having two packages with names differing only in case is a bit ugly. Do we want to consider renaming one or both of these packages to avoid the conflict? A bit ugly, but I can live with it. On Tuesday 13 April 2004 22:17, Tres Seaver wrote: -1 to renaming
Re: [Zope-dev] zLOG is dead
On Wednesday 14 April 2004 01:49 am, Andreas Jung wrote: What is the recommend way to migrate existing code? I assume using: import logging logger = logging.getLogger(loggername). That works, and certainly matches what I've been doing, and what we see in the Zope 3 codebase as well.
Re: [Zope-dev] Re: [Zope3-dev] Zope and zope
On Wednesday 14 April 2004 09:54 am, Kapil Thangavelu wrote: its probably a problem imo for mac users who are on a case insensitive fs. Is this still an issue for Mac OS X, or is your concern for classic Mac OS? I don't know if we support that (simply because I've never heard anyone mention
Re: [Zope-dev] zLOG is dead
On Wednesday 14 April 2004 11:44 am, Lennart Regebro wrote: Yeah, but is it reasonable to think that people who write new products will do this? A rule that most people will break is a bad rule... That people working on Zope itself can be well versed enough to use Zope. for things in
[Zope-dev] Re: [Zope3-dev] Zope and zope
On Wednesday 14 April 2004 10:52 am, Jim Fulton wrote: packages become very unsttractive. It turns out that pkgutil will be confused by the Zope package on Windows or Mac OS, adding it's directory to the zope package's path. This is a bug in pkgutil that can be fixed, but it is an example
Re: [Zope-dev] zLOG is dead
On Wednesday 14 April 2004 10:45 am, Andreas Jung wrote: For consitency: Zope.Products. For lazy writers: Zope. X I prefer the second solution...everyone should know what are products and what are packages. In fact the name does not matter because you can see in the traceback
Re: [Zope3-dev], [Zope-dev] More arguments for z (was Re: Zope and zope)
On Thursday 15 April 2004 10:23 am, Jim Fulton wrote: (BTW, I think it was a mistake to have top-level persistent and transaction packages. I think that will eventually come back to haunt us.) I won't disagree with this. ;-( The only way to avoid collissions is to pick stupid names
[Zope3-dev], [Zope-dev] Import checking code
On Thursday 15 April 2004 13:22, Martijn Faassen wrote: Note that for checking dependencies in Python code I still think this tool could be improved by using technology from importchecker.py ... which can use Python's compiler module to lift all imports from source code, which I think is
Re: [Zope3-dev], [Zope-dev] Import checking code
On Friday 16 April 2004 06:15 am, Martijn Faassen wrote: I'll try to find some time to take a look at it. Dependency checking is sort of a natural thing to use importchecker stuff for, but importchecker itself may need some refactoring. :) Actually, the confusion I was referring to was in
Re: [Zope-dev] Proposal: Rename zope package
On Friday 16 April 2004 01:31 pm, Michael Bernstein wrote: From a consistency in nomenclature POV, I find 'z' jars a bit with ZConfig, zdaemon, ZEO, zLog, and ZODB, which one might expect to find nested within 'z' (as 'z.Config' for example). This is admittedly only an issue for the
Re: [Zope-dev] Proposal: Rename zope package
On Friday 16 April 2004 03:06 pm, Michael Bernstein wrote: Shouldn't we strive for consistency in nomenclature going forward? Definately. My point was that we don't have anything to base it on, not that we shouldn't be. Zope 3 kindly specifies some guidelines for naming, including module and
Re: [Zope-dev] Proposal: Rename zope package
On Friday 16 April 2004 03:24 pm, Shane Hathaway wrote: - Spelling it zOPE to take advantage of a frequent mishap involving the cAPS lOCK key That must be what happened for zLOG, and I declared that dead. I don't think anyone's ready for that for Zope 3 just yet. ;-) -Fred -- Fred L.
[Zope-dev] Re: [Zope3-dev] Re: Proposal: Rename zope package
On Tuesday 20 April 2004 12:08 pm, Jim Fulton wrote: What do people think about alternative 4? +1 -Fred -- Fred L. Drake, Jr. fred at zope.com PythonLabs at Zope Corporation ___ Zope-Dev maillist - [EMAIL PROTECTED]
[Zope-dev] Re: zLOG changes
On Tuesday 20 April 2004 01:22 pm, Andreas Jung wrote: - the entries in the event.log are currently written without the log level. Is this dedicated behaviour? I think the log level should be part of the event.log It should. This was lost when I changed the zLOG -- logging mapping, and
Re: [Zope-dev] Re: zLOG is dead
On Tuesday 20 April 2004 01:15 pm, Christian Heimes wrote: Fred Drake wrote: * Do I need to take care that the messages are logged into the event log and on the console or can I safly use the logging package like:: ... zLOG has logged the messages to the console when zope was started
Re: [Zope-dev] Re: zLOG changes
I wrote: - In debug mode, add a new handler that dumps to standard output. This is fairly easy to code, but is inflexible. Andreas responded: But flexible enough for most usecase. The point is that you want to see the tracebacks on the console during the development phase. Watching the
Re: [Zope-dev] Re: zLOG changes
On Wednesday 21 April 2004 04:48 am, Chris Withers wrote: I'm guessing there is some kind of log-to-console logger already? If so, why not just add that in zope.conf and comment it out when you move to production? That would work for me, but not everyone at ZC agreed, so I've made some
[Zope-dev] New mailing list for ZConfig users
At a suggestion from the community, I've created a new mailing list for ZConfig users. This is for general discussion and questions. The list is run using Mailman at Zope.org; you can sign up at -Fred -- Fred L. Drake, Jr. fred at
[Zope-dev] Re: [Zope3-dev] Need help with
On Sunday 25 April 2004 12:29 pm, Jim Fulton wrote: cvs co svn+ssh://svn.zope.org/repos/Zope3/trunk Zope3 That should be: svn co svn+ssh://svn.zope.org/repos/Zope3/trunk Zope3 cvs co Zope3 and this would be: svn co
[Zope-dev] Re: [Zope3-dev] Need help with
On Sunday 25 April 2004 01:00 pm, Jim Fulton wrote: Oops. Thanks. I gues some habits will be hard to kick. :) Yeah. Of course, CVS isn't just an old habit; it'll still be current practice, not just for Zope 2.6.x and 2.7.x, but for lots of projects. The general pain of two widely-used
[Zope-dev] Re: Mailing Log Entries in 2.7
On Saturday 24 April 2004 06:26 pm, Chris Withers wrote: And there was me looking forward to writing a product that added its own ZConfig section :'( You still can if you like. ;-) Okay, how can I get the log level and exception type into the subject? That's not currently possible,
[Zope-dev] Re: [Zope3-dev] What do we want to bring from CVS to Subversion
On Monday 26 April 2004 03:23 pm, Jim Fulton wrote: 2. Convert the mainline history, but leave off the branches. This sounds good to me. -Fred -- Fred L. Drake, Jr. fred at zope.com PythonLabs at Zope Corporation ___ Zope-Dev maillist -
Re: Should we require windows users to use tools that honor Unixlineendings? (Re: [Zope-dev] Re: [Zope3-dev] ATTENTION! cvs tosubversiontransitiontomorrow)
On Wednesday 28 April 2004 04:01 pm, Lennart Regebro wrote: Yes, but I'm pretty sure there are default settings for which files that should be treated as binary on the server side in CVS. At least I rember setting it up. :´) Yes, this is specified in the CVSROOT/cvswrappers file. -Fred
Re: [Zope-dev] [Weakness] analysis of Zope startup problems
On Friday 30 April 2004 10:50 am, Chris McDonough wrote: Is this improved at all by Fred's latest zZLOG-removal checkins? If not, I will open a collector issue. None of my changes have been applied to Zope 2.7; they only exist on the Zope 2 HEAD. The removal of zLOG is only for the Zope 3
[Zope-dev] Re: [Zope3-dev] Re: [Zope-Coders] End-of-line translation problem
On Friday 30 April 2004 12:25 pm, Lennart Regebro wrote: Only that the Subversion people are wrong. Ad-hoc is not good enough. It must be able to be configurable on the server. Another possible approach would be to write a script to use for adds instead of the default client; it could set
[Zope-dev] Re: [Zope3-dev] Proposal: cvs to subversion transition May 11 (?)ope-dev] Re: Mailing Log Entries in 2.7
Regarding putting more information in email's generated using email-notifier sections, I wrote: That's not currently possible, though it wouldn't be hard to add. Perhaps some future ZConfig revision would add this. On Wednesday 05 May 2004 05:41 am, Chris Withers responded: Surely ZConfig
Re: [Zope-dev] cvs.zope.org wedged, we're looking into it
On Friday 07 May 2004 01:50 pm, Ken Manheimer wrote: cvs.zope.org is wedged - you can connect to it (ping, web, ssh) but not get any further. We've got a call in for attention, hopefully it'll be back available soon... Yay! It's working again! Ken, you're my hero. ;-) -Fred --
[Zope-dev] Re: [Zope3-dev] Re: SVN: Zope3/trunk/functional_tests/ Remove unused directory.
On Wednesday 26 May 2004 08:42 am, Philipp von Weitershausen wrote: That works for me too, but please *above* the diff. Yes; I should have been more clear and said *immediately* following the commit message. No, the CVS should stay in there to tell them apart, for the very same reason you
[Zope-dev] Re: [Zope3-dev] status of Zope versus zope? the imports right.
Re: [Zope-dev] INSTANCE_HOME and SOFTWARE_HOME still necessary in 2.7 start script?
On Thursday 01 July 2004 07:42 pm, Chris Withers wrote: Is it still necessary to specify INSTANCE_HOME and SOFTWARE_HOME in the start script for Zope? I'm pretty sure I removed that requirement long ago, back when I added the App.config module. Or would the following work? python
Re: [Zope-dev] INSTANCE_HOME and SOFTWARE_HOME still necessary in 2.7 start script?
On Friday 02 July 2004 02:21 am, Chris Withers wrote: Well, the automatically generated runzope still includes INSTANCE_HOME and SOFTWARE_HOME, hence my question... Can those variables be dropped from the template script? They were dropped, also back when I added App.config. Someone
[Zope-dev] TAL Hotfix 2004-07
[Zope-dev] TAL Hotfix 2004-07
Re: [Zope-dev] Troubles with hotfix20040714
On Mon, 19 Jul 2004 09:48:00 -0700 (PDT), C. Olmsted [EMAIL PROTECTED] wrote: Not sure if this is quite the correct list to post to, but I'm having trouble with Hotfix20040714. We're running zope 2.7, plone 2.0.3, and zwiki 0.32.0. Can you test with the 2.7.2 release candidate? This very
Re: [Zope-dev] Troubles with hotfix20040714
On Tue, 20 Jul 2004 10:40:22 -0700 (PDT), Cliff O. [EMAIL PROTECTED] wrote: Ok, so I just finished loading up 2.7.2rc1 and all seems well using the same products and database as before. I can probably migrate the site once 2.7.2 final is released but, of course, it would be great to apply the
Re: [Zope-dev] Re: [Zope-Coders] Collector Status Meanings
On Fri, 30 Jul 2004 11:50:57 -0400 (EDT), Ken Manheimer [EMAIL PROTECTED] wrote: Accepted: Issues that some supporter(s) has responsibility for resolving it, and it is not yet resolved. Your description says that some supporter has assessed the issue as
Re: [Zope-dev] ZConfig keys
On Wed, 04 Aug 2004 10:24:30 +0200, Godefroid Chapelle [EMAIL PROTECTED] wrote: I would like to confirm that ZConfig keys are case insensitive and that the corresponding attributes on the config object returned by the 'loadConfig' call are always lower case. It sounds like I need to clarify
Re: [Zope-dev] ZPT Optimization Opportunity
On Mon, 27 Sep 2004 14:18:30 -0400, Tres Seaver [EMAIL PROTECTED] wrote: Transformation is already complete at that point. The only difference is the type of the result returned (eventually) to the publisher. Ok, that sounds good. BTW, I looked again at where StringIO is used, and it seems
Re: [Zope-dev] Re: inconsistent mimetype assignment for uploaded files
On Tue, 05 Oct 2004 12:44:32 +0200, Tino Wildenhain [EMAIL PROTECTED] wrote: Well, the problem might be the asymptation part of the filename should be considered an indicator of its contents. That is a nuissance. It's unfortunate we still don't have any sort of common type system for
Re: [Zope-dev] Re: Conditional imports in ZTUtils/__init__.py
On Tue, 05 Oct 2004 09:47:11 -0500, Evan Simpson [EMAIL PROTECTED] wrote: This is part of my attempt to allow the various bits of ZPT to work outside of Zope. It assumes that the presence of the 'Zope' module is a reliable test. Perhaps this is a YAGNI, or perhaps there's a better way of
Re: [Zope-dev] PageTemplate XML mode bugs
On Tue, 05 Oct 2004 12:47:33 +0200, yuppie [EMAIL PROTECTED] wrote: There are two annoying bugs that make the XML mode unusable for many tasks: - (i18n namespace broken) - (XML files opened in binary mode) I would
Re: [Zope-dev] Re: PageTemplate XML mode bugs
On Tue, 05 Oct 2004 18:44:04 +0200, yuppie [EMAIL PROTECTED] wrote: Ok. I'll remove that line in CVS/SVN. Thanks! I added a new comment to the issue. Hope that makes things clearer. ( ) Ok; let's just say the discussion's moved there. -Fred -- Fred
Re: [Zope-dev] Python2.4 and Zope 2.7
On Sun, 17 Oct 2004 18:43:34 +0200, Andreas Jung [EMAIL PROTECTED] wrote: Python 2.4 is still in alpha stage and there are no plans to support Python 2.4 in the short term. It's in beta as of Friday evening; this would be a good time for someone with time to start testing it with various Zope
Re: [Zope-dev] Python2.4 and Zope 2.7
On Mon, 18 Oct 2004 06:59:26 +0200, Andreas Jung [EMAIL PROTECTED] wrote: Zope 2.7.3 + Python 2.4 fails when running the unittests: Then a collector item should be filed. ;-) I don't know anything about ThreadedAsync myself. -Fred -- Fred L. Drake, Jr.fdrake at gmail.com Zope
[Zope-dev] Proposed changes to the TAL specification
I'm proposing some (small) changes to the TAL specification. This would result in a new version of TAL for Zope X3 3.1 (and Zope 2.8 if anyone wants to backport the relevant code changes). The discussion will be on the ZPT list, where I've sent a copy of the proposal. The proposal is also
Re: [Zope-dev] ZopeInterface
On Thu, 28 Oct 2004 11:07:52 +0200, Radoslaw Stachowiak [EMAIL PROTECTED] wrote: Could anyone please provide me information when ZopeInterface product is going to be updated ? And how is it related to zopex3 releases ? I'm planning to release a final version around the time that Zope X3 3.0.0
Re: [Zope-dev] Renamed the Zope package to Zope2 and including Zope 3 packages in Zope 2.8
On Mon, 31 Jan 2005 19:25:15 +0100, Lennart Regebro [EMAIL PROTECTED] wrote: Note that diskussions about the Zope2 + Zope3 pagetemplate issue arrived in the conclusion that, the faster we can get Zope3 pagetemplates back ported to Zope2, the happier we will be. ;) I have no idea if that is a
Re: [Zope-dev] ZConfig issue: products and container-class
On Mon, 14 Feb 2005 15:22:38 -0200, Leonardo Rochael Almeida [EMAIL PROTECTED] wrote: It's obvious that the container-class directive is being evaluated much earlier than the products directive. Without delving further into the code, it looks like the container-class directive has an error
Re: [Zope-dev] ZConfig issue: products and container-class
On Mon, 14 Feb 2005 18:41:20 -0200, Leonardo Rochael Almeida [EMAIL PROTECTED] wrote: Should I bother with the collector entry or is it a known limitation no one is going to bother with? :-) It's not a bad idea to file a report in the collector. While I've no plan to change it myself, that's
Re: [Zope-dev] Parrot
On Mon, 21 Mar 2005 10:54:11 -0500, Andrew Langmead [EMAIL PROTECTED] wrote: I haven't tried the latest version of Parrot, but I'd think that Zope would be the last thing that will run successfully. I don't know that Parrot tries to emulate Python's C API either, and Zope definately contains
Re: [Zope-dev] Re: ZConfig change breaks Zope 2 and Zope 3
On 5/9/05, yuppie [EMAIL PROTECTED] wrote: But I still believe it was wrong to change the 'inet_address' datatype in ZConfig. I spoke with Tim about this briefly today, and I can't remember the reasons for some of the relevant changes. I suspect at this point that putting less magic in the
[Zope-dev] Re: [Zope-Coders] Zope 2.8 coming this weekend
On 6/10/05, Paul Winkler [EMAIL PROTECTED] wrote: Mind if I check in text-only changes to the 2_8 branch? It's still Friday for Andreas, so this is a good time! -Fred -- Fred L. Drake, Jr.fdrake at gmail.com Zope Corporation ___ Zope-Dev
[Zope-dev] Re: [Zope-Coders] Zope 2.8 coming this weekend
On Fri, Jun 10, 2005 at 02:21:32PM -0400, Paul Winkler wrote: Done. Like I said, just trivial docs typos. Yeah, but improvements are improvements! On 6/10/05, Paul Winkler [EMAIL PROTECTED] wrote: While I'm at it, anybody object to the attached patch to doc/FAQ.txt ? I don't see a need to
Re: [Zope-dev] Extending Zope's ZConfig Schema in a Product?
On 6/30/05, Sidnei da Silva [EMAIL PROTECTED] wrote: Gosh, that looks too nice to be true. I will try that out tomorrow and write out a how-to on zope.org if it works out. It is too good to be true; sorry. Well, it is true, but it's not what you're looking for. You can't use it to extend the
Re: [Zope-dev] Extending Zope's ZConfig Schema in a Product?
Re: [Zope-dev] Extending Zope's ZConfig Schema in a Product?
On 7/1/05, Jens Vagelpohl [EMAIL PROTECTED] wrote: That just has the disadvantage that you're increasing the number of configuration files to maintain in an instance. If it's imported and used in zope.conf at leaast there's just one file to deal with... This is true. Is that really important,
[Zope-dev] Build process for Zope 2.9
Hey all, I'm working on a revised build process for Zope 2.9, based on the work that we've done for Zope 3. What this means is that we'll have a setup.py that uses the code from zpkg () to load metadata from the various packages are part of the
[Zope-dev] Create Zope 2.9 project wiki
I've created a project wiki for Zope 2.9: The wiki is pretty bare at the moment. I've started adding some notes about moving the build and packaging support to use zpkg in the Tasks page:
Re: [Zope-dev] Can't fetch externals???
On 9/30/05, Paul Winkler [EMAIL PROTECTED] wrote: One thing Tino suggested: it might be a firewall issue. Does svn's externals-fetching look somehow different to a firewall than does a regular (non-external) checkout? When I tried checking out on my laptop, I noticed that ZoneAlarm asked 'me
Re: [Zope-dev] Can't fetch externals???
On 9/30/05, Paul Winkler [EMAIL PROTECTED] wrote: Hypothesis: Is it possible that svn.zope.org is configured such that when you get the externals, it uses plain svn (i.e. an anonymous checkout) rather than svn+ssh? As noted, very likely. The default port for SVN w/out SSH is 3690:
Re: [Zope-dev] Can't fetch externals???
On 9/30/05, Sidnei da Silva [EMAIL PROTECTED] wrote: Note this is not generic externals behaviour. In the case of Zope, it's because the externals are set to their svn:// urls. It could just as well be set to svn+ssh://, but then you would be able to checkin a file from the wrong place without
Re: [Zope-dev] Mountpoints
[It doesn't look like my response went to the zope-dev list; re-sending.] On Tuesday 18 October 2005 15:43, Tim Peters wrote: I'm copying Fred because he may remember more about this than I do. Fred, do you know of a reason why I can't stitch a newer ZODB into Zope(2) trunk? I have a dim,
Re: Get rid of configure/make? (was Re: [Zope-dev] Re: SVN: Zope/trunk/ Merge philikon-zope32-integration branch. Basically, this branch entails:)
On 11/5/05, Tino Wildenhain [EMAIL PROTECTED] wrote: The usual setup.py from distutils to make it more pythonic. The install.py in the root of the distribution is actually a conventional setup.py. Would it be helpful to keep the setup.py name? We renamed it to encourage the configure/make
Re: Get rid of configure/make? (was Re: [Zope-dev] Re: SVN: Zope/trunk/ Merge philikon-zope32-integration branch. Basically, this branch entails:)
On 11/5/05, Jim Fulton [EMAIL PROTECTED] wrote: It's main benefit is that it leverages a familiar pattern, but I'm not convinced that it's worth it. Also, as tools like rpm and deb become more widely used, I'm not sure how familar the configure/make dance is. Other than Python and Zope, I
Re: [Zope-dev] Not-really-unit-testing ideas.
On 11/23/05, Lennart Regebro [EMAIL PROTECTED] wrote: Basically, I'd like to create a site once, and use it for all subsequent tests, until I made a change that means the site needs to be recreated. But how? Well, I'm not sure. How, for example, could I Jim's new test runner includes support
Re: [Zope-dev] Re: [Zope3-dev] RFC: Reunite Zope 2 and Zope 3 in the source code repository
On 11/23/05, Stephan Richter [EMAIL PROTECTED] wrote: Using this group, we have about an 80-90% -1 vote count. I'll weigh in with a -1 as well, for all the reasons cited by the other -1 voters on this issue. Zope 2 and Zope 3 are far too different at this point. The only way I see for
Re: [Zope-dev] zope.conf extensibility
Re: [Zope-dev] zope.conf extensibility
On 11/30/05, Sidnei da Silva [EMAIL PROTECTED] wrote: I haven't seen this being checked in at all, maybe it's in Tres laptop? These were committed to the trunk before the 2.9 branch was created: r39652 | tseaver |
Re: [Zope-dev] zope.conf extensibility
On 12/1/05, Chris Withers [EMAIL PROTECTED] wrote: In this case, I think zopeschema.xml should be documentation enough, especially as any product author wanting to use this feature is going to have to write a component.xml at least ;-) Actually, a product author isn't required to write a
Re: [Zope-dev] Re: Unit Test Failures
On 12/19/05, Philipp von Weitershausen [EMAIL PROTECTED] wrote: Now I see what you mean by contract. You're right, I guess it isn't documented then, but perhaps it should be. That's never been part of the contract and, as Tres notes, it's inconsistent. The implmentation will only sort when
Re: [Zope-dev] What use cases are driving make install from a checkout?
On 12/21/05, Leonardo Rochael Almeida [EMAIL PROTECTED] wrote: My point is: I don't think there's anything wrong in the install procedure being different between the checkout and the tarball, but it should never take more than a couple of fixed (and documented) steps to convert a checkout to a
Re: [Zope-dev] Re: Product installation (implicit vs. explicit)
On 12/22/05, Andreas Jung [EMAIL PROTECTED] wrote: Jar files have no dependencies. Well, I know you know what you mean here, but I'll elaborate since the kids haven't started fighting yet this morning. :-) Jar files don't have dependency metadata. They're pretty much equivalent to zipped
Re: [Zope-dev] Re: zLOG module deprecated
Re: [Zope-dev] Re: zLOG module deprecated.
Re: [Zope-dev] Re: ZPT backward compatibility
On 1/17/06, Tino Wildenhain [EMAIL PROTECTED] wrote: Add to it the fact the zpt@zope.org was due to be retired anyway ;) Though retirement for the list was discussed, it was decided not to retire it since it was still the best place for implementors to discuss matters. The implementations in
[Zope-dev] Re: [Zope3-dev] December release post-mortem
On 1/18/06, Jim Fulton [EMAIL PROTECTED] wrote: If eggs work out, as I hope they will, I'd like to stop work on zpkg and just use eggs. +42 -Fred -- Fred L. Drake, Jr.fdrake at gmail.com There is no wealth but life. --John Ruskin ___ Zope-Dev
[Zope-dev] Re: [Zope3-dev] merge zope-dev and zope3-dev?
On 2/16/06, Chris Withers [EMAIL PROTECTED] wrote: To be clear: I'm talking _only_ about merging the dev lists, _not_ the user lists. The users lists are still largely independent, but it seems like just about every post to the dev list now has a bearing on both Zope 2 and Zope 3, especially
Re: XML export/import is cool! (Was Re: [Zope-dev] Deprecating XML export/import?)
On 3/24/06, Jim Fulton [EMAIL PROTECTED] wrote: We've had sucess writing XSLT templates to transform the pickle data into formats easily parsable for particular applications. As part of a recent task (likely the same one Jim's referring to here!), I transformed the XML export into another XML
[Zope-dev] 64-bit BTrees
I have a need for 64-bit BTrees (at least for IOBTree and OIBTree), and I'm not the first. I've created a feature development branch for this, and checked in my initial implementation. I've modified the existing code to use PY_LONG_LONG instead of int for the key and/or value type; there's no
Re: [ZODB-Dev] Re: [Zope-dev] Re: 64-bit BTrees
On 4/17/06, Jim Fulton [EMAIL PROTECTED] wrote: The fact that IIBTrees is so widely used is exatly the reason I want to use 64-bits for the existing types rather than having to introduce a new type. Oops, I was checking in the separated version of 64-bit BTrees while this was landing in my
[Zope-dev] Re: 64-bit BTrees
On 4/13/06, Fred Drake [EMAIL PROTECTED] wrote: I've created a feature development branch for this, and checked in my initial implementation. I've made another branch for this, with a different twist. I'm not sure it'll be interesting, but I think it'll solve my immediate need until I can get
Re: [Zope-dev] zpkg, building zope from source
On 5/15/06, Sidnei da Silva [EMAIL PROTECTED] wrote: Also on a similar subject, running 'make install' from a checkout only copies packages that have a 'SETUP.cfg' inside. Is that intentional? I thought someone was in charge of fixing the 'make install' dance. Someone might be, and it might
Re: [Zope-dev] zpkg, building zope from source
On 5/15/06, Sidnei da Silva [EMAIL PROTECTED] wrote: I was looking at zpkg for the first time today, and was sorry to realize it won't run to completion on a Windows machine due to some minor use of os.WIFEXITED which is due to a dubious use of the 'tar' command, since Python has a 'tarfile'
[Zope-dev] Re: [Zope3-dev] pkg_resources or pkgutil?
Re: [Zope-dev] Next step on Blobs?
On 12/17/06, Christian Theune [EMAIL PROTECTED] wrote: Nope, not yet. I don't have any plans for Zope 2, but I'll be working on the Zope 3 side. ... - Make the existing File implementation use blobs This would be good so people see how to use them and get blobs widely exposed. Ideally,
Re: [Zope-dev] Next step on Blobs?
On 12/18/06, Christian Theune [EMAIL PROTECTED] wrote: a) provide a generation to convert old data structures Since we tend to work with high-availability issues at ZC, I'm hesitant to go this route; expensive generations that affect large portions of a database can be very difficult to run
Re: [Zope-dev] Re: [Zope] Re: [Warning] Zope 3 component architecture (CA) not reliably usable for registrations from Python
On 1/10/07, Dieter Maurer [EMAIL PROTECTED] | https://www.mail-archive.com/search?l=zope-dev@zope.org&q=from:%22Fred+Drake%22 | CC-MAIN-2018-47 | en | refinedweb |
I first wrote a version of this article just over 16 months ago, and it’s pretty impressive how much the GraphQL community has achieved in that time.
First of all, the question is no longer really Apollo or Relay, but rather Apollo or does one even need a fancy client at all. If you’re just testing the waters with GraphQL and don’t want to change your existing app too much, you can just use
fetch in your component like so:
fetch('/graphql', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Accept': 'application/json',
},
body: JSON.stringify({query: "{ hello }"})
})
.then(r => r.json())
.then(data => console.log('data returned:', data));
The main benefit to adopting a GraphQL client is the cache implementation. Using fetch is fine to begin with, but you don’t want to be using it in an app where users quickly jump between views.
In OnlineOrNot we use Apollo to cache query results — which gives us quite a noticeable boost in performance. How it works in practice:
- User opens a list of their GraphQL applications -> this list gets cached
- User opens a list of tests -> client has already fetched data about the applications, so it adds the tests to each application
- User now visits their list of applications again -> no new GraphQL request is made, all results are already in memory
Essentially, the more the user clicks around your application, the faster your user experience becomes.
On Bundle size
One of the biggest complaints I hear about adopting Apollo is the bundle size (Apollo Boost, the “easiest way to get started with Apollo Client” weighs in at 30.7 kB min+gzipped), so luckily there are also alternative lightweight clients to consider:
No article on GraphQL clients would be complete without mentioning AWS Amplify. Though Amplify does take an ‘everything but the kitchen sink’ approach to features, and you get everything included with it:
- Authentication
- Analytics
- API
- GraphQL Client
- Storage
- Push Notifications
- Interactions
- PubSub
- Internationalization
- Cache
Thus Amplify may not suit your needs unless you’re building a whole product experience that relies on GraphQL and don’t want to customise your approach.
Why I like Apollo Client
The setup is considerably easier than Relay — it involves installing one package, and adding the
ApolloProvider to the root of your React tree.
The API is nice — they have an equivalent to Relay’s QueryRenderer called
Query that does what it says:
<Query
query={gql`
{
rates(currency: "USD") {
currency
rate
}
}
`}
>
if (loading) return <p>Loading...</p>;
if (error) return <p>Error :(</p>;
return data.rates.map(({ currency, rate }) => (
<div key={currency}>
<p>{`${currency}: ${rate}`}</p>
</div>
));
}}
</Query>
It can be used to manage state in your React app — that is, you can directly write to Apollo’s Redux-like store and consume that data in another part of the React tree. Though with React’s new Context API, and React’s best practices of Lifting State Up you probably won’t need it.
import React from 'react';
import { ApolloConsumer } from 'react-apollo';
import Link from './Link';
const FilterLink = ({ filter, children }) => (
<ApolloConsumer>
{client => (
<Link
onClick={() => client.writeData({ data: { visibilityFilter: filter } })}
>
{children}
</Link>
)}
</ApolloConsumer>
);
Downsides to Apollo
- It’s huge. It weighs in at 10x more than the smallest GraphQL client I’d consider using, and 3x more than urql
Quirks
Apollo is not without quirks however:
- Since Apollo uses
idto build its cache, forgetting to include
idin your query can cause some interesting bugs and error messages
Why I dislike Relay
Setup
The main benefit to using Relay is that
relay-compiler doesn't get included in your frontend bundle, saving your user from downloading the whole GraphQL parser - it "pre-compiles" the GraphQL queries at build time.
What annoys me about Relay is that it requires a fair bit of work to even add to a project. Just to get it running on the client side, you need to:
- add a relay plugin to your
.babelrcconfig
- set up relay-compiler as a yarn script
- setup a “relay environment” (essentially your own
fetchutility to pass data to the relay-runtime), and
- add
QueryRenderercomponents to the React Components you wish to pass your data to
On the server side, you need to:
- Ensure the IDs your app returns are unique across all of your types (meaning you can’t return nice ID values like
1, 2, 3, they need to be like
typename_1, typename_2)
Developer Experience
The developer experience itself is pretty unpleasant too —
relay-compiler needs to run each time you modify any GraphQL query, or modify the schema. In large frontend teams this means teaching everyone to run
relay-compiler every time you change branches in Git, since almost all of our work involves fetching data from GraphQL in some way.
Quirks
Being one of Facebook’s Open Source projects doesn’t necessarily mean issues get fixed quickly. Occasionally, things break in unexpected ways:
- Using an old version of graphql breaks
relay:
- Errors don’t get sent via the error object in GraphQL when using QueryRenderer, instead one needs to create an
errortype, and send the errors through the data object:
Originally published at onlineornot.com on December 15, 2018. | https://hackernoon.com/apollo-vs-relay-modern-which-graphql-client-to-consider-using-in-2019-7488cc0cf02c?source=rss----3a8144eabfe3---4 | CC-MAIN-2019-35 | en | refinedweb |
[…]
Tag Archives: labs
Announcing Adobe AIR 2 Beta 2 Now Available on Adobe Labs
Today we are making available the second beta of Adobe AIR 2 on our Adobe Labs website. Since our first beta release of AIR 2 back in November, our team has been focused on improving the quality of AIR 2 as well as adding a number of new capabilities to the runtime.
Since we are very close to shipping AIR 2, we would like to request that all developers download the AIR 2 beta 2 runtime and SDK, read the release notes and developer FAQ for important changes and known issues, and test out their 1.0 and 2.0 applications. If you run into an issue, our team would like to hear from you. Please submit a bug using our feedback form. You can also post questions to the AIR 2 beta 2 forums if you would like to connect with other developers using the AIR 2 beta 2 runtime and SDK.
Two new features that developers mab be particularly interested in are the following:
-.
If you are interested in learning more about the new printing capabilities in the AIR 2 beta 2, please be sure to watch Adobe platform evangelist Ryan Stewart’s interview with Rick Rocheleau, the developer that led the development of these new features.
Important: Applications built against Adobe AIR 2 beta 1 will not run using the AIR 2 beta 2 runtime. In order for an AIR 2 beta 1 application to run on the AIR 2 beta 2 runtime, the namespace of the beta 1 application descriptor file must first be updated to "2.0beta2" and compiled against the AIR 2 beta 2 SDK. .
We have updated all of our AIR 2 beta sample applications to be compatible with the AIR 2 runtime.
Thank you for your continued help and support. In addition to our blog, please be sure to follow us on Twitter for AIR-related updates.
Rob Christensen
Product Manager, Adobe AIR), […]
New on Adobe Labs: Squiggly – spell checking engine for Flash Player and AIR
One […]
Tutorial: Building a data-centric app using Catalyst and Builder betas. | https://blogs.adobe.com/digitalmedia/tag/labs/ | CC-MAIN-2019-35 | en | refinedweb |
Core ML and Vision: Machine Learning in iOS 11 Tutorial
Learn about Core ML and Vision, two cutting-edge iOS 11 frameworks, in this iOS machine learning tutorial!
Version
- Swift 4, iOS 11, Xcode 9.
Build and run your project; you’ll see an image of a city at night, and a button:
Choose another image from the photo library in the Photos app. This starter project’s Info.plist already has a Privacy – Photo Library Usage Description, so you might be prompted to allow usage.
The gap between the image and the button contains a label, where you’ll display the model’s classification of the image’s scene..
Adding a Model to Your Project
After you download GoogLeNetPlaces.mlmodel, drag it from Finder into the Resources group in your project’s Project Navigator:
Select this file, and wait for a moment. An arrow will appear when Xcode has generated the model class:
Click the arrow to see the generated class:
Xcode has generated input and output classes, and the main class
GoogLeNetPlaces, which has a
model property and two
prediction methods.
GoogLeNetPlacesInput has a
sceneImage property of type
CVPixelBuffer. Whazzat!?, we all cry together, but fear not, the Vision framework will take care of converting our familiar image formats into the correct input type. :]
The Vision framework also converts
GoogLeNetPlacesOutput properties into its own
results type, and manages calls to
prediction methods, so out of all this generated code, your code will use only the
model property.
Wrapping the Core ML Model in a Vision Model
Finally, you get to write some code! Open ViewController.swift, and import the two frameworks, just below
import UIKit:
import CoreML import Vision
Next, add the following extension below the
IBActions extension:
// MARK: - Methods extension ViewController { func detectScene(image: CIImage) { answerLabel.text = "detecting scene..." // Load the ML model through its generated class guard let model = try? VNCoreMLModel(for: GoogLeNetPlaces().model) else { fatalError("can't load Places ML model") } } }
Here’s what you’re doing:
First, you display a message so the user knows something is happening.
The designated initializer of
GoogLeNetPlaces throws an error, so you must use
try when creating it.
VNCoreMLModel is simply a container for a Core ML model used with Vision requests.
The standard Vision workflow is to create a model, create one or more requests, and then create and run a request handler. You’ve just created the model, so your next step is to create a request.
Add the following lines to the end of
detectScene(image:):
// Create a Vision request with completion handler let request = VNCoreMLRequest(model: model) { [weak self] request, error in guard let results = request.results as? [VNClassificationObservation], let topResult = results.first else { fatalError("unexpected result type from VNCoreMLRequest") } // Update UI on main queue let article = (self?.vowels.contains(topResult.identifier.first!))! ? "an" : "a" DispatchQueue.main.async { [weak self] in self?.answerLabel.text = "\(Int(topResult.confidence * 100))% it's \(article) \(topResult.identifier)" } }
VNCoreMLRequest is an image analysis request that uses a Core ML model to do the work. Its completion handler receives
request and
error objects.
You check that
request.results is an array of
VNClassificationObservation objects, which is what the Vision framework returns when the Core ML model is a classifier, rather than a predictor or image processor. And
GoogLeNetPlaces is a classifier, because it predicts only one feature: the image’s scene classification.
A
VNClassificationObservation has two properties:
identifier — a
String — and
confidence — a number between 0 and 1 — it’s the probability the classification is correct. When using an object-detection model, you would probably look at only those objects with
confidence greater than some threshold, such as 30%.
You then take the first result, which will have the highest confidence value, and set the indefinite article to “a” or “an”, depending on the identifier’s first letter. Finally, you dispatch back to the main queue to update the label. You’ll soon see the classification work happens off the main queue, because it can be slow.
Now, on to the third step: creating and running the request handler.
Add the following lines to the end of
detectScene(image:):
// Run the Core ML GoogLeNetPlaces classifier on global dispatch queue let handler = VNImageRequestHandler(ciImage: image) DispatchQueue.global(qos: .userInteractive).async { do { try handler.perform([request]) } catch { print(error) } }
VNImageRequestHandler is the standard Vision framework request handler; it isn’t specific to Core ML models. You give it the image that came into
detectScene(image:) as an argument. And then you run the handler by calling its
perform method, passing an array of requests. In this case, you have only one request.
The
perform method throws an error, so you wrap it in a try-catch.
Using the Model to Classify Scenes
Whew, that was a lot of code! But now you simply have to call
detectScene(image:) in two places.
Add the following lines at the end of
viewDidLoad() and at the end of
imagePickerController(_:didFinishPickingMediaWithInfo:):
guard let ciImage = CIImage(image: image) else { fatalError("couldn't convert UIImage to CIImage") } detectScene(image: ciImage)
Now build and run. It shouldn’t take long to see a classification:
Well, yes, there are skyscrapers in the image. There’s also a train.
Tap the button, and select the first image in the photo library: a close-up of some sun-dappled leaves:
Hmmm, maybe if you squint, you can imagine Nemo or Dory swimming around? But at least you know the “a” vs. “an” thing works. ;]
A Look at Apple’s Core ML Sample Apps
This tutorial’s project is similar to the sample project for WWDC 2017 Session 506 Vision Framework: Building on Core ML. The Vision + ML Example app uses the MNIST classifier, which recognizes hand-written numerals — useful for automating postal sorting. It also uses the native Vision framework method
VNDetectRectanglesRequest, and includes Core Image code to correct the perspective of detected rectangles.
You can also download a different sample project from the Core ML documentation page. Inputs to the MarsHabitatPricePredictor model are just numbers, so the code uses the generated
MarsHabitatPricer methods and properties directly, instead of wrapping the model in a Vision model. By changing the parameters one at a time, it’s easy to see the model is simply a linear regression:
137 * solarPanels + 653.50 * greenHouses + 5854 * acres
Where to Go From Here?.
- And thanks to Jimmy Kim (jimmyk1) for this link to Awesome CoreML Models — a collection of machine learning models that work with Core ML: try them out, and contribute your own awesome model!
Last but not least, I really learned a lot from this concise history of AI from Andreessen Horowitz’s Frank Chen: AI and Deep Learning a16z podcast.
I hope you found this tutorial useful. Feel free to join the discussion below! | https://www.raywenderlich.com/577-core-ml-and-vision-machine-learning-in-ios-11-tutorial | CC-MAIN-2019-35 | en | refinedweb |
Changelog for package openni_launch
1.11.1 (2018-09-13)
1.11.0 (2018-01-13)
1.10.0 (2018-01-06)
[maintenance] Repository moved to
Contributors: Isaac I.Y. Saito
1.9.8 (2016-05-07)
[feat] adding depth_registered_filtered injection
#26
[sys][Travis CI] Update config to using industrial_ci with Prerelease Test.
#28
Contributors: Jonathan Bohren, Isaac I.Y. Saito
1.9.7 (2015-11-15)
1st ROS Jade release
[sys] Add a simple travis config
Contributors: Isaac I.Y. Saito
1.9.6 (2015-10-27)
[feat] adjust frame ids to TF2
[fix] Removes the leading '/' from the TF frames in case tf_prefix is empty, which fixes this error: [ WARN] [1432284298.914340788]: TF2 exception: Invalid argument "/camera_rgb_optical_frame" passed to lookupTransform argument target_frame in tf2 frame_ids cannot start with a '/' like: (/camera/camera_nodelet_manager) Actually, tf_prefix is now ignored altogether.
Contributors: Jack O'Quin, Jonathan Binney, Martin Guenther
1.9.5 (2014-04-18)
Test the ROS launch files, fix some errors (
#10
).
Fix errors found by roslaunch unit test (
#10
).
Add unit tests for launch file dependencies.
Contributors: Jack O'Quin, jonbinney
1.9.4 (2013-08-25 18:04)
Fix missing run_depend.
Contributors: Marcus Liebhardt, jonbinney
1.9.3 (2013-08-25 17:49)
Switch to rgbd_launch.
Added sw_registration and hw_registration flags.
Modified the top level file to use internal file names.
device.launch is now internal.
Added deprecation notice about rgb.launch.
Deprecation notices about the move of common launch files to rgbd_launch.
Added tf prefix resolution.
Contributors: Piyush Khandelwal, jonbinney
1.9.2 (2013-08-01)
Fix device registered point cloud generation.
Disabled unregistered depth and disparity processing when depth_registration is set to true.
Fixing xyzrgb pointcloud generation when device registration is enabled.
Contributors: Piyush Khandelwal, jonbinney
1.9.1 (2013-07-29)
Allow proper usage of namespaces for openni's nodes and nodelets.
Add arguments for switching on/off each processing module.
Removes (assumed) duplicated depth nodelets include thereby removing service registration error.
Add topic remappings to sort the versious nodelets i/o (i.e. depth, rgb etc.).
Add option of utilising the worker threads parameter for the nodelet manager.
Moves nodelet_manager into camera namespace.
Contributors: Marcus Liebhardt, jonbinney
1.9.0 (2013-06-27)
1.8.3 (2013-01-03)
Catkinizing openni_launch.
Moved manager setup to manager.launch. Added options load_driver and publish_tf for suppressing the driver and/or default tf tree, to better support bag file playback and calibration.
Moved launching of all processing nodelets from device.launch to new processing.launch for better reusability, for example bag file playback.
Use 'respawn' arg instead of 'bond' (deprecated). Conforms to image_proc and stereo_image_proc launch files, and now attempts to respawn loaders when bonds are enabled. Removed rgb.launch, using image_proc.launch instead.
Initial commit of openni_launch as unary stack.
Contributors: Jonathan Binney, Julius Kammerl, Michael Ferguson, Patrick Mihelich, jbinney | http://docs.ros.org/kinetic/changelogs/openni_launch/changelog.html | CC-MAIN-2019-35 | en | refinedweb |
Minimalistic building tool
- Why runjs ?
- Features
- Transpilers
- API
- Using Async/Await
For 3.x to 4.x migration instructions look here
Install runjs in your project
npm install runjs --save-dev
Create
runfile.js in your root project directory:
const run ={console}{}moduleexports =hellomakedir
Call in your terminal:
$ npx run hello TommyHello Tommy!$ npx run makedirmkdir somedir
For node < 8.2, npx is not available, so doing
npm install -g runjs-cliis neccessary which installs global
runscript. After that above task would be called like:
run hello Tommy
Mechanism of RunJS is very simple. Tasks are run by just importing
runfile.js as a
normal node.js module. Then based on command line arguments proper exported function
from
runfile.js is called.
RunJS in a nutshell
const runfile =const taskName = processargv2const options params =runfiletaskName
Why runjs ?Why runjs ?
We have Grunt, Gulp, npm scripts, Makefile. Why another building tool ?
Gulp or Grunt files seem overly complex for what they do and the plugin ecosystem adds a layer of complexity towards the simple command line tools underneath. The documentation is not always up to date and the plugin does not always use the latest version of the tool. After a while customizing the process even with simple things, reconfiguring it becomes time consuming.
Npm scripts are simple but they get out of hand pretty quickly if we need more complex process which make them quite hard to read and manage.
Makefiles are simple, better for more complex processes
but they depend on bash scripting. Within
runfile you can use
command line calls as well as JavaScript code and npm
libraries which makes that approach much more flexible.
FeaturesFeatures
Executing shell commandsExecuting shell commands
RunJS gives an easy way to execute shell commands in your tasks by
run function
in synchronous and asynchronous way:
const run ={}moduleexports =all
$ run commands
Because
./node_modules/.bin is included in
PATH when calling shell commands
by
run function, you can call "bins" from your local project in the same way as
in npm scripts.
Handling argumentsHandling arguments
Provided arguments in the command line are passed to the function:
{console}moduleexports =sayHello
$ run sayHello worldHello world!
You can also provide dash arguments like
-a or
--test. Order of them doesn't
matter after task name. They will be always available by
options helper
from inside a function.
const options ={consoleconsole}moduleexports =sayHello
$ run sayHello -a --test=something worldHello world!Given options:
Documenting tasksDocumenting tasks
To display all available tasks for your
runfile.js type
run in your command line
without any arguments:
$ run Processing runfile.js... Available tasks: echo - echo task description buildjs - Compile JS files
Use
help utility function for your task to get additional description:
const run help ={}moduleexports =buildjs
$ run buildjs --help Processing runfile.js... Usage: buildjs Compile JS files
You can provide detailed annotation to give even more info about the task:
const dedent =const run help ={}moduleexports =test
$ run test --help Processing runfile.js... Usage: test [options] [file] Run unit tests Options: --watch run tests in a watch mode Examples: run test dummyComponent.js run test dummyComponent.js --watch
NamespacingNamespacing
To better organise tasks, it is possible to call them from namespaces:
const test ={console}moduleexports =test
$ run test:unitDoing unit testing!
This is especially useful if
runfile.js gets too large. We can move some tasks
to external modules and import them back to a namespace:
./tasks/test.js:
{console}{console}moduleexports =unitintegration
runfile.js
const test =moduleexports =test
$ run test:unitDoing unit testing!
If we don't want to put imported tasks into a namespace, we can always use spread operator:
moduleexports =...test
$ run unitDoing unit testing!
With ES6 modules import/export syntax this becomes even simpler:
// export with no namespace// export with namespace
$ run unit$ run test:unit
Sharing tasksSharing tasks
Because
runfile.js is just a node.js module and
runjs just calls exported
functions from that module based on cli arguments, nothing stops you to move
some repetitive tasks across your projects to external npm package and
just reuse it.
shared-runfile module:
{console}{console}moduleexports =shared1shared2
Local
runfile.js
const shared ={console}moduleexports =...sharedlocal
$ run shared1$ run shared2$ run local
AutocompletionAutocompletion
After setting up autocompletion, suggestions about available
tasks from your
runfile.js will be given when calling
run <tab>.
This is an experimental feature. It will work slowly if you use transpiler with your
runfile.js. It won't work also with
npx run <task>calls,
npm -g install runjs-cliis necessary, so you could do calls like
run <task>.
Setup process:
run --completion >> ~/runjs.completion.sh
echo 'source ~/runjs.completion.sh' >> .bash_profile
- Restart your shell (reopen terminal)
Depending on your shell, use proper bootstrap files accordingly.
If you get errors like
_get_comp_words_by_ref command not foundyou need to install bash completion package. For MacOS users doing
brew install bash-completionshould do the job and then adding
[ -f /usr/local/etc/bash_completion ] && ./usr/local/etc/bash_completion. to your
~/.bash_profile.
TranspilersTranspilers
Transpilers gives you an advantage of using ES6/ES7 features which may not be available in your node version.
So for example writing
runfile.js with es6 imports/exports is possible:
{console}
$ run makeThatDir somedirmkdir somedirDone!
BabelBabel
If you want to use Babel transpiler for your
runfile.js install it:
npm install babel-core babel-preset-es2015 babel-register --save-dev
and in your
package.json write:
RunJS will require defined transpiler before requiring
runfile.js so you can
use all ES6/ES7 features which are not supported by your node version.
TypeScriptTypeScript
If you want to use TypeScript transpiler for your runfile, install TypeScript tooling:
npm install typescript ts-node --save-dev
and then in your
package.json define a path to
ts-node/register and
runfile.ts.
You need to also define custom path to your runfile as TypeScript files have
*.ts extension. RunJS will require defined transpiler before requiring
./runfile.ts.
APIAPI
For inside
runfile.js usage.
run(cmd, options)run(cmd, options)
run given command as a child process and log the call in the output.
./node_modules/.bin/ is included into PATH so you can call installed scripts directly.
const run =
Options:
cwd: ... // current working directory (String)async: ... // run command asynchronously (true/false), false by defaultstdio: ... // 'inherit' (default), 'pipe' or 'ignore'env: ... // environment key-value pairs (Object)timeout: ...
Examples:
To get an output from
run function we need to set
stdio option to
pipe otherwise
output will be
null:
const output =
For
stdio: 'pipe' outputs are returned but not forwarded to the parent process thus
not printed out to the terminal.
For
stdio: 'inherit' (default) outputs are passed
to the terminal, but
run function will resolve (async) / return (sync)
null.
For
stdio: 'ignore' nothing will be returned or printed
options(this)options(this)
A helper which returns an object with options which were given through dash params of command line script.
const options =
Example:
$ run lint --fix
{fix ? :}
To execute a task in JS with options:
lint
help(func, annotation)help(func, annotation)
Define help annotation for task function, so it will be printed out when calling task with
option and when calling
run without any arguments.
const help =
$ run build --help $ run test --help
Using Async/AwaitUsing Async/Await
For node >= 7.10 it is possible to use async functions out of the box since node will support them natively.
Expected usage in your runfile:
const run ={awaitconsole}moduleexports =testasyncawait
and then just
$ run testasyncawait
If your node version is older you need to depend on transpilers,
either
Babel or
TypeScript. For
TypeScript you do no more than transpiler
setup which was described above and async/await should just
work.
For
Babel you additionally need
babel-preset-es2017 and
babel-polyfill:
npm install babel-preset-es2017 babel-polyfill --save-dev
and proper config in your
package.json:
"babel": { "presets": ["es2017"] }, "runjs": { "requires": [ "./node_modules/babel-polyfill", "./node_modules/babel-register" ] } | https://www.npmjs.com/package/runjs | CC-MAIN-2019-35 | en | refinedweb |
12-07-2017
10:02 AM
Hello,
I'm trying to build a Virtual Commissioning model in Process Simulate usin a pre programmed PLC program in Tia portal and a ABB Robotstudio program for the robot. I am usin PLCSIM Advanced to run the PLC.
I have encountered a grate many challanges and my first question is how to import the PLC signals in to Process Simulate. I have exported the PLC-tags from the project i Tia portal using export.
Then I tried to import the same excel file in Process Simulate usin the Signal Maping Tool.
But I had no luck. Even tried to export an excel list from the Signal Viewer in PS and try to reconfigure the Tia Portal exported list in the same format but still no luck.
How do I transfer Signals from a PLC Project in Tia Portal to a Process Simulate Study?
When this is solved I have many more questions but that I can save for another topic
Best Regards
Johan
12-13-2017
05:01 AM
Really no one knows how to get the I/O list from Tia Portal to Process Simulate? Everyone who have set up a comunication between a PLC and Process Simulate must have done this. Or is there another function to retrive the I/O list from the PLC controller?
12-22-2017
04:31 PM
Hi,
one easy way from TIA to Process Simulate is via PLCSimAdv.
You can load your PLC program to PLCSimAdv.
Later you can start the function "extract s7 signals" under PS to import the signals from PLCSimAdv to PS.
Other way:
Export the symbol list from TIA to excel
change excel format to PS
import signals to ps.
Done
Watch Replays of Previous Topics | https://community.plm.automation.siemens.com/t5/General-Tecnomatix-Forum/Virtual-Commisioning-signals-comunication/m-p/455691/highlight/true | CC-MAIN-2019-35 | en | refinedweb |
Typeclass DerivationEdit this page on GitHub
Typeclass derivation is a way to generate given instances for certain type classes automatically or with minimal code hints. A type class in this sense is any trait or class with a type parameter that describes the type being operated on. Commonly used examples are
Eql,
Ordering,
Show, or
Pickling. Example:
enum Tree[T] derives Eql, Ordering, Pickling { case Branch(left: Tree[T], right: Tree[T]) case Leaf(elem: T) }
The
derives clause generates given instances for the
Eql,
Ordering, and
Pickling traits in the companion object
Tree:
given [T: Eql] as Eql[Tree[T]] = Eql.derived given [T: Ordering] as Ordering[Tree[T]] = Ordering.derived given [T: Pickling] as Pickling[Tree[T]] = Pickling.derived
Deriving Types
Besides for enums, typeclasses can also be derived for other sets of classes and objects that form an algebraic data type. These are:
- individual case classes or case objects
- sealed classes or traits that have only case classes and case objects as children.
Examples:
case class Labelled[T](x: T, label: String) derives Eql, Show sealed trait Option[T] derives Eql case class Some[T] extends Option[T] case object None extends Option[Nothing]
The generated typeclass instances are placed in the companion objects
Labelled and
Option, respectively.
Derivable Types
A trait or class can appear in a
derives clause if its companion object defines a method named
derived. The type and implementation of a
derived method are arbitrary, but typically it has a definition like this:
def derived[T] given Mirror.Of[T] = ...
That is, the
derived method takes an implicit parameter of (some subtype of) type
Mirror that defines the shape of the deriving type
T and it computes the typeclass implementation according
to that shape. A given
Mirror instance is generated automatically for
- case classes and objects,
- enums and enum cases,
- sealed traits or classes that have only case classes and case objects as children.
The description that follows gives a low-level way to define a type class.
The Shape Type
For every class with a
derives clause, the compiler computes the shape of that class as a type. For example, here is the shape type for the
Tree[T] enum:
Cases[( Case[Branch[T], (Tree[T], Tree[T])], Case[Leaf[T], T *: Unit] )]
Informally, this states that
The shape of a
Tree[T]is one of two cases: Either a
Branch[T]with two elements of type
Tree[T], or a
Leaf[T]with a single element of type
T.
The type constructors
Cases and
Case come from the companion object of a class
scala.compiletime.Shape, which is defined in the standard library as follows:
sealed abstract class Shape object Shape { /** A sum with alternative types `Alts` */ case class Cases[Alts <: Tuple] extends Shape /** A product type `T` with element types `Elems` */ case class Case[T, Elems <: Tuple] extends Shape }
Here is the shape type for
Labelled[T]:
Case[Labelled[T], (T, String)]
And here is the one for
Option[T]:
Cases[( Case[Some[T], T *: Unit], Case[None.type, Unit] )]
Note that an empty element tuple is represented as type
Unit. A single-element tuple
is represented as
T *: Unit since there is no direct syntax for such tuples:
(T) is just
T in parentheses, not a tuple.
The Generic Typeclass
For every class
C[T_1,...,T_n] with a
derives clause, the compiler generates in the companion object of
C a given instance for
Generic[C[T_1,...,T_n]] that follows
the outline below:
given [T_1, ..., T_n] as Generic[C[T_1,...,T_n]] { type Shape = ... ... }
where the right hand side of
Shape is the shape type of
C[T_1,...,T_n].
For instance, the definition
enum Result[+T, +E] derives Logging { case Ok[T](result: T) case Err[E](err: E) }
would produce:
object Result { import scala.compiletime.Shape._ given [T, E] as Generic[Result[T, E]] { type Shape = Cases[( Case[Ok[T], T *: Unit], Case[Err[E], E *: Unit] )] ... } }
The
Generic class is defined in package
scala.reflect.
abstract class Generic[T] { type Shape <: scala.compiletime.Shape /** The mirror corresponding to ADT instance `x` */ def reflect(x: T): Mirror /** The ADT instance corresponding to given `mirror` */ def reify(mirror: Mirror): T /** The companion object of the ADT */ def common: GenericClass }
It defines the
Shape type for the ADT
T, as well as two methods that map between a
type
T and a generic representation of
T, which we call a
Mirror:
The
reflect method maps an instance of the ADT
T to its mirror whereas
the
reify method goes the other way. There's also a
common method that returns
a value of type
GenericClass which contains information that is the same for all
instances of a class (right now, this consists of the runtime
Class value and
the names of the cases and their parameters).
Mirrors
A mirror is a generic representation of an instance of an ADT.
Mirror objects have three components:
adtClass: GenericClass: The representation of the ADT class
ordinal: Int: The ordinal number of the case among all cases of the ADT, starting from 0
elems: Product: The elements of the instance, represented as a
Product.
The
Mirror class is defined in package
scala.reflect as follows:
class Mirror(val adtClass: GenericClass, val ordinal: Int, val elems: Product) { /** The `n`'th element of this generic case */ def apply(n: Int): Any = elems.productElement(n) /** The name of the constructor of the case reflected by this mirror */ def caseLabel: String = adtClass.label(ordinal)(0) /** The label of the `n`'th element of the case reflected by this mirror */ def elementLabel(n: Int): String = adtClass.label(ordinal)(n + 1) }
GenericClass
Here's the API of
scala.reflect.GenericClass:
class GenericClass(val runtimeClass: Class[_], labelsStr: String) { /** A mirror of case with ordinal number `ordinal` and elements as given by `Product` */ def mirror(ordinal: Int, product: Product): Mirror = new Mirror(this, ordinal, product) /** A mirror with elements given as an array */ def mirror(ordinal: Int, elems: Array[AnyRef]): Mirror = mirror(ordinal, new ArrayProduct(elems)) /** A mirror with an initial empty array of `numElems` elements, to be filled in. */ def mirror(ordinal: Int, numElems: Int): Mirror = mirror(ordinal, new Array[AnyRef](numElems)) /** A mirror of a case with no elements */ def mirror(ordinal: Int): Mirror = mirror(ordinal, EmptyProduct) /** Case and element labels as a two-dimensional array. * Each row of the array contains a case label, followed by the labels of the elements of that case. */ val label: Array[Array[String]] = ... }
The class provides four overloaded methods to create mirrors. The first of these is invoked by the
reify method that maps an ADT instance to its mirror. It simply passes the
instance itself (which is a
Product) to the second parameter of the mirror. That operation does not involve any copying and is thus quite efficient. The second and third versions of
mirror are typically invoked by typeclass methods that create instances from mirrors. An example would be an
unpickle method that first creates an array of elements, then creates
a mirror over that array, and finally uses the
reify method in
Reflected to create the ADT instance. The fourth version of
mirror is used to create mirrors of instances that do not have any elements.
How to Write Generic Typeclasses
Based on the machinery developed so far it becomes possible to define type classes generically. This means that the
derived method will compute a type class instance for any ADT that has a given
Generic instance, recursively.
The implementation of these methods typically uses three new type-level constructs in Dotty: inline methods, inline matches, and implicit matches. As an example, here is one possible implementation of a generic
Eql type class, with explanations. Let's assume
Eql is defined by the following trait:
trait Eql[T] { def eql(x: T, y: T): Boolean }
We need to implement a method
Eql.derived that produces a given instance for
Eql[T] provided
a given
Generic[T]. Here's a possible solution:
inline def derived[T] given (ev: Generic[T]): Eql[T] = new Eql[T] { def eql(x: T, y: T): Boolean = { val mx = ev.reflect(x) // (1) val my = ev.reflect(y) // (2) inline erasedValue[ev.Shape] match { case _: Cases[alts] => mx.ordinal == my.ordinal && // (3) eqlCases[alts](mx, my, 0) // [4] case _: Case[_, elems] => eqlElems[elems](mx, my, 0) // [5] } } }
The implementation of the inline method
derived creates a given instance for
Eql[T] and implements its
eql method. The right-hand side of
eql mixes compile-time and runtime elements. In the code above, runtime elements are marked with a number in parentheses, i.e
(1),
(2),
(3). Compile-time calls that expand to runtime code are marked with a number in brackets, i.e.
[4],
[5]. The implementation of
eql consists of the following steps.
- Map the compared values
xand
yto their mirrors using the
reflectmethod of the implicitly passed
Generic
(1),
(2).
- Match at compile-time against the shape of the ADT given in
ev.Shape. Dotty does not have a construct for matching types directly, but we can emulate it using an
inlinematch over an
erasedValue. Depending on the actual type
ev.Shape, the match will reduce at compile time to one of its two alternatives.
- If
ev.Shapeis of the form
Cases[alts]for some tuple
altsof alternative types, the equality test consists of comparing the ordinal values of the two mirrors
(3)and, if they are equal, comparing the elements of the case indicated by that ordinal value. That second step is performed by code that results from the compile-time expansion of the
eqlCasescall
[4].
- If
ev.Shapeis of the form
Case[elems]for some tuple
elemsfor element types, the elements of the case are compared by code that results from the compile-time expansion of the
eqlElemscall
[5].
Here is a possible implementation of
eqlCases:
inline def eqlCases[Alts <: Tuple](mx: Mirror, my: Mirror, n: Int): Boolean = inline erasedValue[Alts] match { case _: (Shape.Case[_, elems] *: alts1) => if (mx.ordinal == n) // (6) eqlElems[elems](mx, my, 0) // [7] else eqlCases[alts1](mx, my, n + 1) // [8] case _: Unit => throw new MatchError(mx.ordinal) // (9) }
The inline method
eqlCases takes as type arguments the alternatives of the ADT that remain to be tested. It takes as value arguments mirrors of the two instances
x and
y to be compared and an integer
n that indicates the ordinal number of the case that is tested next. It produces an expression that compares these two values.
If the list of alternatives
Alts consists of a case of type
Case[_, elems], possibly followed by further cases in
alts1, we generate the following code:
- Compare the
ordinalvalue of
mx(a runtime value) with the case number
n(a compile-time value translated to a constant in the generated code) in an if-then-else
(6).
- In the then-branch of the conditional we have that the
ordinalvalue of both mirrors matches the number of the case with elements
elems. Proceed by comparing the elements of the case in code expanded from the
eqlElemscall
[7].
- In the else-branch of the conditional we have that the present case does not match the ordinal value of both mirrors. Proceed by trying the remaining cases in
alts1using code expanded from the
eqlCasescall
[8].
If the list of alternatives
Alts is the empty tuple, there are no further cases to check.
This place in the code should not be reachable at runtime. Therefore an appropriate
implementation is by throwing a
MatchError or some other runtime exception
(9).
The
eqlElems method compares the elements of two mirrors that are known to have the same
ordinal number, which means they represent the same case of the ADT. Here is a possible
implementation:
inline def eqlElems[Elems <: Tuple](xs: Mirror, ys: Mirror, n: Int): Boolean = inline erasedValue[Elems] match { case _: (elem *: elems1) => tryEql[elem]( // [12] xs(n).asInstanceOf[elem], // (10) ys(n).asInstanceOf[elem]) && // (11) eqlElems[elems1](xs, ys, n + 1) // [13] case _: Unit => true // (14) }
eqlElems takes as arguments the two mirrors of the elements to compare and a compile-time index
n, indicating the index of the next element to test. It is defined in terms of another compile-time match, this time over the tuple type
Elems of all element types that remain to be tested. If that type is
non-empty, say of form
elem *: elems1, the following code is produced:
- Access the
n'th elements of both mirrors and cast them to the current element type
elem
(10),
(11). Note that because of the way runtime reflection mirrors compile-time
Shapetypes, the casts are guaranteed to succeed.
- Compare the element values using code expanded by the
tryEqlcall
[12].
- "And" the result with code that compares the remaining elements using a recursive call to
eqlElems
[13].
If type
Elems is empty, there are no more elements to be compared, so the comparison's result is
true.
(14)
Since
eqlElems is an inline method, its recursive calls are unrolled. The end result is a conjunction
test_1 && ... && test_n && true of test expressions produced by the
tryEql calls.
The last, and in a sense most interesting part of the derivation is the comparison of a pair of element values in
tryEql. Here is the definition of this method:
inline def tryEql[T](x: T, y: T) = implicit match { case ev: Eql[T] => ev.eql(x, y) // (15) case _ => error("No `Eql` instance was found for $T") }
tryEql is an inline method that takes an element type
T and two element values of that type as arguments. It is defined using an
implicit match that tries to find a given instance for
Eql[T]. If an instance
ev is found, it proceeds by comparing the arguments using
ev.eql. On the other hand, if no instance is found
this signals a compilation error: the user tried a generic derivation of
Eql for a class with an element type that does not have an
Eql instance itself. The error is signaled by
calling the
error method defined in
scala.compiletime.
Note: At the moment our error diagnostics for metaprogramming does not support yet interpolated string arguments for the
scala.compiletime.error method that is called in the second case above. As an alternative, one can simply leave off the second case, then a missing typeclass would result in a "failure to reduce match" error.
Example: Here is a slightly polished and compacted version of the code that's generated by inline expansion for the derived
Eql instance for class
Tree.
given [T] as Eql[Tree[T]] where (elemEq: Eql[T]) { def eql(x: Tree[T], y: Tree[T]): Boolean = { val ev = the[Generic[Tree[T]]] val mx = ev.reflect(x) val my = ev.reflect(y) mx.ordinal == my.ordinal && { if (mx.ordinal == 0) { this.eql(mx(0).asInstanceOf[Tree[T]], my(0).asInstanceOf[Tree[T]]) && this.eql(mx(1).asInstanceOf[Tree[T]], my(1).asInstanceOf[Tree[T]]) } else if (mx.ordinal == 1) { elemEq.eql(mx(0).asInstanceOf[T], my(0).asInstanceOf[T]) } else throw new MatchError(mx.ordinal) } } }
One important difference between this approach and Scala-2 typeclass derivation frameworks such as Shapeless or Magnolia is that no automatic attempt is made to generate typeclass instances for elements recursively using the generic derivation framework. There must be a given instance for
Eql[T] (which can of course be produced in turn using
Eql.derived), or the compilation will fail. The advantage of this more restrictive approach to typeclass derivation is that it avoids uncontrolled transitive typeclass derivation by design. This keeps code sizes smaller, compile times lower, and is generally more predictable.
Deriving Instances Elsewhere
Sometimes one would like to derive a typeclass instance for an ADT after the ADT is defined, without being able to change the code of the ADT itself.
To do this, simply define an instance with the
derived method of the typeclass as right-hand side. E.g, to implement
Ordering for
Option, define:
instance [T: Ordering] as Ordering[Option[T]] = Ordering.derived
Usually, the
Ordering.derived clause has an implicit parameter of type
Generic[Option[T]]. Since the
Option trait has a
derives clause,
the necessary instance is already present in the companion object of
Option.
If the ADT in question does not have a
derives clause, a
Generic instance
would still be synthesized by the compiler at the point where
derived is called.
This is similar to the situation with type tags or class tags: If no instance
is found, the compiler will synthesize one.
Syntax
Template ::= InheritClauses [TemplateBody] EnumDef ::= id ClassConstr InheritClauses EnumBody InheritClauses ::= [‘extends’ ConstrApps] [‘derives’ QualId {‘,’ QualId}] ConstrApps ::= ConstrApp {‘with’ ConstrApp} | ConstrApp {‘,’ ConstrApp}
Discussion
The typeclass derivation framework is quite small and low-level. There are essentially
two pieces of infrastructure in the compiler-generated
Generic instances:
- a type representing the shape of an ADT,
- a way to map between ADT instances and generic mirrors.
Generic mirrors make use of the already existing
Product infrastructure for case
classes, which means they are efficient and their generation requires not much code.
Generic mirrors can be so simple because, just like
Products, they are weakly
typed. On the other hand, this means that code for generic typeclasses has to
ensure that type exploration and value selection proceed in lockstep and it
has to assert this conformance in some places using casts. If generic typeclasses
are correctly written these casts will never fail.
It could make sense to explore a higher-level framework that encapsulates all casts in the framework. This could give more guidance to the typeclass implementer. It also seems quite possible to put such a framework on top of the lower-level mechanisms presented here. --> | http://dotty.epfl.ch/docs/reference/contextual/derivation.html | CC-MAIN-2019-35 | en | refinedweb |
Displays video with object-fit: cover with fallback for IE
react-video-cover
A small React component rendering a video with object-fit: cover, or a Fallback if object-fit is not available.
Installation
npm install --save react-video-cover
Basic Usage
Okay, let's say you have a simple video tag like this
<video src="" />
Now you want to display this video so that it always fills some container, while keeping the correct aspect ratio. For this example the container will be 300px by 300px:
<div style={{ width: '300px', height: '300px', overflow: 'hidden', }}> <video src="" /> </div>
We can use object-fit: cover to let the video fill the container:
<div style={{ width: '300px', height: '300px', overflow: 'hidden', }}> <video style={{ objectFit: 'cover', width: '100%', height: '100%', }} </div>
The only problem with this: object-fit is not implemented by IE and Edge.
If you do not have to support IE, I would suggest that you stop right here.
If you want to get the same effect in IE, simply replace the video tag with the react-video-cover component:
<div style={{ width: '300px', height: '300px', overflow: 'hidden', }}> <VideoCover videoOptions={{src: ''}} /> </div>
react-video-cover will set width: 100% and height: 100% because I think these are sensible defaults. You can use the style prop to overwrite it.
Here is the complete example, which also allows you to play/pause by clicking the video:
class MinimalCoverExample extends Component { render() { const videoOptions = { src: '', ref: videoRef => { this.videoRef = videoRef; }, onClick: () => { if (this.videoRef && this.videoRef.paused) { this.videoRef.play(); } else if (this.videoRef) { this.videoRef.pause(); } }, title: 'click to play/pause', }; return ( <div style={{ width: '300px', height: '300px', overflow: 'hidden', }}> <VideoCover videoOptions={videoOptions} /> </div> ); } }
It is also available as Example 3 on the demo-page.
Props
videoOptions
type: Object
default:
undefined
All members of videoOptions will be passed as props to the .
style
type: Object
default:
undefined
Additional styles which will be merged with those defined by this component.
Please note that some styles are not possible to override, in particular:
- object-fit: cover (when the fallback is not used)
- position: relative and overflow: hidden (when the fallback is used)
className
type: String
default:
undefined
Use this to set a custom className.
forceFallback
type: Boolean
default:
false
This component will use object-fit: cover if available, that is in all modern browsers except IE.
This prop forces use of the fallback. This is helpful during troubleshooting,
but apart from that you should not use it.
remeasureOnWindowResize
type: Boolean
default:
false
If set, an event listener on window-resize is added when the Fallback is used.
It will re-evaluate the aspect-ratio and update the styles if necessary.
This has no effect if the fallback is not used.
The classic example where it makes sense to use this is when using a background video.
If you need to react to different events to re-measure the aspect-ratio please see the onFallbackDidMount prop.
onFallbackDidMount
type: Function
default:
undefined
Will be executed when the Fallback is mounted.
The only parameter is a function, which can be used to force a re-measuring, for example after the size of the surrounding container has changed.
Please note that this will only be invoked if the fallback is used, that is in IE.
See ResizableCoverExample for an example implementation.
onFallbackWillUnmount
type: Function
default:
undefined
Will be executed before the Fallback unmounts.
You probably want to use this to clear any event-listeners added in onFallbackDidMount.
Development
To start a webpack-dev-server with the examples:
npm start
Then open
To build the examples:
npm run build-examples
You can find the results in
dist_examples.
To build the Component as published to npm:
npm run build
You can find the results in
dist. | https://reactjsexample.com/displays-video-with-object-fit-cover-with-fallback-for-ie/ | CC-MAIN-2019-35 | en | refinedweb |
problem occurs on Linux Mint 13 (Ubuntu 12.04) with mono 2.10.8.1 (Debian 2.10.8.1-1ubuntu2.2)
I'm using a FTDI USB to Serial converter which supports up to 4000000
Baud. With "stty -F /dev/ttyUSB0 speed <baud rate>" I can set the baud rate to
1000000. But the highest value I can set in my SerialPort instance is 921600 . If
I set it to 1000000 "stty -F /dev/ttyUSB0" shows me that the device
actually runs with 9600 baud.
Baud rate 1000000 is one of the "standard" baud rates. Setting baud rate to 1000000 works in python, C on the same Linux and on .NET on Windows.
Steps to reproduce:
-------------------
0) I think you need a device which supports at least 1000000 baud.
1) run the following code and compare the values with the output from "stty -F
/dev/ttyUSB0":
using System;
using System.IO.Ports;
namespace SerialPortTest
{
class MainClass
{
public static void Main(string[] args)
{
SerialPort sp = new SerialPort("/dev/ttyUSB0");
sp.Open();
sp.BaudRate = 38400;
Console.WriteLine("stty should now display " + sp.BaudRate +
" baud. press a key to continue");
Console.ReadKey();
sp.BaudRate = 921600;
Console.WriteLine("stty should now display " + sp.BaudRate +
" baud. press a key to continue");
Console.ReadKey();
// ----- from here stty shows only 9600 baud -----
sp.BaudRate = 1000000;
Console.WriteLine("stty should now display " + sp.BaudRate +
" baud. press a key to continue");
Console.ReadKey();
sp.Close();
}
}
}
Looks like the old bug
was fixed only up to 921600 leaving all higher baud rates unfixed
I'm attaching a patch against mono-2.10.8.1 (from Debian), that allows setting custom baud rates. Maybe this would also work for newer "standard" baud rates like 1000000.
Created attachment 3302 [details]
Patch against support/serial.c
Hi Chris,
Sorry for the very long wait.
I reviewed your patch and it looks good. Before we can include it, it must be released under the MIT license.
In order to to do so, please email the patch to mono-dev and state "I release this patch under the MIT license."
Thanks.
@Rodrigo FYI, there's a PR pending that does something similar:
Sent mail to mono-devel-list.
Applied patch
Still can't set the baud rate to 1000000 on Linux.
Getting
System.ArgumentOutOfRangeException: Given baud rate is not supported on this platform
That is because
mcs/class/System/System.IO.Ports/SerialPort.cs
relay on
mcs/class/System/System.IO.Ports/SerialPortStream.cs
which uses
is_baud_rate_legal
from
support/serial.c
which is just a dumb switch with several predefined values.
Perhaps because the code has not been released.
Because the switch statement has support for non-standard baud rates now.
Switch knows but is_baud_rate_legal doesn't use it.
To be exact:
setup_baud_rate knows about non-standard baud rates. But is_baud_rate_legal doesn't use it. And SerialPort still throws an exception because of is_baud_rate_legal.
is_baud_rate_legal (int baud_rate)
{
gboolean ignore = FALSE;
return setup_baud_rate (baud_rate, &ignore) != -1;
}
The title of the bug report "Can't set 1000000 baud with System.IO.Ports.SerialPort.BaudRate". I still can't set 1000000 baud rate. I wouldn't reopen the bug, if I hadn't test it.
As I understand the code, this would fix the issue:
gboolean
is_baud_rate_legal (int baud_rate)
{
custom_baud_rate = FALSE;
int result = setup_baud_rate (baud_rate, &custom_baud_rate);
return result != -1 || custom_baud_rate;
}
Looking closer at the code shows that I'm wrong. It should work now. I was using the master to test the issue. Have to recheck what goes wrong then
I'm running on Mac and see this exact problem with a baud rate of 2000000.
Found a workaround as follows:
a) Open the port using some safe, dummy baud rate (e.g. 9600)
b) Use reflection to dig out the file descriptor in the SerialPortStream (UGH!)
c) p/invoke to ioctl to set the desired baud rate:
const uint IOSSIOSPEED = 0x80000000 | ((sizeof(uint) & 0x1fff) << 16) | ((uint)'T' << 8) | 2;
[DllImport("/System/Library/Frameworks/IOKit.framework/IOKit")]
extern public static int ioctl(int fileNumber, uint request, ref int baudRate);
public static int SetBaudRate(int fileNumber, int baudRate)
{
var customBaudRate = baudRate;
var result = ioctl(fileNumber, IOSSIOSPEED, ref customBaudRate);
return result;
}
Preliminary results are encouraging.
BTW, that IOSSIOSPEED value … Dang, the version of Xcode I was using really did not want to yield that information. Ultimately had to dig it out of preprocessor output.
So then what is the sequence to use a custom baudrate, say 500000, with this fix in place?
By using direct SerialPort.BaudRate=500000; does not work, I get exception as below:
System.IO.IOException: Inappropriate ioctl for device
at System.IO.Ports.SerialPortStream.ThrowIOException () [0x00000] in <filename unknown>:0
at System.IO.Ports.SerialPortStream..ctor (System.String portName, Int32 baudRate, Int32 dataBits, Parity parity, StopBits stopBits, Boolean dtrEnable, Boolean rtsEnable, Handshake handshake, Int32 readTimeout, Int32 writeTimeout, Int32 readBufferSize, Int32 writeBufferSize) [0x00000] in <filename unknown>:0
at (wrapper remoting-invoke-with-check) System.IO.Ports.SerialPortStream:.ctor (string,int,int,System.IO.Ports.Parity,System.IO.Ports.StopBits,bool,bool,System.IO.Ports.Handshake,int,int,int,int)
at System.IO.Ports.SerialPort.Open () [0x00000] in <filename unknown>:0
at (wrapper remoting-invoke-with-check) System.IO.Ports.SerialPort:Open ()
at USBPortTest.MainFormTestUSB.ConnectPort (System.String PortName) [0x00000] in <filename unknown>:0
To add context to my previous comment:
* Using Ubuntu 14
* Using Mono JIT compiler version 4.0.1 (tarball Tue Apr 28 11:49:45 UTC 2015) official distribution
* Exception raise when doing System.IO.Ports.SerialPort.Open() call, as listed in the exception. I do not get any exception when setting a baudrate.
If the serial port is not open, the baud rate validation does not run, IIRC. The only bounds checking done when the port is not open is a check for < 0.
Check this:
To get things to work for my situation, I first did NOT set the baud rate to anything. The default baud rate is 9600. Just open the port at the default baud rate, then, prior to doing any communication, use the workaround above.
You could wrap SerialPort and provide your own Open() method that does this.
I get the same problem if I set the baudrate after opening the port. All these things fail to the same exception:
case 1)
- start with a closed port
- set BaudRate=9600 (or 38400)
- open port (works)
- define BaudRate=500000 (exception)
case 2)
- start with a closed port
- set BaudRate=500000
- open port (exception)
So in my Ubuntu testing, there is no way or workaround that seems to work at all for custom baudrate.
Just in case, I'm still using this workaround
@arocholl:
Read the workaround I posted about p/Invoking out to ioctl. That's the workaround. Until the Mono runtime is released that allows for custom baud rates, some kind of workaround will be required.
On Mac, the stty 'ForceSetBaudRate' trick didn't work for me. I can't recall the precise error.
If you're using the Starter version of Xamarin.Mac, though, you won't be able to p/Invoke. Maybe some other command line call out could work for you, but I prefer the ioctl in-process workaround instead.
@arocholl:
D'oh… you're in Linux. So have you tried the workaround romanovda found on Stack Overflow? | https://xamarin.github.io/bugzilla-archives/82/8207/bug.html | CC-MAIN-2019-35 | en | refinedweb |
Map|Home
Knox car cycle
Toyota car dealer in chilmark massachusetts
Cars cobra 289
Resetting a cars computer
Are hybrid cars more economical
Yahoo cars tokyo top concept motor
Departments in a car dealership
Buy back car process
Race car set german
Car hire malaga airpor
Auto insurance company south dakota
Monroe michigan auto auction
Hertz rent car braybrook
Car paint job ideas
auto loans one, michigan auto collision repairs,
nra auto purchase programmn auto bureau bemidji, minto alaska auto insurance, car accident san antonio january 9,fast pinewood carsmya high end car,
juiced car downloads, duncan auto roanoke, jeffrey garon car accident, sears auto colonial heights va, car chase i-20 january 5 2008,
online car tricks, leaning technology for cars
Top offers:
Friend links:
auto x plan, 2004 used auto loan rates,
used cars seattle wauniversal car radio calculator download, auto dealership pawling ny, used cars orange park fl,rent a car budapestcar stereo amplifier repair san diego,
import car parts from japan,
help selecting type of car, goodyear auto tires, jacksonville nc holiday cars inc, auto insurance coverage medical, radio control cars michigan,
bellevue auto tint, base car evenflo infant seat, car accidents becuase of marijuana, auto union type b, kansas city auto maintenance engines,
otis car piece, fresno auto moving service turn on auto play dvd, embassy auto sales ajax,
types of rims for carscar tornodo, maryland car inspections check list, auto halogen under lights,1964 gm carsquality auto audio,
mazda car dealer in escanaba michigan,
auto restoration san antonio texas, new cars under 20000 dollars, garage mayfair london 4 cars
nascar car engine audio sportsman race car for sale inside police car car hood covers low price car insurance with accidents online quote car classic at t razor car charger reeves auto athens al ford replacement auto part acura auto parts online | http://qqaaaix.fcpages.com/old-cars-and-accident.html | CC-MAIN-2019-35 | en | refinedweb |
Introduction
In today's tutorial, I'll introduce Yii's error and exception handling and guide you through some introductory scenarios.
Wondering what Yii is? Check out our Introduction to the Yii Framework and Programming With Yii2 series.
What's the Difference Between Errors and Exceptions?
Errors are unexpected defects in our code often discovered first by users. They'll typically break program execution. It's important not only to break gracefully for the user but to inform the developer of the problem so it can be fixed.
Exceptions are created by the developer when a potentially predictable error condition occurs. In code where an exception might occur, the developer can throw() an exception to a robust error handler.
How Does Yii Manage These?.
Exceptions and fatal PHP errors can be assessed only in debug mode. In these kinds of development scenarios, Yii can display detailed call stack information and segments of source code (you can see this above in the title image).
Fatal errors are the kind of events that break application execution. These include out of memory, instantiating an object of a class that doesn't exist, or calling a function that doesn't exist.
For example:
$t = new Unknownobject();
Let's get started with some examples of error and exception handling.
Configuring Error and Exception Handling
First, we configure our application in frontend/config/main.php. The errorHandler is defined as a component, as shown below. This example is from my startup series application, Meeting Planner. Notice the
errorHandler configuration in
components:
<, ], ... ], ];
In the above example,
errorAction directs the user to my SiteController's error action.
More broadly, Yii offers a variety of configuration options for
errorHandler for redirection and data gathering:
Using errorActions to Direct Execution
Generally, when a user encounters a serious error, we want to redirect them to a friendly, descriptive error page.
That's what the
errorAction in
errorHandler does. It redirects to our SiteController's actionError:
return [ 'components' => [ 'errorHandler' => [ 'errorAction' => 'site/error', ], ] ];
In our SiteController, we define an explicit
error action:
namespace app\controllers; use Yii; use yii\web\Controller; class SiteController extends Controller { public function actions() { return [ 'error' => [ 'class' => 'yii\web\ErrorAction', ], ]; } }
Here's a basic error handler (you can read more about these here):
public function actionError() { $exception = Yii::$app->errorHandler->exception; if ($exception !== null) { return $this->render('error', ['exception' => $exception]); } }
You can also respond differently whether there is an error or whether the page request does not exist in your application:
public function actionError() { $exception = Yii::$app->errorHandler->exception; if ($exception instanceof \yii\web\NotFoundHttpException) { // all non existing controllers+actions will end up here return $this->render('pnf'); // page not found } else { return $this->render('error', ['exception' => $exception]); } }
Here's my current Page Not Found 404 error handler:
You could theoretically include a site map of links, suggested pages similar to the page request, a search feature and a contact support link on your error pages. All of these can help the user recover and move on gracefully.
Here's my current general error page (obviously I have features to add):
Catching Exceptions
If we want to monitor a section of code for problems, we can use a PHP try catch block. Below, we'll experiment by triggering a fatal divide by zero error:
use Yii; use yii\base\ErrorException; ... try { 10/0; } catch (ErrorException $e) { Yii::warning("Division by zero."); } ...
The
catch response above is to generate a warning for the log. Yii has extensive logging:
- Yii::trace(): log a message to trace how a piece of code runs. Primarily for development.
- Yii::info(): log a message that conveys information about the event.
- Yii::warning(): log a warning message that an unexpected event occurred
- Yii::error(): log a fatal error for investigation
If, instead of logging an event, you wish to direct the user to the error page we configured earlier, you can throw an exception with the event:
use yii\web\NotFoundHttpException; throw new NotFoundHttpException();
Here's an example where we throw an exception with a specific HTTP status code and customized message:
try { 10/0; } catch (ErrorException $e) { throw new \yii\web\HttpException(451, 'Tom McFarlin\'s humor is often lost on me (and lots of people).'); }
Here's what that code looks like to the user:
About Yii Logging
All errors in Yii are logged depending on how you've set them up. You may also be interested in my tutorial about Sentry and Rollbar for logging in Yii:
In Closing
I hope you enjoyed our exploration of error and exception handling. Watch for upcoming tutorials in our Programming With Yii2 series as we continue diving into different aspects of the framework.
If you'd like to see a deeper dive in Yii application development, check out our Building Your Startup With PHP series which uses Yii2's advanced template. It tells the story of programming each step of Meeting Planner. It's very useful if you want to learn about building applications in Yii from the ground up.
If you'd like to know when the next Yii2 tutorial arrives, follow me @lookahead_io on Twitter or check my instructor page. | http://esolution-inc.com/blog/how-to-handle-errors-exceptions-in-the-yii-framework--cms-28531.html | CC-MAIN-2018-09 | en | refinedweb |
#include <QuadNodePolarEuclid.h>
Add a point at polar coordinates (angle, R) with content input.
May split node if capacity is full
If the query point is not within the quadnode, the distance minimum is on the border. Need to check whether extremum is between corners.
angular boundaries
radial boundaries.
Shrink all vectors in this subtree to fit the content.
Call after quadtree construction is complete, causes better memory usage and cache efficiency | https://networkit.iti.kit.edu/api/doxyhtml/class_networ_kit_1_1_quad_node_polar_euclid.html | CC-MAIN-2018-09 | en | refinedweb |
Any user can delete all issues and merge requests
Jobert from HackerOne reported this issue:
Vulnerability detailsVulnerability details
The state filter in the
IssuableFinder class has the ability to filter issues and merge requests by state. This filter is implemented by calling
public_send with unfiltered user input. This allows an attacker to call
delete_all or
destroy_all. Because the method is called before the project / group scope is applied, it deletes all issues and merge requests of the GitLab instance.
Proof of conceptProof of concept
Create two users and a new project for each of them. It doesn't matter if they're private or not. Now create an issue (or merge request) for each project. Now browse to the Issues overview. When clicking All, you'll be redirected to hxxp://gitlab-instance/root/xxxx/issues?scope=all&state=all. Simply substitude
all with
delete_all in the URL and ALL issues will be deleted: hxxp://gitlab-instance/root/xxxx/issues?scope=all&state=delete_all. To delete all merge requests, substitude
issues with
merge_requests. When requesting the
delete_all URL, a 500 internal server error will be shown. This is caused by the
delete_all method returning a boolean instead of an
ActiveRecord::Relation class.
OriginOrigin
The vulnerability comes from the fact that un-sanitized user input is passed into a
public_send call that is being called on
model.all. Here's the
execute method of the
IssuableFinder:
def execute items = init_collection items = by_scope(items) items = by_state(items) items = by_group(items) items = by_project(items) items = by_search(items) items = by_milestone(items) items = by_assignee(items) items = by_author(items) items = by_label(items) items = by_due_date(items) sort(items) end
Now take a look at the
by_state method:
def by_state(items) params[:state] ||= 'all' if items.respond_to?(params[:state]) items.public_send(params[:state]) else items end end
The controllers are passing the
state parameter without any form of sanitization or validation to the finder. Since you're passing around ActiveRecord relations,
delete_all can be called early on in the relation chain. Since the scope hasn't been applied (the
by_project is called later), this will affect all issues and merge requests.
RemediationRemediation
Never pass un-sanitized or unvalidated user input to
public_send or
send.
I've verified this is exploitable. | https://gitlab.com/gitlab-org/gitlab-ce/issues/25064 | CC-MAIN-2018-09 | en | refinedweb |
hgdistver 0.25
obsoleted by setuptools_scm
Warning
this module is superseeded by setuptools_scm
This module is a simple drop-in to support setup.py in mercurial and git based projects.
Alternatively it can be a setup time requirement.
It extracts the last Tag as well as the distance to it in commits from the scm, and uses these to calculate a version number
By default, it will increment the last component of the Version by one and append .dev{distance} in case the last component is .dev, the version will be unchanged
Tis requires always using all components in tags (i.e. 2.0.0 instead of 2.0) to avoid misstakenly releasing higher version (i.e. 2.1.devX instead of 2.0.1.devX)
It uses 4 strategies to archive its task:
- try to directly ask hg for the tag/distance
- try to infer it from the .hg_archival.txt file
- try to read the exact version the cache file if it exists
- try to read the exact version from the ‘PKG-INFO’ file as generated by setup.py sdists (this is a nasty abuse)
The most simple usage is:
from setuptools import setup from hgdistver import get_version setup( ..., version=get_version(), ..., )
get_version takes the optional argument cachefile, which causes it to store the version info in a python script instead of abusing PKG-INFO from a sdist.
The setup requirement usage is:
from setuptools import setup setup( ..., get_version_from_hg=True, setup_requires=['hgdistver'], ..., )
The requirement uses the setup argument cache_hg_version_to instead of cachefile.
- Author: Ronny Pfannschmidt
- License: MIT
- Categories
- Package Index Owner: ronny
- DOAP record: hgdistver-0.25.xml | https://pypi.python.org/pypi/hgdistver/ | CC-MAIN-2018-09 | en | refinedweb |
I’ve adored Way Yes for years now, and their new album gives me a much-needed hit of inspiration.
Four Tet’s new music has been making waves, partly because his sharing of the minimal gear that he wrote the album on.
I’ve been learning and practicing classical music on the guitar recently. I would say ‘classical guitar music’, but I’m not sure if that’s pedantically true, given that all of the music I like to learn was originally written for the piano. classtab.org is just spectacular for this purpose: it’s a well-organized, fast, to-the-point website with tabs of many many pieces. I learned Ravel’s Pavane de la Belle au Bois Dormant, (wiki link). I’m also learning the rest of Satie’s catalog - currently Ce Que Dit La Petite Princesse Des Tulipes ‘What the Little Princess of Tulips Said’.
I’m working on reducing my TV consumption and replace it with reading, running, and going to things in the area. I’m trying.
I’ve made a bit of an update to this site: when I link to books, from now on I’m going to link to WorldCat instead of Amazon. For years, I’ve defaulted links to Amazon. It’s where a majority of people consume books anyway, it’s the only place to link to for Amazon-only eBooks, and fees Amazon paid helped to cover my domain registration & gaug.es account. Not entirely, of course - there’s probably a loss of $30 or $40 yearly for running this site.
Anyway, I decided that I cared more about good, neutral, reliable links than I do about absolute convenience or some attempt at profitability. After reviewing a bunch of options, I learned quite a bit about how books can be referenced, and the many systems involved. ISBNs are far from the only identifier in town - there are also OCLC numbers, issued by WorldCat, and Open Library identifiers, and you can identify books using their EAN codes, there are Library of Congress identifiers, and so on. WorldCat won out as the link target because it has high quality data, it’s a non-profit union catalog and their technical chops seem good: pages follow schema rules, are accessible, and are simple. I really like Open Library’s design, but the data is more iffy.
For the curious, this is the script I used to mass-convert the initial batch of URLs, and then I did a separate secondary pass for harder conversions, like links to eBooks that didn’t contain an ISBN in the URL. The process is:
/isbn/path to get its redirect, which is the canonical page on WorldCat.
import re import codecs import requests import glob AMZN_RE = re.compile(u"https?://amzn.to/([0-9A-Za-z]+)") ISBN1 = re.compile(u"(?:[A-Za-z\-]+)/dp/(\d{10})/") ISBN2 = re.compile(u"(\d{10})/") def remove_amazon(filename): print("Translating %s", filename) f = codecs.open(filename, encoding='utf-8').read() for cap in re.finditer(AMZN_RE, f): url = cap.group(0) redirected_to = requests.head(url, allow_redirects=True).url capture = ISBN1.match(redirected_to) or ISBN2.match(redirected_to) if capture == None: print("Could not capture ISBN from %s", redirected_to) continue isbn = capture.group(1) worldcat_permalink = requests.head("" % isbn, allow_redirects=True).url print(url, worldcat_permalink) f = f.replace( url, worldcat_permalink ) codecs.open(filename, 'w', encoding='utf-8').write(f) for file in glob.glob('../tmcw.github.com/_posts/*.md'): remove_amazon(file)
I still have mild concerns about WorldCat. They support the OCLC Control Numbers, which, though they’re public domain, the WorldCat search API isn’t open to regular old folks like myself. So if I were to do this conversion again, I’d have to either be a full-time librarian to get access to the API, or I’ll need to scrape WorldCat’s pages to get the alternative identifiers.
That was kind of surprising - that, despite so much effort spent on great indexes for books, there wasn’t a cross-reference service that would give you alternative IDs: provide an ISBN, get a Library of Congress number, and so on. Maybe I just couldn’t find it. | https://macwright.org/2017/11/05/recently.html | CC-MAIN-2018-09 | en | refinedweb |
Realtime Android Geolocation Tracking with the Google Maps API (1/4)
🔔 Before we begin this tutorial, signup for a PubNub account to get your API keys. We’ve got a generous sandbox tier that’ll cost you nothing until it’s time to scale! 🔔
Realtime mapping and geolocation tracking is a core feature of web and mobile apps across every industry. The idea is simple – detect and stream location data to a live-updating map to smoothly watch location updates as they change in the real world. With that, Android geolocation tracking is an essential technology today.
Tutorial Overview
This is a 4-part tutorial on building realtime maps for Android using the Google Maps API and PubNub. We’ll begin with basics, to get your Android environment set up, then add realtime geolocation functionality in Parts 2-4.
- Part One: Google Maps API and Android Setup (you’re here!)
- Part Two: Live Map Markers with Google Maps API
- Part Three: Live Location Tracking with Google Maps API
- Part Four: Flight Paths with Google Maps API
In the end, you’ll have a basic Android location tracking app with map markers, device location-tracking, and flight paths, powered by the Google Maps API and PubNub.
This is what our app will look like:
Setup
Tutorial Assets
For this tutorial, we presume you are working with an Android application running Version 23 (Marshmallow) or higher, as this tutorial is intended primarily for mobile devices.
The code for the example used in this tutorial series is available in our GitHub repository. It’s visible via the web UI as well as by using Git to clone the folder to your local machine.
PubNub Developer Keys.
Creating your Google Maps API Project and Credentials
To integrate with Google Maps, you’ll need to create a Google Maps API key. To do so, check out this getting started guide.
This creates a unique API key that can be locked down to whatever Android apps you like. You can manage your credentials in the Google API console here.
PubNub Overview
In our Android application, PubNub provides the realtime communication capability that lets our Android application receive incoming events as the position of a remote object (latitude and longitude) changes.
The core PubNub feature we’ll use is Realtime Messaging. Our application uses a PubNub channel to send and receive position changed events.
In this sample example, the same application both sends and receives the position events, so consider it is both the publisher and subscriber. But in your application, the publisher (sender) system(s) may be different from the subscriber (receiving) system(s).
Google Maps API Overview
The Google Maps API provides an Android Map widget that displays a map based on your configuration. The map size, map center latitude and longitude, zoom level, map style and other options may be configured to your initial preferences and updated on the fly.
It also provides an initialization callback option that calls a function you specify when the map is loading. You can use features like Map Markers and Polylines which allow you to place friendly markers and flight paths on the map at the location(s) you specify. All the points are user-specified and may be updated in real time as updates arrive.
Google Maps provides great mapping functionality, and combined with PubNub for the realtime location coordinates updates, the two work incredibly well together.
Working with the Code
First, clone the code onto your local machine using Git or the import project feature of Android Studio. Once the project is imported, you’ll be able to modify the files using the IDE like this:
Once you’re in the IDE, you’ll be able to perform the minor code changes to get the app running quickly.
Application Setup and Configuration
This code structure should be familiar if you’ve worked with Android applications in the past. We start with a plain Android application that has two activities:
- LoginActivity, which simply collects a username
- MainActivity, that contains a tabbed view for all of our realtime location features.
We include the PubNub library for realtime communications in the build.gradle file.
compile group: 'com.pubnub', name: 'pubnub-gson', version: '4.12.0'
You’ll also want to include the Google Maps API via Play Services. At the time of writing, the relevant version is 11.0.4.
compile 'com.google.android.gms:play-services:11.0.4'
There are a few key permission settings you’ll need to set up in your AndroidManifest.xml.
<uses-permission android: <uses-permission android: <uses-permission android:
While you’re in there, you’ll also need to update the Google Maps API key you set up earlier:
<meta-data android:
We’ve taken the approach of encapsulating common settings into a Constants.java file – for example, the PubNub publish and subscribe keys mentioned earlier in this tutorial.
public class Constants { public static final String PUBNUB_PUBLISH_KEY = "YOUR_PUB_KEY"; public static final String PUBNUB_SUBSCRIBE_KEY = "YOUR_SUB_KEY"; ... }
Running the Code
To run the code, you just need to push the green “play” button in the IDE. That will bring up the Android device chooser so you can choose where to run the app. We typically run our apps using emulation first since it’s quicker.
Once you get the hang of that, it’s easy to connect an Android device via USB for live debugging. This is especially handy for testing live location tracking, which isn’t as practical in the emulator.
Next Steps
With that, we now have our app set up. In Part Two, we’ll implement live map markers, which identify where a device is located on a map. | https://www.pubnub.com/tutorials/android/mapping-geolocation-tracking/ | CC-MAIN-2018-09 | en | refinedweb |
To maintain high frequency stability, RF oscillator circuits are sometimes “ovenized” where their temperature is raised slightly above ambient room temperature and held precisely at one temperature. Sometimes just the crystal is heated (with a “crystal oven”), and other times the entire oscillator circuit is heated. The advantage of heating the circuit is that other components (especially metal core instructors) are temperature sensitive. Googling for the phrase “crystal oven”, you’ll find no shortage of recommended circuits. Although a more complicated PID (proportional-integral-derivative) controller may seem enticing for these situations, the fact that the enclosure is so well insulated and drifts so little over vast periods of time suggests that it might not be the best application of a PID controller. One of my favorite write-ups is from M0AYF’s site which describes how to build a crystal oven for QRSS purposes. He demonstrates the MK1 and then the next design the MK2 crystal oven controller. Here are his circuits:
Briefly, desired temperature is set with a potentiometer. An operational amplifier (op-amp) compares the target temperature with measured temperature (using a thermistor – a resistor which varies resistance by tempearture). If the measured temperature is below the target, the op-amp output goes high, and current flows through heating resistors. There are a few differences between the two circuits, but one of the things that struck me as different was the use of negative feedback with the operational amplifier. This means that rather than being on or off (like the air conditioning in your house), it can be on a little bit. I wondered if this would greatly affect frequency stability. In the original circuit, he mentions
The oven then cycles on and off roughly every thirty or forty seconds and hovers around 40 degrees-C thereafter to within better than one degree-C.
I wondered how much this on/off heater cycle affected temperature. Is it negligible, or could it affect frequency of an oscillator circuit? Indeed his application heats an entire enclosure so small variations get averaged-out by the large thermal mass. However in crystal oven designs where only the crystal is heated, such as described by Bill (W4HBK), I’ll bet the effect is much greater. Compare the thermal mass of these two concepts.
How does the amount of thermal mass relate to how well it can be controlled? How important is negative feedback for partial-on heater operation? Can simple ON/OFF heater regulation adequately stabalize a crystal or enclosure? I’d like to design my own heater, pulling the best elements from the rest I see on the internet. My goals are:
- use inexpensive thermistors instead of linear temperature sensors (like LM335)
- use inexpensive quarter-watt resistors as heaters instead of power resistors
- be able to set temperature with a knob
- be able to monitor temperature of the heater
- be able to monitor power delivered to the heater
- maximum long-term temperature stability
Right off the bat, I realized that this requires a PC interface. Even if it’s not used to adjust temperature (an ultimate goal), it will be used to log temperature and power for analysis. I won’t go into the details about how I did it, other than to say that I’m using an ATMEL ATMega8 AVR microcontroller and ten times I second I sample voltage on each of it’s six 10-bit ADC pins (PC0-PC5), and send that data to the computer with USART using an eBay special serial/USB adapter based on FTDI. They’re <$7 (shipped) and come with the USB cable. Obviously in a consumer application I’d etch boards and use the SMT-only FTDI chips, but for messing around at home I a few a few of these little adapters. They’re convenient as heck because I can just add a heater to my prototype boards and it even supplies power and ground. Convenient, right? Power is messier than it could be because it’s being supplied by the PC, but for now it gets the job done. On the software side, Python with PySerial listens to the serial port and copies data to a large numpy array, saving it every once and a while. Occasionally a bit is sent wrong and a number is received incorrectly (maybe one an hour), but the error is recognized and eliminated by the checksum (just the sum of all transmitted numbers). Plotting is done with numpy and matpltolib. Code for all of that is at the bottom of this post.
That’s the data logger circuit I came up with. Reading six channels ten times a second, it’s more than sufficient for voltage measurement. I went ahead and added an op-amp to the board too, since I knew I’d be using one. I dedicated one of the channels to serve as ambient temperature measurement. See the little red thermistor by the blue resistor? I also dedicated another channel to the output of the op-amp. This way I can measure drive to whatever temperature controller circuity I choose to use down the road. For my first test, I’m using a small thermal mass like one would in a crystal oven. Here’s how I made that:
I then build the temperature controller part of the circuit. It’s pretty similar to that previously published. it uses a thermistor in a voltage divider configuration to sense temperature. It uses a trimmer potentiometer to set temperature. An LED indicator light gives some indication of on/off, but keep in mind that a fraction of a volt will turn the Darlington transistor (TIP122) on slightly although it doesn’t reach a level high enough to drive the LED. The amplifier by default is set to high gain (55x), but can be greatly lowered (negative gain actually) with a jumper. This lets me test how important gain is for the circuitry.
When using a crystal oven configuration, I concluded high high gain (cycling the heater on/off) is a BAD idea. While average temperature is held around the same, the crystal oscillates. This is what is occurring above when M0AYF indicates his MK1 heater turns on and off every 40 seconds. While you might be able to get away with it while heating a chassis or something, I think it’s easy to see it’s not a good option for crystal heaters. Instead, look at the low gain (negative gain) configuration. It reaches temperature surprisingly quickly and locks to it steadily. Excellent.
Clearly low (or negative) gain is best for crystal heaters. What about chassis / enclosure heaters? Let’s give that a shot. I made an enclosure heater with the same 2 resistors. Again, I’m staying away from expensive components, and that includes power resistors. I used epoxy (gorilla glue) to cement them to the wall of one side of the enclosure.
I put a “heater sensor” thermistor near the resistors on the case so I could get an idea of the heat of the resistors, and a “case sensor” on the opposite side of the case. This will let me know how long it takes the case to reach temperature, and let me compare differences between using near vs. far sensors (with respect to the heating element) to control temperature. I ran the same experiments and this is what I came up with!
Right off the bat, we observe that even with the increased thermal mass of the entire enclosure (being heated with two dinky 100 ohm 1/4 watt resistors) the system is prone to temperature oscillation if gain is set too high. For me, this is the final nail in the coffin – I will never use a comparator-type high gain sensor/regulation loop to control heater current. With that out, the only thing to compare is which is better: placing the sensor near the heating element, or far from it. In reality, with a well-insulated device like I seem to have, it seems like it doesn’t make much of a difference! The idea is that by placing it near the heater, it can stabilize quickly. However, placing it far from the heater will give it maximum sensation of “load” temperature. Anywhere in-between should be fine. As long as it’s somewhat thermally coupled to the enclosure, enclosure temperature will pull it slightly away from heater temperature regardless of location. Therefore, I conclude it’s not that critical where the sensor is placed, as long as it has good contact with the enclosure. Perhaps with long-term study (on the order of hours to days) slow oscillations may emerge, but I’ll have to build it in a more permanent configuration to test it out. Lucky, that’s exactly what I plan to do, so check back a few days from now!
Since the data speaks for itself, I’ll be concise with my conclusions:
- two 1/4 watt 100 Ohm resistors in parallel (50 ohms) are suitable to heat an insulated enclosure with 12V
- two 1/4 watt 100 Ohm resistors in parallel (50 ohms) are suitable to heat a crystal with 5V
- low gain or negative gain is preferred to prevent oscillating tempeartures
- Sensor location on an enclosure is not critical as long as it’s well-coupled to the enclosure and the entire enclosure is well-insulated.
I feel satisfied with today’s work. Next step is to build this device on a larger scale and fix it in a more permanent configuration, then leave it to run for a few weeks and see how it does. On to making the oscillator! If you have any questions or comments, feel free to email me. If you recreate this project, email me! I’d love to hear about it.
Here’s the code that went on the ATMega8 AVR (it continuously transmits voltage measurements on 6 channels).
#define F_CPU 8000000UL #include <avr/io.h> #include <util/delay.h> #include <avr/interrupt.h> /* 8MHZ: 300,600,1200,2400,4800,9600,14400,19200,38400 1MHZ: 300,600,1200,2400,4800 */ #define USART_BAUDRATE 38400 #define BAUD_PRESCALE (((F_CPU / (USART_BAUDRATE * 16UL))) - 1) /* ISR(ADC_vect) { PORTD^=255; } */ void USART_Init(void){ UBRRL = BAUD_PRESCALE; UBRRH = (BAUD_PRESCALE >> 8); UCSRB = (1<<TXEN); UCSRC = (1<<URSEL)|(1<<UCSZ1)|(1<<UCSZ0); // 9N1 } void USART_Transmit( unsigned char data ){ while ( !( UCSRA & (1<<UDRE)) ); UDR = data; } void sendNum(long unsigned int byte){ if (byte==0){ USART_Transmit(48); } while (byte){ USART_Transmit(byte%10+48); byte-=byte%10; byte/=10; } } int readADC(char adcn){ ADMUX = 0b0100000+adcn; ADCSRA |= (1<<ADSC); // reset value while (ADCSRA & (1<<ADSC)) {}; // wait for measurement return ADC>>6; } int sendADC(char adcn){ int val; val=readADC(adcn); sendNum(val); USART_Transmit(','); return val; } int main(void){ ADCSRA = (1<<ADEN) | 0b111; DDRB=255; USART_Init(); int checksum; for(;;){ PORTB=255; checksum=0; checksum+=sendADC(0); checksum+=sendADC(1); checksum+=sendADC(2); checksum+=sendADC(3); checksum+=sendADC(4); checksum+=sendADC(5); sendNum(checksum); USART_Transmit('n'); PORTB=0; _delay_ms(200); } }
Here’s the command I used to compile the code, set the AVR fuse bits, and load it to the AVR.
del *.elf del *.hex avr-gcc -mmcu=atmega8 -Wall -Os -o main.elf main.c -w pause cls avr-objcopy -j .text -j .data -O ihex main.elf main.hex avrdude -c usbtiny -p m8 -F -U flash:w:"main.hex":a -U lfuse:w:0xe4:m -U hfuse:w:0xd9:m
Here’s the code that runs on the PC to listen to the microchip, match the data to the checksum, and log it occasionally.
import serial, time import numpy ser = serial.Serial("COM16", 38400, timeout=100) line=ser.readline()[:-1] t1=time.time() lines=0 data=[] def adc2R(adc): Vo=adc*5.0/1024.0 Vi=5.0 R2=10000.0 R1=R2*(Vi-Vo)/Vo return R1 while True: line=ser.readline()[:-1] lines+=1 if "," in line: line=line.split(",") for i in range(len(line)): line[i]=int(line[i][::-1]) if line[-1]==sum(line[:-1]): line=[time.time()]+line[:-1] print lines, line data.append(line) else: print lines, line, "<-- FAIL" if lines%50==49: numpy.save("data.npy",data) print "nSAVINGn%d lines in %.02f sec (%.02f vals/sec)n"%(lines, time.time()-t1,lines/(time.time()-t1))
Here’s the code that runs on the PC to graph data.
import matplotlib matplotlib.use('TkAgg') # <-- THIS MAKES IT FAST! import numpy import pylab import datetime import time def adc2F(adc): Vo=adc*5.0/1024.0 K=Vo*100 C=K-273 F=C*(9.0/5)+32 return F def adc2R(adc): Vo=adc*5.0/1024.0 Vi=5.0 R2=10000.0 R1=R2*(Vi-Vo)/Vo return R1 def adc2V(adc): Vo=adc*5.0/1024.0 return Vo if True: print "LOADING DATA" data=numpy.load("data.npy") data=data print "LOADED" fig=pylab.figure() xs=data[:,0] tempAmbient=data[:,1] tempPower=data[:,2] tempHeater=data[:,3] tempCase=data[:,4] dates=(xs-xs[0])/60.0 #dates=[] #for dt in xs: dates.append(datetime.datetime.fromtimestamp(dt)) ax1=pylab.subplot(211) pylab.title("Temperature Controller - Low Gain") pylab.ylabel('Heater (ADC)') pylab.plot(dates,tempHeater,'b-') pylab.plot(dates,tempCase,'g-') #pylab.axhline(115.5,color="k",ls=":") #ax2=pylab.subplot(312,sharex=ax1) #pylab.ylabel('Case (ADC)') #pylab.plot(dates,tempCase,'r-') #pylab.plot(dates,tempAmbient,'g-') #pylab.axhline(0,color="k",ls=":") ax2=pylab.subplot(212,sharex=ax1) pylab.ylabel('Heater Power') pylab.plot(dates,tempPower) #fig.autofmt_xdate() pylab.xlabel('Elapsed Time (min)') pylab.show() print "DONE"
2 thoughts on “Crystal Oven Testing”
Congratulation for your work! It is the best article on this subject I have ever seen.
I suggest you to use only the Darlington transistor as heating element and eliminate the resistors.
It is much simpler and more compact. Also the heat transfer would be improved.
I would be interested to see the results of an double oven on the same principle.
73! Corneliu
Is it possible to use a PNP Darlington transistor such as BD678, BD682 with the collector at ground level?
What changes have to be done?
Corneliu
You must log in to post a comment. | http://www.swharden.com/wp/2013-06-22-crystal-oven-testing/ | CC-MAIN-2017-22 | en | refinedweb |
view raw
I want to execute some code at pre install time when installing a gem from rubygems.org with a command like
gem install some-gem
# File lib/rubygems.rb, line 724
def self.pre_install(&hook)
@pre_install_hooks << hook
end
RubyGems defaults are stored in rubygems/defaults.rb. If you're packaging RubyGems or implementing Ruby you can change RubyGems' defaults.
For RubyGems packagers, provide lib/rubygems/defaults/operating_system.rb and override any defaults from lib/rubygems/defaults.rb.
For Ruby implementers, provide lib/rubygems/defaults/#{RUBY_ENGINE}.rb and override any defaults from lib/rubygems/defaults.rb.
If you need RubyGems to perform extra work on install or uninstall, your defaults override file can set pre and post install and uninstall hooks. See ::pre_install, ::pre_uninstall, ::post_install, ::post_uninstall.
Gem.pre_install { puts 'pre install hook called!' }
s.require_paths = ["lib", "test", "rubygems"]
The answer is presently (2015-11-11) NO you cannot execute arbitrary code at install time for a specific gem. The hooks mentioned in your question are for use by the RubyGem installer itself and are not gem specific. See: How can I make a Ruby gem package copy files to arbitrary locations? for additional details.
These files:
lib/rubygems/defaults/defaults.rb lib/rubygems/defaults/operating_system.rb rubygems/defaults.rb
Are not called from your gem directory. They are found in the RubyGems system location.
If you wish to execute the same code for every gem before any are installed then you can use the pre_install hooks by placing the code in
/usr/lib64/ruby/2.2.0/rubygems/defaults.rb or wherever your version of Ruby is installed on your system. The
operating_system.rb file will get loaded from the same location as well. | https://codedump.io/share/hUraLbzlx4RA/1/how-to-add-a-prepostinstallhook-to-ruby-gems | CC-MAIN-2017-22 | en | refinedweb |
ServerTraceEventSet.TraceDeprecation Property
Namespace: Microsoft.SqlServer.Management.Smo
Gets or sets a Boolean property value that specifies whether Deprecation category events are recorded in the trace.
Assembly: Microsoft.SqlServer.Smo (in Microsoft.SqlServer.Smo.dll)
Property ValueType: System.Boolean
A Boolean value that specifies whether Deprecation events are included in the trace.
If True, Deprecation events are included in the trace.
If False (default), Deprecation events are not included in the trace.
This namespace, class, or member is supported only in version 2.0 of the Microsoft .NET Framework.
Return to top
Show: | https://msdn.microsoft.com/en-us/library/microsoft.sqlserver.management.smo.servertraceeventset.tracedeprecation.aspx | CC-MAIN-2017-22 | en | refinedweb |
view raw
i want to try my js.erb if it is working?
here my view posts/index to trigger
<%= link_to "Like", {:controller => 'post', :id => post , :action =>'like'}, class: "btn btn-danger btn-xs", remote: true %>
def like
@user = current_user
@post.liked_by(@user)
redirect_to posts_path
respond_to do |format|
format.html {redirect_to posts_path}
format.js { render :action => 'stablelike' }
end
end
alert("working");".
``` ruby
78 @user = current_user
79 @post.liked_by(@user)
80 redirect_to posts_path
81 respond_to do |format|
82 format.html {redirect_to posts_path}
83 format.js { render :action => 'stablelike' }
84
85 end
86 end
87
In a controller action, neither
render nor
redirect_to stops the execution of the action. That is why you're getting a
DoubleRenderError. Remove the first
redirect_to and everything should be fine.
def like @user = current_user @post.liked_by(@user) # I am unsure what you're trying to accomplish here respond_to do |format| format.html { redirect_to posts_path } format.js { render :stablelike } end end
If this doesn't work, probably you have not setup
stablelike action to respond to
xhr requests. You can do that by
def stablelike respond_to do |format| # other formats format.js # by default looks for `stablelike.js.erb` end end | https://codedump.io/share/zPXFwLTHolnL/1/rendering-jserb-in-rails | CC-MAIN-2017-22 | en | refinedweb |
DNS Devolution
Published: October 21, 2009
Updated: July 7, 2010
Applies To: Windows 7, Windows Server 2008 R2
Devolution is a behavior in Active Directory environments that allows client computers that are members of a child namespace to access resources in the parent namespace without the need to explicitly provide the fully qualified domain name (FQDN) of the resource.
With devolution, the DNS resolver creates new FQDNs by appending the single-label, unqualified domain name with the parent suffix of the primary DNS suffix name, and the parent of that suffix, and so on, stopping if the name is successfully resolved or at a level determined by devolution settings. Devolution works by removing the left-most label and continuing to get to the parent suffix.
For example, if the primary DNS suffix is central.contoso.com and devolution is enabled with a devolution level of two, an application attempting to query the host name emailsrv7 will attempt to resolve emailsrv7.central.contoso.com and emailsrv7.contoso.com. If the devolution level is three, an attempt will be made to resolve emailsrv7.central.contoso.com, but not emailsrv7.contoso.com.
Devolution is not enabled in Active Directory domains when the following conditions are true:
- A global suffix search list is configured using Group Policy.
-.
This topic describes update to the behavior of DNS devolution in Windows Server® 2008 R2 and Windows® 7. For more information about DNS devolution, see Chapter 9 – Windows Support for DNS () in TCP/IP Fundamentals for Windows.
The DNS client in Windows Server 2008 R2 and Windows 7 introduces the concept of a devolution level, which provides control of the label where devolution will terminate. Previously, the effective devolution level was two. An administrator can now specify the devolution level, allowing for precise control of the organizational boundary in an Active Directory domain when clients attempt to resolve resources within the domain. This update to DNS devolution is also available for previous versions of Microsoft Windows. For more information, see Post-installation behavior on client computers after you install the DNS update ().
Changes to the devolution level can affect the ability of client computers to resolve the names of resources in a domain. The following is the new default behavior for DNS devolution:
First, the Forest Root Domain (FRD) and primary DNS suffix of the local computer are determined. Based on this information:
- If the number of labels in the forest root domain is 1 (single labeled), devolution is not performed.
Example: The FRD is contoso and the primary DNS suffix is contoso.com. Devolution is not performed in this case because contoso is single-labeled. Previously, the devolution level was two.
- If the primary DNS suffix is a trailing subset of (ends with) the forest root domain, the devolution level is set to the number of labels in the FRD.
Example: The FRD is corp.contoso.com and the primary DNS suffix is east.corp.contoso.com. Devolution level in this case is three because east.corp.contoso.com ends with corp.contoso.com and the FRD has three labels. Previously, the devolution level was two.
- If the primary DNS suffix is not a trailing subset of the FRD, devolution is not performed.
Example: The FRD is corp.contoso.com and the primary DNS suffix is east.contoso.com. Devolution is not performed in this case because east.contoso.com does not end with corp.contoso.com. Previously, the devolution level was two.
The following table summarizes the default behavior for devolution after applying the update.
Previously, devolution was done until only two labels in the suffix were left. Now, assuming a contiguous namespace, devolution proceeds down to the FRD name and no further. If DNS resolution is required past the level of the FRD, the following options are available:
- Configure a global suffix search list. When you configure a suffix search list, devolution is disabled and the suffix search list is used instead.
- Specify the devolution level. You can configure the devolution level using Group Policy or by configuring the DomainNameDevolutionLevel registry key.
This feature will be of interest to IT professionals who manage Active Directory® Domain Services (AD DS) and DNS.
This update to DNS devolution is also available for computers running earlier versions of the Microsoft Windows operating system. For information about this update, see the Overview section of Microsoft Security Advisory 971888 ().
Devolution can be configured using Group Policy or using the Windows Registry. The following tables provide values that are used to configure DNS devolution.
Registry settings
Group Policy settings
This feature is available in all editions. | https://technet.microsoft.com/en-us/library/ee683928(v=ws.10).aspx | CC-MAIN-2017-22 | en | refinedweb |
Structure Initialization 50
Posted by Loren Shure,
This post continues in the theme of other recent ones, questions frequently posed to me (and probably others). It has to do with initializing structures. There is a healthy set of posts on the MATLAB newsgroup devoted to this topic. So let's peel things apart today.
Contents
Structures - Mental Model
It first helps to understand how MATLAB treats structures and their fields. First clear the workspace.
clear variables close all
Let's just start with a scalar structure.
mystruct.FirstName = 'Loren'; mystruct.Height = 150
mystruct = FirstName: 'Loren' Height: 150
Each field in the structure mystruct appears to be a separate MATLAB array. The first one, FirstName, is a string of length 5, and the second, height, is a scalar double (in cm, for those who are paying attention to units).
I can add another field and its contents can be the contents of any valid MATLAB variable. Each field is independent in size and datatype.
Array of structs
Suppose I want to extend this array to include other people and measurements. I can grow this array an element at a time.
mystruct(2).FirstName = 'Fred'; mystruct(2)
ans = FirstName: 'Fred' Height: []
You can see here that since the field Height does not yet have a value, its value is set to empty ([]).
Don't Grow Arrays
Over the years, we have learned that growing arrays is a poor use of resources in MATLAB and that preallocation is helpful in terms of both not fragmenting memory and not spending time looking for a large enough memory slot. So, if I know I want to have 100 names in my struct, I can initialize the struct to be the right size. I may or may not feel the need to initialize the contents of the struct array however, since each field element is essentially its own MATLAB array.
How to Initialize a struct Array
Here are 2 ways to initialize the struct.
mystruct(100).FirstName = 'George';
With this method, we can see that elements are filled in with empty arrays.
mystruct(17)
ans = FirstName: [] Height: []
There's another way to initialize the struct and that is fill it with initial values.
If we were building our struct with the 5 sons of George Forman, we might create it like this.
georgeStruct = struct('FirstName','George','Height', ... {195 189 190 194 193})
georgeStruct = 1x5 struct array with fields: FirstName Height
Looking at the contents of georgeStruct we see that his sons are all named George
{georgeStruct.FirstName}
ans = 'George' 'George' 'George' 'George' 'George'
and I made up their heights
[georgeStruct.Height]
ans = 195 189 190 194 193
To see when and how to use cell arrays in the initialization, read the struct reference page carefully. If you want a field to contain a cell array, you must embed that cell inside another cell array.
Initializing the Contents
How important is it to initialize the contents of the struct. Of course it depends on your specifics, but since each field is its own MATLAB array, there is not necessarily a need to initialize them all up front. The key however is to try to not grow either the struct itself or any of its contents incrementally.
Your Use of Structures
What do you use structures for? Are you able to populate the contents of your struct up front? Or at least pin down the sizes early in your application? To tell me about your usage, please post details here.
Get the MATLAB code
Published with MATLAB® 7.5
Note
50 CommentsOldest to Newest
Some C++ types (e.g., std::vector) deal with allocation as follows: 1. The space allocated grows exponentially, so that the amortized cost of adding a single element is constant. 2. A reserve() function is provided, letting you preallocate space without adding elements.
Any thoughts about adding this to Matlab?
Tom-
Can you explain what the specific benefit in MATLAB would be. Each structure member is its own entity. And the overall structure can be allocated all at once.
–Loren
Loren,
Would I be correct in thinking that a struct is really an array of pointers? By comparison that would make an array of struct an array of arrays of pointers? What implications does this have in terms of the relative efficiency of nesting structs (for example: mystruct.Physiology.height) as opposed to setting up a struct that is entirely flat at the top level? The principal reason I’m asking is that I have an application which has modularly segmented data. As a simplified example, suppose I have a pipe with material characteristics that I need in one subroutine, temperature characteristics I need in another, and stress characteristics I need in a third. Is there a penalty to creating a struct with three sub-structs under it and just passing those sub-structs into the subroutines? Assume for argument’s sake the two cases where each substruct is either large or small, in terms of its contents. Does that change the answer? I hope that I’ve asked the questions clearly…
Thanks,
Dan
I’ve grown to really appreciate the power of structures recently. Your article about vectorizing access to an array of structures was incredibly helpful, though sometimes I don’t know whether or not I am using them most efficiently. It’s an ongoing problem in my code. I generally preallocated what I call the namespace of structures whose values and lengths will be unknown. For instance, I might say:
data = repmat(struct(‘field1′,[],’field2′,[],…,’fieldN’,[]),1,M);
for M instances of N fields. This way, at least I know what the data structure is supposed to look like and how big it is supposed to be. Among other things, this helps me troubleshoot various aspects of the code, when fields are unpopulated when they should be, etc.
Other preallocation, such as x = zeros(1,L) for some loop that writes to x(1,a) for a = 1:L is more useful in terms of memory preallocation, and I find that structure name preallocation doesn’t seem to have a huge performance difference. Additionally, I have found that, in the first example, if ‘field1’ happened to be a scalar double, there doesn’t seem to be an advantage to preallocating the memory space (as opposed to the so-called name space) by saying:
data = repmat(struct(‘field1′,zeros(1,1),’field2′,[],…,’fieldN’,[]),1,M);
I think my benchmarks on my code showed no performance increase (and possibly a performance decrement), and presumably the reason was because when it writes a single scalar value to that space, it has to write it twice when preallocated instead of once. I think in this case that operation actually ended up being more tedious for some reason.
It’s worth playing around with. Those were some of my findings on that particular issue, of course in some arbitrary case that I happened to be looking at. Perhaps it’s not generalizable. For instance, if one preallocates a vector as in the second example, such that x = zeros(1,L) for length L, if you write a vector ‘x’ later on in your code that assigns x all at once, then it seems to slow the code down once again because now MATLAB is writing a vector twice instead of just writing it once. This showed me that preallocation must be carefully considered in instances when one’s code is well-vectorized.
Shane-
Really nice point about initialization. I totally agree that it can go overboard. If you are not replacing just a few values, but replacing the whole array, then preallocating is potentially costly.
–Loren
Dan-
A struct is essentially an array of pointers to other MATLAB arrays. In nested structs, the nested levels might not all be the same, and they themselves are also arrays of pointers (all under the hood of course) to other MATLAB arrays. When you pass a struct into a function AND return it changed from that function, only the fields that got changed will have copies made. MATLAB treats each field separately and smartly does a lazy copy of the data, or copy on write. The only penalty in passing the sub-structs is the creation of an intermediate struct, but you are not copying the data at all (except for the pieces that change).
Not sure if that really answers you question. Please feel free to nudge me in a different direction if you need more information.
–Loren
Loren,
You came close to covering all the pieces of the question. To give a concrete example of the last piece of it:
Suppose I have:
a.b.c = ones(1000,1000);
a.b.d = a.b.c;
a.b.e = a.b.c;
Then I pass a.b into my function doStuff
a.b = doStuff(a.b,’c’);
function out = dostuff(in,fieldname)
out = in; % Yes I know I’m not inplacing here
out.(fieldname) = sqrt(in.(fieldname);
In a case like this, what happens to a.b.d, and a.b.e? Do they get re-copied, or is Matlab smart enough to recognize that it’s the same thing as if I were at the top level of a structure? The other piece is wondering whether I’m paying a penalty for accessing a pointer to a pointer to a variable. Does this double (or worse) the memory access time in the look up process? Finally if an allocation changes, or i create a new field, in a low (deep?) level of the structure, do I pay an additional overhead as all of the layers of pointers above it need to be reallocated? I think I may not have been clear on that last question. Let me know if I should try to clarify further.
Thanks,
Dan
I’ve found that my code is easier to maintain if I pass all variables to & from functions. This makes it much easier to follow the data flow. But, this can lead to long variable lists in the function calls.
MatLab Data Structures are a handy way to collect associated groups of variables together. This can make the function calls easier to read, while still clearly showing the source & destination of the data.
Unfortunately, many novice MatLab users aren’t aware of MatLab Data Structures so they can be confused when I use them. So, I tend to restrict my use of them to places where the name of the structure can give the reader another cue to help them understand.
For example, maybe a function needs a series of defined constants, for example, length & weight. These could be packaged into a structure called const such that when they encounter:
const.length
or
const.weight
Novice users are likely to follow the syntax when presented with the additional cue in the name of the structure.
This appears to be similar to the design of the various dynamic system objects in the Control System Toolbox. For example, the SS objects are state space systems and appear to have the fields of A, B, C, D & E for the respectively named matrices that define such state space systems. Although I haven’t found these fields defined in the MatLab documentation, many a hacker like me has stumbled upon these fields and used it when, for example, access to the system matrix is required.
I often use structures to pass parameters from one function to another. This help keeping argument lists short. I only rarely use structures of another dimension than 1×1.
Further, when programming large projects, I use Matlab classes and objects. The way I access my objects is just like I access structures. One advantage of using classes and objects is that you can never accidently create a new field instead of replacing the value of an existing one. Also, you can organize functions into class directories. The only drawback is that accessing objects is far slower than accessing structures, even if they seems to be very similar.
Loren, maybe classes and objects could be worth another blog entry.
Markus
A particular problem I had was adding a field to an empty array of structs.
This is sometimes needed if you want to concatenate arrays of structs – it would be helpful if MATLAB could be a little more forgiving in type checking empty arrays in such circumstances!
Loren,
I was responding to the ‘don’t grow arrays in a loop’ comment. The C++ STL approach partially separates allocating memory from adding a value. A benefit of this is to allow growing an array within a loop to be more efficient. This is useful in case one does not know how big the array will be, the user is less knowledgeable about the impact of growing an array on efficiency, or maybe even sometimes a knowledgeable user would rather not preallocate.
The ‘reserve’ facility would address Shane’s comment, in that it just preallocates space, and there would be no inefficiency from a possibly unnecessary initialization. Semantically, if all that has been done to array is reserve space, it is still empty.
What Tom is referring to is the dynamic-array method. Suppose you don’t know how big an array should be. The rule is, if you add a new element and run out of space, double its size. Then the amortized time of each insertion is O(1). Consider this poor code, which takes O(n^2) time:
x = 0
for i = 1:n
x (i) = i ;
end
Now consider a variant that takes O(n) total time, or O(1) *amortized* per iteration:
x = 0 ;
len = 1 ;
for i = 1:n
if (len > size(x,1))
% double the size of x
len = 2*len ;
x (len) = nan ;
end
x (i) = i ;
end
% trim x to size
x = x (1:len) ;
This works fine in MATLAB, and it’s a replacement for code that truly can’t tell how big x should be at the beginning.
The problem with trying to do this inside MATLAB itself is that it would be a huge change to the internal data structure (not having *seen* it, of course). Each MATLAB array would have to have some kind of notion of “capacity” (len in the example above) which is >= the size of the array. That would not be easy to change, I would guess, since the changes would percolate wildly.
An array of structures can be a very neat way to organize data; however, we should be aware of the price we pay in performance (pointers storage) when working with such a data structure.
For example:
a=repmat(struct(‘f1′,{{}},’f2’,[1 2; 3 4]),100000,1);
whos a
Name Size Bytes Class Attributes
a 100000×1 15200128 struct
takes a little more than four times memory than
a=struct(‘f1′,{cell(100000,1)},’f2’,repmat([1 2; 3 4],[1,1,100000]));
whos a
Name Size Bytes Class Attributes
a 1×1 3600248 struct
This ratio increases significantly as the structure becomes more complicated (data types)
Tom-
Thanks for the clarification. As of now, there are no empty arrays in MATLAB that don’t have at least one dimension of size 0. So to reserve space currently you have to fill the array with something, be it zeros, blanks, nans, etc. I am unaware of plans for changes to this.
–Loren
Dan,
I am going to first reproduce your code so I can discuss it:
When you call doStuff with a.b, you are passing in a new temporary variable and will not be affecting the a struct in your workspace at all. In any case, the fields a.b.d and a.d.e are unaffected because they have not be changed at all, even wrt your temporary a.b. So there is very low impact to those fields being there, even via a function call, when they are being ignored. No realloc’ing of existing fields happens when a new field gets added. The struct itself my need to realloc space because of one new array header, but each of pre-existing arrays will stay put.
Does that help?
–Loren
Very much. Thank you.
Dan
How can i create a pointer to struct?
I want to creat linked list using structures.
Priyanka-
MATLAB does not have pointers. You can create a linked list either using nested functions or, in R2008a, using the newer object oriented class system and derive your class from the handle class.
–Loren
Loren,
I have a question that is somewhat related to initializing structs. I have an object class that has a setup function and a number of structs that are global to the that class, like this:
function aObject()
anObjectIOwn = []; % this will be set to a bunch of Fhs
function setupObject(parameters)
aStruct = anotherObject();
end
end
—
The ‘anotherObject’ object passes back a struct of function handles.
I get an error “Conversion to double from struct is not possible.”
If I initialize anObjectIOwn with:
anObjectIOwn = struct();
Then I get an error ‘??? Subscripted assignment between dissimilar structures.’
If I do a clear in my setup function of the ‘anObjectIOwn’ before I try to set it, it works.
I know I could just remove the setup function and do all that work in the body of the main function and it would work (I did this before). Is there a good way of doing this short of clearing the variable? Am I missing some way of initializing a variable to be a structure which is not defined yet?
Thanks – great blog :)
Oops, I made a mistake above: the code should read:
—
function aObject()
anObjectIOwn = []; % this will be set to a bunch of Fhs
function setupObject(parameters)
anObjectIOwn = anotherObject();
end
end
—
Greg-
Do you know what field(s) the struct will have? If so, try initializing the struct (still empty) but with those fields. I don’t know if that will fix things, but it’s worth a try.
But if anObjectIOwn is really an object and not a struct, perhaps you’ll need to overload subsasgn???
–Loren
I second poster #10 above: wish Matlab could be more forgiving so that appending the 1st struct to an array of structs is easier. I am aware that
>> a.a=’q’;a.b=1;q=struct(‘a’,{},’b’,{});q(end+1)=a
q =
a: ‘q’
b: 1
works. But frequently I don’t know what the fields in the structure are (nor do I care/need that dependency), just want to collect them in an array, and then don’t know what to initialise q to so that q(end+1) still works.
Ljubomir
Hi,
I just wanted to know how to implement singly linked and doubly linked lists in matlab….using arrays.
Suhas-
See answer #18 above. You might also check the file exchange for solutions.
–Loren
Hi quick query.
General:
With structures: is it as easy (or easier) to operate on as normal arrays?
Specific:
Say i had a structure set up with a field called time which always had 3 values such as [0 5 10] and I wanted to find all entries with those certain values, would it recognise that [0 10 5] is equivalent? Which is what i would like it to do.
Baalzamon-
You’d have to set up a test that equated permutations of vectors, perhaps using ismember. If that doesn’t work, you can write a function that does the comparison you want, but would have to extract the data from the struct most likely, before doing the comparison.
–Loren
One area not covered (as far as I can see is the vectorised initialisation/population of structure fields in a structure array. Vectorised extraction has been covered:
However, what if you want to do the opposite – say you take all the field1 values and multiply them by a scalar and then try to write the answer back to the respective fields in the structure array:
I always have to resort to looping through the structure array and do it the long winded way (which is frustrating):
Any ideas on a more concise method for doing this?
Paul-
Depends what you mean by concise. Three lines of code isn’t that long. You could us struct2cell, work on values in cells (or convert to numeric and work on them) and then convert back. I doubt that’s more concise. Under some conditions, it might be faster than the loop, perhaps. There isn’t a one-liner to what you want however.
–Loren
Paul – you can use the following vectorized approach, which is faster than a for loop;
You can choose to work with cell-arrays from the onset, saving the mat2cell conversion. In your example above:
note that there is no need to use (:) anywhere, only {:}
Yair
I realize this may seem a bit against the spirit of using structs, but I found that, instead of an Nx1 struct with several scalar fields, a 1×1 struct with Nx1 vectors (one in each field) is much easier to work with. It’s easy to initialize, and interfacing the data with vectorized code is instant (both reading and writing to the struct).
An example of some vectorized code using this scheme:
As you can see, it’s very friendly to vectorized code, and doesn’t give up any of the advantages of using a struct!
Jotaf-
That’s a completely fine way to use structs and the way I use them quite often. There is nothing enforcing that .x and .y have the same length in the scalar struct version you show however, so you could have issues if the data stored in the fields isn’t commensurate. struct arrays can certainly have the same issue, but being a scalar struct doesn’t get you the guarantee either. FWIW, the scalar struct version is more memory efficient as well.
–Loren
Hi Loren,
I had a question for you in January on this post (at #12)
After considering cells as you indicated, I also tried structures (a tip from Doug Hull). I have been playing with structures for a few months now and they seem to be the perfect solution to what I wanted to do.
I like the flexibility they afford.
And I like to use your first method for initializing.
Here is a piece of code that does exactly what I was looking for and uses the first method for initializing.
But I have a question for you on my next step:
what if I wanted now to write the (i)th output to an individual matrix with the (i)th outputNames as its name?
Thank you
Matteo
Matteo-
We strongly recommend against creating variable names on the fly as it can confuse MATLAB and the analysis it does on the programs (e.g., the code checker).
–Loren
Hi Loren,
thanks for the advice and great reference.
I did not intend to do that as an input to further analysis, only as a way to easily export variables to give to people that do not have experience with structures. But I see I can use this tip “How do I dynamically generate a filename for SAVE?” for that, so thanks twice. Cheers. Matteo
Nico-
I don’t have time to try anything out right now, but I would look into arrayfun (which has a for loop inside) or some combination of struct2cell and indexing, and possibly deal. You might not be able to do it in one line. You might need to create a temporary cell array and then use that for the final input to containers.Map.
–Loren
Hi Loren,
I have been learning/coding in MATLAB for a few months now, and can say that in comparison to my graduate studies in Fortran 77 and Perl, it has been nothing but a pleasure—especially while having your posts as a resource, they are so very beneficial. This is my first post, as I have finally hit a snag I haven’t yet been able to solve on my own, or find the solution through these forums and others. I apologize in advance if this has been discussed in another post.
I currently began working with structures, and am looking to develop a quick way to lookup the data it contains, using a hash table. I have been following the examples for Map Containers at:
So, first I create some structure, s, so that:
Where we can view the first index in the structure as:
Continuing with the example, we are shown showing how one can implement mapping to a structure array by defining a map.container as:
which works wonderfully.
Now, assume I have a structure, S, with N number of elements. Additionally I have a name_S, a 1xN character cell-array that contains the keys which I wish to assign as keys to the structure indexes. Being that N is quite large (near 1000), I am looking for a way to perform the mapping in a vector operation, however, can’t find a solution. I realize I could do this with a for loop, but am always trying to keep my code fast and clean.
Using the previously defined structure, I have tried to do so with the following statement (and many others), all to no avail.
I guess this is more general question of how we can express a range of structure elements to be be operated on in one statement. Any advice is greatly appreciated.
Thank you much,
Nico.
I am wondering about how to initialize a structure array with more elements than I want to type in manually. For example, if I read a directory of 500 dicom headers using ‘dicominfo’, I get a structure info(1:500), where there might be ~100 field entires for each array element. One solution might be to read one header, then use it as the preallocation template, but is there a way to be more generic?
Chris-
If you have all the info at once, you can use the struct command itself which will allow you to create an array and initialize the values.
–Loren
Hi, I’m new to matlab. I would like to create empty struct and add its values using a for loop. I have an image which I divide into non-overlapping blocks. For each block I calculate the quality measure. What I would like to do is take the block store it in a struct with its corresponding quality value. Later in my code I would like to compute the mean of the quality measure for all blocks and set those blocks less then this quality mean to zeros.
Nteza-
No reason why you can’t do this. Just define your struct – but if you know how many blocks, you don’t need to make it an empty struct (size 0x0) but the correct size so it doesn’t grown. Specify the fields. So you could initialize like this
Then use your for loop to fill them. Then do your other calculations.
–Loren
My contribution.
In my lab we make many runs of our experiment. Each run is a data set consisting of many fields (time, energy, polarization, momentum, jitter…). This means that I need to create an array of structures. Each structure contains the data for a run with these field names.
The problem is that each structure contains maaaany fields, and I do NOT want to write each field separately as you do here. Instead, I first create a cell array with the strings of every field name, like this:
fieldNames = {‘field1’, ‘field2’, … , ‘fieldN’};
Now I will pass each of this string variables in the cell as a dynamic fieldname to my structure, that is:
for c = fieldNames
mystruct.(c{1}) = someData; % In each iteration c is a 1×1 cell array containing the next fieldName.
end
Now, this should work for a single structure with maaany fields. In my case I still need more than this. I need an array of such structures, one for each run of experiment. In that case, I will include another for loop that goes through each run number. I will also need to initialize somehow this array of structures. Here’s the whole (but simple) code:
fieldNames = {‘field1’, ‘field2’, … , ‘fieldN’};
struct(numberOfRuns).field1 = 0; % Initialize the array of structures with at least one field each.
for i=1:numberOfRuns
for c = fieldNames
mystruct(i).(c{1}) = someData;
end
end
In my opinion this is an elegant (at least code-friendly) way to work with large arrays of structures that contain many fields. A good analogy is a dataBase containing many other personal data besides the usual ‘name’ and ‘height’.
I would anyway appreciate comments on the efficiency of this code.
Thanks!
Gerard-
If you have access to Statistics Toolbox, you may find dataset arrays very useful for your application. The idea of arrays of structures can be quite useful, but it is costly in terms of memory and access.
–Loren
Thank you Loren, I have been checking the statistics toolbox and looks very promising.
However, in all examples they generate datasets from already existing variables. I do not see how could you open, say, 10 bin files and store them directly into a dataset with the corresponding label from the cell array (which is what I am doing in my code if I replace ‘someData’ by e.g. ‘fread’ function).
Gerard, there are two ways to create a dataset array from scratch: from workspace variables, or by reading from a text or Excel file. In you case, if possible, it would be fastest to create one dataset variable at a time by reversing the order of your loops, and putting something like this line inside the outer (fields) loop, but following the inner (runs) loop:
myData.(varNames{j}) = aVectorOfData;
You can certainly use your existing code, and replace the innermost line with
myData.(c{1})(i) = someData;
but it is fastest to work on one entire variable at a time.
In either case, you would need to preallocate the dataset array before doing the above, much as you did for the structure array.
Thanks for your support Peter.
In my case, the ‘fieldNames’ cell array contains the strings that are, at the same time, both the names of each bin file to be imported AND the field names I want to use as variables in the structures.
This means I still have not created any variables of the bin files. I am just reading the files DIRECTLY into each field of the structure.
mystruct(i).(c{1}) = fread( [c{1} ‘.bin’] ); % It was maybe unclear before, but I read a whole vector in each field of the structure, not a single data value.
% Each ‘i’ has the run number and each field has a vector of data.
I don’t see how could I read these files DIRECTLY into a dataset (seems it is not possible, as you say). So I should first read the files into variables anyway using my code, and then create the dataset. This looks a bit messy to me.
Also, the bin files have different data size. ‘dataset’ is apparently used with equal size variables. When used with different variable sizes I cannot visualize it well in the workspace (harder to debug, then?). Moreover, with my code I have now an array of structures that contains a different structure for each run, all packed in one variable in the workspace (very clean). I guess dataset arrays cannot have more than two dimensions (?), so as to make the third dimension the run number (this way I could have sheets of datasets for each run, like excel sheets, and access them very easily).
Are in this case dataset arrays still worth the effort?
Gerard, two things:
> This means I still have not created any variables of the
> bin files. I am just reading the files DIRECTLY into
> each field of the structure.
Sure you have. You just haven’t given them a name. They’re temporary values that exist only for the lifespan of that one line. You can do the same thing with dataset.
> Also, the bin files have different data size. ‘dataset’
> is apparently used with equal size variables.
Yes. A dataset array is like a table where each column (dataset variable) has the same length. So if your data aren’t like that, then dataset isn’t for you. The 2-D-ness is perhaps also a problem, but dataset arrays do allow you to have variables that are themselves matrices. That can sometimes be convenient for higher dimensional data.
Thanks Peter. Well then, I guess I will have to stick on my code, since datasets cannot handle size and multi-D. It is a pity, they look very flexible and efficient for vectorizing and also for code simplification.
I won’t get rid of the two for loops that I have to carry during the whole code :( (I have to make calculations for each run and for every field).
One more question: how to retrieve data from a field for all the structures of an array of structures? That is, this line
myStruct(:).fieldName
does only retrieve myStruct(1).fieldName, but not all of them.
Gerard-
myStruct(:).fieldname produces a comma-separated list in MATLAB (see the doc to learn more about this). To collect all the outputs, assuming they are scalar numeric, into a single vector, enclose the expression in square brackets [myStruct(:).fieldname]. To stick the values into a cell array, in case they won’t concatenate properly, do the same trick but wrap with curly braces {myStruct(:).fieldname}
–Loren
Hi Loren, thanks for your support again.
In my case, each ‘fieldName’ contains a vector, so ‘myStruct(:).fieldName’ does only produce the first ‘fieldName’ vector, the same as (myStruct(:).fieldName).
Only {myStruct(:).fieldName} is left. Could you help me out on how to concatenate this in a for loop? This doesn’t seem correct:
fieldNames = {‘field1’, … , ‘fieldN’}
for C=fieldNames
temp = { myStruct(:).(C{1}) };
for i=1:size(myStruct.(C{1}),2)
dataCat.(C{1}) = [dataCat.(C{1}) temp{i}];
end
end
Gerard-
Please contact technical support for more help. I don’t exactly understand your situation and am not near MATLAB to try things out.
–Loren | http://blogs.mathworks.com/loren/2008/02/01/structure-initialization/ | CC-MAIN-2017-22 | en | refinedweb |
OK I've just wrote something to fill some time, and I thought I had done it all right, but when I try compile it, it gives some errors, but the errors don't make much sense to me really. It's giving error messages like ' and (each.
I'm pretty new so I have no idea what they're on about so I was hoping someone on here could help me.
That's the source and the errors on lines 39 and 40 are in notes at the side.That's the source and the errors on lines 39 and 40 are in notes at the side.Code:#include <iostream> #include <cstdio> #include <cstdlib> #include <cmath> using namespace std; int main (int argc, char *argv[]) { cout << "Works out two values of X in Quadratic Formulas (ax^2+bx+c=0)" << endl; cout << "Integers only" << endl; //Declare A,B,C,negative B, answer to both top and bottom halfs, number to be square rooted and the SQRT. //Also declares answers. int A; int B; int NegB; int C; int TopHalfP; int TopHalfN; int BottomHalf; int InRoot; int Root; int AnswerP; int AnswerN; //Put values for A.B & C into the program. cout << "Enter A" << endl; cin >> A; cout << "Enter B" << endl; cin >> B; cout << "Enter C" << endl; cin >> C; //Work out Negative B, the number to be SQRTed, the SQRT and the numbers to be divided NegB = B*-1; InRoot = (b*b)-(4*A*C); // Gives ' and (each Root = sqrt(InRoot); //Gives call TopHalfP = NegB+Root; TopHalfN = NegB-Root; BottomHalf = 2*A; AnswerP = TopHalfP/BottomHalf; AnswerN = TopHalfN/BottomHalf; //Displays answers cout << "The answers for X are:" << endl; cout << AnswerP << endl; cout << "And" << endl; cout << AnswerN << endl; cout << "Press ENTER to continue..." << endl; cin.get(); return 0; }
It also gave some errors on lines in the hundreds, though this only gets to about 60 lines. I've put them in the image.
If anyone could help with any of these it would be great. | https://cboard.cprogramming.com/cplusplus-programming/80778-help-sorting-out-errors.html | CC-MAIN-2017-22 | en | refinedweb |
Implementing First Pass
Very rough time getting started
All the previous feasibility tests needed to work 100% now
Original plan got put on hold to ship 1.0 (2 months)
Lots of little problems were uncovered as we went
New plan, vertical slice using Pux!
New problems!
From JavaScript to PureScript
Floating the Idea
Validating Concerns
The Vote
Lessons learned
Moving Forward
Initially I didn't even present PureScript as an option
When other options all failed to do what we wanted, I brought it up
There was cautious enthusiasm for the idea amongst the team and strong support from our team lead
There were surprisingly few places that could be pulled out cleanly
Eff / Aff seems very intimidating at first but is easily understood from the JS side
Pick your battles with conversion, leave stuff in-tact and come back later
FFI is extremely powerful, we never feel backed into a corner, but
very easy to get wrong
Training is high on the priority list
Slow rollout, converting when it makes sense, but get the top level stuff converted first
As PureScript in the app becomes more a reality, concerns around it rises within team
Currently have 1 PureScript developer who can work solo, that number needs to be everyone
Interop
Performance
Training
Vote: TypeScript vs PureScript
Team picked PureScript by a fair margin (7 to 2)
People were voting yes but did they know what they were in for?
Had I oversold the benefits?
Git Kraken
Cross platform desktop app
Built using Electron
Built on React + Flux
Built on NodeGit (C++)
~64,000 non-whitespace lines of JavaScript
Needs
React rendering performance improvements from immutability (pure render)
Enforcing standards beyond linting
Refactoring with confidence beyond unit tests
Ability to control effects that touch native code
Options
Flow
PureScript
TypeScript
Redux
Immutable.js
Elm
Compiling PureScript within Electron
Calling between JavaScript and PureScript
Passing data between the two
Embedding PureScript arbitrarily in JavaScript
The cost of curried functions
Immutable data
Runtime overhead
Sat down and 1 on 1 went through the benefits of static typing
Started with TypeScript syntax and transitioned to PureScript as examples got more sophisticated
Demonstrated examples of Maybe, pattern matching, ADT's, newtypes
Original plan was to replace Flux stores
Standalone
Data oriented
New plan is to replace the action, view, and store of one part of the app, now needed React integration
This can't be done all in 1 go, requires integrating with existing actions/stores during transition
Adapter subscribes to flux emits
Converts relevant changes to a Pux message
Flux emit
Pux message
data Message
=
AppMessage AppStateHandler.Message
| Child1 Child1.Message
| Child2 Child2.Message
...
Top level component dispatches these messages to a separate handler
Top level component
Child1
Child2
Child3
AppStateHandler
ChildA
ChildB
ChildC
Internal
AppState Message
update :: Message ->
AppState
-> State -> State
view ::
AppState
-> State -> Html Message
AppState is passed into component's update and view
update :: Message -> State -> State
view :: State -> Html Message
Regular Pux
GitKraken with AppState
update :: AppMessage -> AppState -> AppState
Some nice wins along the way, locking NodeGit operations
return asyncAction(constants.Commit.actions.ACTION_NAME, initialState)
.
addWork
(actionThatDoesntNeedAnyLocking)
.
addWork
(
constants.Lock.INDEX
, (result, state) =>
actions.Foo.doSomethingThatNeedsToBeLocked(state.bar)
)
.
addWork
(
constants.Lock.INDEX
, (result, state) =>
dangerousThingThatNeedsToBeLocked(result, state.baz)
.then(() => nextStepInProcess())
.then(() => iSureHopeThisDoesntLockAlsoOrWeDeadlock())
)
Existing system locked on a per-promise level, if functions that are called inside a lock try to acquire the same lock, they will never be able to progress
Using effect types we can guarantee a lock is obtained
lockConfig :: forall eff a.
StateT Locks (Aff (configlock :: CONFIGLOCK | eff)) a
-> StateT Locks (Aff eff) a
NodeGit functions that need a lock are given an effect type of XYZLock
setPushUrl :: forall eff.
Repository
-> String
-> Url
-> Aff (configlock :: CONFIGLOCK | eff) Unit
The type of our top level component has a closed effect row that does NOT contain any of the lock types
So the effect must be consumed by a function, this is done similarly to catchException
The StateT Locks keeps track of what locks we've obtained
so far so any attempt to re-obtain the lock will be no-ops
A run function is then used to get the Aff out of the StateT Locks
runNodeGitAction :: forall eff a. StateT Locks (Aff eff) a -> Aff eff a
myAction :: forall eff a. Aff eff a
myAction =
runNodeGitAction do
lockIndex $ liftAff do
affThatRequiresIndexLock
affThatInternallyUsesLockIndex
liftAff regularAffThatDoesntNeedLocking
liftAff is needed to turn the Aff into a StateT Locks
Challenges
Type errors are extremely intimidating for newcomers
Error messages involving large records are difficult to read
Tooling is an issue generally and switching to "better tooling" via Emacs or Vim is not realistic for our team, Atom is what we use (it is improving rapidly though, thanks Nate!)
Some devs previously worked in C# and are used to very reliable intellisense, frustrating when this doesn't work, especially for local variables
No refactoring tooling
Non-standard project structure makes tooling even more problematic, psc-ide-server must be started with a custom directory, Pulp doesn't work without extra params, which necessitates custom build scripts
Huge
thanks to everyone who has helped me get this far
Gary Burgess
John De Goes
Phil Freeman
Harry Garrood
Dennis Gosnell
Joel Grus
Christoph Hegemann
Language transition from JS is quite rough
Lots of new concepts that seem strange
This is not a "read a book over the weekend" situation
Purity forces some structure that gets in the way of development
unsafeLog :: a -> b -> b
During dev, especially when dealing with FFI some practical "impurity" is very helpful
function unsafeLog (valueToLog) {
return function (valueToReturn) {
console.log(valueToLog);
return valueToReturn;
}
}
The ability to "see behind the curtain" in JavaScript is
very
helpful
PureScript syntax is much less scary
when you can see what it translates
to
Will Jones
Alex Mingoia
Matt Parsons
thimoteus
Nate Wolverson
And everyone else that I am forgetting
*Also see Debug.Trace
-- PureScript
module Utils (AwesomeThingType, doAwesomeThing) where
data AwesomeThingType
= Kittens
| Ponies
| FpConferences
howAwesomeIsThing :: AwesomeThingType -> Int
howAwesomeIsThing Kittens = 5
howAwesomeIsThing Ponies = 7
howAwesomeIsThing FpConferences = 11
// JavaScript
const Utils = require('utils');
log(Utils.howAwesomeIsThing(Utils.Kittens.Value));
-- PureScript code
module Foreign where
foreign import jsThing :: Int -> Int -> Int
doThingFromJs :: String -> Int -> Int
doThingFromJs a b = jsThing (length a) b
// JavaScript code
// module Foreign
function jsThing(param1) {
return function(param2) {
return param1 + param2
}
}
module.exports = { jsThing: jsThing }
module Main where
import Prelude
import Control.Monad.Eff (Eff)
import Control.Monad.Eff.Console (CONSOLE, log)
functionWith2Params :: Int -> Int -> Int
functionWith2Params a b = a + b
main :: forall e. Eff (console :: CONSOLE | e) Unit
main = do
log "Hello sailor!"
// Generated by psc version 0.8.5.0
"use strict";
var Prelude = require("../Prelude");
var Control_Monad_Eff =
require("../Control.Monad.Eff");
var Control_Monad_Eff_Console =
require("../Control.Monad.Eff.Console");
var main =
Control_Monad_Eff_Console.log("Hello sailor!");
var functionWith2Params = function (a) {
return function (b) {
return a + b | 0;
};
};
module.exports = {
main: main,
functionWith2Params: functionWith2Params
};
Purescript is installed as a npm dev dependency
Run psc / psa from a script that invokes node_modules/.bin/psa
Currently done from within electron JavaScript to PureScript
No description
byTweet
David Koontzon 25 May 2016
Please log in to add your comment. | https://prezi.com/wgq786ck7uc8/from-javascript-to-purescript/ | CC-MAIN-2017-22 | en | refinedweb |
This chapter discusses the Oracle JDBC implementation of distributed transactions. These are multi-phased transactions, often using multiple databases, that must be committed in a coordinated way. There is also related discussion of XA, which is a general standard (not specific to Java) for distributed transactions.
The following topics are discussed:
Error Handling and Optimizations
Implementing a Distributed Transaction
For further introductory and general information about distributed transactions, refer to the Sun Microsystems specifications for the JDBC 2.0 Optional Package and the Java Transaction API (JTA).
For information on the OCI-specific HeteroRM XA feature, see "OCI HeteroRM XA"..
In the JDBC 2.0 extension API, distributed transaction functionality is built on top of connection pooling functionality. remainder of this overview covers the following topics:
Distributed Transaction Components and Scenarios
Distributed Transaction Concepts
Switching Between Global and Local Transactions
For further introductory and general information about distributed transactions and XA, refer to the Sun Microsystems specifications for the JDBC 2.0 Optional Package and the Java Transaction API.
In reading the remainder of the distributed transactions section, it will be helpful to keep the following points in mind:
A distributed transaction system typically relies on an external transaction manager—such as a software component that implements standard Java Transaction API functionality—to coordinate the individual transactions.
Many vendors offer XA-compliant JTA modules, including Oracle, which includes JTA in Oracle 9.
When you use XA functionality, the transaction manager uses XA resource instances to prepare and coordinate each transaction branch and then to commit or roll back all transaction branches appropriately.
XA functionality includes the following key components:
XA datasources—These are extensions of connection pool datasources and other datasources, and similar in concept and functionality.
There will be one XA datasource instance for each resource manager (database) that will be used in the distributed transaction. You will typically create XA datasource instances (using the class constructor) in your middle-tier software.
XA datasources produce XA connections. (one at a time), as with pooled connection instances.
You will typically get an XA connection instance from an XA datasource instance (using a
getXAConnection method) in your middle-tier software. You can get multiple XA connection instances from a single XA datasource instance if the distributed transaction will involve multiple sessions (multiple physical connections) in the same database.
XA connections produce XA resource instances and JDBC connection instances.
XA resources—These are used by a transaction manager in coordinating the transaction branches of a distributed transaction.
You will get one XA resource instance from each XA connection instance (using a
getXAResource method), typically in your middle-tier software. There is a one-to-one correlation between XA resource instances and XA connection instances; equivalently, there is a one-to-one correlation between XA resource instances and Oracle sessions (physical connections).
In a typical scenario, the middle-tier component will hand off XA resource instances to the transaction manager, for use in coordinating distributed transactions.
Because each XA resource instance corresponds to a single Oracle session, there can be only a single active transaction branch associated with an XA resource instance at any given time. There can be additional suspended transaction branches, however—see "XA Resource Method Functionality and Input Parameters".
Each XA resource instance has the functionality to start, end, prepare, commit, or roll back the operations of the transaction branch running in the session with which the XA resource instance is associated.
The "prepare" step is the first step of a two-phase
COMMIT operation. The transaction manager will issue a
prepare to each XA resource instance. Once the transaction manager sees that the operations of each transaction branch have prepared successfully (essentially, that the databases can be accessed without error), it will issue a
COMMIT to each XA resource instance to commit all the changes.
Transaction IDs—These are used to identify transaction branches. Each ID includes a transaction branch ID component and a distributed transaction ID component—this is how a branch is associated with a distributed transaction. All XA resource instances associated with a given distributed transaction would have a transaction ID that includes the same distributed transaction ID component.
As of JDBC 3.0, applications can share connections between local and global transactions. Applications can also switch connections between local transactions and global transactions.
A connection is always in one of three modes:
NO_TXN,
LOCAL_TXN, or
GLOBAL_TXN. executed on the connection. A connection is always in
NO_TXN mode when it is instantiated.
If none of the rules above is applicable, the mode does not change.
The current connection mode restricts which operations are valid within a transaction.
In
LOCAL_TXN mode, applications must not invoke
prepare(),
commit(),
rollback(),
forget(), or
end() on an
XAResource. Doing so causes an
XAException to be thrown.
In
GLOBAL_TXN mode, applications must not invoke
commit(),
rollback() (both versions),
setAutoCommit(), or
setSavepoint() on a java.sql.Connection, and must not invoke (
OracleXid and
OracleXAException classes)
oracle.jdbc.xa.client
oracle.jdbc.xa.server
Classes for XA datasources, XA connections, and XA resources are in both the
client package and the
server package. (An abstract class for each is in the top-level package.) The
OracleXid and
OracleXAException classes are in the top-level
oracle.jdbc.xa package, because their functionality does not depend on where the code is running.
In middle-tier scenarios, you will import
OracleXid,
OracleXAException, and the
oracle.jdbc.xa.client package.
If you intend your XA code to run in the target Oracle database, however, you will import the
oracle.jdbc.xa.server package instead of the
client package.
If code that will run inside a target database must also access remote databases, then do not import either package—instead, you must fully qualify the names of any classes that you use from the
client package (to access a remote database) or from the
server package (to access the local database). Class names are duplicated between these packages.
This section discusses the XA components—standard XA interfaces specified in the JDBC 2.0 Optional Package, and the Oracle classes that implement them. The following topics are covered:
XA Datasource Interface and Oracle Implementation
XA Connection Interface and Oracle Implementation
XA Resource Interface and Oracle Implementation
XA Resource Method Functionality and Input Parameters
XA ID Interface and Oracle Implementation
The
javax.sql.XADataSource interface outlines standard functionality of XA datasources, "DataSource datasource. The connection obtained from an
XAConnection object behaves exactly like a regular connection, until it participates in a global transaction; at that time, autocommit status is set to false. After the global transaction ends, autocommit status is returned to the value it had before the global transaction. The default autocommit status on a connection obtained from
XAConnection is
false in all releases prior to 10g Release 1 (10.1); from this release forward, the default status is
true.:
It associates and disassociates distributed transactions with the transaction branch operating in the XA connection instance that produced this XA resource instance. (Essentially, associates distributed transactions with the physical connection or session encapsulated by the XA connection instance.) This is done through use of transaction IDs.
It performs the two-phase
COMMIT functionality must be one of the following values:
XAResource.TMNOFLAGS (no special flag). (It must first have been suspended.). statements.
XAResource.XA_OK—This is returned if the transaction branch executes updates that are all prepared without error.
n/a (no value returned)—No value is returned if the transaction branch executes updates and any of them encounter errors during preparation. In this case, an XA exception is thrown. step:
format identifier (4 bytes)
A format identifier specifies a Java transaction manager—for example, there could be a format identifier
ORCL. This field cannot be null.
global transaction identifier (64 bytes) (or "distributed transaction ID component", as discussed earlier)
branch qualifier (64 bytes) (or "transaction branch ID component", as discussed earlier)()
This section has two focuses: 1) the functionality of XA exceptions and error handling; and 2) Oracle optimizations in its XA implementation. The following topics are covered:
XA Exception. An XA exception is an instance of the standard class
javax.transaction.xa.XAException or a subclass. Oracle subclasses
XAException with the
oracle.jdbc.xa.OracleXAException class.
An
OracleXAException instance consists of an Oracle error portion and an XA error portion and is constructed as follows by the Oracle JDBC driver:
public OracleXAException()
or:
public OracleXAException(int error)
The error value is an error code that combines an Oracle SQL error value and an XA error value. (The JDBC driver determines exactly how to combine the Oracle and XA error values.)
The
OracleXAException class has the following methods:
public int getOracleError()
This method returns the Oracle SQL error code pertaining to the exception—a standard ORA error number (or 0 if there is no Oracle SQL error).
public int getXAError()
This method returns the XA error code pertaining to the exception. XA error values are defined in the
javax.transaction.xa.XAException class; refer to its Javadoc at the Sun Microsystems Web site for more information.
Oracle errors correspond to XA errors in
OracleXAException instances as documented in Table 9-2.
The following example uses the
OracleXAException class to process an XA exception:
try { ... ...Perform XA operations... ... } catch(OracleXAException oxae) { int oraerr = oxae.getOracleError(); System.out.println("Error " + oraerr); } catch(XAException xae) {...Process generic XA exception...}
In case the XA operations did not throw an Oracle-specific XA exception, the code drops through to process a generic XA exception.
Oracle JDBC has functionality to improve performance if two or more branches of a distributed transaction use the same database instance—meaning that the XA resource instances associated with these branches are associated with the same resource manager.
In such a circumstance, the
prepare() method of only one of these XA resource instances will return
XA_OK (or failure); the rest will return
XA_RDONLY, even if updates are made. This allows the transaction manager to implicitly join all the transaction branches and commit (or roll back, if failure) the joined transaction through the XA resource instance that returned
XA_OK (or failure).
The transaction manager can use the
OracleXAResource class
isSameRM() method to determine if two XA resource instances are using the same resource manager. This way it can interpret the meaning of
XA_RDONLY return values.
This section provides an example of how to implement a distributed transaction using Oracle XA functionality.
You must import the following for Oracle XA functionality:
import oracle.jdbc.xa.OracleXid; import oracle.jdbc.xa.OracleXAException; import oracle.jdbc.pool.*; import oracle.jdbc.xa.client.*; import javax.transaction.xa.*;
The
oracle.jdbc.pool package has classes for connection pooling functionality, some of which are subclassed by XA-related classes.
In addition, if the code will run inside an Oracle database and access that database for SQL operations, you must import the following:
import oracle.jdbc.xa.server.*;
(And if you intend to access only the database in which the code runs, you would not need the
oracle.jdbc.xa.client classes.)
The
client and
server packages each have versions of the
OracleXADataSource,
OracleXAConnection, and
OracleXAResource classes. Abstract versions of these three classes are in the top-level
oracle.jdbc.xa package.
This example uses a two-phase distributed transaction with two transaction branches, each to a separate database.
Note that for simplicity, this example combines code that would typically be in a middle tier with code that would typically be in a transaction manager (such as the XA resource method invocations and the creation of transaction IDs).
For brevity, the specifics of creating transaction IDs (in the
createID() method) and performing SQL operations (in the
doSomeWork1() and
doSomeWork2() methods) are not shown here. The complete example is shipped with the product.
This example executes the following sequence:
Start transaction branch #1.
Start transaction branch #2.
Execute DML operations on branch #1.
Execute DML operations on branch #2.
End transaction branch #1.
End transaction branch #2.
Prepare branch #1.
Prepare branch #2.
Commit branch #1.
Commit branch #2.
// You need to import the java.sql package to use JDBC import java.sql.*; import javax.sql.*; import oracle.jdbc.*; import oracle.jdbc.pool.*; import oracle.jdbc.xa.OracleXid; import oracle.jdbc.xa.OracleXAException; import oracle.jdbc.xa.client.*; import javax.transaction.xa.*; class XA4 { public static void main (String args []) throws SQLException { try { String URL1 = "jdbc:oracle:oci:@"; // (); // Prepare a statement to create the table Statement stmtb = connb.createStatement (); try { // Drop the test table stmta.execute ("drop table my_table"); } catch (SQLException e) { // Ignore an error here } try { // Create a test table stmta.execute ("create table my_table (col1 int)"); } catch (SQLException e) { // Ignore an error here too } try { // Drop the test table stmtb.execute ("drop table my_tab"); } catch (SQLException e) { // Ignore an error here } try { // Create a test table stmtb.execute ("create table my_tab (col1 char(30))"); } catch (SQLException e) { // Ignore an error here too } // Create XADataSource instances and set properties. OracleXADataSource oxds1 = new OracleXADataSource(); oxds1.setURL("jdbc:oracle:oci:@"); oxds1.setUser("scott"); oxds1.setPassword("tiger"); OracleXADataSource oxds2 = new OracleXADataSource(); oxds2.setURL("jdbc:oracle:thin:@(description=(address=(host=dlsun991) (protocol=tcp)(port=5521))(connect_data=(sid=rdbms2)))"); oxds2.setUser("scott"); oxds2.setPassword("tiger"); // Get XA connections to the underlying datasources XAConnection pc1 = oxds1.getXAConnection(); XAConnection pc2 = oxds2.getXAConnection(); // Get the physical connections Connection conn1 = pc1.getConnection(); Connection conn2 = pc2.getConnection(); // Get the XA resources XAResource oxar1 = pc1.getXAResource(); XAResource oxar2 = pc2.getXAResource(); // Create the Xids With the Same Global Ids Xid xid1 = createXid(1); Xid xid2 = createXid(2); // Start the Resources oxar1.start (xid1, XAResource.TMNOFLAGS); oxar2.start (xid2, XAResource.TMNOFLAGS); // Execute SQL operations with conn1 and conn2 doSomeWork1 (conn1); doSomeWork2 (conn2); // END both the branches -- IMPORTANT oxar1.end(xid1, XAResource.TMSUCCESS); oxar2.end(xid2, XAResource.TMSUCCESS); // Prepare the RMs int prp1 = oxar1.prepare (xid1); int prp2 = oxar2.prepare (xid2); System.out.println("Return value of prepare 1 is " + prp1); System.out.println("Return value of prepare 2 is " + prp; System.out.println("do_commit is " + do_commit); System.out.println("Is oxar1 same as oxar2 ? " + oxar1.isSameRM(oxar2));; ResultSet rset = stmta.executeQuery ("select col1 from my_table"); while (rset.next()) System.out.println("Col1 is " + rset.getInt(1)); rset.close(); rset = null; rset = stmtb.executeQuery ("select col1 from my_tab"); while (rset.next()) System.out.println("Col1 is " + rset.getString(1)); rset.close(); rset = null; stmta.close(); stmta = null; stmtb.close(); stmtb = null; conna.close(); conna = null; connb.close(); connb = null; } catch (SQLException sqe) { sqe.printStackTrace(); }()); } } } static Xid createXid(int bids) throws XAException {...Create transaction IDs...} private static void doSomeWork1 (Connection conn) throws SQLException {...Execute SQL operations...} private static void doSomeWork2 (Connection conn) throws SQLException {...Execute SQL operations...} } | http://docs.oracle.com/cd/B14117_01/java.101/b10979/xadistra.htm | CC-MAIN-2017-22 | en | refinedweb |
view raw
I was a bit curious if I could do more work in a function after returning a result. Basically I'm making a site using the pyramid framework(which is simply coding in python) after I process the inputs I return variables to render the page but sometimes I want to do more work after I render the page.
For example, you come to my site and update your profile and all you care about is that its successful so I output a message saying 'success!' but after that done I want to take your update and update my activity logs of what your doing, update your friends activity streams, etc.. Right now I'm doing all that before I return the result status that you care about but I'm curious if I can do it after so users get their responses faster.
I have done multi-processing before and worst case I might just fork a thread to do this work but if there was a way to do work after a return statement then that would be simpler.
example:
def profile_update(inputs):
#take updates and update the database
return "it worked"
#do maintainence processing now..
No, unfortunately, once you hit the
return statement, you return from the function/method (either with or without a return value).
From the docs for return:
return leaves the current function call with the expression list (or None) as return value.
You may want to look into generator functions and the yield statement, this is a way to return a value from a function and continue processing and preparing another value to be returned when the function is called the next time. | https://codedump.io/share/GDtgOWWlJwxA/1/is-there-a-way-to-do-more-work-after-a-return-statement | CC-MAIN-2017-22 | en | refinedweb |
view raw
I am using reactjs and material-ui for development of my application. I am developing a dashboard. Before i used react-mdl for component and using it the navbar component is working as expected. I want to use material-ui, though as it has lots of component and support. It is mature than react-mdl. Everything is working perfectly except navbar component where i have used AppBar.
Below is the screenshot of navbar i have designed using both material-ui and react-mdl. I want the 2nd navbar(it is designed using react-mdl) using material-ui which i could not. I tried using tab inside AppBar but no luck. The concept is there will be 2 tab at first and when the add icon is clicked that is on the right side, user gets to add a new tab over there. How can i design it the same way?
Here is my code
render() {
const navigation = (
<BottomNavigation style={{ background:'transparent'}}>
<i className="material-icons md-23">delete</i>
<i className="material-icons md-23">add_circle</i>
</BottomNavigation>
)
return (
<div>
<div className="mdlContent" style={{height: '900px', position: 'relative'}}>
<AppBar iconElementRight={navigation} >
</AppBar>
)
You can do this via passing a
component as the title to the
<AppBar/> component.
Check the following pen:
Instead of using
tabs inside the
navBar which is a bit tricky, you can use
<a> and handle the route to render the specific page. | https://codedump.io/share/toibTL2Oe9ew/1/could-not-show-items-in-navbar-as-per-the-need | CC-MAIN-2017-22 | en | refinedweb |
view raw
I need to show a lock screen when application comes from background and also after a time duration. I have searched a lot. But not found any useful solution. Please help me. Thanks in advance.
From your added information in the comments, I'd suggest you have one base activity that implements your desired behavior and have your other activities inherit from that one.
It could look somewhat like this:
public class BaseActivity extends Activity{ @Override public void onResume() { super.onResume(); // check if you want to display your login } }
If you want to show your login after a set amount of time of inactivity, you could implement that in
BaseActivity, too:
@Override public void onUserInteraction(){ // reset your timer... super.onUserInteraction(); } | https://codedump.io/share/pSxOoEzxdXsK/1/android---show-lock-screen-when-application-resumes-in-android | CC-MAIN-2017-22 | en | refinedweb |
Some friends here on the Hyper-V team shared a PowerShell 2.0 script for creating a VM:
# Create a VM
param(
[string]$vmName = $(throw “Must supply a virtual machine name”)
)
# Get a new instance of Msvm_VirtualSystemGlobalSettingData
$vsgsdClass = [wmiclass]”root\virtualization:Msvm_VirtualSystemGlobalSettingData”
$vsgsd = $vsgsdClass.CreateInstance()
$vsgsd.ElementName = $vmName
# Get the managment service and define the system from the embedded instance
$vmms = gwmi -namespace root\virtualization Msvm_VirtualSystemManagementService
$result = $vmms.DefineVirtualSystem($vsgsd.GetText(1))
if($result.ReturnValue -eq 4096){
# A Job was started, and can be tracked using its Msvm_Job instance
$job = [wmi]$result.Job
# Wait for job to finish
while($job.jobstate -lt 7){$job.get()}
# Return the Job’s error code
return $job.ErrorCode
}
# Otherwise, the method completed
return $result.ReturnValue
This seems sort of primitive to do workflow automation, especially in comparison to VMWARE Orchestrator in VSPHERE. I’d be interested to see what is coming down the pipe for a hyper v automation workflow engine.
—- Ken.
Um, check these out for a less "primitive" solution:
PowerShell Management Library for Hyper-V at
The Official Scripting Guys Forum (click the Hyper-V tag) at
SCVMM at | https://blogs.technet.microsoft.com/tonyso/2009/08/20/hyper-v-how-to-create-a-vm-using-script/ | CC-MAIN-2017-22 | en | refinedweb |
This document lists the document formats that will be used by the Java Persistence API XML descriptors. The Java Persistence API requires that its XML descriptors be validated with respect to the XML Schema listed by this document.
Starting with the 2.1 version,
the Java Persistence API Schemas share the namespace,.
Previous versions used the namespace. Each schema
document contains a version attribute that contains the version of the
Java Persistence specification. This pertains to the specific version
of the specification as well as the schema document itself. 2.1.
This table contains the XML Schema components for Java Persistence 2.0.
This table contains the XML Schema components for Java Persistence 1.0. | http://www.oracle.com/webfolder/technetwork/jsc/xml/ns/persistence/index.html | CC-MAIN-2017-22 | en | refinedweb |
NRWT currently uses apache2-debian-httpd.conf for running the httpd server, thus only runs on debian based distros, this is a huge regression compared to ORWT.
NRWT should at least check for the /etc/debian_version file, and if not present it should use the generic apache2-httpd.conf file.
Created attachment 99977 [details]
proposed fix
Comment on attachment 99977 [details]
proposed fix
OK. There is a similar "is_redhat_based()" function in the Gtk port.
Please add a test for this in qt_unittest.py. Should be very easy to do with MockFileSystem({'/etc/debian_version', ''})
Something like this:
def test_is_debian_based(self)
port = self.make_port() # This is available in PortTestCase which QtPortTest isn't yet, I don't think, but could be made to
self.assertEqual(port.is_debian_based(), False)
port._filesystem = MockFileSystem({'/etc/debian_version', ''})
self.assertEqual(port.is_debian_based(), True)
One could also test that overriding _is_debian_based() with lamba: False or lambda: True affected port._path_to_appache_config_file().basename() if you wanted. In any case, we just need some sort of testing.
Comment on attachment 99977 [details]
proposed fix
View in context:
> Tools/Scripts/webkitpy/layout_tests/port/qt.py:32
> +import os
Don't need this once you use FileSystem.
> Tools/Scripts/webkitpy/layout_tests/port/qt.py:69
> + if os.path.isfile('/etc/debian_version'):
Oh, this is wrong too. This should use self._filesystem.exists() or isfile if that method exists on FileSystem. We don't ever talk to os directly if we can help it. (Makes mockng hard.)
Same or similar bug on Qt Mac bot:
2011-07-11 04:15:11,000 86746 apache_http_server.py:182 DEBUG Starting httpd server, cmd="/usr/sbin/httpd -f "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/layout-test-results/httpd.conf" -C 'DocumentRoot "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/LayoutTests/http/tests"' -c 'Alias /js-test-resources "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/LayoutTests/fast/js/resources"' -c 'Alias /media-resources "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/LayoutTests/media"' -C 'Listen 127.0.0.1:8000' -C 'Listen 127.0.0.1:8081' -c 'TypesConfig "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/LayoutTests/http/conf/mime.types"' -c 'CustomLog "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/layout-test-results/access_log.txt" common' -c 'ErrorLog "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/layout-test-results/error_log.txt"' -C 'User "buildbot"' -c 'PidFile /tmp/WebKit/httpd.pid' -k start -c 'SSLCertificateFile /buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/LayoutTests/http/conf/webkit-httpd.pem'"
Traceback (most recent call last):
File "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests.py", line 438, in <module>
Ignoring unsupported option: --use-remote-links-to-tests
sys.exit(main())
File "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests.py", line 433, in main
return run(port_obj, options, args)
File "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests.py", line 112, in run
num_unexpected_results = manager.run(result_summary)
File "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py", line 874, in run
interrupted, keyboard_interrupted, thread_timings, test_timings, individual_test_timings = self._run_tests(self._test_files_list, result_summary)
File "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py", line 723, in _run_tests
self.start_servers_with_lock()
File "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py", line 936, in start_servers_with_lock
self._port.start_http_server()
File "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/Tools/Scripts/webkitpy/layout_tests/port/base.py", line 669, in start_http_server
server.start()
File "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/Tools/Scripts/webkitpy/layout_tests/servers/http_server_base.py", line 86, in start
self._pid = self._spawn_process()
File "/buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/Tools/Scripts/webkitpy/layout_tests/servers/apache_http_server.py", line 185, in _spawn_process
raise http_server_base.ServerError('Failed to start %s: %s' % (self._name, err))
webkitpy.layout_tests.servers.http_server_base.ServerError: Failed to start httpd: httpd: Syntax error on line 197 of /buildbot/snowleopard-qt-release/snowleopard-qt-intel-release/build/layout-test-results/httpd.conf: Cannot load /usr/lib/apache2/modules/mod_mime.so into server: dlopen(/usr/lib/apache2/modules/mod_mime.so, 10): image not found
Assigning this bug to Kristóf, since he has more knowledge of python than I have.
Comment on attachment 99977 [details]
proposed fix
Marking my patch obsolete.
I tried to disable NRWT on qt-mac platform:
isDarwin() checks $^O, and it should be "darwin" on the Mac bot.
Dihan, could you check what is $^O on the bot? (print $^O; in perl)
".
(In reply to comment #8)
> ".
The problem was that isSnowLeopard() is true for qt-mac too.
Fix landed in.
This is super-easy to fix.
Is the FIXME in question. This is a 3 line python change. Tell me where your httpd file is on all platforms you care about and I can do so. (Or I can just copy the logic from ORWT).
Is this the only thing blocking NRWT no Qt?
I have a patch to fix this. I will post it this afternoon.
Created attachment 100378 [details]
Patch
Comment on attachment 100378 [details]
Patch
View in context:
Thanks for fixing this, Eric.
> Tools/Scripts/webkitpy/layout_tests/port/webkit.py:396
> + if sys_platform.startswith('linux'):
> + if self._is_redhat_based():
> + return 'fedora-httpd.conf'
> + # FIXME: Seems wrong to assume that all non-redhat linux is debian.
> + return 'apache2-debian-httpd.conf'
> + if using_apache2:
> + return "apache2-httpd.conf"
This seems wrong to me, too. ORWT checks for the /etc/debian_version file which all debian based distros have, and apache2-httpd.conf should be used if nor Fedora nor Debian was true and the version string contained "Apache/2" since for example on Slackware and Arch the apache2 binary is called httpd. All others should fallback to httpd.conf. This would mimic ORWT behaviour.
the patch looked fine to me apart from the apache2 on mac issue ...
Created attachment 100395 [details]
Patch
Comment on attachment 100395 [details]
Patch
I'm not an expert on this code, but this looks fine.
Comment on attachment 100395 [details]
Patch
Clearing flags on attachment: 100395
Committed r90810: <>
All reviewed patches have been landed. Closing bug. | https://bugs.webkit.org/show_bug.cgi?id=64086 | CC-MAIN-2017-22 | en | refinedweb |
31829/src/code
Modified Files:
eval.lisp
Log Message:
0.7.8.23:
* Fixed bug 204: (EVAL-WHEN (:COMPILE-TOPLEVEL) ...) inside
MACROLET.
* Expanders, introduced by MACROLET, are defined in a
restricted lexical environment.
* SB-C:LEXENV-FIND works in any package.
Index: eval.lisp
===================================================================
RCS file: /cvsroot/sbcl/sbcl/src/code/eval.lisp,v
retrieving revision 1.15
retrieving revision 1.16
diff -u -d -r1.15 -r1.16
--- eval.lisp 19 Dec 2001 20:04:10 -0000 1.15
+++ eval.lisp 10 Oct 2002 07:16:14 -0000 1.16
@@ -13,29 +13,31 @@
;;; general case of EVAL (except in that it can't handle toplevel
;;; EVAL-WHEN magic properly): Delegate to #'COMPILE.
-(defun %eval (expr)
- (funcall (compile (gensym "EVAL-TMPFUN-")
- `(lambda ()
+(defun %eval (expr lexenv)
+ (funcall (sb!c:compile-in-lexenv
+ (gensym "EVAL-TMPFUN-")
+ `(lambda ()
- ;; The user can reasonably expect that the
- ;; interpreter will be safe.
- (declare (optimize (safety 3)))
+ ;; The user can reasonably expect that the
+ ;; interpreter will be safe.
+ (declare (optimize (safety 3)))
- ;; It's also good if the interpreter doesn't
- ;; spend too long thinking about each input
- ;; form, since if the user'd wanted the
- ;; tradeoff to favor quality of compiled code
- ;; over compilation speed, he'd've explicitly
- ;; asked for compilation.
- (declare (optimize (compilation-speed 2)))
+ ;; It's also good if the interpreter doesn't
+ ;; spend too long thinking about each input
+ ;; form, since if the user'd wanted the
+ ;; tradeoff to favor quality of compiled code
+ ;; over compilation speed, he'd've explicitly
+ ;; asked for compilation.
+ (declare (optimize (compilation-speed 2)))
- ;; Other properties are relatively unimportant.
- (declare (optimize (speed 1) (debug 1) (space 1)))
+ ;; Other properties are relatively unimportant.
+ (declare (optimize (speed 1) (debug 1) (space 1)))
- ,expr))))
+ ,expr)
+ lexenv)))
;;; Handle PROGN and implicit PROGN.
-(defun eval-progn-body (progn-body)
+(defun eval-progn-body (progn-body lexenv)
(unless (list-with-length-p progn-body)
(let ((*print-circle* t))
(error 'simple-program-error
@@ -52,17 +54,21 @@
(rest-i (rest i) (rest i)))
(nil)
(if rest-i ; if not last element of list
- (eval (first i))
- (return (eval (first i))))))
+ (eval-in-lexenv (first i) lexenv)
+ (return (eval-in-lexenv (first i) lexenv)))))
-;;; Pick off a few easy cases, and the various top level EVAL-WHEN
-;;; magical cases, and call %EVAL for the rest.
(defun eval (original-exp)
#!+sb-doc
"Evaluate the argument in a null lexical environment, returning the
result or results."
+ (eval-in-lexenv original-exp (make-null-lexenv)))
+
+;;; Pick off a few easy cases, and the various top level EVAL-WHEN
+;;; magical cases, and call %EVAL for the rest.
+(defun eval-in-lexenv (original-exp lexenv)
(declare (optimize (safety 1)))
- (let ((exp (macroexpand original-exp)))
+ ;; (aver (lexenv-simple-p lexenv))
+ (let ((exp (macroexpand original-exp lexenv)))
(typecase exp
(symbol
(ecase (info :variable :kind exp)
@@ -80,7 +86,7 @@
;; compatibility, it can be implemented with
;; DEFINE-SYMBOL-MACRO, keeping the code walkers happy.
(:alien
- (%eval original-exp))))
+ (%eval original-exp lexenv))))
(list
(let ((name (first exp))
(n-args (1- (length exp))))
@@ -89,11 +95,13 @@
(unless (= n-args 1)
(error "wrong number of args to FUNCTION:~% ~S" exp))
(let ((name (second exp)))
- (if (or (atom name)
- (and (consp name)
- (eq (car name) 'setf)))
+ (if (and (or (atom name)
+ (and (consp name)
+ (eq (car name) 'setf)))
+ (not (consp (let ((sb!c:*lexenv* lexenv))
+ (sb!c:lexenv-find name funs)))))
(fdefinition name)
- (%eval original-exp))))
+ (%eval original-exp lexenv))))
(quote
(unless (= n-args 1)
(error "wrong number of args to QUOTE:~% ~S" exp))
@@ -117,9 +125,9 @@
;; variable; the code should now act as though that
;; variable is NIL. This should be tested..
(:special)
- (t (return (%eval original-exp))))))))
+ (t (return (%eval original-exp lexenv))))))))
((progn)
- (eval-progn-body (rest exp)))
+ (eval-progn-body (rest exp) lexenv))
((eval-when)
;; FIXME: DESTRUCTURING-BIND returns ARG-COUNT-ERROR
;; instead of PROGRAM-ERROR when there's something wrong
@@ -145,15 +153,15 @@
;; otherwise, the EVAL-WHEN form returns NIL.
(declare (ignore ct lt))
(when e
- (eval-progn-body body)))))
+ (eval-progn-body body lexenv)))))
(t
(if (and (symbolp name)
(eq (info :function :kind name) :function))
(collect ((args))
- (dolist (arg (rest exp))
- (args (eval arg)))
- (apply (symbol-function name) (args)))
- (%eval original-exp))))))
+ (dolist (arg (rest exp))
+ (args (eval arg)))
+ (apply (symbol-function name) (args)))
+ (%eval original-exp lexenv))))))
(t
exp))))
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/sbcl/mailman/sbcl-commits/thread/E17zXYP-0000CO-00@usw-pr-cvs1.sourceforge.net/ | CC-MAIN-2017-22 | en | refinedweb |
Content-type: text/html
setservent, setservent_r - Open or rewind the services file
Standard C Library (libc.so, libc.a)
#include <netdb.h>
void setservent(
int stay_open);
[Digital] The following function is supported in order to maintain backward compatibility
with previous versions of the operating system.
int setservent_r(
int stay_open,
struct servent_data *serv_data);
[Digital] The following function is supported in order to maintain backward compatibility with previous versions of the operating system.
int setservent(
int stay_open);
Interfaces documented on this reference page conform to industry standards as follows:
setservent(): XPG4-UNIX
Refer to the standards(5) reference page for more information about industry standards and associated tags.
Indicates when to close the services file. Specifying a value of 0 (zero) causes the file to be closed after each call to the getservent() function. Specifying a nonzero value allows the file to remain open after each call. [Digital] Points to a structure where setservent_r() stores information about the services file.
The setservent() (set service entry) function opens either the local /etc/services file or the NIS distributed services file, and sets the file marker at the beginning of the file. To determine which file or files to search, and in which order, the system uses the switches in the /etc/svc.conf file.
[Digital] The setservent_r() function is the reentrant version of the setservent() function. It is supported in order to maintain backward compatibility with previous versions of the operating system. Upon successful completion, the setservent_r() function returns a value of 0 (zero). Otherwise, it returns a value of -1.
[Digital] Before calling the setservent_r() function for the first time, you must zero-fill the servent_data structure. The netdb.h header file defines the servent_data structure.
Current industry standards for setservent() do not define return values.
[Digital] Upon successful completion, the setservent() function included for backward compatibility returns a 1 for success. Otherwise, it returns a value of 0 (zero).
Current industry standards for setservent() do not define error values.
[Digital] If any of the following conditions occurs, the setservent_r() function sets errno to the corresponding value: If serv_data is invalid.
In addition, the setservent() and setservent_r() functions can fail to open the file. In this case, errno will be set to the failure.
Contains service names. The database service selection configuration file.
Functions: endservent(3), getservbyname(3), getservbyport(3), getservent(3).
Files: services(4), svc.conf(4).
Networks: nis_intro(7).
Standards: standards(5). delim off | http://backdrift.org/man/tru64/man3/setservent.3.html | CC-MAIN-2017-22 | en | refinedweb |
Colour Gradient - Online Code
Description
This Code shows the simple Window with the Colour Gradient which you often go through.
Source Code
import java.awt.Color; import java.awt.Dimension; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.geom.Rectangle2D; import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.UIMana... (login or register to view full code)
To view full code, you must Login or Register, its FREE.
Hey, registering yourself just takes less than a minute and opens up a whole new GetGyan experience. | http://www.getgyan.com/show/814/Colour_Gradient | CC-MAIN-2017-22 | en | refinedweb |
I would like to write a console copy program in C++ that copies a file into another file like linux or windows copy programs.this programs take name of file and copy into the same folder with the same name concatenate it to a "copy_" word.this is a simple program but i would like to develop it's features.
there are some problems at this way.when you give a executable program name to this simple copy program , the new "copy_executablefile" can't run.for other files such as files with 'pdf' extension it has no problems and copied file (copy_pdfextensionfile) is exactly like the same original file.but in executable file i have a problem.after some check and compare between original file and copied file byte by byte using "Hex Editor" program i discovered that at the end of copied file one byte with 00H value is extra.after removing this extra byte using Hex Editor ,copied file runs successfully.but i can't find out why this extra byte appends into copy_file.what do you know about this prob?please help me.perhaps copying character by character is the reason.because a character is 4byte.what's the solution?
you can see this simple prog following.
Sincerely
Kaveh Shahhosseini 6/May/2011
Code://this prog tested in Ubuntu 10.10 with Gcc compiler and works correctly. //begining of copy program. //======================= #include <iostream> #include <fstream> #include <string.h> using namespace::std; //--------------------------- int main() { char filename1[12],ch,filename2[18]; cout<<"Enter file name:\n"; cin>>filename1; ifstream in(filename1,ios::in); strcpy(filename2,"copy_"); strcat(filename2,filename1); ofstream out(filename2,ios::out); while(!in.eof()) { in.get(ch); out.put(ch); } return 0; //end } //==================== | https://cboard.cprogramming.com/cplusplus-programming/137774-console-copy-program-cplusplus.html | CC-MAIN-2017-22 | en | refinedweb |
Portuguese translation to follow.
Since last June 22nd, SAP has released under General Availability (GA) the latest version for its NF-e (Brazilian electronic invoicing) solution: SAP NFE 10.0. It had been in Ramp-up since December 13th, 2010, and since the successful go-live of the first of its ramp-up projects, it has been made generally available.
But what has changed in this newer version? Well, first of all, the version itself. 🙂 Lots of people have asked why the number went all the way from 1.0 up to 10.0! The answer is rather simple though, it has nothing to do with the (real) fact that the solution has improved 1000% (hehe) – the point is that, as part of the SAP Governance, Risk and Compliance (GRC) Suite, SAP NFE was also part of the GRC Suite Harmonization project, under which all GRC products were redesigned & relabeled as 10.0. That’s why we have Access Control 10.0, Process Control 10.0 and so on…
Secondly, the product name has changed altogether. The first version used to be called, in its full name, SAP GRC Nota Fiscal Eletronica 1.0, or SAP NFE 1.0 for shorts. Now, also under the GRC harmonization, its official full name has been changed to SAP BusinessObjects Electronic Invoicing for Brazil 10.0. Pretty fancy, ain’t it?? The short version remains rather the same, SAP NFE 10.0. Folks down here in Brazil, who have been used to call the product just as “GRC”, will have to get used to the new name… or not. Power to the people!
But what has really changed underneath?? Is it the same thing with a new name? Not at all!!!
History
In simple terms, the first version of the product (covered in SAP GRC NFE 1.0 – New Solution Introduction & Implemention Best Practices) was aimed at meeting with the requirements of the issuing of electronic invoices by the companies selling (or returning, importing etc.) goods. For any scenario that demanded an invoice (or Nota Fiscal, in Brazilian terms) to be issued, it needed to be converted into the electronic format defined by the government and processed through a predefined set of steps in order to comply with the legal requirements. Only after getting the online approval from the government for that particular invoice, the company would be allowed to ship the goods.
This was well achieved by the SAP NFE 1.0 solution. It revolutionized the way the Brazilian SAP customers were managing electronic invoices, in terms of integration to the SAP ERP as well as the stability & scability that only a solution based on the SAP NetWeaver platform could offer. Being the best of breed solution in its niche, SAP NFE is now present in more than 1/3 of the SAP installed base in Brazil, even though it was released later than most of the competitive solutions.
However, the experience that SAP gathered with the NF-e projects allied to the business insight that only SAP has, made it possible for our engineers & consultants to perceive a couple of bottlenecks that were happening in the process.
While the outgoing part of the process (i.e. selling the goods) was considerably efficient, the adoption of electronic invoicing brought serious drawbacks to the other side of the supply chain: the companies pruchasing the goods were having a much harder time processing these incoming electronic invoices, due to new legal requirements which were specific to the NF-es.
The new Incoming piece
So, with the experience gathered in SAP’s 300+ implementations of SAP NFE among the Brazilian customers, our development engineers & architects were able to come up with a solution that would make the life of the companies purchasing the goods much easier. Instead of having a bigger burden because of the electronic document, the solution SAP has come up with is able to actually take advantage of the fact that the process is based on such electronic documents and reduce the overall average time spent in the fiscal validation of these incoming invoices by 7/8!
So, now, the SAP NFE solution is comprised of two pieces, or “modules”:
- SAP NFE, Outbound: the classic feature of SAP NFE, best of breed NF-e issuing system for SAP ERP customers in the market;
- SAP NFE, Incoming: leveraging the NF-e capabilities in order to improve the procurement & logisitc process for the Brazilian customers.
Much different from the SAP NFE Outbound piece, though, the Inbound piece is not just a mere transactional message handler anymore. It is tightly integrated to the SAP ERP invoice verification & material movement processes, leveraging the actual Business Users to migrate from the old-fashioned transactions into modern web-based mashed-up interfaces, which integrate in a single workplace all the information the users need to do their jobs.
More specific details about the new SAP NFE Incoming Automation process will be explored further in a future blog.
What’s new for the Outbound process
While the main feature of the new version is indeed the Incoming process, the Oubtound process has not been neglected by the developers and it has been remodeled and improved since SAP NFE 1.0.
There were 3 major enhancements, besides several other minor improvements (some of which have been downported to the 1.0 codeline):
- Simplification of the architecture, with the complete removal of the SLL-NFE-JWS Java component: now, the XML digital signature is fully handled by the SLL-NFE ABAP component (leveraging new features of the 7.02 NetWeaver platform), what has reduced the complexity & TCO of the solution, as well as increased the overall message throughput by 30+%;
- New Mass Download UI: an old recurring request of existing customers, now it’s possible to download a set of XML files (filtered by date, for example) in a single step;
- Standardized look & feel of the UIs: the UIs have been aligned with the more classical web dynpro look & feel of other SAP applications. They were also modified to make use of the POWL (Personal Object Worklist) Web Dynpro framework, leveraging the use of NW Business Client as an alternative frontend option.
All in all, it continues to be the best of breed solution for SAP ERP customers to issue Brazilian electronic invoices, yet more powerful. It will continue to grow both in terms of robustness and functionalities, along with the Incoming piece.
One important message to all SAP NFE 1.0 customers out there: the SAP NFE 1.0 licenses you’ve acquired are equivalent to the SAP NFE Outbound piece, no matter what the version. So, if you want to leverage the new goodies of the Oubound process that come in 10.0, it doesn’t require any additional licensing. It’s possible to migrate your existing NFE 1.0 installation to NFE 10.0 – Outbound. The Inbound piece, on the other hand, does need to be licensed additionally, since it covers an entirely new business process that was never addressed in 1.0.
Summary
In short, starting from version 10.0, SAP NFE now comprises two business processes (or “modules”) within the same product: the Outbound part, for issuing NF-es, and the Inbound part, for receiving NF-es.
We at SAP are pretty sure that, with these latest additions, SAP has put NFE even more into the leadership position of a governance & compliance solution for NF-e in Brazil, helping our customers to meet the legal requirements and, at the same time, to improve the procure-to-pay business process at these companies.
I am not sure whether this is right place for the question.
We are implementing the NFE 10.0. We have imported the XI content for the NFE 10.0 in ESR.
Now we have to start the PI configuration for the same. I have found four namespaces in the XI content for NFE 10.0:
I found the scenarios that we want to configure in two namespaces and
I am not sure objects from which namespace to be used in PI ID configuration.
Could you please put some light on this?
Regards,
Sami.
it’s ok to talk about it here, no problem.
You need to use configure the integration scenarios in the “” namespace, for the NF-e scenarios.
Also, make sure you have the latest version of the SLL-NFE SAPBO SLL-NFE 10.0 XI Content. In the one in the internal system, I can see the “” namespace as well. But if you’re considering to implement the receiving of CT-es, you’ll probably still use the 103 namespace.
Cheers,
Henrique.
I will check it and will configure the integration accordingly.
Regards,
Sami.
Hi Samiullah,
How you did configuration for namespace “” as it was already existing from NFe 1.0. Was it a fresh one or used the existing? Please explain how you did configuration by differentiating namespace for NFe 10.0 with NFe 1.0, which is same in both the versions.
Thank you,
Farooq
Hi Farooq,
In my opinion you have to do following:
First of all you have to import the BPMs of Newer SWCV(NFe10.0) into your Integration Directory to be used for scenarios with Authorities. If you already have the BPMs for NFe1.0 available in your integration directory, then you can use different names for the BPMs(Integration Scenarios) in integration directory(I have used suffix _NFE10 to the BPMs name).
Then use the Integration scenarios available in NFE 10.0 standard content to generate the configuration objects using the new BPMs.
There will few objects(Receiver Determinations etc.) that will be reused(where GRC system is sender system). For those objects you need to change the SWCV to NFE10.0 as explained by Hanrique.
After above changes it worked for me.
Regards,
Sami.
Hi Henrique,
You say it is OK to ask here? I am at the same juncture where Samiullah was. We have NFE 10.0 just imported in system. My query is about next step, configuration.
Do we really need to configure or the system will automatically take the existing configurations as in NFE 1.0 because namespace is same. If we have to configure, we have the namespace ““, Will the system treat this as that belongs to NFE10.0.
What exactly happening here. Please elaborate on the behavior of namespace in NFe10.0 & NFe 1.0 & configuration for NFe 10.0.
Thank you,
Farooq.
Hi Farooq,
even though the namespace is the same, the SWCV (software component version) is different (from 1.0 to 10.0), and hence you really need to rerun the configuration wizards.
Hello Henrique,
We are in the process of upgrading from SAP NFE 1.0 to SAP NFE 10.0
I have a question regarding SAP NFE 10.0 architecture :
Currently on SAP NFE 1.0 we have deployed the “one instance architecture” ;
All ABAP & Java NFE have been installed on SAP PI 7.0 ( based on Netweaver 7.0), that solution is briefly mentionned in one of your previous blog (even though you mentionned that it was not really recommended)
Now we SAP NFE 10.0 we need to upgrade SAP netweaver AS ABAP 7.02 (PI abap stack) in order to deploy GRC suite 10.0.
That would mean that the PI j2ee stack would also be upgraded to J2ee 7.02.
I have one question ;
Is the “one instance” scenario still supported with NFE 10.0 ?
Thank you very much
Best Regards
Yes, it is still possible to deploy the “all in one” architecture.
Specifically, AS Java is not relevant anymore for NFE since the SLL-NFE-JWS component does not exist anymore – in 10.0, the digital signature is handled by the ABAP layer.
Of course, AS Java will still be an inherent part of PI 702.
The problem of this architecture is that you won’t be able to update your PI to the newer versions (7.3x), being limited to the same codeline as NFE (7.0x, x >= 2).
Hi
Thanks for the blog. I’ll pop in a question as well as I am not sure how the signature service can be utilized. We are currently signing on a different server than PI, but want to move the signature to PI. We don’t have the DigitalSignature service interface on PI at the moment – does this come with one of the SWC?
Do we also have an opportunity to sign the messages in the backend system?
Thank you
regards Ole
Hi Henrique,
We upgraded from 1.0 to 10.0 and I have a question regarding the Digital Certificate;
We created the .pse using sapgenpse and imported it in STRUST. Do we need to send the
..p10 file to the CA/Brazilian Govt to be signed and imported back in to the .pse for it to work? We have an existing cert from them in the .p7b format that was used in the old Java system but are confused if we can use it with the .pse in the abap 10.0 system ?
Best Regards..
If you go for the standard SAP solution, yes, that is the recommended architecture. Just not sure what you mean by “standalone PI” – you could use any existing PI 7+ instances you already have in your landscape.
There is no official sizing recommendations for NFE sizing. Our presales team in SAP Brasil has developed a benchmark locally, so my recommendation would also to get your customer in touch with our local sales reps so that they might request this information.
Hi, any advice where can the NFE Master Guide be found? Using the SDN search engine it didn’t jump out at me.
Many thanks,
Aaron
It’s in SAP Service Marketplace. A quick link is available in.
Greate Blog Henrique!
I´d like to know where could I find documents regarding configuration in SAP ECC e GRC 10.0 to work with GRC 10.0 Inbouud.
Kind Regards,
Luis
Hi Luis
Below the link to NFe10 help
Regards
Eduardo Chagas
Hi Eduardo,
Somehow I was on a different version of the NFe 10 help…search engine maybe…
The process overview looks different. I think your link is better to use…
Yup! The link I’ve posted is the right path.
I saw that you asked about the Master Guide. Do you still need?
Regards
Eduardo Chagas
You can find it in the PAM session…
PAM (Product Availability Matrix)
Regards
Eduardo Chagas
Actually, this is the 1.0 version help (you couldn’t say just by looking at the URL since you can’t have dots in the URL). I do agree the URLs are a bit misleading.
Hi Henrique,
Very nice blog!!
Could you please provide me the link where i can get more specific details on SAP NFE Incoming Automation process?
Thanks,
Prasanthi
Hi Prashanthi Chavala,
Could you please check the link below ?
You can find anything about Inbound/Outbound process for NF-e 10.0
Kind regards,
Viana.
Great blog
But I´m confused about implementation of the new SP 13 into our landscape, specially in put to work the new scenarios like NFe Download and NFe List.
It´s assumed I need some previous knowledge that I don´t have.
There is some training track for this?
I´m a ABAP senior and a PI junior.
Thank you.
Not really, SAP training for NFE has been really lacking, specially regarding newer functionalities (automation, manifesto do destinatario, etc.). I’d say to follow up in the space (with a question). And notice the topics are not really technical but rather functional – it’d be better to have someone with fiscal knowledge.
Hi Henrique,
Your blog on NFE was a real eye opener. 🙂
We have a migration project ( 7.0 to 7.3) and NFE is involved in it.
How do we approach NFE migration?
Should the content be exported/imported or should it to be downloaded fresh for 7.3 system?
What would the impact be?
Note : NFE 10.0 is used
Kindly let me know.
Regards,
Sanjay
Hi Sanjay,
is the SLL-NFE Add-On installed in the same instance as PI?
If so, I’d say this is a proper moment for your company to consider separating them, so that in future PI migrations, you don’t need to consider NFE objects as well.
Nevertheless, be it on a new separate instance or the new PI instance, there is no standard NFE migration tool. You’ll need to figure out which tables to move and move them on your own (basically all tables with the /XNFE/* prefix).
Additionally, is it PI 7.30 or 7.31? If 7.31, I’d consider changing the integration pattern to AEX instead of ccBPMs, since it’s much more performatic. You can find more info in SAP Note 1743455.
Best regards,
Henrique.
Hi Henrique,
First of all thanks for sharing these information.
We are currently working on PI 7.1 Enhancement Pack 1 SPS11 and we have a requirement to add NFE 10.0 in our landscape. But we also have plan to upgrade our PI system to 7.3 EHP 1 Java only. So logically we should use AEX instead of ccBPM.
Do you know whether AEX components mentioned in SAP Note 1743455 will work with PI 7.11 SP 11 or not? I am really confused.
Regards,
Nabendu.
The Adapter Modules that are used by AEX are not present in 7.11.
If you want to set it up now, you’ll need to use the ccBPM based scenarios and change that to AEX once you move to PI 7.31 Java-only.
Thanks a lot Henrique for clarification. Appreciate your quick response. We are going to implement NFE in our landscape with PI 7.11, so pretty much sure that I am coming back with lot of questions in this forum.
Regards,
Nabendu.
Nice Blog!
I have just a question regarding the communication interfaces between NFE<->PI.
I know the connection type is HTTP. But i cannot find the prefixes, which have to defined in SM59 in NFE an PI system.
PI –> NFE: /sap/xi/engine?type=entry???
NFE–>PI: ????
Moreover do i still need to assign IE in SXMB_ADM on NFE Core?
Thanks for you input,
Reagds Christoph
Hi Chris,
sorry for not replying earlier.
The connection details remain more or less the same as in NFE 1.0, so I’d say to refer to that blog (linked in the beginning of this one). But in summary, NFE -> PI you don’t need a channel (since it’s a proxy, it’ll go automatically to the integration engine), and PI -> NFE you use a HTTP destination, referred to in a XI channel.
And yes, you do need to assign a HTTP Destination pointing to IE in SXMB_ADM -> Integration Engine Configuration.
Henrique Pinto precisa voltar a publicar por aqui hein!
Eu publico, mas mais em relacao a HANA que a NFE…
Hi Henrique!
Your blog clarified a lot of things regarding NF-e.Thank you.
We are in process of implementingthe same in our landscape.
Infact I have configured almost all the configuration scenrarion in SAP PI but I have imported the process inetgration scenarios from namespace 5a not 6, so does it make any differenec as I can see that service interfaces are the ones I require.
Secondly, for the B2B scenarios which send the cancelled NF-e to the receiver, a mailpackage inbound interface has been mentioned in one of the refernce guide i received from client.But I can not find any such interface in any of namespaces in ESR.
Could you please suggest!
Thanks!
Indu Khurana.
Hi Indu,
you should use the latest namespace, since it’s the only one being updated with the latest changes. Namespace 005a is deprecated is most likely will lead to communication issues.
The cancellation scenario is deprecated, the cancellation business process is now fulfilled as an event. You then use the event B2B scenarios.
Best,
Henrique.
Hi Henrique
Thanks for replying.
We are trying to implement B2B outbound mail scenario.
The GRC and ECC team is not sure of how to send a attachment with the xml file which is to be sent to recepient in SEFAZ.
Can you please help us with what approach should we folllow?
We are trying to follow this appraoch:
Please suggest!
Thanks,
Indu Khurana.
I’d say to ask a question in the NFE space: SPED & NF-e. | https://blogs.sap.com/2011/09/02/sap-nfe-100-whats-it-all-about/ | CC-MAIN-2017-47 | en | refinedweb |
Build Portable Mp3 Player 117
Greenpiece/Toasty writes: "Build your own portable MP3 player around 8000-9000 Yen. Uses 32 Megabyte flash media cards its the ultimate in geek. The link can be found here with circuit diagrams and pictures of the finished product. The kit can also be bought, but not from that page; another company is manufacturing it in Japan. The board seems quite easy to manufacture. "
Re:Another site (Score:1)
Then there's the cost of a PROM blower to write data to the GAL (Generic Array Logic - it's an EPROM type of technology, similar to the way data is stored by your PC's BIOS etc). PROM blowers are seriously expensive (several hundreds of dollars last time I looked).
So while the Soundbastard looks like a really professional attempt and I applaud their effort and skill, I think that this is not a project for everyone. However, if you can get hold of equipment from your friendly local electronics geek or your high school/university then this may be a really interesting project to make.
Re:And the cheapest car mp3 player is... (Score:1)
What about those low power FM transmitters? The kind you get with those CD changers that you don't have to wire up to your stereo, but tune to a certain frequency and listen to the CD. Does anyone know where to get those/how much they cost?
Re:Mp3 player (Score:1)
Re:Noisy circuitry! (Score:1)
Patching "some shit" into the circuit will do nothing to reduce the noise, but may make your circuit smell bad
:-)
But seriously, this is a different sort of noise to normal audio noise. The noise exists intrinsically in the Veroboard circuit (due to uneccessarily long copper tracks picking up radio frequency signals, noise on the power and ground lines, parasitic capacticance etc). Your Dolby chip would be subjected to the same noise as the rest of the circuit and so wouldn't help much, and may even make things worse. (Dolby is horrible anyway, it really spoils the vibrancy of a recording IMHO.) Good circuit design and construction is your only hope to reduce noise.
Why use memory chips? Why not an HD? (Score:1)
Sure, it'd be more expensive that 32 mb ram, and it'd be more fragile - but I'm not trying to make a ToughPlayer(tm).
Radio Shack and doing it yourself. (Score:1)
But does anyone know of a site that just lists the parts I could buy at say Radio Shack or somewhere similiar and also has the schemata/instructions for it? (My soldering/machine assembly skills have declined since the last time I had to pull out the old Odyssey II and repair it for use).
Re:HI SLASH DOT IS INSECURE (Score:1)
Re:HI SLASH DOT IS INSECURE (Score:1)
Altoids (Score:1)
"What are the three words guaranteed to humiliate men everywhere?
Re:Noisy circuitry! (Score:1)
Ummm... Isn't dolby a noise reduction method which requires a specific Dolby encoding method? It isn't just a filter, IIRC -- it is some fancy-schmancy encoding scheme that allows encoded music to be played without the decode, but allows playback through Dolby-licensed processing to be better.
Re:Interesting But... (Score:1)
1.) it's less sensitive to shock
2.) (related to 1) less likely to crap out randomly (I've heard from others w/this problem)
3.) if there's any alternative to using SmartMedia cards
Re:Exchange rate (Score:1)
Re:Looks pretty easy. (Score:1)
-russ
Re:Interesting But... (Score:1)
-russ
SMD's on perfboard?!? (Score:1)
How about a USB? (Score:1)
There's an IDE to USB [allusb.com] adapter that would let you use your existing CDROM.
Re:Where's the link to the KIT?! (Score:1)
--
BeDevId 15453 - Download BeOS R5 Lite [be.com] free!
Re:Some Assembly Required (Score:1)
Well, this is the first design I have seen and I like it for one reason. It gives me a starting point.
In my work, especially my hobby stuff, I feel like people will accuse me of working for a Japanese conglomerate in that I am really go at taking someone else's stuff and getting what I need, but not inventing it on my own.
So, a design like this gives me a start on my own portable that meets my requirements. That is worthwhile. While it may be nothing more than a Rio clone, I do not have Rio schematics. I have schematics for this and the basic problem solved.
Plus, building it might be just plain fun.
Herb
Re:I'm Off to RadioShack! (Score:1)
While reading the article, I was imagining using an empty Penguin Mints tin for the case. I've got a ton of 'em lying around.
Re:Where's the link to the KIT?! (Score:1)
Hah! I knew about this thing over a week ago.
The shop does have a URL, and I've put it in at the bottom of this post, but it ain't gonna do you any good for two reasons:
1) You almost have to be able to order in Japanese (AFAIK, Wakamatsu doesn't take orders in English).
2) These kits are back-ordered for more than a month in advance; good luck at actually getting your hands on one.
I'll probably drop by the store (it's in the Akihabara district of Tokyo) either tomorrow or the day after, but it looks pretty hopeless.
OK, here's the URL: CL ICK HERE [wakamatsu-net.com]
(Be warned, the above link is in Japanese.)
Re:Where's the link to the KIT?! (Score:1)
Since some people have been asking for a translation of the page, here you go:
Construction kits
Product number: 9301001000239
Maker: Wakamatsu Tsusho
Product name/model: WAKA-MP3 Ver1.1
Notes: A kit to build your own pocket-size MP3 player. We are very sorry, but any orders made at the present time cannot be filled for more than one month.
Price: 9800 yen
A kit to build an MP3 player for leading-edge digital audio.
We're proud of the fact that it sounds better than prebuilt products from corporate manufacturers.
By using a new deice, it can be run on one AA (AAA) battery.
MP3 decoder (MAS3507D)
D/A converter (DAC3550A)
MCU (AT90S8515)
Printed circuit board (84x64mm)
Smartmedia socket
A complete kit that includes MCU firmware and programming tools for PCs.
To build this kit, you will need an AT-compatible PC, a tester, a narrow-pointed soldering iron, tweezers and solderwick. It is aimed at people who have experience with embedded MPU (PICs, etc.) devices, and who can handle soldering (technical term for parts that are soldered directly to the circuit board - can't remember the English term, dammit) parts.
PLEASE NOTE: The photograph is only an assembled example. ( larger photo is available here [wakamatsu-net.com].
The Smartmedia, battery, headphones and case are not included.
[I've editied out some pointless stuff here]
Because of demand, shipping is behind schedule.
Any orders received at the present time will take more than a month to ship.
We apologize for the inconvenience.
We will provide estimates for completed kits, modules, etc., depending on the amount required.
[That's about it. The text box at the bottom is for the number of kits you want to order.]
Re: Noisy circuitry! (Score:1)
Re:Yippeee.....love the cover design (Score:1)
Hack value (Score:1)
I was dismayed when I saw the 'why bother' responses
to what is IMHO one of the coolest projects I've seen lately.
Seems like lately money and market share, rather
than hacking, has ruled
--Kevin
=-=-=
Sakura-chan! (Score:1)
I want that one just because it has Sakura Kinomoto from Card Captor Sakura on it!
"Hoee!"
(I'm a sucker for cute...)
Also noticed that they had some Megumi Hayashibara and Kikuko Inoue songs on their playlist (the monitor picture). They not only made an MP3 player, they actually use it for good music! ^_^)
--
32 MB ? Forget it (Score:1)
What I do want is a portable player that can play mp3s from a cd - or from some other cheap and spacious media.
If I wanted a device that uses a media that can only hold 10 songs, I'd buy a regular CD player.
I don't know how easy it will be to get the kit... (Score:1)
Good news - they take Visa and Mastercard, bad news - there is no mention about shipping outside of Japan anywhere on their site. They list shipping costs for delivery inside Japan, but unless you speak Japanese and ask REALLY nicely (not to mention pay through the nose for shipping) you aren't going to be able to order the kit any time soon.
You mean something like this ?? (Score:1)
"As seen on Slashdot"
Does this thing actually work? (Score:1)
Rio (Score:1)
MP3 Players: wave of future? (Score:1)
There are several on the market currently but I have yet to find one that would make me want to go out and buy it.
The only thing keeping me from going out and building my own is my meager electronics skills and the fact that I wouldn't have a warranty. I wonder if this creation will cause a 'cottage industry' utilizing the 'net for distribution.
I think I just came up with my first million dollar idea.
Engrish (Score:1)
Re:HI SLASH DOT IS INSECURE (Score:1)
- store the login into the cookie string
- append the hashed password to the string
- append the client IP to the string
- append an expiration time (makes user session mortal)
- append a hash result of the previous string with a secret key (known only from Slashdot). This "sign" the cookie and ensure Slashdot is the original author of it
- crypt the whole string (just to be 100% safe)
Unfortunately there are few site with such high level of security...
Re:Noisy circuitry! (Score:1)
The only analog part of the entire design is chip outs 5 and 7 from the DAC and those go directly (well, via some resistors, but still fairly directly) to the headphones. (NB: volume is apparently performed on the digital data stream, and not as a post processing step by the DAC -- although I could be misinterpreting the pins on the dac)
Mind you, he did mention that he had high THD, which might be due using the veroboard.
Or have I misunderstood everything (IANAEE (electrical engineer)).
Re:Smartmedia vs. Compactflash (Score:1)
they were nice, but never made it to the public.
thinner (height), but wider than a standard 2.5" floppy. Too bad i got pissed and decided to put the one in a tree, and accidently pull the other one out during a write operation....
Re:HI SLASH DOT IS INSECURE (Score:1)
Re:HI SLASH DOT IS INSECURE (Score:1)
Re:Trademark namespace collision (Score:1)
My old school logo looked pretty similar to the Timberland logo, tree and lines under. Timberland sued us and won. So now we have a new logo. How there could possibly be confusion between a school and Timberland is anyone's guess.
This guy is hardware and software hacker (Score:1)
But this guy is serious hardware/software hacker. Look at his site in English.
He has many FSW (Free Softwares). Though, FSW in Japanese tends to mean not-GPL but free-to-use. But most of these FSW auther opens source code and if asked with reasonable tone, they agree to GPL. Japanese FSW sometimes tends to imply non-commercial use only but usually not explicit enough to prevent modified version used for commercial use. Also sometimes restrictive about modification in fear of distribution of virus-contaminated software to disgrage original author.
WAKAMATSU is a small shop in Tokyo-Akihabara where they sell kits. These kits are usually just parts with generic PC board (Sometimes no specific pattern and you are required to connect parts running thin wires in between).
Complete circuit is posed.
If you are in US, you will have easier time to get parts by your self and those IC's from device manufactures as sample for free or minimul cost. Tough part is getting small quantities os special socket etc.
Nice thing about Wakamatsu is they have detailed circuits and all small parts and socket in one package. But not much more. Some of the kit I did required QFP 1.0 mm pitch hand soldering to PC board. Not for faint hearted.
Anyway, his bootloader seems to have nice Japanese-Anime chick photo. Maybe not politically correct in US but typically geeky japanese stuff. Check it out.
Re:And the cheapest car mp3 player is... (Score:1)
I got mine (not good quality, but audio wise my cds still sound better than fm radio) for about US$15 at Fry's (an electronics supermarket in California, mostly).
Runs on 2 AA batteries.
Re:Engrish (Score:1)
Re:And the cheapest car mp3 player is... (Score:1)
The other kind is the versatile but cheap kind. These either run on AAs or plug into a cig lighter. PLug it into the headphone jack on your discman, and viola! A low power FM transmitter. Just tune your radio to the right frequency, and then fine tune the transmitter. These are very narrow band, so changes in air density will affect your transmission enough for you to need to retune every time you use it.
The Good kind is $50 from PartsExpress (don't remember the exact url for the item). The other kind is about $15 from lots of places (Fry's Radioshack, Best Buy, etc).
--Jeff
Re:And the cheapest car mp3 player is... (Score:1)
<signature>
"No food or beverage in computer lab". Hmm. I think they mean to say "Don't Spill".
Re:I don't know how easy it will be to get the kit (Score:1)
8,000-9,000 Yen = $75-$84 US (Score:1)
-----
Digital Camera / MP3 Player combo yet? (Score:1)
Is there a digital camera out yet which can use it's compact flash card for MP3's???
It seems like such an OBVIOUS perfect match.
Re:Need this for my hooptie (Score:1)
Seems like what you're asking for.
Re:Link for kit! (Score:1)
1. WARNING: The picture is an example of the product (Implying that what you actually recieve might be different).
2.Due to the popularity of this kit the date of delivery will be more than a month from when you place your order.
There is also some interesting commentary on the possible uses of the kit and a mention that smart media, batteries, headphones and the case are all not included.
Have a good night. Feel free to email me at spacecow10@hotmail.com if you need J->E translations in the future (especially technical translation).
Re:Looks pretty easy. (Score:1)
(and I have two left paws - I also have two right paws...think about it)
I won't, though. My time is too vaulable a comodity at the moment, it'd cost me less to work an extra hour and then buy a Rio clone off E-Bay.
On the other hand, 20 years ago I'd be headed over to Nutron Electronics with a shopping list. (which is exactly how my time got so valuable in the first place)
Meow!
Re:Car MP3 players... (Score:1)
btw, did you ever make it to the technomart while you were in seoul? 8 floors of electronic gadgets, mmm...they have some mp3 players there too, along with pirate psx cds and the like.
btw, i keep trying to find the electronics discrict but can't find out which subway line leads there. could you give me directions from, say, seoul station or dongdaemun?
Re: Stupid time saving device (Score:1)
Car MP3 players... (Score:1)
In this case, I'm pretty sure you actually put the MP3 player itself right into the cassette player! I think the turning of the rotors in the cassette player triggered the player to start playing. Of course the player also had a headphone jack, and little embedded buttons to use just as a walkman type device.
Awesome stuff.
Re:Car MP3 players... (Score:1)
From the way you're talking, it sounds like you're in Seoul now, so maybe thse photos I took will help you:
Re:Smartmedia vs. Compactflash (Score:1)
I have built my own home and car mp3 players from older computer parts scrapped by the company I work for. I've built them out of a pentium 75 and 120. Got 'em in a metal box about the size of a car amp, and have in fact attached the car mp3 player to the amp in my car. IRman plus a bit of cabling gives me remote control. Now all I have to do is figure out how to get input to my player from an IDE cdrom drive which out of necessity will be about 4.5 feet away. (a Pioneer slot-load cdrom.) For now I just stick an IDE hard drive in the thing.
Anybody got any ideas here?
Re:Smartmedia vs. Compactflash (Score:1)
I want more! (Score:1)
Ahh, having different options is such a great thing!
Exactly one year ago, the German computer magazin c't featured a Do-It-Yourself-MP3-Player, developed by some students from the university of Aachen. More info can be found here [heise.de], but it's in German.
Re:Mp3 player (Score:1)
Re:Some Assembly Required (Score:1)
I wish HeathKit was still around, my Amp from them still works. Oh, well, maybe I could tweak the balance a bit, but it works and was fun to build. -d
Re:Looks pretty easy. (Score:1)
1) buy Radio Shack soldering iron.
2) ZAP chip with cheap
.45 cal tip.
3) realize that soldering irons that are too hot and conduct a charge killed the chip.
4) start over with proper tools. -d
Re: Noisy circuitry! (Score:1)
Peter Allen
Re:Argh... (Score:1)
Ironic that he's converting DC to AC to DC. It'd be more elegant to just condition the DC, but I don't know of any cheaper way to do it than with a cheapie PC power supply. *sigh*
Re:HI SLASH DOT IS INSECURE (Score:1)
If you go here:
It sends you here:.)(.document.location='http:/
Which then shows you your usernum and password
Re:Interesting But... (Score:1)
Depends on your definition of valuable (Score:1)
Trademark namespace collision (Score:1)
Re:Playing MP3 from a CD (Score:1)
contactNOSPAMromanISAIDNOSPAM@hotmail.com
Thanks!
Re:Link for kit! - Poor Translation! (Score:1)
This reads:
ManufacturerMerchandise nameRemarksPriceStockQuanityPurchase
Young Pine Tree TradeWAKA-MP3 Ver1.1Kit to build your own MP3 player.9,800 small amountAmount to buyAdd to cart image?
h ead=1&detail_kit_930100100023939
This reads
Model number, 9301001000239
....too much for my rusty Japanese ...
Manufacture's name, Young Pine Tree Trade
Merchandise name
WAKA-MP3, Ver1.1
Notes:
Cost: 9,800yen
MP3 decoder, MAS3507D
DA conveter, DAC3550A
MCU AT90S8515
Size 84x64mm
Play, Pause, Next, Prev, Stop
MCU can communicate with a PC by...
Memory Kits, 64MB, 32MB, 16MB, 8MB
I hope this helps.
Re:Exchange rate (Score:1)
Informative = +4 Karma
Bad Pun = -2 Karma
Total = +2 Karma...
kwsNI
Re:Micro hard drives, maybe? (Score:1)
I know of two big problems with using hard drives for this kind of application.
(I think that) Hard drives use more power than RAM-esque storage. More power use -> less battery life -> less play time -> less fun.
They're much better than they used to be, but hard drives are still not suited for shaky environments. Think how much shaking the unit would get in your pocket as you walk? Try stairs or jogging.
I think people want their music to be very portable, so it will have to last a long time and take a beating.
---
Dammit, my mom is not a Karma whore!
Re:Sakura-chan! (Score:1)
Re:Mp3 player (Score:1)
Use the preview button, Luke.
Noisy circuitry! (Score:2)
I hope that the MP3 player kit comes complete with a properly fabricated and well designed printed circuit board. Of course, you can always design your own PCB using shareware software (I don't know if there is any GPL'ed stuff to do PCB design), but it takes a fair bit of skill to do a good PCB design and you'll need to know someone who can etch the PCB for you. PCB etching equipment isn't cheap!
But all this talk of MP3 players and electronics is pretty dull. I'm far more excited that Stone Cold Steve Austin is back at the WWF Pay Per View this Sunday. Austin 3:16 is back, I can't wait! Oh, man!
Re:HI SLASH DOT IS INSECURE (Score:2)
something like:
If so, it does say that this method of logging in is *very* insecure.
PaIA should make a kit out of this... (Score:2)
It'd rock.
I'm off to spec out components.
Where's the link to the KIT?! (Score:2)
I want to build this thing, but it'd be nice if the company that's making the kit had a URL...
It sure looks like the overall set of hardware is the same set that the vendors of all the MP3 players are using.
It seems likely to me that this is not merely "similar" hardware, but really is the same hardware. And if that be the case, once the MPAA foists "copy control clients" on the industry, those clients will be happy to update the "firmware" on the MP3 players, whether they're boxed units from Sony or Panasonic, or a custom job that you built yourself.
Not that it'shoul affect my Diamond Rio; I use the open source "SnowBlind Alliance" interface software, and merely upload files.
Re:Looks pretty easy. (Score:2)
Re:Exchange rate (Score:2)
That will get you in the range within a few dollars.
Re:/.ers complaing about a hack. How sad. (Score:2)
Part of "Hack Value" is creating something that either isn't available elsewhere, or being able to put something together for far less than buying it at the shops. It's not about re-inventing the wheel. If you just want to say that you built it with your own hands you're not a hacker, you're an enthusiest. Same thing if you're just putting together something that's a slightly higher quality.
Now, if you take a cheap old Rio and solder in 128MB of RAM that you salvaged from something else cheaply, you're a hacker.
Re:Smartmedia vs. Compactflash (Score:2)
CompactFlash is much easier to handle than SmartMedia... I'm the kind of person that scratches CDs easily, and I'd be scared to have those (relatively) delicate SmartMedia cards. Can anyone here adapt this hack ("hack this hack"?) to be able to use CompactFlash? Plus, there are more applications for CompactFlash (The TRGPro [trgpro.com] for example) that would offset the cost of an IBM MicroDrive.
Could this control a Hard drive as well? It'd be nice to be able to make your own EMPEG [empeg.com] type device.. Throw on your own LCD and one of these monsters [buy.com] and you're set. 75 Gigs of MP3 storage. Is there a better way to do this than with these schematics?
Re:Some Assembly Required (Score:2)
Go buy a rio, with solidly soldered circuit boards hidden away beneath a nice shiny black case; a mass-produced masterpiece designed by some faceless intrepid entity toiling away in a forgotten corporate cubicle. You will gain an excellent warranty, a pretty cardboard box, and a nice pair of cheap earphones in the process.
But if you take as much joy from the melting of metal as you do from the music itself, if you dream of harnassing the secrets of the universe for your own personal pleasure, then this kind of thing is the only option. Nothing my mother can buy at Wal-Mart will be as exciting or as interesting as something I piece together out of scraps of metal, a broken Walkman and a radio shack chip.
I had no interest whatsoever in portable mp3 players until building my own became a possibility. I don't yet have the skill to design one of these myself- but I can solder, and I think I can read a schematic well enough to put this thing together. I can probably even modify this to do other neat things- and in doing that, I will learn a great deal.
Or maybe you already know everything there is to know about electrical engineering. Maybe you can design one of these in your sleep, and that is why this doesn't excite you. If that's the case, do it. And then put up a web page and show me how- cuz I am excited.
Rev Neh
Re:HI SLASH DOT IS INSECURE (Score:2)
Re:Sakura-chan! (Score:2)
Damn straight
Re:Some Assembly Required (Score:2)
IMnsHO, I'll learn more than the ~$50-100 dollars I'll save over buying a ready-made MP3 player and that's more than a fair tradeoff.
I reckon we all should have just used existing computers and operating systems 'cuz they're just another clone that provide the same functionality.
Dave
Smartmedia vs. Compactflash (Score:2)
Why do I care? Well, because, I don't see IBM being able to squeeze their 340 MB Microdrive [ibm.com] into a Smartmedia form factor anytime soon.
Other than that, what a cool project! This is the stuff Slashdot outta have more of!
Where can I get mas3507d? (Score:2)
Argh... (Score:2)
I'm starting out with an M590 motherboard from PCWare. True, PCWare doesn't make the highest quality mobos in the world, but my experience is that if you get one that works, it tends to keep working. This mobo has onboard sound and video (linux supported!), and so will allow me to place it in a relatively flat case (no cards to worry about). My friend has developed a library to interface with 3-line text LCDs, so that they can display menus while selecting and audio meters while playing; it's open source (all of his stuff is), and you can find it at
I was originally thinking of using hard disk storage to avoid swapping media all the time, but since hard disks of sufficient durability are not available at a reasonable size/price ration, I'm going with a CD-ROM for now. One CD-R will hold FAR more than a Rio...I can put Linux on a small solid-state hard disk, and I'm set! For power, an adaptor from car-DC to computer-AC is not terribly expensive.
Playing MP3 from a CD (Score:2)
Looks pretty easy. (Score:2)
I'm Off to RadioShack! (Score:2)
Building this seems like a great project. I am gonna give this one a go!. I am gonna try to tweak the case design though (see if I can't cram it into an empty smoke pack, or can of Spam) That would rock eh? the Spam Brand MP3 Player? Hey, If you can get a Linux Server in a Pizza Box, then why not a Spam MP3 Player!
If anyone knows of any similar projects, I'd sure like to know (click here to mail me) [mailto]
Lotteries are a tax for people who suck at math.
Some Assembly Required (Score:3)
(Is it me, or do others detect a "glut" of designs that are almost identical to the Rio?)
I don't see much point in having to integrate my own "Rio" when it provides no more functionality.
If the design provided some reasonable way of storing 1GB of data, cheaply, that sure would be interesting.
But another "me too!" design integrating together a synthesizer, some simple CPU/DSP, parallel interface, and a 32MB chunk of "flash" memory just does not excite.
Re:Noisy circuitry! (Score:3)
There is absolutely nothing wrong with using veroboard for a prototype, or even for a finished design! Keep your decoupling caps very close (on top or under if you can) and keep the DAC and amp far away from the DSP and processor and you'll be fine.
As someone who's built lots of low-noise (0.1mV sensitivity) and/or medium speed (40MHz) equipment on protoboard first, I know what I'm talking about. You can always throw copper shield up around the sensitive components and keep the power supplies clean with carefully selected bypass caps and even lowish resistances. Or get fancy and use ferrite.
If it is true veroboard (with tracks, as opposed to the board-only stuff I like), you can just rip off the copper you don't need and down goes all your sensitivity issues. True you haven't got a ground plane but if you can keep everything encased in grounded metal sheild you're flying high.
where can i get a cheap laptop? (Score:3)
#----------------------------
$mrp=~s/mrp/elite god/g;
Interesting But... (Score:3)
Picked mine up at an online auction site for about $50...
They always have a buttload of em.
Re:Exchange rate (Score:3)
--
"And is the Tao in the DOS for a personal computer?"
Link for kit! (Score:3)
And the cheapest car mp3 player is... (Score:4)
My car came with a pretty good casette player, so I didn't want to replace it. Instead, I got a casette adapter from Future Shop, that plugs into the line out of any cd player/sound card. I also bought a lighter power adapter for my old Thinkpad laptop.
The total cost for the adapters and laptop was around $600CAD, which is pretty steep. But I get to play mp3s in my car off my mp3 cds, and have a laptop that is usefull for something other than just that.
So IMHO, this is the best solution to having a car mp3 player. Feel free to disagree though...
Exchange rate (Score:5)
Today 108 yen = 1 USD.
So 8000-9000 yen = $74-$83
--
Have Exchange users? Want to run Linux? Can't afford OpenMail?
Another site (Score:5)
Soundbastard [go.to]
/.ers complaing about a hack. How sad. (Score:5)
"Oh, I can get a Rio for the same amount."
Pah. A pox on you and your like. Whatever happened to pure HACK VALUE? Sorry, but building the equivalent of a commercial machine for fun is neat, fun, and educational.
Go buy your little Rio and leave the real hackers be.
Dave | https://slashdot.org/story/00/04/28/0852256/build-portable-mp3-player | CC-MAIN-2017-47 | en | refinedweb |
JUnit Configuration
A test configuration producing JUnit compatible output. This is useful for continuous integration servers (such as Jenkins) that support displaying JUnit test results.
This library is open source, stable and well tested. Development happens on GitHub. Feel free to report issues or create a pull-request there. The most recent stable versions are available through pub.dartlang.org.
Continuous build results are available from Jenkins.
Installation and Use
Add the dependency to your package's pubspec.yaml file:
dependencies: junitconfiguration: ">=1.0.0 <2.0.0"
Then on the command line run:
$ pub get
To import the package into your Dart tests add:
import 'package:junitconfiguration/junitconfiguration.dart';
At the top of your main method, before the actual tests write:
JUnitConfiguration.install();
And this is all that is needed.
Misc
License
The MIT License, see LICENSE. | https://www.dartdocs.org/documentation/junitconfiguration/1.1.0/index.html | CC-MAIN-2017-47 | en | refinedweb |
Spring ApplicationListener
Spring has an implementation of an ApplicationListener that allows Spring projects to fire useful events for those interested (Observer Pattern, see Spring Doc). To gain access to these events all you need is a bean in the Application Context that implements ApplicationListener
The code below will catch any ApplicationEvent fired by Spring. With this style it’s up to us to filter the events and process them accordingly. Based on the implemented Spring packages the events capabilities will vary.
@Component public class MyAppListener implements ApplicationListener<ApplicationEvent> { @Override public void onApplicationEvent(ApplicationEvent applicationEvent) { // process event if(applicationEvent instanceof ContextRefreshedEvent) { // fire ping to statsd server } else if (applicationEvent instanceof ServletRequestHandledEvent) { // fire ping to statsd server } } }
The alternative to the above example is to specifically declare the event we want. Spring 3 recognizes the type in the generic declaration and will only send what we want.
@Component public class MyAppListener implements ApplicationListener<AuthenticationSuccessEvent> { @Override public void onApplicationEvent(AuthenticationSuccessEvent applicationEvent) { // We have a successful login, track the event, clear the failed login count } }
Here are some events you can listen for. The Spring documents have more info on specific events.
- Core events are available (ContextClosedEvent, ContextRefreshedEvent, ContextStartedEvent, ContextStoppedEvent)
- Web events (PortletRequestHandledEvent, ServletRequestHandledEvent)
- Security has a lot of events with every type of failure (AuthenticationFailureBadCredentialsEvent, AuthenticationFailureCredentialsExpiredEvent…) and non failure events (AuthenticationSuccessEvent, AuthenticationSwitchUserEvent, InteractiveAuthenticationSuccessEvent)
The Security events are useful to fire off logging, graphing or reporting pings. It can also be used to track failed login attempts across multiple entry points. The alternative is to add logic into a Success or Failed handler in the security context but this is a nice separation and a central location. Just be sure to document! it can be hard to see what happens in events when following code in the IDE.
By default these events are not asynchronous. | https://www.luckyryan.com/2014/05/28/spring-application-event-listeners/ | CC-MAIN-2017-47 | en | refinedweb |
The safest way to keep a channel under control
and free from unwanted intruders is to mark it as invite-only. Of
course, you need a nice way of letting the regular users
in.
Marking a channel as invite-only means
that you can join that channel only if an operator in that channel
invites you. This is a great way of keeping out unwanted or abusive
visitors. To mark a channel as being invite-only, you must be a
channel operator to apply the mode
+i, for example:
/mode #irchacks +i
If a user now tries to join this channel, she will be told that she
cannot do so because it is invite-only. To send an invitation to
another user, you can use the /invite command, for
example:
/invite Jibbler #irchacks
Marking your channel as secret (+s) can also
help to avoid unwanted attention. This prevents the channel being
listed when users execute the /list command. If
people don't even know your channel exists, they
can't even begin to contemplate causing any trouble
there.
Of course, the whole invite-only solution is not perfect. If you have
just connected to the server, the only way you would be able to join
the channel is if you get invited. To get invited, you would probably
need to send a private message to an operator in the channel and ask
to be invited. The operator then invites you into the channel, and
you are able to join it. But what if the operator
isn't there to invite you? And how can you tell
who's an operator in that channel if you
can't see who's in it?
One obvious solution is to create an IRC
bot that is responsible for handing out invitations. This will be
called InviteBot. The bot will sit in the
channel and accept invitation requests via private message. To ensure
that only valid users are able to use the bot, the invitation request
will actually be a password. You can define what this password is and
then share it with everybody who is allowed to use the channel. If
you send the bot the correct password, it will send you an
invitation.
Remember to make sure the bot is a channel operator, otherwise it
won't be able to send any invitations.
Save the following as InviteBot.java:
import org.jibble.pircbot.*;
import java.util.*;
public class InviteBot extends PircBot {
private String channel;
// The invitation request password.
private String password = "password";
public InviteBot(String name, String channel) {
setName(name);
this.channel = channel;
}
// Return the channel that this bot lives in.
public String getChannel( ) {
return channel;
}
// Accept private messages.
public void onPrivateMessage(String sender, String login,
String hostname, String message) {
// Send an invitation if the password was correct.
if (message.trim( ).equals(password)) {
sendInvite(sender, getChannel( ));
}
}
}
This is quite a simple bot. You will notice that the password is set
and stored in the password field. Feel free to
change this to whatever you want, but remember not to use a sensitive
password, as you will be sharing this with other users!
When the bot receives a private message, the
onPrivateMessage method gets called. The bot then
checks to see if the message matches the password. If it does, the
sender of the message is sent an invitation to join the channel. The
sender can then join the channel.
Now you need a main method to launch InviteBot. When you construct
InviteBot, you must tell it which channel it is going to live in
(#irchacks in this case). Save the following as
InviteBotMain.java:
public class InviteBotMain {
public static void main(String[] args) throws Exception {
InviteBot bot = new InviteBot("InviteBot", "#irchacks");
bot.setVerbose(true);
bot.connect("irc.freenode.net");
bot.joinChannel(bot.getChannel( ));
}
}
Compile the bot like this:
C:\java\InviteBot> javac -classpath .;pircbot.jar *.java
You can then run the bot like this:
C:\java\InviteBot> java -classpath .;pircbot.jar InviteBotMain
In the channel, Paul sets the bot up with operator privileges and
sets the channel as invite-only:
* Paul sets mode: +o InviteBot
* Paul sets mode: +i
Jibbler comes along and tries to join the channel, but is told he
can't:
/join #irchacks
#irchacks unable to join channel (invite only)
Now, rather than pestering a channel operator and asking her to
invite him in, Jibbler can now just send the password to InviteBot:
/msg InviteBot password
If Jibbler got the right password, he will receive a message similar
to this:
* InviteBot ([email protected]) invites you to join #irchacks
Jibbler is now able to join the channel. If
Jibbler's IRC client was set up to automatically
join a channel when invited, he won't even need to
type
/join
#irch | http://codeidol.com/community/internet/invite-users-into-channels/15891/ | CC-MAIN-2017-47 | en | refinedweb |
Calculating Task Periodic Rate
I'm having a complete brain seizure here with what should be a simple math problem. I need to set the periodic rate of an embedded task so that it performs certain things at a particular rate.
E.g if the task has to perform 4 things at, say, 6Hz, 3Hz, 2Hz and 4Hz how do I calculate taht the task must run at 12Hz?? (Accuracy is not super important here)
-- No, this is not homework. A student could probably figure this out in less time than it has taken me to type the question in...
Brain dead today
Friday, June 18, 2004
You have to find the lowest common multiple. Take multiples of each number until you find one that all share.
DJ
Friday, June 18, 2004
6Hz, 3Hz, 2Hz and 4Hz
Step 1
6 = 2x3
3 = 3
2 = 2
4 = 2^2
Step 2
n = 2^2 x 3
And a note: if a process has a highest frequency F in its spectrum, you need to sample it at 2xF in order to be able to capture all events and reconstruct the original signal (Nyquist criteria/Shannon sampling theorem)
Dino
Friday, June 18, 2004
Take multiples (starting with 1) of the highest resolution number (6):
6x1%4 = 4, so 6x1 doesn't work.
6x2%4 = 0, so 6x2 (=12) works.
Derek
Friday, June 18, 2004
Example:
24 and 90
24 = 2^3 x 3
90 = 2 x 3^2 x 5
N = 2^3 x 3^2 x 5 = 360
Oops, I forgot to try all the other numbers from the problem (4, 3, 2). Continuing:
6x1%4 = 4, so no need to keep trying 6x1
6x2%4 = 0
6x2%3 = 0
6x2%2 = 0
...so 6x2 works.
dorks.
muppet is now from madebymonkeys.net
Friday, June 18, 2004
The direct method for finding lowest common multiple is to use the prime factorization, i.e,
2, 3, 4, 6, have prime factorizations of:
2^1, 3^1, 2^2, 2^1*3^1.
Take the primes:
2 and 3
Take the highest exponents present:
2^2, 3^1
multiply = 12.
Ryan Anderson
Friday, June 18, 2004
But then you have the prime factorization first.
I think the best way is via the greatest common denominator, and the use the equation gcd(a, b) * lcm(a, b) = a * b to calculate the lcm.
I'd use Euler's method to calculate the gcd. In Python, but close enough to pseudocode for everyone to understand:
def gcd(a, b):
if b == 0:
return a
else:
return gcd(b, a % b)
def lcm(a, b):
return a*b/gcd(a, b)
For more than two numbers, note that
lcm(a, b, c) = lcm(lcm(a, b), c) (and the same for more numbers)
For four numbers, you could use:
lcm(a, b, c, d) = lcm(lcm(a, b), lcm(c, d))
In your example with the functions defined above, Python gives
lcm(lcm(6, 3), lcm(2, 4)) = 12
vrt3
Saturday, June 19, 2004
(errata: That's Euclidean algorithm, not Euler's method)
Thanks guys. vrt3 that's pretty much how I'm going to do it since GCD is simple to implement (I have to use c++) and I need to be able to change the number of rates and the rate values on the fly.
Brain dead today
Saturday, June 19, 2004
Recent Topics
Fog Creek Home | http://discuss.fogcreek.com/joelonsoftware5/default.asp?cmd=show&ixPost=153814&ixReplies=10 | CC-MAIN-2017-47 | en | refinedweb |
ISO/IEC JTC1 SC22 WG21
Document Number: P0692R0
Audience: Evolution Working Group
Matt Calabrese (metaprogrammingtheworld@gmail.com)
2017-06-17
This paper attempts to address a long-standing hole in the ability for developers to specialize templates on their
private and
protected nested class-types. It also identifies an implementation divergence between compilers.
To be clear about what is being discussed, the following code is a minimal example:
template<class T> struct trait; class class_ { class impl; }; // Not allowed in standard C++ (impl is private) template<> struct trait<class_::impl>;
It is important to note that even though the above specialization of
trait is not allowed according to the standard, it builds with all compilers that were tested, including various versions of gcc, clang, icc, and msvc. Already, for the sake of standardizing existing practice, one might argue that this should be allowed. Though this specific code seems to have consistent acceptance between implementations, similar code that deals with partial (as opposed to explicit) specializations is permitted in some compilers while disallowed in others. This paper presents rationale for why code such as this is desirable along with options for how to adjust the language pending the committee's agreement that specializations such as this should be sanctioned.
From the minimal code above, it may not be clear why anyone may desire this functionality, but it does come up in practice and is how the differences in compiler behavior were discovered. For example, in development of a standard library proposal[1][2], I frequently encountered the desire to specialize namespace-scope traits on a
private nested type of a
struct. This is due to function objects of the library frequently returning an instance of an unspecified type that needs to model a specific concept with associated traits. If this sounds rather abstract, consider the more familiar case of a
Range type that may have a
private iterator type. Because this nested type is
private, it is not possible to specialize
iterator_traits on it directly.
While all tested compilers allow the non-standard specialization seen in this paper's abstract, something similar but with a partial specialization uncovers implementation divergence. Consider the following:
template<class T> struct trait; class class_ { template<class U> struct impl; }; // Not allowed in standard C++ (impl is private) // Not allowed in clang, icc // Allowed in gcc, msvc template<class U> struct trait<class_::impl<U>>;
A developer who is not a language lawyer may incorrectly believe that there is an obvious, standard solution to this problem: declare the trait to be a
friend of
class_. Of course, this will not actually work because declaring a specialization of a trait to be a
friend does not mean that the declarator's template argument list can refer to
private members of the class. All that such a
friend declaration means is that the definition of the trait would have access. In existing C++, if the trait, itself, were nested in a hypothetical type
foo, then
foo could be befriended and we'd be able to directly specialize the trait. Because the trait is at namespace scope, this is not an option.
An actual workaround is to either make
impl
public, or make an alias of it
public, or put an alias or
impl itself in a hidden details namespace. All of these options may be considered suboptimal by some developers as they require directly exposing a type that is [arguably] most-naturally a
private nested type.
James Dennett discovered a related issue[3] submitted by John Spicer in 1999 that attempts to address part of the problem (though only for explicit specializations). Resolution of this issue in 2002 was to consider this lack of ability to declare a specialization to be NAD, however there is still no direct way to accomplish what is desired. As well, as was described earlier, the resolution from the issue was never actually implemented in gcc, clang, icc, or msvc, so there may be reason to reconsider that resolution.
Assuming that the committee feels this is a problem worth solving, this paper presents four possible solutions to consider, with a preference for one of the alternatives. A table comparing user code corresponding to each option is presented after their enumeration.
The first option is to standardize a generalization of existing practice and simply allow an explicit or partial specialization to ignore access when the
template being refered to is not dependent on a
template parameter. Very important to note is that this does not grant the definition of the template any privileged access that it wouldn't already have had. Because it would not affect access when the
template itself is dependent, it would not change the behavior of existing, well-formed code (specifically, it would not break specializations that rely on access playing a role in SFINAE in a partial specialization, which is a common pattern when creating traits).
"Option A" is somewhat consistent with existing explicit instantiation rules and also with the behavior of explicit specializations as it exists in all of the tested compilers, however this does contradict the resolution of issue 182[3].
The second option is to make it such that if a
class
class_ with a
private nested type
impl declares a specialization
trait<class_::impl> to be a
friend, then that specialization can be declared and defined outside of the
class.
One drawback of "Option B" is that it gives the definition of the trait privileged access to
class_ when it otherwise may not be necessary, though subjectively, this is a minor drawback.
The third option is very similar to the second, except the
friend declaration appears in the nested class itself as opposed to the type(s) that it is nested in. This has some similar drawbacks and is also strange in that the specialization does not need privileged access of the nested
class for the declaration to be valid, but rather, it needs privileged access to the type that contains it. This option is included because an informal poll of a very small set of C++ programmers did favor it.
The final option is to allow a
template (as opposed to a specialization of a
template) to be specified as a
friend. This would be a new kind of
friend declaration and would be provided by specifying only the
template name in the
friend declaration rather than a specialization of that
template. This would not give the definition of the trait any privileged access at all, which may be considered a positive aspect of this choice. Of course, privileged access can still be granted via a normal
friend declaration if it is actually desired.
Below is a side-by-side comparison of all of the options described above.
The author of this paper is most in favor of "Option A". Behavior like this has existed in commonly-used compilers without problem for at least 15 years and it doesn't require adding a new kind of feature to the language that only experts would know about. It is also somewhat of a safer choice compared to the alternatives as it does not require altering specialization behavior or access in novel ways that have yet to be explored.
Wording will be drafted once there is consensus on a general direction, or if detailed clarification is needed before direction can be determined. If anyone interested in the topic requires wording before this paper is presented, please email the author or post to the reflectors.
Thanks to James Dennett and Richard Smith for their interest and research.
[1] Matt Calabrese: "A Single Generalization of std::invoke, std::apply, and std::visit" P0376R0
[2] Matt Calabrese: "Call: A Library that Will Change the Way You Think about Function Invocations"
[3] C++ Standard Core Language Issue 182. "Access checking on explicit specializations" | http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2017/p0692r0.html | CC-MAIN-2017-47 | en | refinedweb |
Restrict.
1. After Successfully installation: You can find 'Zipcode Serviceability Check' in app section.
2. Settings: You can Enable/Disable Pin code Check On Product Page
3. Import Zipcodes:
4. Export Zipcodes:
5. You need to place widget code in your product.liquid file.
<!-- onj pincode serviceability code -->
{% if shop.metafields.onj_pincode.status == 1 %}
{{ shop.metafields.onj_pincode.front_code }}
{% endif %}
<!-- ends -->
You should perform the following:
6. After all above steps, 'Zipcode Serviceability Check' widget will appear on product page where you have placed widget code.
Shipping availability for shopify
These are just some of the stores that are using "Pincode/Zipcode Serviceability Check".
Our Shopify Extensions
SHIPWAY.IN: It is for multiple courier Shipment Tracking Solution.
Customer can check delivery status of his order.
Track airwaybill number from multiple courier companies at one place.
Track complete shipping status from confirm order to order delivery Shows shipment realtime status details
List of pre-defined Shipping Carrier
In case of any query or support feel free to contact us at contact@onjection.com
How about the checkoutpage it disables the COD option if the particular zipcode is not on the list ?
working perfectly.....
wonderful app!
very easy to integrate and make it work, good support too, as experienced in somewhat complicated implementation in my case.
keep up the good work team! | https://apps.shopify.com/pincode-zipcode-serviceability-check | CC-MAIN-2017-47 | en | refinedweb |
There are good number of articles that explains the different caching option Microsoft Office SharePoint Server 2007 provides and ways to leverage them to achieve better site performance. However, there are scenarios where you might want to implement output caching on your site/page, but have some controls (Web Parts, User Control) excluded from caching so that they dynamically have their content updated on every request.
These are the kind of scenarios where you'd be using Post Caching Substitution. I first setup, output caching on a SharePoint 2007 site that was built using collaboration site definition. I created my own caching profile and below are the settings I chose when I created it:
I then enabled output cache and set this up for both anonymous and authenticated cache profiles as shown below:
To verify if my caching works, I wrote a small web part that shows the current date/time whenever it runs. Code below:
1: using System;
2: using System.Runtime.InteropServices;
3: using System.Web.UI;
4: using System.Web.UI.WebControls;
5: using System.Web.UI.WebControls.WebParts;
6: using System.Xml.Serialization;
7:
8: using Microsoft.SharePoint;
9: using Microsoft.SharePoint.WebControls;
10: using Microsoft.SharePoint.WebPartPages;
11:
12: namespace HellWorld
13: {
14: [Guid("<random guid>")]
15: public class HellWorld : System.Web.UI.WebControls.WebParts.WebPart
16: {
17: public HellWorld()
18: {
19: }
20:
21: protected override void Render(HtmlTextWriter writer)
22: {
23: writer.WriteLine(DateTime.Now.ToString());
24: }
25:
26: protected override void CreateChildControls()
27: {
28: base.CreateChildControls();
29: }
30: }
31: }
After deploying this web part to my site collection, if I refresh the page or open the site using a new browser session, I'll get to see the date/time that I saw the first time I visited the page. This happens till the cache expiration time expires (in my case it was 3600 seconds). Now, that caching happens, I had to implement post cache substitution to not cache another web part that also returns the current date/time.
The actual code the renders the current date/time is the same, but it is rendered in a different mechanism. First, I added another class file to my current web part project and implemented the response substitution call back method. Sample code below:
1: using System;
2: using System.Collections.Generic;
3: using System.Text;
4: using System.Globalization;
5: using System.IO;
6: using System.Reflection;
7: using System.Web;
8: using System.Web.UI;
9: using System.Web.UI.WebControls;
10:
11: namespace PCSWebPart
12: {
13: public abstract class DontCachePlease
14: {
15: private ConstructorInfo _writerConstructor;
16: private HttpContext _context;
17:
18: protected DontCachePlease()
19: {}
20:
21: protected HttpContext Context
22: {get { return _context; }}
23:
24: public void Render(HttpContext context, HtmlTextWriter writer)
25: {
26: if (context == null)
27: throw new ArgumentNullException("context");
28: if (writer == null)
29: throw new ArgumentNullException("writer");
30: Type writerType = writer.GetType();
31: Type[] constructorArgs = new Type[] { typeof(TextWriter) };
32: _writerConstructor = writer.GetType().GetConstructor(constructorArgs);
33: if (_writerConstructor == null)
34: throw new InvalidOperationException("The HtmlTextWriter does not have a public constructor taking in a TextWriter");
35: HttpResponseSubstitutionCallback subCallback = new HttpResponseSubstitutionCallback(this.RenderCallback);
36: context.Response.WriteSubstitution(subCallback);
37: }
38:
39: protected abstract void Render(HtmlTextWriter writer);
40:
41: private string RenderCallback(HttpContext context)
42: {
43: StringWriter baseWriter = new StringWriter(CultureInfo.CurrentCulture);
44: HtmlTextWriter writer = (HtmlTextWriter)_writerConstructor.Invoke(new object[] { baseWriter });
45: try
46: {
47: _context = context;
48: Render(writer);
49: }
50: finally
51: {
52: _context = null;
53: }
54: return baseWriter.ToString();
55: }
56: }
57: }
And then, I made the web part render the date/time using the call back method exposed off of this class. Sample code below: PCSWebPart
12: {
13: [Guid("<random guid>")]
14: public class PCSWebPart : System.Web.UI.WebControls.WebParts.WebPart
15: {
16: public PCSWebPart()
17: {
18: this.ExportMode = WebPartExportMode.All;
19: }
20:
21: protected override void Render(HtmlTextWriter writer)
22: {
23: ShowTime st = new ShowTime();
24: st.Render(Context, writer);
25: }
26: }
27:
28: class ShowTime : DontCachePlease
29: {
30: public ShowTime()
31: {}
32:
33: protected override void Render(HtmlTextWriter writer)
34: { writer.WriteLine(DateTime.Now.ToString()); }
35: }
36: }
And that's it! I was able to see it working in the UI once I deployed this web part to my collaboration portal site. Every time, I request for the page where I have test web parts loaded (1 using post caching substitution and the other without it), I could see the date/time value in the web part that does not use post caching substitution does not change till the cache expiration time I set, whereas the other show different date/time on every page request. A visual representation of one my test instance below:
Hope this tip was helpful!!!
PingBack from
Hi,
Have you tested it as an anonymous user? It does not work. Somehow, SharePoint sends cached content regardless of what substitution you perform. I have tested it with your method as well as by using a substitution control on the master page.
Thanks.
Sandeep.
Its nice and simple post some more extensive caching concepts with similar approach
Thanks
Arun Nehru
I have used Subsitution control on ASP.NET which does same functionality. | https://blogs.msdn.microsoft.com/sridhara/2008/08/12/using-post-caching-substitution-in-sharepoint-2007-web-parts/ | CC-MAIN-2017-47 | en | refinedweb |
Quick links
Comment is non-executable line in source code used to describe a piece of code or program. Comments provides inline documentation of source code and enhances readability of the code. It describes what a piece of code does.
Comments are for readers not for compilers. They make source code more developer friendly.
The compiler has nothing to do with comments, it is non-executable piece of code. Therefore, during the compilation process, pre-processor removes all comments from the source code.
C programming supports two types of commenting style.
- Single line comments
- Multi-line comments
Single line comments
Single line comments begin with
// double forward slash character. Characters following
// is treated as comment and is not executed by the compiler. Single line comments are best suited when you want to add short detail about a complex code that can be described in a line.
Example program to demonstrate single line comments
#include <stdio.h> // Include header file // Starting point of program int main() { // Print hello message printf("Hello, Codeforwin!"); return 0; }
Multi line comments
Single line comment provide support to add short description about the code in one line. However, most often it is required to add long description about working of the code spanning across multiple lines. Definitely in such situation you can add multiple single line comments. But it is recommended to use multi-line comments.
Multi-line comments are used to add a detailed description about the code. It begins with
/* character and ends with
*/. Characters between
/*
*/ are treated as comments.
Multi-line comments can span upto multiple lines. In addition, you can use multi-line comments for both single as well as multiple line commenting.
Example program to demonstrate multi line comments
/** * @author Pankaj Prakash * @description C program to add two numbers. * Program reads two numbers from user and * displays the sum of two numbers. */ #include <stdio.h> int main() { /* Variable declarations */ int num1, num2, sum; /* Reads two number from user */ printf("Enter two number: "); scanf("%d%d", &num1, &num2); /* Calculate the sum */ sum = num1 + num2; /* Finally display sum to user */ printf("Sum = %d", sum); return 0; }
Advantages of commenting a program
Every developer must have a habit of commenting source code. It is as important as cleaning your bathroom. I already mentioned a well commented program is easy to read and maintain.
Let us consider the following piece of code.
void strcpy(char *src, char * dest) { while(*(src++) = *(dest++)); }
At this point with little knowledge of programming. Do you have any idea what the above piece of code will do? Now consider the same code again.
/** * Function to copy one string to another. * @src Pointer to source string * @dest Pointer to destination string */ void strcpy(char * src, char * dest) { /* * Copy one by one characters from src to dest string. * The loop terminates after assigning NULL * character to src string. */ while(*(src++) = *(dest++)); }
The above commented program gives you a basic idea about the code. Which is where comments are useful. The second program is more cleaner and readable for developers.
Let me add my story to the picture. In my early days of programming. I was a mad over developing small applications. I used to spend all my time on computers. Then, in my fourth semester of graduation I developed a Remote Administration Tool (in C#). Which was pretty cool tool, but lacks comments. After one year when I looked at the code, I got puzzled where to start and where to end due insufficiency of comments. For a moment I thought, did I developed this?
Being a programmer develop a good habit to comment your code. So that not only others but you can also understand your code after ages.
<pre><code> ----Your Source Code---- </code></pre> | http://codeforwin.org/2017/08/comments-in-c-programming.html | CC-MAIN-2017-47 | en | refinedweb |
I've used PHP's array_filter() in the past to filter elements from an array, but occasionally I find that I need to pass an additional value to the function in order to determine whether to keep the element. This just came up in a case where I was using GET variables to filter search results on a large array.
I have a large 2D array of books where each record has some information such as title, author, publisher, ISBN, etc. I include search boxes for each column when I draw the table to the screen, so the user can search a specific column for some text.
<?php $data = array( 0 => array('Title' => "Career and Life Planning", 'Author' => "Michel Ozzi", 'ISBN' => "978-0-07-284262-3"), 1 => array('Title' => "Beginning Algebra", 'Author' => "Blitzer", 'ISBN' => "978-0-555-03971-7"), 2 => array('Title' => "Primary Flight Briefing", 'Author' => "Federal Aviation", 'ISBN' => "978-1-5602755-7-2") ); foreach( $_GET as $k=>$v ) { $data = array_filter($data, create_function('$data', 'return ( stripos($data["'.$k.'"], "'.$v.'") !== false );')); } ?>
Notice the values of
$k and
$v are actually written into the PHP code, whereas
$data is written to the function as a variable name. If you were to echo the body of the function, it would actually look something like this:
return ( stripos($data["ISBN"], "555") !== false );
In each iteration of the loop the function is re-created, and the key/value pair are essentially hard-coded into the function instead of being variables.
So there you go! A real-live use for anonymous functions in PHP! | https://aaronparecki.com/tag/search | CC-MAIN-2017-47 | en | refinedweb |
Groovy adds several methods to the
List interface. In this post we look at the
head() and
tail() methods. The
head() method returns the first element of the list, and the
tail() returns the rest of the elements in the list after the first element. The following code snippet shows a simple recursive method to reverse a list.
def list = [1, 2, 3, 4] def reverse(l) { if (l.size() == 0) { [] } else { reverse(l.tail()) + l.head() } } assert [4, 3, 2, 1] == reverse(list) // For the same result we can of course use the List.reverse() method, // but then we didn't learn about tail() and head() ;-) assert [4, 3, 2, 1] == list.reverse() | http://mrhaki.blogspot.com/2009/10/groovy-goodness-getting-tail-of-list.html | CC-MAIN-2017-47 | en | refinedweb |
How I would dynamically create a few form fields with different questions, but the same answers?
from wtforms import Form, RadioField
from wtforms.validators import Required
class VariableForm(Form):
def __init__(formdata=None, obj=None, prefix='', **kwargs):
super(VariableForm, self).__init__(formdata, obj, prefix, **kwargs)
questions = kwargs['questions']
// How to to dynamically create three questions formatted as below?
question = RadioField(
# question ?,
[Required()],
choices = [('yes', 'Yes'), ('no', 'No')],
)
questions = ("Do you like peas?", "Do you like tea?", "Are you nice?")
form = VariableForm(questions = questions)
It was in the docs all along.
def my_view(): class F(MyBaseForm): pass F.username = TextField('username') for name in iterate_some_model_dynamically(): setattr(F, name, TextField(name.title())) form = F(request.POST, ...) # do view stuff
What I didn't realize is that the class attributes must be set before any instantiation occurs. The clarity comes from this bitbucket comment:
This is not a bug, it is by design. There are a lot of problems with adding fields to instantiated forms - For example, data comes in through the Form constructor.
If you reread the thread you link, you'll notice you need to derive the class, add fields to that, and then instantiate the new class. Typically you'll do this inside your view handler. | https://codedump.io/share/j0hE75MOM0wh/1/wtforms-create-variable-number-of-fields | CC-MAIN-2017-47 | en | refinedweb |
Maintaining.
public sealed class HttpSessionState : ICollection,
IEnumerable
{
// properties
public int CodePage {get; set;}
public int Count {get;}
public bool IsCookieless {get;}
public bool IsNewSession {get;}
public bool IsReadOnly {get;}
public KeysCollection Keys {get;}
public int LCID {get; set;}
public SessionStateMode Mode {get;}
public string SessionID {get;}
public HttpStaticObjectsCollection StaticObjects {get;}
public int Timeout {get; set;}
// indexers
public object this[string] {get; set;}
public object this[int] {get; set;}
// methods
public void Abandon();
public void Add(string name, object value);
public void Clear();
public void Remove(string name);
public void RemoveAll();
public void RemoveAt(int index);
//...
}
public class Page : TemplateControl, IHttpHandler
{
public virtual HttpSessionState Session {get;}
//...
}
public sealed class HttpContext : IServiceProvider
{
public HttpSessionState Session {get;}
//...
} string _description;
private int _cost;
public Item(string description, int cost)
{
_description = description;
_cost = cost;
}
public string Description
{
get { return _description; }
set { _description = value; }
}
public int Cost
{
get { return _cost; }
set { _cost = value; }
}
} : System.Web.HttpApplication
{
protected void Session_Start(Object sender, EventArgs e)
{
// Initialize shopping cart
Session["Cart"] = new ArrayList();
}
}="c#" Codebehind="Purchase.aspx.cs".cs
public class PurchasePage : Page
{
private void AddItem(string desc, int cost)
{
ArrayList cart = (ArrayList)Session["Cart"];
cart.Add(new Item(desc, cost));
}
// handler for button to buy a pencil
private void BuyPencil_Click(object sender, EventArgs e)
{
// add pencil ($1) to shopping cart
AddItem("pencil", 1);
}
// handler for button to buy a pen
private void BuyPen_Cick(object sender, EventArgs e)
{
// add pen ($2) to shopping cart
AddItem("pen", 2);
}
}
<!— File: Checkout.aspx —>
<%@ Page language="c#" Codebehind="Checkout.aspx.cs"
Inherits="CheckoutPage" %>
<HTML>
<body>
<form runat="server">
<asp:Button id=Buy
<a href="purchase.aspx">Continue shopping</a>
</form>
</body>
</HTML>
// File: Checkout.aspx.cs
public class CheckOutPage : Page
{
private void Page_Load(object sender, System.EventArgs e)
{
// Print out contents of cart with total cost
// of all items tallied
int totalCost = 0;
ArrayList cart = (ArrayList)Session["Cart"];
foreach (Item item in cart)
{
totalCost += item.Cost;
Response.Output.Write("<p>Item: {0}, Cost: ${1}</p>",
item.Description, item.Cost);
}
Response.Write("<hr/>");
Response.Output.Write("<p>Total cost: ${0}</p>",
totalCost);
}
}
The key features to note about session state are that it keeps state on behalf of a particular client across page boundaries in an application, and that the state is retained in memory on the server in the default session state configuration. (in fact, it is the default technique) and is demonstrated in Figure.. Listing 10-10 shows some pseudocode demonstrating the technique used by ASP.NET for creating session keys.
//.)
, as shown in Figure.
ASP.NET introduces the ability to store session state out of process, without resorting to a custom database implementation. The sessionState element in an ASP.NET application's web.config file controls where session state is stored (see Figure).. Figure shows the ASP.NET State Service in the local machine services viewer.,> shows the various values of EnableSessionState and their effect on your Page-derived | http://codeidol.com/community/dotnet/session-state/16755/ | CC-MAIN-2017-47 | en | refinedweb |
Hi,
I am doing a painting program (KIds Paint - you can find in Android Market) and I have a lot of requests to save the content on disk or to wallpaper. I have been searching around but cannot find solution.
My guess is that I probably wanted to get the bitmap from the canvas, but I can't find ways to get it (why isn't there a getBitmap or capturePicture or some sort?). Then I try to set an empty bitmap into the canvas and draw on the canvas, and save the bitmap... but I got an empty bitmap.
Please help! Thanks. I would like to add the feature to the application.
Here's my codes:
public class KidsPaintView extends View {
Bitmap bitmap = null;
...
protected void onDraw(Canvas canvas) {
if (bitmap == null) {
bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
canvas.setBitmap(bitmap);
}
... // do painting on canvas
}
}
Then in my main code I try to retrieve the bitmap and save it as
wallpaper:
Bitmap bitmap = view.bitmap;
try { setWallpaper(bitmap); }
catch (IOException e) { e.printStackTrace(); }
But all I got is a black wallpaper. What am I doing wrong? Or is there a better way? Thanks! | http://www.anddev.org/other-coding-problems-f5/saving-canvas-to-disk-t8105.html | CC-MAIN-2017-47 | en | refinedweb |
If you've ever diagnosed a bug in a web application, you've undoubtedly experienced annoyance digging through a list of fifteen exception stack traces trying to identify the one you're interested in (if it's even present), or a sinking feeling when you tailed the web server log only to find:
java.lang.NullPointerException
I sure have. The output to the browser client helps even less, typically churning out the de facto, "Cannot display page," message. Avoid these symptoms of exception handling pitfalls and troubleshoot web applications effortlessly with this simple recipe.
AntiPattern: T.M.I. (Too Much Information)
The first exception throwing antipattern occurs when we repeatedly log or wrap and rethrow an exception every time we catch it:
catch (Exception e) { e.printStackTrace(); throw new WrappingException(e); }
The application prints the stack trace and rewraps the exception fifteen times before it finally propagates to the top where we're often subjected to the stack trace for each wrapper exception. Our visual noise filters go into overload as we sift through a proverbial log hay stack in search of the one stack trace we actually care about.
AntiPattern: Lie of Omission
The second antipattern surfaces when we trash the original stack trace:
catch (Exception e) { // print message only. System.err.println(e); throw new WrapperException(e.getMessage()); }
If we look at the web server log, we'll see our
WrapperException instance's stack trace, which will point to where we caught the original exception as opposed to where it was thrown. Where did the original error occur? Your guess is as good as mine.
AntiPattern: Head in the Sand
This brings us to our final, most evil antipattern, ignoring exceptions:
catch (Exception e) { e.printStackTrace(); }
Printing the stack trace and going on about your business is on par with trying to drive a car after the wheel has fallen off. Doing so leaves the system in an unpredictable state, often leading to security holes and code that's insanely difficult to debug. It's the modern day incarnation of a segmentation fault.
The moral of this story: don't be afraid to throw your hands in the air and refuse to go on. People entrusting you with their credit card numbers may not thank you, but they'll hold on to their identities a little longer.
Solution: NestedException
Failing early helps avoid these pitfalls. The original stack trace in addition to other pertinent state information (user IDs, primary keys, method arguments, etc.) is a troubleshooter's best friend. I've found that when faced with a checked exception I can't possibly handle, it's best to wrap the exception in a runtime exception (once) and throw it to the top where it can ultimately be thrown to the web container. The exception propagates to the top sans explicit handling until the web container catches and logs it once and only once.
We can accomplish this with a class called
NestedException which I originally inherited from my friend and mentor Tim Williams and mentioned in my book Bitter EJB.
NestedException wraps exceptions only when necessary (so we don't end up with exceptions nested fifteen deep) and keeps the original stack trace intact. The catch block becomes:
catch (CheckedException e) { throw NestedException.wrap(e); }
NestedException consists of a simple wrapper class and static factory method,
wrap(Throwable).
NestedException overrides all methods to delegate to the wrapped exception (you don't even need to unwrap it to get the message or stack trace you're really interested in):
public class NestedException extends RuntimeException { Throwable throwable; private NestedException(Throwable t) { this.throwable = t; } /** Wraps another exeception in a RuntimeException. */ public static RuntimeException wrap(Throwable t) { if (t instanceof RuntimeException) return (RuntimeException) t; return new NestedException(t); } public Throwable getCause() { return this.throwable; } public void printStackTrace() { this.throwable.printStackTrace(); } ... }
Create a Default Error Page
Now that we've scrubbed our logs, we're left with the matter of output to the web browser. From my experience, only the finest of quality assurance testers take time to look at server logs. I often see bugs filed with the following description:
Did so and so... Browser said "Error 500: Cannot display page."
Not a lot of help. On the other hand, most testers are willing to cut and paste browser output into an issue tracking system. A simple solution is to configure your web application's default error page in the
WEB-INF/web.xml file:
<error-page> <exception-type>java.lang.Exception</exception-type> <location>/error.jsp</location> </error-page>
Now, when we throw an exception to the container, it forwards the request to
/error.jsp providing the exception instance as an implicit variable called "exception" (go figure). The error page is a simple JSP with the
isErrorPage page directive set to "true":
<%@ pageError</h1> <pre> </pre> </body></html>
The resulting page looks something like this:
Errorcom.mycompany.ApplicationException at _TestError._jspService(_TestError.java:65) at com.vendor.SomeClass.service(SomeClass.java:89) at javax.servlet.http.HttpServlet.service(HttpServlet.java:853) ...
Beautify our error page's design a bit, add some logic to show the stack trace to QA users but hide it from customers, and we're ready to go to production! | https://community.oracle.com/blogs/crazybob/2004/02/06/exception-handling-web-applications | CC-MAIN-2017-47 | en | refinedweb |
Class.
Computed Properties and Computed Property Macros
Computed Properties are among the first things developers new to Ember learn. They are a great way of defining dependencies between data points in the application and ensuring the UI stays consistent as these data points change.
Ember comes with a set of macros that implement property logic that most applications need and allow for short and expressive definitions like
isActive: Ember.computed.equal('state', 'isActive')
There are addons that provide even more macros for common use cases like ember-cpm or ember-awesome-macros.
Where Computed Property Macros fall short today
Computed Properties are very similar to template helpers in the way that both are pure functions that can only depend on their inputs. While a template helpers receives its inputs as arguments, Computed Properties define their inputs as dependent keys.
In some cases pure functions are not sufficient though as the computation in the template helper or computed property also depends on global state or the inputs cannot statically be listed in the helper or property definition. This is the case for example for computations on collections when it is unknown upfront on which property of each element in the collection the computation depends, e.g.
filteredUsers: filterByProperty('users' 'filter')
Here what we would like to do is filter the
users array by the value of the
filter property of the context. E.g. when
filter is
'isActive' we’d expect
filteredUsers to contain all active users, when
filter is
'isBlocked' we’d expect it to contain all blocked users and so on.
With template helpers and the ember-composable-helpers addon, we’re be able to write something like this in the template:
{{#each (filter-by filter users) as |user|}} … {{/each}}
and because the
filter-by helper is a Class based helper this actually works and the DOM updates correctly whenever the value of the
filter property or e.g. the
isActive property of any user changes.
With Computed Properties it is not currently possible to implement something like this (at least not as a reusable macro).
Enter Class based Computed Properties
With the Class based Computed Properties that ember-classy-computed introduces it is actually possible now to implement something like the above mentioned
filterByProperty macro. The computed property returned by that macro can now correctly be invalidated when any of the user’s
isActive,
isBlocked etc. properties change although it is not actually possible to know what these properties might be upfront. This allows keeping the filtering logic in JavaScript as opposed to in the template when using a Class based template helper:
import filterByProperty from 'app/computeds/filter-by'; … filteredUsers: filterByProperty('users' 'filter')
{{#each filteredUsers as |user|}} … {{/each}}
The implementation for the Computed Property macro looks like this:
// app/computeds/filter-by.js import Ember from 'ember'; import ClassBasedComputedProperty from 'ember-classy-computed'; const { observer, computed: { filter }, defineProperty } = Ember; const DynamicFilterByComputed = ClassBasedComputedProperty.extend({ contentDidChange: observer('content', function() { // This method is provided by the ClassBasedComputedProperty // base class and invalidates the computed property so that // it will get recomputed on the next access. this.invalidate(); }), filterPropertyDidChange: observer('filterProperty', function() { let filterProperty = this.get('filterProperty'); let property = filter(`[email protected]${filterProperty}`, (item) => item.get(filterProperty)); defineProperty(this, 'content', property); }), // This method is called whenever the computed property on the context object // is recomputed. The same lazy recomputation behavior as for regular computed // properties applies here of course. The method receives the current values // of its dependent properties as its arguments. compute(collection, filterProperty) { this.set('collection', collection); this.set('filterProperty', filterProperty); return this.get('content'); } }); export default ClassBasedComputedProperty.property(DynamicFilterByComputed);
Comparing this code to the implementation of the
filter-by helper mentioned above you will recognize that both are almost identical. This illustrates very well what Class based Computed Properties are: a way to use the same mechanisms that are already established for Class based template helpers for Computed Properties as well.
Notice
ember-classy-computed is currently at a very early stage and we haven’t thoroughly tested the implementation just yet. We have also not done any benchmarking to get a better understanding of what the performance implications are. That is to say, while we encourage everyone to try this out, be aware you’re currently doing so at your own risk as this is most likely not production ready (yet). We have the feeling though that this will be a valuable addition to Computed Properties in the future and can close the gap that currently exists between Computed Properties and template helpers. | https://simplabs.com/blog/2017/02/01/class-based-computed-properties.html | CC-MAIN-2017-47 | en | refinedweb |
HTML and CSS Reference
In-Depth Information
public void setCount(Integer count) {
getStateHelper().put(ATTR_COUNT, count);
}
public Integer getMinWords() {
return (Integer) getStateHelper().eval(ATTR_MIN_WORDS, ATTR_MIN_WORDS_DEFAULT);
}
public void setMinWords(Integer minWords) {
getStateHelper().put(ATTR_MIN_WORDS, minWords);
}
public Integer getMaxWords() {
return (Integer) getStateHelper().eval(ATTR_MAX_WORDS, ATTR_MAX_WORDS_DEFAULT);
}
public void setMaxWords(Integer maxWords) {
getStateHelper().put(ATTR_MAX_WORDS, maxWords);
}
}
You have probably noticed that the getters and setters in Listing 6-3 are not your typical getters and setters
encapsulating a class member. Instead they use the StateHelper exposed on the UIComponent class. If the attributes
simply used class members to store their values, the values would disappear after every request, as they are not persisted
anywhere. All UIComponents implement the PartialStateHolder interface with the intent that each UIComponent must
manage its own state. All standard components implement the PartialStateHolder and use the StateHelper to persist
and retrieve the necessary data. However, if you extend UIComponent rather than a standard component you must
manage the state of the component yourself. Considering that the JSF implementation may store the component state on
either the client or server side (depending on the value of the javax.faces.STATE_SAVING_METHOD context parameter)
it could potentially require a lot of work for a component writer to implement state management. Luckily, the authors
of JSF realized that and provide the StateHelper class to any class that implements UIComponent . The StateHelper
transparently takes care of saving and restoring the state of a component between views. See the StateHolder class
hierarchy and methods in Figure 6-3 . Basically, the StateHelper allows us to put an object into a map with a serializable
name. Later we can fetch (evaluate) the object using the same serializable name. If a requested name is not available
a null object is returned. To avoid checking for null values, the StateHelper has an overloaded eval method where
you specify the name of the object you are looking for and the value that should be returned in case it does not find the
requested object. This is handy for providing default values for attributes.
Search WWH ::
Custom Search | http://what-when-how.com/Tutorial/topic-605lt008/JSF-and-HTML5-156.html | CC-MAIN-2017-47 | en | refinedweb |
Overview
Atlassian Sourcetree is a free Git and Mercurial client for Windows.
Atlassian Sourcetree is a free Git and Mercurial client for Mac.
CMSPlugin Redactor
This is a Django-CMS plugin providing Django-RedactorMedia and thus Django-Redactor to the CMS. Django-Redactor itself is a Django app providing the Redactor Javascript WYSIWYG-editor in Django. Please note: While this software is licensed under MIT the Redactor Javascript editor and Django-Redactor are not. See the Django-Redactor license.
Installation
For now you have to install from the Bitbucket repository. One requirement is Django-Redactor. Install as they suggest. Currently there is no PyPi package for it.
Installing manual from Bitbucket
To install from the source do:
$ hg clone $ cd django-redactormedia
Install the module:
$ python setup.py install
Or:
$ pip install .
Or copy the redactormedia sub dir to your Python path or Django project root dir:
$ cp django-redactormedia/redactormedia $PYTHONPATH
Now, add the redactormedia application to your INSTALLED_APPS setting.
Usage
The redactormedia app provides a Django widget called RedactorWithMediaEditor. It is a drop-in replacement for any TextArea widget. Example usage:
from django import forms from django.db import models from redactormedia.widgets import RedactorWithMediaEditor class MyForm(forms.Form): about_me = forms.TextField(widget=RedactorWithMediaEditor())
You can also customize any of the Redactor editor's settings when instantiating the widget:
class MyForm(forms.Form): about_me = forms.TextField(widget=RedactorWithMediaEditor(redactor_settings={ 'autoformat': True, 'overlay': False, 'imageUpload': reverse('upload_files'), 'imageGetJson': reverse('recent_files'), }))
Django-redactormedia also includes a widget with some some customizations that make it function and look better in the Django admin:
class MyAdmin(admin.ModelAdmin): formfield_overrides = { models.TextField: {'widget': AdminRedactorWithMediaEditor}, }
Finally, you can connect a custom CSS file to the editable area of the editor:
class MyForm(forms.Form): about_me = forms.CharField(widget=RedactorWithMediaEditor( redactor_css="styles/text.css") )
Paths used to specify CSS can be either relative or absolute. If a path starts with '/', 'http://' or 'https://', it will be interpreted as an absolute path, and left as-is. All other paths will be prepended with the value of the STATIC_URL setting (or MEDIA_URL if static is not defined).>
Acknowledgment
This code is heavily based on this Blog article by Patrick Altman and its comments. Thanks guys.
License
Django-Redactormedia is licensed under the MIT License. Please note the dependency Django-Redactor license. | https://bitbucket.org/alien8/djangocms-text-redactorjs | CC-MAIN-2017-47 | en | refinedweb |
I've got a public class, which implements Serializable, that is extended by multiple other classes. Only those subclasses were ever serialized before - never the super class.
The super class had defined a serialVersionUID.
I'm not sure if it matters, but it was not marked private, but rather it just had the default protection - you might say it was package protected
static final long serialVersionUID = -7588980448693010399L;
com.SomeCompany.SomeSubClass; local class incompatible: stream classdesc serialVersionUID = 1597316331807173261, local class serialVersionUID = -3344057582987646196
@DanielChapman gives a good explanation of serialVersionUID, but no solution. the solution is this: run the
serialver program on all your old classes. put these
serialVersionUID values in your current versions of the classes. as long as the current classes are serial compatible with the old versions, you should be fine. (note for future code: you should always have a
serialVersionUID on all
Serializable classes)
if the new versions are not serial compatible, then you need to do some magic with a custom
readObject implementation (you would only need a custom
writeObject if you were trying to write new class data which would be compatible with old code). generally speaking adding or removing class fields does not make a class serial incompatible. changing the type of existing fields usually will.
Of course, even if the new class is serial compatible, you may still want a custom
readObject implementation. you may want this if you want to fill in any new fields which are missing from data saved from old versions of the class (e.g. you have a new List field which you want to initialize to an empty list when loading old class data). | https://codedump.io/share/0zKPZWocVoIe/1/java-serialization---javaioinvalidclassexception-local-class-incompatible | CC-MAIN-2017-47 | en | refinedweb |
I'd like a function in my module to be able to access and change the local namespace of the script that's importing it. This would enable functions like this:
>>> import foo
>>> foo.set_a_to_three()
>>> a
3
>>>
An answer was generously provided by @omz in a Slack team:
import inspect def set_a_to_three(): f = inspect.currentframe().f_back f.f_globals['a'] = 3
This provides the advantage over the
__main__ solution that it works in multiple levels of imports, for example if
a imports
b which imports my module,
foo,
foo can modify
b's globals, not just
a's (I think)
However, if I understand correctly, modifying the local namespace is more complicated. As someone else pointed out:
It breaks when you call that function from within another function. Then the next namespace up the stack is a fake dict of locals, which you cannot write to. Well, you can, but writes are ignored.
If there's a more reliable solution, that'd be greatly appreciated. | https://codedump.io/share/7z8NedOwCTLl/1/modify-namespace-of-importing-script-in-python | CC-MAIN-2017-47 | en | refinedweb |
Andrew: cc list For biochar, add the concepts of internal rate of return (IRR) and recommended analysis time period (30 years? 100 years?) (as a few of the needed extra parameters, for given types of soil, climate, species, etc). First cost is not the right parameter.
Ron > On Feb 4, 2018, at 5:28 AM, Andrew Lockley <andrew.lock...@gmail.com> wrote: > > Hi > > I'm designing a survey on attitudes to CE, and I'm trying to simplify the > costs estimates by making it per person per year. I've put $1 for SRM and > $100 for CDR - but it occurs to me that this is really very complicated. Has > anyone done any proper research on this? I can't find anything... > > Typically, SRM is costed on a program basis (bn/yr globally) but CDR is > costed per tonne (and volumes are highly variable). > > Issues to roll into this calculation are > * population growth > * future emissions (CDR gets a lot more expensive, if you're still emitting) > * whether temperature stabilises or reduces - and how fast > * experience/cost curve for each approach > * how much heterogeneity to expect (it's likely impractical to expect only a > single CDR to do everything) > > There are probably other factors. This strikes me as something that's > sufficiently useful to be worked up into a paper. > > Andrew > > -- >. | https://www.mail-archive.com/geoengineering@googlegroups.com/msg14106.html | CC-MAIN-2019-04 | en | refinedweb |
On 7 June 2016 at 16:03, Raymond Hettinger <raymond.hettinger at gmail.com> wrote: > >>? By the time decorators run, the original execution namespace is no longer available - the contents have been copied into the class dict, which will still be a plain dict (and there's a lot of code that calls PyDict_* APIs on tp_dict, so replacing the latter with a subclass is neither trivial nor particularly safe in the presence of extension modules). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia | https://mail.python.org/pipermail/python-dev/2016-June/144899.html | CC-MAIN-2019-04 | en | refinedweb |
There are many reasons you might find yourself needing to create an image gallery – whether it’s to show off album covers for a music app, to present feature images for articles in a feed, or to showcase your work in a portfolio. To make the right impression though, these apps should allow users to effortlessly swipe through multiple images without slowdown and that’s where things get a little tricky.
This tutorial will show you how to create a seamless gallery filled with nice big images and then adapt that for a number of different applications. Along the way, we’ll see how to use RecyclerViews, adapters and Picasso – so hopefully it will make for a great learning exercise, whatever you end up doing with it! Full code and project included below…
Introducing RecyclerView
To create our Android gallery, we’re going to use something called a RecyclerView. This is a handy view that acts very much like a ListView but with the advantage of allowing us to scroll quickly through large data sets. It does this by only loading the images that are currently in view at any given time. This means we can load more images without the app becoming very slow. There’s a lot more that you can do with this view and it’s used all over Google’s own apps, so check out the full explanation to using RecyclerView to find out more.
The good news is that this is all we really need to create our gallery – a RecyclerView filled with images. The bad news is that the RecyclerView is a little more complicated than most other views. Because of course it is.
RecyclerView is not, for starters, available to drag and drop using the design view. So we’ll just have to add it to the activity_main.xml, like so:
<android.support.v7.widget.RecyclerView android:
Notice that we’re referencing the Android Support Library. This means we also need to modify our build.gradle in order to include the dependency. Just add this line to the app level file:
compile 'com.android.support:recyclerview-v7:24.2.1'
And if that’s not installed, then you’re going to have to open the SDK manager and install it. Fortunately, Android Studio is pretty smart about prompting you to do all this. I just got a new computer, so I can play along with you!
Head back to the XML and it should now be working just fine. Except the list is not populated except with ‘item 1, item 2, item 3’. What we need to do, is load our images into here.
Creating your list of images
As mentioned, populating our recycler view is a little more complicated than using a regular list. By which, I mean it’s way more complicated… but it’s a great chance for us to learn some handy new skills. So there’s that.
For a RecyclerView, we’re also going to need a layout manager and an adapter. This is what’s going to allow us to organize the information in our view and add the images. We’ll start by initializing our views and attaching an adapter in the onCreate of MainActivity.java. This looks like so:
setContentView(R.layout.activity_main); RecyclerView recyclerView = (RecyclerView)findViewById(R.id.imagegallery); recyclerView.setHasFixedSize(true); RecyclerView.LayoutManager layoutManager = new GridLayoutManager(getApplicationContext(),2); recyclerView.setLayoutManager(layoutManager); ArrayList<CreateList> createLists = prepareData(); MyAdapter adapter = new MyAdapter(getApplicationContext(), createLists); recyclerView.setAdapter(adapter);
We’re setting the layout as activity_main, then we’re finding the RecyclerView and initializing it. Notice that we use HasFixedSize to make sure that it won’t stretch to accommodate the content. We’re also creating the layout manager and the adapter here. There are multiple types of layout manager but true to gallery-form, we’re going to pick a grid rather than a long list. Remember to import the GridLayoutManager and the RecyclerView as Android Studio prompts you to do so. Meanwhile, when you highlight MyAdapter, you’ll be given the option to ‘Create Class MyAdapter’. Go for it – make your own MyAdapter.Java and then switch back. We’ll come back to this later.
Before we can use the new adapter class, we first need to create our data set. This is going to take the form of an array list. So in other words, we’re going to place a list of all our images in here, which the adapter will then read and use to fill out the RecyclerView.
Just to make life a little more complicated, creating the Array List is also going to require a new class. First though, create a string array and an integer array in the same MainActivity.Java:
private final String image_titles[] = { "Img1", "Img2", "Img3", "Img4", "Img5", "Img6", "Img7", "Img8", "Img9", "Img10", "Img11", "Img12", "Img13", }; private final Integer image_ids[] = { R.drawable.img1, R.drawable.img2, R.drawable.img3, R.drawable.img4, R.drawable.img5, R.drawable.img6, R.drawable.img7, R.drawable.img8, R.drawable.img9, R.drawable.img10, R.drawable.img11, R.drawable.img12, R.drawable.img13, };
The strings can be anything you want – these will be the titles of your images. As for the integers, these are image IDs. This means they need to point to images in your Drawables folder. Drop some images into there that aren’t too massive and make sure the names are all correct.
Remember: a list is a collection of variables (like strings or integers), whereas an array is more like a filing cabinet of variables. By creating an ArrayList then, we’re basically creating a list of filing cabinets, allowing us to store two collections of data in one place. In this case, the data is a selection of image titles and image IDs.
Now create a new Java Class called CreateList and add this code:
public class CreateList { private String image_title; private Integer image_id; public String getImage_title() { return image_title; } public void setImage_title(String android_version_name) { this.image_title = android_version_name; } public Integer getImage_ID() { return image_id; } public void setImage_ID(Integer android_image_url) { this.image_id = android_image_url; } }
What we have here is a method we can use to add new elements (setImage_title, setImage_ID) and retrieve them (getImage_title, getImage_ID). This will let us run through the two arrays we made and stick them into the ArrayList. You’ll need to import array lists.
We do this, like so:
private ArrayList<CreateList> prepareData(){ ArrayList<CreateList> theimage = new ArrayList<>(); for(int i = 0; i< image_titles.length; i++){ CreateList createList = new CreateList(); createList.setImage_title(image_titles[i]); createList.setImage_ID(image_ids[i]); theimage.add(createList); } return theimage; } }
So we’re performing a loop while we go through all the image titles and adding them to the correct array in the ArrayList one at a time. Each time, we’re using the same index (i), in order to add the image ID to its respective location.
Confused yet?
Using the adapter
Before you head over to MyAdapter.java, you first need to create a new XML layout in the layout directory. I’ve called mine cell_layout.xml and it looks like so:
<LinearLayout xmlns: <ImageView android: <TextView android: </LinearLayout>
All this is, is the layout for the individual cells in our grid layout. Each one will have an image at the top, with text just underneath. Nice.
Now you can go back to your MyAdapter.java. This is where we’re going to take the list, take the cell layout and then use both those things to fill the RecyclerView. We already attached this to the RecyclerView in MainActivity.Java, so now all that’s left is… lots and lots of complex code.
It’s probably easiest if I just show you…
public class MyAdapter extends RecyclerView.Adapter<MyAdapter.ViewHolder> { private ArrayList<CreateList> galleryList; private Context context; public MyAdapter(Context context, ArrayList<CreateList> galleryList) { this.galleryList = galleryList; this.context = context; } @Override public MyAdapter.ViewHolder onCreateViewHolder(ViewGroup viewGroup, int i) { View view = LayoutInflater.from(viewGroup.getContext()).inflate(R.layout.cell_layout, viewGroup, false); return new ViewHolder(view); } @Override public void onBindViewHolder(MyAdapter.ViewHolder viewHolder, int i) { viewHolder.title.setText(galleryList.get(i).getImage_title()); viewHolder.img.setScaleType(ImageView.ScaleType.CENTER_CROP); viewHolder.img.setImageResource((galleryList.get(i).getImage_ID())); } @Override public int getItemCount() { return galleryList.size(); } public class ViewHolder extends RecyclerView.ViewHolder{ private TextView title; private ImageView img; public ViewHolder(View view) { super(view); title = (TextView)view.findViewById(R.id.title); img = (ImageView) view.findViewById(R.id.img); } } }
So what we’re doing here is to get our ArrayList and then create a ViewHolder. A ViewHolder makes it easier for us to iterate lots of views without having to write findViewByID every time – which would be impractical for a very long list.
We create the VewHolder by referencing the cell_layout file we created earlier, and then bind it with the data from our ArrayList. We find the TextView first and set that to be the relevant string, then we find the ImageView and use the image ID integer to set the image resource. Notice that I’ve also setScaleType to CENTER_CROP. This means that the image will be centered but cropped to fill the enter cell in a relatively attractive manner. There are other scale types but I believe that this is by far the most attractive for our purposes.
Don’t forget to import the ImageView and TextView classes. And remember you need to add some images to your drawables folder. Once you’ve done that though you should be ready to go!
Give it a try and you should end up with something that looks a little like this:
Except without all the pictures of me… This is just what I happened to have to hand, don’t judge!
Not working as expected? Don’t worry – this is a pretty complicated app for beginners. You can find the full thing over at GitHub here and then just work through each step while referring to the code.
Making this into a useful app
So right now we have a strange slideshow of photos of me. Not really a great app…
So what might you use this code for? Well, there are plenty of apps that essentially boil down to galleries – this would be a great way to create a portfolio for your business for example, or perhaps a visual guide of some sort.
In that case, we might want to add an onClick so that we can show some information, or perhaps a larger version of the image when someone taps their chosen item. To do this, we just need to import the onClickListener and then add this code to onBindViewHolder:
viewHolder.img.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { Toast.makeText(context,"Image",Toast.LENGTH_SHORT).show(); } });
If we wanted to load a selection of photos on the user’s device meanwhile, we’d simply have to list files in a particular directory. To do that, we’d just need to use listFiles to take the file names and load them into our ListArray list, using something list this:
String path = Environment.getRootDirectory().toString(); File f = new File(path); File file[] = f.listFiles(); for (int i=0; i < file.length; i++) { CreateList createList = new CreateList(); createList.setImage_Location(file[i].getName()); }
Except you’ll be changing your path string to something useful, like the user’s camera roll (rather than the root directory). Then you can load the bitmaps from the images on an SD card or internal storage by using the image name and path, like so:
Bitmap bmp = BitmapFactory.decodeFile(pathName); ImageView img; img.setImageBitmap(bmp);
You’ll probably want to get thumbnails from them too. This way, the list will be populated dynamically – so that when new photos are added to that directory, you’re gallery will update to show them each time you open it. This is how you might make a gallery app for displaying the images on a user’s phone, for example.
Or alternatively, another way we could make this app a little fancier, would be to download images from the web.
This might sound like a whole extra chapter but it’s actually pretty simple as well. You just need to use the Picasso library, which is very easy and completely free. First, add the dependency like we did earlier:
compile 'com.squareup.picasso:picasso:2.5.0'
Then, change your ArrayList to contain two string arrays instead of a string and an integer. Instead of image IDs, you’re going to fill this second string array with URLs for your images (in inverted commas). Now you just swap out the line in your onBindViewHolder to:
Picasso.with(context).load(galleryList.get(i).getImage_ID()).resize(240, 120).into(viewHolder.img);
Remember to add the relevant permission and it really is that easy – you can now download your images right from a list of URLs and that way update them on the fly without having to update the app! Picasso will also cache images to keep things nice and zippy for you.
Note as well that if you wanted to have more than two images per row, you would simply swap:
RecyclerView.LayoutManager layoutManager = new GridLayoutManager(getApplicationContext(),2);
For:
RecyclerView.LayoutManager layoutManager = new GridLayoutManager(getApplicationContext(),3);
This will give you something like the following:
If you don’t like the text and you just want images, then you can easily remove the string array from proceedings. Or for a quick hack if you don’t want to stray too far from my code, you can just make the TextView super thin.
Closing comments
And there you have it – your very own basic image gallery. There are plenty of uses for this and hopefully you’ve learned a few useful bits and pieces along the way. Stay tuned for more tutorials just like this one!
And remember, the full project can be found here for your reference. | https://www.androidauthority.com/how-to-build-an-image-gallery-app-718976/ | CC-MAIN-2019-04 | en | refinedweb |
java.lang.Object
akka.io.PipelineStage<HasLogging,Tcp.Command,Tcp.Command,Tcp.Event,Tcp.Event>akka.io.PipelineStage<HasLogging,Tcp.Command,Tcp.Command,Tcp.Event,Tcp.Event>
akka.io.BackpressureBufferakka.io.BackpressureBuffer
public class BackpressureBuffer
This pipeline stage implements a configurable buffer for transforming the
per-write ACK/NACK-based backpressure model of a TCP connection actor into
an edge-triggered back-pressure model: the upper stages will receive
notification when the buffer runs full (
BackpressureBuffer.HighWatermarkReached) and when
it subsequently empties (
BackpressureBuffer.LowWatermarkReached). The upper layers should
respond by not generating more writes when the buffer is full. There is also
a hard limit upon which this buffer will abort the connection.
All limits are configurable and are given in number of bytes.
The
highWatermark should be set such that the
amount of data generated before reception of the asynchronous
BackpressureBuffer.HighWatermarkReached notification does not lead to exceeding the
maxCapacity hard limit; if the writes may arrive in bursts then the
difference between these two should allow for at least one burst to be sent
after the high watermark has been reached. The
lowWatermark must be less
than or equal to the
highWatermark, where the difference between these two
defines the hysteresis, i.e. how often these notifications are sent out (i.e.
if the difference is rather large then it will take some time for the buffer
to empty below the low watermark, and that room is then available for data
sent in response to the
BackpressureBuffer.LowWatermarkReached notification; if the
difference was small then the buffer would more quickly oscillate between
these two limits).
public BackpressureBuffer(long lowBytes, long highBytes, long maxBytes)
public java.lang.Object apply(HasLogging ctx) | https://doc.akka.io/japi/akka/2.2.5/akka/io/BackpressureBuffer.html | CC-MAIN-2019-04 | en | refinedweb |
def OnEndOfDay(self):
self.Debug("END OF DAY")
/> />
Runtime Error: A data subscription for type 'List`1' was not found. (Open Stacktrace)
def OnEndOfDay(self):
self.Debug("END OF DAY")
/> />
Runtime Error: A data subscription for type 'List`1' was not found. (Open Stacktrace)
hmm you did not go through the list of securities/assets. It sounds like you would subscribe not to stocks but to the list1 itself.
check the "def Initialize(self):" method how you subscribe to data/stocks...
from the MultipleSymbolConsolidationAlgorithm starting at line 53 from here:
code snippet:
#. | https://www.quantconnect.com/forum/discussion/3592/onendofday-callback-gives-error-when-running-backtest/ | CC-MAIN-2019-04 | en | refinedweb |
.
Woke up early yesterday, so I started skimming @xuwei_k's override blog post. The topic was so intriguing, I got out of the bed and started translating it as the curious case of putting override modifier when overriding an abstract method in Scala. In there he describes the conundrum of providing the default instances to typeclasses by using Scalaz codebase as an example.
override
Here's a simplified representation of the problem:
trait Functor {
def map: String
}
trait Traverse extends Functor {
override def map: String = "meh"
}
sealed trait OneOrFunctor extends Functor {
override def map: String = "better"
}
sealed trait OneOrTraverse extends OneOrFunctor with Traverse {
}
object OneOr {
def OneOrFunctor: Functor = new OneOrFunctor {}
def OneOrTraverse: Traverse = new OneOrTraverse {}
} | http://eed3si9n.com/category/tags/scala?page=2 | CC-MAIN-2019-04 | en | refinedweb |
A Look at Ruby 2.1
In this article, we take a look at the spanking new features of Ruby 2.1. It was first announced by Matz at the Barcelona Ruby Conference (BaRuCo) 2013. We’ll be focusing on Ruby 2.1.0, which was released over the holiday.
Hopefully, by the end of the article, you’ll be very excited about Ruby 2.1!
Getting Ruby 2.1
The best way to learn and explore the various features is to follow along with the examples. To do that, you need to get yourself a copy of the latest Ruby 2.1:
If you are on
rvm:
(You need to run
rvm get head to get 2.1.0 final installed)
$ rvm get head $ rvm install ruby-2.1.0 $ rvm use ruby-2.1.0
or if you are on
rbenv:
$ rbenv install 2.1.0 $ rbenv rehash $ rbenv shell 2.1.0
Note that for
rbenv users, you probably want to do a
rbenv shell --unset after you are done playing with the examples – unless you like to live on the the bleeding edge. Or you could simply just close the terminal window.
Let’s make sure that we are both using the same version:
$ ruby -v ruby 2.1.0dev (2013-11-23 trunk 43807) [x86_64-darwin13.0]
So, What’s New?
Here is the list of features we’ll tackle today. For a more comprehensive list, take a look at the release notes for Ruby 2.1.0.
- Rational Number and Complex Number Literals
def‘s return value
- Refinements
- Required Keyword Arguments
- Garbage Collector
- Object Allocation Tracing
- Exception#cause
1. Rational Number and Complex Number Literals
In previous versions of Ruby, it was a hassle to work with complex numbers:
% irb irb(main):001:0> RUBY_VERSION => "2.0.0" irb(main):002:0> Complex(2, 3) => (2+3i) irb(main):003:0> (2+3i) SyntaxError: (irb):3: syntax error, unexpected tIDENTIFIER, expecting ')' (2+3i) ^ from /usr/local/var/rbenv/versions/2.0.0-p247/bin/irb:12:in `<main>'
Now, with the introduction of the
i suffix:
% irb irb(main):001:0> RUBY_VERSION => "2.1.0" irb(main):002:0> (2+3i) => (2+3i) irb(main):003:0> (2+3i) + Complex(5, 4i) => (3+3i)
Working with rationals is also more pleasant. Previously, you had to use floats if you wanted to work with fractions or use the
Rational class. The
r suffix improves the situation by providing a shorthand for the
Rational class.
Therefore, instead of:
irb(main):001:0> 2/3.0 + 5/4.0 => 1.9166666666666665
We could write this instead:
irb(main):002:0> 2/3r + 5/4r => (23/12)
2.
def‘s Return Value
In previous versions of Ruby, the return value of a method definition has always been
nil:
% irb irb(main):001:0> RUBY_VERSION => "2.0.0" irb(main):002:0> def foo irb(main):003:1> end => nil
In Ruby 2.1.0, method definitions return a symbol:
irb(main):001:0> RUBY_VERSION => "2.1.0" irb(main):002:0> def foo irb(main):003:1> end => :foo
How is this useful? So far, one of the use cases I’ve come across is how
private methods are defined. I’ve always disliked the way Ruby defines private methods:
module Foo def public_method end private def a_private_method end end
The problem I have with this is when classes get really long (despite our best intentions), it is sometimes easy to miss out that
private keyword.
What is interesting is that
private can take in a symbol:
module Foo def public_method end def some_other_method end private :some_other_method private def a_private_method end end Foo.private_instance_methods => [:some_other_method, :a_private_method]
Now, we can simply combine the fact that
def returns a symbol and
private takes in a symbol:
module Foo def public_method end private def some_other_method end private def a_private_method end end Foo.private_instance_methods => [:some_other_method, :a_private_method]
If you are interested in the implementation of this new feature, check out this blog post.
3. Refinements
Refinements are no longer experimental in Ruby 2.1. If you are new to refinements, it helps to compare it to monkey patching. In Ruby, all classes are open. This means that we can happily add methods to an existing class.
To appreciate the havoc this can cause, let’s redefine
String#count (The original definition is here):
class String def count Float::INFINITY end end
If you were to paste the above into
irb, every string returns
Infinity when
count-ed:
irb(main):001:0> "I <3 Monkey Patching".count => Infinity
Refinements provide an alternate way to scope scope our modifications. Let’s make something slightly more useful:
module Permalinker refine String do def permalinkify downcase.split.join("-") end end end class Post using Permalinker def initialize(title) @title = title end def permalink @title.permalinkify end end
First, we define a module,
Permalinker that
refines the String class with a new method. This method implements a cutting edge permalink algorithm.
In order to use our refinement, we simply add
using Permalinker into our example
Post class. After that, we could treat as if the
String class has the
permalinkify method.
Let’s see this in action:
irb(main):002:0> post = Post.new("Refinements are pretty awesome") irb(main):002:0> post.permalink => "refinements-are-pretty-awesome"
To prove that
String#permalinkify only exists within the scope of the
Post class, let’s try using that method elsewhere and watch the code blow up:
irb(main):023:0> "Refinements are not globally scoped".permalinkify NoMethodError: undefined method `permalinkify' for "Refinements are not globally scoped":String from (irb):23 from /usr/local/var/rbenv/versions/2.1.0/bin/irb:11:in `<main>'
4. Required Keyword Arguments
In Ruby 2.0, keyword arguments were introduced:
def permalinkfiy(str, delimiter: "-") str.downcase.split.join(delimiter) end
Unfortunately, there wasn’t a way to mark
str as being required. That’s set to change in Ruby 2.1. In order to mark an argument as required, simply leave out the default value like so:
def permalinkify(str:, delimiter: "-") str.downcase.split.join(delimiter) end
If we fill in all the required arguments, everything works as expected. However if we leave something out, an
ArgumentError gets thrown:
irb(main):001:0> permalinkify(str: "Required keyword arguments have arrived!", delimiter: "-lol-") => "required-lol-keyword-lol-arguments-lol-have-lol-arrived!" irb(main):002:0> permalinkify(delimiter: "-lol-") ArgumentError: missing keyword: str from (irb):49 from /usr/local/var/rbenv/versions/2.1.0/bin/irb:11:in `<main>'
5. Restricted Generational Garbage Collector (RGenGC)
Ruby 2.1 has a new garbage collector that uses a generational garbage collection algorithm.
The key idea and observation is that objects that are most recently created often die young. Therefore, we can split objects into young and old based on whether they survive a garbage collection run. This way, the garbage collector can concentrate on freeing up memory on the young generation.
In the event we run out of memory even after garbage collecting the young generation (minor GC), the garbage collector will then proceed on to the old generation (major GC).
Prior to Ruby 2.1, Ruby’s garbage collector was running a conservative stop-the-world mark and sweep algorithm. In Ruby 2.1, we are still using the mark and sweep algorithm to garbage collect the young/old generations. However, because we have lesser objects to mark the marking time decreases, which leads to improved collector performance.
There are caveats, however. In order to preserve compatibility with C extensions, the Ruby core team could not implement a “full” generational garbage collection algorithm. In particular, they could not implement the moving garbage collection algorithm – hence the “restricted”.
That said, it is very encouraging to see the Ruby core team taking garbage collection performance very seriously. For more details, do check out this excellent presentation by Koichi Sasada.
6. Exception#cause
Charles Nutter, who implemented this feature, explains it best:
Often when a lower-level API raises an exception, we would like to re-raise a different exception specific to our API or library. Currently in Ruby, only our new exception is ever seen by users; the original exception is lost forever, unless the user decides to dig around our library and log it.
We need a way to have an exception carry a “cause” along with it.
Here is an example of how
Exception#cause works:
class ExceptionalClass def exceptional_method cause = nil begin raise "Boom!"" # RuntimeError raised rescue => e raise StandardError, "Ka-pow!" end end end begin ExceptionalClass.new.exceptional_method rescue Exception => e puts "Caught Exception: #{e.message} [#{e.class}]" puts "Caused by : #{e.cause.message} [#{e.cause.class}]" end
This is what you will get:
Caught Exception: Ka-pow! [StandardError] Caused by : Boom! [RuntimeError]
7. Object Allocation Tracing
If you have a bloated Ruby application, it is usually a non-trivial task to pinpoint the exact source of the problem. MRI Ruby still doesn’t have profiling tools that can rival, for example, the JRuby profiler.
Fortunately, work has begun to provide object allocation tracing to MRI Ruby.
Here’s an example:
require 'objspace' class Post def initialize(title) @title = title end def tags %w(ruby programming code).map do |tag| tag.upcase end end end ObjectSpace.trace_object_allocations_start a = Post.new("title") b = a.tags ObjectSpace.trace_object_allocations_stop puts ObjectSpace.allocation_sourcefile(a) # post.rb puts ObjectSpace.allocation_sourceline(a) # 16 puts ObjectSpace.allocation_class_path(a) # Class puts ObjectSpace.allocation_method_id(a) # new puts ObjectSpace.allocation_sourcefile(b) # post.rb puts ObjectSpace.allocation_sourceline(b) # 9 puts ObjectSpace.allocation_class_path(b) # Array puts ObjectSpace.allocation_method_id(b) # map
Although knowing that we can obtain this information is great, it is not immediately obvious how this could be useful to you, the developer.
Enter the
allocation_stats gem written by Sam Rawlins.
Let’s install it:
% gem install allocation_stats Fetching: allocation_stats-0.1.2.gem (100%) Successfully installed allocation_stats-0.1.2 Parsing documentation for allocation_stats-0.1.2 Installing ri documentation for allocation_stats-0.1.2 Done installing documentation for allocation_stats after 0 seconds 1 gem installed
Here’s the same example as before, except that we are using
allocation_stats this time:
require 'allocation_stats' class Post def initialize(title) @title = title end def tags %w(ruby programming code).map do |tag| tag.upcase end end end stats = AllocationStats.trace do post = Post.new("title") post.tags end puts stats.allocations(alias_paths: true).to_text
Running this produces a nicely formatted table:
sourcefile sourceline class_path method_id memsize class ---------- ---------- ---------- --------- ------- ------ post.rb 10 String upcase 0 String post.rb 10 String upcase 0 String post.rb 10 String upcase 0 String post.rb 9 Array map 0 Array post.rb 9 Post tags 0 Array post.rb 9 Post tags 0 String post.rb 9 Post tags 0 String post.rb 9 Post tags 0 String post.rb 17 Class new 0 Post post.rb 17 0 String
Sam gave a wonderful presentation that looks into more details of the
allocation_stats gem.
Happy Holidays!
Ruby 2.1 is scheduled to be released on Christmas day. If everything goes well, it would make for a wonderful present for all Rubyists. I am especially excited to see improvements in Ruby’s garbage collector, and also better profiling capabilities baked into the language that allow for the building of better profiling tools.
Happy coding and happy holidays! | https://www.sitepoint.com/look-ruby-2-1/ | CC-MAIN-2018-05 | en | refinedweb |
Haskell/Lists III
Folds[edit]
Like
map, a fold is a higher order function that takes a function and a list. However, instead of applying the function element by element, the fold uses it to combine the list elements into a result value.
Let's look at a few concrete examples.
sum could be implemented as:
Example: sum
sum :: [Integer] -> Integer sum [] = 0 sum (x:xs) = x + sum xs
and
product as:
Example: product
product :: [Integer] -> Integer product [] = 1 product (x:xs) = x * product xs
concat, which takes a list of lists and joins (concatenates) them into one:
Example: concat
concat :: [[a]] -> [a] concat [] = [] concat (x:xs) = x ++ concat xs
All these examples show a pattern of recursion known as a fold. Think of the name referring to a list getting "folded up" into a single value or to a function being "folded between" the elements of the list.
Prelude defines four
fold functions:
foldr,
foldl,
foldr1 and
foldl1.
foldr[edit]
The right-associative
foldr folds up a list from the right to left. As it proceeds, foldr uses the given function to combine each of the elements with the running value called the accumulator. When calling foldr, the initial value of the accumulator is set as an argument.
foldr :: (a -> b -> b) -> b -> [a] -> b foldr f acc [] = acc foldr f acc (x:xs) = f x (foldr f acc xs)
The first argument to
foldr is a function with two arguments. The second argument is value for the accumulator (which often starts at a neutral "zero" value). The third argument is the list to be folded.
In
sum,
f is
(+), and
acc is
0. In
concat,
f is
(++) and
acc is
[]. In many cases (like all of our examples so far), the function passed to a fold will be one that takes two arguments of the same type, but this is not necessarily the case (as we can see from the
(a -> b -> b) part of the type signature — if the types had to be the same, the first two letters in the type signature would have matched).
Remember, a list in Haskell written as
[a, b, c] is an alternative (syntactic sugar) style for
a : b : c : [].
Now,
foldr f acc xs in the
foldr definition simply replaces each cons (:) in the
xs list with the function
f while replacing the empty list at the end with
acc:
foldr f acc (a:b:c:[]) = f a (f b (f c acc))
Note how the parentheses nest around the right end of the list.
An elegant visualisation is given by picturing the list data structure as a tree:
: f / \ / \ a : foldr f acc a f / \ -------------> / \ b : b f / \ / \ c [] c acc
We can see here that
foldr (:) [] will return the list completely unchanged. That sort of function that has no effect is called an identity function. You should start building a habit of looking for identity functions in different cases, and we'll discuss them more later when we learn about monoids.
foldl[edit]
The left-associative
foldl processes the list in the opposite direction, starting at the left side with the first element.
foldl :: (a -> b -> a) -> a -> [b] -> a foldl f acc [] = acc foldl f acc (x:xs) = foldl f (f acc x) xs
So, brackets in the resulting expression accumulate around the left end of the list:
foldl f acc (a:b:c:[]) = f (f (f acc a) b) c
The corresponding trees look like:
: f / \ / \ a : foldl f acc f c / \ -------------> / \ b : f b / \ / \ c [] acc a
Because all folds include both left and right elements, beginners can get confused by the names. You could think of foldr as short for fold-right-to-left and foldl as fold-left-to-right. The names refer to where the fold starts.
Note
Technical Note: foldl is tail-recursive, that is, it recurses immediately, calling itself. For this reason the compiler will optimise it to a simple loop for efficiency. However, Haskell is a lazy language, so the calls to f will be left unevaluated by default, thus building up an unevaluated expression in memory that includes the entire length of the list. To avoid running out of memory, we have a version of foldl called
foldl' that is strict — it forces the evaluation of f immediately at each step.
An apostrophe at the end of a function name is pronounced "tick" as in "fold-L-tick". A tick is a valid character in Haskell identifiers.
foldl' can be found in the
Data.List library module (imported by adding
import Data.List to the beginning of a source file). As a rule of thumb, you should use
foldr on lists that might be infinite or where the fold is building up a data structure and use
foldl' if the list is known to be finite and comes down to a single value. There is almost never a good reason to use
foldl (without the tick), though it might just work if the lists fed to it are not too long.
Are foldl and foldr opposites?[edit][edit]
As previously noted, the type declaration for
foldr makes it quite possible for the list elements and result to be of different types. For example, "read" is a function that takes a string and converts it into some type (the type system is smart enough to figure out which one). In this case we convert it into a float.
Example: The list elements and results can have different types
addStr :: String -> Float -> Float addStr str x = read str + x sumStr :: [String] -> Float sumStr = foldr addStr 0.0
There is also a variant called
foldr1 ("fold - R - one") which dispenses with an explicit "zero" for an accumulator by taking the last element of the list instead:
foldr1 :: (a -> a -> a) -> [a] -> a foldr1 f [x] = x foldr1 f (x:xs) = f x (foldr1 f xs) foldr1 _ [] = error "Prelude.foldr1: empty list"
And
foldl1 as well:
foldl1 :: (a -> a -> a) -> [a] -> a foldl1 f (x:xs) = foldl f x xs foldl1 _ [] = error "Prelude.foldl1: empty list"
Note: Just like for foldl, the Data.List library includes foldl1' as a strict version of foldl1.
With foldl1 and foldr1, all the types have to be the same, and an empty list is an error. These variants are useful when there is no obvious candidate for the initial accumulator value and we are sure that the list is not going to be empty. When in doubt, stick with foldr or foldl'.
folds and laziness[edit]
One reason that right-associative folds are more natural in Haskell than left-associative ones is that right folds can operate on infinite lists. A fold that returns an infinite list is perfectly usable in a larger context that doesn't need to access the entire infinite result. In that case, foldr can move along as much as needed and the compiler will know when to stop. However, a left fold necessarily calls itself recursively until it reaches the end of the input list (because the recursive call is not made in an argument to f). Needless to say, no end will be reached if an input list to foldl is infinite.
As a toy example, consider a function
echoes that takes a list of integers and produces a list such that wherever the number n occurs in the input list, it is replicated n times in the output list. To create our echoes function, we will use the prelude function
replicate in which
replicate n x is a list of length n with x the value of every element.
We can write echoes as a foldr quite handily:
echoes = foldr (\ x xs -> (replicate x x) ++ xs) [] take 10 (echoes [1..]) -- [1,2,2,3,3,3,4,4,4,4]
(Note: This definition is compact thanks to the
\ x xs -> syntax. The
\, meant to look like a lambda (λ), works as an unnamed function for cases where we won't use the function again anywhere else. Thus, we provide the definition of our one-time function in situ. In this case,
x and
xs are the arguments, and the right-hand side of the definition is what comes after the
->.)
We could have instead used a foldl:
echoes = foldl (\ xs x -> xs ++ (replicate x x)) [] take 10 (echoes [1..]) -- not terminating
but only the foldr version works on an infinite lists. What would happen if you just evaluate
echoes [1..]? Try it! (If you try this in GHCi or a terminal, remember you can stop an evaluation with Ctrl-c, but you have to be quick and keep an eye on the system monitor or your memory will be consumed in no time and your system will hang.)
As a final example,
map itself can be implemented as a fold:
map f = foldr (\ x xs -> f x : xs) []
Folding takes some time to get used to, but it is a fundamental pattern in functional programming and eventually becomes very natural. Any time you want to traverse a list and build up a result from its members, you likely want a fold.
Scans[edit]
A "scan" is like a cross between a
map and a fold. Folding a list accumulates a single return value, whereas mapping puts each item through a function returning a separate result for each item. A scan does both: it accumulates a value like a fold, but instead of returning only a final value it returns a list of all the intermediate values.
Prelude contains four scan functions:
scanl :: (a -> b -> a) -> a -> [b] -> [a]
scanl accumulates the list from the left, and the second argument becomes the first item in the resulting list. So,
scanl (+) 0 [1,2,3] = [0,1,3,6].
scanl1 :: (a -> a -> a) -> [a] -> [a]
scanl1 uses the first item of the list as a zero parameter. It is what you would typically use if the input and output items are the same type. Notice the difference in the type signatures between
scanl and
scanl1.
scanl1 (+) [1,2,3] = [1,3,6].
scanr :: (a -> b -> b) -> b -> [a] -> [b] scanr (+) 0 [1,2,3] = [6,5,3,0] scanr1 :: (a -> a -> a) -> [a] -> [a] scanr1 (+) [1,2,3] = [6,5,3]
These two functions are the counterparts of
scanl and
scanl1 that accumulate the totals from the right.
filter[edit]
A common operation performed on lists is filtering — generating a new list composed only of elements of the first list that meet a certain condition. A simple example: making a list of only even numbers from a list of integers.
retainEven :: [Int] -> [Int] retainEven [] = [] retainEven (n:ns) = -- mod n 2 computes the remainder for the integer division of n by 2. if (mod n 2) == 0 then n : (retainEven ns) else retainEven ns
This definition is somewhat verbose and specific. Prelude provides a concise and general
filter function with type signature:
filter :: (a -> Bool) -> [a] -> [a]
So, a
(a -> Bool) function tests an elements for some condition, we then feed in a list to be filtered, and we get back the filtered list.
To write
retainEven using
filter, we need to state the condition as an auxiliary
(a -> Bool) function, like this one:
isEven :: Int -> Bool isEven n = (mod n 2) == 0
And then retainEven becomes simply:
retainEven ns = filter isEven ns
We used ns instead of xs to indicate that we know these are numbers and not just anything, but we can ignore that and use a more terse point-free definition:
retainEven = filter isEven
This is like what we demonstrated before for
map and the folds. Like
filter, those take another function as argument; and using them point-free emphasizes this "functions-of-functions" aspect.
List comprehensions[edit]
List comprehensions are syntactic sugar for some common list operations, such as filtering. For instance, instead of using the Prelude
filter, we could write
retainEven like this:
retainEven es = [n | n <- es, isEven n]
This compact syntax may look intimidating, but it is simple to break down. One interpretation is:
- (Starting from the middle) Take the list es and draw (the "<-") each of its elements as a value n.
- (After the comma) For each drawn n test the boolean condition
isEven n.
- (Before the vertical bar) If (and only if) the boolean condition is satisfied, append n to the new list being created (note the square brackets around the whole expression).
Thus, if
es is [1,2,3,4], then we would get back the list [2,4]. 1 and 3 were not drawn because
(isEven n) == False .
The power of list comprehensions comes from being easily extensible. Firstly, we can use as many tests as we wish (even zero!). Multiple conditions are written as a comma-separated list of expressions (which should evaluate to a Boolean, of course). For a simple example, suppose we want to modify
retainEven so that only numbers larger than 100 are retained:
retainLargeEvens :: [Int] -> [Int] retainLargeEvens es = [n | n <- es, isEven n, n > 100]
Furthermore, we are not limited to using
n as the element to be appended when generating a new list. Instead, we could place any expression before the vertical bar (if it is compatible with the type of the list, of course). For instance, if we wanted to subtract one from every even number, all it would take is:
evensMinusOne es = [n - 1 | n <- es, isEven n]
In effect, that means the list comprehension syntax incorporates the functionalities of
map and
filter.
To further sweeten things, the left arrow notation in list comprehensions can be combined with pattern matching. For example, suppose we had a list of
(Int, Int) tuples, and we would like to construct a list with the first element of every tuple whose second element is even. Using list comprehensions, we might write it as follows:
firstForEvenSeconds :: [(Int, Int)] -> [Int] firstForEvenSeconds ps = [fst p | p <- ps, isEven (snd p)] -- here, p is for pairs.
Patterns can make it much more readable:
firstForEvenSeconds ps = [x | (x, y) <- ps, isEven y]
As in other cases, arbitrary expressions may be used before the
|. If we wanted a list with the double of those first elements:
doubleOfFirstForEvenSeconds :: [(Int, Int)] -> [Int] doubleOfFirstForEvenSeconds ps = [2 * x | (x, y) <- ps, isEven y]
Not counting spaces, that function code is shorter than its descriptive name!
There are even more possible tricks:
allPairs :: [(Int, Int)] allPairs = [(x, y) | x <- [1..4], y <- [5..8]]
This comprehension draws from two lists, and generates all possible
(x, y) pairs with the first element drawn from
[1..4] and the second from
[5..8]. In the final list of pairs, the first elements will be those generated with the first element of the first list (here,
1), then those with the second element of the first list, and so on. In this example, the full list is (linebreaks added for clarity):
Prelude> [(x, y) | x <- [1..4], y <- [5..8]] [(1,5),(1,6),(1,7),(1,8), (2,5),(2,6),(2,7),(2,8), (3,5),(3,6),(3,7),(3,8), (4,5),(4,6),(4,7),(4,8)]
We could easily add a condition to restrict the combinations that go into the final list:
somePairs = [(x, y) | x <- [1..4], y <- [5..8], x + y > 8]
This list only has the pairs with the sum of elements larger than 8; starting with
(1,8), then
(2,7) and so forth. | https://en.wikibooks.org/wiki/Haskell/Lists_III | CC-MAIN-2018-05 | en | refinedweb |
An Extensive Examination of Data Structures Using C# 2.0
Scott Mitchell
4GuysFromRolla.com
Update January 2005
Summary: A graph, like a tree, is a collection of nodes and edges, but has no rules dictating the connection among the nodes. In this fifth part of the article series, we'll learn all about graphs, one of the most versatile data structures.(22
Examining the Different Classes of Edges
Creating a Graph Class
A Look at Some Common Graph Algorithms
Conclusion
Introduction
Part 1 and Part 2 of this article series focused on linear data structures—the array, the List, the Queue, the Stack, the Hashtable, and the Dictionary. In Part 3 we began our investigation of trees. Recall that trees consist of a set of nodes, where all of the nodes share some connection to other nodes. These connections are referred to as edges. As we discussed, there are numerous rules spelling out how these connections can occur. For example, all nodes in a tree except for one—the root—must have precisely one parent node, while all nodes can have an arbitrary number of children. These simple rules ensure that, for any tree, the following three statements will hold:
- Starting from any node, any other node in the tree can be reached. That is, there exists no node that can't be reached through some simple path.
- There are no cycles. A cycle exists when, starting from some node v, there is some path that travels through some set of nodes v1, v2, ..., vk that then arrives back at v.
- The number of edges in a tree is precisely one less than the number of nodes.
In Part 3 we focused on binary trees, which are a special form of trees. Binary trees are trees whose nodes have at most two children.
In this fifth installment of the article series, we're going to examine graphs. Graphs are composed of a set of nodes and edges, just like trees, but with graphs there are no rules for the connections between nodes. With graphs there is no concept of a root node, nor is there a concept of parents and children. Rather, a graph is just a collection of interconnected nodes.
Note Realize that all trees are graphs. A tree is a special case of a graph, one whose nodes are all reachable from some starting node and one that has no cycles.
Figure 1 shows three examples of graphs. Notice that graphs, unlike trees, can have sets of nodes that are disconnected from other sets of nodes. For example, graph (a) has two distinct, unconnected set of nodes. Graphs can also contain cycles. Graph (b) has several cycles. One such is the path from v1 to v2 to v4 and back to v1. Another one is from v1 to v2 to v3 to v5 to v4 and back to v1. (There are also cycles in graph (a).) Graph (c) does not have any cycles, as one less edge than it does number of nodes, and all nodes are reachable. Therefore, it is a tree.
Figure 1. Three examples of graphs
Many real-world problems can be modeled using graphs. For example, search engines model the Internet as a graph, where Web pages are the nodes in the graph and the links among Web pages are the edges. Programs like Microsoft MapPoint that can generate driving directions from one city to another use graphs, modeling cities as nodes in a graph and the roads connecting the cities as edges.
Examining the Different Classes of Edges
Graphs, in their simplest terms, are a collection of nodes and edges, but there are different kinds of edges:
- Directed versus undirected edges
- Weighted versus unweighted edges
When talking about using graphs to model a problem, it is usually important to indicate what class of graph you are working with. Is it a graph whose edges are directed and weighted, or one whose edges are undirected and weighted? In the next two sections we'll discuss the differences between directed and undirected edges and weighted and unweighted edges.
Directed and Undirected Edges
The edges of a graph provide the connections between one node and another. By default, an edge is assumed to be bidirectional. That is, if there exists an edge between nodes v and u, it is assumed that one can travel from v to u and from u to v. Graphs with bidirectional edges are said to be undirected graphs, because there is no implicit direction in their edges.
For some problems, though, an edge might infer a one-way connection from one node to another. For example, when modeling the Internet as a graph, a hyperlink from Web page v linking to Web page u would imply that the edge between v to u would be unidirectional. That is, that one could navigate from v to u, but not from u to v. Graphs that use unidirectional edges are said to be directed graphs.
When drawing a graph, bidirectional edges are drawn as a straight line, as shown in Figure 1. Unidirectional edges are drawn as an arrow, showing the direction of the edge. Figure 2 shows a directed graph where the nodes are Web pages for a particular Web site and a directed edge from u to v indicates that there is a hyperlink from Web page u to Web page v. Notice that both u links to v and v links to u, two arrows are used—one from v to u and another from u to v.
Figure 2. Model of pages making up a website
Weighted and Unweighted Edges
Typically graphs are used to model a collection of "things" and their relationship among these "things." For example, the graph in Figure 2 modeled the pages in a Web site and their hyperlinks. Sometimes, though, it is important to associate some cost with the connection from one node to another.
A map can be easily modeled as a graph, with the cities as nodes and the roads connecting the cities as edges. If we wanted to determine the shortest distance and route from one city to another, we first need to assign a cost from traveling from one city to another. The logical solution would be to give each edge a weight, such as how many miles it is from one city to another.
Figure 3 shows a graph that represents several cities in southern California. The cost of any particular path from one city to another is the sum of the costs of the edges along the path. The shortest path, then, would be the path with the least cost. In Figure 3, for example, a trip from San Diego to Santa Barbara is 210 miles if driving through Riverside, then to Barstow, and then back to Santa Barbara. The shortest trip, however, is to drive 100 miles to Los Angeles, and then another 30 up to Santa Barabara.
Figure 3. Graph of California cities with edges valued as miles
Realize that directionality and weightedness of edges are orthogonal. That is, a graph can have one of four arrangements of edges:
- Directed, weighted edges
- Directed, unweighted edges
- Undirected, weighted edges
- Undirected, unweighted edges
The graph's in Figure 1 had undirected, unweighted edges. Figure 2 had directed, unweighted edges, and Figure 3 used undirected, weighted edges.
Sparse Graphs and Dense Graphs
While a graph could have zero or a handful of edges, typically a graph will have more edges than it has nodes. What's the maximum number of edges a graph could have, given n nodes? It depends on whether the graph is directed or undirected. If the graph is directed, then each node could have at an edge to every other node. That is, all n nodes could have n – 1 edges, giving a total of n * (n – 1) edges, which is nearly n2.
Note For this article, I am assuming nodes are not allowed to have edges to themselves. In general, though, graphs allow for an edge to exist from a node v back to node v. If self-edges are allowed, the total number of edges for a directed graph would be n2.
If the graph is undirected, then one node, call it v1, could have an edge to each and every other node, or n – 1 edges. The next node, call it v2, could have at most n – 2 edges, because there already exists an edge from v2 to v1. The third node, v3, could have at most n – 3 edges, and so forth. Therefore, for n nodes, there would be at most (n – 1) + (n – 2) + ... + 1 edges. Summed up this comes to [n * (n-1)] / 2, or, as you might have already guessed, exactly half as many edges as a directed graph.
If a graph has significantly less than n2 edges, the graph is said to be sparse. For example, a graph with n nodes and n edges, or even 2n edges would be said to be sparse. A graph with close to the maximum number of edges is said to be dense.
When using graphs in an algorithm it is important to know the ratio between nodes and edges. As we'll see later on in this article, the asymptotic running time operations performed on a graph is typically expressed in terms of the number of nodes and edges in the graph.
Creating a Graph Class
While graphs are a very common data structure used in a wide array of different problems, there is no built-in graph data structure in the .NET Framework. Part of the reason is because an efficient implementation of a
Graph class depends on a number of factors specific to the problem at hand. For example, graphs are typically modeled in either one of two ways:
- As an adjacency list
- As an adjacency matrix
These two techniques differ in how the nodes and edges of the graph are maintained internally by the
Graph class. Let's examine both of these approaches and weigh the pros and cons of each approach.
Representing a Graph Using an Adjacency List
In Part 3 we created a base class to represent nodes, the
Node class. This base class was extended to provide specialized node classes for the
BinaryTree,
BST, and
SkipList classes. Because a each node in a graph has an arbitrary number of neighbors, it might seem plausible that we can simply use the base
Node class to represent a node in the graph, because the
Node class consists of a value and an arbitrary number of neighboring
Node instances. However, while this base class is a step in the right direction, it still lacks needed features, such as a way to associate a cost between neighbors. One option, then, is to create a
class that derives from the base
GraphNode
Node class and extends it to include the required additional capabilities. Each
GraphNode class, then, will keep track of its neighboring
GraphNodes in the base class's
Neighbors property.
The
Graph class contains a
NodeList holding the set of
GraphNodes that constitute the nodes in the graph. That is, a graph is represented by a set of nodes, and each node maintains a list of its neighbors. Such a representation is called an adjacency list, and is depicted graphically in Figure 4.
Figure 4. Adjacency list representation in graphical form
Notice that with an undirected graph, an adjacency list representation duplicated the edge information. For example, in adjacency list representation (b) in Figure 4, the node a has b in its adjacency list, and node b also has node a in its adjacency list.
Each node has precisely as many
s in its adjacency list as it has neighbors. Therefore, an adjacency list is a very space-efficient representation of a graph—you never store more data than needed. Specifically, for a graph with V nodes and E edges, a graph using an adjacency list representation will require V + E
GraphNode
instances for a directed graph and V + 2E
GraphNode
Node instances for an undirected graph.
While Figure 4 does not show it, adjacency lists can also be used to represent weighted graphs. The only addition is that for each
n's adjacency list, each
GraphNode
instance in the adjacency list needs to store the cost of the edge from n.
GraphNode
The one downside of an adjacency list is that determining if there is an edge from some node u to v requires that u's adjacency list be searched. For dense graphs, u will likely have many
s in its adjacency list. Determining if there is an edge between two nodes, then, takes linear time for dense adjacency list graphs. Fortunately, when using graphs we'll likely not need to determine if there exists an edge between two particular nodes. More often than not, we'll want to simply enumerate all the edges of a particular node.
GraphNode
Representing a Graph Using an Adjacency Matrix
An alternative method for representing a graph is to use an adjacency matrix. For a graph with n nodes, an adjacency matrix is an n x n two-dimensional array. For weighted graphs the array element (u, v) would give the cost of the edge between u and v (or, perhaps -1 if no such edge existed between u and v. For an unweighted graph, the array could be an array of Booleans, where a True at array element (u, v) denotes an edge from u to v and a False denotes a lack of an edge.
Figure 5 depicts how an adjacency matrix representation in graphical form.
Figure 5. Adjacency matrix representation in graphical form
Note that undirected graphs display symmetry along the adjacency matrix's diagonal. That is, if there is an edge from u to v in an undirected graph then there will be two corresponding array entries in the adjacency matrix, (u, v) and (v, u).
Because determining if an edge exists between two nodes is simply an array lookup, this can be determined in constant time. The downside of adjacency matrices is that they are space inefficient. An adjacency matrix requires an n2 element array, so for sparse graphs much of the adjacency matrix will be empty. Also, for undirected graphs half of the graph is just repeated information.
While either an adjacency matrix or adjacency list would suffice as an underlying representation of a graph for our
Graph class, let's move forward using the adjacency list model. I chose this approach primarily because it is a logical extension from the
and
BinaryTreeNode
BinaryTree classes that we've already created together, and can be implemented by extending the
Node class used as a base class for the data structures we've examined previously.
Creating the GraphNode Class
The
class represents a single node in the graph, and is derived from the base
GraphNode
Node class we examined in Part 3 of this article series. The
class extends its base class by providing public access to the
GraphNode
Neighbors property, as well as providing a
Cost property. The
Cost property is of type
List<int>; for weighted graphs
Cost[i] it can be used to specify the cost associated with traveling from the
GraphNode to
Neighbors[i].
public class GraphNode<T> : Node<T> { private List<int> costs; public GraphNode() : base() { } public GraphNode(T value) : base(value) { } public GraphNode(T value, NodeList<T> neighbors) : base(value, neighbors) { } new public NodeList<T> Neighbors { get { if (base.Neighbors == null) base.Neighbors = new NodeList<T>(); return base.Neighbors; } } public List<int> Costs { get { if (costs == null) costs = new List<int>(); return costs; } } }
As the code for the
GraphNode class shows, the class exposes two properties:
- Neighbors: this just provides a public property to the protected base class's
Neighborsproperty. Recall that Neighbors is of type
NodeList<T>.
- Costs: a
List<int>mapping a weight from the GraphNode to a specific neighbor.
Building the Graph Class
Recall that with the adjacency list technique, the graph maintains a list of its nodes. Each node, then, maintains a list of adjacent nodes. So, in creating the
Graph class we need to have a list of
s. This set of nodes is maintained using a
GraphNode
NodeList instance. (We examined the
NodeList class in Part 3; this class was used by the
BinaryTree and
BST classes, and was extended for the
SkipList class.) The
Graph class exposes its set of nodes through the public property
Nodes.
Additionally, the
Graph class has a number of methods for adding nodes and directed or undirected and weighted or unweighted edges between nodes. The
AddNode() method adds a node to the graph,
while AddDirectedEdge() and
AddUndirectedEdge() allow a weighted or unweighted edge to be associated between two nodes.
In addition to its methods for adding edges, the
Graph class has a
Contains() method that returns a Boolean indicating if a particular value exists in the graph or not. There is also a
Remove() method that deletes a
GraphNode and all edges to and from it. The germane code for the
Graph class is shown below (some of the overloaded methods for adding edges and nodes have been removed for brevity):
public class Graph<T> : IEnumerable<T> { private NodeList<T> nodeSet; public Graph() : this(null) {} public Graph(NodeList<T> nodeSet) { if (nodeSet == null) this.nodeSet = new NodeList<T>(); else this.nodeSet = nodeSet; } public void AddNode(GraphNode<T> node) { // adds a node to the graph nodeSet.Add(node); } public void AddNode(T value) { // adds a node to the graph nodeSet.Add(new GraphNode<T>(value)); } public void AddDirectedEdge(GraphNode<T> from, GraphNode<T> to, int cost) { from.Neighbors.Add(to); from.Costs.Add(cost); } public void AddUndirectedEdge(GraphNode<T> from, GraphNode<T> to, int cost) { from.Neighbors.Add(to); from.Costs.Add(cost); to.Neighbors.Add(from); to.Costs.Add(cost); } public bool Contains(T value) { return nodeSet.FindByValue(value) != null; } public bool Remove(T value) { // first remove the node from the nodeset GraphNode<T> nodeToRemove = (GraphNode<T>) nodeSet.FindByValue(value); if (nodeToRemove == null) // node wasn't found return false; // otherwise, the node was found nodeSet.Remove(nodeToRemove); // enumerate through each node in the nodeSet, removing edges to this node foreach (GraphNode<T> gnode in nodeSet) { int index = gnode.Neighbors.IndexOf(nodeToRemove); if (index != -1) { // remove the reference to the node and associated cost gnode.Neighbors.RemoveAt(index); gnode.Costs.RemoveAt(index); } } return true; } public NodeList<T> Nodes { get { return nodeSet; } } public int Count { get { return nodeSet.Count; } } }
Using the Graph Class
At this point, we have created all of the classes needed for our graph data structure. We'll soon turn our attention to some of the more common graph algorithms, such as constructing a minimum spanning tree and finding the shortest path from a single node to all other nodes, but before we do let's examine how to use the
Graph class in a C# application.
Once we create an instance of the
Graph class, the next task is to add the
Nodes to the graph. This involves calling the
Graph class's
AddNode() method for each node to add to the graph. Let's recreate the graph from Figure 2. We'll need to start by adding six nodes. For each of these nodes let's have the
Key be the Web page's filename; we'll leave the
Data as
null, although this might conceivably contain the contents of the file, or a collection of keywords describing the Web page content.
Graph<string> web = new Graph<string>(); web.AddNode("Privacy.htm"); web.AddNode("People.aspx"); web.AddNode("About.htm"); web.AddNode("Index.htm"); web.AddNode("Products.aspx"); web.AddNode("Contact.aspx");
Next we need to add the edges. Because this is a directed, unweighted graph, we'll use the
Graph class's
AddDirectedEdge(u, v) method to add an edge from u to v.
web.AddDirectedEdge("People.aspx", "Privacy.htm"); // People -> Privacy web.AddDirectedEdge("Privacy.htm", "Index.htm"); // Privacy -> Index web.AddDirectedEdge("Privacy.htm", "About.htm"); // Privacy -> About web.AddDirectedEdge("About.htm", "Privacy.htm"); // About -> Privacy web.AddDirectedEdge("About.htm", "People.aspx"); // About -> People web.AddDirectedEdge("About.htm", "Contact.aspx"); // About -> Contact web.AddDirectedEdge("Index.htm", "About.htm"); // Index -> About web.AddDirectedEdge("Index.htm", "Contact.aspx"); // Index -> Contacts web.AddDirectedEdge("Index.htm", "Products.aspx"); // Index -> Products web.AddDirectedEdge("Products.aspx", "Index.htm"); // Products -> Index web.AddDirectedEdge("Products.aspx", "People.aspx");// Products -> People
After these commands,
web represents the graph shown in Figure 2. Once we have a constructed a graph we'll typically want to answer some questions. For example, for the graph we just created we might want to answer, "What's the least number of links a user must click to reach any Web page when starting from the homepage (
Index.htm)?" To answer such questions we can usually fall back on using existing graph algorithms. In the next section we'll examine two common algorithms for weighted graphs: constructing a minimum spanning tree and finding the shortest path from one node to all others.
A Look at Some Common Graph Algorithms
Because graphs are a data structure that can be used to model a bevy of real-world problems, there are innumerous algorithms designed to find solutions for common problems. To further our understanding of graphs, let's take a look at two of the most studied applications of graphs: finding a minimum spanning tree and computing the shortest path from a source node to all other nodes.
The Minimum Spanning Tree Problem
Imagine that you work for the phone company and your task is to provide phone lines to a village with 10 houses, each labeled H1 through H10. Specifically this involves running a single cable that connects each home. That is, the cable must run through houses H1, H2, and so forth, up through H10. Due to geographic obstacles—hills, trees, rivers, and so on—it is not feasible to necessarily run the cable from one house to another.
Figure 6 shows this problem depicted as a graph. Each node is a house, and the edges are the means by which one house can be wired up to another. The weights of the edges dictate the distance between the homes. Your task is to wire up all ten houses using the least amount of telephone wiring possible.
Figure 6. Graphical representation of hooking up a 10-home village with phone lines
For a connected, undirected graph, there exists some subset of the edges that connect all the nodes and does not introduce a cycle. Such a subset of edges would form a tree (because it would comprise one less edge than vertices and is acyclic), and is called a spanning tree. There are typically many spanning trees for a given graph. Figure 7 shows two valid spanning trees from the Figure 6 graph. (The edges forming the spanning tree are bolded.)
Figure 7.Spanning tree subsets based on Figure 6
For graphs with weighted edges, different spanning trees have different associated costs, where the cost is the sum of the weights of the edges that comprise the spanning tree. A minimum spanning tree, then, is the spanning tree with a minimum cost.
There are two basic approaches to solving the minimum spanning tree problem. One approach is build up a spanning tree by choosing the edges with the minimum weight, so long as adding that edge does not create a cycle among the edges chosen thus far. This approach is shown in Figure 8.
Figure 8. Minimum spanning tree that uses the edges with the minimum weight
The other approach builds up the spanning tree by dividing the nodes of the graph into two disjoint sets: the nodes currently in the spanning tree and those nodes not yet added. At each iteration, the least weighted edge that connects the spanning tree nodes to a node not in the spanning tree is added to the spanning tree. To start off the algorithm, some random start node must be selected. Figure 9 illustrates this approach in action, using H1 as the starting node. (In Figure 9 those nodes that are in the set of nodes in the spanning tree are shaded light yellow.)
Figure 9. Prim method of finding the minimum spanning tree
Notice that the techniques illustrated in both Figures 8 and 9 arrived at the same minimum spanning tree. If there is only one minimum spanning tree for the graph, then both of these approaches will reach the same conclusion. If, however, there are multiple minimum spanning trees, these two approaches might arrive with different results (both results will be correct, naturally).
Note The first approach we examined was discovered by Joseph Kruskal in 1956 at Bell Labs. The second technique was discovered in 1957 by Robert Prim, also a researcher at Bell Labs. There is a plethora of information on these two algorithms on the Web, including Java applets showing the algorithms in progress graphically (Kruskal's Algorithm | Prim's Algorithm), as well as source code in a variety of languages.
Computing the Shortest Path from a Single Source
When flying from one city to another, part of the headache is finding a route that requires the fewest number of connections—who likes their flight from New York to L.A. to first go from New York to Chicago, then Chicago to Denver, and finally Denver to L.A? Rather, most people would rather have a direct flight straight from New York to L.A.
Imagine, however, that you are not one of those people. Instead, you are someone who values his money much more than his time, and are most interested in finding the cheapest route, regardless of the number of connections. This might mean flying from New York to Miami, then Miami to Dallas, then Dallas to Phoenix, Phoenix to San Diego, and finally San Diego to L.A.
We can solve this problem by modeling the available flights and their costs as a directed, weighted graph. Figure 10 shows such a graph.
Figure 10. Modeling of available flights based on cost
What we are interested in knowing is what is the least expensive path from New York to L.A. By inspecting the graph, we can quickly determine that it's from New York to Chicago to San Francisco and finally down to L.A., but in order to have a computer accomplish this task we need to formulate an algorithm to solve the problem at hand.
The late Edsger Dijkstra, one of the most noted computer scientists of all time, invented the most commonly used algorithm for finding the shortest path from a source node to all other nodes in a weighted, directed graph. This algorithm, dubbed Dijkstra's Algorithm, works by maintaining two tables, each of which have a record for each node. These two tables are:
- A distance table, which keeps an up-to-date "best distance" from the source node to every other node.
- A route table, which, for each node n, indicates what node was used to reach n to get the best distance.
Initially, the distance table has each record set to some high value (like positive infinity) except for the start node, which has a distance to itself of 0. The route table's rows are all set to
null. Also, a collection of nodes, Q, that need to be examined is maintained; initially, this collection contains all of the nodes in the graph.
The algorithm proceeds by selecting (and removing) the node from Q that has the lowest value in the distance table. Let this selected node be called n and the value in the distance table for n be d. For each of the n's edges, a check is made to see if d plus the cost to get from n to that particular neighbor is less than the value for that neighbor in the distance table. If it is, then we've found a better way to reach that neighbor, and the distance and route tables are updated accordingly.
To help clarify this algorithm, let's begin applying it to the graph from Figure 10. Because we want to know the cheapest route from New York to L.A. we use New York as our source node. Our initial distance table, then, contains a value of infinity for each of the other cities, and a value of 0 for New York. The route table contains
nulls for all entries, and Q contains all nodes (see Figure 11).
Figure 11. Distance table and route table for determining cheapest fare
We start by extracting the city from Q that has the lowest value in the distance table—New York. We then examine each of New York's neighbors and check to see if the cost to fly from New York to that neighbor is less than the best cost we know of, namely the cost in the distance table. After this first check, we'd have removed New York from Q and updated the distance and route tables for Chicago, Denver, Miami, and Dallas.
Figure 12. Step 2 in the process of determining the cheapest fare
The next iteration gets the cheapest city out of Q, Chicago, and then checks its neighbors to see if there is a better cost. Specifically, we'll check to see if there's a better route for getting to San Francisco or Denver. Clearly the cost to get to San Francisco from Chicago—$75 + $25—is less than Infinity, so San Francisco's records are updated. Also, note that it is cheaper to fly from Chicago to Denver than from New York to Denver ($75 + $20 < $100), so Denver is updated as well. Figure 13 shows the values of the tables and Q after Chicago has been processed.
Figure 13. Table status after the third leg of the process is finished
This process continues until there are no more nodes in Q. Figure 14 shows the final values of the tables when Q has been exhausted.
Figure 14. Final results of determining the cheapest fare
At the point of exhausting Q, the distance table will contain the lowest cost from New York to each city. To determine the flight path to arrive at L.A., start by examining the L.A. entry in the route table and work back up to New York. That is, the route table entry for L.A. is San Francisco, meaning the last leg of the flight to L.A. leaves from San Francisco. The route table entry for San Francisco is Chicago, meaning you'll get to San Francisco via Chicago. Finally, Chicago's route table entry is New York. Putting this together we see that the cheapest flight path is from New York to Chicago to San Francisco to L.A, and costs $145.
Note To see a working implementation of Dijkstra's Algorithm check out the download for this article, which includes a testing application for the
Graphclass that determines the shortest distance from one city to another using Dijkstra's Algorithm.
Conclusion
Graphs are a commonly used data structure because they can be used to model many real-world problems. A graph consists of a set of nodes with an arbitrary number of connections, or edges, between the nodes. These edges can be either directed or undirected and weighted or unweighted.
In this article we examined the basics of graphs and created a
Graph class. This class was similar to the
BinaryTree class created in Part 3, the difference being that instead of only have a reference for at most two edges, the
Graph class's
s could have an arbitrary number of references. This similarity is not surprising because trees are a special case of graphs.
GraphNode
In addition to creating a
Graph class, we also looked at two common graph algorithms, the minimum spanning tree problem and computing the shortest path from some source node to all other nodes in a weighted, directed graph. While we did not examine source code to implement these algorithms, there are plenty source code examples available on the Internet. Too, the download included with this article contains a testing application for the
Graph class that uses Dijkstra's Algorithm to compute the shortest route between two cities.
In the next installment, Part 6, we'll look at efficiently maintaining disjoint sets. Disjoint sets are a collection of two or more sets that do not share any elements in common. For example, with Prim's Algorithm for finding the minimum spanning tree, the nodes of the graph can be divided into two disjoint sets: the set of nodes that currently constitute the spanning tree and the set of nodes that are not yet in the spanning tree.. | https://msdn.microsoft.com/en-US/library/ms379574(v=VS.80).aspx | CC-MAIN-2018-05 | en | refinedweb |
I have done a tracert and after the kinect server I have seen a internal ip I have not seen before. Does anyone know more about it. it's an internal ip.
10.55.85.80.
Is this of any concern?
Kind Regards
For Free Games, Geekiness and Reviews, visit :
Home Of The Overrated Raccoons
Michael Murphy |
Want to be with an epic ISP? Want $20 to join them too? Well, use this link to sign up to BigPipe!
The Router Guide | Community UniFi Cloud Controller | Ubiquiti Edgerouter Tutorial
Tracing route to twitch.tv [192.16.71.165]
over a maximum of 30 hops:
1 <1 ms <1 ms <1 ms pfsense.lanrouter [10.3.57.1]
2 30 ms 25 ms 24 ms 218-101-115-254.dsl.dyn.ihug.co.nz [218.101.115.254]
3 30 ms 35 ms 23 ms bvi-400.bgnzldv02.akl.vf.net.nz [203.109.180.242]
4 23 ms 23 ms 25 ms bvi-188.bgnzldv02.akl.vf.net.nz [203.109.180.241]
5 51 ms 50 ms 50 ms 10.123.80.58
6 * * * Request timed out.
#include <std_disclaimer>
Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have.
CcMaN:RunningMan: Post the traceroute.
AFAIK Trustpower use CG-NAT, so you'll see internal IPs after your router.
I'm pretty certain TPW don't use CGNAT...
Services are provided with a dynamic IP address or Carrier Grade NAT. | https://www.geekzone.co.nz/forums.asp?forumid=49&topicid=186912 | CC-MAIN-2018-05 | en | refinedweb |
I } }) store.dispatch('checkLoggedIn')
This file sets up Vue, and serves as the main entry point (or starting point) for the whole JavaScript application.
Next, create
router.js:
import Vue from 'vue' import Router from 'vue-router' import store from './store' import Dashboard from './components/Dashboard.vue' import Login from './components/Login.vue' Vue.use(Router) function requireAuth (to, from, next) { if (!store.state.loggedIn) { next({ path: '/login', query: { redirect: to.path } }) } else { next() } } export default new Router({ mode: 'history', base: __dirname, routes: [ { path: '/', component: Dashboard, beforeEnter: requireAuth }, { path: '/login', component: Login }, { path: '/logout', async beforeEnter (to, from, next) { await store.dispatch('logout') } } ] })
The Vue router keeps track of what page the user is currently viewing, and handles navigating between pages or sections of your app. This file configures the router with three paths (
/,
/login, and
/logout) and associates each path with a Vue component.
You might be wondering what
store,
Dashboard, and
Login"> <h2>{{name}}, here's your to-do list</h2> <input class="new-todo" autofocus <ul class="todo-list"> <todo-item</todo-item> </ul> <p>{{ remaining }} remaining</p> <router-linkLog out</router-link> </div> </template> <script> import TodoItem from './TodoItem' export default { components: { TodoItem }, mounted() { this.$store.dispatch('getAllTodos') }, computed: { name () { return this.$store.state.userName },. (You’ll build the data store in a minute!)> import { mapActions } from 'vuex' export default { props: ['item'], methods: { ...mapActions([ 'toggleTodo', 'deleteTodo' ]) } } < a.
Add another component called
Login.vue:
<template> <div> <h2>Login</h2> <p v- You need to login first. </p> <form @submit. <label for="email">Email</label> <input id="email" v- <label for="password">Password</label> <input id="password" v- <button type="submit">login</button> <p v-{{loginError}}</p> </form> </div> </template> <script> export default { data () { return { email: '', password: '', error: false } }, computed: { loginError () { return this.$store.state.loginError } }, methods: { login () { this.$store.dispatch('login', { email: this.email, password: this.password }) } } } </script> <style scoped> .error { color: red; } label { display: block; } input { display: block; margin-bottom: 10px; } </style>
The Login component renders a simple login form, and shows an error message if the login is unsuccessful.
Notice the
scoped attribute on the
<style> tag? That’s a cool feature called scoped CSS. Marking a block of CSS as
scoped means the CSS rules only apply to this component (otherwise, they apply globally). It’s useful here to set
display: block on the
input and
label elements in this component, without affecting how those elements are rendered elsewhere in the app.
The Dashboard and Login components (and the router configuration) refer to something called
store. I’ll explain what the store is, and how to build it, in the next section.
Before you get there, you need to build one more component. Create a file called
App.vue in the
components folder:
<template> <div class="app-container"> <div class="app-view"> <template v- <router-view></router-view> </template> </div> </div> </template> <script> export default { computed: { loggedIn () { return this.$store.state.loggedIn } } } <; padding: 20px 25px 15px 25px; margin: 30px; position: relative; box-shadow: 0 2px 4px 0 rgba(0, 0, 0, 0.2), 0 5px 10px 0 rgba(0, 0, 0, 0.1); } </style>
The App component doesn’t do much -- it just provides the base HTML and CSS styles that the other components will be rendered inside. The
<router-view> element loads a component provided by the Vue router, which will render either the Dashboard or Login component depending on the path in the address bar.
If you look back at
boot-app.js, you’ll see this line:
import App from './components/App'
This statement loads the App component, which is then passed to Vue in
new Vue(...).
You’re all done building components! It’s time to add some state management. First, I’ll explain what state management is and why it’s useful.
Add Vuex for state management
Components are a great way to break up your app into manageable pieces, but when you start passing data between many components, it can be, and that’s it. The real meat of Vuex is in mutations and actions, but you’ll write those in separate files to keep everything organized.
Create a file called
mutations.js:
import router from '../router' export const state = { todos: [], loggedIn: false, loginError: null, userName: null } export const mutations = { loggedIn(state, data) { state.loggedIn = true state.userName = (data.name || '').split(' ')[0] || 'Hello' let redirectTo = state.route.query.redirect || '/' router.push(redirectTo) }, loggedOut(state) { state.loggedIn = false router.push('/login') }, loginError(state, message) { state.loginError = message }, loadTodos(state, todos) { state.todos =.
As you can see, this app uses the Vuex store to keep track of both the to-do list (the
state.todos array), and authentication state (whether the user is logged in, what their name is). The Dashboard and Login components access this data with computed properties like:
todos () { return this.$store.state.todos },
The mutations defined here are only half the story, because they only handle updating the state after an action has run. Create another file called
actions.js:
import axios from 'axios' const sleep = ms => { return new Promise(resolve => setTimeout(resolve, ms)) } export const actions = { checkLoggedIn({ commit }) { // Todo: commit('loggedIn') if the user is already logged in }, async login({ dispatch, commit }, data) { // Todo: log the user in commit('loggedIn', { userName: data.email }) }, async logout({ commit }) { // Todo: log the user out commit('loggedOut') }, async loginFailed({ commit }, message) { commit('loginError', message) await sleep(3000) commit('loginError', null) }, async getAllTodos({ commit }) { // Todo: get the user's to-do items commit('loadTodos', [{ text: 'Fake to-do item' }]) }, async addTodo({ dispatch }, data) { // Todo: save a new to-do item await dispatch('getAllTodos') }, async toggleTodo({ dispatch }, data) { // Todo: toggle to-do item completed/not completed await dispatch('getAllTodos') }, async deleteTodo({ dispatch }, id) { // Todo: delete to-do item await dispatch('getAllTodos') } } same mutation with the items returned from the API.
Note: Because the starter template includes Babel and the transform-async-to-generator plugin, the new async/await syntax in ES2017 is available. I love async/await, because it makes dealing with async things like API calls much easier (no more big chains of Promises, or callback hell). As you’ll see in the next section, C# uses the same syntax!
Run the app
You still need to add the backend API and authentication bits, but let’s take a quick break and see what you’ve built so far. Start up the app with
dotnet run and browse to. Log in with a fake username and password:
Tip: If you need to fix bugs or make changes, you don’t need to stop and restart the server with
dotnet run again. As soon as you modify any of your Vue or JavaScript files, the frontend app will be recompiled automatically. Try making a change to the Dashboard component and see it appear instantly in your browser (like magic).
The to-do item is fake, but your app is very real! You’ve set up Vue.js, built components and routing, and added state management with Vuex. The next step is adding the backend API with ASP.NET Core. Grab a refill of coffee and let’s dive in!
Add APIs with ASP.NET Core
The user’s to-do items will be stored in an online database.
Tip:. You’ll modify this file later.
- A pair of controllers in the aptly-named Controllers folder.
In ASP.NET Core, Controllers handle requests to specific routes in your app later, throughout the ASP.NET Core project. }) { let response = await axios.get('/api/todo') if (response && response.data) { let updatedTodos = response.data commit('loadTodos', updatedTodos) } },
The new action code uses the axios HTTP library to make a request to the backend on the
/api/todo route, which will be handled by the
GetAllTodos method on the is still fake, but less fake than before! You’ve successfully connected the backend and frontend and have data flowing between them.
The final step is to add authentication and real data storage to the app. You’re almost there!
Add identity and security with Okta
Okta is a cloud-hosted identity API that makes it easy to add authentication, authorization, and user management to your web and mobile apps. You’ll use it in this project to:
- Add functionality to the Login component
- Require authentication on the backend API
- Store each user’s to-do items securely
To get started, sign up for a free Okta Developer account. After you activate your new account (called an Okta Organization, or Org for short), click Applications at the top of the screen. Choose Single-Page App and change both the base URI need to connect your frontend code to Okta.
Add the Okta Auth SDK
The Okta Auth SDK provides methods that make it easy to authenticate users from JavaScript code. Install it with npm:
npm install @okta/okta-auth-js@1.11.0
Create a file in the ClientApp folder called
oktaAuth.js that holds the Auth SDK configuration and makes the client available to the rest of your Vue app:
import OktaAuth from '@okta/okta-auth-js' const org = '{{yourOktaOrgUrl}}', clientId = '{{appClientId}}', redirectUri = '', authorizationServer = 'default' const oktaAuthClient = new OktaAuth({ url: org, issuer: authorizationServer, clientId, redirectUri }) export default { client: oktaAuthClient }
Replace
{{yourOktaOrgUrl}} with your Okta Org URL, which usually looks like this:. You can find it in the top right corner of the Dashboard page.
Next, paste the Client ID you copied from the application you created a minute ago into the
clientId property.
The
checkLoggedIn,
login, and
logout actions can now be replaced with real implementations in
actions.js:
checkLoggedIn({ commit }) { if (oktaAuth.client.tokenManager.get('access_token')) { let idToken = oktaAuth.client.tokenManager.get('id_token') commit('loggedIn', idToken.claims) } }, async login({ dispatch, commit }, data) { let authResponse try { authResponse = await oktaAuth.client.signIn({ username: data.email, password: data.password }); } catch (err) { let message = err.message || 'Login error' dispatch('loginFailed', message) return } if (authResponse.status !== 'SUCCESS') { console.error("Login unsuccessful, or more info required", response.status) dispatch('loginFailed', 'Login error') return } let tokens try { tokens = await oktaAuth.client.token.getWithoutPrompt({ responseType: ['id_token', 'token'], scopes: ['openid', 'email', 'profile'], sessionToken: authResponse.sessionToken, }) } catch (err) { let message = err.message || 'Login error' dispatch('loginFailed', message) return } // Verify ID token validity try { await oktaAuth.client.token.verify(tokens[0]) } catch (err) { dispatch('loginFailed', 'An error occurred') console.error('id_token failed validation') return } oktaAuth.client.tokenManager.add('id_token', tokens[0]); oktaAuth.client.tokenManager.add('access_token', tokens[1]); commit('loggedIn', tokens[0].claims) }, async logout({ commit }) { oktaAuth.client.tokenManager.clear() await oktaAuth.client.signOut() commit('loggedOut') },
These actions delegate to the Okta Auth SDK, which calls the Okta authentication API to log the user in and get access and ID tokens for the user via OpenID Connect. The Auth SDK also stores and manages the tokens for your app.
You’ll also need to add an
import statement at the top of
actions.js:
import oktaAuth from '../oktaAuth'
Try it out: run the server with
dotnet run and try logging in with the email and password you used to sign up for Okta:
Try logging in, refreshing the page (you should still be logged in!), and logging out.
That takes care of authenticating the user on the frontend. The Vuex store will keep track of the authentication state, and the Okta Auth SDK will handle login, logout, and keeping the user’s tokens fresh.
To secure the backend API, you need to configure ASP.NET Core to use token authentication and require a token when the frontend code makes a request.
Add API token authentication
Under the hood, the Okta Auth SDK uses OpenID Connect to get access and ID tokens when the user logs in. The ID token is used to display the user’s name in the Vue app, and the access token can be used to secure the backend API.
Open up the
Startup.cs file and add this code to the
ConfigureServices method:
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) .AddJwtBearer(options => { options.Authority = "{{yourOktaOrgUrl}}/oauth2/default"; options.Audience = "api://default"; });
This code adds token authentication to the ASP.NET Core authentication system. With this in place, your frontend code will need to attach an access token to requests in order to access the API.
Make sure you replace
{{yourOktaOrgUrl}} with your Okta Org URL (find it in the top-right of your Okta developer console’s Dashboard).
You’ll also]")] [Authorize] // Add this public class TodoController : Controller { // ...
If you tried running the app and looking at your browser’s network console, you’d see a failed API request:
The
TodoController is responding with 401 Unauthorized because it now requires a valid token to access the
/api/todo route, and your frontend code isn’t sending a token.
Open up
actions.js once more and add a small function below
sleep that attaches the user’s token to the HTTP
Authorization header:
const addAuthHeader = () => { return { headers: { 'Authorization': 'Bearer ' + oktaAuth.client.tokenManager.get('access_token').accessToken } } }
Then, update the code that calls the backend in the
getAllTodos function:
let response = await axios.get('/api/todo', addAuthHeader())
Refresh the page (or start the server) and the now-authenticated request will succeed once again.
Add the Okta .NET SDK
You’re almost done! The final task is to store and retrieve the user’s to-do items in the Okta custom profile attribute you set up earlier. You’ll use the Okta .NET SDK to do this in a few lines of backend code.
Stop the ASP.NET Core server (if it’s running), and install the Okta .NET SDK in your project with the
dotnet tool: = "{{yourOktaOrgUrl}}",ktaOrgUrl}}! Because of this, before? It’s time to replace it with a new service that uses Okta to store the user’s to-do items. Create
OktaTodoItemService.cs in the Services foldre: are limited to storing primitives likes strings and numbers, but you’re using the
TodoModel type to represent to-do items. This service serializes the strongly-typed }, addAuthHeader()) await dispatch('getAllTodos') }, async toggleTodo({ dispatch }, data) { await axios.post( '/api/todo/' + data.id, { completed: data.completed }, addAuthHeader()) await dispatch('getAllTodos') }, async deleteTodo({ dispatch }, id) { await axios.delete('/api/todo/' + id, addAuthHeader()) await dispatch('getAllTodos') }
Start the server one more time with
dotnet run and try adding a real to-do item to the list:
Learn More! our other recent posts:
- The Lazy Developer's Guide to Authentication with Vue.js
- Build a Cryptocurrency Comparison Site with Vue.js
Happy coding!
This content is sponsored via Syndicate Ads | https://scotch.io/tutorials/build-a-secure-to-do-app-with-vuejs-aspnet-core-and-okta | CC-MAIN-2018-05 | en | refinedweb |
#include <types.h>
#include <types.h>
Inheritance diagram for UUID:
[inline]
Construct a new UUID with a new unique value.
Construct a UUID from a sequence of bytes.
Construct a UUID as a copy of another UUID.
Copy constructor.
Construct a UUID from an end-swapped UL.
Set a UUID from a UL, does end swapping.
Produce a human-readable string in one of the "standard" formats.
Reimplemented from Identifier< 16 >. | http://freemxf.org/mxflib-docs/mxflib-1.0.0-docs/classmxflib_1_1_u_u_i_d.html | CC-MAIN-2018-05 | en | refinedweb |
In this program we will see how to read an integer number entered by user. Scanner class is in java.util package. It is used for capturing the input of the primitive types like int, double etc. and strings.
Example: Program to read the number entered by user
We have imported the package
java.util.Scanner to use the Scanner. In order to read the input provided by user, we first create the object of Scanner by passing
System.in as parameter. Then we are using nextInt() method of Scanner class to read the integer. If you are new to Java and not familiar with the basics of java program then read the following topics of Core Java:
→ Writing your First Java Program
→ How JVM works
import java.util.Scanner; public class Demo { public static void main(String[] args) { /* This reads the input provided by user * using keyboard */ Scanner scan = new Scanner(System.in); System.out.print("Enter any number: "); // This method reads the number provided using keyboard int num = scan.nextInt(); // Closing Scanner after the use scan.close(); // Displaying the number System.out.println("The number entered by user: "+num); } }
Output:
Enter any number: 101 The number entered by user: 101 | https://beginnersbook.com/2017/09/java-program-to-read-integer-value-from-the-standard-input/ | CC-MAIN-2018-05 | en | refinedweb |
public class Solution { public TreeNode InvertTree(TreeNode root) { Invert(root); return root; } public void Invert(TreeNode root) { if (root == null) return; TreeNode temp = root.left; root.left = root.right; root.right = temp; Invert(root.left); Invert(root.right); } }
Basic idea is that you need to invert pointers pointing to root's left and right children at any time going down from the root.
Therefore I'm keeping a temporary variable (temp) for left child, assigning right to left and then temp to right. This is a simple swap operation if you will.
Above is the core of the algorithm. We need to apply this core to both left and right children recursively noting that base case being (root == null) return;. | https://discuss.leetcode.com/topic/31875/recursive-c-solution | CC-MAIN-2018-05 | en | refinedweb |
Caution: The documentation you are viewing is
for an older version of Zend Framework.
You can find the documentation of the current version at docs.zendframework.com
The StandardAutoloader — Zend Framework 2 2.4.9 documentation.
The StandardAutoloader may also be configured at instantiation. Please note:
The following is equivalent to the previous example.
The StandardAutoloader defines the following options.
Initialize a new instance of the object __construct($options = null)
Constructor Takes an optional $options argument. This argument may be an associative array or Traversable object. If not null, the argument is passed to setOptions().
Set object state based on provided options. setOptions($options)
setOptions() Takes an argument of either an associative array or Traversable object. Recognized keys are detailed under Configuration options, with the following behaviors:.
Query fallback autoloader status isFallbackAutoloader()
isFallbackAutoloader() Indicates whether or not this instance is flagged as a fallback autoloader..
Register multiple namespaces with the autoloader registerNamespaces($namespaces)
registerNamespaces() Accepts either an array or Traversable object. It will then iterate through the argument, and pass each item to registerNamespace(). many vendor prefixes with the autoloader registerPrefixes($prefixes)
registerPrefixes() Accepts either an array or Traversable object. It will then iterate through the argument, and pass each item to registerPrefix().().
Please review the examples in the quick start for usage. | https://framework.zend.com/manual/2.4/en/modules/zend.loader.standard-autoloader.html | CC-MAIN-2018-05 | en | refinedweb |
The documentation of the DtoKPiPiCLEO class implements the Dalitz plot fits of the CLEO collaboration for
, Phys.
More...
#include <DtoKPiPiCLEO.h>
The documentation of the DtoKPiPiCLEO class implements the Dalitz plot fits of the CLEO collaboration for
, Phys.
Rev. Lett. 89 (2002) 251802, and
, Phys. Rev. D63 (2001) 092001.
Definition at line 31 of file DtoKPiPiCLEO.h.
Calculate the amplitude for a resonance.
Make a simple clone of this object.
Implements ThePEG::InterfacedBase.
Definition at line 122 of file DtoKPiPiCLEO 128 of file DtoKPiPiCLEO.
Magnitudes and phases of the amplitudes for
.
Amplitude of the non-resonant component
Definition at line 328 of file DtoKPiPiCLEO.h.
Magnitudes and phases of the amplitudes for
.
Amplitude for the
Definition at line 453 of file DtoKPiPiCLEO.h.
Mass, Widths and related parameters.
Whether to use local values for the masses and widths or those from the ParticleData objects
Definition at line 173 of file DtoKPiPiCLEO.h.
Parameters for the phase-space integration.
Maximum weights for the modes
Definition at line 638 of file DtoKPiPiCLEO.h.
Masses for the
Breit-Wigner.
The pion mass
Definition at line 653 of file DtoKPiPiCLEO.h.
Parameters for the Blatt-Weisskopf form-factors.
Radial size for the
Definition at line 623 of file DtoKPiPiCLEO.h.
The static object used to initialize the description of this class.
Indicates that this is a concrete class with persistent data.
Definition at line 155 of file DtoKPiPiCLEO.h. | http://herwig.hepforge.org/doxygen/classHerwig_1_1DtoKPiPiCLEO.html | CC-MAIN-2018-05 | en | refinedweb |
FAQ How do I access the active project?Input(); if (!(input instanceof IFileEditorInput)) return null; return ((IFileEditorInput)input).getFile(); }
See Also:
This FAQ was originally published in Official Eclipse 3.0 FAQs. Copyright 2004, Pearson Education, Inc. All rights reserved. This text is made available here under the terms of the Eclipse Public License v1.0. | http://wiki.eclipse.org/index.php?title=FAQ_How_do_I_access_the_active_project%3F&oldid=6075 | CC-MAIN-2018-05 | en | refinedweb |
Jul 20, 2010, at 2:58 PM, SourceForge.net wrote:
> I know it\'s wierd but 2 always seemed too hard to see and 4 seemed too much.
Please use 4-space indents. This is the standard value used by all major Python projects, and the one described in PEP 8. The minor aesthetic preference you have for 3-space indents will be massively outweighed by every tool (pydev, emacs, vim) fighting you, not to mention the fact that you will need to re-indent any example code that you wish to copy.
The following forum message was posted by at:
I am using RHEL 5.2 and:
Eclipse IDE for C/C++ Developers
Version: Helios Release
Build id: 20100617-1415
The following forum message was posted by at:
As a newbie, my problem is I started out coding with a mixture of space and tab indents and rapidly evolving coding styles. The Eclipse [b]Source->Format Code[/b] is nice but the resulting code still has all my[b] bad indention warnings[/b]. I prefer to use a 3-space indent. I know it\'s wierd but 2 always seemed too hard to see and 4 seemed too much.
I am new to both Eclipse an Python. Pydev certainly makes programming in Python more enjoyable. Thanks for a great tool!
-Ed
The following forum message was posted by at:
Hi Fabio,
Thank you very much for your time!
Firewall and SE Linux are both disabled.
I have 2 network cards installed.
eth0 - Is currently connected to a switch that goes nowhere. Static IP: 172.16.1.2 / Mask: 255.255.254.0
eth1 - Connected to Internet: DHCP / Mask 255.255.255.0
I modified code as follows:
[code]#=======================================================================================================================
# StartClient
#=======================================================================================================================
def StartClient(host, port):
\"\"\" connects to a host/port \"\"\"
PydevdLog(1, \"Connecting to \", host, \":\", str(port))
try:
s = socket(AF_INET, SOCK_STREAM);
s.connect((host, port))
PydevdLog(1, \"Connected.\")
return s
except:
import traceback;traceback.print_exc()
sys.stderr.write(\"server timed out after 10 seconds, could not connect to %s: %s\\n\" % (host, port))
sys.stderr.write(\"Exiting. Bye!\\n\")
sys.exit(1)
[/code]
I now get:
[code]pydev debugger: warning: psyco not available for speedups (the debugger will still work correctly, but a bit slower)
pydev debugger: starting
Traceback (most recent call last):
File \"/home/esutton/setup/eclipse/plugins/org.python.pydev.debug_1.5.9.2010063001/pysrc/pydevd_comm.py\", line 334, in StartClient
s.connect((host, port))
File \"<string>\", line 1, in connect
gaierror: (-2, \'Name or service not known\')
server timed out after 10 seconds, could not connect to localhost: 48145
Exiting. Bye![/code]
Any clues?
eth0 goes nowhere; it is connected to an empty switch. Does the socket connection need loopback enabled or something? Or is there someway to connect to eth1 instead?
Thank you for your help,
-Ed
The following forum message was posted by rekveld at:
hi all,
not too relevant to this list perhaps, but still I\'m happy to share
the cause of the problem I had:
the fstab entry for my volume read like this: rw,exec,auto,users
And it turns out that the \'users\'-option \"automatically implies
noexec, nosuid, nodev unless overridden\" according to wikipedia.
Apparently the options are set sequentially, so that having \'users\' at
the end invisibly sets noexec which was the cause of my problem.
so the following very similar-looking fstab entry works where the
above one doesn\'t: users,rw,exec,auto
what about that !
Took a couple of days to figure out, but I learned a lot.
thanks,
Joost. | https://sourceforge.net/p/pydev/mailman/pydev-users/?viewmonth=201007&viewday=20 | CC-MAIN-2018-05 | en | refinedweb |
Jug Tutorial¶
What is jug?¶
Jug is a simple way to write easily parallelisable programs in Python. It also handles intermediate results for you.
Example¶
This is a simple worked-through example which illustrates what jug does.
Problem¶
Assume that I want to do the following to a collection of images:
- for each image, compute some features
- cluster these features using k-means. In order to find out the number of clusters, I try several values and pick the best result. For each value of k, because of the random initialisation, I run the clustering 10 times.
I could write the following simple code: = argmin(bics) // 10
Very simple and solves the problem. However, if I want to take advantage of the obvious parallelisation of the problem, then I need to write much more complicated code. My traditional approach is to break this down into smaller scripts. I’d have one to compute features for some images, I’d have another to merge all the results together and do some of the clustering, and, finally, one to merge all the results of the different clusterings. These would need to be called with different parameters to explore different areas of the parameter space, so I’d have a couple of scripts just for calling the main computation scripts. Intermediate results would be saved and loaded by the different processes.
This has several problems. The biggest are
- The need to manage intermediate files. These are normally files with long names like features_for_img_0_with_parameter_P.pp.
- The code gets much more complex.
There are minor issues with having to issue several jobs (and having the cluster be idle in the meanwhile), or deciding on how to partition the jobs so that they take roughly the same amount of time, but the two above are the main ones.
Jug solves all these problems!
Tasks¶
The main unit of jug is a Task. Any function can be used to generate a Task. A Task can depend on the results of other Tasks.
The original idea for jug was a Makefile-like environment for declaring Tasks. I have moved beyond that, but it might help you think about what Tasks are.
You create a Task by giving it a function which performs the work and its arguments. The arguments can be either literal values or other tasks (in which case, the function will be called with the result of those tasks!). Jug also understands lists of tasks and dictionaries with tasks. For example, the following code declares the necessary tasks for our problem:
imgs = glob('*.png') feature_tasks = [Task(computefeatures,img,parameter=2) for img in imgs] cluster_tasks = [] bic_tasks = [] for k in range(2, 200): for repeat in range(10): cluster_tasks.append(Task(kmeans,feature_tasks,k=k,random_seed=repeat)) bic_tasks.append(Task(compute_bic,cluster_tasks[-1])) Nr_clusters = Task(argmin,bic_tasks)
Task Generators¶
In the code above, there is a lot of code of the form Task(function,args), so maybe it should read function(args). A simple helper function aids this process:
from jug import TaskGenerator computefeatures = TaskGenerator(computefeatures) kmeans = TaskGenerator(kmeans) compute_bic = TaskGenerator(compute_bic) @TaskGenerator def Nr_Clusters(bics): return argmin(bics) // 10(bics)
You can see that this code is almost identical to our original sequential code, except for the decorators at the top and the fact that Nr_clusters is now a function (actually a TaskGenerator, look at the use of a decorators).
This file is called the jugfile (you should name it jugfile.py on the filesystem) and specifies your problem.
Jug¶
So far, we have achieved seemingly little. We have turned a simple piece of sequential code into something that generates Task objects, but does not actually perform any work. The final piece is jug. Jug takes these Task objects and runs them. Its main loop is basically
while len(tasks) > 0: for t in tasks: if can_run(t): # ensures that all dependencies have been run if need_to_run(t) and not is_running(t): t.run() tasks.remove(t)
If you run jug on the script above, you will simply have reproduced the original code with the added benefit of having all the intermediate results saved.
The interesting is what happens when you run several instances of jug at the same time. They will start running Tasks, but each instance will run its own tasks. This allows you to take advantage of multiple processors in a way that keeps the processors all occupied as long as there is work to be done, handles the implicit dependencies, and passes functions the right values. Note also that, unlike more traditional parallel processing frameworks (like MPI), jug has no problems with the number of participating processors varying throughout the job.
Behind the scenes, jug is using the filesystem to both save intermediate results (which get passed around) and to lock running tasks so that each task is only run once (the actual main loop is thus a bit more complex than shown above).
Intermediate and Final Results¶
You can obtain the final results of your computation by setting up a task that saves them to disk and loading them from there. If the results of your computation are simple enough, this might be the simplest way.
Another way, which is also the way to access the intermediate results if you want them, is to run the jug script and then access the result property of the Task object. For example,
img = glob('*.png') features = [computefeatures(img,parameter=2) for img in imgs] ... feature_values = [feat.result for feat in features]
If the values are not accessible, this raises an exception.
Advantages¶
jug is an attempt to get something that works in the setting that I have found myself in: code that is embarrassingly parallel with a couple of points where all the results of previous processing are merged, often in a simple way. It is also a way for me to manage either the explosion of temporary files that plagued my code and the brittleness of making sure that all results from separate processors are merged correctly in my ad hoc scripts.
Limitations¶
This is not an attempt to replace libraries such as MPI in any way. For code that has many more merge points (i.e., code locations which all threads must reach at the same time), this won’t do. It also won’t do if the individual tasks are so small that the over-head of managing them swamps out the performance gains of parallelisation. In my code, most of the times, each task takes 20 seconds to a few minutes. Just enough to make the managing time irrelevant, but fast enough that the main job can be broken into thousands of tiny pieces. As a rule of thumb, tasks that last less than 5 seconds should probably be merged together.
The system makes it too easy to save all intermediate results and run out of disk space.
This is still Python, not a true parallel programming language. The abstraction will sometimes leak through, for example, if you try to pass a Task to a function which expects a real value. Recall how we had to re-write the line Nr_clusters = argmin(bics) // 10 above. | http://jug.readthedocs.io/en/latest/tutorial.html | CC-MAIN-2017-39 | en | refinedweb |
Nearly Orthogonal Latin Hypercube Generator
This library allows to generate Nearly Orthogonal Latin Hypercubes (NOLH) according to Cioppa (2007) and De Rainville et al. (2012) and reference therein.
Installation
Clone the repository
$ git clone
and from the cloned directory type
$ python setup.py install
PyNOLH requires Numpy.
Usage
The library contains a single generator and a function to retrieve the necessary parameters from a desired dimensionality. To generate a 6 dimension NOLH from the indentity permutation:
import pynolh dim = 6 m, q, r = pynolh.params(dim) conf = range(q) remove = range(dim - r, dim) nolh = pynolh.nolh(conf, remove)
The NOLH returned is a numpy array with one row being one sample.
You can also produce a NOLH from a random permutation configuration vector and remove random columns:
import pynolh import random dim = 6 m, q, r = pynolh.params(dim) conf = random.sample(range(q), q) remove = random.sample(range(q), r) nolh = pynolh.nolh(conf, remove)
The nolh() function accepts configurations with either numbers in [0 q-1] or [1 q].
import pynolh dim = 6 m, q, r = pynolh.params(dim) conf = range(1, q + 1) remove = range(dim - r + 1, dim + 1) nolh = pynolh.nolh(conf, remove)
Some prebuilt configurations are given within the library. The CONF module attribute is a dictionary with the dimension as key and a configuration, columns to remove pair as value.
import pynolh conf, remove = pynolh.CONF[6] nolh = pynolh.nolh(conf, remove)
The configurations for dimensions 2 to 7 are from Cioppa (2007) and 8 to 29 are from De Rainville et al. 2012.
Configuration Repository
See the Quasi Random Sequences Repository for more configurations.
References
Cioppa, T. M., & Lucas, T. W. (2007). Efficient nearly orthogonal and space-filling Latin hypercubes. Technometrics, 49(1).
De Rainville, F.-M., Gagné, C., Teytaud, O., & Laurendeau, D. (2012). Evolutionary optimization of low-discrepancy sequences. ACM Transactions on Modeling and Computer Simulation (TOMACS), 22(2), 9.
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pynolh/ | CC-MAIN-2017-39 | en | refinedweb |
Windows Azure Storage Blob (WASB) is a general-purpose object store.
The features of WASB include:
Object store with flat namespace.
Storage account consists of containers, which in turn have data in the form of blobs.
Authentication based on shared secrets - Account Access Keys (for account-level authorization) and Shared Access Signature Keys (for account, container, or blob authorization).
Overview of Configuring and Using WASB with HDP
The following table provides an overview of tasks related to configuring and using HDP with WASB. Click on the linked topics to get more information about specific tasks. | https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_cloud-data-access/content/wasb-get-started.html | CC-MAIN-2017-39 | en | refinedweb |
The answer to all of the above is to have a uniform Project Geometry for your organization. When I talk about the geometry, I mean only the software aspect of the project, the code that gets churned out by the software development team, the documentation artifacts that get generated in the process and the machinery that builds the code and deploys the binary to the desired target platform. The geometry ensures a uniformity not only in the look and feel of the project (the directory hierarchy, package structure, archetypes, build engine etc.), but also the innards of implementation which include the whole gamut from the design of the exception hierarchy down to the details of how the application interfaces with the external services layer. The following rumblings are some of my thoughts on what I mean when I talk about Project Geometry.
Software Reuse
In the article Four Dynamics for Bringing Use Back Into Software Reuse published in the Communications of the ACM, January 2006, Kevin C Desouza, Yukika Awazu and Amrit Tiwana identify three salient dynamics associated with the knowledge consumption lifecycle of a project - reuse, redesign and recode. They define
Reuse is the application of existing software artifacts as is; redesign is the act of altering existing software artifacts; and recoding is the discovery of new software artifacts through construction of software code or system designs.
In each of the above dynamics, there is an implicit assumption of pre-existence of software artifacts which finds place in the current lifecycle through a discovery process - either as-is or in a derived manifestation.
The question is : where from do we get these artifacts that can be reused ?
The Project Container
Every modern day IT organization who delivers software can have a Project Container, a meta-project which helps individual project teams to incubate new projects. The project container evangelizes the best practices for development, deployment and documentation and provides plug-ins and archetypes to kick-start a new project for the organization.
It should be as simple as 1-2-3 .. Let us consider a case study ..
For my organization, the build platform of choice for a Java based project is maven 2 and I should be able to generate a standard project skeleton structure from an archetype which is part of my project container. Here they go ..
- Download plugin for bootstrap from the project container repository
- Run maven install (
mvn install ...)
- Setup project home
- Create archetype (
mvn archetype:create -D... ...)
Boom .. we go .. my entire project hierarchy skeleton is ready with the corporate standard directory hierarchy, package naming conventions, documentation folders and (most importantly) a skeleton Project Object Model (POM) for my project. When I open up my IDE, I can find my prject already installed in the workspace! I can straightway start adding external dependencies to the pom.xml. Maven has really done wonders to the project engineering aspect through its concepts of archetypes, plugins and POMs. I can start defining my own project specific package hierarchy and write my own business logic.
My Project Structure
Any Java based project bootstrapped using the above project container of my organization bears the stamp of its identity. With its families of plug-ins and artifacts, the project container ensures a uniform geometry of all projects delivered from this place. It's really geometry in action - promoting reuse and uniformity of structure, thereby making life easier for all team members joining the project later in the lifecycle. Joel Spolsky talks about the Development Abstraction Layer as an illusion created by the management with its associated services which makes the programmer feel that a software company can be run only by writing code. In this context, the project container takes care of the engineering aspects of the project environment and presents to the programmer a feeling that the software that he delivers is only about writing the business logic. The other machineries like coding conventions (comes integrated with the IDE through container based checkstyles), build engine (again comes with the incubation process), documentation (maven based, comes free with the project container) and project portal (maven generated with container based customization) gets plugged in automatically as part of the bootstrapping process. The best thing is that the process is repeatable - every project based on a specific platform gets bootstrapped the same way with the same conventions replicated, resulting in a uniform project geometry.
Is This All ?
Actually project geometry is extensible to the limits you take it to. I can consider a standard infrastructure layer to be part of my project container for Java based projects. The exception hierarchy, standard utilities, a generic database layer, a generic messaging layer can all be part of the container.
But, what if I don't need'em all ?
You pay only for what you take. The project container repository gives you all options that it has to provide - pick and choose only the ones you need and set up dependencies in your POM. Remember, Maven 2 can handle transitive dependencies, one feature that we all have been crying for months.
The Sky is the Limit!
Taking it to the extremes - the project container can offer you options of implementation of some features if you base your code based on the container's contracts. This implies that the project container is not a mere engineering vehicle - it acts as a framework as well. Suppose in your Java application you need to have an Application Context for obvious reasons. You can design your application's context based upon the contracts exposed by your project container. And you can choose to select one of the many possible implementations during deployment - you can choose to use Spring's IoC container based context implementation or you can select the vanilla flat namespace based default implementation provided by the container itself. Whatever you do, you always honor the basic guideline, that of discovering from the project container and making it suitable for your use, and in the process maintaining the uniform geometry within your project. | http://debasishg.blogspot.com/2006/04/project-geometry.html | CC-MAIN-2017-39 | en | refinedweb |
It is over 5 weeks since I submitted this report and nearly 3 weeks since I asked for an update. Please provide an update as to the status of this issue?
Have you confirmed the issue?
Do you intend to fix it in this release of Visual Studio (2010)?
I wish to use codecvt_utf8 to write UTF-8 data to a std::wofstream file in a MFC C++ application. Compiles and works OK in the Release configuration. Fails to compile in Debug configuration. Workaround is to delete the following lines always added by the MFC project Wizard. However, I do not wish to do this in my large MFC application. #ifdef _DEBUG #define new DEBUG_NEW #endif It seems that the 'new' operator in std::codecvt_utf8 conflicts with the MFC 'DEBUG_NEW' operator.
Please wait...
Thanks for reporting this bug. We've fixed it, and the fix will be available in VC11.
According to the Standard, macroizing keywords when including Standard Library headers triggers undefined behavior, and VC11 will emit a hard #error when it detects this. However, macroizing "new" is unfortunately very common, so we've added special guards to all C++ Standard Library headers that will grant them immunity to macroized "new".
If you have any further questions, feel free to E-mail me at stl@microsoft.com .
Stephan T. Lavavej
Visual C++ Libraries Developer | https://connect.microsoft.com/VisualStudio/feedback/details/683483 | CC-MAIN-2017-39 | en | refinedweb |
CodePlexProject Hosting for Open Source Software
Is this what you are looking for?
var scope = engine.CreateScope()
engine.Execute(@”
def foo(a,b):
return a+b
def bar(a,b):
return a*b
def baz():
return 3
“, scope);
var foo = scope.GetVariable<Func<int, int, int>>(“foo”)
Console.WriteLine(foo(3, 4));
var bar = scope.GetVariable<Func<double, double, double>>(“bar”)
Console.WriteLine(bar(3.0, 4.0));
var baz = scope.GetVariable<Func<int>>(“baz”)
Console.WriteLine(baz());
Tomas
From: dvn21 [mailto:notifications@codeplex.com]
Sent: Tuesday, March 24, 2009 12:34 PM
To: Tomas Matousek
Subject: Invoke a script funtion? [dlr:51160]
From: dvn21
Hi,
I am kind of a newbie on this. I have a question.
I have a script file written in IronPython containing more than one function. From DLR, is it possible to invoke a named function? I have looked at some functions to create script source
CreateScriptSource,CreateScriptSourceFromFile, CreateScriptSourceFromString and none of them lets me specify function name. Am I missing something?
Please help! think you’re looking for something pretty simple in CLR 4.0, using the common hosting API:
var sr =
ScriptRuntime.CreateFromConfiguration();
dynamic random = sr.UseFile(@"random.py");
random.shuffle(...ienumerable...)
It’s still pretty simple now, without the new ‘dynamic’ keyword in C#, you could:
var
random = sr.ExecuteFile("...\random.py");
var ops = sr.GetEngine(“py”).ObjectOperations
ops.Invoke(random.GetVariable(“shuffle”), ... ienumerable...)
If you aren’t trying to host various languages, Python offers a convenience factory to get a runtime with only python in it:
ScriptRuntime py =
Python.CreateRuntime();
You can see the hosting APIs on the codeplex project under the “docs and specs” link on the front page.
Bill
From: dvn21 [mailto:notifications@codeplex.com]
Sent: Tuesday, March 24, 2009 12:34 PM
To: Bill Chiles
Subject: Invoke a script funtion? [dlr:51160]
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://dlr.codeplex.com/discussions/51160 | CC-MAIN-2017-39 | en | refinedweb |
Thank you for this message, Christian. It was really appreciated here. Christian Perrier <bubulle@debian.org> writes: > I think that, in the very long threads that are currently cluttering up > -devel, we would benefit from most participants to cool down and > consider moving from the extreme positions I've seen when overreading > the threads. I'm personally trying to take a breather right now and erring on the side of not responding. Joss going from "what's the point in Standards-Version?" to filing a Lintian bug asking for configuration capabilities to make it easier to ignore the tag in the course of what felt like less than a day (this is subjective -- I may well be *way* off) really hammered my mood more than it probably should have. As both a Policy delegate and a Lintian maintainer, I felt like it was direct criticism both coming and going, in a fast-escalating argument that I didn't really have the energy to expend on. I'm feeling way too much stress over it and it's feeling personal, and that's not good, particularly since I'm sure it was not meant as anything remotely personal. It's too easy to personally identify with the things one works on, for all of us I think. One of the things that I find difficult to deal with about these long threads is that they very quickly begin to take up a huge amount of time, but there's a feeling that if one doesn't participate, one's voice or position won't be heard. Silence is taken to mean consent, or at least apathy, and then people start acting. I suspect that feeling is partly not true, but not entirely. This is one of the reasons, from my perspective, why we have a Policy process that doesn't come to a conclusion in a 200-message thread over the course of a couple of days. (It, of course, errs on the side of being way *too* slow, but that's mostly lack of manpower.) I really wish people would not express apparently final personal decisions after a brief flurry of messages that many people have barely had time to wrap their mind around, particularly in the middle of a heated conversation. > - ddebs: the situation seems fairly balanced between people who > feel the need for a separate namespace by extension and those who > think this is not necessary. The general need for automatically > built debug pacakages does not seem to be questioned strongly > (but I may have missed something: I certainly haven't read all > thread branches, particularly when people were called names..:-)) I think there are a surprising number of parts of this discussion where we've reached consensus. The number of things we're hammering out seems to be decreasing. It would be great if someone would be willing to take on writing up the consensus as they see it; I did one message, but things have changed since then. -- Russ Allbery (rra@debian.org) <> | https://lists.debian.org/debian-devel/2009/08/msg00565.html | CC-MAIN-2018-09 | en | refinedweb |
Focus/Activation/Update problem for widget in nested QGraphicsScenes
Hi,
(Using Qt 4.8.1)
I've have an editable QComboBox which I put into a QGraphicsScene. The graphics scene gets visualised by a QGraphicsView which itself gets added to another QGraphicsScene. This second graphics scene gets again visualised by a QGraphicsView. (see code below)
I've got a few issues with this configuration:
I have to set the cache mode to ItemCoordinateCache or DeviceCoordinateCache in order to see the drop down list of the combo box. Not a major issue so far, but why is this necessary?
When hoovering with the mouse over the edit field of the combo box I would expect the mouse cursor to change into the caret symbol, this does not happen.
When clicking once into the edit field the text caret does not appear, typing however changes the text, so the edit field 'somehow' gets the keyboard events.
When using a QOpenGL widget as viewport then it seems like the inner scene does not update properly.
Has anyone any idea what I have to do so that the combo box behaves as if it would be added directly into the outer scene? Has anyone ever tried something like this successfully? Any idea or suggestion is welcome and appreciated.
Following the sourcecode which shows the described behaviour. This is just a simplified example of what I want to do. Later I want to use all kind of other widgets and dialogs for where I use the combo box at the moment.
@
#include <QApplication>
#include <QComboBox>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QGraphicsProxyWidget>
#include <QGLWidget>
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QComboBox* combo = new QComboBox();
combo->addItem("First");
combo->addItem("Second");
combo->setEditable(true);
combo->setObjectName("the combo box");
// Create the outer scene and view, it may contain several 'inner' graphics
// views and scenes.
QGraphicsScene* outerScene = new QGraphicsScene();
outerScene->setObjectName("the outer scene");
QGraphicsView* outerView = new QGraphicsView(outerScene);
outerView->setObjectName("the outer view");
// The outer graphics view is our 'main window'
outerView->show();
// if set to 0 then the combo box gets added directly into the outer scene,
// if set to 1 then the combo box gets added into a graphics scene which
// gets visualised by a graphics view added to the outer scene.
if (0)
{
// In this scenario we add the combo box into the outer scene.
outerScene->addWidget(combo);
}
else
{
// In the second scenario we add another QGraphicsScene and QGraphicsView,
// the inner scene and inner view. The inner view gets added to the outer
// scene using a QProxyWidget. The combo box now gets added to this inner
// scene.
// Adding the combo box to its 'own' scene allows us to do all kind of
// funny things like scaling, rotation, etc.
QGraphicsScene* innerScene = new QGraphicsScene(); innerScene->setObjectName("the inner scene"); QGraphicsView* innerView = new QGraphicsView(innerScene); innerView->setObjectName("the inner view"); // If set to 1 then an OpenGL viewport gets set onto the inner scene if (0) { innerView->setViewport(new QGLWidget); } QGraphicsProxyWidget* innerViewProxy = outerScene->addWidget(innerView); innerViewProxy->setObjectName("the inner view proxy"); innerViewProxy->setFlag(QGraphicsItem::ItemIsPanel, true); innerViewProxy->setFlag(QGraphicsItem::ItemIsFocusable, true); innerViewProxy->setFlag(QGraphicsItem::ItemAcceptsInputMethod, true); QGraphicsProxyWidget* proxy = innerScene->addWidget(combo); proxy->setObjectName("the combo proxy"); proxy->setFlag(QGraphicsItem::ItemIsPanel, true); proxy->setFlag(QGraphicsItem::ItemIsFocusable, true); proxy->setFlag(QGraphicsItem::ItemAcceptsInputMethod, true); // Seems like we have to set the cache mode to Item or DeviceCorrdinateCaching // in order to see the drop down list. proxy->setCacheMode(QGraphicsItem::ItemCoordinateCache); //proxy->setCacheMode(QGraphicsItem::DeviceCoordinateCache); innerScene->setActiveWindow(proxy); // Turn the combo box upside down and scale it a bit. It gets rotated and // scaled independent of objects in the outer scene, Nice! innerView->scale(2.0, 2.0); innerView->rotate(180.0);
}
return a.exec();
}
@
Here the project file:
@
QT += core gui opengl
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
TARGET = EmbeddedGraphicsViewTest
TEMPLATE = app
SOURCES += main.cpp
HEADERS +=
FORMS +=
@
Thank You!
Peer
edited: added a comment about the used Qt version
- Asperamanca
To me, this sounds like a case of "not everything that can be done should be done". Nested graphic scenes?
What are you trying to accomplish? I think it's likely you can accomplish it with a simpler structure, and at least some of your problems would go away in the process.
Well Asperamanca, you are right, not everything that can be done should be done. However, in this case I do it for a special purpose, actually for two:
I want to display UI items in an outer scene. We talk here about mostly custom QGraphicsItems or Objects deriving from QWidget (up to comples UI with lots of widgets and panels). So far no problem, all that is properly supported and works fine. However, due to the potentially large number of objects I want to use a mechanism where I put some of the items into a second 'inner scene' which is part of the outer scene. The inner scene is display scaled down so that the items appearing in the inner scene are kind of minimized but still show their content as if they would be in the outer scene. No mouse interaction necessary so far. Works fairly good.
Some of the Ui which I want to display in the outer scene shall render using OpenGL. Because I need several of them in parallel I need to have several scenes with QGraphicsViews using a QGLWidget as viewport. I also need to add this UI elements into the outer scene. And here I also need some user interaction and therefore, events etc. should work.
If you, or someone else, has a better way of doing what I need/want, then I would love to hear about it ;)
Thanks for your reply!
Peer
Ok I learned something new in the meantime. I did not knew that I can apply transformations on proxies directly.
So there is only one scenarion left where I think I have to use nested scenes. This is when I want to display several OpenGL items in one scene at the same time. Anybody ever tried this?
- Asperamanca
If your "outer scene" is basically just widgets and regular UI, why do it in GraphicsView in the first place? Are the custom GraphicsItems just Widgets you happened to write based on GraphicsItem? Or are they truly something you wouldn't do as a QWidget?
If there is a way to cleanly separate your UI between a QWidget and a QGraphicsView part, I advise to do it.
The outer scene consists of widgets, regular UI, custom QGraphicsItems and the OpenGL displays I am talking about. I want to be able to move the items around on the scene, I want to scale (maybe later rotate) them, I want to group them and scale them as a group, I want to dock them together, etc. . For this functionality the QGraphicsScene framework is ideal; if I would not have to display some OpenGL displays I would not have any problems ;)
I know that I have to find another solution If I cannot get this done, however before I did not dig deep on it I do not want to give up so easily. Thanks!
@joesteeve: My solution is to use just one scene and no OpenGL.
Embedding other QGraphicViews into a QGraphicScene seems not to be supported, and because this would have to work for me in order to use OpenGL I do not use it. I would have used OpenGL just for displaying image data, however I was able to achieve a good enough performance by using Pixmaps, so I am fine. | https://forum.qt.io/topic/16568/focus-activation-update-problem-for-widget-in-nested-qgraphicsscenes | CC-MAIN-2018-09 | en | refinedweb |
Contents
- Introduction
- Viewing global variables in the terminal
- Creating a global variable of the terminal
- Receiving a value of a global variable
- Global variable names
- Checking the existence of a variable
- Time of a global variable
- Viewing all global variables
- Deleting global variables
- GlobalVariablesFlush function
- Temporary variable, GlobalVariableTemp function
- Changing a variable by condition, GlobalVariableSetOnCondition function
- Class for easy work with global variables
- Conclusion
- Attachments
Introduction
Global variables of the terminal (Fig. 1) are the unique feature of MetaTrader and MQL language.
Fig. 1. Code fragment with the global variables
Be sure not to confuse terminal global variables with well-known program global variables (Fig. 2) trying to find their counterparts in other programming languages.
Fig. 2. MovingAverage EA code fragment from the terminal examples. The program global variables are highlighted in red
Since the terminal global variables do not have exact matches in other programming languages, they are not popular among MQL5 novice learners. Perhaps they simply have no idea how and why they can be used or their application seems too complicated due to rather cumbersome function names: GlobalVariableSet(), GlobalVariableGet(), GlobalVariableCheck(), etc.
The main feature of the terminal global variables is that they retain their values even after the terminal is closed. That is why they provide very convenient and fast means for storing important data and are almost indispensable when developing reliable EAs involving complex interaction between orders. After you master the global variables, you will no more be able to imagine developing EAs on MQL5 without them.
The article provides practical examples allowing you to learn all functions for working with global variables, examine their features and application methods, as well as develop a class that significantly simplifies and accelerates working with them.
The functions for working with the global variables can be found in the MQL5 Documentation here.
Viewing global variables in the terminal
While in the terminal, execute the following command: Main menu — Tools — Global Variables. The Global Variables window appears (Fig. 3).
Fig. 3. The Global Variables window in the terminal
Basically, all works with global variables are performed programmatically. However, this window might be useful when testing EAs. It allows you to view all the terminal global variables as well as edit names and values. To change a variable, click on the field with its name or value. Also, the window allows creating new variables. To do this, click Add in the upper right corner. Delete button is used to delete the global variables. Before deletion, click on the necessary variable to make the button active. Practice in creating, changing and deleting the global variables. Note: if the window already contains some variables when you open it, leave them intact since they may be necessary for an EA launched at your account.
A global variable has three attributes: name, value and time. The Time field displays when the variable was last accessed. Upon expiration of four weeks after the last access, the variable is deleted automatically. However, if the variable stores important data, it should be accessed periodically to extend its lifetime.
Creating a global variable of the terminal
The variable is generated automatically when a value is assigned to it. If the variable with the given name already exists, its value is updated. The GlobalVariableSet() function is used to assign a value to the variable. The two parameters are passed to the function: variable name (string) and its value (double type). Let's try to create a variable. Open MetaEditor, create a script and write the following code to its OnStart() function:
GlobalVariableSet("test",1.23);
Execute the script and open the global variables window from the terminal. The window should contain a new variable named "test" and the value of 1.23 (Fig. 4).
Fig. 4. Fragment of the global variables window with the new "test" variable
The code of the example can be found in the sGVTestCreate script.
Receiving a value of a global variable
After the script from the previous example completes its work, the variable still exists. Let's look at its value. The GlobalVariableGet() function is used to receive the value. There are two function call methods. When applying the first one, only a name is passed to the function. The function returns the double type value:
double val=GlobalVariableGet("test"); Alert(val);
When running the code, a window with the value of 1.23 is opened. The example can be found in the sGVTestGet1 script attached below.
When applying the second method, the two parameters – name and double variable for the value (the second parameter is passed by the reference) – are passed to the function that in turn returns true or false depending on the results:
double val; bool result=GlobalVariableGet("test",val); Alert(result," ",val);
The window with the message "true 1.23" is opened as a result.
If we try to receive the value of a non-existent variable, the function returns false and 0. Let's change a bit the previous code sample: assign 1.0 to the 'val' variable when declaring and try receiving the value of the non-existent "test2" variable:
double val=1.0; bool result=GlobalVariableGet("test2",val); Alert(result," ",val);
The window with the message "false 0.0" is opened as a result. The example can be found in the sGVTestGet2-2 script attached below.
When calling the function using the first method, we also obtain 0.0 in case of a non-existent variable. However, the error is received as well. By using the first function call method and error check, we may obtain the analog of the second function method:
ResetLastError(); double val=GlobalVariableGet("test2"); int err=GetLastError(); Alert(val," ",err);
As a result of the code operation (the example can be found in the sGVTestGet1-2 script), the message "0.0 4501" is opened. 0.0 is a value, 4501 is an error code — "Global variable of the client terminal not found". This is not a critical error, but rather notification. You can refer to a non-existent variable if the algorithm permits that. For example, if you want to track the maximum equity:
GlobalVariableSet("max_equity",MathMax(GlobalVariableGet("max_equity",AccountInfoDouble(ACCOUNT_EQUITY)));
The code works correctly even if the "max_equity" variable does not exist. First, the MathMax() function selects the maximum value between the actual equity and the one previously saved in the "max_equity" variable. Since the variable does not exist, we receive the actual equity value.
Global variable names
As we can see, the global variable name is a string. There are no restrictions on the characters for symbol names and their order. Any characters that can be typed on the keyboard can be used in names, including spaces and characters that are forbidden in file names. However, I recommend selecting simple and easily readable names, including characters, numbers and underscore similar to common variables.
There is only one significant limitation, which requires accuracy when selecting names for global variables — name length: no more than 63 characters.
Checking the existence of a variable
The GlobalVariableCheck() function is used to check if the variable exists. A single parameter is passed to the function — the variable name. If the variable exists, the function returns true, otherwise, false. Let's check if the "test" and "test2" variables exist:
bool check1=GlobalVariableCheck("test"); bool check2=GlobalVariableCheck("test2"); Alert(check1," ",check2);
This example can be found in the sGVTestCheck script attached below. "true false" message is received as a result of the script operation — the "test" variable exists, while the "test2" one does not.
Sometimes, the check for a variable existence is necessary, for example, when tracking a minimum equity. If we replace the MathMax() functions with the MathMin() ones in the above example related to tracking the maximum equity, it will work incorrectly and the variable will always contain 0.
In this case, the check for a variable existence can help:
if(GlobalVariableCheck("min_equity")){ GlobalVariableSet("min_equity",MathMin(GlobalVariableGet("min_equity"),AccountInfoDouble(ACCOUNT_EQUITY))); } else{ GlobalVariableSet("min_equity",AccountInfoDouble(ACCOUNT_EQUITY));}
If the variable exists, select the smallest value using the MathMin() function. Otherwise, assign the equity value right away.
Time of a global variable
The global variable time we have already seen on Fig. 3 can be obtained using the GlobalVariableTime() function. A single parameter is passed to the function — the variable name. The function returns the datetime type value:
datetime result=GlobalVariableTime("test"); Alert(result);
The code can be found in the sGVTestTime script attached below. The variable time attribute value is changed only when accessing it, i.e. no other functions can change the variable time value when using the GlobalVariableSet() and GlobalVariableGet() functions. If we manually change the variable via the Global Variables window, its time is changed as well (regardless of whether we change its value or name).
As I have already said, the variable exists four weeks since the last access to it and is automatically deleted by the terminal afterwards.
Viewing all global variables
Sometimes, we need a global variable not knowing its exact name. We may remember the beginning but not the end, for example: gvar1, gvar2, etc. In order to find such variables, we need to iterate all global variables of the terminal and check their names. To do this, we need the GlobalVariablesTotal() and GlobalVariableName() functions. The GlobalVariablesTotsl() function returns the total number of global variables. The GlobalVariableName() one returns the variable name by its index. A single int type parameter is passed to the function. First, let's have a look at all variables and display their names and values in the message box:
Alert("=== Start ==="); int total=GlobalVariablesTotal(); for(int i=0;i<total;i++){ Alert(GlobalVariableName(i)," = ",GlobalVariableGet(GlobalVariableName(i))); }
The window with all variable names and values is opened as a result (Fig. 5). The code can be found in the sGVTestAllNames script attached below.
Fig. 5. The message window containing all global variables of the terminal
Let's add a new check in order to view the variables with certain attributes in names. The following example illustrates a check for variable names to begin with "gvar" (this example can be found in the sGVTestAllNames2 script attached below):
Alert("=== Start ==="); int total=GlobalVariablesTotal(); for(int i=0;i<total;i++){ if(StringFind(GlobalVariableName(i),"gvar",0)==0){ Alert(GlobalVariableName(i)," = ",GlobalVariableGet(GlobalVariableName(i))); } }
The check is performed using the StringFind() function. If you want to improve your skills in working with string functions, read the article MQL5 Programming Basics: Strings.
Deleting global variables
The GlobalVariableDel() function for deleting one global variable receives a single parameter — variable name. Delete the previously created "test" variable (sGVTestDelete script attached below):
GlobalVariableDel("test");
In order to check the operation results, you may use sGVTestGet2-1 or sGVTestGet2-2 script or open the global variables window.
Removal of a single variable is simple, but most often you want to delete more than one variable. The GlobalVariablesDeleteAll() function is used for that. Two optional parameters are passed to the function. If we call the function without parameters, all global variables are deleted. Usually, it is necessary to delete only a group of variables having the same prefix (beginning of a name). The first function parameter is used for specifying the prefix. Let's experiment with this function. First, we should create a number of variables with different prefixes:
GlobalVariableSet("gr1_var1",1.2); GlobalVariableSet("gr1_var2",3.4); GlobalVariableSet("gr2_var1",5.6); GlobalVariableSet("gr2_var2",7.8);
The code creates four variables: the two ones having gr1_ prefix, as well as another two ones with the _gr2 prefix. The code can be found in the sGVTestCreate4 script attached below. Examine the script operation results by launching the sGVTestAllNames script (Fig. 6).
Fig. 6. Variables created by the sGVTestCreate4 script
Now, let's delete the variables beginning with gr1_ (sGVTestDeleteGroup script attached below):
GlobalVariablesDeleteAll("gr1_");
After executing the code, view all the variables once again using the sGVTestAllNames script (Fig. 7). We will again see the list of all variables except for the two ones beginning with gr1_.
Fig. 7. Variables beginning with gr1_ have been deleted
The second parameter of the GlobalVariableDeleteAll() function is used if you need to delete old variables only. The date is specified in this parameter. If the date of the last access to the variable is less than the specified one, the variable is deleted. Please note that only the variables with a lesser time are deleted, while the ones with a lesser or equal time remain. The variables can be additionally selected by prefix. If selection by prefix is not needed, the default NULL value is set as the first parameter:
GlobalVariablesDeleteAll(NULL,StringToTime("2016.10.01 12:37"));
In reality, deletion of variables by time can be necessary only for solving some very rare and unusual tasks, therefore let me not dwell too much on that topic.
GlobalVariablesFlush function
When closing the terminal, the global variables are automatically saved to the file to be read again by the terminal during its launch. There is no need to know all the subtleties of the process (file name, data storage format, etc.) when using global variables.
In case of the terminal's emergency shutdown, the global variables may be lost. The GlobalVariableFlush() function helps you to avoid this. The function forcibly saves the global variables. After the values are set by the GlobalVariableSet() function or the variables are deleted, simply call the GlobalVariableFlush() function. The function is called without parameters:
GlobalVariableSet("gr1_var1",1.2); GlobalVariableSet("gr1_var2",3.4); GlobalVariableSet("gr2_var1",5.6); GlobalVariableSet("gr2_var2",7.8); GlobalVariablesFlush();
The code can be found in the attached sGVTestFlush file.
It would be good to illustrate the operation of the GlobalVariableFlush() function, but unfortunately, I failed to make the global variables disappear during the terminal's emergency shutdown. The terminal operation was interrupted via the Processes tab of the Task Manager. Perhaps, the global variables may disappear in case of a PC blackout. An unexpected PC blackout happens rarely nowadays, since the vast majority of users have laptops, while desktop PCs usually feature uninterruptible power supply devices. If a terminal works on a dedicated server, then protection against power supply failure is even more significant. Therefore, global variables are quite reliable means for saving data even without the GlobalVariableFlush() function.
Temporary variable, GlobalVariableTemp function
The GlobalVariableTemp() function creates a temporary global variable (that exists till the terminal is stopped). In a few years that I develop EAs on MQL5, I have never faced the need for such a variable. Moreover, the very concept of a temporary global variable contradicts the basic principle of their application — long-term data storage not affected by the terminal relaunches. But since the function exists in MQL5 language, we should pay it some attention in case you need it.
When calling the function, a single parameter — the variable name — is passed to it. If the variable with such a name does not exist, a temporary variable with the value of 0 is created. After that, use the GlobalVariableSet() function to assign a value to it so that it can be used as usual. If the variable already exists (created by the GlobalVariableSet() function earlier), it is not converted into a temporary one:
GlobalVariableSet("test",1.2); // set the value variable to make sure the variable exists GlobalVariableTemp("temp"); // create a temporary variable Alert("temp variable value right after creation - ",GlobalVariableGet("temp")); GlobalVariableSet("temp",3.4); // set the temporary variable value GlobalVariableTemp("test"); // attempt to convert the "test" variable into a temporary one
This example can be found in the sGVTestTemp file attached below. After launching the script, open the global variables window. It should contain the "temp" variable with the value of 3.4 and "test" with the value of 1.2. Close the global variables window, relaunch the terminal and open the window again. The "test" variable is saved, while "temp" is no more there.
Changing a variable by condition, GlobalVariableSetOnCondition function
Now, it is finally the time to consider the last and, in my opinion, the most interesting function: GlobalVariableSetOnCondition(). Three parameters are passed to the function: name, new value and test value. If the variable value is equal to the test one, it receives a new value and the function returns true, otherwise it returns false (the same happens if the variable does not exist).
In the terms of the operation principles, the function is similar to the following code:
double check_value=1; double value=2; if(GlobalVariableGet("test")==check_value){ GlobalVariableSet("test",value); return(true); } else { return(false); }
If the "test" global variable is equal to check_value, the value is assigned to it and true is returned, otherwise — false. The check_value variable has the default value of 1, so that false is returned in case the "test" global variable does not exist.
The main objective of the GlobalVariableSetOnCondition() function is to provide consistent execution of several EAs. Since present-day operating systems are multi-task programs, while each EA can be considered a separate thread, no one can guarantee that all EAs will be able to perform all their tasks one after another.
If you have some experience in working with MetaTrader 4, you may remember busy trade flows. Now, several EAs may send trade requests to the server simultaneously and they will be executed, unlike previous times when only one EA was able to send a request at a given moment. If there were several EAs in the terminal, busy trade flow error occurred quite often when executing market operations. When opening and closing orders, the error was more like a nuisance since good EAs were able to repeat their attempts to close or open a position. Besides, different EAs opening or closing positions at the same point in time were quite a rare occasion. However, if a trailing stop function (built into an EA rather than the terminal) was activated at several EAs, only one of them was able to modify its stop loss per tick which was a problem already. Despite the fact there can be no such issue now, there can still be tasks requiring sequential execution of some EAs.
The mentioned global variable is used to ensure the consistent work of the group of EAs. At the start of the OnTick() function execution, assign value to the variable, so that other EAs can see that some EA is working already and they should either interrupt their OnTick() function or enter the waiting cycle. After the EA completes all necessary actions, we assign check_value to the variable, and now another EA is able to execute its OnTick() function, etc.
The code displayed above is suitable for solving the task but we cannot be sure that the line:
if(GlobalVariableGet("test")==check_value){
will be immediately followed by the line:
GlobalVariableSet("test",value);
Another EA may interpose between them and start working after detecting check_value. After its task is partially executed, the first EA may continue its operation. Thus, two EAs may work simultaneously. The GlobalVariableSetOnCondition() function solves this issue. As noted in the documentation, "function provides atomic access to the global variable". Atomic means "indivisible". Thus, no other program can interpose during a variable value verification and assigning a new value to it.
The only drawback of the function is that it does not create a variable if the latter does not exist. This means we should carry out an additional check (preferable during the EA initialization) and generate the variable.
Let's write the two EAs in order to perform an experiment. Both EAs are completely identical. The window with "EA1 start" message appears at the start of the OnTick() function followed by a three-second pause (the Sleep() function). "EA1 end" message appears in the end:
void OnTick(){ Alert("EA1 start"); Sleep(3000); Alert("EA1 end"); }
The second EA is similar, though the messages are different: "EA2 start" and "EA2 end". The attached EAs are named eGVTestEA1 and eGVTestEA2. Open two identical charts in the terminal and attach EAs to them. The message window shows that EAs start and end their work simultaneously (Fig. 8).
Fig. 8. EA messages about the OnTick() function execution start and end
Now, let's apply the GlobalVariableSetOnCondition() function to provide consistent EAs' operation. The inserted changes are identical for both EAs, therefore let's write a code in the included file. The file is called GVTestMutex.mqh (attached below).
Now, let's examine the GVTestMutex.mqh file functions. Check if the global variable exists during the EA initialization and create it if necessary (the Mutex_Init() function). A single parameter — variable name — is passed to the function:
void Init(string name){ if(!GlobalVariableCheck(name)){ GlobalVariableSet(name,0); } }
The second function (Mutex_Check()) is used for verification. A cycle waiting for the global variable release is executed in the function. As soon as the variable is released, the function returns true, and the EA goes on executing its OnTick() function. If the variable is not released within a specified time, the function returns false. The OnTick() function execution should be interrupted in that case:
bool Mutex_Check(string name,int timeout){ datetime end_time=TimeLocal()+timeout; // waiting end time while(TimeLocal()<end_time){ // cycle within a specified time if(IsStopped()){ return(false); // if an EA is removed from a chart } if(GlobalVariableSetOnCondition(name,1,0)){ return(true); } Sleep(1); // small pause } return(false); // failed to wait for a release }
The global variable name and waiting time in seconds are passed to the function.
The third function is Mutex_Release(). It sets the value of 0 for the global variable (release) so that other EAs are able to start their work:
void Mutex_Release(string name){ GlobalVariableSet(name,0); }
Make a copy of one EA, include a file and add a function call to it. The variable name is "mutex_test". Let's call the Mutex_Check() function with a 30-second timeout. The full EA code is displayed below:
#include <GVTestMutex.mqh> int OnInit(){ Mutex_Init("mutex_test"); return(INIT_SUCCEEDED); } void OnTick(){ if(!Mutex_Check("mutex_test",30)){ return; } Alert("EA1 start"); Sleep(3000); Alert("EA1 end"); Mutex_Release("mutex_test"); }
Let's make a copy of the EA and change the text of displayed messages. The attached EAs are named eGVTestEA1-2 and eGVTestEA2-2. Launch the EAs on two similar charts to make sure they now work in turns (Fig. 9).
Note the time-out parameter: set the time that exceeds the operation time of all EAs in the group. It may happen that some EA is removed from a chart during the OnTick() function execution but the Mutex_Release() function is not executed. In this case, not a single EA will be able to wait for its turn. Therefore, for the case of time-out expiration, we should set the global variable to 0 or find out some other way to track it. This depends on a specific task. Sometimes, simultaneous EAs' operation may be acceptable, while in some other cases, they should run in turns.
Fig. 9. The EAs are working in turns
Class for easy work with global variables
Pay attention to the following points in order to work with global variables more conveniently.
- Unique variable names are necessary for each EA copy.
- Names of the variables used in the tester should be different from the ones used on the account.
- If an EA works in the tester, the EA should delete all variables it has created during the test upon completion of each single test run.
- Provide more convenient call of functions for working with global variables to make function names shorter.
The situation is a little more complicated when forming names for common variables. Let's make a rough estimation of a possible variable length. The first attribute that we can use to separate the variables of one EA from another is an EA name consisting of, say, 20 characters. The same EAs may work on different symbols. This means that the second unique feature is a symbol (4 more characters). EAs working on a single symbol have different order IDs — magic numbers of ulong types (maximum length — 20 characters). It is possible to switch between accounts from a single terminal. An account number is a long type variable (maximum length — 19 characters). Totally, we receive 63 characters, which comprises the entire permitted variable length, and we have reached only the prefix length!
This means we have to sacrifice something. Let's follow the rule: one terminal only works with one account. If you have multiple accounts, set a terminal instance for each. This means we can rid of the account number, with the maximum prefix size reducing to 43 characters and freeing 20 characters. We may add yet another rule: do not use long magic numbers. Finally, it would be reasonable to pay attention to EA names giving them shorter names. A global variable name formed from the name of an EA, symbol and magic number can be considered acceptable. Perhaps, you will be able to come up with more convenient way of forming names but let's stick to this method in the article.
Let's start writing a class. The class is named CGlobalVariables, the file itself is called CGlobalVariables.mqh and attached below. Declare two variables for prefixes in the 'private' section: the first one — for common variables, the second one — for the ones bound to orders:
class CGlobalVariables{ private: string m_common_prefix; // prefix of common variables string m_order_prefix; // prefix of order variables public: // constructor void CGlobalVariables(){} // destructor void ~CGlobalVariables(){} }
Let's create the Init() method in the 'public' section. The method is to be called during an EA initialization. The two parameters — symbol and magic number — are to passed to it. Prefixes are to be formed in this method. The prefixes of order variables are simple, as you only need to separate EA variables working on the account from an EA working in the tester. Thus, order variables on the account start from "order_", while the tester ones — from "tester_order_". Only "t_" can be added to the prefix of common variables in the tester (since they are unique, besides we should use characters sparingly). Old global variables should also be deleted during initialization in the tester. Of course, they should be deleted during deinitialization as well, but we cannot be sure in test results as the variables may remain. For now, let's create the DeleteAll() method and call it. I recommend placing the method to the 'private' section. The code for it will be added later. The Init() method code is displayed below:
void Init(string symbol,int magic){ m_order_prefix="order_"; m_common_prefix=MQLInfoString(MQL_PROGRAM_NAME)+"_"+symbol+"_"+IntegerToString(magic)+"_"; if(MQLInfoInteger(MQL_TESTER)){ m_order_prefix="tester_"+m_order_prefix; m_common_prefix="t_"+m_common_prefix; DeleteAll(); } }
Let's add the method returning the prefix of common variables as it may be useful for some special work involving global variables:
string Prefix(){ return(m_common_prefix); }
Add the basic methods: for checking, setting, receiving values and deleting separate variables. Since we have two prefixes, we should have two methods with name overloading for each function (two functions with he same name but different set of parameters). Only one parameter — variable name — is to be passed to one group of methods. These are the methods for common variables. A ticket and a variable name are passed to another group of functions. These are the methods for order variables:
// for common variables bool Check(string name){ return(GlobalVariableCheck(m_common_prefix+name)); } void Set(string name,double value){ GlobalVariableSet(m_common_prefix+name,value); } double Get(string name){ return(GlobalVariableGet(m_common_prefix+name)); } void Delete(string name){ GlobalVariableDel(m_common_prefix+name); } // for order variables bool Check(ulong ticket,string name){ return(GlobalVariableCheck(m_order_prefix+IntegerToString(ticket)+"_"+name)); } void Set(ulong ticket,string name,double value){ GlobalVariableSet(m_order_prefix+IntegerToString(ticket)+"_"+name,value); } double Get(ulong ticket,string name){ return(GlobalVariableGet(m_order_prefix+IntegerToString(ticket)+"_"+name)); } void Delete(ulong ticket,string name){ GlobalVariableDel(m_order_prefix+IntegerToString(ticket)+"_"+name); }
Let's go back to the DeleteAll() method and write a code for deleting variables by prefixes:
GlobalVariablesDeleteAll(m_common_prefix); GlobalVariablesDeleteAll(m_order_prefix);
Deletion can be performed in the tester after testing, so let's add the Deinit() method that is to be called during an EA deinitialization:
void Deinit(){ if(MQLInfoInteger(MQL_TESTER)){ DeleteByPrefix(); } }
In order to improve reliability of the global variables, we should use the GlobalVariablesFlush() function. Let's add yet another method with the function. It is much easier to call the method class rather than write the long name of the function (fulfilling the requirements stated in point 4):
void Flush(){ GlobalVariablesFlush(); }
Sometimes, you may need to combine common variables in groups by adding additional prefixes to them and then delete these groups during an EA operation. Let's add yet another DeletByPrefix() method:
void DeleteByPrefix(string prefix){ GlobalVariablesDeleteAll(m_common_prefix+prefix); }
As a result, we have obtained sufficient class functionality allowing us to solve 95% of tasks when working with global variables.
In order to use the class, include the following file to an EA:
#include <CGlobalVariables.mqh>
Create an object:
CGlobalVariables gv;
Call the Init() method during an EA initialization by passing a symbol and magic number to it:
gv.Init(Symbol(),123);
Call the Deinit() method during deinitialization to delete the variables from the tester:
gv.Deinit();
After that, all we have to do when developing an EA is to use the Check(), Set(), Get() and Delete() methods passing them a unique part of the variable name only, for example:
gv.Set("name1",123.456); double val=gv.Get("name1");
As a result of the EA operation, the variable named eGVTestClass_GBPJPY_123_name1 appears in the list of the global variables (Fig. 10).
Fig. 10. Fragment of the global variables window with the variable created using the CGlobalVariables class
The variable length is 29 characters meaning that we are relatively free in selecting variable names. For order variables, we need to pass an order ticket with no need to constantly form a full name and call the IntegerToSTring() function to convert a ticket into a line, thus greatly simplifying the use of global variables. An example of the class use is available in the eGVTestClass EA attached below.
It is also possible to slightly change the class to simplify its usage even further. Now, it is time to improve the class constructor and destructor. Let's add the Init() method call to the constructor with the appropriate parameters and the Deinit() method call to the destructor:
void CGlobalVariables(string symbol="",int magic=0){ Init(symbol,magic); } // destructor void ~CGlobalVariables(){ Deinit(); }
After that, there is no need to call the Init() and Deinit() methods. Instead, we only need to specify a symbol and a magic number when creating a class instance:
CGlobalVariables gv(Symbol(),123);
Conclusion
In this article, we have examined all functions for working with the terminal global variables, including the GlobalVariableSetOnCondition() function. We have also created the class significantly simplifying the use of global variables when creating EAs. Of course, the class does not include all features related to working with global variables but it still has the most necessary and frequently used ones. You can always improve it or develop your own one if necessary.
Attachments
- sGVTestCreate — creating a variable.
- sGVTestGet1 — first method of receiving a value.
- sGVTestGet2 — second method of receiving a value.
- sGVTestGet1-2 — first method of receiving a value of a non-existent variable.
- sGVTestGet2-2 — second method of receiving a value of a non-existent variable.
- sGVTestCheck — checking the existence of a variable.
- sGVTestTime — receiving a variable time.
- sGVTestAllNames — receiving the list of names of all variables.
- sGVTestAllNames2 — receiving the list of names with a specified prefix.
- sGVTestDelete — deleting a variable.
- sGVTestCreate4 — creating the four variables (two groups with two variables each).
- sGVTestDeleteGroup — deleting one group of variables.
- sGVTestFlush — forced saving of the variables.
- sGVTestTemp — creating a temporary variable.
- eGVTestEA1, eGVTestEA2 — demonstrating a simultaneous EAs' operation.
- GVTestMutex.mqh — functions for Mutex development.
- eGVTestEA1-2, eGVTestEA1-2 — demonstrating EAs working in turns.
- CGlobalVariables.mqh — CGlobalVariables class for working with global variables.
- eGVTestClass — EA demonstrating how to use the CGlobalVariables class.
Translated from Russian by MetaQuotes Software Corp.
Original article: | https://www.mql5.com/en/articles/2744 | CC-MAIN-2018-09 | en | refinedweb |
For one of my scripts I want to write an R function that checks if a package is already installed: if so it should use library() to import it in the namespace, otherwise it should install it and import it.
I assumed that pkgname is a string and tried to write something like:
ensure_library <- function(pkgname) {
if (!require(pkgname)) {
install.packages(pkgname, dependencies = TRUE)
}
require(pkgname)
}
ensure_library("dplyr")
pkgname
dplyr
ensure_library("dplyr")
Installing package into ‘/home/luca/R-dev’
(as ‘lib’ is unspecified)
trying URL ''
Content type 'application/x-gzip' length 708476 bytes (691 KB)
==================================================
downloaded 691 KB
* installing *source* package ‘dplyr’ ...
** package ‘dplyr’ successfully unpacked and MD5 sums checked
** libs
.... a lot of compiling here....
installing to /home/luca/R-dev/dplyr/libs
** R
** data
*** moving datasets to lazyload DB
** inst
** preparing package for lazy loading
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded
* DONE (dplyr)
The downloaded source packages are in
‘/tmp/Rtmpfd2Lep/downloaded_packages’
Warning messages:
1: In library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE, :
there is no package called ‘pkgname’
2: In library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE, :
there is no package called ‘pkgname’
dplyr
require
Expanding on suggestion to use
character.only=TRUE: If you look at the code for
require, you see that the first step is only performed when the default value of 'character.only' (
= FALSE) holds:
> require function (package, lib.loc = NULL, quietly = FALSE, warn.conflicts = TRUE, character.only = FALSE) { if (!character.only) package <- as.character(substitute(package)) loaded <- paste("package", package, sep = ":") %in% search() if (!loaded) { if (!quietly) packageStartupMessage(gettextf("Loading required package: %s", package), domain = NA) value <- tryCatch(library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE, warn.conflicts = warn.conflicts, # snipped rest of code
So leaving the default value of character.only in place forces the function to convert the symbol
pkgname to a character value.
as.character(substitute(pkgname)) [1] "pkgname"
And since 'character.only' is also part of the
library logic, and require calls
library, you could have used
library.
Further comment: You posted a follow-up to Rhelp and got some useful answers from Duncan Murdoch and Peter Dalgaard which clarified (I hope) this question. In the process I wondered whether your resistance to this answer comes about because of an expectation set up by the name of this function that substitution should occur but nothing was happening that looked like "substitution". That expectation seems perfectly reasonable I see now belatedly in retrospect. I think the correct name of the function could have been: substitute_but_only_on_the_basis_of_the_second_argument. The more common use of
substitute is with two arguments:
y_val=45; a_val=99 substitute( x + y == z + a , list( y= y_val, a = a_val) x + 45 == z + 99
There was no 'effort' to examine the values of any symbol in the first argument unless it has a named item in the second argument (which is named
env.) | https://codedump.io/share/FY2TLN48XqUu/1/r-function-to-install-missing-packages | CC-MAIN-2018-09 | en | refinedweb |
I have the below route. In unit test, since I doesn't have the FTP server available, I'd like to use camel's test support and send a invalid message to
""
""
public class MyRoute extends RouteBuilder
{
@Override
public void configure()
{
onException(EdiOrderParsingException.class).handled(true).to("");
from("")
.bean(new OrderEdiTocXml())
.convertBodyTo(String.class)
.convertBodyTo(Document.class)
.choice()
.when(xpath("/cXML/Response/Status/@text='OK'"))
.to("").otherwise()
.to("");
}
}
As Ben says you can either setup a FTP server and use the real components. The FTP server can be embedded, or you can setup a FTP server in-house. The latter is more like an integration testing, where you may have a dedicated test environment.
Camel is very flexible in its test kit, and if you want to build an unit test that do not use the real FTP component, then you can replace that before the test. For example in your example you can replace the input endpoint of a route to a direct endpoint to make it easier to send a message to the route. Then you can use an interceptor to intercept the sending to the ftp endpoints, and detour the message.
The advice with part of the test kit offers these capabilities:. And is also discussed in chapter 6 of the Camel in action book, such as section 6.3, that talks about simulating errors.
In your example you could do something a like
public void testSendError() throws Exception { // first advice the route to replace the input, and catch sending to FTP servers context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { replaceFromWith("direct:input); // intercept valid messages interceptSendToEndpoint("") .skipSendToOriginalEndpoint() .to("mock:valid"); // intercept invalid messages interceptSendToEndpoint("") .skipSendToOriginalEndpoint() .to("mock:invalid"); } }); // we must manually start when we are done with all the advice with context.start(); // setup expectations on the mocks getMockEndpoint("mock:invalid").expectedMessageCount(1); getMockEndpoint("mock:valid").expectedMessageCount(0); // send the invalid message to the route template.sendBody("direct:input", "Some invalid content here"); // assert that the test was okay assertMockEndpointsSatisfied(); }
From Camel 2.10 onwards we will make the intercept and mock a bit easier when using advice with. As well we are introducing a stub component. | https://codedump.io/share/qgaWUocRfJKf/1/unit-testing-ftp-consumer-with-apache-camel | CC-MAIN-2018-09 | en | refinedweb |
hello all
I'm trying to do a concatenation function I do not want to you use the strings property where we can add strings, I need to add two strings and save them in one string without the use of addition "+"
i have three variables title , first_name, last_name I should somehow combine them and save the result inside the full_name string and return it and then print it from the main function. so all the three varaibles should be saved inside one variable which is full name.
what I did here is a void function coz I was not sure how will I do it otherwise ..
there's some other issue which is that the first character of the full_name is ommited and there's no space between first name and last name
here's the code .. your guidence is highly appreciated
# include <iostream> # include <string> using namespace std; void concat( char [], char [], char [], string , int, int, int); int string_length( char []); int main() { // l for the lengths int l1, l2, l3; char title[]="Dr"; char first_name[]="Christina"; char last_name []="Brown"; string full_name; l1 =string_length(title); l2 =string_length(first_name); l3 =string_length(last_name); concat( first_name, last_name, title, full_name, l1, l2, l3); } int string_length( char anything[]) { int length=0; for (int i=0; anything[i]!='\0'; i++) { length++; } return length; } void concat( char first_name[], char last_name[], char title[], string full_name, int l1, int l2, int l3) { for(int i=0; i<l1; i++) { full_name = title[i]; } for(int i=0; i<l2; i++) { full_name = full_name + first_name[i]; } for(int i=0; i<l3; i++) { full_name = full_name + last_name[i]; } cout << full_name; cout <<endl; cout <<endl; } | https://www.daniweb.com/programming/software-development/threads/244415/concatenation-function-semantic-error | CC-MAIN-2018-09 | en | refinedweb |
Validating Reactive Forms
Building from the previous login form, we can quickly and easily add validation.
Angular provides many validators out of the box. They can be imported along with the rest of dependencies for procedural forms.
app/login-form.component.ts
import { Component } from '@angular/core'; import { Validators, FormBuilder, FormControl } from '@angular/forms'; @Component({ // ... }) export class AppComponent { username = new FormControl('', [ Validators.required, Validators.minLength(5) ]); password = new FormControl('', [Validators.required]); loginForm: FormGroup = this.builder.group({ username: this.username, password: this.password }); constructor(private builder: FormBuilder) { } login () { console.log(this.loginForm.value); // Attempt Logging in... } }
app/login-form.component.html
<form [formGroup]="loginForm" (ngSubmit)="login()"> <div> <label for="username">username</label> <input type="text" name="username" id="username" [formControl]="username"> <div [hidden]="username.valid || username.untouched"> <div> The following problems have been found with the username: </div> <div [hidden]="!username.hasError('minlength')"> Username can not be shorter than 5 characters. </div> <div [hidden]="!username.hasError('required')"> Username is required. </div> </div> </div> <div > <label for="password">password</label> <input type="password" name="password" id="password" [formControl]="password"> <div [hidden]="password.valid || password.untouched"> <div> The following problems have been found with the password: </div> <div [hidden]="!password.hasError('required')"> The password is required. </div> </div> </div> <button type="submit" [disabled]="!loginForm.valid">Log In</button> </form>
Note that we have added rather robust validation on both the fields and the form itself, using nothing more than built-in validators and some template logic.
We are using
.valid and
.untouched to determine if we need to show errors - while the field is required, there is no reason to tell the user that the value is wrong if the field hasn't been visited yet.
For built-in validation, we are calling
.hasError() on the form element, and we are passing a string which represents the validator function we included. The error message only displays if this test returns true. | https://angular-2-training-book.rangle.io/handout/forms/reactive-forms/reactive-forms_validation.html | CC-MAIN-2018-09 | en | refinedweb |
Rendering GUI widgets with generic look and feelDownload PDF
Info
- Publication number
- US7694271B2US7694271B2 US10787663 US78766304A US7694271B2 US 7694271 B2 US7694271 B2 US 7694271B2 US 10787663 US10787663 US 10787663 US 78766304 A US78766304 A US 78766304A US 7694271 B2 US7694271 B2 US 7694271B2
- Authority
- US
- Grant status
- Grant
- Patent type
-
- Prior art keywords
- instance
- display
- protowidget
- values
-
1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, systems, and products for rendering graphical user interface (“GUI”) widgets with generic look and feel.
2. Description of Related Art
It is difficult to design an overall look and feel for GUI displays and at the same time allow third parties other than the designer to establish custom controls, GUI components, or widgets to their own specifications. The designer may not wish to hinder the developer's ability to lay out screens and displays, but it is difficult to maintain overall look and feel without limiting layout specifications. An inflexible example would involve a set of control attributes for a button, where the attributes are rectangle width, rectangle height, text color, and background color. This may work for simple button designs, but when a developer wishes to build elliptical buttons that contain icons, inflexible predetermination of width, height, color, and so on, is insufficient.
Methods, systems, and products are disclosed that operate generally to support application developers other than an original look and feel designer to set up custom control with arbitrary additional aspects of look and feel. Methods, systems, and products according to embodiments of the present invention typically render GUI widgets with generic look and feel by receiving in a display device a master definition of a graphics display, the master definition including at least one graphics definition element, the graphics definition element including a reference to a protowidget and one or more instance parameter values characterizing an instance of the protowidget, the protowidget includes a definition of a generic GUI object, including generic display values affecting overall look and feel of the graphics display.
Typical embodiments also include rendering at least one instance of the protowidget to a graphics display in dependence upon the generic display values and the instance parameter values. In typical embodiments, rendering at least one instance of the protowidget includes inserting in the instance of the protowidget the instance parameter values from the master definition. In some embodiments, rendering at least one instance of the protowidget includes creating instance display values for the instance of the protowidget in dependence upon the instance parameter values. In many embodiments, the protowidget also includes at least one generic display rule and creating instance display values for the instance of the protowidget includes creating instance display values for the instance of the protowidget in dependence upon the generic display rule.
Typical embodiments include creating the protowidget, defining the protowidget in a scalable vector graphics language, and creating the master definition of a graphics display. In typical embodiments, rendering at least one instance of the protowidget also includes creating in computer memory a data structure representing an instance of the protowidget. In such embodiments, the data structure may be implemented as a DOM.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.
The present invention is described to a large extent in this specification in terms of methods for rendering GUI widgets with generic look and feel..
Methods, systems, and products for rendering GUI widgets with generic look and feel are explained with reference to the accompanying drawings beginning with
A widget is a graphical user interface (“GUI”) component that displays information and implements user input for interfacing with software applications and operating systems. ‘Widget’ is a term that is often used to refer to such graphical components. In some environments other terms are used for the same thing. In Java environments, for example, widgets are often referred to as ‘components.’ In other environments, widgets may be referred to as ‘controls’ or ‘containers.’ This disclosure, for clarity of explanation, uses the term ‘widget’ generally to refer to such graphical components. Examples of widgets include buttons, dialog boxes, pop-up windows, pull-down menus, icons, scroll bars, resizable window edges, progress indicators, selection boxes, windows, tear-off menus, menu bars, toggle switches, checkboxes, and forms. The term ‘widget’ also refers to the underlying software program that displays the graphic component of the widget in a GUI and operates the widget, depending on what action the user takes while operating the GUI in response to the widget. That is, ‘widget,’ depending on context, refers to data making up a GUI component, a software program controlling a GUI component, or to both the data and the program.
A protowidget is a widget definition from which widgets may be instantiated with similar generic look and feel but different instance characteristics. Protowidgets typically are created by a generic look and feel designer operating a graphics editor on a graphics workstation or personal computer (120). Protowidgets may include generic display values (130) that govern the overall look and feel of a display, values that may be similar for a related group of protowidgets defining, buttons, dialog boxes, pull-down menus, and so on, all supporting the creation of instances of the protowidgets having a similar overall generic look and feel. Such a similar overall generic look and feel is sometimes referred to as a ‘skin,’ and GUI displays created by use of protowidgets according to instances of the present invention may be considered readily ‘skinnable.’ An instance of a protowidget, of course, is a widget, but for clarity in this specification, an instance derived from a protowidget is referred to as an ‘instance.’ A protowidget is typically defined in a graphics definition language, such as, for example, “SVG,” the Scalable Vector Graphics language, a modularized language for describing graphics in XML, the eXtensible Markup Language. The SVG specification is promulgated by the World Wide Web Consortium.
A master definition (104) of a graphics display is a description of a display for one or more widgets, that is, instances of protowidgets. That is, the master definition lists protowidgets and describes how instances of them are to be created and displayed. Multiple instances of a single protowidget may be described in a master definition. That is, a protowidget defining a tool bar button, for example, may be instantiated and used at several locations on a single GUI display to perform several different functions.
For further explanation, consider the example of the display shown in
In the example of
That fact the exemplary application of
In the system of
In the system of
In the system of
Given the flexibility of XML language specification, many such super-languages no doubt will occur to those of skill in the art, but one example of a language in which master definitions of graphics may be expressed is MXML from Macromedia, Inc., 600 Townsend Street, San Francisco, Calif. 94103. MXML is an XML-based markup language used to declaratively describe the layout of widgets on a graphics display, and an object-oriented programming language which handles user interactions with an application. MXML runs on a presentation server from Macromedia called “Flex.” Flex is a presentation server installed on top of a Java™ application server or servlet container.
Here is an example of a master definition (104) of a graphics display expressed in MXML:
This exemplary master definition lists references to three protowidgets, a Button, a CheckBox, and a ComboBox. The Button has instance parameter values for an identification code of ‘button1,’ for a width of ‘125’ , and for a height of ‘35’. The CheckBox has instance parameter values for an identification code of ‘checkbox1’ and for label text of ‘Check Me.’ The ComboBox has instance parameter values for an identification code of ‘combobox1,’ for a width of ‘100’, and for a height of ‘35.’
The references to all three protowidgets include a namespace identifier ‘mx’ at a location in cyberspace specified by the URL: “ /2003/mxml.” The URL identifies the location of the protowidgets for each reference, the Button, the CheckBox, and the ComboBox. That is, in this example, a reference to a protowidget is implemented as a markup element name of another markup document where the protowidget is defined. As described in more detail below, the protowidgets found at the URL contain the pertinent generic display values and generic display rules effecting their overall look and feel.
Display devices in this specification are generally computers, that is, any automated computing machinery having a graphics display. The terms “display device” or “computer” include not only general purpose computers such as laptops, personal computer, minicomputers, and mainframes, but also devices such as personal digital assistants (“PDAs), network enabled handheld devices, internet-enabled mobile telephones, and so on.
The computer (134) of
Also stored in RAM (168) is an operating system (154). Operating systems useful in computers according to embodiments of the present invention include Unix, Linux198 , Microsoft NT™, and others as will occur to those of skill in the art. In the example of
The example computer (134) of
The example computer (134) of
For further explanation,
Netscape and Microsoft specify HTML DOMs for their browsers, but the W3C's DOM specification supports both HTML and XML. The W3C's DOM specification includes an API for valid HTML and well-formed XML documents. It defines the logical structure of documents and the way a document is accessed and manipulated. A DOM may be used to manage or manipulate any graphics components or widgets represented in compliant XML. With a DOM, programmers can build documents, navigate their structure, and add, modify, or delete elements and content. Almost anything found in an HTML or XML document can be accessed, changed, deleted, or added using a DOM. The specification for the DOM API for use with any programming language. The specification itself at this time provides language bindings for Java and ECMAScript, an industry-standard scripting language based on JavaScript and JScript.
In the method of
This exemplary protowidget contains two SVG component definitions, one for the button itself, <symbol id=“Button”>, and another component definition:
defining storage locations for instance parameter values. The rectangle having id=“parms” is considered a dummy component, not to be displayed, but provided only to define the storage space for the instance parameter values inside an instance of the protowidget, such as, for example, a DOM. In the example of
That is, the rendering function at render time calls the ‘onload’ function defined in the SVG for the protowidget, “setParms( ).” The setParms( ) function tests with an if( ) statement whether each supported instance parameter has a value in the master definition (“parms”), and, if the value is present, setParmtZ( ) sets that value in a DOM representing an instance of the protowidget. The functions setX( ), setY( ), setWidth( ), setHeight( ), and so on, are DOM API functions. In this example, creating (410) instance display values (116) for the instance (112) of the protowidget (128) in dependence upon the instance parameter values (114) may be carried out in a trivial example by using the instance parameter values as instance display values. Often, however, the protowidget (128) includes at least one generic display rule (118) and creating (410) instance display values (116) for the instance (112) of the protowidget (128) is carried out by creating instance display values for the instance (112) of the protowidget (128) in dependence upon the generic display rule (118). In the exemplary SVG protowidget set forth above, a generic display rule is exemplified by the member method:
in which the value of the parameter ‘att’ is an instance parameter value which is used according to processing rules to produce instance display values. In this example, the generic display rules may be interpreted as:
- for a first rectangle defining the screen appearance of a button, create the instance display value for the width of the first rectangle as the instance parameter value minus 1
- for a second rectangle defining the screen appearance of a button, create the instance display value for the width of the second rectangle as the instance parameter value minus 5
- for button text defining the screen appearance of a button, create the instance display value for the button text as the instance parameter value divided by. | https://patents.google.com/patent/US7694271 | CC-MAIN-2018-09 | en | refinedweb |
Opened 4 years ago
Closed 4 years ago
Last modified 4 years ago
#24658 closed Bug (fixed)
Schema tests fail when run in isolation
Description
Because tables are deleted in
tearDown, when running an individual test, the tables are still existing and any
create_model operation fail.
Change History (8)
comment:1 Changed 4 years ago by
comment:2 Changed 4 years ago by
comment:3 Changed 4 years ago by
comment:4 Changed 4 years ago by
Also, I ran this on both the 1.8 branch, and the master on the commit f043434174db3432eb63c341573c1ea89ef59b91, with Python version Python 2.7.5+
I'm releasing this ticket, so that someone else can take a look. If you provide more information, I'll check it out from time to time.
comment:5 Changed 4 years ago by
Claude, could you more details on which tests fail when run in isolation.
I couldn't reproduce with:
./runtests.py schema.tests.SchemaTests --settings=test_postgres
or
./runtests.py schema.tests.SchemaTests.test_creation_deletion
or
./runtests.py schema.tests.SchemaTests.test_creation_deletion schema.tests.SchemaTests.test_fk --settings=test_postgres
comment:6 Changed 4 years ago by
I found the issue. The problem happens only with tests using the Note model, and the cause is that the Note model misses the
apps = new_apps Meta attribute. I'll fix that ASAP.
I've tried to test your scenario, but I think more information.
I dont know if I understood correctly, but I tried to run just one individual test, and use the SchemaEditor.create_model method to create a new model class, and instantiate it.
Here's my code, what works (I can create a dynamic model within the test and instantiate it - no errors)
from django.test import TestCase
Then from the command line, I ran the test suites with all 3 of these commands:
Please give more details on how you're getting this issue. | https://code.djangoproject.com/ticket/24658 | CC-MAIN-2019-09 | en | refinedweb |
In our face-paced modern society, who has time to click through pages and pages of content? “Not I,” said the web developer. In a world full of shortcuts, swipes and other gestures, the most efficient way to get through pages of content is the infinite scroll.
While not a new concept, the idea of infinite scroll is still somewhat controversial. Like most things, it has a time and a place in modern web design when properly implemented.
For anybody unfamiliar, infinite scroll is the concept of new content being loaded as you scroll down a page. When you get to the bottom of the content, the site automatically loads new content and appends it to the bottom.
Infinite scroll may not be ideal for all types of content, it is especially useful of feeds of data that an end-user would most probably want to page through quickly.
A perfect use case, and one you may already be familiar with is Instagram. You are presented with a feed of images and as you scroll down, more images keep showing up. Over and over and over until they run out of content to give you.
This article will teach you how to implement the infinite scroll concept into a React component and uses the Random User Generator to load a list of users from.
🐊 Alligator.io recommends ⤵Fullstack Advanced React & GraphQL by Wes Bos
Before we begin, we need to make sure we have the proper dependencies in our project. To load data from the Random User Generator we are going to use superagent.
To add
superagent to your project via
npm run:
$ npm install --save superagent
Or via
yarn:
$ yarn add superagent
The crux of our infinite scroll component is going to be an
onscroll event that will check to see if the user has scrolled to the bottom of the page. Upon reaching the bottom of the page, our event will attempt to load additional content:
window.onscroll = () => { if ( window.innerHeight + document.documentElement.scrollTop === document.documentElement.offsetHeight ) { // Do awesome stuff like loading more content! } };
The data that has been loaded will be appended to an array in the component’s state and will be iterated through in the component’s
render method.
All good things come to an end. For demonstration purposes our component will eventually stop loading new content and display a message that it's reached the end and there is no additional content.
Now that we understand the logic flow that’s necessary to implement infinite scroll, let’s dive into our component:
import React, { Component, Fragment } from "react"; import { render } from "react-dom"; import request from "superagent"; class InfiniteUsers extends Component { constructor(props) { super(props); // Sets up our initial state this.state = { error: false, hasMore: true, isLoading: false, users: [], }; // Binds our scroll event handler window.onscroll = () => { const { loadUsers, state: { error, isLoading, hasMore, }, } = this; // Bails early if: // * there's an error // * it's already loading // * there's nothing left to load if (error || isLoading || !hasMore) return; // Checks that the page has scrolled to the bottom if ( window.innerHeight + document.documentElement.scrollTop === document.documentElement.offsetHeight ) { loadUsers(); } }; } componentWillMount() { // Loads some users on initial load this.loadUsers(); } loadUsers = () => { this.setState({ isLoading: true }, () => { request .get('') .then((results) => { // Creates a massaged array of user data const nextUsers = results.body.results.map(user => ({ email: user.email, name: Object.values(user.name).join(' '), photo: user.picture.medium, username: user.login.username, uuid: user.login.uuid, })); // Merges the next users into our existing users this.setState({ // Note: Depending on the API you're using, this value may // be returned as part of the payload to indicate that there // is no additional data to be loaded hasMore: (this.state.users.length < 100), isLoading: false, users: [ ...this.state.users, ...nextUsers, ], }); }) .catch((err) => { this.setState({ error: err.message, isLoading: false, }); }) }); } render() { const { error, hasMore, isLoading, users, } = this.state; return ( <div> <h1>Infinite Users!</h1> <p>Scroll down to load more!!</p> {users.map(user => ( <Fragment key={user.username}> <hr /> <div style={{ display: 'flex' }}> <img alt={user.username} src={user.photo} style={{ borderRadius: '50%', height: 72, marginRight: 20, width: 72, }} /> <div> <h2 style={{ marginTop: 0 }}> @{user.username} </h2> <p>Name: {user.name}</p> <p>Email: {user.email}</p> </div> </div> </Fragment> ))} <hr /> {error && <div style={{ color: '#900' }}> {error} </div> } {isLoading && <div>Loading...</div> } {!hasMore && <div>You did it! You reached the end!</div> } </div> ); } } const container = document.createElement("div"); document.body.appendChild(container); render(<InfiniteUsers />, container);
There’s really not much to it!
The component manages a few status flags and list of data in it’s state, the
onscroll event does most of the heavy lifting and the render method brings it to life on your screen!
We’re also making use of setState with a callback function passed-in as the second argument. The initial call to
setState in our
loadUsers method sets the value of loading to true and then in the callback function we load some users and call
setState again to append our new users to the users already in the state.
I hope you’ve found this article on implementing infinite scroll in React informative.
If interested, you can find a working demo of this component over on CodeSandbox.
Enjoy! 💥 | https://alligator.io/react/react-infinite-scroll/ | CC-MAIN-2019-09 | en | refinedweb |
pminfo man page
pminfo — display information about performance metrics
Synopsis
pminfo [-dfFIlLmMstTvxz] [-a archive] [-b batchsize] [-c dmfile] [-h hostname] [-K spec] [-[n|N] pmnsfile] [-O time] [-Z timezone] [metricname | pmid | indom]...
Description
pminfo displays various types of information about performance metrics available through the facilities of the Performance Co-Pilot (PCP).
Normally pminfo operates on the distributed Performance Metrics Name Space (PMNS), however if the -n/--namespace option is specified an alternative local PMNS is loaded from the file pmnsfile. The -N/--uniqnames option supports the same function as -n/--namespace, except for the handling of duplicate names for the same Performance Metric Identifier (PMID) in pmnsfile - duplicate names are allowed with -n/--namespace but they are not allowed with -N/--uniqnames../--host/--archive option causes pminfo to use the specified set of archives rather than connecting to a PMCD. The argument to -a/--archive is a comma-separated list of names, each of which may be the base name of an archive or the name of a directory containing one or more archives.
The -L/--local-PMDA option causes pminfo to use a local context to collect metrics from PMDAs on the local host without PMCD. Only some metrics are available in this mode.
The -a/--archive, -h/--host and -L/--local-PMDA options are mutually exclusive.
The -b/--batch option may be used to define the maximum size of the group of metrics to be fetched in a single request for the -f/--fetch and -v/--verify options. The default value for batchsize is 128.
Other options control the specific information to be reported.
- -c dmfile, --derived=dmfile
The dmfile argument specifies a file that contains derived metric definitions in the format described for pmLoadDerivedConfig(3). The -c/--derived option provides a way to load derived metric definitions that is an alternative to the more generic use of the PCP_DERIVED_CONFIG environment variable as described in PCPIntro(1). Using the -c/--derived/--origin).
- -I, --fullindom
Print the InDom in verbose mode.
- -K spec, --spec-local=spec
When using the -L/--local-PMDA option to fetch metrics from a local context, this option controls the DSO PMDAs that should be made accessible. The spec argument conforms to the syntax described in pmSpecLocalPMDA(3). More than one -K/--spec-local option may be used.
- .
- -O time, --origin=time
When used in conjunction with an archive source of metrics and the options -f/--fetch, the time argument defines a time origin at which the metrics should be fetched from the set of archives. Refer to PCPIntro(1) for a complete description of this option, and the syntax for the time argument.
- . See also pmLookupLabels(3) and the -l/--labels.
Files
- $PCP_VAR_DIR/pmns/*
default localchart(1), pmdumplog(1), pmprobe(1), pmrep(1), pmval(1), PMAPI(3), pmLookupLabels(3), pmLoadDerivedConfig(3), pmSpecLocalPMDA(3), pcp.conf(5), pcp.env(5) and pmns(5).-dstat(1), pcp-dstat(5), PCPIntro(1), PCPIntro(3), pmcd(1), pmchart(1), pmclient(1), pmdagluster(1), pmdajson(1), pmdaLabel(3), pmdalio(1), pmdaoracle(1), pmdaprometheus(1), pmdatrace(1), pmdaweblog(1), pmevent(1), pmFreeOptions(3), pmie(1), pmMergeLabelSets(3), pmprobe(1), pmrep(1), pmseries(1), pmsocks(1), pmstore(1), zbxpcp(3). | https://www.mankier.com/1/pminfo | CC-MAIN-2019-09 | en | refinedweb |
Technical Debt: A Definition
A detailed explanation of what technical debt is and isn't.
Join the DZone community and get the full member experience.Join For Free.
He’s certainly right about that! But just prior to that, he says, “Technical debt doesn’t exist”, and sort of wanders around that idea for a bit.
Here’s the rub: he then tries to define what technical debt actually is:
- “Maintenance work.”
- “Features of the codebase that resist change.”
- “Operability choices that resist change.”
- “Code choices that suck the will to live.”
- “Dependencies that resist upgrading.”
I’ll leave you to read his descriptions of each.
Critique
Unfortunately, a lot of the definitions he raises there are highly subjective and extremely difficult to understand, except at a base, emotional, almost visceral level. I mean, when you explicitly use the phrase “suck the will to live” as one of your definitions, it’s hard to really put a concrete discussion around that.
Consider, for example, that particular point: “A significant percentage of what gets referred to as technical debt are the decisions that don’t so much discourage change but rather discourage us from even wanting to look at the code. ….”
I’m sure every single person reading this has an immediate reaction, akin to the screams through the Force that Obi-Wan felt when Alderaan was destroyed. Everybody remembers That One Project, or That One Class, or That One File…. Nobody wanted to touch it, it was a mess, and people would look for every reason under the sun to avoid opening it, as if there was some kind of icky black ichor that could ooze out of the screen and keyboard and infect us with its ugliness.
And yet, if we compare the stories, we will all have very different concrete-terms descriptions of what that thing was. And I’ll even bet that if you cast the net wide enough, and we spend enough time comparing stories, we’ll even find that one man’s “suck the will to live” is another man’s “Whoa, man, that’s actually kind of a cool hack.”
Case in point: in the earliest days of my career, I was a contractor working on some C/Win16 code at Intuit. A really cool 3-month gig (and in those days, it was way cool to have Intuit on your resume). I was working as part of the “Slickrock” team, which was the code-name for Intuit’s nascent electronic banking section of Quicken 5 for Windows. It was some cool stuff.
Except...
Well, first of all, everything was written in C. Not C++, as was the leading-edge of the day, but using Intuit’s home-grown C/Windows library that they’d put together since the earliest days of the product. At the time, I was kinda bleah on the whole idea. (In retrospect, hey, if it still works, you know?)
And there was this one dialog box to which I was assigned, which had a bunch of bugs in it that needed fixing, that nobody else on the team wanted to touch. Eager to prove to all these grizzled veterans that I was capable of handling the toughest stuff, I leapt at the chance to get into this thing. (If you get this picture of the eager young Private fresh from boot camp, volunteering to go out on that mission that the grizzled old Sargeant knows will just crush the life out of him, you’re probably not too far off the mark.)
And here’s what I found: this dialog box code was one, giant, four-page-long function, where three-and-three-quarters of it was wrapped in one giant-ass
do-while loop. But not just any
do-whileloop; no, this one was the most bizarre thing I’d ever seen. It looked something like this:
do { /* do one thing */ /* do another thing */ /* check that thing */ /* what about the thing over there */ } while (0);
It was my own private “WTF?!?” moment. No wonder everybody wanted to stay clear of this thing! This was the craziest code I’d ever seen, and clearly it was because they weren’t using C++!
(Yeah, I kinda was that stupid back then.)
But when I showed this to one of the other engineers and said the 90’s equivalent of “Dude, seriously?”, he pointed out that I’d missed an important part of the whole thing:
int result = -1; /* Not OK! */ do { /* do one thing */ if (!thing-worked) break; /* do another thing */ if (!another-thing-worked) break; /* check that thing */ if (!thing-checked) break; /* what about the thing over there */ if (!thing-over-there-checked) break; result = 0; /* OK! */ } while (0); return result;
In other words, this incredibly idiotic thing actually served a useful purpose: it obeyed the old C rule of “single entry, single exit”, and more importantly, it was rather elegantly obeying the fail-fast principle. (Why bother doing all these other checks if you’ve already failed at the first step?)
Now, I grant you, this could’ve been solved using C++ using exceptions; instead of the (not-really-a-)loop, he could just have done a “try”, and then each step could’ve thrown their own new exception type, and there’d have been either a single “catch” to return the appropriate error code (since this block was returning either -1 or 0, depending on success), or even maybe a separate “catch” block to handle each different error condition, and—
But you know what? Today, looking back at it, I don’t know if that would’ve been much clearer, or much shorter, or what-have-you.
Is this still life-sucking-code? Or is this an elegant hack? I’ll be honest, I’m not sure anymore, of either position.
Technical Debt: A Definition
I don’t have one.
Seriously.
Not one I particularly like, anyway. Google it, and you get:
Technical debt (also known as design debt or code debt) is a metaphor referring to the eventual consequences of any system design, software architecture or software development within a codebase.
“...eventual consequences”? You mean, like “it works”? Seriously, consequences are not always bad, which is why the Gang-of-Four used that same word to describe the results of a particular solution applied to a particular problem within a certain context. Consequences can be positive, and they can also be negative. The use of the Strategy pattern can allow for varying an algorithm at runtime—but with it comes an added cost in complexity of determining which Strategy to load, for example, or the additional cognitive load of having to realize that now the Strategy being executed may be nowhere local to the code actually executing it (which would at some level seem to violate the principle of locality, depending on the situation).
Wikipedia goes on to say:
The debt can be thought of as work that needs to be done before a particular job can be considered complete or proper.
Now that’s interesting, because that certainly doesn’t jibe with what @kellan was alluding to earlier—this sounds like things like documentation and tests and such. And yes, that definitely could create a problem, if a company/team/programmer goes off and writes a whole bunch of untested, undocumented code; I’d call that indebted code, probably, sure.
Unless, you know, it doesn’t really need documentation. Or tests. Like, for example, a module composed of much smaller functions, each of which are effectively small primitives that really don’t need testing, a la:
def calcuate(lhn, op, rhn) return op(lhn, rhg) end def add(lhn, rhn) return lhn + rhn end def sub(lhn, rhn) return lhn - rhn end puts calculate(1, add, 2)
Do these really require comments? Tests? Wouldn’t it actually add to the technical debt to put those into place, since now they must be maintained and kept up to date should something change in here?
I’m obviously reaching here, but I don’t think the point is entirely invalidated by the simplicity of my example—after all, well-written methods are supposed to be small and focused, and we prefer classes not to be large, and so on, for precisely these kinds of reasons.
Technical Debt: It’s a Metaphor, Stupid
Go back to Wikipedia for a second; there, they finish the definition’s first paragraph with this:
If the debt is not repaid, then it will keep on accumulating interest, making it hard to implement changes later on. Unaddressed technical debt increases software entropy.
See, this is the heart of the matter: technical debt is a metaphor. That’s it. That’s all it is. It’s a literary mechanism designed to help people who are not programmers understand that there are decisions made during the development process of a project, decisions which are deliberate choices to take a shortcut or avoid a more generic solution in the interests of getting past the obstacle quickly.
Except that nothing ever remains “just” a metaphor in our industry. Inevitably, we have to dissect it, treat as if it were a real, in-the-room-with-us kind of thing, and start crusades “for” or “against” it. Because REASONS.
And I admit, I’m not immune to this tendency myself. Because in examining a metaphor, so long as we recognize it’s a metaphor and therefore bound to fail at some point (the model is not reality), we can actually find some interesting edge-cases that may or may not apply, and that leads us to some interesting conversations about the concept, even if it doesn’t fit the metaphor anymore.
Technical Debt: A Fowlerian Definition
Martin Fowler has gone into great detail about the different kinds of technical debt in the form of a debt quadrant, arranged along two axes of “Deliberate vs Inadvertent” against “Reckless vs Prudent”.
I like this, simply by virtue of the fact that it captures the mindset of the developer or the team at the time they were making that decision.
But I don’t like it because, well, because who cares what they were thinking, or why? Isn’t technical debt just technical debt? I mean, that $50 purchase on your credit card, was it a measured and thoughtful purchase, perhaps some tools at the local hardware store, so that you can perform some home repais, or was it a reckless and idiotic purchase, perhaps some tools at the local hardware store, so that you can pretend to yourself that you’re actually going to take this weekend and perform some home repairs, but deep down you know you’re just fooling yourself, you’ll never get it done, and the tools will now be left to rust in a quiet corner of the garage (or worse, you’ll leave them out in the backyard and it rains and and and)?
Seriously: the guy who wrote that
do-while loop at Intuit? I have no idea what he was thinking—and did that intent really make a helluvalot of difference to me (or the rest of the team) as I (we) tried to pick it up and extend/modify/debug it? I won’t speak for the rest of the team, but to me, it made not a single whit of difference.
But here’s the vastly more important thing to realize about debt: At the end of the day, you still owe $50.
Wherever that debt appears on Fowler’s quadrant, you still have to pay it off. Or it will gather interest, and eventually (if you leave it long enough) bankrupt you.
Granted, this is perhaps where that metaphor starts to wear a little thin. In a codebase, where we perhaps deliberately chose not to use a Strategy pattern, but instead just coded the algorithm by hand directly into the code, because we don’t really see any need for any greater degree of flexibility, we have potentially amassed some (perhaps small) amount of technical debt. The $50 hammer, if you will.
In a traditional credit card scenario, that $50, compounded at 5% or 10% interest, will, without exception, eventually turn into a monstrous pile of debt that you cannot pay off, assuming you leave it unpaid for that long.
But that non-Strategized algorithm? So long as the client requirements don’t change, there’s not a thing wrong with it. It can continue to run, and run, and run, until the heat death of the universe, and nothing happens.
This suggests to me that technical debt isn’t just about what the developers on the team at the time were thinking about. This suggests that technical debt has two more components to it:
- The thoughts of the developer(s) who have now inherited the code.
- The requirements (or lack thereof) of the project for which the code was written.
See, if the client never changes their requirements, there is no technical debt. So long as the code continues to run, without problem, then what the code looks like is entirely irrelevant. It’s only when the client says, “OK, now we need to do this other thing with this codebase” that it becomes a problem.
Although, now that I write this, I realize that’s not entirely accurate, either.
If the client’s requirements haven’t changed, but the code doesn’t run, or runs into errors while running? Those are bugs, and the code needs to change (to remove the bug). And that’s the other case where now, a developer struggling to understand the code in which the bug may (or may not) live will be running into difficulty. Enter technical debt again.
Which now suggests that technical debt is essentially “a developer’s cognitive difficulty in understanding and/or modifying a codebase”. Nothing to do with the decisions made at the time of the codebase’s creation, and everything to do with the developer who is attempting to understand what the code is trying to do or how to modify it.
Technical Debt: Moving On
So does @kellan (and Peter Norvig) have it right, that “code is a liability”?
On the surface of it, maybe: if I write code, it is potential technical debt.
See, it’s not technical debt yet, because it won’t actually be a debt until we trigger the “understanding and/or modifying” clause of the above. The Ruby script I hacked together to transform my old blog’s XML over to Markdown files for the new system, I won’t know whether that code is technical debt until I (or anybody else who wants to use or modify it) go back into it again and face the cognitive load of understanding it or modifying it. So it’s like the infamous cat in the box, neither debt nor not, until the box is opened.
Published at DZone with permission of Ted Neward, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/technical-debt-a-definition-1 | CC-MAIN-2021-04 | en | refinedweb |
Introduction to Multiple Inheritance in Python
Multiple Inheritance in python is a well-known feature that is supported by all the major object oriented programming languages. It can be described as a process where the child class or object inherits the methods and attributes from one or more parent classes. It is a hierarchical process that leads to reusability of a code, higher performance and flexibility. It also has it’s own disadvantages like increased complexity, more chances of ambiguity and requires deeper coding knowledge.
Syntax #1:
The syntax of multiple inheritances involving two base classes and a derived class is as shown below.
class Base_1:
pass
class Base_2:
pass
class DerivedClass(Base_1, Base_2):
pass
Syntax #2:
The syntax of multiple inheritances involving three base classes and a derived class is as shown below.
class Base_1:
pass
class Base_2:
pass
class Base_3:
pass
class DerivedClass(Base_1, Base_2, Base_3)
pass
Examples of Multiple Inheritance in Python
Let’s go through the following program codes in Python, which implements the concept of multiple inheritance.
Example #1
We will begin with a simple example, so as to understand the working of the concept.
Code:
class A:
def A(self):
print('This is class A.')
class B:
def B(self):
print('This is class B.')
class C(A,B):
def C(self):
print('This is class C which inherits features of both classes A and B.')
o = C()
o.A()
o.B()
o.C()
Going through the python programming code, as shown above, there are three classes viz. class A, class B, and Class C. Class C is a derived class that inherits features from classes A and B. class A and class B thus act as base classes. Finally, we have a variable o that we have assigned to the derived class C. the variable operates all the classes. The program code doesn’t perform any complex task, however, it allows us to familiarize ourselves with the concept.
Output:
Each of the three classes just has print statements, and when the program code is executed the statements are printed.
We’ll extend this example to a more logical form. Let’s consider that want to find the area of a rectangle. In this case, two inputs are required viz. length and breadth of the rectangle. Using the concept of multiple inheritances, the area of the rectangle can be calculated. Here, three classes are required of which two would act as base classes and one would be the derived class. Two base classes for length and breadth respectively and the derived class would be used for calculation of area of the rectangle. This class will derive length and breadth inputs from the respective base classes.
Example #2
The program code implementing the above-discussed concept is as follows. Go through the code, so as to understand each of its components properly. = int(input("Enter the required length for rectangle: "))
o.b = int(input("Enter the required breadth for rectangle: "))
o.r_area()
Let’s go through the program code step-by-step. First, we have a class length. In this class, there’s a variable l. Initially, we have set the value of this variable to zero. Then we have a routine length(), which returns the value of variable l. Similarly, we have another class breadth which has a variable b, initially assigned the value of zero. Routine breadth() returns the value of variable b. Finally, we have a third class which is rect_area that derives value from both the base classes. It has a routine r_area() which gives the area of the rectangle based on the values of length and breadth that comes from the respective two base classes.
In the end, we create a variable o which we assign to class rect_area. This variable acts as an operator, which we further operate upon variables from base classes. So, with derived class operating upon the variables of base classes, the assignment happens. We can see that, variables l and b are assigned values through input boxes. By default they are of string type, so we converted them to numbers using int() function. The o.r_area() function gives the area.
Output:
We checked the above program through a series of inputs. When the program is executed the user is asked to provide the input for variables l and b as shown below.
- First, the user is asked to input length for the rectangle in the text.
- When a requisite value for length is passed, the breadth value needs to be passed.
- As we can see, we passed two integer values and got the output in a well-formatted form.
The above program implemented works well only with integer values. And in case if the user tries to pass decimal values, then he may end up having an error. In order to overcome this, we need to convert the string input values to float (or any decimal type).
Example #3
The program code will remain the same, except for the function used for type conversion. The program code is as shown below. = float(input("Enter the required length for rectangle: "))
o.b = float(input("Enter the required breadth for rectangle: "))
o.r_area()
Output:
Conclusion
Amongst various types of inheritances, multiple inheritance is a type that is supported by python. Python offers easy-to-implement methodology in this context. The concept is quite useful in situations that involve the use of numerous interrelated variables, and wherein the relations need to be regulated properly.
Recommended Articles
This is a guide to Multiple Inheritance in Python. Here we discuss the Introduction and examples of multiple inheritance in python along with code implementation. You may also look at the following articles to learn more – | https://www.educba.com/multiple-inheritance-in-python/?source=leftnav | CC-MAIN-2021-04 | en | refinedweb |
#include <vtkRenderWindow.h>
Inheritance diagram for vtkRenderWindow:
vtkRenderWindow is an abstract object to specify the behavior of a rendering window. A rendering window is a window in a graphical user interface where renderers draw their images. Methods are provided to synchronize the rendering process, set window size, and control double buffering. The window also allows rendering in stereo. The interlaced render stereo type is for output to a VRex stereo projector. All of the odd horizontal lines are from the left eye, and the even lines are from the right eye. The user has to make the render window aligned with the VRex projector, or the eye will be swapped.
Definition at line 80 of file vtkRenderWindow.h. | https://vtk.org/doc/release/5.0/html/a01970.html | CC-MAIN-2021-04 | en | refinedweb |
Neural Style Transfer
Overview
One of the funner/more popular tricks you can employ using Deep Learning is the notion of style transfer between two images, like the canonical examples shown below.
from IPython.display import Image Image('images/style_transfer.png')
To get started, you want to determine some cost function that takes into consideration both:
- Content: how similar the principal shapes are between the Content Image and the Generated Image (e.g. the bridge clock tower)
- Style: How much the Generated Image “looks/feels” like the Style Image
(more on these below)
Actually working the algorithm is as follows:
- Start with some pre-trained model (VGG seems to be the most popular)
- Instantiate some completely-random image
Gwith the same resolution as your intended output
- Compute the cost of your generated image, as outlined above
- Modify the image to minimize the cost
- Rinse, repeat
Determining Cost
As previously mentioned, there are two aspects to our notion of cost: content and style
Content Cost
This one’s pretty straight-forward. Two images that have similar content have objects/pixel values that activate in about the same locations.
Thus, if we were to crack open some arbitrary intermediate VGG layer, run both images through it, and inspect the activation values, we’d expect to see a high degree of similarity between images of similar spatial-content and a low degree otherwise.
Extending this further, we can consider the distance between the activations of a particular layer,
l, between the Content Image and Generated Image as a cost that we want to minimize.
$Jstyle(C, G) = || a^{\lbrack l \rbrack©} - a^{\lbrack l \rbrack(G)}|| ^{2}$
Style Cost
This one’s a bit trickier.
Note, the design of this approach relies heavily on the intuition that we extract increasingly-complex features as we look at later convolutional layers.
Image('images/layer_over_layer.png')
Thus, in our selection of layer
l, we should be aiming for a more-intermediate layer– not primative enough that we’re just looking for edges, not advanced enough that we accidentally attribute “number of dogs in frame” as style.
From there we employ a similar “correlation-type” idea as before, except this time instead of looking at pairwise correlations of activations, this time we’re comparing the inter-relatedness between channels of a layer, for a given image.
More concretely, because each channel can have dramatically different representations, the value that you get when you unroll everything and correlate is extremely specific– for instance:
- Liberal use of rounded edges in one layer
- Pastel coloration in another
- Complementary colors often found right next to each other in a third
Image('images/style_corr.png')
Running our Style Image thorough the “sum everything up” operation will yield a specific value.
Then, once we run our not-yet-similar Generated Image through it, we might find that it’s close-ish, perhaps checking 2 of the 3 boxes above. We feed this information to whatever optimizer we’re using and it determines that we could achieve a closer score by modifying the image in a way that improves in this third, poor-performing channel.
The Math
Assume that a given layer,
l with width
w, height
h, and channels
c, has activation values (
i,
j,
k) for each dimension, respectively. Then any given activation value would be written as
$a^{\lbrack l \rbrack}(i, j, k)$
And our “sum everything up” operation is actually just creating a Gram Matrix
G, operating between two channels
k and
k_prime, where
$G(k, k’)^{\lbrack l \rbrack} = \sum_{i}^{nH} \sum{j}^{n_W} a(i,j,k) * a(i, j, k’)$
This is basically one matrix transposed, dot-multiplied by the other. If the output of this is high, the two channels are highly-correlated.
Finally, we calculate the Style Loss between the Style Image and the Generated Image as
$Jstyle(S, G) = \frac{1}{4n_W^2n_H^2n_C^2} ||G^{\lbrack l \rbrack (S)} - G^{\lbrack l \rbrack (G)}||^{2}$
And this just is another (albeit, more-complicated) distance measure, with a normalizing factor out front to ease computation woes.
Drawing From More Layers
We the intuition behind layer selection above. However, many implementations of neural style transfer instead draw from multiple hidden layers for each step, each with some pre-determined weighting that influences how things converge.
$Jstyle(S, G) = \sum_{l} \lambda^{\lbrack l \rbrack} Jstyle^{\lbrack l \rbrack }(S, G)$ | https://napsterinblue.github.io/notes/machine_learning/computer_vision/style_transfer/ | CC-MAIN-2021-04 | en | refinedweb |
.
Tabbed panel
The TabbedPanel widget manages different widgets in tabs, with a header area for the actual tab buttons and a content area for showing the current tab content.
The TabbedPanel provides one default tab.
To use it must import :
from kivy.uix.tabbedpanel import TabbedPanel
Basic Approach: 1) import kivy 2) import kivy App 3) import floatlayout 4) import tabbedpanel 5) set minimum version(optional) 6) Create Tabbed panel class 7) create the App class 8) create .kv file: # create multiple tabs in it. # Do there functioninging also. 9) return the widget/layout etc class 10) Run an instance of the class
Implementation Of Approach:
.py file
.kv file
Output:
Tab 1:
Tab 2:
Tab 3:
Attention geek! Strengthen your foundations with the Python Programming Foundation Course and learn the basics.
To begin with, your interview preparations Enhance your Data Structures concepts with the Python DS Course. | https://www.geeksforgeeks.org/python-tabbed-panel-in-kivy/?ref=lbp | CC-MAIN-2021-04 | en | refinedweb |
Lex
Lex is an actively used grammar language created in 1975. Lex is a computer program that generates lexical analyzers ("scanners" or "lexers"). Lex is commonly used with the yacc parser generator. Lex, originally written by Mike Lesk and Eric Schmidt and described in 1975, is the standard lexical analyzer generator on many Unix systems, and an equivalent tool is specified as part of the POSIX standard. Read more on Wikipedia...
- Lex ranks in the top 10% of languages
- the Lex wikipedia page
- Lex first appeared in 1975
- file extensions for Lex include l and lex
- See also: yacc, unix, c, regex, bison, ragel
- Have a question about Lex not answered here? Email me and let me know how I can help.
Example code from Linguist:
/* +----------------------------------------------------------------------+ | Zend Engine | +----------------------------------------------------------------------+ | Copyright (c) 1998-2012 Zend Technologies Ltd. () | +----------------------------------------------------------------------+ | This source file is subject to version 2.00 of the Zend license, | | that is bundled with this package in the file LICENSE, and is | | available through the world-wide-web at the following url: | |. | | If you did not receive a copy of the Zend license and are unable to | | obtain it through the world-wide-web, please send a note to | | license@zend.com so we can mail you a copy immediately. | +----------------------------------------------------------------------+ | Authors: Zeev Suraski <zeev@zend.com> | | Jani Taskinen <jani@php.net> | | Marcus Boerger <helly@php.net> | | Nuno Lopes <nlopess@php.net> | | Scott MacVicar <scottmac@php.net> | +----------------------------------------------------------------------+ */ /* $Id$ */ #include <errno.h> #include "zend.h" #include "zend_globals.h" #include <zend_ini_parser.h> #include "zend_ini_scanner.h" #if 0 # define YYDEBUG(s, c) printf("state: %d char: %c\n", s, c) #else # define YYDEBUG(s, c) #endif #include "zend_ini_scanner_defs.h" #define YYCTYPE unsigned char /* allow the scanner to read one null byte after the end of the string (from ZEND_MMAP_AHEAD) * so that if will be able to terminate to match the current token (e.g. non-enclosed string) */ #define YYFILL(n) { if (YYCURSOR > YYLIMIT) return 0
Example code from Wikipedia:
Saw an integer: 123 Saw an integer: 2 Saw an integer: 6
Trending Repos
Last updated August 9th, 2020 | https://codelani.com/languages/lex.html | CC-MAIN-2021-04 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.