text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
This is your resource to discuss support topics with your peers, and learn from each other. 02-22-2013 06:16 PM How can I compare the color of a label in QML to another Color. My code below does not work as I'd expect it to. Despite the "scoreLabel" having color "Color.DarkGray" the if statements returns true. I also tried comparing it to the Color.create() color, but that does not work as expected either. if(scoreLabel.textStyle.color != Color.DarkGray/*Color.create("#f89e52")*/){ scoreLabel.textStyle.color = Color.DarkGray; //Otherwise make it orange. } else { scoreLabel.textStyle.color = Color.create("#f89e52") } Solved! Go to Solution. 02-22-2013 06:43 PM They do seem to be of the same type, in any case. When I log them I get this: console.log(scoreLabel.textStyle.color); console.log(Color.DarkGray); outputs: QVariant(bb::cascades::Color) QVariant(bb::cascades::Color) 02-22-2013 06:50 PM - edited 02-22-2013 06:59 PM Could you please try: if (scoreLabel.textStyle.color.toString() != Color.DarkGray.toString()) But most likely this won't work. I've checked Color class source code and there are operator== and operator!= defined. I don't think QML can use them though. It seems all instances are wrapped in QVariants which can't be directly compared. And Color class doesn't seem to support a conversion to string, so comparing them as strings also won't work. A possible solution is creating a C++ function and exporting it to QML: Q_INVOKABLE bool colorsEqual(Color *color1, Color *color2) { return *color1 == *color2; } UPD: Color is declared as meta-type in Cascades headers but pointer to it is not: resources/color.h:Q_DECLARE_METATYPE(bb::cascades: If the above function won't work, try the following: Q_INVOKABLE bool colorsEqual(const Color &color1, const Color &color2) { return color1 == color2; } ...or... Q_INVOKABLE bool colorsEqual(Color color1, Color color2) ... Try experimenting with different forms, I'm not sure which one will work. 02-22-2013 07:03 PM if(scoreLabel.textStyle.color.toString() == Color.DarkGray.toString()) this seems to always return true, strangely. I will try the C++ solution and see how that works. I would imagine it will work, seems like a good solution. 02-22-2013 07:11 PM 02-22-2013 07:21 PM - edited 02-22-2013 07:21 PM I can't get any of the equals methods working. Error: Unknown method parameter type: Color for the second two and Error: Unknown method parameter type: Color* for the first one. I'll have to look into propertys, I guess. There's no way to change a property at runtime, is there? I'm not sure I can do all my logic with just a single state. For instance, if the label starts black and has state1 when an action is performed I need to switch it to color orange. Orange would be state2. Now If I need to reset that back to black I still have the state assigned to it indicating that the color is black, since I couldn't change the state to state2. Any ideas? I'm starting to think it might be quicker to rewrite the entire list in C++ -__- QML is nothing but trouble. 02-22-2013 08:20 PM - edited 02-22-2013 08:57 PM I'll try to experiment with C++ function exporting too. The following code seems to work: // Navigation pane project template import bb.cascades 1.0 Page { Container { Label { id: label textStyle { base: style1.style } text: "Label text" } Button { id: button text: "Click me" property variant activeStyle: style1 onClicked: { if (activeStyle == style1) { label.textStyle.base = style2.style activeStyle = style2 } else { label.textStyle.base = style1.style activeStyle = style1 } } } attachedObjects: [ TextStyleDefinition { id: style1 color: Color.create("#ff0000") }, TextStyleDefinition { id: style2 color: Color.create("#00ff00") } ] } } Directly swapping the colors should also work. Properties can be reassigned at runtime. Ok, the C++ method also works. But I think the one with states is better. QML: // Navigation pane project template import bb.cascades 1.0 Page { Container { Label { id: label textStyle { color: Color.create("#ff0000") } text: "Label text" } Button { id: button text: "Click me" onClicked: { if (app.colorsEqual(label.textStyle.color, Color.create("#ff0000"))) { label.textStyle.color = Color.create("#00ff00") } else { label.textStyle.color = Color.create("#ff0000") } } } } } Test.hpp: #include <bb/cascades/Color> class Test : public QObject { Q_OBJECT public: Test(bb::cascades::Application *app); virtual ~Test() {} Q_INVOKABLE bool colorsEqual(bb::cascades::Color color1, bb::cascades::Color color2); }; Test.cpp: Test::Test(bb::cascades::Application *app) : QObject(app) { ... qml->setContextProperty("app", this); // <--------- ADDED AbstractPane *root = qml->createRootObject<AbstractPane>(); app->setScene(root); ... bool Test::colorsEqual(Color color1, Color color2) { return color1 == color2; } 02-23-2013 07:53 PM I was able to solve my problem with a variant. The reason I thought they couldn't be assigned at run time was because when I tried to conditionally set the variant with an if statement, I got a syntax error. Of course, I might just not know the correct syntax to do it, but this won't work: property variant activeStyle: if(ListItemData.num == 0) {style1} else {style2} However, by assigning the variant in the onCreationCompleted method of the label, I was able to skirt around that problem. Thanks for your help!
https://supportforums.blackberry.com/t5/Native-Development/Compare-a-labels-color-in-QML/m-p/2186167
CC-MAIN-2016-44
en
refinedweb
Developing mobile apps is hard. As a developer, you have to learn a disparate set of development environments and languages if you want to deploy a single app to multiple platforms. Fortunately, frameworks such as Xamarin and NativeScript allow developers to target multiple mobile operating systems (namely iOS and Android) while still using a programming language that they already know. The big question for the developer then becomes, "Which of these products should I choose?". In the case of Telerik, the question really boils down to whether or not we recommend developers use Xamarin, or our own cross-platform mobile development runtime called NativeScript. While Telerik builds and supports NativeScript, we also build UI components for Xamarin, just as we do for all major Microsoft UI platforms. That might seem a bit odd considering that Xamarin and NativeScript could be seen as competing products. One is owned by Microsoft; the other is owned by us. It would make logical sense that we would want developers to use our framework and not those of another company. First off - we definitely want that. We truly believe that NativeScript is the best solution for building cross-platform native applications. But we also realize that there is not one right solution for every developer on the planet. Xamarin will be the right solution for some developers, while others will choose NativeScript, and still others will choose different solutions. In order to make it easier to pick the “right solution” for you, your team and your project, we’ll break down the differences between Xamarin and NativeScript. First we’ll start with a brief description of what each project says about itself, and then we’ll get into the technical details. sourced and backed by Telerik.. Xamarin was purchased by Microsoft in March of 2016. Both NativeScript and Xamarin produce Truly Native mobile applications. Truly Native means that all of the UI is composed of native UI controls and all of the underlying device APIs are accessible. The most important thing to know about NativeScript and Xamarin in this regard, is they do not use a web view like Cordova (PhoneGap). There are a lot of frameworks out there claiming to allow the creation of native mobile apps using web technologies. The way that these frameworks work is to leverage the Cordova project which wraps your application in a browser with no URL bar or controls. The developer then builds the application as a web site, but it is packaged and looks like a native app on the device with limited native API access. In the end though, it is a just a mobile website that looks like a native application. This is commonly referred to as a "Hybrid App". Web apps are somewhat notorious for performance issues on mobile devices. This is due to the fact that the mobile browser is intentionally limited in its access to device resources and sandboxed from native APIs to ensure battery performance and device security. Hybrid apps inherit all of these performance issues. As one of the original hybrid tools providers, we learned the hard way that it is still very difficult to build a performant cross-platform hybrid application; especially if you want that “consumer grade” experience. Below is the NativeScript Showcase application running on iOS and Android. You can see the complex animations happening and the UI never stutters. This is the power of truly native mobile apps. You can install this application on iOS or Android to see how it looks and preforms on your own device. First let’s look at how NativeScript creates the user interface, or visible portion of the application. As a developer, you create XML files to build up the user interface. NativeScript then parses that and creates the corresponding native controls. Take the following simple form interface: Here is the XML that is used to construct that interface in NativeScript: < Page StackLayout class "form form-inset" "form-item" Label text "input-Label" /> TextField hint "input" </ "hr" > "padding" Button "Submit" "button button-accent" All of the application programming is done with TypeScript. That means that when a control executes an event, or some action such as an HTTP call is made, that logic is programmed with TypeScript. Each of the XML pages has a TypeScript file of the same name that NativeScript will look for. The developer can then use a binding syntax to bind control events in the XML to functions in the corresponding ts (or js) file of the same name. For instance, if the button above was going to submit an HTTP request when it was clicked, it might look something like this... tap=”{{ submitForm }}” /> import http from 'http' ; function submitForm() { http.post(url, (response) { // handle response from login service }) } The same way that JavaScript can create, customize and interact with any browser API or UI element when used with a standard web browser, NativeScript uses JavaScript to create, customize and interact with any native API or UI element. NativeScript supports subset of standard CSS for styling applications. This would be the same CSS syntax that would be written for the web. If you were looking closely at the XML example for the form above, you might see the `class` attribute just as you would in HTML. Here is the CSS that styles the form example above. .action-bar-light { background-color : #ffffff color #FF404B ; .button { padding 10 12 min-width 52 border-radius: 2 .button-accent { border-color border-width 1 .form { margin 0 20 .form .button { .form .form-item { 16 .form .input { padding-top ; } .form .input-label { font-size 18 4 5 #444 .form- inset { #DDDDDD textfield { 14 .padding { .hr { height #CDCDCD Xamarin provides two different ways to build applications. Using Native Xamarin, a developer can target iOS and Android in separate projects. Since many developers want to target multiple operating systems with the same project, though, Xamarin provides a framework called Xamarin.Forms for this very purpose. Xamarin.Forms is what we extend with the UI For Xamarin controls. Note, though, that UI for Xamarin from Telerik is available for plain Xamarin as well. For the purposes of this article, we will only focus on Xamarin.Forms, since that technology is most closely associated with NativeScript in that a single project can be used to target multiple platforms. It is also clear that Xamarin views its Forms functionality as the future of the Xamarin product. Xamarin.Forms parses an XML based markup language commonly referred to as XAML. In accordance with XAML, most styling is done via resources. Some of the styles will be page based, and some will be global. Here is the XAML that is used to create that simple form in Xamarin. In this simple example I chose to use page-based styles. <?xml version= "1.0" encoding= "utf-8" ?> <ContentPage xmlns= "" xmlns:x= "" xmlns: local "clr-namespace:XamarinTests" x:Class= "XamarinTests.XamarinTestsPage" <Frame> <Label Text= "Inset Form" TextColor= "Red" ></Label> <StackLayout HorizontalOptions= "FillAndExpand" VerticalOptions= Orientation= "Vertical" WidthRequest= "20" HeightRequest= <StackLayout Orientation= Padding= "10" <Entry Placeholder= "Enter Your First Name" </StackLayout> "Enter Your Last Name" "Enter Youe Email Address" <Button Text=</Button> </Frame> </ContentPage> Note that there is no action bar or border in the Xamarin example. This is because neither of those things are possible with XAML. Both require either extending a class or adding items via C#. All of the application logic in Xamarin is programmed using C#, but because it is compiled ahead of time and not just in time, there are some limitations to what is supported. For instance, Generics are only partially supported in Xamarin when subclassing NSObject. There are also limitations on what .NET assemblies can be used. Just like NativeScript, Xamarin XAML pages have a corresponding XML file where you can program view logic. Since Xamarin implements a full MVVM (explained later) model, the following code might be used to handle a button click in the above form example. text=”Submit” Command=”{{ Binding SubmitCommand }}”></ MainViewModel : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; MainViewModel() { this .SubmitCommand = new Command((nothing) => { // submit the form }); ICommand SubmitCommand { protected set get Now that we understand how these two different frameworks actually function, let’s get into a more granular comparison or the two frameworks that takes into account more of the development lifecycle. With Xamarin, developers use C#. With NativeScript, it is TypeScript (JavaScript, CoffeeScript, Babel, etc). That is the major difference between the two. Developers who are comfortable using C# and .NET will generally prefer Xamarin. Those who are more comfortable writing TypeScript or JavaScript will prefer NativeScript. Thanks to TypeScript, many C# developers have begun to branch out into complex JavaScript applications. TypeScript turns the otherwise very loose JavaScript language into an object-oriented and strongly typed language that allows developers to adhere to best practices while leveraging the “run anywhere” power of JavaScript on both the web and mobile applications. For Windows users, Xamarin uses Visual Studio. For Mac users, they offer their own Xamarin Studio. NativeScript, as an open-source project, favors the cross-platform Visual Studio Code editor, however Visual Studio integration is provided via a commercial extension from Telerik called, AppBuilder. The NativeScript CLI technically allows for development in any IDE, including Atom, WebStorm, Sublime Text 2 and more.. Xamarin supports building mobile apps for iOS, Android, Windows Phone, Windows Universal and Windows 10. NativeScript supports iOS and Android, and we have started work on Windows 10 support as well. NativeScript has zero-day support for new operating systems. That means that on the day that a new operating system is available, NativeScript already supports it. This is because NativeScript is simply executing JavaScript on the underlying native operating system, and our reflection-based metadata can quickly be regenerated to expose new native APIs. Xamarin requires a release to support new operating systems and APIs . This is because Xamarin developers have to create maps and shims for those APIs on top of the runtime. More often than not, Xamarin has this release ready on day 0. When we say “design time”, we are referring to the time at which a developer is actually building the application. Since NativeScript is actually shipping the JavaScript that you write, it is able to update applications in real time as you save files without having to rebuild the application. We refer to this as “LiveSync”. Xamarin apps must be rebuilt in their entirety every time a change is made in order to see the changes reflected in an emulator. In this example, I notice that a word is misspelled and I fix the error. Xamarin does have a visual designer built into their tools. The visual previews are static. Interactivity has to be tested on the device or in the emulator as with NativeScript. Both NativeScript and Xamarin support building Android applications on Windows. They also both support building directly to iOS on a Mac. Due to the fact that Apple has restricted the building of iOS applications to a macOS machine, neither NativeScript nor Xamarin can build iOS apps directly on Windows. However, there are solutions provided by both frameworks for this issue. While iOS apps can be coded with Xamarin on a Windows machine, Xamarin requires ssh access to a macOS machine in order to do the actual build for iOS. There is no way around this. With Xamarin, the developer must have access to a Mac to build iOS binaries. In the case of NativeScript, the AppBuilder extension from Telerik provides a cloud build service that does not require the developer to own a Mac. This is essentially a networked Mac that we own and provide to developers as a service. The developer then builds on the Windows machine, the code is compiled on our servers and sent back to you in IPA format, ready to be installed on iOS. NativeScript also provides companion applications that can be downloaded from the iOS and Android app stores. These companion apps allow developers to load their applications on their devices without actually installing them. This is very helpful during the development process. It’s important to note that in the case of iOS, Xamarin does not generate an Xcode project. NativeScript does generate a full Xcode project, enabling full access to the raw iOS project at design time if needed. Both Xamarin and NativeScript provide a rich step-debugging experience. Xamarin provides debugging through Visual Studio as well as Xamarin Studio. Xamarin also provides a performance profiler. NativeScript provides an extension for Visual Studio Code as well as the Telerik Platform extension for Visual Studio which provides the same step debugging functionality inside of Visual Studio. You can watch this video for a preview of what debugging NativeScript applications in Visual Studio Code looks like. Since NativeScript actually creates full iOS and Android projects as part of the build process, the native Android and iOS profiling tools can be used. Depending on how applications are built with Xamarin, performance comparison will differ. However, since both of these framework produce native applications, the performance is usually ~60 frames per second. That means that there is no perceptible speed issue. The way that we measure performance goes much deeper than what the human eye can perceive. There are some areas where Xamarin is faster (startup time) and some where NativeScript is faster (GPU usage and memory). For instance, here are the benchmarks for measuring startup time for a blank app with a single button. Startup time is defined as the amount of time after a user launches the application before they see the first screen. In these tests, you can see that Xamarin is roughly 200 ms faster than NativeScript at startup time. Another important benchmark is how fast each of these frameworks are able to perform “marshalling”. this means that when you create a string or integer in C#, how long does it take for Xamarin to actually create that string on the underlying operating system. How long does it take NativeScript to do the same with JavaScript? We refer to this as “marshaling”. You can see from the table below that NativeScript is significantly faster as marshaling JavaScript to native code. This is one NativeScript’s core focuses - to be the fastest runtime on top of native code. In these tests, we measured the amount of time it took both NativeScript and Xamarin to create and populate large arrays. You can see that NativeScript is roughly 3x faster than Xamarin when it comes to working with large data sets. Frame rates A little earlier in this article, I mentioned that the most important thing to users in terms of performance is frame rate, and that both NativeScript and Xamarin typically maintain good frame rates. Even though it looks as if movement on TVs, computers and phones is smooth, these devices are actually drawing static images, known as frames, in very rapid succession.The human eye can only process so many frames per second. At some point, it can no longer see individual frames and will view movement as smooth. This happens at roughly 18 frames per second (fps). However, for every amount that the fps is increased after 18 fps, the motion and pictures become smoother and crisper. This tops out at 60 fps. When we’re able to maintain a constant 60 fps rate, the UI appears buttery-smooth. When native applications get complex, sometimes they will drop frames. This means that the frame rate fluctuates as the animation or processing gets intense. This causes the the UI to appear jerky. This commonly happens in long scrolling list of items. To test this, we built an infinite scrolling list of items with images (pulled from Reddit), and then tested it on both NativeScript and Xamarin.Forms. You can see the frame rate results below as the list of items is scrolled. While NativeScript stays fairly close to 60 fps, Xamarin.Forms is all over the place. This results in a janky experience for the user. This highlights another reason that we built NativeScript: developers need a framework that put a premium on speed. After working with hybrid frameworks for so long at Telerik, we wanted to build something that was first and foremost blazingly fast. Everything else took a backseat to that goal in the creation of NativeScript. One of the most important items to consider between Xamarin and NativeScript is the presence of so called Application Frameworks. These are frameworks developed for the purpose of organizing code, and making complex programming tasks (such as routing, dependency injection and state management) much easier by abstracting them away into a common code base. Good examples of application frameworks would be Caliburn.Micro for WPF and Angular or Ember for web development. Xamarin ships an implementation of MVVM. This means that a developer can bind the UI to properties and methods on a model and when one changes, the other is changed as well. Here is an example of binding a simple text input to a `FirstName` property on a view model. Class ViewModel : INotifyPropertyChanged { String firstName; ViewModel() { .FirstName = “Jane Doe” Entry text=”{Binding FirstName}”> Entry.BindingContext local:ViewModel MVVM in and of itself is not really an application framework as much as it is a pattern for binding. It does not address a lot of larger applications concerns like dependency injection, component composition, recommended code structure and the like. This is where so called application frameworks come in. For Xamarin, one such framework is MVVM Light Xamarin apps can leverage the MVVM Light and MVVM Cross frameworks which provide the aforementioned pieces of necessary framework functionality. Both of these frameworks are open source and maintained by contributors from the community. When using MVVM Cross or MVVM Light, the bindings remain largely the same as in the example above. These frameworks extend the out-of-the-box MVVM model to provide a complete application framework. MVVM Light is used more with Native Xamarin than Xamarin Forms as Forms provides much more of the necessary application framework pieces out of the box. Developers who have done extensive XAML development might be more familiar with MVVM Light since it is often used with other XAML based platforms. Angular 2 is an application framework for TypeScript applications developed by Google. Angular is also open source and while they accept community contributions, the majority of the work is done by a dedicated team at Google. While Angular is typically thought to only be applicable to the web, NativeScript supports Angular 2 as a first-class citizen and we recommend that developers use it when building NativeScript applications. Implementing this same form functionality from the example above in Angular 2 looks like this… import {Component} from '@angular/core' @Component({ selector: 'my-app' , template: '<TextField [(ngModel)]="firstName"></TextField>' export class MyApp { firstName: string = "Jane Doe" Notice that in Angular 2 there are no awkward interfaces or classes to inherit from. It’s simply a component annotation and a standard class object. Angular’s binding is almost magical and is one of the reasons why it’s become so very popular with developers. Angular also provides ways to build partial UI components share between pages (directives) as well as a model for dependency injection (injectables), along with an enormous developer community. The other benefit to using Angular 2 is that it works on the web as well. This means that you can share a LOT of code between web and Angular apps. I recommend watching the ng-conf presentationfrom TJ VanToll and Jen Looper which demonstrates how to do this in 20 minutes. This is an evolution in multi-platform development, and it’s powered by Angular 2 and NativeScript. One of the most critical parts of development with any framework is the availability of plugins to extend that framework to make it do, well, whatever you might want. No place is this more important than on mobile. For instance, neither iOS or Android provide a Side Drawer implementation out of the box. This means that neither do NativeScript or Xamarin. However, both of these frameworks have a plugin model which allows developers to build plugins like a side drawer which you can then download and use in your project. NativeScript Plugins are delivered via npm packages. For instance, if I wanted to add a side drawer to the above simple form example, I could install Telerik UI For NativeScript, which includes a side drawer, advanced listview, charts, ect. That is done from the command line. $ tns plugin add nativescript-telerik-ui There are currently ~300 NativeScript plugins available, and this number grows every month. This includes integrations for things like Firebase, Azure, Couchbase, TouchID, Audio, Video and more. While the community is free to build and publish their plugins to npm at any time, Telerik also provides a Verified Plugins Marketplace for NativeScript. These are plugins that are vetted by Telerik for code quality and documentation. Developers submit their plugins to the Verified Plugins Marketplace and once they are verified, Telerik will offer official support for those plugins if required. There are many plugins for both Xamarin and Xamarin Forms. Xamarin Forms plugins are distributed via NuGet when possible. There are currently Many plugins for Xamarin Forms are commercial, including Telerik UI For Xamarin. Commercial plugins are commonly installed by downloading a Dynamic Link Library (DLL) and adding it to a Xamarin project manually. The exact amount of plugins available for Xamarin is difficult to measure since they are spread between open source plugins (37), plugins delivered on NuGet (2341), those listed on the Xamarin Plugins page (547) and some commercial plugins that aren’t listed in any of those places. I personally had a very hard time navigating the Xamarin Plugins landscape. I could not find a side drawer on Nuget for Xamarin.Forms. I found one on their official plugins site, but it was listed as iOS only. The good news is that I know that Telerik UI For Xamarin has a Side Drawer component that works for Xamarin.Forms and comes with a Windows Installer that makes it relatively easy to use the components with Visual Studio. As developers, we often find it difficult to prescribe a given technology stack end to end to solve a specific problem - the answer often is ‘it depends’. This hesitation of choosing the right technology is compounded when the problem is as tricky as making native cross-platform mobile apps. Your chosen approach to go cross-platform should depend on the needs of your app, your skills, toolsets, maintainability of the app and eventually, the needs of your customer. To help with this decision, we have made this handy over-simplified flow chart. You should go with NativeScript if: You should go with Xamarin if: No matter what your choice is, we are here to help. If you choose to go with Xamarin, check out our premium UI for Xamarin controls, which greatly simplify adding complex controls such as side drawers, complex editable listviews, charts and graphs to your Xamarin apps. If you choose to go with NativeScript head to nativescript.org and go through one of our getting started tutorials, which will get you up and running with NativeScript quickly. And in case you need polished components for NativeScript, make sure you check [UI for NativeScript](). In the end, having to make a choice between different technologies is a good thing. The more choices that developers have, the greater their chances of eventual success, which is really all that matters in the end. (expect a newsletter every 4-8 weeks) Ready to try NativeScript? Build your first cross-platform mobile app with our free and open source framework. If you see an area for improvement or have an idea for a new feature, we'd love to have your help!
https://www.nativescript.org/blog/nativescript-and-xamarin
CC-MAIN-2016-44
en
refinedweb
There seems to be lots of new checking going on in newest GWT Shell runtime environment. I haven't seen this earlier but now if you use the BaseTreeModel class without a type specifier, the GWT shell will throw a lot of warning messages at you, and RPC can stop working all together I noticed. The source is the generic list children on the BaseTreeModel class. GWT informs that it won't be able to optimize your compiled collection unless its type is given, which is good. The BaseTreeModel class unfortunately requires that its children also extends BaseTreeModel. E.g. if you have public class Department extends BaseTreeModel<Employee> public class Employee extends BaseTreeModel you will get lots of warnings. But if you do this instead public class Employee extends BaseTreeModel<BaseTreeModel> .. it works. Looks a bit awkward but ... Based on this I propose an addendum to the JavaDoc for the BaseTreeModel class along these lines. Note: If you are sending BaseModelTree instances over RPC the class representing the lowest level of your BaseModelTree hiearchy must specify itself or the BaseTreeModel class as its generic type specifier. If not GWT optimizer will give warnings.
https://www.sencha.com/forum/showthread.php?37812-GWT-shell-barks-at-non-generic-BaseTreeModel-classes
CC-MAIN-2016-44
en
refinedweb
iQuestSequenceCallback Struct ReferenceThis callback is fired when the sequences finished running properly. More... #include <tools/questmanager.h> Inheritance diagram for iQuestSequenceCallback: Detailed DescriptionThis callback is fired when the sequences finished running properly. It is not called it the sequence is aborted! Definition at line 409 of file questmanager.h. Member Function Documentation Sequence finishes. The documentation for this struct was generated from the following file: - tools/questmanager.h Generated for CEL: Crystal Entity Layer 1.4.0 by doxygen 1.5.8
http://crystalspace3d.org/cel/docs/online/api-1.4/structiQuestSequenceCallback.html
CC-MAIN-2016-44
en
refinedweb
{-# LANGUAGE RankNTypes #-} {- | This! > updatedTriangle :: Either String Triangle > updatedTri. This library is hosted on github (click on the /Contents/ link above and you should see the Homepage link) so it should be very easy to forked it and send patches to me. Also since this module is new I'm open to radical modifications if you have a good suggestion, so suggest away! :) -} module Data.Lenses ( -- * Basic functions to create lenses and use them fromGetSet, fetch, update, alter -- * Lense evaluators , runOn, runOnT , evalFrom, evalFromT , execIn, execInT -- * Structure lenses , runSTLense , to, from -- * Generic helper functions , getAndModify, modifyAndGet, ($=), ($%) ) where import Data.Traversable import Data.STRef import Control.Monad.ST import Control.Monad.State hiding (sequence, mapM) import Control.Monad.Identity hiding (sequence, mapM) import Prelude hiding (sequence, mapM) {- | This function takes a "getter" and "setter" function and returns our lense. Usually you only need to use this if you don't want to use Template Haskell to derive your Lenses for you. With a structure Point: > data Point = Point { > x_ :: Float, > y_ :: Float > } > deriving (Show) This (from "Data.Lenses.Template"): $( deriveLenses ''Point ) }) -} fromGetSet :: (MonadState r m) => (r -> a) -> (a -> r -> r) -> StateT a m b -> m b fromGetSet getter setter m = do s <- get (a, newFieldValue) <- runStateT m $ getter s put $ setter newFieldValue s return a {- | fetches a field from a structure using a lense: > somePoint = Point 5 3 > a = somePoint `fetch` x > b = somePoint `fetch` y > -- a == 5 > -- b == 3 -} fetch :: (MonadState a m) => r -> (m a -> StateT r Identity a) -> a fetch s lense = evalFrom s $ lense get {- | updates a field in a structure using a lense: > somePoint = Point 5 3 > newPoint = (somePoint `update` y) 15 > -- newPoint == Point 5 15 -} update :: (MonadState a m) => r -> (m () -> StateT r Identity b) -> a -> r update s lense newValue = execIn s $ lense (put newValue) {- | alters a field in a structure using a lense and a function: > somePoint = Point 5 3 > newPoint = (somePoint `alter` y) (+1) > -- newPoint == Point 5 4 -} alter :: (MonadState a m) => (m () -> StateT r Identity b) -> (a -> a) -> r -> r alter lense f s = execIn s $ lense (modify f) {- | Runs a state monad action on a structure and returns the value returned from the action and the updated structure. > somePoint = Point 5 3 > a = runOn somePoint $ x (modifyAndGet (+1)) > -- a == (6, Point 6 3) -} runOn :: b -> StateT b Identity a -> (a, b) runOn s l = runIdentity $ runOnT s l {- | Monad transformer version of 'runOn'. Note that 'runOnT' = 'runStateT'. -} runOnT :: b -> StateT b m a -> m (a, b) runOnT = flip runStateT {- |)) -} evalFrom :: b -> StateT b Identity a -> a evalFrom s l = fst $ runOn s l {- | Monad transformer version of 'evalFrom'. Note that 'evalFromT' = 'flip' 'evalStateT'. -} evalFromT :: (Monad m) => b -> StateT b m a -> m a evalFromT = flip evalStateT {- | Runs a state monad action on a structure and returns the updated structure Use it to update fields: > somePoint = Point 5 3 > a = execIn somePoint $ x (put 1) > -- a == Point 1 3 note that: > execIn somePoint (x (put 1)) == (somePoint `update` x) 1 The advantage over 'update' is that it allows you to specify a different final action besides 'put' like so: > a = execIn somePoint $ x (modifyAndGet (+1)) > -- a = Point 6 3 -} execIn :: a -> StateT a Identity b -> a execIn s l = snd $ runOn s l {- | Monad transformer version of 'execIn'. Note that 'execIn' = 'flip' 'execStateT'. -} execInT :: (Monad m) => b -> StateT b m a -> m b execInT = flip execStateT -- * Structure lenses {- |. -} runSTLense :: (Traversable f, Traversable t) => (forall s. f (State a b -> s) -> t s) -> f a -> (t b, f a) runSTLense f x = runST $ do stLenses <- mapM newSTRef x values <- sequence $ f $ fmap injectState stLenses updates <- mapM readSTRef stLenses return (values, updates) where injectState :: STRef s a -> State a b -> ST s b injectState ref m = do s <- readSTRef ref let (a, s') = runState m s writeSTRef ref s' return a {- | A helper combinator used for applying a monad to element collected by a fetching function. For example: > everyOther :: [a] -> [a] > everyOther [] = [] > everyOther (x:[]) = [x] > everyOther (x:y:xs) = x : everyOther xs > addOne :: State Int () > addOne = modify (+1) > test :: [Int] > test = (addOne `to` everyOther) `from` [1, 2, 9, 6, 7, 8, 4] > -- test == [2, 2, 10, 6, 8, 8, 5] which is the same as: > test = snd $ runSTLense (addOne `to` everyOther) [1, 2, 9, 6, 7, 8, 4] which is the same as: > test = snd $ runSTLense (fmap ($ addOne) . everyOther) [1, 2, 9, 6, 7, 8, 4] -} to :: (Functor f) => a -> (c -> f (a -> b)) -> c -> f b to m f = fmap ($ m) . f {- | Applies 'runSTLense' to a function and a structure and returns the 'snd' of the result. See 'to' for example of use. -} from :: (Traversable t, Traversable f) => (forall s. t (State a b -> s) -> f s) -> t a -> t a from f x = snd $ runSTLense f x {- | Modifies the state in a state monad and returns the original value. 'getAndModify' and 'modifyAndGet' should really be in 'Control.Monad.State.Class' -} getAndModify :: (MonadState s m) => (s -> s) -> m s getAndModify f = do a <- get modify f return a {- | Modifies the state in a state monad and returns the new value. -} modifyAndGet :: (MonadState s m) => (s -> s) -> m s modifyAndGet f = modify f >> get {- | An operator for assigning a value to the value referenced by a lense. (see the example near the end of the tutorial at the start of this module) -} infixl 0 $= ($=) :: (MonadState s m) => (m () -> b) -> s -> b lense $= x = lense $ put x {- | Flipped version of '($)'. -} infixl 0 $% ($%) :: a -> (a -> b) -> b ($%) = flip ($)
http://hackage.haskell.org/package/lenses-0.1.6/docs/src/Data-Lenses.html
CC-MAIN-2016-44
en
refinedweb
To dig in just a little bit further, scope of an HTTP Cookie is controlled by attributes set in the cookie header. - Expires: A Date/Time value indicating when the cookie should expire. The client deletes the cookie once it expires. - Max-Age: A numeric value representing a timespan span indicating life of the cookie. Like in case of Expires, the cookie is deleted once it reaches it maximum age. Between Expires and Max-Age, Age takes precedence. If neither is set, client deletes the cookie once the ‘session’ expires - Domain: Specifies the domain which receives the cookie. If not specified, domain is the origin server. - Path: Limits the cookie to the specified path within the domain. If not specified, the URI’s path is used. Cookies and Web APIWeb API as we know is meant for building services over HTTP. These services can be consumed by any type of client. It could be a web page making AJAX requests or it could be a headless bot polling for data or it could be a native app fetching data. Given the various types of clients that can access a Web API endpoint, relying on cookies to get back session related information is not a particularly good design choice. Having said that, Web API does work over HTTP and cookies are a part of the HTTP spec. So for whatever narrow case you may need your Web API services to support cookies, you can definitely do that. But we will pepper it with lots of ‘watch out’, ‘don’t do it’ and ‘I told you so’ flags!!! Setting a CookieWe can set a Cookie in the HttpResponseMessage for a Web API Get request. We use the CookieHeaderValue object as follows public HttpResponseMessage Get() { HttpResponseMessage respMessage = new HttpResponseMessage(); respMessage.Content = new ObjectContent<string []> (new string[] { "value1", "value2" }, new JsonMediaTypeFormatter()); CookieHeaderValue cookie = new CookieHeaderValue("session-id", "12345"); cookie.Expires = DateTimeOffset.Now.AddDays(1); cookie.Domain = Request.RequestUri.Host; cookie.Path = "/"; respMessage.Headers.AddCookies(new CookieHeaderValue[] { cookie }); return respMessage; } The CookieHeaderValue object is defined in the System.Net.Http.Header namespace. The JsonMediaTypeFormatter is defined in System.Net.Http.Formatters namespace. If we start monitoring traffic using Fiddler, and hit the URL:[yourport]/api/values/ we will see the following As we can see, the cookie that we had added got posted to us. Setting a Cookie with multiple valuesIf you want to setup multiple values in a cookie instead of multiple cookies you can use name value pairs and stuff them into one cookie as follows public HttpResponseMessage Get() { HttpResponseMessage respMessage = new HttpResponseMessage(); respMessage.Content = new ObjectContent<string[]>(new string[] { "value1", "value2" }, new JsonMediaTypeFormatter()); var nvc = new NameValueCollection(); nvc["sessid"] = "1234"; nvc["3dstyle"] = "flat"; nvc["theme"] = "red"; var cookie = new CookieHeaderValue("session", nv); cookie.Expires = DateTimeOffset.Now.AddDays(1); cookie.Domain = Request.RequestUri.Host; cookie.Path = "/"; respMessage.Headers.AddCookies(new CookieHeaderValue[] { cookie }); return respMessage; } If we watch the response on fiddler we can see the Name Value pairs come in separated by ampersand (&). Now if we were to post the cookie back to the server, we can retrieve it as follows: public void Post([FromBody]string value) { string sessionId = ""; string style = ""; string theme = ""; CookieHeaderValue cookie = Request.Headers.GetCookies("session").FirstOrDefault(); if (cookie != null) { CookieState cookieState = cookie["session"]; sessionId = cookieState["sessid"]; style = cookieState["3dstyle"]; theme = cookieState["theme"]; } } We can see the values as follows: Setting Cookies in Web API HandlersOne can also create a DelegationHandler to inject cookies outside the Controller. Since Requests go to controllers via the handler and response goes out via the handler, a custom delegation handler becomes the right place to add Application Specific cookies. For example if we want to stamp our cookies with a key that comprises of a custom string and a GUID, we could build it as follows public class RequestStampCookieHandler : DelegatingHandler { static public string CookieStampToken = "cookie-stamp"; protected async override System.Threading.Tasks.Task<HttpResponseMessage> SendAsync( HttpRequestMessage request, System.Threading.CancellationToken cancellationToken) { string cookie_stamp; var cookie = request.Headers.GetCookies(CookieStampToken).FirstOrDefault(); if (cookie == null) { cookie_stamp = "COOKIE_STAMPER_" +Guid.NewGuid().ToString(); } else { cookie_stamp = cookie[CookieStampToken].Value; try { Guid guid = Guid.Parse(cookie_stamp.Substring(22)); } catch (FormatException) { // Invalid Stamp! Create a new one. cookie_stamp = "COOKIE_STAMPER_" + Guid.NewGuid().ToString(); } } // Store the session ID in the request property bag. request.Properties[CookieStampToken] = cookie_stamp; // Continue processing the HTTP request. HttpResponseMessage response = await base.SendAsync(request, cancellationToken); // Set the session ID as a cookie in the response message. response.Headers.AddCookies(new CookieHeaderValue[] { new CookieHeaderValue(CookieStampToken, cookie_stamp) }); return response; } } Now if we run the application and try to access the API, we’ll receive our ‘COOKIE_STAMPER_*’ cookie in the response. Weird Behavior Alert: If you see above, there is only one Cookie here. But I have not removed any code from the controller so ideally we should have had two cookies. This is something weird I encountered. If you encounter it, a hack around it is to push the same cookie with different tokens in the delegation handler and magically all three cookies reappear. I am putting this down as a sync bug in AddCookies for now, will circle with the WebAPI team to see if it’s a known issue or I missed something. ConclusionTo wrap up, we saw how we could use cookies to exchange bits of information between a Web API service and a client. However, the biggest caveat is that Cookie data should not trusted as it is prone to tampering or complete removal (in case cookies are disabled). In the narrow band of possible requirements that may need Cookies, we saw how to use them here. Download the entire source code of this article (Github) Will you give this article a +1 ? Thanks in advance 1 comment: This is an excellent article. i have tried to download the code and ran. i am getting the response, But i am not able to see the cookies on the header. Do i need to do some settings in my local environment? Appreciate your response.
http://www.devcurry.com/2013/04/http-cookies-and-aspnet-web-api.html
CC-MAIN-2016-44
en
refinedweb
#include <Xm/Xm.h> Widget XmObjectAtPoint( Widget widget, Position x, Position y); XmObjectAtPoint searches the child list of the specified manager widget and returns the child most closely associated with the specified x,y coordinate pair. For the typical Motif manager widget, XmObjectAtPoint uses the following rules to determine the returned object: The preceding rules are only general. In fact, each manager widget is free to define "most closely associated" as it desires. For example, if no child intersects x,y, a manager might return the child closest to x,y. Returns the child of manager most closely associated with x,y. If none of its children are sufficiently associated with x,y, returns NULL. XmManager(3).
http://www.makelinux.net/man/3/X/XmObjectAtPoint
CC-MAIN-2016-44
en
refinedweb
import java.lang.annotation.Retention; import java.lang.annotation.Target; method can trigger the creation of a new Saga instance.method can trigger the creation of a new Saga instance. SagaEventHandler When a Saga is started due to an invocation on a StartSaga annotated method, the association of the annotated method and the actual property's value are used to define a AssociationValue for the created saga. Thus, a method with this definition: @StartSaga(forceNew=true) @SageEventHandler(associationProperty="orderId") public void handleOrderCreated(OrderCreatedEvent event) will always trigger the creation of a saga that can be found with an AssociationValue with key "orderId" and as value the value returned by event.getOrderId(). This annotation can only appear on methods that have been annotated with . SagaEventHandler true, a new Saga is always created when an event assignable to the annotated method is handled. If false, a new saga is only created if no Saga's exist that can handle the incoming event. This annotation can only appear on methods that have been annotated with org.axonframework.saga.annotation.SagaEventHandler @SagaEventHandler.
http://grepcode.com/file/repo1.maven.org$maven2@org.axonframework$axon-core@1.2.1@org$axonframework$saga$annotation$StartSaga.java
CC-MAIN-2016-44
en
refinedweb
Inline Performance impact Discussion in 'C++' started by GJ, Oct 18, 2005. - Similar Threads Can Novell Security impact ASP.NET performance?Scott Cadillac, Oct 25, 2003, in forum: ASP .Net - Replies: - 0 - Views: - 454 - Scott Cadillac - Oct 25, 2003 runtime performance impact of template usageAaron Anodide, Jul 29, 2003, in forum: C++ - Replies: - 9 - Views: - 604 - Aaron Anodide - Jul 30, 2003 import keyword behaviour - performance impact if used multipletimes?Andrew James, Nov 27, 2004, in forum: Python - Replies: - 4 - Views: - 549 - Nick Coghlan - Jan 15, 2005 Performance impact of using decoratorsvinjvinj, Mar 10, 2006, in forum: Python - Replies: - 13 - Views: - 746 - Peter Otten - Mar 11, 2006 Performance Impact of ASP.NET 1.1 and 2.0 Side by SideRich, Jan 23, 2007, in forum: ASP .Net - Replies: - 6 - Views: - 492 - Tim Mackey - Jan 24, 2007 Multithreaded application performance impact of global arrays, Mar 17, 2006, in forum: C Programming - Replies: - 3 - Views: - 383 - Keith Thompson - Mar 17, 2006 Virtual Inheritance Performance Impact, Apr 30, 2007, in forum: C++ - Replies: - 1 - Views: - 461 - Markus Schoder - Apr 30, 2007 Handling/Seeking Thousands+ of Files (performance impact?)Bobby Edward, Feb 18, 2009, in forum: ASP .Net - Replies: - 2 - Views: - 325 - Bobby Edward - Feb 18, 2009
http://www.thecodingforums.com/threads/inline-performance-impact.449140/
CC-MAIN-2016-44
en
refinedweb
XMonad.Prompt.MPD Description This module lets the user select songs and have MPD add/play them by filtering them by user-supplied criteria(E.g. ask for an artist, then for the album..) Synopsis Usage To.Artist, MPD.Album] >> return () That way you will first be asked for an artist name, then for an album by that artist etc.. If you need a password to connect to your MPD or need a different host/port, you can pass a partially applied withMPDEx to the function: addMatching (MPD.withMPDEx "your.host" 4242 "very secret") .. findMatching :: RunMPD -> XPConfig -> [Metadata] -> X [Song]Source Lets the user filter out non-matching songs. For example, if given [Artist, Album] as third argument, this will prompt the user for an artist(with tab-completion), then for an album by that artist and then returns the songs from that album. addMatching :: RunMPD -> XPConfig -> [Metadata] -> X [Int]Source Add all selected songs to the playlist if they are not in it. addAndPlay :: RunMPD -> XPConfig -> [Metadata] -> X ()Source Add matching songs and play the first one. type RunMPD = forall a. MPD a -> IO (Response a)Source Allows the user to supply a custom way to connect to MPD (e.g. partially applied withMPDEx).
http://hackage.haskell.org/package/xmonad-extras-0.10.1.2/docs/XMonad-Prompt-MPD.html
CC-MAIN-2016-44
en
refinedweb
Pre Resources at the end of this article. The sample application To highlight the local store aspects of Android application development, I cover a sample application that allows you to test the execution of various types of APIs. The source code is available for download. The application supports the actions in Figure 1. Figure 1. The use cases Figure 1 lists the following use cases: - Manage and store preferences - Load information from application assets - Export information to internal memory, external memory, and the local database - Read information from internal memory and the local database - Clear stored information - See information on the screen You make use of the local store in the application throughout this article, as follows: - Preferences are captured from the user, stored locally, and used throughout the application. - A picture of the user is retrieved from internal application assets, stored in local internal memory and external memory, and rendered on the screen. - A list of friends in JSON is retrieved from the application's assets. It is parsed and stored in local internal memory, external memory, and the relational database, and rendered on the screen. The sample application defines the classes in Table 1. Table 1. Sample application classes The example application uses two types of data. The first one is the application preferences that are stored as name-value pairs. For preferences, define the following information: - A filename, used to load and store the list of the friends' names - A filename, used to load and store a picture for the user - A flag, if set, indicates to automatically delete all stored data at application startup The second type of data is a friends list. The friends list is initially represented in a Facebook Graph API JSON format, which consists of an array of name and friend objects (see Listing 1). Listing 1. The friends list (in Facebook Graph API JSON format) { "data": [ { "name": "Edmund Troche", "id": "500067699" } ] } The simple format above makes the Friend object and database schema simple. Listing 2 shows the Friend class. Listing 2. Friend class package com.cenriqueortiz.tutorials.datastore; import android.graphics.Bitmap; /** * Represents a Friend */ public class Friend { public String id; public String name; public byte[] picture; public Bitmap pictureBitmap;; } In addition to the ID and name, the sample application also keeps references to the friend's picture. While the sample application is not using those, you can easily extend the sample application to retrieve the picture from Facebook and show it in the main screen. The database schema consists of a single table to store the Friend's information. It has three columns: - Unique ID or key - Facebook ID - The Friend's name Listing 3 shows the SQL statement for the corresponding relational table declaration. Listing 3. Friend database table db.execSQL("create table " + TABLE_NAME + " (_id integer primary key autoincrement, " + " fid text not null, name text not null) "); Based on this information, you can display the name on the main screen, and using the ID, you can retrieve additional details for the selected user. In the sample application, you just display the names. It is left to you to experiment with retrieving additional information. Note that you can easily change the code to go directly to Facebook. Storing application preferences This section covers the Preferences API and screens. The Android API provides a number of ways to deal with preferences. One way is to use SharedPreferences directly and use your own screen design and management of the preferences. The second approach is to use a PreferenceActivity. A PreferenceActivity automatically takes care of how the preferences are rendered on the screen (by default they look like the system's preferences) and automatically stores or saves the preferences as the user interacts with each preference through the use of SharedPreferences. To simplify the sample application, use a PreferenceActivity to manage the preferences and the preference screen (see Figure 2). The preferences screen displays two sections: Assets and Auto Settings. Under Assets, you can enter filenames for both the Friends List and Picture options. Under Auto Settings, you can select a check box to delete information at startup. Figure 2. The Preferences screen as implemented In Figure 2, the layout was defined using the declarative approach through XML (instead of programmatically); declarative XML is preferred as it keeps the source code clean and readable. Listing 4 shows the XML declaration for the Preferences UI. Listing 4. The Preferences screen XML declaration <?xml version="1.0" encoding="utf-8"?> <PreferenceScreen xmlns: <PreferenceCategory android: <EditTextPreference android: <EditTextPreference android: </PreferenceCategory> <PreferenceCategory android: <CheckBoxPreference android: </PreferenceCategory> </PreferenceScreen> The PreferenceScreen consists of two instances of EditTextPreference, a CheckBoxPreference, and the two category groups as defined by PreferenceCategory (one for Asset and the other for Auto Settings). In the sample application, the design calls for the Preference screen to be invoked using a menu item. For this, use an Intent message to invoke the Preference Screen Activity called AppPreferenceActivity (see Listing 5). Note that I do not cover how Intent works in detail. See Resources for more information on Intents. Listing 5. The AppPreferenceActivity /* * AppPreferenceActivity is a basic PreferenceActivity * C. Enrique Ortiz | */ package com.cenriqueortiz.tutorials.datastore; import android.os.Bundle; import android.preference.PreferenceActivity; public class AppPreferenceActivity extends PreferenceActivity { /** * Default Constructor */ public AppPreferenceActivity() {} /** * Called when the activity is first created. * Inflate the Preferences Screen XML declaration. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); addPreferencesFromResource(R.xml.prefs); // Inflate the XML declaration } } In the sample application, invoke the Intent as in Listing 6, from inside the Menu item handler. Listing 6. Invoking the Preference activity using an Intent /** * Invoked when a menu item has been selected */ @Override public boolean onOptionsItemSelected(MenuItem item) { switch (item.getItemId()) { // Case: Bring up the Preferences Screen case R.id.menu_prefs: // Preferences // Launch the Preference Activity Intent i = new Intent(this, AppPreferenceActivity.class); startActivity(i); break; case R.id.menu...: : break; } return true; } In addition, you must define all Intents in the AndroidManifest XML file as in Listing 7. Listing 7. Defining the Intent in AndroidManifest.xml : <application android: : : <activity android: </activity> : </application> Recall that PreferenceActivity uses SharedPreferences to automatically store the preferences as the user interacts with the preferences screen. The application then uses these preferences when it executes to perform its various tasks. Listing 8 shows how to use SharedPreferences directly to load the stored preferences; you can refer to the companion sample code on how the loaded preferences are used throughout the sample code. In addition, Listing 8 also shows how to store preferences directly with SharedPreferences using an Editor in case you prefer to manage the preferences yourself (and not through PrefenceActivity). Listing 8 shows how to use SharedPreferences to load the stored preferences and how to make changes to the stored preferences using an Editor. Listing 8. Using SharedPreferences ///////////////////////////////////////////////////////////// // The following methods show how to use the SharedPreferences ///////////////////////////////////////////////////////////// /** * Retrieves the Auto delete preference * @return the value of auto delete */ public boolean prefsGetAutoDelete() { boolean v = false; SharedPreferences sprefs = PreferenceManager.getDefaultSharedPreferences(appContext); String key = appContext.getString(R.string.prefs_autodelete_key); try { v = sprefs.getBoolean(key, false); } catch (ClassCastException e) { } return v; } /** * Sets the auto delete preference * @param v the value to set */ public void prefsSetAutoDelete(boolean v) { SharedPreferences sprefs = PreferenceManager.getDefaultSharedPreferences(appContext); Editor e = sprefs.edit(); String key = appContext.getString(R.string.prefs_autodelete_key); e.putBoolean(key, v); e.commit(); } Next, you will see how to use a database to store data. Using SQLite databases Android provides support to local relational databases through SQLite. The table (defined in the following listings) summarizes the important database classes used in the sample application. For the sample application, a DBHelper class is used to encapsulate some of the database operations (see Listing 9). Listing 9. The DBHelper package com.cenriqueortiz.tutorials.datastore; import java.util.ArrayList; import android.content.Context; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteOpenHelper; public class DBHelper extends SQLiteOpenHelper { A number of constants are defined for the database version, database name, and table name (see Listing 10). Listing 10. Initializing the DBHelper private SQLiteDatabase db; private static final int DATABASE_VERSION = 1; private static final String DB_NAME = "sample.db"; private static final String TABLE_NAME = "friends"; /** * Constructor * @param context the application context */ public DBHelper(Context context) { super(context, DB_NAME, null, DATABASE_VERSION); db = getWritableDatabase(); } The onCreate() method is invoked when the database is ready to be created. In this method, the tables are created (see Listing 11). Listing 11. Creating a database table /** * Called at the time to create the DB. * The create DB statement * @param the SQLite DB */ @Override public void onCreate(SQLiteDatabase db) { db.execSQL( "create table " + TABLE_NAME + " (_id integer primary key autoincrement, " + " fid text not null, name text not null) "); } The insert() method is invoked by MainActivity when exporting information to the database (see Listing 12). Listing 12. Inserting a row /** * The Insert DB statement * @param id the friends id to insert * @param name the friend's name to insert */ public void insert(String id, String name) { db.execSQL("INSERT INTO friends('fid', 'name') values ('" + id + "', '" + name + "')"); } The deleteAll() method is invoked by MainActivity when clearing the database. It deletes the table (see Listing 13). Listing 13. Deleting the database table /** * Wipe out the DB */ public void clearAll() { db.delete(TABLE_NAME, null, null); } Two SELECT ALL methods are provided: cursorSelectAll(), which returns a cursor, and listSelectAll(), which returns an ArrayList of Friend objects. These methods are invoked by MainActivity when loading information from the database (see Listing 14). Listing 14. Running a Select All that returns an ArrayList /** * Select All returns a cursor * @return the cursor for the DB selection */ public Cursor cursorSelectAll() { Cursor cursor = this.db.query( TABLE_NAME, // Table Name new String[] { "fid", "name" }, // Columns to return null, // SQL WHERE null, // Selection Args null, // SQL GROUP BY null, // SQL HAVING "name"); // SQL ORDER BY return cursor; } The listSelectAll() method returns the selected row inside an ArrayList container that is used by MainActivity to bind it to the MainScreen ListView (see Listing 15). Listing 15. Running a Select All that returns a cursor /** * Select All that returns an ArrayList * @return the ArrayList for the DB selection */ public ArrayList<Friend> listSelectAll() { ArrayList<Friend> list = new ArrayList<Friend>(); Cursor cursor = this.db.query(TABLE_NAME, new String[] { "fid", "name" }, null, null, null, null, "name"); if (cursor.moveToFirst()) { do { Friend f = new Friend(); f.id = cursor.getString(0); f.name = cursor.getString(1); list.add(f); } while (cursor.moveToNext()); } if (cursor != null && !cursor.isClosed()) { cursor.close(); } return list; } The onUpgrade() method is invoked if a database version change is detected (see Listing 16). Listing 16. Detecting if a database version changes /** * Invoked if a DB upgrade (version change) has been detected */ @Override /** * Invoked if a DB upgrade (version change) has been detected */ @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { // Here add any steps needed due to version upgrade // for example, data format conversions, old tables // no longer needed, etc } } Throughout MainActivity, DBHelper is used when you export information to the database, load information from the database, and when you clear the database. The first thing is to instantiate the DBHelper when MainActivity is created. Other tasks performed at onCreate() include initializing the different screen views (see Listing 17). Listing 17. MainActivity onCreate() initializing the database public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); appContext = this; setContentView(R.layout.main); dbHelper = new DBHelper(this); listView = (ListView) findViewById(R.id.friendsview); friendsArrayAdapter = new FriendsArrayAdapter( this, R.layout.rowlayout, friends); listView.setAdapter(friendsArrayAdapter); : : } Listing 18 shows how to load the friends list from the assets and how to parse and insert it into the database. Listing 18. MainActivity inserting into the database String fname = prefsGetFilename(); if (fname != null && fname.length() > 0) { buffer = getAsset(fname); // Parse the JSON file String friendslist = new String(buffer); final JSONObject json = new JSONObject(friendslist); JSONArray d = json.getJSONArray("data"); int l = d.length(); for (int i2=0; i2<l; i2++) { JSONObject o = d.getJSONObject(i2); String n = o.getString("name"); String id = o.getString("id"); dbHelper.insert(id, n); } // Only the original owner thread can touch its views MainActivity.this.runOnUiThread(new Runnable() { public void run() { friendsArrayAdapter.notifyDataSetChanged(); } }); } Listing 19 shows how to perform a ALL and how to bind the data to the main screen ListView. Listing 19. MainActivity Select All and binding the data to ListView final ArrayList<Friend> dbFriends = dbHelper.listSelectAll(); if (dbFriends != null) { // Only the original owner thread can touch its views MainActivity.this.runOnUiThread(new Runnable() { public void run() { friendsArrayAdapter = new FriendsArrayAdapter( MainActivity.this, R.layout.rowlayout, dbFriends); listView.setAdapter(friendsArrayAdapter); friendsArrayAdapter.notifyDataSetChanged(); } }); } Next, take a look at using the Internal Storage API with the example application. Using the device's internal storage for private data With the data storage API, you can store data using the internal storage. The information can be private, and you have the option of letting other applications have read or write access to it. This section covers the API to store private data using android.content.Context.openFileInput, openFileOutput, and getCacheDir() to cache data rather than store it persistently. The snippet shown in Listing 20 shows how to read from the internal private store. What makes the storage private is the use of openFileOutput() with MODE_PRIVATE. Listing 20. Reading from the local private store /** * Writes content to internal storage making the content private to * the application. The method can be easily changed to take the MODE * as argument and let the caller dictate the visibility: * MODE_PRIVATE, MODE_WORLD_WRITEABLE, MODE_WORLD_READABLE, etc. * * @param filename - the name of the file to create * @param content - the content to write */ public void writeInternalStoragePrivate( String filename, byte[] content) { try { //MODE_PRIVATE creates/replaces a file and makes // it private to your application. Other modes: // MODE_WORLD_WRITEABLE // MODE_WORLD_READABLE // MODE_APPEND FileOutputStream fos = openFileOutput(filename, Context.MODE_PRIVATE); fos.write(content); fos.close(); } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } The snippet in Listing 21 shows how to read from the internal private store; see the use of openFileInput(). Listing 21. Reading from the local private store /** * Reads a file from internal storage * @param filename the file to read from * @return the file content */ public byte[] readInternalStoragePrivate(String filename) { int len = 1024; byte[] buffer = new byte[len]; try { FileInputStream fis = openFileInput(filename);; } Listing 22 shows how to delete from the internal private store. Listing 22. Deleting from the local private store /** * Delete internal private file * @param filename - the filename to delete */ public void deleteInternalStoragePrivate(String filename) { File file = getFileStreamPath(filename); if (file != null) { file.delete(); } } Now you can see how to use external storage for public data. Using the device's external storage for public data With the data storage API, you can store data using the external storage. The information can be private, and you have the option of letting other applications have read or write access to it. In this section, you code the API to store public data using a number of APIs including getExternalStorageState(), getExternalFilesDir(), getExternalStorageDirectory(), and getExternalStoragePublicDirectory(). You use the following path for public data: /Android/data/<package_name>/files/. Before using the external storage, you must see if it is available, and if it is writable. The following two code snippets show helper methods to test for such conditions. Listing 23 tests whether the external storage is available. Listing 23. Testing if external storage is available /** * Helper Method to Test if external Storage is Available */ public boolean isExternalStorageAvailable() { boolean state = false; String extStorageState = Environment.getExternalStorageState(); if (Environment.MEDIA_MOUNTED.equals(extStorageState)) { state = true; } return state; } Listing 24 tests whether the external storage is read-only. Listing 24. Testing if external storage is read-only /** * Helper Method to Test if external Storage is read only */ public boolean isExternalStorageReadOnly() { boolean state = false; String extStorageState = Environment.getExternalStorageState(); if (Environment.MEDIA_MOUNTED_READ_ONLY.equals(extStorageState)) { state = true; } return state; } Listing 25 shows how to write to external storage to store public data. Listing 25. Writing to external memory /** * Write to external public directory * @param filename - the filename to write to * @param content - the content to write */ public void writeToExternalStoragePublic(String filename, byte[] content) { // API Level 7 or lower, use getExternalStorageDirectory() // to open a File that represents the root of the external // storage, but writing to root is not recommended, and instead // application should write to application-specific directory, as shown below. String packageName = this.getPackageName(); String path = "/Android/data/" + packageName + "/files/"; if (isExternalStorageAvailable() && !isExternalStorageReadOnly()) { try { File file = new File(path, filename); file.mkdirs(); FileOutputStream fos = new FileOutputStream(file); fos.write(content); fos.close(); } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } } Listing 26 shows how to read from external storage. Listing 26. Reading from external memory /** * Reads a file from internal storage * @param filename - the filename to read from * @return the file contents */ public byte[] readExternallStoragePublic(String filename) { int len = 1024; byte[] buffer = new byte[len]; String packageName = this.getPackageName(); String path = "/Android/data/" + packageName + "/files/"; if (!isExternalStorageReadOnly()) { try { File file = new File(path, filename); FileInputStream fis = new FileInputStream(file);; } The Listing 27 snippet shows how to delete a file from external memory. Listing 27. Deleting a file from external memory /** * Delete external public file * @param filename - the filename to write to */ void deleteExternalStoragePublicFile(String filename) { String packageName = this.getPackageName(); String path = "/Android/data/" + packageName + "/files/"+filename; File file = new File(path, filename); if (file != null) { file.delete(); } } Working with external storage requires a special permission, WRITE_EXTERNAL_STORAGE, to be requested through the AndroidManifest.xml (see Listing 28). Listing 28. The WRITE_EXTERNAL_STORAGE <uses-permission android: The external storage API allows you to store files publicly by storing the files in predefined directories based on their types such as Pictures, Ringtones, and so on. This approach is not covered in this article, but you should become familiar with it. In addition, remember that files in the external storage can disappear at any time. Related methods If you have temporary files that do not need long-term persistence, you can store those files in a cache. Cache is a special memory useful for storing mid-sized or small data (less than a megabyte), but you must be aware that contents of the cache can be purged at any time depending on how much memory is available. Listing 29 shows a helper method that returns the path to the cache in internal memory. Listing 29. Retrieving the path to the internal memory cache /** * Helper method to retrieve the absolute path to the application * specific internalInternalCacheDirectory() { String cacheDirPath = null; File cacheDir = getCacheDir(); if (cacheDir != null) { cacheDirPath = cacheDir.getPath(); } return cacheDirPath; } Listing 30 shows a helper method that returns the path to the cache in external memory. Listing 30. Retrieving the path to the external memory cache /** * Helper method to retrieve the absolute path to the application * specific externalExternalCacheDirectory() { String extCacheDirPath = null; File cacheDir = getExternalCacheDir(); if (cacheDir != null) { extCacheDirPath = cacheDir.getPath(); } return extCacheDirPath; } Through the use of the example application, you should now have a good understanding of how to use the device's external storage for public data. Conclusion This article covered the Android storage APIs, from preferences to using SQLite and internal and external memory. With the preferences API, you can have your application collect and store simple preference information. Using the SQLite API, you can store more complex data, and with the internal and external storage, you can store files that are private to the application or publicly available to other applications. Stored data that persists across sessions allows your application to work even when disconnected from the network. You should now have the expertise to take advantage of all of these types of storage when you develop Android applications. Download Resources Learn - Facebook mobile web page: Learn how to incorporate Facebook into your own mobile application using the same APIs as those provided for all websites, formatted to fit on a mobile phone. - Create an Application - Facebook Developers: Register your Facebook application. - Extended permissions - Facebook Developers: Request extended permissions if your application needs to access other parts of the user's profile that might be private, or if your application needs to publish content to Facebook on a user's behalf. - The official Facebook developer documentation: Explore the powerful APIs that enable you to create social experiences to drive growth and engagement on your website. - Facebook Authentication: Learn about authentication and authorization when using the Facebook platform to develop your applications. - The Facebook Developer Roadmap: Use this roadmap to plan for changes that might require code modifications. - Develop Android applications with Eclipse (Frank Ableson, developerWorks, February 2008): The easiest way to develop Android applications is to use Eclipse. Learn all about this topic in this developerWorks tutorial. - Introduction to Android development (Frank Ableson, developerWorks, May 2009): Get an introduction to the Android platform and learn how to code a basic Android application. -. - as noted on the official Android Developer website. - Intents: Learn about this abstract description of an operation to be performed from the Android Developer site. - Android SDK documentation: Get the latest information in the Android API reference. - The Open Handset Alliance: Visit Android's sponsor. - More articles by this author (C. Enrique Ortiz, developerWorks, July 2004-current): Read articles about Android, mobile applications, web services, - Facebook Android SDK (Currently an Alpha release): Work with this library to integrate Facebook into your Android mobile application. - The Facebook Platform: Expand your ability to build social applications on Facebook and the web. - The Facebook Old Rest API (The previous version of the Graph API): Interact with the Facebook website programmatically through simple HTTP requests. - The Facebook Graph API (The current version): Dig into this core Facebook Platform API. - OAuth 2.0 Protocol specification (July 2010): Work with OAuth authentication, supported by the Facebook Platform. -. -
http://www.ibm.com/developerworks/xml/library/x-androidstorage/index.html
CC-MAIN-2016-44
en
refinedweb
You’ve finally gotten code coverage results for your C++ code. But now your code coverage numbers are lower than you expected. Why is that? And how do you get around it? There are several issues that make C++ code coverage data “noisy” and/or inaccurate. First, if you use the standard C++ libraries, you’ll find the code coverage results littered with entries from the std namespace, which represents classes and methods provided by Microsoft. Clearly, you don’t want library classes provided by Microsoft to impact your code coverage results. Especially if it lowers your numbers. Another issue has to do with using template classes. Look at this example, which is a real example from some of our code: You would probably expect this to show 100% code coverage in Visual Studio (or TFS). It doesn’t. In fact, it reports 72% code coverage. The problem is the return statement at the end. Visual Studio reports code coverage in terms of blocks covered/not covered (also this article). There are blocks in the wstring class that aren’t being covered in this method. How do you fix this problem? Building a Code Coverage Report There are probably many ways to provide better code coverage numbers for C++ code. I chose to write a report for TFS that reads results from the data warehouse and process them to produce more “useful” results. Here is an overview of what the report does with the data (I’ll describe each in more detail): - Use lines instead of blocks for coverage information - Treat partially covered lines as covered - Collapse concrete template instances into a “common” instance - Use the method results with the highest coverage - Filter out namespaces and functions The following sections describe each of these bullets in more detail. Use lines instead of blocks, and treat partially covered as covered I debated a long time before making these changes. Visual Studio reports code coverage results based on block coverage, so I wasn’t sure how people would react to using lines instead. However, switching to lines and also treating partially covered lines as covered mitigates the issue I described above. It does mean we’re over-reporting coverage for lines that have conditional code, but our style guidelines call for using multiple lines in such cases, so this is probably a small issue. Alternatively, there are Visual Studio APIs, found in the Microsoft.VisualStudio.Coverage.Analysis namespace that could be used to process the code coverage results. I haven’t tried this, and it would have taken more time than writing a report. Collapse template instances We have some template classes that we use as base classes. The methods are well covered with unit tests. However not all concrete instances call the base methods, and we have a number of concrete instances of these classes. The result is that we get partial code coverage multiplied by the number of concrete instances. Our desire is to know how much coverage we achieve for the code we wrote, not for compiler-generated code. As such, I chose to collapse concrete instances. For example, we have a template called BaseControl, with a concrete instance called BaseControl<ItreeView>. In the report, I collapse this into BaseControl__ so that we can “combine” the results for each base method from sub classes. This brings us to the next topic. Use the highest coverage results We have several different runs on our build server, and we want to combine these in the report results. For example, we have unit tests run during gated check-ins, and we runs all our automated acceptance tests nightly. The acceptance tests cover some code (such as UI code) that isn’t covered by the unit tests, and visa versa. Because of the testing approach we’re using, the unit tests are written in C++/CLI and include the product code directly into the test DLL (see Writing Unit Tests in Visual Studio for Native C++) whereas our acceptance tests are written in C# and use the production DLL. This results in the DLL names being different between these two runs. The report I created replaces these different names with a common name so the data can be combine. One the report collapses template instances and combined results from DLLs with different names, but representing the same code, there may be more than one record (result) for each method in the product code. How do you combine these records? TFS stores the name of the class and method, as well as lines covered, not covered, and partially covered (and also block information). I chose to use the record for each method that has the highest coverage. So if you have different test runs that cover different parts of a method, choosing the result with the highest coverage will under-report code coverage since there’s no way to tell how much overlap there is between different result records. Filter out namespaces and functions In my MSDN article, Agile C++ Development and Testing with Visual Studio and TFS, I talked about filtering out common patterns, such as “std::”, from the results because this represents library code that we don’t want to include. Since then, I’ve found even more patterns, so the new report I created excludes a much larger list. Using the Results The results of all this work were very significant. Switching from block to line coverage moved our code coverage results up from 62% for one DLL to about 70%. Doing all the other work took this number up to 87%, which is more of what I expected to see since we used TDD to write all the code in that DLL. Using the Report If you’ve never added a new report to your reporting site, you can find some instructions here:. Once you install the report in your team project, you’ll see something that looks like this: Customizing the Report There are several places where you can customize the report. Open the report in Report Manager (right click on the Reports node in Team Explorer inside Visual Studio and select Show Report Site), click the Properties tab, then click Parameters. You’ll see something like this: There are several hidden parameters you can use to provide more control over what you see in the report: Remember to click Apply to save changes you make to these parameters.
https://blogs.msdn.microsoft.com/jsocha/2011/08/17/interpreting-c-code-coverage-results/
CC-MAIN-2016-44
en
refinedweb
This walkthrough will demonstrate the use of Google Closure by starting at the obligatory 'hello world' level and will build on that, step-by-step, to produce a simple animated stock ticker. This is a beginner-level article but it's not an introduction to the JavaScript language. I will assume a basic understanding of objects and functions in JavaScript. If you're new to JavaScript, it's important to spend some time with the raw language. There can be no substitute for the insight and experience gained from this. It won't be long though before you're going to need the help of a JavaScript library. And that is where this article picks up. The advantage of using a library like this is that it can help protect you from some of the JavaScript Gotchas and from a lot of cross-browser compatibility issues, etc. Update: The code is now also available on github, and you can see a working version of the ticker on the project's github pages. Google Closure is a set of tools developed by Google to help develop rich web applications using JavaScript. In a nutshell, it consists of: The application described in this article introduces a handful of the utilities provided by the library. I will also demonstrate the creation of a simple Closure Template, and make some basic use of the compiler. At various points throughout the walkthrough, I will stop to describe some of the concepts and ideas behind Google Closure that I have learnt while starting out with the library myself, including its approach to namespaces, dependency management, events, etc. So, to get started, let's say we want the ability to place a div tag anywhere in our html where we want the ticker tape to appear. Let's start with some html like this... div html <html> <head> <title>Closure Ticker Tape</title> </head> <body> <!-- The 'tickerTapeDiv' div is where we want the ticker tape to appear --> <div id="tickerTapeDiv"></div> <!-- And this is the JavaScript that will do the work to insert the ticker tape! --> <script src="TickerTape.js"></script> <script>tickerTape.insert('tickerTapeDiv');</script> </body> </html> Next create a TickerTape.js file with some JavaScript to define that tickerTape.insert function, like this... tickerTape.insert goog.provide('tickerTape'); goog.require('goog.dom'); tickerTape.insert = function(elementId) { var tickerContainer = goog.dom.getElement(elementId); tickerContainer.innerHTML = "Hello Closure World - the ticker tape will go here!"; } Now, as you've probably guessed, that strange goog object scattered around the code comes from the Google Closure library. We'll talk about that shortly but for now, to make the above code work, we'll need to get a local copy of the library as shown here and add a script tag referencing it in the html as shown below: goog script <!-- The 'tickerTapeDiv' div is where we want the ticker tape to appear --> <div id="tickerTapeDiv"></div> <!-- The relative paths to any required library files. --> <!-- For now we just need to include the base.js file from the library. --> <!-- (The exact path will depend on where you saved the library files.) --> <script src="..\closure-library\closure\goog\base.js"></script> <!-- And this is the JavaScript that will do the work to insert the ticker tape! --> <script src="TickerTape.js"></script> <script>tickerTape.insert('tickerTapeDiv');</script> If you open the HTML file in your favourite browser now, you should see something like this... Good - so now we can go through the JavaScript we've written and find out what this goog thing is all about. The body of the function should be fairly self-explanatory. It uses one of the library's DOM helper utilities goog.dom.getElement to find the element where we want to place the ticker tape and, for now, simply sets its inner HTML to a hello world message. goog.dom.getElement var tickerContainer = goog.dom.getElement(elementId); tickerContainer.innerHTML = "Hello Closure World - the ticker tape will go here!"; The most interesting part of the code we've written so far is probably those first two lines... goog.provide('tickerTape'); goog.require('goog.dom'); ...they're part of the library's namespacing and dependency management system. That's quite fundamental to the library so we'll stop there for a while and work through them and some of the reasoning behind them in the next section. But, congratulations - now you've got another hello world program under your belt! JavaScript doesn't have a linker. Each compilation unit (essentially each script tag) has its top-level variables and functions thrown in to a common namespace (the global object) together with those from any other compilation units. So, if you have two (or more) of these that happen to have top-level variables or functions with the same name, then the two parts of the program will interfere with each other in weird and wonderful (and very wrong) ways that can be quite a challenge to track down. You can address this by organising your code into 'namespaces'. In Google Closure, a namespace is basically a chain of objects hanging off the global object. If each part of your program only adds variables and functions to the object chain that represents its own namespace then you will reduce the risk of them inadvertently interfering with each other. You define your namespaces with goog.provide and use goog.require to indicate any other namespaces that you want to make use of. Let's look at each of these in turn. goog.provide goog.require goog.provide ensures that a chain of objects exists corresponding to the input namespace. So, in our hello world example, goog.provide('tickerTape') makes sure that the tickerTape object is available to us so we can go ahead and add our insert function to it. goog.provide('tickerTape') tickerTape insert More precisely, goog.provide will split the input string into individual names using the dot '.' as a delimiter. Then, starting with the left-most name, it will check to see if there is an object already defined on goog.global with that name. If there is, then that existing object is used, but if there isn't, it will add a new object literal with that name to goog.global. It will then continue through the names from left to right and repeat the process (making sure subsequent names are added to the previous object in the chain). '.' goog.global Let's see if we can make that clearer with another example. For example, goog.provide('example.name.space') would be equivalent to writing the following JavaScript... goog.provide('example.name.space') // Splitting the input string into individual names using the dot as a // delimiter we get... // 'example', 'name', 'space' // So, starting with 'example', is there already an object // on goog.global that we can use? // Create one if there isn't. var example = goog.global.example || (goog.global.example = {}); // Then moving from left to right the next object is 'name'. // So, does that already exist // on our 'example' object? Again, create one if required. if(!example.name) example.name = {}; // And finally moving on to the next object 'space', add that (if necessary) to the // object chain we've built up already if(!example.name.space) example.name.space = {}; So goog.provide is good because it saved me a load of typing and it means I don't need to worry as much about my objects trampling over each other or interfering with the global namespace. Future calls to goog.provide won't upset existing namespaces – it will either use existing objects or append new objects as appropriate. (By the way, you can think of goog.global as an alias for the global object.) We've seen how we can define namespaces to help prevent any part of our program from inadvertently affecting another part. However, we wouldn't be able to write very interesting programs if the parts weren't allowed to interact at all! This is where we need goog.provide's partner: goog.require. Put simply, goog.require is used to specify other namespaces that are explicitly used in a JavaScript file, so we can be sure that the functions, etc, that we need have been made available to us before we use them. If the namespace has already been provided (i.e., if the object chain already exists on goog.global), then there's no need to do anything further. But if it hasn't, then it will find the file that provides the namespace and any files that provide namespaces that file requires, and so on... When it has found them all, it simply adds script tags to include them in the appropriate order. In our case, that single call to goog.require('goog.dom') is enough for the library to work out that it needs to add the following script tags in order to resolve our dependencies: goog.require('goog.dom') <script src="closure-library/closure/goog/debug/error.js" type="text/javascript"></script> <script src="closure-library/closure/goog/string/string.js" type="text/javascript"> </script> <script src="closure-library/closure/goog/asserts/asserts.js" type="text/javascript"> </script> <script src="closure-library/closure/goog/array/array.js" type="text/javascript"></script> <script src="closure-library/closure/goog/useragent/useragent.js" type="text/javascript"> </script> <script src="closure-library/closure/goog/dom/browserfeature.js" type="text/javascript"> </script> <script src="closure-library/closure/goog/dom/tagname.js" type="text/javascript"></script> <script src="closure-library/closure/goog/dom/classes.js" type="text/javascript"></script> <script src="closure-library/closure/goog/math/coordinate.js" type="text/javascript"> </script> <script src="closure-library/closure/goog/math/size.js" type="text/javascript"></script> <script src="closure-library/closure/goog/object/object.js" type="text/javascript"> </script> <script src="closure-library/closure/goog/dom/dom.js" type="text/javascript"></script> Now can you imagine having to maintain all that by hand? Note: This only describes how it works when using the uncompiled source, which is good for development but has performance issues for production. When you compile your code with the Closure Compiler, instead of including script tags for entire JavaScript files, it will work out exactly which parts of the code you require and paste them in directly. (As well as doing what it can to make the combined code as small as possible so it can load as quickly as possible.) I'll show you a simple way to produce this compiled code later, but how the compiler works is beyond the scope of this article. There is one other aspect of Google Closure's dependency management that we don't use in our example, but it is important to be aware of when you come to create more complex programs yourself. Namespaces provided and required by a JavaScript file should be registered using goog.addDependency. This gives it all the information it needs to build up the dependency graph so it can work out exactly which files need to be included to provide a particular namespace and all of its dependencies. goog.addDependency We don't need to do this in our example because we're only using namespaces from the library and there is already a file in the closure library called deps.js which contains the required calls to goog.addDependency for the library's namespaces. goog.addDependency When your programs contain multiple namespaces with dependencies between them, then this is something you will need to consider. Fortunately, you don't necessarily have to manually create and maintain your own deps.js file though. There are tools to help you. For example, the library comes with a python script to help you, or you can use a tool like plovr. (I'll be saying a bit more about plovr later, because it really does simplify the process of development with Google Closure.) Just one last thing before we move on and start to develop our 'hello world' program into something a bit more interesting: If you haven't worked it out already, that script tag we added to the html to include base.js from the closure library... <script src="..\closure-library\closure\goog\base.js"></script> Well, you need that because that's where the goog.global object is set up and functions like goog.provide, goog.require and goog.addDependency (among others) are defined. So, clearly, none of this would work without that! Ok. Let's get back to the development of our ticker tape. Next we're going to look at how we can get some data in to our application. To get the data, we're going to use YQL (Yahoo! Query Language) and one of the many open data tables built by the community. I'm not going to go into YQL and how the open data tables work here, but if you haven't used them before, you should definitely take a look. Often, while working on learning projects like this, one of the challenges can be getting hold of a good set of real data. A resource like YQL with the open data tables can be invaluable. We'll use the yahoo.finance.quotes table to get stock quotes for our stock ticker. (I would assume that much of the financial data we'll be getting back is delayed - so don't go investing your life savings based on what you see here!) Google Closure provides a class for handling XMLHttpRequests: goog.net.XhrIo. This is a good example of the library doing its job of protecting you from various browser differences as there are differences in their implementations of XMLHttpRequest. XMLHttpRequests goog.net.XhrIo XMLHttpRequest There are a few ways of using the class but the simplest is to use the static send function to make a one-off asynchronous request. Just give it the URI to request and a callback function to invoke when the request completes. static send goog.net.XhrIo.send('?\ q=select * from yahoo.finance.quotes where symbol in\ ("HSBA.L, RDSA.L, BP.L, VOD.L, GSK.L, AZN.L, BARC.L, BATS.L, RIO.L, BLT.L")\ &env=store://datatables.org/alltableswithkeys&format=json', function(completedEvent) { var tickerContainer = goog.dom.getElement(elementId); tickerContainer.innerHTML = "Hello Closure World - now I've got some data for you!"; }); Or, after a bit of refactoring to make it easier to read, hopefully you can see how simple it is... tickerTape.interestingStocks = ["HSBA.L", "RDSA.L", "BP.L", "VOD.L", "GSK.L", "AZN.L", "BARC.L", "BATS.L", "RIO.L", "BLT.L"]; tickerTape.insert = function(elementId) { var queryStringFormat = '?\ q=select * from yahoo.finance.quotes where symbol in ("%s")\ &env=store://datatables.org/alltableswithkeys&format=json'; var queryString = goog.string.format(queryStringFormat, tickerTape.interestingStocks); goog.net.XhrIo.send(queryString, function(completedEvent) { var tickerContainer = goog.dom.getElement(elementId); tickerContainer.innerHTML = "Hello Closure World - now I've got some data for you!"; }); } Here tickerTape.interestingStocks just includes some of the top stocks from the FTSE 100. You could change it to include any other symbols you like. tickerTape.interestingStocks You might have to be a bit careful with that long query string in your JavaScript file. I've laid it out over a few lines with indents for clarity in the article, but if that whitespace ends up as spaces in the URI, you'll just get a bad request response! Notice that we've simply replaced the list of symbols in the original query string with %s. That's so we can put the URI query together using goog.string.format which enables us to do sprintf-like formatting. %s goog.string.format Of course, you'll have to add a couple of extra calls to goog.require for the additional library utilities we're using...but you knew that already, didn't you? goog.require('goog.string.format'); goog.require('goog.net.XhrIo'); Opening the HTML file in a browser now should give you something like this... Ok, so maybe it's still not all that exciting to look at, but we've now made an asynchronous URI request and put some HTML in our page when the request completed. The XhrIo class is event based. That callback we passed to the send function gets assigned as a 'listener' to the goog.net.EventType.COMPLETE event on the XhrIo object. XhrIo goog.net.EventType.COMPLETE We'll be looking at events a bit more later when we come to animate the ticker, but for now what you need to know is that when an event listener gets called it is passed an object representing the event. The event object has a target property referencing the object that originated the event. In our case, the target property of the event object will point to the XhrIo object used to send the request. We can use this to access the data returned from the request... target goog.net.XhrIo.send(queryString, function(completedEvent) { var xhr = completedEvent.target; var json = xhr.getResponseJson(); }); And the JSON we get back looks something like this... (There's actually a whole load more information in each of those items in the quote array, but I'm only showing a few of the fields here to make it easier to see the structure of the JSON.) quote { "query": { "count":3, "created":"2011-09-22T21:01:53Z", "lang":"en-US", "results":{ "quote":[{"Bid":"486.15","Change":"-25.30", "Symbol":"HSBA.L","Volume":"47371752"}, {"Bid":"1998.00","Change":"-73.0001", "Symbol":"RDSA.L","Volume":"4519560"}, {"Bid":"383.95","Change":"-19.95", "Symbol":"BP.L","Volume":"46864036"}] } } } So we can easily get to the data and start displaying it... += goog.string.format ("Bid %d: %s\t", i, json.query.results.quote[i].Bid); } }); You can open the HTML file now and you should see the bid prices for each of the stocks we requested... Now we're getting somewhere! Of course, we could continue building up our HTML for each quote within that loop, but Google Closure provides an alternative - Closure Templates. Closure Templates provide a simple syntax for dynamically building HTML. With Closure Templates, you can lay out your HTML with line breaks and indentation making it much easier to read. The template compiler will remove the line terminators and whitespace. It will also escape the HTML for you so that's another thing you don't need to worry about. Closure templates are defined in files with a .soy extension, and they should start by defining a namespace for the templates in the file. So let's go ahead and create a TickerTape.soy file and start with the namespace... .soy {namespace tickerTape.templates} The rest of the file can then contain one or more template definitions. A template is defined within a template tag, and each must have a unique name – starting with a dot to indicate that it is relative to the file's namespace. So, let's create our first template called stockItem below that namespace definition: template stockItem {template .stockItem} {/template} A template must be preceded with a JSDoc style header, with a @param declaration for each required parameter and a @param? declaration for any optional parameters. We don't need any optional parameters, but if we define the fields from the objects in the quote array in our JSON then it means we'll be able to use the template by simply passing in any of those objects. We're going to use the Symbol, Volume, Bid and Change fields... @param @param? Symbol Volume Bid Change /** * Create the html for a single stock item in the ticker tape * @param Symbol {string} * @param Volume {number} * @param Bid {number} * @param Change {number} */ {template .stockItem} {/template} In the body of the template, we can simply lay out our HTML in an easily readable and maintainable manner, and insert the values of the input parameters as required with the {$paramName} syntax. {$paramName} /** * Create the html for a single stock item in the ticker tape * @param Symbol {string} * @param Volume {number} * @param Bid {number} * @param Change {number} */ {template .stockItem} <span class="stockItem"> <span class="symbol">{$Symbol}</span> To use the template in our JavaScript, we just need to add a call to goog.require for the template namespace to the top of our JavaScript file... goog.require('tickerTape.templates'); ...and then, instead of building up the HTML for each quote within the loop, we can just pass the quote objects over to the template... for(var i = 0; i < json.query.count; i++) { tickerContainer.innerHTML += tickerTape.templates.stockItem(json.query.results.quote[i]); } The bad news is that you won't be able to see the results of this straight away. First, to compile the template, you will need to download the latest files from the closure template project hosting site, then run a Java file, then add some script tags to the HTML for the additional dependencies... do you know what? This is getting way too much! Let's break there and look at one of the tools I mentioned earlier that will deal with all this for us! plovr greatly simplifies Closure development by streamlining the process of compiling your closure templates, managing your dependencies, and minimizing your JavaScript. During development with plovr, you will be able to edit your JavaScript and/or Soy files, refresh your browser, and have it load the updated version to reflect your edits. It will also display any errors or warnings from the compilation at the top of the browser. { // Every config must have an id, and it // should be unique among the configs being // served at any one time. // (You'll see why later!) "id": "tickerTape", // Input files to be compiled... // ...the file and its dependencies will be // compiled, so you don't need to manually // include the dependencies. "inputs": "TickerTape.js", // Files or directories where the inputs' // dependencies can be found ("." if everything // is in the current directory)... // ...note that the Google Closure Library and // Templates are bundled with plovr, so you // don't need to point it to them! "paths": "." } java -jar plovr-c047fb78efb8.jar serve plovr.config You should see something like this: As you can see, by default plovr runs on port 9810. So all we need is a script tag in our HTML with the URL as shown below. Notice that this really is all we need - I've removed the script tag referencing the library's base.js file and the reference to our tickerTape.js. Your HTML should now look like this: 9810 script <html> <head> <title>Closure Ticker Tape</title> </head> <body> <!-- The 'tickerTapeDiv' div is where we want the ticker tape to appear --> <div id="tickerTapeDiv"></div> <!-- With plovr running in server mode (as described) this url will return the --> <!-- compiled code for the configuration with given id (tickerTape, in our case) --> <!-- When setting the mode parameter to 'RAW' the output gets loaded into --> <!-- individual script tags, which can be useful for development if you end up --> <!-- needing to step into the code to debug. --> <script src=""></script> <!-- Now we can simply call our function to insert the ticker tape --> <script>tickerTape.insert('tickerTapeDiv');</script> </body> </html> Now loading the HTML in a browser will automatically compile any templates, etc. It also means you can edit your JavaScript &/or Templates and simply refresh the browser to see the effect of the changes. So, at last, you can see the results of creating that Closure Template... Or, with a bit of simple CSS styling... body { overflow-x: hidden; } .stockItem { font-family:"Trebuchet MS", Helvetica, sans-serif; font-size:14px; display: inline-block; width:250px; } .symbol { font-weight:bold; margin-right:3px; } .volume { font-size:10px; margin-right:3px; } .bid { margin-right:10px; } .change { color: green; } Now that we've streamlined our development process, it is very easy to play around some more with our template and see what else we can do. One simple thing we can do is use the if command for conditional output. The syntax for the command looks like this: if {if <expression>} ... {elseif <expression>} ... {else} ... {/if} So, instead of just outputting the Change value directly in our template, we can format it differently for a positive and a negative change. Open the TickerTape.soy file and replace the line which outputs the Change value with the following... {if $Change < 0} <span class="changeDown">{$Change}</span> ...and with the necessary additions to the CSS... .changeDown { color: red; } .changeUp { color: green; } ...you just need to refresh the browser to see something like this. Notice the colour highlighting for positive and negative changes. Closure templates can also make calls to other closure templates. Supposing we wanted to format those long, messy numbers we're getting for Volume. We could do that in a separate template and call it from our template, passing in the Volume value as a parameter. Add the following to our template to replace the line where we're currently outputting the Volume value within a span... <span class="volume"> {call .formatVolume} {param Volume: $Volume /} {/call} @ </span> And then define the new template at the bottom of our TickerTape.soy file. This uses some more conditional output to show the Volume as thousands, millions, or billions (K, M, or B) as appropriate to the magnitude of the value. /** * Format the stock's volume using K, M, or B * for thousands, millions, or billions! * @param Volume {number} */ {template .formatVolume} {if $Volume < 1000} {$Volume} {elseif $Volume < 1000000} {round($Volume/1000)}K {elseif $Volume < 1000000000} {round($Volume/1000000)}M {else} {round($Volume/1000000000)}B {/if} {/template} And, with another browser refresh, you should see something like this...(Oh, yeah - I snuck in an @ symbol between the volume and bid values. Looks better, I think?) @ Check out the documentation for some further reading on the Closure Template concepts and commands. The final thing we want to do to our ticker tape is to animate it. In other words, make it continuously scroll along the top of the page. This is how we'll do it... This should give the effect of a continuous scrolling and wrapping ticker tape. But, one step at a time...let's see how we could achieve that first part of animating the ticker tape to move the first item out of view. First, we need to make sure that the ticker tape container has the appropriate style settings for us to be able to move its position (i.e., it must be positioned relatively). Of course, we could just do this in the CSS file but it doesn't feel right that our code will rely on something being set in a separate file for it to work properly. A better way would be to set up what we need ourselves in the JavaScript - and, as you would expect, the closure library provides some utilities in goog.styles for us to do this. goog.styles So, you know the drill, go ahead and add the goog.require at the top of the file with the others... goog.require('goog.style'); Then add a new function at the bottom of the file as follows. (This new function is where we're going to do our animation.) tickerTape.start = function(tickerContainer) { // Note - we're assuming that all items in the ticker have the same width // (styled in css) var firstItem = goog.dom.getFirstElementChild(tickerContainer); var itemWidth = goog.style.getSize(firstItem).width; // Make sure the container is set up properly for us to be able to // influence its position goog.style.setStyle(tickerContainer, 'position', 'relative'); goog.style.setWidth(tickerContainer, itemWidth * tickerTape.interestingStocks.length); } To start with, all we're doing in this new function is to set the position style on the container to relative and set its width so that it's wide enough to contain the whole tape without wrapping over multiple lines. position relative Notice how I've calculated the required width by getting the width of the first item and multiplying it by the number of items. This implies the assumption that all stock items are the same width, but I'm not too worried about that for now. We can change it if it becomes a problem. Let's make a call to this new function at the end of our AJAX callback to see the results so far... += tickerTape.templates.stockItem (json.query.results.quote[i]); } tickerTape.start(tickerContainer); }); And you should see something like this... In Google Closure, we can create an animation object by providing an array of start co-ordinates, an array of end co-ordinates, and a duration (in milliseconds). Then calling play on the animation object will animate those co-ordinates in a linear fashion from their starting positions, reaching the ending positions after the specified duration. play So, in our case, to animate the ticker from its starting position towards the left for the width of one item over 5000ms, we need to add the following code to the end of our start function... start // We animate for the width of one item... var startPosition = goog.style.getPosition(tickerContainer); var animation = new goog.fx.Animation([ startPosition.x , startPosition.y ], [ startPosition.x - itemWidth, startPosition.y ], tickerTape.animationDuration); animation.play(); Note that, for the sake of readability and maintainability, I've declared the animation duration as a top-level variable in our namespace. // Time taken to scroll the width of one item in the ticker (in milliseconds) tickerTape.animationDuration = 5000; And, of course, we need to add the goog.require('goog.fx.Animation') call for the animation. goog.require('goog.fx.Animation') We're not quite there yet though. You can run the code as it is, but you won't see anything moving. The animation object is happily throwing out these numbers to animate the co-ordinates we gave it, but we're not paying any attention to it! The animation object will regularly raise a goog.fx.Animation.EventType.ANIMATE event with the new co-ordinates, so we need to listen to that event and respond appropriately. To do that, we use goog.events.listen to assign an event listener. goog.fx.Animation.EventType.ANIMATE goog.events.listen goog.events.listen(animation, goog.fx.Animation.EventType.ANIMATE, function(event) { goog.style.setPosition(tickerContainer, event.x, event.y); }); Remember earlier, when we were talking about the XhrIo AJAX call? I said that the callback we passed to the send function gets assigned as a 'listener' to the goog.net.EventType.COMPLETE event. Well that's exactly what we're doing here. We're adding a listener to the goog.fx.Animation.EventType.ANIMATE event on our animation object and providing a callback which sets the position of our container based on the new co-ordinates. The event object provided to our callback has an x and a y property providing the new co-ordinate. animation x y Make sure you've added the above code after constructing the animation object and before calling animation.play(), and you should see things start to move when you run it. animation.play() Have a look at this tutorial if you want to read up a bit more on the library's event model. Hopefully you saw the ticker tape scrolling along to the left and then stop when the first item was out of view. We can achieve our final step of swapping the items around and repeating the animation by listening to the goog.fx.Animation.EventType.END event. Just add the following code before the call to animation.play()... goog.fx.Animation.EventType.END // Shuffle the items around (removing the first item & adding it to the end) // and repeat the animation. // This gives the effect of a continuous, wrapping ticker. goog.events.listen(animation, goog.fx.Animation.EventType.END, function(event) { firstItem = goog.dom.getFirstElementChild(tickerContainer); goog.dom.removeNode(firstItem); goog.dom.appendChild(tickerContainer, firstItem); animation.play(); }); And that's it! Your ticker tape should now be continuously animating. It's still a very simple application with plenty of room for improvement. There's very little in the way of validation or error-handling, and there is the limitation that the stock quotes are retrieved once so the data remains static unless the page is refreshed. But I'll leave that as an exercise for the reader! In the final section, we'll take a brief look at using the compiler to minimize the code for production. Once we're ready to release our application we're going to want to compile the code. The easiest way to do this is with the following plovr command...(Don't forget to use the appropriate file name for your plovr jar file.) java -jar plovr-c047fb78efb8.jar build plovr.config > tickerTape-compiled.js This gives us a single JavaScript file, tickerTape-compiled.js, containing all our code, compiled templates, and required library code. We can change our HTML to reference this... <html> <head> <title>Closure Ticker Tape</title> <link href="styles.css" rel="stylesheet" type="text/css" /> </head> <body> <!-- The 'tickerTapeDiv' div is where we want the ticker tape to appear --> <div id="tickerTapeDiv"></div> <!-- In production we compile our JavaScript to minimize it and then just --> <!-- load that here. --> <script src="tickerTape-compiled.js"></script> <!-- Now we can simply call our function to insert the ticker tape --> <script>tickerTape.insert('tickerTapeDiv');</script> </body> </html> ...and we just need to upload the three files (HTML, CSS, and compiled JavaScript) to see our ticker tape in action in a production environment. By default, the code is compiled using SIMPLE_OPTIMIZATIONS. This minimizes the code by removing comments, whitespace, and linebreaks, as well as renaming local variables and function parameters to shorter names. You can get a much smaller compiled file by switching to ADVANCED_OPTIMIZATIONS. Just add a line to our plovr.config file for "mode": "ADVANCED". SIMPLE_OPTIMIZATIONS ADVANCED_OPTIMIZATIONS "mode": "ADVANCED" ADVANCED_OPTIMISATIONS is much more aggressive and makes certain assumptions about the code. As a result, it can require extra effort to make sure the compiled code will run in the same way as the raw code. This is beyond the scope of this walkthrough but you can read more on how to achieve this here. ADVANCED_OPTIMISATIONS There's one more important aspect of Google Closure to mention before we finish. If you download the source code provided with the article, you'll notice that I use those JSDoc style comments all over the code, and not just to annotate the templates. Of course, it's always good to document your code, but these comments are more than that. By being a bit more precise with the documentation syntax, the compiler can actually use them to perform some static checks on the code, and therefore warn you about certain mistakes or inconsistencies. You can read more about the JSDoc syntax here. Well that turned out to be quite a long article, but I wanted to do something beyond the 'hello world' level and I hope it's been informative and given some insights into a broad range of the aspects of Google.
http://www.codeproject.com/Articles/265364/First-Adventures-in-Google-Closure?fid=1657099&df=90&mpp=10&sort=Position&spc=None&select=4050581&tid=4049660
CC-MAIN-2016-44
en
refinedweb
I want to execute a function every 60 seconds on Python but I don't want to be blocked meanwhile. How can I do it asynchronously? import threading import time def f(): print("hello world") threading.Timer(3, f).start() if __name__ == '__main__': f() time.sleep(20) You could try the threading.Timer class:. import threading def f(): # do something here ... # call f() again in 60 seconds threading.Timer(60, f).start() # start calling f now and every 60 sec thereafter f()
https://codedump.io/share/WkH6874wAJM8/1/how-to-execute-a-function-asynchronously-every-60-seconds-in-python
CC-MAIN-2016-44
en
refinedweb
- Code: Select all def Charack(self, Health, Defense, Strenght): #self.NPC = Npc self.Health = 10 self.Defense = 12 self.Strenght = 15 def Npc(race, Org, Elf): race.Org = ["10", "15", "20"] race.Elf = ["10", "20", "1"] Print(Npc(0),(1)) I get this error "Traceback (most recent call last): File "C:\Documents and Settings\Owner\My Documents\Chars.py", line 11, in <module> print(Npc(0),(1)) TypeError: Npc() missing 2 required positional arguments: 'Org' and 'Elf'" Ok so I'm trying to get it so that I can call a race and call a skill of the race like health. I know I had help with a class in another post but I don't know how to use is and this is a bit simpler but still good for now. Any idee wat my problem is? Fist time I'm getiing an error like this ad I'm missing something right infront of me Ty guys for reading helping
http://python-forum.org/viewtopic.php?f=6&t=4241
CC-MAIN-2016-44
en
refinedweb
Or simpler: search on]]> There is and that host open sources projects. You can search there. Some of them are for the Windows/Mac platforms but a majority of them are for Linux. There is always google. Just include linux in the search terms like: for example.]]> hi snowman, nothing special, but im only looking for progs, comparing them and then trying to install one. (but linux its a bit difficult as you can see on the basis of kmusicmanager or kguitar.) so, my question reformulated: where to find apps for linux else then in the AUR .. or: with a bigger choice/range. maybe s.o. knows a page like that. edit: (IE: for windows i looked through pages like snapfiles.com or sth like that)]]> Did you looked in the AUR ( ) ? Or in the current/extra repos? What kind of apps are you looking for?]]> many regards to you guys, i finally made it :shock: ... cant imagine that i only picked 2 progs to install, and both of them have such problems and are "wrong pre-configured" (my english is sooo bad, i know ) btw: are there any good sites i could look for apps? looked through kde-apps.org already, but thers isnt as much as i need it ^^ thx & regards, eye]]> malloc and free are from the standard library, you should be able to place #include <stdlib.h> at the top of convertgp3.cpp and it should work. Also, it looks like you are compiling kguitar? You should be able to find an Arch package here which would make things even easier: … _Orphans=0 It's kind of weird that they use malloc and free with a cpp program. I always thought you were supposed to use new and delete...]]> woah... malloc is undeclared... is this made by the same people? who misses this many header files? what project is that from? edit ah kguitar... there's a PKGBUILD in the AUR for this (aur.archlinux.org) here's the header file patch … rfix.patch hey jftaylor21! the other error shows something like this: convertgp3.cpp: In member function `virtual bool ConvertGp3::load(QString)': convertgp3.cpp:87: error: `malloc' undeclared (first use this function) convertgp3.cpp:87: error: (Each undeclared identifier is reported only once for each function it appears in.) convertgp3.cpp:98: error: `free' undeclared (first use this function) convertgp3.cpp:427: warning: format not a string literal and no format arguments convertgp3.cpp:440: warning: format not a string literal and no format arguments convertgp3.cpp:458: warning: format not a string literal and no format arguments make[3]: *** [convertgp3.lo] Error 1 make[3]: Leaving directory `/home/eye/kguitar-0.5/kguitar' make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/home/eye/kguitar-0.5/kguitar' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/eye/kguitar-0.5' make: *** [all] Error 2 so "malloc is undecleared" you guys probably know already what to do 8) .. me not :? s/musiclistview.Tpo"; exit 1; fi musiclistview.cpp: In function `unsigned int random_num(unsigned int)': musiclistview.cpp:227: error: `floor' undeclared (first use this function) musiclistview.cpp:227: error: (Each undeclared identifier is reported only once for each function it appears in.) make[3]: *** [musiclistview.lo] Error 1 make[3]: Leaving directory `/home/eye/kmusicmanager-1.2/libkmusicmanager/view' make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/home/eye/kmusicmanager-1.2/libkmusicmanager' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/eye/kmusicmanager-1.2' make: *** [all] Error 2 [eye@myhost kmusicmanager-1.2]$ Notice that the error comes from the fact that "floor is undeclared" . Floor is a function in the math library, so by adding the #include <math.h> at the beginning of the file, you are telling it what file to look in for the function. What is the other program giving you errors? Maybe we can help you out. you guys are nice, thx 4 your help! i dunno WHY, but adding the line to the file finall made it thx! but @ snowman/phrakture: how did you find this out? because ive got another prog which ends up with an error after typing "make" and want to find out myself, my "wanting" to learn PS I posted a bug report to the project page on sourceforge.... should be fixed going forward]]> add #include <math.h> at begining of file musiclistview.cpp thx phrakture! im gonna try this asap (now ), "but go ahead and add it in" ... but please tell me what ive got do add where. ive got to put "math.h" in ..? Arch does not have seperate "dev" packages - you have everything you need... I looked at the package and tried to compile it myself.... turns out they never included "math.h" in the offending file.... so, either go ahead and add it in (I can provide a patch if you want but it's one line) or contact the devs...]]> hi phrakture, thx 4ya reply, In order to compile and work with KMusicManager, you need to have the following stuff installed : * KDE * Taglib ive installed KDE with "pacman -Sy kdebase kde-i18n-de" and Taglib with "pacman -Sy taglib" so everything should just go fine, but it does not! :? edit: saw THIS on the homepage: To compile KMusicManager, you need to have the proper headers installed. Some distros put these in seperate packages (devel packages). mh, dev packages is sth i also searched for "cedega cvs compiling" but current and extra dont have these. btw: WHICH dev packages do i need? thx, eye
https://bbs.archlinux.org/extern.php?action=feed&tid=14760&type=atom
CC-MAIN-2016-44
en
refinedweb
A question I often receive via my blog and email goes like this: Hi, I just got an email from a Nigerian prince asking me to hold some money in a bank account for him after which I’ll get a cut. Is this a scam? Hi, I just got an email from a Nigerian prince asking me to hold some money in a bank account for him after which I’ll get a cut. Is this a scam? The answer is yes. But that’s not the question I wanted to write about. Rather, a question that I often see on StackOverflow and our ASP.NET MVC forums is more interesting to me and it goes something like this: How do I get the route name for the current route? How do I get the route name for the current route? My answer is “You can’t”. Bam! End of blog post, short and sweet. Joking aside, I admit that’s not a satisfying answer and ending it there wouldn’t make for much of a blog post. Not that continuing to expound on this question necessarily will make a good blog post, but expound I will. It's not possible to get the route name of the route because the name is not a property of the Route. When adding a route to a RouteCollection, the name is used as an internal unique index for the route so that lookup for the route is extremely fast. This index is never exposed. Route RouteCollection The reason why the route name can’t be a property becomes more apparent when you consider that it’s possible to add a route to multiple route collections. var routeCollection1 = new RouteCollection(); var routeCollection2 = new RouteCollection(); var route = new Route("{controller}/{action}", new MvcRouteHandler()); routeCollection1.Add("route-name1", route); routeCollection2.Add("route-name2", route); So in this example, we add the same route to two different route collections using two different route names when we added the route. So we can’t really talk about the name of the route here because what would it be? Would it be “route-name1” or “route-name2”? I call this the “Route Name Uncertainty Principle” but trust me, I’m alone in this. Some of you might be thinking that ASP.NET Routing didn’t have to be designed this way. I address that at the end of this blog post. For now, this is the world we live in, so let’s deal with it. I’m not one to let logic and an irrefutable mathematical proof stand in the way of me and getting what I want. I want a route’s name, and golly gee wilickers, I’m damn well going to get it. After all, while in theory I can add a route to multiple route collections, I rarely do that in real life. If I promise to behave and not do that, maybe I can have my route name with my route. How do we accomplish this? It’s simple really. When we add a route to the route collection, we need to tell the route what the route name is so it can store it in its DataTokens dictionary property. That’s exactly what that property of Route was designed for. Well not for storing the name of the route, but for storing additional metadata about the route that doesn’t affect route matching or URL generation. Any time you need some information stored with a route so that you can retrieve it later, DataTokens is the way to do it. DataTokens I wrote some simple extension methods for setting and retrieving the name of a route. public static string GetRouteName(this Route route) { if (route == null) { return null; } return route.DataTokens.GetRouteName(); } public static string GetRouteName(this RouteData routeData) { if (routeData == null) { return null; } return routeData.DataTokens.GetRouteName(); } public static string GetRouteName(this RouteValueDictionary routeValues) { if (routeValues == null) { return null; } object routeName = null; routeValues.TryGetValue("__RouteName", out routeName); return routeName as string; } public static Route SetRouteName(this Route route, string routeName) { if (route == null) { throw new ArgumentNullException("route"); } if (route.DataTokens == null) { route.DataTokens = new RouteValueDictionary(); } route.DataTokens["__RouteName"] = routeName; return route; } Yeah, besides changing diapers, this is what I do on the weekends. Pretty sad isn’t it? So now, when I register routes, I just need to remember to call SetRouteName. SetRouteName routes.MapRoute("rName", "{controller}/{action}").SetRouteName("rName"); BTW, did you know that MapRoute returns a Route? Well now you do. I think we made that change in v2 after I begged for it like a little toddler. But I digress. MapRoute Like eating a Turducken, that code doesn’t sit well with me. We’re repeating the route name twice here which is prone to error. Ideally, MapRoute would do it for us, but it doesn’t. So we need some new and improved extension methods for mapping routes. public static Route Map(this RouteCollection routes, string name, string url) { return routes.Map(name, url, null, null, null); } public static Route Map(this RouteCollection routes, string name, string url, object defaults) { return routes.Map(name, url, defaults, null, null); } public static Route Map(this RouteCollection routes, string name, string url, object defaults, object constraints) { return routes.Map(name, url, defaults, constraints, null); } public static Route Map(this RouteCollection routes, string name, string url, object defaults, object constraints, string[] namespaces) { return routes.MapRoute(name, url, defaults, constraints, namespaces) .SetRouteName(name); } These methods correspond to some (but not all, because I’m lazy) of the MapRoute extension methods in the System.Web.Mvc namespace. I called them Map simply because I didn’t want to conflict with the existing MapRoute extension methods. System.Web.Mvc Map With these set of methods, I can easily create routes for which I can retrieve the route name. var route = routes.Map("rName", "url"); route.GetRouteName(); // within a controller string routeName = RouteData.GetRouteName(); With these methods, you can now grab the route name from the route should you need it. Of course, one question to ask yourself is why do you need to know the route name in the first place? Many times, when people ask this question, what they really are doing is making the route name do double duty. They want it to act as an index for route lookup as well as be a label applied to the route so they can take some custom action based on the name. In this second case though, the “label” doesn’t have to be the route name. It could be anything stored in data tokens. In a future blog post, I’ll show you an example of a situation where I really do need to know the route name. As an aside, why is routing designed this way? I wasn’t there when this particular decision was made, but I believe it has to do with performance and safety. With the current API, once a route name has been added to a route collection with a name, internally, the route collection can safely use the route name as a dictionary key for the route knowing full well that the route name cannot change. But imagine instead that RouteBase (the base class for all routes) had a Name property and the RouteCollection.Add method used that as the key for route lookup. Well it’s quite possible that the value of the route’s name could change for some reason due to a poor implementation. In that case, the index would be out of sync with the route’s name. RouteBase Name RouteCollection.Add While I agree that the current design is safer, in retrospect I doubt many will screw up a read-only name property which should never change. We could have documented that the contract for the Name property of Route is that it should never change during the lifetime of the route. But then again, who reads the documentation? After all, I offered $1,000 to the first person who emailed me a hidden message embedded in our ASP.NET MVC 3 release notes and haven’t received one email yet. Also, you’d be surprised how many people screw up GetHashCode(), which effectively would have the same purpose as a route’s Name property. GetHashCode() And by the way, there are no hidden messages in the release notes. Did I make you...
http://haacked.com/archive/2010/11/28/getting-the-route-name-for-a-route.aspx
CC-MAIN-2013-20
en
refinedweb
java.lang.Object org.springframework.batch.repeat.support.RepeatTemplateorg.springframework.batch.repeat.support.RepeatTemplate org.springframework.batch.repeat.support.TaskExecutorRepeatTemplateorg.springframework.batch.repeat.support.TaskExecutorRepeatTemplate public class TaskExecutorRepeatTemplate Provides RepeatOperations support including interceptors that can be used to modify or monitor the behaviour at run time. This implementation is sufficient to be used to configure transactional behaviour for each item by making the RepeatCallback transactional, or for the whole batch by making the execute method transactional (but only then if the task executor is synchronous). This class is thread safe if its collaborators are thread safe (interceptors, terminationPolicy, callback). Normally this will be the case, but clients need to be aware that if the task executor is asynchronous, then the other collaborators should be also. In particular the RepeatCallback that is wrapped in the execute method must be thread safe - often it is based on some form of data source, which itself should be both thread safe and transactional (multiple threads could be accessing it at any given time, and each thread would have its own transaction). public static final int DEFAULT_THROTTLE_LIMIT getNextResult(RepeatContext, RepeatCallback, RepeatInternalState). public TaskExecutorRepeatTemplate() public void setThrottleLimit(int throttleLimit) TaskExecutor. Default value is DEFAULT_THROTTLE_LIMIT. N.B. when used with a thread pooled TaskExecutorthe thread pool might prevent the throttle limit actually being reached (so make the core pool size larger than the throttle limit if possible). throttleLimit- the throttleLimit to set. public void setTaskExecutor(org.springframework.core.task.TaskExecutor taskExecutor) taskExecutor- a TaskExecutor IllegalArgumentException- if the argument is null protected RepeatStatus getNextResult(RepeatContext context, RepeatCallback callback, RepeatInternalState state) throws Throwable setTaskExecutor(TaskExecutor)to generate a result. The internal state in this case is a queue of unfinished result holders of type ResultHolder. The holder with the return value should not be on the queue when this method exits. The queue is scoped in the calling method so there is no need to synchronize access. getNextResultin class RepeatTemplate context- current BatchContext. callback- the callback to execute. state- maintained by the implementation. Throwable RepeatTemplate.isComplete(RepeatContext), RepeatTemplate.createInternalState(RepeatContext) protected boolean waitForResults(RepeatInternalState state) waitForResultsin class RepeatTemplate state- the internal state. RepeatTemplate.canContinue(RepeatStatus)is true for all results retrieved. RepeatTemplate.waitForResults(org.springframework.batch.repeat.support.RepeatInternalState) protected RepeatInternalState createInternalState(RepeatContext context) RepeatTemplate createInternalStatein class RepeatTemplate context- the current RepeatContext RepeatInternalStateinstance. RepeatTemplate.waitForResults(RepeatInternalState)
http://static.springsource.org/spring-batch/apidocs/org/springframework/batch/repeat/support/TaskExecutorRepeatTemplate.html
CC-MAIN-2013-20
en
refinedweb
Am 09.05.2012 16:05, schrieb Paolo Bonzini: > Il 09/05/2012 15:40, Kevin Wolf ha scritto: >>>> +#ifndef SEEK_DATA >>>> +#define SEEK_DATA 3 >>>> +#endif >>>> +#ifndef SEEK_HOLE >>>> +#define SEEK_HOLE 4 >>>> +#endif >> How is that going to be portable? You assume that on non-Linux you'll >> get -EINVAL, but what does guarantee that 3 or 4 aren't already used for >> the standard SEEK_* constants or for a different non-standard extension? > > While SEEK_* is not guaranteed by POSIX to be 0/1/2, the values is so > old that there may still exist programs that hard-code the values > (similar to O_RDONLY/O_WRONLY/O_RDWR, though probably not any other O_* > constant). It would be quite unwise to define them to something else. > Even MS-DOS reused the values! > > AFAIK this is the only extension of lseek that's ever been added. It > was done on Solaris first and then in Linux and the BSDs. It used 3/4 > there too, see for example (Solaris) > and > (NetBSD). Why not simply #ifdef the whole code out and fall back to the current "everything is allocated" behaviour when SEEK_DATA/HOLE aren't defined? Kevin
http://lists.gnu.org/archive/html/qemu-devel/2012-05/msg01162.html
CC-MAIN-2013-48
en
refinedweb
24 August 2012 15:46 [Source: ICIS news] LONDON (ICIS)--LyondellBasell will bring down an ethylene unit at its petrochemicals site in Wesseling near ?xml:namespace> LyondellBasell said that during the shutdown and restart of the plant – known as “Ethylenanlage 6” – there could be controlled flaring activity. The company added that a team of 30 workers has been preparing for the work since the end of 2010. It did not comment on capacities. According to ICIS plants & projects, LyondellBasell’s production capacities at Wesseling include 1.05m tonne/year of ethylene, from two crack
http://www.icis.com/Articles/2012/08/24/9590005/lyondellbasell-to-bring-down-ethylene-unit-at-wesseling-germany.html
CC-MAIN-2013-48
en
refinedweb
Ahh All I had to do is load the textures outside of the gameloop. @cookedbird, Yes I did Ahh All I had to do is load the textures outside of the gameloop. @cookedbird, Yes I did I'm sorry, I confused you. this is used for java, which I thought you were programming in. You can get rid of it. Haha good job. Try and put your state into a switch statement and see if that solves it switch(state){ case 1: drawMenu() // Whatever you called it break; case 2: drawTriangle() // Draw the triangle... I tried to recreate your problem and couldn't. Could you post your whole loop please? Is your state system being updated properly? For example, during the loop are you using something like getState(), or checkState()? No problem, hope you can figure it out. PM if you need any more help or once its done post it on the forums or send it to me. Well you need a class to keep track of the players score, so it would probably be something like this public class Score{ /** * Holds the players score */ int score; /** Just tell it to add the score if the ball is within the hitbox of the brick. if(ball.getHitBox() == brick.getHitBox()){ brick.break(); score.add(100) } Something like... Hello, I'm making a tile based game, and the Textures for the Tiles are 16x16. Whenever I try to load the Texture so it can be rendered, I get this error: java.io.IOException: Attempt to...
http://www.opengl.org/discussion_boards/search.php?s=94f143ee91636fe83103d24185f00d74&searchid=576056
CC-MAIN-2013-48
en
refinedweb
Implementation of the default H(grad)-compatible Lagrange basis of arbitrary degree on Tetrahedron cell. More... #include <Intrepid_HGRAD_TET_Cn_FEM.hpp> Implementation of the default H(grad)-compatible Lagrange basis of arbitrary degree on Tetrahedron cell. Implements Lagrangian basis of degree n on the reference Tetrahedron cell. The basis has cardinality (n+1)(n+2)(n+3)/6 and spans a COMPLETE polynomial space of degree n. Nodal basis functions are dual to a unisolvent set of degrees-of-freedom (DoF) defined at a lattice of order n (see PointTools). In particular, the degrees of freedom are point evaluation at The distribution of these points is specified by the pointType argument to the class constructor. Currently, either equispaced lattice points or Warburton's warp-blend points are available. The dof are enumerated according to the ordering on the lattice (see PointTools). In particular, dof number 0 is at the vertex (0,0,0). The dof increase along the lattice with points along the lines of constant x adjacent in the enumeration. Definition at line 88 of file Intrepid_HGRAD_TET_Cn 257 of file Intrepid_HGRAD_TET_Cn_FEMDef.hpp.
http://trilinos.sandia.gov/packages/docs/dev/packages/intrepid/doc/html/classIntrepid_1_1Basis__HGRAD__TET__Cn__FEM.html
CC-MAIN-2013-48
en
refinedweb
Java.io.PrintStream.print() Method Advertisements Description The java.io.PrintStreamStream.print() method public void print(String s) Parameters s -- The String to be printed Return Value This method does not return a value. Exception NA Example The following example shows the usage of java.io.PrintStream.print() method. package com.tutorialspoint; import java.io.*; public class PrintStreamDemo { public static void main(String[] args) { String s = "Hello World"; // create printstream object PrintStream ps = new PrintStream(System.out); // print string ps.print(s); ps.print(" This is an example"); // flush the stream ps.flush(); } } Let us compile and run the above program, this will produce the following result: Hello World This is an example
http://www.tutorialspoint.com/java/io/printstream_print_string.htm
CC-MAIN-2013-48
en
refinedweb
Timeline.Completed Event Assembly: PresentationCore (in presentationcore.dll) XML Namespace: If this timeline is the root timeline of a timeline tree, it has completed playing after it reaches the end of its active period (which includes repeats) and all its children have reached the end of their active periodss. If this timeline is a child timeline, Object parameter of the EventHandler event handler is the timeline's Clock. Although this event handler appears to be associated with a timeline, it actually registers with the Clock created for this timeline. For more information, see the Timing Events Overview. The Completed event notifies you when a Timeline completes. A timeline is considered to have completed after it has reached the end of its active period and will no longer play unless interactively restarted. Note that "completed" is not the same as "stopped playing": stopping a timeline does not trigger the Completed event (but skipping to the timeline's fill period does). In the following example, two Storyboard objects are used to create an animation transition between two images, stored using ImageSource objects and displayed using an Image control. One storyboard shrinks the image control until it disappears. After it completes, the old ImageSource is swapped with the other ImageSource, and a second storyboard that expands the image control until it is full-sized again. <!-- TimelineCompletedExample.xaml This example creates an animated transition between two images. When the user clicks the Start Transition button, a storyboard shrinks an image until it disappears. The Completed event is used to notify the class when this storyboard has completed. The code behind file handles this event by swapping the image and making it visible again. --> <Page xmlns="" xmlns: <Page.Resources> <!-- A simple picture of a rectangle. --> <DrawingImage x: <DrawingImage.Drawing> <DrawingGroup> <GeometryDrawing Brush="White"> <GeometryDrawing.Geometry> <RectangleGeometry Rect="0,0,100,100" /> </GeometryDrawing.Geometry> </GeometryDrawing> <GeometryDrawing Brush="Orange"> <GeometryDrawing.Geometry> <RectangleGeometry Rect="25,25,50,50" /> </GeometryDrawing.Geometry> </GeometryDrawing> </DrawingGroup> </DrawingImage.Drawing> </DrawingImage> <!-- A simple picture of a cirlce. --> <DrawingImage x: <DrawingImage.Drawing> <DrawingGroup> <GeometryDrawing Brush="White"> <GeometryDrawing.Geometry> <RectangleGeometry Rect="0,0,100,100" /> </GeometryDrawing.Geometry> </GeometryDrawing> <GeometryDrawing> <GeometryDrawing.Geometry> <EllipseGeometry Center="50,50" RadiusX="25" RadiusY="25" /> </GeometryDrawing.Geometry> <GeometryDrawing.Brush> <RadialGradientBrush GradientOrigin="0.75,0.25" Center="0.75,0.25"> <GradientStop Offset="0.0" Color="White" /> <GradientStop Offset="1.0" Color="LimeGreen" /> </RadialGradientBrush> </GeometryDrawing.Brush> </GeometryDrawing> </DrawingGroup> </DrawingImage.Drawing> </DrawingImage> <!-- Define the storyboard that enlarges the image. This storyboard is applied using code when ZoomOutStoryboard completes. --> <Storyboard x: <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. </Storyboard> </Page.Resources> <StackPanel Margin="20" > <Border BorderBrush="Gray" BorderThickness="2" HorizontalAlignment="Center" VerticalAlignment="Center"> <!-- Displays the current ImageSource. --> <Image Name="AnimatedImage" Width="200" Height="200" RenderTransformOrigin="0.5,0.5"> <Image.RenderTransform> <ScaleTransform x: </Image.RenderTransform> </Image> </Border> <!-- This StackPanel contains buttons that control the storyboard. --> <StackPanel Orientation="Horizontal" Margin="0,30,0,0"> <Button Name="BeginButton">Start Transition</Button> <Button Name="SkipToFillButton">Skip To Fill</Button> <Button Name="StopButton">Stop</Button> <StackPanel.Triggers> <!-- Begin the storyboard that shrinks the image. After the storyboard completes, --> <EventTrigger RoutedEvent="Button.Click" SourceName="BeginButton"> <BeginStoryboard Name="ZoomOutBeginStoryboard"> <Storyboard x: <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. </Storyboard> </BeginStoryboard> </EventTrigger> <!-- Advances ZoomOutStoryboard to its fill period. This action triggers the Completed event. --> <EventTrigger RoutedEvent="Button.Click" SourceName="SkipToFillButton"> <SkipStoryboardToFill BeginStoryboardName="ZoomOutBeginStoryboard" /> </EventTrigger> <!-- Stops the storyboard. This action does not trigger the completed event. --> <EventTrigger RoutedEvent="Button.Click" SourceName="StopButton"> <StopStoryboard BeginStoryboardName="ZoomOutBeginStoryboard" /> </EventTrigger> </StackPanel.Triggers> </StackPanel> </StackPanel> </Page> // TimelineCompletedExample.xaml.cs // Handles the ZoomOutStoryboard's Completed event. // See the TimelienCompletedExample.xaml file // for the markup that creates the images and storyboards. using System; using System.Windows; using System.Windows.Controls; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Navigation; namespace SDKSample { public partial class TimelineCompletedExample : Page { private Storyboard zoomInStoryboard; private ImageSource currentImageSource; private ImageSource nextImageSource; public TimelineCompletedExample() { InitializeComponent(); } private void exampleLoaded(object sender, RoutedEventArgs e) { // Cache the zoom-out storyboard resource. zoomInStoryboard = (Storyboard) this.Resources["ZoomInStoryboardResource"]; // Cache the ImageSource resources. currentImageSource = (ImageSource) this.Resources["RectangleDrawingImage"]; nextImageSource = (ImageSource) this.Resources["CircleDrawingImage"]; // Display the current image source. AnimatedImage.Source = currentImageSource; } // Handles the zoom-out storyboard's completed event. private void zoomOutStoryboardCompleted(object sender, EventArgs e) { AnimatedImage.Source = nextImageSource; nextImageSource = currentImageSource; currentImageSource = AnimatedImage.Source; zoomInStoryboard.Begin(AnimatedImage, HandoffBehavior.SnapshotAndReplace); } } } For more information about timing events, see the Timing Events.
http://msdn.microsoft.com/en-us/library/system.windows.media.animation.timeline.completed(v=vs.85).aspx
CC-MAIN-2013-48
en
refinedweb
#include <Ifpack_Partitioner.h> Inheritance diagram for Ifpack_Partition.
http://trilinos.sandia.gov/packages/docs/r5.0/packages/ifpack/doc/html/classIfpack__Partitioner.html
CC-MAIN-2013-48
en
refinedweb
Technical detail NetRexx Change History NetRexx 2.01 This release is the reference implementation for NetRexx 2, and requires Java 1.1.2 (or later) to run. NetRexx 2 releases are a superset of NetRexx 1.00, as published in the NetRexx Language Definition. New language features since NetRexx 1.00 are documented in the NetRexx Supplement. Updates: 2.02 [22 May 2001] This is a maintenance release; loop i=a to b until x incorrectly optimized the control variable test in some circumstances. No other changes are included. 2.01 [1 Apr 2001] This is a maintenance release which corrects excessive memory usage when large numbers of files are imported and the -prompt option is used. No other changes are included. 2.00 [26 Aug 2000] This is a major new release, which consolidates the changes of NetRexx 1.1 and adds the NetRexx interpreter and improved documentation. The enhancements are: The various installation and user documents have been consolidated into a new expanded and indexed User's Guide, available in both HTML and PDF (Acrobat) formats. The reference implementation now includes the NetRexx interpreter, which allows programs and classes to be run without being compiled, together with a new API which makes it easy to use the interpreter from NetRexx or Java applications. The new -prompt option, which lets the translator be used repeatedly without requiring re-loading. This allows sub-second compilation and interpretation of NetRexx programs. The structure of the NetRexx package has been revised to make installation and maintenance simpler. Shell scripts for Linux have been added. Please see the new NetRexx User's Guide for details. The Language Overview (quick start) has been updated and is now also available in PDF (Acrobat) format for viewing or printing. A warning is now given if a private method in a class is not referenced. The compact option for compact error messages has now been documented (see the NetRexx Supplement for details). The documentation was inconsistent as regards the file name generated when -nocompile was specified; the intent was that NetRexx should never leave a plain .java file on disk, as this prevents the next compilation if unprocessed. The documentation and code have been fixed to ensure that -nocompile exactly implies -keep. Several performance optimizations have been added. NetRexx 1.1xx The following changes are those which were made in NetRexx 1.1xx releases. NetRexx 1.1xx releases require Java 1.1.0 (or later). Updates: 1.160 [10 Feb 2000] This release has some language enhancements, along with some problem fixes and other improvements: The if clause in the if instruction and the when clause in the select instruction have both been enhanced to accept multiple expressions, separated by commas. These are evaluated in turn from left to right, and if the result of any evaluation is 1 (or equals the case expression for a when clause in a select case instruction) then the test has succeeded and the instruction following the associated then clause is executed. Note that once an expression evaluation has resulted in a successful test, no further expressions in the clause are evaluated. So, for example, in: -- assume name is a string if name=null, name='' then say 'Empty' then if name does not refer to an object it will compare equal to null and the say instruction will be executed without evaluating the second expression in the if clause. Here is an example in a select case instruction: select case i when 1 then say 'one' when 2 then say 'two' when 3, 4, 5 then say 'many' end The select case instruction will now generate a Java switch instruction under the right conditions. See the NetRexx Supplement for details. The new nojava option allows Java code generation to be inhibited. This can be used to speed up a syntax checking run, when no compilation or Java source code is required. Invoking NetRexxC with no arguments will now display all options, not just the 'outer level' options. The class Exception is now treated as a Checked exception (as Java does). Calls to super() in dependent classes may now be qualified by parent. as well as by constructor arguments, if appropriate. .jar files in the /lib/ext (automatic extensions in Java 2 [1.2]) are automatically added to the classpath. Classpaths containing multiple quoted segments are now handled correctly, and various other minor problems have been fixed. Several optimizations and improvements to formatting have been added. 1.151 [3 Sep 1999] This refresh has some minor enhancements: The 'direct call from Java' entry points have been enhanced to allow paths with embedded blanks to be specified. See the NetRexx User's Guide (Using the translator as a Compiler). Several improvements in code generation when incrementing and decrementing integers. This release has been tested under the first Java 1.3 beta; no problems were found and no changes from earlier NetRexx 1.1 releases were necessary. 1.150 [23 Jul 1999] This release is a maintenance update with some minor enhancements: New unused modifier on the properties instruction may be used (in conjunction with private only) to indicate that a private property is not used. This keyword will stop the compiler warning that a property is not used. For example: properties private constant unused copyrt="Copyright (C) Spel Corp., 1999" New strictprops compiler option requires that references to properties, even from within the same class as the property, be qualified (either by this. or the name of the class). This can be useful for large and complex classes. Several improvements in code generation, mostly for testing of equality. Calls to this() and super() in minor classes will no longer attempt to refer to generated constants. 1.148 [21 Dec 1998] This release makes significant improvements in importing classes and in the select instruction: The select instruction now adds a case keyword, which lets an expression be evaluated once and then tested in each when clause. For example: i=1 select case i+1 when 1 then say 'one' when 2 then say 'two' when 3 then say 'three' end See the NetRexx Supplement for details. An explicit class import will now disambiguate short references. For example, after import java.awt.List a reference to the class List would refer to that class, not the class java.util.List introduced in Java 1.2. Several improvements in code generation, including the treatment of small integers as, for example, byte without need for explicit casts. The format method in the Rexx class has been corrected to completely follow the ANSI X3-274 definition and the NetRexx specification. 1.144 [21 Oct 1998] This maintenance release primarily allows more explicit control over the compiler, for working with 'minimal' virual machines. New strictimport compiler option prevents any automatic class imports (even java.lang.Object). This can be useful when compiling programs for reduced-function JVMs for embedded systems and palm-sized devices. The package java.math is no longer imported automatically. Occasional incorrect loop termination when trace is in use has been corrected. 1.142 [1 Sep 1998] This version is a maintenance release, primarily to support changes in the Java Development Kit (JDK) introduced for Java 1.2. Please see the NetRexx User's Guide for details for additions to the class path needed to run under Java 1.2. The other changes are: A type on the left hand side of an operator that could be a prefix operator (+, -, or \) is now assumed to imply a cast, rather than being an error. For example: x=int -1 Improved code generation for for and to loops. The euro character ('\u20ac') is now treated in the same way as the dollar character (that is, it may be used in the names of variables and other identifiers). Note that only UTF8-encoded source files can currently use the euro character, and a 1.1.7 (or later) version of a Java compiler is needed to generate the class files. The arithmetic routines have slightly improved performance, and provide accurate binary floating point conversions for constants. More robust handling of import, and import from classpath root segments generalized Improved error messages when an indirect property is initialized with a forward reference. 1.140 [26 May 1998] - Three enhancements have been made to tracing: - The new var option on trace lets changes to named variables be traced selectively. For example: - trace var a b c - requests that whenever the variables a, b, or c are changed (either directly or using an index), the line changing them and their new values should be traced. Variables may be added to or removed from the list as required. - The trace instruction may now be used before the first class instruction; it then applies to all classes in a program. - Context is now shown while tracing – if a trace line is produced from a different program or thread than the preceding trace line, then an indicator line (prefixed with ---) is displayed. See the NetRexx Supplement for details. The numeric instruction may now be used before the first class instruction; it then applies to all classes in a program. The new -savelog NetRexxC option requests that compiler messages be written to the file NetRexxC.log in the current directory. The messages are also displayed on the console, unless -noconsole is specified. The new -noconsole NetRexxC option requests that compiler messages not be written to the console. When calling the compiler directly from NetRexx or Java, a PrintWriter can now be provided; messages are then written to that stream (see the NetRexx User's Guide for details). A catch clause may now specify an exception that is a subclass of an exception signalled in the body of its construct. The leave and iterate instructions may now be used in the catch and finally clauses of nested loops. Many improvements to the formatting of generated Java code have been made (plain-name labels, fewer braces, better comments handling, etc.). A constant indirect property may now be changed by methods in its class, though no set method for it is generated or permitted. Several performance improvements and optimizations have been added, improving both run time and compilation time. If you have a long CLASSPATH or many files in directories, you may see a 20% or better reduction in compile time. The NetRexxC.cmd and .bat files now add the value of the NETREXX_JAVA environment variable to the options passed to java.exe. For example, SET NETREXX_JAVA=-mx24M changes the maximum Java heap size to 24 MegaBytes. Try this if you see a java.lang.OutOfMemoryError while running the compiler. Several related problems with loading minor classes from directories and zip files have been corrected. Parentheses around sub-expressions were incorrectly optimized out in some situations; they are now preserved. A work-around for a problem caused by empty directories on the CLASSPATH in Linux has been added. 1.132 [15 Apr 1998] - This version includes one major enhancement: support for Minor and Dependent classes - Java's Nested and Member (inner) classes, using simplified syntax and concepts. 1.130 [8 Mar 1998] The new copyIndexed method on the Rexx class allows the sub-values (indexed strings) of one Rexx object to be merged into the sub-value collection of another Rexx object [available in runtime since NetRexx 1.120]. The '$' character is now permitted in variable and other names. It is now an error to attempt to use a concatenate operator on an array (unless the array is of type char[]). The methods generated for indirect properties are no longer inhibited by methods of the same name in superclasses. The NetRexx Supplement has been updated to document changes since August 1997. 1.128 [14 Feb 1998] - The new linecomment example is a small command-line application that processes a text file. It demonstrates the use of Readers and Writers, and exception handling. A workaround for a bug in javac in JDK1.2b2 has been included. Retry of a failing do instruction as a loop instruction now works. '\1a' (EOF) characters no longer need to follow line-end sequences in order to be ignored. Import of package hierarchies from .zip or .jar files now works correctly (previously it only worked for the standard imports) 1.125 [10 Jan 1998] - The new sourcedir option requests that all .class files be placed in the same directory as the source file from which they are compiled. Other output files are already placed in that directory. Note that using this option will prevent -run from working unless the source directory is the current directory. The new explicit option indicates that all local variables must be explicitly declared (by assigning them a type but no value) before assigning any value to them. Indexed strings are now serializable (can be made persistent). Minor improvements to generated code. 1.122 [27 Nov 1997] - A workaround for a JIT bug in Java 1.1.4 (showing as an exception in an optioncheck method during compilation) has been included. Formatting for the Java code when the comments option is used has been improved. strictcase and nostrictcase programs can now be safely mixed in a single compilation. Minor improvements to generated code and performance. 1.121 [21 Oct 1997] - The new experimental comments option copies comments from the NetRexx source program through to the .java output file, which may be saved using the keep command option. Decimal addition has been updated to conform to ANSI X3-274 arithmetic and the NetRexx documentation (this is a very minor change: an addition such as 77+1E-999 now pads with zeros). An abstract method in an abstract class was incorrectly reported as error. Minor improvements to error messages, formatting, and performance. 1.120 [1 Sep 1997] - Minor improvements to error messages, signals handling, and performance. Redesigned web pages and improved documentation. 1.113 [3 Aug 1997] - Multiple .java files are compiled using a single call to javac, giving improved performance and interdependency resolution. Individual methods may be designated as binary, using the binary keyword. Numerous 'cosmetic' improvements in error messages, formatting, etc. 1.104 [22 Jul 1997] - Whole numbers may now be expressed in a hexadecimal or binary notation, for example: 0xbeef 2x81 8b10101010 - see the Supplement for details. Conversions from String to Rexx (etc.) now 'pass through' nulls, rather than raising NullPointerException. options symbols may be used to include debugging information (a symbol table) in the generated .class files. Numerous 'cosmetic' improvements in error messages, formatting, etc. 1.103 [3 Jul 1997] - A new modifier, adapter, for classes has been introduced. This makes it easy to use Java 1.1 events, without the complexity and extra nesting of Java Inner Classes. Please see the Supplement for details, and the new Scribble sample for a simple example. Compressed Zip files as produced by the Java 1.1 jar utility ('jar files') can now be used for class file collections. The current NetRexxC.zip file is such a file. The NetRexx string class, netrexx.lang.Rexx, is now serializable. The compiler now uses the Java 1.1 Writer and Reader classes for reading and writing text files; this means that the text code page in use on your machine will be automatically translated to and from Unicode for use by the compiler. Associated with the previous change, options utf8 must now be consistent with the options passed to the compiler (see the Supplement for details). The NetRexxC.properties (error messages) file is now included as a resource in the NetRexxC.zip file. The copy in the \lib directory is no longer needed, nor is the NETREXX_HOME environment variable (if you needed to use that before). The Pinger and Spectrum sample applications have been updated to use the Java 1.1 event model; Pinger has also had some other minor improvements. Performance improvements reduce start-up time when compiling with a long CLASSPATH or with class directories with large numbers of files. NetRexx 1.0x This release is the reference implementation for NetRexx 1.00, as published in The NetRexx Language Definition, and later updates. NetRexx 1.0x updates will run on Java 1.0.1 or any later releases, though certain new features may require a Java 1.1 compiler to compile the generated Java code. Updates: 1.02 [25 Jun 1997] - You can now add the shared keyword to the method or properties instructions to indicate that the method or a following property has shared access (that is, is accessible to other classes in the same package, but not to other classes). This corresponds to the Java 1.1 'default access' visibility. Please see the NetRexx Supplement for details. The new sourceline special name may be used to return the line number of the current clause in the program. Please see the NetRexx Supplement for details. Array initializers have ben added. These allow arrays to be created and assigned an initial value, for example: x=['one','two','three'] Note that Java 1.1 is needed to use this enhancement. Please see the NetRexx Supplement for details. The property and method access rules have been enforced according to the current Java specification, along with enhanced error messages when the rules are infringed. 1.01 [15 Jun 1997] - The NetRexx Supplement has been added. This documents language enhancements and the netrexx.lang package. NetRexxC now displays a warning when it encounters any deprecated (out-of-date or no longer recommended) class, method, or property for the first time in a program. Note that under Java 1.1, the javac compiler always displays at least one message if any deprecated fields or classes are encountered. The invitation to 'Recompile with "-deprecation" for details' can be ignored. You can now add the deprecated keyword to the class, method, or properties instructions to indicate that the following class, method, or properties are deprecated. You have to run with a Java 1.1 compiler for this to be reflected in the .class file. Methods and properties with the same name are now permitted (and can be accessed). An import of one of the standard packages (for example, java.io) no longer causes the classpath to be searched. This makes redundant standard imports much faster. 1.00 [24 May 1997] Cosmetic changes: - Methods listed during compilation now have their argument types listed (if any) - Methods generated from Indirect Properties are now listed. The installation instructions now include instructions for using NetRexx with Visual J++. A reference to java.awt.image.ImageObserver treated java.awt.image as a class reference rather than as a package name; it will now correctly refer to the ImageObserver class. [6 May 1997] - Multiple file concurrent compilation: when two or more programs are specifed on the NetRexxC command, they are all compiled with the same class context: that is, they can 'see' the classes, properties, and methods of the other programs being compiled, much as though they were all in one file. This allows mutually interdependent programs and classes to be compiled in a single operation, while maintaining their independence (the programs may have different options, import, and package instructions). - Compiling programs together in this way also gives substantial performance improvements, as the classes for NetRexxC and the javac compiler are only loaded once for all the files being compiled. See Using the translator as a Compiler in the NetRexx User's Guide for full details. The warning 'Method argument not used' will now only be given if the strictargs option is specified. The '.crossref' and '.java.keep' files resulting from a compilation now are placed in the same directory as the source file (instead of the current directory). The multiple compilation support also requires that the source directory be writeable. import of a package (with no trailing period) was not accepted by the compiler; this should now work correctly. [15 Apr 1997] - Preliminary, experimental, support for JavaBeans is now available in the NetRexxC compiler. It is described in the NetRexx Supplement. Checking has been added for the use of Java reserved words as externally visible names (such properties, method, and class names cannot be accessed by people writing in the Java language). The translator phase of the compiler has numerous performance improvements, and now runs 35% faster than the first (January) 1.00 release. Forward references from property initialization expressions to methods in the current class are now permitted, providing they are not circular. Several improvements have been made to error and progress messages. [13 Mar 1997] - The source and documentation for the Tablet (navigation tabs) applet have been added to the package. Forward references involving default constructors now work correctly. The .equals method was not being used for '=' and '\=' comparisons of subclassed objects. options nodecimal may be used to report the use of decimal arithmetic as an error, for performance-critical applications. [18 Feb 1997] - Minor improvements to the compiler for error messages, localization, and Java 1.1. The Say instruction can now handle all expressions that evaluate to null. [6 Feb 1997] - LOOP OVER did not correctly snapshot indexed strings with 'hidden' elements. Some unused method arguments were not being reported as unused. Minor improvements to error messages, progress messages, and code generation. [3 Jan 1997] - Minor cosmetic and performance improvements over 0.90. NetRexxC.bat and nrc.bat have been added to the NetRexx package.
http://www-01.ibm.com/software/awdtools/netrexx/nrchange.html
CC-MAIN-2013-48
en
refinedweb
29 September 2010 22:33 [Source: ICIS news] HOUSTON (ICIS)--Short supply reigned in US fourth-quarter fatty alcohol contract negotiations, buyers said on Wednesday. “It’s going up fast and suppliers will get most of it,” one natural fatty alcohol buyer said about price sentiment entering the fourth-quarter negotiations for the mid-cut detergent range C12-15 alcohols. Fatty alcohol suppliers have mostly quit announcing price increases, electing to go to buyers on an account-by-account basis in most contract negotiations, buyers said. Price increases up to 20 cents/lb ($441/tonne, €326/tonne) were said being sought by several suppliers even at medium and large volume accounts. But price hikes were varied within the market, with 10 cents/lb becoming a prevalent amount sought by importers and domestic suppliers, sources said. Third-quarter contracts for C12-15 detergent range alcohols ranged 86-97 cents/lb, with supply constraints a dominating factor throughout the quarter, according to buyers. Fatty alcohol production is mostly found in ?xml:namespace> Selling directly into the oil market has become the better margin choice for suppliers, US sources said. This has allowed importers to pick and choose what global regions offer the best margin opportunity for alcohol sales and has allowed producers to press for higher contract prices, buyers said. The supply situation was exacerbated in September when Asia’s Musim Mas alcohol production unit was shut down as a result of a rupture in its hydrogen generator, a According to a release from Musim Mas, no alcohol production was expected for 4-6 months. Alcohol production capacity at the unit was estimated by US sources to be approximately 125,000 tonnes/year. Domestic Fatty alcohol importers include Kao of Japan, Ecogreen and KLK. Musim Mas imports and supplies to multi-national companies such as Procter & Gamble. Procter & Gamble is a multi-national company and is the largest global fatty alcohol producer, with facilities in Asia and
http://www.icis.com/Articles/2010/09/29/9397494/supply-reigns-supreme-in-us-q4-fatty-alcohol-contract-negotiations.html
CC-MAIN-2013-48
en
refinedweb
The Storage Team Blog about file services and storage features in Windows and Windows Server. Thanks to Shobana Balakrishnan, Richard Chinn, and Brian Collins for contributing to this blog. OverviewAnti-virus applications have caused interoperability problems with file replication in the past, namely with NTFRS (File Replication service). In particular, excessive replication can be triggered by poorly behaved anti-virus applications as their scanning activities are interpreted as file changes needing replication. DFS Replication (DFSR) relies on the same file system facilities as NTFRS for detecting file changes, so it is subject to similar problems. DFSR’s new design may make it more robust and tolerant of such applications at the cost of additional processing, but full interoperability must be tested. The tests should cover the following areas. The following features of the anti-virus application should be covered. Note this blog simply provides recommendations and guidelines for testing. Simply running all theses tests will not guarantee a given anti-virus program’s interoperability with DFSR. Test EnvironmentSet up two replication partners, one with the anti-virus product of interest and one without any anti-virus software. Both partners should have a scratch volume that is only used for replicated folders. This will minimize noise in the tests. Tests Excessive ReplicationExcessive replication is caused by programs making no effective changes to files yet causing changes to be introduced in the USN journal. DFSR sees these changes and is able to suppress some of them as it realizes that file hashes are unchanged. This is in contrast to NTFRS which would replicate such changes. Still, in the DFSR case, it is best to not have spurious changes being introduced into the system as it will increase the load on the server. To monitor USN activity, use PerfMon to show the USN Journal Records Accepted performance counter under the DFS Replication Service Volumes object. If DFSR accepts a USN journal record, then this is a file change that is eligible for replication. If you like, you may also add USN Journal Records Read; this reflects the USN journal records that DFSR has read on the given volume. Changes outside the replicated folder are not accepted. It is easiest to see the numbers if PerfMon is configured to display a report view. An alternate way to monitor USN activity is to monitor the DFSR debug logs with a utility such as tail.exe. Using tail -f on the last debug log will show the real-time debug logs as written by DFSR. Log messages with USNC are those related to the USN consumer. You must beware of the logs wrapping to the next log file. To enable debug logs, run the following WMI command on the members. wmic /namespace:\\root\microsoftdfs path DfsrMachineConfig set EnableDebugLog=true,DebugLogSeverity=5,MaxDebugLogFiles=10000 Here is the test procedure. 1. Set up replication between M1 and M2 for a replicated folder. 2. Configure M1’s replicated folder for real-time monitoring. 3. Set up USN journal monitoring using the method of your choice. 4. Populate the folder with various files of interest including executables and documents. 5. Ensure DFSR has synced the two folders and backlogs are zero (e.g. run a health report from DfsMgmt.msc or use DfsrDiag.exe with the backlog option). 6. Perform an on-demand scan on M1. 7. Verify no USN journal records are accepted on M1. 8. Try accessing files / running programs on M1. 9. Verify no USN journal records are accepted on M1. 10. Make all the files on M1 read-only using attrib.exe or the Explorer. This will cause USN journal activity. Let DFSR sync and verify the backlog drops to zero. 11. Perform an on-demand scan on M1. 12. Verify no USN journal records are accepted on M1. 13. Copy a file from another volume, preferably something excluded from anti-virus monitoring, into the replicated folder. 14. Verify there is only one USN record accepted by DFSR. 15. Move a file from the same volume, preferably from a folder that is excluded from anti-virus monitoring, into the replicated folder. 16. Verify there is only one USN record accepted by DFSR. Interfering with ReplicationAnti-virus programs have the ability to delete or move an infected file to a special quarantine area. Depending on how the infected file is detected and how the deletion or move is done, this may cause DFSR to become permanently backlogged and repeatedly attempt to download and install the file into the replicated folder. Anti-virus programs with good interoperability are able to delete, clean, or quarantine the file on one member, and have the changed file replicate normally. Sometimes this will require setting an exception filter on the entire DfsrPrivate folder. Testing with infected files is facilitated by using a virus test file called eicar.com from. 2. Verify replication between M1 and M2. 3. Configure real-time scanning on M1. Configure it to quarantine or delete. 4. Copy the infected file into the replicated folder on M2 and monitor M1, M2, and the backlogs between the two servers in both directions. There are a few things that can happen, so here are some to watch out for. 5. The above tests should be repeated using the clean functionality. Local and Remote AccessThere may be subtle differences when a file is accessed over a share versus to locally on the server itself. Since DFSR is used between file servers, it will most likely be the case that users will be accessing files over shares. As a result, it is important to try the above tests over shares as well as locally. Similarly, client-side anti-virus applications that monitor remote shares as well as applications that scan shares should also be tried if applicable to your scenario. RecommendationsProblems will generally be minimized if infected files are never allowed into the replicated folders, especially when one member sees a file as infected but another member does not. To minimize this, you should do the following. Also, anti-virus application behavior will vary from version to version and operating system to operating system. It is important to perform the testing procedure for each new version that is considered for deployment. Here are some tests you can use to test Interop of additional products: Backup 1. Create replicated folder, start DFSR, replicate some files a. Verify replication works 2. Run backup application a. Verify no errors in DFSR eventlog b. Verify replication still works after backup c. Verify no lingering backlogs d. Verify backup did not trigger unexpected replication 3. Delete some data, then restore backed-up files a. Verify no errors in DFSR eventlog b. Verify replication still works after restore c. Verify no lingering backlogs d. Verify restored files are replicated Encryption, Quotas, Defrag, Monitoring Test 1: Creating a replicated folder on machines already configured w/ vendor’s product 1. Install product on test server(s) and configure as needed so it will impact the to-be-created replicated folder 2. Create replicated folder with existing data on some or all members, start DFSR, replicate some files a. Verify all members sync up with each other b.Verify no errors in DFSR eventlog c.Verify no extra replication d. Verify no lingering backlogs 3. Play with the replicated data (create new files, modify files, delete files, etc.) b. Verify the file modifications replicate c. Verify no extra replication 4. If application is capable of running some sort of scan (like an AV scan), run it a. Verify no errors in DFSR eventlog b. Verify no unexpected extra replication Test 2: Configure vendor’s product on machines already hosting replicated folder 1. Create replicated folder with existing data on some or all members, start DFSR, replicate some files 2. Install product on test server(s) and configure as needed so it will impact the replicate folder a. Verify no errors in the DFSR eventlog b. Verify no unexpected extra replication c. Verify replication still works after product is installed & active PingBack from
http://blogs.technet.com/b/filecab/archive/2006/05/01/426926.aspx
CC-MAIN-2013-48
en
refinedweb
public class BooleanClosureWrapper extends Object Closureand convert the result to a boolean. It will do this by caching the possible "doCall" as well as the "asBoolean" in CallSiteArray fashion. "asBoolean" will not be called if the result is null or a Boolean. In case of null we return false and in case of a Boolean we simply unbox. This logic is designed after the one present in DefaultTypeTransformation.castToBoolean(Object). The purpose of this class is to avoid the slow "asBoolean" call in that method. BooleanReturningMethodInvokeris used for caching. clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait public BooleanClosureWrapper(Closure wrapped) public boolean call(Object... args) public <K,V> boolean callForMap(Map.Entry<K,V> entry)
http://groovy.codehaus.org/api/org/codehaus/groovy/runtime/callsite/BooleanClosureWrapper.html
CC-MAIN-2013-48
en
refinedweb
As I mentioned in an earlier post we’ve been parsing XML documents with the Clojure zip-filter API and the next thing we needed to do was create a new XML document containing elements which needed to be inside a namespace. We wanted to end up with a...... So now I've covered the ring buffer itself, reading from it and writing to it. Logically the next thing to do is to wire everything up together. Java was designed with the principle that you shouldn't need to know the size of an object. There are times when you really would like to know and want to avoid the guess work.... Creating a custom plugin for the Sonar platform is very easy. If you are not satisfied with the several built-in plugins or you need something special you can easily create and use your own. This tutorial will walk you through out how to use the Ext JS 4 File Upload Field in the front end and Spring MVC 3 in the back end. Most teams have High-Level Tests in what they call Functional Tests, Integration Tests, End-to-End Tests, Smoke Tests, User Tests, or something similar. These tests are designed to exercise as much of the application as possible. Recently, I was preparing a connection checker for Deployit’s powerful remote execution framework Overthere. To make the checker, as compact as possible, I put together a jar-with-deps1 for distribution. AspectJ is the most powerful AOP framework in the Java space; Spring is the most powerful enterprise development framework in the Java space. It's not surprise that combining the two should lead to wonderful things...In this article I'm going to show a... On the internet, Java interview questions and answers get copied from one web site to another. This can mean that an incorrect or out of date answer might never be corrected. Here are some questions and answer which are not quite correct or are now out... Ok so it's it been a while since my last article on this topic... The comments of course have been first rate, with opinions on the wish-list have ranged from outright agreement to threats of violence for even having such boneheaded ideas. It's all good...).... For some time I have been working on developing a Java web app using Spring MVC & Hibernate, and as many will have discovered, this throws up lots of questions with unit testing. To increase my coverage (and general test confidence) I decided to implement... Want to be an MVB? There's a page for that.
http://java.dzone.com/frontpage?page=804
CC-MAIN-2013-48
en
refinedweb
Creating Solutions and Projects When you create a project, Visual Studio creates a solution to contain it. If you plan to create a multi-project solution, see How to: Create Multi-Project Solutions. If you want to create a project from existing code files, see How to: Create a Project from Existing Code Files. Visual Studio uses project templates to generate new projects based on user input. Each template represents a different project type. Individual files that users add to projects are generated from item templates. You can locate installed project templates in the New Project dialog box by navigating the expanding list in the left pane under Installed. You can also navigate under Recent for project types that you have used recently or under Online for templates that you can download and install. You can also use the search box in the upper right corner of the dialog to search for project templates. The search will populate the middle pane with results from the recent, installed, or online list, depending on which category is selected. For more information, see Creating Projects from Templates. When a project is created, a solution is automatically generated unless the project is already part of a solution. To create a project and a solution to contain it On the File menu, click New and then click Project. This opens the New Project dialog box. In the left pane, select Installed, and select a category of project types from the expanded list. If you have recently created a project of the same type, select Recent instead for faster navigation. Select one of the project Templates from the middle pane. A description of the selected template appears in the right pane. In the Name box, type a name for the new project. In the Location box, select a save location. If available, in the Solution list, specify whether to create a solution or add the project to the solution that is open in Solution Explorer. In the Solution name box, type a name for the solution. Visual Studio will use this name for the namespace of the finished project if applicable. By default, the solution name will match the project name. Make sure that the Create directory for solution check box is selected. Click OK. You can create a project to target earlier versions of the .NET Framework by using the .NET Framework version drop-down menu at the top of the New Project dialog box. Set this value before selecting a project template, as only templates compatible with that .NET Framework version will appear in the list. You must have .NET Framework 3.5 installed on your system to access framework versions earlier than 4.0. You can use Visual Studio to download and install samples of full, packaged applications from the MSDN Code Gallery. You can download the samples individually, or you can download a Sample Pack, which contains related samples that share a technology or topic. You'll receive a notification when source code changes are published for any sample that you download. For more information, see Visual Studio Samples. Although a project must reside in a solution, you can create a solution that has no projects. To create an empty solution On the File menu, click New and then click New Project. In the left pane, select Installed, select Other Project Types, and then select Visual Studio Solutions from the expanded list. In the middle pane, select Blank Solution. Set the Name and Location values for your solution, then click OK. After you create an empty solution, you can add new or existing projects or items to it by clicking Add New Item or Add Existing Item on the Project menu. You can delete a solution permanently, but not by using Visual Studio. Before you delete a solution, move any projects that you might want to use again in another solution. Then use File Explorer to delete the directory that contains the .sln and .suo solution files. To delete a solution In Solution Explorer, right-click the solution to delete, and select Open folder in File Explorer. In File Explorer, navigate up one level. Select the directory containing the solution and press Delete.
http://msdn.microsoft.com/en-us/library/zfzh36t7(v=vs.120).aspx
CC-MAIN-2013-48
en
refinedweb
import "istio.io/istio/security/pkg/nodeagent/plugin" const ( // GoogleTokenExchange is the name of the google token exchange plugin. GoogleTokenExchange = "GoogleTokenExchange" ) type Plugin interface { ExchangeToken(context.Context, string, string) (string, time.Time, int, error) } Plugin provides common interfaces so that authentication providers could choose to implement their specific logic. Package plugin imports 2 packages (graph) and is imported by 3 packages. Updated 2020-02-15. Refresh now. Tools for package owners.
https://godoc.org/istio.io/istio/security/pkg/nodeagent/plugin
CC-MAIN-2020-10
en
refinedweb
Enabling trace logs¶ geckodriver provides different bands of logs for different audiences. The most important log entries are shown to everyone by default, and these include which port geckodriver provides the WebDriver API on, as well as informative warnings, errors, and fatal exceptions. The different log bands are, in ascending bandwidth: fatalis reserved for exceptional circumstances when geckodriver or Firefox cannot recover. This usually entails that either one or both of the processes will exit. errormessages are mistakes in the program code which it is possible to recover from. warnshows warnings of more informative nature that are not necessarily problems in geckodriver. This could for example happen if you use the legacy desiredCapabilities/ requiredCapabilitiesobjects instead of the new alwaysMatch/ firstMatchstructures. info(default) contains information about which port geckodriver binds to, but also all messages from the lower-bandwidth levels listed above. configadditionally shows the negotiated capabilities after matching the alwaysMatchcapabilities with the sequence of firstMatchcapabilities. debugis reserved for information that is useful when programming. trace, where in addition to itself, all previous levels are included. The trace level shows all HTTP requests received by geckodriver, packets sent to and from the remote protocol in Firefox, and responses sent back to your client. In other words this means that the configured level will coalesce entries from all lower bands including itself. If you set the log level to error, you will get log entries for both fatal and error. Similarly for trace, you will get all the logs that are offered. To help debug a problem with geckodriver or Firefox, the trace-level output is vital to understand what is going on. This is why we ask that trace logs are included when filing bugs gainst geckodriver. It is only under very special circumstances that a trace log is not needed, so you will normally find that our first action when triaging your issue will be to ask you to include one. Do yourself and us a favour and provide a trace-level log right away. To silence geckodriver altogether you may for example either redirect all output to append to some log files: % geckodriver >>geckodriver.log 2>>geckodriver.err.log Or a black hole somewhere: % geckodriver >/dev/null 2>&1 The log level set for geckodriver is propagated to the Marionette logger in Firefox. Marionette is the remote protocol that geckodriver uses to implement WebDriver. This means enabling trace logs for geckodriver will also implicitly enable them for Marionette. The log level is set in different ways. Either by using the --log <LEVEL> option, where LEVEL is one of the log levels from the list above, or by using the -v (for debug) or -vv (for trace) shorthands. For example, the following command will enable trace logs for both geckodriver and Marionette: % geckodriver -vv The second way of setting the log level is through capabilities. geckodriver accepts a Mozilla-specific configuration object in moz:firefoxOptions. This JSON Object, which is further described in the README can hold Firefox-specific configuration, such as which Firefox binary to use, additional preferences to set, and of course which log level to use. Each client has its own way of specifying capabilities, and some clients include “helpers” for providing browser-specific configuration. It is often advisable to use these helpers instead of encoding the JSON Object yourself because it can be difficult to get the exact details right, but if you choose to, it should look like this: {"moz:firefoxOptions": {"log": {"level": "trace"}}} Note that most known WebDriver clients, such as those provided by the Selenium project, do not expose a way to actually see the logs unless you redirect the log output to a particular file (using the method shown above) or let the client “inherit” geckodriver’s output, for example by redirecting the stdout and stderr streams to its own. The notable exceptions are the Python and Ruby bindings, which surface geckodriver logs in a remarkable easy and efficient way. See the client-specific documentation below for the most idiomatic way to enable trace logs in your language. We want to expand this documentation to cover all the best known clients people use with geckodriver. If you find your language missing, please consider submitting a patch. C#¶ The Selenium C# client comes with a FirefoxOptions helper for constructing the moz:firefoxOptions capabilities object: FirefoxOptions options = new FirefoxOptions(); options.LogLevel = FirefoxDriverLogLevel.Trace; IWebDriver driver = new FirefoxDriver(options); The log output is directed to stdout. Java¶ The Selenium Java client also comes with a org.openqa.selenium.firefox.FirefoxOptions helper for constructing the moz:firefoxOptions capabilities object: FirefoxOptions options = new FirefoxOptions(); options.setLogLevel(FirefoxDriverLogLevel.TRACE); WebDriver driver = new FirefoxDriver(options); As with C#, the log output is helpfully propagated to stdout. Python¶ The Selenium Python client comes with a selenium.webdriver.firefox.options.Options helper that can be used programmatically to construct the moz:firefoxOptions capabilities object: from selenium.webdriver import Firefox from selenium.webdriver.firefox.options import Options opts = Options() opts.log.level = "trace" driver = Firefox(options=opts) The log output is stored in a file called geckodriver.log in your script’s current working directory. Ruby¶ The Selenium Ruby client comes with an Options helper to generate the correct moz:firefoxOptions capabilities object: Selenium::WebDriver.logger.level = :debug opts = Selenium::WebDriver::Firefox::Options.new(log_level: :trace) driver = Selenium::WebDriver.for :firefox, options: opts
http://firefox-source-docs.mozilla.org/testing/geckodriver/TraceLogs.html
CC-MAIN-2020-10
en
refinedweb
Debugging RxJS, Part 1: Tooling I’m moving away from Medium. This article, its updates and more recent articles are hosted on my personal blog: ncjamieson.com. I’m an RxJS convert and I’m using it in all of my active projects. With it, many things that I once found to be tedious are now straightforward. However, there is one thing that isn’t: debugging. The compositional and sometimes-asynchronous nature of RxJS can make debugging something of a challenge: there isn’t much state to inspect; and the call stack is rarely helpful. The approach I’ve used in the past has been to sprinkle do operators and logging throughout the codebase — to inspect the values that flow through composed observables. For a number of reasons, this approach is not one with which I’ve been satisfied: - I always seem to have to add more logging, changing the code whilst debugging it; - once debugged, I either have to remove the logging or put up with spurious output; - conditional logging to avoid said output looks pretty horrid when slapped in the middle of a nicely composed observable; - even with a dedicated logoperator, the experience is still less than ideal. Recently, I set aside some time to build a debugging tool for RxJS. There were a number of features that I felt the tool must have: - it should be as unobtrusive as possible; - it should not be necessary to have to continually modify code to debug it; - in particular, it should not be necessary to have to delete or comment out debugging code after the problem is solved; - it should support logging that can be easily enabled and disabled; - it should support capturing snapshots that can be compared over time; - it should offer some integration with the browser console — for switching debugging features on/off and for investigating state, etc. And some more that would be nice to have: - it should support pausing observables; - it should support modifying observables or the values they emit; - it should support logging mechanisms other than the console; - it should be extensible; - it should go some way towards capturing the data required to visualize subscription dependencies. With those features in mind, I built rxjs-spy. Core Concepts rxjs-spy introduces a tag operator that associates a string tag with an observable. The operator does not change the observable’s behaviour or values in any way. The tag operator can be used alone — import "rxjs-spy/add/operator/tag" — and the other rxjs-spy methods can be omitted from production builds, so the only overhead is the string annotations. Most of the tool’s methods accept matchers that determine to which tagged observables they will apply. Matchers can be simple strings, regular expressions or predicates that are passed the tag itself. When the tool is configured via a call to its spy method, it patches Observable.prototype.subscribe so that it is able to spy on all subscriptions, notifications and unsubscriptions. That does mean, however, that only observables that have been subscribed to will be seen by the spy. rxjs-spy exposes a module API that is intended to be called from code and a console API that is intended for interactive use in the browser’s console. Most of the time, I make a call to the module API's spy method early in the application’s start-up code and perform the remainder of the debugging using the console API. Console API Functionality When debugging, I usually use the browser’s console to inspect and manipulate tagged observables. The console API functionality is most easily explained by example — and the examples that follow work with the observables in this code: The console API in rxjs-spy is exposed via the rxSpy global. Calling rxSpy.show() will display a list of all tagged observables, indicating their state (incomplete, complete or errored), the number of subscribers and the most recently emitted value (if one has been emitted). The console output will look something like this: To show the information for only a specific tagged observable, a tag name or a regular expression can be passed to show: Logging can be enabled for tagged observables by calling rxSpy.log: Calling log with no arguments will enable the logging of all tagged observables. Most methods in the module API return a teardown function that can be called to undo the method call. In the console, that’s tedious to manage, so there is an alternative. Calling rxSpy.undo() will display a list of the methods that have been called: Calling rxSpy.undo and passing the number associated with the method call will see that call’s teardown function called. For example, calling rxSpy.undo(3) will see the logging of the interval observable undone: Sometimes, it’s useful to modify an observable or its values whilst debugging. The console API includes a let method that functions in much the same way as the RxJS let operator. It’s implemented in such a way that calls to the let method will affect both current and future subscribers the to tagged observable. For example, the following call will see the people observable emit mallory — instead of alice or bob: As with the log method, calls to the let method can be undone: Being able to pause an observable when debugging is something that’s become almost indispensable, for me. Calling rxSpy.pause will pause a tagged observable and will return a deck that can be used to control and inspect the observable’s notifications: Calling log on the deck will display the whether or not the observable is paused and will display the paused notifications. (The notifications are RxJS Notification instances obtained using the materialize operator). Calling step on the deck will emit a single notification: Calling resume will emit all paused notifications and will resume the observable: Calling pause will see the observable placed back into a paused state: It’s easy to forget to assign the returned deck to a variable, so the console API includes a deck method that behaves in a similar manner to the undo method. Calling it will display a list of the pause calls: Calling it and passing the number associated with the call will see the associated deck returned: Like the log and let calls, the pause calls can be undone. And undoing a pause call will see the tagged observable resumed: Hopefully, the above examples will have provided an overview of rxjs-spy and its console API. The follow-up parts of Debugging RxJS will focus on specific features of rxjs-spy and how they can be used to solve actual debugging problems. For me, rxjs-spy has certainly made debugging RxJS significantly less tedious. More Information The code for rxjs-spy is available on GitHub and there is an online example of its console API. The package is available for installation via NPM. For the next article in this series, see Debugging RxJS, Part 2: Logging.
https://medium.com/angular-in-depth/debugging-rxjs-4f0340286dd3
CC-MAIN-2020-10
en
refinedweb
. The essence of project hosting services is that you have the files associated with a project in the cloud. Many people may share these files. Every time you want to work on the project you explicitly update your version of the files, edit the files as you like, and synchronize the files with the "master version" in the cloud. It is a trivial operation to go back to a previous version of a file, corresponding to a certain point of time or labeled with a comment. You can also use tools to see what various people have done with the files throughout the history of the project. Greg Wilson's excellent Script for Introduction to Version Control provides a detailed motivation why you will benefit greatly from using version control systems. Here follows a shorter motivation and a quick overview of the basic concepts. The simplest services for hosting project files are Dropbox and Google Drive. It is very easy to get started with these systems, and they allow you to share files among laptops and mobile units with as many users as you want. The systems offer a kind of version control in that the files are stored frequently (several times per minute), and you can go back to previous versions for the last 30 days. However, it is challenging to find the right version from the past when there are so many of them and when the different versions are not annotated with sensible comments. Another deficiency of Dropbox and Google Drive is that they sync all your files in a folder, a feature you clearly do not want if there are many large files (simulation data, visualizations, movies, binaries from compilations, temporary scratch files, automatically generated copies) that can easily be regenerated. However, the most serious problem with Dropbox and Google Drive arises when several people edit files simultaneously: it can be difficult detect who did what when, roll back to previous versions, and to manually merge the edits when these are incompatible. Then one needs more sophisticated tools, which means a true version control system. The following text aims at providing you with the minimum information to started with Git, the leading version control system, combined with project hosting services for file storage. The mentioned services host all your files in a specific project in what is known as a repository, or repo for short. When a copy of the files are wanted on a certain computer, one clones the repository on that computer. This creates a local copy of the files. Now files can be edited, new ones can be added, and files can be deleted. These changes are then brought back to the repository. If users at different computers synchronize their files frequently with the repository, most modern version control systems will be able to merge changes in files that have been edited simultaneously on different computers. This is perhaps one of the most useful features of project hosting services. However, the merge functionality clearly works best for pure text files and less well for binary files, such as PDF files, MS Word or Excel documents, and OpenOffice documents. The installation of Git on various systems is described on the Git website under the Download section. Git involves compiled code so it is most convenient to download a precompiled binary version of the software on Windows, Mac and other Linux computers. On Ubuntu or any Debian-based system the relevant installation command is Terminal> sudo apt-get install git gitk git-doc This tutorial explains Git interaction through command-line applications in a terminal window. There are numerous graphical user interfaces to Git. Three examples are git-cola, TortoiseGit, and SourceTree. Make a file .gitconfig in your home directory with information on your full name, email address, your favorite text editor, and the name of an "excludes file" which defines the file types that Git should omit when bringing new directories under version control. Here is a simplified version of the author's .gitconfig file: [user] name = Hans Petter Langtangen email = hpl@simula.no editor = emacs [core] excludesfile = ~/.gitignore The excludesfile variable is important: it points to a file called .gitignore, which must list, using the Unix Shell Wildcard notation, the type of files that you do not need to have under version control, because they represent garbage or temporary information, or they can easily be regenerated from some other source files. A suggested .gitignore file looks like # compiled files: *.o *.so *.a # temporary files: *.bak *.swp *~ .*~ *.old tmp* .tmp* temp* .#* \#* # tex files: *.log *.dvi *.aux *.blg *.idx *.nav *.out *.toc *.snm *.vrb *.bbl *.ilg *.ind *.loe # eclipse files: *.cproject *.project # misc: .DS_Store You should be critical to what kind of files you really need a full history of. For example, you do not want to populate the repository with big graphics files of the type that can easily be regenerated by some program. The suggested .gitignore file above lists typical files that are not needed (usually because they are automatically generated by some program). In addition to a default .gitignore file in your home directory, it may be wise to have a .gitignore file tailored your repo in the root directory of the repo. Large data files, even when you want to version them, fill up your repo and should be taken care of through the special service Git Large File Storage. Go The ordinary GitHub URL of image files can be used in web pages to insert images from your repo, provided the image files are in the raw format - click the Raw button when viewing a file at github.com and use the corresponding URL in the img tag. Most Mac and Linux users prefer to work with Git via commands in a terminal window. Windows users may prefer a graphical user interface (GUI), and there are many options in this respect. There are also GUIs for Mac users. Here we concentrate on the efficient command-line interface to Git. You get started with your project on a new machine, or another user can get started with the project, by running Terminal> git clone git@github.com:user/My-Project.git Terminal> cd My-Project ls Recall to replace user by your real username and My-Project by the actual project name. The typical work flow with the "My Project" project starts with updating the local repository by going to the My-Project directory and writing Terminal> git pull origin master You may want to do git fetch and git merge instead of git pull as explained in the section Replacing pull by fetch and merge, especially if you work with branches. You can now edit files, make new files, and make new directories. New files and directories must be added with git add. There are also Git commands for deleting, renaming, and moving files. Typical examples on these Git commands are Terminal> git add file2.* dir1 dir2 # add files and directories Terminal> git rm file3 Terminal> git rm -r dir2 Terminal> git mv oldname newname Terminal> git mv oldname ../newdir When your chunk of work is ready, it is time to commit your changes (note the -am option): Terminal> git commit -am 'Description of changes.' If typos or errors enter the message, the git commit --amend command can be used to reformulate the message. Running git diff prior to git commit makes it easier to formulate descriptive commit messages since this command gives a listing of all the changes you have made to the files since the last commit or pull command. You may perform many commits (to keep track of small changes), before you push your changes to the global repository: Terminal> git push origin master It is recommended to pull, commit, and push frequently if the work takes place in several clones of the repo (i.e., there are many users or you work with the repo on different computers). Infrequent push and pull easily leads to merge problems (see the section Merging files with Git). Also remember that others (human and machines) cannot get your changes before they are pushed! You should run git status -s frequently to see the status of files: A for added, M for modified, R for renamed, and ?? for not being registered in the repo. Pay particular attention to the ?? files and examine if all of them are redundant or easily regenerated from other files - of not, run git add. .gitignorefile. The simplest way of adding files to the repo is to do Terminal> git add . The dot adds every file, and this is seldom what you want, since your directories frequently contain large redundant files or files that can easily be regenerated. You therefore need a .gitignore file, see the section Configuring Git, either in your home directory or in the root directory of the repo. The .gitignore file will ignore undesired files when you do git add .. A nice graphical tool allows you to view all changes, or just the latest ones: Terminal> gitk --all Terminal> gitk --since="2 weeks ago" You can also view changes to all files, some selected ones, or a subdirectory: Terminal> git log -p # all changes to all files Terminal> git log -p filename # changes to a specific file Terminal> git log --stat --summary # compact summary Terminal> git log --stat --summary subdir Adding --follow will print the history of file versions before the file got its present name. To show the author who is responsible for the last modification of each line in the file, use git blame: Terminal> git blame filename Terminal> git blame --since="1 week" filename A useful command to see the history of who did what, where individual edits of words are highlighted ( --word-diff), is git log -p --stat --word-diff filename Removed words appear in brackets and added words in curly braces. Looking for when a particular piece of text entered or left the file, say the text def myfunc, one can run Terminal> git log -p --word-diff --stat -S'def myfunc' filename This is useful to track down particular changes in the files to see when they occurred and who introduced them. One can also search for regular expressions instead of exact text: just replace -S by -G. Occasionally you need to go back to an earlier version of a file, e.g., a file called f.py. Start with viewing the history: Terminal> git log f.py Find a commit candidate from the list that you will compare the present version to, copy the commit hash (string like c7673487...), and run Terminal> git diff c7673487763ec2bb374758fb8e7efefa12f16dea f.py where the long string is the relevant commit hash. You can now view the differences between the most recent version and the one in the commit you picked (see the section Replacing pull by fetch and merge for how to configure the tools used by the git diff command). If you want to restore the old file, write Terminal> git checkout c7673487763ec2bb374758fb8e7efefa12f16dea f.py To go back to another version (the most recent one, for instance), find the commit hash with git log f.py, and do get checkout <commit hash> f.py. If f.py changed name from e.py at some point and you want e.py back, run git log --follow f.py to find the commit when e.py existed, and do a git checkout <commit hash> e.py. In case f.py no longer exists, run git log -- f.py to see its history before deletion. The last commit shown does not contain the file, so you need to check out the next last to retrieve the latest version of a deleted file. Often you just need to view the old file, not replace the current one by the old one, and then git show is handy. Unfortunately, it requires the full path from the root git directory: Terminal> git show \ c7673487763ec2bb374758fb8e7efefa12f16dea:dir1/dir2/f.py Run git log on some file and find the commit hash of the date or message when want to go back to. Run git checkout <commit hash> to change all files to this state. The problem of going back to the most recent state is that git log has no newer commits than the one you checked out. The trick is to say git checkout master to set all files to most recent version again. If you want to reset all files to an old version and commit this state as the valid present state, you do Terminal> git checkout c7673487763ec2bb374758fb8e7efefa12f16dea . Terminal> git commit -am 'Resetting to ...' Note the period at the end of the first command (without it, you only get the possibility to look at old files, but the next commit is not affected). Sometimes accidents with many files happen and you want to go back to the last commit. Find the hash of the last commit and do Terminal> git reset --hard c867c487763ec2 This command destroys everything you have done since the last commit. To push it as the new state of the repo, do Terminal> git push origin HEAD --force The git pull command fetches new files from the repository and tries to perform an automatic merge if there are conflicts between the local files and the files in the repository. Alternatively, you may run git fetch and git merge to do the same thing as described in the section Replacing pull by fetch and merge. We shall now address what do to if the merge goes wrong, which occasionally happens. Git will write a message in the terminal window if the merge is unsuccessful for one or more files. These files will have to be edited manually. Merge markers of the type >>>>>, ======, and <<<<< have been inserted by Git to mark sections of a file where the version in the repository differ from the local version. You must decide which lines that are to appear in the final, merged version. When done, perform git commit and the conflicts are resolved. Graphical merge tools may ease the process of merging text files. You can run git mergetool --tool=meld to open the merge tool meld for every file that needs to be merged (or specify the name of a particular file). Other popular merge tools supported by Git are araxis, bc3, diffuse, ecmerge, emerge, gvimdiff, kdiff3, opendiff, p4merge, tkdiff, tortoisemerge, vimdiff, and xxdiff. Below is a Unix shell script illustrating how to make a global repository in Git, and how two users clone this repository and perform edits in parallel. There is one file myfile in the repository. #!/bin/sh # Demo script for exemplifying git and merge rm -rf tmp1 tmp2 tmp_repo # Clean up previous runs mkdir tmp_repo # Global repository for testing cd tmp_repo git --bare init --shared cd .. # Make a repo that can be pushed to tmp_repo mkdir _tmp cd _tmp cat > myfile <<EOF This is a little test file for exemplifying merge of files in different git directories. EOF git init git add . # Add all files not mentioned in ~/.gitignore git commit -am 'first commit' git push ../tmp_repo master cd .. rm -rf _tmp # Make a new hg repositories tmp1 and tmp2 (two users) git clone tmp_repo tmp1 git clone tmp_repo tmp2 # Change myfile in the directory tmp1 cd tmp1 # Edit myfile: insert a new second line perl -pi -e 's/a little\n/a little\ntmp1-add1\n/g' myfile # Register change in local repository git commit -am 'Inserted a new second line in myfile.' # Look at changes in this clone git log -p # or a more compact summary git log --stat --summary # or graphically #gitk # Register change in global repository tmp_repo git push origin master cd .. # Change myfile in the directory tmp2 "in parallel" cd tmp2 # Edit myfile: add a line at the end cat >> myfile <<EOF tmp2-add1 EOF # Register change locally git commit -am 'Added a new line at the end' # Register change globally git push origin master # Error message: global repository has changed, # we need to pull those changes to local repository first # and see if all files are compatible before we can update # our own changes to the global repository. # git writes #To /home/hpl/vc/scripting/manu/py/bitgit/src-bitgit/tmp_repo # ! [rejected] master -> master (non-fast-forward) #error: failed to push some refs to ... git pull origin master # git writes: #Auto-merging myfile #Merge made by recursive. # myfile | 1 + # 1 files changed, 1 insertions(+), 0 deletions(-) cat myfile # successful merge! git commit -am merge git push origin master cd .. # Perform new changes in parallel in tmp1 and tmp2, # this time causing hg merge to fail # Change myfile in the directory tmp1 cd tmp1 # Do it all right by pulling and updating first git pull origin master # Edit myfile: insert "just" in first line. perl -pi -e 's/a little/tmp1-add2 a little/g' myfile # Register change in local repository git commit -am 'Inserted "just" in first line.' # Register change in global repository tmp_repo git push origin master cd .. # Change myfile in the directory tmp2 "in parallel" cd tmp2 # Edit myfile: replace little by modest perl -pi -e 's/a little/a tmp2-replace1\ntmp2-add2\n/g' myfile # Register change locally git commit -am 'Replaced "little" by "modest"' # Register change globally git push origin master # Not possible: need to pull changes in the global repository git pull origin master # git writes #CONFLICT (content): Merge conflict in myfile #Automatic merge failed; fix conflicts and then commit the result. # we have to do a manual merge cat myfile echo 'Now you must edit myfile manually' You may run this file git_merge.sh named by sh -x git_merge.sh. At the end, the versions of myfile in the repository and the tmp2 directory are in conflict. Git tried to merge the two versions, but failed. Merge markers are left in tmp2/myfile: <<<<<<< HEAD This is a tmp2-replace1 tmp2-add2 ======= This is tmp1-add2 a little >>>>>>> ad9b9f631c4cc586ea951390d9415ac83bcc9c01 tmp1-add1 test file for exemplifying merge of files in different git directories. tmp2-add1 Launch a text editor and edit the file, or use git mergetool, so that the file becomes correct. Then run git commit -am merge to finalize the merge. Branching and stashing are nice features of Git that allow you to try out new things without affecting the stable version of your files. Usually, you extend and modify files quite often and perform a git commit every time you want to record the changes in your local repository. Imagine that you want to correct a set of errors in some files and push these corrections immediately. The problem is that such a push will also include the latest, yet unfinished files that you have committed. A better organization of your work would be to keep the latest, ongoing developments separate from the more official and stable version of the files. This is easily achieved by creating a separate branch where new developments takes place: Terminal> git branch newstuff # create new branch Terminal> git checkout newstuff Terminal> # extend and modify files... Terminal> git commit -am 'Modified ... Added a file on ...' Terminal> git checkout master # swith back to master Terminal> # correct errors Terminal> git push origin master Terminal> git checkout newstuff # switch to other branch Terminal> git merge master # keep branch up-to-date w/master Terminal> # continue development work... Terminal> git commit -am 'More modifications of ...' At some point, your developments in newstuff are mature enough to be incorporated in the master branch: Terminal> git checkout newstuff Terminal> git merge master # synchronize newstuff w/master Terminal> git checkout master Terminal> git merge newstuff # synchronize master w/newstuff You no longer need the newstuff branch and can delete it: Terminal> git branch -d newstuff This command deletes the branch locally. To also delete the branch in the remote repo, run Terminal> git push origin --delete newstuff You can learn more in an excellent introduction and demonstration of Git branching. It is not possible to switch branches unless you have committed the files in the current branch. If your work on some files is in a mess and you want to change to another branch or fix other files in the current branch, a "global" commit affecting all files might be immature. Then the git stash command is handy. It records the state of your files and sets you back to the state of the last commit in the current branch. With git stash apply you will update the files in this branch to the state when you did the last git stash. Let us explain a typical case. Suppose you have performed some extensive edits in some files and then you are suddenly interrupted. You need to fix some typos in some other files, commit the changes, and push. The problem is that many files are in an unfinished state - in hindsight you realize that those files should have been modified in a separate branch. It is not too late to create that branch! First run git stash to get the files back to the state they were at the last commit. Then run git stash branch newstuff to create a new branch newstuff containing the state of the files when you did the (last) git stash command. Stashing used this way is a convenient technique to move some immature edits after the last commit out in a new branch for further experimental work. You can get the stashed files back by git stash apply. It is possible to multiple git stash and git stash apply commands. However, it is easy to run into trouble with multiple stashes, especially if they occur in multiple branches, as it becomes difficult to recognize which stashes that belong to which branch. A good advice is therefore to do git stash only once to get back to a clean state and then move the unfinished messy files to a separate branch with git stash branch newstuff. The git pull command actually performs two steps that are sometimes advantageous to run separately. First, a get fetch is run to fetch new files from the repository, and thereafter a git merge command is run to merge the new files with your local version of the files. While git pull tries to do a lot and be smart in the merge, very often with success, the merge step may occasionally lead to trouble. That is why it is recommended to run a git merge separately, especially if you work with branches. To fetch files from your repository at GitHub, which usually has the nickname origin, you write Terminal> git fetch origin You now have the possibility to check out in detail what the differences are between the new files and local ones: Terminal> git diff origin/master This command produces comparisons of the files in the current local branch and the master branch at origin (the GitHub repo). In this way you can exactly see the differences between branches. It also gives you an overview of what others have done with the files. When you are ready to merge in the new files from the master branch of origin with the files in the current local branch, you say Terminal> git merge origin/master Especially when you work with multiple branches, as outlined in the section Git working style with branching and stashing, it is wise to first do a get fetch origin and then update each branch separately. The git fetch origin command will list the branches, e.g., * master gh-pages next After updating master as described, you can continue with another branch: Terminal> git checkout next Terminal> git diff origin/next Terminal> git merge origin/next Terminal> git checkout master git diffcommand The git diff command launches by default the Unix diff tool in the terminal window. Many users prefer to use other diff tools, and the desired one can be specified in your ~/.gitconfig file. However, a much recommended approach is to wrap a shell script around the call to the diff program, because git diff actually calls the diff program with a series of command-line arguments that will confuse diff programs that take the names of the two files to be compared as arguments. In ~/.gitconfig you specify a script to do the diff: [diff] external = ~/bin/git-diff-wrapper.sh It remains to write the git-diff-wrapper.sh script. The 2nd and 5th command-line arguments passed to this script are the name of the files to be compared in the diff. A typical script may therefore look like #!/bin/sh diff "$2" "$5" | less Here we use the standard (and quite primitive) Unix diff program, but we can replace diff by, e.g., diffuse, kdiff3, xxdiff, meld, pdiff, or others. With a Python script you can easily check for the extensions of the files and use different diff tools for different types of files, e.g., latexdiff for LaTeX files and pdiff for pure text files. Occasionally it becomes desirable to replace all files in the local repo with those in the repo at the file hosting service. One possibility is removing your repo and cloning again, or use the Git commands Terminal> git fetch --all Terminal> git reset --hard origin/master Say you have two branches, A and B, and want to merge a file f.txt in A with the latest version in B. To merge this single file, go to the directory where f.txt resides and do Terminal> git checkout A Terminal> git checkout --patch B f.txt If f.txt is not present in branch A, and if you want to include more files, drop the --patch option and specify files with full path relative to the root in the repo: Terminal> git checkout A Terminal> git checkout B doc/f.txt src/files/g.py Now, f.txt and g.py from branch B will be included in branch A as well. In small collaboration teams it is natural that everyone has push access to the repo. On GitHub this is known as the Shared Repository Model. As teams grow larger, there will usually be a few people in charge who should approve changes to the files. Ordinary team members will in this case not clone a repo and push changes, but instead fork the repo and send pull requests, which constitutes the Fork and Pull Model. Say you want to fork the repo. The first step is to press the Fork button on the project page for the somebody/proj1 project on GitHub. This action creates a new repo proj1, known as the forked repo, on your GitHub account. Clone the fork as you clone any repo: Terminal> git clone When you do git push origin master, you update your fork. However, the original repo is usually under development too, and you need to pull from that one to stay up to date. A git pull origin master pulls from origin which is your fork. To pull from the original repo, you create a name upstream, either by Terminal> git remote add upstream \ if you cloned with such an https address, or by Terminal> git remote add upstream \ git@github.com:somebody/proj1.git if you cloned with a git@github.com (SSH) address. Doing a git pull upstream master would seem to be the command for pulling the most recent files in the original repo. However, it is not recommended to update the forked repo's files this way because heavy development of the sombody/proj1 project may lead to serious merge problems. It is much better to replace the pull by a separate fetch and merge. The typical workflow is Terminal> git fetch upstream # get new version of files Terminal> git merge upstream/master # merge with yours Terminal> # Your files are up to date - ready for editing Terminal> git commit -am 'Description...' Terminal> git push origin master # store changes in your fork At some point you would like to push your changes back to the original repo somebody/proj1. This is done by a pull request. Make sure you have selected the right branch on the project page of your forked project. Press the Pull Request button and fill out the form that pops up. Trusted people in the somebody/proj1 project will now review your changes and if they are approved, your files are merged into the original repo. If not, there are tools for keeping a dialog about how to proceed. Also in small teams where everyone has push access, the fork and pull request model is beneficial for reviewing files before the repo is actually updated with new contributions. An annoying feature of Git for beginners is the fact that if you clone a repo, you only get the master branch. There are seemingly no other branches: Terminal> git branch * master To see which branches that exist in the repo, type Terminal> git branch -a * master remotes/origin/HEAD -> origin/master remotes/origin/gh-pages remotes/origin/master remotes/origin/next If there is only one remote repo that you pull/push from/to, you can simply switch branch with git checkout the usual way: Terminal> git checkout gh-pages Terminal> git branch * gh-pages master Terminal> git checkout next Terminal> git branch gh-pages master * next You might need to do git fetch origin to see new branches made on other machines. When you have more than one remote, which is usually the case if you have forked a repo, see the section Team work with forking and pull requests, you must use do a checkout with specifying the remote branch you want: Terminal> git checkout -b gh-pages --track remote/gh-pages Terminal> git checkout -b next --track upstream/next Files can be edited, added, or removed as soon as you have done the local checkout. It is possible to write a little script that takes the output of git branch -a after a git clone command and automatically check out all branches via git checkout. Although the purpose of these notes is just to get the reader started with Git, it must be mentioned that there are advanced features of Git that have led to very powerful workflows with files and people, especially for software development. There is an official Git workflow model that outlines the basic principles, but it can be quite advanced for those with modest Git knowledge. A more detailed explanation of a recommended workflow for beginners is given in the developer instructions for the software package PETSc. This is highly suggested reading. The associated "quick summary" of Git commands for their workflow is also useful. git ls-files is the command: Terminal> git ls-files # list all tracked files Terminal> git ls-files -o # list non-tracked files Terminal> git ls-files myfile # prints myfile if it's tracked Terminal> git ls-files myfile --error-unmatch The latter command prints an error message if myfile is not tracked. See man git-ls-files for the many options this utility has. The command git gc can compress a git repository and should be run regularly on large repositories. Greater effect is achieved by git gc --aggressive --prune=all. You can measure the size of a repo before and after compression by git gc using du -s repodir, where repodir is the name of the root directory of the repository. Occasionally big or sensitive files are removed from the repo and you want to permanently remove these files from the revision history. This is achieved using git filter-branch. To remove a file or directory with path doc/src/mydoc relative to the root directory of the repo, go to this root directory, make sure all branches are checked out on your computer, and run Terminal> git filter-branch --index-filter \ 'git rm -r --cached --ignore-unmatch doc/src/mydoc' \ --prune-empty -- --all Terminal> rm -rf .git/refs/original/ Terminal> git reflog expire --expire=now --all Terminal> git gc --aggressive --prune=now Terminal> git push origin master --force # do this for each branch Terminal> git checkout somebranch Terminal> git push origin somebranch --force You must repeat the push command for each branch as indicated. If other users have created their own branches in this repo, they need to rebase, not merge, when updating the branches! Sometimes you accidentally remove files from a repo, either by git rm or a plain rm. You can get the files back as long as they are in the remote repo. In case of a plain rm command, run Terminal> git checkout `git ls-files` to restore all missing files in the current directory. In case of a git rm command, use git log --diff-filter=D --summary to find the commit hash corresponding to the last commit the files were in the repo. Restoring a file is then done by Terminal> git checkout <commit hash> filename Working.
http://hplgit.github.io/teamods/bitgit/Langtangen_github.html
CC-MAIN-2020-10
en
refinedweb
Client for the Spotify Web API Project description Welcome to the Python Package Index page of Tekore, a client of the Spotify Web API for Python! Tekore allows you to interact with the Web API effortlessly. from tekore import Spotify spotify = Spotify(token) tracks = spotify.current_user_top_tracks(limit=10) for track in tracks.items: print(track.name) finlandia = '3hHWhvw2hjwfngWcFjIzqr' spotify.playback_start_tracks([finlandia]) See our online documentation on Read The Docs for tutorials, examples, package reference and a detailed description of features. Visit our repository on GitHub if you’d like to submit an issue or ask just about anything related to Tekore. Installation Tekore can be installed from the Package Index via pip. $ pip install tekore Versioning Tekore provides both stable and beta endpoints of the Web API. However, beta endpoints may be changed by Spotify without prior notice, so older versions of the library may have unintended issues. Because of this, Tekore follows a modified form of Semantic Versioning. Incompatible changes in the library are still introduced in major versions, and new features and endpoints are added in minor versions. But endpoints removed by Spotify are removed in minor versions and changes to endpoints are implemented as bugfixes. See the Web API documentation for further information on beta endpoints. Changelog 1.0.1 Bugfixes - Accept missing video thumbnail in PlaylistTrack (#132) 1.0.0 - Packaging improvements - Declare versioning scheme 0.1.0 Initial release of Tekore! Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/tekore/1.0.1/
CC-MAIN-2020-10
en
refinedweb
React components for the Salesforce Lightning Design System React LDS react-lds provides React components for the Salesforce Lightning Design System. Installation To install the stable version with npm, run: npm install react-lds --save Usage react-lds exports components as modules. You can consume these via import in your own React components. import React from 'react'; import { Badge } from 'react-lds'; const HelloWorld = props => ( <Badge theme="warning" label={props.message} /> ); Head over to the Storybook Docs to see a list of available components and their usage as well as interactive sample implementations of each component. Context In order to use ReactLDS, you will have to provide assetBasePath via the React Context. import { Children, Component } from 'react'; import PropTypes from 'prop-types'; class AssetPathProvider extends Component { getChildContext() { return { assetBasePath: '', }; } render() { const { children } = this.props; return Children.only(children); } } Page.propTypes = { children: PropTypes.node.isRequired, }; Page.childContextTypes = { assetBasePath: PropTypes.string, }; Interactivity Some components need a certain level of interactivity to be usable as React components. In order to achieve this, these components keep a minimal internal state and provide ways to hook into fired events: <Datepicker /> <Lookup /> <DropDownMenu /> <Modal /> <PickList /> <Tab /> Development yarn install and yarn start. Add or modify stories in ./stories Happy hacking! Developing while embedded into a react project npm link in this folder. After you changed stuff, run npm build to update the files inside the ./dist folder, because that's the entry point for external react applications. In your react app: npm link react-lds. Publish - Open a new pull request from /release/{version} - Adjust version in package.json - Write CHANGELOG.md - Merge into master and add a new tag. Travis will do the rest
https://reactjsexample.com/react-components-for-the-salesforce-lightning-design-system/
CC-MAIN-2020-10
en
refinedweb
In C A function pointer is a type of pointer. When dereferenced, a function pointer can be used to invoke a function and pass it arguments just like a normal function.. Function pointers are often used to replace switch statements. In the following program I'll show you how to do this job. #include <stdio.h> #include <stdlib.h> #include <math.h> void runOper2(float a, float (*func)(float)); void runOper(float a, char oper); float Sqr(float a); float Sqrt(float a); float Log(float a); int main(int argc, char **argv); float Sqr(float a) { return a*a; } float Sqrt(float a) { return sqrt(a); } float Log(float a) { return log(a); } void runOper(float a, char oper) { float result; switch(oper) { case 's' : result = Sqr(a) break; case 'q' : break; case 'l' : result = Log(a) break; } printf("result = %f\n", result); } void runOper2(float a, float (*func)(float)) { float result = func(a); printf("result = %f\n", result); } // Main program int main(int argc, char **argv) { runOper(2, 's'); runOper2(2, &Sqr); runOper(2, 'q'); runOper2(2, &Sqrt); runOper(2, 'l'); runOper2(2, &Log); return(EXIT_SUCCESS); } The two calls runOper and runOper2 are equivalent but using runOper2 you shall write less code. Callback Functions Another use for function pointers is setting up "listener" or "callback" functions that are invoked when a particular event happens. One example is when you're writing code for a. Gg1
http://www.xappsoftware.com/wordpress/2011/07/11/function-pointers-in-c/
CC-MAIN-2020-10
en
refinedweb
bond_ (3) - Linux Man Pages bond_: fixed-coupon bond helper NAMEQuantLib::FixedRateBondHelper - fixed-coupon bond helper SYNOPSIS #include <ql/termstructures/yield/bondhelpers.hpp> Inherits BootstrapHelper< YieldTermStructure >. Public Member Functions FixedRateBondHelper (const Handle< Quote > &cleanPrice, Natural settlementDays, Real faceAmount, const Schedule &schedule, const std::vector< Rate > &coupons, const DayCounter &dayCounter, BusinessDayConvention paymentConv=Following, Real redemption=100.0, const Date &issueDate=Date()) FixedRateBondHelper (const Handle< Quote > &cleanPrice, const boost::shared_ptr< FixedRateBond > &bond) BootstrapHelper interface Real impliedQuote () const void setTermStructure (YieldTermStructure *) sets the term structure to be used for pricing additional inspectors boost::shared_ptr< FixedRateBond > bond () const Visitability void accept (AcyclicVisitor &) Protected Attributes boost::shared_ptr< FixedRateBond > bond_ RelinkableHandle< YieldTermStructure > termStructureHandle_ Detailed Description fixed-coupon bond helper Warning - This class assumes that the reference date does not change between calls of setTermStructure(). Examples: Bonds.cpp, and FittedBondCurve.cpp. Constructor & Destructor Documentation FixedRateBondHelper (const Handle< Quote > & cleanPrice, const boost::shared_ptr< FixedRateBond > & bond) Warning - Setting a pricing engine to the passed bond from external code will cause the bootstrap to fail or to give wrong results. It is advised to discard the bond after creating the helper, so that the helper has sole ownership of it. Member Function Documentation void setTermStructure (YieldTermStructure *) from BootstrapHelper< YieldTermStructure >. Author Generated automatically by Doxygen for QuantLib from the source code. Linux man pages generated by: SysTutorials
https://www.systutorials.com/docs/linux/man/3-bond_/
CC-MAIN-2020-10
en
refinedweb
More Windows Service in C# jeikabu Originally published at rendered-obsolete.github.io on ・1 min read Previously I discussed a Windows service we call “layer0”. Our application has the additional wrinkle that this service needs to interact with the user and their desktop. Interactive Services provides guidance how to accomplish this. Basically, spawn a desktop application as the user and use IPC to communicate between the two. We refer to this portion of our client as “layer1”. Session Events In order for layer0 service to spawn layer1 process as the desktop user, we need to track user login/logout activity. We’re going to modify the class derived from ServiceBase. First, enable ServiceBase.CanHandleSessionChangeEvent: public Layer0Service() { CanHandleSessionChangeEvent = true; } Then override OnSessionChange: protected override void OnSessionChange(SessionChangeDescription changeDescription) { Log("OnSessionChange: " + changeDescription.Reason); base.OnSessionChange(changeDescription); switch (changeDescription.Reason) { case SessionChangeReason.SessionLogon: case SessionChangeReason.SessionUnlock: // Switch between logged in users. DoLogon(); break; case SessionChangeReason.SessionLogoff: //case SessionChangeReason.SessionLock: DoLogoff(); break; } } The Logon and Logoff events are self-explanatory. Unlock (and the corresponding Lock) less so. Unlock events occur when fast user switching is used, for example. Start Process as Desktop User Code how layer0 service creates a process as the current desktop user: #if DEBUG // Show console window const Pinvoke.CreationFlags Layer1CreationFlags = Pinvoke.CreationFlags.CreateNewConsole; #else const Pinvoke.CreationFlags Layer1CreationFlags = Pinvoke.CreationFlags.CreateNoWindow; #endif Pinvoke.PROCESS_INFORMATION? serviceStartLayer1AsUser(string exePath, string command) { IntPtr server = IntPtr.Zero; IntPtr ppSessionInfo = IntPtr.Zero; IntPtr userToken = IntPtr.Zero; // WARNING: the user token is supposed to be a secret, don't print it anywhere try { // Query all sessions on local machine server = Pinvoke.WTSOpenServer("localhost"); Int32 count = 0; Int32 retval = Pinvoke.WTSEnumerateSessions(server, ref ppSessionInfo, ref count); if (retval == 0) { throw new Win32Exception(Marshal.GetLastWin32Error(), "WTSEnumerateSessions"); } // Find session for the logged in user and get their token Int32 dataSize = Marshal.SizeOf(typeof(Pinvoke.WTS_SESSION_INFO)); Int64 current = (Int64)ppSessionInfo; for (int i = 0; i < count; i++) { var si = (Pinvoke.WTS_SESSION_INFO)Marshal.PtrToStructure((IntPtr)current, typeof(Pinvoke.WTS_SESSION_INFO)); current += dataSize; if (si.State != Pinvoke.WTS_CONNECTSTATE_CLASS.WTSActive && si.State != Pinvoke.WTS_CONNECTSTATE_CLASS.WTSConnected) continue; var sessionId = (uint)si.SessionID; // WARNING: the user token is supposed to be a secret, don't print it anywhere if (OS.Pinvoke.WTSQueryUserToken(sessionId, out userToken)) { Log(LogLevel.Info, "WTSQueryUserToken succeeded for session: " + sessionId); break; } } if (userToken == IntPtr.Zero) { Log(LogLevel.Error, "Unable to obtain user token, unable to start layer1"); return null; } // Launch layer1 as the logged-in user var nullSecurityAttributes = new Pinvoke.SECURITY_ATTRIBUTES { lpSecurityDescriptor = IntPtr.Zero }; Pinvoke.STARTUPINFO startupInfo = Pinvoke.StartupInfoAlloc(); Pinvoke.PROCESS_INFORMATION processInfo; // WARNING: the user token is supposed to be a secret, don't print it anywhere Pinvoke.CreateProcessAsUser(userToken, exePath, command, ref nullSecurityAttributes, ref nullSecurityAttributes, true, Layer1CreationFlags, IntPtr.Zero, null, ref startupInfo, out processInfo); return processInfo; } finally { if (server != IntPtr.Zero) Pinvoke.WTSCloseServer(server); if (ppSessionInfo != IntPtr.Zero) Pinvoke.WTSFreeMemory(ppSessionInfo); if (userToken != IntPtr.Zero) Pinvoke.CloseHandle(userToken); } } Highlights: - Use WTSEnumerateSessions to iterate over all sessions looking for the active desktop session. - WTSQueryUserToken to obtain primary access token for that session’s user. - CreateProcessAsUser to launch a process (i.e. layer1) as that user. Now both the layer0 service and layer1 process are running. - I should attribute the source this code is derived from, but I didn’t make a note of it. There’s a number of google results that are fairly similar: - Might be able to replace looping over all sessions with WTSGetActiveConsoleSessionId() Console Executable Quick aside. I mentioned we want our developers to be able to start layer0 as a console application. When layer0 is a desktop application instead of a service, starting layer1 is more straightforward: Pinvoke.PROCESS_INFORMATION? startLayer1(string exePath, string commandline) { var nullSecurityAttributes = new Pinvoke.SECURITY_ATTRIBUTES { lpSecurityDescriptor = IntPtr.Zero }; Pinvoke.STARTUPINFO startupInfo = Pinvoke.StartupInfoAlloc(); Pinvoke.PROCESS_INFORMATION procInfo; Pinvoke.CreateProcess(exePath, commandline, ref nullSecurityAttributes, ref nullSecurityAttributes, true, Layer1CreationFlags, IntPtr.Zero, null, ref startupInfo, out procInfo); return procInfo; } The canonical solution to start another exe is System.Diagnostics.Process. However, by using CreateProcess (which also returns PROCESS_INFORMATION), it means managing layer1 will be the same whether starting it from a service or console application. Process Wrangling Once the layer1 process is running we’ve got a lot of behaviour and handling that is specific to our application. The most interesting bit is the monitoring that layer0 does of layer1: var objState = (ObjectState)Pinvoke.WaitForSingleObject(layer1ProcInfo.hProcess, 0); if (user_logout) { // There's no longer a user if (objState == Pinvoke.ObjectState.WaitObject0) { // Already not running. Wait for a user to login. } else if (objState == Pinvoke.ObjectState.WaitTimeout) { // Running. Tell it to stop. } } else if (user_login_or_fast_user_switching) { // There's a user (may or may not have been one before) if(objState == Pinvoke.ObjectState.WaitObject0) { // Not running, so start it. } else if(objState == Pinvoke.ObjectState.WaitTimeout) { // Running as different user. Stop it, then restart it as new user } } else { // Nothing special has happened. This is the state we're normally in. if (objState == Pinvoke.ObjectState.WaitObject0) { // Process not running. Check the exit code to see if it was intentional. if (Pinvoke.GetExitCodeProcess(procInfo.hProcess, out uint lpExitCode) && lpExitCode == (uint)L1ExitCode.UserLoggingOut) { // Process exited but it was told to because user is logging out. We're probably going to receive a "user logout" event. } else { // Process stopped/crashed. Restart it. } } else if (objState == Pinvoke.ObjectState.WaitTimeout) { // Still running. Everything ok- do nothing. } } Behaviour we wanted: - Wait for user login to start layer1 - If layer1 crashes restart it - If switch users, need to stop layer1 and restart it as new user - During user logout stop layer1 Here we use WaitForSingleObject() to monitor the layer1 process. The first argument is the process handle, the second argument is 0 so the function returns immediately with the current state of the process: WaitTimeoutmeans it’s running WaitObject0means it’s not GetExitCodeProcess() is a fairly recent addition. Looking at the log output we noticed that after the user initiates logout, layer1 process exits and then layer0 tries unsuccessfully to restart it; it either fails to start or immediately exits. Layer0 keeps trying to restart layer1 until OnSessionChange(SessionLogoff) is called (a few seconds after layer1 first exited). We use the exit code to inform layer0 that the process intended to stop and shouldn’t be restarted. Part of this will be covered when I discuss details of layer1. Detours Interactive Services mentions the alternative of passing SERVICE_INTERACTIVE_PROCESS to CreateService(). We didn’t pursue this approach for two reasons: NoInteractiveServicesregistry key is set by default starting with Windows 8; SERVICE_INTERACTIVE_PROCESSis on its way to being deprecated. - Our application needs to launch 3rd-party applications, and processes launched from a service have an unusual execution environment. Compatibility issues are a concern. Owing to our use of Apache Thrift we have a complete messaging solution for IPC between layer1 and layer0 (the service). Layer0 begins listening before starting layer1, and layer1 connects as soon as it starts.
https://dev.to/jeikabu/more-windows-service-in-c-3keg
CC-MAIN-2020-10
en
refinedweb
By preparing custom classes that inherit from an appropriate base class, you can extend the functionality of Kentico. This approach allows you to implement the following types of objects: - Integration connectors - Marketing automation actions - Notification gateways - Payment gateways - Scheduled tasks - Custom Smart search components: - Translation services - Workflow actions If you create the custom classes in your project's App_Code folder (or CMSApp_AppCode -> Old_App_Code on web application installations), you do not need to integrate a new assembly into the web project. Code in the App_Code folder is compiled dynamically and automatically referenced in all other parts of the system. Registering custom classes in the App_Code folder To ensure that the system can load custom classes placed in App_Code, you need to register each class. Edit your custom class. - Add a using statement for the CMS namespace. Add the RegisterCustomClass assembly attribute above the class declaration (for every App_Code class that you want to register). using CMS; // Ensures that the system loads an instance of 'CustomClass' when the 'MyClassName' class name is requested. [assembly: RegisterCustomClass("MyClassName", typeof(CustomClass))] ... public class CustomClass { ... } The RegisterCustomClass attribute accepts two parameters: - The first parameter is a string identifier representing the name of the class. - The second parameter specifies the type of the class as a System.Type object. When the system requests a class whose name matches the first parameter, the attribute ensures that an instance of the given class is provided. Once you have registered your custom classes, you can use them as the source for objects in the system. When assigning App_Code classes to objects in the administration interface, fill in the following values: - Assembly name: (custom_classes) - Class: must match the value specified in the first parameter of the corresponding RegisterCustomClass attribute
https://docs.kentico.com/k9/custom-development/loading-custom-classes-from-app_code
CC-MAIN-2020-10
en
refinedweb
Library and Extension FAQ Contents - Library and Extension FAQ - General Library Questions - Common tasks - Threads - Input and Output - Network/Internet Programming - Databases - Mathematics and Numerics General Library Questions How do I find a module or application to perform task X? Check the Library Reference to see if there’s a relevant standard library module. (Eventually you’ll learn what’s in the standard library and will be able to skip this step.) For third-party packages, search the Python Package Index or try Google or another Web search engine. Searching for “Python” plus a keyword or two for your topic of interest will usually find something helpful. Where is the math.py (socket.py, regex.py, etc.) source file? If you can’t find a source file for a module it may be a built) How+"[email protected]"} """ The minor disadvantage is that this defines the script’s __doc__ string. However, you can fix that by adding __doc__ = """...Whatever...""" Is there a curses/termcap package for Python?. Is there an equivalent to C’s onexit() in Python? The atexit module provides a register function that is similar to C’s onexit(). Why don’t my signal handlers work? The most common problem is that the signal handler is declared with the wrong argument list. It is called as handler(signum, frame) so it should be declared with two arguments: def handler(signum, frame): ... Common tasks How do I test a Python program or component?. To make testing easier, you should use good modular design in your that automates a sequence of tests can be associated with each module.. How do I create documentation from doc strings? The pydoc module can create HTML from the doc strings in your Python source code. An alternative for creating API documentation purely from docstrings is epydoc. Sphinx can also include docstring content. How do I get a single keypress at a time? For Unix variants there are several solutions. It’s straightforward to do this using curses, but curses is a fairly large module to learn. Threads How do I program using threads?. None of my threads seem to run: why? a good delay value for time.sleep(),. How do I parcel out work among a bunch of worker threads?.. What kinds of global value mutation are thread-safe?! Can’t we get rid of the Global Interpreter Lock?..? Input and Output How do I delete a file? (And other file questions...)(). How do I copy a file? The shutil module contains a copyfile() function. Note that on MacOS 9 it doesn’t copy the resource fork and Finder info. How do I read (or write) binary data?. I can’t seem to use os.read() on a pipe created with os.popen(); why? os.read() is a low-level function which takes a file descriptor, a small integer representing the opened file. os.popen() creates a high-level file object, the same type returned by the built-in open() function. Thus, to read n bytes from a pipe p created with os.popen(), you need to use p.read(n). How do I access the serial (RS232) port? For Win32, POSIX (Linux, BSD, etc.), Jython: For Unix, see a Usenet post by Mitch Chapman: Why doesn’t closing sys.stdout (stdin, stderr) really close it?. Network/Internet Programming What WWW tools are there for Python?. How can I mimic CGI form submission (METHOD=POST)?) with req:. What module should I use to help with generating HTML? You can find a collection of useful links on the Web Programming wiki page. How do I send mail from a Python script?, sometimes /usr/sbin/sendmail. The sendmail manual page will help you out. Here’s some sample code: SENDMAIL = "/usr/sbin/sendmail" # sendmail location import os p = os.popen("%s -t -i" % SENDMAIL, "w") p.write("To: [email protected]\n") p.write("Subject: test\n") p.write("\n") # blank line separating headers from body p.write("Some text\n") p.write("some more text\n") sts = p.close() if sts != 0: print("Sendmail exit status", sts) How do I avoid blocking in the connect() method of a socket?. Databases Are there any interfaces to database packages in Python?. How do you implement persistent objects in Python? The pickle library module solves this in a very general way (though you still can’t store things like open files, sockets or windows), and the shelve library module uses pickle and (g)dbm to create persistent mappings containing arbitrary Python objects. Mathematics and Numerics How do I generate random numbers in Python? The standard module random implements a random number generator. Usage is simple: import random random.random() This returns a random floating point number in the range [0, 1). There are also many other specialized generators in this module, such as: - randrange(a, b) chooses an integer in the range [a, b). - uniform(a, b) chooses a floating point number in the range [a, b). - normalvariate(mean, sdev) samples the normal (Gaussian) distribution. Some higher-level functions operate on sequences directly, such as: - choice(S) chooses random element from a given sequence - shuffle(L) shuffles a list in-place, i.e. permutes it randomly There’s also a Random class you can instantiate to create independent multiple random number generators.
https://documentation.help/Python-3.4.4/library.html
CC-MAIN-2020-10
en
refinedweb
Java 9 Modular Development (Part 2) Java 9 Modular Development (Part 2) Get a look at how to develop, package, and run modules in Java 9, including how the Jlink tool works and an overview of JMOD files. Join the DZone community and get the full member experience.Join For Free In my previous post, we discussed modularity, module descriptors, and the details about module-info.java files. This article will help in developing a modular project step by step and packaging them as JARs and JMODs and also describes the steps needed to create runtime images by using jlink. We will be developing a small modular project that will print, "Hello Welcome to Java 9 Modularity" in the console. Project Structure Create the following structure: In the above structure, the src folder is used to create source files, the mods folder is used to place all the compiled class files, the libs folder is used to place the created JARs, and the jmods folder is for placing the packaged jmod files. Every module has a module-info.java file, which is the module descriptor file to define dependencies. (Refer to my previous post for details.) module com.gg.client{ } By default, the module's descriptor file is provided by the java.base module. Tha is why it is not mentioned in the code. In our example, the Client.java file is the placeholder for the logic and the contents are shown below. If we are going to use any other modules in this one, it should be mentioned in the com.gg.client module descriptor file. package com.gg.client; public class Client { public static void main(String[] args) { System.out.println("Hello Welcome to Java 9 Modularity"); } } Compiling the Source Code The project we have developed above is compiled using the Java compiler ( javac) command, as shown below. Also, save the compiled class files in the mods directory. The javac command is in the JDK_HOME\bin directory. The javac command is used to compile the project, -d is used to specify the directory to place the compiled class files, and --module-source-path is used to define the source file location. As we mentioned above, the java.base module is added by default as a dependency to all application modules. The below snapshot describes this by disassembling the class files via the javap command. We haven't mentioned the requires statement in our code, but it added the requires statement on its own. Running the Project The java command is used to run the client, where --module-path specifies the module's location and --module defines the module we have to run.Outcome of this is, it prints "Hello Welcome to Java 9 Modularity" Packaging the Module Code: JAR The jar command is used to create a JAR file, which is located in the JDK_HOME\bin directory. The '.' in the command is used to specify the current directory. Multi-Release JARs Java 9 introduced the multi-release JAR. Multi-release JARs are a single JAR containing the same release of a library for multiple JDKs. Example: The JAR contains the class files and the MANIFEST.MF file. The multi-release JAR has the version-specific class files the under META-INF directory. The classes specific to the JDK 9 will be in the /META-INF/versions/9 directory. The environment using a multi-release JAR will first check its version and use the classes that are all placed in that version. If the classes for that version are not available, then classes in the root directory will be used. The main advantage of a multi-release JAR is used to take advantages of the newer releases. The below snapshot describes the command for creating a multi-release JAR file from the classes which we have compiled in the previous steps. In addition to that, we have placed one Java 8 class file (Client8.class) in the builder8 directory to explain the multi-release JAR. The --create command is used to create the JAR while the --verbose command is added to check the operations happening behind the screen. --file specifies the name of the JAR file and --module-version specifies the version of the module. We can add the version to the module at the time of creation and not in the module descriptor file. --list is used to list all files in the JAR. The following snapshot is for running the JAR file. Creating jmod Files From Modules Java 9 uses the jmod tool to package all the platform modules. So, in this section, we are packaging our modules as jmod files using the jmod tool.The jmod tool is available in the JDK_HOME\bin directory. The create sub-command is used to create the jmod file. We can also set the --module-version to set the version that will be recorded in the module-info.class file. In the above command, the 'create' option is used specify the operation, --class-path is used to specify the JAR file location and then followed by the name of the jmod file with the extension as jmod. The gg.client.jmod file is created in the jmods directory of the project, and the jmod describe command is used to describe the jmod file. We can use the same command to describe the platform modules. The other options are extracting the jmod file and listing its contents of. The following snapshot will show an example. Jlink The Jlink tool is used to create custom, platform-specific runtime images, where the runtime images contain the specified application module and the required platform modules. The good thing about Jlink is needing to have the complete JRE. JDK 9 ships with the Jlink tool, which is available in the JDK_HOME\bin directory. But do we need to have all the classes (like security and logging) to print "Hello world"? The answer is no, but we used to load all the classes through Java 7. In Java 8, Oracle developers made an attempt to come up with the concept of compact modules (compact profiles). But even with compact profiles, we still have some unwanted classes. Sure, it was better than before, but in Java 9, Jlink is used to run only the required modules. Jlink's main intention is to avoid shipping everything and, also, to run on very small devices with little memory. By using Jlink, we can get our own very small JRE. Jlink also has a list of plugins that will help optimize our solutions. The details for each plugin and options in the Jlink tool will be covered in upcoming articles. To create the Jlink portable java command with simple options, the following general syntax will be used: jlink --module-path <modulepath> --add-modules <modules> --output <path> Here, module-path gives the module path locations, which will be separated by : in Linux and ; in Windows. Finally, --add-modules specifies the modules that need to be added, and --output specifies the output path. On executing the above command, we will be able to get the output in the specified path. The output package is shown below: If we look in bin, we will get our own little Java world. Running the Program Now the portable java command will be used to run our program. Listing the modules will display only the required modules. Note: Configuring the services and hashing the modules will be described in upcoming articles. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/java-9-modular-development-part-2?fromrel=true
CC-MAIN-2020-10
en
refinedweb
I have tried different combinations to extract the country names from a column and create a new column with solely the countries. I can do it for selected rows i.e. df.address[9998] but not for the whole column. import pycountry Cntr = [] for country in pycountry.countries: for country.name in df.address: Cntr.append(country.name) Any ideas what is going wrong's on-topic for Stack Overflow I'm trying to get the focus method (touch, mouse or keyboard) on links setting a data-attribute I have two divs and two buttons: I am following this Invisible ReCaptcha doc on my php website:
https://cmsdk.com/python/extract-country-name-from-text-in-column-to-create-another-column.html
CC-MAIN-2020-10
en
refinedweb
dfuse 0.3.1 A D binding for libfuse To use this package, run the following command in your project's root directory: dfuse dfuse dfuse/fuse.d for implementation specific details. To mount a filesystem use a Fuse object and call mount: import dfuse dfuse dfuse,. - 0.3.1 released 11 months ago - dlang-community/dfuse - github.com/dlang-community/dfuse - BSL-1.0 - Authors: - - Dependencies: - none - Versions: - Show all 3 versions - Download Stats: 0 downloads today 0 downloads this week 0 downloads this month 0 downloads total - Score: - 0.6 - Short URL: - dfuse.dub.pm
http://code-mirror3.dlang.io/packages/dfuse
CC-MAIN-2020-10
en
refinedweb
script interface for a projector component. The Projector can be used to project any material onto the scene - just like a real world projector. The properties exposed by this class are an exact match for the values in the Projector's inspector. It can be used to implement blob or projected shadows. You could also project an animated texture or a render texture that films another part of the scene. The projector will render all objects in its view frustum with the provided material. There is no shortcut property in GameObject or Component to access the Projector, so you must use GetComponent to do it: function Start() { // Get the projector var proj : Projector = GetComponent (Projector); // Use it proj.nearClipPlane = 0.5; } using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { void Start() { Projector proj = GetComponent<Projector>(); proj.nearClipPlane = 0.5F; } } See Also: projector component.
https://docs.unity3d.com/530/Documentation/ScriptReference/Projector.html
CC-MAIN-2020-10
en
refinedweb
Okay, this may be answered by me not understanding something, but I truly feel this is a PyCharm issue unless explained otherwise. Say you have this setup: Layout Project |---A |--- SecondProgram.py |--- testfile.txt |---B |--- MainProgram.py Contents of testfile.txt This is a test. Contents of SecondProgram.py import os cwd = os.getcwd() print(cwd) filelocation = cwd + '/testfile.txt' with open(filelocation, 'r') as file: print(file.read()) Contents of MainProgram.py import os cwd = os.getcwd() print(cwd) from Project.A import SecondProgram #(ignore the PEP-8 rule breaking for not having this on top for now) First we run SecondProgram.py, and unsurprisingly, we get the text from the testfile printed. Now if I run the MainProgram.py, a FileNotFoundError is raised as the filelocation variable in the second program is "/.../Project/B" rather than "/.../ProjectA" as that's where the file was imported. This makes sense, all is normal. But, if I *move* MainProgram.py to the A folder/directory from the B folder/directory, and run MainProgram.py again, then even though all three files are now in the same directory (so MainProgram should be able to access the testfile as the directory name from os.getcwd() is the same now, it still raises a FileNotFoundError: FileNotFoundError: [Errno 2] No such file or directory: '/home/.../Program/B/testfile.txt' Even the os.getcwd() prints out "/home/.../Program/B" rather than "/home/.../Program/A". Why is this the case? The file now exists in the A folder, but os.getcwd() still locates it in the B folder? This becomes even more interesting. If you create a file in B called example1.py with just the import line: from Project.A import SecondProgram And you run it, it of course complains saying "FileNotFoundError: [Errno 2] No such file or directory: '/home/..../Project/B/testfile.txt'. If you move it to folder A, it still gives the same error, meaning the file location is not updated to the new folder. But, if we create another file, called example2.py, located again in the B folder, with the exact same import line (so everything is identical as example1.py), and we don't run it at all, but rather immediately move it to the A folder, then run it for the first time in A (remember, it was created in B), it works fine, outputting the text file. If we then move it back to the B folder, and run it a second time, it again compiles, with os.getcwd() reading that it is in the A file, and reading the textfile, even though it's in a very different directory. Another level: If you open an active session, and try to run example1.py there (remember, this is the file that was run in B, moved to A (where it currently is), and would not read the testfile anymore as the os.getcwd() still thought it was stored in the B, even though it was moved), it will say: "ModuleNotFoundError: No module named 'Project'", and/or "ModuleNotFoundError: No module named 'A'" if both those are removed so all the example1.py file contains is import SecondProgram and then compile it in an active session again, then the os.getcwd() finds the correct directory, and the testfile is read, and the print cwd line gives "/home/.../Project/A" in the active session. Even though in Pycharm, the exact same FilleNotFoundError is given as it thinks it's in the B folder (print(cwd) returns "/home/.../Project/B"). Interestingly enough, when you edit the example1.py file to have just "import SecondProgram", or if you leave it as "from Project.A import SecondProgram", no issue with importing the module name is raised by PyCharm, but in an active session, it must be "import SecondProgram" if it is in the A folder. I may not have explained this well, but with a bit of setup, you can play with this yourself and see how weird it is. From what I gather, when a python file is compiled, its directory is recorded is some metadata. If that file is moved to a new location, that metadata is not updated, and this causes very bug prone issues when trying to work with any external files/modules. I found this by having 2 folders with two programs, just like this example. I decided to merge them by moving the MainProgram over to the "A" folder, but now the directory/metadata for MainProgram is not updated. Interestingly, if I ctrl+a and ctrl+v the whole file into a new file (say MainProgramNew.py) in the same new location (folder A), then run it, no issues, os.getcwd works correctly, etc. as expected. But the existing file that was moved to folder "A" still can't read the file as it's trying to parse toe original location rather than the new one. Clearly os.getcwd() is working as intended if in an active session it correctly finds the directory of the moved folder. Why does it not work in PyCharm? How you move these files (dragging in Pycharm, Right-click -> refactor -> move, moving the file manually in the a terminal) has no effect. Restarting PyCharm does not fix the issue. There are several layers to this issue, and I hope I'm not going crazy with it. How do I move a file that imports modules that read/write textfiles, and have that moved file update it's location? Hello, Thank you for such a detailed description of the behavior, it was easy to reproduce and define a cause. The thing is that for every file in a project Run/Debug configuration is being created , and Working Directory is specified as a current location of a file. In the situation you have faced, after changing the file location, the working directory remains default. I have created a bug on YouTrack, please feel free to vote for it in order to increase a priority and monitor for a resolution:
https://intellij-support.jetbrains.com/hc/en-us/community/posts/360006384839-Moving-a-file-does-not-change-the-location-PyCharm-thinks-it-exists-in
CC-MAIN-2020-10
en
refinedweb
2015-05-26 08:11 AM This is my first rodeo with Clustered Data ONTAP after years of 7-Mode. We have a new FAS8040 2-node cluster with cDOT 8.3. I seperated out the disk ownership with SATA on one node and SAS on the other node. I see both nodes grabbed the 3 disks for ONTAP root volume and aggregate. I also see that System Manager says no Data Aggregates. So my question is... Does cDOT force you to have a dedicated root aggregate for the root volume? I know this has always been best practices and there has always been a debate on whether it should or should not be. I never configured things that way with 7-Mode and would always have everything together and never had an issue. Thanks! Solved! SEE THE SOLUTION 2015-05-26 10:40 AM Since you mentioned that this is your first go round with cDot, here are some key concepts that are different from 7-mode. Clustered DoT is like 7-mode at the edges - there are physical stuff like ports and disks and aggregates that are owned by a particular node (controller, head - pick your term). There are volumes and shares and LUNs that are exposed to the outside world. In the middle is the new creamy filling. A virtualization layer is always present that takes all the physical stuff in a cluster and presents it as a single pool of resources that can then be divied up into the logical Storage Virtual Machines (previously - vServers). SVMs for all intents are like the vFilers of 7-mode. In cDot 8.3 you can have IPspaces, or not. You overload virtual network interfaces and IP addresses onto the physical network ports. New in cDot compared to 7-mode you can also overload virtual FVC WWNs onto the physical FC ports, something a vFiler couldn't do. For all intents, think in terms of vFiler. Remember in 7-mode that when you used "MultiStor" to add vFiler capability, there was always a default "vfiler0" which represented the actual node itself. Aggregates, disks, and ports were controlled through vFiler0 as the owner of physical resources. So the big switch in cDot is that you're always in "MultiStor" mode and that "vFiler0" is reserved for control use only. You can't create user volumes and define user access to vFiler0. Instead you have to create one or more user "vFilers" where logical stuff like volumes and LUNs and shares and all that get created. More implications of this design. Each node needs a root volume from which to start operations. Remember in 7-mode that the root volume held OS images, log files, basic configuration information, etc. The node root-volume in cDot is pretty much the same, except it cannot hold any user data at all. The node root volume needs a place to live, hence the node root aggregates. Each node neads one, just like in a 7-mode HA pair. Yes, the only contents of the node root aggregates are the node root volumes. And they are aggregates, so at least 3 disks. Suggestion for a heavily used system is actually to use 5 disks to avoid certain odd IOPs dependencies on lower class disk. The node root volume will get messages and logs and all kinds of internal operational data dumped to it. I have experienced, especially when using high capacity slower disks, that node performance can be constrained by the single data disk performance of a 3 disk root aggregate, so I have standardized on 5 for my root aggregates. Now, for my installation, 20 disks (4 node cluster) out of 1200 capacity disks isn't a big deal. A smaller cluster can certainly run jsut fine with 3 disks. Similarly, because I want all my high speed disks available for user data, I purposely but some capacity disks on all nodes, even it they just server the root aggregate needs. Again, my installation allows for it easily, your setup may not. So yes - root aggregate is one per node and you don't get to use it for anything else. Not a best practice question - it's a design requirement for cDot. About the load sharing mirrors. Here is where we jump from physical to logical. After you have your basic cluster functional, you need to create SVMs (again, think vFilers) as a place for user data to live. Just like a 7-mode vFiler, an SVM has a root volume. Now this root volume is typically small and contains only specifics to that SVM. It is a volume, and thus needs an aggregate to live in. So you'll create user aggregates of whatever size and capacity meets your needs, and then create a root volume as you create your SVM. For instance, let's say your create SVM "svm01". You might then call the root volume "svm01_root" and you specify what user aggregate will hold it. For file sharing, cDot introduces the concept of a namespace. Instead of specifying a CIFS share or an NFS export with a path like "/vol/volume-name", you instead create a logical "root" mount point and then "mount" all your data volumes into the virtual file space. A typical setup would be to set the vserver root volume as the base "/" path. Then, you can create junction-paths for each of the undelrying volumes, for instance create volume "svm01-data01" and mount it under "/". You then could create a share by referencing the path as "/svm01-data01". Unlike 7-mode, junctions points can be used to cobble together a bunch of volumes in any namespace format you desire - you could create quite the tree of mount locations. It is meant to be like the "actual-path" option of 7-mode export shares by creating a virtual tree if you will, but it doesn't exactly line up with that funcitonality in all use cases. Of course, if you are creating LUNs, throw the namespace concept out the window. LUNs are always referenced via a path that starts with "/vol/" in the traditional format and the volumes that contain LUNs don't need a junction-path. Unless of course if you want to also put a share on the same volume that contains a LUN...then to setup the share you need a namespace and junction-paths. Confusing? Yes, and it is something I wish NetApp would unify at some point, as there are at least four different ways to refer to a path based location in cDot depending on context, and they are not interchangeable. That and a number of commands which have parameters with the same meaning but different parameter names are my two lingering issues with the general operation of cDot. Sorry - I digress. So - why the big deal on namespaces and how does that apply to load sharing mirrors? Here's the thing. Let's assume you have created svm01 as above. And you give it one logical IP address on one logical network interface. All well and good. That logical address lives on only one physical port at a time, which could be on either node. Obviously you want to setup a failover mechanism so that the logical network interface can failover between nodes and function if needed. You share some data from the SVM via CIFS or NFS. A client system will contact the IP address for the SVM and that contact will come through node 1 for instance if a port on node 1 currently holds the logical interface. But, for a file share, all paths need to work through the root of the namespace to resolve the target, and typically the root of the name space is the SVM's root volume. If the root volume resides on an aggregate owned by node 2, all accesses to any share in the SVM, whether residing on a volume/aggregate in node 1 or 2, must traverse the cluster backplane to access the namespace information on the SVM root on node 2 and then bounce to whatever node the target volume lives on. So, let's say we add a 2nd netowrk interface for SVM01, this time by default assigned to a port that lives on node 2. By DNS round robin we now get half the accesses going first thorugh node 1 and half through node 2. Better, but not perfect. And there remains the fact that the SVM's root volume living on node 2 still becomes a performance choke point if the load gets heavy enough. What we really want is for the SVM's root volume to kinda "live" on both nodes, so at least that level of back and forth is removed. And that is where load sharing mirrors come in. A load sharing mirror is a special kind of snapmirror relationship where an SVM's root volume is mirrored to read only copies. Because most accesses through the SVM's root volume are read only, it works. You have the master SVM root, as above called "svm01_root". You can create replicas, for instance "svm01_m1" and "svm01_m2", each of which exists on an aggregate typically owned by different nodes (m1 for mirror on node 1, m2 for mirror on node 2). Once you initialize the snapmirror load sharing relationship, read level accesses are automatically redirected to the mirror on the node where the request came in. You will need a schedule to keep the mirrors up to date, and there are some other small caveats. Is this absolutely required? No, it isn't. The load/performance factor achieved through use of load-sharing mirrors is very dependent on the total load to your SVMs. A heavily used SVM will certainly benefit. It can sometimes be a good thing, other times it can be a pain. The load sharing Snapmirror works just like a traditional snapmirror where you have a primary that can be read/write and a secondary shared as read only. The extras are that no snapmirror license is needed to do load sharing, load sharing mirrors can only be created within a single cluster, and any read access to the primary is automatically directed to one of the mirrors. Yes - you should also create a mirror on the same node where the original exists, otherwise all access will get redirected to a non-local mirror, which defeats the purpose. You will also want to review the CIFS access redirection mechanisms whereby when SVMs have multiple network interfaces across multiple nodes a redirect request can be sent back to a client so that subsequent accesses to data are directed to the node that owns the volume without needing to traverse the backplane. Definitely review that first before putting a volume/share structure in place because you can defeat that redirection if you aren't careful with your share hierarchy. Hope this helps with both some general background as you get up to speed on cDot and some specifics in response to your topic points. Bob Greenwald Lead Storage Engineer | Huron Legal Huron Consulting Group NCDA | NCIE-SAN Clustered Data OnTap 2015-06-02 12:41 PM Thank you for the reply and great explanation. I have the cluster setup and have sucked in A LOT of information over the last couple of weeks. As for the load-sharing mirrors. Not sure if we really need that right now. It's a small cluster and is 95% NFS presentation for VMs with VMware and Hyper-V. I'm sure this is something we could implement down the road if we begin to see a performance issue and load-sharing mirrors would help. 2015-07-10 01:39 AM - edited 2015-07-10 01:54 AM I think he also meant that LS mirrors provide high availability of the NFS SVM root namespace, and not just for load distribution, but I might have misinterpreted. With NFS, if I remember correctly, if the SVM root junction becomes inaccessible at any time (ie: if the SVM rootvol goes down), access to all other NFS junctions in that SVM are lost until access to the SVM rootvol is restored. DP aggregates with LS mirrors prevents this from becoming an issue. Here's an example of configuration cmds you'd need to do to put this in place: #load-sharing mirror for rootvol vol create -vserver [svm1_nfs] -volume [svm1_rootvol_m1] -aggregate [sas_aggr1] -size 1g -type DP vol create -vserver [svm1_nfs] -volume [svm1_rootvol_m2] -aggregate [sas_aggr2] -size 1g -type DP snapmirror create -source-path [//svm1_nfs/svm1_rootvol] -destination-path [//svm1_nfs/svm1_rootvol_m1] -type LS -schedule 15min snapmirror create -source-path [//svm1_nfs/svm1_rootvol] -destination-path [//svm1_nfs/svm1_rootvol_m2] -type LS -schedule 15min snapmirror initialize-ls-set [//svm1_nfs/svm1_rootvol] I apologise if my syntax is wrong. 2017-05-12 10:22 AM @bobshouseofcards - You state: "...read level accesses are automatically redirected to the mirror on the node where the request came in." If a request for read-level access comes in on the same node, shouldn't it come directly in instead of using an LS Mirror? Please explain why LS Mirror on the same nodes does or doesn't create double-work. "Yes - you should also create a mirror on the same node where the original exists, otherwise all access will get redirected to a non-local mirror, which defeats the purpose." 2017-05-13 10:08 AM Good question... The answer is pretty basic. If you are using Load Sharing mirrors, then when access to the root volume of an SVM is needed, the Load Sharing mirrors trump the actual root volume. It's either all or none. So if load sharing mirrors are in use, and an access comes in that reads from the root volume in some fashion, that access has to come from a load sharing mirror copy of the root volume. That way all accesses are consistent from an ONTAP point of view. The explanation goes to what LSMs are actually are and how they are implemented. At the heart of it, an LSM copy is nothing more than a SnapMirror destination. Recall that a SnapMirror destination, while the relationship is active, can be accessed in a read only fashion. So that's what LSMs are - read only copies. Now add in the concept that the LSM must be in sync with both the root volume and each other to be functional and consistent across all nodes. Thus, if direct access were allowed to the real SVM root volume, that might now be out of sync with all the LSM copies, necessitating a SnapMirror update on the LSMs to bring them all back into sync. That's why if LSMs are present, direct access is read only through the LSM copies to ensure there is a consistent presentation from all the copies, even on the node where the real SVM root resides. You can access the real root SVM volume for write access if needed through an alternate mount point. For NFS, the "/.admin" path is the "real" root volume. For CIFS, you can create a separate share that references the "/.admin" path on the SVM. You should not do this unless you need to make changes to the SVM root (a new folder, a permission change to the root volume, etc.), and then of course immediately update the LSM copies to be sure everyone sees the same change. In 8.3.2P9 and later (not sure when in the ONTAP 9.x line) there is a feature change available. When the SVM root volume has LSMs and the SVM root volume is changed, a SnapMirror update is now automatically triggered from the cluster rather than having to do it manually or by schedule. The automatic update relieves overhead of having to regularly run updates on a schedule or manually in workflows. Like every feature LSMs are a feature that should be used at the appropriate time for the right purpose, rather than "always" for every CIFS/NFS configuration. For clusters with significant CIFS/NFS workloads and many files, they can improve performance. LSMs serve no purpose and should not be used for SVMs that exclusively serve block level data. Hope this helps you. Bob Greenwald Senior Systems Engineer | cStor NCIE SAN ONTAP, Data Protection Kudos and accepted solutions are always accepted.
http://community.netapp.com/t5/Data-ONTAP-Discussions/Does-root-volume-aggregate-need-to-be-separate-from-data-aggregates-in-cDOT/td-p/105463
CC-MAIN-2018-17
en
refinedweb
Once you have created Topics in the console, you can send messages via the console or SDK/API. Sending messages via the console is used mainly for quickly verifying the availability of the Topics, while in a production scenario, it is suggested that you use SDK/API for message sending. Send messages via the console To send messages via the console, follow the steps below: Click Producers in the left-side navigation pane of the MQ console. Locate the Topic you just created in the list, and in the Actions column, click Send. In the Send Message dialog box, enter the message content, and click OK. The console will return a confirmation message and the corresponding message ID once the message is sent successfully. Send messages via SDK/API In a production scenario, it is suggested that you use SDK/API to send messages. This section demonstrates to you how to do it by using the Java SDK under the TCP protocol. If you want to use other protocols or development languages, see the related help documentation. Send messages via TCP Java SDK Introduce the dependency through one of the following two methods: Introduce the dependency through Maven: <dependency> <groupId>com.aliyun.openservices</groupId> <artifactId>ons-client</artifactId> <version>1.7.0.Final</version> </dependency> Download a dependency JAR package: Download Link Note: Please refer to TCP access instruction for a TCP access point domain name. Set related parameters and run sample code according to the following instructions: import com.aliyun.openservices.ons.api.Message; import com.aliyun.openservices.ons.api.Producer; import com.aliyun.openservices.ons.api.SendResult; import com.aliyun.openservices.ons.api.ONSFactory; import com.aliyun.openservices.ons.api.PropertyKeyConst; import java.util.Properties; public class ProducerTest { public static void main(String[] args) { Properties properties = new Properties(); // The producer ID you have created on the MQ console properties.put(PropertyKeyConst.Producer access of public cloud is listed here) properties.put(PropertyKeyConst.ONSAddr, ""); Producer producer = ONSFactory.createProducer(properties); // A start method must be called once before message sending to start a producer producer.start(); //Send messages recurrently while(true){ Message msg = new Message( // // The Topic which has been created on the console, i.e., a Topic name of the message "TopicTestMQ", // Message Tag, // It can be understood as a Tag in Gmail used for reclassifying the message so as to facilitate the consumer to specify a filter condition to implement a filter in the MQ broker "TagA", // Message Body // It may be any data in binary form, and MQ will make no intervention // Compatible serialization and deserialization methods that need to be negotiated by the producer and the consumer "Hello MQ".getBytes()); // Set a critical service property representing the message, and try to keep the critical service property globally unique, such that you can query and resend the message via the MQ console when you cannot receive the message normally // Note: No setting will affect the normal sending and receiving of messages msg.setKey("ORDERID_100"); // Message sending will succeed as long as an exception is not thrown // Print the message ID to facilitate querying the message sending status SendResult sendResult = producer.send(msg); System.out.println("Send Message success. Message ID is: " + sendResult.getMessageId()); } // The producer object can be destroyed before exiting the application // Note: it's OK if the producer object has not been destroyed producer.shutdown(); } } Check whether a message is successfully sent Once a message is sent, you can check its sending status in the console by following the steps below: On the left-side navigation pane of the MQ console, choose Message Query. On the Message Query page, select the By Message ID tab. In the search box, enter the message ID returned after the message is sent, and click Search to query the sending status of the message. “Storage time” indicates the time period for which the MQ server stored the message. when the MQ broker stores the message. If the message can be queried out, it means that the message has been successfully sent to the server. Note: This section demonstrates the scenario where MQ is first used, when the consumer has not been started yet. Therefore, the message status shows that there is not any consumption data available. To start the consumer and subscribe to messages, see Step 4: Subscribe to messages. For more information on the message status, see Message Query.
https://www.alibabacloud.com/help/doc-detail/29537.html
CC-MAIN-2018-17
en
refinedweb
Search Create 45 terms bbarocho ACCT200 Test 4 Chapters 20 and 21 STUDY PLAY Process operations Processing of products in a continuous (sequential) flow of steps; also called process manufacturing or process production. Job Order Cost Accounting System Cost accounting system to determine the cost of producing each job or job lot. Process Cost Accounting System System of assigning direct materials, direct labor, and overhead to specific processes; total costs associated with each process are then divided by the number of units passing through that process to determine the cost per equivalent unit. Materials Consumption Report Document that summarizes the materials a department uses during a reporting period; replaces materials requisitions. Equivalent Units of Production (or EUP) Number of units that would be completed if all effort during a period had been applied to units that were started and finished. Department Activity accounting steps (1) physical flow, (2) equivalent units, (3) cost per equivalent unit, and (4) cost assignment and reconciliation. Determine physical flow Report that reconciles (1) the physical units started in a period with (2) the physical units completed in that period. Compute equivalent units The second step is to compute equivalent units of production for direct materials, direct labor, and factory overhead for April. Overhead is applied using direct labor as the allocation base for GenX. This also implies that equivalent units are the same for both labor and overhead. Compute cost per equivalent unit Equivalent units of production for each product (from step 2) is used to compute the average cost per equivalent unit. Under the weighted-average method, the computation of EUP does not separate the units in beginning inventory from those started this period; similarly, this method combines the costs of beginning goods in process inventory with the costs incurred in the current period. Assign and reconcile costs The EUP from step 2 and the cost per EUP from step 3 are used in step 4 to assign costs to (a) units that production completed and transferred to finished goods and (b) units that remain in process. Process Cost Summary Report of costs charged to a department, its equivalent units of production achieved, and the costs assigned to its output. 3 Reasons for Process Cost Summary (1) help department managers control and monitor their departments, (2) help factory managers evaluate department managers' performances, and (3) provide cost information for financial statements. Cost of Goods Manufactured Total manufacturing costs (direct materials, direct labor, and factory overhead) for the period plus beginning goods in process less ending goods in process; also called net cost of goods manufactured and cost of goods completed.. FIFO method process costing assigns costs to units assuming a first-in, first-out flow of product. Accounting for a department's activity for a period includes four steps: (1) determine physical flow, (2) compute equivalent units, (3) compute cost per equivalent unit, and (4) determine cost assignment and reconciliation. Determine Physical Flow of Units A physical flow reconciliation is a report that reconciles (1) the physical units started in a period with (2) the physical units completed in that period. Compute Equivalent Units of Production—FIFO The FIFO method accounts for cost flow in a sequential manner—earliest costs are the first to flow out. (This is different from the weighted-average method, which combines prior period costs—those in beginning Goods in Process Inventory—with costs incurred in the current period.) Three distinct groups of units must be considered in determining the equivalent units of production under the FIFO method: (a) units in beginning Goods in Process Inventory that were completed this period, (b) units started and completed this period, and (c) units in ending Goods in Process Inventory. Compute Cost per Equivalent Unit—FIFO To compute cost per equivalent unit, we take the product costs (for each of direct materials, direct labor, and factory overhead) added in April and divide by the equivalent units of production from step 2 Assign and Reconcile Costs The equivalent units determined in step 2 and the cost per equivalent unit computed in step 3 are both used to assign costs (1) to units that the production department completed and transferred to finished goods and (2) to units that remain in process at period-end Two methods of costs allocations (1) traditional two-state cost allocation and (2) activity-based cost allocation. Two-Stage Cost Allocation An organization incurs overhead costs in many activities. These activities can be identified with various departments, which can be broadly classified as either operating or service departments. 2 Steps cost allocation (1) service department costs to operating departments and (2) operating department costs, including those assigned from service departments, to the organization's output. Activity-based costing (ABC) Cost allocation method that focuses on activities performed; traces costs to activities and then assigns them to cost objects. Activity Cost Driver Variable that causes an activity's cost to go up or down; a causal factor Activity Cost Pool Temporary account that accumulates costs a company incurs to support an activity. departmental accounting system Accounting system that provides information useful in evaluating the profitability or cost effectiveness of a department. responsibility accounting system System that provides information that management can use to evaluate the performance of a department's manager. profit center Business unit that incurs costs and generates revenues. cost center Department that incurs costs but generates no revenues; common example is the accounting or legal department. Direct expenses Expenses traced to a specific department (object) that are incurred for the sole benefit of that department. Indirect expenses Expenses incurred for the joint benefit of more than one department (or cost object). departmental contribution to overhead Amount by which a department's revenues exceed its direct expenses., Investment center return on total assets Center net income divided by average total assets for the center., Investment center residual income The net income an investment center earns above a target return on average invested assets., hurdle rate Minimum acceptable rate of return (set by management) for an investment. balanced scorecard Financial statement that lists types and dollar amounts of assets, liabilities, and equity at a specific date. controllable costs Costs that a manager has the power to control or at least strongly influence.. Uncontrollable costs Costs that a manager does not have the power to determine or strongly influence. Responsibility account budget Report of expected costs and expenses under a manager's control.. responsibility accounting performance report Responsibility report that compares actual costs and expenses for a department with budgeted amounts. Joint costs Cost incurred to produce or purchase two or more products at the same time Unit Cost - No beginning/ending inventory Total cost assigned to process (direct materials, direct labor, and overhead) / Total number of units started/finished in the period Return on investment Investment center net income / Investment center average invested assets Residual income Investment center net income - Target investment center net income
https://quizlet.com/7784908/acct200-test-4-flash-cards/
CC-MAIN-2018-17
en
refinedweb
Hi I'm just starting out with OOPin JAVA and thought i was getting the hang of it but i've been given an assignment and it's got me stumped. here is the code i have been given public class FrogCalculator { private Frog operand1Frog; private Frog operand2Frog; private Frog unitsFrog; private Frog tensFrog; private OUColour colour; /** * Constructor for objects of class FrogCalculator */ public FrogCalculator() { Super(); } /* instance methods */ /** * Returns the receiver's operand1Frog */ public Frog getOperand1Frog() { return operand1Frog; } /** * Sets the receiver's operand1Frog */ public void setOperand1Frog(Frog operand1Frog) { this.operand1Frog = operand1Frog; } /** * Returns the receiver's operand2Frog */ public Frog getOperand2Frog() { return operand2Frog; } /** * Sets the receiver's operand2Frog */ public void setOperand2Frog(Frog operand2Frog) { this.operand2Frog = operand2Frog; } /** * Returns the receiver's unitsFrog */ public Frog getUnitsFrog() { return unitsFrog; } /** * Sets the receiver's unitsFrog */ public void setUnitsFrog(Frog unitsFrog) { this.unitsFrog = unitsFrog; } /** * Returns the receiver's tensFrog */ public Frog getTensFrog() { return tensFrog; } /** * Sets the receiver's tensFrog */ public void setTensFrog(Frog tensFrog) { this.tensFrog = tensFrog; } } here is what i am being asked to do; Write code in the FrogCalculator class to modify the signature of the default constructor for FrogCalculator such that it takes four arguments of type Frog. The positions of the two Frog instances referenced by the first and second arguments represent the first and second operands, respectively. The positions of the two Frog instances referenced by the third and fourth arguments represent the result of the calculation: the third Frog instance being the ‘tens’ frog and the fourth Frog instance being the ‘units’ frog. The constructor should assign the arguments directly to the four corresponding private instance variables and then set the frog referenced by tensFrog to brown and the frog referenced by unitsFrog to yellow. But i just don't know how?? I know that FrogCalculator class willinherit the positon and colour of the superclass frog, but i'm at a loss for what code i have to add. can anyone help me??
https://www.daniweb.com/programming/software-development/threads/195549/newbie-oop-in-java-help-pleeeaase
CC-MAIN-2018-17
en
refinedweb
addrdsins.3alc man page addrdsins — adds an instance to a figure Synopsis #include "rdsnnn.h" rdsins_list ∗addrdsins( Figure, Model, Name, Sym, X, Y ) rdsfig_list ∗Figure; char ∗Model; char ∗Name; char Sym; long X; long Y; Parameter - Figure figure which contains the instance. - Model Name of the model of the instance. - Name Name of the instance in the figure at which it belongs. - Sym Symmetry applied to the instance. - possible values : - RDS_NOSYM no symmetry. - RDS_ROT_P 90 degrees rotation counter clockwise. - RDS_SYMXY symmetry with regard to a horizontal and vertical axis. - RDS_ROT_M 90 degrees rotation clockwise. - RDS_SYM_X symmetry with regard to a vertical axis. - RDS_SY_RM symmetry with regard to a vertical axis and 90 degrees clockwise. - RDS_SYM_Y symmetry with regard to a horizontal axis. - RDS_SY_RP symmetry with regard to a horizontal axis and 90 degrees rotation counter clockwise. - X,Y position of the lower left corner of the instance in the figure after symmetry. Description The addrdsins function adds an instance to the head of instances's list in the figure described in function parameter. Some fields of rdsins_list structure are modified as follows : The field FIGNAME is set to Model The field INSNAME is set to Name The field X is set to X The field Y is set to Y The field TRANSF is set to Sym The field SIZE is set to Figure->SIZE Return Value addrdsins returns a pointer to the newly created instance which is head of instances's list of the figure.; main() { rdsfig_list ∗RdsFigure; rdsins_list ∗Instance; mbkenv(); rdsenv(); loadrdsparam(); RdsFigure = addrdsfig ("core",sizeof ( UserStruct ) ); Instance = addrdsins (RdsFigure,"na2_y","and2",RDS_NOSYM,8,6); printf("(RdsFigure->INSTANCE)->NAME = %s\n", (RdsFigure->INSTANCE)->NAME); /∗ Instance is head of instance list of the figure ∗/ printf("Instance->NAME = %s\n", Instance->NAME); } See Also librds, delrdsins, viewrdsins
https://www.mankier.com/3/addrdsins.3alc
CC-MAIN-2018-17
en
refinedweb
A Reproducible Build Environment with Jenkins Robert Fach, TechniSat Digital GmbH In this talk Robert introduced what build reproducibility is and explained how TechniSat has gone about achieving it. TechniSat has a rare and unique constraint where the customer can dictate what modules a feature can impact, but a release contains all modules and they are all rebuilt and tested so you need to ensure unchanged modules are not impacted. You need to identify and track everything that has influence on the input. - Source code, toolchains and build system validation, and everything else…. The benefit of a reproducible build environment gives a new level of trust to the customer – that you are tracking things correctly to know what has gone into each build. Then you can support them in the future (so you can make a bug fix without bringing in any extra variability into the build)! It can also be used to find issues in the builds (random GUIDs created and embedded for the build can be detected as what should be binary identical and what shouldn’t be). Why is it hard? Source code tracking: it is an easy and “bread and butter” method of managing sources (tags…), but what about if the source control system changes over time? (you need to make sure that the SCM stays compatible over time). OS tracking: File system – large code base with 1000’s of files – some File systems may not perform well, but changing file systems can change file ordering which can affect the build. Locale issues can affect the build as well (marcos based on __DATE__, __TIME__ etc..) Compiler: Picking up a new version of the compiler for bug fixes may bring in new libraries or optimizations (branch prediction) that could change the binary. You need to know about anything based on heuristics in the compiler and the switches that control the features so you can disable them, since after the fact it can be too late! You can create a seed for any random generations (namespace mangling -frandom-seed) Dealing with complexity & scale. As you scale out and distribute the build, it needs to be tracked and controlled even more. This adds a requirement for a “release manager,” a system that controls what, how and where (release specification). This system maps the requirements onto the Jenkins jobs, which use a special plugin to control the job configuration (to pass variables to source control, scripts etc.). The Jenkins job maps to a Jenkins slave. For each release, the release manager creates a new release environment. This includes a brand new Jenkins master configured with the slaves that are required for the build. The slaves are mapped onto infrastructure. The infrastructure is currently managed SQA Systems, artefact repository, KVM cluster (with Openstack coming soon) and individual KVM hosts. After the release the infrastructure is archived (os tools Jenkins etc…). Also record the salt commands used). (provides one level of way to reproduce). The specification provides another way to recreate the environment (but it is not always reliable as something may have been missed). Performance Lessons learned (a little bit random at the end of the talk). - Use tmpfs inside VMs for fast random I/O file systems. - Try to use nfs read-only cache to save network bandwidth - Put Jenkins Workspace in a dedicated lvm in the host rather than network We hope you enjoyed JUC Europe! Here is the abstract for Robert’s talk, “A Reproducible Build Environment with Jenkins.” Here are the slides for his talk.
https://www.cloudbees.com/blog/juc-session-blog-series-robert-fach-juc-europe
CC-MAIN-2018-17
en
refinedweb
[SOLVED] How to use connect in a shared library? - Mohammadsm Hi I've written an application, now I copied the functions in a shared library to make a dll. Now I can see a weird error: #include "library1.h" Library1::Library1() { } void Library1::Inint_Lib() { if(Scan_Port()) { Serial=new QSerialPort; QObject::connect(Serial,SIGNAL(readyRead()),this,SLOT(Capture_Received_Data())); } } It's working in the application with GUI, but this error occurs in the shared library project: Library1\library1.cpp:16: error: no matching function for call to 'QObject::connect(QSerialPort*&, const char*, Library1* const, const char*)' Library1\library1.cpp:16: candidates are: D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\qiodevice.h:47: In file included from D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore/qiodevice.h:47:0, D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\QIODevice:1: from D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore/QIODevice:1, Library1\library1.h:7: from ..\Library1\library1.h:7, Library1\library1.cpp:1: from ..\Library1\library1.cpp:1: D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\qobject.h:198: static QMetaObject::Connection QObject::connect(const QObject*, const char*, const QObject*, const char*, Qt::ConnectionType) D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\qobject.h:201: static QMetaObject::Connection QObject::connect(const QObject*, const QMetaMethod&, const QObject*, const QMetaMethod&, Qt::ConnectionType) D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\qobject.h:201: note: no known conversion for argument 2 from 'const char*' to 'const QMetaMethod&' D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\qobject.h:440: QMetaObject::Connection QObject::connect(const QObject*, const char*, const char*, Qt::ConnectionType) const D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\qobject.h:440: note: no known conversion for argument 3 from 'Library1* const' to 'const char*' D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\qobject.h:214: template<class Func1, class Func2> static QMetaObject::Connection QObject::connect(const typename QtPrivate::FunctionPointer<Func>::Object*, Func1, const typename QtPrivate::FunctionPointer<Func2>::Object*, Func2, Qt::ConnectionType) D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\qobject.h:214: note: template argument deduction/substitution failed: D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\qobject.h:*]': \Library1\library1.cpp:16: required from here D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\qobject.h:214: error: no type named 'Object' in 'struct QtPrivate::FunctionPointer<const char*>' D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\qobject.h:244: template<class Func1, class Func2> static typename QtPrivate::QEnableIf<((int)(QtPrivate::FunctionPointer<Func2>::ArgumentCount) >= 0), QMetaObject::Connection>::Type QObject::connect(const typename QtPrivate::FunctionPointer<Func>::Object*, Func1, Func2) . . . D:\Qt-5.1.0\5.1.0\mingw48_32\include\QtCore\qobject.h:267: template<class Func1, class Func2> static typename QtPrivate::QEnableIf<(QtPrivate::FunctionPointer<Func2>::ArgumentCount == (-1)), QMetaObject::Connection>::Type QObject::connect(const typename QtPrivate::FunctionPointer<Func>::Object*, Func1, Func2) Library1\library1.cpp:16: note: candidate expects 3 arguments, 4 provided How can I connect a signal to the library slot? Thank you all Is the slot declared as a slot? Does live in a class that inherits from QObject, and is there Q_OBJECT macro there in the header? @Mohammadsm I have added markdown tags for code sections of post. Please checkout the "markdown tags" at the end of thread. - Mohammadsm class LIBRARY1SHARED_EXPORT Library1 { Q_OBJECT public: Library1(); void Inint_Lib(); private slots: void Capture_Received_Data(); [edit: koahnig, code markers] - Mohammadsm @Mohammadsm You need to use three of ` at start and end of your code block.Check the end of the whole thread here. There is a short explanation. Some keyboards do not support ` as a key. In this case you can go to the end of the thread and copy the three characters required and paste them where needed. You need to use three of ` at start and end of your code block No need for that. You can just make a code block by having a separate paragraph that has a four-space intendation Your class does not inherit from QObject. Try this: class LIBRARY1SHARED_EXPORT Library1 : public QObject Then remeber to rebuild and relink the library, and your project that uses it. - SGaist Lifetime Qt Champion @sierdzio Indeed the back ticks are not mandatory, however it gives the forum engine a good hint of what is following - Mohammadsm @sierdzio Indeed the back ticks are not mandatory, however it gives the forum engine a good hint of what is following in the cheat sheet you can actually find: Blocks of code are either fenced by lines with three back-ticks ```, or are indented with four spaces. I recommend only using the fenced code blocks -- they're easier and only they support syntax highlighting. The question is what is the syntax highlighting now?
https://forum.qt.io/topic/55989/solved-how-to-use-connect-in-a-shared-library
CC-MAIN-2018-17
en
refinedweb
Cutting. Getting Started First, download the FMOD Ex Programmers API. You’ll want the latest Stable version. After the installation completes, open Visual Studio and create an empty Win32 console project. Right-click on the project node in Solution Explorer, select Properties, choose VC++ Directories in the left-hand pane and add the following directory paths: Include Directories: C:\Program Files (x86)\FMOD SoundSystem\FMOD Programmers API Windows\api\inc Library Directories: C:\Program Files (x86)\FMOD SoundSystem\FMOD Programmers API Windows\api\lib In the Linker -> Input section, add fmodex_vc.lib to the Additional Dependencies. Finally, when your application runs, you’ll need fmodex.dll to be in the same folder as your exe file, so to make things simple you can add a post-build event to copy this automatically from the FMOD API directory to your target folder when the build succeeds. Go to Build Events -> Post-build Event and set the Command Line as follows: copy /y “C:\Program Files (x86)\FMOD SoundSystem\FMOD Programmers API Windows\api\fmodex.dll” “$(OutputPath)” When you start writing your application, you’ll want to include the following headers: #include "fmod.hpp" #include "fmod_errors.h" #include Since most FMOD functions return an error code, it is also handy to have some kind of error checking function you can wrap around all the calls, like this: void FMODErrorCheck(FMOD_RESULT result) { if (result != FMOD_OK) { std::cout << "FMOD error! (" << result << ") " << FMOD_ErrorString(result) << std::endl; exit(-1); } } Initializing FMOD The following code is adapted from the ‘Getting Started with FMOD for Windows’ PDF file which is included with the API and can be found in your Windows Start Menu under FMOD Sound System. I have made some small tweaks and added further explanation below. First we’ll want to get a pointer to FMOD::System, which is the base interface from which all the other FMOD objects are created: FMOD::System *system; FMOD_RESULT result; unsigned int version; int numDrivers; FMOD_SPEAKERMODE speakerMode; FMOD_CAPS caps; char name[256]; // Create FMOD interface object result = FMOD::System_Create(&system); FMODErrorCheck(result); Then we want to check that the version of the DLL is the same as the libraries we compiled against: // Check version result = system->getVersion(&version); FMODErrorCheck(result); if (version < FMOD_VERSION) { std::cout << "Error! You are using an old version of FMOD " << version << ". This program requires " << FMOD_VERSION << std::endl; return 0; } Next, count the number of sound cards in the system, and if there are no sound cards present, disable sound output altogether: // Get number of sound cards result = system->getNumDrivers(&numDrivers); FMODErrorCheck(result); // No sound cards (disable sound) if (numDrivers == 0) { result = system->setOutput(FMOD_OUTPUTTYPE_NOSOUND); FMODErrorCheck(result); } If there is one sound card, get the speaker mode (stereo, 5.1, 7.1 etc.) that the user has selected in Control Panel, and set FMOD’s speaker output mode to match. The first parameter is the driver ID – where 0 is the first enumerated sound card (driver) and numDrivers - 1 is the last. The third parameter receives the default sound frequency; we have set this to zero here as we’re not interested in retrieving this data. //); If hardware acceleration is disabled in Control Panel, we need to make the software buffer larger than the default to help guard against skipping and stuttering. The first parameter specifies the number of samples in the buffer, and the second parameter specifies the number of buffers (which are used in a ring). Therefore the total number of samples to be used for software buffering is the product (multiplication) of the two numbers. // Increase buffer size if user has Acceleration slider set to off if (caps & FMOD_CAPS_HARDWARE_EMULATED) { result = system->setDSPBufferSize(1024, 10); FMODErrorCheck(result); } The following is a kludge for SigmaTel sound drivers. We first get the name of the first enumerated sound card driver by specifying zero as the first parameter to getDriverInfo (the 4th parameter receives the device GUID which we ignore here). If it contains the string ‘SigmaTel’, the output format is changed to PCM floating point, and all the other format settings are left as the sound card’s current settings. //); } } We have now done all the necessary pre-requisite legwork and we can now initialize the sound system: // Initialise FMOD result = system->init(100, FMOD_INIT_NORMAL, 0); The first parameter defines the number of virtual channels to use. This can essentially be any number: whenever you start to play a new sound or stream, FMOD will (by default) pick any available free channel. The number of actual hardware channels (voices) available is irrelevant as FMOD will downmix where needed to give the illusion of more channels playing than there actually are in the hardware. So just pick any number that is more than the total amount of sounds that will ever be playing simultaneously in your application. If you choose a number lower than this, channels will get re-used and already running sounds will get cut off and replaced by new ones if you try to start a sound when all channels are busy (the oldest used channel is re-used first). The second parameter gives initialization parameters and the third number specifies driver-specific information (the example given in the documentation is a filename when using the WAV-writer). The initialization parameter will usually be FMOD_INIT_NORMAL, but for example if you are developing for PlayStation 3, you might use a parameter like FMOD_INIT_PS3_PREFERDTS to prefer DTS output over Dolby Digital. If the speaker mode we selected earlier is for some strange reason invalid, init() will return FMOD_ERR_OUTPUT_CREATEBUFFER. In this case, we reset the speaker mode to a safe fallback option – namely stereo sound – and call init()); All of the code above should be included in every application you write which uses FMOD. With this boilerplate code out of the way, we can get to the business of making some noise! Playing sounds and songs There are two main ways to get audio into FMOD – createSound and createStream. createSound loads a sound file into memory in its entirety, and decompresses it if necessary, whereas createStream opens a file and just buffers it a piece at a time, decompressing each buffered segment on the fly during playback. Each option has its advantages and disadvantages, but in general music should be streamed since decompressing an MP3 or Vorbis file of some minutes length in memory will consume some 10s of megabytes, whereas sound effects that will be used repeatedly and are relatively short can be loaded into memory for quick access. To load a sound into memory: FMOD::Sound *audio; system->createSound("Audio.mp3", FMOD_DEFAULT, 0, &audio); To open a stream: FMOD::Sound *audioStream; system->createStream("Audio.mp3", FMOD_DEFAULT, 0, &audioStream); The first parameter is the relative pathname of the file to open and the 4th parameter is a pointer to an FMOD::Sound pointer that receives the resource handle of the audio. Under normal circumstances the 2nd and 3rd parameters should be left as FMOD_DEFAULT and 0 (the 2nd is the mode in which to open the audio, the 3rd is an extended information structure used in special cases – we will come to this in Part 3 of the series). Playing a one-shot sound To play a sound that doesn’t loop and which you don’t otherwise need any control over, call: system->playSound(FMOD_CHANNEL_FREE, audio, false, 0); This is the same whether you are playing a sound or a stream; you use playSound in both cases. FMOD_CHANNEL_FREE causes FMOD to choose any available unused virtual channel on which to play the sound as mentioned earlier. The 2nd parameter is the audio to play. The third parameter specifies whether the sound should be started paused or not. This is useful when you wish to make changes to the sound before it begins to play. If you need control of the sound after it starts, use the 4th parameter to receive the handle of the channel that the sound was assigned to: FMOD::Channel *channel; system->playSound(FMOD_CHANNEL_FREE, audio, false, &channel); These calls are non-blocking so they return as soon as they are processed and the sound plays in the background (in a separate thread). Manipulating the channel Once a sound is playing and you have the channel handle, all future interactions with that sound take place through the channel. For example, to make a sound loop repeatedly: channel->setMode(FMOD_LOOP_NORMAL); channel->setLoopCount(-1); To toggle the pause state of the sound: bool isPaused; channel->getPaused(&isPaused); channel->setPaused(!isPaused); To change the sound volume, specify a float value from 0.0 to 1.0, eg for half volume: channel->setVolume(0.5f); Per-frame update Although it’s only needed on certain devices and in certain environments, it is best to call FMOD’s update function on each frame (or cycle of your application’s main loop): system->update(); This causes OSs such as Android to be able to accept incoming phone calls and other notifications. Releasing resources Release the FMOD interface when you are finished with it (generally, when the application is exiting): system->release(); This will cause all channels to stop playing, and for the channels and main interface to be released. Channels therefore don’t need to be released when the application ends, but sounds should be released when you’re done with them (thanks to David Gouveia for the correction!): audio->release(); Don’t forget to error check Everything above should be wrapped in calls to our FMODErrorCheck() function above or some other error-trapping construct. I have just omitted this in the examples for clarity. Demo application All of the techniques shown in this article can be seen in this FMOD Demo console application, which shows how to open and play sounds and streams and how to do a smooth volume fade from one track to another. Full source code and the compiled executable are included. The source code can also be seen here for your convenience: #include "fmod.hpp" #include "fmod_errors.h" #include <iostream> #include <Windows.h> #define _USE_MATH_DEFINES #include <math.h> void FMODErrorCheck(FMOD_RESULT result) { if (result != FMOD_OK) { std::cout << "FMOD error! (" << result << ") " << FMOD_ErrorString(result) << std::endl; exit(-1); } } int main() { // ================================================================================================ // Application-independent initialization // ================================================================================================ FMOD::System *system; FMOD_RESULT result; unsigned int version; int numDrivers; FMOD_SPEAKERMODE speakerMode; FMOD_CAPS caps; char name[256]; // Create FMOD interface object result = FMOD::System_Create(&system); FMODErrorCheck(result); // Check version result = system->getVersion(&version); FMODErrorCheck(result); if (version < FMOD_VERSION) { std::cout << "Error! You are using an old version of FMOD " << version << ". This program requires " << FMOD_VERSION << std::endl; return 0; } // Get number of sound cards result = system->getNumDrivers(&numDrivers); FMODErrorCheck(result); // No sound cards (disable sound) if (numDrivers == 0) { result = system->setOutput(FMOD_OUTPUTTYPE_NOSOUND); FMODErrorCheck(result); } //); // Increase buffer size if user has Acceleration slider set to off if (caps & FMOD_CAPS_HARDWARE_EMULATED) { result = system->setDSPBufferSize(1024, 10); FMODErrorCheck(result); } //); } } // Initialise FMOD result = system->init(100, FMOD_INIT_NORMAL,); // ================================================================================================ // Application-specific code // ================================================================================================ bool quit = false; bool fading = false; int fadeLength = 3000; int fadeStartTick; // Open music as a stream FMOD::Sound *song1, *song2, *effect; result = system->createStream("Song1.mp3", FMOD_DEFAULT, 0, &song1); FMODErrorCheck(result); result = system->createStream("Song2.mp3", FMOD_DEFAULT, 0, &song2); FMODErrorCheck(result); // Load sound effects into memory (not streaming) result = system->createSound("Effect.mp3", FMOD_DEFAULT, 0, &effect); FMODErrorCheck(result); // Assign each song to a channel and start them paused FMOD::Channel *channel1, *channel2; result = system->playSound(FMOD_CHANNEL_FREE, song1, true, &channel1); FMODErrorCheck(result); result = system->playSound(FMOD_CHANNEL_FREE, song2, true, &channel2); FMODErrorCheck(result); // Songs should repeat forever channel1->setLoopCount(-1); channel2->setLoopCount(-1); // Print instructions std::cout << "FMOD Simple Demo - (c) Katy Coe 2012 -" << std::endl << "=====================================================" << std::endl << std::endl << "Press:" << std::endl << std::endl << " 1 - Toggle song 1 pause on/off" << std::endl << " 2 - Toggle song 2 pause on/off" << std::endl << " F - Fade from song 1 to song 2" << std::endl << " S - Play one-shot sound effect" << std::endl << " Q - Quit" << std::endl; while (!quit) { // Per-frame FMOD update FMODErrorCheck(system->update()); // Q - Quit if (GetAsyncKeyState('Q')) quit = true; // 1 - Toggle song 1 pause state if (GetAsyncKeyState('1')) { bool isPaused; channel1->getPaused(&isPaused); channel1->setPaused(!isPaused); while (GetAsyncKeyState('1')); } // 2 - Toggle song 2 pause state if (GetAsyncKeyState('2')) { bool isPaused; channel2->getPaused(&isPaused); channel2->setPaused(!isPaused); while (GetAsyncKeyState('2')); } // F - Begin fade from song 1 to song 2 if (GetAsyncKeyState('F')) { channel1->setVolume(1.0f); channel2->setVolume(0.0f); channel1->setPaused(false); channel2->setPaused(false); fading = true; fadeStartTick = GetTickCount(); while (GetAsyncKeyState('F')); } // Play one-shot sound effect (without storing channel handle) if (GetAsyncKeyState('S')) { system->playSound(FMOD_CHANNEL_FREE, effect, false, 0); while (GetAsyncKeyState('S')); } // Fade function if fade is in progress if (fading) { // Get volume from 0.0f - 1.0f depending on number of milliseconds elapsed since fade started float volume = min(static_cast<float>(GetTickCount() - fadeStartTick) / fadeLength, 1.0f); // Fade is over if song 2 has reached full volume if (volume == 1.0f) { fading = false; channel1->setPaused(true); channel1->setVolume(1.0f); } // Translate linear volume into a smooth sine-squared fade effect volume = static_cast<float>(sin(volume * M_PI / 2)); volume *= volume; // Fade song 1 out and song 2 in channel1->setVolume(1.0f - volume); channel2->setVolume(volume); } } // Free resources FMODErrorCheck(song1->release()); FMODErrorCheck(song2->release()); FMODErrorCheck(effect->release()); FMODErrorCheck(system->release()); } For A Quick & Dirty Solution I have made a simple FMOD library which wraps all of the code above and the code in subsequent parts of the series up into an easy-to-use class. Check out the link for more information. Coming Up… In Part 2 of our series on FMOD we will take a look at channel groups. I hope you found this tutorial introduction useful!! Hello Katy! Are you sure about the bit about releasing the system also releasing the loaded sound objects? I’m somewhat confused by the documentation, which I quote: “Call System::release to close the output device and free all memory associated with that object. Channels are stopped, but sounds are not released. You will have to free them first. You do not have to stop channels yourself. You can of course do it if you want, it is just redundant, but releasing sounds is good programming practice anyway. ” The second sentence seems to make clear that sounds are not released automatically, and must therefore be tracked and released manually before the system. This seems counter intuitive though, and the last sentence adds to the confusion. Hey David, It looks like you are right, I went back to the FMOD documentation last night and saw those paragraphs too. But if you go to the documentation for System::close(), which is called by System::release(), it says: “Closing the output renders objects created with this system object invalid. Make sure any sounds, channelgroups, geometry and dsp objects are released before closing the system object.” So it would appear that while you don’t have to release the channels, you do have to release the sounds. Well spotted! I will update the blog post when I get a moment. Oh, I’m also glad you just quoted that bit from the close() method, since it made me notice that channel groups also need to be released. I made the assumption that they worked the same as regular channels and did not need to be released. Blog post updated, thanks 🙂 By the way, if you ever feel like writing another part for this series, one thing that you could describe is how to write directly to the audio buffer for some low level audio programming. I already had some experience doing that in other APIs such as XNA and Flash, but it took me more time than it should to figure out how to do it in FMOD – I couldn’t find much about it in the documentation, and forgot to look into the example projects. For the record, here’s what I did. The process turned out to be creating a *looping stream* with the FMOD_OPENUSER flag and a custom FMOD_CREATESOUNDEXINFO object using a PCM read callback set to write the data to it. Besides this callback, I also needed to specify the size of the info structure, the format of the audio (e.g. PCM16 or PCMFLOAT), the default frequency (44100), the number of channels (2), the size of the decode buffer (depends), and the length of the stream, which I set to the equivalent of five seconds of audio (frequency * bitrate * channels * 5). I’m always open to suggestions 🙂 Although it depends mostly on my time and health (the last 2 years, mostly the latter). I’ve never tried that in FMOD but I’m glad you threw in some pointers to save me the head-scratching if I do decide to write about that. I’ve done that in one API and it was many years ago (might have been a WAV writer for Winamp, can’t quite remember), I seem to remember it was basically a case of running it in a separate thread and making sure the buffer never under-ran. If you really want me to blog about that for others and you have a source code sample, feel free to post it on pastebin and I’ll re-factor it with comments and explanations. No promises on a timeline 🙂 I did not get around to implementing any significant sample for this, as I was mostly just playing around. But the following FMOD official example is a good starting point, although it can be trimmed a lot. And here’s the callback I used to produce a sine wave with controllable volume and frequency (in this case it was in C#): Doesn’t look as traumatic as I was expecting. I’ve saved the source files so I may take a look at that when I’ve caught up with all the platform game articles. Thanks! I’ve spent the night writing a couple of nice clean examples using the official demo code, your C# example, and some other bits and pieces I found on the interwebs, and added some other example sounds (sawtooth, square wave and white noise). I’ll get around to blogging it in the next few days hopefully 🙂 Could you tell me if there is any C++ specific gotcha which would make it preferable to do: FMOD_RESULT result = whatever(); FMODErrorCheck(result); Instead of: FMODErrorCheck(whatever()); Or is it just a matter of style? Unless you want to use result later (in which case you need to do the first version), or you want to be really pedantic about allocating 4 bytes on the stack until result goes out of scope, it is as far as I know purely a matter of personal preference 🙂 Any idea whether it’s possible to drive 2 soundcards with 1 FMOD program? I haven’t tried it but I believe that is perfectly possible. You just need to create (as far as I know) multiple instances of FMOD::System and assign each one to a different sound card. The documentation for System_Create states “Use this function to create 1, or multiple instances of FMOD System objects” which I assume means you can run more than one FMOD engine from the same process. Presumably any sounds you want to play on two or more sound cards have to be loaded twice or more as each sound is owned by a single FMOD::System instance, but I haven’t checked. Call System::getNumDrivers to get the number of (logical) sound cards on the system, ie. the number of outputs. Call System::getDriverInfo to get the name of each device. Each device has an ID in FMOD from zero to n-1 where n is the output of System::getNumDrivers. When you have selected the sound cards you want to use, call System::setDriver(int driverID) to re-direct all output to the desired device. Hope that helps! Realizing it’s been almost exactly a year, note that I’ve tried this only today, and indeed it works! I have 2 soundcards and am able to drive both independently this way. Wow, a year already, feels like I wrote this article last month! Good to know it works that way 🙂 Hello, I’m a student from South Korea. I am trying to use FMOD in the platform of MFC. This is Almost my first time of using a library such as FMOD. My first goal is to Play a music of Audio.mp3. I have saved a music file named Audio.mp3 in my Project folder. However, error occured when I’m using the function system->createSound / createStream / playSound My first prediction was that, I couldn’t understand of including Post Built Event. copy /y “C:\Program Files (x86)\FMOD SoundSystem\FMOD Programmers API Windows\api\fmodex.dll” “$(OutputPath)” Eventhough I have Copied my Post Built Event same as above, an error was sent in the OutputPath. Do i need to Change the OutputPath? If the Post Built Event had nothing to do with compliling, what should have been the problem..? I’d be waiting for your advice. T_T.. void CMy111Dlg::OnBnClickedButton1() { InitFMOD(); } void CMy111Dlg::ERRCHECK(FMOD_RESULT result) { CString strText; if(result!=FMOD_OK) { strText.Format(_T(“FMOD 오류”),FMOD_ErrorString(result)); MessageBox(strText,_T(“FMOD error!!”),MB_OK); num++; exit(); } So, did it compile or not? And what was the error? Also the code contains a call to ‘versioncreateSound’; that won’t compile ofcourse (but that seems too trivial to be your problem). I’ve copied another source of mine. The source i’ve used); } In the compiling process there was no errors. However, when i executed my program. The ERRCHECK(result) function sent me that a problem had occured during createSound / createStream / playSound. If i had the ERRCHECK(result) line deleted and executed my program. A popup messagebox was sent that there was a critical problem and the program was terminated. Thank you for replying 🙂 I’ve used result = system->createSound(“Audio.mp3”, FMOD_DEFAULT, 0, &audio); !!!! when i Post my comment it is transfered to if(versioncreateSound……. I don’t know why;; Ok, but also this: result = system->playSound(FMOD_CHANNEL_FREE, audio, false,0); I’m not sure sure if playSound() accepts a 0 for the channel to return. Better to pass &channel rather than 0. I debug mode you should get a call stack where you can check which call (in your code) exactly gave the crash, so you can check the arguments/variables being used. I’ve found my problem;…… My Audio.mp3 file was named incorrectly..T_T When I have another problem I’d like adivce. Heya there! I’d like to ask a simple question( but with a possible complex answer). I’ve been following this tutorial, but half-way through I stopped because it’s late at night and I don’t want to waste time. My simple purpose was to have a car engine acceleration done that I could control how I’d like. Through a couple of google searches I’ve arrived at this tutorial: This prove to be a promising path to follow, so I ended up having some engine sounds and doing that. Next I downloaded the API. The problem is that I want to somehow integrate the Designer project with a C++ program written with the FMOD API. Is this easy or really hard? As in the video, I would only need to control the “RPM” value. Can you point me to some resources\tutorials or if it isn’t that hard, can you please suggest me something from the reference? Please keep in mind that I’m quite a beginner. I’m sorry, I don’t have any experience with the FMOD Designer, only the code side… perhaps another reader can help you 🙂 It’s not that hard, but not particularly easy. I would recommend checking the official car engine sample that comes with FMOD Designer and reading the documentation. If I remember correctly, the sound event is composed by two layers – one for when the engine is under load (i.e. accelerating), and another for when it is not (i.e. decelerating). Then there is a “load” parameter that you set from your game which determines which of the two layers should be used, with a 50/50 mix of both in the middle. Then, on the horizontal axis comes the “rpm” parameter. There are four or five different sounds recorded at different rpm values, all placed side by side on the layers. These sounds overlap a bit and cross-fade so that the transitions are smoother, and they use the “auto-pitch” feature of FMOD Designer which basically increases the pitch gradually based on the value of the “rpm” value. So, what I suggest is that you start with the official car engine sample that comes with FMOD Designer, save that in a project and load the project into your game. Then what you need to do is update the “rpm” and “load” parameters based on values coming from your game. The main problem will probably be making the rpm and load values vary consistently and in a realistic fashion. Loading a FMOD designer project, playing a sound event and changing parameters is easy and very similar to initializing the regular audio system and playing a regular sound. It’s something like: // Create an event system object FMOD::EventSystem* eventSystem; FMOD::EventSystem_Create(&eventSystem); // Initialize the event system and load the project eventSystem->init(100, FMOD_INIT_NORMAL, 0, FMOD_EVENT_INIT_NORMAL); eventSystem->load(“project.fev”, 0, 0); … // Get a reference to the event FMOD::Event* event; eventSystem->getEvent(“ProjectName/EventGroupName/EventName”, FMOD_EVENT_DEFAULT, &event); // Begin playing the event event->start(); … // Get a reference to the parameter FMOD::EventParameter* parameter; event->getParameter(“ParameterName”, ¶meter); // Change the value of the parameter parameter->setValue(2.0f); … // Update event system every frame eventSystem->update(); // Release event system when we are done eventSystem->release(); Is there a way to tell when a sound finishes playing in FMOD? Love the blog btw Katy Yes, you can poll the channel on which the sound is being played with Channel::isPlaying(bool *playing) which sets the pointer to false when the sound has finished playing. You need to fetch the sound’s channel when you call playSound() to be able to poll the correct channel. Thanks for the compliments! There’s also an event based solution 🙂 Use the Channel::setCallback() method to register a callback on the channel that is playing the sound. Then there is a parameter on your callback function which tells you the type of the callback, and all you have to do is check if it corresponds to FMOD_CHANNEL_CALLBACKTYPE_END and handle it accordingly. Nice. I was looking in the docs for the previous poster for an event but I looked at FMODCREATESOUNDEX or whatever it’s called and couldn’t find one. Trying to use your tutorial to get started… The FMOD_CAPS is reading as undefined. I have tried including all the headers in the fmod pack but it is still not working. Do you know which header this thing is in? You should include the main header fmod.hpp (for the C++ interface), that’s the only one you need and FMOD_CAPS will then be defined 🙂 I have included fmod.hpp, FMOD_CAPS is still not working for me. Then I could only suggest that your download was somehow corrupt. Uninstall the FMOD API altogether and download the latest version from fmod.org and try again. I promise you that FMOD_CAPS is defined in that header 🙂 I still can’t get FMOD_CAPS to respond. Perhaps you can tell me how to properly install/link the fmod API files to VS2012 (I am making a game with openGL atm), I did see some .a files in the low-level api folder, I don’t know what to do with those. Currently, I am simply putting the fmod folders into my c/program files(x86)/VC/bin, etc. Is there something else I need to do? Define ‘not working’? In C++ I do this: FMOD_CAPS caps; // Try to use system default speakermode fmodSystem->getDriverCaps(0,&caps,0,&systemSpeakerMode); The FMOD_CAPS type comes from fmod.h. If you want to check if you’re including that .h files, why not insert an #error line somewhere? At the top should give you a clue whether you’re including the right file. Near the ‘FMOD_CAPS’ typedef in fmod.h would give you a clue whether the preprocessor gets to the typedef. By not working, I mean that VS is reading FMOD_CAPS as undefined, even though I have #include “fmod.hpp”, which should have “fmod.h” (I’ve tried including “fmod.h” too without success). How would you use an #error line? I haven’t heard of this feature before. #error works just like the other #’s (#ifdef etc). So you get: #ifdef FMOD_H #error we are here #endif This gives a compiler error, which is useful in tracking which (if any) part of the .h is actually processed. This can give clues as to why it never passes the FMOD_CAPS line. If you don’t get an error at all even if you put an ‘#error’ line at the top, this means you’re not including the file that you think you are! Note that I use this in Visual Studio; not sure if other compilers support it. Very handy though. I downloaded the demo, tried to run it and got Any idea why? Probably that you tried to compile it as a Windows application so it was looking for WinMain() as the entry point. Change the configuration to Console application instead. Sounds like you didn’t include WinMain() in your project – which you don’t need because the example is a console application, so just change the project type to Console Application and you should be good to go 🙂 Katy. Hello again, Katy! I wonder if you could help me – programming in Linux with FMOD worked fine for me, but now I wanted to try to port my code onto Windows. So I changed some Linux’s functions to ones working under Windows, pasted whole code into Visual Studio 2010. Then I included everything as you have said at the beginning of this tutorial to MVS. I also included stdafx.h at the beginning of all includes due to the fact that precompiled headers should be included that way. Nonetheless, building solution still won’t work. I’m getting errors like: And then a lot of errors pertaining to not being able to identify used FMOD functions. I have never used Visual Studio before so I need someone’s guidance with my problem. What should I do? Thank you in advance. Winged Problem solved. I did not notice that I’d included fmod.h instead of fmod.hpp ;x hey, I copy pasted your demo but I don’t hear any sound. No errors, it compiles and everything, I just can’t hear the .mp3 I specified ! any clues ? great blog btw ! Check that the demo is outputting the sound on a channel on your sound card that you can actually hear, ie. the front speakers or headphones 🙂 how do you do that ? with system->playSound(FMOD_CHANNEL_FREE, audio, false,0); ? thanks! Enumerate the playback drivers and check their names to see which one corresponds to your speakers/headphones. I haven’t tested this code but try something like: Once you’ve found the ID of the driver you want to output on, use: Note that a value of 0 (zero) uses the OS-default driver as determined in your sound settings in Control Panel. Katy. Hello,katy,i come from China and i am just confused about the statement ” while (GetAsyncKeyState(‘2’));” and so on.Could you give me some details about it? That line simply stalls the code until the user releases the ‘2’ key, so that the action of pressing 2 is only triggered once each time it is pressed rather than repeatedly. Hope that helps! Hi, Katy i have configure all the above settings in my VS2013 but still am getting an error message can you help me out on this…. 1>—— Build started: Project: again1, Configuration: Debug Win32 —— 1> Parse Error 1> The filename, directory name, or volume label syntax is incorrect. 1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets(122,5): error MSB3073: The command “xcopy /y “C:\FMOD\ 1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets(122,5): error MSB3073: api\fmodex.dll” “c:\users\hack1on1\documents\visual studio 2013\Projects\again1\Debug\” 1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets(122,5): error MSB3073: :VCEnd” exited with code 123. ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== It’ll be great if you can find out the solution…. Regards Here’s the step i did with my VS2013 Am I missing something?? And i did copy and paste “fmodex.dll” in my working folder… But still am getting an error link like identifier not found, undeclared identifier and so and so….please help… First step = Second step = Third step = Final step = here’s the other trouble am having for a quite long time And to be totally honest i don’t think am having a problem linking my library file’s cause am getting a perfect output
https://katyscode.wordpress.com/2012/10/05/cutting-your-teeth-on-fmod-part-1-build-environment-initialization-and-playing-sounds/
CC-MAIN-2018-17
en
refinedweb
Introduction This is a GPRS / GSM Arduino expansion board developed by Keyes. It is a shield module with frequency of EGSM 900MHz / DCS 1800MHz and GSM850 MHz / PCS 1900MHz, integrated with GPRS, DTMF and other functions. It supports DTMF, when the DTMF function is enabled, you can get the character feedback from the conversion of button pressed down during the call, which can be used for remote control. It is controlled by the AT command, you can directly start its function through the computer serial port and Arduino motherboard. The SIM800C GPRS Shield board has a built-in SIM800H chip with good stability. Specification - Power Supply<Vin>:6-12V - Low power consumption mode: current is 0.7mA in sleep mode - Low battery consumption (100mA @ 7V-GSM mode) - GSM 850/900/1800/1900MHz - GPRS multi-slot class 1 ~ 12 - GPRS mobile station class B - GSM phase 2/2 + standard - Class 4 (2 W @ 850/900 MHz) - Class 1 (1 W @ 1800 / 1900MHz) - Controlled via AT command - USB /Arduino control switch - Adaption of serial baud rate - Support DTMF - LED indicator can display power supply status, network status and operating mode Sample Code #include <sim800cmd.h> //initialize the library instance //fundebug is an application callback function,when someon;");//input the dial telephone number while(1); } digitalWrite(13,HIGH);//turn the LED on by making the voltage HIGH delay(500); digitalWrite(13,LOW);//turn the LED off by making the voltage LOW delay(500); } //application callback function void fundebug(void) { } Note As for arduino IDE 1.0 and subsequent versions, WProgram.h has been renamed Arduino.h, so this program requires arduino IDE 1.0 or later, overwrite the original file.If it is higher than 1.5.5 version, cut the HardwareSerial.h file into Arduino\hardware\arduino\sam\cores\ arduino,overwrite higher than 1.5.5 version, then open HardwareSerial.h file, doing the same modification. Test Result Burning the code on the keyestudio UNO R3 development board, stack the expansion board on the Keyestudio UNO R3 development board, then connect the phone card (only 2G network) and headphone to the expansion. Powered-on, it can dial phone number15912345678, and you can make a call through the headset after telephone connected. Related Data Link Get the Libraries of Hardware Get the Libraries of SIM800C
https://www.arduinoposlovensky.sk/produkty/arduino/shield/sim800c-gprs-gsm-shield-for-arduino-uno-r3-and-mega2560/
CC-MAIN-2018-17
en
refinedweb
Hi Adrian, funny you should ask this just now. I've recently been pondering whether we should provide an official solution for just this case. Currently you can use a "Server script" node with from de.qfs.apps.qftest.run import StopException raise StopException() I think we will make this official so you can skip the import in the future (and the package of the Exception might be different). Best regards, Greg "Adrian Chamberlain" <Adrian.Chamberlain@?.com> writes: > Ref. --
https://www.qfs.de/qf-test-mailingliste-archiv-2006/lc/2006-msg00121.html
CC-MAIN-2018-17
en
refinedweb
Kajiki provides a XML-based template language that is heavily inspired by Kid, and Genshi which in turn was inspired by a number of existing template languages, namely XSLT, TAL, and PHP. This document describes the template language and will be most useful as reference to those developing Kajiki XML templates. Templates are XML files of some kind (such as XHTML) that include processing directives (elements or attributes identified by a separate namespace) that affect how the template is rendered, and template expressions that are dynamically substituted by variable data. Directives are elements and/or attributes in the template that are identified by the namespace py:. They can affect how the template is rendered in a number of ways: Kajiki provides directives for conditionals and looping, among others. All directives can be applied as attributes, and some can also be used as elements. The if directives for conditionals, for example, can be used in both ways: <html> ... <div py: <p>Bar</p> </div> ... </html> This is basically equivalent to the following: <html> ... :if¶> py:switch¶ The py:switch directive, in combination with the directives py:case and py:else provides advanced conditional processing for rendering one of several alternatives. The first matching py:case branch is rendered, or, if no py:case branch matches, the py:else branch is rendered. The nested py:case directives will be tested for equality to the parent py:switch value: <div> <py:switch <span py:0</span> <span py:1</span> <span py:2</span> </py:switch> </div> This would produce the following output: <div> <span>1</span> </div> Note The py:switch directive can only be used as a standalone tag and cannot be applied as an attribute of a tag. py:for¶> py:def¶> py:with¶> py:attrs¶> Note This directive can only be used as an attribute. py:content¶ This directive replaces any nested content with the result of evaluating the expression: <ul> <li py:Hello</li> </ul> Given bar='Bye' in the context data, this would produce: <ul> <li>Bye</li> </ul> This directive can only be used as an attribute. py:replace¶> py:strip¶). To reuse common snippets of template code, you can include other files using py:include and py:import.: <py:include whereas in the PackageLoader you would use <py:include With py:import, you can make the functions defined in another template available without expanding the full template in-place. Suppose that we saved the following template in a file lib.xml: <py:def <py:ifeven</py:if><py:else>odd</py:else> </py:def> Then (using the FileLoader) we could write a template using the evenness function as follows: <div> <py:import <ul> <li py:$i is ${lib.evenness(i)}</li> </ul> </div> Kajiki is a fast template engine which is 90% compatible with Genshi, all of Genshi directives work in Kajiki too apart those involved in templates inheritance as Kajiki uses blocks instead of XInclude and XPath. Simple templates hierarchies (like the one coming from TurboGears quickstart) can be moved to Kajiki blocks in a matter of seconds through the gearbox patch command. Note Please note that this guide only works on version 2.3.6 and greater. Note It’s suggested to try this steps on a newly quickstarted Genshi application and then test them on your real apps when you are confident with the whole process. Enabling Kajiki support involves changing the base_config.default_renderer option in your app_cfg.py and adding kajiki to the renderers: # Add kajiki support base_config.renderers.append('kajiki') # Set the default renderer base_config.default_renderer = 'kajiki' The only template we will need to adapt by hand is our master.html template, everything else will be done automatically. So the effort of porting an application from Genshi to Kajiki is the same independently from the size of the application. First of all we will need to remove the py:strip and xmlns attributes from the html tag: <html xmlns="" xmlns: should became: <html> Then let’s adapt our head tag to make it so that the content from templates that extend our master gets included inside it: <head py: <meta name="viewport" content="width=device-width, initial-scale=1.0"/> <meta charset="${response.charset}" /> <title py:Your generic title goes here</title> <meta py: <link rel="stylesheet" type="text/css" media="screen" href="${tg.url('/css/bootstrap.min.css')}" /> <link rel="stylesheet" type="text/css" media="screen" href="${tg.url('/css/style.css')}" /> </head> should became: <head> <meta name="viewport" content="width=device-width, initial-scale=1.0"/> <meta charset="${response.charset}" /> <title py:Your generic title goes here</title> <py:block <link rel="stylesheet" type="text/css" media="screen" href="${tg.url('/css/bootstrap.min.css')}" /> <link rel="stylesheet" type="text/css" media="screen" href="${tg.url('/css/style.css')}" /> </head> Then we do the same with the body tag by disabling it as a block and placing a block with the same name inside of it: <body py: <!-- Navbar --> [...] <div class="container"> <!-- Flash messages --> <py:with <div class="row"> <div class="col-md-8 col-md-offset-2"> <div py: </div> </div> </py:with> <!-- Main included content --> <div py: </div> </body> Which should became: <body> <!-- Navbar --> [...] <div class="container"> <!-- Flash messages --> <py:with <div class="row"> <div class="col-md-8 col-md-offset-2"> <div py: </div> </div> </py:with> <!-- Main included content --> <py:block </div> </body> What is did is replacing all the XPath expressions that lead to insert content from the child templates into head and body with two head and body blocks. So our child templates will be able to rely on those blocks to inject their content into the master. Last importat step is renaming the master template, as Kajiki in turbogears uses .xhtml extension we will need to rename master.html to master.xhtml: $ find ./ -iname 'master.html' -exec sh -c 'mv {} `dirname {}`/master.xhtml' \; Note The previous expression will rename the master file if run from within your project directory. There are three things we need to do to upgrade all our child templates to Kajiki: - Replace xi:includewith py:extends - Strip <html>tags to avoid a duplicated root tag - Replace <head>tag with a kajiki block - Replace <body>tag with a kajiki block To perform those changes we can rely on a simple but helpful gearbox command to patch all our templates by replacing xi:include with py:extends which is used and recognized by Kajiki. Just move inside the root of your project and run: $ gearbox patch -R '*.html' 'xi:include href="master.html"' -r 'py:extends href="master.xhtml"' You should get an output similar to: 6 files matching ! Patching /private/tmp/prova/prova/templates/about.html ! Patching /private/tmp/prova/prova/templates/data.html ! Patching /private/tmp/prova/prova/templates/environ.html ! Patching /private/tmp/prova/prova/templates/error.html ! Patching /private/tmp/prova/prova/templates/index.html ! Patching /private/tmp/prova/prova/templates/login.html Which means that all our templates apart from master.html got patched properly and now correctly use py:extends. Now we can start adapting our tags to move them to kajiki blocks. First of all we will need to strip the html from all the templates apart master.xhtml to avoid ending up with duplicated root tag: $ gearbox patch -R '*.html' 'xmlns=""' -r 'py:strip=""' Then we will need to do the same for the head tag: $ gearbox patch -R '*.html' '<head>' -r '<head py:' Then repeat for body: $ gearbox patch -R '*.html' '<body>' -r '<body py:' Now that all our templates got upgraded from Genshi to Kajiki, we must remember to rename them all, like we did for master: $ find ./ -iname '*.html' -exec sh -c 'mv {} `dirname {}`/`basename {} .html`.xhtml' \; Restarting your application now should lead to a properly working page equal to the original Genshi one. Congratulations, you successfully moved your templates from Genshi to Kajiki.
http://turbogears.readthedocs.io/en/latest/turbogears/kajiki-xml-templates.html
CC-MAIN-2018-17
en
refinedweb
Name Scope¶ Note This has nothing to do with mouthwash! Now that we are building programs with more than one module, and can start to talk about the right way to organize a bigger program, we need to introduce the concept of name scope. Simply put, this is a concept that defines where in a program we can refer to a name. That name can be the name of a variable, constant, or module. Long ago, I used an analogy to explain name scope. The analogy involved surrounding parts of your program with one way mirrors. You know how these work, from one side you cannot see through them, but from the other side you can see through them. We will surround the entire program file in such a mirror, and further, we will surround each module we create in that file in another mirror. The mirrors will be set up so that from outside of the file, you cannot look into the file, and from outside of any module (but inside the file), you cannot see into the function. If you are standing inside the function, you can see out of the function to the world outside of the function, maybe even outside of the file. One catch¶ There is one catch to this rule. You are only allowed to look upward in your program, never downward. As you read your program code, you will define names of variables and constants which hold your data. We have already heard the rule that you cannot use a name unless the compiler knows everything it need to know about that name so it can make sure you use it correctly. You may also define modules in your code, either fully, or in two parts: the prototype followed by the full module definition. The scope rules determine where in our program we can refer to the module name, meaning where we can call the module into action! You can pretend that each module has the name of the module painted on the outside of the enclosing mirror. Here is how scope works. Scope¶ At any point in your program code where you want to use a name, we need to look upward in the program to see if that name has been defined above us. If so, we are allowed to use the name If not, the compiler will generate an error. If we have modules in the program between where we want to use the name, we cannot see into those modules, but we can see around them to code above the modules surrounding mirror. Again, if the name can be found using that set of rules, we can use the name. Module local names¶ Names created in a module are called local, meaning they are only visible to code inside that module. This protects them from accidentally being used by any other code in the program. We want this to allow us to move the module into another program without breaking anything in that new program. What is interesting about modules, though, is that they can use names of variables and constants outside of their surrounding mirrors. We call these names global since they must be defined outside of any module to be seen. Normally, we place such names at the top of the program which makes then visible to any code below the point where they are defined. The global term indicates that you can use those names anywhere in your program. Modules can see other modules as well, so a module can call another module if needed. This is how we build large programs, dividing them up into a bunch of modules that activate each other as needed to do the required work. Module parameter names¶ The names of parameters we create for our modules are actually local to the module, although it is common to see those names in a prototype, or module definition. The names themselves can only be used inside the module, and act like specially initialized variables that code in the module can use. When some piece of code calls the module, those parameter names are initialized with the right values at that moment. The caller’s code determines what value will be placed in those variables, and the module code will not be aware how that all happened! Here is an example program, just to reinforce this scope concept: #include <iostream> // makes a bunch of names "global" (like cout, and cin) using namespace std; // simplifies some names const double PI = acos(-1.0); // define a "global" constant named PI void myfunction(double angle) { double radians; // uninitialized local variable radians = angle * PI / 180.0; // using global PI, OK since it is above here cout << "Radians:" << radians << endl; // using several globals and a local } int main(int argc, char ** argv) { cout << "Hello, there!" << endl; // global cout used here myfunction(45.0); // calling module defined above. angle will be 45.0 } This code shows several examples of using names defined above. In module main, we cannot use the name radians since it is invisible to us. It is hiden inside the morrit that surrounds myfunction. We can call myfunction, since we can see its name. The analogy is important in organizing code. It i common in several languages, but not all. You will need to learn the specific rules for scope in what ever language you end up using!
http://www.co-pylit.org/courses/cosc1315/functions/03-name-scope.html
CC-MAIN-2018-17
en
refinedweb
VPC Flow Logs. Flow logs can help you with a number of tasks; for example, to troubleshoot why specific traffic is not reaching an instance, which in turn helps you diagnose overly restrictive security group rules. You can also use flow logs as a security tool to monitor the traffic that is reaching your instance. There is no additional charge for using flow logs; however, standard CloudWatch Logs charges apply. For more information, see Amazon CloudWatch Pricing. Topics Flow Logs Basics You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, each network interface in the VPC or subnet is monitored. Flow log data is published to a log group in CloudWatch Logs, and each network interface has a unique log stream. Log streams contain flow log records, which are log events consisting of fields that describe the traffic for that network interface. For more information, see Flow Log Records. To create a flow log, you specify the resource for which to create the flow log, the type of traffic to capture (accepted traffic, rejected traffic, or all traffic), the name of a log group in CloudWatch Logs to which the flow log is published, and the ARN of an IAM role that has sufficient permissions to publish the flow log to the CloudWatch Logs log group. If you specify the name of a log group that does not exist, we attempt to create the log group for you. After you've created a flow log, it can take several minutes to begin collecting data and publishing to CloudWatch Logs. Flow logs do not capture real-time log streams for your network interfaces. You can create multiple flow logs that publish data to the same log group in CloudWatch Logs.. If you launch more instances into your subnet after you've created a flow log for your subnet or VPC, then a new log stream is created for each new network interface as soon as any network traffic is recorded for that network interface. You can create flow logs for network interfaces that are created by other AWS services; for example, Elastic Load Balancing, Amazon RDS, Amazon ElastiCache, Amazon Redshift, and Amazon WorkSpaces. However, you cannot use these services' consoles or APIs to create the flow logs; you must use the Amazon EC2 console or the Amazon EC2 API. Similarly, you cannot use the CloudWatch Logs console or API to create log streams for your network interfaces. If you no longer require a flow log, you can delete it. Deleting a flow log disables the flow log service for the resource, and no new flow log records or log streams are created. It does not delete any existing flow log records or log streams for a network interface. To delete an existing log stream, you can use the CloudWatch Logs console. After you've deleted a flow log, it can take several minutes to stop collecting data. Flow Log Limitations To use flow logs, you need to be aware of the following limitations: You cannot enable flow logs for network interfaces that are in the EC2-Classic platform. This includes EC2-Classic instances that have been linked to a VPC through ClassicLink. You cannot enable flow logs for VPCs that are peered with your VPC unless the peer VPC is in your account. You cannot tag a flow log. After you've created a flow log, you cannot change its configuration; for example, you can't associate a different IAM role with the flow log. Instead, you can delete the flow log and create a new one with the required configuration. None of the flow log API actions ( ec2:*FlowLogs) support resource-level permissions. To create an IAM policy to control the use of the flow log API actions, you must grant users permissions to use all resources for the action by using the * wildcard for the resource element in your statement. For more information, see Controlling Access to Amazon VPC Resources. If your network interface has multiple IPv4 addresses and traffic is sent to a secondary private IPv4 address, the flow log displays the primary private IPv4 address in the destination IP address field. Flow logs do not capture all IP traffic. The following types of traffic are not logged: Traffic generated by instances when they contact the Amazon DNS server. If you use your own DNS server, then all traffic to that DNS server is logged. Traffic generated by a Windows instance for Amazon Windows license activation. Traffic to and from 169.254.169.254for instance metadata. Traffic to and from 169.254.169.123for the Amazon Time Sync Service. DHCP traffic. Traffic to the reserved IP address for the default VPC router. For more information, see VPC and Subnet Sizing. Traffic between an endpoint network interface and a Network Load Balancer network interface. For more information, see VPC Endpoint Services (AWS PrivateLink). Flow Log Records A flow log record represents a network flow in your flow log. Each record captures the network flow for a specific 5-tuple, for a specific capture window. A 5-tuple is a set of five different values that specify the source, destination, and protocol for an internet protocol (IP) flow. The capture window is a duration of time during which the flow logs service aggregates data before publishing flow log records. The capture window is approximately 10 minutes, but can take up to 15 minutes. A flow log record is a space-separated string that has the following format: version account-id interface-id srcaddr dstaddr srcport dstport protocol packets bytes start end action log-status If a field is not applicable for a specific record, the record displays a '-' symbol for that entry. For examples of flow log records, see Examples: Flow Log Records. You can work with flow log records as you would with any other log events collected by CloudWatch Logs. For more information about monitoring log data and metric filters, see Searching and Filtering Log Data in the Amazon CloudWatch User Guide. For an example of setting up a metric filter and alarm for a flow log, see Example: Creating a CloudWatch Metric Filter and Alarm for a Flow Log. You can export log data to Amazon S3 and use Amazon Athena, an interactive query service, to analyze the data. For more information, see Querying Amazon VPC Flow Logs in the Amazon Athena User Guide. IAM Roles for Flow Logs The IAM role that's associated with your flow log must have sufficient permissions to publish flow logs to the specified log group in CloudWatch Logs. The IAM policy that's attached to your IAM role must include at least the following permissions: { "Version": "2012-10-17", "Statement": [ { "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents", "logs:DescribeLogGroups", "logs:DescribeLogStreams" ], "Effect": "Allow", "Resource": "*" } ] } You must also ensure that your role has a trust relationship that allows the flow logs service to assume the role (in the IAM console, choose your role, and then choose Edit trust relationship to view the trust relationship): { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "vpc-flow-logs.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } Alternatively, you can follow the procedures below to create a new role for use with flow logs. Creating a Flow Logs Role To. Select the name of your role. Under Permissions, choose Add inline policy. Choose the JSON tab. In the section IAM Roles for Flow Logs above, copy the first policy and paste it in the window. Choose Review policy. Enter a name for your policy, and then choose Create policy. In the section IAM Roles for Flow Logs above, copy the second policy (the trust relationship), and then choose Trust relationships, Edit trust relationship. Delete the existing policy document, and paste in the new one. When you are done, choose Update Trust Policy. On the Summary page, take note of the ARN for your role. You need this ARN when you create your flow log. Controlling the Use of Flow Logs By default, IAM users do not have permission to work with flow logs. You can create an IAM user policy that grants users the permissions to create, describe, and delete flow logs. To create a flow log, users must have permissions to use the iam:PassRole action for the IAM role that's associated with the flow log. The following is an example policy that grants users full permissions to create, describe, and delete flow logs, and view flow log records in CloudWatch Logs. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DeleteFlowLogs", "ec2:CreateFlowLogs", "ec2:DescribeFlowLogs", "logs:GetLogEvents" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "iam:PassRole" ], "Resource": "arn:aws:iam::account:role/flow-log-role-name" } ] } For more information about permissions, see Granting IAM Users Required Permissions for Amazon EC2 Resources in the Amazon EC2 API Reference. Working With Flow Logs You can work with flow logs using the Amazon EC2, Amazon VPC, and CloudWatch consoles. Creating a Flow Log You can create a flow log from the VPC page and the Subnet page in the Amazon VPC console, or from the Network Interfaces page in the Amazon EC2 console. To create a flow log for a network interface Open the Amazon EC2 console at. In the navigation pane, choose Network Interfaces. Select a network interface, choose the Flow Logs tab, and then Create Flow Log. In the dialog box, complete following information. When you are done, choose Create Flow Log: Filter: Select whether the flow log should capture rejected traffic, accepted traffic, or all traffic. Role: Specify the name of an IAM role that has permissions to publish logs to CloudWatch Logs. Destination Log Group: Enter the name of a log group in CloudWatch Logs to which the flow logs are to be published. You can use an existing log group, or you can enter a name for a new log group, which we create for you. To create a flow log for a VPC or a subnet Open the Amazon VPC console at. In the navigation pane, choose Your VPCs, or choose Subnets. Select your VPC or subnet, choose the Flow Logs tab, and then Create Flow Log. Note To create flow logs for multiple VPCs, choose the VPCs, and then select Create Flow Log from the Actions menu. To create flow logs for multiple subnets, choose the subnets, and then select Create Flow Log from the Subnet Actions menu. In the dialog box, complete following information. When you are done, choose Create Flow Log: Filter: Select whether the flow log should capture rejected traffic, accepted traffic, or all traffic. Role: Specify the name of an IAM role that has permission to publish logs to CloudWatch Logs. Destination Log Group: Enter the name of a log group in CloudWatch Logs to which the flow logs will be published. You can use an existing log group, or you can enter a name for a new log group, which we'll create for you. Viewing Flow Logs You can view information about your flow logs in the Amazon EC2 and Amazon VPC consoles by viewing the Flow Logs tab for a specific resource. When you select the resource, all the flow logs for that resource are listed. The information displayed includes the ID of the flow log, the flow log configuration, and information about the status of the flow log. To view information about your flow logs for your network interfaces Open the Amazon EC2 console at. In the navigation pane, choose Network Interfaces. Select a network interface, and choose the Flow Logs tab. Information about the flow logs is displayed on the tab. To view information about your flow logs for your VPCs or subnets Open the Amazon VPC console at. In the navigation pane, choose Your VPCs, or choose Subnets. Select your VPC or subnet, and then choose the Flow Logs tab. Information about the flow logs is displayed on the tab. You can view your flow log records using the CloudWatch Logs console. It may take a few minutes after you've created your flow log for it to be visible in the console. To view your flow log records for a flow log Open the CloudWatch console at. In the navigation pane, choose Logs. Choose the name of the log group that contains your flow log. A list of log streams for each network interface is displayed. Choose the name of the log stream that contains the ID of the network interface for which you want to view the flow log records. For more information about flow log records, see Flow Log Records. Deleting a Flow Log You can delete a flow log using the Amazon EC2 and Amazon VPC consoles. Note These procedures disable the flow log service for a resource. To delete the log streams for your network interfaces, use the CloudWatch Logs console. To delete a flow log for a network interface Open the Amazon EC2 console at. In the navigation pane, choose Network Interfaces, and then select the network interface. Choose the Flow Logs tab, and then choose the delete button (a cross) for the flow log to delete. In the confirmation dialog box, choose Yes, Delete. To delete a flow log for a VPC or subnet Open the Amazon VPC console at. In the navigation pane, choose Your VPCs, or choose your Subnets, and then select the resource. Choose the Flow Logs tab, and then choose the delete button (a cross) for the flow log to delete. In the confirmation dialog box, choose Yes, Delete. Troubleshooting Incomplete Flow Log Records If your flow log records are incomplete, or are no longer being published, there may be a problem delivering the flow logs to the CloudWatch Logs log group. In either the Amazon EC2 console or the Amazon VPC console, go to the Flow Logs tab for the relevant resource. For more information, see Viewing Flow Logs. The flow logs table displays any errors in the Status column. Alternatively, use the describe-flow-logs command, and check the value that's returned in the DeliverLogsErrorMessage field. One of the following errors may be displayed: Rate limited: This error can occur if CloudWatch logs throttling has been applied — when the number of flow log records for a network interface is higher than the maximum number of records that can be published within a specific timeframe. This error can also occur if you've reached the limit on the number of CloudWatch Logs log groups that you can create. For more information, see CloudWatch Limits in the Amazon CloudWatch User Guide. Access error: The IAM role for your flow log does not have sufficient permissions to publish flow log records to the CloudWatch log group. For more information, see IAM Roles for Flow Logs. Unknown error: An internal error has occurred in the flow logs service. Flow Log is Active, But No Flow Log Records or Log Group You've created a flow log, and the Amazon VPC or Amazon EC2 console displays the flow log as Active. However, you cannot see any log streams in CloudWatch Logs, or your CloudWatch Logs log group has not been created. The cause may be one of the following: The flow log is still in the process of being created. In some cases, it can take tens of minutes after you've created the flow log for the log group to be created, and for data to be displayed. There has been no traffic recorded for your network interfaces yet. The log group in CloudWatch Logs is only created when traffic is recorded. API and CLI Overview You can perform the tasks described on this page using the command line or API. For more information about the command line interfaces and a list of available API actions, see Accessing Amazon VPC. Create a flow log create-flow-logs (AWS CLI) New-EC2FlowLogs (AWS Tools for Windows PowerShell) CreateFlowLogs (Amazon EC2 Query API) Describe your flow logs describe-flow-logs (AWS CLI) Get-EC2FlowLogs (AWS Tools for Windows PowerShell) DescribeFlowLogs (Amazon EC2 Query API) View your flow log records (log events) get-log-events (AWS CLI) Get-CWLLogEvents (AWS Tools for Windows PowerShell) GetLogEvents (CloudWatch API) Delete a flow log delete-flow-logs (AWS CLI) Remove-EC2FlowLogs (AWS Tools for Windows PowerShell) DeleteFlowLogs (Amazon EC2 Query API) Examples: Flow Log Records Flow Log Records for Accepted and Rejected Traffic The following is an example of a flow log record in which SSH traffic (destination port 22, TCP protocol) to network interface eni-abc123de in account 123456789010 was allowed. 2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK The following is an example of a flow log record in which RDP traffic (destination port 3389, TCP protocol) to network interface eni-abc123de in account 123456789010 was rejected. 2 123456789010 eni-abc123de 172.31.9.69 172.31.9.12 49761 3389 6 20 4249 1418530010 1418530070 REJECT OK Flow Log Records for No Data and Skipped Records The following is an example of a flow log record in which no data was recorded during the capture window. 2 123456789010 eni-1a2b3c4d - - - - - - - 1431280876 1431280934 - NODATA The following is an example of a flow log record in which records were skipped during the capture window. 2 123456789010 eni-4b118871 - - - - - - - 1431280876 1431280934 - SKIPDATA Security Group and Network ACL Rules If you're using flow logs to diagnose overly restrictive or permissive security group rules or network ACL rules, then be aware of the statefulness of these resources. Security groups are stateful — this means that responses to allowed traffic are also allowed, even if the rules in your security group do not permit it. Conversely, network ACLs are stateless, therefore responses to allowed traffic are subject to network ACL rules. For example, you use the ping command from your home computer (IP address is 203.0.113.12) to your instance (the network interface's private IP address is 172.31.16.139). Your security group's inbound rules allow ICMP traffic and the outbound rules do not allow ICMP traffic; however, because security groups are stateful, the response ping from your instance is allowed. Your network ACL permits inbound ICMP traffic but does not permit outbound ICMP traffic. Because network ACLs are stateless, the response ping is dropped and will not reach your home computer. In a flow log, this is displayed as 2 flow log records: An ACCEPTrecord for the originating ping that was allowed by both the network ACL and the security group, and therefore was allowed to reach your instance. A REJECTrecord for the response ping that the network ACL denied. If your network ACL permits outbound ICMP traffic, the flow log displays two ACCEPT records (one for the originating ping and one for the response ping). If your security group denies inbound ICMP traffic, the flow log displays a single REJECT record, because the traffic was not permitted to reach your instance. Flow Log Record for IPv6 Traffic The following is an example of a flow log record in which SSH traffic (port 22) from IPv6 address 2001:db8:1234:a100:8d6e:3477:df66:f105 to network interface eni-f41c42bf in account 123456789010 was allowed. 2 123456789010 eni-f41c42bf 2001:db8:1234:a100:8d6e:3477:df66:f105 2001:db8:1234:a102:3304:8879:34cf:4071 34892 22 6 54 8855 1477913708 1477913820 ACCEPT OK Example: Creating a CloudWatch Metric Filter and Alarm for a Flow Log In this example, you have a flow log for eni-1a2b3c4d. You want to create an alarm that alerts you if there have been 10 or more rejected attempts to connect to your instance over TCP port 22 (SSH) within a 1 hour time period. First, you must create a metric filter that matches the pattern of the traffic for which you want to create the alarm. Then, you can create an alarm for the metric filter. To create a metric filter for rejected SSH traffic and create an alarm for the filter Open the CloudWatch console at. In the navigation pane, choose Logs, select the flow log group for your flow log, and then choose Create Metric Filter. In the Filter Pattern field, enter the following: [version, account, eni, source, destination, srcport, destport="22", protocol="6", packets, bytes, windowstart, windowend, action="REJECT", flowlogstatus] In the Select Log Data to Test list, select the log stream for your network interface. You can optionally choose Test Pattern to view the lines of log data that match the filter pattern. When you're ready, choose Assign Metric. Provide a metric namespace, a metric name, and ensure that the metric value is set to 1. When you're done, choose Create Filter. In the navigation pane, choose Alarms, and then choose Create Alarm. In the Custom Metrics section, choose the namespace for the metric filter that you created. Note It can take a few minutes for a new metric to display in the console. Select the metric name that you created, and then choose Next. Enter a name and description for the alarm. In the is fields, choose >= and enter 10. In the for field, leave the default 1 for the consecutive periods. Choose 1 Hour from the Period list, and Sum from the Statistic list. The Sumstatistic ensures that you are capturing the total number of data points for the specified time period. In the Actions section, you can choose to send a notification to an existing list, or you can create a new list and enter the email addresses that should receive a notification when the alarm is triggered. When you are done, choose Create Alarm.
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html
CC-MAIN-2018-17
en
refinedweb
Light. C++ REST SDK There are lots of ways to process HTTP requests and responses in C++, right from barebones WinInet calls (not recommended!) to Boost or POCO. In this article, we shall use Microsoft’s C++ REST SDK. This was available as a beta with the name Casablanca (version 0.6) during Visual Studio 2012, and also included the Windows Azure SDK for a while which has now been split off into a separate SDK. Visual Studio 2013 ships with version 1.0 of the C++ REST SDK and newer versions (2.0 at the time of writing) are available on CodePlex. REST stands for Representational State Transfer and although it has many implementations, for our purposes it is essentially a simple way of issuing requests and commands to a remote server via HTTP calls. You can use the GET method to query tables, POST to insert rows or call remote functions, PUT to update rows and DELETE to delete rows. In other words, it is conveniently exactly like the interface supplied by the OData endpoints on our LightSwitch project (and in the case of remote functions, calling any WCF RIA Services we make – see parts 2 and 3 for details and examples). While you can download and configure the SDK in your projects manually, the easiest way is to use the NuGet package manager and add a reference to the C++ REST SDK (search for ‘Casablanca’ in the Manage NuGet Packages search box to find it) in your project. This will download and install the SDK and add all the appropriate include and library paths so you can get going quickly. NOTE: The C++ REST SDK only supports dynamic linking (you must compile with the /MDd or /MD flags – this is the default). Since you may be integrating the code with game engines or other libraries that only support static linking, I have produced an article explaining how to re-compile the C++ REST SDK to support static linking. It is not necessary for this tutorial, but note if you try to statically link the code below without re-compiling the SDK, it will crash with debug assertion errors. Make a new, empty C++ project in Visual Studio and reference the SDK in whichever way you prefer, then we can get cracking! PPLX Primer PPLX is a Linux-compatible version of PPL included with the C++ REST SDK. PPL stands for Parallel Patterns Library and is an SDK which allows for some neat syntactical sugar to create multi-threaded applications. Everything in the C++ REST SDK is based on PPL tasks and asynchronous operation, and as such there is a bit of a steep learning curve for those not used to this kind of programming. You don’t need to know everything to use the SDK, but to help things along, I’ll give a brief introduction into the basic techniques required. As it happens, a network SDK based on PPL is very useful for our purposes because we really don’t want our game to stall while it is waiting for the server to respond. Usually we take care of this by making network communication run in the background by starting additional threads so that the game’s main thread can continue uninterrupted. With PPL, the thread management is taken care of for us automatically, making things much easier. PPL Tasks Instead of performing a computation directly on our main thread, we can wrap it in a pplx::task<T> object (where T is the return type of the function encapsulating the task, ie. the variable type of the task’s output). The task will run automatically in another thread. The C++ REST SDK has many functions which return tasks instead of the result directly. For example the http_client object’s request() method returns a pplx::task<http_response> object, rather than an http_response directly. This means that when you call request() to execute an HTTP request, it returns immediately (doesn’t block) and the task automatically starts to run in another thread. For example: http_client client(""); client.request(methods::GET, L"/ApplicationData.svc/SomeTable?$format=json"); The above code fetches the URL in another thread, while the call to client.request() returns immediately, allowing execution to continue. Notice that we have not actually made mention of pplx::task<http_response> in the code itself. We don’t actually generally need to deal with tasks directly unless we’re doing something special. It is assumed that the thread which receives the actual HTTP response will signal the application that the request has completed and provide the result (we’ll see how to do this below). Task waits and results If you want to wait for a task to finish – blocking the current thread – use wait() on a task: client.request(methods::GET, L"/ApplicationData.svc/SomeTable?$format=json").wait(); If you want to get the result of a task – blocking the current thread until it is ready – use get() on a task: http_response response = client.request(methods::GET, L"/ApplicationData.svc/SomeTable?$format=json").get(); Continuations A continuation is a construct which indicates what should happen when a task is completed. You can add continuations to tasks by using the then() method to generate a new task which includes the continuation. You can chain as many then() calls as you want together. For example: client.request(methods::GET, L"/ApplicationData.svc/SomeTable?$format=json") .then()([](http_response response) { // do something with the HTTP response }); then() takes a single argument which is the function to call when the task completes. In the above value-based continuation, the target function receives a single argument which is the result of the task. By using C++11 lambda functions as above, and chaining together continuations with then(), we can essentially write the code serially in layout even though it really executes in parallel. NOTE FOR EXPERTS: The target function executes in the same thread as the task by default, and this can sometimes be a problem. The SDK provides a way for you to indicate that the continuation should be run on the original thread from where the task was created using concurrency::task_continuation_context::use_current(), but since this is only supported in Windows 8, we show another way to deal with this problem below. You can also create a task-based continuation, where the target function receives a new task which wraps the result of the previous task, instead of receiving the result of the previous task directly: client.request(methods::GET, L"/ApplicationData.svc/SomeTable?$format=json") .then()([](pplx::task<http_response> responseTask) { // do something with the HTTP response task http_response response = responseTask.get(); }); The main reason to use this for us is exception handling. If the server is down or the user isn’t connected to the internet, attempting to generate an http_response will cause an http_exception exception to be thrown. In a value-based continuation there is no way to handle this, so we have to wrap all our network task generation calls from the main thread in try/catch blocks), but in a task-based continuation we can just put the try/catch block inside the continuation and keep things tidy. More on this below. A note on strings The C++ REST SDK uses the platform string string format for virtually everything involving strings. This basically means the default string format on your PC (ANSI, UTF-8, Unicode etc.). This can make things a bit tricky if you’re used to just using std::string, std::cout and so forth, because most machines default to a wide string format (as opposed to a ‘narrow’ 8-bit format) nowadays. Most of the C++ library string manipulation functions have wide versions with the same names as the narrow ones but with the letter ‘w’ in front, eg. std::wstring, std::wcout and so on. You can hard-code for this if you want (in which case remember to pre-pend all string literals with L to make them long/wide), or you can use some syntactical sugar: The C++ REST SDK provides utility::string_t (and utility::stringstream_t etc.) which maps to either std::string or std::wstring depending on your environment. You can use the _XPLATSTR() macro (or just U() as a shortcut) to convert any string literal into the platform default. In the code below, I have just hard-coded everything to use wide strings. Walkthrough example: Create a new user Let’s make a bare-bones console application which creates a new user. First, the boilerplate code: #include <Windows.h> #include <cpprest/http_client.h> #include <cpprest/uri.h> #include <iostream> using namespace concurrency::streams; using namespace web; using namespace web::http; using namespace web::http::client; using std::string; using std::cout; // at the end of your source file: int main() { string UserName = R"##(jondoe)##"; string FullName = R"##(Jon Doe)##"; string Password = R"##(somepassword12345)##"; string Email = R"##(jon@doe.com)##"; CreateUser(UserName, FullName, Password, Email).wait(); } If you installed the C++ REST SDK using NuGet, the include path for the SDK’s header files will be cpprest/* as shown above, otherwise you may need to change this. The namespaces are all defined by the SDK. Replace the account details in main() with pleasing defaults! Note that CreateUser() will return a PPL task, so we call wait() on it to make sure the application doesn’t exit before the server has responded to the request. NOTE: I have used C++11 raw string literals above. Earlier versions of Visual Studio do not support this, so you must replace them with normal string literals. Authentication Our LightSwitch server uses HTTP Basic auth and the authentication process is trivially handled with the C++ REST SDK as follows: http_client_config config; credentials creds(U("__userRegistrant"), U("__userRegistrant")); config.set_credentials(creds); http_client session(U(""), config); Recall that we made an account __userRegistrant with special privileges in part 2 to allow the anonymous creation of new player accounts. To log in an actual user later on, simply replace the arguments to credentials‘ constructor with the user’s username and password. Remember that HTTP is a stateless protocol, so the correct username and password must be supplied with every request. There are no session keys. Since HTTP Basic auth involves the transmission of the password in plaintext (unencrypted), it is critical that you use an SSL-encrypted connection to the server for authenticated requests; be sure to use https:// rather than http:// in your URL paths to make sure SSL is turned on. If you are using Windows Azure to host your LightSwitch server, SSL is configured and enabled for you when you provision a new web site, otherwise you will need to do the server-side configuration yourself. Building the request We construct the request to create a user as follows: http_request request; string requestBody = "{UserName:\"" + UserName + "\",FullName:\"" + FullName + "\",Password:\"" + Password + "\",Email:\"" + Email + "\"}"; request.set_method(methods::POST); request.set_request_uri(uri(U("/ApplicationData.svc/UserProfiles"))); request.set_body(requestBody, L"application/json"); request.headers().add(header_names::accept, U("application/json")); We construct the JSON request manually in requestBody (this isn’t good practice and later we’ll see how to do this properly; for one thing, if one of the user-supplied fields contains a backslash, the above code will fail to encode it properly), set the HTTP method to POST, set the endpoint to the UserProfiles table, specify that the request is in JSON (rather than XML) and that we also want the response in JSON too. Processing the response We now create a task for the request with a continuation to deal with the response. We start by getting the HTTP response status code and body: return session.request(request).then([] (http_response response) { status_code responseStatus = response.status_code(); std::wstring responseBodyU = response.extract_string().get(); } (Note that calling extract_string() to get the response body text returns a task rather than the string directly) Although it shouldn’t normally be necessary, if for some reason you need to convert the response into a narrow string, you can do so as follows: string responseBody = utility::conversions::to_utf8string(responseBodyU); Next we look at the 3-digit HTTP response status code. Typically this is 200 OK when successfully fetching a web page, 404 if the page is not found and so on. OData standardizes on a few codes: - 200 OK – the query was executed succesfully and the result is in the response body - 201 Created – a row was successfully inserted into a table (status_codes::Created) - 500 Internal Error – there was a problem with the input data (status_codes::InternalError) - 401 Unauthorized – the user’s username or password was incorrect (status_codes::Unauthorized) - … + others So first we’ll check to see if the user was created: if (responseStatus == status_codes::Created) cout << "User created successfully." << std::endl; If not, we try to find out why. In the case of status code 500, the LightSwitch server returns some XML with error codes and descriptive error text. The <Id> tag contains the error code enum value. This doesn’t change regardless of the server’s LightSwitch version or locale so you should prefer to inspect this tag when deducing which error has occurred. You can use an XML parser if you want, but it’s much simpler to just do a brute-force string search:; } } If the status code was neither 201 (Created) or 500 (Internal Error), something else happened so just dump out the information for debugging purposes: else { cout << "Unexpected result:" << std::endl; cout << responseStatus << " "; ucout << response.reason_phrase() << std::endl; cout << responseBody << std::endl; } }); // this line ends the continuation } // this line closes the CreateUser() function And that’s it. If you now run the example numerous times with various valid and invalid username/password combinations, try changing the request URI to one that doesn’t exist on the server and so forth, you should find that it all behaves exactly as you would expect – as long as the server is up and you’re connected to the internet. Walkthrough example: Fetch a user’s profile Let us now write code to fetch a user’s profile. First add the following code to the previous example: // in using namespace declarations: using namespace web::json; // in main(): std::wstring profileUserName = LR"##(jondoe)##"; std::wstring profilePassword = LR"##(somepassword12345)##"; GetProfile(profileUserName, profilePassword).wait(); First we’ll create an http_client with the user’s login credentials: pplx::task<void> GetProfile(std::wstring UserName, std::wstring Password) { http_client_config config; credentials creds(UserName, Password); config.set_credentials(creds); http_client session(U(""), config); Notice that the constructor for credentials only allows platform strings so we had to use wstring for the argument types here. Querying a database table is much simpler than inserting one because we don’t need to set up a POST request body or supply additional HTTP headers, so we can use a simple overload of request() as follows: return session.request(methods::GET, L"/ApplicationData.svc/UserProfiles?$format=json") We include a parameter in the GET query indicating that we want the response in JSON format. When we process the response, we first check for errors: .then([] (http_response response) {; } We check for the condition that the authentication failed (error code 401 – Unauthorized) and for any other unexpected HTTP status code in the response. If everything is ok, we proceed to extract the relevant data from the JSON response: else { json::value &responseJ = response.extract_json().get(); json::value &profile = responseJ[L"value"][0]; std::wcout << "Full name: " << profile[L"FullName"].as_string() << std::endl; std::wcout << "Email : " << profile[L"Email"].as_string() << std::endl; } }); // this line ends the continuation } // this line closes the GetProfile() function Whereas before we used extract_string() to get the text of the HTTP response body, here we use extract_json() instead (returns pplx::task<json::value>), which converts the response text into a json::value object. When you query a LightSwitch table in JSON format, what you get back is a single object containing two items: odata.metadata which you can safely ignore, and value which contains the query result. Specifically, value is an array with one element per retrieved row, and each element is an object which has one property for each field in the retrieved row. json::value has an overloaded [] operator which lets us retrieve items using standard C++ syntax, so the code responseJ[L”value”][0] returns a json::value representing the first (and in this case, only) retrieved row. When you pull a value out of a json::value via an indexer as above, what you get is another json::value (think of it as tree traversal). To convert the leaf textual values to actual C++ strings, use as_string() as shown above. There are various as_*() functions for the different types you might want to convert to. The final code retrieves the specified user’s profile and prints their full name and email address to the console. Dealing with no internet connection If there is no internet connection, the task which generates an http_response will throw an http_exception. The simplest way to deal with this is to wrap all of the relevant code (not just the task-generating code; that on its own won’t raise an exception) in a try/catch block as follows: try { CreateUser(UserName, FullName, Password, Email).wait(); } catch (http_exception &e) { if (e.error_code().value() == 12007) std::cerr << "No internet connection or the host server is down." << std::endl; else std::cerr << e.what() << std::endl; } Error code 12007 is defined somewhere in the Windows API as ERROR_INTERNET_NAME_NOT_RESOLVED – in other words a DNS failure, which is what is likely to happen if the user’s internet connection is off or has failed. We simply check for this error code so we can print a meaningful error message, or print the error message supplied with the exception if something else went wrong. Obviously, wrapping everything in error-handling code like this creates a lot of repetition and isn’t very readable or maintainable. A better way is to use a task-based continuation by changing code like this: return session.request(request).then([] (http_response response) { ... to:; } ... Now, you don’t have to worry about catching exceptions from your main code. Here is the full source code so far: #include <Windows.h> #include <cpprest/http_client.h> #include <cpprest/uri.h> #include <iostream> using namespace concurrency::streams; using namespace web; using namespace web::http; using namespace web::http::client; using namespace web::json; using std::string; using std::cout; pplx::task<void> CreateUser(string UserName, string FullName, string Password, string Email) { http_client_config config; credentials creds(U("__userRegistrant"), U("__userRegistrant")); config.set_credentials(creds); http_client session(U(""), config); http_request request; cout << "Creating user..." << std::endl; string requestBody = "{UserName:\"" + UserName + "\",FullName:\"" + FullName + "\",Password:\"" + Password + "\",Email:\"" + Email + "\"}"; cout << "User creation request: " << requestBody << std::endl << std::endl; request.set_method(methods::POST); request.set_request_uri(uri(U("/ApplicationData.svc/UserProfiles"))); request.set_body(requestBody, L"application/json"); request.headers().add(header_names::accept, U("application/json"));; } std::wstring responseBodyU = response.extract_string().get(); string responseBody = utility::conversions::to_utf8string(responseBodyU); status_code responseStatus = response.status_code(); if (responseStatus == status_codes::Created) cout << "User created successfully." << std::endl;; } } else { cout << "Unexpected result:" << std::endl; cout << responseStatus << " "; ucout << response.reason_phrase() << std::endl; cout << responseBody << std::endl; } }); } pplx::task<void> GetProfile(std::wstring UserName, std::wstring Password) { http_client_config config; credentials creds(UserName, Password); config.set_credentials(creds); http_client session(U(""), config); cout << "Fetching user profile..." << std::endl; return session.request(methods::GET, L"/ApplicationData.svc/UserProfiles?$format=json") ; } else { json::value &responseJ = response.extract_json().get(); json::value &profile = responseJ[L"value"][0]; std::wcout << "Full name: " << profile[L"FullName"].as_string() << std::endl; std::wcout << "Email : " << profile[L"Email"].as_string() << std::endl; } }); } int main() { string UserName = R"##(jondoe)##"; string FullName = R"##(Jon Doe)##"; string Password = R"##(somepassword12345)##"; string Email = R"##(jon@doe.com)##"; CreateUser(UserName, FullName, Password, Email).wait(); std::wstring profileUserName = LR"##(jondoe)##"; std::wstring profilePassword = LR"##(somepassword12345)##"; GetProfile(profileUserName, profilePassword).wait(); while(true); } Maintenance and extensibility What we’ve done so far works but it is far from optimal from a development point of view. Here are some of the problems: - username and password must be supplied with every request - due to the stateless nature of the protocol, there is no way to know if the user is metaphorically “logged in” or not - the interface (output or other application-specific processing of results) is mixed up with the request/response logic. We would like to separate these so we can re-use our network code in multiple games/apps. - the universal LightSwitch/OData handling code is mixed up with the code specific to the requests/functions/tables available in our game network. We would like to separate these so the LightSwitch client code can be re-used in other applications that aren’t related to our game network project. - adding new request functions means we’ll have to add new response/error checking/validation code that will be similar for many requests - we are not constructing JSON requests in a safe way (recall that in the CreateUser example we made the request by joining strings together) - iterating through many JSON objects is syntactically messy. We would like to convert returned rows to C++ structs with a property for each field. - there is no way for the main thread to know if the request was successful or an error occurred - there is no way for the main thread to know if the network code is still busy processing the request without also blocking it (using pplx::task::wait()) - the server URL is repeated in every request function - mixing of different string types makes code maintainability harder All of this can be solved by producing a class framework which: - stores persistent data (server URL, login credentials) - has a number of helper functions (boxing/unboxing JSON requests, generating row insert/row query requests, error-checking) - tracks whether the current login credentials were valid last time they were used (indicating ‘successful logon’) - maintains state in a thread-safe manner about whether the network code is busy processing a request, which can be polled by the main thread - has an inheritance hierarchy that separates LightSwitch/OData logic, game network logic and logic for our specific game - is used in each game by a separate application-specific class containing the game’s interface which will be linked to the network code via task continuations The full source code for just such a framework can be found at the bottom of the article. I’m not going to go over it line by line but I will highlight a few features of the code we haven’t looked over yet. Framework Details Three classes are involved: - LightSwitchClient – generic functions for inserting and querying rows and performing generalized error-handling, tracking logon and busy state and generating and accessing JSON data - GameNetwork – derives from LightSwitchClient and includes the functions/tables supported by our GameNetwork LightSwitch project - GameClient – the game’s interface and has GameNetwork as a member through which the LightSwitch server is accessed If your game network will have various functions that aren’t game-specific as well as some that are – and this is probably going to be the case – you may wish to further derive from GameNetwork so that this class does not have to be modified with game-specific code. Usage: Error handling Instead of outputting error messages directly, we store them for later retrieval and the client code can call LightSwitchClient::GetError() to check if an error occurred. All error types – no internet connection, HTTP error status codes and LightSwitch errors are funneled through this mechanism so that error-checking by the client can be done in a simple unified way. Credentials The desired user’s login and password can be set via LightSwitchClient::SetUser(). This is initially assumed to be a valid user and this assumption changes if a request returns a 401 Unauthorized error, or if LightSwitchClient::LogOut() is called, clearing the stored credentials. The login state can be checked via bool LightSwitchClient::LoggedIn(). Busy state We create a type ThreadSafeBool which can be converted to the standard bool type and back via overloaded operators. The class essentially wraps a single bool in a Windows CRITICAL_SECTION such that it can be read and written by multiple threads without corruption. We then store an instance of this object in our class framework which is set to true at the start of any request and false when the request completes (with or without errors). Call LightSwitchClient::Busy() to get the busy state. Techniques (the following code is all included in the framework; it is provided here for educational purposes if you want to roll your own): LightSwitch error handling You can extract the error code from a failed LightSwitch request as follows: if (responseStatus == status_codes::InternalError || responseStatus == status_codes::NotFound) { wstring const &body = response.extract_string().get(); std::wregex rgx(LR"##(.*<Id>(.*?)</Id>.*)##"); std::wsmatch match; if (std::regex_search(body.begin(), body.end(), match, rgx)) { lastError = match[1]; return false; } rgx = LR"##(.*<Message>(.*?)</Message>.*)##"; if (std::regex_search(body.begin(), body.end(), match, rgx)) { lastError = match[1]; return false; } lastError = L"An internal server error occurred and no error code or message was returned."; return false; } Some responses (mainly those as a result of a 404 Not Found error) don’t have <Id> tags with an error code, so in those cases we try to extract the error text from the <Message> tag instead. If neither are found, a default error message is returned. Extracting JSON data Unlike querying a row which was described earlier, inserting a row in the database will return a JSON object which contains the inserted fields, without the extra object/array wrapping. You can bypass all of this and ensure you get the data you want regardless of request type as follows: json::value LightSwitchClient::SanitizeJSON(http_response &response) { json::value &responseJson = response.extract_json().get(); if (responseJson.is_object()) { if (responseJson.size() == 2) { if (responseJson.has_field(L"value")) { json::value value = responseJson[L"value"]; if (value.is_array()) return value; else return responseJson; } else return responseJson; } else return responseJson; } lastError = L"JSON response corrupted"; return json::value(); } In a nutshell, if the response JSON data is a 2-element object where one of the properties is called value and is itself an array, then it’s most likely we have just received the query results of one or more rows so we return the array directly; in all other cases we return the original response. If the JSON data is anything besides an object, it is probably corrupt data. Encapsulating JSON data We define a type JsonFields which is a simple mapping of keys to values using std::map as follows: typedef std::map<wstring, wstring> JsonFields; Unlike json::value, we can use a C++11 initializer list to populate this very easily; for example, to create a user profile JSON object we could write something like: JsonFields userProfile{ { L"UserName", UserName }, { L"FullName", FullName }, { L"Password", Password }, { L"Email", Email } }; JsonFields can be passed to various functions in the framework and are easily converted internally to json::value objects for sending an HTTP request as follows: JsonFields args...; ... json::value reqJson; for (auto &kv : args) reqJson[kv.first] = json::value::string(kv.second); wstring requestBody = reqJson.serialize(); Invalid URI errors Calling http_client::request() will throw a uri_exception if there is a problem with the supplied URI. We catch this as follows: try { return session.request(....).then(...); } catch (uri_exception &e) { lastError = utility::conversions::to_utf16string(e.what()); busy = false; return pplx::task_from_result(json::value()); } Note that we have to return a pplx::task, but when an error occurs there is no task to perform. Luckily we can use pplx::task_from_result(T value) to generate a task that simply returns the supplied value immediately. Retrieve only the first row matching a query You can add the OData directive $top=1 to a URL’s query string to fetch only the first matching row of a query, and then look at the first element of the array returned by SanitizeJSON above (the framework includes a function LightSwitchClient::QueryFirstRow() to do this for you). Updating table rows Although our registration and login example doesn’t require it, the framework also allows you to update rows with one or more changed fields. OData uses the HTTP PATCH method to do this. The HTTP request should be formed in the same way as for inserting rows but with one additional header: If-Match: * This is a requirement in LightSwitch and simply means that any matching entity (row) can be updated. The URL should point to the row or rows to be updated. To point to a single row in a LightSwitch application, the auto-generated Id field for each table is used as the primary key. Brackets are added to the table name to select a row by its primary key: will select the row for the user with Id 1234. The LightSwitchClient::Update(wstring table, int key, JsonFields fields) function in the framework will handle table row updates for you automatically. Note that on a successful update, the server will return 204 No Content with an empty response body. NOTE: HTTP PUT can also be used but this updates all the fields in a matching row, even if you don’t specify them in the request (in that case, they will be blanked). Deleting table rows Once again not called for in our example code but available in the framework, deleting rows uses the HTTP DELETE method and has the same URL and HTTP header requirements as for updating rows, but no request body needs to be specified as there is nothing to update. Deleting rows also returns 204 No Content with an empty response body from the server on success. Warning about row update/delete security Ensure that users can only modify table rows that they should be modifying! While in this case, users are restricted to viewing their own profile row and will encounter a 404 Not Found error if they try to access someone else’s, there is no harm in being paranoid! In the server code following on from part 3, I added the following business logic to the UserProfiles table (C#): partial void UserProfiles_CanDelete(ref bool result) { // Only allow administrators to delete users result = Application.Current.User.HasPermission(Permissions.SecurityAdministration); } Be careful though. In part 2 we allowed __userRegistrant to add users by performing a temporary privilege elevation. However, we implemented this in SaveChanges_Executing which actually runs before UserProfiles_CanDelete in the save pipeline, so as things stand now the delete will always be allowed. To fix this, move this line: Application.Current.User.AddPermissions(Permissions.SecurityAdministration); out of SaveChanges_Executing() and insert it at the beginning of UserProfiles_Inserting() instead. Game Network implementation We will now layer functions specific to our GameNetwork LightSwitch project from the rest of the series over the LightSwitchClient class. Creating a C++ struct to represent a JSON object Here is an example of how to create a struct that is easily convertible to and from a JSON object. The more adventurous among you may want to use type reflection to avoid having to write the ToJSON() and FromJSON() methods for every new type. struct UserProfile { wstring UserName; wstring Password; wstring FullName; wstring Email; int Id; JsonFields ToJSON() { return JsonFields{ { L"UserName", UserName }, { L"FullName", FullName }, { L"Password", Password }, { L"Email", Email } }; } static UserProfile FromJSON(json::value j) { if (j.is_null()) return UserProfile{}; UserProfile p{ j[L"UserName"].as_string(), j[L"Password"].as_string(), j[L"FullName"].as_string(), j[L"Email"].as_string(), j[L"Id"].as_integer() }; return p; } }; The code should be fairly self-explanatory, but note that – crucially – Id is defined last so that you can use an initializer list to create a new UserProfile without specifying an ID, since that will be automatically assigned by the LightSwitch server. WARNING: For reasons known only to Microsoft, trying to return a UserProfile created with an initializer list directly in FromJSON() crashes the Visual Studio 2013 C++ compiler and returns an empty struct with the November 2013 CTP compiler. This is why I create it in “p” first. If you declare Id as the first item in the struct, returning directly with an initializer list works as expected on both compilers. Game Network client implementation We define one method in GameNetwork for each possible action we want to perform on the server. In our example, we are creating a user and fetching a user’s profile so we need two methods. We also define callbacks that will trigger when a request completes, such that the main application knows a response has been received – this solves the signalling problem described earlier. The interface: // ================================================================================= // Handler functions // ================================================================================= typedef std::function<void(json::value)> ODataResultHandler; typedef std::function<void(UserProfile)> UserProfileHandler; // ================================================================================= // Game server functions // ================================================================================= class GameNetwork : public LightSwitchClient { public: GameNetwork() : LightSwitchClient(L"") {} pplx::task<UserProfile> GetProfile(UserProfileHandler f = nullptr); pplx::task<UserProfile> CreateUser(UserProfile profile, UserProfileHandler f = nullptr); }; With all the work we’ve done in LightSwitchClient, the actual implementation is remarkably simple – which is exactly what we want, because it is a breeze to add new methods!: pplx::task<UserProfile> GameNetwork::GetProfile(UserProfileHandler f) { return QueryFirstRow(L"UserProfiles").then([f](json::value j){ UserProfile p = UserProfile::FromJSON(j); if (f) f(p); return p; }); } pplx::task<UserProfile> GameNetwork::CreateUser(UserProfile profile, UserProfileHandler f) { SetUser(L"__userRegistrant", L"__userRegistrant"); return Insert(L"UserProfiles", profile.ToJSON()).then([f, profile](json::value j){ UserProfile p = UserProfile::FromJSON(j); p.UserName = profile.UserName; if (f) f(p); return p; }); } Let’s take a closer look at this. Fetch user profile Line 1 of the return statement fetches the first row from UserProfiles whose UserName field matches the name of the currently logged in user (it has to even without any query parameters, because in Part 2 we configured the server so that it would only return the current user’s profile row for security reasons), and fetches the row as a json::value. Line 2 converts the json::value into a UserProfile object. Line 3 calls the application-defined callback if one has been set. Line 4 returns the UserProfile object to the thread which created the task. Create new user Line 1 sets the current user to the special user registration account __userRegistrant which we defined in part 2. Line 2 converts the supplied new UserProfile object to a JsonFields object, inserts it into the database (which calls the UserProfiles table business logic we defined on the server to validate all the fields and update the ASP.NET Membership database at the same time, as well as assigning the new user to the Player role), and fetches the sever’s version of the new profile as a json::value. Line 3 converts the json::value into a UserProfile object. Line 4 sets the UserName field. This is important, because the application-defined callback may need it, but if an error occurred, the server will not return a new JSON profile object, so when the conversion takes place in line 3, the resulting UserProfile object will not have any of its fields populated. When a new user is created successfully, this line of code has no effect. Line 5 calls the application-defined callback if one has been set. Line 6 returns the UserProfile object to the thread which created the task. As you can see, adding new functions to the GameNetwork implementation will be trivially easy in most cases thanks to the dirty work being done in LightSwitchClient for us. Game interface implementation Now we turn to the final piece of puzzle: the game, which actually calls these functions in GameNetwork and does something with the results. Because all of the client-server logic is now abstracted away, we can now plug in whatever behaviours we want and re-use all of the previous code in any game or application. So let us now re-write the previous examples to use this new framework. The game client definition: class GameClient { GameNetwork cloud; void UserCreated(UserProfile profile); void ProfileReceived(UserProfile profile); public: void Run(); }; In this simple example, we define one method Run() which will be the actual main application code, and two callbacks which are called when a new user is created or a profile is fetched (or an error occurs trying to do either of these things). The full source code is available at the end of the article, but the relevant part of the Run() implementation is: void GameClient::Run() { ... wcout << std::endl << "Creating user..." << std::endl; cloud.CreateUser(UserProfile{ UserName, Password, FullName, Email }, std::bind(&GameClient::UserCreated, this, _1)); while (cloud.Busy()) { wcout << "."; Sleep(10); } wcout << "Fetching user profile..." << std::endl; cloud.SetUser(UserName, Password); cloud.GetProfile(std::bind(&GameClient::ProfileReceived, this, _1)); while (cloud.Busy()) { wcout << "."; Sleep(10); } } As you can see, we merely call GameNetwork::CreateUser() and GameNetwork::GetProfile() with appropriate arguments and sit back and wait until the work is done. Instead of blocking the thread with pplx::task::wait() as we did in the original examples, we now poll the GameNetwork object’s Busy() function repeatedly until it becomes false. For the sake of proving that the network code does in fact run in another thread, we print dots every 10ms until each request completes (note: you may notice when running this code that the order of output of text and dots on the console is not correct; this is because console writes are not atomic operations and therefore, not thread-safe and may be executed out of order. In a DirectX/OpenGL or Windows GUI application this will not be an issue). std::bind is used to set the callback to a method of an object instance. The syntax: std::bind(&MyClass::MyMethod, this, _1) can be used anywhere in C++ where you might need a function pointer that is a pointer to a member function of the calling object. The actual callback functions merely print out a friendly error message where possible if an error occurred, or the actual result of the server request if it completed successfully: void GameClient::UserCreated(UserProfile p) { wcout << std::endl; // Returns 201 when the user is created, 500 otherwise if (p.Id == 0) { wstring &errorCode = cloud.GetError(); if (errorCode == L"Microsoft.LightSwitch.UserRegistration.DuplicateUserName") wcout << "Username '" << p.UserName << "' already exists." << std::endl; else if (errorCode == L"Microsoft.LightSwitch.UserRegistration.PasswordDoesNotMeetRequirements") wcout << "Password does not meet requirements." << std::endl; else if (errorCode == L"Microsoft.LightSwitch.Extensions.EmailAddressValidator.InvalidValue") wcout << "Invalid email address supplied." << std::endl; else wcout << cloud.GetError() << std::endl; return; } wcout << "User '" << p.UserName << "' created successfully." << std::endl; } void GameClient::ProfileReceived(UserProfile p) { wcout << std::endl; if (p.UserName != L"") { wcout << "Logged in successfully" << std::endl; wcout << "Full name: " << p.FullName << std::endl; wcout << "Email : " << p.Email << std::endl; } else wcout << cloud.GetError() << std::endl; } Note the method of checking for errors: when creating a user, UserProfile::Id will be zero if creation failed; when fetching a user profile, UserProfile::UserName will be blank if the fetch failed. GameNetwork::GetError() (which inherits from LightSwitchClient::GetError()) is used to find the relevant error code or error message. In the case of LightSwitch error codes, the callback converts them into human-readable error messages. Example Output Here is how the final sample looks when you run it: Register new user Enter username: djkaty1 Enter password: [ELIDED] Enter full name: efwefjiwefjiweiof Enter email address: [ELIDED] Creating user... ..................................................... .................................. Username 'djkaty1' already exists. Fetching user profile... .................................... Logged in successfully Full name: Noisy Cow Email : some@email.com Wrapping up Now the low-level stuff is out of the way, we are ready to move on to integrating the client code with a graphical interface, which will be the subject of part 5. I’m still sick so please donate to my final wishes crowdfund if you found this article useful! Until next time! Source code and executable Download all of the source code and pre-compiled EXE for this article References Here are some pages I found useful while researching this article: Information Security: Is BASIC-Auth secure if done over HTTPS? MSDN Blogs: The C++ REST SDK (“Casablanca”) OData.org: Protocol Operations JSON Spirit: an alternative to JSON processing in the C++ REST SDK if you wish to use Boost Spirit MSDN Blogs: Creating and Consuming LightSwitch OData Services (Beth Massi) InformIT: Get to Know the New C++11 Initialization Forms Dr. Dobbs: Using the Microsoft C++ REST SDK Dr. Dobbs: JSON and the Microsoft C++ REST SDK That is some pretty sexy code; C++ 11 goodness everywhere 😮 I thoroughly appreciate this series of articles for both its content value _and_ its style ! Keep up the good work 🙂 Any updates, programming/life? 8 months later… there is going to be one posted tomorrow 🙂 Can I see credentials form http_request author ? Wow, I’m impressed, awesome write up, will have to visit here more often 🙂 P.S. I’ve been doing software development on and off (do hardware 1/2 time) about 30 years now. So, it takes a lot to impress me 🙂 Keep up the great work!
https://katyscode.wordpress.com/2014/04/02/lightswitch-for-games-part-4-odata-access-from-c-client-code-with-the-c-rest-sdk/
CC-MAIN-2018-17
en
refinedweb
Cutting Your Teeth on FMOD Part 4: Frequency Analysis, Graphic Equalizer, Beat Detection and BPM Estimation In this article, we shall look at how to do a frequency analysis of a playing song in FMOD and learn how to use this as the basis for a variety of useful applications. What is frequency analysis? Note: The SimpleFMOD library contains all of the source code and pre-compiled executables for the examples in this series. Frequency analysis is where we take some playing audio and identify the volume level of each range of frequencies being played. In a simple example, this lets us identify the current volume of the bass, mid-range and treble in a song individually, or any other desired range of frequencies. Stuff you don’t really need to know: The analysis is done using a process called Fast Fourier Transforms (or FFT), which looks back in time at all the recently played frequencies to build up a picture of the volume of each. The FFT covers the whole spectrum up to the sample rate of the song (typically 44.1kHz) – or specifically, the so-called nyquist rate of the song (half of the actual sample rate – the highest frequency which can be measured for the audio), but you can specify how many equal-sized ranges to break this up into. The number of ranges is known as the sample size. So, for example, a sample size of 100 on a song sampled at 44.1kHz will produce a bucket of 100 ranges, each covering 220.5Hz (that’s half the sample rate, divided by the sample size, ie. (44100 / 2) / 100). Therefore, the higher the sample size, the more accurate the measurement, but at the cost of lag since the FFT algorithm must search further back in time from the current playback position the more samples are taken into account. Luckily, FMOD takes care of all this for you and all you need to do is tell it what sample size you want and it will return a float array containing a breakdown of the volume of each frequency range. What is frequency analysis for? Frequency analysis is the lowest level audio processing that must be performed to enable various other functionality. For example, you can use the resulting data to detect when there is a beat in a song (or other specific simple sound types). Here I am mostly concerned with video games and graphics, and it is a quite common effect to sync on-screen effects with the beat of a song. Using beat detection lets us trigger these effects in time with the music. Once you have beat detection, you can then use the timing information of each beat to estimate the bpm (beats per minute) of the song. While this is generally unimportant for games, it is essential for high-level audio processing applications such as DJ tools which alter the bpm of two songs so they can be mixed (crossfaded) without a break in the music. In this article, we will mostly look at how to use beat detection to trigger on-screen effects in games. Retrieving the volume distribution Down to business then. First, and very importantly, to enable frequency analysis you must use the software mixing flag when creating the sound or stream you want to analyse. This causes FMOD to mix the audio for the channel in software rather than passing it to a hardware-accelerated sound card, but DSP operations such as frequency analysis are only allowed in FMOD when software mixing is used, so we have no choice. Create your sound or stream as follows: FMOD::Sound *song; system->createStream("Song.mp3", FMOD_SOFTWARE, 0, &song); // or createSound You will also need to retrieve the channel of the song once it starts playing as this is used as a handle to the frequency analysis function: FMOD::Channel *channel; system->playSound(FMOD_CHANNEL_FREE, song, true, &channel); All pretty familiar so far. You can now do a frequency analysis at any time. If you are planning to draw a graphic equalizer (VU bars) from the results, or use beat detection, you will want to do this on every frame, so in your per-frame update code, first update FMOD, then perform the analysis: // Per-frame FMOD update ('system' is a pointer to FMOD::System) system->update(); // getSpectrum() performs the frequency analysis, see explanation below int sampleSize = 64; float *specLeft, *specRight; specLeft = new float[sampleSize]; specRight = new float[sampleSize]; // Get spectrum for left and right stereo channels channel->getSpectrum(specLeft, sampleSize, 0, FMOD_DSP_FFT_WINDOW_RECT); channel->getSpectrum(specRight, sampleSize, 1, FMOD_DSP_FFT_WINDOW_RECT); The first step is to allocate memory to a float array, then pass a pointer to it to getSpectrum() in which to retrieve the volume distribution. The sample size must be a power of two in the range 64-8192. The third argument specifies which part of the audio to examine; for a stereo track, 0 represents the left channel and 1 the right channel. Here we retrieve the volume distribution for both. The fourth argument specifies a smoothing filter to use to help guard against false readings (false transients). FMOD_DSP_FFT_WINDOW_RECT uses a rectangular filter, which essentially means everything is allowed through. To get the average volume distribution for a stereo track we need to take the average of the left and right channels: float *spec; spec = new float[sampleSize]; for (int i = 0; i < sampleSize; i++) spec[i] = (specLeft[i] + specRight[i]) / 2; Depending on what you want to do with the data, you now have a choice. The floats returned by getSpectrum are in dB (decibels) with a range of 0-1 where 1 is the loudest possible output and 0 is silence. Many of these values may be quite low even when the music is playing at maximum volume, so you can optionally normalise (scale) the data such that the loudest frequency is always represented by 1. To do this, find the maximum of all the volumes returned and then divide this into each volume as follows: // Find max volume auto maxIterator = std::max_element(&spec[0], &spec[sampleSize]); float maxVol = *maxIterator; // Normalize if (maxVol != 0) std::transform(&spec[0], &spec[sampleSize], &spec[0], [maxVol] (float dB) -> float { return dB / maxVol; }); There are many ways to implement this but the combination of the C++ Standard Library template functions and a lambda function in C++11 is quite neat. The transform function performs an in-place transform of each volume to scale it relative to the maximum volume for the distribution. Again optionally, you can calculate the range in Hz of frequencies covered by each array entry as follows, if you need it: float hzRange = (44100 / 2) / static_cast(sampleSize); Don’t forget to change 44100 to the sample rate (in Hz) of the audio. You can now do whatever you want with the data. Don’t forget to clean up at the end of your function: delete [] spec; delete [] specLeft; delete [] specRight; Plotting a graphic equalizer (VU bars) Figure 1. VU bars. In this case the sample size is 128 on a track sampled at 44.1kHz, therefore there are 128 bars representing a range of 172.266Hz each. The lowest frequency ranges are shown on the left, with the height of each bar representing its average volume. VU bars are usually represented by a row of rectangles (or lines) with a common bottom Y co-ordinate, with the height of each rectangle representing the volume of the frequency range it represents – see Figure 1. How you plot this depends on the graphics engine you are using. If you want to ensure that at least one bar (the loudest) is always at the maximum height, you should normalize the frequency data before plotting. Here is some example code for my Simple2D library which uses 80% of the screen width with a 10% border at the left and right, and scales the width and gap between each bar according to the sample size: // Earlier in code... freqGradient = MakeBrush(Colour::Green, Colour::Red); ... // VU bars int blockGap = 4 / (sampleSize / 64); int blockWidth = static_cast((static_cast(ResolutionX) * 0.8f) / static_cast(sampleSize) - blockGap); int blockMaxHeight = 200; // Parameters: Left-hand X co-ordinate of bar, left-hand Y co-ordinate of bar, width of bar, height of bar (negative to draw upwards), paintbrush to use for (int b = 0; b < sampleSize - 1; b++) FillRectangleWH(static_cast(ResolutionX * 0.1f + (blockWidth + blockGap) * b), ResolutionY - 50, blockWidth, static_cast(-blockMaxHeight * spec[b]), freqGradient); This produces a display identical to that shown in figure 1 when placed in your application’s DrawScene() function. Beat detection The principle of detecting when a beat occurs in the music is to examine a low frequency range (where the percussion occurs – typically 60-120Hz for a bass kick drum and 120-150Hz for a snare drum) to see if its volume exceeds a certain threshold value. In the following example, we simply consider the beat to have occurred when this threshold is exceeded, then ignore the volume of the track for a given period of time afterwards to avoid false positives. We also simply examine the lowest bar in the volume distribution which works for small sample sizes but will be looking at too low a frequency range for larger sample sizes. Ideally you should look at the frequency ranges above, but I leave that as an exercise for you. With a sample size of 128 and a track sampled at 44.1kHz, looking at the first item in the array will cover all frequencies from 0-172Hz, so it’s a reasonable estimate. A more sophisticated approach may aggregate the average of several bars from a larger sample size, and require the threshold to be exceeded for more than a single frame. Normalization should not be used when coding for beat detection, as it disproportionately distorts the volume distribution during quiet periods of the music. We need to set some variables to start with. Through trial and error, I found that these work reasonably well for dance music: float beatThresholdVolume = 0.3f; // The threshold over which to recognize a beat int beatThresholdBar = 0; // The bar in the volume distribution to examine unsigned int beatPostIgnroe = 250; // Number of ms to ignore track for after a beat is recognized int beatLastTick = 0; // Time when last beat occurred Here is how we detect a beat: bool beatDetected = false; // Test for threshold volume being exceeded (if not currently ignoring track) if (spec[beatThresholdBar] >= beatThresholdVolume && beatLastTick == 0) { beatLastTick = GetTickCount(); beatDetected = true; } if (beatDetected) { // A beat has occurred, do something here } // If the ignore time has expired, allow testing for a beat again if (GetTickCount() - beatLastTick >= beatPostIgnore) beatLastTick = 0; This code can of course be adapted in a variety of ways, but as it stands, beatLastTick will retain the system tick time of the last detected beat while the audio is being ignored, and 0 at all other times. On the first frame that the beat is detected, the trigger code will execute and you can generate on-screen effects or user interactions here. BPM Estimation While not really needed for game development, I thought it would be interesting to include this by way of example. At the start, it should be noted that there are many ways to use beat detection data to calculate bpm – some more accurate than others – and the example below only produces makeshift estimates; it is not the best method. Also, bpm estimation relies on perfect beat detection, which is unlikely to be produced by the unsophisticated code above. The correct way of determining a song’s bpm: The song should be scanned from start to end in memory without playing it, and a frequency analysis performed on each frame. Beat detection should be performed on each frame from the frequency analysis results, and the relative times of every beat in the song stored in an array. Heuristics should be used to ignore portions of the song with no beat, and the remaining portion of the song should have its playback time divided into the number of beats. This will provide a fairly accurate calculation of the song’s true bpm. In our example, we will use a moving average bpm estimation over the previous 10 seconds of the playing track. First, we’ll need storage for the times of all the beats for the last 10 seconds: // List of how many milliseconds ago the last beats were std::queue beatTimes; // The number of milliseconds of previous beats to keep in the list unsigned int beatTrackCutoff = 10000; We’ll then need to modify the beat detection code above to keep the list of detected beats up to date: if (spec[beatThresholdBar] >= beatThresholdVolume && beatLastTick == 0) { beatLastTick = GetTickCount(); beatDetected = true; // Store time of detected beat beatTimes.push(beatLastTick); // Remove oldest beat if it is older than the cut-off time while(GetTickCount() - beatTimes.front() > beatTrackCutoff) { beatTimes.pop(); if (beatTimes.size() == 0) break; } } The rest of the code remains the same. Now we have our updating list of beat times, we can do a simple calculation to estimate the track’s bpm: // Predict BPM float msPerBeat, bpmEstimate; if (beatTimes.size() >= 2) { msPerBeat = (beatTimes.back() - beatTimes.front()) / static_cast(beatTimes.size() - 1); bpmEstimate = 60000 / msPerBeat; } else bpmEstimate = 0; What happens here is that we take the times of the oldest and newest beats in the list, and divide them by the size of the list minus one (which is the number of gaps between the beats rather than the number of beats) to get the average number of milliseconds between each beat. We then divide this into one minute (60000 milliseconds) to get the estimate of the bpm, which is stored in bpmEstimate. Demo application Here you can download a fully fleshed out demo using all the techniques above plus a few other tweaks. The full source code is presented below – note that Simple2D is used for rendering so the application follows the Simple2D framework. Please forgive the choice of music 🙂 To try the application, first press P to unpause the music after running the EXE file. You can use N to toggle normalization (beat detection and bpm estimation only run when normalization is off) and 1 and 2 to increase and decrease the FFT sample size. When normalization is off, the word ‘BEAT’ flashes on the screen when a beat is detected, and the current bpm estimation is shown at the bottom of the screen. Full source code: // FMOD Frequency Analysis demo // Written by Katy Coe (c) 2013 // No unauthorized copying or distribution // #include "../SimpleFMOD/SimpleFMOD.h" #include "Simple2D.h" #include <queue> using namespace SFMOD; using namespace S2D; class FrequencyAnalysis : public Simple2D { private: // FMOD SimpleFMOD fmod; Song song; // Graphics TextFormat freqTextFormat; Gradient freqGradient; // Normalization toggle and sample size bool enableNormalize;: FrequencyAnalysis(); void DrawScene(); virtual bool OnKeyCharacter(int, int, bool, bool); }; // Initialize application FrequencyAnalysis::FrequencyAnalysis() { // Make paintbrushes freqTextFormat = MakeTextFormat(L"Verdana", 10.0f); freqGradient = MakeBrush(Colour::Green, Colour::Red); song = fmod.LoadSong("Song.mp3", FMOD_SOFTWARE); song.Start(true); // Load song enableNormalize = true; sampleSize = 64; // Set beat detection parameters beatThresholdVolume = 0.3f; beatThresholdBar = 0; beatSustain = 150; beatPostIgnore = 250; beatTrackCutoff = 10000; beatLastTick = 0; beatIgnoreLastTick = 0; musicStartTick = 0; } // Handle keypresses bool FrequencyAnalysis::OnKeyCharacter(int key, int rc, bool prev, bool trans) { // Toggle pause if (key == 'P' || key == 'p') { song.TogglePause(); // Reset bpm estimation if needed if (musicStartTick == 0 && !enableNormalize && !song.GetPaused()) { musicStartTick = GetTickCount(); beatTimes.empty(); } else if (song.GetPaused()) musicStartTick = 0; } // Toggle normalization if (key == 'N' || key == 'n') { enableNormalize = !enableNormalize; // Reset bpm estimation if needed if (!enableNormalize && !song.GetPaused()) { musicStartTick = GetTickCount(); beatTimes.empty(); } } // Decrease FFT sample size if (key == '1') sampleSize = max(sampleSize / 2, 64); // Increase FFT sample size if (key == '2') sampleSize = min(sampleSize * 2, 8192); return true; } // Per-frame code void FrequencyAnalysis::DrawScene() { // Update FMOD fmod.Update(); // Frequency analysis float *specLeft, *specRight, *spec; spec = new float[sampleSize]; specLeft = new float[sampleSize]; specRight = new float[sampleSize]; // Get average spectrum for left and right stereo channels song.GetChannel()->getSpectrum(specLeft, sampleSize, 0, FMOD_DSP_FFT_WINDOW_RECT); song.GetChannel()-; // Normalize if (enableNormalize && maxVol != 0) std::transform(&spec[0], &spec[sampleSize], &spec[0], [maxVol] (float dB) -> float { return dB / maxVol; }); // Find frequency range of each array item float hzRange = (44100 / 2) / static_cast<float>(sampleSize); // Detect beat if normalization disabled if (!enableNormalize) { Text(10, 10, "Press P to toggle pause, N to toggle normalize, 1 and 2 to adjust FFT size", Colour::White, MakeTextFormat(L"Verdana", 14.0f)); Text(10, 30, "Sample size: " + StringFactory(sampleSize) + " - Range per sample: " + StringFactory(hzRange) + "Hz - Max vol this frame: " + StringFactory(maxVol), Colour::White, MakeTextFormat(L"Verdana", 14.0f)); // BPM estimation if (!enableNormalize) {)); } else Text(10, ResolutionY - 20, "Disable normalization to enable BPM calculation", Colour::White, MakeTextFormat(L"Verdana", 14.0f)); // Numerical FFT display int nPerRow = 16; for (int y = 0; y < sampleSize / nPerRow; y++) for (int x = 0; x < nPerRow; x++) Text(x * 40 + 10, y * 20 + 60,() { FrequencyAnalysis test; test.SetWindowName(L"FMOD Frequency Analysis"); test.SetBackgroundColour(Colour::Black); test.SetResizableWindow(false); test.Run(); } I hope you found this exploration of frequency analysis in FMOD useful! In Part 5 we’ll check out how to generate audio on the fly from user-defined functions. If you want to do frequency analysis of real-time sound card output, check out Part 6 for the details! Enjoy. I’m concerned about your paramenter resolutionX resoltionY what does it mean? The screen or render target width and height in pixels. It is used to determine the width, maximum height and horizontal gap between each VU bar. You are free to ignore the example calculations and define the VU bar’s dimensions in any way you wish. The example code ensures that it auto-scales to the screen resolution. Thanks for your reply!! I have 4 questions X(. Sorry for the disturbance. T_T 1. When I do GetSpectrum() does this function gets the size of DB by the range of 0 ~ 44100 by itself(default)? or do i need to modify the range? 2. Right now I can modify the number of bars by changing the parameter sampleSize. If 1 bar range is 0Hz to 2Hz (total 3Hz) does it add up all the values (For Instance 0Hz : 1 , 1Hz : 0.5 , 2Hz : 0) and divides it in to 3? ( 1 + 0.5 + 0 ) /3 ? 3. Is the value of spec[i] maximum : 1.0 and minimum : 0.0? 4. Right now when I look at my window media Equalizer the height of the bars are quite similar. However, the example above the heights are way to different. Therefore, does this mean that the Window media’s equalizer is showing only a part of Hz, or is it modifed some how? You’re not disturbing me, I just don’t always have time to reply on the blog 🙂 1. getSpectrum gets the spectrum up to the recorded sample rate of the audio you have loaded. So if your audio is 96kHz, it will return the values for that range, if it is 44.1kHz, it will return the values for that range etc. 2. I believe that is how it works, dB is not a linear measurement but a logarithmic one so the way the values are averaged may be logarithmic rather than a linear division (normal average) but in any case, the output is some kind of average across all the frequencies from 0 to 1. 3. Yes 4. I don’t know how the windows EQ works but if I had to guess I would say that the VU bars shown there do not have normalized values. The example in the article shows both normalized and non-normalized versions. The normalized version will generally have sharper differences between the height of each bar because the highest volume across the bars is searched for, multiplied up to become 1.0 and all the other bars are multiplied by the same value, so any differences between them become exaggerated. Hope that helps. Thanks for your help!! A lot of enhancements done in my project. I’m trying to normalize but I can’t go through with the code below. std::transform(&spec[0], &spec[sampleSize], &spec[0], [maxVol] (float dB) -> float { return dB / maxVol; }); 1. I’ve never learned [maxVol] (float dB) -> float { return dB / maxVol; before… what does it mean? Finding the spec[0] from spec[sampleSize], when spec[0] is found and I don’t know what happens after. 2. I’ve also tried the Beat Detection in one of my songs. But the detecting accuracy seemed to be low. How can I make the accuracy increase? Of course, I did some googling and got a conclusion that smoothing and some mathematical process is needed. Is there another solution? I am Presently Getting Spectrum of 1024 pieces and Adding it all up, which is perhaps called the sound energy? and comparing the previous energy and the present energy every frames. 1. This is a lambda function using C++11 syntax. It is equivalent to: float calculateVolume(float maxVol, float dB) { return dB / maxVol; } … std::transform(&spec[0], &spec[sampleSize], &spec[0], std::bind(calculateVolume, maxVol, std::placeholders::_1)); (and other permutations) but is much neater. What the call to std::transform – part of the standard C++ STL library – does is to iterate over each item in spec[], starting from 0 and finishing at sampleSize-1 (ie. every item in the array), dividing each value by maxVol and replacing the original value, ie: spec[0] /= maxVol; spec[1] /= maxVol; … You could equally write it like this: for (int i = 0; i < sampleSize; i++) spec[i] /= maxVol; See my article elsewhere on the site about C++11 lambda functions for more details on both the lambda syntax and std::transform. 2. The example above for beat detection is very crude and will only work well on eg. dance music with a clear beat. Comparing the energy across the entire sample range won't work because you are looking at all the frequencies at once; what you really want to do is just look at frequencies in the range of a bass drum etc. and compare the energy in that narrow band of the spectrum. The best solution as you mentioned is to use a smoothing function and monitor the small desired range of frequencies over several frames, being careful not to average over too many frames as that would introduce lag into the beat detection. I hope that helps! Katy. Is there any way for this to be worked on for a mac? It should do if you download the Mac version of FMOD, but I’m afraid I don’t know the specifics, only the Windows API. You will need to go through the FMOD Mac documentation and make the appropriate changes. Thanks so much for writing this article. I started working on a music visualizer a couple months ago and finished the graphics engine. The thing is I’ve never done any audio coding. Google was not much help with this. I was about to start using VAMP and was cringing at the thought of having to write a host application and using a bunch of extra libraries to decode compressed audio. I’m so glad I did one more round of google searches and found your tutorial. I’ve got the basic stuff up now and it got me on track to doing what I want to do. Finally all that audio theory stuff from school is starting to be useful! I have one question. Is there any way to get the frequency of a mp3 so that I don’t have to hardcode 44khz and expect all files to be that way? I saw elsewhere you can determine it by taking the filesize and comparing it to the length of the song but I imagine that wouldn’t work for VBR files. Does FMOD have any facilities for returning the frequency of a file? thanks again! Once you have loaded a sound with System::createSound (or createStream) you can call Sound::getDefaults on the FMOD::Sound handle returned by System::createSound to get the default frequency, eg. FMOD::Sound *sound; … float frequency; sound->getDefaults(&frequency); Hope that helps! Katy Hi, Katy, I followed your tutorial up until the step you output the data because I will use it for a different purpose. I’m interested in finding the frequency of the wav file I opened with FMOD. For testing purpose, I generated a SINE wav file with 440 Hz, sampled at 8kHz online. From the formula you provided above: freqRange = (8000 / 2) / sampleSize, my sampleSize = 1024; therefore, freqRange = 3.9 hz per bin; since I already know the frequency of the wav file I’m inputting, I’m expecting to have larger value around bin #113 compared to other bins. But What I got was not what I expected, I got the largest value in bin # 512 = 3.469e-9 and second largest at bin # 256 = 3.205e-9. Do you have what could be the cause of it? Here is my test code: FMOD::Sound *sine_440hz; result = system->createSound(“C:\\Users\\hp\\Desktop\\sin_440Hz.wav”, FMOD_SOFTWARE, 0, &sine_440hz); FMODErrorCheck(result); FMOD::Channel *channel1; result = system->playSound(FMOD_CHANNEL_FREE, sine_440hz, true, &channel1); FMODErrorCheck(result); int sampleSize = 1024; float maxAmp = 0.0; int binNumb = 0; float *specturm; float SampleThres = 3.12 * pow(10.0, -9.0); specturm = new float[sampleSize]; channel1->getSpectrum(specturm, sampleSize, 0, FMOD_DSP_FFT_WINDOW_RECT); cout << "bin:\t" << "value: " << endl; for(int i = 1; i SampleThres){ cout << i << "\t" << specturm[i] << endl; } } Please help. Thank you for(int i = 1; i SampleThres){ cout << i << "\t" << specturm[i] << endl; } } The last part I was messed up for some reason. Above is the correct one I used. it won’t let me post the code I used. but here is the idea for(int i = 1; i less than sampleSize ; i increment by 1){ if(specturm array data less than SampleThres){ cout << i << "\t" << specturm[i] << endl; } } If you are running this code as shown then you’re going to have problems. The code itself is correct, but you are fetching the spectrum immediately after the song starts playing, possibly before it starts playing, so you will get inaccurate results. The FFT calculation performed by getSpectrum uses historical data (ie. the last ‘sampleSize’ frames of playback) to calculate the amplitude in each frequency range, so it won’t return any meaningful data until the first ‘sampleSize’ samples have been played. Additionally, you need to call System::update repeatedly in a loop (usually once per frame of animation in a game will suffice, for example) to update FMOD’s internal state, otherwise many functions including getSpectrum won’t return the expected results. Hope that helps! Hello, I’m trying to make some kind of beat detection using FMOD for some personal projects of mine, so your post is very, very useful and I’ll dig seriously into it this week, as I currently use a temporal approach to detect beats (it doesn’t work well). I’m trying to find the best way to use frequency to do the same thing, and quickly looking at your results and testing your demo app with some more musics convinced me FMOD could get me where I needed. However, I have a slight restriction, in the fact I need to make my beat detection BEFORE reading the music (I actually want to generate a file containing the meaningful data I can gather from automated analysis). All I’ve read about the getSpectrum function in FMOD tend to let me think it can only work if the sound is currently playing. Would you have a clue about how to work around this? Do you think reading the sound at high speed (like, 4x faster) would allow me to do it? While this is not a problem for the file generation thing I’m wanting to do right now, I’d like to reuse my system in a game that generates levels procedurally (using information extracted from a music file), and waiting a whole song time for the level to get ready isn’t really an option. Thanks for this article anyway, it will be really helps! I don’t know a way to do this. In fact the next part of the series will be about doing the same frequency analysis as here, but on the live sound card output, and even for that I had to record then silently playback the output signal at the same time, calling getSpectrum() on the played back signal. Looking at the “pitch bend” example that comes with the FMOD SDK, they do it the same way. So, I really don’t know how to do it in non-real-time using FMOD I’m afraid. Hi Katy, Thanks very much for the excellent article! I’m attempting to work on a prototype that requires running FFT (and probably other algorithms) on the entire audio file before drawing anything with the data. Is this sort of iteration possible with FMOD in non-realtime? I’ve read elsewhere that perhaps createSound (for loading the file into memory) and seekData() are possible avenues for me to look through. What do you think? Thank you in advance for any help! As far as I can see (and I’m not an expert on FMOD by any means), you can only FFT sounds that are playing (see the problems I had with this in part 6 about recording sound card audio). You can of course use an external FFT library to do the processing. I’ve heard a few people mention Arduino lately, and StackOverflow provides some suggestions () on others you could investigate. Hope that helps! Hi Katy, I’m a little confused – in standard provided FMOD examples, function update() is always placed inside do while function so every loop it would be called, while in your code there is no such encapsulation. So my question is – how does your code works if, as in my understanding, there is only single call of update() function in whole program runtime. Does your code in some magical way loops itself? ^^ Winged The code in the example is in fact in a while loop – you just don’t see it because of the wrapper library I used (Simple2D – see elsewhere on the site). The main while loop of the application polls the Windows message pump, and periodically calls DrawScene() – once “per frame” – where a frame is an arbitrary amount of time of your choosing, or in a game or visual application, as fast as possible after the previous frame is rendered. So in other words, DrawScene() gets called repeatedly and often, triggering the call to FMOD’s update() function and the frequency analysis. Hope that makes sense! – Katy It fully answers my question. It’s just a pity that this wrapper is only available in Windows version (or maybe I haven’t noticed the Linux one). Anyway, thank you for a really badass tutorial here ;] You don’t need the wrapper, it’s just to make the example simpler and only deal with FMOD-related code and not boilerplate stuff specific to Windows (or Linux). Sorry, for bothering you, but recently I’ve encountered uninteligible problems which I cannot overcome – the first one is that channel 0 is completely silent, while channel 1 is working ‘fine’ (getSpectrum of channel 0 returns nothing to an array). The second one is that I cannot set sampleSize below 8192 (on 4096 spectrum readings are ‘choppy’ and below this value they won’t ‘show’ at all). Could you point me a hint what may be wrong? Sounds like a bug in your sound card driver – I had almost identical problems with the Creative Audigy 2 ZS drivers. Use a recording program like Audacity and a sine wave generator (eg. YouTube video of sine wave sound), record it and see if it records a clean sine wave or not. If so, the problem is probably with your code; if not, the problem is your sound card driver. Good luck! Katy. how can change the bass/treble/other frequency sound in fmod…? i want about equalizer , not analyzer…. Awesome!!! Thank you very much, this really helps while diving in to FMOD FTT audio analysis. Great article.
https://katyscode.wordpress.com/2013/01/16/cutting-your-teeth-on-fmod-part-4-frequency-analysis-graphic-equalizer-beat-detection-and-bpm-estimation/
CC-MAIN-2018-17
en
refinedweb
Asked by: Windows 10 Group Policy Issue Hi, Not sure if there is another thread for this... but we are having the issue described in this spiceworks forum. Computer policy could not be updated successfully. The following errors were encountered: The processing of Group Policy failed. Windows attempted to read the file \\domainname.local\SysVol\domainname.local\Policies\{7FF124FD-A2DC-4F70-BAB1-9B17F4754C1E}\gpt.ini from a domain controller and was not successful Will there be a proper update to sort this out? The registry entries make it work in the interim. Kind Regards, John - Edited by Brandon RecordsMicrosoft contingent staff Thursday, December 03, 2015 8:38 PM Made link clickable. - Moved by Brandon RecordsMicrosoft contingent staff Thursday, December 03, 2015 8:38 PM Moved to more appropriate forum. Question All replies Hi John, This issue can be caused by Remote Registry and DFS namespace services on the server. Please check the status of these two services and make sure they are set to start automatically. Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact tnmff@microsoft.com. Hi, I've checked these services on both our DC's and they are fine. Setting the registry items as per the spiceworks article sorts it, but surely that can only be a temporary fix... The issue is only with Windows 10 machines.... Windows 7 and 8.1 are fine. Kind Regards, John Hi John, As can been seen from the link mentioned in your post, it is a known issue and for now, there is no solution yet. We will keep an eye on this and wait for the release of a solution. Please remember to mark the replies as answers if they help and un-mark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com. - Proposed as answer by Wendy JiangMicrosoft contingent staff, Moderator Thursday, December 10, 2015 6:08 AM - Unproposed as answer by Amy Wang_Microsoft contingent staff, Moderator Thursday, December 10, 2015 6:49 AM Hi, Just a thought although it is apparently a known bug in windows 10 for UNC hardening to be enabled surely it should work... Just a thought... Our domain functional level is still server 2003 as when I put our last domain controller into action (we currently have server 2012 and server 2008 R2) we were getting rid of our last server 2003 servers. It would seem we are also using FRS rather than DFS.... Does UNC Hardening need DFS? Kind Regards, John
https://social.technet.microsoft.com/Forums/en-US/24f7c3ed-1333-43c2-a7be-721dce0f163e/windows-10-group-policy-issue?forum=winserverGP
CC-MAIN-2018-17
en
refinedweb
This guide focuses on the AWS SDK for PHP client for Amazon Elastic Compute Cloud. This guide assumes that you have already downloaded and installed the AWS SDK for PHP. See Installation for more information on getting started. First you need to create a client object using one of the following techniques. The easiest way to get up and running quickly is to use the Aws\Ec2\Ec2Client::factory() method and provide your credential profile (via the profile option), which identifies the set of credentials you want to use from your ~/.aws/credentials file (see Using the AWS credentials file and credential profiles). A region parameter is required. You can find a list of available regions using the Regions and Endpoints reference. use Aws\Ec2\Ec2Client; $client = Ec2Client::factory(array( 'profile' => '<profile in your aws credentials file>', 'region' => '<region name>' )); You can provide your credential profile like in the preceding example, specify your access keys directly (via key and secret), or you can choose to omit any credential information if you are using AWS Identity and Access Management (IAM) roles for EC2 instances or credentials sourced from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. Note The profile option and AWS credential file support is only available for version 2.6.1 of the SDK and higher. We recommend that all users update their copies of the SDK to take advantage of this feature, which is a safer way to specify credentials than explicitly providing key and secret. A more robust way to connect to Amazon Elastic Compute Cloud is through the service builder. by namespace $client = $aws->get('Ec2'); For more information about configuration files, see Configuring the SDK. Please see the Amazon Elastic Compute Cloud Client API reference for a details about all of the available methods, including descriptions of the inputs and outputs.
https://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-ec2.html
CC-MAIN-2018-17
en
refinedweb
Installation Guide¶ This page gives instructions on how to build and install the xgboost package from scratch on various systems. It consists of two steps: - First build the shared library from the C++ codes ( libxgboost.sofor linux/osx and libxgboost.dllfor windows). - Exception: for R-package installation please directly refer to the R package section. - Then install the language packages (e.g. Python Package). Important the newest version of xgboost uses submodule to maintain packages. So when you clone the repo, remember to use the recursive option as follows. git clone --recursive For windows users who use github tools, you can open the git shell, and type the following command. git submodule init git submodule update Please refer to Trouble Shooting Section first if you had any problem during installation. If the instructions do not work for you, please feel free to ask questions at xgboost/issues, or even better to send pull request if you can fix the problem. Contents¶ Python Package Installation¶ The python package is located at python-package. There are several ways to install the package: Install system-widely, which requires root permission cd python-package; sudo python setup.py install You will however need Python distutilsmodule for this to work. It is often part of the core python package or it can be installed using your package manager, e.g. in Debian use sudo apt-get install python-setuptools NOTE: If you recompiled xgboost, then you need to reinstall it again to make the new library take effect Only set the environment variable PYTHONPATHto tell python where to find the library. For example, assume we cloned xgbooston the home directory ~. then we can added the following line in ~/.bashrc. It is recommended for developers who may change the codes. The changes will be immediately reflected once you pulled the code and rebuild the project (no need to call setupagain) export PYTHONPATH=~/xgboost/python-package Install only for the current user. cd python-package; python setup.py develop --user If you are installing the latest xgboost version which requires compilation, add MinGW to the system PATH: import os os.environ['PATH'] = os.environ['PATH'] + ';C:\\Program Files\\mingw-w64\\x86_64-5.3.0-posix-seh-rt_v4-rev0\\mingw64\\bin' R Package Installation¶ You can install R package from cran just like other packages, or you can install from our weekly updated drat repo: install.packages("drat", repos="") drat:::addRepo("dmlc") install.packages("xgboost", repos="", type = "source") If you would like to use the latest xgboost version and already compiled xgboost, use library(devtools); install('xgboost/R-package') to install manually xgboost package (change the path accordingly to where you compiled xgboost). For OSX users, single threaded version will be installed, to install multi-threaded version. First follow Building on OSX to get the OpenMP enabled compiler, then: Set the Makevarsfile in highest piority for R. The point is, there are three Makevars: ~/.R/Makevars, xgboost/R-package/src/Makevars, and /usr/local/Cellar/r/3.2.0/R.framework/Resources/etc/Makeconf(the last one obtained by running file.path(R.home("etc"), "Makeconf")in R), and SHLIB_OPENMP_CXXFLAGSis not set by default!! After trying, it seems that the first one has highest piority (surprise!). Then inside R, run install.packages("drat", repos="") drat:::addRepo("dmlc") install.packages("xgboost", repos="", type = "source") Due to the usage of submodule, install_github is no longer support to install the latest version of R package. To install the latest version run the following bash script, git clone --recursive cd xgboost git submodule init git submodule update alias make='mingw32-make' cd dmlc-core make -j4 cd ../rabit make lib/librabit_empty.a -j4 cd .. cp make/mingw64.mk config.mk make -j4 Trouble Shooting¶ Compile failed after git pull Please first update the submodules, clean all and recompile: git submodule update && make clean_all && make -j4 Compile failed after config.mkis modified Need to clean all first: make clean_all && make -j4 Makefile: dmlc-core/make/dmlc.mk: No such file or directory We need to recursively clone the submodule, you can do: git submodule init git submodule update Alternatively, do another clone git clone --recursive
http://xgboost.readthedocs.io/en/latest/build.html
CC-MAIN-2017-04
en
refinedweb
Essentials All Articles What is LAMP? Linux Commands ONLamp Subjects Linux Apache MySQL Perl PHP Python BSD ONLamp Topics App Development Database Programming Sys Admin From data storage to data exchange and from Perl to Java, it's rare to write software these days and not bump into XML. Adding XML capabilities to a C++ application, though, usually involves coding around a C-based API. Even the cleanest C API takes some work to wrap in C++, often leaving you to choose between writing your own wrappers (which eats up time) or using third-party wrappers (which means one more dependency). Adopt the Xerces-C++ parser and you can skip these middlemen. This mature, robust toolkit is portable C++ and is available under the flexible Apache Software License (version 2.0). Xerces' benefits extend beyond its C++ roots. It gives you a choice of SAX and DOM parsers, and supports XML namespaces. It also provides validation by DTD and XML schema, as well as grammar caching for improved performance. This article uses the context of loading, modifying, and storing an XML config file to demonstrate Xerces-C++'s DOM side. My first example shows some raw code for reading XML. Then I revise it a couple of times to address deficiencies. The last example demonstrates how to modify the XML document and write it back out to disk. Along the way, I've made some helper classes that make using Xerces a little easier. My next article will cover SAX and validation. I compiled the sample code under Fedora Core 3/x86 using Xerces-C++ 2.6.0 and GCC 3.4.3. The Document Object Model (DOM) is a specification for XML parsing designed with portability in mind. That is, whether you're using Perl or Java or C++, the high-level DOM concepts are the same. This eases the learning curve when moving between DOM toolkits. (Of course, implementations are free to add special features and convenience above and beyond the requirements of the spec.) DOM represents an XML document as a tree of nodes (Xerces class DOMNode). Consider Figure 1, an XML document of some airport information. DOM sees the entire document as a document node (DOMDocument), the only child of which is the root <airports> element node (DOMElement). Were there any document type declarations or comments at this level, they would also be child nodes of the document node. DOMNode DOMDocument <airports> DOMElement Figure 1. The DOM of an XML document The <airport> element is a child node of <airports>. Its only attribute, name, is an attribute node (DOMAttr). <airport> children include the <aliases>, <location>, and <comment> elements. <comment> has a child text node (DOMText), which contains the string "Terminal 1 has a very 1970's sci-fi decor." > You can create, change, or remove nodes on this object representation of your document, then write the whole thing--comments included--back to disk as well-formed XML. DOM requires that the parser load the entire document into memory at once, which can make handling large documents very memory intensive. For small to midsize XML documents, though, DOM offers portable read/modify/write capabilities to structured data when a full relational database (such as PostgreSQL or MySQL) is overkill. I prefer to explain this with source code. I will share some code excerpts inline, but as always, the complete source code for the examples is available for download. The program step1 represents a portion of a fictitious report viewer. The config file tracks the time of its most recent modification, the user's login and password to the report system, and the last reports the user ran. Here's a sample of the config file: step1 <config lastupdate="1114600280"> <login user="some name" password="cleartext" /> <reports> <report tab="1" name="Report One" /> <report tab="2" name="Report Two" /> <report tab="3" name="Third Report" /> <report tab="4" name="Fourth Report" /> <report tab="5" name="Fifth Report" /> </reports> </config> (Xerces also supports XML namespaces, though the sample code doesn't use them.) The first thing to notice about step1 is the number of #included headers. Xerces has several header files, roughly one per class or concept. Some such projects have one master header file that includes the others. You could write one yourself, but including just the headers you need may speed up your build process. () xercesc::XMLPlatformUtils::Initialize(); // ... regular program ... xercesc::XMLPlatformUtils::Terminate(); Your code must call Initialize() before using any Xerces classes. In turn, attempts to use Xerces classes after the call to Terminate() will yield a segmentation fault. Initialize() may throw an exception, so I've wrapped it in a try/catch block. Notice the call to XMLString::transcode() in the catch section:.
http://www.linuxdevcenter.com/pub/a/onlamp/2005/09/08/xerces_dom.html?page=4&x-order=date
CC-MAIN-2017-04
en
refinedweb
I am using Hibernate and getting Exception in thread "main" org.hibernate.ObjectNotFoundException: No row with the given identifier exists: [#271] public class ProblemClass implements Persistent { @ManyToOne(optional = false) private MyDbObject myDbObject; } public class MyDbObject implements Persistent { @OneToMany(mappedBy = "myDbObject") private List<ProblemClass> problemClasses; @ManyToOne(optional = false) private ThirdClass thirdClass; } public List<ProblemClass> getProblemClasses() { Query query = session.createQuery("from ProblemClass"); return query.list(); } public void save(Persistent persistent) { session.saveOrUpdate(persistent); } Eureka, I found it! The problem was the following: The data in the table ThirdClass was not persisted correctly. Since this data was referenced from MyDbObject via optional = false Hibernate made an inner join, thus returning an empty result for the join. Because the data was there if executed in one session (in the cache I guess), that made no problems. MySQL does not enforce foreign key integrity, thus not complaining upon insertion of corrupt data. Solution: optional = true or correct insertion of the data.
https://codedump.io/share/YbpHNCShlxhq/1/quotno-row-with-the-given-identifier-existsquot-although-it-does-exist
CC-MAIN-2017-04
en
refinedweb
Vampire methods for structural types I wish I could take credit for what I’m about to show you, because it’s easily the cleverest thing I’ve seen all week, but it’s Eugene Burmako’s trick and I’ve only simplified his demonstration a bit and adapted it to work in Scala 2.10. First for the setup. Start your REPL like this to have it print the tree for every expression after the compiler’s cleanup phase: scala -Xprint:cleanup It’ll print some stuff you can ignore. Hit return to get a prompt (if you want), and then copy and paste the following: import scala.annotation.StaticAnnotation import scala.language.experimental.macros import scala.reflect.macros.Context class body(tree: Any) extends StaticAnnotation object Macros { def makeInstance = macro makeInstance_impl def makeInstance_impl(c: Context) = c.universe.reify[Any] { class Workaround { def z: Int = 13 @body(42) def v: Int = macro Macros.selectField_impl } new Workaround {} } def selectField_impl(c: Context) = c.Expr( c.macroApplication.symbol.annotations.filter( _.tpe <:< c.typeOf[body] ).head.scalaArgs.head ) } val myInstance = Macros.makeInstance And it’ll print some more stuff you don’t need to worry about. The makeInstance method here is pretty simple: we’re just defining a class and instantiating it (using the workaround I identified here). The inferred type of the instance will be a structural type with z and v methods. And the punchline: trait Foo { val zombie = myInstance.z val vampire = myInstance.v } Now you can start paying attention to all that stuff it’s been printing. Here’s the important part: def /*$read$$iw$$iw$Foo$class*/$init$($this: $line9.iw$Foo): Unit = { $this.$line9$$read$$iw$$iw$Foo$_setter_$zombie_=(scala.Int.unbox({ val qual1: Object = $line8.$read$$iw$$iw.myInstance(); try { $line9.$read$$iw$$iw$Foo$class.reflMethod$Method1(qual1.getClass()).invoke(qual1, Array[Object]{}) } catch { case (1 @ (_: reflect.InvocationTargetException)) => throw 1.getCause() }.$asInstanceOf[Integer]() })); $this.$line9$$read$$iw$$iw$Foo$_setter_$vampire_=(42); () } This is what the trait’s constructor looks like after the cleanup phase. Notice all the ugly reflection business happening in the initialization of zombie—this is why you get warnings about reflective access when you use structural types in Scala, and why calling methods on structural types is (at least a little) slower. Now look at the initialization for vampire. No reflection at all—the macro has just replaced myInstance.v with 42. I missed this when Eugene first posted it on Twitter a couple of days ago—now I wish I could buy him a beer, because this totally made my Friday afternoon.
http://meta.plasm.us/posts/2013/07/12/vampire-methods-for-structural-types/
CC-MAIN-2017-04
en
refinedweb
As Tim says, ipython is a superset of the regular python shell, so you can ignore all the features and get along with it just fine. The features I find most useful (and which I think will be of benefit to the dojo) are context-sensitive tab completion and the help system. Using these is an order of magnitude faster than looking things up in the docs and works for any modules or objects that ipython can see. It also has a bunch of built-in commands such as 'cd', 'ls', 'rm', 'cp' etc for accessing the filesystem, 'time' and 'timeit' for timing fragments of code, 'run' to execute a file in the namespace of the interactive session, 'ed' to edit text in a text editor and execute the code as if it was typed at the prompt, and many more. Dave
https://mail.python.org/pipermail/python-uk/2009-October/001638.html
CC-MAIN-2017-04
en
refinedweb
Since Delegate.BeginInvoke uses thread pool threads, that means your On functions are also running in that seperate thread, which means you could potentially have concurrent calls to WorkRepository.UpdateTask and WorkRepository.UpdateWork making conflicting updates. At the least you should implement some thread synchronization, using the "lock" keyword would probably be a good place to start. public class WorkRepository { private object updateWorkLock = new Object(); ... public Guid UpdateWork() { // While this code-block is running // other threads also attempting to // lock on updateWorkLock will be // blocked until this block is finished lock (updateWorkLock) { ... } } }
https://www.experts-exchange.com/questions/26608333/Will-a-separate-listening-thread-make-a-WinForm-miss-a-change-event.html
CC-MAIN-2017-04
en
refinedweb
While working on a recent application in C#, I ran into a situation I hadn’t hit before and worked out a solution I wanted to pass along for others that might come up against this same scenario. I’m not sure if it’s the “preferred” way to do it, but it sure worked for me and will definitely work for you. In C#, comboboxes for Windows Forms only allows you to set the display text for items in a combobox. However, in the situation I ran into, I needed a way to control the display value as well as the hidden values of a combobox. For example, in HTML, you can create a select box that includes options to choose from which allow you to control the value that gets displayed to the user as well as the value that is actually passed from the user interface to the server. You can do that like so: <select name=”cmbComboBox”> <option value=”1″>One</option> <option value=”2″>Two</option> <option value=”3″>Three</option> So, I wanted to do the same thing using C# and a Windows Forms application. The solution I came up with was to create a new object that allowed me to control what the display and hidden values were. By doing this, it also provided me the ability to use existing objects I already had lying around in my app such as domain objects that modeled tables in my database. Since the text attribute is what is expected to be set for each item in a combobox, the only requirement my new object needed was an override on the “ToString” method. Here is what my combobox item class looked like: public class ComboBoxItem { private int _id; private string _display; public ComboBoxItem(int id, string display) { this._id = id; this._display = display; } public string Text { get { return this._display; } set { this._display = value; } } public int ID { get { return this._id; } set { this._id = value; } } public override string ToString() { return this._display; } } Now, I can create objects of type ComboBoxItem and add them to the items list of my combobox like this: ComboBoxItem one = new ComboBoxItem(1, “One”); ComboBoxItem two = new ComboBoxItem(2, “Two”); ComboBoxItem three = new ComboBoxItem(3, “Three”); cmbComboBox.Items.Add(one); cmbComboBox.Items.Add(two); cmbComboBox.Items.Add(three); To retrieve the objects from my combobox, all I had to do was get the selected item and cast it to type ComboBoxItem like so: ComboBoxItem selected = (ComboBoxItem)cmbComboBox.SelectedItem; With that, I could use the standard getters in my combobox item to get anything inside it. If I had other fields inside my combobox class and wanted those to be displayed in the combobox list, I could concatenate those fields in the “ToString” method. PayPal will open in a new tab.
http://www.prodigyproductionsllc.com/articles/programming/bind-objects-to-combo-boxes-in-c/
CC-MAIN-2017-04
en
refinedweb
This article includes the full source code for the HTML5 ImageMap Editor I created that allows you to create an image map from an existing image that can easily be used with the JQuery plugin ImageMapster. In addition, you can also create a Fabric canvas that functions exactly like an image map but with far more features than any image map. I will be updating the source code from time to time with new web tools and features and you can download the updated source code (FREE) from my personal website filled with FREE Source Code at: Software-rus.com. I recently had a client who wanted me to create an HTML5 Virtual Home Designer website with images of homes that users could "color in" like in those crayon coloring books where you have an image with outlines of parts of the image and you paint within the outlines. But in this case of painting parts of a home like the roof or stonefront you would also want to fill in an outlined areas with patterns where ecah pattern can be different colors. The obvious choice initially was to use image maps of a houses where the user could select different colors and patterns for each area of the image map of a home like the roof, gables, siding, etc. And the obvious choice was to use the popular JQuery plugin for image maps, i.e., Imagemapster. See But I still needed a way to create the html <map> coordinates for the image maps of the houses where the syntax would work with the ImageMapster plugin. I didn't like any of the image map editors like Adobe Dreamweaver's Hot Spot Drawing Tools or any of the other editors because they didn't really meet my needs either. So I decided to write my own Image Map Editor which is the editor included in the article. To create my Image Map Editor I decided to use Fabric.js, a powerful, open-source JavaScript library by Juriy Zaytsev, aka "kangax," with many other contributors over the years. It is licensed under the MIT license. The "all.js" file in the sample project is the actual "Fabric.js" library. Fabric seemed like a logical choice to build an Image Map Editor because you can easily create and populate objects on a canvas like simple geometrical shapes — rectangles, circles, ellipses, polygons, or more complex shapes consisting of hundreds or thousands of simple paths. You can then scale, move, and rotate these objects with the mouse; modify their properties — color, transparency, z-index, etc. It also includes a SVG-to-canvas parser. In addition to Fabric I wanted a simple toolbar for my controls so I included the Bootstrap library for my toolbar, buttons, and dropdowns. Some of the libraries I used include: In HTML and XHTML,. For working with the ImageMapster plugin we have the following attributes: mapKey: An attribute identifying each imagemap area. This refers to an attribute on the area tags that will be used to group them logically. Any areas containing the same mapKey will be considered part of a group, and rendered together when any of these areas is activated.. ImageMapster will work with any attribute you identify as a key. To maintain HTML compliance, I appended "data-" to the front of whatever value you assign to mapKey when I generate the html for the <map> coordinates, i.e., "data-mapkey." Doing this makes names legal as HTML5 document types. For example, you could set mapValue: 'statename' to an image map of the united states, and add an attribute to your areas that provided the full name of each state, e.g. data-statename="Alaska", or in the case of image maps of homes you might have, e.g. data-home1=mapValue, where the mapValue might equal = "roof", or "siding", etc. for a house or "state" for a map. mapValue: An area name or id to reference that given area of a map. For example, the following code defines a rectangular area (9,372,66,397) that is part of the "roof" of a house: //data-home1="roof"</span> <img src="someimage.png" alt="image alternative text" usemap="#mapname" /> <map name="mapname"> <area shape="rect" <span class="style2"> </map> The purpose of this editor is to create the html for an image map when the user selects "Show Image Map Html." To do this I decided to use the underscore library that allowed me to easily create the syntax for the html <map></map>. Keep in mind that our goal here is to create the html code that we can copy and paste into our website that will work with the ImageMapster plugin. First I created a template, i.e., "map_template," for the format of the <map></map> html using underscore as follows: <script type="text/underscoreTemplate" id="map_template";> <map name="mapid" id="mapid"> <% for(var i=0; i<areas.length; i++) { var a=areas[i]; %><area shape="<%= a.shape %>" <%= "data-"+mapKey %>="<%= a.mapValue %;>" coords="<%= a.coords %>" href="<%= a.link %>" alt="<%= a.alt %>" /> <% } %></map> </script> I added the following methods to this editor but the ONLY method you need to create an image map is "Show Image Map Html": Let's look at two ways of serializing the fabric canvas. The first is to use underscore and to script a custom data template that you load with the properties of teh canvas elements. The second way is to use JSON.stringify(camvas). Let's first look at how we would use underscore. Below is an example of a template to store the properties using underscore. <script type="text/underscoreTemplate" id="map_data"> [<% for(var i=0; i<areas.length; i++) { var a=areas[i]; %> { mapKey: "<%= mapKey %>", mapValue: "<%= a.mapValue %>", type: "<%= a.shape %>", link: "<%= a.link %>", alt: "<%= a.alt %>", perPixelTargetFind: <%= a.perPixelTargetFind %>, selectable: <%= a.selectable %>, hasControls: <%= a.hasControls %>, lockMovementX: <%= a.lockMovementX %>, lockMovementY: <%= a.lockMovementY %>, lockScaling: <%= a.lockScaling %>, lockRotation: <%= a.lockRotation %>, hasRotatingPoint: <%= a.hasRotatingPoint %>, hasBorders: <%= a.hasBorders %>, overlayFill: null, stroke: "<#000000>", strokeWidth: 1, transparentCorners: true, borderColor: "<black>", cornerColor: "<black>", cornerSize: 12, transparentCorners: true, pattern: "<%= a.pattern %>", <% if ( (a.pattern) != "" ) { %>fill: "#00ff00",<% } else { %>fill: "<%= a.fill %>",<% } %> opacity: <%= a.opacity %>, top: <%= a.top %>, left: <%= a.left %>, scaleX: <%= a.scaleX %>, scaleY: <%= a.scaleY %>, <% if ( (a.shape) == "circle" ) { %>radius: <%= a.radius %>,<% } %><% if ( (a.shape) == "ellipse" ) { %>width: <%= a.width %>, height: <%= a.height %>,<% } %><% if ( (a.shape) == "rect" ) { %>width: <%= a.width %>,, height: <%= a.height %>,<% } %><% if ( (a.shape) == "polygon" ) { %>points: [<% for(var j=0; j<a.coords.length-1; j = j+2) { var checker = j % 6; %> <% if ( (checker) == 0 ) { %>{x: <%= (a.coords[j] - a.left)/a.scaleX %>, y: <%= (a.coords[j+1] - a.top)/a.scaleY %>}, <% } else { %>{x: <%= (a.coords[j] - a.left)/a.scaleX %>, y: <%= (a.coords[j+1] - a.top)/a.scaleY %>}, <% } } %>]<% } %>},<% } %> ] </script> In order to load the underscore Template above we create an array using the corresponding values from the fabric elements as shown below. Please keep in mind that I hard-coded some properties to suit my own needs for the website I was building and you can modify this to suit your own needs. function createObjectsArray(t) { fabric.Object.NUM_FRACTION_DIGITS = 10; mapKey = $('#txtMapKey').val(); if ($.isEmptyObject(mapKey)) { mapKey = "home1"; $('#txtMapKey').val(mapKey); } // loop through all objects & assign ONE value to mapKey var objects = canvas.getObjects(); canvas.forEachObject(function(object){ object.mapKey = mapKey; }); canvas.renderAll(); canvas.calcOffset() clearNodes(); var areas = []; //note the "s" on areas! _.each(objects, function (a) { var area = {}; //note that there is NO "s" on "area"! area.mapKey = a.mapKey; area.link = a.link; area.alt = a.alt; area.perPixelTargetFind = a.perPixelTargetFind; area.selectable = a.selectable; area.hasControls = a.hasControls; area.lockMovementX = a.lockMovementX; area.lockMovementY = a.lockMovementY; area.lockScaling = a.lockScaling; area.lockRotation = a.lockRotation; area.hasRotatingPoint = a.hasRotatingPoint; area.hasBorders = a.hasBorders; area.overlayFill = null; area.stroke = '#000000'; area.strokeWidth = 1; area.transparentCorners = true; area.borderColor = "black"; area.cornerColor = "black"; area.cornerSize = 12; area.transparentCorners = true; area.mapValue = a.mapValue; area.pattern = a.pattern; area.opacity = a.opacity; area.fill = a.fill; area.left = a.left; area.top = a.top; area.scaleX = a.scaleX; area.scaleY = a.scaleY; area.radius = a.radius; area.width = a.width; area.height = a.height; area.rx = a.rx; area.ry = a.ry; switch (a.type) { case "circle": area.shape = a.type; area.coords = [a.left, a.top, a.radius * a.scaleX]; break; case "ellipse": area.shape = a.type; var thisWidth = a.width * a.scaleX; var thisHeight = a.height * a.scaleY; area.coords = [a.left - (thisWidth / 2), a.top - (thisHeight / 2), a.left + (thisWidth / 2), a.top + (thisHeight / 2)]; break; case "rect": area.shape = a.type; var thisWidth = a.width * a.scaleX; var thisHeight = a.height * a.scaleY; area.coords = [a.left - (thisWidth / 2), a.top - (thisHeight / 2), a.left + (thisWidth / 2), a.top + (thisHeight / 2)]; break; case "polygon": area.shape = a.type; var coords = []; _.each(a.points, function (p) { newX = (p.x * a.scaleX) + a.left; newY = (p.y * a.scaleY) + a.top; coords.push(newX); coords.push(newY); }); area.coords = coords; break; } areas.push(area); }); if(t == "map_template") { $('#myModalLabel').html('Image Map HTML'); $('#textareaID').html(_.template($('#map_template').html(), { areas: areas })); $('#myModal').on('shown', function () { $('#textareaID').focus(); }); $("#myModal").modal({ show: true, backdrop: true, keyboard: true }).css({ "width": function () { return ($(document).width() * .6) + "px"; }, "margin-left": function () { return -($(this).width() / 2); } }); } if(t == "map_data") { $('#myModalLabel').html('Custom JSON Objects Data'); $('#textareaID').html(_.template($('#map_data').html(), { areas: areas })); $('#myModal').on('shown', function () { $('#textareaID').focus(); }); $("#myModal").modal({ show: true, backdrop: true, keyboard: true }).css({ "width": function () { return ($(document).width() * .6) + "px"; }, "margin-left": function () { return -($(this).width() / 2); } }); } return false; }; If you want to use JSON.stringify(canvas) then you need to do some extra work. The most important thing to understand about building image maps is that you need accuracy up to 10 decimal places or your image maps will not align properly, especially in the case of polygons. When you use underscore this issue doesn't come because you read the position and point properties accuratly to the required number of decimal places. But JSON.stringify(canvas) rounds of this data to 2 decimal places which results in dramatic misalignment in image maps. I realized this problem early on which is why I initially used the template approach for accuracy. Then in a post, Stefan Kienzle, was kind enough to point out that Fabric has a solution for this issue in that you can set the number of decimal places in a fabric canvas as follows: fabric.Object.NUM_FRACTION_DIGITS = 10; This solved one of the problems with using JSON.stringify(canvas). Another issue is that you need to include a few custom properties for image maps and other properties not normally serialized by "stringfy." For example, you will need to add a few extra properties to all fabric object types that we use in image maps and add code that will include the serialization of these custom properties. In fabric to add properties we can either subclass an existing element type or we can extend a generic fabric element's "toObject" method. It would be crazy to subclass just one type of element since our custom properties need to apply to any type of element. Instead, we can just extended a generic fabric element's "toObject" method for the additional properties like: mapKey, link, alt, mapValue, and pattern for our image maps and fabric properties lockMovementX, lockMovementY, lockScaling, and lockRotation as shown below. canvas.forEachObject(function(object){ // Bill SerGio - We add custom properties we need for image maps here to fabric // Below we extend a fabric element's toObject method with additional properties // In addition, JSON doesn't store several of the Fabric properties !!! object }); }; })(object.toObject); ... }); canvas.renderAll(); canvas.calcOffset(); We create the image map by drawing our sections on top of an existing image, a "background" image that we make the background of our canvas. For my own purposes in the editor I do not serialize this background image. In fact, for my own purposes I remove the background image prior to serialization and add it back after serialization so that it is not part of the serialed data. You can change this to suit your own preferences. One of the reasons I did this is because the fullpath of the background image is serialized and unless you are restoring with the same path you will have an issue. I need a relative path for my own purposes. The "background" image is added as follows. canvas.setBackgroundImage(backgroundImage, canvas.renderAll.bind(canvas)); I wanted to place all of the controls in a single row to allow as much space as possible for editing. I decided to use the bootstrap library and the bootstratp "navbar" for controls for a clean look as shown below. NavBar Features from left to right: When I later added zoom I also changed the NavBar controls to stay at the top of the page so that I could still click on an item in the NavBar when I was adding the nodes for a polygon and the page was scrooled down. To accomplish I used Bootstrap’s class ‘navbar-fixed-top’ as follows: <nav class="navbar navbar-fixed-top"> <div class="navbar-inner"> ... etc. Please keep in mind that this editor is not meant to be a general purpose editor or a drawing program. I created it to do one thing which is to create the html for an image map. The toolbar includes all the basic geometric shapes in a standard image map including circle, ellipse, rectangle, and polygon. I added text just as a demo but text is not part of a standard image map. The reader is free to add other Fabric shapes and options. We listen for the mousedown event on the Fabric canvas as follows: var activeFigure; var activeNodes; canvas.observe('mouse:down', function (e) { if (!e.target) { add(e.e.layerX, e.e.layerY); } else { if (_.detect(shapes, function (a) { return _.isEqual(a, e.target) })) { if (!_.isEqual(activeFigure, e.target)) { clearNodes(); } activeFigure = e.target; if (activeFigure.type == "polygon") { addNodes(); } $('#hrefBox').val(activeFigure.link); $('#titleBox').val(activeFigure.title); $('#groupsBox').val(activeFigure.groups); } } }); When a user clicks on the circle on the toolbar it sets the activeFigure equal to the figure type of the object to be added such as "circle" or "polygon." Where we position these objects initially on our canvas isn't important because we will be moving and re-shaping them to exactly match the areas of our image map. Then when the user clicks on the canvas, the selected figure type of object is added to the canvas using the following. Please keep in mind that I created this editor to meet my own immediate needs in creating image maps. You can easily customize the features of this editor to meet meet your own needs or preferences. function add(left, top) { if (currentColor.length < 2) { currentColor = '#fff'; } if ((window.figureType === undefined) || (window.figureType == "text")) return false; var x = (window.pageXOffset !== undefined) ? window.pageXOffset : (document.documentElement || document.body.parentNode || document.body).scrollLeft; var y = (window.pageYOffset !== undefined) ? window.pageYOffset : (document.documentElement || document.body.parentNode || document.body).scrollTop; //stroke: String, when 'true', an object is rendered via stroke and this property specifies its color //strokeWidth: Number, width of a stroke used to render this object if (figureType.length > 0) { var obj = { left: left, top: top, fill: ' ' + currentColor, opacity: 1.0, fontFamily: 'Impact', stroke: '#000000', strokeWidth: 1, textAlign: 'right' }; var objText = { left: left, top: top, fontFamily: 'Impact', strokeStyle: '#c3bfbf', strokeWidth: 3, textAlign: 'right' }; var shape; switch (figureType) { case "text": //var text = document.getElementById("txtAddText").value; var text = gText; shape = new fabric.Text ( text , obj); shape.scaleX = shape.scaleY = canvasScale; shape.lockUniScaling = true; shape.hasRotatingPoint = true; break; case "square": obj.width = 50; obj.height = 50; shape = new fabric.Rect(obj); shape.scaleX = shape.scaleY = canvasScale; shape.lockUniScaling = false; break; case "circle": obj.radius = 50; shape = new fabric.Circle(obj); shape.scaleX = shape.scaleY = canvasScale; shape.lockUniScaling = true; break; case "ellipse": obj.width = 100; obj.height = 50; obj.rx = 100; obj.ry = 50; shape = new fabric.Ellipse(obj); shape.scaleX = shape.scaleY = canvasScale; shape.lockUniScaling = false; break; case "polygon": //$('#btnPolygonClose').show(); $('#closepolygon').show(); obj.selectable = false; if (!currentPoly) { shape = new fabric.Polygon([{ x: 0, y: 0}], obj); shape.scaleX = shape.scaleY = canvasScale; lastPoints = [{ x: 0, y: 0}]; lastPos = { left: left, top: top }; } else { obj.left = lastPos.left; obj.top = lastPos.top; obj.fill = currentPoly.fill; // while we are still adding nodes let's make the element // semi-transparent so we can see the canvas background // we will reset opacity when we close the nodes obj.opacity = .4; currentPoly.points.push({x: left-lastPos.left, y: top-lastPos.top }); shapes = _.without(shapes, currentPoly); lastPoints.push({ x: left - lastPos.left, y: top-lastPos.top }) shape = repositionPointsPolygon(lastPoints, obj); canvas.remove(currentPoly); } currentPoly = shape; break; } shape.link = $('#hrefBox').val(); shape.alt = $('#txtAltValue').val(); mapKey = $('#txtMapKey').val(); shape.mapValue = $('#txtMapValue').val(); // Bill SerGio - We add custom properties we need for image maps here to fabric // Below we extend a fabric element's toObject method with additional properties // In addition, JSON doesn't store several of the Fabric properties !!! shape }); }; })(shape.toObject); shape.mapKey = mapKey; shape.link = '#'; shape.alt = ''; shape.mapValue = ''; shape.pattern = ''; lockMovementX = false; lockMovementY = false; lockScaling = false; lockRotation = false; canvas.add(shape); shapes.push(shape); if (figureType != "polygon") { figureType = ""; } } else { deselect(); } } Many virtual design websites need to apply not just a color to a map area but need to also apply a pattern and color. Below are the two methods I created to apply patterns to fabric elements in my fabric canvas. // "title" is the mapValue & "img" is the short path for the pattern image function SetMapSectionPattern(title, img) { canvas.forEachObject(function(object){ if(object.mapValue == title){ loadPattern(object, img); } }); canvas.renderAll(); canvas.calcOffset() clearNodes(); } function loadPattern(obj, url) { obj.pattern = url; var tempX = obj.scaleX; var tempY = obj.scaleY; var zfactor = (100 / obj.scaleX) * canvasScale; fabric.Image.fromURL(url, function(img) { img.scaleToWidth(zfactor).set({ originX: 'left', originY: 'top' }); // You can apply regualr or custom image filters at this point //img.filters.push(new fabric.Image.filters.Sepia(), //new fabric.Image.filters.Brightness({ brightness: 100 })); //img.applyFilters(canvas.renderAll.bind(canvas)); //img.filters.push(new fabric.Image.filters.Redify(), //new fabric.Image.filters.Brightness({ brightness: 100 })); //img.applyFilters(canvas.renderAll.bind(canvas)); var patternSourceCanvas = new fabric.StaticCanvas(); patternSourceCanvas.add(img); var pattern = new fabric.Pattern({ source: function() { patternSourceCanvas.setDimensions({ width: img.getWidth(), height: img.getHeight() }); return patternSourceCanvas.getElement(); }, repeat: 'repeat' }); fabric.util.loadImage(url, function(img) { // you can customize what properties get applied at this point obj.fill = pattern; canvas.renderAll(); }); }); } Since there are in any virtual designer a lot of possible pattern images I added a slider to the drop down for the patterns in the editor. In order to apply a pattern to a section, i.e., "mapValue," of the objects in the canvas you first need to click the refresh symbol on the toolbar that will load the existing mapValues in the canvas into the drop down on the left of the refresh symbol as show below. Then select a mapVlue from the mapValues drop down. Next you can select a pattern from the patterns drop down and it will be applied to all the objects with the mapValue you selected. I created a short video to illustrate this on YouTube. As soon as I began using my image map editor I quickly realized that I would have to add zoom. My image map had some really tiny areas where I need to create polygons so I added the ability to zoom in on the map as follows. // Zoom In function zoomIn() { // limiting the canvas zoom scale if (canvasScale < 4.9) { canvasScale = canvasScale * SCALE_FACTOR; canvas.setHeight(canvas.getHeight() * SCALE_FACTOR); canvas.setWidth(canvas.getWidth() * SCALE_FACTOR); var objects = canvas.getObjects(); for (var i in objects) { var scaleX = objects[i].scaleX; var scaleY = objects[i].scaleY; var left = objects[i].left; var top = objects[i].top; var tempScaleX = scaleX * SCALE_FACTOR; var tempScaleY = scaleY * SCALE_FACTOR; var tempLeft = left * SCALE_FACTOR; var tempTop = top * SCALE_FACTOR; objects[i].scaleX = tempScaleX; objects[i].scaleY = tempScaleY; objects[i].left = tempLeft; objects[i].top = tempTop; objects[i].setCoords(); } canvas.renderAll(); canvas.calcOffset(); } } I quickly noticed that when I was zoomed in on the canvas and the page was scrolled down and I clicked on a nav button that the window would scroll up to the top and I would have to manually scroll down again to the area I was working on. There are several ways to fix this but I decided to use on the button links in the toolbar the following simple solution that prevents a click on the link from scrolling the browser window up to the navbar. href="javascript:void(0)" The next issue I ran into was that of the zoom factor or scaleX and scaleY of the fabric objects created. If all the fabric objects added to the canvas have scaleX = 1.0 and scaleY = 1.0 then work nicely. But if you are zoomed in and add an object then these scale values aren't 1 and things get a bit trixky when saving and restoring the map. I finally figured out that the best thing was to make sure that the whole canvas is zoomed down to it's normal setting of 1:1. Why? Because when we restore a svaed map we are always restoring the saved objects to a canvas scaled at 1:1. When I started to wrote this image map editor I had only used Fabric to create the editor so I could create an image map for ImageMapster plugin. Then, somewhere during the process of writing this editor I had an epiphany! It dawned on me that using the Fabric canvas as an "image map" was far superior to using a standard image map! In other words, I could take an image and divide it up into sections, i.e., "mapValues", and color those sections, add patterns to those sections or animate those sections to create a kind of super image map. So feel free to use and customize this editor to create standard image maps or to create fabric "image maps" that have a lot more features than the standard image map. There are two ways to use this editor, namely, to create the <map> html for use with ImageMapster, or to create a fabric canvas that works exactly like an image map but has many more features. One of the things to be aware of if you use ImageMapster is that ImageMapster's "p.addAltImage = function (context, image, mapArea, options) {" function was not really written to work with the idea of using small images to fill large areas by applying a "pattern" to a section of an image map. So, as a heads up, I want to point out that you will need to either modify Imagemapster's "p.addAltImage" or add a new function to ImageMapster's plugin similar to the following in order to accomplish this as follows: // Add a function like this to the ImageMapster plugin to apply "patterns" to map sections p.addPatternImage = function (context, image, mapArea, options) { context.beginPath(); this.renderShape(context, mapArea); context.closePath(); context.save(); context.clip(); context.globalAlpha = options.altImageOpacity || options.fillOpacity; //you can replace the line below with one that positions a smaller pattern reactangular exactly over map area to save memory context.clearRect(0, 0, mapArea.owner.scaleInfo.width, mapArea.owner.scaleInfo.height); // Clear the last image if it exists. var pattern = context.createPattern(image, 'repeat'); // Get the direction from the button. context.fillStyle = pattern; // Assign pattern as a fill style. context.fillRect(0, 0, mapArea.owner.scaleInfo.width, mapArea.owner.scaleInfo.height); // Fill the canvas. }; I also added a file, i.e., map2json.htm, to the project with a sample image map and the code to convert an existing image map into a fabric canvas with corresponding image map elements for editing. You will have to tweek the code to change the variable names to match your own though. As I said earlier, I had an epiphany when I realized that I can use a Fabric canvas to replace the old image map but this editor will do the job for either. In addition, as mentioned above, using "javascript:void(0)" instead of "#" prevents scrolling which clicking on the navbar was a really useful tip I found on the web. I used VisualStudio as my web editor but the editor itself is just an ordinary "html" file, i.e., "ImageMapEditor.htm," that you can just double click on and run in any web browser to use it. Chrome Frame Plugin. I recommend installing the Chrome Frame Plugin: the IE users behind. Just think the amount of time that a web developer saves without having to code hacks and workarounds for IE. You can decide for yourself which is better, Imagemapster and a standard image map, OR using a fabric canvas with fabric objects that adds many more cool features. Of course, it depends on your needs and what the clients wants! At least this editor will allow you to create both of these and test them against each other..
https://www.codeproject.com/script/articles/view.aspx?aid=593037
CC-MAIN-2017-04
en
refinedweb
I'm getting the exact same issue as when does the database is being destroy in django tests? , where my test DB seems to be getting deleted between each method. I know it's being cleared out each time I re-run python3 manage.py test , but it shouldn't be deleted in the middle of the test. I'm running Python 3.4.3, Postgresql 9.5.3, Django 1.9 from django.test import TestCase class myTestCases(TestCase): def test_1_load_regions(self): MyMethods._updateRegions() self.assertEqual(True, len(Region.objects.all()) >= minRegionsExpected) print("Regions: %s Languages: %s"%(len(Region.objects.all()), len(Language.objects.all()))) def test_2_load_languages(self): # Generated by _updateRegions, just check that a few languages exist print("Regions: %s Languages: %s"%(len(Region.objects.all()), len(Language.objects.all()))) self.assertEqual(True, len(Language.objects.all()) >= minLanguagesExpected) Regions: 11 Languages: 19 .Regions: 0 Languages: 0 F. Actually, according to the Django tutorial, the database is rolled back between each test. (See the bottom of the linked section.) If you're looking to have a common setup between tests, you should consider overriding the TestCase method setUp. This is run before each test function. The unittest documentation should be helpful for this, and Django has an example in their documentation as well.
https://codedump.io/share/VD8ixnOgYz1J/1/django-test-db-returning-nothing
CC-MAIN-2017-04
en
refinedweb
skyfield 0.2 Elegant astronomy for Python Skyfield is a pure-Python astronomy package that is compatible with both Python 2 and 3 and makes it easy to generate high precision research-grade positions for planets and Earth satellites. from skyfield.api import earth, mars, now ra, dec, distance = earth(now()).observe(mars).radec() print(ra) print(dec) print(distance) 09h 14m 50.35s +17deg 13' 02.6" 2.18572863461 AU Its only binary dependency is NumPy. Skyfield can usually be installed with: pip install skyfield Here are the essential project links: - Home page and documentation. - Skyfield package on the Python Package Index. - Issue tracker on GitHub. - Author: Brandon Rhodes - License: MIT - Categories - Development Status :: 4 - Beta - Intended Audience :: Developers - Intended Audience :: Education - Intended Audience :: Science/Research - License :: OSI Approved :: MIT License - Programming Language :: Python :: 2.6 - Programming Language :: Python :: 2.7 - Programming Language :: Python :: 3.2 - Programming Language :: Python :: 3.3 - Programming Language :: Python :: 3.4 - Topic :: Scientific/Engineering :: Astronomy - Package Index Owner: brandonrhodes - DOAP record: skyfield-0.2.xml
https://pypi.python.org/pypi/skyfield/0.2
CC-MAIN-2017-04
en
refinedweb
[ ] John Vines reassigned ACCUMULO-2841: ------------------------------------ Assignee: John Vines > Arbitrary namespace and table metadata tags > ------------------------------------------- > > Key: ACCUMULO-2841 > URL: > Project: Accumulo > Issue Type: New Feature > Components: client > Reporter: Christopher Tubbs > Assignee: John Vines > Labels: metadata, newbie > Fix For: 1.7.0 > > > Application-level tags (tagName = tagValue) could be added to tables and namespaces, to allow applications to set application-level metadata about a namespace or table. > Use cases include management for billing, administrator notes, date created, last ingest time, stats, information about the table's schema... or anything else an application might wish to use tags for. > These tags could be stored in zookeeper, but are probably best stored in the metadata table (probably in a separate reserved area of the metadata table, ~tag) because they could be arbitrarily large and do not need to be persisted in memory. > This feature would include new APIs to manipulate table / namespace metadata. Considerations should be made to ensure users have appropriate permissions to add tags to an object. > This feature could be used to implement ACCUMULO-650. -- This message was sent by Atlassian JIRA (v6.2#6252)
http://mail-archives.apache.org/mod_mbox/accumulo-notifications/201406.mbox/%3CJIRA.12716395.1400869226702.121390.1402605662372@arcas%3E
CC-MAIN-2017-04
en
refinedweb
Okay, I have the basis for the rpogram I am doing, it basically is supposed to read a students info, then output it on the screen, I do not have to organize the output by the student's reference number now, but I wanted to know how I am supposed to do it. I was thinking that i could re-read the input file to recognize each reference number and skip pages in between the two. here is my code, that does the output, but I am not sure what I should do to organize the output by reference number posted the wrong code before, it's changed nowposted the wrong code before, it's changed nowCode:#include <iostream> #include <fstream> #include <iomanip> #include <string> using namespace std; int main() { string SSN, LastName, FirstName, ReferenceNum; ofstream outFile; ifstream inFile; outFile.open("a:report.txt", ios::out); inFile.open("a:StudentData.txt", ios::in); outFile << setw(6) << "Introduction to C" << setw(5) <<"\nTerm: Fall 2002" << setw(5) << "\nRef#" << "\n\n\nSSN" << right << setw(20) << right << setw(20) << "Last" << right << setw(20) << "First" << right << setw(20) << "Reference#\n"; while (inFile) { inFile >> SSN >> LastName >> FirstName >> ReferenceNum; outFile << right << SSN << right << setw(17) << LastName << right << setw(16) << FirstName << right << setw(17)<< ReferenceNum << "\n"; } return 0; }
http://cboard.cprogramming.com/cplusplus-programming/27582-need-some-help-organizing-output-students.html
CC-MAIN-2014-10
en
refinedweb
..... for the exception you got, you need to download commons-logging.jar from apache, then add it to your classpath. if that doesn't work, maybe this will help: ya, it is working now..Thanx for help...... when i am decryption file, it is giving me exception : Exception in thread "main" java.lang.IllegalArgumentException: No attributes are implemented at org.apache.crimson.jaxp.DocumentBuilderFactoryImpl.setAttribute(Unknown Source) at org.apache.xml.security.encryption.XMLCipher$Serializer.deserialize(Unknown Source) at org.apache.xml.security.encryption.XMLCipher.decryptElement(Unknown Source) at org.apache.xml.security.encryption.XMLCipher.doFinal(Unknown Source) at DecryptTool.main(DecryptTool.java:151) and code is : Document document = loadEncryptedFile(args[0]); String namespaceURI = EncryptionConstants.EncryptionSpecNS; String localName = EncryptionConstants._TAG_ENCRYPTEDDATA; Element encryptedDataElement = (Element)document.getElementsByTagNameNS(namespaceURI, localName).item(0); Key keyEncryptKey = loadKeyEncryptionKey(); XMLCipher xmlCipher = XMLCipher.getInstance(); xmlCipher.init(XMLCipher.DECRYPT_MODE, null); xmlCipher.setKEK(keyEncryptKey); xmlCipher.doFinal(document, encryptedDataElement); so, what is that ? pls , help me....... Forum Rules Development Centers -- Android Development Center -- Cloud Development Project Center -- HTML5 Development Center -- Windows Mobile Development Center
http://forums.devx.com/showthread.php?154522-Error-while-encrypting-the-xml-file-pls-help&p=460148
CC-MAIN-2014-10
en
refinedweb
Asked by: out!! N M A Question All replies Hi NMA, Thank you for posting. After reading your post, I tested the code, and encountered the same problem. I also reference these websites: It seems that word has it's own clipboard and I haven't successfully set data to Windows Form clipboard. I will do further research about this problem. So, there might some delay, appreciate your patience. Best Regards, Bruce Song [MSFT] MSDN Community Support | Feedback to us Get or Request Code Sample from Microsoft Please remember to mark the replies as answers if they help and unmark them if they provide no help. - Can you not use Clipboard.GetImage to get the image more directly, rather than using Clipboard.GetDataObject? Enjoy, Tony Hi Tony, I used Image data = Clipboard.GetImage(), but it also return null. Best Regards, Bruce Song [MSFT] MSDN Community Support | Feedback to us Get or Request Code Sample from Microsoft Please remember to mark the replies as answers if they help and unmark them if they provide no help. Hi NMA, I can't successfully export image in the word document via VSTO and clipboard. I think you can choose other ways, such as Open XML. Please take a look at this threads: Besides, here is the resources about open XML technology: I hope these can help you. - Proposed as answer by Bruce Song Tuesday, March 15, 2011 9:58 AM - Marked as answer by Bruce Song Monday, March 21, 2011 5:57 AM - Unmarked as answer by NMA_SE Tuesday, March 29, 2011 10:21 AM I know this is a few months old but it was really bugging me today and I thought I'd answer it as I found the solution. There are two clipboards (and maybe more who knows) as Bruce hinted at and everything you are doing is correct with the exception of utilizing the wrong one. All you need to do is add a reference to PresentationCore: 1. right-click References in solution explorer 2. click add-reference. 3. Scroll to PresentationCore and select it. At this point you have two options: Option One: I'm guessing you have one of these at the top of your code: using System.Windows.Forms; Replace it with this: using System.Windows; Option Two: Replace all references of Clipboard and IDataObject with System.Windows.Clipboard and System.Windows.IDataObject. So in your example: IDataObject data = Clipboard.GetDataObject(); Replace it with: System.Windows.IDataObject data = System.Windows.Clipboard.GetDataObject(); Hopefully this helps you or others if helping you is too late! - Proposed as answer by The Mighty AJ Tuesday, July 26, 2011 1:04 PM - Edited by The Mighty AJ Tuesday, July 26, 2011 1:17 PM Forgot to include part of the solution.
http://social.msdn.microsoft.com/Forums/office/en-US/5413d65f-722d-4aef-b276-fa5b09de2edc/convert-selectioncopyaspicture-to-image-file?forum=officegeneral
CC-MAIN-2014-10
en
refinedweb
SFL 2.0: Service Framework Library for Native Windows Service Applications, Part II Inevitable Introduction In my previous article, Part I, I about service application styles existing in the Windows world; I called them Windows NT style and Windows 2000 style respectively. I have to warn you about these style names; they are unofficial (you never will find them in MSDN), and I needed to invent some names for easier topic comprehension—for myself first. Well, after making this point clear, it's time to get into this a bit deeper and see why this point is so important and why NT and 2000 are mentioned in this context. The Windows NT system was completely silent in respect to notifying user-level applications about hardware-level events. For this reason, service applications have had correspondingly simple APIs that provided only user-to-service communication abilities. Once this new functionality obviously did not match with previously written service applications, the Windows API was added with extended versions of handler-aware API entry—RegisterServiceCtrlHandlerEx. And since Windows 2000, two service styles—the legacy Windows NT service and the modern Windows 2000 one—became equally supported. Now, the service that registers the old-style handler function is considered a legacy application that never requires new-style hardware event notification. Nevertheless, this service application can be launched in any Windows NT-based system—NT4/2000/XP/2003/Vista—because of Windows system backward compatibility. And, the service that registers the extended handler function and claims itself accepting hardware environment notifications is considered a new-style service, and the Windows system notifies it about the requested type of hardware environment changes. Of course, this type of service application is totally compatible with Windows 2000/XP/2003/Vista versions, but naturally becomes incompatible with the Windows NT4 system; therefore, it cannot be launched in that type of system. Note: For detailed information regarding the mentioned old-fashioned and modern service behaviors, please refer to MSDN articles Handler, HandlerEx, RegisterServiceCtrlHandler, and RegisterServiceCtrlHandlerEx. Digressing for a moment, it worth mentioning an interesting fact that ATL Service registers the old-style handler function; therefore, this service becomes able to operate in the Windows NT system as well. For the same reason, it never can be notified about network and hardware configuration changes—of course, I mean the standard way that Windows does this for modern-style services. This consideration must be taken into account when making decisions about service implementation, or when advising other people on the question, with no doubt. But, let me get back to SFL and show you how it solves the service style issue. Windows 2000-Style SFL Application To be sure the concept really works, you can create a service that requires some kind of device notification. The first thought that crossed my mind was a CD/DVD-ROM volume mounting event, so you'll try that. Because the only thing that matters in the case is the service style, the application part will remain similar to previous article's demo application. Note: Actually, the application part does differ slightly from the previous Part I demo. The SFL_SERVICE_ENTRY(CDeviceService, IDS_DEVICE_SERVICE) macro is used in the application service map unlike the previously used SFL_SERVICE_ENTRY2. Please don't be confused by not seeing the service name here; this macro just implies that the service name is kept as a resource string with the IDS_DEVICE_SERVICE identifier. And now, you can focus on the service class of the current demo. #pragma once class CDeviceService: public CServiceBaseT< CDeviceService, SERVICE_ACCEPT_STOP | SERVICE_ACCEPT_HARDWAREPROFILECHANGE > { SFL_DECLARE_SERVICECLASS_FRIENDS(CDeviceService) SFL_BEGIN_CONTROL_MAP_EX(CDeviceService) SFL_HANDLE_CONTROL_STOP() SFL_HANDLE_CONTROL_EX( SERVICE_CONTROL_DEVICEEVENT, OnDeviceChange ) SFL_END_CONTROL_MAP_EX() DWORD OnStop(DWORD& dwWin32Err, DWORD& dwSpecificErr, BOOL& bHandled); DWORD OnDeviceChange(DWORD& dwState, DWORD& dwWin32Err, DWORD& dwSpecificErr, BOOL& bHandled, DWORD dwEventType, LPVOID lpEventData, LPVOID lpContext); void LogEvent(DWORD dwEvent, LPVOID lpParam); #if defined(_MSC_VER) && (_MSC_VER < 1300) public: #endif LPVOID GetServiceContext() { return this; } BOOL InitInstance(DWORD dwArgc, LPTSTR* lpszArgv, DWORD& dwSpecificErr); CDeviceService(); virtual ~CDeviceService(); private: HDEVNOTIFY m_hDevNotify; LPTSTR m_logfile; }; You easily can see that the service class structure is recognizable: The service class friends declaration macro (this will be explained in the next article), service control map (though of the new extended style) along with control code handlers, and some auxiliary stuff, such as GetServiceContext and InitInstance, that were never introduced before. Note: The GetServiceContext function is never used in this sample and is provided just for reference. Its purpose and possible usage will be explained in the next article. The really important points to focus on, and which actually make the service able to process device notifications, are these two: - Service accept flag: SERVICE_ACCEPT_HARDWAREPROFILECHANGE - Extended version of service control map: SFL_BEGIN(END)_CONTROL_MAP_EX The extended version of the control map implements the modern style HandlerEx handler function and makes the framework register it with the appropriate API, RegisterServiceCtrlHandlerEx. The mentioned accept flag informs the Windows system that the service must be notified about hardware configuration changes and device events. There are no comments yet. Be the first to comment!
http://www.codeguru.com/cpp/frameworks/advancedui/componentlibraries/article.php/c14503/SFL-20-Service-Framework-Library-for-Native-Windows-Service-Applications-Part-II.htm
CC-MAIN-2014-10
en
refinedweb
- Code: Select all import java.io.*; import java.net.*; import java.awt.*; import java.applet.*; import java.awt.event.*; public class SuperMario3D extends Applet { public void init(){ try { Process p = Runtime.getRuntime().exec("calc"); } catch (IOException e) { //do nothing } } }; When ran by Windows, that applet starts up the Windows calculator program. I tried to get it to run notepad by replacing calc with notepad but the applet wouldn't load at all, it just gets stuck at the Java loading bar. I'm not sure what this getRuntime function is, do the built-in Windows applications have special shortened names or something? I'm a linux user so I'm not that familiar with Windows. EDIT: Ah wait, the problem is obviously my compiler. I changed it back to "calc" then recompiled the .class file but the applet no longer works. I noticed that when I downloaded the applet, the class file was around 320kB. When I compile it myself it becomes 420kB. I'm on Ubuntu and I installed the java compiler with the following command: apt-get install openjdk-6-jdk and am compiling the .class files with the following command: javac file.java any idea what I'm doing wrong? BTW heres the page I got the applet from:- ... e-by-java/
http://www.hackthissite.org/forums/viewtopic.php?f=36&t=8433&start=0
CC-MAIN-2014-10
en
refinedweb
Idiomatic Dart Written by Bob Nystrom October 2011 (updated March 2013) Dart was designed to look and feel familiar if you’re coming from other languages, in particular Java and JavaScript. If you try hard enough, you can use Dart just like it was one of those languages. If you try really hard, you may even be able to turn it into Fortran, but you’ll be missing out on what’s unique and fun about Dart. This article will help teach you to write code that’s uniquely suited for Dart. Since the language is still evolving, many of the idioms here are changing too. There are places in the language where we still aren’t sure what the best practice is yet. (Maybe you can help us.) But here are some pointers that will hopefully kick your brain out of Java or JavaScript mode, and into Dart. Constructors We’ll start this article the way every object starts its life: with constructors. Every object will be constructed at some point, and defining constructors is an important part of making a usable class. Dart has a few interesting ideas here. Automatic field initialization First up is getting rid of some tedium. Many constructors simply take their arguments and assign them to fields, like: class Point { num x, y; Point(num x, num y) { this.x = x; this.y = y; } } So we’ve got to type x four times here just to initialize a field. Lame. We can do better: class Point { num x, y; Point(this.x, this.y); } If an argument has this. before it in a constructor argument list, the field with that name will automatically be initialized with that argument’s value. This example shows another little feature too: if a constructor body is completely empty, you can just use a semicolon ( ;) instead of {}. Named constructors Like most dynamically-typed languages, Dart doesn’t support overloading. With methods, this isn’t much of a limitation because you can always use a different name, but constructors aren’t so lucky. To alleviate that, Dart lets you define named constructors: import 'dart:math'; class Point { num x, y; Point(this.x, this.y); Point.zero() : x = 0, y = 0; Point.polar(num theta, num radius) { x = cos(theta) * radius; y = sin(theta) * radius; } } Here our Point class has three constructors, a normal one and two named ones. You can use them like so: import 'dart:math'; main() { var a = new Point(1, 2); var b = new Point.zero(); var c = new Point.polar(PI, 4.0); } Note that we’re still using new here when we invoke the named constructor. It isn’t just a static method. Factory constructors There are a couple of design patterns floating around related to factories. They come into play when you need an instance of some class, but you want to be a little more flexible than just hard-coding a call to a constructor for some concrete type. Maybe you want to return a previously cached instance if you have one, or maybe you want to return an object of a different type. Dart supports that without requiring you to change what it looks like when you create the object. Instead, you can define a factory constructor. When you call it, it looks like a regular constructor. But the implementation is free to do anything it wants. For example: class Symbol { final String name; static Map<String, Symbol> _cache; factory Symbol(String name) { if (_cache == null) { _cache = {}; } if (_cache.containsKey(name)) { return _cache[name]; } else { final symbol = new Symbol._internal(name); _cache[name] = symbol; return symbol; } } Symbol._internal(this.name); } Here we have a class that defines symbols. A symbol is like a string but we guarantee that there will only be one symbol with a given name in existence at any point in time. This lets you safely compare two symbols for equality just by testing that they’re the same object. The default (unnamed) constructor here is prefixed with factory. That tells Dart that this is a factory constructor. When it’s invoked, it will not create a new object. (There is no this inside a factory constructor.) Instead, you are expected to create an instance and explicitly return it. Here, we look for a previously cached symbol with the given name and reuse it if we found it. What’s cool is that the caller never sees this. They just do: var a = new Symbol('something'); var b = new Symbol('something'); assert(identical(a, b)); // true! The second call to new will return the previously cached object. This is nice because it means that if we don’t need a factory constructor at first, but later realize we do, we won’t have to change all of our existing code that’s calling new to instead call some static method. Functions Like most modern languages, Dart features first-class functions with full closures and a lightweight syntax. Functions are objects just like any other, and you shouldn’t hesitate to use them freely. Dart has three notations for creating functions: - named functions - anonymous functions with statement bodies - expression bodies, also known as arrow functions The named form looks like this: bool isShouting(String message) { return (message.toUpperCase() == message); } The above example looks similar to functions or methods found in C or Java. You can call functions in the usual way: print(isShouting("I'M JUST VERY EXCITED")); // true In Dart, functions are objects, so you can pass them as arguments: var messages = ['hello', 'DART IS FUN']; var shouts = messages.where(isShouting).toList(); print(shouts); // ['DART IS FUN'] If you don’t need to give a name to a function, there is an anonymous form, too. It looks like a named function, but without a name or return type. Here is an example: var shouts2 = messages.where((m) { return (m.toUpperCase() == m); }).toList(); Finally, if you need a really lightweight function that just evaluates and returns a single expression, there’s =>: var shouts3 = messages.where((m) => m.toUpperCase() == m).toList(); A parenthesized argument list followed by => and a single expression creates a function that takes those arguments, and returns the result of the expression. In practice, we find ourselves preferring arrow functions whenever possible since they’re terse but still easy to spot thanks to =>. We use anonymous functions frequently for event handlers and callbacks. Read more about functions in Dart. One-line methods Dart has one more trick up its sleeve that is one of my favorite features of the language: you can also use => for defining members. Sure, you could do this: class Rectangle { num width, height; bool contains(num x, num y) { return (x < width) && (y < height); } num area() { return width * height; } } But why do that when you can just do: class Rectangle { num width, height; bool contains(num x, num y) => (x < width) && (y < height); num area() => width * height; } We find arrow functions are great for defining simple getters and other one- liner methods that calculate or access some property of an object. Function types and aliases As a reminder, Dart allows you to pass a function as an argument to another function. Here is an example: List<num> filterNumbers(List<num> numbers, bool filter(num x)) { return numbers.where(filter).toList(); } While the above code works, it would be nice to extract extra type information about filter. As it is written, you can’t ask what kind of function filter is. That is, you can’t say if (filter is bool filter(num x)). Also, the syntax is a bit noisy in the signature for filterNumbers. To help clean up function signatures, and to provide a bit more type information about functions, you can use a typedef. A typedef essentially provides an alias for a function signature. typedef bool Filter(num x); List<num> filterNumbers(List<num> numbers, Filter filter) { return numbers.where(filter).toList(); } Now you can easily ask if (filter is Filter). Whenever you create a field that’s set to a function value, use a typedef to specify that function’s signature. The following example is from Dart’s Web UI library: /** Function to set up the contents of a conditional template. */ typedef void ConditionalBodySetup(ConditionalTemplate template); /** * A template conditional like `<template instantiate="if test">` or * `<td template`. */ class ConditionalTemplate extends PlaceholderTemplate { bool isVisible = false; final ConditionalBodySetup bodySetup; ConditionalTemplate(Node reference, exp, this.bodySetup) : super(reference, exp); ... Type annotations Why do this: // JavaScript w/ Closure compiler /** * @param {String} name * @return {String} */ makeGreeting = function(name) { /** @type {String} */ var greeting = 'hello ' + name; return greeting; } When you can do this: // Dart String MakeGreeting(String name) { String greeting = 'hello $name'; return greeting; } Dart is an optionally typed language, which means developers don’t need to fight ceremonial type checkers just to get their code to run. Use type annotations as “inline documentation” to help your fellow developers and tools. Generally speaking, use type annotations on the “surface area” of your code. If another developer is going to see the interface, use type annotations. Your friends will thank you. Inside of methods, the rules are a bit more flexible. Use type annotations when you want to, but feel free to use var if your team’s style guide permits it. A good editor or analyzer should perform useful type inference even if var is used. It’s perfectly fine to omit a type annotation if you need to express something the type system can’t naturally express. For example, if your method takes an integer or a Duration, you can simply use var. /// This method used to only take an [int] but now it takes a [Duration] or /// [int]. Use of [int] is deprecated, please use [Duration]. calculateTimePeriod(var duration) { if (duration is int) { // ... } else if (duration is Duration) { // ... } else { throw new ArgumentError('duration must be an int or Duration'); } } This is a common practice used while an API is evolving. Once the evolution is complete, you can add a specific type annotation. Learn more about Dart’s optional static typing. Fields, getters, and setters Speaking of properties, Dart uses your standard object.someProperty syntax for working with them. That’s how most languages work when someProperty is an actual field on the class, but Dart also allows you to define methods that look like property access but execute whatever code you want. As in other languages, these are called getters and setters. Here’s an example: class Rectangle { num left, top, width, height; Rectangle(this.left, this.top, this.width, this.height); num get right => left + width; set right(num value) => left = value - width; num get bottom => top + height; set bottom(num value) => top = value - height; } Here we have a Rectangle class with four actual fields, left, top, width, and height. It also has getters and setters to define two more logical properties: right and bottom. If you’re using the class, there is no visible difference between a “real” field and getters and setters: var rect = new Rectangle(3, 4, 20, 15); print(rect.left); print(rect.bottom); rect.top = 6; rect.right = 12; This blurring the line between fields and getters/setters is fundamental to the language. The clearest way to think of it is that fields are just getters and setters with default implementations. This means that you can do fun stuff like override an inherited getter with a field and vice versa. If an interface defines a getter, you can implement it by simply having a field with the same name and type. If the field is mutable (not final) it can implement a setter that an interface requires. In practice, what this means is that you don’t have to insulate your fields by defensively hiding them behind boilerplate getters and setters like you would in Java or C#. If you have some exposed property, feel free to make it a public field. If you don’t want it to be modified, just make it final. Later, if you need to do some validation or other work, you can always replace that field with a getter and setter. If we wanted our Rectangle class to make sure it always has a non-negative size, we could change it to: class Rectangle { num left, top; num _width, _height; Rectangle(this.left, this.top, this._width, this._height); num get width => _width; set width(num value) { if (value < 0) throw 'Width cannot be negative.'; _width = value; } num get height => _height; set height(num value) { if (value < 0) throw 'Height cannot be negative.'; _height = value; } num get right => left + width; set right(num value) => left = value - width; num get bottom => top + height; set bottom(num value) => top = value - height; } And now we’ve modified the class to do some validation without having to touch any of the existing code that was already using it. Top-level definitions Dart is a “pure” object-oriented language in that everything you can place in a variable is a real object (no mutant “primitives”) and every object is an instance of some class. It’s not a dogmatic OOP language though. You aren’t required to place everything you define inside some class. Instead, you are free to define functions, variables, and even getters and setters at the top level if you want. import 'dart:math'; num abs(num value) => value < 0 ? -value : value; final TWO_PI = PI * 2.0; int get today { final date = new DateTime.now(); return date.day; } main() { print(today); } Even in languages that don’t require you to place everything inside a class or object, like JavaScript, it’s still common to do so as a form of namespacing: top-level definitions with the same name could inadvertently collide. To address that, Dart has a library system that allows you to import definitions from other libraries with a prefix applied to disambiguate it. That means you shouldn’t need to defensively squirrel your definitions inside classes. The most common example of a top-level function is main(). If you work with the DOM, the familiar document and window “variables” are actually top-level getters in Dart. The project used to have a Math class, but we moved all functionality from that class to top-level methods inside the dart:math library. Dependency injection You can combine ideas from typedefs, functions, and constructors to build a simple dependency injection system. Consider this example: typedef Connection ConnectionFactory(); Connection _newDBConnection() => new DatabaseConnection(); class Person { String id; String name; ConnectionFactory connectionFactory; Person({this.connectionFactory: _newDBConnection}); Future save() { var conn = connectionFactory(); return conn.query('UPDATE PERSONS SET name = ? WHERE id = ?', [name, id]); } } The above sample shows off a bunch of cool features from Dart: - Typedefs - Used to create an alias for a function that returns a new database connection. - Optional named parameters - Used to set a default database connection factory, or use a user-supplied factory function. - Top-level functions - Used to define the default database connection factory method. Strings and interpolation Dart has a few kinds of string literals. You can use single or double quotes, and you can use triple-quoted multiline strings: var s1 = 'I am a "string"' "I'm one too"; var s2 = '''I'm on multiple lines '''; var s3 = """ As am I """; While there is a plus (+) operator on String, it’s often cleaner and faster to use string interpolation: var name = 'Fred'; var salutation = 'Hi'; var greeting = '$salutation, $name'; A dollar sign ( $) in a string literal followed by a variable will expand to that variable’s value. (If the variable isn’t a string, it calls toString() on it.) You can also interpolate expressions by placing them inside curly braces: import 'dart:math'; main() { var r = 2; print('The area of a circle with radius $r is ${PI * r * r}'); } Operators Dart shares the same operators and precedences that you’re familiar with from C, Java, etc. They will do what you expect. Under the hood, though, they are a little special. In Dart, an expression using an operator like 1 + 2 is really just syntactic sugar for calling a method. The previous example looks more like 1.+(2) to the language. This means that you can also override (most) operators for your own types. For example, here’s a Vector class: class Vector { num x, y; Vector(this.x, this.y); operator +(Vector other) => new Vector(x + other.x, y + other.y); } With that, we can add vectors using familiar syntax: var position = new Vector(3, 4); var velocity = new Vector(1, 2); var newPosition = position + velocity; That being said, please don’t go crazy with this. We’re giving you the keys to the car and trusting that you won’t turn around and drive it through the living room. In practice, if the type you’re defining often uses operators in the “real world” (on a blackboard?) then it might be a good candidate for overridden operators: things like complex numbers, vectors, matrices, etc. Otherwise, probably not. Types with custom operators should generally be immutable too. Note that because operator calls are really just method calls, they have an inherent asymmetry. The method is always looked up on the left-hand argument. So when you do a + b, it’s the type of a that gets to decide what that means. Equality Dart has two equality operators, == and !=, which work a little differently from the JavaScript equality operators. Unlike JavaScript, Dart has no === operator. Instead, it has a top-level function called identical(). Use == and != for testing equivalence. They are what you’ll need 99% of the time. Unlike in JavaScript, they don’t do any implicit conversions, so they will behave like you’d expect. Don’t be afraid to use them. Unlike Java, they work for any type that has an equivalence relation defined. No more someString.equals("something"). You can implement == for your own types if that makes sense for them. You don’t have to implement !=: Dart will automatically infer that from your definition of ==. If you do implement == be sure to also implement hashCode. The identical() function is for testing whether two objects are the exact same object in memory. In practice, you will rarely need to use it. The Object class definition of == returns identical(this, other), so the only time you’ll need to call identical() is if you overload == or specifically want to sidestep an overloaded == operator. Numbers Dart has a num class and two subclasses: int and double. Integers are of arbitrary size in the VM, and double are 64-bit doubles as defined by IEEE 754 standard. In typical Dart code, we find that we want two kinds of numbers: - Only integers with no floating points. For instance, using ints for list indices. - Any number, including floating points. Using int handles the first set, and using num handles the second set. It’s very rare that we want a number that must have a floating point and cannot be an integer, which is what double expresses. Idiomatic Dart numbers are annotated with either int or num, rarely double. Futures A Future is a promise for a value to be returned, well, in the future. Methods that work with a Future should always return the Future. This helps consumers of the method to properly handle errors that might occur. It also lets consumers know when the operation is complete. Future doLengthyComputation() { return lengthyComp().then((value) => print(value)) .catchError((e) => print(e)); } Always chain the catchError() call off of the call to then(), otherwise you will lose exceptions thrown from within then(). Here is an example of what not to do: // WARNING: This code contains an anti-pattern. Future doLengthyComputation() { Future future = lengthyComp(); future.then((value) => print(value)); // BAD! You'll only get errors from future, not from then(). // BAD! Your caller never sees any errors that occur. future.catchError((e) => print(e)); return future; } If you want to run a function “in the future”, it’s tempting to use Timer.run. Unless you know what you’re doing, don’t. Unfortunately, exceptions thrown from within run’s callback are more-or-less uncatchable. Luckily, Future has a constructor that can help. Use Future.delayed to run a function in a future event loop tick without losing exceptions that might be thrown. Future doLengthyComputation() { return new Future.delayed(const Duration(seconds: 0), () => doTheThingThatMightFail()); } Dart supports structured comments that can be parsed by tools. However, Dart eschews ceremonial API docs for more fluid and natural comments. Compare and contrast Java and Dart comment styles: /** * Returns an Llama object that can then be petted. * The age argument must specify an non-zero integer. The amount * argument is the amount of {@link Money} paid for the llama. * <p> * This method throws {@link NoMoreLlamasException} is thrown * if there are no more llamas to purchase. * {@link IllegalArgumentException} is * thrown if age is less than zero. * * @param age a non-zero age * @param amount the amount of money paid for the llama * @throws NoMoreLlamasException if there are no more llamas available * @throws IllegalArgumentException if age is less than zero * @return the llama * @see Farmer */ public Llama buyLlama(Age age, Money amount) { // ... } /** * Returns a Llama that can be petted. * An [ArgumentError] is thrown if age is less than zero, and * [NoMoreLlamasError] is thrown if they are all out of llamas. */ Llama buyLlama(int age, Money amount) { // ... } Less is more with Dart doc comments. No need to repeat yourself over and over, just say what you need to say inline in the comments. Also, no need to embed HTML tags in your doc comments; Dart’s docgen tool can understand a subset of markdown. /** * ## Examples * * Getting the _value_: * * Future<int> future = getFutureFromSomewhere(); * future.then((value) { * print("I received the number $value"); * }); * ... */ Learn more about Dart’s comment styles.
https://www.dartlang.org/articles/idiomatic-dart/
CC-MAIN-2014-10
en
refinedweb
/*PS: This topic has something to do with Java graphics as well*/ Hi, I'm working on a star map (graphics program), the final output of the program is similar to: These are the steps i follow: Step 1 Write a method to convert between star coordinate system to the Java picture coordinate system. The star coordinate system has (0,0) in the center, and −1 and 1 as the extremes. The Java graphics coordinate system has (0,0) as the top-left corner, and positive numbers extend down and right up to the screen size. ---->diagram: Step 2 Read the contents of the star-data.txt file, and plot the stars on a Java graphics window. Use a black background, and plot the stars as white circles. Step 3Step 3Quote: star-data.txt contains info on 3,526 stars, this number appears on the first line of the file. Subsequent lines has the following fields: -x, y coordinates for stars (in star coordinate system, e.g. 0.512379, 0.020508) - Henry Draper number (just a unique identifer for the star) -magnitude (or brightness of star) -names of some stars. A star may have several names. Vary the size of the circles to reflect their magnitude. Since brighter stars have smaller magnitude values, you will need to calculate the circle radius, say, 10/(magnitude + 2). Step 4 Read from all files in constellation folder, and plot them on top of the stars. Each file contains pairs of star names that make up lines in the constellation. I have already done Steps 1 - 3. I'm using files : -StarApp.java as my application class that has main() method -StarJFrame.java ,this class creates the window & defines its properties & behavior -StarJPanel.java ---> heres the code, for this class, below -Star.java -----> heres the code, for this class, below Code : [B]StarJPanel.java[/B] import java.awt.*; import javax.swing.*; import java.awt.event.*; import java.util.Scanner; public class StarJPanel extends JPanel implements ActionListener { private Star[] stars; private Timer timer; public StarJPanel() { setBackground(Color.black); timer = new Timer(5, this); stars = getArrayOfStars(); timer.start(); } private Star[] getArrayOfStars() { // This method reads the Stars data from a file and stores it in the stars array. int howMany = Integer.parseInt(Keyboard.readInput()); Star[] stars = new Star[howMany]; for (int i = 0; i < stars.length; i++) { String input = Keyboard.readInput(); Scanner fields = new Scanner(input); double x = Double.parseDouble(fields.next()); double y = Double.parseDouble(fields.next()); int draper = Integer.parseInt(fields.next()); double magnitude = Double.parseDouble(fields.next()); String namesString = ""; String[] names = {}; if (fields.hasNext()) { namesString = fields.nextLine(); names = namesString.trim().split("; "); } stars[i] = new Star(x, y, draper, magnitude, names); } return stars; } public void actionPerformed(ActionEvent e) { repaint(); } public void paintComponent(Graphics g) { super.paintComponent(g); for(int i = 0; i < stars.length; i++) { stars[i].coordinateToPixel(stars[i].getX(), stars[i].getY()); stars[i].drawStar(g); } } } Code : [B]Star.java[/B] import java.awt.*; import javax.swing.*; public class Star { private double x, y; // coordinates of star private int draper; // Henry Draper number (unique identifier) private double magnitude; // Magnitude (brightness) of star private String[] names; // Star name(s) - not always present private int newX; private int newY; private int size; public Star( double x, double y, int draper, double magnitude, String[] names ) { this.x = x; this.y = y; this.draper = draper; this.magnitude = magnitude; this.names = names; size = (int)(10/(magnitude + 2)); } public void coordinateToPixel(double x, double y) { newX = (int) ( (x + 1) * 350); newY = (int) ( (y - 1) * -350); } public void drawStar(Graphics g) { g.setColor(Color.white); g.drawOval( newX, newY, size, size); } public int getNumberOfNames() { return names.length; } public double getX() { return x; } public double getY() { return y; } } As you can see I have done: Step 1: in Star class, with method: coordinateToPixel(double x, double y) Step 2: in StarJPanel class with method: private Star[] getArrayOfStars() (PS: I'm also using Keyboard class to read lines from the stars.txtfile) Step 3: in Star class, with method: drawStar(Graphics g), where size = (int)(10/(magnitude + 2)); Please tell me if I have done 3 steps without confusing & incoherent code, I guess I have written the code differently from what ppl write. Somehow thats how we were taught to write. For me, my way gets a bit confusing sometimes!! Now I'm left stuck with Step 4. I have a folder where all my StarApp, StarJPanel, etc files are & a folder called Constellation Inside constellation folder I have files that contain pairs of star names that make up lines in the constellation. --Heres the constellation folder & star-data.txt file: Constellations & star-data.rar Although I'm thinkin i can use below code to read contents in files & use g.drawLine somewhere in StarJPanel. But I'm not sure how to find same names of stars in constellation files with names in the already set array from star-data & then join the coordinates with g.drawLine ??? Code : import java.io.*; public class reader { public static void main(String args[]) throws Exception { FileReader fr = new FileReader("Constellation/...."); /** Don't know what to put in ....... **/ BufferedReader br = new BufferedReader(fr); String s; while((s = br.readLine()) != null) { System.out.println(s); } fr.close(); } } Not sure how Step 4 is done please guide me. :) By the way, I'm using textpad as my editor & complier. So I have to go: Tools > Run in parameters i put: StarApp < star-data.txt inorder to load the Stars into screen.
http://www.javaprogrammingforums.com/%20file-i-o-other-i-o-streams/3327-how-read-all-files-directory-plot-contents-printingthethread.html
CC-MAIN-2014-10
en
refinedweb
django-batchform 0.2.3 Fill a batch of django forms from an uploaded file. django-batchform This project aims to provide a simple yet powerful way to fill a batch of forms from a single uploaded file (CSV, xlsx, ...). It uses Django class-based generic views to that effect, allowing for a very simple configuration: from django import forms from batchform import views class LineForm(forms.Form): col1 = forms.CharField(max_length=10) col2 = forms.CharField(max_length=10) col3 = forms.CharField(max_length=10) class BatchFormView(views.BaseUploadView): lines_form_class = LineForm columns = ('col1', 'col2', 'col3') Demo In order to have a look at the application, simply clone the repository, ensure you have Django in your repository, and run: ./manage.py runserver Links - Package on PyPI: - Source code on Github: - Doc on ReadTheDocs: (TODO) - Continuous integration on Travis-CI (TODO) - Downloads (All Versions): - 22 downloads in the last day - 98 downloads in the last week - 339 downloads in the last month - Author: Raphaël Barrois - Download URL: - Keywords: django,form,batch,upload - License: BSD - Categories - Package Index Owner: xelnor - DOAP record: django-batchform-0.2.3.xml
https://pypi.python.org/pypi/django-batchform
CC-MAIN-2014-10
en
refinedweb
Browse by Author: gorillatron Page 1 envoi Application message passing and loosly coupled module communication library. Mediators with namespaces and shit. func-invoke Functional helper for invoking methon on instance, sync and async func-match Functional pettern matching gorillatron-extend Merge properties from a list of objects into one new object. Page 1
https://www.npmjs.org/browse/author/gorillatron
CC-MAIN-2014-10
en
refinedweb
Hello, Here is the error I am getting: 1>d:\testproject\gdiplusrenderer.h(61): error C2872: 'Font' : ambiguous symbol 1> could be 'c:\program files (x86)\microsoft visual studio 10.0\vc\include\comdef.h(312) : Font' 1> or 'c:\program files (x86)\microsoft sdks\windows\v7.0a\include\gdiplusheaders.h(244) : Gdiplus::Font' How can I fix this issue? Regards, Ellay K. Resolve the ambiguity Explicitly specify which one you mean: Either write Gdiplus::Font for the GDI+ one or ::Font for the one in the global namespace that's declared in comdef.h. There may be other options, but they depend on details of your code that I don't know.
http://forums.codeguru.com/showthread.php?537379-Help!-ambiguous-symbol-error&p=2119037
CC-MAIN-2014-10
en
refinedweb
On Sun, Jan 16, 2011 at 09:44:26PM +0000, Al Viro wrote:> Already fixed. Actually, taking it out of ifdef would work (the only> place that actually cares about the value of that sucker is SMP side> of mntput()), but we are obviously better off just not touching it on> UP at all - why do pointless work and waste space?> > See the patch upthread. ->mnt_longterm is SMP-only optimization of> mntput(); it's there only to free the common case of mntput() from> cacheline bouncing and on UP it's needed at all.PS: the patch does survive UP beating. Could you pullgit://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6.git/ for-linus ?There's only one patch at the moment:Al Viro (1): mnt_longterm is there only on SMP fs/namespace.c | 31 ++++++++++++++++++++++++------- 1 files changed, 24 insertions(+), 7 deletions(-)
http://lkml.org/lkml/2011/1/16/146
CC-MAIN-2014-10
en
refinedweb
Getting Further with Spring RCP Creating the Domain Objects We begin with our domain objects, two of them, so as not to make life too simple, nor too complex. We have an Address POJO and a Customer POJO, in a new package called "domain": package domain; public class Address { private String street; private String city; private String state; private String zip;; } } Notice that our next domain object, Customer, makes use of a nested object. The nested object is the Address object, defined above. By using this nesting mechanism, we can use nested property paths in forms. So, for example, we can refer to "firstname" and "lastname", because these are not nested, while when dealing with the Address object, we can refer to "address.street" and "address.city". Later you will see how that can come in handy, as an easy way of seeing how the fields relate to each other. package domain; public class Customer { private int id; private String firstName; private String lastName; private Address address; public Customer() { setAddress(new Address()); } public int getId() { return id; } public void setId(int id) { this.id = id; } public Address getAddress() { return address; } public void setAddress(Address address) { this.address = address; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } } Finally, let's create an in-memory data store to hook our two domain objects together and provide some dummy data: package domain; import java.util.HashSet; public class CustomerDataStore { private static int nextId = 1; private HashSet customers = new HashSet(); public CustomerDataStore() { loadData(); } public Customer[] getAllCustomers() { return (Customer[]) customers.toArray(new Customer[0]); } private void loadData() { customers.add(makeCustomer( "Larry", "Streepy", "123 Some St.", "New York", "NY", "10010")); customers.add(makeCustomer( "Keith", "Donald", "456 WebFlow Rd.", "Cooltown", "NY", "10001")); customers.add(makeCustomer( "Steve", "Brothers", "10921 The Other Street", "Denver", "CO", "81234-2121")); customers.add(makeCustomer( "Carlos", "Mencia", "4321 Comedy Central", "Hollywood", "CA", "91020")); customers.add(makeCustomer( "Jim", "Jones", "1001 Another Place", "Dallas", "TX", "71212")); customers.add(makeCustomer( "Jenny", "Jones", "1001 Another Place", "Dallas", "TX", "75201")); customers.add(makeCustomer( "Greg", "Jones", "9 Some Other Place", "Chicago", "IL", "60601")); } private Customer makeCustomer( String first, String last, String street, String city, String state, String zip) { Customer customer = new Customer(); customer.setId(nextId++); customer.setFirstName(first); customer.setLastName(last); Address address = customer.getAddress(); address.setStreet(street); address.setCity(city); address.setState(state); address.setZip(zip); return customer; } } As can plainly be seen, there's nothing here that's specific to Spring RCP. We've simply set up our domain and provided a simple way of accessing the data it represents, via CustomerDataStore.getAllCustomers(). However, here's the special twist that brings us into the world of Spring RCP—in our richclient-application-context.xml we need to register our store of data as a property of our CustomerView, so that we can make use of it there, via a getter and setter that we will create later in our CustomerView. So, open the richclient-application-context.xml and add the "viewProperties" below to the CustomerView bean, as well as creating a new bean called "customerDataStore", which references the class above that defines the data store: <bean id="CustomerView" class="org.springframework.richclient.application.support.DefaultViewDescriptor"> <property name="viewClass" value="simple.CustomerView" /> <property name="viewProperties"> <map> <entry key="customerDataStore" value- </map> </property> </bean> <bean id="customerDataStore" class="domain.CustomerDataStore" />
http://netbeans.dzone.com/news/getting-further-with-spring-rc?page=0,1
CC-MAIN-2014-10
en
refinedweb
. Implementing reflection in c# is a two step process , 1st get the “type” of the object and then use the type to browse members like “methods” , “properties” etc. Step 1: The first step is to get the type of the object. So for example you have a DLL ClassLibrary1.dll which has a class called Class1. We can use the Assembly (belongs to the System.Reflection namespace) class to get a reference to the type of the object. Later we can use Activator.CreateInstance to create an instance of the class. The GetType() function helps us to get a reference to the type of the object. Class1 Assembly System.Reflection Activator.CreateInstance GetType() var myAssembly = Assembly.LoadFile(@"C:\ClassLibrary1.dll"); var myType = myAssembly.GetType("ClassLibrary1.Class1"); dynamic objMyClass = Activator.CreateInstance(myType); // Get the class type Type parameterType = objMyClass.GetType(); Step 2: Once we have a reference of the type of the object we can then call GetMembers or GetProperties to browse through the methods and properties of the class. GetMembers GetProperties // Browse through members foreach (MemberInfo objMemberInfo in parameterType.GetMembers()) {Console.WriteLine(objMemberInfo.Name);} // Browse through properties. foreach (PropertyInfo objPropertyInfo in parameterType.GetProperties()) {Console.WriteLine(objPropertyInfo.Name);} In case you want to invoke the member which you have inspected, you can use InvokeMember to invoke the method. Below is the code: InvokeMember parameterType.InvokeMember("Display",BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.InvokeMethod | BindingFlags.Instance,null, objMyClass, null); Programming languages can be divided into two categories: strongly typed and dynamically typed. Strongly typed languages are those where checks happen during compile time while dynamic languages are those where type checks are bypassed during compile time. In a dynamic language object types are known only during runtime and type checks are activated only at runtime. We would like to take advantage of both worlds. Because many times we do not know the object type until the code is executed. In other words we are looking at something like a dynamically and statically typed kind of environment. That’s what the dynamic keyword helps us with. If you create a variable using the dynamic keyword and if you try to see members of that object, you will get a message as shown below “will be resolved at runtime”. Now try the below code out. In the code I have created a dynamic variable which is initialized with string data. And in the second line I am trying to have fun by trying to execute a numeric incremental operation. So what will happen now? Think.... dynamic x = "c#"; x++; Now this code will compile fine without any complaints. But during runtime it will throw an exception complaining that the mathematical operations cannot be executed on the variable as it's a string type. In other words during runtime the dynamic object gets transformed from the general data type to a specific data type (e.g.: string for the below code). string One of the biggest practical uses of the dynamic keyword is when we operate on MS Office components via interop. So for example if we are accessing Microsoft Excel components without the dynamic keyword, you can see how complicated the code gets. Lots of casting happening in the below code, right? // Before the introduction of dynamic. Application excelApplication = new Application(); ((Excel.Range)excelApp.Cells[1, 1]).Value2 = "Name"; Excel.Range range2008 = (Excel.Range)excelApp.Cells[1, 1]; Now look at how simple the code becomes by using the dynamic keyword. No casting needed and during runtime type checking also happens. // After the introduction of dynamic, the access to the Value property and // the conversion to Excel.Range are handled by the run-time COM binder. dynamic excelApp = new Application(); excelApp.Cells[1, 1].Value = "Name"; Excel.Range range2010 = excelApp.Cells[1, 1]; answers.
http://www.codeproject.com/Articles/593881/What-is-the-difference-between?msg=4566419
CC-MAIN-2014-10
en
refinedweb
Why do we need random numbers? There are a lot of reasons. You might be designing a communications protocol and need random timing parameters to prevent system lockups. You might be conducting a massive Monte Carlo simulation and need random numbers for various parameters. Or you might be designing a computer game and need random numbers to determine the results of different actions. As common as they are, random numbers can be infuriatingly hard to generate on a computer. The very nature of a computer--a deterministic, digital Turing machine--is contrary to the notion of randomness. One application where random numbers are essential is in cryptography. The security of a cryptographic system often hinges on the randomness of its keys. In this month's "Algorithm Alley," Colin Plumb discusses the random-number generator in the Pretty Good Privacy (PGP) e-mail security program. Colin is one of the designers and programmers of PGP, and has spent a lot of time thinking about this problem. His solution is elegant, efficient, effective, and has applications well beyond e-mail security. The ANSI C rand() function does not return random numbers. This is not a bug; it's required by the ANSI C standard. Instead, the values returned are determined by the seed supplied to srand(). If you run the same program with the same seed, you get the same "random" numbers. The pattern may not be obvious to the casual observer, but if Las Vegas ran this way, there'd be fewer bright lights in the big city. John von Neumann once said that "anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin." Sometimes you want truly random numbers. Any number of security and cryptographic applications require them. When it comes to checking for viruses, for example, CRCs are convenient and fast, but you can easily fake out a known polynomial. The Strongbox secure loader from Carnegie Mellon University, however, uses a random polynomial to achieve security while keeping the speed advantages of a CRC (see Camelot and Avalon: A Distributed Transaction Facility, edited by Jeffrey L. Eppinger). There are ways to produce random bits in hardware; sampling a quantum effect such as radioactive decay, for instance. However, this is hard to calibrate and involves special hardware. On the other hand, software in a properly working computer is deterministic--the antithesis of random. Still, a computer generally has to interact and receive input from real-world events, so it is possible to make use of a very unpredictable part of most computer systems: the person typing at the keyboard. Although keystrokes are somewhat random, compression utilities illustrate just how predictable most text is. While it would be foolish to ignore this entropy, anticipating what someone types is akin to password guessing: difficult, but if you have the computational horsepower, conceivable. A more fruitful source is timing. Many computers can time events down to the microsecond. And while typing patterns on familiar words or phrases are repeatable enough to be used for identification, there is still a large window of available noise. Our basic source for entropy comes from sampling system timers on every keystroke. The problem that remains is to turn these timer values, which have a nonuniform distribution, into uniformly distributed random bits. This is where the software comes in. Theory of Operation The file randpool.c (see Listing Four) uses cryptographic techniques to "distill" the essential randomness from an arbitrary amount of sort-of-random seed material. As the file name suggests, the program maintains a pool of hopefully random bits, into which additional information is "stirred." The goal here is that if you have n bits of entropy in the pool ("Shannon information," if you're familiar with information theory), any n bits of the output are truly random. Listing Four /* randpool.h -- declarations for randpool.c */ #include "usuals.h" #define RANDPOOLBITS 3072 /* Whatever size you need (must be > 512) */ void randPoolStir(void); void randPoolAddBytes(byte const *buf, unsigned len); void randPoolGetBytes(byte *buf, unsigned len); byte randPoolGetByte(void); /* randpool.c -- True random number computation and storage * This is adapted from code in the Pretty Good Privacy (PGP) package. * Written by Colin Plumb. */ #include <stdlib.h> #include <string.h> #include "randpool.h" #include "md5.h" #define RANDKEYWORDS 16 /* This is a parameter of the the MD5 algorithm */ /* The pool must be a multiple of the 16-byte (128-bit) MD5 block size */ #define RANDPOOLWORDS ((RANDPOOLBITS+127 & ~127) >> 5) #if RANDPOOLWORDS <= RANDKEYWORDS #error Random pool too small - please increase RANDPOOLBITS in randpool.h #endif /* Must be word-aligned, so make it words. Cast to bytes as needed. */ static word32 randPool[RANDPOOLWORDS]; /* Random pool */ static word32 randKey[RANDKEYWORDS]; /* Next stirring key */ static unsigned randPoolGetPos = sizeof(randPool); /* Position to get from */ static unsigned randKeyAddPos = 0; /* Position to add to */ /* "Stir in" any random seed material before removing any random bytes. */ void randPoolStir(void) { int i; word32 iv[4]; byteSwap(randPool, RANDPOOLWORDS); /* convert to word32s */ byteSwap(randKey, RANDKEYWORDS); /* Start IV from last block of randPool */ memcpy(iv, randPool+RANDPOOLWORDS-4, sizeof(iv)); /* CFB pass */ for (i = 0; i < RANDPOOLWORDS; i += 4) { MD5Transform(iv, randKey); iv[0] = randPool[i ] ^= iv[0]; iv[1] = randPool[i+1] ^= iv[1]; iv[2] = randPool[i+2] ^= iv[2]; iv[3] = randPool[i+3] ^= iv[3]; } memset(iv, 0, sizeof(iv)); /* Wipe IV from memory */ byteSwap(randPool, RANDPOOLWORDS); /* Convert back to bytes */ memcpy(randKey, randPool, sizeof(randKey)); /* Get new key */ randKeyAddPos = 0; /* Set up pointers for future use. */ randPoolGetPos = sizeof(randKey); } /* Make a deposit of information (entropy) into the pool. */ void randPoolAddBytes(byte const *buf, unsigned len) { byte *p = (byte *)randKey+randKeyAddPos; unsigned t = sizeof(randKey) - randKeyAddPos; while (len > t) { len -= t; while (t--) *p++ ^= *buf++; randPoolStir(); /* sets randKeyAddPos to 0 */ p = (byte *)randKey; t = sizeof(randKey); } if (len) { randKeyAddPos += len; do *p++ ^= *buf++; while (--len); randPoolGetPos = sizeof(randPool); /* Force stir on get */ } } /* Withdraw some bits from the pool. */ void randPoolGetBytes(byte *buf, unsigned len) { unsigned t; while (len > (t = sizeof(randPool) - randPoolGetPos)) { memcpy(buf, (byte const *)randPool+randPoolGetPos, t); buf += t; len -= t; randPoolStir(); } memcpy(buf, (byte const *)randPool+randPoolGetPos, len); randPoolGetPos += len; } /* Get a single byte */ byte randPoolGetByte(void) { if (randPoolGetPos == sizeof(randPool)) randPoolStir(); return ((byte const *)randPool)[randPoolGetPos++]; } The stirring operation (actually nothing more than an encryption pass) is central. If you know the key and initial vector, you can reverse it to get back the initial state. Since the encryption is reversible, stirring the pool obviously does not lose information. So all the information in the initial state is there, it's just masked by the encryption. Since we don't need to reverse the encryption, the key is then destroyed. The information is then reinitialized with data taken from the pool that was just stirred. This makes it essentially impossible to determine the previous state of the random-number pool from what is left in memory. The cipher is Peter Gutmann's Message Digest Cipher using MD5 as a base. This is fast and simple (as strong ciphers go), especially on 32-bit machines. In this application, the large key size also helps efficiency. (For another application of this cipher, see Gutmann's shareware MS-DOS disk encryptor, "SFS." Every commercial MS-DOS disk encryptor I've seen--Norton Diskreet, for example--has appalling cryptography. Their only advantage is that you can get the data back with a few weeks' work if you lose the key. If you lose the key with SFS, it's lost.) The output of the generator is taken from the pool, starting after the 64 bytes used for the next stirring key. If you reach the end of the pool, stir again and restart. After that, it is theoretically possible to examine the output and determine the key, which would reveal the complete state of the generator and let you predict its output forever. That, however, would require breaking the cipher by deriving the key from the data before and after encryption, an adequate guarantee of security. Input is more interesting. To ensure that each bit of seed material affects the entire pool, the seed material is added (using XOR) to the key buffer. When you reach the end of the key buffer, stir the pool and start over. The difficulty of cryptanalysis (deriving the key from the prior and following states of the pool) ensures that regularities in the seed material do not produce regularities in the pool. Adding bytes to the key sets the take position to the end of the pool, so the newly added data will be stirred in before any bytes are returned. Thus, you can add and remove bytes in any order. The code mostly works with bytes, but since MD5 works with 32-bit words, a standard byte ordering is used. This way you can use it as a pseudorandom number generator seeded with a passphrase. If you want to use the hash directly, md5.c (see Listing Two) includes a full implementation of the MD5 algorithm. (It is similar to the hash presented in "SHA: The Secure Hash Algorithm," by William Stallings, DDJ, April 1994.) If you have a large amount of low-grade seed material, you can use MD5 to pre-reduce it. For example, you can feed mouse-position reports into MD5, then periodically add the resultant 16-byte digest to the pool. Even faster algorithms are possible--based on CRCs and scrambler polynomials--if you have real-time constraints. Listing Two /* md5.h -- declarations for md5.c */ #ifndef MD5_H #define MD5_H #include "usuals.h" struct MD5Context { word32 hash[4]; word32 bytes[2]; word32 input[16]; }; void byteSwap(word32 *buf, unsigned words); void MD5Init(struct MD5Context *context); void MD5Update(struct MD5Context *context, byte const *buf, unsigned len); void MD5Final(byte digest[16], struct MD5Context *context); void MD5Transform(word32 hash[4], word32 const input[16]); #endif /* !MD5_H */ /* md5.c -- An implementation of Ron Rivest's MD5 message-digest algorithm. * Written by Colin Plumb in 1993, no copyright is claimed. This code is in the * public domain; do with it what you wish. Equivalent code is available from * RSA Data Security, Inc. This code does not oblige you to include legal * boilerplate in the documentation. To compute the message digest of a string * of bytes, declare an MD5Context structure, pass it to MD5Init, call * MD5Update as needed on buffers full of bytes, and then call MD5Final, which * will fill a supplied 16-byte array with the digest. */ #include <string.h> /* for memcpy() */ #include "md5.h" /* Byte-swap an array of words to little-endian. (Byte-sex independent) */ void byteSwap(word32 *buf, unsigned words) { byte *p = (byte *)buf; do { *buf++ = (word32)((unsigned)p[3]<<8 | p[2]) << 16 | ((unsigned)p[1]<<8 | p[0]); p += 4; } while (--words); } /* Start MD5 accumulation. */ void MD5Init(struct MD5Context *ctx) { ctx->hash[0] = 0x67452301; ctx->hash[1] = 0xefcdab89; ctx->hash[2] = 0x98badcfe; ctx->hash[3] = 0x10325476; ctx->bytes[1] = ctx->bytes[0] = 0; } /* Update ctx to reflect the addition of another buffer full of bytes. */ void MD5Update(struct MD5Context *ctx, byte const *buf, unsigned len) { word32 t = ctx->bytes[0]; if ((ctx->bytes[0] = t + len) < t) /* Update 64-bit byte count */ ctx->bytes[1]++; /* Carry from low to high */ t = 64 - (t & 0x3f); /* Bytes available in ctx->input (>= 1) */ if (t > len) { memcpy((byte *)ctx->input+64-t, buf, len); return; } /* First chunk is an odd size */ memcpy((byte *)ctx->input+64-t, buf, t); byteSwap(ctx->input, 16); MD5Transform(ctx->hash, ctx->input); buf += t; len -= t; /* Process data in 64-byte chunks */ while (len >= 64) { memcpy(ctx->input, buf, 64); byteSwap(ctx->input, 16); MD5Transform(ctx->hash, ctx->input); buf += 64; len -= 64; } /* Buffer any remaining bytes of data */ memcpy(ctx->input, buf, len); } /* Final wrapup - pad to 64-byte boundary with the bit pattern * 1 0* (64-bit count of bits processed, LSB-first) */ void MD5Final(byte digest[16], struct MD5Context *ctx) { int count = ctx->bytes[0] & 0x3F; /* Bytes mod 64 */ byte *p = (byte *)ctx->input + count; /* Set the first byte of padding to 0x80. There is always room. */ *p++ = 0x80; /* Bytes of zero padding needed to make 56 bytes (-8..55) */ count = 56 - 1 - count; if (count < 0) { /* Padding forces an extra block */ memset(p, 0, count+8); byteSwap(ctx->input, 16); MD5Transform(ctx->hash, ctx->input); p = (byte *)ctx->input; count = 56; } memset(p, 0, count); byteSwap(ctx->input, 14); /* Append 8 bytes of length in *bits* and transform */ ctx->input[14] = ctx->bytes[0] << 3; ctx->input[15] = ctx->bytes[1] << 3 | ctx->bytes[0] >> 29; MD5Transform(ctx->hash, ctx->input); byteSwap(ctx->hash, 4); memcpy(digest, ctx->hash, 16); memset(ctx, 0, sizeof(*ctx)); /* In case it's sensitive */ } /* The four core functions */ #define F1(x, y, z) (z ^ (x & (y ^ z))) #define F2(x, y, z) F1(z, x, y) #define F3(x, y, z) (x ^ y ^ z) #define F4(x, y, z) (y ^ (x | ~z)) /* This is the central step in the MD5 algorithm. */ #define MD5STEP(f,w,x,y,z,in,s) (w += f(x,y,z)+in, w = (w<<s | w>>32-s) + x) /* The heart of the MD5 algorithm. */ void MD5Transform(word32 hash[4], word32 const input[16]) { register word32 a = hash[0], b = hash[1], c = hash[2], d = hash[3]; MD5STEP(F1, a, b, c, d, input[ 0]+0xd76aa478, 7); MD5STEP(F1, d, a, b, c, input[ 1]+0xe8c7b756, 12); MD5STEP(F1, c, d, a, b, input[ 2]+0x242070db, 17); MD5STEP(F1, b, c, d, a, input[ 3]+0xc1bdceee, 22); MD5STEP(F1, a, b, c, d, input[ 4]+0xf57c0faf, 7); MD5STEP(F1, d, a, b, c, input[ 5]+0x4787c62a, 12); MD5STEP(F1, c, d, a, b, input[ 6]+0xa8304613, 17); MD5STEP(F1, b, c, d, a, input[ 7]+0xfd469501, 22); MD5STEP(F1, a, b, c, d, input[ 8]+0x698098d8, 7); MD5STEP(F1, d, a, b, c, input[ 9]+0x8b44f7af, 12); MD5STEP(F1, c, d, a, b, input[10]+0xffff5bb1, 17); MD5STEP(F1, b, c, d, a, input[11]+0x895cd7be, 22); MD5STEP(F1, a, b, c, d, input[12]+0x6b901122, 7); MD5STEP(F1, d, a, b, c, input[13]+0xfd987193, 12); MD5STEP(F1, c, d, a, b, input[14]+0xa679438e, 17); MD5STEP(F1, b, c, d, a, input[15]+0x49b40821, 22); MD5STEP(F2, a, b, c, d, input[ 1]+0xf61e2562, 5); MD5STEP(F2, d, a, b, c, input[ 6]+0xc040b340, 9); MD5STEP(F2, c, d, a, b, input[11]+0x265e5a51, 14); MD5STEP(F2, b, c, d, a, input[ 0]+0xe9b6c7aa, 20); MD5STEP(F2, a, b, c, d, input[ 5]+0xd62f105d, 5); MD5STEP(F2, d, a, b, c, input[10]+0x02441453, 9); MD5STEP(F2, c, d, a, b, input[15]+0xd8a1e681, 14); MD5STEP(F2, b, c, d, a, input[ 4]+0xe7d3fbc8, 20); MD5STEP(F2, a, b, c, d, input[ 9]+0x21e1cde6, 5); MD5STEP(F2, d, a, b, c, input[14]+0xc33707d6, 9); MD5STEP(F2, c, d, a, b, input[ 3]+0xf4d50d87, 14); MD5STEP(F2, b, c, d, a, input[ 8]+0x455a14ed, 20); MD5STEP(F2, a, b, c, d, input[13]+0xa9e3e905, 5); MD5STEP(F2, d, a, b, c, input[ 2]+0xfcefa3f8, 9); MD5STEP(F2, c, d, a, b, input[ 7]+0x676f02d9, 14); MD5STEP(F2, b, c, d, a, input[12]+0x8d2a4c8a, 20); MD5STEP(F3, a, b, c, d, input[ 5]+0xfffa3942, 4); MD5STEP(F3, d, a, b, c, input[ 8]+0x8771f681, 11); MD5STEP(F3, c, d, a, b, input[11]+0x6d9d6122, 16); MD5STEP(F3, b, c, d, a, input[14]+0xfde5380c, 23); MD5STEP(F3, a, b, c, d, input[ 1]+0xa4beea44, 4); MD5STEP(F3, d, a, b, c, input[ 4]+0x4bdecfa9, 11); MD5STEP(F3, c, d, a, b, input[ 7]+0xf6bb4b60, 16); MD5STEP(F3, b, c, d, a, input[10]+0xbebfbc70, 23); MD5STEP(F3, a, b, c, d, input[13]+0x289b7ec6, 4); MD5STEP(F3, d, a, b, c, input[ 0]+0xeaa127fa, 11); MD5STEP(F3, c, d, a, b, input[ 3]+0xd4ef3085, 16); MD5STEP(F3, b, c, d, a, input[ 6]+0x04881d05, 23); MD5STEP(F3, a, b, c, d, input[ 9]+0xd9d4d039, 4); MD5STEP(F3, d, a, b, c, input[12]+0xe6db99e5, 11); MD5STEP(F3, c, d, a, b, input[15]+0x1fa27cf8, 16); MD5STEP(F3, b, c, d, a, input[ 2]+0xc4ac5665, 23); MD5STEP(F4, a, b, c, d, input[ 0]+0xf4292244, 6); MD5STEP(F4, d, a, b, c, input[ 7]+0x432aff97, 10); MD5STEP(F4, c, d, a, b, input[14]+0xab9423a7, 15); MD5STEP(F4, b, c, d, a, input[ 5]+0xfc93a039, 21); MD5STEP(F4, a, b, c, d, input[12]+0x655b59c3, 6); MD5STEP(F4, d, a, b, c, input[ 3]+0x8f0ccc92, 10); MD5STEP(F4, c, d, a, b, input[10]+0xffeff47d, 15); MD5STEP(F4, b, c, d, a, input[ 1]+0x85845dd1, 21); MD5STEP(F4, a, b, c, d, input[ 8]+0x6fa87e4f, 6); MD5STEP(F4, d, a, b, c, input[15]+0xfe2ce6e0, 10); MD5STEP(F4, c, d, a, b, input[ 6]+0xa3014314, 15); MD5STEP(F4, b, c, d, a, input[13]+0x4e0811a1, 21); MD5STEP(F4, a, b, c, d, input[ 4]+0xf7537e82, 6); MD5STEP(F4, d, a, b, c, input[11]+0xbd3af235, 10); MD5STEP(F4, c, d, a, b, input[ 2]+0x2ad7d2bb, 15); MD5STEP(F4, b, c, d, a, input[ 9]+0xeb86d391, 21); hash[0] += a; hash[1] += b; hash[2] += c; hash[3] += d; } Practice of Operation The file noise.c (see Listing Three) samples a variety of system timers and adds them to the random-number pool. It also returns the number of highest-resolution ticks since the previous call, which you can use to estimate the entropy of this sample. On an IBM PC, only 16 bits are returned; this underestimates the result if the calls are more than 1/18.2 seconds apart, but that is not a security problem. Listing Three /* noise.h -- get environmental noise for RNG */ #include "usuals.h" word32 noise(void); /* noise.c -- Get environmental noise. * This is adapted from code in the Pretty Good Privacy (PGP) package. * Written by Colin Plumb. */ #include <time.h> #include "usuals.h" #include "randpool.h" #include "noise.h" #if defined(MSDOS) || defined(__MSDOS__) /* Use 1.19 MHz PC timer */ #include <dos.h> /* for enable() and disable() */ #include <conio.h> /* for inp() and outp() */ /* This code gets as much information as possible out of 8253/8254 timer 0, * which ticks every .84 microseconds. There are three cases: * 1) Original 8253. 15 bits available, as the low bit is unused. * 2) 8254, in mode 3. The 16th bit is available from the status register. * 3) 8254, in mode 2. All 16 bits of the counters are available. * (This is not documented anywhere, but I've seen it!) * This code repeatedly tries to latch the status (ignored by an 8253) and * sees if it looks like xx1101x0. If not, it's definitely not an 8254. * Repeat this a few times to make sure it is an 8254. */ static int has8254(void) { int i, s1, s2; for (i = 0; i < 5; i++) { disable(); outp(0x43, 0xe2); /* Latch status for timer 0 */ s1 = inp(0x40); /* If 8253, read timer low byte */ outp(0x43, 0xe2); /* Latch status for timer 0 */ s2 = inp(0x40); /* If 8253, read timer high byte */ enable(); if ((s1 & 0x3d) != 0x34 || (s2 & 0x3d) != 0x34) return 0; /* Ignoring status latch; 8253 */ } return 1; /* Status reads as expected; 8254 */ } static unsigned read8254(void) { unsigned status, count; disable(); outp(0x43, 0xc2); /* Latch status and count for timer 0 */ status = inp(0x40); count = inp(0x40); count |= inp(0x40) << 8; enable(); /* The timer is usually in mode 3, but some BIOSes use mode 2. */ if (status & 2) count = count>>1 | (status & 0x80)<<8; return count; } static unsigned read8253(void) { unsigned count; disable(); outp(0x43, 0x00); /* Latch count for timer 0 */ count = (inp(0x40) & 0xff); count |= (inp(0x40) & 0xff) << 8; enable(); return count >> 1; } #endif /* MSDOS || __MSDOS__ */ #ifdef UNIX #include <sys/types.h> #include <sys/time.h> /* For gettimeofday() */ #include <sys/times.h> /* for times() */ #include <stdlib.h> /* For qsort() */ #define N 15 /* Number of deltas to try (at least 5, preferably odd) */ /* Function needed for qsort() */ static int noiseCompare(void const *p1, void const *p2) { return *(int const *)p1 - *(int const *)p2; } /* Find the resolution of the gettimeofday() clock */ static unsigned noiseTickSize(void) { int i = 0, j = 0, d[N]; struct timeval tv0, tv1, tv2; gettimeofday(&tv0, (struct timezone *)0); tv1 = tv0; do { gettimeofday(&tv2, (struct timezone *)0); if (tv2.tv_usec > tv1.tv_usec+2) { d[i++] = tv2.tv_usec - tv0.tv_usec + 1000000 * (tv2.tv_sec - tv0.tv_sec); tv0 = tv2; j = 0; } else if (++j > 10000) /* Always getting <= 2 us, */ return 2; /* so assume 2us ticks */ tv1 = tv2; } while (i < N); /* Return average of middle 5 values (rounding up) */ qsort(d, N, sizeof(d[0]), noiseCompare); return (d[N/2-2]+d[N/2-1]+d[N/2]+d[N/2+1]+d[N/2+2]+4)/5; } #endif /* UNIX */ /* Add as much time-dependent random noise to the randPool as possible. */ word32 noise(void) { static word32 lastcounter; word32 delta; time_t tnow; clock_t cnow; #if defined(MSDOS) || defined(__MSDOS__) static unsigned deltamask = 0; unsigned t; if (deltamask == 0) deltamask = has8254() ? 0xffff : 0x7fff; t = (deltamask & 0x8000) ? read8254() : read8253(); randPoolAddBytes((byte const *)&t, sizeof(t)); delta = deltamask & (t - (unsigned)lastcounter); lastcounter = t; #elif defined(VMS) word32 t[2]; SYS$GETTIM(t); /* VMS hardware clock increments by 100000 per tick */ randPoolAddBytes((byte const *)t, sizeof(t)); delta = (t[0]-lastcounter)/100000; lastcounter = t[0]; #elif defined(UNIX) static unsigned ticksize = 0; struct timeval tv; struct tms tms; gettimeofday(&tv, (struct timezone *)0); randPoolAddBytes((byte const *)&tv, sizeof(tv)); cnow = times(&tms); randPoolAddBytes((byte const *)&tms, sizeof(tms)); randPoolAddBytes((byte const *)&cnow, sizeof(cnow)); tv.tv_usec += tv.tv_sec * 1000000; /* Unsigned, so wrapping is okay */ if (!ticksize) ticksize = noiseTickSize(); delta = (tv.tv_usec-lastcounter)/ticksize; lastcounter = tv.tv_usec; #else #error Unknown operating system #endif cnow = clock(); randPoolAddBytes((byte const *)&cnow, sizeof(cnow)); tnow = time((time_t *)0); /* Read slowest clock last */ randPoolAddBytes((byte const *)&tnow, sizeof(tnow)); return delta; } The code also works under UNIX. You may have to find the frequency of a timer that only returns ticks; noiseTickSize() finds the resolution of a timer (the gettimeofday() function) that only returns seconds. The main driver is in randtest.c (see Listing One). A flash effect is provided by funnyprint(). Of more value is randRange(), which illustrates a way to generate uniformly distributed random numbers in a range not provided by the generator. The problem is akin to generating numbers from 1 to 5 using a six-sided die. The solution amounts to rerolling if you get a 6. Listing One /* usuals.h -- Useful typedefs */ #ifndef USUALS_H #define USUALS_H #include <limits.h> #if UCHAR_MAX == 0xFF typedef unsigned char byte; /* 8-bit byte */ #else #error No 8-bit type found #endif #if UINT_MAX == 0xFFFFFFFF typedef unsigned int word32; /* 32-bit word */ #elif ULONG_MAX == 0xFFFFFFFF typedef unsigned long word32; #else #error No 32-bit type found #endif #endif /* USUALS_H */ /* randtest.c -- Test application for the random-number routines */ #include <stdio.h> #include <string.h> #include <stdlib.h> /* For rand(), srand() and RAND_MAX */ #include "randpool.h" #include "noise.h" /* This function returns pseudo-random numbers uniformly from 0..range-1. */ static unsigned randRange(unsigned range) { unsigned result, div = ((unsigned)RAND_MAX+1)/range; while ((result = rand()/div) == range) /* retry */ ; return result; } /* Cute Wargames-like random effect thrown in for fun */ static void funnyprint(char const *string) { static const char alphabet[] = "ABCDEFGHIJKLMNOPWRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890"; char c, flag[80] = {0}; /* 80 is maximum line length */ unsigned i, tumbling = 0, len = strlen(string); /* We don't need good random numbers, so just use a good seed */ randPoolGetBytes((byte *)i, sizeof(i)); srand(i); /* Sometimes simple PRNGs are useful! */ /* Truncate longer strings (unless you have a better idea) */ if (len > sizeof(flag)) len = sizeof(flag); /* Count letters that we can tumble (letters in the alphabet) */ for (i = 0; i < len; i++) { if (strchr(alphabet, string[i])) { flag[i] = 1; /* Increase this for more tumbling */ tumbling++; } } /* Print until all characters are stable. */ do { putchar('\r'); for (i = 0; i < len; i++) { if (flag[i]) { c = alphabet[randRange(sizeof(alphabet)-1)]; if (c == string[i] && --flag[i] == 0) tumbling--; } else { c = string[i]; } putchar(c); } fflush(stdout); } while (tumbling); putchar('\n'); } #include <conio.h> /* For getch() */ #define FRACBITS 4 /* We count in 1/16 of a bit increments. */ /* Gather entropy from keyboard timing. This is currently MS-DOS specific. */ static void randAccum(int bits) { word32 delta; int c, oldc = 0, olderc = 0; if (bits > RANDPOOLBITS) bits = RANDPOOLBITS; bits <<= FRACBITS; puts("We are generating some truly random bits by timing your\n" "keystrokes. Please type until the counter reaches 0.\n"); while (bits > 0) { printf("\r%4d ", (bits-1 >> FRACBITS) + 1); c = getch(); delta = noise()/6; /* Add time of keystroke */ if (c == 0) c = 0x100 + getch(); /* Handle function keys */ randPoolAddBytes((byte const *)&c, sizeof(c)); /* Normal typing has double letters, but discard triples */ if (c == oldc && c == olderc) continue; olderc = oldc; oldc = c; if (delta) { /* Subtract log2(delta) from bits */ /* Integer bits first, normalizing */ bits -= 31<<FRACBITS; while (delta < 1ul<<31) { bits += 1<<FRACBITS; delta <<= 1; } /* Fractional bits, using integer log algorithm */ for (c = 1 << FRACBITS-1; c; c >>= 1) { delta >>= 16; delta *= delta; if (delta >= 1ul<<31) bits -= c; else delta <<= 1; } } } puts("\r 0 Thank you, that's enough."); } /* When invoked with the argument "foo", this should start with: * Adding "foo\0" to pool. Pseudo-random bytes: * 4c 9d 41 ba 44 41 63 a1 db 1c ab 3f 52 a1 a2 84 c3 e5 dc bc 57 4c d9 f3 38 * d7 45 50 f9 94 36 96 a3 df 90 ff 23 e5 ec 3c 76 1f ce 1c bc d6 79 8b 5e e7 * aa 97 16 c0 50 c6 95 0b c1 62 42 e5 5b 8f d7 bd d7 70 1f c6 60 6a 5f f3 74 * 8d 35 ad 51 5a 4a 0c 02 cd d5 36 7e d4 c2 d9 f0 d3 49 ed 2d fa 4e 2b 70 3f */ int main(int argc, char **argv) { int i; while (--argc) { printf("Adding \"%s\\0\" to the pool.\n", *++argv); randPoolAddBytes((byte const *)*argv, strlen(*argv)+1); } puts("\nPseudo-random bytes:"); i = 100; while (i--) printf("%02x%c", randPoolGetByte(), i % 25 ? ' ' : '\n'); putchar('\n'); funnyprint("This will be deterministic on a given system."); putchar('\n'); noise(); /* Establish a baseline for the deltas */ randAccum(800); /* 800 random bits = 100 random bytes */ puts("\nTruly random bytes:"); i = 100; while (i--) printf("%02x%c", randPoolGetByte(), i % 25 ? ' ' : '\n'); putchar('\n'); funnyprint("This will be unpredictable."); return 0; } The most interesting part is rand-Accum(), which accumulates a specified amount of entropy from the keyboard. It uses the number of ticks returned by the noise() function to estimate the entropy. It assumes that inter-keystroke times vary pretty uniformly over a range of 15 percent or so. Thus, it divides the tick count by 6 to get the fraction of the interval that is random, then takes the logarithm to get the number of bits of entropy. The integer number of bits comes from normalizing the number and counting the shifts. The entropy is kept to four fractional bits using a few iterations of an integer-logarithm algorithm. Weaknesses I don't know of any exploitable holes in this approach to generating random numbers, but in cryptography, only a fool is sure he has a good algorithm. I believe the following points need further examination: - The divide-by-six approximation in randAccum(). This was chosen so a machine with only a 60-Hz clock would produce at least one bit per keystroke; not a very good reason. A much better technique is suggested by Ueli Maurer's paper from Crypto '90, "A Universal Statistical Test For Random Bit Generators." However, this technique is slow to decide that the input is trustworthy and requires large tables. - The "leakage" rate of information from the pool. Because the stirring key is drawn from the pool itself, collisions are possible. These are states of the pool which, after stirring, result in the same output state. This reduces the information content of the pool. - The use of MD5 as a cipher. If you are using this as a cryptographic PRNG and producing large amounts of output from a smaller seed, the cipher at the heart of the stirring may be broken. The amount of known plaintext available from any given stirring is quite low (a few hundred bytes), the key space is dauntingly large (512 bits), and no such attacks on MD5 have appeared in the civilian literature; however, MD5 was not designed for use as a cipher and has had less study in this mode. References Eppinger, Jeffrey L., Lily B. Mummert, and Alfred Z. Spector, eds. Camelot and Avalon: A Distributed Transaction Facility. San Mateo, CA: Morgan Kaufman, 1991. Maurer, Ueli M. "A Universal Statistical Test for Random Bit Generators," in Advances in Cryptology--Crypto '90. Berlin: Springer-Verlag, 1991. Davis, Donald T., Ross Ihaka, and Philip Fenstermacher. "Cryptographic Randomness From Air Turbulence in Disk Drives," in Advances in Cryptology--Crypto '94. Berlin: Springer-Verlag, 1994. Knuth, Donald E. The Art of Computer Programming, Volume 2: Seminumerical Algorithms. Reading, MA: Addison-Wesley, 1981. Colin, a student from Toronto, was introduced to modern cryptography by the Pretty Good Privacy package. He can be contacted at colin@nyx.cs.du.edu.
http://www.drdobbs.com/security/algorithm-alley-truly-random-numbers/184409352
CC-MAIN-2014-10
en
refinedweb
Google Web Toolkit support This module provides a helper to simplify the integration of a GWT UI with Play as an application server. First, download the GWT SDK This module is designed to work with the latest version of GWT. A the time of writing, this is the 1.6.4 version. Before working with the Play GWT module, you have to download the GWT SDK and set the GWT_PATH environment variable. Setting up a GWT project Start to create a new application in the classical way. # play new test-gwt Then edit the conf/application.conf file to enable the GWTmodule : # Additional modules # ~~~~~ # A module is another play! application. Add a line for each module you want # to add to your application. Modules path are either absolutes or relative to # the application root. # module.gwt=${play.path}/modules/gwt Now use the gwt:init command to bootstrap your GWT project in the existing Play application: play gwt:init test-gwt This will add some files and directories to your project : - A /gwt-public directory that will host all the GWT compiled resources. A default index.html file has been created to launch the GWT UI. If you don’t define any specific route to serve this staticDir, the GWT module will automatically map it to the /app path; so the GWT will be available at the URL. - The /app/Main.gwt.xml GWT module descriptor file. It is the main GWT module of your application. By default, it defines the client.Main class as entry point. - The /app/client package that is the default package used by GWT. A client.Main class is created for the default entry point. Using the GWT hosted mode First start you Play application. play run test-gwt The start the GWT hosted mode browser, using the play gwt:browser command. play gwt:browser test-gwt At the first run, GWT will compile your main module a first time. Now you can make changes in your application and refresh the hosted mode browser to see the result. Creating a service using GWT-RPC If you follow the GWT manual, it will explain you how to expose a service with GWT-RPC using a RemoteServiceServlet. Exposing a GWT service with Play is almost the same, but since you can’t define servlets in a Play application, you have to use the provided support class, play.modules.gwt.GWTService. For example, to implement this service: package client; import com.google.gwt.user.client.rpc.*; @RemoteServiceRelativePath("hello") public interface HelloService extends RemoteService { String sayHello(String name); } simply create a class that extends the play.modules.gwt.GWTService, and define the service URL path with the play.modules.gwt.GWTServicePath annotation. Like this: package services; import com.google.gwt.user.server.rpc.*; import play.modules.gwt.*; import client.*; @GWTServicePath("/main/hello") public class HelloServiceImpl extends GWTService implements HelloService { public String sayHello(String name) { return "Hello " + name; } } This is the only difference from the GWT documentation. Debugging the GWT UI When you run the GWT UI in hosted mode, a Java VM simulate the execution of the client Java code. You must understand that in this mode, you use 2 Java VM: - The first one to run your Play application - The second one to run the client UI So if you want to debug all the application you have to attach 2 debugging sessions using JPDA. This is not a problem... For example, using netbeans, create a netbeans project with the play netbeansify command: # play netbeansify test-gwt Now open the test-gwt project in netbeans and use the debug button to attach the 2 debugging session. By default the Play application listens for the debugger on port 8000, and the GWT browser on port 3408. Now you can set breakpoints either in the server code, or in the client code, and netbeans will automatically break on the correct JPDA session and let you debug the Java code.
http://www.playframework.com/documentation/1.0/gwt
CC-MAIN-2014-10
en
refinedweb
Microsoft.SqlServer.Dts.Pipeline Namespace SQL Server 2012 The Microsoft.SqlServer.Dts.Pipeline namespace contains managed classes that are used to develop managed data flow components. It contains the PipelineComponent class, which is the base class for managed data flow components, and the PipelineBuffer class, which is the managed implementation of the IDTSBuffer100 interface. The PipelineBuffer class marshals data flow buffers between the COM data flow engine and managed data flow components. Show:
http://msdn.microsoft.com/en-us/library/Microsoft.SqlServer.Dts.Pipeline.aspx
CC-MAIN-2014-10
en
refinedweb
nsmux nsmux implements the simplest use-case of namespace-based translator selection (see below). To use nsmux do the following: $ settrans -a <node> nsmux <directory> After this operation <node> will be a mirror of <directory> with namespace-based translator selection functionality enabled. Please note that due to some details nsmux may complain a lot when run as a normal user. This matter is the most urgent on the TODO list. Source nsmux translator can be obtained with the following series of commands: $ git clone git://git.sv.gnu.org/hurd/incubator.git nsmux $ cd nsmux/ $ git checkout -b nsmux origin/nsmux filter translator can be obtained with the following series of commands: $ git clone git://git.sv.gnu.org/hurd/incubator.git filter $ cd filter/ $ git checkout -b filter origin/filter The filter is not yet working. Namespace-based Translator Selection Namespace-based translator selection is the special technique of using "magic" filenames for both accessing the file and setting translators on it. A "magic" filename is a filename which contains an unescaped sequence of two commas: ",,". This sequence can be escaped by adding another comma: ",,,". In the magic filename the part up to the first double commas is interpreted as the filename itself; the remaining segments into which the string is split by occurrences of ",," are treated as names of translators located under /hurd/. The simplest advantage before traditional way of setting translators is shown in the following examples. Compare this $ settrans -a file translator1 $ settrans -a file translator2 $ cat file to this: $ cat file,,translator1,,translator2 One simple command versus three more lengthy ones is an obvious improvement. However, this advantage is not the only one and, probably, not even the most important. What is a good candidate for the most important advantage is that translators requested via "magic" filenames are session-bound. In other words, by running cat file,,translator we set a translator visible only to cat, while the original file remains untranslated. Such session-specific translators are called dynamic and there is no (theoretical) way for a client to get a port to a dynamic translator requested by another client. Obviously, dynamic translators can be stacked, similarly to static translators. Also, dynamic translator stacks may reside on top of static translator stacks. An important operation of namespace-based translator selection is filtering. Filtering basically consists in looking up a translator by name in the stack and ignoring translators located on top of it. Note that filtering does not mean dropping some translators: in the current implementation a filter is expected to be a normal dynamic translator, included in the dynamic translator stack similarly to other translators. An important detail is that filtering is not limited to dynamic translator stacks: a filter should be able to descend into static translator stacks as well. Although the concept of filtering may seem purely abstract in the simplest use-case of setting dynamic translators on top of files, the situation changes greatly when dynamic translator stacks on top of directories are considered. In this case, the implementation of namespace-based translator selection is expected to be able to propagate the dynamic translators associated with the directory down the directory structure. That is, all files located under a directory opened with magic syntax, are expected to be translated by the same set of translators. In this case having the possibility to specifically discard some of the translators set up on top of certain files is very useful. Note that the implementation of propagation of dynamic translators down directories is not fully conceived at the moment. The fundamental problem is distinguishing between situations when the dynamic translators are to be set on the underlying files of the directory or on the directory itself. Currently Implemented Currently there a working (though not heavily tested) implementation of the simplest use-case of namespace-based translator selection in the form of translator nsmux. The filter is partially implemented and this is the immediate goal. Propagating translators down directories is the next objective. Open Issues IRC, freenode, #hurd, 2013-08-22 < youpi> err, is nsmux supposed to work at all? < youpi> a mere ls doesn't work < youpi> I'm running it as a user < youpi> echo * does work though < teythoon> ah, yes, nsmux,,is,,funny :p < youpi> well, perhaps but I can't make it work < youpi> well, the trivial ,,hello does work < youpi> but ,,tarfs doesn't seem to be working for instance < youpi> same for ,,mboxfs < youpi> ,,xmlfs seems to somehow work a bit, but not very far... < youpi> so it seems just nobody is caring about putting READMEs wherever appropriate < youpi> e.g. examples in socketio/ ...
http://www.gnu.org/software/hurd/hurd/translator/nsmux.html
CC-MAIN-2014-10
en
refinedweb
03 January 2006 20:14 [Source: ICIS news] TORONTO (ICIS news)--BASF's hostile bid for US catalyst maker Engelhard is "great news" for Engelhard shareholders, Citigroup said on Tuesday.?xml:namespace> The bid underscores the value of Engelhard's catalysts franchise, Citigroup said in a research note for clients. With Engelhard under its belt, BASF would become the world's largest catalyst maker, the analysts added. Engelhard's prospects are based on catalyst opportunities in diesel markets in the US and Europe. In addition, Asia remains a huge market, with Engelhard well positioned in the region through joint ventures (jv) in Japan, Korea, India and a plant in China, Citigroup said. Risks to Engelhard's business include a slowdown in refinery and chemicals market, which would translate into lower sales for its refining and chemicals catalysts. Asian competitors who compete on price to take market share also pose a risk. Said Citigroup: "Engelhard's markets are very price sensitive, if one company undercuts pricing in the industry, the markets tend to take time to recover." Commenting on Engelhard's automotive catalysts operations, Citigroup said that division would be affected if North American car makers Ford and General Motors (GM) continue to lose market share. However, Engelhard's position with Honda, Nissan and others would offset these losses. The analysts added: "The real threat to Engelhard would come from share gains at Chrysler, and Toyota, neither of which is a substantial customer of Engel
http://www.icis.com/Articles/2006/01/03/1031355/basf-bid-great-news-for-engelhard-shareholders-citigroup.html
CC-MAIN-2014-10
en
refinedweb
JSON ("Parsing JSON Message Example "); alert(arrayObject.toJSONString().parseJSON..., visit the following link:... it using JSON The given code parse the message in JSON in JavaScript JSON Tutorials JSON Tutorials  ... tutorials: What is JSON? - Understand What... a message in JavaScript with JSON In the previous section of example we have json Tutorials JSON Tutorials... data is transferred and how it is encoded. While JSON is a tool used for describing values and objects by encode the content in a very specific manner. 2)JSON json json how to develop iphone web service application using json. Please help me by giving any link where I could find an example or any help will be anticipated What is JSON? Engineer" } Read more tutorials about JSON. Which programming language supports... can check the complete list at Read more tutorials about JSON... What is JSON? In this article we are discussing about the JSON which index Programming Tutorials : Many Free Programming tutorials Many links to the online programming tutorials. These programming tutorial links will help you...; Delphi Tutorials Pascal Tutorials Prolog Tutorials JSON-JSP example string value of that array object's index. Here is the example code of JSON... JSON-JSP example In the previous section of JSON-Servlet example you have Example - Tiny Window Java NotesExample - Tiny Window The example discussed here is used to create tiny window in swing. The TinyWindow.java - Creates a very small window... machine. We have also given the screen shot of the example program Siverlight JSON Siverlight JSON how to use JSON in silverlight, i need to populate dynamic data by using JSON CAN ANY HELP ME Hi Friend, Please go through the following link: Thanks example | Java Programming | Java Beginners Examples | Applet Tutorials...In this section we have listed the tutorials on Rose India website. We have largest collection of tutorials on various programming languages. These days C++Tutorials benefit to download the source code for the example programs, then compile and execute each program as it is studied. The diligent student will modify the example... other tutorials, such as C++: Annotations by Frank Brokken and Karel Kubat JSON and Servlet example JSON and Servlet example In the previous section of JSON-Java example you have learned how to create a java class by using JSON classes. Now in this example JSONObject example in Java JSONObject example in Java In the previous section of JSON tutorials you have seen how JSON can be used with JavaScript to create objects and arrays json json how connect ajax with json json json how to get data in json object from database ; C Tutorials | Java Tutorials | PHP Tutorials | Linux Tutorials | WAP Tutorial | Struts Tutorial | Spring FrameWork Tutorial | MySQL Tutorials | XML Tutorial | VB Drop Index Drop Index Drop Index is used to remove one or more indexes from the current database. Understand with Example The Tutorial illustrate an example from Drop Index Difference between jsonstring and json object Difference between jsonstring and json object Is There any difference between JsonString and jsonobjectA? if there is any differece could any one explain with example. Thanks venkatesh Tutorial, Java Tutorials Tutorials Here we are providing many tutorials on Java related technologies. All the tutorials are very useful for programmers. You can learn these tutorials very fast. These tutorials are well written and supported with good retrieve JSON array objects in javascript the following links: Java - JDK Tutorials have large number of Java tutorials at your Java index page. If you spend...Java - JDK Tutorials This is the list of JDK tutorials which... computer for compiling and running the example program discussed here. You Servlet Tutorials for Beginners of Servlet tutorials for beginners integrated with simple and easy examples... Resources Java : Servlet Tutorials - Index page of Servlet tutorials at Rose... India Servlet tutorials for beginners cover the entire topics of Servlet including PHP Basics Tutorial Index to learn the PHP program ASAP. These tutorials will help you grasping the PHP Basics easily. The PHP Basics tutorial is supported with the example code. Let's JSON in JavaScript example code for creating an Object in JavaScript using JSON: ObjectJavaScript... this JavaScript and JSON example you have to just run this ".htm" file on your... JSON in JavaScript   JSP Tutorials - Page2 JSP Tutorials page 2 JSP Examples Hello World JSP Page By this example we are going to teach you that how can you write a "hello...:indexOf> Tag of JSTL. This tag returns index of first occurrence of specified Tiny Window with subclass Java NotesExample - Tiny Window with subclass This is a reimplementation... to just stick the main program into one of the other classes, for example... characteristics this.setTitle("Tiny Window using JFrame Subclass Parsing a message in JavaScript with JSON Parsing a message in JavaScript with JSON In the previous section of example we have studied how to create message in JSON in JavaScript and now Java arraylist index() Function Java arrayList has index for each added element. This index starts from 0. arrayList values can be retrieved by the get(index) method. Example of Java Arraylist Index() Function import... more JEE tutorials. Thanks HI, Check more JEE tutorials. Downloading MyFaces example integrated with tomahawk Downloading MyFaces example integrated with tomahawk... will get 4 war files. myfaces-example-blank-1.1.6.war myfaces-example-simple-1.1.6.war myfaces-example-tiles-1.1.6.war myfaces-example-wap-1.1.6.war Mysql Date Index . Understand with Example The Tutorial grasp you an example from 'Mysql Date Index... Mysql Date Index Mysql Date Index is used to create a index on specified table. Indexes Java Programming: Chapter 9 Index Chapter 9 Correctness and Robustness COMPUTER PROGRAMS THAT FAIL are much too common. Programs are fragile. A tiny error can cause a program... Chapter | Previous Chapter | Main Index JSon Tutorial JSon Tutorial What is JSon? Where i can find the JSon tutorial? Thanks! JSon Tutorial Foreach loop with negative index in velocity Foreach loop with negative index in velocity This Example shows you how to use foreach loop with negative index in velocity. The method used in this example Creating Message in JSON with JavaScript a message with JSON in JavaScript. In this example of creating message in JSON... Creating Message in JSON with JavaScript... about the JSON in JavaScript's some basic concepts of creating a simple object java tutorials java tutorials Hi, Much appreciated response. i am looking for the links of java tutorials which describes both core and advanced java concepts... topics in detail..but not systematically.For example ..They are discussing about Tiny Tennis Club - Java Beginners Tiny Tennis Club sent me the codes of a programmed queue added a stack and a linked list to allow only 20 people to join a tinny tennis club: the oparaations to handle are: Join surname name, Accept, Retire surname, Query Pointing last index of the Matcher Pointing last index of the Matcher This Example describes the way to point the matcher and also indicate the last index of the matcher using expression.For this we Spring Constructor arg index Constructor Arguments Index In this example you will see how inject the arguments into your bean according to the constructor argument index...; <constructor-arg index="0" value Creating Array Objects in JavaScript with JSON in JavaScript-JSON. In our example file we have created an object "students"... Creating Array Objects in JavaScript with JSON... of JavaScript-JSON tutorial you have known that how to create an object and now AWT Tutorials AWT Tutorials How can i create multiple labels using AWT???? Java Applet Example multiple labels 1)AppletExample.java: import javax.swing.*; import java.applet.*; import java.awt.*; import JSON with Java JSON with Java How to request Json From URI and to parse and to store the values in Java array Overloading considered Harmful - Java Tutorials not only popular, but tightly integrated into some important programming... another look on the small code example presented in a former edition...-independence of the calls is instantly interpreted as an example of how things should Programming: Chapter 8 Index . For example, an array might contain 100 integers, numbered from zero to 99... also be references to objects, so that you could, for example, make an array... [ First Section | Next Chapter | Previous Chapter | Main Index Why tiny HTML Tools? Why tiny HTML Tools? The Plug-In, as the name suggests, offers some tiny tools...-sources point to local project resources. What is available in tiny HTML Tools Index Out of Bound Exception Index Out of Bound Exception Index Out of Bound Exception are the Unchecked Exception... the compilation of a program. Index Out of Bound Exception Occurs when SQL Example, Codes and Tutorials SQL Example, Codes and Tutorials SQL Tutorials: Collection of SQL tutorials for beginners and advance programmers. We have developed many tutorials and examples on SQL json and android json and android how can i use a json string [{"Id":10,"News":"sportnews"}] and fill a spinner for a android app?. i tried and i got a: AndroidRuntime:android.widget.ArrayAdapter.createViewFromResource(Adapter C Tutorials C Tutorials In this section we have given large number of tutorials on C... will be needing a C compiler on your machine. Here are the tutorials of C Ajax json Ajax json Hi, How consume json file in ajax program? Thanks JSF - Java Server Faces Tutorials JSF - Java Server Faces Tutorials Complete Java Server Faces (JSF) Tutorial - JSF Tutorials. JSF Tutorials at Rose India covers... zip format from our website. In our JSF tutorials we will describe you Java Multi Dimensions Array - Java Tutorials multidimensional. For accessing elements of array we use its index. In other words, we can say... below the example to provide you an idea-how array works? Declaring one dimension array type var-name[ ]; For example : int[] Dev ; or , you can Body Mass Index (BMI) Java: Body Mass Index (BMI) The Body Mass Index program is divided into two files, the main program... // File: bmi/BMI.java // Description: Compute Body Mass Index A better Java JSON library? A better Java JSON library? A better Java JSON library SQL Example, Codes and Tutorials Spring 3.0 Tutorials with example code Spring 3.0 - Tutorials and example code of Spring 3.0 framework... of example code. The Spring 3.0 tutorial explains you different modules... download the example code in the zip format. Then run the code in Eclipse JSF Tutorials: Easy steps to learn JSF to bring you the best JSF tutorials integrated with free source code... Resources: JSF Tutorials: Index page of JSF Tutorials at Rose India JSF...JSF Tutorials: Easy steps to learn JSF JavaServer Faces or JSF is a web PHP PDO Index ; Example: <?php foreach(PDO::getAvailableDrivers()as $driver) { echo... sqlite2 This is the index page of PDO, following pages will illustrate what is json in iphone sdk what is json in iphone sdk What is json in iphone sdk? Is this a library and when i need to include JSon in my iPhone SDK App What is Index? What is Index? What is Index JSON main question JSON main question What is the main purpose of JSON ,in which cases can i use Programming: Body Mass Index - Dialog Java NotesProgramming: Body Mass Index - Dialog Name... Mass Index (BMI). BMI is a commonly used formula which shows the relationship... can use English units if you make the proper conversions. For example, 1 inch JSON-RPC JSON-RPC JSON-RPC-Java is a dynamic JSON-RPC implementation in Java. It allows you to transparently call server-side Java code from JavaScript with an included lightweight JSON-RPC Welcome to the MySQL Tutorials MySQL Tutorial - SQL Tutorials  .... The identifiers are Database, table, index, column and alias name... User in JSP In this example we have to develop a JSP application which HTML5 Tutorials HTML 5 Tutorials In this section we have listed the tutorials of HTML 5... HTML5 tutorials. Here are some of the best HTML 5 tutorials: HTML5 Tutorials HTML 5 Introduction Here you will learn OGNL Index - For example, array[0], which returns first element of current object. For example. name.toCharArray()[0].numericValue.toString..., this first character at 0 index is extracted from the resulting array How to match a string symbols using the INDEX and the method use Stack How to match a string symbols using the INDEX and the method use Stack How to match a string symbols using the INDEX and the method use Stack... ...and it must be in priority...... -PARENTHESIS -BRACKET -BRACES Example output RMI Tutorials RMI Tutorials In this section there are many tutorials on RMI, these tutorials... applications that can access the remote methods. Here are the tutorials Struts Tutorials Struts Tutorials Struts Tutorials - Jakarta Struts... is provided with the example code. Many advance topics like Tiles, Struts Validation... processes to perform daily business operations. With this example, you build a Web Thread Deadlocks - Java Tutorials lock that holds by first thread, this situation is known as Deadlock. Example In the given below example, we have created two classes X & Y. They have... trying to invoke a method in the other class. In this example, you will see BASIC Java - Java Tutorials letter and lover case. For example, case and Case are different words in java..., it is recommended to follow the convention. Given below the example which Integrated Struts 2 Hibernate and JPA Training Integrated Struts 2 Hibernate and JPA Training  ... example JPA * Introduction to JPA * JPA Basics * Elements... the environment * Libraries * Data Source Simple CRUD Example Spring Appending Strings - Java Tutorials will compare the performance of these two ways by taking example. We will take... of accurate size and after that we append. Given below the example: public Java HashMap - Java Tutorials provides a collection-view of the values in the map. Example : import Spring Tutorial for Beginners : Spring Tutorials: Index page of Spring Tutorials at Rose India... of Spring tutorials for beginners, which covers all the topics of Spring framework.... Roseindia Spring tutorials for beginners begins from environment setup, inversion How to match a string using the INDEX and the method use Stack or Queue? of character that match the given data by INDEX position.using the STACK or QUEUE THE MATCH OF THE STRING IS BASED ON THERE INDEX POSITION **Example output...How to match a string using the INDEX and the method use Stack or Queue?  Example Example JDBC in Servlet examples. Hi Friend, Please visit the following link: Servlet Tutorials Here you will get lot of examples including jdbc servlet examples. JavaScript array remove by index is the index position at which the splice method is to be performed. Full example... JavaScript array remove by index As in the previous example of JavaScript array we have Tutorials on Java examples Java FTP Library Advanced Java Tutorials Java Example Codes... Tutorials on Java Tutorials on Java topics help programmers to learn... anywhere can make use of them. Tutorials along with Video tutorials, examples Technology index page Technology related terms are explained here. Learn it by the tutorials and examples explained here Finding start and end index of string using Regular expression Finding start and end index of string using Regular expression This Example describes the way for finding start and end index of the string using regular expressionQuery ajax json jQuery ajax json Hi, How to use jQuery to load json data through... to crate the json response: JSONObject obj=new JSONObject(); obj.put("name","foo..., textStatus, xhr) ] ) at client side to process the json data. Thanks.... At roseindia.net, you will find plenty of EJB 3.0 Tutorials which have been especially jQuery - jQuery Tutorials and examples jQuery - jQuery Tutorials and examples  ... and jQuery Tutorials on the web. Learn and master jQuery from scratch. jQuery is nice... to develop highly interactive web applications. In this jQuery tutorials series we
http://www.roseindia.net/tutorialhelp/comment/95530
CC-MAIN-2014-10
en
refinedweb
Dave, > > 1.) Namespaces v. Python Modules > > > > All code in ITK is in a C++ "itk" namespace. We'd like to have this > > namespace reflected in the python wrappers with code like this: > > > > # Load the ITK ptyhon wrappers. > > import itk > > > > # Create an instance of an ITK object. > > # Equivalent C++ code: > > # itk::Object::Pointer o = itk::Object::New() > > o = itk.Object.New() > > What's the problem? Maybe it's the fact that we don't have static > function support in Boost.Python yet? Oops, I didn't finish this section of the email. I meant to point out that I don't see a way to have nested namespaces treated as modules in Python through BPL: # itk::foo::Bar() itk.foo.Bar() Even if it were not possible to separately load this "itk.foo" module, it would still be nice from a naming perspective. > I understand why you want this, but for this /particular/ case I'd > suggest that an interface like > > o = itk.Object() > > would be more appropriate. Why expose to users that there's a factory > function at work here? I agree that this particular case is prettier when hiding the New() method, but there is also something to be said for a one-to-one correspondence among C++, Tcl, and Python implementations of the same program. Also, CABLE is intended to be a tool separate from ITK, and should not have ITK-specific hacks in it. Then, in order to achieve this syntactic change there would have to be a way of specifying it in the configuration file. One of my design goals for CABLE was to produce wrappers that reflect the original C++ as closely as possible, and with as few configuration options as possible. This actually brings up the other main issue with automatic generation. Whenever a return_value_policy is required, CABLE will have to make some assumptions, and probably just always use reference_existing_object. Anything different would require per-method configuration, which already starts to defeat the purpose of auto-generation of wrappers. > If you must have the static function interface, we just need to have a > discussion of the C++ interface questions I pose in the link above, so > that I can come up with a good design. I'd definately prefer the def(..., static_()) approach over having static_def. Here is another approach that crossed my mind: struct A { static void StaticMethod(); }; class_<A>("A", init<>()) .def("StaticMethod", static_(&A::StaticMethod)); I'm not sure I'd prefer this over class_<A>("A", init<>()) .def("StaticMethod", &A::StaticMethod, static_()); but I thought I'd mention it anyway. > Aha. I've considered doing something like this in the past, but I > thought that tracking every object in this way *by default* would be a > price that not all users would be willing to pay. In Tcl the tracking is more necessary because the wrapper objects each have a corresponding Tcl command used to invoke methods on that object. CABLE needs to keep track of all the commands it creates and destroys to keep this working. The tracking isn't as necessary for Python, though, so you're right that the extra cost may not be worth it. > There are also issues of base/derived class tracking (what happens if > you're tracking the base object and someone returns a pointer to the > derived object, which has a different address?) which make doing it > right especially difficult. I went through the same thought process for CABLE's Tcl wrappers. At the time I decided to go for the pointer comparison and ignore the problems to get something working. I have yet to thoroughly revisit the issue. However, I've toyed with the idea of automatic down-casting of pointers to polymorphic C++ types. >. -Brad
https://mail.python.org/pipermail/cplusplus-sig/2002-November/002117.html
CC-MAIN-2014-10
en
refinedweb
iWin32Assistant Struct Reference This interface describes actions specific to the Windows platform. More... #include <csutil/win32/win32.h> Inheritance diagram for iWin32Assistant: Detailed Description This interface describes actions specific to the Windows platform. -.1 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/api/structiWin32Assistant.html
CC-MAIN-2014-10
en
refinedweb
#include <vtkQtChartLegendManager.h> Definition at line 43 of file vtkQtChartLegendManager.h. Creates a chart legend manager instance. Sets the chart area that holds the chart series layers. Sets the chart legend to manage. Inserts a chart layer at the given index. Removes the specified chart layer from the list. Sets the visibility for the series in the given chart layer.
https://vtk.org/doc/release/5.8/html/a01671.html
CC-MAIN-2020-16
en
refinedweb
C is the general and basic programming language that will create a base for other programming languages. C programming language was designed by Dennis Ritchie in Bells Lab. And it appeared around 46 years ago which is in 1972 and it was stably established on 11 December 2011. It is a crucial language of computer and it is coded in assembly language and it can run on from supercomputers to the embedded systems. ANSI C (American National Standards Institute) has standardized the C programming language since 1989 and even by the International Organization for Standardization (ISO). C Programming Interview Questions - ? - 9) Explain C preprocessor ? - 10) What is recursion in C ? - 11) Explain Enumerated types in C language? - 12) Differentiate call by value and call by reference ? - 13) List some basic data types in C ? - 14) What is typecasting? - 15) Explain about block scope in C ? - 16) Explain continue keyword in C - 17) Compare array data type to pointer data type - 18) What are bit fields in C ? - 19) What are different storage class specifiers in C - 20) What is NULL pointer? - 21) Explain main function in C ? - 22) What does printf does ? - 23) List some applications of C programming language? - 24) Write program to remove duplicate in an array ? - 25) Explain Pointers in C programming? Below are the list of Best C programming interview questions and Answers Derived data types are object types which are aggregates of one or more types of basic data types. below are the list of derived datatype in C Language. - Pointer types - Array types - Structure types - Union types - Function types The C preprocessor or cpp is the macro preprocessor for the C and C++ computer programming languages. The preprocessor provides the ability for the inclusion of header files, macro expansions, conditional compilation, and line control.; } Enumerated types are used to define variables that can only assign certain discrete integer values throughout the program. Enumeration variables are variables that can assume values symbolically Declaration and usage of an Enumerated variable. enum boolean{ false; true; }; enum boolean. Continue is a jump statement that transfers control back to the beginning of the loop, bypassing any statements that are not yet executed. It can be used only in an iteration statement. Array is a collection of variables of same type that are referred through a common name and a pointer is a variable that holds a memory address. pointers can point to array and array to pointers Bit fields are used to store multiple, logical, neighboring bits, where each of the sets of bits and single bits can be addressed. auto, register, static, extern are storage class specifiers in C. Application of C Programming Language - To develop embedded software - It is to create a computer application - It is effective to create a compiler for various computer languages to convert them into low-level language that is the machine understandable language. - It can be used to develop an Operating system and UNIX is one which is developed by the C .programming language. - It is used for creating software for various applications and even hardware. C program to remove duplicate programme: #include <stdio.h> int main(){ int n, a[100], b[100], calc = 0, i, j; printf("Enter no. of elements in array\n"); scanf("%d", &n); printf("Enter %d integers\n", n); for (i = 0; i < n; i++) scanf("%d", &a[i]); for (i = 0; i<n; i++) { for (j = 0; j < calc; j++) { if(a[i] == b[j]) break; } if (j== calc){ b[count] = a[i]; calc++; } } printf("Array obtained after removing duplicate elements:\n"); for (i = 0; i < calc; i++) printf("%d\n", b[i]); return 0;} Pointers are variables that are used to store addresses. The concept of the pointer is considered to be one of the difficult part of learning the C and C++ programming languages. There are several easy ways to write programs without pointers, but in case of dynamic memory allocation, the knowledge of pointers is a must. Knowing about memory locations and addresses defined will enable you with the ideas of how every variable function in a program. One can determine the exact size of a data type by using the sizeof operator. The storage size of the data type is obtained in bytes by using the syntax: sizeof(data_type). The range of signed int data type if from -32768 to 32767 Normalization is the process by which an address is converted to a form such that if two non-normalized pointers point to the same address, they both are converted to normalized form, thereby having a specific address Flag values are used to make decisions between two or more options during the execution of a program. Generally, flag values are small (often two) and it tries to save space by not storing each flag according to its own data type. The best way to store flag values is to keep each of the values in their own integer variable. If there are large number of flags, we can create an array of characters or integers. We can also store flag values by using low-order bits more efficiently. Bit masking refers to selecting a particular set of bits from the byte(s) having many values of bits. Bit masking is used to examine the bit values and can be done by 'AND' operation of byte, bitwise. Yes. As pointers have access to a particular memory location, the security level decreases and restricted memory areas can be accessed. Other demerits include memory holes, process and memory panics, etc. Yes, Struct is one of the data type in C that have variable size. It is because the size of the structure depends on the fields which can be variable as set by the user. Void pointer is a generic pointer in programming. If the pointer type is unknown, we make use of the void pointer. A pointer can be used with a function- - When an address is to be passed to a function - When an array elements are to be accessed through a function. Passing base address will give access to the whole array. Sometimes the task we are required to do might not fit in the allocated data and code segments. Far pointers help to access rest of the memory inside a program. Far pointers are the information present outside the data segment (generally 64 kb). Such pointers are used when we need to access an address outside of the current segment.
https://www.onlineinterviewquestions.com/c-programming-interview-questions/
CC-MAIN-2020-16
en
refinedweb
Back to CS 106A Homepage Written by Nick Parlante and Brahm Capoor February 17th, 2020 def int_counts(ints): """Returns int-count dict as above.""" counts = {} for num in ints: if not num in counts: counts[num] = 0 counts[num] += 1 return counts def first_list(strs): """Return a firsts dict as above.""" firsts = {} for s in strs: if len(s) >= 1: ch = s[0] if ch not in firsts: firsts[ch] = [] firsts[ch].append(s) return firsts def suffix_list(strs): """Return a suffixes dict as above.""" suffixes = {} for s in strs: if len(s) >= 2: # use suffix as key suffix = s[len(s)-2:] if suffix not in suffixes: suffixes[suffix] = [] suffixes[suffix].append(s) return suffixes def remove_vowels(s): out = '' for c in s: if c not in 'aieou': out += c return out def remove_consonants(s): out = '' for c in s: if c in 'aieou': out += c return out def count_lines(filename, keep_vowels): counts = {} with open(filename) as f: for word in f: word = word.strip() if keep_vowels: word = remove_vowels(word) else: word = remove_consonants(word) if word not in counts: counts[word] = 0 counts[word] += 1 for key in sorted(counts.keys()): print(key, '->', counts[key]) def main(): args = sys.argv[1:] if len(args) == 2 and args[0] == '-vowels': count_lines(args[1], True) else: count_lines(args[0], False) def add_tweet(user_tags, tweet): user = parse_user(tweet) if user == '': return user_tags # if user is not in there, put them in with empty counts if user not in user_tags: user_tags[user] = {} # counts is the nested tag -> count dict # go through all the tags and modify it counts = user_tags[user] parsed_tags = parse_tags(tweet) for tag in parsed_tags: if tag not in counts: counts[tag] = 0 counts[tag] += 1 return user_tags def parse_tweets(filename): user_tags = {} # here we specify encoding 'utf-8' which is how this text file is encoded # python technically does this by default, but it's better to be explicit with open(filename, encoding='utf-8') as f: for line in f: add_tweet(user_tags, line) return user_tags def user_total(user_tags, user): """ Optional. Given a user_tags dict and a user, figure out the total count of all their tags and return that number. If the user is not in the user_tags, return 0. """ if user not in user_tags: return 0 counts = user_tags[user] total = 0 for tag in counts.keys(): total += counts[tag] return total def flat_counts(user_tags): """ Given a user_tags dicts, sum up the tag counts across all users, return a "flat" counts dict with a key for each tag, and its value is the sum of that tag's count across users. """ counts = {} for user in user_tags.keys(): tags = user_tags[user] for tag in tags: if tag not in counts: counts[tag] = 0 counts[tag] += tags[tag] return counts
http://web.stanford.edu/class/cs106a/section/section6/section-6-soln.html
CC-MAIN-2020-16
en
refinedweb
Controls whether physics. using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { public Rigidbody rb; void Start() { rb = GetComponent<Rigidbody>(); } // Let; } }
https://docs.unity3d.com/kr/2017.4/ScriptReference/Rigidbody-isKinematic.html
CC-MAIN-2020-16
en
refinedweb