Today I stumbled upon Scott Hanselman’s post: How to access NuGet when NuGet.org is down (or you’re on a plane), in which Scott discusses how he recovered from an issue with the nuget.org site being down during his demo at the Dallas Day of .Net. As it turns out, while NuGet stores packages it downloads in a local Cache folder within your AppData folder, it doesn’t actually use this cache by default. Scott was able to remedy the situation by adding his local cache as a source through the Visual Studio Package Manager plugin.

Last year, I wrote about my philosophy for dependency management and how I use NuGet to facilitate dependency management without using the Visual Studio plugin, wherein I discuss using the NuGet.exe command line tool to manage .Net dependencies as part of my rake build. After reading Scott’s post, I got to wondering whether the NuGet.exe command line tool had the same caching issue, and after a bit of testing I discovered that it does. Since I, with the help of a former colleague, Josh Bush, have evolved the solution I wrote about previously a bit, I thought I’d provide an update to my approach which includes the caching fix.

As discussed in my previous article, I maintain a packages.rb file which serves as a central manifest of all the dependencies used project wide. Here’s one from a recent project:

packages = [
  [ "Machine.Specifications", "0.5.3.0" ],
  [ "ExpectedObjects", "1.0.0.2" ],
  [ "Moq", "4.0.10827" ],
  [ "RabbitMQ.Client", "2.7.1" ],
  [ "log4net", "1.2.11" ]
]

configatron.packages = packages

This is sourced by a rakefile, which is used by a task that installs any packages not already installed. The basic template I use for my rakefile is as follows:

require 'rubygems'
require 'configatron'

...

NUGET_CACHE = File.join(ENV['LOCALAPPDATA'], '/NuGet/Cache/')
FEEDS = [ "http://[corporate NuGet Server]:8000", "" ]

require './packages.rb'

task :default => ["build:all"]

namespace :build do
  task :all => [:clean, :dependencies, :compile, :specs, :package]

  ...

  task :dependencies do
    feeds = FEEDS.map { |x| "-Source " + x }.join(' ')
    configatron.packages.each do |name, version|
      feeds = "-Source #{NUGET_CACHE} " + feeds unless !version
      packageExists = File.directory?("#{LIB_PATH}/#{name}")
      versionInfo = "#{LIB_PATH}/#{name}/version.info"
      currentVersion = IO.read(versionInfo) if File.exists?(versionInfo)
      if (!packageExists or !version or currentVersion != version) then
        versionArg = "-Version #{version}" unless !version
        sh "nuget Install #{name} #{versionArg} -o #{LIB_PATH} #{feeds} -ExcludeVersion" do |ok, results|
          File.open(versionInfo, 'w') { |f| f.write(version) } unless !ok
        end
      end
    end
  end
end

This version defines a NUGET_CACHE variable which points to the local cache. In the dependencies task, I join all the feeds into a list of Sources for NuGet to check. I leave out the NUGET_CACHE until I know whether or not a particular package specifies a version number; otherwise, NuGet would simply check for the latest version which exists within the local cache. To avoid having to change Visual Studio project references every time I update to a later version of a dependency, I use the -ExcludeVersion option. This means I can’t rely upon the folder name to determine whether the latest version is already installed, so I’ve introduced a version.info file. I imagine this is quite a bit faster than allowing NuGet to determine whether the latest version is installed, but I actually do this for a different reason: if you tell NuGet to install a package into a folder without including the version number as part of the folder name and you already have the specified version, it uninstalls and reinstalls the package.
Without checking for the presence of the correct version beforehand, NuGet would simply reinstall everything every time. Granted, this rake task is far nastier than it needs to be. It should really only have to be this:

task :dependencies do
  nuget.exe install dependencyManifest.txt -o lib
end

Where the dependency manifest file might look a little more like this:

Machine.Specifications 0.5.3.0
ExpectedObjects 1.0.0.2
Moq 4.0.10827
RabbitMQ.Client 2.7.1
log4net 1.2.11

Nevertheless, I’ve been able to coerce the tool into doing what I want for the most part, and it all works swimmingly once you get it set up.
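For clarity, the cache-aware source-list logic in the dependencies task can be sketched in isolation. The feed URLs and the cache fallback path below are hypothetical placeholders, not values from the post:

```ruby
# Sketch of the source-list construction used in the dependencies task.
# Feed URLs and the '/tmp' fallback are hypothetical placeholders.
FEEDS = ["http://nuget.example.com:8000", "https://nuget.example.org/api"]
NUGET_CACHE = File.join(ENV['LOCALAPPDATA'] || '/tmp', 'NuGet/Cache')

def nuget_sources(version)
  feeds = FEEDS.map { |url| "-Source #{url}" }.join(' ')
  # Only consult the local cache when a specific version is pinned;
  # otherwise NuGet would happily settle for whatever version is cached.
  feeds = "-Source #{NUGET_CACHE} #{feeds}" if version
  feeds
end

puts nuget_sources("1.2.11")
puts nuget_sources(nil)
```

The key design point is the conditional prepend: an unpinned package must skip the cache source, or NuGet may never look upstream for a newer release.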
http://lostechies.com/derekgreer/2012/03/09/dependency-management-in-net-offline-dependencies-with-nuget-command-line-tool/
Writing a cross-platform library - best practices Posted Sunday, 23 September, 2012 - 21:47 by james_lohr in Hello, I would like to modify my QuickFont library to work on Android (Mono Android), and I have a few questions on how best to go about this. My first thought is to refactor out everything that is truly platform independent into a separate parent project, and then to have a separate child project for each platform. My first question is how best I should be using OpenTK. Presumably I want one child library using ES11, and the other using OpenGL; however, I'll only be using functions that are duplicated in both... so is there a clean way of doing this without duplicating the code in both child libraries with the only difference being the import? I also suspect that the OpenTK included with Mono Android is not complete (auto-complete seems to indicate only ES11 and ES20, but no OpenGL). So perhaps my parent library is going to need to be independent of OpenTK (which will be a massive pain, since I can't even use the maths functions!). Perhaps I should be using the preprocessor to get around this? My next question is how I should go about structuring this in terms of solutions and namespaces. Should I just shove it all (parent and child projects) into a single solution? And what about namespaces? Presumably they would tie up with the child project names? (perhaps QuickFont.Android, and QuickFont.Something)... Perhaps I shouldn't even be creating multiple projects, and should instead simply implement the missing bits of C# using the Android tools (e.g. System.Drawing.Bitmap is missing, but there is Android.Graphics.Bitmap which I could wrap to look like a System.Drawing.Bitmap), and include these either as a separate library, or ... using the preprocessor? I would really appreciate some guidance on what the best solution is. Kind regards, James L.
Re: Writing a cross-platform library - best practices I'd go for #pragma too if you only have 2 codepaths, but I don't work with Android so I really cannot give you the "best practice" advice that you asked for. (But I guess some reply beats no reply)
http://www.opentk.com/node/3166
credentials(7) — Linux manual page (2016-12-12). See also fork(2) and waitpid(2).

Parent process ID (PPID)
A process’s parent process ID identifies the process that created this process using fork(2). A process can obtain its PPID using getppid(2). A PPID is represented using the type pid_t.

Process group ID and session ID
Each process has a session ID and a process group ID, both represented using the type pid_t. A process can obtain its session ID using getsid(2). A process group is a collection of processes that share the same process group ID; the shell creates a process group for each pipeline it runs.

User and group identifiers
These are represented using the types uid_t and gid_t (defined in <sys/types.h>). On Linux, each process has real, effective, saved, and filesystem user and group identifiers.

This page is part of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page can be found on the project's website.

REFERENCED BY renice(1), access(2), execve(2), fork(2), getgid(2), getgroups(2), getpid(2), getresuid(2), getrlimit(2), getsid(2), getuid(2), keyctl(2), kill(2), prctl(2), ptrace(2), seteuid(2), setfsgid(2), setfsuid(2), setgid(2), setpgid(2), setresuid(2), setreuid(2), setsid(2), setuid(2), wait(2), euidaccess(3), initgroups(3), killpg(3), tcgetpgrp(3), intro(2), intro(3), proc(5), capabilities(7), cgroup_namespaces(7), namespaces(7), nptl(7), path_resolution(7), pid_namespaces(7), unix(7), user_namespaces(7), killpg(2), procenv(1), vlimit(3), faccessat(2)
https://reposcope.com/man/en/7/credentials
csSchedule Class Reference
The csSchedule class provides an easy way to get timers in applications. More...

#include <csutil/schedule.h>

Detailed Description
The csSchedule class provides an easy way to get timers in applications. It can handle both repeating and single-shot callbacks, and is useful for handling time in 3D virtual worlds. Use it like this:

class myEntity {
public:
  virtual void Update();
};

Suppose you have an object of class myEntity, which looks like a button in your virtual world, and you want the button to blink. Calling Update every NextFrame would look bad, and handling the timing yourself is a hassle (and can be a lot slower than mass-handling by csSchedule). So you can use the csSchedule to call the myEntity::Update method every second. You can do it this way:

void call_entity_update(void *arg)
{
  myEntity *mp = (myEntity*)arg;
  mp->Update();
}

You would then use the csSchedule method

AddCallback(call_entity_update, (void*)my_entity, 1000);

to have it call the function once, with the object pointer as argument, after 1000 milliseconds (= 1 second). Or you can use:

AddRepeatCallback(call_entity_update, (void*)my_entity, 1000);

to have the function called repeatedly, every 1000 msec (= every second). To notify the schedule that time has passed, each frame (for example in the NextFrame() method) you must call the TimePassed(elapsed_time) function.

This class is useful for callbacks in 3D virtual worlds, but the callbacks can have some jitter due to framerates. For mission-critical hardware IO calls (like controlling a floppy drive or controlling the UART) this jitter will be too big. In those cases use interrupt-driven callbacks, and place the controlling code in some platform-specific implementation file, since this type of use is typically platform-dependent. However, although this class cannot give callbacks inside a single frame, it will behave as well as possible given callbacks every frame.

Definition at line 81 of file schedule.h.
Constructor & Destructor Documentation
Create an empty schedule.

Member Function Documentation

AddCallback: Add a single-shot callback. The function must be of type void function(void *arg); arg is passed as argument to the function. delay: the function is called after this many msec have passed.

AddRepeatCallback: Add a repeating callback. The function must be of type void function(void *arg); arg is passed as argument to the function. period: the function is called every time this many msec pass.

Remove a single-shot or repeating callback (if multiple identical calls exist, all are removed). Removes all callbacks using the given argument, whatever their function or period. Useful if the argument in question is an object that is being destructed; so, in a class myEntity, in ~myEntity() you can call: schedule->RemoveCallback(this);

Remove a single-shot callback (if multiple identical calls exist, the first is removed).

Remove a repeating callback (if multiple identical calls exist, the first is removed).

TimePassed: Notify the schedule that time has passed; elapsed_time is in msec. It will update the internal data and call any callbacks if necessary.

The documentation for this class was generated from the following file:
- csutil/schedule.h

Generated for Crystal Space 1.0.2 by doxygen 1.4.7
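To make the single-shot vs. repeating behavior and the per-frame TimePassed() contract concrete, here is a toy scheduler with the same shape as the interface documented above; this is an illustrative sketch, not Crystal Space's actual implementation:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Toy stand-in for csSchedule: same idea, not the real implementation.
class TinySchedule {
  struct Entry {
    std::function<void(void*)> fn;
    void* arg;
    long remaining;  // msec until the next fire
    long period;     // 0 means single-shot
  };
  std::vector<Entry> entries;

public:
  void AddCallback(std::function<void(void*)> fn, void* arg, long delay) {
    entries.push_back({fn, arg, delay, 0});
  }
  void AddRepeatCallback(std::function<void(void*)> fn, void* arg, long period) {
    entries.push_back({fn, arg, period, period});
  }
  // Call once per frame with the elapsed milliseconds.
  void TimePassed(long elapsed) {
    for (auto it = entries.begin(); it != entries.end();) {
      it->remaining -= elapsed;
      if (it->remaining <= 0) {
        it->fn(it->arg);
        if (it->period > 0) { it->remaining += it->period; ++it; }
        else it = entries.erase(it);  // single-shot: fire once, then drop
      } else {
        ++it;
      }
    }
  }
};
```

With this toy class, adding a 1000 msec single-shot and a 500 msec repeater and then feeding TimePassed(500) four times fires the single-shot once and the repeater four times, mirroring the AddCallback/AddRepeatCallback semantics described above.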
http://www.crystalspace3d.org/docs/online/api-1.0/classcsSchedule.html
Hello!! After 35 characters I want to do truncatechars in a template, but I have dynamic data like {{listings.address}}, {{listings.city}}, {{listings.state}}. Now I need to have it like this: "address, city, stat..." if the total characters of all of these are greater than 35, else "address, city, state". Any help is appreciated!

MilanVasima
So you wanna truncate text if all characters (address, city and state) are greater than 35. What if one of them is not greater than 35?

bishwasbh
If not greater than 35 then the default value, else truncate chars. This can be done using JavaScript, it's simple. If you want I can drop the code for it.

NeErAjlamsal
Without JS is it not possible or what?? Plz drop your code.

MilanVasima
It is…

MilanVasima
You can do this:

{% if title|length > 35 %}
  {{ title|slice:":35" }}
{% else %}
  <<DO YOUR SH*T, JUST KIDDING>>
{% endif %}

bishwasbh
I want the combined length of address, city, and state, not only a single value, and then after that... like milanchowk, kapan, Kathman…

MilanVasima
Try this:

{{ address|add:","|add:city|add:","|add:state|slice:":35" }}

Or you can simply send the concatenated context/value from the views and do all the thingy things there.

bishwasbh
Done, but I just did it from the view.

bishwasbh
Thanks
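As the thread concludes, the cleanest route is to concatenate and truncate in the view rather than in the template. A minimal sketch (the function name and the ellipsis style are my own, not from the thread):

```python
def listing_label(address, city, state, limit=35):
    """Concatenate address, city, state and truncate the combined string.

    Returns the full "address, city, state" string when it fits within
    `limit` characters; otherwise truncates and appends an ellipsis.
    """
    full = ", ".join([address, city, state])
    if len(full) <= limit:
        return full
    return full[:limit].rstrip() + "..."
```

The truncated value can then be passed into the template context, so the template only renders a plain string.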
https://webmatrices.com/d/713-truncates-inside-template
import "go.chromium.org/luci/server/auth/signing/signingtest"

Package signingtest implements the signing.Signer interface using small random keys. Useful in unit tests.

Signer holds a private key and corresponding cert and can sign blobs with PKCS1v15.

func NewSigner(serviceInfo *signing.ServiceInfo) *Signer
NewSigner returns a Signer instance that uses a small random key. Panics on errors.

Certificates returns a bundle with public certificates for all active keys.

func (s *Signer) KeyForTest() *rsa.PrivateKey
KeyForTest returns the RSA key used internally by the test signer. It is not part of the signing.Signer interface. It should be used only from tests.

KeyNameForTest returns an ID of the signing key.

ServiceInfo returns information about the current service. It includes the app ID and the service account name (that ultimately owns the signing private key).

func (s *Signer) SignBytes(c context.Context, blob []byte) (keyName string, signature []byte, err error)
SignBytes signs the blob with some active private key. Returns the signature and the name of the key used.

Package signingtest imports 11 packages. Updated 2019-10-14.
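For readers unfamiliar with PKCS1v15 signing, this standard-library sketch shows roughly what a test signer like this does internally (sign a digest with an RSA key and verify the signature round-trips). It is illustrative only, not the luci implementation:

```go
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

// signAndVerify mimics the core of a PKCS1v15 test signer: generate a
// small random key, sign the SHA-256 digest of a blob, and confirm the
// signature verifies against the public key.
func signAndVerify(blob []byte) bool {
	// Small key to keep tests fast -- insecure for anything but tests.
	key, err := rsa.GenerateKey(rand.Reader, 1024)
	if err != nil {
		return false
	}
	digest := sha256.Sum256(blob)
	sig, err := rsa.SignPKCS1v15(rand.Reader, key, crypto.SHA256, digest[:])
	if err != nil {
		return false
	}
	return rsa.VerifyPKCS1v15(&key.PublicKey, crypto.SHA256, digest[:], sig) == nil
}

func main() {
	fmt.Println("signature ok:", signAndVerify([]byte("hello")))
}
```

The deliberately small key is the same trade-off the package description mentions: fast key generation for unit tests at the cost of real-world security.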
https://godoc.org/go.chromium.org/luci/server/auth/signing/signingtest
I was recently asked to give a talk at the Sydney F# User Group about how to write a Type Provider (and other things). Now, I’m fairly new to writing F# and even newer to writing Type Providers, but having done code generation in the past using various .NET APIs (DSLs, CodeDom, T4) I’m well versed in the pain that is to be expected when doing code generation.

What’s a Type Provider?

If you haven’t come across Type Providers before, they are components that hook into the F# compiler to generate types based on some pre-conditions. The most common usage is to generate types from data source information, such as a SQL data context, strongly typed CSVs, or classes from JSON. The primary advantage here is that it’s done at the compiler level: types are generated then and those types are used in your codebase. If something changes in your data schema, say the properties of a JSON object change, you hit a compiler error rather than a runtime error, and that’s pretty neat.

Sounds cool, how do I get started?

When writing a Type Provider you can probably generate something without any external dependencies. Unfortunately that is a hell of a lot of code to write to build some of the stuff out, code that you’re likely to get wrong or is just painful to write. If you look at any of the samples out there or existing Type Providers, you’ll see two files named something like ProvidedTypes.fsi and ProvidedTypes.fs. What these contain are some nice base classes for starting your implementations.

Note: Presently I don’t know exactly where you get them from, there seems to be no NuGet package or anything, so what I will be doing for this walkthrough is copying them from the F# Samples project. If someone knows where you get the “master” copy from or a NuGet package to reference, I’m all ears!

Edit: As has been pointed out in the comments, there is a NuGet package which will include the appropriate base classes, the FSharp TypeProviders Starter Pack.
I haven’t updated the code below to work with it so there may be some minor differences.

We’ll start by creating a new F# library project, copying in our ProvidedTypes.fs/fsi files, and deleting Library1.fs.

File -> New -> Type Provider

For this walkthrough I’m going to create the super-simple Type Provider I demoed at the F# User Group. It’s called StringTypeProvider, so create a new F# file named that. Let’s also open a few namespaces so it looks like so:

namespace Samples.FSharp.StringTypeProvider

open System
open System.Reflection
open Samples.FSharp.ProvidedTypes
open Microsoft.FSharp.Core.CompilerServices
open Microsoft.FSharp.Quotations

Note: Samples.FSharp.ProvidedTypes is the namespace for the stuff I got imported. Next we’ll create our Type Provider type:

[<TypeProvider>]
type StringTypeProvider(config: TypeProviderConfig) as this =
    inherit TypeProviderForNamespaces()

This is a compiler error for the moment but we’ll get to that. We’ve done three things here:

- Created a type that has an attribute of TypeProvider. This tells the F# compiler that this type is a Type Provider and to use it as such
- Created a type that has a constructor argument of TypeProviderConfig, and aliased the instance as this for us to use internally
- Inherited from a type called TypeProviderForNamespaces, which takes care of the complexity of our type construction (which we’ll get to later)

The final thing we need to do before we go about implementing our Type Provider is tell the F# compiler that this assembly has Type Providers in it. We do that with an assembly attribute, so put this in the AssemblyInfo.fs (or somewhere else):

[<assembly:TypeProviderAssembly>]
do()

So far our file looks like this:

namespace Samples.FSharp.StringTypeProvider

open System
open System.Reflection
open Samples.FSharp.ProvidedTypes
open Microsoft.FSharp.Core.CompilerServices
open Microsoft.FSharp.Quotations

[<TypeProvider>]
type StringTypeProvider(config: TypeProviderConfig) as this =
    inherit TypeProviderForNamespaces()

[<assembly:TypeProviderAssembly>]
do()

Building the basics

There’s a few basic things that you’ll need to do for every Type Provider that you create, you need to:

- Create a namespace
- Create a type
- Add members to the type
- Add the type to the namespace
- Add the type to the assembly

For the namespace you can generate anything you want. You can get the namespace from the current assembly (ie - your project), but adding types to someone else’s namespace is a bad idea; you might generate a type that clashes with something they too have created. Because of this you’re better off creating your own namespace.
Also we’re going to need a reference to the assembly, so let’s set that up:

[<TypeProvider>]
type StringTypeProvider(config: TypeProviderConfig) as this =
    inherit TypeProviderForNamespaces()

    let namespaceName = "Samples.StringTypeProvider"
    let thisAssembly = Assembly.GetExecutingAssembly()

Now we’ll create our type to “export” from the Type Provider and export it:

[<TypeProvider>]
type StringTypeProvider(config: TypeProviderConfig) as this =
    inherit TypeProviderForNamespaces()

    let namespaceName = "Samples.StringTypeProvider"
    let thisAssembly = Assembly.GetExecutingAssembly()

    let t = ProvidedTypeDefinition(thisAssembly, namespaceName, "StringTyped", Some typeof<obj>)

    do this.AddNamespace(namespaceName, [t])

The let t = ... line creates us a new type that will be exported by the namespace. I’ve named it StringTyped, so when using the Type Provider we’d access it via Samples.StringTypeProvider.StringTyped. When creating a new Type Definition you need to specify the base type to inherit from; it’s an option of Type and can have anything as the base type. Generally speaking you’ll want to use obj as the base type, but really you could use anything you wanted as your base type. If you really want to generate a slimmed down type you can set the HideObjectMethods property to true to suppress the intellisense for members exposed off System.Object, members such as ToString. Lastly we add the type and namespace to the type provider using the AddNamespace method.

Passing arguments to our Type Provider

The way I want to use my Type Provider is like so:

type helloWorld = Samples.StringTypeProvider.StringTyped< @"Hello World!" >

For this to happen I need to specify that it will receive an argument.
This is done by defining a static parameter:

let staticParams = [ProvidedStaticParameter("value", typeof<string>)]

I’m creating it as a list as I’ll need one later, but essentially what I’m doing is saying that there will be a static parameter provided, it will be a string, and I want you to call it value. Next up I need to handle what will happen when the Type Provider is invoked. I do this by defining static parameters on my Type Definition created above:

do t.DefineStaticParameters(
    parameters = staticParams,
    instantiationFunction = ...
)

There’s two things we’re providing here: the list of static parameters and an instantiation function. This instantiation function is what will be called by the Type Provider when the compiler comes across it, so it’s where we want to generate our logic for actually building something up. It takes an F# function that receives the name of the type (ie - StringTyped) and then an obj[] of the parameters which were provided. This array will match the parameters we define with the parameters property, so in our case we expect a single parameter that is a string. I’m going to use a match to validate this:

do t.DefineStaticParameters(
    parameters = staticParams,
    instantiationFunction = (fun typeName paramValues ->
        match paramValues with
        | [| :? string as value |] -> ...
        | _ -> failwith "That wasn't supported!"
    )
)

So our primary match condition checks:

1) Is this an array
2) Does it have a single value
3) Can that value be cast as a string, which I’ll do and call value (this is important later on)

Finally from this fun we need to return a Type Definition, so let’s create that:

do t.DefineStaticParameters(
    parameters = staticParams,
    instantiationFunction = (fun typeName paramValues ->
        match paramValues with
        | [| :? string as value |] ->
            let ty = ProvidedTypeDefinition(
                        thisAssembly,
                        namespaceName,
                        typeName,
                        Some typeof<obj>
                     )
            ty
        | _ -> failwith "That wasn't supported!"
    )
)

This is basically the same as we used originally, with the only difference being that I’m using the name passed in rather than a hard-coded name.

Adding constructors

Now that I have created my type to be instantiated, it’s time that I make it do something useful. To do that I’m going to add a constructor to it. Thanks to our base classes, creating a constructor is easy:

let ctor = ProvidedConstructor(
              parameters = [],
              InvokeCode = fun args -> <@@ value :> obj @@>
           )

Well that was easy wasn’t it! I use the ProvidedConstructor type, define any parameters I want, and finally give it the code that I want to run. The code is in the form of an F# Quotation, which is what the <@@ @@> syntax is all about, and I am saying that the available value (captured earlier) will be upcast to obj. If you’re curious, this code, when used, compiles down to the following C#:

var something = (object)"Hello World";

Where something was the name of our instance and Hello World the value we passed to it. Pretty cool huh!

Generating intellisense

We’re generating a type on the fly here, so it stands to reason that documentation is going to be sparse. If your users are using Visual Studio it might be nice to give them some intellisense help to guide them onto your usage. Conveniently the API we’re working with to build our Type Provider gives us such a facility:

ctor.AddXmlDoc "Initialise the awesomes"

And there you go, intellisense done! Now there are actually two other ways to generate intellisense. It can either be delayed:

ctor.AddXmlDocDelayed (fun () -> "Initializes the awesomes")

Meaning that until the intellisense is requested the function won’t be evaluated. This can be useful if you’re generating your documentation based off some intensive process. Remember that a Type Provider is evaluated at compile time, so if it’s something expensive that you don’t have to do, consider delaying it.
Your other option is to use a computed doc:

ctor.AddXmlDocComputed (fun () -> "Initializes the awesomes")

While this looks similar to delayed, the difference is that delayed docs are generated then cached, while computed docs are generated every single time. Once you’ve set up your documentation, the final step is to add your constructor to the type:

ty.AddMember ctor

Properties

Now that we have a constructor, let’s add some properties to the type you’re going to get.

let lengthProp = ProvidedProperty(
                    "Length",
                    typeof<int>,
                    GetterCode = fun args -> <@@ value.Length @@>
                 )

ty.AddMember lengthProp

There we go, that’s pretty easy isn’t it! We have a few things that we’re doing, like giving the property a name, Length, giving it a type, int, and then we can provide getters and setters using F# Quotations again. These functions can be as simple or as complex as you like. I’m doing something simple here, but you could, say, generate a setter that does validation by adding a more complex body. I could even do something like bulk generate properties:

let charProps = value
                |> Seq.map (fun c ->
                    let p = ProvidedProperty(
                                c.ToString(),
                                typeof<char>,
                                GetterCode = fun args -> <@@ c @@>
                            )
                    let doc = sprintf "The char %s" (c.ToString())
                    p.AddXmlDoc doc
                    p
                )
                |> Seq.toList

ty.AddMembersDelayed (fun () -> charProps)

You’ll see here that you can add properties (well, any members) in a delayed fashion, again useful when you’re generating them from a data source, like a SQL schema or a REST endpoint. There’s a bunch of other properties on your properties that you can set; if you’re after a static property then set IsStatic to true (the default is false). Check out what you get from intellisense (or what is defined in the fsi) for the full details of what you can do to a property.

Methods

When generating a method it’s similar to all the other members, but with one difference: we get to create a method body.
Here’s a method we could make:

let reverser = ProvidedMethod(
                  methodName = "Reverse",
                  parameters = [],
                  returnType = typeof<string>,
                  InvokeCode = (fun args ->
                      <@@ value
                          |> Seq.map (fun x -> x.ToString())
                          |> Seq.toList
                          |> List.rev
                          |> List.reduce (fun acc el -> acc + el) @@>))

ty.AddMember reverser

This takes our string and reverses it through a few pipeline steps. You can though make something as complex as you want, doing whatever you need it to do.

Ready for consumption

There we have it, our Type Provider is ready for us to use. If you want to see the completed Type Provider it can be found here. Now it’s worth talking about some gotchas and things to be mindful of.

Member Names

Remember that F#’s member naming is a lot more relaxed than C#’s; you can use a lot more characters provided you escape them. That means the following code is valid:

type snowman = Samples.StringTypeProvider.StringTyped< @"☃" >

let doYouWantToBuildASnowman = snowman()
doYouWantToBuildASnowman.``☃``

Yep, that’s a snowman property. Isn’t unicode fun!

Visual Studio locks the assembly

This is something that hits me all the time when I’m mucking with Type Providers: when you reference a Type Provider, either from a project or within F# Interactive, the problem is that once you compile and use it you’ve got your assembly locked. Now you can’t change it until you restart Visual Studio. Yay…

You’re impacting compile time

Remember that a Type Provider is something that is evaluated at compile time by the F# compiler. The more complex the processing you do with your Type Provider, the greater an impact you have on compile time. If you’re worried about doing something too intense, don’t be afraid to leverage the delayed features, be it for documentation or member creation.

Conclusion

There you have it folks, a walkthrough on how to create an F# Type Provider.
Remember that there is a video from F# Sydney that also covers this (and some other rambling on my part) and you can find the full code as a gist.
https://www.aaron-powell.com/posts/2015-02-06-writing-a-fsharp-type-provider/
One of the major things I was using jQuery for was the interactions with my Web API. I am used to calling $.getJSON() to get that data and using jQuery Deferred Objects to handle the async nature of the call. Given that with ECMAScript 6 most of the incompatibilities between browsers go away (and thus a lot of the need for jQuery), what about AJAX?

We still have the XMLHttpRequest() object to do the request. That bit hasn’t changed. A new feature that has changed things is Promises. Promises are a basic recipe that can be distilled down to “do something, then do something else, unless you fail, in which case do this other thing”. In Promise land, this turns into:

doSomething()
  .then(function(response) {
    doSomethingElse();
  })
  .catch(function(error) {
    doThisOtherThing();
  });

You can cascade and interleave then/catch to catch things at different positions, but this is the basic format. Back to my case of collecting data from my Web API. I have to code doSomething() to return a Promise and do it on an event-driven basis. If I do it right, then my code can be this:

$http.ajaxGet(uri)
  .catch(function(error) { throw new AJAXError(error); })
  .then(JSON.parse)
  .catch(function(error) { throw new JSONError(error); })
  .then((r) => this.doSomethingWithJSON(r))
  .catch(function(error) { throw new ApplicationError(error); });

The basic flow is something like this: retrieve some data from the Web API, then run that data through JSON.parse() to turn it from a string into an object, then go do something with that JSON in my class. In terms of error handling, I’m going to throw a different exception for each step of the way. If I didn’t care about error handling in the individual steps, then I could do the following:

$http.ajaxGet(uri)
  .then(JSON.parse)
  .then((r) => this.doSomethingWithJSON(r))
  .catch(function(error) { throw new ApplicationError(error); });

This is much simpler to read than the jQuery / ES5 equivalent code. The magic occurs because my ajaxGet method returns a Promise.
At the top of my ECMAScript 6 file I’ve got the following:

import $http from "./AJAX";

This brings in my AJAX library. Here is the relevant portion of that library:

function ajaxGet(url) {
  return new Promise(function(resolve, reject) {
    let req = new XMLHttpRequest();
    req.open("GET", url);
    req.onload = function() {
      if (req.status === 200) {
        resolve(req.response);
      } else {
        reject(new Error(req.statusText));
      }
    };
    req.onerror = function() {
      reject(new Error("Network error"));
    };
    req.send();
  });
}

export default { ajaxGet };

Notice how I return a new Promise. When a new Promise is created you pass it a function which has two arguments:

- What to call when you have a successful or resolved promise
- What to call when you have an unsuccessful or rejected promise

The contents of this function are pretty standard XMLHttpRequest() logic where I specify two event handlers. One is onload(), which is called when the XMLHttpRequest succeeds. In here I check to ensure I get a 200 back and resolve the Promise if I do (resolving a Promise is the success criterion, so it calls the next then… clause). Anything else is an error, so I reject the Promise. Once that is all set up, I send the request async.

I have similar functions for ajaxPost, ajaxPut and ajaxDelete – some of them take data and some of them don’t. As long as I’m returning a Promise, I can deal with them using the Promise recipe above.
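The same resolve/reject pattern can be exercised without a browser by swapping XMLHttpRequest for a stub transport; fakeGet below is my own stand-in, not part of the post's library:

```javascript
// The same resolve/reject pattern as ajaxGet, with a stub standing in for
// XMLHttpRequest so it runs anywhere. fakeGet is illustrative, not the
// post's actual library code.
function fakeGet(url, shouldFail = false) {
  return new Promise((resolve, reject) => {
    // A real implementation wires resolve/reject to onload/onerror instead.
    if (shouldFail) {
      reject(new Error("Network error"));
    } else {
      resolve(JSON.stringify({ url, ok: true }));
    }
  });
}

fakeGet("/api/items")
  .then(JSON.parse) // string -> object, exactly as in the post
  .then((data) => console.log(data.ok))
  .catch((err) => console.error(err.message));
```

The chain reads the same as the real one: the transport resolves with a string, JSON.parse turns it into an object, and any failure at any step falls through to the final catch.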
https://shellmonger.com/2015/03/24/promises-and-ajax-in-ecmascript-6/
Recently, I set out to install a doorbell in my new house and thought: why doesn’t my doorbell tell me who is at the door? Here’s what we’re going to build: a $60 Raspberry Pi-powered security camera setup that takes pictures, posts them to the cloud, and then does face recognition. You could also stream the data to Amazon S3.

Machine learning with Amazon Rekognition

This tutorial will focus on the machine learning part—using Amazon’s new Rekognition service to do face recognition on your guests, and send that to your Amazon Echo so you will always know who’s at your door. In order to build a reliable service, we’ll also make use of one of Amazon’s coolest and most useful products: Lambda.

Ingredients:
Amazon Echo Dot ($50)
Raspberry Pi v3 ($38) (this project would also work with a Pi v2 and USB WiFi)
Raspberry Pi-compatible camera ($16)
Raspberry Pi case ($6)
16GB SD card ($8)
Total: $118

We will use Amazon’s S3, Lambda, and Rekognition services for the face matching. These services are free to get started, and you can recognize thousands of people at your door every month for pennies.

Setting up the Raspberry Pi

If you’ve done any of my other Raspberry Pi tutorials, much of this will be familiar. First, download NOOBS with PIXEL, the new desktop environment. Next, change the name of your Pi to something you can remember, so you can SSH into it. There are good instructions for this on How-To Geek. Next, you should attach your Raspberry Pi camera to your Pi. Remember, the tape should face the Ethernet jack—I’ve probably Googled that a hundred times by now. Note: you might want to buy a wide-angle camera for a bigger field of view; you also might want to buy an infrared camera to add night vision.

I’ve actually installed a couple of these around the house at this point. The camera ribbon cable is so thin, you can potentially mount the Pi inside and slide the cable over the door like I did from my laboratory (garage). Next, you need to install RPi-Cam-Web-Interface.
Setting up Amazon S3 and Amazon Rekognition

If you haven’t created an AWS account, you need to do that now. You should create an IAM user and give that user access to S3, Rekognition, and Lambda (we’ll use Lambda later). Install the AWS command line interface with:

sudo apt install awscli

Set your region to US-East (as of the time of writing this article, Rekognition is only available there). Create a face recognition collection:

aws rekognition create-collection --collection-id friends

You can add your friends’ faces with a quick Unix shell script I wrote:

aws s3 cp $1 s3://doorcamera > output
aws rekognition index-faces \
    --image "{\"S3Object\":{\"Bucket\":\"doorcamera\",\"Name\":\"$1\"}}" \
    --collection-id "friends" --detection-attributes "ALL" \
    --external-image-id "$2"

Either copy that into a file and use it as a shell script, or type it into a command line and replace $1 with the local filename of a picture of your friend, and $2 with the name of your friend. Now you can test out that face recognition with a similar script ($1 is again the filename of the picture to check):

aws s3 cp $1 s3://doorcamera > output
aws rekognition search-faces-by-image --collection-id "friends" \
    --image "{\"S3Object\":{\"Bucket\":\"doorcamera\",\"Name\":\"$1\"}}"

You will get back a big JSON response, with not just the match, but other aspects of the picture, including gender, emotion, facial hair, and a bunch of other interesting stuff:

{
    "FaceRecords": [
        {
            "FaceDetail": {
                "Confidence": 99.99991607666016,
                "Eyeglasses": {
                    "Confidence": 99.99878692626953,
                    "Value": false
                },
                "Sunglasses": {
....

Either way, we need to expose this functionality in a webserver for later. I used Flask as my webserver:

from flask import Flask, request
import cameras as c

app = Flask(__name__)

@app.route('/faces/<path:camera>')
def face_camera(camera):
    data = c.face_camera(camera)
    return ",".join(data)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

I put the parsing code (and all the other code mentioned here) at github.com/lukas/facerec.
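The JSON that search-faces-by-image returns has to be boiled down to a single name before it is useful to the webserver. Here's a minimal sketch of that step, assuming the documented FaceMatches response shape; this is illustrative, not the code from the repo, and the 80.0 similarity threshold is an arbitrary choice:

```python
# Sketch: reduce a Rekognition search-faces-by-image response to the
# best-matching name. The "FaceMatches" / "Similarity" /
# "Face.ExternalImageId" keys follow AWS's documented response shape.

def best_match(response, min_similarity=80.0):
    """Return the ExternalImageId of the most similar indexed face,
    or None if no match clears the threshold."""
    best = None
    for match in response.get("FaceMatches", []):
        similarity = match.get("Similarity", 0.0)
        if similarity < min_similarity:
            continue
        if best is None or similarity > best["Similarity"]:
            best = match
    return best["Face"]["ExternalImageId"] if best else None

sample = {"FaceMatches": [
    {"Similarity": 97.2, "Face": {"ExternalImageId": "lukas"}},
    {"Similarity": 85.0, "Face": {"ExternalImageId": "chris"}},
]}
print(best_match(sample))  # lukas
```

Thresholding matters here: Rekognition will happily return weak matches, and you probably don't want your doorbell greeting strangers by a friend's name.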
I actually sewed one of these into a stuffed animal and made a very creepy face-recognizing teddy bear sentry that lives on my desk.

Making the face recognition camera work with Amazon Echo

We’re going to have the Echo connect to an AWS Lambda service, which then talks to our Raspberry Pi through an SSH tunnel. This might feel a little convoluted, but it’s the easiest way to do it.

Exposing our HTTP face-recognizing API through an SSH tunnel

At this point, we’ve built a little webapp that does face recognition, and we need to make it accessible to the outside world. If we had a webserver somewhere, we could set up an SSH tunnel, but there’s a sweet little app called localtunnel that does everything for us. You can install it easily with:

npm install -g localtunnel

I like to wrap it with a little script that keeps it alive in case it goes down. Change MYDOMAIN to something meaningful to you:

until lt --port 5000 -s MYDOMAIN; do
    echo 'lt crashed... respawning...'
    sleep 1
done

Now you can ping your server with a call to the localtunnel URL.

Creating an Alexa skill

To hack our Echo, we need to create a new Alexa skill. Amazon has a good guide on getting started, or you can go directly to the Alexa Developer Portal. First, we need to set up an intent:

{
    "intents": [
        {
            "intent": "PersonCameraIntent",
            "slots": [
                {
                    "name": "camera",
                    "type": "LIST_OF_CAMERAS"
                }
            ]
        }
    ]
}

And then we give Alexa some sample utterances:

PersonCameraIntent tell me who {camera} is seeing
PersonCameraIntent who is {camera} seeing
PersonCameraIntent who is {camera} looking at
PersonCameraIntent who does {camera} see

Next, we need to give Alexa an endpoint, and for that we are going to use a Lambda function.

Setting up a Lambda function

If you’ve never used a Lambda function, you’re in for a treat!
Lambda functions are a simple way to define a consistent API for a simple function on Amazon’s servers, and you only pay when they’re invoked. The heart of the handler ends up looking something like this:

speech_output = "I don't know what robot you're talking about"
should_end_session = False
return build_response({}, build_speechlet_response(
    card_title, speech_output, None, should_end_session))

You can take this same technology and apply or extend it in a lot of cool ways. For example, I put the same code on my robots from my robot Pi/TensorFlow project, and now they can all talk to me and tell me what they’re seeing. I was also thinking of connecting the Pi to my August lock using this GitHub project, so that my door would automatically open for my friends or automatically lock if an angry-looking person is at the door.
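For context, the build_response and build_speechlet_response helpers referenced above come from Amazon's Alexa skill samples. A minimal sketch of what they return might look like this; the field names mirror the Alexa skills JSON response format, and the example wording is illustrative:

```python
# Minimal sketch of the Alexa response helpers used in the snippet
# above, modeled on Amazon's Alexa skill samples.

def build_speechlet_response(title, output, reprompt_text, should_end_session):
    # The "speechlet" part: what Alexa says, what the companion app shows.
    return {
        "outputSpeech": {"type": "PlainText", "text": output},
        "card": {"type": "Simple", "title": title, "content": output},
        "reprompt": {
            "outputSpeech": {"type": "PlainText", "text": reprompt_text},
        },
        "shouldEndSession": should_end_session,
    }

def build_response(session_attributes, speechlet_response):
    # The envelope the Lambda function returns to the Alexa service.
    return {
        "version": "1.0",
        "sessionAttributes": session_attributes,
        "response": speechlet_response,
    }

reply = build_response({}, build_speechlet_response(
    "Door camera", "I see Lukas at the door", None, True))
print(reply["response"]["outputSpeech"]["text"])  # I see Lukas at the door
```

Inside the real handler, the output string would come from the face-recognition webserver rather than being hard-coded.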
https://www.oreilly.com/ideas/build-a-talking-face-recognizing-doorbell-for-about-100
I am really not sure WHERE to post this question so I'm starting here -- since our project is all C#. We have joined the many who discovered the coding shortcut made in the IDE where Add Reference does not enumerate the GAC. I have yet to get the registry key built in a way that the IDE recognizes that I have made an entry to add to the Add Reference list, either. But that aside, with the team doing various portions of the code, we have found each coder uses different ways to do Add Reference. You can compile the DLL and, in the main project, create a reference to the DLL by browsing. Works great for this one solution -- but move that solution to a desktop that does not have the DLL compiled and you get broken code. You can create a reference by pointing to the project as well. We have one solution that has about seven projects in it right now, and references are made to each DLL project. This is portable as long as you place the solution in VSS along with all of the projects. Without the original solution, though, the project becomes sensitive to the order in which you rebuild the solution. So all of the dependent projects must be added, then the main project added. This is not good for support in the long run. So now that leads to my query as to what others have done as a standard to address this! What should be the standard for creating a reference to a company DLL? What should be the standard for DLLs that are for just this project vs. ones that are reusable across many projects? One thought I had was to do this: Each DLL is a stand-alone solution. Each solution is unit tested and then compiled... with the DLL installed in a standard directory ("GAC Projects" for example). I would like to get the GAC registry key figured out, because then we can create a new key and export the registry keys to that directory. This way I can transport the DLL library, with registration, and store it on a network share which everyone can then run.
This way, if they need to create a reference to one of these, they are no longer dependent on build order, or on whether or not they compiled the DLL. It will just be in the Add Reference list. I am also interested in what others have done for naming standards. We have two separate DLLs (and probably more to follow) that create the System.Mycompany namespace. Currently these are named only by the technology, which matches the namespace technology, e.g. System.Mycompany.prgParseToXSL will be in the DLL named prgParseToXSL. I thought that to be easier for coders than naming the DLL to match the namespace name. I am open to any opinions and directions others have to offer! Thanks.
http://forums.devx.com/showthread.php?16141-GAC-Add-Reference-solutions-and-stds&p=36752
06 August 2012 04:25 [Source: ICIS news] SINGAPORE (ICIS)--A fire broke out at Iran's Bandar Imam petrochemical complex, killing one person and shutting several units. The blaze was caused by a leak in a pipeline delivering natural gas feedstock to the complex, according to Shana, the news agency of Iran's oil ministry. The fire was fully extinguished by 4 August, Shana news said. Further details on the shut units were not immediately available. Bandar Imam is Iran's largest petrochemical complex. In February this year, a fire was reported at an idle polymer plant owned by Bandar Imam Petrochemical Co (BIPC). The Bandar Imam Khomeini complex has a total polyethylene capacity of 1.56m tonnes/year, and a polypropylene capacity of 450
http://www.icis.com/Articles/2012/08/06/9584033/fire-at-irans-bandar-imam-complex-kills-one-shuts-units-report.html
mine was :-)

> is UML

I wonder if we could save a couple of bytes & cycles for everyone else by doing something like #ifdef CONFIG_IRQ_HAS_RELEASE / #endif around that, and then letting the Kconfig magic set CONFIG_IRQ_HAS_RELEASE as required? If other arches need it they can do the same, and if eventually almost everyone does we can kill the #ifdef crud?

Longer term I wonder if some of the irq mechanics in UML couldn't end up being a bit more like the s390 stuff too?
http://lkml.org/lkml/2005/5/26/274
help me find sample program in array
Help me find a sample program in arrays... one that determines, for 8 numbers, if each is "lost" or "found".

doubt in vector code
hi, how to add the single string as given below
234.9 435.0 903.1 342.1 874.2 234.1 134.1 341.2 123.2 245.1 243.2 342.6 234.9 230.5 343.7
in a vector. I want to add the above text as a single string in a vector. Kindly send me the source code for that.

Wanna help
import java.net.*;
import java.io.*;
import javax.swing.*;
import java.awt.event.*;
import java.awt.*;
import javax.swing.table.*;
import java.util.*;
class Learning implements ActionListener {
    static JFrame jf;
    static JTextArea jta;

Help: cannot find main class
Getting "cannot find main class. Program will exit." after running the below program:
import java.util.*;
public class VectorDemo {
    public static void main(String[] args) {
        Vector<Object> vector = new Vector<Object>();
        int primitiveType = 10;

storing Vector
Get a value from a text doc and store it in a Vector.

Request
Still I need the clear answer. Please tell me how to invoke the method of a class whose object is stored in a vector:
class base { void disp(); }
class derivedclass1 extends base { void disp() { } }
class derivedclass2 extends base { void disp() { } }

Thanks
Your example program is excellent, thanks!!

question
hi, I have gone through your notes and got some useful information, and I want the answer for my doubt: I want to add the details of students (name, rollno, address, etc.), save them, and retrieve the information when I specify the rollno, by using the Vector class in Java.

vectorclass
I want the Vector class uses, advantages, disadvantages and source code. Please tell me the advantages of the Vector class.

Explanation about vector
The explanation about Vector is really useful.

vector
Why do we use the Vector class, and what are its benefits?

Array of Vectors
I tried creating an array of Vectors as follows...
Vector v = new Vector[5];
for(int i = 0; i < 5; i++) {
    v[i] = new Vector();
}
now, the code 'v[0].add(4);' is not working. Please help... it says 'can't resolve method add(int)'...

i need help
hi, I need help making a simple inventory system using Vector that has add, delete and search... thanks. Just for my project, please.

Vector
What is the effect of the following: Vector<Double> list = new Vector<Double>(), and what are vectors actually about?

please help me in a java program !! - Java Beginners
The program should use... (see en.wikipedia.org/wiki/Kosaraju_algorithm). There are 4 classes in my program: Vertex... and run the program. I still need to modify the method kosaraju in the class Graph.

plz help me find a program
plz help... I want source code in JSP for order processing (rounded into 2 decimal places, sample 1.25) Mexican_________ guys help me... hello guys, can you share with me code about Currency Conversion? Money will be converted according to the type of currency. Would you help me please?

i wanna insertion sort program find calendar to array value (1000,5000,10000), please help me
Insertion sort program, find calendar to array value 1000, 5000, 10000. Program: find calendar for insertion sort in a JFrame, to array 1000.

help me 2
Write a java program to enter five numbers and determine the location of a number. Sample output: ENTER 5 NUMBERS: 1 2 3 4...
Scanner input = new Scanner(System.in);
int array[] = new int[5

Sample Java program for beginners
Here beginners in Java can learn to create their first Hello World program... a programmer is ready to create their own Java program. Here we have also described what is Java, Java classes, objects and methods, which will help you create your first program.

for this program - Java Beginners
Please help me with this program: Write a program that asks the user for a starting value and an ending value and then writes all the integers (inclusive) between those two values. Enter Start: 10 Enter End: 14 10
http://www.roseindia.net/tutorialhelp/allcomments/1564
First, install CKEditor (put the ckeditor folder inside js/ckeditor in your static folder and include it):

<script type="text/javascript" src="{{=URL(request.application,'static','js/ckeditor/ckeditor.js')}}"></script>

After that, we define a 'body' text field (or whatever name you like):

db.define_table('page',
    # more Fields ...
    Field('body', 'text'))

Then, inside your db.py (or another file in your models directory) put the widget:

def advanced_editor(field, value):
    return TEXTAREA(_id = str(field).replace('.','_'), _name=field.name,
                    _class='text ckeditor', value=value, _cols=80, _rows=10)

and use the widget:

db.page.body.widget = advanced_editor

From now on, SQLFORM will use CKEditor to show/edit/save data!

Update: After a lot of requests, I am updating this slice so you can also upload files via CKEditor! Also, a quotes bug (due to a copy-paste from my blog) was fixed. Here is what needs to be done in order to have CKEditor uploads working. First, it needs a "file browser" URL. That's just a form with an upload field that we can use to find and upload files. The thing is that we must return the path of the uploaded file BACK to the parent form. The most difficult part is activating the Upload button. That is done by specifying the "filebrowserBrowseUrl". So, here it goes!
Let's add a new table that will hold our files:

import datetime; timestamp = datetime.datetime.today()

db.define_table('files',
    Field('title', 'string'),
    Field('uploaded_data', 'upload'),
    Field('created_on', 'datetime', default=timestamp))

db.files.title.requires = IS_NOT_EMPTY()
db.files.uploaded_data.requires = IS_NOT_EMPTY()

Now, let's add an action to our controller:

def upload_file():
    url = ""
    form = SQLFORM(db.files, showid=False)
    if form.accepts(request.vars, session):
        response.flash = T('File uploaded successfully!')
        url = URL(r=request, f="download",
                  args = db(db.files.title == request.vars.title).select(orderby=~db.files.created_on)[0].uploaded_data)
    return dict(form=form, cknum=request.vars.CKEditorFuncNum, url=url)

and a view upload_file.html:

{{extend 'layout.html'}}
<h2>Upload file</h2>
{{=form}}
{{ if url != "": }}
<script type="text/javascript">
    window.opener.CKEDITOR.tools.callFunction({{=cknum}}, '{{=url}}');
</script>
{{ pass }}

Then, let's add the javascript files to our layout AFTER web2py_ajax.html:

<script type="text/javascript" src="{{=URL(request.application,'static','js/ckeditor/ckeditor.js')}}"></script>

Finally, let's create a test page:

{{extend 'layout.html'}}
{{=form}}
<script type="text/javascript">
    var ckeditor = CKEDITOR.replace('page_body', {
        filebrowserBrowseUrl : "{{=URL(request.application, c='default', f='upload_file')}}",
        //filebrowserUploadUrl : "{{=URL(request.application, c='default', f='upload_file')}}",
    });
</script>

We are ready!

Comments:

I have used this slice, along with other information, to create plugin_ckeditor which can be used as a FORM widget, and also supports edit in place. Feel free to check it out.

I can't seem to get the test page working...

Works great, thanks. However I had to replace 'page_body' with 'content' to make it work. Well, now that I think about it, maybe it's obvious that this parameter has to be personalised...
Isn't it possible to remove a comment once it is posted on web2pyslices.com?

The first part of this slice works for me. Afterwards, I tried to use TinyMCE instead of CKEditor. The TinyMCE editor is displayed instead of the textarea, but the text typed is not saved into the database, just an empty value. What could be the reason for that? Many thanks.

Jon, this is a really helpful and easy to implement slice. Note: I think there is already an internal file table in web2py (for all uploaded files) which maybe you could use, to avoid implementing the table yourself...
http://www.web2pyslices.com/slice/show/1345/using-ckeditor-for-text-fields
Your AWS client might see calls to AWS services fail due to unexpected issues on the client side. Or calls might fail due to rate limiting from the AWS service you're attempting to call. In either case, these kinds of failures often don’t require special handling and the call should be made again, often after a brief waiting period. Boto3 provides many features to assist in retrying client calls to AWS services when these kinds of errors or exceptions are experienced. This guide provides details on each of Boto3's retry modes, how to configure them, and how to validate that retries are occurring.

Legacy mode is the default mode used by any Boto3 client you create. As its name implies, legacy mode uses an older (v1) retry handler that has limited functionality. Legacy mode’s functionality includes:

A default value of 5 for maximum retry attempts. This value can be overwritten through the max_attempts configuration parameter.

Retry attempts for a limited number of errors and exceptions.

Note: For more information about additional service-specific retry policies, see the following botocore references in GitHub.

Standard mode is a retry mode that was introduced with the updated retry handler (v2). This mode is a standardization of retry logic and behavior that is consistent with other AWS SDKs. In addition to this standardization, this mode also extends the functionality of retries over that found in legacy mode. Standard mode’s functionality includes:

A default value of 3 for maximum retry attempts. This value can be overwritten through the max_attempts configuration parameter.

Retry attempts for an expanded list of errors and exceptions.

Adaptive mode includes all the functionality of standard mode and adds client-side rate limiting. Note: Adaptive mode is an experimental mode and is subject to change, both in features and behavior.

Boto3 includes a variety of both retry configurations as well as configuration methods to consider when creating your client object. In Boto3, users can customize two retry configurations:

retry_mode - This tells Boto3 which retry mode to use. As described previously, there are three retry modes available: legacy (default), standard, and adaptive.
max_attempts - This provides Boto3’s retry handler with a value of maximum retry attempts, where the initial call counts toward the max_attempts value that you provide.

The first way to define your retry configuration is to update your global AWS configuration file. The default location for your AWS config file is ~/.aws/config. Here’s an example of an AWS config file with the retry configuration options used:

[myConfigProfile]
region = us-east-1
max_attempts = 10
retry_mode = standard

Any Boto3 script or code that uses your AWS config file inherits these configurations when using your profile, unless otherwise explicitly overwritten by a Config object when instantiating your client object at runtime. If no configuration options are set, the default retry mode value is legacy, and the default max_attempts value is 5.

The second way to define your retry configuration is to use botocore, which gives you more flexibility to specify your retry configuration using a Config object that you can pass to your client at runtime. This method is useful if you don't want to configure retry behavior globally with your AWS config file. Additionally, if your AWS configuration file is configured with retry behavior but you want to override those global settings, you can use the Config object to override an individual client object at runtime. As shown in the following example, the Config object takes a retries dictionary where you can supply your two configuration options, max_attempts and mode, and the values for each.

config = Config(
    retries = {
        'max_attempts': 10,
        'mode': 'standard'
    }
)

Note: The AWS configuration file uses retry_mode and the Config object uses mode. Although named differently, they both refer to the same retry configuration whose options are legacy (default), standard, and adaptive. The following is an example of instantiating a Config object and passing it into an Amazon EC2 client to use at runtime.
import boto3
from botocore.config import Config

config = Config(
    retries = {
        'max_attempts': 10,
        'mode': 'standard'
    }
)

ec2 = boto3.client('ec2', config=config)

Note: As mentioned previously, if no configuration options are set, the default mode is legacy and the default max_attempts is 5.

To ensure that your retry configuration is correct and working properly, there are a number of ways you can validate that your client's retries are occurring. If you enable Boto3’s logging, you can validate and check your client’s retry attempts in your client’s logs. Notice, however, that you need to enable DEBUG mode in your logger to see any retry attempts. The client log entries for retry attempts will appear differently, depending on which retry mode you’ve configured.

If legacy mode is enabled, retry messages are generated by botocore.retryhandler. You’ll see one of three messages.

If standard or adaptive mode is enabled, retry messages are generated by botocore.retries.standard. You’ll see one of three messages.

You can check the number of retry attempts your client has made by parsing the response botocore provides when making a call to an AWS service API. Responses are handled by an underlying botocore module, and formatted into a dictionary that's part of the JSON response object. You can access the number of retry attempts your client has taken by reading the RetryAttempts key in the ResponseMetadata dictionary:

'ResponseMetadata': {
    'RequestId': '1234567890ABCDEF',
    'HostId': 'host ID data will appear here as a hash',
    'HTTPStatusCode': 400,
    'HTTPHeaders': {'header metadata key/values will appear here'},
    'RetryAttempts': 4
}
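As a quick sketch of that last step, here is how the retry count can be pulled out of a response dictionary. The nested keys match the ResponseMetadata layout shown above; the sample response itself is fabricated for illustration:

```python
# Sketch: extract the retry count from a boto3-style response dict.
# The sample response below is fabricated; only the key layout
# (ResponseMetadata -> RetryAttempts) matches real boto3 responses.

def retry_attempts(response):
    """Return how many retries botocore reported for this call (0 if absent)."""
    return response.get("ResponseMetadata", {}).get("RetryAttempts", 0)

sample_response = {
    "ResponseMetadata": {
        "RequestId": "1234567890ABCDEF",
        "HTTPStatusCode": 400,
        "RetryAttempts": 4,
    }
}
print(retry_attempts(sample_response))  # 4
```

Using .get() with defaults keeps the helper safe for responses where the metadata is missing, rather than raising a KeyError.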
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html
Announcing Entity Framework Core 5.0 Preview 5

Today we are announcing the fifth preview release of EF Core 5.0. This preview also includes the 5.0.0-preview.5.20278.2 release of the Microsoft.Data.Sqlite.Core ADO.NET provider.

Installing dotnet ef

As with EF Core 3.0 and 3.1, the dotnet ef command-line tool is no longer included in the .NET Core SDK. Before you can execute EF Core migration or scaffolding commands, you'll have to install this package as either a global or local tool. To install the preview tool globally, first uninstall any existing version with:

dotnet tool uninstall --global dotnet-ef

Then install with:

dotnet tool install --global dotnet-ef --version 5.0.0-preview.5.20278.2

It's possible to use this new version of dotnet ef with projects that use older versions of the EF Core runtime.

What's new in EF Core 5 Preview 5

We maintain documentation covering new features introduced into each preview. Some of the highlights from preview 5 are called out below. This preview also includes several bug fixes.

Database collations

The default collation for a database can now be specified in the EF model. This will flow through to generated migrations to set the collation when the database is created. For example:

modelBuilder.UseCollation("German_PhoneBook_CI_AS");

Migrations then generates the following to create the database on SQL Server:

CREATE DATABASE [Test] COLLATE German_PhoneBook_CI_AS;

The collation to use for specific database columns can also be specified. For example:

modelBuilder
    .Entity<User>()
    .Property(e => e.Name)
    .UseCollation("German_PhoneBook_CI_AS");

For those not using migrations, collations are now reverse-engineered from the database when scaffolding a DbContext. Finally, EF.Functions.Collate() allows for ad-hoc queries using different collations.
For example:

context.Users.Single(e => EF.Functions.Collate(e.Name, "French_CI_AS") == "Jean-Michel Jarre");

This will generate the following query for SQL Server:

SELECT TOP(2) [u].[Id], [u].[Name]
FROM [Users] AS [u]
WHERE [u].[Name] COLLATE French_CI_AS = N'Jean-Michel Jarre'

Note that ad-hoc collations should be used with care, as they can negatively impact database performance. Documentation is tracked by issue #2273.

Flow arguments into IDesignTimeDbContextFactory

Arguments are now flowed from the command line into the CreateDbContext method of IDesignTimeDbContextFactory. For example, to indicate this is a dev build, a custom argument (e.g. dev) can be passed on the command line:

dotnet ef migrations add two --verbose --dev

This argument will then flow into the factory, where it can be used to control how the context is created and initialized. For example:

public class MyDbContextFactory : IDesignTimeDbContextFactory<SomeDbContext>
{
    public SomeDbContext CreateDbContext(string[] args)
        => new SomeDbContext(args.Contains("--dev"));
}

Documentation is tracked by issue #2419.

No-tracking queries with identity resolution

No-tracking queries can now be configured to perform identity resolution. For example, the following query will create a new Blog instance for each Post, even if each Blog has the same primary key.

context.Posts.AsNoTracking().Include(e => e.Blog).ToList();

However, at the expense of usually being slightly slower and always using more memory, this query can be changed to ensure only a single Blog instance is created:

context.Posts.AsNoTracking().PerformIdentityResolution().Include(e => e.Blog).ToList();

Note that this is only useful for no-tracking queries, since all tracking queries already exhibit this behavior. Also, following API review, the PerformIdentityResolution syntax will be changed. See #19877. Documentation is tracked by issue #1895.
Stored (persisted) computed columns

Most databases allow computed column values to be stored after computation. While this takes up disk space, the computed column is calculated only once on update, instead of each time its value is retrieved. This also allows the column to be indexed for some databases. EF Core 5.0 allows computed columns to be configured as stored. For example:

modelBuilder
    .Entity<User>()
    .Property(e => e.SomethingComputed)
    .HasComputedColumnSql("my sql", stored: true);

SQLite computed columns

EF Core now supports computed columns in SQLite databases.

Daily builds

EF Core previews are aligned with .NET 5 previews. These previews tend to lag behind the latest work on EF Core. Consider using the daily builds instead to get the most up-to-date EF Core features and bug fixes. As with the previews, the daily builds do not require .NET 5; they can be used with the GA/RTM release of .NET Core 3.1.

Documentation and feedback

EF Core docs have a new landing page! The main page for Entity Framework documentation has been overhauled to provide you with a hub experience. We hope this new format helps you find the documentation you need faster and with fewer clicks.
https://devblogs.microsoft.com/dotnet/announcing-entity-framework-core-5-0-preview-5/
Table Of Contents
- Installing Kivy
- Using pip
- Installation using Conda
- Installing Kivy’s dependencies
- Python glossary

Installing Kivy

Installation for Kivy version 2.1.0.dev0. Read the changelog here. For other Kivy versions, select the documentation from the dropdown on the top left. Kivy 2.1.0.dev0 officially supports Python versions 3.6 - 3.9.

Using pip

The easiest way to install Kivy is with pip, which installs Kivy using either a pre-compiled wheel, if available, or otherwise from source (see below). Kivy provides pre-compiled wheels for the supported Python versions on Windows, OS X, Linux, and RPi. Alternatively, installing from source is required for newer Python versions not listed above, or if the wheels do not work or fail to run properly.

Setup terminal and pip

Before Kivy can be installed, Python and pip need to be pre-installed. Then, start a new terminal that has Python available. In the terminal, update pip and other installation dependencies so you have the latest versions as follows (for Linux users you may have to substitute python3 instead of python and also add a --user flag in the subsequent commands outside the virtual environment):

python -m pip install --upgrade pip setuptools virtualenv

Create virtual environment

Create a new virtual environment for your Kivy project. A virtual environment will prevent possible installation conflicts with other Python versions and packages. It’s optional but strongly recommended:

Create the virtual environment named kivy_venv in your current directory:

python -m virtualenv kivy_venv

Activate the virtual environment. You will have to do this step from the current directory every time you start a new terminal. This sets up the environment so the new kivy_venv Python is used.
For Windows default CMD, in the command line do:

kivy_venv\Scripts\activate

If you are in a bash terminal on Windows, instead do:

source kivy_venv/Scripts/activate

If you are in Linux, instead do:

source kivy_venv/bin/activate

Your terminal should now preface the path with something like (kivy_venv), indicating that the kivy_venv environment is active. If it doesn’t say that, the virtual environment is not active and the following won’t work.

Install Kivy

Finally, install Kivy using one of the following options:

Pre-compiled wheels

The simplest option is to install the current stable version of kivy and optionally kivy_examples from the kivy-team provided PyPI wheels. Simply do:

python -m pip install kivy[base] kivy_examples

This also installs the minimum dependencies of Kivy. To additionally install Kivy with audio/video support, install either kivy[base,media] or kivy[full]. See Kivy’s dependencies for the list of selectors. For the Raspberry Pi, you must additionally install the dependencies listed in source dependencies before installing Kivy above.

From source

If a wheel is not available or is not working, Kivy can be installed from source with some additional steps. Installing from source means that Kivy will be installed from source code and compiled directly on your system. First install the additional system dependencies listed for each platform: Windows, OS X, Linux, RPi. With the dependencies installed, you can now install Kivy into the virtual environment. To install the stable version of Kivy, from the terminal do:

python -m pip install kivy[base] kivy_examples --no-binary kivy

To install the latest cutting-edge Kivy from master, instead do:

python -m pip install "kivy[base] @"

If you want to install Kivy from a different branch, from your forked repository, or from a specific commit (e.g. to test a fix from a user’s PR), replace the corresponding components of the url. For example to install from the stable branch, the url becomes.
Or to try a specific commit hash, use e.g.

Pre-release, pre-compiled wheels

To install a pre-compiled wheel of the last pre-release version of Kivy, instead of the current stable version, add the --pre flag to pip:

python -m pip install --pre kivy[base] kivy_examples

This will only install a development version of Kivy if one was released to PyPI. Instead, one can also install the latest cutting-edge nightly wheels from the Kivy server with:

python -m pip install kivy --pre --no-deps --index-url
python -m pip install kivy[base] --pre --extra-index-url

It is done in two steps, because otherwise pip may ignore the wheels on the server and install an older pre-release version from PyPI. For the Raspberry Pi, remember to additionally install the dependencies listed in source dependencies before installing Kivy above.

Development install

To work on Kivy itself, you can install an editable copy of the source from the terminal. The typical process is to clone Kivy locally with:

git clone git://github.com/kivy/kivy.git

This creates a folder named kivy in your current path. Next, install the additional system dependencies listed for each OS: Windows, OS X, Linux, RPi. Then change to the kivy directory and install Kivy as an editable install:

cd kivy
python -m pip install -e ".[dev,full]"

Now, you can use git to change branches, edit the code and submit a PR. Remember to compile Kivy each time you change cython files as follows:

python setup.py build_ext --inplace

Or if using bash or on Linux, simply do:

make

to recompile. To run the test suite, simply run:

pytest kivy/tests

or in bash or Linux:

make test

Checking the demo

Kivy should now be installed. You should be able to import kivy in Python or, if you installed the Kivy examples, run the demo (on Windows):

python kivy_venv\share\kivy-examples\demo\showcase\main.py

or in bash or Linux:

python kivy_venv/share/kivy-examples/demo/showcase/main.py

The exact path to the Kivy examples directory is also stored in kivy.kivy_examples_dir.
The 3d monkey demo under kivy-examples/3Drendering/main.py is also fun to see.

Installation using Conda

If you use Anaconda, you can install Kivy with its package manager Conda using:

conda install kivy -c conda-forge

Do not use pip to install kivy if you’re using Anaconda, unless you’re installing from source.

Installing Kivy’s dependencies

Kivy supports one or more backends for its core providers. E.g. it supports glew, angle, and sdl2 for the graphics backend on Windows. For each category (window, graphics, video, audio, etc.), at least one backend must be installed to be able to use the category.

To facilitate easy installation, we provide extras_require groups that will install selected backends to ensure a working Kivy installation. So one can install Kivy more simply with e.g. pip install kivy[base,media,tuio]. The full list of selectors and the packages they install is listed in setup.py. The exact packages in each selector may change in the future, but the overall goal of each selector will remain as described below.

We offer the following selectors:

- base: The minimum typical dependencies required for Kivy to run, not including video/audio.
- media: Only the video/audio dependencies required for Kivy to be able to play media.
- full: All the typical dependencies required for Kivy to run, including video/audio and most optional dependencies.
- dev: All the additional dependencies required to run Kivy in development mode (i.e. it doesn’t include the base/media/full dependencies). E.g. any headers required for compilation, and all dependencies required to run the tests and creating the docs.
- tuio: The dependencies required to make TUIO work (primarily oscpy).

The following selectors install backends packaged as wheels by kivy under the kivy_deps namespace. They are typically released and versioned to match specific Kivy versions, so we provide selectors to facilitate installation (i.e.
instead of having to do pip install kivy kivy_deps.sdl2==x.y.z, you can now do pip install kivy[sdl2] to automatically install the correct sdl2 for the Kivy version).

- gstreamer: The gstreamer video/audio backend, if it’s available (currently only on Windows)
- angle: An alternate OpenGL backend, if it’s available (currently only on Windows)
- sdl2: The window/image/audio backend, if it’s available (currently only on Windows; on OSX and Linux it is already included in the main Kivy wheel).
- glew: An alternate OpenGL backend, if it’s available (currently only on Windows)

Following are the kivy_deps dependency wheels:

kivy_deps.gstreamer is an optional dependency which is only needed for audio/video support. We only provide it on Windows, for other platforms it must be installed independently. Alternatively, use ffpyplayer instead.

kivy_deps.glew and kivy_deps.angle are for OpenGL. You can install both, that is no problem. It is only available on Windows. On other platforms it is not required externally. One can select which of these to use for OpenGL using the KIVY_GL_BACKEND environment variable: by setting it to glew (the default), angle_sdl2, or sdl2. Here, angle_sdl2 is a substitute for glew but requires kivy_deps.sdl2 to be installed as well.

kivy_deps.sdl2 is for window/images/audio and optionally OpenGL. It is only available on Windows and is included in the main Kivy wheel for other platforms.

Python glossary

Here we explain how to install Python packages, how to use the command line and what wheels are.

Installing Python

Kivy is written in Python and as such, to use Kivy, you need an existing installation of Python. Multiple versions of Python can be installed side by side, but Kivy needs to be installed as a package under each Python version that you want to use Kivy in.

To install Python, see the instructions for each platform: Windows, OS X, Linux, RPi.

Once Python is installed, open the console and make sure Python is available by typing python --version.
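The same availability check can be done from inside Python, which is handy when several interpreters are on the PATH. This tiny sketch is mine, not from the Kivy docs:

```python
import sys

# Equivalent of typing `python --version` at the console: report the
# interpreter that is actually running for this session.
version = "Python %d.%d.%d" % sys.version_info[:3]
print(version)
```

If this prints a different version than `python --version` does at the console, you are launching two different interpreters.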
How to use the command line

To execute any of the pip or wheel commands given here, you need a command line (here also called console, terminal, shell or bash, where the last two refer to Linux style command lines) and Python must be on the PATH.

The default command line on Windows is the command prompt, short cmd. The quickest way to open it is to press Win+R on your keyboard. In the window that opens, type cmd and then press enter.

Alternative Linux style command lines on Windows that we recommend are Git for Windows or Mysys. Note, the default Windows command line can still be used, even if a bash terminal is installed.

To temporarily add your Python installation to the PATH, simply open your command line and then use the cd command to change the current directory to where python is installed, e.g. cd C:\Python37.

If you have installed Python using the default options, then the path to Python will already be permanently on your PATH variable. There is an option in the installer which lets you do that, and it is enabled by default. If however Python is not on your PATH, follow these instructions to add it:

Instructions for the windows command line
Instructions for bash command lines

What is pip and what are wheels

In Python, packages such as Kivy can be installed with the python package manager, named pip (“python install package”). When installing from source, some packages, such as Kivy, require additional steps, like compilation. In contrast, wheels (files with a .whl extension) are pre-built distributions of a package that has already been compiled. These wheels do not require additional steps when installing them.

When a wheel is available on pypi.org (“Python Package Index”) it can be installed with pip. For example when you execute python -m pip install kivy in a command line, this will automatically find the appropriate wheel on PyPI.
When downloading and installing a wheel directly, use the command python -m pip install <wheel_file_name>, for example:

python -m pip install C:\Kivy-1.9.1.dev-cp27-none-win_amd64.whl

What are nightly wheels

Every day we create a snapshot wheel of the current development version of Kivy (“nightly wheel”). You can find the development version in the master branch of the Kivy Github repository.

As opposed to the last stable release (which we discussed in the previous section), nightly wheels contain all the latest changes to Kivy, including experimental fixes. For installation instructions, see Pre-release, pre-compiled wheels.

Warning: Using the latest development version can be risky and you might encounter issues during development. If you encounter any bugs, please report them.
https://kivy.org/doc/master/gettingstarted/installation.html
I’ve been looking for a site to practice some programming problems when I came across CodeChef. It has tons of problems ranging from beginner to advanced. Some of the more advanced problems are what I would imagine in a coding interview. You have to make an account but it’s all free. There are also contests and rankings and things. It’s pretty convenient too because once you submit your solution, it tells you right away whether you solved it or not, as the process is automated.

The following code is my solution to an easy problem I did to test it out. All it is, is a program to print integers to the console until the number 42 is inputted. And here’s a link to the problem.

using System;

public class Program
{
    public static void Main()
    {
        int input = 0;
        while (input != 42)
        {
            input = int.Parse(Console.ReadLine());
            if (input != 42)
            {
                Console.WriteLine(input);
            }
        }
    }
}

The IDE on the site has been super buggy for me. It almost made me look elsewhere for some coding problems. But I decided to try and just use another online IDE and just copy and paste my solution. Once I did that, it worked quite nicely. So I think I’ll stick with this site. Other than the buggy IDE it seems really good. The IDE I started using is .NetFiddle.

One thought on “CodeChef”
Great work!
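For comparison, the same stop-at-42 logic can be sketched in Python (my translation, not from the post); separating the loop from console I/O makes it trivial to test against a list of inputs:

```python
def echo_until_42(numbers):
    """Collect numbers in order, stopping as soon as 42 appears."""
    out = []
    for n in numbers:
        if n == 42:
            break
        out.append(n)
    return out

# Same behaviour as the C# loop, driven by a list instead of Console.ReadLine:
print(echo_until_42([7, 3, 42, 9]))  # prints [7, 3]
```

Hooking it to stdin is then a one-liner, but keeping the core pure is what makes an automated judge (or a unit test) easy to satisfy.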
https://ccgivens.wordpress.com/2017/02/21/codechef/
[PATCH 0/10] sysfs network namespace support
- From: ebiederm@xxxxxxxxxxxx (Eric W. Biederman)
- Date: Sat, 01 Dec 2007 02:06:58 -0700
- Follow-Ups:
  - Re: [PATCH 0/10] sysfs network namespace support - From: Greg KH
  - namespace support requires network modules to say "GPL" - From: Mark Lord
  - [PATCH 01/10] sysfs: Make sysfs_mount static again. - From: Eric W. Biederman
http://linux.derkeiler.com/Mailing-Lists/Kernel/2007-12/msg00096.html
a brainfuck monad

Inspired by "An ASM Monad", I've built a Haskell monad that produces brainfuck programs. The code for this monad is available on hackage, so cabal install brainfuck-monad.

Here's a simple program written using this monad. See if you can guess what it might do:

import Control.Monad.BrainFuck

demo :: String
demo = brainfuckConstants $ \constants -> do
    add 31
    forever constants $ do
        add 1
        output

Here's the brainfuck code that demo generates:

>+>++>+++>++++>+++++>++++++>+++++++>++++++++>++++++++++++++++++++++++++++++++<<<<<<<<[>>>>>>>>+.<<<<<<<<]

If you feed that into a brainfuck interpreter (I'm using hsbrainfuck for my testing), you'll find that it loops forever and prints out each character, starting with space (32), in ASCIIbetical order.

The implementation is quite similar to the ASM monad. The main differences are that it builds a String, and that the BrainFuck monad keeps track of the current position of the data pointer (as brainfuck lacks any sane way to manipulate its instruction pointer).

newtype BrainFuck a = BrainFuck (DataPointer -> ([Char], DataPointer, a))

type DataPointer = Integer

-- Gets the current address of the data pointer.
addr :: BrainFuck DataPointer
addr = BrainFuck $ \loc -> ([], loc, loc)

Having the data pointer address available allows writing some useful utility functions like this one, which uses the next (brainfuck opcode >) and prev (brainfuck opcode <) instructions.

-- Moves the data pointer to a specific address.
setAddr :: Integer -> BrainFuck ()
setAddr n = do
    a <- addr
    if a > n
        then prev >> setAddr n
        else if a < n
            then next >> setAddr n
            else return ()

Of course, brainfuck is a horrible language, designed to be nearly impossible to use. Here's the code to run a loop, but it's really hard to use this to build anything useful.

-- The loop is only entered if the byte at the data pointer is not zero.
-- On entry, the loop body is run, and then it loops when
-- the byte at the data pointer is not zero.
loopUnless0 :: BrainFuck () -> BrainFuck ()
loopUnless0 a = do
    open
    a
    close

To tame brainfuck a bit, I decided to treat data addresses 0-8 as constants, which will contain the numbers 0-8. Otherwise, it's very hard to ensure that the data pointer is pointing at a nonzero number when you want to start a loop. (After all, brainfuck doesn't let you set data to some fixed value like 0 or 1!)

I wrote a little brainfuckConstants that runs a BrainFuck program with these constants set up at the beginning. It just generates the brainfuck code for a series of ASCII art fishes:

>+>++>+++>++++>+++++>++++++>+++++++>++++++++>

With the fishes^Wconstants in place, it's possible to write a more useful loop. Notice how the data pointer location is saved at the beginning, and restored inside the loop body. This ensures that the provided BrainFuck action doesn't stomp on our constants.

-- Run an action in a loop, until it sets its data pointer to 0.
loop :: BrainFuck () -> BrainFuck ()
loop a = do
    here <- addr
    setAddr 1
    loopUnless0 $ do
        setAddr here
        a

I haven't bothered to make sure that the constants are really constant, but that could be done. It would just need a Control.Monad.BrainFuck.Safe module, that uses a different monad, in which incr and decr and input don't do anything when the data pointer is pointing at a constant. Or, perhaps this could be statically checked at the type level, with type level naturals. It's Haskell, we can make it safer if we want to. ;)

So, not only does this BrainFuck monad allow writing brainfuck code using crazy haskell syntax, instead of crazy brainfuck syntax, but it allows doing some higher-level programming, building up a useful(!?) library of BrainFuck combinators and using them to generate brainfuck code you'd not want to try to write by hand.

Of course, the real point is that "monad" and "brainfuck" so obviously belonged together that it would have been a crime not to write this.

Syndicated 2014-12-12 05:02:52 from see shy jo
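If you don't have hsbrainfuck handy, a minimal brainfuck interpreter is only a few lines of Python. This is my own sketch, not the tool the post uses; note that feeding it the demo program above would loop forever by design, so the example here runs a small terminating program instead:

```python
def brainfuck(program, input_bytes=b""):
    """Interpret a brainfuck program, returning its output as bytes."""
    # Pre-match brackets so '[' and ']' can jump in O(1).
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    tape, ptr, pc = [0] * 30000, 0, 0
    inp, out = iter(input_bytes), bytearray()
    while pc < len(program):
        c = program[pc]
        if c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(tape[ptr])
        elif c == ",":
            tape[ptr] = next(inp, 0)
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]
        pc += 1
    return bytes(out)

# 8 * 8 + 1 = 65, i.e. ASCII 'A'
print(brainfuck("++++++++[>++++++++<-]>+."))
```

With an interpreter this small, it's easy to check by hand what the generated constants prologue leaves on the tape.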
http://www.advogato.org/person/joey/
Azure Serverless IoT Button Tweet with Azure Functions and Flic Button

This tutorial shows you how to integrate an Azure Function with your Flic button by posting a tweet to your Twitter account when the Flic is clicked. We are going to use Azure Functions for Serverless Compute, and Azure Logic Apps for serverless workflows/integration with Twitter.

Prerequisites:
- Twitter account
- Flic button
- iPhone or Android smartphone with Flic app installed
- Azure account

The solution will be: A Flic button sends an HTTPS request to an Azure Function which processes the data and sends a message to tweet to an Azure Logic App. The Logic App fires and posts the tweet. It only takes a few minutes to set up and get working end-to-end.

Working with Functions in the Azure Portal

Functions can be created, developed, configured, and tested in the Azure portal.

Create a Function App

Functions require a function app to host function execution. This can be done in the Azure portal. Log in to the Azure portal and click the New button in the upper left-hand corner. Click Compute > Function App. Then, configure your app settings:

- App Name: Create a globally unique name.
- Subscription: Add a new or existing subscription.
- Resource Group: Add a new or existing resource group.
- Hosting plan: the Consumption Plan is recommended.
- Location: Choose a location near you.
- Storage account: Create a globally unique name for the storage account that will be used by your function app, or use an existing account.

Click Create.

Create an HTTP Triggered Function

Now that the function app has been created, a function can be added to it. The template for an HTTP triggered function will execute when sent an HTTP request. At the top of the portal, locate and click the magnifying glass button to search for your new function app. Enter the function app's name in the search bar to find and select it. Expand your new function app, then click the + button next to functions.
Select the HttpTrigger function template for either C# or JavaScript. Change the Authorization level to Anonymous. Click Create.

Configure Function

- In the portal, expand the function and click Integrate in the expanded view.
- Add the following route to the Route template field: notify/{messageType:alpha}

This will give the function URL a path parameter messageType we can access within the function. Next choose either C# or JavaScript for a sample function.

C# Sample

If writing a C# Function, here is the code you can use to send a request to a Logic App to post a tweet:

using System.Net;
using System.Net.Http;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, string messageType, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    var _messageMap = new Dictionary<string, string>
    {
        ["arrived"] = "Arrived at #ServerlessConf NYC. Trying out this cool #AzureFunctions demo",
        ["joinme"] = "You should join me at the Microsoft booth at #Serverlessconf NYC",
        ["azureserverless"] = "Azure Serverless is awesome! @AzureFunctions @logicappsio"
    };

    _messageMap.TryGetValue(messageType, out string message);

    var client = new HttpClient();
    await client.PostAsJsonAsync(Environment.GetEnvironmentVariable("LogicAppEndpoint", EnvironmentVariableTarget.Process), message);

    return req.CreateResponse(HttpStatusCode.OK);
}

JavaScript Sample

If writing a JavaScript function, here is the code you can use to send a request to a Logic App to post a tweet:

const https = require('https');
const url = require('url');
const myURL = url.parse(process.env["LogicAppEndpoint"]);

module.exports = function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    let messageType = context.bindingData.messageType;
    let messageMap = {
        arrived: "Arrived at #ServerlessConf NYC. Trying out this cool #AzureFunctions demo",
        joinme: "You should join me at the Microsoft booth at #Serverlessconf NYC",
        azurefunctions: "Azure Serverless is awesome! @AzureFunctions @logicappsio"
    };
    let statusMessage = messageMap[messageType];

    if (statusMessage) {
        const options = {
            hostname: myURL.hostname,
            port: 443,
            path: myURL.path,
            method: 'POST'
        };
        const req = https.request(options, (res) => {
            context.log(`STATUS: ${res.statusCode}`);
        });
        req.write(statusMessage);
        req.end();
        context.res = {
            status: 200,
            body: "Tweet sent"
        };
    } else {
        context.res = {
            status: 400,
            body: "Invalid request. Missing message type"
        };
    }
    context.done();
};

Creating a tweeting Logic App

- Click the + New button in the Azure Portal
- Click Web + Mobile > Logic App, and configure one in your subscription
- After it is deployed, use the search in the top of the portal to open the logic app
- Select to Edit (should open by default) and choose Start from Blank
- Our function will invoke this workflow via HTTP, so add a "Request" trigger for When an HTTP Request is received.
- After the trigger, click New Step and add an action with Twitter to Post a Tweet. Login with your twitter account.
- For the Tweet Text, select the request body from the trigger.
- Click save, and copy the URL from the request trigger.

Configure the Function Environment Variables

As you saw in the code, we reference LogicAppEndpoint; now we just need to set that environment variable.

- In the portal, navigate to the function app that hosts the recently created function.
- In the function app overview tab, click on Application settings.
- Scroll down to the Application settings section, click on the "+ Add new setting" button and add the key:
  - LogicAppEndpoint: The Request URL from the Logic App - something like***
- Click Save

Configure Flic

Copy the function url by navigating to the function in the portal and clicking the "</> Get function URL" link. This url is needed in the Flic app and can be quite long.
It is recommended to paste the url in a cloud based document for mobile access.

In the Flic App, connect a button if you haven't already done so and enter the button settings by tapping it. For the click setting, press + to the right of the click command and add an Internet Request function to the button by searching in the function menu. Edit the function by adding the function url and adding one of the three routes that is mapped to a tweet message:

- arrived: "Arrived at #ServerlessConf NYC. Trying out #AzureFunctions"
- joinme: "You should join me at the Microsoft booth at #Serverlessconf NYC"
- azurefunctions: "Azure Serverless is awesome!"

The url should look similar to this:

Press done to save the settings. Repeat steps 3-4 for the button's double click and hold settings. Avoid reusing the same routes for each button setting.

Triggering the Function

Based on your click command configuration, the HTTP function will send a request to Twitter to authenticate and post a tweet to the specified account with one of the three predefined messages. Because tweeting the same message twice in a row is prohibited on Twitter, each button click command will only tweet once. Change the tweet message text in the logic app or delete the posted tweets to create more tweets through button clicks.
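The route-to-message mapping that both samples implement is easy to isolate and test on its own. Here is a rough Python sketch of the same logic (function and variable names are mine, not from the tutorial):

```python
# The messageType path parameter maps to one of three canned tweets;
# unknown routes get a 400, mirroring the JavaScript sample above.
MESSAGE_MAP = {
    "arrived": "Arrived at #ServerlessConf NYC. Trying out this cool #AzureFunctions demo",
    "joinme": "You should join me at the Microsoft booth at #Serverlessconf NYC",
    "azurefunctions": "Azure Serverless is awesome! @AzureFunctions @logicappsio",
}

def handle(message_type):
    """Return an (http_status, body) pair for a notify/<message_type> request."""
    message = MESSAGE_MAP.get(message_type)
    if message is None:
        return 400, "Invalid request. Missing message type"
    return 200, message

print(handle("arrived")[0])  # prints 200
```

Keeping the mapping in a plain dictionary like this also makes it obvious why each button action needs a distinct route: each route is just a different key.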
https://azure.microsoft.com/pt-pt/resources/samples/azure-serverless-iot-button/
Maneuvering Continues For Control of Dell

Sarcasm is not what it was (Score:2, Interesting)

Re: (Score:2)
More details about the "powerful BASIC interpreter". There obviously is Integer, then later AppleSoft (licensed from Microsoft). How are those not as powerful as other BASICs available at the time (esp AppleSoft, since it's almost the identical BASIC used in other computers of the time).

Re: (Score:2)
Steve Jobs was responsible for Apple's 2013 bond issue? They've got better tech than I thought.

Acquisitions take time to assimilate (Score:2)

Re: (Score:2)
On what planet is a stock with a P/E ratio of ~10 overvalued?

When the profit margin is unlikely to be sustainable.

Re: (Score:1)
If everybody knows that the profit margin is unsustainable, why hasn't the stock price corrected? Or are you saying that the market is wrong when you disagree with the market, and only right when you agree?

Re: (Score:2)
A low P/E is a market correction.

Hate is too emotive (Score:2)

Put that comment down (Score:4, Interesting)

Avarice (Score:2)
I also feel entitled to $450 million for doing nothing. Gimmee!

Re: Avarice (Score:5, Funny)
I think you guys have just invented financial homeopathy. -jcr

Re: (Score:3)
Probably better for us than Icahn's financial sociopathy.

Worked example of the importance of namespaces (Score:1)
Does anyone else think this whole sage would be much simpler if the media consistently referred to people:dell or company:dell?

Re: (Score:2)
Meanwhile, where is the farmer in all of this? And why don't they ask him to insure the deal? I hear he's out standing in his field.

As a Rule of Thumb (Score:2)

I have an idea (Score:3, Informative)

Not answering calls? (Score:3)

What is the future of Dell? (Score:2)

You want profit? Give the company to me. (Score:1)
https://slashdot.org/story/13/07/22/0215209/maneuvering-continues-for-control-of-dell?sdsrc=rel
- Elite's Conflict Mod: v1.2 Information - Elite's Conflict Mod: v1.2 is on the way. Our team is working hard to bring you a completely revamped and quite polished mod that feels great to play and keeps you playing again and again. Here is an updated list of some of the things you can expect to see in v1.2. - New models and textures for units including the Victory I-class Star Destroyer, Victory II-class Star Destroyer, Acclamator II-class Assault Ship, Venator-class Star Destroyer, Imperial-class Star Destroyer, Imperial II-class Star Destroyer, MC80 Liberty-class Star Cruiser, and more! - 2 new Galactic Conquests that are unique in play-style and difficulty. - Tweaked skirmish to be more fun. - A more balanced AI. - Updated space-stations. - Updated unit descriptions that are grammatically correct and accurate to the new models and EU. - A significantly more balanced game than in previous versions. - Removing redundancies in units and increasing unit uniqueness. - Many bugs and other issues have been fixed. - We are looking into even more great ideas that our team and this community have given us. Please leave a comment on this article and let us know what you would like to see in v1.2! - Elite's Conflict Mod Future - Our team has a lot of great ideas for the future of this mod. We want to bring you a unique, fun, and memorable experience while playing this fantastic but aging game. If you are interested in helping us please contact {HEROIC}Elite, any help is greatly appreciated. - For versions to come, we currently plan to release a v1.3, v2.0, and v2.1. However, we are definitely open to the possibility of more v1.4, v1.5, etc... I am confident that v1.2 will be the biggest change aside from v1.0 and v2.0. - Screenshots - Some new models of the Imperial Fleet - Corellian Heavy Gunship - Acclamator II-class Assault Ship new model/texture - Unknown, do you know what this is..? Thank you for reading and have a nice day! {HEROIC}Elite Great Article, mate ! 
Defently sums up our passion :) This Modells! That last image looks like a Mon Calamari cruiser of some sort. Bucman55, You might be onto something... {HEROIC}Elite MC 40? pincuishin, Nope. Good guess though. {HEROIC}Elite Pretty close tho :D Is it the MC90 Star cruiser? Tino1148, Good try but no. {HEROIC}Elite Either republic class star destroyer or majestic class ? will it contain a better camera? mostly zoom level of course, plenty of other mods did that already and I consider it a must-have TheVidmaster, I don't think we've discussed it, but I will look into this immediately. Thanks for bringing that up. {HEROIC}Elite TheVidmaster, Done. Looked into it and adjusted the camera abilities to allow for significantly more player control. :) {HEROIC}Elite The last pic looks almost like the Mon Calamari capital ship in the comic Star Wars - General Grevious: Vignette2.wikia.nocookie.net That wasnt intentionally :P But it is indeed a Mon Calamari ship. This guy is totally a fannon design. It is called a MC-17 designed to engage way larger ships with it´s heavy weaponery :) I think you guys already mentioned this somewhere but is there an expected release for the update? pincuishin, We don't have a solid date yet because we just keep wanting to add more into the mod. However there are two possibilities on the table: 1. We wait until we are satisfied with everything and do the full v1.2 final release. (looking at Jan-Feb 2017) 2. Or we finish everything we are currently doing and do another "beta type v1.2" which would either be literally Beta #2 or it could be called v1.2 and then the finished version (the true final version) could be called v1.3. (Mid Dec 2016 for Beta #2, Jan-Feb 2017 for final) What do you think? {HEROIC}Elite Personally I am thinking option #1 but at the same time it would be cool to have a pre-Christmas break release for everyone. {HEROIC}Elite Does this mods run on OS X ? you mean win 10 ? We actually dont know that as we dont use win10. 
i think he means Apple OS X. But i can assure that is does run on Windows 10. Hey! Mod is looking great, though I have a few questions: - Are there/will there be Super Star Destroyers in the mod? - Will we see Rogue One content? For example, Jyn Erso or Deathtroopers? - Does this mod focus on the early, middle or late days of the Galactic Civil War? Thanks! kilian45, Thank you very much! 1. Yes. 2. Yes, we haven't worked on any of this yet but it is in the works. 3. Kind of all? The time span is roughly a few years before Episode IV to a few years after Episode VI. Hope this answers your questions. :) {HEROIC}Elite So will you guys be adding in new content from the Rouge One movie? Such as the AT-ACT, Shore troopers, etc? Also will there be different imperial trooper units, such as Shock Troopers, Vaders Fist, Death troopers, etc? Thanks for reading, OmegasTurn
https://www.moddb.com/news/elites-conflict-mod-update-six-11212016
On Wed, Apr 16, 2008 at 04:18:31PM +0100, Richard W.M. Jones wrote: > > Virt-df[1] has now gained the ability to fully parse LVM2 partitions, > thus: > > # virt-df -c qemu:///system -h > Filesystem Size Used Available Type > rhel51x32kvm:hda1 96.8 MiB 14.6 MiB 82.2 MiB Linux ext2/3 > rhel51x32kvm:VolGroup00/LogVol00 6.4 GiB 3.6 GiB 2.8 GiB Linux ext2/3 > rhel51x32kvm:VolGroup00/LogVol01 992.0 MiB Linux swap > > However it still has to do it by opening the local partitions / files, > which means it isn't using a "proper" part of libvirt and more > importantly it cannot run remotely. > > I'd like to find out whether the time has come for us to look again at > a virDomainBlockPeek call for libvirt. Here is the original thread > plus patch from 7 months ago: > > > > (I've attached an updated patch against current CVS). > > I appreciate that some cases might not be simple (iSCSI?), but the > same argument applies to block device statistics too, and we make > those available where possible. I think a best-effort call allowing > callers to peek into the block devices of guests would be very useful. While I don't particularly like the idea of adding a general API for reading data from guest disks, I think given that clear & valid use case from the virt-df, we should go ahead and add the virDomainBlockPeek API. It will be rather fun for virt-df to deal with non-raw devices like qcow but that's not really a concern for libvirt... > Index: configure.in > =================================================================== > RCS file: /data/cvs/libvirt/configure.in,v > retrieving revision 1.139 > diff -u -r1.139 configure.in > --- configure.in 8 Apr 2008 16:45:57 -0000 1.139 > +++ configure.in 16 Apr 2008 15:18:07 -0000 > @@ -60,6 +60,10 @@ > > LIBVIRT_COMPILE_WARNINGS(maximum) > > +dnl Support large files / 64 bit seek offsets. > +dnl Use --disable-largefile if you don't want this. 
> +AC_SYS_LARGEFILE
> +

IIRC, this is redundant now - we already added it elsewhere in the configure script when we did the storage patches.

> -
> +int virDomainBlockPeek (virDomainPtr dom,
> +                        const char *path,
> +                        long long offset,
> +                        size_t size,
> +                        void *buffer);

Should probably make offset be an 'unsigned long long' unless we have some semantics which want -ve numbers? Is 'char *' better or worse than 'void *' for the buffer arg?

>
> +/* "domblkpeek" command
> + */
> +static vshCmdInfo info_domblkpeek[] = {
> +    {"syntax", "domblkpeek <domain> <path> <offset> <size>"},
> +    {"help", gettext_noop("peek at a domain block device")},
> +    {"desc", gettext_noop("Peek into a domain block device.")},
> +    {NULL, NULL}
> +};
> +
> +static vshCmdOptDef opts_domblkpeek[] = {
> +    {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, gettext_noop("domain name, id or uuid")},
> +    {"path", VSH_OT_DATA, VSH_OFLAG_REQ, gettext_noop("block device path")},
> +    {"offset", VSH_OT_DATA, VSH_OFLAG_REQ, gettext_noop("start offset")},
> +    {"size", VSH_OT_DATA, VSH_OFLAG_REQ, gettext_noop("size in bytes")},
> +    {NULL, 0, 0, NULL}
> +};

I'm wondering if this is perhaps one of the few APIs we should /not/ include in virsh? The data is pretty useless on its own - I figure any app needing to access this is almost certainly not going to be shell scripting, and thus will be using one of the language bindings directly. Having the virsh command will probably just encourage people to use this as a really dumb file copy command.

> +    /* The path is correct, now try to open it and get its size. */
> +    fd = open (path, O_RDONLY);
> +    if (fd == -1 || fstat (fd, &statbuf) == -1) {
> +        virXendError (domain->conn, VIR_ERR_SYSTEM_ERROR, strerror (errno));
> +        goto done;
> +    }
> +
> +    /* XXX The following test fails for LVM devices for a couple
> +     * of reasons: (1) They are commonly symlinks to /dev/mapper/foo
> +     * and despite the man page for fstat, fstat stats the link not
> +     * the file.  (2) Stat even on the block device returns st_size==0.
> +     *
> +     * Anyhow, it should be safe to ignore this test since we are
> +     * in O_RDONLY mode.
> +     */
> +#if 0
> +    /* NB we know offset > 0, size >= 0 */
> +    if (offset + size > statbuf.st_size) {
> +        virXendError (domain->conn, VIR_ERR_INVALID_ARG, "offset");
> +        goto done;
> +    }
> +#endif

Actually the core problem is that fstat() does not return block device capacity. The best way to determine the capacity of a block device is to lseek() to the end of it, and grab the return value of lseek. There's also a Linux-specific ioctl() to get device capacity, but the lseek approach is portable.

> +
> +    /*;
> +    }
> +
> +    ret = 0;
> + done:
> +    if (fd >= 0) close (fd);
> +    return ret;
> +}
> +
>  #endif /* ! PROXY */
>  #endif /* WITH_XEN */
> Index: src/xend_internal.h
> ===================================================================
> RCS file: /data/cvs/libvirt/src/xend_internal.h,v
> retrieving revision 1.40
> diff -u -r1.40 xend_internal.h
> --- src/xend_internal.h 10 Apr 2008 16:54:54 -0000 1.40
> +++ src/xend_internal.h 16 Apr 2008 15:18:26 -0000
> @@ -228,6 +228,8 @@
>  int xenDaemonDomainMigratePrepare (virConnectPtr dconn, char **cookie, int *cookielen, const char *uri_in, char **uri_out, unsigned long flags, const char *dname, unsigned long resource);
>  int xenDaemonDomainMigratePerform (virDomainPtr domain, const char *cookie, int cookielen, const char *uri, unsigned long flags, const char *dname, unsigned long resource);
>
> +int xenDaemonDomainBlockPeek (virDomainPtr domain, const char *path, long long offset, size_t size, void *buffer);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> Index: src/xm_internal.c
> ===================================================================
> RCS file: /data/cvs/libvirt/src/xm_internal.c,v
> retrieving revision 1.70
> diff -u -r1.70 xm_internal.c
> --- src/xm_internal.c 10 Apr 2008 16:54:54 -0000 1.70
> +++ src/xm_internal.c 16 Apr 2008 15:18:29 -0000
> @@ -3160,4 +3160,15 @@
>      return (ret);
>  }
>
> +int
> +xenXMDomainBlockPeek (virDomainPtr dom,
> +                      const char *path ATTRIBUTE_UNUSED,
> +                      long long offset ATTRIBUTE_UNUSED,
> +                      size_t size ATTRIBUTE_UNUSED,
> +                      void *buffer ATTRIBUTE_UNUSED)
> +{
> +    xenXMError (dom->conn, VIR_ERR_NO_SUPPORT, __FUNCTION__);
> +    return -1;
> +}

Hmm, is there no way to share the code here with the main Xen driver? The XenD driver impl parses the SEXPR to get the device info. Perhaps we could parse the XML format instead; then the only difference would be the API you call to get the XML doc in each driver. For that matter, if we worked off the XML format, this driver impl would be trivially sharable to QEMU and LXC, etc. too.

Regards,
Daniel.

--
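The reviewer's suggestion above — determine a block device's capacity by seeking to the end rather than trusting fstat(), since fstat reports st_size == 0 for many block devices — can be illustrated outside of C as well. A minimal Python sketch (using a regular temporary file as a stand-in for a block device; the same lseek call applies to both):

```python
import os
import tempfile

def device_capacity(path):
    """Capacity in bytes of a file or block device.

    os.stat()/fstat() reports st_size == 0 for many block devices, so
    instead we seek to the end and use the returned offset -- the
    portable approach suggested in the review (Linux also has an ioctl
    for this, but lseek works everywhere).
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)

# Demonstrate on a regular file; the same call works on e.g. /dev/sda.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 4096)
    name = f.name
capacity = device_capacity(name)
print(capacity)  # 4096
os.unlink(name)
```

The offset returned by lseek with SEEK_END is exactly the number of readable bytes, which is what the bounds check in the patch needs.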
https://www.redhat.com/archives/libvir-list/2008-April/msg00336.html
This module provides functions to parse an XML document to a tree structure, either strictly or lazily, as well as a lazy SAX-style interface. The GenericXMLString type class allows you to use any string type. Three string types are provided for here: String, ByteString and Text.

Here is a complete example:

    let (xml, mErr) = parse defaultParserOptions inputText
                          :: (UNode String, Maybe XMLParseError)
    -- Process document before handling error, so we get lazy processing.
    L.hPutStr stdout $ format xml
    putStrLn ""
    case mErr of
        Nothing -> return ()
        Just err -> do
            hPutStrLn stderr $ "XML parse failed: " ++ show err
            exitWith $ ExitFailure 2

DEPRECATED: Use [Node tag text] instead. Type shortcut for nodes.

DEPRECATED: Use [UNode text] instead. Type shortcut for nodes with unqualified tag names where tag and text are the same string type.

Deprecated

Lazily parse XML to tree. In the event of an error, throw XMLParseException.
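The lazy, SAX-style processing described above exists in other XML libraries too. As a rough illustration of the same idea in Python — consuming elements as they complete rather than building the whole tree first — here is a sketch using the standard library's `ElementTree.iterparse` (this is an analogy, not part of hexpat):

```python
import io
import xml.etree.ElementTree as ET

xml_doc = b"<root><item>1</item><item>2</item><item>3</item></root>"

# iterparse yields elements as their end tags are seen, so a large
# document can be streamed without holding the full tree in memory --
# the same idea as hexpat's lazy/SAX-style interface.
values = []
for _event, elem in ET.iterparse(io.BytesIO(xml_doc), events=("end",)):
    if elem.tag == "item":
        values.append(elem.text)
        elem.clear()  # discard processed subtrees to keep memory flat

print(values)  # ['1', '2', '3']
```

Clearing processed elements is what keeps memory usage flat for large inputs; without it the tree still accumulates behind the iterator.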
http://hackage.haskell.org/package/hexpat-0.11/docs/Text-XML-Expat-Tree.html
-- slightly experimental add-on for Alloy involving the idea of routes to a
-- particular part of a tree.
module Data.Generics.Alloy.Route
  (Route, routeModify, routeModifyM, routeGet, routeSet, (@->),
   identityRoute, routeId, routeList, makeRoute, routeDataMap, routeDataSet,
   AlloyARoute(..), BaseOpARoute(..), baseOpARoute,
   (:-@)(..), OneOpARoute, TwoOpARoute) where

import Control.Applicative
import Control.Monad.Identity
import Control.Monad.State
import qualified Data.Map as Map
import qualified Data.Set as Set

-- | A Route is a way of navigating to a particular node in a tree structure.
--
-- Let's say that you have some binary tree structure:
--
-- > data BinTree a = Leaf a | Branch (BinTree a) (BinTree a)
--
-- Suppose you then have a big binary tree of integers, potentially with duplicate values,
-- and you want to be able to modify a particular integer.  You can't modify in-place,
-- because this is a functional language.  So you instead want to be able to apply
-- a modify function to the whole tree that really just modifies the particular
-- integer, deep within the tree.
--
-- To do this you can use a route:
--
-- > myRoute :: Route Int (BinTree Int)
--
-- You apply it as follows (for example, to increment the integer):
--
-- > routeModify myRoute (+1) myTree
--
-- This will only work if the route is valid on the given tree.
--
-- The usual way that you get routes is via the traversal functions in the module.
--
-- Another useful aspect is composition.  If your tree was in a tree of trees:
--
-- > routeToInnerTree :: Route (BinTree Int) (BinTree (BinTree Int))
--
-- You could compose this with the earlier route:
--
-- > routeToInnerTree @-> myRoute :: Route Int (BinTree (BinTree Int))
--
-- These routes are a little like zippers, but rather than building a new data
-- type to contain the zipped version and the re-use aspect, this is just a
-- simple add-on to apply a modification function in a particular part of the
-- tree.
Multiple routes can be used to modify the same tree, which is also
-- useful.
--
-- Routes support Eq, Show and Ord.  All these instances represent a route as a
-- list of integers: a route-map.  [0,2,1] means first child (zero-based), then
-- third child, then second child of the given data-type.  Routes are ordered using
-- the standard list ordering (lexicographic) over this representation.
data Route inner outer
  = Route [Int] (forall m. Monad m => (inner -> m inner) -> (outer -> m outer))

instance Eq (Route inner outer) where
  (==) (Route xns _) (Route yns _) = xns == yns

instance Ord (Route inner outer) where
  compare (Route xns _) (Route yns _) = compare xns yns

instance Show (Route inner outer) where
  show (Route ns _) = "Route " ++ show ns

-- | Gets the integer-list version of a route.  See the documentation of 'Route'.
routeId :: Route inner outer -> [Int]
routeId (Route ns _) = ns

-- | Given an index (zero is the first item), forms a route to that index item
-- in the list.  So for example:
--
-- > routeModify (routeList 3) (*10) [0,1,2,3,4,5] == [0,1,2,30,4,5]
--
routeList :: Int -> Route a [a]
routeList 0 = Route [0] (\f (x:xs) -> f x >>= (\x' -> return (x' : xs)))
routeList n = Route [1] (\f (x:xs) -> f xs >>= (\xs' -> return (x : xs')))
              @-> routeList (n - 1)

-- | Constructs a Route to the key-value pair at the given index (zero-based) in
-- the ordered map.  Routes involving maps are difficult because Map hides its
-- internal representation.  This route secretly boxes the Map into a list of pairs
-- and back again when used.  The identifiers for map entries (as used in the integer
-- list) are simply the index into the map as passed to this function.
routeDataMap :: Ord k => Int -> Route (k, v) (Map.Map k v)
routeDataMap n = Route [n]
  (\f m -> let (pre, x:post) = splitAt n (Map.toList m)
           in do x' <- f x
                 return $ Map.fromList $ pre ++ (x' : post))

-- | Constructs a Route to the value at the given index (zero-based) in the ordered
-- set.
See the documentation for 'routeDataMap', which is nearly identical to
-- this function.
routeDataSet :: Ord k => Int -> Route k (Set.Set k)
routeDataSet n = Route [n]
  (\f m -> let (pre, x:post) = splitAt n (Set.toList m)
           in do x' <- f x
                 return $ Set.fromList $ pre ++ (x' : post))

-- | Applies a pure modification function using the given route.
routeModify :: Route inner outer -> (inner -> inner) -> (outer -> outer)
routeModify (Route _ wrap) f = runIdentity . wrap (return . f)

-- | Applies a monadic modification function using the given route.
routeModifyM :: Monad m => Route inner outer -> (inner -> m inner) -> (outer -> m outer)
routeModifyM (Route _ wrap) = wrap

-- | Given a route, gets the value in the large data structure that is pointed
-- to by that route.
routeGet :: Route inner outer -> outer -> inner
routeGet route = flip execState undefined . routeModifyM route (\x -> put x >> return x)

-- | Given a route, sets the value in the large data structure that is pointed
-- to by that route.
routeSet :: Route inner outer -> inner -> outer -> outer
routeSet route x = routeModify route (const x)

-- | Composes two routes together.  The outer-to-mid route goes on the left hand
-- side, and the mid-to-inner goes on the right hand side to form an outer-to-inner
-- route.
(@->) :: Route mid outer -> Route inner mid -> Route inner outer
(@->) (Route outInds outF) (Route inInds inF) = Route (outInds ++ inInds) (outF . inF)

-- | The identity route.  This has various obvious properties:
--
-- > routeGet identityRoute == id
-- > routeSet identityRoute == const
-- > routeModify identityRoute == id
-- > identityRoute @-> route == route
-- > route @-> identityRoute == route
identityRoute :: Route a a
identityRoute = Route [] id

-- | Given the integer list of identifiers and the modification function, forms
-- a Route.
It is up to you to make sure that the integer list is valid as described
-- in the documentation of 'Route', otherwise routes constructed this way and via
-- the Alloy functions may exhibit strange behaviours when compared.
makeRoute :: [Int]
          -> (forall m. Monad m => (inner -> m inner) -> (outer -> m outer))
          -> Route inner outer
makeRoute = Route

-- | An extension to 'AlloyA' that adds in 'Route's.  The opsets are now parameterised
-- over both the monad/functor, and the outer-type of the route.
class AlloyARoute t o o' where
  transformMRoute :: Monad m => o m outer -> o' m outer -> (t, Route t outer) -> m t
  transformARoute :: Applicative f => o f outer -> o' f outer -> (t, Route t outer) -> f t

-- | Like 'baseOpA' but for 'AlloyARoute'.
baseOpARoute :: BaseOpARoute m outer
baseOpARoute = BaseOpARoute

-- | The type that extends an applicative/monadic opset (opT) in the given
-- functor/monad (m) to be applied to the given type (t) with routes to the
-- outer type (outer).  This is for use with the 'AlloyARoute' class.
data (t :-@ opT) m outer = ((t, Route t outer) -> m t) :-@ (opT m outer)

infixr 7 :-@

-- | The terminator for opsets with 'AlloyARoute'.
data BaseOpARoute (m :: * -> *) outer = BaseOpARoute

-- | A handy synonym for a monadic/applicative opset with only one item, to use with 'AlloyARoute'.
type OneOpARoute t = t :-@ BaseOpARoute

-- | A handy synonym for a monadic/applicative opset with only two items, to use with 'AlloyARoute'.
type TwoOpARoute s t = (s :-@ t :-@ BaseOpARoute)
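To make the Route idea concrete outside Haskell, here is a hypothetical Python sketch: a route is just a list of child indices into a nested-list tree, with helpers mirroring routeGet, routeModify and routeSet. The function names and the nested-list representation are invented for this illustration and are not part of the Alloy library:

```python
def route_get(route, tree):
    """Follow a list of child indices down a nested-list tree."""
    for i in route:
        tree = tree[i]
    return tree

def route_modify(route, f, tree):
    """Return a copy of tree with f applied at the node the route names."""
    if not route:
        return f(tree)
    i, rest = route[0], route[1:]
    return [route_modify(rest, f, child) if j == i else child
            for j, child in enumerate(tree)]

def route_set(route, value, tree):
    # Setting is just modifying with a constant function, as in routeSet.
    return route_modify(route, lambda _old: value, tree)

tree = [[0, 1], [2, [3, 4]]]
r = [1, 1, 0]  # second child, then second child, then first child

print(route_get(r, tree))                       # 3
print(route_modify(r, lambda x: x * 10, tree))  # [[0, 1], [2, [30, 4]]]
print(route_set(r, 99, tree))                   # [[0, 1], [2, [99, 4]]]
```

As in the Haskell module, the original tree is never mutated; each modification rebuilds only the spine along the route and shares everything else.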
http://hackage.haskell.org/package/alloy-1.1.0/docs/src/Data-Generics-Alloy-Route.html
In the past couple of Monads posts, we’ve talked briefly about the State and Reader Monads and their potential uses and misuses. Before this series completes, I have a few more to cover, including the Writer, Continuation and eventually the Observable monad. Today, we’ll get started looking at the Writer Monad and what it can do for us.

What’s the Motivation?

Before we dive deep into what the Writer Monad is, let’s go deeper into the motivation of why we might consider this approach. Like proper developers should when approaching a new concept, we should ask, “What problem are we trying to solve? What’s the motivation here?” or else we end up with an over-applied solution to our problem. In this case, let’s take the simple example of logging or tracing in our application. Typically in logging scenarios, we could have the dreaded singleton logger.

    let doSomething() =
        Logger.Instance.Log("Beginning doing something")
        // Doing something
        Logger.Instance.Log("End doing something")

Or perhaps, we could have an injected ILogger instance handed to us via a constructor for the purposes of logging.

    type ILogger =
        abstract member Log : message : string -> unit

    type DreadedManager(logger : ILogger) =
        member __.DoSomething() =
            logger.Log("Start doing something")
            // Do something
            logger.Log("End doing something")

Or even in other scenarios, we might imagine putting in some AOP behavior for the purposes of logging, but that may of course not be fine-grained enough for what we need. Instead of doing this, how about generating some output “on the side” in a more functional manner? What if we could instead write something like the following, where I add two numbers (simplistic example, I know) and then run the computation?

    let addNumbers x y = writer {
        do! logMsg (sprintf "Adding %d and %d" x y)
        return x + y };;

    > val addNumbers : int -> int -> Writer<string list,int>

    > addNumbers 3 4 |> runWriter;;
    val it : int * string list = (7, ["Adding 3 and 4"])

What we’re seeing is that in my addNumbers function, I take two integer arguments, log a message, and finally return the computed value. What’s more interesting is the fact that the act of logging physically alters the signature of our method to indicate that such a thing is happening. When we compute our function, we not only get the result of 7 that we’re expecting, but also our transaction log. Then we can decide what we want to do with the log, for example persist it somewhere, clean it up, etc. So, how can we do this exactly?

Defining the Writer

So, now that we’ve covered the motivations around what it is and why we might use it, let’s look at how we might implement it. In order to maintain both the transaction log and the return value of our method, we’ll need to create a container to hold these values. In this case, we have the Writer<'W,'T>, where the 'W is the writer, which in our case above was a simple string list, and the 'T is the result of our method. We have a constructing function for our Writer which takes a function with no parameters and returns a tuple of our result and the log.

    type Writer<'W,'T> = Writer of (unit -> 'T * 'W)

In addition, we’ll need a way to run our Writer so that we can return our tuple of our value and the log. Let’s create a function called runWriter which computes our Writer.

    let runWriter<'W,'T> (Writer w) : ('T * 'W) = w()

Before we get started on the Monad part, there is another piece we need to understand, and that’s the Monoid.

Don’t Avoid the Monoid

Before everyone panics because I’ve brought up a word that sounds like Monad, rest easy. In fact, it’s much easier to understand than the dreaded M word. A Monoid is an algebraic data structure with a simple associative binary operation and an identity element.
Just to bring that into real-world speak, we could have a Monoid for natural numbers with an identity element of 0 and the associative binary operator of addition, or in the case of lists, we could have the identity element of an empty list and an associative binary operator of append. In Haskell, such a thing is implemented through a type class, but since in F# we don’t have this, let’s instead create a simple interface to encompass the same behavior.

    type IMonoid<'T> =
        abstract member mempty : unit -> 'T
        abstract member mappend : 'T * 'T -> 'T

In this instance, we have two methods we care about: defining our identity (mempty) and our binary operator (mappend). For example, we could implement a Monoid for a simple F# list such as the following:

    type ListMonoid<'T>() =
        interface IMonoid<'T list> with
            member this.mempty() = []
            member this.mappend(a, b) = a @ b

This instance in particular will come in handy when we’re talking about a simple logging solution. Our mempty simply returns an empty list of our 'T objects and our mappend appends the first list to the next. Now, what we need is some sort of registration process for our IMonoid instances so that we can pick one up based upon the incoming type. We could use something like a Common Service Locator to do this, and it could be ideal for a testing situation, but for now, let’s just hand roll a global associations class which maps an instance of our IMonoid to the proper type.
    type MonoidAssociations private() =
        static let associations = new Dictionary<Type, obj>()
        static member Add<'T>(monoid : IMonoid<'T>) =
            associations.Add(typeof<'T>, monoid)
        static member Get<'T>() =
            match associations.TryGetValue(typeof<'T>) with
            | true, assoc -> assoc :?> IMonoid<'T>
            | false, _ -> failwithf "No IMonoid defined for %O" <| typeof<'T>

Once we have this defined, we could add instances to this associations class, such as a string list implementation:

    MonoidAssociations.Add(new ListMonoid<string>())

And now we can use the associations class to call instances of our IMonoid for our mempty and mappend functions.

    let mempty<'T> = MonoidAssociations.Get<'T>().mempty
    let mappend<'T> a b = MonoidAssociations.Get<'T>().mappend(a, b)

After that little diversion, we’re now able to get back to the point of the post: the Writer itself. Outside of here, that’s probably the last time you’ll use the word Monoid. But I’m sure now you could impress your friends with the use of said word.

Defining the Builder

From our previous posts on creating a Builder, we should remember that the Return should have the following signature, where we take a value of 'T and then return its Monadic type.

    type Builder() =
        member Return : 'T -> Monad<'T>

Our implementation should take a value a and construct a Writer with a function of no arguments that returns a tuple of our value and an empty Monoid instance.

    type WriterBuilder() =
        member this.Return<'W,'T>(a : 'T) : Writer<'W,'T> =
            Writer(fun () -> a, mempty())

Next, we need to define the bind operation. The point of this method is to let us bind two monadic types together, such as two let! or do! statements. As you might recall, this method takes a Monadic type of 'T and a function that takes a 'T and returns a Monadic type of 'U, and the return type of our bind should be the Monadic type of 'U.
    type Builder() =
        member Bind : computation : Monad<'T> * binder : ('T -> Monad<'U>) -> Monad<'U>

Our implementation should look like the following, which creates a Writer with a constructed function that takes no parameters and then calls runWriter on our first Writer, returning a tuple of our result and a log. We then execute the binder function with our result, which gives us our second result and log. Finally, we return a tuple of our second result with the two logs appended to each other.

    type WriterBuilder() =
        member this.Bind<'W,'T,'U>(computation : Writer<'W,'T>,
                                   binder : 'T -> Writer<'W,'U>) : Writer<'W,'U> =
            Writer(fun () ->
                let (res1, log1) = runWriter computation
                let (res2, log2) = runWriter (binder res1)
                in (res2, mappend<'W> log1 log2))

You can find the rest of the methods needed to create a fully functional builder as a Gist on my GitHub. Now let’s talk about some helper methods. The first, and most useful, is the tell function, which adds an entry to our log. This function takes a log entry and creates a Writer with a function that takes no arguments and returns a tuple of unit (void for some folks) and our log entry.

    let tell logEntry = Writer(fun () -> (), logEntry)

Next up is the listen function. This allows us to listen to the log output being generated by our writer. This function takes a Writer and executes it, returning a tuple of our result and a log, as well as the log.

    let listen writer =
        Writer(fun () -> let (result, log) = runWriter writer in ((result, log), log))

The last function we’ll cover is the censor function. This function returns a new Writer whose result is the same, but which alters the log based upon what is supplied as a parameter.

    let pass m =
        Writer(fun () -> let ((a, f), w) = runWriter m in (a, f w))

    let censor censoredValue writer =
        writer {
            let! result = writer
            return (result, censoredValue) } |> pass

Doing Something Useful With It

OK, our helpers are now defined, so let’s go back and figure out how we’re going to log messages. If you remember from above, we have the tell function which does exactly that. Let’s implement logMsg to take a message and call the tell function with our message wrapped in a list.

    let logMsg (message : string) = tell [message]

Now we can try it out in a function which does some file processing for us, with some logging along the way.

    let processFiles files = writer {
        try
            do! logMsg "Begin processing files"
            for file in files do
                do! logMsg (sprintf "Processing %s" file)
                processFile file
            do! logMsg "End processing files"
        with e ->
            do! logMsg (sprintf "An exception occurred %s" (e.ToString())) }

At the end of this method, it should return to us a tuple of unit and the log entries recorded along the way, such as this:

    > processFiles files |> runWriter;;
    val it : unit * string list =
      (null, ["Begin processing files"; "Processing C:\Test1.txt";
              "Processing C:\Test2.txt"; "End processing files"])

This of course is a pretty contrived and simple example, yet pretty interesting nonetheless. But in an impure world, with such languages as F#, C#, etc., how practical is it? That’s another matter altogether, with good arguments on both sides. Using a writer implies some sort of change to your system to allow for logging, whereas in a more impure environment, logging could take place everywhere. So, when dealing with an impure world already, just go with what you know.

Conclusion

Once again, by looking at Monads, we can discover what abstractions can be accomplished with them, but just as well, what they are not about. Certainly, functional languages don’t need them, but they certainly can be quite useful. When we find repeated behavior, such as logging, this abstraction can be quite powerful.
Practically though in languages such as F#, they can be useful, but other patterns and abstractions can as well. Find the right abstraction and use it.
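The “output on the side” idea from the post translates to any language with tuples. A hedged Python sketch (unit, bind and tell are this sketch's own names, chosen to mirror the F# code above — not an existing library): each step returns a (result, log) pair, and bind threads the result through while concatenating the logs, which is exactly the list monoid's mappend.

```python
def unit(value):
    """Wrap a plain value with an empty log (the monoid's mempty)."""
    return (value, [])

def bind(writer, f):
    """Feed the result into f, concatenating the logs (the monoid's mappend)."""
    value, log = writer
    value2, log2 = f(value)
    return (value2, log + log2)

def tell(message):
    """A writer whose only effect is a single log entry."""
    return (None, [message])

def add_numbers(x, y):
    # Log a message "on the side", then return the computed value,
    # mirroring the addNumbers example from the post.
    return bind(tell("Adding %d and %d" % (x, y)),
                lambda _ignored: unit(x + y))

result, log = add_numbers(3, 4)
print(result, log)  # 7 ['Adding 3 and 4']
```

As in the F# version, the logging shows up in the function's return shape rather than in a global logger, so the caller decides what to do with the accumulated log.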
http://codebetter.com/matthewpodwysocki/2010/02/02/a-kick-in-the-monads-writer-edition/
Hello All,

I am attempting to use a keras neural net in one of my Edward models, and running into trouble. Essentially I am taking a keras Model and trying to apply it to an Edward RandomVariable, which doesn’t seem to work. From the documentation here, this would be achieved simply:

    from edward.models import Bernoulli, Normal
    from keras.layers import Dense

    z = Normal(loc=tf.zeros([N, d]), scale=tf.ones([N, d]))
    h = Dense(256, activation='relu')(z)
    x = Bernoulli(logits=Dense(28 * 28)(h))

However when I try, I get the following error:

    ValueError: Layer dense_1 was called with an input that isn't a symbolic tensor.
    Received type: <class 'abc.Normal'>.
    Full input: [<ed.RandomVariable 'Normal/' shape=(10, 4) dtype=float32>].
    All inputs to the layer should be tensors.

It seems that keras doesn’t play nice with edward. Is there a work around? Thanks!
https://discourse.edwardlib.org/t/neural-networks-example-doesnt-work/587
fglrx 2:8.960-0ubuntu1: fglrx kernel module failed to build [error: ‘cpu_possible_map’ undeclared (first use in this function)]

Bug Description

Apport report. fglrx 8.960 fails to build with kernel 3.4.0-1-generic. Module compilation failed with:

    CC [M] /var/lib/
    /var/lib/
    /var/lib/
    /var/lib/
    /var/lib/
    make[2]: *** [/var/lib/

ProblemType: Package
DistroRelease: Ubuntu 12.10
Package: fglrx 2:8.960-0ubuntu1
Uname: Linux 3.3.0-4.
NonfreeKernelMo
ApportVersion: 2.0.1-0ubuntu7
Architecture: amd64
DKMSKernelVersion: 3.4.0-1-generic
Date: Wed May 2 16:31:41 2012
InstallationMedia: Ubuntu 11.10 "Oneiric Ocelot" - Release amd64+mac (20111012)
PackageVersion: 2:8.960-0ubuntu1
SourcePackage: fglrx-installer
Title: fglrx 2:8.960-0ubuntu1: fglrx kernel module failed to build
UpgradeStatus: No upgrade log present (probably fresh install)

The attachment "34.

I can get the module to build with that 34.patch, but then I get fglrx: Unknown symbol old_rsp (err 0), and I found someone else reporting that bug to AMD: http://

New patch from http:// (Testing on my system with AMD driver is in progress ;)

Using only the patch from #6, fglrx builds and X loads. glxinfo reports info as expected (as far as I can tell ;). Unity3D starts.

I had no luck with the new patch. It still gave compiler warnings. So I combined the two patches into one, and that worked for my situation.

fglrx-installer (2:8.960-0ubuntu2) quantal; urgency=low

  * replace-
    replace-
    - Add support for Linux 3.5.

 -- Alberto Milone <email address hidden>  Sun, 17 Jun 2012 17:20:49 +0200

Fix works for me on my AMD 64-bit system. Thanks.

Still gives on x86 system:

    /var/lib/
    /var/lib/
    /var/lib/

That was fixed in the combined patch:

    @@ -187,6 +187,9 @@
     #include <linux/gfp.h>
     #include <linux/swap.h>
     #include "asm/i387.h"
    +#if LINUX_VERSION_CODE >= KERNEL_
    +#include <asm/fpu-
    +#endif
     #include "firegl_public.h"
     #include "kcl_osconfig.h"

Guys, I'm hitting the same problem installing the fglrx AMD driver on the 3.4 kernel from Ubuntu mainline.
Can you explain to me how to apply this patch? Thank you

@Shabang: Check out the thread on ubuntuforums: http://

There's also a new patch for kernel 3.5 which works for me (credited to Enrico Tagliavini). For everybody who is interested: for my kernel 3.5.1 and the newest amd-driver-

    get amd-driver-
    get FGLRX-8-
    ./amd-driver-
    patch a/common/
    patch a/common/
    cd a
    sh /usr/share/
    ./ati-installer.sh 8.98 --install

Have fun,
Holm

@Holm: patches much appreciated. Last hunk ("for_each_

Perhaps the xorg-edgers guys might be interested in putting those patches into the fglrx-installer packages?

Possible patch taken from: http://phoronix.com/forums/showthread.php?68922-Patch-to-compile-fgrlx-module-on-Linux-3-3-rc4-with-x86-32-bit-arch&p=257809#post257809
https://bugs.launchpad.net/ubuntu/+source/fglrx-installer/+bug/993427/
A customer reported a problem showing an icon on their dialog box. We verified that this code does execute during the handling of the WM_INITDIALOG message. No assertion fires, yet no icon appears either.

    SHFILEINFO sfi = { 0 };
    DWORD_PTR dwResult = SHGetFileInfo(m_pszFile, 0, &sfi, sizeof(sfi), SHGFI_ICON);
    assert(dwResult != 0);
    m_hico = sfi.hIcon;
    assert(m_hico != nullptr);
    assert(GetDlgItem(hdlg, IDI_CUSTOMICON) != nullptr);
    SendDlgItemMessage(hdlg, IDI_CUSTOMICON, WM_SETICON, ICON_BIG, (LPARAM)m_hico);
    assert(SendDlgItemMessage(hdlg, IDI_CUSTOMICON, WM_GETICON, ICON_BIG, 0) == (LPARAM)m_hico);

Our dialog template says

    ICON "", IDI_CUSTOMICON, 10, 10, 0, 0

The customer did some helpful preliminary troubleshooting:

- Verify that the code does indeed execute. It sounds obvious, but some people forget to check this. They get distracted trying to figure out why a function isn't working, when in fact the root cause is that you forgot to call the function in the first place.
- Verify that the SHGetFileInfo call succeeded. That rules out the case that the static control is displaying nothing because you didn't give it anything to display.
- Verify via GetDlgItem that the control you're trying to talk to really does exist. That rules out the case that you are talking to an empty room. (For example, maybe you added the control to the wrong template.)
- Verify via WM_GETICON that the attempt to change the icon really worked.

The problem is that the customer is using the wrong icon-setting message. The WM_SETICON message lets you customize the icon that is displayed in the window's caption bar. For this to have any effect, your window naturally needs to have the WS_CAPTION style. If you don't have a caption, then telling the window manager, "Please display this icon in my caption" is mostly a waste of time. It's like signing up for a lawn-mowing service when you don't have a lawn.

The message to change the icon displayed inside a static control is STM_SETICON.
    SendDlgItemMessage(hdlg, IDI_CUSTOMICON, STM_SETICON, (LPARAM)m_hico, 0);

Red herring: Some of you may have noticed that the customer set their control size to 0×0. "You aren't seeing an icon because you set the control to zero size!" But since this control was not created with SS_CENTERIMAGE or SS_REALSIZEIMAGE, the control will resize itself to match the size of the icon.

Here's a sample program to show both types of icons set on the same window, so you can see the difference.

    #include <windows.h>
    #include <commctrl.h>

    LRESULT CALLBACK SubclassProc(HWND hwnd, UINT uMsg,
        WPARAM wParam, LPARAM lParam,
        UINT_PTR uIdSubclass, DWORD_PTR dwRefData)
    {
        switch (uMsg) {
        case WM_NCDESTROY:
            RemoveWindowSubclass(hwnd, SubclassProc, 0);
            PostQuitMessage(0);
            break;
        }
        return DefSubclassProc(hwnd, uMsg, wParam, lParam);
    }

    int WINAPI WinMain(HINSTANCE hinst, HINSTANCE hinstPrev,
                       PSTR lpCmdLine, int nShowCmd)
    {
        HWND hwnd = CreateWindow("static", nullptr,
            WS_OVERLAPPEDWINDOW | WS_VISIBLE | SS_ICON | SS_CENTERIMAGE,
            CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT,
            nullptr, nullptr, hinst, nullptr);
        SetWindowSubclass(hwnd, SubclassProc, 0, 0);

        HICON hicoCaption = LoadIcon(nullptr, IDI_EXCLAMATION);
        SendMessage(hwnd, WM_SETICON, ICON_BIG,
                    reinterpret_cast<LPARAM>(hicoCaption));

        HICON hicoClient = LoadIcon(nullptr, IDI_QUESTION);
        SendMessage(hwnd, STM_SETICON,
                    reinterpret_cast<LPARAM>(hicoClient), 0);

        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        DestroyIcon(hicoClient);
        DestroyIcon(hicoCaption);
        return 0;
    }

We create a top-level static window, which is highly unusual, since static controls are nearly always children of some other window. I'm doing this specifically to show the two different icons. You don't want to do this in a real program.

The static control has the SS_ICON style, because we want it to display an icon, and the SS_CENTERIMAGE style, because we just want it to center the icon in its client area without resizing. (We will control the size.)
We subclass the window so that we can post a quit message to exit the program when the window is destroyed, which the user can do by pressing Alt+F4. (Hey, this is just a demo program. Catching clicks on the × button is just extra code that will distract from the purpose of the demonstration. Heck, this entire subclass thing is already distracting from the purpose of the demonstration!)

We load up two icons, an exclamation point, which we set as our caption icon, and a question mark, which we put in our client area. (We could have used the Static_SetIcon macro in windowsx.h to send the STM_SETICON message, but I did it manually just to make the message explicit.)

Run the program, and there you can see the two different types of icons: The exclamation point goes in the caption, and the question mark goes in the client area.

Ability to perform basic diagnostics is quite refreshing after all those that couldn't.

Yeah, I'm prepared to forgive the customer for this one. Icon handling is confusing, and using the wrong "set the icon" message seems like a pretty simple mistake to make. And their attempts to figure out what went wrong are quite impressive, especially compared to the average customer stories on this blog. If I were debugging that, it would not occur to me to check the actual message; I would just say to myself "hey, it says SETICON, it must mean set the icon. No worries there."

I'm surprised it doesn't react to a click on the ×, while an ordinary window does. That means the control goes out of its way not to react, right?

In this example, the main difference between using SetWindowSubclass vs SetWindowLongPtr with GWLP_WNDPROC is that with the former you just call DefSubclassProc, while with the latter you'd have to store the previous handler by calling GetWindowLongPtr with GWLP_WNDPROC and call it in the new handler.
A feature of SetWindowSubclass is that the provided SUBCLASSPROC is called with the provided uIdSubclass and the dwRefData, but those are not being used in this example. At least uIdSubclass should be used in the call to RemoveWindowSubclass. Also, it's very strange to call RemoveWindowSubclass within the same SUBCLASSPROC. Will DefSubclassProc know how to deal with that, e.g. by following a copy of the list of subclasses, made before the top SUBCLASSPROC was called?

2 typos: no semicolon after the first call to LoadIcon, and the second DestroyIcon references hicoCaptionSm (which doesn't exist) instead of hicoCaption.

After years of reading posts where devs make some of the silliest programming mistakes, and don't even perform the most basic troubleshooting techniques, it is refreshing to see a post featuring a competent customer.

@morlamweb: IMO that's the first principle of debugging: Always verify the obvious, basic things first. To err is human, and it's always surprising (and slightly embarrassing) to see how often this finds the problem.

> WinMain
> CreateWindow("

We should probably start using UNICODE functions even in scratch programs, at least until the default ANSI code page is UTF-8.</pedant>

Maurits: Can I set the code page to UTF-8? Though it probably wouldn't help my situation. We are stuck supporting an ancient MFC grid control that is used in a hundred places in our software, and though our software is built with UNICODE, I can't put non-code-page text into it. (Someone even figured out a trick to make it usable in our 64-bit build.)

I successfully set OEMCP to UTF8 and it worked quite well. The editor can't handle it though, so don't try to paste out of 7 bit.
However rather than subclassing the window to notice its destruction I just used IsWindow(hwnd) as my loop test. Note that as @Medinoc suggested I didn't have to add any code to handle closing the window since the existing default syscommand handling took care of all that. @Paul Z: Yeah I wouldn't even speak of "forgiving". Everybody makes mistakes and they spent a large amount of work trying to figure out the problem and narrowing it down as much as they could before asking for help. That's pretty much model student behavior – what could one ask more for? If everybody was like this, support calls would be way rarer – sadly as Ray's usual posts demonstrate almost nobody goes to that effort :( I also don't understand why subclassing is necessary. Would this not be enough? MSG msg; while (GetMessage(&msg, NULL, 0, 0)) { if (msg.message == WM_NCDESTROY) PostQuitMessage(0); TranslateMessage(&msg); DispatchMessage(&msg); } "It sounds obvious, but some people forget to check this." Worse, they lock themselves into not checking because they already did and "I didn't change anything." e.g. my own experience at Microsoft at 2007 when an IInternetSecurityManager I had created just stopped working the day after I had tested it and checked it in. After a week of increasingly voodoo-driven investigation, and a desperate outreach to the IE team, I finally noticed that another dev had clobbered my OLE siting code in a merge. @Alex Cohn, the WM_NCDESTROY message is sent, not posted, so GetMessage will not return it. You have to somehow have a window proc there. I was just ranting that SetWindowSubclass and friends are not really necessary in this case (read these 3 words again with care). By using it, this otherwise little program that just uses a static control as a top-level window now requires comctl32. But I guess Raymond actually wishes the best for people that simply copy-paste his code snippets. Indeed, it's a much better way of subclassing. 
I haven't tested if calling RemoveWindowSubclass is safe within the subclass's proc, but I guess it is. Oh look, he already blogged about it a long time ago, how awesome is that? blogs.msdn.com/…/55653.aspx @John Doe, thanks — my Win32 got rusty in the last few years, but I took Raymond's challenge and installed MSVC Express on an ancient Windows laptop running some unsupported version of the OS. True, WM_NCDESTROY does not happen without subclassing. But I found that even without WM_NCDESTROY, the following code simply works, without comctl32.lib and without SetWindowSubclass(). Am I missing something?

MSG msg;
while (GetMessage(&msg, NULL, 0, 0)) {
    if (msg.message == WM_SYSCOMMAND && msg.wParam == SC_CLOSE) PostQuitMessage(0);
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}

@Alex Cohn: If you call the PostQuitMessage function in response to WM_SYSCOMMAND with wParam == SC_CLOSE, then your application will miss out on getting its last few messages. These will include WM_CLOSE, WM_DESTROY and WM_NCDESTROY. Whether or not this will be a problem will depend upon the design of your application. @Sam Steele: you are definitely correct in the general case; but in the specific case of an app that wraps around a standard control, there doesn't seem to be a problem with that. At least, my debugger shows that WM_NCDESTROY in SubclassProc() has a direct stack trace to the message loop with message == WM_SYSCOMMAND && wParam == SC_CLOSE. And I found no other way to close this tiny app but Alt-F4 or Alt-Space + &Close. Am I missing something? I could not find a way to DestroyWindow() within this program. But I found a minimal change that shows the beauty of subclassing. For a reason I don't completely understand, when I change the "static" class to "edit", the program does catch clicks on the × button, but nothing except WM_NCLBUTTONDOWN arrives at the message loop. Now you either must check the mouse location and map it to the × button, or subclass and see it work all by itself.
But why do static and edit windows handle the × button differently? Differently means (if I interpret the evidence correctly): "static" does not handle the caption bar buttons, while "edit" does handle them? @Alex Cohn, I'm conjecturing here: maybe because the edit control was made to be a top-level window by itself, e.g. a very, very simple notepad without a status bar. The static control, and probably checkboxes, radio buttons, push buttons and such, probably weren't, IMO following the line of thought that they're only useful as child windows, or rather distasteful as top-level windows.
https://blogs.msdn.microsoft.com/oldnewthing/20140514-00/?p=993/
On 17/01/2014 at 02:51, xxxxxxxx wrote:

Hi

I try to position objects on a spline and have some issues when the spline is not in the same null object. I try to get the offset by checking all parent objects, but I have to take the rotation into account. Does one of you have an idea?

def GetContainerPosOffset(self, node) :
    parentpos = c4d.Vector(0)
    if node.GetUp() != None:
        parnode = self.GetContainerPosOffset(node.GetUp())
        #print parnode
        parentpos += parnode
    parentpos += node.GetAbsPos()
    return parentpos

Edit notice NiklasR : fixed code formatting

On 17/01/2014 at 02:59, xxxxxxxx wrote:

You should prefer GetMg().off over GetAbsPos(). Why not just use GetUpMg()?

Best, -Niklas

On 17/01/2014 at 03:19, xxxxxxxx wrote:

hey Niklas, thanks for your reply - you're right, it's a better way - but it doesn't help me to calculate the right position

On 17/01/2014 at 03:23, xxxxxxxx wrote:

a nice way to get the world coordinates of an object

On 17/01/2014 at 03:35, xxxxxxxx wrote:

sorry, you're right - I read the description of the function -
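[Editor's note: the point about preferring the global matrix (GetMg().off) over summing GetAbsPos() values can be illustrated outside Cinema 4D. The sketch below is plain Python with a hypothetical 2D transform (position plus rotation angle), not the c4d API: naively adding local positions ignores the parent's rotation, while composing the transforms gives the true world position.]

```python
import math

# Hypothetical 2D stand-in for a hierarchy: each transform is (x, y, angle).
# Composing applies the parent's rotation to the child's local offset --
# which is what a global matrix does and a plain position sum does not.
def to_world(parent, child):
    px, py, pa = parent
    cx, cy, _ = child
    wx = px + cx * math.cos(pa) - cy * math.sin(pa)
    wy = py + cx * math.sin(pa) + cy * math.cos(pa)
    return (wx, wy)

parent = (10.0, 0.0, math.pi / 2)   # parent shifted +10 in x, rotated 90 degrees
child = (5.0, 0.0, 0.0)             # child 5 units along the parent's local x axis

naive = (parent[0] + child[0], parent[1] + child[1])  # ignores rotation
world = to_world(parent, child)                       # rotation applied
print(naive, world)
```

With the parent rotated 90 degrees, the naive sum lands at (15, 0) while the composed transform correctly places the child at (10, 5).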
https://plugincafe.maxon.net/topic/7655/9638_position-rotation--of-parent/1
A soft reminder: if you are not an administrator, you can’t run SSIS packages. While all situations are different, this post suggests three possible ways to do it. Each will have its advantages and disadvantages. So, why not check it out, including the examples? Let’s start.

1. Make Users Run SSIS Packages Using a File

Let’s begin with a typical scenario of extracting files. SSIS packages are suitable for extracting data in a file and loading it into a relational database, like Excel spreadsheets to SQL Server. This way, users can trigger the package execution when the file is ready for extraction.

What You Need
- A file like an Excel spreadsheet (.xlsx) and a shared folder/FTP location, where the user will copy it.
- An SSIS package that extracts the data from an Excel file to SQL Server. You schedule this in SQL Server Agent. You and your user have to agree on the frequency — it can be twice a day, at lunchtime, or whenever.
- Only one user copies the file to the shared folder to avoid duplicate processing.

Advantages
- This is the simplest way of all the three approaches, depending on your requirements.
- You only need to secure the shared folder or FTP location.

Disadvantages
- It limits your options to file input. Neither is the approach applicable if the data comes from a relational database.
- Timing can be an issue. If the user is 1 second late to the next execution, they must wait for the next scheduled run.
- Errors can occur for manually created files.

SSIS Package Example

Have a look at a simple example of using SSIS to extract the data from a file into a relational database. The SSIS Package should have:

PACKAGE VARIABLES
- filename — string. It stores the file name and the full path.
- isFileExists — a boolean variable that determines if the file exists (true) or not (false).

CONNECTION MANAGERS
- A database connection to the relational database.
- A connection to open the Excel file.
- An SMTP connection to send the user and admin emails.
SCRIPT TASK — Check if File Exists

The first Script Task checks if the file exists. See Figures 2 and 3 for the Script Task Properties.
- Include the filename package variable to ReadOnlyVariables.
- Include the isFileExists package variable to ReadWriteVariables.
- In the code, add a namespace for System.IO.
- Test the file for existence, using file.Exists.
- If the file is found, set isFileExists to true.

PRECEDENCE CONSTRAINTS

You need two Precedence constraints with an Evaluation operation set to Expression.
1. If isFileExists is true, proceed to the Data Flow Task to process the file.
2. If the file does not exist (Expression = !@[User::isFileExists]), execution will end. So, email the admin about it (this is optional).

DATA FLOW TASK

This will include opening the Excel file, extracting and transforming the data, and dumping it into a SQL Server table.

SCRIPT TASK — Move the processed file to the Processed folder

The file copied by the user will move to a Processed folder after extraction. Why? This avoids reprocessing the file.

SEND EMAIL TASK

After designing and testing the package, deploy it in the SSIS Catalog (SSISDB) using Visual Studio. Then, go to SQL Server Agent and add a job for the package with the step and the schedule. That’s it — a simple and straightforward way to make users run SSIS packages. But what if we don’t have a file to process?

2. Make Users Run SSIS Packages by Using a SQL Table

The second approach is to use a table instead of a file. The SSIS package checks for new records from the table. Then, processing proceeds. And lastly, records are tagged as processed.

What You Need
- At least 1 table that the SSIS package will check. You may need more, depending on your requirements.
- The app will write to the table with all information required by the SSIS package.
- The SSIS package is scheduled to run regularly. It could be 2x daily, or every 15 mins, depending on the requirements or as agreed with users.
Advantages
- No “processing flat files” limitation. Any data source will do — SQL Server or any relational database, Excel, text file, Sharepoint list, you name it.
- More flexibility than in option 1.
- The possibility to allow multiple users.

Disadvantages
- You need an app to do the INSERT in the designated table, and it could take some time to do this. If you need queuing, it will take longer.
- Timing can still be an issue.

SSIS Package Example

Figure 6 below shows the general design for simple requirements. We use a table with columns postTime, databaseUser, and object. The package should include:

PACKAGE VARIABLES
- tableData — object type. It is the container of our recordset.
- postTime — DateTime. It relates to the current value of the postTime column.
- user — string. It relates to the current value of the databaseUser column.
- object — string. It relates to the current value of the object column.

CONNECTION MANAGER

You need at least 1 ADO.Net connection to SQL Server or any relational database.

DATA FLOW TASK — Get Table Data

It retrieves the log table records and puts them in the recordset. The Recordset Destination has the following properties:
- Include the three columns we need. Note the arrangement of the columns in the recordset. You will need it in the ForEach Loop Container later.
- Set the recordset to the tableData package variable.

FOREACH LOOP CONTAINER

Next, traverse the recordset with a loop and capture each column value. Here are the properties that you need to set:
- Set Enumerator to Foreach ADO Enumerator.
- Set the ADO object source variable to the tableData package variable, as shown in Figures 9 and 10.
- Finally, in the Variable Mappings, add the package variables postTime, user, and object, based on the columns’ arrangement in the recordset.

DATA FLOW TASK

Inside the ForEach Loop Container, there is a Data Flow Task. It does all you need to do for each of the records in the recordset.
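Outside of SSIS, the overall cycle of this approach — fetch unprocessed rows, loop over them, then tag them as processed — can be sketched in a few lines. The sketch below uses Python with an in-memory SQLite table purely for illustration; the table name ssis_log and the processed flag column are invented for the example, not part of any SSIS API.

```python
import sqlite3

# Illustrative only: a log table polled for new rows, processed, then tagged.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ssis_log (
    postTime TEXT, databaseUser TEXT, object TEXT, processed INTEGER DEFAULT 0)""")
conn.executemany(
    "INSERT INTO ssis_log (postTime, databaseUser, object) VALUES (?, ?, ?)",
    [("2021-01-01 08:00", "user1", "Invoice"),
     ("2021-01-01 08:05", "user2", "Order")])

# "Get Table Data": fetch only the rows not yet processed (the recordset)
rows = conn.execute(
    "SELECT rowid, postTime, databaseUser, object FROM ssis_log WHERE processed = 0"
).fetchall()

# The ForEach Loop Container: do the per-row work, then tag the row
for rowid, post_time, user, obj in rows:
    # ... per-row work would go here ...
    conn.execute("UPDATE ssis_log SET processed = 1 WHERE rowid = ?", (rowid,))

remaining = conn.execute(
    "SELECT COUNT(*) FROM ssis_log WHERE processed = 0").fetchone()[0]
print(len(rows), remaining)  # 2 rows picked up, 0 left to reprocess
```

On the next scheduled run, the same query finds nothing to do — which is exactly what keeps the package from reprocessing old requests.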
Here, you can also tag the records as processed to avoid reprocessing (see Figure 6). The package variables will be updated as the loop progresses. So, what do you think? 3. Make Users Run SSIS Packages by Using a Stored Procedure The third and last option is using a stored procedure. It triggers the execution of a package. Instead of a package, you’re going to have a stored procedure example to present. It doesn’t matter what’s inside the package at this point. You can use the same approach on other SSIS packages. What You Need - A SQL Server Agent job for the SSIS Catalog package with 1 step to execute the package. It does not require schedules. - A stored procedure that will call msdb.dbo.sp_start_job to execute the job. - The EXECUTE permissions for the user account that will execute the stored procedure. - An app (whatever it is) that will trigger the execution of the stored procedure. - An admin account to impersonate your non-admin user to execute the SSIS package. Advantages - No need to schedule a job for the SSIS package. Although, you need to create a job in SQL Server Agent. - Any data source will do — SQL Server or any relational database, Excel, text file, Sharepoint list, etc. - The timing of the execution is more flexible for your user. Disadvantages - The msdb stored procedure does not accept package parameters. If you need parameters, follow the 2nd approach with a table of parameters required by the package. Stored Procedure Example This example shows the stored procedure having EXECUTE AS and REVERT to execute an SSIS package successfully. The steps below assume you already have the SSIS Catalog Database (SSISDB), and you are familiar with it. - Create 2 Windows or domain accounts: User1 — the non-sysadmin user and ssis_user1 — the sysadmin user. - Using Visual Studio, create and deploy the SSIS package in SSISDB. We use PackageSample.dtsx as an example. - Create an SQL Server Agent job for the SSIS package. 
Note that you don’t need to add a schedule. We use testJob as an example.

- Grant IMPERSONATE permissions. We need this to run msdb.dbo.sp_start_job, which will execute the package successfully. Here’s an example:

USE master
GO
GRANT IMPERSONATE ON LOGIN::[DOMAIN1\ssis_user1] TO [user1]
GO

5. Create the stored procedure to run the package.

The stored procedure will use EXECUTE AS to impersonate the sysadmin ssis_user1 until the package is triggered to run. REVERT will return the security context to the non-sysadmin user user1. The msdb stored procedure sp_start_job will activate the package to start. This is why sysadmin permission is needed:

CREATE PROCEDURE spRunJob
AS
BEGIN
    EXECUTE AS LOGIN = 'DOMAIN1\ssis_user1'          -- impersonation starts here
    EXEC msdb.dbo.sp_start_job @job_name = 'testJob' -- execute the job
    REVERT                                           -- impersonation ends here
END
GO

6. Test that the setup works. Run this in a SQL Server Management Studio query window:

-- This assumes that you are logged in to SSMS as an admin
-- Simulate running the package using user1 with reduced security
EXECUTE AS LOGIN = 'user1'

-- this is for you to see if the context has shifted to user1
PRINT 'About to execute stored procedure by ' + SUSER_NAME()

-- execute our sample stored procedure
EXEC spRunJob

-- return the security context to previous settings
REVERT

-- this is for you to see if the context has shifted back to you
PRINT 'After REVERT, the security context is back to ' + SUSER_NAME()

How about using the built-in stored procedures in SSISDB?
If you use the built-in SSISDB stored procedures to execute packages, the stored procedure will look like this:

CREATE PROCEDURE spRunPackage
(
    @fromDate DATE,
    @toDate DATE,
    @category VARCHAR(15)
)
AS
BEGIN
    DECLARE @execution_id bigint

    EXECUTE AS LOGIN = 'DOMAIN1\ssis_user1' -- impersonate

    EXEC [SSISDB].[catalog].[create_execution]
        @package_name=N'PackageSample.dtsx'
        ,@execution_id=@execution_id OUTPUT
        ,@folder_name=N'Sample1'
        ,@project_name=N'SSIS3'
        ,@use32bitruntime=False
        ,@reference_id=Null

    -- Define parameters
    EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id
        ,@object_type=30
        ,@parameter_name=N'fromDate'
        ,@parameter_value=@fromDate

    EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id
        ,@object_type=30
        ,@parameter_name=N'toDate'
        ,@parameter_value=@toDate

    EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id
        ,@object_type=30
        ,@parameter_name=N'category'
        ,@parameter_value=@category

    -- Execute the package
    EXEC [SSISDB].[catalog].[start_execution] @execution_id

    -- Switch to previous security context
    REVERT
END
GO

Advantages
- No need to create a job in SQL Server Agent.
- You can pass parameters in the stored procedure, and then to the package, using [SSISDB].[catalog].[set_execution_parameter_value].

The Big Catch

I can’t make it work in SQL Server 2019 using a non-sysadmin user. However, I made it work before using SQL Server 2012. The stored procedure only works if there’s no impersonation — that means no EXECUTE AS and REVERT, and it should be triggered only by a sysadmin user. That sucks because it won’t serve our purpose. I tried to make it work but failed, as you can see in this forum thread. Eventually, you can only use the msdb stored procedure along with the SQL Server Agent job.
https://codingsight.medium.com/3-easy-and-secure-ways-to-make-users-run-ssis-packages-c7aef736803e?source=post_page-----c7aef736803e--------------------------------
Eclipse Community Forums - RDF feed

EIS OneToMany issue

I'm reposting this thread which started on the eclipse-dev mailing list.

James, I added a @Customizer to my Album class which calls setCustomSelectionQuery(readAllQuery) but my query is never called. I'm not sure what to do next, so I'm attaching my code so far. The datagrid I'm using is TIBCO ActiveSpaces. The structure looks like this:

space Album:
- ID (String)

space Track:
- ID (byte[])
- NAME (String)
- NUMBER (int)
- ALBUM_ID (String)

Thanks for your help,
Julien

From: James Sutherland <JAMES.SUTHERLAND@xxxxxxxxxx>
Date: Mon, 26 Mar 2012 05:43:49 -0700 (PDT)
Delivered-to: eclipselink-dev@eclipse.org
Subject: [eclipselink-dev] EIS OneToMany issue

For your error, OneToMany mappings in NoSQL cannot use a mappedBy; this is solely a relational option. A OneToMany in NoSQL is typically stored as a collection of ids in the source object. So remove the mappedBy; you can set the field using @JoinField. It is possible to map a OneToMany using a query, but not through annotations — you would need to customize the EISOneToOneMapping in a DescriptorCustomizer to set a selectionQuery. What NoSQL database are you using? How is your data structured?

From: Julien Ruaux [mailto:jruaux@xxxxxxxxx]
Sent: March-26-12 1:28 AM
To: eclipselink-dev@xxxxxxxxxxx
Subject: [eclipselink-dev] EIS OneToMany issue

Hi, I'm sorry for posting this on the dev list, but eclipselink-users seems to be down at the moment. Congratulations on a great ORM framework; I'm very interested in the EIS support offered by EclipseLink.
I am encountering this issue with the one-to-many relationship using the latest org.persistence.core from svn:

Caused by: java.lang.ClassCastException: org.eclipse.persistence.eis.mappings.EISOneToOneMapping cannot be cast to org.eclipse.persistence.mappings.OneToOneMapping
at org.eclipse.persistence.internal.jpa.metadata.accessors.mappings.OneToManyAccessor.processOneToManyMapping(OneToManyAccessor.java:207)

My model is pretty simple - an Album has many Tracks, and since my EIS records do not support collections I would like the foreign key to be on the target (Track):

@Entity
@NoSql(dataFormat = DataFormatType.MAPPED)
public class Album {
    @Id
    public String id;
    @OneToMany(mappedBy = "album")
    public List<Track> tracks;
}

@Entity
@NoSql(dataFormat = DataFormatType.MAPPED)
public class Track {
    @Id
    @GeneratedValue
    public byte[] id;
    public int number;
    public String name;
    @ManyToOne
    public Album album;
}

Any idea what I might be doing wrong?
Thanks, Julien

Julien Ruaux 2012-03-27T03:42:57-00:00

Re: EIS OneToMany issue

How are you using TIBCO ActiveSpaces? Do you have your own EISPlatform and your own JCA adapter? Can you include these? Does TIBCO ActiveSpaces support querying, and what kind of data format does it support? If it is a data grid, then it would probably be much more efficient to store a collection of trackIds in the Album than query the entire grid for tracks with the albumId — or even better, just store the tracks with the Album instead of separately. Your selectionCriteria is not a valid expression (an expression must be conditional and use an ExpressionBuilder), but that doesn't matter unless you have implemented some kind of dynamic querying support for TIBCO. Normally the setCustomSelectionQuery would be a query that uses a native EISInteraction.

James Sutherland 2012-04-02T14:26:55-00:00
http://www.eclipse.org/forums/feed.php?mode=m&th=317418&basic=1
Java Quiz 4: Passing a Parameter to a Constructor

Catch up with the answer to the last Java Quiz about exception handling and dive into a new one about passing parameters to constructors.

Before we start with this week's quiz, here is the answer to Java Quiz 3: Handling Exceptions.

By passing the parameters 13 and 0 to the method print, the statement nr = accounts[i] / i2; causes an ArrayIndexOutOfBoundsException. The reason is that element 13 doesn't exist: the expression nr = accounts[i] / i2; first tries to access element 13, before it gets to divide by zero. The code doesn't handle ArrayIndexOutOfBoundsException specifically, but Exception is a generic exception handler. The statement System.out.print("T"); writes T to the standard output.

By passing the parameters 12 and 0 to the method print, the statement nr = accounts[i] / i2; divides zero by zero, which causes an ArithmeticException. The statement System.out.print("S"); writes S to the standard output.

The correct answer is: D.

Here is the quiz for today! What happens when the following program is compiled and run?

public class MyClass {
    int y = 3;

    public MyClass(int i) {
        y += i;
    }

    public MyClass(int i, int i2) {
        y += (i + i2);
        System.out.print(y);
    }

    public int method(int i) {
        y += i;
        return y;
    }

    public static void main(String[] args) {
        new MyClass(new MyClass(5).method(2), 4);
    }
}

A. This program writes "12" to the standard output.
B. This program writes "24" to the standard output.
C. This program writes "21" to the standard output.
D. This program writes "17" to the standard output.
E. This program writes "18" to the standard output.
F. This program writes "16" to the standard output.
https://dzone.com/articles/java-quiz-4-passing-a-parameter-to-a-constructor?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fjava
The class presented in this article helps to write client-side scripting inside .NET code. Sometimes you have to write JavaScript inside your code. The project I was doing involved JavaScript written inside .NET code on a number of pages. Before, I used a string object to create a script, which went as follows:

string script = "<script language=\"Javascript\">";
script += "alert('Hello world');";
script += "</script>";
RegisterClientScriptBlock("myscript", script);

So, the class which I developed is very simple but helps the code to be a bit cleaner, and it's using StringBuilder, which is better when it comes to concatenating strings. Notice that we always have to type the begin script (<script language="Javascript">) and end script (</script>) tags, where we can make a mistake while typing our code. Using this class, you don't have to type the above mentioned tags in your scripts.

Following is an example of using the class which I have developed:

// include the following namespace in your class where you'll use it
using vs.helpers;

ScriptHelper js = new ScriptHelper();
// another constructor ScriptHelper(string language), e.g. ScriptHelper("VBScript")
js.Add("alert('Hello world');");
js.Add("alert('This is an example');");
js.End(); // adds the </script> tag to the script
RegisterClientScriptBlock("myscript", js.ToString());

Notice that you don't have to begin or end the <script> tags. As mentioned earlier, ScriptHelper is using StringBuilder to concatenate strings, which is better when it comes to performance. The default language for the script is JavaScript, but you can change the language while constructing the ScriptHelper object by calling the constructor which takes one string argument that lets you specify the language of your script.

A DLL for Visual Studio 2002 (1.0 framework) is included in the code. If anyone would like to use the code in 1.1, they can recompile the ScriptHelper class in VS 2003. Sorry, I only have VS 2002 :).
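For readers outside .NET, the builder idea itself is easy to transplant. Here is a rough Python sketch of the same pattern, written for illustration only (the names mirror the C# class but this is not the article's actual code): collect fragments in a list and emit the begin/end script tags automatically.

```python
# A hypothetical Python analogue of ScriptHelper: the caller never types the
# <script>...</script> tags, and fragments are joined once at the end (the
# same reason the C# version uses StringBuilder instead of += on strings).
class ScriptHelper:
    def __init__(self, language="Javascript"):
        self._parts = ['<script language="%s">' % language]

    def add(self, line):
        self._parts.append(line)

    def to_string(self):
        # the closing tag is appended here, mirroring End() + ToString()
        return "\n".join(self._parts + ["</script>"])

js = ScriptHelper()
js.add("alert('Hello world');")
js.add("alert('This is an example');")
print(js.to_string())
```

The caller only supplies statement lines; the wrapping tags can no longer be mistyped, which is the whole point of the class.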
Today, while writing this article, it occurred to me: why not use a JavaScript interpreter in the Add method? That way, whenever you add a script using the Add method, it can check that you are adding valid JavaScript. So, you don't have to run the script in the browser in order to find out if it's syntactically or semantically right. It can check at compile time rather than run and fix. Please let me know if you have any suggestions to improve it further.

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here
http://www.codeproject.com/Articles/7442/Client-side-script-helper
The wcsrtombs() function is defined in the <cwchar> header file.

wcsrtombs() prototype

size_t wcsrtombs( char* dest, const wchar_t** src, size_t len, mbstate_t* ps );

The wcsrtombs() function converts the wide character string represented by *src to the corresponding multibyte character string, which is stored in the character array pointed to by dest if dest is not null. A maximum of len bytes are written to dest.

The conversion process is similar to calling wcrtomb() repeatedly. The conversion stops if:
- A wide null character was converted and stored. In this case, src is set to null and ps represents the initial shift state.
- An invalid wide character was encountered. In this case, src is set to point to the beginning of the first unconverted wide character.
- len bytes have been stored in dest. In this case, src is set to point to the beginning of the first unconverted wide character.

wcsrtombs() Parameters
- dest: Pointer to the character array where the converted multibyte characters are stored.
- src: Pointer to pointer to the first wide character to convert.
- len: Maximum number of bytes available in the dest array.
- ps: Pointer to the conversion state object.

wcsrtombs() Return value
- On success, the wcsrtombs() function returns the number of bytes written to dest, excluding the terminating null character but including shift sequences. If dest is a null pointer, it returns the number of bytes that would have been written, excluding the terminating null character.
- On conversion error, -1 is returned and errno is set to EILSEQ.

Example: How the wcsrtombs() function works
#include <cwchar>
#include <clocale>
#include <iostream>
using namespace std;

int main() {
    setlocale(LC_ALL, "en_US.utf8");
    const wchar_t* wstr = L"\u0763\u0757\u077f\u075f";
    char str[20];
    mbstate_t ps = mbstate_t();
    int len = 10;
    int retVal;
    retVal = wcsrtombs(str, &wstr, len, &ps);
    cout << "Number of multibyte characters written (excluding \"\\0\") = " << retVal << endl;
    cout << "Multibyte character = " << str << endl;
    return 0;
}

When you run the program, the output will be:

Number of multibyte characters written (excluding "\0") = 8
Multibyte character = ݣݗݿݟ
https://cdn.programiz.com/cpp-programming/library-function/cwchar/wcsrtombs
Gradient Descent is one of those topics which sometimes scares beginners and practitioners. I have seen that most people, when they hear the term Gradient, try to finish the topic without understanding the Maths behind it. In this tutorial, I will explain Gradient Descent from a very ground level, pick you up with simple maths examples, and make gradient descent completely consumable for you.

Table of Contents
- What is Gradient Descent and why is it important?
- Intuition Behind Gradient Descent
- Mathematical Formulation
- Code for Gradient Descent with 1 variable
- Gradient Descent with 2 variables
- Effect of Learning Rate
- Effect of Loss Function
- Effect of Data
- EndNote

What is Gradient Descent?

Gradient Descent is a first-order optimization technique used to find the local minimum of, or optimize, the loss function. It is also known as a parameter optimization technique.

Why Gradient Descent?

It is easy to find the value of slope and intercept using a closed-form solution, but when you work with multidimensional data, that technique is costly and takes a lot of time, so it fails there. Thus the new technique, Gradient Descent, came along, which finds the minimum very quickly.

Gradient descent is not limited to linear regression; it is an algorithm that can be applied to any machine learning problem, including linear regression and logistic regression, and it is the complete backbone of deep learning.

The intuition behind Gradient Descent

Consider I have a dataset of students containing CGPA and salary package. We have to find the best fit line that gives a minimum value of b when the loss is minimum. The loss function is defined as the squared sum of the difference between actual and predicted values.

To make the problem simple to understand, suppose the value of m is given to us and we have to predict the value of the intercept (b). So we want to find out the minimum value of b where L (loss) is minimum.
So, if we plot the graph between L and b, it will be a parabolic shape. Now in this parabola, we have to find the minimum value of b where the loss is minimum. If we used ordinary least squares, it would differentiate and equate to zero. But this is not convenient when working with high-dimensional data. So, here comes Gradient Descent. Let's get started with performing Gradient Descent.

Select a random value of b

We select any random value of b and find its corresponding L value. Now we want to converge it to the minimum. On the left side, if we increase b then we are going towards the minimum, and if we decrease it then we are going away from the minimum. On the right side, if we decrease b then we are going closer to the minimum, and on increasing we are going away from the minimum.

Now how would I know whether to go forward or backward? The answer is simple: we find the slope at the current point where we stand.

Now again the question may arise: how to find the slope? To find the slope, we differentiate the loss function, which gives the equation of the slope, and on simplifying we get the slope.

Now the direction of the slope will indicate whether you have to move forward or backward. If the slope is positive then we have to decrease b, and vice-versa. In short, we subtract the slope from the old intercept to find the new intercept.

b_new = b_old - slope

This is the basic equation of the gradient step; Gradient means derivative when you have more than one variable, such as slope and intercept.

Now, how would I know where to stop? We are going to perform this convergence step multiple times in a loop, so it is necessary to know when to stop. One more thing: if we subtract the raw slope, there is a drastic change in movement, known as a zig-zag movement. To avoid this, we multiply the slope by a very small positive number known as the learning rate.
Now the equation is:

b_new = b_old - learning_rate * slope

Hence, this is why we use the learning rate: to reduce the drastic change in step size and direction of the movement. We will see the effect and use of the learning rate further in this tutorial.

Now the question is the same: when to stop the loop? There are 2 approaches to deciding when to stop moving forward.
- When b_new - b_old = 0, it means we are not moving forward, so we can stop.
- We can limit the number of iterations, e.g. to 1000.

The number of iterations is known as epochs, and we can initialize it as a hyperparameter.

This is the intuition behind Gradient descent. We have only covered the theory part so far; now we will start the mathematics behind Gradient descent, and I am pretty sure you will get it easily.

Maths behind Gradient Descent

Consider a dataset where we do not know the initial intercept. We want to predict the minimum value of b, and for now we are considering that we know the value of m. We have to apply gradient descent only to find the value of b. The reason behind this is that understanding with one variable is easy, and further in this article we will implement the complete algorithm with both b and m.

Step-1) Start with a random b

At the start, we consider any random value of b and start iterating, finding new values for b with the help of the slope. Now suppose the learning rate is 0.001 and epochs is 1000.

Step-2) Run the iterations

for i in epochs:
    b_new = b_old - learning_rate * slope

Now we want to calculate the slope at the current value of b. So we will calculate the equation of the slope from the loss function by differentiating it with respect to b. Then you can simply put in the values and calculate the slope. The value of m is given to us, so that is easier. And we will do this until all the iterations are over. This is all there is to Gradient Descent, and this much is all you need.
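Written out for the one-variable case, the loss and the slope used in the update step are (assuming the squared-error loss defined earlier):

```latex
L(b) = \sum_{i=1}^{n} \left( y_i - m x_i - b \right)^2
\qquad\Longrightarrow\qquad
\frac{\partial L}{\partial b} = -2 \sum_{i=1}^{n} \left( y_i - m x_i - b \right)
```

Plugging the current value of b into the right-hand side gives the slope for the update b_new = b_old - learning_rate * slope.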
We have seen how gradient descent works; now let's get our hands dirty by implementing it in Python.

Getting our hands dirty: implementing gradient descent with 1 variable

Here I have created a very small dataset with four points to run gradient descent on. We will first use the ordinary least squares method to get m (which we then treat as given), and afterwards implement gradient descent for the intercept on the same dataset.

from sklearn.datasets import make_regression
import numpy as np
import matplotlib.pyplot as plt

X,y = make_regression(n_samples=4, n_features=1, n_informative=1, n_targets=1, noise=80, random_state=13)
plt.scatter(X,y)

Get the value of m with OLS

from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(X,y)
print(reg.coef_)
print(reg.intercept_)

After applying OLS we get the coefficient m as 78.35 and the intercept as 26.15. Now we will run gradient descent starting from an arbitrary value of the intercept, and you will see that after 2 to 3 iterations we get near 26.15, which is our goal. If I plot the prediction line from OLS, it looks like this:

plt.scatter(X,y)
plt.plot(X,reg.predict(X),color='red')

As the iterations increase, the line predicted by gradient descent will come to overlap the OLS line.

Iteration-1

Let's apply gradient descent keeping the slope constant at 78.35 and starting with intercept b = 0. Applying the equation, the initial prediction is:

y_pred = ((78.35 * X) + 0).reshape(4)
plt.scatter(X,y)
plt.plot(X,reg.predict(X),color='red',label='OLS')
plt.plot(X,y_pred,color='#00a65a',label='b = 0')
plt.legend()
plt.show()

This is the line when the intercept is zero. As we move forward, calculating the slope and finding a new value of b, the green line will move towards the red one.
m = 78.35
b = 0

loss_slope = -2 * np.sum(y - m*X.ravel() - b)
# Let's take learning rate = 0.1
lr = 0.1

step_size = loss_slope*lr
print(step_size)

# Calculating the new intercept
b = b - step_size
print(b)

The learning rate multiplied by the slope is known as the step size, and to calculate the new intercept we subtract the step size from the old intercept, which is exactly what we have done here. The new intercept is 20.9, so in a single step we have moved from 0 to 20.9.

Iteration-2

Now we calculate the slope again at the new intercept, and you will see it moves very close to the required intercept of 26.15. The code is the same as above.

loss_slope = -2 * np.sum(y - m*X.ravel() - b)
step_size = loss_slope*lr
b = b - step_size
print(b)

Now the intercept is 25.1, which is very near the required value. If you run one more iteration, you will get close enough that the green line overlaps the red one; on plotting, you can see the green line matching the red. From this experiment we can conclude that when we are far from the minimum we take long steps, and as we approach the minimum the steps get smaller. This is the beauty of gradient descent: even if you start from a very wrong point, say 100, you will still reach the correct point after some iterations, and this is thanks to the learning rate.

Gradient Descent for 2 Variables

Now that we understand the complete working and intuition of gradient descent, we will run it on both variables m and b, treating neither as constant.

Step-1) Initialize random values of m and b

Here we initialize any values, say m = 1 and b = 0.

Step-2) Initialize the number of epochs and the learning rate

Keep the learning rate as small as possible, say 0.01, and the number of epochs at 100.

Step-3) Calculate the slopes and update in iterations

Now we loop over the epochs, calculating both slopes and updating both parameters.
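The observation that steps shrink as we approach the minimum can be checked numerically. This sketch uses a small hypothetical dataset (not the article's make_regression data) and the same loss-slope formula as above, with m held fixed.

```python
# With m fixed, the slope of the squared-error loss w.r.t. b is
# -2 * sum(y - m*x - b).  Watch the step size shrink as b approaches
# the best intercept.
xs = [1.0, 2.0, 3.0, 4.0]          # hypothetical data points
m = 2.0
ys = [m * x + 10.0 for x in xs]    # generated so the true intercept is 10

b, lr = 0.0, 0.05
steps = []
for _ in range(5):
    slope = -2 * sum(y - m * x - b for x, y in zip(xs, ys))
    step = lr * slope
    steps.append(abs(step))
    b = b - step

print(steps)  # each step is smaller than the last
print(b)      # already close to the true intercept of 10
```

Because every residual shrinks by the same factor per iteration, the step sizes form a decreasing geometric sequence — exactly the "long steps far away, short steps nearby" behaviour described above.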
for i in range(epochs):
    b_new = b_old - learning_rate * slope_b
    m_new = m_old - learning_rate * slope_m

The update equation is the same one we derived above by differentiation; here we differentiate the loss twice, once with respect to b (the intercept) and once with respect to m. That is gradient descent. Now we will build the complete algorithm in Python for both variables.

Implement the Complete Gradient Descent Algorithm with Python

from sklearn.datasets import make_regression
X, y = make_regression(n_samples=100, n_features=1, n_informative=1, n_targets=1, noise=20, random_state=13)

This is the dataset we have created; you are free to apply OLS and check the coefficient and intercept. Let's build a gradient descent class.

class GDRegressor:
    def __init__(self, learning_rate, epochs):
        self.m = 100
        self.b = -120
        self.lr = learning_rate
        self.epochs = epochs

    def fit(self, X, y):
        # calculate b and m using GD
        for i in range(self.epochs):
            loss_slope_b = -2 * np.sum(y - self.m * X.ravel() - self.b)
            loss_slope_m = -2 * np.sum((y - self.m * X.ravel() - self.b)*X.ravel())
            self.b = self.b - (self.lr * loss_slope_b)
            self.m = self.m - (self.lr * loss_slope_m)
        print(self.m, self.b)

    def predict(self, X):
        return self.m * X + self.b

# create object and check algorithm
gd = GDRegressor(0.001, 50)
gd.fit(X, y)

Hence we have implemented gradient descent completely from scratch.

Effect of Learning Rate

The learning rate is a crucial parameter in gradient descent and should be chosen wisely, usually after experimenting two or three times. If you use a very high learning rate you will never converge: the iterate will bounce back and forth between the positive and negative sides of the minimum. The learning rate is therefore set to a small value so that the algorithm converges.

Effect of Loss Function

Besides the learning rate, the loss function also affects gradient descent. Throughout this article we have used mean squared error, which is a simple and widely used loss function.
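The GDRegressor class above depends on sklearn and NumPy; the same fit loop can be verified with nothing but the standard library. This is a sketch on a small hypothetical dataset, using gradients averaged over the n samples so the learning rate is easy to choose.

```python
# Gradient descent on both m and b in pure Python.
# Gradients of the mean squared error, averaged over n samples:
#   dL/db = -2/n * sum(y - m*x - b)
#   dL/dm = -2/n * sum((y - m*x - b) * x)
def fit(xs, ys, lr=0.05, epochs=2000):
    m, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        residuals = [y - (m * x + b) for x, y in zip(xs, ys)]
        slope_b = -2.0 / n * sum(residuals)
        slope_m = -2.0 / n * sum(r * x for r, x in zip(residuals, xs))
        b -= lr * slope_b
        m -= lr * slope_m
    return m, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [5.0 + 3.0 * x for x in xs]   # generated from y = 3x + 5, no noise
m, b = fit(xs, ys)
print(m, b)  # approaches m = 3, b = 5
```

Averaging the gradient by n is a small deviation from the article's un-averaged sums; it only rescales the effective learning rate, which is why a fixed lr of 0.05 works here regardless of dataset size.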
This loss function is convex. A convex function is one where, if you draw a line segment between any two points on the curve, the segment never crosses the function. With a convex loss there is only one minimum, so gradient descent is guaranteed to find it.

Effect of Data

Data affects the running time of gradient descent. If all the features are on a common scale, it converges very fast and the contour plot is nearly circular. But if the feature scales are very different, convergence takes much longer and you get a flatter, elongated contour.

EndNote

We have learned gradient descent from the ground up and built it with one as well as two variables. The beauty of it is that it gets you to the correct point even if you start from a weird one. Gradient descent is used across machine learning, including linear and logistic regression, PCA, and ensemble techniques. I hope it was easy for you to follow each point we discussed. If you have any query, please leave a comment; I will be happy to help. If you like my article, please have a look at my other articles through this link.

About the Author

Raghav Agrawal

I am pursuing my Bachelor of Technology (B.Tech) in Information Technology. I am an enthusiastic learner and very fond of data science and machine learning. Please feel free to connect with me on LinkedIn.
https://www.analyticsvidhya.com/blog/2021/05/gradient-descent-from-scratch-complete-intuition/
Collections and types are a safe way to ensure that the developer's code is strongly typed. Strongly typed code avoids the need for casting operations and produces compile-time errors instead of runtime errors. Using collections and types in a .NET assembly reduces the time and memory required to transform inputs. Collections also have the added attraction of being enumerable, and properties such as the number of items and a collection iterator allow for simple implementation. Anupam Banerji explains their use.

.NET developers must often convert data types. A string from a textbox control or a console input may have to be converted into an integer, double, or a Boolean value type. Such a type may then be used in a collection of objects. Often custom reference types must be created in order to store various properties of an object, and a large number of these objects might be needed for sorting or storing. Types and collections provide an efficient implementation technique.

A type is a piece of data stored in memory. There are two kinds of type. Value types are built-in .NET data types: integers, doubles, decimals, floats, Booleans, characters, date/time values and bytes are all .NET data types. Value types are stored in an area called the stack, a Last In First Out (LIFO) storage space. Access to the stack is quick, and the types stored there are indexed. Reference types are user-defined types (.NET supplies many pre-built reference types) stored in an area of memory known as the heap. Objects, strings, arrays and streams are .NET-supplied reference types. All types (value and reference) derive from the object type, the simplest data type in the .NET Common Type System (CTS). A user-defined type may contain both CTS value and reference types.
A simple user defined type is the enumeration: private enum ExampleEnum { firstItem, secondItem } The enumeration above provides an integer based index which allows the Developer to assign names to integer indexes. Using enumerations makes implementation easier and safer. The simplest user defined type is the structure: struct ExampleStr { public string str; public object o; public Exception e; } The structure is then used by declaring a variable of the type: ExampleStr ex = new ExampleStr(); A third way to hold data is the class type. Classes are objects containing types and other code and are a key building block in object oriented design patterns. Classes are a reference type; they are instanced the same way as structures and have constructors. Classes are different to structures in that they can derive base classes, implement interfaces, and in .NET, control their disposal through implemented garbage collection. A collection is a group of types. Several types of collection exist. These are array lists, queues, stacks, string collections and bit arrays. There are also generic collections such as SortedList<T,U>, Queue<T>, Stack<T> and List<T>. When the underlying data type implements the IComparable interface, the generic collection may be sorted through implemented methods. Why are types and collections necessary? It is possible to cast objects as CTS types and recast them into useful types when required. There are a few reasons why instancing common types and recasting them into required types should be avoided. The first reason is the amount of time required to cast a CTS type into another. A boxing operation occurs when a value type is cast into the object type. An un-boxing operation occurs when an object type is cast into a value type. Boxing and un-boxing operations result in a performance penalty. If there are thousands of such operations in your code, then there will be a noticeable performance drop. There is another issue. 
An incorrectly boxed or unboxed object throws an invalid cast exception at runtime. Correctly declaring the inputs to the method at design time would instead produce a compile-time (and not runtime) error. We will now build a simple sortable class and a generic collection to demonstrate some of the concepts introduced in this article. We create a class that implements the IComparable interface, including its CompareTo() method:

using System.Collections;
using System.Collections.Generic;

public class Sortable : IComparable
{
    private string name;
    private int price;

    public Sortable() { }

    public void Add(string _name, int _index)
    {
        name = _name;
        price = _index;
    }

    #region IComparable Members
    int IComparable.CompareTo(object obj)
    {
        return price.CompareTo(((Sortable)obj).price);
    }
    #endregion
}

The class stores an item with a name and a price. The interface implementation compares the prices and returns an integer: zero if the prices are identical, less than zero if the price is less than the compared price, and greater than zero if the price is greater than the compared price. The object being compared must be cast back into the Sortable class before any comparison between the prices can be made. This is expected; the implemented interface cannot account for custom objects. We now create the main function. The collection of items is stored in a List<T> object:

List<Sortable> list = new List<Sortable>();

We then add items to the generic list collection. Once this collection has been filled, we can sort the list:

list.Sort();

Or reverse the sort order:

list.Reverse();

The sorting functions use the IComparable interface implemented in the Sortable class to compare the sort values.
We could easily sort by the name property instead:

#region IComparable Members
int IComparable.CompareTo(object obj)
{
    return name.CompareTo(((Sortable)obj).name);
}
#endregion

The List<T> collection contains generic versions of several of the ArrayList object's methods. We can likewise map the Stack<T> generic to the Stack object and the Queue<T> collection to the Queue object. Stacks and queues are not sorted. A stack is a Last In First Out (LIFO) model, and a queue is a First In First Out (FIFO) model. Stacks use the Push() and Pop() methods to add and remove items, and queues use the Enqueue() and Dequeue() methods to add and remove items. Types and collections are a powerful, safe and effective way to process large groups of data. Implementing type-safe code results in fewer runtime errors and allows the developer to reuse pre-built .NET interfaces and methods. There are fewer implementation bugs, and the code algorithms are widely understood. Using types and collections is therefore recommended as a standard development practice.
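The CompareTo contract (negative, zero, positive) exists in most languages. As a cross-language aside — this is an analogy, not part of the .NET article — the same sortable-by-price idea looks like this in Python, where the rich comparison protocol plays the role of IComparable:

```python
import functools

# Python's analogue of IComparable: define __eq__ and __lt__ and let
# total_ordering supply the remaining comparison operators.
@functools.total_ordering
class Sortable:
    def __init__(self, name, price):
        self.name = name
        self.price = price

    def __eq__(self, other):
        return self.price == other.price

    def __lt__(self, other):           # plays the role of CompareTo() < 0
        return self.price < other.price

items = [Sortable("b", 30), Sortable("a", 10), Sortable("c", 20)]
items.sort()                           # like List<Sortable>.Sort() in C#
print([i.name for i in items])         # ['a', 'c', 'b']
items.reverse()                        # like list.Reverse()
print([i.name for i in items])         # ['b', 'c', 'a']
```

As in the C# version, switching the sort key from price to name only requires changing the comparison method, not the sorting call sites.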
http://www.codeproject.com/Articles/93664/Types-and-Collections-in-C-3-0-NET
I have some doubts about the package structure in a Python project when I write my imports. These are some conventions: python-irodsclient_API = project name. I've defined Python packages for each file, in this case the following:

python-irodsclient_API/config/
python-irodsclient_API/connection/

Are these really well defined as packages, and not just as directories?

I have the file python-irodsclient_API/config/config.py, in which I've defined some constants for connecting to my server. And I have the python-irodsclient_API/connection/connection.py file.

In the last or previous image (highlighted in red): is this the right way to import the files? I have the feeling this way is not the best. I know that the imports should be relative, not absolute (for the path), and that it is necessary to use "." instead of "*". In my case I don't know if this can be applied to what I'm doing in the screenshots. I appreciate your help and orientation.

Best Regards

There is a good tutorial about this in the Python module docs, which explains how to refer to packages under structured folders. Basically, from x import y, where y is a submodule name, allows you to use y.z instead of x.y.z.

You have 2 options here:

1) Make your project a package. Since your connection and config packages seem to be interdependent, they should be modules within the same package. To make this happen, add an __init__.py file in the python-irodsclient_API folder. Now you can use relative imports to import config into connection, as they are part of the same package:

from ..config import config

The .. part means import from one level above within the package structure (similar to how .. means the parent directory in Unix).

2) If you don't want to make python-irodsclient_API a package for some reason, the second option is to add that folder to the PYTHONPATH.
You can do this dynamically per Tony Yang's answer, or from the bash command line as follows:

export PYTHONPATH=$PYTHONPATH:/path/to/python-irodsclient_API

I can invoke the sys module to append the python-irodsclient_API path:

import sys
sys.path.append('C:\..\python-irodsclient_API')

When you run connection.py and want to import config, the import succeeds.
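Both options can be demonstrated end to end with the standard library alone. This sketch builds a throwaway package on disk (the names `proj`, `config.py` and the `HOST` constant are hypothetical stand-ins for the python-irodsclient_API layout in the question), appends its parent directory to sys.path, and imports the submodule.

```python
import os
import sys
import tempfile

# Build a tiny package tree:  <tmp>/proj/__init__.py
#                             <tmp>/proj/config/__init__.py
#                             <tmp>/proj/config/config.py
root = tempfile.mkdtemp()
proj = os.path.join(root, "proj")
os.makedirs(os.path.join(proj, "config"))
open(os.path.join(proj, "__init__.py"), "w").close()
open(os.path.join(proj, "config", "__init__.py"), "w").close()
with open(os.path.join(proj, "config", "config.py"), "w") as f:
    f.write("HOST = 'example.org'\n")   # hypothetical constant

sys.path.append(root)                    # option 2: extend the search path
from proj.config import config          # works because proj is now a package
print(config.HOST)
```

The `__init__.py` files are what turn plain directories into importable packages; without them (on older Python versions especially), the `from proj.config import config` line fails.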
http://www.dlxedu.com/askdetail/3/ae05c287abe1d3fb9f458d4a72b13616.html
A class Number has been defined to find the frequency of each digit present in it and the sum of the digits, and to display the results. Some of the members of the class Number are given below:

Class name: Number
Data member:
num: integer variable to store the number.
Member functions:
Number(int n): constructor to assign n to num.
void frequency(): to find the frequency of each digit and to display it.
int sum(): to return the sum of the digits of the number.

Specify the class Number, giving details of the constructor and functions frequency() and sum(). You do not need to write the main() function.

Program:

import java.io.*;

class Number {
    private int num;

    public Number(int n) {
        num = n;
    }

    public void frequency() {
        for (int i = 0; i <= 9; i++) {
            int count = 0;
            for (int j = num; j != 0; j /= 10) {
                int digit = j % 10;
                if (i == digit)
                    count++;
            }
            if (count > 0) {
                System.out.println(i + " - " + count + " time(s).");
            }
        }
    }

    public int sum() {
        int s = 0;
        for (int i = num; i != 0; i /= 10) {
            s = s + i % 10;
        }
        return s;
    }
}
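The digit-peeling loop in the Java class (take n % 10, then divide by 10) ports directly to other languages. As a quick cross-check, here is an equivalent sketch in Python:

```python
# Digit frequency and digit sum via repeated division, as in the Java class.
def frequency(num):
    counts = {}
    n = num
    while n != 0:
        digit = n % 10          # peel off the last digit
        counts[digit] = counts.get(digit, 0) + 1
        n //= 10                # drop the last digit
    return counts

def digit_sum(num):
    s, n = 0, num
    while n != 0:
        s += n % 10
        n //= 10
    return s

print(frequency(3553))   # {3: 2, 5: 2}
print(digit_sum(3553))   # 16
```

Unlike the Java version, which scans the number once per candidate digit 0-9, this makes a single pass and tallies the digits in a dictionary; both give the same frequencies.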
https://www.happycompiler.com/class-11-number-class-2018/
In the previous article, we used RLlib's IMPALA agent to learn the Atari Breakout environment from pixels in a respectable time. Here, we will take it one step further and try to learn from the contents of the game's RAM instead of the pixels. As a software engineer, I expected the RAM environments to be easier to learn. After all, it seems likely that one location in memory would hold the x-coordinate of the bat, and two more would hold the position of the ball. If I was trying to write some code to play this game, and wasn't using machine learning, that's probably where I'd want to start. If forced to use graphics, I'd just process them to extract this information anyway, so surely it's simpler to skip right over that step. Turns out I was wrong! It's easier to learn from the images than from the RAM. Modern convolutional neural network architectures are good at extracting useful features from images. In contrast, programmers with so little memory to use were accustomed to coming up with all sorts of "neat tricks" to pack as much information into the space as possible. One byte might represent a number, or two numbers of four bits each, or eight flags...

Here is the code I used:

import ray
from ray import tune
from ray.rllib.agents.dqn import DQNTrainer

ray.shutdown()
ray.init(include_webui=False, ignore_reinit_error=True)

ENV = "Breakout-ramDeterministic-v4"
TARGET_REWARD = 200
TRAINER = DQNTrainer

tune.run(
    TRAINER,
    stop={"episode_reward_mean": TARGET_REWARD},
    config={
        "env": ENV,
        "monitor": True,
        "evaluation_num_episodes": 25,
        "double_q": True,
        "hiddens": [128],
        "num_workers": 0,
        "num_gpus": 1,
        "target_network_update_freq": 12_000,
        "lr": 5E-6,
        "adam_epsilon": 1E-5,
        "learning_starts": 150_000,
        "buffer_size": 1_500_000,
    }
)

This is the progress up to the point where I stopped the process:
So it had learnt something, and the graph suggests it was continuing to improve, but progress was very slow. In the next article, we will see how to do better. It is tempting to think that, even though Atari has only 128 bytes of memory, many of the stored values are just noise. For example, somewhere in there would be the player’s current score, and using this as an input feature won’t help the learning. So I tried to identify a subset of bits that carry useful information. By logging the observations and seeing which ones seemed to be changing meaningfully (that is, had lots of distinct values over the first hundred timesteps), I picked out the following column indexes as "interesting": 70, 71, 72, 74, 75, 90, 94, 95, 99, 101, 103, 105, and 119. Here is the code I used for training a model using only these values. I switched over to using the PPO algorithm because it seemed to perform a bit better than DQN. The interesting part is the TruncateObservation class, which simplifies the observation space from 128 bytes down to 13. 
import pyvirtualdisplay
_display = pyvirtualdisplay.Display(visible=False, size=(1400, 900))
_ = _display.start()

import ray
from ray import tune
from ray.rllib.agents.ppo import PPOTrainer

ray.shutdown()
ray.init(include_webui=False, ignore_reinit_error=True)

import numpy as np
import gym
from gym.wrappers import TransformObservation
from gym.spaces import Box
from ray.tune.registry import register_env
from gym import ObservationWrapper

class TruncateObservation(ObservationWrapper):
    interesting_columns = [70, 71, 72, 74, 75, 90, 94, 95, 99, 101, 103, 105, 119]

    def __init__(self, env):
        super().__init__(env)
        self.observation_space = Box(low=0, high=255, shape=(len(self.interesting_columns),), dtype=np.uint8)

    def observation(self, obs):
        # print(obs.tolist())  # print full observation to find interesting columns
        return obs[self.interesting_columns]  # filter

def env_creator(env_config):
    env = gym.make('Breakout-ramDeterministic-v4')
    env = TruncateObservation(env)
    return env

register_env("simpler_breakout", env_creator)

ENV = "simpler_breakout"
TARGET_REWARD = 200
TRAINER = PPOTrainer

tune.run(
    TRAINER,
    stop={"episode_reward_mean": TARGET_REWARD},
    config={
        "env": ENV,
        "num_workers": 1,
        "num_gpus": 0,
        "monitor": True,
        "evaluation_num_episodes": 25
    }
)

The learning performance looked like this:

It achieved the score of 42 after 27 hours of training, at which point I stopped the process. This looks more promising than trying to train on all the bytes. Let me know in the comments if you manage to do better than this with a slightly different subset of memory locations. For example, "Learning from the memory of Atari 2600" by Jakub Sygnowski and Henryk Michalewski calls out memory locations 95 to 105 as particularly influential. In the next article, we will see how we can improve by approaching the RAM in a slightly different way.
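The "lots of distinct values" heuristic used earlier to pick the interesting columns can be sketched as a standalone function. Everything here is illustrative — the fake trace and the threshold of 10 are assumptions, not data from the real environment.

```python
# Given a list of RAM observations (each a sequence of 128 byte values),
# keep the column indexes whose values change "a lot" over time.
def interesting_columns(observations, min_distinct=10):
    n_cols = len(observations[0])
    cols = []
    for c in range(n_cols):
        distinct = {obs[c] for obs in observations}
        if len(distinct) >= min_distinct:
            cols.append(c)
    return cols

# Fake trace: column 0 is constant, column 1 is a counter, column 2 cycles.
trace = [[7, t % 256, (t * 13) % 50] for t in range(100)]
print(interesting_columns(trace, min_distinct=10))  # [1, 2]
```

A filter this crude will keep noisy bytes (a frame counter looks just as "interesting" as the ball's x-coordinate), so in practice the candidate list still needs the manual inspection described above.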
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://codeproject.freetls.fastly.net/Articles/5271949/Learning-Breakout-From-RAM-Part-1
On Mon, Dec 15, 2008 at 9:52 PM, Andreas Tille <tillea@rki.de> wrote: > On Mon, 15 Dec 2008, Mathieu Malaterre wrote: > >> I'll check with the maintainer, otherwise I'll have to patch it myslef :( > > I commited the start of the patch - it does not yet compile because I'm > just lacking C++ knowledge. I guess the usual porting advises combined > with some C++ knowledge should be enough to fix the problem. > > The idea is to help out upstream with a patch that shows our interest > in getting things fixed quickly. This should be fixed now. The patch is pretty much self explanatory. Most changes were automatic: find . -type f -exec sed -i -e 's/#include <iostream.h>/#include <iostream>\nusing namespace std;/g' {} \; find . -type f -exec sed -i -e 's/#include <iomanip.h>/#include <iomanip>/g' {} \; find . -type f -exec sed -i -e 's/#include <fstream.h>/#include <fstream>/g' {} \; Please report if you have still any issue, thanks. -- Mathieu
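[The same three sed substitutions from the patch can be reproduced in Python when sed is not handy — a small sketch, applied to a string rather than to files on disk:]

```python
import re

# Port the three sed rules from the patch:
#   <iostream.h> -> <iostream> + using namespace std;
#   <iomanip.h>  -> <iomanip>
#   <fstream.h>  -> <fstream>
def port_headers(source):
    source = re.sub(r'#include <iostream\.h>',
                    '#include <iostream>\nusing namespace std;', source)
    source = re.sub(r'#include <iomanip\.h>', '#include <iomanip>', source)
    source = re.sub(r'#include <fstream\.h>', '#include <fstream>', source)
    return source

old = "#include <iostream.h>\n#include <fstream.h>\n"
print(port_headers(old))
```

[Like the original sed commands, this is a purely textual rewrite; code that relied on other pre-standard headers would still need manual porting.]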
https://lists.debian.org/debian-med/2008/12/msg00078.html
10 tips to help you choose the right hosting plan for your blog/arcade site

When you are about to create a blog or an arcade game site, the first thing you should consider is where your site will be hosted. It's really important. As your site's popularity grows, your server will get more and more stressed, and this may affect the site itself. An example: this blog generates about 10,000 pages/day, and every page is made of about 80kB html, 100kB images, 50kB files… that's more than 2GB/day… not to mention all the MySql queries needed to generate every WordPress page. The whole thing gets still more complicated if you want to set up an arcade site: the average game ranges from 500kB to 2MB. Now imagine serving 10,000 games/day (and that's not a huge number…) and you'll get an idea of what I am talking about. It's very important to choose the right hosting plan, and I am going to help you find the one that fits your needs.

Please note: these tips fit perfectly if you want to set up a blog or a small arcade site (and when I say "small" I mean 99% of the arcade sites in the world).

1) Forget Blogger.com, WordPress.com and all the minor free offers

Having a domain name is the only way to look professional, and if you are going to try and monetize your blog/arcade, you must have your own domain name.

2) Hosting Vs Housing

There is no reason to go for a housing plan until your hosting provider can no longer handle the resources your site is asking for. This means that if you choose your hosting plan wisely, you won't have to switch to a housing plan until you have a large amount of traffic (and revenues…).

3) Php Vs Asp

I hope this is a useless question nowadays, but there is really no reason for you to choose Asp.
Don't listen to "programmers" saying Php is for small projects… NewGrounds is made with Php, and unless you are developing the next Expedia competitor, you should choose Php. Moreover, most of the most famous free resources (such as WordPress, Phpbb and so on) are made with Php. So it's time to choose…

4) Php version

Don't trust hosting services that still offer Php 4: it's no longer under development, nor will any security updates be released. Php 5 was released more than four years ago… do you know how long four years are on the internet? There is no reason why hosting services haven't updated it by now… other than that they never will. So run away from Php 4 hostings. Choose a hosting plan with at least Php 5.2.2.

5) Disk space usage

Don't believe the "infinite" word you read in their offers: upload 10,000 DivX movies and you'll understand what I mean. Choose a hosting plan with a specific amount of space, so you know that space is guaranteed. 500GB should be enough for a long, long time.

6) Monthly bandwidth transfer

In this case, look for the "infinite" word. Again, it's not true, and in case of big traffic the hosting company will probably slow down your site, but there is nothing worse than an "Ooops: bandwidth exceeded for this month" message when you land on a page. Especially if that page is yours. Especially if it's only the 15th day of the month and you aren't able to upgrade your hosting plan in five minutes.

7) Email accounts

If you are a one man company, you will only need one email: info@yourdomain.com. I hate having to write to marketing@yourdomain.com rather than support@yourdomain.com when I know the site is maintained by one person. Let's say five email addresses are enough, so you can give one email to your little sister and make her happy. Anyway, she will keep using HotMail.

8) MySql databases

Choose a plan where you can define the names of your databases. Having databases called "Blog", "Arcade" and so on is way better than "34524" and "gdfyrty_2".
9) 24/24 customer support

Very important. There is an easy way to test it: open a ticket (or contact the support center by email) saying you can't connect to your MySql database. It's not true, but it's just a "hello world" to see how long it takes for the support team to reply…

10) Memory management

From a cute little file called php.ini, which you won't be able to edit, your hosting company can set a huge number of options, such as memory management, the maximum time to execute a script, and so on.

Where are you hosted? Are you happy? Do you need some more advice?

Create a Flash game like Snowflakes – AS3 version

As announced in Create a Flash game like Snowflakes, here is the AS3 version. I used the same comments to help you understand the conversion.

package {
	import flash.display.Sprite;
	import flash.ui.Mouse;
	import flash.events.*;
	public class snowflakesas3 extends Sprite {
		// max stars on stage
		var max_stars = 20;
		// current stars on stage
		var stars_on_stage = 0;
		// gravity
		var gravity = 0.1;
		// this is the influence distance
		// using influence and real_influence I perform the square root only once
		// when the distance from the mouse and the star is less than real_influence
		// then the star is affected by the mouse
		var influence = 625;
		var real_influence = Math.sqrt(influence);
		// friction
		var friction = 0.9;
		// divider, to make stars move slowly
		var divider = 50;
		// mouse speed
		var mouse_speed = 0;
		var playersprite:pointer = new pointer();
		public function snowflakesas3() {
			// mouse cursor replacement
			Mouse.hide();
			playersprite.addEventListener(Event.ENTER_FRAME,playersprite_enterframe);
			addChild(playersprite);
			addEventListener(Event.ENTER_FRAME,main_enterframe);
		}
		// function to be executed by the mouse pointer
		public function playersprite_enterframe(event:Event) {
			// calculating the distance from the last point the mouse was spotted
			// and the current mouse position
			var dist_x = playersprite.x-mouseX;
			var dist_y = playersprite.y-mouseY;
			mouse_speed = Math.sqrt(dist_x*dist_x+dist_y*dist_y);
			// minimum speed = minimum force applied
			if (mouse_speed<0.2) {
				mouse_speed = 0.2;
			}
			// updating pointer position
			playersprite.x = mouseX;
			playersprite.y = mouseY;
		}
		// main function
		public function main_enterframe(event:Event) {
			// should I add a star?
			if (stars_on_stage<max_stars) {
				// adding a star
				stars_on_stage++;
				var starsprite:star = new star();
				starsprite.xspeed = 0;
				starsprite.yspeed = 0;
				starsprite.x = Math.random()*450+25;
				starsprite.y = Math.random()*50-100;
				starsprite.addEventListener(Event.ENTER_FRAME,starsprite_enterframe);
				addChild(starsprite);
			}
		}
		// function the star will execute at every frame
		public function starsprite_enterframe(event:Event) {
			var current_star:star = (event.currentTarget as star);
			// gravity
			current_star.yspeed += gravity;
			// calculating the distance from the star and the mouse
			// without square roots
			var dist_x = current_star.x-mouseX;
			var dist_y = current_star.y-mouseY;
			var distance = dist_x*dist_x+dist_y*dist_y;
			// if we are in the radius of influence...
			if (distance<influence) {
				// ...apply a force to the star
				// force is determined by mouse distance and speed
				var xforce = mouse_speed*dist_x/divider;
				var yforce = mouse_speed*dist_y/divider;
				current_star.xspeed += xforce;
				current_star.yspeed += yforce;
			}
			// adding friction
			current_star.xspeed *= friction;
			current_star.yspeed *= friction;
			// updating position
			current_star.y += current_star.yspeed;
			current_star.x += current_star.xspeed;
			// make the star rotate
			current_star.rotation += (current_star.xspeed+current_star.yspeed);
			// if the star reaches the bottom of the stage, remove it
			if (current_star.y>300) {
				stars_on_stage--;
				current_star.removeEventListener(Event.ENTER_FRAME,starsprite_enterframe);
				removeChild(current_star);
			}
		}
	}
}

There is no better way to learn a new language than porting your old projects into it. I'll write a post about it... meanwhile, download the source code.

Create a Flash game like Snowflakes

Today I enjoyed a cute game called Snowflakes and I am about to show you how to create the main engine behind the game. The game is simple: a bunch of stars (snowflakes in the game, but I guess the author never saw the snow...) is falling from the sky, and you can affect their direction with your mouse, blowing them around. Only two objects in this movie, the star and the mouse pointer. This is the AS2 version; tomorrow I'll publish the AS3 one, with a line-by-line explanation of all the differences between the AS2 and AS3 versions. This will be very helpful for all the people out there that are afraid to start coding AS3. While AS2 is far from being obsolete (in my opinion), you can't ignore that AS3 exists and that it's better than AS2. Read more

Distribute your Flash games worldwide with FlashGameDistribution

If you're not a professional in marketing, one that loves to promote and sell games, pins, socks... whatever... Flash game marketing can be a hassle, or even a nightmare.
Unfortunately, if your game does not hit the frontpage of one of the most popular portals and spread virally, you'll have to manually submit your work to at least a hundred sites to earn some cash. As said, it's not a problem if you like marketing; just remember that every hour you spend on marketing cannot be spent on programming... or enjoying the summer. That's why I can't wait for FlashGameDistribution (FGD from now on) to be released. Still in a limited early beta, FGD is the latest work from the creators of FlashGameLicense (read the review) and First Impressions (read the review). FGD's goal is assisting developers in distributing their Flash games across the internet, allowing developers to easily post their games to game portals, contact game portals, and automatically distribute their games to FGD partners. The service will remain in beta until the FGD crew can count on a considerable number of portals distributing games through their API (I'll review it later) and until some minor issues are fixed. Obviously this service is not only for developers but for portal owners too... they will be able to install an API that will automate game submissions. I decided to test the service, as usual... this time on the developer side. Read more

Flash obstacle avoidance prototype

Obstacle avoidance can be very important in Flash gaming because it allows designers to create smart enemies. Obstacle avoidance behavior gives a character the ability to maneuver in a cluttered environment by dodging around obstacles. In this prototype, I'll try to simulate everyday life. In everyday life, you walk straight until an obstacle appears in your line of sight. Then you slow down and choose a random direction to get around the obstacle. If necessary, you take two or three steps back and approach the obstacle again. Well, I hope you don't act this way in your real life, but this is what we are going to do in this prototype.
It's up to you to improve it and make the movement more realistic. In the project there are 20 random obstacles, placed in the same way as seen in Create a Flash Game like Nano War, and an object linked as runner running through them. Read more

Creation of a Flash arcade site using WordPress - step 5

In Creation of a Flash arcade site using WordPress - step 4, we saw how to post a game into a WordPress database; now we'll see how to retrieve game information. It's time to parse the json feed. Where can I find the feed? At this link you will find the json feed. Just replace the xxx with your publisher id. Or use the one I am using in this example. There are various solutions according to your php settings. If you don't know how to check your php settings, refer to phpinfo() at this link.

PHP version 5.2.0 or above

If your server runs php 5.2.0 or above, you're really lucky because it provides native json support. In order to get the $mochi array as shown at lines 37-50 in Creation of a Flash arcade site using WordPress - step 4, you just need to use this script: Read more

The free sound dilemma

Here I am to introduce an interesting question asked by Pierre Urban. I think we have all asked ourselves this question, and did not answer it, for our own convenience... Here it is:

«I'm a french developer currently coding a game. Your blog helped me a lot with some really specific stuff about AS! Also about monetizing =) I really appreciate your blog but I have a problem. My game is almost finish and I have some difficulties to find some free sounds. The fact is that I'll add mochiAds ad system to my game and I would like to know how to get some fine sounds. I found two websites: and. But I don't know if I can use some sounds of these since the mochiAds will make me earn a little bit money... For freesounds, the CC licence is non-commercial use. Even if it was commercial use, I would have to add every owners of the sounds in the credit which will make a very long list.
But usually I do not see a very long list of people in the credits of flash games, hence I thought there was another way to have sounds... About the music: I found somebody who is OK to let me use a music he made on newgrounds.com. I clearly told him I will put ads with mochiAds, but he responded that if it's not commercial then it is OK... I guess he misunderstood the thing, no? Or maybe I misunderstood... Does putting some ads on a flash games makes the thing commercial? Maybe you can make a new post about how to find and use musics/sounds in a flash game which uses MochiAd? Maybe about how to create our own music/sounds using some random software? I'm almost sure that it will be really handy for many people! =) I really must have miss something because I don't get why I am getting so much trouble to put some random sounds (like gun reloading, explosions, etc.) in my game just because it uses mochiAds...»

If any music/effects composer is reading, I would like to know his answer.

Create a Flash game like Cirplosion

Do you remember Cirplosion? It was quite successful some time ago, and now it's time to create a game like it. In this tutorial we'll design the main engine. When you are going to design a game, or to write whatever script, try to explain to yourself what you are about to do. Let me try to explain in simple words what the script does:

* There are some blue orbs running everywhere with a linear motion
* You control a red orb, moving it with the mouse
* If you click and hold the mouse button, your orb starts growing
* While growing, you can't touch the stage border or other orbs, or you will return small
* When you release the mouse button, you are ready to explode
* Pressing the mouse button again will make you explode and kill all orbs you are touching
* Orbs close to the explosion will move faster from now on

That's about 75% of the original game... of course you will need to polish it and add new features.
http://www.emanueleferonato.com/2008/08/
Problem Link : How is the below solution working? I found this one in the submissions; it is relatively easier than the others. Can someone explain the intuition behind it? What I have understood is that we can move opposite to the direction s[i] in the string to get the required coordinate. Can someone give a proper explanation for this, and why are we traversing the string in the reverse direction?

#include <bits/stdc++.h>
using namespace std;
int main(){
    int t;
    cin>>t;
    while(t--){
        int n,m;
        cin>>n>>m;
        string s;
        cin>>s;
        int x=1,y=1;
        for(int i=s.size()-1;i>=0;i--){
            if(s[i]=='L'&&x<m)x++;
            if(s[i]=='R'&&x>1)x--;
            if(s[i]=='U'&&y<n)y++;
            if(s[i]=='D'&&y>1)y--;
        }
        cout<<y<<" "<<x<<endl;
    }
}
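One way to check the stated intuition concretely (my own refactor for illustration; the function name and the test values below are not from the thread): wrap the submitted loop in a function. When none of the boundary guards fire, each reversed step exactly undoes the corresponding command, so replaying s forward from the returned cell ends at (1, 1); the guards only clamp that "undo" at the grid walls.

```cpp
#include <string>
#include <utility>

// Illustrative wrapper around the submitted loop: walk s backwards,
// stepping opposite to each command, clamped to the n x m grid.
std::pair<int, int> solve(int n, int m, const std::string& s) {
    int x = 1, y = 1;                   // x = column in [1..m], y = row in [1..n]
    for (int i = (int)s.size() - 1; i >= 0; i--) {
        if (s[i] == 'L' && x < m) x++;  // undo a Left by stepping Right
        if (s[i] == 'R' && x > 1) x--;
        if (s[i] == 'U' && y < n) y++;  // undo an Up by stepping Down
        if (s[i] == 'D' && y > 1) y--;
    }
    return {y, x};                      // the submission prints "y x"
}
```

For example, solve(5, 5, "LLU") returns row 2, column 3, and replaying "LLU" forward from that cell (left, left, up) lands exactly on (1, 1), which is why the reverse traversal recovers the required coordinate.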
http://codeforces.com/blog/ayush29azad
.NET Core is finally here! On June 27th Microsoft released v1.0 of .NET Core, ASP.NET Core and Entity Framework Core, including a preview of command line tools, as well as Visual Studio and Visual Studio Code extensions to create .NET Core applications.

This article is published from the DNC Magazine for Developers and Architects. Download this magazine from here [PDF] or Subscribe to this magazine for FREE and download all previous and current editions.

Editor's Note: ASP.NET Core (previously known as ASP.NET 5) is the latest .NET stack to create modern web applications. It is a complete rewrite of ASP.NET and is open source, encourages modular programming, is cloud enabled, and is available across non-Microsoft platforms like Linux and Mac. In this stack, ASP.NET MVC, Web API and Web Pages have been unified and merged into one single framework called MVC 6. ASP.NET Web Forms is not a part of ASP.NET Core 1.0, although the WebForms team is actively maintaining the web forms framework.

We now have in our hands a cross-platform, open source and modular .NET platform that can be used to build modern web applications, libraries and services. Throughout this article, we will build a simple web application to demonstrate some of the changes in ASP.NET Core, like its new request pipeline. This will allow us to explore some of the new features and changes in action, hopefully making them easier to understand. Don't be concerned, your past experience working with the likes of MVC or Web API is still quite relevant and helpful. The main purpose of the web application will be allowing public profiles of registered users to be associated with vanity urls. That is, if I select /daniel.jimenez as my vanity url, then visitors will be able to find my profile navigating to my-awesome-site.com/daniel.jimenez.
Note: For those who are new to the concept of a vanity URL, it is a customized short URL, created to brand a website, person or item, and can be used in place of traditional longer URLs.

I hope you find this article as interesting to read as it was for me to write! We could start completely from scratch when creating a new ASP.NET Core application, but for the purposes of this article I will start from one of the templates installed in Visual Studio 2015 as part of the tooling.

Note: If you are using Visual Studio 2015 / Visual Studio Community edition, get VS2015 Update 3 first and then install the .NET Core Tools for Visual Studio.

I have chosen the ASP.NET Core Web Application template including Individual User Accounts as the authentication method.

Figure 1. New project type

Figure 2. Starting from a Web Application with Individual User Accounts

This will give us a good starting point for our website. It is also worth mentioning that the authentication will be set up using the new ASP.NET Core Identity framework, including an Entity Framework Core context for storing user accounts. If you are not using Visual Studio, you should be able to use the yeoman aspnet generators that are part of the OmniSharp project. Their templates are based on the ones included in Visual Studio, so its Web Application template provides a similar starting point.

Initialize the database

Once the new application has been created, you should be able to launch it and navigate to /Account/Register, where you will see the initial page for registering new accounts.

Figure 3. Default page for registering new accounts

If you go ahead and try to register, you will find out that your password needs at least one non-alphanumeric character, one digit and one upper case letter. You can either match those requirements or change the password options when adding the Identity services in the ConfigureServices method of the Startup class.
Just for the sake of learning, let's do the latter, and take a first look at the Startup class, where you add and configure the independent modules that our new web application is made of. In this particular case, let's change the default Identity configuration added by the project template:

services.AddIdentity<ApplicationUser, IdentityRole>(opts =>
{
    opts.Password.RequireNonAlphanumeric = false;
    opts.Password.RequireUppercase = false;
    opts.Password.RequireDigit = false;
})

Try again and this time you will see a rather helpful error page that basically reminds you to apply the migrations that initialize the identity schema:

Figure 4. Error prior to applying migrations

Let's stop the application (or the command will fail) and run the suggested command in a console from the project root folder, the one containing the *.xproj file:

>dotnet ef database update

This will initialize a new database in your SQL Server Local DB instance, including the new Identity schema. If you try to register again, it should succeed and you will have created the first user account. In Visual Studio, you can quickly open the SQL Server Object Explorer, open the localdb instance, locate the database for your web application, and view the data in the AspNetUsers table:

Figure 5. Registered user saved to localdb

Add a VanityUrl column to the schema

So far, so good. The template gave us a good starting point and we have a web application where users can register by creating an account in the database. The next step will be updating the application so users can pick a vanity url when registering. First we are going to add a new VanityUrl field to the ApplicationUser class, which is the simplest way of adding additional properties to profiles.
We will add the property as per the requirement, that is a max length of 256 characters. (If you want, you can also go ahead and add additional fields like first name, last name, DoB, etc.):

public class ApplicationUser : IdentityUser
{
    [Required, MaxLength(256)]
    public string VanityUrl { get; set; }
}

Now we need to add a new EF (Entity Framework) migration, so these schema changes get reflected in the database. In the process, we will also add a unique index over the new VanityUrl column. Since we will need to find users given their vanity url, we had better speed up those queries! To add the migration, run the following command from the project root:

>dotnet ef migrations add VanityUrlColumn

This will auto-generate a migration class, but the database won't be updated until you run the update command. Before that, make sure to update the migration with the unique index over the vanity url field:

public partial class VanityUrlColumn : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>(
            name: "VanityUrl",
            table: "AspNetUsers",
            maxLength: 256,
            nullable: false,
            defaultValue: "");
        migrationBuilder.CreateIndex(
            "IX_AspNetUsers_VanityUrl",
            "AspNetUsers",
            "VanityUrl",
            unique: true);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropColumn(
            name: "VanityUrl",
            table: "AspNetUsers");
        migrationBuilder.DropIndex("IX_AspNetUsers_VanityUrl");
    }
}

Finally, go ahead and update the database:

>dotnet ef database update

Update the Register page

Right now our Register page is broken and we are not able to create new user accounts. This makes sense, since we added the VanityUrl column as required but haven't updated the register page to capture the new field. We will fix this right now. Start by adding a new property to the existing RegisterViewModel class.
As you might expect, we will add some attributes to make it a required field, allow 3 to 256 characters, and allow only lower case letters, numbers, dashes and dots:

[Required]
[StringLength(256, ErrorMessage = "The {0} must be at least {2} and at max {1} characters long.", MinimumLength = 3)]
[RegularExpression(@"[a-z0-9\.\-]+", ErrorMessage = "Use only lower case letters, numbers, dashes and dots")]
[Display(Name = "Vanity Url")]
public string VanityUrl { get; set; }

Now update the existing view Account\Register.cshtml, adding the VanityUrl field to the form. If you have used MVC before, you might be surprised by the lack of Html helpers. They are still available, but ASP.NET Core has added tag helpers to its tool belt. With tag helpers, you can write server side rendering code targeting specific html elements (either standard html tags or your own custom tags) that will participate in creating and rendering the final HTML from a razor file. This style of writing your server side views allows for more robust and readable code, seamlessly integrating your server side rendering helpers within the html code. The razor code for the VanityUrl field will be quite similar to that of the existing fields, using the tag helpers for rendering the label, input and validation message. We will add a bit of flashiness using a bootstrap input group displaying our website's host, so users can see what their full vanity url would look like:

<div class="form-group">
    <label asp-for="VanityUrl" class="col-md-2 control-label"></label>
    <div class="col-md-10">
        <div class="input-group">
            <span class="input-group-addon">@ViewContext.HttpContext.Request.Host/</span>
            <input asp-for="VanityUrl" class="form-control" />
        </div>
        <span asp-validation-for="VanityUrl" class="text-danger"></span>
    </div>
</div>

Finally, update the Register action in the AccountController, so the VanityUrl is mapped from the RegisterViewModel to the ApplicationUser:

var user = new ApplicationUser { UserName = model.Email, VanityUrl = model.VanityUrl };

Users are now able to provide a vanity url while registering, and we will keep that vanity url together with the rest of the user data:

Figure 6.
Updated register page

If you added additional profile fields (like first name, last name or DoB) to the ApplicationUser class, follow the same steps with those properties in order to capture and save them to the database.

Right now users can create an account and enter a vanity url like /the-real-foo. The final objective will be associating those urls with a public profile page, but we will start by adding that page. It will initially be accessible only through the standard routing /controller/action/id?, leaving the handling of the vanity urls for the next section. Create a new ProfileController with a single Details action that receives an id string parameter, which should match the id in the AspNetUsers table. This means using urls like mysite.com/profile/details/b54fb19b-aaf5-4161-9680-7b825fe4f45a, which is rather far from ideal. Our vanity urls, as in mysite.com/the-real-foo, will provide a much better user experience. Next, create the Views\Profile\Details.cshtml view and return it from the controller action so you can test if the page is accessible:

public IActionResult Details(string id)
{
    return View();
}

Since we don't want to expose the ApplicationUser directly in that view (that would expose ids, password hashes etc.), create a new view model named Profile. Add any public properties from ApplicationUser that you want exposed, like the name or DoB. If you didn't add any extra properties, let's just add the UserName and the VanityUrl so we have something to show in the page. You will need to update the Profile\Details.cshtml view so it declares the new Profile class as its model and renders its properties. For the sake of brevity I will skip this; you should be able to manually write your own, or use the Visual Studio wizard for adding a new view, selecting the Details template and our new Profile class as the view model. Please check the source code in GitHub if you find any problems.
A more interesting change is required in the ProfileController, where we need to retrieve an ApplicationUser from the database given its id, and then map it to the new Profile class.

Using dependency injection in the Profile Controller

In order to retrieve an ApplicationUser from the database, the Identity framework already provides a class that can be used for that purpose, the UserManager<ApplicationUser>, which contains a FindByIdAsync method. But how do we access that class from our ProfileController? Here is where dependency injection comes in. Dependency injection has been built into ASP.NET Core, and it is used by components like the Identity framework to register and resolve their dependencies. Of course, you can also register and resolve your own components. Right now, let's use constructor injection to receive an instance of the UserManager class in our controller constructor:

private readonly UserManager<ApplicationUser> _userManager;

public ProfileController(UserManager<ApplicationUser> userManager)
{
    _userManager = userManager;
}

If you set a breakpoint, you will see an instance is being provided. This is because in your Startup class, the line services.AddIdentity() has registered that class within the dependency injection container. When an instance of your controller needs to be created, the container realizes a UserManager is needed, providing an instance of the type that was previously registered. (You would get an exception if the required type is not registered.) Now update the action so it finds the user, creates a Profile instance and passes it to the view:

public async Task<IActionResult> Details(string id)
{
    var user = await _userManager.FindByIdAsync(id);
    return View(new Profile
    {
        Name = user.UserName,
        VanityUrl = user.VanityUrl
    });
}

This completes the public profile page, although accessible only with the default routing. We will make sure that page can also be accessed using the vanity urls in the next section!

Figure 7.
Public profile page with default routing

The request pipeline in ASP.NET Core is one of the areas with the biggest number of changes. Gone is the request pipeline based on events, and gone are the HttpHandlers and HttpModules of old that closely followed IIS features. The new pipeline is leaner, composable and completely independent of the hosting solution. It is based around the concept of middleware, in which a pipeline can be composed of independent modules that receive the request, run their own logic, then call the next module. If you have worked with Express in Nodejs, this should sound very familiar!

Figure 8. New request pipeline based on middleware

After receiving the response from the next middleware, you also have a chance to run custom logic, potentially updating/inspecting the response. Calling the next module is optional, so some of these modules (like authentication) might decide to end the request earlier than usual. The ASP.NET Core framework provides a number of built-in middleware like Routing, Authentication or CORS. (In case you are wondering, MVC is dependent on the Routing middleware, wired as a route handler.) You can also create your own middleware classes or even add them inline as lambda functions. The place where the middleware components are plugged in to form the request pipeline is the Configure method of the Startup class. As you can imagine, the order in which these are added to the pipeline is critical! The Startup class of this project contains by default:

app.UseStaticFiles();
app.UseIdentity();
app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});

Adding custom middleware for the vanity urls

Ok, so we can add our own middleware to the pipeline. How will that help us with the vanity urls? The idea is to create a new piece of middleware that inspects the url before the routing middleware.
If the url has a single segment, it could be one of our vanity urls, so we will query the database for a user account with that vanity url. In case we have a match, we are going to update the path in the Request object so it matches the standard route for the public profile page. This process will happen server side, without client redirections involved.

You might be wondering why we are using middleware for this, and not the routing features. In ASP.NET Core, middleware components are the proper place to inspect and modify the request and/or response objects. This allows writing very simple and testable components following the single responsibility principle. These components can then be composed in several different ways in order to build the request processing pipeline. In our case:

In summary, when we process a url like mysite.com/the-real-foo, our middleware component will find the foo user in the database, get its ApplicationUser object which includes its id, and then update the url in the request object to be mysite.com/profile/details/b54fb19b-aaf5-4161-9680-7b825fe4f45a. After that we will call next(); to execute the next middleware in the pipeline, which means the routing middleware will be able to send the request to our ProfileController!

Figure 9.
Request pipeline including the VanityUrl middleware

Let's go ahead and create a new class VanityUrlsMiddleware inside a new Middleware folder (notice how you can use dependency injection again to receive the UserManager):

public class VanityUrlsMiddleware
{
    private readonly RequestDelegate _next;
    private readonly UserManager<ApplicationUser> _userManager;

    public VanityUrlsMiddleware(
        RequestDelegate next,
        UserManager<ApplicationUser> userManager)
    {
        _next = next;
        _userManager = userManager;
    }

    public async Task Invoke(HttpContext context)
    {
        await HandleVanityUrl(context);
        //Let the next middleware (MVC routing) handle the request
        //In case the path was updated,
        //the MVC routing will see the updated path
        await _next.Invoke(context);
    }

    private async Task HandleVanityUrl(HttpContext context)
    {
        //TODO
    }
}

Now add it to the pipeline right before adding MVC in the Configure method of the Startup class:

app.UseMiddleware<VanityUrlsMiddleware>();
app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});

If you run the application now, you should be able to set a breakpoint inside your middleware class, and it will be hit on every request. Let's finish the middleware by adding the following logic to the HandleVanityUrl method:

1. Make sure the request path has a single segment and matches the regex for vanity urls. Remember we added a regex on the RegisterViewModel to allow only lower case letters, numbers, dashes and dots? We can use the same regex here, extracting it as a constant.
2. Query the database, trying to find a user with that vanity url
3.
Replace the Path property of the Request object with the pattern profile/details/{id}

This code will look like the following:

private async Task HandleVanityUrl(HttpContext context)
{
    //get path from request
    var path = context.Request.Path.ToUriComponent();
    if (path[0] == '/')
    {
        path = path.Substring(1);
    }

    //Check if it matches the VanityUrl regex
    //(single segment, only lower case letters, dots and dashes)
    //Check accompanying sample project for more details
    if (!IsVanityUrl(path))
    {
        return;
    }

    //Check if a user with this vanity url can be found
    var user = await _userManager.Users.SingleOrDefaultAsync(
        u => u.VanityUrl.Equals(path, StringComparison.CurrentCultureIgnoreCase));
    if (user == null)
    {
        return;
    }

    //If we got this far, the url matches a vanity url,
    //which can be resolved to the profile details page.
    context.Request.Path = String.Format("/profile/details/{0}", user.Id);
}

That's it, now you have vanity urls working in your application! If you inspect the network tab of your browser, you will see there were no redirects, it all happened seamlessly on the server:

Figure 11. Requests when accessing a vanity url

You might remember that when we added the VanityUrl column to the database, we created a unique index. This means an exception is raised if you try to register a new user with a vanity url already in use. This isn't the world's best user experience, although it might be enough for this article. However, the Remote attribute is still available, which means we can quickly improve our application!

Note: If you are not familiar with previous versions of ASP.NET MVC, the Remote attribute is one of the built-in validation attributes that adds client and/or server side validations. This particular attribute adds a client side validation which will call the specified controller action.

We just need to add a new controller action that returns a JSON indicating whether the value matches an existing vanity url or not.
I decided to add this action within the ProfileController so the validation logic stays close to its use case, but the AccountController would be an equally valid option:

public async Task<JsonResult> ValidateVanityUrl(string vanityUrl)
{
    var user = await _userManager.Users.SingleOrDefaultAsync(
        u => u.VanityUrl == vanityUrl);
    return Json(user == null);
}

Then add the Remote attribute to the property in the RegisterViewModel:

..
[Remote("ValidateVanityUrl", "Profile", ErrorMessage = "This vanity url is already in use")]
…
public string VanityUrl { get; set; }

Improvements in our application

Since we have already covered a lot of ground, I have left out a few improvements that can be easily applied to this code. Feel free to get the source from GitHub and play with it:

We have just seen the tip of the iceberg in terms of what ASP.NET Core has to offer. It contains fundamental changes like its new request pipeline or built-in dependency injection, and improvements not that disruptive, such as the tag helpers. However ASP.NET Core should be very familiar and easy to get started with for developers used to the old ASP.NET. You should be able to start working with a leaner, cross-platform, composable framework without your current knowledge becoming completely obsolete. In fact, you should be able to transfer most of that knowledge. For further reading, I would recommend checking the new official documentation site and the framework GitHub page:

The source code of this article is available on GitHub as well:
https://www.dotnetcurry.com/aspnet/1312/aspnet-core-request-pipeline-vanity-url
This is the mail archive of the java@gcc.gnu.org mailing list for the Java project.

Lars Andersen wrote:
> The news section mentions that it is possible to interface an apache
> module to java thru CNI (I guess?).

Attached is the code I wrote. This is for Apache 2.0.16. It goes into the modules sub-directory. You configure with --enable-so --enable-gccsp

There are some issues with threading. I experimented with both --with-mpm=prefork and --with-mpm=threaded. You'd like to use 'threaded', both for performance and because you want all "servlets" to share a single JVM. However, then you may have to kludge various things. For example you need to make sure that gc.h is included so that the GC can be notified of thread changes. I tried #include <gc.h> in srclib/apr/pthread.h. It's been a while since I worked on this, and I don't remember what worked and what didn't.

I didn't get it far enough to be useful. Ideally, you'd want a module that can run real servlets, and that's a lot of work. If you just want to write modules in Java without using servlets it may not be as much work, and if you start with my code, at least you'll have something that can grow into a servlet container. If you succeed in producing something useful, you may need to patch some of the core (non-module) files, for example to make sure gc.h is included where it needs to be. Hopefully, Apache will be willing to accept the patches, if they're clean enough.

If you're willing to spend time on it, I give you my blessing. It could be cool to run gcj-compiled servlets (even if only a subset of the servlet spec is implemented) directly on apache. If you have any questions, feel free to ask.

--
--Per Bothner per@bothner.com

Attachment: gccsp.tgz
Description: GNU Zip compressed data

--- ./srclib/apr/threadproc/unix/signals.c~	Tue Feb 27 18:16:07 2001
+++ ./srclib/apr/threadproc/unix/signals.c	Mon Aug 27 20:51:20 2001
@@ -280,6 +280,10 @@
  * unblockable signals are included in the mask. This was first
This was first * observed on AIX and Tru64. */ +#if 0 + sigdelset(&sig_mask, SIGPWR); + sigdelset(&sig_mask, SIGXCPU); +#endif #ifdef SIGKILL sigdelset(&sig_mask, SIGKILL); #endif --- ./server/mpm/threaded/threaded.c~ Tue Apr 3 11:50:07 2001 +++ ./server/mpm/threaded/threaded.c Mon Aug 27 21:15:42 2001 @@ -146,7 +146,7 @@ * Continue through and you'll be fine.). */ -static int one_process = 0; +static int one_process = 1; #ifdef DEBUG_SIGSTOP int raise_sigstop_flags; @@ -325,7 +325,7 @@ if (sigaction(SIGINT, &sa, NULL) < 0) ap_log_error(APLOG_MARK, APLOG_WARNING, errno, ap_server_conf, "sigaction(SIGINT)"); #endif -#ifdef SIGXCPU +#if 0 sa.sa_handler = SIG_DFL; if (sigaction(SIGXCPU, &sa, NULL) < 0) ap_log_error(APLOG_MARK, APLOG_WARNING, errno, ap_server_conf, "sigaction(SIGXCPU)"); @@ -364,7 +364,7 @@ #ifdef SIGILL apr_signal(SIGILL, sig_coredump); #endif /* SIGILL */ -#ifdef SIGXCPU +#if 0 /*def SIGXCPU*/ apr_signal(SIGXCPU, SIG_DFL); #endif /* SIGXCPU */ #ifdef SIGXFSZ @@ -659,12 +659,14 @@ apr_threadattr_create(&thread_attr, pchild); apr_threadattr_detach_set(thread_attr); +#if 0 rv = apr_create_signal_thread(&thread, thread_attr, check_signal, pchild); if (rv != APR_SUCCESS) { ap_log_error(APLOG_MARK, APLOG_EMERG, rv, ap_server_conf, "Couldn't create signal thread"); clean_child_exit(APEXIT_CHILDFATAL); } +#endif for (i=0; i < ap_threads_per_child - 1; i++) { @@ -1152,7 +1154,7 @@ static int restart_num = 0; int no_detach = 0; - one_process = !!ap_exists_config_define("ONE_PROCESS"); + /*one_process = !!ap_exists_config_define("ONE_PROCESS");*/ no_detach = !!ap_exists_config_define("NO_DETACH"); /* sigh, want this only the second time around */ @@ -1183,7 +1185,7 @@ static void threaded_hooks(apr_pool_t *p) { - one_process = 0; + /*one_process = 0;*/ ap_hook_pre_config(threaded_pre_config, NULL, NULL, APR_HOOK_MIDDLE); }
http://gcc.gnu.org/ml/java/2002-02/msg00188.html
I have this assignment for my java class, it runs fine, but the professor wants us to incorporate a private gradeExam method, which I don't know how it would fit, since I cannot call it from the DriverTest class which contains my main.

package project6;

/* A demonstration of how to Create Classes and Arrays and Test them.
 * Javier Falcon
 * Cop2250-U04 Project6
 * #5 on page 529 of the textbook
 */
public class DriverExam {

    private char[] correct = {'B', 'D', 'A', 'A', 'C', 'A', 'B', 'A', 'C', 'D',
                              'B', 'C', 'D', 'A', 'D', 'C', 'C', 'B', 'D', 'A'};
    private char[] student;
    private int[] missed;
    private int numCorrect = 0;
    private int numIncorrect = 0;

    /*
     * Constructor fills student with content in s
     *
     * @param s char array filled with student answers
     */
    public DriverExam(char[] s) {
        student = s;
    }

    /**
     * This method makes sure the paper is graded.
     */
    private void gradeExam() {
        totalCorrect();
        totalIncorrect();
        passed();
        questionsMissed();
    }

    /**
     * This method creates the array for the missed questions.
     */
    private void makeMissedArray() {
        int[] missed = {};
    }

    /**
     * This method determines whether you pass or fail.
     *
     * @return boolean which decides pass or fail
     */
    public boolean passed() {
        return (totalCorrect() > 14);
    }

    /**
     * This method calculates the number of correct answers.
     *
     * @return number of correct answers
     */
    public int totalCorrect() {
        int sameAnswers = 0;
        for (int i = 0; i < correct.length; i++) {
            if (student[i] == correct[i]) {
                sameAnswers++;
            }
        }
        return sameAnswers;
    }

    /**
     * This method calculates and returns the number wrong.
     *
     * @return number of incorrect answers
     */
    public int totalIncorrect() {
        int missedAnswers = 0;
        missedAnswers = correct.length - totalCorrect();
        return missedAnswers;
    }

    /**
     * This method determines which spots in the student array were
     * incorrect.
     *
     * @return array with numerical values of those missed
     */
    public int[] questionsMissed() {
        int size = correct.length - totalCorrect();
        makeMissedArray();
        if (size < 1)
            return missed;
        else
            missed = new int[size];
        int pos = 0;
        for (int i = 0; i < correct.length; i++) {
            if (correct[i] != student[i]) {
                missed[pos] = (i + 1);
                pos = pos + 1;
            }
        }
        return missed;
    }
}
https://www.daniweb.com/programming/software-development/threads/442090/private-method-and-their-use
The numpy.reshape() function gives a new shape to an array without changing its data.

Syntax:

numpy.reshape(array, shape, order = 'C')

In the above syntax the parameters are:

- array : [array_like] Input array
- shape : [int or tuples of int] e.g. if we are arranging an array with 10 elements then shaping it like numpy.reshape(4, 8) is wrong; we can do numpy.reshape(2, 5) or numpy.reshape(5, 2)

The function returns the reshaped array without changing the data.

The arange([start,] stop[, step,][, dtype]) function returns an array with evenly spaced elements as per the interval. The interval mentioned is half-open, i.e. [start, stop)

Parameters:

- start : [optional] start of interval range. By default start = 0
- stop : end of interval range
- step : [optional] step size of interval. By default step size = 1. For any output out, this is the distance between two adjacent values, out[i+1] - out[i].
- dtype : type of output array

Return: Array of evenly spaced values. Length of array being generated = Ceil((Stop - Start) / Step)

Example

import numpy as np

# array = geek.arrange(8)  # wrong: the numpy module has no attribute
# 'arrange' (a misspelling of 'arange'; 'geek' would also need to be
# the import alias)

array1 = np.arange(8)
print("Original array : \n", array1)

# shape array with 2 rows and 4 columns
array2 = np.arange(8).reshape(2, 4)
print("\narray reshaped with 2 rows and 4 columns : \n", array2)

# shape array with 4 rows and 2 columns
array3 = np.arange(8).reshape(4, 2)
print("\narray reshaped with 4 rows and 2 columns : \n", array3)

# Constructs 3D array
array4 = np.arange(8).reshape(2, 2, 2)
print("\nOriginal array reshaped to 3D : \n", array4)

Output

Original array :
 [0 1 2 3 4 5 6 7]

array reshaped with 2 rows and 4 columns :
 [[0 1 2 3]
 [4 5 6 7]]

array reshaped with 4 rows and 2 columns :
 [[0 1]
 [2 3]
 [4 5]
 [6 7]]

Original array reshaped to 3D :
 [[[0 1]
  [2 3]]

 [[4 5]
  [6 7]]]
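Two reshape features that the syntax line above mentions but never demonstrates are the order argument and the -1 placeholder for an inferred dimension. A small supplementary example (not part of the original tutorial, assuming NumPy is installed):

```python
import numpy as np

arr = np.arange(8)

# -1 lets NumPy infer one dimension from the array's total length
inferred = arr.reshape(4, -1)   # 8 elements / 4 rows -> shape (4, 2)

# order='F' fills columns first (Fortran order) instead of rows
fortran = arr.reshape(2, 4, order='F')

print(inferred.shape)   # (4, 2)
print(fortran)          # [[0 2 4 6]
                        #  [1 3 5 7]]
```

Only one dimension may be -1; NumPy raises a ValueError if the remaining dimensions do not divide the array's size evenly.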
http://www.tutorialtpoint.net/2021/12/data-reshaping-in-python.html
def pali(a):
    return str(a) == str(a)[::-1]

print('Biggest palindromic product of two three digit integers is %i, which is %i * %i.'
      % max((a*b, a, b)
            for a in range(999, 100, -1)
            for b in range(a, 100000//a, -1)
            if pali(a*b)))

""" Output:
Biggest palindromic product of two three digit integers is 906609, which is 993 * 913.
"""
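As a cross-check of the pruned one-liner above, here is a slower but more obviously correct version with a plain double loop over all three-digit factor pairs (function names here are mine, not from the original post):

```python
def is_palindrome(n):
    """A number is a palindrome if its decimal digits read the same reversed."""
    s = str(n)
    return s == s[::-1]

def largest_palindrome_product(lo=100, hi=999):
    """Exhaustive O(n^2) scan over all factor pairs; no pruning tricks."""
    best = (0, 0, 0)
    for a in range(lo, hi + 1):
        for b in range(a, hi + 1):  # b >= a avoids checking each pair twice
            p = a * b
            if p > best[0] and is_palindrome(p):
                best = (p, a, b)
    return best

print(largest_palindrome_product())  # (906609, 913, 993)
```

This confirms the snippet's answer of 906609 = 913 * 993 without relying on the 100000//a pruning bound.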
https://www.daniweb.com/programming/software-development/code/432797/brute-force-check-for-largest-palindromic-product-of-three-number-integers
Categories (SeaMonkey :: UI Design, enhancement, P3) Tracking (Not tracked) RESOLVED DUPLICATE of bug 124029 mozilla1.8alpha1 People (Reporter: saska, Assigned: BenB) References Details (Keywords: helpwanted, meta). Status: NEW → RESOLVED Closed: 23 years ago Resolution: --- → WONTFIX Status: RESOLVED → VERIFIED saska@acc.umu.se, this is a great question to ask on the Newsgroups. (For more information, see and "Community".) I'm not (personally) aware of any plans to implement it on 5.x. Now, if *you'd* like to implement it... Status: VERIFIED → REOPENED elig, why is this won't fix??? Reopening. Resolution: WONTFIX → --- Rationale: Saska had plans to work on this. Maybe, he didn't assign it to him, because he wasn't sure, if he really will implement this. Assignee: don → saska Status: REOPENED → NEW Summary: roaming access - download bookmarks/cookies/history/etc from a central repository → roaming access - keep bookmarks/cookies/history/etc in a central repository My first opening of this bug maybe didn't make it clear that I am willing to look into this, but I'm currently only investigating it since I'm not sure if I'm capable of implementing this. I'll assign this to myself, but anyone is free to take over it if they feel appropriate since I don't expect to be able to contribute much for some time ahead. see also bug #17817, which is a feature suggestion for a smarter version of this. sorry, typo. that should be bug #17917 Status: NEW → ASSIGNED Summary: roaming access - keep bookmarks/cookies/history/etc in a central repository → [RFE] roaming access - keep bookmarks/cookies/history/etc in a central repository Another idea: What about an #include-like directive on the prefs which says get other info from LDAP/X500/NIS/NIS+ ? This would allow per-group settings... How about an option to use the Application Configuration Access Protocol (ACAP), in addition to HTTP and LDAP, to store the configuration information? This protocol is designed for exactly this purpose. 
More info on ACAP is here: yes, there was a (positive) discussion about ACAP recently on n.p.m.mail-news. adding myself to CC. I worked with (though not on) the 4.x roaming, and I have had some experience with roaming-type-stuff. I really like the idea of using ACAP for this. LDAP (4.x's preferred roaming store) is just not suitable for this. Putting on helpwanted keyword radar. At this time a net community engineer would need to implement this. Keywords: helpwanted QA Contact: leger → saska Saska, could you please update the state? No roaming in Mozilla would be a real bummer. If we want to lobby for a implementation by Netscape, we will have to do that now. But, of course, it would be nicer, if you could implement it :). I'm sure, Netscape employees would help you, where needed. This is definitely one of the most interesting projects, but it doesn't help when there's a lack of time. There have been too much changes in my personal life to even have a chance to give this a shot. I moved to another country and have a full time job now :/. But I have managed to have a look at the ACAP protocol. There is a free test server at "acap.andrew.cmu.edu" 674 where you can authenticate using the Anonymous SASL Mechanism. ('a authenticate "anonymous" "test"') ACAP looks like the right way to go. (I changed email btw.) Assignee: saska → markush Status: ASSIGNED → NEW QA Contact: saska → markush Summary: [RFE] roaming access - keep bookmarks/cookies/history/etc in a central repository → [RFE] roaming access / ACAP - keep bookmarks/cookies/history/etc in a central repository Saska, can you give a timeline for implementing this? M17 (due in ~2,5 months) is planned to be feature complete (if that still stands). Even if this doesn't get in the final release (ACAP, LDAP, HTTP or not), it would be useful to consider other ways to make roaming more easily possible in a LAN/WAN environment. 
Currently 4.x on Win32 is rather unfriendly to this, as you can't merely throw the user profiles on a network drive and pick up and go to another computer and easily get them, since nsreg.dat must contain the profile data, and it can't be moved from the system's local Windows directory. There are workarounds for this, but they're rather clumsy and can be a deterrent on large deployments. I fear that Mozilla is going down the same path, given what appears to be stored in mozregistry.dat. The goal would be to create a better way to allow network roaming, perhaps storing mozregistry.dat, or at least its profile location data, within the user profile directories. If this should be branched into a different bug number, someone holler and I'll open a new one. This request is similar, but not identical, to bug 9556, though the fix may be the same.

khecht, yes, this is the same goal, but a completely different method and thus a different bug/feature. Please file a new bug and post the number here.

Ben, I'm afraid I can't give a timeline. Not under these circumstances. But I will look at it once more and see if I could get started with it (though it might become quite time consuming).

Help still wanted!

Saska, what kind of help do you need? Maybe we can break this bug up into several others (see below), so the work is spread across several developers to speed up implementation. Maybe - UI - Backend - Protocol Handlers - ACAP? (Other suggestions welcome.) What part(s) do you want? If you need information, e.g. what is the best place in the code to plug in your changes, Mozilla developers (including Netscape employees) will be happy to help you (fast, hopefully). If not, tell me.

Oops, the bug break-up: We can turn this bug into a "tracking bug", i.e. its purpose turns into solely holding dependencies to other bugs.

Ben, that sounds good. The two main parts are accessing the preferences in mozilla and the acap protocol.
I hope it will be sufficient to implement only a subset of the acap protocol (only the parts mozilla needs), without breaking protocol compliance of course. And as you said, pointers to appropriate places to "hook" the code probably would increase the chance of someone getting started with this.

This RFE still needs someone who has some time.

OK, turning this bug into a tracking bug. M17.

saska, asking module owners for the big picture (if you can't find docs, which is likely :-( ) is a good start. They can also tell you owners of more specific parts/topics. See <> and <> for module owners, e.g. gagan@netscape.com for Necko (or netlib or network library or netwerk) and matt@netscape.com for Preferences UI. Assign this bug to nobody@mozilla.org, if you want to give it away. You're still free to help and/or take one of the new subparts. I'm also officially founding the "roaming access lobby association" :-). Vote for this bug and tell your friends about it, if you think it is important.

Reassigning to nobody@mozilla.org for now. Not giving up though. Assignee: markush → nobody QA Contact: markush → nobody

Rather than implementing ACAP, an easier way would be to just subclass (or even extend) nsIFile to allow interaction with remote files over HTTP and/or FTP. Then stuff like bookmark files and .newsrc files could simply be specified as complete URLs. The only parts of the core code, then, which would require much work would be places where the callers are assuming that nsIFile will return at disk rather than network speeds.

It depends on how you implement it. The problem is that network operations are generally asynchronous, and nsIFile operations are generally synchronous. Most consumers of nsIFile objects make the assumption that calling into nsIFile will return their data immediately, without locking up the browser.

I think that as this feature is implemented, it's important to maintain flexibility for the user community.
This means that while ACAP is a promising protocol to use, HTTP is easy to implement and hopefully can be included in the final release (like 4.5x is now w/ LDAP and HTTP). I'd like to make a plea to the implementors to allow roaming via HTTPS (Secure HTTP) -- this is broken in 4.5x.

Certainly, the design phase of this should result in something that accommodates any number of underlying technologies available now, e.g., ACAP, LDAP, local store, and HTTP, and be expandable to include whatever might come around in the future. An abstraction layer that mozilla interacts with (an interface, the associated IDL) should be able to achieve this flexibility if done correctly.

Summary: [RFE] roaming access / ACAP - keep bookmarks/cookies/history/etc in a central repository → [RFE] Roaming access - keep bookmarks/cookies/history/etc in a central repository

It would be really great if you could set an option in the config files so that a remote profile has to be loaded upon starting mozilla. Useful in computer labs or places where many users share several computers.

The IMSP protocol would be another possible implementation, at least for the mail/addressbook.

I note that Carnegie Mellon just made a public release of an ACAP server with complete functionality on comp.mail.imap. ACAP really is a nice protocol, and it would be great to have Mozilla support it, if it comes down to a choice of technologies to use in order to make this happen.

the problem is not choosing a technology, it's getting the resources to add this to mozilla :( But I like ACAP, and CMU has been producing this server for a few years now, I think it's pretty good.

Taking QA contact since it is currently nobody@mozilla.org. QA Contact: nobody → David

People have tossed around various ideas on this problem, but I think at the minimum Mozilla should have the same roaming capabilities as Netscape 4.7x.
At the minimum, LDAP and HTTP should be supported protocols with the following stored items: bookmarks, cookies, mail filters, address book, user preferences, history, Java security, certificates and private keys. Other protocols and stored items could be supported, but I've already seen a number of people complaining about the lack of what's already implemented in Netscape. Assuming this won't make it into an initial release of either Mozilla or Netscape 6, what can we do to get this moved to the top of the heap for the next go 'round? Is it just a matter of voting?

it's a matter of finding (or funding) the right people to work on the problem. it's a big problem, would probably require 2-3 engineers a few months to get it right.

This would be great to have by mozilla1.0. Setting the Milestone for nobody since he's such a busy guy to do it himself. :) Keywords: 4xp, mozilla1.0 Target Milestone: M17 → mozilla1.0 Target Milestone: mozilla1.0 → ---

David, I think nobody prefers to have all his bugs set to Milestone "---". :-)

Yes, in fact please don't change the milestone or priority fields on a bug unless it's assigned to you to fix. Thanks for adding the mozilla1.0 keyword to keep this bug on the radar. The target milestone gets set when we find someone to sign up for this work: that way we can query for keyword contains mozilla1.0 and milestone != mozilla1.0 to find unscheduled work.

yes, and given the magnitude of this feature and the fact that nobody is working on it, I think the chances of this being fixed by mozilla 1.0 are about 0.

I think that ACAP is a bad idea because:
- it is not as widespread or standardized as HTTP or LDAP;
- c'mon folks, the code DOES exist somewhere in the NS4.x base and it seems like if you want to see this feature at all, we should at least start from that;
- by looking at the "how acap compares to..." document (), the designers of acap didn't have a very profound understanding of the LDAP or HTTP approaches.
some examples taken from the acap vs ldap comparison:

"Directory servers don't have per-user quotas for control of option storage space; ACAP does." Well, that's exactly one of the uses of ldap schema attribute constraints.

"In other words, if a client were to have a need for a non-pre-defined namespace or storagespace on a server, the server's administrator would have to re-define a field in the database." Get around that "problem" with a smart and flexible schema (hey, it worked for NS4)!!

"Rather than having to know something about the server's view of the universe, the option space in ACAP is free and open to unforseen uses (allowing for namespace conventions)." Same thing.

As for HTTP, just go see. Again, IT WORKED BEFORE, why not start from there? BTW, if netscape is willing to commit the roaming code for 4 (or is it already in the cvs?), i'm willing to give it a look, and report...

How can we at least start defining the requirements for this? Should a new module be started? We should at least be able to define the interfaces to a roaming module regardless of whether we can actually get code from Netscape for the old support. Ideally, this should be a new ground-up implementation that would front several "pluggable" storage and retrieval mechanisms (at the minimum HTTP and LDAP to give the functionality of Netscape 4.x). In fact, I can really only think of two valid calls to the module: store() and retrieve(). Assuming that everything is put together properly, it should make it easy for additional storage mechanisms (syncml, etc.) to be added at a later date.

Would it be possible to allow roaming profile downloads through a proxy/firewall system? This is a major problem I encounter when travelling and trying to get my profile from internet cafe systems... Also how about download of profiles using https???

Let's move the general back-and-forth to the netscape.public.mozilla.general newsgroup so that we get more visibility for this.
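The store()/retrieve() idea above could be sketched roughly like this. This is a hypothetical Python illustration of a pluggable back end (real Mozilla code would be C++/XPCOM, and every class and method name here is invented for the sketch):

```python
from abc import ABC, abstractmethod

class RoamingStore(ABC):
    """Abstract roaming back end: concrete subclasses could speak
    HTTP, LDAP, ACAP, etc. Only two operations are required."""

    @abstractmethod
    def store(self, name: str, data: bytes) -> None:
        """Upload one named profile item."""

    @abstractmethod
    def retrieve(self, name: str) -> bytes:
        """Download one named profile item."""

class MemoryStore(RoamingStore):
    """In-memory stand-in back end; a real one would talk to a server."""
    def __init__(self):
        self._items = {}

    def store(self, name, data):
        self._items[name] = data

    def retrieve(self, name):
        return self._items[name]

def sync_profile(store, files):
    """Push a dict of profile files (bookmarks, prefs, ...) to the back end."""
    for name, data in files.items():
        store.store(name, data)

backend = MemoryStore()
sync_profile(backend, {"bookmarks.html": b"<dl>...</dl>"})
print(backend.retrieve("bookmarks.html"))
```

The point of the abstraction is that the browser only ever calls store() and retrieve(); adding a new transport (syncml, DAV, whatever) means writing one new subclass, not touching the callers.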
If you post a new message, try to reference the bug number in case anyone is interested in the history.

I think XBEL is the shapeliest solution.

So there seems to be a lot of folks interested in seeing "roaming" happen, yet I cannot find any actual activity being done to manifest this feature. I've searched through the roadmaps, module owners, projects, alternative projects and newsgroups. I would be interested in helping out in making roaming happen with what limited time and knowledge about Mozilla I have. But I would like to find others who are also interested but might have a clue as to how this would fit into the existing Mozilla structure. Is there a place to discuss such things? Is there anyone out there who is familiar enough with Mozilla that could give some pointers or could architect the overall design? Then others could start working on it... Maybe do it on a tiered plan where we can do something quick and simple and get it out, but with a roadmap to something more sophisticated and robust. If there is no one already focused on pulling this together, I would be glad to at least try to coordinate an effort...

For sure - everyone is willing to help, but nobody is willing to start coding. That's been the state of this for almost 2 years now :)

Everything related to "roaming" support should be part of bug 17048. Additionally, these discussions should happen in the newsgroups.

The thing that has messed this item up is that everyone wants to use something other than what is necessary. See the previous info as to why ACAP should not be considered. We should start with LDAP as Netscape already has code for it. Please move this discussion to the netscape.public.mozilla.prefs newsgroups so we can talk about it without cluttering the tracker. TIA.

netscape has barely any code for ldap. we should come up with an abstract roaming layer, and then build ldap, acap, and http implementations of each one.
Added to CC.

I agree with Alec completely, that is the optimal way to go about it. There has been some pre-implementation brainstorming a month or so ago in the .prefs newsgroup; folks interested in this bug may wish to check that out.

Assigning to self for future work. Assignee: nobody → jpm

As you consider different protocols, think about adding the jabber protocol to the list. Jabber started as an XML based instant messaging protocol but has grown into a generic XML routing protocol. I think it would be perfect for this. It has the ability to store XML information publicly and privately on a jabber server. There is even an avatar enhancement proposal going on in it now. You could show a user's avatar in the profile :) Another thing to consider is that there is already Jabberzilla in the works, which is a mozilla based IM client for jabber. It is over at mozdev. Another thing to consider about using jabber is that the dotGNU project is considering using it as well. The dotGNU project is: DotGNU will be a complete replacement for the .NET strategy - it will not be a Free Software implementation of .NET. While .NET has some very sound ideas, problems arise with its implementation, especially with the Authentication/Authorization systems which are centralized to Microsoft, and with Microsoft's vision for "web services". Roaming seems to me like it is a very good first step "web service".

individual protocols should be listed in new bugs. This bug is just for the architectural design of roaming, with pluggable protocols.

*** Bug 100366 has been marked as a duplicate of this bug. ***

A very simple way to satisfy about 80% of the needs of roaming-profile users is to make mozilla aware of changes on disk. That way one can effectively implement shared bookmarks with a little script that copies bookmarks over the network. Right now you can *remove* the bookmark file while mozilla is running, and mozilla will take no notice.
It's (hopefully) very simple to do a:

stat(bookmarks.html)
if (bookmarks has changed) reload bookmarks

on a regular basis (i.e. at the end of the main event loop). It would also be necessary for Mozilla to write out bookmark changes as soon as they are made (not sure if it does this already...). WindowMaker does this with its menus, and it's quite nice.

Good point; that should probably be RFE'd as a separate bug.

marking nsenterprise-; will be reevaluated for nsenterprise in a future release. Keywords: nsenterprise-

*** Bug 110087 has been marked as a duplicate of this bug. ***

For those who are interested: Ben's suggestion is already in bugzilla: #78072

sorry, let me make that a link: bug #78072

and when I say "Ben", I mean "Bob", in comment 63. Sorry about that. Okay, enough spam from me.

With the recent focus on "mobility" I don't see that awareness of changes in one file helps much, if I understand how Bob means that 80 percent of the need would be handled by this one change. I also don't know that I'd like the impact on a file server that would result--again, provided I understand Bob's meaning. Centralizing mail would be important for organizations using NS as the mail client, and this is a case where roaming profiles would be the answer. Perhaps this is in the 20 percent uncovered by Bob's suggestion. ;-)

Component: Browser-General → XP Apps Scott, please find a new home for these bugs. Thx! Assignee: jpm → putterman

what about DAV for roaming? () ... today i played with the DAV module in the apache2 dist. it is great. it would be easy (for admins like me) to get roaming running because a lot of server software is starting to support DAV. it would not require people to install a complete new set of servers like acap or stuff like that. the second important thing is that it uses no special tcp ports - in times where firewalls are implemented every day it easily could happen that people cannot access their profiles.
the files would even be accessible for admin scripts, for example if someone would like to do a profile-value change on a >1000 userbase. therefore cool admin interfaces - master consoles - can be built with remote access, and the arguments for solutions like exchange are vanishing. i am no coder but an admin with php/apache skills - we use a "self-built", very messy roaming for netscape 4.7 (via samba macros) for a ~900 userbase. since i want to migrate to mozilla (if roaming works) i would help in terms of testing and stuff if someone is interested. Aleksander

-- they don't really seem related to me.

Maybe Aleksander is thinking of Windows 'roaming profiles' which involves copying an obscene amount of data from a server to a local location when you login to a windows network machine. The profile would have to be movable for that. This feature lets you avoid those kinds of kludges, and could work anywhere on the net.

Why not just use the stuff from 4.x instead of suggesting (and not implementing) all sorts of grand schemes; jabber, DAV, whatnot; it worked in 4.x and there are numerous (proven) ways of implementing it on the server side. Of course a new module that can do all sorts of magic would be nice but just re-introducing the 4.x roaming would be sufficient for a lot of users. Where is that source code? Did it never get released from the old netscape codebase? I cannot find it in any Mozilla source tree.

there's no way we're ever going to resurrect the 4.x roaming code. mozilla has diverged too much.. sorry.

------- Additional Comments From alecf@netscape.com 2002-02-05 17:36 -------
there's no way we're ever going to resurrect the 4.x roaming code. mozilla has diverged too much.. sorry.

Sorry to flame, but it's justified here. You guys have been dragging your feet on this issue for years now. This should not be that hard.
In the meantime, most of your potential user base has been forced to IE because Mozilla/NS6 still has only a fraction of the capability of NS4. We don't WANT fancy be-all and end-all solutions that are too late to do any good - we just want it to WORK LIKE IT DID! The developers have consistently refused to even consider using the 4.x code, preferring grandiose visions while simultaneously complaining there aren't enough people to implement them. Finally, at least at a superficial level, it appears that roaming functionality should be relatively contained (I admit to not having looked at the code, and not being a good coder in any case.) Based simply on what it does, it doesn't appear to be deeply intertwined with the 4.x codebase. It sure looks to most of us as if this is just something the NS6/Mozilla team does not care about and has not taken seriously in the entire 2+ years this bug has existed. Your comment that "the code has diverged too much" is interesting, but if you have actually looked at the effort required to move it, you are the first in the history of this bug - a read through the comments suggests that no one has ever bothered to scope the effort required to implement the 4.x functionality. We are witnessing the abject failure of the open source development model. With every passing day, it seems Mozilla progresses on its path to elegance, persnickety geekoid perfection, and total irrelevancy.

> The developers have consistently refused to even consider using the 4.x code,
> preferring grandiose visions while simultaneously complaining there aren't
> enough people to implement them.

Have you ever looked at the 4.x code, and then looked at the Mozilla code? I don't know anybody who has ever worked on Mozilla and who proposes to resurrect the 4.x code, other than maybe as a cheatsheet. There are not only grandiose visions proposed, but also alternatives that are easier to implement.
It is fine that grandiose visions are proposed, we need to consider them. That doesn't mean that we consider them /necessary/ to fix this bug or that we get sidetracked by these ideas.

> It sure looks to most of us as if this is just something the NS6/Mozilla
> team does not care about and has not taken seriously in the entire 2+ years
> this bug has existed.

That's right. Obviously, nobody has had enough interest in it to implement it. Netscape started some effort some time ago, but I don't know about any code coming out of it. I'm not happy about it either, but I have other priorities myself. So, what? Sue us?

> We are witnessing the abject failure of the open source development model.

No. Open-Source means that *you* can go in and implement it, if you need it. Or pay somebody <> to do it. Open-source is what allows you to talk to the developers at all and watch the (non-existent, granted) progress.

I am in agreement with dub. This is ridiculous! One of the main selling points of using an NS browser was the roaming access. I work for a college and this feature is invaluable to our community. We are currently fighting a losing battle with our community, who need to use IE or NS6xx for certain applications. We cannot keep telling them to use NS 4.7 and since NS 6xx does not have roaming, we cannot use that as a reason why they should use NS instead of IE, which comes neatly packaged with their PC. Roaming is an excellent feature and I do not know why it cannot be applied to NS6xx as it was in 4.7 -- I really don't care about any more new features on it -- if I have the same exact features that are in 4.7, I would be a very happy camper. How difficult can this be? I am not a coder in this open source environment, but if the Mozilla group works with Netscape, why can't this be done?
Is it due to a lack of interest -- which it should not be, as IE does not have this feature at all -- or not enough folks to work on it? I have never posted here before, but I have been tracking this bug very closely for the last year or so, and I am quite disappointed by the brief little statement by Alec Flett...Trust me, we have 20,000+ users here on campus and they will not be very happy if they lose roaming. And we will be forced to go the MS route with IE as our standard browser.

Dub's being overly harsh (I don't think this is important enough to signify The End Of Open Source, and I don't see why people would go to IE over this issue when IE doesn't have any type of Roaming Access functionality either), but his frustration over this is something many of us share. Correct me if I'm wrong, but it looks like the Netscape folks are refusing to release the 4.x roaming module source, which doesn't make much sense. When Alec says the code has diverged too much, I have a sneaking feeling he's referring to the LDAP portion of Roaming Access, which Netscape always seemed to think would be the main way of implementing it. But I don't know anyone who uses it in LDAP mode (who even runs Netscape's web server anymore?); the preferred use by everyone I've ever talked to is through HTTP with the Apache mod_roaming module on the server side. How 4.x-specific could the HTTP part possibly be? All it does is compare the dates of two files, does a write test, and then downloads or uploads through HTTP. This seems like fairly lizard-brain stuff to me. Of course if I'm wrong, it'd be easy to prove it by releasing the code.

dub@conservor.com hit the nail right on the head: This is a *very useful* capability that was available in Netscape 4.x two years ago, that still isn't available in Mozilla/NS6, and, as far as I can tell, is likely to never be available in Mozilla/NS6.
Which is really a shame, since, as a useful capability that isn't available in IE, this feature alone could be enough to convince some people to switch to Mozilla. (I know that the *lack* of this feature alone is enough to convince several people in our group to stick with NS4 instead of upgrading to Mozilla or NS6.) The resistance or lack of interest is baffling. It seems like one of the easier features to implement, and the utility should be obvious to anyone who uses more than one computer for web browsing. It may not be as sexy as some of the other development, but if you want more people to use Mozilla, implementing roaming would likely do more toward this goal (relative to the time invested) than almost any other effort on the slate.

just to add some oil to the fire. one of the main reasons i am still using NS 4.7, and haven't switched to IE, is because of its roaming capabilities. if MS implements roaming i will drop NS in a split second. i agree with sluggo that the roaming module can't be anything too difficult to implement. i am sure there's already open source sync software via HTTP available if it is so hard to integrate the code from 4.7. that's my $0.02 worth :)

I don't have a lot constructive to add to what's already been said, except that to those who are saying IE doesn't have this functionality: you are correct, BUT, MSN Explorer that comes with WinXP has web-based bookmark storage functionality. I've not used it myself but some of my friends swear by it. As a workaround until the mozilla developers get around to working on this (assuming they ever do), I've been toying with the idea of writing a sidebar that would be generated from CGI on a web server, which would allow you to work with remote bookmarks that way.. Presumably you would add this sidebar to any netscape 6.x+ or mozilla browser you use, and once you login your bookmarks are right there.
I'm fully capable of doing the server side of it (in perl), but I haven't looked deep enough into XUL to see how difficult the sidebar piece of it would be. If anyone is interested in getting a project together for this, e-mail me and we'll see what we can do.

Yep, roaming access is a terribly useful feature that I still can't live without. I browse equally from two workstations at home, from work, and from my laptop. Roaming access was a godsend for keeping my huge store of bookmarks in sync. I still keep 4.7 around to keep everything in sync, then export the bookmarks to Mozilla. It's a pain, but worth it.

Stop whining. Netscape usually doesn't base such decisions on bugzilla. Practically everybody else is working in their free time. So, whom are you talking to? Are you telling me that I should spend *my* free time to implement the feature *you* want? Do something. Put up money. If you are willing to put up more than 50 US-$, mail me. If you administer lots of clients, your organization could fund 1$ or more per client. If that adds up to enough money, I'll organize something. Remember that you got Mozilla for free so far. If you are not willing to put up money and not willing to help coding, please spare us your comments. This bug has 104 votes, which means that - we already know it is a most-wanted feature - you spam a lot of people by commenting here.

The following assumes this bug suffers from a lack of specification: HTTP roaming access doesn't have enough complexity to be worth reviewing the Netscape code. See here: If you want server-side example code, try this: It implements the MOVE command that netscape used for safe writes. You don't even have to have MOVE if you don't care about safe writes. The real work here is factoring out which parts of the Mozilla profile need to be stored on the server. Bookmarks? Cookies? Prefs? Certs? Skins? MailNews too? I don't know, I'm asking. Assuming more than just bookmarks, does anyone know what format netscape used?
Was there an archive format for the server storage or are there several files per user? I'd suggest getting HTTP-based profiles working with straight puts, then add in the safe writes, then worry about LDAP or ACAP or WEBDAV later. A really generous hacker would stub in generic read and write functions in the beginning so it could be easily modular later. The read from the server would just have to happen really early in Mozilla startup, and the write really late in exit, right? What will that break, if anything? Did Netscape let you read and write from the server at arbitrary times? Maybe if this bug gets a little better specified it'll be easier to divvy up tasks and get something working. To the people afraid of the death of Open Source: the bazaar doesn't work for step 1, or so the theory goes. Mozilla's real collaboration problem is code-rot, but that's a separate issue.

The netscape communicator implementation is actually pretty awful -- it would read once right at startup, and write right before exiting. The entire bookmark file is stored on the remote server as one file (same with cookies, etc.) and the whole thing is sent if the dates don't match. Mozilla bug #78072 isn't necessarily a dependency of this bug, but having that work would take a lot of pressure off of this one -- if the bookmarks file changes on disk, mozilla should transparently reload it. That way, until roaming is fully integrated in Mozilla, it'd be easy for a simple, separate third-party program to do the work.

Everybody go vote for bug #78072 (make mozilla aware of bookmark changes on disk). This would make much of this discussion moot since the "profile" could be handled by third parties with a very simple script using scp, or nfs. (i.e. two copies of mozilla running on two different machines with your home dir mounted wouldn't clobber the bookmark file on each other) bug #78072 could be extended to cover history, preferences, etc.
The NS4.x solution of putting everything on a central server, and pushing updates upon quitting...just isn't very elegant.

The NS4 solution is very elegant - it is a major reason that roaming profiles worked so well. It would be a dream to share a single profile across OSX, Solaris, Be, Linux, and Win32, even if all of those don't support the same shared file access mechanisms, and are *not on the same physical network*. That was a huge key: web-based sharing would allow dial-up users to access profiles from home or on the road, or broadband users to load their moz profile from their home server at work. Relying on the OS file-sharing mechanisms runs counter to the cross-platform applicability of the feature.

There you go again with making the roaming all spiffy and elegant, when what most people want is just *the old functionality*. Never mind if it's clunky. The transfer stuff could be taken care of by an external script easily, if someone could explain (decode heaps of C++ code) how to handle e.g. platform-specific paths, and any other cross-platform issues. Note that it's the HTTP-based roaming most people seem to want, not the LDAP one. For those in the "it should be easy to port the 4.x code" camp, the following might give you a useful starting point:

People: This is NOT A PRIORITY right now.. it doesn't mean we don't agree that it's cool or what have you, it just means there are other things that are taking a higher priority. So you can stop your whining that people are "dragging their feet" and you can stop with the "how hard can this be" - if it's really that easy, go implement it yourself... nobody is stopping you.
It's sad that open source's original goal of "if you want an improvement, you submit a patch" has degenerated into "if you want an improvement, you berate and insult people with the expectation that they will cower and bow to your wishes"

and sluggo: your "sneaking feeling" that I'm referring to LDAP code, or that Netscape is "refusing to release code" is utter BUNK. I'm being totally honest here - no hidden motives. Let's end the x-files-style conspiracy theory right there.
- the libpref backend which handles roaming prefs has DIVERGED TOO MUCH - there is no way to bring the code from 4.x and drop it into libpref. it would actually take less time to write it from scratch
- a significant chunk of roaming involved a UI for managing roaming settings, logging into a roaming server, profile download progress, etc. 4.x didn't even have XUL, so we'd have to write that from scratch
- other modules such as bookmarks and history have been completely rewritten for mozilla - so we would also have to write that from scratch
- 4.x handled roaming by labelling certain profiles as roaming profiles - all the profile code in mozilla is brand new, so we'd have to write the roaming-profile management from scratch.
And so you see, it isn't as easy as people are making it out to be. 99% of the above code is STILL available on the mozilla classic branch - and I welcome ANYONE to even attempt to "migrate it from 4.x" - you can post your patches here.

I'm not the right owner for this. Reassigning to nobody@mozilla.org. Assignee: putterman → nobody

Following up on #92: if anybody does want to dig into the old code, I think one file you might be interested in is here: On the other hand, what areas of the current code base would need to be touched to implement something similar?

I just told Ben Bucksch that I'll put forth $1,000 US towards funding a developer.
All of you that are foaming at the mouth and criticizing Alec can put your money where your mouth is and contact Ben about contributing as well. If we all start coughing up some money instead of griping, even if it's as little as $50 per person, we could be in roaming profile bliss within several months. Considering that this bug has 102 votes (minus me), that's a potential $5100 + $1000 from me to fund someone to work on this. Please take the discussion to the newsgroups if you still feel like complaining.

Matt, this sounds very cool. I'll happily add some cash in to the development as well. But since there are so many methods on the table as to how to actually store & retrieve the profiles, shouldn't we narrow this down? I think this bug should become a meta bug, tracking all the potential methods to actually do this, and then we can figure out which one(s) we want to fund. That would give us (and the developer) a better sense of the cost & time involved. And a formal spec will make sure we're all in agreement on what we're paying for. - Adam

The best course of action to start is something compatible with existing roaming servers like mod_roaming. That'd be immediately useful, and would allow a mixed environment with Moz/NS6 and NS4.x clients during a transition. The one addition I'd like to see is just the ability to use SSL/TLS (https) for the connection. I never did grok why it was left out.

I've contacted Ben with my 'pledge'. Now - perhaps this effort should be taken to a mailing list or something of the sort for coordination without abusing Bugzilla? -MZ

Adam - we should probably take any discussion of implementation details to bug 31763 and I should get back to work on bug 31764.

Thank you Matt Perry! I filed bug 124026 for funding questions. If any concrete (technical) implementation proposals with enough funding support creep up there, I will file more bugs and reference them here, so you can ignore that bug if you are not interested in the funding.
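For reference while implementations are being discussed: the mod_roaming-style HTTP exchange described in earlier comments (a date comparison, a write test, plain GET/PUT, and MOVE for safe writes) is small enough to sketch. A hypothetical Python outline follows; the URL scheme, the `.tmp` naming convention, and the injectable `opener` are illustrative assumptions, not taken from mod_roaming's actual code, and the MOVE-with-Destination-header shape follows WebDAV convention rather than a verified Netscape wire format:

```python
import urllib.request

def http(url, method, data=None, headers=None, opener=urllib.request.urlopen):
    """Send a request with an explicit method. `opener` is injectable so
    the logic can be exercised without a live roaming server."""
    req = urllib.request.Request(url, data=data, method=method)
    for key, value in (headers or {}).items():
        req.add_header(key, value)
    return opener(req)

def write_test(base_url, opener=urllib.request.urlopen):
    """The 'write test' NS4 performed before trusting a server: try to
    PUT a scratch file, and treat any error as 'not writable'."""
    try:
        http(base_url + "/.write-test", "PUT", data=b"x", opener=opener)
        return True
    except OSError:  # urllib's URLError/HTTPError are OSError subclasses
        return False

def safe_put(url, data, opener=urllib.request.urlopen):
    """Safe write as the thread describes it: PUT to a temporary name,
    then MOVE it over the real name, so a crash mid-upload never leaves
    a torn profile file. Plain PUT also works, minus the safety."""
    tmp = url + ".tmp"  # hypothetical temporary-name convention
    http(tmp, "PUT", data=data, opener=opener)
    http(tmp, "MOVE", headers={"Destination": url}, opener=opener)
```

The SSL/TLS wish above would then cost nothing extra on the client side, since urllib accepts https:// URLs as-is.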
Roaming should support IPC (inter-process communication) if bug #68702 is landed. Then it is possible to use custom solutions for the saving process.

I'm interested in this as well, but is *compatibility* with NS4's roaming support really required, or just *coexistence*? For example, I'd like to be able to continue to use just one HTTP mod_roaming server for both NS4 and Moz/NS6 users as I transition them, but I have no need to have users move their actual profile from NS4 to Moz/NS6 and back. I'd be happy to just have HTTP roaming at all in NS6, so if Moz/NS6 used different files on the HTTP roaming server (and therefore NS4 would ignore those files) and used a different format (encrypted or not) in those files, I'd be happy. I figure one-time conversion of profiles from NS4 to Moz/NS6 can be done by downloading the profile in NS4, disabling roaming (so the profile stays local), upgrading to NS6, then re-enabling roaming and pushing the new stuff up to the server. I'd even be happy to live without SSL support (I obviously don't use it now for NS4 roaming). Similarly to others here, my company will be happy to pledge $2000 US towards getting this done (since we unfortunately don't have the skills and time to implement this ourselves).

Hi, I have been fooling around a bit with a Perl script. I have not had much time to get much done, because I have exams at uni, but I think it should be possible to get basic HTTP roaming working quite easily. So if anyone with Perl knowledge wants to help, they're welcome. I hope I can get some spare time in the near future and get a working prototype out. Greetinx Philipp Kolmann philipp@kolmann.at PS: Don't assign this bug to me, I just want to do some testing, if it is possible.

Here are just my $.02 ... My approach to software development is K.I.S.S., "keep it simple, stupid" ... The way I see this, all the grand schemes proposed seem perfect, but it would take a lot of time (=money) to develop.
Instead, the way I see this, roaming profiles could be implemented effortlessly with just these changes: On the "profile manager" add an option (checkbox) titled "Use remote profile". When this option is checked, a text entry box (single line) would show up, where the user might enter the "remote location" of their user profile, as an ftp url. And that's it! On proceeding, Netscape 6.x would just show a dialog like "retrieving user profile". The whole directory would be copied to the temp dir, and when the user exits the browser, the data would be copied back from the local temp dir to the remote dir (ftp transfer). So, in the end, the user only has to:
1) Open Netscape 6.x via the "Profile Manager" icon.
2) Click on the "use remote profile" check box (the status of this and the text of the last used url, sans passwords, would be saved locally)
3) Use the browser with their remote data.
4) Have the data updated on the remote server when the user exits the browser.
Advantages of this approach:
1) The "user profile" server can be ANY standards-compliant ftp server.
2) Any user with a remote hosting account can access their bookmarks file and preferences from anywhere with Netscape 6.x or mozilla installed.
To make this work better on low-bandwidth (=dial up) connections, a GLOBAL setting in the Netscape/Mozilla preferences, saved to local hard disk, possibly under "Advanced->Remote Profiles", would let the user specify, via a simple list with check boxes, what items are retrieved/stored from/to the remote server. So, a user on dial-up from africa might want to login to his ftp server, but have only the bookmarks.html transferred to the local system, not the certificates etc. The idea here is giving the end user total control of what parts of the remote profiles he wants stored.
Users on narrowband might want to check only "bookmarks" while others roaming on an enterprise LAN might want to have their skins/certificates/sidebar_preferences etc retrieved and updated as well. What do you think? I'd like to hear about the pros/cons of this approach. As far as I can see, corporate users are the ones most interested in having their profile data accessible from anywhere, and most corporations surely have an ftp server already in place. Not to mention ftp servers are standard, available everywhere, easy to manage, etc. Regards Fernando

PS: An easier approach, an "entry step", would be to do this same thing, but instead of over ftp, to label the profile manager dialog "Enter profile location manually", so the user could type "Z:\mail-profiles\fcassia\", where drive Z: is really a remote network drive mounted on the local workstation (via nfs/netware/netbeui/samba/winnetworking). (This would please only the corporate users roaming inside a corporate lan, but it would leave dialup users out, hence why I prefer the previously described ftp approach.)

> where the user might enter the "remote location" of their user profile, as an ftp url.

FTP??? As in "send everything cleartext and play nasty tricks on me"? I would imagine that by feeding you a bad profile it should be pretty easy to take over your Mozilla completely without leaving much trace (as soon as you quit Mozilla and the temporary copy of the profile is deleted, there is no evidence the profile was messed with). Also, a part of the problem (especially for bookmarks) is being able to cope with more than one browser running simultaneously - e.g. I go home without quitting Mozilla in my office, at home I add a few new bookmarks to the shared profile, and when I come back to my office, I see the new bookmarks there as well. This is not as hard as it may sound - we just need to make sure local profiles are synced with the remote one regularly, not just on start/exit.
This should also answer questions like "what if Mozilla crashes after I spent an hour rearranging my bookmarks the way I like it".

FTP might be okay for a first cut, but a) as Aleksey points out, cleartext protocols are bad, b) how/where is the password going to be stored securely?, and c) it's not simple to only send the parts of files that have changed. Complaint c) might not be a first-cut issue, but definitely needs to be addressed at some point.

I don't think using HTTP is what's holding up this bug. How would using FTP accelerate the process? Putting my sysadmin hat on, I'd much rather have a gaggle of users authenticate against an HTTP server, because there are a thousand and one authentication handlers available for apache. FTP servers are much more limited, even with PAM. Also, Mozilla is largely centered around HTTP already (APIs, etc.).

Wow, 110 votes. I have a work-in-progress solution at. There's a java server and a client which are in a working state, modulo some features (notably moving and copying branches is absent right now). It uses Mozilla's RDF bookmark schema as the internal model, wrapped in XML-RPC commands.

A Java solution is okay and all, but not a solution due to its licensing. (Not yours; Sun's.)

a few random thoughts, incorporating what i've read above, and elsewhere: it doesn't sound like anybody has the time to add roaming functionality to mozilla anytime soon. i don't. so let's just live with it. a better interim approach might be to just do the roaming file sync before and after mozilla runs, via a Perl script (as suggested by Philipp Kolmann). mainly we just need a remote place to store files, and such a Perl script. the Perl script could use any number of protocols or places for file/information storage. it could theoretically use LDAP, webdav, or ACAP, but i haven't seen any publicly available servers out there lately. (remember xdrive.com, idrive.com, driveway.com? they're no longer free, if they even still exist.)
but if you're lucky, your ISP will probably give you an ftp, web, or IMAP account. storing the files on ftp is trivial. for web storage, you'll need roaming or webdav server extensions, which i don't think are common, although there's RIPglobal.com. if you don't have that, you'd have to write a cgi or php server that allows you to store files in your web account. also, you could store the files via IMAP (put them in a hidden folder so you don't accidentally delete them).

ok, so once we have a place to store the files, we just need a Perl script to do the file sync. the script should allow us to sync arbitrary local files to arbitrary remote files, using some subset of protocols. (note: for LAN users we could even add "rsync", "rdist", or "mirror" support. but for common ISP users, we need a sync method that does its work through the common methods available from a typical ISP, or publicly available on the www. and to my knowledge, that leaves ftp, http, and imap. you could even use smtp and pop3, but that would truly be an ugly hack. also, you could conceivably store your files via a P2P protocol, but to ensure persistence you'd probably have to name all your files "britney spears"....)

so, anyway, when/if i have time, i'll try to write such a Perl script, probably using my ripglobal.com or my imap account as the remote storage place. also, note that since the script is external to mozilla, it could also sync your IE favorites (if you had any, but you don't, so nevermind.)

the main weakness to this roaming method is, of course, that since synchronization is done at the file level, it's hard to truly merge the files properly without some extra information, which we don't have. so, what will probably happen is that, like netscape 4.x, either the local or the remote file will "win", and no real "merging" will take place. there are better solutions, but they would require changing mozilla. so the mozilla developers can stop reading now.
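The file-level weakness just described is easy to make concrete: with whole-file transfer and timestamps, the "merge" is really a replace. A tiny Python illustration (the data shapes are invented for the example; a real script would be moving bookmarks.html and friends):

```python
def whole_file_sync(local, remote):
    """NS4-style roaming: no merging. Each side is an (mtime, bookmarks)
    pair; whichever copy was written later simply replaces the other."""
    local_mtime, _ = local
    remote_mtime, _ = remote
    winner = local if local_mtime >= remote_mtime else remote
    return winner, winner  # both sides end up with the winner's content

# Two machines each add one bookmark to the same starting set...
home = (100, {"mozilla.org", "added-at-home"})
work = (200, {"mozilla.org", "added-at-work"})

# ...and after a sync, the edit made on the older side is silently lost.
home, work = whole_file_sync(home, work)
```

The ADD/DELETE transaction log proposed just below is exactly the "extra information" that would let both additions survive a sync.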
to allow intelligent merging, we'd need to store the important files (bookmarks, address book, etc.) in sort of a database format, with transactions (ADD/DELETE with serial number), so that we know which are the new entries we want to keep, and which are the old ones that we already deleted. also, people have already mentioned integrating LDAP, ACAP, etc. into mozilla. but let me add that, since i don't think writable LDAP and ACAP servers are readily available to the common user, it might be easier to go the "duct tape" route and use something like IMAP, sort of like the way Outlook/Exchange stores its online contacts in a folder that's sort of accessible via IMAP. yeah, it would be a hack, but you could also store your bookmarks and anything else you want in IMAP. you could store each whole file in a big IMAP message, or you could split up the bookmarks and contacts into separate messages for better searching. but even with this LDAP/ACAP/IMAP integration, it would still be a good idea to store a local copy of the files, in case the server goes down. ok, that's about it for my brain dump.

> it doesn't sound like anybody has the time to add roaming functionality to
> mozilla anytime soon.

I plan to work on it after Mozilla 1.0, see bug 126029 (but please don't comment there).

Ben must be getting sleepy... The best places to find out about the status of the funded version of this bug are bug 124026 and bug 124029. - Adam

Some features I would like: And this one might get you some corporate donations: a "push" configuration that reads the user's network account name and *automatically* sets up this config info (company home page / employee portal, IMAP server settings (IMAP login == network login), ChatZilla nickname... the goal being: knowing no more than his login/password, the user should be able to log in on a computer on the network for the first time and have a fully-functional mozilla installation with all the defaults set.
And, when the user moves to another machine, this setup follows him, *without* using OS-specific profile features. Ability to decide what gets stored on the server and what doesn't. In the context of the above paragraph, that would allow an administrator to decide ahead of time what profile info will be centralized and what won't.

I think the previous comment shows how grand visions are holding back what could be here and now. Yes, the previous comment sounds like the holy grail of mobile computing, but it will surely be an implementation hell. What EXACTLY is "the user's network account name"? What network? TCP/IP? TCP/IP has no "account names". Would that be the Netware Requester that I have on my corporate lan? or netbios/netbeui? The "system name" on windows based PCs? WTF are you talking about?? Why complicate things for no reason? Perhaps people need to go back to the origins of this bug#. If you read the initial post, in 1999 (yes 3 years ago), it said: . So why does everyone insist on ignoring this original need and instead propose revolutionary (and complicated) solve-everything-and-cook-dinner-too solutions? Just my $0.02

Sorry for sounding silly, but... Is "roaming access" (as in Netscape Communicator 4.x) planned to be included in Mozilla?

No. The problem is that the mozilla development team suffers from the delusions of grandeur common among Open Source/Free Software developers. This is not a bad thing actually, since it leads to much good code (including the fact that, IIUC, mozilla is the most standards-compliant browser). Unfortunately, it has to be moderated by a realist leader (like RMS in Emacs and Miguel in Gnome) so that aiming for perfection does not get in the way of getting the job done. More specifically, TRT here is _really_ complex, so no one so far was able to define it (let alone code it). And "the poor man's solution" (which would satisfy many users), namely
1. reload the files changed since they were last read
2.
add a "save current setup" menu item - is not glorious enough for anyone to implement. PS. just in case: the two items above allow for an external FTP solution: before leaving for home, you hit "save current setup" and mirror ~/.mozilla to the repository. when you get home, you mirror the repository to ~/.mozilla and mozilla reloads the changed files - no restart. PPS. the "next step" solution would be to allow customization of the ~/.mozilla directory and specifying a URL there.......

I take it then that simply porting the existing Netscape functionality is non-trivial? I'm trying to get more people here using mozilla, but the standard response I get is: does it support roaming yet? None of these users want anything too complicated, they just want what they've already got with ns4. I imagine this is a show-stopper for many sites, as it is here. Perhaps there should be 2 mozilla bugs:
o Port the existing Netscape 4.x roaming functionality, with no major new features.
o Implement a fancy new roaming functionality.

There already ARE two such bugs! See comment #6 (or go directly to bug #17917).

ARG!! Please read what has been posted before. Comments 116 and 119 have been discussed at length before, see e.g. comment 94 and those around it. If you read the bug and the references carefully, you'd see that I am working on 4.x-like roaming at the moment, thanks to nice people willing to *pay* for it.

My comment 119 was not talking about code re-use from ns4. I was referring to porting solely ns4 *functionality*, as opposed to adding lots of fancy new functionality that people can't seem to agree on.

Ben Bucksch is already working on NS4-compatible roaming and functionality. Please see bug 124029 for more details.

Remove myself from the QA of open bugs and change to default QA contact, since I have no way to verify these easily. Still no working Mozilla on my primary platform and it doesn't look like it will happen anytime soon.
:( QA Contact: mozilla → paw

*** Bug 152809 has been marked as a duplicate of this bug. ***
*** This bug has been marked as a duplicate of 152809 ***
Status: NEW → RESOLVED
Closed: 23 years ago → 20 years ago
Resolution: --- → DUPLICATE

Was that DUPE intentional? Status: RESOLVED → REOPENED Resolution: DUPLICATE → ---

(That dupe created a "circular dupe loop". -- That sounds cool) I thought bugzilla checked for those... bug 154617 filed on the duplicate loop problem. Bugzilla used to check for that, so something must have broken.

We need to support multiple protocols so that many people can use this. ACAP is cool, HTTPS is secure, and just about everybody has FTP. There are two levels of functionality here:
Poor man's version:
1. On startup, copy the prefs from (HTTP, FTP, insert protocol here) to hard drive
2. On shutdown (or when the user asks) copy the prefs from hard drive to (HTTP, FTP, insert protocol here)
This would satisfy most people's needs.
Heavy-duty version:
1. On startup, copy the prefs from (HTTP, FTP, insert protocol here) to hard drive
2. When the user changes a pref, upload it
3. When the user changes a pref remotely, download it
Since this is all profile info and not just prefs, we may not be able to get deltas for everything to upload all the time, so we'll need a hybrid. But it's better than nothing. And support towards the heavy-duty version can be built up over time, too, if we start with the poor man's version.

Summary: [RFE] Roaming access - keep bookmarks/cookies/history/etc in a central repository → Roaming access - keep bookmarks/cookies/history/etc in a central repository

related reading:

*** Bug 180669 has been marked as a duplicate of this bug. ***

I understand why the old Netscape 4.x code is basically useless now. Here's what I don't understand. Netscape already supported roaming when Mozilla was in the process of rewriting everything under the sun.
So why wasn't roaming considered an "NS4 parity" essential, as were major components like Mail/News that had to be rewritten from scratch? If some thought and effort had gone into making a general profile management backend back then, we wouldn't have this dilemma now. The local file-based profile information could have just been one more pluggable backend for the profile manager, along with HTTP (mod_roaming), FTP, ACAP, LDAP and anything else that makes sense. It could be dynamically synchronized on an item-by-item basis, and we wouldn't have to be worrying about how to hook into the myriad pieces of code that are now a concern... Did it occur to nobody that hardcoding all the new profile code to assume file management would cause trouble down the road? Wasn't the existing Netscape 4 support a compelling reason to avoid this problem upfront? Given the amount of code that was rewritten anyway, it would have been much easier to incorporate the needed changes during the rewrite to do it the Right Way...

I'm not slamming anybody here. These things happen, and things fall through the cracks. There were so many high-profile Netscape 4 features that this one was probably too small to attract attention until it was too late. I didn't think to point this out early on, either. It's just frustrating that we're in this situation when a little foresight at the right time could have avoided this problem entirely. Oh well, I guess there's only so much foresight to go around.

Of course, this bug is now more than 3 years old. When it was filed, only a year had passed since the decision to scrap the classic codebase -- now we have 4 years of code to contend with instead of 1 year of code. If it was daunting in 1999 to implement this, I shudder to think of the difficulty NOW... :-(

This is a very serious limitation because it makes it so difficult to move bookmarks between computers and discourages backup of bookmarks.
I did export bookmarks about a month ago to edit them on Netscape 4.79, which is better for editing large sets of bookmarks. That's the last backup I did. Installing Mozilla 1.2.1 over 1.2 on Windows 2000 just wiped out all my bookmarks. This would not be a problem with bookmarks in a file as in the old Netscape. Bookmark management is the reason I stayed with the old Netscape until Mozilla came out. I have used Mozilla ever since, but this experience completely undermines any confidence in continuing to use it. kahin@wyoming.com

Here are a couple of thoughts on roaming: NS4 supported roaming, but it had a couple of extremely obnoxious misfeatures:

a) For one thing, if you set a preference to a nonstandard setting, allowed it to be uploaded, and then wanted to change it back, you were utterly screwed -- the "back to standard" setting wouldn't propagate and the nonstandard setting would overwrite it at all times.

b) It only synced on open and shutdown. It meant you could lose data if you had left a Netscape running just about anywhere while using another.

c) It didn't support .newsrc at all.

A lot of thoughts in this thread seem to revolve around "make it compatible with <foo>." It seems it would be more useful to get a roaming infrastructure that does what really needs to be done; if it is not compatible with the old stuff, that is unfortunate but it's much better than the current situation, *or* than retaining misfeatures of the old one.

What this really is is a form of version-control system: when two different versions of a file are presented, they need to be merged intelligently.

Peter -- see bug #17917, "Add 'smart' roaming bookmarks (etc.) with sync capabilities".

I'll ask this again: Can we please keep advocacy (i.e. whining about why this bug isn't fixed, about how important it is, about how it should have been done by now, about how easy it would be, etc.) OUT of the bug. We get the idea already! We just need someone to step up and actually do the work.
> We just need someone to step up and actually do the work.

heh. Done. Bug 124029. (But please don't comment there.) Say thanks to Matt Perry (private) and Dave Caplinger (of Meridianmap) and others who agreed to pay $$$ to make it possible.

>> We just need someone to step up and actually do the work.
> heh. Done. Bug 124029. (But please don't comment there.)

Then one of these bugs should be a duplicate of or dependent on the other. No?

> Then one of these bugs should be a duplicate or dependent on the other. No?

No. This bug is useless to track work, because it has been spammed way too much. Marking this a dup of the other bug will make the spam go there, making that bug useless as well. There are a few good ideas (which I did not implement) thrown around here, occasionally, so I don't know yet what should happen with this bug. Somebody could write a webpage summarizing the ideas.

*** Bug 187880 has been marked as a duplicate of this bug. ***
*** Bug 188542 has been marked as a duplicate of this bug. ***
*** Bug 189587 has been marked as a duplicate of this bug. ***

I don't know if this has been suggested in the past, but how about using a portable USB disk to store the profile? This would have several benefits over an online repository, including better performance, a better price/MB ratio and most probably a simpler implementation. Prog.

Isn't that just a glorified SneakerNet? A much *worse* price/MB ratio -- and worse performance too, once you factor in "fiddling with physical devices rather than having the computer do the work for you". And it can't be in more than one place at once. An interesting idea, I guess.

If you are using a real OS you can softlink $HOME/.mozilla to your USB mount already. The other thing is that it does not scale well.
That does not matter if your target audience is one or two people with USB drives/disk-on-key, but where I think this feature will really make a difference is where an administrator can set up the feature for many people at once. College campuses where people log in to different machines all the time, etc., could really benefit. It would be neat to have an LDAP schema containing the heart of the profile, without the cache, etc.

This might be veering a bit offtopic, but I suppose that's OK seeing as not much seems to be going on with this bug anyway... What about a system where a person can carry around ID and password information on their USB keychains that specifies where the profile is stored and how to access it? That way the profile can be on the keychain or on the network, and by reading the drive Mozilla will have all credentials needed to access it as well. Some sort of checking for a specially named file in the root of any possible USB drive would be called for, I suppose, though I don't know how this would be done nicely under non-Windows machines... Just brainstorming.

Please, this is entirely irrelevant to Mozilla roaming. USB SneakerNet is fine for what it does, but it is utterly unable to do what real networks can. This is why the Internet itself doesn't mainly function by people passing floppy disks from point to point. Any further brainstorming on that topic should go elsewhere.

You can tell Mozilla where it finds the mails for each account; that's very useful for roaming profiles. But why can't we do it with all other personal files, like cookies.txt, etc.? I really appreciate everyone implementing this! Michael

The central repository has to be on the web? Maybe it would be nicer to choose between various information sources, like:
- create a new profile on disk
- create a new roaming profile on disk
- load a roaming profile from a web server
- load a roaming profile from disk
- Use this profile next time you open Mozilla?
y/n

For the sake of new people coming to this bug, as well as those implementing potential solutions to it, what subsystems /should/ be included in 'roaming', and are they modular already (that is, could we change their "look here for data" settings)? I'll suggest:
- Bookmarks
- E-mail server settings
- Contact lists
- Proxy, general browser settings

Apologies for spam, but this bug has been fixed: see bug #124029

This was fixed in bug 124029, which was one implementation bug of this feature request. Please do not add comments to that bug. Cleaning out dependencies. Interesting related bugs in there:
Bug 147344 - Breaking up the profile for roaming, sharing and performance
Bug 18043 - Allow bookmarks to reside remotely on an arbitrary user-defined host
Bug 17917 - Add "smart" roaming bookmarks (etc.) with sync capabilities

For the record, this bug currently has 238 votes, bug 124029 has 112. I am collecting votes :-).

Marking dup of impl bug.
Status: NEW → RESOLVED
Closed: 20 years ago → 19 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla1.8alpha

Marking dup of impl bug.
*** This bug has been marked as a duplicate of 124029 ***
*** This bug has been marked as a duplicate of 124029 ***
Resolution: FIXED → DUPLICATE
Product: Core → Mozilla Application Suite
https://bugzilla.mozilla.org/show_bug.cgi?id=17048
Define mapping between the constructor argument lists of a base class and a subclass.

#include <serialize_meta.hh>

When loading a polymorphic base class, the user must provide the union of constructor arguments for all subclasses (because it's not yet known which concrete subtype will be deserialized). This class defines the mapping between this union of parameters and the subset used for a specific subclass.

In case the parameter list of the subclass is empty, or if it is the same as that of the base class, this mapping will be defined automatically. In the other cases, the user must define a specialization of this class.

Definition at line 123 of file serialize_meta.hh.
http://openmsx.org/doxygen/structopenmsx_1_1MapConstructorArguments.html
Automated unit testing in the metal

Automating builds

I have been doing automated builds for some time now. This is the very first (basic) way to test your code: does it build? For some projects, like ESPurna, it really makes the difference since it has so many different targets and setting combinations it would be a nightmare to test them all manually. Instead, we are using Travis to build several fake images with different combinations to actually test that most of the code builds, i.e. it doesn't have typos, unmet dependencies,...

Travis also provides a way to create deployment images for the different supported boards in ESPurna. When you download a binary image from the releases page in the ESPurna repository, that file has been automatically created by Travis from a tagged release. That is so cool! You can see how this is done in the .travis.yml file in the root of the repository.

But this is not what I wanted to talk about here.

Existing options

The fact that the project builds does not mean that it works. The only way to really know that it does what it is supposed to do is to test it on the hardware. This is where we must start using special tools to evaluate conditions (actual versus expected results) and provide an output. This output will probably be via the serial port of the device, although we could think about other fashionable ways to show the result (LEDs, buzzers,...).

Here we have specific tools to do the job. These tools are very much like their "native" counterparts, used for desktop or web languages like Java, PHP, Python... They are usually referred to as testing frameworks. If you are using the Arduino framework you should know about some of these solutions:

- ArduinoUnit. It has no recent activity but it's still the preferred choice by many people. There are two relevant contributors: Warren MacEvoy and Matthew Murdoch.
- AUnit. It is actively developed by Brian Park and it has no other relevant contributor.
- GoogleTest. It is a generic C++ test suite but they have recently started developing support for the Arduino framework. It is very active and has a big community but it is still a WIP.
- ArduinoCI. It started in 2018, just like the AUnit test suite, but has had no activity since September and remains "beta". Anyway, it claims to have a really interesting set of features. It is based around mocked-up hardware. It has a single main developer named Ian.
- PlatformIO Unit Testing. This is the only non-free and closed solution. And that's a pity since it has really impressive options.

There are other available options like Arduino-TestSuite or ArduTest, but they are abandoned.

Visually testing it

All the tools above allow you to "visually" test the code. I mean: you run the tests and they will output a result on the serial monitor. "PASSED" or "OK" means everything is good. The tools in the previous section allow you (or will allow you) to do that, either on the hardware itself or in a mocked-up version of the hardware.

I will focus here on two of the tools above: AUnit and PlatformIO Unit Test. Both are free to use at this stage and provide a very similar feature set. The project I'll be using to test them is something I've been working on recently: an RPN calculator for ESP8266 and ESP32 platforms. The RPNlib library is released under the Lesser GPL v3 license as free open software and can be checked out at my RPNlib repository on GitHub.

The library is an RPN calculator that can process c-strings of commands and output a stack of results. Testing this is quite simple: you have an input and an output you can compare to the expected output. Let's see how this can be tested with both solutions.

Testing it with AUnit

AUnit is a testing library by Brian Park. It's inspired by and almost 100% compatible with ArduinoUnit, but it uses way less memory than the latter and supports platforms such as ESP8266 or ESP32.
It features a full set of test methods and allows you to use wrapper classes with setup and teardown methods to isolate your tests. That's pretty cool. Here you have an example of usage with one of those classes and the output:

#include <Arduino.h>
#include <rpnlib.h>
#include <AUnit.h>

using namespace aunit;

// -----------------------------------------------------------------------------
// Test class
// -----------------------------------------------------------------------------

class CustomTest: public TestOnce {

    protected:

        virtual void setup() override {
            assertTrue(rpn_init(ctxt));
        }

        virtual void teardown() override {
            assertTrue(rpn_clear(ctxt));
        }

        virtual void run_and_compare(const char * command, unsigned char depth, float * expected) {
            assertTrue(rpn_process(ctxt, command));
            assertEqual(RPN_ERROR_OK, rpn_error);
            assertEqual(depth, rpn_stack_size(ctxt));
            float value;
            for (unsigned char i=0; i<depth; i++) {
                assertTrue(rpn_stack_get(ctxt, i, value));
                assertNear(expected[i], value, 0.000001);
            }
        }

        rpn_context ctxt;

};

// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------

testF(CustomTest, test_math) {
    float expected[] = {3};
    run_and_compare("5 2 * 3 + 5 mod", sizeof(expected)/sizeof(float), expected);
}

testF(CustomTest, test_math_advanced) {
    float expected[] = {1};
    run_and_compare("10 2 pow sqrt log10", sizeof(expected)/sizeof(float), expected);
}

testF(CustomTest, test_trig) {
    float expected[] = {1};
    run_and_compare("pi 4 / cos 2 sqrt *", sizeof(expected)/sizeof(float), expected);
}

testF(CustomTest, test_cast) {
    float expected[] = {2, 1, 3.1416, 3.14};
    run_and_compare("pi 2 round pi 4 round 1.1 floor 1.1 ceil", sizeof(expected)/sizeof(float), expected);
}

// -----------------------------------------------------------------------------
// Main
// -----------------------------------------------------------------------------

void setup() {
    Serial.begin(115200);
    delay(2000);
    Printer::setPrinter(&Serial);
    //TestRunner::setVerbosity(Verbosity::kAll);
}

void loop() {
    TestRunner::run();
    delay(1);
}

As you can see, you can define any specific testing methods in the test class and create and use them directly from the testF methods. This way you can create new tests very fast.

Now I just have to build and upload the test to the target hardware, in this case an ESP32 board:

$ pio run -s -e esp32 -t upload ; monitor
--- Miniterm on /dev/ttyUSB0  115200,8,N,1 ---
--- Quit: Ctrl+C | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
TestRunner started on 4 test(s).
Test CustomTest_test_cast passed.
Test CustomTest_test_math passed.
Test CustomTest_test_math_advanced passed.
Test CustomTest_test_trig passed.
Test test_memory passed.
TestRunner duration: 0.059 seconds.
TestRunner summary: 4 passed, 0 failed, 0 skipped, 0 timed out, out of 4 test(s).

You can check the full AUnit test suite for the RPNlib in the repo.

Testing it with PlatformIO

Let's now see how you can do the very same thing using the PlatformIO Unit Test feature. As you can see, it's very much the same, albeit you don't have the wrapping class feature by default, but you can still use helper methods. Of course, this means you have to take care of the code isolation yourself.
#include <Arduino.h>
#include "rpnlib.h"
#include <unity.h>

// -----------------------------------------------------------------------------
// Helper methods
// -----------------------------------------------------------------------------

void run_and_compare(const char * command, unsigned char depth, float * expected) {

    float value;
    rpn_context ctxt;

    TEST_ASSERT_TRUE(rpn_init(ctxt));
    TEST_ASSERT_TRUE(rpn_process(ctxt, command));
    TEST_ASSERT_EQUAL_INT8(RPN_ERROR_OK, rpn_error);
    TEST_ASSERT_EQUAL_INT8(depth, rpn_stack_size(ctxt));

    for (unsigned char i=0; i<depth; i++) {
        TEST_ASSERT_TRUE(rpn_stack_get(ctxt, i, value));
        TEST_ASSERT_EQUAL_FLOAT(expected[i], value);
    }

}

// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------

void test_math(void) {
    float expected[] = {3};
    run_and_compare("5 2 * 3 + 5 mod", sizeof(expected)/sizeof(float), expected);
}

void test_math_advanced(void) {
    float expected[] = {1};
    run_and_compare("10 2 pow sqrt log10", sizeof(expected)/sizeof(float), expected);
}

void test_trig(void) {
    float expected[] = {1};
    run_and_compare("pi 4 / cos 2 sqrt *", sizeof(expected)/sizeof(float), expected);
}

void test_cast(void) {
    float expected[] = {2, 1, 3.1416, 3.14};
    run_and_compare("pi 2 round pi 4 round 1.1 floor 1.1 ceil", sizeof(expected)/sizeof(float), expected);
}

// -----------------------------------------------------------------------------
// Main
// -----------------------------------------------------------------------------

void setup() {
    delay(2000);
    UNITY_BEGIN();
    RUN_TEST(test_math);
    RUN_TEST(test_math_advanced);
    RUN_TEST(test_trig);
    RUN_TEST(test_cast);
    UNITY_END();
}

void loop() {
    delay(1);
}

To test it you can use the built-in test command in PlatformIO Core.

$ pio test -e esp32
PIO Plus () v1.5.3
Verbose mode can be enabled via `-v, --verbose` option
Collected 2 items

=== [test/piotest] Building... (1/3) ===
Please wait...
=== 9.84 seconds ===

Automating your tests

The next step would be to run these tests unassisted: every time you commit a change to the repo, you want to run the tests on the metal to ensure the results are the expected ones and nothing is broken. Now, this is more involved, and both options above (AUnit and PlatformIO) have solutions for that.

The AUnit solution is based on the AUniter script, also maintained by Brian, and Jenkins, an open-source continuous integration tool you can install locally or on a server of your own. The AUniter script is actually a wrapper around the Arduino binary in headless mode. This implies two strong conditions for me: a specific folder structure and pre-installed libraries. PlatformIO is more flexible here. Of course, if you are already using the Arduino IDE these conditions might not be hard to meet. Still, you are pretty much limited by the possibilities of the IDE. Maybe when the ArduinoCLI project leaves the alpha stage this will change.

The PlatformIO solution supports a number of CI tools, including Jenkins and Travis. Travis is a very good option since it integrates very well with GitHub or GitLab, so you can have a cloud solution for free. But you might say: "How am I supposed to plug the hardware into the GitHub servers?". Well, the very cool thing about PlatformIO is that it supports remote flashing, deploying and testing. The bad news is that these features are not free and you will have to have a Professional PIO Plus account, which is USD 36/year for non-commercial products.

Remote testing with PlatformIO

Let me go briefly through the steps to set up a testing server locally so you can use it from Travis with PlatformIO. Basically, you will need to have PlatformIO Core installed and a PlatformIO Agent running connected to your PIO Plus account. Let's assume you start with a new Raspbian installation on a Raspberry Pi (with internet access already configured).
Let’s first install PlatformIO Core (from the Installation page in the documentation of PlatformIO): $ sudo python -c "$(curl -fsSL)" And now register to our PIO Plus account (the first time it will install some dependencies): $ pio account login PIO Plus () v1.5.3 E-Mail: ************ Password: Successfully authorized! And request a token, you will be using this token to start the agent on boot and also to run the tests from Travis: $ pio account token PIO Plus () v1.5.3 Password: Personal Authentication Token: 0123456789abcdef0123456789abcdef01234567 Now, try to manually start the agent. You can see it’s named after the Raspberry Pi hostname, acrux in this case: $ pio remote agent start 2018-12-26 22:57:48 [info] Name: acrux 2018-12-26 22:57:48 [info] Connecting to PIO Remote Cloud 2018-12-26 22:57:49 [info] Successfully connected 2018-12-26 22:57:49 [info] Authenticating 2018-12-26 22:57:49 [info] Successfully authorized We are almost ready to run code remotely, just some final touch. Add your user to the dialout group so it has access to the serial ports: $ sudo adduser $USER dialout And make your life a little easier by using udev rules to create symlinks to the devices you have attached to the Raspberry Pi, this way you will be able to refer to their ports “by name”. You can first list all the connected devices to find the ones you want. In this example below I had just one Nano32 board which uses a FTDI chip: $ lsusb Bus 001 Device 005: ID 0403:6015 Future Technology Devices International, Ltd Bridge(I2C/SPI/UART/FIFO) Now create the rules and apply them (the Nano32 above and a D1 Mini board): $ sudo cat /etc/udev/rules.d/99-usb-serial.rules SUBSYSTEM=="tty", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="7523", SYMLINK+="d1mini" SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6015", SYMLINK+="nano32" $ sudo udevadm control --reload-rules $ sudo udevadm trigger OK, let’s try to run the code remotely. 
Go back to your PC and log into your PIO account as before:

$ pio account login
PIO Plus () v1.5.3
E-Mail: ************
Password:
Successfully authorized!

Check if you see the agent on the Raspberry Pi:

$ pio remote agent list
PIO Plus () v1.5.3
acrux
-----
ID: e49b5710a4c7cbf60cb456a3b227682d7bbc1add
Started: 2018-12-26 22:57:49

What devices does it have attached? Here you see the Nano32 in /dev/ttyUSB0 using the FT231X USB2UART chip (unfortunately you don't see the aliases, but you can still use them from the platformio.ini file):

$ pio remote device list
PIO Plus () v1.5.3
Agent acrux
===========
/dev/ttyUSB0
------------
Hardware ID: USB VID:PID=0403:6015 SER=DO003GKK LOCATION=1-1.2
Description: FT231X USB UART

/dev/ttyAMA0
------------
Hardware ID: 3f201000.serial
Description: ttyAMA0

And finally, run the tests. This won't be fast: communication is slow and the first time it will install all the dependencies remotely too, so give it some time:

$ pio remote -a acrux test -e esp32
PIO Plus () v1.5.3
Building project locally
Verbose mode can be enabled via `-v, --verbose` option
Collected 2 items

=== [test/piotest] Building... (1/3) ===
Please wait...

Testing project remotely
Verbose mode can be enabled via `-v, --verbose` option
Collected 2 items

=== 13.10 seconds ===

Amazing! You have run the tests on a physical device attached to a different machine. Let's automate this further.

Running tests from Travis

First, let's run the agent when the Raspberry Pi boots. To do it, add the following line to the /etc/rc.local file before the final exit 0. The PLATFORMIO_AUTH_TOKEN environment variable should be set to the token we retrieved before, so the agent will register to the same account.

PLATFORMIO_AUTH_TOKEN=0123456789abcdef0123456789abcdef01234567 pio remote agent start

We now need to set up the PlatformIO project in the root of the library defining the environments to test:

$ cat platformio.ini

[platformio]
src_dir = .
lib_extra_dirs = .
[env:esp8266]
platform = espressif8266
board = esp12e
framework = arduino
upload_port = /dev/d1mini
test_port = /dev/d1mini
upload_speed = 921600
test_ignore = aunit

[env:esp32]
platform = espressif32
board = nano32
framework = arduino
upload_port = /dev/nano32
test_port = /dev/nano32
test_ignore = aunit

You might have noticed we are using the named ports and also ignoring the AUnit tests in the same repository. That's fine. This is what we have been running already in our previous examples. Now let's check the Travis configuration file:

$ cat .travis.yml

language: python
python:
  - '2.7'
sudo: false
cache:
  directories:
    - "~/.platformio"
install:
  - pip install -U platformio
script:
  - pio remote -a acrux test

So simple: just run all the tests using the acrux agent (our Raspberry Pi). Now the final setting: you have to link your PIO account from Travis. Of course, you will not set the token out in the wild or leave your credentials visible in the Travis configuration file. You have two options here: either encrypt the credentials in the file or add the token to your project environment variables (in the Settings page of your project page in Travis).

Now we are ready. Do any commit and the code will be tested from Travis on your local tester machine. Enjoy!

"Automated unit testing in the metal" was first posted on 26 December 2018 by Xose Pérez on tinkerman.cat under Tutorial and tagged arduino, arduinoci, arduinounit, aunit, deployment, embedded, esp32, esp8266, espurna, github, googletest, jenkins, platformio, raspberry pi, regression, rpn, rpnlib, travis, unit test.
https://tinkerman.cat/post/automated-unit-testing-metal
Java Programming/Print version

Contents
- 1 Overview
- 2 Preface
- 3 About This Book
- 4 History
  - 4.1 Earlier programming languages
  - 4.2 The Green team
  - 4.3 Reshaping thought
  - 4.4 The demise of an idea, birth of another
  - 4.5 Versions
  - 4.6 References
- 5 Java Overview
- 6 The Java Platform
  - 6.1 Java Runtime Environment (JRE)
  - 6.2 Java Development Kit (JDK)
    - 6.2.1 The Java compiler
    - 6.2.2 Applet development
    - 6.2.3 Annotation processing
    - 6.2.4 Integration of non-Java and Java code
    - 6.2.5 Class library conflicts
    - 6.2.6 Software security and cryptography tools
    - 6.2.7 The Java archiver
    - 6.2.8 The Java debugger
    - 6.2.9 Documenting code with Java
    - 6.2.10 The native2ascii tool
    - 6.2.11 Remote Method Invocation (RMI) tools
    - 6.2.12 Java IDL and RMI-IIOP Tools
    - 6.2.13 Deployment & Web Start Tools
    - 6.2.14 Browser Plug-In Tools
    - 6.2.15 Monitoring and Management Tools / Troubleshooting Tools
    - 6.2.16 Java class libraries (JCL)
  - 6.3 Similar concepts
- 7 Getting started
- 8 Installation
  - 8.1 Availability check for JRE
  - 8.2 Availability check for JDK
  - 8.3 Advanced availability check options on Windows platform
  - 8.4 Download instructions
  - 8.5 Updating environment variables
  - 8.6 Start writing code
  - 8.7 Availability check for JRE
  - 8.8 Availability check for JDK
  - 8.9 Installation using Terminal
  - 8.10 Download instructions
  - 8.11 Start writing code
  - 8.12 Updating Java for Mac OS
  - 8.13 Availability check for JDK
- 9 Compilation
- 10 Execution
- 11 Understanding a Java Program
  - 11.1 The Distance Class: Intent, Source, and Use
  - 11.2 Detailed Program Structure and Overview
    - 11.2.1 Introduction to Java Syntax
    - 11.2.2 Declarations and Definitions
    - 11.2.3 Data Types
  - 11.3 Whitespace
  - 11.4 Indentation
- 12 Java IDEs
- 13 Language Fundamentals
- 14 Statements
  - 14.1 Variable declaration statement
  - 14.2 Assignment statements
  - 14.3 Assertion
  - 14.4 Program Control Flow
  - 14.5 Statement Blocks
  - 14.6 Branching Statements
  - 14.7 Return statement
  - 14.8 Iteration Statements
  - 14.9 The continue and break statements
  - 14.10 Throw statement
  - 14.11 try/catch
- 15 Conditional blocks
- 16 Loop blocks
- 17 Boolean expressions
- 18 Variables
- 19 Primitive Types
- 20 Arithmetic expressions
- 21 Literals
- 22 Methods
- 23 API/java.lang.String
- 24 Classes, Objects and Types
  - 24.1 Instantiation and constructors
  - 24.2 Type
  - 24.3 Autoboxing/unboxing
  - 24.4 Methods in the Object class
- 25 Keywords
  - 25.1 abstract
  - 25.2 assert
  - 25.3 boolean
  - 25.4 break
  - 25.5 byte
  - 25.6 case
  - 25.7 catch
  - 25.8 char
  - 25.9 class
  - 25.10 const
  - 25.11 continue
  - 25.12 See also
  - 25.13 default
  - 25.14 do
  - 25.15 double
  - 25.16 else
  - 25.17 enum
  - 25.18 extends
  - 25.19 final
  - 25.20 For a variable
  - 25.21 For a class
  - 25.22 For a method
  - 25.23 Interest
  - 25.24 finally
  - 25.25 float
  - 25.26 for
  - 25.27 goto
  - 25.28 if
  - 25.29 implements
  - 25.30 import
  - 25.31 instanceof
  - 25.32 int
  - 25.33 interface
  - 25.34 long
  - 25.35 native
  - 25.36 See also
  - 25.37 new
  - 25.38 package
  - 25.39 private
  - 25.40 protected
  - 25.41 public
  - 25.42 return
  - 25.43 short
  - 25.44 static
  - 25.45 Interest
  - 25.46 strictfp
  - 25.47 super
  - 25.48 switch
  - 25.49 synchronized
  - 25.50 Singleton example
  - 25.51 this
  - 25.52 throw
  - 25.53 throws
  - 25.54 transient
  - 25.55 try
  - 25.56 void
  - 25.57 volatile
  - 25.58 while
- 26 Packages
- 27 Arrays
- 28 Mathematical functions
  - 28.1 Math constants
  - 28.2 Math methods
    - 28.2.1 Exponential methods
    - 28.2.2 Trigonometric and hyperbolic methods
    - 28.2.3 Absolute value: Math.abs
    - 28.2.4 Maximum and minimum values
  - 28.3 Functions dealing with floating-point representation
  - 28.4 Rounding number example
- 29 Large numbers
- 30 Random numbers
- 31 Unicode
- 32 Comments
- 33 Coding conventions
- 34 Classes and Objects
- 35 Defining Classes
- 36 Inheritance
- 37 Interfaces
- 38 Overloading Methods and Constructors
- 39 Object Lifecycle
- 40 Scope
- 41 Nested Classes
- 42 Generics

Overview

Preface

About This Book

History

Unlike C and C++, Java's growth is pretty recent. Here, we'd quickly go through the development paths that Java took with age.

Initial Release (versions 1.0 and 1.1)

Released on 6 February 2002, Java 1.4 has improved programmer productivity by expanding language features and available APIs:

- Assertion
- Regular Expression
- XML processing
- Cryptography and Secure Socket Layer (SSL)
- Non-blocking I/O (NIO)
- Logging

Tiger (version 1.5.0; Java SE 5)
History thought another Unlike C and C++, Java's growth is pretty recent. Here, we'd quickly go through the development paths that Java took with age. Initial Release (versions 1.0 and 1.1)) Released in 6 February 2002, Java 1.4 has improved programmer productivity by expanding language features and available APIs: - Assertion - Regular Expression - XML processing - Cryptography and Secure Socket Layer (SSL) - Non-blocking I/O (NIO) - Logging Tiger (version 1.5.0; Java SE 5).. Java Overview++. were close to machine instruction and were easy to convert into (JIT) compiler,". Standardization.s this approach. It is interesting to see how the approach was switched back and forth. AWT → Swing → SWT. Secure execution. In the new way of error handling, functions/methods do not return error codes. Instead, when there is an error, an exception is thrown. The exceptions can be handled by the catch keyword at the end of a try block. This way, the code that is calling the function. -. of the programmers hands; the Java Virtual Machine keeps track of all used memory. When memory is not used any more it is automatically freed up. A separate task is running. The) Quite possibly, the most important part of the JRE is the Java Virtual Machine (JVM). The JVM acts like a virtual processor, enabling Java applications to be run on the local system. Its extcheck— It can be used prior to the installation of a Java extension into the JDK or JRE environment. It checks if a particular Jar file conflicts with an already installed extension. This tool appeared first with Java 1.5. Software security and cryptography tools jdb— (short for Java debugger) is a command-line console that provides a debugging environment for Java programs. Although you can use this command line console, IDE's normally provide easier to use debugging environments. 
Documenting code with Java Java IDL and RMI-IIOP Tools Deployment & Web Start Tools Browser Plug-In Tools Monitoring and Management Tools / Troubleshooting Tools The success of the Java platform and the concepts of the write once, run anywhere principle has led to the development of similar frameworks and platforms. Most notable of these is the Microsoft's .NET framework and its open-source equivalent Mono. The .NET framework: Getting started Understanding systems: Installation In order to make use of the content in this book, you would need to follow along each and every tutorial rather than simply reading through the book. But to do so, you would need access to a computer with the Java platform installed on it — the Java platform is the basic prerequisite for running and developing Java code, thus it is divided into two essential pieces of software: - the Java Runtime Environment (JRE), which is needed to run Java applications and applets; and, - the Java Development Kit (JDK), which is needed to develop those Java applications and applets. However as a developer, you would only require the JDK which comes equipped with a JRE as well. Given below are installation instruction for the JDK for various operating systems: Availability check for JRE The Java Runtime Environment (JRE) is necessary to execute Java programs. To check which version of Java Runtime Environment (JRE) you have, follow the steps below. - To learn more about the Command Prompt syntax, take a look at this MS-DOS tutorial..exe executable installed. If it is, and it is a recent enough version (Java 1.4.2 or Java 1.5, for example), you should put the bin directory that contains javac in your system path. The Java runtime, java, is often in the same bin directory. If the installed version is older (i.e. 
it is Java 1.3.1 or Java 1.4.2) and you wish to use a more recent Java version, you should install the newer JDK.

Some Windows-based systems come built in with the JRE; however, for the purposes of writing Java code by following the tutorials in this book, you would require the JDK nevertheless. The Java Development Kit (JDK) is necessary to build Java programs. First, check to see if a JDK is already installed on your system. To do so, first open a command window and execute the command below. If the JDK is installed and on your executable path, you should see some output which tells you the command line options. The output will vary depending on which version is installed and which vendor provided the Java installation.

Advanced availability check options on the Windows platform

On a machine using the Windows operating system, one can invoke the Registry Editor utility by typing REGEDIT in the Run dialog. In the window that opens subsequently, if you traverse through the hierarchy HKEY_LOCAL_MACHINE > SOFTWARE > JavaSoft > Java Development Kit on the left-hand pane, the result would be similar to figure 1.2, with the only exception being the version entries for the Java Development Kit. At the time of writing this manuscript, the latest version of the Java Development Kit available from the Internet was 1.7, as seen in the Registry entry. If you see a window that resembles the one presented above, Java is installed on your system; otherwise, it is not.

Download instructions

To acquire the latest JDK (version 7), you can manually download the Java software from the Oracle website. For the convenience of our readers, the following table presents direct links to the latest JDK for the Windows operating system. You must follow the instructions for the setup installer wizard step by step with the default settings to ensure that Java is properly installed on your system. Once the setup is completed, it is highly recommended to restart your Windows operating system.
If you kept the default settings for the setup installer wizard, your JDK should now be installed at C:\Program Files\Java\jdk1.7.0_01. You would require the location of your bin folder at a later time: this is located at C:\Program Files\Java\jdk1.7.0_01\bin. It may be a hidden folder, but no matter. Just don't use C:\Program Files (x86)\ by mistake unless that is where Java was installed.

Updating environment variables

In order for you to start using the JDK compiler utility with the Command Prompt, you would need to set the environment variable that points to the bin folder of your recently installed JDK. To set your environment variables permanently, follow the steps below.

Start writing code

Once you have successfully installed the JDK on your system, you are ready to program code in the Java programming language. However, to write code, you would need a decent text editor. Windows comes with a text editor by default: Notepad. In order to use Notepad to write code in Java, you need to follow the steps below:

Availability check for JRE

The Java Runtime Environment (JRE) is necessary to execute Java programs. To check which version of the JRE you have, follow the steps below.

Check whether you have the javac executable installed. If it is, and it is a recent enough version, you should put the bin directory that contains javac in your system path. The Java runtime, java, is often in the same bin directory. If the installed version is older (i.e. it is Java 5) and you wish to use a more recent Java version, install the newer JDK.

Installation using Terminal

Downloading and installing the Java platform on Linux machines (in particular Ubuntu Linux) is very easy and straightforward. To use the terminal to download and install the Java platform, follow the instructions below.

Download instructions

Alternatively, you can manually download the Java software from the Oracle website. For the convenience of our readers, the following table presents direct links to the latest JDK for the Linux operating system.
Start writing code

The most widely available text editor on GNOME desktops is Gedit, while on KDE desktops, one can find Kate. Both these editors support syntax highlighting and code completion and therefore are sufficient for our purposes. However, if you require a robust and standalone text editor like the Notepad++ editor on Windows, you can use the minimalistic yet feature-loaded editor SciTE. Follow the instructions below if you wish to install SciTE:

On Mac OS, both the JRE and the JDK are already installed. However, the version installed was the latest version when the computer was purchased, so you may want to update it.

Updating Java for Mac OS

- Go to the Java download page.
- Accept Oracle's license agreement.
- Click on the link for Mac OS X.
- Run the installer package.

If you already have the JRE installed, you can use the Java Wiki Integrated Development Environment (JavaWIDE) to code directly in your browser, no account or special software required. For more information, click here to visit the JavaWIDE site.

Compilation

To execute your first Java program, follow the instructions below:

- Ask for help on the Discussion page for this chapter if the program did not execute properly.

Automatic Compilation of Dependent Classes

A class with this package declaration has to be in a directory named example.

Subpackages

Debugging and Symbolic Information

Ant

Execution

There are various ways in which Java code can be executed. A complex Java application usually uses third-party APIs or services. In this section we list the most popular ways a piece of Java code may be packed together and/or executed.

JSE code execution

Understanding a Java Program

This class is named Distance, so using your favorite editor or Java IDE, first create a file named Distance.java, then copy the source below. Readers familiar with other object-oriented languages such as C++ or C# will be able to understand most if not all of the sample program.
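The original Distance listing was lost in extraction. Below is a minimal sketch of what a Distance class with the printDistance(), main() and intValue() methods discussed later might look like; the field, constructor and method bodies are illustrative guesses, not the book's actual listing:

```java
// A sketch of the Distance class: the original listing did not survive,
// so the internals here are illustrative only.
public class Distance {
    private int metres;

    public Distance(int metres) {
        this.metres = metres;
    }

    // One of the void methods mentioned in the walkthrough below.
    public void printDistance() {
        System.out.println(metres + " metres");
    }

    public int intValue() {
        return metres;
    }

    public static void main(String[] args) {
        new Distance(42).printDistance();
    }
}
```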
Once you save the file, compile the program: (If the javac command fails, review the installation instructions.)

As promised, we will now provide a detailed description of this Java program. We will discuss the syntax and structure of the program and the meaning of that structure.

Introduction to Java Syntax

- Figure 2.1: Basic Java syntax.

The syntax of a Java class is the characters, symbols and their structure used to code the class.

Declarations and Definitions

Sequences of tokens are used to construct the next building blocks of Java classes, as shown above: declarations and definitions. A class declaration provides the name and visibility of a class.

Constructor

Methods are the third and most important type of class member. This class contains three methods in which the behavior of the Distance class is defined: printDistance(), main(), and intValue().

The printDistance() method

Most declarations have a data type. Java has several categories of data types: reference types, primitive types, array types, and a special type, void.

Primitive Types

Java will not automatically convert values of type String into int values. Java's primitive types are boolean, byte, char, short, int, long, float and double, each of which is also a Java language keyword.

Reference Types

void

void is not a type in Java; it represents the absence of a type. Methods which do not return values are declared as void methods. This class defines two void methods:

Whitespace

Java IDEs

What is a Java IDE? A Java IDE (Integrated Development Environment) is a software application which enables users to more easily write and debug Java programs. Many IDEs provide features like syntax highlighting and code completion, which help the user to code more easily.

Eclipse

JCreator

JCreator is a simple and lightweight Java IDE from XINOX Software. It runs only on Windows platforms. It is very easy to install and starts quickly, as it is a native application. This is a good choice for beginners.
Processing

JBuilder

JBuilder is an IDE with proprietary source code, sold by Embarcadero Technologies. One of its advantages is the integration with Together, a modeling tool.
- More info: Embarcadero.

DrJava

DrJava is an IDE developed by the JavaPLT group at Rice University. It is designed for students.

Other IDEs
- Geany
- IntelliJ IDEA
- JDeveloper
- jGRASP
- jEdit
- MyEclipse
- Visual Café
- Gel
- JIPE
- Zeus
- Setu (an eye-saving, lightweight and fast C, C++ and Java IDE)

Statements

Now that we have the Java platform on our systems and have run the first program successfully, we are geared towards understanding how programs are actually made. As we have already discussed, a program is a set of instructions, which are known as statements in Java. If the above statement was the only one in the program, it would look similar to this:

Java places its statements within a class declaration and, in the class declaration, the statements are usually placed in a method declaration, as above.

Variable declaration statement

The simplest statement is a variable declaration: It defines a variable that can be used to store values for later use. The first token is the data type of the variable (which type of values this variable can store). The second token is the name of the variable, by which you will be referring to it. Then each declaration statement is ended by a semicolon ( ;).

Assignment statements

Up until now, we've treated the creation of variables as a single statement. In essence, we assign a value to those variables, and that's just what it is called. When you assign a value to a variable in a statement, that statement is called an assignment statement (also called an initialization statement). Did you notice one more thing? It's the semicolon ( ;) at the end of each statement. A clear indicator that a line of code is a statement is its termination with an ending semicolon.
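The pieces described above (a declaration, assignments and the terminating semicolons) can be put together in a small sketch. The class and variable names here are illustrative, not taken from the book's own listing:

```java
public class Sum {
    // Adds two values, mirroring the "sum up values" program described below.
    public static int add(int firstNumber, int secondNumber) {
        int result;                           // variable declaration statement
        result = firstNumber + secondNumber;  // assignment statement
        return result;
    }

    public static void main(String[] args) {
        int firstNumber = 10;   // declaration with assignment (initialization)
        int secondNumber = 20;
        System.out.println(add(firstNumber, secondNumber));  // prints 30
    }
}
```

Compiled with javac Sum.java and run with java Sum, this prints 30.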
If one were to write multiple statements, it is usually done with each statement on a separate line. The problem with putting multiple statements on one line is that it is very difficult to read.

A variable declaration with assignment includes three parts: a data type, the variable name (also called the identifier) and the value of the variable. We will look more into the nature of identifiers and values in a later section. If a value is given directly, like in the first two statements, then it is called a literal (the value is literally the value, hence the name literal). Note that after the assignment to result, its value will not be changed if we assign different values to firstNumber or secondNumber, like in line 5. With the information you have just attained, you can actually write a decent Java program that can sum up values.

Assertion

An assertion checks if a condition is true: Each assert statement is ended by a semicolon ( ;). However, assertions are disabled by default, so you must run the program with the -ea argument in order for assertions to be enabled ( java -ea [name of compiled program]).

Program Control Flow

Statements are evaluated in the order in which they occur.

Statement Blocks

A bunch of statements can be placed in braces to be executed as a single block. Such a block of statements can be named or be provided with a condition for execution. Below is how you'd place a series of statements in a block.

Branching Statements

Program flow can be affected using function/method calls, loops and iterations. Of the various types of branching constructs, we can easily pick out two generic branching methods.
- Unconditional Branching
- Conditional Branching

Unconditional Branching Statements

Conditional Statements

Conditional branching is attained with the help of the if...else and switch statements. A conditional branch occurs only if a certain condition expression evaluates to true. Object variables cannot be analyzed through switch statements.

Iteration Statements

Iteration statements are statements that are used to iterate a block of statements.
Such statements are often referred to as loops. Java offers four kinds of iterative statements.
- The while loop
- The do...while loop
- The for loop
- The foreach loop

The while loop

The do-while loop is functionally similar to the while loop, except the condition is evaluated AFTER the statement executes:

do{ statement; } while(condition);

The for loop

The for loop is a specialized while loop whose syntax is designed for easy iteration through a sequence of numbers. Example: The program prints the numbers 0 to 99 and their squares. The same statement in a while loop:

The foreach loop

try/catch

A try/ catch must at least contain the try block and the catch block:

Question 3.1: How many statements are there in this class?

Answer: 5. One statement at line 3, two statements at line 6, one statement at line 7 and one statement at line 11.

Conditional blocks

Conditional blocks allow a program to take a different path depending on some condition(s). These allow a program to perform a test and then take action based on the result of that test. In the code sections, the actually executed code lines will be highlighted.

If

The if block executes only if the boolean expression associated with it is true. The structure of an if block is as follows: Here is a two-part example to illustrate what happens if the condition is true and if the condition is false:

If/else

The if block may optionally be followed by an else block which will execute if that boolean expression is false. The structure of an if/else block is as follows:

If/else-if/else

Conditional expressions use the compound ?: operator. Syntax: This evaluates the boolean expression, and if it is true then the conditional expression has the value of expression1; otherwise the conditional expression has the value of expression2. Example: This is equivalent to the following code fragment:

Switch

The switch conditional statement is basically a shorthand version of writing many if... else statements.
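A quick sketch of that shorthand; the class and the particular cases are illustrative, not from the book:

```java
public class SwitchDemo {
    // A switch as shorthand for a chain of if...else statements.
    static String describe(int code) {
        switch (code) {
            case 0:
                return "zero";   // runs only when code matches this case
            case 1:
                return "one";
            default:
                return "many";   // runs when no case above matched
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(0));  // prints zero
        System.out.println(describe(7));  // prints many
    }
}
```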
The switch block evaluates a char, byte, short, or int.

Loops

The do-while loop is functionally similar to the while loop, except the condition is evaluated AFTER the statement executes. It is useful when we try to find data that does the job by randomly browsing through an amount of data.

Boolean expressions

Boolean values are values that evaluate to either true or false, and are represented by the boolean data type. Boolean expressions are very similar to mathematical expressions, but instead of using mathematical operators such as "+" or "-", you use comparative or boolean operators such as "==" or "!".

Comparative operators

Variables

In the Java programming language, the words field and variable both refer to the same thing. Variables are devices that are used to store data, such as a number, or a string of character data.

Variables in Java programming

The identifier holds its value, while short is the data type for that particular value. Other uses of integer data types in Java might see you write code such as that given below:

Integer numbers and floating point numbers

Arithmetic

Literals

There are two boolean literals. There are no other boolean literals, because there are no other boolean values!

Numeric Literals

There are three types of numeric literals in Java.

Integer Literals

Methods

We also need to set a method's visibility (private, protected or public). If the method throws a checked exception, that needs to be declared as well. This is called a method definition. The syntax of a method definition is:

class MyClass {
    ...
    public ReturnType methodName(ParamOneType parameter1, ParamTwoType parameter2) throws ExceptionName {
        ReturnType returnType;
        ...
        return returnType;
    }
    ...
}

We can declare that the method does not return anything using the void Java keyword. For example: When the method returns nothing, the return keyword at the end of the method is optional.
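The method-definition syntax above can be made concrete with a small sketch; the class and method names are illustrative:

```java
public class Calculator {
    // A method definition: visibility, return type, name, parameter list.
    public int max(int a, int b) {
        if (a > b) {
            return a;   // execution flow returns to the caller here
        }
        return b;
    }

    // A void method returns nothing; the trailing return is optional.
    public void printMax(int a, int b) {
        System.out.println(max(a, b));
    }
}
```

A caller would use it as new Calculator().max(3, 7), which evaluates to 7.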
When the execution flow reaches the return keyword, the method execution is stopped and the execution flow returns to the caller method. The return keyword can be used anywhere in the method as long as there is a way to execute the instructions below it: In code section 3.68, the return keyword at line 5 is well placed because the instructions below can be reached when a is negative or equal to 0. However, the return keyword at line 8 is badly placed because the instructions below it can never be reached.

Parameter passing

We can pass any primitive data type or reference data type to a method.

Primitive type parameters

Primitive types are passed in by value. This means that as soon as the primitive type is passed in, there is no more link between the value inside the method and the source variable: As you can see in code section 3.70, the modifyValue() method has not modified the value of i.

Reference type parameters

Object references are passed by value. This means that:
- There is no more link between the reference inside the method and the source reference,
- The source object and the object inside the method are still the same.

You must understand the difference between the reference of an object and the object itself. An object reference is the link between a variable name and an instance of an object: An object reference is a pointer, an address to the object instance. The object itself is the value of its attributes inside the object instance:

Take a look at the example above: The name has changed because the method has changed the object itself and not the reference. Now take a look at the other example: The name has not changed because the method has changed the reference and not the object itself.
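The two parameter-passing behaviours described above can be sketched together; the Person class and method names are illustrative, not the book's own listing:

```java
public class Passing {
    static class Person {
        String name;
        Person(String name) { this.name = name; }
    }

    // Primitives are passed by value: the caller's variable is untouched.
    static void modifyValue(int i) {
        i = 99;
    }

    // The reference is passed by value, but both references point to the
    // same instance, so changes to the object itself are visible outside.
    static void rename(Person p) {
        p.name = "Bob";
    }

    // Reassigning the reference inside the method changes nothing outside.
    static void replace(Person p) {
        p = new Person("Carol");
    }

    public static void main(String[] args) {
        int i = 1;
        modifyValue(i);                       // i is still 1
        Person alice = new Person("Alice");
        rename(alice);                        // alice.name is now "Bob"
        replace(alice);                       // alice still refers to the same object
        System.out.println(i + " " + alice.name);
    }
}
```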
The behavior is the same as if the method were in-lined and the parameters were assigned to new variable names:

Variable argument list

Java SE 5.0 added syntactic support for methods with a variable argument list, which simplifies the typesafe usage of methods requiring a variable number of arguments. Less formally, these parameters are called varargs[2]. The type of a variable parameter must be followed by ..., and Java will box all the arguments into an array: When calling the method, a programmer can simply separate the points by commas, without having to explicitly create an array of Point objects. Within the method, the points can be referenced as points[0], points[1], etc. If no points are passed, the array has a length of zero.

A method can have both normal parameters and a variable parameter, but the variable parameter must always be the last parameter. For instance, if the programmer is required to use a minimum number of parameters, those parameters can be specified before the variable argument:

Return parameter

If you need to return several values, create a special return object with the needed return values. Create that object inside the method, assign the values and return the reference to this object. This special object is "bound" to this method and used only for returning values, so do not use a public class. The best way is to use a nested class, see the example below: In the above example the getPersonInfoById method returns an object reference that contains both values of the name and the age. See below how you may use that object:

A special method, the constructor

The constructor is a special method called automatically when an object is created with the new keyword. A constructor does not have a return value and its name is the same as the class name. Each class must have a constructor. If we do not define one, the compiler will automatically create a default, so-called empty constructor.

Static methods

A static method is a method that can be called without an object instance.
It can be called on the class directly. For example, the valueOf(String) method of the Integer class is a static method: As a consequence, it cannot use the non-static methods of the class, but it can use the static ones. In the same way, it cannot use the non-static attributes of the class, but it can use the static ones: You can notice that when you use System.out.println(), out is a static attribute of the System class.

A static attribute is related to a class, not to any object instance, so there is only one value for all the object instances. This attribute is unique in the whole Java Virtual Machine. All the object instances use the same attribute:

Question 3.11: Visit the Oracle JavaDoc of the class java.lang.Integer. How many static fields does this class have?

Answer: 4. int MAX_VALUE, int MIN_VALUE, int SIZE and Class<Integer> TYPE.

- To learn how to overload and override a method, see Overloading Methods and Constructors.

API/java.lang

On the right-hand side, a String object is created, represented by the string literal. Its object reference is assigned to the str variable.

Immutability

See also Classes, Objects and Types. For a primitive type, only its value is stored. For an object, a reference to an instance can also be stored.
- In memory, the allocated space of a primitive type is fixed, whatever its value. The allocated space of an object can vary, for instance depending on whether the object is instantiated or not.
- Primitive types don't have methods callable on them.
- A primitive type can't be inherited.

Instantiation and constructors

Java also has the concept of cloning an object, and the end results are similar to the copy constructor. Cloning an object is faster than creation with the new keyword, because all the object memory is copied at once to the destination cloned object. This is possible by implementing the Cloneable interface, which allows the method Object.clone() to perform a field-by-field copy.
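A minimal sketch of the Cloneable mechanism just described; the Point class and its fields are illustrative:

```java
public class Point implements Cloneable {
    int x, y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // Implementing the Cloneable marker interface allows Object.clone()
    // to perform its field-by-field copy; we widen the visibility here.
    @Override
    public Point clone() {
        try {
            return (Point) super.clone();
        } catch (CloneNotSupportedException e) {
            // Cannot happen: this class implements Cloneable.
            throw new AssertionError(e);
        }
    }
}
```

After Point q = p.clone(), q is a distinct instance whose fields hold copies of p's values.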
Types

Think of the different types of drivers a car can have. We create separate user manuals for them: an Average user manual, a Power user manual, a Child user manual, or a Handicapped user manual. Each type of user manual describes only those features/operations appropriate for the type of driver. For instance, the Power driver may have additional gears to switch to higher speeds that are not available to other types of users. When the car key is passed from an adult to a child, we replace the user manuals; that is called Type Casting.

In Java, casts can occur in three ways:
- upcasting: going up in the inheritance tree, until we reach Object
- upcasting to an interface the class implements
- downcasting until we reach the class the object was created from

Autoboxing/unboxing

Methods in the java.lang.Object class are inherited, and thus shared in common by all classes.

The clone method

The java.lang.Object.clone() method returns a new object that is a copy of the current object. Classes must implement the marker interface java.lang.Cloneable to indicate that they can be cloned.

The equals method

The finalize method

Keywords

break
Jumps (breaks) out of a loop. Also used in switch statements. For example: See also:

byte

case
This is part of the switch statement, to find if the value passed to the switch statement matches a value followed by case. For example:

catch

const
const is a reserved keyword, presently not being used. In other programming languages, such as C, const is often used to declare a constant. However, in Java, final is used instead.

continue
continue is a Java keyword. It skips the remainder of the loop and continues with the next iteration. For example: this results in 0 1 2 3 4 6 7. See also:

default
default is a Java keyword. This is an optional part of the switch statement, which only executes if none of the above cases are matched. See also:

do
It starts a do-while looping block.
The do-while loop is functionally similar to the while loop, except the condition is evaluated after the statements execute. Syntax: For example: See also:

double
double is a keyword which designates the 64-bit floating point primitive type. The java.lang.Double class is the nominal wrapper class when you need to store a double value but an object reference is required. Syntax: double <variable-name> = <float-value>; For example: See also:

else

enum
This enumeration constant can then be passed in to methods: An enumeration may also have parameters: It is also possible to let an enumeration implement interfaces other than java.lang.Comparable and java.io.Serializable, which are already implicitly implemented by each enumeration:
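The example code for these enumeration features did not survive extraction; below is a small illustrative sketch of an enumeration with a parameter and a method (the Season names and values are assumptions, not the original listing):

```java
public class Seasons {
    // An enumeration whose constants carry a parameter and expose a method.
    enum Season {
        WINTER(0), SPRING(1), SUMMER(2), FALL(3);

        private final int order;

        // The enum constructor receives the parameter for each constant.
        Season(int order) {
            this.order = order;
        }

        int order() {
            return order;
        }
    }

    public static void main(String[] args) {
        // The enumeration constant can be passed around like any value.
        System.out.println(Season.SUMMER.order());
    }
}
```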
https://en.wikibooks.org/wiki/Java_Programming/Print_version
In this blog we will discuss traits and how they can help you beautify your code with multiple inheritance.

Traits

Traits are a fundamental unit of code reuse in Scala. A trait encapsulates method and field definitions, which can be reused by mixing it into classes. The two most important concepts about traits are:

- Widening from a thin interface to a rich interface
- Defining stackable modifications

trait Philosophical {
  def philosophize() {
    println("I consume memory, therefore I am!")
  }
}

A trait can be mixed into a class using either the extends or with keyword.

class Frog extends Philosophical {
  override def toString = "green"
}

Frog is a class which would normally be a direct child of AnyRef, but since it extends a trait which itself extends AnyRef, Frog's direct parent is Philosophical, which in turn extends AnyRef. Frog does not have a direct connection to AnyRef; it first extends Philosophical, which further extends AnyRef.

Extending Traits

A trait can be extended by other traits, concrete classes, abstract classes and case classes as well.

Traits extending a trait

trait Animal {
  def foo() {
    println("This is a type of Animal")
  }
}

trait Dog extends Animal {
  override def toString = "Dog"
}

Abstract classes can extend a trait

abstract class Frog extends Animal {}

In this case, if some methods or fields are declared but not defined in the trait, the subclasses eventually need to override them.

Classes extending a trait

class Frog extends Philosophical {
  override def toString = "green"
}

Thin interface vs. rich interface

One major use of traits is to automatically add methods to a class in terms of methods the class already has. That is, traits can enrich a thin interface, making it into a rich interface.

Rich interface

In the case of a rich interface, we have implementations of the methods, and the user can either use the provided definitions or override them as needed.
Thin interface

In the case of a thin interface, every class which extends it needs to override its methods. With a thin interface, we do not have definitions of the methods or fields, and every time we need to override those methods to provide a definition. Suppose more than one class extends the same trait: with a thin interface, each of them may need to override the same methods and give definitions every time. This can be reduced with the help of a rich interface, where we only need to override those methods whose definitions we want to change.

Stackable modification

A class or trait can extend more than one trait at a time. If all of those traits have the same method and we need to override that method, how do we know which overridden method will be invoked? Traits play an important role here. Stackable modifications state that "super" is accessed dynamically based on how the trait is mixed in, whereas in general super is statically determined.

import scala.collection.mutable.ArrayBuffer

trait WithLegs {
  def legs: String = "Base"
}

trait TwoLegged extends WithLegs {
  override def legs: String = "Two -> " + super.legs
}

trait FourLegged extends WithLegs {
  override def legs: String = "Four -> " + super.legs
}

trait SixLegged extends TwoLegged {
  override def legs: String = "Six -> " + super.legs
}

class ClassB extends FourLegged with TwoLegged {
  override def legs = "B -> " + super.legs
}

res1: String = B -> Two -> Four -> Base

Linearization

Trait linearization is a process which comes into the picture whenever we mix any number of traits and classes into a single class. Scala linearization is a process in which all traits are placed in a linear hierarchy; with this we can solve the diamond problem. Remember that the syntax to mix in traits is as follows: class A extends B with C with D.
The rules for this process are as follows:
- Start from the very first class or trait which is extended and write the linearized hierarchy.
- Take the next trait and write its hierarchy down; now remove all those classes and traits which we have already used in our previous linearized hierarchy.
- Add the remaining traits to the bottom of the linearized hierarchy to create the new linearized hierarchy.
- Repeat the previous step for every trait.
- Place the class itself as the last type extending the linearized hierarchy.

class Beast extends TwoLegged with FourLegged {
  override def legs = super.legs
}

Without the linearization process it would be unclear what super.legs resolves to. Because of the linearization process, the compiler determines that super points to FourLegged. Let's write this down by applying the rules mentioned above:

- Start at the first extended class or trait and write that complete hierarchy down. The first trait is TwoLegged, so this leads to: TwoLegged -> AnyRef -> Any
- Take the next trait and write its hierarchy down. The next trait is FourLegged and the hierarchy is: FourLegged -> AnyRef -> Any
- Now remove all classes/traits from this hierarchy which are already in the linearized hierarchy. This removes AnyRef and Any, so we are left with: FourLegged
- Add the remaining traits to the bottom of the linearized hierarchy to create the new linearized hierarchy: FourLegged -> TwoLegged -> AnyRef -> Any
- Repeat for every trait. There are no more traits, so we are done.
- Place the class itself as the last type extending the linearized hierarchy: Beast -> FourLegged -> TwoLegged -> AnyRef -> Any

The graphical representation looks like the picture below: This hierarchy clearly shows that calling super from Beast will resolve to the FourLegged trait, and thus the value of legs will be "Four -> Two -> Base".

Hope this blog helps you. Happy blogging!
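The Beast walkthrough can be checked directly in the REPL; this sketch restates the traits from earlier in the post so it is self-contained:

```scala
trait WithLegs {
  def legs: String = "Base"
}

trait TwoLegged extends WithLegs {
  override def legs: String = "Two -> " + super.legs
}

trait FourLegged extends WithLegs {
  override def legs: String = "Four -> " + super.legs
}

class Beast extends TwoLegged with FourLegged {
  override def legs: String = super.legs
}

object LinearizationDemo extends App {
  // Linearization: Beast -> FourLegged -> TwoLegged -> WithLegs,
  // so super.legs in Beast resolves to FourLegged first.
  println(new Beast().legs) // Four -> Two -> Base
}
```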
https://blog.knoldus.com/traits-the-beauty-of-scala/
Missing 1 required positional argument

What's wrong with my code? It doesn't work inside a Sage Math Cell.

gp.eval("xmat(r,n) = [2*x, -1; 1, 0]*Mod(1,x^r-1)*Mod(1,n);")
gp.eval("smallestr(n)={if(n==1 || n%2==0, return(0));forprime(r = 3, oo,my(u=n%r);if (u==0 && r < n, return(0));if (u!=0 && u!=1 && u!=r-1, return(r)));}")
gp.eval("myisprime(n)={my(r = smallestr(n));if (r == 0, return(n == 2));my(xp = xmat(r,n)^n*[x,1]~);xp[2] == Mod(x*Mod(1,n),x^r-1)^n;}")

def Check(n):
    if gp.function_call("myisprime",[n]).sage()==true:
        return ("Prime!")
    else:
        return ("Composite!")

@interact
def _(n=2017,action=selector(['Check'],buttons=True,label='')):
    action = eval(action)
    print (action())

I am getting the following message:

TypeError: Check() missing 1 required positional argument: 'n'

EDIT

You can run this code here.

As it's written, the indentation is bad: the four lines after def Check(n): should be indented four more spaces.

@JohnPalmieri You are right, but that's not the issue. Please see the edit.

@JohnPalmieri I have found the problem. The last line should read: print(action(n)).

With the two corrections, why is there no slider present on SageCell while there is one in the Jupyter notebook?
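The TypeError itself is easy to reproduce outside of Sage; this minimal sketch (function names and values are illustrative) shows why calling the selected action with no argument fails, and the fix from the comments:

```python
def check(n):
    # Stand-in for the real primality check; illustrative only.
    return "Prime!" if n == 2017 else "Composite!"

# Selecting the function and calling it with no argument reproduces
# the interact's error.
action = check
try:
    action()  # TypeError: check() missing 1 required positional argument: 'n'
except TypeError as e:
    print(e)

# The fix from the comments: pass n through to the selected action.
print(action(2017))  # Prime!
```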
https://ask.sagemath.org/question/49581/missing-1-required-positional-argument/
Bugtraq mailing list archives Here's a set of patches by someone here at RPI. I have not tested them, and make no guarantees, but apparently they work. From rpi!marcus.its.rpi.edu!lohnen Fri Apr 7 10:09:51 1995 Path: rpi!marcus.its.rpi.edu!lohnen From: lohnen () marcus its rpi edu (Nils Lohner) Newsgroups: rpi.os.linux,rpi.talk.linux Subject: SATAN: Linux Port/Hack Date: 5 Apr 1995 17:45:31 GMT Organization: its Lines: 210 Message-ID: <3lukvr$aks () usenet rpi edu> NNTP-Posting-Host: marcus.its.rpi.edu X-Newsreader: TIN [version 1.2 PL2] Xref: rpi rpi.os.linux:272 rpi.talk.linux:68 Linux port for SATAN by Nils Lohner lohnen () rpi edu This is less of a port and more of a quick hack to make it compile properly. I am not guaranteeing anything except that it compiles on my Linux box. I am currently running version 1.2.0 of the kernel. It did successfully scan etc. and find vulnerabilities, so I am assuming that these fixes do make it work successfully. WHAT TO DO: - delete the first 6 lines from ./reconfig - REASON: sh doesn't like them - run reconfig by typing 'perl reconfig' SATAN will now configure itself if you have perl5 or higher installed successfully. - make a new header file 'satan-1.0/include/netinet/ip_icmp_lin.h' - include this header file in the following three files: #include "../../include/netinet/ip_icmp_lin.h" src/port_scan/tcp_scan.c src/port_scan/udp_scan.c src/fping/fping.c NOTE: do NOT comment out the existing include line!! Here, the existing header file is being supplemented and not replaced! This header file does several things: - it defines ICMP_MINLEN - it fixes a few ICMP name incompatibilities - it makes the proper 'struct ip' as needed by SATAN - note: check the endianness in the file if it is not little endian!!!
  - it makes the proper 'struct icmp' as needed by SATAN
- make a new header file 'satan-1.0/include/netinet/udp_lin.h'
- include this header file in the following file:
      #include "../../include/netinet/udp_lin.h"
    src/port_scan/udp_scan.c
  NOTE: In this case _DO_ comment out the current line, or you will get
  udphdr redefined. In this case the header file is being replaced, and not
  supplemented as before.
      #include <netinet/udp.h>
- now do a 'make linux' from the satan-1.0 directory.
  - it will bomb out in the src/misc directory
  - go to the src/misc directory and simply type 'make'
  - now it will make
  - go back up to the satan-1.0 dir and do a 'make linux' again
  - it will bomb out in the src/nfs-chk directory
  - go to the src/nfs-chk directory and simply type 'make'
  - now it will make
  - go back up to the satan-1.0 dir and do a 'make linux' again

It will finish compiling. Set dont_use_nslookup=1 if it asks you to set
dont_use_dns. Now go ahead and scan! Please use this tool reasonably...

Nils Lohner
lohnen () rpi edu

--- cut here for file include/netinet/ip_icmp_lin.h --------------------------

/* this value was taken from ip_icmp.h from an RS-6000 */
#define ICMP_MINLEN 8

/* all of these exist, just under a different name */
#define ICMP_UNREACH          ICMP_DEST_UNREACH
#define ICMP_UNREACH_NET      ICMP_NET_UNREACH
#define ICMP_UNREACH_PROTOCOL ICMP_PROT_UNREACH
#define ICMP_UNREACH_PORT     ICMP_PORT_UNREACH
#define ICMP_UNREACH_HOST     ICMP_HOST_UNREACH

/* this structure was taken from an RS-6000 */
/* ip_v and ip_hl are defined elsewhere as well, but necessary here */
struct ip {
#if __BYTE_ORDER == __LITTLE_ENDIAN
        unsigned ip_hl:4,       /* header length */
                 ip_v:4;        /* version */
#endif
/*#if __BYTE_ORDER == __BIG_ENDIAN*/
/*      unsigned ip_v:4,  */    /* version */
/*               ip_hl:4; */    /* header length */
/*#endif*/
        u_char  ip_tos;         /* type of service */
        u_short ip_len;         /* total length */
        u_short ip_id;          /* identification */
        u_short ip_off;         /* fragment offset field */
#define IP_DF 0x4000            /* dont fragment flag
                                                         */
#define IP_MF 0x2000            /* more fragments flag */
        u_char  ip_ttl;         /* time to live */
        u_char  ip_p;           /* protocol */
        u_short ip_sum;         /* checksum */
        struct  in_addr ip_src, ip_dst; /* source and dest address */
};

/* this structure was taken from an RS-6000 */
/*
 * Structure of an icmp header.
 */
struct icmp {
        u_char  icmp_type;      /* type of message, see below */
        u_char  icmp_code;      /* type sub code */
        u_short icmp_cksum;     /* ones complement cksum of struct */
        union {
                u_char ih_pptr;                 /* ICMP_PARAMPROB */
                struct in_addr ih_gwaddr;       /* ICMP_REDIRECT */
                struct ih_idseq {
                        n_short icd_id;
                        n_short icd_seq;
                } ih_idseq;
                int ih_void;
        } icmp_hun;
        union {
                struct id_ts {
                        n_time its_otime;
                        n_time its_rtime;
                        n_time its_ttime;
                } id_ts;
                struct id_ip {
                        struct ip idi_ip;
                        /* options and then 64 bits of data */
                } id_ip;
                u_long id_mask;
                char id_data[1];
        } icmp_dun;
#define icmp_mask icmp_dun.id_mask
#define icmp_data icmp_dun.id_data
};

--- end cut here for file include/netinet/ip_icmp_lin.h ----------------------

--- cut here for file include/netinet/udp.h ----------------------------------

/*
 * INET         An implementation of the TCP/IP protocol suite for the LINUX
 *              operating system.  INET is implemented using the BSD Socket
 *              interface as the means of communication with the user level.
 *
 *              Definitions for the UDP protocol.
 *
 * Version:     @(#)udp.h       1.0.2   04/28/93
 *
 * Author:      Fred N. van Kempen, <waltje () uWalt NL Mugnet ORG>
 *
 *              This program is free software; you can redistribute it and/or
 *              modify it under the terms of the GNU General Public License
 *              as published by the Free Software Foundation; either version
 *              2 of the License, or (at your option) any later version.
 */
#ifndef _LINUX_UDP_H
#define _LINUX_UDP_H

/*
struct udphdr {
        unsigned short source;
        unsigned short dest;
        unsigned short len;
        unsigned short check;
};
*/

/* these are also taken from an RS-6000 */
struct udphdr {
        unsigned short uh_sport;        /* source port */
        unsigned short uh_dport;        /* destination port */
        unsigned short uh_ulen;         /* udp length */
        unsigned short uh_sum;          /* udp checksum */
};

#endif /* _LINUX_UDP_H */

--- end cut here for file include/netinet/udp.h ------------------------------

--
- Nils Lohner                          internet: lohnen () rpi edu
Rensselaer Polytechnic Institute
^`'~*-,._.^`'~*-,._.^`'~*-,._.^`'~*-,._.^`'~*-,._.^`'~*-,._.^`'
Josh Wilmes (wilmesj () rpi edu)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"Things are more like they are now than they ever were before."
  - Dwight D. Eisenhower
^`'~*-,._.^`'~*-,._.^`'~*-,._.^`'~*-,._.^`'~*-,._.^`'~*-,._.^`'
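A quick way to sanity-check hand-written headers like these is to compare the struct layouts against the on-wire sizes they are meant to describe. This check is not part of the original post; it just encodes the same field layouts as `struct` format strings:

```python
import struct

# On-the-wire sizes implied by the headers above ("!" = network byte
# order, no padding):
# ICMP prefix: icmp_type, icmp_code, icmp_cksum, 4-byte union word
ICMP_MINLEN = struct.calcsize("!BBHI")      # should be 8, as #defined
# UDP header: uh_sport, uh_dport, uh_ulen, uh_sum
UDP_HDRLEN = struct.calcsize("!HHHH")       # a UDP header is 8 bytes
# Fixed IPv4 header: ip_hl/ip_v, ip_tos, ip_len, ip_id, ip_off,
# ip_ttl, ip_p, ip_sum, ip_src, ip_dst
IP_HDRLEN = struct.calcsize("!BBHHHBBHII")  # 20 bytes without options

print(ICMP_MINLEN, UDP_HDRLEN, IP_HDRLEN)   # 8 8 20
```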
http://seclists.org/bugtraq/1995/Apr/50
Support for interactive validation of form elements, HTML5 specs section
4.10.15.2, is necessary. Some tips from other bugs this depends on:

1. Form control elements can be in a "no-validate state" that is controlled
by the "novalidate" and "formNoValidate" attributes, plus some more
conditions. This snippet could be of help:

bool HTMLFormControlElement::isInNoValidateState() const
{
    return (isSuccessfulSubmitButton() && formNoValidate()) || m_form->novalidate();
}

2. HTMLFormElement::checkValidity() needs to be adapted to deal with
"unhandled invalid controls" (as per TODO comment). Currently it just
iterates over form elements calling checkValidity() (that fires the invalid
event, as per specs), but it must also return a list of invalid form
controls that haven't been handled through the invalid event. The following
snippet might be of some help (was part of the proposed patch for bug 27452):

bool checkValidity(Vector<HTMLFormControlElement*>* unhandledInvalidControls = 0);

bool HTMLFormElement::checkValidity(Vector<HTMLFormControlElement*>* unhandledInvalidControls)
{
    Vector<HTMLFormControlElement*> invalidControls;
    for (unsigned i = 0; i < formElements.size(); ++i) {
        HTMLFormControlElement* control = formElements[i];
        if (control->willValidate() && !control->validity()->valid())
            invalidControls.append(control);
    }
    if (invalidControls.isEmpty())
        return true;
    for (unsigned n = 0; n < invalidControls.size(); ++n) {
        HTMLFormControlElement* invalidControl = invalidControls[n];
        bool eventCanceled = invalidControl->dispatchEvent(eventNames().invalidEvent, false, true);
        if (eventCanceled && unhandledInvalidControls)
            unhandledInvalidControls->append(invalidControl);
    }
    return false;
}

I have started implementation.

(In reply to comment #2)
> I have started implementation.

Good! I think you'll be needing support for the validationMessage for
interactive validation step 3 (4.10.15.2) [bug 27959], I'm gonna speed it up.

(In reply to comment #3)
> Good! I think you'll be needing support for the validationMessage for
> interactive validation step 3 (4.10.15.2) [bug 27959], I'm gonna speed it up.

That's right! The implementation requires validationMessage().

Created attachment 38918 [details]
Demo patch

A workable demo patch. This is not ready to be reviewed. It has no tests and
it depends on bug#27959 and bug#28868. I'll use Balloon Tooltip for Windows
to show validation messages. I don't know if Mac OS has a corresponding
control.

Created attachment 40698 [details]
Incomplete patch (rev.0)

I'd like to ask for comments on the patch though it is incomplete. The patch
will add the following behavior:
- Show/hide a validation message when an invalid form control gets/loses the
  focus. The way to show messages depends on each of the platforms.
- Prevent a form submission if the form has invalid controls.

TODO:
- Add tests
- Build file changes for other platforms
- Provide ChangeLog

Comment on attachment 40698 [details]
Incomplete patch (rev.0)

As you say, the patch is incomplete. No ChangeLog. No tests. If you'd like
feedback on this patch, I recommend asking the relevant people on IRC or
email. Having this patch in the review queue just makes it harder to review
complete patches.

I split this into 3 patches. Bug#31716, Bug#31718 and this will have patches.

I think that form validation should be disabled until:
- a solution is found for the compatibility problem with the "required"
  attribute name;
- UI for correcting problems is implemented (as tracked by bug 31718/bug
  40747).
Currently the user experience is just horrible.

I agree with Alexey. In 2008, I wrote an app to the HTML5 spec, then
backported the validation API to JavaScript to support then-existing
browsers. I've seen the app break a couple of times due to changes in WebKit
(which Tamura is always eager to fix). If you go to without the WebKit
validator, the page scrolls smoothly to take you to an invalid element on
form submission. Since the validation constraints have been added, the page
abruptly jumps to the first invalid <input/>. I'm sure there are users who
don't understand what's going on when this happens. For that matter, I
wasn't even sure what was going on when I was debugging #40591.

*** Bug 80419 has been marked as a duplicate of this bug. ***
*** Bug 136595 has been marked as a duplicate of this bug. ***
*** Bug 142817 has been marked as a duplicate of this bug. ***

This ticket was created as an enhancement, but the last few tickets which
were marked as duplicates of this are more critical. It seems this problem
now affects normal validation and not only the JavaScript API. IMHO the
priority/importance of this ticket should be increased.

This always affected "normal validation", not just the JavaScript API, and
that validation is a new feature, and one that is not implemented yet in
WebKit. I understand that you would like to see the feature!

*** Bug 158331 has been marked as a duplicate of this bug. ***

Is anyone currently working on this? It was submitted 7 years ago, and
Safari is now the only browser whose current version will submit a form with
invalid constraints.

<rdar://problem/28636170>

*** This bug has been marked as a duplicate of bug 164382 ***

(In reply to comment #18)
> Is anyone currently working on this? It was submitted 7 years ago, and
> Safari is now the only browser whose current version will submit a form
> with invalid constraints.

Safari Technology Preview 19 has this enabled:
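The two-pass checkValidity() algorithm quoted in comment 1 — collect the invalid controls, then fire "invalid" events and report only those whose event no listener handled — is language-independent. Here is an illustrative Python restatement; the class and its flags are invented for the sketch, and "handled" follows the spec's reading that a canceled invalid event counts as handled:

```python
# Illustrative sketch of the two-pass checkValidity() from comment 1.
class Control:
    def __init__(self, name, valid, handles_invalid_event=False):
        self.name = name
        self.valid = valid
        self.handles_invalid_event = handles_invalid_event

    def dispatch_invalid_event(self):
        # True means no listener canceled the event (it went unhandled)
        return not self.handles_invalid_event

def check_validity(controls, unhandled_invalid_controls):
    # pass 1: collect controls that fail constraint validation
    invalid = [c for c in controls if not c.valid]
    if not invalid:
        return True
    # pass 2: fire "invalid" events; keep the ones nobody handled
    for c in invalid:
        if c.dispatch_invalid_event():
            unhandled_invalid_controls.append(c)
    return False

unhandled = []
ok = check_validity([Control("email", valid=False)], unhandled)
print(ok, [c.name for c in unhandled])  # False ['email']
```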
https://bugs.webkit.org/show_bug.cgi?format=multiple&id=28649
Profiling and Optimizing Python
Pages: 1, 2, 3

[profiler output fragments: ncalls, tottime, percall, cumtime; the body_seg
run of segment; sort_stats()]

I put

import timeit
t = timeit.Timer("main()", "from __main__ import main")
res_set = t.repeat(10, 1)
print res_set, "::", min(res_set)

in my if __name__ == "__main__": block and ran it.

repeat(10, 1) ... repeat()

Related Reading: Python Cookbook, by Alex Martelli, Anna Martelli
Ravenscroft, David Ascher

[further fragments: body_seg(), aggregate_parser, segment(), NoOpEDIHandler,
pass]

Pages: 1, 2, 3  Next Page
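The timeit fragment above, reassembled into a runnable Python 3 form. The article's own `main()` is not shown on this page, so a stand-in workload is used here, and `globals=globals()` replaces the original `"from __main__ import main"` setup string so the snippet also works when it is not the top-level script:

```python
import timeit

def main():
    # stand-in workload; the article times its own main()
    sum(range(100000))

# the article's form was: timeit.Timer("main()", "from __main__ import main")
t = timeit.Timer("main()", globals=globals())
res_set = t.repeat(10, 1)   # 10 repetitions of 1 call each
print(res_set, "::", min(res_set))
```

Taking `min(res_set)` is the usual convention: the fastest of the repetitions is the least disturbed by other load on the machine.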
http://www.onlamp.com/pub/a/python/2005/12/15/profiling.html?page=2
21061/is-there-a-way-to-run-python-on-android

We are working on an S60 version and this platform has a nice Python API.
However, there is nothing official about Python on Android, but since Jython
exists, is there a way to let the snake and the robot work together?

YES! An example via Matt Cutts via SL4A -- "here's a barcode scanner written
in six lines of Python code:

import android
droid = android.Android()
code = droid.scanBarcode()
isbn = int(code['result']['SCAN_RESULT'])
url = " % isbn
droid.startActivity('android.intent.action.VIEW', url)
https://www.edureka.co/community/21061/is-there-a-way-to-run-python-on-android
iSequenceTrigger Struct Reference

A sequence trigger. More...

#include <ivaria/engseq.h>

Detailed Description

A sequence trigger. When all conditions in a trigger are true it will run a
sequence. Note that after the successful firing of a trigger it will
automatically be disabled.

Main creators of instances implementing this interface:
Main ways to get pointers to this interface:
Main users of this interface:

Definition at line 527 of file engseq.h.

Member Function Documentation

Condition: true if camera is in some sector.
Condition: true if camera is in some sector and bbox.
Condition: true if camera is in some sector and sphere.

Condition: light change. Call this to add a trigger which fires a sequence
when a light gets darker than a certain value or lighter than a certain
value, or whenever a light changes.

Condition: manual trigger. Call this to add a trigger that requires manual
confirmation. The 'Trigger()' function can then be used later to actually do
the trigger.

Condition: true if clicked on a mesh.
Condition: true if (part of) sector is visible.

This function returns true if the trigger conditions are valid. This only
works if TestConditions() has been called, and it doesn't work immediately
after TestConditions() because TestConditions() needs to take some time
before it actually can retest the conditions.

Clear all conditions.

Force the sequence of this trigger to be fired right now. Note that this
will fire even if the trigger is disabled, and conditions are completely
ignored. Also, calling ForceFire() will NOT cause the trigger to become
disabled (as opposed to when a trigger normally fires). So if you want to
make sure the trigger does not accidentally fire again right after firing it
you should disable the trigger (and possibly let the sequence enable it
again). Note that ForceFire() still respects the fire delay with which the
sequence was registered.
If you use 'now' == true then this delay will be ignored and the sequence
will be started immediately.

Get the attached sequence.
Get the parameter block.
Get enabled/disabled state.
Query object.

Enable/disable this trigger. Triggers start enabled by default.

Set the parameter block to use for the sequence when it is fired.

Test the conditions of this trigger every 'delay' milliseconds. Use this in
combination with CheckState(). If 'delay' == 0 then this testing is disabled
(default).

Trigger the manual condition.

The documentation for this struct was generated from the following file:

Generated for Crystal Space 1.4.1 by doxygen 1.7.1
http://www.crystalspace3d.org/docs/online/api-1.4/structiSequenceTrigger.html
C++ Multi Level Inheritance Program

Hello Everyone!

In this tutorial, we will learn how to demonstrate the concept of
Multi-Level Inheritance, in the C++ programming language.

To understand the concept of Multi-Level Inheritance in CPP, we will
recommend you to visit here: C++ Types of Inheritance, where we have
explained it from scratch.

Code:

#include <iostream>
using namespace std;

//Class Volume to compute the Volume of the Cuboid
class Volume
{
  public:
    float volume(float l, float b, float h)
    {
        return (l * b * h);
    }
};

//Class Area to compute the surface Area of the Cuboid
class Area
{
  public:
    float area(float l, float b, float h)
    {
        return (2 * (l * b + l * h + b * h));
    }
};

//The Cuboid class inherits (is derived) from two different classes, Volume and Area.
class Cuboid: private Volume, private Area
{
  private:
    float length, breadth, height;

  public:
    //Default Constructor of the Cuboid Class
    Cuboid(): length(0.0), breadth(0.0), height(0.0) {}

    void getDimensions()
    {
        cout << "\nEnter the length of the Cuboid: ";
        cin >> length;
        cout << "\nEnter the breadth of the Cuboid: ";
        cin >> breadth;
        cout << "\nEnter the height of the Cuboid: ";
        cin >> height;
    }

    //Method to calculate the volume of the Cuboid by using the Volume class
    float volume()
    {
        //Calls the volume() method of class Volume and returns it.
        return Volume::volume(length, breadth, height);
    }

    //Method to calculate the area of the Cuboid by using the Area class
    float area()
    {
        //Calls the area() method of class Area and returns it.
        return Area::area(length, breadth, height);
    }
};

//Defining the main method to access the members of the class
int main()
{
    cout << "\n\nWelcome to Studytonight :-)\n\n\n";
    cout << " ===== Program to demonstrate the concept of Multiple Inheritance in CPP ===== \n\n";

    //Declaring the Class objects to access the class members
    Cuboid cuboid;

    cout << "\nCalling the getDimensions() method from the main() method:\n\n";
    cuboid.getDimensions();
    cout << "\n\n";

    cout << "\nArea of the Cuboid computed using Area Class is : " << cuboid.area() << "\n\n\n";
    cout << "Volume of the Cuboid computed using Volume Class is: " << cuboid.volume();
    cout << "\n\n\n";

    return 0;
}

Output:

We hope that this post helped you develop a better understanding of the
concept of Multi-Level Inheritance in C++. For any query, feel free to reach
out to us via the comments section down below.

Keep Learning : )
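For comparison, the same shape — one derived class drawing `volume()` and `area()` from two independent base classes — restated in Python. This sketch is not part of the original tutorial; the mixin names are invented:

```python
# The Cuboid example above in Python: one derived class pulling
# behavior from two independent base classes.
class VolumeMixin:
    def volume(self):
        return self.l * self.b * self.h

class AreaMixin:
    def area(self):
        return 2 * (self.l * self.b + self.l * self.h + self.b * self.h)

class Cuboid(VolumeMixin, AreaMixin):
    def __init__(self, l, b, h):
        self.l, self.b, self.h = l, b, h

c = Cuboid(2.0, 3.0, 4.0)
print(c.volume())  # 24.0
print(c.area())    # 52.0
```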
https://studytonight.com/cpp-programs/cpp-multi-level-inheritance-program
going to probe some of you for some information: I'm looking for some code
snippets to load, into a C program, another language module, and execute a
function by name... any language will do, I'm just looking for a point of
reference... if possible, and you know how, I'd like to know how to convert
a simple native type in the language, to a C struct... I'll make up a
language to demonstrate what I'm looking for:

#!/usr/bin/env phrak
#phrak-script
string hello_whirled() {
    string s = "hello";
    return s + " whirled";
}

#include <phrak-script.h>
...
phrakscript_init();
phrakscript* s = phrak_script_load("test.phrak");
phrakscript_obj* o = s->get_symbol("hello_whirled");
if(o->is_callable())
    std::cout << o->call().to_string();
phrakscript_free_obj(o);
phrakscript_free(s);
...

Offline

You're writing a freaking scripting language now? what happened to the
window manager???? Have a look at that yet? I can tell you exactly how to
embed groovy into java.... :-D

Dusty

Offline

no, the scripting language was a demo... I'm not writing one... Right now
I'm going to work on an in-depth module framework... the core of the WM is
pretty easy, if I have it conceptualized correctly... it's not hard to make
a C/C++ only module framework, but I'd like to understand how one accesses
external languages so I can design it properly... right now, I'm torn
between the following styles of "linking" modules to the core:

- apache style: precisely named C structures which contain function pointers
- xchat style: precisely named functions, loaded individually
- misc ideas: one single "switch" type function which calls the appropriate
  module function (??)

basically, before I finalize the "glue", I'd like to know if that glue is
extensible...

Offline
The misc idea is of a different kind than the other two, as it only tells something about the function and nothing about how the functions are found. It's a way to get in a big mess if you don't look out, so watch out with it. Server client style: The X way: through local sockets. Some other IPC like message queues or whatever. Fork + exec and pipes. Anyway, I wouldn't add support for other languages directly, but let a language specific wrapper plugin handle it (though if you use the last of the above styles then the core doesn't care about the client format, as long as it runs). Offline yeah, I debated using IPC mechanisms for a while but decided it added a layer I didn't really want... however, the more I think about incorporating in other languages, the more ipc tends to appeal (for instance, using some basic IPC, I can even create modules in bash) Pros and cons of going with formatted fifo ipc? not really sure atm. Using a messaging mechanism wouldn't work in the way that I would like it to.... I believe I'm close to something which will allow for loadable python modules... and perhaps perl could work too, given enough research... Offline
https://bbs.archlinux.org/viewtopic.php?id=11004
transform libreoffice documents to supported formats

Project description

Overview

py3o.renderclient is a client library that can be used to easily integrate
with the py3o.renderserver to transform LibreOffice/OpenOffice documents
into PDF. This architecture was designed to avoid the pyuno dependency in
the client program. Depending on pyuno is really complicated on some
platforms.

Using this client you can leverage a distant (or local) renderserver to
transform documents for you with nearly no code.

Example

Here is a sample minimalistic client that assumes you have a renderserver
running on localhost:

from py3o.renderclient import RenderClient

client = RenderClient('localhost', 8994)
client.login('toto', 'plouf')
client.render('py3o_example.odt', 'py3o_example.pdf', 'pdf')

For the moment the login/password phase is not checked by the server, but we
aim to add an authentication layer in the future.

Status

Since we are still in pre-1.0 releases we may change the API and add more
functionality, so now is a good time to give feedback and feature requests.
https://pypi.org/project/py3o.renderclient/
missing error when a slot-definition is created with a bogus or missing name

Bug Description

CL-USER> (make-instance 'sb-mop:
#<SB-MOP:
CL-USER> (make-instance 'sb-mop:
#<SB-MOP:
CL-USER> (make-instance 'sb-mop:
#<SB-MOP:

This is unfortunately the wrong place to fix this. Consider:

;;; Perfectly legal
(defclass foo () ((name :initarg :name)))
(make-instance 'foo :name nil)

;;; Should signal an error
(make-instance (find-class 'sb-mop:

I think the right place to do this check (and others mandated by AMOP, see
"Initialization of Slot Definition Metaobjects" at http://

Nikodemus, thanks for pointing me in the right direction. Here's a new patch
for this bug along with my rationale for it, given that it is a compliance
bug. Unfortunately the patch has a rather drastic effect on performance (see
timing at the end) so you might want to keep it on the back burner. Any
suggestions welcome.

dpANS mentions in section "7.1.6 Initialize-

    During initialization, initialize-instance is invoked after the
    following actions have been taken: ... * The validity of the defaulted
    initialization argument list has been checked. If any of the
    initialization arguments has not been declared as valid, an error is
    signaled. ...

But, on reading section "7.1.2 Declaring the Validity of Initialization
Arguments" it seems that the narrow definition of the term "valid" applies
to a different context and might not apply to the checks of the :NAME
argument as required by the MOP for slot-definition initialization. If this
is indeed the case, then, as previously suggested by you,
initialize-instance is a more appropriate method to effect this check than
make-instance.

Also, wrt. defining the phrase "a symbol which can be used as a variable name."
as mentioned in the text of this bug I could only find http://

Finally, I was not sure whether the phrasing of the MOP requires the use of
"signal" or "error" but I opted for "error" as otherwise, if
*break-on-signal* is null, the restrictions of the MOP are silently ignored,
which didn't seem appropriate.

diff --git a/src/pcl/init.lisp b/src/pcl/init.lisp
index a4c3dad..809c04f 100644
--- a/src/pcl/init.lisp
+++ b/src/pcl/init.lisp
@@ -56,6 +56,14 @@
     finally (return (append supplied-initargs default-

+(defmethod initialize-instance :before ((class slot-definition) &rest initargs)
+  (let* ((name-arg (member :name initargs :test #'eq))
+         (name-value (cadr name-arg)))
+    (unless name-arg
+      (error "INITIALIZE-
+    (if (constantp name-value)
+        (error "INITIALIZE-
+
 (defmethod initialize-instance ((instance slot-object) &rest initargs)
   (apply #'shared-initialize instance t initargs))

=======

Before defining the :before method

* (time (dotimes (x 100000000) (make-instance 'sb-mop:

Evaluation took:
  19.969 seconds of real time
  19.973249 seconds of total run time (19.725233 user, 0.248016 system)
  [ Run times consist of 2.629 seconds GC time, and 17.345 seconds non-GC time. ]
  100.02% CPU
  39,837,001,524 processor cycles
  14,399,961,184 bytes consed

After defining the :before method

* (time (dotimes (x 100000000) (make-instance 'sb-mop:

Evaluation took:
  72.245 seconds...
These changes seem to fix it for me, --- src/pcl/init.lisp 2009-06-02 11:33:52.000000000 -0700 +++ ../init.lisp 2009-07-03 05:18:06.000000000 -0700 @@ -1,3 +1,4 @@ + ;;;; This file defines the initialization and related protocols. ;;;; This software is part of the SBCL system. See the README file for @@ -25,7 +26,10 @@ (in-package "SB-PCL") -(defmethod make-instance ((class symbol) &rest initargs) +(defmethod make-instance ((class symbol) &rest initargs &key (name nil) &allow-other-keys) + (declare (type (or symbol null) name)) + (unless name + (error "The name slot is unspecified.")) (apply #'make-instance (find-class class) initargs)) (defmethod make-instance ((class class) &rest initargs)
https://bugs.launchpad.net/sbcl/+bug/309072
1.0.0.BUILD-SNAPSHOT

Copies of this document may be made for your own use and for distribution to
others, provided that you do not charge any fee for such copies and further
provided that each copy contains this Copyright Notice, whether distributed
in print or electronically.

- Spring Cloud Spinnaker Documentation
- Getting started
- Debugging Your Installation
- Appendices

Spring Cloud Spinnaker Documentation

This section provides a brief overview of Spring Cloud Spinnaker reference
documentation. Think of it as a map for the rest of the document. You can
read this reference guide in a linear fashion, or you can skip sections if
something doesn't interest you.

About the documentation

The Spring Cloud Spinnaker reference guide is available as html and pdf
documents. The latest copy is available at
docs.spring.io/spring-cloud-spinnaker/docs/current/reference.

Getting started

Interested in deploying applications to the cloud with complex rollouts,
sophisticated notifications (Slack, Email, etc.)? Then this document is for
you. It will coach you on using this application to install Spinnaker.

Introducing Spring Cloud Spinnaker

Spinnaker is a multi-cloud continuous deployment platform released by
Netflix with contributions from Pivotal, Google, Microsoft and others.
Spring Cloud Spinnaker is an installation tool meant to gather all the
necessary information and use it to deploy Spinnaker's microservices into a
certified Cloud Foundry installation. It installs the following Spinnaker
components:

Before installing Spinnaker

Before you actually install Spinnaker, you must decide where it will run,
i.e. pick an organization and space.
You also need the following services created as well:

- An instance of Redis in the same space.

Installing Spinnaker

Composed of Boot-based microservices, Spinnaker is highly customizable.
Instead of tuning every single setting, Spring Cloud Spinnaker lets you pick
several options from a web page, and will in turn apply the needed property
settings for you.

To get the bits, visit cloud.spring.io/spring-cloud-spinnaker. Included are
download directions to run it locally, to upload it somewhere in your CF, or
to run a hosted solution from Pivotal Web Services.

Settings

After installing Spring Cloud Spinnaker, whether locally or in PCF
somewhere, you will be faced with a collection of settings. This may look
like a lot, but compared to ALL the options Spinnaker comes with, this is a
vast simplification.

The settings are split into two parts: Target and Settings. Target describes
information needed to "cf push" all the Spinnaker modules. Settings is
information used to apply the right property settings after installation so
that Spinnaker can do its own deployments.

The following settings are needed to install Spinnaker modules.

The following information is used by Spinnaker after installation to do its
job.
Logs, logs, and more logs When you are attempting to install Spinnaker, there are logs everywhere. The key is to find the right ones. Spring Cloud Spinnaker can log information about the deployment process, but once completed, it doesn’t gather any more information Each Spinnaker module will print out its own logs. Assuming you installed Spinnaker with a namespace of "test", you can gather information like this… $ cf logs clouddriver-test In another shell $ cf restart clouddriver-test If you watch the "cf logs" command, you should see your copy of clouddriver start up. If there’s a major issue, it should render an error, especially it it’s missing settings. 2016-09-06T11:32:31.91-0500 [API/0] OUT Updated app with guid 39bc3f7b-ee7f-45f2-bac9-053069092c7a ({"state"=>"STARTED"}) 2016-09-06T11:32:32.22-0500 [APP/0] OUT Exit status 143 2016-09-06T11:32:32.25-0500 [CELL/0] OUT Creating container 2016-09-06T11:32:32.25-0500 [CELL/0] OUT Destroying container 2016-09-06T11:32:32.71-0500 [CELL/0] OUT Successfully destroyed container 2016-09-06T11:32:33.02-0500 [CELL/0] OUT Successfully created container 2016-09-06T11:32:39.29-0500 [CELL/0] OUT Starting health monitoring of container :: Spring Boot :: (v1.2.8.RELEASE) 2016-09-06T11:32:44.85-0500 [APP/0] OUT 2016-09-06 16:32:44.851 INFO 18 --- [ main] pertySourceApplicationContextInitializer : Adding 'cloud' PropertySource to ApplicationContext ... ... 
2016-09-06T11:33:06.12-0500 [APP/0] OUT 2016-09-06 16:33:06.126 INFO 18 --- [ main] s.d.spring.web.caching.CachingAspect : Caching aspect applied for cache modelProperties with key com.netflix.spinnaker.clouddriver.model.Network(true) 2016-09-06T11:33:06.12-0500 [APP/0] OUT 2016-09-06 16:33:06.126 INFO 18 --- [ main] s.d.spring.web.OperationsKeyGenerator : Cache key generated: .d.spring.web.caching.CachingAspect : Caching aspect applied for cache operations with key .w.ClassOrApiAnnotationResourceGrouping : Group for method list was vpc-controller 2016-09-06T11:33:06.12-0500 [APP/0] OUT 2016-09-06 16:33:06.127 INFO 18 --- [ main] s.w.ClassOrApiAnnotationResourceGrouping : Group for method list was vpc-controller 2016-09-06T11:33:06.12-0500 [APP/0] OUT 2016-09-06 16:33:06.127 INFO 18 --- [ main] .d.s.w.r.o.CachingOperationNameGenerator : Generating unique operation named: listUsingGET_10 2016-09-06T11:33:06.28-0500 [APP/0] OUT 2016-09-06 16:33:06.282 INFO 18 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http) 2016-09-06T11:33:06.28-0500 [APP/0] OUT 2016-09-06 16:33:06.286 INFO 18 --- [ main] com.netflix.spinnaker.clouddriver.Main : Started Main in 24.791 seconds (JVM running for 26.911) 2016-09-06T11:33:06.57-0500 [CELL/0] OUT Container became healthy In this console output, you can see that clouddriver is running with Spring Boot 1.2.8.RELEASE. The "Started Main in 24.791 seconds" is the indicator that the app is finally up. "Container became healthy" is the indicator that the platform can see the app as being up. Environment settings To apply various settings, Spring Cloud Spinnaker "cf pushes" the module and then applies various environment variables settings in Cloud Foundry. Pay note: it’s a LOT of settings. If you see a deployment either empty of environment variables or only containing SPRING_APPLICATION_JSON, then something has gone terribly wrong with the deployment. 
Each of the services has a URL to reach the other relevant microservices. In this case, you can see how it builds up the URL for clouddriver to speak to echo. In this specific example:

service.echo.baseUrl = ${services.default.protocol}://${services.echo.host}${namespace}.${deck.domain}
services.default.protocol = https
services.echo.host = echo
namespace = -spring
deck.domain = cfapps.io

This allows the deployer to flexibly adjust each piece as needed.

Manually deploying Spinnaker

You may be tempted to simply grab the fat JARs for clouddriver, deck, etc. and push them yourself. Unfortunately, that's not an option (yet). Each module needs its own property file: clouddriver has clouddriver.yml, igor has igor.yml, and so on. But these files aren't included in the JARs pulled from Bintray. Netflix wraps each module in a Debian package and keeps those files in a different location. Spring Cloud Spinnaker grabs the JARs and dynamically inserts the property files right before pushing to your Cloud Foundry instance.
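As a sanity check, the property interpolation shown above can be resolved by hand. A Python sketch (the values are the example ones from the settings above, not read from any real deployment):

```python
# Example property values, taken from the settings shown above.
props = {
    "services.default.protocol": "https",
    "services.echo.host": "echo",
    "namespace": "-spring",
    "deck.domain": "cfapps.io",
}

# service.echo.baseUrl =
#   ${services.default.protocol}://${services.echo.host}${namespace}.${deck.domain}
base_url = "{protocol}://{host}{namespace}.{domain}".format(
    protocol=props["services.default.protocol"],
    host=props["services.echo.host"],
    namespace=props["namespace"],
    domain=props["deck.domain"],
)

print(base_url)  # → https://echo-spring.cfapps.io
```

So with the example values, clouddriver would reach echo at https://echo-spring.cfapps.io.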
https://docs.spring.io/spring-cloud-spinnaker/docs/1.0.0.BUILD-SNAPSHOT/reference/htmlsingle/
Description of problem:
The latest update of mrepo (mrepo-0.8.7-1.el5) introduced a dependency on hashlib which isn't available in RHEL5 or EPEL5.

Version-Release number of selected component (if applicable):
mrepo-0.8.7-1.el5

How reproducible:
Every time

Steps to Reproduce:
1. mrepo -gu

Actual results:
Traceback (most recent call last):
  File "/usr/bin/mrepo", line 19, in ?
    import ConfigParser, urlparse, hashlib, types, traceback
ImportError: No module named hashlib

Arg. Sorry about this one - the fixed version should be in epel-testing soon.

mrepo-0.8.7-2.el5 has been submitted as an update for Fedora EPEL 5.

mrepo-0.8.7-2.el5 has been pushed to the Fedora EPEL 5 testing repository. If you want to test the update, you can install it with 'yum --enablerepo=epel-testing update mrepo'. You can provide feedback for this update here:

mrepo-0.8.7-2.el5 has been pushed to the Fedora EPEL 5 stable repository. If problems still persist, please make note of it in this bug report.
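For context, RHEL 5 ships Python 2.4, and the hashlib module only appeared in Python 2.5, which is why the import above fails. A common compatibility shim for code that has to run on both looks like this (a sketch, not the actual mrepo fix):

```python
# hashlib was added in Python 2.5; Python 2.4 (as shipped with RHEL 5)
# only has the older md5/sha modules.
try:
    import hashlib

    def md5_hexdigest(data):
        return hashlib.md5(data).hexdigest()
except ImportError:
    import md5  # Python 2.4 fallback

    def md5_hexdigest(data):
        return md5.new(data).hexdigest()

print(md5_hexdigest(b"abc"))  # → 900150983cd24fb0d6963f7d28e17f72
```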
https://bugzilla.redhat.com/show_bug.cgi?id=620704
As there has been discussion about not writing unit tests recently, I thought I'd use my recent experience in finishing a non-trivial Haskell program to comment on the issue of writing tests (unit tests and other automated tests) in the context of real code. I'm especially prompted by this comment by Ned Batchelder that I came across a few weeks ago:

Since static type checking can't cover all possibilities, you will need automated testing. Once you have automated testing, static type checking is redundant.

(that's in a comment on his own blog post)

To some extent I agree with this, but I want to give some reasons why a strong and powerful static type checker really does eliminate the need for automated tests in some cases—that is to say, there are instances when the static type checking makes the automated tests redundant and not the other way around, and does a better job.

I have very few tests in my Haskell blog software. There are significantly more in the Ella library which I wrote alongside it, but still far from complete coverage. While I like test-driven development, and did it for some parts of this project, many times it felt like a waste of time. In some cases it was perhaps misdirected laziness, but I'm not convinced it always was. So what are the characteristics of code that doesn't benefit from automated/unit tests?

Trivial code

If code is extremely simple, it can actually be worse to have tests than to not have them. In defending that statement, the first thing to remember is that tests can have bugs in them too. Now, many bugs in the tests will be caught, as long as you follow the rule of making sure the test fails, then writing the code, then making sure it passes. However, many bugs of omission, which are also very common, will not be caught—i.e. when the test fails to test something it ought to. Second, there is always a cost to writing tests.
So, as the probability of making a mistake in your code tends to zero, the usefulness of tests against that code also tends to zero—and not just to zero, it can go negative. You spent x minutes writing a test for something that didn't need testing, which is lost time and money already, and you also have extra (test) code to maintain in the future, and a longer test suite to run.

Third, you can write an infinite number of tests, and still have bugs. You can have 100% code coverage, and still have bugs. (I'll leave you to do the research on code coverage if you don't believe me). So, you have to stop somewhere, and therefore you need to know *when* to stop.

So suppose you write a utility function that is used to sanitise phone numbers that people might enter. It removes '-' and ' ' characters. (The result will of course be validated separately, but we want to allow people to enter phone numbers in a convenient way). In Python:

def sanitise_phone_number(s):
    return s.replace("-", "").replace(" ", "")

The testing fanatics might stop to write a unit test, but not the rest of us, because:

- You would mainly be testing that the built-in string library works.
- If you think of the ways that the function is likely to be wrong, the test is just as likely to fail to catch it. For example, the function above might really need to strip newline chars as well, but that's not going to be tested unless I think to write a test for that.
- If there actually is a bug here, or the implementation gets more complex so that it merits a test, I can cross that bridge when I come to it, and it won't cost me extra.
- It's more likely that I'll forget to use this function than that I get it wrong. Therefore, an integration test would be far more useful. But in some cases, integration tests can be extremely expensive, both to write and to run, especially when testing JavaScript-based web frontends, or GUIs that are not very testable.
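For concreteness, the unit test being argued against would look something like this (a sketch, with invented example numbers):

```python
def sanitise_phone_number(s):
    return s.replace("-", "").replace(" ", "")

# The kind of test the article considers not worth writing: it mostly
# re-verifies that str.replace works, and misses exactly the cases you
# didn't think of when writing the code (newlines, other separators).
def test_sanitise_phone_number():
    assert sanitise_phone_number("01-234 567") == "01234567"
    assert sanitise_phone_number("") == ""

test_sanitise_phone_number()
```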
I'm almost certainly going to test this code by at least one manual integration test, and after that, do I really need to write an automatic one? However, if I was writing the function in a language that was less capable than Python, I might well write a test for the above.

Declarative code

(You could argue that this is an extension of trivial code, but it feels slightly different, and the case is even stronger).

Imagine your spec says that you should have 5 news items on the front page of your web site. You are using a library that has utility code for getting the first n items, or page x of n items each. And of course you are going to use a constant for that 5, rather than code it right in. So somewhere you are going to write (assuming Python):

NEWS_ITEMS_ON_HOME_PAGE = 5

Are you going to write a test that ensures that this value stays at 5, and doesn't accidentally get changed? Then your code base violates DRY—you now have two places where you are specifying the number of news items on the home page. That is, to some extent, the nature of all tests, but it's worse in this case. With non-declarative code and tests, one instance specifies behaviour, the other implementation, and it's usually obvious which is correct. But with declarative code, if one instance is different, how do you know which is correct?

Or are you going to write a test for the actual home page having 5 items? That would be pointless, because it's just testing that you are capable of calling a trivial API, which itself belongs to thoroughly tested code. You might want a sanity check that you haven't made a typo, but checking that the page returns anything with a 200 code will often be enough.

What about something like a Django model? Your spec says that a 'restaurant' needs to have a 'name' which is a maximum of 100 chars. You write the following code:

class Restaurant(models.Model):
    name = models.CharField("Name", max_length=100)
    # ...
Are you going to write code to test that you've typed this in correctly? It would again be violating DRY. Are you going to check that this interfaces with the database correctly? There are already hundreds of tests in Django which cover this. Are you going to write tests that are effectively checking for typos? Well, if you use this model at all, it's going to be very obvious if you've made a mistake, and some other simple integration test is going to catch it.

Haskell

Now, coming to Haskell. You can guess the point I'm going to make. In Haskell, a lot of code is either trivial or declarative. Further, many of the types of errors you could make are caught by the compiler. Typos and missing imports etc. are always caught, and many other errors besides. Functional programming languages, especially pure ones, eliminate a lot of the kinds of mistakes that are easy to make in imperative languages. Everything being an expression helps a lot—it forces you to think about every branch and return a value. In monadic code it becomes possible to avoid this, but a lot of your code is pure functional.

Example 1

Imagine a more complex function than our sanitise_phone_number above. It's going to take a list of 'transformation' functions and an input value and apply each function to the value in turn, returning the final value. In some languages, that would be just about worth writing a test for. You might have to worry about iterating over the list, boundary conditions, etc. But in Haskell it looks like this:

apply = foldl' (flip ($))

In the above definition, there is basically nothing that can go wrong. We already know that foldl' works, and isn't going to miss anything, or fail with an empty list. You can't forget to return the return value, like you can in Python. The compiler will catch any type errors. If the function doesn't do anything approaching what it's supposed to, then you'll know as soon as you try to use it.
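For contrast, here is a sketch of the closest Python equivalent of that one-liner, using functools.reduce. It has just enough moving parts (argument order, the initial value) that a quick test starts to look worthwhile, which is rather the article's point:

```python
from functools import reduce

# Apply each transformation function to the value in turn,
# mirroring apply = foldl' (flip ($)) from the Haskell version.
def apply(value, fns):
    return reduce(lambda acc, f: f(acc), fns, value)

print(apply(2, [lambda x: x + 1, lambda x: x * 3]))  # → 9
```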
I've used point-free style, so there isn't any chance of doing something silly with the input variables, because they don't even appear in the function definition! For something like the above, you would often write your type signature first:

apply :: a -> [a -> a] -> a

Once you've done that, it's even harder to make a mistake. It's almost possible to try vaguely relevant code at random and see if it compiles. For something like this, if it compiles, and it looks very simple, it's probably correct. (There are obviously times when that will fail you, but it's amazing how often it doesn't. You often feel like you just have to keep doing what the compiler tells you and you'll get working code.)

Is the above code 'trivial' or 'declarative'? Well, that's a tough call. A lot of code in Haskell quickly becomes very declarative in style, especially when written point-free.

Example 2

But what about something much bigger—say the generation of an Atom feed? With a library that makes use of a strong static type system, this can be actually quite hard to get wrong. In my blog software, I use the feed library for Atom feeds. The code I've had to write is extremely simple—a matter of creating some data structures corresponding to Atom feeds. The data structures are defined to force you to supply all required elements. Where there is a choice of data type, it forces you to choose—for example the 'content' field has to be set with either HTMLContent "<h1>your content</h1>" or TextContent "Your content". (For those who don't know Haskell, it should also be pointed out that there is no equivalent to 'null'. Optional values are made explicit using the Maybe type).

After filling in all the values for these feeds, I wrote some very simple 'glue' functions that fed in the data and returned the result as an HTTP response. I created 4 different feeds, all of which worked perfectly first time, as soon as I got them to compile.
I cannot see any value, and only cost, in adding tests for this. A check for a 200 response code and non-empty content might be worth it, but would be much easier to write as a bash script that uses 'curl' on a few known URLs. Had I written this in Python, I might have wanted tests to ensure that the HTML in the Atom feed content was escaped properly and various other things, in addition to a simple check for status 200. But the API of the feed library, combined with the type checking that the compiler has done, has made that redundant, and has tested it far more easily and thoroughly than I could have done with tests. And it's not in general true that the simple functional test will catch any type errors, because often it will only exercise one route through the code, ignoring the fact that in many places dynamically typed code can return values of different types, which can cause type failures etc.

Example 3

One final example of reducing the need for automated tests is the routing system I've used in Ella. OK, it's really a chance to show off the only slightly clever bit of code that I wrote, but hopefully it will explain something of the power of a strong type system :-)

Consider the following bits of code/configuration in a Django project, which are responsible for matching a URL, pulling out some bits from it and dispatching it to a view function.

### myproject/urls.py

urlpatterns = patterns('',
    (r'^members/(\d+)/$', 'myproject.views.member_detail'),
    # etc...
)

### myproject/views.py

def member_detail(request, memberid):
    memberid = int(memberid)
    member = get_member(memberid)
    # etc...

Now, there are a number of possible failure points in this code that you might want some regression tests for. For example, if in the future we change it so that the URL uses a string such as a user name, rather than an integer, we will need to change the URLconf, the line in member_detail that calls int, and the definition of get_member (or use a different function).
There is a DRY or OAOO failure here—the fact that we are expecting an integer is specified multiple times, either implicitly or explicitly. This is one of the causes of fragility in this chunk of code—if one is changed, the others might not be updated, introducing bugs of different kinds.

Now, there are things you can do about this, with some small or large changes to how URLconfs work. But they are not complete solutions, and one solution not open to Python developers is the one I coded in Ella. The equivalent bits of code, with type signatures and explanations of them for those who don't know any Haskell, would look like this in my system.

----- MyProject/Routes.hs

import MyProject.Views

routes = [ "members/" <+/> intParam //-> memberDetail $ []
         -- etc...
         ]

----- MyProject/Views.hs

-- memberDetail takes an 'Int' and an HTTP 'Request' object, and returns an
-- HTTP 'Response' (or 'Nothing' to indicate a 404), doing some IO on the
-- way.
memberDetail :: Int -> Request -> IO (Maybe Response)
memberDetail memberId request = do
    member <- getMember memberId
    -- etc...

You should read <+/> as ‘followed by’ and //-> as ‘routes to’. Just ignore the $ [] bit for now (it exists to allow decorators to be applied easily in the routing configuration, but we are applying no decorators, hence the empty list). intParam is a ‘matcher’: it attempts to pull off the next chunk of the URL (ending in a '/'), match it and parse it as an integer. If it can do so, it passes the parsed value on to memberDetail as a parameter, i.e. it partially applies memberDetail with an integer.

The beauty of this system is that nothing can go wrong any more. We still have DRY violations at the moment, but it doesn't cause a problem, because the compiler checks for consistency. In fact, we can even remove the DRY violation. We could change the code like this:

----- MyProject/Routes.hs

import MyProject.Views

routes = [ "members/" <+/> anyParam //-> memberDetail $ []
         -- etc...
         ]

----- MyProject/Views.hs

memberDetail memberId request = do
    member <- getMember memberId
    -- etc...

We've replaced intParam with anyParam, which is a polymorphic version that can match any parameter of type class Param. You can define your own Param instances, so this is completely extensible (and you can also define your own matchers, for complete power). We've also removed the type signature from memberDetail. So how can anyParam know what type of thing to match? This is where type inference comes in. The function getMember will probably have a type signature, or it will use its parameter in such a way that its type signature can be inferred. From that, the type of memberId can be inferred. From that, the type of value that anyParam must return can be inferred. And from that, finally, the instance of Param can be chosen. The compiler is using the type system to pick which method should be used to match and parse the URL parameters based on how those parameters are eventually used. This is very nice. (At least I think so :-).

We've removed the DRY violation, or, if we choose to use type signatures or explicitly specify types in routes, DRY violations don't matter because the compiler will catch them for us. Would unit or functional tests have caught any problems? Well, they might. If they checked the happy case, they will prove whether that still works. But they're unlikely to check whether the URLconf is too permissive or not. But the compiler can do that kind of consistency check. The end result is that there are just fewer things that can possibly go wrong.

I'm not saying that you wouldn't bother to write any tests. But in this case, if memberDetail was really just glue, you might decide to only test its component parts (for example, by testing the template that it relies on). Since most of the glue has been constructed so that it can't go wrong, you can focus tests on what can go wrong.
And some sections of the code sink below the threshold at which tests provide positive value.

There are many other ways in which static type checking can make automated tests redundant. Parsers are a great example—a spec might define a syntax in BNF notation. In Haskell, you might well implement that using parsec. But if you look at the code, it will have pretty much a one-to-one correspondence with the BNF definitions. Any tests you write will simply check that a few examples happen to be parsed correctly, as you cannot begin to cover the input space. It's therefore far better to spend your time manually checking that the code matches the BNF spec than writing lots of tests. Unit tests often will not catch the type of errors that a compiler can if there is any polymorphism in the code paths.

Conclusion

Before you flame me, don't think that I'm attacking other languages. This experience with Haskell has actually proved to me that Python is still easily my favourite language for web development, especially in combination with Django. (I could do a follow-up on why that is—I have a growing list of things I dislike about Haskell, some of which are fixable). But I often hear the Python crowd saying things about static typing and testing that come from ignorance, and the way you would imagine things to be (often based on experience of Java/C++/C#), and not from experience of something like Haskell.
https://lukeplant.me.uk/blog/posts/is-static-type-checking-a-redundant-testing-mechanism/
import "github.com/lxc/lxd/lxd/db/schema"

Package schema offers utilities to create and maintain a database schema.

Files: doc.go, errors.go, query.go, schema.go, update.go

ErrGracefulAbort is a special error that can be returned by a Check function to force Schema.Ensure to abort gracefully. Every change performed so far by the Check will be committed, although ErrGracefulAbort will be returned.

DoesSchemaTableExist returns whether the schema table is present in the database.

DotGo writes a '<name>.go' source file in the package of the calling function, containing SQL statements that match the given schema updates. The <name>.go file contains a "flattened" render of all given updates and can be used to initialize brand new databases using Schema.Fresh().

Check is a callback that gets fired every time Schema.Ensure is invoked, before applying any update. It gets passed the version that the schema is currently at and a handle to the transaction. If it returns nil, the update proceeds normally, otherwise it's aborted. If ErrGracefulAbort is returned, the transaction will still be committed, giving this function a chance to perform state changes.

Hook is a callback that gets fired when an update gets applied.

Schema captures the schema of a database in terms of a series of ordered updates.

Empty creates a new schema with no updates.

New creates a new Schema with the given updates.

NewFromMap creates a new Schema with the updates specified in the given map. The keys of the map are schema versions that when upgraded will trigger the associated Update value. It's required that the minimum key in the map is 1, and if key N is present then N-1 is present too, with N>1 (i.e. there are no missing versions). NOTE: the regular New() constructor would be formally enough, but for extra clarity we also support a map that indicates the version explicitly, see also PR #3704.

Add a new update to the schema. It will be appended at the end of the existing series.
Check instructs the schema to invoke the given function whenever Ensure is invoked, before applying any due update. It can be used for aborting the operation.

Dump returns a text of SQL commands that can be used to create this schema from scratch in one go, without going through individual patches (essentially flattening them). It requires that all patches in this schema have been applied, otherwise an error will be returned.

Ensure makes sure that the actual schema in the given database matches the one defined by our updates. All updates are applied transactionally. In case any error occurs the transaction will be rolled back and the database will remain unchanged. An update will be applied only if it hasn't been before (currently applied updates are tracked in a 'schema' table, which gets automatically created). If no error occurs, the integer returned by this method is the initial version that the schema has been upgraded from.

ExerciseUpdate is a convenience for exercising a particular update of a schema. It first creates an in-memory SQLite database, then it applies all updates up to the one with the given version (excluded) and optionally executes the given hook for populating the database with test data. Finally it applies the update with the given version, returning the database handle for further inspection of the resulting state.

File loads extra queries from a file. If the file exists, all SQL queries in it will be executed transactionally at the very start of Ensure(), before anything else is done. If a schema hook was set with Hook(), it will be run before running the queries in the file and it will be passed a patch version equal to -1.

Fresh sets a statement that will be used to create the schema from scratch when bootstrapping an empty database. It should be a "flattening" of the available updates, generated using the Dump() method. If not given, all patches will be applied in order.
Hook instructs the schema to invoke the given function whenever an update is about to be applied. The function gets passed the update version number and the running transaction, and if it returns an error it will cause the schema transaction to be rolled back. Any previously installed hook will be replaced.

Trim the schema updates to the given version (included). Updates with higher versions will be discarded. Any fresh schema dump previously set will be unset, since it's assumed to no longer be applicable. Returns all updates that have been trimmed.

Update applies a specific schema change to a database, and returns an error if anything goes wrong.

Package schema imports 12 packages and is imported by 20 packages. Updated 2018-12-26.
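The NewFromMap requirement described above (minimum key 1, and no missing versions) can be illustrated with a standalone check. This is a sketch of the rule only, not the package's actual validation code, and the string values stand in for real Update functions:

```go
package main

import "fmt"

// contiguous reports whether the version keys of an update map run
// 1..N with no gaps, which is what NewFromMap requires.
func contiguous(updates map[int]string) bool {
	for v := 1; v <= len(updates); v++ {
		if _, ok := updates[v]; !ok {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(contiguous(map[int]string{1: "one", 2: "two"}))   // true
	fmt.Println(contiguous(map[int]string{1: "one", 3: "three"})) // false: version 2 is missing
}
```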
https://godoc.org/github.com/lxc/lxd/lxd/db/schema
In today's tutorial, we will build a password strength meter in React.

What We're Building

Try the live version of our Password Strength Meter in React. You've all seen password strength meters before on app sign-up pages, hovering underneath the password input field. So, what's their purpose? A password strength meter's purpose is to give the user visual feedback as to how strong their password is as they're typing it out. Password strength meters are a vital aspect in securing your modern web app. Without one, your users will probably use 'hello123' as their password, or even worse, 'password'. So, let's get started!

Creating a New React App

As always, let's create a new React app using Create React App. If you don't know what this is, head on over to the Create React App Github repository page and follow the instructions for setting up a new React project. Come back here when you have the base app up and running. It should look something like this:

Installing the ZXCVBN Library

The password strength estimation library, zxcvbn, takes a single string (the password) and returns an object with a number of useful options related to the strength of that string. zxcvbn was created by (and is used at) Dropbox. To pass a string to the zxcvbn library, we can simply do the following:

zxcvbn('hello');

Navigate to the root of your new React project, open a terminal window and run the following command to install zxcvbn:

npm install --save zxcvbn

Once that's finished installing, open up the codebase in your favorite editor (ours is Atom, hence why we created a list of our best Atom packages for front-end developers).

Creating the Base Password Strength Meter Component

Create a new JavaScript file in our root directory named PasswordStrengthMeter.js. This will be our new React class component. Remember to use title casing for the file and component name whenever you create a new React component. It's a standard naming convention in React!
We want our new password strength meter React component to be a class component, so go ahead and open PasswordStrengthMeter.js and add the following code:

import React, { Component } from 'react';
import './PasswordStrengthMeter.css';

class PasswordStrengthMeter extends Component {
  render() {
    return (
      <div className="password-strength-meter">
        I'm a password strength meter
      </div>
    );
  }
}

export default PasswordStrengthMeter;

Let's step through the code above:

- We're importing React and the named export 'Component' from the react library. This means we can create a React class component.
- We're importing the .css file ./PasswordStrengthMeter.css which contains all of our styling for our component.
- Finally, we're defining a new class named PasswordStrengthMeter which has one method, render. This is rendering a single div with some text, just to show to us that the component is working.

Save your PasswordStrengthMeter.js, and open up App.js. This is always the example component created for us whenever a fresh Create React App has finished running for the first time. Inside App.js, import our PasswordStrengthMeter component at the top of the file alongside the other import statements. Finally, insert the <PasswordStrengthMeter /> component tag inside the render method of App.js.

App.js

import React, { Component } from 'react';
import PasswordStrengthMeter from './PasswordStrengthMeter';

class App extends Component {
  constructor() {
    super();
  }

  render() {
    return (
      <div className="App">
        <PasswordStrengthMeter />
      </div>
    );
  }
}

export default App;

Save the file, jump back to your browser and your React app should look like this:

Passing the Password to the Strength Meter Component

Great! We've got a 'working' component that's being rendered in our App.js component. Before we move on, let's stop and think about how we want to architect the password strength meter React component.
I've seen other password strength meter libraries that are both an input element AND a password strength meter. This is a bad approach, for two reasons:

- You're creating a dependency between the input field and the strength meter.
- You're not making your component flexible. What if we wanted to use another type of input element to enter our password? We couldn't.

Those two points are similar, but I hope you understand what I'm getting at. Basically, let's just create the strength meter, not the input field. That means that we need to pass a password string to our PasswordStrengthMeter component for it to know what to run through zxcvbn.

Inside App.js, add an input element and have it so that onChange, it sets the password state property to whatever value is being typed into the input:

import React, { Component } from 'react';
import PasswordStrengthMeter from './PasswordStrengthMeter';

class App extends Component {
  constructor() {
    super();

    this.state = {
      password: '',
    };
  }

  render() {
    const { password } = this.state;

    return (
      <div className="App">
        <div className="meter">
          <input autoComplete="off" type="password" onChange={e => this.setState({ password: e.target.value })} />
          <PasswordStrengthMeter password={password} />
        </div>
      </div>
    );
  }
}

export default App;

We do a couple of things above:

- Give our component state, with a property called 'password'.
- Add an input element of type password, and attach an onChange handler to it which sets the state to whatever the value is.
- Pass the password property from our state to the <PasswordStrengthMeter /> component through a prop called 'password'.

Getting a Result from ZXCVBN

Before you save and jump back over to your browser, we need to test that we're getting the password in our PasswordStrengthMeter component. I'm a big fan of destructuring props and state. Destructuring allows you to refer to prop and state values without having to write this.props.value or this.state.value every time.
const { value, value2, value3 } = this.props;
const { value } = this.state;

Add the following code to the render method of PasswordStrengthMeter.js:

render() {
  const { password } = this.props;

  return (
    <div className="password-strength-meter">
      <br />
      <label className="password-strength-meter-label">
        {password}
      </label>
    </div>
  );
}

Save the component, jump over to your browser and you should now see whatever you're typing into the input element below:

This isn't a great password strength meter. In fact, it's the opposite of one right now! This is where zxcvbn comes in. In order to evaluate the password coming from the input element, we need to pass our password string prop to the zxcvbn library. Create a new constant called testedResult and assign it to the value of zxcvbn being passed the password string:

...
render() {
  const { password } = this.props;
  const testedResult = zxcvbn(password);

  return (
    <div className="password-strength-meter">
      <label className="password-strength-meter-label">
        {password}
      </label>
    </div>
  );
}
...

Adding a Progress Element

We're almost there, but missing one crucial element: the strength meter itself! The progress HTML element is the perfect use for this. It takes two attributes: value and max. Insert a new progress HTML element above the label and pass testedResult.score into the value attribute, and 4 into the max attribute. We're passing 4 because that's the highest value returned from the zxcvbn library, so the progress element will be out of 4.

...
render() {
  const { password } = this.props;
  const testedResult = zxcvbn(password);

  return (
    <div className="password-strength-meter">
      <progress value={testedResult.score} max="4" />
      <br />
      <label className="password-strength-meter-label">
        {password}
      </label>
    </div>
  );
}
...

Save the file, jump back to your browser and type a password into the input field. Watch the progress bar fill as you type!

Adding a Better Label

We're almost at the finish line.
Technically, our Password Strength Meter in React is working, but it could be better. We don't want to display the actual password that's being typed. Instead, let's show a handy label telling the user how strong their password is. To do this, create a new class method inside the component called createPasswordLabel, that takes the tested result and returns a string, our interpretation of that score (weak, fair, good, etc.):

...
createPasswordLabel = (result) => {
  switch (result.score) {
    case 0:
      return 'Weak';
    case 1:
      return 'Weak';
    case 2:
      return 'Fair';
    case 3:
      return 'Good';
    case 4:
      return 'Strong';
    default:
      return 'Weak';
  }
}
...

This makes the Password Strength Meter a little more human-friendly (so we get top marks for UX). Finally, modify the render method so that we're calling this new method:

render() {
  const { password } = this.props;
  const testedResult = zxcvbn(password);

  return (
    <div className="password-strength-meter">
      <progress value={testedResult.score} max="4" />
      <br />
      <label className="password-strength-meter-label">
        {password && (
          <>
            <strong>Password strength:</strong> {this.createPasswordLabel(testedResult)}
          </>
        )}
      </label>
    </div>
  );
}

Save the component, hop back to your browser, type a password and watch what happens:

Styling our Password Strength Meter

I'm a big fan of focusing on the user experience when it comes to creating React components. Our Password Strength Meter is good, but adding some color would really improve the user experience. Let's change the color of the progress meter by applying a CSS class to the progress element depending on the return value of the createPasswordLabel method:

...
<progress
  className={`password-strength-meter-progress strength-${this.createPasswordLabel(testedResult)}`}
  value={testedResult.score}
  max="4"
/>
...

Save the component, and create a new file in the same directory called PasswordStrengthMeter.css.
Add the following CSS to it:

.password-strength-meter {
  text-align: left;
}

.password-strength-meter-progress {
  -webkit-appearance: none;
  appearance: none;
  width: 250px;
  height: 8px;
}

.password-strength-meter-progress::-webkit-progress-bar {
  background-color: #eee;
  border-radius: 3px;
}

.password-strength-meter-label {
  font-size: 14px;
}

.password-strength-meter-progress::-webkit-progress-value {
  border-radius: 2px;
  background-size: 35px 20px, 100% 100%, 100% 100%;
}

.strength-Weak::-webkit-progress-value {
  background-color: #F25F5C;
}
.strength-Fair::-webkit-progress-value {
  background-color: #FFE066;
}
.strength-Good::-webkit-progress-value {
  background-color: #247BA0;
}
.strength-Strong::-webkit-progress-value {
  background-color: #70C1B3;
}

Save, jump back to your browser and enter a password. You'll now see a colorful, user-friendly password strength meter.

Wrapping Up

Well, that's it. I hope you've enjoyed following this tutorial to build a password strength meter. The full source code can be found over on the Upmostly GitHub repository for this project. As always, leave a comment if you have any questions, issues, or just straight up enjoyed coding this. See you next time! 💻

Great tutorial guys and gals. I hope you do not mind, but I ported the code in this tutorial to the Aurelia JavaScript framework, to show how easy it is to take other frameworks and libraries and do the same thing in Aurelia.

Thank you so much. This really helped. How do I make sure symbols and numbers are included in the password?

Thank you for this good tutorial. There is a question about the size of the zxcvbn library. Is it possible to only load zxcvbn on the signup component?

Great article, thanks

Nice post! Very useful

Woah!!! Thanks James. What an awesome tutorial blog

Excellent ...don't think anything ..i implement this
https://upmostly.com/tutorials/build-a-password-strength-meter-react
>>> [factorial(n) for n in range(6)]
[1, 1, 2, 6, 24, 120]
>>> factorial(30)
265252859812191058636308480000000
>>>

And so on, eventually ending with:

Trying:
    factorial(1e100)
Expecting:
    Traceback (most recent call last):
        ...
    OverflowError: n too large
ok

This section examines in detail how doctest works: which docstrings it looks at, how it finds interactive examples, what execution context it uses, how it handles exceptions, and how option flags can be used to control its behavior. This is the information that you need to know to write doctest examples; for information about actually running doctest on these examples, see the following sections.

When IGNORE_EXCEPTION_DETAIL is specified, everything following the leftmost colon and any module information in the exception name is ignored.

DONT_ACCEPT_TRUE_FOR_1: By default, if an expected output block contains just 1, an actual output block containing just 1 or just True is considered to be a match, and similarly for 0 versus False. When DONT_ACCEPT_TRUE_FOR_1 is specified, neither substitution is allowed.

DONT_ACCEPT_BLANKLINE: By default, if an expected output block contains a line containing only the string <BLANKLINE>, then that line will match a blank line in the actual output. Because a genuinely blank line delimits the expected output, this is the only way to communicate that a blank line is expected. When DONT_ACCEPT_BLANKLINE is specified, this substitution is not allowed.

NORMALIZE_WHITESPACE: When specified, all sequences of whitespace (blanks and newlines) are treated as equal. Any sequence of whitespace within the expected output will match any sequence of whitespace within the actual output. By default, whitespace must match exactly. NORMALIZE_WHITESPACE is especially useful when a line of expected output is very long, and you want to wrap it across multiple lines in your source... It will also ignore the module name used in Python 3 doctest reports.
Hence both these variations will work regardless of whether the test is run under Python 2.7 or Python 3.2 (or later versions):

>>> raise CustomError('message')
Traceback (most recent call last):
CustomError: message

>>> raise CustomError('message')
Traceback (most recent call last):
my_module.CustomError: message

Note that ELLIPSIS can also be used to ignore the details of the exception message, but such a test may still fail based on whether or not the module details are printed as part of the exception name. Using IGNORE_EXCEPTION_DETAIL and the details from Python 2.3 is also the only clear way to write a doctest that doesn't care about the exception detail yet continues to pass under Python 2.3 or earlier (those releases do not support doctest directives and ignore them as irrelevant comments). For example,

>>> (1, 2)[3] = 'moo'
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: object doesn't support item assignment

passes under Python 2.3 and later Python versions, even though the detail changed in Python 2.4 to say "does not" instead of "doesn't". Changed in version 3.2: IGNORE_EXCEPTION_DETAIL now also ignores any information relating to the module containing the exception under test.

COMPARISON_FLAGS: A bitmask or'ing together all the comparison flags above.

The second group of options controls how test failures are reported:

REPORT_UDIFF: When specified, failures that involve multi-line expected and actual outputs are displayed using a unified diff.

REPORT_CDIFF: When specified, failures that involve multi-line expected and actual outputs will be displayed using a context diff.

REPORT_NDIFF: When specified, differences are computed by difflib.Differ, using the same algorithm as the popular ndiff.py utility. This is the only method that marks differences within lines as well as across lines. For example, if a line of expected output contains digit 1 where actual output contains letter l, a line is inserted with a caret marking the mismatching column positions.
REPORT_ONLY_FIRST_FAILURE: When specified, display the first failing example in each doctest, but suppress output for all remaining examples. This will prevent doctest from reporting correct examples that break because of earlier failures; but it might also hide incorrect examples that fail independently of the first failure. When REPORT_ONLY_FIRST_FAILURE is specified, the remaining examples are still run, and still count towards the total number of failures reported; only the output is suppressed.

REPORTING_FLAGS: A bitmask or'ing together all the reporting flags above.

A doctest directive comment makes an option flag apply to a single example. For example, this test passes thanks to the ELLIPSIS directive:

>>> print(list(range(20)))  # doctest: +ELLIPSIS
[0, 1, ..., 18, 19]

Multiple directives can be used on a single physical line, separated by commas:

>>> print(list(range(20)))  # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
[0, 1, ..., 18, 19]

If multiple directive comments are used for a single example, then they are combined:

>>> print(list(range(20)))  # doctest: +ELLIPSIS
...                         # doctest: +NORMALIZE_WHITESPACE
[0, 1, ..., 18, 19]

As the previous example shows, you can add ... lines to your example containing only directives. This can be useful when an example is too long for a directive to comfortably fit on the same line:

>>> print(list(range(5)) + list(range(10, 20)) + list(range(30, 40)))
... # doctest: +NORMALIZE_WHITESPACE, +ELLIPSIS
[0, ..., 4, 10, ..., 19, 30, ..., 39]

There's also a way to register new option flag names, although this isn't useful unless you intend to extend doctest internals via subclassing.

Because dictionaries aren't guaranteed to print their items in any particular order, a doctest that prints a dict directly is fragile. One way to avoid this is to do

>>> d = sorted(foo().items())
>>> d
[('Harry', 'broomstick'), ('Hermione', 'hippogryph')]

There are others, but you get the idea.
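To see the sorted() workaround pass end to end, the sketch below wires it through doctest's DocTestFinder and DocTestRunner. The foo function and its docstring are illustrative stand-ins, not code from this page; globs is passed explicitly so the example is self-contained.

```python
import doctest

def foo():
    """Return a mapping; the doctest sorts the items for a stable order.

    >>> d = sorted(foo().items())
    >>> d
    [('Harry', 'broomstick'), ('Hermione', 'hippogryph')]
    """
    return {"Hermione": "hippogryph", "Harry": "broomstick"}

# Extract the doctests attached to foo's docstring and run them.
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(foo, globs={"foo": foo}):
    runner.run(test)

print(runner.failures, runner.tries)  # 0 2
```

Both examples in the docstring pass regardless of the dict's internal ordering, because the output is forced into a deterministic sorted list.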
Another bad idea is to print things that embed an object address, like

>>> id(1.0) # certain to fail some of the time
7948648
>>> class C: pass
>>> C() # the default repr() for instances embeds an address
<__main__.C instance at 0x00AC18F0>

The ELLIPSIS directive gives a nice approach for the last example:

>>> C() # doctest: +ELLIPSIS
<__main__.C instance at 0x...>

Floating-point numbers are also subject to small output variations across platforms, because Python defers to the platform C library for float formatting, and C libraries vary widely in quality here.

>>> 1./7 # risky
0.14285714285714285
>>> print(1./7) # safer
0.142857142857
>>> print(round(1./7, 6)) # much safer
0.142857

Numbers of the form I/2.**J are safe across all platforms, and I often contrive doctest examples to produce numbers of that form:

>>> 3./4 # utterly safe
0.75

Simple fractions are also easier for people to understand, and that makes for better documentation.

Optional argument parser specifies a DocTestParser (or subclass) that should be used to extract tests from the files. It defaults to a normal parser (i.e., DocTestParser()). Optional argument encoding specifies an encoding that should be used to convert the file to unicode.

As your collection of doctest'ed modules grows, you'll want a way to run all their doctests systematically. doctest provides two functions that can be used to create unittest test suites from modules and text files containing doctests. To integrate with unittest test discovery, include a load_tests() function in your test module:

import unittest
import doctest
import my_module_with_doctests

def load_tests(loader, tests, ignore):
    tests.addTests(doctest.DocTestSuite(my_module_with_doctests))
    return tests

There are two main functions for creating unittest.TestSuite instances from text files and modules with doctests:

DocFileSuite(): Convert doctest tests from one or more text files to a unittest.TestSuite.
The returned unittest.TestSuite is to be run by the unittest framework and runs the interactive examples in each file. If an example in any file fails, then the synthesized unit test fails, and a failureException exception is raised showing the name of the file containing the test and a (sometimes approximate) line number. Pass one or more paths (as strings) to text files to be examined. Options may be provided as keyword arguments:

Optional argument module_relative specifies how the filenames in paths should be interpreted:

Optional argument package is a Python package or the name of a Python package whose directory should be used as the base directory for module-relative filenames in paths. If no package is specified, then the calling module's directory is used as the base directory for module-relative filenames. It is an error to specify package if module_relative is False.

Optional argument setUp specifies a set-up function for the test suite. This is called before running the tests in each file. The setUp function will be passed a DocTest object. The setUp function can access the test globals as the globs attribute of the test passed.

Optional argument tearDown specifies a tear-down function for the test suite. This is called after running the tests in each file. The tearDown function will be passed a DocTest object. The tearDown function can access the test globals as the globs attribute of the test passed.

Optional argument globs is a dictionary containing the initial global variables for the tests. A new copy of this dictionary is created for each test. By default, globs is a new empty dictionary.

Optional argument optionflags specifies the default doctest options for the tests, created by or-ing together individual option flags. See section Option Flags and Directives. See function set_unittest_reportflags() below for a better way to set reporting options.
Optional argument parser specifies a DocTestParser (or subclass) that should be used to extract tests from the files. It defaults to a normal parser (i.e., DocTestParser()).

Optional argument encoding specifies an encoding that should be used to convert the file to unicode.

The global __file__ is added to the globals provided to doctests loaded from a text file using DocFileSuite().

DocTestSuite(): Convert doctest tests for a module to a unittest.TestSuite. The returned unittest.TestSuite is to be run by the unittest framework and runs each doctest in the module. If any of the doctests fail, then the synthesized unit test fails, and a failureException exception is raised showing the name of the file containing the test and a (sometimes approximate) line number.

Optional argument module provides the module to be tested. It can be a module object or a (possibly dotted) module name. If not specified, the module calling this function is used.

Optional argument globs is a dictionary containing the initial global variables for the tests. A new copy of this dictionary is created for each test. By default, globs is a new empty dictionary.

Optional argument extraglobs specifies an extra set of global variables, which is merged into globs. By default, no extra globals are used.

Optional argument test_finder is the DocTestFinder object (or a drop-in replacement) that is used to extract doctests from the module.

Optional arguments setUp, tearDown, and optionflags are the same as for function DocFileSuite() above. This function uses the same search technique as testmod().

Under the covers, DocTestSuite() creates a unittest.TestSuite out of doctest.DocTestCase instances, and DocTestCase is a subclass of unittest.TestCase. DocTestCase isn't documented here (it's an internal detail), but studying its code can answer questions about the exact details of unittest integration.
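As a quick sanity check of the DocTestSuite() behavior described above, the sketch below builds a throwaway module (the module and its docstring are invented for illustration) and runs its doctest through unittest:

```python
import doctest
import types
import unittest

# A disposable module whose docstring carries one doctest example.
mod = types.ModuleType("my_module_with_doctests")
mod.__doc__ = """
>>> 2 + 2
4
"""

# DocTestSuite() wraps each doctest found in the module in a DocTestCase
# and returns a TestSuite that unittest can run like any other suite.
suite = doctest.DocTestSuite(mod)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())  # 1 True
```

In a real project you would pass your actual module (or use the load_tests() hook shown earlier) rather than constructing one on the fly.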
Similarly, DocFileSuite() creates a unittest.TestSuite out of doctest.DocFileCase instances, and DocFileCase is a subclass of DocTestCase. So both ways of creating a unittest.TestSuite run instances of DocTestCase. This is important for a subtle reason: when you run doctest functions yourself, you can control the doctest options in use directly, by passing option flags to doctest functions. However, if you're writing a unittest framework, unittest ultimately controls when and how tests get run. The framework author typically wants to control doctest reporting options (perhaps, e.g., specified by command line options), but there's no way to pass options through unittest to doctest test runners. For this reason, doctest also supports a notion of doctest reporting flags specific to unittest support, via this function:

set_unittest_reportflags(): Set the doctest reporting flags to use. Argument flags or's together option flags. See section Option Flags and Directives. Only "reporting flags" can be used. This is a module-global setting, and affects all future doctests run by module unittest: the runTest() method of DocTestCase looks at the option flags specified for the test case when the DocTestCase instance was constructed. If no reporting flags were specified (which is the typical and expected case), doctest's unittest reporting flags are or'ed into the option flags.

DocTest: The constructor arguments are used to initialize the attributes of the same names. DocTest defines the following attributes. They are initialized by the constructor, and should not be modified directly.

A list of Example objects encoding the individual interactive Python examples that should be run by this test.

The namespace (aka globals) that the examples should be run in. This is a dictionary mapping names to values. Any changes to the namespace made by the examples (such as binding new variables) will be reflected in globs after the test is run.

A string name identifying the DocTest. Typically, this is the name of the object or file that the test was extracted from.
The name of the file that this DocTest was extracted from; or None if the filename is unknown, or if the DocTest was not extracted from a file.

The line number within filename where this DocTest begins, or None if the line number is unavailable. This line number is zero-based with respect to the beginning of the file.

The string that the test was extracted from, or None if the string is unavailable, or if the test was not extracted from a string.

Example: A single interactive example, consisting of a Python statement and its expected output. The constructor arguments are used to initialize the attributes of the same names. Example defines the following attributes. They are initialized by the constructor, and should not be modified directly.

A string containing the example's source code. This source code consists of a single Python statement, and always ends with a newline; the constructor adds a newline when necessary.

The expected output from running the example's source code (either from stdout, or a traceback in case of exception). want ends with a newline unless no output is expected, in which case it's an empty string. The constructor adds a newline when necessary.

The exception message generated by the example, if the example is expected to generate an exception; or None if it is not expected to generate an exception. This exception message is compared against the return value of traceback.format_exception_only(). exc_msg ends with a newline unless it's None. The constructor adds a newline if needed.

The line number within the string containing this example where the example begins. This line number is zero-based with respect to the beginning of the containing string.

The example's indentation in the containing string, i.e., the number of space characters that precede the example's first prompt.

A dictionary mapping from option flags to True or False, which is used to override default options for this example.
Any option flags not contained in this dictionary are left at their default value (as specified by the DocTestRunner's optionflags). By default, no options are set.

DocTestFinder defines the following method:

find(): Return a list of the DocTests that are defined by obj's docstring, or by any of its contained objects' docstrings.

DocTestParser: A processing class used to extract interactive examples from a string, and use them to create a DocTest object. DocTestParser defines the following methods:

get_doctest(): Extract all doctest examples from the given string, and collect them into a DocTest object. globs, name, filename, and lineno are attributes for the new DocTest object. See the documentation for DocTest for more information.

get_examples(): Extract all doctest examples from the given string, and return them as a list of Example objects. Line numbers are 0-based. The optional argument name is a name identifying this string, and is only used for error messages.

parse(): Divide the given string into examples and intervening text, and return them as a list of alternating Examples and strings. Line numbers for the Examples are 0-based. The optional argument name is a name identifying this string, and is only used for error messages.

OutputChecker defines the following methods:

check_output(): Return True iff the actual output from an example (got) matches the expected output (want). These strings are always considered to match if they are identical; but depending on what option flags the test runner is using, several non-exact match types are also possible. See section Option Flags and Directives for more information about option flags.

output_difference(): Return a string describing the differences between the expected output for a given example (example) and the actual output (got). optionflags is the set of option flags used to compare want and got.

For example, suppose module a.py contains just this module docstring:

"""
>>> def f(x):
...     g(x*2)
>>> def g(x):
...     print(x+3)
...     import pdb; pdb.set_trace()
>>> f(3)
9
"""

Then an interactive Python session may look like this:

>>> import a, doctest
>>> doctest.testmod(a)
--Return--
> <doctest a[1]>(3)g()->None
-> import pdb; pdb.set_trace()
(Pdb) list
  1     def g(x):
  2         print(x+3)
  3  ->     import pdb; pdb.set_trace()
[EOF]
(Pdb) p x
6
(Pdb) step
--Return--
> <doctest a[0]>(2)f()->None
-> g(x*2)
(Pdb) list
  1     def f(x):
  2  ->     g(x*2)
[EOF]
(Pdb) p x
3
(Pdb) step
--Return--
> <doctest a[2]>(1)?()->None
-> f(3)
(Pdb) cont
(0, 3)
>>>

Functions that convert doctests to Python code, and possibly run the synthesized code under the debugger:

script_from_examples(): Convert text with examples to a script. Argument s is a string containing doctest examples. The string is converted to a Python script, where doctest examples in s are converted to regular code, and everything else is converted to Python comments. The generated script is returned as a string. For example,

import doctest
print(doctest.script_from_examples(r"""
    Set x and y to 1 and 2.
    >>> x, y = 1, 2

    Print their sum:
    >>> print(x+y)
    3
"""))

displays:

# Set x and y to 1 and 2.
x, y = 1, 2
#
# Print their sum:
print(x+y)
# Expected:
## 3

This function is used internally by other functions (see below), but can also be useful when you want to transform an interactive Python session into a Python script.

testsource(): Convert the doctest for an object to a script. Argument module is a module object, or dotted name of a module, containing the object whose doctests are of interest. Argument name is the name (within the module) of the object with the doctests of interest. The result is a string, containing the object's docstring converted to a Python script, as described for script_from_examples() above. For example, if module a.py contains a top-level function f(), then

import a, doctest
print(doctest.testsource(a, "a.f"))

prints a script version of function f()'s docstring, with doctests converted to code, and the rest placed in comments.

debug(): Debug the doctests for an object.
The module and name arguments are the same as for function testsource() above. The synthesized Python script for the named object's docstring is written to a temporary file, and then that file is run under the control of the Python debugger, pdb. A shallow copy of module.__dict__ is used for both local and global execution context.

Optional argument pm controls whether post-mortem debugging is used. If pm has a true value, the script file is run directly, and the debugger gets involved only if the script terminates via raising an unhandled exception. If it does, then post-mortem debugging is invoked, via pdb.post_mortem(), passing the traceback object from the unhandled exception. If pm is not specified, or is false, the script is run under the debugger from the start, via passing an appropriate exec() call to pdb.run().

debug_src(): Debug the doctests in a string. This is like function debug() above, except that a string containing doctest examples is specified directly, via the src argument. Optional argument pm has the same meaning as in function debug() above. Optional argument globs gives a dictionary to use as both local and global execution context. If not specified, or None, an empty dictionary is used. If specified, a shallow copy of the dictionary is used.

The DebugRunner class, and the special exceptions it may raise, are of most interest to testing framework authors, and will only be sketched here. See the source code, and especially DebugRunner's docstring (which is a doctest!) for more details:

DebugRunner: A subclass of DocTestRunner that raises an exception as soon as a failure is encountered. If an unexpected exception occurs, an UnexpectedException exception is raised, containing the test, the example, and the original exception. If the output doesn't match, then a DocTestFailure exception is raised, containing the test, the example, and the actual output.
For information about the constructor parameters and methods, see the documentation for DocTestRunner in section Advanced API.

There are two exceptions that may be raised by DebugRunner instances:

DocTestFailure: An exception raised by DocTestRunner to signal that a doctest example's actual output did not match its expected output. The constructor arguments are used to initialize the attributes of the same names. DocTestFailure defines the following attributes:

The DocTest object that was being run when the example failed.

The example's actual output.

UnexpectedException: An exception raised by DocTestRunner to signal that a doctest example raised an unexpected exception. The constructor arguments are used to initialize the attributes of the same names. UnexpectedException defines the following attributes:

The DocTest object that was being run when the example failed.

A tuple containing information about the unexpected exception, as returned by sys.exc_info().
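The DocTestFailure behavior described above can be observed directly. In this illustrative snippet, a deliberately wrong expected output makes DebugRunner raise DocTestFailure, carrying the test, the example, and the actual output:

```python
import doctest

# A doctest whose expected output is intentionally wrong ("5" instead of "4").
source = """
>>> 2 + 2
5
"""
test = doctest.DocTestParser().get_doctest(source, {}, "demo", None, 0)

try:
    doctest.DebugRunner(verbose=False).run(test)
except doctest.DocTestFailure as exc:
    failure = exc

# The exception records exactly what the docs above describe.
print(failure.test.name)             # demo
print(repr(failure.example.source))  # '2 + 2\n'
print(repr(failure.got))             # '4\n'
```

Unlike a plain DocTestRunner, which records the failure and keeps going, DebugRunner stops at the first mismatch, which is what makes it useful for dropping into a debugger.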
http://www.wingware.com/psupport/python-manual/3.2/library/doctest.html
pynt 0.8.0

Lightweight Python Build Tool. A pynt of Python build.

Features

- Easy to learn.
- Build tasks are just python functions.
- Manages dependencies between tasks.
- Automatically generates a command line interface.
- Rake style param passing to tasks
- Supports python 2.7 and python 3.x

Installation

You can install pynt from the Python Package Index (PyPI) or from source.

Using pip

$ pip install pynt

Using easy_install

$ easy_install pynt

Example

The build script is written in pure Python and pynt takes care of managing any dependencies between tasks and generating a command line interface. Writing build tasks is really simple, all you need to know is the @task decorator. Tasks are just regular Python functions marked with the @task() decorator. Dependencies are specified with @task() too. Tasks can be ignored with @task(ignore=True). Disabling a task is a useful feature to have in situations where you have one task that a lot of other tasks depend on and you want to quickly remove it from the dependency chains of all the dependent tasks.

build.py

#!/usr/bin/python

import sys
from pynt import task

@task()
def clean():
    '''Clean build directory.'''
    print 'Cleaning build directory...'

@task(clean)
def html(target='.'):
    '''Generate HTML.'''
    print 'Generating HTML in directory "%s"' % target

@task(clean, ignore=True)
def images():
    '''Prepare images.'''
    print 'Preparing images...'

@task(html, images)
def start_server(server='localhost', port='80'):
    '''Start the server'''
    print 'Starting server at %s:%s' % (server, port)

@task(start_server)  # Depends on task with all optional params
def stop_server():
    print 'Stopping server....'
@task()
def copy_file(src, dest):
    print 'Copying from %s to %s' % (src, dest)

@task()
def echo(*args, **kwargs):
    print args
    print kwargs

# The default task (if specified) is run when no task is specified in the command line.
# Make sure you define the variable __DEFAULT__ after the task is defined.
# A good convention is to define it at the end of the module.
# __DEFAULT__ is an optional member.
__DEFAULT__ = start_server

Running pynt tasks

The command line interface and help is automatically generated. Task descriptions are extracted from function docstrings.

$ pynt -h
usage: b [-h] [-l] [-v] [-f file] [task [task ...]]

positional arguments:
  task                  perform specified task and all its dependencies

optional arguments:
  -h, --help            show this help message and exit
  -l, --list-tasks      List the tasks
  -v, --version         Display the version information
  -f file, --file file  Build file to read the tasks from. Default is 'build.py'

$ pynt -l
Tasks in build file ./build.py:
  clean          Clean build directory.
  copy_file
  echo
  html           Generate HTML.
  images         [Ignored] Prepare images.
  start_server   [Default] Start the server
  stop_server
Powered by pynt - A Lightweight Python Build Tool.

pynt takes care of dependencies between tasks. In the following case start_server depends on clean, html and image generation (the images task is ignored).

$ pynt  # Runs the default task start_server. It does exactly what "pynt start_server" would do.
[ example.py - Starting task "clean" ]
Cleaning build directory...
[ example.py - Completed task "clean" ]
[ example.py - Starting task "html" ]
Generating HTML in directory "."
[ example.py - Completed task "html" ]
[ example.py - Ignoring task "images" ]
[ example.py - Starting task "start_server" ]
Starting server at localhost:80
[ example.py - Completed task "start_server" ]

The first few characters of the task name are enough to execute the task, as long as the partial name is unambiguous. You can specify multiple tasks to run in the commandline.
Again the dependencies are taken care of.

$ pynt cle ht cl
[ example.py - Starting task "clean" ]
Cleaning build directory...
[ example.py - Completed task "clean" ]
[ example.py - Starting task "html" ]
Generating HTML in directory "."
[ example.py - Completed task "html" ]
[ example.py - Starting task "clean" ]
Cleaning build directory...
[ example.py - Completed task "clean" ]

The 'html' task dependency 'clean' is run only once. But clean can be explicitly run again later.

pynt tasks can accept parameters from the commandline.

$ pynt "copy_file[/path/to/foo, path_to_bar]"
[ example.py - Starting task "clean" ]
Cleaning build directory...
[ example.py - Completed task "clean" ]
[ example.py - Starting task "copy_file" ]
Copying from /path/to/foo to path_to_bar
[ example.py - Completed task "copy_file" ]

pynt can also accept keyword arguments.

$ pynt start[port=8888]
[ example.py - Starting task "clean" ]
Cleaning build directory...
[ example.py - Completed task "clean" ]
[ example.py - Starting task "html" ]
Generating HTML in directory "."
[ example.py - Completed task "html" ]
[ example.py - Ignoring task "images" ]
[ example.py - Starting task "start_server" ]
Starting server at localhost:8888
[ example.py - Completed task "start_server" ]

$ pynt echo[hello,world,foo=bar,blah=123]
[ example.py - Starting task "echo" ]
('hello', 'world')
{'blah': '123', 'foo': 'bar'}
[ example.py - Completed task "echo" ]

Organizing build scripts

You can break up your build files into modules and simply import them into your main build file.

from deploy_tasks import *
from test_tasks import functional_tests, report_coverage

Contributors/Contributing

- Calum J. Eadie - pynt is preceded by and forked from microbuild, which was created by Calum J. Eadie.

If you want to make changes, the repo is at. You will need pytest to run the tests.

$ ./b t

It will be great if you can raise a pull request once you are done.
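As an aside, the bracketed parameter syntax in the transcripts above (positional args, then key=value pairs inside square brackets) can be approximated with a small parser. This is a hypothetical helper for illustration only, not pynt's actual code:

```python
# Hypothetical sketch (not pynt's real implementation) of splitting a spec
# like "echo[hello,world,foo=bar,blah=123]" into a task name, positional
# arguments, and keyword arguments.
def parse_task(spec):
    name, bracket, rest = spec.partition("[")
    args, kwargs = [], {}
    if bracket and rest.endswith("]"):
        for part in rest[:-1].split(","):
            part = part.strip()
            if not part:
                continue
            key, eq, value = part.partition("=")
            if eq:
                kwargs[key.strip()] = value.strip()
            else:
                args.append(part)
    return name, args, kwargs

print(parse_task("copy_file[/path/to/foo, path_to_bar]"))
print(parse_task("echo[hello,world,foo=bar,blah=123]"))
```

A bare task name with no brackets simply yields empty args and kwargs, which matches how tasks with all-optional parameters can be invoked without any bracket suffix.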
If you find any bugs or need new features please raise a ticket in the issues section of the github repo.

License

pynt is licensed under a MIT license

Changes

0.8.0 - 02/09/2013
- Support for specifying a default task with the __DEFAULT__ variable
- pynt -v (--version) displays version info
- pynt -l lists tasks in alphabetical order

0.7.1 - 17/03/2013
- Migrated pynt to work on python 3.x. pynt still works on 2.7.
- pynt version now displayed as part of help output.

0.7.0 - 16/02/2013
- New commandline interface. Distribution now includes 'pynt' executable.
- 'build.py' is the default build file.
- Build files no longer need "if main" construct.
- pynt no longer exposes build method. This is a backward incompatible change.

0.6.0 - 17/12/2012
- Simplified ignoring tasks. ignore is a keyword param for task and not a separate decorator. [This change is NOT backward compatible!!!]
- Added support for listing tasks
- Improved help

0.5.0 - 01/12/2012
- Ability to pass params to tasks.
- Major rewrite and flattening of the package hierarchy.

0.4.0 - 17/11/2012
- Support for running multiple tasks from commandline.
- Ability to run tasks by typing in just the first few unambiguous characters.

Changes before forking from microbuild

0.3.0 - 18/09/2012
- Fixed bug in logging. No longer modifies root logger.
- Added ignore functionality.
- Extended API documentation.

0.2.0 - 29/08/2012
- Added progress tracking output.
- Added handling of exceptions within tasks.

0.1.0 - 28/08/2012
- Initial release.
- Added management of dependencies between tasks.
- Added automatic generation of command line interface.

Downloads (All Versions):
- 41 downloads in the last day
- 286 downloads in the last week
- 826 downloads in the last month

- Author: Raghunandan Rao
- Documentation: pynt package documentation
- License: MIT License
- Package Index Owner: raghunadnanr
- DOAP record: pynt-0.8.0.xml
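To make the dependency-first ordering from the transcripts above concrete, here is a toy sketch of a pynt-style @task decorator and runner. This is not pynt's real implementation; the names and behavior are simplified for illustration (each task's dependencies run first, exactly once per invocation, and ignored tasks are skipped but still marked done):

```python
# Toy sketch of pynt-style dependency handling (not pynt's actual code).
def task(*deps, ignore=False):
    def wrap(fn):
        fn.deps = deps
        fn.ignore = ignore
        return fn
    return wrap

def run(fn, done=None, log=None):
    done = set() if done is None else done
    log = [] if log is None else log
    if fn.__name__ in done:
        return log          # each task runs at most once per invocation
    done.add(fn.__name__)
    for dep in fn.deps:
        run(dep, done, log)  # dependencies first, depth-first
    if fn.ignore:
        log.append('Ignoring "%s"' % fn.__name__)
    else:
        fn()
        log.append('Completed "%s"' % fn.__name__)
    return log

@task()
def clean(): pass

@task(clean)
def html(): pass

@task(clean, ignore=True)
def images(): pass

@task(html, images)
def start_server(): pass

order = run(start_server)
print(order)
```

Running start_server yields clean, html, images (ignored), then start_server, mirroring the pynt transcript; clean runs only once even though both html and images depend on it.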
https://pypi.python.org/pypi/pynt
Nishan Jebanasam
Microsoft Corporation

May 2005

Applies to
   Windows Mobile-based devices
   Windows Mobile 2003 Second Edition-based devices
   Windows CE-based devices
   Visual Studio 2005
   eMbedded Visual C++ version 4.0
   ActiveX
   ActiveSync

Summary: This article provides an overview of the Visual Studio 2005 native device development feature set. It is intended both for eMbedded Visual C++ developers who want to learn about the successor to eMbedded Visual C++ and for desktop computer C++ developers who want to learn about targeting device platforms with their native applications. (35 printed pages)

Contents
   Introduction
   Prerequisites
   IDE
   Native Libraries
   Debugging
   Emulator
   How Do I?

Introduction

Visual Studio 2005 includes C/C++ development for Windows Mobile-based and Windows CE-based devices. It will be the successor to eMbedded Visual C++ version 4.0, and it will allow developers to write C/C++ applications for Microsoft device platforms. Some of Visual Studio 2005's features include:

Prerequisites

If you plan to use Visual Studio 2005 to develop for devices, the following are prerequisites:

IDE

This section covers the design-time features provided by Visual Studio 2005 to target devices.

Visual Studio 2005 ships with five application wizards to help you create the following project types:

You can find these application wizards in the New Project dialog box in the Visual C++ node under Smart Device, as shown in Figure 1. As you create your application, you also need to choose the platform SDK (or SDKs) that your project targets, as shown in Figure 2. Visual Studio 2005 ships with the Windows Mobile 2003 SDKs in the box, so when you first install Visual Studio 2005, the Pocket PC 2003 SDK and Smartphone 2003 SDK are available. Any additional Windows Mobile or Windows CE SDKs that you have installed in Visual Studio 2005 will also show up in this page. (You can choose one or more platform SDKs for your project.)
Note that Visual Studio 2005 only supports Windows Mobile 2003 platforms and later, and Windows CE version 5.0 platforms and later.

After you've chosen the platform SDKs, the application wizard generates your project, template source code, default resources, and project properties (compiler switches, dependent libraries, and other project properties).

Visual Studio 2005 also ships with class wizards that generate code to help you accomplish common tasks. Examples include helping you to create an Active Template Library (ATL) COM object or a Microsoft Foundation Class (MFC) class. To run a class wizard on your project, right-click your project, click Add, and then click Class. Visual Studio 2005 supports the following class wizards for device platforms, as shown in Figure 3:

The class wizards that Visual Studio supports for smart device platforms feature a small "device" icon embedded in the wizard icon.

Almost all of the settings your project has are "configuration" specific. A configuration-specific setting combines the debug or release build information with the project's platform. For example, you can set compiler switches specific to your Pocket PC 2003 (ARMV4) Debug configuration and different switches for your Pocket PC 2003 (ARMV4) Release configuration. Each configuration produces its own project output binary. If your project targets Pocket PC 2003 (ARMV4) and Smartphone 2003 (ARMV4), for example, building the Pocket PC 2003 (ARMV4) Release configuration produces a different binary than building the Smartphone 2003 (ARMV4) Release configuration. Similarly, building the Pocket PC 2003 (ARMV4) Debug configuration produces yet another binary output. Figure 4 summarizes the SDK, platform, architecture, configuration, and project output relationships.

Figure 4. Relationships among SDKs, platforms, architecture, configuration, and project outputs.

Visual Studio applies project properties to a single specific configuration by default.
To have settings take effect for multiple configurations, you must select All Configurations and/or All Platforms in the Project Property Pages dialog box so that the settings apply to both Debug and Release, and/or to all of the platforms in your project. Furthermore, some properties contain free-form text (rather than an enumeration of switches, for example). If you select All Configurations and/or All Platforms, properties that contain text may clear because the project system does not take the intersection or union of the text. If the text in the property does not match exactly for the two or more configurations selected by the user, nothing is displayed. For these cases, you should apply the properties on a configuration-by-configuration basis to avoid any text being dropped. In Figure 5, the Preprocessor Definitions do not exactly match for Debug and Release, so when the user selects All Configurations, this property clears, as shown in Figure 6.

This multiple-platform project capability has many advantages: you can maintain one code base and customize your application's UI, input handling, and more by wrapping your code in #ifdefs. Furthermore, because you can apply properties to all of your configurations (see Project Properties), you can easily maintain your configurations. For example, you can choose to sign your project output with one certificate, which you can apply to all project outputs (so your Pocket PC binary and Smartphone binary are signed with the same certificate).

This section covers properties that device developers may find interesting. All of the properties in this section apply on a per-configuration basis. The Deployment configuration property contains some of the more frequently used sets of properties for device developers, as shown in Figure 7.
It allows you to choose your target deployment device, to enumerate any additional files you may want to deploy with your project, to specify the remote directory on the device for your project output, and to dictate whether you want your project output registered on the device after it is deployed. Most of these properties are very straightforward, although the Additional Files property requires a special syntax. The Additional Files property allows you to specify one or more additional files to be downloaded to the target device when you deploy your project. Note that files you specify will not be compiled; they will simply be copied to the device (and registered, if specified). For examples of the Additional Files syntax, see the How Do I? section.

Application security is becoming more prevalent on Windows Mobile-based devices. Device developers should understand the various security models and how these security models can affect the ways they develop and redistribute their applications. Authenticode signing is a way of authenticating the origins of digital content. Signing encodes a binary with a private key, which can only be verified with its corresponding public key. The public key is redistributed in the form of a certificate that can be installed on a device. In this way, users can verify that you created the application when they start it on their devices. Users can trace certificates back to a trusted root certificate in an attempt to validate signing authorities. For example, a well-known and trusted signing authority will most likely have a valid root certificate to trace to on retail devices, whereas random individuals who sign their applications with certificate private keys that they generated themselves will most likely not have a trusted root to chain back to on retail devices. A Practical Guide to the Smartphone Application Security and Code Signing Model for Developers provides an excellent starting point for Authenticode signing.
You should familiarize yourself with this article. The Authenticode Signing configuration property (as shown in Figure 8) allows you to select a certificate to sign your project output with, to dictate whether you want to provide the device with that certificate, and to specify which certificate store on the device to provide the certificate to. Provisioning is the act of configuring the device with some setting (in this example, installing a certificate into the certificate store). If you set the Authenticode Signature property to Yes, after you select a certificate, Visual Studio 2005 signs the project output each time it is built. If you set the Provision Device property to Privileged Certificate Store, then the certificate selected in the Certificate property is provisioned to the privileged certificate store on the target device the next time you deploy the project. Similarly, if you set the Provision Device property to Unprivileged Certificate Store, the certificate selected in the Certificate property is provisioned to the unprivileged certificate store on the target device the next time you deploy the project. If the device security policy does not permit provisioning certificates, this step fails, so you need to modify the policy on the device to allow certificate provisioning.

The C/C++ configuration property contains all of the compiler settings for your project, as shown in Figure 9. Compilation for device platforms invokes the specific device compiler for the device architecture you are targeting, so the properties that are available for device platforms are slightly different from the properties that are available for desktop computer platforms. Some of the key properties that affect device developers include:

Most native device application projects created in Visual Studio 2005 are set to Use Precompiled Header, which by default is stdafx. For more information about creating and using your own precompiled headers, see the How Do I?
section. This property sets the instruction set to compile for. Each device compiler can compile for one of many architectures. This property enables generation of thunking code to interwork 16- and 32-bit ARM code. You can set additional switches that aren't available in the property pages.

The Linker configuration property contains all of the linker settings for your project, as shown in Figure 10. When your application targets device platforms, the C/C++ configuration property has a device-specific set of properties, but the Linker configuration properties are the same for device and desktop computer platforms because you use the same linker. Because you use the same linker, some properties do not apply to device platforms but are visible anyway. Table 1 describes some examples.

Table 1. Linker properties not applicable to device platforms
- Terminal Server
- Swap Run from CD
- Swap Run from Network
- Driver
- Profile Guided Database
- Profile
- CLR Thread Attribute
- CLR Image Type
- Key File
- Key Container
- Delay Sign

The Visual Studio 2005 native resource editor should appear very familiar because it's the same native resource editor that eMbedded Visual C++, Visual C++ version 6.0, and Visual Studio .NET 2003 use. Native smart device projects in Visual Studio 2005 support all of the following resource types:

If a project targets Pocket PC and Smartphone, Visual Studio 2005 makes it easy for you to customize the UI of your application for the different device form factors by generating a separate resource file for each targeted platform. The sample project shown in Figure 11 has both a Pocket PC resource file and a Smartphone resource file. Notice that the Smartphone resource file (MyDeviceApp1sp.rc) has a No Build icon because the current target platform for the project is Pocket PC. Therefore, when the user builds the project, only the Pocket PC resource file is included in the build.
If the user changes the active target platform to Smartphone, the No Build icon disappears from MyDeviceApp1sp.rc and appears on MyDeviceApp1ppc.rc. Therefore, the correct resource file compiles into the project depending on which platform the user targets.

Figure 11. Sample project with Pocket PC and Smartphone resource files

Some of the application wizards generate an RC2 file in addition to the standard Resource Compiler (RC) file. The RC file includes this RC2 file, but the Resource Compiler does not touch it, so it can hold resources that the Resource Compiler doesn't know how to handle. Examples include the HI_RES_AWARE custom resource (for more information about the HI_RES_AWARE custom resource, see High Resolution and Orientation Awareness), in addition to the menu RCDATA, which the Resource Compiler edits if it is placed in the RC file (hex value equivalents replace the style data and won't be translated back). The RC2 file is a great place to put other custom resources that you don't want the Resource Compiler editing for you.

Device SDKs can define their own UI model, which you can use to filter the list of controls that appear in the Dialog Editor to show only the controls that a platform supports. Visual Studio 2005 ships with a "CE" UI model for the Windows Mobile 2003 SDKs that are already included in Visual Studio 2005, as shown in Figure 12.

Windows Mobile 2003 Second Edition and later have high resolution capability (the ability to display graphics at a higher DPI) in addition to orientation switching capability (the ability to rotate the screen dynamically and display a "portrait" or "landscape" mode). Visual Studio 2005 provides native device developers with support for writing high resolution and orientation-aware applications.
When the Developer Resources for Windows Mobile 2003 Second Edition was released, it included a useful header file, UIHelper.h, that contained several macros and functions to assist developers in creating high resolution and orientation-aware applications. These functions included:

Determines whether the display is currently configured as portrait, square, or landscape.

Stretches an icon to the specified size (only applies on Windows Mobile 2003 Second Edition platforms and later).

Stretches a bitmap containing a grid of images.

Operates identically to the platform ImageList_LoadImage, except that it first checks the DPI fields of the bitmap (by using GetBitmapLogPixels), compares them to the DPI of the screen (by using LogPixelsX and LogPixelsY), and then performs scaling (by using ImageList_StretchBitmap) if the values are different.

Re-lays out a dialog based on a dialog template. This function iterates through all of the child window controls and calls SetWindowPos for each. It also calls SetWindowText for each static text control, and then updates the selected bitmap or icon in a static image control. This method assumes that the current dialog and the new template have all of the same controls with the same IDCs.

Visual Studio 2005 includes these functions in the header file DeviceResolutionAware.h (namespace "DRA::"). The five smart device application wizards, and the ATL Dialog Class wizard, generate template code that contains the WM_SIZE event handler to rotate any dialogs the wizards generate. Furthermore, the wizards generate two versions of their default dialogs. For example, for the About dialog, a square/portrait version and a landscape version are generated. You can use this code as a useful example to follow when you design the UI for your native device applications.

// Message handler for About box.
INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam)
{
    switch (message)
    {
    // ...
    // Other message handlers cut for brevity
    // ...
#ifdef _DEVICE_RESOLUTION_AWARE
    case WM_SIZE:
        {
            DRA::RelayoutDialog(
                g_hInst,
                hDlg,
                DRA::GetDisplayMode() != DRA::Portrait ?
                    MAKEINTRESOURCE(IDD_ABOUTBOX_WIDE) :
                    MAKEINTRESOURCE(IDD_ABOUTBOX));
        }
        break;
#endif
    }
    return (INT_PTR)FALSE;
}

Applications compiled for Windows Mobile 2003 (that is, Windows CE subsystem version 4.20) will automatically be pixel-doubled on high resolution capable devices, unless the developer defines a custom resource (HI_RES_AWARE) that tells the device's operating system not to pixel-double the application. The wizard-generated code automatically defines the HI_RES_AWARE resource in Visual Studio 2005. This design encourages developers to think about high resolution awareness when they write their applications, so that they can take advantage of the crisper display capabilities of devices emerging in the market today. If you want your Windows Mobile 2003 application pixel-doubled on high resolution capable devices, you can remove the HI_RES_AWARE resource from the RC2 file in your project. Any application that you build for a later platform version (that is, later than Windows CE version 4.20) will not be pixel-doubled, even if it does not include the HI_RES_AWARE resource. Also note that Smartphones do not pixel-double at all on high resolution Smartphone devices. For more information about orientation and high resolution awareness, see Step by Step: Develop Orientation-Aware and DPI-Aware Applications for Pocket PC.

Visual Studio 2005 contains updated versions of the Microsoft Foundation Classes (MFC), Active Template Library (ATL), and Standard C++ Library (SCL) for devices, along with a small subset of the C Runtime (CRT), as shown in Table 2. These new device libraries are based on the desktop MFC version 8.0, ATL version 8.0, SCL version 8.0, and CRT version 8.0 libraries, subsetted based on size, performance, and platform capability.
They are not factored any differently for Windows CE, Pocket PC, or Smartphone, so you can rely on the functionality of these runtimes being available for these platforms. These runtimes, however, contain some degree of platform awareness. ATL, for example, behaves differently on DCOM platforms than on COM platforms, and the same is true for GUI and headless platforms. MFC will be UI-model aware and will behave differently on AYGShell than on non-AYGShell platforms. The native libraries are available as both dynamic and static libraries (except SCL, which will be available only as a static library).

Table 2. Summary of CRT, ATL, MFC, and SCL

MFC and ATL 8.0 rely on certain C APIs that are not available in the CRT that ships in the device. Therefore, a "mini" C runtime provides these extra APIs. This runtime is not intended to be a full CRT; it is provided primarily for MFC and ATL support. Table 3 lists the APIs that are provided in msvcr80.dll for devices.

Table 3. APIs provided by msvcr80.dll

Developers have traditionally used ATL for COM-based applications. ATL features useful classes that make COM programming easier, in addition to string manipulation and conversion, and managing arrays, lists, trees, and more. Some differences that ATL device developers will see in Visual Studio 2005 compared to eMbedded Visual C++ include Web services client support, extended socket support (IPv6), and improved security and robustness. However, ATL 8.0 for devices does not have all of the desktop ATL functionality. Security, Services, ATL Data, and ATL Server are not included in the device version (Web services consumption is supported). These omissions are largely due to schedule and resource constraints.

MFC still plays an important role in the device space.
There are a large number of native applications on devices today that use MFC, and even with the advent of the .NET Compact Framework, there continues to be a need for native GUI applications, especially on resource-constrained devices. MFC for devices in Visual Studio 2005 provides a rich framework for applications, from simple dialog-based applications to sophisticated applications that employ the MFC document/view architecture. Naturally, there are classes that have no underlying support in the device operating system, and there are also classes that were not ported due to size, performance, or schedule reasons. Figure 13 provides an overview of the subset of MFC that Visual Studio 2005 supports for devices.

The Standard C++ Library 8.0 for devices is also a subset of the desktop SCL. Table 4 describes the facilities that SCL 8.0 provides for devices.

Table 4. Facilities for devices in SCL 8.0

The SCL also incorporates the Standard C Library. Note that only the portions of the Standard C Library that have underlying device operating system support are incorporated. The Windows Template Library 8.0 (WTL) continues to remain an unsupported sample on the Web. There will most likely be a device port of WTL 8.0 around the time Visual Studio 2005 releases. You can find the current WTL for devices in the Microsoft Download Center.

The native device debugger in Visual Studio 2005 provides a fast, reliable, and feature-rich debugging experience for device developers. The most notable remote debugger improvements since eMbedded Visual C++ are speed and reliability, with large improvements to responsiveness in scenarios like stepping and expression evaluation. Key debugger features include the ability to:

There are several ways to debug native device applications in Visual Studio 2005, many of which the How Do I? section outlines.
In any situation where you are debugging a .dll or .exe file that you did not build (that is, no project is available), it is recommended that you set your symbol search path to include the location of the .pdb files for the component you are debugging (if .pdb files are available).

To set your symbol search path

Perhaps the most common debugging scenario is F5: starting the application under the debugger. In Visual Studio 2005, debugging your native application on the device is as seamless as debugging a local desktop computer application. You can start the application under the debugger with F5 (start a new instance of the application), F10 (step over), or F11 (step into). Because you will have access to the application symbols and sources, you will get the following debugging information:

Visual Studio 2005 ships with the Microsoft Device Emulator 1.0, an emulator that gives developers access to device targets that they can deploy their smart device applications to. The Device Emulator starts the device operating system (referred to as an "image" in this document) in its own address space and emulates the ARM instruction set to provide high-fidelity emulation of a real device, as shown in Figure 13. Developers can treat the emulator as a real device in almost every respect. Because the Device Emulator can run ARM binaries, any project that developers build for Windows Mobile can run on the emulator without the developers having to rebuild.

The Device Emulator appears as its own target "device" in the list of available target devices for a given platform, as shown in Figure 14.

Figure 14. The Device Emulator appears as a target "device" in Visual Studio

When you select an emulator and deploy the application, the emulator starts (in Figure 14, the image is Pocket PC 2003 Second Edition). After the emulator starts, the user treats the emulator like a real device, and Visual Studio 2005 downloads the application and starts the debugger.
Furthermore, you can run multiple emulators at any given time, each with a different image booted. With Visual Studio 2005, you can have several "devices" at your disposal to deploy and debug your application on. The Device Emulator has a host of features to provide a rich device experience to developers. For more information about the Device Emulator's features, see the How Do I? section. This section provides more details about specific tasks that native Smart Device developers may want to accomplish. Before you create your project, it is ideal if you know what platforms you want to target. When you create your project, you can then select the platforms in the Application Wizard. However, if you don't know what platforms you'd like your project to target, or you wish to add desktop platforms as targets, you can add more platforms after you create your native device project. To add another platform to a project The Configuration Manager appears. Note that adding a Windows Mobile 5.0 platform to your existing Windows Mobile 2003 project requires you to perform a manual step to successfully build for your Windows Mobile 5.0 configuration. To add a Windows Mobile 5.0 platform to an existing Windows Mobile 2003 project If you do not perform the previous procedure, you will receive the following link error when you build your Windows Mobile 5.0 configurations: Fatal error LNK1112: module machine type 'THUMB' conflicts with target machine type 'ARM' To include additional files to be downloaded with your project, you need to specify them in the following format: file name|source directory|remote directory|register where: File name is the name of the file that you want to deploy. Source Directory is the fully qualified path on the desktop computer where you can find the file. Remote Directory is the location on the device where you want to deploy the file. Register is either a 0 or a 1 (0 means do not register; 1 means register). 
For example, to include c:\foo\bar.dll to be downloaded to the \windows directory on the device and registered on deployment, you would have the following entry in the Additional Files property: Bar.dll|c:\foo|\windows|1.

To deploy more than one additional file

If you have no signing certificates in your Personal certificate store, you can perform the following steps to import a certificate into your Personal certificate store. This example uses a test certificate that Visual Studio 2005 includes.

To import a certificate into your Personal certificate store

Note This is an important step: importing a .cer file will not allow you to sign with it, because .cer files have no private key. If the certificate you imported doesn't appear in the Select Certificate dialog box in step 8, you have either imported a non-code-signing certificate or a certificate without a private key. You need to follow the procedure again and make sure you select the .pfx file in step 5.

Most native device application projects created in Visual Studio 2005 will be set to Use Precompiled Header, which by default is stdafx. If you want to use a different precompiled header, perform the following procedure.

To use a precompiled header other than the default

Creating menus for Smartphones involves some manual steps. The article, How to: Create a Soft Key Bar, is an excellent reference about this topic. You can use Visual Studio 2005 to create a Smartphone menu correctly.

To create a Smartphone menu in Visual Studio 2005

IDR_MENU RCDATA
BEGIN
    IDR_MENU, 2,
    I_IMAGENONE, IDM_OK, TBSTATE_ENABLED,
    TBSTYLE_BUTTON | TBSTYLE_AUTOSIZE, IDS_OK, 0, NOMENU,
    I_IMAGENONE, IDM_HELP, TBSTATE_ENABLED,
    TBSTYLE_DROPDOWN | TBSTYLE_AUTOSIZE, IDS_HELP, 0, 0,
END

When designing ActiveX controls for devices by using Visual Studio 2005, you need to take a few extra steps.
Because the Resource Editor relies on the control being registered on the desktop computer to manipulate it at design time, and because you cannot register device controls on the desktop computer, the following steps provide an alternative design-time experience. The following procedure assumes you already have your ActiveX control project and host project, and you are hosting the ActiveX control in a dialog.

To design ActiveX controls by using Visual Studio 2005

<project name> IDD_<project

If the application is already running on the device (or emulator), you can attach the debugger to the already running instance.

To attach the debugger to an application running on the device or emulator using Visual Studio 2005

You can choose to attach with the native or managed debugger explicitly, or you can select Automatic to let the IDE decide the appropriate debugger. If you are unsure which to select, Automatic is the best choice. After you select the target device, the Available Process list enumerates the running processes on the device. Note that the Type column indicates whether the application is managed or native: WinCE indicates native, and .NET CF indicates managed. All managed processes inherently have native code running in them, so for a managed application, you will see WinCE, .NET CF in the Type column.

If you have a copy of the .dll or .exe file that you are debugging on the desktop computer and in the symbol search path of Visual Studio 2005, the debugger loads it and tries to find symbols and sources for the component. If the debugger is successful, you'll receive full debugging information (similar to having launched the project with F5). If the debugger cannot find the .dll or .exe file on the desktop computer, and you are targeting Windows Mobile 5.0, it loads PDATA from the device. ARM, MIPS, and SH device compilers use PDATA structures to aid in stack walking at runtime; this structure aids in callstack unwinding.
If you're debugging on a Windows Mobile 2003 device, the debugger will not be able to load the PDATA from the device, so if you have the symbols and sources for the .dll or .exe file but don't actually have a copy of the .dll or .exe on the desktop computer, you'll receive no debugging information.

Just-in-time (JIT) debugging allows you to attach the debugger to an application at the point of a crash, providing you with the opportunity to get details about the cause of the crash. To do this, you need to install the JIT debugger onto the device to give the debugger a chance to catch the exception that the crash throws.

To enable JIT debugging

At this point, the JIT debugger is installed, and any application that crashes on the device results in the JIT debugger notifying you and giving you the opportunity to attach Visual Studio 2005 to the application (or to end the application).

To disable JIT debugging

In cases where you do not have the opportunity to debug a process at the time of the crash, post-mortem debugging allows you to debug an application after it has crashed by attaching the debugger to the crash dump file. The first step is to actually get the dump file from the device. There is an established program, called Windows Quality Online Services, that allows you to retrieve dump files from your application crashes. Due to privacy issues, you need to sign up for the program. You can find more information at Windows Quality Online Services. After you get a dump file, perform the following procedure.

To debug a dump file in Visual Studio 2005

Note Make sure you open the .kdump file as a Project/Solution. If you click the Open File icon instead and open the .kdump file as a file, you will not be able to debug it. If you have the symbols for the .dll or .exe file that crashed, you should set the symbol search path to include the folder containing that file.
Support for debugging Services.exe is being evaluated for Visual Studio 2005, but in the meantime, there is an unsupported workaround that enables services debugging for Visual Studio 2005 Beta 2.

To enable services debugging in Visual Studio 2005 Beta 2

Note Make sure to create a backup of this .xsl file in a separate folder before proceeding.

The next time you attach to a process, you'll see Services.exe as an available process to attach to. Note that for Visual Studio 2005 Beta 2, this scenario is unsupported. It is being evaluated for official support in the Visual Studio 2005 final release.

It is possible to map a folder on your desktop computer (or network) to the emulator as an "SD card." This action simulates inserting a card into the device that contains the files in the desktop computer's folder. It is a convenient way to move files between the Device Emulator image and your desktop computer.

To share a folder

After an image has been started in the emulator, you can configure the image and then save its "state." Therefore, you can turn off the emulator completely, and the next time that you use that image, its last state is restored. This feature is extremely useful if your application requires a specific environment or other installed applications to run. Another benefit of the Save State feature is a drastically reduced start time the next time you start the emulator with that image (because the saved-state image is already started).

To erase a saved state and cold boot the device

The Pocket PC 2003 Second Edition and Smartphone 2003 Second Edition Emulator images that ship in Visual Studio 2005 are actually pre-started saved-state images, which is why the image appears to "boot" instantly when the developer starts the emulator.

In Visual Studio 2005, it is possible to establish an ActiveSync connection to the emulator. You can do this by virtually placing the emulator in its "cradle."
Your desktop computer must have ActiveSync installed (Visual Studio 2005 only supports ActiveSync 4.0 or later).

To establish an ActiveSync connection to the emulator

After you have an ActiveSync connection to the emulator, you can use ActiveSync File Explorer and any other ActiveSync features.

Note After you have an ActiveSync connection to your emulator, when you use Visual Studio 2005, you must treat that emulator as a device when deploying. For example, if you establish an ActiveSync connection to your Pocket PC 2003 Second Edition Emulator image and you want to deploy your application to it, you must select Pocket PC 2003 Device as the target device in Visual Studio.

The emulator supports rotation to simulate real devices that have screen rotation capability (portrait to landscape mode). Note that the underlying image must also support rotation (for example, Pocket PC 2003 Second Edition and later).

To rotate the emulator

Note For Pocket PC, the Calendar button is mapped to the rotate function, so if you select this button, the emulator (and the image inside it) rotates.

You can also map serial ports on the emulator to physical COM ports on your desktop computer. This feature allows you to plug in peripherals and actually have them available to the emulator. A practical example of this feature is having a GPS device communicate over serial Bluetooth that is mapped to your desktop computer's COM1 port, and then mapping your emulator's Serial port 1 to your desktop computer's COM1 port. You can then debug your GPS driver on the emulator.

To map a COM port on the emulator
http://msdn.microsoft.com/en-us/library/ms838270.aspx
November 10:50 AM. Link | Comments (8) | References

May 18, 2005
Why Distinguish Between GETs and POSTs
Posted by cantrell at 10:48 AM. Link | Comments (14) | References

April 24, 2005
Don't Forget to Scope CFHTTP
I ran into a nasty MXNA 2.0 bug last week. As many of you noticed, we had a case where one person's posts were attributed to someone else. I was stumped for about an hour as I went through lots of lines of code, with long spells of staring into space and contemplating. Then it hit me that since this had only happened once in all the thousands of posts MXNA 2.0 has aggregated, it had to be a concurrency issue. And it was. MXNA 2.0 uses cached instances of parser components, and one of those components contained a CFHTTP tag that wasn't scoped, or "VARed". Just the right sequence of events caused the variable cfhttp.fileContent to be overwritten with a string from someone else's feed. It's a one-in-a-million shot, but it happened once, and it would have happened again given enough time. If you're using CFHTTP in a component, and you're using CF 7.0, your code should look like this:

<cfset var foo = 0/>
<cfhttp result="foo" .../>

If you're using CF 6.x, it should look like this:

<cfset var cfhttp.fileContent = 0/>
<cfhttp .../>

Note: I owe Sean Corfield a big thanks for helping me track this down.
Posted by cantrell at 11:27 PM. Link | Comments (2) | References

February 22, 2005
xmlSearch is Always Case Sensitive
The.
Posted by cantrell at 5:33 PM. Link | Comments (2) | References

February 18, 2005
UTF8, MySQL 4.1, and CFMX 7.0
I.
Posted by cantrell at 5:30 PM. Link | Comments (10) | References

10:20 AM. Link | References

February 9, 2005
CFMX 7 in the News
Here.
Posted by cantrell at 11:47 AM.
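The "Don't Forget to Scope CFHTTP" bug above is a general concurrency pattern, not a CFML quirk: any instance that is cached and shared across requests must not park per-request results in shared state. Here is a minimal Java sketch of the same mistake and the fix; the FeedParser class and its method names are hypothetical stand-ins, not the actual MXNA code.

```java
// Hypothetical stand-in for a cached parser component shared by all requests.
class FeedParser {
    // Broken pattern: like an un-VARed cfhttp result, this field is shared
    // by every request that reuses the cached instance, so concurrent
    // requests can overwrite each other's content.
    private String sharedContent;

    String fetchShared(String feedBody) {
        sharedContent = feedBody;   // another thread may overwrite this...
        return sharedContent;       // ...before this read happens
    }

    // Fixed pattern: the equivalent of <cfset var foo = 0/> -- the result
    // lives in a local variable, invisible to other requests.
    String fetchLocal(String feedBody) {
        String localContent = feedBody;
        return localContent;
    }

    public static void main(String[] args) {
        FeedParser parser = new FeedParser();
        System.out.println(parser.fetchLocal("<rss>feed A</rss>"));
    }
}
```

The broken variant only fails under just the right thread interleaving, which is exactly why the original bug surfaced once in thousands of posts and then took an hour of staring to find.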
Link | Comments (4) | References

February 7, 2005
Macromedia ColdFusion 7.0 Resources
Everyone knows by now that Macromedia announced ColdFusion 7.0 today, so rather than making an announcement everyone has already heard, I thought I'd post a few links to some good ColdFusion 7 resources I've come across today:

- The ColdFusion Product Page
- SYS-CON TV Interview with Dave Gruber (ColdFusion Product Manager)
- License Changes in ColdFusion MX 7 (via Talking Tree)
- With the release of a new version comes new tech notes.
- If you're not already subscribed to the ColdFusion Product RSS feed (or other Macromedia product RSS feeds), what are you waiting for?
- Blackstone Locales by Paul Hastings
- Learn More About ColdFusion MX by Ben Forta
- Warning About Flash Forms (Ray Camden)
- Installing CFMX 7 on Mac OS X
- Warning About Application Events (Ray Camden)
- Find everything else ColdFusion related in the ColdFusion category of MXNA

Posted by cantrell at 3:02 PM. Link | Comments (1) | References

January 12, 2005
Eliminate ColdFusion Whitespace Once and For All
Since I'm at Macworld this week and consequently don't have a lot of time to put into my weblog, I'm going to be lazy and reprint a comment sent to me by Jon Alsbury. It was submitted in response to a post entitled Controlling Whitespace in ColdFusion. Jon writes:

The most effective (and easiest to implement) technique for reducing whitespace in CFMX-generated pages I have discovered so far is to set up a simple servlet filter to intercept the response and strip out whitespace before it is returned to the client. The filter I've been using for this is called Trim Filter and can be downloaded here:

Setup is easy: simply download trimflt.jar from the above URL and drop it into your 'cfusionmx/lib' directory.
Add the following to 'cfusionmx/wwwroot/WEB-INF/web.xml':

<filter>
  <filter-name>trimFilter</filter-name>
  <filter-class>com.cj.trim.trimFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>trimFilter</filter-name>
  <url-pattern>*.cfm</url-pattern>
</filter-mapping>

Posted by cantrell at 12:39 PM. Link | Comments (20) | References

December 21, 2004
Who Would Use a CFCONTINUE Tag?
This isn't an official survey (or even an unofficial one, for that matter), but I'm wondering how many people out there would use a CFCONTINUE tag if one were available. Personally, I find the ability to continue (to jump to the next item in a loop) very useful, and occasionally my ColdFusion code suffers without it. 90% of the time I easily get by with CFIF tags inside loops, but when there is a great deal of processing going on and a lot of decisions being made, I've found myself having to nest a lot of CFIF tags when a CFCONTINUE tag could have simplified my code. What are your thoughts?
Posted by cantrell at 11:35 AM. Link | Comments (23) | References

December 7, 2004
Submitting Flash Forms Without Refreshing
I've been working on a way to submit Blackstone Flash forms without refreshing the page, and I have it working quite well. The code lets you either submit data to the server without re-rendering the Flash, or submit the data to a different window (the CFFORM tag supports targets, but the target attribute doesn't give you a way to submit to a different window). I've only tested it with Firefox, but I'm pretty sure it will work in all modern browsers. It's a bit too early to release the code just yet, but once Blackstone is live, I'll release the code along with a tutorial. What do you think of what you've seen of Flash forms so far?
Posted by cantrell at 12:01 PM. Link | Comments (5) | References

November 23, 2004
Reinitializing an Application Using Blackstone Events
One.
Posted by cantrell at 1:46 PM.
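Since CFML has no continue, the nested-CFIF shape described in the CFCONTINUE entry above is the usual workaround. For comparison, here is the same loop written both ways in Java, which does have continue; the items and skip conditions are invented purely for illustration.

```java
import java.util.ArrayList;
import java.util.List;

class ContinueDemo {
    // Without continue: each skip condition forces another level of
    // nesting, which is the shape nested CFIF blocks take inside a CFLOOP.
    static List<String> nested(List<String> items) {
        List<String> kept = new ArrayList<>();
        for (String item : items) {
            if (!item.isEmpty()) {
                if (!item.startsWith("#")) {
                    kept.add(item.toUpperCase());
                }
            }
        }
        return kept;
    }

    // With continue: the skip conditions become flat guard clauses.
    static List<String> flat(List<String> items) {
        List<String> kept = new ArrayList<>();
        for (String item : items) {
            if (item.isEmpty()) continue;
            if (item.startsWith("#")) continue;
            kept.add(item.toUpperCase());
        }
        return kept;
    }

    public static void main(String[] args) {
        System.out.println(flat(List.of("a", "", "#c", "d")));
    }
}
```

Both methods produce identical results; the difference is purely that the flat version stays readable as the number of skip conditions grows, which is the case the post describes.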
Link | Comments (1) | References

November 16, 2004
Chopping Off the End of a List
I'm sure there are tons of these functions around, but I decided to write my own. The listChop function chops a list down to the specified size. Use it like this:

<cfset myList = "a,b,c,d,e"/>
<!--- Chop this list down to 3 elements. --->
<cfset myList = listChop(myList, 3[, delimiter])/>

Here's the function:

<cffunction name="listChop" returnType="string" output="no">
  <cfargument name="targetList" type="string" required="true"/>
  <cfargument name="amountToKeep" type="numeric" required="true"/>
  <cfargument name="delimiter" type="string" required="false" default=","/>
  <cfset var listSize = listLen(arguments.targetList, arguments.delimiter)/>
  <cfset var i = 0/>
  <cfif arguments.amountToKeep lt 0 or listSize - arguments.amountToKeep le 0>
    <cfreturn arguments.targetList/>
  </cfif>
  <cfloop from="#arguments.amountToKeep+1#" to="#listSize#" index="i">
    <cfset arguments.targetList = listDeleteAt(arguments.targetList, arguments.amountToKeep+1, arguments.delimiter)/>
  </cfloop>
  <cfreturn arguments.targetList/>
</cffunction>

I'm using it in a pretty big application I'm writing, so let me know if you see any bugs.
Posted by cantrell at 3:50 PM. Link | Comments (7) | References

November 10, 2004
Safely Selecting the Last Inserted ID, Part II
Well, as it turns out, this isn't an issue at all. I ran two tests. The first followed these steps:

- Using a CFQUERY tag, I executed an insert statement.
- Used Thread.sleep() to make the page hang for 10 seconds.
- Inserted ten additional rows from the command line (using the MySQL command line tool).
- After the Thread.sleep(), used LAST_INSERT_ID() to get the last inserted ID, and displayed it.

I expected the result to be 11, but it was actually 1. The ten rows I inserted between the initial insert and when I selected the last inserted ID had no effect.
I figured this was because ColdFusion was considered one client and the command line administrator a different client, and since the scope of the last inserted ID is the client/connection, I was protected. So I ran this test instead:

- Executed an insert statement in ColdFusion.
- Slept for 10 seconds.
- Ran a second CFM page that inserted 10 rows while the first page was hung.
- After the thread in the original page woke up, selected the last inserted ID and displayed it.

Keep in mind I was using no transactions or locks, since I was trying to get a wrong result before figuring out how to get the correct one; however, the result was once again 1. As it turns out, no transactions or locks are needed. Why? Apparently because each request gets its own database connection, and that one connection gets reused for the duration of the request, regardless of how many database operations you perform. Other requests can happen simultaneously, of course, but they all get their own connections, and hence do not affect the first connection's last inserted ID value. In other words, it seems to work exactly how you would want it to! ColdFusion does it again!

Disclaimer: This test was run using JRun, CFMX 6.1 (with the latest updater) and MySQL version 4.0.18. Before you rely on this data, you might want to run some tests yourself, though I will try to get confirmation from the ColdFusion team that this technique should work across all databases and future versions of ColdFusion.
Posted by cantrell at 3:28 PM. Link | Comments (5) | References

Safely Selecting the Last Inserted ID
I?
Posted by cantrell at 1:22 PM. Link | Comments (9) | References

October 29, 2004
What If You Want To Round Down?
ColdFusion's round() function rounds .5 up. What if you want it to round .5 down?
Use this:

<cffunction name="roundDown" output="no">
  <cfargument name="target" type="numeric" required="true"/>
  <cfreturn abs(round((arguments.target * -1)))/>
</cffunction>

Example:

round(1.1) = 1    roundDown(1.1) = 1
round(1.5) = 2    roundDown(1.5) = 1
round(1.6) = 2    roundDown(1.6) = 2

Addition: Bill pointed out that the function above doesn't work with negative numbers. This one does. Thanks, Bill!

<cffunction name="roundDown" output="no">
  <cfargument name="target" type="numeric" required="true"/>
  <cfreturn (round((arguments.target * -1))) * -1/>
</cffunction>

Example:

roundDown(-1.5) = -2

Posted by cantrell at 10:18 AM. Link | Comments (11) | References

August 31, 2004
How to Log Out of an Application that Uses HTTP Authentication
I?
Posted by cantrell at 12:25 PM. Link | Comments (1) | References

August 26, 2004
ColdFusion 6.1 Updater Now Available
Well, you finally convinced us that there were too many hot fixes out there for you to keep track of and that we needed to roll them all into a 6.1 updater, so here you go. Now back to Blackstone.
Posted by cantrell at 11:08 AM. Link | References

July 28, 2004
Introduction to Flash (for ColdFusion Developers)
I.
Posted by cantrell at 4:48 PM. Link | Comments (2) | References

July 26, 2004
ColdFusion Makes the World a Safer Place
About three weeks ago, America's Most Wanted launched a new site with very comprehensive crime-solving and fugitive-finding functionality. And it's all powered by ColdFusion and Flash (man, I'd love to know how much traffic they are supporting). Macromedia isn't just changing the web -- we're helping to change the world!
Posted by cantrell at 1:28 PM. Link | References

July 13, 2004
Renaming Files As They Are Uploaded (how CFFILE actually works)
When.
Posted by cantrell at 11:23 AM. Link | Comments (14) | References

1:54 PM. Link | Comments (5) | References

June 23, 2004
Free ColdFusion Applications
Need an application, and need it fast?
There is no more efficient technology for developing web applications than ColdFusion, so of course you can build just about anything you need quickly enough, but now you can also just download a complete application for free from Free ColdFusion Applications (by the folks at EasyCFM). There are only about 12 applications to choose from at this point, but it looks like a lot of the basics are covered (weblogs, calendars, forums, etc.). I have no idea how good any of these applications are, but they seem to have a lot of potential. I can think of several uses:

- Download and use as-is.
- Download and customize.
- Download and cannibalize.
- At the very least, get some ideas for the fully customized version you are writing yourself.

Anyone have any experience with any of these apps? What do you think?
Posted by cantrell at 11:57 AM. Link | Comments (2) | References

June 22, 2004
Know Your List Functions
If you use ColdFusion list functions, make sure you know the difference between listContains and listFind. Using listContains where you should be using listFind might appear to work at first, but it can introduce hard-to-find bugs in your applications down the road. listContains returns the index of the first item in the list which contains a substring of the string you are searching for. For instance, consider the following code:

<cfset myList = "abc,def,ghi"/>
<cfoutput>#listContains(myList, "e")#</cfoutput>

The substring "e" is contained in the second item in the list, so listContains returns 2 rather than 0. Now consider the code below, which uses listFind:

<cfset myList = "abc,def,ghi"/>
<cfoutput>#listFind(myList, "e")#</cfoutput>

listFind looks for an exact match rather than just a substring, so 0 is returned since no item in the list matches "e" exactly. (The search is case-sensitive -- for a case-insensitive search, use listFindNoCase.) Most of the time, you are probably going to want to use listFind.
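The distinction above boils down to substring match versus exact match over the list items. A Java sketch of the two behaviors follows; these helpers are illustrative re-creations of the semantics described in the post, not ColdFusion's actual implementation (and they ignore CFML details like custom delimiters and empty items).

```java
class ListFuncs {
    // Analogue of listContains: 1-based index of the first item that
    // CONTAINS the search string as a substring; 0 if none does.
    static int listContains(String list, String search) {
        String[] items = list.split(",");
        for (int i = 0; i < items.length; i++) {
            if (items[i].contains(search)) return i + 1;
        }
        return 0;
    }

    // Analogue of listFind: 1-based index of the first item that EQUALS
    // the search string exactly (case-sensitive); 0 if none does.
    static int listFind(String list, String search) {
        String[] items = list.split(",");
        for (int i = 0; i < items.length; i++) {
            if (items[i].equals(search)) return i + 1;
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(listContains("abc,def,ghi", "e")); // 2
        System.out.println(listFind("abc,def,ghi", "e"));     // 0
    }
}
```

Run against the post's example list, the substring version reports a "hit" at index 2 while the exact-match version correctly reports 0, which is precisely the kind of false positive that hides until the data happens to contain an accidental substring.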
Either way, just make sure you are aware of the difference.
Posted by cantrell at 10:09 AM. Link | References

June 8, 2004
Spike Milligan of Spike-fu made a very cool post recently entitled "Loading java class files from a relative path" which demonstrates a technique for loading class files through a local file URL so that you don't have to put them in your classpath. I thought that was very innovative, and decided to take it a step further. The point of Spike's post was to make server configuration easier. To make configuration easier still, why not store all your class files on one central server and use a similar technique to load them remotely? I wrote a component called RemoteClassLoader that does it for you:

<cfcomponent displayName="RemoteClassLoader" output="no">
  <cfset my = structNew()/>
  <cffunction name="setRemoteClassPaths" returnType="void" output="no">
    <cfargument name="classPaths" type="array" required="true"/>
    <cfset var urls = arrayNew(1)/>
    <cfset var arrayFactory = createObject("java", "java.lang.reflect.Array")/>
    <cfset var urlClass = createObject("java", "java.net.URL").init("")/>
    <cfset var urlArray = arrayFactory.newInstance(urlClass.class, 0)/>
    <cfset var i = 0/>
    <cfloop from="1" to="#arrayLen(arguments.classPaths)#" index="i">
      <cfset urls[i] = createObject("java", "java.net.URL").init(arguments.classPaths[i])/>
    </cfloop>
    <cfset my.loader = createObject("java", "java.net.URLClassLoader").init(urls.toArray(urlArray))/>
  </cffunction>
  <cffunction name="getRemoteClass">
    <cfargument name="class" type="string" required="true"/>
    <cfreturn my.loader.loadClass(class)/>
  </cffunction>
</cfcomponent>

Using the component above, I was able to load Spike's HelloWorld class right from his server, instantiate it, and call functions on it.
The code looks like this:

<cfscript>
  remoteClassLoader = createObject("component", "com.macromedia.net.RemoteClassLoader");
  urlArray = arrayNew(1);
  urlArray[1] = "";
  remoteClassLoader.setRemoteClassPaths(urlArray);
  helloWorld = remoteClassLoader.getRemoteClass("HelloWorld");
</cfscript>
<html>
<cfoutput>#helloWorld.newInstance().sayHello()#</cfoutput>
</html>

A couple of notes:

- The code above assumes that the RemoteClassLoader component is in the package "com.macromedia.net" (which is where I put it after I wrote it, before checking it into CVS).
- I haven't tested this with jar files, but it should work fine.
- I wrote this in about 5 minutes, so you might want to bulletproof it before using it in a production environment.

Thanks for the inspiration, Spike!
Posted by cantrell at 2:49 PM. Link | Comments (10) | References

June 7, 2004
Logging Classes Loaded by the JVM
Good ol' Debbie has just published an interesting TechNote entitled "Determining which class files are loaded by ColdFusion MX" which explains how to get ColdFusion (actually, Java) to log all the classes loaded by the JVM. This can be handy for resolving classpath conflicts and for some types of optimization. Be warned, however, that the log file will get very big very quickly, and that this should not be used in a production environment, due to performance degradation, unless it is the only way to debug your specific issue.
Posted by cantrell at 11:09 AM. Link | References

June 1, 2004
CFMX, OS X, and Java 1.4.2
Back in February, I posted some information on how to get ColdFusion to run on OS X using Java 1.4.2. I made that post before realizing that, although CFMX does start up with the right version of Java, there is no way that I know of to add libraries to Java's (and therefore ColdFusion's) classpath. Adding paths to the -cp or -classpath flags doesn't work because ColdFusion gets its classpath from the admin application.
I thought that starting both CFMX and the admin application fixed the problem until Sean Corfield pointed out that starting the admin application with CFMX forced CFMX to use Java 1.3 rather than 1.4.2. Oops. Well, now I have a complete solution. The instructions I posted back in February are still valid, but now we also have a way to add libraries. Rather than using the classpath to make additional libraries available to Java, put jar files in a directory specified by the java.ext.dirs system property. With my configuration, the directory "/Library/Java/Extensions" works well. For a list of valid directories, log into the ColdFusion administrator, click on the "System Information" link at the top, and scroll down to "Java Ext Dirs". Pick a directory in that list and copy all your third-party jar files there. If you don't like to jar up your classes, you should also be able to put your classes (in their appropriate packages) right into an extension directory, as well. The java.ext.dirs and java.class.path system properties cannot be changed dynamically after Java has started up, so don't bother going that route (I tried it just to be sure). At this point, I think using one of the Java extension directories is your best bet. There are still advantages to using a classpath on other platforms when you can, since the classpath gives you the added advantage of sequencing your jar files (so that classes appearing earlier in your classpath take precedence over classes appearing later), but if that's all I have to give up in order to fully run CFMX on OS X using the latest and greatest version of Java, that's perfectly fine with me.
Posted by cantrell at 2:32 PM. Link | Comments (1) | References

May 26, 2004
Macromedia ColdFusion Forums RSSified
Ro.
Posted by cantrell at 5:11 PM. Link | References

May 20, 2004
ColdFusion MacroChat Today
I think this is likely to be the coolest MacroChat yet.
At 12:00 PM Pacific (3:00 PM Eastern), Ben Forta, Tim Buntel and Dave Gruber talk ColdFusion and answer community questions. To participate, go to:

Other MacroChats for today (Thursday):

- CSS for Dreamweaver. A free presentation by Macromedia Dreamweaver Product Evangelist Greg Rewis. Thursday, May 20, 2004, 9:00 AM PT/12:00 PM ET
- Delegating Web Content Updates with Macromedia Contribute. A free presentation by Macromedia Product Manager Lawson Hancock. Thursday, May 20, 2004, 10:00 AM PT/1:00 PM ET
- Director MX 2004 New Features, Putting It All Together. A free presentation by Macromedia Product Engineer Thomas Higgins. Thursday, May 20, 2004, 3:00 PM PT/6:00 PM ET
- Customizing and Extending Dreamweaver MX 2004. A free presentation by Team Macromedia member Danilo Celic. Thursday, May 20, 2004, 4:00 PM PT/7:00 PM ET

Posted by cantrell at 8:41 AM. Link | References

May 5, 2004
New ColdFusion IDE On The Way
Gest!
Posted by cantrell at 2:03 PM. Link | References

March 23, 2004
Eclipse and ColdFusion
Who.
Posted by cantrell at 5:01 PM. Link | Comments (9) | References

March 18, 2004
getAuthUser needs CFLOGIN
I.
Posted by cantrell at 3:15 PM. Link | Comments (8) | References

March 17, 2004
ColdFusion Security Bulletins
For.
Posted by cantrell at 10:38 AM. Link |

11, 2004
Preserving Star Wars History with ColdFusion
originaltrilogy.com could prove to be one of the most important websites of our time, and I'm proud to say it's powered by ColdFusion. The purpose of the site, in their own words: "George Lucas doesn't plan on releasing the original theatrical cuts of the first Star Wars trilogy on DVD -- or any other home video format, for that matter. They're gone forever. The point of the petition is to try and change his mind." ColdFusion and Star Wars: an unbeatable combination, and a noble cause.
Posted by cantrell at 12:14 AM. Link | Comments (6) | References

February 4, 2004
Hidden String Functionality
You>
Posted by cantrell at 12:00 PM.
Link | Comments (6) | References

January 26, 2004
Generating Random Numbers in ColdFusion
There>
Posted by cantrell at 11:07 AM. Link | Comments (3) | References

January 22, 2004
PATH Variable in ColdFusion
I?
Posted by cantrell at 7:03 PM. Link | Comments (7) | References

12:14 PM. Link | Comments (16) | References

January 15, 2004
Byte Arrays and ColdFusion
I'm looking for ways to dynamically create byte arrays of indeterminate length in ColdFusion. I have tried this:

sb = createObject("java", "java.lang.StringBuffer");
sb.init(someLength);
byteArray = sb.toString().getBytes();

// and this:
baos = createObject("java", "java.io.ByteArrayOutputStream");
baos.init(someLength);
byteArray = baos.toByteArray();

...but both return byte arrays of 0 length rather than someLength. (In both cases, the constructor argument is only an initial capacity, not a size.) The only technique I have found that works is this:

function getByteArray(someLength) {
    sb = createObject("java", "java.lang.StringBuffer");
    for (i = 0; i lt someLength; i = i + 1) {
        sb.append("_");
    }
    return sb.toString().getBytes();
}

This works, but as you might expect, it's not the most efficient technique. Any other ideas?
Posted by cantrell at 12:03 AM. Link | Comments (4) | References

January 7, 2004
ColdFusion and Types
I!
Posted by cantrell at 12:04 PM. Link | Comments (11) | References

November 13, 2003
ColdFusion vs Flash (Part III)
I.
Posted by cantrell at 3:43 PM. Link | Comments (8) | References

October 31, 2003
ColdFusion vs Flash (Part II)
(For.
Posted by cantrell at 3:58 PM. Link | Comments (1) | References

October 30, 2003
ColdFusion MX 6.1 Versus Flash MX 2004
I'm working on a relatively simple application for which I have decided to build both a ColdFusion/HTML and a Flash interface. Neither interface will provide any more or less functionality than the other, and both will be written on top of the same components, but I think it will be an interesting experience. I will document the pros and cons of both approaches as I go along.
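(An aside on the "Byte Arrays and ColdFusion" question above: in straight Java, java.lang.reflect.Array can allocate a sized, zero-filled byte array directly, and that class is also reachable from CFMX via createObject. A sketch, assuming plain Java rather than CFML:)

```java
import java.lang.reflect.Array;

class ByteArrays {
    // Allocates a zero-filled byte array of the requested length via the
    // reflective array factory, the same API that is callable from CFMX
    // through createObject("java", "java.lang.reflect.Array").
    static byte[] getByteArray(int someLength) {
        return (byte[]) Array.newInstance(byte.class, someLength);
    }

    public static void main(String[] args) {
        System.out.println(getByteArray(16).length); // 16
    }
}
```

The one wrinkle on the CFML side is obtaining the primitive class object, since byte.class isn't directly expressible there; java.lang.Byte.TYPE refers to the same class object, so treat this as the Java-side answer rather than a verified drop-in CFML fix.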
I'm considering a slightly unique architecture for this project, as well. The Flash front-end will use web services to communicate with the server, and I'm actually considering having the ColdFusion/HTML interface communicate via the same web services. Unconventional, I know. The obvious approach would be to have the web services that the Flash front-end uses wrap the components that the ColdFusion/HTML interface uses, but if both use the same web service interface, that would allow me to run the ColdFusion interface on a different server than the rest of the back-end, which I think is interesting. I haven't decided yet, but I'm leaning in that direction. If nothing else, it would be an interesting experiment, and I am interested in seeing how the ColdFusion application would perform.
Posted by cantrell at 5:24 PM. Link | Comments (8) | References

October 28, 2003
Interesting Article on Community MX
How.
Posted by cantrell at 2:46 PM. Link | Comments (1) | References

October 27, 2003
US Government Powered By ColdFusion
I'm sure many of you already caught this on Ben Forta's weblog, but in case you missed it, it seems that ColdFusion is, by far, the preferred technology for government websites. Second is ASP, third is PHP, and last is JSP. You can get more specific numbers on forta.com.
Posted by cantrell at 4:13 PM. Link | References

October 16, 2003
ColdFusion MX 6.1 CFFORM Hot Fix
From.
Posted by cantrell at 4:00 PM. Link | Comments (1) | References

October 9, 2003
Where Do You Put Your Components?
The Macromedia Web Technology Group, in their most recent ColdFusion MX Coding Guidelines, recommends that components which are to be accessed directly through Flash Remoting or web services go in {cfmxroot}/wwwroot/{applicationname}/, and that any other components be stored under {cfmxroot}/extensions/components/{applicationname}/.
I sometimes do something similar, although I generally use {cfmxroot}/com/macromedia/apps/{applicationname}/ instead. This works well for applications that you write, install and configure yourself; however, I found that when I wanted to distribute an application, I preferred having all the application's files in a single directory. Therefore, I have started putting all application-specific components -- whether or not they are accessed directly through the browser, Flash Remoting or web services -- in {cfmxroot}/wwwroot/{applicationname}/components/{subdirectory}. At first, this may not appear to be the most elegant arrangement; however, I like the idea of having people unzip a single directory in their web root, set up a data source, tweak a few configuration parameters in the Application.cfm file or an external XML file, and be up and running. Now, there's really no reason you can't do the same thing with your components outside your application directory; however, I have found both packaging and unpacking to be more straightforward when everything is contained in a single directory. So my current thinking is to consider the type of application I am writing and what it is intended for before deciding where to place my components. Where do you put yours? Also, one circumstance that the WTG coding guidelines do not address is the location of generic, reusable components. For instance, I have a calc.cfc which performs certain mathematical functions in {cfmxroot}/wwwroot/com/macromedia/util, which has worked out well.
Posted by cantrell at 4:02 PM. Link | Comments (24) | References

October 8, 2003
Ben Forta Holds Forth on ASP.NET and ColdFusion
Mac.
Posted by cantrell at 6:10 PM. Link | Comments (2) | References

October 6, 2003
Does This ColdFusion Tag Make Sense?
I had a debate the other day over whether this tag makes sense or not. I say it doesn't, but the ColdFusion server says otherwise. What do you think, and why?
<cfargument name="foo" required="true" default="bar"/>

Posted by cantrell at 6:27 PM. Link | Comments (8) | References

5:02 PM. Link | Comments (2) | References

October 1, 2003
CFFORM - An Informal Poll
Yesterday I wrote about server-side validation and error handling versus client-side validation using JavaScript. I got some awesome comments, many of which contained some pretty valuable insight. Now on to a related topic: CFFORM. From what I've been able to gather, you either love CFFORM or you hate it. If you love it, you love it because it saves you time, and because you might not know JavaScript very well and would rather build applications and make money than take the time to learn a new language. If you hate it, you probably know JavaScript pretty well and prefer your own way of doing validation. What are your thoughts on CFFORM? If you like it, what do you like about it? If you don't like it, is it the implementation or the concept? In other words, if it were completely re-factored, would you consider using it, or will you always prefer to use your own code?
Posted by cantrell at 12:16 PM. Link | Comments (27) | References

September 30, 2003
Validation - Client or Server-side?
I?
Posted by cantrell at 1:25 PM. Link | Comments (18) | References

September 29, 2003
Simplifying Component Inheritance
When.
Posted by cantrell at 1:17 PM. Link | Comments (2) | References

September 25, 2003
Multiple Threads in ColdFusion
I.
Posted by cantrell at 1:40 PM. Link | Comments (18) | References

September 24, 2003
What's Your Dream ColdFusion Feature?
Let.
Posted by cantrell at 1:32 PM. Link | Comments (59) | References

September 18, 2003
Builder.com Reviews ColdFusion MX 6.1
Builder.com has an extremely positive review of ColdFusion MX 6.1 entitled "Take another leap forward with ColdFusion MX 6.1".
Posted by cantrell at 1:37 PM. Link | Comments (13) | References

September 15, 2003
Tabs or Spaces?
Which?
Posted by cantrell at 1:30 PM.
Link | Comments (30) | References

September 9, 2003
Binary Deployment of ColdFusion Applications
I!
Posted by cantrell at 1:27 PM. Link | Comments (14) | References

September 8, 2003
Leveraging Java in Your ColdFusion Applications
There are four basic ways to use Java with ColdFusion:

- CFX tags
- JSP tags
- Using CFOBJECT or createObject() to access custom classes in your classpath
- Direct embedding (usually using CFScript)

CFX tags have a very straightforward interface. You simply implement a function called processRequest contained in the interface com.allaire.cfx.CustomTag, and work with the Request and Response objects that get passed in. CFX tags are the easiest way to implement ColdFusion tags in Java. The JSP custom tag interface is much more complex and flexible, allowing you to respond to very specific tag parsing events, nest tags, and work with body content. Using the CFOBJECT tag or createObject function is as easy or as difficult as the APIs you are accessing, while the simplest way to work with Java in ColdFusion is probably to embed it directly, typically using CFScript. How often do you find yourself using Java in your ColdFusion applications? What do you typically use it for, or what have you used it for in the past? Finally, which of the techniques above do you typically use, and why?
Posted by cantrell at 11:22 AM. Link | Comments (13) | References

September 2, 2003
Using expandPath with Virtual Directories and IIS
Thanks to Nathan Strutz for submitting his findings on using expandPath with virtual directories. Nathan found that the following code...

expandPath("../myDirectory/mySharedDirectory/")

...will return...

c:\inetpub\wwwroot\myDirectory\mySharedDirectory\

However, if he adds a slash to the beginning, like this...

expandPath("/../myDirectory/mySharedDirectory/")

...he gets...

\\sharedComputer\ShareName\shareLocation\mySharedDirectory\

Anyone care to confirm or deny this behavior? (I'm running on OS X right now, so I can't.)
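At its core, expandPath is relative-path resolution against the current template's directory, and Nathan's report suggests the leading slash changes which base path IIS resolves against. The resolution-and-normalization step itself can be sketched with java.nio; the paths here are invented examples, and this models only the "../" normalization, not IIS virtual-directory mapping.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

class ExpandPathDemo {
    // Resolves a relative path against a base directory and normalizes
    // away "./" and "../" segments -- the core of what expandPath does
    // with the directory of the currently executing template.
    static String expand(String baseDir, String relative) {
        Path resolved = Paths.get(baseDir).resolve(relative).normalize();
        return resolved.toString();
    }

    public static void main(String[] args) {
        // e.g. a template living under /inetpub/wwwroot/app
        System.out.println(expand("/inetpub/wwwroot/app", "../myDirectory/mySharedDirectory"));
    }
}
```

With a different base directory substituted in (say, a UNC share backing a virtual directory), the same normalization yields a completely different absolute path, which would account for the two results Nathan observed.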
If you have tips or tricks that you would like to blog vicariously through me (you get the credit, of course), send them to cantrell@macromedia.com. Don't worry about including a self-addressed stamped envelope.
Posted by cantrell at 11:58 AM. Link | Comments (3) | References

2:03 PM. Link | Comments (7) | References

August 26, 2003
Application Configuration: How Do You Do It?
There?
Posted by cantrell at 12:42 PM. Link | Comments (7) | References

August 22, 2003
Customer Research: How Do You Use CFHTTP?
I'd like to find out the primary ways in which ColdFusion customers use the CFHTTP tag. Specifically, I'm interested in the following:

- How often do you use it?
- What types of things do you use it for?
- Do you primarily use it for GET or POST operations?
- Do you use its more advanced capabilities, like proxy and query support?

Please post responses here, or send them to me directly at cantrell@macromedia.com. Thanks for your time!
Posted by cantrell at 1:23 PM. Link | Comments (13) | References

August 21, 2003
Debate: The Best Way to Invoke Custom Tags
How?
Posted by cantrell at 10:34 AM. Link | Comments (17) | References

August 20, 2003
Separating Sessions From Cookies
Cookies.
Posted by cantrell at 1:54 PM. Link | Comments (5) | References

August 19, 2003
eWEEK Reviews ColdFusion MX 6.1
Yesterday,...
Posted by cantrell at 11:04 AM. Link | Comments (2) |

August 15, 2003
Tags vs. CFScript
Now that you can write functions in both tag form and as CFScript, which way are people leaning, and why? I like that tags allow a level of validation in terms of data types and requirements, but I must admit that I prefer the more streamlined syntax of CFScript. What are your thoughts? Should the same advantages that one gets from writing UDFs as tags be added to CFScript? Should CFScript become ECMAScript? Server-side ActionScript? Java? What?
Posted by cantrell at 12:42 PM.
Link | Comments (18) | References August 14, 2003 Macromedia Releases ColdFusion MX 6.1 Performance Briefs Check. Posted by cantrell at 12:57 PM. Link | Comments (7) | References August 13, 2003 Using CFHTTP to Build a Query Sorry I have been lazy about posts recently. I've been in Boston, meeting with various product teams. All I can say is that there is some cool stuff on the horizon. Anyway, did you know that CFHTTP can automatically turn a comma-delimited file into a query object for you? Let's say you have a file called data.txt that looks like this:

firstName, lastName, emailAddress
Christian, Cantrell, cantrell@macromedia.com
Mike, Chambers, mesh@macromedia.com
Baby, Blue, bluebaby@macromedia.com

The following use of CFHTTP will parse the data above into a query stored in the variable "myQuery": <cfhttp method="GET" url="" name="myQuery"> You can use the columns attribute of the CFHTTP tag to specify a different set of column headers, and you can use the firstrowasheaders attribute to include the first row as data rather than column headers. And, of course, your comma-delimited file doesn't have to be static; the delimited values can be dynamically generated by any means. Posted by cantrell at 3:13 PM. Link | Comments (3) | References August 7, 2003 Improved Email Functionality in CFMX 6.1. Posted by cantrell at 2:27 PM. Link | Comments (3) | References August 6, 2003 Another Way to Continue in ColdFusion Last> Posted by cantrell at 2:32 PM. Link | Comments (3) | References August 5, 2003 Installing ColdFusion MX 6.1 on OS X I have seen a few people experience difficulties installing ColdFusion MX 6.1 on OS X. Not to worry -- it works fine. You just have to make a few adjustments. The problem people seem to be running into is with the graphical installer. There seems to be an issue with the graphical installer and Sun's 1.4.1 JVM for OS X.
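CFHTTP does the parsing for you, but the transformation is easy to picture. As a rough illustration only (this is not ColdFusion's actual implementation, and the class and method names here are hypothetical), a Java sketch of turning the sample file above into column-keyed rows might look like this:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CsvToQuery {
    // Parse comma-delimited text: the first row supplies column headers,
    // each remaining row becomes a map keyed by those headers.
    static List<Map<String, String>> parse(String csv) {
        String[] lines = csv.trim().split("\\r?\\n");
        String[] cols = lines[0].trim().split("\\s*,\\s*");
        List<Map<String, String>> rows = new ArrayList<>();
        for (int i = 1; i < lines.length; i++) {
            String[] vals = lines[i].trim().split("\\s*,\\s*");
            Map<String, String> row = new LinkedHashMap<>();
            for (int c = 0; c < cols.length && c < vals.length; c++) {
                row.put(cols[c], vals[c]);
            }
            rows.add(row);
        }
        return rows;
    }

    public static void main(String[] args) {
        String data = "firstName, lastName, emailAddress\n"
                    + "Christian, Cantrell, cantrell@macromedia.com\n"
                    + "Mike, Chambers, mesh@macromedia.com\n"
                    + "Baby, Blue, bluebaby@macromedia.com";
        List<Map<String, String>> query = parse(data);
        System.out.println(query.size());                  // 3
        System.out.println(query.get(0).get("firstName")); // Christian
    }
}
```

The firstrowasheaders and columns attributes would simply change which row (if any) supplies the keys.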
There are two ways to fix this: Run the installer in console mode like this:

% java -jar ./coldfusion-61-other.jar -i console

(Don't actually type the "%" -- that is meant to represent your command prompt.) This is not as scary as it might sound. The console installer is just as user friendly as the graphical version, except for the fact that you have to type paths in rather than navigate to them. Run the graphical installer with Java 1.3.1 rather than 1.4.1. This is perfectly safe and doable on most configurations since the OS X software updater does not remove the old version of Java when installing the new version (rather, it installs the new JVM right alongside the old one and simply changes some symbolic links). Here is the command for running the graphical installer with Java 1.3.1:

/System/Library/Frameworks/JavaVM.framework/Versions/1.3.1/Home/bin/java -jar ./coldfusion-61-other.jar -i gui

The command above may have broken onto two lines in your browser -- make sure you run it as a single line. Posted by cantrell at 3:06 PM. Link | Comments (20) | References Red Sky is Live! The. Posted by cantrell at 10:20 AM. Link | Comments (2) | References August 4, 2003 Living Without "continue" in ColdFusion A. Posted by cantrell at 1:26 PM. Link | Comments (6) | References July 31, 2003 Using ColdFusion Server Variables Most? Posted by cantrell at 3:04 PM. Link | Comments (5) | References July 29, 2003 Getting a Client's IP Address With Flash Remoting I. Posted by cantrell at 12:31 PM. Link | Comments (2) | References July 25, 2003 Follow-up on Session Variables vs. Hidden Inputs A. Posted by cantrell at 11:41 AM. Link | Comments (7) | References July 24, 2003 Session Variables vs. Hidden Inputs I. Posted by cantrell at 2:00 PM. Link | Comments (12) | References July 21, 2003 Learning to Like the Var Keyword Coming. Posted by cantrell at 2:22 PM.
Link | Comments (2) | References July 18, 2003 ColdFusion and Graphics on Linux Anyone out there have problems using image manipulation libraries -- or even just CFCHART, for that matter -- on a Linux server? If you aren't running an X server on your Linux box (which you most likely are not) and/or do not have the XFree86 libraries installed, you are not going to be able to use tags like CFCHART, or the upcoming Jimg package on DRK 4 which lets you use ColdFusion tags or components to manipulate images in a variety of ways. The reason is that Java uses native graphic libraries for many graphic operations, so in some cases, either X needs to be running, or a virtual X server (like the X Virtual Frame Buffer, or Xvfb). These are not fun issues to solve. Daemonite posted about this back in April in the context of CFCHART and offers some advice, and thanks to Ben Simon, I recently found a very comprehensive resource on how to solve the issue. Posted by cantrell at 11:43 AM. Link | Comments (4) | References July 7, 2003 Adding Enhanced Rounding Support to ColdFusion Over the weekend, I ran into a situation where I needed more fine-grained rounding than ColdFusion supports. Rather than rounding to the closest whole number, I wanted to be able to round to a specific decimal place. For instance, given the number 2.345, I didn't want the number 2; I wanted the number 2.35. Fortunately, Java picks up where ColdFusion leaves off. The following UDF lets you specify the number of places to round to: // mode can be "up", "down", or "even". Even is the default.
function decimalRound(numberToRound, numberOfPlaces, mode) {
    var bd = createObject("java", "java.math.BigDecimal");
    bd.init(arguments.numberToRound);
    if (structCount(arguments) lt 3) {
        mode = "even";
    }
    if (mode is "up") {
        bd = bd.setScale(arguments.numberOfPlaces, bd.ROUND_HALF_UP);
    } else if (mode is "down") {
        bd = bd.setScale(arguments.numberOfPlaces, bd.ROUND_HALF_DOWN);
    } else {
        bd = bd.setScale(arguments.numberOfPlaces, bd.ROUND_HALF_EVEN);
    }
    return bd.toString();
}

The mode is an interesting argument. Its meaning is clear for "up" and "down", but "even" is a little more involved. What "even" means is that if the last digit in your number (the number to the left of the discarded portion of the original number) is even, round down, and if it's odd, round up. In other words, half the time, act as though the mode were "up" and the other half, act as though the mode were "down". This is the best way to eliminate cumulative rounding errors over a series of calculations, and worked perfectly for me. Posted by cantrell at 12:55 PM. Link | Comments (4) | References July 3, 2003 Rolling Your Own CFPAUSE Tag Somehow it came to my attention recently that BlueDragon supports a CFPAUSE tag. According to the documentation: The CFPAUSE tag allows you to pause the execution of a page for a specified number of seconds. The interval attribute is required and must specify an integer value. I'm not sure why you would want to do this (if your application is just too darn fast?), but if it appeals to you for some reason, and you're using ColdFusion MX, here is my own CFPAUSE tag:

<cfparam name="attributes.interval" default="1"/>
<cfscript>
    thread = createObject("java", "java.lang.Thread");
    thread.sleep(javaCast("long", 1000*attributes.interval));
</cfscript>

Here is the dynamic new CFPAUSE tag in action:

<cfimport taglib="/cf_tags" prefix="mm"/>
<html>
<cflog text="this"/>
<mm:pause/>
<cflog text="is"/>
<mm:pause/>
<cflog text="slow"/>
</html>

Posted by cantrell at 12:34 PM.
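The BigDecimal machinery the UDF wraps can be exercised directly in Java. One caveat in this sketch: it constructs the BigDecimal from a String, so results can differ slightly from the CFML version, which passes a floating-point number (2.345 as a double is actually a hair above 2.345):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundDemo {
    // Round a decimal string to a given number of places using one of
    // BigDecimal's three "half" rounding modes: up, down, or even.
    static String round(String number, int places, RoundingMode mode) {
        return new BigDecimal(number).setScale(places, mode).toString();
    }

    public static void main(String[] args) {
        System.out.println(round("2.345", 2, RoundingMode.HALF_UP));   // 2.35
        System.out.println(round("2.345", 2, RoundingMode.HALF_DOWN)); // 2.34
        // half-even: the retained digit 4 is even, so round down...
        System.out.println(round("2.345", 2, RoundingMode.HALF_EVEN)); // 2.34
        // ...but 5 is odd, so round up
        System.out.println(round("2.355", 2, RoundingMode.HALF_EVEN)); // 2.36
    }
}
```

The RoundingMode enum is the modern spelling of the ROUND_HALF_* integer constants the UDF uses; the behavior is the same.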
Link | Comments (9) | References July 2, 2003 How to Snoop on ColdFusion Data Types Last week, I made a couple of posts about ColdFusion Arrays, and how they are actually java.util.Vectors, which means that you can convert them to Java arrays by calling toArray(). How did I figure that out? I didn't ask the ColdFusion engineers. That's cheating. The first thing I did was find out what type of class we are actually dealing with when we have a reference to an array. The Java object "Object" (which all objects extend) has a method called getClass() which returns the runtime class of an object. Calling toString() on the class (or simply the act of outputting it, which automatically calls toString()) will reveal the class name:

<cfset cfArray = arrayNew(1)/>
<cfset cfArray[1] = "c"/>
<cfset cfArray[2] = "b"/>
<cfset cfArray[3] = "a"/>
<html>
<cfoutput>
#cfArray.getClass()#
</cfoutput>
</html>

The result of the code above is: class coldfusion.runtime.Array So now I know that I'm dealing with a coldfusion.runtime.Array, however that information doesn't do me any good by itself. What I need to know is what a coldfusion.runtime.Array really is, and what its public interface looks like. That's where "javap" comes in. javap is a program that comes installed with your JDK that most people actually don't know about. I don't know what the "p" stands for (any ideas?), but javap is essentially a Java class disassembler. Running it against any class in your classpath will, by default, output public and protected method signatures along with other class information (use the -private flag to see private method signatures). If you have java installed and in your path, at the command line, type: javap java.lang.String And you will get something like:

Compiled from String.java
public final class java.lang.String extends java.lang.Object
    implements java.io.Serializable, java.lang.Comparable, java.lang.CharSequence {
    public static final java.util.Comparator CASE_INSENSITIVE_ORDER;
    public java.lang.String();
    public java.lang.String(java.lang.String);
    public java.lang.String(char[]);
    public java.lang.String(char[],int,int);
    public java.lang.String(byte[],int,int,int);
    public java.lang.String(byte[],int);
    public java.lang.String(byte[],int,int,java.lang.String) throws java.io.UnsupportedEncodingException;
    public java.lang.String(byte[],java.lang.String) throws java.io.UnsupportedEncodingException;
    public java.lang.String(byte[],int,int);
    public java.lang.String(byte[]);
    public java.lang.String(java.lang.StringBuffer);
    java.lang.String(int,int,char[]);
    public int length();
    ...
}

So to find out more about coldfusion.runtime.Array, I used the following command: javap -classpath /path/to/your/cfusion.jar coldfusion.runtime.Array The output is:

No source
public final class coldfusion.runtime.Array extends java.util.Vector {
    public coldfusion.runtime.Array();
    public coldfusion.runtime.Array(int);
    static coldfusion.runtime.Array copy(coldfusion.runtime.Array);
    static coldfusion.runtime.Array copy(java.util.List);
    public int getDimension();
    ...
}

Since I could see from the output above that coldfusion.runtime.Array extends java.util.Vector, I knew that I had access to all of Vector's public methods, as well, such as toArray(), which was precisely the method I was looking for. Posted by cantrell at 12:45 PM. Link | Comments (4) | References July 1, 2003 New Addition to Macromedia Weblogs Ben. Posted by cantrell at 11:28 AM. Link | Comments (3) | References June 30, 2003 Ping MXNA from ColdFusion Scott Keene has just released MXNAPing 1.0, a ColdFusion component for pinging the Macromedia XML News Aggregator. "Pinging" refers to the process of sending MXNA an XML-RPC request with a special ID in it that tells MXNA that you have just updated your blog. MXNA then knows to go pick up your RSS feed and get your new post aggregated immediately.
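The same spelunking can also be done programmatically. coldfusion.runtime.Array isn't on a stock classpath, so as a stand-in this sketch walks java.util.Vector's own inheritance chain with getClass() and getSuperclass() (the runtime equivalent of what javap shows statically):

```java
import java.util.Vector;

public class Snoop {
    // Walk up the inheritance chain of any object at runtime and
    // render it as "Subclass -> Superclass -> ... -> java.lang.Object".
    static String describe(Object o) {
        StringBuilder sb = new StringBuilder();
        for (Class<?> c = o.getClass(); c != null; c = c.getSuperclass()) {
            sb.append(c.getName());
            if (c.getSuperclass() != null) sb.append(" -> ");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Vector<String> v = new Vector<>();
        // Vector's chain is why a class extending Vector inherits toArray()
        System.out.println(describe(v));
    }
}
```

Running this prints java.util.Vector -> java.util.AbstractList -> java.util.AbstractCollection -> java.lang.Object, which is exactly the inheritance a subclass like coldfusion.runtime.Array picks up.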
MXNA checks all of its feeds twice an hour, however by pinging MXNA, you can make sure your posts are picked up instantly. For more information about pinging MXNA, including instructions on how to configure Movable Type for pings, see the MXNA FAQ. For information on pinging MXNA from PHP, see Rob Hall's recent work. Anyone want to have a go at Java support? Posted by cantrell at 11:17 AM. Link | References June 27, 2003 Sorting Two Dimensional Arrays Yesterday. Posted by cantrell at 12:10 PM. Link | Comments (4) | References June 26, 2003 Macromedia Opensources Spectra Macromedia recently announced that Spectra will be opensourced, which means that you will be able to download the source code for free, and both build and deploy Spectra applications under the macromedia Spectra Software License, which is based on the Apache Software Foundation license. I haven't played with Spectra myself, however now that it looks like it might come back to life, I'll have to give it a shot. This is the full post that Tim Buntel made to the spectra-talk list yesterday: As you know, Macromedia announced in May 2001 that there would be no new feature-additive releases of Macromedia Spectra. To allow Spectra applications to run on top of ColdFusion MX, Spectra 1.5.2 was made available in December 2002 as an update release distributed through SpectraSource for existing Spectra 1.5.1 customers. Macromedia is now pleased to announce that the full source of Macromedia Spectra is to be released under a public open-source license. The source will be available as a free download with which you can build and redistribute Spectra applications as allowed by the Macromedia Spectra Software License (based on the Apache Software Foundation license). We plan to release the full product as soon as possible as a free download from the SpectraSource site (). The final scheduling is still being planned, but we anticipate release by fall of 2003. 
Watch SpectraSource for details as they become available. This notice will be posted there within the next day or so. If you have any questions or comments, email spectraopensource@macromedia.com. Macromedia Spectra Open-source FAQ What does this open source announcement mean for me? If you already have an investment in Spectra, you can maintain or expand your applications as well as benefit from the support of the Spectra community. If you do not have an investment in Spectra, give it a try - you can now use as many or as few parts of the framework as you like in your ColdFusion applications. What will the free download contain? The full Spectra product code will be available with the exception of several OEM technologies, namely the Ektron HTML editor and the Sybase SQL Anywhere database. Will this version require an existing installation of Spectra? Will it require ColdFusion MX? Details regarding the distribution have not yet been finalized, but our intention is to provide a version of Spectra that does not require anything besides ColdFusion. When will it be available? Final release scheduling is still being planned, but we anticipate release by this fall. Watch SpectraSource for more details. Will macromedia continue to offer technical support for Spectra? We will continue to provide technical support for Macromedia Spectra through December 31, 2003 to customers holding a purchased license of Spectra 1.5.1. Please note, however, that support will not be available if you modify the core application source code. How does Open Source compare to Community Source? Originally, Spectra was organized around a community source model where Macromedia owned the software source, but contributions come from the developer community and were vetted against the company's own standards, and then included in future releases. 
Once the open source version of Spectra is released, the complete source will be available as a free download with which you can build and redistribute Spectra applications as allowed by the Macromedia Spectra Software License (based on the Apache Software Foundation license). Thanks! Tim Buntel Product Manager Macromedia ColdFusion Server Posted by cantrell at 11:46 AM. Link | Comments (5) | References June 24, 2003 Sorting 2-Dimensional Arrays There? Posted by cantrell at 12:17 PM. Link | Comments (7) | References June 19, 2003 Another Way to Serve Binary Data I made a post recently on using ColdFusion to write out binary data to the output stream in order to allow me to serve an image or other binary file that doesn't exist on disk. The original code I posted looked like this: > In the example above, I'm reading the file from disk, but in a real-life situation, I would be getting the bytes from a URL or a database. Anyway, Spike Washburn (you guys know Spike?) was able to reduce the code above to just this: <cffile action="readbinary" file="/home/cantrell/Pictures/Corrs2.jpg" variable="pic"/> <cfcontent type="image/gif; charset=8859_1"> <CFSCRIPT> writeOutput(toString(pic)); </cfscript> Very cool concept. (Hint: the key is in the character encoding.) Thanks for the fresh perspective, Spike! Posted by cantrell at 5:15 PM. Link | Comments (26) | References June 17, 2003 Data Connection Kit and ColdFusion Macromedia DevNet just published an excellent tutorial on using the Data Connection Kit with ColdFusion written by Ben Forta. I read through the article and worked through the examples over the weekend, and I found it to be very informative. If you have been wondering what the Data Connection Kit and FireFly components are all about, check out what Ben has to say. Posted by cantrell at 3:58 PM. 
Link | References Geoff Bowers Explains the CFMX Administrator Geoff Bowers of Daemon Internet Consultants has recently released an excellent Breeze presentation on the CFMX administrator. If you have questions or doubts about CFMX administration, Geoff's presentation is likely to clear them up. He goes into just enough detail to make the information valuable, however he also keeps it at a high enough level that he is able to cover the entire administrator in just a little over 15 minutes. Thanks for putting so much time into this, Geoff. Posted by cantrell at 12:22 PM. Link | Comments (1) | References June 16, 2003 Casio.com Uses ColdFusion and Fusebox I discovered over the weekend that Casio's site is implemented in ColdFusion, and seems to use Fusebox. Anyone know the folks who built it? It's a very well-built, well-designed, functional application. Posted by cantrell at 11:59 AM. Link | Comments (2) | References June 12, 2003 Curious about Royale? Who isn't? We have some information posted on our site now. It starts out: Royale is the internal code name for a new initiative at Macromedia that will address the requirements of enterprise programmers who want to develop rich client applications. There is also a FAQ available. Posted by cantrell at 2:28 PM. Link | Comments (2) | References IBM Promotes ColdFusion MX and WebSphere IBM recently published an article on the advantages of integrating CFMX and WebSphere. The article starts: The ColdFusion markup language (CFML) has a reputation for being an easy scripting language to learn. The ColdFusion tag-based programming model allows for rapid Web development, and the inherent simplicity of this model makes Internet application development possible for a wider population of developers. It's great to see how invested IBM is in ColdFusion MX. The article continues:. Posted by cantrell at 11:02 AM. Link | Comments (1) | References 11:47 AM. 
Link | Comments (28) | References May 23, 2003 Closing Tags with Slashes: An Informal Survey How many ColdFusion programmers out there are religious about closing their tags? In other words, are you more likely to do this... <cfreturn foo> ... or this ... <cfreturn foo/> How about your HTML tags? Strict, transitional, or "freestyle"? Posted by cantrell at 6:05 PM. Link | Comments (15) | References New ColdFusion TechNote: Incorrect output behavior when using Sitewide Error Handler Macromedia has published a new ColdFusion TechNote: Incorrect output behavior when using Sitewide Error Handler If you are having problems using a site-wide error handler in CFMX, have a look. The TechNote provides a work-around. Posted by cantrell at 11:36 AM. Link | Comments (1) | References May 21, 2003 Ben Forta's Presentation on ColdFusion MX for J2EE Check out Ben's Breeze presentation on ColdFusion MX for J2EE, The Marriage of Power and Productivity: Good stuff, Ben. Posted by cantrell at 2:18 PM. Link | References May 19, 2003 Using structKeyExists Rather Than isDefined If you have ever tried using the isDefined function like this: <cfif isDefined("url['foo']")> Then you have probably seen this error: Parameter 1 of function IsDefined, which is now "url['foo']",. Posted by cantrell at 6:47 PM. Link | Comments (2) | References May 13, 2003 New TechNote: ColdFusion MX and JRun 4 Support for Windows 2003 From Macromedia's website: Both ColdFusion MX and JRun4 will provide support for Microsoft's newest operating system, Windows 2003. This TechNote will discuss the availability of this enhancement to these Macromedia server products. You can find the entire TechNote here: Posted by cantrell at 10:11 AM. Link | May 8, 2003 Jeremy Allaire Discusses ColdFusion Past And Present, and What's Next for the Web In this interview, Jeremy discusses: - The history of ColdFusion. - What his and JJ's roles were in its creation. - Some interesting facts about Microsoft and ASP.
- Jeremy's background before the days of ColdFusion. - HTML and CFML editors. - Jeremy's new role at General Catalyst. - The next big web technologies. - Weblogs and RSS. Posted by cantrell at 12:37 May 3, 2003 Unit Testing ColdFusion Components Unit testing is a great way to black-box test your components. By "black-box testing," I mean that you are only testing the results of function calls as opposed to white-box testing which actually exposes the inner workings of components and functions. A unit test is code written to test other code by simulating a real use case and comparing the results of function calls to expected results. These types of tests are most often done with languages that are object oriented, which means that unit testing code is appropriate for ColdFusion components. There is a unit testing framework on DRK 3 called cfunit (named after the very popular JUnit testing framework for Java). Read more about cfunit on Macromedia's website. Raymond Camden came across cfunit last week. You can read his reaction here: Posted by cantrell at 6:22 PM. Link | Comments (3) | References April 30, 2003 Three New TechNotes cfform errors with ColdFusion MX on multihomed servers Configuring the Microsoft SQL Server 2000 JDBC driver Data source verification fails for Oracle JDBC Thin Driver Posted by cantrell at 11:56 PM. Link | Comments (1) | References Controlling Whitespace in ColdFusion With browsers being as generous as they are about whitespace, ColdFusion, like every other scripting language I have used, doesn't seem to make much of an effort to keep it to a minimum. Typically, whitespace is not a concern since browsers handle it so well, however if you are trying to generate an XML document, it can be a big problem. That's why we have tags like cfsilent, cfprocessingdirective, and cfsetting, and that's why we have the ability to enable whitespace management in the ColdFusion administrator.
However, CFMX for J2EE doesn't seem to come with the whitespace management option, and sometimes no matter what combination of whitespace management tags I use, I simply cannot prevent the server from writing out a few carriage returns at the top of the document (which makes for invalid XML). I finally discovered the ultimate solution. The best way I have found to prevent whitespace is to manage the buffer and the output stream yourself. When a page is being executed, generated content is being written to a buffer, and that buffer gets flushed to the output stream which represents the connection between your server and a browser. You can use the cfflush tag to flush that buffer if you need data to reach the client faster (for instance, if you want to give them feedback about a process which might take several seconds to complete), and you can also clear that buffer as well, which means you can get rid of unwanted whitespace! The only way I have found to do it is to resort to Java. The following does the trick:

<cfscript>
getPageContext().getOut().clearBuffer();
writeOutput(someContent);
</cfscript>

The most bulletproof way I have found for generating valid XML in ColdFusion is to use a combination of the cfsavecontent tag and the Java code above, like this:

<cfset someVar = "Whitespace can be a pain."/>
<cfsavecontent variable="responseXml"><?xml version="1.0"?>
<root>
<element><cfoutput>#someVar#</cfoutput></element>
</root></cfsavecontent>
<cfscript>
getPageContext().getOut().clearBuffer();
writeOutput(responseXml);
</cfscript>

Feel free to share your whitespace techniques here. I'm interested in knowing how others solve this problem. Posted by cantrell at 12:22 PM. Link | Comments (14) | References April 29, 2003 XML-RPC and ColdFusion Ro. Posted by cantrell at 1:56 PM.
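The buffer-then-clean idea generalizes beyond ColdFusion. As a plain-Java illustration of the principle (not CF's internals; the class and method names here are made up for the sketch): accumulate the page output in a buffer, discard anything before the XML declaration, and only then write to the wire:

```java
public class XmlBuffer {
    // Given a page buffer that may start with stray whitespace,
    // return only the portion from the XML declaration onward,
    // so the first bytes sent to the client are "<?xml".
    static String clean(String buffered) {
        int start = buffered.indexOf("<?xml");
        return start >= 0 ? buffered.substring(start) : buffered.trim();
    }

    public static void main(String[] args) {
        String buffered = "\r\n\n  <?xml version=\"1.0\"?>\n<root/>";
        System.out.println(clean(buffered)); // starts cleanly with <?xml
    }
}
```

clearBuffer() in the CFML above is the more drastic version of the same move: it throws away everything accumulated so far so you can write the clean document yourself.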
Link | Comments (3) | References April 28, 2003 Ben & Jerry's Switches to CFMX, Flash MX and DW MX Ben and Jerry's Ice Cream recently "went dynamic" by upgrading their static 600 HTML page site to a full database driven CF MX, Flash MX and DW MX powered site. Check it out: But even more important is the fact that tomorrow (4/29) is free cone day, so go get yourself some free ice-cream at any Ben and Jerry's between noon and 8PM! Posted by cantrell at 11:40 PM. Link | References April 25, 2003 Crystal Reports -- An Informal Survey I have a few questions about the cfreport tag and the use of Crystal Reports in general. - How many of you out there use the cfreport tag to generate Crystal Reports? - What version of reports do you generate? - What do you use the reports for? - How important is the cfreport tag to you? Feel free to answer here or email me directly. Posted by cantrell at 5:39 PM. Link | Comments (10) | References April 23, 2003 Uploading a File with Mac IE 5.1! Posted by cantrell at 3:47 PM. Link | Comments (1) | References April 18, 2003 CF_Europe 2003 Who's going? I was just looking through the list of speakers, and I'm impressed. Too bad it's only two days! You can find details here: Here's the important stuff: CF_Europe is a yearly European Macromedia Developer Conference organised and run by the CFUGs and MMUGs of Europe. CF_Europe 2003 (our 2nd year) is being held at Olympia Conference Centre, London, UK on Thursday 29th May to Friday 30th May 2003. The event features numerous workshops on ColdFusion Development, Server Management, User Interfaces, and Web Development Solutions. The conference program is designed to enable emerging and seasoned developers extend their skill sets and expand their knowledgebase. Posted by cantrell at 4:05 PM. Link | References April 15, 2003 CFMX for J2EE License Transfer Program Extended Anyone interested in going from ColdFusion Server to CFMX for J2EE should check this out. 
From Macromedia's website: "For a limited time, Macromedia ColdFusion Server Enterprise customers can transfer their licenses to ColdFusion MX for J2EE and receive up to a 30% discount through the Macromedia Volume License Program (MVLP). Now, ColdFusion Server 4.5 (and later) Enterprise Edition (English version) customers can begin developing, deploying, and migrating their ColdFusion applications on their preferred J2EE application server at a significant savings." Offer good through 6/31/2003. Details here: Posted by cantrell at 2:36 PM. Link | Comments (1) | References April 9, 2003 No More OutOfMemory Errors: Posted by cantrell at 9:52 AM. Link | Comments (5) | References April 7, 2003 CFDJ Interview CFDJ recently published an interview with Sarge and myself. Check it out here: Posted by cantrell at 2:55 PM. Link | References EditPlus ColdFusion Syntax File Sam Neff, who I recently met in San Francisco, put together an EditPlus syntax file for ColdFusion MX. You can find it here: Posted by cantrell at 2:54 PM.. Posted by cantrell at 2:21 PM. Link | References March 20, 2003 Macromedia Releases ColdFusion Updater 3 Macromedia has just released ColdFusion MX Updater 3. The updater applies to ColdFusion MX Server Professional Edition, ColdFusion MX Server Enterprise Edition and ColdFusion MX for J2EE. Macromedia strongly encourages its customers to download and install updater 3 as it contains the latest security and stability functionality. The updater can be downloaded here: Read the updater release notes here: Note that updater 3 contains all the updates and fixes of the previous two updaters, so if you install updater 3, you do not need to install the first two. Posted by cantrell at 11:22 PM. Link | References March 18, 2003 How to Migrate or Switch to ColdFusion MX If Posted by cantrell at 9:33 AM. Link | References March 17, 2003 Using the createUUID Function I. Posted by cantrell at 2:48 PM. 
Link | Comments (5) | References March 12, 2003 Flash Remoting Update Mac: Posted by cantrell at 4:06 PM. Link | References March 5, 2003 Macromedia Launches Entirely New Site I. Posted by cantrell at 10:45 PM. Link | References March 4, 2003 ColdFusion 5 and Apache 2.x.x Mac: Posted by cantrell at 6:00 PM. Link | Comments (4) | February 26, 2003 cfform Work-around There. Posted by cantrell at 6:37 PM. Link | References February 20, 2003 Don't Forget About "contains" I. Posted by cantrell at 1:01 PM. Link | References February 18, 2003 Why Use cftry and cfcatch? Someone. Posted by cantrell at 12:31 AM. Link | References February 6, 2003 Shared Variable Scopes and Locking This is a post I made this morning on cf-talk in response to a thread on locking shared variable scopes. The short answer is that you don't need to unless you are preventing a race condition. For the long answer, read on: Just some additional interesting information on shared variable scopes: the reason you do not need to lock them (unless you are attempting to prevent a race condition) is that their underlying Java implementations use java.util.Hashtables. Hashtables are synchronized so that two threads cannot modify the same instance of a Hashtable concurrently. So Tony, you are right that Macromedia engineers did the right thing here, otherwise there would be a lot more cflocking going on. For instance, if they had used a HashMap, all access/modification would have to be locked (to prevent actual exceptions as opposed to just unexpected behavior), which would make for a lot more code with no advantages. Another thing to note is that synchronization at such a low level is very fast and efficient; faster than using cflocks. Posted by cantrell at 5:53 PM. Link | Comments (2) | References February 5, 2003 Pet Market Blueprint Application on Mac OS X In my spare time, I have been trying to get the Pet Market Blueprint app (Flash version) running on CFMX and JRun 4 on my Mac. 
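The Hashtable property the post about shared scopes and locking describes can be demonstrated directly. This is a hypothetical sketch, not CF's actual scope implementation: several threads hammer one Hashtable at once, and because each put() is internally synchronized, no entries are lost or corrupted without any external lock. (Compound check-then-act sequences are a different story; those still race and need a cflock.)

```java
import java.util.Hashtable;

public class HashtableDemo {
    // Fill one shared Hashtable from nThreads threads concurrently,
    // each writing perThread distinct keys, then report the final size.
    static int fill(int nThreads, int perThread) {
        final Hashtable<String, Integer> table = new Hashtable<>();
        Thread[] threads = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    table.put(id + ":" + i, i); // synchronized internally
                }
            });
            threads[t].start();
        }
        try {
            for (Thread th : threads) th.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return table.size();
    }

    public static void main(String[] args) {
        System.out.println(fill(4, 1000)); // 4000: every put survived
    }
}
```

Swap the Hashtable for a plain java.util.HashMap and the same run can lose entries or corrupt the table, which is exactly why the choice of Hashtable spares ColdFusion developers so much cflocking.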
As it turns out, the Unix version technically works just fine right out of the box -- just follow the instructions, and even the datasource will be set up properly since Pointbase is a 100% pure Java implementation. There is one little detail, however, that actually has more to do with JRun configuration than the Pet Market app itself, but boy is it a tricky one. JRun comes with its own version of Flash Remoting which is mapped to flashservices/gateway. It is a Java-only version of Flash Remoting, meaning it will not find and delegate to ColdFusion components. Of course, ColdFusion for J2EE servers comes with Flash Remoting as well, so you have to find a way for the two of them to coexist. I found that both of the options below worked, depending on how I had CFMX configured: - If you are using a context root for ColdFusion (like "cfmx" or "cfusion"), edit the shell_init.xml file in the petmarket web directory. You can continue to use the "default" backend (you do not have to use j2ee), but you will need to add your context root before the flashservices/gateway reference in the gatewayURI tag. For instance, if your context root is "cfmx", your tag should look like this: <gatewayURI dir="cfmx/flashservices/gateway" /> Save the file and everything should run fine. You don't even have to restart anything. - If you do not use a context path for ColdFusion (more properly stated, if your context root is "" or "/"), you simply need to make sure that the Flash Remoting gateway that came with JRun does not intercept your requests. The easiest way to do this is to go into the JRun administrator () and change the context root of the Flash Remoting Enterprise application to anything other than flashservices (for instance, I changed it to "flash-services"). Then redeploy the Flash Remoting application or restart JRun and you should be good to go. If you want to give it a try, download the necessary files here: Posted by cantrell at 6:00 PM.
February 4, 2003: Calling CFCs Directly From Your Browser

I was debugging something the other day with a friend of mine, and I made a request for a CFC directly from my browser. My friend had never seen that before, so I thought I'd blog the technique in case others might have missed it as well. If your CFCs are located in your web root, you can reference them directly from your browser like this:

A request like this will redirect you to a component called cfcexplorer.cfc (you will be required to authenticate while being redirected), which will auto-generate documentation for your component similar to Javadoc. It's a great way to browse your API. Another technique you can use is to actually invoke a function from your browser, like this:

A request like this will not redirect you to the cfcexplorer, but will actually invoke the method specified as the value of the "method" parameter (note that the method's access must be specified as remote). This is a great way to test your CFCs, or even to write entire applications (though the latter would be an unusual architecture). If output is enabled in both your component and your function, you can output HTML from your components. I should add that outputting HTML from your components is an unusual practice, and typically presentation logic is contained in CFM and HTML files rather than CFCs, but occasionally it's worth doing. I should also note that a new instance of the CFC gets created on every request, so although that happens very quickly and efficiently, it is something to take into consideration while designing your overall architecture.

Posted by cantrell at 5:40 PM.

January 31, 2003: How To Never Have to Write Another Get or Set Method Again
Posted by cantrell at 6:49 PM.
January 30, 2003: Windows NT Authentication Security Bulletin

If you have been trying to get Windows NT Authentication to work with CFMX the way it used to with CF 5, you will want to check this out:

Posted by cantrell at 11:59 PM.

January 28, 2003: How To Limit File Upload Sizes
Posted by cantrell at 12:33 PM.

January 22, 2003: Macromedia Releases a Pure CFMX Version of the Pet Market App

Macromedia recently released a pure CFMX version of the blueprint Pet Market application. Tim Buntel, Senior Product Manager for ColdFusion MX, has a good article at the URL below:

Posted by cantrell at 2:43 PM.

January 17, 2003: Answer a Few Questions About How You Use ColdFusion and Enter to Win $100

The following was posted by Phil Costa, the ColdFusion Product Manager:

On the Macromedia server product teams, we believe in delivering tools and technologies that solve real-world problems for web application developers. To help us plan future releases of Macromedia server products, we'd like you to answer a few questions about your web development projects and your use of ColdFusion and JRun for database reporting applications.

HELP SHAPE THE FUTURE OF MACROMEDIA SERVERS AND MAYBE WIN $100 AT AMAZON

With your valuable feedback, we can ensure that Macromedia servers continue to meet your development needs. And if you complete the survey, you'll be automatically entered in a contest to win one of two $100 gift certificates at Amazon.com. Thanks again for your continued enthusiasm and support. We look forward to reviewing your feedback.

Regards,
Phil Costa

Posted by cantrell at 8:42 PM.

January 16, 2003: How To Get Around the Linux/Solaris and ColdFusion Installation Bug
Posted by cantrell at 2:40 PM.
January 15, 2003: 10% off ColdFusion MX or JRun MX Until January 15th

Use the promotion codes below to save on various Macromedia products (including server products like ColdFusion and JRun).

US Codes:
10% off Tools Products: SP23CMMRTL
10% Off Server Products: SP23CMMRSV

UK Codes:
10% off Authorware, Director, Fontographer, Freehand, Homesite, Fireworks and Dreamweaver: SP23CMMRTL1
10% off Flash: SP23CMMRTL2

Posted by cantrell at 12:05 AM.

January 11, 2003: Patch Available For ColdFusion MX Enterprise Edition Sandbox Security Issue

The <cfinclude> tag and the <cfmodule> tag will accept filenames with relative paths as arguments. ColdFusion MX does not check the Sandbox Security Files/Dirs permissions before including files with these tags. This could allow a template to access unauthorized data using these tags. Find out more (and download the patch) at the URL below:

Posted by cantrell at 9:10 PM.

January 9, 2003: New ColdFusion MX for JRun Performance Brief Published

See how ColdFusion MX for JRun performs and scales on platforms and multiple server installations. Go to the URL below and look under the "White Papers" section on the left-hand side:

Posted by cantrell at 9:12 PM.

January 7, 2003: Macromedia Announces CFMX and JRun for Mac OS X!
Posted by cantrell at 9:12 PM.

January 3, 2003: Using the "var" Keyword to Scope Variables in ColdFusion Components

I've seen a fair number of posts recently regarding the use of the "var" keyword inside of ColdFusion components. I posted a pretty comprehensive explanation on cfcdev, but both for posterity and for those who might have missed it, I will re-post it here (along with some additional information).

All local variables used within a cffunction should be declared at the top of your function (under your cfargument tags), including query names. For instance:

<cffunction ...>
    <cfargument ... />
    <cfset var queryName = "" />
    <cfquery name="queryName">
        some query...
    </cfquery>
</cffunction>

It is a matter of scoping. Using "var" makes variables local to the function rather than being global in scope, where they can stomp on other variables. It's the same in JavaScript:

function returnFoo() {
    var myVar = "foo";
    return myVar;
}

In the function above, if you were to remove the var keyword, myVar would be a global variable, which you rarely want. Though it usually does not cause a problem in either JavaScript or ColdFusion, when it does, it can be extremely difficult to find the bug. You are much better off coding your CFCs, UDFs and JavaScript functions as tightly as you can.

The big difference between JavaScript and CFC functions regarding the use of var is that in ColdFusion, all variables declared using the var keyword have to be declared at the top of the function, whereas in JavaScript, you can declare them anywhere inside of a function. User-defined functions are similar to CFCs in that declarations must be made at the top of the function. They cannot be made anywhere else, and they must be contained within a function (as opposed to just cfscript tags).

Posted by cantrell at 1:21 PM.
http://weblogs.macromedia.com/cantrell/archives/coldfusion/index.html
At the same time as the Eclipse announcement, Novell also announced the availability of Novell exteNd 5, an innovative set of service-oriented integration and portal technologies. Much of the innovation of this release has gone into the creation of a completely new, standards-based and data-binding set of frontend technologies based on XForms and JSR 168 portlets. See the full press release at and download the software from - developer versions of the product are FREE. What does TheServerSide community think about XForms-based development? What are your opinions about appserver-independent products like exteNd? Comments always appreciated...

Rik Van Bruggen
rvanbruggen@novell.com

Novell exteNd 5.0 and XForms based portlets (15 messages)
- Posted by: Rik Van Bruggen
- Posted on: January 22 2004 15:22 EST

Threaded Messages (15)
- Xforms might be the beginning of a new generation of GUI tools by Rafael Benito on January 23 2004 17:55 EST
- Re: Xforms might be the beginning of a new generation of GUI tools by Jeff Dill on January 23 2004 19:24 EST
- How Browser as presentation broker and intermediary? by Deepak Bajaj on January 23 2004 09:19 EST
- Why not Do The Full Monty: Use XUL by Gerald Bauer on January 24 2004 03:43 EST
- Re: Xforms might be the beginning of a new generation of GUI tools by Mark N on February 03 2004 08:08 EST
- how timely...
by Markus Blumrich on January 23 2004 21:55 EST
- FYI: XForms is just a data-binding architecture by Gerald Bauer on January 24 2004 03:58 EST
- XForms in context by Mark Figley on January 24 2004 13:39 EST
- Newsflash: XHTML *2.0* Is A Next-Gen Markup Language by Gerald Bauer on January 25 2004 05:02 EST
- Newsflash: XHTML *2.0* Is A Next-Gen Markup Language by Mark Figley on January 25 2004 02:37 EST
- Creating A Rich Internet For Everyone by Gerald Bauer on January 26 2004 12:35 EST
- XUL and a stripped-down XForms spec go together by Gerald Bauer on January 25 2004 05:12 EST
- Some XForms clarification by Rafael Benito on January 26 2004 11:40 EST
- Free...but at what cost? by John Rubier on January 27 2004 13:53 EST
- Novell exteNd 5.0 and XForms based portlets by Mark N on February 03 2004 08:16 EST

Xforms might be the beginning of a new generation of GUI tools

I think XForms is great. Currently, web development is a mess. Everything is mixed up in web pages: presentation, data and logic with tags, scripting... The web was created to navigate through documents, not to make transactional applications, so a number of modifications to the original idea have come up over the last ten years to end with what we have today... garbage.

- Posted by: Rafael Benito
- Posted on: January 23 2004 17:55 EST
- in response to Rik Van Bruggen

There is a need for a new generation of universal client, XML based, that lets developers make proper engineering decisions. XForms is a good starting point, but the problem is that when you embed it into HTML, you lose a great deal of its advantages. The whole language should be revisited. XForms can provide for data and logic, but a new, separate presentation layer should be developed to get rid of the current situation.
This would not change the web, but for enterprise applications I do not see any justification for using web navigators, other than that it is a universal client.

Rafael Benito
rbenito@satec.es

Re: Xforms might be the beginning of a new generation of GUI tools

> There is a need for a new generation of universal client,

- Posted by: Jeff Dill
- Posted on: January 23 2004 19:24 EST
- in response to Rafael Benito

> XML based, that lets developers make proper engineering
> decisions.

Right on. Enterprise applications are long overdue for a universal XML application language. I'm surprised Gerald hasn't replied to this yet with a plug for XUL. :o)

> for enterprise applications I do not see any justification
> for using web navigators, other than that it is a universal client

And that one justification is all-important. We are all writing web applications not because the web browser is easy to program, but because the browser is the only universal client. Small intranet apps might get away with some client installation (JVM, Flash, ActiveX, etc). Big intranet apps, and ~any~ extranet or Internet app, ~must~ deploy to web browsers. With an installed base of billions, the web browser isn't going away in this decade. But that's OK. With a properly engineered presentation layer, web browsers can handle much more than the limited concepts in XForms.

Jeff Dill
Isomorphic Software

How Browser as presentation broker and intermediary?

I have been thinking about this for a while, and am at fundamental loggerheads with the following.

- Posted by: Deepak Bajaj
- Posted on: January 23 2004 21:19 EST
- in response to Jeff Dill

Using the server to generate presentation artifacts. Look at the most often used controls in a browser, and you can quickly identify those to be the text input field and button (that is, nothing but HTML directives to render certain behavior and presentation on the browser face). Unarguably, the native OS would always have the richest presentation controls and behavior.
If only the browser could tap those to help itself with presentation and behavior control. That leads to the idea of a federated presentation engine, where HTML or some close avatar negotiates and decides the browser-native presentation controls available, and presents this capability matrix as part of init to the application bootstrap code. The application on boot would then, based on this information, spit out the presentation drivers most apt to leverage the interrogated layer to the maximum. Obviously, it would be nice if the common presentation capability could be defined as a spec and available on all native OSs. I do not think this is a long shot. The application's default presentation driver would always be programmed against this default spec-compliant presentation driver, and changed to higher and more refined presentation controls if and when available on the rendering platform. In essence, division of labor takes place as follows:

Native OS:
1) Implements all presentation controls
2) Implements a common presentation layer (spec compliant)

Browser:
1) Interrogates the native OS for presentation capability and relays this information to the application at the app server end.

Application Server:
1) Generates the code, while being cognizant of the current browser's presentation capabilities.

Benefits:
1) The application does not have to devise ingenious mechanisms for presentation rendering.
2) Leads to division of labor
3) Presentation executes at the point closest to its presence, hence best-in-class performance

This is obviously just a wild idea, but on analogous thoughts, JVM download into the browser to paint Swing faces failed because of remoteness of reality, lack of performance, plain ugly controls and lack of maturity in controls. Howzaaat?

Regards,
Deepak

Why not Do The Full Monty: Use XUL

> Right on.
> Enterprise applications are long overdue for a universal XML

- Posted by: Gerald Bauer
- Posted on: January 24 2004 03:43 EST
- in response to Jeff Dill

> application language. I'm surprised Gerald hasn't replied to this yet with a
> plug for XUL. :o)

Just to get the story straight: XForms is *not* a free-standing markup language like HTML, SVG or XUL; XForms is just a data binding architecture and thus needs a next-gen XML host language such as SVG or XUL. To find out more about XUL (XML UI Language) check out the XUL Alliance site online @ or catch the latest XUL buzz live from the XUL News Wire online @

- Gerald

Re: Xforms might be the beginning of a new generation of GUI tools

> but because the browser is the only universal client.

- Posted by: Mark N
- Posted on: February 03 2004 08:08 EST
- in response to Jeff Dill

Which is weird, because I can't count (like the stars in the sky) the number of products that only support one browser (IE). In my mind, that makes it not that universal.

how timely...

I'm evaluating XForms at the moment as a possible technology for easing portal development / interaction with our BPM engine. XML Schema is what makes this possibility exciting.

- Posted by: Markus Blumrich
- Posted on: January 23 2004 21:55 EST
- in response to Rik Van Bruggen

Things look promising so far, but I'd like to point out a couple things:

1) Vendors selling XForms solutions are invariably packaging those solutions along with other business components (most commonly BPM/Workflow frameworks). In our case, where we have already adopted a BPM and development framework (WebLogic Platform 8.1), there is a 0% chance I am going to approach my boss and suggest we spend that much again for essentially a competing development environment/application framework, just to gain access to pre-packaged Novell XForms portlets. I'm getting a little frustrated with vendors' inability to find the right product offering granularity.
A component is a component: sell it on its own, and let architects like myself choose what we want to piece together. And keep the prices realistic too, or we're forced to either roll our own or find open source alternatives.

2) XForms is NOT restricted to client implementations. If it was, it would have no purpose for my application, since I couldn't rely on an incoming form submission's validity. Note the title of this website... I'd like to point out that Java too was misunderstood in a similar way.

3) There are open source J2EE server-side implementations of XForms. It's easy enough to google them, but I'll throw one URL out, which is the package I've decided to try a proof-of-concept with first. I'd be extremely interested in hearing from others, especially BEA portal or JSR 168 users, who are on a similar journey.

cheers,
Markus

FYI: XForms is just a data-binding architecture

> a completely new, standards-based and data-binding set of frontend technologies

- Posted by: Gerald Bauer
- Posted on: January 24 2004 03:58 EST
- in response to Rik Van Bruggen

> based on Xforms and JSR 168 portlets.

Rick, does Novell plan to support next-gen XML markup languages such as XUL (including Macromedia XUL, aka Flex, or Microsoft XUL, aka XAML) or SVG? Also note that XForms is just a data-binding architecture and not a free-standing markup language like XUL, SVG or HTML, and thus XForms requires a *next-gen* XML hosting language and a *next-gen* XML browser.

- Gerald

XForms in context

To say that XForms _requires_ a next-gen XML language is misleading, especially considering that it will be the native forms engine for the W3C XHTML 2.0 spec, (finally) replacing HTML forms. So even XHTML, as thin as it is, is an appropriate host for XForms.
But your comments make a great point in that they do demonstrate one of the greatest strengths of XForms, which is that XForms is extremely flexible and can be integrated elegantly into other XML-based presentation technologies. This will enable a pairing of technologies in which the combined whole characteristically changes the underlying technologies.

- Posted by: Mark Figley
- Posted on: January 24 2004 13:39 EST
- in response to Gerald Bauer

A great example of this is SVG. SVG is great as a graphics display technology and is a technology choice free from IP issues and single-vendor dependencies, but it could not compete with Flash for RIA (Rich Internet Application) development because there was not an elegant way to create UI widgets for user input. But now SVG can be the host language for an XForm, which has rich and flexible user input capabilities to supplement SVG's lightweight graphics and animation capabilities. The result is an RIA environment that is WAY easier to programmatically generate than Flash is thanks to XML, based upon open standards, and free from single-vendor dependencies.

Flex comes a long way, but the XML tags Flex uses to generate the UI are better seen as a custom tag library, not a markup language based upon XML and backed by a schema (a la HTML). The difference is critical, and becomes most apparent when you are programmatically generating the markup for presentation. We programmatically construct HTML so easily that we take it for granted. The custom tag approach necessitates a completely different model where the developer is sitting there coding each display document and using these tags to do so. So if you have a presentation tier that dynamically constructs presentation flows based upon declarative metadata, XForms would be a better choice.
I am far from being an expert on XUL (or presentation tier technologies in general), but for what it's worth, the biggest problem that I can see with an XForms-XUL integration is controller logic conflicts. Who is driving the process? Who controls the event model? XForms and XUL are both designed with the presumption that they are driving the event model. Also, why does XUL need XForms? It already has a richer widget set than XForms, and I am struggling to find the value that XForms would provide in an XUL context. A better candidate for XForms integration would be a technology that does not attempt to address the whole stack of UI construction concerns, but instead focuses on presentation and then communicates with the XForms engine via XML Events or JavaScript (i.e. XHTML or SVG).

One last thing: the best thing about XForms is the way that it simplifies the server side. One feature in particular: support for multiple page flows within the context of a single request. The user can request your form, transition through a number of pages on the client without needing to talk to the server, and when the user submits, the client sends ready-to-go XML back to the server. That means no need for a finite state machine in the application tier to manage user navigation, no more chatty and poor-performing clients, and significantly reduced complexity in request processing logic because you are starting with the XML.

- Mark

Newsflash: XHTML *2.0* Is A Next-Gen Markup Language

> To say that XForms _requires_ a next-gen XML language is misleading, especially

- Posted by: Gerald Bauer
- Posted on: January 25 2004 05:02 EST
- in response to Mark Figley

> considering that it will be the native forms engine for the W3C XHTML 2.0 spec,
> (finally) replacing HTML forms. So even XHTML, as thin as it is, is an
> appropriate host for XForms.

Mark, I hope you realize that XHTML *2.0* is a next-gen XML markup language.
XHTML *2.0* burns all bridges and won't be backwards compatible with XHTML *1.x*.

- Gerald

Newsflash: XHTML *2.0* Is A Next-Gen Markup Language

Gerald,

- Posted by: Mark Figley
- Posted on: January 25 2004 14:37 EST
- in response to Gerald Bauer

Thanks for posting back. I think that we completely agree with each other, but just to be clear, according to the W3C Working Draft, XHTML 2.0 will run in 95% of the existing browser base without modification. So the "burns all bridges" statement should not imply that HTML loses its greatest strength, which is the existence of a universal thin client. From what I can see, XHTML 2.0 is focused on two major objectives: 1) the completion of the long migration towards CSS for presentation styling (as opposed to HTML attributes), and 2) fully embracing XML and the modularization/integration that it provides. If you already make use of XML-compliant HTML and you use CSS for styling, you are already well on your way. Also, browsers will be able to support both XHTML 1.x and 2.0 at the same time, so the burning bridges statement should also not imply that architects will have to choose between one or the other.

There is a caveat to the backward compatibility statement, which ironically enough involves XForms itself as well as a couple of other new modules. The new modules (XForms, XML Events, and Ruby, which is a module for displaying the Chinese character set) will not be supported by current browsers, just as HTML tables weren't supported by everybody when they were first introduced. But that is not to say that the rest of XHTML 2.0 won't work fine in your browser. Also, I did not intend to give the impression that XForms is dependent on XHTML 2.0. Perhaps a better example to demonstrate that XForms does not require a next-gen markup language would be XHTML 1.x, which I assumed was a given since that is how production XForms implementations are deployed today.
XHTML and XForms do not have integrated namespaces yet, but that doesn't stop XHTML 1.x from being a fine host language for XForms. Because most of the current browsers do not have XForms capability, you will need a plugin until browser support is native, which admittedly makes my universal thin client argument pretty weak once XForms is thrown into the mix. So we completely agree on that front. There is an existing Mozilla bug to make XForms support on Mozilla/Netscape native. IE is another story, and I have no idea if Microsoft will see XForms as a competitor to XAML and therefore limit support for it. Most of the current XForms plugins specifically target IE, so there would be a workaround, although I think you would agree that plugin dependencies should be a last resort.

On your other post, I am disappointed that you do not see value in XML Events or the forms capabilities that XForms provides. We may have to agree to disagree on that one. Again, thanks for posting back. Out of curiosity, are you involved in the XUL project? If so, are you guys going. Best of luck.

- Mark

Creating A Rich Internet For Everyone

> Out of curiosity, are you involved in the XUL project? If so, are you guys going

- Posted by: Gerald Bauer
- Posted on: January 26 2004 12:35 EST
- in response to Mark Figley

>.

Mark, I invite you to discuss your questions on the xul-talk mailing list hosted at the XUL Alliance site at SourceForge online @ You can subscribe/unsubscribe to xul-talk at any time online @

- Gerald

PS: To answer your question about XUL: I'm not involved in Mozilla XUL other than promoting it and pushing for change. However, I'm the XUL Alliance webmaster and head the Luxor XUL Toolkit project. I suggest using xul-talk because several Mozilla XUL guys such as Ian Hickson are subscribed to xul-talk and don't read TSS for sure.

XUL and a stripped-down XForms spec go together

> Also, why does XUL need XForms?
> It already has a richer widget set than XForms,

- Posted by: Gerald Bauer
- Posted on: January 25 2004 05:12 EST
- in response to Mark Figley

> and I am struggling to find the value that XForms would provide in an XUL
> context.

Mark, XUL and XForms go together. XForms basically is just a data binding architecture that lets you load and save (send and receive) XML data (= XML in, XML out). All the rest that comes with the XForms 1.0 spec, such as its widget set or XEvents model, is just useless bloat.

- Gerald

Some XForms clarification

IMHO XForms is not fully understood. XForms has an XML section which is the data model for the form and a second section that contains the logic with which the end-user interacts with the form. This section IS NOT ABOUT WIDGETS. They are outside the scope of XForms. The user could interact with the form with a voice application, for example. Then, you have conditionals, iterators, calculated values and a number of other nice features. This is the beauty of XForms.

- Posted by: Rafael Benito
- Posted on: January 26 2004 11:40 EST
- in response to Rik Van Bruggen

On the other hand, for the desktop, you need to map XForms controls to widgets that will have their look and their layout. This part is not in the standard. As you put XForms as part of XHTML 2.0, the whole thing is covered, but it seems to me that the logic part of XForms is mixed up with presentation; we lose the separation of presentation and logic. XUL is really mainly about widgets and does not have much about logic or data separation.

Rafael Benito
rbenito@satec.es

Free...but at what cost?

I guess they have to make up for their purchase of SUSE & Ximian somehow: "The Professional Suite is available for $50,000 per CPU, and the Enterprise Suite is available for $120,000 per CPU...". Of course, just get your CEO to take a .1% pay cut and it's paid for!
- Posted by: John Rubier
- Posted on: January 27 2004 13:53 EST
- in response to Rik Van Bruggen

Novell exteNd 5.0 and XForms based portlets

I thought it was pretty interesting that the demo is a Web Start Java application (or a download) that the XForms run in. And the windows that pop up aren't XForms; they're Swing.

- Posted by: Mark N
- Posted on: February 03 2004 08:16 EST
- in response to Rik Van Bruggen
http://www.theserverside.com/discussions/thread.tss?thread_id=23491
As such, the authors establish some new basic economic facts. They conclude, for instance, that over the very long run it is housing, rather than equities, which provides the best return (see chart): both asset types have yielded about 7% a year on average over the 145 years, but equity returns are much more volatile. It is important to note that, though homeowners might cheer this news, it is not necessarily a reason to leap into the housing market. Rental yields account for about half of the long-run return on housing, and owning a diversified portfolio of rent-yielding property is not the same bet as borrowing to house the family.

The new old normal

Besides offering these baseline findings, the authors' work helps to answer several pressing economic questions. One example is the puzzle of declining interest rates. The falling rates of the past few decades distress some economists, who worry they betoken weak growth and complicate central bankers' ability to manage the economy. Yet the long-run data reveal that the high rates of return on government debt seen in the 1980s were an anomaly. The real return on bonds and short-term bills is normally relatively low, and can even be negative for long periods of time, as some other economists (such as Carmen Reinhart of Harvard University and Belen Sbrancia of the IMF) have also found. Recent declines therefore represent a return to more typical conditions...

This article appeared in the Finance & economics section of the print edition under the headline "Many happy returns"
https://www.economist.com/finance-and-economics/2018/01/06/many-happy-returns-new-data-reveal-long-term-investment-trends
In message <96933a4d0911051045p61431af5ie2cecb850a62267a@mail.gmail.com>, enh writes:
>
> On Wed, Nov 4, 2009 at 23:07, Regis <xu.regis@gmail.com> wrote:
> > Mark Hindess wrote:
> >>
> >> In message <4AF0DC18.1000504@gmail.com>, Regis writes:
> >>
> >> Regis, these are really two different issues so let's treat them as
> >> such. I actually think fixing the /proc reading is better done
> >> in the java code. I've attached a patch that just catches and
> >> ignores the IOException from the available call. I think this is
> >> reasonable since if the exception was "real" then the read will
> >> fail with a similar exception. In fact, since the user is doing
> >> a read call, having the exception thrown by the read is more in
> >> keeping with the expectations of the user than having an exception
> >> thrown by the available call.
> >
> > Thanks Mark, I ran luni tests with your patch, no regression found,
> > I'll leave the patch to you to apply :)
> >
> > Invoking in.available before read adds a little overhead; in the
> > worst case, one read needs four system calls: three seeks and one
> > read. Actually the condition
> >
> > if ((in.available() == 0) && (out.position() > offset))
> >
> > is only needed for tty files; for normal files it's not necessary.
> > It would be better if we could remove the check for normal files,
> > but I have no idea how to do this.
>
> isatty(3). for the price of one JNI call at construction and a boolean
> field to record the result, you could avoid any later calls.

This might be a good idea, but I'm getting confused... At the moment there seems to be a special case in FileInputStream.available() like this:

    // stdin requires special handling
    if (fd == FileDescriptor.in) {
        return (int) fileSystem.ttyAvailable();
    }
    ... perform seek based available check ...

but I find this confusing because the comment and check imply that stdin is special, while the fileSystem method name implies that the distinction is being a tty.
This is confusing because *any* file descriptor can be a tty - e.g. new FileReader("/dev/tty") - and stdin doesn't have to be a tty - "java </dev/zero".

Regis, can you explain a little more about why the check is needed in the available call? And why the available check is needed in the FileInputStream read method? In each case, is it really stdin that is special, or any file descriptor representing a tty? I'd like to understand this, as there are a whole bunch of ttyRead calls with a similar check that only affects stdin, not any tty descriptor.

> in the meantime, i wondered also whether it's worth swapping the two
> conjuncts in the if, since the latter is essentially free while the
> former is expensive. (i've no idea how often the cheap conjunct is
> false, though.)

I thought about this but held off doing it for the same reason. However, I guess it is almost certainly a win.

Regards,
Mark.
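enh's suggestion of probing isatty(3) once and caching the result might look roughly like the sketch below on the Java side. All names here are hypothetical, not Harmony's actual code; the real probe would go through JNI to isatty(fd), whereas this stand-in just treats descriptors 0-2 as ttys for illustration:

```java
// Sketch: cache "is this descriptor a tty?" in a boolean field at
// construction, instead of special-casing FileDescriptor.in on every
// available()/read() call.
class TtyAwareAvailable {
    private final boolean isTty;

    TtyAwareAvailable(int fd) {
        this.isTty = probeIsTty(fd); // one probe ("JNI call") at construction
    }

    // Hypothetical stand-in for the native isatty(fd) call: for this
    // illustration, only the standard descriptors 0, 1, 2 count as ttys.
    private static boolean probeIsTty(int fd) {
        return fd >= 0 && fd <= 2;
    }

    // Later calls just consult the cached flag; no per-call special case
    // keyed off stdin, so a descriptor for /dev/tty would take the tty
    // path too and stdin redirected from a file would not.
    String availablePath() {
        return isTty ? "ttyAvailable" : "seekBasedAvailable";
    }

    public static void main(String[] args) {
        System.out.println(new TtyAwareAvailable(0).availablePath());  // stdin
        System.out.println(new TtyAwareAvailable(17).availablePath()); // some file
    }
}
```

With a real isatty probe, this also resolves the stdin-vs-tty confusion raised above: the branch keys off the descriptor actually being a tty, rather than it being FileDescriptor.in.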
http://mail-archives.apache.org/mod_mbox/harmony-dev/200911.mbox/%3C20091105225701.9CBFB478445@athena.apache.org%3E
Caution: Buildbot is deprecating Python 2.7. This is one of the last releases supporting it on the buildmaster. More info.

2.9. Optimization

If you're feeling your Buildbot is running a bit slow, here are some tricks that may help you, but use them at your own risk.

2.9.1. Properties load speedup

For example, if most of your build properties are strings, you can gain an approx. 30% speedup if you put this snippet of code inside your master.cfg file:

def speedup_json_loads():
    import json, re

    original_decode = json._default_decoder.decode
    # Matches a JSON list of exactly two strings, e.g. '["name", "value"]'
    my_regexp = re.compile(r'^\[\"([^"]*)\",\s+\"([^"]*)\"\]$')

    def decode_with_re(str, *args, **kw):
        m = my_regexp.match(str)
        try:
            return list(m.groups())
        except Exception:
            # No match (m is None): fall back to the real decoder
            return original_decode(str, *args, **kw)

    json._default_decoder.decode = decode_with_re

speedup_json_loads()

It patches the JSON decoder so that it first tries to extract a value from JSON that is a list of two strings (which is the case for a property that is a string), and falls back to the general JSON decoder on any error.
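The effect is easiest to see in a standalone sketch that uses the same regular expression with a local function instead of monkey-patching the json module (the function and variable names here are my own):

```python
import json
import re

# The stock decoder, used as the fallback path.
original_decode = json.JSONDecoder().decode

# Same pattern as the snippet above: a JSON list of exactly two strings.
two_strings = re.compile(r'^\[\"([^"]*)\",\s+\"([^"]*)\"\]$')

def decode_with_re(s):
    """Fast path for '["name", "value"]'-shaped input; fall back otherwise."""
    m = two_strings.match(s)
    if m:
        # No JSON parsing at all: just the two captured groups.
        return list(m.groups())
    return original_decode(s)

print(decode_with_re('["branch", "master"]'))  # fast path -> ['branch', 'master']
print(decode_with_re('{"a": 1}'))              # fallback  -> {'a': 1}
```

Note the fast path only fires when there is whitespace after the comma (the `\s+`), which matches how Buildbot serializes properties; anything else is handled correctly, just more slowly, by the fallback.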
https://docs.buildbot.net/2.0.0/manual/optimization.html
On what I call the summary screen, one line to an incident, in RT2, you can click on a link to take the incident. I like that, but I'm staying with RT1. In RT1 you can't. Is there any problem with simply adding a condition in the "owner" field of the summary, so that if the ticket is not owned, the following link is there instead? I took the URL from the ticket page. I'm not quite sure of the meanings of all the cgi parameters ("transaction=0"?), but to my understanding, this should give request 3 to user "lorens" and go to the ticket page so that the user can work further on the ticket without somebody else interfering. Am I right? This ought to work in the same way from the summary page as it does from the ticket page, right? Technically it should be extremely simple (I haven't looked at the actual source code of the page yet); I just wanted to make sure there's nothing I'm missing. TIA

#include <std_disclaim.h>
Lorens Kockum
https://forum.bestpractical.com/t/rt1-taking-tickets-directly-from-summary-page/853
Printf C Basics: How To Print Formatted Data In C

Two of the first functions a C programmer will learn are "printf()" and "fprintf()." The printf() function governs output to the user's monitor, while fprintf() governs output to a file instead. Both printf() and fprintf() operate similarly (and their arguments may be interchangeable), but most C programmers will find themselves using printf() most frequently.

In this article, we'll discuss when you would use the printf() function, how to use it, and what the most common issues encountered might be. The printf() function is easy enough for beginners in C to learn — and it's very similar to the "print formatted" functions in many other languages.

When would you use the printf C function?

The printf() function sends data to the user's computer. This journey takes the information from your code to the user's screen — and while it's more complex than it might seem, all you really need to know is that it's the primary method of presenting output to the user. Understandably, that's critical for an application's usability.

The printf() function may be used to print strings of information, data points, error messages, debugging messages, and other content. Whenever you use a program, all the text you see is being "printed" to the screen. Some of that text is static (it never changes), whereas other parts of the text are dynamic (it changes based on variables). In combination with the placement of media (such as images and elements), printed text is what comprises the vast majority of the applications that you interact with.

The printf() function will form the basis of how you interact with the user. It will also form the basis of how you return data to yourself — many programmers frequently use the printf() function to return debugging information so they can peer inside of the program's state.
Luckily, for a function as powerful as printf(), it's a fairly simple function to learn.

How do you use printf()?

Now that we know when we might use printf(), we need to know how to use printf(). The printf() function is a very simple function. In fact, it can use a single argument: the data that it's supposed to print. This is easiest seen with an example. A "Hello, World!" program in C would look like the following:

#include <stdio.h>

int main() {
    printf("Hello, World!");
    return 0;
}

The output of the above would be:

Hello, World!

In the above example, the printf() function is being used to return a simple character string. For the most part, this will be how a developer uses printf(): to return a character string. But printf can also be used for more complex actions, with additional arguments.

You will also see that we include stdio.h in the header, which is required to use the function printf. If you want to learn more about the printf() function, you can hunt down information in stdio.h. "int main()" is the main function of the program and is required for the program to operate — and "return 0;" tells the program that it is done with that function. You will always need the basic element of main() to run a C program.

Now, consider the following example (which would still need the header file, main(), and return 0, but we will leave that out moving forward for the sake of brevity):

printf("Your team %s hit with %f percent accuracy.", "Blue Team", 67.89);

The output of the above would be:

Your team Blue Team hit with 67.890000 percent accuracy.

(By default, %f prints six digits after the decimal point; we'll see how to control that with precision below.)

In this example, the "%s" tells the compiler to expect a string, while the "%f" tells the compiler to expect a floating point number. There may be any number of "placeholders" within the initial format string, but the number of arguments will need to at least meet the number of placeholders — there can be excess arguments without problem, but there cannot be excess placeholders.
Note that double quotes always surround the format string; arguments that aren't strings don't need quotes of their own. An example could be:

printf("%i", 1);

In this case, the output would be:

1

Because the argument is an int and not a string, it does not require double quotes of its own. And double quotes can be, as it were, a double-edged sword; you cannot print a double quote inside of a series of double quotes unless you escape it with a backslash. As an example:

printf("I said, "Hello world!"");

The above code will not compile. It will generate a compiler error. Why? Because it's trying to open and close new quotes. Instead, you would need to type:

printf("I said, \"Hello world!\"");

The above code essentially tells the compiler to ignore the quote that's coming next. The code will print:

I said, "Hello world!"

And it will do so without an error.

In addition to being able to print out different arguments, C can also be used to calculate the results of expressions. The code printf("2+2=%i", 2*2); would output "2+2=4" to the screen. The calculation of 2*2 would be handled by the compiler.

It should also be noted that excess arguments will not be printed by the compiler. They won't be entirely ignored; if an excess argument includes a calculation, the compiler will complete the calculation. But the compiler will not send an error because it has received excess arguments, so it is up to the programmer to make sure that all their arguments are being passed through to print.

Static vs. dynamic data with printf()

So far, we've only printed static information with printf(). Even when we included arguments, the data never changed; we knew that it would be the Blue Team and an accuracy of 67.89 percent each time. Most computer applications are rarely displaying static data, for obvious reasons. One of the most important aspects of computer applications is their ability to manipulate and change information.
So, let's take a look at how we might use C with variables, instead:

char h[] = "hello";
char w[] = "world";
printf("%s %s", h, w);

(Note the arrays are left unsized so the compiler reserves room for the terminating null character.)

This will print:

hello world

Note that we had to place the space between the arguments on our own, or it would have printed "helloworld" instead. As seen, this is still not dynamic. It will always print "hello world." But by adding variables (h and w), we've made it possible to change those variables, too. If we changed w to "everyone" instead of "world," for instance, the program would then print "hello everyone."

When debugging, programmers will frequently use printf() to print out variables that are currently in flux. This gives the programmer key insights into what the program is doing; if the variable sent back isn't what the programmer expects, then there's unexpected activity within the program itself. The printf() function can be further used to trace where the application goes wrong.

Format specifiers for the printf() function

How do you control how each piece of data is printed in C? As an example, you might want to print a number as 7, 7.0, or 7.00. You might want to print a string "Hello" as " Hello" or " Hello ." Printing is one part of the printf() function, but formatting is another part altogether. There are a number of format specifiers that can be used with the printf() function. They include:

- %d or %i — a signed decimal integer
- %u — an unsigned decimal integer
- %f — a decimal floating point number
- %s — a string of characters
- %c — a single character
- %x — an unsigned hexadecimal integer
- %o — an unsigned octal integer
- %e — a number in scientific notation
- %% — a literal percent sign

These format specifiers are universal across C functions and can be used with the printf function as they would be used elsewhere in the program.

Format specifiers may also have arguments of their own. You can limit the number of digits printed for a floating point number, for instance, by using the following format: "%.xf". As an example, "%.2f" would limit the output to two digits after the decimal point. For other format specifiers, you may be able to specify the minimum number of digits, negative numbers, field width, the minimum number of characters, and other formatting essentials.
For integers, precision specifies the minimum number of digits printed. A danger with format specifiers is that it may not be immediately obvious if you have gotten them wrong. Of course, more advanced programs may have more complex options for formatting. But the simple printf() function works for any console-based program.

Escape sequences following printf() data

There's an issue that individuals commonly run into with their printf() function. Let's say that you run the following code:

printf("Hello world.");
printf("How are you doing?");

It will print as follows:

Hello world.How are you doing?

By default, printf() doesn't print a new line; it leaves that up to your formatting. If you want your formatted information to end the line, you would include a "\n" (new line) as so:

printf("Hello world.\n");
printf("How are you doing?");

The output will be:

Hello world.
How are you doing?

\n is a type of escape sequence; a sequence that can be used to generate special characters. The C compiler does this intentionally, because while you can add a newline if it's not there, it would be harder to take away a newline that was there by default. There are other types of escape sequences as well:

- \b, which prints a backspace.
- \t, which prints a tab.
- \", which prints a double quote (without closing the existing quote).

These escape sequences are similar to format specifiers in that they are compiled by the C compiler rather than being passed along as part of the string. The easiest way to think of these is that they are inputs being sent to the computer one by one. So, if you printed "Goodbye\b\b\b", it would actually print out "Good" rather than "Goodbye." The "backspace" button would be pressed three times on what you had written!

Tabs are also notable because tabs tend to vary depending on the system. Tabs are a more convenient measurement than spaces, but they are more inconsistent because you won't know where elements are relative to each other.
Field width specifiers for the printf() function

How do you make sure your data isn't just formatted correctly but readable? When data prints out to a console, it's often first presented as an unorganized list. But you can use data formatting to print data in a particular field width, which can thereby be used to create organized columns. This is especially important if you are processing large volumes of data.

The field width specifiers are essentially used the same way the precision specifiers are. For a string, the precision is the number of characters. In the following code, we set each string to print in a field of 20 characters:

printf("%20s%20s", "user", "email");

The code that prints out will be:

                user               email

This isn't that exciting. But, let's add another line:

printf("%20s%20s", "user", "email");
printf("\n");
printf("%20s%20s", "john", "[email protected]");

Now we end up with:

                user               email
                john   [email protected]

The data has been justified to the right because each field is the same size. This is an easy way to keep data in a column. Of course, you could likewise have simply added white space in front of each piece of data yourself — but that would be a far more arduous process. Using flags, width, and precision, you can get exact columns each time.

Printing wide characters with printf()

There are two important functions for wide characters within C: wprintf() and wscanf(). These operate almost identically to printf but will print wide characters. Wide characters are characters that are larger than the traditional data size permitted by C (8-bit). In all other respects, wprintf() will operate like printf(), including the format specifiers, escape sequences, and field width specifiers. It is rare that users will need to print wide characters, but if they find they are trying to print something and it's failing, they may want to consider that this could be the problem.
The companion function, wscanf(), is used to collect wide characters from the user's own input, just as scanf() is used to collect regular characters from the user's input. Without wscanf(), the user won't be able to input the wide characters they may need to provide. Likewise, without wprintf(), the computer will not be able to display the input back to the user.

What are common issues with printf C?

The printf() function doesn't know how many arguments it has been passed. It will match up arguments with the data it has been given, but it may be given too many or too few arguments. In some cases, this can even become a security vulnerability. Other problems with printf() include:

- Data type mismatches. Format specifiers need to match the given arguments. When they don't, they aren't going to print correctly. Likewise, programmers can run into issues if they try to update variables with content of the wrong type.
- Using incorrect precision. A currency, for instance, should usually best be given at a precision of two decimal points. Having too much precision will not alter the numbers but will alter readability.
- Not having escapes after data. This will lead to pieces of data running into each other. Formatting is an important part of output, which is also why field width specifiers are so useful.
- Not escaping out quotation marks. This is the easiest way to throw the compiler into an error because it won't know what was passed along to the function or not.
- Using printf() too casually for debugging. While it's a common practice, it can be detrimental; it can spit out a significant amount of garbage on the user's screen.

The last of these is probably the most important issue with printf(). As programmers learn, they have a tendency to use printf() to describe the state of the software. For instance, they might use printf() at every iteration of a loop to print out the data that is changing within that loop.
On its own, this is not a problem; printf() is a very useful tool for debugging. But problems can arise if these debugging messages are left within the code. It's critical that any debugging messages be controlled by a "debugger" internally so that all debugging messages can be toggled on and off at the discretion of the programmer. In other words, "raw" printf() messages (messages not toggled on and off on a program level) should never be used for the process of debugging.

For many programmers, printf() will rapidly become their most used command. A great deal of information has to be displayed to a user throughout a program's runtime, and printf() is by far the easiest way to do so. Most users will be able to use printf() without any major challenges — but digging deeper into the more advanced options can help users better understand the data management features of C.
https://blog.udemy.com/c-printf/
Pythonistas cache memory - KiloPasztetowej

Hello there. Is there something like cache memory in Pythonista? Functions inside the editor module are very useful and would allow for a vim-like extension for Pythonista, but such an extension would need to register some data in between keystrokes/script runs, since commands are chained. Does something like that exist in Pythonista? Or do I have to use some hacks like storing this data in some files or something else? I'd really like to avoid messing with the clipboard too.

@KiloPasztetowej, there are some different options, including using Apple reminders, but I think the most straightforward approach is to just put a file in a standard unix temp directory:

import json
from pathlib import Path
from tempfile import gettempdir

cache_name = 'vim.cache'
cache_file = Path(gettempdir()) / cache_name

def cache_put(data):
    with cache_file.open('w') as fp:
        json.dump(data, fp)

def cache_get():
    if not cache_file.exists():
        return None
    with cache_file.open() as fp:
        return json.load(fp)

def cache_clear():
    if cache_file.exists():
        cache_file.unlink()

For your use case, you might want to include a time stamp and ignore/clear the cache if it is too old.
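Building on that closing suggestion, here is a sketch of the same cache with a timestamp and a max-age check. The names and the 300-second default are my own choices, and nothing here is Pythonista-specific; it is plain standard-library Python:

```python
import json
import time
from pathlib import Path
from tempfile import gettempdir

cache_file = Path(gettempdir()) / 'vim.cache'

def cache_put(data):
    # Store the payload together with the time it was written.
    with cache_file.open('w') as fp:
        json.dump({'ts': time.time(), 'data': data}, fp)

def cache_get(max_age_seconds=300):
    # Return None if there is no cache or it is older than max_age_seconds.
    if not cache_file.exists():
        return None
    with cache_file.open() as fp:
        entry = json.load(fp)
    if time.time() - entry['ts'] > max_age_seconds:
        cache_file.unlink()  # stale: clear it so the next read starts fresh
        return None
    return entry['data']
```

A chained-command extension would call `cache_put` after each keystroke script and `cache_get` at the start of the next, treating `None` as "start a new command sequence."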
https://forum.omz-software.com/topic/6527/pythonistas-cache-memory/4
All code from this tutorial as a complete package is available in this repository and a video version of this tutorial is available below:

Do you work with large or semi-large codebases that are starting to get out of control? Do you have to deal with multiple different projects that interact with each other and have difficulty keeping versions aligned?

If you said yes to either of those things (or even if you're just anticipating encountering them in the future) then this tutorial is for you.

The purpose of this tutorial is to learn about some of the different ways that you can structure a large project which is composed primarily of smaller projects and modules.

Monorepos

One method of grouping code from multiple projects into one is called a monorepo. A monorepo is simply the practice of placing multiple different projects that are related in some way into the same repository.

The biggest benefit is that you do not need to worry about version mismatch issues between the different pieces of your project. If you update an API route in the server of your monorepo, that commit will be associated with the version of the front end that consumes it. With two different repositories you could find yourself in a situation where your v1.2 front-end is asking for data from your v1.1 backend that somebody forgot to push the latest update for.

Another big benefit is the ability to import and share code and modules between projects. Sharing types between the back-end and front-end is a common use case. You can define the shape of the data on your server and have the front-end consume it in a typesafe way.

Git Submodules

In addition to monorepos, we also have the concept of submodules. Let's say that we want to add a feature to our app that we have in another separate project. We don't want to move the entire project into our monorepo because it remains useful as its own independent project. Other developers will continue to work on it outside of our monorepo project.
We would like a way to include that project inside our monorepo, but not create a separate copy. Simply have the ability to pull the most recent changes from the original repository, or even make our own contributions to it from inside our monorepo. Git submodules allow you to do exactly that.

This tutorial will teach you how to create your own project that implements both of these features.

Table of Contents

- Prerequisites and Setup
- Initializing the Project
- Create the React App
- Create the Monorepo
- Create Your Repository
- Sharing Code and Adding Dependencies
- Create a Shared Package
- Add a Git Submodule
- Namespacing
- Wrapping Up

Prerequisites and Setup

This tutorial assumes you have a basic familiarity with the following. Beginner level experience is fine for most as the code can be simply copy/pasted. For git you should know how to clone, pull, commit and push.

- Git
- React
- Node.js
- Typescript
- NPM

This tutorial requires yarn v1 installed (we use v1.22).

Initializing the Project

To start, we need a packages directory to hold the different projects in our monorepo. Your structure should begin looking like this:

.
└── packages
    └── simple-express-app
        └── server.ts

From within the `packages/simple-express-app` directory, run:

yarn init
yarn add express
yarn add -D typescript @types/express
npx tsc --init

The final command will create a tsconfig.json file. Add the following to it:

packages/simple-express-server/tsconfig.json

{
  ...
  "outDir": "./dist",
}

Now create your server file if you haven't yet:

packages/simple-express-server/server.ts

import express from 'express';

const app = express();
const port = 3001;

app.get("/data", (req, res) => {
  res.json({ foo: "bar" });
});

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`);
});

At this point your directory structure should look like:

.
└── packages
    └── simple-express-app
        ├── server.ts
        ├── yarn.lock
        ├── package.json
        └── tsconfig.json

We'll create a simple script in package.json called start that we can run with yarn:

packages/simple-express-server/package.json

{
  "name": "simple-express-server",
  "version": "1.0.0",
  "main": "dist/server.js",
  "license": "MIT",
  "scripts": {
    "start": "tsc && node dist/server.js"
  },
  "devDependencies": {
    "@types/express": "^4.17.13",
    "typescript": "^4.5.4"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}

Open your browser to http://localhost:3001/data and you will see your data successfully queried:

Create the React App

Next we move onto our React app. Navigate to the packages directory and run this command:

yarn create react-app simple-react-app --template typescript

Before we do anything else we want to confirm that we can communicate with our server and get the JSON data that we are serving up. Open up the App.tsx file in the src directory of the project generated by create-react-app. We are going to add a simple button that uses the browser fetch API to grab the data from our server and log it to the console.
└── packages ├── simple-express-server │ ├── server.ts │ ├── yarn.lock │ ├── package.json │ └── tsconfig.json └── simple-react-app └── [default setup] Create the Monorepo To manage our monorepo we are going to use two tools: Lerna: For running scripts across multiple projects and adding new dependencies. Lerna is also built to manage publishing your packages (though we will not be doing that as part of this tutorial) Yarn workspaces: For hoisting all shared dependencies into a single node_modulesfolder in the root directory. Each project can still define its own dependencies, so that you don't confuse which dependencies are required for which (client vs. server) for example, but it will pool the installed packages in the root. For yarn we are using the still most commonly used yarn v1 (current version as of this writing is v1.22). Navigate to the root directory and run the following commands: yarn init yarn add -D lerna typescript npx lerna init Edit your Lerna configuration file: { "packages": ["packages/*"], "version": "0.0.0", "npmClient": "yarn", "useWorkspaces": true } We need to specify that yarn is our NPM client and that we are using workspaces. Next we need to define the location of those workspaces in the root package.json: package.json { "name": "monorepo-example", "version": "1.0.0", "main": "index.js", "license": "MIT", "private": true, "workspaces": [ "packages/*" ], "scripts": { "start": "lerna run --parallel start" }, "devDependencies": { "lerna": "^4.0.0" } } We have made three changes above: Set privateto truewhich is necessary for workspaces to functions Defined the location of the workspaces as packages/*which matches any directory we place in packages Added a script that uses Lerna to run. This will allow us to use a single command to run the equivalent of yarn startin both our Express server and React app simultaneously. 
This way they are coupled together so that we don't accidentally forget to run one, knowing that currently they both rely on each other. The --parallelflag allows them to run at the same time. Now we are ready to install the dependencies in root: (Note: At this point before you run the install command, I would recommend you synchronize your Typescript version between your simple-express-server and the one that comes bundled with your simple-react-app. Make sure both versions are the same in each project's package.json and both are listed in devDependencies. Most likely the React app version will be older, so that is the one that should be changed.) Next run the following command: npx lerna clean -y yarn install The first command will clean up the old node_modules folders in each of your two packages. This is the equivalent of simply deleting them yourself. The second command will install all dependencies for both projects in a node_modules folder in the root directory. Go ahead and check it out! You'll see that node_modules in the root is full of packages, while the node_modules folders in simple-express-server and simple-react-app only have a couple (these are mostly symlinks to binaries that are necessary due to the way yarn/npm function). Before we go on we should create a .gitignore file in the root to make sure we don't commit our auto-generated files: .gitignore node_modules/ dist/ (If you're using VS Code you'll see the folder names in the side bar go grey as soon as you sae the file, so you know it worked) Verify your monorepo and workspaces are setup properly by running (from the root folder): yarn start You will see that both your Express app and React app start up at the same time! Click the button to verify that your server data is available and logs to the console. Lastly we need to initialize Typescript in the root of the project so that our different packages can import and export between one another. 
Run the command: npx tsc --init In the root directory and it will create your .tsconfig.json. You can delete all the defaults values from this file (your individual projects will se their own configuration values.) The only field you need to include is: tsconfig.json { "compilerOptions": { "baseUrl": "./packages" } } Our project now looks like: . ├── packages | ├── simple-express-server | │ ├── server.ts | │ ├── yarn.lock | │ ├── package.json | │ └── tsconfig.json | └── simple-react-app | └── [default setup] ├── lerna.json ├── tsconfig.json ├── package.json └── yarn.lock Create Your Repository This is also a good time to commit your new project to your repository. I'll be doing that now as well, you can see the final version here. Note that in order to learn submodules effectively, we are going to be adding a submodule from a repository that already exists, we don't want to use the one that create-react-app generated automatically. So for that reason I am going to delete the that repository by deleting the .git directory inside packages/simple-react-app. This step is VERY IMPORTANT. Make sure there is no .git directory inside simple-react-app. Now from the root directory you can run: git add . git commit -am 'first commit' git remote add origin YOUR_GIT_REPO_ADDRESS git push -u origin YOUR_BRANCH_NAME Sharing Code and Adding Dependencies So let's quickly take a look at some of the benefits we get from our monorepo. Let's say that there's a utility library that we want to use in both our React app and on our Express server. For simplicity let's choose lodash which many people are familiar with. Rather than adding it to each project individually, we can use lerna to install it to both. This will help us make sure that we keep the same version in sync and require us to only have one copy of it in the root directory. 
From the root run the following command: npx lerna add lodash packages/simple-* npx lerna add @types/lodash packages/simple-* --dev This will install lodash in any of the projects in the packages directory that match the simple-* pattern (which includes both of ours). When using this command you can install the package to dev and peer dependencies by adding --dev or --peer at the end. More info on this command here. If you check the package.json file in both your packages you'll see that lodash has been added with the same version to both files, but the actual package itself has a single copy in the node_modules folder of your root directory. So we'll update our server.ts file in our Express project to do a couple of new things. We'll import the shared lodash library and use one of its functions ( _.snakeCase()) and we'll define a type interface that defines the shape of the data we are sending and export it so that we can also use that interface in our React app to typesafe server queries. Update your server.ts file to look like the following: packages/simple-express-server.ts import express from "express"; import _ from "lodash"; const app = express(); const port = 3001; export interface QueryPayload { payload: string; } app.use((_req, res, next) => { // Allow any website to connect res.setHeader("Access-Control-Allow-Origin", "*"); // Continue to next middleware next(); }); app.get("/", (_req, res) => { const responseData: QueryPayload = { payload: _.snakeCase("Server data returned successfully"), }; res.json(responseData); }); app.listen(port, () => { console.log(`Example app listening at{port}`); }); (Note I have changed the key on the object from data to payload for clarity) Next we will update our App.tsx component in simple-react-app. We'll import lodash just for no other reason to show that we can import the same package in both client and server. We'll use it to apply _.toUpper() to the "Learn React" text. 
We will also import our QueryPayload interface from our simple-express-server project. This is all possible through the magic of workspaces and Typescript.

packages/simple-react-app/src/App.tsx

import React from "react";
import logo from "./logo.svg";
import "./App.css";
import _ from "lodash";
import { QueryPayload } from "simple-express-server/server";

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <a
          className="App-link"
          href="https://reactjs.org"
          target="_blank"
          rel="noopener noreferrer"
        >
          {_.toUpper("Learn React")}
        </a>
        <button
          onClick={() => {
            fetch("http://localhost:3001/", {})
              .then((response) => response.json())
              .then((data: QueryPayload) => console.log(data.payload));
          }}
        >
          GET SOME DATA
        </button>
      </header>
    </div>
  );
}

export default App;

I find this is one of the trickiest parts to get right (the importing between packages). The key to this is the installation of Typescript in the root of the project, and the "baseUrl": "./packages" value in the tsconfig.json in the root directory. If you continue to have difficulty, this is one of the best explanations I have ever come across for sharing Typescript data between projects in a monorepo.

Once everything is set up, press the button on your React application and check the browser console. Notice the snake_case response that matches the correct shape we defined. Fantastic!

Now there is one issue with our setup -- currently we are importing the QueryPayload directly from our server. That is fairly harmless, but what if we

Create a Shared Package

Using the lerna create command we can quickly and easily create new projects within our monorepo. Run the following commands from the root directory:

npx lerna create simple-shared-data
npx lerna add typescript --dev
yarn install

This will create a directory called simple-shared-data in your packages. We've already added the same version of Typescript as a dev dependency. You can remove the lib directory that includes the default JS entrypoint, as we will not be using it.

Create an index.ts file inside of packages/simple-shared-data where we will place any types or data that either our front-end, back-end, or both can have access to.
packages/simple-shared-data/index.ts

export interface QueryPayload {
  payload: string;
}

And then import from this file in both our server and React app:

packages/simple-express-server/server.ts

import { QueryPayload } from 'simple-shared-data';
...

packages/simple-react-app/src/App.tsx

import { QueryPayload } from 'simple-shared-data';
...

The benefit of creating this shared project is that your front-end, for example, won't have a strict dependency on the existence of your server. You could deploy as:

Front-End
- simple-react-app
- simple-shared-data

Back-End
- simple-express-server
- simple-shared-data

Now that we have all these different projects set up, let's take a look at git submodules.

Add a Git Submodule

Recently I wrote a blog post on a very simple component for a React app that adds a dark mode, a <DarkMode /> component. The component is not part of a separate library we can install with an NPM command; it exists as part of a React application that has its own repository. Let's add it to our project, while still keeping it as its own separate repo that can be updated and managed independent of our monorepo.

From the packages/simple-react-app/src directory we'll run this command:

git submodule add git@github.com:alexeagleson/react-dark-mode.git

That will create the react-dark-mode directory (the name of the git repository; you can add another argument after the above command to name the directory yourself).

To import from the submodule, you simply import from the directory. Adding the <DarkMode /> component is as simple as:

packages/simple-react-app/src/App.tsx

...
import DarkMode from "./react-dark-mode/src/DarkMode";

function App() {
  return (
    <div className="App">
      ...
      <DarkMode />
    </div>
  );
}

export default App;

I've omitted some of the repetitive stuff above.
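One more note on the shared package from earlier: because simple-shared-data now owns the QueryPayload shape, a small runtime guard could live right beside the interface so the server and the React app validate data the same way. This is a hypothetical addition — isQueryPayload is my own name and is not part of the tutorial:

```typescript
// Hypothetical companion to the shared QueryPayload interface.
interface QueryPayload {
  payload: string;
}

// Narrow an unknown value (e.g. a parsed fetch response) to QueryPayload.
function isQueryPayload(value: unknown): value is QueryPayload {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as QueryPayload).payload === "string"
  );
}

console.log(isQueryPayload({ payload: "ok" })); // true
console.log(isQueryPayload({ data: 42 }));      // false
```

Because TypeScript's types are erased at runtime, a guard like this is the only way either side can actually verify a payload that arrived over the network.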
Unfortunately the default background-color styles in App.css are going to override the body styles, so we need to update App.css for it to work:

packages/simple-react-app/src/App.css

...
.App-header {
  /* background-color: #282c34; */
  min-height: 100vh;
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  font-size: calc(10px + 2vmin);
  /* color: white; */
}

.App-link {
  /* color: #61dafb; */
}
...

Comment out those color values and you're good to go!

Now you might be thinking -- couldn't I just have cloned that repo into that folder and done this? What's the difference with submodules? Well, now that we have this in place, let's look for the answer to exactly that. Run the following command:

git status

In the output you'll see a new file: ../../../.gitmodules. That's something new if you've never used submodules before. It's a hidden file that has been added to the project root. Let's take a look inside it:

[submodule "packages/simple-react-app/src/react-dark-mode"]
	path = packages/simple-react-app/src/react-dark-mode
	url = git@github.com:alexeagleson/react-dark-mode.git

It stores a mapping to the directories in our project that map to other repositories. Now if you commit your changes in the root of the monorepo and push, you'll see on Github that rather than being a regular directory inside this project -- it's actually a link to the real repository:

So you can continue to update and make changes to this monorepo without impacting that other repository. Great! But can you update the dark mode repository from inside this one? Sure you can! (As long as you have write permission.) Let's make a trivial change to the dark mode repository from inside this one and see what happens.

Navigate to: packages/simple-react-app/src/react-dark-mode/src/DarkMode.css

...
[data-theme="dark"] {
  --font-color: #eee;
  --background-color: #333;
  --link-color: peachpuff;
}

I'm going to update the colour of the link when the app is in dark mode, from lightblue to peachpuff. Now obviously you won't be able to update my repository, but if you're following along you can continue reading to see where this is going (or you can use your own repository, of course).

From this directory I make a commit and push. When I check the repository there are no new commits to the monorepo-example repository, but there IS a new commit to react-dark-mode. Even though we are still inside our monorepo project!

When working with submodules it's important to keep them up to date. Remember that other contributors could be making new commits to the submodules. The regular git pull and git fetch on your main root monorepo aren't going to automatically pull new changes to submodules. To do that you need to run:

git submodule update

to get the latest updates.

There is also a new command you'll need to run when cloning a project, or pulling when new submodules have been added. When you use git pull it will pull the information about relevant submodules, but it won't actually pull the code from them into your repository. You need to run:

git submodule update --init

to pull the code for the submodules.

Lastly, in case you prefer not to run separate commands, there is a way to pull submodule updates with the regular commands you're already using, like clone and pull. Simply add the --recurse-submodules flag like so:

git pull --recurse-submodules

or

git clone --recurse-submodules

Namespacing

Although I didn't use it in the tutorial, it is good practice to use namespacing for your packages. This is commonly done by prefixing with the @ character. Below I will quickly show how to update this tutorial to add a @my-namespace namespace:

Prefix the name value in each of your three package.json files with @my-namespace.
For example simple-express-server/package.json will now be:

{
  "name": "@my-namespace/simple-express-server",
  ...
}

Do that for each of the three packages. Next you need to update your imports:

packages/simple-express-server/server.ts

import { QueryPayload } from '@my-namespace/simple-shared-data';
...

packages/simple-react-app/src/App.tsx

import { QueryPayload } from '@my-namespace/simple-shared-data';
...

Finally run yarn install to update those packages inside your root node_modules directory and you're good to go!

Wrapping Up

I hope you learned something useful about monorepos and submodules. There are tons of different ways to set up a new project, and there's no one-size-fits-all answer for every team. I'd encourage you to play around with small monorepos (even clone this example) and get comfortable with the different commands.

Please check some of my other learning tutorials. Feel free to leave a comment or question and share with others if you find any of them helpful:

- Learnings from React Conf 2021
- How to Create a Dark Mode Component in React
- How to Analyze and Improve your 'Create React App' Production Build
- How to Create and Publish a React Component Library
- How to use IndexedDB to Store Local Data for your Web App
- Running a Local Web Server
- Webpack: Loaders, Optimizations & Bundle Analysis

For more tutorials like this, follow me @eagleson_alex on Twitter

Discussion (7)

Thanks for reading everyone. If anyone has any tips or suggestions themselves, I'm extremely interested in hearing them. I feel like this is the kind of thing that can be configured so many different ways, I'd love to hear of any success stories, particularly from large long-lived projects with many contributors. Cheers.

Hi, thank you, just what I was looking for. Instead of lerna I've been using nx and it's great. I'm gonna try these submodules in all my projects; thanks again, and hope you keep posting! Keep up the good work

Good stuff!
I've never used nx, glad to hear it works well for you.

Hi, thank you. Great article! Just I think some text is missing at the end of the chapter 'Sharing Code and Adding Dependencies': '...That is fairly harmless, but what if we...' I am curious what is there :)

That's a great question, nice catch! I'll have to update that. The intention was to say "what if we are not deploying our backend and frontend to the same server". By creating a "shared" repository we can include the shared data in both back & front, but deploy each one separately when setting up our production environments.

So, how to switch branch inside the git submodule? Without it, this article seems useless ;)

Same syntax as within the main module, just: git branch branch-name while within a directory of the submodule, and the branch will be updated for the submodule and not the main module.
https://dev.to/alexeagleson/how-to-create-a-node-and-react-monorepo-with-git-submodules-2g83
Hi: I wrote a simple application to test the FLOP count of an application by using SDE; the code is as follows:

#include <stdio.h>
#include <stdlib.h>

float addSelf(float a, float b)
{
    return a + b;
}

int main()
{
    float a = 10.0;
    float b = 7.0;
    int i = 0;
    float c = 0.0;
    for (i = 0; i < 999; i++)
    {
        c = addSelf(a, b);
    }
    printf("c = %f\n", c);
    return 0;
}

The processor is an i7-7500U, the OS is Windows 10, and the IDE is CodeBlocks. I downloaded the SDE "sde-external-8.16.0-2018-01-30-win" and ran SDE with the command: sde -mix -- application.exe. The output file is "sde-mix-out.txt". I searched for "elements_fp" in the file, but I found nothing! And I searched for "FMA" in the file, and found nothing either! Does it mean there is no floating-point calculation in this application? Obviously that's impossible. Excuse me, what's the problem?
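One plausible explanation — an assumption on my part, since the build flags aren't shown: a, b, and the loop bounds are all compile-time constants, so the compiler is free to fold the entire computation into the literal 17.0, leaving no floating-point instructions in the binary for SDE's elements_fp counters to find. A variant with volatile operands forces the additions to actually execute at run time:

```c
/* Variant of the test program with volatile operands: the compiler can no
 * longer fold the additions into a constant at compile time, so the loop
 * performs 999 real single-precision adds that an instruction-mix tool
 * has a chance to observe. */
float add_self(volatile float a, volatile float b) {
    return a + b;
}

float run_loop(void) {
    volatile float a = 10.0f;
    volatile float b = 7.0f;
    float c = 0.0f;
    int i;
    for (i = 0; i < 999; i++) {
        c = add_self(a, b); /* executes at run time on every iteration */
    }
    return c;
}
```

If elements_fp still doesn't appear after a change like this, checking whether the compiler emits x87 rather than SSE scalar instructions would be the next thing to look at.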
https://software.intel.com/en-us/forums/intel-isa-extensions/topic/759071
- NAME - VERSION - SYNOPSIS - DESCRIPTION - VARIABLES - FUNCTIONS - BUGS - SEE ALSO - ACKNOWLEDGMENTS - AUTHOR NAME Test::PDL - Test Perl Data Language arrays (a.k.a. piddles) for equality VERSION version 0.13 SYNOPSIS use PDL; use Test::More tests => 3; use Test::PDL qw( is_pdl :deep ); # an example of a test that succeeds $got = sequence 5; $expected = pdl( 0,1,2,3,4 ); is_pdl( $got, $expected, 'sequence() works as expected' ); # OUTPUT: # ok 1 - sequence() works as expected # if a test fails, detailed diagnostics are printed; the output is # similar to that of is() from L<Test::More> $got = pdl( 0,-1,-2,3,4 ); $expected = sequence 5; is_pdl( $got, $expected, 'demonstrate the output of a failing test' ); # OUTPUT: # not ok 2 - demonstrate the output of a failing test # # Failed test 'demonstrate the output of a failing test' # at aux/pod.t line 16. # values do not match # got: Double D [5] (P ) [0 -1 -2 3 4] # expected: Double D [5] (P ) [0 1 2 3 4] # piddles within other data structures can be tested with Test::Deep use Test::Deep qw( cmp_deeply ); $got = { name => 'Histogram', data => long( 17,0,1 ) }; $expected = { name => 'Histogram', data => test_long( 17,0,0,1 ) }; cmp_deeply( $got, $expected, 'demonstrate the output of a failing deep comparison' ); # OUTPUT: # not ok 3 - demonstrate the output of a failing deep comparison # # Failed test 'demonstrate the output of a failing deep comparison' # at aux/pod.t line 30. # Comparing $data->{"data"} as a piddle: # dimensions do not match in extent # got : Long D [3] (P ) [17 0 1] # expect : Long D [4] (P ) [17 0 0 1] DESCRIPTION With Test::PDL, you can compare two piddles for equality. The comparison is performed as thoroughly as possible, comparing types, dimensions, bad value patterns, and finally the values themselves. The exact behaviour can be configured by setting certain options (see set_options() and %OPTIONS below). Test::PDL is mostly useful in test scripts. 
Test::PDL is to be used with the Perl Data Language (PDL). By default, Test::PDL exports only one function: is_pdl(). The other functions are exported on demand only. The export tag :deep exports test_pdl() and one function for each PDL type constructor (like short(), double(), etc.), prefixed with test_: test_short(), test_double(), ... VARIABLES %OPTIONS The comparison criteria used by Test::PDL can be configured by setting the values in the %OPTIONS hash. This can be done directly, by addressing %Test::PDL::OPTIONS directly. However, it is preferred that set_options() is used instead. - TOLERANCE The tolerance used to compare floating-point values. Initially set to 1e-6. This is currently an absolute tolerance, meaning that two values compare equal if the absolute value of their difference is below the tolerance. - EQUAL_TYPES If true, only piddles with equal type can be considered equal. If false, the types of the piddles being compared is not taken into consideration. Defaults to true: types must match for the comparison to succeed. If you want to write tests like is_pdl( $got, pdl([ 1, 3, 5, 6 ]) ); without having to worry about the type of the piddle being exactly double (which is the default type of the pdl() constructor), set EQUAL_TYPES equal to 0. FUNCTIONS import Custom importer that recognizes configuration options specified at use time, as in use Test::PDL -equal_types => 0; This invocation is equivalent to use Test::PDL; Test::PDL::set_options( EQUAL_TYPES => 0 ); but is arguably somewhat nicer. _approx Internal function reimplementing the functionality of PDL::approx(), but with a tolerance that is not remembered across invocations. Rather, the tolerance can be set by the user (see set_options() and $OPTIONS{TOLERANCE}), and defaults to 1e-6. _comparison_fails Internal function which does the real work of comparing two piddles. If the comparison fails, _comparison_fails() returns a string containing the reason for failure. 
If the comparison succeeds, _comparison_fails() returns zero. The criteria for equality are the following: Both arguments must be piddles for the comparison to succeed. Currently, there is no implicit conversion from scalar to piddle. The type of both piddles must be equal if (and only if) EQUAL_TYPES is true. The number of dimensions must be equal. That is, a two-dimensional piddle only compares equal with another two-dimensional piddle. The extent of the dimensions are compared one by one and must match. That is, a piddle with dimensions (5,4) cannot compare equal with a piddle of dimensions (5,3). Note that degenerate dimensions are not treated specially, and thus a piddle with dimensions (5,4,1) is considered different from a piddle with dimensions (5,4). For piddles that conform in type and shape, the bad value pattern is examined. If the two piddles have bad values in different positions, the piddles are considered different. Note that two piddles may compare equal even though their bad flag is different, if there are no bad values. And last but not least, the values themselves are examined one by one. For integer types, the comparison is performed exactly, whereas an approximate equality is used for floating-point types. The approximate comparison is implemented using a private reimplementation of PDL::approx(). See _approx() for more information. _dimensions_match Internal function which compares the extent of each of the dimensions of two piddles, one by one. The dimensions must be passed in as two array references. Returns 1 if all dimensions match pairwise. Returns 0 otherwise. This function will not operate correctly if the number of dimensions does not match between the piddles, so be sure to check that before calling this function. is_pdl Run a test comparing a piddle to an expected piddle, and fail with detailed diagnostics if they don't compare equal. 
is_pdl( $got, $expected, $test_name ); Yields ok if the first two arguments are piddles that compare equal, not ok if the piddles are different, or if at least one is not a piddle. Prints a diagnostic when the comparison fails, with the reason and a brief printout of both arguments. See the documentation of _comparison_fails() for the comparison criteria. $test_name is optional. Named after is() from Test::More. eq_pdl Return true if two piddles compare equal, false otherwise. my $equal = eq_pdl( $got, $expected ); eq_pdl() contains just the comparison part of is_pdl(), without the infrastructure required to write tests with Test::More. It could be used as part of a larger test in which the equality of two piddles must be verified. By itself, eq_pdl() does not generate any output, so it should be safe to use outside test suites. eq_pdl_diag Return true if two piddles compare equal, false otherwise, and the reason why the comparison failed (if it did). my( $ok ) = eq_pdl_diag( $got, $expected ); my( $ok, $diag ) = eq_pdl_diag( $got, $expected ); eq_pdl_diag() is like eq_pdl(), except that it also returns the reason why the comparison failed (if it failed). $diag will be false if the comparison succeeds. Does not need Test::Builder, so you can use it as part of something else, without side effects (like generating output). It was written to support deep comparisons with Test::Deep. test_pdl Special comparison to be used in conjunction with Test::Deep to test piddles inside data structures. my $expected = { ..., some_field => test_pdl( 1,2,-7 ), ... }; my $expected = [ ..., test_short( 1,2,-7 ), ... ]; Suppose you want to compare data structures that happen to contain piddles. You use is_deeply() (from Test::More) or cmp_deeply() (from Test::Deep) to compare the structures element by element. Unfortunately, you cannot just write my $got = my_sub( ... ); my $expected = { ..., some_field => pdl( ... ), ... 
}; is_deeply $got, $expected; Neither does cmp_deeply() work in the same situation. is_deeply() tries to compare the piddles using the (overloaded) == comparison operator, which doesn't work. It simply dies with an error message saying that multidimensional piddles cannot be compared, whereas cmp_deeply() performs only a shallow comparison of the references. What you need is a special comparison, which is provided by this function, to be used with cmp_deeply(). You need to rewrite $expected as follows my $expected = { ..., some_field => test_pdl( ... ), ... }; cmp_deeply $got, $expected; Note that you need to write test_pdl() instead of pdl(). You could achieve the same thing with my $expected = { ..., some_field => code( sub { eq_pdl_diag( shift, pdl( ... ) ) } ), ... }; but the diagnostics provided by test_pdl() are better, and it's easier to use. test_pdl() accepts the same arguments as the PDL constructor pdl() does. If you need to compare a piddle with a type different from the default type, use one of the provided test_byte(), test_short(), test_long(), etc.: my $expected = { data => test_short( -4,-9,13 ) }; If you need to manipulate the expected value, you should keep in mind that the return value of test_pdl() and the like are not piddles. Therefore, in-place modification of the expected value won't work: my $expected = { data => test_short( -99,-9,13 )->inplace->setvaltobad( -99 ) }; # won't work! You should rather do my $expected = { data => test_pdl( short(-99,-9,13)->inplace->setvaltobad(-99) ) }; test_pdl() will correctly set the type of the expected value to short in the above example. set_options Configure the comparison carried out by Test::PDL's testing functions. # e.g., if a tolerance of 1e-6 is too tight Test::PDL::set_options( TOLERANCE => 1e-4 ); The preferred way to set the options to this module. See %OPTIONS for all allowed options. set_options() dies with an error if an unknown option is passed. 
Note that sensible default values are provided for all options, so you needn't use this routine if you are fine with the defaults. This function is not exported. Rather, it must be called as Test::PDL::set_options( KEY => VALUE, ... ); BUGS None reported so far. SEE ALSO PDL, Test::More, Test::Deep, Test::PDL::Deep ACKNOWLEDGMENTS Thanks to PDL Porters Joel Berger, Chris Marshall, and David Mertens for feedback and improvements. AUTHOR Edward Baudrez <ebaudrez@cpan.org> This software is copyright (c) 2016 by Edward Baudrez. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
https://metacpan.org/pod/Test::PDL
Structure of a C Program

A program in C consists of one or more functions, and one of them must be called main, which is the controlling function. Program execution always begins by executing the main function. Additional function definitions may precede or follow main. A function definition consists of a function heading -- the return type, the function name, and a parenthesized parameter list -- followed by a function body enclosed in braces.

Let us look at an example C program. Consider the flow-chart below depicting how the roots of a quadratic polynomial can be found:

Based on this flowchart, a sample C program for finding the real roots of a quadratic polynomial is given below:

#include <stdio.h> /* a preprocessor directive to include the file stdio.h */
#include <math.h>  /* a preprocessor directive to include the file math.h */

main()
{
    float a, b, c, d, w;
    float discriminant, root1, root2;
    float compute_discriminant(float x, float y, float z); /* function declaration */

    printf("Enter the coefficients a, b and c: ");
    scanf("%f %f %f", &a, &b, &c);

    d = 2 * a;
    discriminant = compute_discriminant(a, b, c);

    if (discriminant < 0)
        printf("The roots are complex\n");
    else
    {
        w = sqrt(discriminant);
        printf("The roots are real and different\n");
        root1 = (-b - w) / d;
        root2 = (-b + w) / d;
        printf("Root1 = %f, Root2 = %f\n", root1, root2);
    }
    return;
}

/* Function definition */
float compute_discriminant (float x, float y, float z)
{
    float d; /* local variable declaration */
    d = y * y - 4 * x * z;
    return d;
}

Here the main function utilizes a separate programmer defined function, called compute_discriminant, to find the value of the discriminant. Within the programmer defined function, x, y, z are arguments (also called parameters). The values for these parameters are supplied from the main() function. The values of a, b, c are supplied to x, y, z respectively. Then the programmer-defined function performs the calculation and the result is returned to main. The programmer defined function is called in the main function, in the statement:

discriminant = compute_discriminant( a, b, c );

The above statement invokes the function compute_discriminant with arguments a, b, and c. The function main() transfers the control of the program execution to the function compute_discriminant(). The called function finally, after doing its work, returns the control back to the calling function, viz.
main(), with the result of the calculation performed by it. You would notice that the main function also includes a function declaration, which indicates that compute_discriminant accepts floating-point arguments and returns a floating-point value. 1. Identify the errors in the following C program: # include { STDIO.H }*/ c program *\ main [ ]{ printf("Welcome to C"} ; Correct the program to display "Welcome to C" on the screen. 2. Match the following: (i) main (a) defined in the header file stdio.h (ii) /*.. */ (b) terminates a expression (iii) ; (c) encloses compound statements (iv) { } (d) the controlling function of a C program (v) printf (e) encloses comment entries 3. Do you find anything odd in the following program? # include <stdio.h> main() { /* printf("Hi! How are you !!!!"); */ } 4. In the function, xyz (a, b, c); a, b and c are known as_____________. 5. LAB WORK: Enter, compile and run the following c program and watch their output. Note that some of the programs contain typical errors which are done by beginner programmers. So, study the errors carefully and correct the programs. a) # include<stdio.h> main() { printf("Welcome to c programming\n"); printf("Hope you enjoy programming\n"); printf("Wish you a good beginning\n\n"); } b) # include <stdio.h. main() { char name [30]; printf("This program demonstrates how C interacts with an user\n"); printf("Enter your name:"); gets(name); fflush(stdin); printf("Your name is : %s\n",name); } c) # include <stdio.h> main() { /* C does quick calculate for you */ /* Multiplication and division */ int num1,num2,result=0; printf("\nEnter the first number:"); scanf("%d",&num1); fflush(stdin); printf("\nEnter the second number:"); scanf("%d",&num2); fflush(stdin); /* multiplication */ result = num1* num2; printf("We get %d after dividing %d by %d\n",result, num1,num2); /* Division */ result = num1 / num2; printf("We get %d after dividing %d by %d\n",result, num1,num2); } d. 
# include <stdio.h> /* This is a demonstration of modular programming */ main() /* controlling module */ { printf("We are in main function now…\n"); message(); printf(We are back in function main again…\n"); } message() { printf("Hi! We are in message now and returning to main\n"); }.
http://www.how2lab.com/programming/c/structure-of-a-c-program.php
Jumping into Truffle and Rinkeby Despite the recent publicity and popularity that the blockchain community has been receiving, developers in the space are still extremely few and far between. A huge goal of ours is to have a thriving community, not only behind our protocol, but in general. We want to help create talented developers for us, for all our friends in the space, and for the hundreds of new blockchain companies out there. One of the ways we’re forging this environment is by having live workshops, as seen below, where we teach technical blockchain fundamentals. In that spirit, we’ll also be putting some tutorials online- doing what we can to help this community of developers grow. Here’s a Truffle and Rinkeby tutorial, made with Truffle v3.4.9 (core: 3.4.8) and Solidity v0.4.15. Truffle Setup Windows If you’re a Windows user, we recommend installing and using Truffle via Windows PowerShell or Git BASH. These two shells provide features far beyond the standard Command Prompt, and will make your life on the command line much easier. Non-Windows Let’s install Testrpc. Testrpc is a Node.js based Ethereum client for testing and development. It uses Ethereumjs to simulate full client behavior and make developing Ethereum applications much faster. It also includes all popular RPC functions and features (like events) and can be run deterministically to make development a breeze. $ npm install -g ethereumjs-testrpc $ testrpc Now, while Testrpc is running in a brand new terminal, we can go ahead and install Truffle, initialize and app and run it. $ npm install -g truffle $ cd ~/Desktop $ mkdir truftest $ cd truftest $ truffle unbox react Here we are using a shortcut truffle unbox react which does an initialization and adds the necessary modules for react. If you want to have a pure installation you should instead run truffle init inside of the folder you created. Go into your Truffle folder. 
$ cd ~/Desktop/truftest $ truffle compile $ truffle migrate $ npm start These commands will migrate the current contracts into testrpc. Now, when your app starts, head over to localhost:3000 and you should see this screen: Solidity Let’s mess around with the Solidity contract. We are going to be making a contract called MySale. truftest/contracts/MySale.sol $ truffle create contract MySale Now we have to add a migration file for deploying this contract to the EVM. Create a file called truftest/migrations/3_add_Sale.js Note that the filename is prefixed with a number and is suffixed by a description. The numbered prefix is required in order to record whether the migration ran successfully. The suffix is purely for human readability and comprehension. Okay, let’s fill out the migration. var MySale = artifacts.require("./MySale.sol"); module.exports = function(deployer) { deployer.deploy(MySale); }; Now you will be provided with a Solidity file with no functions. I went ahead and filled it in with 3 rudimentary functions. truftest/contracts/MySale.sol pragma solidity ^0.4.4; contract MySale { mapping(address => uint) balance; uint total_coins = 1; function printCoin(uint howMuch) public{ balance[msg.sender] += howMuch; total_coins += howMuch; } function allCoins() constant public returns(uint){ return total_coins; } function myCoin() constant public returns(uint){ return balance[msg.sender]; } } Now, let’s go ahead and make our app have a touch event to trigger both a get and a set function from that contract. Go to truftest/src/App.js and add this. 
import React, { Component } from 'react' import MySale from '../build/contracts/MySale.json' import getWeb3 from './utils/getWeb3' import './css/oswald.css' import './css/open-sans.css' import './css/pure-min.css' import './App.css' const contract = require('truffle-contract') const mySale = contract(MySale) class App extends Component { constructor(props) { super(props) this.state = { storageValue: 0, web3: null } } componentWillMount() { getWeb3 .then(results => { this.setState({ web3: results.web3 }) }) .catch(() => { console.log('Error finding web3.') }) } coinCount(){ mySale.setProvider(this.state.web3.currentProvider) var mySaleInstance console.log("...getting data"); this.state.web3.eth.getAccounts((error, accounts) => { mySale.deployed().then((instance) => { mySaleInstance = instance return mySaleInstance.allCoins.call({from: accounts[0]}) }).then((result) => { console.log("result", result); }) }) } printCoin(){ mySale.setProvider(this.state.web3.currentProvider) var mySaleInstance console.log("...setting data"); this.state.web3.eth.getAccounts((error, accounts) => { mySale.deployed().then((instance) => { mySaleInstance = instance return mySaleInstance.printCoin(4, {from: accounts[0]}) }).then((result) => { console.log("result", result); }) }) } render() { return ( <div> <div id="get" onClick={this.coinCount.bind(this)}></div> <div id="set" onClick={this.printCoin.bind(this)}></div> </div> ); } } export default App I added some CSS to #get and #set so I can see those div’s. You can find this in truftest/src/App.css #set{ height:50px; width:50px; background-color:red; float:left; margin: 20px; cursor: pointer; } #get{ height:50px; width:50px; background-color:blue; float:left; margin: 20px; cursor: pointer; } What we get is two boxes that when clicked execute our contract and console.log the result. But what if we didn’t want to run the app inside of Testrpc? For that we will need Mist and Rinkeby. Rinkeby is the test EVM. 
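Before leaving Testrpc behind, it can help to sanity-check what those two boxes should print. Here is a plain-JavaScript model of MySale's state transitions — an illustration only (the name MySaleModel is mine), and not how the EVM actually stores contract state:

```javascript
// Hypothetical plain-JS model of the MySale contract's logic,
// useful only as a sanity check of the arithmetic.
class MySaleModel {
  constructor() {
    this.balance = new Map();
    this.totalCoins = 1; // mirrors `uint total_coins = 1;`
  }
  // mirrors printCoin(uint howMuch)
  printCoin(sender, howMuch) {
    this.balance.set(sender, (this.balance.get(sender) || 0) + howMuch);
    this.totalCoins += howMuch;
  }
  allCoins() { return this.totalCoins; }                     // mirrors allCoins()
  myCoin(sender) { return this.balance.get(sender) || 0; }   // mirrors myCoin()
}

const model = new MySaleModel();
model.printCoin("0xabc", 4);
console.log(model.allCoins());   // 5
console.log(model.myCoin("0xabc")); // 4
```

Clicking the red (set) box and then the blue (get) box should mirror these numbers, minus the transaction plumbing.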
Let's go ahead and get set up with Mist and Rinkeby next.

- Close the Testrpc terminal and close the Truffle server.

Dependencies

- Node.js x7.x (use the preferred installation method for your OS).
- Meteor javascript app framework.
- Yarn package manager.
- Electron v1.4.15 cross platform desktop app framework.
- Gulp build and automation system.

$ curl | sh
$ curl -o- -L | bash
$ yarn global add electron@1.4.15
$ yarn global add gulp

Mist Setup

- Quick reminder to make sure you closed the Testrpc terminal and the Truffle server.

$ cd ~/Desktop
$ mkdir mist_test && cd mist_test
$ git clone
$ cd mist
$ yarn
$ cd interface && meteor --no-release-check

Now you're ready to initialize Mist for development. While Meteor is running, open another terminal window and go back to the folder you created: /Desktop/mist_test/mist

Now let's run Electron for the first time:

$ cd ~/Desktop/mist_test/mist
$ yarn dev:electron

- Launch the application in the test network.
- Wait for past blocks to download. (On the bottom left, Mist is downloading the past blocks. Wait for this to update.)
- Write down your Ethereum address; we will need it later in this tutorial.

0x6a6401AEb4a3beb93820904E761b0d86364bb39E

We're Done with Mist!

If you want to read more about Mist you can do so here. For now close both terminals: the one running meteor and the one running mist/electron.

- Make sure to close both terminals.

Free Rinkeby Ether

In order to use the test network we have to have test Ether, which we can't just print out.

- Log into Github; create an account if you don't have one.
- Head to
- Create a new public gist and put only your Ethereum address in there.
- Get free Ether from.
- Paste the url from the gist file into the faucet page.

Connecting to Rinkeby

Make sure to close the terminal. First, start geth with Rinkeby and make sure that the correct APIs for Truffle are enabled.
$ geth --rinkeby --rpc --rpcapi db,eth,net,web3,personal --unlock="0x6a6401AEb4a3beb93820904E761b0d86364bb39E" --rpccorsdomain

Please don't forget to replace my wallet with the wallet you got from Mist/Electron. This will ask you for the password you gave Mist/Electron.

Next, we need to add Rinkeby to our Truffle config file. If we open truffle.js in our contract code, we'll see something like:

module.exports = {
  rpc: {
    host: 'localhost',
    port: '8545'
  },
  networks: {
    development: {
      host: "localhost",
      port: 8545,
      network_id: "*" // Match any network id
    },
    rinkeby: {
      host: "localhost", // Connect to geth on the specified port
      port: 8545,
      from: "0x6a6401AEb4a3beb93820904E761b0d86364bb39E", // default address to use for any transaction Truffle makes during migrations
      network_id: 4,
      gas: 4612388 // Gas limit used for deploys
    }
  },
};

Please don't forget to replace the from entry in this file with the Ethereum address you scored from Mist/Electron.

Now we just have to migrate the contract onto Rinkeby. This is going to ask for the password you gave Mist/Electron.

$ truffle migrate --network rinkeby

If you want to view the current state of your contracts on the Rinkeby test network, look up your address's transactions on the Rinkeby block explorer (replace my wallet address with yours).

Congratulations, We Set It Up!!

Hopefully this tutorial provided some useful information for you. Let us know what else you'd like us to cover and we'll try to do it in either a live workshop or an online tutorial just like this.
https://medium.com/@PasschainBlog/jumping-into-truffle-and-rinkeby-3acf6a2d9bef
#include<iostream>
#include<string>
using namespace std;

//declare and initialize array INV along with its column and row respectively
int INV[3][4] = {{10,20,30,40},{15,20,25,30},{35,40,45,50}};
enum SIZES {S, M, L, XL};
enum COLORS {RED, GREEN, BLUE};

//declare functions
void sizeNColor();
void totalInv();
void totalSize();

int main() {
    //Total inventory
    totalInv();
    //Total number of a given size
    totalSize();
    //Total number of a given color and size
    sizeNColor();
    return 0;
}

void totalInv() {
    int totalInv = 0;
    for (COLORS row = RED; row <= BLUE; row = COLORS(row+1)) {
        for (SIZES col = S; col <= XL; col = SIZES(col+1)) {
            totalInv += INV[row][col];
        }
    }
    cout << "Total inventory is: " << totalInv << endl;
}

void totalSize() {
    char ch;
    SIZES col;
    cout << "Enter a size(S,M,L,XL): ";
    cin >> ch;
    ch = toupper(ch);
    switch(ch) {
        case 'S': col = S; break;
        case 'M': col = M; break;
        case 'L': col = L; break;
        case 'XL': col = XL; break;
    }
    int totalSize = 0;
    for (COLORS row = RED; row <= BLUE; row = COLORS(row+1)) {
        totalSize += INV[row][col];
    }
    cout << "The total number of " << ch << " shirts is " << totalSize << endl;
}

void sizeNColor() {
    string st;
    string size, color;
    cout << "Enter size and color" << endl;
    getline(cin, st);
    int n = st.find(' ');
    int m = st.length();
    size = st.substr(0, n);
    color = st.substr(n+1, m-n+1);
    for (int i = 0; i < size.length(); ++i)
        size[i] = toupper(size[i]);
    for (int i = 0; i < color.length(); ++i)
        color[i] = toupper(color[i]);
    COLORS row;
    SIZES col;
    if (color == "RED") row = RED;
    else if (color == "GREEN") row = GREEN;
    else if (color == "BLUE") row = BLUE;
    if (size == "SMALL") col = S;
    else if (size == "MEDIUM") col = M;
    else if (size == "LARGE") col = L;
    else if (size == "EXTRA-LARGE") col = XL;
    cout << INV[row][col] << endl;
}

If I comment out totalSize(), the code still runs OK (Snippet ID=8118283), but if I run both functions (Snippet ID=8118284), then an error appears about row and col not being initialized. It doesn't make sense, since I was able to use the program when one function was commented out.
I am totally confused :(

The first program doesn't work, but if I comment out totalSize() (line 25 in the second program), then I don't get the error message. If I comment out the other function, sizeNColor() (line 31 in the third program), then I don't get the error message either. But if I try to use both functions in the program (the first program), then I get the error message.

But after the cin >>, there is still leftover stuff in the input stream (e.g., any whitespace, newlines). Then your getline doesn't even have to wait for you to enter data. You can verify this by setting a breakpoint on the getline and not entering any data. Then step over the getline. Instead of waiting for input, it just continues, and you have no valid token strings.

To illustrate, if you replace the getline with

cin >> size;
cin >> color;

then the cin skips the leading whitespace/newlines and waits for a valid token string.

I strongly recommend that you start using the debugger often and more often. You will find solutions to the problems you encounter more quickly. It allows you to be stymied by only the more difficult problems, and you will have to ask fewer questions. Using a good debugger is essential for development. To quickly get started on using the Visual Studio 2010 debugger (which is an excellent choice for your types of problems), you can read:

C/C++ Beginner's Debugging Guide

This beginner's guide should take only about 15 minutes to walk through using your code as an example. After becoming familiar with the basics, move on to these two articles:

Breakpoint Tips for C/C++
Watch, Memory, Stack Tips: C/C++
I'd like to emphasize for your sake that you take the time to really learn how to use the VS C++ debugger. Even those who may criticize Microsoft will invariably say that this debugger is one of the best there is. I'll often have four memory windows open concurrently to analyze some software bug. The benefit to you is that you will gain more insight into code where you are still a little shaky. From this inside (literally, inside the program) information, you may be able to experiment a little and get the program to work. But if not, then you can ask questions where you explain what you are seeing in the debugger (even a snapshot of one or two of the debugger panes will help us understand your problem better).
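The leftover-newline behaviour described in the accepted answer can be reproduced without a terminal by driving the same cin >>/getline sequence from a std::istringstream. The function name here is made up for the sketch; it is not from the question's program:

```cpp
#include <cassert>
#include <istream>
#include <limits>
#include <sstream>
#include <string>

// Reads one character with >> (as totalSize does), then a whole line with
// getline (as sizeNColor does). The >> stops before the '\n' that ended the
// first line, so without discarding it, getline returns an empty string.
std::string lineAfterToken(std::istream& in, bool discardRest) {
    char ch;
    in >> ch;  // e.g. the size letter; leaves the trailing '\n' buffered
    if (discardRest)
        in.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
    std::string line;
    std::getline(in, line);
    return line;
}
```

The fix in the asker's program would be a single `cin.ignore(...)` between the `cin >> ch;` in totalSize() and the `getline(cin, st);` in sizeNColor().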
https://www.experts-exchange.com/questions/26941018/enum-error-using-functions.html
. >>> >>> as your code uses MMX you need to at least mention EMMS/float issue in the >>> dox and probably a emms_c(); call before draw_horiz_band() >>> dunno if these are all >> >> Added in the comment. >> >>>. >>>. >> >> >> From c1d931e9846ca862935254726010e2d21737f5c5 Mon Sep 17 00:00:00 2001 >> From: Alexander Strange <astrange at ithinksw.com> >> Date: Fri, 15 Jan 2010 01:47:47 -0500 >> Subject: [PATCH 1/2] Add macros for write-combining optimization. >> >> --- >> libavutil/intreadwrite.h | 36 ++++++++++++++++- >> libavutil/x86/intreadwrite.h | 92 ++++++++++++++++++++++++++++++++++++++++++ >> 2 files changed, 127 insertions(+), 1 deletions(-) >> create mode 100644 libavutil/x86/intreadwrite.h >> >> diff --git a/libavutil/intreadwrite.h b/libavutil/intreadwrite.h >> index 933732c..73d6000 100644 >> --- a/libavutil/intreadwrite.h >> +++ b/libavutil/intreadwrite.h >> @@ -25,7 +25,7 @@ >> >> /* >> * Arch-specific headers can provide any combination of >> - * AV_[RW][BLN](16|24|32|64) macros. Preprocessor symbols must be >> + * AV_([RW]|COPY|SWAP)[BLN](16|24|32|64) macros. Preprocessor symbols must be > > That's both ugly and wrong. Write "AV_[RW][BLN](16|24|32|64) or > AV_(COPY|SWAP)(32|64|128)" instead, or whatever sizes make sense for > the second one. Quite true, done. Sizes smaller than 64 don't need optimizing like this, so I've left that out. They need special macros in the future, since all of these type-puns are strict aliasing violations. Somehow it's not being miscompiled just yet though. x264 handles aliasing like this:;a=commitdiff;h=1d54b2c7f9110cb7c7af1059cf595db17ed96273 All comments below not replied to are done (tested with --cpu=586 and 686). By the way, AV_WN is missing parens around the definition. >> * defined, even if these are implemented as inline functions. 
>> */ >> >> @@ -37,6 +37,8 @@ >> # include "mips/intreadwrite.h" >> #elif ARCH_PPC >> # include "ppc/intreadwrite.h" >> +#elif ARCH_X86 >> +# include "x86/intreadwrite.h" >> #endif >> >> /* >> @@ -397,4 +399,36 @@ struct unaligned_16 { uint16_t l; } __attribute__((packed)); >> } while(0) >> #endif >> >> +/* Parameters for AV_COPY*, AV_SWAP*, AV_ZERO* must be >> + * naturally aligned. They may be implemented using MMX, >> + * so emms_c() must be called before using any float code >> + * afterwards. >> + */ >> + >> +#define AV_COPY(n, d, s) *(uint##n##_t*)(d) = *(const uint##n##_t*)(s) > > Please put () around the entire expansion. You never know... > >> +#ifndef AV_COPY64 >> +# define AV_COPY64(d, s) AV_COPY(64, d, s) >> +#endif >> + >> +#ifndef AV_COPY128 >> +# define AV_COPY128(d, s) do {AV_COPY64(d, s); AV_COPY64((char*)(d)+8, (char*)(s)+8);} while(0) >> +#endif > > A few line breaks would make that much more readable. > >> +#define AV_SWAP(n, a, b) FFSWAP(uint##n##_t, *(uint##n##_t*)(a), *(uint##n##_t*)(b)) >> + >> +#ifndef AV_SWAP64 >> +# define AV_SWAP64(a, b) AV_SWAP(64, a, b) >> +#endif >> + >> +#define AV_ZERO(n, d) *(uint##n##_t*)(d) = 0 > > Once again, () around the whole thing. > >> +#ifndef AV_ZERO64 >> +# define AV_ZERO64(d) AV_ZERO(64, d) >> +#endif >> + >> +#ifndef AV_ZERO128 >> +# define AV_ZERO128(d) do {AV_ZERO64(d); AV_ZERO64((char*)(d)+8);} while(0) >> +#endif > > Some newlines wouldn't hurt. > >> #endif /* AVUTIL_INTREADWRITE_H */ >> diff --git a/libavutil/x86/intreadwrite.h b/libavutil/x86/intreadwrite.h >> new file mode 100644 >> index 0000000..bdb1e53 >> --- /dev/null >> +++ b/libavutil/x86/intreadwrite.h >> @@ -0,0 +1,92 @@ >> +/* >> + * Copyright (c) 2010 Alexander Strange <astrange at ithinks_X86_INTREADWRITE_H >> +#define AVUTIL_X86_INTREADWRITE_H >> + >> +#include <stdint.h> >> +#include "config.h" >> + >> +#if !HAVE_FAST_64BIT && __MMX__ > > If you insist that __MMX__ is the right thing to test, you must use > defined(__MMX__). 
It's not a 0/1 thing like ours. > >> +#define AV_COPY64 AV_COPY64 >> +static inline void AV_COPY64(void *d, const void *s) >> +{ >> + __asm__("movq %1, %%mm0 \n\t" >> + "movq %%mm0, %0 \n\t" >> + : "=m"(*(uint64_t*)d) >> + : "m" (*(const uint64_t*)s) >> + : "mm0"); >> +} >> + >> +#define AV_SWAP64 AV_SWAP64 >> +static inline void AV_SWAP64(void *a, void *b) >> +{ >> + __asm__("movq %1, %%mm0 \n\t" >> + "movq %0, %%mm1 \n\t" >> + "movq %%mm0, %0 \n\t" >> + "movq %%mm1, %1 \n\t" >> + : "+m"(*(uint64_t*)a), "+m"(*(uint64_t*)b) >> + ::"mm0", "mm1"); >> +} >> + >> +#define AV_ZERO64 AV_ZERO64 >> +static inline void AV_ZERO64(void *d) >> +{ >> + __asm__("pxor %%mm0, %%mm0 \n\t" >> + "movq %%mm0, %0 \n\t" >> + : "=m"(*(uint64_t*)d) >> + :: "mm0"); >> +} >> + >> +#endif /* __MMX__ && !HAVE_FAST_64BIT */ >> + >> +#if __SSE__ > > Use #ifdef. > >> +). I'd be OK with the first one, but s is const and d isn't, so it needs separate declarations, which look worse to me. As for the second, clang gives a strange warning and then breaks: uint64_t (*vd)[2] = d; const uint64_t (*vs)[2] = s; ./libavutil/x86/intreadwrite.h:67:31: warning: initializing 'void const *' discards qualifiers, expected 'uint64_t const (*)[2]' const uint64_t (*vs)[2] = s; ^ ./libavutil/x86/intreadwrite.h:72:20: error: cannot compile this unexpected cast lvalue yet : "m" (*vs) Looks a bit nicer to me without the typedef, but I'd rather not spend time changing this around. >> +#endif /* __SSE__ */ >> + >> +#if __SSE2__ > > Use #ifdef. > >> +#define AV_ZERO128 AV_ZERO128 >> +static inline void AV_ZERO128(void *d) >> +{ >> + typedef struct {uint64_t i[2];} v; >> + >> + __asm__("pxor %%xmm0, %%xmm0 \n\t" >> + "movdqa %%xmm0, %0 \n\t" >> + : "=m"(*(v*)d) >> + :: "xmm0"); >> +} > > > Same as above about the typedef. > >> +#endif /* __SSE2__ */ >> + >> +#endif /* AVUTIL_X86_INTREADWRITE_H */ >> -- >> 1.6.5.2 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 0001-Add-macros-for-write-combining-optimization.patch Type: application/octet-stream Size: 5158 bytes Desc: not available URL: <> -------------- next part -------------- A non-text attachment was scrubbed... Name: 0002-H.264-Use-64-and-128-bit-write-combining-macros.patch Type: application/octet-stream Size: 9637 bytes Desc: not available URL: <>
http://ffmpeg.org/pipermail/ffmpeg-devel/2010-January/087216.html
Marcel is a new shell. It is similar to traditional shells in many ways, but it does a few things differently: Piping: All shells use pipes to send a text from the output of one command to the input of another. Marcel pipes structured data instead of strings. Python: Marcel is implemented in Python, and exposes Python in a number of ways. If you need a little bit of logic in your commands, marcel allows you to express it in Python. Scripting: Marcel takes an unusual approach to scripting. You can, of course, simply write a sequence of marcel commands in a text file and execute them. But Marcel also provides an API in the form of a Python module. You can import this module to do Python scripting in a far more convenient way than is possible with plain Python. Marcel is licensed under GPLv3. Installing Marcel Modern Shell in Linux Marcel requires Python 3.6 or later. It has been developed and tested on Linux, and it mostly works on macOS. (If you’d like to help port to Windows, or to fix the macOS deficiencies, get in touch.) To install marcel for your own use: # python3 -m pip install marcel Or if you want to install for all users (e.g., to /usr/local): $ sudo python3 -m pip install --prefix /usr/local marcel Once you have installed marcel, check that it’s working by running the command marcel, and then at the marcel prompt, run the version command: $ marcel Customization of Marcel Shell You can customize marcel in the file ~/.marcel.py, which is read on startup, (and reread when modified). As you can tell from the file’s name, customization of marcel is done in Python. One thing you probably want to do is to customize the prompt. To do this, you assign a list to the PROMPT variable. 
For example, if you want your prompt to be the current directory, printed in green, followed by > printed in blue: PROMPT = [ Color(0, 4, 0), lambda: PWD, Color(0, 2, 5), '> ' ] The resulting prompt looks like this: This replaces the inscrutable PS1 configuration that you would need to do in bash. Color(0, 4, 0) specifies green, (the arguments are RGB values, in the range 0-5). PWD is the environment variable representing your current directory and prefixing this variable with lambda: generates a function, evaluated each time the prompt is displayed. The ~/.marcel.py can also import Python modules. E.g., if you want to use the functions of the math module in your marcel commands: from math import * Once you’ve done this, you can refer to symbols from that module, e.g. pi: Note that pi is parenthesized. In general, marcel uses parentheses to delimit Python expressions. So (pi) evaluates the Python expression that retrieves the value of the variable pi. You can also access traditional environment variables in this way, e.g. (USER) and (HOME), or any valid Python expression relying on symbols in marcel’s namespace. And you can, of course, define your own symbols. For example, if you put this function definition in ~/.marcel.py: def factorial(n): f = 1 for i in range(1, n + 1): f *= i return f then you can use the factorial function on the command line, e.g. Marcel Shell Examples Here, we will learn some examples of commands in the marcel shell. Find File Sizes by Extension Explore the current directory recursively, group the files by their extension (e.g. .txt, .py and so on), and compute the total file size for each group. You can do this in marcel as follows: The ls operator produces a stream of File objects, ( -fr means visit directories recursively, and return only files). The File objects are piped to the next command, map. 
The map specifies a Python function, in the outermost parentheses, which maps each file to a tuple containing the file’s extension, and it’s size. (Marcel allows the lambda keyword to be omitted.) The red (reduce) operator, groups by the first part of the tuple (extension) and then sum up the sizes within each group. The result is sorted by extension. Host Executables and the Marcel Pipeline Pipelines may contain a mixture of marcel operators and host executables. Operators pipe objects, but at the operator/executable boundaries, marcel pipes strings instead. For example, this command combines operators and executables and lists the usernames of users whose shell is /bin/bash. $ cat /etc/passwd | map (line: line.split(':')) | select (*line: line[-1] == '/bin/bash') | map (*line: line[0]) | xargs echo cat is a Linux executable. It reads /etc/passwd, and marcel pipes its contents downstream to the marcel operator map. The parenthesized argument to map is a Python function that splits the lines at the : separators, yielding 7-tuples. A select is a marcel operator whose argument is a Python function identifying those tuples in which the last field is /bin/bash. The next operator, another map keeps the username field of each input tuple. Finally, xargs echo combines the incoming usernames into a single line, which is printed to stdout. Scripting in Marcel Shell While Python is sometimes considered to be a scripting language, it doesn’t actually work well for that purpose. The problem is that running shell commands, and other executables from Python is cumbersome. You can use os.system(), which is simple but often inadequate for dealing with stdin, stdout, and stderr. subprocess.Popen() is more powerful but more complex to use. Marcel’s approach is to provide a module that integrates marcel operators with Python’s language features. 
To revisit an earlier example, here is the Python code for computing the sum of file sizes by extension:

from marcel.api import *

for ext, size in (ls(file=True, recursive=True)
                  | map(lambda f: (f.suffix, f.size))
                  | red('.', '+')):
    print(f'{ext}: {size}')

The shell commands are the same as before, except for syntactic conventions. So ls -fr turns into ls(file=True, recursive=True). The map and red operators are there too, connected with pipes, as in the shell version. The entire shell command (ls … red) yields a Python iterator, so the command can be used with Python's for loop.

Database Access with Marcel Shell

You can integrate database access with marcel pipelines. First, you need to configure database access in the config file, ~/.marcel.py, e.g.

define_db(name="jao", driver="psycopg2", dbname="acme", user="jao")
DB_DEFAULT = 'jao'

This configures access to a Postgres database named acme, using the psycopg2 driver. Connections from marcel will be made using the jao user, and the database profile is named jao. (DB_DEFAULT specifies the jao database profile as the one to be used if no profile is specified.) With this configuration done, the database can now be queried using the sql operator, e.g.

sql 'select part_name, quantity from part where quantity

This command queries a table named part, and dumps the query result into the file ~/reorder.csv, in CSV format.

Remote Access with Marcel Shell

Similarly to database access, remote access can be configured in ~/.marcel.py. For example, this configures a 4-node cluster:

define_remote(name="lab",
              user="frankenstein",
              identity='/home/frankenstein/.ssh/id_rsa',
              host=['10.0.0.100', '10.0.0.101', '10.0.0.102', '10.0.0.103'])

The cluster can be identified as lab in marcel commands. The user and identity parameters specify login information, and the host parameter specifies the IP addresses of the nodes in the cluster. Once the cluster is configured, all nodes can be operated on at once.
For example, to get a list of process pids and command lines across the cluster:

@lab [ps | map (proc: (proc.pid, proc.commandline))]

This returns a stream of (IP address, PID, command line) tuples.

For more information visit: Marcel is pretty new and under active development. Get in touch if you would like to help out.
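To close the loop on the customization section above: any function defined in ~/.marcel.py becomes usable inside parenthesized expressions on the command line, just like the factorial example. Here is a hypothetical helper (the name human and its behaviour are my own, not part of marcel) that could format byte counts such as File sizes:

```python
# Hypothetical ~/.marcel.py helper: once defined here, it could be used on
# the marcel command line as, e.g., map (f: (f, human(f.size))).
def human(n):
    """Render a byte count with a binary-prefix unit."""
    for unit in ('B', 'KiB', 'MiB', 'GiB', 'TiB'):
        if n < 1024:
            # Whole bytes are printed as-is; scaled values get one decimal.
            return f'{n} {unit}' if unit == 'B' else f'{n:.1f} {unit}'
        n /= 1024
    return f'{n:.1f} PiB'
```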
https://holhol24.com/2020/08/05/a-more-modern-shell-for-linux/
The workbench is the new PyQt-based GUI that will be the primary interface for interacting with the mantid framework. The plotting is provided by matplotlib. It will eventually replace MantidPlot. The following build instructions assume you have followed the instructions on the Getting Started pages and can build mantid and MantidPlot. To enable the build of the workbench simply set the cmake flag ENABLE_WORKBENCH=ON and build as normal. A workbench startup script (Linux/macOS) or executable (Windows) will appear in the bin folder. For Windows the executable will appear in the configuration subdirectory of bin. Packaging is currently only supported on Linux platforms and must be enabled using the cmake flag PACKAGE_WORKBENCH=ON. The first thing that needs to be done is creating the PyCharm project and configuring the project settings. Please follow the instructions at Getting Started with PyCharm. After the project settings have been configured, a Run/Debug configuration needs to be created. To edit the configurations go to Run->Run... and select Edit Configurations. Select Templates->Python, and hit the green + in the top left. The necessary changes to the configuration are: <Mantid Build Directory>/bin/Debug/workbench-script.pyw <Mantid Build Directory>/bin/Debug <Mantid Build Directory>/bin/Release/workbench-script.pyw <Mantid Build Directory>/bin/Release Note that the only difference here is the change from /bin/Debug/ to /bin/Release/. Make sure you have finished the build you are using (Debug or Release), or there will be import errors. qtpy.PythonQtError: No Qt bindings could be found <Mantid Source Directory>/external/src/ThirdParty/lib/qt5/bin is missing from the Path environment variable. ImportError: DLL load failed: The specified module could not be found. <Mantid Source Directory>/external/src/ThirdParty/lib/qt5/lib is missing from the Path environment variable. 
The fix for these errors is to make sure you have started PyCharm through the <Mantid Build Directory>/pycharm.bat script. This sets up the PATH variable for the Python imports. Additionally, check that your PyCharm Run/Debug configuration does not overwrite the PATH variable. To check, go to Edit Configurations -> Environment Variables and click the folder icon on the right side. In the Name column there should not be a Path variable. If there is one, try deleting it and running your configuration again.

Follow these steps to narrow down the root of potential errors:

- Open a command prompt via command_prompt.bat. If the command_prompt.bat file is missing, the build has not been fully generated from CMake or is corrupted.
- Start python.
- Run the failing import in the interpreter, e.g. import qtpy or from PyQt5 import QtCore.
- If the import succeeds from the command prompt but fails in PyCharm, check the PATH configuration in PyCharm.
- If the import of PyQt4/PyQt5/qtpy also fails from the command prompt, then the external dependencies might not be downloaded or are corrupted.
http://developer.mantidproject.org/Workbench.html
I've implemented an automatic deleter for C-pointers. The code works in a test program, but when I use the code in Google Test, strange things happen. I can't understand why. Have I written undefined behaviour? Or does Google Test interfere somehow?

Without the ASSERT_THAT line (marked (*) in the code below), the output looks sane:

i1 = 0x8050cf0
i2 = 0x8050d00
got: 0x8050cf0
got: 0x8050d00
go delete: 0x8050cf0
go delete: 0x8050d00

With the assertion in place, the destructor is handed a corrupted pointer:

i1 = 0x8054cf0
i2 = 0x8054d00
got: 0x8054cf0
got: 0x8054d00
go delete: 0x8054c01

#include <iostream>
#include <gmock/gmock.h>

using namespace testing;

class Scope_Guard {
public:
    Scope_Guard(std::initializer_list<int*> vals)
        : vals_(vals)
    {
        for (auto ptr : vals_) {
            std::cerr << "got: " << ptr << std::endl;
        }
    }

    ~Scope_Guard()
    {
        for (auto ptr : vals_) {
            std::cerr << "go delete: " << ptr << std::endl;
            delete ptr;
        }
    }

    Scope_Guard(Scope_Guard const& rhs) = delete;
    Scope_Guard& operator=(Scope_Guard rhs) = delete;

private:
    std::initializer_list<int*> vals_;
};

TEST(Memory, GuardWorksInt) {
    int* i1 = new int(1);
    int* i2 = new int(2);
    std::cerr << "i1 = " << i1 << std::endl;
    std::cerr << "i2 = " << i2 << std::endl;
    Scope_Guard g{i1, i2};
    ASSERT_THAT(1, Eq(1)); // (*)
}

int main(int argc, char** argv) {
    InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}

It is undefined behavior: you are copying a std::initializer_list from the constructor argument into a class member. Copying a std::initializer_list does not copy its underlying elements. Therefore, after leaving the constructor, there is no guarantee that vals_ contains anything valid anymore. Use a std::vector for the member instead and construct it from the initializer list.

I am not sure about your intentions with this guard, but it would probably be easier to just use std::unique_ptr.
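The suggested fix can be sketched as follows: the only change from the question's class is the type of vals_, a std::vector that copies the pointer values out of the braced list before the list's backing array dies with the full-expression that created it:

```cpp
#include <cassert>
#include <cstddef>
#include <initializer_list>
#include <vector>

// Sketch of the answer's fix. std::vector's constructor copies the
// *elements* of the initializer_list (the pointer values), so the guard no
// longer refers to the temporary backing array of the braced list.
class ScopeGuard {
public:
    ScopeGuard(std::initializer_list<int*> vals) : vals_(vals) {}

    ~ScopeGuard() {
        for (int* p : vals_)
            delete p;
    }

    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;

    std::size_t size() const { return vals_.size(); }

private:
    std::vector<int*> vals_;  // owns its own copy of the pointers
};
```

The logging from the original class is omitted here to keep the sketch short; the ownership behaviour is unchanged.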
https://codedump.io/share/Tfa6wOwcm8IZ/1/raii-memory-corruption-in-google-test
Re: [jslint] Re: function() -> function ()

$0.02: First, "the good parts" advocates the following form, as it illustrates the true nature of functions:

1. var fnOnClick = function (){ ... };

Others advocate you should always name your functions for the benefit of stack traces. I think this is what Mr. Lorton's test code illustrates:

2. function fnOnClick (){ ... }

One can combine the two, but it's almost certainly bad practice because keeping two names in sync invites mismatch errors:

3. var fnOnClick = function fnOnClick (){ ... };

Method (1) works fine with stack traces in the most current versions of Firebug, so appears to be the best solution for my purposes. IIRC, that was not the case until recently.

Cheers,
Mike

________________________________
From: Michael Lorton <mlorton@...>
To: jslint_com@yahoogroups.com
Sent: Sun, May 2, 2010 3:07:45 PM
Subject: Re: [jslint] Re: function() -> function ()

Identical? Have you ever heard the expression "not always right, but never in doubt"? What should the following show?

var f = function () { };
function g() { }
alert(f.name == g.name);

By any ordinary sense of "identical", you'd think it would pop up "true", but lo, f.name is undefined but g.name is "g". The function pointed to by f is anonymous in the sense that it does not know its own name, although other functions may have a name for it (indeed, if any function does not appear somewhere in the namespace, it's unreferenced and so can never be called and will be cleaned up by the GC).

M.

________________________________
From: "Cheney, Edward A SSG RES USAR USARC" <austin.cheney@us.army.mil>
To: jslint_com@yahoogroups.com
Sent: Sun, May 2, 2010 2:56:23 PM
Subject: Re: [jslint] Re: function() -> function ()

Date: Saturday, May 1, 2010 23:53
Subject: [jslint] Re: function() -> function ()
To: jslint_com@yahoogroups.com

> --- In jslint_com@yahoogroups.com
https://groups.yahoo.com/neo/groups/jslint_com/conversations/topics/1299?source=1&var=1&l=1
In this program we will see how we can get a number that occurs an odd number of times in an array. There are many different approaches. One of the easiest is performing an XOR operation. If a number is XORed with itself, the result is 0. So if a number is XORed an even number of times, the result is 0; otherwise it is the number itself. This solution has one limitation: if more than one element has an odd number of occurrences, it returns the XOR of those elements, which need not equal any one of them.

begin
   res := 0
   for each element e from arr, do
      res := res XOR e
   done
   return res
end

#include <iostream>
using namespace std;

int getNumOccurredOdd(int arr[], int n) {
   int res = 0;
   for (int i = 0; i < n; i++)
      res = res ^ arr[i];
   return res;
}

int main() {
   int arr[] = {3, 4, 6, 5, 6, 3, 5, 4, 6, 3, 5, 5, 3};
   int n = sizeof(arr)/sizeof(arr[0]);
   cout << getNumOccurredOdd(arr, n) << " is present odd number of times";
}

6 is present odd number of times
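The multiple-odd-count caveat mentioned above is easy to demonstrate by extracting the XOR fold into a function. This is a sketch, not part of the original program:

```cpp
#include <cassert>

// XOR fold over the array. It assumes exactly one value occurs an odd
// number of times; with two such values it returns their XOR, which may
// equal neither of them.
int xorFold(const int* arr, int n) {
    int res = 0;
    for (int i = 0; i < n; i++)
        res ^= arr[i];
    return res;
}
```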
https://www.tutorialspoint.com/c-cplusplus-program-to-find-the-number-occurring-odd-number-of-times
DBIx::SearchBuilder::Record - Superclass for records loaded by SearchBuilder package MyRecord; use base qw/DBIx::SearchBuilder::Record/; sub _Init { my $self = shift; my $DBIxHandle = shift; # A DBIx::SearchBuilder::Handle::foo object for your database $self->_Handle($DBIxHandle); $self->Table("Users"); } # Tell Record what the primary keys are sub _PrimaryKeys { return ['id']; } # Preferred and most efficient way to specify fields attributes in a derived # class, used by the autoloader to construct Attrib and SetAttrib methods. # read: calling $Object->Foo will return the value of this record's Foo column # write: calling $Object->SetFoo with a single value will set Foo's value in # both the loaded object and the database sub _ClassAccessible { { Tofu => { 'read' => 1, 'write' => 1 }, Maz => { 'auto' => 1, }, Roo => { 'read' => 1, 'auto' => 1, 'public' => 1, }, }; } # A subroutine to check a user's password without returning the current value # For security purposes, we didn't expose the Password method above sub IsPassword { my $self = shift; my $try = shift; # note two __s in __Value. Subclasses may muck with _Value, but # they should never touch __Value if ( $try eq $self->__Value('Password') ) { return (1); } else { return (undef); } } # Override DBIx::SearchBuilder::Create to do some checking on create sub Create { my $self = shift; my %fields = ( UserId => undef, Password => 'default', #Set a default password @_ ); # Make sure a userid is specified unless ( $fields{'UserId'} ) { die "No userid specified."; } # Get DBIx::SearchBuilder::Record->Create to do the real work return ( $self->SUPER::Create( UserId => $fields{'UserId'}, Password => $fields{'Password'}, Created => time ) ); } DBIx::SearchBuilder::Record is designed to work with DBIx::SearchBuilder. DBIx::SearchBuilder::Record abstracts the agony of writing the common and generally simple SQL statements needed to serialize and De-serialize an object to the database. 
In a traditional system, you would define various methods on your object 'create', 'find', 'modify',, DBIx::SearchBuilder::Record. With::Record, you can in the simple case, remove all of that code and replace it by defining two methods and inheriting some code. Its pretty simple, and incredibly powerful. For more complex cases, you can, gasp, do more complicated things by overriding certain methods. Lets stick with the simple case for now. The two methods in question are '_Init' and '_ClassAccessible', all they really do are define some values and send you on your way. As you might have guessed the '_' suggests that these are private methods, they are. They will get called by your record objects constructor. Defines what table we are talking about, and set a variable to store the database handle. Defines what operations may be performed on various data selected from the database. For example you can define fields to be mutable, or immutable, there are a few other options but I don't understand what they do at this time. And really, thats it. So lets have some sample code. The example code below makes the following assumptions: id integer not NULL, primary_key(id), foo varchar(10), bar varchar(10) First, let's define our record class in a new module named "Simple.pm". 000: package Simple; 001: use DBIx::SearchBuilder::Record; 002: @ISA = (DBIx::SearchBuilder::Record); This should be pretty obvious, name the package, import ::Record and then define ourself as a subclass of ::Record. 003: 004: sub _Init { 005: my $this = shift; 006: my $handle = shift; 007: 008: $this->_Handle($handle); 009: $this->Table("Simple"); 010: 011: return ($this); 012: } Here we set our handle and table name, while its not obvious so far, we'll see later that $handle (line: 006) gets passed via ::Record::new when a new instance is created. Thats actually an important concept, the DB handle is not bound to a single object but rather, its shared across objects. 
    013:
    014: sub _ClassAccessible {
    015:   {
    016:     Foo => { 'read' => 1 },
    017:     Bar => { 'read' => 1, 'write' => 1 },
    018:     Id  => { 'read' => 1 }
    019:   };
    020: }

What's happening might be obvious, but just in case: this method is going to return a reference to a hash. That hash is where our columns are defined, as well as what types of operations are acceptable.

    021:
    022: 1;

Like all perl modules, this needs to end with a true value.

Now, on to the code that will actually *do* something with this object. This code would be placed in your Perl script.

    000: use DBIx::SearchBuilder::Handle;
    001: use Simple;

Use two packages: the first is where I get the DB handle from, the latter is the object I just created.

    002:
    003: my $handle = DBIx::SearchBuilder::Handle->new();
    004: $handle->Connect( 'Driver'   => 'Pg',
    005:                   'Database' => 'test',
    006:                   'Host'     => 'reason',
    007:                   'User'     => 'mhat',
    008:                   'Password' => '');

Creates a new DBIx::SearchBuilder::Handle, and then connects to the database using that handle. Pretty straightforward. The password '' is what I use when there is no password. I could probably leave it blank, but I find it to be more clear to define it.

    009:
    010: my $s = Simple->new($handle);
    011:
    012: $s->LoadById(1);

LoadById is one of four 'LoadBy' methods. As the name suggests, it searches for a row in the database that has id='1'. ::SearchBuilder has what I think is a bug, in that it currently requires there to be an id field. More reasonably, it also assumes that the id field is unique. LoadById($id) will do undefined things if there is >1 row with the same id.

In addition to LoadById, we also have:

LoadByCol: Takes two arguments, a column name and a value. Again, it will do undefined things if you use non-unique things.

LoadByCols: Takes a hash of columns=>values and returns the *first* to match. "First" is probably lossy across database vendors.

LoadFromHash: Populates this record with data from a DBIx::SearchBuilder.
I'm currently assuming that DBIx::SearchBuilder is what we use in cases where we expect > 1 record. More on this later.

Now that we have a populated object, we should do something with it! ::Record automagically generates accessors and mutators for us, so all we need to do is call the methods. Accessors are named <Field>(), and mutators are named Set<Field>($). On to the example, just appending this to the code from the last example.

    013:
    014: print "ID  : ", $s->Id(), "\n";
    015: print "Foo : ", $s->Foo(), "\n";
    016: print "Bar : ", $s->Bar(), "\n";

That's all you have to do to get the data. Now to change the data!

    017:
    018: $s->SetBar('NewBar');

Pretty simple! That's really all there is to it. Set<Field>($) returns a boolean and a string describing the problem. Let's look at an example of what will happen if we try to set 'Id', which we previously defined as read only.

    019: my ($res, $str) = $s->SetId('2');
    020: if (! $res) {
    021:   ## Print the error!
    022:   print "$str\n";
    023: }

The output will be:

    >> Immutable field

Currently, Set<Field> updates the data in the database as soon as you call it. In the future I hope to extend ::Record to better support transactional operations, such that updates will only happen when "you" say so.

Finally, adding and removing records from the database. ::Record provides a Create method which simply takes a hash of key=>value pairs. The keys map exactly to database fields.

    023: ## Get a new record object.
    024: $s1 = Simple->new($handle);
    025: $s1->Create('Id'  => 4,
    026:             'Foo' => 'Foooooo',
    027:             'Bar' => 'Barrrrr');

Poof! A new row in the database has been created! Now let's delete the object!

    028:
    029: $s1 = undef;
    030: $s1 = Simple->new($handle);
    031: $s1->LoadById(4);
    032: $s1->Delete();

And it's gone.

For simple use, that's more or less all there is to it. In the future, I hope to expand this HowTo to discuss using container classes, overloading, and whatever else I think of.

Each method has a lowercase alias; '_' is used to separate words.
For example, the method _PrimaryKeys has the alias _primary_keys.

Instantiate a new record object.

Returns this row's primary key.

Return a hash of the values of our primary keys for this function.

Private method. Returns undef unless KEY is accessible in MODE; otherwise returns MODE's value.

Return our primary keys. (Subclasses should override this, but our default is that we have one primary key, named 'id'.)

An older way to specify field attributes in a derived class. (The current preferred method is by overriding Schema; if you do this and don't override _ClassAccessible, the module will generate an appropriate _ClassAccessible based on your Schema.) Here's an example declaration:

    sub _ClassAccessible {
        {
            Tofu => { 'read' => 1, 'write' => 1 },
            Maz  => { 'auto' => 1 },
            Roo  => { 'read' => 1, 'auto' => 1, 'public' => 1 },
        };
    }

Returns an array of the attributes of this class defined as "read" => 1 in this class' _ClassAccessible data structure.

Returns an array of the attributes of this class defined as "write" => 1 in this class' _ClassAccessible data structure.

Takes a field name and returns that field's value. Subclasses should never override __Value.

_Value takes a single column name and returns that column's value for this row. Subclasses can override _Value to insert custom access control.

_Set takes a single column name and a single unquoted value. It updates both the in-memory value of this column and the in-database copy. Subclasses can override _Set to insert custom access control.

This routine massages an input value (VALUE) for FIELD into something that's going to be acceptable, and returns a replacement VALUE.

Validate that VALUE will be an acceptable value for FIELD. Currently, this routine does nothing whatsoever. If it succeeds (which is always the case right now), it returns true; otherwise it returns false.

Truncate a value that's about to be set so that it will fit inside the database's idea of how big the column is.
(Actually, it looks at SearchBuilder's concept of the database, not directly into the db.)

_Object takes a single column name and an array reference. It creates a new object instance of the class specified in the _ClassAccessible structure, and calls LoadById on the newly created object with the current column value as the argument. It uses the array reference as the object constructor's arguments. Subclasses can override _Object to insert custom access control or define default constructor arguments.

Note that if you are using a Schema with a REFERENCES field, this is unnecessary: the method to access the column's value will automatically turn it into the appropriate object.

Takes a single argument, $id. Calls LoadById to retrieve the row whose primary key is $id.

Takes two arguments, a column and a value. The column can be any table column which contains unique values. Behavior when using a non-unique value is undefined.

Takes a hash of columns and values. Loads the first record that matches all keys. The hash's keys are the columns to look at. The hash's values are either scalar values to look for, or hash references which contain 'operator' and 'value'.

Loads a record by its primary key. Your record class must define a single primary key column.

Like LoadById, with basic support for compound primary keys.

Takes a hashref, such as created by DBIx::SearchBuilder, and populates this record's loaded values hash.

Load a record as the result of an SQL statement.

Takes an array of key-value pairs and drops any keys that aren't known as columns for this record type.

Delete this record from the database. On failure, return a Class::ReturnValue with the error. On success, return 1.

Returns or sets the name of the current Table.

Returns or sets the current DBIx::SearchBuilder::Handle object.

Jesse Vincent, <jesse@fsck.com>
Enhancements by Ivan Kohler, <ivan-rt@420.am>
Docs by Matt Knopp <mhat@netlag.com>
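The _ClassAccessible read/write mechanism above generalizes beyond Perl. The following Python sketch models how per-field flags can drive generated accessors and mutators, including the "Immutable field" result shown earlier; the class and field names are illustrative and not part of DBIx::SearchBuilder.

```python
# A minimal Python model of _ClassAccessible-style generated accessors.
# Each field carries read/write flags; mutators return (ok, message)
# rather than raising, mirroring Set<Field>'s two return values.
class Record:
    accessible = {}  # field -> {'read': bool, 'write': bool}

    def __init__(self, **values):
        self._values = values

    def get(self, field):
        if not self.accessible.get(field, {}).get('read'):
            raise AttributeError(field + ' is not readable')
        return self._values.get(field)

    def set(self, field, value):
        if not self.accessible.get(field, {}).get('write'):
            return (False, 'Immutable field')
        self._values[field] = value
        return (True, 'Value changed')


class Simple(Record):
    accessible = {
        'Id':  {'read': True},
        'Foo': {'read': True},
        'Bar': {'read': True, 'write': True},
    }


s = Simple(Id=1, Foo='foo', Bar='bar')
print(s.get('Foo'))            # foo
print(s.set('Bar', 'NewBar'))  # (True, 'Value changed')
print(s.set('Id', 2))          # (False, 'Immutable field')
```

The real module generates named methods (Foo(), SetFoo()) via its autoloader; the dictionary lookup here stands in for that machinery.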
http://search.cpan.org/~ruz/DBIx-SearchBuilder-1.63_02/lib/DBIx/SearchBuilder/Record.pm
SHLOCK(1)                NetBSD General Commands Manual                SHLOCK(1)

NAME
     shlock -- create or verify a lock file for shell scripts

SYNOPSIS
     shlock [-du] [-p PID] -f lockfile

DESCRIPTION
     shlock creates a lock file containing the process ID (PID) of the
     program that holds the lock. If the lock file already exists and the
     process it names is still running, the lock is considered valid; if
     that process no longer exists, shlock will remove the lock file, and
     create a new one. shlock uses the link(2) system call to make the
     creation of the lock file atomic.

     The -d option causes shlock to be verbose about what it is doing.

     The -f argument with lockfile is always required.

     The -p option with PID is given when the program is to create a lock
     file; when absent, shlock will simply check for the validity of the
     lock file.

     The -u option causes shlock to read and write the PID as a binary
     pid_t, instead of as ASCII, to be compatible with the locks created
     by UUCP.

EXIT STATUS
     A zero exit code indicates a valid lock file.

EXAMPLES
   BOURNE SHELL
     #!/bin/sh
     lckfile=/tmp/foo.lock
     if shlock -f ${lckfile} -p $$
     then
             # do what required the lock
             rm ${lckfile}
     else
             echo Lock ${lckfile} already held by `cat ${lckfile}`
     fi

   C SHELL
     #!/bin/csh -f
     set lckfile=/tmp/foo.lock
     shlock -f ${lckfile} -p $$
     if ($status == 0) then
             # do what required the lock
             rm ${lckfile}
     else
             echo Lock ${lckfile} already held by `cat ${lckfile}`
     endif

     The examples assume that the file system where the lock file is to be
     created is writable by the user, and has space available.

SEE ALSO
     flock(1)

HISTORY
     shlock was written for the first Network News Transfer Protocol (NNTP)
     software distribution, released in March 1986. The algorithm was
     suggested by Peter Honeyman, from work he did on HoneyDanBer UUCP.

AUTHORS
     Erik E. Fair <fair@clock.org>

NetBSD 9.2                      November 2, 2012                      NetBSD 9.2
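The stale-lock algorithm shlock implements (write our PID to the lock file; treat an existing lock as stale when the recorded process no longer exists) can be sketched outside of shell. The Python below is a rough model, not a reimplementation of shlock: it uses O_EXCL for atomic creation where shlock uses link(2), and the file names are illustrative.

```python
# Sketch of shlock's locking scheme: a lock file holds the owner's PID,
# and a lock is stale when that process is gone.
import os

def pid_alive(pid):
    try:
        os.kill(pid, 0)  # signal 0: existence check only, nothing is sent
    except ProcessLookupError:
        return False
    except PermissionError:
        return True      # process exists but is owned by someone else
    return True

def acquire(lockfile, pid):
    while True:
        try:
            # O_EXCL gives atomic create-if-absent, similar in spirit to
            # shlock's use of link(2)
            fd = os.open(lockfile, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
        except FileExistsError:
            with open(lockfile) as f:
                try:
                    holder = int(f.read().strip() or 0)
                except ValueError:
                    holder = 0
            if holder and pid_alive(holder):
                return False        # valid lock held by a live process
            os.unlink(lockfile)     # stale lock: remove it and retry
            continue
        os.write(fd, str(pid).encode())
        os.close(fd)
        return True
```

A caller would use `acquire('/tmp/foo.lock', os.getpid())` just as the shell examples above call `shlock -f ${lckfile} -p $$`.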
https://man.netbsd.org/NetBSD-9.2/shlock.1
Part of speech is really useful in every aspect of Machine Learning, Text Analytics, and NLP. This article will help you with part of speech tagging using NLTK in Python. NLTK provides a good interface for POS tagging. So let's understand how.

Part of Speech Tagging using NLTK Python

Step 1 - This is a prerequisite step. In this step, we install the NLTK module in Python. Here is the code:

    pip install nltk  # install using the pip package manager

    import nltk
    nltk.download('averaged_perceptron_tagger')

The above lines will install the module and download the respective corpus, etc.

Step 2 - Here we start the real coding part. Let's import:

    from nltk import pos_tag

Step 3 - Let's take the string on which we want to perform POS tagging. We will also convert it into tokens. Let's check out the code:

    data = "Data Science Learner is an easy way to learn data science"
    data_token = data.split()

Step 4 - In this step we will convert the token list into POS tags. If we refer to the above lines of code, we have already obtained a data_token list by splitting the data string. Let's check out further:

    data_tokens_tag = pos_tag(data_token)
    print(data_tokens_tag)

Let's see the complete code and its output here. Here you can see we have extracted the POS tag for each token in the user string.

Notes - Well! If you look at the second line, nltk.download('averaged_perceptron_tagger'), here we have to define exactly which package we really need to download from the NLTK package. Usually what people do is download the complete NLTK corpus. This increases the space complexity as well as the time complexity unnecessarily.

Now a few words about NLP libraries. NLTK is one of the good options for text processing, but there are a few more like spaCy, Gensim, etc. Here is the complete article on the best Python NLP libraries; check it out.

Thanks
Data Science Learner Team
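Once pos_tag has returned its list of (token, tag) tuples, post-processing is plain Python. The sketch below starts from a hand-written tagging of the example sentence (the tags are illustrative; the real output of nltk.pos_tag may differ slightly), so it runs without downloading any NLTK model, and filters out the nouns.

```python
# Hypothetical pos_tag output for the example sentence; in practice this
# list would come from nltk.pos_tag(data_token).
tagged = [('Data', 'NNP'), ('Science', 'NNP'), ('Learner', 'NNP'),
          ('is', 'VBZ'), ('an', 'DT'), ('easy', 'JJ'), ('way', 'NN'),
          ('to', 'TO'), ('learn', 'VB'), ('data', 'NN'), ('science', 'NN')]

# Penn Treebank noun tags all start with 'NN' (NN, NNS, NNP, NNPS)
nouns = [tok for tok, tag in tagged if tag.startswith('NN')]
print(nouns)
# ['Data', 'Science', 'Learner', 'way', 'data', 'science']
```

This kind of tag filtering is a common next step after tagging, for example when extracting candidate keywords from a sentence.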
https://www.datasciencelearner.com/part-of-speech-tagging-using-nltk-python/
In part 4 of the Nifty .Net series we have the Enumerable.Any method. The Any method is part of the .NET 3.5 LINQ framework, so it's only available in .NET 3.5 and upwards. The Any method determines whether a sequence contains any elements. The method definition:

    public static bool Any<TSource>(this IEnumerable<TSource> source)

As you can see, the Any method is an extension method on IEnumerable<T>, so you can use the Any method on all classes that implement IEnumerable<T>, like List<T> and string[]. For example:

     1: using System;
     2: using System.Linq;
     3:
     4: public class MyClass
     5: {
     6:     public static void Main()
     7:     {
     8:         string[] foo = { "a", "b", "c" };
     9:
    10:         if(foo.Any())
    11:         {
    12:             Console.WriteLine("Contains any elements");
    13:         }
    14:
    15:         Console.ReadLine();
    16:     }
    17: }

In this code snippet we determine if the foo array has any elements. To use the Any method we have to add the System.Linq namespace as a "using" to our class.

One of the great advantages of the Any method is that it will stop enumerating the source as soon as the result can be determined. Beware: don't use the Count() method like in the following code snippet:

     9:
    10:         if(foo.Count() > 0)
    12:             Console.WriteLine("Contains any elements");

The above code snippet uses a lot more resources to determine if a sequence contains any elements, because it has to enumerate through all elements of the collection to get the count. The Any method, as I said earlier, stops as soon as it finds an element in the collection.

The Any method also has an overload:

    public static bool Any<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate)

With this overload you can check if there are any elements in the collection that match the predicate.
A code example will look like this:

    10:         if(foo.Any(s => s == "a"))

This code example will look for any element that matches "a", and again it will stop as soon as the result can be determined.

In some cases you can better use ".Length > 0"; this property has better performance compared to the Any method:

     2:
     3: public class MyClass
     4: {
     5:     public static void Main()
     6:     {
     7:         string[] foo = { "a", "b", "c" };
     8:
     9:         if(foo.Length > 0)
    10:         {
    11:             Console.WriteLine("Contains any elements");
    12:         }
    13:
    14:         Console.ReadLine();
    15:     }
    16: }

The snippet above is faster because the Length property is updated when an item is added to or deleted from the collection, so it doesn't have to enumerate through the list.

You can find more info and examples on the MSDN pages:
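The short-circuiting behavior described above is not specific to .NET; Python's built-in any() works the same way. The snippet below (an analogy, not .NET code) instruments a generator to show that enumeration stops as soon as an element matches the predicate.

```python
# Python's any() short-circuits like Enumerable.Any: it stops pulling
# from the iterator once the result is known.
seen = []

def items():
    for x in ['a', 'b', 'c']:
        seen.append(x)   # record which elements were actually enumerated
        yield x

print(any(s == 'a' for s in items()))  # True
print(seen)  # ['a'] -- 'b' and 'c' were never enumerated
```

Counting all elements first (the Count() > 0 anti-pattern) would have forced the generator to run to completion instead.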
http://weblogs.asp.net/erwingriekspoor/archive/2011/07.aspx
term.ui

A V module for designing terminal UI apps

    import term.ui as tui

    struct App {
    mut:
        tui &tui.Context = 0
    }

    fn event(e &tui.Event, x voidptr) {
        mut app := &App(x)
        println(e)
        if e.typ == .key_down && e.code == .escape {
            exit(0)
        }
    }

    fn frame(x voidptr) {
        mut app := &App(x)

        app.tui.clear()
        app.tui.set_bg_color(r: 63, g: 81, b: 181)
        app.tui.draw_rect(20, 6, 41, 10)
        app.tui.draw_text(24, 8, 'Hello from V!')
        app.tui.set_cursor_position(0, 0)

        app.tui.reset()
        app.tui.flush()
    }

    mut app := &App{}
    app.tui = tui.init(
        user_data: app
        event_fn: event
        frame_fn: frame
        hide_cursor: true
    )
    app.tui.run() ?

See the /examples/term.ui/ folder for more usage examples.

user_data voidptr - a pointer to any user_data, it will be passed as the last argument to each callback. Used for accessing your app context from the different callbacks.

init_fn fn(voidptr) - a callback that will be called after initialization and before the first event / frame. Useful for initializing any user data.

frame_fn fn(voidptr) - a callback that will be fired on each frame, at a rate of frame_rate frames per second.

event_fn fn(&Event, voidptr) - a callback that will be fired for every event received.

cleanup_fn fn(voidptr) - a callback that will be fired once, before the application exits.

fail_fn fn(string) - a callback that will be fired if a fatal error occurs during app initialization.

buffer_size int = 256 - the internal size of the read buffer. Increasing it may help in case you're missing events, but you probably shouldn't lower this value unless you make sure you're still receiving all events. In general, higher frame rates work better with lower buffer sizes, and vice versa.

frame_rate int = 30 - the number of times per second that the frame callback will be fired. 30fps is a nice balance between smoothness and performance, but you can increase or lower it as you wish.

hide_cursor bool - whether to hide the mouse cursor. Useful if you want to use your own.
capture_events bool - sets the terminal into raw mode, which makes it intercept some escape codes such as ctrl + c and ctrl + z. Useful if you want to use those key combinations in your app.

window_title string - sets the title of the terminal window. This may be changed later, by calling the set_window_title() method.

reset []int = [1, 2, 3, 4, 6, 7, 8, 9, 11, 13, 14, 15, 19] - a list of reset signals, to set up handlers to clean up the terminal state when they're received. You should not need to change this, unless you know what you're doing.

All of these fields may be omitted, in which case the default value will be used. In the case of the various callbacks, they will not be fired if a handler has not been specified.

Q: My terminal (doesn't receive events / doesn't print anything / prints gibberish characters), what's up with that?

A: Please check whether your terminal is supported. The module has been tested with xterm-based terminals on Linux (like gnome-terminal and konsole), and Terminal.app and iterm2 on macOS. If your terminal does not work, open an issue with the output of echo $TERM.

Q: There are screen tearing issues when doing large prints

A: This is an issue with how terminals render frames, as they may decide to do so in the middle of receiving a frame, and it cannot be fully fixed unless your console implements the synchronized updates spec. It can be reduced drastically, though, by using the rendering methods built in to the module, and by only painting frames when your app's content has actually changed.

Q: Why does the module only emit keydown events, and not keyup like sokol/gg?

A: It's because of the way terminals emit events. Every key event is received as a keypress, and there isn't a way of telling terminals to send keyboard events differently, nor a reliable way of converting these into keydown / keyup events.
    fn init(cfg Config) &Context

    enum Direction {
        unknown
        up
        down
        left
        right
    }

    enum EventType {
        unknown
        mouse_down
        mouse_up
        mouse_move
        mouse_drag
        mouse_scroll
        key_down
        resized
    }

    enum KeyCode {
        null = 0
        tab = 9
        enter = 10
        escape = 27
        space = 32
        backspace = 127
        exclamation = 33
        double_quote = 34
        hashtag = 35
        dollar = 36
        percent = 37
        ampersand = 38
        single_quote = 39
        left_paren = 40
        right_paren = 41
        asterisk = 42
        plus = 43
        comma = 44
        minus = 45
        period = 46
        slash = 47
        _0 = 48
        _1 = 49
        _2 = 50
        _3 = 51
        _4 = 52
        _5 = 53
        _6 = 54
        _7 = 55
        _8 = 56
        _9 = 57
        colon = 58
        semicolon = 59
        less_than = 60
        equal = 61
        greater_than = 62
        question_mark = 63
        at = 64
        left_square_bracket = 91
        backslash = 92
        right_square_bracket = 93
        caret = 94
        underscore = 95
        backtick = 96
        left_curly_bracket = 123
        vertical_bar = 124
        right_curly_bracket = 125
        tilde = 126
        insert = 260
        delete = 261
        up = 262
        down = 263
        right = 264
        left = 265
        page_up = 266
        page_down = 267
        home = 268
        end = 269
        f1 = 290
        f2 = 291
        f3 = 292
        f4 = 293
        f5 = 294
        f6 = 295
        f7 = 296
        f8 = 297
        f9 = 298
        f10 = 299
        f11 = 300
        f12 = 301
        f13 = 302
        f14 = 303
        f15 = 304
        f16 = 305
        f17 = 306
        f18 = 307
        f19 = 308
        f20 = 309
        f21 = 310
        f22 = 311
        f23 = 312
        f24 = 313
    }

    enum Modifiers {
        ctrl
        shift
        alt
    }

    enum MouseButton {
        unknown
        left
        middle
        right
    }

    struct Color {
    pub:
        r byte
        g byte
        b byte
    }

    fn (c Color) hex() string

    struct Config {
        user_data            voidptr
        init_fn              fn (voidptr)
        frame_fn             fn (voidptr)
        cleanup_fn           fn (voidptr)
        event_fn             fn (&Event, voidptr)
        fail_fn              fn (string)
        buffer_size          int = 256
        frame_rate           int = 30
        use_x11              bool
        window_title         string
        hide_cursor          bool
        capture_events       bool
        use_alternate_buffer bool = true
        skip_init_checks     bool
        reset []os.Signal = [.hup, .int, .quit, .ill, .abrt, .bus, .fpe, .kill,
            .segv, .pipe, .alrm, .term, .stop]
    }

    struct Context {
        ExtraContext
    pub:
        cfg Config
    mut:
        print_buf  []byte
        paused     bool
        enable_su  bool
        enable_rgb bool
    pub mut:
        frame_count   u64
        window_width  int
        window_height int
    }

fn (mut ctx Context) bold()
bold sets the character state to bold.

fn (mut ctx Context) clear()

fn (mut ctx Context) draw_dashed_line(x int, y int, x2 int, y2 int)
draw_dashed_line draws a dashed line segment, starting at point x, y, and ending at point x2, y2.

fn (mut ctx Context) draw_empty_dashed_rect(x int, y int, x2 int, y2 int)
draw_empty_dashed_rect draws a rectangle with dashed lines, starting at top left x, y, and ending at bottom right x2, y2.

fn (mut ctx Context) draw_empty_rect(x int, y int, x2 int, y2 int)
draw_empty_rect draws a rectangle with no fill, starting at top left x, y, and ending at bottom right x2, y2.

fn (mut ctx Context) draw_line(x int, y int, x2 int, y2 int)
draw_line draws a line segment, starting at point x, y, and ending at point x2, y2.

fn (mut ctx Context) draw_point(x int, y int)
draw_point draws a point at position x, y.

fn (mut ctx Context) draw_rect(x int, y int, x2 int, y2 int)
draw_rect draws a rectangle, starting at top left x, y, and ending at bottom right x2, y2.

fn (mut ctx Context) draw_text(x int, y int, s string)
draw_text draws the string s, starting from position x, y.

fn (mut ctx Context) flush()
flush displays the accumulated print buffer to the screen.

fn (mut ctx Context) hide_cursor()
hide_cursor will make the cursor invisible.

fn (mut ctx Context) horizontal_separator(y int)
horizontal_separator draws a horizontal separator, spanning the width of the screen.

fn (mut ctx Context) reset()
reset restores the state of all colors and text formats back to their default values.

fn (mut ctx Context) reset_bg_color()
reset_bg_color sets the current background color back to its default value.

fn (mut ctx Context) reset_color()
reset_color sets the current foreground color back to its default value.

fn (mut ctx Context) run() ?

fn (mut ctx Context) set_bg_color(c Color)
set_bg_color sets the current background color used by any succeeding draw_* calls.

fn (mut ctx Context) set_color(c Color)
set_color sets the current foreground color used by any succeeding draw_* calls.

fn (mut ctx Context) set_cursor_position(x int, y int)
set_cursor_position positions the cursor at the given coordinates x, y.

fn (mut ctx Context) set_window_title(s string)
set_window_title sets the string s as the window title.

fn (mut ctx Context) show_cursor()
show_cursor will make the cursor appear if it is not already visible.

fn (mut ctx Context) write(s string)
write puts the string s into the print buffer.

    struct Event {
    pub:
        typ       EventType
        x         int
        y         int
        button    MouseButton
        direction Direction
        code      KeyCode
        modifiers Modifiers
        ascii     byte
        utf8      string
        width     int
        height    int
    }
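The init/run flow documented here, a context holding user_data and firing frame_fn/event_fn callbacks, is a general pattern. The Python sketch below models the dispatch only (field names mirror Config; everything else, including the fake event feed and fixed frame count, is illustrative, with no real terminal I/O or timing).

```python
# Illustrative model of term.ui's callback dispatch: the context owns the
# user's data pointer and invokes the configured callbacks.
class Context:
    def __init__(self, user_data=None, frame_fn=None, event_fn=None,
                 frame_rate=30):
        self.user_data = user_data
        self.frame_fn = frame_fn
        self.event_fn = event_fn
        self.frame_rate = frame_rate
        self.frame_count = 0

    def feed(self, event):
        # In the real module events come from parsed terminal input
        if self.event_fn:
            self.event_fn(event, self.user_data)

    def run(self, frames):
        # The real loop sleeps ~1/frame_rate between iterations, forever
        for _ in range(frames):
            if self.frame_fn:
                self.frame_fn(self.user_data)
            self.frame_count += 1


app = {'events': []}
ctx = Context(user_data=app,
              frame_fn=lambda a: None,
              event_fn=lambda e, a: a['events'].append(e))
ctx.feed('key_down:escape')
ctx.run(3)
print(ctx.frame_count, app['events'])  # 3 ['key_down:escape']
```

Passing the app struct as user_data, rather than using globals, is what lets the plain-function callbacks reach application state, which is exactly the role of `&App(x)` in the V example at the top.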
https://modules.vlang.io/term.ui.html
Makes a file system available for use.

Library
    Standard C Library (libc.a)

Syntax
    #include <sys/vmount.h>

    int vmount (VMount, Size)
    struct vmount *VMount;
    int Size;

    int mount (Device, Path, Flags)
    char *Device;
    char *Path;
    int Flags;

Description

The vmount subroutine mounts a file system, thereby making the file system available for use. The vmount subroutine effectively creates what is known as a virtual file system. After a file system is mounted, references to the path name that is to be mounted over refer to the root directory on the mounted file system. A directory can only be mounted over a directory, and a file can only be mounted over a file. (The file or directory may be a symbolic link.) Therefore, the vmount subroutine can provide the following types of mounts:

A mount to a directory or a file can be issued if the calling process has root user authority or is in the system group and has write access to the mount point. To mount a block device, remote file, or remote directory, the calling process must also have root user authority.

The mount subroutine only allows mounts of a block device over a local directory with the default file system type. The mount subroutine searches the /etc/filesystems file to find a corresponding stanza for the desired file system.

Note: The mount subroutine interface is provided only for compatibility with previous releases of the operating system. The use of the mount subroutine is strongly discouraged for normal application programs.

If the directory you are trying to mount over has the sticky bit set to on, you must either own that directory or be the root user for the mount to succeed. This restriction applies only to directory-over-directory mounts.

Return Values

Upon successful completion, a value of 0 is returned. Otherwise, a value of -1 is returned, and the errno global variable is set to indicate the error.
Error Codes

The mount and vmount subroutines fail and the virtual file system is not created if any of the following is true:

The mount and vmount subroutines can also fail if additional errors occur.

Implementation Specifics

These subroutines are part of Base Operating System (BOS) Runtime.

Related Information

The mntctl subroutine, umount subroutine.

The mount command, umount command.

Files, Directories, and File Systems for Programmers in AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs.

Understanding Mount Helpers in AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs explains and examines the execution syntax of mount helpers.
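The mount-point rule stated above (a directory may only be mounted over a directory, and a file over a file) and the 0 / -1-with-errno return convention can be made concrete with a small model. The Python below is a pure illustration of the documented checks, not a wrapper around the real AIX syscall, and the choice of error code is an assumption.

```python
# Pure-Python model of vmount's documented mount-point check and its
# 0 / -1 return convention. Not the real syscall.
import errno

def check_mount(object_is_dir, mount_point_is_dir):
    """Return (0, 0) on success, or (-1, code) mirroring the
    subroutine's return value plus errno. ENOTDIR is an illustrative
    choice of error code for a type mismatch."""
    if object_is_dir != mount_point_is_dir:
        return (-1, errno.ENOTDIR)
    return (0, 0)

print(check_mount(True, True))   # (0, 0): directory over directory is allowed
print(check_mount(True, False))  # (-1, ...): directory over file is rejected
```

A real caller would additionally need the permission checks described above (write access to the mount point, or root authority for devices and remote mounts).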
https://sites.ualberta.ca/dept/chemeng/AIX-43/share/man/info/C/a_doc_lib/libs/basetrf2/vmount.htm
On Monday 20 September 2004 1:01 pm, Keshavamurthy Anil S wrote:
> On Mon, Sep 20, 2004 at 01:26:44PM -0500, Dmitry Torokhov wrote:
> > Also, introducing recursion (depth does not seem to be limited here) is
> > not a good idea IMHO - better convert it into iteration to avoid stack
> > problems down the road.
> Humm, I guess recursion should be fine, and even though the code does not have
> an explicit limit, the ACPI namespace describing the ejectable device will limit the
> number of recursible devices. And I believe this won't be more than 3 to 4 levels deep.
> Hence recursion is fine here.
>
> If you still strongly believe that recursion is not the right choice here,
> let me know and I will convert it to iteration.

I'm also in favor of removing the recursion, if only because it allows local analysis. I.e., a correctness argument based solely on the code in the patch is much more useful than one that relies on properties of an external and mostly unknown ACPI namespace.
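The recursion-to-iteration conversion being discussed is mechanical for tree walks: replace the implicit call stack with an explicit stack, which bounds memory predictably and keeps the correctness argument local to the function. A generic sketch (the tree shape is illustrative, not ACPI namespace code):

```python
# Generic recursion-to-iteration rewrite for a tree walk: the explicit
# stack replaces the call stack, so depth no longer consumes C stack.
def walk_iterative(root, children, visit):
    stack = [root]
    while stack:
        node = stack.pop()
        visit(node)
        # push children in reverse so they are visited left-to-right,
        # matching the order a recursive walk would produce
        stack.extend(reversed(children(node)))

tree = {'a': ['b', 'c'], 'b': ['d'], 'c': [], 'd': []}
order = []
walk_iterative('a', lambda n: tree[n], order.append)
print(order)  # ['a', 'b', 'd', 'c']
```

The "local analysis" point in the reply is visible here: the loop's memory use is bounded by the size of the stack list, which can be checked from this function alone, without knowing how deep the external tree happens to be.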
http://www.gelato.unsw.edu.au/archives/linux-ia64/0409/11140.html