Azure Configuration Options

This article contains now-obsolete information. Please refer to my updated article that contains details on what to do when using the non-CTP version of Windows Azure.

I'm happy to report that moving forward, my Azure development VPC image will be using Windows 7! While this won't really mean any difference to those of you reading this blog, it does make me happy. The new VPC image performs faster than the Vista one and as yet I'm not having any real issues with it (aside from having to figure out where stuff's at). Between getting my new VPC up and running and some distracting thoughts regarding how to use the cloud to extend legacy on-premise applications (long story), I have managed to get this next article up. Today I want to briefly discuss configuration options in Windows Azure.

Anyone who's been doing .NET development is likely used to storing configuration settings in .config files. With them you can make a quick tweak to your application by just firing up your favorite text editor and making the change. In an on-premise application this worked well because you usually had the ability to log into the hosting server and make the change. The downside was that if the application was hosted on multiple servers, you had to log into each server and make the same change, all the while hoping really hard that you didn't mess one up or miss an instance.

Now let's move this scenario into the cloud. We have multiple instances of our application all running in the cloud, and we don't have remote access to the servers to make the changes ourselves. So now we have to change our application, rebuild it, publish it to the cloud, deploy it to staging, review it, and then deploy it to production. All this for something as simple as changing a minor configuration option in a text file. The upside is that with one deployment, Azure handles updating all instances of the application for us. But have no fear, there is another option.
If you recall, back in Part 1 of my series on Azure Queues, I talked about adding the Microsoft.ServiceHosting.ServiceRuntime namespace to my class library. At the time I stated I wasn't sure we needed it, and in hindsight we really didn't (that app would have run as a traditional ASP.NET application). Until now. You also hopefully recall our discussion about the ServiceConfiguration and ServiceDefinition files from my Hands on with the Azure Web Role post. Together, these two items are the solution to our problem. We'll put our settings into the ServiceConfiguration file, then access them with the RoleManager class from the Microsoft.ServiceHosting.ServiceRuntime namespace. Here we go…

Service Configuration and Definition files

A few weeks back, Ryan Dunn, author and technical evangelist for Microsoft, updated his blog with a post on why Windows Azure uses these files. In a nutshell, these files were created to allow for configuration options that exist outside of the service deployment packages themselves. If you've read the MSDN article on Understanding a Hosted Service Architecture, you already know that the Service Definition file is bundled with our deployment package. It tells the Azure Fabric what types of roles are in the package as well as what configuration options exist in the Service Configuration file. The definition will be used by the RoleManager class to allow our cloud services to access those configuration properties.

These two files, configuration and definition, work hand in hand. So we'll need to update our service definition file with our custom properties and then update the configuration with the actual values. I'm going to use my previous QueueDemo project for this, so let's start by altering that solution's service definition file. In the image at the right, I've highlighted our changes to my service definition file.
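For anyone who can't see the screenshot, here's roughly what those highlighted changes look like. This is a sketch of the definition file assuming the CTP-era schema; the namespace URI and role name are my best guess, so check them against your own generated file:

```xml
<!-- ServiceDefinition.csdef (sketch; schema namespace and role name assumed) -->
<ServiceDefinition name="QueueDemo"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole">
    <!-- Declare the settings; the values live in the ServiceConfiguration file -->
    <ConfigurationSettings>
      <Setting name="AS_URI" />
      <Setting name="AS_AccountName" />
      <Setting name="AS_SharedKey" />
    </ConfigurationSettings>
  </WebRole>
</ServiceDefinition>
```

The matching ServiceConfiguration file then carries a `<Setting name="AS_URI" value="..." />` entry for each declared setting.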
Inside of the web role node, I've added a new ConfigurationSettings node which details the three settings we'll be adding for this solution: AS_URI, AS_AccountName, AS_SharedKey. You can see we can create separate configuration settings for both the Web Role and (if it was present) the Worker Role. Unfortunately, at this time there's no way to create a single configuration setting that is shared by both roles. Personally, I'm hoping I've either overlooked something or that this will be added in the future. If someone knows different, please let me know. Update: I heard back from the Azure team that this is indeed the case currently. No word on whether it will change in the future or not.

With this done, we now have to switch to the service configuration file and set the values for our new keys. I've put in false values, so you'll want to insert your own Azure Storage values into these settings. The local development storage credentials will work just fine if you want to use the QueueDemo application as I left it at the end of that series.

Accessing Service Configuration

Creating the service configuration is only half the equation. We now need to use the RoleManager to access these settings. Among other things, the RoleManager class allows us to write to the service log, access local storage (not to be confused with the Azure Storage services), and access our configuration settings. GetConfigurationSetting is a static method of the RoleManager class. We give this method the name of the setting we want and it returns the value (if it exists). If the requested value does not exist, a RoleException will be thrown. Here's my example:

public AzureQueue() : base()
{
    myURI = RoleManager.GetConfigurationSetting("AS_URI");
}

So what does this buy us? We've got a configuration file, and we have accessed it from our application.
But the big question now is: how does this help us solve our initial problem of updating a simple entry without having to redeploy our application? The picture to the left is a portion of the Windows Azure cloud service management page. See that "Configure…" button centered below the glowy cube? That's our solution. Once our cloud service (web or worker, makes no difference) is deployed, we can click that button to directly access the configuration file and make changes. The Azure Fabric then pushes those changes out to all the instances so the changes can take effect.

Hopefully, once the Windows Azure management API is announced, we'll see the ability to update this via a service. Combine that with various monitors to see how the instances of our services are running and we have a really spiffy way to remotely manage our web applications. I look forward to revisiting this subject when/if that is possible.

Like sands through the hourglass

That's all I have for now. I'm still messing with my QueueDemo sample project to get it working with hosted storage (darn these useless "bad request" exceptions). After that, I plan another 'hands on' style session with worker roles. Once I get all that out of the way, I'll start messing with .NET Services. With legacy application integration being at the forefront of my mind lately, I'm especially looking forward to working with the service bus. Until then, if there's something you'd like to see me mess with, don't hesitate to drop me a line at brent@stinemans.org. Till next time!

"Hopefully, once the Windows Azure management API is announced, we'll see the ability to update this via a service. Combine that with various monitors to see how the instances of our services are running and we have a really spiffy way to remotely manage our web applications."

Agreed, and hopefully these services will allow us to access the analytics for our application(s) as well.
I'm sure there'll be a big sideline in consuming Azure analytics data and visualizing it. - Jamie Thomson
https://brentdacodemonkey.wordpress.com/2009/05/12/azure-configuration-options/
Log message: tmux: Update misc/tmux to 2.6

Changes: CHANGES FROM 2.5 TO 2.6, 05 October 2017

* Add select-pane -T to set pane title.
* Fix memory leak when lines with BCE are removed from history.
* Fix (again) the "prefer unattached" behaviour of attach-session.
* Reorder how keys are checked to allow keys to be specified that have a leading escape. GitHub issue 1048.
* Support REP escape sequence (\033[b).
* Run alert hooks based on options rather than always, and allow further bells even if there is an existing bell.
* Add -d flag to display-panes to override display-panes-time.
* Add selection_present format when in copy mode (allows key bindings that do something different if there is a selection).
* Add pane_at_left, pane_at_right, pane_at_top and pane_at_bottom formats.
* Make bell, activity and silence alerting more consistent by: removing the bell-on-alert option; adding activity-action and silence-action options with the same possible values as the existing bell-action; adding a "both" value for the visual-bell, visual-activity and visual-silence options to trigger both a bell and a message.
* Add a pane_pipe format to show if pipe-pane is active.
* Block signals between forking and resetting signal handlers so that the libevent signal handler doesn't get called in the child and incorrectly write into the signal pipe that it still shares with the parent. GitHub issue 1001.
* Allow punctuation in pane_current_command.
* Add -c for respawn-pane and respawn-window.
* Wait for any remaining data to flush when a pane is closed while pipe-pane is in use.
* Fix working out current client with no target. GitHub issue 995.
* Try to fallback to C.UTF-8 as well as en_US.UTF-8 when looking for a UTF-8 locale.
* Add user-keys option for user-defined key escape sequences (mapped to User0 to User999 keys).
* Add pane-set-clipboard hook.
* FAQ file has moved out of repository to online.
* Fix problem with high CPU usage when a client dies unexpectedly. GitHub issue 941.
* Do a dance on OS X 10.10 and above to return tmux to the user namespace, allowing access to the clipboard.
* Do not allow escape sequences which expect a specific terminator (APC, DSC, OSC) to wait for forever - use a small timeout. This reduces the chance of the pane locking up completely when sent garbage (cat /dev/random or similar).
* Support SIGUSR2 to toggle logging on a running server, also generate the "out" log file with -vv not -vvvv.
* Make set-clipboard a three state option: on (tmux both sends to outside terminal and accepts from applications inside); external (tmux sends outside but does not accept inside); and off.
* Fix OSC 4 palette setting for bright foreground colours. GitHub issue 954.
* Use setrgbf and setrgbb terminfo(5) capabilities to set RGB colours, if they are available. (Tc is still supported as well.)
* Fix redrawing panes when they are resized several times but end up with the size unchanged (for example, splitw/resizep -Z/breakp).
* Major rewrite of choose mode. Now includes preview, sorting, searching and tagging; commands that can be executed directly from the mode (for example, to delete one or more buffers); and filtering in tree mode.
* choose-window and choose-session are now aliases of choose-tree (in the command-alias option).
* Support OSC 10 and OSC 11 to set foreground and background colours.
* Check the U8 capability to determine whether to use UTF-8 line drawing characters for ACS.
* Some missing notifications for layout changes.
* Control mode clients now do not affect session sizes until they issue refresh-client -C. new-session -x and -y works with control clients even if the session is not detached.
* All new sessions that are unattached (whether with -d or started with no terminal) are now created with size 80 x 24. Whether the status line is on or off does not affect the size of new sessions until they are attached.
* Expand formats in option names and add -F flag to expand them in option values.
* Remember the search string for a pane even if copy mode is exited and entered again.
* Some further BCE fixes (scroll up, reverse index).
* Improvements to how terminals are cleared (entirely or partially).

Log message: Update misc/tmux to 2.5. CHANGES FROM 2.4 to 2.5, 09 May 2017.

Log message: It is no longer needed to add -D_OPENBSD_SOURCE to CPPFLAGS for reallocarray(3) in NetBSD, fixed in tmux-2.4 by upstream (NFC).

Log message: Update misc/tmux to 2.4.

Log message: Whoops, next tmux version will be 2.4! Thanks a lot to <wiz> for spotting my mistake.

Log message: NetBSD needs -D_OPENBSD_SOURCE for reallocarray(3). Pass it via CPPFLAGS. PKGREVISION++

Log message: Use curses instead of terminfo, pure terminfo seems to have been dropped in configure in 2.3. Bump PKGREVISION for safety.

Log message: Update misc/tmux to 2.3.

Incompatible Changes
====================
None.

Normal Changes
==============
- New option 'pane-border-status' to add text in the pane borders.
- Support for hooks on commands: 'after' and 'before' hooks.
- 'source-file' understands '-q' to suppress errors.
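Several of the 2.6 items above surface as options you can set directly. A minimal ~/.tmux.conf sketch exercising a few of them; the bindings and values here are illustrative only, not taken from the changelog:

```
# 2.6: alert behaviour is driven by options rather than bell-on-alert
set -g activity-action other
set -g visual-activity both

# 2.6: set-clipboard is now a three-state server option (on / external / off)
set -s set-clipboard external

# 2.6: -d on display-panes overrides display-panes-time for one invocation
bind-key q display-panes -d 2000

# 2.6: select-pane -T sets a pane title
bind-key T select-pane -T "main"
```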
http://pkgsrc.se/misc/tmux
This is probably one of the most ambitious articles I will ever write for CodeProject. In this article, I want to cover how I have approached starting to build a wrapper for the CodeProject API. You may be asking yourself why you'd want to use yet another wrapper when you could interface directly with the API yourself, but I hope that my aims convince you to read further.

A while back, I was speaking to Chris Maunder (well, pestering him for an API) and didn't he go and crack the whip until the team gave us an API. Now, there have been a number of things I've been working on that have kept me away from playing with the API as much as I would like, but it kept niggling at me that I hadn't got around to working with it, yet it was something I couldn't wait to try out. So, I got a little bit of free time set aside and decided that it was time to bite the bullet.

I didn't want to produce just another wrapper when there are some excellent examples on here. Rather, I wanted to produce the wrapper that most closely resembles the way I approach development nowadays, and produce an article that details the way I approach this type of development. To that end, I decided that I needed to set down some objectives to guide me, which would serve as the basis of my "acceptance criteria". By the way, if you don't have acceptance criteria for your projects, you should start putting them in place; how can you know you've successfully achieved what you set out to do if you don't have something written down to mark that against? So, here are my objectives for this iteration.

The wrapper has to be Portable Class Library compliant. As long as I adhere to strict PCL, I should be able to produce a wrapper that can be used in lots of environments. The first cut of the wrapper targets .NET 4.5, Windows 8, Windows Phone 8 and Windows Phone Silverlight 8.
I will later be targeting Xamarin for this wrapper, so retargeting will take place that makes it easy for us to use on Android and iOS as well.

The API has to follow what I consider to be good design principles as much as possible. Now, this is a highly subjective thing but, where possible, I will be aiming to use SOLID principles extensively. While the development will largely follow TDD, there will be times that I stray away from this in order to use some feature that is not easily testable, such as the HttpClient. If I follow Single Responsibility as much as possible, I will be minimising the "touchpoints", so to speak. Ultimately, I am going to be producing something for others to use, so I have to be as pragmatic as possible; this means that there will be some design compromises but I will note them down here.

This will be an iterative build. I am limiting the features that I produce for the first version to simply getting an access token; the token will be vital later on when we're using the API. This may seem to be a trivial thing but, as there are a large number of design decisions that are going to go into the initial design, this is a good basis for getting something "out of the door" as a proving ground on a first pass. I would caution against relying too heavily on this version because it's highly likely that I will be refactoring things quite heavily in future builds, but this is a good starting point.

I want to use IoC, but I don't want to force you to have to use my chosen IoC. Okay, that's a strange statement, but as I would like to get this adopted by as many people as possible, I am designing it so that you can use whatever mechanism suits you. Ultimately, if you want to "new" everything up yourself, that's entirely your choice, but I like IoC. Saying that, I don't expect that you would have to register everything yourself if you want to go IoC.
The code provides an abstraction that we can use to hide the fact that we are using IoC, letting us drop in the IoC container we prefer. I'll be providing an Autofac implementation, and this will demonstrate how we can wrap things up in an appropriate IoC container.

Unless I have a very good reason, the code is going to follow the Single Responsibility Principle as much as possible. This means that there are going to be lots of small classes that hook things together. I'm a big fan of SRP.

I'm not going to write things I don't need to write. This means that I'm going to be using packages that other people have written for things like converting JSON, unless I find that I need something that doesn't exist in PCL form.

Apart from sample projects and tests, all the libraries will be Portable Class Libraries. This just goes back to the fact that I want this to be available as widely as possible.

Inputs aren't to be trusted. If a method or constructor accepts a type, at the very least we should test to make sure it's set. There will be some validity checking introduced, but the first stage is to make sure that every public constructor or method doesn't trust its inputs. This is easily verifiable through tests.

Initially, I will only be providing unit-level tests. I'm not going to be performing System-Under-Test (SUT) tests, so inputs will be mocked as much as possible.

At this stage, I'm not worried too much about exception handling or logging. The code, for this iteration, is going to naively assume that everything is okay when it calls out to external sources. In the next iteration, I'll be looking to put more formal handling in to cater for the so-called "unhappy day" cases. This means that there will be no custom exceptions being raised at this stage; these will be introduced in later stages.

Okay, those should be enough objectives to convince you that I have a definite end-goal in mind, so let's see what we're going to need to allow us to build this version.
Visual Studio 2013 Update 4

Telerik JustMock Lite. This is an open source mocking framework that I have a definite fondness for. For my company, I use the full version of JustMock because it does things like allow me to fully mock things like HttpClient, so I can arrange on non-virtual methods as well as virtual ones. This feature isn't available in the open source version, but it still does an awful lot, and it works with PCL.

Autofac. This is an extremely straightforward but powerful IoC container. I cannot recommend it highly enough.

Json.NET. At the time of writing, this was the only JSON converter I could find that worked as a PCL library, so I chose this implementation.

Microsoft HTTP Client Libraries. By default, HttpClient is not provided as a PCL item. If we want to use it, we will end up downloading the PCL implementation.

Note that I used NuGet to get the last three prerequisites.

Readers of my previous articles will probably be aware that I like to use a more inclusive "we" where I'm demonstrating how to write a piece of code. Over the years, I've found that this helps to make for a warmer article. As this article is going to deal heavily with the process I was following, and the decisions I was taking, unfortunately it's not going to use this format as much. Don't despair though, as we will be seeing how to write our own code to integrate this library later on in the article and, as that will be more interactive, I'm hoping you'll join me in writing the code at that point.

Right from the start, I decided that I was going to break the API into two separate DLLs. The first DLL consists purely of interfaces. Effectively, this DLL lays out the contract that I'm going to follow with this development. This is a purely personal choice, but is based on the fact that I frequently write systems where components are distributed around and I use MEF to compose things together automatically.
By having a central interfaces library, the process effectively becomes a no-brainer for me.

The Http namespace is where I'm placing the functionality that is directly related to actually interacting with HTTP. This class is, cunningly enough, responsible for building up the HttpClient.

public HttpClientBuilder(HttpClient client, IClientSettings clientSettings)
{
    if (client == null) throw new ArgumentNullException("client");
    if (clientSettings == null) throw new ArgumentNullException("clientSettings");

    client.BaseAddress = new Uri(clientSettings.BaseUrl);
    client.DefaultRequestHeaders.Accept.Clear();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    this.client = client;
}

If we look at the definition of the constructor, we see that we are expected to pass the HttpClient into this class. On seeing this, our first reaction is probably that this looks wrong and that if we are creating the HttpClient externally, we should be populating it there. The important thing to remember here is that I am favouring IoC, so we aren't going to be creating the HttpClient ourselves; we are merely going to register HttpClient and let the IoC container take care of giving us an instance. This means, of course, that we have to have some mechanism for defining how we want HttpClient set up, which is why I have created this small class to set it up for us. The key things that we are setting up in this instance of HttpClient are the BaseAddress and the DefaultRequestHeaders, which we are setting to accept JSON.

As we can see, our classes don't have to be terribly complicated when we build small self-contained units. What is interesting is that, even though I know I'm going to need to use different addresses in the future, I am keeping it as simple as possible right now.
I'm not going to try and second-guess future implementations here and I'm not going to build architecture for code that I don't need in this iteration. While YAGNI isn't strictly a SOLID concept, it's an important one that has all sorts of implications about the way we write code – it's all too easy to write code that goes above and beyond what we need, just because we think that it might be handy at some point in the future. BaseAddress DefaultRequestHeaders This is the class that is going to be responsible for actually issuing the POST request to the server, and converting the response to JSON. It's not going to do any translation of the JSON object, that responsibility lies elsewhere. There's one method exposed by this class; PostRequestAsync. I am following the convention of having async tasks append Async to the operation name, this is a great way to let people know that your code is suitable for use with async/await. PostRequestAsync async await public async Task<object> PostRequestAsync(FormUrlEncodedContent request) { if (request == null) throw new ArgumentNullException("request"); HttpClient client = clientBuilder.GetClient(); HttpResponseMessage message = await client.PostAsync("Token", request); string jsonData = await message.Content.ReadAsStringAsync(); var output = JsonConvert.DeserializeObject(jsonData); return output; } The first thing I do in here is ensure that we are passing in a request object. You will note that I'm not stating it's a valid request object, just that it's a request object. This is in keeping with my initial objectives that I don't trust inputs, even if it's me that's written the code calling this method. As the development progresses, inputs like this will be surrounded by more rigorous validation – I will be demonstrating some of this checking when we look at ClientSettings and UserDetails. ClientSettings UserDetails The next line shows us retrieving the HttpClient from HttpClientBuilder. 
IHttpClientBuilder has been injected into this class just to provide this capability.

Now, we want to send an asynchronous POST request to retrieve our token. This is where we are starting to step outside of SRP: is this method doing too much? The answer to that, of course, is that it is doing far too much, but I have decided to leave it as it is for the moment. I know that, at some point soon, I'm going to want to split this out because there is going to be more to request than just the access token but, right now, this is sufficient for my needs. Refactoring will occur at some point, it just doesn't have to be right now.

Finally, we want to read the response and deserialize it from JSON. This deserialization is the reason I haven't started to split this class up. Right now, I have elected to just retrieve a single property (this is done elsewhere), rather than hydrating the full model. Once I start to put richer models in place, it will be easier to see what shape I want this particular class to be.

Incidentally, the testing of this particular method has less rigour than I would like. I have made a conscious decision to limit the set of tests here because I know that I want to mock out the HttpClient requests, and I have not put the full infrastructure in place for this particular implementation. I'm holding off on that one because I want to demonstrate, in the next article, how making a change is isolated when we use SRP.

At the moment, this namespace consists of a single class. This class provides the ability to get a value out of a single token. The eagle-eyed reader (well, the ones who've browsed the code already) will have noticed that there is a seeming mismatch between the number of interfaces declared in CodeProject.Api.Interfaces.Http and the number of classes in CodeProject.Api.Http.
There's one more interface than there is class in the relevant namespaces; the "missing class" is implemented here because it's the one that I'm going to use to build the post data for the access token. CodeProject.Api.Interfaces.Http CodeProject.Api.Http public FormUrlEncodedContent Build() { clientSettings = validateClientSettings.Validate(clientSettings); userDetails = validateUserDetails.Validate(userDetails); List<KeyValuePair<string, string>> postData = new List<KeyValuePair<string, string>>(); Add(postData, "grant_type", "password"); Add(postData, "client_id", clientSettings.ClientId); Add(postData, "client_secret", clientSettings.Secret); Add(postData, "username", userDetails.Email); Add(postData, "password", userDetails.Password); return new FormUrlEncodedContent(postData); } When we call Build, we expect to receive an encoded access token list of items. What's most interesting here is that I've started to introduce some other input validation to ensure that we have the values that we need to actually issue the request. You might wonder why I'm validating at this point, rather than in the constructor – after all, I've talked about not trusting input but there doesn't appear to be any input parameters here. Simply put, input relates to any state that a method relies on and we validate here because this is the point we actually need to check the state. If we were to validate in the constructor, we would need to have populated the user details and client settings before. While that would be okay for client settings (we'll see why when we look at that class), we expect the user details to be more dynamic. Typically, we might want the user to enter the details at run time, long after the object graph has been constructed. Build This class is the model that we're going to use to hold the access token. 
It's not going to be responsible for retrieving the token, that's the responsibility of FetchAccessToken, but it will be the entry point for our code to retrieve the token from the web server, and store it for easy access wherever it is needed. FetchAccessToken This class uses the AuthenticationTokenPostData class to build the encoded content and then uses ClientRequest to retrieve the access token, which is then decoded using the JsonConverter. AuthenticationTokenPostData ClientRequest JsonConverter public async Task<string> GetTokenAsync() { FormUrlEncodedContent encodedContent = encodedPostData.Build(); var ret = await clientRequest.PostRequestAsync(encodedContent) as JObject; return jsonConverter.GetValueFromToken<string>(ret, "access_token"); } This class provides the client id, secret and base url that we'll use to access the API online. These values are set up for us when we register an application with CodeProject. In the sample code, I have provided values for us to use against one of my registrations. I would urge you to create your own application, rather than assuming that I'm going to leave this one available permanently. public class ClientSettings : IClientSettings { public string ClientId { get { return "F_FV6JInrKlX35cgj8o_gmzKb17mArv-"; } } public string Secret { get { return "J8SRZ10DEWy7uCaWdYNnh5VhUV1ObOTWM73DCxTb9JP9Pa_qbfg_8A4YgtD0etst"; } } public string BaseUrl { get { return "<a href=""></a>"; } } } This class is the user details model that provides the email address and password for the user when they want to access the site to retrieve details about their account. While my every coding instinct tells me that I should be implementing INotifyPropertyChanged to raise property change notifications, I don't actually have a need to do that just yet, so that doesn't get added. 
public class UserDetails : IUserDetails
{
    public string Email { get; set; }
    public string Password { get; set; }
}

The final class I want to talk about in the CodeProject.Api assembly implements an interface that isn't defined in the core CodeProject.Api.Interfaces assembly. ModuleRegistration implements the IModuleRegistration interface, which is actually defined in CodeProject.Api.Ioc.Builder. The reason this class is implemented here is that it is responsible for registering all of the relevant interface-to-type mappings so that client applications don't have to. What I have aimed to do here is keep the registration completely agnostic of a particular IoC technology. Initially, all of the registrations are marked to register as single instances because that is all I need for this particular iteration, but this will be changed so that things like the validation are multiple instance.
public class ModuleRegistration : IModuleRegistration
{
    public void Register<T>(IRegistration<T> registration) where T : class
    {
        if (registration == null) throw new ArgumentNullException("registration");

        registration.RegisterAsSingleInstance<IFetchAccessToken, FetchAccessToken>();
        registration.RegisterAsSingleInstance<IAccessToken, AccessToken>();
        registration.RegisterAsSingleInstance<IUserDetails, UserDetails>();
        registration.RegisterAsSingleInstance<IValidate<IClientSettings>, ValidateClientSettings>();
        registration.RegisterAsSingleInstance<IValidate<IUserDetails>, ValidateUserDetails>();
        registration.RegisterAsSingleInstance<IEncodedPostData, AuthenticationTokenPostData>();
        registration.RegisterAsSingleInstance<IClientSettings, ClientSettings>();
        registration.RegisterTypeAsSingleInstance<HttpClient>();
        registration.RegisterAsSingleInstance<IClientRequest, ClientRequest>();
        registration.RegisterAsSingleInstance<IHttpClientBuilder, HttpClientBuilder>();
        registration.RegisterAsSingleInstance<IJsonConverter, JsonConverter>();
    }
}

Again, while it's tempting to build in support for features such as applying names to the registrations for type resolution, as I don't need this feature yet, I'm not going to build it in.

Okay, so that's covered the core API. Let's take a look and see how we can add our own IoC implementations and how we bootstrap things into place.

This class provides the mapping between the high-level IoC abstractions and the actual underlying IoC calls in Autofac. The actual calls are pretty trivial, but they rely on us passing in the ContainerBuilder that we will be registering the dependencies against. Indeed, as we develop other IoC containers, we will see that they will have their own containers passed in, such as Unity's UnityContainer. The underlying interface is a generic interface that allows us to specify the type of the container.
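The article doesn't show that generic interface itself, but from the Autofac implementation it presumably looks something like this. This is a sketch on my part, with member names inferred from the calls made against it, not the author's actual declaration:

```csharp
// Sketch of the generic registration abstraction; member names are
// inferred from the Autofac implementation and ModuleRegistration.
public interface IRegistration<TContainer>
{
    // Map an interface onto a concrete type, resolved as a single instance.
    void RegisterAsSingleInstance<TInterface, TType>() where TType : class, TInterface;

    // Register a concrete type directly (used for HttpClient).
    void RegisterTypeAsSingleInstance<TType>() where TType : class;

    // The underlying container being populated, e.g. Autofac's ContainerBuilder.
    TContainer Container { get; set; }
}
```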
    public class Registration : IRegistration<ContainerBuilder>
    {
        public void RegisterAsSingleInstance<TInterface, TType>() where TType : class, TInterface
        {
            if (Container == null) throw new ArgumentNullException();
            Container.RegisterType<TType>().As<TInterface>().SingleInstance();
        }

        public void RegisterTypeAsSingleInstance<TType>() where TType : class
        {
            if (Container == null) throw new ArgumentNullException();
            Container.RegisterType<TType>().SingleInstance();
        }

        public ContainerBuilder Container { get; set; }
    }

As we can see here, this code isn't terribly complex, but it does exactly what we need it to do. As I mentioned before, YAGNI means that I haven't added multiple instance support yet, and while I would like to offer a fluent interface, I don't have a need for it yet, so I will avoid doing so for this iteration. At this point, it's worth noting that I do intend to add support for other IoC containers in a later iteration.

Okay, so we have the ability to register interfaces and types against the IoC container, but we need something to tie the module registration together with the creation of the IoC container. This is where the Bootstrapper comes in. This abstract class provides the entry point for applications that want to use the API. We will derive from this class when we want it to hook the API up behind the scenes for us; we could do it all manually if we want, but I have provided this functionality so that we don't have to.
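The single-instance lifetime that Registration delegates to Autofac's SingleInstance() can be illustrated independently of C#. The following is a minimal, hypothetical sketch in Python; the Container class and its method names are my own invention for illustration, not Autofac's API:

```python
class Container:
    """A toy IoC container: maps a key (an 'interface') to a factory
    and caches the first instance it creates for that key."""

    def __init__(self):
        self._factories = {}
        self._instances = {}

    def register_single_instance(self, key, factory):
        # Analogous to RegisterAsSingleInstance: one shared instance per key.
        self._factories[key] = factory

    def resolve(self, key):
        # Create on first resolution, then always hand back the same object.
        if key not in self._instances:
            self._instances[key] = self._factories[key]()
        return self._instances[key]


class UserDetails:
    def __init__(self):
        self.email = None
        self.password = None


container = Container()
container.register_single_instance("IUserDetails", UserDetails)

a = container.resolve("IUserDetails")
b = container.resolve("IUserDetails")
# Because the registration is single-instance, a and b are the same object.
```

This single-instance behaviour is exactly why, later in the article, the user details populated in one class are visible to the access token in another.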
    public abstract class Bootstrapper
    {
        protected Bootstrapper()
        {
            Bootstrap();
        }

        protected virtual void OnStartup() { }

        protected virtual void RegisterAdditionalModules(IRegistration<ContainerBuilder> registration) { }

        protected IContainer Container { get; private set; }

        protected virtual void InitializeModules()
        {
            OnStartup();
            var containerBuilder = new ContainerBuilder();
            IModuleRegistration moduleRegistration = new ModuleRegistration();
            IRegistration<ContainerBuilder> registration = new Registration() { Container = containerBuilder };
            moduleRegistration.Register(registration);
            RegisterAdditionalModules(registration);
            Container = containerBuilder.Build();
        }

        private void Bootstrap()
        {
            InitializeModules();
        }
    }

The heart of the bootstrapping process is InitializeModules. In this method, we create the ContainerBuilder that Autofac requires and allocate it to the Registration class. We then call Register on ModuleRegistration, which takes care of registering the different parts of the API for us. The final part of the puzzle is the code that builds up the IoC registrations in the container.

I have provided OnStartup to allow our applications to do work before we begin initializing modules, and RegisterAdditionalModules to allow us to build up registration mappings in our applications. Effectively, this makes the Bootstrapper the "one stop shop" where all of our application registrations will reside. This is the pattern I will follow when I provide bootstrappers for additional containers.

It's worth noting here that the tests behind the Bootstrapper are the ones that I am the least happy with. They are present, but all they do is verify that the different methods are called. I have plans to extend these tests in the next iteration.

I appreciate that we have covered a lot of ground here. It's time for us to start building an application.
The simplest type of application we can build is, of course, a console application, so we will start with that. I've called my particular application CodeProject.Api.Autofac.ConsoleApplication, but feel free to change that if it feels too clumsy for you.

Before we actually start adding functionality, we're going to need to add the relevant references.

Let's start by creating the interface that describes what we want to do. As we're going to retrieve an authorization token, we need a mechanism to get the user's email address and password, and we need a method that will call GetTokenAsync from our AccessToken implementation. As we know that we are going to be calling an async method, let's make sure that our interface also exposes this method as a suitable candidate for an async method:

    using System.Threading.Tasks;

    namespace CodeProject.Api.Autofac.ConsoleApplication
    {
        public interface IAuthTokenSample
        {
            /// <summary>
            /// Enter the email address and password.
            /// </summary>
            void EnterCredentials();

            /// <summary>
            /// Uses the API to retrieve the token.
            /// </summary>
            /// <returns>An awaitable task</returns>
            Task GetTokenAsync();
        }
    }

So, that's the interface created. Let's move on and create an implementation of this class. Cunningly enough, I'm going to call the class AuthTokenSample.

    using System;
    using System.Threading.Tasks;
    using CodeProject.Api.Interfaces.Settings;
    using CodeProject.Api.Interfaces.Token;

    namespace CodeProject.Api.Autofac.ConsoleApplication
    {
        public class AuthTokenSample : IAuthTokenSample
        {
            public void EnterCredentials()
            {
            }

            public async Task GetTokenAsync()
            {
            }
        }
    }

Now, this class depends on the access token, which we cannot get until we have populated the user details, so that tells us that we are going to have to get a reference to IAccessToken and IUserDetails.
We're going to use constructor injection here to inject these references into the class, and store them for use later on.

    private readonly IAccessToken accessToken;
    private readonly IUserDetails userDetails;

    public AuthTokenSample(IAccessToken accessToken, IUserDetails userDetails)
    {
        if (accessToken == null) throw new ArgumentNullException("accessToken");
        if (userDetails == null) throw new ArgumentNullException("userDetails");
        this.accessToken = accessToken;
        this.userDetails = userDetails;
    }

We now have everything in place to start implementing the interface methods. Let's start by having the user enter their details.

    public void EnterCredentials()
    {
        do
        {
            Console.Write("Enter email address: ");
            userDetails.Email = Console.ReadLine();
        } while (string.IsNullOrWhiteSpace(userDetails.Email));

        do
        {
            Console.Write("Enter password: ");
            userDetails.Password = Console.ReadLine();
        } while (string.IsNullOrWhiteSpace(userDetails.Password));
    }

The last thing we need to do is retrieve and display the token.

    public async Task GetTokenAsync()
    {
        Console.WriteLine(await accessToken.GetTokenAsync());
    }

It's that simple. Because the user details are stored in a single instance, the access token has full access to the populated email address and password. The amount of code we need to write on this side of the API is negligible.

The last stage of the puzzle is writing our Bootstrapper. As this is the start of our API journey, I have chosen to call this class StartApi. Again, please choose whatever name you feel happiest with.

As we have added an interface and class that we want to be treated as part of the IoC lifecycle, our bootstrapper needs to register them before we can use them. Fortunately, the bootstrapper lifecycle gives us RegisterAdditionalModules to include them in the registration build-up. Finally, we are going to provide a means to actually use our test implementation.
As we're leveraging Autofac, our class is ultimately going to end up looking like this:

    using Autofac;
    using CodeProject.Api.Ioc.Builder;

    namespace CodeProject.Api.Autofac.ConsoleApplication
    {
        public class StartApi : Bootstrapper
        {
            protected override void RegisterAdditionalModules(IRegistration<ContainerBuilder> registration)
            {
                registration.RegisterAsSingleInstance<IAuthTokenSample, AuthTokenSample>();
                base.RegisterAdditionalModules(registration);
            }

            public async void TestAuthTokenAsync()
            {
                using (var scope = Container.BeginLifetimeScope())
                {
                    var item = scope.Resolve<IAuthTokenSample>();
                    item.EnterCredentials();
                    await item.GetTokenAsync();
                }
            }
        }
    }

Finally, add code to Main and we're good to go; we can retrieve our access token.

    using System;

    namespace CodeProject.Api.Autofac.ConsoleApplication
    {
        class Program
        {
            static void Main(string[] args)
            {
                StartApi api = new StartApi();
                api.TestAuthTokenAsync();
                Console.ReadKey();
            }
        }
    }

And that's it. That's all we need to do to create our application. It really is that easy. Here's a view of our application in action:

At this stage, we have a lot of the basics in place for our API. There's a lot that we still need to do, and we're going to have some refactoring to do as we add in additional functionality, but we have reached a good first step. My next steps are to extend what the API retrieves, introduce features such as multi-instance IoC support, start looking at adding sensible exception handling and catering for the unavailability of online resources, as well as beefing up the tests and adding additional samples.

What we have here is a good start, and I hope that the way I approach development is clear. One final point: I talked about my objectives up front. At this stage, I can say that I have met the objectives I set out to achieve. Having objectives available up front makes it much easier to say whether or not we have achieved what we set out to achieve.
In my articles on developing the Ultimate Coder application, I set out the music I listened to while I was coding. Music helps me relax when the development is flowing, and this development is no exception; so the music I have listened to is:

Thanks for reading, and I hope you have enjoyed this article. If nothing else, you now have an Autofac-enabled way to get a CodeProject access token.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
MP4BinaryToBase16 - Convert binary data to a base 16 string

    #include <mp4.h>

    char* MP4BinaryToBase16(
        const u_int8_t* pData,
        u_int32_t dataSize
    )

pData
    Specifies the pointer to the binary data.
dataSize
    Specifies the size in bytes of the binary data.

Upon success, a null-terminated string representing the data in base 16. Upon error, NULL.

MP4BinaryToBase16 converts binary data to a base 16 string. This encoding maps groups of 4 bits into the character set [0-9a-f]. The string is in malloc'd memory, so the caller is responsible for free'ing the memory. This utility is useful for generating the SDP descriptions for some RTP payloads.

Example: 0x12, 0xAB -> "12ab"

MP4(3)
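The encoding this function performs is plain hexadecimal: each 4-bit nibble becomes one character in [0-9a-f], so each byte yields exactly two characters. As an illustration of the mapping only, not a binding for the MP4 library, here is the same conversion in Python:

```python
def binary_to_base16(data: bytes) -> str:
    # Each byte maps to two lowercase hex characters; Python's
    # bytes.hex() implements exactly this nibble-to-[0-9a-f] mapping.
    return data.hex()

# The man page's own example: 0x12, 0xAB -> "12ab"
encoded = binary_to_base16(bytes([0x12, 0xAB]))
```

Note that in Python the garbage-collected string removes the caller's free() obligation that the C version carries.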
Enter a skill name and click Next. Note that your skill's name must not indicate that its functionality is limited. For example, you should not use "basic", "reduced" or "simple" in the title of your skill. The skill will default to English (US); change the language using the drop-down if you are targeting a different region/language. See Build Smart Home Skills in Multiple Languages to see the device categories that are supported for each language.

- On the Choose a model to add to your skill page, select Smart Home, which is a pre-built model, and click Create skill.
- Under Payload Version, select v3.
- Click Save.

Create the Lambda function for your Smart Home service endpoint in the region that corresponds to your skill's language:

- N. Virginia for English (US) or English (CA) skills
- EU (Ireland) region for English (UK), English (IN), German or French (FR) skills
- US West (Oregon) for Japanese and English (AU) skills

The region is displayed in the upper right corner. Providing your Lambda function in the correct region prevents latency issues.

In the Lambda handler, a response header is built from the incoming directive, and the user token is read from the request:

    var responseHeader = request.directive.header;
    responseHeader.namespace = "Alexa";
    responseHeader.name = "Response";
    responseHeader.messageId = responseHeader.messageId + "-R";
    // get user token passed in request
    var requestToken = request.directive.endpoint

Configure the Smart Home Service Endpoint

You must provide the ARN for your Lambda function in the skill configuration in the developer console.

- Navigate back to your skill in the Developer Console.
- Under 2. Smart Home service endpoint, in the Default endpoint box, provide the ARN from the Lambda function you created and click Save.

If you are creating multiple language versions of your skill, provide the endpoint for each version. In Section 3, Account Linking, click Setup Account Linking; smart home skills use the Authorization Code Grant. For an overview of account linking in smart home skills, see:

Test Your Skill

When your implementation is complete, and you've tested your Lambda function, you can functionally test your smart home skill.
On the Test page, move the test slider to Yes and use the Alexa Simulator to test your skill. Note that skill certification is different from Works with Alexa certification for devices; see Certify Your Device.

State Reporting

State reporting is recommended for all smart home skills because it provides the best customer experience. For more information, see Understand State Reporting in a Smart Home Skill. State reporting is required for smart home skills that target devices that are certified as Works With Amazon Alexa. For more details, see Works with Alexa Requirements for a Smart Home Skill.

Complete the following steps to implement state reporting:

- Ensure your skill responds to ReportState directives with StateReport events.
- Request permission to send events to the Alexa event gateway.
- Add code to receive the AcceptGrant directive and obtain authentication credentials.
- Add code to send ChangeReport messages to the Alexa event gateway using the stored customer authentication tokens.
Hi --

----
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

void main()
{
    char *filename = "missing file";
    struct stat finfo;

    finfo.st_mode = 0;
    fprintf(stderr, "stat returned: %d\n", stat(filename, &finfo));
    fprintf(stderr, "finfo.st_mode = %d\n", finfo.st_mode);
}
----

And sample output on RedHat 5.1:

-----
stat returned: -1
finfo.st_mode = 38416
-----

Should mod_rewrite really be copying the URI to r->filename in hook_uri2file? Here's the comment which made me wonder:

    /*
     * Are we dealing with a file? If not, we can (hopefuly) safely assume we
     * have a handler that doesn't require one, but for safety's sake, and so
     * we have something find_types() can get something out of, fake one. But
     * don't run through the directory entries.
     */

    if (r->filename == NULL) {
        r->filename = ap_pstrdup(r->pool, r->uri);
        r->finfo.st_mode = 0;   /* Not really a file... */
        r->per_dir_config = per_dir_defaults;

        return OK;
    }

Anyway, the following patch does solve the problem I'm seeing; but I'm not sure it's very reasonable for non-Linux servers. It might make sense to wrap it in an appropriate preprocessor define ... but I'm not sure which versions of Linux display this problem.

----
--- http_request.c	1999/04/20 23:38:44	1.147
+++ http_request.c	1999/05/12 01:59:42
@@ -267,6 +267,9 @@
     }
 #if defined(ENOENT) && defined(ENOTDIR)
     else if (errno == ENOENT || errno == ENOTDIR) {
+        if (errno == ENOENT)
+            r->finfo.st_mode = 0;
+
         last_cp = cp;
 
         while (--cp > path && *cp != '/')
-----

thanks
-- Ed
Participant, 1245 Points
Mar 01, 2013 02:50 AM | demoninside9 | LINK

Hi all,

I have a SQL table for booking vehicles. I book a vehicle from a form where I need to select Time_From and Time_To. I need to enforce a rule here: if there is already a booking for a vehicle on a given date, then you can't book that vehicle within the booked time period; outside that period you can book. See my booking information below.

I don't know how many times a vehicle is booked for a particular date. How can I check all the bookings for a particular vehicle on a particular date? I need to check the booked time periods, and I need to count the rows for a particular vehicle on a particular date. Please give me a hint on how to implement this logic on the vehicle booking form.

Thanks

All-Star, 51031 Points
Mar 01, 2013 03:03 AM | oned_gk | LINK

Maybe you can count the duration like this:

    DATEDIFF(minute, [time_from], [time_to]) AS totalminute

Use a datetime type for the fields.

Star, 9555 Points
Mar 01, 2013 03:14 AM | Paul Linton | LINK

You don't say what technology you are using to access the database: SQL, LINQ to SQL, Entity Framework, ...? Whatever technology you use, there will be some way to extract the date part from a DateTime. For example, with .NET DateTimes you can just use the Date property. To find everything on the same day you would compare with just the date part. If you can't find the function to extract the date part, then you can do everything >= the day and < the next day (this may be the fast way in SQL, as it could make use of a sorted index on the column in question).

Are your car bookings for multiple days? There is a standard way to find if one range overlaps another:

    if (this.Start < other.End && this.End > other.Start)
    {
        // the ranges overlap
    }

(Think about it; it is a little tricky. Consider ranges that overlap, ranges that are totally inside another range, and ranges which totally span other ranges.
You may need to change the comparison to include equality, depending on the business needs of your situation.)

Participant, 1245 Points
Mar 01, 2013 03:48 AM | demoninside9 | LINK

Thanks Paul. [I am using SQL 2008]

Let me know one thing: suppose a vehicle named XYZ is booked TODAY three times, say 10:00 AM - 12:00 PM, 01:00 PM - 2:00 PM and 2:15 PM - 3:00 PM. I can count how many times XYZ is booked for TODAY. So for checking, I need to get the Time_From and Time_To fields for all three rows. Do I need to put those in an array and check the condition if (this.Start < other.End && this.End > other.Start) in a for loop? Because I don't know the number of rows in advance; it can be any number.

I have these variables:

    DateTime Time_From_Booked, Time_To_Booked;       // booked period
    DateTime Time_From_Selected, Time_To_Selected;   // selected period
    String Vehicle;
    DateTime Today;

Now how do I check against all the counted rows (i.e. however many times a vehicle is booked for a particular date)?

Thanks

Star, 9555 Points
Mar 01, 2013 04:04 AM | Paul Linton | LINK

I wouldn't use an array (I never use arrays, but this is definitely not a time to use an array).

First define a utility class to hold ranges of dates:

    public class Range
    {
        private DateTime start;
        private DateTime end;

        public bool Overlaps(Range other)
        {
            return (other.start < this.end && other.end > this.start);
        }

        public Range(DateTime start, DateTime end)
        {
            this.start = start;
            this.end = end;
        }
    }

Then you will want a list of these:

    var todaysBookings = new List<Range>();

Add each booking to the list:

    todaysBookings.Add(new Range(Time_From_Booked, Time_To_Booked));

Does the selected range overlap any of today's bookings?

    var selectedRange = new Range(Time_From_Selected, Time_To_Selected);
    if (todaysBookings.Any(b => b.Overlaps(selectedRange)))
    {
        // The selected range overlaps a booking
    }

4 replies. Last post Mar 01, 2013 04:04 AM by Paul Linton
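The overlap test Paul uses inside Overlaps is language-agnostic, and it is worth checking it against the tricky cases he mentions: partial overlap, full containment, and touching endpoints. A small Python sketch, with times expressed as minutes since midnight purely for illustration:

```python
def overlaps(a_start, a_end, b_start, b_end):
    # Two ranges overlap iff each one starts before the other ends.
    # Strict '<' means ranges that merely touch do NOT count as
    # overlapping, which suits back-to-back bookings.
    return a_start < b_end and a_end > b_start

# Existing bookings for one vehicle on one day, as (start, end) pairs:
# 10:00-12:00, 13:00-14:00 and 14:15-15:00, as in the thread's example.
bookings = [(600, 720), (780, 840), (855, 900)]

def is_free(start, end):
    # No array-index bookkeeping needed; any() covers however many
    # rows the date has, which was the questioner's concern.
    return not any(overlaps(start, end, s, e) for s, e in bookings)
```

A booking of 12:00-13:00 slots between existing bookings and is accepted, while 11:40-12:40 collides with the 10:00-12:00 slot and is rejected.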
Here is my code. I am trying to generate a random alphabet but I see repeated letters, for example: YHTGDHFBSHXCHFYFUXZWDYKLXI. How can I fix it? I need a mixed alphabet without repeated letters. Thank you so much.

    #include <time.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    void random_string(char *string, unsigned length)
    {
        /* Seed number for rand() */
        srand((unsigned int) time(0));

        int i;
        for (i = 0; i < length; ++i) {
            string[i] = rand() % 26 + 'A';
        }
        string[i] = '\0';
    }

    int main(void)
    {
        char s[27];    /* 26 letters plus the terminating '\0' */
        random_string(s, 26);
        printf("%s\n", s);
        return 0;
    }

(Note that the buffer must be 27 bytes, not 26, since random_string writes a terminating null after the letters.)

The operation you are looking for is called a shuffle or a permutation. It is not sufficient to call a random-letter function 26 times since, as you see, you can generate duplicates. Instead, start with the string "ABCDEFGHIJKLMNOPQRSTUVWXYZ" and perform a shuffle operation. If you want to learn by doing such things from scratch, I recommend reading about the Fisher-Yates shuffle, then crafting an implementation on your own.
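For reference, the Fisher-Yates shuffle the answer points to can be written in a few lines. This Python sketch mirrors the index-swapping loop you would write in C with rand() % (i + 1); every letter appears exactly once, because the algorithm permutes the alphabet rather than drawing 26 independent random letters:

```python
import random

def fisher_yates(items):
    """Shuffle a list in place: walk from the last index down,
    swapping each element with one at a uniformly random index <= it."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randrange(i + 1)   # 0 <= j <= i, inclusive
        items[i], items[j] = items[j], items[i]
    return items

alphabet = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
shuffled = fisher_yates(alphabet[:])  # shuffle a copy
```

The result is a permutation: same 26 letters, no duplicates, random order.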
Sometimes, you’ll work with code that runs in threads which must do their processing quickly. This is common in Web applications, though of course it also occurs in other scenarios. You may well need to log messages from those threads; however, if you are a library developer, you typically won’t have control over which handlers are configured by the running application. What happens if some of the configured handlers are ones that potentially will block for longish periods of time, causing delays which hold the threads up longer than is acceptable? The most common culprit which demonstrates sluggish behaviour is the SMTPHandler: sending emails can take a long time, for a number of reasons outside the developer’s control (for example, a poorly performing mail or network infrastructure). But almost any network-based handler can block: Even a SocketHandler operation may do a DNS query under the hood which is too slow (and this query can be deep in the socket library code, below the Python layer, and outside your control). One answer which pushes itself to the fore is to arrange things so that the time-consuming operation happens on a separate thread. This has been suggested to me once or twice, and suggestions have been made to perhaps provide threaded versions of the existing Handler classes or incorporate threading into some of the existing classes. That didn’t sound like a great idea to me, and for one reason or another I haven’t been able to address it until now. I recently posted about an addition to Python 3.2 of a QueueHandler class, the addition of which came about as a result of Mike Bayer of SQLAlchemy fame pointing me to a Stack Overflow answer he had given, to a question about using logging with multiprocessing. Mike’s solution was a specialized handler for use with multiprocessing which delegated to a RotatingFileHandler, which worked for his use case but was not general enough to put in the stdlib. 
So I came up with QueueHandler, which works not only with multiprocessing queues but also with in-process thread-safe queues as implemented in the Queue module (renamed to queue in more recent Pythons). QueueHandler is also easy to subclass, for example to send logging events to a ZeroMQ socket, as described in this post.

QueueHandler also forms the basis of dealing with handlers that block. Before we look at one way to solve that problem, I should mention that the QueueHandler implementation which will be in 3.2 got a slight improvement: a new prepare method was added, and the emit method was modified to use it. The docstrings give the reason for this minor refactoring:

    def prepare(self, record):
        ...
        return record

    def emit(self, record):
        """
        Emit a record.

        Writes the LogRecord to the queue, preparing it first.
        """
        try:
            self.enqueue(self.prepare(record))
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            self.handleError(record)

The whole class is to be found here.

Moving on to how to solve the problem of blocking handlers, the answer is in part to attach only a QueueHandler to those loggers which are accessed from performance-critical threads, with the actual slow handlers listening on the other side of the queue via a QueueListener, which has been designed as the counterpart to QueueHandler. A QueueListener is very simple: it's passed a queue and a handler, and it fires up an internal thread which listens to its queue for LogRecords sent from QueueHandlers (or any other source of LogRecords, for that matter). It should be relatively easy to subclass QueueListener to listen using other types of queue, for example ZeroMQ sockets, though we won't cover that in this post (it'll use a similar approach as was described here for our QueueHandler subclass ZeroMQSocketHandler). Here's QueueListener:

    import queue
    import threading

    class QueueListener(object):
        _sentinel = None

        def __init__(self, queue, handler):
            """
            Initialise an instance with the specified queue and handler.
            """
            self.queue = queue
            self.handler = handler
            self._stop = threading.Event()
            self._thread = None

        def dequeue(self, block):
            """
            Dequeue a record and return it, optionally blocking.

            The base implementation uses get. You may want to override
            this method if you want to use timeouts or work with custom
            queue implementations.
            """
            return self.queue.get(block)

        def start(self):
            """
            Start the listener.

            This starts up a background thread to monitor the queue for
            LogRecords to process.
            """
            self._thread = t = threading.Thread(target=self._monitor)
            t.setDaemon(True)
            t.start()

        def _monitor(self):
            """
            Monitor the queue for records, and ask the handler to deal
            with them.

            This method runs on a separate, internal thread. The thread
            will terminate if it sees a sentinel object in the queue.
            """
            q = self.queue
            has_task_done = hasattr(q, 'task_done')
            while not self._stop.isSet():
                try:
                    record = self.dequeue(True)
                    if record is self._sentinel:
                        break
                    self.handler.handle(record)
                    if has_task_done:
                        q.task_done()
                except queue.Empty:
                    pass
            # There might still be records in the queue.
            while True:
                try:
                    record = self.dequeue(False)
                    if record is self._sentinel:
                        break
                    self.handler.handle(record)
                    if has_task_done:
                        q.task_done()
                except queue.Empty:
                    break

        def stop(self):
            """
            Stop the listener.

            This asks the thread to terminate, and then waits for it to
            do so.
            """
            self._stop.set()
            self.queue.put_nowait(self._sentinel)
            self._thread.join()
            self._thread = None

Since the queue should only ever get LogRecords put into it, it seems reasonable to use None as a sentinel to terminate the thread. Of course, you can set a different sentinel if you wish.

The advantage of having a separate QueueListener class is that you can use the same instance to service multiple QueueHandlers. This is more resource-friendly than, say, having threaded versions of the existing handler classes, which would eat up one thread per handler for no particular benefit.
Here’s a simple snippet showing how to use QueueHandler and QueueListener together:

    def main():
        q = queue.Queue(-1)
        qh = QueueHandler(q)
        h = logging.StreamHandler()
        ql = QueueListener(q, h)
        root = logging.getLogger()
        root.addHandler(qh)
        ql.start()
        f = logging.Formatter('%(threadName)s: %(message)s')
        h.setFormatter(f)
        # The log output will display the thread which generated
        # the event (the main thread) rather than the internal
        # thread which monitors the internal queue. This is what
        # you want to happen.
        root.warning('Look out!')
        ql.stop()

    if __name__ == '__main__':
        main()

This should be self-explanatory, but of course please feel free to post a comment if you need clarification of anything. The whole script is here, and if you run it you should see something like the following:

    MainThread: Look out!

Notice that QueueListener is not even especially logging-specific: you can pass it as a handler any object that has a handle method which takes a single argument, and that method will be passed any non-sentinel object which appears on the queue. You should be able to paste QueueHandler and QueueListener into your own code, as you may well be using Python versions earlier than 3.2 :-)

The plan is to add QueueListener to logging.handlers, so before 3.2 enters beta, I'd be grateful for any comments or suggestions you have about any of this stuff.
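For what it's worth, both classes did land in logging.handlers from Python 3.2 onwards, so on a modern Python the whole pattern can be exercised directly against the stdlib. The sketch below also demonstrates the point about QueueListener not being logging-specific: the CollectingHandler here is my own illustrative stand-in, since anything with a handle method will do:

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

class CollectingHandler:
    """Not a logging.Handler subclass at all; a handle(record)
    method is the only contract QueueListener relies on here."""
    def __init__(self):
        self.records = []

    def handle(self, record):
        self.records.append(record)

q = queue.Queue(-1)
collector = CollectingHandler()
listener = QueueListener(q, collector)

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
# The logger itself only ever touches the fast, non-blocking QueueHandler.
logger.addHandler(QueueHandler(q))

listener.start()
logger.warning("Look out!")
# stop() enqueues the sentinel and drains the queue before the
# internal thread exits, so nothing is lost.
listener.stop()
```

After stop() returns, the collector holds the one record that was logged, delivered on the listener's internal thread rather than the caller's.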
Name: krT82822  Date: 05/28/99

The following JVMs all perform optimizations that violate chapter 17 of the Java Language Specification, which describes the model for how threads can interact through shared memory:

- Wintel 1.1.8
- Wintel 1.2.1 classic
- Wintel 1.2.1 HotSpot
- Sparc Solaris 1.2.1 production

This can be considered either a bug in the compiler or a problem with chapter 17 of the specification (I would argue for the latter). I have gone over this with Guy Steele, and he agrees with me that the optimization is not allowed by the current specification. I have written a paper on the Java memory model which describes these issues at length: "Fixing the Java Memory Model", William Pugh, ACM SIGPLAN Java Grande Conference, 1999.

Below I include a sample program that tests for violation of the Java memory model.

    /**
     * This class checks to see if a Java Virtual Machine
     * respects the Java Memory Model, as described in
     * Chapter 17 of the Java Language Specification.
     *
     * The Java Memory Model prohibits certain compiler optimizations.
     * In particular, it requires that for each memory location in isolation,
     * the reads and writes to that memory location have sequentially
     * consistent semantics. One effect of this is that once a thread
     * sees a write to a variable by another thread, it cannot forget that
     * it has seen the write.
     *
     * In particular, for the code sequence:
     *
     *   i = p.x;
     *   j = q.x;
     *   k = p.x;
     *
     * the compiler may not eliminate the second load of p.x unless
     * it can prove that q and p do not point to the same object,
     * or that no other thread can update p.x.
     *
     * This is spelled out in much more detail in
     * "Fixing the Java Memory Model", William Pugh, ACM SIGPLAN Java Grande
     * Conference, 1999.
     */

    import java.awt.Point;

    public class Coherence {
        static Point p = new Point();
        static Point q = p;

        public static void check() {
            boolean optimizationDone = true;
            boolean interleavingSeen = false;
            boolean loopHoistingDone = true;
            Point pp = p;
            Point qq = q;
            int i, i0, j, k, m;
            i = 0;
            for (int l = 0; l < 10000000; l++) {
                i0 = i;
                i = pp.x;
                j = qq.x;
                k = pp.x;
                m = pp.x;
                if (l > 0 && i0 != i)
                    loopHoistingDone = false;
                if (k != m)
                    optimizationDone = false;
                if (i != j)
                    interleavingSeen = true;
                if (j > k) {
                    System.out.println("i = " + i + ", j = " + j + ", k = " + k
                            + ", j > k -- in violation of JMM");
                    System.exit(0);
                }
            }
            if (!optimizationDone) {
                System.out.println("optimization not done (yet)");
                interleavingSeen = false;
            } else if (loopHoistingDone)
                System.out.println("Extremely poor interleaving or Loop hoisting done");
            else if (!interleavingSeen)
                System.out.println("no intra-loop interleaving seen");
            else
                System.out.println("Saw intra-loop interleaving and only legal optimizations");
            Thread.yield();
        }

        public static void main(String args[]) {
            Thread t1 = new Thread() {
                public void run() {
                    while (true) {
                        for (int l = 0; l < 10000000; l++) {
                            p.x++;
                        }
                        Thread.yield();
                    }
                }
            };
            Thread t2 = new Thread() {
                public void run() {
                    for (int l = 0; l < 10; l++)
                        check();
                    System.out.println("No violation of the JMM detected");
                    System.exit(0);
                }
            };
            t1.start();
            t2.start();
        }
    }

(Review ID: 63094)
1.2 First Look at Slim PHP

In this lesson, I'll show you the basics of Slim PHP for micro web apps. You'll get an understanding of how a typical Slim application is constructed, specifically how to build and configure routes, retrieve GET and POST data, and assign and output template variables.

Code Snippet

The following snippet creates a route to render the index.phtml template. The name parameter in the URL will be passed to the template in the $args array.

    $app->get('/[{name}]', function ($request, $response, $args) {
        return $this->render($response, 'index.phtml', $args);
    });

Transcript

[SOUND] I'm Matthew Setter for Tuts+, and welcome to the introductory coffee break course on Slim PHP, specifically version three. In this short seven to twelve minute course, I'm going to step you through how to use Slim to create a basic micro application. You're going to get an understanding of how a typical Slim application is constructed, specifically how to build and configure routes, retrieve GET and POST data, and assign and output template variables.

To follow along, you're only going to need a few things. These are PHP 5.6 or 7.0, ideally, Composer and Git. There's loads more that you could use, but I don't want to throw too much at you in such a short period of time. If you want to know more though, tweet me @settermjd on Twitter.

In your terminal, run the command composer create-project slim/slim-skeleton slim-project. This, while a bit of a mouthful to say, will create a new project directory called slim-project and install a basic Slim project in there. When the install has finished running, if we have a look at what's been generated, we can see that it's created quite an intuitive directory structure. There's a log directory for file system logs.
There's the standard public directory, which is the document root, a source directory for source files, templates for view template files, a tests directory for test classes, and it's included the external dependencies in the Composer-standard vendor directory. Looking in composer.json, you can see the requirements: PHP version 5.5 or greater, Slim, Slim PHP-View, and Monolog. Nice and light, just the way you'd like it when you're creating a micro application. It's also included PHPUnit as a development dependency and created PSR-4 source and test namespaces, so that we can begin creating class files straight away when we come to do so, along with some handy scripts. Looking in public/index.php, we can see that it's created a handy bootstrap file which creates a Slim application object, configuring it with the settings from settings.php, which is located in the source directory. We'll have a look at that in just a moment. It then loads the dependencies, middleware and routes before booting the application by calling the app object's run method. Before we dig into the other configuration files, let's boot the application and have a quick look at what it generates. From the terminal I'll call composer start, which launches the application using PHP's inbuilt web server and makes it available on port 8080 on localhost. Viewing it in the browser, it's nothing too special, let's be fair. Just one route, but it does work. Now let's look at the other configuration files. In settings.php, you can see that error logging is enabled, that it sets the template directory path to the templates directory, and that it provides the Monolog configuration, setting the log file path and the default logging level to debug. Next, let's have a look at the application's dependencies, configured via src/dependencies.php. You can see that it sets up two dependencies. These are renderer and logger. Renderer is a PhpRenderer object which uses templates to render HTML output.
Logger is an instance of a Monolog Logger object which uses the logger settings which we just saw, resulting in writing logs to the local file system. Looking in src/middleware.php, no middleware is created by default, and I'm not covering middleware in this course. I don't want to overwhelm you with too much information in such a relatively short period of time, but it will be covered in a future coffee break course, so stay tuned for more on that in the future. Finally, looking in src/routes.php, we can see that one route has been created. Let's explain this in a little bit of detail. By calling the get method, a route is created which will only be accessible by a GET request. Using any other method will result in a 404 Not Found response. The first argument is the route to match. In this case, it matches the application's URI with no path or query parameters. It also accepts one optional argument called name. The optional status is indicated by the enclosing parentheses. The second argument is the handler, in this case a callback. To the callback are passed the application's request and response objects, which is always the case with Slim, along with the optional arguments. The handler first retrieves the logger service from the DI container and logs a message at the information level that the default route's been rendered. It then renders index.phtml, located in the templates directory. Now, with all that said, let's have a look at the contents of index.phtml so that you can see how templates work. Here in the file you can see that it's an HTML/PHP file combination. If a name has been supplied, then it renders the value of name, which has been set as a local variable. Otherwise it renders "Try SlimFramework". Nice bit of advertising, wouldn't you say? Now let's look at how template variables are supplied. Notice here in routes.php that there's no explicit passing of name to the template. This is a bit of magic, I like to think, on Slim's part.
It's done implicitly if it's set. Now, just while we're looking at this, don't worry if you feel you've missed something there. I know that I thought I had the first time. However, and given that, we could be a bit more explicit by calling the request object's getAttribute method, like so, assigning it to a new element called name in the args array. While it's not strictly necessary, it's less likely, I feel, to cause confusion for anybody reading the code, including yourself. And that is the kind of code that I like. Now let's add a second template variable, and this one I'm calling time. I'll do that after the assignment of name, setting its value with PHP's time function. With that done, it now gets printed out in the template. As before, we could use PHP's short echo syntax to save us time and effort. However, I like to be that much more explicit and use a print statement instead. Switching back to the browser and reloading the page, we can now see that the two variables are printed out on the default route. Okay, now that we've got the basics in place, let's start building on what we've learned by learning how to create POST routes and extract POST data. To do so, let's first add a POST route called add. We can do so by calling the post method, similar to how we called the get method previously. We could also call the map method and pass in a list of the HTTP methods which this route will accept, like so. In this case, just POST. With that done, we can make use of two methods on the request object to retrieve information posted to the route. These are getParsedBody() and getParsedBodyParam(). GetParsedBody() returns an associative array of all POST parameters. GetParsedBodyParam(), as you may suspect, returns the value of a specific parameter, optionally supplying a default value if the value that we're looking for hasn't been set. Quite handy, by the way, to avoid template errors.
Now, depending on whether you need all POST data or just one specific value, these methods come in very handy. This is because they perform background checks to determine if a value is present in the POST superglobal before attempting to retrieve it, thereby avoiding errors and saving us time. However, one thing I need to stress is that they don't perform any form of data sanitization, so don't think that just by using them, your user input is now safe. With that said, I'll now create a new template which prints out the value of name. And with that done, we now need to test the code. Logically, we can't do that in the browser, so I'm going to make use of Postman. If you've not heard of Postman, it's an excellent tool for testing APIs, or in our case, simply making POST requests. Here in my copy of Postman, I'll enter the URL of our test site and set the method to be POST. After that, I'll click Body so that I can add some POST data to the request, adding a key of name and a value of Matthew. With that done, I click Send, and when the response is returned, you can see that my name is returned as the sole contents of the response body. And that is a rapid overview of Slim PHP version three. We've seen how a basic application is composed, along with an overview of how routes work, how to assign template variables, and how to retrieve data from GET and POST requests. I hope, I truly do, that it's shown you just how straightforward and efficient it can be to create micro applications using Slim, as well as whetted your appetite for future coffee break courses on Slim PHP by yours truly.
https://code.tutsplus.com/courses/first-look-at-slim-php/lessons/first-look-at-slim-php
Python Tutorial – Python Programming For Beginners

Python Tutorial

I will start this Python Tutorial by giving you enough reasons to learn Python. Python is simple and incredibly readable since it closely resembles the English language. Through this Python Tutorial, I will introduce you to each aspect of Python and help you understand how everything fits together to gain insights from it. In this Python Tutorial blog, I will be covering the following topics:

- Hello World Program
- Python & Its Features
- Python Applications
- Variables
- Data types
- Operators
- Conditional Statements
- Loops
- Functions

Hello World Program

Python is a great language for beginners, all the way up to seasoned professionals. In Python, you don't have to deal with complex syntaxes. Let me give you an example: if I want to print "Hello World" in Python, all I have to write is:

print('Hello World')

It's that simple!

Python & Its Features

Python is an open source scripting language which was created by Guido van Rossum in 1989. It is an interpreted language with dynamic semantics and is very easy to learn. Let me give you one more motivation to learn Python: its wide variety of applications.

Python Applications:

Python finds application in a lot of domains. This is not all; it is also used for automation and for performing a lot of other tasks. After this Python tutorial, I will be coming up with a separate blog on each of these applications. Let's move ahead in this Python Tutorial and understand how Variables work in Python.

Variables in Python:

A variable is a name that refers to a value stored in memory. Assigning a value to a variable is as simple as writing name = value; you don't need to declare a type up front.

Data Types in Python:

Python has several built-in data types, including numeric types, Lists, Tuples, Sets and Dictionaries.

Numeric: Python supports integers, floating-point numbers and complex numbers.

List:

- You can consider Lists as Arrays in C, but in a List you can store elements of different types, while in an Array all the elements should be of the same type.
- A List is the most versatile datatype available in Python, and can be written as a list of comma-separated values (items) between square brackets.

Tuples:

A Tuple is a sequence of immutable Python objects. Tuples are sequences, just like Lists. The differences between tuples and lists are:

- Tuples cannot be changed, unlike lists
- Tuples use parentheses, whereas lists use square brackets

Sets:

- A Set is an unordered collection of items. Every element is unique.
- A Set is created by placing all the items (elements) inside curly braces {}, separated by commas.

Consider the example below:

Set_1 = {1, 2, 3}

In Sets, every element has to be unique. Try printing the below code:

Set_2 = {1, 2, 3, 3}

Here 3 is repeated twice, but it will be printed only once.

Let's look at some Set operations:

Union: The union of A and B is a set of all the elements from both sets. Union is performed using the | operator. Consider the example below:

A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
print(A | B)

Output = {1, 2, 3, 4, 5, 6}

Intersection: The intersection of A and B is a set of elements that are common to both sets. Intersection is performed using the & operator. Consider the example below:

A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
print(A & B)

Output = {3, 4}

Difference: The difference of A and B is a set of elements present in A but not in B. Difference is performed using the - operator.

Dictionary: A Dictionary is an unordered collection of key-value pairs.

Access elements from a dictionary:

Dict = {'Name' : 'Saurabh', 'Age' : 23}
print(Dict['Name'])

Output = Saurabh

Changing elements in a Dictionary:

Dict = {'Name' : 'Saurabh', 'Age' : 23}
Dict['Age'] = 32
Dict['Address'] = 'Starc Tower'

Output = {'Name' : 'Saurabh', 'Age' : 32, 'Address' : 'Starc Tower'}

Next in this Python Tutorial, let's understand the various Operators in Python.

Arithmetic Operators: perform mathematical operations, such as + (addition), - (subtraction), * (multiplication), / (division), % (modulus), ** (exponent) and // (floor division).

Assignment Operators: assign values to variables, for example =, +=, -=, *= and /=.

Bitwise Operators: act on operands bit by bit, for example &, |, ^, ~, << and >>.

Identity Operators: is and is not, which check whether two names refer to the same object.

Conditional statements are used to execute a statement or a group of statements when some condition is true. There are namely three conditional statements – If, Elif, Else.
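Here is a minimal sketch of all three conditional statements in action (the value of x and the messages are made up purely for illustration):

```python
x = 15

if x > 20:
    # runs only when the If condition is true
    message = "x is greater than 20"
elif x > 10:
    # checked only after the If condition fails
    message = "x is greater than 10, but not more than 20"
else:
    # runs when both conditions above are false
    message = "x is 10 or less"

print(message)  # → x is greater than 10, but not more than 20
```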
Consider the flowchart shown below. Let me tell you how it actually works.

- First, the control will check the 'If' condition. If it's true, then the control will execute the statements after the If condition.
- When the 'If' condition is false, the control will check the 'Elif' condition. If the Elif condition is true, then the control will execute the statements after the Elif condition.
- If the 'Elif' condition is also false, then the control will execute the Else statements.

Loops:

- In general, statements are executed sequentially. The first statement in a function is executed first, followed by the second, and so on.
- There may be a situation when you need to execute a block of code several times.

A loop statement allows us to execute a statement or group of statements multiple times. The following diagram illustrates a loop statement. Let me explain the above diagram:

- First, the control will check the condition. If it is true, then the control will move inside the loop and execute the statements inside the loop.
- Now, the control will again check the condition; if it is still true, then again it will execute the statements inside the loop.
- This process will keep on repeating until the condition becomes false. Once the condition becomes false, the control will move out of the loop.

There are two types of loops:

- Infinite: when the condition will never become false
- Finite: at one point, the condition will become false and the control will move out of the loop

There is one more way to categorize loops:

- Pre-test: in this type of loop, the condition is first checked, and only then does the control move inside the loop
- Post-test: here the statements inside the loop are executed first, and then the condition is checked

Python does not support Post-test loops.
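Because Python only offers pre-test loops, a post-test (do-while style) loop has to be emulated with while True and break. A quick sketch of both (the loop bounds here are arbitrary):

```python
# Pre-test: the condition is checked before the body runs
n = 0
while n < 3:
    n += 1
print(n)  # → 3

# Post-test emulation: the body runs once before any check,
# then the exit condition is tested at the bottom
m = 0
while True:
    m += 1
    if m >= 3:
        break
print(m)  # → 3
```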
Loops in Python:

In Python, there are three loops:

- While
- For
- Nested

Functions:

Functions are a convenient way to divide your code into useful blocks, allowing us to order our code, make it more readable, reuse it and save some time.

def add(a, b):
    return a + b

c = add(10, 20)
print(c)

Output = 30
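For completeness, here is what the three loop types listed above look like in code (the numbers are arbitrary examples):

```python
# While loop: repeats as long as the condition holds
count = 0
while count < 3:
    count += 1

# For loop: iterates over a sequence
total = 0
for i in [1, 2, 3, 4]:
    total += i

# Nested loop: one loop placed inside another
pairs = []
for i in range(2):
    for j in range(2):
        pairs.append((i, j))

print(count, total, pairs)  # → 3 10 [(0, 0), (0, 1), (1, 0), (1, 1)]
```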
https://www.edureka.co/blog/python-tutorial/
Hi all. I'm new to RoR and trying to figure out instance variables. I'm trying to put a string in one in the controller and print it in the view. However I'm getting the following error:

NoMethodError in Say#hello
You have a nil object when you didn't expect it!
You might have expected an instance of Array.
The error occurred while evaluating nil.+

Apparently the @text variable remains nil, even though I assign a string to it. Any ideas about what goes wrong?

This is my controller code:

class SayController < ApplicationController
  def Hello
    @text = 'Hello world from the controller'
  end

  def Goodbye
  end
end

This is my view:

Rails test (say)
Hello world!
<%= 'testje ' + @text %>
https://www.ruby-forum.com/t/instance-variables-created-in-controller-are-nil-in-view/144019
Purpose

I tried to find a simple example that showed me:

- How to call some Lua script from within C
- How to call a C function from within the Lua script
- How to pass an instance of a structure/class to the script and have it call functions on it

Being naive I thought finding something like this would be easy. We all know how stories starting with "Being naive..." end (in road-side ditches surrounded by police tape). I used the latest Lua release (5.2) and stumbled across errors where functions were deprecated (lua_open no longer exists), examples where there was so much going on I simply couldn't understand the basics, examples where a script.lua file was simply executed, examples where in order to pass a C object over I had to create something called a MetaTable and and...

So here it is, a simple example that pretends it's a game. The idea would be that you have an enemy object and its intelligence would be stored in a Lua script that is loaded at the start of the game, and the "onUpdate" function in that script is called every frame wherein it can change the position, state, colour, happiness, whatever of the object.

Note

This contains no error checking to keep things simple. The Lua documentation on how to do error checking is great, and you should check the result of every call you make, as well as checking to see if variables are actually the type you expect them to be before using them. Also, I started using Lua for the first time today, so this is simply an example that works for me; your mileage and joy in ignorance may vary.

Code

#include <stdio.h>
#include <stdlib.h>
#include "lua.hpp"

/* This is our game object structure, specifying all the properties that may
   pertain to an object instance in our game. For this example, only the
   position is important. */
typedef struct GameObject
{
    int x;
    int y;
} GameObject;

/* This function is called from Lua via GameObject.move to move our object
   instance. In Lua-land it passes 3 parameters: the instance; and x and y. */
static int GameObject_move(lua_State *Lua)
{
    GameObject *instance = (GameObject*)lua_topointer(Lua, 1);
    const int x = lua_tointeger(Lua, 2);
    const int y = lua_tointeger(Lua, 3);

    /* adjust the position of the instance and print out that we've done
       something. */
    instance->x += x;
    instance->y += y;
    printf("GameObject_move(%d, %d)\n\tinstance->x = %d\n\tinstance->y = %d\n",
           x, y, instance->x, instance->y);
    return 0;
}

/* This is called from C-land and simply serves as a helper function to set up
   the update function and call it. */
static void scriptOnUpdate(lua_State *Lua, GameObject *Instance)
{
    /* Fetch the Lua function and make it the thing currently on the stack */
    lua_getglobal(Lua, "onUpdate");

    /* Push the first argument onto the stack, which is the instance we are
       currently updating */
    lua_pushlightuserdata(Lua, Instance);

    /* Call the function on the stack telling it that it has 1 parameter and
       no return values */
    lua_call(Lua, 1, 0);
}

/* This sets up the Lua context and registers our functions with it. */
static lua_State* createLuaContext()
{
    /* Build up an array of functions we want to expose for the "GameObject"
       namespace. Add whatever functions you want to expose to this array. */
    const luaL_Reg gameObjectFunctions[] =
    {
        { "move", GameObject_move }, /* Lua GameObject.move() maps to GameObject_move() in C-land */
        { 0, 0 }                     /* NULL terminator to signal the end of the array */
    };

    /* This is the Lua script we plan on running; it would probably be in a
       resource file somewhere. scriptOnUpdate() above calls this function,
       which in turn calls the GameObject_move() function above. */
    const char *script =
        "function onUpdate(instance)\n" \
        "  GameObject.move(instance, 4, 2)\n" \
        "end\n";

    lua_State *lua = luaL_newstate();      /* Creates a new context, lua_open replacement in versions > 5 */
    luaL_openlibs(lua);                    /* Adds the default Lua libraries to the context */
    luaL_newlib(lua, gameObjectFunctions); /* Adds our functions to the context */
    lua_setglobal(lua, "GameObject");      /* Assigns a namespace to our most recently added functions */
    luaL_loadstring(lua, script);          /* Loads our script into the context */
    lua_call(lua, 0, 0);                   /* This "primes" our context; without this, things will fail! */
    return lua;
}

int main(int argc, char *argv[])
{
    /* This is our game instance that we intend on passing to the Lua script
       for it to have its way with. */
    GameObject instance;
    instance.x = 6;
    instance.y = 8;

    lua_State *lua = createLuaContext(); /* Set up the Lua context */
    scriptOnUpdate(lua, &instance);      /* Call the Lua script "onUpdate" function passing our current object instance */
    lua_close(lua);                      /* ... and we're done. Clean up shop and go home! */

    printf("\nPress return to quit.\n");
    getchar();
    return 0;
}
http://iwasdeportedandalligotwasthislousydomain.co.uk/index.php?m=01&y=13&d=31&entry=entry130131-152758
12. Generic flow API (rte_flow) 12.1. Overview This API provides a generic means to configure hardware to match specific ingress or egress traffic, alter its fate and query related counters according to any number of user-defined rules. It is named rte_flow after the prefix used for all its symbols, and is defined in rte_flow.h. - Matching can be performed on packet data (protocol headers, payload) and properties (e.g. associated physical port, virtual device function ID). - Possible operations include dropping traffic, diverting it to specific queues, to virtual/physical device functions or ports, performing tunnel offloads, adding marks and so on. 12.2. Flow rule 12.2.1. Description A flow rule is the combination of attributes with a matching pattern and a list of actions. Flow rules form the basis of this API. Flow rules can have several distinct actions (such as counting, encapsulating, decapsulating before redirecting packets to a particular queue, etc.), instead of relying on several rules to achieve this and having applications deal with hardware implementation details regarding their order. Support for different priority levels on a rule basis is provided, for example in order to force a more specific rule to come before a more generic one for packets matched by both. However hardware support for more than a single priority level cannot be guaranteed. When supported, the number of available priority levels is usually low, which is why they can also be implemented in software by PMDs (e.g. missing priority levels may be emulated by reordering rules). In order to remain as hardware-agnostic as possible, by default all rules are considered to have the same priority, which means that the order between overlapping rules (when a packet is matched by several filters) is undefined. PMDs may refuse to create overlapping rules at a given priority level when they can be detected (e.g. if a pattern matches an existing filter). 
Thus predictable results for a given priority level can only be achieved with non-overlapping rules, using perfect matching on all protocol layers. Flow rules can also be grouped; the flow rule priority is specific to the group they belong to. All flow rules in a given group are thus processed within the context of that group. Groups are not linked by default, so the logical hierarchy of groups must be explicitly defined by flow rules themselves in each group using the JUMP action to define the next group to redirect to. Only flow rules defined in the default group 0 are guaranteed to be matched against; this makes group 0 the origin of any group hierarchy defined by an application. Support for multiple actions per rule may be implemented internally on top of non-default hardware priorities; as a result, both features may not be simultaneously available to applications. Considering that allowed pattern/actions combinations cannot be known in advance and would result in an impractically large number of capabilities to expose, a method is provided to validate a given rule from the current device configuration state. This enables applications to check if the rule types they need are supported at initialization time, before starting their data path. This method can be used anytime, its only requirement being that the resources needed by a rule should exist (e.g. a target RX queue should be configured first). Each defined rule is associated with an opaque handle managed by the PMD, and applications are responsible for keeping it. These can be used for queries and rules management, such as retrieving counters or other data and destroying them. To avoid resource leaks on the PMD side, handles must be explicitly destroyed by the application before releasing associated resources such as queues and ports. The following sections cover:

- Attributes (represented by struct rte_flow_attr): properties of a flow rule such as its direction (ingress or egress) and priority.
- Pattern item (represented by struct rte_flow_item): part of a matching pattern that either matches specific packet data or traffic properties. It can also describe properties of the pattern itself, such as inverted matching.
- Matching pattern: traffic properties to look for, a combination of any number of items.
- Actions (represented by struct rte_flow_action): operations to perform whenever a packet is matched by a pattern.

12.2.2. Attributes

12.2.2.1. Attribute: Group

Flow rules can be grouped by assigning them a common group number. Groups allow a logical hierarchy of flow rule groups (tables) to be defined. These groups can be supported virtually in the PMD or in the physical device. Group 0 is the default group, and it is the only group in which flows are guaranteed to be matched against; all subsequent groups can only be reached by way of the JUMP action from a matched flow rule. Although optional, applications are encouraged to group similar rules as much as possible to fully take advantage of hardware capabilities (e.g. optimized matching) and work around limitations (e.g. a single pattern type possibly allowed in a given group), while being aware that the group hierarchies must be programmed explicitly. Note that support for more than a single group is not guaranteed.

12.2.2.2. Attribute: Priority

A priority level can be assigned to a flow rule; lower values denote higher priority, with 0 as the maximum. Priority levels are arbitrary and up to the application; they do not need to be contiguous nor start from 0, however the maximum number varies between devices and may be affected by existing flow rules. A flow which matches multiple rules in the same group will always be matched by the rule with the highest priority in that group. If a packet is matched by several rules of a given group for a given priority level, the outcome is undefined. It can take any path, may be duplicated or even cause unrecoverable errors.
Note that support for more than a single priority level is not guaranteed. 12.2.2.3. Attribute: Traffic direction Flow rule patterns apply to inbound and/or outbound traffic. In the context of this API, ingress and egress respectively stand for inbound and outbound based on the standpoint of the application creating a flow rule. There are no exceptions to this definition. Several pattern items and actions are valid and can be used in both directions. At least one direction must be specified. Specifying both directions at once for a given rule is not recommended but may be valid in a few cases (e.g. shared counters). 12.2.2.4. Attribute: Transfer Instead of simply matching the properties of traffic as it would appear on a given DPDK port ID, enabling this attribute transfers a flow rule to the lowest possible level of any device endpoints found in the pattern. When supported, this effectively enables an application to reroute traffic not necessarily intended for it (e.g. coming from or addressed to different physical ports, VFs or applications) at the device level. It complements the behavior of some pattern items such as Item: PHY_PORT and is meaningless without them. When transferring flow rules, ingress and egress attributes (Attribute: Traffic direction) keep their original meaning, as if processing traffic emitted or received by the application. 12.2.3. Pattern item Pattern items fall in two categories: - Matching protocol headers and packet data, usually associated with a specification structure. These must be stacked in the same order as the protocol layers to match inside packets, starting from the lowest. - Matching meta-data or affecting pattern processing, often without a specification structure. Since they do not match packet contents, their position in the list is usually not relevant. Item specification structures are used to match specific values among protocol fields (or item properties). 
Documentation describes for each item whether they are associated with one and their type name if so. Up to three structures of the same type can be set for a given item:

- spec: values to match (e.g. a given IPv4 address).
- last: upper bound for an inclusive range with corresponding fields in spec.
- mask: bit-mask applied to both spec and last whose purpose is to distinguish the values to take into account and/or partially mask them out (e.g. in order to match an IPv4 address prefix).

Usage restrictions and expected behavior:

- Setting either mask or last without spec is an error.
- Field values in last which are either 0 or equal to the corresponding values in spec are ignored; they do not generate a range. Nonzero values lower than those in spec are not supported.
- Setting spec and optionally last without mask causes the PMD to use the default mask defined for that item (defined as rte_flow_item_{name}_mask constants).
- Not setting any of them (assuming item type allows it) is equivalent to providing an empty (zeroed) mask for broad (nonspecific) matching.

mask is a simple bit-mask applied before interpreting the contents of spec and last, which may yield unexpected results if not used carefully. For example, if for an IPv4 address field, spec provides 10.1.2.3, last provides 10.3.4.5 and mask provides 255.255.0.0, the effective range becomes 10.1.0.0 to 10.3.255.255.

Example of an item specification matching an Ethernet header: non-masked bits stand for any value (shown as ? below). Ethernet headers with the following properties are thus matched:

src: ??:??:01:02:03:??
dst: ??:??:??:??:??:01
type: 0x????

12.2.4. Matching pattern

A pattern is formed by stacking items starting from the lowest protocol layer to match. This stacking restriction does not apply to meta items, which can be placed anywhere in the stack without affecting the meaning of the resulting pattern. Patterns are terminated by END items.
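Although rte_flow itself is a C API, the spec/last/mask range arithmetic described above is easy to sanity-check in a few lines of Python. This sketch is purely illustrative and not part of DPDK; the helper name effective_range is made up:

```python
import ipaddress

def effective_range(spec, last, mask):
    """Expand an rte_flow-style spec/last/mask triple into the
    effective inclusive range of matched IPv4 addresses."""
    s = int(ipaddress.IPv4Address(spec))
    l = int(ipaddress.IPv4Address(last))
    m = int(ipaddress.IPv4Address(mask))
    lo = s & m                        # masked-out bits are "don't care": lowest value is 0
    hi = (l & m) | (~m & 0xFFFFFFFF)  # masked-out bits may take any value: highest value is 1
    return str(ipaddress.IPv4Address(lo)), str(ipaddress.IPv4Address(hi))

print(effective_range("10.1.2.3", "10.3.4.5", "255.255.0.0"))
# → ('10.1.0.0', '10.3.255.255')
```

This reproduces the example from the text: masking 10.1.2.3–10.3.4.5 with 255.255.0.0 widens the effective range to 10.1.0.0–10.3.255.255.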
Examples: The above example shows how meta items do not affect packet data matching items, as long as those remain stacked properly. The resulting matching pattern is identical to “TCPv4 as L4”. If supported by the PMD, omitting one or several protocol layers at the bottom of the stack as in the above example (missing an Ethernet specification) enables looking up anywhere in packets. It is unspecified whether the payload of supported encapsulations (e.g. VXLAN payload) is matched by such a pattern, which may apply to inner, outer or both packets. The above pattern is invalid due to a missing L3 specification between L2 (Ethernet) and L4 (UDP). Omitting layers is only allowed at the bottom and at the top of the stack.

12.2.5. Meta item types

They match meta-data or affect pattern processing instead of matching packet data directly; most of them do not need a specification structure. This particularity allows them to be specified anywhere in the stack without causing any side effect.

12.2.5.1. Item: END

End marker for item lists. Prevents further processing of items, thereby ending the pattern.

- Its numeric value is 0 for convenience.
- PMD support is mandatory.
- spec, last and mask are ignored.

12.2.5.2. Item: VOID

Used as a placeholder for convenience. It is ignored and simply discarded by PMDs.

- PMD support is mandatory.
- spec, last and mask are ignored.

One usage example for this type is generating rules that share a common prefix quickly without reallocating memory, only by updating item types.

12.2.5.3. Item: INVERT

Inverted matching, i.e. process packets that do not match the pattern.

- spec, last and mask are ignored.

Usage example: matching non-TCPv4 packets only.

12.2.5.4. Item: PF

Matches traffic originating from (ingress) or going to (egress) the physical function of the current device. If supported, should work even if the physical function is not managed by the application and thus not associated with a DPDK port ID.

12.2.5.5. Item: VF

Matches traffic originating from (ingress) or going to (egress) a given virtual function of the current device.
- Can be specified multiple times to match traffic addressed to several VF IDs.
- Can be combined with a PF item to match both PF and VF traffic.
- Default mask matches any VF ID.

12.2.5.6. Item: PHY_PORT

Matches traffic originating from (ingress) or going to (egress) a physical port of the underlying device.

- Default mask matches any port index.

12.2.5.7. Item: PORT_ID

Matches traffic originating from (ingress) or going to (egress) a given DPDK port ID. This must not be confused with Item: PHY_PORT, which refers to the physical port of a device, whereas Item: PORT_ID refers to a struct rte_eth_dev object on the application side (also known as “port representor” depending on the kind of underlying device).

- Default mask matches the specified DPDK port ID.

12.2.5.8. Item: MARK

Matches an arbitrary integer value which was set using the MARK action in a previously matched rule. This item can only be specified once as a match criterion, as the MARK action can only be specified once in a flow action. Note the value of the MARK field is arbitrary and application defined. Depending on the underlying implementation, the MARK item may be supported on the physical device, with virtual groups in the PMD, or not at all.

- Default mask matches any integer value.

12.2.5.9. Item: TAG

Matches a tag item set by other flows. Multiple tags are supported by specifying index.

- Default mask matches the specified tag value and index.

12.2.5.10. Item: META

Matches a 32 bit metadata item. On egress, metadata can be set either by the mbuf metadata field with the PKT_TX_DYNF_METADATA flag or by the SET_META action. On ingress, the SET_META action sets metadata for a packet, and the metadata will be reported via the metadata dynamic field of rte_mbuf with the PKT_RX_DYNF_METADATA flag.

- Default mask matches the specified Rx metadata value.

12.2.6. Data matching item types

Most of these are basically protocol header definitions with associated bit-masks. They must be specified (stacked) from lowest to highest protocol layer to form a matching pattern. The following list is not exhaustive; new protocols will be added in the future.

12.2.6.1. Item: ANY

Matches any protocol in place of the current layer; a single ANY may also stand for several protocol layers.
This is usually specified as the first pattern item when looking for a protocol anywhere in a packet. - Default mask stands for any number of layers. Example for VXLAN TCP payload matching regardless of outer L3 (IPv4 or IPv6) and L4 (UDP) both matched by the first ANY specification, and inner L3 (IPv4 or IPv6) matched by the second ANY specification: 12.2.6.2. Item: RAW Matches a byte string of a given length at a given offset. Offset is either absolute (using the start of the packet) or relative to the end of the previous matched item in the stack, in which case negative values are allowed. If search is enabled, offset is used as the starting point. The search area can be delimited by setting limit to a nonzero value, which is the maximum number of bytes after offset where the pattern may start. Matching a zero-length pattern is allowed; doing so resets the relative offset for subsequent items. - This type does not support ranges (last field). - Default mask matches all fields exactly. Example pattern looking for several strings at various offsets of a UDP payload, using combined RAW items: This translates to: - Locate "foo" at least 10 bytes deep inside UDP payload. - Locate "bar" after "foo" plus 20 bytes. - Locate "baz" after "bar" minus 29 bytes. Such a packet may be represented as follows (not to scale):

 0                     >= 10 B           == 20 B
 |                  |<--------->|     |<--------->|
 |                  |           |     |           |
 |-----|------|-----|-----|-----|-----|-----------|-----|------|
 | ETH | IPv4 | UDP | ... | baz | foo | ......... | bar | .... |
 |-----|------|-----|-----|-----|-----|-----------|-----|------|
                          |                             |
                          |<--------------------------->|
                                      == 29 B

Note that matching subsequent pattern items would resume after "baz", not "bar", since matching is always performed after the previous item of the stack. 12.2.6.3. Item: ETH Matches an Ethernet header. The type field either stands for "EtherType" or "TPID" when followed by so-called layer 2.5 pattern items such as RTE_FLOW_ITEM_TYPE_VLAN.
In the latter case, type refers to that of the outer header, with the inner EtherType/TPID provided by the subsequent pattern item. This is the same order as on the wire. If the type field contains a TPID value, then only tagged packets with the specified TPID will match the pattern. The field has_vlan can be used to match any type of tagged packets, instead of using the type field. If the type and has_vlan fields are not specified, then both tagged and untagged packets will match the pattern. dst: destination MAC. src: source MAC. type: EtherType or TPID. has_vlan: packet header contains at least one VLAN. - Default mask matches destination and source addresses only. 12.2.6.4. Item: VLAN Matches an 802.1Q/ad VLAN tag. The corresponding standard outer EtherType (TPID) values are RTE_ETHER_TYPE_VLAN or RTE_ETHER_TYPE_QINQ. It can be overridden by the preceding pattern item. If a VLAN item is present in the pattern, then only tagged packets will match the pattern. The field has_more_vlan can be used to match any type of tagged packets, instead of using the inner_type field. If the inner_type and has_more_vlan fields are not specified, then any tagged packets will match the pattern. tci: tag control information. inner_type: inner EtherType or TPID. has_more_vlan: packet header contains at least one more VLAN, after this VLAN. - Default mask matches the VID part of TCI only (lower 12 bits). 12.2.6.5. Item: IPV4 Matches an IPv4 header. Note: IPv4 options are handled by dedicated pattern items. hdr: IPv4 header definition (rte_ip.h). - Default mask matches source and destination addresses only. 12.2.6.6. Item: IPV6 Matches an IPv6 header. Dedicated flags indicate whether the header contains specific extension headers. To match on packets containing a specific extension header, an application should match on the dedicated flag set to 1. To match on packets not containing a specific extension header, an application should match on the dedicated flag cleared to 0.
If the application does not care about the existence of a specific extension header, it should not specify the dedicated flag for matching. hdr: IPv6 header definition (rte_ip.h). has_hop_ext: header contains Hop-by-Hop Options extension header. has_route_ext: header contains Routing extension header. has_frag_ext: header contains Fragment extension header. has_auth_ext: header contains Authentication extension header. has_esp_ext: header contains Encapsulation Security Payload extension header. has_dest_ext: header contains Destination Options extension header. has_mobil_ext: header contains Mobility extension header. has_hip_ext: header contains Host Identity Protocol extension header. has_shim6_ext: header contains Shim6 Protocol extension header. - Default mask matches hdr source and destination addresses only. 12.2.6.7. Item: ICMP Matches an ICMP header. hdr: ICMP header definition (rte_icmp.h). - Default mask matches ICMP type and code only. 12.2.6.8. Item: UDP Matches a UDP header. hdr: UDP header definition (rte_udp.h). - Default mask matches source and destination ports only. 12.2.6.9. Item: TCP Matches a TCP header. hdr: TCP header definition (rte_tcp.h). - Default mask matches source and destination ports only. 12.2.6.10. Item: SCTP Matches an SCTP header. hdr: SCTP header definition (rte_sctp.h). - Default mask matches source and destination ports only. 12.2.6.11. Item: VXLAN Matches a VXLAN header (RFC 7348). flags: normally 0x08 (I flag). rsvd0: reserved, normally 0x000000. vni: VXLAN network identifier. rsvd1: reserved, normally 0x00. - Default mask matches VNI only. 12.2.6.12. Item: E_TAG Matches an IEEE 802.1BR E-Tag header. The corresponding standard outer EtherType (TPID) value is RTE_ETHER_TYPE_ETAG. It can be overridden by the preceding pattern item. epcp_edei_in_ecid_b: E-Tag control information (E-TCI), E-PCP (3b), E-DEI (1b), ingress E-CID base (12b). rsvd_grp_ecid_b: reserved (2b), GRP (2b), E-CID base (12b). in_ecid_e: ingress E-CID ext.
ecid_e: E-CID ext. inner_type: inner EtherType or TPID. - Default mask simultaneously matches GRP and E-CID base. 12.2.6.13. Item: NVGRE Matches an NVGRE header (RFC 7637). c_k_s_rsvd0_ver: checksum (1b), undefined (1b), key bit (1b), sequence number (1b), reserved 0 (9b), version (3b). This field must have value 0x2000 according to RFC 7637. protocol: protocol type (0x6558). tni: virtual subnet ID. flow_id: flow ID. - Default mask matches TNI only. 12.2.6.14. Item: MPLS Matches an MPLS header. label_tc_s_ttl: label, TC, Bottom of Stack and TTL. - Default mask matches label only. 12.2.6.15. Item: GRE Matches a GRE header. c_rsvd0_ver: checksum, reserved 0 and version. protocol: protocol type. - Default mask matches protocol only. 12.2.6.16. Item: GRE_KEY Matches a GRE key field. This should be preceded by item GRE. - Value to be matched is a big-endian 32 bit integer. - When this item is present, it implicitly matches the K bit in the default mask as "1". 12.2.6.17. Item: FUZZY Fuzzy pattern match; expect faster than default. Matching accuracy level can be configured by threshold. The driver can divide the range of threshold and map it to the different accuracy levels that the device supports. Threshold 0 means perfect match (no fuzziness), while threshold 0xffffffff means fuzziest match. Usage example, fuzzy matching TCPv4 packets: 12.2.6.18. Item: GTP, GTPC, GTPU Matches a GTPv1 header. Note: GTP, GTPC and GTPU use the same structure. GTPC and GTPU items are defined for a user-friendly API when creating GTP-C and GTP-U flow rules. v_pt_rsv_flags: version (3b), protocol type (1b), reserved (1b), extension header flag (1b), sequence number flag (1b), N-PDU number flag (1b). msg_type: message type. msg_len: message length. teid: tunnel endpoint identifier. - Default mask matches teid only. 12.2.6.19. Item: ESP Matches an ESP header. hdr: ESP header definition (rte_esp.h). - Default mask matches SPI only. 12.2.6.20. Item: GENEVE Matches a GENEVE header. ver_opt_len_o_c_rsvd0: version (2b), length of the options fields (6b), OAM packet (1b), critical options present (1b), reserved 0 (6b).
protocol: protocol type. vni: virtual network identifier. rsvd1: reserved, normally 0x00. - Default mask matches VNI only. 12.2.6.21. Item: VXLAN-GPE Matches a VXLAN-GPE header (draft-ietf-nvo3-vxlan-gpe-05). flags: normally 0x0C (I and P flags). rsvd0: reserved, normally 0x0000. protocol: protocol type. vni: VXLAN network identifier. rsvd1: reserved, normally 0x00. - Default mask matches VNI only. 12.2.6.22. Item: ARP_ETH_IPV4 Matches an ARP header for Ethernet/IPv4. hdr: hardware type, normally 1. pro: protocol type, normally 0x0800. hln: hardware address length, normally 6. pln: protocol address length, normally 4. op: opcode (1 for request, 2 for reply). sha: sender hardware address. spa: sender IPv4 address. tha: target hardware address. tpa: target IPv4 address. - Default mask matches SHA, SPA, THA and TPA. 12.2.6.23. Item: IPV6_EXT Matches the presence of any IPv6 extension header. next_hdr: next header. - Default mask matches next_hdr. Normally preceded by any of: 12.2.6.24. Item: IPV6_FRAG_EXT Matches the presence of an IPv6 fragment extension header. hdr: IPv6 fragment extension header definition (rte_ip.h). Normally preceded by any of: 12.2.6.25. Item: ICMP6 Matches any ICMPv6 header. type: ICMPv6 type. code: ICMPv6 code. checksum: ICMPv6 checksum. - Default mask matches type and code. 12.2.6.26. Item: ICMP6_ND_NS Matches an ICMPv6 neighbor discovery solicitation. type: ICMPv6 type, normally 135. code: ICMPv6 code, normally 0. checksum: ICMPv6 checksum. reserved: reserved, normally 0. target_addr: target address. - Default mask matches target address only. 12.2.6.27. Item: ICMP6_ND_NA Matches an ICMPv6 neighbor discovery advertisement. type: ICMPv6 type, normally 136. code: ICMPv6 code, normally 0. checksum: ICMPv6 checksum. rso_reserved: route flag (1b), solicited flag (1b), override flag (1b), reserved (29b). target_addr: target address. - Default mask matches target address only. 12.2.6.28.
Item: ICMP6_ND_OPT Matches the presence of any ICMPv6 neighbor discovery option. type: ND option type. length: ND option length. - Default mask matches type only. Normally preceded by any of: 12.2.6.29. Item: ICMP6_ND_OPT_SLA_ETH Matches an ICMPv6 neighbor discovery source Ethernet link-layer address option. type: ND option type, normally 1. length: ND option length, normally 1. sla: source Ethernet LLA. - Default mask matches source link-layer address only. Normally preceded by any of: 12.2.6.30. Item: ICMP6_ND_OPT_TLA_ETH Matches an ICMPv6 neighbor discovery target Ethernet link-layer address option. type: ND option type, normally 2. length: ND option length, normally 1. tla: target Ethernet LLA. - Default mask matches target link-layer address only. Normally preceded by any of: 12.2.6.31. Item: META Matches an application specific 32 bit metadata item. - Default mask matches the specified metadata value. 12.2.6.32. Item: GTP_PSC Matches a GTP PDU extension header with type 0x85. pdu_type: PDU type. qfi: QoS flow identifier. - Default mask matches QFI only. 12.2.6.33. Item: PPPOES, PPPOED Matches a PPPoE header. version_type: version (4b), type (4b). code: message type. session_id: session identifier. length: payload length. 12.2.6.34. Item: PPPOE_PROTO_ID Matches a PPPoE session protocol identifier. proto_id: PPP protocol identifier. - Default mask matches proto_id only. 12.2.6.35. Item: NSH Matches a network service header (RFC 8300). version: normally 0x0 (2 bits). oam_pkt: indicates an OAM packet (1 bit). reserved: reserved bit (1 bit). ttl: maximum SFF hops (6 bits). length: total length in 4-byte words (6 bits). reserved1: reserved bits (4 bits). mdtype: indicates the format of the NSH header (4 bits). next_proto: indicates the protocol type of the encapsulated data (8 bits). spi: service path identifier (3 bytes). sindex: service index (1 byte). - Default mask matches mdtype, next_proto, spi, sindex. 12.2.6.36. Item: IGMP Matches an Internet Group Management Protocol (IGMP) header (RFC 2236).
type: IGMP message type (Query/Report). max_resp_time: max time allowed before sending report. checksum: checksum, 1's complement of the whole IGMP message. group_addr: group address, for a Query the value will be 0. - Default mask matches group_addr. 12.2.6.37. Item: AH Matches an IP Authentication Header (RFC 4302). next_hdr: next payload after AH. payload_len: total length of AH in 4-byte words. reserved: reserved bits. spi: security parameters index. seq_num: counter value increased by 1 on each packet sent. - Default mask matches spi. 12.2.6.38. Item: HIGIG2 Matches a HIGIG2 header field. It is a layer 2.5 protocol and used in Broadcom switches. - Default mask matches classification and vlan. 12.2.6.39. Item: L2TPV3OIP Matches an L2TPv3 over IP header. session_id: L2TPv3 over IP session identifier. - Default mask matches session_id only. 12.2.6.40. Item: PFCP Matches a PFCP header. s_field: S field. msg_type: message type. msg_len: message length. seid: session endpoint identifier. - Default mask matches s_field and seid. 12.2.6.41. Item: ECPRI Matches an eCPRI header. hdr: eCPRI header definition (rte_ecpri.h). - Default mask matches nothing, for all eCPRI messages. 12.2.7. Actions Each possible action is represented by a type. An action can have an associated configuration object. Several actions combined in a list can be assigned to a flow rule and are performed in order. They fall into three categories: - Actions that modify the fate of matching traffic, for instance by dropping or assigning it a specific destination. - Actions that modify matching traffic contents or its properties. This includes adding/removing encapsulation, encryption, compression and marks. - Actions related to the flow rule itself, such as updating counters or making it non-terminating. Flow rules being terminating by default, not specifying any action of the fate kind results in undefined behavior. This applies to both ingress and egress. PASSTHRU, when supported, makes a flow rule non-terminating.
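Since the original example tables are easier to follow alongside code, the following is a hedged, self-contained C sketch of how such an END-terminated action list is laid out and traversed. The enum and struct definitions below are simplified stand-ins for the real ones in rte_flow.h (only END = 0 is mandated by the API; the other values are illustrative), and the queue10/effective_action_count names are made up for this sketch, not part of the API:

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for definitions from <rte_flow.h>. Only END's
 * numeric value (0) is mandated by the API; the rest are illustrative. */
enum rte_flow_action_type {
    RTE_FLOW_ACTION_TYPE_END = 0,
    RTE_FLOW_ACTION_TYPE_VOID,
    RTE_FLOW_ACTION_TYPE_PASSTHRU,
    RTE_FLOW_ACTION_TYPE_QUEUE,
    RTE_FLOW_ACTION_TYPE_DROP,
};

struct rte_flow_action_queue {
    uint16_t index; /* queue index to assign packets to */
};

struct rte_flow_action {
    enum rte_flow_action_type type;
    const void *conf; /* associated configuration object, or NULL */
};

/* A terminating action list: assign matching packets to queue 10. */
static const struct rte_flow_action_queue queue10 = { .index = 10 };
static const struct rte_flow_action queue_actions[] = {
    { RTE_FLOW_ACTION_TYPE_QUEUE, &queue10 },
    { RTE_FLOW_ACTION_TYPE_END, NULL }, /* mandatory END marker */
};

/* Walk a list the way a PMD does: process entries in order until END,
 * skipping VOID. Returns the number of effective (non-VOID) actions. */
static size_t
effective_action_count(const struct rte_flow_action *actions)
{
    size_t n = 0;

    for (; actions->type != RTE_FLOW_ACTION_TYPE_END; ++actions)
        if (actions->type != RTE_FLOW_ACTION_TYPE_VOID)
            ++n;
    return n;
}
```

A VOID entry inserted between the QUEUE and END entries would be skipped without affecting the result, which is what makes VOID convenient for patching lists in place.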
Like matching patterns, action lists are terminated by END items. Example of an action that redirects packets to queue index 10: Actions are performed in list order: In the above example, while DROP and QUEUE must be performed in order, both have to happen before reaching END. Only QUEUE has a visible effect. Note that such a list may be thought of as ambiguous and rejected on that basis. As previously described, all actions must be taken into account. This effectively duplicates traffic to both queues. The above example also shows that VOID is ignored. 12.2.8. Action types Common action types are described in this section. Like pattern item types, this list is not exhaustive as new actions will be added in the future. 12.2.8.1. Action: END End marker for action lists. Prevents further processing of actions, thereby ending the list. - Its numeric value is 0 for convenience. - PMD support is mandatory. - No configurable properties. 12.2.8.2. Action: VOID Used as a placeholder for convenience. It is ignored and simply discarded by PMDs. - PMD support is mandatory. - No configurable properties. 12.2.8.3. Action: PASSTHRU Leaves traffic up for additional processing by subsequent flow rules; makes a flow rule non-terminating. - No configurable properties. Example to copy a packet to a queue and continue processing by subsequent flow rules: 12.2.8.4. Action: JUMP Redirects packets to a group on the current device. In a hierarchy of groups, which can be used to represent physical or logical flow groups/tables on the device, this action redirects the matched flow to the specified group on that device. If a matched flow is redirected to a table which does not contain a matching rule for that flow, then the behavior is undefined and the resulting behavior is up to the specific device. Best practice when using groups would be to define a default flow rule for each group which defines the default actions in that group, so that a consistent behavior is defined.
Defining an action for a matched flow in a group to jump to a group which is higher in the group hierarchy may not be supported by physical devices, depending on how groups are mapped to the physical devices. In the definitions of jump actions, applications should be aware that it may be possible to define flow rules which trigger an undefined behavior causing flows to loop between groups. 12.2.8.5. Action: MARK Attaches an integer value to packets and sets the PKT_RX_FDIR and PKT_RX_FDIR_ID mbuf flags. This value is arbitrary and application defined. 12.2.8.6. Action: FLAG Flags packets. Similar to Action: MARK without a specific value; only sets the PKT_RX_FDIR mbuf flag. - No configurable properties. 12.2.8.7. Action: QUEUE Assigns packets to a given queue index. 12.2.8.8. Action: DROP Drop packets. - No configurable properties. 12.2.8.9. Action: COUNT Adds a counter action to a matched flow. If more than one count action is specified in a single flow rule, then each action must specify a unique id. Counters can be retrieved and reset through rte_flow_query(), see struct rte_flow_query_count. The shared flag indicates whether the counter is unique to the flow rule the action is specified with, or whether it is a shared counter. For a count action with the shared flag set, a global device namespace is assumed for the counter id, so that any matched flow rules using a count action with the same counter id on the same port will contribute to that counter. For ports within the same switch domain, the counter id namespace extends to all ports within that switch domain. The shared flag is DEPRECATED and the SHARED COUNT action should be used to make shared counters. Query structure to retrieve and reset flow rule counters: 12.2.8.10. Action: RSS Similar to QUEUE, except RSS is additionally performed on packets to spread them among several queues according to the provided parameters. If RSS types that are non-applicable for the matching packets are requested, these RSS types are simply ignored. For example, it happens if: - Hashing of both TCP and UDP ports is requested (only one can be present in a packet). - Requested RSS types contradict the flow rule pattern (e.g. the pattern has a UDP item, but RSS types contain TCP).
If the requested RSS hash types are not supported by the Ethernet device at all (not reported in dev_info.flow_type_rss_offloads), flow creation will fail. Note: the RSS hash result is stored in the hash.rss mbuf field, which overlaps hash.fdir.lo. Since Action: MARK sets the hash.fdir.hi field only, both can be requested simultaneously. Also, regarding the packet encapsulation level: - 0 requests the default behavior. Depending on the packet type, it can mean outermost, innermost, anything in between or even no RSS. - 1 requests RSS to be performed on the outermost packet encapsulation level. - 2 and subsequent values request RSS to be performed on the specified inner packet encapsulation level, from outermost to innermost (lower to higher values). 12.2.8.11. Action: PF Directs matching traffic to the physical function (PF) of the current device. - No configurable properties. 12.2.8.12. Action: VF Directs matching traffic to a given virtual function of the current device. Packets matched by a VF pattern item can be redirected to their original VF ID instead of the specified one. This parameter may not be available and is not guaranteed to work properly if the VF part is matched by a prior flow rule or if packets are not addressed to a VF in the first place. 12.2.8.13. Action: PHY_PORT Directs matching traffic to a given physical port index of the underlying device. See Item: PHY_PORT. 12.2.8.14. Action: PORT_ID Directs matching traffic to a given DPDK port ID. See Item: PORT_ID. 12.2.8.15. 12.2.8.16. Action: SECURITY The following is an example of configuring IPsec inline using the INLINE_CRYPTO security session: The encryption algorithm, keys and salt are part of the opaque rte_security_session. The SA is identified according to the IP and ESP fields in the pattern items. 12.2.8.17. Action: OF_SET_MPLS_TTL Implements OFPAT_SET_MPLS_TTL ("MPLS TTL") as defined by the OpenFlow Switch Specification. 12.2.8.18. Action: OF_DEC_MPLS_TTL Implements OFPAT_DEC_MPLS_TTL ("decrement MPLS TTL") as defined by the OpenFlow Switch Specification. 12.2.8.19. Action: OF_SET_NW_TTL Implements OFPAT_SET_NW_TTL ("IP TTL") as defined by the OpenFlow Switch Specification. 12.2.8.20. Action: OF_DEC_NW_TTL Implements OFPAT_DEC_NW_TTL ("decrement IP TTL") as defined by the OpenFlow Switch Specification. 12.2.8.21.
Action: OF_COPY_TTL_OUT Implements OFPAT_COPY_TTL_OUT ("copy TTL "outwards" – from next-to-outermost to outermost") as defined by the OpenFlow Switch Specification. 12.2.8.22. Action: OF_COPY_TTL_IN Implements OFPAT_COPY_TTL_IN ("copy TTL "inwards" – from outermost to next-to-outermost") as defined by the OpenFlow Switch Specification. 12.2.8.23. Action: OF_POP_VLAN Implements OFPAT_POP_VLAN ("pop the outer VLAN tag") as defined by the OpenFlow Switch Specification. 12.2.8.24. Action: OF_PUSH_VLAN Implements OFPAT_PUSH_VLAN ("push a new VLAN tag") as defined by the OpenFlow Switch Specification. 12.2.8.25. Action: OF_SET_VLAN_VID Implements OFPAT_SET_VLAN_VID ("set the 802.1q VLAN id") as defined by the OpenFlow Switch Specification. 12.2.8.26. Action: OF_SET_VLAN_PCP Implements OFPAT_SET_VLAN_PCP ("set the 802.1q priority") as defined by the OpenFlow Switch Specification. 12.2.8.27. Action: OF_POP_MPLS Implements OFPAT_POP_MPLS ("pop the outer MPLS tag") as defined by the OpenFlow Switch Specification. 12.2.8.28. Action: OF_PUSH_MPLS Implements OFPAT_PUSH_MPLS ("push a new MPLS tag") as defined by the OpenFlow Switch Specification. 12.2.8.29. Action: VXLAN_ENCAP Performs a VXLAN encapsulation action by encapsulating the matched flow in the VXLAN tunnel as defined in the rte_flow_action_vxlan_encap flow items definition. This action modifies the payload of matched flows. The flow definition specified in the rte_flow_action_tunnel_encap action structure must define a valid VXLAN network overlay which conforms with RFC 7348 (Virtual eXtensible Local Area Network (VXLAN): A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks). The pattern must be terminated with the RTE_FLOW_ITEM_TYPE_END item type. 12.2.8.30. Action: VXLAN_DECAP Performs a decapsulation action by stripping all headers of the VXLAN tunnel network overlay from the matched flow.
The flow item pattern defined for the flow rule with which a VXLAN_DECAP action is specified must define a valid VXLAN tunnel as per RFC 7348. If the flow pattern does not specify a valid VXLAN tunnel, then a RTE_FLOW_ERROR_TYPE_ACTION error should be returned. This action modifies the payload of matched flows. 12.2.8.31. Action: NVGRE_ENCAP Performs an NVGRE encapsulation action by encapsulating the matched flow in the NVGRE tunnel as defined in the rte_flow_action_tunnel_encap flow item definition. This action modifies the payload of matched flows. The flow definition specified in the rte_flow_action_tunnel_encap action structure must define a valid NVGRE network overlay which conforms with RFC 7637 (NVGRE: Network Virtualization Using Generic Routing Encapsulation). The pattern must be terminated with the RTE_FLOW_ITEM_TYPE_END item type. 12.2.8.32. Action: NVGRE_DECAP Performs a decapsulation action by stripping all headers of the NVGRE tunnel network overlay from the matched flow. The flow item pattern defined for the flow rule with which an NVGRE_DECAP action is specified must define a valid NVGRE tunnel as per RFC 7637. If the flow pattern does not specify a valid NVGRE tunnel, then a RTE_FLOW_ERROR_TYPE_ACTION error should be returned. This action modifies the payload of matched flows. 12.2.8.33. Action: RAW_ENCAP Adds an outer header whose template is provided in its data buffer, as defined in the rte_flow_action_raw_encap definition. This action modifies the payload of matched flows. The data supplied must be a valid header, either holding layer 2 data in case of adding layer 2 after decapsulation of a layer 3 tunnel (for example MPLSoGRE) or a complete tunnel definition starting from layer 2 and moving to the tunnel item itself. When applied to the original packet, the resulting packet must be a valid packet. 12.2.8.34.
Action: RAW_DECAP Removes the outer header whose template is provided in its data buffer, as defined in the rte_flow_action_raw_decap definition. This action modifies the payload of matched flows. The data supplied must be a valid header, either holding layer 2 data in case of removing layer 2 before encapsulation of a layer 3 tunnel (for example MPLSoGRE) or a complete tunnel definition starting from layer 2 and moving to the tunnel item itself. When applied to the original packet, the resulting packet must be a valid packet. 12.2.8.35. Action: SET_IPV4_SRC Set a new IPv4 source address in the outermost IPv4 header. It must be used with a valid RTE_FLOW_ITEM_TYPE_IPV4 flow pattern item. Otherwise, a RTE_FLOW_ERROR_TYPE_ACTION error will be returned. 12.2.8.36. Action: SET_IPV4_DST Set a new IPv4 destination address in the outermost IPv4 header. It must be used with a valid RTE_FLOW_ITEM_TYPE_IPV4 flow pattern item. Otherwise, a RTE_FLOW_ERROR_TYPE_ACTION error will be returned. 12.2.8.37. Action: SET_IPV6_SRC Set a new IPv6 source address in the outermost IPv6 header. It must be used with a valid RTE_FLOW_ITEM_TYPE_IPV6 flow pattern item. Otherwise, a RTE_FLOW_ERROR_TYPE_ACTION error will be returned. 12.2.8.38. Action: SET_IPV6_DST Set a new IPv6 destination address in the outermost IPv6 header. It must be used with a valid RTE_FLOW_ITEM_TYPE_IPV6 flow pattern item. Otherwise, a RTE_FLOW_ERROR_TYPE_ACTION error will be returned. 12.2.8.39. Action: SET_TP_SRC Set a new source port number in the outermost TCP/UDP header. It must be used with a valid RTE_FLOW_ITEM_TYPE_TCP or RTE_FLOW_ITEM_TYPE_UDP flow pattern item. Otherwise, a RTE_FLOW_ERROR_TYPE_ACTION error will be returned. 12.2.8.40. Action: SET_TP_DST Set a new destination port number in the outermost TCP/UDP header. It must be used with a valid RTE_FLOW_ITEM_TYPE_TCP or RTE_FLOW_ITEM_TYPE_UDP flow pattern item. Otherwise, a RTE_FLOW_ERROR_TYPE_ACTION error will be returned. 12.2.8.41.
Action: MAC_SWAP Swap the source and destination MAC addresses in the outermost Ethernet header. It must be used with a valid RTE_FLOW_ITEM_TYPE_ETH flow pattern item. Otherwise, a RTE_FLOW_ERROR_TYPE_ACTION error will be returned. 12.2.8.42. Action: DEC_TTL Decrease the TTL value. If there is no valid RTE_FLOW_ITEM_TYPE_IPV4 or RTE_FLOW_ITEM_TYPE_IPV6 in the pattern, some PMDs will reject the rule because the behavior will be undefined. 12.2.8.43. Action: SET_TTL Assigns a new TTL value. If there is no valid RTE_FLOW_ITEM_TYPE_IPV4 or RTE_FLOW_ITEM_TYPE_IPV6 in the pattern, some PMDs will reject the rule because the behavior will be undefined. 12.2.8.44. Action: SET_MAC_SRC Set the source MAC address. It must be used with a valid RTE_FLOW_ITEM_TYPE_ETH flow pattern item. Otherwise, a RTE_FLOW_ERROR_TYPE_ACTION error will be returned. 12.2.8.45. Action: SET_MAC_DST Set the destination MAC address. It must be used with a valid RTE_FLOW_ITEM_TYPE_ETH flow pattern item. Otherwise, a RTE_FLOW_ERROR_TYPE_ACTION error will be returned. 12.2.8.46. Action: INC_TCP_SEQ Increase the sequence number in the outermost TCP header. The value to increase the TCP sequence number by is a big-endian 32 bit integer. Using this action on non-matching traffic will result in undefined behavior. 12.2.8.47. Action: DEC_TCP_SEQ Decrease the sequence number in the outermost TCP header. The value to decrease the TCP sequence number by is a big-endian 32 bit integer. Using this action on non-matching traffic will result in undefined behavior. 12.2.8.48. Action: INC_TCP_ACK Increase the acknowledgment number in the outermost TCP header. The value to increase the TCP acknowledgment number by is a big-endian 32 bit integer. Using this action on non-matching traffic will result in undefined behavior. 12.2.8.49. Action: DEC_TCP_ACK Decrease the acknowledgment number in the outermost TCP header. The value to decrease the TCP acknowledgment number by is a big-endian 32 bit integer. Using this action on non-matching traffic will result in undefined behavior. 12.2.8.50.
Action: SET_TAG Set tag. A tag is transient data used during flow matching; it is not delivered to the application. Multiple tags are supported by specifying index. 12.2.8.51. Action: SET_META Set metadata. Item META matches metadata. Metadata set by the mbuf metadata field with the PKT_TX_DYNF_METADATA flag on egress will be overridden by this action. On ingress, the metadata will be carried by the metadata dynamic field of rte_mbuf, which can be accessed by RTE_FLOW_DYNF_METADATA(). The PKT_RX_DYNF_METADATA flag will be set along with the data. The mbuf dynamic field must be registered by calling rte_flow_dynf_metadata_register() prior to using the SET_META action. In the hairpin case with Tx explicit flow mode, metadata can optionally be used to connect the Rx and Tx flows, if it can be propagated from the Rx to the Tx path. 12.2.8.52. Action: SET_IPV4_DSCP Set IPv4 DSCP. Modify the DSCP in the IPv4 header. It must be used with RTE_FLOW_ITEM_TYPE_IPV4 in the pattern. Otherwise, a RTE_FLOW_ERROR_TYPE_ACTION error will be returned. 12.2.8.53. Action: SET_IPV6_DSCP Set IPv6 DSCP. Modify the DSCP in the IPv6 header. It must be used with RTE_FLOW_ITEM_TYPE_IPV6 in the pattern. Otherwise, a RTE_FLOW_ERROR_TYPE_ACTION error will be returned. 12.2.8.54. Action: AGE Set an ageing timeout configuration for a flow. The event RTE_ETH_EVENT_FLOW_AGED will be reported if the timeout passes without any matching on the flow. Query structure to retrieve ageing status information of a shared AGE action, or a flow rule using the AGE action: 12.2.8.55. Action: SAMPLE Adds a sample action to a matched flow. The matched packets will be duplicated with the specified ratio and the duplicates applied with their own set of actions, which must include a fate action; the fraction of packets sampled equals '1/ratio'. All packets continue to the target destination. When the ratio is set to 1, the packets will be 100% mirrored. actions represent the different set of actions for the sampled or mirrored packets, and must have a fate action. 12.2.8.57.
Action: MODIFY_FIELD Modify the dst field according to the op selected (set, addition, subtraction) with width bits of data from the src field. Any arbitrary header field (as well as mark, metadata or tag values) can be used as both source and destination fields as set by field. The immediate value RTE_FLOW_FIELD_VALUE (or a pointer to it, RTE_FLOW_FIELD_POINTER) is allowed as a source only. RTE_FLOW_FIELD_START is used to point to the beginning of a packet. op selects the operation to perform on a destination field. - set copies the data from the src field to the dst field. - add adds together dst and src and stores the result into dst. - sub subtracts src from dst and stores the result into dst. width defines the number of bits to use from the src field. level is used to access any packet field on any encapsulation level as well as any tag element in the tag array. - 0 means the default behavior. Depending on the packet type, it can mean outermost, innermost or anything in between. - 1 requests access to the outermost packet encapsulation level. - 2 and subsequent values request access to the specified packet encapsulation level, from outermost to innermost (lower to higher values). For the tag array (in case multiple tags are supported and present) level translates directly into the array index. offset specifies the number of bits to skip from a field's start. This allows performing a partial copy of the needed part or dividing a big packet field into multiple smaller fields. Alternatively, offset allows going past the specified packet field boundary to copy a field to an arbitrary place in a packet, essentially providing a way to copy any part of a packet to any other part of it. value sets an immediate value to be used as a source or points to a location of the value in memory. It is used instead of level and offset for RTE_FLOW_FIELD_VALUE and RTE_FLOW_FIELD_POINTER respectively. 12.2.9.
Negative types All specified pattern items (enum rte_flow_item_type) and actions (enum rte_flow_action_type) use positive identifiers. The negative space is reserved for dynamic types generated by PMDs during run-time. PMDs may encounter them as a result but must not accept negative identifiers they are not aware of. A method to generate them remains to be defined. An application may use PMD dynamic items or actions in flow rules. In that case the size of the configuration object in the dynamic element must be a pointer size. 12.2.10. Planned types Pattern item types will be added as new protocols are implemented. Support for variable headers through dedicated pattern items is planned, for example in order to match specific IPv4 options and IPv6 extension headers, which would be stacked after IPv4/IPv6 items. Other action types are planned but are not defined yet. These include the ability to alter packet data in several ways, such as performing encapsulation/decapsulation of tunnel headers. 12.3. Rules management A rather simple API with few functions is provided to fully manage flow rules. Each created flow rule is associated with an opaque, PMD-specific handle pointer. The application is responsible for keeping it until the rule is destroyed. Flow rules are represented by struct rte_flow objects. 12.3.1. Validation Given that expressing a definite set of device capabilities is not practical, a dedicated function is provided to check if a flow rule is supported and can be created. int rte_flow_validate(uint16_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error); The flow rule is validated for correctness and whether it could be accepted by the device given sufficient resources. The rule is checked against the current device mode and queue configuration. The flow rule may also optionally be validated against existing flow rules and device resources. This function has no effect on the target device.
The returned value is guaranteed to remain valid only as long as no successful calls to rte_flow_create() or rte_flow_destroy() are made in the meantime and no device parameter affecting flow rules in any way is modified, due to possible collisions or resource limitations (although in such cases EINVAL should not be returned).

Return values:
- 0 if the flow rule is valid and can be created. A negative errno value otherwise (rte_errno is also set); the following errors are defined:
- -ENOSYS: underlying device does not support this functionality.
- -EINVAL: unknown or invalid rule specification.
- -ENOTSUP: valid but unsupported rule specification (e.g. partial bit-masks are unsupported).
- -EEXIST: collision with an existing rule. Only returned if the device supports flow rule collision checking and there was a flow rule collision. Not receiving this return code is no guarantee that creating the rule will not fail due to a collision.
- -ENOMEM: not enough memory to execute the function, or, if the device supports resource validation, resource limitation on the device.
- -EBUSY: action cannot be performed due to busy device resources; may succeed if the affected queues or even the entire port are in a stopped state (see rte_eth_dev_rx_queue_stop() and rte_eth_dev_stop()).

12.3.2. Creation

Creating a flow rule is similar to validating one, except the rule is actually created and a handle returned.

struct rte_flow *
rte_flow_create(uint16_t port_id,
                const struct rte_flow_attr *attr,
                const struct rte_flow_item pattern[],
                const struct rte_flow_action actions[],
                struct rte_flow_error *error);

Return values: a valid handle in case of success, NULL otherwise; rte_errno is set to the positive version of one of the error codes defined for rte_flow_validate().

12.3.3. Destruction

Flow rule destruction is not automatic, and a queue or a port should not be released if any flow rules are still attached to them. Applications must take care of performing this step before releasing resources.
int rte_flow_destroy(uint16_t port_id,
                     struct rte_flow *flow,
                     struct rte_flow_error *error);

Failure to destroy a flow rule handle may occur when other flow rules depend on it, and destroying it would result in an inconsistent state. This function is only guaranteed to succeed if handles are destroyed in reverse order of their creation.

Arguments:
- port_id: port identifier of Ethernet device.
- flow: flow rule handle to destroy.
- error: perform verbose error reporting if not NULL. PMDs initialize this structure in case of error only.

Return values:
- 0 on success, a negative errno value otherwise and rte_errno is set.

12.3.4. Flush

Convenience function to destroy all flow rule handles associated with a port. They are released as with successive calls to rte_flow_destroy().

int rte_flow_flush(uint16_t port_id, struct rte_flow_error *error);

In the unlikely event of failure, handles are still considered destroyed and no longer valid, but the port must be assumed to be in an inconsistent state.

Arguments:
- port_id: port identifier of Ethernet device.
- error: perform verbose error reporting if not NULL. PMDs initialize this structure in case of error only.

Return values:
- 0 on success, a negative errno value otherwise and rte_errno is set.

12.3.5. Query

Query an existing flow rule. This function allows retrieving flow-specific data such as counters. Data is gathered by special actions which must be present in the flow rule definition.

int rte_flow_query(uint16_t port_id,
                   struct rte_flow *flow,
                   const struct rte_flow_action *action,
                   void *data,
                   struct rte_flow_error *error);

Arguments:
- port_id: port identifier of Ethernet device.
- flow: flow rule handle to query.
- action: action to query; this must match the prototype from the flow rule.
- data: pointer to storage for the associated query data type.
- error: perform verbose error reporting if not NULL. PMDs initialize this structure in case of error only.

Return values:
- 0 on success, a negative errno value otherwise and rte_errno is set.
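Putting the management calls together, a typical application sequence is validate, create, query, destroy. The following is an illustrative, untested sketch: it assumes a DPDK environment with an initialized and started port and a device that supports the COUNT and QUEUE actions, and the function name count_tcp_flow is made up for the example.

#include <inttypes.h>
#include <stdio.h>
#include <rte_flow.h>

/* Illustrative sketch only: match TCP-over-IPv4 ingress traffic,
 * steer it to queue 0 with a counter attached, then read it back. */
static int count_tcp_flow(uint16_t port_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_TCP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_COUNT },
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_query_count stats = { 0 };
    struct rte_flow_error error = { 0 };
    struct rte_flow *flow;

    /* Check the rule first; this has no effect on the device. */
    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) != 0) {
        fprintf(stderr, "rule not supported: %s\n",
                error.message ? error.message : "(no message)");
        return -1;
    }

    flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
    if (flow == NULL)
        return -1; /* rte_errno holds the positive error code */

    /* ... traffic runs; later, read the data gathered by COUNT.
     * The action passed to the query must match the one in the rule. */
    if (rte_flow_query(port_id, flow, &actions[0], &stats, &error) == 0)
        printf("hits: %" PRIu64 ", bytes: %" PRIu64 "\n",
               stats.hits, stats.bytes);

    /* Handles are not destroyed automatically. */
    return rte_flow_destroy(port_id, flow, &error);
}

Note how the sketch follows the guarantees described above: it validates before committing resources, treats the handle as opaque, and destroys it explicitly before the port would be released.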
12.4. Flow isolated mode The general expectation for ingress traffic is that flow rules process it first; the remaining unmatched or pass-through traffic usually ends up in a queue (with or without RSS, locally or in some sub-device instance) depending on the global configuration settings of a port. While fine from a compatibility standpoint, this approach makes drivers more complex as they have to check for possible side effects outside of this API when creating or destroying flow rules. It results in a more limited set of available rule types due to the way device resources are assigned (e.g. no support for the RSS action even on capable hardware). Given that nonspecific traffic can be handled by flow rules as well, isolated mode is a means for applications to tell a driver that ingress on the underlying port must be injected from the defined flow rules only; that no default traffic is expected outside those rules. This has the following benefits: - Applications get finer-grained control over the kind of traffic they want to receive (no traffic by default). - More importantly they control at what point nonspecific traffic is handled relative to other flow rules, by adjusting priority levels. - Drivers can assign more hardware resources to flow rules and expand the set of supported rule types. Because toggling isolated mode may cause profound changes to the ingress processing path of a driver, it may not be possible to leave it once entered. Likewise, existing flow rules or global configuration settings may prevent a driver from entering isolated mode. Applications relying on this mode are therefore encouraged to toggle it as soon as possible after device initialization, ideally before the first call to rte_eth_dev_configure() to avoid possible failures due to conflicting settings. Once effective, the following functionality has no effect on the underlying port and may return errors such as ENOTSUP (“not supported”): - Toggling promiscuous mode. 
- Toggling allmulticast mode.
- Configuring MAC addresses.
- Configuring multicast addresses.
- Configuring VLAN filters.
- Configuring global RSS settings.

int rte_flow_isolate(uint16_t port_id, int set, struct rte_flow_error *error);

Arguments:
- port_id: port identifier of Ethernet device.
- set: nonzero to enter isolated mode, attempt to leave it otherwise.
- error: perform verbose error reporting if not NULL. PMDs initialize this structure in case of error only.

Return values:
- 0 on success, a negative errno value otherwise and rte_errno is set.

12.5. Verbose error reporting

The defined errno values may not be accurate enough for users or application developers who want to investigate issues related to flow rules management. A dedicated error object is defined for this purpose:

enum rte_flow_error_type {
    RTE_FLOW_ERROR_TYPE_NONE, /**< No error. */
    RTE_FLOW_ERROR_TYPE_UNSPECIFIED, /**< Cause unspecified. */
    RTE_FLOW_ERROR_TYPE_HANDLE, /**< Flow rule (handle). */
    RTE_FLOW_ERROR_TYPE_ATTR_GROUP, /**< Group field. */
    RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, /**< Priority field. */
    RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, /**< Ingress field. */
    RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, /**< Egress field. */
    RTE_FLOW_ERROR_TYPE_ATTR, /**< Attributes structure. */
    RTE_FLOW_ERROR_TYPE_ITEM_NUM, /**< Pattern length. */
    RTE_FLOW_ERROR_TYPE_ITEM, /**< Specific pattern item. */
    RTE_FLOW_ERROR_TYPE_ACTION_NUM, /**< Number of actions. */
    RTE_FLOW_ERROR_TYPE_ACTION, /**< Specific action. */
};

struct rte_flow_error {
    enum rte_flow_error_type type; /**< Cause field and error types. */
    const void *cause; /**< Object responsible for the error. */
    const char *message; /**< Human-readable error message. */
};

Error type RTE_FLOW_ERROR_TYPE_NONE stands for no error, in which case the remaining fields can be ignored. Other error types describe the type of the object pointed to by cause. If non-NULL, cause points to the object responsible for the error.
For a flow rule, this may be a pattern item or an individual action. If non-NULL, message provides a human-readable error message. This object is normally allocated by applications and set by PMDs in case of error. The message points to a constant string which does not need to be freed by the application; however, its pointer can be considered valid only as long as its associated DPDK port remains configured. Closing the underlying device or unloading the PMD invalidates it.

12.6. Helpers

12.6.1. Error initializer

static inline int
rte_flow_error_set(struct rte_flow_error *error,
                   int code,
                   enum rte_flow_error_type type,
                   const void *cause,
                   const char *message);

This function initializes error (if non-NULL) with the provided parameters and sets rte_errno to code. A negative error code is then returned.

12.6.2. Object conversion

int rte_flow_conv(enum rte_flow_conv_op op,
                  void *dst,
                  size_t size,
                  const void *src,
                  struct rte_flow_error *error);

Convert src to dst according to operation op. Possible operations include:
- Attributes, pattern item or action duplication.
- Duplication of an entire pattern or list of actions.
- Duplication of a complete flow rule description.
- Pattern item or action name retrieval.

12.6.3. Tunneled traffic offload

The rte_flow API provides the building blocks for vendor-agnostic flow classification offloads. The rte_flow “patterns” and “actions” primitives are fine-grained, giving DPDK applications the flexibility to offload network stacks and complex pipelines. Applications wishing to offload tunneled traffic are required to use the rte_flow primitives, such as group, meta, mark, tag, and others, to model their high-level objects.

The hardware model design for high-level software objects is not trivial. Furthermore, an optimal design is often vendor-specific. When hardware offloads tunneled traffic in multi-group logic, partially offloaded packets may arrive at the application after they were modified in hardware.
In this case, the application may need to restore the original packet headers. Consider the following sequence: the application decaps a packet in one group and jumps to a second group where it tries to match on a 5-tuple; the match misses and the packet is sent to the application. In this case, the application does not receive the original packet but a modified one. Also, in this case, the application cannot match on the outer header fields, such as the VXLAN vni and the 5-tuple.

There are several possible ways to use rte_flow “patterns” and “actions” to resolve the issues above. For example:

1. Map the headers to hardware registers using the rte_flow_action_mark/rte_flow_action_tag/rte_flow_set_meta objects.
2. Apply the decap only at the last offload stage, after all the “patterns” have been matched and the packet will be fully offloaded.

Every approach has its pros and cons and is highly dependent on the hardware vendor. For example, some hardware may have a limited number of registers, while other hardware may not support inner actions and must decap before accessing inner headers.

The tunnel offload model resolves these issues. The model goals are:

1. Provide a unified application API to offload tunneled traffic that is capable of matching on outer headers after decap.
2. Allow the application to restore the outer header of partially offloaded packets.

The tunnel offload model does not introduce new elements to the existing RTE flow model and is implemented as a set of helper functions. For the application to work with the tunnel offload API it has to adjust flow rules in multi-table tunnel offload in the following way:

1. Remove the explicit call to the decap action and replace it with PMD actions obtained from the rte_flow_tunnel_decap_set() helper.
2. Add PMD items obtained from the rte_flow_tunnel_match() helper to all other rules in the tunnel offload sequence.
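The two adjustments above can be sketched as follows. This is an untested sketch: the helper signatures shown follow the DPDK 20.11-era tunnel offload API and may differ between releases, and the VXLAN setup and function name prepare_tunnel_offload are made up for the example.

#include <rte_flow.h>

/* Illustrative sketch: obtain PMD-provided actions/items for a tunnel.
 * Error handling, rule creation and array release are omitted. */
static int prepare_tunnel_offload(uint16_t port_id)
{
    struct rte_flow_tunnel tunnel = {
        .type = RTE_FLOW_ITEM_TYPE_VXLAN, /* tunnel parameters set up front */
    };
    struct rte_flow_action *pmd_actions = NULL;
    struct rte_flow_item *pmd_items = NULL;
    uint32_t n_actions = 0, n_items = 0;
    struct rte_flow_error error = { 0 };

    /* Step 1: replaces the explicit decap action; pmd_actions is
     * prepended to the actions of the rule that performs the decap. */
    if (rte_flow_tunnel_decap_set(port_id, &tunnel, &pmd_actions,
                                  &n_actions, &error) != 0)
        return -1;

    /* Step 2: pmd_items is prepended to the pattern of every other
     * rule in the tunnel offload sequence. */
    if (rte_flow_tunnel_match(port_id, &tunnel, &pmd_items,
                              &n_items, &error) != 0)
        return -1;

    /* ... create the flow rules, then release pmd_actions and
     * pmd_items as required by the model. */
    return 0;
}

The release and lifetime rules for the pmd_actions and pmd_items arrays are listed in the model requirements that follow.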
The model requirements:

- The application must initialize the rte_tunnel object with tunnel parameters before calling rte_flow_tunnel_decap_set() and rte_flow_tunnel_match().
- The PMD actions array obtained from rte_flow_tunnel_decap_set() must be released by the application with a rte_flow_action_release() call.
- The PMD items array obtained with rte_flow_tunnel_match() must be released by the application with a rte_flow_item_release() call.
- The application can release the PMD items and actions after the rule was created. However, if the application needs to create an additional rule for the same tunnel it will need to obtain the PMD items again.
- The application cannot destroy the rte_tunnel object before it releases all PMD actions and PMD items referencing that tunnel.

12.7. Caveats

- DPDK does not keep track of flow rules definitions or flow rule objects automatically. Applications may keep track of the former and must keep track of the latter. PMDs may also do it for internal needs; however, this must not be relied on by applications.
- Flow rules are not maintained between successive port initializations. An application exiting without releasing them and restarting must re-create them from scratch.
- API operations are synchronous and blocking (EAGAIN cannot be returned).
- Stopping the data path (TX/RX) should not be necessary when managing flow rules. If this cannot be achieved naturally or with workarounds (such as temporarily replacing the burst function pointers), an appropriate error code must be returned (EBUSY).
- Applications, not PMDs, are responsible for maintaining flow rules configuration when closing, stopping or restarting a port or performing other actions which may affect them. Applications must assume that after port close, stop or restart all flows related to that port are not valid, hardware rules are destroyed and relevant PMD resources are released.
For devices exposing multiple ports sharing global settings affected by flow rules:

- All ports under DPDK control must behave consistently; PMDs are responsible for making sure that existing flow rules on a port are not affected by other ports.
- Ports not under DPDK control (unaffected or handled by other applications) are the user's responsibility. They may affect existing flow rules and cause undefined behavior. PMDs aware of this may prevent flow rules creation altogether in such cases.

12.8. PMD interface

The PMD interface is defined in rte_flow_driver.h. It is not subject to API/ABI versioning constraints as it is not exposed to applications and may evolve independently. The PMD interface is based on callbacks pointed to by the struct rte_flow_ops.

- PMD callbacks implement exactly the interface described in Rules management, except for the port ID argument, which has already been converted to a pointer to the underlying struct rte_eth_dev.
- Public API functions do not process flow rules definitions at all before calling PMD functions (no basic error checking, no validation whatsoever). They only make sure these callbacks are non-NULL or return the ENOSYS (function not supported) error.

This interface additionally defines the following helper function:

- rte_flow_ops_get(): get the generic flow operations structure from a port.

If the PMD interfaces do not support re-entrancy/multi-thread safety, the rte_flow API functions will protect threads with a per-port mutex. The application can check whether RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE is set in dev_flags, meaning the PMD is thread-safe regarding rte_flow, in which case the API-level protection is disabled. Please note that this API-level mutex protects only the rte_flow functions; other control path functions are not in scope. More will be added over time.

12.9. Device compatibility

No known implementation supports all the described features.
Unsupported features or combinations are not expected to be fully emulated in software by PMDs for performance reasons. Partially supported features may be completed in software as long as hardware performs most of the work (such as queue redirection and packet recognition). However, PMDs are expected to do their best to satisfy application requests by working around hardware limitations as long as doing so does not affect the behavior of existing flow rules. The following sections provide a few examples of such cases and describe how PMDs should handle them; they are based on limitations built into the previous APIs.

12.9.1. Global bit-masks

Each flow rule comes with its own, per-layer bit-masks, while hardware may support only a single, device-wide bit-mask for a given layer type, so that two IPv4 rules cannot use different bit-masks. The expected behavior in this case is that PMDs automatically configure global bit-masks according to the needs of the first flow rule created. Subsequent rules are allowed only if their bit-masks match those; the EEXIST error code should be returned otherwise.

12.9.2. Unsupported layer types

Many protocols can be simulated by crafting patterns with the Item: RAW type. PMDs can rely on this capability to simulate support for protocols with headers not directly recognized by hardware.

12.9.3. ANY pattern item

This pattern item stands for anything, which can be difficult to translate to something hardware would understand, particularly if followed by more specific types. Consider a pattern such as ETH / ANY / TCP. Knowing that TCP does not make sense with something other than IPv4 and IPv6 as L3, such a pattern may be translated to two flow rules instead: ETH / IPV4 / TCP and ETH / IPV6 / TCP. Note that as soon as an ANY rule covers several layers, this approach may yield a large number of hidden flow rules. It is thus suggested to only support the most common scenarios (anything as L2 and/or L3).

12.9.4.
Unsupported actions

- When combined with Action: QUEUE, packet counting (Action: COUNT) and tagging (Action: MARK or Action: FLAG) may be implemented in software as long as the target queue is used by a single rule.
- When a single target queue is provided, Action: RSS can also be implemented through Action: QUEUE.

12.9.5. Flow rules priority

While it would naturally make sense, flow rules cannot be assumed to be processed by hardware in the same order as their creation for several reasons:

- They may be managed internally as a tree or a hash table instead of a list.
- Removing a flow rule before adding another one can either put the new rule at the end of the list or reuse a freed entry.
- Duplication may occur when packets are matched by several rules.

For overlapping rules (particularly in order to use Action: PASSTHRU), predictable behavior is only guaranteed by using different priority levels. Priority levels are not necessarily implemented in hardware, or may be severely limited (e.g. a single priority bit). For these reasons, priority levels may be implemented purely in software by PMDs:

- For devices expecting flow rules to be added in the correct order, PMDs may destroy and re-create existing rules after adding a new one with a higher priority.
- A configurable number of dummy or empty rules can be created at initialization time to save high priority slots for later.
- In order to save priority levels, PMDs may evaluate whether rules are likely to collide and adjust their priority accordingly.

12.10. Future evolutions

- A device profile selection function which could be used to force a permanent profile instead of relying on its automatic configuration based on existing flow rules.
- A method to optimize rte_flow rules with specific pattern items and action types generated on the fly by PMDs. DPDK should assign negative numbers to these so as not to collide with the existing types. See Negative types.
- Adding specific egress pattern items and actions as described in Attribute: Traffic direction. - Optional software fallback when PMDs are unable to handle requested flow rules so applications do not have to implement their own.
http://doc.dpdk.org/guides/prog_guide/rte_flow.html
Constrained fits to data

Posted June 11, 2013 at 07:39 PM | categories: data analysis, optimization
Updated June 12, 2013 at 08:31 AM

Our objective here is to fit a quadratic function in the least-squares sense to some data, but we want to constrain the fit so that the function has specific values at the end-points. The application is to fit a function to the lattice constant of an alloy at different compositions. We constrain the fit because we know the lattice constants of the pure metals, which are at the end-points of the fit, and we want these to be correct.

We define the alloy composition in terms of the mole fraction of one species, e.g. \(A_xB_{1-x}\). For \(x=0\), the alloy is pure B, whereas for \(x=1\) the alloy is pure A. According to Vegard's law the lattice constant is a linear composition-weighted average of the pure-component lattice constants, but sometimes small deviations are observed. Here we will fit a quadratic function that is constrained to give the pure-metal lattice constants at the end points.

The quadratic function is \(y = a x^2 + b x + c\). One constraint is at \(x=0\), where \(y = c\), so \(c\) is the lattice constant of pure B. The second constraint is at \(x=1\), where \(a + b + c\) equals the lattice constant of pure A. Thus, there is only one degree of freedom: \(c = LC_B\), and \(b = LC_A - c - a\), so \(a\) is our only variable.

We will solve this problem by minimizing the summed squared error between the fit and the data. We use the fmin function in scipy.optimize. First we create a fit function that encodes the constraints. Then we create an objective function that will be minimized. We have to make a guess about the value of \(a\) that minimizes the summed squared error. A line fits the data moderately well, so we guess a small value, i.e. near zero, for \(a\). Here is the solution.
import numpy as np
import matplotlib.pyplot as plt

# Data to fit to
# x=0 is pure B
# x=1 is pure A
X = np.array([0.0, 0.1, 0.25, 0.5, 0.6, 0.8, 1.0])
Y = np.array([3.9, 3.89, 3.87, 3.78, 3.75, 3.69, 3.6])

def func(a, XX):
    LC_A = 3.6
    LC_B = 3.9
    # constraints: c is the pure-B lattice constant,
    # and a + b + c is the pure-A lattice constant
    c = LC_B
    b = LC_A - c - a
    yfit = a * XX**2 + b * XX + c
    return yfit

def objective(a):
    'function to minimize'
    SSE = np.sum((Y - func(a, X))**2)
    return SSE

from scipy.optimize import fmin
a_fit = fmin(objective, 0)

plt.plot(X, Y, 'bo')
x = np.linspace(0, 1)
plt.plot(x, func(a_fit, x))
plt.savefig('images/constrained-quadratic-fit.png')

Optimization terminated successfully.
         Current function value: 0.000445
         Iterations: 19
         Function evaluations: 38

Here is the result: you can see that the fit passes through the end-points as prescribed.

Copyright (C) 2013 by John Kitchin. See the License for information about copying.
http://kitchingroup.cheme.cmu.edu/blog/2013/06/11/Constrained-fits-to-data/
pwnd solution in Speedy category for Pawn Brotherhood by tarikki

def safe_pawns(pawns):
    safe = 0
    pp = []  # possible protectors
    for pawn in pawns:
        # a possible protector would be one down the alphabet, one line down
        pp.append(chr(ord(pawn[0]) - 1) + str(int(pawn[1]) - 1))
        # or one up the alphabet, one line down
        pp.append(chr(ord(pawn[0]) + 1) + str(int(pawn[1]) - 1))
        if any(p in pawns for p in pp):  # if the list of pawns has any of these two,
            safe += 1                    # the pawn in question is safe
        pp = []  # remember to reset the list
    return safe

Aug. 17, 2014
https://py.checkio.org/mission/pawn-brotherhood/publications/tarikki/python-3/pwnd/share/21e9d9f43a3c74a91c4330950b53743a/
This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project. "-g" option enables var_tracking and long compilation on Darwin - x86_64-apple-darwin10.8.0 404 Error on Page Re: [Bug driver/49858] lost ability to use driver for external language support? [C]: Unnecessary int(16)->long(32) promotion: Bug in C front end? Re: [Gm2] gnu modula-2 + cygwin compilerfehler.... mit loesung [gnu.org #702521] Re: Romain Geissler copyright assignment [google] Last 4.6 merge google/main -> google/gcc-4_6 [google] Merge from gcc-4_6-branch [google] Merged gcc-4_6-branch into google/gcc-4_6 [google] Merged google/integration -> google/main [google] Merged trunk -> google/integration [Melt] Runfile Mode and rest of compilation [PATCH, rs6000] -mno-sched-prolog vs .debug_frame [PATCH] for Re: 404 Error on Page Re: [patches] Re: Cross compiler build instructions - PowerPC [pph] cache replace next insert by [pph] Merged trunk -> pph [pph] Merged trunk->pph [pph] Re-instantiating namespaces and symbols [pph] Running a single test in pph.exp RE: [RFC] Add middle end hook for stack red zone size [RFC] Remove -freorder-blocks-and-partition [rs6000] -mno-sched-prolog vs .debug_frame About nested Designated Initializers Ada boolean type Added more notes and group picture to GCC gathering wiki Advice for changing OpenMP translation All my bootstraps fail with: /usr/bin/ld: cannot find crti.o in stage 2. ANN: gcc-python-plugin 0.5 Announce: GCC MELT plugin 0.8 rc3 for GCC 4.6 announce: MELT plugin 0.8 for GCC 4.6 Announce: MELT plugin 0.8rc1 for GCC 4.6 announce: MELT plugin 0.8rc2 for 4.6 Assignment Operator Bug Bootstrap failure with doloop optimization on ARM Re: Bug#629142: autogen: FTBFS: Various aborts C++ bootstrap of GCC - still useful ? 
C++ mangling, function name to mangled name (or tree) C99 Status - inttypes.h cachecc1 query cgraph callees availability conversion: immediate-value -> memory ref Re: Cross compiler build instructions - PowerPC Defining constraint for registers tuple Does IRA support spilling to registers instead of stack? DSL2011 paper about MELT Du Toan G8, Phan mem du toan cong trinh uu viet nhat hien nay! Flags and pph... frame pointer must be single register? gcc 4.6.1 expand is playing tricks on me GCC 4.6.1 likes to rename my functions gcc bitfield order control GCC build time using bash vs. dash GCC MELT group on googlegroups gcc-4.4-20110705 is now available gcc-4.4-20110712 is now available gcc-4.4-20110719 is now available gcc-4.4-20110726 is now available gcc-4.5-20110707 is now available gcc-4.5-20110714 is now available gcc-4.5-20110721 is now available gcc-4.5-20110728 is now available gcc-4.6-20110701 is now available gcc-4.6-20110708 is now available gcc-4.6-20110715 is now available gcc-4.6-20110722 is now available gcc-4.6-20110729 is now available gcc-4.7-20110702 is now available gcc-4.7-20110709 is now available gcc-4.7-20110716 is now available gcc-4.7-20110723 is now available gcc-4.7-20110730 is now available gcc-python-plugin status update: static analysis, and HTML reports Fwd: Google Summer of Code 2011 Doc Camp 17 October - 21 October Re: GSOC - Student Roundup hash signature of cc1 etc....? Help with ivopts Hey gcc hi all Hosting new mirror. How to add a new pass? How to fix my account on gnu.org? How to force an indirect jump? Improve addsi3 for CONST_INT Fwd: Interested In Doing Documentation Project IPA and LTO IRA vs CANNOT_CHANGE_MODE_CLASS, + 4.7 IRA regressions? IRA: matches insn even though !reload_in_progress Is libiberty's splay-tree.c really a splay tree? 
libstdc++-6.dll doesn't export functions -- gcc-4.6.1 and host=i686-pc-mingw32 Linemap and pph The Linux binutils 2.21.53.0.1 is released Re: Long paths with ../../../../ throughout MELT plugin 0.8rc for 4.6 on 32 bits x86 MELT plugin 0.8rc4 for 4.6 merged GCC trunk 176387 into MELT branch Multi-register location descriptors in dwarf2out.c Music composer for films,games and commercials Need clarification for PPC floating Point compiler options onlinedocs formated text too small to read PARM_DECL to SSA_NAME PATCH RFA: Build stages 2 and 3 with C++ Re: Re: patch: don't issue -Wreorder warnings when order doesn't matter patch: don't issue -Wreorder warnings when order doesn't matter Performance degradation on g++ 4.6 Please apply toplevel patches to both gcc and src possible race condition in make install? Pta_flags enum overflow in i386.c Question about SMS scheduling windows Question on missed insn combine optimization. Register pressure analysis Re: RFC: [GUPC] UPC-related changes RFC: PATCH: Require and use int64 for x86 options RFH: Impose code-movement restrictions and value assumption (for ASYNCHRONOUS/Coarrays) Re: Richard Sandiford appointed RTL maintainer Romain Geissler copyright assignment RTEMS Port of GCJ Progress Report splitting add instructions TARGET_FLAGS_REGNUM breaks gcc testresults limit? Three new additional global reviewers tree_code enum usage in hooks.c trunk merged into MELT branch, and improved build unicode in gcc 4.6.1 output Updating the CFG after function modifcation Validity of __asm__ transformation with "m" reference Re: viewcvs: Python error A visualization of GCC's passes, as a subway map Vladimir Makarov appointed Register Allocation Maintainer write after approval status Re: X32 project status update Your computers are a target for theft. Don't be a victim!
http://gcc.gnu.org/ml/gcc/2011-07/subjects.html
Download Setup + Crack > DOWNLOAD (Mirror #1)

Defending the Elden Ring, Elden Lords have risen from the dust and made a name for themselves. The Elden Ring is a world of heroes, led by the 12 Elden Lords, who will live their lives for the future of the Elden Ring. They have fought and finally defeated the dark lord, Garai, but the true battle still begins. The forces of Vraska the 1,000-Year Witch have already begun their attacks. Orr, the Mage, has also unleashed the elemental powers of war, and humanity and all of the Elden Ring is at stake! Select the path that each of the twelve Elden Lords will take in the upcoming battle. A war is raging around them and a clash of fate awaits. Only you can help make the difference. The fate of the entire world is in your hands. Who will you be? —– Trailer:Q: Should I call the 1st argument of load() an extension module? I am converting an old python program to P2P. One of the functions I’m calling looks like this: from pwn import * def scr_load(file): return load(file, “C:\WINDOWS\system32\cmd.exe”) I am new to p2p, so I was wondering what would be the convention for namespaced modules, and in this case what would it be? A: You need to decide. It depends on: who’s going to look into your code if you can be corrected when you make a mistake (which is easier to do if your code name changes) Assuming: you are the main developer you wrote the initial version of the library I’d name it something like the name it is: “scraper”. It’s a scraper. It scrapes stuff.
Features Key: Game Features: - Suspense - Story-Rich Exploration - Innovative Interface - A Massive World - Complete Package: Game Entertainment in One Box - A Large Variety of Content - Deep Game Mechanics - Game Mechanics that are Easy to Understand and Easy to Operate Gameplay Features: - Engaging Action - Stunning Graphics - Imaginative System - Interesting Story Play Style - Action: Action JRPG - Action RPG: Japan’s Favorite Action RPG CLICKATURL: ( Our Indie-RPGs - Towering Seascapes - Full House J-Pop - Rowdy Little Tail - Advent Vast </html Elden Ring Download PC/Windows It’s safe to say that the Elden Ring is a title that we can easily get excited about. You play as one of the five main characters: Tarnished, the Ever-Lost, the Gray Bracer, the Injured Priest, and the Ethereal Knight. The Elden Ring story revolves around your character’s journey to the Lands Between to seek out Sol, the Lost Daughter of the Elden God. Sol is waiting for you, and your character must reach the location. Tarnished is a man of noble blood, but he’s been tainted by the damned. As a result, he doesn’t keep a low profile and seeks to vanquish the Three Chaos Demon Forces known as the “Hallowed” and the “Devourer.” He won’t be joined by any of the other characters, but he’s got friends in other game worlds. Through the world of the Lands Between, you’ll find various locations, and each is backed by a series of quests and challenges. By completing these challenges, you’ll earn a small amount of money and be able to purchase things like armor, weapons, and magic. The combat system in Elden Ring is a mix of action RPG and card game. You can equip various kinds of equipment that grants effects, and you can use that equipment to enter into various battles. Each character has two kinds of attacks—Attribute Attack and Magic Attack. 
Attribute Attack is powerful when you're starting out, but you'll need to be careful of your attack levels early in a battle, not just at the end. It's important to make sure you have enough money, so you can spend it on upgrading your equipment.

Magic Attack is different from Attribute Attack. It's usually used to bring out your battle spirit, and you'll get a handy effect if you hit the enemy. Magic Attack is useful when you're in the midst of battle, but you'll need to be careful when you're using it because there's a limit to how much you can use.

The Elden Ring comes with various enemy types to fight, and each enemy has its own tactics. For instance, the "Devourer" wields the "Dragon's Wrath" magic that increases the power.

Elden Ring For PC

The game's main purpose is to liberate the lands near you and equip you with weapons and armor. Defeat nearby opponents, and evolve in an effort to become a great leader. Rise.

The Legend of Rebirth: The Elden Ring's events unfold every day, and as a Lord, you will need to contend with a variety of events throughout the Lands Between. Although you can challenge your opponents at any time, you can also choose the time of your own challenges. In addition to you, there are also various people who support you, the meaning of which you can discover in your own playthrough.

Your House is the epic story born from the Bonds of the Elden Ring: In order to become a Lord, the game's epic tale is born from the bonds and events that have occurred in the Lands Between. The bonds that you have made with the people around you will also have their own stories and become unique in your playstyle.

You have received the Trust of the Elden Ring: To become a Lord, you need to prove your worth to the Elden Ring and defeat your opponents. This is part of the game's journey to the preparation screen. To show your ability as a Lord, you can go to the F.A.K.E. field.
The game's most memorable content, created by a studio of your own.

"The Elden Ring is the Fantasy Action RPG that is born from the expectations of its fans. You will be able to challenge the Battles of the Legends that are beyond imagination, living with the lighthearted and compelling gameplay that is different from other RPGs." -Arin, Director of Development at Anima

PS. You may encounter bugs in the game when you start up for the first time. If you experience this, we kindly ask that you use the Data Integrity Checker to fix these bugs.

General Discussion

YouTube

What's new:

Without slowing down a moment from your hectic online life, you can enjoy endless adventures and take on your friends in intense battles on a silver platter, or engage in other PvP activities. The game features popular online elements while adding various new elements to deliver the best experience possible. The game is also designed with a unique structure to prevent the addition of grinding. You can quickly achieve exhilarating resolutions of action and become an arm-waving boss in the world of Tales of. Your life, your journey awaits you.

Team ENIGMA is a GREE company, and the Tales of game as well as Tales of Girls games were launched in collaboration with Gree. The Tales of IP was officially acquired from GREE in October 2016. Also, the Tales of game was updated on August 25, 2017 with the addition of "Salvage Shop", which allows the player to obtain special items in exchange for in-game currency when interacting with NPC vendors. Please visit for more information.

ENIGMA Games

Timespan Expansion

1. HD Expansion – The Lands Beyond has been expanded to space. The Zeros (leptons) that the player must avoid have been replaced by the Wraths (muons). These Wraths travel faster than the Zeros, and cannot be attacked. The period of Death has been added to the game. Players must endure this period of time, and if they enter the period, their contract with the Lord will end.

2.
Chibi Existence System – Free Elden Ring

1. Unrar.
2. Install TAR.
3. Install Xilisoft Mobile Suite.
4. Click start, run the Elden Ring game.
5. Enjoy. :)

Other Links For Elden Ring:

Originally from Alberta, Alberta-based stylist, graphic designer and illustrator Rachel Shore has been creating home accents, retail packaging and product designs for 15 years. We think her heart is in the kitchen. Check out her website!

This pineapple bowl was made from some old jars and acrylic paint. Although acrylic paint is not the first thing that came to mind, the process to create this bowl was rather simple, and allowed for some fun. We give major props to Rachel and the Alberta artists who can collaborate with our company to bring their unique style to form a beautiful product!

From the creator of the Gleam is in your Home Contest: If you enjoyed this Pineapple Bowl, you'll love these DIY Pineapple Projects

Q: How to run Javascript in a winform application

I have a winform that uses javascript. Is there any way to stop javascript from running in the Winform and then execute the code inline? Thanks

A: This is tricky, as Windows Scripting Host (WSH) and JScript.Net are embedded within IE, and there's probably no way to fully isolate their execution. As an example, WSH and JScript are the choice for implementing VBA, which in turn is often used as a backend to Winforms. You can try to search for Internet Explorer's embedded support for Javascript here: This could be the most viable option.
Antonio Conte has been named the new manager of Chelsea, a club he led from his appointment at the end of last season.

How To Install and Crack Elden Ring:
- PSD-EDT_theme-r4.rar (512K)
- PS3-BD.rar
- Language - Hit Enter to launch the game in your selected language
- Subtitles - Hit Enter to select your installed subtitle language
- Region (Change if needed) - Hit Enter to manually change your region
- Interface Font (Choose your Font)
- Higher Font Size (Default) - Hit Enter to choose your preferred size
- Language (Change if needed) - Hit Enter to change your language to match your interface font size
- Subtitles (Change if needed)

System Requirements:
- Microsoft Windows 7, Windows 8, Windows 8.1 (32-bit or 64-bit OS will work; 64-bit is recommended)
- Internet Explorer 11
- Adobe Flash Player 11.2 (DirectX 9.0c or newer)
- Adobe Acrobat Reader DC
- Android Mobile Phone (Android version 4.4 or newer)
- Windows Media Player (Windows 10) / iTunes (Windows 10) / Media Player Classic (Windows 10)
How do I sort a data table based on a particular column which has integer values in ascending order? I wrote this:

Edited by Nancy Michell, MSDN Magazine, May 2005

Hi, I have a DropDownList that shows all groups. In it, I want to show "select one group" as the default in the DDL. For that I used the code below, but it sorts all the returned data, so "select one group" will not be the first line. How can I fix it? Best Regards, Morteza

    SELECT -1 as id, 'select one group' as groupName
    union
    SELECT id, groupName FROM [group]
    order by groupName

Hi, I've got multiple datagrids in the application I'm currently working on. Their ItemSource can be updated with new content on server notifications. When this happens, the ItemSource is being replaced by a new one (would it be better to update the content of the ItemSource - which is much more complicated in this case - instead of binding a new one?). When the new ItemSource is being bound, the sorting of the grid is being lost. I've solved this behaviour by saving the sorting before setting the new binding and restoring it after the new binding has been done. Below you'll find the static methods used for that:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Linq;
    using System.Windows.Controls;
    using System.Windows.Data;
    using Telecontrol.SIPPR.Client.Commons.DataObjects;

    namespace Telecontrol.SIPPR.Client.Commons.Utils
    {
        public static class DataGridUtil
        {

SharePoint 2010 library sort order is incorrect?

Thanks, Tim Vander Koo!

    'Sort the datatable based on sequence id
    leadtable.DefaultView.Sort = "Id"

But it doesn't seem to work.
Member Since 2 Years Ago

tareenmj left a reply on How To Present Contents Of A File (stored On My Project) In A Blade File?

tareenmj left a reply on How To Present Contents Of A File (stored On My Project) In A Blade File?

@jlrdw I attempted to use the getContents() method but it says 'Call to a member function getContents() on a non-object'. I am receiving my file as such:

    $path = $user->cv()->first()->path;
    $fileName = explode("/", $path)[1];
    $extension = explode(".", $path)[1];
    $uploaded = Storage::get($path); // if I do dd($uploaded) it does output the file (a representation of it)
    dd($uploaded->getContents());

tareenmj started a new conversation How To Present Contents Of A File (stored On My Project) In A Blade File?

I have a view with a bunch of HTML, and I would also like to present the resume the user posted before. I have a link from which the user can download their resume:

    Route::get('resumeDownload', function() {
        $user = Auth::user();
        if ($user->cv()->count() != 0) { // if the user has a CV uploaded
            $path = $user->cv()->first()->path;
            $fileName = explode("/", $path)[1];
            $extension = explode(".", $path)[1];
            $uploaded = Storage::get($path);
            return (new \Illuminate\Http\Response($uploaded, 200))->header('Content-Type', 'application/msword');
            // extension not working properly I know, don't know how to resolve it
        } else { // if the user has no CV uploaded
            return view('errors/404', ['notifications'=>$notifications, 'user'=>$user]);
        }
    });

But now, with my CV stored in that path, I would like to retrieve its contents and display it in my blade file with my other HTML. Any suggestions how I would go about doing this?

    public function generic_function(){
        return view('sampleView', ['user'=>$user]); // I need to pass the contents of the file into this view so I can display it
    }

And finally, how would I show it in my view?
(Assuming I pass the contents of the file as a variable) <html> <body> <!-- Random HTML pieces of code --> </body> {{$resume}} <!-- WOuld it be something like a simple echo statement?? --> </html> tareenmj left a reply on How Do I Change The Max_post_size And The Max_upload_size In Laravel 5.0? @JackJones yea but I can't seem to locate the file where I would change the upload_size, or can I just do it on a route. I read that php has a php.ini file but i can't find it on my Laravel app tareenmj started a new conversation How Do I Change The Max_post_size And The Max_upload_size In Laravel 5.0? I need to be able to upload files my user choses to upload. I was able to accomplish this, thanks to Laravel's filesystems but I am unsure about what is the max size the user can upload, and how I can change this property. Could anyone help me out? tareenmj started a new conversation Validating An Array Of Inputs In Laravel 5.0 I have a table, where the user can choose to add/delete rows. These rows have two input fields (TB1_a[] and TB1_b[]). I would like to make my rules so that TB1_a[0] is only required if TB1_b[0] is not null. So I created my rules as: foreach ($this->request->get('TB1_a') as $key => $val) { $rules['TB1_a[' . $key .']'] = 'required_with:TB1_b[' . $key .']'; $rules['TB1_a[' . $key .']'] = 'required_with:TB1_b[' . $key .']'; } return $rules; However, for some reason, this is unable to work and always makes my data valid. Could anyone please help me in this case? tareenmj left a reply on How Do I Validate Array Inputs In Laravel 5.0 @ChristophHarms yup, I was suggested to go through each of the input fields and manually add the rules, but it's just an annoying procedure. Guess Laravel 5.1 would have helped far more tareenmj started a new conversation How Do I Validate Array Inputs In Laravel 5.0 I have a table, where the user can fill in information. However, the user can add rows and/or delete rows. I used a cloning function for that. 
My html looks like: <input type="text" class="form-control" name="TB1_a[]"> <input type="text" class="form-control" name="TB1_b[]"> As you can see, it is an array input. So if the user adds a row, I'll have two values stored in TB1_a[] and TB1_b. Now, I would like to make my rules so that if the user enters information inside TB1_a[0], I would like to make TB1_b[0] required, but valid if both TB1_a[0] and TB1_b[0] are empty. My validation rules: 'TB1_a.*' => 'required_with:TB1_b.*', 'TB1_b.*' => 'required_with:TB1_a.*' However, these rules are unable to detect that I am referring to an array, meaning Laravel does not detect the '.*'. I would also like to point out that this logic was working perfectly with Laravel 5.4, but I have had to downgrade my Laravel and now it stops working. Any help please? tareenmj started a new conversation How To Pre-select Select Statements In Laravel I have a select statement with a large number of options to select. I am retrieving all of the options from my back-end and filling them in. However I would also like to select an option, which I also pass in from my backend. My code is as follows: @for($i=0; $i<sizeof($info['userInfo']['courses']); $i++) // <tr class="table_row"> <td><select class="form-control" name="TB1_instructions_course[]" id="course" required="" value='{{$info['userInfo']['courses'][$i]}}'> <option value="">Course</option> @foreach($info['staticInfo']['courses'] as $course) //pull in static data <option>{{$course['course']}}</option> @endforeach </select></td> <td><input type="text" class="form-control" placeholder="C01" name="TB1_instructions_section[]" id='section' required="" value='{{$info['userInfo']['courses'][$i]}}'></td> <!-- the rest of the table--> Now the text field works perfectly and populates it accordingly, but I am unable to select the appropriate option in my select input. Any help? Can I do this without using jQuery or javascript? 
tareenmj left a reply on Use Regex To Allow For Alphabetic Characters, Hypens, Underscores, Spaces, And Numbers I was originally using alpha_num as a validation method, and when I did include spaces, it gave me an error. tareenmj started a new conversation Use Regex To Allow For Alphabetic Characters, Hypens, Underscores, Spaces, And Numbers I would like to validate using Laravel for a unique situation. The field I am authorizing is the name of a book. So it can have alphabetic characters, numeric characters, spaces, and hypens/underscores/any other key. The only thing I don't want it to have is spaces at the beginning, before you enter any key. So the name can't be " L", notice the space, whereas "L L L" is completely acceptable. Could anyone help me in this situation? So far I got a regex validation as such: regex:[a-z{1}[A-Z]{1}[0-9]{1}] I'm unsure how to include the other restrictions. tareenmj left a reply on How To Use Authentication Using Username Instead Of Email tareenmj started a new conversation How To Use Authentication Using Username Instead Of Email I want to create an app where I Log in the username based on username, instead of email address. Since I am a beginner to Authentication, I simply ran the php artisan make:auth and the migrate commands. I made no change to the user migration file or any other. I edited my Login controller by placing a method as such: class LoginController extends Controller { use AuthenticatesUsers; public function username(){ return 'username'; } //the rest of the controller code Now my Login Blade file is as: <form class="form-horizontal" method="POST" action="{{ route('login') }}"> {{ csrf_field() }} <div class="form-group{{ $errors->has('name') ? 
' has-error' : '' }}"> <label for="name" class="col-md-4 control-label">Username</label> <div class="col-md-6"> <input id="name" type="text" class="form-control" name="name" value="{{ old('name') }}" required autofocus> @if ($errors->has('name')) <span class="help-block"> <strong>{{ $errors->first('name') }}</strong> </span> @endif </div> </div> //followed by the rest of the html. However, as soon as I type in the name and the password, it doesn't allow me to be logged in, and does nothing. I am unsure about what I am doing wrong here, can somebody please help? tareenmj started a new conversation Mimes Validation Not Working Correctly In Laravel My webpage contains a file for upload, and I want the file uploaded to only be either pdf, doc or docx. My form tag also has enctype="multipart/form-data" and has files="true" My html looks like: <div id="cv_upload" class="row"> <div class="col-xs-12"> <input type="file" name='cv'> </div> </div> The $rules array associated with this is as follows: 'cv' => 'mimes:pdf,doc,docx|required' And finally my messages looks like: 'cv.required' => 'A selection for C.V. is required before proceeding.', 'cv.mimes' => 'CV must be of the following file type: pdf, doc or docx.' The only problem with this is, even after I upload a pdf or doc, the message I receive is the one for required. I have no idea why this isn't working as expected. I also tried removing the 'application/' but that yields no success either. Please help. tareenmj started a new conversation I Need To Remove All Elements From An Associative Array Based On A Variable Name I am reading input from the user (in Laravel) and have rules associated to each of those inputs. If the user selects a checkbox (for that specific table), I would like to remove all the rules from the $rules associative array for that table. To better illustrate, I have created two arrays mimicking the behaviour. 
My input array is as follows: $input = array("TB1_course" => ['0'=>'CHEM 1E03', '1'=>'ENG 1D04' ], "TB1_section" => ['0'=>'CHEM 1E03', '1'=>'ENG 1D04' ], "TB1_checkbox" => "1", "TB2_course" => ['0'=>'CHEM 1E03', '1'=>'ENG 1D04' ], "TB2_checkbox" => "0" ); $rules= array( 'TB1_course.*' => 'required_with', 'TB1_section.*' =>'required_with', 'TB2_course.*' =>'required_with' ); You can see from the input array that TB1_checkbox has a value of 1. If it has a value of 1, I would like to remove all the rules associated with TB1 (i.e. remove the elements in $rules with a key containing 'TB1'). I attempted to do as such, and was partially successful. My code looks like: foreach ($input as $key=>$value){//go thru every element of the inputs if ((strpos($key, 'checkbox')!==false) && $value==1){//if it contains 'checkbox' //and checkbox is selected $table_name= substr($key, 0,3);//name of table //now go thru $rules and remove all elements which contain a word of $table_name $rules=array_filter($rules, function($x){ return strpos($x, $table_name)!==0;//problem lies here }, ARRAY_FILTER_USE_KEY); } } However my code isn't working. The strpos() function is unable to read the $table_name variable, and leads to problems. If I manually pass in a string, it ends up working (i.e. 'TB1' instead of $table_name), but I have to have the flexibility of checking all my tables (so the rules containing 'TB2' have to be removed if "TB2_checkbox" has a value of 1). Is there any way to solve this issue? Thank you for the help. tareenmj started a new conversation How Do I Apply Custom Validation On An Array? My problem is fairly complex and kind of difficult to explain with text, so I will try my best. I have three inputs, all as arrays, 'A' 'B' 'C'. I also have a checkbox input 'D' (not an array). 
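For the array-filtering problem described above (rules keyed by table prefix, removed when that table's checkbox is checked), a minimal runnable sketch, assuming the same keys as the question: in PHP an anonymous function does not see variables from the enclosing scope unless they are imported with `use`, so `$table_name` must be passed in explicitly.

```php
<?php
// Rules keyed by table prefix, as in the question.
$rules = [
    'TB1_course.*'  => 'required_with',
    'TB1_section.*' => 'required_with',
    'TB2_course.*'  => 'required_with',
];

$table_name = 'TB1'; // e.g. derived via substr($key, 0, 3)

// Import $table_name into the closure with `use`; without it,
// the variable is undefined inside the closure and the filter misbehaves.
$rules = array_filter($rules, function ($key) use ($table_name) {
    return strpos($key, $table_name) !== 0; // drop keys starting with TB1
}, ARRAY_FILTER_USE_KEY);

print_r(array_keys($rules)); // only the TB2 rule remains
```

The same fix applies inside the original foreach over the inputs; only the `use ($table_name)` clause changes.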
Every element in A has to be required if the corresponding element in B or C is non-empty, BUT, if D has a value of 1, A shouldn't be required even if the corresponding elements in B or C are non-empty. If A[1] is empty, and B[1] is empty but C[1] is non-empty, I would like only A[1] to be required, not A[0] or A[2]. I created some rules to try and solve this, but i'm unsure how to solve it. $validator = Validator::make($output, [ ]); $validator->sometimes('A.*', 'required', function($input){ if ($input->D==1){//if checkbox D is checked, don't make A required regardless of anything else return false; } if (($input->B!=null) || ($input->C!=null)){//Problem: instead of doing it for the individual elements it will make A required if any element of B or C is non-empty return true; } return false; }); tareenmj started a new conversation How Do I Call A Public Function Inside Of My Controller? I have a problem with being able to call a function I defined in my controller. My controller has defined functions which I would like to call from a string. public function store(Request $request) { $data = $request->except('_token'); $name = "form1_sections/" . $data['nextTab'] . "_form"; parse_str($data['inputs'], $output); $rules = $this->call_user_func($data['currentTab']); $validator = Validator::make($output, $rules); if ($validator->passes()) { return ["view" => view("$name")->render(), "isValid" => true]; } else { return ["isValid" => false, "msg" => json_encode([ 'errors' => $validator->errors()->getMessages(), 'code' => 422 ]) ]; } } function section1() { return [ 'startDate' => 'required| date', 'endDate' => 'required| date|different:startDate', 'cv' => 'mimes:pdf,doc,docx' ]; } $data['currentTab'] returns a string like section1, and I would like to invoke section1() function whenever that is the case. Any help? tareenmj left a reply on Validation Working In Web.php But Not In My Controller no luck, it doesn't work. 
I'm trying to find a way to do this but even after researching, I can't find a lot explaining this.

tareenmj started a new conversation Laravel Validation: Make A Field Required If Other Fields Are Present BUT Don't Make It Required If A Certain Field Has A Specific Value

I have a bunch of inputs on my form, and the field under consideration has various validation requirements. To simplify, I have created an analogy: I have inputs A, B, C and D. 'A' should be required if B and C are present, but not required if input 'D' has a value of 1, even if B and C are present. The same requirements apply to inputs 'B' and 'C'. My initial guess was:

    A => 'required_with:B,C | required_if:D,0'

but this doesn't really work. I'm terrible with custom validation, so if a step-by-step method of a custom validator is needed, can you please provide full code? Thank you!

tareenmj left a reply on Validation Working In Web.php But Not In My Controller

tareenmj left a reply on Validation Working In Web.php But Not In My Controller

An error of: master.js:76 POST 500 (Internal Server Error) and a warning of: master.js:76 [Deprecation] Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user's experience. For more help, check. Any idea? Thanks for your help!

tareenmj left a reply on Validation Working In Web.php But Not In My Controller

My ajax request ends up breaking and giving me a message for 'error'. When I put this exact same code into my routes file, it works without any problem.

tareenmj left a reply on Validation Working In Web.php But Not In My Controller

tareenmj started a new conversation Validation Working In Web.php But Not In My Controller

Hi! I had a problem with validation. For testing purposes, I had all my validation logic stored inside my routes file (web.php), but when I attempted to move it into my controller, for some reason it stops working.
My code was working perfectly when it was placed in the routes file. My code is as follows:

tareenmj started a new conversation Laravel: Do Validation Upon Two Separate Conditions

I have a table in my html form, where the user is free to add as many rows as they would like. Underneath the table I have a checkbox, which when checked indicates the user has no inputs. I have a bunch of inputs in Laravel which are in an array (so it can be name[0] and name[1] or just name[0], depending on how many rows the user added). I also have a checkbox, which makes all my inputs disappear, and I wouldn't like to make any field required if the user checked it. My current rules are:

    [
        'TB1_course.*' => 'required_if:TB1_checkbox,0|required_with:TB1_section.*,TB1_session.*,TB1_reqElec.*',
        'TB1_section.*' => 'required_if:TB1_checkbox,0|alpha_num',
        'TB1_session.*' => 'required_if:TB1_checkbox,0'
    ]

My validation rules require:

(1) If the user selects the checkbox, no field in the entire table should be required (regardless of whether those inputs are empty or not).

(2) If the user didn't select the checkbox and left one row entirely blank (i.e. TB1_course[0], TB1_section[0], and TB1_session[0] empty, OR supplied TB1_course[0], TB1_section[0], TB1_session[0] but left TB1_course[1], TB1_section[1] and TB1_session[1] empty), inputs are valid.

(3) If the user didn't select the checkbox and also gave missing info on the same row (i.e. TB1_course[0] isn't empty but TB1_section[0] and TB1_session[0] are empty), make it invalid.

I am unsure how to go about this, since I also have to worry about indices. I'm a beginner in validation, so please excuse me if I am asking about a very basic problem. Any help is greatly appreciated.

tareenmj left a reply on Required Only If A Checkbox Is Not Checked

@Screenbeetle can't I just do something like

    $rules = [ 'TB1_course.*' => 'required_if: TB1_checkbox,0' ]

?
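Following the reply just above, a hedged sketch of the checkbox-gated rule (field names taken from the thread; behaviour varies by Laravel version, and wildcard `.*` rules only work on releases with array-validation support, roughly 5.2 and later, so on 5.0 the rules generally have to be expanded per index):

```php
$rules = [
    // Each course input is required only when the hidden input posts
    // TB1_checkbox=0, i.e. the visible checkbox was left unchecked.
    // Note there is no space after the comma in the parameter list.
    'TB1_course.*' => 'required_if:TB1_checkbox,0',
];
```

The hidden-input-before-checkbox trick in the question is what guarantees TB1_checkbox is always present with either 0 or 1, so `required_if` always has a value to compare against.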
tareenmj started a new conversation Required Only If A Checkbox Is Not Checked I am a bit of a beginner in Laravel, and I had some trouble with validation. I want to make a field required only if the value of the a certain checkbox is 0. My rules currently are: $rules=[ 'TB1_course.*' => 'required' ]; My checkbox input html is as such: " input type="hidden" value="0" name="TB1_checbox" input type="checkbox" value="1" name="TB1_checkbox" //with the html tags of course " I gave a hidden input just so that if the user does not select the checkbox, I receive a value of 0. My TB1_course is an array of inputs, since I used a clone function in jQuery. Is there an easy way to make TB1_course required only if TB1_checkbox is 0? Thank you for all your help! tareenmj started a new conversation How Do I Receive The Error Messages From Validation Upon An Ajax Request I am not the most proficient at using Laravel, so I stumbled upon some issues with validation. I am doing an Ajax request to validate a section of a form, and if it the user gave valid information, I am replacing the html inside the form tag with the next view (the next section of the form). If they gave in invalid information, I don't want to change the html but rather just present the error messages. I only have one form tag, but I'm replacing the html content of the form if user gives valid info. My web.php (i'm using closures): $('}, //also need to pass in related Tab ID, and also tabID success: function (data) { if (data.value===1){//only doing for one section right now if (data.isValid){//if data is valid, return next view $('#tabShow').html((data.view)); } else{//if it isn't target the div id=errors and replace only error section alert(data.isValid); var errors=data.responseJSON; alert(errors); false;//prevent navigation } } else{ $('#tabShow').html((data.view));} }, error: function () { alert('error'); } }); Route::post('/roa', function() { $m = Request::except('_token'); $name = "form1_sections/" . 
$m['nextTab'] . "_form"; //name of next view $name2 = "form1_sections/" . $m['currentTab'] . "_form"; //name of current view parse_str($m['inputs'], $output);//to deal with json $type= gettype($output); if ($m['currentTab']=='section2'){//since I was only working with one section $rules = [//rules for that section 'TB1_course.*' => 'required' ]; $validator=Validator::make($output, $rules); if ($validator->passes()){ return ["view" => view("$name")-> render(), "value"=>1, "inputs"=>$output, "type"=>$type, "isValid"=>true]; //if valid, then pass next view } return ["view" => view("$name2")->withErrors($validator) -> render(), "value"=>1, "inputs"=>$output, "type"=>$type, "isValid"=>false]; //if invalid, how do I pass error messages html? } return ["view" => view("$name") -> render()];//for all other sections }); tareenmj left a reply on Returning A View With Previous Inputs Using Laravel And AJAX tareenmj left a reply on Returning A View With Previous Inputs Using Laravel And AJAX tareenmj left a reply on Returning A View With Previous Inputs Using Laravel And AJAX I don't? When I am returning the view if an error is detected, it removes all the pre-filled input entries (i.e. "view" => view("$name2")->withInput($output)->withErrors($validator) -> render()). And in my ajax success function, I replace the entire part with the view. tareenmj started a new conversation Returning A View With Previous Inputs Using Laravel And AJAX I have a form, when submitted, invokes an AJAX request. This ajax request performs back-end validation on the inputs. If an error is detected, it displays the error messages. So if the user fills out 30 fields, and one is not valid, I would like to return all those inputs with an error message. My Laravel Code: Route::post('/roa', function() { $m = Request::except('_token'); $name = "form1_sections/" . $m['nextTab'] . "_form";//next view name $name2 = "form1_sections/" . $m['currentTab'] . 
"_form";//current view name $var= parse_str($m['inputs'], $output);//data from ajax is a string if ($m['currentTab']=='section2'){//i'm only doing validation on one section right now //to simplify the code. $rules = [ 'TB1_course.*' => 'required' ]; $validator=Validator::make($output, $rules); if ($validator->passes()){//if no error, return the next view return ["view" => view("$name")-> render(), "value"=>1, "inputs"=>$output]; } return ["view" => view("$name2")->withInput($output)->withErrors($validator) -> render(), "value"=>1, "inputs"=>$output]; } return ["view" => view("$name") -> render()]; }); My Ajax request: $('}, success: function(data){ $('#tabShow').html((data.view)); }, error: function () { alert('error'); } }); }); I am successfully able to receive all the error messages, but the withInput($output) for some reason is not working. Thank you for all your help. tareenmj started a new conversation How Do I Validate All Of An Array's Elements? I have received inputs from my front-end form, however since my tables are dynamic, my inputs looks like: var $output = [ '_token' =>..... , 'TB1_course' => [ '0'=> .., '1' => .., etc ], //other inputs ] TB1_course can change in size depending on the user's input. I would like to make every element inside of TB1_course to be required. How would I do as such? My initial guess was: $rules= [ 'TB1_course' => 'required', //other rules for the rest of the inputs ] Thanks for your help! tareenmj started a new conversation Using Constraints On A One-to-many Relationship So I have two tables in my database, Album and Artists. One artist can have multiple albums, so a one-to-many relationship. 
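Regarding the array-validation guess a few posts above: a rule on the bare key ('TB1_course' => 'required') only checks that something was submitted under that name; to validate every element you need either the `.*` wildcard (on Laravel versions with array-validation support) or one rule per index. A sketch under that assumption, reusing the `$output` array from the question:

```php
// Wildcard form: each element of TB1_course must be non-empty.
$rules = ['TB1_course.*' => 'required'];

// Portable form for older releases: expand per submitted index.
foreach (array_keys($output['TB1_course']) as $i) {
    $rules["TB1_course.$i"] = 'required';
}
```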
I defined my two models as: class Artist extends Eloquent{ public $timestamps=false; public function albums(){ return $this->hasMany('App\Album','artist_id','id'); } } class Album extends Eloquent{ public $timestamps=false; public function artist(){ return $this->belongsTo('App\Artist','artist_id','id'); } } Now, I am trying to obtain all the albums associated with one artist, and order them alphabetically, but when I attempt to try this: Route::get('/', function () { $x=\App\Artist::where('name','=','Drake')->get(); $album=$x[0]->albums->orderBy('name')->get(); echo $album[1]->name; } It gives me an error saying that method orderBy does not exist. I also tried doing: $x=\App\Artist::orderBy('name')->get() and it worked perfectly. Thank you for the help! tareenmj left a reply on Reading From A One-to-many Relationship Causing Errors Thank you! Yea I think I'll go thru the docs in detail. Yea, that makes alot more sense. tareenmj left a reply on Reading From A One-to-many Relationship Causing Errors tareenmj left a reply on Reading From A One-to-many Relationship Causing Errors I was able to solve it, I couldn't get my head around the collection object. If anyone would like to know how I solved it: the $test=\App\Album::where('name','=','Horroscrope')->get() returns a collection object, to access the object inside the collection. You can do: $x=\App\Album::where('name','=','Horrorscope')->get(); $test=$x[0]->artist()->get(); return $test[0]->name; This will return the corresponding artist associated with the album. 
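On the `Method orderBy does not exist` error described above, a hedged sketch (model and column names from the question): accessing `$artist->albums` as a property returns an already-loaded Collection, which has no `orderBy`; calling `albums()` as a method returns the relationship's query builder, which does.

```php
$artist = \App\Artist::where('name', 'Drake')->first();

// Property access returns a Collection: sort it in memory.
$byNameInMemory = $artist->albums->sortBy('name');

// Method call returns the relation's query builder: sort in SQL.
$byNameInSql = $artist->albums()->orderBy('name')->get();
```

This is the same property-versus-method distinction behind the `artist()` fix in the reply above.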
tareenmj started a new conversation: Reading From A One-to-many Relationship Causing Errors

So I defined a couple of models in Laravel (one was Album and one was Artist) as follows:

Album.php:

    class Album extends Eloquent {
        public $timestamps = false;
        public function artist() {
            return $this->belongsTo(Artist::class);
        }
    }

Artist.php:

    class Artist extends Eloquent {
        public $timestamps = false;
        public function albums() {
            return $this->hasMany(Album::class);
        }
    }

Now in my web.php I ran these commands:

    Route::get('/', function () {
        $artist = new \App\Artist;
        $artist->name = 'Eve 6';
        $artist->save();

        $album = new \App\Album;
        $album->name = 'Horrorscope';
        $album->artist()->associate($artist);
        $album->save();

        $test = \App\Album::where('name', '=', 'Horrorscope')->get();
        $x = $test->artist()->get();
        return $x->name;
    });

It gives me an error whenever I attempt to run this:

    BadMethodCallException in Macroable.php line 74: Method artist does not exist.

Can anyone help with this? I have no idea why it isn't working. I am attempting to get the artist associated with the $test variable, so that if a user gives me an album name, I can retrieve the artist directly. Any help will be appreciated.

tareenmj left a reply on How To Retrieve Data From A Many-to-many Relationship Using Eloquent: Thank you, it helped a lot!!

tareenmj started a new conversation: How To Retrieve Data From A Many-to-many Relationship Using Eloquent

So I wanted to create a many-to-many relationship between a table called 'albums' and a table called 'listeners', and I was able to do so successfully. I created the appropriate Eloquent classes for each and also created the proper methods (i.e. used the belongsToMany function). But now, if I select a random album (depending on the user's input) and want to see all the listeners associated with that album, how would I go about doing so?
The two classes look like:

    class Album extends Eloquent {
        public function listeners() {
            return $this->belongsToMany(Listener::class);
        }
    }

    class Listener extends Eloquent {
        public function albums() {
            return $this->belongsToMany(Album::class);
        }
    }

Any help is appreciated!

tareenmj started a new conversation: HTTP ERROR 500 When I Attempt To Upload A Laravel Project Onto My Remote Server

I am a bit of a beginner with Laravel and remote servers, so please bear with me if I am asking something super obvious or simple. I wanted to create a Laravel project on my webpage, so I connected to my server via PuTTY and ran the command directly through PuTTY's command line. It created the project (blog was its name) successfully, but it created the project inside my webpage directory in its own folder called blog. I moved all the files and folders inside blog to my main webpage directory through FileZilla. I did not change the Laravel code at all; it should show just the default start-up page. When I visit my webpage, it throws an error: HTTP ERROR 500. Out of curiosity, I visited my webpage/public (I know I do not have any route defined like this) and it threw a Laravel error: NotFoundHttpException in RouteCollection.php line 161. I want to upload an image, so I also posted this question on Stack Overflow, if you would like to have a look. Any help on resolving this issue would be of great help, thank you.

tareenmj left a reply on Does GET Data Have Higher Priority Than POST Data In Laravel 5.4?: If I have a form which has a method POST and also a route which POSTs, but I supply a URL with GET data in my form, which takes priority? Let me elaborate: from my form I POST two variables, x=3 and y=4, but in my URL I give GET data like x=1 and y=2. If I then print the data, will x be 3 or 1?

tareenmj started a new conversation: Does GET Data Have Higher Priority Than POST Data In Laravel 5.4?
So I am a beginner at Laravel, following a book (Code Smart, which uses Laravel 5) while I am using 5.4. I created a simple form (my form tag looked like <form action="/blog/public/?foo=bar&baz=get&l=loki">):

    {{ csrf_field() }}
    <input type="hidden" name="foo" value="bar" />
    <input type="hidden" name="baz" value="boo" />
    <input type="submit" value="send" />

and my routes web.php file looked like:

    Route::get('post-form', function () {
        return view('form');
    });

    Route::post('/', function () {
        $data = Request::all();
        var_dump($data);
    });

My output looked like:

    array(4) { ["_token"]=> string(40) "FNOgfFXulxabXbIEBxB3Jliq8buFgjCGrWgHdiuh" ["foo"]=> string(3) "bar" ["baz"]=> string(3) "boo" ["l"]=> string(4) "loki" }

My book explicitly says that GET data has higher priority and that the values of "foo" and "baz" should equal "get", but in my case it's obviously different. Any help would be greatly appreciated.

tareenmj left a reply on POST Causing Errors: @Cronix @tykus ohhh, that makes more sense. So when I do $data = Request::all() it will also pass the token field to the variable $data, and that's why when I do var_dump($data) the token is also included in the array? Is there any sort of function which would exclude the token and give me all of the submitted values? Thanks guys for all your help!!

tareenmj left a reply on POST Causing Errors: @tykus Thanks! I think the url() was the problem, and I am working locally. So now it works and I have no idea why. This is my form.blade.php:

    <head>
        <meta charset="UTF-8">
        <title></title>
    </head>
    <body>
        <form action="{{'/blog/public/'}}" method="POST">
            {{ csrf_field() }}
            <input type="hidden" name="foo" value="bar" />
            <input type="hidden" name="baz" value="boo" />
            <input type="submit" value="send" />
        </form>
    </body>

But now it is actually dumping the token as well as foo/bar and baz/boo, which is a bit weird.
This shows up as my result:

    array(3) { ["_token"]=> string(40) "epYxrCmkehKtX8v4sJAiLqoMjne8n4eGX04CV7rP" ["foo"]=> string(3) "bar" ["baz"]=> string(3) "boo" }

tareenmj left a reply on POST Causing Errors

tareenmj left a reply on POST Causing Errors: I didn't have any url included in the view? Was I meant to include a url in my form.blade? My entire form.blade was like this:

    <head>
        <meta charset="UTF-8">
        <title></title>
    </head>
    <body>
        <form action="{{ url('/') }}" method="POST">
            {{ csrf_field() }}
            <input type="hidden" name="foo" value="bar" />
            <input type="hidden" name="baz" value="boo" />
            <input type="submit" value="send" />
        </form>
    </body>

tareenmj started a new conversation: POST Causing Errors

So I am a beginner to Laravel and I was experimenting with POST and GET. I attempted to create a simple form in my views directory called form.blade.php. It just had two hidden text fields with values and a submit button, and I was posting to the URL like this inside the form tag:

    action="{{ url('/') }}" method="POST"

My routes file was like this:

    Route::get('post-form', function () {
        return view('form');
    });

    Route::post('/', function () {
        $data = Request::all();
        var_dump($data);
    });

As soon as I attempt to submit the form, an error is thrown at me:

    MethodNotAllowedHttpException in RouteCollection.php line 233

I do not understand why this is happening, as my logic seems correct to me. Any help and solutions would be greatly appreciated. Thank you!

tareenmj left a reply on Two Models With Two Different Databases: I'm trying to create a table using Laravel, and also pull in data and enter data into an existing table on the same database. I felt that for creating a table I would have to use migrations and then create a model, whereas for an existing one I would have to just create a model. Yea, apologies, I could have worded it better.

tareenmj started a new conversation: Two Models With Two Different Databases

I am just a beginner at Laravel, so I probably do not know a lot.
I created a migration and created a new table using Laravel, and also created a new model for that table so I could refer to it as App\Task; my table's name is tasks (it has one id column, one body column, one created_at, and one updated_at column). I also wanted to edit and pull in information from another table I created in MySQL Workbench in the same database. I created a new model called test, and the table name was tests (it has only one id column and one description column). Now, using php artisan tinker, when I enter:

    App\test::get()

it fetches all the records from the existing table. I also wanted to enter data into the table, so I attempted these commands in tinker:

    $entry = new App\test;
    $entry->description = "Random Entry";
    $entry;

To my confusion, the last command displayed that $entry had a description column, a created_at value, and an updated_at entry. It won't let me save $entry into my tests table either, because of the additional columns. Could someone please explain this and provide a solution? Thanks for your help!
The Biggest Prime
February 8, 2013

This is a simple program:

    > (with-output-to-file "bigprime.txt"
        (lambda ()
          (display (- (expt 2 57885161) 1))))

However, when I ran that program, the machine thrashed for a while, and after about a minute, crashed. So I wrote this program using C/GMP, which can be compiled with the command gcc bigprime.c -lgmp -o bigprime:

    /* bigprime.c -- save the digits of 2^57885161-1 to a file */

    #include <stdio.h>
    #include "gmp.h"

    int main(void)
    {
        mpz_t p;
        FILE *fp;
        fp = fopen("bigprime.txt", "w");
        mpz_init(p);
        mpz_ui_pow_ui(p, 2, 57885161);
        mpz_sub_ui(p, p, 1);
        gmp_fprintf(fp, "%Zd", p);
        mpz_clear(p);
        fclose(fp);
        return 0;
    }

The program ran successfully, creating the bigprime.txt file in about a minute. You can see the program at.
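For comparison, here is a sketch of the same task in Python, whose integers are arbitrary precision. The exponent below is a smaller Mersenne prime so the example runs instantly; substituting 57885161 reproduces the original task, with most of the time spent converting the number to decimal.

```python
# Write the Mersenne number 2^p - 1 to a file, as the C/GMP program does.
# p = 127 keeps the example fast; p = 57885161 reproduces the original.
p = 127
n = 2 ** p - 1

with open("bigprime.txt", "w") as f:
    f.write(str(n))

print(len(str(n)))  # 2^127 - 1 has 39 decimal digits
```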
According to wikibooks, in

    class MyClass
      @one = 1
      @@value = 1
      def initialize()
        @two = 2
      end
    end

@one is an instance variable of the class object MyClass, and @@value is a class variable of MyClass. Since @one is an instance variable, it is owned only by the class MyClass itself (in Ruby, a class is also an object) and is not shareable, whereas @@value is a shared variable.

Shared variable:

    class A
      @@var = 12
    end

    class B < A
      def self.meth
        @@var
      end
    end

    B.meth # => 12

Non-shared variable:

    class A
      @var = 12
    end

    class B < A
      def self.meth
        @var
      end
    end

    B.meth # => nil

@two is an instance variable of the instances of the class MyClass. Instance variables are the private property of objects, so they are not shared. In Ruby, classes are also objects. You defined @one inside the class MyClass, so it is owned only by the class defining it. On the other hand, the instance variable @two is created when you create an object of the class MyClass, say ob, using MyClass.new. @two is owned only by ob; no other object has any idea about it.
Python 3.8 is on its way, and we at PyCharm don't want to get behind, so we now support some of the major features coming with this new version. This article will walk you through the features currently supported by our latest PyCharm release. To try them out, get the latest version of PyCharm and download the current beta release of Python 3.8 from here. From there you will just need to switch to Python 3.8 as your interpreter in PyCharm (if you're not sure how to switch the interpreter, jump into our documentation for help).

Positional-only parameters

Function definitions are a key element when designing libraries and APIs for user consumption. The more explicit these definitions are, the easier they are to implement. One way to achieve such explicitness is in how the function can be called with its arguments. Until now, Python only had the option to define arguments as positional, keyword, or keyword-only, but with this new version we have another way to define them: positional-only parameters.

To use this feature, just set the arguments in your function definition and write a forward slash `/` after the last positional-only argument you want to declare. This is closely analogous to the keyword-only arguments syntax, but instead of setting the arguments after the asterisk `*`, you do it before the slash `/`.

Let's look at an example. Say you have a function in your library that selects a value randomly from different values passed as arguments. Such values can be passed in any position and the function will return a random choice; the semantic meaning stays the same regardless of the order of the values in the function call. By design, you decide that those arguments should be positional-only. By doing this, you ensure that your library's users won't be able to call your function with the arguments' keywords.
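The select_random function described above was shown as an image in the original post; here is a minimal reconstruction, with the body invented from the surrounding description:

```python
import random

def select_random(a, b, c, /):
    # a, b, and c are positional-only: everything before the slash
    # cannot be passed by keyword.
    return random.choice([a, b, c])

print(select_random(3, 89, 54))    # one of 3, 89, 54
# select_random(a=3, b=89, c=54)   # raises TypeError on Python 3.8+
```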
In the past, if you renamed the arguments of your function for refactoring purposes (or any other reason), the code of your library's users would be at risk if they made a function call with keyword arguments (for example, `select_random(a=3, b=89, c=54)`). One of the advantages of positional-only parameters is that if you decide to change the variable names in the function definition, your library's users won't be affected, as there are no keyword dependencies in their function calls to begin with.

Assignment expressions

A new way to assign values to variables is available with this latest Python version: expressions can now also assign values as part of their evaluation, which removes the need to initialize variables in advance. As a result, you can make your code less verbose and more compact, as declarations can be made within expressions on the same line.

The syntax consists of the walrus operator `:=` inside an expression enclosed by parentheses. An important note is that the walrus operator is different from the equals operator; for example, comma-separated assignments with the equals operator are not the same as the ones made by the walrus operator.

One example of such usage is a while loop with a control variable. When you use this feature, the loop's control expression will also hold the variable definition and reassignment. In the 'before' version of such a loop, there is a variable assignment before the loop and also inside its body; the 'after' version has the assignment inside its control statement by using an assignment expression.

Final annotation and final decorator

When you design a class, you have to make sure your methods are used properly. With this new version, a final decorator and a `Final` type annotation are introduced to help restrict the usage of methods, classes, and variables.
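The 'before' and 'after' loops discussed under assignment expressions can be sketched like this (a minimal illustration; the variable names are invented):

```python
# Before Python 3.8: the counter is assigned before the loop
# and reassigned inside the body.
count = 0
squares = []
while count < 3:
    squares.append(count * count)
    count += 1

# With Python 3.8: the control expression both reassigns and
# tests the counter via the walrus operator.
n = -1
squares_walrus = []
while (n := n + 1) < 3:
    squares_walrus.append(n * n)

print(squares, squares_walrus)  # [0, 1, 4] [0, 1, 4]
```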
If needed, this feature will let you declare that a method should not be overridden, that a class should not be subclassed, or that a variable or attribute should not be reassigned. The final decorator prohibits any class decorated with `@final` from being subclassed, and any method decorated with `@final` from being overridden in a subclass.

Let's say you have a class that declares a method, and that method is used inside the class at different points. If the user overrides that method while subclassing, the base class behavior might change or run into errors. To avoid this, you can use the final decorator to prevent the user from overriding such a method.

Say you have a signature generator class. When initialized, the signature is generated through a `create_signature` method which is called within the `__init__` constructor method. Depending on your class design, you may opt to protect your `create_signature` method with the final decorator so it is not overridden if subclassed. With the final decorator, you ensure that any other method that depends on this method inside the class is not affected by a method override. In this case, the `__init__` constructor method uses the `create_signature` method, so by using the final decorator you ensure that the initialization of the class will not be affected by any change that might be introduced by subclassing.

Another thing to notice in this example is the `Final` annotation on the `ENCODER` attribute. This class attribute holds the type of string encoding used in the `create_signature` method. By class design, we choose to use the `Final` annotation because we use that value within the methods of the class and we don't want it to be overridden, as that would change the methods' behavior.

Equals sign in f-strings

String formatting makes code more concise, readable, and less prone to error when exposing values.
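The signature-generator class described above was shown as an image in the original post; here is a hedged reconstruction, with the class name and method body invented for illustration:

```python
import hashlib
from typing import Final, final

class SignatureGenerator:
    # Final: type checkers flag any reassignment of this attribute.
    ENCODER: Final = "utf-8"

    def __init__(self, payload: str) -> None:
        # __init__ depends on create_signature, which is why the
        # method is protected against overriding in subclasses.
        self.signature = self.create_signature(payload)

    @final  # type checkers flag subclasses that override this method
    def create_signature(self, payload: str) -> str:
        return hashlib.sha256(payload.encode(self.ENCODER)).hexdigest()
```

Note that `Final` and `@final` are enforced by type checkers such as the one built into PyCharm, not at runtime.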
Variable names and values can now coexist in string contexts with the introduction of the equals sign in f-strings. To take advantage of this new feature, type your f-string as `f'{expr=}'`, where `expr` is the variable that you want to expose. This way, you generate a string that shows both your expression and its output. This feature is helpful when you'd like to write variable values to your log. If you happen to use this for debugging purposes, you may want to check out PyCharm's debugger.

Learn more

For more information about usages and examples where these features can be useful, take a look at PEP-0570, PEP-0572, PEP-0591, and bpo-36817. We at PyCharm continue to work on supporting Python 3.8 fully, and we hope these features will come in handy for you when setting up or working with a project using Python 3.8. Support for other features should be expected in the near future, so be sure to pay close attention to our latest releases. If you have any questions or suggestions, drop us a comment. Thanks!

9 Responses to Python 3.8 support in PyCharm

noah says: August 22, 2019
Assignment expressions are exactly the type of construct modern languages are avoiding for the sake of clarity and freedom from side effects; it's strange to see that popping up in Python while they refuse to implement a much-needed 'case' statement, which would go a long way toward clarifying code. I hope the people responsible for the language are not making it more inconsistent; we saw what happened to Java...

Paul Everitt says: September 30, 2019
Perhaps, but that's not germane to PyCharm. It's in the language, we support it.

Derek says: October 19, 2019
I'm currently on PyCharm Community 2019.2.3 and started a new project with Python 3.8.
When I use an example with the walrus operator, I have warnings from PyCharm about having whitespace before the : and not having whitespace after the :

Snippet:

    def walrus_operator(num_iterations):
        count = 0
        while (count := (count + 1)) <= num_iterations:
            sleep(1)
            print(count)

The equals sign in the f-string, positional-only parameters, and the Final annotation and final decorator don't raise any warnings, though. It's important to remember that with the Final annotation and final decorator, you need to import them from the typing module:

    from typing import Final, final

Casey says: October 24, 2019
Confirming this same PEP8 warning in 2019.2.3. Was checking to see if I needed to update inspections independently.

Casey says: October 24, 2019
For others interested, this is being tracked as a major bug: PY-37196 Assignment expressions and PEP8 warnings

Kyle says: October 23, 2019
I'm trying to use Python 3.8 as the Python interpreter for a project, but I can't seem to get it to work or find the option. This seemed easier on my other PC with 3.7, and I'm not sure if this is because of the change in Python or the new PC.

Ernst Haagsman says: October 29, 2019
How are you trying to configure the interpreter?

Dom says: October 30, 2019

    MODE = typing.Literal['r', 'rb', 'w', 'wb']

According to the Python 3.8 documentation:

    def open_helper(file: str, mode: MODE) -> str:
        ...

    open_helper('/some/path', 'r')      # Passes type check
    open_helper('/other/path', 'typo')  # Error in type checker

But no error is shown. 🙁

Andrew Simons says: February 15, 2020
Is debugging Python 3.8 code supported in PyCharm 2018.2? We recently upgraded a project to Python 3.8 and Django 3 (from Python 3.6 and Django 2.2), and although I can start the Django server from PyCharm with it pointing at my venv and it says the debugger has attached, it never stops on any breakpoints I set.
    /Users/blah/Workspace/acme/proj/venv38/bin/python3 "/Applications/PyCharm 2018.2.app/Contents/helpers/pydev/pydevd.py" --multiproc --save-signatures --qt-support=auto --client 127.0.0.1 --port 62462 --file /Users/blah/Workspace/acme/proj/src/manage.py runserver localhost:8000
    pydev debugger: process 9878 is connecting
    Connected to pydev debugger (build 182.5262.4)

Is this a case of PyCharm 2018.2 never working with Python 3.8, so I need to upgrade, or shall I continue trying to find/fix problems with the debugger configuration?
#8503 closed bug (fixed)
New GeneralizedNewtypeDeriving check still isn't permissive enough
Opened 2 years ago; closed 2 years ago.

Description

Roman Cheplyaka writes:

I just tried compiling smallcheck with GHC HEAD, and it didn't work out:

    Test/SmallCheck/SeriesMonad.hs:41:7:
        Can't make a derived instance of ‛MonadLogic (Series m)’
          (even with cunning newtype deriving):
          it is not type-safe to use GeneralizedNewtypeDeriving on this class;
          ‛msplit’, at type ‛forall a. m a -> m (Maybe (a, m a))’,
          cannot be converted safely
        In the newtype declaration for ‛Series’

So GHC now looks at the methods, but the problem is still there. I would agree that msplit's type is problematic (due to the nested m's), but Simon and Richard previously said that it should work, so I'm confused.

Change History (41)

comment:1 Changed 2 years ago by goldfire

comment:2 Changed 2 years ago by simonpj
Cc dimitris@…, sweirich@… added

Wait. Suppose we have g1 : m ~R n and g2 : a ~R b. Then you might think we can conclude that g1 g2 : m a ~R n b. But as you point out, our current rules say not. (I'm looking in docs/core-spec/core-spec.pdf.) But why do they say not? What does it mean for m to be representationally equal to n (something that really only makes intuitive sense at kind *), other than "if you apply them to representationally-equal arguments you get representationally-equal types"?

Another alternative would be NOT to eta-reduce the newtype coercion. See Note [Newtype eta] in TyCon.lhs. It's much less important than it was, because we are no longer coercing dictionaries. But there would still be a cost, as the note points out.

Simon

comment:3 Changed 2 years ago by sweirich

Perhaps another option is to generalize TyConAppCo so that it allows the heads to differ as long as they are (representationally) equal and we have role information for each head. That's slightly more general than changing AxiomInstCo, as you'd get transitivity...
comment:4 Changed 2 years ago by simonpj

Stephanie, did you think about para 2 of my comment? Why not just change the typing rules?

Simon

comment:5 Changed 2 years ago by sweirich

Simon, there is more to it than just "if you apply them to representationally-equal arguments you get representationally-equal types". We need to also look at the roles of the parameters in the application. Consider this example:

    type family F a where
      F Moo = Char
      F _   = Int

    newtype T a = MkT (F a)
    newtype U a = MkU (T a)

So T and U are representationally equal, but their parameter has nominal role. That means that we can safely conclude

    T Moo ~/R U Moo

but it would *not* be safe to say

    T Moo ~/R U Int

because then we would have Char ~/R Int. That is why I was suggesting the change to TyConAppCo. We're not tracking the parameter roles in kinds, so the only place we can look for them is in the head of an application.

comment:6 Changed 2 years ago by simonpj

Good point. Messing with TyConAppCo seems like a sledgehammer to crack a nut, though. Let's check one thing. Does the problem go away if we say that the axiom for U is

    axiom ax7 a : U a ~R T a

that is, if we refrain from eta-reducing the axiom? If so, I think we can re-visit our reasons for eta-reducing the axiom (see Note [Newtype eta] in TyCon.lhs). I believe the reason was 99% to do with coercing the dictionary in GND. And we aren't doing that any more. Moreover, it seems kind of odd to have representational equality at any kind other than *, doesn't it? Do we have any other source of representational equality at non-* kinds? Getting rid of the eta reduction would simplify the code a bit, too.

Simon

comment:7 Changed 2 years ago by goldfire

I agree with Stephanie -- the rule Simon was looking at in core-spec (Co_AppCo, p. 10) is that way for a good reason. Why eta-reduce? Because it allows for more lifting of newtype coercions, via the Coercible mechanism.
For example, if we don't eta-reduce

    newtype NTMaybe a = MkNT (Maybe a)

then we couldn't derive, say, StateT Int NTMaybe a ~R StateT Int Maybe a. So we don't need eta-reduction as much as maybe we once did, but it's still useful.

One other alternative is to have the cake and eat it too: a newtype declaration like the one for NTMaybe can create two axioms, one eta-reduced and one not. There would have to be some logic to choose which axiom to use, but this would allow for all the constructions that we might want. (Strawman implementation: always use the eta-reduced version, except in GND and in Coercible code where we need the extra role info.)

I'm personally not a big fan of the generalization of TyConAppCo. Stephanie's proposal means that the TyConAppCo would have to store two TyCons and a coercion between them, and all the code that thinks about TyConAppCos would now have to consider the possibility that the two TyCons are different.

But, after all this, I still think I favor my original idea: allow oversaturated AxiomInstCos, and store extra role information in CoAxioms accordingly. Can we think of reasons why this is bad? We already allow oversaturated TyConAppCos (in fact, we require them to be oversaturated, as a TyConAppCo can't be on the left of an AppCo). We would introduce a similar invariant between AxiomInstCo and AppCo.

comment:8 Changed 2 years ago by simonpj

Can you outline the AxiomInstCo idea in a bit more detail? Sounds plausible.

comment:9 Changed 2 years ago by goldfire

Hold the phone! There is an easier solution to the original problem! In my first comment, I wanted a coercion co :: forall (a :: *). Maybe (Maybe a) ~R NT (NT a), and then I proposed a possible value of that type that was ill-roled. But here is a well-roled value:

    co = co1 ; co2

    co1 :: forall (a :: *). Maybe (Maybe a) ~R Maybe (NT a)
    co1 = forall (a :: *). Maybe (sym NTCo <a>_N)

    co2 :: forall (a :: *). Maybe (NT a) ~R NT (NT a)
    co2 = forall (a :: *). sym NTCo <NT a>_N

If we convert the pieces one at a time and then use transitivity, we're all OK. I remember battling against this problem over the summer.

So now we have a new question: how to get GHC to find this solution? Luckily, we happen to have a solver lying around that does just that: the Coercible mechanism. I imagine we could get the GND code to just call the Coercible code for each method. In fact, I considered this approach when improving the GND mechanism a few weeks ago, but thought it was overkill. My guess is that this would Just Work, when implemented. It might simplify the code a bit, too, but be somewhat less efficient (at compile time) at doing GND. I think this is reasonable.

Taking the idea a bit further, what if the Coercible mechanism can't derive a coercion for a particular method? It would have to produce unsolved Coercible constraints... which we could just add to the constraints on the GND instance! This would allow more GNDs to work than we had hoped for, such as this example:

    type family F a

    class C a where
      meth :: a -> F a

    type instance F Int = Bool
    instance C Int where
      meth = (> 0)

    type instance F Age = Bool
    newtype Age = MkAge Int deriving C
      -- just works, no extra constraints because F Age = F Int

    type instance F [a] = a
    instance C [a] where
      meth = head

    type instance F (List a) = Int
    newtype List a = MkL [a] deriving C
      -- this would create an instance with head
      -- "instance (Coercible a Int) => C (List a) where ..."

This all seems quite reasonable behavior, though we would want to make sure we don't produce constraints like Coercible Int Bool.

Thoughts? Is this a good plan? We could always, as a first pass, implement GND in terms of Coercible and fail if there are any unsolved constraints, working for C Age above but not C (List a).

comment:10 Changed 2 years ago by simonpj

Cool idea. I like it. For the most part, the 'deriving' mechanism generates an instance decl in HsSyn RdrName and feeds it through the renamer and type checker.
We could do that for deriving decls too, like this:

    newtype Age = MkAge Int deriving( C )

would generate

    instance C Int => C Age where
      meth = coerce (meth :: Int -> F Int)

Here we simply call coerce, which wakes up the Coercible stuff. It's a bit of a nuisance that we have to specify a type signature (in HsType) for meth, so that the from-type of the coerce is known, but not too bad.

What you say about inferring Coercible constraints also makes sense. This would have another benefit: making GND much more uniform with the other instances. See TcEnv.InstInfo. We can probably get rid of NewTypeDerived altogether, by generating a VanillaInst instead!

Simon

comment:11 Changed 2 years ago by goldfire

Posted this to ghc-devs. Realizing it makes more sense here:

I've run up against a limitation of Coercible I'd like to know more about. Currently, the stage2 compiler on my branch fails to build because of the problem. In Module.lhs, there is this line:

    newtype PackageId = PId FastString deriving( Eq, Typeable )

The deriving mechanism sensibly prefers to use the GND mechanism when it can, and it can (seemingly) for Eq here. But I get this error:

    compiler/basicTypes/Module.lhs:297:46:
        No instance for (ghc-prim:GHC.Types.Coercible FastString PackageId)
          because ‛PackageId’ is a recursive type constuctor

This is curious, because PackageId is manifestly not recursive. A little poking around tells me that any datatype mentioned in a .hs-boot file is considered recursive. There is sense to this, but the error message sure is confusing.

In any case, this opens up a broader issue: we want GND to work with recursive newtypes. For example:

    class C a where
      meth :: a

    instance C (Either a String) where
      meth = Right ""

    newtype RecNT = MkRecNT (Either RecNT String) deriving C

The above code works just fine in 7.6.3. But if Coercible isn't allowed over recursive newtypes, then this wouldn't work if GND is implemented in terms of coerce. So, my question is: why have this restriction?
And, if there is a good reason for it, it should probably be documented somewhere. I couldn't find mention of it in the user's guide or in the haddock docs.

If we do keep this restriction, what to do about GND? Seems like this may kill the idea of implementing GND in terms of coerce, and that makes me sad.

comment:12 Changed 2 years ago by goldfire

Simon PJ's response:

Ah, life is never as simple as you hope. The whole treatment of recursive types is a bit flaky in GHC. For newtypes, here is the motivation:

    newtype R = MkR R

Now if we have an instance

    instance Coercible a R => Coercible a R

we aren't going to make much progress. Mutual recursion is similar. This is very much a corner case. I think that if the recursion is under a type constructor, e.g.

    newtype R1 = MkR [R1]

then we are fine. But the current test is conservative. I worry about

    newtype R2 a = MkR (F a)

because perhaps

    type instance F Int = R2 Int

and now R2 Int is just like R. But GHC won't spot this today.

In any case, I suppose that, provided it was documented, GND could simply ignore the recursion problem, behave as advertised, and if that gives a loop it's the programmer's fault.

Things in hs-boot files are treated (again conservatively) as if they might be recursive.

A related thing is unpacking data types. Consider

    data T = MkT {-# UNPACK #-} !S
    data S = MkS {-# UNPACK #-} !Int {-# UNPACK #-} !Int

An S value is represented as a pair of Int# values, and similarly T. But what about

    data S2 = MkS2 {-# UNPACK #-} !Int {-# UNPACK #-} !S2

We don't want to unpack infinitely. Strictness analysis also risks infinitely unpacking a strict argument. I think the rules for newtypes could be different (and perhaps more generous) than for data types.

comment:13 Changed 2 years ago by goldfire

Yuck. I'm leery of creating a way for the type-checker to loop without turning on UndecidableInstances.
And, with the R2 example that Simon gives, the newtype and the relevant type family instance might be in different modules, so detecting even so simple an infinite regress may be problematic. (And, note that the pieces on their own are innocent-looking.)

I've also just looked at Note [Recursive newtypes] in TcDeriv. This note explains why GND works for recursive newtypes. I repeat it here:

Note [Recursive newtypes]
~~~~~~~~~~~~~~~~~~~~~~~~~
Newtype deriving works fine, even if the newtype is recursive. e.g.

newtype S1 = S1 [T1 ()]
newtype T1 a = T1 (StateT S1 IO a) deriving( Monad )

Remember, too, that type families are currently (conservatively) given a recursive flag, so this also allows newtype deriving to work for type families.

We used to exclude recursive types, because we had a rather simple-minded way of generating the instance decl:

newtype A = MkA [A]
instance Eq [A] => Eq A -- Makes typechecker loop!

But now we require a simple context, so it's ok.

I'm confused by the last line. In the code above the note, a context is generated that includes, for this example, Eq [A]! Where does the loop get broken?

Regardless, here's a possible solution: if Coercible over recursive newtypes is sound but otherwise in danger of causing a loop, we could just require UndecidableInstances to get the Coercible mechanism to go down this road. In this scenario, my example (with RecNT) would fail to use GND without UndecidableInstances but should succeed with it. We can even give a helpful error message in this scenario.

Or, perhaps even better: could the Coercible mechanism use TyCon.checkRecTc? That function seems to be designed for exactly this scenario, which is where we need to do newtype expansion but don't want to fall into a black hole. That might complicate the Coercible code, but I think it would be a step in the right direction. We could still allow users to specify UndecidableInstances to bypass even this check...
but my guess is that checkRecTc would work exactly in the correct cases, meaning that bypassing the check is sure to cause the type-checker to loop. Other, more informed opinions are very welcome here.

Lastly, is there a solid reason to require that the recursiveness of a type in a hs-boot file and the recursiveness of the real type are the same? It looks to me that it would be sound to assume types in hs-boot files are recursive, but then not to inherit this trait when processing the real thing. That would fix the problem with compiling Module.lhs, and it would save some embarrassment if we were to suggest UndecidableInstances to compile a very ordinary-looking newtype!

comment:14 Changed 2 years ago by nomeata

I was conservative when disallowing recursive newtypes for Coercible, but quite possibly too conservative. I also wanted to keep the code simple (note that it just keeps generating constraints and passing them back to the constraint solver; it does *not* build the whole coercion in one go). I wasn’t aware of TyCon.checkRecTc and I’ll look into it. Thanks for pointing that out.

comment:15 Changed 2 years ago by nomeata

Using just TyCon.checkRecTc is too permissive for Coercible. checkRecTc succeeds for Fix (Either x) (where newtype Fix f = MkFix (f (Fix f))) for all types x. But then we’d generate the instances

instance Coercible (Either x (Fix (Either x))) b => Coercible (Fix (Either x)) b
instance Coercible a (Either x (Fix (Either x))) => Coercible a (Fix (Either x))

so trying to solve the constraint Coercible (Fix (Either Age)) (Fix (Either Int)) will loop. (With the current implementation, which hooks into the general constraint resolving, I’d expect it to not loop at compile time, but create a diverging witness, and crash at run time. Neither of these is desired.)

Now what about your example, newtype RecNT = MkRecNT (Either RecNT String). Assume that Coercible would work for it, and assume there is also newtype RecNT2 = MkRecNT2 (Either RecNT2 String).
Then, similarly, solving Coercible RecNT RecNT2 will loop with the current implementation... So it is not a matter of relaxing the checks; if we want to support recursive newtypes in Coercible, the algorithm needs to be generally overhauled. comment:16 Changed 2 years ago by goldfire Yes, I agree, sadly. Simon PJ and I (in a phone call) thought that we could solve all our problems with topNormaliseType_maybe, which "evaluates" out any newtypes and type functions that appear at the top of a type. If there is no way to reduce a type at the top level, the function fails. Because the Coercible code really only examines the top level, we thought this was sufficient. However, it's not, because of the example you (nomeata) give, with RecNT and RecNT2. My only thought now for how to do this is to record in the wanted constraint which newtypes have been expanded, perhaps using the checkRecTc technology. This would mean either creating a new constructor for Ct or a new field somewhere. Perhaps creating a new constructor isn't so bad -- that's how nominal equality is handled. Canonicalisation can then convert a normal class-based constraint into the new CCoercible or something. I still think there is hope here, and I do think we should strive to include recursive newtypes in this approach. In my attempt to fix this all, I've done some refactoring in getCoercibleInst. You can see my work at my github repo, branch coerce. That branch also introduces a GenCoercion class containing Coercion and TcCoercion. That may or may not be a good idea, but it worked nicely with topNormaliseType_maybe, which has to sometimes produce a Coercion and sometimes a TcCoercion. I've also discovered that Coercible really ought to be poly-kinded. If we have newtype List a = MkList [a], then we might want Coercible (StateT Int [] a) (StateT Int List a), which would lead to Coercible [] List, but that's ill-kinded! 
I think the first coercion is quite legitimate, and so we should find some way of allowing it. Generalizing Coercible seems to be the obvious way forward, but other possibilities are out there, I suppose. This problem (with Coercible being mono-kinded) came up in practice as a core-lint failure when compiling the libraries in my branch.

comment:17 Changed 2 years ago by simonpj

- Cc diatchki added

Let's note that even with Fix there is a legitimate and useful coercion Coercible (Fix (Either x)) (Either x (Fix (Either x))). We can write it by hand, and you might want to get from the Fix form to the Either form. So the same instance declaration may terminate when solving some constraints, but not for others.

The constraint solver already stores a "depth" in each constraint. The depth is incremented by 1 each time you use an instance declaration. For example, when you use instance Eq a => Eq [a] to solve d1:Eq [Int], by reducing it to d2:Eq Int, then d2 has a depth one greater than d1. Since such instance declarations remove a type constructor, a deep recursion implies a deeply nested type, like [[[[[Int]]]]]. So (proposal) maybe we can simply depth-bound the solver. In fact it already is; this is the -fcontext-stack flag.

Here is one other idea. Suppose we have the wanted constraint Coercible [alpha] [Int]. Should we use the Coercible a b => Coercible [a] [b] instance to solve it? Well, if it turns out that alpha (a unification variable) ultimately is Int, then doing so would not be wrong, but it would be a waste because the identity coercion will do the job. So maybe this is a bit like the Equal type family in the Closed type families paper: we should not do the list/list reduction unless the argument types are "apart" (in the terminology of the paper). But that would be too strong. Consider

f :: Coercible a b => [a] -> [b]
f x = coerce x

This should work, but it requires use of that list/list instance, and we don't know that a and b are un-equal.
It's just that in this context we can treat a and b as skolems and hence "apart" for this purpose. Except, even then it's not quite right: we could have f :: (a~b, Coercible a b) => [a] -> [b] and now a and b are provably equal. (Or some GADT version thereof.) So I think we probably need the depth-bound thing too. Do any of these ideas make sense to you guys? I'll add Iavor to the cc list.

comment:18 Changed 2 years ago by nomeata

JFTR: Last time I checked, we did, leading to a non-terminating Coercion value.

comment:19 Changed 2 years ago by simonpj

I discussed with Joachim. The "apartness" idea definitely doesn't work. Consider

newtype Fix1 x = F1 (x (Fix1 x))
newtype Fix2 x = F2 (x (Fix2 x))

and try solving Coercible (Fix1 (Either Age)) (Fix2 (Either Int)). Everything is fully known, and always apart, yet we get a loop. Moreover, analysing the definitions to find loop breakers (as GHC does now) fails in the presence of type functions

newtype Bad a = B (F a)
type instance F Int = Bad Int

Conclusion: the depth-bound idea is the only real option. We also need to prevent the construction of recursive dictionaries, but we can use the depth of a constraint to stop that too; do not solve a constraint of depth n by using a constraint of lower depth. This is rather conservative, but it's enough to prevent recursive dictionary construction.

comment:20 Changed 2 years ago by nomeata

comment:21 Changed 2 years ago by goldfire

Yes, the depth-bound seems sensible and is probably the only real solution to this problem. We should be aware of a particularly nasty case, like this: Coercible (Maybe (Fix Maybe)) (Fix Maybe). This should be solvable (and I posit that more sensible cases might come up in practice). The current algorithm simplifies the left-hand type (LHT) as far as it can, then simplifies the right-hand type (RHT) as far as it can. I would worry that the depth would hit its maximum when reducing the LHT and then refuse to simplify the RHT.
Even if we tracked depths separately for the two sides, the right side would hit the maximum depth right before the two sides unify, leading to failure, no matter what the depth is! Note that if we just switch the order of the arguments to Coercible, this goal is easy to solve, after one reduction on the LHT. Having the solubility of a symmetric constraint depend on the order of arguments is undesirable. I have no good ideas of how to fix this, at the moment.

comment:22 Changed 2 years ago by nomeata

Have you checked that it fails? I believe it would work: Coercible (Maybe (Fix Maybe)) (Fix Maybe) is only matched by

instance Coercible a (Maybe (Fix Maybe)) => Coercible a (Fix Maybe)

so we are left with Coercible (Maybe (Fix Maybe)) (Maybe (Fix Maybe)) which is solved by the reflexive instance Coercible a a (I’m not claiming that all cases work as smoothly, but they need to be a bit more constructed :-))

comment:24 Changed 2 years ago by simonpj

For the recursive-newtype problem (ie the main topic of this ticket, now that we have moved polykinded Coercible to #8541), the approach (developed by Joachim.

comment:25 Changed 2 years ago by nomeata

ignore this; ticket race condition noise

comment:26 Changed 2 years ago by nomeata

As a first step, I implemented separate depth counters for regular constraints and type function application. User-visible changes: Flag -ftype-function-stack, in addition to -fcontext-stack. Feel free to bikeshed if you think the name is inappropriate. The defaults for these values are discussed in #5395 (currently 200 resp. 20). See for my work in progress, and for the corresponding test suite updates.

comment:27 Changed 2 years ago by nomeata

I have something that is roughly working, enough to notice problems.
While it successfully prevents Coercible Fix (Either Int) from looping or generating recursive dictionaries, it does not help a lot against

newtype Void a = Void (Void (a,a))
foo5 = coerce :: Void () -> ()

Even with a depth of 20, this fills up my memory quickly, especially trying to print the final constraint. Do we have to live with the fact that there will likely always be ways to make the compiler use too many resources? Or should coercing recursive newtypes require UndecidableInstances to be enabled?

comment:28 Changed 2 years ago by simonpj

There are plenty of other ways to construct exponentially-sized types (even in H-M), so I don't think we should worry here. Remember too that type functions mean that the entire notion of a "recursive newtype" is suspect.

comment:29 Changed 2 years ago by nomeata

Got that code in a working form. I don’t claim to know the solver well enough to be completely certain that there is not a way left to create a recursive dictionary, but at least the examples that I tried worked. Commits 6f5c183cb798a5d5e1011417f1f211834e4e9215 (GHC) and ec83ce7530af5474a3ad49d7120913c7f22266f0 (testsuite); for convenience, here is the note describing the design:

Note [Preventing recursive dictionaries]

We have some classes where it is not very useful to build recursive dictionaries (Coercible, at the moment). So we need the constraint solver to prevent that. We conservatively ensure this property using the subgoal depth of the constraints: When solving a Coercible constraint at depth d, we do not consider evidence from a depth <= d as suitable. Therefore we need to record the minimum depth allowed to solve a CtWanted. This is done in the SubGoalDepth field of CtWanted. Most code now uses mkCtWanted, which initializes it to initialSubGoalDepth (i.e. 0); but when requesting a Coercible instance (requestCoercible in TcInteract), we bump the current depth by one and use that.
There are two spots where wanted constraints are attempted to be solved using existing constraints: doTopReactDict in TcInteract (in the general solver) and newWantedEvVarNonrec (only used by requestCoercible) in TcSMonad. Both use ctEvCheckDepth to make the check. That function ensures that a Given constraint can always be used to solve a goal (i.e. they are at depth infinity, for our purposes).

comment:30 Changed 2 years ago by nomeata

I cherry-picked Richard’s implementation of GND via coerce (c76fed) onto wip/T8503 and GHC compiles (after make clean) just fine – which is great. There are test suite failures, though. A lot are performance degradations (some, if they are about the performance of GHC, are to be expected; the Coercible machinery with subsequent simplification is not the fastest code). The others are:

Unexpected failures:
   codeGen/should_run            cgrun068 [exit code non-0] (normal)
   deriving/should_compile       T2856 [exit code non-0] (normal)
   deriving/should_compile       drv015 [exit code non-0] (normal)
   deriving/should_fail          T1496 [stderr mismatch] (normal)
   deriving/should_fail          T2721 [stderr mismatch] (normal)
   deriving/should_fail          T4846 [stderr mismatch] (normal)
   deriving/should_fail          T7148 [stderr mismatch] (normal)
   deriving/should_fail          T7148a [stderr mismatch] (normal)
   deriving/should_run           drvrun019 [exit code non-0] (normal)
   gadt                          CasePrune [stderr mismatch] (normal)
   gadt                          gadt6 [exit code non-0] (normal)
   ghci.debugger/scripts         print018 [bad stdout] (ghci)
   indexed-types/should_compile  DerivingNewType [exit code non-0] (normal)
   indexed-types/should_compile  IndTypesPerf [bad exit code] (normal)
   indexed-types/should_compile  T2850 [exit code non-0] (normal)
   indexed-types/should_compile  T3423 [exit code non-0] (normal)
   indexed-types/should_compile  T4185 [exit code non-0] (normal)
   indexed-types/should_compile  T5955 [bad exit code] (normal)
   indexed-types/should_compile  T7474 [exit code non-0] (normal)
   lib/integer                   integerConstantFolding [bad stderr] (normal)
   polykinds                     T7332 [exit code non-0] (normal)
   roles/should_fail             Roles10 [stderr mismatch] (normal)
   simplCore/should_compile      T4203 [exit code non-0] (normal)
   typecheck/should_compile      T3955 [exit code non-0] (normal)
   typecheck/should_fail         T5246 [stderr mismatch] (normal)

Of these, there are a few where we get unresolved Coercible instances due to roles (and many of them might be genuine and can probably be marked as expected to fail), but there is also a number of panics and failed assertions. wip/T8503 itself validates, so these were either introduced or triggered by implementing GND via coerce. Probably triggered, given that the GND code now generates mostly straight-forward code. I’ll start looking into some of them tomorrow, but Richard, if you feel like helping debug them, that would be appreciated. Some of them appear in code that I do not know much about, such as module interfaces.

comment:31 Changed 2 years ago by nomeata

- Blocked By 8548 added

comment:32 Changed 2 years ago by nomeata

Is it hijacking this ticket to discuss the impact of GND-via-coerce here? Anyways, feature request #2850 will be broken if we do GND-via-coerce. Four tickets are broken because of #8548 (Coercible not working for newtype instance yet), these will be fixable. The remaining ones (TEST="print018 cgrun068 T4203 drvrun019 T3955 T7474 T3423 IndTypesPerf T5246 integerConstantFolding gadt6 T7332 drv015") cause panics, looking into them now.

comment:33 Changed 2 years ago by goldfire

comment:34 Changed 2 years ago by nomeata

Yes, it is a horribly effective testsuite ;-)

comment:35 in reply to: ↑ 9 Changed 2 years ago by nomeata

Hold the phone! There is an easier solution to the original problem! [..] Thoughts? Is this a good plan? We could always, as a first pass, implement GND in terms of Coercible and fail if there are any unsolved constraints, working for C Age above but not C (List a).

I’ve been trying to get your attempt to work.
It works for a lot of easy cases, and it works in theory, but there is an implementational difficulty with extra type variables in the instance. Consider:

import GHC.Exts

class Cls a where
    meth :: a

newtype Id m = Id m deriving Cls

The instance we want to generate is

instance forall m. Cls m => Cls (Id m) where
    meth = coerce (meth :: m) :: Id m

But that will require ScopedTypeVariables. Question one: Can I generate code at TcGenDeriv stage that uses scoped type variables, without requiring that extension to be enabled in the whole module?

Next problem: Currently the code produces

==================== Derived instances ====================
Derived instances:
  instance T3423.Cls m_ayI => T3423.Cls (T3423.Id m_ayI) where
    T3423.meth = GHC.Prim.coerce (T3423.meth :: m_ayI) :: T3423.Id m_ayI

so there is no forall in the instance head. For newtype ... deriving Cls it might be possible to add that (although I did not yet find where), but the user should be able to specify deriving instance Cls m => Cls (Id m) without having to add forall m.. I’m not sure what to do here. It would be best if we could generate code that works without having to specify types in the method definition at all, maybe using something like $ClsId = case $superClassDict of D:Cls meth => D:Cls (coerce meth), but I do not see what $superClassDict should look like.

comment:36 Changed 2 years ago by nomeata

My analysis in the previous comment was wrong; SPJ found out that the problem seems to lie in the calculation of defined and used variables for derived instances. Debugging...
comment:37 Changed 2 years ago by nomeata

Ok, the problem was two missing characters:

@@ -432,7 +432,7 @@ renameDeriv is_boot inst_infos bagBinds =
         -- Bring the right type variables into
         -- scope (yuk), and rename the method binds
         ASSERT( null sigs )
-        bindLocalNames (map Var.varName tyvars) $
+        bindLocalNamesFV (map Var.varName tyvars) $
         do { (rn_binds, fvs) <- rnMethodBinds (is_cls_nm inst) (\_ -> []) binds
            ; let binds' = InstBindings { ib_binds = rn_binds
                                        , ib_pragmas = []

but now everything seems to work. I’m running validate right now. If it goes through, any objections to pushing wip/T8503 to master and closing this bug?

comment:38 Changed 2 years ago by goldfire

comment:39 Changed 2 years ago by Joachim Breitner <mail@…>

comment:40 Changed 2 years ago by Joachim Breitner <mail@…>

comment:41 Changed 2 years ago by nomeata

Ok, pushed. Closing this ticket (it is far too long anyway, and trac spends too much time generating it), so any new issues with Coercible or GND please report in new tickets.

comment:42 Changed 2 years ago by nomeata

- Resolution set to fixed
- Status changed from new to closed

It seems that trac does not do NLP, but Richard does. So again: Closing the ticket...

This is subtler than I thought. Let's take a simple case: In the derived instance for C NT, we need this: What is co? Ideally, it would be but that doesn't role-check! NTCo:NT[0] has type NT ~R# Maybe. But, to apply that coercion to another coercion requires an AppCo, which in turn requires its second argument to have a nominal role. The underlying problem here is essentially that eta-reducing the newtype coercion loses information about the roles of its parameters. After all, if Maybe's parameter were at a nominal role, then this deriving would be bogus. By eta-reducing, we're forgetting that Maybe's parameter's role is other than nominal. What to do?
One possible solution is to allow AxiomInstCo to be oversaturated, and then role information could be stored in an axiom (indeed, role information for all non-eta-reduced parameters is already there). This isn't a terrible plan, but perhaps it has further ramifications that I'm not currently seeing.
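Stepping back, the depth-bound approach this thread converges on can be sketched generically. The following is an editor's toy model, not GHC's solver: types are spelled as plain strings, `unwrap` stands in for one newtype expansion, and `depth` plays the role of the -fcontext-stack bound. With the bound in place, a Fix1-vs-Fix2-style goal terminates (with failure) instead of looping, while the Maybe (Fix Maybe) goal from comment:21/22 still solves.

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy model of depth-bounded goal solving (illustrative only, not GHC's
// implementation). `unwrap` maps a type to the result of expanding one
// newtype layer; `depth` plays the role of the -fcontext-stack bound.
bool solveCoercible(std::string lhs, std::string rhs,
                    const std::map<std::string, std::string>& unwrap,
                    int depth) {
    for (int d = 0; d <= depth; ++d) {
        if (lhs == rhs) return true;  // reflexivity: goal solved
        if (unwrap.count(lhs)) { lhs = unwrap.at(lhs); continue; }
        if (unwrap.count(rhs)) { rhs = unwrap.at(rhs); continue; }
        return false;  // stuck: neither side reduces further
    }
    return false;  // budget exhausted: fail conservatively instead of looping
}
```

With `unwrap` containing "Fix Maybe" -> "Maybe (Fix Maybe)", the goal solves after one expansion of the right-hand side; with two mutually non-coercible recursive newtypes (Fix1/Fix2 expanding back and forth), the budget runs out and the solver fails instead of diverging, mirroring the "same instance terminates for some goals but not others" observation above.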
https://ghc.haskell.org/trac/ghc/ticket/8503
The first approach

Suppose you want to make NPCs which react differently to the player interacting with them. First of all, we have a Guard which just says “Hello” and a witch, who heals the player if his health is low. Let’s go with a naive implementation. Here’s the Player class:

// Player.h
#pragma once
#include <string>

class Player {
public:
    int getHp() const { return hp; }
    void setHp(int hp) { this->hp = hp; }
    int getMaxHp() const { return maxHp; }
    void setMaxHp(int maxHp) { this->maxHp = maxHp; }
private:
    int hp = 10;
    int maxHp = 10;
};

Character base class:

// Character.h
#pragma once
#include <string>
#include "Player.h"

class Character {
public:
    virtual ~Character() = default;
    virtual void interact(Player& player) = 0;
    std::string getName() const { return name; }
    void setName(const std::string& name) { this->name = name; }
protected:
    std::string name;
};

interact is a function which gets called each time the player interacts with characters, for example, talks with them. Let’s derive Guard and Witch classes from the Character class.

Guard class:

// Guard.h
#pragma once
#include "Character.h"

class Guard : public Character {
public:
    void interact(Player& player) override;
};

// Guard.cpp
#include "Guard.h"
#include <iostream>

void Guard::interact(Player& player)
{
    std::cout << name << ": Hello! Be safe!" << std::endl;
}

Witch class:

// Witch.h
#pragma once
#include "Character.h"

class Witch : public Character {
public:
    void interact(Player& player) override;
};

// Witch.cpp
#include "Witch.h"
#include <iostream>

void Witch::interact(Player& player)
{
    if (player.getHp() == player.getMaxHp()) {
        std::cout << name << ": Hi! Come back when you're wounded." << std::endl;
    } else {
        std::cout << name << ": Hi! Let me heal you." << std::endl;
        player.setHp(player.getMaxHp());
    }
}

And now we can use all these classes:

// main.cpp
#include <iostream>
#include "Guard.h"
#include "Player.h"
#include "Witch.h"

int main()
{
    Player player;
    Witch witch;
    witch.setName("Lucia");
    Guard guard;
    guard.setName("Sam");
    // ...
}

You can find the code here. Here’s the output:

|Player walks around the town and talks with some npcs
Lucia: Hi! Come back when you're wounded.
Sam: Hello! Be safe!
|Player goes on a great adventure!
|but gets beaten up by some foes...
> Player: 5 HP
| Back to the town... Let's talk to the witch
Lucia: Hi! Let me heal you.
> Player: 10 HP

Cool, let’s now analyze how scripts can make this whole thing better. Let’s look closer at the Guard and Witch classes. The only thing different about them is the interact function. This will be the only difference between classes for most friendly NPCs. Do we really need to make a new class for this? We can use the Strategy pattern. But still, we have to create a new class for each new behaviour. And it’s all in the code. This is not very good, especially if you’re working with someone who doesn’t know how to program. If another dev wants to change some NPC’s behaviour or add a new type of NPC, he has to ask the programmer to do it. This is not very productive. And even if you’re a programmer, there are still nice things which scripting will give to you. One of the most important: you don’t have to recompile the code when you change something in scripts. You can even change object behaviour during runtime, reload its script and continue playing the game. Isn’t it awesome? Let’s get to the scripting!

Scripting

First of all, let’s get rid of the Witch and Guard classes. We won’t need them anymore. Let’s start by modifying the Character class.

// Character.h
#pragma once
#include <memory>
#include <string>
#include <LuaBridge.h>

class Player;

class Character {
public:
    Character();
    void loadScript(lua_State* L, const std::string& scriptFilename,
                    const std::string& tableName);
    virtual ~Character() = default;
    void say(const std::string& text);
    std::string getName() const { return name; }
    void setName(const std::string& name) { this->name = name; }
    void interact(Player& player);
protected:
    std::string name;
    std::unique_ptr<luabridge::LuaRef> interactFunc;
};

What has changed? First of all, we have a LuaRef pointer named interactFunc which we’ll use to call a Lua interact function.
It’s a pointer because LuaRef doesn’t have a default constructor, so you have to allocate memory for interactFunc once you get it from the script. I’ll assume that each NPC will have its own Lua table to make things as simple as possible.

What if you want to create multiple objects that use the same interact function? Loading the interactFunc over and over seems inefficient. You can handle it this way: create some “template” characters from which you will copy the interact function. Use shared_ptr for interactFunc for reference counting to see how many objects use the particular interact function.

We also have a loadScript function which will load the character’s name and an interact function from the script. Now let’s see how Character‘s methods are defined.

Character::Character() :
    interactFunc(nullptr)
{
}

void Character::loadScript(lua_State* L, const std::string& scriptFilename, const std::string& tableName)
{
    using namespace luabridge;
    if (luaL_dofile(L, scriptFilename.c_str()) == 0) { // script has opened
        LuaRef table = getGlobal(L, tableName.c_str());
        if (table.isTable()) {
            if (table["name"].isString()) {
                name = table["name"].cast<std::string>();
            } else {
                name = "Null";
            }
            if (table["interact"].isFunction()) {
                interactFunc = std::make_unique<LuaRef>(table["interact"]);
            }
        }
    } else {
        std::cout << "Error, can't open script!" << std::endl;
    }
}

It’s pretty self-explanatory if you’ve read the previous part of my tutorial. Notice all the error checking. Make sure you do it all the time when you handle scripts. It’ll save you someday. The Character table will look like this in a Lua script:

some_Character = {
    name = "some name",
    interact = function(player)
        -- put interaction code there
    end
}

Here’s another function. Pretty simple:

void Character::say(const std::string& text)
{
    std::cout << name << ": " << text << std::endl;
}

And the interact function looks like this.
void Character::interact(Player& player)
{
    if (interactFunc) {
        try {
            (*interactFunc)(this, player);
        } catch (luabridge::LuaException const& e) {
            std::cout << "LuaException: " << e.what() << std::endl;
        }
    }
}

We just call the Lua interact function using pointers to Character and Player. There’s exception handling. You can make some mistakes in the interact function but that wouldn’t crash your program and you’ll see where you made these mistakes. Very useful.

Here’s how the Player class looks now:

// Player.h
#pragma once
#include <string>
#include "Character.h"

class Player : public Character {
public:
    int getHp() const { return hp; }
    void setHp(int hp) { this->hp = hp; }
    int getMaxHp() const { return maxHp; }
    void setMaxHp(int maxHp) { this->maxHp = maxHp; }
private:
    int hp = 10;
    int maxHp = 10;
};

Not much has changed. But notice that the Player class is now derived from the Character class. We’ll see how we can register derived classes in a moment. We can now register C++ classes using LuaBridge so Lua can handle them. Here’s how it’s done.

lua_State* L = luaL_newstate();
luaL_openlibs(L);
getGlobalNamespace(L)
    .beginClass<Character>("Character")
        .addConstructor<void(*)(void)>()
        .addProperty("name", &Character::getName, &Character::setName)
        .addFunction("say", &Character::say)
    .endClass()
    .deriveClass<Player, Character>("Player")
        .addProperty("hp", &Player::getHp, &Player::setHp)
        .addProperty("maxHp", &Player::getMaxHp, &Player::setMaxHp)
    .endClass();

Let’s go line by line:

beginClass<Character>("Character")

You start registering a class with this. Notice that the class name in Lua may differ from the one in C++. It’s convenient for them to be the same, though.

addConstructor<void(*)(void)>()

Here’s how you add a constructor. You have to specify its arguments using function pointer syntax. You can’t have overloaded constructors.

addProperty("name", &Character::getName, &Character::setName)

Lua can handle some member variables as properties. So you can write something like this in Lua:

characterName = character.name
character.name = "Some guy"

Getters and setters will be called for each operation with a property. If you want the property to be read only, just pass a getter to addProperty method without a setter.
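To make the property mechanics concrete, here is a plain-C++ sketch with no LuaBridge involved (the names StringProperty, Npc and makeNameProperty are invented for this illustration and are not LuaBridge API): a property is just a getter/setter pair, and every script-side read or write is routed through it.

```cpp
#include <cassert>
#include <functional>
#include <string>

// Illustrative stand-in for what a registered property amounts to: every
// read goes through the getter, every write through the setter. This mirrors
// the idea, not LuaBridge's internals.
struct StringProperty {
    std::function<std::string()> get;
    std::function<void(const std::string&)> set;  // leave empty => read-only
};

struct Npc {
    std::string name;
};

// Builds a read/write property bound to an Npc's name field.
inline StringProperty makeNameProperty(Npc& npc) {
    return {[&npc] { return npc.name; },
            [&npc](const std::string& v) { npc.name = v; }};
}
```

A script-side `character.name = "Lucia"` then corresponds to calling the property's setter, and reading `character.name` to calling its getter; registering the property without a setter is the read-only case described above.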
Adding functions is simple too:

addFunction("say", &Character::say)

And we end class registration with this function.

endClass()

We can register derived classes like this:

deriveClass<Player, Character>("Player")

Virtual methods work normally. No special syntax needed. Here are some notes from the LuaBridge manual:

The `deriveClass` takes two template arguments: the class to be registered, and its base class. Inherited methods do not have to be re-declared and will function normally in Lua. If a class has a base class that is not registered with Lua, there is no need to declare it as a subclass.

Here are some more important things to know:

- LuaBridge does not support overloaded functions
- Const methods are detected automatically. Const-correctness is enforced
- Destructor is registered automatically

It’s also important to know how objects are passed to Lua and which lifetime (C++ or Lua) they have. I’ll cover this in the next article. And now let’s write some scripts!

-- witch.lua
witch = {
    name = "Lucia",
    interact = function(witch, player)
        if (player.hp == player.maxHp) then
            witch:say("Hi! Come back when you're wounded.")
        else
            player.hp = player.maxHp
            witch:say("Hi! Let me heal you.")
        end
    end
}

-- guard.lua
guard = {
    name = "Sam",
    interact = function(guard, player)
        guard:say("Hello! Be safe!")
    end
}

Not too much to explain here. Notice that you call member functions with ‘:‘ like this: witch:say(...)

And now the remaining part of main.cpp:

Player player;
player.setName("Player");

Character witch;
witch.loadScript(L, "witch.lua", "witch");

Character guard;
guard.loadScript(L, "guard.lua", "guard");

The output is the same as for the previous C++-only version. You can find the code here. And there you have it. I hope you’ve seen how useful Lua scripts can be.

What’s next?

But things don’t end here! Explore and try Lua yourself. Maybe you’ll find some more cool things! In the next chapter I’ll cover object creation and their lifetime.
I also plan to write a chapter which will show how I use scripts in my game (I’ve written a lot about that in my dev logs, but I need to organize it a bit). I don’t have classes which I have to register because I use Entity/Component/System model so things are a bit different there. See ya next time! If you’ve noticed mistakes or just want to provide some feedback, feel free to send me an e-mail: eliasdaler@yandex.ru 20 thoughts on “Using Lua with C++. Using Lua scripts with C++ classes.” Yeah ! Another awesome tutorial ! :) Thanks a lot ! I’ll try to adapt this for the Entity Component System i use, i hope i’ll succeed ! Glad you liked it. You’re welcome! Feel free to ask me anything about it if you have some questions. When are you going to write the next part? I found these tutorials and they are fantastic(best ones I’ve found) but I really need to know about object creation and their lifetime! Thanks. Thank a lot! I’m planning to write it somewhere this month or next month maybe. This stuff takes lots of time so I can’t really promise a lot. :) Thanks for replying! I appreciate it and take your time :) No problem! By the way, there’s an official LuaBridge manual which explains a lot about object lifetime, you can find it here: My tutorial will use it as reference but I’ll also try to show some practical examples and show what happens if you do some incorrect stuff (like changing value of const variable) and stuff like that. :) Thanks for the great info! I’ll be glad to check it out. Hey Elias, me again, sorry for bombarding you with questions, but I am trying to use the technique you used here with the interaction function but I’m trying to implement the collide function we previously discussed. From what I can tell this should work well, and the bounding box in the example gives me no trouble, however when I call my collide function I get an access violation runtime error (Meaning I am reading memory I shouldn’t be). 
Anyway, here is my code; anything not shown here is exactly the same as your code, except for maybe some names. Let me know if you need anything else.

Hey, no problem, feel free to ask as many questions as you need. So, did you check the didCollide function after loading it? It shouldn't be nullptr, and its LuaRef shouldn't have mRef == -1. And can you show me your script, please?

Thanks again, I appreciate the help. If by checking didCollide you mean doing `if (didCollide) { … }`, then yes. The only difference between my code and the code in this article is that I load the function in the constructor of my class, as shown by one of your other articles. Ghost script: Then here is the loading/calling of didCollide:

Hmm, pretty strange, this should work. If mRef in the didCollide LuaRef equals -1, then it means that something went wrong. It's hard to know what. And by the way, you shouldn't pass pointers to the function, you should pass references, so you should write: `(*didCollide)(*ent1, *ent2);` (Sorry it took me so long to answer.)

Hmm, okay, thanks for the tip!

You're welcome :)

I have a question. Let's say we are programming an RTS (…strategy) and we need different interactions for player, ghost, guard, and even non-characters (for example, a chest). How do we write that code? Use the name field?

Yes, you should have a field which specifies the entity type. Maybe it'll be a string or some unique int identifier for each entity type.

There shouldn't be "delete interactFunc;" in the Character destructor. It's a shared_ptr, so it cleans itself up automatically. You shouldn't give Character a destructor at all; the default compiler-generated one is fine.

You're right. That's a thing I forgot to take out, because previously the code used raw pointers. I've even changed shared_ptr to unique_ptr, as it's better in this case.

"If you want the property to be read only, just pass a setter to addProperty method." I guess you meant "getter" here?

Yep. Fixed!
I recommend printing Lua errors like this: `std::cout << "ERROR " << lua_tostring(L, -1) << std::endl;` It gives more verbose information about the problem.
Lets say that I have a rich data model with many classes and relationships. What I would like to do is to solidly lock down the data model using exception throwing data validators. Therefore if a user passed an invalid piece of data to the set method, he would get an exception rejecting the data. Example: public class Person { public void setName(final String name) { if (name.size() > 30) throw new IllegalArgumentException(); } } The idea being that I can only give a name if it is 30 characters or less. Now in CMP 2.0, this class would look like .. public abstract class Person implements EntityBean { public abstract void setName(final String name); } Because i have no control over the fields, I cannot check the data in the setter method. How does one overcome this deficiency? I cannot leave the data validation out because it would leave a hole that i could drive a truck through and yet i dont know how I can validate it. TIA -- Rob How can I do data validation using CMP 2.0 ? (9 messages) - Posted by: Robert Simmons - Posted on: January 11 2003 05:46 EST Threaded Messages (9) - How can I do data validation using CMP 2.0 ? by meka toka on January 12 2003 00:46 EST - How can I do data validation using CMP 2.0 ? by Leonard Gurevich on January 12 2003 11:20 EST - How can I do data validation using CMP 2.0 ? by Matthew Brown on January 12 2003 16:51 EST - How can I do data validation using CMP 2.0 ? by Robert Simmons on January 12 2003 23:11 EST - Don't validate data in the entity bean by J. B. Rainsberger on January 13 2003 10:54 EST - Don't validate data in the entity bean by Robert Simmons on January 13 2003 15:06 EST - Don't validate data in the entity bean by Matthew Brown on January 13 2003 06:05 EST - Don't validate data in the entity bean by Robert Simmons on January 14 2003 05:00 EST - Don't validate data in the entity bean by J. B. 
Rainsberger on January 28 2003 09:12 EST How can I do data validation using CMP 2.0 ?[ Go to top ] Hi, - Posted by: meka toka - Posted on: January 12 2003 00:46 EST - in response to Robert Simmons You could use a session bean as a client to your entity bean. Meaning, the session bean takes the data from the user/client and then validates it(if necessary throws exceptions) before writing it to the underlying data store(entity beans). HTH Regards Meka Toka How can I do data validation using CMP 2.0 ?[ Go to top ] For each field in CMP Entity Bean you can create non-abstract method where you can validate/modify passed to the bean data and then call abstract setter. - Posted by: Leonard Gurevich - Posted on: January 12 2003 11:20 EST - in response to Robert Simmons How can I do data validation using CMP 2.0 ?[ Go to top ] In my experience, its poor design to actually validate within the EJB Code. I typically have the following structure.Even with the use of local objects(ejbinterfaces ect)-you still have overhead associated with using ejb objects instead of regulard objects. Here is our structure- - Posted by: Matthew Brown - Posted on: January 12 2003 16:51 EST - in response to Robert Simmons Web Request----->Servlet-getParametersForFields---->populate data transfer object(DTO)which is serializable---->validate against a validation class(which encapsulates all of the validation logic)-on the DTO class----- if everything validates------>create EJB. I use the reflection API to recursively call all of the DTO's get methods and validate against the validation class. I then use my own exception objects- which can be thrown during the validation phase. I think that having the validation logic in the EJB causes tight coupling. If I make a change to my validation logic, I don't have to redeploy the the EJB code to do so. Also, encapsulating validation logic causes a more modular design that makes the classes much cleaner(ie. 
EJB's are for data access, DTO's can be used for alot of different things, and the validation class is only for validation.) There are alot of people that'll say that validation is the job of the EJB object- but I don't see why its that desireable. I know that my validation rules WILL change- for example when we implemented one of our U.S. systems in Canada- validation on zip codes changed significantly- I used the same EJB's- and just changed the validation logic in the validation class. I didn't need to worry about redeploying a bunch of beans- instead just some regular old classes. How can I do data validation using CMP 2.0 ?[ Go to top ] Ahh there is the crux of the matter. I want the validation in bothe the DTO and the EJB. The idea is to make sure no object can get out of synch with what is in the database. Validating in the session bean is a bit like givign the keys to the trash man. - Posted by: Robert Simmons - Posted on: January 12 2003 23:11 EST - in response to Matthew Brown How do I do the non-abstract method validation to in the ejb? Don't validate data in the entity bean[ Go to top ] The responsibility of the CMP entity bean is to store the data. The responsibility of the business logic is to validate the data. The less you mingle these responsibilities, the better. - Posted by: J. B. Rainsberger - Posted on: January 13 2003 10:54 EST - in response to Robert Simmons If you are concerned about someone bypassing the business logic and trying to store more than 30 characters using the CMP entity bean directly, then talk to the developer, rather than try to write code to solve the problem. Sometimes it's not a code problem; it's a people problem. Good luck. Don't validate data in the entity bean[ Go to top ] Actually, accordign to Object Oriented Engineering principles, an object should validate its own data. In fact it would be wasteful if everyone using a data object attempted to validate the data independently. 
It would also be an application hole you could drive a truck through. - Posted by: Robert Simmons - Posted on: January 13 2003 15:06 EST - in response to J. B. Rainsberger Additionally, you may not have control over the developer usign your bean. Instead of spending hours at a customer site trying to figure out what they screwed up, my beans that I sell them should catch their mistakes. All of this adds up to the entity bean performing data validation checks. What I would like to know is how I use the non abstract methods in an CMP 2.0 entity bean to validate data. A quick example or reference to a document would suffice. Don't validate data in the entity bean[ Go to top ] From a conceptual standpoint, an object should validate itself, however, there are different level's and types of validation. Making a blanket statement that "Object's need to validate themselves and all of the validation code needs to exist in the object" ignores this fact. There is business domain validation(ie. do a range check on account numbers from the General Ledger module) and there is object state validation(validate that the account number's not null ). To combine the data model with the business logic layer of the application is poor design. The data model should not need to know what the validation is in the business logic area. The business logic should not have to know how the object's are persisted to the database. What I'm talking about is loose coupling. "Coupling describes the strength of the association established by a connection from one module to another. Strong coupling results in a system that is harder to understand, change or correct." Object Oriented Analysis and Design- Booch. - Posted by: Matthew Brown - Posted on: January 13 2003 18:05 EST - in response to Robert Simmons Ask yourself if including this validation logic is part of the business domain, or is it validating simply the object's state. 
Look hard at how easy it'll be to change your CMP objects, because you know that at some point requirements and needs will change. I wouldn't base my decision on a developer who doesn't know how to implement your system.

Don't validate data in the entity bean [ Go to top ]
Yes, that is self-evident. I assumed people knew that I was talking about data validation and not the imposition of business rules. Validating types, sizes, and basic requirements of data should be the domain of the data object, which should correspond directly with the database validation insomuch as is possible. This prevents data corruption from leaking into the system by providing multiple firewalls of verification. If your entity beans validate and your DTOs validate, then there will be less of a problem.
- Posted by: Robert Simmons - Posted on: January 14 2003 05:00 EST - in response to Matthew Brown
In the end, I asked a question that I am not sure there is an acceptable answer to. One of the reasons I tend to favor JDO over entity beans is the fact that JDO does not impose abstract restrictions on the type of data object, nor does it prevent me from validating data or doing other logic inside the data object. CMP 2.0, with its lack of dynamic "finders" and its rigid structure, makes a questionable platform for persistence integration.

Don't validate data in the entity bean [ Go to top ]
I realize that an object SHOULD validate its own data, but EJBs are not your ordinary objects. They are heavyweight monstrosities belonging to a sometimes-silly framework. For that reason, it is best not to get too fancy with them.
- Posted by: J. B. Rainsberger - Posted on: January 28 2003 09:12 EST - in response to Robert Simmons
If the bean were BMP, most people would likely suggest doing the validation in a Data Transfer Object, then calling the BMP entity bean's 'setData' method to avoid the repetitive remote calls and database updates. What makes the CMP situation any different?
http://www.theserverside.com/discussions/thread.tss?thread_id=17352
How do you make the program wait 1 second? I want to make it so it would do a function every 1 second. Is this possible?

It's possible, but not portable. How you would go about this depends on what system and compiler you use. But because most of the question askers around here use Windows, here's one way to do it:

Code:
#include <windows.h>
...
Sleep(1000);
...

Yes, but that pauses the entire program. I just want to pause it in a function. Isn't there any other way?

>I just want to pause it in a function.
Unless you're using multiple processes or threads, the only thread of execution is the function that you're executing at the time. If you want two functions to run simultaneously, then you need to create another process or another thread, both of which are highly platform dependent at their simplest. So you'll want to move to a more appropriate forum.

Unless you're using multiple threads of execution, they're the same thing. I assume you're not using multiple threads, so just use Sleep(1000).
edit: While I'm on the subject: the 1000 is just the number of milliseconds you want to pause for, so that's how you would pause for other amounts of time.

I know how to use Sleep. One thing, though: I'm using C++ Builder so I can create Windows applications with ease, and I want it so that when 1 second passes, it changes a part of the program without pausing the whole program. Like when it's doing multiple things and it's, say, 13% done, it updates the "percentdone" label I made.

Have the progress bar be a separate object; when your function that is unpacking everything detects that it's now at a new percentage, just make a call to another function that updates the progress bar.
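Another non-blocking option for the "update a label once per second" case is to poll a monotonic clock from inside the work loop, instead of calling Sleep or spawning a thread (C++ Builder's own TTimer component does something similar event-style). This is a portable sketch using std::chrono; the class name and the loop are illustrative, not from the thread:

```cpp
#include <chrono>

// Poll-based timer: call tick() from inside the work loop. It returns true
// at most once per interval and never blocks, so the loop keeps running
// (and a GUI can keep processing messages) between updates.
class PeriodicTimer {
public:
    explicit PeriodicTimer(std::chrono::milliseconds interval)
        : interval_(interval), last_(std::chrono::steady_clock::now()) {}

    bool tick() {
        const auto now = std::chrono::steady_clock::now();
        if (now - last_ < interval_)
            return false;  // not due yet; caller just keeps working
        last_ = now;
        return true;       // due; caller updates the "percentdone" label
    }

private:
    std::chrono::milliseconds interval_;
    std::chrono::steady_clock::time_point last_;
};

// Inside the unpacking loop this would look something like:
//   PeriodicTimer timer(std::chrono::milliseconds(1000));
//   while (workLeft()) {
//       doOneSliceOfWork();
//       if (timer.tick())
//           updatePercentDoneLabel();
//   }
```

Because tick() merely compares timestamps, the work loop never stalls; the label simply refreshes on the first pass through the loop after each full second has elapsed.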
http://cboard.cprogramming.com/cplusplus-programming/54710-how-do-you-printable-thread.html
I've been playing with Spark and Python on an online Jupyter notebook and tried three ways to pass Python functions:

1) using map with a function defined in the notebook:

import numpy as np

def my_sqrt(x):
    return np.sqrt(x)

sc.parallelize(range(10)).map(my_sqrt).collect()

2) shipping the function itself as data:

sc.parallelize([(my_sqrt, i) for i in range(10)]).map(lambda x: x[0](x[1])).collect()

3) shipping a library function as data:

sc.parallelize([(np.sqrt, i) for i in range(10)]).map(lambda x: x[0](x[1])).collect()

The first approach works because Spark uses a special serialization strategy to process the closures required for transformations. It is significantly slower but more powerful than the standard pickle (otherwise we couldn't use .map(lambda x: ...)).

The last approach works because there is no need to serialize the function code at all. It references sqrt from the numpy module, so as long as NumPy is accessible on each worker there is no problem at all.

The second approach doesn't work because pickling doesn't serialize the code:

import pickle
pickle.dumps(my_sqrt)
## b'\x80\x03c__main__\nmy_sqrt\nq\x00.'

All it does is state: please give me an object assigned to my_sqrt (my_sqrt.__name__) from the top-level script environment (a.k.a. __main__). When this is executed on the workers, they don't use the same environment, and there is no such object in scope anymore, hence the exception. To be clear, this is neither a bug nor something specific to Spark. You can easily reproduce the same behavior locally as follows:

In [1]: import pickle

In [2]: def foo(): ...

In [3]: foo_ = pickle.dumps(foo)

In [4]: pickle.loads(foo_)
Out[4]: <function __main__.foo>

In [5]: del foo

In [6]: pickle.loads(foo_)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
...
AttributeError: Can't get attribute 'foo' on <module '__main__'>

Since pickle doesn't concern itself with the actual value, you can even reassign the name like this:

In [7]: foo = "foo"

In [8]: pickle.loads(foo_)
Out[8]: 'foo'

The take-away message here: if you want to use a function this way, put it in a separate module and distribute it among the workers the same way as you do other dependencies, including custom class definitions.
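To make the take-away concrete, here is a small local sketch of the fix. The module name mymod and the temp-directory setup are purely illustrative; the point is that pickle stores only the reference "mymod.my_sqrt", which any process able to import mymod can resolve (for Spark, ship the module to the workers, e.g. via spark-submit's --py-files):

```python
import os
import pickle
import sys
import tempfile
import textwrap

# Write a tiny module to disk and make it importable, simulating a module
# that would be distributed to the workers alongside the job.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "mymod.py"), "w") as f:
    f.write(textwrap.dedent("""
        def my_sqrt(x):
            return x ** 0.5
    """))
sys.path.insert(0, tmpdir)

import mymod

# Pickle now records "mymod.my_sqrt" by reference instead of "__main__.my_sqrt",
# so any process that can import mymod can unpickle and call it.
payload = pickle.dumps(mymod.my_sqrt)
restored = pickle.loads(payload)
print(restored(9.0))  # -> 3.0
```

Unlike the __main__ case above, deleting or reassigning names in the current session no longer matters, because the lookup happens inside the importable module.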
https://codedump.io/share/mMeaTC07JclK/1/passing-python-functions-as-objects-to-spark
Back when I started programming, I hated data structures and algorithms because everyone was saying how you wouldn't invert a binary tree in a real job and similar things. However, once I learned them, I realized they're actually very helpful and made me think better as a developer. I'm now much more flexible in coming up with solutions during development as well as when dealing with DSA.

The Problem?
You're required to check for balanced brackets in a string. A string with balanced brackets has, for each bracket type (square brackets, angular brackets, and so on), as many opening brackets as closing brackets. For example, a string that contains 99 opening angular brackets ('<') must also contain 99 closing angular brackets ('>'). Also, the brackets cannot overlap each other: "([)]" is not balanced, whereas "{()}[]" is balanced.

The Idea
I don't want to confuse you with all possible solutions. One of the best ways to solve this problem is as follows. We iterate through the string exactly once (O(n) time) and act depending on each character. On the way from the first character to the last, we push every opening bracket (angular bracket, square bracket, opening parenthesis, and so on) onto a stack. When we encounter a closing bracket, we don't push it; instead, we pop an item from the stack and check whether the two are a matching pair ('<' and '>' are a pair, '[' and ']' are a pair). If yes, everything is good so far and the string may be balanced. If no, the string isn't a balanced expression. I don't want to write so much that it might confuse you, so I'll show the code snippet in Java to solve this problem. Throughout the code snippet, I've written comments so you'll understand the solution better.
The Implementation

import java.util.*;

class BalancedBrackets {
    // We store all opening brackets in one list and all closing brackets
    // in another. How do we make sure an opening bracket relates to its
    // closing bracket (such as '<' relating to '>')? We put the two
    // members of each pair at the same index in the two lists.
    private final List<Character> leftBrackets = Arrays.asList('(', '[', '{', '<');
    private final List<Character> rightBrackets = Arrays.asList(')', ']', '}', '>');

    public boolean balancedBrackets(String str) {
        Stack<Character> stack = new Stack<>();
        for (char ch : str.toCharArray()) {
            // If the current character is a left bracket, push it onto the stack.
            if (isLeftBracket(ch))
                stack.push(ch);
            if (isRightBracket(ch)) {
                // If the current character is a right bracket, don't push it;
                // pop an existing item from the stack instead and check whether
                // the two brackets relate to each other. Before that: if the
                // stack is empty and the current character is a closing bracket,
                // we can return false immediately, because for every closing
                // bracket there must be an opening bracket waiting in the stack
                // to be popped. An empty stack means things went wrong.
                if (stack.empty())
                    return false;
                var top = stack.pop();
                if (!bracketsMatch(top, ch))
                    return false;
            }
        }
        return stack.empty();
    }

    private boolean isLeftBracket(char ch) {
        return leftBrackets.contains(ch);
    }

    private boolean isRightBracket(char ch) {
        return rightBrackets.contains(ch);
    }

    // This method checks whether an opening bracket matches a closing bracket
    // (for example, '<' matches '>', '(' matches ')', and so on). For this to
    // work, each index in the two lists must hold brackets of the same type:
    // if the element at index 2 in the first list is '<', then the element at
    // index 2 in the second list must be the matching '>'.
    private boolean bracketsMatch(char left, char right) {
        return leftBrackets.indexOf(left) == rightBrackets.indexOf(right);
    }
}

The Reflection
That's how you solve this problem. Let's briefly analyze the time and space complexity of this solution. The time complexity is O(n) because we iterate over every character once; the time taken grows linearly with the length of the string. The space complexity is also O(n) because, in the worst case, the stack has to hold every opening bracket in the string.
https://dev.to/ishakmohmed/how-to-check-for-balanced-brackets-in-a-string-every-developer-must-know-this-to-think-better-4pg8
Keeping the client and server in sync can be difficult while developing a web app. That's particularly true if you are using text-based data such as JSON in your API calls (though the flexibility and readability of JSON over binary data should not be disregarded).

JSON Schemas and Contract Tests
On a previous project that used Ember and Rails, I attempted to keep the server and client in sync by using JSON Schema and writing contract tests to verify that all data at all endpoints conformed to the Schema. This worked, but it was a significant maintenance pain. Additionally, unless you write the Schema in a very restrictive way, it is easy to update your code and neglect to update the Schema. Finally, in my opinion, writing ActiveModelSerializers feels pretty disjointed from the final shape of the JSON data that is serialized. I'd rather convert to an intermediate data structure with obvious serialization rules (i.e., a structure that only contains numbers, strings, lists, and maps of strings to the aforementioned types). Contract tests also feel very boilerplate. So I left that project feeling mildly unsatisfied with my method of keeping the client and server in sync.

My current project is also using Ember for the front end, but it uses ASP.NET Web API (in C#) on the back end. In this project, there are more developers (and to some extent, more siloing), so I felt very strongly that we needed to know the server and the client were in sync on their API and data structures. I looked at contract tests again, but as I said, they are annoying to maintain and contain lots of boilerplate. They wouldn't be necessary at all if I could share just a bit of data structures and/or code between the front end and the back end.

Sharing Types and Endpoints
I got to thinking: The server-side code already defines all the types that will be serialized to JSON in easy-to-read, static types. Could I share them with the client in some way?
Ember already has a standard way to define data structures (extending Ember.Object, or DS.Model if you are using Ember Data). Could I find the relevant data structures in the C# back end and programmatically convert and emit them as Ember Object types? This seemed fairly simple, though after a fair bit of searching, I didn't find an existing library that quite fit the bill. So I started to ponder how much work it would be to implement the conversion myself.

Ember model generator
As it turned out, a bit of reflection on the C# code actually made doing so pretty easy:
- Find all controllers (all types in an assembly that extend ApiController).
- Find all types for input and output on controller actions (MethodInfo instances), unwrapping IEnumerable<>, Task<>, etc.
- Crawl the properties on all types from step 2 and include referenced types.
- Use (parts of) my JSON serialization library to emit the correct names for keys/properties in JSON/JavaScript.
- Finally, emit JavaScript code based on the consuming JavaScript framework's requirements; in this case, as export const TypeName = Ember.Object.extend({...

Here's an example conversion. Given this ASP.NET code:

public class BlogPost {
    public string Author { get; set; }
    public string Text { get; set; }
    public int Likes { get; set; }
    public Comment[] Comments { get; set; }
}

public class Comment {
    public string UserName { get; set; }
    public string Text { get; set; }
}

my generator creates the following Ember code:

export const Comment = DS.Model.extend({
    userName: DS.attr('string'),
    text: DS.attr('string')
});

export const BlogPost = DS.Model.extend({
    author: DS.attr('string'),
    text: DS.attr('string'),
    likes: DS.attr('number'),
    comments: DS.hasMany('comment')
});

API generation
At this point, I am able to generate models for use in JavaScript code that I know match the server; that's where they came from, after all.
Going a step further, as we are not using Ember Data, I also found it useful to export an api.js file that wraps the URLs and methods for each Controller Action endpoint. Essentially, this lets my calling code use something more semantically meaningful than $.ajax(). So, for example, if I have two Controllers that use the models from the previous example:

[RoutePrefix("api/blog-posts")]
public class BlogPostsController : ApiController {
    [HttpGet]
    [Route("")]
    public async Task<IEnumerable<BlogPost>> GetPosts() {
        //...
    }

    [HttpPost]
    [Route("")]
    public async Task<BlogPost> CreatePost(BlogPost data) {
        //...
    }

    [HttpDelete]
    [Route("{id}")]
    public async Task DeletePost(int id) {
        //...
    }
}

[RoutePrefix("api/blog-posts/{blogId}/comments")]
public class BlogPostCommentsController : ApiController {
    [HttpPost]
    [Route("")]
    public async Task<BlogPostComment> CreateComment(int blogId, BlogPostComment cmt) {
        //...
    }
}

that is exported as follows for Ember:

blogPosts: {
    getPosts: endPoint('GET', route`api/blog-posts`, {"returnDataType":["blog-post"]}),
    createPost: endPoint('POST', route`api/blog-posts`, {"inputDataType":"blog-post","returnDataType":"blog-post"}),
    deletePost: endPoint('DELETE', route`api/blog-posts/${0}`, {})
},
blogPostComments: {
    createComment: endPoint('POST', route`api/blog-posts/${0}/comments`, {"inputDataType":"blog-post-comment","returnDataType":"blog-post-comment"})
},

These API endpoints live in a service that can be accessed as follows:

export default Ember.Controller.extend({
    api: Ember.inject.service(),

    createPost(data) {
        return this.get('api.blogPosts').createPost(data);
    }
});

Input data types for each endpoint are checked to make sure they match the expected type (by comparing constructors), and return types are automatically deserialized from raw JSON/JavaScript objects into the model type used by my application. These types are specified by the generated "inputDataType" and "returnDataType" properties.
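The article doesn't show how endPoint and route are implemented, but a hypothetical JavaScript sketch of the pair might look like the following: route is a tagged template that records where the ${0}, ${1}, ... placeholders sit, and endPoint returns a function that splices its call arguments into those slots. The real helpers would presumably also issue the $.ajax() call and run the (de)serialization driven by inputDataType/returnDataType, which this sketch omits:

```javascript
// Hypothetical sketch of the helpers the generated api.js relies on.
// `route` is a tagged template literal whose interpolated values (0, 1, ...)
// are argument positions, not strings to embed.
function route(strings, ...slots) {
  return (args) =>
    strings.reduce(
      (url, part, i) => url + part + (i < slots.length ? args[slots[i]] : ""),
      ""
    );
}

function endPoint(method, buildUrl, meta) {
  return (...args) => {
    const url = buildUrl(args);
    // The real service would issue the AJAX request here and (de)serialize
    // using meta.inputDataType / meta.returnDataType; this stub just
    // reports what it would have sent.
    return { method, url, meta };
  };
}

const deletePost = endPoint("DELETE", route`api/blog-posts/${0}`, {});
console.log(deletePost(42).url); // -> "api/blog-posts/42"
```

The nice property of this design is that the generated file stays purely declarative: each line pairs an HTTP verb with a URL template and type metadata, and all the request mechanics live in one shared helper.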
Making updates
With the generator in place, when new types and endpoints are added to the back end, all I have to do is run the tool and dump the output files in the right location in my Ember project. (I put them in app/generated/ to make sure they don't collide with any other files.) Or, if you want the front end to lead the back end (as I'd recommend), you can make the required changes to the generated files, and then later, when the back end catches up, you just need to make sure the generated files don't change when you re-run the generator.

Conclusion
I'm really happy with how this turned out. It didn't take very long to implement, and now I know that both the back end and the front end will use the same data and the same endpoints. This technique has already helped me catch potential bugs early when changing data types. C# may be a bit verbose for my tastes, but I say, "If you've got a static type system, you should make good use of it." Where have you seen a little bit of tooling work go a long way on a project?

1 Comment
Really nice article. It surprises me the lack of tooling around this concept. So much friction involved in something that to me seems so fundamental. Thanks for posting.
https://spin.atomicobject.com/2016/09/08/exporting-server-side-types-endpoints-ember/
1. What are the fundamental principles of OO programming?
Answer: As a developer, you might be tempted to answer that it comprises things like encapsulation, polymorphism, abstraction, and inheritance. Although this is true, it doesn't really explain the fundamental core of what OO is and what its benefits are. Principles are crucial, but they are not the most important aspect of what OO actually is. What is really important is to understand what OO is built upon, or in other words, what the foundations of OO programming are. The two most fundamental core concepts on which OO has been built in C# are the this pointer and dynamic dispatch. Obviously, there are principles like encapsulation, polymorphism, abstraction, and inheritance, but these are the consequence and not the generating force behind the OO paradigm in C#.

2. Explain Partial Class in C#?
Answer: The partial class concept was added in .NET Framework 2.0; it allows us to split the business logic of a class across multiple files that share the same class name along with the "partial" keyword.

3. What is Thread in C#?
Answer: A thread is an execution path of a program. Threads are used to define separate flows of control. If our application involves time-consuming processes, then it's better to use multithreading, which involves multiple threads.

4. What does it mean?
Answer: An object pool is nothing but a container of objects that are ready for use. Whenever there is a request for a new object, the pool manager will take the request, and it will be served by allocating an object from the pool.

5. How does it work?
Answer: We are going to use a Factory pattern for this purpose. We will have a factory method, which will take care of serving objects from the pool.

6. What is the difference between early binding and late binding in C#?
Answer: Early binding and late binding are concepts of polymorphism. There are two types of polymorphism in C#.
Compile-time polymorphism: It is also known as early binding.
Run-time polymorphism: It is also known as late binding, method overriding, or dynamic polymorphism.

7. What is ArrayList?
Answer: ArrayList is a dynamic array. You can add and remove elements from an ArrayList at runtime. In an ArrayList, elements are not automatically sorted.

8. What do you mean by the delegate in C#?
Answer: Delegates are type-safe pointers, unlike the raw function pointers of C++. A delegate is used to hold a reference to methods of a particular return type and parameter list.

9. What is a constructor?
Answer: A constructor is a class member executed when an instance of the class is created. The constructor has the same name as the class, and it can be overloaded via different signatures. Constructors are used for initialization chores.

10. Why are strings in C# immutable?
Answer: Because string instances can be shared and interned, immutability makes them inherently thread-safe and safe to use as hash keys; any "modification" produces a new string instance.

11. What is object pool in .NET?
Answer: An object pool is a container of ready-to-use objects. It reduces the overhead of creating a new object.

12. Questions on Looping Statements
The section contains questions on if, while, do-while, for, switch, continue, and goto statements:
- If statements
- Switch statements
- For loop statements
- While loop statements
- Do-while loop statements
- Continue and goto statements

13. What is Garbage Collection?
Answer: Garbage collection is the process of automatically releasing memory occupied by objects that are no longer accessible.

Describe the basic construction of a C# program. Write a simple program that outputs "Hello World" to the console.
A typical C# program consists of a namespace declaration, a class, methods, attributes, the main method, statements, expressions, and comments. A potential example for printing "Hello World" to the console is detailed below.

using System;

namespace HelloWorldApplication
{
    class HelloWorld
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World");
            Console.ReadKey();
        }
    }
}

14. Questions and Answers on Object-Oriented Concepts?
Answer: The section contains questions and answers on polymorphism, structures, enumeration, inheritance, method overloading and method overriding, and abstract class and interface implementation.
- Polymorphism
- Structures
- Enumerations
- Fundamentals of Inheritance
- Inheritance Implementation
- Method Overloading
- Method Overriding
- Constructor Overloading
- Abstract Class & Methods
- Interfaces Introduction
- Interfaces Implementation
- Introduction of Overloaded Operators
- Recursion

15. What is the difference between Dispose() and Finalize() methods in C#?
Answer: The Dispose() method is explicitly called by the user to free unmanaged resources such as files, database connections, etc., whereas the Finalize() method is implicitly called by the garbage collector to free unmanaged resources like files, database connections, etc. The Dispose() method belongs to the IDisposable interface, whereas the Finalize() method belongs to the Object class.

16. What is a delegate in C#?
Answer: A delegate in C# is an object that holds the reference to a method. It is like a function pointer in C++.

17. What is the difference between dynamic type variables and object type variables in C#?
Answer: The difference between dynamic and object type variables has to do with when the type checking takes place during the application lifecycle. Dynamic type variables handle type checking at run time, while object type variables handle type checking during compile time.

18. Explain nullable types in C#?
Answer: Nullable types are data types that, in addition to their normal values, can also hold the value null. Nullable types exist to help integrate C#, which generally works with value types, and databases, which often use null values. You can declare a nullable type in C# using the following syntax:

int? num = null;

19. What is Managed and Unmanaged code?
Answer: Managed code is code that is executed by the CLR (Common Language Runtime), i.e., all application code based on the .NET platform. It is considered managed because the .NET framework internally uses the garbage collector to clear up unused memory. Unmanaged code is any code that is executed by the application runtime of any framework apart from .NET. That application runtime takes care of memory, security, and other performance operations.

20. Questions and Answers on Data Types, Variables and Operators?
Answer: The section contains questions and answers on integer, float, decimal and character data types, variable initialization, their scope and lifetime, and arithmetic, relational, logical, bitwise and conditional operators.
- Integer Data Types
- Floating and Decimal Data Types
- Char Types and String Literals
- Initialization of Variables
- Scope and Lifetime of Variables
- Type Conversion in Expressions
- Arithmetic Operators
- Relational and Logical Operators
- Bit-wise and Conditional Operators

21. Questions and Answers on Miscellaneous Topics?
Answer: The section contains questions and answers on the basics of pointers and their operation, accessor controls, and string formatting.
- Unsafe Code & Pointers Basics
- Pointers Operation – 1
- Pointers Operation – 2
- Accessor Controls of Class
- Introduction of String Formatting
- String Formatting – 1
- String Formatting – 2

If you would like to learn C# thoroughly, you should attempt to work on the complete set of C# questions and answers mentioned above. It will immensely help anyone trying to crack a C# interview.

22. Questions and Answers on Reflections, Multithreaded Programming, Collection Classes, and Mathematical Functions?
Answer: The section contains questions and answers on the collection framework, mathematical classes, rounding functions, iterators, and multithreaded programming.
- Introduction of Reflections
- Collection Classes
- Maths Class
- Rounding Functions in C#
- Multi-threaded Programming – 1
- Multi-threaded Programming – 2
- Iterators

23. What is the solution if you need to manipulate sets of items?
Answer: One solution would be to create a variable for each item in the set, but again this leads to a different problem:

24. How many variables do you need?
Answer: In this situation, arrays provide a mechanism that solves the problem posed by these questions. An array is a collection of related items, either value or reference type. In C#, the number of dimensions and the size of an array are fixed once it is created.

Arrays Overview
An array contains zero or more items called elements. An array is an ordered sequence of elements. All the elements in an array are of the same type (unlike fields in a class, which can be of different types). The elements of an array are accessed using an integer index that always starts from zero. C# supports single-dimensional (vector), multidimensional, and jagged arrays. Elements are identified by indexes relative to the beginning of the array. An index is also commonly called a subscript and is placed inside the indexing operator ([]). Access to array elements is by index value, ranging from 0 to (length - 1).

Array Properties
- The length cannot be changed once created.
- Elements are initialized to default values.
- Arrays are reference types and are instances of System.Array.
- The number of dimensions, or rank, can be determined by the Rank property.
- An array's length can be determined by the GetLength() method or the Length property.

25. What is Parsing? How to Parse a DateTime String?
Answer: Parsing is converting a string into another data type. For example:

string text = "500";
int num = int.Parse(text);

500 is an integer, so the Parse method converts the string "500" into its base type, i.e., int. Follow the same method to convert a DateTime string:
string dateTime = "Jan 1, 2018";
DateTime parsedValue = DateTime.Parse(dateTime);

26. What are the collection types that can be used in C#?
Answer: Below are the collection types in C#:
- ArrayList
- Stack
- Queue
- SortedList
- HashTable
- BitArray

27. Explain Attributes in C#?
Answer: Attributes are used to convey information at runtime about the behavior of elements like methods, classes, enums, etc. Attributes can be used to attach metadata, such as comments and compiler instructions, to these elements.

28. Explain some properties of the Thread class in C#?
Answer:
IsBackground – gets or sets a value which indicates whether a thread is a background or a foreground thread.
ThreadState – describes the thread state.

29. List out the differences between Array and ArrayList in C#?
Answer: An array stores values or elements of the same data type, but an ArrayList can store values of different data types. An array has a fixed length, but an ArrayList does not.

30. Why use "using" in C#?
Answer: The "using" statement calls the "Dispose" method internally whenever any exception occurs in a method call, and within a "using" statement the objects are read-only and cannot be reassigned or modified.

31. Explain namespaces in C#?
Answer: Namespaces are containers for classes. We use namespaces for grouping related classes in C#. The "using" keyword can be used to import a namespace into another namespace.

32. List out two different types of errors in C#?
Answer: Below are the types of errors in C#:
- Compile-time errors
- Run-time errors

33. What are the differences between static, public and void in C#?
Answer: Static classes/methods/variables are accessible throughout the application without creating an instance; the compiler stores the method address as an entry point. Public methods or variables are accessible throughout the application. Void is used to indicate that a method will not return any value.

34. Do we get an error while executing the "finally" block in C#?
Answer: Yes.
We may get an error in the finally block.

35. What is the difference between "out" and "ref" parameters in C#?
Answer: An "out" parameter can be passed to a method without being initialized, whereas a "ref" parameter has to be initialized before it is used.

36. Explain Jagged Arrays in C#?
Answer: If the elements of an array are themselves arrays, it is called a jagged array. The element arrays can be of different sizes and dimensions.

37. What are the differences between events and delegates in C#?
Answer: The main difference between an event and a delegate is that an event provides one more layer of encapsulation over delegates. So when you are using events, the destination will listen to them, but delegates are naked; they work in the subscriber/destination model.

38. What is a Generic Class?
Answer: Generics, or a generic class, is used to create classes or objects which do not have any specific data type. The data type can be assigned during runtime, i.e., when it is used in the program. For example: Generic Class.
https://svrtechnologies.com/best-38-c-interview-questions-and-answers-pdf/
MUSCLE CELL BIOLOGY AND CELL MOTILITY
Zentrum Physiologie, Medizinische Hochschule Hannover, D-30625 Hannover, Germany
Submitted 20 August 2002; accepted in final form 11 February 2003
muscle plasticity; NFATc1; resting calcium concentration

Animal experiments were carried out according to the guidelines of the local Animal Care Committee (Bezirksregierung Hannover).

Electrostimulation. After the myotubes had been growing on microcarriers for 14 days under resting conditions, some of them were electrostimulated in the culture flasks with a pattern consisting of cycles of 1 Hz for 15 min followed by a 30-min pause. This cycle was continuously repeated for several days. The voltage applied at the platinum electrodes was chosen to achieve maximal contraction amplitudes of the myotubes [details as reported before (11)]. The polarity of the platinum electrodes was changed every second to avoid electrophoretic separation of the serum proteins in the cell culture medium. The medium of stimulated cultures contained 1 mM N-acetylcysteine as a radical scavenger.

Northern blot analysis. Total RNA isolation, RNA separation on 1.2% agarose-formaldehyde gels, hybridization with MHC cDNA probes, and autoradiography were done as described previously (15). For detection of slow MHC I mRNA, the 3'-terminal 450-bp HinfI fragment from rabbit MHC I cDNA (1) was used. Fast MHC IId mRNA was estimated with the 3'-terminal PstI fragment of the rabbit MHC IId cDNA (13) that is specific for fast MHC isoform IId (20). For normalization of Northern blots, 18S rRNA was detected with the 5.8-kb HindIII fragment of 18S rDNA (9), and β-actin mRNA was hybridized with the 1.8-kb BamHI fragment of human β-actin cDNA (16). Autoradiography exposure times were 5 days, 3 days, and 8 h, respectively, for MHC mRNA, β-actin mRNA, and 18S rRNA.

NFATc1 immunofluorescence. Myotubes grown on microcarriers were detached from the carriers by incubation with Accutase (PAA, Austria) for 60 min.
After sedimentation of the microcarriers, the myotubes in the supernatant were centrifuged at 330 g for 5 min and resuspended in Dulbecco's modified Eagle's medium (DMEM)/10% neonatal calf serum. Cells were then seeded on glass coverslips and cultured for two days. It was verified (data not shown) by immunofluorescence using monoclonal antibodies (Accurate Chem. and Sci.) that during these two days, the myotubes retained the expression pattern of fast and slow MHC that they had developed during the preceding culture on microcarriers. Electrostimulation of myotubes on coverslips with 1 to 3 stimulation cycles [= (1, 2, 3) x 45 min] was performed in 24-well plates, whereas stimulation with cycles repeated for 24 h and 40 h was performed in petri dishes (diameter 6 cm). Again, platinum electrodes were used for electrostimulation. Settings of the stimulator were chosen by microscopical control of myotube contractions. After the stimulation protocol was finished, cells were washed and fixed with 3% paraformaldehyde (PFA) or 100% methanol and permeabilized in 0.1% Triton X-100 as described previously (14). Incubation with goat anti-NFATc1 antibody (Santa Cruz Biotechnology, Santa Cruz, CA; this antibody reacts with both the phosphorylated and the dephosphorylated form of NFATc1) for 30 min was followed by incubation with fluorescein isothiocyanate (FITC)-labeled anti-goat IgG secondary antibody (Santa Cruz Biotechnology). Immunostained myotubes were photographed in an inverted fluorescence microscope (Leica, Wetzlar, Germany) at a magnification of x400 (objective x40, NA 1.25). For quantification of the subcellular localization of NFATc1, a total of at least 300 cells were inspected to generate one of the columns in Figs. 2, 3, and 4. Each cell was classified as exhibiting nuclear staining only, cytoplasmic staining only, or both nuclear and cytoplasmic staining. Determinations of [Ca2+]i. 
Resting [Ca2+]i and Ca2+ transients were measured with the fluorescent Ca2+-sensitive indicator fura 2 (Molecular Probes, Leiden, Netherlands). Myotubes cultured as described in NFATc1 immunofluorescence were washed three times with Ringer solution (145 mM NaCl, 2.5 mM KCl, 1 mM MgSO4, 10 mM HEPES, 10 mM glucose, and 1.8 mM CaCl2, pH 7.4) and incubated with Ringer solution containing 2 µM fura 2-acetoxymethyl ester (AM) for 30 min at 37°C. After fura 2 loading, the solution was exchanged for Ringer solution and cells were washed again twice and then incubated at 37°C for 40 min to allow further deesterification of fura 2-AM. The experimental setup consisted of an inverted microscope (Axiovert 135 TV, Zeiss, Jena, Germany) with the objective LD Achroplan 40 x 0.6 (Zeiss, Jena, Germany) connected to a photomultiplier (PMT 01710, PTI) measuring fluorescence emission at 510 nm. Excitation wavelengths were 340 and 380 nm produced by a xenon lamp (UXL-75XE, Ushio). To estimate resting [Ca2+]i, fluorescent light was recorded from an area on a glass coverslip that contained an average number of 30 myocytes. Measurement of Ca2+ transients was performed on single cells. Background fluorescence was determined before fura 2 loading at each excitation wavelength over an identical number of myocytes and was subtracted from subsequent fluorescence measurements. The value of Rmax, i.e., the fura 2 fluorescence ratio F340/F380 at full calcium binding of the dye, was determined by incubating the cells in Ringer solution with saponin [0.002% (wt/vol), 25 min according to Carroll et al. (3)], and a CaCl2 concentration of 2.5 mM. Incubation time was optimized by continuous ratio measurement after addition of saponin. Rmax was found to be 11.4 ± 1.4 (n = 5).
Rmin, the fluorescence ratio F340/F380 at zero calcium binding in vivo, was estimated by incubating the cells in Ca2+-free Ringer solution with 0.5 mM EGTA for 1 h at 37°C, followed by an incubation for 30 min in the same buffer with 10 µM ionomycin. Rmin was found to be 0.67 ± 0.1 (n = 37). The fura 2/Ca2+ dissociation constant Kd was determined in vitro as described (8) and was found to be Kd = 166 ± 6 nM and Sf2/Sb2 = 4.44 (Sf2: n = 37; Sb2: n = 5). The measured fluorescence ratios R were used with Rmax, Rmin, Kd, and Sf2/Sb2 to calculate the intracellular Ca2+ concentrations as published by Grynkiewicz et al. (8): [Ca2+] = Kd · (Sf2/Sb2) · (R − Rmin)/(Rmax − R). Fura 2 calibration was performed daily.

Figure 2A shows a Ca2+ transient obtained by superimposing 30 transients recorded from single cells. This averaged transient possesses a peak Ca2+ concentration of 500 nM and a 75% decay time of 250 ms. An example of a measurement of a series of single cell transients during electrostimulation of the myocytes with 1 Hz is given in Fig. 2B. It is apparent in Fig. 2B that Ca2+ levels return to base level between stimuli and that no significant change in peak Ca2+ occurs during the stimulation train for up to 30 min. Between stimulation intervals, the resting Ca2+ remained at control levels for at least 24 h, as evident from the measurements reported above. Table 2 addresses the question whether the myoinositol 1,4,5-trisphosphate (IP3) receptor Ca2+ channel of the sarcoplasmic reticulum affects resting Ca2+ levels in myotubes with and without electrostimulation. Blocking the channel with Xestospongin reduces resting Ca2+ levels under control conditions from about 60 to 40 nM, suggesting that a continuous flux of Ca2+ from the sarcoplasmic reticulum via this channel contributes to the Ca2+ level in the sarcoplasm under resting conditions. However, Xestospongin does not alter the lack of an effect of 20 min of continuous electrostimulation (1 Hz) on the resting Ca2+ level.
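As a quick arithmetic companion to the calibration above, the Grynkiewicz formula can be evaluated directly with the constants reported here (Rmin = 0.67, Rmax = 11.4, Kd = 166 nM, Sf2/Sb2 = 4.44). The following Python sketch is only an illustration of that calculation, not part of the original methods:

```python
def calcium_nM(R, Rmin=0.67, Rmax=11.4, Kd=166.0, Sf2_over_Sb2=4.44):
    """Grynkiewicz et al. ratiometric calibration:
    [Ca2+] = Kd * (Sf2/Sb2) * (R - Rmin) / (Rmax - R),
    using the constants reported in this study."""
    return Kd * Sf2_over_Sb2 * (R - Rmin) / (Rmax - R)

print(calcium_nM(0.67))            # 0.0 at the zero-calcium ratio
print(round(calcium_nM(1.5), 1))   # a ratio of ~1.5 maps to ~62 nM, near the reported resting level
```

Note that the formula diverges as R approaches Rmax, so usable measurements lie well inside the (Rmin, Rmax) interval.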
Table 3 gives the results of a titration of the intracellular resting Ca2+ level of the myotubes with Ca2+ ionophore A-23187 that had been added to the medium 4 h before Ca2+ measurement. Age of the culture was 16 days, and [Ca2+]i rose from a control level of about 60 to about 120 nM with 5 · 10⁻⁸ M ionophore and to 150 nM with 10 · 10⁻⁸ M ionophore. It may be noted that after prolonged exposure of the myotubes to the ionophore, e.g., for 24 h, Ca2+ levels began to decline somewhat compared with the levels of Table 3, probably due to the onset of compensatory mechanisms (data not shown). The titration with ionophore was used to study the dependence of NFATc1 activation on the intracellular Ca2+ level (see Fig. 3).

Nuclear translocation of NFATc1 by ionophore treatment and electrostimulation. Figure 3 demonstrates that NFATc1 translocation in the present system can be induced by a sustained increase in resting [Ca2+] as it is generated by the continuous presence of Ca2+ ionophore. Figure 3A shows the intracellular distribution of NFATc1 fluorescence with staining in the cytoplasm only and lack of staining in the nuclei (left) as it is seen in control conditions. Addition of Ca2+ ionophore results in an intermediate activation of NFATc1 with simultaneously nuclear and cytoplasmic staining (Fig. 3A, center) or a pronounced activation with exclusively nuclear staining (Fig. 3A, right). Graded increases in [Ca2+]i were induced by addition of various concentrations of Ca2+ ionophore A-23187 to the culture 4 h before fixation of the cells for immunofluorescence. It is apparent that A-23187 induces NFATc1 activation in a dose-dependent fashion, with 5 × 10⁻⁸ M ionophore producing about 50% activation of the NFATc1 system. This ionophore concentration is seen in Table 3 to raise [Ca2+]i from 60 to 120 nM. Full activation of NFATc1 is achieved with 10 × 10⁻⁸ M ionophore and a resting intracellular Ca2+ level of 150 nM.
Figure 4 shows the effect of electrostimulation for various times on the activation and translocation of NFATc1 into the nucleus. The age of the myotubes studied was 23 days, and all cells were transferred onto coverslips after the third week. Cells of Fig. 4, bars B-D, were then exposed to electrostimulation either with three cycles (each cycle consisting of 15-min stimulation at 1 Hz and 30-min pause) (bar B) or with this same cycle repeated continuously for 24 h (bar C) or for 40 h (bar D). The fixation of cells for NFATc1 localization was done 30 min after the end of the last stimulation interval in all cases (i.e., at the end of the last cycle). Whereas under resting conditions NFATc1 was almost exclusively cytoplasmic and thus inactive (Fig. 4, bar A), Fig. 4, bar B, shows that 30 min after only 3 stimulation intervals, 20% of the myotubes exhibited exclusively nuclear NFATc1 staining (this number amounts to about 60% when determined immediately after the third stimulation interval; see Ref. 11). This represents a significant activation of the calcineurin-NFAT pathway. After 24 h of continuous repetition of the cycles, the degree of activation is markedly enhanced (60% of the cells show exclusively nuclear staining; again, this number is considerably higher, almost 100%, immediately after the last stimulation interval). The increase in activation after prolonged repetition of the stimulation cycle is seen because 30 min after each stimulation interval, some NFATc1 remains in the nuclei (e.g., see Fig. 4, bar B) and is added to the NFATc1 that is translocated during the next stimulation interval. Thus, from cycle to cycle, the activation of NFATc1 increases until a plateau of near-maximal activation (close to 100% at the end of the stimulation interval, 60% after the 30-min pause) is reached. This plateau does not increase further between 24 and 40 h of continued repetition of the stimulation cycles (Fig. 4, bars C and D).
Thus, although three subsequent cycles achieve a moderate activation only, long-term repetition of the same cycle produces a very high degree of activation. The stimulation-induced import of NFATc1 can be totally blocked by cyclosporin A as reported earlier (11, 14). This shows that calcineurin activation is an essential step for generating nuclear import of the transcription factor. Figure 5 illustrates the effect of Xestospongin on nuclear translocation of NFATc1. All cultures were grown on microcarriers for 3 wk and were then transferred onto coverslips. Activation of NFATc1 in controls was low with and without Xestospongin, but immediately after 20 min of continuous electrostimulation, NFATc1 was almost fully activated, no matter whether Xestospongin was present or not. This suggests that the decrease in resting [Ca2+] that is effected by Xestospongin (Table 2) does not influence the calcineurin-dependent nuclear import of NFATc1.

MHC pattern under electrostimulation. Figure 6 shows that 24 h of electrostimulation of the myotube culture with 15 min at 1 Hz/30-min pause cycles results in an incipient transformation of MHC mRNA from a "fast" toward a "slow" pattern with a clear increase in MHC I mRNA and a decrease in MHC IId mRNA. Thus 24 h stimulation 1) leads to the strong activation of NFATc1 as seen in Fig. 4, bar C; and 2) is sufficient to initiate the process of fast-to-slow transformation at the level of MHC mRNA expression. Continuation of electrostimulation with the same pattern for 14 days results in a complete switch to MHC I mRNA expression with loss of MHC IId mRNA (Fig. 7). The first step in the chain of events leading to calcineurin activation, NFATc1 nuclear import and switching on the MHC I gene, is a Ca2+ signal. This is confirmed by the observation of Kubis et al. (10) and Meissner et al. (15) that an ionophore-mediated permanent increase in resting [Ca2+]i indeed induces muscle transformation.
It has been shown in T cells that an elevation of [Ca2+]i above 400 nM lasting for 2 h is required to commit T cells to activation (21). Dolmetsch et al. (5) have also shown that in T cells, a sustained [Ca2+]i elevation is required for activation of the calcineurin-NFATc1 pathway. In addition, Dolmetsch et al. (6) have observed that at identical time-averaged [Ca2+]i, oscillating Ca2+ levels increase NFAT-dependent gene expression in Jurkat T cells more efficiently than a sustained increase in [Ca2+]i. Which kind of Ca2+ signal initiates myotube transformation under electrostimulation? Sreter et al. (19) have shown very early that chronic electrostimulation in vivo is associated with a long-term rise in [Ca2+]i. Olson and Williams (17) later postulated that such a sustained rise in [Ca2+]i may initiate the process of muscle transformation. In contrast with this view, the present data show that the level of resting [Ca2+]i of myotubes is not increased after 24 h of electrostimulation (Fig. 1, Table 1). Nevertheless, NFATc1 is almost fully translocated into the nuclei already after 20 min of stimulation (Fig. 5) and remains so after 24 and 40 h (Fig. 4), and an increase in MHC I mRNA and a decrease in MHC IId mRNA expression are detectable after 24 h. We conclude that the myotubes' calcineurin/NFATc1 system is able to sense the short intracellular Ca2+ transients that are associated with each electrical stimulation and myotube contraction. In our cell culture, these Ca2+ transients have a 75% decay time of about 250 ms and reach peak Ca2+ concentrations of 500 nM. These short transients occur continuously at a frequency of 1 Hz during each 15-min stimulation interval. In myotubes, this pattern appears to almost fully activate NFATc1 and upregulate the transcription of slow myosin mRNA. This implies that increased contractile activity, when it occurs for sufficiently long periods in the present culture (see Ref.
11), can initiate fast-to-slow muscle transformation at the myosin level by virtue of the Ca2+ transients responsible for excitation contraction coupling. This result is noteworthy because it has not been shown before that brief Ca2+ transients can activate calcineurin/NFATc1. The Ca2+ oscillations that Dolmetsch et al. (6) found quite effective in Jurkat T cells had rather large amplitudes of between 200 and 1000 nM, occurred at low frequencies between 0.01 and 0.001 Hz, and lasted for 0.5 to 1 min per spike. The Ca2+ transients measured here have longer decay times than adult muscle fibers harvested by proteolytic digestion (3). The shorter duration of the Ca2+ transient in adult fibers may contribute to the failure of continuous stimulation at 1 Hz to induce nuclear import of green fluorescent protein (GFP)-NFAT construct, as reported by Liu et al. (12). In contrast, series of Ca2+ transients of longer duration, as seen in our culture system, are apparently able to activate calcineurin-induced NFATc1 import at 1 Hz (Ref. 11 and this study). This occurs although the Ca2+ transients in the present culture are not superimposed at 1 Hz, as shown in Fig. 2B. It may be noted that the brief transients in the present model can presumably have this effect because the peak Ca2+ concentration during each muscle activation reaches 500 nM, which by far exceeds the level of 120 nM that is necessary to induce 50% activation of the calcineurin-NFATc1 system when a sustained elevation of [Ca2+]i is produced by ionophore. It has been proposed that the IP3 receptor in skeletal muscle may serve to modulate [Ca2+]i (7). By the association of calcineurin with the IP3 receptor-FK506 binding protein (FKBP 12) complex (2), the IP3 Ca2+ channel may conceivably be involved in the activation of the calcineurin/NFAT cascade. 
The data of Table 2 show that indeed the IP3 inhibitor Xestospongin affects the resting [Ca2+]i, but Xestospongin does not affect the efficiency of electrostimulation to induce NFATc1 nuclear import. We conclude that the IP3 receptor is not a major factor in the activation of calcineurin/NFATc1-mediated slow MHC expression.

Ca2+ signaling under chronic electrostimulation (>24 h). Figure 1 shows that chronic electrostimulation lasting for >24 h does lead to a rise in intracellular resting Ca2+ levels above control levels. This elevated Ca2+ level persists for the observed time of 14 days of chronic electrostimulation (Fig. 1). During this time, NFATc1 remains mainly nuclear and the process of muscle transformation approaches completion as seen from the myosin mRNA pattern of Fig. 7. It may be noted that this time is also sufficient to approach completion of fast-to-slow transformation at the level of MHC protein expression (10). What is the nature of the Ca2+ signal that maintains the transformation process over this extended period of time? The present results suggest that the resting [Ca2+] of 75 nM, although increased by 50% over control values of 50 nM, is not the responsible signal. The evidence for this comes from Fig. 3B in conjunction with Table 3. Figure 3B shows that exposure of the myotubes to 5 × 10⁻⁸ M A-23187 causes an approximately half-maximal activation of NFATc1. At this ionophore concentration, Table 3 shows the resting [Ca2+]i of the myotubes to be 120 nM, i.e., significantly higher than [Ca2+]i = 75 nM as established under chronic electrostimulation. To achieve nearly full activation of NFATc1, a resting [Ca2+]i of 151 nM at an ionophore concentration of 10 × 10⁻⁸ M is required (Fig. 3B, Table 3). Thus neither full nor half-maximal nuclear import of NFATc1 is accomplished by the resting Ca2+ levels seen in Fig. 1 during chronic electrostimulation.
Again, the conclusion is that during chronic as well as short-term electrostimulation, the signal for activation of calcineurin/NFATc1 is constituted by the sequences of rapid Ca2+ transients associated with muscle activation and contraction. It is possible that the moderate but sustained increase in the level of resting [Ca2+]i mediates other events associated with fast-to-slow transformation that are not linked to calcineurin/NFATc1, such as metabolic adaptation of the muscle fiber (14).

This work was supported by the Deutsche Forschungsgemeinschaft (Gr 489/13). The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

2. Cameron AM, Steiner JP, Roskams AJ, Ali SM, Ronnett GV, and Snyder SH. Calcineurin associated with the inositol 1,4,5-triphosphate receptor-FKBP 12 complex modulates Ca2+ flux. Cell 83: 463-472, 1995.
3. Carroll SL, Klein MG, and Schneider MF. Decay of calcium transients after electrical stimulation in rat fast- and slow-twitch skeletal muscle fibres. J Physiol 501: 573-588, 1997.
4.2509, 1998.
5. Dolmetsch RE, Lewis RS, Goodnow CC, and Healey JI. Differential activation of transcription factors induced by Ca2+ response amplitude and duration. Nature 386: 855-858, 1997.
6. Dolmetsch RE, Xu K, and Lewis RS. Calcium oscillations increase the efficiency and specificity of gene expression. Nature 392: 933-936, 1998.
7. Estrada M, Cardenas C, Liberona JL, Carrasco MA, Mignery GA, Allens PD, and Jaimovich E. Calcium transients in 1B myotubes lacking ryanodine receptors are related to inositol triphosphate receptors. J Biol Chem 25: 22868-22874, 2001.
8. Grynkiewicz G, Poenie M, and Tsien RY. A new generation of Ca2+ indicators with greatly improved fluorescence properties. J Biol Chem 260: 3440-3450, 1985.
9. Katz RA, Erlanger BF, and Guntaka RV. Evidence for extensive methylation of ribosomal RNA genes in rat XC cell line. Biochim Biophys Acta 739: 258-264, 1983.
10. Kubis HP, Haller EA, Wetzel P, and Gros G. Adult fast myosin pattern and Ca2+-induced slow myosin pattern in primary skeletal muscle cell culture. Proc Natl Acad Sci USA 94: 4205-4210, 1997.
11. Kubis HP, Scheibe RJ, Meissner JD, Hornung G, and Gros G. Fast-to-slow transformation and nuclear import/export kinetics of NFATc1 during electrostimulation of muscle cells in culture. J Physiol 541: 835-847, 2002.
12. Liu Y, Cseresnyés Z, Randall WR, and Schneider MF. Activity-dependent nuclear translocation and intranuclear distribution of NFATc in adult skeletal muscle fibers. J Cell Biol 155: 27-39, 2001.
13. Maeda K, Sczakiel GWA, and Wittinghofer A. Characterization of cDNA coding for the complete light meromyosin portion of a rabbit fast skeletal muscle myosin heavy chain. Eur J Biochem 167: 97-102, 1987.
14..
15. Meissner JD, Kubis HP, Scheibe RJ, and Gros G. Reversible Ca2+-induced fast-to-slow transition in primary skeletal muscle culture cells at the mRNA level. J Physiol 523: 19-23, 2000.
16. Moos M and Gallwitz D. Structure of two human β-actin-related processed genes one of which is located next to a simple repetitive sequence. EMBO J 2: 757-761, 1983.
17. Olson EN and Williams RS. Remodeling muscles with calcineurin. Bioessays 22: 510-519, 2000.
18. Pette D and Vrbova G. Adaptation of mammalian skeletal muscle fibers to chronic electrical stimulation. Rev Physiol Biochem Pharmacol 120: 115-208, 1992.
19. Sreter FA, Lopez JR, Alamo L, Mabuchi K, and Gergely J. Changes in intracellular ionized Ca concentration associated with muscle fiber type transformation. Am J Physiol Cell Physiol 253: C296-C300, 1987.
20. Uber A and Pette D. PCR-based assignment of two myosin heavy chain cDNA clones to biochemically and histochemically defined single type IIB and IID fibers of rabbit muscle. FEBS Lett 331: 193-197, 1993.
21. Weiss A, Shields R, Newton M, Manger B, and Imoden J. Ligand-receptor interactions required for commitment to the activation of the interleukin 2 gene. J Immunol 138: 2169-2176, 1987.
http://ajpcell.physiology.org/cgi/content/full/285/1/C56
Also, in mpcp.py you may want to make the following change. In the current implementation, when an exception occurs it brings down the CherryPy server. But, because it's persistent, it never gets brought back up. To fix this:

 def setup(options):
     global _isSetup
     ...
-    cherrypy.server.start(initOnly=True)
+    try:
+        cherrypy.server.start(initOnly=True)
+    except cherrypy.NotReady:
+        cherrypy.server.restart()
     ...

I assume at some point, the mpcp will be changed to match the new CherryPy thread handling routine. The mpcp used for this mini HOWTO was 1.0... double-check to see the purge requests aren't there already. Good luck!
http://trac.turbogears.org/wiki/ModPythonIntegration?version=2
Motivated by @o_sharp's solution (and I pretty much stole the title :-P).

from collections import Counter
from cmath import phase

class Solution(object):
    def maxPoints(self, points):
        P = [complex(p.x, p.y) for p in points]
        return max([P.count(p) + max(Counter(phase(q - p) for q in P if q != p).values() or [0]) for p in P] or [0])

Granted, it's three lines if I count the cmath import (LeetCode does import collections for us, but not cmath). Do we need to worry about the float number precision problem? BTW just found a rational number representation:

@feyhi said in 2-liner in Python, Complex solution without GCD:

Do we need to worry about the float number precision problem? BTW just found a rational number representation:

In general, we should worry about using double as key for hash map since it has accuracy concerns. Actually, @forsaken did some tests in this post. To avoid this issue, my idea. Then I have a C++ solution using std::map in this post.

@zzg_zzm Ah, darn. Oh well, at least now I see it's wrong and understand why.
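Following up on the rational-number idea mentioned above, the exact-arithmetic variant might be sketched like this: each slope is reduced to a canonical (dy, dx) integer pair with gcd, so no floats are ever used as Counter keys. Points are plain (x, y) tuples here rather than LeetCode's Point objects, so treat this as an illustration of the idea, not a drop-in submission:

```python
from collections import Counter
from math import gcd

def max_points(points):
    """Max points on one line, hashing gcd-reduced (dy, dx) pairs
    instead of float phases, so key comparison is exact."""
    best = 0
    for i, (x1, y1) in enumerate(points):
        slopes = Counter()
        dup = 0                              # duplicates of the anchor point
        for x2, y2 in points[i + 1:]:
            dy, dx = y2 - y1, x2 - x1
            if dy == 0 and dx == 0:
                dup += 1
                continue
            g = gcd(dy, dx)                  # math.gcd is non-negative even for negative inputs
            dy, dx = dy // g, dx // g
            if dx < 0 or (dx == 0 and dy < 0):
                dy, dx = -dy, -dx            # canonical sign so (1,-2) and (-1,2) collide
            slopes[(dy, dx)] += 1
        best = max(best, max(slopes.values(), default=0) + dup + 1)
    return best

print(max_points([(1, 1), (2, 2), (3, 3), (3, 1)]))  # 3
```

The sign normalization is the only subtle step: without it, the same line scanned in opposite directions would produce two different keys.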
https://discuss.leetcode.com/topic/53635/2-liner-in-python-complex-solution-without-gcd
Captured stdout printed twice

The following test prints the captured stdout twice:

{{{
def render(**variables):
    print "Checking {}".format(variables)
    return variables

def with_all_versions(func):
    def decorated():
        yield func, render
    return decorated

@with_all_versions
def test_text(r):
    assert "y" in r(x=u'gurk')
}}}

The output I get is the following:

{{{
==== test session starts ===
platform darwin -- Python 2.7.0 -- pytest-1.3.4
test path 1: tester.py

tester.py F

========= FAILURES =========
_ test_text[0] _

r = <function render at 0x101dfecf8>

    @with_all_version
    def test_text(r):
        assert "y" in r(x=u'gurk')
E       assert 'y' in {'x': u'gurk'}
E        + where {'x': u'gurk'} = <function render at 0x101dfecf8>(x=u'gurk')

tester.py:14: AssertionError
------ Captured stdout -----
Checking {'x': u'gurk'}
Checking {'x': u'gurk'}
= 1 failed in 0.07 seconds =
}}}

I would have expected the output

{{{
Checking {'x': u'gurk'}
}}}

only once. BTW, running the test with the --nomagic option "fixes" the problem.

It's a known issue: when an assertion fails the expression of the assertion is re-run to print intermediate values. This means in your case that the function call is repeated, thus the double print. We have a plan to avoid the re-interpretation but it requires some more experimentation and work.

best, holger

Re-interpretation shouldn't be a problem as long as the captured output is thrown away the second time.

closing this issue, advancing re-interpretation is a different issue.
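Until re-interpretation is reworked, a generic way to avoid the doubled side effect (besides --nomagic) is to keep side-effecting calls out of the assert expression itself: call once, bind the result to a name, and assert on the name. A minimal Python 3 sketch, adapted so that the assertion passes:

```python
def render(**variables):
    print("Checking {}".format(variables))  # side effect we only want once
    return variables

def test_text():
    result = render(x='gurk')  # the call happens exactly once
    assert 'x' in result       # re-interpretation only re-reads the name
```

On failure, the assertion machinery re-evaluates `'x' in result`, but `result` is just a local name, so `render()` is not called a second time and the captured stdout stays single.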
https://bitbucket.org/hpk42/py-trunk/issues/134/captured-stdout-printed-twice
In the "startup" data block:

Code:
print('starting...')

class Foo(object):
    pass

In the "keyboard_left" data block:

Code:
from startup import Foo

And when I press the left key, I get the following error...

Code:
Traceback (most recent call last):
  File "keyboard_left", line 3, in <module>
ImportError: No module named startup
Blender Game Engine Finished

Code in the startup data block has already run though. Does blender throw this away after a controller is called? If so, how do I declare classes/globals to be used by other python controllers? Or am I just namespacing the import incorrectly?
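I can't speak to what the game engine does with text blocks internally, but one generic CPython workaround is to register the shared definitions as a real module in sys.modules, so later import statements can find them no matter how the first script was executed. This is a hypothetical sketch in plain Python (the name "startup" is just the example's):

```python
import sys
import types

# Build a module object at runtime and register it under the name
# that other scripts will later import.
startup = types.ModuleType('startup')

class Foo(object):
    pass

startup.Foo = Foo
sys.modules['startup'] = startup

# A later script (e.g. another controller) can now do a normal import,
# because the import machinery checks sys.modules first.
from startup import Foo as ImportedFoo
print(ImportedFoo is Foo)   # True
```

The key point is that `import` consults sys.modules before searching the filesystem, so anything registered there is importable even though no startup.py file exists.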
https://www.blender.org/forum/viewtopic.php?p=98215
Array.IndexOf Method (Array, Object)

Searches for the specified object and returns the index of its first occurrence in a one-dimensional array.

Namespace: System
Assembly: mscorlib (in mscorlib.dll)

Parameters

array
Type: System.Array
The one-dimensional array to search.

value
Type: System.Object
The object to locate in array.

Return Value
Type: System.Int32
The index of the first occurrence of value in array, if found; otherwise, the lower bound of the array minus 1.

This method searches all the elements of a one-dimensional array for value. To determine whether value exists in array, the method performs an equality comparison by calling each element's Equals method. This method is an O(n) operation, where n is the Length of array.

using System;

public class Example
{
    public static void Main()
    {
        // Create a string array with 3 elements having the same value.
        String[] strings = { "the", "quick", "brown", "fox", "jumps",
                             "over", "the", "lazy", "dog", "in", "the", "barn" };

        // Display the elements of the array.
        Console.WriteLine("The array contains the following values:");
        for (int i = strings.GetLowerBound(0); i <= strings.GetUpperBound(0); i++)
            Console.WriteLine("   [{0,2}]: {1}", i, strings[i]);

        // Search for the first occurrence of the duplicated value.
        String searchString = "the";
        int index = Array.IndexOf(strings, searchString);
        Console.WriteLine("The first occurrence of \"{0}\" is at index {1}.",
                          searchString, index);

        // Search for the first occurrence of the duplicated value in the last section of the array.
        index = Array.IndexOf(strings, searchString, 4);
        Console.WriteLine("The first occurrence of \"{0}\" between index 4 and the end is at index {1}.",
                          searchString, index);

        // Search for the first occurrence of the duplicated value in a section of the array.
        int position = index + 1;
        index = Array.IndexOf(strings, searchString, position,
                              strings.GetUpperBound(0) - position + 1);
        Console.WriteLine("The first occurrence of \"{0}\" between index {1} and index {2} is at index {3}.",
                          searchString, position, strings.GetUpperBound(0), index);
    }
}
// The example displays:
//    The first occurrence of "the" is at index 0.
//    The first occurrence of "the" between index 4 and the end is at index 6.
//    The first occurrence of "the" between index 7 and index 11 is at index 10.
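As a cross-check of the semantics described above (linear scan for the first occurrence, optional start index and count, and "lower bound of the array minus 1" — i.e., -1 for zero-based arrays — on a miss), here is a small Python sketch mirroring the example; it is an illustration, not part of the .NET documentation:

```python
def index_of(arr, value, start=0, count=None):
    """Mimic Array.IndexOf(array, value, startIndex, count) for
    zero-based Python lists: linear scan, -1 when absent."""
    end = len(arr) if count is None else start + count
    for i in range(start, end):
        if arr[i] == value:
            return i
    return -1

words = ["the", "quick", "brown", "fox", "jumps", "over",
         "the", "lazy", "dog", "in", "the", "barn"]
print(index_of(words, "the"))         # 0
print(index_of(words, "the", 4))      # 6
print(index_of(words, "the", 7, 5))   # 10  (scans indices 7..11)
```

The third call reproduces the `startIndex`/`count` overload: with start 7 and count 5, only indices 7 through 11 are examined.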
https://msdn.microsoft.com/en-us/library/7eddebat.aspx
Comment on Tutorial - FileReader and FileWriter example program in Java By Tamil Selvan

Comment Added by : Chezhian TK
Comment Added at : 2010-10-14 21:40:16
Comment on Tutorial : FileReader and FileWriter example program in Java By Tamil Selvan

Simple file writer example:

import java.io.BufferedWriter;
import java.io.FileWriter;

public class FileWriterDemo {
    public static void main(String args[]) throws Exception {
        FileWriter fw = new FileWriter("mytest.txt");
        BufferedWriter bw = new BufferedWriter(fw);
        bw.write("test2");
        bw.close();
    }
}
http://java-samples.com/showcomment.php?commentid=35468
Grails and Continuous Integration: An Essential Combo

Grails is a fantastic framework. As well as being ultra-productive and fun to work with, it encourages good testing practices, and provides a swathe of tools to make writing unit, integration, and web tests a real pleasure. As we will see, Grails also lends itself quite well to build automation. However, Grails is not Maven. Grails has a fairly loose concept of the software development lifecycle, and is happy to let you bundle up and deploy your application as a WAR file without running all the unit and integration tests first. A Continuous Integration (CI) server can help you keep the flexibility of the Grails development process, and still ensure that your code is fully tested for each and every code change. In this article, we will take a look at how you can introduce Continuous Integration into your Grails project. More specifically, we will walk through how you can automate your Grails testing and code quality metrics with Hudson, a popular open source CI tool.

Hudson Meets Grails: Preparing Your CI Server

You can use pretty much any modern Continuous Integration server to automate your Grails builds. However, in this article, we will be using Hudson. Hudson is exceptionally easy to use, and comes with a very usable Grails plugin. To get started with Hudson, just download the latest version from the Hudson website, and run it from the command line:

$ java -jar hudson.war

This will start Hudson running on the default port of 8080; go to http://localhost:8080 to check it out. The easiest way to use Grails in Hudson is to install the Hudson Grails Plugin via the Plugin Manager screen, which you will find in the Manage Hudson screen (see Figure 1). Once the plugin has downloaded, you will need to restart Hudson for the plugin to take effect.

Figure 1.
The Hudson Grails plugin is the most convenient way to work with Grails in Hudson

Finally, you need to go to the Configure System screen and give Hudson the path of your Grails installation (see Figure 2). Hudson might manage to find Grails on the system path, but I like things to be clear and well defined, especially for a build server. Once this is done, you are ready to start.

Figure 2. Configuring the Grails plugin in Hudson

Automating Your Grails Unit Tests

One of the most fundamental features of a Continuous Integration environment is Continuous Testing. Whenever code changes are made, the full test suite is executed to ensure that nothing has been broken by the change. Indeed, without a comprehensive set of unit and integration tests, Continuous Integration loses much of its added value. So setting up some automated unit tests in Hudson is a logical first step in our quest to automate the Grails build process. Grails provides a clean distinction between unit tests and integration tests, and places them in separate directories. In Grails, unit tests are placed in the test/unit directory. They are designed to run fast, with no framework or database. Grails 1.1 comes with a powerful testing library (originally a plugin in Grails 1.0) that makes it easy to mock out domain objects and test everything from the domain classes to the controllers in isolation. For example, the mockDomain() method will add validation methods such as validate(), and mocked-out persistence methods such as save() and find(), so that you can easily unit test field validation and controller classes without having to load up the entire Grails application context.
For example, here we verify a simple business rule that should be encoded in the Address domain class:

```groovy
import grails.test.*
import static org.hamcrest.Matchers.*
import static org.hamcrest.MatcherAssert.assertThat

class AddressTests extends GrailsUnitTestCase {
    void testStreet1ShouldNotBeEmpty() {
        mockDomain(Address)
        def address = new Address(street1:'')
        assertThat address.validate(), is(false)
        assertThat address.errors.street1, is("blank")
    }
    ...
}
```

These libraries make coding unit tests easy, and, even better, they are lightning fast. It really is a good idea to write your tests as Grails unit tests wherever it is practical to do so. If you want to run the unit tests by themselves, you can do so by providing the -unit option when you call grails test-app:

$ grails test-app -unit

Let's set this up in Hudson. First, create a new freestyle build job (using the New Job menu). You will need to configure the Source Code Management section to work with whatever source code repository you are using, and then decide how you want the build to be triggered. If you are not sure, select Poll SCM here, and provide an appropriate frequency such as @hourly or */10 * * * *. This last value will check for changes every 10 minutes, and trigger a build if any changes are detected. Next, let's add the build. In the build section, when you click on "Add build step," pick the Build With Grails option. This gives you a number of options, including the Grails targets to be run, as well as directory options and some command line parameters. You can invoke several targets in the same line, which is not the case when you run Grails on the command line. However, you do need to enclose each command, including its arguments and options, in double quotes.

Figure 3. Adding a build step for the Grails unit tests

The actual Grails command we are invoking here is the following:

$ grails test-app -unit --non-interactive

Note the --non-interactive option.
Under normal circumstances, Grails builds are quite chatty, and they often prompt you for extra information, such as whether you want to upgrade one of your plugins. If this happens when your build is running on Hudson, your build will come to a grinding halt and wait until some kind soul answers its question. Which, of course, will never happen. Indeed, one of the tenets of any Continuous Integration architecture is that a build should be able to run without human intervention, using only the resources downloaded from the source code repository. The --non-interactive option, introduced in Grails 1.1, is the workaround for this issue. With this option, any queries will be automatically answered with the default option (which is usually a sensible choice), and the build will continue. Finally, in the post-build actions, you can publish your JUnit test reports. Remember, Grails is Groovy and Groovy is Java, and the Grails testing frameworks have a solid foundation in Spring and JUnit. So all you need to do is point Hudson to the test/reports directory, and it will find everything it needs to report on your test results (see Figure 4).

Figure 4. Hudson reports on Grails unit test results

Automating Your Integration Tests

Grails integration and functional tests are slower and more heavyweight, but also more comprehensive. During these tests, Grails creates an in-memory database, or, depending how you have set up your application, uses a proper test database. It also loads up a full Spring and Hibernate application context. And, if you are running web tests of any kind, it will start up a web server in the background for your tests, just for good measure. These tests let you exercise all the components of your application from end to end, to see how they work together. Needless to say, this all can take a bit of time. Because of this, I like to run the unit tests in a separate build job, followed by the integration tests in a subsequent step.
Like the unit tests, you can run integration tests in isolation, by using the -integration option. Maven users should note that this will run only the integration tests, not the unit tests followed by the integration tests. You can set up a Hudson build job to run your integration tests by using the -integration option in the Grails build step.

$ grails test-app -integration --non-interactive

Another approach is to run all of the tests in this second testing phase. Including the unit tests usually increases the overall time taken for these builds very marginally, and it makes it much easier to integrate accurate code coverage metrics, which we will cover further on.

Automating Functional and Web Tests

Web testing is a tricky business, but Grails goes out of its way to provide useful tools for the job. There are Grails plugins for both Selenium and Canoo Webtest, as well as the very promising Functional Test plugin. For this example, we will be using the third option, though similar integration techniques work for both the Selenium and Canoo Webtest tests. You can install the Functional Test plugin into your project as shown here:

$ grails install-plugin functional-test

This plugin is based on the HTMLUnit library, and fits in very nicely with the Grails approach. Functional test cases go in their own directory (the test/functional directory), and test results are generated in the standard JUnit format. The HTMLUnit library also makes for very clear and readable test cases. A very simple test case is shown here:

```groovy
class RegisterListingFunctionalTests extends functionaltestplugin.FunctionalTestCase {

    void testDisplayRegistrationForm() {
        get '/listing/edit'
        assertStatus 200
        assertContentContains 'Sign up now!'
    }

    void testRegistrationPageShouldDisplaysErrorMessageIfNoFieldsEntered() {
        get '/listing/edit'
        click 'submit'
        assertContentContains 'Sorry...'
        assertContentContains 'You need to fill out all the required fields.'
    }
    ...
}
```

These tests will run whenever you run the normal Grails tests. They produce normal JUnit test reports that Hudson can process. The plugin starts up your web application on a local Jetty server, running on port 8080 by default. Of course, you may need to modify this option if the port 8080 is already being used. You can do this by setting the port you want in the server.port option (see Figure 5).

Figure 5. Setting up integration and functional tests

Integrating Code Coverage Reports

Code quality metrics in the Groovy space are relatively young. One promising new project called CodeNarc provides some simple static analysis rules for Groovy (similar to Checkstyle for Java), but is not quite ready for seamless integration with Hudson just yet. Code coverage, on the other hand, is well catered for by the open source Cobertura project, which handles Groovy code just fine. Code coverage is a very useful code quality metric that indicates how much of your code has been executed by your test cases. While it is not in itself a guarantee of high-quality testing, it is certainly very handy for determining what code hasn't been tested at all, and as a general indicator of how well testing practices are being applied. Code coverage in Grails is easy. Just install the Test Code Coverage plugin, as shown here:

$ grails install-plugin code-coverage

Now, whenever you run your Grails tests, code coverage data will be collected, and an HTML code coverage report generated. To integrate with Hudson, however, we will need to produce XML code coverage data, using the -xml option, as shown below.

$ grails test-app -xml --non-interactive

You will also need to install the Hudson Cobertura plugin into your Hudson installation, and to tell Hudson that your build will be producing coverage data, and where to find it (see Figure 6). With this done, Hudson will produce the sort of coverage report shown in Figure 7.

Figure 6. Configuring code coverage reports in Hudson

Figure 7.
Hudson generates very readable code coverage reports

Code coverage is an extremely processor-intensive task, so you don't want to use it for every build. If you want to skip the code coverage for a particular build (for example, during the unit tests build job), you can use the -nocoverage option. On a typical set of unit tests, this option can make the builds run much faster.

Conclusion

Grails is an excellent, highly productive development framework that positively encourages good development and testing practices. In this article, we have seen how to set up a Continuous Integration build job to compile and test your Grails application in Hudson. We also looked at how to separate unit tests from integration tests, and saw a few tricks of the trade that are useful to know when you automate your integration and functional/web tests. Finally, we saw how to integrate code coverage metrics into your build.

Resources

- The Grails website, where you can learn more about Grails
- Hudson can be downloaded here
- Hudson tutorial by Andrew Glover
- The Grails Functional Test plugin is documented here.
- The Grails Test Code Coverage plugin is documented here.
- CodeNarc, code quality metrics for Groovy

Thanks!
by biehl - 2009-07-23 04:58

Great guide. It mostly works for me. One problem I have is that currently when a test fails the whole build is counted as failed. So I cannot distinguish failed compilations and failed tests. Do you know any way to distinguish between compilation and test failures?

Best
Anders
FGETPOS(3P)                POSIX Programmer's Manual               FGETPOS(3P)

PROLOG
       This manual page is part of the POSIX Programmer's Manual. The Linux
       implementation of this interface may differ (consult the corresponding
       Linux manual page for details of Linux behavior), or the interface may
       not be implemented on Linux.

NAME
       fgetpos — get current file position information

SYNOPSIS
       #include <stdio.h>

       int fgetpos(FILE *restrict stream, fpos_t *restrict pos);

DESCRIPTION
       The functionality described on this reference page is aligned with the
       ISO C standard. Any conflict between the requirements described here
       and the ISO C standard is unintentional. This volume of POSIX.1‐2017
       defers to the ISO C standard.

       The fgetpos() function shall store the current values of the parse
       state (if any) and file position indicator for the stream pointed to
       by stream in the object pointed to by pos. The value stored contains
       unspecified information usable by fsetpos() for repositioning the
       stream to its position at the time of the call to fgetpos().

       The fgetpos() function shall not change the setting of errno if
       successful.

RETURN VALUE
       Upon successful completion, fgetpos() shall return 0; otherwise, it
       shall return a non-zero value and set errno to indicate the error.

EXAMPLES
       None.

APPLICATION USAGE
       None.

RATIONALE
       None.

FUTURE DIRECTIONS
       None.

SEE ALSO
       Section 2.5, Standard I/O Streams, fopen(3p), ftell(3p), rewind(3p),
       ungetc(3p)

Pages that refer to this page: stdio.h(0p), ftell(3p)
Today's coding question is: find the length of the longest subsequence of two strings. Given "abc" and "aec", the longest subsequence string is "ac".

Quick FYI, if you are confused about substring and subsequence. Given two strings: "iamnice", "iamtired"

e.g. of a longest substring: "iam"
e.g. of a longest subsequence: "iamie"

In this question, we only care about subsequence. So naively I can list down (space complexity is kaboom over here) all the possible combinations of both strings (kaboom time complexity), then search for the longest common string (O(mn) where m and n are the lengths of the strings). Altogether, extremely terrible time and space complexity.

```python
def findCombinationsOfString(string, list, combi):
    if len(string) == 0:
        return
    newCombi = combi + string[0]
    # don't add common strings
    if newCombi not in list:
        list.append(newCombi)
        findCombinationsOfString(string[1:], list, newCombi)
    for i in range(1, len(string)):
        findCombinationsOfString(string[i:], list, combi)
    return list

def nativelyFindLongestSubsequenceOfStrings(str1, str2):
    lengthStr1, lengthStr2 = len(str1), len(str2)
    if lengthStr1 == 0 or lengthStr2 == 0:
        return 0
    if str1 == str2:
        return lengthStr1
    list1 = findCombinationsOfString(str1, [], "")
    # ['a', 'ab', 'abc', 'ac', 'b', 'bc', 'c']
    list1 = sorted(list1, key=len, reverse=True)
    # ['abc', 'ab', 'ac', 'bc', 'a', 'b', 'c']
    list2 = findCombinationsOfString(str2, [], "")
    # ['a', 'ae', 'aec', 'ac', 'e', 'ec', 'c']
    list2 = sorted(list2, key=len, reverse=True)
    # ['aec', 'ae', 'ac', 'ec', 'a', 'e', 'c']
    for i in range(len(list1)):
        for j in range(len(list2)):
            if list1[i] == list2[j]:
                return len(list1[i])
    return 0

print nativelyFindLongestSubsequenceOfStrings("", "")
```

The better way is to put two strings into a 2D array. (I tried to draw my explanation, refer to narrative explanation in the next section)

More explanation in words. We put two strings into a 2D array. Iterate through the 2D array to find all common characters.
If the characters from both strings are the same, increment 1 (an indicator of 1 character in the subsequence). To get the longest subsequence, we need to consider all possible combinations of common characters that come before. So we increment 1 + get whatever (possible combinations) value that comes before. Soon we realise there are 3 possible scenarios of the possible combination values; it could be from the top left box, the top box, and the left box. Since we are finding the longest subsequence, we get the maximum value of the 3 possible values. If the current index shows that both characters from the two strings are the same, we increment 1 + max(top left value, top value, left value). Until it reaches the last index at bottom right. It should contain the longest subsequence of two strings. Tada.

Code, of course in Python.
Time complexity: O(mg), just kidding. It's O(mn), where m and n are the lengths of the strings
Space complexity: O(mn), the 2D array takes up space.

```python
def findLongestSubsequenceString(str1, str2):
    lengthStr1, lengthStr2 = len(str1), len(str2)
    # if one of the strings is empty, return length of non empty string
    # we can easily use a variable for magic number 0. e.g. EMPTY_STRING = 0
    if lengthStr1 == 0 or lengthStr2 == 0:
        return 0
    # do you think it's a good idea to catch specific case like this?
    # if the possibility of both strings being same is high, this is a legit case I think.
    if str1 == str2:
        return lengthStr1
    # create 2D array with length of str1 and str2
    # I think this syntax is cool AF.
    memo = [[0 for i in range(lengthStr2)] for i in range(lengthStr1)]
    for i in range(lengthStr1):
        for j in range(lengthStr2):
            topLeft = top = left = 0
            # need to check if the indices are within range of 2D array,
            # else the code will throw you into the sea (I meant throw out of range error)
            if i > 0 and j > 0:
                topLeft = memo[i - 1][j - 1]
            if i > 0:
                top = memo[i - 1][j]
            if j > 0:
                left = memo[i][j - 1]
            # get the maximum value of possible subsequence combinations
            memo[i][j] = max(topLeft, top, left)
            # increment 1 if current characters are the same
            # we can do it up there but I think it looks clearer if we separate out this logic
            if str1[i] == str2[j]:
                memo[i][j] = memo[i][j] + 1
    return memo[lengthStr1 - 1][lengthStr2 - 1]

print findLongestSubsequenceString('abc', 'bca')  # 2
# 2D Array looks like this:
# [1, 0, 0]
# [0, 1, 1]
# [0, 1, 2]
```

What are we missing out, this piece of (shit) code only caters for unique strings. What if there are duplicated characters (opps). I just started playing with dynamic programming so I need to study more on it. So far this is the best solution from my side. We could also do it recursively. I have not thought of that approach yet. (Next part?) Comment if I miss out any edge cases. :< I did a quick peek (googled for solution :> ), there is a better solution using suffix thingy but I believe my brain has enough of this question. We will come back to this suffix thingy soon. Ok, bye. p/s: are we able to color code the code?

Discussion

You can add the language directly after the first three back-ticks at the beginning of your code-block. Like so:

ohhh, thanks peter! colourrssss

Your algorithm is not correct in some cases. Just launch it for 'aa' and 'a' strings ;)
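As the final comment points out, this version over-counts when a character in one string can pair with the same character twice (try 'aa' and 'a'). The standard longest-common-subsequence recurrence avoids this by adding 1 only to the top-left (diagonal) value when the characters match, never to the top or left values. Here is a sketch of that fix (the function name is mine, not from the original post):

```python
def longest_common_subsequence_length(str1, str2):
    m, n = len(str1), len(str2)
    # dp[i][j] holds the LCS length of str1[:i] and str2[:j];
    # the extra row/column of zeros handles the empty-prefix cases
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if str1[i - 1] == str2[j - 1]:
                # a match may only extend the subsequence that excludes
                # BOTH current characters, i.e. the diagonal value
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(longest_common_subsequence_length('aa', 'a'))     # 1
print(longest_common_subsequence_length('abc', 'bca'))  # 2
```

With this recurrence, 'aa' vs 'a' correctly gives 1, and 'iamnice' vs 'iamtired' gives 5 ("iamie"), matching the example at the top of the post.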
NAME
     core - memory image file format

SYNOPSIS
     #include <sys/param.h>
     #include <sys/core.h>

DESCRIPTION
     The memory image of an abnormally terminated process is written to a
     file named programname.core in the working directory, provided the
     terminated process had write permission in the directory, and provided
     the abnormality did not cause a system crash. (In this event, the
     decision to save the core file is arbitrary, see savecore(8).)

     The maximum size of a programname.core file is limited by setrlimit(2).
     Files which would be larger than the limit are not created.

     The programname.core file consists of the u-area, whose size (in pages)
     is defined by the UPAGES manifest in the <machine/param.h> file.

OpenBSD 3.6                    December 11, 1993
If I require a file full of constants in g1.inc like

```perl
use constant FAILURE_CONF => 1;
use constant FAILURE_EVENT => 2;
use constant OPEN_REQ => 3;
use constant OPEN_CONF => 4;
1;
```

```perl
use strict;
require "g1.inc";
if ( 3 == OPEN_REQ ) {
    print "Code is OPEN_REQ\n";
} else {
    print "Code is NOT OPEN_REQ\n";
}
```

I receive: 'Bareword "OPEN_REQ" not allowed while "strict subs"'. I had thought for some reason requiring a file is like inserting that code at that spot in the script. Is it a compiletime vs runtime issue? If I use a module, what is the briefest way to pass my large number of declared constants into the main namespace without retyping every one of them into @EXPORT? I await your witty wisdom. :-)

require is to happen at runtime, which leaves the compiler with a bareword and nothing to match in the symbol table. Arrange to use the file instead, since that happens at compile time.

After Compline, Zaxo

do file is probably the simplest way. Otherwise, you have to arrange for them to be exported from the modules namespace into the callers.

```perl
do 'g1.inc';
```

For the constant subs to be recognised as such when the code that calls them is compiled, you'll need to put the do in a BEGIN{} block.

```perl
BEGIN{ do 'g1.inc'; }
```

```perl
do "g1.inc"
```

As for your second question, you could have all your constants have a common prefix, like C_. Then, you can do a little introspection as so:

```perl
package Constants;
use base 'Exporter';

our @EXPORT = grep {
    substr( $_, 0, 2 ) eq 'C_' && __PACKAGE__->can( $_ );
} keys %Constants::;

use constant C_FOO => 1;
use constant C_BAR => 2;
use constant C_BAZ => 3;
```

The Perfect is the Enemy of the Good. -Scott

```perl
package Constants;
use base 'Exporter';

our @EXPORT = grep {
    substr( $_, 0, 2 ) eq 'C_' && __PACKAGE__->can( $_ );
} keys %Constants::;
```

Anyone doing a lot of work with perl constants might want to skim through those talk notes, by the way.
And in general, these days I'd recommend staying away from constants altogether -- their utility rarely makes up for their annoyances (e.g. no sigil, so they don't interpolate very smoothly). Also a case can be made for keeping definitions of things in a YAML file instead of using perl code to store configuration (programmatic modification of YAML is a lot easier than parsing perl).

Try this:

```perl
use constant {FAILURE_CONF  => 1,
              FAILURE_EVENT => 2,
              OPEN_REQ      => 3,
              OPEN_CONF     => 4};
```

```perl
package Constants;
use strict;
use Exporter;

our @ISA = qw(Exporter);
our @EXPORT = qw(FOO $BAR @A %H);

use constant FOO => 1;
our $BAR = "bar";
our @A = (1,2,3,4);
our %H = (a=>1,b=>2);
1;
```

```perl
use strict;
use Constants;

print FOO,"\n";
print $BAR,"\n";
print join(",",@A),"\n";
print "$_=>$H{$_}," foreach keys %H;
```

I cheat, personally. I have a program which loads all of its contents from an XML file. I think I've got this down to the bare minimum required to be a useful example. (there's a few different types of config parameters, not all of which get exported, and logic for adding new values) Update: You'd of course want to swap EXPORT_OK and EXPORT, so you don't have to pass in which constants you wanted to import (also cleaned up a typo, and split a paragraph)

Now, the process the author uses is kinda nice. Maybe that should be replicated, but definitely not the resulting code.
I also used Grygonos's suggestion simplifying the creation of all the constants. I'm gonna need to spend some time studying jhourcle's XML technique. friedo's technique looks similar to dragonchild's but I didn't have enum.pm installed so haven't checked it out yet. I have learned many lessons o wise ones! Ceramics Glass Wood Metal Plastic Paper Banana leaves Something else Results (392 votes), past polls
Armstrong number in c: This program prints armstrong numbers. In our program we ask the user to enter a number and then we use a loop from one to the entered number and check if each value is an armstrong number, and if it is then the number is printed on the screen. Remember a number is armstrong if the sum of cubes of individual digits of a number is equal to the number itself. For example 371 is an armstrong number as 3³ + 7³ + 1³ = 371. Some other armstrong numbers are 0, 1, 153, 370, 407.

```c
#include <stdio.h>

int main()
{
    int r;
    long number = 0, c, sum = 0, temp;

    printf("Enter an integer upto which you want to find armstrong numbers\n");
    scanf("%ld", &number);

    for (c = 1; c <= number; c++) {
        temp = c;
        sum = 0;
        /* sum the cubes of the digits of c */
        while (temp != 0) {
            r = temp % 10;
            sum = sum + r * r * r;
            temp = temp / 10;
        }
        if (c == sum)
            printf("%ld\n", c);
    }

    return 0;
}
```
The importance of patterns in biology A lot of what we do when writing programs for biology can be described as searching for patterns in strings. The obvious examples come from the analysis of biological sequence data – remember that DNA, RNA and protein sequences are just strings. Many of the things we want to look for in biological sequences can be described in terms of patterns: - protein domains - DNA transcription factor binding motifs - restriction enzyme cut sites - degenerate PCR primer sites - runs of mononucleotides However, it's not just sequence data that can have interesting patterns. As we discussed in when talking about files, most of the other types of data we have to deal with in biology comes in the form of strings inside text files – things like: - read mapping locations - geographical sample coordinates - taxonomic names - gene names - gene accession numbers - BLAST search results In previous pages, we've looked at some programming tasks that involve pattern recognition in strings. We've seen how to count individual amino acid residues (and even groups of amino acid residues) in protein sequences, and how to identify restriction enzyme cut sites in DNA sequences. We've also seen how to examine parts of gene names and match them against individual characters. The common theme among all these problems is that they involve searching for a fixed pattern. But there are many problems that we want to solve that require more flexible patterns. For example: - given a DNA sequence, what's the length of the poly-A tail? - given a gene accession name, extract the part between the third character and the underscore - given a protein sequence, determine if it contains this highly redundant protein domain motif Because these types of problems crop up in so many different fields, there's a standard set of tools in Python for dealing with them: regular expressions. 
Regular expressions are a topic that might not be covered in a general purpose programming book, but because they're so useful in biology, we're going to devote the whole of this section to looking at them. Although the tools for dealing with regular expressions are built in to Python, they are not made automatically available when you write a program. In order to use them we must first talk about modules.

Modules in Python

The functions and data types that we've discussed so far have been the basic ones that are likely to be needed in pretty much every program – tools for dealing with strings and numbers, for reading and writing files, and for manipulating lists of data. As such, they are automatically made available when we start to create a Python program. If we want to open a file, we simply write a statement that uses the open() function. However, there's another category of tools in Python which are more specialized. Regular expressions are one example, but there is a large list of specialized tools which are very useful when you need them, but are not likely to be needed for the majority of programs. Examples include tools for doing advanced mathematical calculations, for downloading data from the web, for running external programs, and for manipulating dates. Each collection of specialized tools – really just a collection of specialized functions and data types – is called a module. For reasons of efficiency, Python doesn't automatically make these modules available in each new program, as it does with the more basic tools. Instead, we have to explicitly load each module of specialized tools that we want to use inside our program. To load a module we use the import statement. For example, the module that deals with regular expressions is called re, so if we want to write a program that uses the regular expression tools we must include the line:

import re

at the top of our program.
When we then want to use one of the tools from a module, we have to prefix it with the module name. For example, to use the regular expression search() function (which we'll discuss) we have to write:

re.search(pattern, string)

rather than simply:

search(pattern, string)

If we forget to import the module which we want to use, or forget to include the module name as part of the function call, we will get a NameError. We'll encounter various other modules in future pages. For the rest of this page specifically, all code examples will require the import re statement in order to work. For clarity, we won't include it, so if you want to try running any of the examples, you'll need to add it at the start of your code.

Raw strings

Writing regular expression patterns, as we'll see in the next section, requires us to type a lot of special characters. Recall from the section on manipulating text that certain combinations of characters are interpreted by Python to have a special meaning. For example, \n means start a new line, and \t means insert a tab character. Unfortunately, there are a limited number of special characters to go round, so some of the characters that have a special meaning in regular expressions clash with the characters that already have a special meaning. Python's way round this problem is to have a special rule for strings: if we put the letter r immediately before the opening quotation mark, then any special characters inside the string are ignored:

print(r"\t\n")

The r stands for raw, which is Python's description for a string where special characters are ignored. Notice that the r goes outside the quotation marks – it is not part of the string itself. We can see from the output that the above code prints out the string just as we've written it: \t\n without any tabs or newlines.
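To make the difference concrete, here's a quick comparison of a normal string and a raw string (just an illustration):

```python
# a normal string: \t and \n are interpreted as a tab and a newline
normal = "\t\n"
# a raw string: the backslashes are kept as literal characters
raw = r"\t\n"

print(len(normal))  # 2 (one tab character, one newline character)
print(len(raw))     # 4 (backslash, t, backslash, n)
```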
You'll see this special raw notation used in all the regular expression code examples on this page – even when it's not strictly necessary – because it's a good habit to get in to.

Searching for a pattern in a string

We'll start off with the simplest regular expression tool. re.search() is a true/false function that determines whether or not a pattern appears somewhere in a string. It takes two arguments, both strings. The first argument is the pattern that you want to search for, and the second argument is the string that you want to search in. For example, here's how we test if a DNA sequence contains an EcoRI restriction site:

```python
dna = "ATCGCGAATTCAC"
if re.search(r"GAATTC", dna):
    print("restriction site found!")
```

Notice that we've used the raw notation for the pattern string, even though it's not strictly necessary since it doesn't contain any special characters.

Alternation

Now that we've seen a simple example of how to use re.search(), let's look at something a bit more interesting. This time, we'll check for the presence of an AvaII recognition site, which can have two different sequences: GGACC and GGTCC. One way to do this would be to use the techniques we learned in the previous chapter to make a complex condition using or:

```python
dna = "ATCGCGAATTCAC"
if re.search(r"GGACC", dna) or re.search(r"GGTCC", dna):
    print("restriction site found!")
```

But a better way is to capture the variation in the AvaII site using a regular expression. One useful feature of regular expressions is called alternation. To represent a number of different alternatives, we write the alternatives inside parentheses separated by a pipe character.
In the case of AvaII, there are two alternatives for the third base – it can be either A or T – so the pattern looks like this:

GG(A|T)CC

Writing the pattern as a raw string and putting it inside a call to re.search() gives us the code:

```python
dna = "ATCGCGAATTCAC"
if re.search(r"GG(A|T)CC", dna):
    print("restriction site found!")
```

Notice the power of what we've done here; we've written a single pattern which captures all the variation in the sequence in one string.

Character groups

The BisI restriction enzyme cuts at an even wider range of motifs – the pattern is GCNGC, where N represents any base. We can use the same alternation technique to represent this pattern:

GC(A|T|G|C)GC

However, there's another regular expression feature that lets us write the pattern more concisely. A pair of square brackets with a list of characters inside them can represent any one of these characters. So the pattern GC[ATGC]GC will match GCAGC, GCTGC, GCGGC and GCCGC. Here's a program that checks for the presence of a BisI restriction site using character groups:

```python
dna = "ATCGCGAATTCAC"
if re.search(r"GC[ATGC]GC", dna):
    print("restriction site found!")
```

Taken together, alternation and character groups do a pretty good job of capturing the kind of variation that we're interested in for biological programming. Before we move on, here are two short cuts that deal with specific, common scenarios. If we want a character in a pattern to match any character in the input, we can use a period or dot. For example, the pattern GC.GC would match all four possibilities in the BisI example above. However, the period would also match any character which is not a DNA base, or even a letter. Therefore, the whole pattern would also match GCFGC, GC&GC and GC9GC, which may not be what we want, so be careful when using this feature. Sometimes it's easier, rather than listing all the acceptable characters, to specify the characters that we don't want to match.
Putting a caret ^ at the start of a character group like this [^XYZ] will negate it, and match any character that isn't in the group. The example above will match any character other than X, Y or Z.

Quantifiers

The regular expression features discussed above let us describe variation in the individual characters of patterns. Another group of features, quantifiers, let us describe variation in the number of times a section of a pattern is repeated. A question mark immediately following a character means that that character is optional – it can match either zero or one times. So in the pattern GAT?C the T is optional, and the pattern will match either GATC or GAC. If we want to apply a question mark to more than one character, we can group the characters in parentheses. For example, in the pattern GGG(AAA)?TTT the group of three As is optional, so the pattern will match either GGGAAATTT or GGGTTT. A plus sign immediately following a character or group means that the character or group must be present but can be repeated any number of times – in other words, it will match one or more times. For example, the pattern GGGA+TTT will match three Gs, followed by one or more As, followed by three Ts. So it will match GGGATTT, GGGAATTT, GGGAAATTT, etc. but not GGGTTT. An asterisk immediately following a character or group means that the character or group is optional, but can also be repeated. In other words, it will match zero or more times. For example, the pattern GGGA*TTT will match three Gs, followed by zero or more As, followed by three Ts. So it will match GGGTTT, GGGATTT, GGGAATTT, etc. It's the most flexible quantifier. If we want to match a specific number of repeats, we can use curly brackets. Following a character or group with a single number inside curly brackets will match exactly that number of repeats. For example, the pattern GA{5}T will match GAAAAAT but not GAAAAT or GAAAAAAT.
Following a character or group with a pair of numbers inside curly brackets separated with a comma allows us to specify an acceptable range of repeats. For example, the pattern GA{2,4}T means G, followed by between 2 and 4 As, followed by T. So it will match GAAT, GAAAT and GAAAAT but not GAT or GAAAAAT. Just like with substrings, we can leave out the lower or upper limits. A{3,} will match three or more As, and G{,7} will match up to seven Gs.

Positions

The final set of regular expression tools we're going to look at don't represent characters at all, but rather positions in the input string. The caret symbol ^ matches the start of a string, and the dollar symbol $ matches the end of a string. The pattern ^AAA will match AAATTT but not GGGAAATTT. The pattern GGG$ will match AAAGGG but not AAAGGGCCC.

Combining

The real power of regular expressions comes from combining these tools. We can use quantifiers together with alternations and character groups to specify very flexible patterns. For example, here's a complex pattern to identify full-length eukaryotic messenger RNA sequences:

^AUG[AUGC]{30,1000}A{5,10}$

Reading the pattern from left to right, it will match:

- an AUG start codon at the beginning of the sequence
- followed by between 30 and 1000 bases which can be A, U, G or C
- followed by a poly-A tail of between 5 and 10 bases at the end of the sequence

As you can see, regular expressions can be quite tricky to read until you're familiar with them! However, it's well worth investing a bit of time learning to use them, as the same notation is used across multiple different tools. The regular expression skills that you learn in Python are transferable to other programming languages, command line tools, and text editors.

The features we've discussed above are the ones most useful in biology. However, there are many more regular expression features available in Python. If you want to become a regular expression master, it's worth reading up on greedy vs.
minimal quantifiers, back-references, lookahead and lookbehind assertions, and built-in character classes.

Before we move on to look at some more sophisticated uses of regular expressions, it's worth noting that there's a method similar to re.search() called re.match(). The difference is that re.search() will identify a pattern occurring anywhere in the string, whereas re.match() will only identify a pattern if it matches at the very start of the string. Most of the time we want the former behaviour.

More ways to use patterns

In all the examples we've seen so far, we used re.search() as the condition in an if statement to decide whether or not a string contained a pattern. However, there are lots more interesting things we can do with regular expression patterns.

Extracting the part that matched

Often in our programs we want to find out not only if a pattern matched, but what part of the string was matched. To do this, we need to store the result of using re.search(), then use the group() method on the resulting object.

When introducing the re.search() function above I said that it was a true/false function. That's not exactly correct though – if it finds a match, it doesn't return True, but rather an object that is evaluated as true in a conditional context (if the distinction doesn't seem important to you, then you can safely ignore it). The value that's actually returned is a match object – a new data type that we've not encountered before. Like a file object, a match object doesn't represent a simple thing, like a number or string. Instead, it represents the results of a regular expression search. And just like a file object, a match object has a number of useful methods for getting data out of it.

One such method is the group() method. If we call this method on the result of a regular expression search, we get the portion of the input string that matched the pattern.
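A quick sketch contrasting the two functions (note that in Python, re.match() anchors the pattern at the start of the string; re.fullmatch() is the variant that must cover the whole string). The sequence is made up:

```python
import re

dna = "ATCGGATCCATG"

# re.search() finds the pattern anywhere in the string
print(bool(re.search(r"GGATCC", dna)))  # True - found in the middle

# re.match() only succeeds if the pattern matches at the start
print(bool(re.match(r"GGATCC", dna)))   # False
print(bool(re.match(r"ATCG", dna)))     # True - the string starts with ATCG
```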
Here's an example: imagine we want to take a DNA sequence and determine whether or not it contains any ambiguous bases – i.e. any bases that are not A, T, G or C. We can use a negated character group to write a regular expression that will match any non-ATGC base:

[^ATGC]

and test the sequence like this:

    dna = "ATCGCGYAATTCAC"
    if re.search(r"[^ATGC]", dna):
        print("ambiguous base found!")

The code above tells us that the DNA sequence contained a non-ATGC base, but it doesn't tell us exactly what the base was. To do that, we need to call the group() method on the match object like this:

    dna = "CGATNCGGAACGATC"
    m = re.search(r"[^ATGC]", dna)
    # m is now a match object
    if m:
        print("ambiguous base found!")
        ambig = m.group()
        print("the base is " + ambig)

The output from this program:

    ambiguous base found!
    the base is N

tells us not only that the sequence contained an ambiguous base, but that the ambiguous base was N.

Extracting multiple groups

What if we want to extract more than one bit of the pattern? Say we want to match a scientific name like Homo sapiens or Drosophila melanogaster. The pattern is relatively simple: multiple characters, followed by a space, followed by multiple characters:

.+ .+

To match multiple characters we're using a period (meaning any character) followed by a plus (meaning repeated at least once but possibly multiple times). Now let's say that we want to extract the genus name and species name into separate variables. We add parentheses around the parts of the pattern that we want to store:

(.+) (.+)

This is called capturing part of the pattern. We can now refer to the captured bits of the pattern by supplying an argument to the group() method.
group(1) will return the bit of the string matched by the section of the pattern in the first set of parentheses, group(2) will return the bit matched by the second, etc.:

    scientific_name = "Homo sapiens"
    m = re.search("(.+) (.+)", scientific_name)
    if m:
        genus = m.group(1)
        species = m.group(2)
        print("genus is " + genus + ", species is " + species)

The output shows how the two bits of the same pattern were stored in different variables. Note that the space, which was part of the pattern but not part of the captured groups, isn't in either of the two variables:

    genus is Homo, species is sapiens

If you're keeping count, you'll realize that we now have three different roles for parentheses in regular expressions:

- surrounding the alternatives in an alternation
- grouping parts of a pattern for use with a quantifier
- defining parts of a pattern to be extracted after the match

Getting match positions

As well as containing information about the contents of a match, the match object also holds information about the position of the match. The start() and end() methods get the positions of the start and end of the pattern on the string. Let's go back to our ambiguous base example and find the position of the ambiguous base:

    dna = "CGATNCGGAACGATC"
    m = re.search(r"[^ATGC]", dna)
    if m:
        print("ambiguous base found!")
        print("at position " + str(m.start()))

Remember that we start counting from zero, so in this case, the match starting at the fifth base has a start position of four:

    ambiguous base found!
    at position 4

Multiple matches

An obvious limitation of the above example is that it can only find a single ambiguous base, because re.search() can only find a single match.
To process multiple matches, we need to switch to re.finditer(), which returns a sequence of match objects which we can process in a loop:

    dna = "CGCTCNTAGATGCGCRATGACTGCAYTGC"
    matches = re.finditer(r"[^ATGC]", dna)
    for m in matches:
        base = m.group()
        pos = m.start()
        print(base + " found at position " + str(pos))

We can see from the output that we now find all three parts of the string that match the pattern:

    N found at position 5
    R found at position 15
    Y found at position 25

Getting multiple matches as strings

A common scenario is where we want to get a list of all the parts of a string that match a given pattern. Here's a regular expression pattern that matches runs of A and T longer than five bases:

[AT]{6,}

Here's a DNA sequence containing the bits that we want to extract:

ACTGCATTATATCGTACGAAATTATACGCGCG

We could extract the bits of the string that match the pattern using re.finditer() and group():

    dna = "ACTGCATTATATCGTACGAAATTATACGCGCG"
    matches = re.finditer(r"[AT]{6,}", dna)
    result = []
    for m in matches:
        result.append(m.group())
    print(result)

but because this is a common problem, there's a special method for dealing with it called re.findall(). Just like the other methods we've seen, re.findall() takes the pattern and the string as arguments, but rather than returning a list of match objects it returns a list of strings. We can rewrite our code like this:

    dna = "ACTGCATTATATCGTACGAAATTATACGCGCG"
    result = re.findall(r"[AT]{6,}", dna)
    print(result)

Splitting a string using a regular expression

Occasionally it can be useful to split a string using a regular expression pattern as the delimiter. The normal string split() method doesn't allow this, but the re module has a split() function of its own that takes a regular expression pattern as an argument. The first argument is the pattern, the second argument is the string to be split.
Imagine we have a consensus DNA sequence that contains ambiguity codes, and we want to extract all runs of contiguous unambiguous bases. We need to split the DNA string wherever we see a base that isn't A, T, G or C:

    dna = "ACTNGCATRGCTACGTYACGATSCGAWTCG"
    runs = re.split(r"[^ATGC]", dna)
    print(runs)

The output shows how the function works – the return value is a list of strings:

    ['ACT', 'GCAT', 'GCTACGT', 'ACGAT', 'CGA', 'TCG']

Notice that the bits of the string that matched the pattern are excluded from the output (just like the delimiters are excluded from the output when we use the normal split() method).

Recap

In this section we learned about regular expressions, and the functions and methods that use them. We started with a brief introduction to two concepts that, while not part of the regular expression tools, are necessary in order to use them – modules and raw strings. We got a brief overview of features that can be used in regular expression patterns, and a quick look at the range of different things we can do with them. Just as regular expressions themselves can range from simple to complex, so can their uses. We can use regular expressions for simple tasks – like determining whether or not a sequence contains a particular motif – or for complicated ones, like identifying messenger RNA sequences by using complex patterns.

Before we move on to the exercises, it's important to recognize that for any given pattern, there are probably multiple ways to describe it using a regular expression. Near the start of the page, we came up with the pattern GG(A|T)CC to describe the AvaII restriction enzyme recognition site, but the same pattern could also be written as:

    GG[AT]CC
    (GGACC|GGTCC)
    (GGA|GGT)CC
    G{2}[AT]C{2}

As with other situations where there are multiple different ways to write the same thing, it's best to be guided by what is clearest to read.

Exercises

Have a go at the following two exercises.
The first is a simple multi-part problem that will give you experience writing a bunch of different regular expressions while keeping the bulk of the program the same. The second one is a classic bioinformatics problem: predicting the fragments that will be produced by digesting a given DNA sequence with a restriction enzyme. Remember, you can always find solutions and explanations for all exercises in the Python for Biologists books.

Accession names

Here's a list of made up gene accession names:

    accessions = ['xkn59438', 'yhdck2', 'eihd39d9', 'chdsye847', 'hedle3455', 'xjhd53e', '45da', 'de37dp']

Copy and paste this line into your text editor (or whatever you're using to write Python code). Write a program that will print only the accession names that satisfy the following criteria – treat each criterion separately:

- contain the number 5
- contain the letter d or e
- contain the letters d and e in that order
- contain the letters d and e in that order with a single letter between them
- contain both the letters d and e in any order
- start with x or y
- start with x or y and end with e
- contain three or more digits in a row
- end with d followed by either a, r or p

Double digest

Click here to download a file dna.txt which contains a made up DNA sequence. Predict the fragment lengths that we will get if we digest the sequence with two made up restriction enzymes – AbcI, whose recognition site is ANT/AAT, and AbcII, whose recognition site is GCRW/TG. The forward slashes (/) in the recognition sites represent the place where the enzyme cuts the DNA.

Solutions

You can find solutions to all the exercises, along with explanations of how they work, in the Python for Biologists books – head over to the books page to check them out.
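To give a flavour of the first exercise, here's how a few of the criteria might be expressed (a partial sketch, not the full solution):

```python
import re

accessions = ['xkn59438', 'yhdck2', 'eihd39d9', 'chdsye847',
              'hedle3455', 'xjhd53e', '45da', 'de37dp']

# contain the number 5
print([acc for acc in accessions if re.search(r"5", acc)])
# ['xkn59438', 'hedle3455', 'xjhd53e', '45da']

# start with x or y and end with e
print([acc for acc in accessions if re.search(r"^[xy].*e$", acc)])
# ['xjhd53e']

# end with d followed by either a, r or p
print([acc for acc in accessions if re.search(r"d[arp]$", acc)])
# ['45da', 'de37dp']
```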
https://pythonforbiologists.com/regular-expressions/
CRUD Operation with ASP.NET MVC and Fluent NHibernate

Some time ago I wrote a post about Getting Started with NHibernate and ASP.NET MVC – CRUD operations. It's one of the most popular posts on my blog, and I get lots of questions via email and other channels asking why I haven't written a post about Fluent NHibernate and ASP.NET MVC. So I thought it would be a good idea to write one.

What is Fluent NHibernate?

Convention over configuration – that is the mantra of Fluent NHibernate. If you have seen my blog post about NHibernate, you might have noticed that we need to create XML mappings to map each table. Fluent NHibernate uses POCO mappings instead of XML mappings, and since I firmly believe in convention over configuration, I like Fluent NHibernate a lot. But it's a matter of personal taste, and there is no strong argument for why you should use Fluent NHibernate instead of NHibernate.

The Fluent NHibernate team comprises James Gregory, Paul Batum, Andrew Stewart and Hudson Akridge. There are lots of committers, and it's an open source project. You can find more information on the project's site, where Fluent NHibernate is described like this:

Fluent, XML-less, compile safe, automated, convention-based mappings for NHibernate. Get your fluent on.

The site also has an excellent getting started guide that you can easily walk through to learn it.

ASP.NET MVC and Fluent NHibernate

So, all set – it's time to write a sample application. From Visual Studio 2013, go to File – New Project and add a new web application project with ASP.NET MVC. Once you are done creating the web project, it's time to add Fluent NHibernate to the application.
You can add Fluent NHibernate to the application via its NuGet package, installing it from the Package Manager Console.

Now that Fluent NHibernate is added, it's time to add a new database. I'm going to create a simple table called "Employee" with four columns: Id, FirstName, LastName and Designation. Following is the script for that:

    CREATE TABLE [dbo].[Employee]
    (
        [Id] INT NOT NULL PRIMARY KEY,
        [FirstName] NVARCHAR(50) NULL,
        [LastName] NVARCHAR(50) NULL,
        [Designation] NVARCHAR(50) NULL
    )

Now the table is ready, so it's time to create a model class for Employee. Following is the code for that:

    namespace FluentNhibernateMVC.Models
    {
        public class Employee
        {
            public virtual int EmployeeId { get; set; }
            public virtual string FirstName { get; set; }
            public virtual string LastName { get; set; }
            public virtual string Designation { get; set; }
        }
    }

Once we are done with our model class, it's time to create a map class which maps our model class to our database table – as we know, Fluent NHibernate uses POCO classes for mapping instead of XML mappings. Following is the map class:

    using FluentNHibernate.Mapping;

    namespace FluentNhibernateMVC.Models
    {
        public class EmployeeMap : ClassMap<Employee>
        {
            public EmployeeMap()
            {
                Id(x => x.EmployeeId);
                Map(x => x.FirstName);
                Map(x => x.LastName);
                Map(x => x.Designation);
                Table("Employee");
            }
        }
    }

If you look at the EmployeeMap class carefully, you will see that I have mapped the Id column to the EmployeeId property, and that the Table("Employee") call maps the Employee class to the Employee table. Now that we are done with our mapping class, it's time to add an NHibernate helper which will connect to the SQL Server database.
    using FluentNHibernate.Cfg;
    using FluentNHibernate.Cfg.Db;
    using NHibernate;
    using NHibernate.Tool.hbm2ddl;

    namespace FluentNhibernateMVC.Models
    {
        public class NHibernateHelper
        {
            public static ISession OpenSession()
            {
                ISessionFactory sessionFactory = Fluently.Configure()
                    .Database(MsSqlConfiguration.MsSql2008
                        .ConnectionString(@"Data Source=(LocalDB)\v11.0;AttachDbFilename=C:\data\Blog\Samples\FluentNhibernateMVC\FluentNhibernateMVC\FluentNhibernateMVC\App_Data\FNhibernateDemo.mdf;Integrated Security=True")
                        .ShowSql()
                    )
                    .Mappings(m => m.FluentMappings
                        .AddFromAssemblyOf<Employee>())
                    .ExposeConfiguration(cfg => new SchemaExport(cfg)
                        .Create(false, false))
                    .BuildSessionFactory();

                return sessionFactory.OpenSession();
            }
        }
    }

Now that we are done with the classes, it's time to scaffold the controller for the application. Right-click the Controllers folder and click Add – Controller. It will ask for a scaffolding item; I have selected "MVC 5 Controller with read/write actions". Once you click Add, it will ask for a controller name. Now our controller is ready, and it's time to write code for our actions.

Listing:

Following is the code for our listing action:

    public ActionResult Index()
    {
        using (ISession session = NHibernateHelper.OpenSession())
        {
            var employees = session.Query<Employee>().ToList();
            return View(employees);
        }
    }

You can add a view for the listing by right-clicking the view and choosing Add View. That's it – you are done with listing from your database. I have added a few records to the database table to see whether it's working or not, and the output is as expected.
Create:

Now it's time to write the code and views for creating an employee. Following is the code for the action results:

    public ActionResult Create()
    {
        return View();
    }

    // POST: Employee/Create
    [HttpPost]
    public ActionResult Create(Employee employee)
    {
        try
        {
            using (ISession session = NHibernateHelper.OpenSession())
            {
                using (ITransaction transaction = session.BeginTransaction())
                {
                    session.Save(employee);
                    transaction.Commit();
                }
            }
            return RedirectToAction("Index");
        }
        catch (Exception exception)
        {
            return View();
        }
    }

Here you can see one action for the blank form and another, marked with the HttpPost attribute, for posting data and saving it to SQL Server. You can add the view for Create by right-clicking the action and choosing Add View. When you run this, it works as expected, and once you click Create it returns to the listing screen.

Edit:

Now that we are done with listing and adding employees, it's time to write code for editing/updating an employee. Following is the code for the action methods (the body of the POST action was truncated in the original; it follows the same session/transaction pattern as Create, using Update instead of Save):

    public ActionResult Edit(int id)
    {
        using (ISession session = NHibernateHelper.OpenSession())
        {
            var employee = session.Get<Employee>(id);
            return View(employee);
        }
    }

    // POST: Employee/Edit/5
    [HttpPost]
    public ActionResult Edit(int id, Employee employee)
    {
        try
        {
            using (ISession session = NHibernateHelper.OpenSession())
            {
                using (ITransaction transaction = session.BeginTransaction())
                {
                    session.Update(employee);
                    transaction.Commit();
                }
            }
            return RedirectToAction("Index");
        }
        catch (Exception exception)
        {
            return View();
        }
    }

You can create the Edit view by right-clicking the view, as before. When you run the application, it works as expected.

Details:

You can write the Details code like the following:

    public ActionResult Details(int id)
    {
        using (ISession session = NHibernateHelper.OpenSession())
        {
            var employee = session.Get<Employee>(id);
            return View(employee);
        }
    }

You can add the view by right-clicking, as before. When you run the application, the output is as expected.
Delete:

Following is the code for deleting an employee – one action method for confirming the delete and another for deleting the data from the Employee table:

    public ActionResult Delete(int id)
    {
        using (ISession session = NHibernateHelper.OpenSession())
        {
            var employee = session.Get<Employee>(id);
            return View(employee);
        }
    }

    [HttpPost]
    public ActionResult Delete(int id, Employee employee)
    {
        try
        {
            using (ISession session = NHibernateHelper.OpenSession())
            {
                using (ITransaction transaction = session.BeginTransaction())
                {
                    session.Delete(employee);
                    transaction.Commit();
                }
            }
            return RedirectToAction("Index");
        }
        catch (Exception exception)
        {
            return View();
        }
    }

You can add the view for Delete by right-clicking the view, as before. When you run it, it works as expected; once you click Delete, it deletes the employee.

That's it. You can see it's very easy – Fluent NHibernate supports POCO mapping without writing any complex XML mappings. You can find the whole source code on GitHub. Hope you like it. Stay tuned for more!
https://dzone.com/articles/crud-operation-aspnet-mvc-and
Hi, I'm a current beginning Java student and I am completely stumped! I have been searching the web for help but I just can't find a solution, nor can I tell if I'm on the right path with my code. For this week's assignment I have to write a Java application that prompts the user for input, and here is the pseudocode we are offered as a potential guide (this shows procedural code, but an object-oriented solution would have been better, since Java is a pure object-oriented language):

    import the classes you need
    main
        declare productNo and quantity variables
        declare and initialize lineAmount and orderAmount variables
        set up a String for your output via the Scanner class (or you may use the
            JTextArea GUI component - this will require additional research beyond
            the textbook!)
        start filling the String (or JTextArea) with the headers for Product,
            Quantity, Line Cost, and Total Cost
        prompt the user for the first productNo
        while the productNo is not the sentinel value of -1
            get the quantity
            if the quantity is -1 then exit
            switch on productNo
                in each case, determine the new lineAmount
            add the lineAmount to the orderAmount
            add the new subtotal/order line information to the output String
                (or JTextArea)
            get the next productNo
        output the total orderAmount

And here is the code I have come up with so far:

    import java.util.Scanner;

    public class Test {
        public static void main(String args[]) {
            int cntr, product, units, cost, totalcost = 0;
            Scanner MK = new Scanner(System.in);
            cntr = 0;
            System.out.println("Enter Product No.(1-5)or -1 to Quit");
            product = MK.nextInt();
            switch (product) {
                case 1:
                    while (cntr < 5) {
                        System.out.println("");
                        System.out.println("Product " + (cntr + 1));
                        System.out.println("Enter Quantity or -1 to Quit");
                        product = MK.nextInt();
                        System.out.println("Enter Cost");
                        cost = MK.nextInt();
                        totalcost = totalcost + cost * product;
                        System.out.println("Current total cost: " + totalcost);
                        cntr++;
                    }
                    System.out.println("Total Cost-->" + totalcost);
            }
        }
    }

I have no clue how to associate a specific cost with a specific product, and looking at her examples, that seems to be what she wants. As of right now, my program asks the user to input the cost of the product because I am clueless on how to write the code I need. It is only supposed to ask for the product number and the number of units being sold, and then output the total after each new product is added to the list.

- Product 1 is $2.98 per unit
- Product 2 is $4.50 per unit
- Product 3 is $9.98 per unit
- Product 4 is $4.49 per unit
- Product 5 is $6.87 per unit

If there is anyone willing to help me out, at least pointing me in the right direction, I would greatly appreciate it.
https://www.daniweb.com/programming/software-development/threads/431967/decision-control-and-loops-w-user-interaction
Forum search results – Type: Posts; User: programming_nerd (post excerpts):

- "I don't understand anything. I need someone to do my homework."
- "This is what I have done so far. My Card class:

      public class Card {
          public int suit;
          public int number;
          private final String[] Digit = {"A", "2", "3", "4", "5", "6", "7", "8", "9", "10", ...

- "A player is dealt an initial hand of two cards. He has the option of drawing cards to bring the total value to 21 or less without exceeding it, so that the dealer will lose by having a lesser hand..."
- "Hi, I am new here. I'm cool and awesome and need helps in Java =[. I just notice i posted on the wrong section -_- ..."
http://www.javaprogrammingforums.com/search.php?s=e5fd384a0d691fa5a307546f59666662&searchid=1514082
Update: All articles are now in the main article namespace. Feel free to nominate your favorite articles and images at VFH and VFP!

The articles are written, the votes are tallied and the first ever Uncyclopedia Poo Lit Surprise Writing Competition is now over! We had over forty participants and nearly fifty (non-disqualified) articles in total, which makes the first PLS a resounding success. Thanks to the judges for plowing through a metric ton of blathering nonsense sparkling prose, and of course to the writers, because without them we could keep our money. The competition was fierce; in all three of the categories one extra vote could potentially have swayed the results. Without further ado, the following are the winners and honorable mentions of the Poo Lit Surprise...

Best Illustrated Article

Winner:
Honorable Mentions:

Best Rewrite

Winner:
- Metrosexual by Spintherism, based on Metrosexual

Honorable Mentions:
- William Wordsworth by Claudius Prime, based on Wordsworth
- Fozzie Bear by Bear, based on Fozzie Bear

Best Article

Winner:
Honorable Mentions:

Our three winners will receive the esteemed title of Poet Lowrate for the Order of Uncyclopedia, as well as an Amazon e-certificate for $10.00. Our runners-up will get warm admiration. Thank you all for participating!

Full list of submissions:
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:Poo_Lit_Surprise?diff=prev&oldid=816127
Capital Budgeting

Capital budgeting is the process of determining and selecting the most profitable long-term (more than one year) projects. A firm's capital budgeting decisions define its strategic plan. Projects may be mutually exclusive or independent. Tools: break-even point, payback period, discounted payback, NPV, and IRR.

Break-even point

The break-even point is the number of production units at which the firm makes zero profit – that is, total revenue equals total cost.

Profit = Total Revenue - Total Cost

Break-even occurs when profit = 0, hence Total Revenue = Fixed Cost + Variable Cost:

(P)(X0) = FC + (VC)(X0)

which gives, provided P > VC:

X0 = FC / (P - VC)

where
X0 = break-even point (units)
P = sale price ($ per unit)
FC = fixed cost ($)
VC = variable cost ($ per unit)

(Slide figure: revenue and total cost lines plotted against units, crossing at the break-even point, with the fixed-cost line below.)

Example: In order to operate a coffee shop, Joe needs $9,000 of initial fixed cost. The variable cost is $4 per cup, and he will sell his coffee for $7 per cup. How many cups must be sold to break even? If he wants a $3,000 profit, how many additional cups does he need to sell?

Solution: Profit = Revenue - Total Cost, and break-even occurs when profit = 0:
0 = ($7)(X0) - [$9,000 + ($4)(X0)], so X0 = 3,000 cups.
For a profit of $3,000:
$3,000 = ($7)(X) - [$9,000 + ($4)(X)], so X = 4,000 cups.
He needs to sell an additional 1,000 cups beyond break-even in order to make a $3,000 profit. (Notice that every unit sold after break-even generates pure profit equal to the unit margin multiplied by the number of units.)

Payback period

The payback period is the number of years required to recover the original cost of the investment; payback occurs when the cumulative net cash flow equals zero. Decision rule: the shorter the payback, the better. To decide whether a project should be accepted, a benchmark payback period is needed.
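Both calculations above – the coffee-shop break-even and the payback rule – can be checked in a few lines of Python (a sketch; the function names and the payback cash flows are our own illustrations):

```python
def break_even_units(fixed_cost, price, variable_cost):
    """Units at which total revenue equals total cost (valid when price > variable cost)."""
    return fixed_cost / (price - variable_cost)

def payback_period(cashflows):
    """Years until cumulative net cash flow reaches zero; cashflows[0] is the outlay.
    Interpolates within the recovery year; returns None if the cost is never recovered."""
    cumulative = cashflows[0]
    for year, cf in enumerate(cashflows[1:], start=1):
        if cumulative + cf >= 0:
            return year - 1 + (-cumulative) / cf
        cumulative += cf
    return None

# Joe's coffee shop: $9,000 fixed cost, $7 price, $4 variable cost
print(break_even_units(9000, 7, 4))         # 3000.0 cups to break even
print(break_even_units(9000 + 3000, 7, 4))  # 4000.0 cups to also earn a $3,000 profit

# Hypothetical project: a 2,000 outlay recovered from yearly cash flows
print(payback_period([-2000, 1000, 800, 600, 200]))  # ~2.33 years
```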
If the project's payback is equal to or less than the benchmark payback, accept the project; if it is more than the benchmark payback, reject the project.

Example: given a benchmark payback period of 4 years – if A and B are independent, accept both of them; if A and B are mutually exclusive, A would be accepted over B.

Discounted payback period

An improvement over the payback period that considers the time value of money. Example (continued): given the firm's cost of capital of 10%.

Net present value (NPV)

Given the flaw of the payback methods of only ranking projects, NPV provides a specific dollar amount for project evaluation. Decision rule: for independent projects, if NPV > 0, accept the project; if NPV < 0, reject the project. For mutually exclusive projects, choose the one with the highest NPV, subject to the condition that the NPV is 0 or greater.

Example (continued): compute NPV_A and NPV_B for the two projects.

Internal rate of return (IRR)

The IRR is the rate of return at which NPV equals zero – the rate that makes the present value of inflows equal to the present value of outflows. Decision rule: a benchmark rate, usually the firm's cost of capital, is required as the minimum rate that the firm will accept for a given project. For independent projects, if IRR > cost of capital, accept the project; if IRR < cost of capital, reject the project. For mutually exclusive projects, the projects are ranked on the basis of their IRRs.
Example (continued): the firm's cost of capital is 10%.

Project A: 2,000 = 1,000/(1+IRR_A) + …/(1+IRR_A)^2 + …/(1+IRR_A)^3 + …/(1+IRR_A)^4. Trial and error gives IRR_A = 14.5%.

Project B: 2,000 = 200/(1+IRR_B) + …/(1+IRR_B)^2 + …/(1+IRR_B)^3 + 1,200/(1+IRR_B)^4. Trial and error gives IRR_B = 11.8%.

If A and B are independent, accept both because their IRRs exceed 10%, the cost of capital. If A and B are mutually exclusive, A would be ranked higher than B, since IRR_A > IRR_B and both exceed the cost of capital.

Comparison of NPV and IRR

Some conflicting decisions may occur between the two methods; the NPV method is the better one.

(Slide figure: NPV profiles plotted against the cost of capital, with crossover rate = 7.2%, IRR_B = 11.8%, IRR_A = 14.5%.)

There are two reasons that the NPV profiles intersect:
1. the projects have different sizes, or
2. the projects have different lives.

For independent projects, the IRR and NPV methods always give the same accept-or-reject decision. For mutually exclusive projects, the IRR and NPV methods sometimes give different decisions.

The NPV method is the better one, since NPV is the method that selects the project that maximizes shareholders' wealth. Mathematically, the NPV method assumes the reinvestment rate of the cash flows is the cost of capital, while the IRR method assumes the reinvestment rate to be the IRR. In conclusion, the NPV method is considered the best method, since it leads to conceptually correct capital budgeting decisions.
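The NPV and IRR calculations can be sketched in a few lines of Python. The cash flows below are made up for illustration – they are not the projects A and B above, whose year-by-year figures appeared only in the slide tables:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the time-0 outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """Rate where NPV crosses zero, found by bisection (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical project: pay 2,000 now, then receive 800 a year for four years
project = [-2000, 800, 800, 800, 800]
print(round(npv(0.10, project), 2))  # 535.89 - positive at a 10% cost of capital, so accept
print(irr(project) > 0.10)           # True - the IRR also clears the 10% hurdle
```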
http://slideplayer.com/slide/3824608/
This short article shows how you can turn an ASCX UserControl into one of these IFRAMEs and use it to show ads (or whatever else suits your fancy).

First, we need to create a UserControl. All we need in the ASCX markup is our IFRAME on the control designer with the runat="server" attribute, and sufficient additional attributes to get the default display characteristics that we want. The "src" attribute is initially set to a simple script-only page that inserts random ads; this is specified by some sample Google AdSense code I have in the appSettings section of the web.config.

Now here is the codebehind, which includes a custom "PlaceAd" method:

    public void PlaceAd(string width, string height, string src, string style, string adCode)
    {
        AdControl1.Attributes["height"] = height;
        AdControl1.Attributes["width"] = width;
        AdControl1.Attributes["style"] = style;
        // use one or the other of these ideas - either pass src, or pass adCode parameter:
        if (src != null)
            AdControl1.Attributes["src"] = src;
        else
            AdControl1.InnerHtml = adCode;
    }

So with our control dragged onto a Page, we can tell it how big to be, and even set style attributes such as "float:left;" and so on, which makes it very flexible. To use the control on a page, all we need to do is drag it on from the Solution Explorer. In my sample ASPX page, I have code in Page_Load that places an ad, and I also have a button with a textbox that lets you see how the src property can be changed programmatically at runtime.

You can download the sample Visual Studio 2005 solution here.
http://eggheadcafe.com/tutorials/aspnet/4c4c5f62-678e-4e05-87fd-4477a0d1987c/using-serverside-iframes.aspx
crawl-002
refinedweb
297
60.95
. said in SPI Pins for the Omega2:
CS1 - GPIO6
SCLK - GPIO7
MOSI - GPIO8
MISO - GPIO9
This worked for me. Well, at least until some point. It allows me to write 16 bytes to the SPI bus, but if I try to send more, it doesn't work anymore. And if I try spi.writeBytes(position, bytes_list) (in Python), it just inserts the byte position at the beginning of bytes_list, and has the same 16-byte limitation. Did any of you experience this also?
spi-gpio-custom is software-based bit-banged SPI. The hardware SPI on GPIOs 6-9 should already be enabled by default. Just do ls -l /dev/spi* and the SPI bus should show up.
@WereCatf Thanks. But how do I do that? I see no equivalent of what I've found in other implementations (opening the communication, sending multiple writes, closing the com). I tried:

pixels = []
for i in range(16):
    pixels.append(0xFF)
spi.write(pixels)
spi.write(pixels)

but it will write those 16 values twice, not append the second call to the end of the first. Also, spi.writeBytes only inserts the value of [address] at the beginning of the list of values, so it does not address the problem. The two following statements are equivalent:

spi.write([0x66, 0x00, 0xFF])
spi.writeBytes(0x66, [0x00, 0xFF])

@Pierre-Rossinès What works for me, even if it only partially works, is the following:

import onionSpi
spi = onionSpi.OnionSpi(1, 32766)
spi.speed = 1000000
spi.mode = 0
spi.lsbfirst = False
spi.checkDevice()
spi.registerDevice()
spi.setupDevice()
spi.write([0] * 16)

I'm not sure I need every single line though.
- Brice Parent @Pierre-Rossinès Also, I don't remember what I've installed, but in my installed packages I have (I only put the ones that seemed relevant to me):

fast-gpio - 0.1-1
gpioctl-sysfs - 0.0.6-1
libonionspi - 0.1-1
pyOnionSpi - 0.1-1
python-base - 2.7.13-3
python-light - 2.7.13-3
spi-tool - 0.1-1
spi-tools - 1-cc6a41fdcec60610703ba6db488c621c64952898

I don't remember having manually installed anything other than pyOnionSpi and maybe python-light (and I'm not even sure for those two...), and I particularly don't remember having installed a specific version of spi-tools like I see here... But anyway, that's what makes it partially work here!
Edit: But it looks like you just reversed the arguments when creating the OnionSpi object (use 1, 32766 instead of 32766, 1).
@James-Behrens I've opened a ticket 12 days ago about that, without the 150uS delay that I didn't know about (I don't have a scope), but so far no answer from the team. I'll link it to your post for more info. Also, if you find a workaround to be able to use more than 16 bytes without having to wait for a software update, like a software SPI or another library, please tell me about it.
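One workaround worth trying while waiting for a driver fix is to split long transfers into pieces the driver will accept. This is only a sketch: it assumes the device tolerates the transfer being broken into separate 16-byte writes, which will not hold if the peripheral needs chip-select asserted across the whole message. The helper takes any write callable (such as onionSpi's spi.write), so it is shown here with a stub:

```python
def write_chunked(spi_write, data, max_len=16):
    """Split a long SPI transfer into chunks of at most max_len bytes,
    working around a driver that rejects writes longer than 16 bytes."""
    for i in range(0, len(data), max_len):
        spi_write(list(data[i:i + max_len]))

# Stub standing in for onionSpi's spi.write, so the helper can be exercised
# without the hardware: it just records each chunk it is handed.
sent = []
write_chunked(sent.append, range(40))
```

With real hardware you would call write_chunked(spi.write, payload) instead of passing the stub.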
http://community.onion.io/topic/1560/spi-pins-for-the-omega2/?page=1
CC-MAIN-2019-35
refinedweb
496
68.06
Hi all, I have just put together some simple C code with the core functions copied from another thread. I cannot get the flash to erase or write. I am thinking the problem is either to do with the paging in this processor (which I don't fully understand), or the timing. Otherwise I have no idea. I have other working flash code in other processors using a far more complicated method (which I posted on here previously), but really want this to work as it's far more simple. Code:

#include <hidef.h>      /* for EnableInterrupts macro */
#include "derivative.h" /* include peripheral declarations */

word wTemp = 0; //General usage word
byte bTemp = 0; //General usage byte

void InitFlashRoutine(void); // match number of bytes above.
void FlashInit(void);
byte FlashErasePage(word page);
byte FlashProgramByte(word address, byte data);

const unsigned char ROM_PGM[] = { /* ... */ };
volatile unsigned char PGM[] = { /* ... */ };

void main(void) {
  PTBDD_PTBDD2 = 1; //Pin used for debug
  PTBD_PTBD2 = 0;   //LED1 on on first power up
  PTBDD_PTBDD3 = 1; //Pin used for debug
  PTBD_PTBD3 = 1;   //LED2 off

  //Setup bus to run at 19.925Mhz
  //ICSC1 remains default (Internal reference with FLL enabled)
  //ICSSC setting, 39.85Mhz
  //DRST = 01 = Mid range
  //DMX32 = 1 = Accurate internal xtal
  //Internal reference selected
  //Output of FLL Selected
  ICSSC = 0b01110000;
  ICSC2 = 0b00000000; //Bus div 1
  //Not sure how this is supposed to be done now as there is no FLL lock...
  //Just created about a short delay to be sure
  // while(!ICGS1_LOCK) __RESET_WATCHDOG(); //Wait for lock before allowing software to startup
  //Loop for wTemp*2^8
  wTemp = 1000;
  bTemp = 0;
  while(--wTemp){
    while(++bTemp) __RESET_WATCHDOG(); //Wait at least some time for the clock to stabilise, really should be 1mS
  }
  PTBD_PTBD2 = 1; //LED off once running

  EnableInterrupts; /* enable interrupts */
  /* include your code here */
  InitFlashRoutine(); //Ensure RAM is loaded correctly
  FlashErasePage(0x8000);
  FlashProgramByte(0x8000, 0xCC);
  if((*((byte *)0x8000)) == 0xCC) PTBD_PTBD3 = 0; //LED on if programming success

  for(;;) {
    __RESET_WATCHDOG(); /* feeds the dog */
  } /* loop forever */
  /* please make sure that you never leave main */
}

void InitFlashRoutine(void) // match number of bytes above.
{
  PGM[0] = ROM_PGM[0];
  PGM[1] = ROM_PGM[1];
  PGM[2] = ROM_PGM[2];
  PGM[3] = ROM_PGM[3];
  PGM[4] = ROM_PGM[4];
  PGM[5] = ROM_PGM[5];
  PGM[6] = ROM_PGM[6];
}

// - - - - - - - - - - - - -
//This must be setup based on the clock settings
#define initFCDIV 0b01001101 //FLASH clock divider (setup for 20Mhz Bus)
// ||||||||
// |||||||+-DIV0 \
// ||||||+--DIV1 |
// |||||+---DIV2 >-- divide by (12+1)
// ||||+----DIV3 | BUSCLK/(8*13)~=(192.3Khz)
// |||+-----DIV4 |
// ||+------DIV5 /
// |+-------PRDIV8 -- divide (prescale) by 8
// +--------DIVLD --- read-only status

void FlashInit(void)
{
  while(FSTAT_FACCERR) FSTAT_FACCERR = 1; //Make sure error is cleared before setting up the divider
  if(!FCDIV_FDIVLD) FCDIV = initFCDIV; //set fFCLK = about 200kHz (if not already setup i.e.
on warm boot) FPROT_FPOPEN = 1; //No flash protection FPROT_FPS = 1; //No flash protection}// - - - - - - - - - - - - -byte FlashErasePage(word page) { asm { TPA ; Get status to A PSHA ; Save current status SEI ; Disable interrupts LDA #0x30 STA FSTAT ; Clear FACCERR & FPVIOL flags LDHX page STA ,X ; Save the data LDA #$40 ; Erase command STA FCMD LDA #FSTAT_FCBEF_MASK LDHX @FSTAT JSR PGM PULA ; Restore previous status TAP } return (FSTAT & 0x30); }byte FlashProgramByte(word address, byte data) { asm{ TPA PSHA ; Save current status SEI ; Disable interrupts LDA #0x30 STA FSTAT ; Clear FACCERR & FPVIOL flags LDHX address LDA data STA ,X ; Save the data LDA #$20 ; Burn command STA FCMD LDA #FSTAT_FCBEF_MASK LDHX @FSTAT JSR PGM PULA ; Restore previous status TAP } return (FSTAT & 0x30); } Message Edited by CarlFST60L on 2009-01-14 02:36 AM There appear a couple of possible issues - It might be possible that a COP reset could occur during the erase process, whilst within the RAM based routine. It is probably a good idea to periodically clear the COP timer within this routine. You may also be correct that there is a paging issue with the QE128. I notice that your test code addresses the paging window at 0x8000. Perhaps carry out your initial tests at some other flash location, not dependent on the PPAGE setting. The following code snippet does implement the clearing of the COP timer. Also shown is an alternative way of copying the RAM based code from flash, handled as a compiled function rather than as a variable. The FLASH_ROUTINE segment also needs to be placed within the PRM file, making use of the RELOCATE_TO command. 
extern char __SEG_START_FLASH_ROUTINE[]; extern char __SEG_SIZE_FLASH_ROUTINE[]; /* Private function prototype */ #pragma CODE_SEG FLASH_ROUTINE void Flash_Cmd( void); #pragma CODE_SEG DEFAULT /*****************************************************************************/ #pragma CODE_SEG FLASH_ROUTINE /* This function is loaded to RAM, and executed from there. The size of the function is 10 bytes. On entry, ACC must contain FCBF mask value (0x80), and H:X the address of FSTAT register */ void Flash_Cmd( void) { __asm { STA ,X ; wp Commence flash command LDA #$70 ; (2) FCCF|FACCERR|FPVIOL mask L1: STA SRS ; (3) Reset COP required for page erase BIT ,X ; rfp Test command status BEQ L1 ; Loop while incomplete & no error } } #pragma CODE_SEG DEFAULT /*****************************************************************************/ /* Copies Flash_Cmd() function into RAM, with steps documented in Tech Note 228 Start_data refers to the begining of the flash block to be copied. */ #define Start_data __SEG_START_FLASH_ROUTINE #define Size_data __SEG_SIZE_FLASH_ROUTINE void CopyInRAM(void) { char *srcPtr, *dstPtr; int count; srcPtr = (char *)Start_data; dstPtr = (char *)(void *)&Flash_Cmd; for (count = 0; count < (int)Size_data; count++) { *dstPtr = *srcPtr; dstPtr++; srcPtr++; } } Regards, Mac I dont think COP is an issue as the LED flash's on boot, which only happens once. RE Page issue, I have tried address 0x4000, 0x8000 and 0xC000. Also, I can read the address using the method in the code, and, if i step through the code, it appears to read the correct address's. I have quite a few products in production which use a similar method to which you posted. When I copied that code over to the QE128, nothing worked as expected. So, I started searching and came across another thread which had some far more effectient and simple code so decided to use that, but had the same result, still no flash erase/write. I tried various ideas, but still no luck on this QE128. 
I assume it's something to do with this QE128, as similar routines work fine in the SH8, QD2, GT32, AW32 etc. The only other thing I changed was:
Used: while(!ICGS1_LOCK) __RESET_WATCHDOG(); //GT32 Wait for lock before allowing software to startup
instead of the count delay of 1 second or so. I also changed the clock speed to 4.32MHz and initFCDIV to match. So, I guess the problem must be either that the bus speed running at 19.925MHz is a problem, or the FLL isn't getting lock quickly enough (I tried waiting 10 seconds to be sure, and it still didn't work). I have tested the 19.925MHz bus speed and it appears to be exactly that, 19.925MHz, so I'm lost. So, I think that rules out the bus rate being a problem. The flash should all be unprotected if my understanding is correct, which I believe it is in my sample code. I have tested this code on a GB60 and it works fine! But still nothing on the QE128. The only difference was the clock speed for the GB60 and FCDIV (slowed to 4.32MHz as I had that from some other software). Everything else was identical. I have also tried 2 x QE128's; one had nothing but 3.3V, gnd, and the BDM pins connected. Is there an issue with paging? From what I understand, paging has nothing to do with this, but I cannot see any other reason this wouldn't work (other than the bus speed being 19.925MHz, which shouldn't be a problem). Also, I have done this with no BDM connected, and tried 0x4000, 0x8000 and 0xC000.
Message Edited by CarlFST60L on 2009-01-16 05:29 AM
Has anyone got ANY code that works in the QE128 they can share? Or can anyone tell me if my settings for FCDIV, FBUS and paging are OK, as surely this is the problem. The mask revision is 1MJ11; CodeWarrior is the latest downloaded version (6.2).
Message Edited by CarlFST60L on 2009-01-23 03:31 AM
Did you get any sample code to resolve your problem? I am running into the same problem.
Carl, you init the FPROT register this way:
FPROT_FPOPEN = 1; //No flash protection
FPROT_FPS = 1; //No flash protection
This way you are write-protecting 126kB of flash. Is that what you want? The flash routines from your first thread message (PGM and the two Flashxxx routines) are working OK for me on the QE128.
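The FCDIV value the thread keeps coming back to can be sanity-checked numerically. The sketch below assumes the relation described in the listing's comments, FCLK = BUSCLK / (8 * (DIV + 1)) when the PRDIV8 prescaler bit is set, and the usual S08 requirement that the flash clock land between roughly 150 kHz and 200 kHz; verify both against the device's reference manual before relying on it:

```python
def flash_divider(bus_hz, max_hz=200_000, min_hz=150_000):
    """Pick the smallest DIV that keeps the flash clock inside the legal
    window, assuming FCLK = bus / (8 * (DIV + 1)) with PRDIV8 set."""
    # Smallest divider with FCLK <= max_hz: ceiling division, then minus 1.
    div = -(-bus_hz // (8 * max_hz)) - 1
    fclk = bus_hz / (8 * (div + 1))
    if not (min_hz <= fclk <= max_hz):
        raise ValueError("no valid divider for this bus clock")
    return div, fclk

div, fclk = flash_divider(20_000_000)  # the ~20 MHz bus from the post
```

For a 20 MHz bus this reproduces the listing's DIV = 12, giving a flash clock of about 192.3 kHz, which matches the "(12+1)" comment in the FCDIV definition.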
https://community.nxp.com/thread/49524
CC-MAIN-2019-26
refinedweb
1,481
59.13
Getting Spark, Python, and Jupyter Notebook running on Amazon EC2 Step-by-step guide to getting PySpark working with Jupyter Notebook on an instance of Amazon EC2. Hi! I’m Jose Portilla and I teach over 200,000 students about programming, data science, and machine learning on Udemy! You can check out all my courses here. If you’re interested in learning Python for Data Science and Machine learning, check out my course here. (I also teach Full Stack Web Development with Django!) This is mainly a guide for myself to reference again at a later date, but I figured it would be useful to others. There are a few guides similar to this one floating around, but a lot of them have not been updated in a while. This guide assumes some basic familiarity with the command line and AWS console. Step 1: Create an Amazon EC2 Instance For simplicity we will use the Free Tier Micro Instance using Ubuntu 64x. Use all the default settings, except set All Traffic instead of SSH when editing the Security of the instance. This guide can be expanded to multiple instances or larger instances. Step 2: SSH into that instance (Windows) For Windows users, you’ll need to use PuTTY . Amazon has a really good set of instructions located here. Follow those to the point where you have the connection to the Ubuntu console. Step 3: SSH into that instance (Linux/Mac OS) An SSH client is already built into Mac and Linux (usually). You can just do a straightforward ssh command into your EC2 instance using your .pem file that you downloaded. Amazon also has instructions on this here. You should now have successfully connected to the command line of your virtual Ubuntu instance running on EC2. The rest of the guide will tell you commands to put into this terminal. Step 4: Download and Install Anaconda Next we will download and install Anaconda for our Python. You can replace the version numbers here with whatever version you prefer (2 or 3). 
$ wget
$ bash Anaconda3-4.1.1-Linux-x86_64.sh
Press Enter through the license agreements, then enter yes to accept, then Enter again to accept the default location.
Step 5: Check which Python you are using.
Ubuntu already comes with Python, but let's make sure we are using the Anaconda version. Use:
$ which python
Most likely you won't be using Anaconda's version of Python (you can tell by checking the output of the which python command). Change to the Anaconda version of Python by specifying your source using:
$ source .bashrc
Then confirm with:
$ which python
Or you can just type python and check to see.
Step 6: Configure Jupyter Notebook
Jupyter comes with Anaconda, but we will need to configure it in order to use it through EC2 and connect with SSH. Go ahead and generate a configuration file for Jupyter using:
$ jupyter notebook --generate-config
Step 7: Create Certifications
We can also create certifications for our connections in the form of .pem files. Perform the following:
$ mkdir certs
$ cd certs
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
You'll get asked some general questions after running that last line. Just fill them out with some general information.
Step 8: Edit Configuration File
Next we need to finish editing the Jupyter configuration file we created earlier. Change directory to:
$ cd ~/.jupyter/
Then we will use the visual editor (vi) to edit the file. Type:
$ vi jupyter_notebook_config.py
You should see a bunch of commented Python code; this is where you can either uncomment lines or add in your own (things such as adding password protection are an option here). We will keep things simple. Press i on your keyboard to activate -INSERT-.
Then at the top of the file type:
c = get_config()
# Notebook config
# this is where you saved your pem cert
c.NotebookApp.certfile = u'/home/ubuntu/certs/mycert.pem'
# Run on all IP addresses of your instance
c.NotebookApp.ip = '*'
# Don't open browser by default
c.NotebookApp.open_browser = False
# Fix port to 8888
c.NotebookApp.port = 8888
Once you've typed/pasted this code in your config file, press Esc to stop inserting. Then type a colon : and then type wq to write and quit the editor.
Step 9: Check that Jupyter Notebook is working
Now you can check to see that Jupyter Notebook is working. In your Ubuntu console type:
$ jupyter notebook
You'll see an output saying that a Jupyter Notebook is running at all IP addresses at port 8888. Go to your own web browser (Google Chrome suggested) and type in the Public DNS for your Amazon EC2 instance followed by :8888. It should be in the form:
After putting that into your browser you'll probably get a warning of an untrusted certificate; go ahead and click through that and connect anyway, you trust the site. (Hopefully, after all, you are the one that made it!) You should be able to see Jupyter Notebook running on your EC2 instance. Great! Now we need to go back and install Scala, Java, Hadoop, and Spark on that same instance to get PySpark working correctly. Use Ctrl-C in your EC2 Ubuntu console to kill the Jupyter Notebook process. Clear the console with clear and move on to the next steps to install Spark.
Back at your EC2 command line type:
$ sudo apt-get update
Then install Java with:
$ sudo apt-get install default-jre
Check that it worked with:
$ java -version
Step 11: Install Scala
Now we can install Scala:
$ sudo apt-get install scala
Check that it worked with:
$ scala -version
(Optional: You can install specific versions of Scala with the following, just replace the version numbers):
$ wget
$ sudo dpkg -i scala-2.11.8.deb
Step 12: Install py4j
We need to install the Python library py4j. In order to do this, we need to make sure that pip install is connected to our Anaconda installation of Python instead of Ubuntu's default. In the console we will export the path for pip:
$ export PATH=$PATH:$HOME/anaconda3/bin
Then use conda to install pip:
$ conda install pip
Confirm that the correct pip is being used with:
$ which pip
Now we can install py4j with pip:
$ pip install py4j
Step 13: Install Spark and Hadoop
Use the following to download and install Spark and Hadoop:
$ wget
$ sudo tar -zxvf spark-2.0.0-bin-hadoop2.7.tgz
Step 14: Tell Python where to find Spark
Finally we need to set our paths for Spark so Python can find it:
$ export SPARK_HOME='/home/ubuntu/spark-2.0.0-bin-hadoop2.7'
$ export PATH=$SPARK_HOME:$PATH
$ export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
Step 15: Launch Jupyter Notebook
You should now have everything set up to launch Jupyter Notebook with Spark! Run:
$ jupyter notebook
Then as previously done, connect through your browser again to your instance's Jupyter Notebook. Launch a new notebook and in a notebook cell type:
from pyspark import SparkContext
sc = SparkContext()
If that works, you're all done! Hopefully you found this useful! (Check out the various comments on this article for other useful tips by readers!) Thanks for reading.
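The shell exports in step 14 can also be done from inside Python itself, which is handy in scripts that outlive the shell session. This is a sketch under the guide's assumptions: the Spark path below is the one used in the article, and depending on the Spark build you may additionally need the py4j zip from $SPARK_HOME/python/lib on the path:

```python
import os
import sys

def add_spark_to_path(spark_home):
    """Mimic step 14's shell exports from within Python: set SPARK_HOME
    and put Spark's python/ directory on sys.path so that a subsequent
    `import pyspark` can resolve."""
    os.environ["SPARK_HOME"] = spark_home
    py_dir = os.path.join(spark_home, "python")
    if py_dir not in sys.path:
        sys.path.insert(0, py_dir)
    return py_dir

py_dir = add_spark_to_path("/home/ubuntu/spark-2.0.0-bin-hadoop2.7")
```

After this runs, `from pyspark import SparkContext` should behave the same as it does with the exported environment variables, provided Spark is actually unpacked at that location.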
https://medium.com/@josemarcialportilla/getting-spark-python-and-jupyter-notebook-running-on-amazon-ec2-dec599e1c297
CC-MAIN-2017-47
refinedweb
1,228
71.24
13 March 2012 09:17 [Source: ICIS news] SEOUL (ICIS)--"Everything is going very smoothly and we expect to begin pre-marketing in 2013," the source said. The project was announced in August 2011 and is expected to cost won (W) 1,000bn ($889m). The company will be using the Honeywell UOP Parex process for the plant, with the feedstocks – isomer-grade xylenes (IX), toluene and C9 aromatics – to be equally supplied by SKGC and JX Nippon Oil, the source said. Both companies will have equal PX offtakes after commercial production starts, he added. SKGC also has a stake in the Singapore-based Jurong Aromatics Corp (JAC) aromatics project and is expecting an offtake of around 200,000 tonnes/year of PX after the facility starts up in late 2014. ($1 = W1,124)
http://www.icis.com/Articles/2012/03/13/9540911/s-koreas-skgc-aims-to-start-up-ulsan-px-project-by-q2-2014.html
CC-MAIN-2015-18
refinedweb
135
67.28
15.5  Ten Carat        12:30  Sedg  Yorkshire Evening Post  Phil Rostron  -12.64
5.5   Chip N Pin       1:35   Sedg  News of the World       Pegasus       +2.10
5.5   Chip N Pin       1:35   Sedg  Newcastle Journal       Doug Moscrop  -6.00
4     Serabad          1:35   Sedg  Irish Post              David Thorpe  +0.50
3.85  Malt De Vergy    2:10   Sedg  The Mail On Sunday      John Martin   -8.00
6.2   Blast The Past   2:45   Sedg  Evening Press, York     Ebor          -6.20
19    Middleway        3:20   Sedg  Liverpool Daily Post    Chris Wright  +14.25
7.6   Looking Forward  3:20   Sedg  The People              Larry Lynx    +7.42

Hi, I'm trying to read in a table in the form above and store the entries in arrays (they are horse racing guides). However, as you can see, some of the names are two or three words and some of them are only one. I would like each name to be stored as one entry. Any ideas? Below is the code I'm using...

#include <iostream>
#include <fstream>
#include <string>
#include <stdlib.h>
#include <math.h>
#include <iomanip>

using namespace std;

int main()
{
    ifstream input;
    input.open("inputFile.tex");
    int i = 0;
    string c;
    string store[100];
    while (!input.eof()) {
        input >> c;
        store[i] = c;
        cout << store[i] << endl;
        i = i + 1;
    }
}
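One way around the multi-word names is to stop reading token by token and instead anchor on the fields that have a fixed shape (the leading odds, the race time, and the trailing profit figure), letting the variable-length name fields soak up whatever lies between them. The thread is C++, but the idea is easiest to show as a regex sketch in Python; the same pattern carries over to C++11's std::regex:

```python
import re

# Anchor on fixed-shape tokens; (.+?) groups absorb multi-word names.
LINE = re.compile(
    r"^\s*([\d.]+)\s+(.+?)\s+(\d{1,2}:\d{2})\s+(\S+)\s+(.+?)\s+([+-][\d.]+)\s*$"
)

def parse_tip(line):
    match = LINE.match(line)
    if match is None:
        return None
    odds, horse, time_, course, source, profit = match.groups()
    # "source" still mixes the paper and the tipster; splitting those two
    # apart would need a known list of paper names, which the file lacks.
    return {"odds": float(odds), "horse": horse, "time": time_,
            "course": course, "source": source, "profit": float(profit)}

tip = parse_tip("15.5 Ten Carat 12:30 Sedg Yorkshire Evening Post Phil Rostron -12.64")
```

In the original C++ this would mean reading whole lines with getline and applying the same pattern, rather than relying on operator>> splitting on whitespace.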
https://www.daniweb.com/programming/software-development/threads/98811/inputing-a-text-file-problem-with-ifstream
CC-MAIN-2017-51
refinedweb
219
85.89
#include <I2C.h>
#define HMC5883L 0x1E

int x = 0;
int y = 0;
int z = 0;

void setup()
{
  I2c.begin();
  I2c.write(HMC5883L, 0x02, 0x00); //configure device for continuous mode
}

void loop()
{
  I2c.read(HMC5883L, 0x03, 6); //read 6 bytes (x,y,z) from the device
  x = I2c.receive() << 8;
  x |= I2c.receive();
  y = I2c.receive() << 8;
  y |= I2c.receive();
  z = I2c.receive() << 8;
  z |= I2c.receive();
}

Did you restart the IDE after installing the library? Did you install the library? And, you've been told that it is I2C, not I2c. Don't come back until you fix that.
I like your scan utility and would like to incorporate it into checking for the device before reading it, but I'm not sure how to convert it over to the I2C.h library functions for the following:

void I2C::scan()
{
  uint16_t tempTime = timeOutDelay;
  timeOut(80);
  uint8_t totalDevicesFound = 0;
  Serial.println("Scanning for devices...please wait");
  Serial.println();
  for (uint8_t s = 0; s <= 0x7F; s++)
  {
    returnStatus = 0;
    returnStatus = start();
    if (!returnStatus)
    {
      returnStatus = sendAddress(SLA_W(s));
    }
    if (returnStatus)
    {
      if (returnStatus == 1)
      {
        Serial.println("There is a problem with the bus, could not complete scan");
        timeOutDelay = tempTime;
        return;
      }
    }
    else
    {
      Serial.print("Found device at address - ");
      Serial.print(" 0x");
      Serial.println(s, HEX);
      totalDevicesFound++;
    }
    stop();
  }
  if (!totalDevicesFound) { Serial.println("No devices found"); }
  timeOutDelay = tempTime;
}
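The `x = I2c.receive() << 8; x |= I2c.receive();` pattern works on the Arduino because its `int` is 16 bits on 8-bit AVR boards, so the sign bit falls into place automatically. When processing the same two bytes off-board (for example in a Python script reading the serial output), the two's complement step has to be done by hand, as this sketch shows:

```python
def to_signed16(high, low):
    """Combine two I2C bytes (high byte first, as the HMC5883L sends them)
    into one signed 16-bit reading. Python ints are unbounded, so the
    two's complement wrap must be applied explicitly."""
    value = ((high & 0xFF) << 8) | (low & 0xFF)
    return value - 0x10000 if value & 0x8000 else value

axis = to_signed16(0xFF, 0xFE)  # a small negative field reading
```

Without this correction, raw values above 0x7FFF would show up as large positive magnetometer readings instead of negative ones.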
http://forum.arduino.cc/index.php?topic=144533.15
CC-MAIN-2016-36
refinedweb
253
54.79
A:

<html>
  <body>
    <p>EUR/USD: <%= rates.get() %></p>
  </body>
</html>

:

<?xml version="1.1"?>
<page>
  <rate>1.1324</rate>
</page>

This is what the model will produce, having no knowledge of the view. Then, we create the view as an XSL stylesheet, which will transform XML into HTML:

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
  <xsl:template match="/page">
    <html>
      <body>
        <p>
          <xsl:text>EUR/USD: </xsl:text>
          <xsl:value-of select="rate"/>
        </p>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>

As you see, the view doesn't know anything about the model in terms of implementation. All it knows is the format of the XML data output produced by the model. Here is how you design it in the Takes framework. Let's start with a simple example:

import org.takes.http.Exit;
import org.takes.http.FtCli;

public final class Entry {
  public static void main(final String... args) throws Exception {
    new FtCli(new TkApp(), args).start(Exit.NEVER);
  }
}

It's a simple web application that starts a web server and never ends (it waits for connections in daemon mode). To make it work, we should create a simple "take" named TkApp:

import org.takes.Take;
import org.takes.tk.TkWrap;

final class TkApp extends TkWrap {
  @Override
  public Response act() {
    return new RsWithType(
      new RsText(
        "<page><rate>1.1324</rate></page>"
      ),
      "application/xml"
    );
  }
}

This "take" always returns the same XML response, but it doesn't do any XSL transformation yet. We need to add the RsXSLT class to the picture:

@Override
public Response act() {
  return new RsXSLT(
    new RsWithType(
      new RsText(
        "<?xml-stylesheet type='text/xsl' href='/xsl/index.xsl'?>"
        + "<page><rate>1.1324</rate></page>"
      ),
      "application/xml"
    )
  );
}

:

@Override
public Response act() {
  return new RsXSLT(
    new RsWithType(
      new RsXembly(
        new XeChain(
          new XeStylesheet("/xsl/index.xsl"),
          new XeAppend(
            "page",
            new XeDirectives(
              new Directives().add("rate").set("1.1324")
            )
          )
        )
      ),
      "application/xml"
    )
  );
}
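The key design point here, that the model emits view-agnostic XML and only the stylesheet knows about HTML, is not specific to Java. As a sketch of the model half in Python (an analogue only; the article's actual code is the Takes framework shown above), the same `<page><rate>...</rate></page>` document can be built with the standard library:

```python
import xml.etree.ElementTree as ET

def rates_page(rate):
    """The model's only job: emit <page><rate>...</rate></page>.
    It knows nothing about how the XSL view will render this."""
    page = ET.Element("page")
    ET.SubElement(page, "rate").text = rate
    return ET.tostring(page, encoding="unicode")

xml = rates_page("1.1324")
```

Any number of views (the XSL stylesheet above, a JSON converter, a test harness) can consume this output without the model changing at all, which is exactly the separation the article argues for.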
http://www.yegor256.com/2015/06/25/xml-data-xsl-views-takes-framework.html
CC-MAIN-2017-51
refinedweb
301
67.35
Styling Cells 00:00 Styling might not be the most important thing that you do to a spreadsheet, but it can definitely help to know how to in case a need pops up. openpyxl has a ton of styling options available to perform everything that Excel can. 00:12 We’ll just cover a few topics here to get the basics down. The documentation has much more info if you’re interested in specific things that you can do. There are two main ways to apply styles to cells. 00:24 You can either apply a style directly to a cell or create a style template that can then be applied to multiple cells. Let’s open up an interpreter session and test this out on the Amazon data spreadsheet. So I’m going to open up bpython, 00:40 get the regular import, load the workbook, 00:46 and then grab that active sheet. Before you can start playing around with the styles you need to import a couple of style classes, so from openpyxl.styles import Font, Color, Alignment, Side, and Border. So at this point, you can go ahead and create some styles. 01:05 So something like bold_font is going to equal Font() and then pass in bold=True. And if you wanted to have big red text, you could say Font(color=colors.RED)—which should actually be in all-caps—and then size=20. 01:23 And you’ll see that 'colors' is not defined, because it wasn’t imported. So, from openpyxl.styles import colors. 01:32 Rerun that line, and there you go. If you want to have center-aligned text, you can say Alignment(horizontal="center"). And if you wanted to have a double-border side, you could say Side() and then pass in border_style and then set this equal to "double". And then for a square border, you can say Border() and then pass in top as the double_border_side. 02:01 The right will also be the double_border_side, the bottom— double_border_side, and then the left will also be double_border_side. Okay. 02:12 Just going to clear that to clean this up. So now, take your sheet["A2"] and set the .font = bold_font. 
02:22 Go ahead and grab "A3", and now set the .font to big_red_text, grab "A4" change the .alignment to center_aligned_text, and then with "A5" set the .border property to equal the square_border that you made. To see how these worked, take the workbook and save it. 02:45 Set the filename to something like "sample_styles.xlsx". 02:51 All right, it popped up in the directory. Let’s go take a look at it. So going over it, A2 has bold text, A3 has this large red text, A4 has this center alignment, and then A5 has this square border around it. 03:07 So, that’s pretty cool! If you go back, the other way to apply styles to cells is to use NamedStyles, and these will produce, basically, a template. 03:16 So, from openpyxl.styles import NamedStyle. 03:21 Okay. Set header equal to a new NamedStyle and the name here, just call it "header". And now, instead of assigning a style directly to the cell, you’ll assign them to this. 03:34 So, header.font and set this equal to something like Font() and set bold=True. And then header.border, you can say Border() and just do this on the bottom. Set Side(), border_style and passing something like "thin". And with the .alignment, go ahead and say Alignment(horizontal="center") and add some vertical centering, too. So, because this is headers, you want to apply this to all the cells on the first row. You can say the header_row = sheet[1]. 04:11 You’ll remember that this will return the first row of the sheet, so you can say that for cell in header_row: cell.style = header. 04:20 Okay. To make sure this worked, go ahead and say workbook.save() and set the filename equal to—I’m going to call it the same thing, "sample_styles.xlsx". And no errors, so let’s go take a look! All right. 04:34 Now you can see that the header row has the new style applied. 04:42 They’re bold. They’re centered. You can make the row a little bigger—they’re centered vertically, as well, and horizontally. 
And they have one thin border running on the bottom of the cells. 04:53 A cool thing to note is that these NamedStyles actually show up in Excel. If you ever looked in the Styles section in the toolbar, you can drop this down and you’ll see these default styles for, like, Bad cells, Good cells, different colors, things like that. 05:07 And you’ll notice that there’s a Custom section and there’s header! So, that’s a cool thing to keep in mind when you define these NamedStyles, is that you may not actually apply them to any cells with your script, but they are available for a user to use in the workbook depending on what they need to do. All right, so that’s all there is to know about styles. 05:28 Like I said, we just covered a couple basic things that you can do. If you need to look at anything specific, I highly recommend taking a look at the openpyxl documentation. 05:38 In the next video, we’re going to take a look at a really cool feature of Excel and you’re going to learn how to apply conditional formatting to cells using Python. Some changes happening with colors module, this was what I ended up doing. red_hex = '00FF0000' big_red_text = Font(color=red_hex, size=20) Become a Member to join the conversation. Jon Nyquist on Sept. 7, 2020 Minor point: The openpyxl source code documentation says there are getting rid of some of the predefined colors, including RED openpyxl.readthedocs.io/en/stable/_modules/openpyxl/styles/colors.html I had to add this code to get it to work:
https://realpython.com/lessons/styling-cells/
CC-MAIN-2020-40
refinedweb
1,022
83.15
I have a dnx console application on an Ubuntu VM that watches a folder which is shared with the host OS (Windows 8.1). When file changes occur in the shared folder on the Ubuntu VM, the console application responds to them, but not when file changes are made on the host. How come, and is there any way to make it work?

using System;
using System.IO;

public class Program
{
    public static void Main(string[] args)
    {
        FileSystemWatcher watcher = new FileSystemWatcher();
        watcher.Path = "/media/sf_shared";
        watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName;
        watcher.Changed += new FileSystemEventHandler(Respond);
        watcher.Created += new FileSystemEventHandler(Respond);
        watcher.Deleted += new FileSystemEventHandler(Respond);
        watcher.EnableRaisingEvents = true;
        Console.ReadKey();
    }

    private static void Respond(object source, FileSystemEventArgs e)
    {
        Console.WriteLine("Hej");
    }
}
Keep your watcher to one dir, no subdirs, and preferably filter it to a very small file subset (one?), and if possible have only one file in that dir. This was documented at one time in the Mono docs, but I can no longer find it there (it is still in the source code ;-) :

Online Doc:

Old doc notes (might be in the man page for mono?):

"Mono's implementation of the FileSystemWatcher has multiple backends. This is necessary because not all operating systems supported by Mono have all the features necessary to provide the functionality expected by applications. If the operating system kernel supports watching directories (inotify on Linux, KEvents on BSD or OSX), that feature is used; otherwise it falls back to using the Gamin or FAM libraries (these libraries provide an API to monitor directories), and if none of those features are available, Mono will poll the watched directories every 750 milliseconds. You can force the polling behavior (instead of using the kernel support) by setting the MONO_MANAGED_WATCHER environment variable before executing your application. This might be useful for filesystems that do not support inotify and still require polling to detect changes."
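For readers curious what such a polling fallback boils down to, here is a minimal sketch of the idea in Python. The function names and structure are illustrative only, not Mono's actual implementation:

```python
import os


def snapshot(path):
    """Map each file name directly inside `path` to its last-modified time."""
    return {name: os.stat(os.path.join(path, name)).st_mtime
            for name in os.listdir(path)}


def poll_changes(before, after):
    """Diff two snapshots: names created, deleted, or modified in between."""
    created = set(after) - set(before)
    deleted = set(before) - set(after)
    changed = {name for name in set(before) & set(after)
               if before[name] != after[name]}
    return created, deleted, changed
```

A real watcher would call snapshot() in a loop with a 0.75 s sleep and diff consecutive snapshots, which is exactly the cost the answer warns about: every tick re-stats the whole directory.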
http://www.dlxedu.com/askdetail/3/f620f119946d8b0f5d5cab03e262043f.html
CC-MAIN-2018-39
refinedweb
493
54.52
How to use recursion in Python

What does recursion look like? Now that we know recursion is characterized by the use of a function that calls itself, let's have a look using the textbook example of the Fibonacci sequence. For the uninitiated, the Fibonacci sequence goes something like this: 1, 1, 2, 3, 5, 8, 13, 21, 34. Each subsequent number is obtained by summing the previous two numbers. A recursive algorithm to generate the nth number in the sequence would look something like this:

def fibonacci(n):
    if n == 0:
        return 1
    elif n == 1:
        return 1
    return fibonacci(n-1) + fibonacci(n-2)

To wrap your head around this, let's use the call fibonacci(4) as an example. When you call the above function with a value of 4, the function will hit:

return fibonacci(3) + fibonacci(2)

We don't know the solution to either of these, so we call fibonacci(3) first:

return fibonacci(2) + fibonacci(1)

Again, we don't know the solution to fibonacci(2), so we call that, which hits:

return fibonacci(1) + fibonacci(0)

Great! This is a problem small enough for us to solve, so both of those calls to our function return 1. Then, we end up back at:

return fibonacci(2) + fibonacci(1)

Except this time, fibonacci(2) has a value of 2, so fibonacci(1) is called. This quickly returns 1, so now we have completely resolved fibonacci(3) from the first iteration. It evaluates to 3! Now we just have to deal with fibonacci(2). This hits:

return fibonacci(1) + fibonacci(0)

which returns a value of 2. Our function terminates with a final evaluation of 3 + 2. We return the value 5. Now wasn't that easy? No? That's fine, it still melts my brain. Here's a summary of the calls, indented by the level at which they actually occur:

fibonacci(4)
    fibonacci(3)
        fibonacci(2)
            fibonacci(1) = 1
            fibonacci(0) = 1
        fibonacci(1) = 1
    fibonacci(2)
        fibonacci(1) = 1
        fibonacci(0) = 1

Fibonacci is a decent way to demonstrate recursion; unfortunately, it is purely academic.
It has no real-world value since an iterative algorithm is much easier to read and runs faster. Let's move on to some more practical examples.

Testing a word to see if it is a palindrome

A palindrome is a phrase that is the same written forwards and backwards. "Madam, I'm Adam" and "rotator" are both palindromes. Now how would you write an algorithm to test for palindromes using a for or while loop? It's doable, but not so simple. It turns out this is a perfect application for breaking the problem into smaller problems we can solve easily and using, you guessed it, recursion!

# assume we have made sure w is a lowercased phrase stripped of spaces and punctuation
def isPalindrome(w):
    if len(w) == 1:
        return True
    elif len(w) == 2:
        if w[0] == w[1]:
            return True
        else:
            return False
    else:
        if w[0] == w[-1]:
            return isPalindrome(w[1:-1])
        else:
            return False

I'll spare you the stack trace. You'll see our two small problems we know how to solve (our base cases). If a word is 1 character, it is necessarily a palindrome. If it is 2 of the same characters, it is a palindrome as well. If it is longer than that, we don't know. We check the first and last letters to make sure they're the same, then recursively call our function with w[1:-1], which is the same phrase without the outer two letters. Here's an output of log statements from calling isPalindrome('madamimadam'):

calling isPalindrome on madamimadam
comparing m to m
calling isPalindrome on adamimada
comparing a to a
calling isPalindrome on damimad
comparing d to d
calling isPalindrome on amima
comparing a to a
calling isPalindrome on mim
comparing m to m
calling isPalindrome on i
It is a palindrome!

See how recursion allows us to solve the smallest case of a problem, then apply it on a much wider scale by breaking our large problem into smaller pieces?

Traversing files

Is the palindrome example still too contrived for you? Well how about this one?
Let's say you are writing a Python script to find all of the image files in a given directory and its subdirectories and do something with them, say append something to their names. If you did not do this recursively, you're looking at a huge headache. Who knows how deep the directory structure goes? Yes, Python has a built-in function os.walk that does this for you, but it's possible you'll need to be more flexible and implement this yourself. Here goes!

import os

def appendToImages(dir, image_file_extensions=['jpg', 'jpeg', 'gif', 'png']):
    filenames = os.listdir(dir)
    for filename in filenames:
        filepath = os.path.join(dir, filename)
        if os.path.isfile(filepath):
            # ignore non image files
            filename, file_extension = os.path.splitext(filepath)
            if not file_extension[1:] in image_file_extensions:
                continue
            os.rename(filename + file_extension,
                      "{0}-IMAGE{1}".format(filename, file_extension))
            print("renamed {0}{1} to {0}-IMAGE{1}".format(filename, file_extension))
        elif os.path.isdir(filepath):
            # recurse, passing the same list of extensions down
            appendToImages(filepath, image_file_extensions)

To quickly summarize, we take a directory, list its entries, then iterate over them. If an entry is in fact a file, we decide if it is an image and work on it. If it is a directory, we call appendToImages on that directory.

Tips and tricks

The first thing about recursion that trips up new programmers is termination. Your function must have a termination condition that is reached 100% of the times it is called. If not, you will have an infinite loop that sucks up memory until the stack overflows or the program crashes. In the above scripts, we either act upon a number that decreases each time the function is called, or we can assume the list of files is finite.

When should you use recursion? This is a tough question, since any recursive algorithm could be written iteratively.
My recommendation is to become comfortable with recursion, learn about recursive sorting and searching algorithms, then use it when it feels like you are solving the same problem over and over with a smaller set of data each time. There are performance concerns as well. Many recursive algorithms are slower than their iterative counterparts, but that is a guide for another day.
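To make the performance point concrete, here is an iterative version of the tutorial's fibonacci() with the same indexing and the same results as the recursive walkthrough above; it runs in linear rather than exponential time:

```python
def fibonacci_iterative(n):
    """Iterative equivalent of the recursive fibonacci(): O(n) time, O(1) space."""
    a, b = 1, 1  # fibonacci(0) and fibonacci(1), matching the recursive base cases
    for _ in range(n):
        a, b = b, a + b
    return a
```

As in the recursive trace, fibonacci_iterative(4) returns 5, but each value is computed exactly once instead of being re-derived on every branch of the call tree.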
https://howchoo.com/g/mzy2y2zmymq/how-to-use-recursion-in-python
CC-MAIN-2019-35
refinedweb
1,073
63.39
make is a utility for building applications. This tutorial will teach you how to use this utility with Makefiles. This tutorial is primarily focused on GNU make.

Hand compilation

Before we start talking about "make", let's first describe the build process used long ago: "hand compilation", aka "manual build". That process isn't too bad if you only have one or two source files. A simple compilation would be typing the following command by hand:

g++ -o file file.cpp

For beginners, this might take a while to get used to. As an alternative, using make the command would be:

make file

Both commands do the same thing - take the file.cpp source file, compile it and then link it into an executable. The problem comes in when the user has more than one source file, or depends on libraries. Building an SDL application might consist of something like this:

g++ `sdl-config --cflags` -o file file.cpp

This is getting harder to remember, isn't it? Let's add some optimisation:

g++ -O3 -fomit-frame-pointer `sdl-config --cflags` -o file file.cpp

Do you really want to type that whole command line every time you make the tiniest edit to your source files? This is where make comes in - it will save you time!

Basics

A makefile works as a simple dependency tree - make recompiles only the stuff that is outdated and then links your software together. You will have to specify the compilation options, but they will not be as tough on you anymore. A simple makefile for compiling your application:

Makefile

default: myapp

myapp:
	g++ -o file file.cpp

To compile your application, you may do:

make

or:

make myapp

Well, you may now ask, how much use is this? I can write a shell script for this myself! So let's now work on multiple files and object files - your application is big enough to be linked now: it contains five files.
The Test Case

Globally, there are three functions present:

int main();                            (file1.cpp)
void file2function(std::string from);  (file2.cpp)
void file3function(std::string from);  (file3.cpp)

File2 and file3 have their own header files, so they can be linked together. The actual application looks like this:

file1.cpp

#include "file2.h"
#include "file3.h"

int main()
{
    file2function("main");
    file3function("main");
}

file2.h

#include <string>
#include <iostream>
#include "file3.h"

void file2function(std::string source);

file2.cpp

#include "file2.h"

void file2function(std::string from)
{
    std::cout << "File 2 function was called from " << from << std::endl;
    file3function("file2");
}

file3.h

#include <string>
#include <iostream>
#include "file2.h"

void file3function(std::string source);

file3.cpp

#include "file3.h"

void file3function(std::string from)
{
    std::cout << "File 3 function was called from " << from << std::endl;
}

So, how would I link this all up? Well, the most basic way is to do it all by hand:

g++ -c -o file2.o file2.cpp
g++ -c -o file3.o file3.cpp
g++ -o file1 file1.cpp file2.o file3.o

However, make offers a simpler way. Makefile:

Makefile

default: testcase

testcase: file2.o file3.o
	g++ -o file1 file1.cpp file2.o file3.o

Pretty different, eh? There's another approach, which is what makes make so good:

Makefile

# Create the executable "file1" from all the source files.
COMPILER=g++
OBJS=file1.o file2.o file3.o

default: testcase

testcase: $(OBJS)
	$(COMPILER) -o file1 $(OBJS)

How easy is it to expand now? Just add a file to the OBJS list and you're done! This is a complete makefile that handles the complete process of converting several source files (and creating a bunch of ".o" files as an intermediate step) into the final "file1" executable.
Good programmers put human-readable comments (lines starting with the "#" character) in the makefile to describe why the makefile was written and what make is intended to do with it (which, alas, is not always what actually happens).

Advanced make

Long Lines

Some statements in a makefile can be very long. To make them easier to read, some programmers break up long statements into several shorter, easier-to-read physical lines. Many consoles display 80 characters across, which is the threshold some programmers use for breaking up lines and comments. In short, a programmer may decide it looks nicer to write several short physical lines that together compose one statement, like this:

some really long line \
with, like, \
a bajillion parameters and \
several file names \
and stuff

Rather than the same statement, crammed into a single physical line:

some really long line with, like, a bajillion parameters and several file names and stuff

Make Clean

When you type "make clean" at the command prompt, "make" deletes all the files that you specified were easily-replaceable files -- i.e., the intermediate and output files generated from your irreplaceable hand-written source files. To tell "make" that some file is an easily-replaceable file, add it to the "rm" lines of the "clean" section of your makefile, and that file will be deleted *every* time you type "make clean" at the command prompt. (If you don't already have a "clean" section in your makefile, add a section to the end of your makefile that looks like the following):

.PHONY: clean
clean:
	rm *.o
	rm file

In this example, we've told "make" that all intermediate object files ("*.o") and the final executable ("file") are safe-to-delete, easily-regenerated files. Typically a programmer periodically runs "make clean", then archives *every* file in the directory, and then runs "make" to regenerate every file.
Then he tests the program executable, confident that the program he is testing can be regenerated entirely from the files in the archive.

Variables

"make" has two main flavors of variables.

- Recursively expanded variables are defined using variable = value. References to other variables are expanded when the variable is used, so if 'y' contains a reference to another variable 'x', and 'x' changes after 'y' has been defined, 'y' will reflect the change made to 'x'.

x = abc
y = $(x)ghi
x = abcdef
# x will be abcdef, while y will be abcdefghi

- Simply expanded variables are defined using variable := value. References to other variables are expanded once and for all, at the moment the variable is defined, so later changes to 'x' do not affect 'y'.

x := abc
y := $(x)ghi   # x is 'abc' at this time
x := abcdef
# x will be abcdef, while y will be abcghi

File Names With Special Characters

Many programmers recommend not creating files with spaces or dollar signs in their names, in order to avoid the 'spaces in file names' problem in "make" and many other utilities.[2] If you must deal with a file that has spaces or dollar signs in its name (such as the file "my $0.02 program.c"), sometimes you can escape the literal name of the file with doubled backslashes and doubled dollar signs -- in the Makefile, that filename can be represented as "my\\ $$0.02\\ program.c".[3] Alas, the backslash escape doesn't work when "make" works with lists of files (which are internally represented as space-separated filenames). One work-around is to refer to that file name using a space-free representation like "my+$$0.02+program.c", then later in the makefile use the $(subst) function to convert that space-free representation back to the actual on-disk file name.[2] Alas, this work-around can't handle filenames that include both spaces and plus signs. Perhaps it's simplest to avoid filenames with spaces in them as inputs or outputs of "make".
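The difference between the two variable flavors described above is essentially lazy versus eager expansion. As a loose analogy only (this is Python standing in for make's variable table, not anything make actually does), the same behavior looks like this:

```python
# A dict plays the role of make's variable table.
env = {"x": "abc"}


# Recursively expanded (y = $(x)ghi): the reference is resolved each time y is used.
def y_recursive():
    return env["x"] + "ghi"


# Simply expanded (y := $(x)ghi): the reference is resolved once, at definition time.
y_simple = env["x"] + "ghi"

# x changes later, as in the makefile examples above.
env["x"] = "abcdef"
```

After the change, y_recursive() yields "abcdefghi" while y_simple is still "abcghi", mirroring the two makefile snippets.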
Name of the makefile

When creating a new makefile, give it the literal name "Makefile" (8 characters, no extension). In principle, you could create a makefile with some other name, such as makefile or GNUmakefile, which (like Makefile) are automatically found by the GNU version of the make utility, or some other name that you pass to the make utility with the --file option.[4]

Example makefiles

References

[1] "GNU Make Manual", section "6.2 The Two Flavors of Variables".
[2] John Graham-Cumming. "GNU Make meets file names with spaces in them". CM Crossroads, 2007.
[3] John Graham-Cumming. "GNU Make escaping: a walk on the wild side". CM Crossroads, 2007.
[4] "GNU Make Manual", section "3.2 What Name to Give Your Makefile".

Further reading

Books and manuals

- GNU Make documentation
- FreeBSD make manual page
- Microsoft NMAKE reference
- GNU make Standard Library
- Managing Projects with GNU make

Tutorials

- Make - a tutorial
- The GNU C Programming Tutorial - Writing a makefile
- Makefile Tutorial: How To Write A Makefile
- OPUS Makefile Tutorial
- Makefile Tutorial for C, gcc, & gmake
- Using NMake

Articles

- "Ask Mr. Make" series of articles about GNU Make
- What is wrong with make?
- What's Wrong With GNU make?
- Recursive Make Considered Harmful
- Advanced Auto-Dependency Generation
- "Kleknev: a coarse-grained profiler for build systems". Sometimes "make" is used to build extremely large, complicated systems; the Kleknev system helps people figure out which items eat the most time during the build process.
http://en.m.wikibooks.org/wiki/Make
CC-MAIN-2014-15
refinedweb
1,521
64.41
I have been working on this code for quite a while and have not been able to figure it out. I looked online and through the forums and could not find something that showed how to fix it. This program takes in the Julian date, e.g. 12df234 (the first two characters are the year, the last three are the date), and separates the numbers from the letters. When I try to compile using gcc I get the warning:

In function 'extract_julian':
warning: passing argument 1 of 'sscanf' makes pointer from integer without a cast
/usr/include/stdio.h:430: note: expected 'const char * __restrict__' but argument is of type 'char'

And when I run the program I get "segmentation fault". This is my code:

#include <stdio.h>

/* function prototypes */
void read_code(char *julian_date);
void extract_julian(int *day, int *year, char *julian_date);

int main()
{
    char julian_date[7]; /* julian date consists of 7 characters */
    int day = 0;
    int year = 0;

    read_code(julian_date);
    extract_julian(&day, &year, julian_date);
    printf("this is day %d\n", day);
    printf("this is year %d\n", year);
    return 0;
}

void read_code(char *julian_date) /* this function is for input */
{
    printf("input the julian date.\n");
    scanf("%s", julian_date);
}

void extract_julian(int *day, int *year, char *julian_date) /* this function extracts the numbers from the julian date */
{
    sscanf(julian_date, "%d", &*year);   /* makes the first 2 numbers equal to year */
    sscanf(julian_date[4], "%d", &*day); /* makes last 3 equal to date */
}

This is my first post so I am not sure if I used code tags right. Sorry if I didn't.

Edited 3 Years Ago by pyTony: fixed formatting
https://www.daniweb.com/programming/software-development/threads/397019/help-unable-to-pass-string-by-reference
CC-MAIN-2016-50
refinedweb
263
54.66
Overview » Yearly Archives: 2012

JAXB – Representing Null and Empty Collections

Demo Code

The following demo code will be used for all the different versions of the Java model. It simply sets one collection to null, the second to an empty list, and the third to a populated list.

package blog.xmlelementwrapper;

import java.util.ArrayList;
import javax.xml.bind.*;

public class Demo {
    public static void main(String[] args) ...

Read More »

A simple Groovy issue tracker using file system

It will be chaos not to track bugs and feature requests when developing software. Having a simple issue tracker makes managing the project much more likely to succeed. Now I like simple stuff, and I think for a small project, having this tracker right inside the source control repository (especially with DSVCs like Mercurial/Git) is not only doable, but ... Read More »

Under the JVM hood – Classloaders

Classloaders are a low-level and often ignored aspect of the Java language among many developers. At ZeroTurnaround, our developers have had to live, breathe, eat, drink and almost get intimate with classloaders to produce the JRebel technology, which interacts at the classloader level to provide live runtime class reloading, avoiding lengthy rebuild/repackage/redeploy cycles. Here are some of the ... Read More »

Who Do You Promote Into Management? ... Read More »

Java Object resurrection ... Read More »

Don't take the Technical Debt Metaphor too far ... Read More »

Generating Barcodes in PDFs with Flying-Saucer ... Read More »
http://www.javacodegeeks.com/2012/page/3/
CC-MAIN-2016-07
refinedweb
242
56.86
I am trying to replace a small section of text with a block of text. I am hoping that this block of text can be replaced using the replace function of Python. I would really have liked to use sed.

# Change the current working directory to IDX location
import os
print os.getcwd()
os.chdir(##location of my text files##)
print os.getcwd()

# Read file names
com_line = raw_input("File name please:")
contents = []
strs = '#DREFIELD IDOL_SOURCE="idol source"\n#DREFIELD IDOL_TAXONOMY="taxonomy data"\n#DREFIELD IDOL_CATEGORY="IDOL_CATEGORY"\n#DREFIELD IDOL_PROMOTION="idol_promotion"\n#DREFIELD IDOL_PROMOTION_TERM="IDOL_PROMOTION_TERM"\n#DREFIELD IDOL_WHATSNEW="IDOL_WHATSNEW"\n#DREFIELD IDOL_SUMMARY="IDOL_SUMMARY"\n#DREFIELD IDOL_URL="IDOL_URL DATA"\n#DREFIELD IDOL_FORMNUMBER="IDOL_FORMNUMBER"\n#DREFIELD IDOL_DESCRIPTION="IDOL_DESCRIPTION DATA"\n#DRESECTION 0\n'

with open(com_line) as f:
    contents.append(f.readlines())
f.close

# Writing to file
f2 = open("test.idx", 'w')
for content in contents:
    for s in content:
        if s == '#DRESECTION 0\n':
            s.replace("#DRESECTION 0\n", strs)
        f2.write("%s" % s)
f2.close

The goal is to replace each '#DRESECTION 0\n' line with the block in strs.

Strings in Python are immutable. Once a string object has been created, it can't be modified. You can however create a new string based on the old one. When you call s.replace, it doesn't modify s in place, because that's not possible. Instead, the str.replace method returns a new string. You could do s = s.replace(...) and it should work. However, there's no point in replacing the whole contents of one string with another string when you could just rebind the name to the new string. I suggest using:

if s == '#DRESECTION 0\n':
    s = strs
f2.write(s)

I also changed the write call. There's no need to use string substitution here, since you want to write the whole string (and nothing extra). Just pass s as the argument. There are a few other minor issues with your code: for instance, you're not calling the close methods on your files, only naming them.
You need f2.close() if you want to actually close the file. But you don't actually need to do that for f, since the with statement you're using will close the file automatically (you should probably use a with for f2 as well and not bother with the manual close call). There also doesn't seem to be much need for contents to be a nested list (with just one item in the outer list). You probably want to do contents = f.readlines() rather than appending the list of lines to an initially empty list. That will let you get rid of one of the loops in the later code.
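The immutability point in the answer is easy to verify yourself. A small sketch (the helper function is just an illustration of the suggested rebinding, not code from the question):

```python
original = "#DRESECTION 0\n"

# str.replace returns a brand-new string; `original` itself is untouched.
result = original.replace("#DRESECTION 0\n", "REPLACEMENT BLOCK\n")


def replace_marker_line(line, marker, block):
    """The rebinding approach the answer suggests: no mutation, just a new binding."""
    return block if line == marker else line
```

After running this, original still holds "#DRESECTION 0\n" while result holds the replacement, which is exactly why calling s.replace(...) without assigning its return value has no visible effect.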
https://codedump.io/share/mFft57zIvu3w/1/replace-specific-text-with-a-block-of-text
CC-MAIN-2016-44
refinedweb
439
67.55
Understanding Data Types in Python

Python's dynamic typing means, for example, that we can assign any kind of data to any variable:

# Python code
x = 4
x = "four"

Here we've switched the contents of x from an integer to a string. The same thing in C would lead (depending on compiler settings) to a compilation error or other unintended consequences. This means that there is some overhead in storing an integer in Python as compared to an integer in a compiled language like C, as illustrated in the following figure.

L = list(range(10))
L
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

type(L[0])
int

Or, similarly, a list of strings:

L2 = [str(c) for c in L]
L2
['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']

type(L2[0])
str

Because of Python's dynamic typing, we can even create heterogeneous lists:

L3 = [True, "2", 3.0, 4]
[type(item) for item in L3]
[bool, str, float, int]

At the implementation level, a fixed-type array essentially contains a single pointer to one contiguous block of data. Fixed-type arrays lack this flexibility, but are much more efficient for storing and manipulating data.

import array
L = list(range(10))
A = array.array('i', L)
A
array('i', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

import numpy as np

# integer array:
np.array([1, 4, 2, 5, 3])
array([1, 4, 2, 5, 3])

Remember that unlike Python lists, NumPy is constrained to arrays that all contain the same type. If types do not match, NumPy will upcast if possible (here, integers are upcast to floating point):

np.array([3.14, 4, 2, 3])
array([ 3.14, 4. , 2. , 3. ])

If we want to explicitly set the data type of the resulting array, we can use the dtype keyword:

np.array([1, 2, 3, 4], dtype='float32')
array([ 1., 2., 3., 4.], dtype=float32)

Finally, unlike Python lists, NumPy arrays can explicitly be multi-dimensional; here's one way of initializing a multidimensional array using a list of lists:

# nested lists result in multi-dimensional arrays
np.array([range(i, i + 3) for i in [2, 4, 6]])
array([[2, 3, 4],
       [4, 5, 6],
       [6, 7, 8]])

The inner lists are treated as rows of the resulting two-dimensional array.
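The fixed-type constraint just described can be observed with the standard-library array module alone, before NumPy even enters the picture (the variable names here are my own):

```python
import array

# 'i' = signed C int: one type code applies to every element in the array.
a = array.array('i', range(5))

# Each element occupies the same fixed number of bytes
# (platform-dependent, commonly 4 for 'i').
size_per_element = a.itemsize

# Unlike a list, the array rejects a value of the wrong type.
try:
    a[0] = "four"
    type_enforced = False
except TypeError:
    type_enforced = True
```

A plain Python list would happily accept the string; the fixed-type array raises TypeError instead, which is the flexibility-versus-efficiency trade the text describes.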
# Create a length-10 integer array filled with zeros
np.zeros(10, dtype=int)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

# Create a 3x5 floating-point array filled with ones
np.ones((3, 5), dtype=float)
array([[ 1., 1., 1., 1., 1.],
       [ 1., 1., 1., 1., 1.],
       [ 1., 1., 1., 1., 1.]])

# Create a 3x5 array filled with 3.14
np.full((3, 5), 3.14)
array([[ 3.14, 3.14, 3.14, 3.14, 3.14],
       [ 3.14, 3.14, 3.14, 3.14, 3.14],
       [ 3.14, 3.14, 3.14, 3.14, 3.14]])

# Create an array filled with a linear sequence
# Starting at 0, ending at 20, stepping by 2
# (this is similar to the built-in range() function)
np.arange(0, 20, 2)
array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18])

# Create an array of five values evenly spaced between 0 and 1
np.linspace(0, 1, 5)
array([ 0. , 0.25, 0.5 , 0.75, 1. ])

# Create a 3x3 array of uniformly distributed
# random values between 0 and 1
np.random.random((3, 3))
array([[ 0.99844933, 0.52183819, 0.22421193],
       [ 0.08007488, 0.45429293, 0.20941444],
       [ 0.14360941, 0.96910973, 0.946117 ]])

# Create a 3x3 array of normally distributed random values
# with mean 0 and standard deviation 1
np.random.normal(0, 1, (3, 3))
array([[ 1.51772646, 0.39614948, -0.10634696],
       [ 0.25671348, 0.00732722, 0.37783601],
       [ 0.68446945, 0.15926039, -0.70744073]])

# Create a 3x3 array of random integers in the interval [0, 10)
np.random.randint(0, 10, (3, 3))
array([[2, 3, 4],
       [5, 7, 8],
       [0, 5, 0]])

# Create a 3x3 identity matrix
np.eye(3)
array([[ 1., 0., 0.],
       [ 0., 1., 0.],
       [ 0., 0., 1.]])

# Create an uninitialized array of three integers
# The values will be whatever happens to already exist at that memory location
np.empty(3)
array([ 1., 1., 1.])

NumPy Standard Data Types

NumPy's standard data types are listed in the following table.
Note that when constructing an array, a data type can be specified using a string:

np.zeros(10, dtype='int16')

Or using the associated NumPy object:

np.zeros(10, dtype=np.int16)

More advanced type specification is possible, such as specifying big- or little-endian numbers; for more information, refer to the NumPy documentation. NumPy also supports compound data types, which will be covered in Structured Data: NumPy's Structured Arrays.
https://jakevdp.github.io/PythonDataScienceHandbook/02.01-understanding-data-types.html
CC-MAIN-2019-18
refinedweb
757
62.17
Augmented assignment statements

augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)

For an augmented assignment to an attribute, the initial value is retrieved with a getattr() and the result is assigned with a setattr(). Notice that the two methods do not necessarily refer to the same variable. When getattr() refers to a class variable, setattr() still writes to an instance variable. For example:

class A:
    x = 3    # class variable

a = A()
a.x += 1     # writes a.x as 4, leaving A.x as 3

The print statement

print_stmt ::= "print" ([expression ("," expression)* [","]]
               | ">>" expression [("," expression)+ [","]])

A '\n' character is written at the end of the output, unless the print statement ends with a comma.

The yield statement

Note: In Python 2.2, the yield statement was only allowed when the generators feature had been enabled. This __future__ import statement was used to enable the feature:

from __future__ import generators

The import statement

The built-in function __import__() is provided to support applications that determine which modules need to be loaded dynamically; refer to Built-in Functions for additional information.
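The class-variable behaviour of augmented assignment described above can be re-run directly; this simply expands the reference's own example with checks:

```python
class A:
    x = 3  # class variable


a = A()
a.x += 1  # getattr() finds A.x == 3; setattr() then writes an *instance* attribute

# The instance attribute now shadows the class variable:
# a.x is 4, while A.x is still 3, and any new instance still sees 3.
```

This is why augmented assignment on an attribute can silently split one apparent variable into two distinct ones.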
http://docs.python.org/dev/reference/simple_stmts.html
crawl-001
refinedweb
146
51.14
On Thu, Jan 21, 2021 at 13:31:03 +0300, Andrey Fokin wrote:
> Thanks Peter!
> But is it possible to change the XML configuration so that a full (push)
> backup will be possible?

Yes, I've included links to the documentation to do so in my previous reply, so I'd suggest you follow it. See ...

> On Wed, Jan 20, 2021 at 9:08 PM Peter Krempa <pkrempa redhat com> wrote:
> >:
> >

... here. I specifically avoided giving exact steps so that you read the documentation, including the description of the caveats of enabling this _experimental_ feature. So please make sure to read it. This is NOT meant for production use.

Anyway: you need to add the 'qemu' namespace to your domain XML:

<domain type='kvm' xmlns:

And force-on the incremental backup capability:

<qemu:capabilities>
<qemu:add
</qemu:capabilities>

This enables backups, but as I wrote previously ...

> > to enable a feature named 'incremental-backup'. Please make sure to read
> > the caveats of doing so thoroughly.

... observe the following:

> > I also advise not to combine incremental backups with snapshots
> > (virDomainSnapshot* APIs) and blockjobs (virDomainBlock* APIs) as that
> > will not work properly without a patch enabling also experimental qemu
> > features. If you use them, backups will break.

You get to keep the pieces, as you've been warned. Also this:

> > I also suggest you use the latest qemu for this.
https://listman.redhat.com/archives/libvirt-users/2021-January/msg00034.html
CC-MAIN-2021-31
refinedweb
224
56.15
Getting Started With FRDM KL46Z Part 3 - LCD, Slider & PWM

Introduction: Getting Started With FRDM KL46Z Part 3 - LCD, Slider & PWM

In this third part I want to show you how to use the Touch Slider together with PWM and the LCD. We will see how the value shown on the LCD and the PWM duty cycle on the LEDs change as you slide your finger.

Step 1: LCD

To use the LCD we have to right-click on our project's name and choose 'Import library' -> 'From Import Wizard', then search for the SLCD library and add it to the project. The second and LAST thing you have to do to use the LCD is to include the LCD library with #include. That's all! You don't even have to initialize the LCD, because the default constructor does it for us. We can also open the downloaded library and see other functions to use. For example, we can change the contrast or the dot position.

Step 2: TSI Sensor and PWM

To use the touch slider we have to add the tsi_sensor library in the same way as we added the LCD library. Then include tsi_sensor.h and create an object of TSIAnalogSlider with arguments which are our TSI pins and a range. As we can see in the picture above, these pins are PTB16 and PTB17. The functions that read the sensor value are readPercentage() and readDistance(). For PWM we don't have to add a library, because mbed.h already provides PwmOut. We create an object with the LED name (e.g. LED_GREEN) as an argument.

Step 3: Writing a Programme

It's a very short and quite easy piece of code which can be used in bigger projects.

#include "mbed.h"
#include "SLCD.h"
#include "tsi_sensor.h"

SLCD slcd;
TSIAnalogSlider slider(PTB16, PTB17, 100); // touch sensor
PwmOut gLed(LED_GREEN); // pwm out
PwmOut rLed(LED_RED);

int main()
{
    slcd.printf("lcd ");
    wait(2); // delay 2 sec
    while (1) {
        slcd.CharPosition = 0; // without this, the value on the screen would scroll
        slcd.printf("%1.2f", slider.readPercentage()); // print TSI sensor value on the LCD
        rLed = slider.readPercentage(); // feed the TSI value to the PWM linked with the LED
        gLed = 1.0 - slider.readPercentage();
        wait_ms(10);
    }
}

where can I find the slcd library?
http://www.instructables.com/id/FRDM-kl46z-part-3-LCD-Slider-PWM/
CC-MAIN-2017-39
refinedweb
348
74.79
So here are my code segments:

1. The interface, with the problem I'm trying to solve:

// Design a Java interface Lockable that includes the following methods: setKey,
// lock, unlock, and locked. The setKey, lock, and unlock methods take an
// integer parameter that represents the key. The setKey method establishes
// the key. The lock and unlock methods lock and unlock the object,
// but only if the key passed in is correct. The locked method returns a
// boolean indicating whether or not the object is locked. A Lockable object
// represents an object whose regular methods are protected: if the object is
// locked, they cannot be invoked; if it is unlocked, they can be invoked.
// Redesign and implement a version of this Coin class so that it is Lockable.

public interface Lockable {
    boolean locked();
    void setKey(int password);
    boolean lock(int password);
    boolean unlock(int password);
}

2. The class Coin with modifications:

public class Coin implements Lockable {
    private int key;        // password to make it lockable
    private boolean locked; // if true, object is locked
    private final int HEADS = 0;
    private final int TAILS = 1;
    private int face;

    public boolean locked() {
        return this.locked;
    }

    public void setKey(int password) {
        this.key = password;
        this.locked = true;
    }

    public boolean lock(int password) {
        if (password == this.key) {
            this.locked = true;
            return true;
        } else {
            return false;
        }
    }

    public boolean unlock(int password) {
        if (password == this.key) {
            this.locked = false;
            return true;
        } else {
            return false;
        }
    }

    public Coin() {
        locked = true;
    }

    public boolean flip() {
        if (this.locked == false) {
            face = (int) (Math.random() * 2);
            return true;
        }
        return false;
    }

    public boolean isHeads() {
        return (face == HEADS);
    }

    public String toString() {
        if (this.locked == false) {
            String faceName;
            if (face == HEADS)
                faceName = "Heads";
            else
                faceName = "Tails";
            return faceName;
        } else
            return "";
    }
}

3.
Since the questions asked for only the methods, i believe thats as far he wants us to go but he made a driver to test if it worked, something similar to this public class Driver { public static void main(String[] args) { Coin myCoin = new Coin(); boolean bSuccess, bResult; myCoin.setKey(1234); bSuccess = myCoin.flip(); // Should Fail, coin is locked bResult = myCoin.unlock(12345); if (bResult) // coin is unlocked { myCoin.flip(); boolean iValue = myCoin.isHeads(); } else { System.out.println("You better remember the LOCK value"); } } } Some errors I get when running this: 1. After prompting for the password it does nothing 2. Even i tried to tidy the code, some variables were left behind since the program for this situation. If someone could help me fix my Driver class and allowing Coin to implement it Thanks everyone in Advance!
Item Description / Images / Additional Information

Protect your personal privacy!! This bug detector can protect you from:
- Audio bugs
- Hidden wireless cameras
- Transmitting cell phones
- WiFi
- And more!!

Detection Range :.

Features:
- Frequency: 0 - 6 GHz
- Lower Frequency: 0 ~ 120 MHz
- Higher Frequency: 149 MHz ~ 400 MHz
- GSM: 900 MHz ~ 1900 MHz
- Detectable Image Frequency: 200 MHz ~ 2400 MHz
- Microwave Frequency: 2400 MHz ~ 6 GHz
- Working Voltage: 3V x 2
- Working Current: Caution 8mA; Vibration 70mA
- Sensitivity: max 10 meters (depends on output of transmitter)
- Color: Bright Silver
- Dimensions: 56 x 90 x 17 mm
- Weight: 50g (gross weight 100g)
- Power Consumption: 2 batteries, up to 4 hours of continuous use

Package Included:
- 1 x Silver Detector
- 1 x Earphone
- 1 x User Manual

Payment:
Sweety Bear On-line ioffer-store: when you place an order, please pay within 7 days via PayPal. If you want another payment method, such as Western Union or credit card, please contact us for more details. Thank you for your understanding! Wish you a nice day in my store!

Shipping:
1. We ship to the PayPal confirmed address! Please make sure the address is right! If you want to change the address, please contact us in time!
2. Dispatch is arranged within 2-5 days once your payment is verified by PayPal.
3. Delivery time for reference: EMS about 7-14 days; DHL/UPS/FedEx/TNT about 5-7 days; Hong Kong Post / China Post about 15-35 days. Normally the free shipping way we supply is China Post. If the weight is over 2 kg, or you want a faster delivery method, you need to pay extra for the DHL, EMS or UPS options; in that case, please contact me for details! I will choose the fastest, cheapest and safest way to deliver to you!
4. In order to ensure you receive your order tax free upon import, I will declare it as a "gift" with a lower value. The lower price does not reflect the total price you paid or the market value of the product, and may not apply to countries with different tariff rules.
5. Any import charges or fees are the buyer's responsibility.

Store Policy:
Warranty: a 12-month warranty is provided. Please read the product description carefully before you pay. All merchandise we sell is of good quality and the same as described; items are under strict examination and photographed before shipping. We will try our best to avoid any quality problem, but if a problem occurs, the return cost is incurred by you. If you ask for a refund, please make sure the returned items are in good, original and resalable condition. The shipping and handling charge is not refundable; the customer is responsible for all the charges of sending and returning. We will always do our best to solve the problem and bring you a happy shopping experience in our store. We care about each buyer and we will fix any problem in a good business manner. Please contact us before opening any dispute or leaving any neutral/negative feedback. We will send a replacement or refund once the item is received as new. Your satisfaction is always our No. 1 priority! Note: any man-made defects, such as broken or scratched items, do not belong to the warranty. Thank you for your understanding! Wish you a nice day in my store! Best regards! Welcome to Sweety Bear On-line Store again!
Withdrawal

Contents
- Withdrawal
- Abstract
- Documentation
- Quick start
- Examples in this PEP
- Branches and branches
- Remote repositories and remote branches
- Commit editing and caveats
- Undo
- Merge or rebase?
- Null-merges
- Branching models
- Advanced configuration
- Advanced topics
- Tips and tricks
- git on server
- From Mercurial to git
- Git and GitHub
- References

This PEP was withdrawn as it is too generic and doesn't really deal with Python development. It is no longer updated. The content was moved to the Python Wiki [1]. Make further updates in the wiki.

Abstract

This Informational PEP collects information about git. There is, of course, a lot of documentation for git, so the PEP concentrates on more complex issues, scenarios and examples (ones more related to Python development). The plan is to extend the PEP in the future, collecting information about the equivalence of Mercurial and git scenarios to help migrate Python development from Mercurial to git. The author of the PEP doesn't currently plan to write a Process PEP on migrating Python development from Mercurial to git.

Documentation

Git is accompanied by a lot of documentation, both online and offline.

Documentation for starters:
- Git Tutorial: part 1, part 2.
- Git User's Manual.
- Everyday GIT With 20 Commands Or So.
- Git workflows.

Advanced documentation:
- Git Magic, with a number of translations.
- Pro Git. The Book about git. Buy it at Amazon or download in PDF, mobi, or ePub form. It has translations to many different languages. Download the Russian translation from GArik.

Offline documentation: git has builtin help; run git help $TOPIC. For example, run git help git or git help help.

Quick start

Download and installation:
- Unix users: download and install using your package manager.
- Microsoft Windows: download git-for-windows.
- MacOS X: use git installed with XCode, or download from MacPorts or git-osx-installer, or install git with Homebrew: brew install git.
git-cola (repository) is a Git GUI written in Python and GPL licensed; Linux, Windows, MacOS X. TortoiseGit is a Windows shell interface to Git based on TortoiseSVN; open source.

Initial configuration

This simple code often appears in documentation, but it is important, so let's repeat it here. Git stores author and committer names/emails in every commit, so configure your real name and preferred email:

$ git config --global user.name "User Name"
$ git config --global user.email user.name@example.org

Examples in this PEP

Examples of git commands in this PEP use the following approach. It is supposed that you, the user, work with a local repository named python that has an upstream remote repo named origin. Your local repo has two branches, v1 and master. For most examples the currently checked out branch is master. That is, it's assumed you have done something like this:

$ git clone
$ cd python
$ git branch v1 origin/v1

The first command clones the remote repository into the local directory python, creates a new local branch master, sets remotes/origin/master as its upstream remote-tracking branch and checks it out into the working directory. The last command creates a new local branch v1 and sets remotes/origin/v1 as its upstream remote-tracking branch.

The same result can be achieved with the commands:

$ git clone -b v1
$ cd python
$ git checkout --track origin/master

The last command creates a new local branch master, sets remotes/origin/master as its upstream remote-tracking branch and checks it out into the working directory.

Branches and branches

Git terminology can be a bit misleading. Take, for example, the term "branch". In git it has two meanings. A branch is a directed line of commits (possibly with merges). And a branch is a label or a pointer assigned to a line of commits. It is important to distinguish when you talk about commits and when about their labels. Lines of commits are by themselves unnamed and are usually only lengthening and merging.
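The example setup described above can be reproduced locally without a network by using a throwaway bare repository as the origin. This is a sketch, not part of the PEP: the /tmp paths, file names and identity values are placeholders.

```shell
#!/bin/sh
# Sketch only: recreate the PEP's example setup with a local bare
# "origin" holding master and v1. All /tmp paths, file names and the
# identity values are placeholders, not part of the PEP.
set -e
rm -rf /tmp/pep103-origin /tmp/pep103-seed /tmp/pep103-python
git init -q --bare /tmp/pep103-origin
git -C /tmp/pep103-origin symbolic-ref HEAD refs/heads/master

# Seed the bare repo with one commit on master and a v1 branch.
git init -q /tmp/pep103-seed
cd /tmp/pep103-seed
git config user.name "User Name"
git config user.email user.name@example.org
git checkout -q -b master
echo 'print("hello")' > hello.py
git add hello.py
git commit -q -m "initial commit"
git branch v1
git push -q /tmp/pep103-origin master v1

# The setup from the PEP: clone, then track the second branch.
cd /tmp
git clone -q /tmp/pep103-origin pep103-python
cd pep103-python
git branch v1 origin/v1
git branch -vv   # master (checked out) and v1, each tracking origin
```

The local path stands in for the remote URL that the PEP elides; everything else mirrors the clone-then-branch sequence above.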
Labels, on the other hand, can be created, moved, renamed and deleted freely.

Remote repositories and remote branches

Remote-tracking branches are branches (pointers to commits) in your local repository. They are there for git (and for you) to remember what branches and commits have been pulled from and pushed to what remote repos (you can pull from and push to many remotes). Remote-tracking branches live under remotes/$REMOTE namespaces, e.g. remotes/origin/master.

To see the status of remote-tracking branches run:

$ git branch -rv

To see local and remote-tracking branches (and tags) pointing to commits:

$ git log --decorate

You never do your own development on remote-tracking branches. You create a local branch that has a remote branch as upstream and do development on that local branch. On push git pushes commits to the remote repo and updates remote-tracking branches; on pull git fetches commits from the remote repo, updates remote-tracking branches and fast-forwards, merges or rebases local branches.

When you do an initial clone like this:

$ git clone -b v1

git clones the remote repository to the directory python, creates a remote named origin, creates remote-tracking branches, creates a local branch v1, configures it to track the upstream remotes/origin/v1 branch and checks out v1 into the working directory.

Some commands, like git status --branch and git branch --verbose, report the difference between local and remote branches. Please remember they only do a comparison with remote-tracking branches in your local repository, and the state of those remote-tracking branches can be outdated. To update remote-tracking branches you either fetch and merge (or rebase) commits from the remote repository, or update remote-tracking branches without updating local branches.

Updating local and remote-tracking branches

To update remote-tracking branches without updating local branches run git remote update [$REMOTE...].
For example:

$ git remote update
$ git remote update origin

Fetch and pull

There is a major difference between

$ git fetch $REMOTE $BRANCH

and

$ git fetch $REMOTE $BRANCH:$BRANCH

The first command fetches commits from the named $BRANCH in the $REMOTE repository that are not in your repository, updates the remote-tracking branch and leaves the id (the hash) of the head commit in the file .git/FETCH_HEAD. The second command fetches commits from the named $BRANCH in the $REMOTE repository that are not in your repository and updates both the local branch $BRANCH and its upstream remote-tracking branch. But it refuses to update branches in case of non-fast-forward. And it refuses to update the current branch (the currently checked out branch, where HEAD is pointing to).

The first command is used internally by git pull.

$ git pull $REMOTE $BRANCH

is equivalent to

$ git fetch $REMOTE $BRANCH
$ git merge FETCH_HEAD

Certainly, $BRANCH in that case should be your current branch. If you want to merge a different branch into your current branch, first update that non-current branch and then merge:

$ git fetch origin v1:v1           # Update v1
$ git pull --rebase origin master  # Update the current branch master
                                   # using rebase instead of merge
$ git merge v1

If you have not yet pushed commits on v1, though, the scenario has to become a bit more complex. Git refuses to update a non-fast-forwardable branch, and you don't want to do a force-pull because that would remove your non-pushed commits and you would need to recover. So you want to rebase v1, but you cannot rebase a non-current branch. Hence, check out v1 and rebase it before merging:

$ git checkout v1
$ git pull --rebase origin v1
$ git checkout master
$ git pull --rebase origin master
$ git merge v1

It is possible to configure git to make it fetch/pull a few branches or all branches at once, so you can simply run

$ git pull origin

or even

$ git pull

The default remote repository for fetching/pulling is origin.
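The difference between the two fetch forms above can be seen in a throwaway pair of repositories. This is a sketch, not part of the PEP: the /tmp paths, file names and identity values are placeholders.

```shell
#!/bin/sh
# Sketch only: "git fetch $REMOTE $BRANCH:$BRANCH" fast-forwarding a
# local non-current branch. The /tmp paths and file names are
# placeholders, not part of the PEP.
set -e
rm -rf /tmp/pep103-up /tmp/pep103-dev
git init -q /tmp/pep103-up
cd /tmp/pep103-up
git config user.name "User Name"
git config user.email user.name@example.org
git checkout -q -b v1
echo one > f.txt; git add f.txt; git commit -q -m "c1"

git clone -q /tmp/pep103-up /tmp/pep103-dev
cd /tmp/pep103-dev
git checkout -q -b master       # development happens on master

# Upstream v1 gains a commit...
cd /tmp/pep103-up
echo two >> f.txt
git commit -qam "c2"

# ...and the clone updates its local v1 while staying on master:
cd /tmp/pep103-dev
git fetch -q origin v1:v1
git log --oneline v1            # c2 and c1; master still has only c1
```

Note that the fetch succeeds only because local v1 can fast-forward; with local commits on v1 it would refuse, exactly as described above.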
The default set of references to fetch is calculated using the matching algorithm: git fetches all branches having the same name on both ends.

Push

Pushing is a bit simpler. There is only one command, push. When you run

$ git push origin v1 master

git pushes local v1 to remote v1 and local master to remote master. The same as:

$ git push origin v1:v1 master:master

Git pushes commits to the remote repo and updates remote-tracking branches. Git refuses to push commits that aren't fast-forwardable. You can force-push anyway, but please remember: you can force-push to your own repositories, but don't force-push to public or shared repos. If you find git refuses to push commits that aren't fast-forwardable, better fetch and merge commits from the remote repo (or rebase your commits on top of the fetched commits), then push. Only force-push if you know what you are doing and why you are doing it. See the section Commit editing and caveats below.

It is possible to configure git to make it push a few branches or all branches at once, so you can simply run

$ git push origin

or even

$ git push

The default remote repository for pushing is origin. The default set of references to push in git before 2.0 is calculated using the matching algorithm: git pushes all branches having the same name on both ends. The default set of references to push in git 2.0+ is calculated using the simple algorithm: git pushes the current branch back to its @{upstream}.

To configure git before 2.0 to the new behaviour run:

$ git config push.default simple

To configure git 2.0+ to the old behaviour run:

$ git config push.default matching

Git doesn't allow pushing a branch if it's the current branch in the remote non-bare repository: git refuses to update the remote working directory. You really should push only to bare repositories. For non-bare repositories git prefers a pull-based workflow.
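The push defaults above can be exercised against a local bare repository. This is a sketch, not part of the PEP: the /tmp paths, file names and identity values are placeholders.

```shell
#!/bin/sh
# Sketch only: pushing to a bare repository with push.default=simple.
# The /tmp paths and file names are placeholders, not part of the PEP.
set -e
rm -rf /tmp/pep103-bare /tmp/pep103-work
git init -q --bare /tmp/pep103-bare
git init -q /tmp/pep103-work
cd /tmp/pep103-work
git config user.name "User Name"
git config user.email user.name@example.org
git checkout -q -b master
echo a > a.txt; git add a.txt; git commit -q -m "c1"

git remote add origin /tmp/pep103-bare
git config push.default simple
git push -q -u origin master    # first push also records the upstream

echo b >> a.txt
git commit -qam "c2"
git push -q                     # "simple": current branch to its @{upstream}
```

Pushing to a bare repo avoids the refused-update-of-a-checked-out-branch problem described above.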
When you want to deploy code on a remote host and can only use push (because your workstation is behind a firewall and you cannot pull from it), you do that in two steps using two repositories: you push from the workstation to a bare repo on the remote host, ssh to the remote host and pull from the bare repo to a non-bare deployment repo. That changed in git 2.3, but see the blog post for caveats; in 2.4 the push-to-deploy feature was further improved.

Private information

When cloning/fetching/pulling/pushing, git copies only database objects (commits, trees, files and tags) and symbolic references (branches and lightweight tags). Everything else is private to the repository and never cloned, updated or pushed. It's your config, your hooks, your private exclude file. If you want to distribute hooks, copy them to the working tree, add, commit, push and instruct the team to update and install the hooks manually.

Commit editing and caveats

A warning not to edit published (pushed) commits also appears in documentation, but it's repeated here anyway as it's very important. It is possible to recover from a forced push, but it's a PITA for the entire team. Please avoid it.

To see what commits have not been published yet, compare the head of the branch with its upstream remote-tracking branch:

$ git log origin/master..  # from origin/master to HEAD (of master)
$ git log origin/v1..v1    # from origin/v1 to the head of v1

For every branch that has an upstream remote-tracking branch git maintains an alias @{upstream} (short version @{u}), so the commands above can be given as:

$ git log @{u}..
$ git log v1@{u}..v1

To see the status of all branches:

$ git branch -avv

To compare the status of local branches with a remote repo:

$ git remote show origin

Read how to recover from an upstream rebase. It is in git help rebase. On the other hand, don't be too afraid about commit editing. You can safely edit, reorder, remove, combine and split commits that haven't been pushed yet.
You can even push commits to your own (backup) repo, edit them later and force-push edited commits to replace what has already been pushed. Not a problem until commits are in a public or shared repository.

Undo

Whatever you do, don't panic. Almost anything in git can be undone.

git checkout: restore a file's content

git checkout, for example, can be used to restore the content of file(s) to that of a commit. Like this:

git checkout HEAD~ README

The command restores the contents of the README file to the last but one commit in the current branch. By default the commit ID is simply HEAD; i.e. git checkout README restores README to the latest commit. (Do not use git checkout to view the content of a file in a commit; use git cat-file -p, e.g. git cat-file -p HEAD~:path/to/README).

git reset: remove (non-pushed) commits

git reset moves the head of the current branch. The head can be moved to point to any commit, but it's often used to remove a commit or a few (preferably, non-pushed ones) from the top of the branch - that is, to move the branch backward in order to undo a few (non-pushed) commits. git reset has three modes of operation: soft, hard and mixed. The default is mixed. ProGit explains the difference very clearly. Bare repositories don't have indices or working trees, so in a bare repo only a soft reset is possible.

git reflog: reference log

Removing commits with git reset or moving the head of a branch sounds dangerous, and it is. But there is a way to undo: another reset back to the original commit. Git doesn't remove commits immediately; unreferenced commits (in git terminology they are called "dangling commits") stay in the database for some time (the default is two weeks), so you can reset back to one or create a new branch pointing to the original commit.

For every move of a branch's head - with git commit, git checkout, git fetch, git pull, git rebase, git reset and so on - git stores a reference log (reflog for short). For every move git stores where the head was.
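The checkout and reset undo commands above can be tried safely in a scratch repository. This is a sketch, not part of the PEP: the /tmp path, file names and identity values are placeholders.

```shell
#!/bin/sh
# Sketch only: restoring a file from an earlier commit with checkout,
# then removing the last (non-pushed) commit with reset. The /tmp path
# and file names are placeholders, not part of the PEP.
set -e
rm -rf /tmp/pep103-undo
git init -q /tmp/pep103-undo
cd /tmp/pep103-undo
git config user.name "User Name"
git config user.email user.name@example.org
git checkout -q -b master
echo "version 1" > README; git add README; git commit -q -m "c1"
echo "version 2" > README; git commit -qam "c2"

git checkout HEAD~ README   # restore README from the last-but-one commit
grep -q "version 1" README

git checkout HEAD README    # back to the latest commit's content
git reset -q --hard HEAD~   # drop the last (non-pushed) commit entirely
git log --oneline           # only c1 remains
```

After the hard reset both the branch head and the working tree are back at the first commit.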
The command git reflog can be used to view (and manipulate) the log. In addition to the moves of the head of every branch, git stores the moves of the HEAD - a symbolic reference that (usually) names the current branch. HEAD is changed with git checkout $BRANCH. By default git reflog shows the moves of the HEAD, i.e. the command is equivalent to git reflog HEAD. To show the moves of the head of a branch use the command git reflog $BRANCH.

So to undo a git reset, look up the original commit in git reflog, verify it with git show or git log and run git reset $COMMIT_ID. Git stores the move of the branch's head in the reflog, so you can undo that undo later again.

In a more complex situation you'd want to move some commits along with resetting the head of the branch. Cherry-pick them to the new branch. For example, if you want to reset the branch master back to the original commit but preserve two commits created in the current branch, do something like:

$ git branch save-master    # create a new branch saving master
$ git reflog                # find the original place of master
$ git reset $COMMIT_ID
$ git cherry-pick save-master~ save-master
$ git branch -D save-master # remove temporary branch

git revert: revert a commit

git revert reverts a commit or commits; that is, it creates a new commit or commits that revert(s) the effects of the given commits. It's the only way to undo published commits (git commit --amend, git rebase and git reset change the branch in non-fast-forwardable ways, so they should only be used for non-pushed commits.)

There is a problem with reverting a merge commit. git revert can undo the code created by the merge commit, but it cannot undo the fact of merge. See the discussion How to revert a faulty merge.

One thing that cannot be undone

Whatever you undo, there is one thing that cannot be undone - overwritten uncommitted changes. Uncommitted changes don't belong to git, so git cannot help preserve them.
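The reset-then-reflog recovery described above fits in a few lines. This is a sketch, not part of the PEP: the /tmp path, file names and identity values are placeholders.

```shell
#!/bin/sh
# Sketch only: undoing a git reset via the reference log. The /tmp
# path and file names are placeholders, not part of the PEP.
set -e
rm -rf /tmp/pep103-reflog
git init -q /tmp/pep103-reflog
cd /tmp/pep103-reflog
git config user.name "User Name"
git config user.email user.name@example.org
git checkout -q -b master
echo a > f; git add f; git commit -q -m "c1"
echo b >> f; git commit -qam "c2"

git reset -q --hard HEAD~       # c2 is gone from the branch...
git reflog -n 2                 # ...but the reflog remembers where HEAD was
git reset -q --hard 'HEAD@{1}'  # undo the undo: back to c2
git log --oneline               # c2 and c1 again
```

HEAD@{1} is reflog syntax for "where HEAD was one move ago", so no commit hash needs to be copied by hand.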
Most of the time git warns you when you're going to execute a command that overwrites uncommitted changes. Git doesn't allow you to switch branches with git checkout when that would overwrite uncommitted changes. It stops you when you're going to rebase with a non-clean working tree. It refuses to pull new commits over non-committed files.

But there are commands that do exactly that - overwrite files in the working tree. Commands like git checkout $PATHs or git reset --hard silently overwrite files, including your uncommitted changes.

With that in mind you can understand the stance "commit early, commit often". Commit as often as possible. Commit on every save in your editor or IDE. You can edit your commits before pushing - edit commit messages, change commits, reorder, combine, split, remove. But save your changes in the git database: either commit changes or at least stash them with git stash.

Merge or rebase?

The Internet is full of heated discussions on the topic "merge or rebase?" Most of them are meaningless. When a DVCS is being used in a big team with a big and complex project with many branches, there is simply no way to avoid merges. So the question is diminished to "whether to use rebase, and if yes - when to use rebase?" Considering that it is very much recommended not to rebase published commits, the question is diminished even further: "whether to use rebase on non-pushed commits?"

That small question is for the team to decide. To preserve the beauty of linear history it's recommended to use rebase when pulling, i.e. do git pull --rebase, or even configure automatic setup of rebase for every new branch:

$ git config branch.autosetuprebase always

and configure rebase for existing branches:

$ git config branch.$NAME.rebase true

For example:

$ git config branch.v1.rebase true
$ git config branch.master.rebase true

After that, git pull origin master becomes equivalent to git pull --rebase origin master.
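Returning to the advice above to stash changes rather than lose them, a throwaway repository shows the round trip. This is a sketch, not part of the PEP: the /tmp path, file names and identity values are placeholders.

```shell
#!/bin/sh
# Sketch only: saving uncommitted changes with git stash so nothing is
# lost to a destructive command. The /tmp path and file names are
# placeholders, not part of the PEP.
set -e
rm -rf /tmp/pep103-stash
git init -q /tmp/pep103-stash
cd /tmp/pep103-stash
git config user.name "User Name"
git config user.email user.name@example.org
git checkout -q -b master
echo committed > f; git add f; git commit -q -m "c1"

echo uncommitted >> f     # a change git cannot recover if overwritten
git stash -q              # save it in git's database instead
! grep -q uncommitted f   # the working tree is clean again
git stash pop -q          # and the change comes back
grep -q uncommitted f
```

Once stashed, the change lives in git's database, so even a git reset --hard in between would not have destroyed it.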
It is recommended to create new commits in a separate feature or topic branch while using rebase to update the mainline branch. When the topic branch is ready, merge it into mainline. To avoid the tedious task of resolving a large number of conflicts at once, you can merge the topic branch into the mainline from time to time and switch back to the topic branch to continue working on it. The entire workflow would be something like:

$ git checkout -b issue-42  # create a new issue branch and switch to it
    ...edit/test/commit...
$ git checkout master
$ git pull --rebase origin master  # update master from the upstream
$ git merge issue-42
$ git branch -d issue-42    # delete the topic branch
$ git push origin master

When the topic branch is deleted, only the label is removed; the commits stay in the database, now merged into master:

o--o--o--o--o--M--< master - the mainline branch
    \         /
     --*--*--*             - the topic branch, now unnamed

The topic branch is deleted to avoid cluttering the branch namespace with small topic branches. Information on what issue was fixed or what feature was implemented should be in the commit messages.

But even that small amount of rebasing could be too big in the case of long-lived merged branches. Imagine you're doing work in both the v1 and master branches, regularly merging v1 into master. After some time you will have a lot of merge and non-merge commits in master. Then you want to push your finished work to a shared repository and find someone has pushed a few commits to v1. Now you have a choice of two equally bad alternatives: either you fetch and rebase v1 and then have to recreate all your work in master (reset master to the origin, merge v1 and cherry-pick all non-merge commits from the old master); or merge the new v1 and lose the beauty of linear history.
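The topic-branch workflow above, minus the remote steps, can be run in a scratch repository. This is a sketch, not part of the PEP: the /tmp path, branch name and file names are placeholders.

```shell
#!/bin/sh
# Sketch only: the topic-branch workflow, local steps only. The /tmp
# path, branch name and file names are placeholders, not from the PEP.
set -e
rm -rf /tmp/pep103-topic
git init -q /tmp/pep103-topic
cd /tmp/pep103-topic
git config user.name "User Name"
git config user.email user.name@example.org
git checkout -q -b master
echo base > f; git add f; git commit -q -m "c1"

git checkout -q -b issue-42   # create the topic branch and switch to it
echo fix > fix.txt; git add fix.txt; git commit -q -m "fix issue 42"

git checkout -q master
git merge -q issue-42         # a fast-forward here; real history may merge
git branch -d issue-42        # only the label goes; the commits stay
```

After the branch is deleted, the fix commit is still reachable from master, exactly as the diagram above shows.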
Null-merges

Git has a builtin merge strategy for what Python core developers call a "null-merge":

$ git merge -s ours v1  # null-merge v1 into master

Branching models

Git doesn't assume any particular development model regarding branching and merging. Some projects prefer to graduate patches from the oldest branch to the newest, some prefer to cherry-pick commits backwards, some use squashing (combining a number of commits into one). Anything is possible.

There are a few examples to start with. git help workflows describes how the very git authors develop git. The ProGit book has a few chapters devoted to branch management in different projects: Git Branching - Branching Workflows and Distributed Git - Contributing to a Project. There is also a well-known article, A successful Git branching model, by Vincent Driessen. It recommends a set of very detailed rules on creating and managing mainline, topic and bugfix branches. To support the model the author implemented the git flow extension.

Advanced configuration

Line endings

Git has builtin mechanisms to handle line endings between platforms with different end-of-line styles. To allow git to do CRLF conversion, assign the text attribute to files using .gitattributes. For files that have to have specific line endings, assign the eol attribute. For binary files the attribute is, naturally, binary. For example:

$ cat .gitattributes
*.py text
*.txt text
*.png binary
/readme.txt eol=CRLF

To check what attributes git uses for files, use the git check-attr command. For example:

$ git check-attr -a -- \*.py

Useful assets

GitAlias (repository) is a big collection of aliases. A careful selection of aliases for frequently used commands could save you a lot of keystrokes! GitIgnore and are collections of .gitignore files for all kinds of IDEs and programming languages. Python included! pre-commit (repositories) is a framework for managing and maintaining multi-language pre-commit hooks.
The framework is written in Python and has a lot of plugins for many programming languages.

Advanced topics

Staging area

The staging area, aka index, aka cache, is a distinguishing feature of git. The staging area is where git collects patches before committing them. The separation between the patch-collecting and commit phases provides a very useful feature of git: you can review collected patches before commit and even edit them - remove some hunks, add new hunks and review again.

To add files to the index use git add. Collecting patches before committing means you need to do that for every change, not only to add new (untracked) files. To simplify committing in case you just want to commit everything without reviewing, run git commit --all (or just -a) - the command adds every changed tracked file to the index and then commits. To commit a file or files regardless of patches collected in the index, run git commit [--only|-o] -- $FILE....

To add hunks of patches to the index use git add --patch (or just -p). To remove collected files from the index use git reset HEAD -- $FILE... To add/inspect/remove collected hunks use git add --interactive (-i).

To see the diff between the index and the last commit (i.e., collected patches) use git diff --cached. To see the diff between the working tree and the index (i.e., uncollected patches) use just git diff. To see the diff between the working tree and the last commit (i.e., both collected and uncollected patches) run git diff HEAD.

See WhatIsTheIndex and IndexCommandQuickref in the Git Wiki.

Root

Git switches to the root (the top-level directory of the project, where the .git subdirectory exists) before running any command. Git remembers, though, the directory that was current before the switch. Some programs take the current directory into account.
E.g., git status shows file paths of changed and unknown files relative to the current directory; git grep searches below the current directory; git apply applies only those hunks from the patch that touch files below the current directory.

But most commands run from the root and ignore the current directory. Imagine, for example, that you have two work trees, one for the branch v1 and the other for master. If you want to merge v1 from a subdirectory inside the second work tree, you must write commands as if you're in the top-level dir. Let's take two work trees, project-v1 and project, for example:

$ cd project/subdirectory
$ git fetch ../project-v1 v1:v1
$ git merge v1

Please note the path in git fetch ../project-v1 v1:v1 is ../project-v1 and not ../../project-v1, despite the fact that we run the commands from a subdirectory, not from the root.

ReReRe

Rerere is a mechanism that helps to resolve repeated merge conflicts. The most frequent source of recurring merge conflicts are topic branches that are merged into mainline and then the merge commits are removed; that's often performed to test the topic branches and train rerere; merge commits are removed to have a clean linear history and finish the topic branch with only one last merge commit.

Rerere works by remembering the states of the tree before and after a successful commit. That way rerere can automatically resolve conflicts if they appear in the same files.

Rerere can be used manually with the git rerere command, but most often it's used automatically. Enable rerere with these commands in a working tree:

$ git config rerere.enabled true
$ git config rerere.autoupdate true

You don't need to turn rerere on globally - you don't want rerere in bare repositories or single-branch repositories; you only need rerere in repos where you often perform merges and resolve merge conflicts.

Database maintenance

The git object database and other files/directories under .git require periodic maintenance and cleanup.
For example, commit editing leaves unreferenced objects (dangling objects, in git terminology), and these objects should be pruned to avoid collecting cruft in the DB. The command git gc is used for maintenance. Git automatically runs git gc --auto as a part of some commands to do quick maintenance. Users are recommended to run git gc --aggressive from time to time; git help gc recommends running it every few hundred changesets; for more intensive projects it should be something like once a week, and less frequently (biweekly or monthly) for less active projects.

git gc --aggressive not only removes dangling objects, it also repacks the object database into indexed and better optimized pack(s); it also packs symbolic references (branches and tags). Another way to do it is to run git repack.

There is a well-known message from Linus Torvalds regarding the "stupidity" of git gc --aggressive. The message can safely be ignored now. It is old and outdated; git gc --aggressive has become much better since that time. For those who still prefer git repack over git gc --aggressive the recommended parameters are git repack -a -d -f --depth=20 --window=250. See this detailed experiment for an explanation of the effects of these parameters.

From time to time run git fsck [--strict] to verify the integrity of the database. git fsck may produce a list of dangling objects; that's not an error, just a reminder to perform regular maintenance.

Tips and tricks

Command-line options and arguments

git help cli recommends not combining short options/flags. Most of the time combining works: git commit -av works perfectly, but there are situations when it doesn't. E.g., git log -p -5 cannot be combined as git log -p5.

Some options have arguments, and some even have default arguments.
In that case the argument for such an option must be spelled in a sticky way: -Oarg, never -O arg, because for an option that has a default argument the latter means "use the default value for option -O and pass arg further to the option parser". For example, git grep has an option -O that passes a list of names of the found files to a program; the default program for -O is a pager (usually less), but you can use your editor:

$ git grep -Ovim  # but not -O vim

BTW, if git is instructed to use less as the pager (i.e., if the pager is not configured in git at all it uses less by default, or if it gets less from the GIT_PAGER or PAGER environment variables, or if it was configured with git config [--global] core.pager less, or less is used in the command git grep -Oless), git grep passes the +/$pattern option to less, which is quite convenient. Unfortunately, git grep doesn't pass the pattern if the pager is not exactly less, even if it's less with parameters (something like git config [--global] core.pager less -FRSXgimq); fortunately, git grep -Oless always passes the pattern.

bash/zsh completion

It's a bit hard to type git rebase --interactive --preserve-merges HEAD~5 manually, even for those who are happy to use the command-line, and this is where shell completion is of great help. Bash/zsh come with programmable completion, often automatically installed and enabled, so if you have bash/zsh and git installed, chances are you are already done - just go and use it at the command-line.

If you don't have the necessary bits installed, install and enable the bash_completion package. If you want to upgrade your git completion to the latest and greatest, download the necessary file from git contrib. Git-for-windows comes with git-bash, for which bash completion is installed and enabled.

bash/zsh prompt

For command-line lovers the shell prompt can carry a lot of useful information. To include git information in the prompt use git-prompt.sh. Read the detailed instructions in the file.
Search the Net for "git prompt" to find other prompt variants.

SSH connection sharing

SSH connection sharing is a feature of OpenSSH and perhaps derivatives like PuTTY. SSH connection sharing is a way to decrease ssh client startup time by establishing one connection and reusing it for all subsequent clients connecting to the same server. SSH connection sharing can be used to speed up a lot of short ssh sessions like scp, sftp, rsync and of course git over ssh. If you regularly fetch/pull/push from/to remote repositories accessible over ssh then using ssh connection sharing is recommended.

To turn on ssh connection sharing add something like this to your ~/.ssh/config:

Host *
    ControlMaster auto
    ControlPath ~/.ssh/mux-%r@%h:%p
    ControlPersist 600

See the OpenSSH wikibook and search for more information.

SSH connection sharing can be used at GitHub, GitLab and SourceForge repositories, but please be advised that BitBucket doesn't allow it and forcibly closes the master connection after a short inactivity period, so you will see errors like this from ssh: "Connection to bitbucket.org closed by remote host."

git on server

The simplest way to publish a repository or a group of repositories is git daemon. The daemon provides anonymous access; by default it is read-only. The repositories are accessible by git protocol (git:// URLs). Write access can be enabled but the protocol lacks any authentication means, so it should be enabled only within a trusted LAN. See git help daemon for details.

Git over ssh provides authentication and repo-level authorisation as repositories can be made user- or group-writeable (see parameter core.sharedRepository in git help config). If that's too permissive or too restrictive for some project's needs there is a wrapper gitolite that can be configured to allow access with great granularity; gitolite is written in Perl and has a lot of documentation.

Web interface to browse repositories can be created using gitweb or cgit.
Both are CGI scripts (written in Perl and C). In addition to a web interface both provide read-only dumb http access for git (http(s):// URLs). Klaus is a small and simple WSGI web server that implements both a web interface and git smart HTTP transport; it supports Python 2 and Python 3 and performs syntax highlighting.

There are also more advanced web-based development environments that include the ability to manage users, groups and projects; private, group-accessible and public repositories; they often include issue trackers, wiki pages, pull requests and other tools for development and communication. Among these environments are Kallithea and pagure, both written in Python; pagure was written by Fedora developers and is being used to develop some Fedora projects. GitPrep is yet another GitHub clone, written in Perl. Gogs is written in Go. GitBucket is written in Scala.

And last but not least, GitLab. It's perhaps the most advanced web-based development environment for git. Written in Ruby, the community edition is free and open source (MIT license).

From Mercurial to git

There are many tools to convert Mercurial repositories to git. The most famous are, probably, hg-git and fast-export (many years ago it was known under the name hg2git). But a better tool, perhaps the best, is git-remote-hg. It provides transparent bidirectional (pull and push) access to Mercurial repositories from git. Its author wrote a comparison of alternatives that seems to be mostly objective.

To use git-remote-hg, install or clone it, add it to your PATH (or copy the script git-remote-hg to a directory that's already in PATH) and prepend hg:: to Mercurial URLs. For example:

$ git clone
$ PATH=$PATH:"`pwd`"/git-remote-hg
$ git clone hg:: PEPs

To work with the repository just use regular git commands including git fetch/pull/push.

To start converting your Mercurial habits to git see the page Mercurial for Git users at the Mercurial wiki.
In the second half of the page there is a table that lists corresponding Mercurial and git commands. It should work perfectly in both directions. The Python Developer's Guide also has a chapter Mercurial for git developers that documents a few differences between git and hg.

Git and GitHub

gitsome - Git/GitHub command line interface (CLI). Written in Python, works on MacOS, Unix, Windows. Git/GitHub CLI with autocomplete, includes many GitHub integrated commands that work with all shells, builtin xonsh with Python REPL to run Python commands alongside shell commands, command history, customizable highlighting, thoroughly documented.
http://docs.activestate.com/activepython/3.5/peps/pep-0103.html
This is an abbreviated version of the book Make Your Own Python Text Adventure.

Typically, a text adventure game involves the player exploring and interacting with a world to tell a story. For this tutorial, I wanted the game to take place in a cave with the goal of escaping alive. You can of course use this idea too, but feel free to use your own idea! Almost all parts of this tutorial are interchangeable with your own custom elements. We're going to start by defining some items and enemies.

Items

Start by creating a new directory called adventuretutorial and create a blank file called __init__.py. This tells the Python compiler that adventuretutorial is a Python package which contains modules. Go ahead and create your first module in this same directory called items.py.

The first class we are going to create is the Item class. When creating a class, consider the attributes that class should have. In general, it will be helpful for items to have a name and description. We can use these to give the player information about the item. Let's also add a value attribute that will help players to compare items within an economy.

class Item():
    """The base class for all items"""
    def __init__(self, name, description, value):
        self.name = name
        self.description = description
        self.value = value

    def __str__(self):
        return "{}\n=====\n{}\nValue: {}\n".format(self.name, self.description, self.value)

This class has just two methods, but we're going to see them pop up a lot. The __init__ method is the constructor and it is called whenever a new object is created. The __str__ method is usually a convenience for programmers as it allows us to print an object and see some useful information. Without a __str__ method, calling print() on an object will display something unhelpful like <__main__.Item object at 0x101f7d978>.

While we could create an Item directly, it wouldn't be very interesting for the player because there is nothing special about it.
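To see __str__ in action, here's a quick usage sketch. The Torch item is an illustrative stand-in, not part of the tutorial's item set, and the Item class is repeated so the snippet runs on its own:

```python
class Item():
    """The base class for all items"""
    def __init__(self, name, description, value):
        self.name = name
        self.description = description
        self.value = value

    def __str__(self):
        return "{}\n=====\n{}\nValue: {}\n".format(self.name, self.description, self.value)

# print() calls __str__ behind the scenes.
torch = Item("Torch", "A smoky torch, good for a few hours of light.", 3)
print(torch)
# Torch
# =====
# A smoky torch, good for a few hours of light.
# Value: 3
```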
Instead, we're going to extend the Item class to define more specific items. One of my favorite parts of games is finding treasure so let's go ahead and create a Gold class.

class Gold(Item):
    def __init__(self, amt):
        self.amt = amt
        super().__init__(name="Gold",
                         description="A round coin with {} stamped on the front.".format(str(self.amt)),
                         value=self.amt)

The Gold class is now a subclass of the superclass Item. Another word for a subclass is child class and superclasses may be called parent or base classes. The constructor here looks a little confusing but let's break it down. First you'll notice an additional parameter amt that defines the amount of this gold. Next, we call the superclass constructor using the super().__init__() syntax. The superclass constructor must always be called by a subclass constructor. If the constructors are exactly the same, Python will do it for us. However, if they are different, we have to explicitly call the superclass constructor.

Note that this class doesn't have a __str__ method. If a subclass doesn't define its own __str__ method, the superclass method will be used in its place. That's OK for the Gold class because the value is the same as the amount so there's no reason to print out both attributes.

I mentioned earlier that this game is going to have weapons. We could extend Item again to make some weapons, but weapons all have something in general: damage. Whenever a lot of specific classes are going to share the same concept, it's usually a good idea to store that shared concept in a superclass. To do this, we will extend Item into a Weapon class with a damage attribute and then extend Weapon to define some specific weapons in the game.
class Weapon(Item):
    def __init__(self, name, description, value, damage):
        self.damage = damage
        super().__init__(name, description, value)

    def __str__(self):
        return "{}\n=====\n{}\nValue: {}\nDamage: {}".format(self.name, self.description, self.value, self.damage)

class Rock(Weapon):
    def __init__(self):
        super().__init__(name="Rock",
                         description="A fist-sized rock, suitable for bludgeoning.",
                         value=0,
                         damage=5)

class Dagger(Weapon):
    def __init__(self):
        super().__init__(name="Dagger",
                         description="A small dagger with some rust. Somewhat more dangerous than a rock.",
                         value=10,
                         damage=10)

This should feel very familiar as we are following the same process of creating subclasses to define more specific elements in the game.

Enemies

Now that you're an expert at extending objects, creating the enemies will be a breeze. Go ahead and create a new module called enemies.py. Our base class Enemy should include a name, hit points, and damage the enemy does to the player. We're also going to include a method that allows us to quickly check if the enemy is alive.

class Enemy:
    def __init__(self, name, hp, damage):
        self.name = name
        self.hp = hp
        self.damage = damage

    def is_alive(self):
        return self.hp > 0

Next, create a few subclasses of Enemy to define some specific enemies you want the player to encounter. Here are mine, but feel free to use your own ideas instead.

class GiantSpider(Enemy):
    def __init__(self):
        super().__init__(name="Giant Spider", hp=10, damage=2)

class Ogre(Enemy):
    def __init__(self):
        super().__init__(name="Ogre", hp=30, damage=15)

Notice that we don't have to include is_alive() in the subclasses. This is because subclasses automatically get access to the methods in the superclass.

If you are completely new to programming, some of this may be confusing. My book Make Your Own Python Text Adventure is great for beginners because it does not assume any knowledge of programming concepts.

Hey Philip, what version of python did you make this on?
I'm very new to python in general, but I keep getting syntax errors regarding colons and indents, but I'm writing them exactly as you have them. I'm wondering if it might be a translation issue with a different version of python (I'm using v.3.2.3), or maybe it is too soon to run the module?

Hi, Ryan! That version should be fine. If you run into too much trouble, try checking out the project from GitHub to compare to the code you've written. Also, at this point in the tutorial, there isn't really anything to run.

Hi, i have a few questions. in the beginning you write that we should create a directory called adventuretutorial and two blank files called _init_.py and items.py. Can I make the folder and files on my desktop? I'm a little bit confused on how I am going to use this.

Uhm, in your code example for the enemy main class, you just said Enemy: and then went onto the next thing. Are you sure there should not be empty parenthesis before the colon? Also, in the example, it says giant spider twice, and no ogre 🙂

Nice catch! I updated the post.

Yep! You only need to use parentheses when you extend a base class.

When I try to run it in Terminal, it says

Traceback (most recent call last):
File "/Users/lumbo/Desktop/adventuretutorial/game.py", line 3, in
from player import Player
File "/Users/lumbo/Desktop/adventuretutorial/player.py", line 2, in
import items, world
File "/Users/lumbo/Desktop/adventuretutorial/items.py", line 1, in
class Gold(Item):
NameError: name 'Item' is not defined

Hmm, I'm not sure what the problem is. That probably means there is a typo or omission somewhere. You might want to compare your code against my code in the GitHub repository to make sure it is the same. Happy programming!

Okay, I fixed the: NameError: name 'Item' is not defined

But now I have this: TypeError: super() takes at least 1 argument (0 given)

Do I have to put something in between the parenthesis?
When I try to run the game, it gives me this error: NameError: global name ‘__init__’ is not defined How do I fix this? Without more details, I would try a few things: What does ‘amt’ mean? It means “amount”. As in amount of gold. Hey. I am used to python 2, and i dont know that much, but how do i make directories and stuff? Hey, I’m getting a TypeError when I run the code. Says __init__ takes 1 positional argument but 3 were given. Using Python 3.5, and would gladly use any assistance. This error means that an __init__()function is not set up for three arguments, but you’re trying to pass that many arguments in. Without seeing the complete code, my guess would be there’s a problem in the Itemclass. Ignoring self, count your arguments in each initializer starting with your most-specific class and working your way up to the most general class. I keep getting AttributeError: “Gold” object has no attribute ‘description’ That looks OK to me, where are you getting that error? The Python stack trace should give you some more information. Compare your code where the error is against mine and make sure everything matches. not sure why description wasn’t being read but I found an additional error … super().__init__(name=”Gold”, description=”A round coin with {} stamped on the front.”.format(str(self.amt)), value=self.amt ) <—– #this class wasn't closed properly on yours when you say to make a new class e.g. gold class do i make a new module or do i use the item module? Thanks Either would work, but it makes sense to me to include it in the existing items module. A correction: The constructor for a Python class is named ‘__new__’; it is a class method that returns the new instance. Most beginners never need to know about the constructor and never need to use ‘__new__’. The ‘__init__’ method of an instance is not the constructor. It is the initialiser for the instance; it has access to the new instance (as ‘self’) and returns None. 
Beginners should learn this as the “initialiser” (not the constructor, which is different). Thank you for making this course! How would I go about making a Bandage item that appends the players health so the player has a way of gaining his/her health back There’s quite a few steps involved, but in general, you need a Heal action that checks the player’s inventory for the bandaid. If a bandaid is present, then call a Heal function that increases the player HP, and remove the bandaid from the inventory. This is actually something I go into detail about in the book, if you’re interested. Do you need to create new modules to make the game? Or can you just put everything into one file. You can put everything in one file, but it may make it more difficult to follow along or compare to my code. I continually get the following error. The specifics change slightly, it seems, depending on which enemy is being assigned to an enemy tile. All throw name ‘enemies’ is not defined. I figured out a pervious problem involving the import but non-evocation of the file Player. i.e. I had imported it but not defined it! Here I cannot find a similar problem. Any Suggestions? I am in chapter 12 of the book. Is it possible that you are missing the import enemiesat the top of your file? That error means Python doesn’t know what “enemies” means. By using the import, you are telling Python that it is a package to access. I didn’t import enemies in world! AHHHHH!!! Now it is throwing a list of other errors but they seem to be typos. Thanks After create your items class, are the actual items objects in the class or are you defining them in a dictionary or something outside the class? When you get to part 3, you will see that we create a Player object and that object has a list of items (i.e. the inventory). 
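For anyone curious about the bandage discussion above, here is a rough sketch of the heal-from-inventory idea. The Bandage and Player classes below are hypothetical stand-ins for illustration, not the book's actual code:

```python
class Bandage:
    """A hypothetical consumable item that restores hit points."""
    healing_value = 20

class Player:
    def __init__(self):
        self.hp = 60
        self.max_hp = 100
        self.inventory = [Bandage()]

    def heal(self):
        """Consume the first Bandage in the inventory, if there is one."""
        bandages = [item for item in self.inventory if isinstance(item, Bandage)]
        if not bandages:
            print("You have no bandages!")
            return
        # Cap healing at max_hp and remove the used bandage.
        self.hp = min(self.max_hp, self.hp + bandages[0].healing_value)
        self.inventory.remove(bandages[0])

player = Player()
player.heal()
print(player.hp)  # 80
```

The same pattern works for any consumable: check the inventory, apply the effect, then remove the item so it can't be reused.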
Another Question sorry, reading through the book, and im wondering is it necessary to create different classes of items or would be fine just creating instances of item with the specific name, descrip, and value. I’m making a dating based adventure game, and i cant see reason for a bunch of classes of items. Thanks There’s no right or wrong answer to this question. It really depends on how you’re going to use the items in your code. In general, it’s nice to not repeat yourself. So if you know you are going to create a bunch of Cats in your code, you don’t want to retype the description of the Cat in lots of places, so you make a Cat class. It can also help with code readability to know something about the object based on its name. The more dissimilar your objects are, the more you should lean towards using distinct classes. The more similar your are objects are the more you should lean toward using a generic class with slightly different attributes for each instance. I only have Python 2.7, will this only run on Python 3.5? I’m assuming the only difference I need to make is typing object: eg: class Item(object) ? I am pretty sure it should work on any Python 3.X. I can’t vouch for this, but someone else in the comments made a 2.X-compatible fork. Thanks Phillip! Hi, thanks for the tutorial! I would like to include another property of weight (and also add some other damage properties) So here I am inheriting the name, desc and value from the parent Item, and then I am specifying the weapon-specific attributes (damage, damage2 and weight) that I would like all weapons to share. However I am not sure how to modify the below: I guess I would add (self.damage2, self.weight) to the bracketed text But I am not sure what everything in the “{}….{}” means or what the .format means. Could you help, please? Any help would be much appreciated 🙂 Thank you again! 
Hi DL, The braces are like placeholders and everything inside of format(...) are the things that go into the placeholders. So, you'd need to add two more placeholders and then the new attributes into the format method. This DigitalOcean tutorial gives a lot more details, if you are interested.

Hey Phillip, Just wanted to say thank you for both the reply and the link. Python is the first prog language I've ever really looked at. The /n confused me and I assumed it was mathematical hahaha! 🙂 Thank you so much for the explanation. All makes sense now. DL

hey i keep my console keeps giving me:

^ Traceback (most recent call last):
"C:/Users/WILL/Documents/j785797398728/6583672687649819/code/Out_the_dark_hole/game.py", line 2, in
from player import player
File "C:/Users/WILL/Documents/j785797398728/6583672687649819/code/Out_the_dark_hole\player.py", line 1, in
import items, world
File "C:/Users/WILL/Documents/j785797398728/6583672687649819/code/Out_the_dark_hole\items.py", line 20
class Weapon(Item):
SyntaxError: invalid syntax

yes it is copy and pasted but i can't figure out what is wrong with the syntax

How do you make a directory?

How would one go about equipping and unequipping these weapons??

I'm getting an error message on the "class Item()" line. It says "expected an indented block" What do I do help?! P.S: I'm using Python 3.7.4.

It somehow fixed itself. nevermind

Why do we need the Item parent class for the Weapon child class? We don't use any variables or methods from it.

We use its attributes (name, description, etc.) when viewing the inventory.

I keep getting a problem in my code where it says that "Item" is not defined. For example:

class Gold(Item):
    def __init__(self, amt):
        self.amt = amt
        super().__init__(name="Gold", description="A round coin with {} stamped on the front.".format(str(self.amt)), value=self.amt)

I keep getting a yellow line under "Gold(Item)" and when I hover over it to see the issue it says that "Item" is not defined.
If it helps at all I am using Node to make this
https://letstalkdata.com/2014/08/how-to-write-a-text-adventure-in-python-part-1-items-and-enemies/?replytocom=17553
Hello World sample program in C#
By: Ram Baskar

The following is a very simple C# program, a version of the classic "Hello, world" example:

using System;

class ExampleClass
{
    static void Main()
    {
        Console.WriteLine("Hello, world!");
    }
}

The effect is to write the following text to the output console:

Hello, world!

ExampleClass

Above is a class definition. Everything between the following pair of braces describes ExampleClass.

static void Main()

This declares the class member method where the program begins execution. The .NET runtime calls the Main method. (Note: Main may also be called from elsewhere, like any other method, e.g. from another method of ExampleClass.) The static keyword makes the method accessible without an instance of ExampleClass. The call to Console.WriteLine prints "Hello, world!" to the console.

A GUI example:

using System.Windows.Forms;

class ExampleClass
{
    static void Main()
    {
        MessageBox.Show("Hello, world!");
    }
}
http://www.java-samples.com/showtutorial.php?tutorialid=1434
On Thu, Nov 13, 2008 at 11:39:18AM +0200, Yoav Steinberg wrote:
>
>
> Michael Niedermayer wrote:
> >> Index: mpegaudio_parser.c
> >> ===================================================================
> >> --- mpegaudio_parser.c (revision 15776)
> >> +++ mpegaudio_parser.c (working copy)
> >> @@ -24,15 +24,46 @@
> >> #include "mpegaudio.h"
> >> #include "mpegaudiodecheader.h"
> >>
> >> +#define HEADERS_REQUIRED_FOR_SYNC 3
> >> +#define MAX_Q_SIZE 4
> >> +#if (MAX_Q_SIZE < HEADERS_REQUIRED_FOR_SYNC) || (((MAX_Q_SIZE-1) & MAX_Q_SIZE) != 0)
> >> +#error Header queue size too small or not power of 2
> >> +#endif
> >> +#define ENQ_HDR(ctx,pktptr,pktsize) do { \
> >> + (ctx)->hdr_q_size++; \
> >> + (ctx)->hdr_q_last = ((ctx)->hdr_q_last + 1) & (MAX_Q_SIZE-1); \
> >> + (ctx)->hdr_q[(ctx)->hdr_q_last].ptr = (pktptr); \
> >> + (ctx)->hdr_q[(ctx)->hdr_q_last].size = (pktsize); \
> >> + } while (0)
> >> +#define DEQ_HDR(ctx,pktptr,pktsize) do { \
> >> + int idx = ((ctx)->hdr_q_last - ((ctx)->hdr_q_size - 1)) & (MAX_Q_SIZE-1); \
> >> + pktptr = (ctx)->hdr_q[idx].ptr; \
> >> + pktsize = (ctx)->hdr_q[idx].size; \
> >> + if (--(ctx)->hdr_q_size == 0) \
> >> + CLRQ(ctx); \
> >> + } while (0)
> >> +#define CLRQ(ctx) do { \
> >> + (ctx)->hdr_q_size = 0; \
> >> + (ctx)->inbuf_ptr = (ctx)->inbuf + ((ctx)->inbuf_ptr - (ctx)->inbuf_pkt_end); \
> >> + (ctx)->inbuf_pkt_end = (ctx)->inbuf; \
> >> + } while (0)
> >> +#define PKT_READY_IN_Q(ctx) ((ctx)->hdr_q_size > 1 || ((ctx)->hdr_q_size == 1 && !(ctx)->reading_frame_data))
> >
> > ...
> >
> > this (and the rest of the patch) looks scarily messy
> > also this patch is probably >10 times bigger than needed. And such big redesign
> > requires a lengthy justification. besides the code might benefit from using
> > ff_combine_frame(), though changing the parser to use ff_combine_frame() has
> > to be seperate from any intended changes to its "behavior".
>
> The patch might look messy because I basically replaced the existing
> mpegaudio_parse() so the diff came out messy. In any case I used the
> existing logic for finding the frames in the stream, checking for their
> validity and consistency, and conserving the same exit conditions from
> the function.
> I added a "header queue" to keep track of where packets are stored in
> the inbuf, and to easily clear the inbuf or return a valid packet from
> the queue when when needed (when header_count is reset or when it
> reaches a minimum valid count, respectively).
> I changed the loop structure to support dequeuing and returning old
> frames if they exist in the queue.
> I changes the parsing state structure to contain a queue of positions in
> inbuf where previous packets have been stored. I also created a new
> state variable indicating if we're in the midst of reading frame data
> (which was basically a side functionality of the frame_size state var).
> For the sake of clarity I added comments, changed the header_count state
> var to have a positive minimal boundary (instead of obscure negative int
> logic), and used macros and defines for the queue handling and constants
> (previously duplicated hardcoded numbers).
> Also I made sure the existing optimization hack of not copying the frame
> to inbuf if its contained in the input buffer was preserved. And used
> the existing inbuf to store the packets to avoid additional memcpy's.
>
> Regarding ff_combine_frame(), I'm not sure I understand its
> functionality, and I'm not sure it'll provide any assistance when
> tracking multiple "future" packets as required here. Additionally a
> quick look at ff_combine_frame() hits to uses of additional memcpy's and
> allocation which aren't really required here.
>
> > Also thinking about this, the parser should not delay or drop the packets
> > at all, it should just delay setting the sample_rate/other values. And the
> > decoder&ffmpeg should be fixed to do something reasonable with the first
> > damaged packet(s)
> >
> This is more of a design issue. I assume that it's the parsers role to
> provide valid frames to the decoder since parsing should be the process
> of determining which frames are valid frames in the stream. If you
> believe a fix in the decoder is more appropriate then maybe you can
> point me out to where/what should be done. Since I'm concerned about
> getting all the valid frames and not dropping any valid frames the
> current patch works great for me.
> As a side not, I've tested the patch on a large collection of mp3's
> collected from many sources to verify the deframing logic, which now
> seems much better than before for the (all too common) corrupt mp3 files.

could you provide some mp3s that are not handlded optimally?

anyway, this patch is definitly: <>
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-January/066697.html
I was going to finally get to the point, but I decided it would be good to consolidate our understanding of "the most significant difference" from the previous parts into a single short summary. The groundhog has seen his shadow and you are in for six more weeks of winter.

Morton Order Redux

Recall our rather boring definition of a Morton-ordered Key.

import Data.Word

-- show
data Key = Key {-# UNPACK #-} !Word {-# UNPACK #-} !Word
-- /show
  deriving (Show, Read, Eq)

main = putStrLn "It typechecks."

I want to refactor the trick for comparing in Morton order from Part 2 that we revisited in Part 4 into a more reusable form. To that end, let us consider how to compare two unsigned words for how they differ in the placement of their most significant bit. Logically I want to compare on msb, without paying for calculating the position of the most significant bit directly.

To do so, I first observe that we can first compare our two values a and b. If they match, then trivially they agree on the position of their most significant set bit! If they don't, then either a < b or b < a. Without loss of generality, let's assume a < b. Then either a has the same msb as b or it doesn't.

If a has the same msb as b then xor a b will not have that bit set, so a < xor a b will be False, as a has a more significant bit set than xor a b does. If a does not have the same msb as b, and a < b, then b has it set and a does not, so the more significant bit will be set in xor a b, and a < xor a b will be True.

Putting all of this logic together yields the following combinator:

import Data.Bits
import Data.Word

-- show
compares :: Word -> Word -> Ordering
compares a b = case compare a b of
  LT | a < xor a b -> LT
  GT | b < xor a b -> GT
  _ -> EQ

main = print $ compares 4 7
-- /show

We can similarly reason through specialized scenarios to obtain <, <=, ==, /=, >=, > restricted to the most significant bit.
import Data.Bits
import Data.Word

-- show
lts, les, eqs, nes, ges, gts :: Word -> Word -> Bool
lts a b = a < b && a < xor a b
les a b = a <= b || xor a b <= b
-- /show
eqs a b = a >= min b c && b >= max a c where c = xor a b
nes a b = a < min b c || b < min a c where c = xor a b
-- show
...
gts a b = a > b && xor a b > b
ges a b = a >= b || a >= xor a b

main = print $ les 4 7
-- /show

With that we can see our earlier Ord instance for Key is just:

import Data.Bits
import Data.Word

data Key = Key {-# UNPACK #-} !Word {-# UNPACK #-} !Word
  deriving (Show, Read, Eq)

lts :: Word -> Word -> Bool
lts a b = a < b && a < xor a b

-- show
instance Ord Key where
  Key a b `compare` Key c d
    | xor a c `lts` xor b d = compare b d
    | otherwise = compare a c
-- /show

main = putStrLn "It typechecks."

Now that is much easier to read: if the most significant difference between a and c is less significant than the most significant difference between b and d, then we should just compare b and d; otherwise we compare a with c.

One of these things is not like the others

This makes it clear why we had three uses of xor in the original: the first two were to calculate the differences themselves, while the last xor was simply to compare by most significant bit! Switching to these internally made almost a factor of two difference in the performance of the overall multiplier relative to actually performing the masking! This bodes well for the practicality of the as-yet-still-unbenchmarked IntMap alternative described in part 4.

August 25 2013

P.S. This also means I effectively just claimed No-Prize #5 for myself. ;)
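The xor trick above isn't Haskell-specific; as a cross-check, here is a Python sketch of compares and lts, using -1/0/1 in place of LT/EQ/GT (Python's arbitrary-precision ints drop the Word-size assumption but the logic is unchanged):

```python
def compares(a, b):
    """Compare two non-negative ints by the position of their most significant set bit."""
    if a < b and a < (a ^ b):
        return -1  # msb of a is strictly lower than msb of b
    if a > b and b < (a ^ b):
        return 1   # msb of a is strictly higher than msb of b
    return 0       # same msb position

def lts(a, b):
    """True iff the most significant set bit of a is below that of b."""
    return a < b and a < (a ^ b)

print(compares(4, 7))  # 0: 4 (0b100) and 7 (0b111) share their msb
print(lts(2, 5))       # True: msb of 2 (0b010) is below msb of 5 (0b101)
```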
https://www.schoolofhaskell.com/user/edwardk/revisiting-matrix-multiplication/part-6
The particle context command creates a particle context. The particle context provides an interactive means to create particle objects. The particle context command also provides an interactive means to set the option values, through the Tool Property Sheet, for the particle command that the context will issue. In query mode, return type is based on queried flag.

Derived from mel command maya.cmds.dynParticleCtx

Example:

import pymel.core as pm

pm.dynParticleCtx( 'dynParticleContext', e=True, nj=5, jr=1.5 )
# Set the option values for number of jitters to 5 and jitter
# radius to 1.5 in the particle context, which will result in
# creating 5 particles for each mouse click in the viewport,
# randomly placed, but all within 1.5 units of the mouse click.
http://www.luma-pictures.com/tools/pymel/docs/1.0/generated/functions/pymel.core.context/pymel.core.context.dynParticleCtx.html#pymel.core.context.dynParticleCtx
Opened 2 years ago

Last modified 3 months ago

One should be able to create custom FilterSpecs for the admin's list_filter interface. Another approach by korpios is described in the comments.

I really like the idea behind this, but I'd need to review the code closer and won't take the time to do it right now since it's a new feature. I'd rather focus on bugs preventing newforms-admin from being used.

Also see #3400 for other ideas about filters which could benefit from this scheme...

This ticket isn't critical to merge newforms-admin into trunk. Tagging with nfa-someday.

How about a FilterSpec?.insert method to do prepend and FilterSpec?.register to do append? discussed somewhere, no?

I'm attaching my crack at this issue; in particular, I wanted to allow custom FilterSpecs that aren't associated with a field. This way, you can throw together a filter for fairly arbitrary queries. Much of the field-specific code from FilterSpec has been moved out to a subclass, FieldFilterSpec; that should be used as the superclass for custom field-based filterspecs, while FilterSpec can be used for non-field-based ones. The list_filter syntax gains two further options besides field names: a FilterSpec subclass, or a tuple of ('field_name', FieldFilterSpecSubclass). An example combining all three:

list_filter = (
    'field1',
    ('field2', SomeFieldFilterSpec),
    SomeFilterSpec,
)

Custom FilterSpecs?, also allowing fieldless FilterSpecs?

Tweaked the summary and description.

that was me; sorry.

updated version of my simple patch.

I'm not sure having a reversed registry is intuitive. It certainly makes overriding FilterSpecs? simple. Except if I were to register two FilterSpecs? I would expect them to be tested in the order I add them, the same way it is done in the rest of Django. Maybe the FilterSpec?'s should be in a Priority Queue instead of a Queue?

I had created a solution which doesn't reverse the registry ticket:8330.
But I'm realizing now that this wouldn't allow for overriding other filters like Boolean. I'll keep playing.

Suggestion: make FieldFilterSpecs similar to how widgets work in newforms-admin.

Based on korpios' idea I made a patch that takes the idea further. With korpios' current solution you can add custom filters but cannot play with the queryset, and filters based only on field lookups are too little for complex administrations. I provide a patch which lets you add custom filters, as korpios suggested. Furthermore, every FilterSpec can implement get_query_set:

class SomeFilterSpec(FilterSpec):
    def get_query_set(self, cl, qs):
        if self.params.has_key("custom_get_parameter"):
            return qs.filter(somefield__startswith=self.params["custom_get_parameter"])

Note that this is a trivial example that can also be achieved with field lookup parameters, but there are no limits on how to filter the result set.

Attachment: custom FilterSpecs (fieldless) with custom queryset manipulation.

Line 103 of filterspec_with_custom_queryset_against_1_0.diff should read:

+ super(RelatedFilterSpec, self).__init__(request, params, model, model_admin, f)

I believe there is a small bug with this patch. Line 31 of filterspec_with_custom_queryset_against_1_0.diff should be indented once more so that get_field() is only called when a field is defined in the list_filter. In the case that the list_filter item is a single callable item, the field variable is not set and get_field() should not be called for verification.

Also, any word on whether this feature will make it into 1.1? I notice that it is not listed as implemented on the status page. I'm using this feature frequently in my application, and would prefer to run a released version of Django instead of patching.
:) New patch is a diff against v1.0.2 of Django and includes fixes mentioned in the following comments:

10/08/08 20:08:44 changed by anonymous
12/12/08 09:58:24 changed by gerdemb

Patch updated to latest HEAD, passing all tests. I updated the patch to the latest HEAD and fixed the TODO comments. In order to pass the existing tests, I had to disable the use of custom GET params. In the original patch, the filterspec could consume a custom param and then return an arbitrary queryset based on this parameter, but to do this it silently removed the parameters that don't match a field name from the request. The problem is that there is a test to ensure that any parameters that do not match field names are handled by forwarding to ?e=1 (testIncorrectLookupParameters), which failed when the previous patch silently removed them. Additionally, the code in the previous patch didn't handle filters on M2M relationships through an intermediate table because it assumed the first part of the search param would be a field name.

Developer discussion here:

I am attaching a new patch against the latest HEAD. It's based on gerdemb's patch but now allows for consuming GET parameters, to allow for custom, non-field-based querystring keys. The patch also implements a priority queue in a sense: FieldFilterSpec.register() allows you to pass in high_priority=True in order to register the FieldFilterSpec as high priority. This argument is True by default but passed in as False for all default filter specs. High-priority specs are inserted before normal-priority specs, but order is preserved amongst the two separate groups. Let me know what you think of that solution. I'm also moving this to Accepted since it's on the 1.1 features list (forgive me if this is an incorrect decision). Also flagging it as Needs tests.
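The high_priority registration scheme described in that comment can be sketched in isolation. The names (FilterSpec, register, high_priority) follow the ticket, but the implementation below is purely illustrative and is not Django's actual code; the string "spec" results stand in for real filter-spec instances.

```python
class FilterSpec:
    # (high_priority, test, factory) triples; high-priority entries sit at the
    # front, and insertion order is preserved within each priority group.
    filter_specs = []

    @classmethod
    def register(cls, test, factory, high_priority=True):
        if high_priority:
            # Insert after the existing high-priority entries so that order
            # amongst high-priority registrations is preserved.
            idx = sum(1 for hp, _, _ in cls.filter_specs if hp)
            cls.filter_specs.insert(idx, (True, test, factory))
        else:
            cls.filter_specs.append((False, test, factory))

    @classmethod
    def create(cls, field_name):
        # The first spec whose test matches the field wins.
        for _, test, factory in cls.filter_specs:
            if test(field_name):
                return factory(field_name)
        raise LookupError('no FilterSpec matches %r' % field_name)


# Default specs register at normal priority, in order:
FilterSpec.register(lambda f: f.endswith('_flag'),
                    lambda f: 'BooleanFieldFilterSpec(%s)' % f,
                    high_priority=False)
FilterSpec.register(lambda f: True,  # catch-all, like AllValuesFilterSpec
                    lambda f: 'AllValuesFilterSpec(%s)' % f,
                    high_priority=False)

before = FilterSpec.create('active_flag')   # the default boolean spec wins

# A user spec registered with the default high_priority=True overrides it:
FilterSpec.register(lambda f: f == 'active_flag',
                    lambda f: 'CustomFilterSpec(%s)' % f)
after = FilterSpec.create('active_flag')    # the custom spec now wins
fallback = FilterSpec.create('name')        # unmatched fields still fall through
```

This keeps the property the commenter asked for: specs are tested in registration order within each group, so overriding a built-in spec does not reverse the whole registry.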
New patch against r9836. Please see 5833-against-9836-new-proper.patch and not 5833-against-9836-new.patch; I accidentally diffed some additional, unrelated changes, and couldn't get Trac to replace the old attachment. I deleted 5833-against-9836-new.patch to avoid confusion.

I don't see any documentation for this enhancement in any of the patches? That makes it hard to get started on reviewing the code. In the absence of docs, tests would also at least show how the new function is intended to be used, but those too are still missing. If anyone who has worked on the patches here, or has an interest in seeing this done, could provide some of these missing pieces, that would help move this one along. Without docs and tests, and with 1.1 feature freeze coming up soon, this isn't going to make 1.1.

Replying to jacob: "Without docs and tests, and with 1.1 feature freeze coming up soon, this isn't going to make 1.1." I would like to get this committed; we can work on it during the PyCon sprint. It is really important for us that this ticket or #3400 gets into 1.1, and we are ready to put some work into it.

The feature freeze is tomorrow. Only bug fixes will be happening during the sprints at PyCon. I'm afraid this is just going to have to wait until 1.2.

Re-targeting for 1.2.
http://code.djangoproject.com/ticket/5833
It is helpful to have read an introduction to JavaScript (for example, JavaScript: The Definitive Guide) before diving deeper into QML. It's also helpful to have a basic understanding of other web technologies like HTML and CSS, but it's not required. QML looks like this:

import Qt 4.7

Commenting in QML is similar to JavaScript. Properties begin with a lowercase letter (with the exception of Attached Properties). QML supports properties of many types (see QML Basic Types). The basic types include int, real, bool, string, color, and lists.

Item {
    x: 10.5             // a 'real' property
    ...
    state: "details"    // a 'string' property
    focus: true         // a 'bool' property
}

QML properties are what is known as typesafe. That is, they only allow you to assign a value that matches the property type. For example, the x property of Item is a real, and if you try to assign a string to it you will get an error.

Item {
    x: "hello"          // illegal!
}

Each object can be given a special unique property called an id. No other object within the same QML document can have the same id value. Assigning an id enables the object to be referred to by other objects and scripts. The first Rectangle element below has an id, "myRect"; the second Rectangle element can then refer to it by that id. List properties look like this:

Item {
    children: [
        Image {},
        Text {}
    ]
}

The list is enclosed in square brackets, with a comma separating the list elements. In cases where you are only assigning a single item to a list, you can omit the square brackets:

Image {
    children: Rectangle {}
}

Some objects attach properties to another object. Attached Properties are of the form Type.property, where Type is the type of the element that attaches the property. For example:

Component {
    id: myDelegate
    Text {
        text: "Hello"
        color: ListView.isCurrentItem ? "red" : "blue"
    }
}

ListView {
    delegate: myDelegate
}

The ListView element attaches the ListView.isCurrentItem property to each delegate it creates.
Another example of attached properties is the Keys element which attaches properties for handling key presses to any visual Item, for example: Item { focus: true Keys.onSelectPressed: console.log("Selected") } Signal handlers allow actions to be taken in response to an event. For instance, the MouseArea element has signal handlers to handle mouse press, release and click: MouseArea { onPressed: console.log("mouse button pressed") } All signal handlers begin with "on". Some signal handlers include an optional parameter, for example the MouseArea onPressed signal handler has a mouse parameter: MouseArea { acceptedButtons: Qt.LeftButton | Qt.RightButton onPressed: if (mouse.button == Qt.RightButton) console.log("Right mouse button pressed") }
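The "on" prefix rule for signal handler names is purely mechanical: prepend "on" and capitalize the first letter of the signal name. A tiny helper (written in Python purely for illustration; QML applies this rule internally, and handler_name is not a Qt API) makes the mapping explicit:

```python
def handler_name(signal):
    # QML derives a handler name by prepending "on" and
    # capitalizing the first letter of the signal name.
    return 'on' + signal[0].upper() + signal[1:]

# Signals mentioned in the text above and their handler names:
examples = {s: handler_name(s) for s in ('pressed', 'clicked', 'selectPressed')}
```

So the MouseArea "pressed" signal is handled by onPressed, and the Keys "selectPressed" signal by onSelectPressed, matching the snippets above.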
https://doc.qt.io/archives/qt-4.7/qdeclarativeintroduction.html
1. Python Module.

Python has many useful built-in modules, which can be used as soon as the installation is complete. We use the built-in sys module as an example to write a hello module as below.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

' a hello test module '

__author__ = 'Richard Trump'

import sys

def say_hello():
    args = sys.argv
    if len(args) == 1:
        print('Hello World!')
    elif len(args) == 2:
        print('Hello, %s!' % args[1])
    else:
        print('Too many arguments!')

if __name__ == '__main__':
    say_hello()

Lines 1 and 2 are standard comments: the line 1 comment allows the hello.py file to be run directly on Unix/Linux/Mac, and the line 2 comment indicates that the .py file itself is encoded in standard UTF-8 encoding. Line 4 is a string that represents the document annotation for the module; the first string of any module source code is treated as the document annotation of the module. Line 6 uses the __author__ variable to record the author, so that when you expose the source code to the public, others can see your name. This is the standard file template for a Python module. And then there's the actual code part.

As you may have noticed, the first step to using the sys module is to import it with import sys. After importing the sys module, we have the variable sys pointing to the module, and through it we can access all the functions of the sys module. The sys module has an argv variable that stores all the arguments on the command line within a list. sys.argv has at least one element, because the first element is always the name of the .py file. For example: running python3 hello.py gives sys.argv the value ['hello.py']; running python3 hello.py Richard gives it ['hello.py', 'Richard'].

Finally, notice these two lines of code:

if __name__ == '__main__':
    say_hello()

When we run the hello module file on the command line, the Python interpreter sets a special variable called __name__ to '__main__'.
And if you import the hello module in another .py file, the if check will fail. So the if check allows a module to execute some extra code when run from the command line; most commonly this is test code in a main function. We can run hello.py from the command line to see what happens:

$ python3 hello.py
Hello World!
$ python3 hello.py Richard
Hello, Richard!

If you start the Python interactive environment and then import the hello module:

$ python3
>>> import hello
>>>

When imported, 'Hello World!' is not printed, because the say_hello() function is not executed. Only when you call hello.say_hello() is 'Hello World!' printed:

>>> hello.say_hello()
Hello World!

2. Python Module Variable Scope.

In a module, we may define many functions and variables. Some functions and variables we want others to use (public), and some we want to use only within the module (private). In Python, this is done with the _ prefix.

Normal function and variable names are public and can be referenced directly. Variables like __xxx__ are special variables that can be referenced directly but have special meanings; for example, __author__ and __name__ are special variables, and the document annotation defined by the hello module can also be accessed with the special variable __doc__. We normally don't use this kind of name for our own variables.

Functions or variables like _xxx and __xxx are not public (they are private) and should not be referenced directly. Private functions or variables should not be referenced externally, so what's the use of them? Please have a look at the example below.

def _say_hello(name):
    return 'Hello, %s' % name

def _say_hi(name):
    return 'Hi, %s' % name

def say(name):
    if len(name) > 3:
        return _say_hello(name)
    else:
        return _say_hi(name)

We expose the say() function in the module and hide the internal logic in the private functions.
In this way, callers of the say() function do not need to care about the details of the internal private functions, which is a very useful technique for code encapsulation and abstraction: define functions that are not referenced externally as private, and expose only those that are referenced externally as public.
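The import-versus-script behaviour described above can be exercised end to end. The sketch below writes the hello module to a temporary file (the file name hello.py and the temporary directory exist only for this demonstration), imports it without triggering the main guard, then runs it as a script via runpy, which sets __name__ to '__main__':

```python
import importlib.util
import os
import runpy
import sys
import tempfile
import textwrap
from contextlib import redirect_stdout
from io import StringIO

source = textwrap.dedent("""\
    ' a hello test module '
    import sys

    def say_hello():
        args = sys.argv
        if len(args) == 1:
            print('Hello World!')
        elif len(args) == 2:
            print('Hello, %s!' % args[1])
        else:
            print('Too many arguments!')

    if __name__ == '__main__':
        say_hello()
    """)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'hello.py')
    with open(path, 'w') as f:
        f.write(source)

    # Importing the module: __name__ is 'hello', not '__main__',
    # so the guarded say_hello() call does not run.
    spec = importlib.util.spec_from_file_location('hello', path)
    hello = importlib.util.module_from_spec(spec)
    buf = StringIO()
    with redirect_stdout(buf):
        spec.loader.exec_module(hello)
    imported_output = buf.getvalue()   # nothing was printed on import
    doc = hello.__doc__                # the module's first string literal

    # Running it as a script: runpy sets __name__ to '__main__',
    # so say_hello() runs and reads sys.argv.
    sys.argv, saved = [path, 'Richard'], sys.argv
    buf = StringIO()
    try:
        with redirect_stdout(buf):
            runpy.run_path(path, run_name='__main__')
    finally:
        sys.argv = saved
    result = buf.getvalue().strip()
```

Importing produces no output and exposes the docstring through __doc__, while running the file with an argument prints the greeting, exactly as in the command-line session shown earlier.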
https://www.code-learner.com/python-module-introduction/
Technical Support On-Line Manuals RL-ARM User's Guide (MDK v4)

#include <rtl.h>

int bind (
    int sock,               /* Socket descriptor */
    const SOCKADDR *addr,   /* Pointer to the local address */
    int addrlen);           /* Length of the SOCKADDR structure */

The bind function assigns a name to an unnamed socket. The name represents the local address and port of the communication end point. The argument sock specifies a socket descriptor returned from a previous call to socket. The argument addr is a pointer to the SOCKADDR structure containing the local address and port of the socket. The argument addrlen specifies the length of the SOCKADDR structure. The bind function is in the RL-TCPnet library. The prototype is defined in rtl.h.

Note: the bind function returns a result code.

See also: closesocket.
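RL-TCPnet's bind follows the classic BSD sockets model. For comparison, here is the equivalent step with Python's standard socket module (this is ordinary BSD sockets, not the RL-TCPnet API): an unnamed socket is created, then bind assigns it a local address and port.

```python
import socket

# Create an unnamed TCP socket, then bind it to a local address and port.
# Port 0 asks the OS to choose a free ephemeral port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 0))

# After bind, the socket has a "name": its local address/port end point.
addr, port = s.getsockname()
s.close()
```

As in the RL-TCPnet description, the name consists of the local address and port of the communication end point, retrievable here via getsockname().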
https://www.keil.com/support/man/docs/rlarm/rlarm_bind.htm
SQLAlchemy 1.1 Documentation (legacy version)

Changes and Migration: What's New in SQLAlchemy 1.1? | 1.1 Changelog | 1.0 Changelog (1.0.0b5, 1.0.0b4, 1.0.0b3, 1.0.0b2, 1.0.0b1)

1.0 Changelog

1.0.19

1.0.18
Released: July 24, 2017
oracle

1.0.17
Released: January 17, 2017
orm

1.0.16
Released: November 15, 2016
orm

[orm] [bug] Fixed bug in Session.bulk_update_mappings() where an alternate-named primary key attribute would not track properly into the UPDATE statement.

[orm] [bug] Fixed bug where joined eager loading would fail for a polymorphically-loaded mapper, where the polymorphic_on was set to an un-mapped expression such as a CASE expression.

[orm] [bug] Fixed bug where the ArgumentError raised for an invalid bind sent to a Session via Session.bind_mapper(), Session.bind_table(), or the constructor would fail to be correctly raised.

[orm] [bug] Fixed bug in Session.bulk_save() where an UPDATE would not function correctly in conjunction with a mapping that implements a version id counter.

[mssql] [bug] Updated the server version info scheme for pyodbc to use SQL Server SERVERPROPERTY(), rather than relying upon pyodbc.SQL_DBMS_VER, which continues to be unreliable particularly with FreeTDS.

[mssql] [bug] Added error code 20017 "unexpected EOF from the server" to the list of disconnect exceptions that result in a connection pool reset. Pull request courtesy Ken Robbins.

1.0.15
Released: September 1, 2016
orm
sql

1.0.14
Released: July 6, 2016
engine

[engine] [bug] [postgresql] Fixed bug in cross-schema foreign key reflection in conjunction with the MetaData.schema argument, where a referenced table that is present in the "default" schema would fail since there would be no way to indicate a Table that has "blank" for a schema.
The special symbol schema.BLANK_SCHEMAhas been added as an available value for Table.schemaand Sequence.schema, indicating that the schema name should be forced to be Noneeven if MetaData.schemais specified.¶ sql¶ [sql] [bug] Fixed issue in SQL math negation operator where the type of the expression would no longer be the numeric type of the original. This would cause issues where the type determined result set behaviors.¶ [sql] [bug] Fixed bug whereby the __getstate__/ __setstate__methods for sqlalchemy.util.Properties were non-working due to the transition in the 1.0 series to __slots__. The issue potentially impacted some third-party applications. Pull request courtesy Pieter Mulder.¶ [sql] [bug] FromClause.count()is pending deprecation for 1.1. This function makes use of an arbitrary column in the table and is not reliable; for Core use, func.count()should be preferred.¶ [sql] [bug]().¶ [sql] [bug] Fixed bug whereby Table.tometadata()would make a duplicate UniqueConstraintfor each Columnobject that featured the unique=Trueparameter.¶ postgresql¶ [postgresql] [bug] Fixed bug whereby TypeDecoratorand Varianttypes were not deeply inspected enough by the PostgreSQL dialect to determine if SMALLSERIAL or BIGSERIAL needed to be rendered rather than SERIAL.¶ 1.0.13¶Released: May 16, 2016 orm¶ [orm] [bug] Fixed bug in “evaluate” strategy of Query.update()and Query.delete()which would fail to accommodate a bound parameter with a “callable” value, as which occurs when filtering by a many-to-one equality expression along a relationship.¶ [orm] [bug] Fixed bug whereby the event listeners used for backrefs could be inadvertently applied multiple times, when using a deep class inheritance hierarchy in conjunction with mutiple mapper configuration steps.¶ [orm] [bug] Fixed bug whereby passing a text()construct to the Query.group_by()method would raise an error, instead of intepreting the object as a SQL fragment.¶ [orm] [bug].¶ [orm] [bug] Fixed regression appearing in the 
1.0 series in ORM loading where the exception raised for an expected column missing would incorrectly be a NoneTypeerror, rather than the expected NoSuchColumnError.¶ sql¶ [sql] [bug] Fixed bug where when using case_sensitive=Falsewith an Engine, the result set would fail to correctly accommodate for duplicate column names in the result set, causing an error when the statement is executed in 1.0, and preventing the “ambiguous column” exception from functioning in 1.1.¶ [sql] [bug] Fixed bug where the negation of an EXISTS expression would not be properly typed as boolean in the result, and also would fail to be anonymously aliased in a SELECT list as is the case with a non-negated EXISTS construct.¶ [sql] [bug] Fixed bug where “unconsumed column names” exception would fail to be raised in the case where Insert.values()were called with a list of parameter mappings, instead of a single mapping of parameters. Pull request courtesy Athena Yao.¶ postgresql¶ mssql¶ [mssql] [bug] Fixed bug where by ROW_NUMBER OVER clause applied for OFFSET selects in SQL Server would inappropriately substitute a plain column from the local statement that overlaps with a label name used by the ORDER BY criteria of the statement.¶ [mssql] [bug] [oracle].¶ oracle¶ [oracle] [bug].¶ [oracle] [bug].¶ misc¶ [bug] [examples] Changed the “directed graph” example to no longer consider integer identifiers of nodes as significant; the “higher” / “lower” references now allow mutual edges in both directions.¶ [bug] [py3k] Fixed bug in “to_list” conversion where a single bytes object would be turned into a list of individual characters. This would impact among other things using the Query.get()method on a primary key that’s a bytes object.¶ 1.0.12¶Released: February 15, 2016 orm¶ [orm] [bug].¶ [orm] [bug] Fixed regression since 0.9 where the 0.9 style loader options system failed to accommodate for multiple undefer_group()loader options in a single query. 
Multiple undefer_group()options will now be taken into account even against the same entity.¶ engine¶ [engine] [bug] [mysql]¶ Sessionand Connection, and taking place for operations such as a failure on “RELEASE SAVEPOINT”. Previously, the fix was only in place for a specific path within the ORM flush/commit process; it now takes place for all transactional context managers as well. sql¶ [sql] [bug] Fixed issue where the “literal_binds” flag was not propagated for expression.insert(), expression.update()or expression.delete()constructs when compiled to string SQL. Pull request courtesy Tim Tate.¶ [sql] [bug].¶ [sql] [bug].¶ postgresql¶ [postgresql] [bug] Fixed bug in expression.text()construct where a double-colon expression would not escape properly, e.g. some\:\:expr, as is most commonly required when rendering PostgreSQL-style CAST expressions.¶ mssql¶ [mssql] [bug] Fixed the syntax of the extract()function when used on MSSQL against a datetime value; the quotes around the keyword are removed. Pull request courtesy Guillaume Doumenc.¶ [mssql] [bug] [firebird].¶ oracle¶ 1.0.11¶Released: December 22, 2015 orm¶ [orm] [bug] Fixed regression caused in 1.0.10 by the fix for #3593 where the check added for a polymorphic joinedload from a poly_subclass->class->poly_baseclass connection would fail for the scenario of class->poly_subclass->class.¶ [orm] [bug].¶ [orm] [bug] Major fixes to the Mapper.eager_defaultsflag, this flag would not be honored correctly in the case that multiple UPDATE statements were to be emitted, either as part of a flush or a bulk update operation. 
Additionally, RETURNING would be emitted unnecessarily within update statements.¶ [orm] [bug] Fixed bug where use of the Query.select_from()method would cause a subsequent call to the Query.with_parent()method to fail.¶ sql¶ [sql] [bug] Fixed bug in Update.return_defaults()which would cause all insert-default holding columns not otherwise included in the SET clause (such as primary key cols) to get rendered into the RETURNING even though this is an UPDATE.¶ mysql¶ [mysql] [bug] An adjustment to the regular expression used to parse MySQL views, such that we no longer assume the “ALGORITHM” keyword is present in the reflected view source, as some users have reported this not being present in some Amazon RDS environments.¶ [mysql] [bug] Added new reserved words for MySQL 5.7 to the MySQL dialect, including ‘generated’, ‘optimizer_costs’, ‘stored’, ‘virtual’. Pull request courtesy Hanno Schlichting.¶ misc¶ [bug] [ext] Further fixes to #3605, pop method on MutableDict, where the “default” argument was not included.¶ [bug] [ext].¶ 1.0.10¶Released: December 11, 2015 orm¶ [orm] [bug] Fixed issue where post_update on a many-to-one relationship would fail to emit an UPDATE in the case where the attribute were set to None and not previously loaded.¶ [orm] [bug].¶ [orm] [bug].¶ [orm] [bug].¶ orm declarative¶ [bug] [declarative] [orm] Fixed bug where in Py2K a unicode literal would not be accepted as the string name of a class or other argument within declarative using backref()on relationship(). Pull request courtesy Nils Philippsen.¶ sql¶ [sql] [feature] Added support for parameter-ordered SET clauses in an UPDATE statement. 
This feature is available by passing the [sql] [bug].¶ [sql] [bug] [postgresql] Fixed bug where CREATE TABLE with a no-column table, but a constraint such as a CHECK constraint would render an erroneous comma in the definition; this scenario can occur such as with a PostgreSQL INHERITS table that has no columns of its own.¶ postgresql¶ [postgresql] [bug] Fixed issue where the “FOR UPDATE OF” PostgreSQL-specific SELECT modifier would fail if the referred table had a schema qualifier; PG needs the schema name to be omitted. Pull request courtesy Diana Clarke.¶ [postgresql] [bug] Fixed bug where some varieties of SQL expression passed to the “where” clause of postgresql.ExcludeConstraintwould fail to be accepted correctly. Pull request courtesy aisch.¶ [postgresql] [bug] Fixed the .python_typeattribute of postgresql.INTERVALto return datetime.timedeltain the same way as that of types.Interval.python_type, rather than raising NotImplementedError.¶ mysql¶ [mysql] [bug] Fixed bug in MySQL reflection where the “fractional sections portion” of the mysql.DATETIME, mysql.TIMESTAMPand mysql.TIMEtypes would be incorrectly placed into the timezoneattribute, which is unused by MySQL, instead of the fspattribute.¶ mssql¶ [mssql] [bug] Added the error “20006: Write to the server failed” to the list of disconnect errors for the pymssql driver, as this has been observed to render a connection unusable.¶ [mssql] [bug] A descriptive ValueError is now raised in the event that SQL server returns an invalid date or time format from a DATE or TIME column, rather than failing with a NoneType error. Pull request courtesy Ed Avis.¶ [mssql] [bug] Fixed issue where DDL generated for the MSSQL types DATETIME2, TIME and DATETIMEOFFSET with a precision of “zero” would not generate the precision field. 
Pull request courtesy Jacobo de Vera.¶ misc¶ [bug] [ext] Added support for the dict.pop()and dict.popitem()methods to the mutable.MutableDictclass.¶ [bug] [py3k].¶ [bug] [ext].¶ [change] [tests] The ORM and Core tutorials, which have always been in doctest format, are now exercised within the normal unit test suite in both Python 2 and Python 3.¶ 1.0.9¶Released: October 20, 2015 orm¶ [orm] [feature] Added new method Query.one_or_none(); same as Query.one()but returns None if no row found. Pull request courtesy esiegerman.¶ [orm] [bug] [postgresql].¶ [orm] [bug] Fixed rare TypeError which could occur when stringifying certain kinds of internal column loader options within internal logging.¶ [orm] [bug] Fixed bug in Session.bulk_save_objects()where a mapped column that had some kind of “fetch on update” value and was not locally present in the given object would cause an AttributeError within the operation.¶ [orm] [bug] Fixed 1.0 regression where the “noload” loader strategy would fail to function for a many-to-one relationship. The loader used an API to place “None” into the dictionary which no longer actually writes a value; this is a side effect of #3061.¶ sql¶ [sql] [bug] Fixed regression in 1.0-released default-processor for multi-VALUES insert statement, #3288, where the column type for the default-holding column would not be propagated to the compiled statement in the case where the default was being used, leading to bind-level type handlers not being invoked.¶ postgresql¶ [postgresql] [bug].¶ oracle¶ [oracle] [bug] [py3k].¶ This change is also backported to: 0.7.0b1 [oracle] [bug].¶ misc¶ [feature] [ext] Added the AssociationProxy.infoparameter to the AssociationProxyconstructor, to suit the AssociationProxy.infoaccessor that was added in #2971. 
This is possible because AssociationProxyis constructed explicitly, unlike a hybrid which is constructed implicitly via the decorator syntax.¶ [bug] [examples] Fixed two issues in the “history_meta” example where history tracking could encounter empty history, and where a column keyed to an alternate attribute name would fail to track properly. Fixes courtesy Alex Fraser.¶ [bug] [sybase].¶ 1.0.8¶Released: July 22, 2015 engine¶ [engine] [bug].¶ This change is also backported to: 0.7.0b1 sqlite¶ misc¶ [misc] [bug].¶ 1.0.7¶Released: July 20, 2015 orm¶ [orm] [bug].¶ [orm] [bug].¶ [orm] [bug].¶ orm declarative¶ [bug] [declarative] [orm].¶ engine¶ [engine] [bug].¶ [engine] [bug] Fixed regression where ResultProxy.keys()would return un-adjusted internal symbol names for “anonymous” labels, which are the “foo_1” types of labels we see generated for SQL functions without labels and similar. This was a side effect of the performance enhancements implemented as part of #918.¶ sql¶ [sql] [feature] Added a ColumnElement.cast()method which performs the same purpose as the standalone cast()function. Pull request courtesy Sebastian Bank.¶ [sql] [bug] Fixed bug where coersion of literal Trueor Falseconstant in conjunction with and_()or or_()would fail with an AttributeError.¶ [sql] [bug] Fixed potential issue where a custom subclass of FunctionElementor other column element that incorrectly states ‘None’ or any other invalid object as the .typeattribute will report this exception instead of recursion overflow.¶ [sql] [bug] Fixed bug where the modulus SQL operator wouldn’t work in reverse due to a missing __rmod__method. 
Pull request courtesy dan-gittik.¶ 1.0.6¶Released: June 25, 2015 orm¶ [orm] [bug].¶ [orm] [bug] Fixed 1.0 regression where the enhanced behavior of single-inheritance joins of #3222 takes place inappropriately for a JOIN along explicit join criteria with a single-inheritance subclass that does not make use of any discriminator, resulting in an additional “AND NULL” clause.¶ [orm] [bug] Fixed bug in new Session.bulk_update_mappings()feature where the primary key columns used in the WHERE clause to locate the row would also be included in the SET clause, setting their value to themselves unnecessarily. Pull request courtesy Patrick Hayes.¶ [orm] [bug].¶ sql¶ [sql] [bug].¶ postgresql¶ [postgresql] [feature] [postgresql] [feature].¶ [postgresql] [bug] [pyp.¶ [postgresql] [bug] Repaired the ExcludeConstraintconstruct to support common features that other objects like Indexnow do, that the column expression may be specified as an arbitrary SQL expression such as castor text.¶ mssql¶ [mssql] [bug].¶ 1.0.5¶Released: June 7, 2015 orm¶ [orm] [feature].¶ [orm] [bug] The “lightweight named tuple” used when a Queryreturns rows failed to implement __slots__correctly such that it still had a __dict__. This is resolved, but in the extremely unlikely case someone was assigning values to the returned tuples, that will no longer work.¶ engine¶ [engine] [feature] Added new engine event ConnectionEvents.engine_disposed(). 
Called after the Engine.dispose()method is called.¶ [engine] [feature] Adjustments to the engine plugin hook, such that the URL.get_dialect()method will continue to return the ultimate Dialectobject when a dialect plugin is used, without the need for the caller to be aware of the Dialect.get_dialect_cls()method.¶ [engine] [bug] Fixed bug where known boolean values used by engine_from_config()were not being parsed correctly; these included pool_threadlocaland the psycopg2 argument use_native_unicode.¶ [engine] [bug].¶ [engine] [bug] [postgresql] [bug] [pypy].¶ mssql¶ [mssql] [bug] [feature] [ext] Added support for *argsto be passed to the baked query initial callable, in the same way that *argsare supported for the BakedQuery.add_criteria()and BakedQuery.with_criteria()methods. Initial PR courtesy Naoki INADA.¶ [feature] [ext].¶ [bug] [ext].¶ 1.0.4¶Released: May 7, 2015 orm¶ [orm] [bug].¶ [orm] [bug].¶ [orm] [bug].¶ schema¶ [schema] [bug] misc¶ [bug] [ext] Fixed bug where when using extended attribute instrumentation system, the correct exception would not be raised when class_mapper()were called with an invalid input that also happened to not be weak referencable, such as an integer.¶ This change is also backported to: 0.9.10 [bug] [pypy] [tests] Fixed an import that prevented “pypy setup.py test” from working correctly.¶ This change is also backported to: 0.9.10 1.0.3¶Released: April 30, 2015 orm¶ [orm] [bug] [pypy] Fixed regression from 0.9.10 prior to release due to #3349 where the check for query state on Query.update()or Query.delete()compared the empty tuple to itself using is, which fails on Pypy to produce Truein this case; this would erronously emit a warning in 0.9 and raise an exception in 1.0.¶ [orm] [bug] Fixed regression from 0.9.10 prior to release where the new addition of entityto the Query.column_descriptionsaccessor would fail if the target entity was produced from a core selectable such as a Tableor CTEobject.¶ [orm] [bug] Fixed regression 
within the flush process when an attribute were set to a SQL expression for an UPDATE, and the SQL expression when compared to the previous value of the attribute would produce a SQL comparison other than ==or !=, the exception “Boolean value of this clause is not defined” would raise. The fix ensures that the unit of work will not interpret the SQL expression in this way.¶ [orm] [bug]¶. [orm] [bug].¶ [orm] [bug] Fixed issue in new QueryEvents.before_compile()event where changes made to the Queryobject’s collection of entities to load within the event would render in the SQL, but would not be reflected during the loading process.¶ engine¶ [engine] [feature]is added to complement it. Also added new flag¶ ExceptionContext.invalidate_pool_on_disconnect. Allows an error handler within ConnectionEvents.handle_error()to maintain a “disconnect” condition, but to handle calling invalidate on individual connections in a specific manner within the event. [engine] [feature] Added new event DialectEvents.do_connect, which allows interception / replacement of when the Dialect.connect()hook is called to create a DBAPI connection. Also added dialect plugin hooks Dialect.get_dialect_cls()and Dialect.engine_created()which allow external plugins to add events to existing dialects using entry points.¶ sql¶ [sql] [feature] Added a placeholder method TypeEngine.compare_against_backend()which is now consumed by Alembic migrations as of 0.7.6. User-defined types can implement this method to assist in the comparison of a type against one reflected from the database.¶ [sql] [bug].¶ [sql] [bug].¶ 1.0.2¶Released: April 24, 2015 orm declarative¶ sql¶ [sql] [bug]¶ Labelconstruct that is also present in the columns clause. Additionally, no backend will emit GROUP BY against the simple label name only when passed a Labelconstruct. 
1.0.1 Released: April 23, 2015 orm [orm] [bug] [orm] [bug] Fixed unexpected use regression caused by #3061 where the NEVER_SET symbol could leak into relationship-oriented queries. [orm] [bug]. engine [engine] [bug] Added the string value "none" to those accepted by the Pool.reset_on_return parameter as a synonym for None, so that string values can be used for all settings, allowing utilities like engine_from_config() to be usable without issue. This change is also backported to: 0.9.10 sql [sql] [bug]. sqlite [sqlite] [bug] [firebird] [bug] Fixed a regression due to #3034 where limit/offset clauses were not properly interpreted by the Firebird dialect. Pull request courtesy effem-git. [firebird] [bug] Fixed support for "literal_binds" mode when using limit/offset with Firebird, so that the values are again rendered inline when this is selected. Related to #3034. 1.0.0 Released: April 16, 2015 orm [orm] [feature] Added new argument Query.update.update_args which allows kw arguments such as mysql_limit to be passed to the underlying Update construct. Pull request courtesy Amir Sadoughi. [orm] [bug]. [orm] [bug]. sql [sql] [feature] The topological sorting used to sort. [sql] [bug] Fixed issue where a MetaData object that used a naming convention would not properly work with pickle. The attribute was skipped, leading to inconsistencies and failures if the unpickled MetaData object were used to base additional tables from. This change is also backported to: 0.9.10 mssql [mssql] [bug]. [mssql] [bug] Using the Binary constructor now present in pymssql rather than patching one in.
Pull request courtesy Ramiro Morales.¶ misc¶ [bug] [tests].¶ 1.0.0b5¶Released: April 3, 2015 orm¶ This change is also backported to: 0.9.10 [orm] [bug].¶ [orm] [bug] Added a list() call around a weak dictionary used within the commit phase of the session, which without it could cause a “dictionary changed size during iter” error if garbage collection interacted within the process. Change was introduced by #3139.¶ [orm] [bug].¶ sql¶ [sql] [bug].¶ 1.0.0b4¶Released: March 29, 2015 sql¶ [sql] [bug] Fixed bug in new “label resolution” feature of #2992 where a label that was anonymous, then labeled again with a name, would fail to be locatable via a textual label. This situation occurs naturally when a mapped column_property()is given an explicit label in a query.¶ [sql] [bug] Fixed bug in new “label resolution” feature of #2992 where the string label placed in the order_by() or group_by() of a statement would place higher priority on the name as found inside the FROM clause instead of a more locally available name inside the columns clause.¶ schema¶ [schema] [feature] [mssql] [bug] [firebird] [oracle] [sybase] 1.0.0b2¶Released: March 20, 2015 orm¶ [orm] [bug] Fixed unexpected use regression from pullreq github:137 where Py2K unicode literals (e.g. u"") would not be accepted by the relationship.cascadeoption. Pull request courtesy Julien Castets.¶ orm declarative¶ [change] [declarative] [orm].¶ engine¶ [engine] [bug] The “auto close” for¶ ResultProxyis now a “soft” close. That is, after exhausing [mysql] [bug] [py3k] Fixed the mysql.BITtype on Py3K which was not using the ord()function correctly. Pull request courtesy David Marin.¶ This change is also backported to: 0.9.10 [mysql] [bug].¶ [general] [feature] Structural memory use has been improved via much more significant use of¶ _. 
[general] [bug] The __module__attribute is now set for all those SQL and ORM functions that are derived as “public factory” symbols, which should assist with documentation tools being able to report on the target module.¶ [orm] [feature] [orm] [feature] The subquery wrapping which occurs when joined eager loading is used with a one-to-many query that also features LIMIT, OFFSET, or DISTINCT has been disabled in the case of a one-to-one relationship, that is a one-to-many with¶ relationship.uselistset to False. This will produce more efficient queries in these cases. [orm] [feature].¶ [orm] [feature].¶ [orm] [feature] A new series of Sessionmethods [orm] [feature] Added a parameter Query.join.isouterwhich is synonymous with calling Query.outerjoin(); this flag is to provide a more consistent interface compared to Core FromClause.join(). Pull request courtesy Jonathan Vanasco.¶ [orm] [feature] Added new event handlers AttributeEvents.init_collection()and AttributeEvents.dispose_collection(), which track when a collection is first associated with an instance and when it is replaced. These handlers supersede the collection.linker()annotation. The old hook remains supported through an event adapter.¶ [orm] [feature] The¶(). [orm] [feature] A new implementation for¶ KeyedTupleused by the Queryobject offers dramatic speed improvements when fetching large numbers of column-oriented rows. [orm] [feature] The behavior of¶ joinedload.innerjoinas well as relationship.innerjoinis now to use “nested” inner joins, that is, right-nested, as the default behavior when an inner join joined eager load is chained to an outer join eager load. 
[orm] [feature].¶ [orm] [feature] The infoparameter has been added to the constructor for SynonymPropertyand ComparableProperty.¶ [orm] [feature] The InspectionAttr.infocollection is now moved down to InspectionAttr, where in addition to being available on all MapperPropertyobjects, it is also now available on hybrid properties, association proxies, when accessed via Mapper.all_orm_descriptors.¶ [orm] [changed] The¶ proc()callable passed to the create_row_processor()method of custom Bundleclasses now accepts only a single “row” argument. [orm] [changed] Deprecated event hooks removed: populate_instance, create_instance, translate_row, append_result See also Deprecated ORM Event Hooks Removed .9.5,.9.5, 0.8.7 This change is also backported to: 0.9.9 This change is also backported to: 0.9.9 [orm] [bug] Fixed bug where TypeError raised when Query.join()called with unknown kw arguments would raise its own TypeError due to broken formatting. Pull request courtesy Malthe Borch.¶ [orm] [bug] Fixed bug where the ON clause for Query.join(), and Query.outerjoin()to a single-inheritance subclass using of_type()would not render the “single table criteria” in the ON clause if the from_joinpoint=Trueflag were set.¶ This change is also backported to: 0.9.9 This change is also backported to: 0.9.8 This change is also backported to: 0.9.8 [orm] [bug] Fixed warning that would emit when a complex self-referential primaryjoin contained functions, while at the same time remote_side was specified; the warning would suggest setting “remote side”. 
It now only emits if remote_side isn’t present.¶ This change is also backported to: 0.9.8 This change is also backported to: 0.9.7 This change is also backported to: 0.9.7 [orm] [bug] Fixed bug where items that were persisted, deleted, or had a primary key change within a savepoint block would not participate in being restored to their former state (not in session, in session, previous PK) after the outer transaction were rolled back.¶ This change is also backported to: 0.9.7 [orm] [bug] Fixed bug in subquery eager loading in conjunction with with_polymorphic(), the targeting of entities and columns in the subquery load has been made more accurate with respect to this type of entity and others.¶ This change is also backported to: 0 This change is also backported to: 0.9.5 This change is also backported to: 0.9.5 . This change is also backported to: 0.9.5 This change is also backported to: 0.9.5 [orm] [bug] Fixed bug in SQLite join rewriting where anonymized column names due to repeats would not correctly be rewritten in subqueries. This would affect SELECT queries with any kind of subquery + join.¶ This change is also backported to: 0.9.5 This change is also backported to: 0.9.5 [orm] [bug] Fixed bug where the session attachment error “object is already attached to session X” would fail to prevent the object from also being attached to the new session, in the case that execution continued after the error raise occurred.¶ [orm] [bug] The primary¶ [orm] [bug]. [orm] [bug] Repaired support of the copy.deepcopy()call when used by the orm.util.CascadeOptionsargument, which occurs if copy.deepcopy()is being used with relationship()(not an officially supported use case). Pull request courtesy duesenfranz.¶ [orm] [bug] Fixed bug where¶ Session.expunge()would not fully detach the given object if the object had been subject to a delete operation that was flushed, but not committed. This would also affect related operations like make_transient(). [orm] [bug] [orm] [bug] The¶. 
[orm] [bug] Improvements to the mechanism used by¶ Sessionto locate “binds” (e.g. engines to use), such engines can be associated with mixin classes, concrete subclasses, as well as a wider variety of table metadata such as joined inheritance tables. [orm] [bug].¶ [orm] [bug] The ON clause rendered when using Query.join(), Query.outerjoin(), or the standalone orm.join()/ orm.¶ [orm] [bug] A major rework to the behavior of expression labels, most specifically when used with ColumnProperty constructs with custom SQL expressions and in conjunction with the “order by labels” logic first introduced in 0.9. Fixes include that an¶. [orm] [bug] Changed the approach by which the “single inheritance criterion” is applied, when using¶. [orm] [bug].¶ [orm] [bug] The “resurrect” ORM event has been removed. This event hook had no purpose since the old “mutable attribute” system was removed in 0.8.¶ [orm] [bug].¶ [orm] [bug] [py3k] The IdentityMapexposed from Session.identity_mapnow returns lists for items()and values()in Py3K. Early porting to Py3K here had these returning iterators, when they technically should be “iterable views”..for now, lists are OK.¶ [orm] [bug].¶ [orm] [change].¶ [orm] [enhancement] [feature] [declarative] [orm] The¶. [bug] [declarative] [orm] Fixed “‘NoneType’ object has no attribute ‘concrete’” error when using AbstractConcreteBasein conjunction with a subclass that declares __abstract__.¶ This change is also backported to: 0.9.8 [bug] [declarative] [orm] Fixed bug where using an __abstract__mixin in the middle of a declarative inheritance hierarchy would prevent attributes and configuration being correctly propagated from the base class to the inheriting class.¶ [bug] [declarative] [orm] A relationship set up with¶ declared_attron a AbstractConcreteBasebase class will now be configured on the abstract base mapping automatically, in addition to being set up on descendant concrete classes as usual. 
engine [engine] [feature] Added new user-space accessors for viewing transaction isolation levels; Connection.get_isolation_level(), Connection.default_isolation_level. This change is also backported to: 0.9.9 [engine] [feature] Added new event ConnectionEvents.handle_error(), a more fully featured and comprehensive replacement for ConnectionEvents.dbapi_error(). This change is also backported to: 0.9.7 [engine] [feature] [engine] [bug] The engine-level error handling and wrapping routines will now take effect in all engine connection use cases, including when user-custom connect routines are used via the create_engine.creator parameter, as well as when the Connection encounters a connection error on revalidation. [engine] [bug] Removing (or adding) an event listener at the same time that the event is being run itself, either from inside the listener or from a concurrent thread, now raises a RuntimeError, as the collection used is now an instance of collections.deque() and does not support changes while being iterated. Previously, a plain Python list was used where removal from inside the event itself would produce silent failures. This change is also backported to: 0.9.5 [sql] [feature]. [sql] [feature] Literal values within a DefaultClause, which is invoked when using the Column.server_default parameter, will now be rendered using the "inline" compiler, so that they are rendered as-is, rather than as bound parameters. [sql] [feature] The type of expression is reported when an object passed to a SQL expression unit can't be interpreted as a SQL fragment; pull request courtesy Ryan P. Kelly. [sql] [feature]. [sql] [feature]. [sql] [feature] Insert.from_select() now includes Python and SQL-expression defaults if otherwise unspecified; the limitation where non-server column defaults aren't included in an INSERT FROM SELECT is now lifted and these expressions are rendered as constants into the SELECT statement.
[sql] [feature]¶. [sql] [feature] Added new method Select.with_statement_hint()and ORM method Query.with_statement_hint()to support statement-level hints that are not specific to a table.¶ [sql] [feature] The infoparameter has been added as a constructor argument to all schema constructs including MetaData, Index, ForeignKey, ForeignKeyConstraint, UniqueConstraint, PrimaryKeyConstraint, CheckConstraint.¶ [sql] [feature] The Table.autoload_withflag now implies that Table.autoloadshould be True. Pull request courtesy Malik Diarra.¶ [sql] [feature],¶ [sql] [changed] The column()and table()constructs are now importable from the “from sqlalchemy” namespace, just like every other Core construct.¶ [sql] [changed] The implicit conversion of strings to¶ text()constructs when passed to most builder methods of select()as well as Querynow. [sql] [bug] Fixed bug in Enumand other SchemaTypesubclasses where direct association of the type with a MetaDatawould lead to a hang when events (like create events) were emitted on the MetaData.¶ This change is also backported to: 0.9.7,.9.7, 0.8.7 [sql] [bug] Fixed bug in INSERT..FROM SELECT construct where selecting from a UNION would wrap the union in an anonymous (e.g. unlabled) subquery.¶ This change is also backported to: 0.9.5, 0.8.7 [sql] [bug] Fixed bug where Table.update()and Table.delete()would produce an empty WHERE clause when an empty and_()or or_()or other blank expression were applied. This is now consistent with that of This change is also backported to: 0.9.5, 0.8.7 [sql] [bug] Added the native_enumflag to the __repr__()output of Enum, which is mostly important when using it with Alembic autogenerate. 
Pull request courtesy Dimitris Theodorou.¶ This change is also backported to: 0.9.9 This change is also backported to: 0.9.9 This change is also backported to: 0.9.9 [sql] [bug] Fixed bug where a fair number of SQL elements within the sql package would fail to __repr__()successfully, due to a missing descriptionattribute that would then invoke a recursion overflow when an internal AttributeError would then re-invoke __repr__().¶ This change is also backported to: 0.9.8 This change is also backported to: 0.9.8 [sql] [bug] Fixed bug in CTE where literal_bindscompiler argument would not be always be correctly propagated when one CTE referred to another aliased CTE in a statement.¶ This change is also backported to: 0.9.8 [sql] [bug] Fixed 0.9.7 regression caused by #3067 in conjunction with a mis-named unit test such that so-called “schema” types like Booleanand Enumcould no longer be pickled.¶ This change is also backported to: 0.9.8 This change is also backported to: 0.9.7 [sql] [bug] Fixed bug in common table expressions whereby positional bound parameters could be expressed in the wrong final order when CTEs were nested in certain ways.¶ This change is also backported to: 0.9.7 [sql] [bug] Fixed bug where multi-valued Insertconstruct would fail to check subsequent values entries beyond the first one given for literal SQL expressions.¶ This change is also backported to: 0.9.7 [sql] [bug] Added a “str()” step to the dialect_kwargs iteration for Python version < 2.6.5, working around the “no unicode keyword arg” bug as these args are passed along as keyword args within some reflection processes.¶ This change is also backported to: 0.9.7 [sql] [bug] The TypeEngine.with_variant()method will now accept a type class as an argument which is internally converted to an instance, using the same convention long established by other constructs such as Column.¶ This change is also backported to: 0 This change is also backported to: 0.9.5 [sql] [bug] Fixed bug where the 
Operators.__and__(), Operators.__or__()and Operators.__invert__()operator overload methods could not be overridden within a custom TypeEngine.Comparatorimplementation.¶ This change is also backported to: 0.9.5 [sql] [bug] Fixed bug in new DialectKWArgs.argument_for()method where adding an argument for a construct not previously included for any special arguments would fail.¶ This change is also backported to: 0.9.5 [sql] [bug] Fixed regression introduced in 0.9 where new “ORDER BY <labelname>” feature from #1068 would not apply quoting rules to the label name as rendered in the ORDER BY.¶ This change is also backported to: 0.9.5 [sql] [bug] Restored the import for Functionto the sqlalchemy.sql.expressionimport namespace, which was removed at the beginning of 0.9.¶ This change is also backported to: 0.9.5 [sql] [bug] The multi-values version of¶. [sql] [bug] Fixed bug in Table.tometadata()method where the CheckConstraintassociated with a Booleanor Enumtype object would be doubled in the target table. The copy process now tracks the production of this constraint object as local to a type object.¶ [sql] [bug] The behavioral contract of the¶ ForeignKeyConstraint.columnscollection has been made consistent; this attribute is now a ColumnCollectionlike that of all other constraints and is initialized at the point when the constraint is associated with a Table. [sql] [bug].¶ [sql] [bug] Fixed the name of the PoolEvents.reset.dbapi_connectionparameter as passed to this event; in particular this affects usage of the “named” argument style for this event. Pull request courtesy Jason Goldberger.¶ [sql] [bug] Reversing a change that was made in 0.9, the “singleton” nature of the “constants”¶ null(), true(), and false()has been reverted. These functions returning a “singleton” object had the effect that different instances would be treated as the same regardless of lexical use, which in particular would impact the rendering of the columns clause of a SELECT statement. 
[sql] [bug] [engine].¶ [sql] [bug] [engine].¶ [sql] [bug] Using Insert.from_select()now implies inline=Trueaccessor does not apply. Previously, there was a documentation note that one may prefer inline=Truew.¶ [sql] [enhancement].¶ schema¶ [schema] [feature] The DDL generation system of¶. [schema] [feature] Added a new accessor Table.foreign_key_constraintsto complement the Table.foreign_keyscollection, as well as ForeignKeyConstraint.referred_table.¶ [schema] [bug] The¶ [postgresql] [feature] Added support for the CONCURRENTLYkeyword with PostgreSQL indexes, established using postgresql_concurrently. Pull request courtesy Iuri de Silvio. See also Indexes with CONCURRENTLY This change is also backported to: 0.9.9 This change is also backported to: 0.9.8 [postgresql] [feature] Added kw argument postgresql_regconfigto the ColumnOperators.match()operator, allows the “reg config” argument to be specified to the to_tsquery()function emitted. Pull request courtesy Jonathan Vanasco.¶ This change is also backported to: 0.9.7 [postgresql] [feature] Added support for PostgreSQL JSONB via JSONB. Pull request courtesy Damian Dimmich.¶ This change is also backported to: 0.9.7 [postgresql] [feature] Added support for AUTOCOMMIT isolation level when using the pg8000 DBAPI. Pull request courtesy Tony Locke.¶ This change is also backported to: 0.9.5 This change is also backported to: 0.9.5 [postgresql] [feature] The PG8000 dialect now supports the create_engine.encodingparameter, by setting up the client encoding on the connection which is then intercepted by pg8000. Pull request courtesy Tony Locke.¶ [postgresql] [feature] Added support for PG8000’s native JSONB feature. Pull request courtesy Tony Locke.¶ [postgresql] [feature] [pypy] Added support for the psycopg2cffi DBAPI on pypy. Pull request courtesy shauns.¶ [postgresql] [feature] Added support for the FILTER keyword as applied to aggregate functions, supported by PostgreSQL 9.4. Pull request courtesy Ilja Everilä. 
See also PostgreSQL FILTER keyword [postgresql] [feature] Support has been added for reflection of materialized views and foreign tables, as well as support for materialized views within¶ Inspector.get_view_names(), and a new method PGInspector.get_foreign_table_names()available on the PostgreSQL version of Inspector. Pull request courtesy Rodrigo Menezes. [postgresql] [feature] Added support for PG table options TABLESPACE, ON COMMIT, WITH(OUT) OIDS, and INHERITS, when rendering DDL via the Tableconstruct. Pull request courtesy malikdiarra. See also [postgresql] [feature] Added new method PGInspector.get_enums(), when using the inspector for PostgreSQL will provide a list of ENUM types. Pull request courtesy Ilya Pekelny.9.5, 0.8.7 [postgresql] [bug] Added a new “disconnect” message “connection has been closed unexpectedly”. This appears to be related to newer versions of SSL. Pull request courtesy Antti Haapala.¶ [postgresql] [bug] Fixed bug in postgresql.arrayobject where comparison to a plain Python list would fail to use the correct array constructor. Pull request courtesy Andrew.¶ This change is also backported to: 0.9.8 This change is also backported to: 0.9.8 [postgresql] [bug] [pg8000] Fixed bug introduced in 0.9.5 by new pg8000 isolation level feature where engine-level isolation level parameter would raise an error on connect.¶ This change is also backported to: 0 This change is also backported to: 0.9.5 [postgresql] [bug] The PostgreSQL¶. [postgresql] [bug] The¶ [mysql] [feature] The MySQL dialect now renders TIMESTAMP with NULL / NOT NULL in all cases, so that MySQL 5.6.6 with the¶ explicit_defaults_for_timestampflag enabled will will allow TIMESTAMP to continue to work as expected when nullable=False. Existing applications are unaffected as SQLAlchemy has always emitted NULL for a TIMESTAMP column that is nullable=True. 
[mysql] [feature].¶ .9.7, 0.8.7 .9.5, 0.8.7 [mysql] [bug] Added support for reflecting tables where an index includes KEY_BLOCK_SIZE using an equal sign. Pull request courtesy Sean McGivern.¶ This change is also backported to: 0.9.5, 0.8.7 [mysql] [bug] Added a version check to the MySQLdb dialect surrounding the check for ‘utf8_bin’ collation, as this fails on MySQL server < 5.0.¶ This change is also backported to: 0.9.9 This change is also backported to: 0.9.8 [mysql] [bug] [mysqlconnector] Unicode SQL is now passed for MySQLconnector version 2.0 and above; for Py2k and MySQL < 2.0, strings are encoded.¶ This change is also backported to: 0.9.8 [mysql] [bug] The MySQL dialect now supports CAST on types that are constructed as TypeDecoratorobjects.¶ [mysql] [bug].¶ [mysql] [bug] The¶ mysql mysql.SETthat actually wants to include the blank value ''as a legitimate value, a new bitwise operational mode is added which is enabled by the mysql.SET.retrieve_as_bitwiseflag, which will persist and retrieve values unambiguously using their bitflag positioning. Storage and retrieval of unicode values for driver configurations that aren’t converting unicode natively is also repaired. [mysql] [bug]] [bug]¶ true()and false()symbols again produce the keywords “true” and “false”, so that an expression like column.is_(true())again works on MySQL. [mysql] [bug] The MySQL dialect will now disable ConnectionEvents.handle_error()events from firing for those statements which it uses internally to detect if a table exists or not. This is achieved using an execution option skip_user_error_eventsthat disables the handle error event for the scope of that execution. In this way, user code that rewrites exceptions doesn’t need to worry about the MySQL dialect or other dialects that occasionally need to catch SQLAlchemy specific exceptions.¶ [mysql] [bug] Changed the default value of “raise_on_warnings” to False for MySQLconnector. This was set at True for some reason. 
The “buffered” flag unfortunately must stay at True as MySQLconnector does not allow a cursor to be closed unless all results are fully fetched.¶ [mysql] [change] The gaerdbmsdialect is no longer necessary, and emits a deprecation warning. Google now recommends using the MySQLdb dialect directly.¶ This change is also backported to: 0.9.9 sqlite¶ [sqlite] [feature] Added support for partial indexes (e.g. with a WHERE clause) on SQLite. Pull request courtesy Kai Groner. See also This change is also backported to: 0.9.9 [sqlite] [feature] [sqlite] [bug] UNIQUE and FOREIGN KEY constraints are now fully reflected on SQLite both with and without names. Previously, foreign key names were ignored and unnamed unique constraints were skipped. Thanks to Jon Nelson for assistance with this.¶ [sqlite] [bug] The SQLite dialect, when using the sqlite.DATE, sqlite.TIME, or sqlite] [bug]. [sqlite] [bug] Added¶ [mssql] [feature] Enabled “multivalues insert” for SQL Server 2008. Pull request courtesy Albert Cervin. Also expanded the checks for “IDENTITY INSERT” mode to include when the identity key is present in the VALUEs clause of the statement.¶ This change is also backported to: 0.9.7 [mssql] [feature] [mssql] [changed] [mssql] [bug] Fixed the version string detection in the pymssql dialect to work with Microsoft SQL Azure, which changes the word “SQL Server” to “SQL Azure”.¶ This change is also backported to: 0.9.8 This change is also backported to: 0.9.5 oracle¶ [oracle] [feature] Added support for cx_oracle connections to a specific service name, as opposed to a tns name, by passing ?service_name=<name>to the URL. Pull request courtesy Sławomir Ehlert.¶ [oracle] [feature] New Oracle DDL features for tables, indexes: COMPRESS, BITMAP. 
Patch courtesy Gabor Gombas.¶ [oracle] [feature] [oracle] [feature] Added support for the Oracle table option ON COMMIT.¶ [oracle] [bug] Fixed long-standing bug in Oracle dialect where bound parameter names that started with numbers would not be quoted, as Oracle doesn’t like numerics in bound parameter names.¶ This change is also backported to: 0.9.8 [oracle] [bug] [tests] Fixed bug in oracle dialect test suite where in one test, ‘username’ was assumed to be in the database URL, even though this might not be the case.¶ This change is also backported to: 0.9.7 [oracle] [bug] An alias name will be properly quoted when referred to using the %(name)stoken inside the Select.with_hint()method. Previously, the Oracle backend hadn’t implemented this quoting.¶ misc¶ [feature] [examples] Added a new example illustrating materialized paths, using the latest relationship features. Example courtesy Jack Zhou.¶ This change is also backported to: 0.9.5 [feature] [ext] Added a new extension suite sqlalchemy.ext.baked. This simple but unusual system allows for a dramatic savings in Python overhead for the construction and processing of orm Queryobjects, from query construction up through rendering of a string SQL statement. See also [feature] [examples] [feature] [ext] The sqlalchemy.ext.automapextension will now set cascade="all, delete-orphan"automatically on a one-to-many relationship/backref where the foreign key is detected as containing one or more non-nullable columns. This argument is present in the keywords passed to autom.¶ [removed] The Drizzle dialect has been removed from the Core; it is now available as sqlalchemy-drizzle, an independent, third party dialect. 
The dialect is still based almost entirely off of the MySQL dialect present in SQLAlchemy.9.5, 0.8.7 [bug] [ext] Fixed bug in mutable extension where MutableDictdid not report change events for the setdefault()dictionary operation.¶ This change is also backported to: 0.9.5, 0.8.7 [bug] [ext] Fixed bug where MutableDict.setdefault()didn’t return the existing or new value (this bug was not released in any 0.8 version). Pull request courtesy Thomas Hervé.¶ This change is also backported to: 0.9.5, 0.8.7 [bug] [ext] [py3k] Fixed bug where the association proxy list class would not interpret slices correctly under Py3K. Pull request courtesy Gilles Dartiguelongue.¶ This change is also backported to: 0.9.9 This change is also backported to: 0.9.9 [bug] [examples] Fixed a bug in the examples/generic_assocaitions/discriminator_on_association.py example, where the subclasses of AddressAssociation were not being mapped as “single table inheritance”, leading to problems when trying to use the mappings further.¶ This change is also backported to: 0.9.9 This change is also backported to: 0.9.8 This change is also backported to: 0.9.8 [bug] [ext] Fixed bug where ext.mutable.MutableDictfailed to implement the update()dictionary method, thus not catching changes. Pull request courtesy Matt Chisholm.¶ This change is also backported to: 0.9.8 [bug] [ext] Fixed bug where a custom subclass of ext.mutable.MutableDictwould not show up in a “coerce” operation, and would instead return a plain ext.mutable.MutableDict. Pull request courtesy Matt Chisholm.¶ This change is also backported to: 0.9.8 This change is also backported to: 0.9.8 [bug] [tests] Fixed bug where “python setup.py test” wasn’t calling into distutils appropriately, and errors would be emitted at the end of the test suite.¶ This change is also backported to: 0.9.7 [bug] [declarative] Fixed bug when the declarative __abstract__flag was not being distinguished for when it was actually the value False. 
The __abstract__ flag needs to actually evaluate to a True value at the level being tested. This change is also backported to: 0.9.7 [bug] [testsuite] In the public test suite, changed to use of String(40) from the less-supported Text in StringTest.test_literal_backslashes. Pull request courtesy Jan. This change is also backported to: 0.9.5 [bug] [py3k] [tests] Corrected for some deprecation warnings involving the imp module and Python 3.3 or greater, when running tests. Pull request courtesy Matt Chisholm. This change is also backported to: 0.9.5
https://docs.sqlalchemy.org/en/rel_1_1/changelog/changelog_10.html
[For those who noted that the above graphic lacks one of the fundamental requirements for an animation (namely animated-ness), CodeProject doesn't support animations within a page, so click here to see the actual animation.] All projects suffer creeping featurism. Things that start simple and elegant end up as the "before" image in a weight-loss ad. This control has grown considerably since it was first written. If you want to do something with a ListView, this control probably has some code to help you do it. For those in a hurry, this control has the following major features and components: ListView, TreeListView, VirtualObjectListView, FastObjectListView, DataListView, FastDataListView, IVirtualListDataSource. This control has its own website, hosted by SourceForge: ObjectListView - How I Learned To Stop Worrying and Love .NET's ListView (made using the cool Sphinx documentation tool). This is not an empty shell site. It actually has lots of useful information. There you can find a step-by-step tutorial to help you get started, as well as a cookbook showing you how to accomplish common tasks. This article is only really an introduction. Those who aren't in a hurry can now read the rest of the article. It is in the interests of encouraging "slothfulness" that I wrote this control. The reason for wanting to avoid the ListView is all the boilerplate code it needs to work: make the ListViewItems, add all the SubItems, catch the header click events and sort the items depending on the data type. Each of these tasks is slightly different for each instance of a ListView. If you want to support grouping, there's an even bigger chunk of boilerplate code to copy and then modify slightly. For a basically lazy person, this is far too much work. ObjectListView was born to relieve this workload. An ObjectListView provides two groups of functionality. The first group is designed to make a ListView much easier to use.
This group ranges from automatically converting a list of model objects into a fully functional ListView, to making drag and drop and cell editing much easier to use. The second group adds new features to a ListView, such as image overlays and customisable tooltips.

There are two ways to use an ObjectListView in your project: add the ObjectListView project to your solution, or just add a reference to the compiled ObjectListView.dll.

If you add the project, once your project has been built, there should be a new section in your Toolbox, "ObjectListView Components". In that section should be ObjectListView and its friends. You can then drag an ObjectListView onto your window, and use it as you would a standard ListView control.

If you don't want to add the ObjectListView project to your project, you can just add the ObjectListView.dll file. Adding the DLL does not automatically add any new components into your toolbox. You will need to add them manually after you have added the DLL to your project.

"Simple things should be simple. Complex things should be possible." The major design goal of ObjectListView is to make the programmer's life simpler. However, this control provides a lot of functionality, and it can be overwhelming if you try to absorb it all at once. It's best to start with the foundational ability of an ObjectListView: it generates a fully functional ListView from a list of model objects. Usually the control is configured within the IDE (setting properties and creating columns) and then, with a single line of code, it is put into action like this:

this.objectListView1.SetObjects(allPeople);

And that is it! Simple, fast, uncomplicated, non-fattening and without a single line of boilerplate code. Without further work, this ObjectListView is a fully functional ListView and will handle drag and drop, alternate line colouring, column-click sorting, data formatting, grouping and possibly editing too. The "Simple Example" tab in the demo project shows what is possible with only IDE configuration and this one line of code.

What is actually happening here?
When you call SetObjects(), the ObjectListView runs through the given list of model objects, extracts the aspect nominated for each column, converts that aspect to a string, and then puts those strings together to make a row in the ListView. For those who think in pictures, you can visualize the process like this:

Working this way requires a change of mind-set. If you can perform the mind-mangling step of changing your thinking, ObjectListView will be your best friend. An ObjectListView is much more active than a plain ListView. A normal ListView is essentially passive: it sits there, and you poke and prod it and eventually it looks like you want. With an ObjectListView you tell it what you want done and the ObjectListView does it for you.

More formally: an ObjectListView is used declaratively. You configure the ObjectListView, give it your collection of model objects, and the ObjectListView builds the ListView for you. Further customisation is done by responding to events or by installing delegates (more on this later). Once the columns and control are configured, putting it into action is simple, as you have already seen: a single call to SetObjects().

The ObjectListView owns its ListViewItems: it creates, destroys, changes and rebuilds them as required from the information you have given. Resist the temptation to add/edit/sort or otherwise mess with ListViewItems — it will not work.

There is also no need to hide information in a ListViewItem's Tag property, as old style ListView programming did. The ObjectListView already knows which model object is behind each row. In many cases, the programmer simply uses the SelectedObjects property to find out which model objects the user wants to do something to.

A single call to SetObjects() is all well and good, but real-world applications need more than just sorting and grouping. They need at least a little image in the first column. The obvious first enhancement to this simple example is to display images in the ListView.
To do that, we need to configure the ObjectListView so it knows what image to show against each row. This normally cannot be done within the IDE. Very often the image to be shown depends on the model object being displayed. To decide on an image, you install an ImageGetter delegate on the column.

First, you need a method that matches the ImageGetterDelegate signature: it must accept a single object parameter and return an object. The value returned from the ImageGetter delegate is used as an index into the ObjectListView's SmallImageList. As such, the ImageGetter can return either a string or an int. (If the ObjectListView is owner-drawn, the ImageGetter can also return an Image.)

A somewhat frivolous example follows:

object PersonColumnImageGetter (object rowObject) {
    // People whose names start with a vowel get a star,
    // otherwise the first half of the alphabet gets hearts
    // and the second half gets music
    Person p = (Person)rowObject;
    if ("AEIOU".Contains(p.Name.Substring(0, 1)))
        return 0; // star
    else if (p.Name.CompareTo("N") < 0)
        return 1; // heart
    else
        return 2; // music
};

To install it, you do this:

this.personColumn.ImageGetter = new ImageGetterDelegate(this.PersonColumnImageGetter);

In VB:

Me.personColumn.ImageGetter = New ImageGetterDelegate(AddressOf PersonColumnImageGetter)

In C#, you can do the same thing more compactly with an anonymous delegate:

this.personColumn.ImageGetter = delegate (object rowObject) {
    Person p = (Person)rowObject;
    if ("AEIOU".Contains(p.Name.Substring(0, 1)))
        return 0; // star
    else if (p.Name.CompareTo("N") < 0)
        return 1; // heart
    else
        return 2; // music
};

Anonymous delegates save you from having to add lots of little methods to your class. However, if the anonymous delegates start to become too big or if you find yourself copying them verbatim from one place to another, it's a good sign that you need to put some new methods on your model class.
[v2.3] If your model class has a property which can return the name or the index of the image that should be shown, you don't need to install a delegate. You can set the ImageAspectName property to the name of that property. But this is crossing the line between model and view, so I'm not encouraging this practice, just pointing out that it's possible.

ObjectListView uses a combination of events and delegates to allow further, more complex customizations. All of the following can be customized through delegates:

- OLVColumn.AspectGetter: extracts the value (the "aspect") to be shown in a cell
- OLVColumn.AspectToStringConverter: converts an aspect into a string, when simply calling ToString() isn't enough
- OLVColumn.AspectPutter: writes an edited value back into the model object
- OLVColumn.GroupKeyGetter: decides which group a row belongs to
- OLVColumn.GroupKeyToTitleConverter: converts a group key into the title displayed for that group
- ObjectListView.CheckStateGetter and CheckStatePutter: read and write the check state of a row

ObjectListView provides many events, which allow the programmer to customize its behaviour. Have a look at the events available in the IDE. Some commonly used events are: FormatRow, FormatCell, CellToolTipShowing, HeaderToolTipShowing, SelectionChanged and SelectedIndexChanged.

The complex example tab in the demo project has examples of using all of these delegates. For example, turn on "Show Groups" and click on the "Cooking Skill" or "Hourly Rate" column to see what's possible.

The control is written to be data-agnostic. It doesn't care what type of data it is working on. The only requirement is that the object passed to the SetObjects method must support the IEnumerable interface, which isn't too heavy a requirement. This means that it works equally well with an ArrayList, a DataTable or a list of compiler errors returned from CompilerResults.Errors.
For example, to display information from a DataTable, you could install AspectGetter delegates like this:

columnName.AspectGetter = delegate(object row) { return ((DataRow)row)["Name"]; };
columnCompany.AspectGetter = delegate(object row) { return ((DataRow)row)["Company"]; };
columnAge.AspectGetter = delegate(object row) { return ((DataRow)row)["Age"]; };

Then install the table like this:

this.objectListView1.SetObjects(ds1.Tables["Persons"].Rows);

Note that this code installed the Rows, not the DataTable itself.

Actually, you would not have to define AspectGetters to do this. It would be enough to set columnName.AspectName to "Name" and the ObjectListView would be able to extract the indexed property from the DataRow. But the example still works as it stands.

In response to intense public demand — OK, a couple of people asked for it — there is a DataListView class. This is a data-bindable version of ObjectListView which accepts various kinds of data sources and populates the list with data from that source. For those programmers for whom even one line of code is too much, DataListView can be configured completely within the IDE. Give it a DataSource, set up the columns and it will simply work.

DataListView also listens for ListChanged events on its data source and uses those to keep its list in sync with the contents of the data source. Add a new row to the data table and it will automatically appear in the ListView! Edit a value in the DataListView and the change automatically appears in the DataTable!

The DataListView can accept several types of objects as data sources: DataView, DataTable, DataSet, DataViewManager and BindingSource.

To use DataListView, you give each column the name of the data column you want it to display in the AspectName property. Then set the DataSource member to your data source. That's it! Even more slothfulness!
So, we could accomplish the same thing as the "data unaware" example above by configuring AspectNames for the columns and then setting the data table directly, like this:

this.dataListView1.DataSource = ds1.Tables["Persons"];

Alternatively, for the monumentally slothful, the whole thing could be done through IDE configuration. Set the AspectName property on the columns; set the DataSource property to the appropriate dataset and then set the DataMember property to tell which table from the dataset you want to show. Hey, presto, a fully functional data set viewer. All without writing a single line of code.

[2.5] There is now a FastDataListView, which combines the ease of use of a DataListView with the speed of a FastObjectListView. This basically manages a data set through a virtual list. On my mid-range laptop, it can handle data sets of 100,000+ rows effortlessly.

If you've ever wanted to thoroughly overwhelm your users with 10 million rows of data, then go ahead and knock them out with VirtualObjectListView. Let's get this out of the way first: ListViews are NOT good interfaces for large numbers of items. There are no good interfaces for that many rows. There aren't any. If you are trying to show a ListView that has 10 million rows, you need to rethink your interface. However, if you really have to use a ListView with that many rows, then VirtualObjectListView is your answer.

Normally an ObjectListView keeps a list of model objects that it can read, sort or group at will. A VirtualObjectListView does not keep such a list. Instead, it only fetches model objects when they need to be displayed. For large lists, this is a massive reduction in resources. If the user never looks at the 4-millionth row, the VirtualObjectListView will never ask for it, so the program will never have to create it.

To use a VirtualObjectListView, you must implement an IVirtualListDataSource and give that data source to the virtual list (via the VirtualListDataSource property).
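As a sketch of what such a data source might look like, here is a minimal read-only implementation over an in-memory list. It assumes the library's AbstractVirtualListDataSource base class (whose constructor takes the virtual list) and the demo's Person model; a real virtual-list scenario would fetch rows on demand rather than hold them all in memory:

```csharp
// A minimal, read-only IVirtualListDataSource built on the library's
// "do-nothing" base class. Only the two essential methods are overridden.
public class PersonDataSource : AbstractVirtualListDataSource
{
    private readonly List<Person> people;

    public PersonDataSource(VirtualObjectListView listView, List<Person> people)
        : base(listView)
    {
        this.people = people;
    }

    // Without these two methods, nothing appears in the list.
    public override int GetObjectCount()
    {
        return this.people.Count;
    }

    public override object GetNthObject(int n)
    {
        return this.people[n];
    }
}
```

Installing it is then one line: this.virtualObjectListView1.VirtualListDataSource = new PersonDataSource(this.virtualObjectListView1, allPeople);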
Using that interface, the virtual list can then function just like a full-fledged ObjectListView. The only things a virtual list still can't do are show groups and use tile view. Otherwise, it operates in the same way as a normal ObjectListView, including sorting, check boxes, and searching by typing.

If you don't want to implement all the methods of IVirtualListDataSource, you can subclass AbstractVirtualListDataSource, which is a "do-nothing" implementation of that interface. At the very least, you must implement GetObjectCount() and GetNthObject(), otherwise nothing will appear in the list.

Although it isn't documented, .NET virtual ListViews cannot have checkboxes. VirtualObjectListView codes around this limitation, but you must use the functions provided by ObjectListView: CheckedObjects, CheckObject(), UncheckObject() and their friends. If you use the normal check box properties (CheckedItems or CheckedIndices), they will throw an exception, since the ListView is in virtual mode, and .NET "knows" it can't handle checkboxes in virtual mode.

So far, ObjectListView has been catering to the slothful, those of us who want to do the least work and get the most results. If impatience is your key character flaw, then the FastObjectListView is for you. In exchange for losing a few features, you gain a great amount of speed. How fast is it? On my low-range laptop, a normal ObjectListView builds a list of 10,000 objects in about 10 seconds. A FastObjectListView builds the same list in less than 0.1 seconds.

What do you lose? With a FastObjectListView, you cannot use Tile view and, if you are on XP, you cannot show groups. Apart from that, all features of ObjectListView are available in the FastObjectListView.
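Switching to a FastObjectListView is normally a drop-in change. The single-line idiom from earlier works unchanged (the control name here is illustrative):

```csharp
// Same API as a plain ObjectListView; only the control class differs.
this.fastObjectListView1.SetObjects(allPeople);
```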
From v2.3 onwards, when running on Vista or later, FastObjectListViews can show groups. Simply set ShowGroups to true, and the control will handle groups in the same fashion as a normal ObjectListView.

From time to time, there are situations when you want to show a tree structure (like a TreeView), but you also want to show more information about the items than just their name (like a ListView). Enter the TreeListView. It shows a tree structure, with its nice ability to expand and collapse, but also shows information in columns.

Like all the other ObjectListViews, TreeListView relies on delegates. The two essential delegates for using a TreeListView are:

- CanExpandGetter: decides whether a given model can be expanded
- ChildrenGetter: fetches the children of a model when it is expanded

In the demo, there is an Explorer-like example, which navigates the disks on the local computer. The tree list view in that demo is configured like this:

this.treeListView.CanExpandGetter = delegate(object x) {
    return (x is DirectoryInfo);
};
this.treeListView.ChildrenGetter = delegate(object x) {
    DirectoryInfo dir = (DirectoryInfo)x;
    return new ArrayList(dir.GetFileSystemInfos());
};

In this example, the CanExpandGetter delegate ensures that only directories can be expanded. The ChildrenGetter delegate returns the contents of a directory when that directory is expanded. ChildrenGetter delegates are only ever called if CanExpandGetter returns true, so in this case the ChildrenGetter delegate knows that the parameter x must be a DirectoryInfo instance.

To make it work, you must add some "roots" (top level objects). You can do this either by setting the Roots property to a collection of model objects, or you can just call SetObjects() like normal. On a TreeListView, SetObjects(), AddObject() and RemoveObject() all apply to the collection of roots.
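Continuing the Explorer example, the roots might be set up like this. The drive enumeration is my own addition, not taken from the demo:

```csharp
// Top-level items: one DirectoryInfo per drive on this machine.
ArrayList roots = new ArrayList();
foreach (DriveInfo drive in DriveInfo.GetDrives())
    roots.Add(new DirectoryInfo(drive.Name));
this.treeListView.Roots = roots;
```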
To refresh the list of children under a model, you call RefreshObject() on the parent.

TreeListView works best when there is a significant overlap of information between the branches and the leaf nodes, since they have to share the same columns. They also work when the branches have no information apart from their name and the columns are really only used for the leaf nodes. They do not work so well when you want to show several bits of information about branches and other bits of information about leaf nodes, with little overlap between the two. They look silly because of all the empty cells.

A common question is why ObjectListView isn't a generic class. As it stands, every delegate receives plain objects, which you then have to cast:

this.objectListView1.SomeDelegate = delegate(object x) {
    MyModelObject model = (MyModelObject)x;
    ...
}

which becomes tiresome after a while. It would be nice if you could tell the ObjectListView that it would always be displaying, say, Person objects. Something like:

this.objectListView1 = new ObjectListView<Person>();
this.objectListView1.SomeDelegate = delegate(Person model) {
    ...
}

Unfortunately, the designer in Visual Studio cannot handle parameterized controls like that. [I remember reading that in a Microsoft blog somewhere, but I can't find it again. There are a couple of knowledgeable people who say that it can't - here for example. If someone knows whether this is a documented decision, could you please let me know?] There are a couple of tricks to get around some of the most obvious problems, but they all hit a wall with the code generation. So, in the meantime, we now have a TypedObjectListView class. Its companion, TypedColumn, wraps a plain ColumnHeader:

TypedColumn<Person> tcol = new TypedColumn<Person>(this.columnHeader16);
tcol.AspectGetter = delegate(Person x) { return x.GetRate(); };
tcol.AspectPutter = delegate(Person x, object newValue) { x.SetRate((double)newValue); };

The final feature of a TypedObjectListView is that it can automatically generate AspectGetter delegates for its columns: just call tlist.GenerateAspectGetters(). This can even handle dotted aspect names like Parent.HomeAddress.Phone.AreaCode.

Sometimes it makes no sense to allow the user to resize a column.
A column that simply shows a 16x16 status icon has no reason to be resizable. Extending this idea one step further, you can imagine cases where a column should not be less than a given size or wider than a maximum size. So, it would be good if we could give columns a minimum and a maximum width. Setting the same value for both would give a fixed-size column.

However, controlling the resizing of columns turns out to be a non-trivial problem. It is easy to find examples of fixing the width of all columns in a ListView: Chris Morgan has a nice implementation available here. Unfortunately, that technique cannot be used to restrict individual column widths. In fact, I could not find any example anywhere of how to restrict a column width to be within a given range.

Regardless, columns can be given a MinimumWidth and a MaximumWidth. Even within the IDE, these settings will prevent the user from setting the width of the column to outside of the given values. See below for a more detailed discussion of the complexities and potential limitations of my implementation.

There are situations where it would be nice if a column (normally the rightmost one) would expand as the ListView expands, so that as much of the column is visible as possible without having to scroll horizontally (you should never, ever make your users scroll anything horizontally!). A free space filling column does exactly that. The "Comments" column in the Simple tab of the demo shows this in action.

When an ObjectListView is resized, the space occupied by all the fixed width columns is totaled. The difference between that total and the width of the control is then shared between the free space filling columns. If you only have one such column, it is given all the space; if you have two, each is given half the space.

Be a little cautious with these space filling columns.
Their behaviour is not standard and can sometimes be quite surprising, especially if they aren't the rightmost columns. One surprise is that these columns cannot be resized by dragging their divider — their size depends on the free space available in the ListView.

Now that you've gone to all that work to make a very pretty ListView, wouldn't it be nice if you could just print it? Yes, I know there is always the PrntScrn key, but I have noticed that some upper management do not think very highly of that as a reporting solution. The ListViewPrinter is the answer to your printing problem. Configure an instance of it in the IDE (the ListView property controls which list is printed) and then call:

this.listViewPrinter1.PrintPreview();

Thus, for nothing, you can have a very pretty report that looks like this one. Admittedly, the formatting in this example is too much, but you can modify all the formatting to suit your tastes. See the demo for some more sedate examples and read the code to see how to make it work. It really is very cool.

This is a logically separate piece of code, so it lives in its own project. If you want to use it, you will need to add to your project either the ListViewPrinter project itself or the ListViewPrinter.dll file. The procedure is the same as for the ObjectListView project given in the First Steps section above.

A standard ListView only lets the user edit the label of a row; an ObjectListView allows any cell to be edited. The "editability" of an ObjectListView is controlled by the CellEditActivation property. This property can be set to one of the following values:

- CellEditActivateMode.None: cells cannot be edited (the default)
- CellEditActivateMode.SingleClick: a single click on a cell begins an edit
- CellEditActivateMode.DoubleClick: a double click on a cell begins an edit
- CellEditActivateMode.F2Only: only pressing F2 begins an edit

Individual columns can be excluded from editing by setting their IsEditable property to false. Be careful mixing this with single-click activation, though: it can lead to UI surprises (resulting in complaints like "How come I can't edit this value by clicking on it like I can on all the other cells?"). You have been warned.

Once a cell editor is active, the normal editing conventions apply: Escape cancels the edit, Enter/Return commits it, and Tab commits it and begins editing the next editable cell. The default processing creates a cell editor based on the type of the data in the cell.
It can handle bool, int, string, DateTime, float and double data types. When the user has finished editing the value in the cell, the new value will be written back into the model object (if possible).

To do something other than the default processing, you can listen for two events: CellEditStarting and CellEditFinishing. Both events receive a CellEditEventArgs object. During the CellEditStarting event, you can cancel the edit completely by setting e.Cancel to true, or you can replace the default editor by assigning your own control to e.Control. If your cell editor has a read/write property called Value, ObjectListView will use that to get and put the cell value into the control. If it doesn't, the Text property will be used instead. (A custom editor for a Color value, say a ColorCellEditor, would typically expose such a Value property.)

During the CellEditFinishing event, you can also set e.Cancel to abandon the edit. No prizes for guessing that you can refer to the Control property to extract the value that the user has entered and then use that value to do whatever she or he wants. During this event, you should also undo any event listening that you have set up during the CellEditStarting event.

You can prevent the cell edit operation from finishing (e.g. if the value the user has entered isn't acceptable) by listening for the CellEditValidating event. If the handler for this event sets Cancel to true, the edit operation will NOT finish and the editor will remain on the screen. Please make sure you have made it clear to the user why the edit operation hasn't finished.

You can look in the demo at listViewComplex_CellEditStarting() and listViewComplex_CellEditFinishing() to see an example of handling these events.

Once the user has entered a new value into a cell and pressed Enter, the ObjectListView tries to store the modified value back into the model object. There are three ways this can happen:

- If the OLVColumn has an AspectPutter delegate installed, that delegate is invoked with the new value.
- Otherwise, if the column's AspectName names a writable property (even a dotted chain like Owner.Address.Postcode), the new value is written back through that property.
- Failing both of those, the new value is simply discarded. In all cases, RefreshItem() is then called so the row redisplays its data.

All aspects of cell editing are described in further detail on this page.

Remember that can of worms I didn't want to open? Owner drawing the ListView?
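Before opening that can of worms, here is a sketch of the two cell-editing events just described: a ComboBox is substituted in CellEditStarting, and the chosen value is written back manually in CellEditFinishing. The column name and the model's Occupation property are illustrative:

```csharp
private void objectListView1_CellEditStarting(object sender, CellEditEventArgs e)
{
    // Only customize the (hypothetical) occupation column.
    if (e.Column != this.occupationColumn)
        return;

    // Replace the default text box with a fixed-choice ComboBox.
    ComboBox cb = new ComboBox();
    cb.DropDownStyle = ComboBoxStyle.DropDownList;
    cb.Items.AddRange(new object[] { "Student", "Gardener", "Lecturer" });
    cb.SelectedItem = e.Value;
    cb.Bounds = e.CellBounds;
    e.Control = cb;
}

private void objectListView1_CellEditFinishing(object sender, CellEditEventArgs e)
{
    if (e.Column != this.occupationColumn || e.Cancel)
        return;

    // A ComboBox has no Value property, so read the result ourselves...
    string newOccupation = (string)((ComboBox)e.Control).SelectedItem;
    ((Person)e.RowObject).Occupation = newOccupation;

    // ...and tell ObjectListView not to try to store the value itself.
    e.Cancel = true;
}
```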
Well, one afternoon when I had too little to do (ha!), I decided that it really couldn't be too bad and I got out my can opener. Several evenings later, I could only confirm my original estimate: owner drawing is a can of worms. It should be easy. It should just work. But it doesn't. Regardless, ObjectListViews can now be owner-drawn and it is owner drawing on steroids!

Like most of ObjectListView, owner drawing is accomplished by installing a delegate. Inside the renderer delegate, you can draw whatever you like:

columnOD.RendererDelegate = delegate(DrawListViewSubItemEventArgs e, Graphics g, Rectangle r, Object rowObject) {
    g.FillRectangle(new SolidBrush(Color.Red), r);
    g.DrawString(((Person)rowObject).Name, objectListView1.Font, new SolidBrush(Color.Black), r.X, r.Y);
}

Installing a delegate works fine, but there are numerous utility methods that are useful within such a delegate. Is the row currently selected? What colour should the background be? The BaseRenderer class encapsulates these utilities. To make your own renderer class, you subclass BaseRenderer, override the Render(Graphics g, Rectangle r) method and again draw whatever you like, only this time you have a lot of nice utility methods available to you.

There are a couple of subclasses of BaseRenderer already available for use:

- BarRenderer: draws a bar (like a progress bar) whose length depends on the cell's value
- MultiImageRenderer: draws zero or more copies of an image, depending on the cell's value
- MappedImageRenderer: maps particular values (those of an enum, for example) to particular images
- ImageRenderer: draws images, where the aspect can be an ICollection of strings, ints or Images
- FlagsRenderer: draws one image for each flag that is set in the cell's value

To use any of these renderers or your own custom subclass, you assign an instance of them to a column's Renderer property, like this:

colCookingSkill.Renderer = new MultiImageRenderer(Resource1.star16, 5, 0, 40);

This means that the cooking skill column will draw up to 5 of the star16 images, depending on the data value. The renderer expects data values in the range 0 to 40. Values of 0 or less will not draw any stars. Values of 40 or more will draw 5 stars.
Values in between will draw a proportional number of stars.

As of v2.0, renderers are Components, which means they can be created and manipulated within the IDE. So, to use a MultiImageRenderer like the above, you would create one within the IDE, configure its properties as you want, and then assign it to the column's Renderer property.

Owner drawing only happens when you turn on the OwnerDraw mode. So, you can only see your custom renderer when the ObjectListView is in owner-drawn mode.

Rows in list views are always of a fixed height, calculated from the ListView font and/or the height of the image lists. Row height can be set using the RowHeight property, but you cannot have rows of differing heights — it simply cannot be done with a ListView.

It is obvious, but easily overlooked, that owner drawing is slower than native drawing, since it requires a lot more work. Again, for small lists, the difference is not significant. However, it can be noticeable when a large number of redraws is necessary. For example, go to the "Virtual List" tab in the demo and drag the scroll thumb down to the bottom. Now turn on OwnerDraw and do it again. Quite a difference!

As of v2.2, ObjectListView has reasonably sophisticated support for drag and drop operations. If you want the user to be able to drag rows out of an ObjectListView, you set the DragSource property. This property accepts an object that implements the IDragSource interface. It will often be enough to use an instance of SimpleDragSource:

this.objectListView1.DragSource = new SimpleDragSource();

This drag source remembers the currently selected rows, and equips the drag data object with text and HTML versions of those rows. With that simple drag source, you can select 10 rows from an ObjectListView and drag them onto Microsoft Word to create a formatted table of those rows.
You can also drag rows onto other ObjectListViews, which will normally be what you want. From within the IDE, you can set IsSimpleDragSource to true to make your ObjectListView into a drag source using a SimpleDragSource.

Accepting drops from other sources is a little more complicated, but is handled in a similar manner. If you want the user to be able to drop stuff onto an ObjectListView, you set the DropSink property. This property accepts an object that implements the IDropSink interface. In many cases, you will use an instance of SimpleDropSink:

this.objectListView1.DropSink = new SimpleDropSink();

From within the IDE, you can set IsSimpleDropSink to true to make your ObjectListView into a drop sink.

A DragSource needs no further information, but a DropSink needs to know at least two other things: when can something be dropped, and what should happen when it is dropped. If you use a SimpleDropSink, the ObjectListView will trigger two events to handle these situations: a CanDrop event and a Dropped event. To actually be useful, you need to handle these events. You can set up handlers for these events within the IDE, like normal. When the drag source is another ObjectListView, the ModelCanDrop and ModelDropped events give you more information: they deal directly with the model objects being dragged.

A SimpleDropSink is actually quite sophisticated. It can be configured in several ways. Have a look at the code to see more options.

Allow drops between items:

myDropSink.CanDropBetween = true;

You can even drop on subitems:

myDropSink.CanDropOnSubItems = true;

And change the highlight colour just to be different:

myDropSink.FeedbackColor = Color.IndianRed;

You can learn more about drag and drop, including how to write your own drop sink from scratch, on this page.

The most frequently requested feature is collapsible groups. I want it; you want it; your great aunt Mildrid who lives in Wollongong wants it. Unfortunately, with the current ListView it is just not possible: groups cannot be collapsed — on XP.
But on Vista, this most commonly requested feature is a reality. It is enabled by default, so under Vista, groups are automatically collapsible. If you don't want your groups to be collapsible, set HasCollapsibleGroups to false. Thanks to Crustyapplesniffer who implemented this feature.

In v2.3, groups received a major overhaul. No longer content with just being collapsible, groups can now have a title image, a subtitle, a task (that clickable link on the right), and footers. When done well, this can make your ListView look very nice indeed:

This extended formatting can be set up during the AboutToCreateGroup event. Alternatively, you can use the extended version of the MakeGroupies() method, which allows all these new properties to be configured. The above screenshot was configured with one MakeGroupies() call:

this.columnCookingSkill.MakeGroupies(
    new object[] {10, 20, 30, 40},
    new string[] {"Pay to eat out", "Suggest take-away", "Passable", "Seek dinner invitation", "Hire as chef"},
    new string[] {"emptytoast", "hamburger", "toast", "dinnerplate", "chef"},
    new string[] {
        "Pay good money -- or flee the house -- rather than eat their homecooked food",
        "Offer to buy takeaway rather than risk what may appear on your plate",
        "Neither spectacular nor dangerous",
        "Try to visit at dinner time to wrangle an invitation to dinner",
        "Do whatever is necessary to procure their services"},
    new string[] {"Call 911", "Phone PizzaHut", "", "Open calendar", "Check bank balance"}
);

These group formatting facilities are only available on Vista and later. On XP, groups can only have a header.

When an ObjectListView is empty, it can display a "this list is empty" type message. The EmptyListMsg property holds the string that appears when an ObjectListView is empty. This string is rendered using the EmptyListMsgFont. Both of these properties can be configured within the IDE.
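Set in code, the basic message is a one-liner (the message text here is mine):

```csharp
// The string shown when the list has no rows.
this.objectListView1.EmptyListMsg = "No people found. Add someone to get started.";
```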
But if you want to write a little bit of code, you can have much more interesting messages. The empty list message is actually implemented as an overlay. You can access that overlay through the EmptyListMsgOverlay property. By default, this is a TextOverlay that you can customise to your heart's content:

TextOverlay textOverlay = this.objectListView1.EmptyListMsgOverlay as TextOverlay;
textOverlay.TextColor = Color.Firebrick;
textOverlay.BackColor = Color.AntiqueWhite;
textOverlay.BorderColor = Color.DarkRed;
textOverlay.BorderWidth = 4.0f;
textOverlay.Font = new Font("Chiller", 36);
textOverlay.Rotation = -5;

Doing this gives a message like this:

ObjectListViews can now treat cells as hyperlinks. To do this, set UseHyperlinks to true on the ObjectListView, and then set the Hyperlink property of an OLVColumn to true to make all the cells in that column behave as hyperlinks.

If you don't want all cells to be hyperlinks, you can listen for the IsHyperlink event (in the above shot, occupations that start with "s" are not hyperlinks). In this event, you can specify what URL will be attached to that cell. By default, the URL is the text of the cell. If you set the URL to null, that cell will not be treated as a hyperlink. If you are already listening for the FormatCell event, you could set up the URL in that event too.

The formatting of the hyperlinks is controlled by the HyperlinkStyle property of ObjectListView. You can create and configure a HyperlinkStyle within the IDE, and then assign it to your ObjectListView. The same style can be assigned to multiple ObjectListViews. In 95% of cases, the default styling will suffice.

When a hyperlink is clicked, a HyperlinkClicked event is triggered. If you handle this yourself, set Handled to true to prevent the default processing from occurring.
If you don't handle it, the default processing is to try and open the associated URL.

Be careful about making column 0 a hyperlink. If it is, every time a user clicks a row trying to select it, it will open a browser window, which would become annoying very quickly.

In v2.4, the headers of an ObjectListView became fully style-able. If ObjectListView.HeaderUsesTheme is true (the default), the header will be drawn according to the OS's current theme and will ignore any header style. If this is false, the headers will be formatted according to their given header format style.

You can set the style for all columns at once (through ObjectListView.HeaderFormatStyle) or for just one column (through OLVColumn.HeaderFormatStyle). A style given to a specific column takes precedence over one given to the control as a whole. Like other styles, HeaderFormatStyles can be created, configured and assigned within the IDE.

Header styles allow a different appearance for each state of the header: Normal, Hot and Pressed. For each state, the header format allows the font, font color, background color and frame to be specified. If you combine these attributes badly, you can produce some truly dreadful designs, but when well used, the effect can be pleasant.

There is also HeaderWordWrap on ObjectListView which allows the text within a header to be word wrapped. So, if you are feeling sadistic, you can inflict something like this on your users:

If you have a narrow column and want to save some horizontal space, you can set OLVColumn.IsHeaderVertical to true, and the text of the header will be drawn vertically:

.NET has ImageIndex and IndexKey properties on the ColumnHeader class. However, no one knows quite why, since any value you set on these in the IDE will not be persisted by the code generator.
ObjectListView, however, has the HeaderImageKey property, which allows you to choose the image that will be shown in the header of a column.

One fundamental of good design is separation of presentation from model. Model classes should not know how they are being presented to the user. But there are development situations when speed of development is everything (merchant banks and stock brokers often seem to be in this camp). In such cases, placing some sort of user interface into the model classes themselves is an acceptable trade-off.

It is with a nod to such development that ObjectListView now has a Generator class and OLVColumn attributes. The idea with these classes is that, in your model classes, you decide which properties you want to appear in the ObjectListView, and then you give those properties an OLVColumn attribute. In this attribute, you specify some of the characteristics that you would normally give through the IDE (e.g. column title, alignment, image getter, format string). Then, when you are ready to show your list of models, you generate the columns from the model and then show the models:

List<ForexPurchase> purchases = this.GetForexPurchasesToShow();
Generator.GenerateColumns(this.olv, purchases);
this.olv.Objects = purchases;

In this example, this.olv is a completely unconfigured ObjectListView. Nothing was done to it in the IDE, except for placing it on the form: no columns were created or configured. The Generator uses the information that was given in the OLVColumn attributes to build a fully functional ObjectListView.
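On the model side, the attributed class might look something like the following sketch. The text doesn't spell out exactly which parameters the OLVColumn attribute accepts, so the ones used here (the title argument and Width) are assumptions for illustration and should be checked against the attribute's actual definition:

```csharp
// Sketch: a model class whose properties carry OLVColumn attributes,
// ready for Generator.GenerateColumns(). The attribute parameters shown
// are assumptions -- check the OLVColumn attribute class for exact names.
public class ForexPurchase
{
    [OLVColumn("Currency")]
    public string Currency { get; set; }

    [OLVColumn("Amount", Width = 100)]
    public decimal Amount { get; set; }

    // Properties without the attribute simply don't become columns.
    public int InternalId { get; set; }
}
```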
When the user later wants to see the foreign exchange sales that were made today, she clicks the "Sales" button, and some code like this might be executed:

List<ForexSale> sales = this.GetForexSalesToShow();
Generator.GenerateColumns(this.olv, sales);
this.olv.Objects = sales;

This reuses the same ObjectListView control, but now it is a fully functional ObjectListView showing information about Forex sales. [Thanks to John Kohler for this idea and the original implementation]

In v2.4, ObjectListViews became filterable. For backwards compatibility, this ability is off by default. Set UseFiltering to true to enable this feature.

If you set the ModelFilter or ListFilter properties, only model objects that match those filters will be shown in the list. List filters apply some criteria on the list as a whole. A tail filter (showing only, say, the last 500 lines) is a good example of a whole list filter (this is implemented by the TailFilter class). A model filter considers each model object in turn and decides whether or not that model should be included in the list shown to the user. Model filters are the most commonly used.

The ModelFilter class provides a very useful filtering implementation. It takes a delegate as a construction parameter, and that delegate decides if the given model object should be included. To filter a list of phone calls to only show emergency calls, you could install a filter like this:

this.olv1.ModelFilter = new ModelFilter(delegate(object x) {
    return ((PhoneCall)x).IsEmergency;
});

You can, of course, make your own filters, by implementing the IModelFilter or the IListFilter interface. You could, for example, show only emergency calls like this:

public class OnlyEmergenciesFilter : IModelFilter
{
    public bool Filter(object modelObject) {
        return ((PhoneCall)modelObject).IsEmergency;
    }
}
...
this.olv1.ModelFilter = new OnlyEmergenciesFilter();

One very common filtering task is to only show rows that contain a certain string. iTunes does this with its "Search" box. ObjectListView makes it very easy to implement this text filtering via a TextMatchFilter. You use it thus:

this.olv1.ModelFilter = new TextMatchFilter(this.olv1, "search");

After executing this line, the ObjectListView will only show rows where the text "search" occurs in at least one cell of that row. The filter can be configured to only consider some of the columns in the ObjectListView by setting the Columns property. This is useful for avoiding searching on columns that you know will return nonsensical results (like checkboxes or image-only columns).

It can also be set up to do regular expression searching or simple prefix matching:

this.olv1.ModelFilter = new TextMatchFilter(this.olv1, "^[0-9]+", TextMatchFilter.MatchKind.Regex);

As a bonus, if your filtered ObjectListView is owner drawn, you can pair this text searching with a special renderer, HighlightTextRenderer. This renderer draws a highlight box around any substring that matches its given text. So:

TextMatchFilter filter = new TextMatchFilter(this.olv1, "er");
this.olv1.ModelFilter = filter;
this.olv1.DefaultRenderer = new HighlightTextRenderer(filter);

The appearance of the highlighting can be adjusted through the renderer's CornerRoundness, FramePen and FillBrush properties. Remember: the list has to be owner drawn for the renderer to have any effect.

[v2.5] ObjectListView can present the user with an Excel-like filtering interface. If they right click on a column's header, a "Filtering" menu item will be presented. This will let the user select one or more distinct values from that column. When they click "Apply", only those rows that have one of the chosen values for that column will be displayed.

If you don't want your users to have this filtering ability, set ObjectListView.ShowFilterMenuOnRightClick to false. To hide the 'Filter' menu item for a particular column, set UsesFiltering to false on that column.
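Hooking that filter up to a live search box takes only a few lines. In this sketch, textBoxSearch is a hypothetical TextBox on the same form, and olv1 is assumed to already have UseFiltering set to true as described above:

```csharp
// Sketch: iTunes-style "filter as you type".
// textBoxSearch is a hypothetical TextBox; olv1 has UseFiltering = true.
private void textBoxSearch_TextChanged(object sender, EventArgs e)
{
    string text = this.textBoxSearch.Text;
    if (String.IsNullOrEmpty(text))
        this.olv1.ModelFilter = null;  // remove the filter: show all rows again
    else
        this.olv1.ModelFilter = new TextMatchFilter(this.olv1, text);
}
```

Setting ModelFilter back to null removes the filtering entirely, so clearing the box restores the full list.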
At runtime, the user is able to select which columns they wish to see in an ObjectListView. The user interface mechanism for this is that when the user right clicks on any header, they will be presented with a menu that lets them choose which columns they wish to see. The exact behaviour of the column selection mechanism is governed by the SelectColumnsOnRightClickBehaviour property.

To prevent the user from changing the visible columns, set this property to ColumnSelectBehaviour.None.

To present the column selection menu as a submenu off the header right click menu, set this property to ColumnSelectBehaviour.Submenu.

To present the column selection menu as the bottom items in the header right click menu, set this property to ColumnSelectBehaviour.Inline. This is the default. If SelectColumnsMenuStaysOpen is true (which is the default), the menu will remain open after the user clicks on a column, letting them hide or show multiple columns without having to right click again.

To present the user with a dialog that lets them choose the columns (as well as rearrange the order of the columns), set this property to ColumnSelectBehaviour.ModelDialog.

If there are some columns that you do not want the user to be able to hide, set OLVColumn.Hideable to false. This will prevent the user from hiding that column.

Note: Column 0 can never be hidden. This is a limit of the underlying Windows control. If you wish to make your first column hideable, move it to anywhere else in the column list, and then set its DisplayIndex to 0, so that it appears first.

Haven't you always wanted to put those little snazzy bits of moving eye candy into your applications?
You know, a flashing border around some cell, or a spinning star right in the middle of the list? In v2.4, ObjectListView now integrates with the Sparkle animation library. This library lets you put animations over the top of existing Controls. Due to the way .NET ListViews work, that library cannot work with plain ListViews. However, through the extensibility of ObjectListView's decorations, it now works seamlessly with ObjectListViews.

To learn about the Sparkle library, you should read this article -- and of course look at the code. But briefly, to use the library itself, you'll need to grasp its four major concepts: the animation itself, sprites (ISprite), effects (IEffect) and locators. The workflow for creating a Sparkle animation is to create the animation, add sprites to it, give those sprites effects, and position things using locators.

OK, OK. Just show me the code.

AnimatedDecoration listAnimation = new AnimatedDecoration(this.olvSimple);
AnimatedDecoration rowAnimation = new AnimatedDecoration(this.olvSimple, myModel);
AnimatedDecoration cellAnimation = new AnimatedDecoration(this.olvSimple, myModel, olvColumn);

The library provides several kinds of sprites (ImageSprite, TextSprite, ShapeSprite) and effects (such as MoveEffect). A sprite can be pinned in place by setting its FixedLocation with a locator:

image.FixedLocation = Locators.SpriteAligned(Corner.MiddleCenter);

This says the image's MiddleCenter will always be aligned to the MiddleCenter of the animation (which in this case is on the whole of the list).

Again, CodeProject doesn't support animations within a page, so click here to see the actual animation. In eight lines of code, you've put a spinning, fading star (a la Picasa) onto your otherwise static ListView.

Reflection is used in a couple of ways: to get data from the model objects, to put data into cell editors, and to put data into the model objects.

Getting data from the model object uses reflection to dynamically invoke a method, property or field by its name.
protected object GetAspectByName(object rowObject)
{
    if (String.IsNullOrEmpty(this.aspectName))
        return null;

    BindingFlags flags = BindingFlags.Public | BindingFlags.Instance |
        BindingFlags.InvokeMethod | BindingFlags.GetProperty | BindingFlags.GetField;
    try {
        return rowObject.GetType().InvokeMember(this.aspectName, flags, null, rowObject, null);
    } catch (System.MissingMethodException) {
        return String.Format("Missing method: {0}", this.aspectName);
    }
}

Things to note in this code:

- If the aspect name is empty, null is returned.
- The BindingFlags only allow public instance members to be found, so static, protected or private members cannot be used as aspects.
- InvokeMember() does all the real work: with the flags given above, it will try the name as a method, then as a property, then as a field.
- If nothing matches the name, the MissingMethodException is caught and an error string is returned, which at least makes the problem visible in the list.

Reflection is easier to code, but you pay the penalty in speed. On my machine, reflection is 5-10x slower than using delegates. On a list of only 10-20 items, it doesn't matter. However, if your list has hundreds of items, it's worth installing AspectGetter delegates.

The real code is actually more complicated, since it supports using a dot notation to access sub-properties. It is valid to have several method or property names, joined by dots, as an aspect name. Each method or property name is de-referenced and the result of that de-referencing is used as the target for the next method or property. It's more intuitive to use than it is to explain.

For example, Owner.Address.Postcode is a valid aspect name. This will fetch the Owner property from the initial model object and then ask that owner object for its Address. Then it will ask that address for its Postcode.

[As of v2.4, ObjectListView uses a more sophisticated scheme to access data through reflection. These improvements are housed in the Munger class. On average, these improvements are 3-5 times faster than standard reflection, which lessens (but does not completely remove) the value of writing custom AspectGetters.]

When we put a cell editor onto the screen, we need to get the value from the cell and somehow give it to the control. Unfortunately, there is no standard way for giving a Control a value.
Some controls have a Value property, which is exactly what we want, but others do not. Where there is a Value property, we want to use it, but where there isn't, the best we can do is use the Text property.

protected void SetControlValue(Control c, Object value, String stringValue)
{
    // Look for a property called "Value". We have to look twice
    // since the first time we might get an ambiguous result
    PropertyInfo pinfo = null;
    try {
        pinfo = c.GetType().GetProperty("Value");
    } catch (AmbiguousMatchException) {
        // The lowest level class of the control must have overridden
        // the "Value" property.
        // We now have to specifically look for only public instance properties
        // declared in the lowest level class.
        BindingFlags flags = BindingFlags.DeclaredOnly | BindingFlags.Instance | BindingFlags.Public;
        pinfo = c.GetType().GetProperty("Value", flags);
    }

    // If we found it, use it to assign a value, otherwise simply set the text
    if (pinfo == null)
        c.Text = stringValue;
    else {
        try {
            pinfo.SetValue(c, value, null);
        } catch (ArgumentException) {
            c.Text = stringValue;
        }
    }
}

So, what's going on here? Firstly, we use GetProperty to try and get information about the Value property on the control. We have to allow for ambiguous matches, which will occur if the control's immediate class has overridden the base class's Value property. In that case, we use some BindingFlags to say that we want the Value property that was declared in the lowest level class. To any language lawyers, yes, I know it's not foolproof, but it works in almost all cases.

Once we have the property info, we can simply call the SetValue method. We have to catch the ArgumentException just in case the value can't be set.

If any of this has gone wrong, we simply use the Text property to put the value into the control and hope that it does what we want.

ObjectListView was originally written to make a ListView easier to use, not to add swathes of new functionality.
Initially, sub-item images were the only additional functionality (now, however, it does add swathes of new functionality). A plain vanilla ListView only supports images in the first column. ObjectListView doesn't have this restriction; any column can show images. To show images on sub-items, there are basically two strategies: Owner drawing is a can of worms that I did not want to open, so I initially chose the second option. The ListView control in Windows has the ability to draw images against sub-items, but that functionality was not exposed in .NET. We can send messages to the underlying ListView control to make it show the images. Remember that these tricks rely on the underlying ListView control, so they may not work in future versions of Windows. It's certain that they will not work on non-Microsoft platforms. To make the ListView control draw sub-item images, we need to: LVS_EX_SUBITEMIMAGES Setting the extended style would be simple except that .NET doesn't expose the extended styles flag. So, we have to pull in the SendMessage() function and define the constants we want to use. SendMessage() [DllImport("user32.dll", CharSet=CharSet.Auto)] private static extern IntPtr SendMessage(IntPtr hWnd, int msg, int wParam, int lParam); private const int LVM_SETEXTENDEDLISTVIEWSTYLE = 0x1000 + 54; // LVM_FIRST+54 private const int LVS_EX_SUBITEMIMAGES = 0x0002; Then, at some convenient point, you turn on the flag: SendMessage(this.Handle, LVM_SETEXTENDEDLISTVIEWSTYLE, LVS_EX_SUBITEMIMAGES, LVS_EX_SUBITEMIMAGES); This would be enough, except that .NET Framework erases all unknown extended styles when an extended style is set. Examples are FullRowSelect and GridLines. So, the above code will have to be called after all other initialization is complete. FullRowSelect GridLines Our second task is to tell the ListView control which sub-item will show which image. To do this, we need a new structure, LVITEM, and some more constants. 
We don't use most of the LVIF_ constants, but they're included for completeness. LVITEM LVIF_ private const int LVM_SETITEM = 0x1000 + 76; // LVM_FIRST + 76 private const int LVIF_TEXT = 0x0001; private const int LVIF_IMAGE = 0x0002; private const int LVIF_PARAM = 0x0004; private const int LVIF_STATE = 0x0008; private const int LVIF_INDENT = 0x0010; private const int LVIF_NORECOMPUTE = 0x0800; [StructLayout(LayoutKind.Sequential, CharSet=CharSet.Auto)] private struct LVITEM { public int mask; public int iItem; public int iSubItem; public int state; public int stateMask; [MarshalAs(UnmanagedType.LPTStr)] public string pszText; public int cchTextMax; public int iImage; public int lParam; // These are available in Common Controls >= 0x0300 public int iIndent; // These are available in Common Controls >= 0x056 public int iGroupId; public int cColumns; public IntPtr puColumns; }; We also need to import SendMessage a second time, but with a slightly different signature. We use the parameter EntryPoint to import a function using a name other than the C# function name. SendMessage EntryPoint [DllImport("user32.dll", EntryPoint="SendMessage", CharSet=CharSet.Auto)] private static extern IntPtr SendMessageLVI(IntPtr hWnd, int msg, int wParam, ref LVITEM lvi); Finally, we can set up the sub-item images using a method like this: public void SetSubItemImage(int itemIndex, int subItemIndex, int imageIndex) { LVITEM lvItem = new LVITEM(); lvItem.mask = LVIF_IMAGE; lvItem.iItem = itemIndex; lvItem.iSubItem = subItemIndex; lvItem.iImage = imageIndex; SendMessageLVI(this.Handle, LVM_SETITEM, 0, ref lvItem); } In the above member, itemIndex is the 0-based index of the row in question. subItemIndex is the 1-based index of the sub-item and imageIndex is the 0-based index into the image list associated with the listview. itemIndex subItemIndex imageIndex Once we have our nice new UI widget, we still have one more important step: make it work within the IDE. 
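Putting those pieces together, a small sketch of how a ListView subclass might use them. The subclass name is hypothetical; the key detail, taken from the text above, is that the extended style must be applied after .NET has finished setting its own extended styles, so the handle-created override is a convenient place:

```csharp
// Sketch: wiring the imports and constants above into a ListView subclass.
// Assumes the SendMessage/SendMessageLVI declarations and the
// LVM_/LVS_EX_ constants shown earlier in this section.
public class SubItemImageListView : ListView
{
    protected override void OnHandleCreated(EventArgs e)
    {
        base.OnHandleCreated(e);
        // Apply the extended style after the handle exists, so .NET's own
        // extended-style handling doesn't erase it.
        SendMessage(this.Handle, LVM_SETEXTENDEDLISTVIEWSTYLE,
            LVS_EX_SUBITEMIMAGES, LVS_EX_SUBITEMIMAGES);
    }

    // SetSubItemImage() from the text would live here too.
}

// Elsewhere, once rows exist: show image #1 on the second sub-item
// of the first row. (Remember: subItemIndex is 1-based.)
myListView.SetSubItemImage(0, 2, 1);
```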
The whole point of this ListView is that it should make the programmer's life easier. That means it has to integrate well with the development environment, which drops us into the scary world of attributes and metadata.

One problem with figuring out how to integrate with the IDE is that it is not well-documented. That is, some pieces are documented, but it is usually not clear what we should do with those pieces. You might read that you can use EditorAttribute to control how a particular property is edited, but it is hard to see how to use that information to put the right sort of editors onto your custom DataSource and DataMember properties.

That is where the quasi-magical Lutz Roeder's .NET Reflector is so useful [this has been bought by RedGate, and is now only available commercially (they have a 14-day trial)]. It lets you look inside an assembly and see all its classes — not just the public ones — and all the methods of each class. It then reverse-engineers the source code for the methods. It's an amazing and amazingly useful piece of software.

Using the Reflector, it turns out that the right incantation for our DataSource property is the relatively simple, if unintuitive:

[AttributeProvider(typeof(IListSource))]
public Object DataSource {
    ...
}

However, for the DataMember property, we need to invoke this spell:

[Editor("System.Windows.Forms.Design.DataMemberListEditor, System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a", typeof(UITypeEditor))]
public string DataMember {
    ...
}

This has to be considered at least non-obvious given that DataMemberListEditor is not even mentioned in the SDK documentation.

To limit the widths of columns, we have to find some way to intercept attempts to modify them. There are three UI mechanisms to change the width of a column, but fortunately, all three of these mechanisms ultimately use the same HDN_ITEMCHANGING message. We only need to catch that message and everything should be fine.
The first part of the solution requires the handling of WM_NOTIFY events, like this:

protected override void WndProc(ref Message m)
{
    switch (m.Msg) {
        case 0x4E: // WM_NOTIFY
            if (!this.HandleNotify(ref m))
                base.WndProc(ref m);
            break;
        default:
            base.WndProc(ref m);
            break;
    }
}

Then we can handle the HDN_ITEMCHANGING message. If the change would make the column wider or thinner than it should be, we simply veto the change by returning a result of 1.

private bool HandleNotify(ref Message m)
{
    bool isMsgHandled = false;
    const int HDN_FIRST = (0 - 300);
    const int HDN_ITEMCHANGINGA = (HDN_FIRST - 0);
    const int HDN_ITEMCHANGINGW = (HDN_FIRST - 20);

    NMHDR nmhdr = (NMHDR)m.GetLParam(typeof(NMHDR));
    if (nmhdr.code == HDN_ITEMCHANGINGW) {
        NMHEADER nmheader = (NMHEADER)m.GetLParam(typeof(NMHEADER));
        if (nmheader.iItem >= 0 && nmheader.iItem < this.Columns.Count) {
            HDITEM hditem = (HDITEM)Marshal.PtrToStructure(
                nmheader.pHDITEM, typeof(HDITEM));
            OLVColumn column = this.GetColumn(nmheader.iItem);
            if (IsOutsideOfBounds(hditem, column)) {
                m.Result = (IntPtr)1; // prevent the change
                isMsgHandled = true;
            }
        }
    }
    return isMsgHandled;
}

private bool IsOutsideOfBounds(HDITEM hditem, OLVColumn col)
{
    // Check the mask to see if the width field is valid
    if ((hditem.mask & 1) != 1)
        return false;

    // Now check that the value is in range
    return (hditem.cxy < col.MinimumWidth ||
        (col.MaximumWidth != -1 && hditem.cxy > col.MaximumWidth));
}

The solution does not appear very complicated. However, reality is rarely so simple. For example, those of you with some knowledge about the header control might be thinking, "Hey! What about HDN_TRACK and its friends? Why don't you do anything about them?"
Since version 4.71 of common controls, header controls always receive the HDS_FULLDRAG style. So, still it seems that we only have to handle the HDN_ITEMCHANGING message. HDS_FULLDRAG The trouble is that this is not always true. Under XP SP2 (at least), header controls with the HDS_FULLDRAG style do not always send HDN_ITEMCHANGING messages rather than HDN_TRACK messages. This may be why Microsoft has withdrawn that particular KB article. On some machines, header controls send HDN_ITEMCHANGING events as they should, but on others, the header controls send the old sequence of messages: HDN_BEGINTRACK, HDN_TRACK, HDN_ENDTRACK, HDN_ITEMCHANGING, HDN_ITEMCHANGED. HDN_BEGINTRACK HDN_ENDTRACK HDN_ITEMCHANGED After quite a bit of digging, the crucial setting seems to be the Explorer option "Show Window Contents While Dragging." In an example of "truly bizarre side effects," if this option is turned on, the header will send HDN_ITEMCHANGING messages instead of HDN_TRACK messages (as it should). However, if it is turned off, the header sends lots of HDN_TRACK messages and only one HDN_ITEMCHANGING message at the very end of the process. Having two possible sequences of events complicates my simple plan. If the "Show Window Contents While Dragging" option is turned on, the current code works perfectly. If it is off, things are uglier. On the whole, if we receive multiple HDN_TRACK messages and only one HDN_ITEMCHANGING message, it's harder to control the resizing process. The reason is that there is no way to cancel just one HDN_TRACK message. If we return a result of 1 from a HDN_TRACK message, we cancel the whole drag operation, not just that particular track event. From the user's point of view, it would appear that when they dragged the column to its minimum or maximum width, the drag operation would simply stop, even when they hadn't released the mouse. This is clearly not what we want. 
[v2.0] As of v2.0, ObjectListView modifies the HDN_TRACK message itself, changing the size of the column in place, which is the best solution. Just in case someone else runs into this problem, there is a strange bug in the ListView code. The effect of this bug is that in between a BeginUpdate() and EndUpdate() pair, iterating over the Items collection and calling Items.Clear() does not work reliably if the ListView handle has not been created. For example, if you call the following method before the handle for listView1 has been created, nothing will be written to the debugging output: BeginUpdate() EndUpdate() Items.Clear() listView1 private void InitListView1() { this.listView1.BeginUpdate(); this.listView1.Items.Add("first one"); this.listView1.Items.Add("second one"); foreach (ListViewItem lvi in this.listView1.Items) System.Diagnostics.Debug.WriteLine(lvi); this.listView1.EndUpdate(); } If you remove the BeginUpdate() and EndUpdate() pair or call the method after the handle has been created, the method will work as expected. The source of this bug is that, deep in the bowels of the ListView code, when BeginUpdate() is called, the ListView begins caching list updates. When EndUpdate() is called, the cache is flushed. However, GetEnumerator() does not flush the cache or take it into account. So, iterating over Items between calls to BeginUpdate() and EndUpdate() will only return the items that were present before BeginUpdate(). There are at least two easy ways around this bug: GetEnumerator() Items.Count Thanks to ereigo for helping me track down this bug. I like pictures. I think it's neat that in Explorer you can put a little graphic in the bottom right of the listview. I wanted to do the same thing with an ObjectListView. Surely, it can't be that difficult. But it was. What did I specifically want from this feature? I wanted a background image on the ObjectListView. It had to stay fixed in place, not scrolling when the ListView scrolls. 
It had to work on XP and Vista. It had to be easy to customize, ideally just setting an image within the IDE. If the image could be positioned in whatever corner, or have a varying level of transparency, those would be bonuses. And obviously, I wanted it to work flawlessly — though I would be content with working spectacularly well.

The classic solution is to intercept the WM_ERASEBKGROUND message, erase the ClientRectangle, draw whatever you want, and the rest of the control then draws over what you've already drawn. Easy. Except that this scheme fails once the control is double buffered (the DoubleBuffered and AllPaintingInWmPaint control styles), and ObjectListView is double buffered.

The second try was to use LVM_SETBKIMAGE. This WinSDK message tells a ListView to draw an image under the control. Exactly what I wanted. But life is rarely that easy.

The first difficulty was actually making it work. TortoiseSVN sometimes has a listview background image, and Stefan had kindly documented some of his troubles in getting it to work. Using the information there, I managed to put an image under the control! Excellent... well not really. It did put an image under the ListView, but with a number of unpleasant side-effects (the details involved the LVBKIMAGE structure, its xOffset and yOffset fields, and the LVBKIF_FLAG_ALPHABLEND flag).

[May 2012 Update] Windows 7 has improved this situation somewhat. It doesn't have the same problems with column 0 or subitem images. Grid lines are still a problem, and obviously, it doesn't work in owner drawn mode (in owner drawn mode, each cell draws itself, including its background, which covers the background image). If you can live with these limitations, native watermarks are quite neat. They are true backgrounds, not translucent overlays like the OverlayImage uses. They also have the decided advantage over overlays in that they work correctly even in MDI applications.
ObjectListView now [v2.5.1] has built-in support for native backgrounds:

// Set a watermark in the bottom right of the control
this.olv.SetNativeBackgroundWatermark(Resource1.redback1);

// Set the background image positioned 50% horizontally and 75% vertically
this.olv.SetNativeBackgroundImage(Resource1.redback1, 50, 75);

// Set a tiled background to the control
this.olv.SetNativeBackgroundTiledImage(Resource1.limeleaf);

I eventually decided on using the Layered Window API, which .NET exposes through the Opacity and TransparencyKey properties of Form. The idea was to place a completely transparent form over the top of the ObjectListView, and then draw onto that form (Mathieu Jacques did the same thing with his LoadingCurtain idea). From the user's point of view, the image appears to be drawn onto the ObjectListView, but from the ObjectListView point of view, the overlays are not there.

This idea was a good one, but there were many other complications before it actually worked. See here for the overly-detailed version of the story. But it did eventually work. So, as of v2.2, ObjectListViews support overlays, which are non-scrolling, translucent images or text drawn over the top of the list contents. The overlays scheme is extensible, but the two most common overlays are image overlays and text overlays.

These two overlays are so common that ObjectListView comes with one of each predefined: the OverlayImage and OverlayText properties. These predefined overlays are exposed to the IDE, and can be configured directly from there. So, for the majority of cases, this is the only understanding of overlays that you will need.

Despite my best efforts, this scheme does not work on MDI applications. Overlays do not obey the z-ordering that MDI forms obey. This means that overlays appear on top of all MDI forms. There is no solution to this problem. Just don't use overlays in MDI applications.
Your only option is to use native background images (with their limitations). I have written this control in two other languages — Smalltalk and Python — and it is always one of the most useful items in my toolbox. I have used it in every project I have had in those languages and I'm sure it is just as useful here. I hope that the code encourages more "slothfulness" and gives programmers time to improve some other parts of their project, thus encouraging hubris as well.

v2 introduced AddObjects(), InsertObjects(), RefreshObjects(), RemoveObjects() and CopyObjectsToClipboard().

v3.0 will be a big change. Until now, each version has striven to maintain compatibility. There is no definite timetable for version 3.0.

After a long break, the next release of ObjectListView is available. Members mentioned in these release notes include: AlwaysGroupByColumn, CellEditTabChangesRows, CellEditEnterChangesRows, Sortable, Groupable, Searchable, Hideable, VirtualListView, GetNextItem(), GetPreviousItem(), OLVListView, TreeRenderer, IRenderer, SelectObject(), SelectObjects(), SelectedObject, TextMatchFilter.MatchKind, InlineMenu, SubMenu, ModalDialog, ColumnSelectionForm, OLVColumn.AutoCompleteEditorMode, AutoCompleteEditor, ObjectListView.IncludeColumnHeadersInCopy, ObjectListView.Freezing, TreeListView.ExpandedObjects, Expanding, Expanded, Collapsing, Collapsed, ObjectListView.SubItemChecking, Equals(), ==, UseTranslucentSelection, UseTranslucentHotItem, BorderDecoration, CopySelectionOnControlCUsesDragSource, Alt-[arrow], CheckBoxes, SaveState(), RestoreState(), GroupByOrder, None, GroupWithItemCountSingularFormatOrDefault, ClearObjects(), ModelDropEventArgs.RefreshObjects(), TreeListViews, OLVColumn.HeaderTextAlign, EditingCellBorderDecoration, OLVColumn.Wrap, ObjectListView.SmoothingMode, BuildList(true), CellEdit, NewValue, AllowExternal, RearrangableDropSink, ObjectListView.HeaderMaximumHeight, ChangeToFilteredColumns(), InvalidCast, Click, OLVColumn.ValueToString(), HeaderFont, HeaderForeColor, SelectAllOnControlA, CopySelectionOnControlC, Name, UseOverlays, F2, ShowHeaderInAllViews.

This release focused on formatting — giving programmers more opportunity to play with the appearance of the ObjectListView. Decorations allow you to put pretty images, text and effects over the top of your ObjectListView. Groups have been overhauled for this release. Groups under XP remain unchanged, but under Vista and Windows 7, many more formatting options are now available. ObjectListViews can now have cells that are hyperlinks. The font and text color of the ObjectListView header can now be changed. You can also word wrap the header text. In previous versions, RowFormatter was the approved way to change the formatting (font/text color/background color) of a row or cell, but it had some limitations (see RowFormatter, AlternateBackgroundColors, OLVListItem).

By using compiler attributes, ObjectListViews can now be generated directly from model classes. [Thanks to John Kohler for this idea and the original implementation] When running on Vista and later, virtual lists can now be grouped! FastObjectListView supports grouping out of the box. For your own VirtualObjectListView you must do some more work yourself. [This was more of a technical challenge for myself than something I thought would be wildly useful. If you do actually use groups on virtual lists, please let me know] Members touched in this release include: OwnerDrawn, ShowCommandMenuOnRightClick, HotItemChanged, Hot*, UseExplorerTheme, OLVColumn.AutoCompleteEditor, OlvHitTest(), HitTest(), OLVListItem.GetSubItemBounds(), TextAlign, LastSortColumn, PrimarySortColumn, LastSortOrder, PrimarySortOrder, OLVListItem.Bounds, IncrementalUpdate().

This is primarily a bug fix release (CellClicked, CellOver, CellRightClicked, BuildList(), CalculateCellBounds()).

The two big features in this version are drag and drop support and image overlays. ObjectListViews now have sophisticated support for drag and drop operations. An ObjectListView can be made a source for drag operations by setting the DragSource property. Similarly, it can be made a sink for drop actions by setting the DropSink property. The dragging is based on the IDragSource interface, and the drop handling revolves around the IDropSink interface. Rearrangeable lists are supported through the RearrangeableDropSink class.

This version added the ability to draw translucent images and text over the top of the ObjectListView contents. These overlays do not scroll when the list contents scroll, and they work in all Views. You can set an overlay image within the IDE using the OverlayImage and OverlayText properties. The overlay design is extensible, and you can add arbitrary overlays through the AddOverlay() method. Members touched in this release include: Views, AddOverlay(), CellToolTipControl, HeaderToolTipControl, SelectedColumn, TintSortColumn, SelectedColumnTint, Scroll, ItemRenderer, modelObject[this.AspectName], DataRows, DataRowViews, EditorRegistry, TriStateCheckBoxes, UseCustomSelectionColors, UseHotItem, TreeListView.RevealAfterExpand, VirtualObjectListViews, ItemCheck, ItemChecked, ImagesRenderer, FlagsRenderer&lt;T&gt;.

This version adds some small features and fixes some bugs in the 2.0 release: ObjectListView.EnsureGroupVisible(), TreeView.UseWaitCursorWhenExpanding, internal protected, ObjectListView.FinishCellEditing(), ObjectListView.PossibleFinishCellEditing(), ObjectListView.CancelCellEditing(), TreeRenderer.LinePen, HideSelection.

Version 2.0 is a major change to ObjectListView. Members touched in this release include: ListViewPrinter's Pens and Brushes, CellToolTipGetter, HeaderToolTipGetter, BeforeSorting, AfterSorting, TypedObjectListView.GenerateAspectGetters(), CheckedAspectName, IsSearchOnSortColumn, CheckStateGetter, CheckState, booleans, BooleanCheckStateGetter, BooleanCheckStatePutter, CustomSorter, FastObjectListDataSource, SortObjects(), HighlightForegroundColor, HighlightBackgroundColor, EnableCustomSelectionColors(), AlwaysGroupBySortOrder, CheckedObject, SORT_INDICATOR_UP_KEY, SORT_INDICATOR_DOWN_KEY, CopyObjectsToClipboard(), CopySelectionToClipboard(), MakeColumnSelectMenu(), AddObject(s)/RemoveObject(s), GetItem(), UserControl, GetAspectByName(), BusinessCardRenderer, RenderDelegate, boolean, void, RendererDelegate, TopItemIndex, ColumnRightClick, SelectedIndex, GetCheckedObject(), GetCheckedObjects(), GetItemAt(), Windows.Forms listview, FlagRenderer, OLVColumn.IsVisible, ColumnSelectionForm, SelectColumnsOnRightClick, Objects, SecondarySortColumn, SecondarySortOrder, RightToLeftLayout, ListViewPrinter, SelectedItem, GetColumn(), string, DrawAlignedImage(), ListItem, UseAlternatingBackColors, ListView.View, Owner.Workgroup.Name, OLVColumn.MakeGroupies(), Freeze()/Unfreeze()/Frozen, null, SetObjects(null), GetSelectedObjects(), if...else.

This code is covered by the GNU General Public License.
http://www.codeproject.com/Articles/16009/A-Much-Easier-to-Use-ListView?fid=350107&df=90&mpp=10&sort=Position&spc=None&select=4425291&tid=4422664
Pier Fumagalli wrote: >). try clicking on "" > For blocks, though, do we want to have them to point at something (like > the block descriptor, or the home page) or shall we ignore for now? we *MUST* be able to serve stuff from our URI block identifiers from the future. it's actually the whole point in having http: based identifiers. I think that blocks URI should be starting with because that doesn't change and also signifies that these URI don't represent namespaces (which will be still) and will also make it easier for us to control that URL space when we'll need to publish the metadata in it. -- Stefano.
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200403.mbox/%3C406B006C.9080805@apache.org%3E
- First, we need to import the necessary namespaces:

    import static org.junit.Assert.*;
    import org.junit.Test;
    import Application.MathProvider;

The first import, org.junit.Assert.*, will import all the assertion statements. These assertion methods are used to verify values and are explained below.

- Define the class and the class variables. Also, define the constructor if needed.

    public class JunitMathProvider_1 {
        MathProvider provider;
        public JunitMathProvider_1() {
            provider = new MathProvider();
        }
    }

In this step, we have just created a simple class called 'JunitMathProvider_1' and created a class variable called provider of the type MathProvider. It is because we have to test the MathProvider class.

- Define the test method, add():

    @Test
    public void add() {
        System.out.println("Inside add()");
        int firstNumber = 10;
        int lastNumber = 20;
        assertEquals(firstNumber + lastNumber, provider.Add(firstNumber, lastNumber));
    }

Here we have created a method inside the class called add(). This method will test the add functionality of the MathProvider class. To test it we have declared two integers named firstNumber and lastNumber and assigned them the values 10 and 20. Now we will assert that the sum of these two numbers is equal to the number returned by MathProvider.Add. Note: Don't worry about the System.out.println statement; it's just a log to the System.out stream to print the method name with some text.

What is Asserts? Asserts mean verifying the values. Assert can be of an Object type, Boolean type, Int type or any data type. It simply verifies the actual value against the expected value, and assert records only the failed scenarios.

What is '@Test' Annotation? Pay attention to the '@Test' annotation. All the tests in JUnit 4 have a @Test annotation that should be put on a test method. This helps the JUnit framework identify tests inside a test class.

Now your complete test will look like this:

    package Application.UnitTests;

    import static org.junit.Assert.*;
    import org.junit.Test;
    import Application.MathProvider;

    public class JunitMathProvider_1 {
        MathProvider provider;

        public JunitMathProvider_1() {
            provider = new MathProvider();
        }

        @Test
        public void add() {
            System.out.println("Inside add()");
            int firstNumber = 10;
            int lastNumber = 20;
            assertEquals(firstNumber + lastNumber, provider.Add(firstNumber, lastNumber));
        }
    }
- Right click on the 'JunitMathProvider_1' class, go to 'Run As' > 'JUnit Test'. Your test, project window, and test result will look like this.

Step 3: Set up JUnit Perspective:

- If you are using Eclipse, you can see the JUnit perspective by clicking on Window > Show View > Other.
- Search for JUnit in the window that pops up and select JUnit.
- You will start seeing the JUnit view in the view pane, as shown here.

Note: Make sure that the 'two small cross' button marked with the highlighter is not clicked. If it is clicked, you will not see passed results; it will only show failed results. You will see that the view shows the test class and the result of each test method.
https://www.toolsqa.com/java/junit-framework/first-junit-test/
TL;DR In this tutorial, we’re going to learn the basics of Nrwl’s tool Nx, as well as how to create a custom workspace CLI schematic. You can see the finished code in this repository.

Note: This tutorial assumes some basic knowledge of Angular and the Angular CLI. If you've never touched Angular before, check out our Real World Angular series.

Concerns of Enterprise Teams

Developing applications on a large enterprise team is different than developing alone or on a small team. While small groups might be able to get away with ad hoc decisions on application structure or coding best practices, the same cannot be said when you’re working with dozens or perhaps hundreds of other people. In addition, enterprise teams need to assume that their code will stick around for many years to come. This means they’ll need to do as much work as possible at the beginning of an application’s development to minimize the cost of maintenance down the road. We can summarize the concerns of enterprise teams like this:

- Consistency — how do we make sure everyone in the organization (which may be thousands of people) follows the same best practices for structuring and writing code?
- Safety — how do we ensure that our code will not be subject to attacks or prone to errors?
- Increased size and complexity — how can we structure our code so that it can grow without sacrificing clarity or performance?
- Changing requirements — how can we keep up with the demands of the business to continually update the application without letting technical debt get out of control?

While small teams and small organizations share these same concerns to some extent, the risks can be catastrophic at the enterprise scale. To learn more about the problems large teams face, check out Victor Savkin’s excellent ng-conf 2018 talk Angular at Large Organizations.
Nx: The Enterprise Toolkit for Angular

Nx is a set of tools for the Angular CLI built by the consulting firm Nrwl to help with exactly these issues of consistency, safety, and maintainability. In addition to including a schematic to implement monorepo-style development of applications and libraries, Nx includes a set of libraries, linters, and code generators to help large teams create and enforce best practices across their organizations.

"Nx by @nrwl_io is a set of tools to help enterprise @angular devs with consistency, safety, and maintainability." Tweet This

Out of the box, Nx contains tools to help with:

- State management and NgRx
- Data persistence
- Code linting and formatting
- Migrating from AngularJS to Angular
- Analyzing dependencies visually
- Creating and running better tests
- Creating workspace-specific schematics

While we can’t delve into every feature of Nx, we are going to take a look in just a bit at that last one: workspace-specific schematics. Before we do that, though, let’s learn how to get started with Nx.

Nx Basics

Let’s learn the basics of getting up and running with Nx. We’ll need to know how to install Nx, how to generate a workspace, and how to create applications and libraries.

Install Nx

We’ll start by installing Nx, which is really just a collection of Angular CLI schematics. You’ll need Node 8.9 or greater and NPM 5.5.1 or greater for this, which you can install from the Node website. First, make sure you have the latest version of the Angular CLI installed globally:

    npm i -g @angular/cli

(Note that npm i -g is just shorthand for npm install --global.)

Then, install Nx globally:

    npm i -g @nrwl/schematics

Installing Nx globally is actually only required to create workspaces from scratch from the command line. If you’re unable to install things globally at work, don’t worry.
You can add Nx capabilities to an existing CLI project by running this command:

    ng add @nrwl/schematics

Note: You can also use Angular Console instead of the CLI throughout this tutorial if you'd prefer to work with a GUI instead of the command line.

Create a Workspace

Now that we’ve got Nx installed, we’re ready to create our first workspace. What exactly is a workspace, though? A workspace is an example of a monorepo (short for monorepository) — a repository that holds several related applications and libraries. Let’s imagine we’ve been tasked to create a pet adoption system. Let’s think about some of the different pieces we might need for this:

- A front end application for potential adopters to browse through the pets
- A front end application for administrators to update available pets and receive inquiries
- Shared UI components between the front ends
- Shared data access libraries between the front ends
- A Node server to serve up the pet and adoption inquiry data

While we could keep each of these in a separate repository, it would get messy keeping all of the versions in sync, managing dependencies, and ensuring that all developers working on the project have the correct access they need. Monorepos solve these problems and more. With Nx, a workspace is a monorepo structure for the Angular CLI to keep your applications and libraries organized. After installing Nx globally, we can run this command:

    create-nx-workspace pet-adoption-system

When we run this command with version 7 or later of the CLI and Nx, we'll get two prompted questions. The first is whether we’d like to use a separate name for the npm scope, which lets us import internally using a shorthand. For example, we could set the scope to "adoption-suite", which would mean our imports would start with @adoption-suite. We’re not going to do this in this tutorial, so we can just hit enter to leave it as the default, which is pet-adoption-system.
The second prompt is whether we’d like to use npm or Yarn for package management. I’ll be using npm in this tutorial, but you’re welcome to use Yarn instead.

This will create a new CLI workspace that will look a bit different from what you’re used to. Instead of the usual src folder, you’ll see folders named apps, libs, and tools at the root of the project. You’ll also see some extra files like nx.json to configure Nx, .prettierrc to configure the Prettier formatting extension, and .editorconfig to set some editor presets. Nx does a lot of set-up for you so you can focus on writing code instead of on tooling.

Create an Application

Let’s create an application in our new workspace and add routing to it:

    ng generate app adoption-ui --routing

Version 7 of Nx adds prompts regarding directory placement, style extension, and choice of unit and end-to-end test runners. This is great because we can now easily set up an application using SCSS, Jest for unit testing, and Cypress for end-to-end testing with zero extra work. In this tutorial, feel free to leave all the prompts at their defaults. We won’t be using them here.

Nx’s app schematic is almost identical to the built-in CLI version, but it does have a couple of differences. For example, the routing option we added configures the root NgModule with routing instead of creating a separate module. This has increasingly become best practice in the Angular community to avoid "module pollution," so it’s really nice Nx does this for us by default. After we’ve run the command, we’ll now have an apps/adoption-ui folder with the familiar CLI structure inside of it. Notice that the AppModule is set up with routing and that there is a separate apps/adoption-ui-e2e folder for Protractor. This differs a bit from the regular CLI setup, where the e2e folder is inside of the app folder.

Create a Library

Libraries are ideal places to house things like UI components or data access services for use in multiple applications.
We can add new libs to an Nx Workspace by using the Angular CLI generate command, just like adding a new app. Nx has a schematic named lib that can be used to add a new Angular module lib to our workspace:

    ng generate lib shared-components

We’ll get several prompts at the command line regarding tooling, testing, setup, and routing. Since we’re not actually going to be using this library, the answers to these questions are irrelevant -- go ahead and just accept all of the defaults. However, it’s good to know how easily customizable libs are from the command line, from routing and lazy loading to tooling and setup.

Running this command will create a libs/shared-components folder with its own src folder and config files. We can easily add components to this library by passing a project option to the CLI. Let's take advantage of the g shortcut for generate and the c shortcut for component and run the following command:

    ng g c pet-list --export=true --project=shared-components

Since this is a shared library, I’ve added export=true to export the component from the NgModule. Now that we know the basics of Nx, let’s explore one of its lesser known but incredibly powerful features: the ability to create custom workspace schematics.

Creating a Custom Auth Schematic

You’re actually already familiar with schematics — they’re what the Angular CLI uses to create new components, services, and more when you run the ng generate command. You can also write your own schematics from scratch, whether or not you’re using Nx. The beauty of Nx, though, is that it does all the hard work of wiring up your custom schematics so that it’s easy to run them in your workspace. Custom schematics are frequently used for two broad purposes:

- To enforce styles or standards at your organization.
- To generate custom code specific to your organization.

"Custom @angular CLI schematics are used to enforce styles or standards and to generate custom code."
Tweet This

We’re going to focus on the former in this tutorial because generating custom code with the schematics API requires some knowledge of the TypeScript abstract syntax tree and is a little out of our scope in this tutorial. Some examples of good ideas for custom schematics that enforce styles or standards are:

- Enforcing directory structure or application architecture.
- Enforcing patterns for data access like NgRx.

Since authentication is a subject dear to our hearts here at Auth0, let’s create a schematic for developers to use when adding an authentication module to their applications. We’d like it to do four things:

- Adhere to a naming convention of prefixing "auth-" to the files.
- Create an authentication module and import it into the project’s AppModule.
- Create an empty service that will hold our authentication code.
- Create an empty CanActivate guard that will hold an authentication route guard.

We can accomplish all of this in a single schematic that will accept the name of the module and the project we’re adding it to as arguments. This will let our developers working on the pet adoption system quickly add the correct scaffolding to any application they’re building on the project. Let’s get started!

Generate the Custom Schematic

The first step to creating a custom schematic is to use Nx to generate the initial code. We can do this by running this command:

    ng g workspace-schematic auth-module

(Notice that I've used the g shortcut for generate again.) The workspace-schematic schematic (I know, it’s so meta) creates the auth-module folder inside of tools/schematics. This new folder contains two files: schema.json and index.ts.

Update the Schema

Schema.json contains metadata for our custom schematic like the options used with it.
If we open it, we’ll see this default code:

    // tools/schematics/auth-module/schema.json
    {
      "$schema": "",
      "id": "auth-module",
      "type": "object",
      "properties": {
        "name": {
          "type": "string",
          "description": "Library name",
          "$default": {
            "$source": "argv",
            "index": 0
          }
        }
      },
      "required": ["name"]
    }

By default, we’re able to pass a name property, which is a string, to our schematic. The $default parameter tells us that it will assume that the first argument given to this command is the value for the name property. Feel free to change the description of the name property to something more specific, like, "Auth module name." Let’s also add another property to this file to specify the project to which we’ll add our new authentication module. Underneath the name property, add the following:

    // tools/schematics/auth-module/schema.json
    // ...above code remains the same
    // add under the name property:
    "project": {
      "type": "string",
      "description": "Project to add the auth module to"
    }
    // ...below code remains the same

Let’s make it required, too, by adding it to the required array. The finished file will look like this:

    // tools/schematics/auth-module/schema.json
    {
      "$schema": "",
      "id": "auth-module",
      "type": "object",
      "properties": {
        "name": {
          "type": "string",
          "description": "Auth module name",
          "$default": {
            "$source": "argv",
            "index": 0
          }
        },
        "project": {
          "type": "string",
          "description": "Project to add the auth module to"
        }
      },
      "required": ["name", "project"]
    }

We could add any other options with their types here if we needed them. If you look at schema.json for the official component schematic, for example, you’ll see familiar boolean options like spec and inlineTemplate.

Write the Custom Schematic

We’re now ready to write the custom schematic implementation. This lives in the generated index.ts file. Let’s open it up and begin to create our auth-module schematic.
We’ll see the following code generated by Nx:

    // tools/schematics/auth-module/index.ts
    import { chain, externalSchematic, Rule } from '@angular-devkit/schematics';

    export default function(schema: any): Rule {
      return chain([
        externalSchematic('@nrwl/schematics', 'lib', { name: schema.name })
      ]);
    }

Let’s break down what’s happening here so that we can build off of it. First, we’re importing the chain and externalSchematic functions, as well as the Rule type, from @angular-devkit/schematics. We then use all of those imports in the exported function. What are these things, though?

We often say that schematics are like blueprints when you’re building a house. In that analogy, a Rule is a page or section of a blueprint. Rules are the individual pieces of the schematic that tell the CLI how to modify the file system, represented as a Tree in schematics. Trees allow us to mutate the file system with file creations, updates, and deletions. Generally, anything we can do with the file system we can do with the tree. Unlike the file system, though, trees use a transactional structure. No changes are made to the actual file system until the entire schematic runs. If part of the schematic fails, any changes to the file system are rolled back.

Rules, then, are functions that take in the current tree and return either a modified tree or another rule. This means that rules are also composable. We can combine rules using the chain function like we’re doing in this example. There are other operators, too, like branchAndMerge and mergeWith. You’ll see these functions throughout the official CLI schematics code. We can also use the externalSchematic function here to refer to an outside schematic and compose its rules onto our custom schematic. In the boilerplate generated by Nx, we start with a rule that simply runs the lib schematic built into Nx, which is in the @nrwl/schematics collection.
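The Rule/Tree mechanics just described can be sketched in a few lines of plain TypeScript. Note that this is a conceptual model only: the Tree, Rule, chain, and runTransactional names below are simplified stand-ins invented for illustration, not the real @angular-devkit/schematics API.

```typescript
// Simplified stand-in: a Tree is modeled as a map of file paths to contents.
type Tree = Map<string, string>;

// A Rule takes the current tree and returns a (possibly modified) tree.
type Rule = (tree: Tree) => Tree;

// Compose rules left to right, like the real chain() operator.
function chain(rules: Rule[]): Rule {
  return (tree: Tree) => rules.reduce((t, rule) => rule(t), tree);
}

// Run a rule transactionally: stage the work on a copy so that if any
// rule throws, the original tree is left untouched (mirroring how
// schematics roll back all staged changes on failure).
function runTransactional(rule: Rule, tree: Tree): Tree {
  const staged = new Map(tree);
  return rule(staged);
}

// Two tiny rules that create files, as our auth schematic will.
const addModule: Rule = (tree) =>
  tree.set('auth-adoption/auth-adoption.module.ts', '// module');
const addService: Rule = (tree) =>
  tree.set('auth-adoption/services/auth-adoption.service.ts', '// service');

const original: Tree = new Map();
const result = runTransactional(chain([addModule, addService]), original);

console.log(result.size);   // 2 files created
console.log(original.size); // 0 -- the original tree was never mutated
```

The key design point this models is the staging behavior: rules mutate a working copy, and nothing reaches the real file system until the whole schematic succeeds.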
The heart of the schematic is the exported function:

    // tools/schematics/auth-module/index.ts
    export default function(schema: any): Rule {
      return chain([
        externalSchematic('@nrwl/schematics', 'lib', { name: schema.name })
      ]);
    }

This function takes in a schema and returns a Rule. The schema should match the structure in schema.json. At the core, all schematics are just functions that return rules that modify the file system. You’ll often see many rules defined in a schematic, as well as helper functions, but in the end, only one function gets exported. This function will have any necessary rules chained, branched, or merged together.

The simplest possible custom schematic is one that takes advantage of chaining together existing rules. This is particularly useful at large organizations to enforce best practices and architecture patterns. For example, you may want to ensure that everyone on the team always generates components with inline styles and with an accompanying model as a TypeScript interface. You could easily chain together the component schematic with the interface schematic to make this easy for everyone in the organization to do.

We’re going to follow a similar pattern here. We want to make sure that everyone who adds authentication to a project always adds a separate module, a service, and a CanActivate guard. We’ll use our new tools—the chain and externalSchematic functions—to do this. Let’s first replace the Nx lib schematic with a call to the module schematic, keeping it inside of the array being passed into the chain function:

    // tools/schematics/auth-module/index.ts
    // ...above code remains the same
    // replace nx lib with this:
    externalSchematic('@schematics/angular', 'module', {
      project: schema.project,
      name: schema.name,
      routing: true,
      module: 'app.module.ts'
    })

Notice that we’re first passing in the collection (@schematics/angular), followed by the schematic name (module), followed by an options object.
The options object contains values for any options for the external schematic. We’re passing in the project, name, and root module name of app.module.ts. We’re also setting the routing flag to true to add routing to the authentication module. This makes it easy for someone to add routes to the authentication module, such as a callback route or routes for logging in and out.

For the calls to the service and guard schematics, we’ll need to import path at the top of our file:

    // tools/schematics/auth-module/index.ts
    // ...previous imports
    import * as path from 'path';

This will let us specify file paths regardless of whether the user is using a Windows or Linux-based operating system. Add the following to the array after the module schematic (don’t forget a comma!):

    // tools/schematics/auth-module/index.ts
    // ...above code remains the same
    externalSchematic('@schematics/angular', 'service', {
      project: schema.project,
      name: schema.name,
      path: path.join(
        'apps',
        schema.project,
        'src',
        'app',
        schema.name,
        'services'
      )
    })
    // ...end of the array and chain function

When we run this schematic, it will generate our service inside of apps/{project}/src/app/{schema}/services. You could easily change this to a different structure if you’d like. Do you notice how easy these schematics make standardizing and enforcing code organization? Our last call is to the guard schematic and it’s almost identical. Add this after the service schematic (and, again, don’t forget the comma!):

    // tools/schematics/auth-module/index.ts
    // ...above code remains the same
    externalSchematic('@schematics/angular', 'guard', {
      project: schema.project,
      name: schema.name,
      path: path.join(
        'apps',
        schema.project,
        'src',
        'app',
        schema.name,
        'services'
      )
    })
    // ...end of the array and chain function

This will generate a guard in the same place as the authentication service. Our schematic is nearly finished now. It will create a module, service, and guard with the name we give it and for the project we specify.
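Because the services path is built with Node's path.join, it is easy to sanity-check what the schematic will produce for a given set of options. A small standalone sketch (the schema values here are example inputs matching the project generated earlier, not anything the schematic itself defines):

```typescript
import * as path from 'path';

// Example option values standing in for the schema object.
const schema = { project: 'adoption-ui', name: 'auth-adoption' };

// The same join expression used in the service and guard calls.
const servicesPath = path.join(
  'apps', schema.project, 'src', 'app', schema.name, 'services'
);

console.log(servicesPath);
// On POSIX systems: apps/adoption-ui/src/app/auth-adoption/services
```

Using path.join rather than hand-concatenating '/' is what keeps the schematic portable: on Windows the same call produces backslash-separated segments.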
Let’s add one final touch: let’s throw an error if the user doesn’t prefix their new authentication module with auth-. To do this, we can add the following if statement inside of our function on line 5, just before we return our chain of rules:

    // tools/schematics/auth-module/index.ts
    // ...above code remains the same
    // add above the returned chain function:
    if (!schema.name.startsWith('auth-')) {
      throw new Error(`Auth modules must be prefixed with 'auth-'`);
    }
    // ...below code remains the same

We’ll now see an error if we don’t adhere to the naming guidelines. Neat! The finished code for our custom schematic looks like this:

    import { chain, externalSchematic, Rule } from '@angular-devkit/schematics';
    import * as path from 'path';

    export default function(schema: any): Rule {
      if (!schema.name.startsWith('auth-')) {
        throw new Error(`Auth modules must be prefixed with 'auth-'`);
      }
      return chain([
        externalSchematic('@schematics/angular', 'module', {
          project: schema.project,
          name: schema.name,
          routing: true,
          module: 'app.module.ts'
        }),
        externalSchematic('@schematics/angular', 'service', {
          project: schema.project,
          name: schema.name,
          path: path.join(
            'apps',
            schema.project,
            'src',
            'app',
            schema.name,
            'services'
          )
        }),
        externalSchematic('@schematics/angular', 'guard', {
          project: schema.project,
          name: schema.name,
          path: path.join(
            'apps',
            schema.project,
            'src',
            'app',
            schema.name,
            'services'
          )
        })
      ]);
    }

Remember that we’re only generating the scaffolding of our authentication setup with this schematic. To learn how to properly implement authentication in your service and guard, check out our Angular authentication tutorial. We’ve also got an NgRx authentication tutorial for you if you’re taking advantage of NgRx in your project. To use either of these tutorials to set up Auth0 in your application, first sign up for a free Auth0 account here.

Run the Custom Schematic

Now that we’ve got the schematic written, let’s test it out.
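The quickest piece to check is the naming rule, since it is plain TypeScript with no schematics machinery involved. Here is a standalone sketch of that guard clause (checkAuthModuleName is a hypothetical helper extracted for illustration, not part of the schematic's actual exports):

```typescript
// Mirrors the guard clause at the top of our schematic's exported function.
function checkAuthModuleName(name: string): void {
  if (!name.startsWith('auth-')) {
    throw new Error(`Auth modules must be prefixed with 'auth-'`);
  }
}

checkAuthModuleName('auth-adoption'); // conforms to the convention: no error

let error = '';
try {
  checkAuthModuleName('adoption');    // violates the convention
} catch (e) {
  error = (e as Error).message;
}
console.log(error); // Auth modules must be prefixed with 'auth-'
```

Isolating a rule like this also makes it easy to unit test the convention without invoking the CLI at all.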
Nx already did the work of wiring it up to be used in our workspace, so we don’t need to worry about that. To run our new auth-module schematic, run the following command:

    npm run workspace-schematic -- auth-module auth-adoption --project=adoption-ui

This command runs the workspace-schematic script that’s part of Nx. We use the -- operator to pass options into that script like the names of the module and project. (If you’re using Yarn, you can ditch the -- operator and run yarn workspace-schematic auth-module with the rest of the options.)

Once the schematic runs, we’ll see the resulting files inside of the src folder of the adoption-ui application. Our auth-adoption module and its routing module will be there, as well as the services folder containing the auth-adoption service and route guard.

Don’t forget to also test out our naming requirement. Try to run the command again but without the auth- prefix. You should see Auth modules must be prefixed with 'auth-' as an error in your console. You’ll also see an error if you fail to specify the project name. All of the finished code can be found in this repository.

Conclusion

Our custom authentication schematic accomplishes a lot with under 50 lines of code:

- It automatically scaffolds an authentication module, service, and route guard.
- It enforces both an architecture standard and a naming standard.
- It can easily be reused by developers for future products.

Since writing custom schematics for the first time can be a bit of a challenge, it helps to have Nx do a lot of the heavy lifting. That’s really what Nx does best: automating and simplifying Angular development at scale. To learn more about custom schematics, check out the introduction on the Angular blog, this great tutorial on Generating Custom Code by Manfred Steyer, and this presentation on custom schematics by Brian Love. And, of course, the best place to see examples of custom schematics is the source code for the Angular schematics.
We’ve really only scratched the surface of what Nx can do. To learn more about Nx, including features like the visual dependency graph, dependency constraints, and the ability to run only affected tests, check out the official Nx documentation, the nx-examples repo, and this free video course on Nx by Justin Schwartzenberger. Special thanks to Jason Jean from Nrwl for doing some pair programming with me to answer my questions about Nx and custom schematics. Thanks, Jason!
https://auth0.com/blog/create-custom-schematics-with-nx/
This tutorial covers how to subscribe to the ROS PyOpenNI node, as well as how to format your project to use the PyOpenNI node.

Node Initialization

    rospy.Subscriber('rgb', String, callback_rgb)
    rospy.Subscriber('depth', String, callback_depth)
    rospy.Subscriber('gesture', String, callback_gest)
    rospy.Subscriber('skeleton', Skeleton, callback_skeleton)
    rospy.spin()

Instantiate the ROS subscribers for the RGB image, depth, gestures, and skeleton, and link each subscriber to its respective callback method.

RGB and Depth Conversion

    import numpy as np

    def callback_rgb(data):
        frame = np.fromstring(data.data, dtype=np.uint8).reshape(480, 640, 3)

    def callback_depth(data):
        frame = np.fromstring(data.data, dtype=np.uint8).reshape(480, 640)

The RGB and depth data is passed in the form of strings because of ROS restrictions on message datatypes. However, in order to use the data, it must be converted to some other useful form. Thus, in the callback methods, we convert the string into numpy arrays. The numpy.fromstring method converts the string data into an array of ints, which is reformatted into a multi-dimensional array with the reshape method. Numpy arrays are compatible with all OpenCV methods, making them convenient to use. To display the array, simply use the cv2.imshow and cv2.waitKey methods, which is much easier than using pygame.

Gestures

    def callback_gest(data):
        print data

Compared to the RGB and depth callbacks, the gesture callback is incredibly simple. The name of the gesture is passed in the form of a string. For instance, if the user waved, the publisher would pass "wave" as its string.

Skeletons

Skeleton callbacks are a bit more complicated. First of all, notice that we are using a custom message type, Skeleton. The custom message allows multiple pieces of information in the skeleton to be passed at the same time.
Here is the definition of Skeleton as shown by rosmsg show:

[ros_py_openni/Skeleton]:
int8 id
string data

The id represents the user's ID generated by the publisher, and the data are the points of their skeletons in order. Notice that this data is again passed in the form of a string. This is due to the ROS limitation that prevents Python lists from being passed. In order to use it, however, we must convert the string back to a Python list.

import ast
...
def callback_skeleton(data):
    newpos_skeleton = ast.literal_eval(data.data)

Through ast.literal_eval, we can convert the string back to a list of its appropriate datatype (meaning if the list originally contained floats, literal_eval will convert the string back to a list of floats instead of a list of strings). What is convenient about literal_eval is that it can also recognize and convert nested lists, so the original list structure of the skeleton is not distorted in any way.

Integrating the custom Skeleton message into your program, however, requires a few more steps. First, make a directory named msg inside your project directory. Open that directory and create a new text file named Skeleton.msg. Then, inside that message file, add the following two lines:

int8 id
string data

Next, go into your package.xml and add these two lines:

<build_depend>message_generation</build_depend>
<run_depend>message_runtime</run_depend>

Then, go into your CMakeLists.txt and make sure that your find_package includes at least the following:

find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
  message_generation
)

After that, scroll down and uncomment the add_message_files lines. Delete the sample message files under FILES and instead put Skeleton.msg as your file name.
Your add_message_files should look like this:

add_message_files(
  FILES
  Skeleton.msg
)

Uncomment the generate_messages lines:

generate_messages(
  DEPENDENCIES
  std_msgs  # Or other packages containing msgs
)

Finally, uncomment the catkin_package line and make sure that it at least says CATKIN_DEPENDS message_runtime. Others may be optional:

catkin_package(
  # INCLUDE_DIRS include
  # LIBRARIES beginner_tutorials
  CATKIN_DEPENDS message_runtime roscpp rospy std_msgs
  # DEPENDS system_lib
)

After you save your package.xml and CMakeLists.txt, make sure to run catkin_make and source your setup.bash file again. Now your project will have the OpenNI node fully integrated.

Helpful Links: This tutorial from ROS explains in detail how to make a custom message type. Here is some documentation of the ROS message types, including what is and is not allowed as a ROS message.
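The ast.literal_eval round trip described earlier can also be tried on its own, without ROS: the publisher effectively sends str() of a nested list, and the subscriber recovers it. The joint coordinates below are made up for illustration:

```python
import ast

# Hypothetical skeleton payload: a nested list of (x, y, z) joint positions.
joints = [[0.5, 1.25, 2.0], [3.0, 4.5, 6.75]]

# ROS messages can only carry a string here, so the publisher serializes.
payload = str(joints)  # "[[0.5, 1.25, 2.0], [3.0, 4.5, 6.75]]"

# The subscriber converts the string back into a nested list of floats.
recovered = ast.literal_eval(payload)

print(recovered == joints)      # True: the round trip is lossless
print(type(recovered[0][0]))    # <class 'float'>, not str
```

Because literal_eval only accepts Python literals, it also avoids the code-execution risk that a plain eval of the incoming string would carry.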
https://emotionrobots.com/2015/11/22/pyopenni-ros-node-subscriber/
Consider a simplified example:

// Example
#include <iostream>

class A {
public:
    int x;
    A() : x(5) {}
private:
    const A& operator=(const A& a) {
        std::cout << "A::=" << std::endl;
        x = a.x;
        return *this;
    }
    A(const A& a);
};

class B {
public:
    A aObj;
    B() { aObj.x = 6; }
};

Notice that the assignment operator is private, hence the aObj inside B needs to be manipulated in place. The Boost.Python binding for this looks like (and this is what py++ also generates out of the box):

#include "test.cpp"

namespace bp = boost::python;

BOOST_PYTHON_MODULE(Boost_YMI){
    bp::class_< B, boost::noncopyable>( "B" )
        .def( bp::init< >()[bp::default_call_policies()] )
        .def_readwrite("aObj", &B::aObj)
    ;
    bp::class_< A, boost::noncopyable >( "A" )
        .def( bp::init< >()[bp::default_call_policies()] )
        .def_readwrite( "x", &A::x );
}

But this does not compile, since Boost.Python generates an assignment operator for the attribute aObj (which has a private assignment operator). Is there any way to tell Boost.Python not to generate the assignment operator for aObj of B?

One workaround (which is not at all elegant!) is for me to write the following hand-crafted methods:

void set(B& self, A& a) { self.aObj.x = a.x; }
const A& get(B& self) { return self.aObj; }

and then not expose aObj as an attribute but rather through these methods in Python:

bp::class_< B, boost::noncopyable>( "B" )
    .def( bp::init< >()[bp::default_call_policies()] )
    .def("set", &set)
    .def("get", &get, bp::return_internal_reference<>() )

thanks - Abhi
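For what it's worth, from the Python side the set/get workaround would be used roughly as below. Since the compiled extension isn't available here, a pure-Python stand-in (my own sketch, not the real bound classes) mimics the intended in-place semantics:

```python
# Pure-Python stand-ins for the bound C++ classes, only to show how the
# set/get workaround behaves from Python; the real A and B would come from
# the compiled Boost.Python module.
class A:
    def __init__(self):
        self.x = 5

class B:
    def __init__(self):
        self.aObj = A()
        self.aObj.x = 6

    def set(self, a):      # mirrors: void set(B& self, A& a)
        self.aObj.x = a.x  # copy field by field; never reassign aObj

    def get(self):         # mirrors: const A& get(B& self) exposed with
        return self.aObj   # bp::return_internal_reference<>()

b = B()
a = A()
a.x = 42
b.set(a)                  # updates the embedded A in place
print(b.get().x)          # -> 42
print(b.get() is b.aObj)  # -> True: get() hands back the embedded object
```

The key point the stand-in illustrates is that the embedded member is only ever mutated, never assigned, which is exactly what the private assignment operator forces.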
https://mail.python.org/pipermail/cplusplus-sig/2006-April/010298.html
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2.1) Gecko/20030225

Description of problem:
If you compile C code that includes header files, some of those header files don't appear in the source list from gdb. This behaviour is different from the gcc that shipped with Red Hat 7.3 (2.96). Here is a simple example:

$ cat hello.c
#include <stdio.h>
#include "hello.h"

int main(int argc, char *argv[])
{
    printf("%s", hellomsg);
}

$ cat hello.h
const char *hellomsg = "Hello world!\n";

$ gcc -g -o hello hello.c
$ gdb hello
GNU gdb Red Hat Linux (5.3post-0.20021129.18rh)
...
(gdb) list hello.h:1
No source file named hello.h.
(gdb) info sources
Source files for which symbols have been read in:

Source files for which symbols will be read in on demand:
/usr/src/build/231499-i386/BUILD/glibc-2.3.2-20030313/build-i386-linux/csu/crtn.S, hello.c, /usr/src/build/231499-i386/BUILD/glibc-2.3.2-20030313/build-i386-linux/csu/crti.S, init.c, ../sysdeps/i386/elf/start.S

Note that gdb can't list hello.h and hello.h doesn't appear in the source list. With older versions of gcc (2.96 for example), hello.h would have been listable and would appear in the info sources list.

Version-Release number of selected component (if applicable):

How reproducible: Always

Steps to Reproduce:
1. Compile the example code as shown above
2. Run gdb with the example executable
3. Type "info sources"

Actual Results: gdb can't list hello.h and hello.h doesn't appear in the source list

Expected Results: gdb used to be able to list hello.h and hello.h was shown in the source list

Additional info:
hello.h appears both in the .debug_line file table:

readelf -wl hello | grep 'hello\.'
  1     0     0     0     hello.c
  2     0     0     0     hello.h

and hellomsg is said to be on line 1 in hello.h:

readelf -wi hello | grep -C3 hellomsg
     DW_AT_byte_size   : 4
     DW_AT_type        : <245>
 <1><29b>: Abbrev Number: 8 (DW_TAG_variable)
     DW_AT_name        : (indirect string, offset: 0xd1): hellomsg
     DW_AT_decl_file   : 2
     DW_AT_decl_line   : 1
     DW_AT_type        : <252>

gcc4 has the same problem (or causes gdb to have the same problem, if it's a gdb problem). I used the version of gcc4 from FC3:

gcc4 --version
gcc4 (GCC) 4.0.0 20041019 (Red Hat 4.0.0-0.8)
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=91599
Robofab Dialogs

Hi, I'm trying to use Robofab's dialogs.

from robofab.interface.all.dialogs import AskYesNoCancel
AskYesNoCancel("foo?")

And I get this traceback; it can't import EasyDialogs:

File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/EasyDialogs.py", line 24, in
ImportError: cannot import name GetNewDialog

I guess there's another workaround to this issue than building these dialogs from scratch using Vanilla, right? Thanks again, jca

There is indeed a RoboFab dialogs issue if you are working on Lion. It's better to check the RoboFab mailing list. This is going to be in the next update. If it's really urgent you can build them with vanilla :)

I know about the issue, I did some tests, but I thought it was FontLab 5.1 related. And no, I'm not using Lion. Should dialogs work in Snow Leopard? As the traceback says, it seems the embedded Python 2.7 can't import EasyDialogs... I can build them with vanilla, I know, but it's not urgent enough. I'll wait for a while if there's no other way.
https://forum.robofont.com/topic/32/robofab-dialogs
“Your System running in low graphics mode.” Yeah, you have been served son!

Today was one of those dark days for me. I was feeling tired after a few hours on the computer configuring some stuff – particularly LAMP (Linux, Apache, MySQL and Perl) tools. I did this because I am planning to create some APIs for a project! So, I just let my laptop go to sleep and when I came back it was OFF. What the heck? I asked myself and tried turning it ON to the shock of my eyes. This is what surprised me;

Whenever I clicked the OK button to see what was next, the mouse disappeared, and even using the tab key didn’t help because the choices given didn’t work at all. You can imagine how many unspeakable words I said here. Instead of hitting my head on the wall 12 times, I fired up my other laptop and searched on Google for similar instances. I knew I was not alone here – Google the hell out of it. By the way, I needed another computer to do that because I couldn’t get anything done with this one.

If you run into this issue, do not panic. Follow these steps. Before you even do this, you should know that you have a graphics problem and these guidelines will help you fix that. You can find more information by doing a simple search – that is, if this post does not help you solve your problem. Sometimes, you might have to do a fresh install of your Ubuntu!

In order to arrive at the Terminal from the above screen, start your computer while pressing these buttons: CTRL + ALT + F1, and then enter your username and password before moving on.

The System Running In Low Graphics Mode – Solution

Then run this on your terminal when that is done:

sudo apt-get update

Follow that with the following line on your terminal:

sudo apt-get install gdm

Before you restart the machine, run this next line:

sudo dpkg-reconfigure gdm

When prompted here, select GDM (this, by the way, is a display manager). Finally, reboot this bad boy and you will be on your way to happy dancing. You can find more information by visiting the Ubuntu forums. That should do it for you.
If you found this post helpful, do not hesitate to share it with others online. You know, if life gives you a bug, you can either eat it for lunch or let it bite you in the face. I would rather you ate it. Good luck and have a great week! See you soon because that is a wrap! 124 Comments on “How To Fix ‘The System Running In Low Graphics Mode’ Error (Linux)” John Njagarah After struggling to fix the graphics problem for nearly 12 hours, at the time I nearly decided to take a rest from it i decided to carefully implement the above steps and the problem was solved straight away. ClearlyAWindowsGuy This did not work for me. I still receive the same message ‘The System Is Running In Low Graphics Mode’. I have no experience with Ubuntu. This OS is running our Fog imaging software and has now given us zero production with imaging PCs with Windows 7. I have been Googling for the answer for a few days with no real clear solution. It was working fine until the unit had to be rebooted. No we receive this error even after following the instructions from this posting. tony kwok I was so frustrated for this problem after installing apache, mysql and php on my Dell Optiplex 9020. Tried instructions on other blogs but none of them worked. This one worked like a charm! Elisha Chirchir I am glad it helped you! emma Hi I have a problem with this but when I come to login my details it continues tosay incorrect, I’m sure they are correct, do you know how I can get passed this stage? Thank you Emm Bijaya Maharjan which username and password does it mean? Elisha Chirchir Your computer username and password; if you have used the terminal before, your name is always there and looks something like this :username@computerName – so you want the first part and the password that you set when you first set up your computer or the password you use when you use sudo to install stuff. Let me know how it goes. 
Bijaya Maharjan i completed every single step stated above and it went pretty well but screen remained black after all this process. yinan The screen is black, and I cannot access to the terminate after I do the steps above, how to fix it? thank you vey much! Mohammed Ameen I accidently pressed the other option instead of gdm, in the Graphics Manager after installing gdm. When I tried rebooting, it still didn’t work. I tried to do it again, but installing gdm doesn’t open the graphics manager. What do I do? Sam Thanks! I’d been searching for a while and couldn’t find anything, but this solved it, thanks Koushik I have a HP laptop with AMD RADEON graphics card. I’m trying to install ubuntu 13.10 x64 on it. But when I choose ‘try ubuntu’ option at beginning of installation, it says ‘your system is running in low graphics mode’. So, I don’t have any option of installation at al. Please suggest what can I do? Elisha Chirchir Have you followed the instructions on this tutorial and it did not work still? If not, I would suggest you do that first and then let me know how it goes. Koushik Hello…I can’t even try ubuntu in my laptop. After pressing try ubuntu button, it shows that message. I think your steps are only applicable when it is installed in hard drive, not from live cd. Please say something for the case of live cd Elisha Chirchir You should be able to boot your Ubuntu from your cd then go to the terminal; this is however contingent on whether the OS is installed or not – regardless of where. Beyond that, I have not tried that scenario myself so I am not familiar with it. Bryan Lee Thanks, you rock! I started trying the various solutions in an askubuntu thread (try recovery mode, too many files on the computer, etc.) and finally tried these steps and it worked like a charm. I’m on an HP Pavilion laptop dv6000 series, Ubuntu 12.04. Elisha Chirchir I am glad it worked for you! Shridhar Thanx , It works…!!!!! Elisha Chirchir You are welcome!
Brando I thought it’d done it, but after reboot, and putting in user name and password, it comes up with a *username*system product name: $ prompt. so not sure what to do now! Sorry, total noob with Ubuntu, I’m trying to fix my husbands puter after it had this graphics issue. Elisha Chirchir I am not sure what happened there. What happens when you enter your username and password again? Any errors? Brando hi Elisha It just keeps going back to the system product name prompt. Elisha Chirchir I really wish I could figure it out for you. If all else fails, I would re-install Ubuntu since it is free of charge. William The command didn’t work in my desktop. Does the computer need to be connected to the Internet during the process? Elisha Chirchir I believe you need an internet connection so that you can install some packages through the terminal. Try that. satish after completing the steps. my system standing at “UBUNTU” logo not going forward plz help me…… gagan yes bro same happened to me gagan hello bro after completing these steps it ia now standing at ubuntu logo what to do now…. kamal not working for me. sudo apt-get update does something and then tries to download archives from internet. Since try ubuntu option goes to “low graphics error mode”, cant connect to internet. Also for other commands, it says “package gdm is not available, but is referred to by another package. However the following packages replace it: gnome-control-center-data”. Please suggest. Elisha Chirchir You cannot update without an internet connection. So, I would check that to make sure you are connected. Marc Thanks for your help. I followed all of your instructions, but now when I start the machine it doesn’t go to the desktop it just goes to that screen where you type everything in. How do I get it to go to the desktop? Elisha Chirchir Which screen is it? The terminal? Mike Yes, the terminal. 
It seems to be working OK, except that it goes straight into the terminal and not to the screen Elisha Chirchir I have no idea what could be causing that. I would trying looking around the web to see if anyone has encountered a similar issue. Mike I have tried a few things, for example on running startx I get this modprobe: ERROR: could not insert nvidia_173 : Unknown symbol in module, or unknown parameter (see dmesg) Fatal server error: If I find a solution I will let you know. Thanks, Mike nkcoder Hi, Elisha, I have the same problem(ubuntu 14.04), but when I press Ctrl+Alt+F1 and login, the terminal cannot connect to the network, so I cannot update or install using apt-get. Do you have any idea? Thanks. Elisha Chirchir You might have to use a wired connection if you have no other way to connect to a wireless connection. I have no other ideas that could be of help. If you get to fix it, please let me know how you did it. Mike I am back up. I downloaded Ubuntu again, but this time I ran it on the memory stick first and not on the PC and the screen worked OK. Then I installed from the system running on the memory stick and had no problems. Previously I had tried to install it directly onto the PC from the stick. Might work for someone else. mallikarjun In mycase ubuntu 12lts i replaced gdb in command s with gnome-control-center Elisha Chirchir Thanks for stopping by. So you got yours to work? nishant i am using proxy network then how to set the settings of ethernet to download the update Faye I tried ctrl+alt+F1, but my terminal is not showing properly, I cannot read the letters on screen. What should I do? Thanks. BTW I am using Ubuntu 14. Reddish Tanks!!! It worked for me (ASUS Zenbook UX32VD) Caroline A huge “Merci” for your help ! It works like a charm for me… 😉 nguyen kien Thank you so much, it worked for me android Thank you. it’s working. 
Steve I tried the above steps and now upon reboot all I get is blank screen with small blinking curser on top left screen arun The same issue for me too. gbenga Thank u very much…. It work Nidhal Hello, I have followed these steps but after rebooting the screen remained black for over 30 minutes. Does anyone have a solution? mica Thank you. it’s working. 🙂 troy Hey Elisha, I followed all the steps and after I entered my password, it says “The disk drive for /dev/mapper/ubuntu-swap_1 is not ready yet or not present. Continue to wait, or Press S to skip mounting or M for manual recovery” My cursor is an X and if I press s or m, nothing happens. Thanks for your help. Nitiflor Thank you a lot. Worked perfectly. Martin Dude! You just saved my bottom! I was about to pack and ship Ubuntu box to a client when after last checkup and reboot … BOOM ERROR. I have 30 min to figure out what the hell and fix it. Muchos kudos 4 U! Thanks!!! Elisha Chirchir I am glad I could help you Martin. It is always a great feeling to hear back from great people like you. Now, let us keep this going. Saurav Sir what to do if i am encountering this problem when I can’t access internet. I mean my internet requires to login from portal which i cant do without running browser. Please help if there is any way possible without internet KNotes Ingat Aku Per 17 Maret 2015 | RESTAVA […] […] deanj Thank you soo much, chrome stopped working and i rebooted and got this poxy error message. i followed everystep as i could but after completing each step i had to type clear, so i was able to see what was going on, it seems 3/4 down the screen it all gets squashed. IM not 100% sure, what i selected on the pop up dialog as i couldnt see haha.. But anyways it is working, although my login page has changed but i can live with that.. Would you happen to have a link where i can find: changing the resoultion ayfor the login page and the tty Shell ? 
Elisha Chirchir I would probably Google around for that; I have no clear solution to that. David Wonderful, it worked. Surprisingly, this showed up after messing with MySQL too. Brendan Great working again. had to do the instructions 2 times, 2nd time it worked. thanks for the help Adam Worked like a charm. Thanks ! Joseph Myalla My laptop ran into Lower graphics mode just after installing team viewer 10 in ubuntu 12.04, I followed these instructions my graphics is back, i can login but the computer seems busy, no icons are shown, can not start a program Dilakson Siva Thanks guys….. It’s working fine… prafull rawal thanks its working !!!! Elisha Chirchir Glad it worked for you! dominik Wow dude you really helped me THX 😀 Elisha Chirchir I am glad it helped! Sam Thanks a lot…!!! SkyDiver Bumped into this problem after upgrading from Ubuntu Desktop to GNOME Desktop on 14.04 as described in this LAS cast: (instructions from this PDF file on the same LAS page:). Your tip was the missing link to enjoying the GNOME Desktop. Thank buddy! Vinay Thanks | its working Elisha Chirchir I am glad it worked for you Vinay! Khoa Worked for the. Thanks a lot Ubuntu 14.04 Elisha Chirchir I am glad it worked for you! tt3 after completing the steps. my system standing at “UBUNTU” logo not going forward plz help me…… Elisha Chirchir So after you reboot it is still there? TT3 yes it was still there. But I tried it many time. the problem is solve now by using the same command I first go to recovery mode enable connexion and I repair broken packages then I substituted gdm by lightdm Many thank CHIRCHIR for this kind reaction. Elisha Chirchir I am so glad you fixed it! Have a great day/night! TT3 Thanks ..you too Felipe Hello! I can’t even login. When I type my username and password, it says it is incorrect, but it is not. I just started using Ubuntu and was really enjoying it 🙁 I really appreciate any help 🙂 Elisha Chirchir Are you still struggling with this? 
Just make sure the credentials are correct and try again; reboot the machine and try again. Abok Isaac Thanks for the fix. It hit me the same way as described in your post! Just in case someone cannot access Internet, can connect to any WiFi hot spot using the following command. nmcli d wifi connect SSID password PASSWORD iface wlan0 (replace capitalized words with actual SSID and password). Elisha Chirchir Thanks for the tip to connect to the wifi through commandline mamarezax person X= “Elisha Chirchir”; you made my day X; return (thanx); Michal Cool! Wokred for me. Tx! Waqas ur Rahman Thanks a lot man it helped !!! God bless you Elisha Chirchir You are welcome!! Have fun! aldy Thank you, this is very helpful Nathan Ho Hey I am trying to download Ubuntu 14.04 on Virtualbox, and I am installing it right now. After the installation completed and I restarted the Virtualbox the “The system is running in low graphics mode” came up and I did the command, but it said i had no disk space left, and i have tried everything from sudo apt-get autoclean and sudo apt-get clean but nothing is working!!!! Elisha Chirchir It means you have no disk space left. You should free up some space in your machine then allocate more space for your Ubuntu installation then reinstall it! Sean awesome and clear post! J Hector Garcia Help, I have the following problem: w: Not using locking for read only lock file /var/lib/dpkg/lock E: Unable to write to /var/cache/apt/ E The package lists or status file could not be parsed or opened Elizabeth 66 Hey thank you I tried to install some nvidia GeForce GTX graphics but got theverything low graphics screen…. tried different solutions but this one worked for me 😉 thanKS!!! Amp It didn’t work. Error shows 4m 1St cmd itself. Just lyk no access ND are u root. Even stil I’m in root oly Dhiraj After running the commands and rebooting my laptop. It just stays black and never starts up. How should I tackle it Elisha Chirchir Are you connected to the net?
Dhiraj I did everything you told and reboot the computer. Now the screen is just black on start up and no access is allowed. please help Vincent It didnt worked for me. Second time I will have to reinstall all my server. I think Team Viewer kill my system and I think this is because i connect to my server with team viewer before ubuntu terminate 100% his startup. joshua My screen is also black after these steps on Ubuntu 14.04 and nothing can do.Guys please be carefully! Now I’ve more headache then before … Jagan Fixed on my dell xps, thanks 🙂 kishan tqsm!!! #admin life saver………. Prateek Goyal Voila, it worked for me. Thanks for the post. eko thanks you…..it worked for me,,,thanks for the post Simone Riccardi cannot believe you are a fucking rock star 😀 thanks! it worked for me after 4 hours struggling with fu**in useless manual lines of code… Naturelover Will this method work on an Ubuntu 14.04 system with Intel graphics, which is currently running lightdm? In my case, a software update was what caused the above problem. I reinstalled lightdm, and that worked initially, but then reverted back to the initial problem. Thanks. Elisha Chirchir Have you tried any of the steps outlined? Naturelover I followed the steps outlined, but with “lightdm” instead of “gdm”. After the reboot, the display appeared normal, but I could not log in to the system. So I opened a console, and rebooted again.This time the display was again in the low graphics mode as before. Naturelover Hi Elisha, Not sure if you are available to reply to comments. I was wondering, if a system is running lightdm already, and a software update causes the above problem, will going through your steps with gdm, and selecting gdm in the end, will work without any problems. I’m a bit nervous about possibly losing all my work that’s on this system, and having to re-install from scratch because of this problem. Harsh vishnoi Hey …I’m in the worst situation ..please. Help me .. 
This tutorial is not working with me.. There is nothing happening and I did all the same steps mentioned above..please it’s a request be fast..urgent shekar u r really brilliant brooo thanks Kwabena Ampofo Life saving tutorial Finally something on the net is working Thanks a million Eric Woow Great! It worked for me Mudit Thanks! worked well for me. Had the same issue after installing LAMP. Radi Hi, after installing updates and other apps I struggled with the issue, 3 hours fight before found your site. Thanks a lot it works 🙂 Hanish pachori It doesn’t worked for me .i t is sme as it was before running these commands , if anyone haveany other idea regarding to this problem please help me to solve this error. platon Thanks it worked sir . This one was helpful i tried through stackoverflow nothing this instruction worked out !! Shivu it says gdm not installed Rakshith I’m stuck with the black screen after rebooting. plz help me Chris Did not work for me. In fact it made it worse….. Now I have a flickering black screen with some writing showing on the top left corner for a second as it flickers. Now I can’t even use ctrl+alt+f1 to get a terminal open… Myra Hi. I am not able to return to login when i press ctrl+alt+f1. Is there any other ways to do so? I really appeociate if u can help Dominic M ♥️ my hero jordan have the same issue(black screen saying system under low maintenance) after upgrading from 14.04 to 16.04 ,for ur steps i need internet ,how to connect after ctrl+alt+f1.I am using a dongle which i used for upgrade.New with ubuntu nmcli con up id ‘Airtel’ giving no device found for connection geethu Will I lost all my data with this? PLease help me Ray This did not work it only caused more grief. sudo apt-get install gdm froze-up terminal???
http://simpledeveloper.com/system-running-in-low-graphics-mode/
Java Exercises: Program that reads an integer, calculates the sum of its digits, and writes each digit of the sum in English

Java Basic: Exercise-87 with Solution

Write a Java program that reads an integer, calculates the sum of its digits, and writes each digit of the sum in English.

Sample Solution:

Java Code:

import java.io.*;

public class Main {
    public static void main(String[] args) {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        try {
            int sum = 0;
            String str = br.readLine();
            char[] numStr = str.toCharArray();
            for (int i = 0; i < numStr.length; i++) {
                sum += numStr[i] - '0';
            }
            System.out.println("Original Number: " + str);
            print_number(sum);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void print_number(int n) {
        int x;
        int y;
        int z;
        String[] number = {"zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"};
        System.out.println("Sum of the digits of the said number: " + n);
        if (n < 10) {
            System.out.println(number[n]);
        } else if (n < 100) {
            x = n / 10;
            y = n - x * 10;
            System.out.println("In English: " + number[x] + " " + number[y]);
        } else {
            x = n / 100;
            y = (n - x * 100) / 10;
            z = n - x * 100 - y * 10;
            System.out.println("In English: " + number[x] + " " + number[y] + " " + number[z]);
        }
    }
}

If the input is 8:

Sample Output:

Original Number: 8
Sum of the digits of the said number: 8
eight

Previous: Write a Java program start with an integer n, divide n by 2 if n is even or multiply by 3 and add 1 if n is odd, repeat the process until n = 1. Next: Write a Java program to get the current system environment and system properties.
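The same algorithm is compact enough to restate as a quick Python sketch for comparison (the function names here are mine, not part of the exercise):

```python
NUMBER_WORDS = ["zero", "one", "two", "three", "four",
                "five", "six", "seven", "eight", "nine"]

def digit_sum(s):
    # Same idea as the Java loop: each character code minus that of '0'.
    return sum(ord(ch) - ord("0") for ch in s)

def spell_digits(n):
    # Spell out each digit of n; the Java version handles n < 1000 this way.
    return " ".join(NUMBER_WORDS[int(d)] for d in str(n))

print(digit_sum("8"))      # -> 8
print(spell_digits(8))     # -> eight
print(digit_sum("99999"))  # -> 45
print(spell_digits(45))    # -> four five
```

Iterating over str(n) sidesteps the x/y/z branch logic of the Java solution, which is one reason the three-digit limit disappears in this version.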
https://www.w3resource.com/java-exercises/basic/java-basic-exercise-87.php
Full Description

Whatever your problem, the code gives you everything you need to solve the problem at hand, while the accompanying text carefully explains how it solves your problem and, more importantly, why it works the way it does.

What you'll learn
- Solutions to the most common development problems that real people meet in the field day-to-day
- How to use the .NET 2.0 namespaces to your best advantage
- To understand coding issues from a Visual Basic perspective, including VB features such as the My namespace often neglected in C#-centric texts
- Answers to everything from application domains to XML processing, passing through graphics, database access, and security along the way!
- Language Integrated Query (LINQ), a feature of the 2007 release of Visual Studio

Who this book is for

This book is for anyone who uses Visual Basic 2005. If you've ever stopped midline and wondered how you're going to finish the expression, or found yourself struggling with a complex piece of logic while wondering if you're reinventing the .NET wheel, this book is for you. It provides the succinct nuggets of information that you need to get the job done.
http://www.apress.com/programming/visual-basic/9781590598528
Asked by: PRISM Interoperation Question

We are using the following setup:
Developers: Visual Studio 2010 Professional with PRISM 4
Designers: Expression Blend 4 with SP1

When we try opening the developer XAML with references to PRISM, Expression Blend says that the XAML is invalid. Should we register the PRISM libraries on the designer machines? The batch file seems to need Visual Studio installed? What are we missing here? Should we install Visual Studio 2010 on the designer machines too?
Wednesday, December 8, 2010 2:22 PM

All replies

Are the referenced assemblies (Prism libraries) not on the designer machines at all? Blend will need to be able to find the assemblies just like VS does.
Wednesday, December 8, 2010 2:35 PM (Moderator)

Yes, the referenced assemblies are on the designer machine. The designer opens a project which was built and running on the developer machine.
Wednesday, December 8, 2010 2:52 PM

Hmm, can you be more specific about the error message? Can you build/run the project on the designer machine?
Wednesday, December 8, 2010 2:55 PM (Moderator)

One of the error messages is: "The name RegionManager does not exist in the namespace". Please find below the snapshot of the error:
Wednesday, December 8, 2010 3:09 PM

If you expand the references section in the project panel, are there any warning icons by any of the assemblies? Another possible thing to try, to eliminate variables: from a command prompt, run msbuild yourfilename.sln and see if that succeeds.
Wednesday, December 8, 2010 3:19 PM (Moderator)

There are no missing references or warning icons on the developer and designer machines. The designers are working with a Parallels Desktop and using a network folder to access the code. I now see an error message which says that it could NOT resolve the reference to Microsoft.Practices.ServiceLocation, even though the reference on the designer machine does not have any warning icon and the path is right. I even checked the length of the path and it is less than 255 characters.
Wednesday, December 8, 2010 3:36 PM - Edited by Ranjith Venkatesh - Private Wednesday, December 8, 2010 3:38 PM spelling mistake This will be the time running msbuild will help diagnose the problem. That is how Blend builds under the covers (as does VS of course). if it isn't already in your path, you can find msbuild.exe in a path like: c:\windows\Microsoft.Net\Framework\v4.0.30319\msbuild.exeWednesday, December 8, 2010 4:03 PMModerator Hi Chuck, Thanks for the support. I solved the problem with the following steps on the Designer machine: 1. Install .NET 3.5 SP1 (This is needed for PRISM 4) 2. Install PRISM 4 3. Install Visual Studio 2010 Professional Trial Edition (This was needed to register the PRISM 4 Binaries) 4. Once the PRISM Binaries were registered the project was able to build using Blend Cheers, Ranjith PS: If you can tell me ways to reduce these steps I will be thankful as we have more Designers on the project :) Wednesday, December 8, 2010 4:26 PM - Edited by Ranjith Venkatesh - Private Wednesday, December 8, 2010 4:28 PM missed some words
https://social.msdn.microsoft.com/Forums/en-US/da89a78e-8d50-4470-aaa1-356c834b04fd/prism-interoperation?forum=blend
CC-MAIN-2021-31
refinedweb
532
62.48
Search: Search took 0.01 seconds.

- 1 Jun 2011 3:14 AM: Heya, I haven't worked with GWT GXT for over a year now, but I think we used not Buttons but plain HTML with a link to solve this. So the Node's Label String was actual HTML code with the Label...
- 14 Jan 2011 10:46 AM (Replies: 3, Views: 3,477): Heya! This was quite some time ago :) However, I do remember that we solved this on a model level, which means we put an enabled flag on the model and whenever an element had enabled=false we...
- 29 Mar 2010 2:38 AM (Replies: 2, Views: 897): This gave me some headaches, too. Thanks for your post. :) Another thing here: all our ComboBoxes have setEditable set to false, which resulted in the effect that all ComboBoxes were practically...
- 24 Mar 2010 6:14 AM: Thanks sven, that's what I am doing at the moment. However, it would be great to have this implemented internally. I hope you can squeeze it in :)
- 24 Mar 2010 5:42 AM: Sorry to push this thread, but I have not found any nice solution here, yet. Anyone?
- 23 Mar 2010 8:44 AM (Replies: 3, Views: 1,371): Just for others who might run into similar issues, this is the working sample code: public class MainEntryPoint implements EntryPoint { class MyTabItem extends TabItem { ...
- 23 Mar 2010 8:42 AM (Replies: 3, Views: 1,371): Thanks, using FitLayout was crucial, but also to use a Viewport. :) Attaching the ContentPanel directly to the RootPanel and not using a GXT Viewport will result in a wrong layout, too. Issue...
- 23 Mar 2010 5:33 AM (Replies: 3, Views: 1,371): Hi, I have an application which I would like to update to the newest GXT and GWT version. I am using GXT 2.1.1 (also updated css) and GWT 2.0.2. My application does have some nested TabPanels...
- 23 Mar 2010 5:17 AM (Replies: 1, Views: 852): Sorry, wrong forum! Could you please just delete/ignore this post? Cheers
- 23 Mar 2010 5:00 AM (Replies: 1, Views: 852): Hi, I have an application which I would like to update to the newest GXT and GWT version. I am using GXT 2.1.1 (also updated css) and GWT 2.0.2. My application does have some nested TabPanels...
- 16 Mar 2010 6:17 AM: Hi, I have a TreePanel with checkable items, but as soon as I apply a filter to the TreeStore, all items automatically get unchecked and the user has to check them again. Is there any possibility...
- 15 Mar 2010 7:33 AM: Thanks sven, works like a charm! Just one small thing: the horizontal scrollbar disappears as soon as the "long item" is not displayed anymore, so the scrollbar keeps appearing and disappearing...
- 15 Mar 2010 7:17 AM: Ok, I just found out that the code works fine with IE. So it might be a Firefox-specific problem?
- 15 Mar 2010 7:11 AM: Hi! I hope you guys can help me to figure out why this code is not working properly. I am trying to render a TreePanel with some items which have long texts on it, so I would expect some...
- 10 Mar 2010 10:57 AM (Replies: 3, Views: 3,477): I would like to enable/disable items on my TreePanel on certain occasions. This means that they should appear grey and the check box is not clickable anymore. This was easily done by just setting...
- 10 Mar 2010 10:24 AM: Yes, changing my superclass to BaseModel was all I had to do. I had monitorChanges set to true already. Thanks again, you really helped me a lot!
- 10 Mar 2010 10:14 AM: Thanks for your quick reply! Next time I will be more cautious before posting in Bugs :-) I was not aware of the fact that BaseModelData does not fire events, nor could I find any useful hints...
- 10 Mar 2010 10:00 AM: When using a TreePanel and changing the text field of the BaseModelData, the displayed text on the tree item is not updated. Using: web mode, GXT 2.1.1, GWT 2.0.2, Firefox 3.5.8. Sample code:
- 26 Feb 2010 5:43 PM: Jepp, I saw this in the TreeGrid already. Especially the GridCellRenderer is exactly what I need. However, I hoped there would be a solution for TreePanel because I don't actually want a Grid for...
- 26 Feb 2010 4:08 AM: Ok, sorry to push this thread, maybe my question was too complicated: is there a way to have TreePanel using Widgets as items instead of the internal plain Model? I would need my tree items to...
- 26 Feb 2010 3:59 AM (Replies: 1, Views: 938): I am using a UTF-8 encoded XML file which contains all translations for labels in my GXT app. The file is stored on the server and my application retrieves them on startup. The user can then easily...
- 24 Feb 2010 7:44 AM: Hi! I just moved from GXT 1.x to the newest GXT version. I have a rather severe issue with the new TreePanel implementation, as I need to have "special" TreeItems on my tree which do not only...
- 5 Aug 2008 1:40 AM (Replies: 3, Views: 1,380): Same issue here. Also, double-clicking a leaf item on a tree displays the browser's context menu...
- 16 Jul 2008 1:42 PM (Replies: 5, Views: 2,141): Ok, I tested both: Solution 2 does not work, as if a first-level item has changed I have to update the root (because the children of the root are the first-level items) and therefore have to update...
- 16 Jul 2008 1:18 PM (Thread: how could I sync two trees? by LukasP; Replies: 4, Views: 1,647): I have a similar problem. As far as I am concerned there is no way to give the TreeBinder an advice which items out of the TreeStore should be shown on the Tree. Which is exactly what you (and I)...

Results 1 to 25 of 59
https://www.sencha.com/forum/search.php?s=076a36eaea3db8a1660d097db7d3fb3f&searchid=12493091
CC-MAIN-2015-35
refinedweb
1,037
81.63
I am trying to develop either a console app or unit tests to drive a WPF application. I am using UI Automation in my code. There are properties UI Automation can capture, but I cannot set up the classes necessary. My project has a reference to System.Windows, 'using System.Windows;' is set up in my class, but when I try to define a variable using a class from System.Windows, such as Rect or Point, I get an error message: "The type or namespace name 'Rect' does not exist in the namespace 'System.Windows' (are you missing an assembly reference?)" I am working with VS2012 Professional and have downloaded the Windows SDK. Do I have a configuration problem, or am I not able to access the classes in System.Windows in a console or unit test project? Given that I need to be able to use methods in AutomationElement that return class types like Point or Rect, what can I do to get that data?

You also need to add a reference to WindowsBase.dll. From the documentation for Point:

Namespace: System.Windows
Assembly: WindowsBase (in WindowsBase.dll)

You can check any type you want to use, and add the appropriate reference (assembly) and the using statement (namespace) to use that type within your console app or unit test.

Reed Copsey, Jr. - If a post answers your question, please click Mark As Answer on that post. If you find a post helpful, please click Vote as Helpful.
https://social.msdn.microsoft.com/Forums/en-US/d2fd282e-1422-48c7-8590-583490b453a9/use-of-systemwindows-classes-in-a-console-or-unit-test-project?forum=netfxbcl
CC-MAIN-2020-45
refinedweb
248
75.61
Instant integration of Ian Bicking's WebTest () with django's testing framework.

Project Description

django-webtest is an app for instant integration of Ian Bicking's WebTest () with django's testing framework.

Usage

from django_webtest import WebTest

class MyTestCase(WebTest):

    # optional: we want some initial data to be able to login
    fixtures = ['users', 'blog_posts']

    # optional: default extra_environ for this TestCase
    extra_environ = {'REMOTE_ADDR': '10.20.30...'}

... response.template and response.context goodness that is usually only available if you use django's native test client. These attributes contain a list of templates that were used to render the response and the context used to render these templates.

Unlike django's native test client, CSRF checks are not suppressed, so missing CSRF tokens will cause test failures (and that's good). If forms are submitted via the WebTest forms API then all form fields (including the CSRF token) are submitted automagically:

class AuthTest(WebTest):
    fixtures = ['users.json']

    def test_login(self):
        ...

Installation

$ pip install webtest
$ pip install django-webtest

or

$ easy_install webtest
$ easy_install django-webtest

or grab latest versions from bitbucket (,)

Why?

While django.test.client.Client is fine for ... it. Twill, however, understands HTML better and is more mature, so consider it if WebTest doesn't fit for some reason.

Release history
https://pypi.org/project/django-webtest/1.2/
CC-MAIN-2018-17
refinedweb
231
56.35
In this section we are discussing the throwing of built-in exceptions in Java.

Tutorials

In this section we are discussing the throwing of built-in exceptions in Java. In Java the throw and throws keywords are used to throw an exception. The throw keyword throws an exception forcibly, possibly with a custom message, whereas the throws keyword is used to declare a possible exception.

Example

Here I am going to give a simple example which will demonstrate how to use the throw keyword in Java to throw an exception. In this example I am trying to find a result after dividing two numbers. When you divide two non-zero numbers, the outcome will be what you expected, but if the denominator is zero then the outcome will be unexpected. In this case I want to throw an ArithmeticException with a specified message saying that the denominator must not be zero.

In this example I have created a class named Test.java where I created a reference of ArithmeticException and a method that returns the result after dividing two numbers. In this method I have checked the denominator: if it is equal to zero, this method throws the exception. Then in the main() method I created an object of the Test class and called the method created for dividing two numbers.
Source Code

Test.java

public class Test {

    // Pre-built exception instance; main() assigns it before calling divide().
    ArithmeticException ae;

    public int divide(int numerator, int denominator) {
        if (denominator == 0) {
            // Forcibly throw the exception carrying the custom message.
            throw ae;
        }
        return numerator / denominator;
    }

    public static void main(String[] args) {
        Test test = new Test();
        test.ae = new ArithmeticException("Second Number must not be zero");
        //int result = test.divide(4, 0);
        int result = test.divide(4, 2);
        System.out.println("Result : " + result);
    }
}

Output

When you execute the above Java class with the zero-denominator call (the commented-out test.divide(4, 0)), the thrown ArithmeticException terminates the program with the message "Second Number must not be zero". But when you call the method by passing two non-zero numbers, as in the Test.java above, the output after executing this class will be:

Result : 2

Posted on: January 9, 2013

Discuss: Java Throw Built-in Exception
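The tutorial distinguishes throw from the throws keyword but only demonstrates throw. As a small sketch of the other keyword (the class name ThrowsDemo and the method layout are my own, not part of the tutorial), a method can declare the exception with throws and leave the handling to its caller:

```java
// Sketch: divide() declares the exception with the throws keyword, so the
// caller decides how to handle the failure. ArithmeticException is unchecked,
// so the clause is optional here; it still documents the risk to callers.
public class ThrowsDemo {

    static int divide(int numerator, int denominator) throws ArithmeticException {
        if (denominator == 0) {
            throw new ArithmeticException("Second Number must not be zero");
        }
        return numerator / denominator;
    }

    public static void main(String[] args) {
        try {
            System.out.println("Result : " + divide(4, 2));
            System.out.println("Result : " + divide(4, 0));
        } catch (ArithmeticException e) {
            // Control jumps here when the second call throws.
            System.out.println("Caught : " + e.getMessage());
        }
    }
}
```

Running this prints "Result : 2" followed by "Caught : Second Number must not be zero", instead of terminating with a stack trace as Test.java does when the zero-denominator line is active.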
http://www.roseindia.net/java/exceptions/throw-exceptions.shtml
CC-MAIN-2017-13
refinedweb
375
52.6