Return on Equity vs. Return on Capital You know that when expanding and investing in projects overseas as Acme plans to, it is essential to understand such things as return on equity (ROE) and internal rate of return (IRR). Using Internet sources, gather information on ROE and IRR (you may want to start with the websites listed below): Return on Equity vs. Return on Capital; Return on Equity Definition; Keep Your Eye on the ROE; IRR Example. Write a two to three paragraph explanation for each of these terms. Include the advantages and disadvantages of using the ROE and the IRR when selecting projects to invest in overseas. Then, select two companies in the same industry. Using the annual report information available on each company's website, compute the ROE for each company. Each of these three questions needs an answer with a minimum length of 2-3 paragraphs. Solution Preview Return on Equity: The formula for calculating return on equity is: Net Profit / Shareholder's Equity. Return on equity reveals how much profit a company earned in comparison to the total amount of shareholder's equity found on the balance sheet. For example, if a company has $600 million in shareholder's equity and it made $36 million in profit, it would be earning 6% on that equity [$36M / $600M = .06, or 6%]. Return on equity is particularly important because it can help you cut through the spiel found in most CEOs' annual reports about "achieving record earnings". The return on equity figure takes into account the retained earnings from previous years, and tells investors how effectively their capital is being reinvested. Thus, it serves as a far better gauge of management's fiscal adeptness than annual earnings per share. While ROE is a useful measure, it does have some flaws or disadvantages that can give you a false picture, so never rely on it alone. For example, if a company carries a large debt and raises funds through borrowing rather than issuing stock, it will reduce its book value.
A lower book value means you are dividing by a smaller number, so the ROE is artificially higher. It may also be more meaningful to look at the ROE over the past five years, rather than a single year, to average out any abnormal numbers. IRR (Internal Rate of Return): Often used in capital budgeting, the IRR is the interest rate that makes the net present value of all cash flows equal to zero. Essentially, this is the return that a company would earn if it expanded or invested in itself, rather than investing that money elsewhere. In other words, if you have an investment ...
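The two formulas in this preview can be sketched in code. This is a minimal illustration, assuming the hypothetical figures from the text ($36M profit on $600M of equity) and a made-up cash-flow series for the IRR; the bisection solver is a teaching sketch, not a production method.

```python
def return_on_equity(net_profit, shareholders_equity):
    """ROE = net profit / shareholder's equity."""
    return net_profit / shareholders_equity

def npv(rate, cash_flows):
    """Net present value of cash flows, one per period, starting at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Find the rate where the NPV crosses zero, by simple bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid   # sign change in [lo, mid]
        else:
            lo = mid
    return (lo + hi) / 2

# ROE: $36M profit on $600M of equity -> 6%
print(return_on_equity(36, 600))        # 0.06

# IRR of a made-up project: pay 100 now, receive 60 in each of two years
print(round(irr([-100, 60, 60]), 4))
```

For real work you would read net profit and shareholder's equity straight off the income statement and balance sheet, and use a dedicated financial library for the IRR.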
https://brainmass.com/business/capital-budgeting/77340
Contents In my last post, I wrote about forward declarations for normal classes. Today, I give you some information about forward-declaring templates and enums. Forward-declaring Enums As I wrote in the last post, the compiler does not always need to know the definition of a class. It needs one if we use one of its members or the base class, or if it has to know how large objects of that class are. One would think that the same applies to enums, but that's not the case. Forward-declaring plain old enums is not possible. The good news is that we can provide forward declarations for scoped enums, aka enum classes. We can also forward-declare enums with explicit underlying types. Both features have been introduced in C++11: enum OldEnum; //ERROR enum WithUnderlyingType : short; //OK enum class Scoped; //OK enum class ScopedWithType : int; //OK The actual definitions of the enums obviously have to match the declarations. For scoped enums that are declared or defined without an explicit underlying type, the underlying type is int. That means it does not matter whether the definition of Scoped explicitly adds int, or whether the definition of ScopedWithType omits it. Forward-declaring class templates Forward-declaring class templates is as easy as a normal class declaration: template <typename T, typename U> class X; It is also possible to provide forward declarations for specializations of those class templates: template <typename U> class X<int, U>; template <> class X<int, int>; Using incomplete types in templates When we instantiate a class template that is parametrized with one of our types, the question arises whether it is sufficient to only have a forward declaration of our type. Let's, for example, take this class definition: class MyClass { //... std::shared_ptr<MyOtherClass> pOther; }; Is a forward declaration of MyOtherClass OK, or do we have to #include the full definition? The answer depends on the class template, in this case shared_ptr.
As we recall, a forward declaration of shared_ptr would not be enough here, because the compiler needs to know the size of MyClass. That depends on the implementation of shared_ptr and whether it contains or inherits from MyOtherClass. It may not be much of a surprise that shared_ptr only stores a pointer to its argument type, so a forward declaration of MyOtherClass should be OK. Except for the fact that shared_ptr defines functions that use the argument type. That means that wherever we trigger the instantiation of one of those functions, MyOtherClass needs to be defined as well. At first glance, that may seem OK since we usually only use the member functions of class members in the source file. However, one of those member functions is the destructor. If MyClass does not explicitly define a destructor, the compiler will do that for us. The destructor will also call the destructor of pOther, which contains a call to the destructor of MyOtherClass. Whether and where we need the definition of MyOtherClass therefore depends on where we or the compiler define the destructor and the other special member functions. Rule of thumb: use fully-defined types in templates One of the points of using smart pointers is the Rule of Zero. We don't want to care about destructors and the like. Another point about using abstractions like class templates is that we should not need to know the exact implementation details. At least not enough to figure out whether the implementation needs us to define the template argument or whether just forward-declaring it is enough. And even if we know the implementation details of such a template, we should not depend on that knowledge. What happens if the template implementation changes and suddenly needs the definition of its argument? Every class that only provides a forward declaration will break. The bottom line is that, in general, it is better to #include the definition of our template arguments.
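The destructor trade-off described above can be made concrete. Below is a hedged sketch (the class names, the value() accessor and the 42 marker are illustrative, not from the post): because MyClass declares its special member functions in the "header" part and defines them where MyOtherClass is complete, the forward declaration is enough for users of the header.

```cpp
#include <memory>

// --- MyClass.h (sketch) ---
// A forward declaration suffices here: no special member function of
// MyClass is defined in the header, so shared_ptr<MyOtherClass>'s
// destructor is not instantiated from this part.
class MyOtherClass;

class MyClass {
public:
    MyClass();
    ~MyClass();          // declared here, defined below
    int value() const;
private:
    std::shared_ptr<MyOtherClass> pOther;
};

// --- MyClass.cpp (sketch) ---
// From here on the full definition is visible, so the special member
// functions and value() can safely use MyOtherClass.
class MyOtherClass {
public:
    int data = 42;       // illustrative member
};

MyClass::MyClass() : pOther(std::make_shared<MyOtherClass>()) {}
MyClass::~MyClass() = default;   // shared_ptr's destructor is instantiated here
int MyClass::value() const { return pOther->data; }
```

If the destructor declaration were removed, the compiler would define it implicitly wherever MyClass is used, and every such place would suddenly need the full definition of MyOtherClass.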
Only in the rare cases where we need to micromanage our compile-time dependencies should we try to use a forward declaration instead. Forward-declaring library classes With all I've written about forward declarations, it might be tempting to provide forward declarations for classes and other entities provided by libraries. For example, if I only declare a function that takes a string, why should I have to #include <string> and all the stuff that comes with it? namespace std { class string; } Do not do this! It's simply wrong. std::string is not a class, but a typedef for std::basic_string<char>. And no, you can't simply add a forward declaration like template <class CharT> class basic_string; because that's not all there is to it, either. There surely are things in other libraries that are easier to provide forward declarations for, right? Don't be tempted to forward-declare those classes and templates, either. Libraries change, classes become type aliases and vice versa. Those changes will then break your code in nasty ways. If, however, you happen to be on the other side and write libraries yourself, consider providing headers that contain forward declarations for your classes and templates. An example is the standard header <iosfwd> that provides forward declarations for things related to iostreams and the like. 6 Comments Permalink It's a great feature of C++11 that we can forward-declare enums. But I think it was available in MSVC as an extension even before C++11. Permalink What strategies do you suggest for avoiding writing all your template code in a 'public' header? In cases where you know you'll be using the templated code with just a few specific types, can you not hive that off into a separate library with pre-instantiated instances of the classes with just those specific types? Your code can then link to the concrete classes instead?
If the code is genuinely used as generic in your code, this isn't necessarily possible, but it seems we over-pollute our headers when we put the templated class implementation directly into our public headers.... Permalink What you suggest is possible, although it reduces the usefulness of templates. It depends largely on the use case, I'd say. Permalink Why do you draw a distinction between your own code and 'library' code? Your own code could also change its implementation. I notice that the Google guidelines suggest never forward-declaring individual classes, for the 'library' reason, unless you do so via a 'libfwd' header. Permalink Thanks for the comment! Our own code is under our control, and we know better how likely we are to make changes that would break forward declarations. Having said that, it can of course be a good idea to have forwarding headers for our own common classes, too. Permalink As an individual, yes perhaps, but as part of a larger team I am wary of the cons of this approach. Perhaps the header speedup for builds could be achieved in other ways? E.g. comparing single-threaded, parallel, unity and precompiled header approaches.
https://arne-mertz.de/2018/03/forward-declaring-templates-and-enums/
Given two strings A and B, the task is to count the number of ways to insert a character into string A to increase the length of the Longest Common Subsequence (LCS) of string A and string B by 1. Examples: Input: A = "aa", B = "baaa" Output: 4 The longest common subsequence shared by string A and string B is "aa", which has a length of 2. The length of the longest common subsequence can be increased to 3 by adding a single character to string A in the following ways: - There are 3 different positions in string A where we could insert an additional 'a' to create the longest common subsequence "aaa" (i.e. at the beginning, middle, and end of the string). - We can insert a 'b' at the beginning of the string for a new longest common subsequence of "baaa". So, we have 3 + 1 = 4 ways to insert an alphanumeric character into string A and increase the length of the longest common subsequence by one. Let's say that for a given string A and string B, the length of their LCS is k. Let's insert a single character 'c' after the ith character of string A and denote the string formed after the insertion as Anew, which looks like: Anew = A[1..i] . c . A[i+1..n], where A[i..j] denotes the substring of string A from the ith to the jth character and '.' denotes concatenation of two strings. Let's define knew to be the length of the LCS of Anew and B. Now we want to know when knew = k + 1. The crucial observation is that the newly inserted character 'c' must be a part of any common subsequence of Anew and B having length > k. We know this because any such subsequence that did not contain the inserted 'c' would also be a common subsequence of A and B, which would mean the length of the LCS of A and B is > k, a contradiction. Using the above observation, we can try the following approach.
For each possible character 'c' (there are 52 upper and lower case English letters and 10 Arabic digits, so there are 62 possible characters to insert) and for every possible insertion position i in string A (there are |A| + 1 insertion positions), let's try to insert 'c' after the ith character of string A and match it with every occurrence of 'c' in string B, so that: A[1..i] . c . A[i+1..n] B[1..j-1] . c . B[j+1..m] Now, in order to check whether such an insertion produces an LCS of length k + 1, it is sufficient to check whether the length of the LCS of A[1..i] and B[1..j-1] plus the length of the LCS of A[i+1..n] and B[j+1..m] is equal to k. In that case, the LCS of Anew and B has length k + 1, because the fixed occurrences of character 'c' are matched with each other and the prefix and suffix parts contribute a common subsequence of total length k. If we can quickly get the length of the LCS between every two prefixes of A and B as well as between every two of their suffixes, we can compute the result. The length of the LCS between their prefixes can be read from the Dynamic Programming table used in computing the LCS of string A and string B. In this method, dp[i][j] stores the length of the longest common subsequence of A[1..i] and B[1..j]. Similarly, the length of the LCS between their suffixes can be read from an analogous dp table, which can be computed during the computation of the LCS of Areversed and Breversed, where Sreversed denotes the reversed string S.
Python3

# Python program to count the number of ways to insert a
# character to increase the LCS by one

MAX = 256

def numberofways(A, B, N, M):
    pos = [[] for _ in range(MAX)]

    # Insert all positions of all characters in string B
    for i in range(M):
        pos[ord(B[i])].append(i + 1)

    # Longest Common Subsequence of prefixes
    dpl = [[0] * (M + 2) for _ in range(N + 2)]
    for i in range(1, N + 1):
        for j in range(1, M + 1):
            if A[i - 1] == B[j - 1]:
                dpl[i][j] = dpl[i - 1][j - 1] + 1
            else:
                dpl[i][j] = max(dpl[i - 1][j], dpl[i][j - 1])
    LCS = dpl[N][M]

    # Longest Common Subsequence computed from the reverse (suffixes)
    dpr = [[0] * (M + 2) for _ in range(N + 2)]
    for i in range(N, 0, -1):
        for j in range(M, 0, -1):
            if A[i - 1] == B[j - 1]:
                dpr[i][j] = dpr[i + 1][j + 1] + 1
            else:
                dpr[i][j] = max(dpr[i + 1][j], dpr[i][j + 1])

    # Try inserting a character between position i and i+1
    ans = 0
    for i in range(N + 1):
        for j in range(MAX):
            for x in pos[j]:
                if dpl[i][x - 1] + dpr[i + 1][x + 1] == LCS:
                    ans += 1
                    break
    return ans

# Driver Code
if __name__ == "__main__":
    A = "aa"
    B = "baaa"
    N = len(A)
    M = len(B)
    print(numberofways(A, B, N, M))

# This code is contributed by vibhu4agarwal

Output: 4
Time Complexity: O(N x M)
https://tutorialspoint.dev/algorithm/dynamic-programming-algorithms/number-ways-insert-character-increase-lcs-one
You could use #pragma once for your header file containing your typedefs, function declarations and definitions, etc. and then include it in multiple files without any problems, I believe. > for (i=0; i = days; i++) Your test condition is an assignment and not a test condition, as pointed out by @rstanley already. Perhaps what you want is this: for (i = 0; i < days; i++) { #include <iostream> struct Point { int x; int y; Point () : x(0) , y(0) { } Thanks for referring me to the docs, I've downloaded the PDF and will make sure to read through carefully. For now though, I will not be reading through any solutions as that way I don't get to strain... > NOW it is clear all points that can form a rectangle (3 or 4 - not 2, which will give us infinite rectangles [with different areas and alignment]) must be AT the vertices. Is that correct and... I found a similar question but this one caters only to those rectangles whose edges/sides are parallel to the axes. This is a fairly easier question in which I didn't have to find slopes and do all the... The question says to find the maximum possible number of rectangles. But it would be interesting to add code for displaying the points that form each rectangle. Hey! Flp's interpretation is wrong. Let's say there are four points with their coordinates entered (A, B, C, D being the points). Based on the rectangle posted by flp, AB forms one side of the... for (Ctr1 = 0; Ctr1 < TotalPossibleLines; Ctr1++) { for (Ctr2 = Ctr1 + 1; Ctr2 < TotalPossibleLines; Ctr2++) { if (SlopeOfLine[Ctr1] == SlopeOfLine[Ctr2]) { ... Example 1: Points: 6 Coordinates: (0,0) , (0,1) , (1,0) , (1,1) , (2,0) , (2,1) Total Possible Rectangles: 3 (Two smaller ones and one larger one) 1) (0,0) , (0,1) , (1,1) , (1,0) ... You're just being stupid.
I'm not gonna bother replying to the crap you post. My eyes bleed reading this. > BE SPECIFIC! I'm not the one who's supposed to be specific. That is literally what the question is. Any (non-idiotic, I may add) person will understand the... Hey Salem! Again, I implemented this because this was the first idea I got after looking at the question. The constraint mentioned in the question was 4 < No. of points < 100. I stuck with arrays as... Buddy, read the question again. In case it's not very clear, a rectangle is a shape with a pair of sides parallel to each other and another pair of sides that are also parallel but perpendicular to... Sorry for the code elongation. Every time I copy-paste code from C::B, each newline character results in two newline characters on here. So, the program looks bigger than it actually is. Hey, I've been practicing for another competitive programming event and there's this question which I'm facing difficulty solving. So, the question is to count the number of all possible... Very true. I was reading the Stroustrup FAQ and a PDF that I came across on the internet while doing my research on the evolution of C and C++. There's so many ways I've been invoking undefined... All the books I've downloaded were done without changing/modifying the name so a quick Google search should land you at the right book. Out of the many books and websites saved, I've read almost 50%... @Mod, I think this message got posted like three or more times, I don't know how. If it's possible, please delete the other two. Thanks! :) It's been nearly 1.5 years since I wrote my first line of code in C++. I remember how much I disliked programming in my first few days because I had never had the exposure and I thought that learning... Hey Thomas! Sometimes the simplest of mistakes happen from the best of us..... Often, in companies and institutions, the mightiest of bugs may occur due to just a single line of code. Take for...
Assignment: Write a program to display all possible permutations of additions that lead to the Input number so long as they contain the digit "1" Example: Input: 5 Output: I had this question come in a test a few months back. (Just so I don't seem like a complete idiot, here's the question for beginners in Pointers ;)) (Don't use a compiler though) int Ar[] = {...
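The expected output of the permutations-of-additions assignment quoted above was cut off. Under one plausible reading (the order of addends matters, and every sum must contain at least one addend equal to 1; the function name and that reading are my assumptions, not the original poster's), a sketch:

```python
def sums_containing_one(n):
    """All ordered ways to write n as a sum of positive integers
    where at least one addend equals 1 (a hypothetical reading)."""
    results = []

    def build(remaining, parts):
        if remaining == 0:
            if 1 in parts:          # keep only sums that use a 1
                results.append(list(parts))
            return
        for part in range(1, remaining + 1):
            parts.append(part)
            build(remaining - part, parts)
            parts.pop()

    build(n, [])
    return results

for combo in sums_containing_one(5):
    print(" + ".join(map(str, combo)))
```

For an input of 5 this prints sums such as 1 + 4, 4 + 1 and 1 + 1 + 3; of the 16 ordered sums of 5, only 5, 2 + 3 and 3 + 2 contain no 1, so 13 lines are printed.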
https://cboard.cprogramming.com/search.php?s=958a4371ca802369adff09c072aec966&searchid=1933113
A variant of Debug.Log that logs an error message to the console. When you select the message in the console, a connection to the context object will be drawn. This is very useful if you want to know on which object an error occurs. When the message is a string, rich text markup can be used to add emphasis. See the manual page about rich text for details of the different markup tags available. See Also: Debug.unityLogger, ILogger, Logger.LogError.

using UnityEngine;
using System.Collections;

public class MyGameClass : MonoBehaviour
{
    private Transform transform;

    void MyGameMethod()
    {
        if (transform == null)
            Debug.LogError("memberVariable must be set to point to a Transform.", transform);
    }
}

Note that this pauses the editor when 'ErrorPause' is enabled.
https://docs.unity3d.com/2020.3/Documentation/ScriptReference/Debug.LogError.html
Two weeks from now it will be one year since Kotlin started out as an open source project. It's been a lot of hard work over this time, with a huge amount of help from the community: we received 164 pull requests, which means a contribution every other day or so. Today we make another step and roll out Kotlin M5. This blog post covers the changes introduced in this release. Overview M5 was a short milestone (you should subtract the New Year's break from its term), but we got rid of 144 issues in the tracker. Many IDE subsystems were improved, including the JUnit runner, search for Kotlin classes from Java, better diagnostics for invalid external annotations, new icons and support for the Darcula color scheme. Minor changes in the language include better support for Float literals (you can now simply say 1.0 where a Float is expected) and the ability to mix positioned and named arguments in function calls. Some of the changes are not so humble and may require you to fix the existing code… Package Classes In older versions of Kotlin, every package that had top-level functions or properties declared was compiled to a class named "namespace", where the top-level declarations were represented by static methods. When you used more than one of these "namespace" classes in Java, you ran into a name clash: you cannot import two classes with the same name into the same compilation unit. With Kotlin M5, package classes are named after their respective packages, which gives them different names and fixes this problem. The naming convention works as follows: package "org.example" gets a class "org.example.ExamplePackage". That is, we take the simple name of the package, capitalize it, append "Package" and put that class into the package itself. So far it works pretty well. NOTE: older versions of kotlin-runtime.jar will not work any more because of this change. The compiler will complain about an "incompatible ABI version", and the IDE will propose to replace the old runtime jar with a new one.
Inner Classes An inner class is a non-static nested class, i.e. it holds a reference to an instance of its outer class. In Java, nested classes are inner by default, and if you don't want a reference to the outer class, you make your class static. Sometimes this leads to memory leaks, when someone holds a reference to an instance of an inner class without knowing that it also holds an outer instance. Since M5, Kotlin wants you to mark inner classes explicitly, and nested classes are "static" by default. This may break your existing code, and in the IDE there's a handy quick-fix to the rescue (just press Alt+Enter on the error). Java Generics and Nullability Generics are tricky, and their combination with nullable types is trickier still. In Java everything is nullable; for example, given a Java method foo(ArrayList<String>), Kotlin (before M5) used to see the parameter as ArrayList<String?>?, i.e. the collection may be null, and its elements may be null too. This is the safest thing we can do, but it proved to be very inconvenient: if you have an ArrayList<String> in Kotlin, you can't pass it to foo(): ArrayList is invariant in its generic parameter and thus ArrayList<String> is not a subtype of ArrayList<String?>. This causes a lot of pain, even when KAnnotator is used. So we decided to change the default strategy for generic argument types, and load ArrayList<String>? in the case above. This change may break some existing code. Most of it is straightforwardly fixable by removing unneeded question marks. If you want the old type, you can add an external annotation to your Java definition. But what about safety? Now Java code may fool you by giving you a collection of nulls instead of strings, and your Kotlin code will fail. This may happen, but we make it fail helpfully: Kotlin checks data received from Java and fails early, with a detailed error message pointing at the offending value. This is much better than an NPE sometime later, maybe.
The same kind of checks is performed for function parameters: if someone calls a Kotlin function illegally passing a null, it will blow up early, blaming the guilty party as precisely as possible. Varargs and function literals Kotlin's type-safe builders are awesome, especially if you note that they are not a built-in mechanism, but merely a combination of nice language features (mainly extension functions and higher-order functions). One thing was bothering builder writers in older versions of Kotlin: you could not define a vararg function that could also take a function literal as an argument outside the parentheses. Now you can, and you can also use named and positioned arguments together (including varargs). Ranges Kotlin's standard library evolves too, and this time we revised ranges. To remind you, ranges are used a lot in loops and conditions. The new ranges are more consistent internally and generalize properly to cases with descending iteration, nontrivial increments and so on. We'll provide more details in a separate blog post this week. Default constructors Kotlin allows only one constructor per class. When modeling our data, we often use default values for constructor parameters (after all, this is what makes having only one constructor practical). Now, constructors are even more convenient: in the generated byte code such a class will also get a default constructor, i.e. one that takes no arguments and uses the default values. This case comes up a lot when using Java frameworks like JAXB, so now Kotlin is even more Java-friendly. Conclusion You can download Kotlin M5 from the plugin repository. It requires IntelliJ IDEA 12 (using the recently released 12.0.3 is recommended). Have a nice Kotlin! Congratulations on this release. Any chance of fixing KT-400 (java get/set) some time soon? Thanks. It's not "fixing", it's "implementing". We do not know how to do it properly. Great news!! I will start playing with it right now!
And this new constructor generation will allow easier integration with Hibernate, Spring and other libraries! Thank you guys!! This new M5 release has completely transformed the definition and usage of a JavaFX builder that I am developing as a means of bootstrapping my understanding of Kotlin. Thanks for all your efforts.
https://blog.jetbrains.com/kotlin/2013/02/kotlin-m5-is-out/
sso with oracle database on suse 10 and active directory Hi, I want to implement single sign-on for an Oracle Database using Active Directory. My configuration is a server running SUSE Linux Enterprise SP2 (SLES10) with an Oracle Database 10g installed and a server running Windows Server 2003 with Active Directory installed. How do I implement... Is there any way that I can quickly find out who is hidden in Active Directory for Exchange recipients, contacts and Distribution Groups? Exchange 2000 and Active Directory mess... dhcp & dns relation in active directory Restricting access to users on a specified OU I want to give access to users on a different domain and restrict access to users on a specified OU. Do I need to set up something on the Firewall, DNS, Sites & Services or a one-way trust relationship? Active Directory Windows 2003 on both my domain (a.com) and the other domain (b.com). 3rd line / network support Can someone provide me with 3rd line technical questions please, working with Active Directory, Exchange 2000/2003 and Citrix Presentation Server 4.5. What FSMO placement considerations do you know of? I want to let users from Domain A use resources on Domain B. Domain A resides in a different forest than Domain B. I want to limit access to a specified OU. Question: what should I set up? - Firewall - DNS - Sites and Services - Trust relationship. Thanks and regards. import contacts into exchange How do I import a contact list into an organizational unit in Exchange 2003? Active Directory and Network security management.... Users cannot change password when prompted I use Windows Server 2003 with Active Directory. I have strict password policies set, and lately my users have been getting warnings that their passwords are expiring in X amount of days. I know the default is set to start prompting at 14. The problem is that when my users opt to change their...
Distribution list does not hide from GAL I click hide distribution list in Active Directory, but it still shows up in the GAL. Can I use my DC server? We extended the AD schema in preparation for 2008 and had an IPSec GPO stop working. The GPO, or at least the name, seems to work through GPresult, but the IP filter settings do not apply. How can I do a reverse lookup of a folder's pathname from a domain group that grants the access? Active Directory replication I have created a user in my Active Directory. I want to replicate this user to another site, which is my ADC. How do I do it?
http://itknowledgeexchange.techtarget.com/itanswers/tag/active-directory/page/27/?page=30
See end for updates to my ideas on this. See also my write your own CBV base class post. I've written before about the somewhat doubtful advantages of Class-Based Views. Since then, I've done more work as a maintenance programmer on a Django project, and I've been reminded that library and framework design must take into account the fact that not all developers are experts. Even if you only hire the best, no-one can be an expert straight away. Thinking through things more from the perspective of a maintenance programmer, my doubts about CBVs have increased, to the point where I recently tweeted that CBVs were a mistake. So I thought I'd explain my reasons here. First, I'll look at the motivation behind CBVs and how they are doing at solving what they are supposed to solve, and then analyse the problems with them in terms of the Zen of Python. What problems do CBVs solve? Customising generic views People kept wanting more functionality and more keyword arguments for the list_detail views (and others, but those especially, as I remember). The alternative was large-scale copy-and-paste of the code, so people were understandably keen to avoid that (and to avoid writing any code themselves). So, we replaced those views with classes that allow people to override just the bit they need to override. This eliminates the need for code duplication, and removes the burden of lots of feature requests for generic views. Or does it? Instead of tickets for keyword arguments to list_detail, it seems we have a bunch of other tickets asking for changes to CBVs, many of which can't be implemented as mixins or subclasses. One of the problems is that if the view calls anything else (e.g. a paginator class, or a form class), you have to provide hooks for how it calls it, which means implementing methods that can be overridden.
If you forget any, or if the thing you are calling gains some new keyword arguments, you've got feature requests, or duplication because someone had to override a larger method just to change one aspect of it. If you don't forget any, you've got dozens of little methods to document. Also, there are problems like this attempt to mix FormView with ListView functionality. Fixing this will end up with similar amounts of copy-paste, but in this case it requires a fair bit of debugging first to realise you have a problem. So I'm not convinced CBVs have made much difference here. Eliminating flow control boilerplate The classic example is editing using a form. You see this pattern again and again using function based views (FBVs from now on):

from django.shortcuts import render

def contact(request):
    if request.method == 'POST':
        form = ContactForm(request.POST)
        if form.is_valid():
            send_contact_message(form.cleaned_data['email'],
                                 form.cleaned_data['message'])
            return HttpResponseRedirect('/thanks/')
    else:
        form = ContactForm()
    return render(request, 'contact.html', {'form': form})

Without question this is tedious and annoying. CBVs reduce this to:

from django.views.generic.edit import FormView

class ContactView(FormView):
    form_class = ContactForm
    template_name = 'contact.html'
    success_url = '/thanks/'

    def form_valid(self, form):
        send_contact_message(form.cleaned_data['email'],
                             form.cleaned_data['message'])
        return super(ContactView, self).form_valid(form)

Much better! However... It's not really that much shorter: 8 lines compared to 11, ignoring imports. But now let's make it more realistic. We're going to have:

- Initial arguments to the form that are based on the request object.
- Priority users get a form with the option to indicate 'urgent' status for the message, which results in a text message as well as an email. The template is also rendered a bit differently for them and needs a flag.
- URLs defined using reverse, as they should be.
FBV:

```python
from django.core.urlresolvers import reverse
from django.http import HttpResponseRedirect
from django.shortcuts import render

def contact(request):
    high_priority_user = (not request.user.is_anonymous()
                          and request.user.get_profile().high_priority)
    form_class = HighPriorityContactForm if high_priority_user else ContactForm
    if request.method == 'POST':
        form = form_class(request.POST)
        if form.is_valid():
            email, message = form.cleaned_data['email'], form.cleaned_data['message']
            send_contact_message(email, message)
            if high_priority_user and form.cleaned_data['urgent']:
                send_text_message(email, message)
            return HttpResponseRedirect(reverse('contact_thanks'))
    else:
        form = form_class(initial={'email': request.user.email}
                          if not request.user.is_anonymous() else {})
    return render(request, 'contact.html',
                  {'form': form,
                   'high_priority_user': high_priority_user})
```

CBV:

```python
from django.core.urlresolvers import reverse_lazy
from django.views.generic.edit import ProcessFormView

class ContactView(ProcessFormView):
    template_name = 'contact.html'
    success_url = reverse_lazy('contact_thanks')

    def dispatch(self, request, *args, **kwargs):
        self.high_priority_user = (not request.user.is_anonymous()
                                   and request.user.get_profile().high_priority)
        return super(ContactView, self).dispatch(request, *args, **kwargs)

    def get_form_class(self):
        return HighPriorityContactForm if self.high_priority_user else ContactForm

    def get_initial(self):
        initial = super(ContactView, self).get_initial()
        if not self.request.user.is_anonymous():
            initial['email'] = self.request.user.email
        return initial

    def form_valid(self, form):
        email, message = form.cleaned_data['email'], form.cleaned_data['message']
        send_contact_message(email, message)
        if self.high_priority_user and form.cleaned_data['urgent']:
            send_text_message(email, message)
        return super(ContactView, self).form_valid(form)

    def get_context_data(self, **kwargs):
        context = super(ContactView, self).get_context_data(**kwargs)
        context['high_priority_user'] = self.high_priority_user
        return context
```

(A few lines could be shaved by not using super() in a number of places, but only at the expense of future confusion/problems if a maintainer was expecting the normal declarative behaviour.)

Notice:

- I really want high_priority_user to be a local variable that is calculated once and used in a couple of places. With a function, that's what I have. With a CBV, I have to simulate it using an attribute on self, and I also need to override the dispatch() method just to create it. These are both ugly hacks.
- The CBV version is extremely noisy, due to all the calls to super(). This would be improved in Python 3 (at the expense of a rather magical super() builtin), but is still far from perfect. Every time you override a method, you have to mention it twice.
- The CBV version is now significantly longer: 24 non-blank lines compared to 17. Since I'm using the same APIs for requests and forms, and doing the same thing, this can only mean that the amount of boilerplate has significantly increased. Sure, I've removed the flow control boilerplate, but I must have added some other type.
- Even if you know the Form API very well, you are going to have to look up the docs or the source code to find get_initial() etc. and ensure you get the signature correct. You have to know two sets of APIs to use a form, instead of one.
- In the CBV, flow control is totally hidden. In the process of abstracting away the duplication, we've also hidden the order of execution. I've ordered my methods in the order they are called (I think), but that may or may not be obvious to anyone else, and nothing forced me to do it.
- Because of the last point, it is massively more difficult to debug.

Which of the two would you rather maintain? Which do you think a maintenance programmer, who has never seen this code before, would rather maintain? To understand what is going on, you've got an intimidating stack of base classes to navigate, compared to a single function.
The fundamental problem here is that Python sucks at implementing custom flow control. Not many languages shine here. Ruby has blocks, which help. Haskell has a pretty good story due to a combination of succinct function definition, lazy evaluation and the way that IO works. Lisp has macros. But with Python, we are limited to: abusing generators, abusing the with statement, or classes and the template method design pattern (which is basically what CBVs use). I think we decided to use the latter because it's one of the only options we've got, but failed to notice that it's just not very good.

There are worse things that can happen with shifting requirements. What if you want two different forms on the same page? I've done this on more than one occasion, and it can be a useful thing, such as when you present the user with two different courses of action and you've got completely different information they need to fill in. It would be a nightmare to get CreateView or UpdateView to do this. You will either produce a monstrosity, or you'll have to start from scratch with an FBV. You've been seduced down a tempting, calm stretch of water, and then left high and dry when you come to the end of what CBVs offer. If you start with an FBV, it's an easy change.

Overall, when I work on CBVs, even views I've created, I start to feel like I'm working on a classic ASP.NET Page class. It is bringing back painful memories! We're not as bad as that yet, but methods that simply modify some data on self are a code smell suggesting we are heading in that direction.

Another way of looking at it is that with CBVs, views have become an instance of the 'framework pattern' instead of the 'library pattern'. With a framework, your application code gets called by the framework code, and you can easily end up having to understand how the framework is implemented.
With a library, the library code gets called by your application code, and this is in general much more flexible, much easier to document and much easier to debug. We ought to be moving Django more in the direction of a library.

With the comparison to ASP.NET, I'm also basically saying CBVs are not Pythonic. I'll back up this claim in terms of selected parts of the Zen of Python.

## Are CBVs Pythonic?

### Beautiful is better than ugly

This is a subjective one, but the noise of all the super() calls, get_context_data() compared to a simple {}, etc. is ugly to me.

### Simple is better than complex

Let's notice, first off, that FBVs are simpler than CBVs, according to the basic meaning of having fewer parts. A class is a data structure with attributes and functions attached; a function is just a function. So a class is more complex than a function. If we ever use a class where a function would work, we have to justify the additional complexity.

### Complex is better than complicated

Are CBVs complicated? Just read the Django source if you think they are not.

In one app I wrote, I had a mixin that implemented a bit of common flow control for forms: it returned a JSON response with validation errors for certain types of requests. This meant I didn't need separate views for the AJAX validation, and with CBVs I eliminated all the boilerplate. Great! Well, it was, until I found a crazy problem to do with the order in which different base classes set things up. To solve my problem, I ended up writing this:

```python
# MRO problem: we need BaseCreateView.post to come first in MRO, to
# provide self.object = None, then AjaxyFormMixin must be called before
# ProcessFormView, so that the right thing happens for AJAX.
class AjaxMroFixer(type):

    def mro(cls):
        classes = type.mro(cls)
        # Move AjaxyFormMixin to one before last that has a 'post' defined.
        new_list = [c for c in classes if c is not AjaxyFormMixin]
        have_post = [c for c in new_list if 'post' in c.__dict__]
        last = have_post[-1]
        new_list.insert(new_list.index(last), AjaxyFormMixin)
        return new_list
```

It's a metaclass that implements the mro() method, which allows you to override the method resolution order. I am not proud of this code. (For those who don't get English understatement, please understand what I'm saying: this code is horrific.) Before writing this code, I didn't know that the mro() method existed. Although I value the education, if a framework forces you to learn about type.mro() and implement it, it is doing something wrong.

### Flat is better than nested

The hierarchy of classes is certainly a form of nesting, whereas functions are flat. I think CBVs might be considerably better if you started by writing your own, so that you had base classes that did everything you needed, but nothing more, and ended up with a very flat hierarchy.

### Readability counts

Get some people who don't know Django to read ContactView above and figure out what it does, and some others to read the function version, and see who has the easier time. Ask them what happens, for example, when the form is invalid. There is no contest here.

### Explicit is better than implicit

With a CBV, by inheriting from a class you are inheriting all the behaviour of that class, and all its parent classes. You could argue this is explicit, since you've explicitly indicated the base class, but you haven't indicated all the parent classes; they come automatically. So it's more like an implicit request in practice. Of course, this is always true with OOP to some extent: you are inheriting a bunch of behaviour precisely because you don't want to define it again. But let's notice that it does have its downsides.
For example, let's play spot the difference between the following two views. An FBV:

```python
from django.shortcuts import render

def my_view(request):
    return render(request, "my_template.html", {})
```

and a CBV:

```python
from django.views.generic import TemplateView

class MyView(TemplateView):
    template_name = "my_template.html"
```

Can you see the important difference? Try to answer before reading on. I'm talking only about the functional differences when this view is accessed by a client, not differences of code organisation or re-use or performance.

Well, TemplateView inherits from View, which provides a dispatch() method, and TemplateView provides a get() method which will handle all GET (and HEAD) requests, by the logic defined in View.dispatch(). However, neither defines a post() method, which means that you will get a "405 Method Not Allowed" error if you try POST or other HTTP verbs with the CBV view, whereas you will get a 200 with the FBV.

In the CBV, all of this logic has been invoked implicitly. Nothing in what I wrote was an explicit request for 405s for POST requests, but I got it because I inherited from TemplateView, even though using a template does not imply that behaviour. (By the way, this issue caused a real problem in a site I wrote. It was easily debugged because I knew a lot about the internals of CBVs, and therefore easily fixed, but it still adds noise to my class: code that can only be explained by reference to some inherited behaviour I didn't really want.)

Of course, as already mentioned, you can say the same thing whenever you inherit from a class. But, in my opinion, it is one of the disadvantages of OOP, and it is showing itself here. The question is: do the advantages of inherited behaviour outweigh the disadvantages of implicit behaviour?

### There should be one way to do it

I already mentioned one case where the CBV method is just not going to work for you, or is not going to be worth it: needing two different forms on the same page.
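To make the implicit 405 behaviour concrete: the heart of View.dispatch() works roughly like the sketch below. This is a simplified illustration, not Django's actual code; the HttpResponseNotAllowed stand-in is defined here only so the sketch is self-contained.

```python
class HttpResponseNotAllowed:
    """Stand-in for django.http.HttpResponseNotAllowed (illustration only)."""
    status_code = 405

    def __init__(self, permitted_methods):
        self.permitted_methods = permitted_methods


class View:
    http_method_names = ['get', 'post', 'put', 'delete', 'head', 'options', 'trace']

    def dispatch(self, request, *args, **kwargs):
        # Look up a handler method named after the HTTP verb. If the
        # subclass never defined one (e.g. no post()), fall back to 405.
        if request.method.lower() in self.http_method_names:
            handler = getattr(self, request.method.lower(),
                              self.http_method_not_allowed)
        else:
            handler = self.http_method_not_allowed
        return handler(request, *args, **kwargs)

    def http_method_not_allowed(self, request, *args, **kwargs):
        allowed = [m for m in self.http_method_names if hasattr(self, m)]
        return HttpResponseNotAllowed(allowed)
```

So a TemplateView subclass gets a 405 for POST not because anyone asked for it, but simply because get() exists and post() doesn't by the time dispatch() goes looking.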
But in the real views I write, I suspect the majority would not fit easily into a CBV. So in your project you now need both FBVs and CBVs: you've got two ways to do it. When a maintenance programmer comes along and needs to do some similar work that involves a form, they will be confused. Which pattern do they start from? The simple way to avoid this is to avoid the less flexible solution: avoid using CBVs.

So simply having two different ways of building up views is a violation of this principle, but within CBVs there are also instances.

First, there are problems with the attempt to use a declarative style, which is perhaps the most attractive feature of the CBV API. So, you need initial arguments to your form? Just define the initial attribute on your class. Unless, however, you need it to be dynamic, in which case you'll have to define get_initial() instead.

Also, it often isn't obvious where to add certain bits of code. There is often more than one choice of method to override when you come to add a little bit of functionality, e.g. get() or dispatch(), and which you choose sometimes has subtle implications, and sometimes doesn't matter. With FBVs, two different Django developers tasked with the same modification to an existing view would often produce identical or nearly identical patches, but I suspect that would be rarer with CBVs.

UPDATE 2015-12-31: The mantra of "There should be one way to do it" means that very often CBVs will be used when they are really inappropriate, once you've started down that path.
That explains silliness like the following:

```python
class FeedbackView(SingleObjectMixin, generic.TemplateView):
    template_name = 'dashboard/designers/feedback.html'
    model = DesignerFeedback
    context_object_name = 'feedback'
    key = "feedback"

    def get_object(self, queryset=None):
        self.object = get_object_or_404(
            DesignerFeedback,
            pk=self.kwargs['pk'],
            designer=self.request.user.designer,
        )
        return self.object

    def dispatch(self, request, *args, **kwargs):
        self.object = self.get_object()
        return super(FeedbackView, self).dispatch(request, *args, **kwargs)
```

That was a real example taken from a project I'm working on, and from the perspective of a maintenance programmer it's a nightmare, involving multiple levels of indirection to achieve a very simple task. It should have been written as this very simple code, which is less than half the size:

```python
def view_feedback(request, pk=None):
    feedback = get_object_or_404(
        DesignerFeedback,
        pk=pk,
        designer=request.user.designer,
    )
    return render(request, 'dashboard/designers/feedback.html',
                  {'feedback': feedback})
```

It was probably written as the much more complicated version due to either cargo-culting (which becomes necessary when you have complicated class hierarchies that no-one really understands) or deliberate consistency with the surrounding code.

### If the implementation is hard to explain, it's a bad idea

Looking at the implementation, you'll find things like MultipleObjectTemplateResponseMixin, as well as MultipleObjectMixin and TemplateResponseMixin, and the former is not just the composition of the other two. This is just one of many signs that things are going wrong. If you could really compose behaviour just by adding mixins, MultipleObjectTemplateResponseMixin would not be needed. Explaining things like this is hard, and the reason is that you just can't build up complex views using classes and mixins.
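For contrast, the two-forms-on-one-page case mentioned earlier is routine in an FBV. Below is a sketch of that shape. ProfileForm, PasswordForm and the render/redirect helpers are throwaway stand-ins (not Django's), defined only so the example runs on its own; the flow control, however, is exactly what the real view would contain.

```python
# Throwaway stubs standing in for Django pieces, so the flow is runnable here.
class FakeForm:
    def __init__(self, data=None, prefix=''):
        self.data, self.prefix = data, prefix

    def is_valid(self):
        return self.data is not None and bool(self.data.get(self.prefix))

    def save(self):
        pass

class ProfileForm(FakeForm): pass
class PasswordForm(FakeForm): pass

def redirect(url):
    return ('redirect', url)

def render(request, template, context):
    return ('render', template, context)


def account_settings(request):
    # Two unrelated forms on one page; the submit button name tells us
    # which one was posted. Plain flow control, no mixins required.
    profile_form = ProfileForm(prefix='profile')
    password_form = PasswordForm(prefix='password')
    if request['method'] == 'POST':
        post = request['POST']
        if 'profile-submit' in post:
            profile_form = ProfileForm(post, prefix='profile')
            if profile_form.is_valid():
                profile_form.save()
                return redirect('/settings/')
        elif 'password-submit' in post:
            password_form = PasswordForm(post, prefix='password')
            if password_form.is_valid():
                password_form.save()
                return redirect('/settings/')
    return render(request, 'settings.html',
                  {'profile_form': profile_form,
                   'password_form': password_form})
```

Adding a third form, or making one form conditional, is just another branch; there is no base class whose assumptions you have to work around.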
## Conclusion

Overall, I think CBVs make:

- very simple views slightly shorter, much cleaner (and significantly harder to debug, but you don't need to debug them because they are simple);
- views of medium complexity significantly longer, with more boilerplate and noise, and much harder to debug;
- views of high complexity almost impossible.

You only gain for the simple views, but they were simple anyway, just slightly tedious. Is this advantage really enough to outweigh the disadvantages I've listed?

To be honest, I regret dropping the function-based generic views for CBVs. There was an alternative solution to the tickets asking for them to do more: WONTFIX. Generic views were simple shortcuts for common problems. They didn't actually involve very much code, and if you needed to duplicate some of it you weren't duplicating much. We should have just said: this is what generic views do; if you need them to do something else, then write your own. Because that is what you have to say with class-based generic views anyway, just slightly later.

Regarding going forward, I say: stop the rot. If you're starting a new project, I recommend avoiding CBVs, or just wrapping them in thin functional wrappers, like this one for ListView (or, with django-pagination, a simple 3-line view function is probably all you need and is actually easier than subclassing ListView). For Django core, let's fix up the main problems and bugs CBVs have, and then leave them as solutions to simple problems. But please: let's not move everything in Django-world (whether Django core/contrib or reusable apps) to CBVs. Just use a function, and stop writing classes.

## Update: 2013-03-09

My opinions have changed slightly since I wrote the above, due to comments below and other helpful blog posts. I'd summarise by saying:

- There are some places where CBVs really shine, especially when you are writing many similar simple views.
- You can avoid some of the problems I mentioned by using your own class hierarchy, one that you control completely, keeping it flat and specific to your needs.
- I still prefer to write function-based views. I've spent too many hours debugging class hierarchies of different kinds, and seen too many view functions that started out fitting the kind of patterns that CBVs provide, and then broke them in big ways.
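As a footnote, the "thin functional wrapper" idea from the conclusion is tiny in practice. The sketch below uses a toy ListView stand-in so it runs without Django installed; a real wrapper would delegate to django.views.generic.ListView in just the same way, and the name object_list merely echoes the old function-based generic view.

```python
# Toy stand-in for django.views.generic.ListView, for illustration only.
class ListView:
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

    @classmethod
    def as_view(cls, **initkwargs):
        def view(request):
            instance = cls(**initkwargs)
            return instance.render(request)
        return view

    def render(self, request):
        return {'object_list': list(self.queryset),
                'template_name': self.template_name}


def object_list(request, queryset, template_name):
    """Plain-function facade: callers never subclass anything."""
    view = ListView.as_view(queryset=queryset, template_name=template_name)
    return view(request)
```

Callers keep a flat, greppable function call, and the class machinery stays an implementation detail behind one small function.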
https://lukeplant.me.uk/blog/posts/djangos-cbvs-were-a-mistake/
Details

Description

Currently the TaskTracker spawns the map/reduce tasks, resulting in them running as the user who started the TaskTracker. For security and accounting purposes the tasks should be run as the job owner.

Issue Links

Activity

I think HADOOP-4451 is either related or a duplicate of this. Correct?

Before beginning discussions on approach, I wanted to summarize my understanding of this task, and also start discussion on a few points that I have some questions on. The following are some salient points:

- We want to run tasks as the user who submitted the job, rather than as the user running the daemon.
- I think we also don't want to run the daemon as a privileged user (such as root) in order to solve this requirement. Right?
- The directories and files used by the task should have appropriate permissions. Currently these directories and files are mostly created by the daemons, but used by the task. A few are used/accessed by the daemons also. Some of these directories and files are the following:
  - mapred.local.dir/taskTracker/archive - directories containing distributed cache archives
  - mapred.local.dir/taskTracker/jobcache/$jobid/ - includes work (which is a scratch space), jars (containing the job jars), and job.xml
  - mapred.local.dir/taskTracker/jobcache/$jobid/$taskid - includes job.xml, output (intermediate files), work (current working dir) and temp (work/tmp) directories for the task
  - mapred.local.dir/taskTracker/pids/$taskid - written by the shell launching the task, but read by the daemons
- What should 'appropriate' permissions mean? I guess read/write/execute (on directories) for the owner of the job is required. What should the permissions be for others? If the task is the only consumer, then the permissions for others can be turned off. However, there are cases where the daemon / other processes might read the files. For instance:
  - The distributed cache files can be shared across jobs.
  - Jetty seems to require read permissions on the intermediate files to serve them to the reducers.

  In the above cases, can we make these world readable?
- Task logs are currently generated under ${hadoop.log.dir}/userlogs/$taskid. These are served from the TaskLogServlet of the TaskTracker.
- Apart from launching the task itself, we may need some other actions to be performed as the job owner. For instance:
  - Killing of a task
  - Maybe setting up and cleaning up of the directories / files
  - Running the debug script - mapred.map|reduce.task.debug.script

Is there anything that I am missing? Comments on the questions of shared directories / files - distributed cache, intermediate outputs, log files?

I think that (2) depends on how (1) is proposed to be addressed. If you assume that (1) is addressed by using seteuid() or the su command, such that processes actually run on the system as the appropriate user, then (2) is extremely difficult without being run as root. If (1) is addressed just by setting the UGI in some way, then this has disadvantages compared to seteuid/su, which facilitates secured access to non-HDFS resources (e.g. NFS in smaller environments).

I had some offline discussions with Arun and Sameer, and here are some initial thoughts on approach. A lot of details still need to be fleshed out, but I am posting this to get some early feedback. We do want to run the daemons as non-privileged users, and yet go with a setuid based approach to run tasks as a regular user. One approach that was proposed to do this is as follows:

- We create a setuid executable, say a taskcontroller, that will be owned by root.
- This executable will take the following arguments: <user> <command> <command arguments>.
  - <user> will be the job owner.
  - <command> will be an action that needs to be performed, such as LAUNCH_JVM, KILL_TASK, etc.
  - <command arguments> will depend on the command. For e.g.
LAUNCH_JVM would have the arguments currently used to launch a JVM via the ShellCommandExecutor.

- The tasktracker will launch this executable with the appropriate command and arguments when needed.
- As the executable is a setuid exe, it will run as root, and will quickly drop privileges using setuid, to run as the user.
- Then the arguments will be used to execute the required action, for e.g. launching a JVM or killing a task.
- Before dropping privileges, if needed, the executable could set up directories with appropriate ownership, etc.

Naturally this would be platform specific. Hence, we can define a TaskController class that defines APIs to encapsulate these actions. For e.g., something like:

```java
abstract class TaskController {
  abstract void launchTask();
  abstract void killTask(Task t);
  // etc...
}
```

This could be extended by a LinuxTaskController that converts the generic arguments into something that can be passed to the executable, for e.g. maybe a process ID.

One specific point is about the directory / file permissions. Sameer was of the opinion that the permissions should be quite strict, that is, world-readable rights are not allowed. There are cases where the task as well as the daemon may need to access files. To handle this, one suggestion is to first set the permissions to the user, and then change the ownership to the daemon after the task is done.

The points above specify a broad approach. Please comment on whether this seems reasonable, reasonable in parts, or completely way off the mark. (smile) Based on feedback, I would start implementing a prototype to flesh out the details.

+1 for a setuid program. It should be written in C, not Java, to ensure it has enough access to the platform to actually be secure. In particular, it has to clear both real and effective user ids. I'd like to see the proposed list of commands for the setuid program. No user-specified strings should be included on the command line, to avoid special character attacks.
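The privilege-drop step discussed in this thread (clear both real and effective user ids, and never accept root) can be illustrated in Python, even though the real taskcontroller is a C executable. The function names below are mine, not from the patch; the property being relied on is that setuid(), when called by root, clears both the real and effective uid.

```python
import os
import pwd

def resolve_job_owner(user_name):
    """Validate the requested user and return (uid, gid).

    Refuses root (uid 0) outright, per the rule in this thread that the
    setuid tool must never run anything as the superuser.
    """
    entry = pwd.getpwnam(user_name)  # raises KeyError for unknown users
    if entry.pw_uid == 0 or user_name == 'root':
        raise ValueError('refusing to run tasks as root')
    return entry.pw_uid, entry.pw_gid

def drop_privileges(user_name):
    uid, gid = resolve_job_owner(user_name)
    # Order matters: drop the group while still root, then the uid.
    # Called by root, setuid() clears both real and effective ids.
    os.setgid(gid)
    os.setuid(uid)
```

Validating the user name before doing anything else also addresses the point above about not trusting user-supplied strings: the name is resolved against the system user database, never interpolated into a shell command.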
I agree with Sameer that we should have very tight permissions on the map output and task directories. One of the subcommands should probably be to move the outputs from somewhere like $task/output to somewhere like $tt/output/$job/$task. Having a plugin that lets us switch between the current pure-Java implementation that doesn't change user ids and a setuid implementation sounds reasonable. We should continue to support the non-user-switch by default for clusters run by a single non-root user.

Thanks for the comments, Owen.

> It should be written in C, not Java to ensure it has enough access to the platform to actually be secure. In particular, it has to clear both real and effective user ids.

Yes, I had that in mind. Specifically, I was planning to do something like setuid(getpwnam(user_name)->pw_uid). Since this would be done by a program running as superuser (the setuid exe), it would clear both the real and effective uids.

> I'd like to see the proposed list of commands for the setuid program.

Sure, I will work on that and post the list here. In order to be reasonably complete, I think I should have a version that's working. So, I will start prototyping along the lines I described above.

What about Cygwin / Windows users?

Resetting fix-for version, as this will not make the feature freeze.

You are about to take on one of the big problems they hit in the grid world: identity. All the grid tools (Condor, Platform, etc.) have lots of effort put in at the OS level to create new users on target machines, manage the disk and cpu usage limits of that user, etc. But you also need to propagate identity over the wire, which gets you into SAML and other things. Because right now the JobTracker trusts you to be who you say you are, having caller authentication would be a prerequisite to doing back-end user switching.

If you are interested in running pure Java apps under different rights, this could be done via a security manager. Every task would be started with an explicit security manager/policy that limited what it could do; file and network operations would be checked against the policy.
Every task would be started with an explicit security manager/policy that limited what it could do, file and network operations would be checked against the policy. This would be portable and easier to test. It also eliminates the need to run the TT as root, to keep the unix user database in sync with the hadoop user list, etc. Steve, I do agree that the security manager approach is simpler and portable. However, I think the requirement is also to support features like streaming. Given that, I think the security manager approach would not work. Am I right ? Also, I agree that authentication and authorization is a pre-requisite for this. It is being handled by the other tasks under the jira HADOOP-4487. Here, I am focussing only on the mechanisms to make tasks run as the users who submitted the jobs - a small part of the larger framework. I have been able to make some progress and get a wordcount job to run as the job submitter. The design follows the basic approach mentioned above, minus the plugin abstraction, which I need to create yet. - Created a setuid C executable. - This executable currently takes the following commands: - SETUP_DIRS <list of directories>: This command sets up task specific directories to be owned by the user. The general approach I followed for handling directory permissions is that the root directories, such as hadoop.tmp.dir/mapred/local/taskTracker/jobcache/jobid would be owned by the tasktracker daemon, which creates task directories under it when needed. Then the taskcontroller exe will change the ownership and permissions of the task directory and sub folders to the user. - RUN_TASK <path to a file containing the M/R task to execute> The file is a temp file created under the user's work directory itself - executable by the user - MOVE_FILES <source directory> <destination directory> This command is used to copy the intermediate output and task logs from the task directories to a system specific directory owned by the daemon. 
The servlets serving this data are modified to read from the system-specific directory.

- These are called from the JvmManager class at appropriate places.

A couple of things came up when doing this:

- Task logs: Currently task logs can be viewed while the task is still executing. Further, the task logs are read by the TaskLogServlet, which is running in the daemon context. We want the task logs to be owned by the user. I still need to figure out how to achieve this. Currently, I am only able to access task logs after they are done, by executing the MOVE_FILES command. Any ideas are welcome.
- JVM reuse: Currently, I've only handled this with one JVM per task. Need to check the approach when JVM reuse is in the picture.
- Still need to work on cleanup and kill actions, as well as the distributed cache.

The code I have is very raw and needs lots of polishing even as a first draft. Will try to do so in a couple of days. Any comments on the approach so far?

Hemanth - you are right, for streaming/pipes stuff a second identity is needed. What some of the grid toolkits have done in the past is have some low-privilege user for running work; there isn't a 1:1 mapping of grid users to user accounts. Instead, the worker is allowed access to the relevant files of a user for a while, then at the end of the job that data goes away. This eliminates some of the account management problems, though it forces you to make sure that the worker doesn't have access to any old/shared data on the same filesystem.

From what I've heard, running under a security manager kills performance, which is pretty much a non-starter, especially given that we need unix-level security anyway. In our environment, running as the real user is important. If I run a job, I should not be able to look at or kill your job's data or tasks, even if we are sharing a machine. Of course this feature needs to be optional, since:

1. It requires that all cluster users have accounts on all slave nodes in the cluster.
2. It requires native code that may not work on all platforms.
3. It requires root access, which not all Hadoop admins have.

Hemanth, this sounds good, but from a security standpoint, I think that it would be better to make the tasks more specific. So something like:

```
CREATE_TASK_DIR owen task_20080101_0001_m_000001_1
MOVE_TASK_OUTPUT owen task_20080101_0001_m_000001_1
REMOVE_TASK_DIR owen task_20080101_0001_m_000001_1
```

It would also be good to have the task tracker root directories in a separate config file that can be owned by root. My goal is to make this executable as limited as possible. It should also block root as the user. What I do not want to see is having this work:

```
MOVE_FILES root /tmp/foo /etc/passwd
```

> Steve: have some low-privilege user for running work; there isn't a 1:1 mapping of grid users to user accounts
> Owen: running as the real user is important. If I run a job, I should not be able to look at or kill your job's data or tasks

Might it be possible to have a pool of low-privileged users, to remove the requirement that every user has an account on every machine? Or maybe that requirement's not that onerous, with PAM/LDAP?

The user who submits the job should be the user who runs the code on the compute nodes, due to issues that surround the environment outside Hadoop. For example, it is possible to submit a job that writes junk data to the low-priv user's home dir. Without tracking who submitted that job, ops would never know whom to go bonk on the head. ... and then there is streaming.

I can think of instances where it might be useful to have generic accounts run stuff. In those instances, it is still much better to have that handled outside Hadoop. [Either through setuid scripts, roles, sudo, kinit a special keytab prior to job submit, whatever.] Let the OS/tool/ops team/whatever deal with the accounting in those situations.

> CREATE_TASK_DIR owen task_20080101_0001_m_000001_1

Owen, sure this makes sense.
Just one point is that I might need the job id along with the task id, but that's still within the same spirit. I will also make the other changes you have suggested, like blocking the root user. I've also added a CLEANUP_TASK_DIR command to the executable, which is now able to clean up the directories after the task is completed. This is called from the CleanupQueue thread in the tasktracker.

I had an offline discussion with Devaraj regarding the implementation, and we also went over the impact this would have when clubbed with JVM reuse. A few comments from him that I am documenting here:

- Task directories under the tasktracker system or root directory, to which files (such as intermediate outputs) are copied after task completion, should be on the same disk as the original user's task directories. This is to prevent across-disk copies.
- Regarding the problem of serving log outputs which I've mentioned here, we discussed that one approach could be to have a command in the executable to read the data and return it to the TaskLogServlet on demand. This would happen reasonably rarely and does not affect any other functionality, so it seems like the performance overhead can be ignored.
- Another comment was to reduce the number of times the executable is launched. For e.g. without JVM reuse, I can set up the directories, run the task, and then move the outputs with a single launch of the executable. This is possible because all actions are per task, and there is one JVM per task. Hence the lifecycle of the task fits well with the setuid changes.

With JVM reuse though, the last point becomes problematic. We can easily set up the directories and move the output before and after the task. However, that needs to be done with a separate launch of the executable - three times, actually. The performance impact this would have (and whether it would offset the advantage of JVM reuse) is something to measure and see.
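Devaraj's first point (keep the destination on the same disk as the task directory, so the move is a rename rather than a cross-disk copy) amounts to prefix-matching against the configured mapred.local.dir entries. Here is a sketch; the taskTracker/system layout below is assumed for illustration, not taken from the actual patch.

```python
import os

def same_disk_destination(local_dirs, task_dir, job_id, task_id):
    """Return a destination under the same mapred.local.dir root as
    task_dir, so that moving intermediate output never crosses disks."""
    task_dir = os.path.normpath(task_dir)
    for root in local_dirs:
        root = os.path.normpath(root)
        if task_dir == root or task_dir.startswith(root + os.sep):
            # Daemon-owned area on the same disk as the user's task dir
            # (the 'taskTracker/system' name here is illustrative).
            return os.path.join(root, 'taskTracker', 'system', job_id, task_id)
    raise ValueError('task_dir %s is not under any mapred.local.dir' % task_dir)
```

Because source and destination share a filesystem root, the MOVE_FILES step can be an atomic rename instead of a byte copy, which is the whole point of the constraint.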
The main changes to get this to work were to correct the way I was figuring out the current task ids in the JvmManager class. Also added a KILL_TASK command. This will look as follows:

KILL_TASK user_name job_id task_attempt_id

This will be called from JvmRunner.kill(), which in turn is called whenever a TT gets a kill task action. Since the JVM process is running as the job owner (different from the TT), we can't directly destroy the JVM process. Instead, what I've done is the following:

- Write a (hidden) .pid file into the task directory when a task is executed. This is owned by the job owner and not readable by anyone else. The pid file contains the JVM's pid.
- When the JVM needs to be killed, we call the taskcontroller executable with the job_id and task_id.
- The taskcontroller drops privileges to the job owner, then reads the pid file and gets the pid of the jvm.
- Then the taskcontroller issues a kill(pid, SIGTERM) to kill the jvm.

Any concerns with this approach? Currently, other than distributed cache, all other aspects of the task life cycle are functioning. I'll probably upload a single writeup (as Arun had done for HADOOP-4348) that will capture all the information in the comments above for easy reference. And of course, follow that up with the first patch. :)

That sounds reasonable. Please ensure that the pid file is owned by the user given on the command line and has permissions of 600. This avoids someone leaving this file writable and having someone point it at a different process.

Thanks, Owen. I will take care of that.

Some discussion is required for handling distributed cache (HADOOP-4493). Firstly, localized files from distributed cache are not localized per job. Since anything can be passed through distributed cache, I think it should support the same level of access control as the rest of the files.
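The pid-file handshake described above can be sketched in Java. This is an illustrative, POSIX-only sketch (class and method names are mine, not the actual Hadoop ones); the real task-controller does the privilege drop and kill() in native code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Illustrative sketch of the pid-file handshake: the task JVM's pid is
// recorded in a hidden file with mode 600 (readable only by the job
// owner); the kill path later reads it back to decide which process to
// signal after dropping privileges.
public class PidFileSketch {

    // Write the JVM's pid into a hidden .pid file with mode 600, so only
    // the owner can read it or point it at a different process.
    static Path writePidFile(Path taskDir, long pid) throws IOException {
        Path pidFile = taskDir.resolve(".pid");
        Set<PosixFilePermission> mode600 =
            PosixFilePermissions.fromString("rw-------");
        Files.createFile(pidFile,
            PosixFilePermissions.asFileAttribute(mode600));
        Files.write(pidFile, Long.toString(pid).getBytes());
        return pidFile;
    }

    // Read the pid back; the real task-controller would then issue
    // kill(pid, SIGTERM) as the job owner.
    static long readPid(Path pidFile) throws IOException {
        return Long.parseLong(new String(Files.readAllBytes(pidFile)).trim());
    }

    public static void main(String[] args) throws IOException {
        Path taskDir = Files.createTempDirectory("task_local");
        Path pidFile = writePidFile(taskDir, 12345L);
        System.out.println(Files.getPosixFilePermissions(pidFile)
            .equals(PosixFilePermissions.fromString("rw-------")));
        System.out.println(readPid(pidFile));
    }
}
```

Note that mode 600 is safe under any umask here, since the umask can only clear permission bits that 600 does not grant to group or others anyway.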
That is, they should be changed to be localized per job and subject to the same access control mechanisms we are using for the rest of the files - like output directories etc. I don't think this is a big impact for users, as they can't assume the cache to contain the files they want on the nodes where the task is running. However, from the system perspective, if a lot of users (say, working on the same project that requires the same data files) want to share this across multiple jobs, we would be copying only once per node, saving both space and time. If we modify this to be localized per job, we could lose that advantage, no? Any thoughts on this trade-off?

I had an offline discussion with Sameer about how to get this patch in. To make it easier for reviewing, maybe it makes sense to split the task up into multiple sub-tasks. At least three that are identified are:

- Launch and kill tasks (this would involve RUN_TASK and KILL_TASK commands)
- Handle local data securely (this would involve SETUP_TASK and MOVE_TASK_OUTPUT and CLEANUP_TASK commands)
- Handle distributed cache.

In order to get a working launch and kill tasks patch though, the file and directory permissions will need to be opened up to allow access to all users. Each of the other patches will make it more secure. Please note that we have discussed the approach of how we will address directory and file permissions (such as intermediate outputs) in this JIRA already. This proposal is only to make it simpler to get some incremental patches in. Would this work? If yes, I will use this JIRA to handle the first of the three tasks, then use HADOOP-4491 and HADOOP-4493 for the others.

It would also be good to have the task tracker root directories in a separate config file that can be owned by root.

We are taking care of this point in the setuid executable. One question is to determine how the location of this secure config file will be known to the executable.
Following are our options:

Option 1: Read from the environment variable HADOOP_CONF_DIR
Option 2: Take a command line option to specify the location of the file.
Option 3: Have it as a build-time configuration parameter, and encode it into the executable (for instance, pass it as an autoconf option).

Options 1 and 2 may allow users to launch the executable pointing to some custom path. Option 3 would completely avoid this, and make it more secure. For the sake of deployment, I think the setuid executable should be built using a separate ant target, as it would need to be set up as owned by root etc. So, maybe it is easy to do Option 3 in that case. If we decide to go with one of the other two options, we should mandate additional checks to make sure that the configuration file is owned by the root user, as Owen mentioned. Any comments?

The attached patch implements changes in the tasktracker to launch tasks using the setuid executable defined in HADOOP-4930. By doing so, it runs tasks as job owners. The CLI for the setuid exe is:

task-controller <user-name> <command-enum-value> <job-id> <task-id> <tasktracker-root>

As mentioned in comments above, this patch only handles launching and killing of tasks, and does not handle file and directory permissions securely. In fact, it opens up the permissions so that both the tasktracker and task can share files and directories. However, this change is only done when the feature is enabled, and does not affect the default Hadoop behavior. When HADOOP-4491 and other issues are fixed, these will be replaced with secure permissions.

The changes in the patch include:

- A TaskController class that defines abstract methods for launching and killing tasks
- A DefaultTaskController into which a little code from JvmManager has been moved
- A LinuxTaskController which implements the methods by calling the setuid executable of HADOOP-4930.
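The TaskController split described above can be modeled roughly as follows. This is a simplified sketch, not the actual Hadoop classes: the method names and the config-resolution helper are illustrative only.

```java
// Simplified model of the TaskController plug-in point: the tasktracker
// codes against the abstract type, and a configuration value selects the
// concrete implementation. Names here are illustrative.
abstract class TaskController {
    abstract void launchTask(String user, String taskId);
    abstract void killTask(String user, String taskId);
}

// Default behavior: run the task as the tasktracker's own user.
class DefaultTaskController extends TaskController {
    void launchTask(String user, String taskId) {
        System.out.println("launching " + taskId + " as tasktracker user");
    }
    void killTask(String user, String taskId) {
        System.out.println("killing " + taskId + " directly");
    }
}

// Linux behavior: delegate to the setuid task-controller executable,
// which switches to the job owner before acting.
class LinuxTaskController extends TaskController {
    void launchTask(String user, String taskId) {
        // would exec: task-controller <user> RUN_TASK <job-id> <task-id> ...
        System.out.println("launching " + taskId + " as " + user);
    }
    void killTask(String user, String taskId) {
        System.out.println("killing " + taskId + " as " + user);
    }
}

public class TaskControllerSketch {
    // Mimics resolving a class name from configuration, in the spirit of
    // mapred.task.tracker.task-controller.
    static TaskController forName(String clazz) {
        return clazz.endsWith("LinuxTaskController")
            ? new LinuxTaskController() : new DefaultTaskController();
    }

    public static void main(String[] args) {
        TaskController tc = forName("LinuxTaskController");
        tc.launchTask("owen", "attempt_200901010000_0001_m_000001_0");
    }
}
```

The point of the abstraction is that JvmManager and the tasktracker never need to know which variant is in effect.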
- A new configuration variable mapred.task.tracker.task-controller to define the specific type of TaskController to use. Defaults to DefaultTaskController.

Tested this on a single-node cluster, along with the setuid executable of HADOOP-4930. Will follow up with testing on larger clusters. I request a review for the same.

A lot of discussion has happened in various comments on this JIRA. The attached document collates all of them. I hope this will make it easier to follow the approach and review the changes.

I attached a new patch that is more comprehensive. All changes from the previous patch still hold good. This one adds the correct permissions for all relevant files and directories, except distributed cache. The previous patch only set relevant permissions on the task and log cache directories for all users, with the intent that tasks running as any user should be able to create and use other files and directories under them. This requirement still applies. However, there are other files and directories whose access needs to be adjusted too. The new patch addresses these changes:

- It sets permissions on the job-related jar files and other directories allowing access to everyone.
- It sets read and execute permissions on directory paths leading up to the task/job cache and log directories. For example, if a task cache directory is created under ${mapred.local.dir}/taskTracker/jobcache, all path components are given read and execute (and no write) access for all users. This is required for looking up paths and locating/reading files created by the tasktracker.

Both the changes above are required in the future as well. Except then, the permission string would be more restrictive (disallowing access to group and others).

The previous patch was working because of a subtle behavior in setuid. On the systems where we tested, the umask was set such that read and execute permissions were provided to group by default.
So, any of the job files created by the tasktracker had read and execute access for the group to which the tasktracker user belonged. When the setuid executable switches users, it does not clear the supplementary group information of the launcher. Hence, the new process running as the job owner still had access to the groups to which the tasktracker belonged, and hence worked. Again, in HADOOP-4491, we propose to remove all access for the group ownership also, and hence this will not be an issue.

The latest patch also adds correct permissions to files localized as part of the distributed cache. In order to do this, I introduced a new API in distributed cache to indicate whether the files were localized freshly (i.e. as part of the current task's localization), or whether they already exist in the cache. I use this API to avoid setting permissions on the same cache files repeatedly for each task. If there's a better way to do this, I would be glad to know it.

All changes made in the previous patch still hold good, except for minor refactoring. This patch is now complete to the best of my knowledge. Please do offer your comments.

Hemanth, this is looking good. Some comments:

- We should use mapred.local.dir instead of hadoop.tmp.dir in LinuxTaskController.
- Use Path's methods instead of String manipulation for all path-related manipulations.
- Pass mode, user/group to DistributedCache rather than rely on the newly introduced DistributedCache.isFreshlyLoaded, which is then unnecessary.
- Move setting up of JVM-specific files, e.g. the task's log directory, to TaskController.launchJVM.

I've updated the patch to trunk, incorporating most of Arun's comments above. Arun, can you please take a look?

We should use mapred.local.dir instead of hadoop.tmp.dir in LinuxTaskController.

Done.

Use Path's methods instead of String manipulation for all path-related manipulations.

Done.
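The "read and execute along the path" requirement described above can be sketched like this. This is an illustrative sketch using java.nio rather than the chmod-based FileUtil route the patch actually takes, and the method names are mine:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.EnumSet;
import java.util.Set;

// Sketch of opening up read+execute (never write) on every directory
// component from a base directory down to a leaf, so a task running as
// another user can traverse to files the tasktracker created.
public class PathPermsSketch {

    static void openPathComponents(Path base, Path leaf) throws IOException {
        Path rel = base.relativize(leaf);
        Path cur = base;
        for (Path component : rel) {
            cur = cur.resolve(component);
            Set<PosixFilePermission> perms =
                Files.getPosixFilePermissions(cur);
            // Add r-x for group and others; write access is never granted.
            perms.addAll(EnumSet.of(
                PosixFilePermission.GROUP_READ,
                PosixFilePermission.GROUP_EXECUTE,
                PosixFilePermission.OTHERS_READ,
                PosixFilePermission.OTHERS_EXECUTE));
            Files.setPosixFilePermissions(cur, perms);
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for ${mapred.local.dir}; temp dirs are created 700,
        // so traversal would fail without the fix-up.
        Path base = Files.createTempDirectory("mapred_local");
        Path leaf = Files.createDirectories(
            base.resolve("taskTracker/jobcache/job_0001"));
        openPathComponents(base, leaf);
        System.out.println(Files.getPosixFilePermissions(leaf)
            .contains(PosixFilePermission.OTHERS_EXECUTE));
    }
}
```

In the tighter HADOOP-4491 scheme mentioned above, the added bits would be restricted rather than granted to everyone.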
Pass mode, user/group to DistributedCache rather than rely on the newly introduced DistributedCache.isFreshlyLoaded, which is then unnecessary.

Done. I've added a new overloaded API that passes the information to DistributedCache. Just to keep options open, I've defined a new public class DistributedCacheFileAccessInfo - a simple class that can be used to define permissions and ownership information for localized files in DistributedCache. Can you take a specific look at this, and let me know if it looks OK?

Move setting up of JVM-specific files, e.g. the task's log directory, to TaskController.launchJVM

This is the only one I've not done. It was not very clear what information is necessary at launch time. For example, if there are some localized files under the task cache directory that need to be loaded at launch time, we'll need permissions for these also. In general, it seemed a little risky to launch the JVM without giving full access to all jars etc., even if the Task will only start running later. So, I've left this as is. I think the main concern here was about the special check I had in JvmManager where I was avoiding setting the permissions again when getting the task to launch. This seems a simple enough check, and I've documented the rationale in code. Can you verify this again, and let me know your thoughts?

Attached a hopefully last version of the patch. This one is extensively tested and has fixed a couple of bugs related to incorrect assumptions about multiple mapred local directories. Thanks to Sreekanth and Amar for help in testing this. We've run randomwriter, sort with and without JVM reuse, and also streaming and using distributed cache.

The test-patch results are showing a -1 on release audit, which I've written to core-dev about. I am not sure why a -1 is coming; will continue to debug that. There's also a -1 on tests. It is difficult to write unit tests for this patch since it requires support for multiple users.
There are a lot of log statements in the patch which should be removed before commit. I am attaching this with the hope that someone can take a look at the changes. Arun?

Merged with trunk. It was broken by a commit done last night. Also removed extraneous logs. Running ant test.

test-patch gives the following output: 823 release audit warnings (more than the trunk's current 820 warnings). The -1 on release audit is because jdiff has generated some changes to public classes and packages (DistributedCache, FileUtil and the filecache package). The release audit seems to be flagging these new jdiff-changed files as warnings. I've cross-checked that the ASF license header is included for all new files I've put up. The -1 on tests is as explained above. There's no change in functionality to the last patch I uploaded. Only a merge. ant test passes locally.

-1 overall. Here are the results of testing the latest attachment against trunk revision 7413
821 release audit warnings (more than the trunk's current 819 warnings).
+1 core tests. The patch passed core unit tests.
-1 contrib tests. The patch failed contrib unit tests.
Test results: Release audit warnings: Findbugs warnings: Checkstyle results: Console output: This message is automatically generated.

The attached file fixes the streaming test failures. I'd made changes to FileUtil.java to run the chmod command using the ShellCommandExecutor. Previously this was using the Process class. There was no reason to change it, so I moved back to using Process and the tests passed locally. I thought ant test runs contrib tests as well, which is why I missed these on the first patch.

-1 overall. Here are the results of testing the latest attachment against trunk revision 7417 in Chukwa and is the same as HADOOP-5172.

Some comments after a discussion with Hemanth:

- We agree that DistributedCacheFileAccessInfo isn't necessary.
We'll wait for HADOOP-4493 to fix access control to the DistributedCache, until which we will just allow requisite access to all files in the cache (755).
- FileUtil.setPermissionsForPathComponents and opening of permissions to mapred.local.dir via LinuxTaskController.setConf bother me. I see 4 different hooks that we need to provide for setup/cleanup:
  - per-tracker (i.e. at TaskTracker initialization, e.g. setting up mapred.local.dir)
  - per-job (job jars)
  - per-jvm (task log files)
  - per-task
  Of course we might not need all the cleanup hooks.
- The TaskController itself should be stateless; the above hooks should be plugged in at appropriate places, e.g. TaskTracker.localizeJob should call TaskController.initializeJob rather than having LinuxTaskController maintaining state as in the patch.

Thanks for the review Arun. I had a discussion with Sreekanth about the changes, and we are proposing the following:

- Introduce a TaskTracker.initializeSystemDirs. This will create $mapred.local.dir/taskTracker/jobCache/, $mapred.local.dir/taskTracker/archives, and $hadoop.log.dir/userlogs on all relevant disks. Currently, as per Arun's comments, we'll have this API in TaskTracker, which will be called at Tracker initialization time. If it is felt that this should be per TaskController, then we can easily move this to the TaskController API. I think this may need 777 on the $mapred.local.dir/taskTracker/jobCache/ directory currently, because the files would be created both by the task and the tracker - for example, the task could create the output directories on a new disk which has not yet been touched by the tracker.
- Introduce a TaskController.initializeJob. This will be called from TaskTracker.localizeJob, with the jobid as parameter. This will set up the access for the $mapred.local.dir/taskTracker/jobCache/jobid directories on all disks which have been touched by localization.
- Modify TaskController.launchTaskJVM to set up permissions for the log dir and the pid dir associated with that task. This will remove the call to initializeTask from the JvmManager.runChild API.
- Modify TaskController.initializeTask to set up permissions for the log dir, pid dir, and task cache dir for the task. There is no need to set up things for the job, because it's been done in initializeJob already. We will need to repeat the permission setting for the log dir and pid dir.
- Modify DistributedCache.localizeCache to set up permissions for the localized files. We propose to recursively set up 755 permissions (hardcoded) for all files under the $mapred.local.dir/taskTracker/archive/ directory for now. This might repeatedly set up permissions for files that are already correctly set up. However, it will keep things simple. If there's a performance issue, it is easy to address it, by setting permissions only for the files being localized, and by walking up their parent paths. Please let me know if this seems a bad choice.

The above changes will mean we can remove:
- DistributedCacheFileAccessInfo
- FileUtil.setPermissionsForPathComponents
- TaskController.cleanup
- The runningJobs state maintained by LinuxTaskController.

Arun, does this tie in with your expectations?

If it is felt that this should be per TaskController, then we can easily move this to the TaskController API.

On second thoughts - I'd go there right away, since it means that we don't need to open it up to 777 for mapred.local.dir if the DefaultTaskController is in effect.

Introduce a TaskController.initializeJob [...] +1
Modify TaskController.launchTaskJVM to set up permissions [...] +1
Modify TaskController.initializeTask to set up permissions for [...] +1
Modify DistributedCache.localizeCache to set up permissions for the localized files. [...] +1

We were running with this patch and had problems with streaming scripts being set to 666.
This patch fixes that by making all the unjarred files readable and executable by all.

Mahadev, thanks for debugging this issue. The problem was that the permissions being used in LinuxTaskController were making an assumption that files submitted to the cluster, say via streaming jobs, or in distributed cache, will already be executable if they need to be executed as part of the tasks. However, there seem to be scenarios in which this is not mandated, and the framework handles this in the default case (when user tasks are run as the tasktracker itself). The right fix is to modify the permissions without this assumption. If we do that, I guess the change in RunJar is not necessary. One more concern is that the RunJar change would fix the problem only for jar files, but not for other types of archives. We are testing this (with the failed streaming test case) with the modified permissions in LinuxTaskController. If that works, I suggest we just use the new version of the LinuxTaskController itself. Again, thanks for debugging this issue!

Attaching patch incorporating comments from Hemanth and Arun. The only changes which move away from the above are:
- No separate permissions are set for pid directories, as they are not being used in the current trunk.
- Assumption that along the path to the jobcache directory and log directory, the user who runs the task would have read and execute permission.

Fixing a findbugs warning. Also corrected a mistake I had made in localizeJob in LinuxTaskController.

One more change required. I've noticed that when the JVM Manager calls killTaskJVM, the current working directory for the LinuxTaskController is set to a task attempt directory. This will fail, because this directory will no longer exist. While this is not creating problems, as well-behaved JVMs will exit themselves, it is not something to be relied on.
The relevant piece of code is this:

  private ShellCommandExecutor buildTaskControllerExecutor(TaskCommands command,
      String userName, List<String> cmdArgs, JvmEnv env) throws IOException {
    ...
    if (env != null) {
      shExec = new ShellCommandExecutor(taskControllerCmd, env.workDir, env.env);
    } else {
      shExec = new ShellCommandExecutor(taskControllerCmd);
    }
  }

Setting env.workDir will set the working directory to that particular task attempt directory, which may no longer exist. I get the following error when this happens:

2009-03-13 10:39:52,750 WARN org.apache.hadoop.mapred.LinuxTaskController: IOException in killing task: Cannot run program "/path/to/bin/task-controller" (in directory "/path/to/mapred-local/taskTracker/jobcache/job_200903130908_0051/attempt_200903130908_0051_m_001012_1000/work"): java.io.IOException: error=2, No such file or directory

Attaching patch incorporating Hemanth's comment. Tested the patch and the exception was not thrown.

Found a problem while this patch was being tested. When a TT was being re-inited by the JT after it was lost for some time, the TT got the following NPE and crashed completely:

2009-03-22 22:22:36,155 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.lang.NullPointerException
  at org.apache.hadoop.mapred.LinuxTaskController.buildTaskCommandArgs(LinuxTaskController.java:183)
  at org.apache.hadoop.mapred.LinuxTaskController.killTaskJVM(LinuxTaskController.java:237)
  at org.apache.hadoop.mapred.JvmManager$JvmManagerForType$JvmRunner.kill(JvmManager.java:401)
  at org.apache.hadoop.mapred.JvmManager$JvmManagerForType.stop(JvmManager.java:211)
  at org.apache.hadoop.mapred.JvmManager.stop(JvmManager.java:61)
  at org.apache.hadoop.mapred.TaskTracker.close(TaskTracker.java:925)
  at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:1815)
  at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2899)

Attaching a patch which resolves the NullPointerException raised when the TT is re-inited.
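One way to guard against the stale working directory is to only pass it to the shell executor if it still exists. This is a sketch of the kind of check a fix could apply, not the committed code; the class and method names are illustrative.

```java
import java.io.File;

// Sketch of guarding the kill path against a working directory that has
// already been deleted: only hand a working dir to the shell executor if
// it still exists, otherwise fall back to inheriting the tasktracker's
// own current directory.
public class WorkDirGuardSketch {

    static File chooseWorkDir(File taskWorkDir) {
        if (taskWorkDir != null && taskWorkDir.isDirectory()) {
            return taskWorkDir;   // task attempt dir is still present
        }
        return null;              // deleted already: let the child inherit cwd
    }

    public static void main(String[] args) {
        File gone = new File("/tmp/does-not-exist/attempt_0001_m_000001_0");
        System.out.println(chooseWorkDir(gone) == null);
    }
}
```

With this guard, a kill issued after task cleanup no longer trips the "No such file or directory" error shown above.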
The problem was with a race condition in which the JvmRunner was issued a kill before it launched the task. The patch now checks if the initial context is set, i.e. the task is launched; if not, it merely updates the data structures.

Attaching patch modifying a minor thing, to check the jvm env member of TaskControllerContext. The JVM env needs to be set for both controllers. So the check has been moved into JvmRunner.

Attaching latest patch with the following changes:
- Merged the patch with the trunk.
- Added documentation to the cluster_setup page. Created a subsection in Site configuration to list task controller configuration after real-world cluster configuration.

Attaching the cluster setup document pdf for checking the output and text of the documentation.

Latest merged patch. Mixed up merging in the last patch. Attaching new patch which corrects the issue.

Some comments:
- Use getLocalJobDir in LinuxTaskController.localizeJob
- Whenever mkdir or mkdirs fails, we should continue from loops.
- Changes in TaskRunner seem unnecessary.
- Changes in DistributedCache to pass the baseDir seem unnecessary. Note that localizeCache already takes a CacheStatus object that has the baseDir.
- This comment is not incorporated: "Modify TaskController.launchTaskJVM to set up permissions for the log dir and the pid dir associated with that task. This will remove the call to initializeTask from the JvmManager.runChild API." I think this can be done by calling setup*FileAccess from launchTaskJVM
- localizeJob can be called initializeJob
- In setupTaskCacheFileAccess, we are setting permissions recursively from the job directory. But this is what we do in LinuxTaskController.localizeJob. So we should be setting permissions from the taskCacheDirectory only.
- writeCommand should ideally check for the existence of the file before it tries to change permissions in the finally clause.
- JvmManagerForType.getTaskForJvm() - "Incase of JVM reuse, tasks returned previously launched" - some grammatical mistake here.
- In the kill part, I think it would be nice to add an info-level log message when we are not doing the kill - both in JVM manager and in LinuxTaskController.
- TaskLog.getLogDir() - Make this getUserLogDir(), and the javadoc need not mention TaskControllers. It should be generic documentation stating that it returns the base location for the user logs.
- mapred-defaults.xml should have the config variable for the task controller along with documentation.
- I think we should first describe the use case the TaskControllers are trying to solve - as in the requirement to run tasks as the job owner.
- It would be nice to give a little description of how the LinuxTaskController works - just saying something like: we use a setuid executable, and the tasktracker uses this exe to launch and kill tasks.
- We should definitely mention that until other JIRAs like H-4491 etc. are fixed, we open up permissions on the intermediate, localized and log files in the LinuxTaskController case.
- Making the executable a setuid exe is a deployment step. It is currently added as a build step.
- The path to the taskcontroller cfg - mention that this should be the path on the cluster nodes where the deployment of the taskcontroller.cfg file will happen.
- We should also mention that the LinuxTaskController is currently supported only on Linux (though it sounds obvious).
- Should we mention permissions regarding mapred.local.dir and hadoop.log.dir (should be 777, and the path leading up to them be 755)?

Attaching patch addressing review comments by Hemanth. Uploading built pdf documentation for the change made by the patch for review.

- Removing an unused variable in JVM Manager.

Code changes look fine to me. Looking at the documentation changes.

Attaching new patch with documentation changes.

overall. Here are the results of testing the latest attachment against trunk revision 763247. Failures are not related to the patch.
Attaching a new patch which modifies the kill task part from the previous patch. This passes on the configuration value mapred.tasktracker.tasks.sleeptime-before-sigkill to the task-controller binary. The current trunk version of the binary would ignore this value. Once the fix for HADOOP-5420 goes in, the binary would sleep for that interval after issuing SIGTERM, then issue SIGKILL to the child task. Running thro' Hudson.

After offline discussion with Hemanth and Vinod, the changes to LinuxTaskController for resolving HADOOP-5420 can be addressed in a different JIRA. Reverting to the previous version of the patch.

Attaching patch fixing an issue found while testing on large clusters while task trackers are re-inited.

- Changed FileUtil.chmod() to use ShellCommandExecutor instead of building the process directly and executing it.

Attaching patch with the following changes:
- FileUtil now uses ShellCommandExecutor for chmod operations. It suppresses the IOException thrown while doing chmod and logs it if debug is enabled.
- Check before TaskController.initializeTask() in JvmManager.getTaskForJVM() to ensure that there is no double initialization of the same task.

Check before TaskController.initializeTask() in JvmManager.getTaskForJVM() to ensure that there is no double initialization of the same task.

I think we should move the call to initializeTask into JvmRunner.runChild(). This way it is more explicit why the check is required. Other than this, +1. Please make this PA.

Making the initializeTask call explicit in runChild.

652 release audit warnings (more than the trunk's current 651 warnings). [exec] All core and contrib tests passed on the local machine except TestQueueCapacities, which is being handled in a separate JIRA. The release audit warning is due to a new public API in FileUtil. This is expected.

This patch has been tested extensively manually, and a version of this patch has also been deployed in a production environment for a while now.
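The FileUtil change described above (running chmod through a shell command and suppressing the IOException rather than propagating it) might look roughly like this. This is a hedged sketch, not the actual FileUtil code; the class name and logging style are mine.

```java
import java.io.IOException;

// Sketch of running chmod through an external command and suppressing
// failures (logging instead of throwing), so one bad path does not abort
// the rest of task localization.
public class ChmodSketch {

    static boolean chmod(String mode, String path) {
        try {
            Process p = new ProcessBuilder("chmod", mode, path)
                .redirectErrorStream(true)
                .start();
            int rc = p.waitFor();
            return rc == 0;
        } catch (IOException | InterruptedException e) {
            // Suppress: log and carry on, mirroring the behavior the
            // patch describes for FileUtil.chmod().
            System.err.println("chmod failed for " + path + ": " + e);
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        java.nio.file.Path f =
            java.nio.file.Files.createTempFile("chmod-sketch", ".tmp");
        System.out.println(chmod("644", f.toString()));     // existing file
        System.out.println(chmod("644", "/no/such/file"));  // nonzero exit
    }
}
```

The boolean return lets callers decide whether a failed chmod is fatal for their particular directory, which is the flexibility the logged-and-suppressed approach is after.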
There is a plan to develop unit tests as a follow-up. Given these, I am committing this patch, as it is blocking some other JIRAs in M/R. I just committed this. Thanks, Sreekanth!

Integrated in Hadoop-trunk #811 (See)

Attached example for an earlier version, not to be committed. Another patch is needed if applying this to the 0.20 branch. Attaching an example patch for branch 20, not to be committed. Attaching new Yahoo! Distribution patch.

Editorial pass over all release notes prior to publication of 0.21. Subtask. Duplicate of HADOOP-4451.
https://issues.apache.org/jira/browse/HADOOP-4490?focusedCommentId=12653267&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
SAMPLES REPOSITORY

This is a repository of sample code for discussion in forums and groups. Anyone interested can clone, build and use this code, but I warn that this is provided as is and without any guarantee. In general this would be code from a question in the forums that I copy to the repo and work on, to solve the question or to illustrate a point about programming. I tried to split the forums and groups by using the namespace, according to where the question was asked, as the name of the user or the subject being answered. Often there are tests associated with the code in the usual location, src/tests/java.

Building the project

You can build the project by cloning it and running Maven directly in the shell, or you can import it with the IDE of your choice through the Maven integration.
https://bitbucket.org/notivago/samples
CLONE(2)

(See feature_test_macros(7)):

clone():
    Since glibc 2.14: _GNU_SOURCE
    Before glibc 2.14: _BSD_SOURCE || _SVID_SOURCE /* _GNU_SOURCE also suffices */

[...] function fn(arg). (This differs from fork(2), where execution continues in the child from the point of the fork(2) call.) The fn argument is a pointer to a function that is called by the child process at the beginning of its execution. The arg argument is passed to the fn function. When the fn(arg) function application returns, the child process terminates. [...]

CLONE_FILES (since Linux 2.0)
    If CLONE_FILES is set, the calling process and the child process share the same file descriptor table. Any file descriptor [...] corresponding file descriptors in the calling process.) Subsequent operations that open or close file descriptors, or change file descriptor flags, performed by either the calling process or the child process do not affect the other process. [...] performed by either the calling process or the child process also affects the other process. If CLONE_FS is not set, the child process works on a copy of the filesystem [...]

[...] requires that the kernel be configured with the CONFIG_SYSVIPC and CONFIG_IPC_NS options and that the process be privileged (CAP_SYS_ADMIN). [...] containers.

    A network namespace provides an isolated view of the networking stack (network device interfaces, IPv4 and IPv6 protocol stacks, IP routing tables, firewall rules, the /proc/net and /sys/class/net directory trees, sockets, etc.). A physical network device can live in exactly one network namespace. A virtual network device ("veth") pair provides a pipe-like abstraction that can be used to create tunnels between network namespaces [...] containers.

    A PID namespace provides an isolated environment for PIDs: PIDs in a new namespace start at 1, somewhat like a standalone system, [...]; analogously, if the parent PID namespace is itself the child of another PID namespace, then processes in the child and parent PID namespaces will both be visible in the grandparent PID namespace. Conversely, the processes in the "child" PID namespace [...]

CLONE_SIGHAND (since Linux 2.2)
    If CLONE_SIGHAND is set, the calling process and the child process share the same table of signal handlers. If the calling process or child process calls sigaction(2) to change the behavior [...] separate undo list, which is initially empty.

CLONE_THREAD (since Linux 2.4.0-test8)
    If CLONE_THREAD is set, the child is placed in the same thread group as the calling process. To make the remainder of the discussion [...] (system-wide) unique thread IDs (TID). A new thread's TID is available [...] signal. If any of the threads in a thread group performs an execve(2), then all threads other than the thread group leader are terminated [...] handled signal is delivered to a thread, then it will affect [...]. The raw system call interface [...]

blackfin, m68k, and sparc
    The argument-passing conventions on blackfin, m68k, and sparc are different [...] caller's thread of execution. On failure, -1 is returned in the caller's context, no child process will be created, and errno will be set appropriately.

[...] configured with the CONFIG_SYSVIPC and CONFIG_IPC_NS options.
EINVAL CLONE_NEWNET was specified in flags, but the kernel was not configured with the CONFIG_NET_NS option.
EINVAL CLONE_NEWPID was specified in flags, but the kernel was not configured with the CONFIG_PID_NS option.
EINVAL CLONE_NEWUTS was specified in flags, but the kernel was not configured [...]

clone() [...] circumstances. In particular, if a signal is delivered to the child immediately [...] created); [...]

EXAMPLE
    Create a child that executes in a separate UTS namespace [...] differs in the UTS namespaces of the parent and child. For an example of the use of this program, see setns(2).

[...] capabilities(7), pthreads(7)

COLOPHON
    This page is part of release 3.54 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at [...].

Linux 2013-04-16 CLONE(2)
http://www.polarhome.com/service/man/?qf=clone&tf=2&of=ElementaryOS&sf=
CC-MAIN-2019-22
refinedweb
666
71.95
- CSS is seriously broken - Sort by variable - XML MULTI-TABLES parsing and mySQL - xslt alone or xslt/java for static site? - Dataset to Database - What am I missing re: very simple XML->XML transform - Selecting subelements from a node-set - fop - problem.... - why SOM of msxml4 can't load this schema? - [Ant] Merge two xml files - Generate xml node tree from xmi (xml) help - M$oft patents the wheel - RSS newsfeed - XML Schema enumeration question - 'select' element on XForms - "Nesting" XML documents using DOM - multiplie grouping - List of effective pages. xsl-fo - Recursively creating a DOM Tree - Trying to locate a standard XML schema - JAXB Reference Implementation Bug ? - Looking for a XML project - Links and XLinks?? - Determining QName from a URIref in RDFS - Use Parameter value in match or select - 'Parameterized XPath expressions ?' - how to approach an XSLT task - pb schema XML - XSD question (same attributes, different elements...) - docbook2html alternative for XML - Schema of xml - Xpath apache xerces/xalan dom3 - escape character in XML text string - A question about XPP - page break priority - XSL Transform trouble (XML newbie) - xsl:fo retrieve width and height of an imag - targetNamespace/import conflict - (Non)deterministic XML Schema - New to XML - Order of nodes in xsl:for-each nodeset (Xalan-J) - XSL FO Course - How to create a java "extends"/"implements" clause with xsd file and castor? - Is it just 'bad form'? 
- January 29 (web services security) panel discussion at San Diego Supercomputer Center - Re: CORENA launches Life*TYPE - Official press release - browser does not transform xml when server through web server - position() doesn't return expected value - Newbie again: position() doesn't return expected value - equal sign in value portion of xml tag - Newbie: translate() within <template name=""> - CORENA launches Life*TYPE - Official press release - Speller for XML files - Newbie problem?: XML-Schema Instance Validation (xerces 2.6.0, confusingcvc-complex-type errors) - [xsl:fo] : spaces underlining issues. - XSL substitution - accessing documentation elements/unhandled attributes via the som - problem with the unicode character 0x2a2b29 - output tag-name of a XMLnode with xerces - XSD: Extenstion, substituion, and recursion, Oh My! - remove header comment from Xerces output - Unknown code page - Documention for xerces-c 1.6.0? - Q: tool recommendation for large XML Schema - XMLSpy 5 evaluation keycode - correct use of xsi:schemaLocation - Strange (to me) import statements - XML schema uniqueness constraints - FOP Dynamic Heading Size? - Xerces 2.6 and validation of XML schema constraints - model to xml - XSD: Character in regular expression - XSL question-- walking a hierarchy - referencing another XSD file within an XSD file - January Meeting of the Washington Area XML Users Group - Values to Schema Data Types - escape colon in xml tag names - Are there any XPath parsers that generate XPath trees? - XML: Collecting ancestors - looking for a tool - SOAP & WSDL & Fail-over - HTML to XML - Problem converting XML to HTML using XSLT on Netscape 7.0 - Parsing Raw filter text data using XML - accessing previous context inside apply-templates? 
- Global variables and recursive calls - When using Xerces, how can I validate the text in createTextNode() - XML-Stylesheet declaration in DTD causing issues - Validating XML document without using "xsi:schemaLocation" attribute - Docbook XML: Image in the Page Header - XPath, SUm of Attributes for certain Elemnets - DocBook - numbering sections - newbie question - Xerces Perl - Don't want DOCTYPE - xslt - XML Schema: disjunctive attributes for element possible ? - problems when installing Protege - Using XSLT and XPath for graph data structure processing? - Using XSLT and XPath for graph data structure processing? - How do I append and modify an XML file on-line? - XML & XSLT namespaces - Reading XML in .Net - Newbie: how to make parser find a schema - RTL in XML - JAXB Components from namespace are not referenceable from schema document - XAdES examples/information - XML and VML Job - Represent deleted data with XML - Problem in parsing xml document with japanese text - Command line XML - running eXist 1.0b1 under windows - XML schemas, including arbitrary elements - [ANN] xmlBlueprint 2.1 released - Newbie: parsing and validation - XQuery APIs -- how do you identify the XML source - ANN: NNDef XML toolkit for Neural Networks now available for download - xsd: any complex type? - XML and CSS -> Blank Screen :( - New release of ExamXML visual tool for comparing and merging XML files. - How to setup apache web server to display xml file? - Q: localfile as validating scheme (anyURI problem) - A form in an XML-doc - XSL-FO? - schema compatibility - <xsl:when help with xslt - how can I do this ? - XSL and QNames - Q: JAXB and default in the element tag (plus some other Q's) - HR-XML NAICS element - Can a document contain multiple - how to search in a column only ??? (as in a database) - how to search in a column only ??? (as in a database) - XML? Big deal... 
I just don't get it - XML on MAC - XSL - skipping tags - [ANN] GEFEG Creates Free XML Upgrade Program for Foresight and Edifecs EDI Customers - Inconsistent FOP behavior ? - Big Picture General Question about XML SOAP and WSDL Web Services - How to use "<" character inside attribute? - Translating characters and enties - a parse error when using the xml4c. - xslt to dynamically re-namespace elements - How can I view an wml file? - XML Catalogs on Linux - XML Schema namespace prefix? - Tricky XSL question (?) - Defaulting empty XML elements - XML Vocabulary for PostScript - Blank Data Fields, How2 Preserve Tbl Columns?, use - XForms and Schemas - match muliple header records to associated detail records - creating a new empty document with DOM - branching XSLT tree - looping templates - invalid characters in tag name - Writing to XML file - XML Transaction - Newbie question - Where is the SOAP 1.1 Spec? - books! - Can Altova mapforce be used to map fields between two RDF (Resource Definition Framework) XML files? - XSD alphanumerical, reg_exp - attrybutes from higher elements XSLT - MINIMAL xml parser - Pass file name parameter to xsl stylesheet - xsl:choose - XML Newbie - C structures - Call for Papers: DATAX Workshop 2004 - Whitespace in Canonicalized XML - paterns for phones' numbers - Public ID for TEI Lite - stack values in xsl or substract element from tree - eliminate hard-coded file names in xsl - Tweak xsl to eliminate duplicate data and blank lines - creating composite XML documents - Search and Replace Text in XML file - new XML query language - validate xml in java - TEILite CSS Stylesheet - Java XSLT Tranformer: DOMSource vs. StreamSource problem - Problem with xsl:call-template - newbie:importance of XML? - XSL e Sort - fop 0.20.5: How to get a table and text in one line? - New XML facilities for VB6 - XSLT chains - to pull or to push? 
- Merge two xml files on common date field and write out tab-delimited file - [ANN] XML Standards Library 2.0 : Updated 2003-12-20 - Xml2Table update: Local files support - XSL-FO: How to wrapping text on an image? - XSLT: outputting element contents without containing tags - xml/vb/java common date format - CSS problem in IE - Throwing exceptions from with XSL - Generating multiple XHTML pages from an XML file - XML2Table update (Excel supported) - difficult xsl transform clarification - difficult xsl transform - page-position='last' is not yet implemented (NYI) - miss something about xslt - Any parsers supporting identity constraints to check XML files? - Missing xmlns attribute while indenting using xalan (JAVA) - How to do a choice on aggregation with XML schema - Fop embedding EUDC fots problems - XML Schema problem... - xsl-fo and xml file to test xalan to create fo-file - Xerces C++ help in GetNodeValue(). Beginner question.... - schema unordered element list with any element - XPATH expression. - xsl problem - DHS Rss Feed? - how to use starts-with - Web utility: XML to Table - Next version of Altova Authentic and Corel XMetaL - Embedding xml as remote obects into static html - XML for Coders (XFC) - Java class to translate from utf-8 to iso-8859-x - SVG doesn't raise events when embedded in HTML - utf-8 chars lost via ftp? - [ANN] Exchanger XML Editor V1.2 Released - How to split up HTML table XSL?? - XML Client/server side - xsl-t tutorial - Question about attribute inheritence in XML Schemas using <xsd:extension> - Writing brief report on XML databases... - A question on the xsl. - LF Example parsing XML from a string not a file - Is Cocoon for us? 
- merge two xml files based on common key - xml in java - Linking to a separate CSS in XSL - Encoding problems / Perl 5.8.0 / XML::LibXML / XML::LibXSLT - Silly ID question - expat GetAttribute help cpp - encoding in embedded svg in FOP - xs:all in schema - call-template in curly brackets - Tutorials/examples for SAX2 with Perl - SAX parseing goes 'all funny' on value [en] - not well format xml and cdata - ANN: XMLBuddy and XMLBuddy Pro 2.0 now available - XML to CSV made easy? - How to programatically assign a validating schema using Xerces? - Help on including one XML document within another XML document using XML Schemas - Deleting tags via XSL - from data structure to xml an vice versa - Checking compatibility between schemas - SAX and Servlets - How? - Generating XML fragment to DOM node using XSL - Macromedia looking for developers using XML - Using XSLT to validate source XML's DTD - Are RSS feeds supposed to be dynamically generated? - Checking compatibility between schemas
http://www.velocityreviews.com/forums/archive/f-32-p-30.html
CC-MAIN-2014-15
refinedweb
1,506
50.26
Advanced Namespace Tools blog

5 January 2017

Weird Behavior of CFS ctl file and 9p requests for it

Background and Initial Symptoms

The Cache File Server has an option to collect statistics, which are presented via a file /cfsctl at the root of the tree. While trying to read these stats, I experienced buggy behavior:

- The data did not seem to be updating properly on successive reads
- The data display started showing corruption, with fragments of new data appearing at the end of the output
- Eventually, attempts to read from the file would produce an EOF error with no data output at all

I took a look at the code, and added some debugging prints to investigate what was going on. Here is an example of debug output from adding diagnostic prints to the cfsctl section of the genstats and rread functions in cfs.c. Each cycle beginning with a "statlen" is from successive cats of the file after the data has changed:

statlen 1054 = p 4df8e - statbuf 4db70
cnt 8192 > statlen 1054 - off 0 setting c.rhdr.count to 1054
cnt 8192 > statlen 1054 - off 1054 setting c.rhdr.count to 0
statlen 1055 = p 4df8f - statbuf 4db70
cnt 7138 > statlen 1055 - off 1054 setting c.rhdr.count to 1
cnt 8192 > statlen 1055 - off 1055 setting c.rhdr.count to 0
statlen 1057 = p 4df91 - statbuf 4db70
cnt 7137 > statlen 1057 - off 1055 setting c.rhdr.count to 2
cnt 8192 > statlen 1057 - off 1057 setting c.rhdr.count to 0
statlen 1281 = p 4e071 - statbuf 4db70
cnt 7135 > statlen 1281 - off 1057 setting c.rhdr.count to 224
cnt 8192 > statlen 1281 - off 1281 setting c.rhdr.count to 0
statlen 1275 = p 4e06b - statbuf 4db70
cnt 6911 > statlen 1275 - off 1281 setting c.rhdr.count to -6
(int)c.rhdr.count -6 < 0, sending eof

Contrary to my expectations, statlen seemed to be behaving correctly, but the count and offset taken from the 9p request structure c.thdr (c.rhdr is the reply) looked strange to me.
Why was the request corresponding to each new cat operation showing a steadily shrinking count and increasing offset?

A Partial Fix with very Strange Results

The failure to show the data correctly was mostly due to the offset parameter, because the data is sent in the reply by:

c.rhdr.data = statbuf + off;

Because the offset doesn't return to 0 on new reads, only the final few bytes of the changed buffer are shown - with the stale data appearing before them. I decided to hack in some manipulation of the offset, and then things got really weird. Here is my hacked-up debugging version of the code for the cfsctl file in rread:

off = c.thdr.offset;
cnt = c.thdr.count;
if(statson && ctltest(mf)){
	/* statsend is a variable I added to help control the output behavior */
	fprint(2, "rread cfsctl:\n");
	if(statsend == 0)
		off = 0;
	if(statsend == 1)
		off = statlen;
	statsend++;
	if(statsend == 2)
		statsend = 0;
	/* The idea is that we send all the data from 0 to statlen on the
	first request, then set the offset equal to the amount read previously
	and send no data on the second read - the rest of the logic is
	unchanged save debugging prints */
	if(cnt > statlen-off){
		c.rhdr.count = statlen-off;
		fprint(2, "cnt %d > (statlen %d - off %lld) setting c.rhdr.count to %d\n", cnt, statlen, off, c.rhdr.count);
	}
	else{
		c.rhdr.count = cnt;
		fprint(2, "cnt %d <= statlen %d - off %lld, c.rhdr.count set to cnt %d\n", cnt, statlen, off, c.rhdr.count);
	}
	if((int)c.rhdr.count < 0){
		fprint(2, "(int)c.rhdr.count %d < 0, sendreply(eof)\n", (int)c.rhdr.count);
		sendreply("eof");
		return;
	}
	c.rhdr.data = statbuf + off;
	fprint(2, "c.rhdr.data %p set from statbuf %p + off %lld\n", c.rhdr.data, statbuf, off);
	sendreply(0);
	return;
}

This did fix the issue partially - I now received correctly updated stats data from every read of the file, with no corruption and no eof errors. However, something even stranger (to me at least!)
started happening: All of the previous reads from the file were also printed, with the new data appended at the end. So, as the fs was used and I read from the ctl file repeatedly, the output would be like this:

Client                Server
#calls Δ ms/call Δ    #calls Δ ms/call Δ
1 1 0.750 0.750       1 1 0.743 0.743       Tversion
7 7 0.575 0.575       7 7 0.569 0.569       Tauth
7 7 0.893 0.893       7 7 0.888 0.888       Tattach
325 325 0.490 0.490   324 324 0.486 0.486   Twalk
147 147 0.474 0.474   146 146 0.470 0.470   Topen
764 764 0.239 0.239   18 18 3.644 3.644     Tread
16 16 5.071 5.071     16 16 5.065 5.065     Twrite
135 135 0.586 0.586   135 135 0.581 0.581   Tclunk
169 169 0.468 0.468   169 169 0.455 0.455   Tstat
11 11                 ndirread
7 7                   ndelegateread
0 0                   ninsert
0 0                   ndelete
5 5                   nupdate
1716594 1716594       bytesread
4668 4668             byteswritten
0 0                   bytesfromserver
3769 3769             bytesfromdirs
1712825 1712825

327 2 0.490 0.407     325 1 0.487 0.799     Twalk
149 2 0.499 2.373     147 1 0.470 0.442     Topen
766 2 0.275 14.317    18 0 3.644            Tread
16 0 5.071            16 0 5.065            Twrite
137 2 0.620 2.962     137 2 0.616 2.955     Tclunk
169 0 0.468           169 0 0.455           Tstat
11 0                  ndirread
7 0                   ndelegateread
0 0                   ninsert
0 0                   ndelete
5 0                   nupdate
1716594 0             bytesread
4668 0                byteswritten
0 0                   bytesfromserver
3769 0                bytesfromdirs
1712825 0

388 61 0.496 0.529    385 60 0.494 0.532    Twalk
152 3 0.520 1.527     149 2 0.471 0.527     Topen
773 7 0.311 4.267     20 2 3.330 0.507      Tread
17 1 5.111 5.746      17 1 4.822 0.940      Twrite
140 3 0.638 1.451     140 3 0.634 1.442     Tclunk
227 58 0.481 0.517    227 58 0.468 0.507    Tstat
13 2                  ndirread
7 0                   ndelegateread
0 0                   ninsert
0 0                   ndelete
5 0                   nupdate
1736279 19685         bytesread
4693 25               byteswritten
0 0                   bytesfromserver
11431 7662            bytesfromdirs
1724848 12023         bytesfromcache
25 25                 bytestocache

Every time I read from the /cfsctl file after the data had been changed, the new data would be appended at the end, and this process continues arbitrarily.
This seems paradoxical to me because the statbuf is a static array of 2048 bytes, and there are only 2 9p requests being fulfilled, one for a bit over 1000 bytes, and one for 0. Here are what the debugging prints look like with this version of the code (note that these offsets are fake, the actual c.thdr offset request size is huge):

4811 > 3684 > 2557 > (statlen 1127 - off 0) setting c.rhdr.count to 1127
c.rhdr.data 4dc00 set from statbuf 4dc00 + off 0

This data was collected in a test where I was trying to see just how far the extra buffering/appending would go - and so far I have not found a limit. At the moment, I am receiving over a megabyte of data from reads of /cfsctl:

cpu% cat /cfsctl |wc
24184 130469 1200544

All of this data is not being stored anywhere by the cfs process itself; its memory footprint is far too small:

cpu% ps -a |grep cfs
glenda 211 0:00 0:56 312K Pread cfs

Here is some of the data from the raw 9p requests printed after the convM2S in rcvmsg:

rcvmsg: count is 1430 offset is 1194602
rcvmsg: count is 8192 offset is 1195729
rcvmsg: count is 8192 offset is 1195729
rcvmsg: count is 20 offset is 323088
rcvmsg: count is 8192 offset is 1195729
rcvmsg: count is 303 offset is 1195729
rcvmsg: count is 8192 offset is 1196032
rcvmsg: count is 8192 offset is 1196032
rcvmsg: count is 20 offset is 323088
rcvmsg: count is 8192 offset is 1196032
rcvmsg: count is 8192 offset is 1196032

I have some logic to turn on/off those prints depending on whether or not the /cfsctl file is being read, so I'm not sure if the "count is 20" messages are part of the cfsctl read transaction. Complete debug code that I'm running is at and the raw logs that I'm extracting these debug sample outputs from is at

So What/Why/How?

What seems like it must be happening is that the kernel is caching responses to reads of the /cfsctl file and maintaining the offset between successive invocations of cat.
Somewhere in the communication chain between cat, the mount device (devmnt.c in the kernel) and the actual cfs program, things are getting confused, and the kernel must be keeping a large buffer of data which it is replaying to each new cat. The cause of the progressively larger offsets to cat is unclear to me. I don't understand how whatever cfs is doing wrong is causing cat/devmnt to behave in this way.

The Lightbulb Goes On

After writing up all the above, I was ready to consult the lead 9front dev, Cinap. As usual, he was able to diagnose and fix the issue in about five minutes of total irc discussion. Ironically enough for me as a namespace fanatic, I was forgetting something about namespaces in the standard /lib/namespace file. The very first line:

mount -aC #s/boot /root $rootspec

The -C flag is the key here. As we know from man 1 mount:

-C   (Only in mount.) By default, file contents are always retrieved from the server. With this option, the kernel may instead use a local cache to satisfy read(5) requests for files accessible through this mount point.

So, now things fall into place: we are seeing an interaction between a caching mechanism intended for use with fses serving static files, and the synthetic cfsctl statistics file. In addition to pointing out the cause, Cinap also had a fast and easy fix: increment the qid (unique 9p protocol file identifier) at the end of the genstats() routine:

ctlqid.vers++;

Along with a minor adjustment to when genstats() is invoked, that was all that was necessary to fix the issue with the bad behavior of cfsctl in combination with the -C flag to mount.

Lessons Learned

One of the hardest challenges in debugging is remembering everything you know - I certainly knew the -C flag existed in the sense that I had read the manpage for mount many times, and had seen it used in the mount of root in the standard namespace file.
Despite this, when my debugging led me to conclude that the kernel was supplying previously cached data to a read request, the existence and relevance of the -C flag completely slipped my mind. You might say it was a "can't see the forest for the trees" issue - I was bogged down in the details of how cfs was handling 9p messages, and I didn't manage to take a step back from those specifics and notice that the kernel was being specifically told to keep a read cache for the root filesystem which it was providing.
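The qid-version fix lends itself to a small model. Below is a minimal Python sketch (my own analogy, not Plan 9 or cfs code; all class and method names here are made up) of a mount cache that serves reads from its cache only while the file's version is unchanged. Bumping the version on each stats regeneration, as ctlqid.vers++ does, is what invalidates the stale cached data:

```python
# A toy model of the kernel's mount cache: reads are served from cache
# as long as the file's (qid) version number is unchanged.
class ToyFileServer:
    def __init__(self):
        self.version = 0
        self.data = b""

    def genstats(self, new_data):
        """Regenerate the synthetic stats file."""
        self.data = new_data
        self.version += 1  # the fix: bump the qid version on every regeneration

class ToyMountCache:
    def __init__(self, server):
        self.server = server
        self.cached = None  # (version, data) of the last fetch

    def read(self):
        if self.cached and self.cached[0] == self.server.version:
            return self.cached[1]    # cache hit: replay the old bytes
        data = self.server.data      # cache miss: fetch fresh data
        self.cached = (self.server.version, data)
        return data

srv = ToyFileServer()
cache = ToyMountCache(srv)
srv.genstats(b"stats #1")
print(cache.read())  # fresh read
srv.genstats(b"stats #2")
print(cache.read())  # version changed, so the cache is bypassed
```

Without the version bump, the second read would replay the stale "stats #1" bytes - the same stale-replay behavior described above.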
http://doc.9gridchan.org/blog/170105.cfsctl.weird9p
CC-MAIN-2017-22
refinedweb
1,923
79.6
Here's an interesting stylistic clash between languages. Here's an excerpt from the Python guide on docstring conventions.

... The BDFL [Benevolent Dictator for Life] recommends inserting a blank line between the last paragraph in a multi-line docstring and its closing quotes, placing the closing quotes on a line by themselves. This way, Emacs' fill-paragraph command can be used on it.

source: 〔Docstring Conventions By David Goodger, Guido Van Rossum. @…〕

There are quite a few interesting aspects. Note that in emacs's inline doc convention, a function's parameters should be in CAPS in the docstring. See: (info "(elisp) Documentation Tips")

When a function's documentation string mentions the value of an argument of the function, use the argument name in capital letters as if it were a name for that value. Thus, the documentation string of the function eval refers to its second argument as 'FORM', because the actual argument name is form:

Evaluate FORM and return its value.

Here's an example:

[…]. Interactively, reads the register using `register-read-with-preview'."
  …)

Python's convention is slightly better, because it's more intuitive and convenient. Programmers don't need to remember to type ALLCAPS in documentation. Note: elisp is case sensitive too. Though, in elisp, basically no parameter (identifier) is ever ALLCAPS. Because of lisp syntax, lisp identifiers allow hyphens. So, basically ALL identifiers are by convention all-lower-case or all-lower-case-with-hyphen. While in python, camelCase or Capfirst is common for objects, and all caps is usually for CONSTANTS.

The problem with the elisp convention is that it basically limits the charset of identifiers to English letters only. For example:

(defun geometry-transform-f (ξ φ)
  "do transform Ξ and Φ …"
  … )

Note that now it's hard to understand, because most of us are not familiar with the capitalization of the Greek alphabet.
The disadvantage of python's style is that now it's impossible to have a word in documentation that happens to be the same as the parameter name. For example: “Insert contents of register REGISTER. (REGISTER is a character.)”. If it were documented in python style, the word “register” is ambiguous. The best solution is actually to introduce a markup, for example: "Insert contents of register p(register). "

Guido's python style guide also suggests that the ending quote be on a line by itself. (That is, the last char in the docstring should be a newline char.) Like this:

def f(x):
    """Something …

    x -- the arg.

    The End.
    """
    # …

On the other hand, emacs's convention tells people not to do that. Like this:

(defun f (x)
  "Something …
X is ….
The End."
  ;; …
  )

Here, I'm not sure one convention is absolutely better than the other. Python's style is more convenient (when you edit and cut lines). Computers can easily add or remove such a char when processing the docstring for rendering purposes or whatever purpose. Emacs's style is more “clean”. Because, if you don't require a line ending, you might want to not include it in your source code. There are 3 ways to go about this: […] The ④ is the best. Google's golang does this.

Another interesting point is that emacs's convention suggests using less than 67 chars per line. Python doesn't suggest this, but in practice, most lines are less than 80 chars per line because they need indentation at the beginning of each line to make the left side aligned. A better way is to not have a line length limit, and the newline char (␤) should be used for logical breaks only. Again, a computer can trivially parse and truncate lines when necessary. It should not be a human burden. Also, limiting the use of ␤ to logical purposes makes it semantically meaningful. If ␤ is also used for formatting purposes, then a parser won't be able to tell, thus losing part of the automation power.
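The claim that a computer can trivially reflow and truncate lines can be sketched with Python's standard textwrap module. (The reflow helper below is hypothetical, not part of any style guide; it treats blank lines as the only meaningful breaks, in the spirit of the argument above.)

```python
import textwrap

def reflow(docstring, width=67):
    """Reflow a docstring's paragraphs to a given width.

    Paragraphs are separated by blank lines; each one is re-wrapped
    independently, so newlines inside a paragraph carry no meaning
    of their own - they are purely presentational.
    """
    paragraphs = docstring.split("\n\n")
    return "\n\n".join(
        textwrap.fill(p.strip(), width=width) for p in paragraphs
    )

# One long logical line becomes several display lines <= 67 chars.
long_doc = ("Evaluate FORM and return its value. " * 5).strip()
print(reflow(long_doc, width=67))
```

With a helper like this, the 67-chars-per-line limit becomes a rendering concern rather than something the docstring author has to maintain by hand.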
http://xahlee.info/comp/python_vs_elisp_docstring_convention.html
CC-MAIN-2014-41
refinedweb
639
57.27
The cost of Scala Option

Interested in the cost of the Option type in Scala, I googled “scala option cost” and got this post as the first result.

You can see that using Option incurs on average extra 0.05 nanosecond in execution time. … an extra delay caused by a single ADD is twice as big as this one (0.1 nanosecond).

I was stupefied. WOW that is fast! Then I remembered the quote “Trust no one, bench everything.” from sbt-jmh. In that post the author guessed that it has something to do with escape analysis¹, and I guess he is correct.

@Benchmark
def testOption(): Any = {
  val value: java.lang.Long = 4L
  Option(value).map(_ * 2).getOrElse(1L)
}

The above is the benchmark code. The option objects Option(value) and Option(value).map(_ * 2) are used only inside the function. No other code will see them. So the compiler can choose not to place them on the heap, and we have the amazing performance reported. But that invites the question: what if the object does escape?

Options in the Heap

To help them escape, I simply return the optional/nullable values. Then to make the cases look more similar I wrap them both in a case class.

@Benchmark
def createWrappedOption(): OptionContainer[String] = {
  OptionContainer(Some("string"))
}

Benchmark               Mode  Cnt  Score   Error  Units
createWrappedNullable   avgt   45  3.044 ± 0.002  ns/op
createWrappedOption     avgt   45  5.446 ± 0.004  ns/op

Rather than the 0.05ns in the post, I saw a 2.4ns difference. In actual code, Option objects mostly live in a complicated graph of references. Escape analysis can hardly help there. That makes the 0.05ns figure very misleading.

The Indirection

What’s more, the cost of Option is not only the object allocation, but also the indirection cost.² To measure it I have this benchmark.

@Benchmark
def testOptionRandomAccess(state: OptionState): Int = {
  val i = ThreadLocalRandom.current().nextInt(state.size)
  val res = state.arr(i)
  res.fold(0)(_.length)
}

Arrays of various sizes, from 64 to 16M, are filled with Option[String].
They are then accessed randomly. When the strings are created along with the Option wrapping, reading the length of a string from an Array[Option[String]] is slower than reading from an Array[String], but not by a lot. Since the String object lives right next to the Option object, it will get pulled along into the cache.³

For shits and giggles, I wanted to make the numbers worse. If I can separate all the String objects from the Option objects, reading the String will require reading from a different location in the memory. On the left-hand side, reading the first Option will bring the string “1” into the cache, so its access is then cheap.⁴ On the right-hand side, reading the first Option will bring the other Option objects into the cache. That’s not useful. Accessing “1” needs another fetch. And yes, it is slower.

I hope my rambling gives you a better idea of the different aspects of the cost of using Scala’s Option type. It may not be as cheap as you wish. Am I arguing against the use of Option? No. In the grand scheme of things, they do not matter. In 99% of the code⁵ we write, 0.05ns, 2.4ns, 100ns, or even 100μs, is not something to worry about. Option is fine with non-performance-sensitive code.

Is it possible to have both the safety of Option and the performance of null? Yes, just use Kotlin! Kotlin supports nullable types: T?, which means T or null.⁷ In Scala 3 union types will be supported and we can have explicit null. Back in Scala 2, there is OptionVal, a “free” wrapper around a nullable reference.

- Consider this: you look for the definition of “poop”. The dictionary tells you it means the same as “shit”; then you go to the entry of “shit” and get 💩. Compare this to seeing 💩 immediately in the entry for “poop”. This indirection cost is also present in boxed primitives with generic data structures on the JVM.
- For an explanation of locality, see
- The extra objects do mean that less useful data can fit into the cache.
- Two nines is probably an underestimate.
- For a comparison of T | Null and Option[T], see
https://medium.com/@georgeleung_7777/the-cost-of-scala-option-987ffd64206b
CC-MAIN-2020-16
refinedweb
724
77.33
I'm working on a Grade Point Average Calculator. The user just has to select the number of courses offered, then select the grade obtained in the different courses by choosing the appropriate grade from the Choice box. The problem with my application is that if the user mistakenly selects "A" instead of "B", then corrects his mistake by selecting "B", the program sums up all the selections, including the wrong selection. So the calculation ends up being wrong. Another problem with my application is that you can't select an item twice, i.e. if a user selects "A" in the first Choice box, he can't select "A" again in the same Choice box, in case he wants to do another calculation. This is the source code for the GradeChoice class:

import java.awt.*;
import java.awt.event.ItemListener;
import java.awt.event.ItemEvent;
import javax.swing.*;

/*
 * @author Anokam Kingsley
 */
class GradeChoice extends Choice {

    GradeChoice() {
        super();
        final String[] Grades = {"Grades","A","A-","B+","B","B-","C+","C","C-","D","F"};
        for (String grades : Grades) {
            add(grades);
        }
        addItemListener(new ItemListener() {
            public void itemStateChanged(ItemEvent e) {
                String gradeString = getSelectedItem();
                switch (gradeString) {
                    case "A":  gradeMark = 4.00; break;
                    case "A-": gradeMark = 3.67; break;
                    case "B+": gradeMark = 3.34; break;
                    case "B":  gradeMark = 3.00; break;
                    case "B-": gradeMark = 2.67; break;
                    case "C+": gradeMark = 2.34; break;
                    case "C":  gradeMark = 2.00; break;
                    case "C-": gradeMark = 1.67; break;
                    case "D":  gradeMark = 1.00; break;
                    case "F":  gradeMark = 0.00; break;
                    default: System.out.println("Error!!");
                }
                Scores += gradeMark;
            }
        }); // End of addItemListener
    } // End of GradeChoice constructor

    static double gradeMark, Scores;
} // End of class.

I'd be very grateful if someone helps me
https://www.daniweb.com/programming/software-development/threads/454194/please-help-an-amateur
CC-MAIN-2018-30
refinedweb
278
69.07
If someone asks us to give input to a program, interactively or through typing, we know which devices we need for it: the keyboard and the mouse. Similarly, if the output is to be displayed, we know which device it will appear on: undoubtedly, the monitor. So we can reliably say that the keyboard is the standard input device and the monitor is the standard output device. Similarly, any error that happens is also displayed on the monitor, so the monitor is the standard error device as well.

The standard input device, stdin – reads from the keyboard
The standard output device, stdout – prints to the display; its output can be redirected
The standard error device, stderr – same as stdout, but commonly used only for errors

Having error output separate allows the user to redirect the usual output to a file and still be able to see error messages. These standard devices are implemented as files called standard streams. In Python, we can use these standard streams via the sys module. After importing it, we can use the standard streams stdin, stdout, and stderr in the same way as other files.

Interesting Standard Input, Output Devices as Files

If we import the sys module in our program, then sys.stdin.read() will let you read from the keyboard. This is because the keyboard is the standard input device connected to sys.stdin. Similarly, sys.stdout.write() will let you write on the standard output device, the monitor. sys.stdin and sys.stdout, the standard input and standard output devices respectively, are treated as files. Thus sys.stdin and sys.stdout are like files that are opened by Python when it starts. sys.stdin is always opened in read mode and sys.stdout is always opened in write mode. The following example code fragment shows the interesting
use of these functions. It can print the contents of a file on the monitor without using the print statement:

Example:

import sys
fh = open(r":\poem.txt")
line1 = fh.readline()
line2 = fh.readline()
sys.stdout.write(line1)
sys.stdout.write(line2)
sys.stderr.write("No errors occurred\n")

Output:

We work, we try to be better
No errors occurred

These statements write to the file/device associated with sys.stdout, which is the monitor, and we can also see that stderr shows its output on the monitor.

Python data files with statements:

Python's "with" statement for files is very advantageous when we have two related operations that we would like to execute as a pair, with a block of code in between. The syntax for using the with statement is:

Syntax:

with open(<filename>, <filemode>) as <filehandle>:
    <file manipulation statements>

The perfect example for this is opening a file, handling the file, and then closing it:

with open('output.txt', 'w') as f:
    f.write('Hi there!')

Explanation

The above "with" statement will automatically close the file after the nested block of code. The advantage of using a with statement is that it is guaranteed to close the file no matter how the nested block exits. Even if an exception (a runtime error) occurs before the end of the block, the "with" statement will handle it and close the file.
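To make the stdout/stderr separation concrete, here is a self-contained sketch (the temp-file path and the report helper are made up for this demo) that redirects sys.stdout into a file, in the spirit of running a program with shell redirection, while stderr keeps going to the screen:

```python
import sys
import tempfile, os

# Write normal output and error output to their separate streams.
def report(lines):
    for line in lines:
        sys.stdout.write(line + "\n")         # normal output -> stdout
    sys.stderr.write("No errors occurred\n")  # diagnostics -> stderr

# Redirect only stdout into a file; stderr still goes to its
# original destination, so error messages remain visible.
path = os.path.join(tempfile.gettempdir(), "out.txt")
saved = sys.stdout
with open(path, "w") as f:
    sys.stdout = f
    try:
        report(["We work, we try to be better"])
    finally:
        sys.stdout = saved  # always restore the real stdout

with open(path) as f:
    print(f.read(), end="")  # only the normal output landed in the file
```

The "No errors occurred" line never ends up in the file, which is exactly why keeping errors on a separate stream is useful.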
https://edusera.org/what-are-standard-input-output-and-error-streams-python/
Hi, I have in my program a list of vectors; the vectors contain integers. I create this list via the 'new' keyword. To fill this list in parallel, I changed the list to a cilk::reducer_list_append. So far so good, everything works fine. My question is: how do I properly delete this reducer list? If it was a 'normal' two-dimensional list, I would loop through the list, delete every vector or just pop_back() until the list is empty, and then delete the list itself. But I can't do that with a reducer, can I? Is there a way to access e.g. the first element to delete it, or is there something similar to the pop_back() or pop_front() function? Or is it enough to 'just delete' the list?

#include <cilk/reducer_list.h>
#include <vector>

using namespace std;

int main()
{
    cilk::reducer_list_append< vector<int> > *mylist = new cilk::reducer_list_append< vector<int> >();
    vector<int> myvector(4, 1);

    mylist->push_back(myvector);
    mylist->push_back(myvector);

    delete mylist; // What exactly happens here?
}
https://software.intel.com/en-us/node/329333
ReminderFox for Firefox 2.1 - Changelog

What's new in ReminderFox for Firefox 2.1 : May 16th, 2013

New Remote calendar support!
· There is now proper support for synchronizing directly with remote calendars. Popular services such as fruux, Google calendar, and owncloud have been tested. By synchronizing ReminderFox with a calendar provider, it is now possible to sync your ReminderFox events across mobile devices (such as iPhone or Android phones). See full documentation here:
Some cool things:
· Remote calendars can have colors set, and events for that calendar will show up highlighted in that color in the list.
· You can have multiple remote calendars synchronized into your single Reminders list.
· You can add a column on the list to view which events are synchronized to which calendar.
Improvements:
· Quick Alarm menu added to the Foxy menu in the Reminder list
· Alarm dialog can now be resized, and the size will be persisted thereafter.
· New Reminder repeat option to support "COUNT"; you can now set an event to repeat for a specified number of times. Thanks to user Peter de Leuw for the request.
Fixes:
· Finally! Fixed issue where repeating todo alarms would not pop up.
· Fixed: ReminderFox list was not opening properly when some columns were set to sort (was just showing the calendar with an empty list)
· When deleting recurring events/alarms, the default button now deletes 'this and previous'
· When adding a reminder and setting a repeat, ReminderFox now warns the user if 'overlapping' start/end date spans conflict with the repeat interval.
· Fixed: opening the Edit window by clicking on a calendar event was opening the Add dialog, not the Edit dialog
· Fixed: 'Views' with string arguments did not work

What's new in ReminderFox for Firefox 2.0.2 : November 15th, 2012

Improvements:
· Day Box popup positioning in the Calendar will not overlap with the selected day box anymore
· Copy Reminder/Event with recurrence will give the option to copy as a single instance of the selected date.
Filter/Views:
· Reminders and Todos/Lists have separate settings for default views
· ViewsEditor directly activates the last Views used
· Date editing definition opens when hovering over the date box.
Print/View:
· Menu structure changed
· Added a help menu item which calls the printing documentation (English and German versions)
Notes Textbox:
· In the Add/Edit dialog the number of lines can be user sized: added preference for min/max number of lines
· name: extensions.reminderFox.notesLines
· type: string; value: {min lines},{max lines}
Options changes:
'Display' Options:
· Use single buttons bar: Add and OK buttons moved to top bar
· Switch to open the menu [Reminders] or [Todos/Lists] with mouse over or click
· Number of calendar months displayed
· Set the default text size
'Tooltip' Options:
· Enable an 'Agenda' displayed at startup; this is helpful to get an overview of reminders and todos. That page can easily be printed with normal Firefox/Thunderbird printing
'Defaults' Options:
· List default has separate filter settings for Reminders/Todos with 'Last selected View' also
'File' Options:
· If changing the 'Local ReminderFox File Location' with unsaved events, a warning will be displayed to save/undo those changes
Fixes:
· Calendar Year header now always relates to the 'selected' calendar day
· Views with multi-year selection in some cases didn't show correct events
· [Help] call from Foxy didn't open the documentation
· 3-month Calendar called with icon on FX/TB main menu bar: in some cases events were missing
· Networking: with networking enabled and the main dialog shown, added/edited events/todos could be lost
· Import with categories now handles 'Cancel' as expected
· Editing reminders will not change the origin date of the reminder (row on List)
· [Add Todos] for WEB page was not working
· Open message with mail icon was failing in Thunderbird 15 if a Chat account was added

What's new in ReminderFox for Firefox 2.0.1 : August 25th, 2012

Fixes:
· Defaulted the number of months shown in the calendar to 1. This behavior can be changed in the Options->Display tab under List and Calendar Setup.
· Fixed: Reminders can be added by double-clicking on the calendar
· Fixed: Performance of list/calendar. There was an issue with certain repeating reminders that spanned multiple days that caused long delays in displaying the calendar.
· Fixed the calendar day popups that were showing up too quickly, obscuring the calendar, not going away, and generally behaving like a bad house guest.
· Increased calendar day popup delay time to 1000ms (1 second). This can be controlled by entering 'about:config' in the browser URL and searching on the preference 'extensions.reminderFox.calendar.daypopup.delay'. It can be shortened or increased as desired.
· Fixed: the Foxy icon was not showing on the Toolbar button when 'small' icons mode was selected
· Fixed: text search filter issues. The text search filter box had issues when typing, where the cursor was not behaving properly, making it difficult to use.
· Fixed: Text size wasn't always responding as expected to keyboard shortcuts. CTRL-+ should now always increase the calendar size, and ALT-+ should increase the list text size. (CTRL - and ALT - will decrease the size.)
· Fixed: calendar day popup could be hidden _behind_ the Calendar when opened from the toolbar icon
· Fixed issue where the ReminderFox list dropdown could stay open after selecting Select List/Calendar Layout.
Improvements:
· Foxy's back! I missed Foxy so he's back in the bottom right of the calendar display.
He can be hidden by right-clicking on him and selecting Hide Foxy. He can be shown again at any time by right-clicking underneath the calendar and selecting Show Foxy.
· Cleaned up the look of the calendar and day popups a bit to have more consistent icons.

What's new in ReminderFox for Firefox 2.0 : August 8th, 2012

· The Calendar Layout has been expanded to show three months in order to show more information at a glance.
· There is also a new 'calendar-only' layout. The selected day in the month grid will be expanded; that way all events of that day are shown directly.
· Printing is more flexible and can be modified by the user.
· An Agenda page showing Today's and Upcoming events and the ToDo's List can be shown at startup of ReminderFox's host application (Firefox, Thunderbird, Seamonkey).

What's new in ReminderFox for Firefox 1.9.9.5.1 : February 21st, 2012

· Fixed: if the Foxy icon had been hidden, the ribbon icon was not showing and the tooltip had no entries

What's new in ReminderFox for Firefox 1.9.9.5 : January 30th, 2012

Improvements:
· When suspending alerts or alarms, you now have the option to have the alerts/alarms automatically resume after a specified period of time
· Mac: alert sliders now work on Mac
· Updated version compatibility for SeaMonkey and PostBox
Bug Fixes:
· Fixed: menu icon was showing for all submenu items in the Firefox ReminderFox tools menu (Mac)
· House Keeping: refactored all global UI identifiers; helps avoid namespace pollution and potential conflicts with other Add-ons

What's new in ReminderFox for Firefox 1.9.9.4.3 : October 7th, 2011

· Fixed issues with Thunderbird where Adding Reminder to an email was not working. (Thanks to Guenter Wahl)
· Fixed issue where the welcome page was continuously displaying after update for some users

What's new in ReminderFox for Firefox 1.9.9.4.2 : August 18th, 2011

· Fixed problem where it was not possible to Add New Categories
· Fixed issue where exporting reminders to a file would allow you to select a directory instead of a file

What's new in ReminderFox for Firefox 1.9.9.4 : July 15th, 2011

· Many refactorings to improve startup performance and best practices. HUGE thanks to Guenter Wahl!
· The quick alarm toolbar icon now has a context menu to allow you to remove any quick alarm.
· Pressing the quick search hot key (default: CTRL-Q) will now show the Quick Search bar if it is not shown. This can be done even if the sidebar is not shown.

What's new in ReminderFox for Firefox 1.9.9.3.1 : March 23rd, 2011

· Added a check to ensure that the Addon-bar is showing for FF4 for existing ReminderFox users (if the ReminderFox icon was set to use the statusbar previously).

What's new in ReminderFox for Firefox 1.9.9.3 : March 4th, 2011

· Fix for version 1.9.9.2 issue where alarms were not showing in some cases
· Fix for version 1.9.9.2 issue where ToDos were not respecting View filtering
· Fix for long-standing issue where after acknowledging an alarm it would re-appear 15-20 minutes later (Finally! Thanks to Dennis and Bernie for helping debug this one)
· Fixed: if you snoozed an alarm for more than 24 days, the alarm would re-pop up immediately
· Better alarm handling; now alarms are only set when necessary, instead of setting them all in advance
· Can now Drag & Drop calendar files onto the Foxy toolbar icon in order to import them (just like on the status icon)

What's new in ReminderFox for Firefox 1.9.9.2 : February 4th, 2011

Improvements:
· Can now set the default filters to be the last selected Stored View in the preferences. So if you have a custom View that you like to use, you can have the Reminder list always open to that.
· When importing reminders/ToDos, there is a more detailed dialog detailing how many events are to be imported, and how many succeeded. (Thanks to Guenter Wahl)
· You can now entirely remove the status bar "ribbon" icon by setting Display Placement toolbar to "none". You might want to do this if you prefer to have the "Foxy" toolbar icon exclusively (right-click on the Firefox toolbar, click "Customize" and drag the Foxy icon to any toolbar you wish)
· Added new "All Reminders (All Years)" default view for new installs
· Added context menu to the "Foxy" toolbar icon (right-click on Foxy to get the same context menu as the status bar icon)
Bug Fixes:
· Fixed issue some were seeing where new reminders were not being saved when the Reminder list was closed
· Fixed: Deleting reminders from the alarm popup was not working for non-repeating reminders
· Fixed: improved startup time
· Fixed selecting a row in the list to not scroll off at the end of the year
· Fixed: filters for the Reminders tab were not being respected when opening to the ToDo's tab by default
· Fixed: Reading in calendars with specified TimeZones would not persist timezones properly
· Fixed: When deleting past recurring reminders, the End Date was not being updated for later instances
· Fixed: Quick Alarm display would not update properly if another upcoming reminder tab was open
· Added additional debugging mechanisms: preference extensions.reminderFox.debug.file can be set to a file (example: "c:\reminderfox_log.txt") to have debug output written to a file for better support during defect investigation

What's new in ReminderFox for Firefox 1.9.9.1 : November 29th, 2010

· Fixed non-English locales

What's new in ReminderFox for Firefox 1.9.9 : November 26th, 2010

Improvements:
· Introducing Foxy! As we approach version 2.0 it felt like it was time for a bit of an icon refresh, so our mascot Foxy is now featured in the UI. Tip: Don't like Foxy? You can hide him in the Reminder List (and status tooltip) by right-clicking and selecting Hide Foxy.
· UI improvements have been made to declutter the Reminder List UI a bit and make for a better first-run experience.
· The Options and Calendar buttons have been replaced by links in the calendar sidebar
· The Filters and search dropdowns are hidden by default. They can be shown by clicking the Show Filters link in the sidebar. This preference will be remembered across sessions.
· Integrated Filter/View bar. The separate View dropdown has been removed, and Views are now integrated into the existing Filter dropdown (the one with Selected Year, Selected Month, etc). Select the sub-menu item for 'More Views' to see existing views and add new views. (Thanks to Guenter Wahl)
· User request: when you select to delete a recurring reminder from an alarm, you will be prompted to delete that instance or all instances
· User request: added default options for alarms. You can set the initial snooze alarm value, and the initial alarm action (Acknowledge, Mark as Complete, etc)
· Added listener for Backspace (Delete on Mac) to remove a reminder from the list
Bug fixes:
· Fixed issue where files with Timezones could get corrupted (causing the Reminder List window to open with no events)
· Fixed bug from 1.9.8.4 where Defaults were not being honored when Adding a Reminder from the status bar context menu
· Fixed: no "12:00AM" time was being shown for the Spanish locale
· Fixed bug where the calendar tooltip would show events lasting multiple days as occurring for one extra day
· Added some safe-checking code around alarm parsing to gracefully handle invalid alarm blocks (the ReminderFox list was not displaying properly in that case)
· Fixes for Drag&Drop of links onto the status ribbon (thanks to Guenter Wahl)

What's new in ReminderFox for Firefox 1.9.8.4 : October 11th, 2010

Improvements:
· Now handles timezones properly (allowing you to properly subscribe to or import calendars that define their own timezones, like the NFL calendars here:
· Now emboldens the current month in the calendar widget (just like today's date is emboldened)
· When alarms/alerts are suspended from the status icon context menu, the status icon and status text will turn grey to indicate that alarms have been disabled
· Reminder list and alarm windows now use the ReminderFox icon to better differentiate them from other Mozilla windows
· Can now add a reminder based on an ICS event copied to your clipboard (thanks to Guenter Wahl)
Bug fixes:
· Fixed issue where clicking Launch URL was opening up a separate browser window
· Updated menus for Firefox 4 compatibility
· Fixed: status messages in the Reminder List window would sometimes not be cleared immediately when downloading subscribed reminders
A bunch of fixes to ensure alarms always occur when they should:
· Fixed issue with alarms where some repeating ToDos were not showing some alarms, if a previous instance was Completed, or if a previous instance originated more than 3 months in the past
· Fixed: if there was a previous instance of a repeating reminder in the alarm window, you would not be reminded at the time of the next instance. Now, the old instance will be replaced with the latest occurrence of the alarm.
· Fixed: if there was a snoozed alarm, repeating instances would sometimes not display

What's new in ReminderFox for Firefox 1.9.8.3 : July 30th, 2010

Improvements:
· User request - description text in an alarm can now be copied to your clipboard
· Added 'Go To Today' button on the calendar widget
· When opening the reminder list, the calendar will highlight today's date (instead of the date of the next upcoming reminder)
· Added support for additional web-based mail sites
· For overdue reminders that are marked Remind Daily Until Completed, the list display will now show their original dates in the description
· Updated for Firefox 4 beta 2

What's new in ReminderFox for Firefox 1.9.8.2 : June 7th, 2010

Improvements:
· User request - removed column labels for the Time label of "all day" and the Repeat label of "none", as they simply cluttered the list and made it more difficult to see the actual information you were interested in.
Bug Fixes:
· Fixed issue with older versions of Firefox and Thunderbird; was not able to delete existing reminders.
· Fixed: when events spanned month boundaries by more than 4 days, they were not showing up on the calendar for the next month

What's new in ReminderFox for Firefox 1.9.8.1 : April 21st, 2010

· Fixed nl localization issues
· Fixed issue where the View list would not refresh in some cases

What's new in ReminderFox for Firefox 1.9.8 : April 15th, 2010 / 1.9.7 : February 5th, 2010 / 1.9.6 : February 1st, 2010

New Features:
· Copy Reminders - there is now a new option in the Reminder List context menu to Copy Reminder and Copy ToDo. This will create a copy of the selected reminder and open up the Add Reminder window with the pre-populated information.
Improvements:
· By popular demand, added back the Biweekly (fortnightly) repeat option in the Add Reminder repeat dropdown
· Added context menu item on the calendar for "Go to Today" which will reset the calendar back to Today's date.
· Added option to show alarms in separate tabs or separate windows (as was the behavior in earlier versions). To disable separate tabs, go to the ReminderFox options Notifications tab; in the Alarms section you can change the dropdown for "Show multiple alarms using" to "a separate window for each alarm"
· You can now specify how long the alert slider will stay open. You can also set it so the alert slider stays open until you click it. The preference is in the ReminderFox options in the Notifications tab; the option is "Keep alert slider open". The default is "for 5 seconds", but you can change this to any length of time you wish. If you instead select "until clicked" then once the alert slider is opened, it will stay open until you click on it with your mouse.
· Added UI preference for limiting status text length to a set number of characters (in the ReminderFox options Display tab; default: 40)
· Added UI preference for setting maximum alert slider height, in pixels (in the ReminderFox options Notifications tab; default: 134)
Bug fixes:
· Was focusing the Firefox window when an alarm went off -- very annoying I'm told :)
· Alarm tab wasn't showing years properly (was showing "")
· Calendar from toolbar wasn't redrawing "Today's" date; would keep the original date emboldened, even after it was no longer the current date
· Remind Until Complete status was not being properly saved when events were processed hourly
· Fixed: Multiple alarm dialogs could still open even when using the Tabbed window option
· Fixed: Repeating reminders will no longer show multiple alarms for different occurrences of the same reminder
· Fixed: When importing reminders and choosing to Overwrite, was not displaying a Success message.

What's new in ReminderFox for Firefox 1.9.5 : November 7.1 : March 26 : January 6th, 2009

New Features:
· Added new option to show the Week Numbers on the calendar. You'll find this in the Options->List display. You can select None, Default week numbering, or ISO 8601-style numbering.
Improvements:
· Clicking the status or toolbar icon when the Reminder list window is open will now close the window. This makes it easy to quickly open the Reminder list, scan what you're looking for, and then easily close the window (as suggested by reader Travis).
· extensions.reminderFox.keyboard.shortcut.openReminderFox and extensions.reminderFox.keyboard.shortcut.addEvent to change the shortcut combination -- or leave it blank to have no shortcut)
· For the stylesheet used by the View/Print as HTML option, you can specify your own stylesheet or none at all using an advanced preference. Type about:config and modify the preference extensions.reminderFox.html.stylesheet to any stylesheet of your choosing (eg: to use a local stylesheet:). If you specify no value at all for this preference, then no stylesheet will be used.
· You can now add a reminder for all Thunderbird email messages currently selected (Thanks to Günter Wahl)
· Thunderbird improvements - mail body includes more information when sending an invite; priority now added to the mail message; change of Location reschedules the invitation (Thanks to Günter Wahl)
· OS switching support - the reminderfox ICS file is stored separately depending on OS; this allows you to point to the same file in a dual-boot scenario (Thanks to Günter Wahl)
· Updated for Firefox 3.1 compatibility (and Thunderbird 3 preliminary work)
Bug Fixes:
· Fixed erroneous 'Custom' option in the Custom repeat window.
· Removed white font in View As HTML style
· Sending mail messages from a Schedule reminder sends them directly (it no longer opens the Thunderbird Compose window). (Thanks to Günter Wahl)
http://linux.softpedia.com/progChangelog/ReminderFox-Firefox-Changelog-17052.html
Memory not released when list is cleared?

LoomisP
December 5th, 2006, 09:42 PM
I have a problem where my memory usage does not decrease when I clear a list. Here is a simple example:

int main()
{
    list<int> il;
    int k;
    for ( k=0; k != 50000000; k++)
        til.push_back(k)
    til.clear();
    cin >> k
    return 1;
}

The cin >> k statement is just to keep the program from ending so that I can see the memory usage at that point. I can see the memory usage increasing during the for loop, but it does not decrease at the til.clear() statement. The program is using the same amount of memory when it gets to cin >> k whether I clear til or not. I have also tried putting a til.resize(0) after til.clear(). Am I missing something or is my compiler broken?
Thanks,
Loomis

jfaust
December 5th, 2006, 10:38 PM
clear() does not free memory. The capacity of the list is unchanged. To really free memory, you need to swap.

#include <list>
#include <iostream>
using namespace std;

int main()
{
    list<int> il;
    int k;
    for ( k=0; k != 50000000; k++)
        il.push_back(k);
    //il.clear();
    il.swap(list<int>());
    cin >> k;
    return 1;
}

By the way, it would be considerate if you would post code that actually compiles.
Jeff

LoomisP
December 5th, 2006, 11:00 PM
Sorry for omitting the headers, I thought it would be better to be concise, but I can see the point if someone wishes to copy/paste/compile. But your code does not compile either. My compiler does not let me call swap in that way. The argument has to be a reference, but I don't think the constructor can return a reference. This also seems like going around the barn the long way. If I clear the list, and resize it to zero, why is all that memory still allocated to it? What purpose does that have?
Thanks,
Loomis

dcjr84
December 6th, 2006, 12:21 AM
When you .clear() a list, it does not deallocate the memory used by the list. It simply empties the "contents" of each list element. Even if you .resize() it to zero, the memory is not released. I could be wrong on this, but I think the memory you allocate to a list is not deallocated until it goes out of scope, and the list object is destroyed. The reason I think this is because of the concept of temporal locality. When the compiler dynamically requests a certain block of memory for the list, it has to invoke the OS to do so. This is a costly operation, so dynamic memory requests should be minimized. So, consider if you had a list of 10,000 elements. The compiler requests from the OS all of that memory. Then, for some unknown reason, you clear() the entire list. If all that memory was released back to the OS, and then you decided right after you cleared it you wanted the list to have 10,000 elements again, that would be a HUGE waste of time. It would have to go through this whole process again of acquiring the memory space. I think the compiler is smarter than that, and knows that even though you want to clear the "contents" of the list, it is highly likely you will need to use that allocated memory, or at least a portion of it, sometime again in the near future. So, it keeps it around just in case. At least, that's my theory :D

Try putting the list in a block and see what happens....

#include <list>
#include <iostream>
using namespace std;

int main()
{
    //Just a special block to test scope of the list
    {
        list<int> til;
        int k;
        for ( int k=0; k != 500000; k++)
        {
            cout << k << endl;
            til.push_back(k);
        }
    } //I am betting the memory will be deallocated
      //when this block is exited, because the list will be destroyed.

    int a;
    cout << "Enter a key: ";
    cin >> a;
    return 0;
}

LoomisP
December 6th, 2006, 12:37 AM
I tried your code and the memory was not deallocated when the list went out of scope.
My understanding is that you are right, all the memory allocated in the scope should be deallocated, but at least for me it is not the case. Did you try running this code yourself and see what happens? Maybe it is just my compiler that is the problem.

dcjr84
December 6th, 2006, 01:00 AM
Yeah, I just compiled and ran the code. It appears to me that the memory is indeed released when the list object goes out of scope. From what I know about objects, when they go out of scope their destructor is called and any memory allocated during construction or the object's lifetime is released. So this would make sense to me. In the Windows Task Manager, under Processes, as my program runs I can see it steadily increasing in memory usage as push_backs are performed, up to about 12,000 K. Then as soon as the block is exited, the push_backs are done, and it is hanging at the cin >> statement, the memory for my program drops back to 1,016 K, where it stays until I end the program.

screetch
December 6th, 2006, 03:55 AM
It depends on many parameters. Many implementations of the STL keep any memory they allocate in a pool for later reuse, and some implementations may even never free it: that's the case with STLport, unless you ask it explicitly. If you test your code with MSVC, have a look at _CrtSetAllocHook, which will allow you to specify a hook that will be called each time an alloc is done or memory is freed. Compile it in debug. See if the memory is freed or not at the end of the block. Also test the same code with STLport: I bet the results are not the same :)

ZuK
December 6th, 2006, 04:15 AM
STL containers usually let you specify allocators that take care of actually allocating and deallocating the memory used for the objects. Look at the documentation for your STL implementation; there might already be an allocation strategy available that does what you need. If not, you can write your own allocators.
Kurt

NMTop40
December 6th, 2006, 04:49 AM
This line is illegal C++:
il.swap(list<int>());
because swap takes a non-const reference and you cannot bind a temporary to it. But you can make the call the other way round:
list<int>().swap( il );
because although you cannot bind a temporary to a non-const reference, you may call a non-const method on one on the same line.

Paul McKenzie
December 6th, 2006, 05:18 AM
"I have a problem where my memory usage does not decrease when I clear a list. Here is a simple example:"

There are two things you should be aware of. The first thing is whether "delete" or free() is called when a container is cleared. As others pointed out, a std::list will not call delete/free for the memory it has allocated when you call clear(). The swap mechanism is used to force the list to deallocate the memory. But even if delete or free() were called, there is another aspect, and that is whether the heap manager actually frees the memory back to the operating system. There are heap managers that do not automatically call the operating system to deallocate the memory. The reason is that the heap manager could be coded so that it holds onto the allocated memory, in case you wish to allocate the memory once again in the program. In this case, all the heap manager has to do is manipulate a few pointers when asked to allocate the memory, and the job is done, probably much faster than allocating memory from the OS all over again.

If you are using Task Manager or a similar program to determine whether memory is actually being freed, remember that you do not have control over what memory is actually released (or obtained) from the OS by using the default "new" and "delete" operators. Yes, Task Manager and similar programs that track OS memory can be a good indicator, but by no means should they be used to certify that your program is actually using new and delete (or malloc and free) correctly, or whether the container classes are coded correctly.
The only exception to this is if you (or the heap manager) strictly call the OS memory functions to obtain and release memory. The heap manager would then have to be coded to obtain the memory from the OS each and every time new or malloc() is called, and call the OS function to release the memory each and every time delete or free() is called. Very few, if any, good heap managers are written this way.
Regards,
Paul McKenzie

jfaust
December 6th, 2006, 09:07 AM
"This line is illegal C++: il.swap(list<int>()); because swap takes a non-const reference and you cannot bind a temporary to it."

I think I believe you, but I'm confused because:
1. VS2005 compiles it. Although I've found areas where it does incorrectly compile things that should fail, I've not found anything as blatant as this.
2. The compiled code does the right thing.
C++ Coding Standards does recommend your approach (Item 82).
Jeff

screetch
December 6th, 2006, 09:11 AM
Under Visual 2005 this code issues a warning (C4239):
// nonstandard extension used : A non-const reference may only be bound to an lvalue
Use a warning level of 4 to see it. As you can see, it's an extension.

jfaust
December 6th, 2006, 09:25 AM
"Under Visual 2005 this code issues a warning (C4239): // nonstandard extension used : A non-const reference may only be bound to an lvalue"

Grrrr. Bit again. You should have to explicitly turn on extensions via a compiler switch.
Jeff

screetch
December 6th, 2006, 09:27 AM
There is a switch to turn them off, but then <windows.h> won't parse because it contains unnamed structures, which is an extension. In our project we use Visual 2005 to get an executable but test our core libraries under MinGW to detect other warnings or stupid extensions.
http://forums.codeguru.com/archive/index.php/t-408299.html
Setting up a good indexing strategy will help you get rid of duplicate pages and useless ones that have no traffic or impressions. Removing the duplicate pages will help you reduce keyword cannibalization and rank better on the newly added content on your website. I have discussed finding duplicate posts and topics on your website and dealing with outdated content in my latest posts. This post will discuss how to automate your indexing strategy by removing unwanted pages from the Google index, updating their status directly from Python in the WordPress database without touching the CMS. This script is very useful if you have a website with a large number of pages that are not driving any traffic or are simply outdated.

This tutorial will work only with WordPress sites that are using the Yoast SEO plugin. We will define the pages we want to noindex and execute queries against the WordPress database to update their status. This is an advanced tutorial that executes MySQL queries on your database, so make sure to apply it first in a staging environment to verify that it works; then you can move to the production server. And always take a backup of your database before doing anything.

We will start by getting the full list of URLs of any WordPress website using the Yoast SEO plugin. The website I will work on in this tutorial was built for educational and experimental purposes. All the content is scraped from other websites, and the sources are mentioned at the end of each page.
Import the libraries:

    import pandas as pd
    from urllib.parse import unquote
    import requests
    from bs4 import BeautifulSoup as bs
    from tqdm.notebook import tqdm
    from datetime import date

Define a user agent and the website URL:

    ua = "Mozilla/5.0 (Linux; {Android Version}; {Build Tag etc.}) AppleWebKit/{WebKit Rev} (KHTML, like Gecko) Chrome/{Chrome Rev} Mobile Safari/{WebKit Rev}"
    website_url = ""
    posts_xml = requests.get(website_url + "/sitemap_index.xml", headers={"User-Agent": ua})

Parse the URLs from the XML sitemap index and get rid of the uploads to keep only the posts:

    posts_xml_content = bs(posts_xml.text, "xml")
    posts_sitemap_urls = posts_xml_content.find_all("loc")
    post_sitemap_count = 0
    for sitemap_item in posts_sitemap_urls:
        if sitemap_item.text.find("post-") > -1:
            post_sitemap_count += 1

Read and parse the URLs from all the XML sitemaps, and store them in a Pandas data frame:

    xml_list = []
    urls_titles = []
    for i in tqdm(range(1, post_sitemap_count + 1)):
        xml = f"{website_url}/post-sitemap{i}.xml"
        xml_response = requests.get(xml, headers={"User-Agent": ua})
        xml_content = bs(xml_response.text, "xml")
        xml_loc = xml_content.find_all("loc")
        for item in xml_loc:
            uploads = item.text.find("wp-content")
            if uploads == -1:
                xml_list.append(unquote(item.text))
                urls_titles.append(unquote(item.text.split("/")[-2].replace("-", " ").title()))
    xml_data = {"Page": xml_list, "Title": urls_titles}
    xml_list_df = pd.DataFrame(xml_data, columns=["Page", "Title"])
    xml_list_df.to_csv("urls-from-xml.csv", index=False)
    print("Done")

Now we have all the post URLs from our website:

    xml_list_df

Now, we need to get the traffic for our posts from Google Search Console. The best way to do that is to use the Google Search Console API to return the complete data for the last 16 months. You can do that by using an add-on for Google Sheets called Search Analytics for Sheets (tutorial) or using a Python script that our friend Jean Christophe has written before.
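Stepping back to the URL parsing for a moment: the slug-to-title trick above (take the last path segment, turn hyphens into spaces, title-case the result) is easy to verify in isolation. A small sketch, with a hypothetical URL of my own since the tutorial's site address is blanked out:

```python
from urllib.parse import unquote

def title_from_url(url: str) -> str:
    """Derive a human-readable title from a post URL,
    mirroring the split("/")[-2] trick used above."""
    return unquote(url).split("/")[-2].replace("-", " ").title()

print(title_from_url("https://example.com/my-first-post/"))  # → My First Post
```

The `[-2]` index works because a trailing slash leaves an empty final element after splitting, so the slug is the second-to-last piece.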
I recommend visiting his blog to learn more about Python and SEO. After getting the data from the Google Search Console API, we need to import it into our script. We can do that with the Pandas library:

    gsc_data = pd.read_csv("source/gsc_data.csv")
    gsc_data

We can see that only 925 of the 2,495 pages have clicks and impressions; the rest of the pages are not getting anything, for many reasons. (I have mentioned before that this website is only for learning and experimenting.)

Now we want to merge the two datasets to analyze the data. We will merge the complete URL list with the links that have clicks and impressions:

    merged_data = pd.merge(xml_list_df, gsc_data, on="Page", how="left")
    merged_data

Let's assume that our strategy will be to noindex any URL with 0 impressions in the last 16 months. Don't follow these steps blindly; you will break your website's ranking. You need a solid indexing strategy based on your own data.

We will divide the URLs into two groups, URLs with impressions and URLs without impressions. We can do that with the following steps:

    def update_impressions(impressions):
        # NaN is the only value not equal to itself, so this catches
        # pages that were missing from the GSC data after the left merge.
        if impressions != impressions:
            return "No Impressions"
        else:
            return "Has Impressions"

    merged_data["Impressions"] = merged_data["Impressions"].apply(lambda impressions: update_impressions(impressions))
    merged_data

Now we will analyze the traffic with Python to see the percentage of our pages that have impressions:

    import matplotlib.pyplot as plt

    impressions_count = merged_data.loc[merged_data["Impressions"] == "Has Impressions"]
    no_impressions_count = merged_data.loc[merged_data["Impressions"] == "No Impressions"]
    impressions_groups = [len(impressions_count), len(no_impressions_count)]

    plt.pie(impressions_groups, labels=["Has Impressions", "No Impressions"], autopct='%1.1f%%')
    plt.title('Impressions')
    plt.axis('equal')
    plt.show()

We can see that 75% of the pages have 0 impressions, and only 25% have impressions, out of the total of 2,495 pages.
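The pattern used above, a left merge followed by flagging rows whose Impressions column came back empty, can be sketched with a toy pair of frames (the data below is made up for illustration):

```python
import pandas as pd

# Toy stand-ins: the full URL list and the (smaller) GSC export.
all_pages = pd.DataFrame({"Page": ["/a", "/b", "/c"]})
gsc = pd.DataFrame({"Page": ["/a"], "Impressions": [120]})

# Left merge keeps every page; pages absent from GSC get NaN.
merged = pd.merge(all_pages, gsc, on="Page", how="left")

# pd.notna is a more readable spelling of the `x != x` NaN test.
merged["Impressions"] = merged["Impressions"].apply(
    lambda v: "Has Impressions" if pd.notna(v) else "No Impressions"
)
print(merged["Impressions"].tolist())
# → ['Has Impressions', 'No Impressions', 'No Impressions']
```

The key point is the `how="left"` argument: an inner merge would silently drop the zero-impression pages, which are exactly the ones this strategy is hunting for.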
Now we need to get the ID for each post from the WordPress database; we will use these IDs to execute the queries that noindex the pages. Before that, we need to get each page's slug. The slug is the part of the URL after the domain: for example, a post whose URL ends in /ubung-kann-bipolarstorung-helfen/ has the slug ubung-kann-bipolarstorung-helfen. Usually WordPress uses the post's title as the slug, but you can change it as you want.

    no_impressions_count["Slug"] = no_impressions_count["Page"].apply(lambda page: page.split("/")[3])
    no_impressions_count

The structure of the WordPress database is clear and easy to understand. We will talk about the tables that we will be using in this tutorial. The most important table is wp_posts; in this table WordPress stores all the details about our content, and a big part of our work will be on this table. You can get any table's structure in MySQL by typing the below command in MySQL Workbench or the Linux terminal:

    describe table_name;

In our case, we want to see the fields in the wp_posts table:

    describe wp_posts;

After understanding the structure of the wp_posts table, we can easily get the ID for each post by executing a query that returns the ID where the slug equals the one in our data:

    select id from wp_posts where post_name = "post slug from our data frame";

In Python, we can do it with the following steps.

Import the library:

    import mysql.connector

Establish a connection to your database:

    mydb = mysql.connector.connect(
        host="localhost",
        user="root",
        password="root",
        database="my_db_name"
    )

In this step you need four things:

- Host IP: the IP of the server you are hosting your website on; in most cases it will be localhost.
- Database Username: the username of the database. Here I am using root as an example.
- Database Password: the password of the database; again, I am using root as an example.
- Database Name: the name of your WordPress database.
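Before moving on, note why `page.split("/")[3]` picks out the slug: it relies on URLs having the shape `https://domain/slug/`, where index 3 is the first path segment. A quick sketch (the example URL is mine):

```python
# Why split("/")[3] yields the slug for URLs shaped https://domain/slug/
url = "https://example.com/ubung-kann-bipolarstorung-helfen/"
parts = url.split("/")
# parts: ['https:', '', 'example.com', 'ubung-kann-bipolarstorung-helfen', '']
slug = parts[3]
print(slug)  # → ubung-kann-bipolarstorung-helfen
```

If your site nests posts under a category prefix (e.g. `/blog/slug/`), index 3 would return the prefix instead, so the index has to match your permalink structure.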
Define a function to execute the queries:

    def get_wp_id(slug):
        sql = f"select id from wp_posts where post_name = '{slug}';"
        wp_id_query = mydb.cursor(buffered=True)
        wp_id_query.execute(sql)
        wp_id = wp_id_query.fetchone()
        return wp_id[0]

Create a new column in our data frame and insert the WordPress ID into it:

    no_impressions_count["WP ID"] = no_impressions_count["Slug"].apply(lambda slug: get_wp_id(slug))
    no_impressions_count

Now we have the WordPress post ID for each URL.

This is the most important step in our script, where we will execute queries to update the indexing status for each post in our data frame. The following query will be executed on a table called wp_yoast_indexable. This table is added by the Yoast plugin, which stores all its indexing details inside it. The "wp_" prefix is the default that WordPress adds to each table; you can choose another one when installing a new WordPress site, which is recommended for security reasons.

Before that, let's take a look at the wp_yoast_indexable table to understand its structure. You can do that by typing the following snippet in MySQL Workbench or on your Linux terminal:

    describe wp_yoast_indexable;

Below are explanations for some of the fields in the wp_yoast_indexable table. As you can see, you can control almost everything related to SEO from this table; it's very powerful, and you can do many automation tasks using it.

Our query will update the indexing status, depending on the post ID that we have in our data frame, from NULL to 1. The value 1 means noindex, and NULL or 0 means index. There is a relation between wp_posts and wp_yoast_indexable, which is the post ID: in wp_posts the field is called ID, and in wp_yoast_indexable it's called object_id.
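As an aside before writing that update query: interpolating the slug into SQL with an f-string, as `get_wp_id` does, is fragile (a slug containing a quote breaks the statement) and is an injection risk; MySQL Connector/Python supports `%s` placeholder parameters instead. Since MySQL isn't available here, the sketch below demonstrates the same placeholder pattern against an in-memory SQLite table standing in for `wp_posts` (the table contents are invented for the demo, and SQLite's placeholder is `?` rather than `%s`):

```python
import sqlite3

# Stand-in for the wp_posts table; schema reduced to the two fields we query.
db = sqlite3.connect(":memory:")
db.execute("create table wp_posts (id integer primary key, post_name text)")
db.execute("insert into wp_posts values (7, 'ubung-kann-bipolarstorung-helfen')")

def get_wp_id(slug):
    # The driver quotes the value safely; no f-string interpolation needed.
    cur = db.execute("select id from wp_posts where post_name = ?", (slug,))
    row = cur.fetchone()
    return row[0] if row else None

print(get_wp_id("ubung-kann-bipolarstorung-helfen"))  # → 7
```

With `mysql.connector`, the equivalent call would be `cursor.execute("select id from wp_posts where post_name = %s", (slug,))`.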
So our query will look like this:

    update wp_yoast_indexable set is_robots_noindex = 1 where object_id = 12345;

Execute the query in Python:

    def noindex_post(wp_id):
        sql = f"update wp_yoast_indexable set is_robots_noindex = 1 where object_id = {wp_id};"
        noindex_query = mydb.cursor(buffered=True)
        noindex_query.execute(sql)
        mydb.commit()
        return "noindex"

    no_impressions_count["Index Status"] = no_impressions_count["WP ID"].apply(lambda wp_id: noindex_post(wp_id))

Now all the posts in our data frame have a noindex meta robots tag. You can randomly check the pages to make sure it worked, or you can execute more queries to get the number of indexed and noindexed pages from the database, as in the script below.

Get the number of noindexed pages from the database:

    sql = "select count(*) from wp_yoast_indexable where is_robots_noindex = 1 and object_type = 'post';"
    check_index_query = mydb.cursor(buffered=True)
    check_index_query.execute(sql)
    noindexed_count = check_index_query.fetchone()[0]
    print(noindexed_count)

Get the total number of all posts:

    sql = "select count(*) from wp_posts where post_status = 'publish';"
    all_posts_query = mydb.cursor(buffered=True)
    all_posts_query.execute(sql)
    all_posts_count = all_posts_query.fetchone()[0]
    print(all_posts_count)

We see that the numbers match our data frame exactly, so our script worked without any issues. We can also visualize the numbers to compare the percentages with the original data frame, which was 75% of the pages without impressions and 25% with impressions:

    # Compare noindexed pages with the remaining (still indexed) pages.
    index_vs_noindex_posts = [noindexed_count, all_posts_count - noindexed_count]
    plt.pie(index_vs_noindex_posts, labels=["noindex", "index"], autopct='%1.1f%%')
    plt.title('Index Vs Noindex Pages')
    plt.axis('equal')
    plt.show()

The above chart shows the same percentages: 75% noindex and 25% index.

After doing all the above, it's better to have all the details about your indexed and non-indexed pages in a sheet.
You might need them later, or if you want to revert your work in case you did anything wrong. We can do that by getting the data from our WordPress database. This time, we will join two tables together to retrieve the details below. The query will do the job for us, and we can run it from Python to store the data in a data frame and then export it to CSV:

    sql = "..."  # the join query over wp_posts and wp_yoast_indexable described above

    report_query = mydb.cursor(buffered=True)
    report_query.execute(sql)
    indexing_data = report_query.fetchall()

    wp_ids = []
    wp_titles = []
    wp_urls = []
    wp_indexing = []
    wp_indexing_date = []
    wp_update_link = []

    for row in indexing_data:
        wp_ids.append(row[0])
        wp_titles.append(row[1])
        wp_urls.append("" + row[2])
        if row[3] == 1:
            wp_indexing.append("Not Indexed")
        else:
            wp_indexing.append("Indexed")
        wp_indexing_date.append(date.today())
        wp_update_link.append(f"{row[0]}&action=edit")

    wp_dic = {"WP ID": wp_ids, "WP Post Title": wp_titles, "WP URL": wp_urls, "WP Indexing Status": wp_indexing, "WP Indexing Updated Date": wp_indexing_date, "WP Update Link": wp_update_link}
    wp_df = pd.DataFrame(wp_dic, columns=["WP ID", "WP Post Title", "WP URL", "WP Indexing Status", "WP Indexing Updated Date", "WP Update Link"])
    display(wp_df)
    wp_df.to_csv("indexing-data.csv", index=False)

And the final results will look like this.

You can play with the script to do more automation with WordPress by changing little things. I hope this script will help you in your SEO work. If you have ideas on automating other WordPress tasks using Python, please share them with me on my LinkedIn profile. I am always open to new ideas and collaborations on projects like this.
https://www.nadeem.tech/automate-your-seo-indexing-strategy-with-python-and-wordpress/
Type: Posts; User: gakushya

- machine learning nlp speech generation speech recognition
- after reading this post i think i finally understand the importance of mime types :)
- I think you should query the matrix entry, saving the resultant string, and then testing that using a switch. I'm new to PHP myself though, so I'm just thinking out loud here. But it i think...
- It does indeed make me think... was that spam?
- I'd stop using that dictionary right away, sounds like bombastic bullshit. Or the dictionary is old and the usage has changed. Those definitions seem downright incorrect. Unfortunately, as I've...
- Are you sure JSP isn't just getting some of that runoff Java rumour that 'its too slow to do anything'? I know nothing about JSP, I use PHP, but I doubt that it differs performance wise for just...
- So, if you play video games, I'm sure you're well familiar with the issues that plague almost every single game (or at least a lot of the heavy hitters of the modern market). Some dev teams are even...
- tensors!
- Generating all possible numbers? I thought this was some sort of philosophical thing when I clicked here
- Uhm, you took out the telephone number on the contact form. I used it again now, and received this message: Sorry, but there were error(s) found with the form you submitted. These errors appear...
- updated the review after a night's rumination. I see that google maps has now been used in the website, so I guess someone read this review, or that was just a coincidence.
- pm me when you remake the site, so I can post another review here.
- Okay, I'll tell you what I like first, from a user's perspective obviously. The component layout: The whole website is just three equal width rectangles. Simple, easy to navigate the content...
- thanks for confirmation!
- oh i just remembered that with a static initialization block it would work:

      public class hello {
          static {
              int[] vector = new int[10];
              vector[0]=3;
          }
          public static void...
This code compiles without error:

    public class hello {
        public static void main(String[] args){
            int[] vector = new int[10];
            vector[0]=3;
        }
    }
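The rule these two snippets illustrate is that statements such as `vector[0]=3;` may only appear inside a method, constructor, or initializer block, never loose at class level. A minimal completion of the truncated static-initializer version from the previous post (pared down to the essentials):

```java
// A static initializer block runs once when the class is loaded,
// so the array assignment is legal here, unlike at bare class level.
class hello {
    static int[] vector = new int[10];

    static {
        vector[0] = 3;
    }
}
```

The block executes before any code can read `hello.vector`, so the first element is guaranteed to be 3 by the time the class is usable.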
http://forums.codeguru.com/search.php?s=c4c629c5fb6fc24f9d7157a9c9ccc0cf&searchid=5798123
When you started your Ext JS 4.* project with Sencha Cmd you know how easy… This article will discuss the following concepts:

- Source Control
- Sencha Architect for teams
- Editors & IDEs and configuration
- Code analysis tools
- Code reviews
- Test tools
- Build processes

Source Control

Use a versioning / source control system to track changes, share your code and save your code revisions as easy backups. A popular version control tool is Git (but you can use any versioning tool of choice, such as SVN, CVS, Mercurial, etc.).

Working with Git

Internally at Sencha we use Git & GitHub. What's important to know is that you don't want to check in certain files and folders. When you do check in the framework or build folders, keep in mind that there is a greater chance of Git conflicts, and your code base will become extremely large. To make sure you don't check in these files by accident, create a .gitignore file in your project root. I often use these ignore rules:

    # OS generated files #
    ######################
    */.DS_Store
    .DS_Store
    .DS_Store?
    ._*
    .Spotlight-V100
    .Trashes
    Icon?
    ehthumbs.db
    Thumbs.db

    # Packages #
    ############
    # it's better to unpack these files and commit the raw source
    # git has its own built in compression methods
    *.7z
    *.dmg
    *.gz
    *.iso
    *.jar
    *.rar
    *.tar
    *.zip

    # Sencha Development #
    ######################
    .architect
    .project
    .sencha/
    .sass-cache/
    ext/
    touch/
    temp/
    build/

TIP: Wait? You didn't check in the frameworks? Yep. Usually I prefer to keep my version control light and clean. (I can tell you how much pain it is when whole versions of Ext JS are checked into Git, and how horrible it is to use the Git client while it's slow or crashing.) To give you an impression, the Sencha SDK is over 100 MB. So not checking in the SDK means you will have to generate a new application/workspace with the same namespace and copy over the files.
TIP: In case you by accident already committed certain files to Git, you have to remove them from Git before ignoring them. For example:

    git rm file1.txt
    git commit -m "remove file1.txt"
    git push

In case you have all these annoying OS-generated files in Git, you can remove them like this:

    find . -name '*.DS_Store' -type f -delete

For more information, check out the official Git documentation.

What about Git and Sencha Architect?

When you're familiar with Git, you shouldn't have problems collaborating with multiple developers using Sencha Architect. The Sencha Architect metadata code and resultant JavaScript are very source-control friendly. But it is good to know that Sencha Architect creates metadata (see the metadata folder in the folder structure of a Sencha Architect project). This metadata is used to generate the JavaScript in the app folder. (Basically, you don't need to put the app folder under source control, since Sencha Architect auto-generates these files once you open and save the project in Sencha Architect.) Because of this metadata it is possible to work on the same Sencha Architect project with multiple developers.

As you can see in the screenshot, the metadata maps to a folder structure similar to the application file structure. Every Sencha class has its own metadata file, which is basically a JSON object. As long as you don't work on the same files, there won't be any conflicts.

For more information about using Sencha Architect in project teams, take a look at this series of blog posts written by Richard G Milone, who works for CNX. It explains the process really well.

Sencha & Git in general

Whether you are working with Sencha Architect or just writing code yourself, the best practice is to define (smaller) classes and nest them through xtypes. Every (view) component should have its own class, with its own namespace.
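Returning to the earlier tip about accidentally committed files: plain `git rm` also deletes the file from your working tree. If you only want Git to stop tracking it (the usual case for build output or OS junk), `git rm --cached` removes it from the index while leaving it on disk. A sketch in a throwaway repository, so nothing real is touched:

```shell
set -e
repo=$(mktemp -d)                         # scratch repo for the demo
cd "$repo"
git init -q
echo "junk" > Thumbs.db
git add Thumbs.db
git -c user.email=me@example.com -c user.name=me commit -qm "oops: committed junk"

git rm --cached -q Thumbs.db              # untrack, but keep the file on disk
echo "Thumbs.db" >> .gitignore            # ignore it from now on
test -f Thumbs.db && echo "still on disk"
```

After the next commit, the file is gone from Git's history going forward but still present locally, which is exactly what you want for generated folders like build/.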
We don't need to worry about all these separate files, since the Sencha build process (Sencha Cmd; Sencha Architect uses Sencha Cmd in the background) concatenates and minifies all these classes into one single small file. This improves readability, usability and maintainability, but think about it: it also improves your workflow with source control systems, because smaller single files reduce the chance of working with your co-worker on the same file.

When you develop your application with Sencha Architect and you drag your components onto the design canvas, by default all these views will be nested in one single file (the viewport). You can promote these smaller view pieces to their own classes, so each becomes a single class file, which will be nested via its xtype.

For example, when you have a viewport with a form with fields and a component with a template, you can promote the form and the detail component to their own classes by right-clicking on the form (or detail component) and selecting "Promote to Class". After you have selected that, you will see a link icon, which indicates that it is linked to its own class. You can start re-using it now too!

Just sometimes, you are both working on the same file. This will result in a merge conflict as soon as you push your version to Git; Git will reject it. I know, this is not nice, but it's not the end of the world. Always make a backup of your own file somewhere else in your file system. You can merge a file: there are various tools available for your editors/IDEs that deal with merging files, and Git provides a graphical user interface which can show the differences. Another solution can be to accept or revert the changes and merge it yourself manually.

TIP: Traditionally we recommend that users do not check in the .architect file or the framework and build folders into their source control systems.
In situations where it is necessary to check these into the repository (for example, because you want to run the application directly in your browser after fetching the project), we suggest that you check them in and then add them to the ignore list so that there will not be further conflicts.

Last but not least, the overall best solution for working in a team with source control is communication! It's just so much easier if you let your co-worker know which file / part of the app you are working on!

Editors & IDEs

While writing Sencha code you can use any editor or IDE of choice. Here are a couple of suggestions:

- WebStorm / IntelliJ IDEA from JetBrains - IDEA is an IDE great for Java developers; WebStorm is their JavaScript version of the IDE. What's nice about WebStorm is that it recognizes the Sencha frameworks, so you can use code auto-completion. WebStorm is commercial; IDEA has an open source version.
- Eclipse - Open source IDE mostly focused on Java development.
- Sublime Text Editor - A simplistic editor for code and markup with amazing performance. (Commercial)
- Brackets - A modern, simplistic open source editor, great for web development.

All of these tools support plugins, for example to integrate code analysis tools or source control tools.

Editor / IDE configuration

When you work in a team, what's most important is that your editors/IDEs are configured the same way. Common editor settings are:

- Indent Style: set to tab or space to use hard tabs or soft tabs respectively.
- Indent Size: a whole number defining the number of columns used for each indentation level and the width of soft tabs (when supported).
- Tab Width: a whole number defining the number of columns used to represent a tab character.
- End of Line: set to lf, cr, or crlf to control how line breaks are represented.
- Charset: set to latin1, utf-8, utf-16be or utf-16le to control the character set.
- Trim trailing whitespace: enable to remove whitespace characters preceding newline characters; disable to leave them in place.
- Insert final newline: enable to ensure the file ends with a newline.

When these settings are out of sync in a team, you can run into many source control conflicts and hard-to-resolve file merges.

Code analysis tools

There are a couple of tools you can use to analyse your code. Popular tools for JavaScript development are:

- JSLint - A JavaScript syntax checker and validator of coding rules, written by Douglas Crockford.
- JSHint - A community-driven fork of JSLint, which is not as strict as JSLint.

There are many plugins available for IDEs and editors to check JavaScript code while writing, using the above tools.

Analysing code with Sencha Cmd

It's also good to know that Sencha Cmd does code checking. Every time you run sencha app build or sencha app build testing on the command line, it will validate your JavaScript code. Lint errors will show up as parse warnings in your console. Not only does it check for JavaScript errors, it will also check your Sass stylesheet for errors before compiling it to production-ready CSS.

Sencha Cmd has Rhino 1.7 and PhantomJS under the hood. These are JavaScript interpreters without the DOM implementation of a browser, so they can run some nice tasks, such as linting/validating or testing the code while building. For example, Sencha Cmd uses PhantomJS for its image slicer: because of PhantomJS, Sencha Cmd can take a screenshot and slice it into images to serve to older browsers. For more information, see the Sencha Cmd documentation.

Code Reviews

It is also possible to let Sencha check your code. You can hire a Sencha Professional Services consultant who can review your code at certain points in your development process. We will check for best practices and see how to optimize your application and its performance.
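The editor settings listed under "Editor / IDE configuration" map directly onto EditorConfig properties, which all of the editors mentioned above can read (natively or via a plugin), so the whole team shares one checked-in configuration file. A sketch of such an `.editorconfig`; the values below are illustrative examples, not a recommendation:

```ini
# .editorconfig at the project root; values are examples only
root = true

[*]
indent_style = space
indent_size = 4
tab_width = 4
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
```

Because the file lives in the repository, it travels with the code, which removes the "configured the same way" burden from each individual developer.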
Tools for testing your Sencha code

Let's look at ways to test your Sencha code:

- Jasmine - Jasmine is an open source unit testing framework for JavaScript. Unit tests attempt to isolate small pieces of code and objectively verify application logic. Jasmine aims to run on any JavaScript-enabled platform, to not intrude on the application nor the IDE, and to have easy-to-read syntax.
- Siesta - Siesta is a JavaScript testing tool that can help you test any JavaScript code and also perform testing of the DOM and simulate user interactions. UI tests attempt to subjectively verify that elements on the screen behave (and often look) as expected, both statically (i.e. the flat render) and dynamically (i.e. as users perform given actions). Siesta from Bryntum is the best tool on the market. Using the API, you can choose from many types of assertions, ranging from simple logical JS object comparisons to verifying that an HTML element is visible in the DOM. It is possible to test JavaScript in the browser, and you can automate your tests.

Once you have created your test suite, you should consider running it in the cloud using great services we support, such as Sauce Labs or BrowserStack.

Build process

Sencha Cmd (and our build processes) run on top of Apache Ant. Apache Ant is a software tool for automating software build processes. It is implemented in Java and therefore requires the Java platform. You can write / wire up your own build process and code analysis tools as well. You can write these hooks in the build.xml file (in the project root). The file looks like this:

    <?xml version="1.0" encoding="utf-8"?>
    <project name="BarFinder">
        <!--
        The following targets can be provided to inject logic before and/or
        after key steps of the build process:

        The "init-local" target is used to initialize properties that may be
        personalized for the local machine.
        <target name="-before-init-local"/>
        <target name="-after-init-local"/>

        The "clean" target is used to clean build output from the build.dir.
        <target name="-before-clean"/>
        <target name="-after-clean"/>

        The general "init" target is used to initialize all other properties,
        including those provided by Sencha Cmd.
        <target name="-before-init"/>
        <target name="-after-init"/>

        The "page" target performs the call to Sencha Cmd to build the
        'all-classes.js' file.
        <target name="-before-page"/>
        <target name="-after-page"/>

        The "build" target performs the call to Sencha Cmd to build the application.
        <target name="-before-build"/>
        <target name="-after-build"/>
        -->
    </project>

As you can see, lots of code is commented out here, so nothing is really happening yet, but you can create your own hooks. There's a Sencha guide online which lists the available tasks you can use.

For example, here's a code snippet I have used to create different build packages, where the folder name of the build contains a date:

    <target name="-after-build">
        <tstamp>
            <format property="today" pattern="yyyy-MM-dd"/>
        </tstamp>
        <copy todir="${build.dir}/../../dist/${app.name}/${today}-mybuild" overwrite="true">
            <fileset dir="${build.dir}">
                <include name="**/*" />
            </fileset>
        </copy>
    </target>

For more information, please see the Sencha Cmd documentation.

Conclusion

When you are building serious enterprise applications, you will need to come up with a strategy for how to analyze, test and collaborate on your code. As you can see, choosing Sencha empowers developers to design, develop, test and deploy in development teams of any size.
https://www.leeboonstra.com/developer/how-to-improve-your-sencha-code-while-working-in-large-teams/
This is a clone of the original Frogger by Atari. This is my first game using C#. I hope that after you read this article, you take with you some skills needed to make your own fun games. The clipart is believed to be in the public domain, taken from free clipart sites.

I have used timers and OnPaint events to draw the images, using GDI double-buffering to reduce flicker. I have used both Quartz.dll and my own personal class (tonysound.cs), which I wrote an article on, to play sound events and background music. Click on my articles link to find this article.

I have set up several variables for speed, as you can see in the code #region titled variables. The speed variable controls the increments of the Image object being drawn. For instance, if I want the object to move slowly, I use a low increment so it doesn't move across the screen as fast. For each level of the game, I add one to the current speed. This increases the object's movement increments and makes the object appear to move faster.

X Coordinates

The X coordinates like Lane1var1 and so on are for the X coordinate on the playfield. Each object has a location given by its X,Y coordinates, so if we change the X, it will obviously go either left or right. This is how the frogs and cars move. I increment the X for each Image object inside the timer, for example:

    if (long1varx < 672)
        long1varx += logset1speed;
    else
        long1varx = -71;

    if (long2varx < 672)
        long2varx += logset1speed;
    else
        long2varx = -71;

    if (long3varx < 672)
        long3varx += logset1speed;
    else
        long3varx = -71;

As you can see here, I check whether the first log in the first row is less than 672. If it is, I add logset1speed to it, causing it to change its X position. Otherwise, I reset it to its starting position of -71. I do the same for the other logs as well.
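The wrap-around rule in the timer can be factored into a tiny pure function for testing. The article's code is C#, which can't be run here, so this sketch uses Java; the 672 right edge and the -71 reset position come straight from the snippet above:

```java
// One movement tick for a scrolling sprite: advance by the lane speed,
// or respawn at x = -71 once it has reached the right edge (672).
class LaneScroll {
    static int step(int x, int speed) {
        return (x < 672) ? x + speed : -71;
    }
}
```

Each log's per-tick update is then, in spirit, `long1varx = LaneScroll.step(long1varx, logset1speed);`, and raising `logset1speed` per level is exactly what makes the logs appear to move faster.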
Now, for the frog: I update his x value inside the key event, and then when timer1 calls the drawstuff(g) function, it gets the updated x value of the frog and plots the graphic accordingly.

    public void OnKeyPress(object sender, System.Windows.Forms.KeyEventArgs e)
    {
        ....
        if (e.KeyCode == System.Windows.Forms.Keys.Left)
        {
            Frogx -= 30;
            Sound.Play(hopsound, PlaySoundFlags.SND_FILENAME |
                       PlaySoundFlags.SND_ASYNC | PlaySoundFlags.SND_NOSTOP);
        }
        .......

What happens above is that the key press event is fired with each key press. I then check whether the left arrow is pressed, the right arrow, and so on, and move the frog's coordinates based on the arrow: if it's left, I subtract the move increment from the frog's x location; if it's right, I add it.

Bool values for Target Taken and Snake Taken

I have bool values like Target1taken and Snake1Taken. These hold true or false to determine whether the frog can enter a home base. If the target is taken, the frog has already made it there and can't reenter. If a snake is there, it kills the frog, so we test whether the snake is present, and if the frog jumps into that space, he dies.

Static variable for the score, and why

If you are wondering why I made the score variable static, here is why. If you are on another form that uses the same namespace, you can't use a variable from the other form without making an instance of an object. So I cheat a little and make the variable static; then all I have to do is type formname.variablename, for example Form1.Score, and I don't have to make an instance of the form in, say, Form4 at all.

WHAT'S THAT NOISE?

Well, just like pretty much any game, you have sound events and you have background music. This is no exception. If you read my article about adding sound to C#, you will get an idea of what I have done. I used my class tonysound.cs, which uses winmm.dll, to play my sound events, like the frog hopping and dying.
Now, the background music is a little different. You see, winmm.dll doesn't support sounds being played in parallel, so I had to come up with something a little different for the background. I hear you over there asking how, so let's dive into it.

In order to use Quartz.dll, we have to add it to the references of our project. To do this, we go to the main project name in Solution Explorer and hit Add Reference. When the dialog box pops up, we hit the Browse button, go to the windows\system32 directory and find quartz.dll. Once it's added, hit OK. You will then see that QuartzTypeLib has been added. Now we are ready to get busy making some sound play with this cool DLL.

Well, like pretty much everything else, we need an object, so let's make one:

    public QuartzTypeLib.FilgraphManagerClass mc; // quartz object needed to play background

Now that we have the object, we need to do a few more things before we actually get the sound playing:

    QuartzTypeLib.FilgraphManager graphManager = new QuartzTypeLib.FilgraphManager();
    // QueryInterface for the IMediaControl interface:
    mc = (QuartzTypeLib.FilgraphManagerClass)graphManager;

What we have done above is create a graph manager needed for playing the sound. You will notice I assign mc the value of graphManager, which has been cast to the type FilgraphManagerClass. Now that this is done, we are ready to rock and roll with the mc object that's going to play our sound. So we have to make a thread that calls soundbackground to play our sound:

    backgroundplay = new Thread(new ThreadStart(soundbackground)); // start thread to play music

Before starting this thread, we want to make it run in the background, so that when we close our program we do not leave processes that fail to close.
So, to do this, we do the following before starting the thread:

backgroundplay.IsBackground = true; // make the thread a background thread
backgroundplay.Start(); // start the music playing thread

void soundbackground()
{
    while (true)
    {
        // Call some methods on a COM interface.
        // Pass in file to RenderFile method on COM object.
        mc.RenderFile("backmusic.mp3"); // loads the file in
        mc.Run(); // starts the playing
        if (mc.CurrentPosition == mc.Duration) // checks if ended
        {
            mc.CurrentPosition = 0; // if we have ended, we want to start over
        }
        // -1 blocks this event infinitely, and soundvar is an
        // event code that gets triggered after time out
        mc.WaitForCompletion(-1, out soundvar);
    }
}

If you look under the #region section called Images, you will see where I have made some image objects. These are the images which you see on the screen. To test the bounds, we have to represent each image as a rectangle. Look at the #region Bounds and you will see the declarations.

public System.Drawing.Rectangle RectangleFrog;
public System.Drawing.Rectangle RectangleCar3;
public System.Drawing.Rectangle RectangleCar2;
public System.Drawing.Rectangle RectangleCar1;
public System.Drawing.Rectangle RectangleFastCar;
public System.Drawing.Rectangle RectangleTractor;
public System.Drawing.Rectangle RectangleCow;

Now that we have declared the objects, we need to define them. So, we do the following. You will see this in the tickme sections. I could have placed it elsewhere, but I didn't.
RectangleCar1 = new Rectangle(Lane1var1, 312, Jeep1_Lane1.Width, Jeep1_Lane1.Height);
RectangleCar2 = new Rectangle(Lane1var2, 312, Jeep2_Lane1.Width, Jeep2_Lane1.Height);
RectangleCar3 = new Rectangle(Lane1var3, 312, Jeep2_Lane1.Width, Jeep2_Lane1.Height);
RectangleFastCar = new Rectangle(Lane2var, 350, FastCar.Width, FastCar.Height);
RectangleTractor = new Rectangle(Lane3var, 370, Tracter.Width, Tracter.Height);
RectangleCow = new Rectangle(Lane3var2, 370, Cow.Width - 10, Cow.Height - 10);

Now that we have our rectangle bounds defined for our objects, we can test to see if they intersect with other objects. This is useful because if, say, the frog intersects with a car, then we obviously want the frog to die. C# makes this easy for us with a function called IntersectsWith. If you look in the tickme function, you will see where I check to see if the frog is hit by the cars, the tractor, etc.

if (RectangleFrog.IntersectsWith(RectangleCar2) ||
    RectangleFrog.IntersectsWith(RectangleCar1) ||
    RectangleFrog.IntersectsWith(RectangleCar3) ||
    RectangleFrog.IntersectsWith(RectangleFastCar) ||
    RectangleFrog.IntersectsWith(RectangleTractor) ||
    RectangleFrog.IntersectsWith(RectangleCow))
{
    lives--;
    livesfunction();
}

What the above does is check whether the frog image touches one of the cars, the cow, or the tractor. If it does, we reduce our lives by one and then call the lives function to see if we are out of lives. I will not go into great detail on flicker-free drawing because I wrote an article on it already; please look at the AntiFlicker Graphics article here. After reading the aforementioned article, you will see that you have to enable double buffering because it is off by default. You also have to make an OnPaint event and then call the DrawStuff function from a timer to get the graphics to move without flicker and lag. The high-score is read in line by line from a simple text file.
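The IntersectsWith calls above boil down to an axis-aligned rectangle overlap test. For readers curious what that test does under the hood, here is a minimal Python sketch; the positions and sizes are made up, not taken from the game's source:

```python
from collections import namedtuple

# Illustrative rectangles, not the game's real coordinates.
Rect = namedtuple("Rect", "x y w h")

def intersects(a, b):
    # Axis-aligned overlap test, the same idea as
    # Rectangle.IntersectsWith: two rectangles overlap unless one
    # lies entirely left of, right of, above, or below the other.
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

frog = Rect(100, 312, 30, 30)
car = Rect(110, 312, 60, 25)   # overlaps the frog: lose a life
cow = Rect(400, 370, 50, 40)   # far away: no collision

print(intersects(frog, car))  # True
print(intersects(frog, cow))  # False
```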
I thought about using a database but scratched the idea simply because I didn't see the need. I also thought about making the file binary, which I may still do in future versions. Yeah, I know you're saying stop rambling, I want to see how this is done, so here we go...

try
{
    StreamReader s = File.OpenText("Score.txt");
    line1 = s.ReadLine();
    line2 = s.ReadLine();
    scorecompare = Convert.ToInt32(line2);
    label4.Text = line1;
    label5.Text = line2;
    s.Close();
}
catch
{
    MessageBox.Show("Can't find Score.txt");
}

So, let's take a look at what is going on here. First, we make our try and catch so that if the Score.txt file is missing, we can tell the user why the program has failed. We make sure up top we have using System.IO; so we can use the StreamReader. We then make a variable of type StreamReader and link it to the path of Score.txt. Once this is done, we assign line1 the value of the first read, which is line one, then do the same for line2. Then I convert line2, which contains the score, to an integer and store it in scorecompare, so we can compare the high-score to the current score later. Then I simply place the high-score name and the high-score value in their respective label text. Well, you are probably wondering how we go about making this Score.txt file. OK, if you look closely at the code, you will notice that if scorecompare is less than the current score, then the current player has the highest score, so we simply display a form that gets the score; remember, we made it static so we could use it? After we have the score, we want the user's name. Once all this is in place, we have a submit button that does the following:

private void button1_Click(object sender, System.EventArgs e)
{
    FileInfo f = new FileInfo("Score.txt");
    StreamWriter w = f.CreateText();
    w.WriteLine(textBox1.Text.ToString());
    w.WriteLine(Form1.score.ToString());
    w.Close();
    Close();
}

The Quartz.dll is pretty interesting to mess with.
I realized that with it, I could easily make a media player. I thought about making one of my dogs wag his tail as the music plays. This project was pretty fun for me and a great learning experience. I always wanted to learn C# and make a game. With the help of Code Project's members, you know who you are, I was able to turn that dream into reality. One thing that still puzzles me is winmm.dll. I have heard from individuals on IRC that it will play two sounds in parallel, but I have never seen any working code showing how it is done; from what I see, it doesn't have a blocking method, so when another event fires, the one that was last called takes priority. Another thing that kills me: I wanted to do DirectX and still will. I might have to get the newest version of Visual Studio, but anyway, here is the deal. I have tried every method I know of to get DirectX to install correctly. It simply refuses to do so. I followed several methods like registering the DLLs, copying the DLLs, and so forth. So after I get a little less busy, I'm going to uninstall VS, reinstall it all, and see if that works. One other thing: I am now a big fan of timers. They made my life a lot easier. Oh, I did learn one interesting thing. If you include using System.Diagnostics;, you can then call Process.Start("a url"); and it will let your users email you or go to your website or whatever. Hopefully, after reading this article, you will have taken with you some ideas to make some fantastic games for your family and even yourself. I always wanted the old-style Frogger, and sure, I could have bought it, but I figured why not learn how to make it? So there you have it: my version of Frogger. I'm sure there are bugs in this software, and they will be corrected as I find them. If you like this code, do me a favor: send me an email at Junkmail4tony@comcast.net and let me know how you like it.
http://www.codeproject.com/KB/game/froggohop.aspx
I started to design a RESTful webservice with Flask and Python and I'm wondering how one would support multiple API versions in the same project. I'm thinking of putting the requested API version in the URL like this:

/myapp/v1/Users
/myapp/v1.1/Users <= Same as in v1
/myapp/v1.1/Books
/myapp/v2/Users <= Changed in v2
/myapp/v2/Books <= Same as in v1.1

@app.route('/<version>/users')
def users(version):
    # do something
    return jsonify(response)

I am the author of the accepted answer on the question you referenced. I think the /<version>/users approach is not very effective, as you say. If you have to manage three or four different versions, you'll end up with spaghetti code. The nginx idea I proposed there is better, but has the drawback that you have to host two separate applications. Back then I failed to mention a third alternative, which is to use a blueprint for each API version. For example, consider the following app structure (greatly simplified for clarity):

my_project
+-- api/
    +-- v1/
        +-- __init__.py
        +-- routes.py
    +-- v1_1/
        +-- __init__.py
        +-- routes.py
    +-- v2/
        +-- __init__.py
        +-- routes.py
    +-- __init__.py
    +-- common.py

Here, api/common.py implements the common functions that all versions of the API need. For example, you can have an auxiliary function (not decorated as a route) that backs your /users route and is identical in v1 and v1.1. The routes.py for each API version defines the routes and, when necessary, calls into common.py functions to avoid duplicating logic. For example, your v1 and v1.1 routes.py can have:

from api import common

@api.route('/users')
def get_users():
    return common.get_users()

Note the api.route. Here, api is a blueprint. Having each API version implemented as a blueprint helps to combine everything with the proper versioned URLs.
Here is an example app setup code that imports the API blueprints into the application instance:

from api.v1 import api as api_v1
from api.v1_1 import api as api_v1_1
from api.v2 import api as api_v2

app.register_blueprint(api_v1, url_prefix='/v1')
app.register_blueprint(api_v1_1, url_prefix='/v1.1')
app.register_blueprint(api_v2, url_prefix='/v2')

This structure is very nice because it keeps all API versions separate, yet they are served by the same application. As an added benefit, when the time comes to stop supporting v1, you just remove the register_blueprint call for that version, delete the v1 package from your sources, and you are done. Now, with all of this said, you should really make an effort to design your API in a way that minimizes the risk of having to rev the version. Consider that adding new routes does not require a new API version; it is perfectly fine to extend an API with new routes. And changes in existing routes can sometimes be designed in a way that does not affect old clients. Sometimes it is less painful to rev the API and have more freedom to change things, but ideally that doesn't happen too often.
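For readers who want to see the routing mechanics without installing anything, the blueprint idea reduces to "mount a per-version route table under a URL prefix". The following is a stdlib-only Python sketch of that idea; none of these names are Flask APIs, they only mimic what register_blueprint achieves:

```python
# Stdlib-only sketch of the blueprint-per-version idea: each version
# owns a route table, shared logic lives in one place, and
# registration mounts a table under a URL prefix.

def get_users_common():
    # Shared implementation, like a helper in api/common.py
    return {"users": ["alice", "bob"]}

# Per-version "blueprints": route tables that delegate to common code
api_v1 = {"/users": get_users_common}
api_v1_1 = {"/users": get_users_common,
            "/books": lambda: {"books": []}}

app = {}  # stand-in for the application's URL map

def register_blueprint(blueprint, url_prefix):
    # Analogous to app.register_blueprint(api_v1, url_prefix='/v1')
    for route, handler in blueprint.items():
        app[url_prefix + route] = handler

register_blueprint(api_v1, "/v1")
register_blueprint(api_v1_1, "/v1.1")

print(app["/v1/users"]())    # both versions share one implementation
print(app["/v1.1/books"]())  # route added in v1.1 only
```

Dropping v1 support then amounts to removing one registration call, exactly as in the Flask version.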
https://codedump.io/share/mNIWftYzaTlX/1/support-multiple-api-versions-in-flask
Description

Tracking bug to note that the (Tika based) ExtractingRequestHandler will not work properly with JDK 9 starting with build 71. This first manifested itself with failures like this from the tests...

[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=ExtractingRequestHandlerTest -Dtests.method=testArabicPDF -Dtests.seed=232D0A5404C2ADED -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=en_JM -Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[junit4] ERROR 0.58s | ExtractingRequestHandlerTest.testArabicPDF <<<
[junit4] > Throwable #1: org.apache.solr.common.SolrException: Invalid Date String:'Tue Mar 09 13:44:49 GMT+07:00 2010'

Workaround noted by Uwe... The test passes on JDK 9 b71 with: -Dargs="-Djava.locale.providers=JRE,SPI" This re-enables the old locale data. I will add this to the build parameters of Policeman Jenkins to stop this from failing. To me it looks like the locale data somehow is not able to correctly parse weekdays and/or timezones. I will check this out tomorrow and report a bug to the OpenJDK people. There is something fishy with the CLDR locale data. There are already some bugs open, so work is not yet finished (e.g., sometimes it uses wrong timezone shortcuts, ...). Misc comments from Uwe on the mailing list regarding this... I debugged the date parsing problems with a new test (TestDateUtil in solrj). The reason for this failing is the following two things, but they are related (if not even the same bug): - is triggered: Tika uses Date#toString(), which inserts a broken timezone shortcut into the resulting date. This cannot be parsed anymore! This happens all the time in the ROOT locale (see below). - Solr uses Locale.ROOT to parse the date (of course, because it's language independent). This locale is missing all text representations of weekdays or timezones in OpenJDK's CLDR locale data, so it cannot parse the weekday or the time zones.
If I change DateUtil to use Locale.ENGLISH, it works as expected. I will open a bug report at Oracle. ... I opened Report (Review ID: JI-9022158) - Change to CLDR Locale data in JDK 9 b71 causes SimpleDateFormat parsing errors ... I think the real issue here is the following (Rory, can you add this to the issue?): According to Unicode, all locales should fall back to the ROOT locale if the specific locale does not have data (e.g.,). The problem is now that the CLDR Java implementation seems to fall back to the root locale, but the root locale does not have weekdays and time zone short names - our test verifies this: the ROOT locale is missing all this information. This causes all the bugs, also the one in. The root locale should have the default English weekday and timezone names (see). I think the ROOT locale and the fallback mechanism should be revisited in the JDK's CLDR implementation; there seems to be a bug with that (either missing data, or the fallback to defaults does not work correctly). from Balchandra... Here is the JBS id: Uwe Also added some specific DateUtil tests of this without depending on Tika to produce the date values... Hi, I keep this issue open for a while. There is nothing we can do on the Solr side; this is really a bug. The only thing we could do is use Locale.ENGLISH instead of Locale.ROOT for date parsing. But this is just a workaround, and not really a good one. Hi, gives the following: In fact, the parsing of weekday or month names in the root locale was a bug in earlier Java versions. The root locale has, according to Unicode, month names like "M01", "M02", ... - but no English month names. Same with weekdays. Using the root locale is fine for parsing ISO-formatted dates, but some of the formats are clearly "English", e.g. the "Cookie" format or the java.util.Date#toString() format. In Solr we should therefore change those SimpleDateFormats that use English names while parsing to use Locale.ENGLISH. In JDK 9, they fixed the problem, but we are still not 100% correct.
I checked the CLDR locale data; in fact, it has no month names, only those "pseudo names". Otherwise this may break again in later versions or for people using ICU SPIs for timezones or locales. I will provide a patch for those date formats which use English names later (I am currently on vacation, so don't hurry!). We should fix this in 5.3. Here is the patch. I will review Lucene/Solr a second time later, but this should be all "English" date formats that should not use ROOT. Commit 1694276 from Uwe Schindler in branch 'dev/trunk' [ ] LUCENE-6723: Fix date parsing problems in Java 9 with date formats using English weekday/month names. Commit 1694277 from Uwe Schindler in branch 'dev/branches/branch_5x' [ ] Merged revision(s) 1694276 from lucene/dev/trunk: LUCENE-6723: Fix date parsing problems in Java 9 with date formats using English weekday/month names. Commit 1694278 from Uwe Schindler in branch 'dev/branches/lucene_solr_5_3' [ ] Merged revision(s) 1694277 from lucene/dev/branches/branch_5x: Merged revision(s) 1694276 from lucene/dev/trunk: LUCENE-6723: Fix date parsing problems in Java 9 with date formats using English weekday/month names. I also committed to 5.3. I reopen this issue because with Java 9 build 78 there are still problems (which are bugs in the JDK). This time the timezones cannot be parsed correctly. Hi Rory, hi Balchandra, I set up a quick round-trip test (it iterates over all available timezones in the JDK, sets each as the default, creates a String from new Date().toString(), and then tries to parse it back with the ENGLISH, US, and ROOT locales).
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public final class Test {

    private static void testParse(Locale locale, String date) {
        try {
            new SimpleDateFormat("EEE MMM d hh:mm:ss z yyyy", locale).parse(date);
            System.out.println(String.format(Locale.ENGLISH,
                "OK parsing '%s' in locale '%s'", date, locale));
        } catch (ParseException pe) {
            System.out.println(String.format(Locale.ENGLISH,
                "ERROR parsing '%s' in locale '%s': %s", date, locale, pe.toString()));
        }
    }

    public static void main(String[] args) {
        for (String id : TimeZone.getAvailableIDs()) {
            System.out.println("Testing time zone: " + id);
            TimeZone.setDefault(TimeZone.getTimeZone(id));
            // some date today:
            String date1 = new Date(1440358930504L).toString();
            testParse(Locale.ENGLISH, date1);
            testParse(Locale.US, date1);
            testParse(Locale.ROOT, date1);
            // half a year back to hit DST difference:
            String date2 = new Date(1440358930504L - 86400000L * 180).toString();
            testParse(Locale.ENGLISH, date2);
            testParse(Locale.US, date2);
            testParse(Locale.ROOT, date2);
        }
    }
}

With Java 8 this passes; with Java 9 build 78 it fails for several timezones. The funny thing is: SimpleDateFormat is not even able to parse "UTC" - LOL. Could you pass this to the issue after reopening? It's a good test! Specifically, this time this date failed to parse: "Sat Jun 23 02:57:58 XJT 2012" New issue to get hold of this problem: The OpenJDK bug was fixed. Full details of an example failure... Java: 64bit/jdk1.9.0-ea-b71 -XX:-UseCompressedOops -XX:+UseG1GC r1689849
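The failing string above ("Sat Jun 23 02:57:58 XJT 2012") illustrates a general pitfall: parsing Date#toString() output depends on locale-provided weekday, month, and timezone names. The same class of failure can be reproduced with Python's strptime, whose %Z directive likewise only matches zone names the current locale knows; this is an analogy sketch, not the Java code from the issue:

```python
from datetime import datetime

# The string from the bug report; "XJT" is a zone abbreviation the
# default locale does not know.
s = "Sat Jun 23 02:57:58 XJT 2012"

# %Z only matches zone names the current locale knows (plus UTC/GMT),
# so an exotic abbreviation fails to parse, much like SimpleDateFormat
# running against CLDR data that lacks zone short names.
try:
    datetime.strptime(s, "%a %b %d %H:%M:%S %Z %Y")
    parsed_with_zone = True
except ValueError:
    parsed_with_zone = False

# Dropping the unparseable zone token mirrors the spirit of the Solr
# workaround: parse with data that actually has the needed names.
fields = s.split()
stripped = " ".join(fields[:4] + fields[5:])
d = datetime.strptime(stripped, "%a %b %d %H:%M:%S %Y")

print(parsed_with_zone)        # False in a default C/POSIX locale
print(d.year, d.month, d.day)  # 2012 6 23
```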
https://issues.apache.org/jira/browse/LUCENE-6723
US9141717B2 - Methods, systems, products, and devices for processing DNS friendly identifiers

Description

This application claims the benefit of the following patent applications, which are hereby incorporated by reference; the application for patent is based on a disclosure filed on Jul. 17, 1998, as Disclosure Document No. 442,796 and portions of a disclosure filed on Jul. 11, 2001, as Disclosure Document No. 496,673 under the Document Disclosure Program and is a continuation in part of U.S. patent application Ser. No. 09/683,481 filed Jan. 5, 2002, by Schneider, now abandoned, and is a continuation in part of U.S. patent application Ser. No. 09/682,351 filed Aug. 23, 2001, by Schneider, now abandoned, and is a continuation in part of U.S. patent application Ser. No. 09/682,133 filed Jul. 25, 2001, by Schneider, now U.S. Pat. No. 7,194,552, and is a continuation in part of U.S. patent application Ser. No. 09/653,100 filed Aug. 31, 2000, by Schneider, now U.S. Pat. No. 6,760,746, and is a continuation in part of U.S. patent application Ser. No. 09/650,827 filed Aug. 30, 2000, by Schneider, now U.S. Pat. No. 6,901,436, and is a continuation in part of U.S. patent application Ser. No. 09/598,134 filed Jun. 21, 2000, by Schneider, now U.S. Pat. No.
6,895,430, and is a continuation in part of U.S. patent application Ser. No. 09/532,500 filed Mar. 21, 2000, by Schneider, now U.S. Pat. No. 7,136,932, which claims the benefit of U.S. Provisional Application Ser. No. 60/175,825 filed Jan. 13, 2000, by Schneider, U.S. Provisional Application Ser. No. 60/160,125 filed Oct. 18, 1999, by Schneider, U.S. Provisional Application Ser. No. 60/157,075 filed Oct. 1, 1999, by Schneider, U.S. Provisional Application Ser. No. 60/130,136 filed Apr. 20, 1999, by Schneider, U.S. Provisional Application Ser. No. 60/125,531 filed Mar. 22, 1999, by Schneider, U.S. Provisional Application Ser. No. 60/143,859 filed Jul. 15, 1999, and U.S. Provisional Application Ser. No. 60/135,751 filed May 25, 1999.

1. Field of the Invention

This invention generally relates to identifier resolution and processing, and more specifically relates to methods, systems, products, and devices for processing domain name system friendly identifiers.

2. Description of the Related Art

A resource identifier such as a Uniform Resource Identifier (URI) is a compact string of characters for identifying an abstract or physical resource. URIs are the generic set of all names and addresses that refer to objects on the Internet. A URI can be further classified as a locator, a name, or both. A Uniform Resource Name (URN) refers to the subset of URI that are required to remain globally unique and persistent even when the resource ceases to exist or becomes unavailable. A Uniform Resource Locator (URL) refers to the subset of URI that identify resources via a representation of their primary access mechanism (e.g., their network "location"), rather than identifying the resource by name or by some other attribute(s) of that resource. In addition, the last (optional) part of the URL may be a "query string" preceded by "?" or a "fragment identifier" preceded by "#". The fragment identifier indicates a particular position within the specified file.
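The URL anatomy described here can be taken apart programmatically. As a sketch, here is how Python's standard library splits such an identifier; the literal URL is hypothetical, rebuilt from the components the text names (scheme http, a host, port 80, file index.html, fragment "appendix"):

```python
from urllib.parse import urlsplit

# Hypothetical URL assembled from the components described in the
# text; the original literal was lost in extraction.
url = "http://www.example.com:80/index.html#appendix"
parts = urlsplit(url)

print(parts.scheme)    # 'http' -> the scheme or protocol
print(parts.hostname)  # 'www.example.com' -> the FQDN
print(parts.port)      # 80 -> the HTTP port connection
print(parts.path)      # '/index.html' -> the file on the server
print(parts.fragment)  # 'appendix' -> position within the file
```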
For example, in the URL "", "http" is the scheme or protocol, "" is the host server name or Fully Qualified Domain Name (FQDN), "80" is the port connection for the HTTP server request, "index.html" is the filename located on the server, and "appendix" is the identifier to display a specific portion of the HTML file called "index". The URL "" also retrieves an HTML file called "index" on the HTTP server called "example.com". By default, when either a port or filename is omitted upon accessing an HTTP server via a URL, the client browser interprets the request by connecting via port 80 and retrieving the HTML file called "index". A resolver runs on every Transmission Control Protocol/Internet Protocol (TCP/IP) capable machine. Users typically do not interact directly with the resolver. Application programs that use the DNS, such as mailers, mail servers, Web clients, Web servers, Web caches, IRC clients, FTP clients, distributed file systems, distributed databases, and almost all other applications on TCP/IP, rely on the resolver library. Domain name resolution is explained in P. Mockapetris, "Informational RFC (Request for Comment) 1035: Domain Names—Implementation and Specification", Internet Engineering Task Force (IETF), November 1987, "". DNS friendly identifiers such as RFC 1035 compliant domain names are restricted to a limited 7-bit ASCII character set: A to Z, a to z, 0 to 9, and hyphen. Suffixes such as ".com" (e.g., in tida.com) are referred to as Top Level Domains (TLDs), while hosts, computers with assigned IP addresses that are listed in specific TLD registries, are known as second-level domains (SLDs). For the domain name "tida.com", ".com" is the TLD and "tida" is the SLD. With respect to domain name management, the term "registry" refers to an entity responsible for managing allocation of domain names within a particular name space, such as a TLD. The registry stores information about registered domain names and associated name servers.
The term "registrar" refers to any one of several entities with authority to add names to the registry for a name space. Entities that wish to register a domain name do so through a "registrar". The term "registrant" refers to the entity registering the domain name. In some name spaces, the registry and registrar functions can be operated by the same entity, so as to combine the concepts and functions of the registrar and registry. The combined registry-registrar model is implemented in many ccTLDs and a few gTLDs. VeriSign Global Registry Services (GRS) is the leading provider of domain name registry services and DNS support to the Internet and is responsible for the infrastructure that propagates this information throughout the Internet and responds to billions of DNS look-ups daily. Though a significant percentage of daily DNS look-ups cannot be resolved (e.g., cannot find an IP address), there has never been any attempt by the registry or any other party to monetize such unresolvable DNS requests. The arbitrarily limited number of gTLDs has created a severe shortage of desirable domain names in the ".com" registry, leading to substantial pent-up demand for alternate domain name resources. Experimental registry systems offering name registration services in an alternate set of exclusive domains such as ".space" or ".love" developed as early as January 1996. Although visible to only a fraction of Internet users, alternate DNS systems such as the Name.Space, AlterNIC, and eDNS registries have contributed to the community's dialogue on the evolution of DNS administration. Competition argues that TLDs have become an issue of free speech and should not be restricted to the current limited set of gTLDs and ccTLDs. Customers registering second-level domains in alternate TLDs cannot be reached by other Internet users because these domains, which are not listed in the root zone file, cannot be resolved by other Internet DNS name servers.
Only if competitors individually negotiated with each of the scores of thousands of name server operators on the global Internet, something that is a physical and financial impossibility, for inclusion of alternate TLDs would there be any possibility that its domain names could be universally resolvable. As a result, competition has been unable to offer a commercially viable registration service in its TLDs, and has been unable to effectively compete in the domain name market. In March 2001, by partnering with several large ISPs to modify their nameservers, New.Net, Inc. opened their doors to serve as the most effective demonstration of how alternate TLDs are attempting to succeed in a fragmented, market-driven system. The following excerpt is provided in "Informational RFC (Request for Comment) 2826: IAB Technical Comment on the Unique DNS Root", Internet Architecture Board, May 2000, ""... A supplemental memo to RFC 2826 is provided in Simon Higgs, "Informational Internet Draft: Root Zone Definitions", Higgs Communications, May 2001, "". Within this memo, definitions are applied in an attempt to make other roots, such as alternate or enhanced roots, inclusive as part of a single, globally unique root, implying that the single root comprises a plurality of root components distributed across many zone files. Another draft, Simon Higgs, "Informational Internet Draft: Alternate Roots and the Virtual Inclusive Root", May 29, 2001, "", proposes a solution to the problem of duplicate colliding top level domains by identifying the virtual inclusive root (VIR), in compliance with the IAB's RFC 2826. Though the VIR is the sum of the consensus between all root zones on the public Internet, the VIR cannot support conflicting TLDs. There is a particular increase in articles and publications emphasizing the importance of name space and the perceived shortage of ".com" names.
References have been made that NASA is seeking authorization for ".mars" as an extension of terrestrial geography. Speaking on the opening day of the annual Internet Society (ISOC) conference in Geneva on Jul. 22, 1998, Vint Cerf, a founding President of ISOC, said the domain name debate should also encompass ".earth" or ".mars" because that's where real-time science data is going to travel from in the not-too-distant future. He said, "The idea is to take the interplanetary Internet design and make it a part of the infrastructure of the Mars mission." for name resolution services whereas the detection of only a delimiter implies a search request. Because the MSIE browser is more than an application and has become a de-facto infrastructure component, all attempts to compete with the Autosearch feature of the MSIE browser include modifying a template in the operating system registry to redirect autosearch results, usually to another search page. MSIE also includes an AutoScan feature. The AutoScan and AutoSearch features are dependent on each other in Internet Explorer 5 and cannot be disabled independently of each other. Autosearch is tried first, and if there are no Autosearch results, then Autoscan is tried. The Autoscan serves only as a navigation tool and has never been adapted or configured to perform other request types. There are no known applications capable of detecting the activation of the autosearch and, in response, either forcing the autosearch to terminate and invoking an autoscan request, or performing a request that completely overrides the autosearch request. U.S. patent application Ser. No. 09/525,350 filed Mar. 15, 2000, by Schneider, entitled "Method, product, and apparatus for requesting a network resource" teaches how a registration request may be processed (particularly from an autosearch) in response to determining that a network resource cannot be located from an input identifier having a valid domain. U.S. patent application Ser. No. 09/532,500 filed Mar.
21, 2000, by Schneider, entitled “Fictitious domain name method, product, and apparatus”, teaches how a valid URI may be constructed, resolved, and accessed (particularly from an autosearch) in response to determining that an input identifier includes a non-compliant RFC1035 domain name (e.g., domain name is fictitious). Both applications teach how input having only “.” delimiters can be processed from an autosearch by performing a network resource request and/or registration request in response to a failed DNS resolution request. In addition, U.S. Provisional Application Ser. No. 60/157,075 filed Oct. 1, 1999, by Schneider, entitled “Method and apparatus for integrating resource location and registration services of valid and fictitious domain names” and U.S. patent application Ser. No. 09/653,100 filed Aug. 31, 2000, by Schneider, entitled “Method, product, and apparatus for processing a data request”, teach how such resolution and registration methods of valid and fictitious identifiers including multilingual domain names can be integrated into a unified product, apparatus, and system. In effect, the autosearch feature has never been used for further processing of any kind in response to determining that a DNS friendly identifier is unresolvable. Aside from processing search requests and keyword resolution requests instead of or before processing a DNS resolution request, there has been no known public disclosure of how the autosearch can be used to process request types in response to a failed DNS resolution request until a VeriSign press release, “VeriSign Announces Breakthrough in Web Navigation For Tens of Millions of Users Worldwide”, Jun. 
20, 2001, "", which announces that "Internet users can now reach Web site destinations by typing domain names with characters used in their own languages into their Microsoft Internet Explorer 5.0 or higher browser software." Using technology from RealNames Corporation, Microsoft modified a search function of MSIE to enable International Domain Names to work without the use of special plug-ins or client software. This was followed by an Internet draft, Yves Arrouye, "IDN Resolution in Windows Internet Explorer 5.0 and Above", Jul. 3, 2001, "", which describes how internationalized domain names (IDNs) are resolved in MSIE. The document focuses on the different steps that are taken after a user enters an IDN in the address bar of IE, up to when the relevant Web page is displayed in the user's browser. Though all input identifiers having only a "." delimiter have only recently been configured to pass from the autosearch to a RealNames keyword resolver, the only further processing currently implemented is limited to IDN resolution, with an error message displayed for all other input. RealNames Keywords were activated in the MSIE browser pursuant to a distribution agreement with Microsoft, which Microsoft chose not to renew. On Jun. 28, 2002 the RealNames service was terminated, and keywords no longer resolve in the MSIE browser. Keyword navigation has always remained dependent upon a client web browser or a client web browser add-on or plug-in product. Other techniques of keyword navigation that do not require browser modification remain unexplored. The domain name system uses the domain "in-addr.arpa" to convert Internet addresses back to domain names. In a way, the highest level "." is a ZLD. Another example of how a ZLD may be used is explained in P.
Falstrom, “Informational RFC (Request for Comment) 2916: E.164 number and DNS”, Cisco Systems Inc., September 2000, “”, which shows how DNS can be used for identifying available services connected to a single E.164 phone number. Through transformation of E.164 numbers (ENUM) into DNS names and the use of existing DNS services like delegation through NS records, and use of NAPTR records in DNS, available services for a specific domain name can be discovered in a decentralized way with distributed management of the different levels in the lookup process. The domain “e164.arpa”, which serves as a ZLD, is being populated in order to provide the infrastructure in DNS for storage of E.164 numbers. For example, the E.164 number “+1-216-555-1212” can be translated into the “2.1.2.1.5.5.5.6.1.2.1.e164.arpa” domain having a zone that includes NAPTR resource records to help determine which resource to access in response to the ENUM request; however, it remains the onus of industry to adapt how, where, and when this transformation takes place. Currently, there are no known tools to adapt the web browser or similar network navigation device to be ENUM enabled. Though the E.164 identifier holds great promise, the expression of the identifier is based on an international standard that may be awkward and unintuitive for quick adaptation by the public. Similar identifiers that represent ENUM may be adapted by the public more readily. There has been evidence over the years of quicker adoption when a phone number is informally expressed in a syntax similar to an IP address, such as “216.555.1212”, for example. Identifiers may exist across multiple namespaces, all with different ownership and rules, and different naming authorities. As shown, there is a recent convergence of attempting to map identifiers across multiple namespaces to access network resources via the DNS.
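The transformation described above can be sketched in a few lines; this is merely an illustrative rendering of the RFC 2916 mapping (strip non-digits, reverse the digits, dot-separate them, and append “e164.arpa”), not part of any deployed resolver:

```python
def e164_to_enum_domain(e164: str) -> str:
    """Map an E.164 number to its ENUM domain per RFC 2916:
    keep only the digits, reverse them, separate with dots,
    and append the "e164.arpa" infrastructure domain."""
    digits = [c for c in e164 if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

print(e164_to_enum_domain("+1-216-555-1212"))
# 2.1.2.1.5.5.5.6.1.2.1.e164.arpa
```

The result matches the “2.1.2.1.5.5.5.6.1.2.1.e164.arpa” example given above; a real ENUM client would then query the resulting domain for NAPTR records.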
The most common technique for mapping such identifiers is to automatically and transparently transform the identifier in a user's applications before a DNS query is sent. This method does not make any change to the DNS nor require separate DNS name servers. Other examples are proposals suggesting that modifications be made to DNS servers to accommodate international domain names, for example. While the proposed solution could work, it requires major changes to the Internet as it exists today. Domain name servers around the globe, which number in the hundreds of thousands, would have to be changed or updated. RFC 1034 provides that, when a queried name has no exact match in a zone, a response may be synthesized with content taken from the wildcard RRs. The only example of wildcard RR usage in RFC 1034 is that of e-mail aliasing. U.S. Pat. No. 6,442,602 issued on Aug. 27, 2002, by Choudhry, entitled “System and method for dynamic creation and management of virtual subdomain addresses”, discloses how a wildcard RR can be used to launch a server script in response to an unrecognized/unregistered subdomain name such as “virtualsubdomain.domain.com”. The script will resolve it and map it to, or any other file on a web server which is actually registered. Recently, ccTLD registries have used wildcard RRs to redirect a resolvable domain name back to the registration site. Furthermore, due to the global public nature of the root zone file of the single, authoritative root DNS server, or any other root DNS server for that matter, a wildcard resource record has never been used in a root zone file. An emerging economy of names has created a politically controlled TLD space due to the technical constraint of the DNS having a single authoritative root. Though alternate roots have surfaced to provide alternate TLDs, such services are criticized by supporters of the single root on the grounds that such implementations disrupt DNS stability and fragment the Internet.
However, the same critics encourage competition under the assumption that all such competition will inevitably threaten the stability of the DNS. There has long been an unfulfilled need for processing domain names having TLDs that are not resolvable by a single authoritative public root. Though alternate root servers have been deployed to recognize alternate TLDs, there has been little incentive by industry to move in this direction out of concern that using such domain names would confuse the public, fragment the Internet, etc. Now that conventional namespace solutions have been traversed and exhausted, industry is only now beginning to attempt to expand the domain namespace by offering proposed solutions with identifiers having only the “.” delimiter. Similar in approach to the fictitious domain name method disclosed in U.S. patent application Ser. No. 09/532,500 filed Mar. 21, 2000, by Schneider, U.S. Published Patent Application 20020073233, published on Jun. 13, 2002 by Gross, et al., entitled, “Systems and methods of accessing network resources” discloses a system for using client-based address conversion software to intercept a requested Internet address having a non-ICANN compliant TLD. This published application corresponds to software technology deployed by New.net in March 2001 on either the network level by partner ISPs or on individual client machines, to add “new TLD extensions” to the existing DNS for the Internet community to purchase and use domain names with extensions that were previously unavailable. Requests to display Web pages with New.net domain names are resolved by appending the additional extension “.new.net” onto the address. As a result, requests are automatically routed to New.net's DNS servers to determine the correct IP address of the computer hosting the Web page.
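The appending behavior described above can be sketched as follows; the list of recognized TLDs here is a small illustrative subset, and the function merely mimics the routing rewrite, not New.net's actual software:

```python
# Illustrative subset only -- not the real root zone contents.
ICANN_TLDS = {"com", "net", "org", "edu", "gov", "mil", "arpa"}

def rewrite_nonstandard_tld(domain: str) -> str:
    """If the trailing label is not a recognized TLD, append the
    alternate provider's suffix so the query routes to that
    provider's DNS servers instead of failing in the public root."""
    tld = domain.rsplit(".", 1)[-1].lower()
    if tld in ICANN_TLDS:
        return domain
    return domain + ".new.net"

print(rewrite_nonstandard_tld("example.shop"))  # example.shop.new.net
print(rewrite_nonstandard_tld("example.com"))   # example.com
```

A registered name passes through unchanged, while a name with an unrecognized extension is silently rewritten before resolution.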
However, there is no mention in the published application, nor any mechanism in New.net's deployed technology, to provide the opportunity to register or check availability of a domain name of any kind in response to determining that the domain name is not resolvable or that a network resource corresponding to the domain name can not be located. Furthermore, the “new.net” portion of the domain name space (e.g., zone files) has never included the use of wildcard RRs. U.S. patent application Ser. No. 09/682,133 filed Jul. 25, 2001, by Schneider, entitled “Method, product, and apparatus for requesting a network resource” teaches how to create market driven registrar competition across all TLDs (e.g., ccTLDs, gTLDs, alternate TLDs, and the like) by providing domain name wildcard redirection, particularly in a gTLD zone file, and U.S. patent application Ser. No. 09/682,351 filed Aug. 23, 2001, by Schneider, entitled “Fictitious domain name method, system, product, and apparatus” discloses how to create a competitive market driven namespace provider system across all TLDAs by providing fictitious domain name wildcard redirection in a root zone file, in this case through a proposed infrastructure domain “tlda.arpa”. Though these patent applications show new methods of using DNS resource records in public zone files (e.g., root zone, TLD zone), further methods have since been constructed that will be shown in the instant invention. Due to the perceived shortage of TLDs, the struggle to add new TLDs has enabled industry to overlook solutions for extending the use of the current domain name space. Such art clearly demonstrates that there is a need for a system to foster better use of domain name space. Accordingly, in light of the above, there is a strong need in the art for a system and method for enhancing how domain name space can be more extensively used on a network such as the Internet.
The present invention enables fictitious domain name resolution before, during, and/or after a DNS query. The invention enables the DNS to resolve all DNS friendly identifiers. The present invention enables any namespace to be transformed into a FDN for DNS resolution processing including reiterative and/or recursive identifier transformations across multiple namespaces. The invention enables the smallest possible modification to the DNS to achieve immediate ubiquity of FDN usage on the Internet without having to change MSIE autosearch or add new client resolvers. The present invention enables the single authoritative root to process all HLDs as resolvable, enabling the DNS to become all inclusive and leaving alternative roots no choice but to create a system of virtually exclusive roots instead of VIRs. The invention enables a new infrastructure domain such as “tlda.arpa” to be used as a Primary Virtual Zero Level Domain (PVZLD) for brokering FDN requests across multiple namespaces to namespace providers who manage Secondary Virtual Zero Level Domains (SVZLD). The present invention can bypass search request activation after a failed DNS request and instead redirect a DNS friendly identifier to a request portal to perform one or more of the following requests: navigation request, registration request, WHOIS request, back-order request, prefix request, suffix request, command request, resolution request, redirection request, search request, identifier registration request, commerce request, subscription request, dialing request, messaging request, conferencing request, vendor request, service request, login request, status request, authorization request, and reference request. The present invention can include numerical TLDs to create an alternate root zone managed by participating ISPs in order to enhance domain name space with respect to numerical DNS friendly identifiers.
The invention is enabled to perform mnemonic conversion techniques when translating a specific class of numerical FDNs into the “e164.arpa” domain. The invention enables any ISP DNS server to be configured to point to a root zone alias or Virtual Zero Level Domain to enable clients to immediately and transparently use the benefits of improved DNS resolution. The present invention can use an emulated root domain or root zone alias to virtually eliminate failed DNS requests and/or access resources across a plurality of namespaces. The present invention can resolve and forward unregistered DNS friendly identifiers to network resources configured to provide additional request type services such as resolution, registration, search, discovery, directory, and information services. The invention can assist in financially sustaining Internet organizations such as ICANN, IANA, IETF, ISOC, and the like by realizing new sources of revenue. The present invention can perform a search engine request in response to a hostname or domain name resolution request. The present invention can provide access to the listing of one or more trademark registrations relating to a search engine request of a keyword. The invention enables an autosearch to process any request other than that of a search request. The invention enables automatic termination of a detected autosearch to initiate an autoscan, the autoscan adapted to access a request portal and/or perform one of a search and registration request with the first domain name or keyword. The present invention can use a browser helper object to construct a URL to override a detected autosearch and access a network resource configured to extract autosearch parameters from the newly constructed URL.
In general, in accordance with the present invention with a keyword, a computer implemented method includes generating a domain name having the keyword, and requesting a network resource corresponding to the domain name, wherein the network resource is adapted to extract the keyword from the domain name. In accordance with an aspect of the present invention, with a first domain name, a computer implemented method includes generating a second domain name having the first domain name, and requesting a network resource corresponding to the second domain name, wherein the network resource is adapted to extract the first domain name from the second domain name. In accordance with another aspect of the present invention, a DNS server includes a DNS query, a root zone used for attempting to resolve the DNS query, and at least one configuration parameter adapted to resolve the DNS query when it is determined that the root zone can not resolve the DNS query. In accordance with yet another aspect of the present invention, a root zone having at least one root resource record for resolving a DNS query includes a first root resource record configured to resolve the DNS query when it is determined that the DNS query does not include a top level domain. In accordance with another aspect of the present invention, a root zone having at least one root resource record for resolving a DNS query includes a first root resource record configured to resolve the DNS query when no other root resource record is in the root zone or when no other root resource record in the root zone is configured to resolve the DNS query. 
In accordance with yet another aspect of the present invention, a method for presenting search results includes obtaining the search results wherein at least one search result includes a resource identifier corresponding to a measure of intellectual property usage, ordering the search results based on the measure of intellectual property usage, and presenting the ordered search results. In accordance with another aspect of the present invention, a DNS server includes a DNS query having a highest level domain (HLD), a root zone having at least one root resource record, and the root resource record adapted to resolve the DNS query when it is determined that the HLD is a top level domain alias (TLDA). In accordance with yet additional aspects of the present invention, a computer system can include a device 140, memories 144, and input/output devices 148. A network 128 may include hundreds of thousands of individual networks of computers. One aspect of the present invention includes a specific type of server system called a DNS server system 121 which stores in memory a DNS database 124 having DNS records (resource records) that translate domain names into IP addresses and vice versa. The DNS server system 121 is connected 116 to a network 128. The DNS is a distributed database (of mappings) 124 implemented in a hierarchy of DNS servers (name servers) 121 and an application-layer protocol that allows hosts and name servers to communicate in order to provide the translation service. Name servers 121 are usually UNIX machines running BIND software. In order to deal with an issue of scale of the Internet, the DNS uses a large number of name servers 121, organized in a hierarchical fashion and distributed around the world. No single name server 121 has all of the mappings 124 for all of the hosts in the Internet. Instead, the mappings 124 are distributed across many name servers 121.
The browser programs 112 enable users to enter addresses of specific Web pages 130 stored on server systems 121 connected to the Internet. The DNS client 114 eventually receives a reply, which includes the IP address for the domain name. The browser then opens a TCP connection 116 to the HTTP server process 120 located at the IP address. When the network resource can be located (step 250) it can then be determined in step 258 whether the network resource can be accessed. When content, for example, can be accessed from the web server (network resource) then the network resource is accessible from the URI (step 242) and results, if any, may then be provided in step 262. When the domain name is determined not resolvable (step 260) or when the resource can not be found (step 254) or when the resource can not be accessed (step 258) then an error message can be presented in step 264. For example, a browser having a search function receives a keyword as input. Typically, the keyword/hostname is forwarded to the search function after a failed DNS request. By generating an identifier that is a domain name having the keyword, the domain name can act as a carrier/envelope to the keyword. A DNS server (which can act as a proxy server and be managed by an ISP, for example) can be adapted to resolve all such generated identifiers for the purpose of requesting (e.g., registering, searching, or resolving) at least a portion of the identifier. The DNS server can return an IP address of a network resource that is adapted to extract the keyword from the domain name identifier. In this way, the activation of a search request after a failed DNS request can be avoided entirely, enabling a browser to use keywords as a network navigation tool, search tool, and registration tool. For example, a browser having a search function receives a first domain name as input. Typically, the first domain name is forwarded to the search function after a failed DNS request.
By generating an identifier that is a second domain name having the first domain name, the second domain name can act as a carrier/envelope to the first domain name. A DNS server (which can act as a proxy server and be managed by an ISP, for example) can be adapted to resolve all such generated identifiers for the purpose of requesting (e.g., registering, searching, or resolving, etc.) at least a portion of the identifier. The DNS server can return an IP address of a network resource that is adapted to extract the first domain name from the second domain name. In this way, the activation of a search request after a failed DNS request can be avoided entirely, enabling a browser to use domain names as a navigation tool, search tool, and registration tool. For instance, a browser receives the keyword “example” and the domain name “example.keywordrouter.org” is generated by a string manipulation operation such as that of an append function. This domain name can be generated on the client side (e.g., from a DLL, TCP/IP stack, configuration file, or operating system registry) or on any server (e.g., ISP server, DNS server, proxy server, etc.). A resource record in the “keywordrouter.org” zone file can be used to access a network resource specifically adapted to perform a string manipulation operation such as a truncation operation to extract the keyword “example” and either automatically perform or provide a user with the opportunity to perform any non-DNS type request, such as one of a navigation request, search request, directory request, discovery request, and registration request, depending upon configuration parameters. For example, when the keyword is obtained from the domain name, it is desirable to determine how the keyword can be processed according to settings, preferences, or configuration parameters. For instance, the keyword might not be registered in one or more contexts as shown above.
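A minimal sketch of the append and truncation operations in the “example.keywordrouter.org” example above (the carrier zone name is taken from the example itself; the same wrap/extract pair applies when a first domain name is carried inside a second domain name):

```python
# Carrier zone name taken from the "example.keywordrouter.org" example.
CARRIER_ZONE = "keywordrouter.org"

def wrap_keyword(keyword: str) -> str:
    """Client or proxy side: append the carrier zone so the keyword
    travels as an ordinary, resolvable domain name (an envelope)."""
    return f"{keyword}.{CARRIER_ZONE}"

def extract_keyword(domain: str) -> str:
    """Server side: truncate the carrier zone suffix to recover the
    original keyword (or first domain name) for further processing."""
    suffix = "." + CARRIER_ZONE
    if not domain.endswith(suffix):
        raise ValueError("not a carrier domain")
    return domain[: -len(suffix)]

carrier = wrap_keyword("example")
print(carrier)                   # example.keywordrouter.org
print(extract_keyword(carrier))  # example
```

Carrying a first domain name works identically: wrapping “example.com” yields “example.com.keywordrouter.org”, and the truncation recovers “example.com”.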
A registration form can be filled in by a user or automatically populated based on a “cookie” or registration profile of the user. When a user submits such a form, variables such as contact information and the like can be formatted, exported, and passed in accordance with the requirements of the appropriate registrar, registry, search engine, trademark office, and the like. In the case of searching the keyword as a trademark, when “registration information” is provided, a listing of trademark registrations relating to the keyword can be displayed, including displaying such trademark registration information contemporaneously with displaying other search results and the like. For instance, when the search term or keywords “west coast travel” is obtained and a search engine request is performed, the search term can also be used to query a trademark database to find trademarks/tradenames/servicemarks that may match or are similar to the search term. Such trademark results can be included in and/or accessed from any search results presented from the search engine request. An example of a URL that can be constructed to access trademark information while processing a search engine request is as follows: “? action=search&db=pto&ss=west+coast+travel” UI elements for associating intellectual property related information with URLs 884 can also be included. New entries for submitting patent, trademark, and copyright related information can be provided as part of the search engine submittal interface 880 that pertains to search engine rankings and URL submittals. For instance, a trademark identifier such as a country, federal, or state registration number or serial number, logo, or word mark can be provided to associate the submitter with the IP property right for the purpose of affecting the outcome of search engine rankings.
A UI element can be used for binding a submitter or having a URL submitter declare 886 that the submitter is the owner of the trademark or is authorized by the owner of the trademark to use the listed trademark. Another UI element can be used for submitting 888 a digital certificate, digital signature, or PGP Public Key for the purpose of authenticating, verifying, and/or communicating with the property owner of the intellectual property right. Similar UI elements (not shown) can also be used for providing patent and copyright related information as well. The submittal of such an intellectual property identifier can correspond to a calculated measure of IP strength. For instance, on a scale from 1 to 10, a state trademark may yield a 6 whereas a federal trademark would yield a 10. The inclusion of an authentication identifier such as a digital signature may have a value of 8, etc. A composite value can be calculated as a means to measure brand strength. This value can be stored as part of the URL submittal data record. This new kind of IP related metric for measuring IP authorized usage can be used to place more importance on such a submittal and yield a higher search engine ranking for search engine results based on search terms similar to or matching the trademark or other IP rights including patents and copyrights. As shown in the background of the present invention, there are only a few applications in which a wildcard RR has been used. Some known uses include e-mail aliasing, domain name registration, and virtual subdomains. Further use of wildcard RRs has been shown by Schneider in previous co-pending patent applications Ser. Nos. 09/682,133 and 09/682,351 by using a TLD wildcard RR for the purpose of creating competition between registration providers and using a root zone wildcard RR for the purpose of creating a competitive market driven namespace provider system to handle fictitious domain names having top level domain aliases.
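One way the composite brand-strength measure described above might be computed is sketched below; the weights come from the example values in the description (state mark 6, federal mark 10, authentication identifier 8), while the averaging formula is an assumption, since no fixed formula is specified:

```python
# Example weights from the description; the averaging rule is assumed.
IP_WEIGHTS = {
    "state_trademark": 6,
    "federal_trademark": 10,
    "digital_signature": 8,
}

def brand_strength(identifiers):
    """Composite IP-strength value on a 1-10 scale, suitable for
    storing with the URL submittal data record and for use as a
    search engine ranking signal."""
    scores = [IP_WEIGHTS[i] for i in identifiers if i in IP_WEIGHTS]
    return sum(scores) / len(scores) if scores else 0.0

print(brand_strength(["federal_trademark", "digital_signature"]))  # 9.0
```

A submittal backed by a federal mark plus an authentication identifier thus scores higher than one backed by a state mark alone, matching the intent of the metric.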
When the input domain name is a FDN having a TLDA and the wildcard RR is in a root zone file, then a corresponding network resource is accessed that is adapted to determine which naming service/registry/namespace provider can resolve the FDN having a TLDA. This determination can be made automatically based on any combination of parameters including configuration settings, metadata, user preferences, past history, currently available resources, environment variables, cookies, and the like. If need be, the option of determining a resolution method may be provided enabling a user to select at least one namespace provider from a plurality of namespace providers. Nameservers found at a domain called “isp.com” 1130 (representative of any Internet Service Provider) can typically include an ISP DNS Server 1135 that can access the public DNS root. Alternate naming service providers such as the “new.net” domain 1140 can include an alternate DNS Server 1145 that can access an alternate DNS root (e.g., com, net, tv, cc, kids, search, agent, travel, etc.). The alternate root usually comprises the ICANN root as well as additional alternate TLDs. Such alternate root zones have never included a resource record adapted to resolve a DNS query for an exact name that does not produce an exact match (e.g., wildcard RR). Some ISPs have partnered with such alternate naming services to provide additional user benefit. The “zonecache.com” domain 1150 can include a Virtual DNS Server 1155 that can access an emulated DNS root zone also called a Virtual Zero Level Domain (VZLD) or root zone alias. Though the VZLD can include the ICANN DNS root, such an emulated root domain does not need to include any additional alternate TLDs. The purpose of this particular VZLD is to mirror the DNS while minimizing the volume of failed DNS resolutions. Any DNS subdomain can be configured to operate as a VZLD. The VZLD can also include information about the authoritative name servers for virtual top level domains (VTLDs).
For instance, each TLD can have a corresponding VTLD (as will be shown in conjunction with a subsequent figure). In effect, a VZLD can include at least one resource record for resolving a DNS query, where a first resource record is configured to resolve the DNS query when no other resource record is in the VZLD or when no other resource record in the VZLD is configured to resolve the DNS query. For instance, the resource record can be configured to resolve the DNS query when it is determined that the DNS query does not include a top level domain, or more specifically, when it is determined that the DNS query includes a top level domain alias. The VZLD can further include a second resource record configured to forward a DNS query having a top level domain (TLD) to a corresponding VTLD zone having a VTLD resource record configured to resolve the DNS query having the VTLD when no other VTLD resource record is in the VTLD zone or when no other VTLD resource record in the VTLD zone is configured to resolve the DNS query having the VTLD. In turn, the VTLD can further include a second resource record configured to forward a DNS query having a second level domain (SLD) to a corresponding Virtual SLD (VSLD) zone having a resource record configured to resolve the DNS query having the VSLD when no other resource record is in the VSLD zone or when no other resource record in the VSLD zone is configured to resolve the DNS query having the VSLD. These resource records which link across a plurality of domains/zones can link across all domain levels (e.g., 3LD, 4LD, 5LD, etc.) in a virtual DNS. When the above root zone alias 1170 is queried to resolve “name.game”, for example, it is determined that there is no TLD called “game”. The wildcard RR is detected and passes the query value of “name.game” to a server labeled “tlda.arpa”.
In effect, the wildcard RR treats the query as resolvable and redirects “name.game” to “tlda.arpa” for further processing such as namespace resolution, registration services, search services, directory services, and/or discovery services through a TLDA Registry or licensed Metaregistry. Namespace providers may register to participate in FDN resolution by providing API resolver parameters, delimiter mappings, namespace mappings, Namespace ID, or any other parameters that can transform FDNs having a TLDA into the sponsored namespace managed by the provider. For instance, RealNames can participate by registering their Unified Resolution and Discovery Protocol (URDP) resolver service and Microsoft can participate by registering their Universal Description, Discovery and Integration (UDDI) system to receive and process FDNs detected during DNS resolution, for example. There are many namespaces (e.g., multilingual names, fictitious domain names, ENUM, Credit Card Numbers, URNs, etc.) that serve as a layer to the DNS. The relationship of these different naming systems may be looked at as a hub and spokes, wherein the DNS serves as a hub with each namespace in relationship to the DNS serving as a spoke. These namespaces may now be accessed as a result of root zone wildcard redirection. The domain name “tlda.arpa” may serve as a wildcard gateway/portal (also called a primary zero level domain) to determine what type of redirector string or Secondary Virtual ZLD (SVZLD) may be used/accessed, if at all, to resolve other namespaces within the DNS. The Primary Zero Level Domain (PZLD) is in operative association with a network resource adapted to determine how to process the detected domain identifier having a top level domain alias. The virtual TLD zone alias can be populated with every resource record from the original published TLD zone and include the wildcard RR 1190.
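The exact-match-then-wildcard lookup behavior that the root zone alias and the virtual zones above depend on can be sketched with a toy in-memory zone (the record names and values are purely illustrative, not from any real zone file):

```python
# Toy zone: exact names map to targets; "*" stands in for the
# wildcard RR, consulted only when no exact match exists.
VIRTUAL_ZONE = {
    "example.net": "192.0.2.10",      # an ordinary registered record
    "*": "tlda-portal.example",       # hypothetical request-portal host
}

def resolve_in_zone(zone, name):
    """Return the exact record if present; otherwise fall back to the
    wildcard RR so the query is treated as resolvable and redirected
    for further processing. Raise on a zone with no wildcard."""
    if name in zone:
        return zone[name]
    if "*" in zone:
        return zone["*"]
    raise LookupError("NXDOMAIN")

print(resolve_in_zone(VIRTUAL_ZONE, "example.net"))       # 192.0.2.10
print(resolve_in_zone(VIRTUAL_ZONE, "unregistered.net"))  # tlda-portal.example
```

With the wildcard present, an unregistered name such as “name.game” never fails; it lands at the portal host, which can then offer resolution, registration, search, or discovery services.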
For example, the original published NET TLD zone can be accessed and updated each day from the VeriSign Registry with a signed zone file access agreement in place. Additional resource records 1195 can be included that are representative of SLDs that have virtual SLD zone aliases. For instance, “SITEMAP.NET.” is a SLD that will list 3LD entries for the “SITEMAP.NET.” domain from a nameserver called “sitemap.net.zonecache.com”. For instance, the owner of the domain “sitemap.net” may wish to have DNS services managed by a virtual zone alias provider such as “zonecache.com”. Such a zone alias provider can perform any associated 3LD registration, discovery, directory, information, and request type services as needed on behalf of the SLD holder. A hierarchy of nameservers in the DNS system 121 is successively queried until the appropriate DNS server 1255 is accessed. The DNS server 1255 includes a zone file 1260 having one or more resource records 1262. In addition, the DNS server 1255 also includes initialization and configuration files called “named.root” 1264, “named.conf” 1266, and “resolv.conf” 1268. Primary, secondary, and caching-only DNS servers, when first configured, need to know the IP addresses of root servers so that they can begin to resolve client requests about other zones. A default list of root servers is typically provided in the “named.root” file 1264. The data in the “named.conf” 1266 file specifies general configuration characteristics for the name server, defines each zone for which the name server is responsible, and provides further configuration information per zone. Data in the “resolv.conf” 1268 file specifies general configuration characteristics for the resolver including naming one or more specific name servers to query in a particular order and naming one or more specific zones to query in a particular order. Changes can be made to these initialization and configuration files to adapt a DNS server to perform aspects of the present invention.
The initial set of root name servers is specified using a hint zone. When the server starts up, it uses the root hints to find a root name server and get the most recent list of root name servers. For instance, a root zone hint file can be placed in “named.conf” 1266 to call “named.root” 1264 when needed. A wildcard resource record can be placed in “named.root” 1264 to enable resolution of DNS friendly identifiers having TLDAs. Further changes can be made to “named.root” 1264 to enable a VTLD zone to be accessed instead of a TLD zone. Instead of using a root zone hint file, “named.conf” 1266 can include a file that will slave the root zone from a zonecache.com master DNS server that manages a VZLD or root zone alias. In addition, “resolv.conf” 1268 can be configured to append “zonecache.com” when a DNS friendly identifier such as a numerical FDN (e.g., 216.555.1212) can not be resolved by the public DNS root. After a first DNS query fails, the domain name “216.555.1212.zonecache.com” can be constructed. The zone for the “zonecache.com” domain can be configured as a VZLD and access a root zone alias that can resolve the “216.555.1212” identifier. For example, when a DNS query includes a DNS friendly identifier such as a numerical fictitious domain name (NFDN) (e.g., 216.555.1212) and a root domain alias (DNS Root plus wildcard) is accessed, the NFDN can be resolved by translating the NFDN into an IP address. In turn, a network resource is configured to detect that the NFDN is more specifically a DNS friendly telephone number identifier and respond by translating the NFDN into the RFC 2916 compliant “2.1.2.1.5.5.5.6.1.2.1.e164.arpa”. In another example, when “” is received and the “.binaries” TLDA is detected, it can be further detected that “alt” is a top-level news category and that “” is the incorrect protocol associated with the identifier. The PVZLD can be configured to detect such an error and correct it by constructing a new URL such as “nntp://alt.binaries”.
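The resolv.conf-style retry and the telephone-number translation described above can be sketched as follows; prepending the country code “1” is an assumption for North American numbers, since the NFDN “216.555.1212” omits it:

```python
def append_search_domain(name, search_domain="zonecache.com"):
    """Mimic the resolv.conf search-domain step: after a first DNS
    query fails, retry with the virtual zone's suffix appended."""
    return f"{name}.{search_domain}"

def nfdn_to_enum(nfdn):
    """Treat a dotted numerical FDN such as 216.555.1212 as a phone
    number and map it to the RFC 2916 form. Assumption: the country
    code "1" (North America) is prepended, as the NFDN omits it."""
    digits = "1" + nfdn.replace(".", "")
    return ".".join(reversed(digits)) + ".e164.arpa"

print(append_search_domain("216.555.1212"))
# 216.555.1212.zonecache.com
print(nfdn_to_enum("216.555.1212"))
# 2.1.2.1.5.5.5.6.1.2.1.e164.arpa
```

The second result matches the RFC 2916 compliant name given in the example above, so the network resource detecting a telephone-shaped NFDN can hand the translated name straight back to the DNS.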
An alternate root by definition includes alternate top level domains. The alternate root also includes the legacy root, now maintained by the US Government (ICANN root). There exists no known published root zone (e.g., public DNS root, alternate root operating outside of ICANN authority, etc.) that has ever included a wildcard RR, nor does there exist any known proposal for the inclusion of such a RR. The addition of a wildcard RR to the public DNS root does not add any new TLDs and therefore can not be called an alternate root. Only the U.S. Government can delegate authority to ICANN to add such a wildcard resource record to the public DNS root. If such a decision was ever made, unregistered DNS friendly identifiers could then be resolved and forwarded to network resources configured to provide additional request type services such as resolution, registration, search, discovery, directory, and information services. In this way, such request portal type services for a fictitious domain name can function similarly to the “parked page” services offered for registered domain names. By so doing, new sources of revenue can be realized to help financially sustain Internet organizations such as ICANN, IANA, IETF, ISOC, and the like. The selection of a root zone to point to is a voluntary act by DNS name server administrators and end-user client software. A nameserver of any DNS subdomain (e.g., ISP DNS server) can be configured to point to a root zone alias or Virtual Zero Level Domain to enable clients to immediately and transparently use the benefits of improved DNS resolution. The use of the root zone wildcard could reduce or eliminate the need for client systems to intercept a received identifier before reaching the DNS and/or further process the received identifier in response to a DNS error upon resolving the identifier.
Because the wildcard can be used as a means to access other alternate roots, the single authoritative root can remain unified but yet have a synergistic relationship to alternate roots that participate in communicating with the primary virtual ZLD (PVZLD), or authoritative ZLD/absolute ZLD (AZLD). The wildcard RR or any resource record that functions similar to a wildcard used in a root zone or root zone alias can restore the intended purpose of the DNS by creating a unified global public infrastructure with respect to itself and, in addition, to other naming systems. The stub zones can be declared in the BIND configuration with a statement such as:

    include "num-stubs.conf";

The BIND configuration can be reloaded with the command “ndc reload”. Included in the stub zone 1370 are numerical extensions that can be four or more digits in length. Adding a three digit numerical extension is not advisable because a DNS friendly identifier could potentially be processed incorrectly, mistaking an IP address for a numerical domain name. By populating the stub zone with numerical extensions, real numerical domain names can be resolved. For instance, a four digit extension stub 1375 can be used to process a telephone number such as “216.555.1212” or a social security number such as “123.45.1212”, and a 5-digit extension stub 1380 can be used to process a zip code domain name such as “info.44106” or a UPC code domain name such as “93371.44106”. Furthermore, in the case of zip code TLDs, a “.44106” zone can function as an alias and point to the same nameservers that have the zone “cleveland.oh.us”, for example. N-digit extensions can be applied as needed to represent any form of numerical TLD. In a preferred aspect, the stub zone 1370 is used as a convenience so that a root zone such as an ICANN root does not have to be modified. In another aspect of the present invention, numerical TLDs can be included to create an alternate root zone managed by participating ISPs in order to enhance domain name space with respect to numerical DNS friendly identifiers.
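The routing-by-extension-length idea above can be sketched as a small classifier. This is an illustrative sketch, not part of the patent text; the return labels are made up:

```python
# Sketch of routing by "extension" length (the final dotted label), as
# in the stub-zone examples above: 4-digit extensions cover telephone
# numbers like 216.555.1212, 5-digit extensions cover zip/UPC forms
# like info.44106. Extensions shorter than four digits are rejected so
# an IP address is never mistaken for a numerical domain name.
def extension_stub(identifier):
    last = identifier.rsplit(".", 1)[-1]
    if not last.isdigit() or len(last) < 4:
        return None  # not handled by a numerical stub zone
    return f"{len(last)}-digit stub"
```
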
For example, when the above numerical domain zone file is queried to resolve “555.1212”, it is determined that there is no 3LD and therefore the numerical domain name does not include an area code. The wildcard RR is detected and passes the query value of “555.1212” to a server labeled “areacodeportal.com”. In effect, the wildcard RR treats the query as resolvable and redirects “555.1212” to a network resource corresponding to “areacodeportal.com” for further processing. Such a network resource can be configured to present all possible area codes for the numerical domain name “555.1212”, enabling a user to select the intended area code. The wildcard RR can redirect incorrect area codes as well. For instance, “999.555.1212” is not in the “555.1212” zone file and can in turn redirect to the network resource corresponding to “areacodeportal.com” to select a correct area code. A default area code or list of area codes may be included in configuration settings. This includes the ability to access one or more area codes from the operating system registry or from the current dialup settings of a client machine. A history folder can be consulted or a cookie can be placed on the client as a means to retrieve the most likely area code(s) in response to receiving an incomplete telephone number domain name identifier. Similar techniques can be applied to the autosearch to process NFDNs that are representative of telephone numbers in dot notation as well. Techniques for converting FDNs into the “e164.arpa” domain can include mnemonic conversion techniques. For instance, the DNS friendly identifier “1.800.AUTOMOBILE” (equivalent to 1-800-AUTOMOB or 1-800-288-6662) can pass through the DNS and either be redirected via a root zone wildcard or autosearch template to transform the FDN into a RFC 2916 compliant identifier such as “2.6.6.6.8.8.2.0.0.8.1.e164.arpa”.
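The mnemonic conversion just described is mechanical enough to sketch: map keypad letters to digits, truncate to the 11-digit North American form (1 + area code + 7-digit number), then emit the RFC 2916 reversed-digit name under e164.arpa. The truncation rule is an assumption inferred from the "1-800-AUTOMOB" equivalence in the text:

```python
# Sketch of mnemonic FDN -> e164.arpa conversion, per the
# "1.800.AUTOMOBILE" example above. Standard telephone keypad mapping:
KEYPAD = {c: d for d, letters in {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ"}.items()
    for c in letters}

def fdn_to_e164_arpa(fdn):
    # Convert letters to digits, dropping the dots.
    digits = "".join(KEYPAD.get(c, c) for c in fdn.upper() if c != ".")
    # Truncate extra mnemonic letters: 1 + 3-digit area code + 7 digits.
    digits = digits[:11]
    # RFC 2916: reverse the digits, dot-separate, append e164.arpa.
    return ".".join(reversed(digits)) + ".e164.arpa"
```

Both the mnemonic and the plain-digit forms of the example produce the identifier given in the text.
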
For instance, program code can be executed to add an entry to a client HOSTS file and operating system registry as follows:

    127.0.0.1 auto.search.msn.com #bypass autosearch to localhost and force autoscan initiation

    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\UrlTemplate]
    “1”=“requestportal.com/cgi-bin/index.cgi?query=%s”
    “2”=“”
    “3”=“”
    “4”=“”

Such configurations enable automatic termination of a detected autosearch to initiate an autoscan 198, the autoscan adapted to access a request portal 195 and/or perform one of a search and registration request with the first domain name or keyword. In effect, the autoscan 198 can be configured to function as an autosearch, autoregistration, and/or autorequest, enabling third party providers to compete with Microsoft for keyword searches and failed DNS resolutions. A Browser Helper Object (BHO) 164 is a dynamic link library (DLL) that allows developers to customize and control MSIE. Whenever an instance of the browser is launched, it reads the registry to locate installed BHOs and then creates them. Created BHOs then have access to all the events and properties of that browsing session. Applications which install BHOs are becoming more popular. For example, Alexa uses a BHO to monitor page navigation and show related page links. Go!Zilla uses BHOs to monitor and control file downloading. BHO 164 can be configured to detect an autosearch request by monitoring navigation events and status bar events (e.g., DISPID_BEFORENAVIGATE2, DISPID_STATUSTEXTCHANGE), and the like. For instance, when the domain component “auto.search.msn.com” is detected in a first URL or in the status bar text, the BHO 164 can construct a second URL by replacing the domain component with “requestportal.com”. The second URL is used to override the autosearch and access a network resource corresponding to “requestportal.com” that is configured to extract autosearch parameters from the newly constructed URL.
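The domain-component substitution performed by the BHO can be sketched in a few lines. "requestportal.com" is the hypothetical portal name used in the text, not a real service:

```python
# Sketch of the BHO-style override described above: when the MSIE
# autosearch host appears in a navigated URL, substitute the competing
# request-portal host so the portal receives the autosearch parameters.
AUTOSEARCH_HOST = "auto.search.msn.com"

def override_autosearch(url, portal_host="requestportal.com"):
    if AUTOSEARCH_HOST in url:
        # Construct the "second URL" by replacing the domain component.
        return url.replace(AUTOSEARCH_HOST, portal_host)
    return url  # not an autosearch request; leave untouched
```
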
Additionally, the BHO 164 can record location field input or buffer the keyword or first domain name before a navigation event to assist in constructing the second URL. In another aspect of the present invention, a BHO 164 can be configured to detect TLDAs in DNS friendly identifiers from the input buffer or from the detection of a DNS type event such as that of a network resource location request or domain name resolution request. A list 160 of ICANN compliant TLDs can be maintained client side or as part of a login file/script. Such a list can be periodically synchronized with the DNS to generate new updated TLD lists 160 from root zone updates. In fact, one of ordinary skill in the art can configure a TLD list 160 to be accessed client side via any OSI model layer such as application, network, transport layer, etc. For instance, Windows sockets (WinSock) is an Application Programming Interface (API) for developing Microsoft Windows compatible programs that can communicate with other machines via the TCP/IP protocol, or the like. A WinSock DLL provides the standard API, and each vendor's service provider layer is installed below the standard API. The API layer communicates to a service provider via a standardized Service Provider Interface (SPI), and can multiplex between multiple service providers simultaneously. Winsock2 includes a default name space provider. Another name space provider can be installed for the purpose of processing FDNs having TLDAs. One or more programs can be constructed to perform one or more aspects of the present invention. The program may be integrated as part of an API, operating system, or plug-in/add-on for a web browser 112. Such a program may be downloaded and installed for integration into the command line of a device or location field of a browser program 112.
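The client-side TLD-list check described above reduces to comparing an identifier's last label against the known TLDs. The list below is a small sample standing in for a synchronized ICANN list:

```python
# Sketch of client-side TLDA detection against a TLD list, as in the
# discussion above: an identifier whose last label is not a known TLD
# is treated as carrying a top-level domain alias (TLDA).
ICANN_TLDS = {"com", "net", "org", "edu", "gov", "info", "biz", "us"}  # sample only

def has_tlda(domain, tlds=ICANN_TLDS):
    last = domain.lower().rstrip(".").rsplit(".", 1)[-1]
    return last not in tlds
```
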
In addition, such a program product may be combined with other plug-in products (e.g., NeoPlanet, RealNames, Netword, NetZero, ICQ, AIM, I-DNS, WALID, Quick Click, Google, etc.) to offer additional functionality and to more quickly reach an existing customer base. Program installation may be activated in response to accessing a web site or any network resource corresponding to a URI. Modifying the source code of the browser program 112 itself or OS (e.g., Windows, Linux, NT, UNIX, MAC, etc.) may be more desirable, in effect, enabling tens or hundreds of millions of users to take advantage of more creative ways to use indicia such as FDNs as a means to access a valid URI. For any of the above implementations, HLD resolvability may be determined before, during, and/or after DNS resolution on the client side, server side, or at any point on a network, including at peer-to-peer machines, proxy servers, firewalls, hubs, routers, resolvers (e.g., DNS resolvers, RealNames resolvers, Netword Resolvers, UDDI resolvers, etc.), and nameservers, etc. In addition, the step of determining HLD resolvability may further reside in hardware, software and/or firmware (e.g., network card, BIOS, adapter cards, etc.). HLD resolvability may be determined before processing a name resolution request or in response to determining that the domain name is unresolvable. The MSIE browser (or other programs that use the MSIE shell) may forward an identifier having an unresolvable domain name or FDN to the autosearch feature for further processing. Instead of prompting the client browser 112 to display an error message, the identifier may instead be processed by the autosearch and/or routed to another naming service provider for further processing. For example, such input may be routed to a FDN registry or to a FDN translation service in operative association with a RealNames server and/or similar resolver and the like.
Modifications may be made to the script/program/template running the autosearch that generates the “response.asp” web page on the server “auto.search.msn.com” to enable FDN processing and/or processing a search request having the FDN or unresolvable domain name. An extra template may be created and used in the registry of the MSIE autosearch feature. When the “auto.search.msn.com” server detects that the request includes a TLDA, the extra template may be used as a means to access an authorization database, registry, name translation database, or “GO LIST”, to determine how to generate a valid URI that corresponds to the received FDN having a TLDA. By using an extra template, the browser program does not have to be modified, thereby eliminating distribution costs for a browser version update. Other templates 162 may be included and used for processing prefix/suffix delimiters, URI redirection from SLD or HLD (e.g., “acme.”, “.net”, or “.sports”) to a corresponding vertical market directory service, using a TLDA as a customized search term (e.g., TLDA Query), using a template for each resolution method, and creating a user definable template in conjunction with specified component data at the time of registration or thereafter. In one aspect of the present invention, name tracking databases, name translation databases, or registries may be centrally maintained and updated through redundant servers. The data structure of such information may be stored as metadata (e.g., XML) or in any other format to allow integration of such data with the data managed by other naming service providers. Through an Application Programming Interface (API), naming service providers can communicate with such resolvers, registries, and/or databases. Furthermore, access can be both platform and language independent. For instance, the TLDA registry can be accessed through any gateway such as Mobile Access Gateway.
All requests may be routed to a NAPTR RR, SRV RR, or round-robin DNS for the purposes of distributing bandwidth and load balancing across a server farm. The server farm may include dedicated servers for each database or parts of a single database that operate in parallel to assure high throughput. In other aspects the name translation databases or registries may be maintained and updates propagated in a distributed hierarchy similar to that of the DNS. The name tracking database may be enhanced by combining the data from the name translation database and storing it at the Internet Service Provider (ISP) level to act as a distributed cache for minimizing bandwidth of server requests across the backbone of the Internet. The enhanced name tracking database may also be distributed as a client side cache for even quicker access, particularly when the network is unstable or unreliable causing retrieval delays, or when a network connection is lost. This invention may be implemented in networks and distributed systems of varying scope and scale. The registry, registration records, name translation database, or name tracking database may be located at separate sites and separate computers in a large distributed system. However, the various servers and clients described may actually be processes running on various computers and interacting to form a distributed and/or peer-to-peer system. In smaller implementations, some or all of the various servers described may actually be processes running on a single computer. All such components of the fictitious domain identifier system are robust, secure, and scalable. Security methods including authorization techniques may be applied to all information records in the system including registries and DNS resource records. Pre-navigation methods may also be applied to FDN detection techniques. 
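The round-robin distribution mentioned above (routing successive requests across a server farm) can be sketched with a simple cycling dispatcher. Server names are placeholders:

```python
# Minimal round-robin load-distribution sketch for the discussion
# above: successive requests cycle through the servers in a farm.
from itertools import cycle

def make_round_robin(servers):
    pool = cycle(servers)      # endless iterator over the farm
    return lambda: next(pool)  # each call yields the next server
```

In practice this selection would sit behind a NAPTR/SRV record or a DNS-level rotation rather than in application code, but the distribution policy is the same.
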
Improvements may be made to the autosearch and/or browser, for example to try and apply such FDN techniques in yet earlier steps or in later steps. There may further be included any number of other steps that may be performed between the step of determining a domain name is fictitious and processing the FDN, such as requesting a network resource or performing a registration request and/or search request. In effect, FDN detection may occur before any validity tests and after failed DNS resolution requests, for example. The same teachings may be applied by those skilled in the art by providing a text box object as input that can be located anywhere and on any web page, including a text box that is embedded in or part of an on-line advertisement. The text box object may be used in a stand-alone application stored on magnetic and/or optical media, or may further be overlaid as an interactive graphical element.
https://patents.google.com/patent/US9141717B2/en
Plain Gaussian p.d.f.

Definition at line 25 of file RooGaussian.h.

#include <RooGaussian.h>

Definition at line 27 of file RooGaussian.h.
Definition at line 41 of file RooGaussian.cxx.
Definition at line 53 of file RooGaussian.cxx.
Definition at line 34 of file RooGaussian.h.

Implements the actual analytical integral(s) advertised by getAnalyticalIntegral. This function will only be called with codes returned by getAnalyticalIntegral, except code zero. Reimplemented from RooAbsReal. Definition at line 151 of file RooGaussian.cxx.

Definition at line 31 of file RooGaussian.h.

Evaluate this PDF / function / constant. Needs to be overridden by all derived classes. Implements RooAbsReal. Definition at line 61 of file RooGaussian.cxx.

Compute \( \exp\left(-0.5 \cdot \frac{(x - \mu)^2}{\sigma^2}\right) \) in batches. The local proxies {x, mean, sigma} will be searched for batch input data, and if found, the computation will be batched over their values. If batch data are not found for one of the proxies, the proxy's value is assumed to be constant over the batch. Reimplemented from RooAbsReal. Definition at line 99 of file RooGaussian.cxx.

Interface for generation of an event using the algorithm corresponding to the specified code. The meaning of each code is defined by the getGenerator() implementation. The default implementation does nothing. Reimplemented from RooAbsPdf. Definition at line 198 of file RooGaussian.cxx.

Definition at line 142 of file RooGaussian.cxx.
Definition at line 189 of file RooGaussian.cxx.

Definition at line 45 of file RooGaussian.h.
Definition at line 46 of file RooGaussian.h.
Definition at line 44 of file RooGaussian.h.
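The unnormalized Gaussian evaluated by this class can be written out directly. A minimal Python equivalent of the formula above (not the RooFit implementation itself, which handles normalization and proxies separately):

```python
# Unnormalized Gaussian, exp(-0.5 * (x - mu)^2 / sigma^2), matching
# the expression documented above for RooGaussian's evaluation.
import math

def gaussian(x, mean, sigma):
    arg = x - mean
    return math.exp(-0.5 * arg * arg / (sigma * sigma))
```

RooFit normalizes this shape over the fit range when it is used as a p.d.f.; the raw value at the mean is therefore 1.
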
https://root.cern.ch/doc/v622/classRooGaussian.html
Marley Marley is a framework for building RESTful web services and applications with roughly _O(n)_ lines of code, where n is lines of DDL. It consists of several parts: A simple Rack application that acts as request parser/router. A default controller for ORM model classes. A plugin system similar to Sequel's. Marley Joints - A framework for creating reusable Marley resource sets. Reggae - A JSON data format The point of the whole thing is to minimize the amount of non-model (non-datacentric) code that needs to be written to implement a web service/application. Please see the examples/forum.rb and the included Joints' code to get a basic idea of what a Marley application looks like. Marley Resources Marley resources are constants in the Marley::Resources namespace. A resource must respond either to a #controller method or to one or more REST verbs (#rest_get, #rest_post, #rest_put, or #rest_delete). A #controller method must return an object that responds to one or more REST verbs. REST verb methods should return an object with a #to_json method. The Parser/Router The parser splits the request path on '/' and treats the first part as the requested resource. If no resource is specified in the request, it uses the default resource. If the resource has a REST method corresponding to the request verb, the parser calls that method and sends the return value to the client. The parser also traps various errors and returns the appropriate error message to the client. :include: rdoc/plugins.rdoc The Default Model Controller One of the things that the RestConvenience Sequel plugin does is add a #controller method to affected models. This method instantiates and returns a ModelController object for the model in question. At initialization, the Controller parses the request path to determine the model instances to which the request refers. :include: rdoc/joints.rdoc :include: rdoc/reggae.rdoc Jamaica The default Marley client is “Jamaica”, which consists of JS/CSS for browsers.
It has now been moved to a separate repository at github.com/herbdaily/jamaica
https://www.rubydoc.info/gems/marley/0.8.4
Nossir, there is nothing wrong with my blog: the screenshot above is indeed taken from a Mac desktop, and the one you see on the center is actually an instance of the iPhone emulator. What you should notice, however, is the list of Identity Providers it shows, so strangely similar to what we have shown on Windows Phone 7 and ACS… but I better start from the beginning. A couple of months ago the Windows Azure Platform Evangelism team, in which I worked until recently, released a toolkit for taking advantage of the Windows Azure platform services from Windows Phone 7 applications. The toolkit featured various integration points with ACS, as explained at length here. At about the same time, the team (and Wade specifically) released a version of the same toolkit tailored to iOS developers. That first iOS version integrated with the core Windows Azure services, but didn't take advantage of ACS. Well, today we are releasing a new version of the Windows Azure toolkit for iOS featuring ACS integration! Dear iOS friends landing on this blog for the first time: of course I understand you may not be familiar with the Windows Azure Access Control Service. You can get a quick introduction here, however for the purpose of providing context for this post let me just say the following: ACS is a fully cloud-hosted service which helps you to add to your application sign-in capabilities from many user sources such as Windows Live ID, Facebook, Google, Yahoo, arbitrary OpenID providers, local Active Directory instances, and many more. Best of all, it allows you to do so without having to learn each and every API or SDK; the integration code is the same for everybody, and extremely straightforward. All communications are done via open protocols, hence you can easily take advantage of the service from any platform, as this very post demonstrates. Try it!
I am no longer on the Evangelism team, but the ACS work for this deliverable largely took place while I was still on it: recording the screencast and writing this blog post provides nice closure. Thanks to Wade for having patiently prepped & provided a Mac already perfectly configured for the recording! Also, for driving the entire project, IMO one of the coolest things we've done with ACS so far. And now, for something completely different: The Release As usual, you'll find everything in DPE's GitHub repository: There will be four main entries you'll want to pay attention to: - watoolkitios-lib This is a library of Objective-C snippets which can help you to perform a number of common tasks when using Windows Azure. For the specific ACS case, you'll find code for listing identity providers, acquire and handle tokens, invoke the ACS management APIs, and so on. - watoolkitios-doc As expected, some documentation - watoolkitios-samples A sample application which demonstrates how to put the various snippets together - cloudreadypackages Those are a set of ready-to-go packages that can be directly uploaded and launched in Windows Azure, without requiring you to have access to Visual Studio or the Windows Azure SDK: all you need is to deploy them via the portal (which works on Mac, too). The packages can be used as test backend for your iOS applications. The packages take advantage of the technique described here to allow changes in the config settings even after deploy time. Which is a great segue for… - The ACS config tool for iOS In the Windows Azure Toolkit for Windows Phone 7 we included some Visual Studio templates which contain all the necessary logic for wiring up a phone application to ACS and configure ACS to issue tokens for that app. In iOS/xCode there's no direct equivalent of those templates, but we still wanted to shield the developer from many of the low level details of using Windows Azure.
To that end, we created a tool which can automatically configure the application, ACS and Windows Azure. Using the ACS Configuration Tool for iOS in the Toolkit If you want to see the tool in action, check out the webcast; here I will give you few glimpses, just to whet your appetite. This is a classic wizard, and it opens with a classic welcome page. We don't like surprises, hence we announce what the tool is going to do let's click next. The first screen gathers info about the Windows Azure storage account you want to use; nothing to do with ACS yet. Next. The next screen gathers the certificate used for doing SSL with the cloud package. Again, no ACS yet. Next. Ahh, NOW we are talking business. Just like in the toolkit for Windows Phone 7 we offered the possibility of using the membership provider or ACS, here we do the same: depending on which option you pick, the way in which the user will be prompted for credentials and how calls will be secured will differ accordingly. Here we go the ACS way, of course. I would say this is the key screen in the entire process. Here we prompt the developer to provide the ACS namespace they want to use with their iOS application, and the management key we need to modify the namespace settings accordingly. If you are unsure about how to obtain those values, a helpful link points to a document which will briefly explain how to navigate the ACS portal to get those. In this wizard we try to strike a balance between showing you the power of the services we use and keeping the experience simple. As we did for the WP7 toolkit, here we apply some defaults (Google, Yahoo and Live ID as identity providers, pass-through rules for all) that will show how ACS works without offering too many knobs and levers to operate. If you are unhappy with the defaults, you can always go directly to the portal and modify the settings accordingly.
For example you may add a Facebook app as identity provider, and that will show up automatically in the phone application without any changes to the code. The final screen of the wizard informs you that it has enough info to start the automatic configuration process. First it will generate a ServiceConfiguration.cscfg file, which you'll use for configuring the Windows Azure backend (your cloudready package) via the portal. Then the wizard will reach out directly to the ACS management endpoint, and will add all the settings as specified. As soon as you hit Save the wizard will ask you for a location for the cscfg file, then it will contact ACS and show you a bar as it progresses through the configuration. Pretty neat! Above you can see the generated ServiceConfiguration.cscfg. Of course the entire point of generating the file is so that you don't have to worry about the details, but if you are curious you can poke around. You'll mainly find the connection strings for the Windows Azure storage and the settings for driving the interaction with ACS. All you need to do is to navigate (via the Windows Azure management portal) to the hosted service you are using for your backend, hit Configure and paste in the autogenerated ServiceConfiguration.cscfg. The next step in the screencast shows how to run the sample application, already properly configured, in Xcode. If you hit the play button, you'll be greeted by the screen with which I opened the post. The rest is business as usual: the application follows the same pattern as the ACS phone sample and labs: an initial selection driven by browser based sign in protocols to obtain and cache the token from ACS (a SWT) and subsequent web service calls secured via OAuth. Below a Windows Live ID prompt, followed by the first screen of the app upon successful authentication. Well, that's it folks!
I know that Wade and the gang will keep an eye on the GitHub repository: play with the code, let them know what you like and what you don’t like, branch the code and add the improvements you want, go crazy!
https://blogs.msdn.microsoft.com/vbertocci/2011/07/25/using-the-windows-azure-access-control-service-in-ios-applications/
So I'm trying to make a tree data structure, which can be instantiated by a nested hash. My code is as follows, and should just recursively make nodes out of keys, and children out of their values.

    class Tree
      attr_accessor :children, :node_name

      #def initialize(name, children=[])
      #  @children = children
      #  @node_name = name
      #end

      def initialize(hashTree)
        @node_name = hashTree.keys[0]
        @children = []
        p node_name
        hashTree[node_name].each do |hash|
          children << Tree.new(hash)
        end
      end
      #...
    end

    p = {'grandpa' => { 'dad' => {'child 1' => {}, 'child2' => {} },
                        'uncle' => {'child 3' => {}, 'child 4' => {} } } }
    p p
    p Tree.new(p)

Output:

    {"grandpa"=>{"dad"=>{"child 1"=>{}, "child2"=>{}}, "uncle"=>{"child 3"=>{}, "child 4"=>{}}}}
    "grandpa"
    /Users/Matt/sw/sevenLang/week1/hw-tree.rb:8:in `initialize': undefined method `keys' for ["dad", {"child 1"=>{}, "child2"=>{}}]:Array (NoMethodError)
        from /Users/Matt/sw/sevenLang/week1/hw-tree.rb:12:in `new'
        from /Users/Matt/sw/sevenLang/week1/hw-tree.rb:12:in `block in initialize'
        from /Users/Matt/sw/sevenLang/week1/hw-tree.rb:11:in `each'
        from /Users/Matt/sw/sevenLang/week1/hw-tree.rb:11:in `initialize'
        from /Users/Matt/sw/sevenLang/week1/hw-tree.rb:26:in `new'
        from /Users/Matt/sw/sevenLang/week1/hw-tree.rb:26:in `<main>'
    [Finished in 0.1s with exit code 1]

hashTree[node_name] is p["grandpa"], and is a Hash:

    {"dad"=>{"child 1"=>{}, "child2"=>{}}, "uncle"=>{"child 3"=>{}, "child 4"=>{}}}

Hash#each will yield a two-element array: a key and a value. So if you write

    hashTree[node_name].each do |hash|

and hashTree[node_name] is a Hash, hash will always be a two-element array. Due to a trick in its grammar, Ruby will auto-splat an array argument if there are multiple formal parameters, so you can also write:

    hashTree[node_name].each do |name, hash|

This will not result in an error. (You do actually still have an unrelated error in logic, as you're skipping a level.)
An error-free version:

    class Tree
      attr_accessor :children, :node_name

      def initialize(name, hashTree)
        @node_name = name
        @children = []
        hashTree.each do |child_name, hash|
          children << Tree.new(child_name, hash)
        end
      end
    end

    p = {'grandpa' => { 'dad' => {'child 1' => {}, 'child2' => {} },
                        'uncle' => {'child 3' => {}, 'child 4' => {} } } }
    p Tree.new("Family", p)

This can be shortened by using map:

    class Tree
      attr_accessor :children, :node_name

      def initialize(name, hashTree)
        @node_name = name
        @children = hashTree.map do |child_name, hash|
          Tree.new(child_name, hash)
        end
      end
    end
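For comparison, the same recursion is explicit in Python: dict.items() hands back (key, value) pairs directly, which is what Hash#each was doing implicitly in the Ruby version above. This is a translation sketch, not part of the original answer:

```python
# Python analogue of the corrected Ruby Tree: each dict entry becomes
# a (name, subtree) pair, and children are built recursively.
class Tree:
    def __init__(self, name, subtree):
        self.node_name = name
        self.children = [Tree(child, sub) for child, sub in subtree.items()]

    def names(self):
        # Preorder walk, handy for checking the structure.
        return [self.node_name] + [n for c in self.children for n in c.names()]
```
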
https://codedump.io/share/Yow8HhMtgeFD/1/is-ruby-turning-my-hash-into-an-array-if-so-why
hey folks, i was not able to come out of the shock that in Arduino float/double supports only two digits after decimal! even though it was mentioned 6-7 digits of precision including the digits before and after the decimal. can someone help me to overcome this problem. i wanted to read a value of 0.009, 0.000008 etc... such kind of values... any help is appreciated!
thnq, chiru

Syntax:

    Serial.print(val)
    Serial.print(val, format)

Parameters:
val: the value to print - any data type
format: specifies the number base (for integral data types) or number of decimal places (for floating point types).

I'd describe the issue as a bug in the Serial print function myself - default options should not be hiding your values from you.

@MarkT, what should be the default number of decimals for float in your opinion? (just curious)

Floats are similarly printed as ASCII digits, defaulting to two decimal places.

Programming Tips: As of version 1.0, serial transmission is asynchronous; Serial.print() will return before any characters are transmitted.

All of them, of course. Every other programming language I can think of does this.

    #include <stdio.h>
    int main ()
    {
        float f = 12.3456789;
        printf ("f is %f\n", f);
        return 0;
    }

    f is 12.345679
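The thread's point, that the value is stored at full precision and only the default print formatting truncates it, is easy to demonstrate outside Arduino. On the Arduino side the fix is Serial.print(val, digits); the Python sketch below shows the same effect with format specifiers:

```python
# Two-decimal default formatting hides a small value entirely;
# raising the precision (the Serial.print(val, digits) equivalent)
# reveals it. The float itself was never truncated.
val = 0.000008
assert format(val, ".2f") == "0.00"      # looks like zero at 2 decimals
assert format(val, ".6f") == "0.000008"  # full value at 6 decimals
```
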
http://forum.arduino.cc/index.php?topic=89788.0
CC-MAIN-2015-48
refinedweb
265
59.7
JWT in Query String

You can also pass the token in as a parameter in the query string instead of as a header or a cookie (ex: /protected?jwt=<TOKEN>). However, in almost all cases it is recommended that you do not do this, as it comes with some security issues. If you perform a GET request with a JWT in the query param, it is possible that the browser will save the URL, which could lead to a leaked token. It is also very likely that your backend (such as nginx or uwsgi) could log the full URL paths, which is obviously not ideal from a security standpoint.

If you do decide to use JWTs in query parameters, here is an example of how it might look:

from flask import Flask, jsonify, request
from flask_jwt_extended import (
    JWTManager, jwt_required, create_access_token,
)

# IMPORTANT NOTE:
# In most cases this is not recommended! It can lead to some
# security issues, such as:
# - The browser saving GET request urls in its history that
#   have a JWT in the query string
# - The backend server logging JWTs that are in the url
#
# If possible, you should use headers instead!

app = Flask(__name__)
app.config['JWT_TOKEN_LOCATION'] = ['query_string']
app.config['JWT_SECRET_KEY'] = 'super-secret'  # change this!
jwt = JWTManager(app)

# By default, the query parameter where the JWT is looked for is `jwt`,
# and can be changed with the JWT_QUERY_STRING_NAME option. Making
# a request to this endpoint would look like:
# /protected?jwt=<ACCESS_TOKEN>
@app.route('/protected', methods=['GET'])
@jwt_required
def protected():
    return jsonify(foo='bar')

if __name__ == '__main__':
    app.run()
http://flask-jwt-extended.readthedocs.io/en/latest/tokens_in_query_string.html
CC-MAIN-2018-34
refinedweb
247
56.29
- status: open --> pending-works-for-me

Mocker is a mock objects library for python

Code sample:

import mocker

class MyTests(mocker.MockerTestCase):
    def testWhatever(self):
        mock = self.mocker.mock()

When writing self., it does not suggest mocker. Also, self.mocker. does not suggest mocker fields.

Thanks

Logged In: YES
user_id=617340
Originator: NO

Strange, it seems to work here... It seems it could be a misconfiguration problem: if you added mocker.py later (after your interpreter was already set), you have to refresh it to get it to work (or this could be a bug in 1.3.15, as I've tested it against 1.3.18).

Cheers,
Fabio
https://sourceforge.net/p/pydev/bugs/731/
CC-MAIN-2016-50
refinedweb
110
69.38
View loads a view. The easiest way to load views is to first write them as Closure templates. These templates compile to JavaScript functions that return HTML code. You can use the output of these functions to create DOM elements.

The following template, for example, creates two JavaScript functions called views.mainView() and views.otherView():

{namespace views}

/**
 * The main view.
 */
{template .mainView}
  <p id="b">Paragraph 1</p>
  <p>Paragraph 2</p>
{/template}

/**
 * The other view.
 */
{template .otherView}
  <p id="b">Paragraph 3</p>
  <p>Paragraph 4</p>
{/template}

You compile it to a JavaScript file called views.js. You load views.js. Then you can use these two functions with the URL dispatcher:

/*global $, createHashHandler, views */
$(function () {
    var handleHashChange = createHashHandler([{
        re: /a/,
        controller: function () {
            $('#mainDiv').html(views.mainView());
        }
    }, {
        re: /b/,
        controller: function () {
            $('#mainDiv').html(views.otherView());
        }
    }]);
    $(window).bind('hashchange', handleHashChange);
    handleHashChange();
});

Now, setting your URL fragment to '#a' will set the contents of your main div to the main view. Setting the URL fragment to '#b' automatically sets the main div's contents to the other view.

And that, combining Closure templates and my dispatcher, is all you need to structure a single-page web application. Put them together, and you have a JavaScript version of Django's template (view) and view (controller) layers. Your single-page applications can now be as well structured as your multi-page applications.
http://duganchen.ca/view-templates/
CC-MAIN-2021-17
refinedweb
230
59.4
ComboBox QML Type

Provides drop-down list functionality. More...

Properties

- acceptableInput : bool
- activeFocusOnPress : bool
- count : int
- currentIndex : int
- currentText : string
- editText : string
- editable : bool
- hovered : bool
- inputMethodComposing : bool
- menu : Component
- model : model
- pressed : bool
- selectByMouse : bool
- style : Component
- textRole : string
- validator : Validator

Signals

Methods

Detailed Description

Add items to the ComboBox by assigning it a ListModel, or a list of strings to the model property.

ComboBox {
    width: 200
    model: [ "Banana", "Apple", "Coconut" ]
}

Property Documentation

This QML property was introduced in QtQuick.Controls 1.1. See also validator and accepted.

This property specifies whether the combobox should gain active focus when pressed. The default value is false.

This property holds the number of items in the combo box. This QML property was introduced in QtQuick.Controls 1.1.

This property specifies the text being manipulated by the user for an editable combo box. This QML property was introduced in QtQuick.Controls 1.1.

This property holds whether the combo box can be edited by the user. The default value is false.

This property can be used to determine when to disable event handlers that may interfere with the correct operation of an input method. This QML property was introduced in QtQuick.Controls 1.3.

The style Component for this control. See also Qt Quick Controls Styles QML Types.

Allows you to set a text validator for an editable ComboBox. When a validator is set, the text field will only accept input which leaves the text property in an acceptable or intermediate state.

Note: This property is only applied when editable is true.

import QtQuick 2.2
import QtQuick.Controls 1.2

ComboBox {
    editable: true
    model: 10
    validator: IntValidator { bottom: 0; top: 10 }
    focus: true
}

This QML property was introduced in QtQuick.Controls 1.1.

See also acceptableInput, accepted, and editable.

Signal Documentation

This signal is emitted when the Return or Enter key is pressed on an editable combo box. If the confirmed string is not currently in the model, the currentIndex will be set to -1 and the currentText will be updated accordingly.

Note: If there is a validator set on the combobox, the signal will only be emitted if the input is in an acceptable state.

The corresponding handler is onAccepted. This QML signal was introduced in QtQuick.Controls 1.1.

Method Documentation

Finds and returns the index of a given text. If no match is found, -1 is returned. The search is case sensitive. This QML method was introduced in QtQuick.Controls 1.1.

Returns the text for a given index. If an invalid index is provided, null is returned. This QML method was introduced in QtQuick.Controls 1.1.
https://doc.qt.io/archives/qt-5.5/qml-qtquick-controls-combobox.html
CC-MAIN-2019-26
refinedweb
451
58.18
Boosting with filters

By default, search results are sorted according to score (relevance). In some scenarios, you want to boost the score of hits based on certain criteria. One way to do that is the BoostMatching method, which has two parameters.

- Filter expression -- any expression that returns a filter, meaning you can pass it the same types of expressions that you can pass to the Filter method.
- Boost factor -- if you pass it 2, a hit that matches the filter and would have a score of 0.2 instead has a score of 0.4. So, that hit is returned above a hit with a score of 0.3 that does not match the filter.

Examples

The following code increases the probability that a blog post about the fruit "banana" is sorted before a blog post about the brand "Banana Republic".

searchResult = client.Search<BlogPost>()
    .For("Banana")
    .BoostMatching(x => x.BlogCategory.Match("fruit"), 2)
    .GetResult();

This is enforced by the fact that the For method in the above sample returns an IQueriedSearch object, while the Filter method does not. So if the code compiles, it should work.

You can call the method multiple times. If a hit matches several filters, the boost is accumulated. Note however that, while calling the method five or ten times is fine, it is not a good idea to apply a huge number of boosts. For example, you may use the BoostMatching method to boost recently published blog posts by giving ones published today a significant boost, and ones published in the last 30 days a slight boost. But adding a different boost for each of the last 365 days results in a very slow query or even an exception.

Imagine you are developing a site for a car dealer and index instances of the following Car class.
public class Car
{
    public string Make { get; set; }
    public string Model { get; set; }
    public string Description { get; set; }
    public double SalesMargin { get; set; }
    public bool InStock { get; set; }
    public bool NewModelComingSoon { get; set; }
}

When a visitor performs a search, you want to return results that match the search query, and order them according to relevance. But you may want to tweak the sorting a bit to optimize for the car dealer's business conditions. For example, if a certain model is in stock, the dealership can deliver it and receive payment faster, so you want to boost hits where that is true.

var searchResult = client.Search<Car>()
    .For("Volvo")
    .BoostMatching(x => x.InStock.Match(true), 1.5)
    .GetResult();

Also, if the dealer has a high profit margin for a certain model, you could boost those cars.

var searchResult = client.Search<Car>()
    .For("Volvo")
    .BoostMatching(x => x.InStock.Match(true), 1.5)
    .BoostMatching(x => x.SalesMargin.GreaterThan(0.2), 1.5)
    .GetResult();

Finally, if a model will soon be replaced by a newer model and the older model is in stock, it might be valuable to sell it before the new model comes out and the older model's value decreases. So, you give a significant boost to hits that match those criteria.

In these ways, you adapt the scoring (that is, sorting) of search results to optimize them for business. In other situations, you may want to optimize results for the user. For example, on a culinary site, you boost hits for certain recipes depending on what you know about a logged-in user's allergies, previous searches, or previously-printed recipes.

Combining BoostMatching with other methods

The BoostMatching method is not limited to text searches. You can also use it after the MoreLike method. And, considering the large number of filtering options, you can combine BoostMatching with geographical filtering methods to boost hits close to the user.

Last updated: Feb 23, 2015
https://world.episerver.com/documentation/Items/Developers-Guide/EPiServer-Find/9/DotNET-Client-API/Searching/Boosting-with-filters/
CC-MAIN-2021-25
refinedweb
619
61.46
Some C++ materials mention that we should not call a virtual function inside a ctor or dtor, but we might call one accidentally. Is there any way to prevent that? For instance:

#include <iostream>
using namespace std;

class CAT
{
public:
    CAT() { f(); }
    virtual void f() { cout << "void CAT::f()" << std::endl; }
};

class SMALLCAT : public CAT
{
public:
    SMALLCAT() : CAT()
    {
    }
    void f() { cout << "void SMALLCAT::f()" << std::endl; }
};

int main()
{
    SMALLCAT sc;
}

output:

void CAT::f()  // not what we expected!!!

Thanks

You should throw those "C++ materials" into the garbage bin. You can call virtual functions in the constructor or destructor, and they will do their job. You just need to be aware that the language spec clearly states that the virtual dispatch mechanism works in accordance with the current dynamic type of the object, not its final intended dynamic type. These types are not the same for an object that is being constructed or destroyed, which often surprises newcomers when they attempt to invoke virtual functions from constructors/destructors. Nonetheless, calling virtual functions from a constructor or destructor is a useful feature of the language, as long as you know what they do and how they work in such cases.

Why would you want to "prevent" it? It is like saying that the division operator is dangerous because one can divide something by zero, and asking how to "prevent" its use in a program.

You can call a virtual function from the destructor. Only in some cases it will not work, and could crash your program. The best way to avoid calling them is to not call them. There is nothing more intricate I know of, other than perhaps some static analysis tools to look at your code and warn you of potential issues like this.
You CAN call a virtual function in the ctor/dtor. The catch is that the vtable is set up in each constructor (and destructor), so your virtual function call will invoke the implementation from the class that is currently being set up. If that is what you want, cool. However, you might as well save the vtable lookup and do a scoped function call (e.g. CAT::f()).
http://codeblow.com/questions/preventing-calling-virtual-function-in-constructor-or-desturctor/
CC-MAIN-2019-09
refinedweb
403
59.43
Is anybody else getting a yellowish hue on their new iPad?

Feb 27, 2013 12:54 PM (in response to lindon85)

I want to add to this discussion since I am in the middle of this unfortunate iPad lottery as well. I purchased an iPad 4 from a third-party vendor and the uneven backlighting issue was evident, with a slightly yellowish-gray cast in the upper left corner as well. To the average person it may not matter. But I use the iPad for creating content (art, sketches, etc.) and I am an avid reader (iBooks, internet, etc.) so this issue makes for a less than smooth viewing experience after long periods of time. On a blank white screen, it is evident that the center is the brightest, coolest area, while the lesser brightness frames the screen vertically on the right and left side (which seems to be the common characteristic issue I have been reading about online). Also, if you create or find a neutral gray box or page and zoom in to cover the entire screen, you can see any discoloration and unevenness as well.

I returned that one, and got a replacement from Apple directly online. This one doesn't have the yellowish-gray cast on the upper corner, but has the same vertical framing issue. I thought I would get used to it, but after a long period of looking at the screen during normal use (reading, content creation, etc.) it becomes obvious. Even more evident in lower lighting environments. Not everyone lives in a brightly lit environment like the Apple Store where flaws are diminished at first glance.

My plan has an endgame though. I'm going back to the Apple Store, explain the issue and have blank pages ready to show. If the replacement exhibits the same characteristics, I will swap it for the refurbished iPad 3 that my father owns (he is okay with this). His has very good backlighting evenness and a slightly warmer cast (which is better than a bluish cast for long-term reading).
My use of the iPad requires more visual attentiveness than his use, so I will give him the iPad 4, which for him is mostly used for music, movies and videos. Too bad that it has come to this, but as a 21-year user/owner of Apple products, who has an acutely trained eye for digital color due to my career, I have never had to play this game with any of their screen-based products (iPhone, iMacs, Displays) right out of the box.

Feb 27, 2013 12:57 PM (in response to Alvin Martinez)

Not to burst your bubble, but the iPad 3 had the issue big time as well. I kept one that was the least offensive after 10. The iPad mini is not like the iPads. But it's also not retina. The next iPad will likely be like the mini and the form factor is much better. They are also likely to use a different screen tech. If you can wait, then I would.

Feb 27, 2013 1:02 PM (in response to BarrettF77)

True about the iPad 3. This goes way beyond a generation of iPads. But the iPad 3 that my father owns and is willing to swap is as close to perfect as one is going to get. Maybe the fact that it was refurbished may have something to do with it. But I think I will pass on the iPad 5 unless they use a different and better screen technology and the specs are markedly better than the iPad 3 (whereas the iPad 4 in everyday use is not much faster than the iPad 3).

Feb 27, 2013 1:06 PM (in response to Alvin Martinez)

No, I agree completely. The iPad 4 was brought out because the 3 had heating issues from prolonged use. The Apple store said they had to replace display units once every few weeks because the glass would shatter from heat. The 5 should have a much better performance ratio and the display panel will be different. It's the only way to make the unit slimmer and use a single light bar. The 3 and 4 have two LED light bars. Wait and see I guess.
Mar 17, 2013 1:15 AM (in response to lindon85)

I purchased my brand new iPad mini 2 days ago and noticed the yellowish screen immediately after the first boot. It's the first Apple product that I have ever purchased and I'm very disappointed. I'll wait for 2-3 days more and if the screen doesn't turn white I'll go to the store and try to exchange it. But of course there's no guarantee that another unit won't have the same issue. :(

Mar 22, 2013 10:48 AM (in response to lexus232)

Unfortunately the replacement that I got has the same problem. Now there's uneven discolouration at the bottom of the screen; I can clearly see it on a keyboard. The return button is whiter than the space bar. I'm very disappointed with such quality and I will claim a complete refund.

Mar 22, 2013 10:54 AM (in response to bentley123456)

It's a lottery out there. Apple has no consistency as they like to brag about how many they have sold, not how many quality units have shipped without issue. Main reason I kept my 3rd gen iPad. The iPad mini my sister bought and the one I bought showed one uniformly pinkish and one uniformly blueish. This holds true on any product Apple makes. The iPad however is the only one I've seen where different corners will be different colors on some models. Also got some with a long dark bar down the sides of some that I was sent a year ago. Plus, I read about the LG fiasco on the retina MacBook Pros; google it, and you'll see what someone in California is doing because of their issues. All you can do is keep trying with new units and playing the lottery at different stores or just get out of the iOS ecosystem. Cook has shown over the last year he isn't interested, so I wouldn't get your hopes up. Hopefully you get an outcome you're happy with!

Mar 22, 2013 9:31 PM (in response to lindon85)

My first iPad 4 definitely had a yellowish tint to it. The tint was consistent across the display.
As soon as I noticed it, within a day, I returned it to the Apple store. I actually have always found Apple to be exceptional in their responsiveness to problems like this. I own two recent MacBook Pros, two iPads, three iPhones in my family, AirPort Extreme wireless etc. etc. So this wasn't my first return. I have never been questioned on my return (even if I was out of the 14 day window normally offered for returns).

In this case with the iPad 4, the Apple rep looked at it and immediately agreed with me that it was yellowish warm and offered to give me a refund, replace it, whatever I wanted. I asked for a replacement. The Apple rep initiated on her own the suggestion that we open up the new one to ensure that the display was perfect in every way. Which we did. And the second one was perfect and still is today some 3 or 4 months later.

My primary point is that I really value Apple products and have no concerns whatsoever about their standing by their products. Every product I have is perfect. I have had to return an item or two and have found them to be as good as Costco in that regard :))))

I agree that they may have a quality control issue that they need to look into re this iPad situation, but my suggestion to everyone on this thread is: if you're not happy with your iPad, bring it back! I have dealt with two different Apple stores in the GTA and have found both stores to be equally responsive.

I am surprised at times on various threads that I read here how many times people try to fix a problem on their own with an Apple product they are not happy about. Some people have tried recalibrating their displays with third party apps etc. My advice is just bring it back and don't accept a product that isn't doing, looking, etc. the way it should.

I would be interested if anyone on this thread has tried to return their iPad 4 or other Apple product and been denied. It's never happened to me ever.
Better to not have to bring it back, but great that Apple will not leave you in the cold!!!!

Mar 24, 2013 11:02 AM (in response to lindon85)

And by the way, if this yellow hue is adhesive that didn't have enough time to dry before shipping to a customer, then why did my iPad mini, which I purchased a week ago, have iOS 6.0.1?

Mar 25, 2013 4:50 AM (in response to lindon85)

Unfortunately the Apple premium reseller has refused to change my iPad a 2nd time; no refund either. This is unacceptable. The first time they acknowledged the problem, and now they say that it's normal and it's just a variation in screen color. They didn't even look at my iPad. Well, at least I have learned one lesson from it: never buy anything from Apple resellers, buy only directly from the Apple Store or their website. I hope the yellow stain in the bottom left corner will go away with time.

Mar 25, 2013 4:57 AM (in response to lexus232)

There are a couple of teardowns around the internet. They state that there is no glue. There may be glue on the iPhone, but not on the iPad 3/4. Not sure about the Mini, though I'd imagine that it's the same thing. And as someone who went through a few iPad 3s with this issue, I can say that it does not go away over time. At least it never did with the few that I ended up returning with this issue.

lexus232 wrote: And by the way, if this yellow hue is adhesive that didn't have enough time to dry before shipping to a customer, then why did my iPad mini, which I purchased a week ago, have iOS 6.0.1?

Mar 26, 2013 6:15 AM (in response to lindon85)

Yup, the iPad 4 that my sister bought me sometime around Thanksgiving still has a very apparent yellow tint on the left side of the screen (landscape mode, home button on the right). Kind of frustrating.
Apr 3, 2013 3:09 AM (in response to lindon85)

After 2 and a half years I decided it was time to upgrade my iPad 1st gen to the iPad 4. One of the things that sold me was this "amazing" retina display. I bought the iPad at Best Buy, got home, turned it on and immediately noticed something wrong with the display... it's YELLOW! I was so disappointed. I compared it side by side with my iPad 1st gen and the difference is like night and day! The display on the original iPad is much better than the retina display. The whites are pure white on the 1st gen and yellowish/greenish on the iPad 4; also the display seems much dimmer. I will return it to Best Buy tomorrow. This thread has 116,000 views so far; you'd think Apple would have resolved the issue by now. I wish I could just upgrade the iOS on my 1st gen.

Apr 3, 2013 3:18 AM (in response to lindon85)

They don't give a sh*t about QA. I have seen so many different panels, and even in the store there are different ones laying around. Pink hues are common. It's still unbelievable that there are so many different panels built into this expensive device. Go return it and start the game again. If they don't get their act together, market share will be overrun by the friendly green robot soon.

Apr 3, 2013 3:49 PM (in response to lindon85)

Update: I returned the iPad to Best Buy and got a replacement one... guess what? Still YELLOW! It's not as bad as the first one, but when compared side by side with my iPad 1st gen the difference is night and day. I am very disappointed with the display quality. Apple definitely dropped the ball on this one.
https://discussions.apple.com/message/21693267
CC-MAIN-2014-15
refinedweb
2,158
70.73
Probably the most popular bracing styles are the ones below:

'K&R style' -- Named after Kernighan & Ritchie, whose book's examples are formatted this way.

if (cond) {
    <body>
}

'Allman style' -- Named after Eric Allman.

if (cond)
{
    <body>
}

'Whitesmiths style' -- popularized by the examples that came with Whitesmiths C, an early commercial C compiler. Basic indent per level shown here is eight spaces, but four spaces are occasionally seen.

if (cond)
        {
        <body>
        }

'GNU style' -- Used throughout GNU EMACS and the Free Software Foundation code, and just about nowhere else. Indents are always four spaces per level, with `{' and `}' halfway between the outer and inner indent levels.

if (cond)
  {
    <body>
  }

My "religious" preference is definitely Allman style, but from what I understand authors tend to prefer K&R so that they don't lose precious book space when printing source. The cool thing is that Whidbey lets you format your code and use whatever indentation style you prefer (there's 60+ other choices you can make to format your code). What style do you use, and are you religious about it? Say you found a sample on a web site that used a different bracing style, would you change the bracing style before running the app? I would, but only if I were planning on keeping the code...

Definitely Allman style

I find Allman code easiest to read & to write, but it's no big deal. I care more about tabs. Use an editor that doesn't convert them to spaces, so I can be happy with my 2-space tabs and you can keep your great big ones.

Well the beauty of VS is that when I type a } it makes everything 'right', so most stuff tends to get formatted before running. And of course Allman style is the only way to write ;).

My current preference is Allman style, but I used to be a K&R guy. Even back when I was doing Delphi I used to write "if (…) then begin" all on one line, all in the name of saving space. Screen real estate has improved a lot in recent years, so now I prefer the readability advantage of Allman over the space-savings of K&R.
While I'm not religious about it, I do feel that it's very important when doing team development that everyone agrees on the same style – nothing worse than looking at code that has a mix of two or more different styles used interchangeably. Big tabs are another pet peeve of mine; if you code inside a namespace, and then you have a class with a method and that method has nested logic, you're talking about four tabs for code, not very pretty…

I definitely agree with Luke that mixing and matching styles is a big no-no…

Definitely Allman for me too. And I admit that if I pick up code formatted in K&R style that I'm intending to keep/extend, I do reformat it, even though in my heart of hearts I know it's silly and anal-retentive to do so. I'm curious as to how I developed this preference (I honestly don't remember, and my introduction to C was the Kernighan and Ritchie book back in my Amiga days). I always format HTML open/close tags to be balanced and symmetrical in the same way, so either one preference influenced the other or possibly they're both evidence of a deeper pathology.

Gosh. Okay, I'll go against the grain. K&R for me. Don't really like having a brace on a line by itself. I also always put in my closing brace immediately after the opening brace. Also changed VS.NET settings so that it doesn't automatically move the brace to a new line.

Check out the Java coding conventions; this is the true way to indent if you want others to have a good time reading your code. Here is the one about methods:

I use K&R and am *extremely* anal about using it, too. And another thing I'm pretty bad about is going through and replacing other people's:

if (x == y)
    x = null;
else
    x = y;

with:

if (x == y) {
    x = null;
} else {
    x = y;
}

It's K&R or Java for me too. The brackets just shouldn't have a line of their own.
if (condition) {
    statements;
} else if (condition) {
    statements;
} else {
    statements;
}

In truth, if you were creating a language today and you weren't trying to recruit C programmers to it, you wouldn't have the brackets at all – VB's like that. The main use for the braces is for bragging rights: C/C++/Java have them and VB/COBOL/APL don't.

/*
 * There is a lot of things VB doesn't have.
 */

Read the source of Linux, which employs the K&R format. The code is easy to read and it is structured nicely. The largest collaborative project in the world uses it; there is no more to say.

I'm a K&R guy, but I'm slowly converting to Allman's. Each new line used to be a pain for me, but the readability is worth it.

I like to use the Horstmann style, which has the space saving of the K&R style but the readability of the Allman style. After the opening brace, press tab and start typing up the code. If I found a piece of code that I would use in my programs that used a different style, I would probably go ahead and change it, just to be consistent.

K&R style is an abomination that should be stricken from the face of the planet. 🙂 Allman style is the only way to go, IMNSHO.

Infidels! Those unfortunates who end up in my employ or training room use K&R or find themselves on the wide end of the LART. I've never understood why Allman style is thought more readable than the One True Brace Style; it's ugly, and it's illogical. /Statements/ get a line to themselves (with minor caveats), braces do not. That said, I have to agree with the other K&R fan who thinks braces should be dropped entirely. Kudos: but only if you're going to replace them with strict indentation like Python, which is cruftless, readable /and/ logical. Huzzah!

I program full-time on a 1024×768 laptop screen. Subtract bare minimum VS.NET toolbars etc. You're left with an area that's perfectly usable… if you use K&R. I can't understand this readability argument.
Your eyes scan down the column that the 'if', 'select', 'for' line starts on, and look for the matching closing brace. I never have to look for the opening brace because it's instinctive for me to type:

if () {

then go and fill in the condition. Long live K&R. I want to see code on my page, not needless white space!

I apparently use 'Whitesmith's'. I find it interesting that style guides say that Allman and Whitesmith's are equal in prevalence, as I have come across no-one else who uses the same style as me and have suffered much chastising about it (I do see Allman's a great deal, though). Long live Whitesmith's and all the (evidently hidden) masses who use it.

BTW, I do have to agree with Keith Patrick's comments about single-statement bodies. I break my own style by allowing opening and closing braces to be on one line in those cases, but I'd much rather have the braces than not. I've been bitten in the ass a couple times by inferring meaning from misapplied indent ('misapplied' meaning from the standpoint of the original code's own style, not my preferred style; this is of course why Python is preposterous).

Allman style for me too. I think statements like "Braces should not get their own line, only statements" are stupid – for one, braces ARE statements, in a way (think Pascal begin and end). For two, your ending braces get a line to themselves in 1TBS, don't they?

Allman for almost everything, with K&R for the few pieces of code where Allman is not appropriate (long series of if/else, or an independent if followed by just a continue/break or goto; yes those can be useful in C or C++ but almost never in more modern languages). And indeed there are even a few occasions where a single-line if () { … } is appropriate. That's why you must not be too religious. Strict, yes. Stubborn, no.

Allman style. It is a lot easier to read if the braces line up vertically; makes it clearer what a block is. i.e. you have an if, while etc.
on one line, and the block that it executes on the following line. Block is made up of brace, statements, close brace… I’m also rather anal about ALWAYS putting braces around even single-line blocks in the interests of readability. I hate religious arguments about programming style – yet I’m so opinionated that I can’t help but comment and get sucked into the discussion: I used to be a strict K&R but have really honed in on Allman in the last two years or so. Allman with inline exceptions (begin-end braces on one line – as long as they do not contain nested braces). I think I liked K&R at the time because of the saving in screen real estate. But now that I’m not limited to an 80×25 CRT display, Allman really does make more sense. When you have multiple nestings of {} it helps a great deal to have them in the same column. It also flows well logically – some block label as the header, and then the code-block: block-label { … } Indentations are a must – but must not be overdone. Again I find inline acceptable: block-label { …. } But never block-label {…{….}…{….}….} There are several questions that need to be asked: Who is going to look at my code? Who is going to debug my code? Does my coding style help me? If you program for yourself – then it really doesn’t matter what you pick. But if your code is going to be read by someone else, then part of your responsibility is to help them out. Code elegance is a small amount of easily readable code. Not code that takes up little space. -CF You’ve got to line up your curly brackets! No programming teacher I’ve ever had has ever advocated any other method. I didn’t even know any other methods existed until recently! What the heck kind of a complaint is "Oh, I just don’t feel comfortable giving a bracket its own line"?? It’s the start of an entire enclosed section of code, that’s not important enough for you?
Having brackets that aren’t aligned with one another is like having matching bookends holding your books together but one of them is facing sideways and the other is facing forward. Sure it still works, but it just doesn’t look right! I’ve had to use all of these styles at one time or another, so I’m pretty flexible. K&R was really useful in the days when most of us programmed on 24×80 terminals. I did find one case where K&R style could prevent a problem. I had a programmer working for me who took the error messages produced by the compiler quite literally. There was a case where the compiler told him that there was a semicolon expected, so he added one: if (cond) ; { code that now always executes } With K&R style, the code would still work as written. Of course, the real problem was the programmer… Being in an "Allman" shop has really made me hate the style. Taking up extra space for a { makes reading the code more difficult because it makes detecting the indentation of the code more difficult. It also makes catching and handling exceptions hard to read because a rather run-of-the-mill exception handling routine can take up many screens. I used to use Allman style but converted to K&R when I realised that I found K&R easier to read. Allman is best. I have nothing more to add than what others have already posted. I have noticed more programmers have converted from K&R to Allman than vice versa. The arguments for Allman’s win hands down. End of discussion. I learned C from K&R and even while reading that book, I disliked that style, choosing instead to do what appears to be called Whitesmith’s. Now I do Allman’s, since that’s what VS does automatically – it hasn’t been much of a switch. One thing I do which VS’s auto formatter doesn’t like is to line up case ‘a’: // with the following break; so that it’s easier to spot missing break statements.
Call me stupid… but I am running Visual Studio 2005 express edition, and I cannot change the default indentation style to Allman style! Who knows how to do it? Thanks. Of course, I found Tools/Options/Text Editor… Hi Matthijs, To change this, go to Tools…Options…Text Editor…C#…Formatting…New Lines… and make sure all the checkboxes under the "New Line Options for Braces" are checked. As you check and uncheck each one, you will see a "preview" screen that shows how the option will change formatting. Thanks danielfe, that helped! If the New Line Options aren’t there you need to check the Show All Settings box in the bottom left. I came across text naming K&R "Kernel" style – maybe because it is used in the Linux kernel? Kernel seems easier to write to me, so I’ll use this name for it. I have used Allman since forever. I didn’t imagine there could be another style until I encountered it in code written by others or downloaded from the net. This may be due to historical reasons. My beginnings as a programmer were in Pascal, almost 20 years ago. So switching from begin/end to {/} naturally led me to Allman. Nevertheless, I always considered Pascal a very readable language, perfect for ppl needing to write lots of code occasionally, but not being mainly busy writing code – back then I needed to code my own programs to drive machinery under DOS, my own programs to do statistical processing on data acquired, and the like, but there were long periods of time where I would only use the programs I wrote, and not enhance them. So it seems to me reasonable that Allman brings more readability into the code than Kernel. I encountered the argument that in C and C++, having the opening brace on the same line makes the difference between a function/method declaration and a function/method definition easier to spot. But I find this to be plain BS, since the difference is actually made by the presence or absence of the semicolon.
I agree that at the time when you had to program in text mode in a 24×80 console, screen real estate was expensive. But even back then you could change your video mode to 50×80, or something like that, and double your screen real estate. Besides, back then there were few compilers (any at all?) supporting exceptions, and it is my experience that braces tend to eat up disturbingly much space especially when you code exception handling using Allman. Today, however, screens have become big enough so you don’t really have a screen real estate problem anymore. You can even use two screens if your development/debugging process or your code is very complex. So why use a style which degrades readability? Another issue about readability: usually when I experiment with a solution, or in the initial versions of prototype code, I write ugly, long blocks. During such tests, I also edit this ugly code a lot, add logging, remove logging, add conditional code in some place, move it into another place, etc. It often happens during such ugly and dirty changes (I wouldn’t call them quick, however), that I lose control over my braces. Using the proper indentation and the braces aligned like Allman, I don’t have such a difficult time spotting a missing brace, and I can use the environment to reformat the code after a change. Now, with Kernel I’d surely have a hard time following opening braces in all sorts of positions (at the end of a for/if/while, maybe after a multiline test condition for an if). Using Allman, all opening braces are easy to spot in the starting position of individual rows. I’m about to start writing a coding standard for internal use. I guess anybody can tell which style I’ll recommend. The arguments presented by various ppl above surely helped me decide. Until I reached this page, I was still thinking I must be missing something about Kernel if so much code and so many coders use it. Now I think it’s just a historically determined bad thing, like many others.
K&R wins hands down for me. I used to be an Allman guy, since that seemed to be what most people were using. I then worked on a project which used K&R and was far more readable as a result. Like the person who commented before, I find it far easier to scan for if…} or while…} etc. than {…} This is just plain horrible: try { … } catch ( … ) { … } I also find it near impossible to get an overall picture of the structure of Allman code since so little of it fits on the screen. K&R at all times. Also the style I remember from early C++ (C with Classes anyone?) and Pascal. Perl lets you put if/while statements at the end or beginning of a statement to emphasize what’s important. What’s more important – that there’s a block of code, or that there’s a block of code that might execute, or will loop more than once? Emphasizing the opening statement seems much more sensible to me. Allman’s style is good if you program for line counters, though. Especially when you do switch – case with braces. Anything except K&R. I simply find it hard to read. I have some odd SQL coding conventions that I follow, too, all in the name of readability. IfYouWantToKnowWhatI1Is { read this; forget Allman } if_you_want_to_know_what_l1_is { keep smiling; use kernel style; } Never understood why searching for ‘{‘ is an issue (unless you are trying to remove it); just look at indentation. And 8 spaces of indentation is a blessing for anyone prone to writing unnecessarily complicated, hence less maintainable code. Just my thoughts… After I use Python, I enjoy programming in the K&R style. I need Allman style. If I even use a piece of code that isn’t written in Allman style, I will make it Allman style. This is time-consuming. I think of anyone who doesn’t use Allman style as making more work for me to properly indent their code. 😉 As far as how many spaces to indent, I’m not picky, so long as it’s consistent throughout its entire scope, and the matching brackets are equally indented.
Allman style, 2-space tabs. I use 4-space tabs in Python since blocks are indicated by tabs instead of braces. Much prefer the K&R style. I used to be an Allman developer in college until I saw the light. More screen space to see your code is a plus, but the main reason I like it is that the controlling statement nicely matches up with the ending brace. Visually it’s easier for me to see this than if I had an opening brace right below it. In the place I work now everyone uses Allman and it’s a nightmare looking at code that has a few lines in each if / else construct. The code just looks so bloated. I have a lot of C/C++ books, and they mostly use Allman style…which I HATE!! I can understand the readability argument of Allman though, so I leave an additional space like Allman, but keep my opening brace on the first line. e.g. int function (params) { function code; } And if/else for loops are pretty similar. if (condition) { do this } EXCEPT, if there is only one line to be executed, in which case I write it like this: if (condition) do this; I used to favor Allman style, but I switched to K&R when I had to deal with multiple programming languages (VB.NET, C#, Ruby). Although I can agree that Allman style is (at times) more readable, I favor the ability of K&R to allow just about ANY programming language to use the SAME number of lines.
Examples:

C/C++/C#/Java/PHP/Perl:
----------
1 if (expression1) {
2     statement1;
3     statement2;
4     statement3;
5 } else if (expression2) {
6     statement4;
7     statement5;
8 } else {
9     statement6;
10 }

VBScript/VB6/VB.NET:
----------
1 If Expression1 Then
2     Statement1
3     Statement2
4     Statement3
5 ElseIf Expression2 Then
6     Statement4
7     Statement5
8 Else
9     Statement6
10 End If

Pascal/Delphi:
----------
1 if expression1 then begin
2     statement1;
3     statement2;
4     statement3
5 end else if expression2 then begin
6     statement4
7     statement5
8 else
9     statement6
10 end;

Ruby:
----------
1 if expression1
2     statement1
3     statement2
4     statement3
5 elsif expression2
6     statement4
7     statement5
8 else
9     statement6
10 end

COBOL:
----------
1 IF EXPRESSION-1
2     STATEMENT-1
3     STATEMENT-2
4     STATEMENT-3
5 ELSE IF EXPRESSION-2
6     STATEMENT-4
7     STATEMENT-5
8 ELSE
9     STATEMENT-6
10 .

Python's indentation rule is an annoyance, because I strongly feel that progressive INdentation must be followed by eventual progressive DEdentation. If something is indented 4 spaces, there must be something present on a line that dedents this block 4 spaces. I might think of using a comment to denote the end of the block:

Python:
----------
1 if expression1:
2     statement1
3     statement2
4     statement3
5 elif expression2:
6     statement4
7     statement5
8 else:
9     statement6
10 #end if

Whitesmiths Style all the way, it makes my code look cleaner, as well as making the distinction that braces are part of the block of code, NOT the opening statement.
https://blogs.msdn.microsoft.com/danielfe/2003/11/24/its-all-a-matter-of-style/
abcd wrote:
> I have a file, "a.py"
>
> blah = None
> def go():
>     global blah
>     blah = 5
>
> From the python interpreter I try....
>
> >>> from a import *
> >>> blah
> >>> go()
> >>> blah
>
> ...I was hoping to see "5" get printed out the second time I displayed
> blah, but it doesn't. Now, if I type this same code directly into the
> python interpreter it works as I was hoping. What am I missing?

In Python, 'global' means 'module level'. And 'variables' are name:object bindings in a namespace (mostly function, class or module). The way you import blah and go from module a creates two names (blah and go) in the current namespace, and binds these names to resp. a.blah and a.go. The fact that go rebinds a.blah doesn't impact the current namespace's blah.

The following session may help you understand what happens:

>>> from a import *
>>> dir()
['__builtins__', '__doc__', '__name__', 'blah', 'go']
>>> import a
>>> dir()
['__builtins__', '__doc__', '__name__', 'a', 'blah', 'go']
>>> dir(a)
['__builtins__', '__doc__', '__file__', '__name__', 'blah', 'go']
>>> go is a.go
True
>>> go()
>>> blah
>>> a.blah
5
>>>

Note that rebinding a name and mutating a (mutable) object are two distinct operations:

# b.py
blah = ['42']
def go():
    blah[0] = "yo"
def gotcha():
    global blah
    blah = ['gotcha']

>>> from b import *
>>> import b
>>> blah
['42']
>>> blah is b.blah
True
>>> go()
>>> blah
['yo']
>>> b.blah
['yo']
>>> blah is b.blah
True
>>> gotcha()
>>> blah
['yo']
>>> b.blah
['gotcha']
>>>

To make a long story short:
1/ avoid globals whenever possible
2/ when using (module) globals, either use them as pseudo-constants or as the module's 'private' variables (IOW: only functions from the same module can modify/rebind them).

HTH
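The mechanics above can be shown in a single self-contained script. This sketch builds a stand-in for the module `a` at runtime with `types.ModuleType` (an assumption made purely so the example runs without a separate a.py file); the fix for the original poster is the same: access the value through the module attribute instead of a `from a import *` snapshot.

```python
import types

# Build a stand-in for module a.py at runtime (illustration only;
# normally `a` would be a real file on disk).
a = types.ModuleType("a")
exec(
    "blah = None\n"
    "def go():\n"
    "    global blah\n"
    "    blah = 5\n",
    a.__dict__,
)

blah = a.blah   # like `from a import *`: a one-time snapshot of the binding
a.go()          # rebinds a.blah inside the module's own namespace

print(blah)     # None - the local name still points at the old object
print(a.blah)   # 5    - attribute lookup goes through the module namespace
```

So the working pattern is `import a` followed by `a.blah`, which re-reads the module's namespace on every access.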
https://mail.python.org/pipermail/python-list/2007-March/418486.html
Getting Started with Anvil

This guide will lead you through the creation of your first Anvil app. Together we will build a Guest Book app, and when we're done we will have something that looks like this. We recommend you open in a new tab, and follow along with this guide.

Logging In

Open , and click This is just to identify yourself to Anvil - your apps will not be stored with Google.

Create a blank app

Once you have logged in, the next page will list all your Anvil Apps. You probably don't have any yet, so let's fix that. Click Create new app to get started. You get a choice of templates to start from, but we are going to build our guest book app from scratch, so choose "Blank App".

Hello, World!

Now that you've created a blank app, you will find yourself in the Anvil App Editor. You can read more about the App Editor in the Anvil documentation. For now, we're going to go ahead and start building the guest book. It's traditional for the first program you write to greet you with the message Hello, World!, so let's start there. In Anvil, this will take roughly three clicks as well as typing the message. The blank app template isn't actually quite blank. You will see in the App Browser on the left of the IDE that your project already contains one Form. Forms are the pages of your app that the user sees. They contain Components, such as labels, buttons and text boxes. The main panel in the IDE is the Form Editor. This is where you design your form. First, find the Toolbox in the top-right of your screen, and find the Label component: Click and drag the label icon onto the form editor. Once you have placed your new label, you can move and resize it. You will notice that Anvil is helping you to drop the new component onto a hidden grid - this is so that it's easy for you to line up multiple components.
When you're happy with its position and size, notice that the Property Table in the bottom-right of the IDE is now displaying all the properties that you can set for the label. We want to change the text property to read "Hello, world!". Your form should look something like this: Now you are ready to run the app.

Running Your App

The Anvil IDE has two states: Build and Run. We have been working in build mode so far. When you want to test your app, you will need to switch into the Run state. When in the Run state, if anything goes wrong in your code the Output Console will pop up, allowing you to jump straight to the problem and fix it. Now that your app is ready to run, click Run at the top of the Anvil IDE to switch into the Run state. What you see now is your running app. Admittedly, it's not very exciting yet. We should make it do something.

User Input

Wouldn't it be nice if your app could greet its users by name? Click Stop in the bar at the top of the IDE, and return to the form designer. Using the Toolbox again, add a Label, a TextBox and a Button so that your form looks something like the example below. You will need to edit the text property of the new label and the button as you did before.

Naming Components

As we will want to refer to the text box and the button from code, let's give them names. Select the text box, and then use the Property Table to set its name property to "txt_name", as in the screenshot on the right. In the same way, rename the button to "btn_greet" and the original label (the one that still reads "Hello, World!") to "lbl_greeting". There is nothing magic about the lbl_, btn_ and txt_ prefixes to the names. You can call your components anything you want (as long as the names start with a letter and don't contain spaces), but using a prefix that identifies the type of the component can help you keep track of things.

Bringing Your App to Life

Now we need to write some code to wire up our form.
When the user enters their name and clicks the button, we will have the app display a greeting. Double click your "Greet" button. The form editor switches into Code View: The code you see is what drives your app. Every form has an associated Python class, which you can view by switching the Form Editor into Code View, as you just did. We want to add an action when the user clicks btn_greet. By double-clicking the button in the designer, we asked Anvil to create a new function for us. This function will be called whenever a user clicks the button at run-time. The names of our components are important, because now we can refer to them in code. Inside the Form1 class, we can refer to our text box as self.txt_name. This means we can get the text from the box by writing self.txt_name.text.

def btn_greet_click(self, **event_args):
    # This method is called when the button is clicked
    self.lbl_greeting.text = "Hello %s!" % self.txt_name.text

Replace the btn_greet_click function that Anvil has created for us with the code on the right. Make sure that you don't change anything else - the __init__ function must remain, for instance. Now you're ready to run your app again. Once it appears, try typing your name in the text box and clicking "Greet". You should see something like this:

Saving Data

Now that we've seen how to add actions to buttons on our form, let's think about the next feature we need for our guest book: storage. We need to be able to save details of those who have visited. Before we start, let's ask the user for some more interesting information. Select the greeting label (lbl_greeting) and delete it by pressing DELETE on your keyboard. Create a new TextArea component and an associated label so that your form looks like the updated example below. Name the new TextArea component txt_message. You will also want to move the button and change its text property. There is no need to rename it.
You do not need to give names to the 'Enter your name:' and 'Enter a message:' labels. The only reason to name components is if you want to refer to them from the code.

Linking to Google Spreadsheets

To store visitors' details, we will link a new Google Spreadsheet to Anvil. We can do this entirely from inside Anvil. As we want our app to interface with Google, we must add the Google API Service. In the App Browser, click the + next to SERVICES and then choose to add the Google API. Select the Google API Service you just added, and click Add App File. Anvil will ask for permission to view your Google Drive files, and then you will see a list of all your folders. You could navigate around your Google Drive to choose where to create the file, but for now just click Create File Here to create it in the root folder. In the next dialog, choose the Spreadsheet file type. Enter the name "Guest Book" and then click Create File. You should see a table showing your Guest Book spreadsheet along with a "Python identifier". This is how we refer to the sheet in code. To see our new spreadsheet, click View to open the new file in Google Drive. Now that we know how to refer to that spreadsheet in code, let's try saving visitor data.

Adding Data Rows

Switch back to Anvil, then use the App Browser to switch back to the Form Editor by clicking on Form1.

from anvil import *
# Importing anvil.google.drive allows us to connect to
# our linked spreadsheet.
import anvil.google.drive

class Form1 (Form1Template):

    def btn_greet_click(self, sender, **event_args):
        # This method is called when the button is clicked
        user_name = self.txt_name.text
        msg = self.txt_message.text
        # Adding another row to a worksheet is simple:
        self.guest_sheet.add_row(name=user_name, message=msg)

    def __init__(self):
        # This sets up a variable for every component on this form.
        # For example, if we've drawn a button called "send_button", we can
        # refer to it as self.send_button:
        self.init_components()
        # Set "self.guest_sheet" to the first
        # worksheet of your spreadsheet.
        self.guest_sheet = anvil.google.drive.app_files.guest_book.worksheets[0]

Click the Code button in the header to switch to Code View, then find your original btn_greet_click function. This won't work anymore, because we deleted lbl_greeting. Replace the code for the entire form with the code on the right. Here we are loading the spreadsheet we linked earlier, then adding a row to its first worksheet. Run your app, enter some text in the boxes, then click the button. You won't see any response in the app (yet), but if you open up the spreadsheet in Google Drive, you should see the data has arrived. Notice that Anvil has filled in the column headings for you. When you add data, columns are automatically created as necessary. Our guest book app is starting to take shape. We have built the user interface, and we can save information that our visitors enter. Now, we just need to load and display that information for future visitors. To do this, we will add a LinearPanel to the form. Find it in the Toolbox, and add it below the button. When the form loads, we will add one label to the linear panel for every row in our spreadsheet, each displaying the name of the visitor. Linear panels are Containers that arrange their child components vertically, keeping them all lined up. They are ideal for situations like this. Name your new LinearPanel lst_visitors. While it's empty, it can be quite hard to see on the form, but you should end up with something that looks like this. The LinearPanel is the selected box at the bottom. Now we should write some more code to populate the linear panel when our app loads. Click the Code button to switch back into Code View.
Startup Code

def update_previous_signatures(self):
    # First, we clear out all components from the LinearPanel,
    # so it's completely empty.
    self.lst_visitors.clear()
    # Now, we add a Label to the panel for each row
    # in the spreadsheet.
    for row in self.guest_sheet.list_rows():
        row_label = Label(text=row["name"])
        self.lst_visitors.add_component(row_label)

def __init__(self):
    # This sets up a variable for every component on this form.
    self.init_components()
    # Set "self.guest_sheet" to the first
    # worksheet of your spreadsheet.
    self.guest_sheet = anvil.google.drive.app_files.guest_book.worksheets[0]
    # Start off by loading previous messages
    self.update_previous_signatures()

On startup, we want to load data from the Google Spreadsheet and display it in lst_visitors. Find the __init__ method in the code - this is the method that runs when Form1 loads. Replace it with the two functions on the right. You can see that we are loading the spreadsheet as we did when we were saving the data - but instead of adding a row, this time we loop through all the rows, creating a new Label for each one and adding it to the Linear Panel. Notice that we can initialise the properties of each new Label by passing them in as named parameters (row_label = Label(text=row["name"])). We could set any number of properties this way. For more information about adding components to containers in code, please see the API Reference. Run your app again. You should see the names of your previous visitors displayed at the bottom of the form: At this point you may notice that signing the guest book does not cause the list to update immediately. To fix that, add a call to self.update_previous_signatures() at the end of the btn_greet_click function. In any case, restarting the app will cause the list to update.

Custom Components

Forms can also be used as components on other forms. Let's see how this can simplify our app. At the moment, we just display the name of each previous visitor in a label.
If we wanted to display the visitors' messages too, we would need to add a second label per row, and things would start to get messy. We can improve things by designing a new Form as a template for a single row, then adding an instance of that form for each row in our spreadsheet. The benefits should become clear as we progress.

Creating a Template Form

To add a new form, click the + button in the FORMS section of the App Browser. A new form will appear, called Form2. Select it, then give it a more useful name by clicking next to the name, choosing Rename and entering RowTemplate. Now add two labels to the form, so that it looks like the example below. Name them lbl_name and lbl_message.

# New initialisation function for RowTemplate class
def __init__(self, row):
    self.init_components()
    # Save a reference to the row object for later
    self.sheet_row = row
    # Display the name and message
    self.lbl_name.text = row["name"]
    self.lbl_message.text = row["message"]

Switch to the Code View of the RowTemplate form. Notice that RowTemplate has its own Python class, separate from Form1. When we create new instances of RowTemplate, it needs to know which data row to display, so we'll add a row parameter to the __init__ method. Use the code on the right. When the form is initialised, we will display data from whatever row we were given in the labels. Now we just need to use our RowTemplate form.

Using a Template Form

from anvil import *
# Import the RowTemplate class, which is
# defined by the RowTemplate form.
# We can use it like any other component.
from RowTemplate import RowTemplate
# Importing anvil.google.drive allows us to connect to
# our linked spreadsheet.
import anvil.google.drive
...

...
for row in self.guest_sheet.list_rows():
    row_form = RowTemplate(row)
    self.lst_visitors.add_component(row_form)
...

Switch back to Form1 and go to Code View. We don't need to modify the layout of the form, we will just change the way data is displayed when the form loads.
This is remarkably easy. Instead of adding a label to lst_visitors for each row of data, let's add a new instance of RowTemplate. Change the code to look like the example on the right. Don't forget the extra import at the top - this allows us to refer to the new RowTemplate form. The for loop in the second block on the right should replace the one in your update_previous_signatures function. Run the app. You should see one copy of RowTemplate appearing for each previous visitor to your app.

Modifying Data

Now that we have a template form, we can do more than just display data from each row. Let's add a button to the template that allows users to delete rows from the spreadsheet. Switch back to the RowTemplate designer, move the labels slightly and add a button called btn_delete. Your form should look something like this: When a user clicks the Delete button, we need to do two things:

- Delete the data row from the spreadsheet
- Remove this instance of the template form from the displayed list.

def btn_delete_click(self, **event_args):
    # The Delete button has been clicked!
    # First, delete the row from the spreadsheet
    self.sheet_row.delete()
    # Now this entry has been removed
    # from the spreadsheet,
    # remove ourselves from the form.
    self.remove_from_parent()

Double click the button to auto-generate an event-handler method, then replace it with the code on the right. Make sure you do not delete the auto-generated __init__ function. Notice that the row object we got from the worksheet (via __init__) is more than just a dictionary - we can update data in the row, or delete it (as we do here). For more information about using Google Sheets, see the API reference. We're done! Run your app, and see that you can now delete data as well as add and view it. In the drop down menu at the top-right of the App Browser, choose "Share". The link provided allows you to share your guest book with your friends. Send them the link, and watch as the messages arrive!
Next Steps

Congratulations on completing your first Anvil app. We have really only scratched the surface of what is possible with Anvil. Take a look at some of the other template apps for examples of graphics and animation, server-side code, and even physics simulation. Finally, please sign our Guest Book!
https://anvil.works/doc/getting_started.html
Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 0.9.0

Description

When reading a parquet file containing a string column, the RowGroup statistics contain a trailing space character for the string column. The example below shows the behavior.

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# create and write arrow table as parquet
df = pd.DataFrame({'string_column': ['some', 'string', 'values', 'here']})
table = pa.Table.from_pandas(df)
pq.write_table(table, 'example.parquet')

# read parquet file metadata and print string column statistics
pq_file = pq.ParquetFile(open('example.parquet', 'rb'))
print(pq_file.metadata.row_group(0).column(0).statistics.max)
# yields b'values '
print(pq_file.metadata.row_group(0).column(0).statistics.min)
# yields b'here '

For other data types I did not observe this problem, even though the statistics are always strings. When reading the same file with fastparquet, there is no trailing space character, which implies that this problem occurs in the reading path of pyarrow.parquet. I am aware that this might well be an issue with parquet-cpp, but as I face this bug as a pyarrow user, I report it here. I'll try to investigate this further and report back here.

Update: The trailing space is added in parquet-cpp. pyarrow calls the function FormatStatValue, which adds the trailing space. There is no comment there to explain it. Does anyone here know what the reason is?
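Until the padding is removed upstream, one reader-side workaround is simply to strip the trailing space from the reported min/max bytes. This is a hypothetical, stdlib-only sketch using the values from the report above; note it would be lossy if a genuine string value legitimately ended in a space.

```python
# Values as reported by pyarrow's row-group statistics in the example above
raw_max = b'values '
raw_min = b'here '

# Hypothetical workaround: drop the trailing space added by FormatStatValue.
# Caveat: lossy if a real min/max value actually ends in a space character.
clean_max = raw_max.rstrip(b' ')
clean_min = raw_min.rstrip(b' ')

print(clean_max)  # b'values'
print(clean_min)  # b'here'
```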
https://issues.apache.org/jira/browse/ARROW-2503
Man Page Manual Section... (3) - page: iconv

NAME
iconv - perform character set conversion

SYNOPSIS
#include <iconv.h>

size_t iconv(iconv_t cd, char **inbuf, size_t *inbytesleft,
             char **outbuf, size_t *outbytesleft);

DESCRIPTION
The argument cd must be a conversion descriptor created using the function iconv_open(3).

The main case is when inbuf is not NULL and *inbuf is not NULL. In this case, the iconv() function converts the multibyte sequence starting at *inbuf to a multibyte sequence starting at *outbuf. At most *inbytesleft bytes, starting at *inbuf, will be read. At most *outbytesleft bytes, starting at *outbuf, will be written.

The iconv() function converts one multibyte character at a time, and for each character conversion it increments *inbuf and decrements *inbytesleft by the number of converted input bytes, it increments *outbuf and decrements *outbytesleft by the number of converted output bytes, and it updates the conversion state contained in cd. If the character encoding of the input is stateful, the iconv() function can also convert a sequence of input bytes to an update to the conversion state without producing any output bytes; such input is called a shift sequence. The conversion can stop for four reasons:

1. An invalid multibyte sequence is encountered in the input. In this case it sets errno to EILSEQ and returns (size_t) -1. *inbuf is left pointing to the beginning of the invalid multibyte sequence.

2. The input byte sequence has been entirely converted, that is, *inbytesleft has gone down to 0. In this case iconv() returns the number of nonreversible conversions performed during this call.

3. An incomplete multibyte sequence is encountered in the input, and the input byte sequence terminates after it. In this case it sets errno to EINVAL and returns (size_t) -1. *inbuf is left pointing to the beginning of the incomplete multibyte sequence.

4. The output buffer has no more room for the next converted character.
In this case it sets errno to E2BIG and returns (size_t) -1. A different case is when inbuf is NULL or *inbuf is NULL, but outbuf is not NULL and *outbuf is not NULL. In this case, the iconv() function attempts to set cd's conversion state to the initial state and store a corresponding shift sequence at *outbuf. At most *outbytesleft bytes, starting at *outbuf, will be written. If the output buffer has no more room for this reset sequence, it sets errno to E2BIG and returns (size_t) -1. Otherwise it increments *outbuf and decrements *outbytesleft by the number of bytes written. A third case is when inbuf is NULL or *inbuf is NULL, and outbuf is NULL or *outbuf is NULL. In this case, the iconv() function sets cd's conversion state to the initial state. RETURN VALUEThe iconv() function returns the number of characters converted in a nonreversible way during this call; reversible conversions are not counted. In case of error, it sets errno and returns (size_t) -1. ERRORSThe following errors can occur, among others: - E2BIG - There is not sufficient room at *outbuf. - EILSEQ - An invalid multibyte sequence has been encountered in the input. - EINVAL - An incomplete multibyte sequence has been encountered in the input. VERSIONSThis function is available in glibc since version 2.1. CONFORMING TOPOSIX.1-2001. SEE ALSOiconv_close(3), iconv_open
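The buffer dance described above - consume whole characters, stop on a trailing incomplete sequence (EINVAL), flush the conversion state with a final no-input call - is easier to see running. This is not iconv itself, but Python's incremental decoders follow the same contract; a small sketch:

```python
import codecs

# UTF-8 input whose multibyte characters will straddle our chunk boundaries:
# "é" is C3 A9 and "€" is E2 82 AC.
data = "héllo €".encode("utf-8")

# Feed the bytes to an incremental UTF-8 decoder two at a time. Like iconv(),
# the decoder consumes only whole characters and holds back an incomplete
# trailing multibyte sequence (iconv's EINVAL case) until the next chunk
# supplies the rest of it.
decoder = codecs.getincrementaldecoder("utf-8")()
out = []
for i in range(0, len(data), 2):
    out.append(decoder.decode(data[i:i + 2]))
out.append(decoder.decode(b"", final=True))  # flush, like iconv's reset call

assert "".join(out) == "héllo €"

# An invalid sequence (iconv's EILSEQ case) raises an error instead:
try:
    codecs.getincrementaldecoder("utf-8")().decode(b"\xff", final=True)
except UnicodeDecodeError:
    print("invalid multibyte sequence rejected")  # the EILSEQ analogue
```

The key point shared with iconv() is that partial input at a chunk boundary is not an error: the decoder simply remembers the pending bytes in its conversion state and completes the character on the next call.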
https://linux.co.uk/documentation/man-pages/subroutines-3/man-page/?section=3&page=iconv
What do you do with them? You can wear it for Halloween or every day, if desired. It may be for the anime otaku seriously into their cosplay. For the furries out there, I'm not asking, so don't tell me. Now what would the simplified caitlinsdad version be? Well, getting into advanced monitoring and feedback of brainwaves might be kinda expensive (they are dumping those Star Wars force trainers - control the floating ball with your mind - at the toy mega-store, now $60 from $100; or maybe gut an i-dog, but then I don't think the gears are powerful enough) and building some raw circuitry is way more than I want to do. Pulley and invisible strings is so ghetto (not that there's anything wrong with that...). A joystick that controls some motors on a headband? I know, how about servos controlled by an Arduino? And how about using an accelerometer to determine tilt/movement/position of the head to trigger ear movement? Tilt sensor switches might be too basic, GPS gets complicated, and a compass magnetometer only points North. This gives me a project to use with my recently acquired Arduino (no, it did not fall off a truck...I have a receipt). Besides, Caitlin says I have to make this or sew her a custom sling/messenger book bag. This instructable may be more of an Arduino primer than about the cat ears. Note that this is a prototype project and I am throwing it out at this stage in the spirit of Open Source. I know it can be improved a lot by others and with more time to tweak it. Okay, they look like bunneh ears, kinda like a cat. With one of the ear coverings on...

Step 1: Getting Started With Arduino

I found that my local computer supergeekstore had stocked Sparkfun retail Arduino components. Maker Faire sells them at the Make: Shed. Adafruit has them also. You could get them from Sparkfun themselves. Hmmm, tax works out to about what I would pay in shipping, and I would not have to wait at the door for the package.
I had bought an Arduino Uno, $30 (other variants, $25, require the FTDI programmer USB interface, $15, which is built into the Uno). Other add-ons can run up the tab. I have a 2x16 LCD display ($20) to play with next. See if the kits will save you any money. The books were all textbook-priced and I was sure the info could be gleaned from the internet. You have to go to arduino.cc to download the Arduino programming interface for your PC. It allows you to create "sketches" or programs that get uploaded to the ATmega microprocessor chip on the Arduino board. The site contains tutorials and examples of code used to implement the devices you want to interface with the Arduino. On a PC, the only tricky thing in getting started is that you have to manually install the USB drivers so that it recognizes the Arduino on a COM port. A computer with the Arduino software installed, USB drivers, a USB cable to connect the PC and Arduino, and the Arduino Uno is all you need to get started. The Uno has an LED onboard that you can control with a sketch. I wanted to get an Arduino so I could do some of those cool things like build 3-D LED matrices and light-reactive tabletops. It is easy to get your LED blinking and fading. It was easy to modify an example sketch to do Cylon lights (Knight Rider lights): just add more LEDs and resistors. Read up on all the LED instructables and you can throw around terms like "charlieplexing". So, always hook up an LED with a resistor so you don't burn it out (search "LED calculator" for widgets to find the right resistor value). A trip to Radio Snacks to get a bag of assorted LEDs and 100 ohm resistors. A trip back to get a protoboard and 100 ohm resistors because I failed to note that 100k resistors are not 100 ohm resistors. And another trip back after realizing I did not have any wire thin enough (22 gauge?) to fit in the holes on the protoboard (and get solid, not stranded). I saw a precut pack of jumpers but hey, I have a nice pair of wire strippers and can do that.
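Those "LED calculator" widgets are just doing Ohm's law. A quick sketch - the 2.0 V forward drop and 20 mA current here are typical assumed values for a red LED, not measurements from this project:

```python
def led_resistor(supply_v, forward_v, current_a):
    """Series resistor for an LED: R = (Vsupply - Vforward) / I (Ohm's law)."""
    return (supply_v - forward_v) / current_a

# A typical red LED drops about 2.0 V; running it at 20 mA from the
# Arduino's 5 V supply:
r = led_resistor(5.0, 2.0, 0.020)
print(round(r))  # -> 150 (ohms); round up to the next standard value on hand
```

Which is why the 100-300 ohm resistors used in this project are in the right ballpark for ordinary indicator LEDs.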
I saw online some premade jumpers that had a patch-cord plug-like cover on the ends of the wires. Hey, I could get some tiny plastic jewelry beads, pass the wire through, and maybe solder or tin the ends of the wire to keep it in place. Do not do that; tinning the ends makes the wire not fit in the holes. Find a bead that fits snugly or put a drop of glue to hold it in place.

Back to this project: Once you are comfortable enough to wire up your protoboard and have done a few sketches, you should be able to tackle this project. You will need to get:

* Protoboard or breadboard to wire the circuit together
* Wire, lotsa wire, with the ends stripped
* 2 standard servos (my brother flies radio-controlled jets and helicopters so he had some spares to give me - free is always good): three wires, 6 volts DC in, pulse-width-modulation (PWM) signal
* ADXL345 accelerometer breakout board - $30
* Some LEDs and appropriate resistors (200-300 ohm usually works for your standard red/green/yellow medium-sized 5mm LED). The LEDs are really just diagnostic LEDs and indicators to help you fine-tune your programming.
* Some basic soldering skills and tools
* Power pack for your Arduino and power pack for your servos to make it portable
* Some craft fur, craft sheet foam and ear lining material
* Basic sewing skill. You can sew this by hand, by machine, or just use tape, glue or iron-on interfacing.

Step 2: ADXL345 Accelerometer Breakout Board

I had purchased 2 accelerometer boards at the store. 1. I knew I would be making two sets of necomimi. 2. Always have a backup. 3. When you experiment, prepare to toast one of your components. So I also got another Uno to back up the one I already had. This breakout board (you can buy just the chip and figure out how to do that surface-mount soldering, which is really tricky) has the chip mounted with some other critical capacitors/resistors on a circuit board ready for you to wire up. Man, is this breakout board tiny.
At The Shack, I got their digitally controlled soldering station on sale for $50 and this was the first time I got to fire it up. Everything was connected and powered up. I had forgotten there was a clear plastic shipping tube on the soldering tip and I had just put it back into the soldering station holder well. So I had to spend some time scraping the melted plastic and gunk off the tip. Minor setback. I decided to hardwire 8 jumper wires to the breakout board so I could plug the leads into my protoboard. No instructions come with this board, so I took a good guess on how to solder the leads in. If they surface mount on one side, then I would plug the wires in on the bottom side. The pads were not pretinned and did not have a copper surface, so I couldn't tell which side would be better to solder on. Did I mention how small this breakout board is? I used the first preset temperature setting on the soldering station figuring, I am working with sensitive electronics, too much heat is bad... It was kinda hard to melt the solder but I got one wire in. Crank up to the next higher setting. Solder still doesn't melt on contact but seems to work. The third pad I futz around with. I think I pressed in the hole or on the wire too long to get a nice blob of solder going. Dang, the circuit trace lifted off. Of course, stronger epithets were used. You know when that happens, you may be so f...... Desolder the wire and survey the damage. The trace to the pin on the chip is so thin, that pulled away too. I try to salvage it by scraping further up the trace, tinning it, and attaching a jumper wire to the wire anchored in the hole with a blob of solder. This usually works, but later on I find out my board is fried since I can't get it to communicate. I had originally burnt off the INT1 line. I didn't know I would not need it later on. I had also burnt off the pad on the SCL line. That was worse since I couldn't tell if I needed to jump the ground pad.
I see some other complaints on the internet that the pads and traces are so small. I think I would get a LilyPad version next time, which has big pads for sewing with conductive thread. So a warning here: use header pins to solder in if you can get them, and fire up the iron to 670 degrees F with a light touch (the third soldering preset happened to be 680, high). So I discovered this accelerometer would be a pain to work with. At least I had a backup; others may not be able to afford blowing away $30 like that, kinda like the tolls to New Jersey. I was lucky to get the second breakout board wired up. I went in blind thinking this would be a snap to hook up. I checked to see if there are any tutorials on this. I may be in over my head on this. There is minimal support on this product: here, go read the datasheet. By the way, there are two methods of interfacing this, and wiring diagrams are pretty sparse. One nice tutorial - most searches seem to point to this one - was way too complicated in getting the data out and scared me off. I don't think that one was using the Sketch software either. Then I found this tutorial from Love Electronics - it had an ADXL345 library of code and functions to get the data out. Read up on how an electronic accelerometer works and look at all the images, which are helpful. Download their ADXL345 library; I based my sketch on their code example. There are built-in functions to easily read the accelerometer data. Their example allowed me to view the data streaming out of the ADXL345 by using the serial monitor that is part of the Arduino sketch software. I would just need to figure out how to interpret the raw data numbers and g-forces that were being pumped out.

Step 3: Come Together, Right Now...

Use pieces of masking tape to put tags on the ends of the wires of the breakout board for easy identification.
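As for interpreting those raw numbers and g-forces: it is mostly one multiplication. A hedged Python sketch - the scale factor is the nominal ±2 g sensitivity from the ADXL345 datasheet (about 3.9 mg/LSB), which is essentially the conversion the library's ReadScaledAxis() performs for you:

```python
def raw_to_g(raw_count, mg_per_lsb=3.9):
    """Convert a raw ADXL345 axis reading to g.

    3.9 mg/LSB is the datasheet's nominal sensitivity for the +/-2 g
    range (the range this project configures); real parts vary a bit.
    """
    return raw_count * mg_per_lsb / 1000.0

# Resting flat, the Z axis should read roughly 1 g (~256 counts).
print(round(raw_to_g(256), 2))  # -> 1.0
```

Seeing raw counts of roughly ±256 per g on the serial monitor is therefore a good sanity check that the board is talking.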
Wire up the ADXL345 breakout board to the Arduino:

* Connect VCC, SDO and CS to 3.3v on the Arduino. Make sure it is the 3.3v pin or you may burn up the ADXL345.
* Connect GND to ground on the Arduino.
* Connect SDA to Arduino 3.3V with a 100 ohm resistor.
* Connect SCL to Arduino 3.3V with a 100 ohm resistor.
* Connect SDA to Arduino analog pin 4.
* Connect SCL to Arduino analog pin 5.

Learn about pull-up resistor wiring and why it is needed.

Wire up the ADXL345 "connected" status LED:

* Ground the cathode side (the shorter lead).
* Put a 200 ohm resistor in the path.
* Connect the resistor from the anode (long lead) of the LED to digital signal pin 2.

Wire up the three axis indicator LEDs. I initially thought about putting in four LEDs in a diamond formation to indicate the tilt direction. I only used three in the mockup because I went with mapping the x, y, and z values from the accelerometer.

* Ground the cathode side (the shorter lead).
* Put a 200 ohm resistor in the path.
* Connect the resistor from the anode (long lead) of each LED to the appropriate digital signal pins on the Arduino. I used pins 8, 9, 11. They must be defined for output in setup().

Go to Fritzing.org to get the prototyping software to make a fancy hardware diagram and have it generate a nifty circuit board if you plan to take it further. I am working on the Fritzing diagram for my test setup. You can simplify the wiring, but that is how it works out on the protoboard. Of course, you can omit all the LEDs when you build the final product.

Step 4: Sketchy Details

Here is the code of the sketch. Just cut and paste, as uploading a file at ibles may not retain the filename and would be confusing.

//====start of sketch==================
/* This program is based on Arduino sketch examples and the following:
   ADXL345_Example.pde - Example sketch for integration with an
   ADXL345 triple axis accelerometer. */

// Include the Wire library so we can start using I2C.
#include <Wire.h>
// Include the Love Electronics ADXL345 library so we can use the accelerometer.
#include <ADXL345.h>
// Include the servo library to control the servos.
#include <Servo.h>

Servo myservoA; // create servo object to control a servo
                // a maximum of eight servo objects can be created
Servo myservoB; // second servo object
int posA = 0;   // variable to store the servo position
int posB = 0;

// Declare a global instance of the accelerometer.
ADXL345 accel;

// Set up a pin we are going to use to indicate our status using an LED.
int statusPin = 2; // I'm using digital pin 2.

//------------------------------------------------------
void setup()
{
  myservoA.attach(7); // attaches the servo on pin 7 to the servo object
  myservoB.attach(6); // attaches the servo on pin 6 to the servo object
  // Begin by setting up the Serial Port so we can output our results.
  Serial.begin(9600);
  // Start the I2C Wire library so we can use I2C to talk to the accelerometer.
  Wire.begin();
  // Ready an LED to indicate our status.
  pinMode(statusPin, OUTPUT);
  // Create an instance of the accelerometer on the default address (0x1D).
  accel = ADXL345();
  // Check that the accelerometer is in fact connected.
  if(accel.EnsureConnected())
  {
    Serial.println("Connected to ADXL345.");
    digitalWrite(statusPin, HIGH); // If we are connected, light our status LED.
  }
  else
  {
    Serial.println("Could not connect to ADXL345.");
    digitalWrite(statusPin, LOW); // If we are not connected, turn our LED off.
  }
  // Set the range of the accelerometer to a maximum of 2G.
  accel.SetRange(2, true);
  // Tell the accelerometer to start taking measurements.
  accel.EnableMeasurements();
}

//----------------------------------
void loop()
{
  if(accel.IsConnected) // If we are connected to the accelerometer.
  {
    // Read the raw data from the accelerometer.
    AccelerometerRaw raw = accel.ReadRawAxis();
    // This data can be accessed like so:
    int xAxisRawData = raw.XAxis;
    // Read the *scaled* data from the accelerometer (this does its own read
    // from the accelerometer, so you don't have to ReadRawAxis before you use
    // this method). This useful method gives you the value in G thanks to the
    // Love Electronics library.
    AccelerometerScaled scaled = accel.ReadScaledAxis();
    // This data can be accessed like so:
    float xAxisGs = scaled.XAxis;
    // We output our received data.
    Output(raw, scaled);
  }
}

// Output the data down the serial port.
void Output(AccelerometerRaw raw, AccelerometerScaled scaled)
{
  // Initialize the LED pins as outputs.
  pinMode(8, OUTPUT);
  pinMode(9, OUTPUT);
  pinMode(11, OUTPUT);
  pinMode(12, OUTPUT);
  // Tell us about the raw values coming from the accelerometer.
  Serial.print("Raw:\t");
  Serial.print(raw.XAxis);
  Serial.print("   ");
  Serial.print(raw.YAxis);
  Serial.print("   ");
  Serial.print(raw.ZAxis);
  // Tell us about this data, but scale it into useful units (G).
  Serial.print(" \tScaled:\t");
  Serial.print(scaled.XAxis);
  Serial.print("G   ");
  Serial.print(scaled.YAxis);
  Serial.print("G   ");
  Serial.print(scaled.ZAxis);
  Serial.println("G");

  // Show an LED for each axis.
  if (scaled.XAxis > 0) { digitalWrite(8, HIGH); } else { digitalWrite(8, LOW); }
  if (scaled.YAxis > 0) { digitalWrite(9, HIGH); } else { digitalWrite(9, LOW); }
  if (scaled.ZAxis > 0.8) { digitalWrite(11, HIGH); } else { digitalWrite(11, LOW); }

  // Make the servos move according to the conditions sensed. -----------------
  if (scaled.ZAxis < 0.8)
  {
    Serial.println("*** Raise Lower-----------Raise Lower");
    RaiseLower();
    delay(265);
  }
  if (scaled.YAxis < 0)
  {
    Serial.println("*** Twitch Middle---------Twitch Middle");
    twitchMiddle();
    delay(400);
  }
  if (scaled.XAxis < 0 && scaled.YAxis > 0 && scaled.ZAxis > 0)
  {
    Serial.println("*** Wink Right---------------Wink Right");
    winkRight();
    delay(265);
  }
}

//-------------------------------
// Subroutine for raise and lower.
void RaiseLower()
{
  twitchshort();
  twitch();
  for(posA = 0; posA < 135; posA += 1) // goes from 0 to 135 degrees in 1-degree steps
  {
    myservoA.write(posA);       // tell servo to go to position in variable 'posA'
    myservoB.write(180 - posA); // mirror the motion on the other ear
    delay(12);                  // wait for the servo to reach the position
  }
  for(posA = 135; posA >= 1; posA -= 1) // goes from 135 degrees back to 0
  {
    myservoA.write(posA);
    myservoB.write(180 - posA);
    delay(12);
  }
}

// Subroutine for twitch. -------------------------
void twitch()
{
  for(posA = 0; posA < 45; posA += 1) // goes from 0 to 45 degrees
  {
    myservoA.write(posA);
    myservoB.write(180 - posA);
    delay(8);
  }
  for(posA = 45; posA >= 1; posA -= 1) // goes from 45 degrees back to 0
  {
    myservoA.write(posA);
    myservoB.write(180 - posA);
    delay(8);
  }
}

// Subroutine for short twitch. --------------------
void twitchshort()
{
  for(posA = 0; posA < 25; posA += 1) // goes from 0 to 25 degrees
  {
    myservoA.write(posA);
    myservoB.write(180 - posA);
    delay(5);
  }
  for(posA = 25; posA >= 1; posA -= 1) // goes from 25 degrees back to 0
  {
    myservoA.write(posA);
    myservoB.write(180 - posA);
    delay(5);
  }
}

// Subroutine for twitch middle. --------------------
// (The loop bodies here were garbled; reconstructed to match the
// pattern of the other subroutines.)
void twitchMiddle()
{
  for(posA = 0; posA < 100; posA += 1) // goes from 0 to 100 degrees
  {
    myservoA.write(posA);
    myservoB.write(180 - posA);
    delay(10);
  }
  for(posA = 100; posA >= 1; posA -= 1) // goes from 100 degrees back to 0
  {
    myservoA.write(posA);
    myservoB.write(180 - posA);
    delay(10);
  }
  delay(250);
  for(posA = 0; posA < 120; posA += 1) // goes from 0 to 120 degrees
  {
    myservoA.write(posA);
    myservoB.write(180 - posA);
    delay(10);
  }
  for(posA = 120; posA >= 1; posA -= 1) // and back down to 0
  {
    myservoA.write(posA);
    myservoB.write(180 - posA);
    delay(10);
  }
}

// Subroutine for wink right. --------------------
void winkRight()
{
  for(posA = 0; posA < 120; posA += 1) // goes from 0 to 120 degrees
  {
    myservoA.write(posA); // tell servo to go to position in variable 'posA'
    // myservoB.write(180 - posA);
    delay(10);
  }
  for(posA = 120; posA >= 1; posA -= 1) // goes from 120 degrees back to 0
  {
    myservoA.write(posA);
    // myservoB.write(180 - posA);
    delay(10);
  }
  myservoB.write(180); // raise the other ear if resting, to match upright
}
//====end of sketch==================

If you tilt your head forward, the ears should do a full up and down. If you tilt your head to the right, the left ear should go up and down.
There should be a slight wiggle for a slight tilt. The ears drop to a down position. Play around with the delay values and servo timing loops for realistic action. Remember, you may have to figure out where your servos are positioned at the 0-degree start and which way they rotate. You then have to accommodate the movement with the servo commands in the sketch. The X, Y, Z and servo movement in my prototype may not match yours.

Step 5: Prototyping

You can try to run this without the servos hardwired in. Servos are controlled with PWM (pulse-width modulation) digital control signals from the Arduino. The control wire (white) is hooked up to Arduino digital pin 6 or pin 7. The red wire is hooked up to +5v power. Note on the prototype breadboard diagram that the power bus blocks are broken in the middle: jumpers on one side continue the 3.3v rail, and the 5v rail is isolated for the servos. The black wire is hooked up to ground. If you are providing a separate power pack for the servos - recommended, so as not to pull a lot of power through the Arduino - the ground must be common with the Arduino, otherwise the signal won't work. I was trying to see what kind of data was coming out of the ADXL345 to see how I could use it as a tilt sensor. Comments about the board say that the axis reference markings are wrong, so orient the board in different ways and see how the data changes. Modify the code accordingly. Please see the comments in the code to see what each step does. To mimic realistic action, I had to put in a delay each time the servos finish their action; otherwise the ears would be constantly moving even if the person wearing this stayed completely still. I guess you would have to be a real animal behaviourist to figure out all of the motions to program in. I used basic assumptions to create the algorithm for how it reacts to the accelerometer values. A simple tilt to one side, tilt forward, and tilt to the other side is all I worked on.
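If you want to go beyond simple greater-than/less-than thresholds on the axes, the usual tilt-sensing math is only a couple of lines. A hedged sketch in Python (the function names are mine, and the axis assignments are assumptions you would swap to match your board's real mounting orientation, since the silkscreen markings are reportedly wrong):

```python
import math

def tilt_degrees(x_g, y_g, z_g):
    """Pitch and roll in degrees from scaled accelerometer readings (in g),
    using the standard tilt-sensing formulas."""
    pitch = math.degrees(math.atan2(x_g, math.sqrt(y_g ** 2 + z_g ** 2)))
    roll = math.degrees(math.atan2(y_g, math.sqrt(x_g ** 2 + z_g ** 2)))
    return pitch, roll

def tilt_to_servo(angle_deg, max_tilt=45.0):
    """Map a tilt angle in [-max_tilt, +max_tilt] onto the 0-180 servo range,
    clamping anything beyond max_tilt."""
    clamped = max(-max_tilt, min(max_tilt, angle_deg))
    return int(round((clamped + max_tilt) * 180.0 / (2.0 * max_tilt)))

# Head level: ~1 g straight down the Z axis, so no pitch or roll, and the
# servo sits at mid-travel.
pitch, roll = tilt_degrees(0.0, 0.0, 1.0)
print(round(pitch), round(roll), tilt_to_servo(pitch))  # -> 0 0 90
```

This is the "map the tilt sensor for a wider or more natural range of ear movement" idea mentioned later: instead of triggering canned twitches, the ear angle tracks the head angle continuously.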
You would need to observe the data for the different movements of the sensor. Use the raw data or the converted g-force values.

Step 6: The Fuzzy Part...

Attach the two servos to a headband. I just used electrical tape to secure them. Note that when the unit is powered up, the motors are constantly spinning and you will detect a slight vibration, so you may want to dampen them with padding or foam tape. There is no noticeable heat from the servos. I didn't have any fancy custom machined parts to attach to the servos, so I just taped on some wire and went from there. You can mock up a pattern or eyeball it by cutting out a D shape on a folded piece of craft fur fabric. Cut a similar shape out of craft foam to be the cartilage support for the ears. Cover the good or furry side with the inner ear fabric. Sew around the outside but leave a bottom opening so you can turn it inside out. Play around with forming the ears; you may have to go back to sew the correct shape. I just tucked in the bottom end of the ear. You will need it to attach the ear covering to the servo. Sew it up later. I used a serger because, I can....well, I have found it to be the fastest way to sew some stuff; the only pain is when the thread breaks because you are pushing too much fabric thickness through or trying to carve a shape like the roads of Monte Carlo. Place the cover on the servo ears. Mount the ADXL345 breakout board in the center of the headband. Route the wires to the center and back. Cover the electronics with more craft fur.

Step 7: Crash Test Bunnies

So, depending on how your fabric ear coverings turned out... The operation of this is pretty rudimentary. Put it on. Power it up. Move your head. Correct and shift. You know, this is kinda neat....

Step 8: Much to Do About Something...

There are so many ideas for further development:

* Make an early-warning earthquake detector alarm; that accelerometer is pretty sensitive.
* Make a game to see who twitches first.
* Mount the servos sideways to get dog ears or any other animal (you might get tired...).
* Map the tilt sensor for a wider or more natural range of corresponding ear movement.
* Map the readings in polar coordinates to make it easier to program for use as a tilt sensor.
* Normalize the data - some kind of debouncing, as there are some erratic readings - and maybe adjust for atmospheric conditions or environment, such as temperature.
* Adjust sensitivity by software filtering of the data to reduce "noise".
* Create a calibration/stabilization routine, when needed, to orient the headband on the wearer.
* I don't know if adding gyroscopic functions gives it other features.
* Have it respond to audio stimuli. Have the ears move to the rhythm of music being heard.
* Have the ears respond to a heart rate monitor.
* Have the ears respond according to a brain wave monitor.
* Use a LilyPad for the control module to include the battery pack on the headband.

As for the Japanese manufacturing ethos: make it smaller and faster. As for the Chinese manufacturing ethos: we can reverse engineer anything. As for the Instructables ethos: make it yourself.

Third Prize in the Halloween Props Challenge

21 Discussions

2 years ago
Very good Instructable.

7 years ago on Introduction
After viewing this “Instructable” I have no doubt that America, i.e., Americans, will solve the problems that plague mankind the world over. To think that the prototype was built on top of a Campbell’s Soup Can. WOW! I am humbled and overwhelmed to think of a world living in peace with everyone sporting their Cat Ears. What a concept. I can just envision the world’s delegates at the UN sitting down, putting on their Cat Ears with translation ear phones incorporated into the head band. Caitlin and the Cat Ears are very cute. Thank you Caitlin's dad.

Reply 7 years ago on Introduction
I have no doubt that the world, i.e. worldly peoples, are secure knowing that velcro has been invented to solve those problems that plague mankind wherever an ordinary fastener will not do.
I can't wait for when velcro2 hits the market.

Reply 7 years ago on Introduction
In my opinion velcro and duct tape is needed to save the world, lol

7 years ago on Introduction
Those ears are so neat, I have to make some!

7 years ago on Introduction
This is fantastic! I've been very interested in the transhuman movement, as well as EEG technology such as Mindflex, and this is one thing that strikes me as a step in the right direction. So cool!

Reply 7 years ago on Introduction
Thank you, Dr. Frankenstein.

7 years ago on Introduction
From the video's description, they say the "necomimi" ears will be out near the end of this year. Plus in other related videos, we see them being tried out at some trade show. So I'd say they are real and coming soon! As for your instructable...AWESOME! It's obviously much more bulky and louder than the "necomimi" on the YouTube video, but I think it's awesome how you did this one. Definitely a great starting point.

Reply 7 years ago on Introduction
I think it will only appear in the Nieman-Marcus or Hammacher Schlemmer catalogues. Get those elves cracking on the knockoffs.

7 years ago on Introduction
eehhh...what's up doc?? those are bunny ears

Reply 7 years ago on Introduction
silly cat.

7 years ago on Introduction
How clever! thanks for sharing! sunshiine

Reply 7 years ago on Introduction
Thanks. They still need some work though.

7 years ago on Introduction
They are adorable, nicely done!

Reply 7 years ago on Introduction
*meow*

7 years ago on Introduction
Neat idea!

Reply 7 years ago on Introduction
There's gotta be some loose arduinos around the office...and steal... lift... appropriate some servos from Randy's bots to make yourself some nifty dinosaur ears.

7 years ago on Introduction
Really, really cute!

Reply 7 years ago on Introduction
Thanks.
7 years ago on Introduction more usagi than neko but still cool now all that needs to happen is to shrink every , make it thought active ,and put it on everybody in the world and then we would have achieved world peace cause nothing can be anger when they look this cute
https://www.instructables.com/id/Necomimi-Arduino-Cat-Ears/
MySQL Client

Does anybody have any suggestions on downloading/installing a MySQL client for Pythonista?

Hi Eran, It is possible to run a mysql client, but it takes a bit of work. MySQL provide a pure python connector here. It is licenced under GPL2 so you get the source. I got it to work by first installing into a regular Python 2.7 system on a linux box. Once installed and tested on the 'donor', find the mysql directory in site-packages and copy the contents over to some other place for editing. In the copy directory, move everything into the top-level directory, delete the empty __init__.py files and any .pyc files. Remove the subdirectories. In your new mysql directory you should have 14 or so python files. You will need to change some filenames to something temporary to avoid overwriting. To get it to work in Pythonista, start editing roughly as follows. Rename the core __init__.py file to something like mysqldb (note: exclude .py, otherwise Pythonista won't import it). Rename all the other files by just removing the .py extension. Rename the __init__.py file from the locales directory to locales. Now for each file, look for the 'mysql.connector...' import statements and remove the package path; just leave the core module, which will have the same name as one of the above files. In the locales file, edit line 48 (ish), which is an import statement, and remove the path; leave the 'client_error' bit - this is the eng locale taken care of. The next stage is to get the whole directory into Pythonista, which I ended up doing by importing the files in another app and re-creating them in Pythonista using the clipboard - there are probably much more efficient ways to do this! You should now have a mysql sub-directory in your main project directory. The last step in your main program is to import sys and insert the sub-directory into sys.path - I just grab the last entry in path, which is the project directory, and do an os.path.join. Long-winded, but it works for me.
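That last sys.path step can be written out concretely. A sketch under the post's own assumption that the last sys.path entry is the project directory, with "mysql" as the sub-directory name created above:

```python
import os
import sys

# Assumption from the post: the last entry on sys.path is the project
# directory, and the edited connector files live in a "mysql" sub-directory.
project_dir = sys.path[-1]
mysql_dir = os.path.join(project_dir, "mysql")
if mysql_dir not in sys.path:
    sys.path.insert(0, mysql_dir)

# Now `import mysqldb` would find the renamed __init__.py core module.
print(sys.path[0].endswith("mysql"))  # -> True
```

Inserting at position 0 (rather than appending) makes sure the local copy shadows any other module with the same name.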
It does take a few seconds to load and connect on my ipad2, but not so long as to be tedious. I have been accessing various MySQL 5 databases for checking data and writing data - so far it has worked exactly like a regular installation, and is quite quick once loaded, even for fairly large queries. Hope this helps.

Any chance you could post a link to download your modified folder?

Hi guys - I've found a github repo by Tomasso Turchi that purports to do just this. I have yet to try it since I need to figure out how to get the dozen or so files into pythonista.... but it's here: lets you download github repos

I just have to advertise mine :D. lets you download repos, releases and gists easily with a single interface :)

Just an update: putting the files from the repo above, without their root folder, into your site-packages folder, seems to work. Then import lowercase mysqldb.

This is a bug report regarding use of mysqldb with Pythonista. I was pulling my hair out wondering why I couldn't create a table with a certain column as a DOUBLE, even when I inserted the exact same data into it that was inserted into another column that was also a DOUBLE. It turns out that if you're creating and deleting a table multiple times (as one might for testing purposes) while in Pythonista, and you change one of the column types, you need to hard-quit Pythonista or the previous table column types will remain in effect, ignoring the updated column types in your code. Force-quitting Pythonista and then re-running the same code seems to do the trick.

I am trying to get the mysql-connector-pythonista to work without success. I did:

db = mysqldb.connect()
""" My authentication info is in the defaults in connection.py """
cursor = db.cursor()

and I get errors.OperationalError: MySQL Connection not available. How can I get back more information about why the connection is not being established?
(I have tested the parameters (userid, password, server, db) and they work fine from my Mac using mysqldb for python.)

@ihf have you tried passing your authentication info as such:

db = mysqldb.connect(host="db url here", user="usernameHere", passwd="password here", db="nameofdatabase", port=3306)

Hi @Tizzy I saw you'd posted recently on this topic and hope you don't mind me asking a question. I am trying this myself, but keep getting an error:

ImportError: no localisation for language 'eng'

Would you know what's causing this, and have you got it working yourself? Thanks

@chriswilson Can you post the code you have where this is happening? (You can leave the username/password/server info blank.) Also where did you get Mysqldb for pythonista? (I've been meaning to post about the different mysql options between Pythonista 2 and 3.)

@Tizzy Here's my code. It's the connect line that raises the error.

import mysqldb
db = mysqldb.connect(host="", user="user", passwd="password", db="my_database", port=3306)
cursor = db.cursor()

@chriswilson Running your code strictly as is does bring up the same error. However, as soon as I entered a valid database url, the error goes away. Are you sure you're inputting the correct IP address/url?

I think so - I'm using the hostname given by my provider (in the format). I just used the name of the database without a path - is this ok?

@chriswilson remove the 'http://' part of the url. So it's just 'mysql.my_host.co.uk'. You don't need a path for the database. Just a name.

@Tizzy That seems to have worked - so simple when you know. Thanks! Now I just need to figure out how to use the mysqldb module! :)

@chriswilson Here's a couple examples of how to do a few things with some premade functions for getting a list of tables, checking if a table exists, creating a table, getting the last saved entry, and getting a dump of all the data in a table. Obviously you have to fill out the credentials with your own stuff.
Let me know if you have any questions.

    try:
        import pymysql as mysqldb
    except ImportError:
        import mysqldb

    def dbConnect():
        # port 3306 is the default for mysql
        # returns False if it can't successfully connect
        username = "username"
        password = "password"
        dataBase = "databaseOnServer"
        porty = 3306
        try:
            conn = mysqldb.connect(host="host.something.com", user=username, passwd=password, db=dataBase, port=porty)
        except mysqldb.err.OperationalError:
            print("Can't connect. Check your hostname/user/password info.")
            conn = False
        return conn

    def getLatestSavedEntry(tableName):
        # gets the latest entry, the one with the largest ID (must be a table with
        # ID set as an auto-incrementing primary key; the createTable function does this)
        conn = dbConnect()
        if conn == False:
            print("no connection")
            return
        cursor = conn.cursor()
        try:
            cursor.execute("SELECT * FROM " + str(tableName) + " where ID = (SELECT MAX(ID) FROM " + tableName + ")")
            lastEntryData = cursor.fetchone()
            if lastEntryData == None:
                lastEntryData = ["doesnt", "exist...nope"]
        except:
            lastEntryData = ["table", "doesn't", "exist", "...probably"]
        print(lastEntryData)
        cursor.close()
        return lastEntryData

    def createTable(tableName):
        conn = dbConnect()
        if conn == False:
            print("no connection")
            return
        cursor = conn.cursor()
        # adjust this string with the SQL commands you'd like, columns etc.
        sequelString = "CREATE TABLE " + str(tableName) + "(ID INT(11) PRIMARY KEY AUTO_INCREMENT, uuid VARCHAR(50), request_at DOUBLE, duration INT, totalDuration INT, ratingHistoryCalculatedAverage DOUBLE, ratingHistory5 INT, ratingHistory4 INT, ratingHistory3 INT, ratingHistory2 INT, ratingHistory1 INT, Surge VARCHAR(30), fare DOUBLE, fareTakeHome DOUBLE, Distance DOUBLE)"
        try:
            print(".....trying table creation")
            cursor.execute(sequelString)
            print("created new table!")
            return "Success"
        except:
            print("table couldnt be created...")
            return "Failure to create"
        cursor.close()

    def getSequelData(tableName):
        # this gets all of the data in your selected database and table, returns False if the table doesn't exist
        conn = dbConnect()
        if conn == False:
            print("no connection")
            return
        cursor = conn.cursor()
        # get the version of your mysql
        cursor.execute("SELECT VERSION()")
        row = cursor.fetchone()
        queryString = "SELECT * FROM " + str(tableName)
        try:
            cursor.execute(queryString)
            data = cursor.fetchall()
            print(data)
        except mysqldb.err.ProgrammingError:
            print("DOESN'T EXIST, YOU MUST CREATE THIS TABLE TO BE ABLE TO FETCH ITS CONTENTS.")
            data = False
        cursor.close()
        return data

    def doesTableExist(tableNameToCheck):
        tableNameToCheck = str(tableNameToCheck)
        tableList = getTableList()
        if tableList == False:
            print("no connection")
            return
        for table in tableList:
            # tableList is a list of tuples of unicode w/ second element of tuple empty
            existingTable = str(table[0])  # gets the unicode string
            # print(existingTable, "???", tableNameToCheck)
            if existingTable == tableNameToCheck:
                print("table " + tableNameToCheck + " already exists. Yay!")
                userTableExists = True
                break
            else:
                userTableExists = False
        if userTableExists:
            # print("Table exists. moving on.")
            return True
        elif not userTableExists:
            # print("Table not found. Maybe you should create it.")
            return False

    def getTableList():
        conn = dbConnect()
        if conn == False:
            print("no connection")
            return False
        cursor = conn.cursor()
        # cursor.execute("select * from information_schema.tables")
        cursor.execute("SHOW TABLES")
        tableList = cursor.fetchall()
        # print(tableList)
        cursor.close()
        return tableList

    if __name__ == "__main__":
        # tests
        print(getLatestSavedEntry("someTable"))
        print(createTable("someTable"))
        print(getSequelData("someTable"))
        print(getTableList())
        print("table exists?:", doesTableExist("someTable"))

@Tizzy it is good practice to not do statement = 'SELECT * FROM ' + table_name or even statement = 'SELECT * FROM %s' % table_name as this can cause security issues with SQL injection. Most database packages (read: modules) will have something along the lines of

    statement = '''SELECT * FROM ?'''
    with db.connect as conn:
        result = conn.execute(statement, (table_name,))

This is a more secure way of accessing databases. The other way is good enough for personal projects, but keep that in mind or little Bobby Tables will make your life awful as a DBA. B.

@blmacbeth are you saying to use triple quotes for security purposes?

@Tizzy The triple quotes are not important. That's a Python syntax feature which allows you to write a string literal across multiple lines. SQL doesn't care about newlines, so it makes no difference whether you put everything on one line or on multiple lines. The important part is the question mark in the query. For example, you should write

    cursor.execute("select name from ?", [tablename])

instead of

    cursor.execute("select name from %s" % [tablename])

The difference is that the second variant uses standard Python string formatting (i.e. the value of tablename is just put into the string at the position of the %s), which leaves you vulnerable to injection attacks.
If tablename was taken from a public web form, then you could enter mytable; drop table mytable as the table name, which would result in a query of select * from mytable; drop table mytable and delete your data. In the first variant, we don't use the standard Python formatting. Instead we put a question mark in the query string and pass the table name as the second argument to cursor.execute, which internally escapes the string properly to avoid any code injection attacks. (We have to put tablename in square brackets to make a single-element list - if we had five question marks, we'd pass a five-element list with all the values to insert.) Though I think this question-mark insertion only works in some cases. Now that I think about it, it might only be allowed for where clauses and such. Perhaps because letting users specify arbitrary table names is dangerous enough? Not sure...
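To make the thread's point concrete, here is a minimal sketch using Python's standard-library sqlite3 module rather than a Pythonista MySQL driver (placeholder style varies by driver: sqlite3 uses ?, MySQL drivers typically use %s). The table and values are made up for illustration. It shows that placeholders protect values, but cannot stand in for identifiers such as table names, which is why the question-mark trick fails for a FROM clause:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rides (ID INTEGER PRIMARY KEY AUTOINCREMENT, fare REAL)")

# Safe: the value is bound as a parameter, so malicious input would be
# stored as data rather than executed as SQL.
conn.execute("INSERT INTO rides (fare) VALUES (?)", (12.5,))

# Placeholders do NOT work for table names - this raises an error.
table_param_failed = False
try:
    conn.execute("SELECT * FROM ?", ("rides",))
except sqlite3.OperationalError:
    table_param_failed = True

# For identifiers, validate against a known whitelist before building
# the query string yourself.
allowed_tables = {"rides"}
table = "rides"
assert table in allowed_tables
rows = conn.execute("SELECT fare FROM " + table).fetchall()
print(table_param_failed, rows)  # → True [(12.5,)]
```

So the poster's hunch above is right: parameter binding is for values (WHERE clauses, INSERT values and so on), and dynamic table names need a separate whitelist check.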
https://forum.omz-software.com/topic/1431/mysql-client
Introduction: ESP8266: Parsing JSON

As promised in my previous instructable, I will be covering more about the ArduinoJson library in detail in this instructable. JSON (JavaScript Object Notation) is a lightweight data-interchange format that is easy for humans to read and write, and easy for machines to parse and generate. JSON objects are written in key/value pairs, and it's a must for keys to be of the string data type, while values can be a string, number, object, array, boolean or null. A vast majority of APIs that are now being used will return JSON data when called, and knowing how to parse them will definitely benefit you.

In this instructable, we will be using the ArduinoJson library for the ESP8266 to help us parse JSON data and extract values based on keys. The ArduinoJson library is also capable of serializing JSON, meaning you could generate your own JSON data using data from sensors connected to your ESP8266 or Arduino, for example (I will be covering more about JSON serialization, in detail, in another instructable). So, let's get started.

This project was done by me, Nikhil Raghavendra, a Diploma in Computer Engineering student from Singapore Polytechnic, School of Electrical and Electronic Engineering, under the guidance of my mentor Mr Teo Shin Jen.

Step 1: Install the ArduinoJson Library

Step 2: Performing a GET Request

Before we can start parsing, we need to have the JSON data in the first place, and to obtain our data, we perform a GET request. A GET request, as the name suggests, gets the data for us from a particular location using a specific URL. The boilerplate code to perform the GET request can be found below. For this example, we will be performing a GET request using the URL. You can call any API you like.

    #include <ESP8266WiFi.h>
    ...
      http.end(); // Close connection
      }
      // Delay
      delay(60000);
    }

The data that we are going to parse is contained in the payload variable.
We don't actually need this variable when we are parsing our data later on.

Step 3: Using the ArduinoJson Assistant

The developers who developed the ArduinoJson library are so kind that they've even created an Assistant that writes the parser program for us using any JSON data as an input. To use the ArduinoJson Assistant, you first need to know how your JSON is formatted. To do that, type the URL that we used to perform the GET request earlier on into the browser of your choice and hit enter. Copy the JSON and head over to the ArduinoJson Assistant's web page and paste it into the text box below the label named "Input". Then scroll down to take a look at the parsing program generated by the Assistant. Copy the whole program or just a section of it.

Step 4: Completing the Code and the Result

Copying and pasting the parsing program generated by the Assistant into the boilerplate code that we used to perform a GET request earlier on would look like this:

    ) {
      // Parsing
      const size_t bufferSize = JSON_OBJECT_SIZE(2) + JSON_OBJECT_SIZE(3) + JSON_OBJECT_SIZE(5) + JSON_OBJECT_SIZE(8) + 370;
      DynamicJsonBuffer jsonBuffer(bufferSize);
      JsonObject& root = jsonBuffer.parseObject(http.getString());

      // Parameters
      int id = root["id"]; // 1
      const char* name = root["name"]; // "Leanne Graham"
      const char* username = root["username"]; // "Bret"
      const char* email = root["email"]; // "Sincere@april.biz"

      // Output to serial monitor
      Serial.print("Name:");
      Serial.println(name);
      Serial.print("Username:");
      Serial.println(username);
      Serial.print("Email:");
      Serial.println(email);
    }
    http.end(); // Close connection
    }
    // Delay
    delay(60000);
    }

Since we are only interested in the name, email and username of the user, we just used a section of the parsing program generated by the Assistant. You can use the serial monitor to view the output. If you don't see anything, press the reset button on your ESP8266 and *boom* you should see the output there.
Note: The last line of code in the code above introduces a delay of 1 minute, or 60,000 ms, into the loop. This means that the API is only called once every minute. The number of times an API can be called within a specified timeframe varies, and you are strongly encouraged to follow the guidelines specified by your API provider.

7 Discussions

Hi! Nice work on this tutorial! Remark that you can avoid the "payload" string by passing "http.getStream()" to "parseObject()".

Would it be possible to adapt the code to use it with HTTPS?

Thanks for the article! Could you please provide us the wiring diagram to connect the ESP8266 with an Arduino?

Hey, thanks a lot for the article. I'm trying to apply this to (I'm trying to get the titles of posts) and for some reason nothing returns (even though httpCode returns as 301). I pasted my code here: I used the ArduinoJson Assistant and changed the relevant parts in your code. I tried the suggestions in the "Why Parsing Fails" page of arduinojson.org to no avail. Can it be a memory problem, given that Reddit's JSON is considerably larger than the one in your example? I'm a beginner and still trying to wrap my mind around all of this. Thanks in advance.

I'm trying to use this API: and I always get httpCode value = -1. I tried to use HTTPS, same result. Could you help me? Thank you

Hi, there seems to be a problem with the server's SSL certificate: it's either too large or they could have blocked off non-browser agents from accessing the API. I tried connecting to the service using HTTPS and the SHA1 fingerprints don't match every time I run it. The certificate size could be to blame. I will try again and will let you know if it works.

Any more info on this issue? I also get httpCode = -1 when trying to get data from or
https://www.instructables.com/id/ESP8266-Parsing-JSON/
View a Stored XML Schema Collection

After you import an XML schema collection by using CREATE XML SCHEMA COLLECTION, the schema components are stored in the metadata. You can use the xml_schema_namespace intrinsic function to reconstruct the XML schema collection. This function returns an xml data type instance.

For example, the following query retrieves an XML schema collection (ProductDescriptionSchemaCollection) from the Production relational schema in the AdventureWorks2012 database.

If you want to see only one schema from the XML schema collection, you can specify an XQuery against the xml type result that is returned by xml_schema_namespace. For example, the following query retrieves product warranty and maintenance XML schema information from the ProductDescriptionSchemaCollection XML schema collection.

You can also pass the optional target namespace as the third parameter to the xml_schema_namespace function to retrieve a specific schema from the collection, as shown in the following query:

When you create an XML schema collection by using CREATE XML SCHEMA COLLECTION in the database, the statement stores the schema components in the metadata. Note that only the schema components that SQL Server understands are stored. Any comments, annotations, or non-XSD attributes are not stored. Therefore, the schema reconstructed by xml_schema_namespace is functionally equivalent to the original schema, but it will not necessarily look the same. For example, you will not see the same prefixes you had in the original schema. The schema returned by xml_schema_namespace uses t as the prefix for the target namespace and ns1, ns2, and so on, for other namespaces. If you want to retain an identical copy of your XML schemas, you should save them in a file or in a database table in an xml type column.

The sys.xml_schema_collections catalog view also returns information about XML schema collections.
This information includes the name of the collection, the creation date, and the owner of the collection.
http://msdn.microsoft.com/en-us/library/ms191170.aspx
The following example shows how you can display a popup Spark TitleWindow container in Flex 4 by using the static PopUpManager.addPopUp(), PopUpManager.createPopUp(), and PopUpManager.centerPopUp() methods.

You could also declare the TitleWindow instance using MXML, as seen in the following example. Or you could make a custom TitleWindow-based component and launch the popup window using the static PopUpManager.createPopUp() method, as seen in the following example, together with the custom TitleWindow-based component, comps/MyTitleWindow.

25 thoughts on "Displaying a popup Spark TitleWindow container in Flex 4"

Thank you, Peter, for another helpful post. Are there any advantages of using PopUpManager.createPopUp() vs. PopUpManager.addPopUp()? Thanks, IB

@Anonymous, Not that I know of. It just depends on whether you're trying to display a custom component or a component instance. Peter

I don't get the component "spark.components.TitleWindow" in Flash Builder Beta 2. Where do I get a nightly build? John

@John, Yes, all of the examples on this blog usually use the latest nightly SDK build. You can download Flex 4 beta SDKs from the link at the top of most of the Flex 4 specific entries (see the yellow box at the top of the code in this entry). Peter

I am using the non-"spark.components.TitleWindow" version of Flash Builder too... Adobe's really confused me... so then

    function giveMePopUp():void {
        var FAQpnl:ScorecardPanel = new ScorecardPanel();
        PopUpManager.addPopUp(FAQpnl, Application.application as DisplayObject, true);
    }

I wish you included which SDK build you were using, since the most recent stable Flash Builder 4 beta 2 version does not include a "titleWindow" component either. It's very confusing. I tried to use a recent nightly build and it did have it included, yet there is a new library it seems, mxlns:ns. It replaces many components, for example: <s:Form> to <ns:Form>.
@nicotroia, Sorry, I usually build all these examples on the latest nightly Flex SDK available at any given point. The old Beta 2 SDK is fairly out of sync with the latest APIs. The previous examples should all still work, although you will need to tweak the xmlns:mx="library://ns.adobe.com/flex/halo" namespace to xmlns:mx="library://ns.adobe.com/flex/mx". There isn't a Spark version of the <Form> tag yet, so you'll still need to use the mx namespace: And then: Peter

Hi Peter, do you know how I set the order of opening my popup windows? Ordered by the click or select, for a sample? In the order of my code the windows are overwriting each other. Thanks.

@Rodrigo Pena, You'd just have to open the popups in the reverse order. It's actually working correctly. Assume your code launches 3 Alert controls. The first Alert is launched. Next the second Alert is launched, and overlaps Alert #1. Finally the third Alert is launched and overlaps Alert #2 and Alert #1. So you have 3 Alerts, but you'd need to close Alert #3 to see the first two (assuming they were modal). If you want Alert #1 to be on "top", you'd need to launch Alert #3, then Alert #2, and then Alert #1. Seems a bit backwards, but I believe it's working as intended. Peter

Hi, I am very new to Flex. Is there a way to pass multiple data from a TitleWindow/popup window back to the parent window? thx ex3108

Hi, I am very new to Flex. Is there a way to pass multiple data from a TitleWindow/popup window back to the parent application? thx ex3108

Hi~ I want to change the Alert's border & button CSS styles in Flex 4, like chromeColor or the colors of the title part & text part... But I can't understand the tutorials on the internet~ would you mind explaining it briefly again? thank U~

hello sir, the code is very useful for me. I am using the PureMVC pattern, but now I want to register a mediator with this title window. Where should I write the code? I don't want to write the code in MXML.
because it is a view part and I want to register the mediator in the facade, but how do I get the instance of that MXML in the facade?

Hello Peter: How would I place content inside a window (say an SWF demo movie built with Captivate)? Thank you. Att, Edwin

None of your examples are showing up... I'm guessing Dropbox has hit its bandwidth limit. :(

Hi Peter, how can I call a function (in the opener window) from the popup created? I can't use the command myPopup['myButton'].addEventListener('click', myFunction), because the buttons in the popup were created at runtime (by a repeater). I'd tried to use parentApplication.myFunction, but unfortunately it didn't work. Thanks

If I create a popup via this method:

    var popup:TitleWindow = PopUpManager.createPopUp(this, Popup, false) as TitleWindow;

and I have Popup.mxml which contains a ViewStack with id vs, then how can I reach vs right after creating the popup? popup.vs is not working.

Thanks for these. I want to move the popup window out of the parent application bounds. Please give a sample.

Hi, can anyone tell me how to get the slightly blurry/transparent background when displaying a popup? My code looks like this...

    var conGrats:CongratsPopUp = new CongratsPopUp();
    PopUpManager.addPopUp(conGrats, this, true);

CongratsPopUp() is a TitleWindow component. The above works, but I want it to blur out the screen behind the popup. Thanks, Aidan

Aidan, I'm also curious about how to control the screen behind the popup. Have you had any luck controlling the screen under your window?

Hi, I first learned how to build a popup window from here and, after searching for a very long time, I've found a way to call a function in the parent window. For example, define a *public* function myFunc() in the parent. Then in the popup window, call parentApplication.myFunc(). It's that easy!

Hi Peter, thanks for all your posts. Here's a quick question: How can I display a custom component (declared at the Application level) over a popup created by the PopUpManager?
When myCustomComp is supposed to show within the showCustomComp function, I tried:

    myCustomComp.visible = true;
    this.setElementIndex(virtualKeyboard, this.numElements-1);

But it is not showing over any window called by the PopUpManager. Could you give me a hint? Thanks!

When I add a TitleWindow to my 1920x1080 application, the modal background only covers about 75% of the application from the top left corner. The popup is centered within that 75% frame. However, the blur effect does in fact cover 100% of the application. Any idea what is going on here and how to fix it?
http://blog.flexexamples.com/2009/10/23/displaying-a-popup-spark-titlewindow-container-in-flex-4/
Sivan Thiruvengadam wrote:
> Thanks. Let me change this by pushing the object as you suggest and see if things work as expected. BTW, I always hit the else part of the CLNumList constructor, since I am not passing any CLNumList instance from Lua.

Yep, I forgot that Lunar removes 'self'.

    // member function dispatcher
    static int thunk(lua_State *L) {
      // stack has userdata, followed by method args
      T *obj = check(L, 1);   // get 'self', or if you prefer, 'this'
    ---> lua_remove(L, 1);    // remove self so member function args start at index 1
      // get member function from upvalue
      RegType *l = static_cast<RegType*>(lua_touserdata(L, lua_upvalueindex(1)));
      return (obj->*(l->mfunc))(L);  // call member function
    }
http://lua-users.org/lists/lua-l/2009-11/msg00062.html
Newly is a drop-in solution to add a Twitter/Facebook/LinkedIn-style "new updates/tweets/posts available" button. It can be used to notify the user about new content availability, and other actions can be triggered using its delegate method.

CocoaPods is a dependency manager for Cocoa projects. You can install it with the following command:

    $ gem install cocoapods

CocoaPods 1.1.0+ is required to build Newly. To integrate Newly into your Xcode project using CocoaPods, specify it in your Podfile:

    source ''
    platform :ios, '10.0'
    use_frameworks!
    target '<Your Target Name>' do
        pod 'Newly'
    end

Then, run the following command:

    $ pod install

    import Newly
    let newly = Newly()
    newly.showUpdate(message: "↑ New Tweets")

Use this if you want to manually hide Newly. By default Newly will hide on touch.

    newly.hideUpdate()

You can use the Newly delegate to react to its on-click update. You can set the delegate in your viewDidLoad method.

    newly.delegate = self

And then add an extension for NewlyDelegate:

    extension ViewController: NewlyDelegate {
        func newlyDidTapped() {
            // Your custom code to trigger other actions once Newly is touched.
        }
    }

You can customize the appearance of Newly using the following properties.

    newly.backgroundColor = UIColor(colorLiteralRed: 0, green: 153.0/255.0, blue: 229.0/255.0, alpha: 1.0)

This will set the background colour for Newly.

    newly.textColor = UIColor.white

This will set the text colour for Newly.

    newly.heightOffset = 78.0

This will set the height from the top of the screen at which Newly will be displayed.

    newly.animationInterval = 1.0

This will set the animation time interval to show and hide Newly.

    newly.hideOnTouch = true

Whether Newly should auto-hide on touch.

    newly.isUpdateVisible = false

Whether Newly is currently visible.
https://openbase.com/pod/Newly
Anonymous Functions in R and Python

What's in a name? That which we call a rose by any other name would smell as sweet.

Normal functions

Before moving to anonymous functions, let's start with what normal functions look like.

In R

    # Define a function
    functionName <- function(variables) {
      # Function definition... do some stuff
      print(paste(variables, "doing stuff"))
    }

    # Call the function
    > functionName("boring strings")
    [1] "boring strings doing stuff"

In Python

    # Define function
    def functionName(variables):
        # Do some stuff
        print variables, "doing stuff"

    # Call the function
    >>> functionName("boring strings")
    boring strings doing stuff

Anonymous Functions

Have no identity, no name, but still do stuff! They will not live in the global environment. Like a person without a name, you would not be able to look the person up in the address book. The anonymous function can be called like a normal function functionName(), except the functionName is switched for logic contained within parentheses: (fn logic goes here)().

In R

    # Doing the same stuff anonymously
    > (function(variables) print(paste(variables, "doing stuff")))("code")
    [1] "code doing stuff"

In Python

Python introduces the lambda keyword for anonymous functions, in contrast to R, which sticks with the function keyword.

    # Doing the same stuff anonymously
    >>> (lambda variable: variable + " doing stuff")("code")
    'code doing stuff'

R Convention

The most common convention is to use anonymous functions when using the *apply family of functions. For example, you might want to do an operation across a set of columns in a dataset.
    # Create a dataset
    df <- data.frame(
      col1 = c("element1", "element2"),
      col2 = c("element1", "element2"),
      stringsAsFactors = FALSE
    )

    # lapply an anonymous function to the columns of the dataset
    > lapply(df, function(x) paste(x, "doing stuff"))
    $col1
    [1] "element1 doing stuff" "element2 doing stuff"

    $col2
    [1] "element1 doing stuff" "element2 doing stuff"

Python Convention

Doing the exact same operation as above in Python:

    import pandas as pd

    # Create DataFrame
    df = pd.DataFrame(
        [["element1", "element1"], ["element2", "element2"]],
        columns=['col1', 'col2']
    )

    >>> df.apply(lambda x: x + " doing stuff", axis=0)
                       col1                  col2
    0  element1 doing stuff  element1 doing stuff
    1  element2 doing stuff  element2 doing stuff

You will generally see lambdas used with higher-order functions like map(), reduce() and filter(). E.g.

    # Map an anonymous function against all elements of df['col1']
    >>> map(lambda x: x + " doing stuff", df['col1'])
    ['element1 doing stuff', 'element2 doing stuff']

You may also see a lambda used to define a function, e.g.

    functionName = lambda x: x + " doing stuff"

This is poor practice when a standard function definition would have been sufficient. The Python community has a bit of a split over the use of anonymous functions (lambdas) vs. list comprehensions. The benevolent dictator of the Python community, Guido van Rossum, has argued for lambda's complete removal from Python 3.

In Conclusion

Anonymous functions can make your code base harder to read. They can also make debugging harder. However, they can save you the time of having to define yet another function in your code base. When should you use anonymous functions? According to Hadley, "when it's not worth the effort to give it a name". A good example is when you won't use the function anywhere else in your code. Or when you want to apply() a couple of pre-defined functions in one call, e.g. lapply(df, function(x) secondFunction(firstFunction(x))).
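As a footnote to the lambda-vs-comprehension split mentioned above: for simple element-wise transformations like the ones in this post, a list comprehension does the same job as map() with a lambda. A minimal sketch in Python 3 (unlike the post's Python 2 examples, where map() returned a list directly):

```python
words = ["element1", "element2"]

# map() with a lambda; in Python 3, map() returns a lazy iterator,
# so we wrap it in list() to materialise the results
with_lambda = list(map(lambda x: x + " doing stuff", words))

# The list-comprehension equivalent, often preferred for readability
with_comprehension = [x + " doing stuff" for x in words]

print(with_lambda)  # → ['element1 doing stuff', 'element2 doing stuff']
print(with_lambda == with_comprehension)  # → True
```

Both forms produce the same list; the comprehension simply avoids naming a throwaway function at all.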
Resources

- Anonymous functions in Python
- Hadley on Anonymous Functions
- Lambda Functions in Python: What Are They Good.
https://www.r-bloggers.com/2017/09/anonymous-functions-in-r-and-python-2/
o-colors

The color palette for the FT masterbrand and sub-brand products.

Usage

There are a number of ways of using colors in your component or product. o-colors can be used via the Build Service, but it is recommended you import the Sass into your project to make use of the many Sass mixins and functions available.

Sass: As with all Origami components, o-colors has a silent mode. To use its compiled CSS (rather than incorporating its mixins into your own Sass) set $o-colors-is-silent: false; in your Sass before you import the o-colors Sass:

    $o-colors-is-silent: false;
    @import 'o-colors/main';

Colors and accessibility

o-colors has been built to help bridge the gap between design and development by providing functionality to help create colors dynamically from a central palette, as well as generate contrasted text colors based on an element's background color.

You can create tints of a color with the oColorsGetTint function. The function takes a palette color name and a brightness value (based on HSB color) to return a tint of the palette color. To work with text colors, the oColorsFor mixin and oColorsGetTextColor function will output a text color based on the background color, which will be a mix of either black or white with the background at the percentage requested. You can also mix two colors manually using the oColorsMix function, providing two colors (either hex values or palette color names) and a percentage at which to mix them.

When working with the oColorsFor and oColorsGetTextColor features, the Sass will also automatically test the background color with the generated text color to see if the combination passes the Web Content Accessibility Guidelines (WCAG). If the combination fails to pass at least WCAG AA you will see an error; if the combination passes AA but only at a larger font size (18px+), there will be a warning. For manually testing color contrasts, you can use Lea Verou's Contrast Ratio tool.
Mixins and functions

o-colors has a number of mixins and functions for you to access the color palette in your project. Here they are listed from the most to the least preferred methods of working with o-colors.

Use case mixin

Use the oColorsFor mixin to add color-related properties to your ruleset:

    .my-thing {
        @include oColorsFor(custom-box box, background text, 80);
    }

Will output:

    .my-thing {
        background-color: #f2dfce;
        color: #302d29; // black mixed with #f2dfce at 80%
    }

It takes three arguments:

- Use case list: a list of color use cases in order of preference. The first one that is defined will be returned.
- Property list: a list of all the properties you want the color applied to (background, border, text). They each correspond to background-color, border-color and color. Default is all, which includes all three properties.
- Text level: the opacity (1-100) for the text color derived from the background color of the use case. If a text use case exists already, this will have no effect.

In the example above, the background and text colors are set, preferably from the custom-box use case, and if either of those properties is not available for that use case, from the box use case. If the text use case is not set, the function will output a text color based on either black or white (depending on the brightness of the background) blended with the background color at the text level specified.

Use case function

If you need to use a color value as part of a more complex CSS rule, e.g. a border color for just one side, or a gradient background, use the oColorsGetColorFor function:

    .my-thing {
        color: oColorsGetColorFor(article-life-arts-body article-body body, text, (default: blue));
    }

The oColorsGetColorFor function takes three arguments:

- Use case list: a list of color use cases in order of preference. The first one that is defined for the specified property will be returned.
- Property: the property that you want to use the color for (background, border, or text).
Note that, in contrast to the oColorsFor mixin, you must specify only one property. Options are background, border, text, and all.

- Options: a Sass map of additional options, all of which are optional, and may comprise:
  - default: the name of a palette color to return if none of the specified use cases are defined for the desired property. May also be set to null or undefined to return that instead of the built-in default (which is transparent).

This function will not generate a text color based on the use case like oColorsFor; to get a text color based on a use case, use oColorsGetTextColor.

Palette color function

If you have a color use case not covered by those built into the colors module, consider defining a custom use case (see below) and then using the use case mixin or function described above. However, if you need to use a particular color in one single place only, it may be worth using the oColorsGetPaletteColor function, which returns the CSS color for a palette color name:

    .my-thing {
        color: oColorsGetPaletteColor('white-60');
    }

Generated text colors

oColorsGetTextColor will return a text color based on the background and an opacity specified. The base of the text color is either black or white, depending on the brightness of the background color, and is then mixed with the background at the specified opacity using oColorsMix.

Warning: if the combination of background and text color created by the function would not pass WCAG AA level, o-colors will throw an error.

Usage:

    .o-colors-palette-teal {
        color: oColorsGetTextColor(oColorsGetPaletteColor('teal'), 80);
    }

Output:

    .o-colors-palette-teal {
        color: #cce3e5;
    }

Tint palette colors

oColorsGetTint will return a tinted palette color based on a specified brightness. The function takes the name of a palette color and an HSB brightness value between 0-100. For every colour except black, increasing the HSB brightness makes it appear lighter (i.e. claret becomes much pinker).
Increasing the HSB brightness value of black will make it blacker. Take a look at the registry demo for a visual demo of this.

Usage:

```scss
.o-colors-tinted-color {
	background-color: oColorsGetTint('jade', 90);
}
```

Output:

```scss
.o-colors-tinted-color {
	background-color: #177ee6;
}
```

### Mix colors

`oColorsMix` will mix two colors based on a percentage. This gives the impression of the base color appearing at the percentage opacity over the background color. `oColorsMix` will accept either a color value or the name of an o-colors palette color as arguments.

Usage:

```scss
.o-colors-palette-white {
	border: 1px solid oColorsMix(black, white, 20);
}
```

Output:

```scss
.o-colors-palette-white {
	border: 1px solid #cccccc;
}
```

### Defining custom use cases

You can add use cases for your particular component or product. This is done using the `oColorsSetUseCase` mixin:

```scss
@include oColorsSetUseCase(email, text, 'black-60');
```

It takes three arguments:

- **Use case**: your particular use case
- **Property**: the property the color should be used for (`background`, `border`, or `text`)
- **Color**: a color from the palette

If you are creating a use case for a component, you must namespace your use case name with the name of your component.

You can also use `oColorsGetUseCase` to retrieve the palette color name (e.g. `paper`) defined against a use case. This can be useful when you need the palette color name to use with another Sass mixin.

## Markup

When using the build service or importing the module with silent mode set to false, o-colors provides you with helper classes to access the color palette.
All palette colors are available as `.o-colors-palette-[NAME]` (which styles just `background-color`) and use cases are available as `.o-colors-[USECASE]-[PROPERTY]` (which style the appropriate property):

```html
<p class="o-colors-body-text">Article text</p>
```

This is a list of the use cases and their respective properties:

## CSS Variables

When using the build service or importing the module with silent mode set to false, o-colors will output all the palette colors as CSS Variables. These will use the format `--o-colors-{name}` (e.g. `--o-colors-black` and `--o-colors-teal`).

## Migration guide

### Upgrading from v3.x.x to v4.x.x

o-colors v4.x.x updates the entire palette of colors and adds a lot more functionality through new mixins and functions. The palette has been reduced from over 60 colors to a base palette of around 20 colors. These colors can be manipulated using new mixins to get a wide range of on-brand, accessibility-compliant colors.

To migrate from v3.x.x to v4.x.x you will need to update the palette colors you are requesting using `oColorsFor`, `oColorsSetUseCase`, and `oColorsGetPaletteColor`. To work out which color names you need to update, we've created a table showing which colors should now be used in place of the old v3.x.x palette colors.

New use cases have been added for opinion, hero and highlight branding. The `product-brand` use case has been removed.

If you have any questions or comments about this component, or need help using it, please either raise an issue, visit #ft-origami or email Origami Support.

## Licence

This software is published by the Financial Times under the MIT licence.
https://registry.origami.ft.com/components/o-colors@4.7.9/readme
# Managing Data Storage with Blockchain and BigchainDB

Ascribe hit technological problems with this approach, and those problems were primarily due to Bitcoin's Blockchain itself. Writing everything to it is slow, costly (currently 80c each time), and has a maximum number of daily entries and total capacity for writes. It's also counter to typical scalable database technologies: adding nodes doesn't improve performance, and there is no real query language. This makes scaling a business that relies upon the Bitcoin Blockchain a challenge.

But the Blockchain concept is a strong one, and the past years have seen an increasing rise in usage and legitimacy, with even major banks announcing development of technologies inspired by the concept. Ascribe decided to combine the best of both worlds, taking a proven NoSQL database (RethinkDB) and adding a Blockchain layer on top to add control, asset tracking and an additional level of security.

This combination of technologies is especially interesting to NoSQL database users, as traditionally few of them support 'transactions' that help guarantee a database change has taken place. By writing to an underlying NoSQL database via a Blockchain layer, BigchainDB adds transactional support. Thanks to the Blockchain layer, BigchainDB also claims to be fully decentralized. Whilst many distributed NoSQL databases claim this, there is often a pseudo master/slave setup.

## Installing BigchainDB and Dependencies

There are a couple of ways to install BigchainDB. First I tried the Docker images, but ran into some connection issues, finding the Python packages most reliable.

- Install RethinkDB; for other Mac users, there is also a Homebrew package available.
- Install Python 3.4+.
- Install BigchainDB with Pip – `sudo pip install bigchaindb`
- Start RethinkDB with `rethinkdb`
- Start BigchainDB with `bigchaindb start`, which will also configure things for you.
- Open the BigChainDB (actually the RethinkDB UI) admin UI.

## Simple Example – Message Allocation and Tracking

One of BigchainDB's prime use cases (and why Ascribe created it) is tracking assets, so let's make a simple example in Python. First run the following commands in your terminal:

```shell
pip install bigchaindb
bigchaindb configure
bigchaindb show-config
```

Create a new file, app.py, and add the following:

```python
from bigchaindb import Bigchain

b = Bigchain()
print(b)
```

This imports the bigchaindb library, creates a new object and connects to it with the settings file just created. Then run the Python application:

```shell
python app.py
```

You should see something like `<bigchaindb.core.Bigchain object at 0x1085b0dd8>`, which tells us that everything is well. Add the following:

```python
from bigchaindb import Bigchain
import time

b = Bigchain()

spuser_priv, spuser_pub = b.generate_keys()
print("User Created")

digital_asset_payload = {'msg': 'This is my special message just for you'}
tx = b.create_transaction(b.me, spuser_pub, None, 'CREATE', payload=digital_asset_payload)
print("Transaction Written")

tx_signed = b.sign_transaction(tx, b.me_private)
b.write_transaction(tx_signed)
print("Transaction Written to BC, now waiting")

time.sleep(10)
tx_retrieved = b.get_transaction(tx_signed['id'])
print(tx_retrieved)
```

This creates a user and associated keys for access to the database — remember that extra level of security. Then a payload for writing to the database is created, assigning the required keys, and written. It will take a few seconds for the new transaction to pass from the Blockchain layer to the database. The code waits for ten seconds and then retrieves and prints the record.
You should see something like:

```python
{
    "signature": "304502205",
    "id": "0f442bcf4a42",
    "transaction": {
        "timestamp": "1457104938.430521",
        "data": {
            "hash": "b32779e57",
            "payload": {
                "msg": "This is my special message just for you"
            }
        },
        "operation": "CREATE",
        "current_owner": "hFJKYk2",
        "new_owner": "26pdiQTTx",
        "input": None
    }
}
```

You now have one special message that you would like one person to have access to:

```python
...
print("Now to transfer")

spuser2_priv, spuser2_pub = b.generate_keys()
print("Second User Created")

tx_transfer = b.create_transaction(spuser_pub, spuser2_pub, tx_retrieved['id'], 'TRANSFER')
print("Transfer Created")

tx_transfer_signed = b.sign_transaction(tx_transfer, spuser_priv)
b.write_transaction(tx_transfer_signed)
print("Transaction Written to BC, now waiting")

time.sleep(15)
tx_transfer_retrieved = b.get_transaction(tx_transfer_signed['id'])
print("Transferred")
print(tx_transfer_retrieved)
```

This creates a second user and then takes the transaction ID of the special message and transfers it to the second user. The Blockchain layer of BigchainDB will prevent users and your code from executing the same action twice: if you tried running the code above again, a double spend exception would be thrown.

This example shows a small set of the methods that BigchainDB adds to RethinkDB; find the full list here.
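The double-spend guarantee described above is easy to illustrate in isolation. The sketch below is plain Python, not BigchainDB code — the `TinyLedger` class and its behavior are invented purely to show the concept of an input that can only be spent once:

```python
# Conceptual sketch of double-spend protection: once a transaction id has
# been used as the input of a transfer, it cannot be spent again.
class DoubleSpendError(Exception):
    pass

class TinyLedger:
    def __init__(self):
        self.spent = set()  # transaction ids already used as inputs

    def transfer(self, tx_id, new_owner):
        if tx_id in self.spent:
            raise DoubleSpendError(tx_id + " has already been spent")
        self.spent.add(tx_id)
        return {"input": tx_id, "new_owner": new_owner, "operation": "TRANSFER"}

ledger = TinyLedger()
first = ledger.transfer("0f442bcf4a42", "user2")
try:
    ledger.transfer("0f442bcf4a42", "user3")  # same input again
    second_rejected = False
except DoubleSpendError:
    second_rejected = True
```

In BigchainDB the same bookkeeping is done by the federation of nodes validating each transaction's input, rather than by a single in-memory set.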
## HTTP Endpoint

Currently, the only client library available for BigchainDB is Python. More may follow, but in the meantime, a limited HTTP endpoint is available for querying existing transactions:

Or write a new transaction with:

Adding the following payload, where `operation` can be changed to suit the different types of transaction that can be written:

```json
{
    "id": "",
    "signature": "",
    "transaction": {
        "current_owner": "",
        "data": {
            "hash": "",
            "payload": null
        },
        "input": null,
        "new_owner": "",
        "operation": "",
        "timestamp": ""
    }
}
```

## Part of a Decentralized Future

Ignoring its Blockchain heritage for a moment, BigchainDB offers a lot of functionality missing from current NoSQL and distributed databases. That fact alone may be a reason to try it and may provide a valid business/use case. For the Blockchain aficionados amongst you, it also completes the puzzle for a complete decentralized application stack. In theory, there is now Ethereum for applications, IPFS as a file system, and now BigchainDB for data storage. The pieces are in place for a very different way of developing, deploying and maintaining applications, leading to a fascinating future that I would love to hear your opinions on in the comments below.
https://www.sitepoint.com/managing-data-storage-with-blockchain-and-bigchaindb/
#include <iostream>
#include <iomanip> // add this directive so you can format the output
#include <cmath>   // add this directive so you can use more arithmetic operators

using namespace std;

int main()
{
    double cash, cost, change; // add a variable for cash returned, I called it "change"

    cout << fixed << showpoint << setprecision(2); // This will display cash limited to two decimal places.

    cout << "How much money do you have?" << endl; // Add end of line statement so that your display looks nice.
    cin >> cash;

    cout << "How much does the product cost?" << endl;
    cin >> cost;

    if (cash > cost) // Don't forget to add brackets with the if statement
    {
        change = cash - cost; // It's easier to make a variable for cash returned & the formula before your print statements
        cout << "Looks like you'll have enough money to pay for the product." << endl;
        cout << "The amount of cash back will be " << "$" << change << endl;
    }

    if (cash == cost)
    {
        change = cash - cost;
        cout << "Looks like you just barely have enough money" << endl;
        cout << "The amount of cash back will be " << "$" << change << endl;
    }

    if (cash < cost)
    {
        change = abs(cash - cost); // abs takes the absolute value so you don't have a negative value for cash back.
        cout << "Looks like you'll need more money to buy the product" << endl;
        cout << "The amount of money needed to buy the product is " << "$" << change << endl;
    }

    return 0;
}
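As an aside, the comparison logic above can be factored into a small helper function, which keeps the three branches in one place and makes the logic testable without console input. This refactor is only a suggestion, not part of the original post, and the `classify` function name is invented here:

```cpp
#include <cmath>
#include <string>

// Suggested refactor (not in the original post): classify the purchase and
// report the absolute difference between cash and cost in one place.
std::string classify(double cash, double cost, double &difference)
{
    difference = std::fabs(cash - cost); // always non-negative
    if (cash > cost)
        return "enough"; // buyer gets change back
    if (cash == cost)
        return "exact";  // just barely enough
    return "short";      // buyer needs more money
}
```

The main function would then only handle input/output and print a message based on the returned string.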
http://www.cplusplus.com/forum/beginner/86936/
In this article you will learn how to use different Angular modules at the same time in an HTML page.

AngularJS provides the facility to organize all the JavaScript code into modules. The advantage of that is to avoid declaring objects and variables in the global namespace. The angular.module function is used to define a module in AngularJS. angular is a global namespace exposed by AngularJS; it is always available to anyone who uses AngularJS. The angular.module function accepts two parameters: the name of the module and an array of dependencies on other modules. The angular.module function returns a module instance, and we can define a controller based on that instance using the controller function. In this article I explain how to use several modules in a single HTML page.

Step 1 - Firstly, you need to create an empty HTML web page with Head and Body sections.

Step 2 - In the Head section, import the AngularJS file inside a Script tag.
- You can download it from https://angularjs.org/
- You can use an online reference, as shown in my example.

Step 3 - Create two JavaScript files. I have named them App.js and App2.js.

Step 4 - Inside the App.js file, define the first module as shown in the example code. Name it mainModule. Inside the main module you can place your own code. In my example, I have registered an object instance named Employee, with two variables (firstName and lastName) and a function to get the employee's full name.

Step 5 - In the example, we register an object instance and we name it Employee. The name of a registered instance has to be unique in the module. Next, declare the mainController. Then we can have our registered Employee object through Dependency Injection by simply specifying the name Employee as an argument of the controller function.

Step 6 - Create the second module inside the JavaScript file named App2.js, and name the module subModule.
Step 7 - Inside the subModule you can register an object instance using the value function, as previously done. Define a controller using the controller function. In my example, I have registered a Calc object instance.

Step 8 - Then you need to import both JavaScript files into the Index.html file that we previously created.

Step 9 - You can use the object reference variables (EmployeeInstance and CalcInstance) which we declared in the controller functions of the modules.

Step 10 - In the HTML body tag you need to add the ng-app directive to use AngularJS in the HTML page. I have added two div tags in my HTML page to call the controllers separately. Inside each div tag we can place an ng-controller directive to call the related controller.

Step 11 - Give the module name to the ng-app directive and the controller names to the ng-controller directives. In my example, I have included mainModule for the ng-app directive.

Step 12 - Then go to your browser and open the index page in it. You will see the result shown as follows.

Step 13 - To get the expected result, we need to use the following code line inside the Script tag in the Head section of our Index.html file, and use the name CombineModule for the ng-app directive:

angular.module("CombineModule", ["mainModule", "subModule"]);

Then save the changes and refresh the browser to see the expected result.

This is my first article. Your comments are highly appreciated.
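The article's example files are not reproduced here, but a page following the steps above might look roughly like this. The module bodies, the CDN script URL, and names such as EmployeeInstance, CalcInstance and subController are reconstructed from the description, so treat them as assumptions rather than the author's exact code:

```html
<!DOCTYPE html>
<html>
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.8.2/angular.min.js"></script>
  <script>
    // App.js equivalent: first module, registering an Employee value (Step 4-5).
    angular.module("mainModule", [])
      .value("Employee", {
        firstName: "John",
        lastName: "Smith",
        getFullName: function () { return this.firstName + " " + this.lastName; }
      })
      .controller("mainController", function ($scope, Employee) {
        $scope.EmployeeInstance = Employee; // injected by registered name
      });

    // App2.js equivalent: second module, registering a Calc value (Step 6-7).
    angular.module("subModule", [])
      .value("Calc", { add: function (a, b) { return a + b; } })
      .controller("subController", function ($scope, Calc) {
        $scope.CalcInstance = Calc;
      });

    // Step 13: combine both modules under one ng-app.
    angular.module("CombineModule", ["mainModule", "subModule"]);
  </script>
</head>
<body ng-app="CombineModule">
  <div ng-controller="mainController">{{EmployeeInstance.getFullName()}}</div>
  <div ng-controller="subController">{{CalcInstance.add(2, 3)}}</div>
</body>
</html>
```

The key point is the last line of script: the wrapper module lists both modules as dependencies, so a single ng-app can reach controllers defined in either one.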
https://www.c-sharpcorner.com/article/use-different-angular-modules-at-the-same-time-in-an-html-pa/
ORIGINAL DRAFT

In a previous column, I wrote about a solution that allowed you to use XML documents as configuration files. This is a practice that's become increasingly commonplace. This month, we'll put something together that allows us to do console-based XML file editing, a fairly common requirement in server-based software configuration. This solution is generic enough to apply fairly widely. The only assumption is that the XML file stores configuration values in the form of attributes rather than inline text. The XMLConsole class allows you to load and save XML configuration files, navigate the hierarchy and edit attributes in a menu-driven, scrolling dialog with the end user.

In the process of developing this project, a number of reusable classes came into play. Figure 1 shows the classes that are part of this solution. The ConsoleIO class is actually a collection of static variables and methods that act much like the Java System class, but allows us to use indirection that keeps us one level away from the JVM's console-handling. As such, you can get a ConsoleWriter from the ConsoleIO.out and ConsoleIO.err variables, and a ConsoleReader from the ConsoleIO.in variable. I've followed the same conventions as the System IO print and input streams. By default, the writers and reader map onto System.in, System.out and System.err.

Figure 1: Classes in the XMLConsole project.

The ConsoleReader provides some utility methods for reading text and integer inputs. It uses a utility class called ConstraintUtil, which holds a set of static methods for constraint management. When an expected constraint is not met, a ConstraintException gets thrown. The exception text is expected to explain the problem so that it can be presented to the end user and the input can be suitably corrected.
For example, when a numbered menu of items is presented for selection, if the user enters a value outside the range of acceptable choices, we'll present the exception text so the user knows what went wrong and then redisplay the menu. This mechanism is easy to use and provides a unified way of handling input constraints.

There are three classes that provide user interface elements. The ConsoleMenu presents a numbered list of text entries and asks the user to enter a numerical value for their choice. The ConsoleChoice class provides a question associated with a shorter list of choices, such as Yes/No questions, with an optional default value. The ConsoleValue class provides a mechanism for editing short text values, such as attributes. The ConsoleValue class expects new input but defaults to the old value if the string is empty, showing the default in square brackets.

XMLConsole manages the high-level interaction with the XML file and the end user. The user is free to navigate entries and make changes, optionally saving before quitting. XMLConsole provides a cookie crumb trail similar to the context found in web pages. For each menu or text input a user is faced with answering, the trail reflects the XML tags from top to the current position, along with an index value indicating the tag position relative to its parent. This way, it's easy for the end user to stay oriented while managing tag collections that are similar, such as a list of tags aggregating children of the same type.

Let's look at a few of the key classes to clarify a few details. XMLConsole is the centerpiece, which controls the interaction flow. To put things in context, we'll also look at ConsoleReader and the ConsoleMenu class, which will serve as a good example of building a text-only component. Naturally, you can take the same ideas and apply them to other similar components.
The ConsoleReader class (Listing 1) extends BufferedReader primarily so that we can read line input and extend the behavior to handle more specific input. There are two constructors, one of which wraps an InputStreamReader around an InputStream object. The other just passes the Reader argument to the parent class.

The three methods are fairly straightforward. The first is just an alias for readLine, which can be modified if new capabilities need to be added. This way, we can avoid changing the readLine behavior in the future. Because ConsoleReader extends BufferedReader, we have implicit access to all of the methods in BufferedReader in any case. The two readInt method variants are there to filter line input which is intended to be an integer value. Other input types could be provided, but for our purposes this was sufficient.

The first readInt method tries to parse the result of a readText call as an integer and catches the NumberFormatException. If we see this, a ConstraintException is thrown, with a suitable explanation of what happened. This text will be presented to the user when the data type that was entered is incorrect. The second readInt method extends the behavior by checking for values within a specified range. Because all the basic Java data types have object wrappers that implement the Comparable interface, the static utility methods in ConstraintUtil expect Comparable arguments. This allows us to compare not only basic Number instances, but also String and other instances for classes that implement the Comparable interface, reusing the same methods. It's worth noting that most Java runtime code expects the compared objects to be of the same type. That being said, we're primarily interested in Integer objects, so we call the isAboveInclusive and isBelowInclusive methods after wrapping the int value in an Integer object. You'll notice that this allows us to support null constraints.
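ConstraintUtil itself is referenced throughout but never listed in this column, so the following reconstruction is a guess at its shape based on the calls described above (here ConstraintException extends RuntimeException for simplicity; the original may well have tied it into the IOException hierarchy):

```java
// Hypothetical reconstruction of ConstraintUtil, which is referenced in the
// article but not listed. Raw Comparable matches the article's pre-generics style.
class ConstraintException extends RuntimeException {
    public ConstraintException(String message) {
        super(message);
    }
}

class ConstraintUtil {
    // Throws if value < min; a null min means "no lower bound".
    public static void isAboveInclusive(Comparable value, Comparable min) {
        if (min != null && value.compareTo(min) < 0) {
            throw new ConstraintException(
                "This value must be at least " + min + ".");
        }
    }

    // Throws if value > max; a null max means "no upper bound".
    public static void isBelowInclusive(Comparable value, Comparable max) {
        if (max != null && value.compareTo(max) > 0) {
            throw new ConstraintException(
                "This value must be at most " + max + ".");
        }
    }
}
```

Treating null bounds as "no constraint" is what lets readInt accept open-ended ranges, as the next paragraph describes.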
If the min or max values are null, the constraint is assumed to be unnecessary. Finally, if the tests all pass, we return an int value. Like the first version of readInt, the second, parameterized, version throws a ConstraintException if the entered values are inappropriate.

On top of the primitive input handling available in the ConsoleReader, we can build more complex components. ConsoleMenu represents a key element in our XMLConsole, and so deserves a closer look. Listing 2 shows the code for ConsoleMenu. The constructor expects a String list, which represents the list of items to be selected. This version of ConsoleMenu handles a single choice, but it would be easy enough to extend the behavior to handle a comma-separated list of choices.

Once you have a ConsoleMenu object defined, you can use it by calling either the select or ask method. The select method does most of the work and returns an integer value, verified to be within the right constraints. To simplify any higher-level code, ConsoleMenu takes responsibility for presenting the user with the choices recursively if a ConstraintException was thrown. This is done only after the user is told there was a problem. You'll notice that error output goes to ConsoleIO.err, input is retrieved from ConsoleIO.in and normal output goes to ConsoleIO.out.

Presenting the user with a menu is a simple matter of prefixing the elements from the String list with an incrementing number and asking for a numerical value as input. We return the numerical value with select, but provide a pair of methods to relate the integer value to the original text. The asText method takes the index value and returns the String, but you can use the ask method directly and skip over the integer step if you know you want only the text value. There are advantages and disadvantages to both approaches.
If you use integer values, you can usually keep the text uncoupled and enable internationalization or text position movement within the menu list without disturbing the logic. There are times, however, where managing numerical values is troublesome, such as when you add text values with specific meanings or contexts. As you'll see in the higher-level code, we use numerical values in XMLConsole but provide utility methods for making a distinction between selection contexts. We also use String values in cases where the selection context changes position dynamically. If you wanted to use indirection, such as when localizing the text, the localized text can just as easily be compared.

Listing 3 shows the XMLConsole class, which uses the underlying classes to read and save XML files and provide console-based text navigation and editing. The constructor calls the load method, which uses the JDOM DOMBuilder to construct a JDOM Document from the file. This is what we use to manage the XML content. The save method reverses the process to save the Document to a file. We save a reference to the loaded file object so that the interface can save the file without prompting the user for a file name. Notice that because we use DOM, this solution is limited to XML documents that fit in memory. If the file is used for configuration and doesn't fit into memory, XML is probably not the right solution for your configuration problems.

There are a number of protected methods in this class. Rather than studying the code for each, I'll just mention their function, so that the code is easy to follow. Most of the code is easy to understand and I'll explain the details if they tend to be confusing. The first method is getChildIndex, which takes a JDOM Element object and returns its own index position relative to its parent. The two getContextTrail methods construct a String indicating the current context.
This is effectively a cookie crumb trail that shows the path from the root node to the current node with separators on a single line. I've implemented the ability to provide an attribute name as well. The attribute name is ignored if the String is null. The first version of the method defaults to the non-attribute behavior.

Because the user will need to select child tags as well as attributes and navigational choices in each of the main menus, I've divided these into categories. The first is detectable using the isChildIndex method. If the index value from the menu choice returns true when this method is called, we are dealing with a child tag selection. The isAttrsIndex method returns true if we are choosing an attribute name. Because the process of elimination suggests that if it is not a tag or attribute it must be something else, we don't need another method to make that distinction. We do, however, want a method that translates the menu offset to the attribute-relative index value, so we have a method called getAttrsIndex to do that.

Finally, we get to the two key methods in this class. The createMenu method constructs a menu from a given XML Element. For each child tag, we create an entry of the form "Configure <tag name>". For each attribute at the current level, we add an entry with the form "Edit <attribute name>" and, based on whether we are at the root or not, we either add the "Save Config" and "Exit Program" choices or the "Previous Menu" choice. Equipped with menus of this form, we can now navigate the XML document hierarchy with relative ease.

The navigate method does most of the work, so we'll cover it in slightly more detail. We start with the root and set an exit flag to false before dropping into a loop that exits only when the flag is set to true. For each iteration, we show the user the context trail and call createMenu with the current context Element. Then we present the menu for user selection.
You'll notice we store both the numerical and text value for the menu choice in local variables. Now all we have to do is trigger suitable behavior for each menu choice. If isChildIndex returns true, we are dealing with a child tag, so all we need to do is set the context variable to the child Element and the next loop will take care of everything for us. If the isAttrsIndex method returns true, we are dealing with an attribute and want to enable users to edit the value. If the Attribute value is not null, we use the ConsoleValue component to offer a prompt with a default value option. The ConsoleValue class gets single-line text input from the user and puts the default value in square brackets. If the user enters nothing (pressing enter with an empty string), the old value is retained. The returned value is either the old value or a new one, which we use to set the Attribute value.

There are three other possible menu selections during navigation, which we deal with by matching the text rather than the integer value. If the choice is "Previous Menu", we just need to retrieve the current context parent and set the context variable so that it will be used on the next loop. If the "Save Config" option is chosen, we provide some user feedback and save the file that's currently being edited using the save method. If the "Exit Program" option is chosen, we just set the exit flag to true and the loop exits, ending the main program call.

Listing 4 shows the trace for a quick session that edits the Example.xml file provided with the code. The trace shows navigation from the root node to the Port value in the second Server entry, which itself is under a Servers category. The Port value is changed from 80 to 8080 and the navigation choices take us back to the root node, at which point we save, answering yes to the prompt, and quit, confirming that we want to quit before exiting the program.
When you run the XMLConsole's main method, you'll be able to navigate and change values as you see fit. While this implementation is fairly basic, it clearly offers a straightforward mechanism for editing XML configuration files in environments where non-console interfaces are not an option. This is especially true in server environments. Using the same mechanism, you can also provide remote configuration by using Socket streams. The framework is extensible enough, so you can add new text components that provide improved data type-checking or offer different selection options.

Console interfaces have been around longer than GUIs and still provide a preferred configuration option in many server application environments. I hope that these ideas can serve you well.

Listing 1

import java.io.*;

public class ConsoleReader extends BufferedReader {
    public ConsoleReader(InputStream stream) {
        super(new InputStreamReader(stream));
    }

    public ConsoleReader(Reader reader) {
        super(reader);
    }

    public String readText() throws IOException {
        return readLine();
    }

    public int readInt() throws IOException {
        String text = readText();
        try {
            return Integer.parseInt(text);
        }
        catch (NumberFormatException e) {
            throw new ConstraintException(
                "This value must be an integer.");
        }
    }

    public int readInt(Integer min, Integer max) throws IOException {
        Integer value = new Integer(readInt());
        if (min != null) {
            ConstraintUtil.isAboveInclusive(value, min);
        }
        if (max != null) {
            ConstraintUtil.isBelowInclusive(value, max);
        }
        return value.intValue();
    }
}

Listing 2

import java.io.*;

public class ConsoleMenu {
    protected String[] list;

    public ConsoleMenu(String[] list) {
        this.list = list;
    }

    public String ask() throws IOException {
        return asText(select());
    }

    public String asText(int index) {
        return list[index];
    }

    protected int select() throws IOException {
        try {
            ConsoleIO.out.println(
                "Please select from the following menu:\n");
            for (int i = 0; i < list.length; i++) {
                String number = "" + (i + 1) + ") ";
                ConsoleIO.out.println(number + list[i]);
            }
            ConsoleIO.out.printPrompt(
                "Enter a number to select a menu item");
            Integer min = new Integer(1);
            Integer max = new Integer(list.length);
            return ConsoleIO.in.readInt(min, max) - 1;
        }
        catch (ConstraintException e) {
            ConsoleIO.err.printError(e);
            return select();
        }
    }
}

Listing 3

import java.io.*;
import java.util.*;
import org.jdom.*;
import org.jdom.input.*;
import org.jdom.output.*;

public class XMLConsole {
    protected File file;
    protected Document doc;

    public XMLConsole(File file) throws JDOMException {
        load(file);
    }

    protected int getChildIndex(Element element) {
        Element parent = element.getParent();
        if (parent == null) return 1;
        List children = parent.getChildren();
        for (int i = 0; i < children.size(); i++) {
            if (element == children.get(i)) {
                return i + 1;
            }
        }
        return -1;
    }

    protected String getContextTrail(Element context) {
        return getContextTrail(context, null);
    }

    protected String getContextTrail(Element context, String attr) {
        Element parent = null;
        ArrayList trail = new ArrayList();
        if (context != null) {
            String name = context.getName() +
                '[' + getChildIndex(context) + ']';
            trail.add(name);
        }
        while ((parent = context.getParent()) != null) {
            String name = parent.getName() +
                '[' + getChildIndex(parent) + ']';
            trail.add(0, name);
            context = parent;
        }
        StringBuffer buffer = new StringBuffer("\n");
        for (int i = 0; i < trail.size(); i++) {
            if (i > 0) buffer.append(" / ");
            buffer.append((String)trail.get(i));
        }
        if (attr != null) {
            buffer.append(" / ");
            buffer.append(attr);
        }
        int count = buffer.length();
        for (int i = 0; i < count; i++) {
            if (i == 0) buffer.append('\n');
            else buffer.append('-');
        }
        return buffer.toString();
    }

    protected boolean isChildIndex(Element element, int index) {
        int childCount = element.getChildren().size();
        return index >= 0 && index < childCount;
    }

    protected boolean isAttrsIndex(Element element, int index) {
        int childCount = element.getChildren().size();
        int attrsCount = element.getAttributes().size();
        int count = childCount + attrsCount;
        return index >= childCount && index < count;
    }

    protected int getAttrsIndex(Element element, int index) {
        int childCount = element.getChildren().size();
        return index - childCount;
    }

    protected ConsoleMenu createMenu(Element element) {
        String[] extras = {"Previous Menu"};
        if (element.isRootElement()) {
            extras = new String[] {"Save Config", "Exit Program"};
        }
        List children = element.getChildren();
        List attributes = element.getAttributes();
        int childCount = children.size();
        int attrsCount = attributes.size();
        int extraCount = extras.length;
        String[] options = new String[childCount + attrsCount + extraCount];
        for (int i = 0; i < childCount; i++) {
            Element child = (Element)children.get(i);
            options[i] = "Configure " + child.getName();
        }
        for (int i = 0; i < attrsCount; i++) {
            Attribute attr = (Attribute)attributes.get(i);
            options[childCount + i] = "Edit " + attr.getName() +
                " [" + '"' + attr.getValue() + '"' + "]";
        }
        for (int i = 0; i < extraCount; i++) {
            options[childCount + attrsCount + i] = extras[i];
        }
        return new ConsoleMenu(options);
    }

    protected void navigate() throws IOException {
        Element context = doc.getRootElement();
        boolean timeToExit = false;
        while (!timeToExit) {
            ConsoleIO.out.printText(getContextTrail(context));
            ConsoleMenu menu = createMenu(context);
            int index = menu.select();
            String choice = menu.asText(index);
            if (isChildIndex(context, index)) {
                List children = context.getChildren();
                context = (Element)children.get(index);
            }
            else if (isAttrsIndex(context, index)) {
                List attributes = context.getAttributes();
                int offset = index - context.getChildren().size();
                Attribute attr = (Attribute)attributes.get(offset);
                if (attr != null) {
                    String name = attr.getName();
                    ConsoleIO.out.printText(
                        getContextTrail(context, name));
                    ConsoleValue editor = new ConsoleValue(
                        "What is the new value for '" + name + "'",
                        attr.getValue());
                    attr.setValue(editor.ask());
                }
            }
            else if (menu.asText(index).equals("Previous Menu")) {
                context = context.getParent();
            }
            else if (menu.asText(index).equals("Save Config")) {
                ConsoleChoice confirm = new ConsoleChoice(
                    "Save changes to your configuration",
                    ConsoleChoice.YES_OR_NO, 1);
                if (confirm.ask().equalsIgnoreCase("yes")) {
                    ConsoleIO.out.printText("\nSaving...");
                    save(file);
                    ConsoleIO.out.printText("Done\n");
                }
            }
            else if (menu.asText(index).equals("Exit Program")) {
                ConsoleChoice confirm = new ConsoleChoice(
                    "Are you sure you want to exit",
                    ConsoleChoice.YES_OR_NO, 1);
                if (confirm.ask().equalsIgnoreCase("yes")) {
                    ConsoleIO.out.printText("\nExit\n");
                    timeToExit = true;
                }
            }
        }
    }

    public void load(File file) throws JDOMException {
        this.file = file;
        DOMBuilder builder = new DOMBuilder();
        doc = builder.build(file);
    }

    public void save(File file) throws IOException {
        XMLOutputter output = new XMLOutputter();
        FileWriter writer = new FileWriter(file);
        output.output(doc, writer);
        writer.close();
    }

    public static void main(String[] args) throws Exception {
        File file = new File("Example.xml");
        XMLConsole console = new XMLConsole(file);
        console.navigate();
    }
}

Listing 4

Config[1]
---------
Please select from the following menu:

1) Configure Servers
2) Save Config
3) Exit Program

Enter a number to select a menu item: 1

Config[1] / Servers[1]
----------------------
Please select from the following menu:

1) Configure Server
2) Configure Server
3) Previous Menu

Enter a number to select a menu item: 1

Config[1] / Servers[1] / Server[1]
----------------------------------
Please select from the following menu:

1) Edit Host ["claude"]
2) Edit Port ["80"]
3) Previous Menu

Enter a number to select a menu item: 2

Config[1] / Servers[1] / Server[1] / Port
-----------------------------------------
What is the new value for 'Port'?
[80]: 8080 Config[1] / Servers[1] / Server[1] ---------------------------------- Please select from the following menu: 1) Edit Host ["claude"] 2) Edit Port ["8080"] 3) Previous Menu Enter a number to select a menu item: 3 Config[1] / Servers[1] ---------------------- Please select from the following menu: 1) Configure Server 2) Configure Server 3) Previous Menu Enter a number to select a menu item: 3 Config[1] --------- Please select from the following menu: 1) Configure Servers 2) Save Config 3) Exit Program Enter a number to select a menu item: 2 Save changes to your configuration {Yes, No}? [No]: yes Saving... Done Config[1] --------- Please select from the following menu: 1) Configure Servers 2) Save Config 3) Exit Program Enter a number to select a menu item: 3 Are you sure you want to exit {Yes, No}? [No]: yes Exit
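For readers who want to play with the core idea without the JDOM and console helper classes, the menu-building step of the listing can be sketched with Python's standard library. This is our own illustration, not part of the article's code; the element and attribute names are simply those of the Example.xml session shown in Listing 4:

```python
import xml.etree.ElementTree as ET

def create_menu(element, is_root):
    """Mirror of the Java createMenu(): one entry per child element,
    one per attribute, plus the fixed extras for root / non-root nodes."""
    options = ["Configure " + child.tag for child in element]
    options += ['Edit %s ["%s"]' % (name, value)
                for name, value in element.attrib.items()]
    options += ["Save Config", "Exit Program"] if is_root else ["Previous Menu"]
    return options

doc = ET.fromstring(
    '<Config><Servers><Server Host="claude" Port="80"/></Servers></Config>')
server = doc.find("Servers/Server")
print(create_menu(server, is_root=False))
print(create_menu(doc, is_root=True))
```

The same walk-up-and-down navigation loop then reduces to tracking a current element and re-rendering this menu after each choice.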
http://www.claudeduguay.com/articles/xmlconsole/XMLConsoleArticle.html
Simple, robust framework for creating Discord bots for Dart. Fork of Hackzzila's nyx, extended with new functionality, a few bug fixes, and applied pending pull requests. The latest docs cover the newest release; my website has docs for the latest commits, where you can read about incoming changes. The wiki docs are designed to match the latest release.

Recent changes:
- lint errors from dartanalyzer (Fri 06.07.2018)
- delay() in Command class (Fri 06.07.2018)
- Snowflake type (Fri 06.07.2018)

Add this to your package's pubspec.yaml file:

dependencies:
  nyxx: ^0.21.3

You can install packages from the command line with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

import 'package:nyxx/nyxx.dart';
https://pub.dartlang.org/packages/nyxx/versions/0.21.3
On Tue, 2014-06-03 at 10:54 -0700, ...

Any implementation which doesn't support XFS is unviable from a distro point of view. The whole reason we're fighting to get USER_NS enabled in distros goes back to lack of XFS support (they basically refused to turn it on until it wasn't a choice between XFS and USER_NS). If we put them in a position where they choose a namespace feature or XFS, they'll choose XFS.

XFS developers aren't unreasonable ... they'll help if we ask. I mean it was them who eventually helped us get USER_NS turned on in the first place.

James
https://lkml.org/lkml/2014/6/7/118
I have a few arrays that I need to hold in memory for the duration of my program. The arrays are used as look-up references by different files, so I thought I should make a DLL to hold them in. The main problem I seem to run into is that the arrays have to be constructed at the beginning of the program. The arrays hold a few thousand values each and future ones may hold millions, so hard-coding the arrays isn't an option.

Here is my best attempt. First, I made the DLL header file. I read about making static constructors, which is what I am trying to do here to hold the arrays. I put the export only on the NumCalc class (correct?).

// TablesDll.h
#ifndef TABLESDLL_EXPORTS
#define TABLESDLL_EXPORTS

#ifdef TABLESDLL_EXPORTS
#define TABLESDLL_API __declspec(dllexport)
#else
#define TABLESDLL_API __declspec(dllimport)
#endif

namespace tables
{
    class Arrays
    {
    public:
        static const int* arr;
    private:
        static int* SetNums();
    };

    class TABLESDLL_API NumCalc
    {
    public:
        static Arrays arrays;
    };
}
#endif

Now the definitions:

// TablesDll.cpp
#include "stdafx.h"
#include "TablesDll.h"
#include <stdexcept> // (<- I don't know why this is here...)

namespace tables
{
    const int* Arrays::arr = SetNums();

    int* Arrays::SetNums()
    {
        int* arr = new int[2000];
        /* set the numbers */
        return arr;
    }
}

It compiles fine. I take the files and stick them into a test program as so:

// TestTablesDll
#include "stdafx.h"
#include "TablesDll.h"
using namespace tables;

int _tmain(int argc, _TCHAR* argv[])
{
    for(int i = 0; i < 299; i++)
        printf("arr[%i] = %d\n", i, NumCalc::arrays::arr[i]);
    return 0;
}

This doesn't even compile, unfortunately:

error C3083: 'arrays': the symbol to the left of a '::' must be a type

My previous attempt didn't use the static constructor. There was no Arrays class; NumCalc was the only class, containing static TABLESDLL_API const int* arr and the private function static const int* SetNums(). That yielded an LNK2001 linker error when built with the TestTablesDll project. I'm pretty sure there's an issue with the function not running at compile time, leaving the arr variable undefined. How can I do this?

In TablesDll.h you should put TABLESDLL_API on the Arrays class too. Otherwise you will not be able to use the parts of NumCalc that depend on Arrays. Also you should have this:

Arrays NumCalc::arrays;

in TablesDll.cpp even though Arrays is an empty class - arrays has to be defined somewhere (and not just declared in the class definition).

EDIT: There were more problems. arr should be accessed like this: NumCalc::arrays.arr - with . and not with ::

Also the header always exports symbols because you define TABLESDLL_EXPORTS and right after that you check if it's defined. This is how it should be:

#ifndef TABLESDLL_HEADER_GUARD
#define TABLESDLL_HEADER_GUARD

#ifdef TABLESDLL_EXPORTS
#define TABLESDLL_API __declspec(dllexport)
#else
#define TABLESDLL_API __declspec(dllimport)
#endif

and in TablesDll.cpp you should define TABLESDLL_EXPORTS before including the header - so that only the DLL exports the symbols and the executable imports them. Like this:

#define TABLESDLL_EXPORTS
#include "TablesDll.h"
http://databasefaq.com/index.php/answer/108035/c-arrays-dll-linker-holding-constructed-static-arrays-in-memory-for-multiple-files-c-
Compiler Error CS0445

Visual Studio 2008
Updated: March 2009

Cannot modify the result of an unboxing conversion

The result of an unboxing conversion is a temporary variable. The compiler prevents you from modifying such variables because any modification would go away when the temporary variable goes away. To fix this, use a new value-type variable to store the intermediate expression, and assign the value to the new variable.

The following code generates CS0445:

// CS0445.CS
namespace ConsoleApplication1
{
    class UnboxingTest
    {
        public static void Main()
        {
            Point p;
            p.x = 1;
            p.y = 5;
            object obj = p;

            // Generates CS0445:
            ((Point)obj).x = 2;

            // Use the following lines instead.
            //Point p2;
            //p2 = (Point)obj;
            //p2.x = 2;
            //obj = p2;

            // Verify the change.
            //Console.WriteLine(((Point)obj).x);
        }
    }

    public struct Point
    {
        public int x;
        public int y;
    }
}
http://msdn.microsoft.com/en-us/library/1zd0a13x(v=vs.90).aspx
WARNING: Version 6.1 of the Elastic Stack has passed its EOL date. This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.

The native realm is added to the realm chain by default. You don't need to explicitly configure a native realm to manage users through the REST APIs. When you configure realms in elasticsearch.yml, only the realms you specify are used for authentication. To use the native realm as a fallback, you must include it in the realm chain.

You can, however, configure options for the native realm in the xpack.security.authc.realms namespace in elasticsearch.yml. Explicitly configuring a native realm enables you to set the order in which it appears in the realm chain, temporarily disable the realm, and control its cache options.

To configure a native realm:

- Add a realm configuration of type native to elasticsearch.yml under the xpack.security.authc.realms namespace. At a minimum, you must set the realm type to native. If you are configuring multiple realms, you should also explicitly set the order attribute. See Native Realm Settings for all of the options you can set for the native realm.

  For example, the following snippet shows a native realm configuration that sets the order to zero so the realm is checked first:

  xpack:
    security:
      authc:
        realms:
          native1:
            type: native
            order: 0

- Restart Elasticsearch.

See Native realm settings.

X-Pack security enables you to easily manage users in Kibana on the Management / Security / Users page. Alternatively, you can manage users through the user API. For more information and examples, see User Management APIs.

To migrate file-based users to the native realm, use the migrate tool.
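Native-realm users are managed entirely through the REST API. As a sketch (the 6.x-era endpoint path, host, and credentials below are assumptions for illustration; check the User Management APIs for your exact version), the request that creates a user can be assembled like this. The request is only built, not sent, so the shape can be inspected without a running cluster:

```python
import json
import urllib.request

# Assumed cluster address for illustration
ES_URL = "http://localhost:9200"

def build_create_user_request(username, password, roles):
    """Build (but do not send) a PUT request creating a native-realm user.

    In 6.x the user API lived under /_xpack/security/user/<username>;
    newer releases moved it, so treat this path as an assumption.
    """
    body = json.dumps({"password": password, "roles": roles}).encode("utf-8")
    return urllib.request.Request(
        url="%s/_xpack/security/user/%s" % (ES_URL, username),
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

req = build_create_user_request("jacknich", "j@rV1s", ["admin", "other_role1"])
print(req.get_method(), req.full_url)
```

Sending it is then a single `urllib.request.urlopen(req)` call (with whatever authentication your cluster requires).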
https://www.elastic.co/guide/en/x-pack/6.1/native-realm.html
Introduction

Extension methods are a new feature in C# 3.0. An extension method enables us to add methods to existing types without creating a new derived type, recompiling, or modifying the original types. We can say that it extends the functionality of an existing type in .NET. An extension method is a static method in a static class. We call an extension method in the same general way; there is no difference in calling.

Features and Properties of Extension Methods

The following list contains the basic features and properties of extension methods:

- It is a static method.
- It must be located in a static class.
- Its first parameter uses the this keyword followed by the type being extended.
- An extension method should be in the same namespace as it is used, or you need to import the namespace of the class with a using statement.
- You can give any name to the class that has an extension method, but the class should be static.
- If you want to add new methods to a type and you don't have the source code for it, then the solution is to use and implement extension methods of that type.
- If you create extension methods that have the same signature as methods on the type you are extending, then the extension methods will never be called.

Using the Code

We create an extension method for the string type, so string will be specified as a parameter for this extension method, and that method will be called by a string instance using the dot operator.

In the WordCount() method below, we are passing a string type with this, so it will be called by a string type variable, in other words a string instance. Now we create a static class and two static methods: one for the total word count in a string and another for the total number of characters in a string without spaces.

using System;

namespace ExtensionMethodsExample
{
    public static class Extension
    {
        public static int WordCount(this string str)
        {
            string[] userString = str.Split(new char[] { ' ', '.', '?' },
                StringSplitOptions.RemoveEmptyEntries);
            int wordCount = userString.Length;
            return wordCount;
        }

        public static int TotalCharWithoutSpace(this string str)
        {
            int totalCharWithoutSpace = 0;
            string[] userString = str.Split(' ');
            foreach (string stringValue in userString)
            {
                totalCharWithoutSpace += stringValue.Length;
            }
            return totalCharWithoutSpace;
        }
    }
}

Now we create an executable program that takes a string as input, uses the extension methods to count the total words and the total number of characters in that string, then shows the result on the console screen.

using System;

namespace ExtensionMethodsExample
{
    class Program
    {
        static void Main(string[] args)
        {
            string userSentance = string.Empty;
            int totalWords = 0;
            int totalCharWithoutSpace = 0;

            Console.WriteLine("Enter your sentence");
            userSentance = Console.ReadLine();

            // Calling extension method WordCount
            totalWords = userSentance.WordCount();
            Console.WriteLine("Total number of words is: " + totalWords);

            // Calling extension method to count characters
            totalCharWithoutSpace = userSentance.TotalCharWithoutSpace();
            Console.WriteLine("Total number of characters is: " + totalCharWithoutSpace);

            Console.ReadKey();
        }
    }
}

~Suraj K
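For comparison outside .NET, the two helpers above can be sketched as plain Python functions. Python has no extension-method syntax, so these are ordinary functions rather than new methods on str, but the counting logic is the same:

```python
import re

def word_count(s):
    # Mirror C#'s Split(' ', '.', '?') with StringSplitOptions.RemoveEmptyEntries
    return len([w for w in re.split(r"[ .?]+", s) if w])

def total_char_without_space(s):
    # Sum the lengths of the space-separated parts, as the C# version does
    return sum(len(part) for part in s.split(" "))

print(word_count("Hello world."), total_char_without_space("Hello world."))  # 2 11
```

Note that, like the C# original, total_char_without_space still counts punctuation characters, since only spaces are removed.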
http://www.dotnetguru.in/2014/11/
Document Associations

January 30, 2002

This week the XML-Deviant attempts to disentangle the threads of a number of tightly woven discussions that have taken place on XML-DEV recently. The general theme of these discussions is how one associates processing with an XML document. On the surface this may seem like a simple problem, but there are a number of issues that expose some weak points in the XML architecture.

Actually, in circumstances where you are exchanging XML between tightly coupled systems, there are very few issues, beyond the usual systems integration gotchas. The difficulties begin to arise in circumstances that make the most of the advantages that markup provides. In these loosely coupled environments data may be subjected to processing and constraints unforeseen by the original information provider. Data tends to take on a life of its own, separate from the producing and consuming systems.

Processing in this context may involve dispatching to a component capable of processing the data or converting it into a known vocabulary so that it can be manipulated further. This may require some degree of resource discovery to find the correct components, schemas, and transformations required to carry out the task in hand. This is true for both generic processing architectures as well as specific applications; but in the latter case the resources may be pre-packaged and immediately available rather than dynamically acquired.

James Clark describes validation as a very specific example of associating processing with a document. The schema used by a document author may not be the same as that used by the consumer of the document, assuming they use one at all. The author and consumer may require different constraints and may use different, or a combination of, schema languages to apply them. Clark argues that a general mechanism for describing processing is required.
This might be achieved by in-document indicators or by an external association defined by the processor. Anchor points available within a document which allow an association of resources and processing include the MIME type, namespaces, or the document type. The interplay between the first two of these mechanisms was the subject of last week's column. This week's column will focus on the latter two issues and will illustrate some of the complexities highlighted in the recent debate.

RDDL and Namespaces

A heated exchange raged across XML-DEV recently concerning RDDL, which provides a means to associate a directory of resources with an XML namespace. It took a lot of flames to boil down the issues to their core, at which point it became clear that most of the disagreement was about the utility of associating resources with a namespace rather than a document. In other words, while RDDL was defined as a means to answer the question, "what is at the end of a namespace?", many wanted a mechanism to associate resources at the document level. Ronald Bourret summed up this difference in granularity with his TV metaphor. Bourret also accurately diagnosed the original source of confusion.

This confusion led some to conclude that RDDL was somehow broken and that an alternate mechanism was needed. However RDDL is generic enough to be applied to both tasks, ultimately leading Jonathan Borden to suggest a means to associate a resource directory with an XML document using either a namespaced attribute or a Processing Instruction. A quick tally of opinion suggests that the latter was preferable to many, Tim Bray being the notable exception. Bray also warned against attempting to define too much too early:

"There's probably a good idea lurking in here somewhere, but I don't think we're really ready to write the rules down yet."

So while there may be a general consensus that such a mechanism is both desirable and necessary, there's still no agreement on the best approach.
For example, Michael Brennan had previously suggested a mechanism that could generalize things further by using extended XLinks. Rick Jelliffe believed that an approach based on packaging XML applications was a richer solution:

"We need to move beyond document types to distributable, extensible!, identifiable (and, sure, web-locatable), system-integrator-friendly 'XML Applications'."

Typing and Architectural Forms

Moving beyond resource associations, the discussion also touched on the general issue of document typing. Specifically the relationships between document types, namespaces, and schemas. Rick Jelliffe explained that there isn't a 1:1 relationship between namespaces and schemas:

"...a name in a namespace does not always have a 1:1 association with a particular schema definition. Similarly, the elements in a whole namespace may be used in different ways by different schemas which use elements from the namespace. But often there will be one general or typical schema for a namespace. Yet variants can be expected over time due to maintenance, etc. So a namespace may be a set, but that does not mean an element in a particular namespace will always have the same content model etc."

Jonathan Borden also demonstrated, using a "schema algebra", that a document can have many types, and also that it's wrong to equate namespaces and document types. Borden said that the main issue is that a replacement for DOCTYPE is required which is agnostic to the particular schema language used.

All of this boils down to many-to-many associations between namespaces, schemas, and document types. A particular instance may itself take on different types according to how it's interpreted by the user. This seems to be the central message and is a way to understand the theoretical arguments. Semantics are entirely local and are defined by the particular processing context into which the data is fed. The tightly coupled XML exchange mentioned previously becomes a special case.
In this circumstance the producer and consumer agree precisely on how a document should be interpreted. It's important for a producer to be able to assert that data is suitable for processing in a certain way, but the consumer is free to disregard this. This echoes Clark's argument that a general mechanism for associating processing is required and lends weight to Gavin Nicol's assertion that this should be defined separately from the instance. If it's defined separately, then the consumer of a document can override it.

Steven Newcomb argued that Architectural Forms is a natural fit in this kind of environment, allowing a document to assert that it conforms to a variety of constraints:

"Someday we'll wake up and realize that, from an information management-and-interchange perspective, it's very, very useful for an element to declare that it's an instance of multiple element types, and to be able to invoke full syntactic validation of such instances against all their classes, in syntactic space, including both context and content. Anything less is suboptimal as a basis for flexible, mix-and-match information interchange via XML, among people who want to cooperate with each other, but who have endlessly specialized local requirements. Architectural forms, anyone?"

Whether Architectural Forms will be successfully dug out from the HyTime infrastructure remains to be seen. John Cowan certainly seems interested in exploring the possibilities.

Unfortunately there are no easy answers at the end of this discussion. For the most part it appears to be scene setting for a large amount of work still to be undertaken. This is a recurrent New Year theme on XML-DEV, according to Len Bullard. Hazarding some predictions it seems likely that the pipeline meme that's been circulating recently will continue to do so, and that the ISO DSDL work will provide some key solutions in this area.
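The "semantics are local" point lends itself to a concrete sketch: a consumer-side dispatcher that associates processing with a document via the namespace of its root element, with the consumer free to register (or override) whatever handlers it likes. The registry and namespace URI below are hypothetical illustrations, not part of any of the proposals discussed:

```python
import xml.etree.ElementTree as ET

# Hypothetical handler registry: semantics are local, so the *consumer*
# decides what each namespace means by registering its own handlers.
HANDLERS = {}

def register(namespace, handler):
    HANDLERS[namespace] = handler

def dispatch(xml_text):
    """Parse a document and route it by the namespace of its root element."""
    root = ET.fromstring(xml_text)
    # ElementTree spells namespaced names as '{uri}localname'
    if root.tag.startswith("{"):
        uri = root.tag[1:].split("}")[0]
    else:
        uri = ""  # no namespace at all
    handler = HANDLERS.get(uri)
    if handler is None:
        raise LookupError("no processing associated with %r" % uri)
    return handler(root)

register("http://example.org/config",
         lambda root: "config:" + root.tag.split("}")[1])

doc = '<c:Config xmlns:c="http://example.org/config"/>'
print(dispatch(doc))  # config:Config
```

The same skeleton could equally key on a MIME type or an asserted document type; the point is only that the binding lives with the consumer, not in the instance.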
http://www.xml.com/pub/a/2002/01/30/association.html
There was a time when instruments sporting a GPIB connector (General Purpose Interface Bus) for computer control on their back panels were expensive and exotic devices, unlikely to be found on the bench of a hardware hacker. Your employer or university would have had them, but you'd have been more likely to own an all-analogue bench that would have been familiar to your parents' generation.

So there you are, with an instrument that speaks a fully documented protocol through a physical interface you have plenty of spare sockets for, but if you're a Linux user and especially if you don't have an x86 processor, you're a bit out of luck on the software front. Surely there must be a way to make your computer talk to it! Let's give it a try; I'll be using a Linux machine and a popular brand of oscilloscope, but the technique is widely applicable.

It's Easy With a VISA

We are fortunate in that National Instruments have produced a standard bringing together the various physical protocols and interfaces used, and their VISA (Virtual Instrument Software Architecture) is available as precompiled libraries for both Windows and Linux (x86). Talking to VISA is a well-trodden path, for example if you are a Python coder there is a wrapper called PyVISA through which you can command your instruments to your heart's content. And if you've spotted the glaring gap for architectures with no NI VISA library, they've got that covered too. PyVISA-py is a pure Python implementation of VISA that replaces it.

As a demonstration, we'll take you through the process of using PyVISA-py and PyVISA on a Raspberry Pi for basic communication with an instrument over USB. We've used both a Raspberry Pi Zero and a Raspberry Pi 3, each running the latest Raspbian distro, but a similar path should apply to most other Linux environments and like instruments. Our instrument here is a Rigol DS1054z oscilloscope. We start by installing the Python libraries for USB, PyVISA-py, and PyVISA.
We're assuming you already have Python and pip; if not, here's a page detailing their installation. Type the following lines at the command prompt:

sudo pip install pyusb
sudo pip install pyvisa
sudo pip install pyvisa-py

You should now be able to test the installation from the Python interpreter. Make sure the instrument is both turned on and connected via USB, and type the following:

sudo python

This should give you a version and copyright message for Python, followed by a three-arrow >>> Python interpreter prompt. Type the following lines of Python:

import visa
resources = visa.ResourceManager('@py')
resources.list_resources()

The first line imports the VISA library, the second loads a resource manager into a variable, and the third queries a list of connected instruments. The '@py' in line 2 tells the resource manager to look for PyVISA-py; if the brackets are empty it will look for the NI VISA libraries instead. If all is well, you will see it return a list of resource names for the instruments you have connected. If you only have one instrument it should be similar to the one that follows for our Rigol:

(u'USB0::6833::1230::DS1ZA123456789::0::INSTR',)

The part in the single quotes, starting with USB0::, is the VISA resource name for your instrument. It is how you will identify it and connect to it in further code you write, so you will either need to run the Python code above in your scripts and retrieve the resource name before you connect, or, as we are doing in this demonstration, copy it from the prompt and hard-code it in the script. Hard-coding is not in any way portable as the script may only work with your particular instrument, however it does provide a convenient way to demonstrate the principle in this case. If you are still within the Python interpreter at this point, you can leave it and return to the command prompt by typing a control-D end-of-file character.
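That resource name has a regular shape, so rather than hard-coding it a script can pick the right instrument out of the list_resources() tuple itself. The helpers below are our own sketch, not part of PyVISA; the field layout assumes the USB0::vendor::product::serial::interface::INSTR form shown above:

```python
def parse_usb_resource(name):
    """Pick apart a USB VISA resource name like
    'USB0::6833::1230::DS1ZA123456789::0::INSTR'."""
    fields = name.split("::")
    return {
        "interface": fields[0],
        "vendor_id": int(fields[1]),
        "product_id": int(fields[2]),
        "serial": fields[3],
    }

def find_instrument(resource_list, serial_prefix):
    """Return the first USB resource whose serial number matches, or None."""
    for name in resource_list:
        if not name.startswith("USB"):
            continue
        if parse_usb_resource(name)["serial"].startswith(serial_prefix):
            return name
    return None

resources = ("USB0::6833::1230::DS1ZA123456789::0::INSTR",)
print(find_instrument(resources, "DS1Z"))
```

With this, a script can call find_instrument(resources.list_resources(), "DS1Z") and open whatever matching scope happens to be plugged in, rather than one specific serial number.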
Towards Something More Useful

Assuming all the steps in the previous paragraphs went smoothly, you should now be ready to write your own code. We'll give you a simple example, but first there are a couple of pieces of documentation you'll want to become familiar with. The first is the PyVISA documentation, the same as we linked to earlier, and the second should be the programming reference for your instrument. The manufacturer's website should have it available for download; in the case of our Rigol it can be found as a PDF file (click on the "Product manuals" link at the top). The PyVISA manual details all the wrapper functions and has a set of tutorials, while the product manual lists all the commands supported by the instrument. In the product manual you'll find commands to replicate all the interface controls and functions, but the ones we are most interested in are the measurement (MEAS) set of commands.

For our example, we'll be measuring the RMS voltage on channel 1 of our Rigol. We'll connect to the instrument directly using its resource name and query it for its model identifier, before selecting channel 1 and querying it for an RMS voltage reading. Copy the following code into a text editor, replacing the resource identifier with that of your own instrument, and save it as a .py file. In our case, we saved it as query_rigol.py.

#Bring in the VISA library
import visa

#Create a resource manager
resources = visa.ResourceManager('@py')

#Open the Rigol by name. (Change this to the string for your instrument)
oscilloscope = resources.open_resource('USB0::6833::1230::DS1ZA123456789::0::INSTR')

#Return the Rigol's ID string to tell us it's there
print(oscilloscope.query('*IDN?'))

#Select channel 1
oscilloscope.query(':MEAS:SOUR:CHAN1')

#Read the RMS voltage on that channel
fullreading = oscilloscope.query(':MEAS:ITEM? VRMS,CHAN1')

#Extract the reading from the resulting string...
readinglines = fullreading.splitlines()

# ...and convert it to a floating point value.
reading = float(readinglines[0])

#Send the reading to the terminal
print(reading)

#Close the connection
oscilloscope.close()

Enable the channel on the instrument (when you are familiar with the API you can do this with your software) and connect it to a signal. We used the 'scope calibration terminal as a handy square wave source. You can then run the script as follows, and if all is well you will be rewarded with the instrument ID string and a voltage reading:

sudo python query_rigol.py

It's worth noting that we have just run Python as root through the sudo command to use the USB device for the purposes of this demonstration. It's beyond the scope of this page, but you will want to look at udev rules to allow its use as a non-superuser.

With luck on this page we will have demystified the process of controlling your USB-connected instruments, and you should be emboldened to give it a go yourself. We're not quite done yet though; the second part of this article will present a more complete example with a practical purpose; we'll use our Raspberry Pi and Rigol to measure the bandwidth of an RF filter.

28 thoughts on "How to Control Your Instruments From A Computer: It's Easier Than You Think"

PyVISA is fantastic for USB devices and I definitely recommend it if you want something cross-platform for your lab (unlike VB). Really nice for automating parametric sweeps.

I had the misfortune of working in a lab that had old equipment that only had GPIB interfaces. In addition, PyVISA isn't able to support USB-GPIB adapters very well, and Keysight and NI suites really only support RHEL in terms of Linux.

Those programming references can get a bit dense, especially for VNAs. VISA obviously makes things easier, but don't forget about SCPI (which is abstracted by VISA). If you've got GPIB you probably have SCPI compliance.
The purpose of the article though is a simple introduction rather than a comprehensive review. National Instruments bloatware? no thanks! I prefer to keep my sanity! I installed this VISA-stuff because i wanted to try communicate with my DS1102E and i had a bad surprise, it messed something up on my computer. Fixing wasn’t too difficult, just delete some stuff in the registry, but yeah, this shouldn’t happen. Once this was done i was able to communicate with the scope using Perl. I however had to write a little more code because some simplified Perl-module didn’t work. Don’t ask me about details, long time ago… i wrote a visa app for windows for my rigol dm3058e dmm , i haven’t finished it since it covered the bits i wanted, still have a few things.. just noticed i also forgot to copy the C++ source code to github though, so i’ll add it. Baahahaa, I skimmed this article earlier thinking it was a replacement for MIDI. I thought there would be trumpets and drums! I bet if your instrument has steppers you could squeeze Für Elise (or the Imperial March;if that’s your jam) out of it via SCPI. Well this is the one to beat… More than a few scanners and copiers had classical music Easter egg ‘calibration’ routines. e.g. This works for Siglent scope´s also! Also work with the Visa sources. Control commands can be found here : Manuals Shameless plugs: and What’s the benefit of Visa, compared to simple SCPI through USBTMC protocol on Linux? USBTMC let you talk very easily to your instrument, the most useful would be to abstract SCPI commands. You can change between different physical layer like GPIB, RS232, USB, VXI or LXI. I agree, but most recent instruments implements USBTMC then others will works with either GPIB or RS232. For GPIB(with prologix USB dongle) and RS232 it may be exactly the same than USBTMC, you just write SCPI commands to /dev/ttyUSBX(GPIB or rs232) or /dev/usbtmcX(USBTMC) and read response from this same inode. It should be the same. 
But with VISA you can use other connections like Ethernet. And as a big plus, it is interchangeable over different OSes.

Probably its greater abstraction, as a simple introduction to the topic, in the case of this article.

I’m about to google for a GPIB implementation on the Raspberry Pi, using its GPIO pins (+ level-shift protection circuitry, of course)…

VISA is nice, but the whole power comes with IVI. Then you can change all components in your setup, in theory :D

The only person I know who writes a HaD blog wherein complete instructions are given.

We aim to please :)

Can’t you just pull out the Commodore PET for a controller? Legend has it that some of those computers were sold because they were cheap controllers, so suddenly the labs with test equipment that had the bus could afford controllers. I seem to recall the HP-150, the one from 1983 with the touch screen, used the bus to interface to its external floppy drive. And of course, there were HP calculators in later days which included the bus.
Michael

There were also the HP-8X and 98XX computers, which often had an HPIB interface card that was also used to control the disk drives, hard drives, and printers on the system.

I have an old Tek scope with RS232, I think it was just intended to print. Can I capture the print with it?

I seem to remember doing this with a Tek TDS420 about 20 years ago. Just connected the serial to a laptop and used a terminal program to capture the output to a file. I think I might have had to set the output format to something that was printable ASCII to get the file to capture properly, but I’m not sure about that.

GPIB was originally called HPIB. Standardized as IEEE-488. VISA is an abstraction over multiple physical layers, and a standardized API; for example VXI-11 is the LAN abstraction (later updated to LXI). VXI-11 is built on ONC-RPCs.
Pretty much every HP computing device from the late 70’s to the late 90’s came with HPIB, and it was used to interface to all manner of peripherals as well as instruments.

The cheap USB-GPIB dongles, like the Prologix, are really just toys. They don’t abstract the entirety of IEEE-488… for example, there’s no interrupt channel, so SRQs are poorly implemented. Most of the standards are online these days if you go digging.

SCPI is an attempt to standardize command sets across like instruments. It’s only been partially successful at that.

Personally, I prefer to use Perl to control instruments, with PDL to handle the data. Google perl VXI11::Client to find it.

A really good and often cheap LAN-GPIB box is the HP E2050A. Beware of NI… they don’t implement VXI-11 and use a proprietary protocol.

Another shameless plug: The Syscomp instruments use an FTDI chip to interface to the USB connection, so the interface looks like a high-speed serial interface and you can talk to it with any language that can send ASCII strings to a serial port. We use Tcl, but you can even use a dumb terminal emulator to talk to the hardware. Makes debugging very straightforward.
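Whichever transport you end up with (VISA, USBTMC, or plain ASCII over a serial port), the response handling is the same: read back a string and parse it. The float-conversion step from the article's script can be factored into a small reusable helper. This is only a sketch, and the function name is my own invention, not part of any library:

```python
def parse_scpi_number(response):
    """Parse the first line of an ASCII SCPI query response as a float.

    Instruments typically terminate responses with a newline (or CRLF),
    and many return scientific notation, e.g. "2.040000e+00".
    """
    lines = response.splitlines()
    if not lines or not lines[0].strip():
        raise ValueError("empty SCPI response")
    return float(lines[0])

print(parse_scpi_number("2.040000e+00\n"))   # 2.04
print(parse_scpi_number("-1.5E-03\r\n"))     # -0.0015
```

Keeping the parsing separate from the I/O also makes it easy to unit-test your measurement scripts without the instrument connected.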
http://hackaday.com/2016/11/16/how-to-control-your-instruments-from-a-computer-its-easier-than-you-think/
Programming and Debugging (in my Underhøøsen). Mostly C++ and D stuff... from the maker of <a href="" rel="nofollow">ZeroBUGS</a><br /><br />The Free Meme Power of Foreach<br /><br />In D, arrays can be traversed using the foreach construct:<br /><pre><code><br />int[] a = [1, 5, 10, 42, 13];<br />foreach (i; a) {<br /> writefln("%d", i);<br />}<br /></code></pre><br />The array of integers in this example is traversed and each element printed, in the natural order in which elements appear in the sequence. To visit the elements in reverse order, simply replace <code>foreach</code> with <code>foreach_reverse</code>. It is as intuitive as it gets.<br /><br />Moreover, linear searches can be implemented with foreach: simply break out of the loop when the searched-for value is found:<br /><pre><code><br />foreach (i; a) {<br /> writefln("%d", i);<br /> if (i == 42) {<br /> break;<br /> }<br />}<br /></code></pre><br />Consider now a tree data structure where a tree node is defined as:<br /><pre><code><br />class TreeNode {<br /> TreeNode left;<br /> TreeNode right;<br /> int value;<br />}<br /></code></pre><br />What is the meaning of a statement such as <code>foreach (node; tree) { … }</code>? The simple answer is that with the above definition of TreeNode, the code does not compile.<br /><br />But if it were to compile, what should it do? Visit the nodes in order, or in a breadth-first fashion? No answer is the right one, unless we get to know more about the problem at hand. If we’re using a binary search tree to sort some data, then foreach would most likely visit the nodes in-order; if we’re evaluating a Polish-notation expression tree, we might want to consider post-order traversal.<br /><br /><br /><h2>Foreach Over Structs and Classes</h2><br /><br />Tree data structures and tree traversal occur often in computer science problems (and nauseatingly often in job interviews).
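As an aside, the traversal-choice problem and the early-break pattern are not specific to D. For comparison only (this is Python, not D), the same in-order-search-with-break can be written against a generator, where breaking out of the loop simply abandons the iterator:

```python
class TreeNode:
    """A minimal binary tree node, mirroring the D class above."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def in_order(node):
    """Yield the values of a binary tree in order, lazily."""
    if node is not None:
        yield from in_order(node.left)
        yield node.value
        yield from in_order(node.right)

# A small binary search tree:   10
#                              /  \
#                             5    42
tree = TreeNode(10, TreeNode(5), TreeNode(42))

found = None
for v in in_order(tree):
    if v == 42:        # linear search with early exit,
        found = v      # like `break` inside a D foreach
        break
print(found)  # 42
```

Note that in Python the traversal order is fixed by which generator you call (`in_order` here); D's opApply, discussed next, plays the analogous role of letting the type author choose the iteration strategy.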
Balanced binary trees are routinely used to implement associative containers, as C++ programmers familiar with the standard set and map collections will recognize.<br /><br />One difference between sequence containers (such as lists, arrays, and queues) on one hand, and containers implemented with trees on the other, is that there are more ways to iterate over the elements of a tree than there are ways to enumerate the elements of a sequence. A list (for example) can be traversed from beginning to end and, in the case of doubly-linked lists, in reverse, from the end to the beginning; that’s it. But a tree can be traversed in-order, pre-order, post-order, or breadth-first.<br /><br />No built-in traversal algorithm will fit all possible application requirements. D’s approach is to provide the opApply operator as "standard plumbing" where users can plug their own algorithms for iterating over the elements of a class or struct. The operator is supposed to implement the iteration logic, and delegate the actual processing of the objects to a ... delegate:<br /><br /><pre><code><br />class TreeNode {<br />public:<br /> TreeNode left;<br /> TreeNode right;<br /> int value;<br /> int opApply(int delegate(ref TreeNode) processNode) {<br /> // ... tree traversal<br /> // ...<br /> return 0;<br /> }<br />}<br /></code></pre><br />When the programmer writes a foreach loop, the compiler synthesizes a delegate function from the body of the loop, and passes it to <code>opApply</code>.
In this example, the body of the delegate will have exactly one line that contains the <code>writefln</code> statement:<br /><pre><code><br />TreeNode tree = constructTree();<br />foreach(node; tree) {<br /> writefln(“%d”, node.value);<br />}<br /></code></pre><br />For an in-order traversal, the implementation of opApply may look something like this:<br /><pre><code><br />int opApply(int delegate(ref TreeNode) processNode) {<br /> return (left && left.opApply(processNode)) <br /> || processNode(this)<br /> || (right && right.opApply(processNode));<br />}<br /></code></pre><br />The delegate that the compiler synthesizes out of the foreach body returns an integer (which is zero by default). A break statement in the foreach loop translates to the delegate function returning a non-zero value. As you can see in code above, a correct implementation of opApply should make sure that the iteration is "cancelled" when the delegate returns a non-zero value.<br /><br />The traversal function’s argument must match the delegate argument type in the signature of opApply. In the example above the processNode function could modify the tree node that is passed in. If the TreeNode class writer wanted to outlaw such use, the opApply operator should have been declared to take a delegate that takes a const TreeNode parameter:<br /><pre><code><br />class TreeNode {<br />// …<br /> int opApply(int delegate(ref const TreeNode) processNode) {<br /> return processNode(this); <br /> }<br />}<br /></code></pre><br />The new signature demands that the client code changes the parameter type from TreeNode to const TreeNode. Any attempt to modify the node object from within the user-supplied traversal function will fail to compile.<br /><br />Another possible design is to encode all traversal algorithms as TreeNode methods. 
The following shows an example for the in-order algorithm (other traversal algorithms are left as an exercise for the reader):<br /><pre><code><br />class TreeNode {<br />// …<br /> int traverseInOrder(int delegate(ref int) dg) {<br /> if (left) {<br /> int r = left.traverseInOrder(dg);<br /> if (r) {<br /> return r;<br /> }<br /> }<br /> int r = dg(value);<br /> if (r) {<br /> return r;<br /> }<br /> if (right) {<br /> r = right.traverseInOrder(dg);<br /> if (r) {<br /> return r;<br /> }<br /> }<br /> return 0;<br /> }<br />}<br /><br />foreach(val; &tree.traverseInOrder) {<br /> Console.WriteLine(val);<br />}<br /></code></pre><br /><br /><h2>D Generators</h2><br /><br />A generator is a function or functor that returns a sequence, but instead of building an array or vector containing all the values and returning them all at once, a generator yields the values one at a time. Languages such as C# and Python have a yield keyword for this purpose. In D a generator can be implemented with foreach and a custom opApply operator. Assume one wants to print the prime numbers up to N, like this:<br /><pre><code><br /> foreach (i; PrimeNumbers()) {<br /> if (i > N) {<br /> break;<br /> }<br /> writeln(i);<br /> }<br /></code></pre><br />To make this work, the PrimeNumbers struct could be implemented like this:<br /><pre><code><br />struct PrimeNumbers {<br /> int n = 1;<br /> int primes[];<br /><br /> int opApply(int delegate(ref int) dg) {<br />loop:<br /> while (true) {<br /> ++n;<br /> foreach (p; primes) {<br /> if (n % p == 0) {<br /> continue loop;<br /> }<br /> }<br /> primes ~= n;<br /> if (dg(n)) {<br /> break;<br /> }<br /> }<br /> return 1;<br /> }<br />}<br /></pre></code><div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme AssemblyPublishing the source code for D.NET on <a href="">CodePlex</a> in its current (rough) form turned out to be a great idea, as I received very good feedback. 
Tim Matthews of New Zealand has submitted several bug reports and patches and convinced me to change my stance on the "assembly class hack".<br /><br />The hack was a very limited solution to the problem of qualifying imported declarations by the name of the assembly where they live. I described the problem and the attempt to hack around it at the end of a post <a href="">back in December</a>; a diligent reader commented that I should have used the <span style="font-style: italic;">pragma</span> mechanism instead. I resisted the suggestion at the time, mainly because I am trying to avoid changing the compiler front-end if I can help it. (Front-end changes have to ultimately be merged back into Walter's source tree, and he is a very, very busy guy.)<br /><br />A D implementation on .NET is not going to be of much use without the ability to generate code that imports and interfaces with existing assemblies. Guided by this idea, Tim Matthews did a lot of trail-blazing and prototyping and showed that because in .NET namespaces can span across assemblies, there has to be a way of specifying an arbitrary number of assemblies in one import file. My "assembly class" hack allowed for one and only one assembly name to be specified.<br /><br />So I had to bite the bullet and do the right thing: assembly names are now specified with a pragma, like this:<br /><pre><code><br />pragma (assembly, "mscorlib")<br />{<br />// imported declarations here...<br />}<br /><br /></code></pre>Any number of such blocks can appear within a D import file.
And in the future, the code will be extended to allow a version and a public key to be specified after the assembly name.<br /><br />Another thing that has to be fixed is to make the compiler adhere to the convention of using a slash between enclosing classes and nested types, as described in Serge Lidin's <a href="">Expert .NET 2.0 IL Assembler</a>, CHAPTER 7 page 139:<br /><blockquote>Since the nested classes are identified by their full name and their encloser (which is in turn identified by its scope and full name), the nested classes are referenced in ILAsm as a concatenation of the encloser reference, nesting symbol / (forward slash), and full name of the nested class</blockquote><br /><br />Tim also submitted a front-end patch that allows directories and import files of the same name to coexist, so that the code below compiles without errors:<br /><pre><code><br />import System;<br />import System.Windows.Forms;<br /></code></pre><br /><br />An alternative solution that I proposed was to create import files named "this.d", so that the above would have read:<br /><pre><code><br />import System.this;<br />import System.Windows.Forms;<br /></code></pre><br /><br />After some consideration, I bought Tim's point that this is not what .NET programmers would expect, and went on to apply his patch.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme .NET on CodeplexI will be taking a few days off next week and I decided to upload the code for the D compiler <span style="font-weight:bold;">.net</span> back-end on Codeplex before I close shop. 
I hope that it will provide context for the last few months’ worth of blog posts.<br /><br />Most core language features are usable, but there's no Phobos port and if you need to import functionality from external DLLs, you'll have to hand-write some import files (following the model in <code>druntime/import/System.di</code>); I hope to get around to writing a tool that automates the process one of these days -- and it will most likely be in D.<br /><br /><span style="font-style:italic;">fault</span>.<br /><br />Check it out at <a href=""></a>!<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme against _argptr<br /><br />Variadic functions work slightly differently in my D.NET implementation than under the native D compiler.<br /><br />For functions with variable numbers of arguments, the native compiler synthesizes two parameters: <i>_arguments</i> and <i>_argptr</i>; _arguments is an array of TypeInfo objects, and _argptr is a pointer to the beginning of the variable arguments on the stack. The user is supposed to query the type information in _arguments, and do the proper pointer arithmetic to navigate the arguments. You can see some examples at <a href=""></a>:<br /><blockquote><pre><code><br />void printargs(int x, ...)<br />{<br /> writefln("%d arguments", _arguments.length);<br /> for (int i = 0; i < _arguments.length; i++)<br /> { _arguments[i].print();<br /><br /> if (_arguments[i] == typeid(int))<br /> {<br /> int j = *cast(int *)_argptr;<br /> _argptr += int.sizeof;<br /> writefln("\t%d", j);<br /> }<br /> else if (_arguments[i] == typeid(long))<br /> {<br /> long j = *cast(long *)_argptr;<br /> _argptr += long.sizeof;<br /> writefln("\t%d", j);<br /> }<br /> // ...<br /></code></pre></blockquote><br /><br />The pointer arithmetic is not verifiable in managed code. A separate array of type descriptors is not necessary in .NET, because the type metadata can be passed in with the arguments.
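Passing arguments with their type metadata attached is the norm in dynamic runtimes. For comparison only (this is Python, not D or D.NET), a `*args` function can branch on each argument's runtime type without any `_argptr`-style pointer arithmetic, which is essentially the shape of the object-array scheme described above:

```python
def printargs(*args):
    """Format each variadic argument according to its runtime type.

    No pointer arithmetic is needed: every argument carries its own
    type metadata, queryable via isinstance().
    """
    out = []
    for arg in args:
        if isinstance(arg, int):
            out.append("int\t%d" % arg)
        elif isinstance(arg, str):
            out.append("str\t%s" % arg)
        else:
            out.append("other\t%r" % arg)
    return out

print(printargs(1, "two", 3.0))
```

The trade-off is the same one noted below for D.NET: type mismatches surface at run time rather than at compile time.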
<br /><br />In D.NET, the variable arguments are passed as an array of objects. For example, for a D function with the prototype <pre><code>void fun(...)</code></pre> the compiler outputs:<br /><code><br />.method public void '_D23funFYv' (object[] _arguments)<br /></code><br />I handled variadic support slightly differently from the native compiler: I dropped _argptr and provided a new helper function, _argtype, that can be used as demonstrated in this example:<br /><pre><code><br />void fun(...)<br />{<br /> foreach(arg; _arguments)<br /> {<br /> if (_argtype(arg) == typeid(int))<br /> {<br /> int i = arg;<br /> Console.WriteLine("int={0}".sys, i);<br /> }<br /> else if (_argtype(arg) == typeid(string))<br /> {<br /> string s = arg;<br /> Console.WriteLine(s.sys);<br /> }<br /> }<br />}<br /></code></pre><br />If the type of the arguments is known, there is no need to check for the typeid:<br /><pre><code><br />void fun(...)<br />{<br /> foreach(arg; _arguments)<br /> {<br /> int i = arg;<br /> Console.WriteLine(i);<br /> }<br />}<br /></code></pre><br />If an incorrect type is passed in, it is still okay, because the error is detected at runtime.<br /><pre><code><br />fun("one", "two", "three"); // int i = arg will throw<br /></code></pre><br /><br />The downside of this approach is that it is not compatible with the native code. This does not affect template variadic functions, which should be perfectly portable.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme and D-iceIt is amazing how much insight one can get into a language by <i>simply</i> writing a compiler for it... 
Today I am going to spend half a lunch break copying and pasting into this post a stack of notes related to array slices (collected over the past few months of working on a .net back-end for the D compiler).<br /><br />D sports a data type called <span style="font-style: italic;">array slice</span> that is intended to boost the performance of array-intensive computations. A slice is a lightweight value type, conceptually equivalent to a range of array elements, or a "view" into an array. It can be thought of as consisting of a reference to an array, and a pair of begin-end indexes into the array:<br /><pre><code><br />struct (T) ArraySlice {<br /> T[] a; // reference to array<br /> int begin; // index where slice begins<br /> int end; // one past index where slice ends<br />}<br /></code></pre><br />The actual representation of a slice is currently internal to the compiler, and completely opaque to the programmer. The template struct above is not what the layout of a slice actually looks like, it is intended for illustrative purposes only.<br /><br />Consider this code:<br /><pre><code><br />int a[] = [1, 3, 5, 7, 9, 11, 13, 17];<br />int[] s = a[2..5];<br /></code></pre><br />The second declaration introduces "s" as a slice of array "a", starting at position two and ending at (but not including) the fifth element. Using the template pseudo-definition of ArraySlice, the code is conceptually equivalent to:<br /><pre><code><br />int a[] = [1, 3, 5, 7, 9, 11, 13, 17];<br />ArraySlice!(int) s = { a, 2, 5 };<br /></code></pre><br />To understand how array slices may help performance, consider an application that reads and parses XML files. The input can be loaded as an array of characters (a huge string). The application builds a DOM representation of the input, and each node in the DOM contains a token string. 
This approach is wasteful, because the token strings hold copies of character sequences that are already present in the input; copying tokens around has linear complexity (it is directly proportional to the number of characters in the token) and the same is true for the spatial complexity (how much memory is being used). But XML tokens could be modeled as slices of character arrays ("views" into the original XML input string), and complexity in both time and space would drop down to a constant value.<br /><br />This design can be implemented in languages other than D, but memory management issues may add unnecessary complexity. In C++, for example, we would have to make sure that the lifetime of the token slices does not exceed the scope of the original input string. D belongs to the family of garbage-collected languages, and holding a reference to an array slice indirectly keeps the original array "alive", because the slice contains a reference to it.<br /><br />Now that the design rationale behind array slices is understood, let's take another look at the syntax. You have probably noticed that in the statement:<br /><code>int[] s = a[2..5];</code><br /><br />the declaration part introduces "s" as an array of integers; it is not until you see the assignment that the lightweight, array-slice nature of "s" is revealed.<br />D has no special syntax for disambiguating between "true" arrays and array slices in declarations; they can be used interchangeably. As a matter of fact, a function signature with array parameters will happily accept a slice as argument.
In the following code, both "a" and "s" are legal arguments for the count function:<br /><pre><code><br />int count(int[] arr) {<br /> return arr.length; // return the number of elements in arr<br />}<br />writefln("%d", count(a)); // ok<br />writefln("%d", count(s)); // also ok<br /></code></pre><br /><br /><h3>Resizing Slices</h3><br />Both arrays and slices support the built-in length property. As you expect, an expression such as a.length tells you how many elements are present in the array; the property applies to array slices as well, and in that case it gives the number of elements within the slice range. For example, the output of the following code is "3":<br /><pre><code><br />int a[] = [1, 2, 3, 5, 7, 9, 11, 13, 17];<br />int[] s = a[2..5];<br />writefln("%d", s.length); // prints 3<br /></code></pre><br />So far so good, but I forgot to mention that the length property is read-write: not only can you query the size of an array, you can also change it, like this:<br /><code><br />a.length = 100; // extend the array to 100 elements<br /></code><br />Assignment to the length property resizes the array. This raises the question: what happens when an array slice is resized? The answer of course is "it depends".<br />With "a" and "s" defined as above, let's say we resize the "s" slice from 3 elements to 7:<br /><code><br />s.length = 7;<br /></code><br />This extends the view of "s" into "a" up to the ninth element of "a" ("s" starts at 2). It is as if we had written:<br /><code><br />s = a[2..9];<br /></code><br />The slice is still within the bounds of the "a" array. Resizing it is a constant-time operation that changes one field inside the internal representation of "s".
If instead of the built-in slice we had used the template ArraySlice struct introduced above, resizing the slice would have amounted to:<br /><pre><code><br />int a[] = [1, 3, 5, 7, 9, 11, 13, 17];<br />ArraySlice!(int) s = { a, 2, 5 };<br />s.end = 9; // resize the slice<br /></code></pre><br />Because "s" is simply a "view" of the array, modifying an element in the array is immediately reflected into the slice, for example:<br /><pre><code><br />writefln("%d %d", a[2], s[0]); // prints "5 5"<br />a[2] = 23;<br />writefln("%d %d", a[2], s[0]); // prints "23 23"<br /></code></pre><br />What happens if we re-size "s" past the end of "a"?<br /><pre><code><br />s.length = 100; // what does this do?<br /></code></pre><br />The answer is that the behavior is up to the compiler. The current native compiler from Digital Mars changes the type of "s" from a lightweight view into an array to a full-fledged array, and re-sizes it to fit 100 elements.<br /><pre><code><br />int a[] = [1, 2, 3, 5, 7, 9, 11, 13, 17];<br />int[] s = a[2..5];<br />writefln("%d %d", a[2], s[0]); // prints "5 5"<br />s.length = 100;<br />a[2] = 23;<br />writefln("%d %d", a[2], s[0]); // prints "23 5"<br /></code></pre><br />In other words, resizing a slice past the end of the original array breaks up the relationship between the two, and from that point they go their separate merry ways.<br />This behavior also underlines the schizophrenic nature of array slices that are not full copies of arrays, unless they change their mind.<br /><br /><h3>Concatenating Slices</h3><br />We saw how array slices may be re-sized via the “length” property. Slices may be re-sized implicitly via concatenation, as in the following example:<br /><pre><code><br />int a[] = [1, 2, 3, 5, 7, 9, 11, 13, 17];<br />int s[] = a[0..5];<br />s ~= [23, 29];<br /></code></pre><br />The tilde is the concatenation operator for arrays, strings and slices. In this example, the slice is resized to accommodate the elements 23 and 29. 
Note that even in this situation, the resizing is different from what we would get had we written:<br /><code>s.length += 2;</code><br /><br />Extending the length by two elements simply bumps up the upper limit of the slice (because there is still room in the original array, "a"). As we saw in the previous section, if the new length exceeds the bounds of the original array, the slice will be "divorced" from the original array, and promoted from a lightweight view to a full, first-class array. If we just extend the length by two elements, the bounds of "a" are not exceeded.<br /><br />However, in the case of appending (as in s ~= [23, 29]) in addition to resizing we are also setting the values of two additional elements. The slice needs to be divorced from the array, so that a[5] and a[6] are not overwritten with the values 23 and 29. The compiler turns "s" into a full array of length + 2 == 7 elements, copies the elements of "a" from 0 to 5, then appends the values 23 and 29.<br /><br />The problem, as with resizing past the bounds of the original array, is that after the array and the slice part ways, it is no longer possible to modify value types in the original array via the slice (which has now been promoted to a standalone array). This is run-time behavior, hard to predict by statically analyzing (or visually inspecting) the code.<br /><br /><h3>Rolling Your Own Array Slices</h3><br /><br />It is impossible to determine statically whether a D function parameter is an array or a slice by examining the function's code alone. It is up to the function's caller to pass in an array or a slice.<br /><pre><code><br />void f (int[] a) {<br />// ...<br />}<br />int a[] = [1, 2, 3, 5, 7, 9, 11, 13, 17];<br />f(a); // call f with an array argument<br />f(a[2..5]); // call f with a slice argument;<br /></code></pre><br />In some cases you may want to better communicate that your function is intended to work with slices rather than arrays.
You may also want to have better control over the slice's properties. Say for example you want to make sure that a slice is never re-sized.<br /><br />You can accomplish these things by rolling your own ArraySlice. The template struct that was introduced earlier is a good starting point. The signature of "f" can be changed to:<br /><pre><code><br />void f (ArraySlice!(int) a) { // ...<br /></code></pre><br />That's a good start, but the struct is not compatible with a built-in array slice. The following code does not compile:<br /><pre><code><br />struct (T) ArraySlice {<br /> T[] a; // reference to array<br /> int begin; // index where slice begins<br /> int end; // one past index where slice ends<br />}<br /><br />int a[] = [1, 2, 3, 5, 7, 9, 11, 13, 17];<br />ArraySlice!(int) s = { a, 2, 5 };<br />foreach(i; s) { // error, does not compile<br /> writefln("%d", i);<br />}<br /></code></pre><br />You could of course rewrite the foreach loop to use the begin..end range:<br /><pre><code><br />foreach(i; s.begin..s.end) {<br /> writefln("%d", s.a[i]);<br />}<br /></code></pre><br />In addition to being more verbose such code is not very well encapsulated, since it explicitly accesses the struct's public members. If we later decide to factor out ArraySlice into its own module, and make the “a”, “begin”, and “end” members private, the code above will not compile anymore.<br /><br />All that's preventing the compact version of the foreach loop from compiling is that the <span style="font-style: italic;">opApply</span> operator is missing. So let's add one:<br /><pre><code><br />struct (T) ArraySlice {<br /> // ...<br /> int opApply(int delegate(ref int) dg) {<br /> foreach (i;begin..end) {<br /> dg(a[i]);<br /> }<br /> return 0;<br /> }<br />}<br /></code></pre><br />Great! This gets us past the compilation error. The foreach loop now compiles and prints out all the elements in the slice. There is a small and subtle bug in this code though. 
Suppose that instead of printing all elements in the slice, you're doing a linear search, for example:<br /><pre><code><br />foreach(i; s) {<br /> if (i == 5) { // found it!<br /> break;<br /> }<br />}<br /></code></pre><br />Astonishingly enough, this code will not break out of the foreach loop, instead it will continue through all the elements in the slice.<br /><br />The <a href=""></a> website prescribes the behavior of the opApply operator:<br /><blockquote>".<br /></blockquote>The D compiler synthesizes the delegate function from the body of the foreach loop, and the code above is transformed internally to this equivalent form:<br /><pre><code><br />int dg(ref int i) {<br /> if (i == 5) {<br /> return 1;<br /> }<br /> return 0;<br />}<br />s.opApply(&dg);<br /></code></pre><br />The bug in the opApply operator is that the loop should be broken out of when dg returns non-zero:<br /><pre><code><br />int opApply(int delegate(ref int) dg) {<br /> foreach (i; begin..end) {<br /> if (dg(a[i])) break;<br /> }<br /> return 0;<br />}<br /></code></pre><br />Now foreach works correctly. To support foreach_reverse, just add a <span style="font-style: italic;">opApplyReverse</span> member function to the ArraySlice template struct:<br /><pre><code><br />int opApplyReverse(int delegate(ref int) dg) {<br /> foreach_reverse (i; begin..end) {<br /> if (dg(a[i])) break;<br /> }<br /> return 0;<br />}<br /></code></pre><br />What about manipulating the length of the slice? 
Neither of the lines below compiles:<br /><pre><code><br />writefln("%d", s.length);<br />s.length = 100;<br /></code></pre><br />To support the length property, we have to add these methods:<br /><pre><code><br />struct (T) ArraySlice {<br /> // ...<br /> // return the length of the slice<br /> int length() { return end - begin; }<br /><br /> // resize the slice, preventing it from growing past<br /> // the original array's length<br /> void length(int newLength) {<br /> end = begin + newLength;<br /> if (end > a.length) { end = a.length; }<br /> }<br />}<br /></code></pre><br />To prevent resizing the slice, all you have to do is leave the second overload undefined, effectively turning length into a read-only property.<br /><br />Clients of the struct can still set its individual members to inconsistent values. To disallow incorrect usage, the "a", "begin", and "end" members can be made private (and the struct will have to move to its own module, because in D private access is not enforced if the class or struct lives in the same module as the client code).<br /><br />To make the ArraySlice struct even more source-compatible with built-in slices, you can give it an opIndex operator:<br /><pre><code><br />struct (T) ArraySlice {<br /> // ...<br /> T opIndex(size_t i) { return a[i]; }<br />}<br /></code></pre><br />The opIndex operator allows this to work:<br /><code>writefln("%d", s[3]);</code><br /><br />Assignment to array elements is not allowed:<br /><pre><code><br />s[3] = 42; // error, does not compile<br /></code></pre><br />If you want the array elements to be modified via the slice like that, just define<br />opIndexAssign:<br /><pre><code><br />struct (T) ArraySlice {<br /> // ...<br /> void opIndexAssign(size_t i, T val) { a[i] = val; }<br />}<br /></code></pre><br />When you put it all together, the ArraySlice struct will look something like this:<br /><pre><code><br />struct (T) ArraySlice {<br />private:<br /> T[] a; // reference to array<br /> int
begin; // index where slice begins<br /> int end; // one past index where slice ends<br />public:<br /> T opIndex(size_t i) { return a[i]; }<br /> void opIndexAssign(size_t i, T val) { a[i] = val; }<br /> int length() { return end - begin; }<br /> // comment this function out to prevent resizing<br /> void length(int newLength) {<br /> end = begin + newLength;<br /> if (end > a.length) { end = a.length; }<br /> }<br /><br /> // support foreach<br /> int opApply(int delegate(ref int) dg) {<br /> foreach (i;begin..end) {<br /> if (dg(a[i])) break;<br /> }<br /> return 0;<br /> }<br />}<br /></code></pre><br />In conclusion, by rolling your own array slice implementation, the intent of your code becomes clearer, your level of control over it increases, and you can still retain the brevity of the built-in slices.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme ctors in D.NET (Part 2)<br /><br />D allows multiple static constructors per class (all sharing the same signature). For example, the following code is legal:<br /><pre><code><br />version(D_NET)<br />{<br /> import System;<br /> alias Console.WriteLine println;<br />}<br />else<br />{<br /> import std.stdio;<br /> alias writefln println;<br />}<br />class A<br />{<br /> static int i = 42;<br /> static this()<br /> {<br /> println("static A.this 1");<br /> }<br /> static this()<br /> {<br /> println("static A.this 2");<br /> }<br />}<br />void main()<br />{<br /> println(A.i);<br />}<br /></code></pre><br />The program prints:<br /><pre><br />static A.this 1<br />static A.this 2<br />42<br /></pre><br />Because IL does not allow duplicate methods with the same signature, instead of mapping static constructors directly to .cctor methods, my compiler generates one .cctor per class (where needed) that makes function calls to the <span style="font-style:italic;">static this()</span> constructors.
The .cctor is not called if the class is never referenced -- this behavior is different from the native Digital Mars D compiler. If we comment out the one line in main, it will still print the constructor messages in native mode, but not under the .NET compiler.<br /><br />D classes may also have one or more static destructors, as in this example:<br /><pre><code><br />class A<br />{<br /> static int i = 42;<br /> static this() <br /> {<br /> println("static A.this 1");<br /> }<br /> static this() <br /> {<br /> println("static A.this 2");<br /> }<br /> static ~this()<br /> {<br /> println("static A.~this 1");<br /> }<br /> static ~this()<br /> {<br /> println("static A.~this 2");<br /> }<br />}<br /></code></pre><br /><br />Unlike with the class constructors, there is no special IL method to map static destructors to. My compiler supports them with <a href="">AppDomain.ProcessExit</a> event handlers, registered in reverse order of their lexical occurrences. IL allows non-member .cctor methods, and the compiler takes advantage of this feature to synthesize code that registers the static destructors as ProcessExit handlers.<br /><br />It is interesting to observe that the global .cctor <span style="font-weight:bold;">does reference the class</span> when it constructs the event handler delegates:<br /><pre><code><br />.method static private void .cctor()<br />{<br /> // register static dtor 2 as ProcessExit event handler<br /> call class [mscorlib]System.AppDomain [mscorlib]System.AppDomain::get_CurrentDomain()<br /> ldnull<br /> ldftn void 'example.A'::'_staticDtor2'(object, class [mscorlib]System.EventArgs)<br /> newobj instance void [mscorlib]System.EventHandler::.ctor(object, native int)<br /> callvirt instance void [mscorlib]System.AppDomain::add_ProcessExit(class [mscorlib]System.EventHandler)<br /> // register static dtor 1 as ProcessExit event handler<br /> call class [mscorlib]System.AppDomain [mscorlib]System.AppDomain::get_CurrentDomain()<br /> ldnull<br /> ldftn void 'example.A'::'_staticDtor1'(object, class [mscorlib]System.EventArgs)<br /> newobj instance void [mscorlib]System.EventHandler::.ctor(object, native int)<br /> callvirt instance void [mscorlib]System.AppDomain::add_ProcessExit(class [mscorlib]System.EventHandler)<br /> ret<br />}<br /></code></pre><br />This means that the .cctor of the class will be called, even if no user code ever references it.<br /><br />In addition to class static constructors and destructors, D 
also features <span style="font-weight:bold;">module static constructors and destructors</span>. These are expressed as non-member functions with the signature <span style="font-style:italic;">static this()</span> and <span style="font-style:italic;">static ~this()</span>, respectively.<br />For example:<br /><pre><code><br />//file b.d<br />import a;<br />version(D_NET)<br />{<br /> import System;<br /> alias Console.WriteLine println;<br />}<br />else<br />{<br /> import std.stdio;<br /> alias writefln println;<br />}<br /><br />static this()<br />{<br /> println("module B");<br /> map["foo"] = "bar";<br />}<br />static this()<br />{<br /> println("boo");<br />}<br />static ~this()<br />{<br /> println("~boo");<br />}<br /><br />//file a.d<br />version(D_NET)<br />{<br /> import System;<br /> alias Console.WriteLine println;<br />}<br />else<br />{<br /> import std.stdio;<br /> alias writefln println;<br />}<br /><br />string map[string];<br /><br />static this()<br />{<br /> println("module A");<br />}<br />static ~this()<br />{<br /> println("~module A");<br />}<br /><br />void main()<br />{<br /> foreach (k, v; map)<br /> {<br /> version(D_NET)<br /> {<br /> Console.WriteLine("{0} -> {1}".sys, k, v.sys);<br /> }<br /> else<br /> {<br /> writefln("%s -> %s", k, v);<br /> }<br /> }<br />}<br /></code></pre><br />It is noteworthy that regardless of the order in which the two files above are compiled, the resulting program prints the same output:<br /><pre><br />module A<br />module B<br />boo<br />foo -> bar<br />~boo<br />~module A<br /></pre><br />The explanation lies in the D language rules: if a module B imports a module A, the imported module (A) must be statically initialized first (before B).<br /><br />As in the case of static constructors and destructors for classes, the compiler uses the global, free-standing .cctor method to stick calls to module ctors and register ProcessExit events that call the module's static dtors.<br /><br /><br /><span 
style="font-style:italic;">Thanks to <a href="">BCSd</a> for prompting this post with his comment and code sample.</span><div class="blogger-post-footer"></div>The Free Meme Constructors in D.NETThe D programming language features <span style="font-style: italic;">static constructors</span> that are similar to <span style="font-style: italic;">class constructors</span> in C#: they are called automatically to initialize the static fields (shared data) of a class.<br /><br />At the IL level, static constructors are implemented by the special <span style="font-style: italic;">.cctor</span> methods. The experimental compiler for D that I am working on in my virtual garage groups together user-written static constructor code with static field initializers into <span style="font-style: italic;">.cctor</span>s (and I believe that the C# compiler does the same).<br /><pre><code><br />class Example {<br /> static int solution = 42; // assignment is moved inside .cctor<br /> static double pi;<br /><br /> static this() { // explicit static ctor ==> .cctor<br /> pi = 3.14159;<br /> }<br />}<br /></code></pre>The code above produces the same IL as:<br /><pre><code><br />class Example {<br /> static int solution = 42;<br /> static double pi = 3.14159;<br />}<br /></code></pre><br />Also, the compiler synthesizes one class per module to group all "free-standing" global variables (if any).<br /><br />For example, the IL code that is generated out of this D source<br /><pre><code><br />static int x = 42;<br /><br />void main() {<br /> writefln(x);<br />}<br /></code></pre><br />is equivalent to the code generated for:<br /><pre><code><br />class ModuleData {<br /> static int x = 42;<br />}<br />void main() {<br /> writefln(ModuleData.x);<br />}<br /></code></pre><br />except that in the first case the ModuleData class is implicit and not accessible from D code. 
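In IL terms, the synthesized class might look roughly like this (a hand-written sketch; the names are illustrative, not the compiler's actual output):<br /><pre><code><br />.class private 'ModuleData'<br />{<br /> .field public static int32 'x'<br /> .method static private void .cctor()<br /> {<br /> // the global variable's initializer, moved here<br /> ldc.i4 42<br /> stsfld int32 'ModuleData'::'x'<br /> ret<br /> }<br />}<br /></code></pre><br />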
This strategy allows for the initializers of global variables to be moved inside the <span style="font-style: italic;">.cctor</span> of the hidden module data class.<br /><br />IL guarantees that the class constructors are invoked right before any static fields are first referenced. If the compiler flags the class with the <span style="font-weight: bold;">beforefieldinit</span> attribute, then the class constructors are called even earlier, i.e. right when the class is referenced, even if no members of the class are ever accessed (the C# compiler sets the <span style="font-weight: bold;">beforefieldinit</span> attribute on classes without explicit class constructors).<br /><br />Serge Lidin explains all the mechanics in his great book <a href="">Expert .NET 2.0 IL Assembler</a><img src="" alt="" style="border: medium none ! important; margin: 0px ! important;" width="1" border="0" height="1" />, and recommends avoiding beforefieldinit, on grounds that invoking a .cctor is a slow business. I am considering using it though, on the synthesized module data class.<br /><br />In conjunction with a compiler-generated reference to the module data class, the <span style="font-weight: bold;">beforefieldinit</span> attribute will guarantee that the global variables are initialized on the main thread, and will avoid strange race conditions and bugs.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme Teleprompter to BlameI will never run for President of the United States. Not because I wouldn't like to, but because I can't: I am a Naturalized Citizen, one step below the Natural Born One. 
As depressing as this can be, there's a good side to it, too: I can always recant.<br /><br />I have the luxury to lightheartedly declare "Folks, I don't know what I was smoking when I said that <a href="">D structs cannot be implemented as value types in .net</a>", without being afraid of losing any points in any poll.<br /><br />Further research proved that my initial argument, <span style="font-style: italic;">in .net value types do not participate in garbage collection</span>, was... er, irrelevant. That's because Walter Bright's D compiler front-end is smart enough to insert calls to the structs' destructors wherever necessary! Now that's what I call a bright design. It doesn't matter that the CLR does not garbage-collect <span style="font-style: italic;">new</span>-ed value types, because the front-end generates the code that deletes them.<br /><br />I was running some compiler back-end tests for the D language post-blit operator when I realized that copying value types in IL is straightforward; you LOAD the source variable / field / argument and STORE into the destination (boom!) whereas bit-copying managed classes is not as trivial.<br /><br />I did not give up right away. Hanging on to my <span style="font-style: italic;">structs-as-classes</span> implementation, I wrote a run-time helper blitting routine:<br /><pre><code><br /> using System.Runtime.InteropServices;<br /><br /> // assumes src and dest are of the same type<br /> static public void blit(Object dest, Object src)<br /> {<br /> int size = Marshal.SizeOf(src.GetType());<br /> IntPtr p = Marshal.AllocHGlobal(size);<br /> try<br /> {<br /> Marshal.StructureToPtr(src, p, false);<br /> Marshal.PtrToStructure(p, dest);<br /> }<br /> finally<br /> {<br /> Marshal.FreeHGlobal(p);<br /> }<br /> }<br /></code></pre><br />It did not take long for the truth to dawn upon me: Wow, bit-blitting non-value types is a major pain in the (oh) bum (ah). Efficient that code ain't (or should I say ISN'T? 
man do those consonants hurt). Honestly, I am not even sure that code is kosher. Better not to get into a pickle if one can avoid it... so back to the <span style="font-style: italic;">struct-as-value type</span> implementation I am.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme for ProgrammersI <a href="">wrote a while ago</a> about implementing thread support in the D .net compiler. The idea was to generate code that constructs delegates that are passed to the Start method of the <a href="">System.Threading.Thread class</a>. I discussed some details of constructing delegates out of nested functions, and I concluded that my next thread-related task was to implement support for the <a href=""><span style="font-style: italic;">synchronized</span> keyword</a>.<br /><br />Like the postman who goes for a walk in the park after coming home from work, I relax in the after hours by working on pet projects like C++ debuggers and D compilers. So this Saturday morning I sat down to implement the code generation for <span style="font-style: italic;">synchronized</span>. 
The D language accepts two statement forms:<br /><pre><code><span style="font-style: italic;">synchronized</span> ScopeStatement<br /><span style="font-style: italic;">synchronized</span> (Expression) ScopeStatement<br /></code></pre>The former can be easily reduced to the latter by synthesizing a global object and an expression that references it.<br /><br />Here is a sample of D code that illustrates the use of the synchronized keyword:<pre><code><br />import System;<br /><br />class Semaphore {<br /> bool go;<br />}<br />Semaphore sem = new Semaphore;<br /><br />void main() {<br /> void asyncWork() {<br /> while (true) { // busy wait<br /> synchronized(sem) {<br /> if (sem.go) break;<br /> }<br /> }<br /> }<br /> Threading.Thread t = new Threading.Thread(&asyncWork);<br /> t.Start();<br /> synchronized(sem) {<br /> sem.go = true;<br /> }<br /> t.Join();<br />}<br /></code></pre>A <span style="font-style: italic;">synchronized</span> statement can be transformed into the following pseudo-code:<br /><pre><code>object m = Expression();<br />try {<br /> lock(m);<br /> ScopeStatement();<br />}<br />finally {<br /> unlock(m);<br />}<br /></code></pre>Implementing lock / unlock maps naturally to the Enter / Exit methods of the <a href="">System.Threading.Monitor</a> class. That's really all there is to it, generating the method calls is trivial.<br /><br />I was a bit disappointed by how easy the task turned out to be, but on the bright side I had plenty of time left to spend with my kid. I took him to Barnes and Noble to check out the Thomas trains and board books and the computer section, where I found the most helpful book title ever: "C# for Programmers". I guess no lawyer accidentally wandering through the computer book section can claim that he bought the book because the title was misleading. 
I wish that all book titles were so clear: "Finance for Accountants" or "Elmo Board Book for Toddlers".<div class="blogger-post-footer"></div>The Free Meme Perl in the WallThe most successful computer languages out there were born out of concrete problems: Perl in the beginning was nothing more than a Reporting Language; C came out of the need to write OS-es in a portable fashion; PHP emerged out of somebody's need for expressing dynamic web content as server-side macros. C# solves the problem of doing component programming without intricate COM knowledge and 2 PhD-s per developer.<br /><br />Typically, after the problem is solved, and the engineers scratched their itch, wrote the code, and shipped the (working!) products, some rather academic type decides: "Now I am going to redo it the right way" (and that's how UNIX got rewritten as Plan 9, a great OS that nobody uses).<br /><br />Interestingly enough, the redesigned products rarely end up being as successful as the original. People have been trying for years to replace things such as Windows, Office and the C programming language with "better" rewrites, but the market does not seem to care much. If it was good enough the first time around, it got adopted and used. Who cares how neat (or messy) the internal plumbings are?<br /><br />The C and C++ languages are great for doing low level systems programming; they may be harder to use for constructing applications, and I would definitely advise against using C++ for web development. The D programming language is fancying itself as a better C++ and I think that is true in the application development area. But I do not see D as a systems language. I will never write a task scheduler in a garbage-collected language. 
When I write system level stuff, I want the good old WYSIWYG behavior: no garbage collector thread running willy-nilly and no strange array slices (that are like arrays except for when they aren't). And thanks but no thanks, I want no memory fences, no <span style="font-style: italic;">thingamajica</span> inserted with parental care by the compiler to protect me from shooting my own foot on many-core systems. That is the point of systems programming: I want the freedom to shoot myself in the foot.<br /><br />I have been trying (unsuccessfully) to argue with some folks on the digitalmars.d newsgroup that the new 2.0 D language should not worry much about providing support for custom allocation schemes. D is designed <span>to help productivity</span>: it relieves programmers from the grueling tasks of managing memory, and it encourages good coding practices, but a systems language it is not. We already have C, C++ and assembly languages to do that low-level tweak when we need it.<br /><br />Sadly, some of the people involved with the design of D 2.0 are aiming for the moral absolute, rather than focus on shipping something that works well enough. I think it is a bad decision to allow for mixing the managed and unmanaged memory paradigms; it is even worse that there are no separate pointer types to disambiguate between GC-ed and explicitly-managed objects. C++ went that route in its first incarnation, and it wasn't long before people realized that it was really hard to keep track of what objects live on the garbage collected heap and what objects are explicitly managed. 
A new pointer type had to be invented (the one denoted by the caret) to solve the problem.<br /><br />If folks really want to use D to program OS-es and embedded devices and rewrite the code that controls the brakes in their cars, they should at least make a separate D dialect and name it for what it is, <span style="font-style: italic;">Systems D</span>, <span style="font-style: italic;">Embedded D</span> or something like that. The garbage collection and other non-WYSIWYG features should be stripped out from such a dialect.<br /><br />The ambition of making D 2.0 an all-encompassing, moral absolute language may cause 2.0 to never ship, never mind get wide adoption. Perl started out with more modest goals and ended up enjoying a huge mind share.<br /><br />So the <span style="font-style: italic;">ship-it today if it works</span> hacker in me feels like dedicating a song to the language purists:<br /><blockquote>We don't need no education<br />Perl's best language of them all,<br />We don't need no education<br />Thanks a bunch to Larry Wall!<br /></blockquote><div class="blogger-post-footer"></div>The Free Meme Arrays in D.NETThe D Programming Language supports <span style="font-style: italic;">foreach</span> loops over associative arrays.<br /><br />Associative arrays are data structures that look much like "normal" arrays, but the index types are not integers. Here's an example of an array of doubles indexed by strings, expressed in D:<br /><br /><span style="font-family:courier new;">double [string] a = [ "pi" : 3.14159, "e" : 2.718281828, "phi" : 1.6180339 ];</span><br /><br />The D language does not explicitly specify how associative arrays should be implemented. 
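Whatever the underlying implementation, the operations a program can rely on are part of the language itself; a few of the common ones, sketched with the array from above:<br /><pre><code><br />double [string] a;<br />a["pi"] = 3.14159; // insert or update<br />double* p = "pi" in a; // membership test: null if the key is absent<br />a.remove("pi"); // remove an entry<br />writefln("%d", a.length); // number of entries<br /></code></pre><br />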
In C++ associative arrays can be implemented as standard STL <span style="font-family: courier new;">map</span>s, <span style="font-family: courier new;">hash_map</span>s (to be replaced by <span style="font-family: courier new;">unordered_map</span>s in C++0x) or with <a href="">Google's sparse hash map</a>s, to name a few possibilities.<br /><br />In other languages such as Python, associative arrays are called <span style="font-style: italic;">dictionaries</span>. The family of .NET languages takes advantage of the <a href="">System.Collections.Generic.Dictionary</a> class (this is also what the D compiler for .NET does: it implements associative arrays as system dictionaries).<br /><br />D provides an easy way to iterate over an associative array, the <span style="font-style: italic;">foreach</span> keyword. This keyword should be familiar to anyone programming in C#, UNIX shell, or managed C++. Here is an example of how it is used in D:<pre><code><br />foreach (string key, double value; a) {<br /> version(D_NET) {<br /> Console.WriteLine("{0}={1}".sys, key, value);<br /> }<br /> else {<br /> writefln("%s=%f", key, value);<br /> }<br />}<br /></code></pre><br />The types for the key and value arguments can be explicitly specified, but that is not necessary as the compiler can infer them automatically. The <span style="font-style: italic;">foreach</span> line can be re-written more compactly as:<br /><code><br />foreach (key, value; a) {<br /></code><br />Another legal form for the statement, used to iterate over the values (and ignore the keys) is:<br /><code><br />foreach (value; a) {<br /></code><br /><br />The current implementation of the compiler front-end synthesizes a nested function out of the loop's body. 
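Conceptually, that front-end rewrite can be pictured in D-like pseudo-code (the synthesized name is illustrative):<br /><pre><code><br />// foreach (string key, double value; a) { ...body... }<br />// becomes, roughly:<br />int __foreachbody(string key, ref double value)<br />{<br /> // ...body...<br /> return 0; // a non-zero return would mean "break out of the loop"<br />}<br /></code></pre><br />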
The .NET back-end constructs a delegate out of this nested function and its closure; then it "wimps out" and calls a run-time helper written in C#:<pre><code><br /> public class AssocArray<br /> {<br /> public delegate int Callback<V>(ref V value);<br /> public delegate int Callback<K, V>(K key, ref V value);<br /><br /> static public void Foreach<K, V>(Dictionary<K, V> aa,<br /> int unused,<br /> Callback<V> callback)<br /> /// ...<br /> static public void Foreach<K, V>(Dictionary<K, V> aa,<br /> int unused,<br /> Callback<K, V> callback)<br /> /// ...<br /> }<br /></code></pre><br />The generic <span style="font-family: courier new;">Foreach</span> function has two overloads, to accommodate both forms of the foreach statement.<br /><br />D rules do not allow an array to be modified from within the loop, but <span style="font-weight: bold;">the elements of the array can be modified</span> if the value argument has a <span style="font-weight: bold;">ref</span> storage class:<pre><code><br />foreach (key, ref value; a) {<br /> value = 0.0;<br />}<br /></code></pre>C#'s rules are stricter: one can neither modify the collection (by adding / removing elements) nor change the individual elements. 
To work around this restriction, the run-time helper code does two passes over the dictionary that corresponds to the D associative array:<br /><pre><code><br />static public void Foreach<K, V>(Dictionary<K, V> aa,<br /> int unused, Callback<K, V> callback)<br />{<br /> Dictionary<K, V> changed = new Dictionary<K, V>();<br /> foreach (KeyValuePair<K, V> kvp in aa)<br /> {<br /> V temp = kvp.Value;<br /> int r = callback(kvp.Key, ref temp);<br /> if (!kvp.Value.Equals(temp))<br /> {<br /> changed[kvp.Key] = temp;<br /> }<br /> if (r != 0)<br /> {<br /> break;<br /> }<br /> }<br /> foreach (KeyValuePair<K, V> kvp in changed)<br /> {<br /> aa[kvp.Key] = kvp.Value;<br /> }<br />}<br /></code></pre><br />The Callback delegate is constructed from the address of a closure object and a nested <span style="font-style:italic;">foreach</span> function, both synthesized in the compiler. The generated code looks something like this:<br /><pre><code><br /> newobj instance void 'vargs.main.Closure_2'::.ctor()<br /> stloc.s 1 // '$closure3'<br /> ldloc.1 // '$closure3'<br /> ldftn instance int32 'vargs.main.Closure_2'::'__foreachbody1' (float64& '__applyArg0')<br /> // construct Foreach delegate<br /> newobj instance void class [dnetlib]runtime.AssocArray/Callback`1<float64>::.ctor(object, native int)<br /> .line 14<br /> call void [dnetlib]runtime.AssocArray::Foreach<string, float64>(<br /> class [mscorlib]System.Collections.Generic.Dictionary`2<!!0,!!1>,<br /> int32,<br /> class [dnetlib]runtime.AssocArray/Callback`1<!!1>)<br /></code></pre><br /><span style="font-weight:bold;">Edit:</span> After writing this piece I noticed that I forgot to mention one interesting side effect of my implementation: because there is no try / catch around the Callback call in the C# run-time support code, the foreach loop has all-or-nothing transactional semantics.<br /><br />For example, this program has different outputs when compiled with DMD from when it is compiled with my D / .NET 
compiler:<pre><code><br />version(D_NET)<br />{<br /> import System;<br /> import dnet;<br />}<br />else<br />{<br /> import std.stdio;<br />}<br /><br />void main()<br />{<br /> int [string] x = ["one" : 1, "two" : 2, "three" : 3];<br /><br /> try<br /> {<br /> foreach (ref v; x) <br /> {<br /> if (v == 3)<br /> throw new Exception("kaboom");<br /> v = 0; <br /> }<br /> }<br /> catch (Exception e)<br /> {<br /> version(D_NET)<br /> {<br /> Console.WriteLine(e.toString().sys);<br /> }<br /> else<br /> {<br /> writefln("%s", e.toString());<br /> }<br /> }<br /> foreach (k, v; x) <br /> {<br /> version(D_NET)<br /> {<br /> Console.WriteLine("{0}, {1}".sys, k, v);<br /> }<br /> else<br /> {<br /> writefln("%s, %d", k, v);<br /> }<br /> }<br />}<br /></code></pre><br />Under D/.NET it prints:<br /><pre><br />kaboom<br />one, 1<br />two, 2<br />three, 3<br /></pre><br />while the native compilation gives:<br /><pre><br />object.Exception: kaboom<br />two, 0<br />three, 3<br />one, 1<br /></pre><br />It would be very easy to get my compiler to emulate the native behavior, but I kind of like the "transactional" flavor...<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme Conditional CompilationThe D programming language supports <a href="">conditional compilation</a> using version identifiers and version numbers, a solution that is slightly better than the #ifdef, pre-processor driven, way of C/C++ that most of us are used to.<br /><br />When using the .NET compiler for D that I am developing, one will be able to import and take advantage of .NET assemblies. For example the System.Console.WriteLine family of functions may come in handy. 
But such code would not compile when fed to the native Digital Mars D compiler.<br /><br />Conditional compilation and the version identifier D_NET do the trick, like in this example:<br /><pre><code><br />version(D_NET)<br />{<br /> import System;<br /> import dnet;<br />}<br />else<br />{<br /> import std.stdio;<br />}<br /><br />void main()<br />{<br /> int [string] x;<br /><br /> x["one"] = 1;<br /> x["two"] = 2;<br /><br /> foreach (k, v; x)<br /> {<br /> version(D_NET)<br /> {<br /> Console.WriteLine("{0}, {1}".sys, k, v);<br /> }<br /> else<br /> {<br /> writefln("%s, %d", k, v);<br /> }<br /> }<br />}<br /></code></pre><br />So I hacked the front-end of the D for .NET compiler to predefine D_NET.<br /><br />Of course, abusing conditional compilation will yield code as unreadable and hard to grasp as a C++ source littered with #ifdef ... #else ... (or the US tax code).<br /><br />But I am a strong supporter of The Second Amendment of the Internet Constitution: "<span style="font-style: italic;">the right of the People to keep and bear compilers that let them shoot themselves in the foot shall not be infringed</span>".<div class="blogger-post-footer"></div>The Free Meme Strings In the D ChordIn my <a href="">recent interview</a> with the InfoQ technology magazine I was asked about compatibility issues between D and .NET. I replied with a brief description of how array slices in D raise a conceptual incompatibility: <span style="font-style: italic;">System.Array</span> and <span style="font-style: italic;">System.ArraySegment</span> are distinct, unrelated types. 
In D, array slices are indistinguishable from arrays, and this creates the problems that I mentioned in the interview.<br /><br />But there are other incompatibilities between D and .NET that I did not mention because I wanted to keep the interview lean and focused.<br /><br />Take for example the built-in support for strings.<br /><br />The keyword <span style="font-style: italic;">string</span> is shorthand in both IL and C# for <a style="font-style: italic;" href="">System.String</a> (essentially a sequence of Unicode characters in the range U+0000 to U+FFFF).<br /><br />In the D programming language, <span style="font-style: italic;">string</span> is shorthand for <span style="font-style: italic;">invariant char[]</span> and characters are unsigned bytes.<br /><blockquote><span style="font-size:100%;"><br /></span><span style="font-size:85%;"><span style="font-weight: bold;">Side notes</span>: D uses UTF8 to support foreign languages, and also sports the types </span><span style="font-style: italic;font-family:times new roman;font-size:85%;" >wstring</span><span style="font-size:85%;"> (a sequence of 2 byte-wide characters, compatible with Microsoft's Unicode) and </span><span style="font-style: italic;font-family:times new roman;font-size:85%;" >dstring </span><span style="font-size:85%;">(for UTF-32 strings). Wide and double-wide (UTF-32) string literals are denoted by the "w" and "d" respective suffixes, as in: "Hello"w, "Good Bye"d (UTF-32 dchar works in the native D compiler, but is not currently supported in my .NET compiler).</span></blockquote><span style="font-size:85%;"><br /></span>When a variable of type <span style="font-style: italic;">string</span> is encountered in a D source, the compiler emits a corresponding IL variable with the type <span style="font-style: italic;">unsigned int8[]</span>.<br /><br />In IL there is a special instruction, <span style="font-style: italic;">ldstr</span>, for loading string literals on the evaluation stack. 
This code<br /><pre><code>ldstr "Hello"<br /></code></pre>loads a "Hello" <span style="font-style: italic;">[mscorlib]System.String</span>. If this literal is to be stored into a variable (say "x"), then my compiler will insert conversion code that looks somewhat like this:<br /><code></code><pre><br />call class [mscorlib]System.Text.Encoding<br /> [mscorlib]System.Text.Encoding::get_UTF8()<br />ldstr "Hello"<br />callvirt instance uint8[]<br /> [mscorlib]System.Text.Encoding::GetBytes(string)<br /><br />stloc 'x' // store byte array into variable x<br /></pre><br />For the cases where a D string (array of bytes) has to be converted to a <span style="font-style: italic;">System.String</span> I provide an explicit string property, called <span style="font-style: italic;">sys</span>, with the following D prototype:<br /><code><br />static public String sys(string x);<br /></code><br />The D programmer would write something like this:<br /><code><br />import System;<br />// ... snip ...<br />string s = "Hello .NET";<br />Console.WriteLine(s.sys);<br /></code><br />For string literals, the call to the <span style="font-style: italic;">sys</span> function can be elided, and generates straightforwardly:<code><br />ldstr "Hello .NET"<br />call void [mscorlib]System.Console::'WriteLine' (string)<br /></code><br />Matters get more interesting when we consider associative arrays. D offers a great convenience to programmers by supporting associative arrays directly in the language, for example<br /><code><br />int [string] dict;<br /><br />dict["one"] = 1;<br />dict["two"] = 2;<br />// ...<br /></code><br />introduces an array of integers indexed by strings. By contrast, in other languages such data structures are implemented "externally" in a library; in C++ for example, std::map<std::string, int> is implemented in the <a href="">STL</a>; the C# equivalent of an associative array is the <a href="">System.Collections.Generic.Dictionary</a>. 
My friend and colleague Ionut Gabriel Burete contributed an implementation of associative arrays in the D compiler for .NET using exactly that class.<br /><br />An associative array / dictionary with string keys is an interesting case, because <span style="font-style: italic;">System.String::Equals</span> does the right thing out of the box, namely performs a lexicographical comparison of two strings; <span style="font-style: italic;">System.Array::Equals</span> however simply compares object references, it does not iterate over and compare elements. This means that a <span style="font-style: italic;">Dictionary(string, int) </span>will behave as you expect, but if the key is of the <span style="font-style: italic;">unsigned int8[]</span> type you may be in for a surprise.<br /><br />For this reason, I put a hack in the compiler back-end: for the case of associative arrays I break away from representing D strings as byte arrays in .NET, and use <span style="font-style: italic;">System.String </span>instead, which works great (or so I thought up until I ran into the problem of generating the code for a <span style="font-style: italic;">foreach</span> iteration loop).<br /><br />For D code such as:<br /><pre><code><br />import System;<br />int [string] x;<br />x["one"] = 1;<br />x["two"] = 2;<br />// ...<br />foreach (k, v; x) {<br /> Console.WriteLine("{0}, {1}", k, v);<br />}<br /></code></pre><br />the compiler synthesizes a function that corresponds to the body of the loop, and binds the key and value ("k" and "v" in the code snippet) to local variables in that function (this happens all inside the front-end).<br /><br />The back-end must be able to detect the variables bound to <span style="font-style: italic;">foreach</span> arguments and reconcile the data types where necessary. 
In the example above the type of the "k" variable in the IL will thus be <span style="font-style: italic;">System.String</span>, and not <span style="font-style: italic;">unsigned int8[]</span>.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme Closure...A diligent reader commented on my <a href="">previous post</a> that the implementation of nested functions in D for .NET was buggy for multi-threaded programs.<br /><br />Indeed, in the code below <code>asyncWork()</code> never returned. That's because a copy by-value of the variable <code>go</code> was used in the closure.<br /><pre><code><br />void main()<br />{<br /> bool go;<br /> <br /> void asyncWork()<br /> {<br /> while (!go)<br /> {<br /> //busy wait<br /> }<br /> }<br /> Threading.Thread t = new Threading.Thread(&asyncWork);<br /> t.Start();<br /><br /> go = true;<br />}<br /></code></pre><br />I am trying to avoid generating unverifiable code in this compiler project: IL does not allow managed pointers as fields in a class; ILASM accepts unmanaged pointers but that yields unverifiable code. My first instinct for closures was to use a copy for all the referenced variables, but as observed by my reader that approach did not work for multi-threaded programs.<br /><br />One way to solve the problem is to use unmanaged pointers in the closure; under this implementation the example above runs correctly. 
There may be at least another solution: wrap each referenced variable into an object, and have both the nested function and the surrounding context share the object reference; I found it too convoluted and pursued the unmanaged-pointers route instead.<br /><br />This is how the generated IL looks for the D code in the example:<br /><pre><code><br />.module 'example'<br />.custom instance void [mscorlib]System.Security.UnverifiableCodeAttribute::.ctor() = ( 01 00 00 00 )<br /><br /><br />//--------------------------------------------------------------<br />// main program<br />//--------------------------------------------------------------<br />.method public hidebysig static void _Dmain ()<br />{<br /> .entrypoint<br /> .maxstack 3<br /> .locals init (<br /> [0] bool pinned 'go',<br /> [1] class [mscorlib]System.Threading.Thread 't',<br /> [2] class example.main.closure1 '$closure3'<br /> )<br /> newobj instance void example.main.closure1::.ctor()<br /> stloc.s 2 // '$closure3'<br /> ldloc.2 // '$closure3'<br /> ldloca 0 // 'go'<br /> stfld bool* 'example.main.closure1'::go2<br /> ldloc.2 // '$closure3'<br /> dup<br /> ldvirtftn instance void example.main.closure1::'asyncWork' ()<br /> newobj instance void class [mscorlib]System.Threading.ThreadStart::.ctor(object, native int)<br /> newobj instance void [mscorlib]System.Threading.Thread::.ctor (ThreadStart)<br /> stloc.s 1 // 't'<br /> ldloc.1 // 't'<br /> callvirt instance void [mscorlib]System.Threading.Thread::'Start' ()<br /> ldc.i4 1<br /> stloc.s 0 // 'go'<br /> ret<br />}<br /><br />.class private auto example.main.closure1 extends [dnetlib]core.Object<br />{<br /><br /> .method public virtual newslot hidebysig instance void 'asyncWork' ()<br /> {<br /> .maxstack 2<br />L1_example:<br /> ldarg.0<br /> ldfld bool* 'example.main.closure1'::go2<br /> ldind.i1<br /> ldc.i4 0<br /> beq L1_example<br /> ret<br /> }<br /> .field public bool* go2<br /> // default ctor, compiler-generated<br /> .method public hidebysig instance void .ctor()<br /> {<br /> ldarg.0<br /> call instance void [dnetlib]core.Object::.ctor()<br /> ret<br /> }<br />} // end of example.main.closure1<br /></code></pre><br />Now there is only one more <em>small</em> problem to address: synchronization between threads...<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme Functions and DelegatesMy previous post missed one aspect of delegates in D: nested functions. Walter Bright gave me this example:<br /><pre><br />int delegate() foo(int i)<br />{<br /> int bar() { return i; }<br /> return &bar;<br />}<br /></pre><br />Function <em>bar</em> is nested inside <em>foo</em>; <em>foo</em> wraps <em>bar</em> into a delegate which is returned. My blog post is guilty of overlooking this use case for delegates; yet my compiler implementation is innocent: the example compiles and runs correctly.<br /><br />The code example may look like a new use case at first, but is in fact similar to making a delegate from an object instance and a method:<br /><pre><br />class Foo {<br /> int i;<br /> int bar() { return i; }<br />}<br />...<br />Foo f = new Foo;<br />int delegate() dg = &f.bar;<br /></pre><br />The reason is that there is an invisible object in the nested function case. In the D programming language, nested functions have access to the surrounding lexical scope (note how function <em>bar</em> uses <em>i</em> which is declared as a parameter of <em>foo</em>); the .NET D compiler internally represents the lexical <strong>context</strong> of the nested function as an object. The fields of the context object are shallow copies of the variables in the "parent" scope. The IL class declaration for the context is synthesized by the compiler, which also instantiates the context.
The context is populated on the way in (before calling the nested function) and used to update the variables in the parent scope on the way out (after the call has completed).<br /><br />The constructor of a delegate object takes two parameters: an object reference and a pointer to a function; in the case of nested functions, the first parameter that the compiler passes under the hood is the context object. This is why constructing a delegate from a nested function is not different from using an object and one of its methods.<br /><br />What if the nested function is declared inside a class method, you ask? In this case there is no need to synthesize a class declaration to model the context of the nested call. The class to which the method belongs is augmented with hidden fields that shadow the variables in the parent scope.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme in D for .NETThis past weekend I typed "Joe Newman" in <a href="">Pandora</a> and sat down for a couple of hours to implement delegates in my .NET back-end for the D compiler.<br /><br />I began by studying the documentation on MSDN and I noticed some differences in the way delegates work in .NET and D.<br /><br />In .NET (and C#) delegates are objects that wrap pointers to functions so that they can be manipulated and invoked safely. The functions may be either standalone or members of a class. In D, the concept of delegates applies only to member functions. Delegates may be called asynchronously in .NET (I am not aware of a similar feature in the D programming language). The concept of delegates is thus simpler in D.<br /><br />The implementation that I came up with is straightforward: classes that derive from [mscorlib]System.MulticastDelegate are generated for each delegate type.
The classes are sealed and each has a virtual Invoke method that matches the signature of the D delegate.<br /><br />For the following D code snippet<br /><pre><br />class Test<br />{<br /> void fun(int i)<br /> { ...<br /> ...<br /> }<br />}<br />Test t = new Test;<br />void delegate(int) dg = &t.fun; <br /></pre><br />the generated IL looks like this:<br /><pre><br />.class sealed $Delegate_1 extends [mscorlib]System.MulticastDelegate<br />{<br /> .method public instance void .ctor(object, native int) runtime managed {}<br /> .method public virtual void Invoke(int32) runtime managed {}<br />}<br />...<br />...<br />.locals init (<br /> [0] class Test 't',<br /> [1] class $Delegate_1 'dg'<br /> )<br />newobj instance void delegate.Test::.ctor ()<br />stloc.s 0 // 't'<br /><br />ldloc.0 // 't'<br />dup<br />ldvirtftn instance void delegate.Test::'fun' (int32 'i')<br />newobj instance void class $Delegate_1::.ctor(object, native int)<br />stloc.1<br /></pre><br />One small (and annoying) surprise that I had was that although <a href="">the IL standard</a> contains code samples with user-defined classes derived directly from [mscorlib]System.Delegate, such code did not pass PEVERIFY and, more tragically, crashed at run-time. The error message ("Unable to resolve token", or something like that) was not helpful; but the ngen utility dispelled the confusion by stating bluntly that my class could not inherit System.Delegate directly. Replacing System.Delegate with System.MulticastDelegate closed the issue.<br /><br />Once I got delegates to work for class methods, I realized that the code can be reused to support D pointers to functions as well. In D, pointers to functions are a different concept from delegates; in .NET however, a delegate can be constructed from a standalone function by simply passing a null for the object in the constructor.
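The object-plus-function-pointer pairing can be sketched outside IL as well. The Python below is purely illustrative (the `Delegate` class is hypothetical, not part of the compiler or of .NET): a delegate holds a target object and a function, and a standalone function is simply a delegate whose target is null:

```python
class Delegate:
    """Hypothetical sketch of a .NET-style delegate: an (object, function) pair."""
    def __init__(self, target, fn):
        self.target = target
        self.fn = fn

    def invoke(self, *args):
        if self.target is None:
            # standalone function: null object reference
            return self.fn(*args)
        # member function: pass the target as 'this'
        return self.fn(self.target, *args)

class Test:
    def __init__(self):
        self.i = 42
    def fun(self):
        return self.i

t = Test()
dg = Delegate(t, Test.fun)        # delegate from object + method
fp = Delegate(None, lambda: 13)   # "function pointer" as a delegate
print(dg.invoke(), fp.invoke())   # 42 13
```

The `None` target is the sketch's analogue of pushing a null object reference before the `newobj` on the delegate constructor.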
It is trivial for the compiler to generate code that instantiates .NET delegates in lieu of function pointers.<br /><br />One nice side-effect of representing pointers to functions as delegates is that they can be aggregated as class members, unlike pointers to other data types <a href=""> that cannot be aggregated</a> as struct or class fields (an IL-imposed restriction for managed pointers).<br /> <br />I hope that one day D decides to support asynchronous delegate calls. I have yet to imagine the possibilities for asynchronous, pure methods. <br /><br />Until then, the .NET back-end is moving along getting closer and closer to a public release.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme ConstructorsThe D programming language allows a constructor of a class to call another constructor of the same class, for the purpose of sharing initialization code. This feature is called "delegating constructors"; it is also present in C# and in the emerging <a href="">C++ 0x</a>.<br /><br />C#'s syntax for delegating constructors resembles the initializer lists in C++, and strictly enforces that the delegated constructor is called before any other code in the body of the caller constructor; the feature is masterfully explained in <a href="">More Effective C#: 50 Specific Ways to Improve Your C# (Effective Software Development Series)</a><img src="" width="1" height="1" border="0" alt="" style="border:none !important; margin:0px !important;" />.<br /><br />D is more flexible, a constructor can be called from another constructor's body pretty much like any other "regular" method, provided that some <a href="">simple rules</a> are observed (for example, it is not permitted to call a constructor from within a loop).<br /><br />A D compiler must detect constructor delegation and ensure that some initialization code is not executed more than once. 
Let's consider an example:<br /><pre><br />class Example<br />{<br /> int foo = 42;<br /> int bar;<br /><br /> this()<br /> { <br /> bar = 13; <br /> }<br /> this(int i)<br /> {<br /> foo = i;<br /> this();<br /> }<br />}<br /></pre><br />In the first constructor, before the field <em>bar</em> is assigned the value 13, some "invisible" code executes: first, the constructor of the base class is invoked. The <em>Example</em> class does not have an explicit base; but in D, similar to Java and C#, all classes have an implicit root Object base. It is as if we wrote:<pre><br />class Example : Object<br />{ ...<br />}<br /></pre><br />After generating the call to Object's constructor, the compiler generates the code that initializes <em>foo</em> to 42. The explicit assignment as written by the programmer executes afterwards. <br /><br />The compiler must be careful so that the initialization steps described above happen only once in the second constructor. This is not simply a matter of efficiency; it is, more importantly, a matter of correctness. If calling the base Object constructor and the initialization of foo were generated blindly <strong>inside the body of each constructor</strong>, then the following would happen in the second constructor's case:<br /><ol><br /><li>Object's ctor is invoked (compiler generated)</li><br /><li>foo = 42 (compiler generated)</li><br /><li>foo = i (programmer's code)</li><br /><li>constructor delegation occurs (programmer's code), which means that:</li><br /><li>Object's ctor is invoked</li><br /><li>foo = 42 (compiler generated)</li><br /></ol><br />This is obviously incorrect, since it leaves the <em>Example</em> object in a different state than the programmer intended.<br /><br />Such a scenario is easily avoided by a native compiler.
Object creation is translated to several distinct steps:<br /><ol><br /><li>memory for the object is allocated</li><br /><li>invocation of base ctor is generated</li><br /><li>initializers are generated (this is where foo = 42 happens)</li><br /><li>constructor as written by programmer is invoked</li><br /></ol><br />The important thing to note is that in the native compiler's case the compiler leaves the constructors alone, as written by the programmer, and inserts its magic "pre-initialization" steps in between the memory allocation and constructor invocation.<br /><br />When writing a compiler back-end for .NET things are slightly different: the creation of an object is expressed in one compact, single line of MSIL (Microsoft Intermediate Language) assembly code:<br /><pre><br />newobj <constructor call><br /></pre><br />In our example, that would be<br /><pre><br />newobj void class Example::.ctor()<br /></pre><br />and<br /><pre><br />newobj void class Example::.ctor(int32)<br /></pre><br />respectively. So the compiler-generated magic steps of calling the base constructor, etc. have to happen <strong>inside</strong> the constructor body. To prevent the erroneous scenario of double-initialization from happening, I had to generate a hidden, "guard" Boolean field for classes that use constructor delegation. The variable is set when entering a constructor's body; it is checked inside each constructor before calling the base constructor and running the field initializers.
Here's what the generated IL code looks like:<pre><br />//--------------------------------------------------------------<br />// ctor.d compiled: Sun Feb 08 23:04:49 2009<br />//--------------------------------------------------------------<br />.assembly extern mscorlib {}<br />.assembly extern dnetlib {}<br />.assembly 'ctor' {}<br /><br />.module 'ctor'<br /><br /><br />.class public auto ctor.Example extends [dnetlib]core.Object<br />{<br /> .field public int32 foo<br /> .field public int32 bar<br /> .method public hidebysig instance void .ctor ()<br /> {<br /> .maxstack 3<br /> ldarg.0<br /> ldfld bool 'ctor.Example'::$in_ctor<br /> brtrue L0_ctor<br /> ldarg.0<br /> call instance void [dnetlib]core.Object::.ctor()<br /> ldarg.0<br /> ldc.i4 42<br /> stfld int32 'ctor.Example'::foo<br />L0_ctor:<br /> ldarg.0 // 'this'<br /> ldc.i4 13<br /> stfld int32 'ctor.Example'::bar<br /> ret<br /> }<br /> .method public hidebysig instance void .ctor (int32 'i')<br /> {<br /> .maxstack 3<br /> ldarg.0<br /> call instance void [dnetlib]core.Object::.ctor()<br /> ldarg.0<br /> ldc.i4 42<br /> stfld int32 'ctor.Example'::foo<br /> ldarg.0 // 'this'<br /> ldarg.1 // 'i'<br /> stfld int32 'ctor.Example'::foo<br /> ldarg.0 // 'this'<br /> ldc.i4 1<br /> stfld bool 'ctor.Example'::$in_ctor<br /> ldarg.0<br /> call instance void ctor.Example::.ctor ()<br /> ret<br /> }<br /> .field bool $in_ctor<br />} // end of ctor.Example<br /></pre><br />As a side note, in the second constructor's case a small redundancy still exists: <em>foo</em> is assigned 42 only to be set to another value right away. I am hoping that this isn't much of an issue if the JIT engine detects it and optimizes it out.
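The guard-field scheme can be mimicked in any language where initializers must run inside the constructor body. A Python sketch of the same lowering (the `_in_ctor` and `_pre_init` names are my own, illustration only):

```python
class Example:
    def _pre_init(self):
        # compiler-generated prologue: base ctor call + field initializers,
        # skipped when the hidden guard shows a constructor is already running
        if getattr(self, "_in_ctor", False):
            return
        self._in_ctor = True
        self.foo = 42   # corresponds to 'int foo = 42;'
        self.bar = 0

    def __init__(self, i=None):
        self._pre_init()
        if i is None:         # this()
            self.bar = 13
        else:                 # this(int i)
            self.foo = i
            self.__init__()   # delegate to this(); the prologue is skipped

e = Example(7)
print(e.foo, e.bar)   # 7 13: delegation did not reset foo back to 42
```

Without the guard, the delegated call would rerun the prologue and clobber `foo` with 42, exactly the double-initialization scenario described above.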
I'd be happy to hear any informed opinions.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme Over STL Code<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="cursor: pointer; width: 400px; height: 320px;" src="" alt="" id="BLOGGER_PHOTO_ID_5297933972183716562" border="0" /></a><br /><br />When debugging C++ code written using the Standard Template Library (STL) it is not unusual to find yourself stepping through STL code. Most of the time, this is not very useful: the STL implementation typically comes bundled with the C++ compiler, and it has been thoroughly tested by the vendor; it is unlikely that the bug you are after is caused by the STL.<br /><br />So when a statement such as <code>myVector.push_back(x)</code> is encountered while tracing with the debugger, you normally want to step <strong>over</strong> it, not <strong>into</strong> it. Most debuggers offer a "step over" and a "step into" function. So you would choose "step over". <br /><br />But how about this? You want to debug a routine named <code>my_func(size_t containerSize)</code> and want to step into the body of <code>my_func</code> when this statement is hit: <code>my_func(myVector.size())</code>. If you select "step into", the debugger will first take you into the guts of STL's <code>vector<T>::size()</code> implementation before stepping into <code>my_func</code>.<br /><br />The ZeroBUGS debugger allows you to avoid such annoyances. Once inside size(), you can right-click and select to "Always step over..." that function, all functions in that file, or all files in the same directory. The debugger will remember your option, and you don't have to see the guts of size(), or any other vector function, or any other STL function, respectively. <br /><br />The functionality can be used not just with the STL but with any code.
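The bookkeeping behind "Always step over..." amounts to a lookup at three granularities. A minimal Python sketch (hypothetical function and names, not ZeroBUGS code):

```python
import os.path

def should_step_over(func, path, funcs=frozenset(), files=frozenset(), dirs=frozenset()):
    """Decide whether the debugger should skip stepping into 'func',
    defined in source file 'path', given three blacklists: individual
    functions, whole files, and whole directories."""
    return (func in funcs
            or path in files
            or os.path.dirname(path) in dirs)

# blacklisting the directory skips every STL header that lives in it
print(should_step_over("std::vector<int>::size",
                       "/usr/include/c++/4.3/bits/stl_vector.h",
                       dirs={"/usr/include/c++/4.3/bits"}))  # True
```

The directory level is the interesting one: a single entry covers the whole STL implementation, which is why blacklisting "all files in the same directory" is usually the option you want.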
If you later change your mind, the "Manage" menu allows you to remove functions, files or directories from the step-into blacklist.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme Destruct a D StructI wrote <a href=""><span style="text-decoration: underline;">a while ago</span></a> about similarities between D and .NET (and implicitly C#). My interest in mapping D features to .NET is driven by a research project that I took on a few months ago: a D 2.0 language compiler for .NET (<a href="">D 2.0 is a branch version of D that includes experimental features</a>). I was mentioning how in both D and C# structs are lightweight, value types.<br /><br />After working on struct support in more detail, I have come to the realization that D structs cannot be implemented as .NET value type classes. Rather, they have to be implemented as reference type classes.<br /><br />The short explanation is that while in IL value classes do not participate in garbage collection, D expects the GC to reap structs after they are no longer in use.<br /><br />Interestingly enough, value types may be newobj-ed (not just created on the stack).<br /><br />We can use a simple example to demonstrate the difference between value classes and reference classes. 
If we compile the following program using the IL assembler (ILASM) and run it, nothing gets printed on the screen:<br /><code><pre><br />.assembly extern mscorlib {}<br />.assembly 'test' {}<br /><br />.class public value auto Test<br />{<br /> .field public int32 i<br /><br /> .method public void .ctor()<br /> {<br /> ret<br /> }<br /> .method virtual family void Finalize()<br /> {<br /> ldstr "finalizing..."<br /> call void [mscorlib]System.Console::WriteLine(string)<br /> ret<br /> }<br />}<br />//--------------------------------------------------------------<br />// main program<br />//--------------------------------------------------------------<br />.method public static void main ()<br />{<br /> .entrypoint<br /> .locals init (<br /> <strong>class</strong> Test t<br /> )<br /> newobj instance void Test::.ctor()<br /> stloc 't'<br /> ret<br />}<br /></pre><br /></code><br />But if we change the declaration of the Test class from a value type to a class, like this:<br /><code><pre><br />.class public auto Test<br /></pre></code><br />we see "finalizing..." printed, a confirmation that the destructor (the Finalize method) is being invoked by the garbage collector. All it takes is removing "value" from the declaration.<br /><br />In IL, value types have no self-describing type information attached. I suspect that the reason they are not garbage collected is that, without type information, the system cannot possibly know which (virtual) Finalize method to call (note that although C# structs are implemented as sealed value classes, "sealed" and "value" are orthogonal).<br /><br />D supports the <a href="">contract programming</a> paradigm, and <a href="">class invariants</a> are one of its core concepts.<br /><br />The idea is that the user can write a special method named "invariant", which tests that certain properties of a class or struct hold.
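What the compiler automates can be approximated by hand. A Python sketch of the invariant idea (my own illustration, not D's actual mechanism): wrap each public method so a user-supplied `invariant` predicate runs after construction and around every call:

```python
def with_invariant(cls):
    # wrap every public method so that cls.invariant() is probed
    # before and after each call
    def checked(fn):
        def wrapper(self, *args, **kwargs):
            self.invariant()
            try:
                return fn(self, *args, **kwargs)
            finally:
                self.invariant()
        return wrapper

    for name, member in list(vars(cls).items()):
        if callable(member) and not name.startswith("_") and name != "invariant":
            setattr(cls, name, checked(member))

    original_init = cls.__init__
    def __init__(self, *args, **kwargs):
        original_init(self, *args, **kwargs)
        self.invariant()          # probe after construction
    cls.__init__ = __init__
    return cls

@with_invariant
class Date:
    def __init__(self, day):
        self.day = day
    def invariant(self):
        assert 1 <= self.day <= 31
    def next_day(self):
        self.day += 1

d = Date(5)
d.next_day()
print(d.day)   # 6; Date(0) would fail the invariant right after construction
```

The sketch omits the destruction-time probe, which, as discussed above, is exactly the part that depends on destructors actually running.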
In debug mode, the D compiler inserts "probing points" throughout the lifetime of the class (or struct), ensuring that this function is automatically called: after construction, before and after execution of public methods, and <strong>before destruction</strong>. <br /><br />The natural mechanism for implementing the last statement is to generate a call to the invariant method at the top of the destructor function body. But if the destructor is never called, then we've got a problem.<br /><br />So having destructors work correctly is not just a matter of collecting memory after the struct expires, but it is also crucial to contract programming in D.<br /><br />Struct assignment and passing structs to and from functions may become heavier-weight in D.NET than in the native Digital Mars D compiler by implementing structs as reference-type classes (albeit this is something that I have to measure), but it is necessary in order to support important D language features.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme night, at the monthly <a href="">NWCPP</a> meeting Walter Bright gave a <a href="">presentation</a> on meta-programming using the D language. Once again, D put C++ to shame.<br /><br />Because of transportation arrangements I could not accompany Walter and co. to the watering hole after the lecture. Instead I went home and decided to test how my D.NET work-in-progress compiler handles templates, and what kind of code it generates.<br /><br />I picked a variadic template for my test, which computes the maximum of an arbitrarily long list of numbers (adapted from a version written by Andrei Alexandrescu):<br /><br /><pre>import System;<br /><br />auto max(T1, T2, Tail...)(T1 first, T2 second, Tail args)<br />{<br /> auto r = second > first ? 
second : first;<br /> static if (Tail.length == 0) {<br /> return r;<br /> }<br /> else {<br /> return max(r, args);<br /> }<br />}<br /><br />void main()<br />{<br /> uint k = 42;<br /> auto i = max(3, 2, k, 2.5);<br /> Console.WriteLine(i);<br />}<br /></pre><br /><br />The program above prints 42 (of course), and here's how the generated IL looks like:<br /><br /><pre><br />//--------------------------------------------------------------<br />// max.d compiled: Thu Jan 22 19:38:26 2009<br />//--------------------------------------------------------------<br />.assembly extern mscorlib {}<br />.assembly extern dnetlib {}<br />.assembly 'max' {}<br /><br />.module 'max'<br /><br />//--------------------------------------------------------------<br />// main program<br />//--------------------------------------------------------------<br />.method public hidebysig static void _Dmain ()<br />{<br />.entrypoint<br />.maxstack 4<br />.locals init (<br />[0] unsigned int32 'k',<br />[1] float64 'i'<br />)<br />ldc.i4 42<br />stloc.s 0 // 'k'<br />ldc.i4 3<br />ldc.i4 2<br />ldloc.0 // 'k'<br />ldc.r8 2.5<br />call float64 _D3max16__T3maxTiTiTkTdZ3maxFiikdZd (<br /> int32 'first', int32 'second', unsigned int32, float64)<br />stloc.s 1 // 'i'<br />ldloc.1 // 'i'<br />call void [mscorlib]System.Console::'WriteLine' (float64)<br />ret<br />}<br />.method public hidebysig static float64 _D3max16__T3maxTiTiTkTdZ3maxFiikdZd (<br /> int32 'first', int32 'second', unsigned int32, float64)<br />{<br />.maxstack 4<br />.locals init (<br />[0] int32 'r'<br />)<br />ldarg.1 // 'second'<br />ldarg.0 // 'first'<br />bgt L0_max<br />ldarg.0 // 'first'<br />br L1_max<br />L0_max:<br />ldarg.1 // 'second'<br />L1_max:<br />stloc.s 0 // 'r'<br />ldloc.0 // 'r'<br />ldarg.2 // '_args_field_0'<br />ldarg.3 // '_args_field_1'<br />call float64 _D3max14__T3maxTiTkTdZ3maxFikdZd (<br />int32 'first', unsigned int32 'second', float64)<br />ret<br />}<br />.method public hidebysig static 
float64 _D3max14__T3maxTiTkTdZ3maxFikdZd (<br /> int32 'first', unsigned int32 'second', float64)<br />{<br />.maxstack 3<br />.locals init (<br />[0] unsigned int32 'r'<br />)<br />ldarg.1 // 'second'<br />ldarg.0 // 'first'<br />conv.u4<br />bgt L2_max<br />ldarg.0 // 'first'<br />conv.u4<br />br L3_max<br />L2_max:<br />ldarg.1 // 'second'<br />L3_max:<br />stloc.s 0 // 'r'<br />ldloc.0 // 'r'<br />ldarg.2 // '_args_field_0'<br />call float64 _D3max12__T3maxTkTdZ3maxFkdZd (unsigned int32 'first', float64 'second')<br />ret<br />}<br />.method public hidebysig static float64 _D3max12__T3maxTkTdZ3maxFkdZd (<br /> unsigned int32 'first', float64 'second')<br />{<br />.maxstack 2<br />.locals init (<br />[0] float64 'r'<br />)<br />ldarg.1 // 'second'<br />ldarg.0 // 'first'<br />conv.r8<br />bgt L4_max<br />ldarg.0 // 'first'<br />conv.r8<br />br L5_max<br />L4_max:<br />ldarg.1 // 'second'<br />L5_max:<br />stloc.s 0 // 'r'<br />ldloc.0 // 'r'<br />ret<br />}<br /></pre><br /><strong>Edit:</strong> One more reason for loving D templates: pasting D code into HTML does not require replacing angular brackets with &lt; and &gt; respectively!<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme and D-perI am very excited about the D for .NET compiler project because of all the things I am learning. Books on my lab's desk these days are: <a href="">Compiling for the .NET Common Language Runtime (CLR)</a><img src="" alt="" style="border: medium none ! important; margin: 0px ! important;" border="0" width="1" height="1" />, <a href="">Distributed Virtual Machines: Inside the Rotor CLI</a><img src="" alt="" style="border: medium none ! important; margin: 0px ! important;" border="0" width="1" height="1" />, <a href="">Concepts of Programming Languages (8th Edition)</a><img src="" alt="" style="border: medium none ! important; margin: 0px ! 
important;" border="0" width="1" height="1" />, <a href="">The Dragon Book</a><img src="" alt="" style="border: medium none ! important; margin: 0px ! important;" border="0" width="1" height="1" />.<br /><br />As I am digging deeper and deeper I am discovering interesting design challenges.<br /><br /><span style="font-size:130%;">1. Enum</span><br /><br />In .NET, the base class for enumerated types is System.Enum, which does not allow non-integral- (nor char-) based types.<br /><br />D allows strings in enumerated types, like so:<br /><code><br /></code><pre class="d_code"><span class="d_keyword">enum</span> : string<br />{<br /> A = <span class="d_string">"hello"</span>,<br /> B = <span class="d_string">"betty"</span><br />}<br /></pre><br />Possible solutions: make D.NET's enums based on [mscorlib]System.Enum and forbid non-integrals (current implementation), or make my own [dnetlib]core.Enum base class. The problem with the latter is that it would preclude inter-operation with other languages. Walter Bright's suggestion (it is so clever that I wish I had come up with it myself) is to use a combination of both solutions: generate structures derived from System.Enum when possible, and base them off a custom class when not. The D .NET compiler will split enums into two groups - those that are integral types, and those that are not, plus the "char" case. This allows interoperability without crippling language features.<br /><br /><span style="font-size:130%;">2. Strings</span><br /><br />D strings are UTF-8; System.String is UTF-16. My attempt to cleverly use System.String under the hood (rather than represent D strings as byte arrays) created way more complications than I initially thought, and prompted a major refactoring effort that ate up almost my entire weekend.<br /><br />An interesting wrinkle is that in D.NET associative arrays are implemented using Dictionary objects under the hood, which use the Equals method to compare keys.
For System.String, Equals does a lexicographical comparison as one would normally expect; for System.Array, the implementation of Equals simply compares object references.<br /><br />D strings are now represented as arrays of bytes; in order to make associative arrays work correctly, extra work had to be done.<br /><br /><span style="font-size:130%;">3. Pointers</span><br /><br />Can't do pointers as members of classes or elements in an array.<br /><br />This stems from a restriction in the IL: cannot have managed pointers as class fields or array elements.<br /><br />Interesting consequences:<br />3.a) in D.NET a nested method cannot access variables of pointer type in the surrounding lexical context (because the implementation constructs a delegate under the hood, and the object part has all the accessed variables copied as its members)<br /><br />3.b) can't pass pointers to variadic functions (because I send the variable argument list in as an array -- for compatibility with System.Console.WriteLine)<br /><br />How severe are these limitations? Are there any reasonable workarounds?
I guess I'll have to dig D-per to find out.<div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme AwayThe D programming language does not support multi-dimensional arrays.<br /><br />Instead, multi-dimensional matrices can be implemented with arrays of arrays (aka jagged arrays), same as in C and C++.<br /><br />When a static, multidimensional array needs to be initialized, in a statement such as:<br /><code><br />int foo[3][4][5][6];<br /></code><br />the native compiler back-end implicitly initializes the array by reserving the memory and filling it with zeros.<br /><br />In the .NET back-end for the D compiler that I am working on, things are different: explicit <span style="font-family: courier new; font-style: italic;">newarr</span> calls are required, in conjunction with navigating the data structure and initializing the individual elements.<br /><br />And this is where it gets interesting. The array may have any arbitrary rank, and thus the compiler needs to figure out the types of the nested arrays; for the example above, they are:<br /><pre><br />int32 [][][][]<br />int32 [][][]<br />int32 [][]<br />int32 []<br /></pre><br />My implementation uses a runtime helper function in the <span style="font-style: italic;">dnetlib.dll</span> assembly; rather than trying to determine the rank of the array and the types involved, the compiler back-end simply generates a call to the runtime helper, which does the heavy lifting. 
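What the helper has to produce is easier to see with nested Python lists standing in for jagged arrays; a minimal sketch (illustration only, not the runtime helper itself):

```python
def init_jagged(sizes):
    """Recursively allocate a zero-filled jagged array of arbitrary rank;
    e.g. sizes [3, 4, 5, 6] builds 3 lists of 4 lists of 5 lists of 6 zeros."""
    if len(sizes) == 1:
        return [0] * sizes[0]
    return [init_jagged(sizes[1:]) for _ in range(sizes[0])]

foo = init_jagged([3, 4, 5, 6])   # int foo[3][4][5][6];
print(len(foo), len(foo[0]), len(foo[0][0]), len(foo[0][0][0]))  # 3 4 5 6
```

Python can build each sub-array directly, so the recursion is on the sizes list alone; the C# helper additionally has to carry the element type down the recursion, which is where the generic trickery comes in.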
This solution works for jagged arrays of any rank.<br /><br />The helper code itself is written in C# and uses generic recursion; it appends square brackets [] to the generic parameter at each recursion level, as shown below.<br /><br /><pre>namespace runtime<br />{<br />    public class Array<br />    {<br />        // helper for initializing a jagged array<br />        static public void Init&lt;T&gt;(System.Array a, uint[] sizes, int start, int length)<br />        {<br />            if (length == 2)<br />            {<br />                uint n = sizes[start];<br />                uint m = sizes[start + 1];<br />                for (uint i = 0; i != n; ++i)<br />                {<br />                    a.SetValue(new T[m], i);<br />                }<br />            }<br />            else<br />            {<br />                --length;<br />                // call recursively, changing the generic parameter from T to T[]<br />                Init&lt;T[]&gt;(a, sizes, start, length);<br />                uint n = sizes[start];<br />                for (uint i = 0; i != n; ++i)<br />                {<br />                    Init&lt;T&gt;((System.Array)a.GetValue(i), sizes, start + 1, length);<br />                }<br />            }<br />        }<br /><br />        // called at runtime<br />        static public void Init&lt;T&gt;(System.Array a, uint[] sizes)<br />        {<br />            Init&lt;T&gt;(a, sizes, 0, sizes.Length);<br />        }<br />    }<br />} // namespace runtime<br /></pre><div class="blogger-post-footer"><img width='1' height='1' src=''/></div><img src="" height="1" width="1"/>The Free Meme There A Point in Using Pointers?A few people wrote back in response to a <a href="">previous blog post</a> on the D for .NET project, some asking, well, why .NET?<br /><br />Part of the answer is that .NET and D seem to be made for each other:<br /><br />A common fragrance imbues both designs; for example, in D structs are not objects, but value types -- same as in C#.
In D all objects inherit from a root object, which has methods such as <span style="font-style: italic;">toString</span>, <span style="font-style: italic;">toHash</span> and <span style="font-style: italic;">opEquals</span>; in .NET, [mscorlib]System.Object sports <span style="font-style: italic;">ToString</span>, <span style="font-style: italic;">GetHashCode</span>, and <span style="font-style: italic;">Equals</span>.<br /><br />Still not convinced? How about <a href="">array properties</a>, then? In D there are properties such as <span style="font-style: italic;">sort</span>, <span style="font-style: italic;">reverse</span>, and <span style="font-style: italic;">dup</span>; in .NET we have <span style="font-style: italic;">System.Array.Sort()</span>, <span style="font-style: italic;">System.Array.Reverse()</span>, and (tadaaa) <span style="font-style: italic;">System.Array.Clone()</span>. Coincidence? Perhaps. Or maybe powerful memes were floating free in the air and found propitious hosts in both .NET and D (not unlike the idea of <a href="">Python-scripting a debugger</a>, which was pioneered by ZeroBUGS, and is now <a href="">being adopted by GDB</a>).<br /><br />But the cute metaphors have to stop somewhere (no honeymoon lasts forever) and so we come upon the thorny issue of pointers. D allows pointers, though it does not encourage them. But unmanaged pointers (and even managed-pointer arithmetic) do not yield <span style="font-style: italic;">verifiable</span> code in .NET.
I have experimented with both managed and unmanaged pointers, and generated textual IL that compiles and runs; PEVERIFY, however, refuses to put its seal of approval on such code.

And so I am very tempted to disallow pointers in class and struct members (in D, as in .NET, objects are manipulated via references anyway, so what's the point of a pointer?).

A Good Idea is Worth Stealing

As always, the freetards in the Open Source community are stealing good ideas. Scripting the debugger with Python, pioneered by my work in ZeroBUGS, is now copied by GDB.

Too bad their implementation is awfully buggy.

And too bad that back in 2006 I did not think the idea was patentable :)

.NET, D Here Calling

A piece of advice from someone who spent fifteen years writing software professionally: if some "experts" ever say "printf debugging" is a poor technique, tell them to get out of town.

Printf debugging is helpful in a great many situations, for example when you are writing a compiler. The debugger cannot be trusted, because the work-in-progress compiler may not output complete debug information just yet.
But you can trust what's printed white on black on the screen.

There is a chicken-and-egg problem with printf, though: how does one compile the implementation of printf (or writefln, as is the case with the D programming language) if the compiler itself is not there?

In order to get this code to work

    import System;

    void main() {
        System.Console.WriteLine("hello D.NET");
    }

I wrote a System.d file, containing the D version of the Console class declaration (not complete, but good enough to get me going):

    public class Console
    {
        static public void WriteLine();
        static public void WriteLine(string);
        static public void WriteLine(string, ...);

        static public void WriteLine(char);
        static public void WriteLine(bool);
        static public void WriteLine(int);
        static public void WriteLine(uint);
        static public void WriteLine(long);
        static public void WriteLine(float);
        static public void WriteLine(double);

        static public void Write(char);
        static public void Write(string);
        static public void Write(string, ...);

        static public void Write(bool);
        static public void Write(int);
        static public void Write(long);
        static public void Write(float);
        static public void Write(double);
    }

In the future, I plan to write a program that produces this kind of declaration automatically from .NET assemblies, using reflection.

In the generated IL, calls into the .NET class library must be qualified with the assembly name:

    call void class [mscorlib]System.Console::WriteLine(string)

One of my design guidelines for this project is to modify the front-end as little as possible, if at all. So then how do I get the imports to be fully qualified by assembly names?

With a clever hack, of course.
I added these lines at the top of System.d:

    class mscorlib { }
    class assembly : mscorlib { }

Then I tweaked my back-end to recognize the "class assembly" construct and prefix the imported module names with whatever the name of the base class of the assembly class is.
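The recursive jagged-array initializer from the first post above has a close analogue in Java, where java.lang.reflect.Array plays the role of System.Array. This is an illustrative sketch, not the post's code; the class and method names are mine, and all lengths are assumed to be at least 1:

```java
import java.lang.reflect.Array;

public class JaggedInit {
    // Recursively allocate a fully-initialized jagged array with the given
    // leaf component type and per-level lengths, mirroring the C# helper:
    // each recursion level adds one array dimension.
    public static Object init(Class<?> leaf, int[] sizes, int level) {
        int n = sizes[level];
        if (level == sizes.length - 1) {
            return Array.newInstance(leaf, n);       // innermost level: new T[n]
        }
        // Allocate one child first, so we know the element class of this level.
        Object first = init(leaf, sizes, level + 1);
        Object arr = Array.newInstance(first.getClass(), n);
        Array.set(arr, 0, first);
        for (int i = 1; i < n; i++) {
            Array.set(arr, i, init(leaf, sizes, level + 1));
        }
        return arr;
    }

    public static void main(String[] args) {
        int[][][] a = (int[][][]) init(int.class, new int[] {2, 3, 4}, 0);
        System.out.println(a.length + " x " + a[0].length + " x " + a[0][0].length); // prints "2 x 3 x 4"
    }
}
```

In everyday Java, `new int[2][3][4]` allocates the same shape directly; the reflective version only matters when the rank is not known at compile time, which is exactly the situation the C# helper addresses.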
http://feeds.feedburner.com/ProgrammingAndDebugginginMyUnderhosen
I am not subscribed to debian-legal.

Glenn Maynard wrote:
> > Consider a major, practical reason we require that packages be buildable
> > with free tools: so people--both Debian and users--can make fixes to the
> > software in the future.

I agree with this. This is also not the point. You keep talking about a package that can only be built with a non-free compiler. The one in question can be built with a free or a non-free compiler.

> For example, suppose OpenSSL is built with ecc (Expensive C Compiler),
> because it produces faster binaries, the Debian package is created with
> it, and ends up in a stable release. A security bug is found, and the
> maintainer isn't available. Can another developer fix this bug? No:
> you can't possibly make a stable update with a completely different
> compiler, halving the speed and possibly introducing new bugs. (Debian
> is very conservative and cautious with stable updates; this is one of
> the reasons many people use it.)

Yes -- assuming that OpenSSL will compile properly with both gcc and ecc, and the source is not using tricks to change functionality when compiled with one or the other. To me, using ecc or gcc is, or at least should be, similar to using gcc -O1 or gcc -O9. Similarly, I do not consider a significant performance boost to be a change in functionality. I'm thinking something like this:

    #ifdef ecc
      // this enables the -S option
    #elif defined(gcc)
      // remove -S, but add in -o instead
    #else
      // neither -S nor -o available
    #endif

In this case, the compiler used would cause a significant change in functionality, and would require the build-dep on ecc, and the package would be contrib at best.

> On the same token, users are similarly unable to exercise the level of
> caution needed when making security updates on critical systems, unless
> they subject themselves to whatever non-free license the compiler uses.

gcc is written under the GPL.
I can write a non-free program, keep the source entirely secret, and distribute my program in binary form only, with a very restrictive license. The gcc license does not contaminate the resultant binary (unless, of course, I put gcc code in my program). Similarly, the ecc license should not prevent compiling GPL'd code. If it did, ecc would be unsuitable for any purpose, period.

> This is a fundamental reason it's required that packages be buildable
> using free tools, and why I don't think "you can build a kind-of similar
> package using free tools, but the one we're giving you can only be built
> with non-free tools" is acceptable.

Again, if it could only be built properly and working with ecc, I will happily agree with you until the cows come home to roost. This would be a long time, as cows do not generally roost. Specifically, this package can be built with either gcc or icc. I will accept the argument from a pragmatic standpoint, inasmuch as a bug in icc would be harder to track down, but not from an ``it is a different package'' standpoint because of using icc instead of gcc.

-- 
John H. Robinson, IV                          jaqque@debian.org
http  ((((  WARNING: I cannot be held responsible for the above,
sbih.org ( )(:[      as apparently my cats have learned how to type.
spiders.html  ((((
https://lists.debian.org/debian-legal/2004/10/msg00312.html
I'm lost on how to do and call 3d array

Lindsey Ship (Greenhorn), posted Feb 07, 2003 15:15

Hi, I am in a real mess trying to do a 3D array. I am only new to Java, so forgive me if this looks messy; I do not know how to call the methods etc. What I have to sort out is: if I enter an airport name, it will display the flight numbers; if I enter a flight number, it will display the destination and flight time. I was trying to reference the names of the airports by int so that I can call from the rest of the array. Can anyone put me on the right track please? Here is my horrible attempt:

    import java.io.*;

    class AirplaneCopy {
        public static int i, l, j, k;
        {
            String[] aps = {"Liverpool","Frankfurt","Copenhagen","Budapest","Amsterdam",
                            "Warsaw","Stockholm","Rome","Paris","Madrid"};

        public static String[] searchAirport( String airport ){
            string[] result = null;
            if (aps[i][1].equals(airport)){
                result = new string[2];
                result[0] = aps[i][0];
                result[1] = aps[i][2];
                return result;
            };
            return result;
        }

        //****************************************************************
        // Read input from user
        public static void main( String args[] ) throws IOException
            System.out.println("Enter name of Airport?");
            InputStreamReader input = new InputStreamReader(System.in);
            BufferedReader keyboardInput = new BufferedReader(input);
            String[] info = searchAirport( keyboardInput.readLine() );
            if (info == null)
                System.out.println("Sorry, no Airport of that name listed ");
            else if (info == aps)
                System.out.println(info[]);
            int result = 0;
            for(i=0; i<aps.length; i++) {
                if(aps[i].equals (aps)) {
                    result = i;
                }
            }

            // 1st row = flight No's, 2nd row = destination (i.e. 3 = Copenhagen), 3rd row = flight time
            // so my first data refers to Liverpool airport with 3 flights
            int[][][] flights = {
                // Departing flights
                {{121,418,553,0,0}, {0,0,553,0,0}, {121,0,0,0,0}, {0,0,553,0,0}, {121,418,0,0,0},
                 {0,0,0,0,0}, {0,0,0,0,0}, {0,418,0,0,0}, {0,418,0,0,0}, {121,0,0,0,0}},
                // Destinations
                {{3,9,2,0,0}, {0,0,4,0,0}, {5,0,0,0,0}, {0,0,6,0,0}, {10,9,0,0,0},
                 {0,0,0,0,0}, {0,0,0,0,0}, {0,5,0,0,0}, {0,8,0,0,0}, {7,0,0,0,0}},
                // Flight time
                {{35,50,55,0,0}, {0,0,50,0,0}, {35,0,0,0,0}, {0,0,30,0,0}, {65,35,0,0,0},
                 {0,0,0,0,0}, {0,0,0,0,0}, {0,60,0,0,0}, {0,55,0,0,0}, {90,0,0,0,0}}
            };

            for( l = 0; l < flights.length; l++);
            for( j = 0; j < flights.length; j++);
            for( k = 0; k < flights.length; k++);
            System.out.println ("test"+(flights[1][0][]));
        }
    }

Lindsey Ship, posted Feb 07, 2003 15:58

Hi, I am sorry if I ask too much; I really do want to learn, so if anyone can give me a tutorial site on 3D arrays I will check it out, as none of the books I have read give examples on 3D. Sorry if I have come across as a fool with my code, as I know it is messed up and most probably wrong in all aspects. Thanks if you can help, in case I am in bed when you answer, as it is getting late now in the UK.

Greg Charles (Sheriff), posted Feb 11, 2003 12:41

OK, first: "Don't Panic!" That's the best advice ever to come out of The Hitchhiker's Guide to the Galaxy. Everyone has to begin somewhere. Relax and don't worry so much about seeming a "fool". Now with that said, yes, this is a horrible attempt. You have aps, which is clearly a one-dimensional array (notice the single set of brackets in String[]), but you're trying to apply multiple indices to it: aps[i][0]. Also, i is a static variable set outside the method, which, as you will find as you gain familiarity with programming, is something you really don't want to do. Let's just take your first goal: given the name of an airport, you want to return the flights for that airport. Will there be more than one flight per airport? Probably. Will there be the same number of flights per airport? Probably not. What I'm getting at is that I don't think you want to use a multi-dimensional array at all. How about arrays stored in a hash table?
Is that beyond the scope of this assignment? Try this:

    import java.util.Hashtable;

    public class AirplaneCopy {
        Hashtable airports = new Hashtable();

        public AirplaneCopy() {
            init();
        }

        private void init() {
            String[] londonFlights = { "110A", "123B", "444SP" };
            airports.put("London", londonFlights);
            String[] parisFlights = { ... };
            airports.put("Paris", parisFlights);
            ...
        }

        public String[] searchAirports(String airport) {
            return (String[]) airports.get(airport);
        }
    }

Of course, that's just off the top of my head, and I haven't even compiled it, but it should work more or less. A couple of final closing points about multi-dimensional arrays (for anyone left reading):

1. Multi-dimensional arrays are usually only theoretically useful. In real life, arrays or other collections of objects are more common.
2. You should make sure you understand one-dimensional arrays before trying to wrap your mind around 2D, 3D, and nD.

Lindsey Ship, posted Feb 11, 2003 13:53

Thanks Greg for your advice. I am now trying with a 2D array, and got my first part working: calling the airport Liverpool now displays all 3 flights. I used a counter which I read about in a book. I am stuck on the second bit; I will try and sort it out, but I am totally confused. You will see what I mean if I cannot sort it. I hope you guys won't object to me coming back with a new post. Thanks again; all the little problems seem major when you are just in your first stage of learning. Oh, a special thanks to you guys who run this site for beginners, as other sites mock us for asking what must be silly questions to them. Wish I could send a thank-you card to the ranch for all the staff and you guys and girls who give up your time to help the likes of myself. Sorry for going on too much. Take care everyone xxxx Lindsey
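For later readers: a compilable, modern-Java rendition of Greg's sketch, using a generic HashMap instead of the raw Hashtable. The airports and flight numbers here are made-up sample data, not the assignment's real data:

```java
import java.util.HashMap;
import java.util.Map;

public class AirportFlights {
    // Map each airport name to its (possibly different-length) list of flights.
    private final Map<String, String[]> airports = new HashMap<>();

    public AirportFlights() {
        airports.put("London", new String[] { "110A", "123B", "444SP" });
        airports.put("Paris",  new String[] { "121",  "418" });
    }

    // Returns the flights for the airport, or null if it is not listed.
    public String[] searchAirports(String airport) {
        return airports.get(airport);
    }

    public static void main(String[] args) {
        AirportFlights lookup = new AirportFlights();
        String[] flights = lookup.searchAirports("London");
        System.out.println(flights == null ? "not listed" : String.join(", ", flights));
    }
}
```

Note that this sidesteps multi-dimensional arrays entirely, which was Greg's point: the map handles airports with different numbers of flights without padding zeros.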
http://www.coderanch.com/t/393232/java/java/lost-call-array
Hello everyone! I'm trying to create a menu with four options:

- You can register a bird you have seen in a text file.
- You can write the type of a bird you have seen.
- You can write the place where you have seen the bird.
- Close the program.

Can someone help me with creating one? I'm a bit stuck and I keep getting errors, so I would appreciate it if someone could help me program a menu with these four options. Also, for the first option, registering a bird, this is what I've done:

    import easyIO.*;

    class Birds {
        public static void main(String[] args) {
            In press = new In();
            Out birds = new Out("birdfile.txt", true);

            birds.out("Birds name: ");
            String biName = press.inLine();
            birds.out("Sex: ");
            String biSex = press.inLine();
            birds.out("Place for observation: ");
            String plObs = press.inLine();
            birds.out("Date of observation: ");
            int date = press.inInt();

            System.out.println("Birds name: ");
            birds.close();
        }
    }

I haven't started on the other 3 options yet. I was thinking about getting the menu done first, but got stuck. Thanks a lot for any help.
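One way to structure the four-option menu is a while loop around a switch. The sketch below uses the standard library (java.util.Scanner and PrintWriter) rather than the course's easyIO classes, and the prompts and semicolon-separated file format are illustrative choices, not requirements:

```java
import java.io.FileWriter;
import java.io.PrintWriter;
import java.util.Scanner;

public class BirdMenu {
    // Runs the four-option menu until the user picks 4 (close the program).
    // Every observation is appended to 'log' as one line of text.
    public static void runMenu(Scanner in, PrintWriter log) {
        while (true) {
            System.out.println("1) Register a bird  2) Bird type  3) Place seen  4) Quit");
            String choice = in.nextLine().trim();
            switch (choice) {
                case "1":
                    System.out.print("Bird's name: ");
                    String name = in.nextLine();
                    System.out.print("Place of observation: ");
                    String place = in.nextLine();
                    log.println(name + ";" + place);
                    break;
                case "2":
                    System.out.print("Type of bird seen: ");
                    log.println("type;" + in.nextLine());
                    break;
                case "3":
                    System.out.print("Place where you saw it: ");
                    log.println("place;" + in.nextLine());
                    break;
                case "4":
                    return;                     // close the program
                default:
                    System.out.println("Please type a number from 1 to 4.");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // 'true' on FileWriter appends instead of overwriting birdfile.txt.
        try (PrintWriter log = new PrintWriter(new FileWriter("birdfile.txt", true), true);
             Scanner in = new Scanner(System.in)) {
            runMenu(in, log);
        }
    }
}
```

Keeping the loop in its own method (taking the Scanner and writer as parameters) also makes it easy to test without typing at the console.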
http://www.javaprogrammingforums.com/%20loops-control-statements/31559-can-someone-please-help-me-creating-menu-while-loops-switch-statements-printingthethread.html
Just wondering if I dispose my DbContext object correctly here, or should I be using a using block instead?

    public class RepoBankAccount : IBankAccount
    {
        private AppDbContext db = null;

        public RepoBankAccount()
        {
            this.db = new AppDbContext();
        }

        public RepoBankAccount(AppDbContext db)
        {
            this.db = db;
        }

        public IEnumerable<BankAccount> ViewAllBankAccount()
        {
            return db.BankAccounts.ToList();
        }

        public BankAccount ViewBankAccount(long accountNumber)
        {
            return db.BankAccounts.Where(b => b.AccountNumber.Equals(accountNumber)).SingleOrDefault();
        }

        public void DeleteBankAccount(BankAccount bankAccount)
        {
            db.BankAccounts.Remove(bankAccount);
            Save();
        }

        public void InsertBankAccount(BankAccount bankAccount)
        {
            db.BankAccounts.Add(bankAccount);
            Save();
        }

        public void Save()
        {
            try
            {
                db.SaveChanges();
            }
            catch (Exception ex)
            {
                System.Console.WriteLine("Error:" + ex.Message);
            }
            finally
            {
                if (db != null)
                    db.Dispose();
            }
        }
    }

I read that I should not be calling Dispose manually. But in some sample code, I also noticed this scaffolding code, and it's not too clear to me how it does the job on its own:

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            db.Dispose();
        }
        base.Dispose(disposing);
    }

DbContexts are designed to be short-lived. The very first initialization and use of a DbContext incurs a spin-up cost to resolve the entity mappings, but aside from that, the context can be scoped to individual calls, or sets of calls. Your code will work fine, and as long as your repo is disposed, the DbContext will be cleaned up. There are pitfalls with this approach, though, in that as the product matures it is easy to forget to dispose something, and these DbContexts can soak up a fair bit of memory if they are long-lived. To avoid issues with entities that become disconnected from their DbContext, an entity should never leave the scope of its DbContext. If it does, you run into errors if a lazy load gets triggered, for example.
For instance, let's say I have a method in a Controller or such that does something like this. (Note: I don't advocate ever returning entities to a view, but for example's sake...)

    public ActionResult View(long accountNumber)
    {
        BankAccount bankAccount;
        using (var repo = new RepoBankAccount())
        {
            bankAccount = repo.ViewBankAccount(accountNumber);
        }
        return new View(bankAccount);
    }

The repo will be disposed, and if the bank account either has no references, or all references are eager loaded, this call would work just fine. However, if there is a lazy load call, the controller method will fail because the DbContext associated with the BankAccount was disposed. This can be compensated for by ensuring the return occurs inside the scope of the using block:

    public ActionResult View(long accountNumber)
    {
        using (var repo = new RepoBankAccount())
        {
            BankAccount bankAccount = repo.ViewBankAccount(accountNumber);
            return new View(bankAccount);
        }
    }

To help avoid issues like this, it is generally a better idea to create POCO view model classes, populate them within the scope of the DbContext from the entities, then return those view models. No surprise lazy-load hits, etc.

Where this really starts to crumble apart is when you want to coordinate things like updates across entities, to ensure that updates are committed or rolled back together. Each of your repo classes is going to have a separate DbContext instance. The first approach to get familiar with to address this is Dependency Injection and Inversion of Control, particularly an IoC container such as Autofac, Unity, Ninject, or Castle Windsor. Using these, you can have your repository classes accept a dependency on a DbContext, and the container can scope a single instance across a lifetime (such as per HTTP request, for example). In this way, all of the repositories used in a single session call will be provided the same DbContext instance, and a call to SaveChanges() will attempt to commit all pending changes.
A better pattern is the Unit of Work pattern, where the scope of the DbContext is moved outside of the repository and each repository is either provided a reference to the DbContext or can locate it (similar to how the IoC pattern works). The advantage of Unit of Work patterns is that you can move control of the commit/rollback out to the consumer of the repositories. I promote the use of Mehdime's DbContextScope, since it negates the need to pass around references to the UoW/DbContext: see Mehdime's DbContextScope (the EF6 original on GitHub) and the EF Core-supported port.
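The deterministic-disposal discipline discussed above is not C#-specific. As a language-neutral sketch (in Java, whose try-with-resources is the analogue of C#'s using block), the snippet below shows both the guaranteed cleanup at scope exit and the "entity escapes its context" trap; FakeContext is a hypothetical stand-in for a real data context, not an EF API:

```java
public class ScopeDemo {
    // A stand-in for a data context: usable until close() is called.
    public static class FakeContext implements AutoCloseable {
        public boolean closed = false;

        public String load() {
            if (closed) throw new IllegalStateException("context disposed");
            return "entity";
        }

        @Override
        public void close() { closed = true; }
    }

    public static void main(String[] args) {
        String value;
        FakeContext escaped;
        // try-with-resources is Java's 'using': close() runs when the block exits.
        try (FakeContext ctx = new FakeContext()) {
            value = ctx.load();   // materialize data inside the scope
            escaped = ctx;        // letting the context escape is the trap...
        }
        System.out.println(value + ", closed=" + escaped.closed);
        // ...because any deferred ("lazy") use after the scope now fails:
        try {
            escaped.load();
        } catch (IllegalStateException e) {
            System.out.println("late load failed: " + e.getMessage());
        }
    }
}
```

The same rule as in the answer applies: copy what you need out of the context's scope (a view model, a plain value) rather than the context-bound object itself.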
https://entityframeworkcore.com/knowledge-base/55112972/garbage-collection-for-entity-framework-dbcontext-csharp
Pinned topic: sizeof (<struct with char bit fields>) == 4, not 1. Posted by SystemAdmin, 2011-05-08T12:38:14Z; updated 2011-05-08T15:36:13Z.

Hi,

    > uname -a
    AIX ep5512b 1 6 000497A2D900
    > xlC -qversion
    IBM XL C/C++ for AIX, V10.1
    Version: 10.01.0000.0000

    // foo.cpp
    #include <iostream>

    struct Foo
    {
        unsigned char m1:1;
        unsigned char m2:7;
    };

    int main()
    {
        std::cout << sizeof(Foo) << std::endl;
        return 0;
    }

    > xlC foo.cpp   // No errors
    > ./a.out
    4               // not 1

sizeof(Foo) on most compilers is 1. Do we have a way to tell the compiler that sizeof(Foo) should be 1?

Thanks,
Alex

Re: sizeof (<struct with char bit fields>) == 4, not 1 (accepted answer, 2011-05-08T15:36:13Z)

Hi, actually, the sizeof operator is correctly reporting the amount of space that the Foo structure is consuming, so if I can rephrase your question, you're asking: "why does the IBM AIX xlC compiler allocate 4 bytes for struct Foo when the fields m1 and m2 consume 8 bits (1 byte)?"

The compiler is rounding the size up to the nearest word. It will pack as many fields as it can into a given word, but the size will be rounded up. For instance, if we write a modified version of your example:

    #include <iostream>

    struct Foo
    {
        unsigned char m1:1;
        unsigned char m2:7;
    } glob1;

    struct Fooplus
    {
        struct Foo;
        unsigned char m1:1;
        unsigned char m2:7;
        char blah;
        char blah2;
    } glob2;

    int main()
    {
        std::cout << "Foo is:" << sizeof(Foo) << std::endl;
        std::cout << "Fooplus is:" << sizeof(Fooplus) << std::endl;
        return 0;
    }

we can see that Foo is rounded up to 4 bytes, but also that Fooplus packs all the fields into 4 bytes:

    Foo is:4
    Fooplus is:4

The POWER architecture does provide for loads and stores of bytes, but integer operations are all on words (4 bytes). Typically, bit-field structure members will have their values manipulated; e.g., there will be adds, subtracts, etc. of these values.
In these scenarios, it's ultimately faster to have these bit fields aligned on word boundaries so that the entire word can be loaded into a register and manipulated, probably with masks. While there will be scenarios where this approach costs a bit of extra space, such as the small test case you provided, we have found that, on balance, optimizing for the manipulation of the bit fields is the better trade-off.
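The "load a word and manipulate it with masks" code the answer alludes to can be illustrated in ordinary source, here in Java (which has no bit fields, so the packing is done by hand). The layout below -- m1 in the lowest bit, m2 in the next seven -- is purely illustrative; a real compiler's bit-field layout is implementation-defined:

```java
public class BitFieldWord {
    // Emulate  struct Foo { unsigned char m1:1; unsigned char m2:7; }
    // packed into one 32-bit word, manipulated with shifts and masks.
    static final int M1_MASK = 0x01;        // 1 bit at position 0
    static final int M2_MASK = 0x7F << 1;   // 7 bits at positions 1..7

    public static int getM1(int word) { return word & M1_MASK; }
    public static int getM2(int word) { return (word & M2_MASK) >>> 1; }

    public static int setM1(int word, int v) { return (word & ~M1_MASK) | (v & 0x01); }
    public static int setM2(int word, int v) { return (word & ~M2_MASK) | ((v & 0x7F) << 1); }

    public static void main(String[] args) {
        int word = 0;
        word = setM1(word, 1);
        word = setM2(word, 100);
        System.out.println(getM1(word) + " " + getM2(word)); // prints "1 100"
    }
}
```

Every get and set touches the whole word, which is why word-aligning the fields lets the hardware do each access with a single load, a few register operations, and a single store.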
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014614328&ps=100
- Having Problems with adding new Columns on-the-fly - why has the variable passed to tree's store been changed? - How to? Design, positioning in Extjs, like CSS clear in Extks - combobox from markup is sending displayed value instead of key value - How to invoke a renderer on a displayfield xtype - Ext.data.NodeInterface quick question - sort order issues on un-unque fields with grids - TreeGrid showing hierarchical json data - best practices question - [OPEN] Does ExtJS 4 still not supporting locked columns in TreeGrid? - RowEditing Combo losing it's value? - How to display panel floated above custom layout. - update model with json - How calculate checked attribute in grid checkcolumn - Grid with locked columns no longger response to 'itemdblclick' - How to get a child's content to resize its parent container - pagination not working - Extjs Grid date column issue - error messages about .. Xtemplate dataview with 'hasMany - belongsTo' model - Add/Remove hidden grid columns from the DOM - Properly set waitMsgTarget using MVC pattern - stacked and grouped column chart functionality - Save login information - encoding html for checkbox, combobox - Baseparams in a store does not work - how to show grouped value in Extjs 4 grid groupHeader? - How to convert a dom input to a Ext.form.field.Text - Change Image when a button is clicked - Scrollbar not working in grid. - Store sync - Cant read posted values on server - Dynamic tree refresh - Grid row Right click [Context menu] Open new url - simple question: Radar chart 3 series limitation? - Treegid with action column - Limit for textarea and textfield with RegEx - Tree new loading - Saving image sprite on server - Simply trying to right-align a column - Which method can I use to submit a rowediting component? - Store.findRecord by id - extjs 4 generate pdf from FORM panel - Redirecting from one tab to another? - Is the video component streaming capable?? 
- Grid RowEditor in Firefox - Ajax.request() vs form.submit() difference... - No scrollbar and paging toolbar in grid inside window help - Grid mouse wheel scrolling - Ajax request using Jsonp - How to deleted whole Group from the Grid - Grid column not showing up - Ext.form.DateField date formats - TimeField returns whole date/time format if remove extjs3.0 compat js file in EXTJS 4 - Issue with using removeAll(true) on a container with 2 grids - grid using Ext.selection.CheckboxModel - Expired user not prompt with login page - [4.0.2a] Issue with highlighting text in cell editing grid inside window. - Disabling a item in Ext.grid.column.Action - How can I change how Groups themselves are sorted? - Setting default value to Combo Ext 4.X - How to identify which column renderer is running for? - ExtJS comunioty edition for SAAS solution - Panel - Table Layout - Padding Issues - HtmlEditor change event - How to add two menu items on one row? - Exts filefield IE8 Spring 3 (Exception thrown not caught) Line: 4898 extjs-all-debug - Right click on row removes headers - String literal is not properly closed by a matching quote - The global url in web application - Expand path in tree, callback - MVC : Getting view references - How do you change a form panel layout after it has been rendered ? - How to model and array in a store? - gridView.findRowIndex - extjs 4 equivalent? - REST PARAMETERS - Panel in a Menu - Grid panel displays all the value in 1 line - MultiSelect dblclick event handler - intenalization application example in extjs4 - Ext.data.Model : Associations with XML datas - Hello friends, I am having difficulty working with the component Treview - how can i remove the sort in a grid panel - Firefox: Button-Tooltip not shown if button disabled - Retrieving grid row editor through Ext.ComponentQuery - Relocating a component from one parent to another? 
- checkboxgroup above GridPanel - Ext JS 4 Ajax request & load response into datastore - Help needed - How do I download Ext JS Example files? - Change collapsed panel title color - Autoadjust height TreePanel - READING INCORRECT DATA - Html Editor in mobile devices doesn't show keyboard - Grid with Paging and Search: Custom Parameters - Form.updateRecord only works once? - Grouping Problem - Migraion from Ext 2.2 to 4.0 - MVC: Application events handling - Cookie not always working - Save model passing itself to the proxy - Connect to Sql server Database for sorting purpose - Can we use two libraries together (Ext 2.2 and Ext 4.0) - Mapping multiple values from a datastore into a single grid column - How to disable JSON-encoding proxy query params? - Using xtype:numeric in rendered input field in a grid column - Model field definition requiring another class.... - Ext.selection.CheckboxModel - header is null - dom.addEventListener is not a function - sencha binary is not in machine's $Path, where is it? - Do custom events automatically propageted to children from parent container? - How to get association to play nice with MVC namespace - Set treepanel pre-selected node - Ext.dd.DDProxy div layer text selection problem - Chart Series listener MVC - create a refresh toolbar - Edit newly created grid record, someone save me from jumping off a bridge - Hyper Link - Hasmany relation with a single element - An anonymous Ext.data.Model for Ext.data.Store - An easy way to expand all groups in a grid using grouping feature - Problem with editor event listener - FF7, Firebug and EXT = unusable - Unable to load my combo for first time(after combo is set by form load method) - Uncaught TypeError: Cannot call method 'isOnLeftEdge' of undefined - How to give moving effects to a image given as floating true in a carousel ? 
- FileUploadField in FieldContainer doesn't use extUpload - Chart Reload Data Error - QuickTip help, can't figure out how to initialize - Change Grid Background Color - Issue running Dynamic Menu created in Native Javascript on ExtJS 4 - Process XML from AJAX Form submit - Strategies for updating controls as data changes? - Cars example not working (blank page) - Controller Question - Multilevel Grid Grouping using Ext 4 - Help me! A simple question - Update a node name, made dirty the next node in TreeGrid - Simple queries regarding Tree - Multiple series on chart - line chart problum - Class Spotlight only works including on index.html??? - Spotlight works with gridpanel? - Asynchronous Ajax call loading the records lately - What about supporting 'controllers' property in controller itself? - How can I change de toolbar color? - How to determine the field name in the tips renderer in a grouped bar/column series? - Using JSON to populate selectfield drop down - ComboBox not setting on form load - jsbuilder for extjs 4 app - Existing HTML and MVC - Collapsible Form does not work in Panel - Ext.tab.Panel not rendering as expected - Overrides? - Video on iPad work around - Celleditor: Tab Key Navigation on Editor Grid Columns only. - Ext.Net is Running Too Slow - Theming issue: Line 15: Functions may only be defined... - TinyMCE width ExtJS - Dynamicaly change grid editor plugin - How to split demo Layout-Browser into MVC Architecture? 
- GridPanel's 'collapsible' does NOT work - Combo Box tpl not clickable - Chnage position of an item added to a form - Getting error : Error: Ext.ux.grid.GridFilters is not a constructor - nested data and drag&drop tree - DualList example in ExTJS - How to get groupField value after Grouping is change - Combo with a store that doesn't load - Phantom record is created when updating the store - Getting Error "el is null [Break On This Error] a2 = el.getAnchorXY(p2, false);" - Combobox in header column - Grouping Summary groupclick event listener - Grid: duplicate entries issue - Scrollbar broken in treepanel - Theming Extjs - With our Apache configuration, the samples initialize freeze - extjs 4 tree store RESTful - Prevent a Treegrid rerendering from the store when opening closing nodes - Rendering panel collapsed makes it disappear - Form Validation (disabled fields) - Why is adding Sprites 1 at a time faster than adding all to items[] during init?!?! - Chart bar values less than 1? - Drag and Drop - Grid to Div - where to define an overwrite for Ext.data.BelongsToAssociation in MVC application - What are all the ExtJS Style Sheets for? - Porblem while adding or removing series in charts - Chart Tool Tip - ExtJS MVC, Models, Stores and Server side validation errors - Can't populating MultiSelect control from JSON Store? - Paging info not updating after using local filtering on grid panel - ToolTip Issue in Ext Window..... - Ext data store require full model name for model config - Store + Proxy + JSON Reader doesn't work - QTP Add-in? - Quick debug help? - Grid Paging Toolbar setting page & start as NaN - Dynamically add Store to Controller - jQuery Sortable in extjs....? - Problem with Grid Panel - Checkbox Selection in a TreePanel - row expander selected area related help needed - Store: How to set parameters for its load action AFTER instantiation? - Problem with date format in a grid - Internet Explorer - how to apply mask in IE? 
- Injecting HTML into iframe breaks history - Dynamically Change Chart Axis Title - Remote Combo Box Paging - DataPicker set Today - Extjs 4.x and WCAG 2 AA, Section 508, and ARIA - textfield label alignment - EXTJS 4.x selecting record from grid - How do I stop a clicked combo box from clearing its value???? - Extremely inefficient update of SVG text sprites - Expected JSON API response for RESTful proxy on TreeStore? - Programmaticaly check/uncheck checkboxes in the Tree panel - Layout choice for text header and grid - How can I add a button to charts - A way to obtain the actual URL that a store uses for its load operation? - Does convert on a field that belongs to a dirty record, work? - Ext.Loader question - How to add and handle mousedown event on Textfield in ExtJs 4 in MVC - Where is Ext.lib.Dom? - Deployment of an extJS web application - MVC TreeGrid problems with store.getRootNode - Line chart problem in extJs 4.0.2 - groupHeaderTpl - dateFormat - Combobox name property is gone after store load - Chart behavior problem with firefox and chrome - how to avoid using getCmp() to pull a fieldSet into view? - c.setWidth is not a function error. - StartEdit error "item is null" - scrollable viewport or panel with window components - DragZone in two gridpanel - How to Change action column icon dynamically? - Recreating Grid With Different model and Fails or BUG? - Tree: Using a Store with an Ajax proxy, how to preload ALL childrens? - Fill Form with data from database - Help with debugging issues - Clickable in Image Viewer example - loadRecord() doesn't load the record - maximizie portlet in portal with 4.0.2a? - Viewport max shown items? - Change Grid with CheckboxModel default on itemclick behaiour - window eventhandler not called from controller - What about using HTML 5 Database Storage in ExtJS Storage Proxy? - selectFirstRow() failing - NOT the FAQ - Where is my TwinTriggerField? - Testing if a drag is in progress, how? 
- Multiple views with same store - Store sync callback
https://www.sencha.com/forum/archive/index.php/f-87-p-5.html?s=1a65831267d40a768b652707be7d7f07
The official source of information on Managed Providers, DataSet & Entity Framework from Microsoft

1 - Open the POCOWalkthru project

This project is a Console Application that has two files, "Program.cs" and "Blogging.edmx". The "Blogging.edmx" is a Model First Entity Framework model file for a blogging application. If you open "Blogging.edmx", this is what you will see:

2 - Create an empty Database

In Visual Studio click the "View" menu and select "Server Explorer", which will show something like this: In "Server Explorer" right click on "Data Connections" and select "Create New SQL Server Database…". The following dialog will appear: Set "Server name" to ".\SQLEXPRESS" to target your local SQL Server Express installation, enter "Blogging" for the new database name, and click OK.

3 - Generate the Database Definition Language script (DDL) for the Blogging Model

Right click on the "Blogging.edmx" canvas and select "Generate Database Script from Model..." from the context menu: This will bring up this screen:

4 - Create and Select a connection to your empty "Blogging" database

On this screen you create or choose a new connection to your "Blogging" database. If the option is enabled, select "No, exclude sensitive data from the connection string. I will set it in my application Code". Then click the "Next" button, which will produce DDL for "Blogging.edmx" and display it on this screen:

5 - Add the DDL file to your project

Click "Finish" and a new file will be added to your project called "Blogging.edmx.sql": The contents of the file will look something like this:

6 - Execute the DDL using OSQL (or SQL Server Management Studio if you have it installed)

Open a Command Prompt in your "POCOWalkthru" project directory, where "Blogging.edmx.sql" is located, and execute the following command:

OSQL -E -S ".\SQLEXPRESS" -i Blogging.edmx.sql

This command will add the necessary tables and foreign key relationships to your new "Blogging" database.
7 - Add New Artifact Generation Item

Right click on the "Blogging.edmx" canvas and select "Add New Artifact Generation Item..."

8 - Select the POCO Template

This will bring up the Artifact Selection Screen where you can choose which template you wish to use. On this screen select "EntityFramework POCO Code Generator", rename your template "Blogging.tt", and click "Add".

9 - Dismiss the Security Warning

A Security Warning will appear: The template comes from a trusted source (Microsoft), so click "OK". Your project will now look like this:

T4 stands for Text Template Transformation Toolkit, and is a template engine that ships with Visual Studio. The Entity Framework POCO Template leverages T4 to allow you to customize code generation. When you choose the POCO Template, two T4 template files are added to your project. In this case one is called "Blogging.Context.tt" and the other is called "Blogging.Types.tt". If you wish to modify the generated code, you simply modify one or both of these templates.

The "Blogging.Types.tt" file is responsible for generating a file for each EntityType and ComplexType in the "Blogging.edmx" model. It also generates a file called "Blogging.Types.cs", which contains a FixupCollection class used by the POCO classes to keep the opposite ends of a relationship in sync. For example, in the model we've been using, when you set a Comment's Author to a particular person, the FixupCollection class ensures that the Person's Comments collection contains the Comment too.

The second template ("Blogging.Context.tt") produces a strongly typed ObjectContext for the "Blogging.edmx" model. You use this strongly typed ObjectContext to interact with your database. Note that each time you edit any T4 template the dependent files are regenerated, so you shouldn't edit the dependent files directly, or your changes will be lost. The primary goal of the POCO template is to produce persistence ignorant entity classes.
However, the strongly typed ObjectContext derives from ObjectContext, which is an Entity Framework class, so this template must live in a project with a reference to the Entity Framework. By splitting the template into two, one part that generates the Entities and ComplexTypes and one that generates a strongly typed context, it becomes possible not only to have Entities and ComplexTypes that are persistence ignorant, but further to put those classes in an assembly / project that has no persistence aware code in it at all. The next few steps show how to do this.

11 – Create a new Class Library project in the solution called Entities:

12 – In the POCOWalkthru project add a reference to the Entities project:

13 – Move the "Blogging.Types.tt" file into the Entities project

14 – Edit the "Blogging.Types.tt" to fix the link back to the Model

Since we have moved the template, the relative location of the model has changed, and we need to fix the template so its link back to the model is correct again. To do this you modify this line in the template from:

string inputFile = @"Blogging.edmx";

To:

string inputFile = @"..\POCOWalkthru\Blogging.edmx";

This is simply a relative path from the template's new location to the Blogging.edmx file in the other project. Once you've done this, save the template; this will regenerate the POCO entity classes, so in Solution Explorer your solution should now look like this: Notice the Entities project has no reference to System.Data.Entity (aka the Entity Framework), and is completely persistence ignorant.

15 – Edit the "Blogging.Context.tt" file to use the "Entities" namespace

The classes in the Entities project are now in the "Entities" namespace rather than the "POCOWalkthru" namespace, so we need to modify the template that produces the strongly typed context so that the generated class uses the "Entities" namespace, or the class won't compile.
16 - Add and Query for data in the Blogging Database

Now that we are producing POCO classes, it is time to verify that we can add some data to the database and get it back again using our POCO classes and the Entity Framework. Add this using to the Program.cs file:

using Entities;

Then modify the Main() method by adding this code:

using (BloggingContainer ctx = new BloggingContainer())
{
    Person person = new Person();
    person.EmailAddress = "billg@microsoft.com";
    person.Firstname = "Bill";
    person.Surname = "Gates";
    ctx.People.AddObject(person);
    ctx.SaveChanges();
    Person query = ctx.People.First();
    Console.WriteLine("Saved and Found {0}", query.Firstname);
}

17 - Compile the project and run it, and you should see this:

To modify how the POCO entities behave, you can either add a partial class to extend the class produced by the template, or you can modify the templates themselves. Indeed, this is the whole point of this template based solution. Modifying the T4 template files is pretty simple, but it is beyond the scope of this basic walkthrough. You can expect us to release some customization examples soon.

T4 provides a mechanism by which you can write common utilities that you share across templates, in something called include files. The POCO template ships with a very useful include file that can be found in the installation directory here: %INSTALLDIR%\Includes\EF.Utilities.ctp.CSD.ttinclude This include provides a number of utility features that make writing T4 templates easier, including:

The first version of the POCO Template that works with .NET 4.0 Beta 1 has one major known limitation: the POCO template generates entities that don't support change notification proxies. Reasoning: there is a known bug in Beta 1 of .NET 4.0 that causes problems if notification proxies are used in conjunction with entity classes that do their own fix-up.
Since the classes generated by the POCO template do their own fix-up, via the FixupCollection, customers using this template would immediately run into this bug if the generated entities also supported change notification proxies. To support change notification proxies, every property in the entity class must be virtual, so to avoid this we consciously generated non-virtual properties. The next release of the template for Beta 2 will generate virtual properties again.

- Alex James
Program Manager, Entity Framework
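The fix-up behavior described above is language-neutral. As a rough illustration (a Python sketch rather than the generated C#, with hypothetical Person and Comment classes standing in for the generated entities), setting one end of a bidirectional association can keep the other end in sync like this:

```python
# Sketch of the "fix-up" idea the POCO template generates: assigning a
# Comment's author also updates the Person's comments collection, so the
# two ends of the association never disagree. Names are illustrative only.
class Person:
    def __init__(self, name):
        self.name = name
        self.comments = []

class Comment:
    def __init__(self, text):
        self.text = text
        self._author = None

    @property
    def author(self):
        return self._author

    @author.setter
    def author(self, person):
        if self._author is person:
            return
        if self._author is not None:
            self._author.comments.remove(self)  # detach from the old author
        self._author = person
        if person is not None and self not in person.comments:
            person.comments.append(self)        # fix up the opposite end
```

With this in place, `comment.author = bill` is enough for `bill.comments` to contain the comment as well, which is exactly the guarantee FixupCollection provides for the generated entities.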
http://blogs.msdn.com/b/adonet/archive/2009/06/22/feature-ctp-walkthrough-poco-templates-for-the-entity-framework.aspx
Goal: Demonstrate how to keep WPF UIs responsive, giving the user feedback through TextBlocks and a ProgressBar.

What is a non-responsive UI? Surely, we've all witnessed a Windows Forms or WPF application that "locks up" from time to time. Have you ever thought of why this happens? In a nutshell, it's typically because the application is running on a single thread. Whether it's updating the UI or running some long process on the back end, such as a call to the database, everything must get into a single-file line and wait for the CPU to execute the command. So, when we are making that call to the database that takes a couple of seconds to run, the UI is left standing in line waiting, unable to update itself, and thus "locking up".

How can this unresponsive UI problem be resolved? Whether it's a Windows Forms or WPF application, the UI updates on the main or primary thread. In order to keep this thread free so the UI can remain responsive, we need to create a new thread to run any large tasks on the back end. The classes used to accomplish this have evolved over the different releases of the .NET Framework, becoming easier and richer in capabilities. This, however, can cause some confusion. If you do a simple Google search on C# or VB and asynchronous, or something similar, you are sure to get results showing many different ways of accomplishing asynchronous processing. The answer to the question, "which one do I use?" of course depends on what you're doing and what your goals are. Yes, I hate that answer also. Since I cannot possibly cover every asynchronous scenario, what I would like to focus on in this article is what I have found myself needing asynchronous processing for the majority of the time. That would be keeping the UI of a WPF application responsive while running a query on the database. Please note that with some minor modifications, the code in this article and in the downloadable source code can be run for a Windows Forms application also.
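The idea described above, moving the long task to a secondary thread so the primary thread stays free, can be shown with a minimal language-neutral sketch (Python's threading module here, standing in for the .NET threading classes):

```python
# A language-neutral sketch of offloading a long-running task: the "primary"
# thread starts a worker, stays free while the worker counts, and later picks
# up the result from a thread-safe queue. All names are illustrative.
import threading
import queue

def some_long_running_method(rows_to_iterate, results):
    """Mimics a long-running back-end call, e.g. a slow database query."""
    count = 0
    for _ in range(rows_to_iterate):
        count += 1
    results.put(count)  # hand the result back to the primary thread

results = queue.Queue()
worker = threading.Thread(target=some_long_running_method,
                          args=(100_000, results))
worker.start()
# The primary thread is free here to keep servicing "UI" events...
worker.join()
print(results.get())  # prints 100000
```

The rest of the article shows the same shape in VB with delegates, BeginInvoke, and the WPF Dispatcher.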
In addition, this article shows how to solve a specific problem with asynchronous programming; by no means, though, is this the only problem asynchronous programming is used for. To help demonstrate synchronous, asynchronous, and event-driven asynchronous processing, I will work through an application that progresses through several demos. The code in this article will be written in VB; however, the full source code download will be available in both C# and VB versions.

As I mentioned previously, what you do not want to do is run all your processing, both back-end and UI, on a single thread. This will almost always lead to a UI that locks up. You can download the demo application in both C# and VB versions. Run the application, and click the Start button under Synchronous Demo. As soon as you click the button, try to drag the window around your screen. You can't. If you try it several times, the window may even turn black, and you will get a "(Not Responding)" warning in the title bar. However, after several seconds, the window will unlock, the UI will update, and you can once again drag it around your screen freely. Let's look at this code to see what's going on. If you look at the code for this demo, you will see the following. First, we have a delegate, which is sort of like a function pointer, but with more functionality and type safety:

Delegate Function SomeLongRunningMethodHandler(ByVal rowsToIterate As Integer) As String

We could easily not use the delegate in this sample, and simply call the long running method straight from the method handler. In fact, if I didn't already know I was going to change this call to run asynchronously, I wouldn't use a delegate. However, by using the delegate, I can demonstrate how easy it is to go from a synchronous call to an asynchronous call. In other words, let's say you have a method that you may want to run asynchronously but you aren't sure.
By using a delegate, you can make the call synchronously now, and later switch to an asynchronous call with little effort. I'm not going to go into too much detail on delegates, but the key to remember is that the signature of the delegate must exactly match the signature of the Function (or Sub in VB) it will later reference. In this VB example, the delegate signature is for a Function that takes an Integer as a parameter and returns a String.

Next, we have the method handler for the Click event of the button. After resetting the TextBlock to an empty string, the delegate is declared. Then, the delegate is instantiated (yes, a class is created when you create a delegate). In this case, a pointer to the function to be called by the delegate is passed as a parameter to the constructor. What we now have is an instance of our delegate (synchronousFunctionHandler) that points to the function SomeLongRunningSynchronousMethod. If you move down one more line, you can see how this method is called synchronously by the delegate. The delegate instance we have is actually an instance of a class with several methods. One of those methods is called Invoke. This is how we synchronously call the method attached to the delegate. You may have also noticed the methods BeginInvoke and EndInvoke, if you used Intellisense.

Remember when I said that by using delegates we can easily move from synchronous to asynchronous? You now have a clue as to how, and we will get into the details of that soon. Going back to our synchronous example, you can see the Invoke method is called on the delegate instance. It is passed an integer as a parameter, and returns a string. That string is then assigned to a TextBlock to let the user know the operation is complete.
Private Sub SynchronousStart_Click(ByVal sender As System.Object, _
        ByVal e As System.Windows.RoutedEventArgs) _
        Handles synchronousStart.Click
    Me.synchronousCount.Text = ""
    Dim synchronousFunctionHandler As SomeLongRunningMethodHandler
    synchronousFunctionHandler = _
        New SomeLongRunningMethodHandler(AddressOf _
        Me.SomeLongRunningSynchronousMethod)
    Dim returnValue As String = _
        synchronousFunctionHandler.Invoke(1000000000)
    Me.synchronousCount.Text = _
        "Processing completed. " & returnValue & " rows processed."
End Sub

This is the function that the delegate calls. As mentioned earlier, it could have also been called directly without the use of a delegate. It simply takes an integer, and iterates that many times, returning the count as a string when completed. This method is used to mimic any long running process you may have.

Private Function SomeLongRunningSynchronousMethod( _
        ByVal rowsToIterate As Integer) As String
    Dim cnt As Double = 0
    For i As Long = 0 To rowsToIterate
        cnt = cnt + 1
    Next
    Return cnt.ToString()
End Function

The bad news is that implementing this demo synchronously causes an unresponsive UI. The good news is that by using a delegate, we have set ourselves up to easily move to an asynchronous approach and a responsive UI. Now, run the downloaded demo again, but this time, click the second Run button (Asynchronous Demo). Then, try to drag the window around your screen. Notice anything different? You can now click the button which calls the long running method and drag the window around at the same time, without anything locking up. This is possible because the long running method is run on a secondary thread, freeing up the primary thread to handle all the UI requests. This demo uses the same SomeLongRunningSynchronousMethod as the previous example. It will also begin by declaring and then instantiating a delegate that will eventually reference the long running method.
In addition, you will see a second delegate created with the name UpdateUIHandler, which we will discuss later. Here are the delegates and the event handler for the button click of the second demo:

Delegate Function AsyncMethodHandler( _
    ByVal rowsToIterate As Integer) As String

Delegate Sub UpdateUIHandler( _
    ByVal rowsupdated As String)

Private Sub AsynchronousStart_Click( _
        ByVal sender As System.Object, _
        ByVal e As System.Windows.RoutedEventArgs)
    Me.asynchronousCount.Text = ""
    Me.visualIndicator.Text = "Processing, Please Wait...."
    Me.visualIndicator.Visibility = Windows.Visibility.Visible
    Dim caller As AsyncMethodHandler
    caller = New AsyncMethodHandler( _
        AddressOf Me.SomeLongRunningSynchronousMethod)
    caller.BeginInvoke(1000000000, AddressOf CallbackMethod, Nothing)
End Sub

Notice that the event method starts out similar to the previous example. We set up some UI controls, then we declare and instantiate the first delegate. After that, things get a little different. Notice the call from the delegate instance "caller" to BeginInvoke. BeginInvoke is an asynchronous call, and replaces the call to Invoke seen in the previous example. When calling Invoke, we passed the parameter that both the delegate and the delegate method had in their signature. We do the same with BeginInvoke; however, there are two additional parameters passed which are not seen in the delegate or the delegate method signature. The two additional parameters are DelegateCallback of type AsyncCallback and DelegateAsyncState of type Object. Again, you do not add these two additional parameters to your delegate declaration or the method the delegate instance points to; however, you must address them both in the BeginInvoke call.

Essentially, there are multiple ways to handle asynchronous execution using BeginInvoke. The values passed for these parameters depend on which technique is used.
Some of these techniques include waiting on a WaitHandle, polling the IAsyncResult.IsCompleted property, and executing a callback method when the asynchronous call completes. We will use the last technique. We can use this method because the primary thread which initiates the asynchronous call does not need to process the results of that call. Essentially, what this enables us to do is call BeginInvoke to fire off the long running method on a new thread. BeginInvoke returns immediately to the caller, the primary thread in our case, so UI processing can continue without locking up. Once the long running method has completed, the callback method will be called and passed the results of the long running method as a type IAsyncResult. We could end everything here; however, in our demo, we want to take the results passed into the callback method and update the UI with them. You can see that our call to BeginInvoke passes an integer, which is required by the delegate and the delegate method as the first parameter. The second parameter is a pointer to the callback method. The final value passed is "Nothing", because we do not need to use the DelegateAsyncState in our approach. Also, notice that we are setting the Text and Visibility properties of the visualIndicator TextBlock here. We can access this control because this method is called on the primary thread, which is also where these controls were created.
Protected Sub CallbackMethod(ByVal ar As IAsyncResult)
    Try
        Dim result As AsyncResult = CType(ar, AsyncResult)
        Dim caller As AsyncMethodHandler = CType(result.AsyncDelegate, _
            AsyncMethodHandler)
        Dim returnValue As String = caller.EndInvoke(ar)
        UpdateUI(returnValue)
    Catch ex As Exception
        Dim exMessage As String
        exMessage = "Error: " & ex.Message
        UpdateUI(exMessage)
    End Try
End Sub

In the callback method, the first thing we need to do is get a reference to the calling delegate (the one that called BeginInvoke), so that we can call EndInvoke on it and get the results of the long running method. EndInvoke will always block further processing until BeginInvoke completes. However, we don't need to worry about that because we are in the callback method, which only fires when BeginInvoke has already completed.

Once EndInvoke is called, we have the result of the long running method. It would be nice if we could then update the UI with this result; however, we cannot. Why? The callback method is still running on the secondary thread. Since the UI objects were created on the primary thread, they cannot be accessed on any thread other than the one which created them. Don't worry though; we have a plan which will allow us to still accomplish updating the UI with data from the asynchronous call. After EndInvoke is called, the Sub UpdateUI is called and is passed the return value from EndInvoke. Also notice that this method is wrapped in a Try-Catch block. It is considered a good coding standard to always call EndInvoke and to wrap that call in a Try-Catch if you wish to handle the exception. This is the only positive way to know that the asynchronous call made by BeginInvoke completed without any exceptions.
Sub UpdateUI(ByVal rowsUpdated As String)
    Dim uiHandler As New UpdateUIHandler(AddressOf UpdateUIIndicators)
    Dim results As String = rowsUpdated
    Me.Dispatcher.Invoke(Windows.Threading.DispatcherPriority.Normal, _
        uiHandler, results)
End Sub

Sub UpdateUIIndicators(ByVal rowsupdated As String)
    Me.visualIndicator.Text = "Processing Completed."
    Me.asynchronousCount.Text = rowsupdated & " rows processed."
End Sub

Next, we can see the UpdateUI method. It takes as a parameter the return value from EndInvoke in the callback method. The first thing it does is to declare and instantiate a delegate. This delegate is a Sub, and takes a single parameter of type String. Of course, this means that the function pointer it takes in its constructor must also point to a Sub with the exact same signature. For our demo, that would be the UpdateUIIndicators Sub. After setting up the delegate, we place the UpdateUI parameter into a string. This will eventually be passed into the dispatcher's Invoke call.

Next, you will see the call to Invoke. We could have also used a call to BeginInvoke here, but since this method is only updating two UI properties, it should run quickly and without the need for further asynchronous processing. Notice that the call to Invoke is run off Me.Dispatcher. The dispatcher in WPF is the thread manager for your application. In order for the background thread to update the UI controls on the primary thread, the background thread must delegate the work to the dispatcher which is associated with the UI thread. This can be done by calling the asynchronous method BeginInvoke, or the synchronous method Invoke as we have done, off the dispatcher. Finally, the Sub UpdateUIIndicators takes the results passed into it and updates a TextBlock on the UI. It also changes the text on another TextBlock to indicate that processing has completed.
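The same shape as the BeginInvoke/callback/Dispatcher pattern above can be sketched outside of .NET. In this Python stand-in (all names hypothetical), the callback runs on the worker thread and queues the UI update for the "dispatcher" (main) thread to execute:

```python
# Sketch of the callback pattern: the worker thread finishes, its callback
# runs on that worker thread, and the actual UI update is marshaled back by
# queuing a work item that the "dispatcher" (main) thread executes.
import threading
import queue

dispatcher = queue.Queue()  # stands in for the WPF Dispatcher's work queue
ui_text = []                # stands in for the TextBlock being updated

def long_running(rows):
    """Mimics the long-running delegate method."""
    count = 0
    for _ in range(rows):
        count += 1
    return count

def callback(result):
    # Runs on the worker thread: don't touch UI state here; queue the
    # update so the "dispatcher" (main) thread performs it instead.
    dispatcher.put(lambda: ui_text.append(f"{result} rows processed."))

def begin_invoke(rows):
    """Rough analogue of delegate.BeginInvoke(rows, AddressOf CallbackMethod, Nothing)."""
    worker = threading.Thread(target=lambda: callback(long_running(rows)))
    worker.start()
    return worker

begin_invoke(1000)
# The main thread's "message loop": execute the next queued work item.
dispatcher.get(timeout=5)()
```

The key design point is the same as in the article: only one thread ever touches UI state, and every other thread hands it work instead of mutating it directly.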
We have now successfully written a responsive multi-threaded WPF application. We have done it using delegates, BeginInvoke, EndInvoke, callback methods, and the WPF Dispatcher. Not a ton of work, but more than a little. However, this traditional approach to multithreading can now be accomplished using a much simpler WPF asynchronous approach.

There are many approaches to writing asynchronous code. We have already looked at one such approach, which is very flexible should you need it. However, as of .NET 2.0, there is what I would consider a much simpler and safer approach. The System.ComponentModel.BackgroundWorker provides us with a nearly fail-safe way of creating asynchronous code. Of course, the abstraction which provides this simplicity and safety usually comes at a cost, which is flexibility. However, for the task of keeping a UI responsive while a long process runs on the back end, it is perfect. In addition, it provides events to handle messaging for both progress tracking and cancellation with the same level of simplicity.

Consider the following method, which we have decided to spin off on a separate thread so that the UI can remain responsive.

Private Function SomeLongRunningMethodWPF() As String
    Dim iteration As Integer = CInt(100000000 / 100)
    Dim cnt As Double = 0
    For i As Long = 0 To 100000000
        cnt = cnt + 1
        If (i Mod iteration = 0) And (backgroundWorker IsNot Nothing) _
                AndAlso backgroundWorker.WorkerReportsProgress Then
            backgroundWorker.ReportProgress(i \ iteration)
        End If
    Next
    Return cnt.ToString()
End Function

Notice there is also some code to keep track of the progress. We will address this as we get to it; for now, just keep in mind we are reporting progress to the backgroundWorker.ReportProgress method.
Using the BackgroundWorker and the event driven model, the first thing we need to do is create an instance of the BackgroundWorker. There are two ways to accomplish this task: declaratively in XAML, or in code. I will quickly demonstrate the latter method, but for the remainder of the demo, we will use the declarative approach. First, you must reference the namespace for System.ComponentModel.

<Window x:Class="AsynchronousDemo" xmlns="" xmlns:x="" xmlns:

Then, you can create an instance of the BackgroundWorker. Since there is no UI element, you can drop this XAML anywhere on the page.

<Window.Resources>
    <cm:BackgroundWorker x:Key="backgroundWorker"
                         WorkerReportsProgress="True" />
</Window.Resources>

In code, we could accomplish the same thing:

Private WithEvents backgroundWorker As New BackgroundWorker()

Next, we need something to call the long running process to kick things off. In our demo, we will trigger things with the Click event of the button. Here's the method handler that gets called and starts things off:

Private Sub WPFAsynchronousStart_Click(ByVal sender As System.Object, _
        ByVal e As System.Windows.RoutedEventArgs)
    Me.wpfCount.Text = ""
    Me.wpfAsynchronousStart.IsEnabled = False
    backgroundWorker.RunWorkerAsync()
    wpfProgressBarAndText.Visibility = Windows.Visibility.Visible
End Sub

Let's go through what's happening in the button click event. First, we clear out any text that's in our TextBlock used for displaying messages on the UI, and set the IsEnabled state of the two buttons. Next, we call RunWorkerAsync, which fires off a new thread and begins our asynchronous process. The event that is raised by RunWorkerAsync is DoWork. DoWork, which runs on a new thread, provides us a place to call our long running method. RunWorkerAsync also has a second overload, which takes an Object. This object can be passed to the DoWork method, and used in further processing.
Note that we do not need any delegates here, and we do not need to create any new threads ourselves.

When the button is clicked, we are also capturing that event in a Storyboard located in the XAML. This Storyboard triggers the animation directed at a ProgressBar, which runs until the asynchronous process has completed.

<StackPanel.Triggers>
    <EventTrigger RoutedEvent="Button.Click" SourceName="wpfAsynchronousStart">
        <BeginStoryboard Name="myBeginStoryboard">
            <Storyboard Name="myStoryboard"
                        Storyboard.TargetName="wpfProgressBar"
                        Storyboard.TargetProperty="Value">
                <DoubleAnimation From="0" To="100" Duration="0:0:2"
                                 RepeatBehavior="Forever" />
            </Storyboard>
        </BeginStoryboard>
    </EventTrigger>
</StackPanel.Triggers>

Private Sub backgroundWorker_DoWork(ByVal sender As Object, _
        ByVal e As DoWorkEventArgs) _
        Handles backgroundWorker.DoWork
    Dim result As String
    result = Me.SomeLongRunningMethodWPF()
    e.Result = result
End Sub

There are a few important things to note about DoWork. First, as soon as this method is entered, a new thread is spun off from the managed CLR threadpool. Next, it is important to remember that this is a secondary thread, so the same rules apply for not being able to update the UI controls which were created on the primary thread. Remember, in our long running process, I noted that we were tracking progress? Specifically, at each 1% increment of the loop, we were calling:

backgroundWorker.ReportProgress(i \ iteration)

The method ReportProgress is wired up to raise the BackgroundWorker's ProgressChanged event.

Private Sub backgroundWorker_ProgressChanged(ByVal sender As Object, _
        ByVal e As System.ComponentModel.ProgressChangedEventArgs) _
        Handles backgroundWorker.ProgressChanged
    Me.wpfCount.Text = _
        CStr(e.ProgressPercentage) & "% processed."
End Sub

We are using this method to update a TextBlock with the current iteration count.
Note that because this method runs on the Dispatcher thread, we can update the UI components freely. This is obviously not the most practical means of using the ProgressChanged event; however, I wanted to simply demonstrate its use. Once processing has completed in the DoWork method, the dispatcher thread's RunWorkerCompleted method is called. This gives us an opportunity to handle the RunWorkerCompletedEventArgs.Result, which was passed in from DoWork.

Private Sub backgroundWorker_RunWorkerCompleted(ByVal sender As Object, _
        ByVal e As RunWorkerCompletedEventArgs) _
        Handles backgroundWorker.RunWorkerCompleted
    wpfProgressBarAndText.Visibility = Windows.Visibility.Collapsed
    Me.wpfCount.Text = "Processing completed. " & _
        CStr(e.Result) & " rows processed."
    Me.myStoryboard.Stop(Me.lastStackPanel)
    Me.wpfAsynchronousStart.IsEnabled = True
End Sub

In the RunWorkerCompleted event, we first hide the progress bar and the progress bar status text, since our long running operation has completed. We can also enable the Start button so the demo can be run again. As noted previously, we can access these UI elements here because we are back on the primary thread (the Dispatcher thread). The downloadable code, which is available in both C# and VB, also contains code which handles the CancelAsync method. This demonstrates how you can give the user the ability to cancel a long running process, should they decide it's not worth waiting for. In most applications, once the user starts a process, they are stuck waiting for it to complete. However, since this post has already run very long, I have decided to not include it here in the article.
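The BackgroundWorker shape described above (a DoWork body, a progress callback, a completion callback) is easy to mimic outside .NET. A minimal Python sketch, with stand-in names rather than the real .NET API:

```python
# Sketch of the BackgroundWorker event model: do_work runs on a worker
# thread, and it reports progress and completion through callbacks that a
# real UI framework would marshal onto its dispatcher thread.
import threading

class BackgroundWorkerSketch:
    """Hypothetical stand-in for System.ComponentModel.BackgroundWorker."""

    def __init__(self, do_work, on_progress, on_completed):
        self.do_work = do_work            # runs on the worker thread
        self.on_progress = on_progress    # analogue of ProgressChanged
        self.on_completed = on_completed  # analogue of RunWorkerCompleted

    def run_worker_async(self):
        def run():
            result = self.do_work(self.on_progress)
            self.on_completed(result)
        worker = threading.Thread(target=run)
        worker.start()
        return worker

def work(report_progress):
    total = 100
    for i in range(total):
        if i % 25 == 0:
            report_progress(i)  # analogue of backgroundWorker.ReportProgress(...)
    return total

progress_seen, completed = [], []
worker = BackgroundWorkerSketch(work, progress_seen.append, completed.append)
worker.run_worker_async().join()
```

As in the article, the worker body never touches UI state directly; it only raises the progress and completion callbacks, and the owner of the UI decides how to render them.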
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
http://www.codeproject.com/Articles/29925/Responsive-UIs-for-WPF-Applications-Using-Asynchro
IoT Based Bidirectional Visitor Counter using ESP8266 & Blynk

Overview: IoT based Visitor Counter

Today in this project, we will make an IoT Based Bidirectional Visitor Counter using NodeMCU ESP8266 & the Blynk IoT cloud. This project is useful for monitoring the total number of visitors entering, the total number exiting, and the number of visitors currently present, from any part of the world using the Blynk IoT cloud platform. Infrared or IR Sensors are used to count the total number of incoming and outgoing visitors. The visitors' data is uploaded automatically to the Blynk cloud using the NodeMCU ESP8266 Wi-Fi Module. You can use this ESP8266 NodeMCU based IoT Bidirectional Visitor Counter in a hall, shopping mall, or office entrance gate to count the total number of visitors. This device counts the total number of visitors entering through the entrance gate & also counts the total number of visitors exiting through the exit gate. Basically, it calculates the number of current visitors by subtracting the outgoing visitors from the incoming visitors. When a single person enters the room, the light turns on automatically. Whenever there are no visitors present in the room, the light turns off automatically. Previously, we made the Visitor Counter Project using Arduino & an OLED Display. But this time we will send the data to the Blynk cloud instead of simply watching it on the OLED Display.

Components Required

We can make this IoT Bidirectional Visitor Counter using the ESP8266 Wi-Fi Module, a pair of IR Sensors, an SSD1306 OLED Display, and a Relay Module. The list of components that you need for creating an IoT Bidirectional Visitor Counter is as follows. Learn more about IR sensors as visitor detectors from our previous post.

IoT Bidirectional Visitor Counter Circuit or Schematic

The Circuit Diagram for the IoT Bidirectional Visitor Counter using NodeMCU ESP8266 is very simple.
I designed the circuit diagram for this project using Fritzing software. Connect the I2C pins (SDA & SCL) of the 0.96″ SSD1306 OLED display to the NodeMCU D2 & D1 pins. Interface the output pins of the pair of IR sensors to the D5 & D6 pins of the NodeMCU. We will use one of the IR sensors at the entrance gate for counting incoming visitors and the other at the exit gate for counting outgoing visitors. Similarly, connect a 5V relay module to the D4 pin of the ESP8266. Both the IR sensors and the relay module work from a 5V supply, which you can take from the NodeMCU Vin pin.

PCB Designing & Ordering

You can simply assemble the circuit on a breadboard. But if you don't want to assemble the circuit on a breadboard, you can follow this schematic and build your own PCB. You can download the Gerber file of my PCB design from the link attached below. The PCB looks like the image shown below. I provided the Gerber file for the IoT based bidirectional visitor counter with automatic light control system PCB below. You can simply download the Gerber file and order your custom PCB from NextPCB. Visit the NextPCB official website by clicking here. Simply upload your Gerber file to the website and place an order. The reason most people trust NextPCB for PCB & PCBA services is their quality. The PCB quality is superb and has a very high finish, as expected.

More Interesting Projects
- IoT Fall Detector Using MPU6050 & ESP8266
- Portable Wi-Fi Repeater using ESP8266 NodeMCU
- Connect RFID to PHP & MySQL Database with NodeMcu ESP8266
- Interfacing PIR Sensor with ESP8266 and EasyIoT
- IoT Based RFID Attendance System using ESP32
- Internet Clock Using NodeMCU ESP8266 and 16×2 LCD without RTC Module

Setting Up Blynk Application

We need to set up the Blynk app to receive the visitor counter data from the ESP8266 NodeMCU board. To set it up, download & install the application on your smartphone. It is available on both the Android Play Store and the iOS App Store.
Open the app & create a new account using your email address.
- Click on create a new project.
- Provide the name of your project as "IoT Visitor Counter".
- Choose NodeMCU as the dev board.
- Select the connection type as Wi-Fi, then click on the Create button.
- The Blynk authentication token will be sent to your email address. (We will need it later when programming.)
- Now, click on the (+) icon at the top right corner of the screen.
- Search for the "Value Display" widget and add 3 of them to your main screen.
- Also, add the Super Chart widget to the main screen.
- Click on the first Value Display.
- Name it "Visitors Now".
- Set the input pin to virtual pin V3, enter the input range & choose the refresh rate as 1 sec.
- You can set the colors and font size according to your needs.

Similarly, do the same for Visitors In and Visitors Out with the help of the images below. Now, click on the Super Chart widget and name it "Visitors Graphs". Add three datastreams, Visitors In, Out, and Now, with their respective virtual pins. With that, the Blynk app setup for the IoT bidirectional visitor counter using NodeMCU ESP8266 is complete.

Source Code/Program

The source code for the ESP8266 based IoT bidirectional visitor counter and automatic light control system is given below. The code requires the SSD1306 & Adafruit GFX libraries & the Blynk library for compilation. First, download the required libraries and add them to the Arduino IDE. In this part of the code, change the Wi-Fi SSID, password & Blynk authentication token. Use the same token that was sent to your email address. You can then copy the code and upload it to the NodeMCU ESP8266 board.

char auth[] = "xxxxx-xxxxx-xxxx"; // You should get Auth Token in the Blynk App.
char ssid[] = "xxxxx-xxxxx-xxxx"; // Your WiFi credentials.
char pass[] = "xxxxx-xxxxx-xxxx";

Final Program Code for the IoT based Bidirectional Visitor Counter & Automatic Light Control using NodeMCU.
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>
#include <Blynk.h>
#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>

char auth[] = "xxxxx-xxxxx-xxxx"; // You should get Auth Token in the Blynk App.
char ssid[] = "xxxxx-xxxxx-xxxx"; // Your WiFi credentials.
char pass[] = "xxxxx-xxxxx-xxxx";

#define inSensor 14  //D5
#define outSensor 12 //D6

int inStatus;
int outStatus;

int countin = 0;
int countout = 0;

int in;
int out;
int now;

#define relay 0 //D3

// 0.96" 128x64 I2C OLED display
Adafruit_SSD1306 display(128, 64, &Wire, -1);

WidgetLED light(V0);

void setup()
{
  Serial.begin(115200);
  Blynk.begin(auth, ssid, pass);
  delay(1000); // wait a second
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C); // initialize with the I2C addr 0x3C (128x64)
  delay(2000);
  pinMode(inSensor, INPUT);
  pinMode(outSensor, INPUT);
  pinMode(relay, OUTPUT);
  digitalWrite(relay, HIGH);
  Serial.println("Visitor Counter Demo");
  display.clearDisplay();
  display.setTextSize(2);
  display.setTextColor(WHITE);
  display.setCursor(20, 20);
  display.print("Visitor");
  display.setCursor(20, 40);
  display.print("Counter");
  display.display();
  delay(3000);
}

void loop()
{
  Blynk.run(); // Initiates Blynk
  inStatus = digitalRead(inSensor);
  outStatus = digitalRead(outSensor);

  if (inStatus == 0)
  {
    in = ++countin; // pre-increment so the first visitor is counted as 1
  }
  if (outStatus == 0)
  {
    out = ++countout;
  }

  now = in - out;

  if (now <= 0)
  {
    digitalWrite(relay, HIGH);
    light.off();
    display.clearDisplay();
    display.setTextSize(2);
    display.setTextColor(WHITE);
    display.setCursor(0, 15);
    display.print("No Visitor");
    display.setCursor(5, 40);
    display.print("Light Off");
    display.display();
    Serial.println("No Visitors! Light Off");
    delay(500);
  }
  else
  {
    digitalWrite(relay, LOW);
    light.on();
    // show the counts on the OLED
    display.clearDisplay();
    display.setTextSize(1);
    display.setTextColor(WHITE);
    display.setCursor(0, 0);
    display.print("Current Visitor: ");
    display.println(now);
    display.print("IN: ");
    display.println(in);
    display.print("OUT: ");
    display.println(out);
    display.display();
    Serial.print("Current Visitor: ");
    Serial.println(now);
    Serial.print("IN: ");
    Serial.println(in);
    Serial.print("OUT: ");
    Serial.println(out);
    delay(500);
  }

  Blynk.virtualWrite(V1, in);  // Visitors In
  Blynk.virtualWrite(V2, out); // Visitors Out
  Blynk.virtualWrite(V3, now); // Current Visitors
  delay(1000);
}

Testing the IoT ESP8266 Based Visitor Counter Project

The above code for the ESP8266 based IoT visitor counter with Blynk fulfills all the requirements of the project. Upload the code to the ESP8266 Wi-Fi module. Once you upload the code, the ESP8266 board will connect to the Blynk cloud using your Wi-Fi network. Once it connects, you can open the Serial Monitor and check all the details there. The Serial Monitor will display the connection status of the Wi-Fi network along with the IP address. After a successful Wi-Fi connection and a further connection to the Blynk IoT cloud, the data will update automatically at a 1-second interval. The Serial Monitor will also display the number of visitors entering, exiting, the current visitors & the relay status.

On the hardware side, when there are no visitors, the OLED display will simply show "No Visitors, Lights OFF". Similarly, when someone enters the room, the light will turn on automatically and the OLED display shows the number of incoming, outgoing, and current visitors.

Monitoring Visitors Status Online on the Blynk App

As soon as the device connects to Wi-Fi and the Blynk server, you will see the data appearing in the Blynk app. Here you will see the data for visitors; the data only changes when an event occurs. You will get a beautiful real-time dashboard as shown in the image below.

Video Tutorial & Guide

Conclusion

So, that's pretty much it for this tutorial. I hope you enjoyed making your IoT based bidirectional visitor counter using ESP8266 & Blynk.
If you did, don't forget to share this article with your friends. Want help? Let me know in the comment section below.

4 Comments

Can I get a BlynkSimpleShieldEsp8266.h library?

I just wonder how to use it in the real world. If I would like to place it in a school bus which has an entry door and an exit door: ideally one ESP8266 with two IR sensors, where the exit-checking sensor needs long wiring from the entry door to the exit door (1-3 metres)? Or two ESP8266s communicating together for checking visitors in and out?

Use one ESP8266 and attach a long wire to the IR sensor. It works.

Thanks so much for your answer
https://theiotprojects.com/iot-based-bidirectional-visitor-counter-using-esp8266-blynk/
Hi, very hard question to post here: I work on a project to write AIXM 5.1 AirspaceTimeSlice. In the writer, I have a field [xml_geometry]: Surface. Inside this field I can find a <gml:geometricLineString> nested structure and a <gml:arcbycenter> nested structure in different positions (not known when I read the feature). When I read the feature in FME, I am able to recognize an arc and put the definition of this geometry in gml fields in an XML templater. But I do not succeed in writing the <geometricLineString> definition. It shall contain <posList> elements for each point defining the FME line, e.g.:

<posList> Long(A) Lat(A) Long(B) Lat(B) ... Long(k) Lat(k) </posList>

I have the points in a list: Long {0} A, Lat {0} A, ... I do not succeed in writing the whole Surface structure afterwards by combining the different parts (the order is random).

Answer:

Assuming the lists Long{}.A and Lat{}.A contain the coordinates to be listed within the "posList" element, this is a possible expression to generate the required element:

<posList>{
  let $lat := fme:get-list-attribute("Lat{}.A")
  for $x at $i in fme:get-list-attribute("Long{}.A")
  return ($x, $lat[$i])
}</posList>
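The XQuery above simply interleaves the two FME list attributes element by element, longitude first. The same pairing logic, shown here as a small Python sketch for illustration (the function name and sample values are mine, not from FME), makes it easy to sanity-check the coordinate order before building the template:

```python
def build_pos_list(longs, lats):
    """Interleave longitude/latitude lists into a GML posList string."""
    if len(longs) != len(lats):
        raise ValueError("coordinate lists must have the same length")
    coords = []
    for lon, lat in zip(longs, lats):
        coords.extend([lon, lat])  # Long(k) Lat(k) pairs, longitude first
    return "<posList>" + " ".join(str(c) for c in coords) + "</posList>"

print(build_pos_list([1.5, 2.5], [47.0, 48.0]))
# -> <posList>1.5 47.0 2.5 48.0</posList>
```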
https://knowledge.safe.com/questions/58252/how-to-create-a-good-nested-structure-geometry-gml.html
So, I know that this code works, but I have several questions. 1) Is it practical to have multiple classes hold a reference to a single variable or class entity? 2) When writing a multithreaded program, will this technique break? 3) Is there a smarter or better approach than this simple code?

Code:
#include <iostream>
using namespace std;

class Increase
{
    int & numb;
public:
    Increase( int & n ) : numb(n) { }
    void by( unsigned int val ) { numb += val; }
};

class Decrease
{
    int & numb;
public:
    Decrease( int & n ) : numb(n) { }
    void by( unsigned int val ) { numb -= val; }
};

int main(void)
{
    int number = 20;

    Increase inc(number);
    Decrease dec(number);

    inc.by(20);
    cout << number << "\n";

    dec.by(10);
    cout << number << "\n";

    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/149943-questions-classes-reference-single-variable.html
Each VSPackage to be loaded in a deployed application must have a valid package load key (PLK). The PLK is uniquely related to the VSPackage and cannot be used to load any other VSPackage. PLKs are obtained from the Microsoft Visual Studio Industry Partner (VSIP) Web site. Although the PLK request form refers to your product name, description, and version, it actually means your VSPackage name, description, and version. Be careful when entering information to obtain your PLK. Incorrect information is the primary reason for PLK load failure.

1. Go to the Visual Studio Industry Partner Web site.

2. Sign in with your Passport Network credentials. Use the e-mail address and password that you use to sign in to the Passport Network. If you have no credentials, you can sign up here. The VSIP Affiliate home page appears.

3. Click the Products link.

4. If you have no existing product, create a new product by clicking Create New Product and filling in the requested product information. The company name that appears next to the product name is the company name you used when creating your VSIP membership. It must match the company name of your VSPackage. The Product URL is not used to generate the PLK. You can use any URL, but it has to be correctly formatted, for example,.

5. Click the lock icon to the right of your product, that is, your package. The PLK request form appears with the product name and description filled out.

6. Fill in the requested fields. Be careful not to add any trailing spaces.

   - Product Version (used to generate the PLK: Yes). Type in a product version, for example, 1.0. You can use any string.

   - Package Guid. This is the GUID attribute of the class that implements your VSPackage, for example:

     [Guid("EEE474A0-083B-4e9c-B453-F6FCCEDA2577")]
     public class MyProductName : MsVsShell.Package

     Note: Use the numeric value to fill in the field, excluding any brackets. For example: EEE474A0-083B-4e9c-B453-F6FCCEDA2577

   - Application Residency. Select a value from the drop-down list, for example, Microsoft Visual Studio .NET 2003. PLKs generated for Visual Studio .NET 2003 will also work with Visual Studio 2005.

   - Minimum Product Edition. Select a value from the drop-down list, for example, Standard.

   - Resource DLL Filename (used to generate the PLK: No). Type in the name of your resource dll, for example, MPUI.dll. If you have no resource dll, leave this field blank. Note: This field is not used to generate the PLK.

7. Click Request PLK. A message appears that looks something like this: Your Package Load Key request is pending. You will receive a notification when it's been approved or denied. The notification you receive will provide the PLK.
http://msdn.microsoft.com/en-us/library/bb165395(VS.80).aspx
These days, in many families both the man and the woman work. Both want to succeed in life, and for this they have to work hard and spend more time at the office. Because of this, it becomes difficult for both of them to look after their children. Due to the pressure of their offices, many parents are not able to call their children and ask them whether they have reached the house safely from school or from any other place. If you are one of those parents, then I will help you to reduce your tension.

This project uses the MediaTek LinkIt ONE as the controller. A simple push button is attached to it. Keep the button near your door so that when your children enter the house they remember to push it. When they push the button, an SMS will be sent to your mobile number and the appliance attached to this project will be turned on. For example, when your children enter the house, they will look for the switch board to turn on the light. When they push the button to turn on the light, an SMS will be sent to you telling you that your children have reached home safely. I am sure that this will reduce your stress. Don't forget that you have to change the message in the code to the message which you want to receive when your children reach home. You can also use it to notify yourself when your life partner reaches home. This is a simple step taken by me to reduce the stress of various parents.

Step 1: Collect Parts

Here are the parts required to make this project. I have used some modules which you may not have, but there is no need to worry because they are basic modules which you can make yourself. In further steps I will tell you how to make those modules.
Here is the list:
- LinkIt ONE
- Relay module
- Push button module
- A SIM card
- LinkIt ONE GSM antenna
- LinkIt ONE battery
- Jump cables (male to female are preferred)
- A box or enclosure
- A USB cable to upload the code

Step 2: Insert the SIM

When the switch is pressed, an SMS is also sent to the registered mobile number. These days SMS can be sent through the internet, but the better option here is to use a network provider to send the SMS. For that you will need a mini size SIM card. Make sure that you buy a mini size SIM card, as the LinkIt ONE does not support micro or nano size SIM cards. Insert the SIM card into the SIM slot present at the back side of the board. Make sure that you insert it the right way.

Step 4: Connecting the GSM Antenna

The LinkIt ONE comes with three different types of antennas: one for GPS, one for WiFi and Bluetooth, and one for GSM. Since we are using only the GSM feature of the LinkIt ONE in this project, you will need only one antenna. The GSM antenna is rectangular in shape (the big one). Take that antenna and find the GSM port on the underside of the LinkIt ONE. Connect the antenna to that port (you may have to apply some force while connecting) and you are done.

Step 5: Connecting the Push Button Module

A push button module is a small module consisting of a push button and a resistor. If you don't have a push button module, you can make one on a small PCB. You only need a push button, a small piece of PCB and a 10K resistor. Connect the vcc of the push button module to the LinkIt ONE 5V, gnd to gnd, and Vout to digital pin 8. If you are making the module yourself, connect one pin of the push button to the LinkIt ONE vcc, the other pin to digital pin 8, and pull digital pin 8 down to gnd through the 10K resistor.

Step 6: Connecting the Relay Module
The appliance can not be connected directly to the linkit one board as the voltage output of linkit one is 3.7V whereas most of the appliance require 110V to 220V to work so we would be needing a relay which can switch the voltage of you linkit one from a 110V to 220V supply. For switching you can use a relay module like me or can make a module of your own by the circuit diagram attached in the images given above. Connect the vcc of the module to linkit one 5V , gnd of the module to linkit one gnd and Vout of the module to digital pin 11. Step 7: Making an Enclosure Choosing a correct enclosure is a important thing. You would be needing a enclosure of small size in which all your things can fit. Don't choose a enclosure in which you can not drill holes properly. Take a enclosure which is small in size, not too much thick, rigid, durable, with thin walls etc. After choosing the enclosure, drill two holes in it. One from which the hump cables of push buttons can pass and other from which the wire going to your appliance can pass. If forgot to tell you that your push button module would be outside your box whereas all other things are inside it. Place all the things inside the box and close it and proceed to next step. Step 8: Upload the Code Here is the code you need to upload to your board. Make sure that in the code you change the number "1234567890" from your mobile number and change the message from your message. After uploading the code proceed to next step where you would be told how to use it. 
Here is the code:

#include <LGSM.h>

int push = 8;
int relay = 11;
int state = 0;

void setup()
{
  Serial.begin(9600);
  while (!LSMS.ready())
    delay(1000);
  Serial.println("SIM ready for use");
  pinMode(push, INPUT);
  pinMode(relay, OUTPUT);
}

void loop()
{
  if (digitalRead(push) == HIGH && state == 0)
  {
    // First press: turn the appliance on and send the notification SMS
    digitalWrite(relay, HIGH);
    LSMS.beginSMS("1234567890");   // change this to your mobile number
    LSMS.print("I am here");       // please change this line for your message
    LSMS.endSMS();
    state = 1;
    delay(500);                    // crude debounce
  }
  else if (digitalRead(push) == HIGH && state == 1)
  {
    // Second press: turn the appliance off without sending an SMS
    digitalWrite(relay, LOW);
    state = 0;
    delay(500);
  }
}

Step 9: TEST

Now it's time to test your project. Power on your board and leave it for about 10 seconds so that your SIM card can connect to your telecom provider. Then push the button. After pushing it, the appliance connected to your board will be switched on and you will receive an SMS. When you push the button again, the appliance will be turned off, but this time you will not receive the SMS. So now your project is ready to be used.

Step 10: The End

Place this project at or outside your main door. When you reach your home, by simply pushing the button you can give your family a message that you have reached home successfully, thus reducing their tension. I hope you like the project. For any query or problem, comment below. You can change the project according to your use. Do remember to post your project photos also. THANK YOU

Participated in the Arduino All The Things! Contest

Discussions

3 years ago

This instructable prevents my mom from being like this:
https://www.instructables.com/id/Presence-Notifier-With-Push-Button/
Sometimes imposing constraints on the type of an object without requiring it to belong to a specific inheritance hierarchy is useful. These are usually referred to as concepts in the C++ community. This module lists the concepts commonly used in deal.II with brief descriptions of their intent. The convention in deal.II for listing constraints on a type is to provide the name of the concept as a typename in a template: for example, the type of a Vector depends on the type of the underlying field, and so it is defined as a template: The point here is that you are creating a vector that can store elements of type Number. But there are some underlying assumptions on this. For example, the deal.II Vector class is not intended to be used just as a collection (unlike std::vector) but defines vector space operations such as addition of vectors, or the norm of vectors. Consequently, the data type users can specify for Number must satisfy certain conditions (i.e., it must conform to or "model" a "concept"): Specifically, the type must denote objects that represent the elements of what mathematically call a "field" (which you can think of as, well, "numbers": things we can add, multiply, divide, take the absolute value of, etc). The point of a concept is then to describe what conditions a type must satisfy to be a valid template argument in a given context. This page describes these conditions for a number of concepts used throughout deal.II. Specifically, in the example above, the Number concept discussed below describes the types that could be used as argument for the Vector class. Concepts have been proposed as a language extension to C++ for a long time already. They would allow us to describe that a class or function has certain properties in order to be a qualified template argument. 
For example, it would allow us to express in C++ code that the first argument to, say, GridTools::find_closest_vertex(), must have a type that represents an actual mesh – which we can currently only describe in words, see below. Using C++ concepts would allow us to describe this in code and trying to call such a function with an object as first argument that is not, in fact, a mesh would yield a compiler error that makes the mismatch clear. Unfortunately, these proposals to C++ have never made it into any official C++ standard; they are proposed for C++20 however. We may start to use them once the vast majority of our users have compilers that support this standard. More information on the topic can be found at this wikipedia page. deal.II includes both DoFHandler and hp::DoFHandler as objects which manage degrees of freedom on a mesh. Though the two do not share any sort of inheritance relationship, they are similar enough that many functions just need something which resembles a DoFHandler to work correctly. Many functions and classes in deal.II require an object which knows how to calculate matrix-vector products (the member function vmult), transposed matrix-vector products (the member function Tvmult), as well as the `multiply and add' equivalents vmult_add and Tvmult_add. Some functions only require vmult and Tvmult, but an object should implement all four member functions if the template requires a MatrixType argument. Writing classes that satisfy these conditions is a sufficiently common occurrence that the LinearOperator class was written to make things easier; see Linear Operators for more information. 
One way to think of MatrixType is to pretend it is a base class with the following signature (this is nearly the interface provided by SparseMatrix): Template functions in C++ cannot be virtual (which is the main reason why this approach is not used in deal.II), so implementing this interface with inheritance will not work, but it is still a good way to think about this template concept. One can use the PointerMatrixAux class to implement vmult_add and Tvmult_add instead of implementing them manually. Meshes can be thought of as arrays of vertices and connectivities, but a more fruitful view is to consider them as collections of cells. In C++, collections are often called containers (typical containers are std::vector, std::list, etc.) and they are characterized by the ability to iterate over the elements of the collection. The MeshType concept refers to any container which defines appropriate methods (such as DoFHandler::begin_active()) and typedefs (such as DoFHandler::active_cell_iterator) for managing collections of cells. Instances of Triangulation, DoFHandler, and hp::DoFHandler may all be considered as containers of cells. In fact, the most important parts of the public interface of these classes consists simply of the ability to get iterators to their elements. Since these parts of the interface are generic, i.e., the functions have the same name in all classes, it is possible to write operations that do not actually care whether they work on a triangulation or a DoF handler object. Examples abound, for example, in the GridTools namespace, underlining the power of the abstraction that meshes and DoF handlers can all be considered simply as collections (containers) of cells. On the other hand, meshes are non-standard containers unlike std::vector or std::list in that they can be sliced several ways. 
For example, one can iterate over the subset of active cells or over all cells; likewise, cells are organized into levels and one can get iterator ranges for only the cells on one level. Generally, however, all classes that implement the containers-of-cells concept use the same function names to provide the same functionality. Functions that may be called with either class indicate this by accepting the mesh type as a template parameter. The classes that satisfy this concept are collectively referred to as mesh classes.

The exact definition of MeshType relies a lot on library internals, but it can be summarized as any class with the following properties:
- A typedef named active_cell_iterator.
- A method get_triangulation() which returns a reference to the underlying geometrical description (one of the Triangulation classes) of the collection of cells. If the mesh happens to be a Triangulation, then the mesh just returns a reference to itself.
- A method begin_active() which returns an iterator pointing to the first active cell.
- A static member variable dimension containing the dimension in which the object lives.
- A static member variable space_dimension containing the dimension of the object (e.g., a 2D surface in a 3D setting would have space_dimension = 2).

This concept describes scalars which make sense as vector or matrix entries, which is usually some finite precision approximation of a field element. The canonical examples are double and float, but deal.II supports std::complex<T> for floating point type T in many places as well.

See the description in Polynomials and polynomial spaces for more information. In some contexts, anything that satisfies a suitable polynomial-like interface may be considered as a polynomial for the sake of implementing finite elements.

This is essentially a synonym for MatrixType, but usually only requires that vmult() and Tvmult() be defined. Most of the time defining Tvmult() is not necessary. For the preconditioner classes, one should think of vmult() as applying some approximation of the inverse of a linear operator to a vector, instead of the action of the linear operator itself.
One should think of vmult() as applying some approximation of the inverse of a linear operator to a vector, instead of the action of a linear operator to a vector, for the preconditioner classes. This is an object capable of relaxation for multigrid methods. One can think of an object satisfying this constraint as having the following interface as well as the constraints required by MatrixType: where these two member functions perform one step (or the transpose of such a step) of the smoothing scheme. In other words, the operations performed by these functions are \(u = u - P^{-1} (A u - v)\) and \(u = u - P^{-T} (A u - v)\). Almost all functions (with the notable exception of SparsityTools::distribute_sparsity_pattern) which take a sparsity pattern as an argument can take either a regular SparsityPattern or a DynamicSparsityPattern, or even one of the block sparsity patterns. See Sparsity patterns for more information. Deriving new stream classes in C++ is well-known to be difficult. To get around this, some functions accept a parameter which defines operator<<, which allows for easy output to any kind of output stream. deal.II supports many different vector classes, including bindings to vectors in other libraries. These are similar to standard library vectors (i.e., they define begin(), end(), operator[], and size()) but also define numerical operations like add(). Some examples of VectorType include Vector, TrilinosWrappers::MPI::Vector, and BlockVector.
https://www.dealii.org/current/doxygen/deal.II/group__Concepts.html
import pandas as pd
import numpy as np

sex_ratios = pd.read_csv('m2f_ratios.csv', skiprows=8)
age_code = {a: i for i, a in enumerate(sex_ratios.Age.unique())}
age_label = {i: a for i, a in enumerate(sex_ratios.Age.unique())}
sex_ratios['AgeCode'] = sex_ratios.Age.apply(lambda x: age_code[x])
area_idx = sex_ratios.Area == \
    'United Kingdom of Great Britain and Northern Ireland'
years_idx = sex_ratios.Year <= 2015
sex_ratios_uk = sex_ratios[np.logical_and(years_idx, area_idx)]

Here we take care of the age coding and isolate the data for the United Kingdom and Northern Ireland. Now we can rearrange the data to see the ratio per year and age using a pivot table; we can then visualize the result using the heatmap function from seaborn:

import matplotlib.pyplot as plt
import seaborn as sns

pivot_uk = sex_ratios_uk.pivot_table(values='Ratio', index='AgeCode', columns='Year')
pivot_uk.index = [age_label[a] for a in pivot_uk.index.values]

plt.figure(figsize=(10, 8))
plt.title('Sex ratio per year and age groups')
sns.heatmap(pivot_uk, annot=True)
plt.show()

In each year we see that the ratio was above 1 (in favor of males) for young ages; it then becomes lower than 1 during adulthood and keeps lowering with age. It also seems that with time the ratio decreases more slowly. For example, we see that the age group 70-74 had a ratio of 0.63 in 1970, while the ratio in 2015 was 0.9.

I think you forgot "import matplotlib.pyplot as plt". Great job and thank you ;)

I actually did, thanks!
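The pivot_table call used above is the step that reshapes the long per-row data into the age-by-year grid the heatmap needs. A tiny self-contained example (toy numbers, not the actual UN data) shows what it does:

```python
import pandas as pd

# Long format: one row per (year, age-group) observation.
df = pd.DataFrame({
    'Year':    [1970, 1970, 2015, 2015],
    'AgeCode': [0,    1,    0,    1],
    'Ratio':   [1.05, 0.63, 1.06, 0.90],
})

# Wide format: age groups as rows, years as columns,
# ready to be handed to sns.heatmap.
pivot = df.pivot_table(values='Ratio', index='AgeCode', columns='Year')
print(pivot)
print(pivot.loc[1, 1970])   # -> 0.63
```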
http://glowingpython.blogspot.it/2017/06/a-heatmap-of-male-to-female-ratios-with.html
Consider this: the previous two sections showed an example of calling a simple remote SOAP method with one argument and one return value, both of simple data types. This required knowing, and keeping track of, the service URL, the service namespace, the function name, the number of arguments, and the datatype of each argument. If any of these is missing or wrong, the whole thing falls apart.

That shouldn't come as a big surprise. If I wanted to call a local function, I would need to know what package or module it was in (the equivalent of service URL and namespace). I would need to know the correct function name and the correct number of arguments. Python deftly handles datatyping without explicit types, but I would still need to know how many arguments to pass, and how many return values to expect. The big difference is introspection. As you saw in Chapter 4, Python excels at letting you discover things about modules and functions at runtime. You can list the available functions within a module, and with a little work, drill down to individual function declarations and arguments. WSDL lets you do that with SOAP web services.

WSDL stands for "Web Services Description Language". Although designed to be flexible enough to describe many types of web services, it is most often used to describe SOAP web services. A WSDL file is just that: a file. More specifically, it's an XML file. It usually lives on the same server you use to access the SOAP web services it describes, although there's nothing special about it. Later in this chapter, we'll download the WSDL file for the Google API and use it locally. That doesn't mean we're calling Google locally; the WSDL file still describes the remote functions sitting on Google's server.
A WSDL file contains a description of everything involved in calling a SOAP web service: - The service URL and namespace - The type of web service (probably function calls using SOAP, although as I mentioned, WSDL is flexible enough to describe a wide variety of web services) - The list of available functions - The arguments for each function - The datatype of each argument - The return values of each function, and the datatype of each return value In other words, a WSDL file tells you everything you need to know to be able to call a SOAP web service.
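Because a WSDL file is just XML, you can peek at the information it carries with nothing but the standard library. The fragment below is a tiny hand-written WSDL for a hypothetical service (not the Google API file the chapter uses); the sketch lists the operations declared in a portType:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical WSDL fragment (for illustration only).
WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
                       targetNamespace="http://example.org/temp">
  <portType name="TempPort">
    <operation name="getTemp">
      <input message="tns:getTempRequest"/>
      <output message="tns:getTempResponse"/>
    </operation>
  </portType>
</definitions>"""

NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}

def list_operations(wsdl_text):
    """Return the operation names declared in each portType."""
    root = ET.fromstring(wsdl_text)
    return [op.attrib["name"]
            for op in root.findall(".//wsdl:portType/wsdl:operation", NS)]

print(list_operations(WSDL))  # ['getTemp']
```

In practice a SOAP library does this parsing for you and builds callable proxies from the WSDL, but the underlying idea is the same: the function list lives in the XML.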
http://docs.activestate.com/activepython/2.7/dip/soap_web_services/wsdl.html
@KurtE Smiling and crying at the same time. Copying most of the stuff from your ILI9341_t3n library instead of Bodmer's. Only thing I can't decide on is if the Adafruit fonts would work or to just use Bodmer's. My "use two SPI buses" version, which has the large eye stuff... uncannyEyes_async_st7735-190827a.zip There is only one eye type based on the default one, which I choose by default if you are using ST7789. There are a couple of defines at the start. The first is the one that turns on using the DMA updates. Code:#define USE_ASYNC_UPDATES #define ST77XX_ON_SPI_SPI2 #define USE_ST7789 #define DEBUG_ST7789 The second is my defines for using the SPI2 setup which has the CS-challenged displays on it. And likewise using the ST7789 displays. Now the debug turn-on actually outputs info at the top and bottom of the eye which I am using to help debug. The first number, iScale, is the thing that is driving the problem I have. If it is below 400, maybe closer to 700, the eye draws wrong. Higher numbers draw better, but the iris is not as big as it should be... So some scaling issue... And with two months to go before Halloween, it might be useful to also add sound effects. I've always wanted to incorporate two distance sensors as well, so the eyes track people as they move. I've discovered if I let any of the uncanny eyes run for a long time, and I see them out of the corner of my eye, it can get a little creepy. If you don't count soldering, and doing all of the DIY, right now a Teensy 4.0 + 2 of the cheap non-CS displays is a little cheaper than the M4SK (~ $30 US vs. $45 US, not counting s/h). I am curious how you support many different displays, each with different pinouts.
Do you just re-jumper each display as you test it, or do you have a separate Teensy for each display always wired up, or do like I do, have a protoboard that you mount the Teensy of the day (or breadboard) with the SPI pins brought out in a specific order, and then use a custom cable that goes from that standard pinout to the particular display? Last edited by MichaelMeissner; 08-28-2019 at 03:51 PM. That is currently the new SSD1351_t3 library appears to be working great on T4 (), as I mentioned in the simple how many times in a second can it call fillScreen with different colors. With this library: 72 with Adafruit 48... So I thought I should at least try on T3.6... And I had compile errors, which I fixed, but the display is not showing up correctly.... The speed test ran and I think it was something like 71 on T3.6 The Adafruit run was 27... Hooked up Logic Analyzer and output looked correct, BUT: the output at about 20MHZ was too fast, changed the Frequency requested from 24000000 to 19000000 and display works correctly. So I could go at faster SPI speed on T4 then I could on T3.6, maybe because on T3.6 it is using hardware CS for both CS and DC and on T4 not. So maybe time delay between CS assert or ... Anyway running with set to 19mhz actual SPI speed about 14.71mhz. Frames per second 54... So latest change by default will probably lower speed of T4, but you can now pass in desired frequency on the tft.begin() method... Again wondering to self: Should we keep making our own Teensy versions of these libraries and/or see if we can integrate some of these speed ups into the Adafruit_GFX (spitft), such that more people can automatically get better performance on the Teensy boards). Of course they probably would not get all of the other stuff, like frame buffer, DMA, additional graphic primitives... @KurtE Guess that is always the question - Where to stop and say ok the basics are working do we need to expand it to do everything the other displays do. 
Took a while for me to decide to do that with the ST7789 board. Think the only reason I decided to do it was because (1) the display resolution was 240x320 on the board that I have and (2) was looking for a bit more functionality. So I wound up putting everything in that the ILI9341_t3n lib has as long as I was doing it - argh. But by doing it learned a bit more info. As for adding in that tft.begin(frequency) I actually think that's a good idea for some of the other libraries as well. Ran into an issue with the arducam where I had to slow down the SPI speed for the display. Then it worked great. May have to add that into the ST7735 lib as well About the wondering: Guess it could work - incorporate the basics of the Teensy speedups you did for adafruit so if you just want to use the adafruit gfx lib its there. If you want to use the additional features like frame buffer, DMA etc you would use the specific Teensy library. Since, Adafruit has its own machines with DMA and they now have an Adafruit_ZeroDMA library that abstracts this in some sense. If we can have the same public names (using a different include file), it may be easier in the future to import things from the Adafruit world to the Teensy world. I assume by now, we plan to keep the Teensy stuff in a new file, and not trying to keep the Adafruit library up to date with Teensy patches (i.e. the old ST7735 library). Of course from time to time, it will mean we have to update the *_t{3,4} libraries when Adafruit_GFX changes. @KurtE Is there a reason MISO was never defined in the constructors? @KurtE - @defragster I just created a new repository for my rewrites - haven't tested everything just trying to graphicstest.ino function working first. The only thing that seems to be broke is drawLine function. Other functions in graphicstest seem to be working. Took me so long because of my fat fingers and forgetting I had to change certain things around - argh. Anyway if you are interested my changes are here:. 
Pretty sure I started your Kurt's dma_low branch @KurtE - if you tried it I think the problem is with porting clipping over from ILI9341 library Update: got it solved - now for more testing Last edited by mjs513; 08-28-2019 at 10:16 PM. @mjs513 @MichaelMeissner - Sorry I have been MIA this afternoon. Weather is too nice to sit inside and too much beer at lunch. MISO - Have not been used in constructors as none of the display I have seen use them..... (Except a few of them use them for onboard SD cards...). Could be wrong, but I think these constructors came from the Adafruit stuff. Have not had a chance to play yet. Adafruit SPITFT library: I mentioned speed difference up on Adafruit library and got a response back: From Adafruit2... The question would be how far to take it...The question would be how far to take it...we'd love to have teensy 3/4 speedups in SPITFT.cpp, we chatted with paul about this but he got very busy. we did update all our libraries as he requested to use 'transactions' for sending commands and data. there's definitely room for speedups, even if its not as fast as can possibly be There are some obvious low hanging fruit like: The Teensy should call transfer16 instead of two calls to transfer.The Teensy should call transfer16 instead of two calls to transfer.Code:void Adafruit_SPITFT::SPI_WRITE16(uint16_t w) { if(connection == TFT_HARD_SPI) { #if defined(__AVR__) AVR_WRITESPI(w >> 8); AVR_WRITESPI(w); #elif defined(ESP8266) || defined(ESP32) hwspi._spi->write16(w); #else hwspi._spi->transfer(w >> 8); hwspi._spi->transfer(w); #endif } else if(connection == TFT_SOFT_SPI) { But I think we can also get lots of speedups, either directly going to registers or the like... @KurtE Glad you enjoyed your day - everybody needs a break sometime As for speed ups. 
Yes certainly for some boards you can speed up by doing transfer16's but maybe you should consider just exporting what you have been doing with all the display libraries that you have, i.e, use the registers like you said for the Teensies. On a side note almost got it all working but its like its stuck in 240x240 mode - still think I have to go over all the functions again in case I did anything dumb with the clipping. But at least it passes the graphicstest now. @KurtE Looks like I fixed the issues with clipping so I think all the graphic primitives are now work - so now trying to work on getting fonts working with the ST7735. But got a strange error associated with SPI that I have no clue how to trace where the problem is in the code unless I forgot to include something - maybe you have seen this before - I apologize for the length: Code:Arduino: 1.8.9 (Windows 10), TD: 1.47, Board: "Teensy 4.0, Serial, Faster, US English" In file included from F:\arduino-1.8.9\hardware\teensy\avr\libraries\ST7735_t3\ST7735_t3.h:23:0, from F:\arduino-1.8.9\hardware\teensy\avr\libraries\ST7735_t3\st7735_t3_font_Arial.h:4, from F:\arduino-1.8.9\hardware\teensy\avr\libraries\ST7735_t3\st7735_t3_font_Arial.c:1: F:\arduino-1.8.9\hardware\teensy\avr\libraries\SPI/SPI.h:1037:1: error: unknown type name 'class' class SPISettings { ^ F:\arduino-1.8.9\hardware\teensy\avr\libraries\SPI/SPI.h:1037:19: error: expected '=', ',', ';', 'asm' or '__attribute__' before '{' token class SPISettings { ^ F:\arduino-1.8.9\hardware\teensy\avr\libraries\SPI/SPI.h:1069:1: error: unknown type name 'class' class SPIClass { // Teensy 4 ^ F:\arduino-1.8.9\hardware\teensy\avr\libraries\SPI/SPI.h:1069:16: error: expected '=', ',', ';', 'asm' or '__attribute__' before '{' token class SPIClass { // Teensy 4 ^ F:\arduino-1.8.9\hardware\teensy\avr\libraries\SPI/SPI.h:1378:8: error: unknown type name 'SPIClass' extern SPIClass SPI; ^ F:\arduino-1.8.9\hardware\teensy\avr\libraries\SPI/SPI.h:1383:8: error: unknown 
type name 'SPIClass' extern SPIClass SPI1; ^ F:\arduino-1.8.9\hardware\teensy\avr\libraries\SPI/SPI.h:1384:8: error: unknown type name 'SPIClass' extern SPIClass SPI2; ^ In file included from F:\arduino-1.8.9\hardware\teensy\avr\libraries\ST7735_t3\st7735_t3_font_Arial.h:4:0, from F:\arduino-1.8.9\hardware\teensy\avr\libraries\ST7735_t3\st7735_t3_font_Arial.c:1: F:\arduino-1.8.9\hardware\teensy\avr\libraries\ST7735_t3\ST7735_t3.h:138:3: error: unknown type name 'DMASetting' DMASetting _dmasettings[2]; ^ F:\arduino-1.8.9\hardware\teensy\avr\libraries\ST7735_t3\ST7735_t3.h:139:3: error: unknown type name 'DMAChannel' DMAChannel _dmatx; ^ Error compiling for board Teensy 4.0. @mjs513 - Probably an issue maybe with SPI.h or the like, Where you are trying to compile in a .c file (C not C++), so in a C file things like: class foo; is not valid. In some libraries and the like you will see something like: #ifdef __cplusplus All of your c++ stuff Maybe something like: extern "C" { ... In SPI.h I don't see any such ifdef stuff... So simple answer is, a C file and likewise any header file included in a C file should not include SPI.h Started playing a little with speedups and yes I think the only way to reasonably speed it up is to bring in a lot of the stuff, we have done to talk directly to registers... I hacked up a copy of the graphic test and added some simple timing outputs: So it printed out some simple elapsedMillis timings for some of the tests...So it printed out some simple elapsedMillis timings for some of the tests...Code:#define USE_ADAFRUIT #define SCREEN_WIDTH 128 #define SCREEN_HEIGHT 128 // Change this to 96 for 1.27" OLED. 
// You can use any (4 or) 5 pins #define SCLK_PIN 13 #define MOSI_PIN 11 #define DC_PIN 9 #define CS_PIN 10 #define RST_PIN 8 // Color definitions #define BLACK 0x0000 #define BLUE 0x001F #define RED 0xF800 #define GREEN 0x07E0 #define CYAN 0x07FF #define MAGENTA 0xF81F #define YELLOW 0xFFE0 #define WHITE 0xFFFF #include <Adafruit_GFX.h> #include <SPI.h> #ifdef USE_ADAFRUIT #include <Adafruit_SSD1351.h> Adafruit_SSD1351 tft = Adafruit_SSD1351(SCREEN_WIDTH, SCREEN_HEIGHT, &SPI, CS_PIN, DC_PIN, RST_PIN); #else #include <SSD1351_t3.h> SSD1351_t3 tft = SSD1351_t3(CS_PIN, DC_PIN, MOSI_PIN, SCLK_PIN, RST_PIN); #endif float p = 3.1415926; void setup(void) { while (!Serial && millis() < 5000) ; Serial.begin(9600); Serial.print(F("Hello! SSD1351 TFT Test")); tft.begin(); Serial.println(F("Initialized")); uint16_t time = millis(); tft.fillScreen(BLACK); time = millis() - time; Serial.println(time, DEC); delay(500); // large block of text tft.fillScreen. ", WHITE); delay(1000); // tft print function! 
elapsedMillis em = 0; tftPrintTest(); Serial.printf("tftPrintTest: %d\n", (uint32_t)em); delay(4000); // a single pixel tft.drawPixel(tft.width()/2, tft.height()/2, GREEN); delay(500); // line draw test em = 0; testlines(YELLOW); Serial.printf("testlines: %d\n", (uint32_t)em); delay(500); // optimized lines em = 0; testfastlines(RED, BLUE); Serial.printf("testfastlines: %d\n", (uint32_t)em); delay(500); em = 0; testdrawrects(GREEN); Serial.printf("testlines: %d\n", (uint32_t)em); delay(500); em = 0; testfillrects(YELLOW, MAGENTA); Serial.printf("testlines: %d\n", (uint32_t)em); delay(500); em = 0; tft.fillScreen(BLACK); testfillcircles(10, BLUE); testdrawcircles(10, WHITE); Serial.printf("testfillcircles...: %d\n", (uint32_t)em); delay(500); em = 0; testroundrects(); Serial.printf("testroundrects: %d\n", (uint32_t)em); delay(500); em = 0; testtriangles(); Serial.printf("testtriangles: %d\n", (uint32_t)em); delay(500); mediabuttons(); delay(500); Serial.println("done"); delay(1000); } void loop() { tft.invertDisplay(true); delay(500); tft.invertDisplay(false); delay(500); } void testlines(uint16_t color) { tft.fillScreen(BLACK); for (int16_t x=0; x < tft.width(); x+=6) { tft.drawLine(0, 0, x, tft.height()-1, color); delay(0); } for (int16_t y=0; y < tft.height(); y+=6) { tft.drawLine(0, 0, tft.width()-1, y, color); delay(0); } tft.fillScreen(BLACK); for (int16_t x=0; x < tft.width(); x+=6) { tft.drawLine(tft.width()-1, 0, x, tft.height()-1, color); delay(0); } for (int16_t y=0; y < tft.height(); y+=6) { tft.drawLine(tft.width()-1, 0, 0, y, color); delay(0); } tft.fillScreen(BLACK); for (int16_t x=0; x < tft.width(); x+=6) { tft.drawLine(0, tft.height()-1, x, 0, color); delay(0); } for (int16_t y=0; y < tft.height(); y+=6) { tft.drawLine(0, tft.height()-1, tft.width()-1, y, color); delay(0); } tft.fillScreen(BLACK); for (int16_t x=0; x < tft.width(); x+=6) { tft.drawLine(tft.width()-1, tft.height()-1, x, 0, color); delay(0); } for (int16_t y=0; y < tft.height(); 
y+=6) { tft.drawLine(tft.width()-1, tft.height()-1, 0, y, color); delay(0); } } void testdrawtext(char *text, uint16_t color) { tft.setCursor(0, 0); tft.setTextColor(color); tft.setTextWrap(true); tft.print(text); } void testfastlines(uint16_t color1, uint16_t color2) { tft.fillScreen(BLACK); for (int16_t y=0; y < tft.height(); y+=5) { tft.drawFastHLine(0, y, tft.width(), color1); } for (int16_t x=0; x < tft.width(); x+=5) { tft.drawFastVLine(x, 0, tft.height(), color2); } } void testdrawrects(uint16_t color) { tft.fillScreen(BLACK); for (int16_t x=0; x < tft.width(); x+=6) { tft.drawRect(tft.width()/2 -x/2, tft.height()/2 -x/2 , x, x, color); } } void testfillrects(uint16_t color1, uint16_t color2) { tft.fillScreen(BLACK); for (int16_t x=tft.width()-1; x > 6; x-=6) { tft.fillRect(tft.width()/2 -x/2, tft.height()/2 -x/2 , x, x, color1); tft.drawRect(tft.width()/2 -x/2, tft.height()/2 -x/2 , x, x, color2); } } void testfillcircles(uint8_t radius, uint16_t color) { for (int16_t x=radius; x < tft.width(); x+=radius*2) { for (int16_t y=radius; y < tft.height(); y+=radius*2) { tft.fillCircle(x, y, radius, color); } } } void testdrawcircles(uint8_t radius, uint16_t color) { for (int16_t x=0; x < tft.width()+radius; x+=radius*2) { for (int16_t y=0; y < tft.height()+radius; y+=radius*2) { tft.drawCircle(x, y, radius, color); } } } void testtriangles() { tft.fillScreen(BLACK); int color = 0xF800; int t; int w = tft.width()/2; int x = tft.height()-1; int y = 0; int z = tft.width(); for(t = 0 ; t <= 15; t++) { tft.drawTriangle(w, y, y, x, z, x, color); x-=4; y+=4; z-=4; color+=100; } } void testroundrects() { tft.fillScreen(BLACK); int color = 100; int i; int t; for(t = 0 ; t <= 4; t+=1) { int x = 0; int y = 0; int w = tft.width()-2; int h = tft.height()-2; for(i = 0 ; i <= 16; i+=1) { tft.drawRoundRect(x, y, w, h, 5, color); x+=2; y+=3; w-=4; h-=6; color+=1100; } color+=100; } } void tftPrintTest() { tft.setTextWrap(false); tft.fillScreen(BLACK); tft.setCursor(0, 30); 
tft.setTextColor(RED); tft.setTextSize(1); tft.println("Hello World!"); tft.setTextColor(YELLOW); tft.setTextSize(2); tft.println("Hello World!"); tft.setTextColor(GREEN); tft.setTextSize(3); tft.println("Hello World!"); tft.setTextColor(BLUE); tft.setTextSize(4); tft.print(1234.567); delay(1500); tft.setCursor(0, 0); tft.fillScreen(BLACK); tft.setTextColor(WHITE); tft.setTextSize(0); tft.println("Hello World!"); tft.setTextSize(1); tft.setTextColor(GREEN); tft.print(p, 6); tft.println(" Want pi?"); tft.println(" "); tft.print(8675309, HEX); // print 8,675,309 out in HEX! tft.println(" Print HEX!"); tft.println(" "); tft.setTextColor(WHITE); tft.println("Sketch has been"); tft.println("running for: "); tft.setTextColor(MAGENTA); tft.print(millis() / 1000); tft.setTextColor(WHITE); tft.print(" seconds."); } void mediabuttons() { // play tft.fillScreen(BLACK); tft.fillRoundRect(25, 10, 78, 60, 8, WHITE); tft.fillTriangle(42, 20, 42, 60, 90, 40, RED); delay(500); // pause tft.fillRoundRect(25, 90, 78, 60, 8, WHITE); tft.fillRoundRect(39, 98, 20, 45, 5, GREEN); tft.fillRoundRect(69, 98, 20, 45, 5, GREEN); delay(500); // play color tft.fillTriangle(42, 20, 42, 60, 90, 40, BLUE); delay(50); // pause color tft.fillRoundRect(39, 98, 20, 45, 5, RED); tft.fillRoundRect(69, 98, 20, 45, 5, RED); // play color tft.fillTriangle(42, 20, 42, 60, 90, 40, GREEN); } So far only testing on T4: With my new library: Timings for Adafruit_SD1351:Timings for Adafruit_SD1351:Code:SSD1351::commandList called SSD1351::commandList exit Initialized 17 tftPrintTest: 1547 testlines: 233 testfastlines: 25 testlines: 24 testlines: 154 testfillcircles...: 54 testroundrects: 68 testtriangles: 35 done:Code:Hello! SSD1351 TFT TestInitialized 40 tftPrintTest: 1602 testlines: 417 testfastlines: 58 testlines: 55 testlines: 360 testfillcircles...: 106 testroundrects: 140 testtriangles: 69 done But still a long way to go...But still a long way to go...Code:Hello! 
SSD1351 TFT TestInitialized 36 tftPrintTest: 1592 testlines: 396 testfastlines: 51 testlines: 49 testlines: 319 testfillcircles...: 98 testroundrects: 127 testtriangles: 64 done Although maybe I need to see if I can force both to use the same SPI speed and see how much of difference that maigh make @KurtE Thanks for the pointer. Did this in the 7735.h: Now down to 2 errors for now - and no i didn't touch any of the DMA stuffNow down to 2 errors for now - and no i didn't touch any of the DMA stuffCode:#ifdef __cplusplus #include <SPI.h> #endif Code:F:\arduino-1.8.9\hardware\teensy\avr\libraries\ST7735_t3\ST7735_t3.h:139:3: error: unknown type name 'DMASetting' DMASetting _dmasettings[2]; ^ F:\arduino-1.8.9\hardware\teensy\avr\libraries\ST7735_t3\ST7735_t3.h:140:3: error: unknown type name 'DMAChannel' DMAChannel _dmatx; ^ As again if included from C file, then DMASettings/DMAChannel are maybe not defined... So again you could #ifdef with the __cplusplus...As again if included from C file, then DMASettings/DMAChannel are maybe not defined... So again you could #ifdef with the __cplusplus... arround the ST7735DMA_Data stuff.. But then the next lines are: class ST7735_t3 : public ... Which won't work.... My guess is you are trying to bring in the ili9341_t3 like Fonts?... Or split out the font structure definition into it's own header file which both the main library header as well as the font files include that header More on the Speed ups... Actually probably good enough on T4 if you simply use higher SPI speeds... T4 - [CODE]// Actually test now with speed set to 24mhz My library: Unmodified version:Unmodified version:Code:Hello! SSD1351 TFT TestSSD1351 - Hardware SPI SSD1351::commandList called SSD1351::commandList exit Initialized 13 tftPrintTest: 1539 testlines: 197 testfastlines: 20 testlines: 19 testlines: 124 testfillcircles...: 45 testroundrects: 56 testtriangles: 29 done Modified version:Modified version:Code:Hello! 
SSD1351 TFT TestInitialized 21 tftPrintTest: 1553 testlines: 219 testfastlines: 30 testlines: 29 testlines: 186 testfillcircles...: 56 testroundrects: 73 testtriangles: 36 done And if you try running on T3.6: it runs OK at 15mhz and 18mhz (same speed), but the display fails to display properly at 20mhz...Code:Hello! SSD1351 TFT TestInitialized 15 tftPrintTest: 1541 testlines: 191 testfastlines: 22 testlines: 21 testlines: 135 testfillcircles...: 45 testroundrects: 58 testtriangles: 29 done Code://// ST7735_t3 /// T3.6 at 15mhz/18mhz Hello! SSD1351 TFT TestSSD1351::commandList called SSD1351::commandList exit Initialized 18 tftPrintTest: 1548 testlines: 217 testfastlines: 27 testlines: 25 testlines: 165 testfillcircles...: 52 testroundrects: 67 testtriangles: 34 done Fails at 20mhz //// modified Adafruit /// Hello! SSD1351 TFT TestInitialized 18 tftPrintTest: 1550 testlines: 248 testfastlines: 26 testlines: 25 testlines: 162 testfillcircles...: 57 testroundrects: 71 testtriangles: 37 done @KurtE Sorry Kurt, thought I explained it earlier. What I did was move/convert the primitives over from ILI9341_t3n, which seemed to work in testing so far, even setOrigin and Clipping that was in ILI9341_t3n. Hey, even that stuff I added about drawFloat/drawString works with the ST7735. Now I wanted to bring over the fonts from ILI9341 and try and get it to work, and that's when I ran into the issue described above. The interesting thing is that without the font files in the directory I have absolutely no compile issues with C++. I have been using the ILI9341_t3n as a guide - like we did for the ILI9488 library - but guess I don't really understand the __cplusplus wrappers so I messed it up. So really appreciate the guidance on this one - guess I have to do a little more reading about it.
Going to try and follow this method first before splitting it out... EDIT: forgot I actually do have a __cplusplus #ifdef around the class - so will just move it up @KurtE Just saw your post on the SSD1351 speedups. That's a nice bump for the T4 compared to the original version. Wondering about the T3.6 - that it's limited to 18Mhz? Maybe you need a little bit of delay, like you recommended to me with the Arducam, before the CS high? Anyway fonts are now working on the ST7735 display. Looks really sharp on the 240x320 display from Adafruit. Scrolling not working right but think that's the original problem we had when I put it into the ILI9341 lib and reading the rectangle. Have to work on that one. @mjs513 - I guess the real question is how fast can these actually be driven... I probably should again put these under Logic Analyzer at the different SPI speeds... But if I look at the SSD1351 Spec: At Page 52 for SPI timing, I see something like: Clock cycle min is something like: 220ns. Which looked like it did not include the fall time or rise time, which both looked like 15ns in their charts, So maybe something like: 250ns.. So I would think about 4mhz? But appears to be fine at about 15. Actually with T4 it appears to work up to near 20mhz (actual SPI output speed), but was failing on T3.6 when it was running at 20mhz. Could be the timing differences for CS and DC. Although in the Adafruit case, I am doing both in software. That is, not using PUSHR to encode which CS pins are to be set/cleared... I think one of the main speed ups might be to see why they default to something like 5mhz for Teensy? @KurtE Poking around the internet they seem to be saying around 15 or 18. But some folks were doing tricky things to get faster. Assembly and registers on arduino. So don't know how much faster you're going to be able to go with just doing transfers.
The interesting thing about trying to update the underlying library (SPITFT) will be seeing how each of the display types can (or not) set their default SPI speeds. Obviously different controllers probably have different max speeds. One thing I would like to do is to update all of the ones that I play with hopefully have something like this on their begin/init/??? function that sets the specific desired speed for that program. I can imagine this could be important for some setups as maybe they use longer wires or ... Also @Paul - I wish I could have put in the function: SPI.transfer16(buffer, retbuf, cnt), or maybe the ESP32 version writePixel(), as with a lot of these drivers things would be a lot easier and with less overhead: Example in SPITFT: So this one, unless I go down to the registers level I am left with two possibilities...So this one, unless I go down to the registers level I am left with two possibilities...Code:void Adafruit_SPITFT::writePixels(uint16_t *colors, uint32_t len, bool block, bool bigEndian) { if(!len) return; // Avoid 0-byte transfers #if defined(ESP32) // ESP32 has a special SPI pixel-writing function... if(connection == TFT_HARD_SPI) { hwspi._spi->writePixels(colors, len * 2); return; } #elif defined(USE_SPI_DMA) ... #else // All other cases (bitbang SPI or non-DMA hard SPI or parallel), // use a loop with the normal 16-bit data write function: while(len--) { SPI_WRITE16(*colors++); } a) Leave as it is, which transfers, which translates into: while(len--)SPI.transfer16(*colors++); Which will not keep anything in FIFO queue. 
b) Use temporary buffer to copy a portion of the output into and then call block transfer (which I may try next) Something like: Code:#elif defined(KINETISK) || defined(KINETISL) || defined(__IMXRT1062__) if(connection == TFT_HARD_SPI) { #define SPI_MAX_WRITE_PIXELS_AT_ONCE 32 static uint8_t temp[SPI_MAX_WRITE_PIXELS_AT_ONCE*2]; uint8_t *pb = temp; while (len) { *pb++ = *colors >> 8; *pb++ = *colors++ &0xff; if (pb == &temp[SPI_MAX_WRITE_PIXELS_AT_ONCE*2]) { hwspi._spi->transfer((uint8_t *)temp, nullptr, SPI_MAX_WRITE_PIXELS_AT_ONCE*2); pb = temp; } len--; } // if there is anything remaing in buffer output it now if (pb != &temp[0]) { hwspi._spi->transfer((uint8_t *)temp, nullptr, (uint32_t)pb-(uint32_t)&temp[0]); } return; } @KurtE Would think the block transfers would be better for the teensies, I would think if you are going to incorporate it into the Adafruit library. Its interesting poking around GitHub. While trying to get readrect working for the ST7789 came across the library by Bodmer which is rather interesting. He seems to have got it working by the way. But that's besides the point. What I was going to say was that he has SPI clock at 40Mhz and SPI read at 6Mhz (max). Unfortunately he doesn't support the ssd1351 so no help on the settings for that one. I'll post another question on the readrect.
https://forum.pjrc.com/threads/57015-ST7789_t3-(part-of-ST7735-library)-support-for-displays-without-CS-pin?s=ac27c0b8c24beb9ed995ce878709a2a1&p=213980&viewfull=1
Hi, Michael Vargo here, and I wanted to take a minute and talk about how you can provide redundancy for End User Recovery of DFS shared data using Data Protection Manager. The environment diagram below provides a baseline for an explanation of the options that can be utilized to provide redundancy for End User Recovery (EUR) of Distributed File System (DFS) shared data. Note that Microsoft does not “officially” support EUR redundancy with DPM but it can be achieved. The decision that must be made regarding which redundancy plan you will implement is primarily based on what resources in the environment are most likely to fail. Are you planning for a failure of a DFS server, a DPM server or an entire site? In the event that fileserver#1 should fail you will not need to have a separate copy of the DFS data backed up to another DPM server for users to be able to continue accessing previous versions of DFS data. The clients accessing the DFS data will be redirected to fileserver#2 and can still access the recoverable data (previous versions) from DPM#1. However, DPM#1 will no longer be backing up the DFS data if fileserver#1 is down. This problem can only be addressed by having redundant DPM servers. If fileserver#1 will only be down for a short period, you may not want to make any changes to DPM and EUR. If you are planning for a DPM server failure, you can establish redundancy through the use of secondary DPM protection or another DPM server protecting the replica of the DFS data on fileserver#2. Secondary protection is achieved by connecting the protection agent to DPM#1 from DPM#2. You will then be presented with a “Protected Server” option when enumerating data sources under DPM#1 from DPM#2 which allows you to create a redundant copy of the DFS data on DPM#2. You can then choose to “switch disaster protection” should DPM#1 go down. However, the use of a secondary DPM server provides no benefit to DFS protection and EUR.
Switching protection does not recreate the required shares on the new primary DPM server or update any Active Directory objects. We will discuss these items in more detail shortly. The information above is provided to explain why we recommend the use of a second DPM server that has no association with DPM#1 to provide redundancy for DFS backups and EUR . DPM#1 backs up DFS data on fileserver#1 and DPM#2 backs up the replicated DFS data on fileserver#2. This will provide the ability to continue DPM backups of the DFS data and EUR access to the DFS data should fileserver#1, DPM#1 or site#1 become unavailable. DPM#1 and DPM#2 should both be configured to meet your data retention requirements. Optimally we would enable EUR for DPM#1 and DPM#2, but DPM only supports enabling EUR on one DPM server at a time in Active Directory when protecting the same DFS shares from separate DPM Servers. You will need to disable EUR for DPM#1 by deleting the AD objects that get created when enabling EUR in the event we need to implement the disaster recovery plan. There are two categories of items that get added when enabling EUR. The first is a set of AD objects that get created with the Extension of the AD schema as a result of enabling EUR. The second is a set of shares that get created on the DPM server. The first AD object that gets created is cn=ms-sharemapconfiguration,cn=system,dc=domain,dc=local object. This gets created as a result of running DPMADSchemaExtension.exe . It is available on the DPM server in the c:\program files\Microsoft DPM\DPM\End User Recovery directory. It is run when enabling end-user recovery from the DPM options on the End-user Recovery tab. 
We frequently see issues where enabling end-user recovery from within DPM fails with a message similar to “The Active Directory could not be configured.” You can also copy DPMADSchemaExtension.exe to a domain controller and run it manually as a user who is a member of both the "Schema Admins" and "Enterprise Admins" security groups. The additional items will not be created until you successfully synchronize the DPM server with the protected DFS data after the schema extension. After the synchronization job you will see an object created under the cn=ms-sharemapconfiguration container for each DFS namespace protected by DPM. It has a name in the format CN=GUID and a class of ms-srvShareMapping. The important information in this object includes the ms-backupSrvShare attribute, which points to the DPM server that is protecting the DFS data, and the ms-productionSrvShare, which indicates the DFS node that is being protected by DPM. The second set of items, created upon the completion of a synchronization job after the schema is updated, are shares on the DPM server. These are the shares that users access when viewing the “previous versions” tab in the properties of an object on an EUR-enabled DFS share. There will be one share for each protected DFS namespace. The screen shot below shows the shares for DFS namespaces Namespace1 and public. They associate the namespace on Sharepoint01 with the location of the replica of the files in the DPM storage pool. The AD objects under cn=ms-sharemapconfiguration,cn=system,dc=domain,dc=local will automatically be removed if you uncheck “enable end-user recovery” from the DPM options on the End-user Recovery tab. However, if the DPM server has crashed or is otherwise unavailable you must manually remove these entries. The recommended tool to access and remove these objects is ADSIedit.msc.
It will allow you to drill down to the cn=ms-sharemapconfiguration,cn=system,dc=domain,dc=local container and see all of the child objects that represent each of the DFS namespaces. All of the child objects representing the failed DPM#1 server must be removed before you enable EUR on DPM#2. You can use repadmin.exe to create a query that will list all of the AD objects associated with the DFS namespace being protected by a failed DPM server.

repadmin /showattr dc01 ncobj:domain: /filter:"(&(objectclass=ms-srvsharemapping)(ms-productionsrvshare=\\sharepoint01\namespace1))" /subtree

The above command connects to a DC named dc01 and dumps all attributes for all objects with an objectclass of ms-srvsharemapping where the ms-productionsrvshare attribute contains a value of \\sharepoint01\namespace1. You can limit the output with the /atts: option to dump only specific values from the object. For example:

repadmin /showattr dc01 ncobj:domain: /filter:"(&(objectclass=ms-srvsharemapping)(ms-productionsrvshare=\\sharepoint01\namespace1))" /subtree /atts:name > ms-productionshare.txt

Michael Vargo
https://techcommunity.microsoft.com/t5/system-center-blog/how-to-provide-redundancy-for-end-user-recovery-eur-of/ba-p/347718
core.sync.barrier

The barrier module provides a primitive for synchronizing the progress of a group of threads.

Authors: Sean Kelly
Source: core/sync/barrier.d

class Barrier;

  This class represents a barrier across which threads may only travel in groups of a specific size.

  this(uint limit);

    Initializes a barrier object which releases threads in groups of limit in size.
    Throws: SyncError on error.

  void wait();

    Wait for the pre-determined number of threads and then proceed.
    Throws: SyncError on error.

Copyright © 1999-2022 by the D Language Foundation | Page generated by Ddoc on Sat Sep 24 14:58:21 2022
https://dlang.org/phobos/core_sync_barrier.html
Talk:How to terminate a Windows Phone app

SB Dev - Nice article. I have run into situations where it was necessary to use this. In the end you mention that people should make sure to save important data. You should perhaps mention that they can't rely on the regular application lifecycle events for this, because after Terminate the App terminates immediately without firing any of the regular events. SB Dev (talk) 07:42, 4 September 2013 (EEST)

Influencer - Broadly discussed Kunal, thanks for the article, but... App termination for WP8 (and WP7) has already been broadly discussed, in blogs (How to Terminate the WP8 Application Programmatically using C#?) and discussion threads, even on Nokia Discussions (Closing Windows Phone 8 apps programatically). Your article doesn't give added benefit over existing discussions, e.g. on StackOverflow (Using App.Current.Terminate() method in Windows phone 8). Maybe you should compare the various methods that have been used on WP7 and WP8 with their pros and cons, or give more use cases (SB Dev has given a sample in the aforementioned Nokia discussion thread). At least you should give a link to the MSDN docs. Thomas influencer (talk) 09:23, 4 September 2013 (EEST)

Vaishali Rawat - WP app gets killed on pressing Back key Hey Kunal, AFAIK, a WP app gets killed when you press the Back button (assuming you are on the first page of the app). In case you are not on the first page, it keeps navigating to previous pages within the app. I have objections to your statements below. Please do the needful. 1) "it should not reside in the memory unnecessarily when user hits back to come out from the application." My view: the app never stays in memory if you are pressing Back (assuming you are on the first page of the app). It gets killed. 2) "We all already know that, Windows Phone application stays in the memory and goes to suspended state when user hits hardware back button" My view: again, the same issue.
An app goes into a suspended state when we press HOME or launch another app. Official documentation is at Vaishali Rawat (talk) 10:25, 4 September 2013 (EEST)

Hamishwillee - Agreed Hi Kunal, This would be even better if the title was "How to terminate a Windows Phone app programmatically", and you also covered WP7. The WP7 case is the difficult one since there is no supported API - here you need to do stuff like creating an exception and choosing not to catch that type of exception in the app. If you did this then Thomas's (Influencer's) comment, which is true currently, would no longer be true. Vaishali's points are also valid - specifically that the back button terminates the app rather than suspending it. I'll edit that in a second myself. Thanks very much. Regards, H hamishwillee (talk) 13:05, 4 September 2013 (EEST)

Kunal Chowdhury - Thank you everyone for the feedback. I really appreciate this to improve the article. @Vaishali, thanks for pointing out the issue. I will incorporate those changes soon. Kunal Chowdhury (talk) 13:14, 4 September 2013 (EEST)

Hamishwillee - Half way there Hi Kunal, that's great. I have fixed Vaishali's issue and subedited. I also added a new section for WP7 and included a brief overview of one way of terminating a WP7 app. All this needs now is code which shows this approach and a listing of other approaches. Hope you're OK with this! Regards, H hamishwillee (talk) 13:18, 4 September 2013 (EEST)

Kunal Chowdhury - Thank you Hi Hamish, Thanks for editing it. I will work on the other part and change this Wiki soon. Regards, Kunal Kunal Chowdhury (talk) 13:22, 4 September 2013 (EEST)

SB Dev - In fact you can exit a Windows Phone 7 application without raising an unhandled Exception (a bad idea to begin with, as it a) might lead to trouble with certification and b) will hurt your App in Store rankings, as the algorithm also takes the number of crashes into consideration).
The Game class from the Microsoft.Xna.Framework namespace provides an Exit() method which works just fine on WP7. You can simply instantiate a Game object even in non-XNA apps and use the method to exit the App. I did quite a lot of digging for this back when I wrote the application that uses it, and I consider this to be the best way to approach the issue if you run into one of the rare occasions where programmatically terminating an App is necessary. In general this should be avoided, due to the App differing from user expectations. SB Dev (talk) 15:00, 4 September 2013 (EEST)

Hamishwillee - Great tip Thanks SB Dev. Great tip. I've integrated it above (a new one on me!). Note that this is a wiki, so if you see a better approach you can choose to edit directly. Obviously when an article is under construction like this it is probably even better to do as you have done and make the suggestion to the author. I think this is done now. hamishwillee (talk) 02:56, 5 September 2013 (EEST)

Hamishwillee - except possible... Providing a code example for a function that can do this for both WP7 and WP8 that people can copy-paste. hamishwillee (talk) 02:57, 5 September 2013 (EEST)

SB Dev - Possible - I guess so, but it wouldn't be too pretty. As long as your App targets WP7 the XNA Game method will work. As soon as you change your target platform to WP8 those assemblies are no longer available. The only way around this that I could see would be to check for the "Terminate" method using Reflection and, if it does not exist, try to dynamically load the XNA assembly and invoke the method on the Game object instead. I'll try to do this today but can't guarantee that I'll find the time. The rationale in providing these as comments actually was that the article is still actively developing. I have little to no problem editing wiki content directly, and did so on several articles (though I have not found out whether a regular user can change the title of an article and if so how).
SB Dev (talk) 07:43, 5 September 2013 (EEST)

Hamishwillee - Moving articles I believe you can move articles - it's done using the "Move" menu option, which is accessed from the drop down near the little eye on the right hand side of the page. Thanks! hamishwillee (talk) 11:04, 5 September 2013 (EEST)

SB Dev - I added a terminate sample method that works on both WP7 and WP8. I tested it in the WP7 project configuration on the WP7 Emulator included with the WP8 SDK and on the Lumia 920. I also tested it in the WP8 project configuration on the 920. It would be great if someone could test it on an actual WP7 device. I can't, as mine is currently used by a friend. SB Dev (talk) 15:08, 6 September 2013 (EEST)

Kunal Chowdhury - Nice Thanks SB Dev for the universal code snippet. BTW, will Dev Center accept that call to Reflection? I don't have any idea on that, so just asking. Kunal Chowdhury (talk) 15:10, 6 September 2013 (EEST)

SB Dev - I haven't tried either, but in principle it's only using public APIs, so that at least should not be an issue. Where Reflection can be an issue is when you call certain APIs only using Reflection and it's APIs that require special capabilities (e.g. phone book access). The Store during submission checks what APIs are called and infers the required permissions that way (at least it does with WP7 - I believe it was changed for WP8 Apps). So those permissions would not be detected, therefore not be assigned to the App, and executing them on the device would fail. The APIs we are using here to my knowledge don't require special permissions/capabilities, so it should be fine (again - in theory). SB Dev (talk) 15:21, 6 September 2013 (EEST)

Hamishwillee - Theory sounds fine But if you (or anyone) find out differently then please update. I've given this a minor update for "prettiness" and wiki style - i.e. using the Icode template to mark up the code instead of bold.
Regards, Hamish hamishwillee (talk) 08:11, 9 September 2013 (EEST)

Influencer - Added another idea I added another idea I used in an app. Thomas influencer (talk) 23:22, 24 September 2013 (EEST)

SB Dev - If I remember correctly, the call of GoBack with an empty BackStack does close the App because an Exception is raised - something we are trying to avoid due to the repercussions associated with it regarding the Marketplace listing and error reporting. It's possible though that it has changed in WP8 (didn't try it for quite some time). I'd agree that this would be the best way to approach this, as quite often what we try to do IS returning the user to the previous App when calling close (which is exactly why we terminate the App). SB Dev (talk) 09:34, 25 September 2013 (EEST)

Influencer - Right Yes, that's why I wrote 'If you have removed all entries but the last'. You have to leave the back entry returning you from your first page on the stack so that no exception occurs. I changed the text again to emphasize that. Thomas influencer (talk) 09:58, 25 September 2013 (EEST)

Hamishwillee - Good trick Thomas. Pity you can't query the backstack to find out what's left on it. I've reworded to make this even more clear. Hope it's OK. Regards, H hamishwillee (talk) 13:17, 2 October 2013 (EEST)

Influencer - Sure, thanks; better now... influencer (talk) 13:24, 2 October 2013 (EEST)

Ubaldo.Felloni - Thanks Hi, thanks for the article. I made some tests. It is not possible to terminate the application without exceptions using NavigationService.GoBack(). Ubaldo Ubaldo.Felloni (talk) 13:14, 22 November 2013 (EET)

Hamishwillee - Ubaldo.Felloni - what tests did you try Hi Ubaldo.Felloni, >It is not possible to terminate the application without exceptions using NavigationService.GoBack() What tests did you do? Are you saying that GoBack always raises an exception even if there is a final page on the stack (i.e. calling it removes the page and then raises an exception)?
Regards, H hamishwillee (talk) 05:17, 25 November 2013 (EET)

Hamishwillee - Ubaldo.Felloni - thanks More information from Ubaldo: Main steps: - create a new project - add a button to MainPage.xaml - add a click event. Invoke NavigationService.GoBack() to terminate the application. hamishwillee (talk) 00:48, 26 November 2013 (EET)

Hamishwillee - GoBack method removed Thanks very much Ubaldo.Felloni for reporting this. hamishwillee (talk) 01:39, 29 November 2013 (EET)
http://developer.nokia.com/community/wiki/Talk:How_to_terminate_a_Windows_Phone_app
calloc() Allocate space for an array

Synopsis:

#include <stdlib.h>

void* calloc( size_t n, size_t size );

Arguments:
- n - The number of array elements to allocate.
- size - The size, in bytes, of one array element.

Library: libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The calloc() function allocates space for an array of n elements, each of size bytes, and initializes the allocated memory to zero.

Returns:

A pointer to the start of the allocated memory, or NULL if an error occurred (errno is set).

Errors:
- ENOMEM - Not enough memory.
- EOK - No error.

Examples:

#include <stdlib.h>
#include <stdio.h>

int main( void )
{
    char* buffer;

    buffer = (char*)calloc( 80, sizeof(char) );
    if( buffer == NULL ) {
        printf( "Can't allocate memory for buffer!\n" );
        return EXIT_FAILURE;
    }
    free( buffer );
    return EXIT_SUCCESS;
}
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/c/calloc.html
MCE::Core - Documentation describing the core MCE API

This document describes MCE::Core version 1.806.

use MCE;

my $mce = MCE->new(
   max_workers => 5,
   user_func => sub {
      print "Hello from ", MCE->wid, "\n";
   }
);

$mce->run;

-- Output

Hello from 2
Hello from 4
Hello from 5
Hello from 1
Hello from 3

Below, a new instance is configured with all available options.

use MCE; my $mce = MCE->new( max_workers => 8, ## Default 1 # Number of workers to spawn. This can be set automatically # with MCE 1.412 and later releases. # MCE 1.521 sets an upper-limit of 8 for 'auto'. # See MCE::Util::get_ncpu for more info. # max_workers => 'auto', ## # of lcores, 8 maximum # max_workers => 'auto-1', ## 7 on HW with 16-lcores # max_workers => 'auto-1', ## 3 on HW with 4-lcores # max_workers => MCE::Util::get_ncpu, # run on all lcores chunk_size => 2000, ## Default 1 # Can also take a suffix; K (Kilobytes) or M (Megabytes). # The default is 1 when using the Core API and 'auto' for # MCE Models. For arrays or queues, chunk_size means the # number of records per chunk. For iterators, MCE will not # use chunk_size, though the iterator may use it to determine # how much to return per iteration. For files, smaller than or # equal to 8192 is the number of records. Greater than 8192 # is the number of bytes. MCE reads until the end of record # before calling user_func. A value above 64M will change # to 64M quietly (the maximum allowed). # chunk_size => 1, ## Consists of 1 record # chunk_size => 1000, ## Consists of 1000 records # chunk_size => '16K', ## Approximate 16 kilobytes # chunk_size => '20M', ## Approximate 20 megabytes tmp_dir => $tmp_dir, ## Default $MCE::Signal::tmp_dir # Default is $MCE::Signal::tmp_dir which points to # $ENV{TEMP} if defined. Otherwise, tmp_dir points # to a location under /tmp. freeze => \&encode_sereal, ## Default \&Storable::freeze thaw => \&decode_sereal, ## Default \&Storable::thaw # Release 1.412 allows freeze and thaw to be overridden. # Simply include a serialization module prior to loading # MCE. Configure freeze/thaw options.
# use Sereal qw( encode_sereal decode_sereal ); # use CBOR::XS qw( encode_cbor decode_cbor ); # use JSON::XS qw( encode_json decode_json ); # # use MCE; gather => \@a, ## Default undef # Release 1.5 allows for gathering of data to an array or # hash reference, a MCE::Queue/Thread::Queue object, or code # reference. One invokes gathering by calling the gather # method as often as needed. # gather => \@array, # gather => \%hash, # gather => $queue, # gather => \&order, init_relay => 0, ## Default undef # For specifying the initial relay value. Allowed values # are array_ref, hash_ref, or scalar. The MCE::Relay module # is loaded automatically when specified. # init_relay => \@array, # init_relay => \%hash, # init_relay => scalar, input_data => $input_file, ## Default undef RS => "\n>", ## Default undef # input_data => '/path/to/file' ## Process file # input_data => \@array ## Process array # input_data => \*FILE_HNDL ## Process file handle # input_data => \$scalar ## Treated like a file # input_data => \&iterator ## User specified iterator # The RS option (for input record separator) applies to files # and file handles. # MCE applies additional logic when RS begins with a newline # character; e.g. RS => "\n>". It trims away characters after # the newline and prepends them to the next record. # # Typically, the left side is what happens for $/ = "\n>". # The right side is what user_func receives. # # All records begin with > and end with \n # Record 1: >seq1 ... \n> (to) >seq1 ... \n # Record 2: seq2 ... \n> >seq2 ... \n # Record 3: seq3 ... \n> >seq3 ... \n # Last Rec: seqN ... \n >seqN ... \n loop_timeout => 5, ## Default 0 # Added in 1.7, enables the manager process to time out of a read # on channel 0. The manager process decrements the total workers # running for any worker which has died in an uncontrollable # manner. Specify this option if on occasion a worker runs out # of memory or dies due to an error from an XS module.
# # A number smaller than 5 is silently increased to 5. max_retries => 2, ## Default 0 # This option, added in 1.7, causes MCE to retry a failed # chunk from a worker dying while processing input data or # sequence of numbers. parallel_io => 1, ## Default 0 posix_exit => 1, ## Default 0 use_slurpio => 1, ## Default 0 # The parallel_io option enables parallel reads during # large slurpio, useful when reading from fast storage. # Do not enable parallel_io when running MCE on many # blades with input coming from shared storage. # Set posix_exit to avoid all END and destructor processing # for non-threads spawned via fork. This is set automatically # to 1 if (F)CGI.pm is present. # Enable slurpio to pass the raw chunk (scalar ref) # to the user function when reading input files. use_threads => 1, ## Auto 0 or 1 # MCE spawns child processes by default, not threads. # # However, MCE supports threads via 2 threading # libraries if threads is desired. # The use of threads in MCE requires that you include # threads support prior to loading MCE. The use_threads # option defaults to 1 when a thread library is loaded. # Threads is loaded automatically for $^O eq 'MSWin32'. # # use threads; use forks; # use threads::shared; (or) use forks::shared; # use MCE use MCE; spawn_delay => 0.035, ## Default undef submit_delay => 0.002, ## Default undef job_delay => 0.150, ## Default undef # Time to wait, in fractional seconds, after spawning # a worker, parameters submission to a worker, and # job commencement (running, staggered delay). # Specify job_delay when wanting to stagger # workers connecting to a database. on_post_exit => \&on_post_exit, ## Default undef on_post_run => \&on_post_run, ## Default undef # Execute code block after a worker exits or dies. # MCE->exit, exit, or die # Execute code block after running. 
# MCE->process or MCE->run user_args => { env => 'test' }, ## Default undef # MCE release 1.4 added a new parameter to allow one to # specify arbitrary arguments such as a string, an ARRAY # or a HASH reference. Workers can access this directly: # my $args = $mce->{user_args} or MCE->user_args; user_begin => \&user_begin, ## Default undef user_func => \&user_func, ## Default undef user_end => \&user_end, ## Default undef # Think of user_begin, user_func, and user_end as in # the awk scripting language: # awk 'BEGIN { begin } { func } { func } ... END { end }' # MCE workers call user_begin once at the start of a job, # then user_func repeatedly until no chunks remain. # Afterwards, user_end is called. user_error => \&user_error, ## Default undef user_output => \&user_output, ## Default undef # MCE will forward data to user_error/user_output, # when defined, for the following methods. # MCE->sendto(\*STDERR, "sent to user_error\n"); # MCE->printf(\*STDERR, "%s\n", "sent to user_error"); # MCE->print(\*STDERR, "sent to user_error\n"); # MCE->say(\*STDERR, "sent to user_error"); # MCE->sendto(\*STDOUT, "sent to user_output\n"); # MCE->printf("%s\n", "sent to user_output"); # MCE->print("sent to user_output\n"); # MCE->say("sent to user_output"); stderr_file => 'err_file', ## Default STDERR stdout_file => 'out_file', ## Default STDOUT # Or to file; user_error and user_output take precedence. flush_file => 1, ## Default 0 flush_stderr => 1, ## Default 0 flush_stdout => 1, ## Default 0 # Flush sendto file, standard error, or standard output. interval => { delay => 0.007 [, max_nodes => 4, node_id => 1 ] }, # For use with the yield method introduced in MCE 1.5. # Both max_nodes & node_id are optional and default to 1. # Delay is the amount of time between intervals. 
# interval => 0.007 ## Shorter; MCE 1.506+ sequence => { ## Default undef begin => -1, end => 1 [, step => 0.1 [, format => "%4.1f" ] ] }, bounds_only => 1, ## Default undef # For looping through a sequence of numbers in parallel. # STEP, if omitted, defaults to 1 if BEGIN is smaller than # END or -1 if BEGIN is greater than END. The FORMAT string # is passed to sprintf behind the scene (% may be omitted). # e.g. $seq_n_formatted = sprintf("%4.1f", $seq_n); # Do not specify both options; input_data and sequence. # Release 1.4 allows one to specify an array reference. # e.g. sequence => [ -1, 1, 0.1, "%4.1f" ] # The bounds_only => 1 option will compute the 'begin' and # 'end' items only for the chunk and not the items in between # (hence boundaries only). This option has no effect when # sequence is not specified or chunk_size equals 1. # my $begin = $chunk_ref->[0]; my $end = $chunk_ref->[1]; task_end => \&task_end, ## Default undef # This is called by the manager process after the task # has completed processing. MCE 1.5 allows this option # to be specified at the top level. task_name => 'string', ## Default 'MCE' # Added in MCE 1.5 and mainly beneficial for user_tasks. # One may specify a unique name per each sub-task. # The string is passed as the 3rd arg to task_end. user_tasks => [ ## Default undef { ... }, ## Options for task 0 { ... }, ## Options for task 1 { ... }, ## Options for task 2 ], # Takes a list of hash references, each allowing up to 17 # options. All other MCE options are ignored. The init_relay, # input_data, RS, and use_slurpio options are applicable to # the first task only. # max_workers, chunk_size, input_data, interval, sequence, # bounds_only, user_args, user_begin, user_end, user_func, # gather, task_end, task_name, use_slurpio, use_threads, # init_relay, RS # Options not specified here will default to same option # specified at the top level. ); There are 3 constants which are exportable. 
Using the constants in lieu of 0,1,2 makes it more legible when accessing the user_func arguments directly. Exports SELF => 0, CHUNK => 1, and CID => 2. use MCE export_const => 1; use MCE const => 1; ## Shorter; MCE 1.415+ user_func => sub { # my ($mce, $chunk_ref, $chunk_id) = @_; print "Hello from ", $_[SELF]->wid, "\n"; } MCE 1.5 allows all public methods to be called directly. use MCE; user_func => sub { # my ($mce, $chunk_ref, $chunk_id) = @_; print "Hello from ", MCE->wid, "\n"; } The following lists options which may be overridden when loading the module. use Sereal qw( encode_sereal decode_sereal ); use CBOR::XS qw( encode_cbor decode_cbor ); use JSON::XS qw( encode_json decode_json ); use MCE max_workers => 4, ## Default 1 chunk_size => 100, ## Default 1 tmp_dir => "/path/to/app/tmp", ## $MCE::Signal::tmp_dir freeze => \&encode_sereal, ## \&Storable::freeze thaw => \&decode_sereal ## \&Storable::thaw ; my $mce = MCE->new( ... ); From MCE 1.8 onwards, Sereal 3.008+ is loaded automatically if available. Specify Sereal => 0 to use Storable instead. use MCE Sereal => 0; Run calls spawn, submits the job; workers call user_begin, user_func, and user_end. Run shuts down workers afterwards. Call spawn whenever the need arises for large data structures prior to running. $mce->spawn; ## Call early if desired $mce->run; ## Call run or process below ## Acquire data arrays and/or input_files. Workers persist after ## processing. $mce->process(\@input_data_1); ## Process arrays $mce->process(\@input_data_2); $mce->process(\@input_data_n); $mce->process('input_file_1'); ## Process files $mce->process('input_file_2'); $mce->process('input_file_n'); $mce->shutdown; ## Shutdown workers
The format of $e->{pid} is PID_123 for children and THR_123 for threads. my $restart_flag = 1; sub on_post_exit { my ($mce, $e) = @_; ## Display all possible hash elements. print "$e->{wid}: $e->{pid}: $e->{status}: $e->{msg}: $e->{id}\n"; ## Restart this worker if desired. if ($restart_flag && $e->{wid} == 2) { $mce->restart_worker; $restart_flag = 0; } } sub user_func { my ($mce) = @_; MCE->exit(0, 'msg_foo', 1000 + MCE->wid); ## Args, not necessary } my $mce = MCE->new( on_post_exit => \&on_post_exit, user_func => \&user_func, max_workers => 3 ); $mce->run; -- Output (child processes) 2: PID_33223: 0: msg_foo: 1002 1: PID_33222: 0: msg_foo: 1001 3: PID_33224: 0: msg_foo: 1003 2: PID_33225: 0: msg_foo: 1002 -- Output (running with threads) 3: TID_3: 0: msg_foo: 1003 2: TID_2: 0: msg_foo: 1002 1: TID_1: 0: msg_foo: 1001 2: TID_4: 0: msg_foo: 1002 The on_post_run option, if defined, is executed immediately by the manager process after running MCE->process or MCE->run. This option receives an array reference of hashes. The difference between on_post_exit and on_post_run is that the former is called immediately whereas the latter is called after all workers have completed running. sub on_post_run { my ($mce, $status_ref) = @_; foreach my $e ( @{ $status_ref } ) { ## Display all possible hash elements. print "$e->{wid}: $e->{pid}: $e->{status}: $e->{msg}: $e->{id}\n"; } } sub user_func { my ($mce) = @_; MCE->exit(0, 'msg_foo', 1000 + MCE->wid); ## Args, not necessary } my $mce = MCE->new( on_post_run => \&on_post_run, user_func => \&user_func, max_workers => 3 ); $mce->run; -- Output (child processes) 3: PID_33174: 0: msg_foo: 1003 1: PID_33172: 0: msg_foo: 1001 2: PID_33173: 0: msg_foo: 1002 -- Output (running with threads) 2: TID_2: 0: msg_foo: 1002 3: TID_3: 0: msg_foo: 1003 1: TID_1: 0: msg_foo: 1001 MCE supports many ways to specify input_data. Support for iterators was added in MCE 1.505. 
The RS option allows one to specify the record separator when processing files. MCE is a chunking engine. Therefore, chunk_size is applicable to input_data. Specifying 1 for use_slurpio causes user_func to receive a scalar reference containing the raw data (applicable to files only) instead of an array reference. input_data => '/path/to/file', ## process file input_data => \@array, ## process array input_data => \*FILE_HNDL, ## process file handle input_data => $fh, ## open $fh, "<", "file" input_data => $fh, ## IO::File "file", "r" input_data => $fh, ## IO::Uncompress::Gunzip "file.gz" input_data => \$scalar, ## treated like a file input_data => \&iterator, ## user specified iterator chunk_size => 1, ## >1 means looping inside user_func use_slurpio => 1, ## $chunk_ref is a scalar ref RS => "\n>", ## input record separator The chunk_size value determines the chunking mode to use when processing files. Otherwise, chunk_size is the number of elements for arrays. For files, a chunk size value of <= 8192 is how many records to read. Greater than 8192 is how many bytes to read. MCE appends (the rest) up to the next record separator. chunk_size => 8192, ## Consists of 8192 records chunk_size => 8193, ## Approximate 8193 bytes for files chunk_size => 1, ## Consists of 1 record or element chunk_size => 1000, ## Consists of 1000 records chunk_size => '16K', ## Approximate 16 kilobytes chunk_size => '20M', ## Approximate 20 megabytes The construction for user_func when chunk_size > 1 and assuming use_slurpio equals 0. user_func => sub { my ($mce, $chunk_ref, $chunk_id) = @_; ## $_ is $chunk_ref->[0] when chunk_size equals 1 ## $_ is $chunk_ref otherwise; $_ can be used below for my $record ( @{ $chunk_ref } ) { print "$chunk_id: $record\n"; } } Specifying a value for input_data is straight forward for arrays and files. The next several examples specify an iterator reference for input_data. 
use MCE; ## A factory function which creates a closure (the iterator itself) ## for generating a sequence of numbers. The external variables ## ($n, $max, $step) are used for keeping state across successive ## calls to the closure. The iterator simply returns when $n > max. sub input_iterator { my ($n, $max, $step) = @_; return sub { return if $n > $max; my $current = $n; $n += $step; return $current; }; } ## Run user_func in parallel. Input data can be specified during ## the construction or as an argument to the process method. my $mce = MCE->new( # input_data => input_iterator(10, 30, 2), chunk_size => 1, max_workers => 4, user_func => sub { my ($mce, $chunk_ref, $chunk_id) = @_; MCE->print("$_: ", $_ * 2, "\n"); } )->spawn; $mce->process( input_iterator(10, 30, 2) ); -- Output Note that output order is not guaranteed Take a look at iterator.pl for ordered output 10: 20 12: 24 16: 32 20: 40 14: 28 22: 44 18: 36 24: 48 26: 52 28: 56 30: 60 The following example queries the DB for the next 1000 rows. Notice the use of fetchall_arrayref. The iterator function itself receives one argument which is chunk_size (added in MCE 1.510) to determine how much to return per iteration. The default is 1 for the Core API and MCE Models. use DBI; use MCE; sub db_iter { my $dsn = "DBI:Oracle:host=db_server;port=db_port;sid=db_name"; my $dbh = DBI->connect($dsn, 'db_user', 'db_passwd') || die "Could not connect to database: $DBI::errstr"; my $sth = $dbh->prepare('select color, desc from table'); $sth->execute; return sub { my ($chunk_size) = @_; if (my $aref = $sth->fetchall_arrayref(undef, $chunk_size)) { return @{ $aref }; } return; }; } ## Let's enumerate column indexes for easy column retrieval. my ($i_color, $i_desc) = (0 .. 1); my $mce = MCE->new( max_workers => 3, chunk_size => 1000, input_data => db_iter(), user_func => sub { my ($mce, $chunk_ref, $chunk_id) = @_; my $ret = ''; foreach my $row (@{ $chunk_ref }) { $ret .= $row->[$i_color] .": ". 
$row->[$i_desc] ."\n"; } MCE->print($ret); } ); $mce->run; There are many modules on CPAN which return an iterator reference. Showing one such example below. The demonstration ensures MCE workers are spawned before obtaining the iterator. Note the worker_id value (left column) in the output. use Path::Iterator::Rule; use MCE; my $start_dir = shift or die "Please specify a starting directory"; -d $start_dir or die "Cannot open ($start_dir): No such file or directory"; my $mce = MCE->new( max_workers => 'auto', user_func => sub { MCE->say( MCE->wid . ": $_" ) } )->spawn; my $rule = Path::Iterator::Rule->new->file->name( qr/[.](pm)$/ ); my $iterator = $rule->iter( $start_dir, { follow_symlinks => 0, depthfirst => 1 } ); $mce->process( $iterator ); -- Output 8: lib/MCE/Core/Input/Generator.pm 5: lib/MCE/Core/Input/Handle.pm 6: lib/MCE/Core/Input/Iterator.pm 2: lib/MCE/Core/Input/Request.pm 3: lib/MCE/Core/Manager.pm 4: lib/MCE/Core/Input/Sequence.pm 7: lib/MCE/Core/Validation.pm 1: lib/MCE/Core/Worker.pm 8: lib/MCE/Flow.pm 5: lib/MCE/Grep.pm 6: lib/MCE/Loop.pm 2: lib/MCE/Map.pm 3: lib/MCE/Queue.pm 4: lib/MCE/Signal.pm 7: lib/MCE/Stream.pm 1: lib/MCE/Subs.pm 8: lib/MCE/Util.pm 5: lib/MCE.pm Although MCE supports arrays, extra measures are needed to use a "lazy" array as input data. The reason for this is that MCE needs the size of the array before processing which may be unknown for lazy arrays. Therefore, closures provides an excellent mechanism for this. The code block belonging to the lazy array must return undef after exhausting its input data. Otherwise, the process will never end. use Tie::Array::Lazy; use MCE; tie my @a, 'Tie::Array::Lazy', [], sub { my $i = $_[0]->index; return ($i < 10) ? 
$i : undef; }; sub make_iterator { my $i = 0; my $a_ref = shift; return sub { return $a_ref->[$i++]; }; } my $mce = MCE->new( max_workers => 4, input_data => make_iterator(\@a), user_func => sub { my ($mce, $chunk_ref, $chunk_id) = @_; MCE->say($_); } )->run; -- Output 0 1 2 3 4 6 7 8 5 9 The following demonstrates how to retrieve a chunk from the lazy array per each successive call. Here, undef is sent by the iterator block when $i is greater than $max. Iterators may optionally use chunk_size to determine how much to return per iteration. use Tie::Array::Lazy; use MCE; tie my @a, 'Tie::Array::Lazy', [], sub { $_[0]->index; }; sub make_iterator { my $j = 0; my ($a_ref, $max) = @_; return sub { my ($chunk_size) = @_; my $i = $j; $j += $chunk_size; return if $i > $max; return $j <= $max ? @$a_ref[$i .. $j - 1] : @$a_ref[$i .. $max]; }; } my $mce = MCE->new( chunk_size => 15, max_workers => 4, input_data => make_iterator(\@a, 100), user_func => sub { my ($mce, $chunk_ref, $chunk_id) = @_; MCE->say("$chunk_id: " . join(' ', @{ $chunk_ref })); } )->run; -- Output 1: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 2: 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 3: 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 4: 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 5: 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 6: 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 7: 90 91 92 93 94 95 96 97 98 99 100 The 1.3 release and above allows workers to loop through a sequence of numbers computed mathematically without the overhead of an array. The sequence can be specified separately per each user_task entry unlike input_data which is applicable to the first task only. See the seq_demo.pl example, included with this distribution, on applying sequences with the user_tasks option. Sequence can be defined using an array or a hash reference. 
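The sequence engine in the example that follows expands begin/end/step/format into concrete values. That expansion is simple arithmetic, sketched here in plain Perl with no MCE involved (sequence_items is a made-up helper name for illustration only):

```perl
use strict;
use warnings;

## Hypothetical helper: enumerate { begin, end, step, format } the way
## a sequence of numbers would be handed to workers.
sub sequence_items {
    my ($begin, $end, $step, $fmt) = @_;
    my @items;
    for (my $k = 0; ; $k++) {
        my $n = $begin + $k * $step;   # multiply, to avoid cumulative FP drift
        last if $n > $end + 1e-9;      # tiny epsilon for fractional steps
        push @items, defined $fmt ? sprintf($fmt, $n) : $n;
    }
    return @items;
}

my @items = sequence_items(10, 19, 0.7, "%4.1f");
print scalar(@items), " items: ", join(', ', @items), "\n";
```

This produces the same 13 values (10.0 through 18.4) that appear in the output of the MCE example below, just without the parallel dispatch.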
use MCE; my $mce = MCE->new( max_workers => 3, # sequence => [ 10, 19, 0.7, "%4.1f" ], ## up to 4 options sequence => { begin => 10, end => 19, step => 0.7, format => "%4.1f" }, user_func => sub { my ($mce, $n, $chunk_id) = @_; print $n, " from ", MCE->wid, " id ", $chunk_id, "\n"; } ); $mce->run; -- Output (sorted afterwards, notice wid and chunk_id in output) 10.0 from 1 id 1 10.7 from 2 id 2 11.4 from 3 id 3 12.1 from 1 id 4 12.8 from 2 id 5 13.5 from 3 id 6 14.2 from 1 id 7 14.9 from 2 id 8 15.6 from 3 id 9 16.3 from 1 id 10 17.0 from 2 id 11 17.7 from 3 id 12 18.4 from 1 id 13 The 1.5 release includes a new option (bounds_only). This option tells the sequence engine to compute 'begin' and 'end' items only, for the chunk, and not the items in between (hence boundaries only). This option applies to sequence only and has no effect when chunk_size equals 1. The time to run is 0.006s below. This becomes 0.827s without the bounds_only option due to computing all items in between, thus creating a very large array. Basically, specify bounds_only => 1 when boundaries is all you need for looping inside the block; e.g. Monte Carlo simulations. Time was measured using 1 worker to emphasize the difference. use MCE; my $mce = MCE->new( max_workers => 1, chunk_size => 1_250_000, sequence => { begin => 1, end => 10_000_000 }, bounds_only => 1, ## For sequence, the input scalar $_ points to $chunk_ref ## when chunk_size > 1, otherwise $chunk_ref->[0]. ## ## user_func => sub { ## my $begin = $_->[0]; my $end = $_->[-1]; ## ## for ($begin .. $end) { ## ... ## } ## }, user_func => sub { my ($mce, $chunk_ref, $chunk_id) = @_; ## $chunk_ref contains 2 items, not 1_250_000 my $begin = $chunk_ref->[ 0]; my $end = $chunk_ref->[-1]; ## or $chunk_ref->[1] MCE->printf("%7d .. %8d\n", $begin, $end); } ); $mce->run; -- Output 1 .. 1250000 1250001 .. 2500000 2500001 .. 3750000 3750001 .. 5000000 5000001 .. 6250000 6250001 .. 7500000 7500001 .. 8750000 8750001 .. 
10000000 The max_retries option, added in 1.7, causes MCE to retry a failed chunk when a worker dies while processing input data or a sequence of numbers. In the absence of a user-specified on_post_exit option, MCE configures on_post_exit with the following code. Otherwise, simply tailor this into your on_post_exit code. ## What is done behind the scenes when max_retries is specified and ## omitting the on_post_exit option in MCE. sub on_post_exit { my ($mce, $e, $retry_cnt) = @_; my ($cnt, $msg) = ($retry_cnt + 1, "Error: Chunk $e->{id} failed"); ($retry_cnt < $mce->max_retries) ? print {*STDERR} "$msg, retrying chunk attempt #${cnt}\n" : print {*STDERR} "$msg\n"; $mce->restart_worker; } ## Running MCE with max_retries. use strict; use warnings; use MCE; sub user_func { my ($mce, $chunk_ref, $chunk_id) = @_; die "Died : chunk_id == 3" if $chunk_id == 3; print "$mce->{_wid}: $chunk_id\n"; } my $mce = MCE->new( max_workers => 1, user_func => \&user_func, max_retries => 2, )->spawn; my @input_data; push @input_data, qw( 0 1 2 3 4 5 6 7 ); $mce->process( { chunk_size => 1 }, \@input_data ); $mce->shutdown; -- Output 1: 1 1: 2 Died : chunk_id == 3 at script.pl line 8, <__ANONIO__> line 6. Error: Chunk 3 failed, retrying chunk attempt #1 Died : chunk_id == 3 at script.pl line 8, <__ANONIO__> line 7. Error: Chunk 3 failed, retrying chunk attempt #2 Died : chunk_id == 3 at script.pl line 8, <__ANONIO__> line 14. Error: Chunk 3 failed 1: 4 1: 5 1: 6 1: 7 1: 8 The user_begin and user_end options, if specified, behave similarly to awk 'BEGIN { begin } { func } { func } ... END { end }'. These are called once per worker during each run. MCE 1.510 passes 2 additional parameters ($task_id and $task_name).
sub user_begin { ## Called once at the beginning my ($mce, $task_id, $task_name) = @_; $mce->{wk_total_rows} = 0; } sub user_func { ## Called while processing my $mce = shift; $mce->{wk_total_rows} += 1; } sub user_end { ## Called once at the end my ($mce, $task_id, $task_name) = @_; printf "## %d: Processed %d rows\n", MCE->wid, $mce->{wk_total_rows}; } my $mce = MCE->new( user_begin => \&user_begin, user_func => \&user_func, user_end => \&user_end ); $mce->run; When processing input data, MCE can pass an array of rows or a slurped chunk. Below, a reference to an array containing the chunk data is processed. e.g. $chunk_ref = [ record1, record2, record3, ... ] sub user_func { my ($mce, $chunk_ref, $chunk_id) = @_; foreach my $row ( @{ $chunk_ref } ) { $mce->{wk_total_rows} += 1; print $row; } } my $mce = MCE->new( chunk_size => 100, input_data => "/path/to/file", user_func => \&user_func, use_slurpio => 0 ); $mce->run; Here, a reference to a scalar containing the raw chunk data is processed. sub user_func { my ($mce, $chunk_ref, $chunk_id) = @_; my $count = () = $$chunk_ref =~ /abc/g; } my $mce = MCE->new( chunk_size => 16000, input_data => "/path/to/file", user_func => \&user_func, use_slurpio => 1 ); $mce->run; Output from MCE->sendto('STDERR/STDOUT', ...), MCE->printf, MCE->print, and MCE->say can be intercepted by specifying the user_error and user_output options. On receiving output, MCE forwards it to user_error or user_output in a serialized fashion. Handy when wanting to filter, modify, and/or direct the output elsewhere.
sub user_error { ## Redirect STDERR to STDOUT my $error = shift; print {*STDOUT} $error; } sub user_output { ## Redirect STDOUT to STDERR my $output = shift; print {*STDERR} $output; } sub user_func { my ($mce, $chunk_ref, $chunk_id) = @_; my $count = 0; foreach my $row ( @{ $chunk_ref } ) { MCE->print($row); $count += 1; } MCE->print(\*STDERR, "$chunk_id: processed $count rows\n"); } my $mce = MCE->new( chunk_size => 1000, input_data => "/path/to/file", user_error => \&user_error, user_output => \&user_output, user_func => \&user_func ); $mce->run; This option takes an array of tasks. Each task allows up to 17 options. The init_relay, input_data, RS, and use_slurpio options may be defined inside the first task or at the top level, otherwise ignored under other sub-tasks. max_workers, chunk_size, input_data, interval, sequence, bounds_only, user_args, user_begin, user_end, user_func, gather, task_end, task_name, use_slurpio, use_threads, init_relay, RS Sequence and chunk_size were added in 1.3. User_args was introduced in 1.4. Name and input_data are new options allowed in 1.5. In addition, one can specify task_end at the top level. Task_end also receives 2 additional arguments $task_id and $task_name (shown below). Options not specified here will default to the same option specified at the top level. The task_end option is called by the manager process when all workers for that sub-task have completed processing. Forking and threading can be intermixed among tasks unless running Cygwin. The run method will continue running until all workers have completed processing. 
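The defaulting of per-task options to top-level values behaves like a plain hash merge. The sketch below is a hypothetical illustration of that rule only, not MCE's actual internals (resolve_task_opts is invented for the example):

```perl
use strict;
use warnings;

## Hypothetical: per-task options win; top-level options fill the gaps.
sub resolve_task_opts {
    my ($top_level, $task) = @_;
    return { %$top_level, %$task };   # later keys override earlier ones
}

my $top  = { max_workers => 8, use_threads => 0, chunk_size => 500 };
my $task = { max_workers => 2, task_name   => 'foo' };

my $opts = resolve_task_opts($top, $task);
print "$_ => $opts->{$_}\n" for sort keys %$opts;
```

Here max_workers resolves to the task's value (2) while use_threads and chunk_size fall back to the top level.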
use threads; use threads::shared; use MCE; sub parallel_task1 { sleep 2; } sub parallel_task2 { sleep 1; } my $mce = MCE->new( task_end => sub { my ($mce, $task_id, $task_name) = @_; print "Task [$task_id -- $task_name] completed processing\n"; }, user_tasks => [{ task_name => 'foo', max_workers => 2, user_func => \&parallel_task1, use_threads => 0 ## Not using threads },{ task_name => 'bar', max_workers => 4, user_func => \&parallel_task2, use_threads => 1 ## Yes, threads }] ); $mce->run; -- Output Task [1 -- bar] completed processing Task [0 -- foo] completed processing Beginning with MCE 1.5, the input scalar $_ is localized prior to calling user_func for input_data and sequence of numbers. The following applies. $_ is a reference to the buffer e.g. $_ = \$_buffer; $_ is a reference regardless of whether chunk_size is 1 or greater user_func => sub { # my ($mce, $chunk_ref, $chunk_id) = @_; print ${ $_ }; ## $_ is same as $chunk_ref } $_ is a reference to an array. $_ = \@_records; $_ = \@_seq_n; $_ is same as $chunk_ref or $_[CHUNK] user_func => sub { # my ($mce, $chunk_ref, $chunk_id) = @_; for my $row ( @{ $_ } ) { print $row, "\n"; } } use MCE const => 1; user_func => sub { # my ($mce, $chunk_ref, $chunk_id) = @_; for my $row ( @{ $_[CHUNK] } ) { print $row, "\n"; } } $_ contains the actual value. $_ = $_buffer; $_ = $seq_n; ## Note that $_ and $chunk_ref are not the same below. ## $chunk_ref is a reference to an array. user_func => sub { # my ($mce, $chunk_ref, $chunk_id) = @_; print $_, "\n"; ## Same as $chunk_ref->[0]; } $mce->foreach("/path/to/file", sub { # my ($mce, $chunk_ref, $chunk_id) = @_; print $_; ## Same as $chunk_ref->[0]; }); ## However, that is not the case for the forseq method. ## Both $_ and $n_seq are the same when chunk_size => 1. $mce->forseq([ 1, 9 ], sub { # my ($mce, $n_seq, $chunk_id) = @_; print $_, "\n"; ## Same as $n_seq }); Sequence can also be specified using an array reference. The below is the same as the example afterwards.
$mce->forseq( { begin => 10, end => 40, step => 2 }, ... ); The code block receives an array containing the next 5 sequences. Chunk 1 (chunk_id 1) contains 10,12,14,16,18. $n_seq is a reference to an array, same as $_, due to chunk_size being greater than 1. $mce->forseq( [ 10, 40000, 2 ], { chunk_size => 5 }, sub { # my ($mce, $n_seq, $chunk_id) = @_; my @result; for my $n ( @{ $_ } ) { ... do work, append to result for 5 } ... do something with result afterwards }); The methods listed below are callable by the main process and workers. The 'abort' method is applicable when processing input_data only. This causes all workers to abort after processing the current chunk. Workers write the next offset position to the queue socket for the next available worker. In essence, the 'abort' method writes the last offset position. Workers, on requesting the next offset position, will think the end of input_data has been reached and leave the chunking loop. $mce->abort; MCE->abort; Returns the chunk_id for the current chunk. The value starts at 1. Chunking applies to input_data or sequence. The value is 0 for the manager process. my $chunk_id = $mce->chunk_id; my $chunk_id = MCE->chunk_id; Getter method for chunk_size used by MCE. my $chunk_size = $mce->chunk_size; my $chunk_size = MCE->chunk_size; Calls the internal freeze method to serialize an object. The default serialization routines are handled by Sereal if available or Storable. my $frozen = $mce->freeze([ 0, 2, 4 ]); my $frozen = MCE->freeze([ 0, 2, 4 ]); Getter method for max_retries used by MCE. my $max_retries = $mce->max_retries; my $max_retries = MCE->max_retries; Getter method for max_workers used by MCE. my $max_workers = $mce->max_workers; my $max_workers = MCE->max_workers; Returns the Process ID. Threads have thread ID attached to the value. my $pid = $mce->pid; ## 16180 (pid) ; 16180.2 (pid.tid) my $pid = MCE->pid; Returns the session directory used by the MCE instance. 
This is defined during spawning and removed during shutdown. my $sess_dir = $mce->sess_dir; my $sess_dir = MCE->sess_dir; Returns the task ID. This applies to the user_tasks option (starts at 0). my $task_id = $mce->task_id; my $task_id = MCE->task_id; Returns the task_name value specified via the task_name option when configuring MCE. my $task_name = $mce->task_name; my $task_name = MCE->task_name; Returns the task worker ID (applies to user_tasks). The value starts at 1 per each task configured within user_tasks. The value is 0 for the manager process. my $task_wid = $mce->task_wid; my $task_wid = MCE->task_wid; Calls the internal thaw method to un-serialize the frozen object. my $object_ref = $mce->thaw($frozen); my $object_ref = MCE->thaw($frozen); Returns the temporary directory used by MCE. my $tmp_dir = $mce->tmp_dir; my $tmp_dir = MCE->tmp_dir; Returns the arguments specified via the user_args option. my ($arg1, $arg2, $arg3) = $mce->user_args; my ($arg1, $arg2, $arg3) = MCE->user_args; Returns the MCE worker ID. Starts at 1 per each MCE instance. The value is 0 for the manager process. my $wid = $mce->wid; my $wid = MCE->wid; Methods listed below are callable by the main process only. Forchunk, foreach, and forseq are sugar methods and described in MCE::Candy. Stubs exist in MCE which load MCE::Candy automatically. The process method will spawn workers automatically if not already spawned. It will set input_data => $input_data. It calls run(0) to not auto-shutdown workers. Specifying options is optional. Allowable options { key => value, ... } are: chunk_size input_data job_delay spawn_delay submit_delay flush_file flush_stderr flush_stdout stderr_file stdout_file on_post_exit on_post_run sequence user_args user_begin user_end user_func user_error user_output use_slurpio RS Options remain persistent going forward unless changed. Setting user_begin, user_end, or user_func will cause already spawned workers to shut down and re-spawn automatically. 
Therefore, define these during instantiation. The below will cause workers to re-spawn after running. my $mce = MCE->new( max_workers => 'auto' ); $mce->process( { user_begin => sub { ## connect to DB }, user_func => sub { ## process each row }, user_end => sub { ## close handle to DB }, }, \@input_data ); $mce->process( { user_begin => sub { ## connect to DB }, user_func => sub { ## process each file }, user_end => sub { ## close handle to DB }, }, "/list/of/files" ); Do the following if wanting workers to persist between jobs. use MCE max_workers => 'auto'; my $mce = MCE->new( user_begin => sub { ## connect to DB }, user_func => sub { ## process each chunk or row or host }, user_end => sub { ## close handle to DB }, ); $mce->spawn; ## Spawn early if desired $mce->process("/one/very_big_file/_mce_/will_chunk_in_parallel"); $mce->process(\@array_of_files_to_grep); $mce->process("/path/to/host/list"); $mce->process($array_ref); $mce->process($array_ref, { stdout_file => $output_file }); ## This was not allowed before. Fixed in 1.415. $mce->process({ sequence => { begin => 10, end => 90, step => 2 } }); $mce->process({ sequence => [ 10, 90, 2 ] }); $mce->shutdown; Described in MCE::Relay. One can restart a worker that has died or exited. The job never ends below due to restarting each time. Calling MCE->exit or $mce->exit, instead of the native exit function, is recommended for better handling, especially under the Windows environment. The $e->{wid} argument is no longer necessary starting with the 1.5 release. Press [ctrl-c] to terminate the script. my $mce = MCE->new( on_post_exit => sub { my ($mce, $e) = @_; print "$e->{wid}: $e->{pid}: status $e->{status}: $e->{msg}"; # $mce->restart_worker($e->{wid}); ## MCE-1.415 and below $mce->restart_worker; ## MCE-1.500 and above }, user_begin => sub { my ($mce, $task_id, $task_name) = @_; ## Not interested in die messages going to STDERR, ## because the die handler calls MCE->exit(255, $_[0]).
close STDERR; }, user_tasks => [{ max_workers => 5, user_func => sub { my ($mce) = @_; sleep MCE->wid; MCE->exit(3, "exited from " . MCE->wid . "\n"); } },{ max_workers => 4, user_func => sub { my ($mce) = @_; sleep MCE->wid; die("died from " . MCE->wid . "\n"); } }] ); $mce->run; -- Output 1: PID_85388: status 3: exited from 1 2: PID_85389: status 3: exited from 2 1: PID_85397: status 3: exited from 1 3: PID_85390: status 3: exited from 3 1: PID_85399: status 3: exited from 1 4: PID_85391: status 3: exited from 4 2: PID_85398: status 3: exited from 2 1: PID_85401: status 3: exited from 1 5: PID_85392: status 3: exited from 5 1: PID_85404: status 3: exited from 1 6: PID_85393: status 255: died from 6 3: PID_85400: status 3: exited from 3 2: PID_85403: status 3: exited from 2 1: PID_85406: status 3: exited from 1 7: PID_85394: status 255: died from 7 1: PID_85410: status 3: exited from 1 8: PID_85395: status 255: died from 8 4: PID_85402: status 3: exited from 4 2: PID_85409: status 3: exited from 2 1: PID_85412: status 3: exited from 1 9: PID_85396: status 255: died from 9 3: PID_85408: status 3: exited from 3 1: PID_85416: status 3: exited from 1 ... The run method, by default, spawns workers, processes once, and shuts down afterwards. Specify 0 for $auto_shutdown when wanting workers to persist after running (default 1). Specifying options is optional. Valid options are the same as for the process method. my $mce = MCE->new( ... ); ## Disables auto-shutdown $mce->run(0); The 'send' method is useful when wanting to spawn workers early to minimize memory consumption and afterwards send data individually to each worker. One cannot send more than the total workers spawned. Workers store the received data as $mce->{user_data}. The data which can be sent is restricted to an ARRAY, HASH, or PDL reference. Workers begin processing immediately after receiving data. Workers set $mce->{user_data} to undef after processing. 
One cannot specify input_data, sequence, or user_tasks when using the "send" method. Any options passed, e.g. run(0, { options }), are ignored because workers run immediately after receiving user data. There is no guarantee as to which worker will receive data first. It depends on which worker is available awaiting data. use MCE; my $mce = MCE->new( max_workers => 5, user_func => sub { my ($mce) = @_; my $data = $mce->{user_data}; my $first_name = $data->{first_name}; print MCE->wid, ": Hello from $first_name\n"; } ); $mce->spawn; ## Optional, send will spawn if necessary. $mce->send( { first_name => "Theresa" } ); $mce->send( { first_name => "Francis" } ); $mce->send( { first_name => "Padre" } ); $mce->send( { first_name => "Anthony" } ); $mce->run; ## Wait for workers to complete processing. -- Output 2: Hello from Theresa 5: Hello from Anthony 3: Hello from Francis 4: Hello from Padre The run method will spawn workers automatically, run once, and shut down the workers afterwards. Workers persist after running below. Shutdown may be called as needed or prior to exiting. my $mce = MCE->new( ... ); $mce->spawn; $mce->process(\@input_data_1); ## Processing multiple arrays $mce->process(\@input_data_2); $mce->process(\@input_data_n); $mce->shutdown; $mce->process('input_file_1'); ## Processing multiple files $mce->process('input_file_2'); $mce->process('input_file_n'); $mce->shutdown; Workers are normally spawned automatically. The spawn method allows one to spawn workers early if so desired. my $mce = MCE->new( ... ); $mce->spawn; The greatest exit status is saved among workers while running. Look at the on_post_exit or on_post_run options for callback support. my $mce = MCE->new( ... ); $mce->run; my $exit_status = $mce->status; Methods listed below are callable by workers only. MCE serializes data transfers from a worker process to the manager process via the helper functions do & sendto. The callback function can optionally return a reply.
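In plain Perl, a callee distinguishes void, scalar, and list context with wantarray; the sketch below shows that general mechanism (it illustrates the concept only, and makes no claim about how MCE->do detects context internally):

```perl
use strict;
use warnings;

## wantarray is true in list context, false-but-defined in scalar
## context, and undef in void context.
sub calling_context {
    return wantarray ? 'list' : defined wantarray ? 'scalar' : 'void';
}

my @list   = calling_context();   # list context
my $scalar = calling_context();   # scalar context
calling_context();                # void context (return value discarded)

print "list call saw: $list[0]\n";
print "scalar call saw: $scalar\n";
```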
[ $reply = ] MCE->do('callback' [, $arg1, ... ]); Passing args to a callback function using references & scalar. sub callback { my ($array_ref, $hash_ref, $scalar_ref, $scalar) = @_; ... } MCE->do('main::callback', \@a, \%h, \$s, 'foo'); MCE->do('callback', \@a, \%h, \$s, 'foo'); MCE knows if wanting a void, list, hash, or a scalar return value. MCE->do('callback' [, $arg1, ... ]); my @array = MCE->do('callback' [, $arg1, ... ]); my %hash = MCE->do('callback' [, $arg1, ... ]); my $scalar = MCE->do('callback' [, $arg1, ... ]); A worker exits from MCE entirely. $id (optional) can be used for passing the primary key or a string along with the message. Look at the on_post_exit or on_post_run options for callback support. MCE->exit; ## default 0 MCE->exit(1); MCE->exit(2, 'chunk failed', $chunk_id); MCE->exit(0, 'msg_foo', 'id_1000'); A worker can submit data to the location specified via the gather option by calling this method. See MCE::Flow and MCE::Loop for additional use-cases. use MCE; my @hosts = qw( hosta hostb hostc hostd hoste ); my $mce = MCE->new( chunk_size => 1, max_workers => 3, user_func => sub { my ($mce, $chunk_ref, $chunk_id) = @_; my ($output, $error, $status); my $host = $chunk_ref->[0]; $output = "Worker ". MCE->wid .": Hello from $host"; if (MCE->chunk_id % 3 == 0) { local $? = 1; $status = $?; $error = "Error from $host"; } else { $status = 0; } MCE->gather("$host.out", $output, "$host.sta", $status); MCE->gather("$host.err", $error) if (defined $error); } ); my %h; $mce->process(\@hosts, { gather => \%h }); foreach my $host (@hosts) { print $h{"$host.out"}, "\n"; print $h{"$host.err"}, "\n" if (exists $h{"$host.err"}); print "Exit status: ", $h{"$host.sta"}, "\n\n"; } -- Output Worker 2: Hello from hosta Exit status: 0 Worker 1: Hello from hostb Exit status: 0 Worker 3: Hello from hostc Error from hostc Exit status: 1 Worker 2: Hello from hostd Exit status: 0 Worker 1: Hello from hoste Exit status: 0 Worker leaves the chunking loop or user_func block immediately. Callable from inside foreach, forchunk, forseq, and user_func. use MCE; my $mce = MCE->new( max_workers => 5 ); my @list = (1 ..
80); $mce->forchunk(\@list, { chunk_size => 2 }, sub { my ($mce, $chunk_ref, $chunk_id) = @_; MCE->last if ($chunk_id > 4); my @output = (); foreach my $rec ( @{ $chunk_ref } ) { push @output, $rec, "\n"; } MCE->print(@output); }); -- Output (each chunk above consists of 2 elements) 3 4 1 2 7 8 5 6 Worker starts the next iteration of the chunking loop. Callable from inside foreach, forchunk, forseq, and user_func. use MCE; my $mce = MCE->new( max_workers => 5 ); my @list = (1 .. 80); $mce->forchunk(\@list, { chunk_size => 4 }, sub { my ($mce, $chunk_ref, $chunk_id) = @_; MCE->next if ($chunk_id < 20); my @output = (); foreach my $rec ( @{ $chunk_ref } ) { push @output, $rec, "\n"; } MCE->print(@output); }); -- Output (each chunk above consists of 4 elements) 77 78 79 80 Use the printf, print, and say methods when wanting to serialize output among workers. These are sugar syntax for the sendto method. These behave similarly to the native subroutines in Perl with the exception that barewords must be passed as a reference and require the comma after it including file handles. Say is like print, but implicitly appends a newline. MCE->printf(\*STDOUT, "%s: %d\n", $name, $age); MCE->printf($fh, "%s: %d\n", $name, $age); MCE->printf("%s: %d\n", $name, $age); MCE->print(\*STDERR, "$error_msg\n"); MCE->print($fh, $log_msg."\n"); MCE->print("$output_msg\n"); MCE->say(\*STDERR, $error_msg); MCE->say($fh, $log_msg); MCE->say($output_msg); Caveat: Use the following syntax when passing a reference, not a glob or file handle. Otherwise, MCE will error indicating the first argument is not a glob reference. MCE->print(\*STDOUT, \@array, "\n"); MCE->print("", \@array, "\n"); ## ok Described in MCE::Relay. The sendto method is called by workers for serializing data to standard output, standard error, or to a file. The action is done by the manager process. Release 1.00x supported 1 data argument, not more.
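The destination forms accepted by sendto, listed below ('file:...', 'fd:N', 'STDOUT'/'STDERR', or a glob/handle reference), can be distinguished with a small classifier. classify_dest is a hypothetical helper written for illustration; it is not part of MCE:

```perl
use strict;
use warnings;

## Hypothetical classifier for sendto-style destination arguments.
sub classify_dest {
    my ($dest) = @_;
    return 'handle' if ref $dest;                 # glob ref or file handle
    return 'file'   if $dest =~ /^file:(.+)/s;    # file:/path/to/file
    return 'fd'     if $dest =~ /^fd:(\d+)$/;     # fd:2, fd:1, ...
    return 'stdio'  if $dest eq 'STDOUT' || $dest eq 'STDERR';
    return 'unknown';
}

print classify_dest('file:/path/to/file'), "\n";
print classify_dest('fd:2'), "\n";
print classify_dest(\*STDERR), "\n";
```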
MCE->sendto('file', \@array, '/path/to/file'); MCE->sendto('file', \$scalar, '/path/to/file'); MCE->sendto('file', $scalar, '/path/to/file'); MCE->sendto('STDERR', \@array); MCE->sendto('STDERR', \$scalar); MCE->sendto('STDERR', $scalar); MCE->sendto('STDOUT', \@array); MCE->sendto('STDOUT', \$scalar); MCE->sendto('STDOUT', $scalar); Release 1.100 added the ability to pass multiple arguments. Notice the syntax change for sending to a file. Passing a reference to an array is no longer necessary. MCE->sendto('file:/path/to/file', $arg1 [, $arg2, ... ]); MCE->sendto('STDERR', $arg1 [, $arg2, ... ]); MCE->sendto('STDOUT', $arg1 [, $arg2, ... ]); MCE->sendto('STDOUT', @a, "\n", %h, "\n", $s, "\n"); To retain 1.00x compatibility, sendto outputs the content when a single data reference is specified. Otherwise, the reference for \@array or \$scalar is shown in 1.500, not the content. MCE->sendto('STDERR', \@array); ## 1.00x behavior, content MCE->sendto('STDOUT', \$scalar); MCE->sendto('file:/path/to/file', \@array); ## Output matches the print statement MCE->sendto(\*STDERR, \@array); ## 1.500 behavior, reference MCE->sendto(\*STDOUT, \$scalar); MCE->sendto($fh, \@array); MCE->sendto('STDOUT', \@array, "\n", \$scalar, "\n"); print {*STDOUT} \@array, "\n", \$scalar, "\n"; MCE 1.500 added support for sending to a glob reference, file descriptor, and file handle. MCE->sendto(\*STDERR, "foo\n", \@array, \$scalar, "\n"); MCE->sendto('fd:2', "foo\n", \@array, \$scalar, "\n"); MCE->sendto($fh, "foo\n", \@array, \$scalar, "\n"); A barrier sync operation means any worker must stop at this point until all workers reach this barrier. Barrier syncing is useful for many computer algorithms. Barrier synchronization is supported for task 0 only or omitting user_tasks. All workers assigned task_id 0 must call sync whenever barrier syncing. 
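Conceptually, a barrier means every worker stops until all have arrived, and only then do all proceed. The fork-and-pipe sketch below shows that idea using core Perl only; it is not MCE's implementation, which coordinates workers through the manager process:

```perl
use strict;
use warnings;

my $N = 4;
pipe(my $arrive_r,  my $arrive_w)  or die "pipe: $!";
pipe(my $release_r, my $release_w) or die "pipe: $!";
pipe(my $out_r,     my $out_w)     or die "pipe: $!";

my @pids;
for my $wid (1 .. $N) {
    my $pid = fork;
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {
        syswrite($out_w, "a: $wid\n");    # work before the barrier
        syswrite($arrive_w, 'x');         # signal arrival to the manager
        sysread($release_r, my $b, 1);    # block until everyone has arrived
        syswrite($out_w, "b: $wid\n");    # work after the barrier
        exit 0;
    }
    push @pids, $pid;
}
close $out_w;                             # parent keeps only the read end

for (1 .. $N) { sysread($arrive_r, my $b, 1) }   # wait for N arrivals
syswrite($release_w, 'x' x $N);                  # release all N workers
waitpid($_, 0) for @pids;

my @lines = <$out_r>;
print @lines;   # every "a:" line precedes every "b:" line
```

Because each worker flushes its "a:" line (syswrite is unbuffered) before signalling arrival, and the release happens only after all arrivals, the "a:" lines are guaranteed to land in the pipe before any "b:" line, which is exactly the property the MCE->sync output below demonstrates.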
use MCE; sub user_func { my ($mce) = @_; my $wid = MCE->wid; MCE->sendto("STDOUT", "a: $wid\n"); ## MCE 1.0+ MCE->sync; MCE->sendto(\*STDOUT, "b: $wid\n"); ## MCE 1.5+ MCE->sync; MCE->print("c: $wid\n"); ## MCE 1.5+ MCE->sync; return; } my $mce = MCE->new( max_workers => 4, user_func => \&user_func )->run; -- Output (without barrier synchronization) a: 1 a: 2 b: 1 b: 2 c: 1 c: 2 a: 3 b: 3 c: 3 a: 4 b: 4 c: 4 -- Output (with barrier synchronization) a: 1 a: 2 a: 4 a: 3 b: 2 b: 1 b: 3 b: 4 c: 1 c: 4 c: 2 c: 3 Consider the following example. The MCE->sync operation is done inside a loop along with MCE->do. A stall may occur for workers calling sync the 2nd or 3rd time while other workers are sending results via MCE->do or MCE->sendto. Solving this would require another semaphore lock in MCE, which was not added in order to keep resource usage low. Therefore, please keep this in mind when mixing MCE->sync with MCE->do or output serialization methods inside a loop. sub user_func { my ($mce) = @_; my @result; for (1 .. 3) { ... compute algorithm ... MCE->sync; ... compute algorithm ... MCE->sync; MCE->do('aggregate_result', \@result); ## or MCE->sendto MCE->sync; ## The sync operation is also needed here to ## prevent MCE from stalling. } } There may be occasions when the MCE-driven app is too fast. The interval option combined with the yield method, both introduced with MCE 1.5, allows one to throttle the app. It adds a "grace" factor to the design. A use case is an app configured with 100 workers running on a box with 24 logical cores. Data is polled from a database containing over 2.5 million rows. Workers chunk away at 300 rows per chunk performing SNMP gets (300 sockets per worker) polling 25 metrics from each device. With this scenario, the load on the box may rise beyond 90+. In addition, IP_Tables may reach its contention point causing the entire application to fail. The scenario above is solved by simply having workers yield among themselves in a synchronized fashion.
A delay of 0.007 seconds between intervals is all that's needed. The load on the box will hover between 23 ~ 27 for the duration of the run. Polling completes in under 17 minutes. This is quite fast considering the app polls 62.5 million metrics combined. The math equates to 3,676,470 per minute or rather 61,275 per second from a single box. ## Both max_nodes and node_id are optional (default 1). interval => { delay => 0.007, max_nodes => $max_nodes, node_id => $node_id } A 4 node setup can poll 10 million devices without the additional overhead of a distribution agent. The difference between the 4 nodes is simply node_id and the where clause used to query the database. The mac addresses are random such that the data divides equally to any power of 2. The distribution key lies in the mac address itself. In fact, the 2nd character from the right is sufficient for leveraging that randomness for equal distribution. Query NodeID 1: ... AND substr(MAC, -2, 1) IN ('0', '1', '2', '3') Query NodeID 2: ... AND substr(MAC, -2, 1) IN ('4', '5', '6', '7') Query NodeID 3: ... AND substr(MAC, -2, 1) IN ('8', '9', 'A', 'B') Query NodeID 4: ... AND substr(MAC, -2, 1) IN ('C', 'D', 'E', 'F') Below, the user_tasks option is configured to simulate 4 nodes. This demonstration uses 2 workers to minimize the output size. Input is from the sequence option. use Time::HiRes qw(time); use MCE; my $d = shift || 0.1; local $| = 1; sub create_task { my ($node_id) = @_; my $seq_size = 6; my $seq_start = ($node_id - 1) * $seq_size + 1; my $seq_end = $seq_start + $seq_size - 1; return { max_workers => 2, sequence => [ $seq_start, $seq_end ], interval => { delay => $d, max_nodes => 4, node_id => $node_id } }; } sub user_begin { my ($mce, $task_id, $task_name) = @_; ## The yield method causes this worker to wait for its next time ## interval slot before running. Yield has no effect without the ## 'interval' option. ## Yielding is beneficial inside a user_begin block.
A use case ## is staggering database connections among workers in order ## to not impact the DB server. MCE->yield; MCE->printf( "Node %2d: %0.5f -- Worker %2d: %12s -- Started\n", MCE->task_id + 1, time, MCE->task_wid, '' ); return; } { my $prev_time = time; sub user_func { my ($mce, $seq_n, $chunk_id) = @_; ## Yield simply waits for the next time interval. MCE->yield; ## Calculate how long this worker has waited. my $curr_time = time; my $time_waited = $curr_time - $prev_time; $prev_time = $curr_time; MCE->printf( "Node %2d: %0.5f -- Worker %2d: %12.5f -- Seq_N %3d\n", MCE->task_id + 1, time, MCE->task_wid, $time_waited, $seq_n ); return; } } ## Simulate a 4 node environment passing node_id to create_task. print "Node_ID Current_Time Worker_ID Time_Waited Comment\n"; MCE->new( user_begin => \&user_begin, user_func => \&user_func, user_tasks => [ create_task(1), create_task(2), create_task(3), create_task(4) ] )->run; -- Output (notice Current_Time below, stays 0.10 apart) Node_ID Current_Time Worker_ID Time_Waited Comment Node 1: 1374807976.74634 -- Worker 1: -- Started Node 2: 1374807976.84634 -- Worker 1: -- Started Node 3: 1374807976.94638 -- Worker 1: -- Started Node 4: 1374807977.04639 -- Worker 1: -- Started Node 1: 1374807977.14634 -- Worker 2: -- Started Node 2: 1374807977.24640 -- Worker 2: -- Started Node 3: 1374807977.34649 -- Worker 2: -- Started Node 4: 1374807977.44657 -- Worker 2: -- Started Node 1: 1374807977.54636 -- Worker 1: 0.90037 -- Seq_N 1 Node 2: 1374807977.64638 -- Worker 1: 1.00040 -- Seq_N 7 Node 3: 1374807977.74642 -- Worker 1: 1.10043 -- Seq_N 13 Node 4: 1374807977.84643 -- Worker 1: 1.20045 -- Seq_N 19 Node 1: 1374807977.94636 -- Worker 2: 1.30037 -- Seq_N 2 Node 2: 1374807978.04638 -- Worker 2: 1.40040 -- Seq_N 8 Node 3: 1374807978.14641 -- Worker 2: 1.50042 -- Seq_N 14 Node 4: 1374807978.24644 -- Worker 2: 1.60045 -- Seq_N 20 Node 1: 1374807978.34628 -- Worker 1: 0.79996 -- Seq_N 3 Node 2: 1374807978.44631 -- Worker 1: 0.79996 -- 
Seq_N 9 Node 3: 1374807978.54634 -- Worker 1: 0.79996 -- Seq_N 15 Node 4: 1374807978.64636 -- Worker 1: 0.79997 -- Seq_N 21 Node 1: 1374807978.74628 -- Worker 2: 0.79996 -- Seq_N 4 Node 2: 1374807978.84632 -- Worker 2: 0.79997 -- Seq_N 10 Node 3: 1374807978.94634 -- Worker 2: 0.79996 -- Seq_N 16 Node 4: 1374807979.04636 -- Worker 2: 0.79996 -- Seq_N 22 Node 1: 1374807979.14628 -- Worker 1: 0.80001 -- Seq_N 5 Node 2: 1374807979.24631 -- Worker 1: 0.80000 -- Seq_N 11 Node 3: 1374807979.34634 -- Worker 1: 0.80001 -- Seq_N 17 Node 4: 1374807979.44636 -- Worker 1: 0.80000 -- Seq_N 23 Node 1: 1374807979.54628 -- Worker 2: 0.80000 -- Seq_N 6 Node 2: 1374807979.64631 -- Worker 2: 0.80000 -- Seq_N 12 Node 3: 1374807979.74633 -- Worker 2: 0.80000 -- Seq_N 18 Node 4: 1374807979.84636 -- Worker 2: 0.80000 -- Seq_N 24 The interval.pl example above is included with MCE. Mario E. Roy, <marioeroy AT gmail DOT com>
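As a quick sanity check of the throughput figures quoted at the start of this example (62.5 million metrics polled in roughly 17 minutes from a single box), the arithmetic can be reproduced directly. This Python snippet is only an illustration of the calculation and is not part of MCE:

```python
# Throughput arithmetic from the text: 62.5 million metrics
# polled in about 17 minutes from a single box.
total_metrics = 62_500_000
minutes = 17

per_minute = total_metrics / minutes         # roughly 3.68 million per minute
per_second = total_metrics / (minutes * 60)  # roughly 61,275 per second

print(int(per_minute))    # 3676470
print(round(per_second))  # 61275
```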
http://search.cpan.org/dist/MCE/lib/MCE/Core.pod
{-# LANGUAGE ScopedTypeVariables, BangPatterns, TypeSynonymInstances, UndecidableInstances, Flexible, Storage.Hashed.AnchoredPath import Storage.Hashed.Tree import Storage.Hashed.Hash import Control.Applicative( (<$>) ) import Data.List( sortBy ) import Data.Int( Int64 ) import Data.Maybe( isNothing, isJust ) import qualified Data.ByteString.Lazy.Char8 as BL import Control.Monad.RWS.Strict import qualified Data.Set as S :: (Functor m, Monad m) => TreeMonad m () flush = do current <- get changed' <- map fst <$> M.toList <$> gets changed dirs' <- gets tree >>= \t -> return [ path | (path, SubTree s) <- list t ] modify $ \st -> st { changed = M.empty, changesize = 0 } forM_ (changed' ++ dirs' ++ [AnchoredPath []]) flushItem runTreeMonad' :: (Functor m, Monad m) => TreeMonad m a -> TreeState m -> m (a, Tree m) runTreeMonad' action initial = do (out, final, _) <- runRWST action (AnchoredPath []) initial return (out, tree final) runTreeMonad :: (Functor m, Monad m) => TreeMonad m a -> TreeState m -> m (a, Tree m) runTreeMonad action initial = do let action' = do x <- action flush return x runTreeMonad' action' initial -- | :: (Functor m, Monad m) => TreeMonad m a -> Tree m -> m (a, Tree m) virtualTreeMonad action t = runTreeMonad' action $ initialState t (\_ -> return NoHash) (\_ x -> return x) :: (Functor m, Monad -- | :: (Functor m, Monad m) => AnchoredPath -> Maybe (TreeItem m) -> TreeMonad m () replaceItem path item = do path' <- (`catPaths` path) `fmap` currentDirectory modify $ \st -> st { tree = modifyTree (tree st) path' item } flushItem :: forall e m. 
(Monad m, Functor, Functor m) => TreeMonad m () flushSome = do x <- gets changesize when (x > megs 100) $ do remaining <- go =<< sortBy age <$> M.toList <$> gets changed modify $ \s -> s { changed = M.fromList remaining } where go [] = return [] go ((path, (size, age_)):chs) = do x <- (\s -> s - size) <$> gets changesize flushItem path modify $ \s -> s { changesize = x } if (x > megs 50) then go chs else return $ chs megs = (* (1024 * 1024)) age (_, (_, a)) (_, (_, b)) = compare a b instance (Functor m, Monad m) => TreeRO (TreeMonad m) where expandTo p = do t <- gets tree) => TreeRW (TreeMonad m) where writeFile p con = do expandTo p modifyItem p (Just blob) flushSome from' <- expandTo from to' <- expandTo to tr <- gets tree let item = find tr from' found_to = find tr to' unless (isNothing found_to) $ fail $ "Error renaming: destination " ++ show to ++ " exists." unless (isNothing item) $ do modifyItem from Nothing modifyItem to item renameChanged from to copy from to = do from' <- expandTo from to' <- expandTo to tr <- gets tree let item = find tr from' unless (isNothing item) $ modifyItem to item findM' :: forall m a e. (Monad m, Functor m) => (Tree m -> AnchoredPath -> a) -> Tree m -> AnchoredPath -> m a findM' what t path = fst <$> virtualTreeMonad (look path) t where look :: AnchoredPath -> TreeMonad m a look = expandTo >=> \p' -> flip what p' <$> gets tree findM :: (Monad m, Functor m) => Tree m -> AnchoredPath -> m (Maybe (TreeItem m)) findM = findM' find findTreeM :: (Monad m, Functor m) => Tree m -> AnchoredPath -> m (Maybe (Tree m)) findTreeM = findM' findTree findFileM :: (Monad m, Functor m) => Tree m -> AnchoredPath -> m (Maybe (Blob m)) findFileM = findM' findFile
http://hackage.haskell.org/package/hashed-storage-0.5.8/docs/src/Storage-Hashed-Monad.html
Details - Type: Bug - Status: Closed - Priority: Not Evaluated - Resolution: Done - Affects Version/s: Qt Creator 4.14.0 - Fix Version/s: Qt Creator 4.14.1, Qt Creator 4.15.0-beta1 - Component/s: Clang Tidy & Clazy Analyzer - Labels:None - Platform/s: - Commits:d20303250427de7eb2c336c4340a123e00d897e0 (qt-creator/qt-creator/4.14) Description When trying to analyze my code that is built against a Kit with MSVC 2019/16.8.. With clang-tidy, it fails with errors like: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.28.29333\include\intrin.h:56:1: error: expected function body after function declarator [clang-diagnostic-error] __MACHINE(void * _AddressOfReturnAddress(void)) ^ Suppressed 3 warnings (3 with check filters). Found compiler error(s). That might be related to MSVC 2019/16.8: With clazy, it fails with errors like: C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um\winnt.h(1010,5): error: MS-style inline assembly is not available: Unable to find target for this triple (no targets are registered) __asm { ^ When trying to do that with a MinGW 8.1 x64 kit, it fails like this... With clazy, it fails with errors like: C:\Qt\Tools\mingw810_64\lib\gcc\x86_64-w64-mingw32\8.1.0\include\xmmintrin.h:292:19: error: use of undeclared identifier '__builtin_ia32_movss' return (__m128) __builtin_ia32_movss ((__v4sf) __A, Attachments Issue Links - relates to QTCREATORBUG-20818 Clang-tidy fails to analyze files on Windows with MinGW - Closed
https://bugreports.qt.io/browse/QTCREATORBUG-25126?gerritReviewStatus=Open
Run the java code. This was a doozy. I recently upgraded the compiler and rebuilt the JNI library. It works :) – Emaborsa, Mar 14 '13. Thank you, you are the best! – Thomas, Sep 26 '13.

Does anyone know what might be causing this? I also double checked, and the signatures of the JNI functions look the same in the case where the DLL does load without error and in the case where it fails.

You can alter this property by passing -Djava.library.path= – Samhain, Nov 19 '13. I am using System.load() and providing the full path of the DLL. I tried to edit the .h and .cpp files, using classes, namespaces, static methods and other stuff found on the web, but nothing worked. I did not use -Djava.library.path when launching my Java application from the command prompt. – Shiva, Nov 20 '13. @user3008675: The issue is that Java can't find your other DLLs. If the list includes your function, then you're good. I think during the upgrade process my project settings/NetBeans paths got screwed up.

You can see that LIBINET.DLL in the attached image is highlighted in red, and when you click on it you can see the missing functions. Code:

    char[] output = new char[8]; // output has no more than 8 characters
    char[] input = new char[] { 'x', 'x', 'x' };
    int len = FuncB(input, output);

– anjanb, Oct 1 '08. I opened the DLL in a PE explorer and double-checked. I apologize for the typo. Now I am not getting the error anymore, but I am not getting any output. Am I doing something wrong? Will update this as soon as I hear from them. write2warriors commented Apr 3, 2012: not sure if it answers your questions, but on Dependency Walker, without undecorated C++ functions, it

It may be OK if you have problems with delay-load modules (missing delay-load dependencies are not a problem as long as the calling DLL is prepared to handle the missing module). Now I get the above error. My C/C++ skills are not the best, so the problem could be there.

Step over with the debugger twice and you should be able to step over your source code. If you are using a C++ compiler, you must use 'extern "C"' in order to avoid name mangling; the stdcall calling convention will also add a suffix to function names. – Jeff Yates, Oct 1 '08.

Did you create the new external DLL using the standard JNI procedure, i.e., using javah and so forth? – Anthony Cramp, Oct 3 '08. The .dll is being found because I debugged from where the Java class calls System.loadLibrary() and the path to the .dll is resolved correctly. Both libBv2 and libC were being found.

I had this issue. Is there an error with the dll? Or perhaps I switched JDKs/JREs and as a result my DLL wasn't in the new directory. – matt, Oct 7 '08.

It sounds like you aren't referencing all the lib files you need. I am using the standard JNI procedure. The library seems to get loaded, but always the same exception is thrown: Exception in thread "main" java.lang.UnsatisfiedLinkError: Error looking up function 'function': The specified procedure could not be found. The rest of the dependencies of that DLL are present in %PATH%, and I have verified it using tools like Dependency Walker. Thanks, Keith. P.S. Java char maps to native wchar_t. I would have deleted this post, but I'm not sure that is possible.
http://gsbook.org/the-specified/java-lang-unsatisfiedlinkerror-the-specified-procedure-could-not-be-found.php
I have a generic method like this:

public class MyClass {
    public T MyMethod<T>(string arg) {
        return default(T);
    }
}

When I try to call the method like this:

int someInt = MyClassInstance.MyMethod<int>("some_string");

I get the following error:

Attempting to JIT compile method 'MyClass.MyMethod<int> (string)' while running with --aot-only.

This also happens with bool and float types, but it works for string. Would you guys know why this happens? Thanks!

Answer by brianturner · Jan 29, 2015 at 02:42 PM

This happened because just-in-time (JIT) compilation does not work on iOS. All code must be ahead-of-time (AOT) compatible, and default(T) sometimes uses JIT. When this happens, you will have to write the code in such a way as to remove the use of default(T). I can't predict if it will or won't. I will update this answer if I figure out more.

All right. Thanks! :)

Can someone confirm this please? @brianturner @jica: I see the default keyword being used in the Dictionary implementation for TryGetValue, so I was wondering how this got accepted as the answer. Please suggest.

Interesting. I wasn't aware Dictionary was using it. I ran some tests and found there are some instances where it does work, but I can't see why it does or doesn't.
https://answers.unity.com/questions/888683/ios-jit-compile-error-on-generic-method-called-wit.html
public class Groovy extends Java

Executes a series of Groovy statements. Statements can either be read in from a text file using the src attribute or from between the enclosing groovy tags.

Adds the class paths (if any).
    classLoader - the classloader to configure

Set the source resource.
    a - the resource to load as a single element Resource collection.

Adds a fileset (nested fileset attribute) which should represent a single source file.
    set - the fileset representing a source file

Add the FilterChain element.
    filter - the filter to add

Declare the encoding to use when inputting from a resource; if not supplied or the empty encoding is supplied, a guess will be made for file resources, otherwise the platform's default encoding will be used.
    encoding - the character encoding to use.

Declare the encoding to use when outputting to a file; leave unspecified or use "" for the platform's default encoding.
    encoding - the character encoding to use.

or a nested resource is supplied.
https://docs.groovy-lang.org/docs/groovy-3.0.8/html/gapi/org/codehaus/groovy/ant/Groovy.html
I am trying to write a program in C to accomplish the following task.

Input: Three double-precision numbers, a, b, and c.
Output: All the numbers from b to a, that can be reached by decrements of c.

#include <stdlib.h>
#include <stdio.h>

int main() {
    double high, low, step, var;

    printf("Enter the <lower limit> <upperlimit> <step>\n>>");
    scanf("%lf %lf %lf", &low, &high, &step);

    printf("Number in the requested range\n");
    for (var = high; var >= low; var -= step)
        printf("%g\n", var);

    return 0;
}

10-236-49-81:stackoverflow pavithran$ ./range.o
Enter the <lower limit> <upperlimit> <step>
>>0.1 0.9 0.2
Number in the requested range
0.9
0.7
0.5
0.3
10-236-49-81:stackoverflow pavithran$ ./range.o
Enter the <lower limit> <upperlimit> <step>
>>0.1 0.5 0.1
Number in the requested range
0.5
0.4
0.3
0.2
0.1
10-236-49-81:stackoverflow pavithran$

Using a double as a counter in a for loop requires very careful consideration. In many instances it's best avoided. I'm sure you know that not all numbers that are exact in decimal are also exact in binary floating point. In fact, for IEEE754 floating point, only dyadic rationals are. So 0.5 is, but 0.4, 0.3, 0.2, and 0.1 are not. The closest IEEE754 floating point double to 0.2 is actually the slightly larger 0.200000000000000011102230246251565404236316680908203125. In your case a repeated subtraction of this from 0.9 eventually causes a number whose first significant figure is a to become a number whose first significant figure is a - 3: your bug then manifests itself. The simple remedy is to work in integers, decrement by 1 each time, and scale your output using step.
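The exact stored value of 0.2 quoted above can be checked directly. Here is a small Python sketch (Python is used for brevity, though the question is about C; IEEE754 doubles behave identically in both):

```python
from decimal import Decimal

# Decimal(float) shows the exact binary value a float literal is stored as.
print(Decimal(0.5))  # 0.5 -- a dyadic rational, exactly representable
print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125

# Repeated subtraction therefore drifts away from the decimal values you expect:
x = 0.9
for _ in range(4):
    x -= 0.2
print(x)  # slightly above 0.1, not 0.1 exactly
```

This is exactly why the loop above skips 0.1 when stepping down from 0.9 by 0.2.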
https://codedump.io/share/RWD6fITIh8IW/1/looping-over-a-range-of-floats-in-c
A little bonus for people that follows my webpack academy course! I will show you how to add typescript with vuejs2 and Sass! I will divide this article into 3 parts! You can only follow the first if you need to add only typescript into your project! Add typescript For adding typescript we will need to add a loader and install some dependencies! We will install ts-loader that will handling .ts file! We need to add tsconfig.json (ts-loader will use it for transpiling ts file into js file). After this we will remove all file in our src/ in order to add index.ts (expect html file). We need to use ts-loader in our webpack config! module: { rules: [{ test: /\.tsx?$/, loader: "ts-loader", exclude: /node_modules/, }] }, Alias typescript & webpack If you use alias in webpack, you need to do the same alias for tsconfig file! webpack.config resolve: { alias: { '@': path.resolve(__dirname, "./src/"), } }, tsconfig.json "paths": { "@/*": ["./src/*"] }, You can check all changes from this Add vuejs 2 So now we will install vue2! We will add the vue-loader. We will need to install another loader, if you remember during the first academy, I explain the goal of style-loader (it inject css into the DOM). We will need to replace it (we use it only in dev mode) with vue-style-loader! (it will do the same thing but for css in vue file!) Ok so now we need to make 4 things! - the first is to indicate the alias of vue for webpack - the second is linked to typescript - the third is to add vue library as cdn - the last is to configure vue & test! Alias vue (vue.esm.js) In the webpack config 'vue$': 'vue/dist/vue.esm.js', Adapt typescript with vue file When typescript will handle vue file, it will have some trouble! Since it's not a ts file! But we need to transpile vue file into js file! 
So when we declare our ts-loader we need to add this options: { // Tell to ts-loader: if you check .vue file extension, handle it like a ts file appendTsSuffixTo: [/\.vue$/] } We also need to create a file called vue-shims.d.ts, it will tell the TypeScript compiler that importing .vue files is OK. So you can import vue file without issue in .ts! declare module "*.vue" { import Vue from "vue" export default Vue } Also, we need to put this file in the ts-config "files": [ "./vue-shims.d.ts", ] 😅 We almost finish! Be brave 💪 Import vue with cdn Go to the part dedicated to cdn in my academy if you need to know how it's working but you need to add vue cdn link for dev mode and the same but vue.min in prod mode. Don't forget to add external property into the webpack.config Config vuejs We just need to configure vuejs config and we are done! So first of all we need to create index.ts that will be the entry file of vue. import Vue from "vue" import App from "./app/App.vue" Vue.config.productionTip = false export const app = new Vue({ el: "#app", render: h => h(App), }) I prefer to split .vue to .ts, my vue file will include my style and template, the typescript file will include all component logic. app.vue <template> <div class="toto"> Hello there </div> </template> <script lang="ts" src="./App.ts"></script> <style scoped> .toto { color: blue; } </style> app.ts import Vue from "vue" export default Vue.extend({}) The last thing to do is to go to html file and create the div that will be used by vuejs (vue will use this div to inject its components). As mentioned by the entry file, the id is app. So we just need to inject it into our html! <div id="app"></div> You can check all changes from this SASS You can skip vuejs part if you are only interested by SASS with webpack! Let's add sass-loader to our project, we need to use it before handling css! Since Sass transpilers file .scss into .css! 
We also need to change our regex for handling .scss file test: /\.(sa|sc|c)ss$/, Alias for style I like to use alias for style files! So we can go to it, but we need to add it to webpack config and typescript config! After this, we can create our first .sass file. $main-colors: #6096BA; $hover: #45718D; $active: #385A71; $grey: #677681; $light: #B7D0E1; $black: #233947; And use it to .vue <style lang="scss" scoped> @import "~style/import.scss"; div { background: $grey; } </style> Note: We also need to install sass packages! I hope you like this BIG bonus! I hope you like this reading! 🎁 You can get my new book Underrated skills in javascript, make the difference for FREE if you follow me on Twitter and MP me 😁 ☕️ You can SUPPORT MY WORKS 🙏 🏃♂️ You can follow me on 👇 🕊 Twitter : 👨💻 Github: And you can mark 🔖 this article! Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/codeoz/webpack-academy-bonus-use-webpack-with-typescript-vuejs-sass-38ff
A library is a pre-compiled program, while a header file is an interface for the library. Libraries are programs that are ready for use by the programmer and need no compilation. The best examples of library files are .dll files (in Windows) and .so files (in Linux). In Windows, if you want to include a plugin, a library is used, and which library you can use is language dependent: if you are programming in Java you can only use a library written in Java. This also holds true for C and C++ libraries, but the C++ standard allows the use of C libraries. However, through bindings, a library written in a particular language can be utilized from various languages. Libraries are of two types: static and dynamic. For more information you can go to this link: Static and Dynamic library in C++.

A header file serves as an interface for the library. It contains all the function names that will be included in the library, but not the functions' code. If you want to use a function from the library you need to include its header file; without it, the compiler does not know what the strlen() function is, for example. But if you include the header file cstring, the compiler compiles the program happily and gives you the length of the string str. By adding the header file cstring, the compiler searches for the function strlen in the header file and calls it if found; if not, it gives you an error message. It is only through adding header files that we can use the library functions, so they act as an interface for the library functions. Header files usually have a .h extension. The next topic discusses how to make a header file.

Making .h and .cpp file

You have seen that a header (.h) file becomes necessary if you are using a library. In other words, if you are making a library you also need a header file. The header file only introduces what functions your library will contain; the main code of the functions is written in a .cpp (for C++, .c for C) file. Here we will learn a few ways to make .h and .cpp files.
I ) If you are using Code::Blocks the fastest way to make .h and .cpp is to navigate to the left-top side of the window and go to the directory “File->New->Class” and click on it then a new window appears where the name of the class is entered(say Class_name).Clicking create will create Class_name.h inside newly created folder ‘include‘ and Class_name.cpp inside the newly created folder ‘src‘. If you open the Class_name.h the file will appear like this. “include/Classname.h“ for the first time it includes the header’s file contents in the preprocessor. But when it come across the statement “include/Classname.h” again in another file it prevents the inclusion of the content of the file as it has already been included.If the statements were not added you will get an error. In your .cpp add the code of add() function.So the file will appear as, #include “../include/Class_name.h” Class_name::Class_name( ) { //ctor } Class_name::~Class_name( ) { //dtor } int Class_name::add(int i1 ,int i2) { return (i1+i2) ; } To include the add() function in your main.cpp file include ” #include “include/Class_name.h” “.So,the main.cpp file will appear as, #include <iostream> #include “include/Class_name.h” using namespace std ; int main( ) { Class_name cn ; cout<< cn.add(1,999) << endl ; cin.get() ; return 0 ; } **Note :: The header’s file name is declared inside ” “(double quotation) not under ‘ < > ‘ (sign).This allow the compiler to search for the file in the directory where your main.cpp file is present.However, if we declare it inside < > the compiler would search for the library file in the standard directory ( the directory where all the library file is store in your code::Blocks installation directory,most probably “CodeBlocks\MinGW\include” ) and you will get an error as the file cannot be found. ii )Using notepad or notepad++ you can also create .h and .cpp files . a)For header’s file add the three lines given below. 
#ifndef CLASSNAME_H ///CLASSNAME is the name of your class #define CLASSNAME_H ///CLASSNAME is the name of your class /* Declare your class or function name here */ #endif ///CLASSNAME I have already explain the importance of the three statements above.Now save the file in your main.cpp file directory. b ) For .cpp file. Open the notepad and include the header’s file name as given below and save it as classname.cpp in the directory of the main.cpp file. #include “classname.h” /* Methods or functions definition here */ Note that defining the functions or methods is allowed only inside .cpp file.However , you can define an inline function inside .h file because inline functions does not behave like the usual function and are meant only for code substitution.More about inline function will be discussed in Chapter 4 A program is given below using the inline method in classname.h header’s file. #define CLASSNAME_H ///CLASSNAME is the name of your class #include <iostream> using namespace std; class classname { public: classname( ); inline void func( ) { cout<<“func() is an inline method \n” ; } ~classname( ); } ; #endif ///CLASSNAME classname::classname( ) { ///Constructor } classname::~classname( ) { ///Destructor } #include < iostream > #include “classname.h” using namespace std ; int main( ) { classname cn ; cn.func( ) ; cin.get() ; return 0 ; }
https://corecplusplustutorial.com/library-how-it-differs-from-headers-file/
RedBlue – A Processing Library RedBlue is a library for creating Anaglyph Stereographic 3D images in Processing. RedBlue plugs right in as a renderer in size() and makes the digital world jump out and play! In order to join in the fun, you're going to need a hot pair of specs, as found in a box of sugary cereal from the year 1987. RedBlue is an extention of P3D, so don't try to make something really huge unless you're rendering to video. I'm quite sorry to let you know that because of this you also cannot use smooth(). Getting it running size(width, height, "megamu.redblue.RedBlue"); It's just one extra little detail in that friendly size() command that we all know and love. Add “megamu.redblue.RedBlue” as the third parameter, and things should just work! Put on your glasses and let 'er fly. import megamu.redblue.RedBlue; public void setup() { size(640, 480, "megamu.redblue.RedBlue"); noStroke(); } public void draw() { background(0); lights(); rotateY((float)millis()/2000); translate(200,0,0); box(50); } Advanced things Lets say for example that you want things to pop out more. You can get this effect, it's just a little weird looking in syntax. I should note that it's been set up to best reflect the distortion that your camera's “lens” produces and if you want things to pop out more, you should be looking at perspective(). setDivergence( float howMuch ) setDivergence() lets you set just how much pop your stereograph gets. The default is 1.0, setting this greater than one will have an increased effect, less than one for a decreased effect and setting this to 0.0 will totally flat. Negative if you're wearing your glasses upside-down. 
import megamu.redblue.*; void setup(){ size(216,216,"megamu.redblue.RedBlue"); } void draw(){ background(0); camera(); lights(); float alt = time(1234); float asm = time(4321); translate(width/2, height/2, -width/4); pushMatrix(); rotate( time(567), sin(alt)*sin(asm), cos(alt), sin(alt)*cos(asm) ); box(40); popMatrix(); for(int i=0; i<16; i++){ rotateX( 1.5*i ); pushMatrix(); rotateY( time(534+i*291) ); translate( map( sin(time(1200+300*i)), -1, 1, 30, 150 ), 0, 0 ); rotate( time(765), sin(alt)*sin(asm), cos(alt), sin(alt)*cos(asm) ); box(20); popMatrix(); } } void mouseMoved(){ float diverge = map(mouseX,0,width,0,2); ((RedBlue)g).setDivergence( diverge ); } float time(float scaleBy){ return (float)millis()/scaleBy; } Similiarily, you can also get back that divergence number in case you lost it somewhere. getDivergence() It works much like you would expect, and the syntax is again kinda wonky. import megamu.redblue.RedBlue; void setup(){ size(640, 480, "megamu.redblue.RedBlue"); } void draw(){ float diverge = ((RedBlue)g).getDivergence(); } RedBlue in OpenGL This is something that I'm interested in doing. I fought it for a bit and stand defeated for now. If someone is a GL guru and knows how to make this integrate well with Processing, please drop me email. When it works, it'll go live right here. Install megamu.redblue.RedBlue megamu.redblue.RedBlueOpenGL - Coming someday...
http://leebyron.com/else/redblue/
HTTP Multiple Choice Questions

1. The enterprise model represents
a. transactions for an enterprise.
b. the current state of the enterprise.
c. all enterprise processes.
d. the interaction between client and server.

2. The HyperText Transfer Protocol is
a. a data link protocol.
b. a stateless protocol.
c. based on TCP port number 60.
d. all of the above.

3. If we could examine the bits transmitted over a network that are sent using a TCP/IP socket, we would find
a. the data link protocol.
b. the internet protocol.
c. the transmission control protocol.
d. all of the above.
e. none of the above.

4. A network consisting of multiple computers connected to a switch would have a
a. star topology.
b. ring topology.
c. bus topology.
d. mesh topology.
e. none of the above.

5. The Internet Protocol
a. provides handshaking to provide error control.
b. connects to a port corresponding to a program.
c. uses network hardware addresses in the packet.
d. all of the above.
e. none of the above.

6. The carrier sense multiple access with collision detection protocol used for Ethernet
a. will retransmit a frame if a collision is detected during transmission.
b. has no limit to the number of retransmissions.
c. requires nodes to notify each other if they want to transmit.
d. all of the above.
e. none of the above.

7. The TCP protocol uses the acknowledgement number to
a. encrypt the data.
b. track the number of bytes transmitted.
c. control the window size.
d. none of the above.

8. The TCP port number used in the TCP protocol corresponds to
a. internet addresses.
b. hardware addresses.
c. protocols used by programs to transmit data.
d. physical connections.

9. The structure of the internet address is defined by the
a. first byte of the address.
b. second byte of the address.
c. data link layer.
d. network connection.

10. For each client connecting to a server using sockets,
a. the server must accept a connection before the client can communicate.
b. the client can only try to establish a connection once.
c. clients can request data but cannot write to the server.
d. none of the above.

11. In JAVA, for a class called Animals, a subclass called Birds would be defined by
a. public class Birds subclass of Animals {
       float wingSpan; // wing span in meters
       public float get_wingSpan() { return(wingSpan); }
   }
b. public class Birds extends Animals {
       float wingSpan; // wing span in meters
       public float get_wingSpan() { return(wingSpan); }
   }
c. public class Birds extension of Animals {
       float wingSpan; // wing span in meters
       public float get_wingSpan() { return(wingSpan); }
   }
d. none of the above.

12. A thread in a JAVA program
a. allows only one connection to a TCP/IP port.
b. can be used to make multiple functions run independent of each other.
c. is used to define the type of ADO connection for a server.
d. prevents multiple IP address conflicts.

13. The Data Definition Language is used to
a. specify the structure of a database.
b. define the data types in a database.
c. modify the current database structure.
d. none of the above.
e. all of the above.

14. In a relational database,
a. each row must be unique.
b. a table must appear in sorted order.
c. only one foreign key is allowed.
d. all of the above.

15.

16. When a method in JAVA throws an exception,
a. the program must stop.
b. the method must be enclosed in a try structure.
c. the method must be in a thread.
d. the method must be in the main class.

17. The Ethernet frames for two nodes have collided twice on the network for each node. The next possible number of slot times delayed at each node could be
a. 0.
b. 1.
c. 2.
d. 3.
e. all of the above.

18. The CRC portion of the Ethernet frame is used to
a. provide error checking.
b. store the hardware address.
c. encrypt data.
d. sequence frames.
e. none of the above.

19. By writing messages to a ServerSocket in JAVA,
a. a server can communicate with a client.
b. respond to client requests for a connection.
c. drop clients.
d. none of the above.

20. When a router selects a physical connection to send a packet to a destination it uses
a. the hardware address of the destination.
b. the IP address of the destination.
c. the tender address of the destination.
d. the CRC number of the destination.
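Questions 10 and 19 both turn on the accept-before-communicate pattern for TCP sockets: the listening socket only accepts connections, and data is exchanged over the per-client socket that accept() returns. A minimal sketch of that pattern (in Python rather than JAVA, purely for brevity):

```python
import socket
import threading

def echo_once(server_sock):
    conn, addr = server_sock.accept()  # block until a client connects
    data = conn.recv(1024)             # only now can we talk to this client
    conn.sendall(data)
    conn.close()

# Listening socket: bind to port 0 so the OS picks a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_once, args=(server,))
t.start()

# Client side: connect, send, and read the echo back.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()
```

Note that messages are never written to the listening socket itself; its only job is to hand out connected sockets.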
https://brainmass.com/computer-science/control-structures/http-multiple-choice-questions-42775
Natural Language Processing in a Kaggle Competition for Movie Reviews

Modeling, NLP/Text Analytics. Posted by Jesse Steinweg-Woods, November 22, 2017.

I decided to try playing around with a Kaggle competition. In this case, I entered the “When bag of words meets bags of popcorn” contest. This contest isn’t for money; it is just a way to learn about various machine learning approaches. The competition was trying to showcase Google’s Word2Vec. This essentially uses deep learning to find features in text that can be used to help in classification tasks. Specifically, in the case of this contest, the goal involves labeling the sentiment of a movie review from IMDB. Ratings were on a 10-point scale, and any review of 7 or greater was considered a positive movie review.

Originally, I was going to try out Word2Vec and train it on unlabeled reviews, but then one of the competitors pointed out that you could simply use a less complicated classifier to do this and still get a good result. I decided to take this basic inspiration and try a few various classifiers to see what I could come up with. The highest my score reached was 6th place back in December of 2014, but then people started using ensemble methods to combine various models together and get a perfect score after a lot of fine-tuning of the ensemble weights.

Hopefully, this post will help you understand some basic NLP (Natural Language Processing) techniques, along with some tips on using scikit-learn to make your classification models.

Cleaning the Reviews

The first thing we need to do is create a simple function that will clean the reviews into a format we can use. We just want the raw text, not all of the other associated HTML, symbols, or other junk. We will need a couple of very nice libraries for this task: BeautifulSoup for taking care of anything HTML related and re for regular expressions.

import re
from bs4 import BeautifulSoup

Now set up our function.
This will clean all of the reviews for us.

def review_to_wordlist(review):
    '''
    Meant for converting each of the IMDB reviews into a list of words.
    '''
    # First remove the HTML.
    review_text = BeautifulSoup(review).get_text()

    # Use regular expressions to only include words.
    review_text = re.sub("[^a-zA-Z]", " ", review_text)

    # Convert words to lower case and split them into separate words.
    words = review_text.lower().split()

    # Return a list of words.
    return(words)

Great! Now it is time to go ahead and load our data in. For this, pandas is definitely the library of choice. If you want to follow along with a downloaded version of the attached IPython notebook yourself, make sure you obtain the data from Kaggle. You will need a Kaggle account in order to access it.

import pandas as pd

train = pd.read_csv('labeledTrainData.tsv', header=0, delimiter="\t", quoting=3)
test = pd.read_csv('testData.tsv', header=0, delimiter="\t", quoting=3)
# Import both the training and test data.

Now it is time to get the labels from the training set for our reviews. That way, we can teach our classifier which reviews are positive vs. negative.

y_train = train['sentiment']

Now we need to clean both the train and test data to get it ready for the next part of our program.

traindata = []
for i in xrange(0, len(train['review'])):
    traindata.append(" ".join(review_to_wordlist(train['review'][i])))
testdata = []
for i in xrange(0, len(test['review'])):
    testdata.append(" ".join(review_to_wordlist(test['review'][i])))

TF-IDF Vectorization

The next thing we are going to do is make TF-IDF (term frequency-inverse document frequency) vectors of our reviews. In case you are not familiar with what this is doing, essentially we are going to evaluate how often a certain term occurs in a review, but normalize this somewhat by how many reviews that term also occurs in. Wikipedia has an explanation that is sufficient if you want further information.
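To make that weighting concrete, here is a toy TF-IDF computation using only the standard library. This is the plain textbook formula; the scikit-learn vectorizer used below adds smoothing and normalization on top (and we enable sublinear tf), so the exact numbers differ, but the intuition is the same: a term common to most reviews gets a low weight, a distinctive term a high one.

```python
import math
from collections import Counter

# Three tiny "reviews", already cleaned into word lists.
docs = [
    "great movie great acting".split(),
    "terrible movie".split(),
    "great fun".split(),
]

def tfidf(term, doc, all_docs):
    tf = Counter(doc)[term] / float(len(doc))    # term frequency within this doc
    df = sum(1 for d in all_docs if term in d)   # number of docs containing term
    idf = math.log(len(all_docs) / float(df))    # rarer term -> larger idf
    return tf * idf

# "movie" appears in 2 of 3 docs, "acting" in only 1, so within the
# first review "acting" ends up with the higher weight.
print(tfidf("acting", docs[0], docs) > tfidf("movie", docs[0], docs))
```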
This can be a great technique for helping to determine which words (or ngrams of words) will make good features to classify a review as positive or negative.

To do this, we are going to use the TF-IDF vectorizer from scikit-learn, then decide what settings to use. The documentation for the TFIDF class is available here. In the case of the example code on Kaggle, they decided to remove all stop words, along with ngrams up to a size of two (you could use more, but this will require a LOT of memory, so be careful which settings you use!)

from sklearn.feature_extraction.text import TfidfVectorizer as TFIV

tfv = TFIV(min_df=3, max_features=None, strip_accents='unicode',
           analyzer='word', token_pattern=r'\w{1,}', ngram_range=(1, 2),
           use_idf=1, smooth_idf=1, sublinear_tf=1, stop_words='english')

Now that we have the vectorization object, we need to run this on all of the data (both training and testing) to make sure it is applied to both datasets. This could take some time on your computer!

X_all = traindata + testdata  # Combine both to fit the TFIDF vectorization.
lentrain = len(traindata)

tfv.fit(X_all)  # This is the slow part!
X_all = tfv.transform(X_all)

X = X_all[:lentrain]  # Separate back into training and test sets.
X_test = X_all[lentrain:]

Making Our Classifiers

Because we are working with text data, and we just made feature vectors of every word (that isn’t a stop word, of course) in all of the reviews, we are going to have sparse matrices to deal with that are quite large in size. Just to show you what I mean, let’s examine the shape of our training set:

X.shape
(25000, 309798)

That means we have 25,000 training examples (or rows) and 309,798 features (or columns). We need something that is going to be somewhat computationally efficient given how many features we have. Using something like a random forest to classify would be unwieldy (plus random forests can’t work with sparse matrices anyway yet in scikit-learn).
That means we need something lightweight and fast that scales to many dimensions well. Some possible candidates are:

- Naive Bayes
- Logistic Regression
- SGD Classifier (utilizes Stochastic Gradient Descent for much faster runtime)

Let’s just try all three as submissions to Kaggle and see how they perform.

First up: Logistic Regression (see the scikit-learn documentation here). While in theory L1 regularization should work well because p >> n (many more features than training examples), I actually found through a lot of testing that L2 regularization got better results. You could set up your own trials using scikit-learn’s built-in GridSearch class, which makes things a lot easier to try. I found through my testing that using a parameter C of 30 got the best results.

from sklearn.linear_model import LogisticRegression as LR
from sklearn.grid_search import GridSearchCV

grid_values = {'C': [30]}  # Decide which settings you want for the grid search.

model_LR = GridSearchCV(LR(penalty='L2', dual=True, random_state=0),
                        grid_values, scoring='roc_auc', cv=20)
# Try to set the scoring on what the contest is asking for.
# The contest says scoring is for area under the ROC curve, so use this.

model_LR.fit(X, y_train)  # Fit the model.

GridSearchCV(cv=20,
       estimator=LogisticRegression(C=1.0, class_weight=None, dual=True,
           fit_intercept=True, intercept_scaling=1, penalty='L2',
           random_state=0, tol=0.0001),
       fit_params={}, iid=True, loss_func=None, n_jobs=1,
       param_grid={'C': [30]}, pre_dispatch='2*n_jobs', refit=True,
       score_func=None, scoring='roc_auc', verbose=0)

You can investigate which parameters did the best and what scores they received by looking at the model_LR object.
model_LR.grid_scores_
[mean: 0.96459, std: 0.00489, params: {'C': 30}]

model_LR.best_estimator_
LogisticRegression(C=30, class_weight=None, dual=True, fit_intercept=True,
          intercept_scaling=1, penalty='L2', random_state=0, tol=0.0001)

Feel free, if you have an interactive version of the notebook, to play around with various settings inside the grid_values object to optimize your ROC AUC score. Otherwise, let’s move on to the next classifier, Naive Bayes.

Unlike Logistic Regression, Naive Bayes doesn’t have a regularization parameter to tune. You just have to choose which “flavor” of Naive Bayes to use. According to the documentation on Naive Bayes from scikit-learn, Multinomial is our best version to use, since we no longer have just a 1 or 0 for a word feature: it has been normalized by TF-IDF, so our values will be BETWEEN 0 and 1 (most of the time, although having a few TF-IDF scores exceed 1 is technically possible). If we were just looking at word occurrence vectors (with no counting), Bernoulli would have been a better fit since it is based on binary values.

Let’s make our Multinomial Naive Bayes object, and train it.

from sklearn.naive_bayes import MultinomialNB as MNB

model_NB = MNB()
model_NB.fit(X, y_train)

MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)

Pretty fast, right? This speed comes at a price, however. Naive Bayes assumes all of your features are ENTIRELY independent from each other. In the case of word vectors, that seems like a somewhat reasonable assumption, but with the ngrams we included that probably isn’t always the case. Because of this, Naive Bayes tends to be less accurate than other classification algorithms, especially if you have a smaller number of training examples.

Why don’t we see how Naive Bayes does (at least in a 20-fold CV comparison) so we have a rough idea of how well it performs compared to our Logistic Regression classifier? You could use GridSearch again, but that seems like overkill.
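As an aside, the independence assumption just described is easy to see in code: multinomial Naive Bayes scores a document by summing per-word log-probabilities, which is multiplying P(word | class) terms as if each word were drawn independently. A toy stdlib illustration (my own example, not from the post), with Laplace smoothing like scikit-learn's default alpha=1.0:

```python
import math
from collections import Counter

# Toy training "reviews" per class, already tokenized.
pos_docs = [["great", "movie"], ["great", "acting"]]
neg_docs = [["terrible", "movie"], ["boring", "plot"]]
vocab = set(w for d in pos_docs + neg_docs for w in d)

def class_log_likelihood(doc, class_docs, alpha=1.0):
    """Sum of log P(word | class): the naive independence assumption."""
    counts = Counter(w for d in class_docs for w in d)
    total = sum(counts.values())
    return sum(math.log((counts[w] + alpha) / (total + alpha * len(vocab)))
               for w in doc)

score_pos = class_log_likelihood(["great", "plot"], pos_docs)
score_neg = class_log_likelihood(["great", "plot"], neg_docs)
print(score_pos > score_neg)  # True: "great" outweighs "plot" here
```

(A real classifier would also add the log class prior and pick the larger score.)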
There is a simpler method we can import from scikit-learn for this task.

from sklearn.cross_validation import cross_val_score
import numpy as np

print "20 Fold CV Score for Multinomial Naive Bayes: ", np.mean(cross_val_score(model_NB, X, y_train, cv=20, scoring='roc_auc'))
# This will give us a 20-fold cross validation score that looks at ROC_AUC
# so we can compare with Logistic Regression.

20 Fold CV Score for Multinomial Naive Bayes: 0.949631232

Well, it wasn’t quite as good as our well-tuned Logistic Regression classifier, but that is a pretty good score considering how little we had to do!

One last classifier to try is the SGD classifier, which comes in handy when you need speed on a really large number of training examples/features. Which machine learning algorithm it ends up using depends on what you set for the loss function. If we chose loss = 'log', it would essentially be identical to our previous logistic regression model. We want to try something different, but we also want a loss option that includes probabilities. We need those probabilities if we are going to be able to calculate the area under a ROC curve. Looking at the documentation, it seems a 'modified_huber' loss would do the trick! This will be a Support Vector Machine that uses a linear kernel.

from sklearn.linear_model import SGDClassifier as SGD

sgd_params = {'alpha': [0.00006, 0.00007, 0.00008, 0.0001, 0.0005]}  # Regularization parameter

model_SGD = GridSearchCV(SGD(random_state=0, shuffle=True, loss='modified_huber'),
                         sgd_params, scoring='roc_auc', cv=20)
# Find out which regularization parameter works the best.

model_SGD.fit(X, y_train)  # Fit the model.
GridSearchCV(cv=20,
       estimator=SGDClassifier(alpha=0.0001, class_weight=None, epsilon=0.1,
           eta0=0.0, fit_intercept=True, l1_ratio=0.15,
           learning_rate='optimal', loss='modified_huber', n_iter=5, n_jobs=1,
           penalty='l2', power_t=0.5, random_state=0, shuffle=True,
           verbose=0, warm_start=False),
       fit_params={}, iid=True, loss_func=None, n_jobs=1,
       param_grid={'alpha': [6e-05, 7e-05, 8e-05, 0.0001, 0.0005]},
       pre_dispatch='2*n_jobs', refit=True, score_func=None,
       scoring='roc_auc', verbose=0)

Again, similar to the Logistic Regression model, we can see which parameter did the best.

model_SGD.grid_scores_
[mean: 0.96477,
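As a footnote on the 'modified_huber' loss chosen above: it has a simple closed form. With labels y in {-1, +1} and margin z = y * score, the loss is the squared hinge max(0, 1 - z)**2 when z >= -1, and the linear -4z below that; this shape is what lets SGDClassifier produce probability estimates with this loss. A quick stdlib sketch (my own illustration, not code from the post):

```python
def modified_huber(y, score):
    """Modified Huber loss; y is a {-1, +1} label, score the raw margin."""
    z = y * score
    if z >= -1.0:
        return max(0.0, 1.0 - z) ** 2  # quadratic (squared-hinge) region
    return -4.0 * z                    # linear region for bad mistakes

# Correctly classified with a comfortable margin: zero loss.
print(modified_huber(+1, 2.0))   # 0.0
# On the decision boundary: squared hinge gives 1.0.
print(modified_huber(+1, 0.0))   # 1.0
# Badly misclassified: loss grows linearly, not quadratically.
print(modified_huber(+1, -2.0))  # 8.0
```

The linear tail makes the loss more robust to outliers than plain squared error.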
https://opendatascience.com/natural-language-processing-in-a-kaggle-competition-for-movie-reviews/
- NAME
- VERSION
- DESCRIPTION
- INCLUDED MODULES
- SOURCE
- BUGS
- SEE ALSO
- AUTHOR

NAME

Acme::CPANModules::PERLANCAR::RsyncEnhancements - List of my enhancements for rsync

VERSION

This document describes version 0.001 of Acme::CPANModules::PERLANCAR::RsyncEnhancements (from Perl distribution Acme-CPANModules-PERLANCAR-RsyncEnhancements), released on 2019-04-01.

DESCRIPTION

List of my enhancements for rsync. Rsync is one of my favorite tools in the whole wide world. There are a few things that I want rsync to do but it doesn't, so I made some enhancements for it. Currently all of the enhancements are in the form of wrappers, because that is the easiest and most straightforward approach, implementation-wise.

INCLUDED MODULES

Rsync is a one-way syncing tool, as two-way syncing can be much slower (because it requires recording states on both sides) or requires more specific tools (like a version control system). In simpler cases, when updates only happen on one side, you can perform two-way syncing with a simple check: the side that has the newest file "wins" (is synced to the "losing" side). This script checks that condition.

Rsync can resume a partial sync, but it does not automatically retry. An annoying thing is invoking an rsync command to sync a large tree, leaving the computer for the day, then returning the following day hoping the transfer would be completed, only to see that it failed early because of a network hiccup. This wrapper automatically retries rsync when there is a transfer error.

This wrapper adds some color to the rsync output, in particular coloring deletions red, so you can spot them more easily. Particularly handy if you use it with the -n (--dry-run) option.

SEE ALSO

- about the Acme::CPANModules namespace
- cpanmodules - CLI tool to let you browse/view the lists

AUTHOR

perlancar <perlancar@cpan.org>

This software is copyright (c) 2019 by perlancar@cpan.org.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
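The retry idea behind the second wrapper can be sketched in a few lines. This is shown in Python rather than Perl purely as an illustration, and the set of rsync exit codes treated as retryable below is my own choice of plausible transfer-error codes, not the module's actual list:

```python
import subprocess
import sys
import time

# Exit codes that plausibly indicate a transient transfer problem
# (socket I/O, protocol stream, partial transfer, timeout).
RETRYABLE = {10, 12, 23, 30}

def run_with_retries(cmd, max_tries=5, delay=1.0):
    """Run cmd, retrying while it exits with a retryable code."""
    for attempt in range(max_tries):
        rc = subprocess.call(cmd)
        if rc == 0 or rc not in RETRYABLE:
            return rc          # success, or a non-transient failure
        time.sleep(delay)
    return rc                  # still failing after max_tries attempts

# Demo with a trivially succeeding command instead of rsync itself.
rc = run_with_retries([sys.executable, "-c", "raise SystemExit(0)"], delay=0.0)
print(rc)  # 0
```

In real use the cmd would be the full rsync invocation, e.g. ["rsync", "-av", src, dest].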
https://metacpan.org/pod/release/PERLANCAR/Acme-CPANModules-PERLANCAR-RsyncEnhancements-0.001/lib/Acme/CPANModules/PERLANCAR/RsyncEnhancements.pm