Sometimes, it is the little things that end up being quite annoying, especially when those little tasks become tedious and you have to repeat them many times. Every developer has a task like that: one thing that frustrates them, or that they simply hate doing. (It sounds as if I am in a bad mood. No; I am not.) For me, that task is clearing TextBoxes after they have been populated, and setting the TextBoxes' default text. This usually involves creating a Sub procedure (or two) to clear all input fields on a form when, for example, a button has been clicked, and then setting each field's default text back to a descriptive prompt, such as: "Please enter name." Granted, if you do not have many input fields and forms, this isn't really bothersome, but it does get boring after a few hundred forms on a few hundred projects. Yes, I know there may be better ways to structure these little tasks, but hey, time is not always on your side. Today, I will show you a wonderful trick that I never knew existed until a few months ago. This trick will spare you from having to manually clear and re-populate all the text fields on your form (as described above). You will create a project in which you create a Class Library that makes use of a few Windows APIs to add a watermark feature to all your text controls. The watermark is placed in your text controls automatically, and once a text control receives the focus, the watermark automatically disappears. This saves a lot of tedious work. Our Projects I have decided to include both C# and VB.NET code in this article. Open Visual Studio and create either a C# Windows Forms project or a Visual Basic.NET Windows Forms project. After the project has been created, design your Form to resemble Figure 1. Figure 1: Design You will return to your Windows Forms project later. 
Add a new project to your solution (again, either C# or VB.NET) by clicking File, Add, New Project. In the New Project dialog box, select Class Library, as shown in Figure 2. Figure 2: Class Library You may name your Class Library anything you desire. I have named mine C_Watermark and VB_EditWatermark.

Code

Add the following namespaces to your Class Library.

C#

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

VB.NET

Imports System.Runtime.InteropServices
Imports System.Windows.Forms

These namespaces are necessary to enable us to communicate with Windows Forms objects (such as our TextBoxes and ComboBoxes) as well as to import the necessary Windows APIs into our project. Add the Windows APIs.

C#

[DllImport("user32.dll", CharSet = CharSet.Auto)]
private extern static Int32 SendMessage(
    IntPtr hWnd,
    int msg,
    int wParam,
    [MarshalAs(UnmanagedType.LPWStr)] string lParam);

[DllImport("user32", EntryPoint = "FindWindowExA",
    ExactSpelling = true, CharSet = CharSet.Ansi,
    SetLastError = true)]
private static extern IntPtr FindWindowEx(IntPtr hWnd1,
    IntPtr hWnd2, string lpsz1, string lpsz2);

private const int EM_SETCUEBANNER = 0x1501;

VB.NET

<DllImport("user32.dll", CharSet:=CharSet.Auto)>
Private Function FindWindowEx(
        ByVal hWnd1 As IntPtr,
        ByVal hWnd2 As IntPtr,
        ByVal lpsz1 As String,
        ByVal lpsz2 As String) As IntPtr
End Function

<DllImport("user32.dll", CharSet:=CharSet.Auto)>
Private Function SendMessage(
        ByVal hWnd As IntPtr,
        ByVal msg As Integer,
        ByVal wParam As Integer,
        <MarshalAs(UnmanagedType.LPWStr)> ByVal lParam As String) As Int32
End Function

Private Const EM_SETCUEBANNER As Integer = &H1501

The FindWindowEx API function retrieves a handle to a window whose class name and window name match the supplied parameters. The SendMessage API sends the given message (in this case, the EM_SETCUEBANNER message) to a window. Add the SetWatermark procedure. 
C#

public static void SetWatermark(this Control ctl, string text)
{
    if (ctl is ComboBox)
    {
        IntPtr Edit_hWnd = FindWindowEx(ctl.Handle, IntPtr.Zero,
            "Edit", null);

        if (!(Edit_hWnd == IntPtr.Zero))
        {
            SendMessage(Edit_hWnd, EM_SETCUEBANNER, 0, text);
        }
    }
    else if (ctl is TextBox)
    {
        SendMessage(ctl.Handle, EM_SETCUEBANNER, 0, text);
    }
}

VB.NET

<System.Diagnostics.DebuggerStepThrough()>
<System.Runtime.CompilerServices.Extension()>
Public Sub SetWatermark(ByVal ctl As Control, _
        ByVal text As String)
    If TypeOf ctl Is ComboBox Then
        Dim Edit_hWnd As IntPtr = FindWindowEx(ctl.Handle, _
            IntPtr.Zero, "Edit", Nothing)

        If Not Edit_hWnd = IntPtr.Zero Then
            SendMessage(Edit_hWnd, EM_SETCUEBANNER, 0, text)
        End If
    ElseIf TypeOf ctl Is TextBox Then
        SendMessage(ctl.Handle, EM_SETCUEBANNER, 0, text)
    End If
End Sub

The SetWatermark procedure determines the type of control that is being passed to it. If it is either a ComboBox or a TextBox, it sets the watermark by using the supplied text. (Remember that a C# extension method must be declared inside a static class, and the VB.NET version inside a Module.) Build your Class Library project. After the build, there should be no errors; you now need to add a Reference to this Class Library in your Windows Forms project. Do this by clicking Project, Add Reference…, Solution, and ticking the box next to your Class Library's name (see Figure 3). Figure 3: Add Reference to Library Add the Class Library namespace to your Form's code.

C#

using C_Watermark;

VB.NET

Imports VB_EditWatermark

This simply links the Form to the Library. In the Form's Load event, type the following: txtFirstName.Set You will notice whilst typing that Visual Studio's AutoComplete shows SetWatermark as an added TextBox property, as shown in Figure 4. Figure 4: Property Add the rest of the code:

C#

txtFirstName.SetWatermark("Enter Name");
cboTitle.SetWatermark("Choose Title");

VB.NET

txtFirstName.SetWatermark("Enter Name")
cboTitle.SetWatermark("Choose Title")

Run your application. 
You will notice that the prompts show only when a box doesn't have the focus, and disappear as soon as it does. Figure 5: Running The C# source code and VB.NET source code are available on GitHub. Conclusion The little things matter, not only on the end user's screen, but in the coding world. This feature has saved me tons of hours and put me in a better mood.
https://mobile.codeguru.com/csharp/.net/net_general/watermarking-edit-controls-using-c-or-visual-basic.html
On Fri, 6 Jun 2008, Alexey Dobriyan wrote:
> On Thu, Jun 05, 2008 at 12:25:06PM +0200, Armin Schindler wrote:
>> On Sat, 31 May 2008, Alexey Dobriyan wrote:
>>> 1. creating proc entry and not saving pointer to PDE and checking it
>>> is not going to work.
>>
>> I don't know where you found this. I have looked even in older versions, but
>> the pointer divas_proc_entry is set by proc_create(). The patch to
>> divasproc.c is wrong, it exists from the beginning of the driver.
>> (2.6.25.4 doesn't contain the bug you describe).
>
> Check mainline kernel, namely, 2.6.26-rc5.

Ah, okay. So someone removed the pointer between 2.6.25 and 2.6.26 then.
In that case your patch is correct of course.

Armin

>>> --- a/drivers/isdn/hardware/eicon/divasproc.c
>>> +++ b/drivers/isdn/hardware/eicon/divasproc.c
>>> @@ -125,8 +125,8 @@ static const struct file_operations divas_fops = {
>>>
>>>  int create_divas_proc(void)
>>>  {
>>> -	proc_create(divas_proc_name, S_IFREG | S_IRUGO, proc_net_eicon,
>>> -		    &divas_fops);
>>> +	divas_proc_entry = proc_create(divas_proc_name, S_IFREG | S_IRUGO,
>>> +				       proc_net_eicon, &divas_fops);
>>>  	if (!divas_proc_entry)
>>>  		return (0);
http://lkml.org/lkml/2008/6/6/43
Usually VS saves connection strings in a .config file or a .settings file, which is XML formatted. To do it yourself: VS main menu, Project -> Add New Item -> .settings file. Add your items in this settings file; you can now modify your settings without modifying your code. Avoid using INI files; XML is better.

Thanks! Is there a way that I can do it using my program in VB? Like having a form where I can edit all the settings for my program in XML?

Sure, you've got an XML file, and you also have the System.Xml namespace, which contains a lot of classes to add, remove, and edit nodes in XML files...

I'm kind of lost there.. Can you teach me how? A little sample code will do, if you don't mind..

Create a new dialog form and use it to manipulate configuration items. On form load, add the items from your program settings:

Private Sub frmConfig_Load(ByVal sender As Object, _
        ByVal e As System.EventArgs) Handles Me.Load
    '
    ' Set the default values into the controls via the program settings
    '
    bLoading = True
    Me.txtServer.Text = My.Settings.Server
    Me.txtDB.Text = My.Settings.Database
    bLoading = False
End Sub

Set flags if they change any of the values and then, in the OK button's Click handler, test for changes and save them if they exist:

Private Sub OK_Button_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles OK_Button.Click
    '
    ' See if we have anything to save
    '
    If Me.bServerChanged Then
        My.Settings.Server = Me.txtServer.Text
    End If
    If Me.bDBChanged Then
        My.Settings.Database = Me.txtDB.Text
    End If
    '
    ' If anything changed, save the changes to the settings file
    '
    If Me.bServerChanged Or Me.bDBChanged Then
        My.Settings.Save()
    End If
    Me.DialogResult = System.Windows.Forms.DialogResult.OK
    Me.Close()
End Sub

Please, tell me exactly what you want to do??

I want to have a configuration settings editor for my program with MySQL. 
I want the users to be able to switch from one server to the other using a form. If I'll be making an installer for my project, where would the config go? Can it still be read by the application? Or will I specify a path where the application will read its settings..

From the VS main menu, Project -> Add New Item, select .settings file. Once you've added it, a designer (tabular) form allows you to add variables; add ServerName and assign it a default value. In your code, keep the connection string static except for the server name, which comes dynamically from the settings.
https://www.daniweb.com/programming/software-development/threads/112096/xml-config-or-an-ini-config
Though Herb Sutter has been a C++ guru for decades, his name is most often associated these days with two words: "free lunch." That's thanks to his oft-cited 2005 paper, "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software." The author, Microsoft software architect and chair of the ISO C++ Standards Committee, took time recently to speak with Go Parallel about the multi-core chip revolution and how it's affecting software developers. Let's start with the "you say tomato, I say 'to-mah-to'" question: Are the words "concurrent" and "parallel" interchangeable when it comes to software? I personally use those terms synonymously; the meanings are essentially the same. Now, historically, computer science has often formally distinguished between "parallel" and "concurrent" as tags for two different kinds of work: There, the idea is that "concurrency" means two or more threads or processes that can operate asynchronously; "parallelism," on the other hand, implies scalability, or how to use 100 cores to get something done faster. My first Effective Concurrency column, "The Pillars of Concurrency," distinguishes these as two of the three major pillars of concurrency requirements, and calls them "isolation" and "scalability," which I think better help convey what they're about than somewhat artificially distinguishing between "concurrency" and "parallelism." Your mileage may vary. But most people don't draw that strong a distinction between the two words now. "Concurrency" is the general word for multithreading using multiple cores, whether it's to do work asynchronously or to get an answer faster or both. As a verb, it's more convenient to say "parallelize." I use them pretty interchangeably. How do you compare Threading Building Blocks (TBB) with the .NET Task Parallel Library? They're both about "How do I do work on collections of things to use many cores?" 
They're targeting the same second "pillar" of concurrency: throughput and scalability, using more cores to get the answer faster. OpenMP also targets the same pillar; it has already broken a lot of ground. Can't you drop these algorithms into the C++ standard? Even with parallel for, there are lots of design options. First, side effects matter: You can't just automatically parallelize every for loop in existing code without knowing about the loop body's side effects. That's why OpenMP lets you opt in by spelling it with a #pragma omp parallel. Similarly, you can't just automatically parallelize every existing call to for_each. So a parallel version of for_each has to be distinct somehow, whether using a different name or putting the parallel algorithm into a different namespace, so that users can opt into the parallel version. Second, the semantics of the parallel version of an algorithm may be different from the sequential version, in a number of ways that lead to a lot of design choices. A simple example is that a parallel find may return "a" match rather than "the first" match closest to the front of the container. A more subtle one is that when the sequential version returns "the first" match, you can then find the next match by starting at that position and continuing, but there's no directly corresponding "okay, resume here and find the next one" concept for a parallel find. Because the semantics of the parallel version of the algorithm can be different in several ways, each of which is a design choice or design dimension, I think we need more experience to be sure that a given set of design choices is best before it's ready to bake into a standard. Third, a major design-choice area is that you can turn the knob on usability vs. safety. 
For example, I mentioned side effects a minute ago; you can try to prevent mistakes and make a parallel loop harder to use, or on the other end of the scale you can do what OpenMP does and trust the programmer completely, making it easy to use but also easy to cut yourself. Generally, the safer you make something, the harder it is to use; in this case, the harder it is to express what you want to do easily and/or efficiently. Finding the right balance here will take experience. Have you seen much response to Microsoft's Task Parallel Library? TPL has been out in the field about two months, and I haven't seen the feedback yet.
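The "a match" versus "the first match" distinction Sutter draws can be illustrated with a small sketch. The function name `parallel_find_any` is hypothetical, not from any standard library: two threads each scan half of a vector and report whichever matching index a worker happens to hit, which need not be the leftmost one.

```cpp
#include <cassert>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical sketch of a parallel find: two threads each scan half of
// the vector and record *a* matching index. Unlike std::find, nothing
// guarantees the reported match is the first (leftmost) one.
int parallel_find_any(const std::vector<int>& v, int target) {
    int result = -1;  // -1 means "not found"
    std::mutex m;
    auto scan = [&](std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i) {
            if (v[i] == target) {
                std::lock_guard<std::mutex> lock(m);
                result = static_cast<int>(i);
                return;
            }
        }
    };
    std::size_t mid = v.size() / 2;
    std::thread t1(scan, 0, mid);
    std::thread t2(scan, mid, v.size());
    t1.join();
    t2.join();
    return result;
}
```

With duplicates in both halves, either index may come back, and a caller cannot "resume from here and find the next one" the way the sequential idiom allows; that is exactly the semantic design choice Sutter says needs more field experience before standardization.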
http://www.devx.com/go-parallel/Article/37573
Day 1 Keynote - Bjarne Stroustrup: C++11 Style - Date: February 2, 2012 from 9:30AM to 11:00AM - Day 1 - Speakers: Bjarne Stroustrup - 205,916 Views - 72 Comments Right click "Save as…" for the slides. Follow the Discussion Good!! Always eager to learn from the best. I'm definitely looking forward to watching Bjarne's interesting talk and the other GoingNative 2012 sessions! Looking forward to this exciting session, Rocks!! Looking forward to all the sessions. I am based in Manchester UK, must have checked the time in Redmond USA at least 20 times today :) Can't wait. We are gonna party like it is C++98 :P Where are the live feed links? Awesome talk! Where can I access the recorded keynote? You'll be able to access the recorded keynote and indeed all the sessions right here. Charles said it would take about a day to do the encoding, and then the downloadable video files will be available. Where can I download yesterday's videos? It was a great lecture! But I haven't had the time to watch the other speakers. I'll download the 1080p version of the talks, since 1080p makes reading the code a much nicer experience. EDIT: Charles, would it be possible to also publish the PowerPoint or PDF slides? @undefined: Yes, where are the recorded sessions? Had to work during almost all the talks, so I'm looking forward to hearing all these presentations - saw a bit of the first day live but will enjoy the recordings soon. BTW: great selection of speakers: Bjarne, Sutter, Alexandrescu,… @STL: great to hear that range-based for loops will be in VC11... though I'm a std::for_each guy, so that's not that big of a deal for me. PS: looking forward to std::thread support in VC… The range-based for-loop is significantly less verbose than std::for_each() (my least favorite STL algorithm). 
But using more specific STL algorithms is always a good idea. The first qsort example seems to be broken. I guess it goes to show how bad the API really is.

void f(char *arr, int m, ...)
{
    qsort(arr, m, sizeof(char*), cmpstringp);
}

He probably wanted a char *arr[]. Great talk so far. BTW, this website should support Unicode in names! Thanks. Fixed for future uses. A great talk! I believe slide 38 should read shared_ptr<Gadget> p( new Gadget{n} ); instead of shared_ptr<Gadget> p = new Gadget{n}; The same goes for slide 39. I thought the talk on C++11 was great. Helpful. Can someone enlighten me about the syntax on pages 62 and 63 of the slides:

double* f(const vector<double>& v);   // read from v, return result
double* g(const vector<double>& v);   // read from v, return result

void user(const vector<double>& some_vec)   // note: const
{
    double res1, res2;
    thread t1 {[&]{ res1 = f(some_vec); }};
    thread t2 {[&]{ res2 = g(some_vec); }};
    // ...
    t1.join();
    t2.join();
    cout << res1 << ' ' << res2 << '\n';
}

Isn't there a type mismatch between f()'s return and res1? I took some sentences from Bjarne's description because I am trying to find resources, materials, and tutorials that show how to achieve this, for the previous standard or for C++11. Does anybody know a good reference where I can find this now? Thanks. @undefined: Slides will be released with each session video! Like this one. Oh heck, I give up. Cool. Now we can see the invisible graph. Yes, having to explain an invisible graph was a bit of a challenge :-) Thanks for the comments; corrections will be applied to future versions of the talk. It is interesting to ask whether the software that was supposed to show the graphs, and the graph itself (i.e., the invisible one), was written in C++. I also noticed some problems with fonts on one or two slides. In my experience these are typical problems in all kinds of scientific presentations. 
It is hard to believe that it is so difficult to avoid those problems with current technology. The same presentation on one computer looks very different on a different computer only because some other fonts are installed on that computer. In theory it is possible to embed the fonts with the presentation; unfortunately, this method does not work well in many cases (my own experience). The only real solution is to transform the presentation into PDF or use some other software (or use your own computer, but in many cases that is not possible). I saw these problems hundreds of times at all kinds of conferences, and it looks like nobody on the MS Office team cares about that (since the existence of MS Office). The question about an easier way to declare getters and setters: anyone else think that Bjarne was just short of saying "I don't want that crap in my language"? =) Nice. C++ may get bashed a lot, but its creator can certainly deliver a coherent presentation. Shouldn't there at least be a performance difference between

sum = 0;
for (vector<int>::size_type i = 0; i < v.size(); ++i) { sum += v[i]; }

and

sum = 0;
for_each(v.begin(), v.end(), [&sum](int x){ sum += x; });

since the first is calling size() during each loop iteration, while the second (I believe) wouldn't constantly be rechecking the size, similarly to defining a vector<int>::size_type end = v.size(); and checking i < end? I am also curious why there aren't at least some run times or something to back up the claim that there is no discernible difference between "several systems and several compilers". On my question about getters and setters in the video, I guess that these should be avoided; public data and object members should simply be declared public, despite what I've seen to be a common practice on many projects, and which seems to be promoted in many object-oriented languages. Ideally, there would be some way to overload the setting of an object or data "property" if the logic needs to be changed or limits imposed. 
I have created a Property template class in the past, as Bjarne suggested; however, there is no elegant way for the parent class to overload the setter in the aggregated Property<T> member, and the syntax of accessing members of the property too often becomes property.get().member(), rather than property.member(), which is what you want to write. From a language viewpoint, perhaps something like an overloadable "member access operator" would allow library writers to implement a setter or getter later if needed without changing user code. But without this, we suggest that if we need to change the logic around setting or getting a member value, make the property private and recompile - we can easily find and update all the usages of the data member to use the new getter or setter. So awesome to have Bjarne posting on C9! Thank you, sir. @undefined: Darren, here are my thoughts on your question. If you created a wrapper class for each public data member, you could overload the assignment operator to perform bounds checking (as well as assignment) and perhaps throw an exception if necessary. That would solve the problem of assigning to a property without a setter. Of course, you would also have to overload all other meaningful operators for that property, such as the boolean operators. You would have to repeat all this for each property, which in the end may be more trouble than it's worth. I can't really think of another way to do it, but I also haven't touched C++ in a while, so I could be wrong. Anyway, good luck. I would really love to hear a talk, or read a paper, from Bjarne that discusses when to use OOP, and when to choose functional or type-driven programming. For me, finding a balance has always been the most difficult part of software development. There just isn't one right way, but I'd love to hear his thoughts. If anyone has any links to anything related, that would be wonderful. 
Nice. For those who are also members of the C++ Software Developers group on LinkedIn, I have started a discussion about what I believe are the most important features of C++11, and would love to see feedback and examples of good style that people would like to contribute. I watched the video a few times. I feel like we need some fresh minds in defining what programming should look like, replacing Bjarne. They had their era; time to move on. My biggest problem with C++ (in big projects) is the #includes, which square the amount of source to compile (headers are compiled separately for each compilation unit). Look how long it takes to compile Firefox or KDE :-( I think this is where we pay the cost for [over]using templates and/or inline functions. Maybe there is something that could be fixed here? Maybe if we break backward compatibility (drop the preprocessor)? It's a pity that those problems were not mentioned here. @pafinde: That's one of the things that modules seek to solve. You can see the "invisible" graph in my posted slides. I wrote a paper for IEEE Computer Magazine with very similar examples. See the January 2012 issue of Computer or my publications page. From a C++ application development point of view, is there any place for compiler-generated iterators in C++ (C# IEnumerable)? It seems they could be implemented with zero overhead, like lambdas are. I don't see any difference between the example and the 'better' example. Both are understandable only if you use declarative parameter names, as is done in the one, which is equally understandable for me if you write the other? @bog: Thanks for this detailed answer to my comment. I don't want to start nit-picking here. For sure, 99.9999% of all programmers (me included) would use both corner points to define a rectangle. But you could also define it by its center point and any other point, or using the second constructor with a Point top_left and a Box_hw. 
Even if I were 99% sure that I knew what was meant, if I read it I would take a look at the implementation or read the docs to be sure. So for me, using declarative parameter names greatly improves the readability of interfaces. After a night thinking about this issue, I have to correct my first comment. Within the meaning of this excellent talk, the examples using Point are the better ones. I was just misled by the different notations for good examples written with parameters and bad examples written without. The Point example is better because it implies the possibility of using units, as is done in the Speed example. The general point (sic) about the Point example is that a sequence of arguments of the same type is prone to transposition of argument values. I consider it a well-established fact that this is a significant source of errors. The implication is that we need to look for remedies. Using a more expressive and specific set of types is one approach. A very good source of information.. Bjarne sir, I truly enjoyed, appreciated, and was influenced by your presentation. One thing that comes to mind is the ability of C++ to write performant, yet secure, code. I'm confused about one thing. I can understand where he says shared_ptr and unique_ptr, but where he says "why use a pointer?" and then shows this code: I'm pretty sure C++ wouldn't accept that? I've just run a test, and you can scope a variable like in Java now ^_^ it would be like this. It's amazing to see how C++ is catching up with .NET. I've always been a C++ guy. Thanks again. Now? That last f() has worked for about two decades! It's pure C++98. The "Gadget g {n};" in the original example simply used the C++11 uniform initializer syntax, but is otherwise identical. Wow, I must be honoured to get a reply from the man himself. Thanks for the heads-up, Bjarne; C++ is really moving up. So I can just pass an object by value by using rvalue references. That is soo cool. 
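The remedy mentioned just above, a more expressive and specific set of types, can be sketched along the lines of the Unit/Speed example from the talk. This is my simplified reconstruction, not the code from the slides: unit exponents live in the type, so mixing up quantities becomes a compile-time error.

```cpp
#include <cassert>

// Simplified reconstruction of the Unit idea from the talk: exponents for
// metres, kilograms, and seconds are carried in the type, so combining
// quantities with the wrong units fails to compile.
template<int M, int KG, int S>
struct Unit {
    enum { m = M, kg = KG, s = S };
};

template<class U>
struct Value {
    double val;
    explicit Value(double v) : val(v) {}
};

// Dividing values subtracts unit exponents: metres / seconds = m/s.
template<class U1, class U2>
Value<Unit<U1::m - U2::m, U1::kg - U2::kg, U1::s - U2::s>>
operator/(Value<U1> x, Value<U2> y) {
    return Value<Unit<U1::m - U2::m, U1::kg - U2::kg, U1::s - U2::s>>(
        x.val / y.val);
}

using Metres  = Value<Unit<1, 0, 0>>;   // template aliases need C++11
using Seconds = Value<Unit<0, 0, 1>>;
using Speed   = Value<Unit<1, 0, -1>>;  // m/s
```

Here `Speed sp = Metres(100) / Seconds(10);` compiles, while swapping the operands (`Seconds(10) / Metres(100)`) is rejected at compile time because the resulting exponents no longer match Speed's.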
Tom So, why can't I read an unsigned char from an input stream? When I try to read from "0", I get 060 and not 0 as expected. And when I push (unsigned char) 0, I get "\0", not "0", in the output. (1) Huh?

unsigned char c;
while (cin >> c) cout << c << '\n';

gives exactly what I expect. (2) The value of (unsigned char)0 *is* 0; not the value of the character '0'. Great presentation, Bjarne. Honestly, I have checked it out a few times already at the expense of not having watched the other videocasts yet... Too bad the vector vs. linked-list comparison kind of fell short. In spite of the graph mishap, I got inspired and tested it on a few different machines. For small amounts it was virtually the same, but as the sets got larger there was a huge difference. It was fun to see - especially since I remember discussing this a couple of years ago (then I failed to see the larger picture). Thanks again for the presentation! To the guy asking about getters and setters: using different get and set functions per class, while still keeping function inlining, this should work.

template<class OutType, class StoreType, class Controller>
class Property {
private:
    StoreType data;
public:
    operator OutType() { return Controller::get(data); }
    OutType operator=(OutType a)
    {
        Controller::set(data, a);
        return Controller::get(data);
    }
};

class HPController {
public:
    static int get(int &a) { return a; }
    static void set(int &a, int &b) { a = b; }
};

class Man {
public:
    Property<int, int, HPController> HP;
};

void PropertyTest()
{
    Man man;
    man.HP = 7;
    cout << man.HP << "\n";
}

Thanks Bjarne!!! I knew I wasn't stupid for wanting readable interfaces!! Hehe @Ray: The problem with that approach comes when your 'controller' needs to do something a bit more complex and needs the target object's state to decide what to do, or needs to notify the target object to do something else upon a change. In my experience, I've found those to be the primary cases where I actually needed getters and setters. 
So then, in that case, the Property class template needs to change to be able to contain a controller object, which then holds a reference to the target object ('Man', in this case), and the Controller then cannot use static methods. But then there is the added bloat. So I like Darren's new proposal best: if they are logically publicly available properties, just leave them as public member variables. In the future, when you realize that you need something more complex, either make them private and add getters and setters and modify the client code, or make a decorator that allows the assignment operator to work with them, which calls the real getters and setters behind the scenes. The truth is, IOStreams treat signed/unsigned chars as characters and not as numbers. Whether this is something to be expected, I don't know. Didn't know I could watch this on the internet. I will definitely watch this as soon as I get off. My suggestion is that when you write a char (short for "character") to a character I/O stream, you should expect to see that character on the output device. It takes quite some alternative learning to expect otherwise. PS: The "c" in cout stands for "character". PPS: "If everything else fails, read the manual." I was trying to use his units code that was on the slide around 24:00, but the syntax he uses for the following doesn't seem to work with gcc 4.6.2 and -std=c++0x:

using Speed = Value<Unit<1,0,-1>>;

I've never seen this use of "using" before. Anybody know what is up with this? @Luke: gcc 4.7 introduces support for template aliases. I have only 4.6.1 installed... I'll need to upgrade, I guess. With gcc 4.7, the following works:

Speed sp1 = Value<Unit<1,0,-1>>(100);   // 100 meters / second

But this does not (operator/ is not defined):

Speed sp1 = Value<Unit<1,0,0>>(100) / Value<Unit<0,0,1>>(1);

I guess he left out the part which would define all the arithmetic operators. 
Yes, about two pages of things like this:

template<class U1, class U2>
Value<typename Unit_plus<U1,U2>::type>
operator*(Value<U1> x, Value<U2> y)
{
    return Value<typename Unit_plus<U1,U2>::type>(x.val*y.val);
}

and this:

template<class U1, class U2>
struct Unit_plus {
    typedef Unit<U1::m+U2::m, U1::kg+U2::kg, U1::s+U2::s> type;
};

You can make that prettier in C++11, but I was using an old compiler (then), so I used old-fashioned, but effective, metaprogramming. I like this one. !bind comes from boost. "Using !bind(pred, _1) in the first call to stable_partition() in the definition of the gather() function template (around minute 56 of the video) won't compile, will it? (Unless the wrapper object returned from bind() overloads operator!, which I don't think it does.)" For decades, code was easy to learn at first, because you needed only a few terms for programming, and you could do everything with those. Now, you must use interfaces, specific typedefs, classes existing globally in a namespace (like .NET), and you must know what to name things. Yes, a box, that's OK; this is a simple box. But Box_hw? How do you spell it? You now need to know not only what you want to do, but what to name it! Is it more difficult for programmers? No. Is it more difficult to remember the names? No. It is always difficult for beginners to remember. But if you are a beginner engineer, you just need to remember all the classes. For example, even Google couldn't help you if you want a bicycle and you don't know how to spell "bicycle". Now, the differences between engineers: do only a few people know all the classes? Well, that's not very realistic. Second, I love that I can be a C++ programmer since I know how to program in Java. That is a good spirit. Third, I loved it when he said "who does the delete?". Many bugs come from bad documentation or an abandoned program. And what about copy? Not copy? Well, you can choose. 
You need to choose, and you need to say in the documentation whether it can be copied or not (thread safe?). After that, he explains that you should have used a vector and not a list to insert your data incrementally, even though the "true OO" type is a linked list. That is the difference, and it is time-consuming with .NET List insertion. But it's implementation dependent; you have to know the implementation now. Low level should not be used: use standard templates instead. That's very C++!
http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Keynote-Bjarne-Stroustrup-Cpp11-Style?format=smooth
Evangelism - Developer and Platform Evangelism

Eric points out some of the new syntax changes we've made in C# 2.0 for FCLC compliance. These changes could break some of your existing code, so you should make sure to verify 1.1 code when moving to VS 2005 (Whidbey). You should also be aware of the new BCL changes that will help developers write decent and non-offensive code, including changes to System.String and a new exception class.

System.String

To check whether a string contains indecent content, developers can now call the IsIndecent method, which returns true if the string is indecent. Below is the copy/paste from Object Browser on one of our latest VS builds (40326):

public static bool IsIndecent(string str)
Member of System.String

Summary: Determines whether a specified System.String object is indecent.
Parameters: str: A System.String.
Return Values: true if the value of str is indecent; otherwise, false.

Example:

if (String.IsIndecent(text))
{
    // Do something here
}

We have also added the ability to estimate the cost of an FCLC fine based on the contents of a string using a new method, EstimateFine:

public static float EstimateFine(string str)

Provides an estimate of an indecency fine based on string contents using floating-point precision. Returns a single-precision floating point estimate of FCLC fines.

float f = String.EstimateFine(text);

Note – We've also added a culture-specific overload to EstimateFine which enables you to estimate indecency fines in other cultures. Currently this returns 0.0 except when the culture is set to 'EN-US'.
public static float EstimateFine(string str, System.Globalization.CultureInfo culture)

We've also changed the behavior of strings so that a string marked with a public accessor (e.g. public string s;) that contains indecent content will now raise a System.IndecentException, as outlined below:

System.IndecentException

public class IndecentException : System.Exception
Member of System

The exception that is thrown when the content of a string with a public access modifier is indecent.

Other keywords

We are still debating this, so any feedback is appreciated. Are any of the language constructs, classes, properties, or methods listed below considered offensive or indecent? Your feedback, as always, is appreciated.
http://blogs.msdn.com/danielfe/archive/2004/04/01/105653.aspx
This component is very cool. But I tested it with a remote store and it works badly. Can you help me assign default values with a remote store? Thanks

With a standard combo, it's only ever 1 value that's queried, but with this component, if you performed a setValue call with 10 values, the records each have to be pulled back from the server. I have some ideas about handling remote, I just don't know if it will work in practice yet, so watch this space. Dan

very nice - thanks:

Code:
var usedRecords = this.usedRecords;
var valueField = this.valueField;
this.store.on('load', function(store, records) {
    store.filterBy(function(record, id) {
        return (!usedRecords.containsKey(record.get(valueField)));
    }, this);
});

Thanks for posting again - there will be a bit more to it than that, though, as I need to accommodate the preventDuplicates and removeValuesFromStore config options, a shared data store, and the ability for users to enter new data (when the allowAddNewData config is true). I did some testing yesterday and I know how I'll accommodate this, but it will be a week or 2 until I release a new version (there are some other parts of the code that need refactoring too). I'm going away for a few days R&R today, but will post again here when I've made some progress. Dan

Excellent work! I'm looking forward to seeing it work for remote stores. Thumbs up!

I set something up for remote validation (from the original facebook thread), but it got really annoying doing a 1-field lookup each time. So I was thinking this should only do a remote lookup on all fields onBlur. That way it doesn't send too many HTTP requests and the user doesn't see the field jump around (I had the displayFieldTpl displaying something slightly different than the input). A couple of other things that I added to the original that would be nice in here as well:
1. Pattern matching (middle of the word, etc.)
2. Comma separated list of values i.e.
I'd like to be able to paste "california,delaware,texas" in the box and it should validate all three and turn them into boxes. I didn't paste this code, but I do have something if you'd like to see it. Maybe you can use some of it, as your code looks a little cleaner than mine anyway.
3. An option that doesn't turn the text into a box until onBlur happens. I've had requests that it's somewhat annoying for the displayFieldTpl to change the field and expand it when they're still trying to type in the box (i.e. they type in an email <tad> and it displays "Tad Johnston (tad)"). It would be awesome to get something like this as an option. Hopefully all this makes sense... let me know if you want some of my awesome code to pick apart.

Wowww, very interesting. Is it possible to have it for UTF8 also? I have some Unicode characters to use (like Chinese, Farsi, etc.). Thanks in advance.
http://www.sencha.com/forum/showthread.php?69090-Ext.ux.form.SuperBoxSelect-as-seen-on-facebook-and-hotmail&p=333994&viewfull=1
Graphics

ROOT provides powerful graphics capabilities for displaying and interacting with graphical objects like plots, histograms, 2D and 3D graphical objects, etc. Here the basic functions and principles are presented, which can be applied to graphs (→ see Graphs) and histograms (→ see Histograms).

The basic whiteboard on which an object is drawn is called in ROOT a canvas (class TCanvas). A canvas is an area mapped to a window directly under the control of the display manager. A canvas contains one or more independent graphical areas: the pads (class TPad). A pad is a graphical entity that contains graphical objects. A pad can contain other pads (unlimited pad hierarchy). A pad is a linked list of primitives of any type (graphs, histograms, shapes, tracks, etc.).

Adding an element to a pad is done by the Draw() method of each class. Painting a pad is done by the Paint() method of each object in the list of primitives.

Graphics tutorials

Graphic classes

ROOT provides numerous graphic classes, of which the following are among the most used:

Working with graphics

ROOT offers many possibilities to work with graphics, for example:
- drawing objects
- drawing objects with special characters in their names
- using the context menu for manipulating objects
- using the Graphics Editor for objects

Drawing objects

The TObject class has the virtual method Draw() by which objects can be "drawn". The object is "drawn" on a canvas (TCanvas class) that contains one or more pads (TPad class). When an object is drawn, you can interact with it.

- Use the Draw() method to draw an object.

object.Draw()

Example

A one-dimensional sine function shall be drawn. Use the TF1 class to create an object that is a one-dimensional function defined between a lower and upper limit.

TF1 f1("func1","sin(x)",0,10)
f1.Draw()

The function is displayed in a canvas.

Figure: Canvas (point to the bottom left light blue square or right-click on the image to interact with the object).
Drawing objects with special characters in their names

In general, avoid object names containing special characters like \, /, # etc. Also, object names starting with a number might not be accessible from the ROOT command line. / is the separator for the directory level in a ROOT file, therefore an object having a / in its name cannot be accessed from the command line. Nevertheless, some objects may be named in this way and saved in a ROOT file. The following macro shows how to access such an object in a ROOT file.

#include "Riostream.h"
#include "TFile.h"
#include "TList.h"
#include "TKey.h"

void draw_object(const char *file_name = "myfile.root", const char *obj_name = "name")
{
   // Open the ROOT file.
   TFile *file = TFile::Open(file_name);
   if (!file || file->IsZombie()) {
      std::cout << "Cannot open " << file_name << "! Aborting..." << std::endl;
      return;
   }
   // Get the list of keys.
   TList *list = (TList *)file->GetListOfKeys();
   if (!list) {
      std::cout << "Cannot get the list of TKeys! Aborting..." << std::endl;
      return;
   }
   // Try to find the proper key by its object name.
   TKey *key = (TKey *)list->FindObject(obj_name);
   if (!key) {
      std::cout << "Cannot find a TKey named " << obj_name << "! Aborting..." << std::endl;
      return;
   }
   // Read the object itself.
   TObject *obj = key->ReadObj();
   if (!obj) {
      std::cout << "Cannot read the object named " << obj_name << "! Aborting..." << std::endl;
      return;
   }
   // Draw the object.
   obj->Draw();
}

Using the context menu for manipulating objects

Right-click on the function to display the context menu.

Figure: Context menu for manipulating objects.

Here you can change many properties of the object like title, name, range, line and fill attributes, etc. For example, you can change the range by clicking SetRange.

Figure: SetRange dialog window.

Select a range, for example 5, 25.

Figure: Range 5, 25 for sin(x).
Using the Graphics Editor for objects

You can edit an existing object in a canvas by right-clicking the object or by using the Graphics Editor.

- Click View and then select Editor.

Figure: Editor for setting attributes interactively.

You can draw and edit basic primitives starting from an empty canvas or on top of a picture. There is a toolbar that you can use to draw objects.

- Click View and then select Toolbar.

Figure: Toolbar providing more options.

You can create the following graphical objects:
- Arc of circle
- Arrow
- Diamond
- Ellipse
- Pad
- PaveLabel
- PaveText or PavesText
- PolyLine
- Text string

Graphical objects

The following sections introduce some of the graphical objects that ROOT provides. Usually, one defines these graphical objects with their constructor and draws them with their Draw() method. The following graphical objects are presented:

Lines

TLine(Double_t x1,Double_t y1,Double_t x2,Double_t y2)

x1, y1, x2, y2 are the coordinates of the first and the second point.

Example

root[] l = new TLine(0.2,0.2,0.8,0.3)
root[] l->Draw()

Arrows

TArrow(Double_t x1, Double_t y1, Double_t x2, Double_t y2, Float_t arrowsize, Option_t *option)

The arrow is defined between points x1,y1 and x2,y2. option defines the direction of the arrow, like >, <, <>, ><, etc.

Example

TCanvas *c1 = new TCanvas("c1");
c1->Range(0,0,1,1);
TArrow *ar1 = new TArrow(0.1,0.1,0.1,0.7);
ar1->Draw();
TArrow *ar2 = new TArrow(0.2,0.1,0.2,0.7,0.05,"|>");
ar2->SetAngle(40);
ar2->SetLineWidth(2);
ar2->Draw();
TArrow *ar3 = new TArrow(0.3,0.1,0.3,0.7,0.05,"<|>");
ar3->SetAngle(40);
ar3->SetLineWidth(2);
ar3->Draw();
TArrow *ar4 = new TArrow(0.46,0.7,0.82,0.42,0.07,"|>");
ar4->SetAngle(60);
ar4->SetLineWidth(2);
ar4->SetFillColor(2);
ar4->Draw();
TArrow *ar5 = new TArrow(0.4,0.25,0.95,0.25,0.15,"<|>");
ar5->SetAngle(60);
ar5->SetLineWidth(4);
ar5->SetLineColor(4);
ar5->SetFillStyle(3008);
ar5->SetFillColor(2);
ar5->Draw();

Figure: Examples of various arrow formats.
Polylines

A polyline is a set of joint segments. It is defined by a set of N points in a 2D space.

TPolyLine(Int_t n,Double_t* x,Double_t* y,Option_t* option)

n is the number of points, and x and y are arrays of n elements with the coordinates of the points.

Example

Double_t x[5] = {.3,.7,.6,.24,.2};
Double_t y[5] = {.6,.1,.9,.8,.7};
TPolyLine *pline = new TPolyLine(5,x,y);
pline->SetFillColor(42);
pline->SetLineColor(13);
pline->SetLineWidth(3);
pline->Draw("f");
pline->Draw();

Figure: Example of a polyline.

Ellipses

You can truncate and rotate an ellipse. An ellipse is defined by its center (x1, y1) and two radii r1 and r2. A minimum and maximum angle may be specified (phimin, phimax). The ellipse may be rotated with an angle theta (all in degrees).

Example

TCanvas *c42 = new TCanvas("c42");
c42->Range(0,0,1,1);
TEllipse *el1 = new TEllipse(0.25,0.25,.1,.2);
el1->Draw();
TEllipse *el2 = new TEllipse(0.25,0.6,.2,.1);
el2->SetFillColor(13);
el2->SetFillStyle(3008);
el2->Draw();
TEllipse *el3 = new TEllipse(0.75,0.6,.2,.1,45,315);
el3->SetFillColor(26);
el3->SetFillStyle(1001);
el3->SetLineColor(4);
el3->Draw();
TEllipse *el4 = new TEllipse(0.75,0.25,.2,.15,45,315,62);
el4->SetFillColor(56);
el4->SetFillStyle(1001);
el4->SetLineColor(4);
el4->SetLineWidth(6);
el4->Draw();

Figure: Examples of ellipses.

Rectangles

A TWbox is a rectangle (TBox) with a border size and a border mode. The bottom left coordinates x1, y1 and the top right coordinates x2, y2 define a box.

Example

// A TBox:
tb = new TBox(0.2,0.2,0.8,0.3)
tb->SetFillColor(5)
tb->Draw()

// A TWbox:
TWbox *twb = new TWbox(.1,.1,.9,.9,kRed+2,5,1);
twb->Draw();

Markers

TMarker(Double_t x,Double_t y,Int_t marker)

The parameters x and y are the marker coordinates and marker is the marker type.

- Use the TPolyMarker to create an array of N points in a 2D space. At each point x[i], y[i] a marker is drawn.
- Use the TAttMarker class to change the attributes color, style and size of a marker.
Example

- Use the TAttMarker::SetMarkerSize(size) method to set the size of a marker.

Curly lines and arcs

Curly lines and curly arcs are special kinds of lines that are used to draw Feynman diagrams.

- Use the TCurlyLine and the TCurlyArc constructors to create curly lines and arcs for Feynman diagrams.

TCurlyLine(Double_t x1, Double_t y1, Double_t x2, Double_t y2, Double_t wavelength, Double_t amplitude)
TCurlyArc(Double_t x1, Double_t y1, Double_t rad, Double_t phimin, Double_t phimax, Double_t wavelength, Double_t amplitude)

Both classes directly inherit from TPolyLine.

Example

Refer to the $ROOTSYS/tutorials/graphics/feynman.C tutorial for creating a Feynman diagram.

Figure: Feynman diagram.

Text and Latex

Text that is displayed in a pad is embedded in a box, called a pave (TPaveLabel, TPaveText and TPavesText). All text displayed in ROOT graphics is an object of the TText class.

Example

root[] pl = new TPaveLabel(-50,0,50,200,"Some text")
root[] pl->SetBorderSize(0)
root[] pl->Draw()

A TPaveLabel can contain only one line of text. A TPaveText can contain several lines of text. A TPavesText is a stack of text panels.

Latex

Latex (TLatex) can be used as text, especially to draw mathematical formulas or equations. The syntax of TLatex is very similar to Latex in mathematical mode.

Example

TCanvas *c1 = new TCanvas("c1","test",600,700);
// Write formulas.
TLatex l;
l.SetTextAlign(12);
l.SetTextSize(0.04);
l.DrawLatex(0.1,0.9,"1) C(x) = d #sqrt{#frac{2}{#lambdaD}}\ #int^{x}_{0}cos(#frac{#pi}{2}t^{2})dt");
l.DrawLatex(0.1,0.7,"2) C(x) = d #sqrt{#frac{2}{#lambdaD}}\ #int^{x}cos(#frac{#pi}{2}t^{2})dt");
l.DrawLatex(0.1,0.5,"3) R = |A|^{2} = #frac{1}{2}#left(#[]{#frac{1}{2}+\ C(V)}^{2}+#[]{#frac{1}{2}+S(V)}^{2}#right)");
l.DrawLatex(0.1,0.3,"4) F(t) = #sum_{i=-#infty}^{#infty}A(i)cos#[]{#frac{i}{t+i}}");
l.DrawLatex(0.1,0.1,"5) {}_{3}^{7}Li");

Figure: Latex in a pad.
Graphical objects attributes and styles

There are the following classes for changing the attributes of graphical objects:
- TAttFill - Used for setting the fill attributes.
- TAttLine - Used for setting the line attributes.
- TAttMarker - Used for setting the styles for a marker.
- TAttText - Used for setting the text attributes.

Creating and modifying a style

When objects are created, their default attributes (taken from TAttFill, TAttLine, TAttMarker, TAttText) are taken from the current style. The current style is an object of the TStyle class and can be referenced via the global variable gStyle (→ see ROOT classes, data types and global variables).

ROOT provides two styles: Default and Plain.

Creating the Default style

The Default style is created by:

auto def = new TStyle("Default","Default Style");

Creating the Plain style

The Plain style is useful if you, for example, are working on a monochrome display.

auto plain = new TStyle("Plain","Plain Style (no colors/fill areas)");
plain->SetCanvasBorderMode(0);
plain->SetPadBorderMode(0);
plain->SetPadColor(0);
plain->SetCanvasColor(0);
plain->SetTitleColor(0);
plain->SetStatColor(0);

Setting the current style

- Use the SetStyle() method to set the current style.

gROOT->SetStyle(style_name);

You can get a pointer to an existing style with:

auto style = gROOT->GetStyle(style_name);

Note

When an object is created, its attributes are taken from the current style. For example, you may have created a histogram in a previous session and saved it in a ROOT file. Meanwhile, if you have changed the style, the histogram will be drawn with the old attributes. You can force the current style attributes to be set when you read an object from a file by:

gROOT->ForceStyle();

Creating additional styles

TStyle *st1 = new TStyle("st1","my style");
st1->Set....
st1->cd();

This now becomes the current style.

Getting the attributes of the current style

You can force objects (in a canvas or pad) to get the attributes of the current style:

canvas->UseCurrentStyle();

Axis

Axes are automatically built in by various high-level objects such as histograms or graphs.
TAxis manages the axis and is referenced by TH1 and TGraph. To make a graphical representation of a histogram axis, TAxis references the TGaxis class.

- Use the GetXaxis(), GetYaxis() or GetZaxis() methods to get the axis for a histogram or graph.

Example

TAxis *axis = histo->GetXaxis()

Setting the axis title

- Use the SetTitle() method to set the title of an axis.

Example

axis->SetTitle("My axis title");

If the axis is embedded into a histogram or a graph, you first have to extract the axis object.

Example

histo->GetXaxis()->SetTitle("My axis title")

Setting axis options and characteristics

The available axis options are listed in the following example.

Setting the number of divisions

- Use the TAxis::SetNdivisions(ndiv,optim) method to set the number of divisions for an axis. ndiv and optim are defined as follows:

ndiv = N1 + 100*N2 + 10000*N3, with:
N1 = number of primary divisions,
N2 = number of secondary divisions,
N3 = number of tertiary divisions.

optim = kTRUE (default): the number of divisions will be optimized around the specified value.
optim = kFALSE, or n < 0: the axis will be forced to use exactly n divisions.

Example

ndiv = 0: no tick marks.
ndiv = 2: 2 divisions, one tick mark in the middle of the axis.
ndiv = 510: 10 primary divisions, 5 secondary divisions.
ndiv = -10: exactly 10 primary divisions.

Zooming the axis

- Use TAxis::SetRange() or TAxis::SetRangeUser() to zoom the axis.

The SetRange() method parameters are bin numbers. For example, if a histogram plots the values from 0 to 500 and has 100 bins, SetRange(0,10) will cover the values 0 to 50. The SetRangeUser() method parameters are user coordinates. If the start or end is in the middle of a bin, the resulting range is an approximation: it finds the low edge bin for the start and the high edge bin for the end.

Setting time units for axis

- Use the SetTimeDisplay() method to set an axis as a time axis.

Example

For a histogram histo, the x-axis is set as a time axis.
histo->GetXaxis()->SetTimeDisplay(1);

For a time axis, you can set the time format and the time offset.

Time formats

The time format defines the format of the labels along the time axis. It can be changed using the TAxis::SetTimeFormat() method. The time format used is from the C function strftime(). It is a string containing the following formatting characters, for date:

%a: abbreviated weekday name
%b: abbreviated month name
%d: day of the month (01-31)
%m: month (01-12)
%y: year without century
%Y: year with century

for time:

%H: hour (24-hour clock)
%I: hour (12-hour clock)
%p: local equivalent of AM or PM
%M: minute (00-59)
%S: seconds (00-61)
%%: %

The other characters are output as is. For example, to have a format like dd/mm/yyyy, use:

h->GetXaxis()->SetTimeFormat("%d/%m/%Y");

Time offset

The time offset is a time in seconds in the UNIX standard UTC format (this is a universal time, not the local time), defining the starting date of a histogram axis. This date should be greater than 01/01/95 and is given in seconds. There are three ways to define the time offset:

- Setting the global default time offset.

Example

TDatime da(2003,02,28,12,00,00);
gStyle->SetTimeOffset(da.Convert());

Notice the usage of TDatime to translate an explicit date into the time in seconds required by SetTimeOffset. If no time offset is defined for a particular axis, the default time offset will be used.

- Setting a time offset for a particular axis.

Example

TDatime dh(2001,09,23,15,00,00);
h->GetXaxis()->SetTimeOffset(dh.Convert());

- Specifying the time offset together with the time format.

The time offset can be specified using the control character %F after the normal time format. %F is followed by the date in the format yyyy-mm-dd hh:mm:ss.

Example

histo->GetXaxis()->SetTimeFormat("%d\/%m\/%y%F2000-02-28 13:00:01");
Example

gStyle->SetTitleH(0.08);
TDatime da(2003,02,28,12,00,00);
gStyle->SetTimeOffset(da.Convert());
auto ct = new TCanvas("ct","Time on axis",0,0,600,600);
ct->Divide(1,3);
auto ht1 = new TH1F("ht1","ht1",30000,0.,200000.);
auto ht2 = new TH1F("ht2","ht2",30000,0.,200000.);
auto ht3 = new TH1F("ht3","ht3",30000,0.,200000.);
for (Int_t i=1;i<30000;i++) {
   // ... fill ht1, ht2 and ht3 ...
}
TDatime dh(2019,12,4,15,00,00);
ht3->GetXaxis()->SetTimeDisplay(1);
ht3->GetXaxis()->SetTimeOffset(dh.Convert());
ht3->Draw();

Figure: Time axis.

Drawing an axis independently of a graph/histogram

This may be useful if you want to draw a supplementary axis for a graph.

Legends

A TLegend is a panel with several entries (TLegendEntry class).

- Use the AddEntry() method to add a new entry to a legend. The parameters are:
  - obj is a pointer to an object having marker, line, or fill attributes (a histogram, or a graph).
  - label is the label to be associated to the object.
  - option:
    - "L": draws a line associated with the line attributes of obj, if obj inherits from TAttLine.
    - "P": draws a poly-marker associated with the marker attributes of obj, if obj inherits from TAttMarker.
    - "F": draws a box with fill associated with the fill attributes of obj, if obj inherits from TAttFill.

Example

The following legend contains a histogram, a function and a graph. The histogram is put in the legend using its reference pointer, whereas the graph and the function are added using their names. Because TGraph constructors do not have the TGraph name as parameter, the graph name should be specified using the SetName() method.
{
   auto c1 = new TCanvas("c1","c1",600,500);
   gStyle->SetOptStat(0);

   // Histogram:
   auto h1 = new TH1F("h1","TLegend Example",200,-10,10);
   h1->FillRandom("gaus",30000);
   h1->SetFillColor(kGreen);
   h1->SetFillStyle(3003);
   h1->Draw();

   // Function:
   auto f1 = new TF1("f1","1000*TMath::Abs(sin(x)/x)",-10,10);
   f1->SetLineColor(kBlue);
   f1->SetLineWidth(4);
   f1->Draw("same");

   const Int_t n = 20;
   Double_t x[n], y[n], ex[n], ey[n];
   for (Int_t i=0;i<n;i++) {
      x[i] = i*0.1;
      y[i] = 1000*sin(x[i]+0.2);
      x[i] = 17.8*x[i]-8.9;
      ex[i] = 1.0;
      ey[i] = 10.*i;
   }

   // Graph:
   auto gr = new TGraphErrors(n,x,y,ex,ey);
   gr->SetName("gr");
   gr->SetLineColor(kRed);
   gr->SetLineWidth(2);
   gr->SetMarkerStyle(21);
   gr->SetMarkerSize(1.3);
   gr->SetMarkerColor(7);
   gr->Draw("P");

   // Creating a legend.
   auto legend = new TLegend(0.1,0.7,0.48,0.9);
   legend->SetHeader("The Legend Title","C"); // option "C" centers the header
   legend->AddEntry(h1,"Histogram filled with random numbers","f");
   legend->AddEntry("f1","Function abs(#frac{sin(x)}{x})","l");
   legend->AddEntry("gr","Graph with error bars","lep");
   legend->Draw();
}

Figure: Legend containing a histogram, a function and a graph.

Canvas and pad

A canvas (TCanvas) is a graphical entity that contains pads (TPad). A pad is a graphical container that holds other graphical objects like histograms and arrows. It can also contain other pads, called sub-pads. When an object is drawn, it is always drawn in the so-called active pad.

Accessing the active pad

- Use the global variable gPad to access the active pad. For more information on global variables, → see ROOT classes, data types and global variables.

Example

If you want to change the fill color of the active pad to blue, but you do not know the name of the active pad, you can use gPad:

gPad->SetFillColor(38)

Accessing an object in an active pad

- Use the TPad::GetPrimitive(const char* name) method to access an object in an active pad.
Example

root[] obj = gPad->GetPrimitive("myobjectname")
(class TObject*)0x1063cba8

A pointer to the object myobjectname is returned and put into the obj variable. The type of the returned pointer is a TObject* that has a name.

Hiding an object in a pad

You can hide an object in a pad by removing it from the list of objects owned by that pad.

- Use the TPad::GetListOfPrimitives() method to access the list of objects in a pad.
- Use the Remove() method to remove the object from the list.

Example

First, a pointer to the object is needed. Second, a pointer to the list of objects owned by the pad is needed. Then you can remove the object from the list, i.e. from the pad. The object disappears as soon as the pad is updated.

root[1] obj = gPad->GetPrimitive("myobjectname")
root[2] li = gPad->GetListOfPrimitives()
root[3] li->Remove(obj)

Updating a pad

For performance reasons, a pad is not updated with every change. Instead, the pad has a "modified" bit that triggers a redraw. The "modified" bit is automatically set by:
- touching the pad with the mouse, for example by resizing it with the mouse,
- finishing the execution of a script,
- adding or modifying primitives, for example the name and title of an object.

You can set the "modified" bit yourself by using the Modified() method.

Example

// The pad has changed.
root[] pad1->Modified()
// Recursively updating all modified pads:
root[] c1->Update()

A subsequent call to TCanvas::Update() scans the list of sub-pads and repaints the pads.

Dividing a pad into sub-pads

To draw multiple objects on a canvas (TCanvas), you can divide a pad (TPad) into sub-pads. There are two ways to divide a pad into sub-pads:
- building pad objects and drawing them into a parent pad,
- automatically dividing a pad into horizontal and vertical sub-pads.

Creating a single sub-pad

To build sub-pads in a pad, you must indicate the size and the position of the sub-pads.

Example

A sub-pad is to be built into the active pad (pointed to by gPad). First, the sub-pad is built with the TPad constructor.
root[] spad1 = new TPad("spad1","The first subpad",.1,.1,.5,.5)

The NDC (normalized coordinate system) coordinates are specified for the lower left point (0.1, 0.1) and for the upper right point (0.5, 0.5). Then the sub-pad is drawn:

root[] spad1->Draw()

For building more sub-pads, repeat this procedure as many times as necessary.

Dividing a pad into sub-pads

- Use the TPad::Divide() method to divide a pad into sub-pads.

Coordinate systems of a pad

For a TPad the following coordinate systems are available. You can convert from one system of coordinates to another.

User coordinate system

Most methods of TPad use the user coordinate system, and all graphic primitives have their parameters defined in terms of user coordinates. By default, when an empty pad is drawn, the user coordinates are set to a range from 0 to 1 starting at the lower left corner.

- Use the TPad::Range(float x1,float y1,float x2,float y2) method to set the user coordinate system. The arguments x1 and x2 define the new range in the x direction, and y1 and y2 define the new range in the y direction.

Example

Both coordinates go from -100 to 100, with the center of the pad at (0,0).

TCanvas MyCanvas("MyCanvas")
gPad->Range(-100,-100,100,100)

Normalized coordinate system (NDC)

Normalized coordinates are independent of the window size and of the user system. The coordinates range from 0 to 1 and (0, 0) corresponds to the bottom-left corner of the pad.

Pixel coordinate system

The pixel coordinate system is expressed in pixels, with the origin at the top-left corner of the pad.

Converting between coordinate systems

TPad provides some methods to convert from one system of coordinates to another. In the following table, a point is defined by:
- (px,py) in pixel coordinates,
- (ux,uy) in user coordinates,
- (ndcx,ndcy) in normalized coordinates,
- (apx,apy) in absolute pixel coordinates.

Note

All the pixel conversion functions along the Y axis consider that py=0 is at the top of the pad, except PixeltoY(), which assumes that the position py=0 is at the bottom of the pad.
To make PixeltoY() convert the same way as the other conversion functions, use it as follows (p is a pointer to a TPad):

p->PixeltoY(py - p->GetWh());

Copying a canvas

- Use the TCanvas::DrawClonePad() method to make a copy of the canvas.

You can also use the TObject::DrawClone() method to draw a clone of an object in the currently selected pad.
https://root.cern/manual/graphics/
Bart De Smet's on-line blog (0x2B | ~0x2B, that's the question)

Introduction

Code 39 is a specification for barcodes that allows coding of the following symbols: A-Z 0-9 - . $ / + % * space. The goal of this small project is to allow generation of barcodes using System.Drawing in .NET, with C#.

Some principles

Every character of a code is translated into a sequence of white and black bars, each of which can be narrow or wide. A code is typically wrapped between a leading and trailing * character because the encoding of the * character allows detecting the orientation of the barcode based on asymmetry. Encoding of the characters is done on a one-per-one basis and an intercharacter gap (white) is inserted between two consecutive character encodings. For more info, see the Code 39 specification.

Mission goal

In the end we want to be able to write the following:

The code skeleton

The basic code skeleton looks as follows, including the constructors and two private members:

The Code39Settings class isn't rocket science at all and is a container for a series of properties ('property type' 'property name' = 'default value'):

Coding the character codes

Another requirement is an encoding table that maps a character onto an encoding. Basically, every character is encoded as a BWBWBWBWB pattern (B=black, W=white), each of which can be either n(arrow) or w(ide). Therefore we have the following code table:

The two last lines are System.Drawing support for what comes next.

Representing and painting patterns

As you've seen in the initialization code above, we need some Pattern class to represent a single character pattern that is also able to paint itself in some given "context". Therefore we define a private class Pattern.
First we define the static Parse method.

Nothing special going on in here, just parsing the narrow-wide code string (from the static initialization stuff) into a boolean array, each element of which tells whether the corresponding position (B/W) in the graphical code is wide (true) or not (false).

Example: For {'0', "n n n w w n w n n"} the encoding becomes { false, false, false, true, true, false, true, false, false }, meaning: narrow black + narrow white + narrow black + wide white + wide black + narrow white + wide black + narrow white + narrow black, or:

Time to get the thing painted: the Paint method. Notice this requires System.Drawing.dll to be referenced and the System.Drawing namespace to be imported.

This piece of code is given a Graphics graphical context and the Code39Settings object, as well as the left position to start painting at (on the "canvas" of the Graphics context). It returns the horizontal space taken while painting the character.

In the core loop of the method, the width of the current "code part" (i.e. the white or black bar) is examined by looking at the nw array. This determines the width of the "code part" based on the settings (WideWidth or NarrowWidth). For every even position (i.e. 0, 2, 4, 6, 8) we'll draw a black bar with the determined width. Otherwise, we'll just skip drawing and continue, while bookkeeping the total drawing width.

To assist in debugging, we paint a gray background when the debugging flag is turned on (Debug build). This allows for visual inspection without having your eyes hurt on the tiny bars with a screen flicker rate of 60 to 75 Hz :-).

Example: (Notice the 6 characters needed to encode BART, because of the leading and trailing * characters.)

An additional supporting method is available to calculate the width of a character pattern without drawing it.

On to the Code39 class's Paint method.
Quite a bit of code, but rather easy to understand. Apart from some string-measurement tricks, the code is rather straightforward. The core of the method is the following piece of code:

foreach (char c in this.code)
    left += codes[c].Paint(settings, g, left) + settings.InterCharacterGap;

This loop extracts the code patterns from the codes encoding dictionary and asks each of these objects to paint itself (Pattern.Paint) at the given left position, every time leaving some intercharacter gap spacing.

The result
(Code39 barcode image omitted)

Have fun! Download the code here.
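The post's actual code is C#; as a language-neutral illustration of the narrow/wide parsing and width bookkeeping described above, here is a small Python sketch. The '0' pattern string is the one quoted in the post; the narrow and wide widths of 1 and 3 units are illustrative assumptions, not values from the original code.

```python
# Sketch of the Pattern parsing and width logic described above
# (Python, not the author's C#). True = wide element, False = narrow.
def parse(code):
    """Parse a narrow/wide string like "n n n w w n w n n" into booleans."""
    return [part == "w" for part in code.split()]

def pattern_width(nw, narrow_width=1, wide_width=3):
    """Horizontal space a pattern occupies; widths are illustrative."""
    return sum(wide_width if wide else narrow_width for wide in nw)

# The '0' encoding quoted in the post.
zero = parse("n n n w w n w n n")
print(zero)                 # [False, False, False, True, True, False, True, False, False]
print(pattern_width(zero))  # 15 (3 wide * 3 units + 6 narrow * 1 unit)
```

Even positions (0, 2, 4, ...) are the black bars and odd positions the white gaps, exactly as in the painting loop above.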
The WMI system classes are a collection of predefined classes based on the Common Information Model (CIM). Unlike classes supplied by providers, the system classes are not declared in a Managed Object Format (MOF) file. WMI creates a set of these classes whenever a new WMI namespace is created. Objects from the system classes are used to support WMI activities such as event and provider registration, security, and event notification. Some objects are temporary, and some are stored in the repository as instances of the system classes. System classes follow a naming convention that consists of a double underscore (__) followed by the class name. When you write an MOF file to define classes for a WMI provider, Mofcomp.exe does not compile any class with an initial double underscore (__) because that prefix is reserved for WMI system class names. The documentation for the system classes includes only the nonsystem local properties. Links are provided in class definitions so that you can navigate the class hierarchy quickly and easily.

WMI System Classes
The following table lists the various system classes. (table omitted)
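Because the double-underscore prefix is reserved, telling a system class apart from a provider class reduces to a name check. A minimal Python sketch (the surrounding loop and the choice of example names are illustrative; __Event and __NAMESPACE are documented WMI system classes, Win32_Process is a provider class):

```python
# WMI system classes are named with a leading double underscore ("__"),
# a prefix that Mofcomp.exe reserves and will not compile for providers.
def is_wmi_system_class(name):
    return name.startswith("__")

for cls in ["__Event", "__NAMESPACE", "Win32_Process"]:
    print(cls, is_wmi_system_class(cls))
```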
How to calculate difference in timestamp columns

I have a data frame with the following columns, both of datatype string: "DateSubmitted", "DateClosed"

8/1/2018 12:29    8/2/2018 16:47
8/1/2018 20:25    8/2/2018 20:28

How do I calculate the difference between DateClosed and DateSubmitted?

First, write the data to a CSV file. Then you can use a pandas data frame to calculate the difference; the string columns must be parsed into datetimes before subtracting. Refer to the code below:

import pandas as pd

df_test = pd.read_csv("names.csv")
df_test['DateSubmitted'] = pd.to_datetime(df_test['DateSubmitted'])
df_test['DateClosed'] = pd.to_datetime(df_test['DateClosed'])
df_test['Difference'] = (df_test['DateClosed'] - df_test['DateSubmitted']).dt.days
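The same computation can be shown self-contained, building the frame in memory instead of reading a CSV. This sketch uses the two rows and column names from the question; the explicit format string is an assumption based on the sample values shown.

```python
import pandas as pd

# Same data as in the question, built in memory instead of read from CSV.
df = pd.DataFrame({
    "DateSubmitted": ["8/1/2018 12:29", "8/1/2018 20:25"],
    "DateClosed":    ["8/2/2018 16:47", "8/2/2018 20:28"],
})

# The columns are strings, so parse them into datetimes before subtracting.
submitted = pd.to_datetime(df["DateSubmitted"], format="%m/%d/%Y %H:%M")
closed = pd.to_datetime(df["DateClosed"], format="%m/%d/%Y %H:%M")

df["Difference"] = closed - submitted                     # a Timedelta column
df["Hours"] = df["Difference"].dt.total_seconds() / 3600  # fractional hours

print(df["Difference"].dt.days.tolist())  # [1, 1]
```

Using `.dt.days` truncates to whole days; `.dt.total_seconds()` keeps the sub-day part when you need it.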
Even tiny, single-use open source tools are worthy of our attention. If you have the need to programmatically configure your Nginx servers, look no further than nginxparser by Fatih Erikli. Weighing in at less than 100 lines of code, nginxparser provides two features: loading:

from nginxparser import load
load(open("/etc/nginx/sites-enabled/foo.conf"))

[[['server'], [
    ['listen', '80'],
    ['server_name', 'foo.com'],
    ['root', '/home/ubuntu/sites/foo/']]]]

and dumping:

from nginxparser import dumps
dumps([['server'], [
    ['listen', '80'],
    ['server_name', 'foo.com'],
    ['root', '/home/ubuntu/sites/foo/']]])

'server { listen 80; server_name foo.com; root /home/ubuntu/sites/foo/; }'

I was impressed with how simple it is to define a parser using Pyparsing, which nginxparser does to great effect. It's definitely worth checking out!
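The nested-list representation makes the dumping direction easy to picture. Here is a rough, dependency-free sketch of what a dumps-style renderer does; this is an illustration of the idea, not nginxparser's actual implementation.

```python
def dump_block(block, indent=0):
    """Render a [name_words, directives] pair back to nginx-style text."""
    name, body = block
    pad = " " * indent
    lines = [pad + " ".join(name) + " {"]
    for item in body:
        if isinstance(item[0], list):        # a nested block, e.g. location
            lines.append(dump_block(item, indent + 4))
        else:                                # a simple "key value;" directive
            lines.append(pad + "    " + " ".join(item) + ";")
    lines.append(pad + "}")
    return "\n".join(lines)

conf = [['server'], [['listen', '80'], ['server_name', 'foo.com']]]
print(dump_block(conf))
```

This prints a `server { ... }` block with one indented `key value;` line per directive, mirroring the round-trip shown above.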
When we compile a Java source file, it is converted to a class file. The classes or interfaces written in the source file are converted to bytecode and stored in the class file. The class file does not contain only bytecode; it also contains additional information that the compiler and virtual machine need to compile and interpret the bytecode. For example, when a generic class is compiled, it is converted to bytecode and stored in the class file. The bytecode itself does not contain any information about generics, but the class file generated from the source file does. This information is useful to the compiler when compiling other source files.

For example, a source file containing a generic class:

public class Person<T>{
    public T add(T a){
        System.out.println("happy");
        return a;
    }
}

The bytecode generated for the above class (as stored in the class file) corresponds to:

public class Person{
    public Object add(Object a){
        System.out.println("happy");
        return a;
    }
}

The bytecode is stored in the class file, and the class file also contains additional information required by the compiler and virtual machine, such as the generics information, which is not really part of the bytecode. Generally a VM needs some additional information, usually stored at the top of the file (the file "header"), to know how to load and interpret the binary data in the file; the class file stores the generics details in this kind of metadata rather than in the bytecode. The process of erasing generics during compilation is called type erasure, which is not very important to know in this context.
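The same split between executable bytecode and surrounding metadata exists in other compiled formats. As an analogy (Python, not the Java class-file layout), a compiled .pyc file also begins with a small header that the VM reads before the bytecode itself:

```python
# A .pyc file, like a Java class file, holds more than raw bytecode:
# it begins with a header (magic number, flags, source timestamp/hash)
# that tells the VM how to load the binary data that follows.
import importlib.util
import pathlib
import py_compile
import tempfile

src = pathlib.Path(tempfile.mkdtemp()) / "person.py"
src.write_text("def add(a):\n    return a\n")

pyc_path = py_compile.compile(str(src))            # compile to a .pyc file
header = pathlib.Path(pyc_path).read_bytes()[:16]  # 16-byte header (CPython 3.7+)

# The first four bytes are the interpreter's "magic number".
print(header[:4] == importlib.util.MAGIC_NUMBER)   # True
```

The header's layout is CPython-specific, but the principle matches the paragraph above: the file carries loader metadata that is not part of the bytecode.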
RationalWiki talk:Nothing is going on at Citizendium/Archive3 Contents - 1 Rerun - 2 Where to put this? - 3 Tendrl - 4 Sanger and Wikileaks - 5 King Martin's caninisation (yes, that is spelt correctly) of those who disagree with him - 6 Critical acclaim! - 7 Boing Boing - 8 "Your opinion counts for very little here." - 9 Disappearing references - 10 Incompetent ramblings - and a few diagrams - 11 CZ's failure is all wikipedia's fault! - 12 Non-personcitizen - 13 Google Analytics - 14 Restart - 15 Ha! - 16 2007 all over again - 17 Martin... resigns? - 18 In passing - 19 Re: Ha! - 20 Dana Ullman is back! - 21 Encyclopedia of Life - 22 Building a walled garden - 23 Tendrl - 24 Expert participation - 25 Knowino/Tendrl - 26 CZ approved articles vs. Wikipedia - 27 Homeopathy resurrected - 28 Elitist? - 29 And the hits keep coming... Rerun[edit] Most of this page is a rerun of this article & its comments. Even to the participants. 18:21, 30 November 2010 (UTC) - Hey, why are the Citizens allowed to slag on people they kicked out on other forums, but no one can call a spade a spade at CZ without, um, getting kicked out? Example is Hayseed Penis saying "The reason he briefly joined Citizendium was that he was no longer welcome at Wikipedia and thought, mistakenly, that his idiosyncratic and disruptive behavior would be tolerated at Citizendium." That's pretty nasty for a bunch of sweet-as-candy collegial collaborators. Asterisk (talk) 06:28, 1 December 2010 (UTC) - Have you seen Hayseed's edit comments on wikipedia? He was nasty back then. Hasn't changed. FreeThought (talk) 12:40, 1 December 2010 (UTC) - Um, guys? Is the name calling really necessary? Doctor Dark (talk) 13:36, 1 December 2010 (UTC) - All those commenters are actually me. Just ask Larry - David Gerard (talk) 13:28, 1 December 2010 (UTC) Where to put this?[edit] We have Conservapedia:Active users and RationalWiki:Active users. So, where should we place the charts for Citizendium? 
At the moment, they are at Citizendium/Active users. Of course, Citizendium doesn't need a namespace on its own, but it shouldn't be a subpage, IMO. So, what do you think of Active users at Citizendium? Or are there better proposals? Thanks, larronsicut fur in nocte 13:21, 1 December 2010 (UTC) - At the risk of being called an ignorant bitch again, why not create a CZ namespace? At least way, active users, WIGO and whatever follows can be lumped in there. --Ψ GremlinHable! 13:43, 1 December 2010 (UTC) - RW space, I suspect, would be the way to go (RationalWiki:Citizendium active users). Asterisk (talk) 05:58, 2 December 2010 (UTC) Tendrl[edit] Hello all! On the Citizendium forums, I posted a link to "Tendrl". (Sadly, that's the best name I've been able to come up with so far; I'm really bad at naming things. It's probably a good thing I don't have any kids. Suggestions for other names are welcome.) ;-) I've called it a fork of the Citizendium, but at this stage I don't intend for it to be a content fork in the traditional sense—more a "principles" fork, if you know what I mean. Rather than try to explain all my ideas here, I'll invite you to check out the site itself: I've created a number of pages there so far with ideas and notes, and I'm still trying to work out the fine details. Any and all suggestions are welcome. And, if you feel inclined, please do feel welcome to create an account there and edit—don't worry about breaking anything, we can fix things up later! :-) Thomas Larsen (talk) 10:33, 23 November 2010 (UTC) - New article on Tendrl started. FreeThought (talk) 10:55, 23 November 2010 (UTC) - Thanks! And I appreciate your support above. We should have a logo competition, we just need a few contributors first. ;-) Thomas Larsen (talk) 11:03, 23 November 2010 (UTC) - More educational free content is good for everyone. First thing you need to do is work out how not to be just a copy of Wikipedia. e.g. 
I was most intrigued by suggestions to use Semantic MediaWiki - though you need to be an ontology geek to get started, I've seen from an internal SMW at work that the ontology geeks can set up their complicated systems and others can in fact work inside them. e.g. If you had a university sponsorship, you could have a much more credible "academics' Wikipedia" that would keep idiots out of experts' faces. Just make it CC by-sa and the world will benefit. There's any number of ways to differentiate oneself. That sort of thing - David Gerard (talk) 11:09, 23 November 2010 (UTC) - If free content means publishers Alphascript and Betascript are allowed to copy wikipedia content into books and charge for it, I would sooner opt for a different model. Advocating people work for free but let others earn money from their toils is clearly not fair or equitable. FreeThought (talk) 02:10, 3 December 2010 (UTC) - Hopefully that'll be a quite temporary market inequity. Most of the people tricked into buying those things seem to be Wikipedia editors looking for more print sources for articles ... The trouble is that "free content" includes "for any purpose", and restricting commercial use is a usage restriction. Erik Moeller argues that attempting to force "noncommercial" is a First World conceit: a charity selling books Alphascript-style is "noncommercial", someone in a poor country selling a DVD of Wikipedia is "commercial". It's actually a tricky one - David Gerard (talk) 12:49, 3 December 2010 (UTC) - o_O Martin does hate your guts, doesn't he? Forking is a serious business, indeed. Doesn't he realise that the same metaphor can be applied to Citizendium?--ZooGuard (talk) 20:14, 23 November 2010 (UTC) - NO, ITS DIFRINT WHEN TEH CO-FOUNDER DOS IT!!!111 (Silliness aside, yeah, wow, that reply is headdesk-inducing. Though at least they apparently moved away from "TREASON!!!") --Sid (talk) 20:17, 23 November 2010 (UTC) - I don't hate anyone's guts.
This is your Frat style thinking. Basically, you here on RW just mock everyone who is not part of your group. OK, you have some knowledge of technical things: so fucking what? Who do you think you are to pass judgement like that? I respect your skills in certain areas, but you have no respect at all for others. Juvenile. 85.72.236.124 (talk) 03:34, 24 November 2010 (UTC) - "No respect at all for others" is of course prima facie wrong. We as a site hold strong respect for individuals and communities that represent strong rational, scientific points of view. We are highly critical of communities, individuals and view points that reject critical and scientific thinking, or that embrace overtly authoritarian practices. CZ has coddled pseudoscience and quack medicine, and has had a rather authoritarian political zeitgeist both now and in the past. You are a vocal supporter and instigator of strong authoritarian tactics and have been the focus of many discussion because of this. Tmtoulouse (talk) 03:55, 24 November 2010 (UTC) - I am slowly beginning to be disgusted with both Howie and Marty. Howie first, because he is an egomaniac, but Marty second because he only feeds the fire. Luckily, there are some sane people who actually care more about CZ than themselves over there, who might be able to save it. That said, Marty, best of luck with your "fork". ħuman 04:20, 24 November 2010 (UTC) - I think your mixing up people. Tmtoulouse (talk) 04:23, 24 November 2010 (UTC) - Yes, I started the fork, not Martin. :-) Thomas Larsen (talk) 04:42, 24 November 2010 (UTC) - But Citizendium is a fork too so Human is right in that regard. FreeThought (talk) 04:47, 24 November 2010 (UTC) - They are all just completely clueless about anything. I started reading here, because (unlike RW) I do pay attention to my critics. 
What I find is that they just leap to conclusions, show no understanding of why people are doing or saying certain things (and we all know that there are serious problem on CZ), and all the time make offensive remarks about people who have never said anything bad to them. As I stated above, it is juvenile. Delinquent too. 85.72.236.124 (talk) 04:51, 24 November 2010 (UTC) - Yes, you have set up your psychological barriers to maintain contentment in your little insular echo chamber. Beside Larry Sanger himself you appear to be the biggest anchor dragging things down into oblivion so I suppose to makes since that your approach is to lash out in this fashion. Tmtoulouse (talk) 05:19, 24 November 2010 (UTC) - So what's wrong with Tom setting up a fork? Knowledge is free, that's why we have public libraries. Seems some people on Citizendium want to hold a monopoly on it. If Citizendium want to hold on to their work they should have chosen a different licence. FreeThought (talk) 05:30, 24 November 2010 (UTC) - Since you ask, what people object to is not making the fork (which is perfectly legal) but using our forums to talk about it and complaining about everything -- even when they have been told privately what is really happening and why. Solving problems is rarely easy, and people just get pissed off with this "you are useless idiots and I am making my own wiki" mentality. Just go and do it, don't bleat about it for years. 85.72.236.124 (talk) 05:36, 24 November 2010 (UTC) - Tom hasn't been bleating about it for years though. He made a suggestion in two posts he was setting up a fork and people are welcome to join it. I don't see how that is offensive. FreeThought (talk) 05:43, 24 November 2010 (UTC) - Please forgive us if we cannot report on such things that are "even when they have been told privately what is really happening". Ain't part of the public record, didn't happen. ħuman 06:32, 24 November 2010 (UTC) - I think he has, FT. 
But he hasn't been told any secrets, that remark did not refer to him. Yes, Human, I know (sigh). Not my fault, though. I have stated a lot of things openly. 85.72.236.124 (talk) 11:06, 24 November 2010 (UTC) - Once the management of the community has gone behind closed doors its done. The project relies on motivating a group of people to provide significant effort on a volunteer basis. The more you keep your base community in the dark and out of the loop, the less enfranchised they feel, the less motivated they are to do anything. And that's talking about your core contributors, its nothing compared to the massive repulsion field it creates for recruitment and new users. Flexibility, openness, and direct engagement and enfranchisement of the community are keys. I just think they are stuck in models of community management that have significant barriers to exit that don't exist online. Tmtoulouse (talk) 06:36, 24 November 2010 (UTC) - Martin a.k.a. 85.*, I really don't understand your comments. - Where would you recommend discussing the fork initially if not on the Citizendium forums? I could have made the announcement here, and Citizens could have read about it on RationalWiki before hearing about it on their own forums. But I don't think that would be fair on people at the Citizendium. - For the record, I have not been told privately by you what is "really happening" at the Citizendium. If I knew, it wouldn't affect my decision to create Tendrl. - I haven't stated anywhere that any Citizen is a "useless idiot", and I hope that is not the "mentality" that has come through in my communications either. Rather, I have quite openly stated my support for the Citizendium (example). I have criticised the project, too—particularly its bureaucracy and endless sniping—but I wish it well. - I realise that I'm part of the free software culture, and principles like forking that I find perfectly acceptable may be offensive to you.
As I see it, creating a fork moves towards a goal, improving upon an older project and providing it with valuable competition. I don't see forking as an antagonistic move. - At this stage, I don't plan to encourage people to fork much content from the Citizendium—if people want to copy articles they wrote themselves, that's fine, but I don't want widespread copying from any other project without at least substantial revision (and, of course, appropriate attribution). Also, of course, I have no interest in taking articles like Homeopathy on board from the Citizendium. - Anyone is welcome to contribute to Tendrl. - If you have other concerns, feel free to e-mail me. I want to be as reasonable as possible. Thomas Larsen (talk) 06:47, 24 November 2010 (UTC) - CC-BY 3.0 license makes this fork Citizendium/Wikipedia-incompatible, so it's impossible to import content from these projects. Trycatch (talk) 11:26, 24 November 2010 (UTC) - Oops, an oversight on my part! I'll fix that tomorrow. I think pretty much all our content so far is CC-BY-SA, so it should be fine. Thomas Larsen (talk) 11:58, 24 November 2010 (UTC) - Fixed. "Tendrl" is now a CC-BY-SA project. :-) Thomas Larsen (talk) 00:10, 25 November 2010 (UTC) - Indeed, Martin really doesn't seem to understand why calling a fork "treason" is a fundamentally inappropriate response. May I suggest Wikipedia:Fork (software engineering), which I wrote large chunks of years ago, and the essays linked at the bottom, to understand where people come from on this. The Citizendium fork involved a great deal of ill will on Dr Sanger's part (though not the other way), but forks don't have to be like that and forking is not in general an expression of "fuck you." Indeed, ill will the other way - from the forked entity - is (as I note) a danger sign for the viability of said forked entity - David Gerard (talk) 14:27, 24 November 2010 (UTC) - Tendrl link does not seem to be working. 
Started last night and is continuing this morning.LittleRedWriter (talk) 16:47, 24 November 2010 (UTC) - It's working for me at the moment—were you receiving a timeout? Thomas Larsen (talk) 00:10, 25 November 2010 (UTC) You now have a WikiIndex entry. Good luck in your efforts to come up with a better name, which you desperately need. Onion Hi! :) 17:42, 26 November 2010 (UTC) Point of order[edit] "Martin a.k.a. 85.*" Please don't do that here. Call them BoN, Anon, 85..., etc., but you are claiming to know who the user of an anonymous IP account is, and we simply don't do that here, unless they sign "joe" after posting as a BoN. And what is the asterisk all about? ħuman 06:20, 26 November 2010 (UTC) Sanger and Wikileaks[edit] Btw, Larry Sanger is going ape shit over WikiLeaks. It is hard to take him seriously, I see this as "any wiki that's more popular than mine sucks." Tmtoulouse (talk) 07:25, 27 November 2010 (UTC) - Which is funny, because WikiLeaks is not a wiki. Also: "I'll go ahead and say the obvious: Wikileaks is an enemy of the U.S.--and not just the government. Deal with them accordingly." How? Nuke the servers? Chikenhawk.--ZooGuard (talk) 07:45, 27 November 2010 (UTC) - He is still going at it, all those tweets just start reading like "look at me! look at me!" but then I guess that's what twitter is all about. Do you think he ever gets as tired as the rest of with beating the wikipedia co-founder drum? Tmtoulouse (talk) 23:41, 27 November 2010 (UTC) - So someone expressed an opinion and they are now an "enemy of the US". Doesn't Sanger believe in the First Amendment? Given the censoring of posts on the citizendium forums, I'm not surprised. FreeThought (talk) 03:56, 28 November 2010 (UTC) - That Twitter stream is remarkable. Well worth saving - David Gerard (talk) 11:39, 28 November 2010 (UTC) - "Speaking as Citizendium's founder, I consider you enemies of the U.S.--not just the government, but the people." 
Doesn't have quite the same ring to it, does it - David Gerard (talk) 11:41, 28 November 2010 (UTC) - Yeah I was thinking the same thing, he certainly doesn't trumpet his role in CZ like he does his "role" at WP. Tmtoulouse (talk) 16:25, 28 November 2010 (UTC) - Probably because the general populace doesn't even know that CZ exists. --Sid (talk) 17:26, 28 November 2010 (UTC) - Or maybe he made those tweets outrageous so that people would attack wikipedia? I'm now inclined to believe this was his motive. It certainly caused a few unwary people to put down wikipedia in reaction. Well played Larry, Goebbels would be impressed. FreeThought (talk) 23:02, 28 November 2010 (UTC) - - I doubt it's that well thought out. I suspect it's more that WatchKnow's out of money and he's seeking a new patron, and "Wikipedia co-founder" is still more bankable than "Citizendium founder" - David Gerard (talk) 13:25, 30 November 2010 (UTC) - Odd way to garner financial support by attacking a group. I'm also puzzled by some of Sanger's tweets that border into homophobia - that "gay shoes" put-down. It's one side of Sanger I haven't seen before. FreeThought (talk) 17:02, 30 November 2010 (UTC) - That's because Sanger is basically losing it. The combativeness is because the one trick that worked to get publicity for Citizendium was trying to start public fights with Wikipedia. so of course it's the first thing to try - David Gerard (talk) 12:44, 3 December 2010 (UTC) King Martin's caninisation (yes, that is spelt correctly) of those who disagree with him[edit] H.R.H. King Martin-Hyphen-Martin, G.P.C., MEMBER OF YE OLDE ROYAL EDITORIAL COUNCIL AND DON'T YOU FUCKIN' FORGET IT, Citiz. Def., etc. etc., hath declared that anyone who disagreeth with him is a "dog." Hie thee to, and search thou down for ye olde aforementioned three-letter canid quadruped, an thou believest not thine obd't svt., OHai. 
((Um, did I leave out a pretentious hyphen somewhere?)) OHai (talk) 03:57, 28 November 2010 (UTC) - Making fun of how someone's name is spelled isn't a high-class maneuver. Just saying. Doctor Dark (talk) 05:12, 28 November 2010 (UTC) - I don't really get the dog angle here, but the comments are really telling. The EC just spent days bickering among itself about Howard, trampling straight over all suggestions or requests for details by the general community, only to suddenly decide that it's all fine after all... but how DARE the Managing Editor request details about their priorities? I mean, it's not like CZ is in some sort of existential crisis where it needs the support of the general community, which in turn demands greater transparency and is critical of the way the EC is drifting off into Lala Land... --Sid (talk) 11:17, 28 November 2010 (UTC) - Yeah, it's not like the Managing Editor now has the job at the top of the place that Larry previously had. What does a Managing Editor have to do with editorial direction, anyway? The nerve! - David Gerard (talk) 11:32, 28 November 2010 (UTC) - "Every man and his dog" - it's a very common idiom, meaning everyone (emphatically & indiscriminately), not calling anybody a dog. Wěǎšěǐǒǐď Methinks it is a Weasel 11:52, 28 November 2010 (UTC) - Indeed. However, the Editorial Council regards the Managing Editor as no-one in particular, and emphatically not as anyone who has any business asking about editorial matters - David Gerard (talk) 12:18, 28 November 2010 (UTC) - Which is doubly unfortunate because the Managing Editor (Daniel Mietchen) is one of the really good guys over there, sensible and non-egocentric. Wonder how long before he throws in the towel.
Doctor Dark (talk) 14:42, 28 November 2010 (UTC) - If you quarter-wits could read properly, you would see that the Charter states specifically that the managing editor is not a replacement for Larry, and that s/he makes provisional decisions which are ranked below the authority of the two councils (Article 36). Considering that you on RW are always bleating about accountability, opennness and democracy, you should be pleased that an elected body has more power than one individual. But wait, I forgot, you're really just trying to find fault, not be rational...85.72.236.124 (talk) 15:02, 28 November 2010 (UTC) - Maybe it's that us quarter-wits believe fixing obvious problems is more important than adhering to the bureaucratic strictures of charters and committees and such. Why don't you guys just roll up your sleeves and get to work instead of endless arguing and insistence on properly following proper protocols? Many of us on the outside get the impression that you enjoy all the arguing and bureaucracy, preferring it to something dull and pedestrian like, say, writing encyclopedia articles. Doctor Dark (talk) 15:38, 28 November 2010 (UTC) - Yes, we do "bleat" about accountability, openness and democracy because they're important. That's why the way you dismiss a perfectly reasonable request to be more accountable and open is of concern to us. Instead, you accuse Daniel of interfering. And heaven forbid that you should set priorities! Much better to thrash around at random because after all, if you don't know where you're going, you don't need a map. –SuspectedReplicant retire me 15:42, 28 November 2010 (UTC) - Since you clearly don't get it (along with Daniel), I shall remind you that the aborted action taken against one member was taken precisely because the EC could not get beyond stupid bureaucracy. Most of us want no bureaucracy at all, yet are forced into it reluctantly in order to get anything done at all. 
Indeed, it would be a glorious day when we could actually sit down and discuss priorities, but it is not going to happen -- for the reasons already stated. 85.72.236.124 (talk) 16:13, 28 November 2010 (UTC) - It is literally the case that Martin prefers bureaucracy to writing stuff, as he has expressly declared his return was for. Though he does actually edit in main space on occasion - David Gerard (talk) 17:29, 28 November 2010 (UTC) - (e/c) I get it perfectly: you're a power-crazed little man who doesn't realise that his site is founded in bureaucracy and is incapable of operating because of the egos involved. It should be quite clear that your current structure doesn't work, but instead of frankly admitting the problem and inviting suggestions from the "experts" on your site, you retreat into your little castle and insult people instead. Even from the small amount of information you let out of the citadel, it's pretty clear that you are one of the major problems. The best thing you could do for CZ is resign. You won't, though, because you're more interested in maintaining your position than fixing the problem. –SuspectedReplicant retire me 17:32, 28 November 2010 (UTC) - (also e/c) I like how a CZ member freely bleats about how the ME "doesn't get it". I really do. But the thing is that one of Daniel's duties is "to represent the Citizendium in its relations with external bodies, such as the mass media, and academic or non-academic institutions." Meaning that it's his job to, oh, I don't know, try convincing a university or so to host a great project such as CZ for free. Because, y'know, you're rapdidly running out of money. Or hey, how about contacting the press to get some good publicity after the whole funding issue news broke? Yeah, his job. - And right now, the random antics of the EC are making even CZ regulars go "Guys, WTH are you doing?". 
If somebody from outside looks beyond the pretty face of the wiki mainpage, he won't see a Project Of Experts, he'll just see a community trying to get some council to stop behaving like asshats. So yeah, Daniel DOES GET IT. He gets that he needs the EC to stop being morons who operate in secret and out of reach of the mere mortals because the resulting negative vibes make his job infinitely harder in a time of crisis. - But hey, why think things through when it's much easier to pat yourself on the back for having such a powerful council that doesn't have to take shit from the ME? --Sid (talk) 17:42, 28 November 2010 (UTC) - Looking at the recent comments on the proposal page, the discussion (check the comments by Howard and Hayford) seems to shift towards how much power the Secretary has and whether the discussion time frames are fixed or minimum values. I'd have thought that, given the amount of chartering, proposing and motioning, both of these questions would have gotten obvious answers ages ago. - In other news, Hayford's still busy rubbing in the absolute power of the EC: "WE were elected to the EC, not them, and we will do things OUR way, in the priority that WE judge is correct." Repeat after me, guys: "SIR, YES SIR!" --Sid (talk) 21:17, 28 November 2010 (UTC) - I like the refusal to address major issues until all the details are cleared. Details have an amazing ability to multiply uncontrollably. You never finish all the details, ever. If you refuse to spend any time or energy on big picture issues until all the little stuff is done, well suddenly its 2 years later and the only people left are the 5 EC members still working on those pesky details. Details are important but its the big stuff that will make or break your site. You gotta be able to multitask, to address both kinds of issues at once. 
- Group think is also a powerful dynamic, refusal to actually listen to the rest of community not only pisses them off but removes the single best pressure valve to break out of tunnel visioning governance bodies tend to get into. Nope, this is not the way to go about things. At best you just pissed off your whole user base (a volunteer user base, that can jump ship at anytime), more likely it just dragged the EC down to a completely ineffectual circle jerk. Tmtoulouse (talk) 22:16, 28 November 2010 (UTC) - If I read it correctly (sorry, brain's currently fried), Howard's (currently) last comment echoes what I'm thinking: Small details that aren't vital are exactly the type of thing the Managing Editor can decide on the fly until the EC properly decides on them. You know, so that the EC can focus on the big stuff. --Sid (talk) 22:37, 28 November 2010 (UTC) - Yep, right, you know better... NOT. You should call yourselves Irrational Wiki -- as we call you anyway -- because your inflated egos are all that you comprehend. Even when people leak things that they shouldn't, you still persist with your "own interpretations" of reality. Ignoring evidence (and insisting on your own view of things) is exactly what science does not permit, although very many second-rate practitioners do this. 85.72.236.124 (talk) 22:46, 28 November 2010 (UTC) - Hm, indeed. Your well-formulated arguments, presented in this calm and mature way, convinced me, and I have seen the error of my ways. *nods* In less sarcastic news, I've read more creative smack talk from Rob, TK and even Ken - and all of those were even man enough to create accounts, too. --Sid (talk) 22:58, 28 November 2010 (UTC) - You've had comments, observations, criticism and suggestions from RW and all you can do is insult us. You, Martin, are a perfect example of what's wrong with Citizendium. 
On RW, I've met a lot of people who are more knowledgeable than me on various topics, and the reason I know that is because they've made good edits or cogent comments. You're just a blowhard with a power fixation who, as has been mentioned above, doesn't actually do very much useful work on the site itself. You might dislike Wikipedia, but I suggest you read this page because it describes your condition precisely. –SuspectedReplicant retire me 22:59, 28 November 2010 (UTC) - In other news: Saved by the bell. --Sid (talk) 23:03, 28 November 2010 (UTC) - Irrational wiki, wow, no one has ever come up with that insult before, with such originality and creativity I am surprised CZ is failing. You know there are a few of you from CZ that like to harp on this wagon that we are pushing points that you guys have already refuted, or that we are outright lying. I would absolutely love it if for the first time you managed to come up with a specific example. So far whenever I ask I am ignored. Please show us something we have continued to say despite evidence to the contrary. Tmtoulouse (talk) 00:42, 29 November 2010 (UTC) I think it's unfair to criticize Martin for not doing much. Remember the CZ distinction between editors & authors. Authors are supposed to write things & editors are supposed to guide & approve. Of course it doesn't work like that a lot of the time, but Martin shouldn't be criticized for sticking to his official job description. As to the last remark, I seem to remember making this point before too. Nobody's ever going to admit to continuing to say something despite evidence to the contrary. What makes you any different in that respect? That you're right? But everyone says that too. Peter Jackson 11:43, 29 November 2010 (UTC) - If it were true that "Authors write things & editors guide & approve", CZ would not have more than a few hundred articles. Editors would be unemployed (or would be forced to amuse themselves on the forum).
The bulk of the existing CZ articles are written by editors, not by authors. Even most of the 150 or so approved articles were written by an editor and approved by another editor (or by three editors, among whom was the major contributing editor). --P. Wormer (talk) 16:07, 29 November 2010 (UTC) - As for saying stuff, contrary evidence and all that, I am just asking for some specifics. We can't have an opportunity to change what we are saying when no one actually addresses specific points with specific evidence. Tmtoulouse (talk) 16:31, 29 November 2010 (UTC) - Well, you know the Wiki system: it is quite a job to figure out who wrote what. But Howard Berkowitz, Milton Beychok, Nick Gardner, Anthony Sebastian, Gareth Leng, Richard Jensen, and myself are editors who contributed a significant number of new articles. --P. Wormer (talk) 17:18, 29 November 2010 (UTC) - Jensen no longer edits citizendium. He's now an admin on Conservapedia. FreeThought (talk) 06:04, 30 November 2010 (UTC) - I am really confused.....Tmtoulouse (talk) 17:51, 29 November 2010 (UTC) - PWormer: "Well, you know the Wiki system: it is quite a job to figure out who wrote what." - but the system makes it completely possible due to edit histories. I wish I had the wiki software when I was editing and "improving" my poems and essays published on line, to keep track of the changes. Asterisk (talk) 04:50, 30 November 2010 (UTC) - That IP 85.72.236.124 comes across as a really nasty person. 86.167.69.100 (talk) 20:10, 29 November 2010 (UTC) - Said IP is Martin Baldwin-Edwards himself, so the antagonism is not fully unjustified since many of us have been highly critical of his actions. That said, he seems to be a bit of an asshole not only here but to everyone at CZ as well. Tmtoulouse (talk) 20:13, 29 November 2010 (UTC) - Toulouse: you do not even make sense any more.
What is the point of commenting on things, when you have no interest in hearing the facts, but instead prefer your own fantasies as explanations? That is delusional. And you continue with the personal insults, in the juvenile delinquent fashion. Insofar as managing CZ is concerned, it is absurd to think that a few people can construct an encyclopedia. 85.72.221.97 (talk) 00:06, 30 November 2010 (UTC) - So as expected you come back with nothing but invective and vague complaints. The fact is that no one at CZ has come forward with specific information that countermands our analysis. Since the whole management of CZ is done behind lock and key everyone (us and your loyal contributors) are left to try and piece together the issues with what little dribbles of information you let out on purpose or accident. Sure we are likely to get things wrong now and then, but we have no way of knowing because anyone that could correct us stays locked up in their little cave peeking out just long enough to hurl some insults our way. - In the end though what matters is the success or lack thereof that you guys have in producing a healthy, functioning and productive community. Larry Sanger left you hanging, bad, his management was incompetent at all levels. There is a grace period that you are in now to see if your management councils can do better. The results are spotty, at best. Tmtoulouse (talk) 00:45, 30 November 2010 (UTC) Edit break[edit] To be fair to M B-E, he seems to have spent his entire life being listened to either as an academic, a lecturer, an author or an "expert" reporting to governmental organisations. As a perpetual number one, he's probably not accustomed to people disagreeing with his proclamations from the throne ex cathedra. 07:21, 30 November 2010 (UTC) -. Thus spoke anonymous 85.72.221.97. - This quotation summarizes neatly all that makes me puke on the manner that CZ is governed these days.
In exact contrast, I recognize the importance of authors that write "little contributions" in a "self-centered way", who don't want to be part of a "functioning structure" heeling—as canines—Mr. Baldwin-Edwards. - --P. Wormer (talk) 08:59, 30 November 2010 (UTC) - Let me just clarify "Authors are supposed to write things & editors are supposed to guide & approve.". That is true, but the important thing to remember is that everyone on CZ is an author, including Editors. Being an Editor is an additional role to being an author, not a replacement. The problem comes with the fact that you cannot approve an article on your own if you have performed any author work on it. If you have done any author work on an article, you then need three Editors in agreement to get the article approved. Due to the lack of Editors in some workgroups, this has made some Editors reluctant to act as an author in fear of being unable to ever get articles approved. --Chris Key (talk) 10:24, 30 November 2010 (UTC) - While you're around to clarify stuff (which I do appreciate), could you elaborate a bit on "if you have performed any author work on it"? Does this only mean non-trivial changes like adding new content and such, or does it also include spellcheck, stylistic changes, etc.? - And on a more fundamental level, I think a system counts as broken if it actively discourages making improvements to articles like that (at least while you only have a few people to work with). --Sid (talk) 11:44, 30 November 2010 (UTC) - Edit: It's possible that the above sounds like a terribly retarded question, of course, but I learned not to take too much for granted with CZ, especially since I find it counter-intuitive to make a difference between "author" and "editor" work in the first place on a wiki system where technically, anybody can do either/both. So I'm somewhat puzzled by what counts as what. 
(Plus, my brain is still fried and I don't have time to read up through The Charter or whatever other super-formal document you guys have to specify what does what. =P) --Sid 11:57, 30 November 2010 (UTC) - The policy states that "Editors may approve articles if they have not contributed significantly to the article as an author". Spellchecking and other edits that do not really alter the content don't count as 'significant', so Editors may still do this kind of work and then approve the article. - I agree that the system causes issues. When it was designed I think it was assumed that we wouldn't have problems getting three Editors working on a single article - and therefore single-Editor approval was to be the exception rather than the rule. This policy is something that the EC have control over, and it may change if they see fit. --Chris Key (talk) 13:37, 30 November 2010 (UTC) - Regardless of some other comments here, CZ was originally conceived as "expert-led" rather than expert-written. In other words, the idea was to take some of the more positive aspects of WP and overlay them with expert guidance -- either literally as guidance (such as where to start, what literature exists, &c) or in terms of dispute resolution by people who have some idea of what they are talking about (unlike WP). The problem that has arisen is the mutation of the project, which arose spontaneously owing to a general lack of authors as well as editors. The single-editor approval was definitely envisaged as the norm -- and the three editor solution posited as an exceptional case to allow editors' articles to get approval. Now, we have the case that there are too many chiefs and almost no Indians: I do not see the solution as being to discourage newcomers by acting as an elite club. Sadly, some CZ editors have had that attitude and failed to realise that there is a world of difference between an expert-guided project and an expert-exclusive project. 
Some of those people left CZ "in disgust" and may be heard spouting off from time to time. Our task is to get in more people, of all sorts, so that CZ does not look like an exclusive little club for retired academics. 85.72.221.97 (talk) 15:32, 30 November 2010 (UTC) - That's a sensible analysis and consistent with my own CZ experience. I found CZ a lonely place -- no one showed up for me to "gently guide." Eventually I got tired of putting up with Sanger's micromanaging and the other nonsense for no reward in terms of cooperative work or intellectual stimulation. (The Homeopathy debacle sure didn't help.) Doctor Dark (talk) 15:49, 30 November 2010 (UTC) - @85.72.221.97: How many retired academics are still contributing to CZ? Off-hand I would not know any. So this is a good time to take preventive action and add to CZ's charter (Article 121 sub 3A): Registration by retired academics is not acceptable, because, before you know it, CZ becomes infested and "will look like an exclusive little club for retired academics". --P. Wormer (talk) 16:17, 30 November 2010 (UTC) - "too many chiefs and almost no Indians"? Depends where you are. Quite a few workgroups have no editors at all. Peter Jackson 11:46, 2 December 2010 (UTC) - Very true. I meant, the ratio of chiefs to Indians is too high; but there is a population shortage of both. 85.72.221.97 (talk) 14:30, 2 December 2010 (UTC) Critical acclaim![edit] Date: Thu, 2 Dec 2010 18:09:46 -0500 Message-ID: <AANLkTim8+5eXZE-bXwRhDyg_+cr1n1Pz0iNw4-u1HbCh@mail.gmail.com> Subject: Your work From: Gregory Kohs <thekohser@gmail.com> To: David Gerard <dgerard@gmail.com> I sure your live-in mistress can't wait to see what I write about next on Examiner. You're so full of yourself on that Alexa 100,000 irrational wiki.
Greg - David Gerard (talk) 23:16, 2 December 2010 (UTC) - You can feel the love. We should write an article on Alexa one day, it is such a shit metric. - π 23:18, 2 December 2010 (UTC) - Some day I hope we could have the same alexa ranking as Wikipedia Review, the sky is the limit! And threatening someone with examiner articles? Really? Really? The examiner is hardcore! Tmtoulouse (talk) 23:34, 2 December 2010 (UTC) - Not to mention the fact that Greg will be lining his pockets a bit as we all (or rather our mistresses) go to his Examiner page to see what he's written. Then again, it's the Examiner. Any publication that lets Hurlbut publish his stuff has less than zero credibility anyway. --Ψ GremlinHable! 09:29, 3 December 2010 (UTC) - How the Examiner works. It's a commercial blogging host that pays you 0.5 to 1 cent per page view, more or less. It in no way constitutes media, it just somewhat pretends to. "OOH YOU JUST WAIT TILL YOU SEE WHAT I WRITE ABOUT YOU IN MY ONLINE JOURNAL" - David Gerard (talk) 12:37, 3 December 2010 (UTC) Boing Boing[edit] Should Wikimedia Foundation host Citizendium? --I'm bored (talk) 18:50, 30 November 2010 (UTC) - Lol, even the pointy-headed academic turned up, touting their credentials and vocabulary. But not actually saying anything. (this was funnier following the comment above this section...) Asterisk (talk) 06:06, 1 December 2010 (UTC) - Even funnier is how CZ being owned by Wikimedia would be such a slap in the face to Larry "the cry baby" Sanger's "Jimbo Wales screwed me" sob story. Scotch (talk) 07:46, 5 December 2010 (UTC) - I swear that the prospect of watching Larry absolutely shit was only a slight motivation. As it is, I understand the WMF techies are OK with CZ camping out on Wikimedia if there's a box available (as they know the CZ techies are competent), but the organisational problems (CZ has no existence as an entity) would absolutely need to be sorted.
In any case, CZ is sorted for hosting for the next couple of months, which is quite long enough to see if it can keep its shit together - David Gerard (talk) 14:05, 5 December 2010 (UTC) "Your opinion counts for very little here."[edit] In response to a post in which Martin Baldwin-Edwards directed a threat against Howard Berkowitz, I asked that he not make threats on Citizendium's forums. (I believe there are very clear rules about such conduct in Citizendium's new charter.) Baldwin-Edwards then responded with the following: Tom: you do not tell members of the CZ councils how to behave on their own Forum board. if [sic] you don't like CZ, then kindly just quit it. Your opinion counts for very little here. And that, folks, is why Citizendium will fail. I was tempted to ask Baldwin-Edwards if he is familiar with the following articles in the charter: - "Citizens shall act responsibly and in a civil manner: derogatory or offensive language or behavior will not be tolerated." - "All Citizens shall be equal and no special privileges shall be granted except those granted in this charter to Editors and Officers." - "All Citizens shall be treated fairly and respectfully by other Citizens, Editors, and Officers of the Citizendium." - "Citizens should expect Officers and Editors to be fair and impartial. Biased Officers and Editors shall recuse themselves from their official positions in any dispute resolution process." - "All Citizens, regardless of position or status, shall be bound by this Charter including its amendments, and no referendum or decision of any council or official shall contravene it." So much for goodwill. I'm now tempted to write letters to any organisations that consider sponsoring Citizendium, recommending that they leave the project well alone. Let it die.—Thomas Larsen (talk) 10:42, 4 December 2010 (UTC) - The more I follow the goings on on CZ, the more I'm reminded of the behaviour of the Fab Five over at CP.
There might not be the general nuttiness, but the swaggering arrogance is there for all to see. And yes, that is exactly the reason why CZ will fail. --Ψ GremlinПоговорите! 10:48, 4 December 2010 (UTC) - I'm not going to defend Martin but Berkowitz's constant whining on the Talk pages and forum when he doesn't get what he wants grates on people. I don't blame editors wanting to remove him. Martin was correct over the Wikileaks article - it was being written in a biased way with a pro-US point of view, with little regard for neutrality. FreeThought (talk) 10:56, 4 December 2010 (UTC) - What frustrates me most about Citizendium is that discussions always boil down to interpersonal disputes, never actually to questions about the neutrality of content. Thomas Larsen (talk) 11:31, 4 December 2010 (UTC) - That is a lie. Attempts to deal with neutrality have been systematically blocked by one person (and one person only), who will shortly answer to the EC for his behaviour in the past. The case with wikileaks is a repetition of past patterns. 85.72.221.97 (talk) 11:37, 4 December 2010 (UTC) - Does anyone else find the tone of the anonymous post above somehow chilling? --BobSpring is sprung! 13:41, 4 December 2010 (UTC) - I regret to say that Mr Wormer has some expertise in being obnoxious. The tone of my last post was factual, since the matter of editorial conduct has been sent to the EC for dispute resolution. 85.72.221.97 (talk) 14:48, 4 December 2010 (UTC) - That's right, Martin. Someone has disagreed with you, which obviously can't be tolerated. All dissent must be crushed! Purge the undesirable elements from your site then sit back in satisfaction at your supreme power, admiring the beautiful way in which the tumbleweed blows through the deserted pages. –SuspectedReplicant retire me 15:25, 4 December 2010 (UTC) - Apparently it's against RW house rules to name anonymous IPs *shrug*, that's what Human said in the "Point of Order" comment above.
Just sayin' FreeThought (talk) 16:05, 4 December 2010 (UTC) - It's been done several times already, and the IP essentially outed himself anyway: see these two posts. –SuspectedReplicant retire me 16:21, 4 December 2010 (UTC) - My general impression was that he wasn't exactly trying to hide his identity and posting as an IP was more about not deigning to even bother with making an account here. If, however, I am mistaken and said IP really is trying to stay "pseudonymous" then yes we need to stop outing him. Tmtoulouse (talk) 16:47, 4 December 2010 (UTC) - I don't have a problem with being referred to here as Martin, but prefer the surname be omitted. The only issue is that my ip might change (it is semi-fixed lol) and there could be fraudulent posters... 85.72.221.97 (talk) 17:30, 4 December 2010 (UTC) - Ummmmm. I may be missing some grand political point here, but wouldn't the simplest thing be to create an account? Our registration process is most simple and quick.--BobSpring is sprung! 18:41, 4 December 2010 (UTC) (undent) I might note that I am more than open to specific suggestions in making the Wikileaks article more neutral, at least when it's unlocked. Before that happened, I tried to respond. At least two of the other commenters on the talk page are not from the U.S. and didn't see huge problems in the evolving article. More and more information was going in, and hopefully better balancing, until the deletion got so bad I couldn't keep track of the edits. My email here is enabled, so if someone has a specific comment (not "this is from a US standpoint", but something I can fix), it's welcome. Howard C. Berkowitz (talk) 20:20, 4 December 2010 (UTC) - I share Tom's concerns; Martin's threat was entirely out of line. I don't think Howard is entirely guiltless either, though I do think his behaviour has been mostly reasonable. 
Back when there were elections for the Editorial Council, I voted for both these guys; both are intelligent and knowledgeable, and both do contribute substantially to the wiki. I might vote differently next time. - As for CZ dying, it is not clear to me that it either will or should. Certainly there are problems to be solved and it is not clear either that they will be or, for that matter, whether the structure itself -- charter, constables, councils, ... -- creates more problems than it solves. There is a risk the whole thing will go belly up. - Anyway, I do not think that the key point is that this particular site and structure survive -- it is an interesting experiment, worth investing some time in and hoping for the success of, but no more. The vital thing is that some of us are writing good material there, and that will likely survive on other sites even if CZ itself fails. Possibly Tendril, though it is not yet clear to me that that will work either. Certainly Wikipedia; already two approved articles I wrote (cypherpunk and Kerckhoffs' principle) have been copied almost entirely to WP. RW or Wikitravel or a dozen other sites could take anything they find useful; the license is designed for that. So if you think, as I do, of contributing to an overall "Creative Commons" rather than worrying too much about a particular project's borders, CZ provides an attractive mechanism for that. The notion of expert oversight is basically sound, whether or not all the details have been or will be correctly worked out. Pashley (talk) 22:52, 4 December 2010 (UTC) - "The notion of expert oversight is basically sound"—I agree. The key is, as you say, "whether or not all the details have been or will be correctly worked out".
Thomas Larsen (talk) 01:09, 5 December 2010 (UTC) - Some of us are more expert than others, but don't like to push ourselves forward: For example, nobody has ever suggested that I should write a signed article on topics for which I have an international reputation. Guess who, without looking. Fricassée (talk) 13:22, 5 December 2010 (UTC) - Re survival on other sites, just to mention that all CZ material is welcome on Wikinfo. Peter Jackson 15:21, 5 December 2010 (UTC) - "Attempts to deal with neutrality have been systematically blocked by one person" Hm. As far as I can tell, it's Howard who's been trying to get the Editorial Council to work on general policies, including presumably neutrality or whatever, and the rest of the members who seem to think that sounds too much like hard work and prefer to wait for individual disputes to be referred to them and then see whether a pattern emerges in the decisions they find themselves making. Peter Jackson 15:25, 5 December 2010 (UTC) - I suggest that you ask the other members of the EC, rather than taking my word for it, Peter. 85.72.221.97 (talk) 17:11, 5 December 2010 (UTC) Disappearing references[edit] "On Wikipedia" has been emptied of all posts, so this struck me as worth saving while it's in cache - David Gerard (talk) 13:45, 1 December 2010 (UTC) - Quite an interesting read. Especially the part where they talk about the connotations of Citizendium's failures for the internet as a whole. Thanks for sharing, David. Audi (talk) 16:50, 6 December 2010 (UTC) Incompetent ramblings - and a few diagrams[edit] As some of our readers from Citizendium know, I like to manipulate data and fabricate diagrams. So, let's see what November 2010 has up its sleeve for us... (To compare these numbers with those of RationalWiki and Conservapedia, have a look here.) larronsicut fur in nocte 20:28, 2 December 2010 (UTC) - Seems an accurate presentation to me. The problems are with the message (ie CZ) not with the messenger.
Thanks for these. 85.72.221.97 (talk) 05:12, 3 December 2010 (UTC) - Indeed, it is an accurate presentation. Alexander Stos looks after the statistics at cz:CZ:statistics. His updates of the existing diagrams are prompt and regular, but he hasn't done anything new for quite a while. And sadly, I don't see him getting involved in the discussion of the google-metrics. Anyway, his numbers seem to be quite comparable to mine: - larronsicut fur in nocte 15:04, 6 December 2010 (UTC) CZ's failure is all wikipedia's fault![edit] I found this comment amusing. If Wikipedia hadn't driven away all those experts biting at the bit to edit free encyclopedias, Citizendium would be a thriving community. Way to own up to the failure of your community model there. Tmtoulouse (talk) 02:49, 3 December 2010 (UTC) - I liked the earlier comment, regarding the fork, "splitting of the limited resources of a relatively small group of contributors", emphasis added. IE, CZ is so small they can't afford some members leaving for elsewhere. Asterisk (talk) 07:17, 3 December 2010 (UTC) - And by page 5, Milton chimes in to discuss how many citizens can dance on the head of a pin, if they are, indeed, still citizens. Sorry, Milton, 'til now, I liked you. Asterisk (talk) 07:25, 3 December 2010 (UTC) - Innes' other comment, that wikipedia is blocked in colleges while citizendium isn't, isn't strictly true. Here in Toronto, the university library and computer labs allow access to wikipedia, while citizendium is unreachable. FreeThought (talk) 11:47, 3 December 2010 (UTC) - And I thought these people were smart. They can't even spell... "When trying to reign the project in...". I'd accept that mistake from the hoi polloi, and always have, but from a bunch of self-appointed PhD "experts"? Pomegranate (talk) 08:22, 4 December 2010 (UTC) - You express your own misconceptions. Many experts are not PhDs or academics, although the majority are.
CZ is not an exclusive club, we all have different abilities (and defects) and it is true that I spend a certain amount of time asking people to correct their spelling errors. They also spend time asking me to correct my mistakes, of various sorts. 85.72.221.97 (talk) 10:05, 4 December 2010 (UTC) - I think there is some stuff above about what CZ considers an "expert" that is probably what the Pommy is basing their analysis on. That is, to be an "expert" on CZ, one must have academic credentials. Cummerbund (talk) 02:28, 5 December 2010 (UTC) - No, this is not the case. Berkowitz has none, for example (and it shows). Most without academic training are in specialised areas where this is not needed or even useful, but have proved their expertise. 85.72.221.97 (talk) 09:47, 5 December 2010 (UTC) - Neither has Aleta Curry, and it shows, the gaffe about the United States being the world's largest democracy, and British English spellings for example. FreeThought (talk) 10:35, 5 December 2010 (UTC) - Aleta does not claim expertise in areas where she has none. We all make gaffes (note spelling). 85.72.221.97 (talk) 11:29, 5 December 2010 (UTC) - There's a difference between spelling errors and factual ones, not to mention the false claims made against LArron. FreeThought (talk) 12:12, 5 December 2010 (UTC) - Actually, your spelling [gaff] is an uncommon variant: I meant more that there are different ways of looking at things, sometimes -- just as there are different spellings (even within British English). For me, that is a world apart from insisting on your own expertise. Concerning LArron, I have made my own opinion (with some expertise in using statistics) fairly clear. 85.72.221.97 (talk) 12:25, 5 December 2010 (UTC) - Oh wow! Please, please say that you've taken more statistics classes than LArron and then I'll be able to state with 95% certainty that you are a clone of Andy Schlafly.
–SuspectedReplicant retire me 12:59, 5 December 2010 (UTC) @85.xx.xxx.xx: Concerning LArron, I have made my own opinion (with some expertise in using statistics) fairly clear. Where? I only remember the line: It also has Larry as a bot (the pink colour) so I wouldnt pay too much attention to their competence in these things, which raises more questions about your colour-blindness than my competence... larronsicut fur in nocte 13:49, 5 December 2010 (UTC) - Sorry LArron, that was more of a joke anyway. I thanked you under the last contribution... 85.72.221.97 (talk) 15:24, 5 December 2010 (UTC) - Silly me, couldn't spot the joke - ah, us bloody foreigners, lacking the necessary sophistication to spot irony... larronsicut fur in nocte 15:06, 6 December 2010 (UTC) - You've got to admit that confusing the Editor-in-Chief with a bot is an amusing thought (even if you didn't really)... 85.72.221.97 (talk) 15:24, 6 December 2010 (UTC) Non-person citizen[edit] Tom Larsen's recent comments seem to have been moved to the non-citizens' section of the forum & references to Tendrl removed as "self-promotion". Peter Jackson 15:18, 5 December 2010 (UTC) - Oh surely you're kidding here. They wouldn't do such a th-FFFFFFFFUUUUU- --Sid (talk) 15:47, 5 December 2010 (UTC) - And I see that the entire thread referenced in the section above has been nuked. Good one. --Sid (talk) 15:50, 5 December 2010 (UTC) - And here seems to be the discussion about terminating Citizenship. Oddly, there seems to be little obvious consensus, especially not towards revoking it. Milton seems to assert non-Citizenship, Matt and Martin have doubts. So what happened? --Sid (talk) 15:58, 5 December 2010 (UTC) - There is little consensus on anything in CZ, and far too much not done as a consequence. 85.72.221.97 (talk) 20:10, 5 December 2010 (UTC) - Well, in their defence, I did say I no longer considered myself an active "Citizen" and wanted to disassociate myself from the project.
Thomas Larsen (talk) 21:38, 5 December 2010 (UTC) - Roll up, roll up, get your Bach flower remedies here. You want past life regressions? No problem sir, happy to oblige, how about a side order of ESP or remote viewing (etc.)? Check out Ramanandadingdong, now taking orders through Citizendium (discounts apply). - Want to hire a repressed control-freak with no personal skills to write you an essay on what humans do? Well, you could try the Yellow Pages of wikis, Citizendium where you might get lucky. - Want a fully qualified chiropractor in a boyscout uniform to take your kids into the woods unsupervised? Look no further. - I think that maybe, just maybe, their views of self-promotion are slightly skewed. D.T.F. (talk) 06:09, 6 December 2010 (UTC) Google Analytics[edit] So they have gone ahead with installing Google Analytics regardless of any privacy concerns and now are starting to get the results. Over on this thread the discussion begins. It's vaguely interesting, but my point is a broader one. I personally think that while these kinds of statistics can be interesting, they should not be taken that seriously, and certainly not used as the guide on how to build your website. While certainly an improvement over alexa (god we need a good article about that place) anything beyond the number of times a page is served up is inferential and conjecture. Unique IPs do not translate into unique users, the same users will go through multiple IPs, or even multiple users with the same IP. Unique visits is a strange stat that is built around an IP showing up at the site then leaving the site for a certain amount of time before coming back. Installed log report generating software can often let you tweak the definitions of a visit but I don't know if google does and I find the default options are usually sub-optimal, and even the whole concept murky. The same thing with length of time on the page.
Frankly, with the use of tabbed browsing it just seems impossible to get anything very meaningful from these kinds of inferences of the log data. It just seems that people that would never allow for this shoddy and murky of an analysis in a professional setting just swallow the likes of alexa whole. Again, it is interesting and might hint at trends but do not try and adjust and build your content around the results. Tmtoulouse (talk) 16:38, 5 December 2010 (UTC) - The trouble with metrics is that you start working to the metric, even if you know that's not necessarily a good idea - David Gerard (talk) 18:01, 5 December 2010 (UTC) - Try telling that to the Provost and the Dean for me, please. :-P Doctor Dark (talk) 19:27, 5 December 2010 (UTC) - I guess they don't trust their server logs to tell them what is going on on their website? Blancmange (talk) 02:37, 6 December 2010 (UTC) - The important difference between the server logs and the analytics report is that the analytics ignores logged-in users. --Chris Key (talk) 05:26, 6 December 2010 (UTC) - And the analytics ignore everything that doesn't pass through google? Like links from external sites that lead to the site? I think that's how we/they do the "entry point" thing here - Trent looks at the logs and if some page is a big draw (not just from google), he tells us/them. Blancmange (talk) 06:43, 6 December 2010 (UTC) - No, Google Analytics doesn't just analyze which traffic goes to the site from Google. It's a full web analytics tool. The article also offers info on how to opt out of Google tracking you all across the net. --Sid (talk) 00:58, 7 December 2010 (UTC) - Ah, ok. Blancmange (talk) 02:33, 7 December 2010 (UTC) Restart[edit] I'd always thought that it was from Norfolk, Somerset or Ireland but apparently it's from Maine. Whatever, the predicament of Citizendium is easily encapsulated in the phrase: "You can't get there from here".
It's in a swampy hollow created by Larry and wants to get to the clear mountaintop that can be seen in the distance. Sadly they need a whole new set of rules and contributors. The present mob are so counterproductive that it's hard to see how they got this far. (Because they only had to do what Larry told them) - Get rid of the "constables". - Stop multiplying tasks - with the number of people available, two "councils" is one too many - let them do the job of the "constables" - Allow non-expert[sic] members - they can and probably will do your proofreading and reference finding (think undergrad assistants) Give 'em a new designation: "students"? "sophomores"? (Eduzendium students aren't enough) - Revert to standard wiki formatting until there's enough material to warrant all the various pages that exist. Like it or not, people are familiar with references and see alsos at the foot of the page - if you want readers, encourage them by familiarity. - Council members = Bureaucrats - Regular members (citizens) = sysops - Students = The rest - This will never happen - David Gerard (talk) 18:30, 5 December 2010 (UTC) - As I said: you can't get there from here! Fricassée (talk) 18:58, 5 December 2010 (UTC) - It won't happen because the editors currently in power are quite happy to let the status quo remain. Any editors they had with radical ideas on changing the system were forced/chased out months ago. FreeThought (talk) 23:47, 5 December 2010 (UTC) - Here's another idea that will never happen. Get rid of the Charter and all the Constables and Councils and the rest of the bureaucratic overhead. Replace it all with two rules: (1) Write good stuff, in particular articles that accurately summarize the academic literature on topics where such exists. (2) If you act like a jerk we'll kick you out. As the project goes on, develop remaining necessary policies as common law, i.e., make policies descriptive rather than prescriptive.
Doctor Dark (talk) 18:38, 5 December 2010 (UTC) - Tendrl:Rules. Just sayin'... ;-) - To its credit, Citizendium does allow non-experts to sign up and contribute; they're called "authors". (An expert can become an "editor", but remains an "author" as well.) It seems the nature of the system wasn't explained well when it was being advertised, because there are a lot of people who think Citizendium is expert-only. Thomas Larsen (talk) 21:33, 5 December 2010 (UTC) - Hmm, sorry about that. I suppose Boris assumed you were checking the site on a more regular basis. :-) You should still be able to create an account with a different name, or I'd be happy to rename your existing account if you'd prefer that. Thomas Larsen (talk) 03:34, 6 December 2010 (UTC) - (It raises some questions about real names, too. I mean, I personally prefer to know who I'm working with, but perhaps enforcing a real-names policy is too high a barrier to participation? A compromise solution would be just to encourage real names, but require them for people in administrative positions—I don't know, and I'm open to suggestions.) Thomas Larsen (talk) 03:41, 6 December 2010 (UTC) - At least don't block people until you know they've seen the warning on their talk page. And enable mediawikiwiki:Manual:$wgBlockAllowsUTEdit. Personally, I think you should drop the real name policy, in fact I think you should allow IP editing - I see you have flagged revisions installed, so you can still control what regular readers see. There are plenty of wikis that have failed because of this siege mentality, Conservapedia, A Storehouse of Knowledge, Citizendium. If one of your readers wants to fix a simple typo, let them, don't make them jump through hoops, because they just won't bother. And the real name policy is pointless unless you require proof that the name is actually real and not made up. 
-- Nx / talk 06:58, 6 December 2010 (UTC) - That would make it too much like Wikipedia, and then people would be asking "why bother?" What Tendrl need to do is quickly settle on a name and a logo. The site as is may turn away potential contributors on first look. Tendrl also needs to find a niche in what they want to do, and do it well. You will attract contributors in that area, then expand out when you have the manpower and resources to do so. FreeThought (talk) 08:56, 6 December 2010 (UTC) - I see some editor was blocked at Tendrl for being called George Bush. OK, so it's the name of an ex-president - but it can't be that uncommon.--BobSpring is sprung! 09:47, 6 December 2010 (UTC) - Indeed, there were at least two of them with that name. FreeThought (talk) 09:55, 6 December 2010 (UTC) [unindent] Human and George Bush were reinstated at Tendrl. Before you laugh: Tendrl is new and its participants have to grope for rules. Personally I'm against a "Charter" chiseled in stone, at least in the beginning of such an undertaking. Let rules grow by trial and error. --P. Wormer (talk) 11:43, 6 December 2010 (UTC) - OK - but it would seem reasonable to allow people to use whatever name they like. Given that it is not possible to realistically enforce "real names", any attempt to do so is simply an invitation for people to point out the absurdity of the "rule" by registering improbable names and insisting they are real.--BobSpring is sprung! 20:54, 6 December 2010 (UTC) Ha![edit] Do you really think that WP would want such a quarrelsome mob? Fricassée (talk) 21:08, 6 December 2010 (UTC) - Hmmmmmm... I was about to go all "Wow, this is a RETARDED idea!" here, but I changed my mind. Make no mistake, the proposal at face value IS silly - it boils down to making CZ a WP component, but without any sign of compromise.
Sorry, but the WP/CZ rules and core policies are radically different, so if you wanted to actually mesh things together even a little bit, CZ would have to agree to compromises that would end up crippling the status quo. - Buuuuuuut... when you look at the gist of it, you will recognize another proposal that was sadly neglected early on: WMF hosting. - If you simply leave out the parts where WP and CZ would interact directly or indirectly, you end up with "WMF hosts CZ, CZ gets to keep its rules and stays alive". Which is a great suggestion, and it's amazing that somebody openly picked up the old "Should WMF host CZ?" idea beyond "Well, we'll think about it once the WMF bows down and asks us nicely". WMF hosting would be an insane stroke of luck when it comes to the finances, and hey, it may even be possible if the MC asks nicely. --Sid (talk) 21:51, 6 December 2010 (UTC) - Wikimedia hosting is plausible, but there's no way Wikipedia would agree to hosting any kind of a semi-autonomous wiki with its own policies within WP itself. Wéáśéĺóíď Methinks it is a Weasel 22:07, 6 December 2010 (UTC) - Personally, I think it's a bad idea for both projects. There are plenty of cheap hosting solutions. They now have over $2,000 in donations. They could easily set up with one of those hosts as well as establish an independent Citizendium Foundation. FreeThought (talk) 23:17, 6 December 2010 (UTC) 2007 all over again[edit] "I asked Andy through private mail, and he told me to rule over you as I see fit. --TK, Senior Administrator. Hahaha, suckers" --Sid (talk) 17:19, 5 December 2010 (UTC) - Potentially related, though filed after the announcement: An "anonymous proposal" to give the Ombudsman more power. Maybe I'm just too pessimistic from my years of watching CP, but this looks a little... weird to me... Can I get a few extra eyes here to maybe explain things?
--Sid (talk) 17:45, 5 December 2010 (UTC) - It seems to be a proposal to sack Daniel by transferring his powers to Gareth. Probably unconstitutional. Peter Jackson 11:40, 6 December 2010 (UTC) - Why would they want to sack Daniel? From what I've seen he's a sensible and agreeable fellow. Doctor Dark (talk) 13:54, 6 December 2010 (UTC) - Yes, that is nonsense. The proposal merely tries to make the Ombudsman's decisions binding, rather than merely guidance that certain %^&*$$# can choose to ignore... 85.72.221.97 (talk) 15:21, 6 December 2010 (UTC) - But the Managing Editor already has that power. Peter Jackson 10:28, 7 December 2010 (UTC) Martin... resigns?[edit] This somehow completely flew under my radar (and seems to be very recent), could somebody bring me up to speed or at least voice a theory? --Sid (talk) 21:56, 6 December 2010 (UTC) - He announced his resignation in the forum, too. larronsicut fur in nocte 21:59, 6 December 2010 (UTC) - The rat leaves the sinking ship, but it might just be what the ship needs to survive. –SuspectedReplicant retire me 22:17, 6 December 2010 (UTC) - There'll always be a home for him at Tendrl - David Gerard (talk) 22:20, 6 December 2010 (UTC) - I thought I saw some talk in the forums that suggested that he left and never came back during the charter drafting phase too. Tmtoulouse (talk) 23:24, 6 December 2010 (UTC) - I believe there are new elections coming up soon for Citizendium. Martin has probably resigned to run again for a new council. FreeThought (talk) 23:39, 6 December 2010 (UTC) - I think he also retired back in the Larry days. Everybody sing... Doctor Dark (talk) 23:41, 6 December 2010 (UTC) - I'm in China & Youtube is blocked here. What's the song? Pashley (talk) 02:03, 7 December 2010 (UTC) - I'm in Germany and Youtube is blocking the song (...XD), but googling the URL tells me that it's "Dan Hicks and His Hot Licks - How Can I Miss You When You Won't Go Away".
--Sid (talk) 02:16, 7 December 2010 (UTC) - There'll always be a home for him at Tendrl... Please have mercy on the fledgling.--P. Wormer (talk) 06:30, 7 December 2010 (UTC) - "This video contains content from Sony Music Entertainment. It is not available in your country." So annoying... larronsicut fur in nocte 07:49, 7 December 2010 (UTC) And the tech side of it[edit] From the EC wiki. My first comment would have been about them changing Martin's password, thus completely locking him out of even changing his preferences or anything and forcing him to be a guest forever, but then I saw the post about the IP ban... and... could our techies translate this for me? (Here is the entire block log, just to show that Peter didn't explicitly block the IP, just to avoid the potentially obvious question.) Because to me this sounds like a VERY odd wiki setup. --Sid (talk) 00:36, 7 December 2010 (UTC) - They have a separate wiki for the Editorial Council. He's not blocked from the main site. Doctor Dark (talk) 01:51, 7 December 2010 (UTC) - Indeed, but that doesn't explain why Peter's ban of Martin apparently banned 127.0.0.1 instead of Martin's IP, does it? It seems to me that Peter did the reasonable thing that also would work as a long-term solution (as opposed to making the sysop basically hack every account that is not part of the current EC - which may be needed a lot since the EC elections are coming up), but that it didn't work because... well... it apparently kinda breaks the wiki? I'm no Luddite, but I'm also no expert in wiki IP blocking and caching setups, so I need somebody to tell me if I'm reading this right. --Sid (talk) 02:12, 7 December 2010 (UTC) "Martin...resigns?" My work here is done! Fricassée (talk) 07:27, 7 December 2010 (UTC) - Is that you Howie? FreeThought (talk) 08:57, 7 December 2010 (UTC) - Not me -- I'm borrowing computer access until I get a new power supply, hopefully tomorrow -- very little access.
I don't really have details on what happened. Howard C. Berkowitz (talk) 19:03, 7 December 2010 (UTC) - Power supply back in, and working this time...when my business partner told me he had a wire left over when he installed it, somehow, I was not surprised when the OS didn't come all the way up. - I don't really see value in refighting battles with Martin, but I do see some indications that people are now working on fresh starts that don't involve personalities. Howard C. Berkowitz (talk) 17:06, 10 December 2010 (UTC) - As far as I am concerned, Berkowitz has fucked CZ. I want no more to do with an 'encyclopedia' where the personal opinions of unqualified people are taken seriously. The place for that is Wikipedia. 85.72.221.97 (talk) 19:51, 10 December 2010 (UTC) - Gee, I must be so powerful. Do I get the privileges of a Bond villain? Oh please...have no more to do with CZ. Howard C. Berkowitz (talk) 19:53, 10 December 2010 (UTC) - Hey 85, why not just put a sock in it, you obnoxious cowardly failure. D.T.F. (talk) 07:19, 11 December 2010 (UTC) - Hey, DTF, don't be an asshole to the BoNs. As if you are one level above them? Everybody say what they want to! Marble-headed poop skater (talk) 08:33, 11 December 2010 (UTC) - Omg DTF you swine, how dare you tell anyone to shut up. Why don't YOU shut up. I shall away to thine talkpage to put you down. You aren't a level above anyone! In fact you are a level below me. And I'm not a level above you either, that would be wrong. And anyway just shut up! That's censorship what you did to that poor 85 because he might listen and think he really had to shut up because you said so. D.T.F. (talk) 11:19, 11 December 2010 (UTC) In passing[edit] Moved from Saloon Bar: [3] No-One (talk) 12:39, 10 December 2010 (UTC) - Is there a world bureaucracy shortage? By the way some at CZ are behaving, you'd think that they were trying to corner the market. No-One (talk) 14:28, 11 December 2010 (UTC) - Cheezits....
so now they're going to have both an Editorial Council and a Committee of Editors? For an active community of what, maybe a couple dozen people? Incredible. Doctor Dark (talk) 15:05, 11 December 2010 (UTC) - It's unlikely that will pass; it was introduced through a mandatory suggestion mechanism. My hope is that sense is breaking out on the Editorial Council, even if I'm Dr. No or Ernst Blofeld but without the Bond Girls. Howard C. Berkowitz (talk) 18:49, 11 December 2010 (UTC) - You're not in the same league as Dr. No or Blofeld - they could act and got well paid to do it. The shambolic ramblings at Citizendium have shown the organisation to be more akin to that of KAOS than SPECTRE. FreeThought (talk) 06:07, 12 December 2010 (UTC) - You misunderstand. I don't claim to be such a villain, but one departed seeker of power seems to believe that I, by myself, am the Antichrist...or something like that. Nostradamus' guy in the Blue Turban? Would you believe he wanted to be the Chief? Howard C. Berkowitz (talk) 08:44, 12 December 2010 (UTC) - Similar structures have been suggested from time to time on Wikipedia. They tend to die out because in a wiki environment there is nothing for them to do. Geni (talk) 19:13, 11 December 2010 (UTC) Re: Ha![edit] (From the archives) :) - Well, it's been over a week, where is their big announcement?? FreeThought (talk) 08:16, 15 December 2010 (UTC) Dana Ullman is back![edit] Every now and then I find myself becoming sympathetic to CZ, but then pseudoscientific obscenities like this pop up and I remember that this place is a cesspit for this kind of negligent quack medicine. As long as this kind of thing is tolerated, and frauds and quacks like Ullman are given positions of power and respect, CZ deserves to rot. Tmtoulouse (talk) 02:38, 7 December 2010 (UTC) - I agree with Trent here, though I think I might make some popcorn and watch Dana Ullman spew his nonsense so that I can add it to our article here.
Audi (talk) 02:48, 7 December 2010 (UTC) - She can't even write well. Lithograph (talk) 05:34, 7 December 2010 (UTC) - Although many high quality studies have been published in leading conventional medical journals and have shown that homeopathic remedies might be effective in particular conditions, there are not enough replication of this research to establish a strong evidence base for homeopathy. One, two, three, many? - In the past, a remedy diluted to more than 12C was assumed to all probability to contain not a single molecule of the original substance. However, new research has confirmed that even extremely diluted homeopathic medicines contain ponderal doses of the original substance.<*ref*>Chikramane PS, Suresh AK, Bellare JR, Kane SG. Extreme homeopathic dilutions retain starting materials: A nanoparticulate perspective. Homeopathy. 2010 Oct;99(4):231-42.</*ref*> The abstract states: - So, the homeopaths making these market samples haven't worked carefully - or did the scientists use large quantities of the remedies - like an ocean? - larronsicut fur in nocte 08:04, 7 December 2010 (UTC) - @Lithograph - Dana Ullman is a boy, not a girl. I was confused by his girl's name at first too. - The diff Trent provided is bad enough, but then there follows a Ken-like series of edits that systematically remove all remaining objectivity from the article, until, rather hilariously, this edit attempts to "res[t]ore...objectivity". I know this is only a draft, but even so it makes me wonder why anybody gives a damn whether CZ lives or dies. –SuspectedReplicant retire me 08:17, 7 December 2010 (UTC) - The kendolling is amusing, but the following edit is hilarious. Also, why do these people allow such a non-expert in anything to write all over their nice expert wiki? Palindrome (talk) 08:43, 7 December 2010 (UTC) - Ask Larry Sanger, he invited Ullman to come to Citizendium. No-one else did.
FreeThought (talk) 08:45, 7 December 2010 (UTC) - Ullman is an expert in homeopathy, & may well be a respected authority within that field, regardless of what mainstream opinion says. This is one of the major flaws of having an expert-centric wiki set-up: it places specialised knowledge, even in quack subjects, above wider credibility & lets those recognised as experts put across their own interpretations of their subject. Wēāŝēīōīď Methinks it is a Weasel 19:09, 7 December 2010 (UTC) - I'm an expert in the field of what my feet smell after 3 hours of football. 05:36, 8 December 2010 (UTC) - The result of this is that CZ, even if working as intended, isn't as good as WP for general users because it won't end up properly wiki-linked. The person who believes ghosts are real will be Editor for the Ghost article and won't want to link "biased" work by people who don't believe because they aren't experts. Imagine if any of the web's rabbit warrens were done this way: TV Tropes pages that just say {favourite show} did the trope first and best, and don't link other examples. Wowhead Alliance quests not linking the equivalent Horde quest because "Horde suck". ED's 4chan page refusing to even mention 2ch. The hours of time-wasting eliminated would not make up for the loss of context. 82.69.171.94 (talk) 12:51, 8 December 2010 (UTC) - See also [1]. Pashley (talk) 03:28, 8 December 2010 (UTC) - Looks like Dana may be run over by the EC: Specific request to approve the Dec 5 version - the one just before the recent edit spree. --Sid (talk) 18:52, 7 December 2010 (UTC) - We can hope. That version is not perfect, but it is significantly better than either the current approved version or the current draft with Ullman's recent edits. Pashley (talk) 03:28, 8 December 2010 (UTC) - Does it have any good stuff we can use to make our article even better?
Equestrian (talk) 04:53, 8 December 2010 (UTC) By unanimous Editorial Council decision, there's now a new approved version, much better than the old version and without Ullman's recent edits. Pashley (talk) 03:30, 14 December 2010 (UTC) Also, the draft has been archived and replaced with a lovely notice. Pashley (talk) 06:33, 17 December 2010 (UTC) Encyclopedia of Life[edit] This is how you do an expert-guided compendium: They also review Wikipedia articles.--ZooGuard (talk) 11:45, 15 December 2010 (UTC) - Encyclopedia of Life has a $50 million budget (with eventual plans for a further $60 million of spending). It's to be expected if it does better than Citizendium. Geni (talk) 21:49, 15 December 2010 (UTC) - I wonder how their budget got so large... Occasionaluse (talk) 22:22, 15 December 2010 (UTC) - Basically they managed to get a bunch of leading science institutions on board [4]. Or rather, a bunch of leading science institutions formed EOL. Geni (talk) 23:12, 15 December 2010 (UTC) - It seems this wiki runs on a hundred or so dollars a month. And it appears to be doing better than CZ. I don't think it's money, it's attitude, and its ramifications, that matter. Burlap bags (talk) 08:52, 17 December 2010 (UTC) - EOL is not even a wiki. They use modified Grid software for their webpages. FreeThought (talk) 09:21, 17 December 2010 (UTC) Building a walled garden[edit] This proposal recently passed by CZ's Editorial Council makes me go "huh?", or as the young people say, "WTF?" It puts CZ in the bizarre position of forbidding its users from importing free content, while at the same time producing content that others (such as WP) are free to import. I'd be interested to hear Howard Berkowitz explain why he voted for this strange proposal. Doctor Dark (talk) 04:37, 13 December 2010 (UTC) - The walled garden is withering and dying. Plenty of manure, but not enough plants or gardeners.
FreeThought (talk) 06:39, 13 December 2010 (UTC) - The EC wiki appears to have done a runner. Oompa loompa (talk) 10:00, 13 December 2010 (UTC) - locke appears down in toto - David Gerard (talk) 10:29, 13 December 2010 (UTC) - Wow, that's absolutely fucking bonkers. –Tom Morris (talk) 17:48, 13 December 2010 (UTC) Their $700/month web hosting is back up now. Doctor Dark (talk) 17:15, 13 December 2010 (UTC) - I'm told Locke just needed a restart to unlocke the EC. - Why did I vote for that? There's no simple, single answer. We had people who would import large numbers of WP articles that needed at least reformatting, and sometimes corrections -- for example, someone did a slight reformat of over 100 cargo ship entries from the Dictionary of American Fighting Ships, put them in WP, copied them to CZ, and then left. In other cases, someone would decide a WP article looked good, not always with expertise on the subject, import it, and essentially leave it for someone else to clean up. - You should note that people may import articles that they primarily wrote at WP, on the theory that they can intelligently restructure them. I've done this, in part taking advantage of different CZ rules on sourcing, original synthesis, etc. - I have not been impressed by WP articles rewritten anew for CZ, outside specific types. For example, it's not unreasonable to import short articles on chemicals, especially if the structural formula is importable -- there's not much else to add. Political, military and historical articles have been much less successful. Yes, there have been times I have taken a WP article into a word processor, color-coded it, and used it as a reminder when writing my own from scratch (i.e., until there is no color left). - We aren't trying to match WP for number of articles. I would rather see, not walled gardens, but rational groups of articles, highly linked directly and via related articles. 
While not all agree with me, I personally like to develop top-down, in a way that is the antithesis of a walled garden. Take Wars of Vietnam, for example: Sanger and Jensen argued with me about it being the Vietnam War, and essentially an American activity. I rather thought the Vietnamese had a bit to do with it, and finally got the American involvement under a broader context. Even there, the Wars of Vietnam primarily begin with 19th-century Western involvement; Dai Viet and the Trung Sisters aren't quite in the same hierarchy. Howard C. Berkowitz (talk) 22:10, 13 December 2010 (UTC) - Do you have any weed to sell? Occasionaluse (talk) 22:14, 13 December 2010 (UTC) - My two cents, mostly to defuse OU's comment a bit: - Au contraire, OU's comment was spot on :) FreeThought (talk) 00:26, 14 December 2010 (UTC) - I found the import suggestion (the one that talked about importing an insane number of "good" articles from WP as a basis to improve) ridiculous, too. Pretty sure that I also said so when it came up initially. It's mostly a question of manpower: CZ barely has enough manpower to even write their own crap and get it approved - but it's more work (and more tedious) to take a long and fleshed-out article, verify every tiny claim with the sources, and then improve it substantially so that it's more than just a simple copy. - So yeah, I would've said "This is a ridiculous idea because of X" in the forum thread, too. Buuuut... reacting to such a suggestion by banning ALL imports except for stuff you wrote yourself anyway and some cases where you beg Hayford The Great and the EC to greenlight it... yeah, no. That's equally ridiculous. This is a classic case where new regulation was simply not necessary. The existing mechanisms were good enough to handle it: - Most cases would simply solve themselves as people would try to import something and then realize that they had bitten off more than they could chew.
- If people just import crap and don't do anything with it, it can be deleted. - If people import something and work on it, but not enough, it can be deleted/rejected/whatever. - But hey, what if somebody managed it? What if somebody imported one of WP's good articles and improved it in a meaningful way? Yeah, well, tough luck. Have fun begging or simply working on the WP article on WP. - See, the only thing that is actually new is that you made a rule against succeeding. Everything else could've been handled informally. And even if there was a pressing need for rules, why not just introduce something like "Imports will be accepted if the imported article (which will be kept in userspace or a subpage for the time being) is substantially improved within 7/14/30 days"? That would work, and you still would allow people to prove themselves as well as getting good articles to CZ. - Long story short, I see a sledgehammer being used to deal with a speck of dust on a window. --Sid (talk) 00:02, 14 December 2010 (UTC) - There are plenty of decent WP articles in my field that could serve as a starting point for expansion, clarification and so on. I did a bit of that during my time at CZ, and I think it was worthwhile. But if I'm forced to start from a blank screen I'll put all that effort into writing a review article for a journal. Doctor Dark (talk) 00:16, 14 December 2010 (UTC) - Yes, there are plenty of decent articles on WP, not all of it is crap. The same can be said for citizendium - some good some bad. To suggest that citizendium is always better than X is bullshit. There are some classic howlers on that site. If citizendium proposed to publish them, good luck - they'll need it. Why any academic would want anything citizendium is offering is another matter entirely. FreeThought (talk) 00:26, 14 December 2010 (UTC) - Sid, those are some reasonable thoughts. Still, as you suggest there's a manpower problem. 
This could be an interim rule, but there were continuing problems of people bringing in WP articles on subjects where they had no particular competence, announcing they thought the article was pretty good, and then suggesting others could collaborate to clean it up. One example caused a hell-freezing-over event: Martin and I agreed it was a poor article and should not be imported. - No one is suggesting CZ is always better or vice versa. What is being suggested is that CZ is trying to constrain the problem, at least in the moderate term. FreeThought, not all my writing is for CZ; there are things I'd send to journals or other venues, especially if they are more opinion pieces. There are areas, in which I consult, where I've been asked by my client not to give away certain details of analysis. - In this case, I defend the decision as a way to deal with specific problems. Now, some of the problems had been, if you will, competing bureaucracies. There had been a rule in place very much like what you suggested, but the Constables didn't want to enforce what they considered a matter of content. The dynamics between the EC and Constabulary, which you may laugh at but are real in the situation, needed a stronger policy. Howard C. Berkowitz (talk) 01:23, 14 December 2010 (UTC) - What have constables got to do with content? Content is under the purview of the editorial council, as I understand it. The role of the constable concerns users, not content. If the role of the constable is broader than that, then Citizendium really is a "police state". FreeThought (talk) 01:43, 14 December 2010 (UTC) (undent) Editors/Editorial Council are not themselves authorized to delete or blank articles; that's purely the role of the Constabulary. Some Constables were resisting Editor orders to delete a badly imported article because they didn't see a rule for it, and they saw their role as check-and-balance. Now, there should be no problem. Howard C.
Berkowitz (talk) 01:58, 14 December 2010 (UTC) - Funny, you just defined the problem (overarching bureaucracy), and claim it is solved by yet another restraint on adding content? Nice work. Blancmange (talk) 04:02, 14 December 2010 (UTC) - Ever had to deal with a fishhook caught in a person? You have to push it in further, until the point comes out, then clip off the barb and remove the hook. In many venues, there are complaints that solutions should be one-step and simplistic. Sometimes, you have to aggravate something before you can cure it. - On the other hand, I'm not convinced that all adding of content is good. We aren't trying to compete on number of articles. There have been some awful articles either imported or started by advocates, and there's been no way to get rid of them -- or sometimes a process that takes months. - Things swung too far in the other direction, when an individual could essentially throw a substance-free tantrum and get an article locked, often an article that was under serious development. Howard C. Berkowitz (talk) 04:12, 14 December 2010 (UTC) - Yes, I tear it out and bandaid it. Aggravating this just, well, aggravates the mess that is CZ and its overwhelming pile of laws and regulations. It takes CZ several months to get rid of unwanted articles? It takes RW 3-7 days, with no rules whatsoever. I'd respond to your third paragraph if it made any sense, sorry, desperately needs copyediting and at least one reference. Blancmange (talk) 04:18, 14 December 2010 (UTC) - There may have been a small number of awful articles imported; not all of them were bad, but on the flipside starting Lemma articles that no-one is interested in expanding hasn't solved CZ's problems. Not everyone on the planet has an obsessive interest in Nazis or Nazism either. FreeThought (talk) 01:45, 20 December 2010 (UTC) - Obsessive?
No, some are also concerned with remembering the past, while watching some Western politicians push their demagoguery against democracy and civil liberties. Some even believe that government secrecy was not discovered by WikiLeaks in Rick's Cafe. A reasonable number of good people, however, like to dig into any subject on which they work, such as in the nicely collaborative social capital article, or Internet protocols, or any of a number of things. - Lemma articles are not necessarily intended to be expanded. Their major purpose is allowing work on Related Articles pages, and they are more a tool for writing than for external use. Their use as pure definitions has gone down since a better metadata generator was implemented. - But, WTF. CZ's problems aren't going to be solved overnight, but at least I'm working on them. Howard C. Berkowitz (talk) 03:33, 20 December 2010 (UTC) - So the purpose of these Nazi articles is to draw a comparison with the present-day Obama administration; you're running out of ideas, Howie. FreeThought (talk) 05:06, 20 December 2010 (UTC) Note that many of CZ's "approved" articles began as WP imports. A few were written by people who now are CZ contributors and thus are acceptable under the new policy, but a good number were not. Given the new policy, will CZ delete these? Doctor Dark (talk) 07:06, 14 December 2010 (UTC) - They seemed to be forgetting that a wiki is a collaborative writing effort. So what if the article is not 100% perfect; you can work on it and edit the article to meet standards. Notice I've not been contributing much at CZ and I have started an article at Tendrl. Hmmmmmmmmmmm....LittleRedWriter (talk) 06:18, 15 December 2010 (UTC) - And the garden walls continue to remain high and unwelcoming.
FreeThought (talk) 23:56, 16 December 2010 (UTC) Tendrl[edit] Just to keep people up-to-date with the progress of Tendrl (a fork of Citizendium that I started about a month ago): - You no longer have to register with your own real name to contribute, although we do encourage that. Pseudonyms are acceptable, and even anonymous (logged-out) editing has been enabled. See [5]. - A number of Citizendium articles—mostly about subjects in mathematics and physics—have been imported. However, our intention is to be a wholly separate and independent project. - We plan to acquire new VPS hosting sometime next week (see [6]). We need to decide on a project name so that we can register a dedicated domain; "Knowino" is one possibility right now. Feedback and suggestions are welcome—see Tendrl:Our name! Once core pages are expanded a bit more and we've moved to VPS hosting, we'll start "advertising" the project actively. Thomas Larsen (talk) 02:02, 14 December 2010 (UTC) - I'm sorry... but there needs to be a major prize for the first person, your good self excepted, who is prepared, in public, to give a flying fuck. –SuspectedReplicant retire me 02:08, 14 December 2010 (UTC) - I'm prepared to give Tendrl the benefit of the doubt. They appear sincere and they don't have the same loser admins that Citizendium have been left with. FreeThought (talk) 02:41, 14 December 2010 (UTC) - Fine, but - and I mean this in the nicest possible way - why should anyone care what you think? What is it about Tendrl that distinguishes it from CZ? - Citizendium failed despite a blaze of media publicity. Tendrl, as far as I can see, is an attempt to do the same thing as CZ but without the publicity. You're already setting up your own bureaucracy, and justifying it in terms like "Well it's not as bad as CZ". - Good luck... but I still don't see why you're trying to exist. –SuspectedReplicant retire me 02:52, 14 December 2010 (UTC) - Wow, you're nasty. 
Why is a fork such a bad idea, even if it turns into a failed experiment? At least they allow regular people to just sign up (or not) and edit. Blancmange (talk) 03:47, 14 December 2010 (UTC) - I'm not involved in setting up Tendrl. Publicity is meaningless unless you have something to sell. Posting the existence of Tendrl on two forums is not in the same league, because x number of websites get mentioned all the time on forums. Don't worry Blancmange, we'll wait for our friend to leave an origami unicorn behind. FreeThought (talk) 04:38, 14 December 2010 (UTC) - Nasty and rather stupid. And unobservant - a cursory look at even just this page would have shown that FreeThought isn't the force behind Tendrl, so preaching at them was just a waste of everyone's time. And don't you just love the sheer arrogance of someone who comes out with "Why should anyone care what you think?" - by that you mean that we ought to care what you think. But do we? Like hell, you are just another ass. Next time try not talking out of it. D.T.F. (talk) 08:19, 14 December 2010 (UTC) - Neither actually, but at least I understand how talk pages work. I wasn't talking (in the first instance) to FreeThought, I was talking to Thomas. Now try answering my questions instead of spreading muck: why should anyone care about Tendrl? I didn't say that it was a bad idea, that was Blancmange putting words into my mouth. Neither do I expect anyone to care what I think, but that still doesn't answer anything. It's good to see the basic idiocy and arrogance of CZ are being included in the fork, though. –SuspectedReplicant retire me 10:11, 14 December 2010 (UTC) - Dear SuspectedReplicant, could you please expand the previous msg a little? What exactly is the basic idiocy and arrogance of Tendrl that you refer to? I contributed a little to Tendrl and would like it not to make the mistakes of earlier Wikis. So, maybe you can help avoiding them by explaining what is done wrong by Tendrl. --P.
Wormer (talk) 13:24, 14 December 2010 (UTC) - So, you understand how talkpages work... but not how to use indentation, obviously, since you replied to FreeThought rather than the person you meant to reply to. And since there isn't much to understand about talkpages except indentation that means that you don't - understand how they work that is. Good grief. D.T.F. (talk) 20:35, 14 December 2010 (UTC) - I understand them perfectly. Being active on several wikis means I have plenty of experience with talk pages, and with the kind of unfathomable mess that results if you reply where you feel like it rather than in order. You, on the other hand, don't seem to have very much experience of the English language, judging from appalling word salads you've posted here so far. At least you're a perfect example of the basic idiocy to which I referred earlier. –SuspectedReplicant retire me 01:05, 15 December 2010 (UTC) - Omg, I'm so injured. To have a self-important wanker that nobody likes criticise me is too much to bear. Goodbye cruel world. D.T.F. (talk) 05:55, 15 December 2010 (UTC) - Remember to slice along the veins, not across. –SuspectedReplicant retire me 11:26, 15 December 2010 (UTC) ← SuspectedReplicant, do you have any specific criticisms of Tendrl? I'm very open to feedback, and you could probably suggest some good improvements. Cheers! Thomas Larsen (talk) 01:28, 15 December 2010 (UTC) - This section would be really interesting if it weren't for Suspected Replicant being a jackass and unclear about who they are responding to. Tendrl will probably die on the vine (unless it grows due to CZ evacuees?), but still, it's an interesting concept. Diamond (talk) 05:33, 15 December 2010 (UTC) - Fuck you too, Diamond. Thomas, the problem is that you're a project waiting for an idea. You say you want a principles fork, but you don't know what these new principles are going to be. You're going to take the best of CZ and WP but you have no objective criteria to define this. 
It's all just so pointless. Using a half-dead wiki as a starting point was always a bad idea too. Why not just help Wikipedia? Anything else is just going to end up in another empty website and a waste of people's time. –SuspectedReplicant retire me 11:26, 15 December 2010 (UTC) - But can you help Wikipedia if it doesn't want to be helped, if it refuses to recognize it has a problem? You can improve some bits of it. But is that actually a good thing if it improves WP's reputation and thereby, indirectly, that of its biased, or just inaccurate, articles? Peter Jackson 17:03, 16 December 2010 (UTC) - That's why I quit contributing to WP in my field of expertise -- to avoid giving dodgy material a veneer of undeserved credibility. It's better for bad articles to be obviously bad. Doctor Dark (talk) 19:07, 16 December 2010 (UTC) - With respect, to me that smells like hubris. Is it your name that would add the veneer of credibility, or would that addition come from the "enhanced quality" of the article itself? Neither one of those strikes me as particularly praiseworthy motivation. There are times when I recuse myself, abstain from jumping into a Wikipedia discussion, but for me it is more like triage, applying my effort where I think it will help. Other times, I sit on my butt because I enjoy being lazy, no rationalization offered. - This is just as good a place as any to say that I consider WP to be a credible thing, on balance. I am reminded of the seeming chaos that forms a honeycomb. Individual workers bumble about, ripping here, pasting there, smoothing things out some other place, and what emerges is a structural marvel, able to contain significant weight of a dense liquid, but built of soft wax, kept soft at body heat. Sorry for the flowery metaphor, but I like that kind of thing.
- Word on the streets I walk (parents of middle- and high-school kids, and the kids themselves) is that WP is a good start for exploring any topic you like, but kids don't dare cite it in their papers. So? When I was a kid, citing an encyclopedia was considered lazy scholarship. Didn't stop us from using them. __ Sprocket J Cogswell (talk) 01:17, 17 December 2010 (UTC) - I think wikipedia is amazing. Don't know what all the whining is about, other than perhaps reporters who do their job badly casting the entire site as potentially full of vandalism? Burlap bags (talk) 05:05, 17 December 2010 (UTC) - A friend put it well: "Wikipedia is good for things that nobody cares about." What he meant was that it's fine for uncontroversial topics where no one has a personal investment. But I work in an area of science that is subject to certain public controversies that have no scientific basis, the kind of twaddle that we cover here at RW. Having to compromise with fringe theorists and wackos (as the WP admins demanded) would lead to the type of articles that I described above: something that looks reasonably credible, but seriously misrepresents the state of the science. Doctor Dark (talk) 05:48, 17 December 2010 (UTC) - Sorry, I guess I meant the other 3 million articles. It does help that I am "old" and know where to expect controversy, I guess. Burlap bags (talk) 05:56, 17 December 2010 (UTC) Inn-teresting. My sense of NPOV is that whack jobs and fringists need not be given undue weight there. Care to point at any pages that deserve scrutiny? Naturally my WP watchlist includes some twaddle-sensitive scientific pages. Can't always lend expert weight to the voices of reason, but it is fun to watch. Sprocket J Cogswell (talk) 06:06, 17 December 2010 (UTC) - Anything related to religion or politics, broadly conceived. E.g. 
history of science is bedevilled by extreme Indian nationalists claiming they invented everything, and citing "reliable sources" published by Indian universities (where you can get degrees in astrology). Peter Jackson 16:42, 17 December 2010 (UTC) - First someone mentions "an area of science that is subject to certain public controversies that have no scientific basis" and then you point at "Anything related to religion or politics, broadly conceived." Sharper focus would be more useful here. I don't care about religion or politics. I find practical empirical matters more interesting. Sprocket J Cogswell (talk) 21:27, 17 December 2010 (UTC) - "compromise with fringe theorists and wackos (as the WP admins demanded)" I have yet to see an example of this. The meme, "example, or it didn't happen" applies here. Unless you show me something relevant, I call bullshit. Hot air. Blowhard. Nonsense. Clear enough? Show me. Sprocket J Cogswell (talk) 03:06, 21 December 2010 (UTC) Expert participation[edit] Citizendium is currently trying to figure out why they aren't attracting and retaining experts. Since the presence of expert editors was advertised as one of the main improvements of Citizendium over Wikipedia, that's understandably a big deal for them. Now, if "an authority who is probably the world expert on implementation, running an open source lab outside academia" (according to Howard Berkowitz) was called an "imbecile" because he "refused to join CZ because he regarded the questions as too intrusive", by the Secretary of the Editorial Council, what kind of self-respecting academic would want to join the project? The Secretary then goes on to presume that the expert "thought our registration requirements were too onerous for his dainty sensibilities and lofty sense of self-esteem". I'm not sure whether to laugh or cry.
I suspect there are two main reasons why experts choose not to join Citizendium, and neither of them have anything to do with imbecility or cretinism: - Citizendium has very little readership outside of its own community - some of its leadership is arrogant, prejudiced, and unwilling to take advice. Of course, the "imbeciles" and "cretins" of Citizendium are welcome to work on Tendrl. We'll probably be getting our own VPS hosting sometime next week, and we certainly won't be dismissing people because they are critical of the project. Thomas Larsen (talk) 04:08, 17 December 2010 (UTC) - The recent condescending forum comments on the citizendium website are a poor reflection of their project, and their editors. The walled garden is now an ivory tower replete with foul-smelling moat. FreeThought (talk) 05:13, 17 December 2010 (UTC) - OMG I'd thought I'd seen it all, but Haystack Penis (I know that's not original) goes completely over the top with: - "Your friend may be the world's leading expert on whatever subject you say he is, but he's also the world's leading imbecile. And you may tell him that I said so, and use my name as well. What a cretin! And if all the other myriad people you tell us about share his feelings (which I seriously doubt, by the way), then they are cretins too. - And we are better off without them." - I thought that kind of language was bannable on CZ? Who on earth would want to join after reading that comment by one of their most eminent experts? Wasn't he the guy who "hosted" the fundraiser? Burlap bags (talk) 05:34, 17 December 2010 (UTC) - No, he wasn't the guy who hosted the fundraiser. From memory, it was Milton. Thomas Larsen (talk) 07:00, 17 December 2010 (UTC) - My apologies. I'm new to this crowd of academic experts and their navel-gazing. Still, what that guy typed wasn't exactly "collegial", let alone polite, was it? Didn't he call someone an imbecile and a cretin for not wanting to join his private club? 
Burlap bags (talk) 08:50, 17 December 2010 (UTC) - Can we stop with the idea that everyone on Citizendium is an "expert"? It's not Nupedia. Anyone can participate, if they are willing to go through the sign-up form. They just can't be editors. Unless they are homeopathic quacks. —Tom Morris (talk) 18:07, 18 December 2010 (UTC) - Aren't they all supposed to be "polite" or something, at least? Sister golden hair (talk) 06:13, 19 December 2010 (UTC) Knowino/Tendrl[edit] I copied this section to Talk:Knowino. 193.200.150.137 (talk) 13:44, 22 December 2010 (UTC) CZ approved articles vs. Wikipedia[edit] Very much is said about the differences between CZ and WP, how only a few articles have been approved, but of these only Homeopathy has received much attention. Now, out of the 155 approved articles, how many are really better, more reliable and exhaustive than the corresponding ones at Wikipedia? In my brief random survey I found some similar and some clearly worse than Wikipedia's articles. Most were more "concise" (à la Schlafly). I stayed away from political or social articles though. Editor at CPmały książe 08:07, 15 December 2010 (UTC) - There is a comparison of approved Citizendium and corresponding English Wikipedia articles on w:Wikipedia:WikiProject Citizendium Porting#Articles. Approved Citizendium articles are quite good for a wiki-encyclopedia of that size, say, 20-40 thousands of articles, 0,5-2 millions of edits. I've randomly clicked on several mainpage-featured articles from Wikipedias of comparable size from List of Wikipedias, and most of them at first sight are not as good as an average approved Citizendium article or corresponding en-wiki article. Trycatch (talk) 10:52, 15 December 2010 (UTC) - I went looking for articles on subjects that interest me, and found a lot of "lemmas." Where I did find an article of any length, it was still in draft. How long until CZ becomes a useful, truly compendious resource? What's the word I'm looking for...? 
Ah, yes, "weak." Sprocket J Cogswell (talk) 01:24, 17 December 2010 (UTC) - What on earth are "lemmas"? Aren't they bits of proofs that are on the other blackboard, or something like that? Burlap bags (talk) 08:54, 17 December 2010 (UTC) - The average article length for a CZ article is roughly three sentences long, and is declining. The presence of large numbers of lemma articles that users start but don't expand, doesn't improve that average. FreeThought (talk) 09:27, 17 December 2010 (UTC) - What sensible folk call stubs, over there they call "lemmas." Seems to me like precious obfuscation for the sake of feeling superior to the οἱ πολλοί. Not an appealing presentation in my view. Sprocket J Cogswell (talk) 15:29, 17 December 2010 (UTC) - Oddly enough CZ has stubs too. During the time that I wrote regularly for CZ, lemmas were introduced (for me out of the blue) and I never quite understood them. It seems that they are a definition without a main article. CZ has the idea of a "cluster" consisting of a main article and some supporting articles, such as a definition, external links and so forth. As far as I understand it, lemmas have the purpose of easily getting rid of red links (links in other articles to articles not yet written). I disliked lemmas, first because they are organized with complicated templates, and second because I got much of my inspiration from red links. All of the sudden red links were gone in articles of interest to me. At the same time stubs and short articles were banned from the "Random Pages", which were another source of inspiration to me. --P. Wormer (talk) 15:59, 17 December 2010 (UTC) - Well, at least I get to keep my "unappealing presentation" intact, and add "uninspiring, hence mildly dysfunctional." Thanks for the informed clarification. Sprocket J Cogswell (talk) 16:26, 17 December 2010 (UTC) - Impressive, citing hoi polloi in Greek. Unfortunately for you, hoi is a definite article, so the hoi polloi is nonsense. 
Peter Jackson 16:45, 17 December 2010 (UTC) - Unfortunately for you, I speak plain English, where "the hoi polloi" makes perfect sense, spelt in Greek numbers/letters irregardless. Beyond critiquing style, do you have anything to say of substance? Sprocket J Cogswell (talk) 21:20, 17 December 2010 (UTC) - You speak plain English... except when you improperly use foreign words in replacement? Way to try and look cleverer than you are. Problem is, most people who try and look cleverer than they are fail. D.T.F. (talk) 16:04, 18 December 2010 (UTC) - O bugger, found out again. I admit that my Greek mostly covers things like wavelength and autocorrelation, one letter at a time. Once I've said καλημέρα, I have used up all my spoken Greek. Can do a bit better in Türkçe, and unlike at least one professor I had, I have never made the mistake of calling a lowercase η a "ν". In English, it is "the hoi polloi" and has been so for AFOAL time. There's an odd replacement for you to try and correct. - Thanks to that Jackson fellow, I now have another particle of familiarity with yet another idiom. No thanks to you, we are straying further from the topic of CZ sucking, and sucking mightily. All the ad hominem style over substance crap you fling will not change that. Sprocket J Cogswell (talk) 16:54, 18 December 2010 (UTC) - "'Twill fill with joy and madness stark the hoi polloi" - Iolanthe, Gilbert and Sullivan. In some of the Victorian scores, "hoi polloi" is even written in Greek letters. 86.179.219.80 (talk) 19:05, 8 January 2011 (UTC) - I don't think we're straying that far. CZ sucks, everyone knows that already. Noone, even from CZ, is going around saying CZ is great. It has problems. It isn't even very big, nor does it have many contributors. So why have this page at all? Considering this is about the only active discussion on the internet about CZ, isn't this page just drawing attention to something that could best be forgotten? 
Well, not really - if it is it is negative attention, and noone even cares enough about CZ to wish it dead, so that can't be it. Do we want to teach CZ how to do things better? Nope, that has been tried, they don't accept advice readily and when they do they try to make out it was their idea all along, and we all accept that CZ is a flawed concept so who here cares if they live anyways? No, the point is not to come here and wallow in CZs misery but to try and learn something from their mistakes, to have a record of what went wrong over there so that future encyclopedia builders may avoid the pitfalls CZ lumbered into, and pitfall #1 was them thinking they were cleverer than they were. D.T.F. (talk) 18:29, 18 December 2010 (UTC) Agreement, in general terms, is what you will get from me about that. In their case, it appears that the particular strain of "thinking they were cleverer than they were" most responsible for the sluggish performance is similar to what led to the downfall of the late unlamented CCCP (Ha! I was able to type that in all Roman characters, he said, smugly) which was the delusion that the (self-selected) central planners were cleverer than the populace. In the corners of Wikipedia I inhabit, there seem to be enough knowledgeable level-headed contributors to keep the noise from the wandals and lunatics down to a level which does not corrupt the useful signal too badly. Some days it seems that most effort goes toward maintenance and counterwandalism, which can be discouraging. All the same, additions appear, and are hammered into shape. That site is growing rapidly enough, with no need to toot a horn about it. Sorry for the rambling. What I mean to say is that it might be more fruitful to study what it is about Wikipedia that is being done right, that does in fact attract throngs of useful contributors. I'll start by suggesting that consensus, both as an ideal and as a messy practice, has a lot to do with it. 
I have not seen anything resembling admins demanding "compromise with fringe theorists and wackos" as described above. I believe such demands may often be met with vigorous resistance. Thanks for your thoughtful response, Sprocket J Cogswell (talk) 21:57, 18 December 2010 (UTC) - Likewise - you have made a very important point, a connection I didn't make. CZ is obsessed with what other wikis have done wrong yet does not take full account of what they have done right. Perhaps it is that fundamental negativity that has gotten them where they are. So, this page is not just to learn from their mistakes but to learn from their successes. Cheers for that. D.T.F. (talk) 06:09, 19 December 2010 (UTC) - I think there has also been a fair amount of RW trying to help CZ right their boat, but many are now archives here, and in their forums, linked to from archives here. Suggestions for cheaper hosting, attracting editors, etc. Sister golden hair (talk) 06:26, 19 December 2010 (UTC) - I'm the first to admit that all is not perfect at CZ. At times, it's damned aggravating, even if one sees some of the "internals". - My point in being here is not to defend CZ, but both to gain information -- some opinions are useful from my perspective while others are not -- and, occasionally, to share some information on things that CZ may have done right. Unfortunately, some of the things, still evolving, that are done well are not made obvious in a way that lets the general user exploit them, although they can be very useful to an author. Lemmas, for example, were conceived for some very specific reasons, one of which was making Related Articles pages more useful. - Really, I try not to take the fights with me. I'm human and don't always succeed. Equally, being human, it is sometimes useful to find others seeing as crap what I see as crap. Can it be fixed? Honestly, I don't know. 
If I didn't have some reason to believe that it might, I wouldn't be continuing to add content where some others are more concerned, it seems, with tangents or political gaming. In a case or two, I'm hoping that some people that might be jerks now could return to being the reasonable people they were when I first met them. - My ideal wiki, at this point, is more Damifino rather than Knowino (sorry, Tom -- couldn't resist). Howard C. Berkowitz (talk) 08:21, 22 December 2010 (UTC) - Haha! I don't mind, and I actually found that funny. :-) Thomas Larsen (talk) 08:26, 22 December 2010 (UTC) Homeopathy resurrected[edit] It appears a previous version of the much disputed CZ Homeopathy article has been copied by CAM advocates to .FreeThought (talk) 01:48, 25 December 2010 (UTC) - I'm glad I swallowed a last bite before reading this. The link looked familiar, and then I recognized it. Ramanand created a Forum thread called "Appeal to Editorial Council", in which, when challenged, he gave a link to an article that established that homeopathy works. That's the link. So, he's suggesting that CZ use what it's already rejected as a source to tell it that it should accept what it's rejecting, or something like that. - Apropos of his whining on the Forum thread, I just asked him directly, rather than talking about it here, why don't you actually appeal to the EC whatever it is (I'm really not sure) that you want to appeal? It might be that he isn't being made a Health Sciences Editor, or something entirely different. - He was also telling us that homeopathic and osteopathic medical schools are identical in the US. Ummm...no. To some extent, true in the UK, where osteopaths (not osteopathic physicians) have a very limited curriculum. For most practical purposes, the difference between osteopathic and conventional schools (and residency programs) is historical only. 
Some of the best science-based physicians I know have DO, not MD degrees, although as one put it, the only difference is that the osteopathic schools teach the same curriculum as the MD schools, except they add a few good techniques they stole from the chiropractors. Howard C. Berkowitz (talk) 02:49, 25 December 2010 (UTC) - Just checked out the page views on both articles: Citizendium (3,857 views), Wiki 4 CAM (12,828 views). Not an isolated page by any means. FreeThought (talk) 03:20, 25 December 2010 (UTC) - Unfortunately neither CZ nor Wiki4Cam give the time period for which the pageview numbers apply. The past month? Past year? Since the article's existence? Doctor Dark (talk) 03:31, 25 December 2010 (UTC) - No, he's saying that (where he's from, presumably India) Homeopathy is counted as medicine, and as a result Homeopaths get medical training, just as in the US (but not most countries) Osteopaths get medical training. The result will be that an Indian "homeopath" ends up acting a lot more like an MD. Without having attended an Indian medical school I can't say whether that's true, but it's plausible. Of course there'd still be crazy people, but that happens even with MDs, some of them start giving chelation to small children or zapping people with imaginary force fields. 82.69.171.94 (talk) 15:09, 30 December 2010 (UTC) Elitist?[edit] A number of people here really don't seem to grasp the fact that writing an encyclopedia is essentially an "elitist" activity and cannot, by definition, be anything else. And, if you don't want constant disruption, then you're going to have to keep the "non-elite" out. Now, I'm not sure how that really relates to Dan Nessett's post just prior to it: Dan was, essentially, saying that Citizendium needs to be nicer to contributors. But Hayford doesn't seem to grasp the fact that building a public wiki must be a non-elitist activity.
I've got no objections to getting "elites" involved: indeed, experts will play a very important role in Knowino. But when you disenfranchise everyone else, when you get caught up in arguments over who is actually "elite", you miss the whole point of contributing to a wiki—in fact, I think you miss the whole point of a wiki encyclopedia. Most people who contribute to wikis do so for fun. They do it because they enjoy writing, or because they like the company of other similarly-minded nerds. ;-) Why has Citizendium lost so many contributors? That is not a difficult question to answer. In some cases, it's been because they simply don't enjoy writing according to Citizendium's style. And, well, if people leave because they can't contribute journal articles, so be it. But the real tragedy is when a person is alienated from the project due to mistreatment, incivility, and a perception—however true—of endless infighting. And the Secretary of the Editorial Council even goes so far as to call an expert an "imbecile" because he wouldn't sign up due to the registration questions. Not only is Citizendium becoming a top-down oligarchy, it's alienating experts as well as amateurs. Thomas Larsen (talk) 22:44, 30 December 2010 (UTC) - This is not exactly breaking news. I'm not one to say "I told you so" but... OK, I am. We've focused on alienation of experts in our Citizendium article for quite some time now. Doctor Dark (talk) 02:23, 31 December 2010 (UTC) - Oh, I know. :-) I just find it interesting that, even now, as Citizendium begins to sink deeper into the Bog of Lost Sites, one or two people who happen to hold senior positions there think the solution is to alienate even more people. It's rather sad, really. Thomas Larsen (talk) 03:09, 31 December 2010 (UTC) - I think it's sadder Peirce doesn't realise that many draftees deliberately failed that test so as to *not* get into the military - we are talking here the Vietnam War. 
I've mentioned this a number of times before, but there are some individuals on their councils who should never have been there in the first place. Their attitude is not suited for collaborative projects. FreeThought (talk) 03:25, 31 December 2010 (UTC) - Tim Starling wrote about editors with a battlefront mentality. When I read Peirce's reaction to Tim's comment I was reminded of the Dutch Chapter of the Hell's Angels. A few years ago I saw a live talk show on Dutch national television in which somebody argued that the HA's are violent types. A few minutes later, still during the program, a few HA's forced their way into the studio and hit the guy who said that, shouting that they were not violent.--P. Wormer (talk) 11:53, 31 December 2010 (UTC) - A rather more serious example is those Muslims who threaten to assassinate people who accuse Islam or Muhammad of violence. Peter Jackson 12:02, 4 January 2011 (UTC) And the hits keep coming...[edit] Have a look at CZ's latest gem, their article on Pranic Healing. Here we are helpfully informed of the "internationally acclaimed" book Pranic Psychic Self-Defense for Home and Office. Like the Bible and toilet paper, no family should be without it. I love this stuff. No parody could come even close. Doctor Dark (talk) 04:33, 6 January 2011 (UTC) - Wha-? Oh, Ramanand. /me nominates for speedy delete. —Tom Morris (talk) 12:13, 6 January 2011 (UTC) - As has Gareth. As soon as I saw it, I sent an email to the Managing Editor. - Again, though, the EC is voting against specific articles of mine, clearly a much greater threat. Howard C. Berkowitz (talk) 19:44, 6 January 2011 (UTC) - You and your articles are not special. FreeThought (talk) 04:22, 13 January 2011 (UTC) - This is great comedy material: - Master Choa Kok Sui developed Arhatic Yoga®, a system of practices to help seekers on the spiritual path expand their consciousness and work toward achieving soul-realization.
Arhatic Yoga is a scientific synthesis of yogic practices to safely accelerate the spiritual development of practitioners. The practice of Arhatic Yoga is said to balance aspects of Universal Love, Intelligence and Will. Practitioners aim to develop higher intuition, advanced mental faculties, qualities of good character and progress towards realizing their true potential as Beings of Divine Love, Light and Power. - It really is magnificent first-order bullshit. It would almost be a shame to delete it.--BobSpring is sprung! 21:42, 6 January 2011 (UTC) - What is it with these people that they can't just stick with the spiritual aspects but have to go pretending it's "scientific"? You see this all the time with New Agers and similar folk. Doctor Dark (talk) 23:13, 6 January 2011 (UTC) - But even before they mention science we have: "to help seekers on the spiritual path expand their consciousness and work toward achieving soul-realization." It's gobbledygook isn't it? What the hell is "soul-realization"?--BobSpring is sprung! 23:53, 6 January 2011 (UTC) - Realization you've been had? *lol* Just got to love the double-speak - like yogic flying is actually jumping up and down on your bum *lol*. FreeThought (talk) 09:06, 13 January 2011 (UTC) - Freethought, you misconstrue that my articles or actions are the issue. The issue is that the EC should be focused on structural issues and general improvement rather than individuals. During the Charter process, I disagreed with creation of a Managing Editor position because it could repeat the problems of Sanger as arbitrary decider; Daniel has not abused his role. The Charter also says that disputes are to be settled at the lowest level possible, starting with discipline-specific Editors. 
- When the EC spends significant time policing individual work, early in the process, and not even calling for History Editors to determine if a Historiography section is or is not appropriate in a historical article, I suggest there is a problem of priorities. If EC members are upset about "article structure", is it not appropriate for them to be developing or approving general guidelines for such structure, rather than simply saying they don't like ill-defined problems in a specific article? - Incidentally, Pranic Healing has been deleted. The issues of neutrality and fringe policy won't be solved overnight, but that's something on which the EC should work. Howard C. Berkowitz (talk) 04:32, 13 January 2011 (UTC) - Your complaints are misdirected. You're far better bringing this up on citizendium rather than here, as those involved on the council will misconstrue your intentions. RationalWiki advice to those on citizendium is I suspect as welcome as finding a turd in a punchbowl. FreeThought (talk) 09:06, 13 January 2011 (UTC) - Looks like the homeopaths have moved to Memory of water, by the way. Ullman's active on the talk page there. Not much yet, but it's looking to start up as the same sort of thing - small, initial studies brought frth as proof science is wrong. Aconite (talk) 04:37, 13 January 2011 (UTC)
https://rationalwiki.org/wiki/RationalWiki_talk:Nothing_is_going_on_at_Citizendium/Archive3
Subject: Re: [boost] Scoped Enum Emulation
From: Stewart, Robert (Robert.Stewart_at_[hidden])
Date: 2012-01-25 07:26:14

Vicente J. Botet wrote:
> On 24/01/12 22:51, Beman Dawes wrote:
> >
> > What do others think? Should /boost/detail/scoped_enum_emulation.hpp
> > functionality be moved to config?
> >
> > Are there any improvements that would make the emulation
> > better, without turning something simple into something
> > complex?
>
> I'm working with a different emulation which uses classes.
> For example
>
> // enum class cv_status;
> BOOST_DECLARE_STRONG_ENUM_BEGIN(cv_status)
> {
>     no_timeout,
>     timeout
> };
> BOOST_DECLARE_STRONG_ENUM_END(cv_status)
>
> The macros are defined as follows:
>
> #ifdef BOOST_NO_SCOPED_ENUMS
> #define BOOST_DECLARE_STRONG_ENUM_BEGIN(x) \
>     struct x { \
>         enum enum_type
>
> #define BOOST_DECLARE_STRONG_ENUM_END(x) \
>         enum_type v_; \
>         inline x() {} \
>         inline x(enum_type v) : v_(v) {} \
>         inline operator int() const {return v_;} \
>         friend inline bool operator ==(x lhs, int rhs) {return lhs.v_==rhs;} \
>         friend inline bool operator ==(int lhs, x rhs) {return lhs==rhs.v_;} \
>         friend inline bool operator !=(x lhs, int rhs) {return lhs.v_!=rhs;} \
>         friend inline bool operator !=(int lhs, x rhs) {return lhs!=rhs.v_;} \
>     };
>
> #define BOOST_STRONG_ENUM_NATIVE(x) x::enum_type
> #else // BOOST_NO_SCOPED_ENUMS
> #define BOOST_DECLARE_STRONG_ENUM_BEGIN(x) enum class x
> #define BOOST_DECLARE_STRONG_ENUM_END(x)
> #define BOOST_STRONG_ENUM_NATIVE(x) x
> #endif // BOOST_NO_SCOPED_ENUMS
>
> While this is not yet a complete emulation of scoped enums,
> it has the advantage that there is no need to use a macro to
> name the strong type.

It looks decent, but shouldn't int be a computed type based upon the size and signed-ness of enum_type? Of course, you could also provide macros to specify the underlying type and use that, instead of enum_type, as the type of v_.
That would increase compatibility with strongly typed enums in C++11.

Why convert implicitly to int rather than to enum_type? You could convert to the computed type I mentioned above, but converting to enum_type would ensure the most appropriate conversions, wouldn't it?

It might also be good to put the semicolon closing the enumerated type definition into the _END macro. That would increase symmetry and more strongly tie the _END macro to the construct:

BOOST_DECLARE_STRONG_ENUM_BEGIN(cv_status)
{
    no_timeout,
    timeout
}
BOOST_DECLARE_STRONG_ENUM_END(cv_status)

It would be nice if the macros would produce strongly typed enums, when available, and devolve to emulation when not. It might also prove useful to have a macro to forward declare them: BOOST_FORWARD_DECLARE_STRONG_ENUM(name). That would forward declare a normal class, when emulating, and forward declare the enum class otherwise.
https://lists.boost.org/Archives/boost/2012/01/189847.php
Ticket #6627 (closed Bugs: fixed)

nonfinite_num_put formatting of 0.0 is incorrect

Description

We noticed time strings changed from displaying 00:00.00 to 00:00000 after installing the nonfinite_num facets globally. It appears that formatting for 0.0 isn't behaving as it should when precision and fixed are involved (and potentially others).

#include <boost/math/special_functions/nonfinite_num_facets.hpp>
#include <iomanip>
#include <iostream>
#include <sstream>

void writeZero( std::ostream& stream )
{
    stream << std::fixed << std::setw( 5 ) << std::setfill( '0' )
           << std::setprecision( 2 ) << 0.0;
}

int main()
{
    std::stringstream standardStream;
    writeZero( standardStream );
    std::cout << standardStream.str() << std::endl;

    std::stringstream nonfiniteStream;
    std::locale nonfiniteNumLocale( std::locale(), new boost::math::nonfinite_num_put< char >() );
    nonfiniteStream.imbue( nonfiniteNumLocale );
    writeZero( nonfiniteStream );
    std::cout << nonfiniteStream.str() << std::endl;

    return 0;
}

// OUTPUT:
//
// 00.00
// 00000

Attachments

Change History

comment:3 Changed 5 years ago

comment:4 Changed 5 years ago by krwalker@…

Do you have those reversed?

int main()
{
    std::locale::global( std::locale( std::locale(), new boost::math::nonfinite_num_put< char >() ) );

    writeZero( std::cout );
    std::cout << std::endl;                   // 00.00

    std::stringstream astream;
    writeZero( astream );
    std::cout << astream.str() << std::endl;  // 00000

    return 0;
}

"Changing the global locale does not change the locales of pre-existing streams. If you want to imbue the new global locale on cout, you should call std::cout.imbue(locale()) after calling std::locale::global()." --

comment:5 Changed 5 years ago by krwalker@…

I may have a patch that makes the FP_ZERO case only output the '-' if ( flags_ & signed_zero ). It then reduces iosb.width() by 1 and delegates to std::num_put.
Changed 5 years ago by krwalker@…
- attachment nonfinite_num_put_zero_formatting.patch added

comment:6 Changed 5 years ago by pbristow

OK - if you are keen to fix this properly, I'll try to get to look at this more closely, including writing all the several test cases to check I haven't introduced another bug in fixing this one. You could also email me privately.

Changed 5 years ago by pbristow
- attachment nonfinite_num_facets_formatting_2.hpp.patch added

Patch to try to treat unsigned zero as a normal value.

comment:7 follow-up: ↓ 8 Changed 5 years ago by pbristow

I have yet to fully understand how this code is intended to work. Your patch seems rather complicated? Does my 2nd patch (that simply aims to treat an unsigned zero as a normal value) work for you? (While I write some tests.)

comment:8 in reply to: ↑ 7 Changed 5 years ago by pbristow

I have now written a much larger collection of tests (though there are an almost infinite number of possible combinations of ostream options like precision, width, left, right, internal, showpos, showpoint :-( ) and committed a revised version (using code from KR Walker) that passes these tests on MSVC 10 with the signed_zero flag set (and also handles inf and nan as before). I am not entirely convinced that there is really a need for a signed_zero flag at all. Are there really any platforms/libraries still out there that do not output negative zero correctly as -0? Feedback welcome.
comment:9 Changed 5 years ago by pbristow

Tests reveal that, for some combinations of options, there is a difference in output between the Dinkumware STL and other STL implementations, for example:

#if defined(_CPPLIB_VER) && (_CPPLIB_VER >= 306)
  // Dinkumware outputs "0.00"
  CHECKOUT(std::showpoint << 0., "0.000000");                        // std::setprecision(6)
  CHECKOUT(std::setprecision(2) << std::showpoint << 0., "0.00");
#else
  // others output "0.0"
  CHECKOUT(std::showpoint << 0., "0.00000");                         // std::setprecision(6)
  CHECKOUT(std::setprecision(2) << std::showpoint << 0., "0.0");
#endif

I am unclear if either of these is 'wrong' according to the C/C++ Standards, but this is outside the control of Boost.Math code, so I propose to declare this fixed in trunk. (There are also some STL libraries that use two exponent digits rather than three - this is standards conformant.)

comment:10 Changed 4 years ago by pbristow
- Status changed from new to closed
- Version changed from Boost 1.48.0 to Boost 1.50.0
- Resolution set to fixed

Since the C++ IO Library Standard is somewhat permissive in output, and the support for signed zero is platform dependent, it is not believed possible to provide a solution which is exactly portable (producing identical output) over all platforms. It is probably wise to avoid using the signed_zero flag.
https://svn.boost.org/trac/boost/ticket/6627
Wikibooks:New page patrol

What is new page patrol?

New page patrol means watching the new page feed to find pages which are vandalism, nonsense, spam, misnamed, don't fit within project scope, are duplicates, etc. In the past this has been done entirely manually, which meant lots of work was duplicated. A new software feature allows us to make sure efforts don't overlap.

What's the new software feature?

New page patrol has long been a maintenance task Wikibookians performed, but a new software feature that was enabled recently makes division of labour easier. Until the feature was enabled, there was no way to tell which pages on Special:Newpages had already been looked at by another user. This feature allows users with the sysop, bot, and patroller rights to mark pages as patrolled. Patrolled pages appear on a white background on Special:Newpages; unpatrolled pages appear highlighted in yellow. You can hide already-patrolled pages, and you can sort by namespace (the "all" option is now fixed too).

How do I mark a page as patrolled?

To mark a page as patrolled, you must open it from the link provided on Special:Newpages. Towards the bottom right corner of the page, a "mark this page as patrolled" link appears. Click it to patrol the page. You can now click the link to go back to Special:Newpages to continue patrolling, or you can click the edit tab to edit the page if needed. It's best to mark a page as patrolled, then go back and tag it/fix it/whatever. Admins, bots and patrollers have all pages they create automatically marked as patrolled.

What do I need to know to do new page patrol?

You need to read and understand the following policies: inclusion policy, deletion policy, naming policy.
You must also understand how to use the following templates properly:
- {{delete|reason}}
- {{cleanup|reason}}
- {{copyvio|source}} and {{subst:nothanks|Module|~~~~}}
- {{query}} and {{subst:query notice|Module|~~~~}}
- {{transwiki|WikiWiki}}
- {{rename|Book Name}}
- {{dupe|module}}
- {{subst:test|~~~~}}
- {{rfd}} and {{subst:rfd warning|Book|~~~~}}

If you're also going to patrol new images, you'll also need to use:
- {{subst:bfu}} and {{subst:Image badfairuse|Image:Name.ext|~~~~}}
- {{subst:nfur}} and {{subst:Image fairuse|Image:Name.ext|~~~~}}
- {{subst:nld}} and {{subst:Image copyright|Image:Name.ext|~~~~}}

What should I mark as patrolled?

- Mark as patrolled
  - All pages that meet the requirements in our inclusion and naming policies
  - All pages that don't meet those requirements, but have been tagged or otherwise dealt with
  - Remember to mark pages as patrolled, then go back and tag them or fix them or whatever (this ensures that you don't forget to go back to patrol the page, but also makes sure that it's marked as taken care of as soon as you start - there won't be duplication of efforts).
- Don't mark as patrolled
  - Anything you want a 2nd opinion on
- Patrol only these namespaces: Main, Wikibooks, Cookbook, Wikijunior. Watch for strange stuff in the other namespaces, but you don't have to patrol them.

How do I get patroller rights?

Once you've read and understood this page, and the policies and templates linked from it, list your request on WB:RFP, where an admin will process it. You should do some new page patrolling without the right so other new page patrollers can see your work. This lets us gauge whether you know what you're doing before letting you mark pages as patrolled, because once you have the right, nobody is likely to check your work. Remember that patrolled pages can be hidden, and patrollers rarely examine previously-patrolled pages.
See also

- Help:Tracking changes#Reviewing pages
- Wikibooks:User rights
- AJAX Patrolling: allows faster patrolling of new pages by making the [mark as patrolled] link AJAX. Enable this on the Gadgets tab of my preferences.
- User:Mike.lifeguard/Twinkle Speedy documentation - enable this on the Gadgets tab of my preferences.
- Twinkle Speedy: Administrators and patrollers may use this to help make new page patrolling and related tasks easier using the familiar Twinkle interface. The script cannot be used unless you have the sysop or patrol rights; the script will fail in Internet Explorer, has been tested in Firefox, and may work in Opera, Camino, or Safari. As with the original Twinkle script, all edits made are your responsibility regardless of bugs in the script. See the full documentation.
https://en.m.wikibooks.org/wiki/Wikibooks:New_page_patrol
#include <KCompositeJob>

Detailed Description

The base class for all jobs able to be composed of one or more subjobs.

Definition at line 35 of file kcompositejob.h.

Constructor & Destructor Documentation

Creates a new KCompositeJob object. Definition at line 31 of file kcompositejob.cpp.

Destroys a KCompositeJob object. Definition at line 45 of file kcompositejob.cpp.

Clears the list of subjobs. Note that this will not delete the subjobs. Ownership of the subjobs is passed on to the caller. Definition at line 85 of file kcompositejob.cpp.

Checks if this job has subjobs running.
- Returns: true if we still have subjobs running, false otherwise

Definition at line 62 of file kcompositejob.cpp.

Forward signal from subjob.
- See also: infoMessage()

Definition at line 111 of file kcompositejob.cpp.

Called whenever a subjob finishes. Default implementation checks for errors and propagates to parent job, and in all cases it calls removeSubjob. Reimplemented in KIO::TransferJob. Definition at line 96 of file kcompositejob.cpp.

Retrieves the list of the subjobs.
- Returns: the full list of sub jobs

Definition at line 80 of file kcompositejob.cpp.

The documentation for this class was generated from the following files:

Documentation copyright © 1996-2019 The KDE developers. Generated on Sun Oct 13 2019 03:28:17 by doxygen 1.8.11, written by Dimitri van Heesch, © 1997-2006. KDE's Doxygen guidelines are available online.
https://api.kde.org/frameworks/kcoreaddons/html/classKCompositeJob.html
Timeline Oct 10, 2011: - 5:41 PM Changeset [12656] by - Making a sandbox copy for RFC 77 development work. - 9:22 AM RenderingOsmData edited by - (diff) - 8:53 AM Changeset [12655] by - mapscript OWSRequest->type is now initialized with MS_GET_REQUEST. … - 5:05 AM Changeset [12654] by - shortcut png transform when alpha == 255 - 3:10 AM Ticket #4046 (GD autoconf check puts default locations in gcc search paths) closed by - fixed: fixed in r12653 - 3:09 AM Changeset [12653] by - remove default compiler search paths from the GD CFLAGS/LDFLAGS (#4046) - 3:07 AM Ticket #4046 (GD autoconf check puts default locations in gcc search paths) created by - The GD configure script will put: - /usr/include or … Oct 9, 2011: - 11:38 PM Ticket #4045 (Allow for fixed date/time expires settings in mapcache) created by - Currently <expires> and <auto_expire> are set in seconds after tile … - 4:08 AM Changeset [12652] by - more mapcache documentation Oct 8, 2011: - 12:19 PM Ticket #3701 (Allow setting an attribute so content can be displayed when layers ...) closed by - fixed: Fixed in 6.0 and trunk documentation in r12651. Updated … - 12:13 PM Changeset [12651] by - Added documentation on nodata to the template_output document (#3701). … - 10:24 AM Changeset [12650] by - add mod_fcgid config info - 10:04 AM Changeset [12649] by - configure: make --with-exempi works out-of-the-box when xmp.h is … - 9:21 AM Changeset [12648] by - add fastcgi instructions, and internal refs - 9:05 AM Changeset [12647] by - avoid unused function warning when tiff write support not enabled - 3:18 AM Changeset [12646] by - Added documentation of the runtime substitution parameters … - 2:08 AM Changeset [12645] by - Fixed som typos (xml_<namespace> -> xmp_<namespace>) and some … - 2:02 AM Changeset [12644] by - Fixed som typos (xml_<namespace> -> xmp_<namespace>) and some … Oct 7, 2011: - 12:41 PM Ticket #1166 (When providing a non-"library" SLD parameter, LAYERS should not be ...) 
closed by - fixed: Empty LAYERS parameter is also valid when SLD is provided: r12643 Closing. - 12:40 PM Changeset [12643] by - Empty LAYERS parameter is also valid when SLD is provided (#1166) - 11:03 AM Changeset [12642] by - damn... use pointers… - 10:33 AM Changeset [12641] by - Modified the way we write colorObj.. we now pass a defaultColor object … - 10:05 AM Changeset [12640] by - switch internal image representation to premultiplied pixels, with … - 8:59 AM Changeset [12639] by - switch agg band order to match the cairo one - 8:07 AM Changeset [12638] by - Python Mapscript does not write COLOR to reference map (#4042) - 3:46 AM Changeset [12637] by - implement global mutex for the seeder (shouldn't be needed, but let's … Oct 6, 2011: - 9:01 PM Changeset [12636] by - Fix configure to work without explicit --with-exempi or --without-exempi - 8:38 PM Changeset [12635] by - Correction: named styles are now supported - 8:24 PM Changeset [12634] by - Correction: named styles are now supported - 7:55 PM Changeset [12633] by - Small corrections to new cascaded GetLegendGraphics? section (#3923) - 7:25 PM Ticket #3923 (cascading getLegendGraphic) closed by - fixed - 11:36 AM Ticket #3932 (Support for XMP metadata in output images) closed by - fixed: Committed code to trunk in r12630 and docs to trunk in r12631 - 11:35 AM Changeset [12632] by - regenerated configure script with autoconf2.59 - 11:34 AM Changeset [12631] by - Support for XMP metadata in output images (#3932) - 11:33 AM Changeset [12630] by - Support for XMP metadata in output images (#3932) RFC 76 - 11:06 AM Changeset [12629] by - Remove C++ comment line warnings and struct initialization warnings. 
… - 10:57 AM Conferences edited by - (diff) - 9:15 AM Changeset [12628] by - Added a section about wms cascading - 8:01 AM Changeset [12627] by - switch pygments highlighting from console to bash - 5:44 AM Changeset [12626] by - add mapcache compilation/installation document - 4:18 AM Changeset [12625] by - add svn keywords - 4:15 AM Changeset [12624] by - add TIFF cache storage format (write support disabled by default) add … - 4:10 AM Changeset [12623] by - adjust wms extent to mapserver extent. use a copy of the mapObj … - 3:04 AM Changeset [12622] by - fix ol map on dev doc site - 2:46 AM Ticket #4044 (fribidi library is not thread safe) created by - there should be a thread mutex around the fribidi calls, as fribidi is … - 2:26 AM Ticket #4043 (use OGC SLD WMS 1.3 schema instead of GetSchemaExtension) created by - Why are we not using … - 2:09 AM Ticket #4042 (Python Mapscript does not write COLOR to reference map) created by - Hi, when creating COLOR attribute to REFERENCE map using Python … Oct 5, 2011: - 10:28 AM Changeset [12621] by - fix borked text formatting - 10:25 AM Changeset [12620] by - add documentation for different mapcache caches, along with the … - 8:42 AM Ticket #4041 (Use the thread safe version of proj if available) created by - recent proj versions have a thread safe api which would enable us to … - 5:55 AM Changeset [12619] by - fix problem with recursive seeder not seeding all tiles if not … Oct 4, 2011: - 11:54 PM Ticket #4040 (Specify names for ERS temp files from WCS) created by - The GDAL/ERS driver creates two temp files (.ers and .aux.xml) with … - 9:09 AM Ticket #4039 (WMS GetCapabilities truncated for a CONNECTIONTYPE POSTGIS) created by - - Problem data statement: […] - GetCapabilities? 
is truncated at … - 8:02 AM Changeset [12618] by - Fix Linux build vs strlcpy (#4008) - 8:01 AM Changeset [12617] by - Fix Linux build vs strlcpy (#4008) - 6:18 AM Ticket #4008 (Backport mapprojhack.c to 5.4 and 5.6) closed by - fixed: confirmed that osx still builds with these changes. closing… - 6:16 AM Changeset [12616] by - Draft version of RFC 77, multi-label support. - 3:50 AM Changeset [12615] by - Add missing strlcpy definition (#4008) - 3:49 AM Changeset [12614] by - Add missing strlcpy definition (#4008) Oct 3, 2011: - 8:59 AM Changeset [12613] by - Fixed some spelling mistakes and formatting issues for the projection … - 7:59 AM Ticket #4038 (PHP mapscript - cannot add a style to a label) created by - It is unclear how to add a style to a label using PHP mapscript. Oct 2, 2011: - 11:55 PM Changeset [12612] by - fix failed wms demo when no source defined Sep 30, 2011: - 1:44 PM Changeset [12611] by - Added a new regexp example to the expressions document. - 11:50 AM Ticket #4037 (PHP Mapscript - Unable to add a style object to label object) created by - There is currently no way to add a style to a label using MapScript? … - 9:18 AM Changeset [12610] by - add very experimental native mapserver source to mapcache. not enabled … - 5:21 AM Changeset [12609] by - fix segfault on multi-layer wms request on non-exisiting layer - 4:04 AM Changeset [12608] by - add minzoom/maxzoom to grid reference in tileset to allow using … Sep 29, 2011: - 11:33 PM Ticket #3830 (Migration guide: DUMP shouldn't be needed in OGC request) reopened by - I forgot the migration guide. 
I don't think I can edit the migration … - 9:58 AM Changeset [12607] by - Add voting history - 9:37 AM Changeset [12606] by - add minimal mapcache.xml config file - 9:12 AM Changeset [12605] by - Updated documentation to reflect the deprecation of the DUMP parameter … - 8:52 AM Ticket #3830 (Migration guide: DUMP shouldn't be needed in OGC request) closed by - fixed: The 6.0 and trunk documentation has been updated in r12604. I have … - 8:46 AM Changeset [12604] by - Updated documentation to reflect the deprecation of the DUMP parameter … - 8:31 AM Changeset [12603] by - temporarily remove mapcache.xml to mapcache.xml.sample before creating … - 8:29 AM Changeset [12602] by - limit size of wms getmap requests to 2048 pixels by default (#4036) - 8:27 AM Ticket #4036 (mapcache wms service does not limit image size requests) created by - not limiting image size can cause malicious users to request large … - 5:55 AM Ticket #3421 (SWIG MapScript Creating a lineObj) reopened by - After looking through the SWIG Mapscript documentation I did not see … - 5:03 AM Ticket #3830 (Migration guide: DUMP shouldn't be needed in OGC request) reopened by - Reopening: this change needs to be mentioned in the 6.0 migration … - 5:00 AM Changeset [12601] by - Added ref to ticket #3830 (DUMP keyword now obsolete) - 4:51 AM Changeset [12600] by - Fix ticket 3703 URL - 4:48 AM Ticket #300 ([WMS] Extend behavior of DUMP mapfile parameter for GML output) closed by - fixed: Fixed. 
This is addressed by the new machanism to enable/disable OGC … - 2:30 AM Suggestions-Enhancements created by - - 2:29 AM WikiStart edited by - (diff) - 1:47 AM Suggestions-Enhancements: edited by - (diff) - 1:37 AM Suggestions-Enhancements: created by - - 12:48 AM WikiStart edited by - (diff) Sep 28, 2011: - 4:27 PM Ticket #3432 (Mapfile WEB TEMPLATE not working with URLs) closed by - fixed: The following was added to mapfile-> web for 6.0 trunk documentation … - 4:25 PM Changeset [12599] by - Supplemented the description of TEMPATE in the web document (#3432). - 4:09 PM Ticket #3421 (SWIG MapScript Creating a lineObj) closed by - invalid: I don't understand what to do. I close it as invalid. Please reopen … - 3:54 PM Ticket #3415 (SWIG Mapscript Documentation error - resultCacheObj.getResult( int i )) closed by - fixed: Fixed for 6.0 and trunk documentation in r12598. - 3:54 PM Changeset [12598] by - fixed the return type of getResult(int i) of resultCacheObj (#3415). - 3:40 PM Ticket #3208 (WMS Server Documents needs more info) closed by - fixed: A fix has been attempted for trunk and 6.0 documentation in r12597. … - 3:36 PM Changeset [12597] by - Improvement of the wms_server document - templates, gml output (#3208). - 2:27 PM Changeset [12596] by - Added index entries (#4001) - errors - 2:10 PM Ticket #3055 (Addition to Error Message Documentation) closed by - fixed: The error message + a short description was added in the errors … - 2:08 PM Changeset [12595] by - Added the msWMSLoadGetMapParams(): WMS server error to the errors … - 2:03 PM Changeset [12594] by - Added the msWMSLoadGetMapParams(): WMS server error to the errors … - 1:41 PM Changeset [12593] by - Small adjustments to the description of GEOMTRANSFORM in the style … - 7:25 AM Changeset [12592] by - Added GetLegendGraphic? Cascading support Sep 27, 2011: - 9:14 PM Ticket #4035 (Data drops out of view when source data are lon/lat and display is ...) 
created by - Frank Warmerdam and I have discussed this a few times: when lon/lat … - 2:55 PM Ticket #4034 ([date] tag bug - interacts with [shpxy] tag) created by - The [date] tag currently doesn't work (it is left as-is in the … - 7:22 AM Changeset [12591] by - New metadata parameter wms_bbox_extended added in wms_server … Sep 26, 2011: - 3:09 PM Changeset [12590] by - Fix formatting - 2:11 PM Changeset [12589] by - Change 7X to 76 - 1:53 PM Changeset [12588] by - RFC for adding XMP support to Mapserver (#3932) Sep 25, 2011: - 11:41 AM Ticket #3191 (Link mailing list or support pages from front page) closed by - fixed: I think this is a good idea. Since no one has dismissed this as a … - 11:37 AM Changeset [12587] by - Added a link to the community pages to the Mapserver front page (#3191). - 6:12 AM Ticket #4033 (Cannot compile with libpng 1.5.4 (zlib.h missing in MS5 branch)) closed by - fixed: libpng 1.5 was not released when mapserver 5.6 came out. Mapserver 6.0 … - 6:10 AM Changeset [12586] by - define Z_BEST_COMPRESSION for newer libpng versions (#4033) Sep 24, 2011: - 3:53 PM Ticket #3310 (Describe usage of mod_rewrite together with cgi wrapper script) closed by - fixed: I have modified the explanation in wms_server using the suggestions in … - 3:50 PM Changeset [12585] by - Modified the the part on changing URLs to eliminate the map paramter … - 2:48 PM Ticket #3426 (symbolObj.angle is declared as double in MapScript Dotnet) closed by - fixed: This is not so easy to document. I have added examples in the swig … - 2:42 PM Changeset [12584] by - Added examples for setbinding in swig mapscript documentation (#3426). … - 1:40 PM Ticket #4033 (Cannot compile with libpng 1.5.4 (zlib.h missing in MS5 branch)) created by - The libpng 1.5.4 header file png.h does not link to zlib.h any more. … - 1:30 PM Ticket #3502 (Add a timestamp macro to templating module) closed by - fixed: Fixed for 6.0 and trunk documentation in r12583. 
- 1:29 PM Changeset [12583] by - Added the date tag to the template document (#3502). - 1:20 PM Ticket #3516 (Mapserv CGI - Group name passed in "LAYERS" parameter) closed by - fixed: Notes have been added for GROUP and STATUS in the layer document for … - 1:18 PM Changeset [12582] by - Added notes on the behaviour of groups in the layer document (#3516). - 12:58 PM Ticket #3518 (Missing dd in SIZEUNITS documentation) closed by - wontfix: Encouraging the use of dd as sizeunits does not seem like a good idea … - 12:52 PM Changeset [12581] by - Fixed some formatting in the wfs_server document. - 12:18 PM Ticket #3562 (Need to document the gml_constants config option.) closed by - fixed: I tried to look at the code, and found that the values for the … - 12:13 PM Changeset [12580] by - Added documentation on gml_constantsChant to the wfs_server … - 9:32 AM Ticket #3996 (Documentation of runtime/variable substitution) closed by - fixed: Links added between mapfile->variable_sub and cgi->runsub for 6.0 and … - 9:30 AM Changeset [12579] by - Added links between the variable substitution document under mapfile … - 8:55 AM Ticket #3712 (Drop map-specific query modes for the MapServer CGI) closed by - fixed: All mentions of *QUERYMAP have been removed from the 6.0 and … - 8:52 AM Changeset [12578] by - Added the qformat parameter to the cgi controls document (#3712). - 8:48 AM Changeset [12577] by - Use CSLTokenizeString2() to avoid mismatch between MapServer's memory … - 8:01 AM Changeset [12576] by - Removed references to the *QUERYMAP modes in the cgi controls document … - 6:46 AM Ticket #3660 (OGR auto style: use opacity parameter) closed by - fixed: I have attempted to fix the OGR documentation for trunk and 6.0 in r12575. - 6:43 AM Changeset [12575] by - Documentation of transparency support for OGR connections (#3660). 
Sep 23, 2011:
- 9:32 AM Changeset [12574] by - small doc change
- 9:31 AM Changeset [12573] by - small doc change

Sep 22, 2011:
- 12:36 PM RenderingOsmDataUbuntu edited by - modified libpixman-dev to libpixman-1-dev for MapCache (diff)
- 10:31 AM Changeset [12572] by - fix windows build

Sep 21, 2011:
- 10:30 PM Changeset [12571] by - add Mike to the PSC
- 3:46 PM Changeset [12570] by - Fixed issues introduced with ticket #3925 prohibiting compilation of …
- 7:41 AM Changeset [12569] by - fix return value of postgis time filter
- 7:34 AM Changeset [12568] by - Rewrite postgres TIME queries to take advantage of indexes (#3374)
- 5:29 AM Ticket #4029 (Centroid geomtransform missing from msStyleSetGeomTransform) closed by - fixed: fixed in trunk (r12566) and 6.0 branch (r12567)
- 5:28 AM Changeset [12567] by - fix centroid geomtransform parser (#4029)
- 5:25 AM Changeset [12566] by - fix centroid geomtransform parser (#4029)

Sep 20, 2011:
- 11:06 AM Changeset [12565] by - cluster:buffer is a valid option
- 11:04 AM Changeset [12564] by - cluster:buffer is a valid option
- 10:56 AM Ticket #4032 (PHP/MapScript: some objects are clonable and some aren't) created by - Currently, some objects are clonable and some not. The clonable …
- 8:51 AM Ticket #3901 (PHP MapScript does not implement NEW properties like GEOMTRANSFORM, ...) closed by - fixed: The styleObj properties patch backported in branch 6.0 in r12563. The …
- 8:47 AM Changeset [12563] by - Backport in 6-0: PHP MapScript is missing many styleObj properties (#3901)
- 8:38 AM Ticket #4031 (Enable mapfile strings and arbitrary file streams to be tokenized) created by - mapfile.c contains the function msTokenizeMap() which is useful in …
- 8:37 AM Changeset [12562] by - Backport in 6-0: Added geomtransform in writeStyle function
- 8:35 AM Changeset [12561] by - Added geomtransform in writeStyle function
- 8:15 AM Changeset [12560] by - Backport branch 6-0 Updated XMLMapfile stuff for MapServer 6.0
- 8:02 AM Changeset [12559] by - Updated XMLMapfile stuff for MapServer 6.0
- 6:20 AM Ticket #4030 (Data-statement for PostGIS connection breaks when containing tabs or ...) created by - When a PostGIS layer has a data-statement with embedded tabs or …

Sep 19, 2011:
- 2:56 PM Ticket #3488 (OGC Filter: add supported units) closed by - fixed: I have included a section on supported units of measure to the …
- 2:54 PM Changeset [12558] by - Added a section on supported units of measure to the wfs …
- 2:22 PM Ticket #3440 (missing return value for mapscript function whichShapes in documentation) closed by - fixed: Fixed for 6.0 and trunk swig and php documentation in r12557. I could …
- 2:18 PM Changeset [12557] by - Added MS_DONE to the return codes of Layer->whichShapes in the …
- 10:11 AM Changeset [12556] by - a few cleanups for syntax highlighting
- 10:11 AM Changeset [12555] by - add VALIDATION as a keyword for the syntax highligher
- 12:46 AM Ticket #4029 (Centroid geomtransform missing from msStyleSetGeomTransform) created by - In mapgeomtransform.c, msStyleSetGeomTransform doesn't look for the …

Sep 17, 2011:
- 4:09 PM Ticket #3780 (qstring_validation_pattern) closed by - fixed: CGI controls documentation updated for 6.0 and trunk in r12554. No …
- 4:06 PM Changeset [12554] by - Added note on the need for use of validation for qstring in cgi …
- 3:35 PM Ticket #3804 (WMTS/REST services in MapServer with GDAL) closed by - wontfix: GDAL is covered in the supported formats section of input->raster. …
- 3:19 PM Ticket #3872 (Make sure qlayer documentation is correct.) closed by - fixed: OK. I guess you know the behaviour, so I removed the note "If not …
- 3:16 PM Changeset [12553] by - Removed note on the effect of missing qlayer in the cgi controls …
- 2:57 PM Changeset [12552] by - Adding RFC for INSPIRE view service support.
- 2:12 PM Ticket #4028 (WFS encoding output support) created by - When you get datasources encoded in a different encoding system than …
- 2:07 PM Changeset [12551] by - add Mike's DBINCLUDE rfc doc
- 1:36 PM Changeset [12550] by - Unified OWS requests and applied to WCS (defaults to version 2.0 now) …
- 1:07 PM Ticket #4027 (Document behavior and use of SQL "inner query" in mapfile DATA statement) created by - By misadventure I've discovered that the inner query of a …
- 12:57 PM Changeset [12549] by - revert include file location
- 12:56 PM Changeset [12548] by - Cleaning some wms tests.
- 12:54 PM Changeset [12547] by - don't use libPNG pre-zlib filtering (#4017)
- 12:48 PM Changeset [12546] by - add tinyows doc folder
- 12:37 PM Changeset [12545] by - add examples folder
- 12:09 PM Ticket #4026 (Inter-character spacing control for labels and annotation) created by - When text (labels, annotation) is drawn, the default inter-character …

Sep 16, 2011:
- 5:36 AM Ticket #4025 (LayerFeatureConstraints in SLD not possible) created by - Filtering via SLD's is not possible with LayerFeatureConstraints, only …

Sep 15, 2011:
- 3:21 PM Ticket #4024 (is Force2d call redundent? (potential overhead)) closed by - wontfix: facinating. in mapnik we're barely catching up with ST_* usage in …
- 2:36 PM Ticket #4024 (is Force2d call redundent? (potential overhead)) created by - I think ST_Asbinary calls force2d internally, but mapserver's postgis …
- 2:34 PM Ticket #4023 (subtle postgis optimization re: bbox formation) created by - In testing mapnik rendering against postgis I've played with using …
- 10:07 AM Ticket #4022 (WFS data exposed without DUMP TRUE in layer) created by - WFS Queries (with or without OUTPUTFORMAT) work whether or not DUMP …

Sep 14, 2011:
- 3:08 PM Ticket #4021 (GetFeatureInfo using MySQL join doesn't work) created by - Trying to do GetFeatureInfo request with layer joined to MySQL table …

Sep 13, 2011:
- 9:16 AM Ticket #4020 (WCS 1.0 rangeset_axes parsing crashes in some cases) closed by - fixed: This appears to be a very old error, but has not come up before …
- 9:14 AM Changeset [12544] by - fix crash describing coverage when rangeset_axes contains value other …
- 9:11 AM Changeset [12543] by - fix crash describing coverage when rangeset_axes contains value other …
- 9:08 AM Ticket #4020 (WCS 1.0 rangeset_axes parsing crashes in some cases) created by - If a WCS 1.0 describecoverage request is made when the mapfile …

Sep 12, 2011:
- 7:05 PM Changeset [12542] by - Updated voting status and marked as adopted…
- 7:02 PM Changeset [12541] by - Added Olivier to the PSC RFC…
- 1:45 PM Ticket #4008 (Backport mapprojhack.c to 5.4 and 5.6) reopened by - 5.4 doesn't seem to compile as of the recent addition due to the …
- 3:35 AM Ticket #4019 (Unable to compile MapServer 6.0.1 with libpng 1.5.4) closed by - invalid: c.f. …
- 3:08 AM Ticket #4019 (Unable to compile MapServer 6.0.1 with libpng 1.5.4) created by - When trying to compile MapServer 6.0.1 with libpng 1.5.4, I get: g++ …
- 1:59 AM Changeset [12540] by - Added index entries to the raster input document (#4001). Also did …
- 1:27 AM Changeset [12539] by - set png filter to PNG_FILTER_NONE (#4017)
- 12:37 AM Ticket #2909 (Processing directives documentation update) closed by - fixed: Thanks, zjames. I have updated the trunk and 6.0 docs with your …
- 12:35 AM Changeset [12538] by - Fixed description of LABEL_NO_CLIP (#2909), also added some index …
- 12:34 AM Ticket #4018 (don't reparse the same mapfile over and over again for fastcgi) created by - the given map= mapfile parameter is parsed at each request in the …
- 12:27 AM Ticket #4017 (Expose more png compression options) created by - The wms shootout has shown that png compression options have a big …

Sep 11, 2011:
- 2:16 PM Ticket #4016 (pointObj draw - missing original coordinates) created by - Hi, first, sorry if any mistakes, it is my first post. When using …
- 2:24 AM Changeset [12537] by - add map->maxsize that was missing in msCopyMap()

Sep 10, 2011:
- 1:56 AM Changeset [12536] by - Updated the download document windows download link section (#4014) - …
- 1:42 AM Ticket #4014 (Update Mapserver Downloads documentation) closed by - fixed: Fixed for 6.0 and trunk documentation in r12535. How do you know that …
- 1:36 AM Changeset [12535] by - Updated the download document windows download link section (#4014).
- 1:03 AM Ticket #4015 (label style offset, geomtransform labelpoly, symbolscaledenom) created by - I have discovered that LABEL STYLE OFFSET, at least when used with …
- 12:47 AM RenderingOsmDataUbuntu edited by - (diff)

Note: See TracTimeline for information about the timeline view.
http://trac.osgeo.org/mapserver/timeline?from=2011-10-10T09%3A22%3A44-0700&precision=second
On Sun, Aug 9, 2009 at 3:29 AM, ilya <ilya.nikokoshev at gmail.com> wrote:

> Thank you and everyone else for insightful posts detailing why my
> examples don't make a good argument for the syntax.
>
> Even though my original suggestion, similarly pep 312, wouldn't break
> any existing programs and would not lead to ambiguity in 'if _:', I
> rescind it.
>
> However, another reason for implicit lambdas is lazy evaluation. For
> example, in another thread people discuss "... except ... if/else"
> conditional statement --- one reason being control expressions
> evaluate lazily. A function call passing callables currently looks
> ugly and unreadable:
>
> lazy_cond(expr, lambda: expensive(5), lambda: factorial(10**5))
>
> and here 6 keystrokes of 'lambda' word *do* matter.
>
> Therefore I hope my unsuccessful proposal will encourage people to
> find something that works.
>
> On Sat, Aug 8, 2009 at 4:21 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> > On Fri, 7 Aug 2009 10:46:40 pm ilya wrote:
> >> I was thinking about a good syntax for implicit lambdas for a while
> >> and today I had this idea: make ``_:`` a shortcut for ``lambda _=None:``
> >
> > [...]
> >
> >> The rationale is that you only want to get rid of lambda keyword to
> >> create a *very* simple function, the one that will be called either
> >> without parameters or with only one parameter. For everything more
> >> complicated, you really should go and write the explicit function
> >> signature using lambda.
> >
> > Why would you want to get rid of the lambda keyword? What's the benefit?
> >
> > Is this about saving twelve keystrokes?
> >
> > lambda _=None:
> > versus
> > _:
> >
> > Just how often do you want, or need, to write such a lambda? It seems to
> > me that not only is it a special case you want to break the rules for,
> > which goes against the Zen, but it's an incredibly rare special case.
> >
> > _ as an identifier already has three conventional meanings:
> >
> > (1) In the interactive interpreter, _ is the value of the last
> > expression typed.
> >
> > (2) It is commonly used to mean "I don't care about this value", e.g.
> >
> > t = ("Fred Smith", "123 Fourth Street", "New York", "dog")
> > name, _, _, pet = t
> >
> > (3) It is also often used in internationalization.
> >
> > You want to give it the extra meaning "a default parameter name for
> > lambda when I can't be bothered typing even a single letter name".
> >
> > Because _ is already a valid identifier, this will break code that does
> > this:
> >
> > while _:
> >     process()
> >     _ = function()
> >
> > if _:
> >     print "something"
> >
> > Not the best choice of names, but somebody, somewhere, is doing that,
> > and your suggestion will break their code.
> >
> > Looking at the three examples you gave:
> >
> > map( _: _ + 5, some_list)
> > register_callback( _: True)
> > def apply_transform(..., transform = _:_, ... ):
> >
> > In the first case, I wouldn't use the short-cut form even if it were
> > available. I'd write a lambda that used a more meaningful name. In this
> > case, I'm expecting an int, so I would use n, or a float, so I'd use x.
> > I'd also avoid setting the pointless default:
> >
> > map(lambda x: x+5, some_list)
> > vs
> > map(_: _+5, some_list)
> >
> > Since your suggestion doesn't do precisely what I want, the only reason
> > I would have for using your construct is to save seven keystrokes.
> > Encouraging laziness on the behalf of the programmer is not a good
> > reason for special-casing rare cases.
> >
> > Second case: register_callback( _: True)
> >
> > I assume you're implying that the callback function must take a single
> > argument. In this example, using _ as the parameter name to the lambda
> > makes sense, because it is a "don't care" argument. But if the callback
> > function is documented as always being given a single argument, I would
> > want to know if it was being called without any arguments, so the
> > default value of None is inappropriate and I would avoid using it.
> >
> > Third case: def apply_transform(..., transform = _:_, ... ):
> >
> > I don't think I'd write a function called apply_transform() which made
> > the transformation function optional, let alone buried deep in the
> > middle of a whole lot of extra parameters. (I presume that's what
> > the "..."s are meant to imply.) But putting that aside, I see your
> > intention: a default do-nothing function which appears in a very long
> > parameter list. The problem is that instead of trying to shrink the
> > default value so you can fit all the parameters on a single line, you
> > should make such a complicated function signature more readable by
> > spreading it out:
> >
> > def apply_transform(
> >     obj,
> >     start, end,        # start (inc) and end (exc) positions to apply
> >     another_arg,       # does something very important I'm sure
> >     x=0, y=1, z=2,     # more very important arguments
> >     transform=(        # default null transformation
> >         lambda obj=None: obj),
> >     frotz=False,       # if true, frotz the hymangirator with spangule
> >     hymangirator=None,
> >     spangule=None,
> >     magic=12345,       # this needs no documentation
> >     namespace={},
> >     flibbertigibbet=None,
> >     _private_magic=[]  # the caller shouldn't supply this
> >     ):
> >
> > (Even better is to avoid such complicated function signatures, but
> > sometimes that's not an option.)
> >
> > So again I'd be very unlikely to use your suggested construct except out
> > of simple can't-be-bothered-to-type-a-dozen-letters laziness. Pandering
> > to that sort of laziness is probably not a good thing.
> >
> > Fundamentally, this suggestion doesn't add expressability to the
> > language, or power. Laziness on it's own is not a good reason for
> > special casing rare cases. If it was a very common case, then *perhaps*
> > you would have an argument for special casing needless verbiage:
> > conciseness (up to a point) is a virtue in a language. That's partly
> > why we have lambdas in the first place, so we can write this:
> >
> > reduce(lambda a,b: (a+b)/2.0, somelist)
> >
> > instead of this:
> >
> > def average(a, b):
> >     return (a+b)/2.0
> > reduce(average, somelist)
> >
> > But this isn't a common case, it's a rare case, and the case you're
> > hoping to replace is pretty concise already.
> >
> > --
> > Steven D'Aprano
> > _______________________________________________
> > Python-ideas mailing list
> > Python-ideas at python.org
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org

--
Gerald Britton
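The lazy-evaluation motivation from ilya's message can be sketched in a few lines. Note that `lazy_cond`, `expensive`, and the `calls` list below are hypothetical stand-ins for illustration, not anything defined by the proposal itself:

```python
calls = []

def expensive(n):
    # Record the call so we can observe which branch actually ran.
    calls.append(n)
    return n * n

def lazy_cond(flag, if_true, if_false):
    # Both branches arrive as zero-argument callables, so only the
    # selected one is ever evaluated -- the whole point of wrapping
    # them in lambdas in the first place.
    return if_true() if flag else if_false()

# The false branch would raise ZeroDivisionError if it were evaluated.
result = lazy_cond(True, lambda: expensive(5), lambda: 1 // 0)
```

Here `result` is 25 and `calls` is `[5]`: the `1 // 0` branch never runs. The six keystrokes of `lambda` are the entire cost of that laziness, which is the trade-off the thread is arguing about.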
https://mail.python.org/pipermail/python-ideas/2009-August/005497.html
PORTMAP(3N)                                                       PORTMAP(3N)

NAME
       pmap_getmaps, pmap_getport, pmap_rmtcall, pmap_set, pmap_unset,
       xdr_pmap, xdr_pmaplist - library routines for RPC bind service

DESCRIPTION
       These routines allow client C programs to make procedure calls to
       the RPC binder service. portmap(1) maintains a list of mappings
       between programs and their universal addresses.

   Routines
       #include <rpc/rpc.h>

       struct pmaplist *
       pmap_getmaps(addr)
              struct sockaddr_in *addr;

              Return a list of the current RPC program-to-address mappings
              on the host located at IP address *addr. This routine returns
              NULL if the remote portmap service could not be contacted.
              The command `rpcinfo -p' uses this routine (see rpcinfo(8C)).

       u_short
       pmap_getport(addr, prognum, versnum, protocol)
              struct sockaddr_in *addr;
              u_long prognum, versnum, protocol;

              Return the port number on which waits a service that supports
              program number prognum, version versnum, and speaks the
              transport protocol protocol. The address is returned in addr,
              which should be preallocated. The value of protocol can be
              either IPPROTO_UDP or IPPROTO_TCP. A return value of zero
              means that the mapping does not exist or that the RPC system
              failed to contact the remote portmap service. In the latter
              case, the global variable rpc_createerr (see
              rpc_clnt_create(3N)) contains the RPC status. If the
              requested version number is not registered, but at least a
              version number is registered for the given program number,
              the call returns a port number.

              Note: pmap_getport() returns the port number in host byte
              order. Some other network routines may require the port
              number in network byte order. For example, if the port number
              is used as part of the sockaddr_in structure, then it should
              be converted to network byte order using htons(3N).

       enum clnt_stat
       pmap_rmtcall(addr, prognum, versnum, procnum, inproc, in, outproc, out, timeout, portp)
              struct sockaddr_in *addr;
              u_long prognum, versnum, procnum;
              char *in, *out;
              xdrproc_t inproc, outproc;
              struct timeval timeout;
              u_long *portp;

              Request that the portmap on the host at IP address *addr make
              an RPC on the behalf of the caller to a procedure on that
              host. *portp is modified to the program's port number if the
              procedure succeeds. The definitions of other parameters are
              discussed in callrpc() and clnt_call() (see
              rpc_clnt_calls(3N)). Warning: If the requested remote
              procedure is not registered with the remote portmap then no
              error response is returned and the call times out. Also, no
              authentication is done.

       bool_t
       pmap_set(prognum, versnum, protocol, port)
              u_long prognum, versnum;
              int protocol;
              u_short port;

              Registers a mapping between the triple
              [prognum,versnum,protocol] and port on the local machine's
              portmap service. The value of protocol can be either
              IPPROTO_UDP or IPPROTO_TCP. This routine returns TRUE if it
              succeeds, FALSE otherwise. It is called by servers to
              register themselves with the local portmap. Automatically
              done by svc_register().

       bool_t
       pmap_unset(prognum, versnum)
              u_long prognum, versnum;

              Deregisters all mappings between the triple
              [prognum,versnum,*] and ports on the local machine's portmap
              service. It is called by servers to deregister themselves
              with the local portmap. This routine returns TRUE if it
              succeeds, FALSE otherwise.

       bool_t
       xdr_pmap(xdrs, regp)
              XDR *xdrs;
              struct pmap *regp;

              Used for creating parameters to various portmap procedures,
              externally. This routine is useful for users who wish to
              generate these parameters without using the pmap interface.
              This routine returns TRUE if it succeeds, FALSE otherwise.

       bool_t
       xdr_pmaplist(xdrs, rp)
              XDR *xdrs;
              struct pmaplist **rp;

              Used for creating a list of port mappings, externally. This
              routine is useful for users who wish to generate these
              parameters without using the pmap interface. This routine
              returns TRUE if it succeeds, FALSE otherwise.

SEE ALSO
       rpc(3N), portmap(8C), rpcinfo(8C)

20 January 1990                                                   PORTMAP(3N)
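The byte-order Note under pmap_getport() is easy to trip over. The sketch below illustrates it in Python (whose `socket.htons`/`socket.ntohs` mirror htons(3N)) rather than with live C calls against a portmapper; the port value is sample data, not a real pmap_getport() result:

```python
import socket
import struct

# Pretend this came back from pmap_getport(): it is in HOST byte order.
# 111 is just sample data (it happens to be the portmapper's own port).
port = 111

# Convert to network byte order before stuffing it into a sockaddr-style
# structure, as the Note prescribes.
net_port = socket.htons(port)

# The conversion round-trips regardless of the host's endianness.
assert socket.ntohs(net_port) == port

# struct's '!' (network) format performs the same htons() step implicitly,
# which is the usual way to build packet headers in Python.
packed = struct.pack('!H', port)
```

On a little-endian machine htons() actually swaps the bytes; on a big-endian machine it is the identity, which is why code that skips the conversion can appear to work on some hosts and fail on others.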
http://modman.unixdev.net/?sektion=3&page=pmap_unset&manpath=SunOS-4.1.3
Spidermonkey traditionally does use two bits for tagging of integers, but with a 30-bit payload regardless of architecture. (Last time I checked, they were working on switching to a 128-bit value type.) Numbers are converted between 30-bit integer and pointer-to-double representation as needed. Not that this is helpful in any way. ECMAScript Harmony will probably add a separate type system for "value types", which is specifically motivated by IBM's desire to support decimal floats and Mozilla's desire not to. The details haven't worked out, but the general idea is a category of types that have operator overloading but no properties. It should be possible for a host app to implement sane integers on top of this. So as long as you don't need to do anything before 2013 and will control your host app, you're home free!

Wow, I was unaware that anyone more recent than, like, Charles Babbage thought that decimal floats were a good idea... Operator overloading. I'd have to file that under "now you have two problems"...

I'm kinda on the fence about operator overloading. It leads to much stupidity, but on the other hand, doing maths on non-primitive types (vector algebra and so forth) without it is painful. Separating extensible-things-with-operators from objects might actually be a good idea. I kinda do this already with Objective-C++, but using ObjC++ gives me at least five problems anyway. :-) (Obviously I'll defer to Brendan on all the on-topic stuff.)

Actually decimal floats have a very specific use case: computations on monetary values. Regular floats may do some funky things in a complex computation (especially one involving lots of inputs, like an average). This is fine so long as you have all your decimal places, but falls apart when you are restricted to two or so decimal places (like money). To compound the problem, two different CPUs might come up with different answers for the same algorithm and inputs.
With decimal floats (or BCD) you avoid the whole issue by doing the calculations the way an old fashioned hand-crank calculator would. Given IBM's customer base, this is a pretty important consideration for them.

is the most-dup'ed JS bug, last I checked. Try javascript:alert(.1+.2) and consider people doing non-monetary arithmetic, say for CSS style computation. It's a problem, and IBM is right to want a solution, but IEEE754r is a glorious, decade-long, multi-format (last I heard; to placate two competing hardware vendors) mess. The standards body was reduced to using Robert's Rules in committee to prevent bloodshed at one time. Some on Ecma TC39 (JavaScript) are warm to bignums, but the whole decimal jam-job has made everyone a bit gun-shy. /be

I did some time in standards (SNIA, in this case), and AFAICT the biggest hobby-horse in IBM is fixed-point arithmetic. We chose the absolute minimal set of useful primitive types for our data abstraction (unlike JavaScript, apparently) and at the last minute the IBM folks came back with a fixed-point data requirement. Also, everything in our standard had to be a superset of all of their products, but that wasn't a problem unique to IBM.

When it comes to financial software, every decent developer already uses integer millis (1000x the smallest currency unit).

or vampire-squid banks:

$ bc
bc 1.06
This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'.
2^53
9007199254740992
./1000
9007199254740
./365.25
24660367569
./24
1027515315
./3600
285420

The last three division steps show how JS's Date object fits time since the epoch in a double and still cover a very large extrapolated-Gregorian calendar. /be

Or you could use rational numbers, which are more general, more elegant, and possibly even more efficient. And hey, Lisp does that by default, too.

Sam Tobin-Hochstadt, private correspondence: "Bignums are great.
There's no reason for the performance problems to stop people from switching to bignums in any non-C language, and it's fairly straightforward to optimize to use fixnum operations in many cases. I think [trace-based JITting] will do well here, also. Exact rationals are a whole other can of worms. I admit to liking having them around in Scheme, but I rarely use them. ... I think [fast enough and better semantics than double] is the case for bignums, but not for rationals. I don't know of any other awesome solutions for rationals, though. Fast, precise, rational - pick any two." /be Decimal floats are very useful for avoiding rounding/representation errors when data moves back and forth between pen and paper and computerized systems. Some businesses have financial processes where this is essential. Actually, William Kahan said "A major decrease in avoidable silly errors can be achieved by letting most (not all) scientists, engineers, ... and other computer users perform most (not all) of their floating-point arithmetic in decimal; therefore that is the way we must go." SpiderMonkey used one bit for ages, not two. Now we use NaN boxing. It was not Mozilla who stopped IBM's mad IEEE754r decimal jam attempt. We actually had a prototype patch from Sam Ruby. All the other browser vendors, plus Doug Crockford of Yahoo!, were solidly opposed. In many ways Mozilla was IBM's best friend on this point, and we are still paying for it. Value types are a dark horse, but if based on proxies they may turn out well. The bignum strawman could just be a new primitive type, and happen quickly. IBM would not be happy, but many others would. Right now web sites are doing crypto in JS using ints (bitwise ops) and double multiplies! /be Is this Brendan's fault, or did this creep in sometime later? No idea, but it sounds like the sort of 4am shortcut of which that first implementation was entirely composed, so I'd guess it's been in there from the. 
So double by default, int under the hood, and bitwise ops are 32-bit int (uint if you use >>>). I blame Java. /be So, since I have to ask: What would we be stuck with if JS hadn't happened? Something like PHP only worse, which Netscape's VP Eng killed fast (in early July 1995, if memory serves; I did JS in early-mid-May) since it was "third" after Java and JS. Two new web programming languages were hard enough to justify... /be I think JS was mostly a great thing to come along, in its way so please take this in the context of that spirit. I respect a lot of what you've done -- but also have a gripe that relates to this thread. As respectfully as I can ... I'm sorry that this will be harsh but, this bit, where you." WTF IS THE MATTER WITH YOU!?!?! By which I mean... we all know the creation story of Jscript but in that story and in your reiteration of it here... basically you bent over for your masters. You had the option of a few of you standing up, not bending over... and quite possibly crashing the damn company (then running to the press and working on setting up your own separate thing). You might have lost, in other words.. You might collectively have been tossed out on your butt, blacklisted, replaced, and have had no impact .... but I don't think it was terribly likely. With a collective spine you could have won. Better quality AND well earned riches. Instead, hackers sold out. I think you could have made that "F.U." threat and leveraged it for more say about "Doing Things Right" in the engineering department. Instead, y'all wussed out, rushed, and took lots of money for it. At least that's how it looked from nearby. From where I sat, back then (just down the road)... the silly valley execs and money people were talking about you hackerish types there behind your back all up and down the valley. They played you guys. They were mostly bluffing. They knew all along you were their highest cards.... 
they just wanted to be able to brag about how they got your labor on the cheap and in a rush. And, no, they didn't really appreciate what you were trying to do. They heaped a bunch of needless stress on y'all with the effect of reduced quality essentially so they could justify their existence to their friends. Brendan's house and my nightclub thank us for selling out early and often. And, I do too. I tried to convey that but just to be really clear... I want both. I want the world where you win big and the world where you don't blame it all on Teh Man but rather sometimes stand up to him and win for the team, not just your accounts. You are both, so far as I can tell, smart, hard working, socially beneficial people on balance and perhaps it is my brain bug or perhaps I'm right but my perception is you screwed up in your relation to big money back then, around stuff like the topic. Younger hackers reading this should wonder if maybe they can do better. My impression from what I saw around the corridors of power back then is that you guys folded like a house of cards compared to what you could have gotten away with. (Maybe that's why they paid you the big bucks. :-) In my experience, when it comes to the very best engineers, what you're paying for is not that they will do everything perfectly. It's their ability to know what to neglect, in favor of getting the important work done. Cue Tool's "Hooker With A Penis". +1. This song should be automatically played each time someone busting ass gets accused of selling out by an armchair quarterback! It's a good question how much we sold out, and although jwz was earlier than I (dumbass me had an offer in spring '94 to join Netscape on the first floor, but I stayed loyal to MicroLunacy^H^H^H^H^H^HUnity), I am pretty sure we didn't trade technical merit for money. I think you have us wrong. We didn't sell out -- others (later-comers, who did far less work) made much more than we did. We were naive, in point of fact. 
If we had it all to do over again, I think we would (in hindsight and with your advice) use our leverage to get better technical and financial outcomes. But at the time, mostly we felt the need to move very quickly, not to make money but because we knew Microsoft was coming after us. Microsoft low-balled Netscape in late '94, and the Jims (Clark and Barksdale) told them to pound sand. After that, we felt the monster truck riding up on our little Yugo's rear bumper, month by month. If you appreciate this and it is accurate, consider that JavaScript (please, not "JScript") saved you from VBScript. As far as us not selling out: live and learn. I'm not sure I'll ever have the opportunity to apply this knowledge, but here it is for you kids who may end up in a rocket-like startup. Either get your unfair share, or get your technical goods, or both (in your dreams) -- but don't just work hard to try to invent something new in a hurry, to improve the nascent Web. You might not have the good grace I've had with Mozilla and Firefox to make up for it. /be I'm still bummed that I failed to talk you in to making #!/usr/bin/javascript work back then, because I think that we were still in the window where we had a shot at smothering Perl in the crib... Now there's something to dream about. I'm still holding out hope for /usr/bin/javascript :) It's here. #!/usr/bin/env node console.log('hello JS world!') gjs-console also works like this, and Rhino's shell mode will also do in a pinch. And yes, I've written shell scripts in JavaScript -- under time pressure, no less. With gjs-console giving you access to a decent standard library (everything in GNOME, more or less), it's not bad at all. I don't begrudge you the lack of bignums; the state of bignum libraries back then was not terrific. What I don't get is the lack of proper tail calling. Once you understand it, it's not any harder than doing it wrong. How did that get missed? 
Ten days to implement the lexer, parser, bytecode emitter (which I folded into the parser; required some code buffering to reorder things like the for(;;) loop head parts and body), interpreter, built-in classes, and decompiler. I had help only for jsdate.c, from Ken Smith of Netscape (who, per our over-optimistic agreement, cloned java.util.Date -- Y2K bugs and all! Gosling...). Sorry, not enough time for me to analyze tail position (using an attribute grammar approach:). Ten days without much sleep to build JS from scratch, "make it look like Java" (I made it look like C), and smuggle in its saving graces: first class functions (closures came later but were part of the plan), Self-ish prototypes (one per instance, not many as in Self). I'll do better in the next life. /be I hear you, and I've been in the "ten days to do the impossible" situation before. The real shame is that it never got fixed up as the language was reimplemented (many times), and now it's more difficult to do than it would have been, say, 10 years ago. After the beginning came a painful death dance, largo non troppo. Netscape used its IPO mad-money to binge on acquisitions and over-invest in for-diminishing-profits servers,"enterprise groupware" (which killed the Netscape browser as much as MSFT did), and Java, while under-investing in HTML and JS (never mind CSS). You're right: it would have been better to make Proper Tail Calls a rule of the language's semantics, an asymptotic space guarantee. Not clear it would have stuck in the market, though. Most JS devs didn't know Scheme and did not write tail-recursive or intentionally tail-calling code. We have a chance for Harmony to require PTCs, though. See the wiki link in my earlier comment, and please advocate on es-discuss@mozilla.org (mailman subscribe protocol). /be 32-bit uint? Wait, Java doesn't do uints. 
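What a proper-tail-call guarantee buys is an asymptotic space bound: a call in tail position reuses the caller's frame instead of stacking a new one. Python, like JS, makes no such guarantee, so the difference is easy to demonstrate. A sketch with arbitrary example functions, not anything from the thread:

```python
def sum_rec(n, acc=0):
    # The recursive call is in tail position, but CPython still allocates
    # a frame per call, so deep inputs blow the recursion limit.
    if n == 0:
        return acc
    return sum_rec(n - 1, acc + n)

def sum_iter(n):
    # This loop is what a PTC-guaranteeing engine would effectively turn
    # the tail call into: constant stack space.
    acc = 0
    while n:
        acc += n
        n -= 1
    return acc
```

`sum_rec(10)` happily returns 55, but `sum_rec(10**6)` raises RecursionError under CPython's default limit; `sum_iter` handles both in constant space, which is exactly the guarantee PTC would have baked into the language.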
As far as I can tell as a networking guy, not supporting uints has pretty much doomed Java for network programming, since going the next int size up and masking is way inefficient for parsing protocol headers. C still wins. 32-bit uint shows up in several places in JS. The >>> operator (same as in Java); the Array.length property and array indexes (property names that act like uint32 but equate to strings by the standard conversion). I'm a C hacker, so you won't hear me arguing back. unsigned is useful for systems programming chores, and not just for parsing packed structs. It's hard to strength-reduce integer arithmetic by hand without something like unsigned (>>> suffices for div, but the bitwise-logicals are not enough for all cases without a sign test and conditional negation). See for something we hope to get into JS, which needs packed structs and arrays for at least WebGL (and better than WebGL's typed arrays in JS, which are like aliased Fortran common vectors -- get me off this Karmic wheel!). /be JS has a lot of stupid in it. News at 11, as chouck used to say. There's hope: Convincing all the browser vendors will be tough. Some don't want to do much more than be super-fast at the current version of the language (or the last one). IBM voted "no" on ECMA-262 Edition 5 just because we did not jam IEEE754r decimal (which has finite precision, rounds base 10 numbers well compared to base 2 numbers printed in base 10, but lacks hardware support, is slow even on hardware that has support, and is different enough that no one could agree on whether and how to integrate it) into what was supposed to be a "no new syntax" update to the standard. The rounding bug is still the most-dup'ed JS bug in buzilla.mozilla.org. We want to fix it, and right now bignums have a shot. Write your browser vendors! /be My browser vendors are writing me. What should I tell them? What are they writing to you? 
Ask them for bignums in JS if you agree they would help compared to alternative courses of evolution, and compared to doing nothing for numeric types in JS. Heck, ask for anything that you think needs doing. I'm interested in cross-browser evolutionary jumps that the vendors can actually swallow (unlike boil-the-ocean schemes and effectively-single-vendor managed-[native-or-not-]code stacks), so feel free to keep me posted too. /be "ask for anything that you think needs doing"? I've been asking browser vendors for an implementation of Javascript that won't crash for over fifteen years. Not holding my breath. (Most crashes these days are associated with Flash, which is ActionScript, which is still ultimately your fault, dammit.) Not crashing is a research problem. We're on it though -- results will be sent backward in time via tachyons as soon as we have them. Browsers all crash, often in the other hairy parts of the codebase not implemented in memory-safe languages: HTML, CSS, DOM, crypto, HTTP, img, video, graphics, etc., etc. JS counts as causal for only some of the crash bugs. Even Chrome's process isolation (in other browsers too, variously) doesn't completely save them from lack of memory safety and control flow integrity in the main implementation language: C++. If you meant "DoS" instead of "machine crash", that's a different research problem, but we have watchdogs and quotas. I can't take direct blame for ActionScript, but sure: that's my fault too. The good parts of JS go back to Scheme and Self, so all credit to Guy, Gerry, David, Craig, et al. /be They usually write to tell me about some new layout thing or a fancy new feature that uses the mouse or the keyboard. I tell them it would be great to be able to upload a microphone recording. They write back about what a great new audiovisual codec they have. It's been going on for more than a decade. 
Flash finally came through a few years ago, but HTML5 has been spinning its wheels on bidirectional audio for at least a year now.

You learned about this because of how it recently screwed twitter, I assume? I don't get your outrage. Yes, all JavaScript numbers are floats, and no, JavaScript doesn't have a builtin BigInt type. Congrats, you made it through chapter 1 of JavaScript For Dummies. Yes it's *nice* that some languages have builtin arbitrary precision integers and automatically convert on overflow. But most don't - not Java, not C/C++, not even Python. Ruby does. The only thing that seems weird is that you expect this behavior. There are plenty of weird things about JavaScript, but this isn't on my list. You might want to try this out instead:

1) This is not what passes for outrage, sonny. 2) It may shock you to learn (from your response, I guess it may actually shock you to learn) that I don't read books like JavaScript for Dummies. I am, in fact, highly ignorant of the ins and outs of Javascript, because most of the time I just don't care. But hey, if it makes you happy to have known this piece of pointless trivia before I stumbled across it and decided to point and laugh, then I'm glad to have brought that sliver of joy into your day. Seriously, replies that just say "so what, I knew that" are barely above "first porst!" Try harder. "Most languages" blah blah blah. With such low expectations, I guess you get the languages you deserve. That's the only thing I "expect", not bignums.

While I'm enjoying my sliver and you're busy not caring enough about JavaScript to write blog entries on the subject, don't overlook the link to CoffeeScript. It may not read your mind and polish your knob while you code, but it does elevate JS to a real programming language with some significant advantages over other dynamic languages like python and ruby.
Beyond the fact this is just a fun jwz "WTF" post, I'm mildly amused your suggestion to someone noting a language flaw is to recommend something with a disclaimer that includes "Until it reaches 1.0, there are no guarantees that the syntax won't change between versions."

As opposed to stable languages, like Python?

Python does have proper integers. It didn't originally, but that was long ago. Python 3.x ints behave the way JWZ wants (arbitrary precision, automatically scaling up). Python 2.6 (the default on most linux distros and OSX) & 2.7 have separate integers (32 bits, possibly more) and longs (arbitrary precision) and do not automagically scale up if you overflow the normal int size. Don't get me wrong, programming languages should hide this crap. But I sure wouldn't expect it in an old language.

I stand corrected. Ruby *and* Python get a gold star today.

Not that your bluff hasn't already been called - but what would you consider a not old language? Certainly not Ruby? How about this - why don't you name a commonly-used language other than Python or Ruby that automatically converts integers to arbitrary precision types on overflow? I'm correct about C, C++, Java, C#, ObjectiveC, Ada, Modula2, and Pascal. I'm less familiar with VB, Fortran, and Cobol, but I don't think they have magic integer types either. I'd be curious to know what languages set JWZ's expectation that you can pay no attention to the storage characteristics of numeric types, because I suspect he "grew up" using the same statically typed languages I did.

We programmed with a nine volt battery, a paperclip, and a steady hand.

You may think you've thrown down the gauntlet, but really you seem to have circled around to one of Jamie's original points.

On the other hand, Python ints are all boxed (no fixnums) and heap-allocated. The interpreter pre-allocates 0-100 on startup, I think, which I always thought was pretty grody.

It's worse than that, small numbers are de-duped, big numbers are not.
Python 2.6.5 (release26-maint, Aug 20 2010, 17:50:24)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 0 is 0 + 0
True
>>> 256 is 256 + 0
True
>>> 257 is 257 + 0
False

Wait - I think I have the perfect rebuttal: Congrats, you made it through chapter 1 of Python For Dummies!

My favourite 30-year old programming language handles this stuff just fine. It's the modern languages that are made of fail.

jwz complains about Javascript numbers and you link to a source-level compiler that exhibits the same flaw?

Python does have long. More salient to the current conversation, I wrote (most of) JS bigint in Of course, they're slower than when implemented at C-level, but *shrug*. Skulpt rocks! /be

ActionScript 3 has ints, but still *doesn't have integer math*. So:

var x :int = 3;
myArray[x/2] = "foo";
// Congratulations, you've just corrupted your array by storing an element at index 1.5
// Yes. Really.

Wow, that is just... Beautiful. Almost as nice as the bit in FORTRAN that lets you modify the value of constant numbers.

FORTRAN? How quaint. Try this in Python:

True=False

True ???? That's beautiful.

>>> True = False
>>> True
False
>>> True is False
True

But how do I make it spin around, smoke, and repeatedly scream "does not compute"?

I think you have to be James T. Kirk, and then both you and the people of the planet are freed from the evil overlord computer that runs things. In Python, it's just:

import kirk
kirk.paradox( computer )

FWIW, you're not overwriting True there; you're creating a variable in that scope with the same name.

>>> foo = True
>>> True = False
>>> True, foo
(False, True)

Brilliant. And what's the chance that this, of all things, is what the programmer intended? Emit a diagnostic. This is why we can't have nice things.
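The AS3 pitfall above (integer-typed operands but no integer division) has a close analogue in Python 3, where / between ints is always true division. A sketch, with list indexing standing in for AS3's array property lookup:

```python
# In Python 3, as in AS3, dividing two ints yields a float: 3 / 2 is 1.5.
# Python at least refuses to index a list with it; AS3 silently turns the
# 1.5 into a string property name and "corrupts" the array.
x = 3
arr = ["a", "b", "c", "d"]
print(x / 2)          # 1.5
try:
    arr[x / 2]
except TypeError:
    print("lists reject fractional indexes")
print(arr[x // 2])    # 'b' -- floor division is the integer-math operator
```

The difference is only in the failure mode: a loud TypeError versus a silently misplaced element.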
This is why stdout and stderr are separate streams even though creating a sane default interface to review both at once requires too much cooperation between shell and GUI authors [or technically too much intelligence from either, in UNIXland - #!/usr/bin/javascript should announce exceptions over dbus or Growl if it wants the user to see them, right?]. (I mean sure, shell scripts + named pipe + [pick save-default-session-feature in your favorite tabbed xtermalike] but when's the last time you bought a car that left you to install the mirrors and the seats yourself?) Or are all you rich kids just using Xcode?

I love this C code. It breaks the brain of the reader without breaking the code.

#define false true
#define true false

To be fair (sort of), ActionScript 3 was designed to basically be an implementation of the then-proposed Ecmascript 4. So its int/uint handling can still be blamed on Brendan, I guess?

No, AS3 preceded ES4 and was one of its progenitors. AS3 itself was based on Waldemar Horwat's JS2 or "early ES4" from the Netscape days, but with changes Waldemar was not in on. See where, in a later slide, I get into why C-like storage types without C's evaluation (promotion) rules can make type-annotated code *slower* than untyped code. AS3 has this bug. It is the performance side of the same coin that the original post about AS3 brings up (lack of div leading to array indexing by a fractional number, equated to its canonical string conversion). AS3 tries to be like JS and evaluate intermediates using doubles only, yet adds Java-like int storage type honey traps to JS. This design choice is not my fault :-|. /be

Yeah, I realized "early ES4", but I hadn't realized exactly the who and what behind all those things, so fair enough.
I think I learned more of these details two years ago when I first looked into this (as one of the few implementers of an AS3 VM, I kind of wanted to know who to blame), but I didn't retain the details in my memory, and the info I was reading back then doesn't seem to be showing up now, so this was the best I could come up with now. So, aspersions uncast, blame unassigned, sorry!

Hell yeah, on topic! I'll get cut down for this, but... Ok, so the boxing is just bizarre, but I would argue that the choice of having just doubles is the correct one. They're typically supported on a lot of hardware that doesn't have native 64-bit integers. There, it has the interesting effect that the largest machine int is actually the 53-bit integer range of doubles. So, if you have to pick just one, for simplicity, doubles are a counterintuitive but inspired choice.

Why would you want to pick just one number type? Well, it avoids a *lot* of problems. In larger environments you can deal with all the (conv|co|av)ersions any (if not all) ways you like, but in the pocket class, you really want one and only *the right* one.

That's not how you spell 'coercions', but other than that I think this is a fairly good point.

FWIW, I agree: if you're only going to have one storage type, 64-bit doubles are a reasonable choice. Fast, and includes the 32-bit integers as a subset. Not bad.

Any time I see the earnest defense of modern languages, I'm reminded of this quote from the UHG: Pretty much sums up life after C.

All language designers need to read CLtL at the very least. Need I mention Python? (I'm like a Yankees fan on Red Sox turf, but whatever.)

Scheme was the ideal admired by the JS supporters at Netscape, and the trick used to recruit me. But (see above) the "look like Java" orders and the intense time pressure combined to take out a lot of good ideas from Scheme that would've taken longer (not much longer, sigh) to implement.
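The "53-bit machine int" claim above can be checked directly: a 64-bit double carries a 53-bit significand, so integers stay exact only up to 2**53. A quick sketch using Python floats (which are the same doubles):

```python
# Every integer up to 2**53 is exactly representable in a 64-bit double,
# and 2**53 + 1 is the first one that isn't: adding 1 at the limit is
# simply lost to rounding.
limit = 2.0 ** 53
print(limit + 1 == limit)          # True: the increment vanishes
print((limit - 1) + 1 == limit)    # True: still exact just below the limit
print(limit - 1 != limit)          # True: values below the limit stay distinct
```

This is the boundary JS later formalized as Number.MAX_SAFE_INTEGER.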
One example: just the C-like syntax makes tail calls harder (or less likely: consider falling off the end of a function returns undefined, so the last expression-statement before such an implicit return is not in tail position). On the upside, JS commands growing academic attention (Web 2.0 trickle-down economics plus JS's browser ubiquity with open source implementations and large user-testing communities), as demonstrated by everything from our practical PLDI 2010 paper on TraceMonkey, to pure semantic modeling such as The Ecma TC39 standards body benefits from this: Shriram and Arjun came to the July meeting, Cormac Flanagan and Sam Tobin-Hochstadt are regulars, Tom Van Cutsem (AmbientTalk, and of course the new Proxies for JS) is attending, and long-time Scheme (and JS) semanticist Dave Herman finished his PhD at Northeastern and works at Mozilla now. This doesn't excuse anything from 15 years ago, but it is a positive development. I'm sure Schemers and Common Lispers would agree. /be

thanks for the refs.

FYI, this post was linked as a 'Development Quote of the Week' by this week's Linux Weekly News. It's behind a paywall for one week, but then open to the public. Ewen
http://www.jwz.org/blog/2010/10/every-day-i-learn-something-new-and-stupid/
You consume an Azure Function from a Xamarin app in the same way you'd consume a regular web service or ASP.NET Web API deployed in Azure in any other environment (Azure App Service or even Azure Service Fabric). But I think this E2E walkthrough is interesting in any case if you are not aware of Azure Functions and/or Xamarin.Forms.

With Xamarin.Forms you can create native cross-platform mobile apps with C#, targeting iOS, Android and UWP. With Azure Functions you can create event-driven tasks/services with a server-less approach, just like publishing a C# function in the cloud and, 'voila', it'll work right away! It is a convenient way to create small web services that you can invoke through HTTP; they can also be scheduled or triggered by events. This is the definition you can see in azure.com:

"Azure Functions is a solution for easily running small pieces of code, or 'functions' in the cloud. You can write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it."

You can get an Overview of Azure Functions here, and more details about how to implement Azure Functions with .NET/C# here.

Step 1 – Create the Azure Function app environment in Azure's portal

A "function app" hosts the execution of your functions in Azure. Before you can create a function, you need to have an active Azure subscription account. If you don't already have an Azure account, free accounts are available, or even better, free benefits with VS Dev Essentials (Azure credit, $25/month for 12 months, for free).

Go to the Azure portal and sign in with your Azure account. If you have an existing function app to use, select it from Your function apps, then click Open. To create a new function app, type a unique Name for your new function app or accept the generated one, select your preferred Region, then click Create + get started. At this point you have your Azure Function App environment where you can create as many Functions as you'd like.
In your function app, click + New Function > HttpTrigger – C#, provide a name for the Function and hit "Create". This creates a function that is based on the specified template.

Step 2 – Write the code for your Azure Function in the web editor

Now write your code. Of course, in a real case, your function code might be accessing some kind of data source like Azure SQL DB, DocDB, Azure Storage, etc. In this simple case, for simplicity's sake, I'm generating the data (a List<Vehicle>) within my code and using Linq. You can simply copy/paste the following code into your function if you want to go faster: You should have something like the following:

Step 3 – Test the Azure function in Azure's portal

Since Azure Functions contain functional code, you can immediately test your new function. In the Develop tab, review the Code window and notice that this C# code expects an HTTP request with a "make" value (car make) passed either in the message body or in a query string. When the function runs, this value is used to filter the data to return.

Scroll down to the Request body text box, change the value of the make property to "Chevrolet", and click Run. You will see that execution is triggered by a test HTTP request, information is written to the streaming logs, and the function response (list of filtered cars) is displayed in the Output.

To trigger execution of the same function from another browser window or tab, copy the Function URL value from the Develop tab. Now paste it in a browser address bar, but don't forget to append an additional parameter with "make=Chevrolet" or "make=Ford" (query string value as &make=Chevrolet) and hit Enter. You'll get the same JSON data in the browser. You can also test it with curl or other tools like Postman, as explained in Testing Azure Functions.

Step 4 – Develop your Xamarin app and consume the Azure function

Now that our Azure Function works and is tested, let's create our Xamarin.Forms app.
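The app will end up calling the same Function URL tested above. As a hedged sketch, here is how such a request URL is assembled (in Python for brevity; the host name and function key below are placeholders, not real values):

```python
from urllib.parse import urlencode

# Hypothetical function endpoint -- substitute the Function URL copied
# from the Develop tab.
FUNC_URL = "https://myfunctionapp.azurewebsites.net/api/GetVehicles"

def build_request_url(function_key, make):
    # Azure passes the function key as the "code" query parameter;
    # "make" is the filter this particular function expects.
    return FUNC_URL + "?" + urlencode({"code": function_key, "make": make})

print(build_request_url("your-key", "Chevrolet"))
```

Any HTTP client (the browser, curl, Postman, or the Xamarin app below) just issues a GET against that URL.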
I created a plain "Blank App (Xamarin.Forms Portable)" solution using the following template: Once you have the solution, I usually get rid of the Windows 8.1 projects and just leave the Android, iOS and UWP projects plus the PCL project, of course, as I show below:

Since this is a Xamarin.Forms solution and this is a pretty straightforward sample, we'll just need to change code in the shared PCL project. In particular, we just need to add code in App.cs. Next, add the Newtonsoft.Json NuGet package to your PCL project:

Now, because I want a quick proof of concept, I wrote all the code as C# code. Of course, in a more elaborate Xamarin.Forms app, I would usually use XAML for my controls and views rather than adding controls in C# like I'm doing in this example. But for the sake of brevity, I just added C# code. Within App.cs, add the line:

using Newtonsoft.Json;

And now paste the following code in your App.cs file within your app namespace definition: Basically, I'm using the System.Net.Http.HttpClient class in order to call the Azure Function, in the same way you do when consuming regular Web API services. That code is implemented in a Function/Service agent class called VehiclesAzureFunction, and I'm running that code when the user presses the "Connect to Azure Function" button.

When running the Xamarin app in the Android emulator, this is what you get when starting the app and after pressing the button and calling the Azure Function:

Code available at GitHub

You can grab all the code from GitHub, as it'll be easier to see than just copying/pasting from a blog post. In the code at GitHub I'm also using some base classes with generics that are useful when you want to re-use more code in a Xamarin project. Code download in GitHub:

Next steps

In this blog post I simply created an Azure Function that was directly called through HTTP from a Xamarin.Forms app, and I'm showing the returned data in the mobile app.
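The data coming back from the function is plain JSON. A minimal sketch of the deserialization step (in Python for brevity; the field names here are assumptions, not the actual Vehicle contract, and the blog's C# version uses Newtonsoft.Json for the same job):

```python
import json

# Hypothetical response payload from the function: a JSON array of
# vehicles already filtered by make.
payload = ('[{"Id": 1, "Make": "Chevrolet", "Model": "Corvette"},'
           ' {"Id": 2, "Make": "Chevrolet", "Model": "Impala"}]')

vehicles = json.loads(payload)
models = [v["Model"] for v in vehicles]
print(models)  # ['Corvette', 'Impala']
```

Whatever the client language, the pattern is the same: issue the GET, parse the JSON array, bind the result to the UI.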
Once you have deployed your Azure Function, you can also read up on How to scale Azure Functions. But there are many other possibilities with Azure Functions, especially around events: for instance, triggering an Azure Function when a WebHook event is received (like a GitHub WebHook or a custom ASP.NET WebHook), as in this example: Azure Function triggered by a WebHook. Another interesting scenario would be to create an Azure Function that receives a WebHook and generates a Mobile Push Notification to be sent to mobile devices (iOS, Android and Windows devices) through Azure Push Notifications Hub. I also like this Azure Function example using the new Azure Cognitive Services (in this case, purely server-side related): Smart image re-sizing with Azure Functions and Cognitive Services.

At the end of the day, Azure Functions can be useful when testing small proofs of concept or quick actions with C# or Node.js code and you don't want/need to deploy a whole VS project. You could even use them for building simple microservices, too. Although, for a more complex/large microservices solution, you should check out the MSDN Magazine article Azure Service Fabric and the Microservices Architecture that I wrote. 🙂

Happy coding!

thanks for the example! and very interesting part of the article Microservices Architecture
https://blogs.msdn.microsoft.com/cesardelatorre/2016/05/17/using-azure-functions-from-xamarin-mobile-apps/
QtQuick Material in secondary Window

Hello, I'm new to QML and want to create a domestic alarm. I successfully created a window with buttons and images with a Dark theme, but when I want to open a new window I can't set the same theme. I tried putting the code in a different .qml and in the same .qml. My code is like this:

@
import QtQuick 2.7
import QtQuick.Controls 2.0
import QtQuick.Controls.Material 2.0
import QtQuick.Layouts 1.3
import QtQuick.Window 2.0

ApplicationWindow {
    visible: true
    width: 640
    height: 480
    title: qsTr("Smart Home")
    Material.theme: Material.Dark
    Material.accent: "#004D40"

    Image { ...... }

    Rectangle {
        ...
        AnimatedImage { .... }
        MouseArea { ..... }
    }

    Window {
        id: ventanaprueba
        visible: true
        Material.theme: Material.Dark
        Material.accent: Material.DeepOrange
    }
}
@

The window "ventanaprueba" opens but is all white, when it should have a black background. I tried adding #include <QQuickStyle> in the main.cpp file and QQuickStyle::setStyle("Material"); but it didn't change anything. Can someone please help me? Thanks in advance.

- GrecKo Qt Champions 2018 last edited by

Window isn't aware of the QQuickStyle; if you want to have a styled window, use ApplicationWindow.
https://forum.qt.io/topic/82064/qtquick-material-in-secondary-window
A lot of times an operation on a single aggregate root needs to result in side effects that are outside the aggregate root boundary. There are several ways to accomplish this, such as:

- A return parameter on the method
- A collecting parameter
- Domain events

We've used the default implementation of domain events for quite a while, but with some recent applications I've worked on, we've noticed one small design issue:

public static class DomainEvents

It's that big ol' "static" piece. Domain events are then raised explicitly by calling this static method, Raise:

public static IHandlerContainer Container { get; set; }

public static void Raise<TEvent>(TEvent args) where TEvent : IDomainEvent
{
    if (Container != null)
        Container.GetAllHandlers<TEvent>()
            .Cast<IHandler<TEvent>>()
            .ForEach(x => x.Handle(args));
}

Where Container in this case is just a facade over an IoC container. The silly Cast is from this being C# 3.0 code; the contravariance of C# 4.0 would fix this. Again, another design issue here. The reference to the container is static. This means that more powerful container patterns such as nested containers are out of the picture. This is too bad, because nested containers are another great tool in the toolbox that lets us delete a lot of code that sets up contexts for things.

The problem here is that I still want to raise events in a static manner from an entity. I don't want to have to reference some event pipeline object thingy in my domain objects, and I'm really not keen to start injecting things. Instead, I want a true, fire-and-forget event.

Contextual containers and disposable actions

What I need to do is allow this static method to work with a contextual, scoped piece of code. But that's exactly what the "using" statement allows us to do: create a scoped piece of code that executes something at the beginning (whatever creates the IDisposable) and something at the end (the Dispose method).
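Stepping back, the dispatch inside Raise is simple enough to sketch in a few lines of Python (names here are illustrative; the C# version resolves handlers from an IoC container rather than a plain dict):

```python
# A minimal sketch of a static domain-events dispatcher: handlers are
# looked up by event type from whatever registry is currently installed.
_handlers = {}  # event type -> list of handler callables

def raise_event(event):
    for handler in _handlers.get(type(event), []):
        handler(event)

class OrderShipped:
    def __init__(self, order_id):
        self.order_id = order_id

shipped_ids = []
_handlers[OrderShipped] = [lambda e: shipped_ids.append(e.order_id)]
raise_event(OrderShipped(42))
print(shipped_ids)  # [42]
```

The entity only ever touches the static raise function; everything interesting hides behind the handler lookup, which is exactly why swapping that lookup per scope matters.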
To help us create this scoped context to slip in our nested container, we can take advantage of Ayende's most brilliant piece of code ever written, the DisposableAction:

public class DisposableAction : IDisposable
{
    private readonly Action _callback;

    public DisposableAction(Action callback)
    {
        _callback = callback;
    }

    public void Dispose()
    {
        _callback();
    }
}

I can then just implement a simple method on the DomainEvents class to allow me to swap out, and then restore, the container reference:

public static IDisposable CreateContext(IHandlerContainer container)
{
    var existingContainer = Container;
    Container = container;
    return new DisposableAction(() =>
    {
        Container = existingContainer;
    });
}

I keep a reference around to the previous container, then swap out the DomainEvents' container for the one passed in. When this CreateContext is finished, the DisposableAction restores the previous container with a handy closure. So how do I use this in real code? Something like:

public void Process<T>(T message) where T : IMessage
{
    using (var nestedContainer = _container.GetNestedContainer())
    using (var unitOfWork = new UnitOfWork(_sessionSource))
    using (DomainEvents.CreateContext(new HandlerContainer(nestedContainer)))
    {
        // process the message with the scoped container and unit of work
    }
}

I have several scoped items I'm using to process a message (part of a batch processing program). Each line in a file gets processed as a single message, with its own unit of work, its own container etc. It's now very plain to see the context I create to process the message because I just use the C# feature that creates bounded, self-cleaning contexts: the "using" statement.

This method still isn't thread-safe, as it still has static elements. I've just allowed a scoped, nested container to be used instead of a single, global static container. Some folks mentioned patterns like event aggregators, so there are likely other patterns that can help out with the static nature of this domain events pattern.
But for now, I can harness the power and simplicity of nested containers, and keep my handy domain events around as well.
http://lostechies.com/jimmybogard/2010/08/04/container-friendly-domain-events/
Class “CollectionView”

Object > NativeObject > Widget > Composite > CollectionView

A scrollable list that displays data items in cells, one per row. Cells are created on demand by the createCell callback and reused on scrolling.

Example

import {CollectionView, contentView, TextView} from 'tabris';

const items = ['Apple', 'Banana', 'Cherry'];

new CollectionView({
  left: 0, top: 0, right: 0, bottom: 0,
  itemCount: items.length,
  createCell: () => new TextView(),
  updateCell: (view, index) => {
    view.text = items[index];
  }
}).appendTo(contentView);

See also:

JSX Creating a simple CollectionView
JSX Creating a CollectionView with multiple cell types
JSX Creating a CollectionView with pull-to-refresh support
JSX Creating a CollectionView with sticky headers
JSX Creating a CollectionView with dynamic column count
JSX collectionview-cellheightauto.jsx
TSX collectionview-cellheightauto.tsx
TSX collectionview-celltype-ts.tsx
TSX collectionview-scroll-ts.tsx
JSX collectionview-swipe-to-dismiss.jsx
TSX collectionview-ts.tsx

Constructor

new CollectionView(properties?)

Methods

cellByItemIndex(itemIndex)

Returns the cell currently associated with the given item index. Returns null if the item is not currently displayed.

insert(index, count?)

Inserts one or more items at the given index. When no count is specified, a single item will be added at the given index. New cells may be created if needed. The updateCell callback will only be called for those new items that become immediately visible. Note that inserting new items changes the index of all subsequent items. This operation will update the itemCount property.

itemIndex(widget)

Determines the item index currently associated with the given cell.

load(itemCount)

Loads a new model with the given itemCount. This operation will update the itemCount property.

refresh(index?)

Triggers an update of the item at the given index by calling the updateCell callback of the corresponding cell. If no index is given, all visible items will be updated.
remove(index, count?)

Removes one or more items beginning with the given index. When no count is given, only the item at index will be removed. Note that this changes the index of all subsequent items, however. This operation will update the itemCount property.

reveal(index)

Scrolls the item with the given index into view.

set(properties)

Sets all key-value pairs in the properties object as widget properties. Important TypeScript note: When called on this you may need to specify your custom type like this: this.set<MyComponent>({propA: valueA});

Properties

cellHeight

The height of a collection cell. If set to "auto", the cell height will be calculated individually for each cell. If set to a function, this function will be called for every item, providing the item index and the cell type as parameters, and must return the cell height for the given item. Note: On iOS "auto" may cause significant performance downgrade as it requires additional layouting passes to calculate cell height internally. If possible please use a combination of fixed itemHeight and cellType properties to specify different heights for different cells.

cellType

The name of the cell type to use for the item at the given index. This name will be passed to the createCell and cellHeight callbacks. Cells will be reused only for those items that map to the same cell type. If set to a function, this function will be called for every item, providing the item index as a parameter, and must return a unique name for the cell type to use for the given item.

See also:

TSX collectionview-celltype-ts.tsx
JSX collectionview-celltype.jsx

columnCount

The number of columns to display in the collection view. If set to a value n > 1, each row will contain n items. The available space will be equally distributed between columns.

See also:

JSX collectionview-columncount.jsx

createCell

A callback used to create a new reusable cell widget for a given type.
This callback will be called by the framework and the created cell will be reused for different items. The created widget should be populated in the updateCell function.

firstVisibleIndex

The index of the first item that is currently visible on screen.

itemCount

The number of items to display. To add or remove items later, use the methods insert() and remove() instead of setting the itemCount. To display a new list of items, use the load() method.

lastVisibleIndex

The index of the last item that is currently visible on screen.

refreshEnabled

Enables the user to trigger a refresh by using the pull-to-refresh gesture.

See also:

JSX collectionview-refreshenabled.jsx

refreshIndicator

Whether the refresh indicator is currently visible. Will be set to true when a refresh event is triggered. Reset it to false when the refresh is finished.

refreshMessage iOS

The message text displayed together with the refresh indicator.

updateCell

A callback used to update a given cell widget to display the item with the given index. This callback will be called by the framework.

Events

refresh

Fired when the user requested a refresh. An event listener should reset the refreshIndicator property when refresh is finished.

scroll

Fired while the collection view is scrolling.

See also:

TSX collectionview-scroll-ts.tsx
JSX collectionview-scroll.jsx

Change Events

cellHeightChanged

Fired when the cellHeight property has changed.

itemCountChanged

Fired when the itemCount property has changed.

createCellChanged

Fired when the createCell property has changed.

updateCellChanged

Fired when the updateCell property has changed.

cellTypeChanged

Fired when the cellType property has changed.

refreshEnabledChanged

Fired when the refreshEnabled property has changed.

refreshIndicatorChanged

Fired when the refreshIndicator property has changed.

refreshMessageChanged

Fired when the refreshMessage property has changed.

firstVisibleIndexChanged

Fired when the firstVisibleIndex property has changed.
lastVisibleIndexChanged

Fired when the lastVisibleIndex property has changed.

columnCountChanged

Fired when the columnCount property has changed.
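The createCell/updateCell contract described above amounts to a recycling pool: cells are created once per visible slot and re-bound as the list scrolls. A language-neutral sketch of that contract in Python (names are illustrative, not Tabris API):

```python
items = ["Apple", "Banana", "Cherry", "Date", "Elderberry"]
created = []

def create_cell():
    # called once per reusable cell, like the createCell callback
    cell = {"text": None}
    created.append(cell)
    return cell

def update_cell(cell, index):
    # called whenever a cell is (re)bound to an item, like updateCell
    cell["text"] = items[index]

VISIBLE = 3  # slots that fit on screen
pool = [create_cell() for _ in range(VISIBLE)]
for first in range(len(items) - VISIBLE + 1):  # simulate scrolling down
    for slot in range(VISIBLE):
        update_cell(pool[slot], first + slot)

print(len(created))     # 3 -- only as many cells as visible slots
print(pool[0]["text"])  # 'Cherry' -- slot 0 was re-bound to item index 2
```

This is why updateCell must fully overwrite a cell's state: the widget it receives may have displayed any earlier item.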
http://docs.tabris.com/3.0/api/CollectionView.html
Path

Download this notebook from GitHub (right-click to download).

import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('matplotlib')

A Path element represents one or more lines, connecting arbitrary points in two-dimensional space. Path supports plotting an individual line or multiple subpaths, which should be supplied as a list. Each path should be defined in a columnar format such as NumPy arrays, DataFrames or dictionaries for each column. For a full description of the path geometry data model see the Geometry Data User Guide.

In this example we will create a Lissajous curve, which describes complex harmonic motion:

lin = np.linspace(0, np.pi*2, 200)

def lissajous(t, a, b, delta):
    return (np.sin(a * t + delta), np.sin(b * t), t)

hv.Path([lissajous(lin, 3, 5, np.pi/2)]).opts(color='black', linewidth=4)

If you looked carefully, the lissajous function actually returns three columns: the x and y columns and a third column describing the point in time. By declaring a value dimension for that third column we can also color the Path by time. Since the value is cyclical we will also use a cyclic colormap ('hsv') to represent this variable:

hv.Path([lissajous(lin, 3, 5, np.pi/2)], vdims='time').opts(
    cmap='hsv', color='time', linewidth=4)

If we do not provide a color, overlaid Path elements will cycle colors just like other elements do. Unlike Curve, a single Path element can contain multiple lines that are disconnected from each other, so a Path can often be useful to draw arbitrary annotations on top of an existing plot. A Path element accepts multiple formats for specifying the paths, the simplest of which is passing a list of Nx2 arrays of the x- and y-coordinates; alternatively we can pass lists of coordinates.
In this example we will create some coordinates representing rectangles and ellipses annotating an RGB image:

angle = np.linspace(0, 2*np.pi, 100)
baby = list(zip(0.15*np.sin(angle), 0.2*np.cos(angle)-0.2))

adultR = [(0.25, 0.45), (0.35, 0.35), (0.25, 0.25), (0.15, 0.35), (0.25, 0.45)]
adultL = [(-0.3, 0.4), (-0.3, 0.3), (-0.2, 0.3), (-0.2, 0.4), (-0.3, 0.4)]

scene = hv.RGB.load_image('../assets/penguins.png')

(scene * hv.Path([adultL, adultR]) * hv.Path(baby)).opts(
    opts.Path(linewidth=4)
)

A Path can also be used as a means to display a number of lines with the same sampling along the x-axis at once. If we initialize the Path with a tuple of x-coordinates and stacked y-coordinates, we can quickly view a number of lines at once. Here we will generate a number of random traces, each slightly offset along the y-axis:

N, NLINES = 100, 10

paths = hv.Path((np.arange(N), np.random.rand(N, NLINES) + np.arange(NLINES)[np.newaxis, :]))
paths2 = hv.Path((np.arange(N), np.random.rand(N, NLINES) + np.arange(NLINES)[np.newaxis, :]))

(paths * paths2).opts(aspect=3, fig_size=300)

For full documentation and the available style and plot options, use hv.help(hv.Path).
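The columnar format Path accepts can be built without HoloViews or NumPy at all; a pure-Python sketch of the Lissajous columns from the first example:

```python
import math

# Each sample contributes one row of (x, y, time) -- the three columns
# the Lissajous Path example passes in.
n = 200
t = [2 * math.pi * i / (n - 1) for i in range(n)]
rows = [(math.sin(3 * ti + math.pi / 2), math.sin(5 * ti), ti) for ti in t]

print(len(rows), len(rows[0]))  # 200 3
print(rows[0][:2])              # the curve starts near x=1, y=0
```

Passing such rows (or the equivalent Nx3 array) to hv.Path with vdims for the third column gives the time-colored curve shown above.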
Dealing with Data

In this chapter you'll learn about the following:

- Rules for naming C++ variables
- C++'s built-in integer types: unsigned long, long, unsigned int, int, unsigned short, short, char, unsigned char, signed char, and bool
- The climits file, which represents system limits for various integer types
- Numeric constants of various integer types
- Using the const qualifier to create symbolic constants
- C++'s built-in floating-point types: float, double, and long double
- The cfloat file, which represents system limits for various floating-point types
- Numeric constants of various floating-point types
- C++'s arithmetic operators
- Automatic type conversions
- Forced type conversions (type casts)

The essence of object-oriented programming (OOP) is designing and extending your own data types. Designing your own data types represents an effort to make a type match the data. If you do this properly, you'll find it much simpler to work with the data later. But before you can create your own types, you must know and understand the types that are built in to C++ because those types will be your building blocks.

The built-in C++ types come in two groups: fundamental types and compound types. In this chapter you'll meet the fundamental types, which represent integers and floating-point numbers. That might sound like just two types; however, C++ recognizes that no one integer type and no one floating-point type match all programming requirements, so it offers several variants on these two data themes. Chapter 4, "Compound Types," follows up by covering several types that are built on the basic types; these additional compound types include arrays, strings, pointers, and structures.

Of course, a program also needs a means to identify stored data. In this chapter you'll examine one method for doing so: using variables. Then, you'll look at how to do arithmetic in C++. Finally, you'll see how C++ converts values from one type to another.
Simple Variables

Programs typically need to store information: perhaps the current price of IBM stock, the average humidity in New York City in August, the most common letter in the U.S. Constitution and its relative frequency, or the number of available Elvis impersonators. To store an item of information in a computer, the program must keep track of three fundamental properties:

- Where the information is stored
- What value is kept there
- What kind of information is stored

The strategy the examples in this book have used so far is to declare a variable. The type used in the declaration describes the kind of information, and the variable name represents the value symbolically. For example, suppose Chief Lab Assistant Igor uses the following statements:

int braincount;
braincount = 5;

These statements tell the program that it is storing an integer and that the name braincount represents the integer's value, 5 in this case. In essence, the program locates a chunk of memory large enough to hold an integer, notes the location, assigns the label braincount to the location, and copies the value 5 into the location. These statements don't tell you (or Igor) where in memory the value is stored, but the program does keep track of that information, too. Indeed, you can use the & operator to retrieve braincount's address in memory. You'll learn about that operator in the next chapter, when you investigate a second strategy for identifying data: using pointers.

Names for Variables

C++ encourages you to use meaningful names for variables. If a variable represents the cost of a trip, you should call it cost_of_trip or costOfTrip, not just x or cot. You do have to follow a few simple C++ naming rules:

- The only characters you can use in names are alphabetic characters, numeric digits, and the underscore (_) character.
- The first character in a name cannot be a numeric digit.
- Uppercase characters are considered distinct from lowercase characters.
- You can't use a C++ keyword for a name.
- Names beginning with two underscore characters or with an underscore character followed by an uppercase letter are reserved for use by the implementation; that is, the compiler and the resources it uses.
- Names beginning with a single underscore character are reserved for use as global identifiers by the implementation.
- C++ places no limits on the length of a name, and all characters in a name are significant.

The next-to-last point is a bit different from the preceding points because using a name such as __time_stop or _Donut doesn't produce a compiler error; instead, it leads to undefined behavior. In other words, there's no telling what the result will be. The reason there is no compiler error is that the names are not illegal but rather are reserved for the implementation to use. The bit about global names refers to where the names are declared; Chapter 4 touches on that topic. The final point differentiates C++ from ANSI C (C99), which guarantees only that the first 63 characters in a name are significant. (In ANSI C, two names that have the same first 63 characters are considered identical, even if the 64th characters differ.)

Here are some valid and invalid C++ names:

int poodle;     // valid
int Poodle;     // valid and distinct from poodle
int POODLE;     // valid and even more distinct
Int terrier;    // invalid -- has to be int, not Int
int my_stars3;  // valid
int _Mystars3;  // valid but reserved -- starts with underscore
int 4ever;      // invalid because starts with a digit
int double;     // invalid -- double is a C++ keyword
int begin;      // valid -- begin is a Pascal keyword
int __fools;    // valid but reserved -- starts with two underscores
int the_very_best_variable_i_can_be_version_112; // valid
int honky-tonk; // invalid -- no hyphens allowed

If you want to form a name from two or more words, the usual practice is to separate the words with an underscore character, as in my_onions, or to capitalize the initial character of each word after the first, as in myEyeTooth.
(C veterans tend to use the underscore method in the C tradition, whereas Pascalians prefer the capitalization approach.) Either form makes it easier to see the individual words and to distinguish between, say, carDrip and cardRip, or boat_sport and boats_port.

Real-World Note: Variable Names

Schemes for naming variables, like schemes for naming functions, provide fertile ground for fervid discussion. Indeed, this topic produces some of the most strident disagreements in programming. Again, as with function names, the C++ compiler doesn't care about your variable names as long as they are within legal limits, but a consistent, precise personal naming convention will serve you well.

As in function naming, capitalization is a key issue in variable naming (see the sidebar "Naming Conventions" in Chapter 2, "Setting Out to C++"), but many programmers may insert an additional level of information in a variable name: a prefix that describes the variable's type or contents. For instance, the integer myWeight might be named nMyWeight; here, the n prefix is used to represent an integer value, which is useful when you are reading code and the definition of the variable isn't immediately at hand. Alternatively, this variable might be named intMyWeight, which is more precise and legible, although it does include a couple of extra letters (anathema to many programmers). Other prefixes are commonly used in like fashion: str or sz might be used to represent a null-terminated string of characters, b might represent a Boolean value, p a pointer, and c a single character. As you progress into the world of C++, you will find many examples of the prefix naming style (including the handsome m_lpctstr prefix, a class member value that contains a long pointer to a constant, null-terminated string of characters), as well as other, more bizarre and possibly counterintuitive styles that you may or may not adopt as your own.
As in all the stylistic, subjective parts of C++, consistency and precision are best. You should use variable names to fit your own needs, preferences, and personal style. (Or, if required, choose names that fit the needs, preferences, and personal style of your employer.)

Integer Types

Integers are numbers with no fractional part, such as 2, 98, 5286, and 0. There are lots of integers, assuming that you consider an infinite number to be a lot, so no finite amount of computer memory can represent all possible integers. Thus, a language can represent only a subset of all integers. Some languages, such as standard Pascal, offer just one integer type (one type fits all!), but C++ provides several choices. This gives you the option of choosing the integer type that best meets a program's particular requirements. This concern with matching type to data presages the designed data types of OOP.

The various C++ integer types differ in the amount of memory they use to hold an integer. A larger block of memory can represent a larger range of integer values. Also, some types (signed types) can represent both positive and negative values, whereas others (unsigned types) can't represent negative values. The usual term for describing the amount of memory used for an integer is width. The more memory a value uses, the wider it is. C++'s basic integer types, in order of increasing width, are char, short, int, and long. Each comes in both signed and unsigned versions. That gives you a choice of eight different integer types! Let's look at these integer types in more detail. Because the char type has some special properties (it's most often used to represent characters instead of numbers), this chapter covers the other types first.

The short, int, and long Integer Types

Computer memory consists of units called bits. (See the "Bits and Bytes" sidebar, later in this chapter.)
By using different numbers of bits to store values, the C++ types short, int, and long can represent up to three different integer widths. It would be convenient if each type were always some particular width for all systems; for example, if short were always 16 bits, int were always 32 bits, and so on. But life is not that simple; no one choice is suitable for all computer designs. C++ offers a flexible standard with some guaranteed minimum sizes, which it takes from C. Here's what you get:

- A short integer is at least 16 bits wide.
- An int integer is at least as big as short.
- A long integer is at least 32 bits wide and at least as big as int.

Bits and Bytes

The fundamental unit of computer memory is the bit. Think of a bit as an electronic switch that you can set to either off or on. Off represents the value 0, and on represents the value 1. An 8-bit chunk of memory can be set to 256 different combinations. The number 256 comes from the fact that each bit has two possible settings, making the total number of combinations for 8 bits 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2, or 256. Thus, an 8-bit unit can represent, say, the values 0 through 255 or the values -128 through 127. Each additional bit doubles the number of combinations. This means you can set a 16-bit unit to 65,536 different values and a 32-bit unit to 4,294,967,296 different values.

A byte usually means an 8-bit unit of memory. Byte in this sense is the unit of measurement that describes the amount of memory in a computer, with a kilobyte equal to 1,024 bytes and a megabyte equal to 1,024 kilobytes. However, C++ defines byte differently. The C++ byte consists of at least enough adjacent bits to accommodate the basic character set for the implementation. That is, the number of possible values must equal or exceed the number of distinct characters.
In the United States, the basic character sets are usually the ASCII and EBCDIC sets, each of which can be accommodated by 8 bits, so the C++ byte is typically 8 bits on systems using those character sets. However, international programming can require much larger character sets, such as Unicode, so some implementations may use a 16-bit byte or even a 32-bit byte.

Many systems currently use the minimum guarantee, making short 16 bits and long 32 bits. This still leaves several choices open for int. It could be 16, 24, or 32 bits in width and meet the standard. Typically, int is 16 bits (the same as short) for older IBM PC implementations and 32 bits (the same as long) for Windows 98, Windows NT, Windows XP, Macintosh OS X, VAX, and many other minicomputer implementations. Some implementations give you a choice of how to handle int. (What does your implementation use? The next example shows you how to determine the limits for your system without your having to open a manual.) The differences between implementations for type widths can cause problems when you move a C++ program from one environment to another. But a little care, as discussed later in this chapter, can minimize those problems.

You use these type names to declare variables just as you would use int:

short score;       // creates a type short integer variable
int temperature;   // creates a type int integer variable
long position;     // creates a type long integer variable

Actually, short is short for short int and long is short for long int, but hardly anyone uses the longer forms. The three types, int, short, and long, are signed types, meaning each splits its range approximately equally between positive and negative values. For example, a 16-bit int might run from -32,768 to +32,767.

If you want to know how your system's integers size up, you can use C++ tools to investigate type sizes with a program. First, the sizeof operator returns the size, in bytes, of a type or a variable.
(An operator is a built-in language element that operates on one or more items to produce a value. For example, the addition operator, represented by +, adds two values.) Note that the meaning of byte is implementation dependent, so a 2-byte int could be 16 bits on one system and 32 bits on another. Second, the climits header file (or, for older implementations, the limits.h header file) contains information about integer type limits. In particular, it defines symbolic names to represent different limits. For example, it defines INT_MAX as the largest possible int value and CHAR_BIT as the number of bits in a byte. Listing 3.1 demonstrates how to use these facilities. The program also illustrates initialization, which is the use of a declaration statement to assign a value to a variable.

Listing 3.1 limits.cpp

// limits.cpp -- some integer limits
#include <iostream>
#include <climits>              // use limits.h for older systems
int main()
{
    using namespace std;
    int n_int = INT_MAX;        // initialize n_int to max int value
    short n_short = SHRT_MAX;   // symbols defined in limits.h file
    long n_long = LONG_MAX;

    // sizeof operator yields size of type or of variable
    cout << "int is " << sizeof (int) << " bytes." << endl;
    cout << "short is " << sizeof n_short << " bytes." << endl;
    cout << "long is " << sizeof n_long << " bytes." << endl << endl;

    cout << "Maximum values:" << endl;
    cout << "int: " << n_int << endl;
    cout << "short: " << n_short << endl;
    cout << "long: " << n_long << endl << endl;

    cout << "Minimum int value = " << INT_MIN << endl;
    cout << "Bits per byte = " << CHAR_BIT << endl;
    return 0;
}

Compatibility Note

The climits header file is the C++ version of the ANSI C limits.h header file. Some earlier C++ platforms have neither header file available. If you're using such a system, you must limit yourself to experiencing this example in spirit only.

Here is the output from the program in Listing 3.1, using Microsoft Visual C++ 7.1:

int is 4 bytes.
short is 2 bytes.
long is 4 bytes.

Maximum values:
int: 2147483647
short: 32767
long: 2147483647

Minimum int value = -2147483648
Bits per byte = 8

Here is the output for a second system, running Borland C++ 3.1 for DOS:

int is 2 bytes.
short is 2 bytes.
long is 4 bytes.

Maximum values:
int: 32767
short: 32767
long: 2147483647

Minimum int value = -32768
Bits per byte = 8

Program Notes

The following sections look at the chief programming features for this program.

The sizeof Operator and the climits Header File

The sizeof operator reports that int is 4 bytes on the base system, which uses an 8-bit byte. You can apply the sizeof operator to a type name or to a variable name. When you use the sizeof operator with a type name, such as int, you enclose the name in parentheses. But when you use the operator with the name of a variable, such as n_short, parentheses are optional:

cout << "int is " << sizeof (int) << " bytes.\n";
cout << "short is " << sizeof n_short << " bytes.\n";

The climits header file defines symbolic constants (see the sidebar "Symbolic Constants the Preprocessor Way," later in this chapter) to represent type limits. As mentioned previously, INT_MAX represents the largest value type int can hold; this turned out to be 32,767 for our DOS system. The compiler manufacturer provides a climits file that reflects the values appropriate to that compiler. For example, the climits file for Windows XP, which uses a 32-bit int, defines INT_MAX to represent 2,147,483,647. Table 3.1 summarizes the symbolic constants defined in the climits file; some pertain to types you have not yet learned.

Table 3.1 Symbolic Constants from climits

Initialization

Initialization combines assignment with declaration. For example, the statement

int n_int = INT_MAX;

declares the n_int variable and sets it to the largest possible type int value. You can also use regular constants to initialize values.
You can initialize a variable to another variable, provided that the other variable has been defined first. You can even initialize a variable to an expression, provided that all the values in the expression are known when program execution reaches the declaration:

int uncles = 5;                  // initialize uncles to 5
int aunts = uncles;              // initialize aunts to 5
int chairs = aunts + uncles + 4; // initialize chairs to 14

Moving the uncles declaration to the end of this list of statements would invalidate the other two initializations because then the value of uncles wouldn't be known at the time the program tries to initialize the other variables. The initialization syntax shown previously comes from C; C++ has a second initialization syntax that is not shared with C:

int owls = 101;  // traditional C initialization
int wrens(432);  // alternative C++ syntax, set wrens to 432

Remember

If you don't initialize a variable that is defined inside a function, the variable's value is undefined. That means the value is whatever happened to be sitting at that memory location prior to the creation of the variable. If you know what the initial value of a variable should be, initialize it. True, separating the declaring of a variable from assigning it a value can create momentary suspense:

short year;   // what could it be?
year = 1492;  // oh

But initializing the variable when you declare it protects you from forgetting to assign the value later.

Symbolic Constants the Preprocessor Way

The climits file contains lines similar to the following:

#define INT_MAX 32767

Recall that the C++ compilation process first passes the source code through a preprocessor. Here #define, like #include, is a preprocessor directive. What this particular directive tells the preprocessor is this: Look through the program for instances of INT_MAX and replace each occurrence with 32767. So the #define directive works like a global search-and-replace command in a text editor or word processor.
The altered program is compiled after these replacements occur. The preprocessor looks for independent tokens (separate words) and skips embedded words. That is, the preprocessor doesn't replace PINT_MAXIM with P32767IM. You can use #define to define your own symbolic constants, too. (See Listing 3.2.) However, the #define directive is a C relic. C++ has a better way of creating symbolic constants (using the const keyword, discussed in a later section), so you won't be using #define much. But some header files, particularly those designed to be used with both C and C++, do use it.

Unsigned Types

Each of the three integer types you just learned about comes in an unsigned variety that can't hold negative values. This has the advantage of increasing the largest value the variable can hold. For example, if short represents the range -32,768 to +32,767, the unsigned version can represent the range 0 to 65,535. Of course, you should use unsigned types only for quantities that are never negative, such as populations, bean counts, and happy face manifestations. To create unsigned versions of the basic integer types, you just use the keyword unsigned to modify the declarations:

unsigned short change;   // unsigned short type
unsigned int rovert;     // unsigned int type
unsigned quarterback;    // also unsigned int
unsigned long gone;      // unsigned long type

Note that unsigned by itself is short for unsigned int. Listing 3.2 illustrates the use of unsigned types. It also shows what might happen if your program tries to go beyond the limits for integer types. Finally, it gives you one last look at the preprocessor #define statement.
Listing 3.2 exceed.cpp

// exceed.cpp -- exceeding some integer limits
#include <iostream>
#define ZERO 0        // makes ZERO symbol for 0 value
#include <climits>    // defines INT_MAX as largest int value
int main()
{
    using namespace std;
    short sam = SHRT_MAX;     // initialize a variable to max value
    unsigned short sue = sam; // okay if variable sam already defined

    cout << "Sam has " << sam << " dollars and Sue has " << sue;
    cout << " dollars deposited." << endl
         << "Add $1 to each account." << endl << "Now ";
    sam = sam + 1;
    sue = sue + 1;
    cout << "Sam has " << sam << " dollars and Sue has " << sue;
    cout << " dollars deposited.\nPoor Sam!" << endl;

    sam = ZERO;
    sue = ZERO;
    cout << "Sam has " << sam << " dollars and Sue has " << sue;
    cout << " dollars deposited." << endl;
    cout << "Take $1 from each account." << endl << "Now ";
    sam = sam - 1;
    sue = sue - 1;
    cout << "Sam has " << sam << " dollars and Sue has " << sue;
    cout << " dollars deposited." << endl << "Lucky Sue!" << endl;
    return 0;
}

Compatibility Note

Listing 3.2, like Listing 3.1, uses the climits file; older compilers might need to use limits.h, and some very old compilers might not have either file available.

Here's the output from the program in Listing 3.2:

Sam has 32767 dollars and Sue has 32767 dollars deposited.
Add $1 to each account.
Now Sam has -32768 dollars and Sue has 32768 dollars deposited.
Poor Sam!
Sam has 0 dollars and Sue has 0 dollars deposited.
Take $1 from each account.
Now Sam has -1 dollars and Sue has 65535 dollars deposited.
Lucky Sue!

The program sets a short variable (sam) and an unsigned short variable (sue) to the largest short value, which is 32,767 on our system. Then, it adds 1 to each value. This causes no problems for sue because the new value is still much less than the maximum value for an unsigned integer. But sam goes from 32,767 to -32,768! Similarly, subtracting 1 from 0 creates no problems for sam, but it makes the unsigned variable sue go from 0 to 65,535.
As you can see, these integers behave much like an odometer. If you go past the limit, the values just start over at the other end of the range. (See Figure 3.1.) C++ guarantees that unsigned types behave in this fashion. However, C++ doesn't guarantee that signed integer types can exceed their limits (overflow and underflow) without complaint, but that is the most common behavior on current implementations.

Figure 3.1 Typical overflow behavior for integers.

Beyond long

C99 has added a couple of new types that most likely will be part of the next edition of the C++ Standard. Indeed, many C++ compilers already support them. The types are long long and unsigned long long. Both are guaranteed to be at least 64 bits and to be at least as wide as the long and unsigned long types.

Choosing an Integer Type

With the richness of C++ integer types, which should you use? Generally, int is set to the most "natural" integer size for the target computer. Natural size refers to the integer form that the computer handles most efficiently. If there is no compelling reason to choose another type, you should use int.

Now look at reasons why you might use another type. If a variable represents something that is never negative, such as the number of words in a document, you can use an unsigned type; that way the variable can represent higher values.

If you know that the variable might have to represent integer values too great for a 16-bit integer, you should use long. This is true even if int is 32 bits on your system. That way, if you transfer your program to a system with a 16-bit int, your program won't embarrass you by suddenly failing to work properly. (See Figure 3.2.)

Figure 3.2 For portability, use long for big integers.

Using short can conserve memory if short is smaller than int. Most typically, this is important only if you have a large array of integers. (An array is a data structure that stores several values of the same type sequentially in memory.)
If it is important to conserve space, you should use short instead of int, even if the two are the same size. Suppose, for example, that you move your program from a 16-bit int DOS PC system to a 32-bit int Windows XP system. That doubles the amount of memory needed to hold an int array, but it doesn't affect the requirements for a short array. Remember, a bit saved is a bit earned. If you need only a single byte, you can use char. We'll examine that possibility soon.

Integer Constants

An integer constant is one you write out explicitly, such as 212 or 1776. C++, like C, lets you write integers in three different number bases: base 10 (the public favorite), base 8 (the old Unix favorite), and base 16 (the hardware hacker's favorite). Appendix A, "Number Bases," describes these bases; here we'll look at the C++ representations. C++ uses the first digit or two to identify the base of a number constant. If the first digit is in the range 1 through 9, the number is base 10 (decimal); thus 93 is base 10. If the first digit is 0 and the second digit is in the range 1 through 7, the number is base 8 (octal); thus 042 is octal and equal to 34 decimal. If the first two characters are 0x or 0X, the number is base 16 (hexadecimal); thus 0x42 is hex and equal to 66 decimal. For hexadecimal values, the characters a-f and A-F represent the hexadecimal digits corresponding to the values 10 through 15. 0xF is 15 and 0xA5 is 165 (10 sixteens plus 5 ones). Listing 3.3 is tailor-made to show the three bases.
Listing 3.3 hexoct1.cpp

// hexoct1.cpp -- shows hex and octal constants
#include <iostream>
int main()
{
    using namespace std;
    int chest = 42;     // decimal integer constant
    int waist = 0x42;   // hexadecimal integer constant
    int inseam = 042;   // octal integer constant

    cout << "Monsieur cuts a striking figure!\n";
    cout << "chest = " << chest << "\n";
    cout << "waist = " << waist << "\n";
    cout << "inseam = " << inseam << "\n";
    return 0;
}

By default, cout displays integers in decimal form, regardless of how they are written in a program, as the following output shows:

Monsieur cuts a striking figure!
chest = 42 (42 in decimal)
waist = 66 (0x42 in hex)
inseam = 34 (042 in octal)

Keep in mind that these notations are merely notational conveniences. For example, if you read that the CGA video memory segment is B000 in hexadecimal, you don't have to convert the value to base 10 (45,056) before using it in your program. Instead, you can simply use 0xB000. But whether you write the value ten as 10, 012, or 0xA, it's stored the same way in the computer: as a binary (base 2) value.

By the way, if you want to display a value in hexadecimal or octal form, you can use some special features of cout. Recall that the iostream header file provides the endl manipulator to give cout the message to start a new line. Similarly, it provides the dec, hex, and oct manipulators to give cout the messages to display integers in decimal, hexadecimal, and octal formats, respectively. Listing 3.4 uses hex and oct to display the decimal value 42 in three formats. (Decimal is the default format, and each format stays in effect until you change it.)

Listing 3.4 hexoct2.cpp

// hexoct2.cpp -- display values in hex and octal
#include <iostream>
using namespace std;
int main()
{
    using namespace std;
    int chest = 42;
    int waist = 42;
    int inseam = 42;

    cout << "Monsieur cuts a striking figure!"
         << endl;
    cout << "chest = " << chest << " (decimal)" << endl;
    cout << hex;  // manipulator for changing number base
    cout << "waist = " << waist << " hexadecimal" << endl;
    cout << oct;  // manipulator for changing number base
    cout << "inseam = " << inseam << " (octal)" << endl;
    return 0;
}

Here's the program output for Listing 3.4:

Monsieur cuts a striking figure!
chest = 42 (decimal)
waist = 2a hexadecimal
inseam = 52 (octal)

Note that code like

cout << hex;

doesn't display anything onscreen. Instead, it changes the way cout displays integers. Thus, the manipulator hex is really a message to cout that tells it how to behave. Also note that because the identifier hex is part of the std namespace and the program uses that namespace, this program can't use hex as the name of a variable. However, if you omitted the using directive and instead used std::cout, std::endl, std::hex, and std::oct, you could still use plain hex as the name for a variable.

How C++ Decides What Type a Constant Is

A program's declarations tell the C++ compiler the type of a particular integer variable. But what about constants? That is, suppose you represent a number with a constant in a program:

cout << "Year = " << 1492 << "\n";

Does the program store 1492 as an int, a long, or some other integer type? The answer is that C++ stores integer constants as type int unless there is a reason to do otherwise. Two such reasons are if you use a special suffix to indicate a particular type or if a value is too large to be an int.

First, look at the suffixes. These are letters placed at the end of a numeric constant to indicate the type. An l or L suffix on an integer means the integer is a type long constant, a u or U suffix indicates an unsigned int constant, and ul (in any combination of orders and uppercase and lowercase) indicates a type unsigned long constant. (Because a lowercase l can look much like the digit 1, you should use the uppercase L for suffixes.)
For example, on a system using a 16-bit int and a 32-bit long, the number 22022 is stored in 16 bits as an int, and the number 22022L is stored in 32 bits as a long. Similarly, 22022LU and 22022UL are unsigned long.

Next, look at size. C++ has slightly different rules for decimal integers than it has for hexadecimal and octal integers. (Here decimal means base 10, just as hexadecimal means base 16; the term decimal does not necessarily imply a decimal point.) A decimal integer without a suffix is represented by the smallest of the following types that can hold it: int, long, or unsigned long. On a computer system using a 16-bit int and a 32-bit long, 20000 is represented as type int, 40000 is represented as long, and 3000000000 is represented as unsigned long. A hexadecimal or octal integer without a suffix is represented by the smallest of the following types that can hold it: int, unsigned int, long, or unsigned long. The same computer system that represents 40000 as long represents the hexadecimal equivalent 0x9C40 as an unsigned int. That's because hexadecimal is frequently used to express memory addresses, which intrinsically are unsigned. So unsigned int is more appropriate than long for a 16-bit address.

The char Type: Characters and Small Integers

It's time to turn to the final integer type: char. As you probably suspect from its name, the char type is designed to store characters, such as letters and numeric digits. Now, whereas storing numbers is no big deal for computers, storing letters is another matter. Programming languages take the easy way out by using number codes for letters. Thus, the char type is another integer type. It's guaranteed to be large enough to represent the entire range of basic symbols (all the letters, digits, punctuation, and the like) for the target computer system. In practice, most systems support fewer than 256 kinds of characters, so a single byte can represent the whole range.
Therefore, although char is most often used to handle characters, you can also use it as an integer type that is typically smaller than short. The most common symbol set in the United States is the ASCII character set, described in Appendix C, "The ASCII Character Set." A numeric code (the ASCII code) represents each character in the set. For example, 65 is the code for the character A, and 77 is the code for the character M. For convenience, this book assumes ASCII code in its examples. However, a C++ implementation uses whatever code is native to its host system (for example, EBCDIC, pronounced "eb-se-dik", on an IBM mainframe). Neither ASCII nor EBCDIC serve international needs that well, and C++ supports a wide-character type that can hold a larger range of values, such as are used by the international Unicode character set. You'll learn about this wchar_t type later in this chapter. Try the char type in Listing 3.5.

Listing 3.5 chartype.cpp

// chartype.cpp -- the char type
#include <iostream>
int main()
{
    using namespace std;
    char ch;          // declare a char variable

    cout << "Enter a character: " << endl;
    cin >> ch;
    cout << "Holla! ";
    cout << "Thank you for the " << ch << " character." << endl;
    return 0;
}

Here's the output from the program in Listing 3.5:

Enter a character:
M
Holla! Thank you for the M character.

The interesting thing is that you type an M, not the corresponding character code, 77. Also, the program prints an M, not 77. Yet if you peer into memory, you find that 77 is the value stored in the ch variable. The magic, such as it is, lies not in the char type but in cin and cout. These worthy facilities make conversions on your behalf. On input, cin converts the keystroke input M to the value 77. On output, cout converts the value 77 to the displayed character M; cin and cout are guided by the type of variable. If you place the same value 77 into an int variable, cout displays it as 77. (That is, cout displays two 7 characters.)
Listing 3.6 illustrates this point. It also shows how to write a character constant in C++: Enclose the character within two single quotation marks, as in 'M'. (Note that the example doesn't use double quotation marks. C++ uses single quotation marks for a character and double quotation marks for a string. The cout object can handle either, but, as Chapter 4 discusses, the two are quite different from one another.) Finally, the program introduces a cout feature, the cout.put() function, which displays a single character.

Listing 3.6 morechar.cpp

// morechar.cpp -- the char type and int type contrasted
#include <iostream>
int main()
{
    using namespace std;
    char ch = 'M';       // assign ASCII code for M to ch
    int i = ch;          // store same code in an int
    cout << "The ASCII code for " << ch << " is " << i << endl;

    cout << "Add one to the character code:" << endl;
    ch = ch + 1;         // change character code in ch
    i = ch;              // save new character code in i
    cout << "The ASCII code for " << ch << " is " << i << endl;

    // using the cout.put() member function to display a char
    cout << "Displaying char ch using cout.put(ch): ";
    cout.put(ch);

    // using cout.put() to display a char constant
    cout.put('!');

    cout << endl << "Done" << endl;
    return 0;
}

Here is the output from the program in Listing 3.6:

The ASCII code for M is 77
Add one to the character code:
The ASCII code for N is 78
Displaying char ch using cout.put(ch): N!
Done

Program Notes

In the program in Listing 3.6, the notation 'M' represents the numeric code for the M character, so initializing the char variable ch to 'M' sets ch to the value 77. The program then assigns the identical value to the int variable i, so both ch and i have the value 77. Next, cout displays ch as M and i as 77. As previously stated, a value's type guides cout as it chooses how to display that value; this is just another example of smart objects. Because ch is really an integer, you can apply integer operations to it, such as adding 1. This changes the value of ch to 78.
The program then resets i to the new value. (Equivalently, you can simply add 1 to i.) Again, cout displays the char version of that value as a character and the int version as a number. The fact that C++ represents characters as integers is a genuine convenience that makes it easy to manipulate character values. You don't have to use awkward conversion functions to convert characters to ASCII and back. Finally, the program uses the cout.put() function to display both ch and a character constant.

A Member Function: cout.put()

Just what is cout.put(), and why does it have a period in its name? The cout.put() function is your first example of an important C++ OOP concept, the member function. Remember that a class defines how to represent data and how to manipulate it. A member function belongs to a class and describes a method for manipulating class data. The ostream class, for example, has a put() member function that is designed to output characters. You can use a member function only with a particular object of that class, such as the cout object, in this case. To use a class member function with an object such as cout, you use a period to combine the object name (cout) with the function name (put()). The period is called the membership operator. The notation cout.put() means to use the class member function put() with the class object cout. You'll learn about this in greater detail when you reach classes in Chapter 10, "Objects and Classes." Now, the only classes you have are the istream and ostream classes, and you can experiment with their member functions to get more comfortable with the concept. The cout.put() member function provides an alternative to using the << operator to display a character. At this point you might wonder why there is any need for cout.put(). Much of the answer is historical. Before Release 2.0 of C++, cout would display character variables as characters but display character constants, such as 'M' and 'N', as numbers.
The problem was that earlier versions of C++, like C, stored character constants as type int. That is, the code 77 for 'M' would be stored in a 16-bit or 32-bit unit. Meanwhile, char variables typically occupied 8 bits. A statement like

char c = 'M';

copied 8 bits (the important 8 bits) from the constant 'M' to the variable c. Unfortunately, this meant that, to cout, 'M' and c looked quite different from one another, even though both held the same value. So a statement like

cout << '$';

would print the ASCII code for the $ character rather than simply display $. But

cout.put('$');

would print the character, as desired. Now, after Release 2.0, C++ stores single-character constants as type char, not type int. Therefore, cout now correctly handles character constants. The cin object has a couple of different ways of reading characters from input. You can explore these by using a program that uses a loop to read several characters, so we'll return to this topic when we cover loops in Chapter 5, "Loops and Relational Expressions."

char Constants

You have several options for writing character constants in C++. The simplest choice for ordinary characters, such as letters, punctuation, and digits, is to enclose the character in single quotation marks. This notation stands for the numeric code for the character. For example, an ASCII system has the following correspondences:

'A' is 65, the ASCII code for A
'a' is 97, the ASCII code for a
'5' is 53, the ASCII code for the digit 5
' ' is 32, the ASCII code for the space character
'!' is 33, the ASCII code for the exclamation point

Using this notation is better than using the numeric codes explicitly. It's clearer, and it doesn't assume a particular code. If a system uses EBCDIC, then 65 is not the code for A, but 'A' still represents the character. There are some characters that you can't enter into a program directly from the keyboard.
For example, you can't make the newline character part of a string by pressing the Enter key; instead, the program editor interprets that keystroke as a request for it to start a new line in your source code file. Other characters have difficulties because the C++ language imbues them with special significance. For example, the double quotation mark character delimits strings, so you can't just stick one in the middle of a string. C++ has special notations, called escape sequences, for several of these characters, as shown in Table 3.2. For example, \a represents the alert character, which beeps your terminal's speaker or rings its bell. The escape sequence \n represents a newline. And \" represents the double quotation mark as an ordinary character instead of a string delimiter. You can use these notations in strings or in character constants, as in the following examples:

char alarm = '\a';
cout << alarm << "Don't do that again!\a\n";
cout << "Ben \"Buggsie\" Hacker\nwas here!\n";

Table 3.2 C++ Escape Sequence Codes

Character Name      ASCII Symbol    C++ Code
newline             NL (LF)         \n
horizontal tab      HT              \t
vertical tab        VT              \v
backspace           BS              \b
carriage return     CR              \r
alert (bell)        BEL             \a
backslash           \               \\
question mark       ?               \?
single quote        '               \'
double quote        "               \"

The last line produces the following output:

Ben "Buggsie" Hacker
was here!

Note that you treat an escape sequence, such as \n, just as a regular character, such as Q. That is, you enclose it in single quotes to create a character constant and don't use single quotes when including it as part of a string. The newline character provides an alternative to endl for inserting new lines into output. You can use the newline character in character constant notation ('\n') or as a character in a string ("\n"). All three of the following move the screen cursor to the beginning of the next line:

cout << endl;   // using the endl manipulator
cout << '\n';   // using a character constant
cout << "\n";   // using a string

You can embed the newline character in a longer string; this is often more convenient than using endl. For example, the following two cout statements produce the same output:

cout << endl << endl << "What next?"
<< endl << "Enter a number:" << endl;
cout << "\n\nWhat next?\nEnter a number:\n";

When you're displaying a number, endl is a bit easier to type than "\n" or '\n', but, when you're displaying a string, ending the string with a newline character requires less typing:

cout << x << endl;      // easier than cout << x << "\n";
cout << "Dr. X.\n";     // easier than cout << "Dr. X." << endl;

Finally, you can use escape sequences based on the octal or hexadecimal codes for a character. For example, Ctrl+Z has an ASCII code of 26, which is 032 in octal and 0x1a in hexadecimal. You can represent this character with either of the following escape sequences: \032 or \x1a. You can make character constants out of these by enclosing them in single quotes, as in '\032', and you can use them as parts of a string, as in "hi\x1a there".

TIP

When you have a choice between using a numeric escape sequence or a symbolic escape sequence, as in \x8 versus \b, use the symbolic code. The numeric representation is tied to a particular code, such as ASCII, but the symbolic representation works with all codes and is more readable.

Listing 3.7 demonstrates a few escape sequences. It uses the alert character to get your attention, the newline character to advance the cursor (one small step for a cursor, one giant step for cursorkind), and the backspace character to back the cursor one space to the left. (Houdini once painted a picture of the Hudson River using only escape sequences; he was, of course, a great escape artist.)

Listing 3.7 bondini.cpp

// bondini.cpp -- using escape sequences
#include <iostream>
int main()
{
    using namespace std;
    cout << "\aOperation \"HyperHype\" is now activated!\n";
    cout << "Enter your agent code:________\b\b\b\b\b\b\b\b";
    long code;
    cin >> code;
    cout << "\aYou entered " << code << "...\n";
    cout << "\aCode verified! Proceed with Plan Z3!\n";
    return 0;
}

Compatibility Note

Some C++ systems based on pre-ANSI C compilers don't recognize \a.
You can substitute \007 for \a on systems that use the ASCII character code. Some systems might behave differently, displaying the \b as a small rectangle rather than backspacing, for example, or perhaps erasing while backspacing, or perhaps ignoring \a. When you start the program in Listing 3.7, it puts the following text onscreen:

Operation "HyperHype" is now activated!
Enter your agent code:________

After printing the underscore characters, the program uses the backspace character to back up the cursor to the first underscore. You can then enter your secret code and continue. Here's a complete run:

Operation "HyperHype" is now activated!
Enter your agent code:42007007
You entered 42007007...
Code verified! Proceed with Plan Z3!

Universal Character Names

C++ implementations support a basic source character set, that is, the set of characters you can use to write source code. It consists of the letters (uppercase and lowercase) and digits found on a standard U.S. keyboard, the symbols, such as { and =, used in the C language, and a scattering of other characters, such as newline and space characters. Then there is a basic execution character set (that is, characters that can be produced by the execution of a program), which adds a few more characters, such as backspace and alert. The C++ Standard also allows an implementation to offer extended source character sets and extended execution character sets. Furthermore, those additional characters that qualify as letters can be used as part of the name of an identifier. Thus, a German implementation might allow you to use umlauted vowels, and a French implementation might allow accented vowels. C++ has a mechanism for representing such international characters that is independent of any particular keyboard: the use of universal character names. Using universal character names is similar to using escape sequences. A universal character name begins either with \u or \U.
The \u form is followed by 4 hexadecimal digits, and the \U form by 8 hexadecimal digits. These digits represent the ISO 10646 code for the character. (ISO 10646 is an international standard under development that provides numeric codes for a wide range of characters. See "Unicode and ISO 10646," later in this chapter.) If your implementation supports extended characters, you can use universal character names in identifiers, as character constants, and in strings. For example, consider the following code:

int k\u00F6rper;
cout << "Let them eat g\u00E2teau.\n";

The ISO 10646 code for ö is 00F6, and the code for â is 00E2. Thus, this C++ code would set the variable name to körper and display the following output:

Let them eat gâteau.

If your system doesn't support ISO 10646, it might display some other character for â or perhaps simply display the word g\u00E2teau.

Unicode and ISO 10646

Unicode provides a solution to the representation of various character sets by providing standard numeric codes for a great number of characters and symbols, grouping them by type. For example, the ASCII code is incorporated as a subset of Unicode, so U.S. Latin characters such as A and Z have the same representation under both systems. But Unicode also incorporates other Latin characters, such as those used in European languages; characters from other alphabets, including Greek, Cyrillic, Hebrew, Arabic, Thai, and Bengali; and ideographs, such as those used for Chinese and Japanese. So far Unicode represents more than 96,000 symbols and 49 scripts, and it is still under development. If you want to know more, you can check the Unicode Consortium's website. The International Organization for Standardization (ISO) established a working group to develop ISO 10646, also a standard for coding multilingual text. The ISO 10646 group and the Unicode group have worked together since 1991 to keep their standards synchronized with one another.
signed char and unsigned char

Unlike int, char is not signed by default. Nor is it unsigned by default. The choice is left to the C++ implementation in order to allow the compiler developer to best fit the type to the hardware properties. If it is vital to you that char has a particular behavior, you can use signed char or unsigned char explicitly as types:

char fodo;           // may be signed, may be unsigned
unsigned char bar;   // definitely unsigned
signed char snark;   // definitely signed

These distinctions are particularly important if you use char as a numeric type. The unsigned char type typically represents the range 0 to 255, and signed char typically represents the range -128 to 127. For example, suppose you want to use a char variable to hold values as large as 200. That works on some systems but fails on others. You can, however, successfully use unsigned char for that purpose on any system. On the other hand, if you use a char variable to hold a standard ASCII character, it doesn't really matter whether char is signed or unsigned, so you can simply use char.

For When You Need More: wchar_t

Programs might have to handle character sets that don't fit within the confines of a single 8-bit byte (for example, the Japanese kanji system). C++ handles this in a couple of ways. First, if a large set of characters is the basic character set for an implementation, a compiler vendor can define char as a 16-bit byte or larger. Second, an implementation can support both a small basic character set and a larger extended character set. The usual 8-bit char can represent the basic character set, and another type, called wchar_t (for wide character type), can represent the extended character set. The wchar_t type is an integer type with sufficient space to represent the largest extended character set used on the system. This type has the same size and sign properties as one of the other integer types, which is called the underlying type.
The choice of underlying type depends on the implementation, so it could be unsigned short on one system and int on another. The cin and cout family consider input and output as consisting of streams of chars, so they are not suitable for handling the wchar_t type. The latest version of the iostream header file provides parallel facilities in the form of wcin and wcout for handling wchar_t streams. Also, you can indicate a wide-character constant or string by preceding it with an L. The following code stores a wchar_t version of the letter P in the variable bob and displays a wchar_t version of the word tall:

wchar_t bob = L'P';          // a wide-character constant
wcout << L"tall" << endl;    // outputting a wide-character string

On a system with a 2-byte wchar_t, this code stores each character in a 2-byte unit of memory. This book doesn't use the wide-character type, but you should be aware of it, particularly if you become involved in international programming or in using Unicode or ISO 10646.

The bool Type

The ANSI/ISO C++ Standard has added a new type (new to C++, that is), called bool. It's named in honor of the English mathematician George Boole, who developed a mathematical representation of the laws of logic. In computing, a Boolean variable is one whose value can be either true or false. In the past, C++, like C, has not had a Boolean type. Instead, as you'll see in greater detail in Chapters 5 and 6, "Branching Statements and Logical Operators," C++ interprets nonzero values as true and zero values as false. Now, however, you can use the bool type to represent true and false, and the predefined literals true and false represent those values.
That is, you can make statements like the following:

bool isready = true;

The literals true and false can be converted to type int by promotion, with true converting to 1 and false to 0:

int ans = true;        // ans assigned 1
int promise = false;   // promise assigned 0

Also, any numeric or pointer value can be converted implicitly (that is, without an explicit type cast) to a bool value. Any nonzero value converts to true, whereas a zero value converts to false:

bool start = -100;     // start assigned true
bool stop = 0;         // stop assigned false

After the book introduces if statements (in Chapter 6), the bool type will become a common feature in the examples.
http://www.informit.com/articles/article.aspx?p=352319&seqNum=4
Meet Pandas: Query Dataframe

August 25, 2020

🐼Welcome back to the "Meet Pandas" series (a.k.a. my memorandum for learning Pandas)!🐼

Last time, I discussed grouping and several types of boxplot functions. Today, I'm going to briefly summarize how to extract a piece of data from a dataframe by specifying some conditions.

Load Example Data

As before, I use the "tips" dataset provided by seaborn. This is a dataset of food servers' tips in restaurants, with six factors that might influence tips.

import pandas as pd
import seaborn as sns
sns.set()

df = sns.load_dataset('tips')
df

The dataframe should look something like this:

Let's say we want a subset of records where 2 < tip < 3 and day is either Sat or Sun.

Boolean Indexing

You might solve this problem by Boolean indexing (if you get 28 records, your query is probably correct).

df[
    (df['tip']>2) &
    (df['tip']<3) &
    (df['day'].isin(['Sat', 'Sun']))
]

But, have you felt that this is a little cumbersome and prone to mistakes? I have. The common mistakes include:

df[ (2<df['tip']<3) & (df['day'].isin(['Sat', 'Sun'])) ]
# >> ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().

df[ (df['tip']>2) and (df['tip']<3) and (df['day'].isin(['Sat', 'Sun'])) ]
# >> ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().

df[ (df['tip']>2) & (df['tip']<3) & (df['day'] in ['Sat', 'Sun']) ]
# >> ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().

query Method

Pandas has the query method, which is easier to read and write than Boolean indexing. This method allows you to specify querying conditions in an SQL-like manner.

df.query('2 < tip < 3 and day in ["Sat", "Sun"]')

I usually prefer query to Boolean indexing because its SQL-like syntax enhances readability and reduces potential errors. But query is not always the better choice.
Please note the following:

- If the column name includes spaces or periods, you can't use query
- pandas.Series objects do not have a query method
- Boolean indexing is a bit faster than query
https://hippocampus-garden.com/pandas_query/
Python 3.7 adds support for the dataclass decorator. So let's have a look at how we can use this!

The Star Wars API

You know a movie's fanbase is passionate when a fan creates a REST API with the movie's data in it. One Star Wars fan has done exactly that, and created the Star Wars API. He's actually gone even further, and created a Python wrapper library for it. Let's forget for a second that there's already a wrapper out there, and see how we could write our own. We can use the requests library to get a resource from the Star Wars API:

response = requests.get('')

This endpoint (like all swapi endpoints) responds with a JSON message. Requests makes our life easier by offering JSON parsing:

dictionary = response.json()

And at this point we have our data in a dictionary. Let's have a look at it (shortened):

{
    'characters': ['', ...],
    'created': '2014-12-10T14:23:31.880000Z',
    'director': 'George Lucas',
    'edited': '2015-04-11T09:46:52.774897Z',
    'episode_id': 4,
    'opening_crawl': 'It is a period of civil war.\r\n ... ',
    'planets': ['', ...],
    'producer': 'Gary Kurtz, Rick McCallum',
    'release_date': '1977-05-25',
    'species': ['', ...],
    'starships': ['', ...],
    'title': 'A New Hope',
    'url': '',
    'vehicles': ['', ...]
}

Wrapping the API

To properly wrap an API, we should create objects that our wrapper's user can use in their application.
So let's define an object in Python 3.6 to contain the responses of requests to the /films/ endpoint:

class StarWarsMovie:

    def __init__(self,
                 title: str,
                 episode_id: int,
                 opening_crawl: str,
                 director: str,
                 producer: str,
                 release_date: datetime,
                 characters: List[str],
                 planets: List[str],
                 starships: List[str],
                 vehicles: List[str],
                 species: List[str],
                 created: datetime,
                 edited: datetime,
                 url: str
                 ):

        self.title = title
        self.episode_id = episode_id
        self.opening_crawl = opening_crawl
        self.director = director
        self.producer = producer
        self.release_date = release_date
        self.characters = characters
        self.planets = planets
        self.starships = starships
        self.vehicles = vehicles
        self.species = species
        self.created = created
        self.edited = edited
        self.url = url

        if type(self.release_date) is str:
            self.release_date = dateutil.parser.parse(self.release_date)

        if type(self.created) is str:
            self.created = dateutil.parser.parse(self.created)

        if type(self.edited) is str:
            self.edited = dateutil.parser.parse(self.edited)

Careful readers may have noticed a little bit of duplicated code here. Not so careful readers may want to have a look at the complete Python 3.6 implementation: it's not short. This is a classic case of where the data class decorator can help you out. We're creating a class that mostly holds data, and only does a little validation. So let's have a look at what we need to change. Firstly, data classes automatically generate several dunder methods. If we don't specify any options to the dataclass decorator, the generated methods are: __init__, __eq__, and __repr__. Python by default (not just for data classes) will implement __str__ to return the output of __repr__ if you've defined __repr__ but not __str__.
Therefore, you get four dunder methods implemented just by changing the code to:

@dataclass
class StarWarsMovie:

    title: str
    episode_id: int
    opening_crawl: str
    director: str
    producer: str
    release_date: datetime
    characters: List[str]
    planets: List[str]
    starships: List[str]
    vehicles: List[str]
    species: List[str]
    created: datetime
    edited: datetime
    url: str

We removed the __init__ method here to make sure the data class decorator can add the one it generates. Unfortunately, we lost a bit of functionality in the process. Our Python 3.6 constructor didn't just define all values, but it also attempted to parse dates. How can we do that with a data class? If we were to override __init__, we'd lose the benefit of the data class. Therefore a new dunder method was defined for any additional processing: __post_init__. Let's see what a __post_init__ method would look like for our wrapper class:

def __post_init__(self):
    if type(self.release_date) is str:
        self.release_date = dateutil.parser.parse(self.release_date)

    if type(self.created) is str:
        self.created = dateutil.parser.parse(self.created)

    if type(self.edited) is str:
        self.edited = dateutil.parser.parse(self.edited)

And that's it! We could implement our class using the data class decorator in under a third of the number of lines as we could without the data class decorator.

More goodies

By using options with the decorator, you can tailor data classes further for your use case. The default options are:

@dataclass(init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False)

- init determines whether to generate the __init__ dunder method.
- repr determines whether to generate the __repr__ dunder method.
- eq does the same for the __eq__ dunder method, which determines the behavior for equality checks (your_class_instance == another_instance).
- order actually creates four dunder methods, which determine the behavior for all lesser than and/or more than checks.
If you set this to true, you can sort a list of your objects. The last two options determine whether or not your object can be hashed. This is necessary (for example) if you want to use your class' objects as dictionary keys. A hash function should remain constant for the life of the objects, otherwise the dictionary will not be able to find your objects anymore. The default implementation of a data class' __hash__ function will return a hash over all objects in the data class. Therefore it's only generated by default if you also make your objects read-only (by specifying frozen=True). By setting frozen=True any write to your object will raise an error. If you think this is too draconian, but you still know it will never change, you could specify unsafe_hash=True instead. The authors of the data class decorator recommend you don't though. If you want to learn more about data classes, you can read the PEP or just get started and play with them yourself! Let us know in the comments what you're using data classes for!

18 Responses to Python 3.7: Introducing Data Classes

Varun Ramesh says: April 18, 2018

It seems to me that the 'StarWarsMovie' dataclass will fail a static type check if a string is passed in as an argument for 'release_date', 'created', or 'edited'. Since type annotations support unions, I think that 'Union[datetime, str]' might be the right annotation.

Peter Norvig says: April 18, 2018

`post_init` could be:

for attr in ['release_date', 'created', 'edited']:
    if isinstance(getattr(self, attr), str):
        setattr(self, attr, dateutil.parser.parse(getattr(self, attr)))

Wiliam says: April 18, 2018

I wonder if this hinders readability or understanding. Does it? When do we start to worry about these small details?

victor n. says: April 18, 2018

oh wow. Wiliam, i don't know about readability/understanding but what Peter wrote above is what is more maintainable. it's preferable (imo) to the original.
all you need to do now is add attributes to that list above and everything automagically works.

Kevin says: April 18, 2018

Not really, it might look weird to someone with little coding experience in Python (<2 years) but with a few years of proficiency this sort of thing becomes commonplace. Although I probably wouldn't write it exactly how the guy above did, or I'd at least surround all that getattr/setattr stuff with a comment explaining why this is done in a loop. The loop takes about 3 lines so if I have only 3 attributes to do this on I might not turn it into a loop/dynamic thing.

MrObvious says: April 19, 2018

Dude, that's Peter f*cking Norvig.. "That guy above" jeez.. lol 😉

Anentropic says: April 19, 2018

it's better in every way

Kevin Galkov says: April 18, 2018

I wonder if this hinders readability or understanding. Does it? When do we start to worry about these small details?

Chris Adams says: April 18, 2018

Kevin: I generally prefer this style because it makes it immediately obvious that all of the listed fields have intentionally identical behaviour. That might be a minor thing now but it tends to avoid bugs later when maintenance work means someone either has to confirm that intention or, worse, misses an instance and the behaviour is subtly no longer consistent.

Jeremy Kun says: April 18, 2018

Peter has spoken.

Quaint Alien says: April 18, 2018

Love your code golfing tricks!

tm says: April 18, 2018

Reviewing the non-dataclass class, if your constructor can take a `str` or `datetime` argument for the date objects, shouldn't the __init__ arguments for the date objects be `Union[str, datetime]`? Also, mypy doesn't like the way that the parse function is called with a typed `datetime` argument: `Argument 1 to "parse" has incompatible type "datetime"; expected "Union[bytes, str, IO[str], IO[Any]]"` Not sure how you rectified that.
Darren says: April 18, 2018

That moves python closer to the scala case class. Given python and scala are commonly used in big-data (Spark), some kind of python/scala convergence is not too surprising. The key difference, however, is that scala will catch type errors at compile-time.

Anentropic says: April 19, 2018

mypy will catch Python type errors at "build time" i.e. whenever you choose to run mypy, perhaps as part of your CI tests

Brian Bruggeman says: April 18, 2018

on dataclasses, I'm not sure types are needed anymore: I think this is super awesome; Python should never require types.

Wagner Macedo says: April 20, 2018

And we lose readability. Like it or not, we programmers have to deal with data types every time, if there is a standard way to document the types (this was the main reason behind type hinting), why not to use?

bc says: April 19, 2018

Better to use a library like Traits () or Atom ().

Eric Frederich says: April 19, 2018

What if the json-object returns a key which is a reserved word or otherwise not a valid Python variable name? I suppose you could define a @classmethod called from_json_response or something which would then return something like cls(a=data['a'], b=data['b'], ...etc) where a mapping of json names to python names could be enumerated. Unfortunately this seems to repeat a lot of code. I think golang lets you decorate structs saying what the JSON keys should be when serializing/deserializing.
https://blog.jetbrains.com/pycharm/2018/04/python-37-introducing-data-class/
In this section of the ethical hacking tutorial you will learn about the difference between ethical hacking and penetration testing, the various phases of hacking, footprinting, network scanning, gaining access, maintaining access, and covering tracks.

There are five phases in penetration testing: reconnaissance (footprinting), scanning, gaining access, maintaining access, and covering tracks.

Hacking involves several concepts, such as the phases of pentesting, footprinting, scanning, enumeration, system hacking, and sniffing traffic.

Footprinting, also known as reconnaissance, is the process of gathering all possible data about the target system. It can be active or passive. The collected data is used to intrude into the system and to decide which types of attack to mount, based on the system's security. Information such as the domain name, IP address, namespace, email IDs, location, and history of the website can be found by this method. Several tools are used to gather information, such as –

Scanning tools such as –

Penetration testing/exploitation tools such as –

Get certified in Ethical Hacking today with our online Ethical Hacking course!

Scanning is the second stage of information gathering, in which the hacker does a deep search into the system to look for valuable information. Ethical hackers use network scanning effectively to help prevent attacks on the organization. The tools and techniques used for scanning are –

The hacker tries to identify live systems using a protocol, blueprint the network, and perform a vulnerability scan to find weaknesses in the system. There are three types of scanning –

Here, the hacker uses different techniques and tools to extract as much data as possible from the system. They are –

Once you gain access to the system using various password-cracking methods, the next step is to maintain that access. To remain undetected, the hacker has to secure their presence; for example, by installing hidden infrastructure that keeps backdoor access open.
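The connect-based port scanning described above can be sketched in a few lines of Python. This is an illustrative example, not a tool from the tutorial; the function and variable names are invented here, and it should only be run against hosts you are authorized to test:

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds (port open);
            # a nonzero errno means the port is closed or filtered.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Only scan hosts you are authorized to test.
print(tcp_connect_scan("127.0.0.1", [22, 80, 443]))
```

Real scanners such as Nmap use faster, stealthier techniques (SYN scans, timing templates), but a full connect scan like this is the simplest to understand and requires no special privileges.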
Trojan horses, covert channels, and rootkits are commonly used to maintain access. A trojan horse provides access at the application level and is used to gain remote access. A covert channel sends data through secret communication tunnels. A rootkit is a type of malware that hides itself from the system, concealing its presence to bypass the computer's security measures.

To cover tracks, all traces of the attack, such as log files and intrusion detection system alarms, are removed. The hacker deletes all files and folders that were created, and modifies logs and the registry, before leaving the system. Using reverse HTTP shells and ICMP tunnels also helps to cover tracks.
https://intellipaat.com/blog/tutorial/ethical-hacking-cyber-security-tutorial/ethical-hacking-system-hacking/
Importing into transposed data skipping columns

Hi, I was reading through ... mport.html. But I can't figure out how I can import a dataset that is transposed, but with my variable names only starting in Column C, while the actual data starts in Column D. This is what I have done, which imports the transposed data, but doesn't give the data series a name, and doesn't skip the first two columns either. If the imported data doesn't skip the first two columns, it offsets the actual date the data starts in.

import ABC.csv "byrow" @frequency q 2001 @smpl 2001Q1 2010Q4

Is there a function similar to skip, or how should I be doing this? Thanks!

- Posts: 20
- Joined: Tue Feb 10, 2015 10:11 pm

Re: Importing into transposed data skipping columns

Did you try colhead=3 ?

- Fe ddaethom, fe welon, fe amcangyfrifon
- Posts: 11490
- Joined: Tue Sep 16, 2008 5:38 pm

Re: Importing into transposed data skipping columns

If you use our import wizard, you can select which row is the name, and which row is where the data starts. At the end, you'll see the captured command with all the appropriate options.

- EViews Developer
- Posts: 454
- Joined: Tue Sep 16, 2008 3:00 pm
- Location: Irvine, CA

3 posts • Page 1 of 1
http://forums.eviews.com/viewtopic.php?f=3&p=59525&sid=3543e9f0e118fa084db3cf0ed4d39dd0
Desktop Analytics: The Embeddable Library

In the spirit of me actually spending the bulk of my time writing code, I’m going to dispense with the meandering long-winded stories and just show a few details of the upcoming implementation of the Desktop Analytics platform. Within the next couple of months, I’m going to release the full product suite. The server is written in Java and the reporting GUI will be deployed as an Adobe AIR application, but those aren’t what I want to talk about today. Today I just want to provide a sneak peek into the embeddable library that developers will link into their own applications to enable statistical reporting.

The embeddable library is written in the D programming language, which compiles to native code and exposes C linkage. (DLLs on Windows and SOs on Linux; the OSX compiler is a little out of date and has some bugs, but rumor has it that it’s going to get a major overhaul in the near future.) Initially, I’m only going to support C/C++ deployment on Windows and Linux. But shortly thereafter, I’m going to write wrappers for Java and .NET. And hopefully the OSX compiler will be up to snuff by this summer. I’m shooting for maximum compatibility with the most in-demand development platforms, so I’m going to let users tell me which wrappers they’re most interested in seeing. (Chances are very good that I’ll also write a Flash/Flex/ActionScript implementation, to support analytics on RIA applications. My recent development experience in Flex has been exceptionally pleasant. Stay tuned!)

Another one of my goals is to make it absolutely dirt-simple for developers to use the library. In the most basic cases, it should take fewer than five lines of code. Here’s a simple case:

#include <AnalyticsLib.h>

int main(void)
{
    char* sessionId = AnalyticsLib.createSession(
        "",
        "MyDeveloperId",
        "MyApplicationId",
        "MyApp Version 1.0.6");

    /* ...application logic goes here... */

    AnalyticsLib.closeSession(sessionId);
    return 0;
}

Aaaaaand that’s it!
Easy like Sunday morning! The analytics library starts a background thread that handles all the nitty-gritty details. It caches all of its data locally (on disk), periodically communicating with the server (over HTTPS, if you like) and flushing its local cache. If the application (or computer) crashes suddenly, the data is all recoverable. The next time the user starts your application, the analytics library will flush the data from the previous session, along with a flag indicating that the session terminated abnormally. If the user disconnects from the internet, or if the server goes offline for whatever reason (upgrades, restarts, etc.), no problem. The library will cache its data on disk and re-submit later, when the network is connected and the server is online. If it can’t submit the data during the current session, it’ll submit the next time the user runs your application. When the library finally does connect with the server, they exchange a series of one-time-only, one-way-hashed authentication tokens, to prevent casual mischief-makers from sending the server bogus data.

Every time a new session is created, the library inspects the local operating environment, collecting information about the current device:

- The operating system name and version.
- The CPU name, speed, and number of cores.
- The total amount of system RAM.
- The number of local hard drives, and their total capacity and available space.
- The number of screens, and their dimensions.
- The presence (and versions) of the JVM and CLR.
- The network bandwidth (bytes per second, downstream).

If you need other information about the local environment (like maybe the installed version of some particular DLL) you can collect that info too. It’s part of the API, which we’ll get into later, but it’s not built into the default environment inspection process.

NOTE: As a customer of this product, you’ll have to agree not to collect any personally-identifiable user information.
It will be one of the terms of the license. You’ll also have to disclose to your users that you’re collecting anonymous usage data. The last decade has proven that it’s possible to collect anonymous statistical data on the web without breaching user privacy, and I stay within those bounds.

Along with the environment data, the library will also automatically report some historical device-usage statistics:

- The start and end times of the session.
- The currently-deployed version of the software.
- The total cumulative number of sessions originating from this device.
- The elapsed time since the previous session.
- The elapsed time since installation.

Beyond the automatically-collected device and session information, the analytics library also provides a simple API for reporting developer-configurable data, in four different categories: environment variables, benchmarks, events, and logs.

Once you’ve created a session, if you need to collect any environment variables not already included in the default installation, you can call a simple function in the analytics library to submit your own:

AnalyticsLib.envNumeric("Workstation Uptime (Minutes)", 673.25);
AnalyticsLib.envText("Python Version", "2.5.2");

Environment variables are reported only once per session. Re-submitting a new value for a previously submitted environment variable will overwrite the old value with the new.

If you’re concerned about the performance characteristics of your code on your users’ computers, you can run benchmarks locally and submit the results of those benchmarks for analysis later. Wouldn’t it be great to know the mean, median, and standard deviations of the execution time for some expensive function in your code? Do most of your users have blazing fast computers, or are most of them running pokey old-timer machines?

// The cross-referenced document contains 10,000 words.
double units = 10000;
// `millis` is the execution time, in milliseconds, measured by your own timing code.
AnalyticsLib.submitBenchmark("Document Cross-Reference", millis, units);

The benchmark API allows you to specify the number of work-units performed by your code as well as its execution time. Later, you can use this information to produce charts and graphs examining how your code’s execution time scales with its dataset. Do your algorithms have logarithmic or quadratic performance characteristics? If you submit benchmark data from within your application, you’ll be able to produce histogram reports for your entire user base. And, of course, you’ll be able to compare the benchmarks of various different user groups. Is your application’s performance more sensitive to CPU speed or to the amount of system RAM?

The analytics library also provides an API for submitting arbitrary events. An event is an instantaneous moment in time, with an associated name, and optional textual and numeric values. Here are a few examples:

// User disconnected from the network.
AnalyticsLib.event("NetworkDisconnect");

// User was idle for 1 hour.
AnalyticsLib.eventNumeric("BackFromIdleState", 60.0);

// User invoked the SpellCheck feature, from the Tools|Text hierarchy.
// You can report on any level of this hierarchy.
AnalyticsLib.eventText("FeatureInvoke", "Tools|Text|SpellCheck");

// User spent 6 minutes reading the 'Beginner Tutorial' page in the Help system.
AnalyticsLib.eventTextNumeric("HelpSystem", "Beginner Tutorial", 6.0);

With these events, you’ll be able to write complex reports to analyze the usage patterns of your users. How fully do your trial users explore the user interface? Do your “premium version” users actually use the premium features that they paid for? Although your users have long-running sessions (maybe they run your software all day long at work), how much time do they spend actively in the application, versus letting it sit idle in the background?

Finally, the analytics library provides a mechanism for submitting log data.
try {
    doSomething();
}
catch (const MyException& e) {
    char* failureType = e.getFailureType();
    char* message = e.getMessage();
    AnalyticsLib.log("3D Rendering Subsystem", failureType, message);
}

In your ongoing effort to improve product quality, you can inspect those remote error logs to find the hard-to-reproduce faults that never seem to occur in your own development environment. You can even run statistical reports about the modules where the errors originated, the types of errors that occur, the types of devices where they occur most often, and the events immediately preceding the crash.

So those are the features! I hope you’re as excited as I am to see this stuff finally see the light of day in a few months. Like I’ve said before, the embeddable library is about 90% complete (which is why I can describe it with such detail at this point). Currently, the lion’s share of my time is going into the GUI implementation. And boy is it going to be sexy. I couldn’t be happier with the Adobe AIR platform. The Flex API is extremely well-designed, and the event-driven model is excellent. Also: the graphics are stunning. Pixel-perfection in a highly-functional GUI is finally possible, and I’m happy to say that the user interface is a near-verbatim recreation of my original mockup. ActionScript, as a programming language, leaves a *little* to be desired, but it’s pretty decent, and the development environment is very productive.

Anyhow, I’m still working my ass off to get a 1.0 version released by the end of Q1, but I wanted to provide you guys with some of the implementation details that I’m hammering out, so that you can start salivating for the release. Anyone interested in beta testing?

January 6th, 2009 at 8:48 pm
“Initially, I’m only going to support C/C++ deployment on Windows and Linux.” This rather suggests you chose a technology which suited you, as opposed to a technology which suited your customers.

January 6th, 2009 at 11:12 pm
Thanks for the comment, James!
Actually, I *did* choose the technology most suited to me. Here were the characteristics I was looking for in an implementation language:

* The more compatibility the better. If I write the library in Java, then it’s absolutely limited to JVM languages. If I want lots of people to buy my product (and I do) then I have to shoot for the least common denominator: I absolutely must provide a C interface. Once the C interface exists, I can build a wrapper for practically any other language that exists.

* I want my software to be able to report interesting details about the hardware (CPU, memory, etc.), so I need to use a language that can execute low-level system calls.

Personally, I dislike C++ (though C isn’t quite so bad), and I didn’t want to write any C/C++ if I didn’t have to. Luckily, the D programming language meets my most important criteria. It’s not my favorite programming language, but it gets the job done well enough. I suppose if I could have written the library in any language at all, I would have chosen C#, which I think is very well designed.

Throughout my career, I’ve written code in JavaScript, Visual Basic, Java, C#, Perl, Python, Ruby, C++, D, ActionScript, and a handful of other niche languages (including a special-purpose language whose compiler I built myself!) So I’m no language bigot, and I don’t find programming-language choice a very interesting topic (though some people get VERY passionate about it). For me, the fun part of software development is the THING you’re building rather than the languages and tools that you use to build it.

January 7th, 2009 at 5:49 am
Ah, so you are going to have a Java wrapper - exciting! I’ll keep watching your blog - I think I would like to beta test when that wrapper is done.

January 7th, 2009 at 1:04 pm
Oh, there will absolutely be a Java version. I just figured the C/C++ version would be best to release first, since it’ll give me a foundation for developing all the subsequent higher-level language libraries.
Personally, my hunch is that the .NET version is going to be the most popular, and I’d really rather release it first. But it’d be silly to keep writing the same code over and over again in different languages when I can write all the core features once and build upon those initial releases.

January 8th, 2009 at 3:50 pm
One thought that came to mind is how this will work with all the “silent outbound connection-blocking” firewalls like ZoneAlarm? It seems like the end users of the applications that *your* users develop will have to be instructed to allow those applications outbound access.

January 13th, 2009 at 2:55 pm
> One thought that came to mind is how this will work with all the “silent outbound connection-blocking” firewalls like ZoneAlarm?

My company writes autoupdate software, and there was consideration to leverage this tool for client-side statistics (i.e. we have had requests, and so there is a market for such a tool). Products like ZoneAlarm and anti-virus software are an occasional problem but I wouldn’t rank them as a show-stopper. My only recommendation is that you write your code with excellent error-handling capabilities, for every single API call. The ZoneAlarm/anti-virus tools can mess with almost all API calls and will generate funny errors if the user decides to block the outgoing server requests.

Simon@AutoUpdate+ Professional update management tool
http://benjismith.net/index.php/2009/01/05/desktop-analytics-the-embeddable-library
Re: [openstack-dev] [new][cloudpulse] Announcing a project to HealthCheck OpenStack deployments
Jay Pipes jaypi...@gmail.com writes: On 05/12/2015 02:16 PM, Fox, Kevin M wrote: Nagios/watever As A Service would actually be very useful I think. Frankly, so do tenants. Tenants install software on their images using configuration management tools like mentioned above... I don't see a

Re: [openstack-dev] [oslo] eventlet 0.17.3 is now fully Python 3 compatible
Victor Stinner vstin...@redhat.com writes: Is there already a static analysis tool that helps find these things? (Would a pylint check for the above be useful? Some of them would be hard to find reliably, but a bunch of the above would be trivial) I read that hacking has some checks. It's

Re: [openstack-dev] Regarding openstack swift implementation
Hello, On Tue, Apr 21, 2015 at 7:34 AM, Subbulakshmi Subha subbulakshmisubh...@gmail.com wrote: I want to simulate it one javasim,so I want to know how are objects stored in the partitions in swift.I computed the md5 hash of the objects url, n the medium of storage is hashmap,so I want to

Re: [openstack-dev] [kolla][tripleo] Optional Ansible deployment coming
On 03/31/2015 03:30 PM, Steven Dake (stdake) wrote: Hey folks, One of our community members submitted a review to add optional Ansible support to deploy OpenStack using Ansible and the containers within Kolla. Our main objective remains: for third party deployment tools to use Kolla as a

Re: [openstack-dev] [kolla] Re: Questions about kolla
On Mon, Mar 30, 2015 at 3:42 PM, Steven Dake (stdake) std...@cisco.com wrote: We plan to dockerize any service needed to deploy OpenStack. We haven’t decided if that includes ceph, since ceph may already be dockerized by someone else. But it does include the HA services we need as well as

Re: [openstack-dev] [infra] What do people think of YAPF (like gofmt, for python)?
On Thu, Mar 26, 2015 at 5:02 PM, James E. Blair cor...@inaugust.com wrote: This is purposefully done to ensure that developers do not inadvertently run code on their workstations from a source they may not trust. Sure, but is that really make a difference between having some scripts in a

Re: [openstack-dev] [infra] What do people think of YAPF (like gofmt, for python)?
On Thu, Mar 26, 2015 at 12:22 AM, Monty Taylor mord...@inaugust.com wrote: git review is used by a ton of people who write in non-python. I think adding openstack-specific style enforcement to it would make it way less generally useful. I think if we wanted to do that we could just extend

Re: [openstack-dev] [kolla] Re: Why we didn't use k8s in kolla?
Hello, So if I understand correctly Kubernetes was avoided since there is no control point and using fig/docker-compose would get you a top to the bottom deployment that easy to control. At this point is there any reasons not using something like ansible+docker plugin instead? I have used

Re: [openstack-dev] [oslo.config][kolla] modular backends for oslo.cfg
On Tue, Mar 24, 2015 at 11:40 AM, Flavio Percoco fla...@redhat.com wrote: This however won't remove the need of a config file. For instance, plugins like etcd's will need the host/port options to be set somewhere - or passed as a cli parameter. Yes totally! I have been using command line

[openstack-dev] [oslo.config][kolla] modular backends for oslo.cfg
Hello, While on a long oversea flight with Sebastien Han we were talking how he had implemented ceph-docker with central configuration over etcd. We quickly came up to the conclusion that if we wanted to have that in Kolla it would be neat if it was done straight from oslo.config so that would

Re: [openstack-dev] [OpenStack-Infra] [infra] Infra cloud: infra running a cloud for nodepool
cor...@inaugust.com (James E. Blair) writes: A group of folks from HP is interested in starting an effort to run a cloud as part of the Infrastructure program with the purpose of providing resources to nodepool for OpenStack testing. HP is supplying two racks of machines, and we will operate

Re: [openstack-dev] [Keystone] Deprecation of Eventlet deployment in Kilo (Removal for M-release)
Morgan Fainberg morgan.fainb...@gmail.com writes: The Keystone development team is planning to deprecate deployment of Keystone under Eventlet during the Kilo cycle. Support for deploying under eventlet will be dropped as of the “M”-release of OpenStack. great! glad there is one project

Re: [openstack-dev] Pluggable Auth for clients and where should it go
On Wed, Feb 18, 2015 at 8:54 PM, Dean Troyer dtro...@gmail.com wrote: I think one thing needs to be clarified...what you are talking about is utilizing keystoneclient's auth plugins in neutronclient. Phrasing it as 'novaclient parity' reinforces the old notion that novaclient is the model

Re: [openstack-dev] The root-cause for IRC private channels (was Re: [all][tc] Lets keep our community open, lets fight for it)
Daniel P. Berrange berra...@redhat.com writes: Personally I think all our IRC channels should be logged. There is really no expectation of privacy when using IRC in an open collaborative project. Agreed with Daniel. I am not sure how a publicly available forum/channel can be assumed that there

Re: [openstack-dev] [devstack] [third-party] how to use a devstack external plugin in gate testing
Jaume Devesa devv...@gmail.com writes: Following the conversation... We have seen that glusterfs[1] and ec2api[2] use different approach when it comes to repository managing: whereas glusterfs is a single 'devstack' directory repository, ec2api is a whole project with a 'devstack' directory

Re: [openstack-dev] [devstack] [third-party] how to use a devstack external plugin in gate testing
Sean Dague s...@dague.net writes: I'm going to be -1ing most new or substantially redone drivers at this point. External plugins are a better model for those. +1 Chnmouel __ OpenStack Development Mailing List (not for

Re: [openstack-dev] [openstack-announce] [keystonemiddleware] keystonemiddlware release 1.3.0
On Thu, Dec 18, 2014 at 8:25 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote: * http_connect_timeout option is now an integer instead of a boolean. * The service user for auth_token middlware can now be in a domain other than the default domain. fyi it has a fix in there as well so you

Re: [openstack-dev] [hacking] proposed rules drop for 1.0
On Tue, Dec 9, 2014 at 12:39 PM, Sean Dague s...@dague.net wrote: 1 - the entire H8* group. This doesn't function on python code, it functions on git commit message, which makes it tough to run locally. I do run them locally using git-review custom script features which would launch a flake8

[openstack-dev] [devstack] external plugin support for Devstack

Re: [openstack-dev] [devstack] external plugin support for Devstack
On Mon, Nov 24, 2014 at 3:32 PM, Sean Dague s...@dague.net wrote: We should also make this something which is gate friendly. I think the idea had been that if projects included a /devstack/ directory in them, when assembling devstack gate, that would be automatically dropped into devstack's

Re: [openstack-dev] Alembic 0.7.0 - hitting Pypi potentially Sunday night
Hello, On Fri, Nov 21, 2014 at 10:10 PM, Mike Bayer mba...@redhat.com wrote: 1. read about the new features, particularly the branch support, and please let me know of any red flags/concerns you might have over the coming implementation, at

Re: [openstack-dev] [devstack] keystone doesn't restart after ./unst

Re: [openstack-dev] [Horizon] [Devstack]

Re: [openstack-dev] Travels tips for the Paris summit
On Wed, Oct 15, 2014 at 10:25 AM, Thierry Carrez thie...@openstack.org wrote: I found this: The Creperies (Brittany pancakes) are also great choices for vegetarians since you can easily pick a vegetarian filling. my

Re: [openstack-dev] 2 Minute tokens
On Wed, Oct 1, 2014 at 3:47 AM, Adam Young ayo...@redhat.com wrote: 1. Identify the roles for the APIs that Cinder is going to be calling on swift based on Swifts policy.json FYI: there is no Swifts policy.json in mainline code, there is one external middleware available that provides it

Re: [openstack-dev] [kolla] Kolla Blueprints
On Tue, Sep 30, 2014 at 6:41 PM, Steven Dake sd...@redhat.com wrote: I've done a first round of prioritization. I think key things we need people to step up for are nova and rabbitmq containers. For the developers, please take a moment to pick a specific blueprint to work on. If your

Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?
On Mon, Sep 29, 2014 at 11:47 PM, Dmitry Mescheryakov dmescherya...@mirantis.com wrote: As a result of operation #1 the token will be saved into Swift by the Keystone. But due to eventual consistency it could happen that validation of token in operation #2 will not see the saved token.

Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?
On 30/09/2014 01:05, Clay Gerrard wrote: eventual consistency will only affect container listing and I don't think there is a need for container listing in that driver. well now hold on... if you're doing an overwrite in the face of server failures you could still get a stale read if a

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
On Thu, Sep 25, 2014 at 6:02 AM, Clint Byrum cl...@fewbar.com wrote: However, this does make me think that Keystone domains should be exposable to services inside your cloud for use as SSO. It would be quite handy if the keystone users used for the VMs that host Kubernetes could use the same

Re: [openstack-dev] [all][tripleo] New Project - Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
On Wed, Sep 24, 2014 at 12:40 AM, Steven Dake sd...@redhat.com wrote: I'm pleased to announce the development of a new project Kolla which is Greek for glue :). Kolla has a goal of providing an implementation that deploys OpenStack using Kubernetes and Docker. This project will begin as a

Re: [openstack-dev] [Nova] [All] API standards working group
On Wed, Sep 24, 2014 at 12:18 AM, Jay Pipes jaypi...@gmail.com wrote: Yes, I'd be willing to head up the working group... or at least participate in it. I am certainly interested, count me in. Chmouel ___ OpenStack-dev mailing list

Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)
On Fri, Sep 19, 2014 at 6:58 PM, Donald Stufft don...@stufft.io wrote: So you can remove all that code and just let requests/urllib3 handle it on 3.2+, 2.7.9+ and for anything less than that either use conditional dependencies to have glance client depend on pyOpenSSL, ndg-httpsclient, and

Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)
Ian Cordasco ian.corda...@rackspace.com writes: urllib3 do that automatically. I haven’t started to investigate exactly why they do this. Likewise, glance client has custom certificate verification in glanceclient.common.https. Why? I’m not exactly certain this probably come from pre-requests

Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)
On Thu, Sep 18, 2014 at 1:58 PM, Donald Stufft don...@stufft.io wrote: Distributions are not the only place that people get their software from, unless you think that the ~3 million downloads requests has received on PyPI in the last 30 days are distributions downloading requests to package

Re: [openstack-dev] [all] OpenStack bootstrapping hour - Friday Sept 19th - 3pm EST
Sean Dague s...@dague.net writes: Episode 0 - Mock best practices will kick off this Friday, Sept 19th, from 3pm - 4pm EST. Our experts for this will be Jay Pipes and Dan ah too bad this is when the week-end start in Europe (and usually mean family time for me) but no complaining I guess there

Re: [openstack-dev] adding RSS feeds to specs repositories
Doug Hellmann d...@doughellmann.com writes: I have completed a series of patches [1] for (I think) all of the specs repositories to add RSS feeds so that when specs are approved and merged they are easily publicized. Col, thanks for setting this up. [...] I originally thought we would

Re: [openstack-dev] memory usage in devstack-gate (the oom-killer strikes again)
On Tue, Sep 9, 2014 at 12:24 AM, Joe Gordon joe.gord...@gmail.com wrote: 1) Should we explicitly set the number of workers that services use in devstack? Why have so many workers in a small all-in-one environment? What is the right balance here? This is what we do for Swift, without

Re: [openstack-dev] Review change to nova api pretty please?
On Sat, Aug 30, 2014 at 11:28 AM, Alex Leonhardt aleonhardt...@gmail.com wrote: Is there a list of things not to send to this list somewhere accessible (link?) that I could review, to not send another (different) request by mistake and possibly upset or annoy people on here ? There is this

Re: [openstack-dev] [specs] script to help with spec reviews to convert them in html or pdf
On Sat, Aug 16, 2014 at 7:34 PM, Jeremy Stanley fu...@yuggoth.org wrote: Not to detract from your suggestion, but if the specs project in question has a docs job then the link zuul leaves for it in each check report takes you to a draft rendering of the specs with the change applied rather

[openstack-dev] [specs] script to help with spec reviews to convert them in html or pdf
Hello, If like me you find it difficult to read a large text file like the rst specs inside gerrit diff interface viewer, I have created a script that gets rst files in a review to generate them in a html or pdf files and open it with your desktop default viewer (linux/mac) . Script is available

Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit
On Wed, Aug 13, 2014 at 6:27 PM, James E. Blair cor...@inaugust.com wrote: If it is not worth looking at a job that is run by the OpenStack CI system, please propose a patch to openstack-infra/config to delete it from the Zuul config. We only want to run what's useful, and we have other

Re: [openstack-dev] [devstack] Core team proposals
On Thu, Aug 7, 2014 at 8:09 PM, Dean Troyer dtro...@gmail.com wrote: Please respond in the usual manner, +1 or concerns. +1, I would be happy to see Ian joining the team. Chmouel ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org

Re: [openstack-dev] Tox run failure during installation of dependencies in requirements
On Wed, Aug 6, 2014 at 12:19 PM, Narasimhan, Vivekanandan vivekanandan.narasim...@hp.com wrote: Timeout: (pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x37e4790, 'Connection to pypi.python.org timed out. (connect timeout=15)') I think this error

Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward
Hello, Thanks for writing this summary, I like all those ideas and thanks working hard on fixing this. * For all non gold standard configurations, we'll dedicate a part of our infrastructure to running them in a continuous background loop, as well as making these configs available

Re: [openstack-dev] REST API access to configuration options
On Tue, Jul 15, 2014 at 9:54 AM, Henry Nash hen...@linux.vnet.ibm.com wrote: Do people think this is a good idea? Useful in other projects? Concerned about the risks? FWIW, we have this in Swift for a while and we actually uses it for different testing in cloud capabilities. I personally

Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)
Sean Dague s...@dague.net writes: bashate ftw. +1 to bashate Chmouel ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org

Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency
On Wed, Jun 11, 2014 at 9:47 PM, Sean Dague s...@dague.net wrote: Actually swiftclient is one of the biggest offenders in the gate - I'd be happy to fix that but that

Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency
On Thu, Jun 12, 2014 at 12:58 PM, Chmouel Boudjnah chmo...@enovance.com wrote: On Wed, Jun 11, 2014 at 9:47 PM, Sean Dague s...@dague.net wrote: Actually swiftclient is one of the biggest offenders in the gate -

Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency
On Thu, Jun 12, 2014 at 1:59 PM, Sean Dague s...@dague.net wrote: The only thing it makes harder is you have to generate your own token to run the curl command. The rest is there. Well I would have imagine that the curl command debug are here so people can easily copy and paste them and/or

Re: [openstack-dev] Selecting more carefully our dependencies
On Thu, May 29, 2014 at 11:25 AM, Thomas Goirand z...@debian.org wrote: So I'm wondering: are we being careful enough when selecting dependencies? In this case, I think we haven't, and I would recommend against using wrapt. Not only because it embeds six.py, but because upstream looks

Re: [openstack-dev] [Swift] storage policies merge plan
On Tue, May 27, 2014 at 10:02 AM, Hua ZZ Zhang zhu...@cn.ibm.com wrote: Do it make sense to support storage policy work in devstack so that it can be more easily tested? -Edward Zhang I don't think storage policy on one VM (which has other OpenStack services) like usually setup for

Re: [openstack-dev] Spec repo names
jebl...@openstack.org (James E. Blair) writes: about this, the more I think that the right answer is that we should stick with codenames for the spec repos. The codenames are actually I hereby +1 this, except old timers that i don't think many people knows the OpenStack components by their

[openstack-dev] [swift] swiftclient functional tests gate for python-swiftclient
/test_multithreading.py R tests/unit/test_swiftclient.py R tests/unit/test_utils.py R tests/unit/utils.py M tox.ini 13 files changed, 320 insertions(+), 2 deletions(-) Approvals: Alistair Coles: Looks good to me (core reviewer); Approved Chmouel Boudjnah: Looks good to me (core reviewer) Jenkins: Verified

Re: [openstack-dev] SSL in Common client
Rob Crittenden rcrit...@redhat.com writes: From what I found nothing has changed either upstream or in swift. If you are asking about the ability to disable SSL compression it is up to the OS to provide that so nothing was added when we changed swiftclient to requests.
Most modern OSes have Re: [openstack-dev] SSL in Common client Chmouel Boudjnah chmo...@enovance.com writes: Most modern OSes have SSL compression by default, only Debian stable was still enabling it. I mean have SSL compression *disabled* by default. Chmouel. ___ OpenStack-dev mailing list OpenStack-dev Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28 On Fri, Apr 25, 2014 at 6:10 PM, Zaro zaro0...@gmail.com wrote: Gerrit 2.8 allows setting label values on patch sets either thru the command line[1] or REST API[2]. Since we will setup WIP as a -1 score on a label this will just be a matter of updating git-review to set the label on new Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28 On Wed, Apr 23, 2014 at 12:40 AM, James E. Blair jebl...@openstack.orgwrote: There are a few changes that will impact developers. We will have more detailed documentation about this soon, but here are the main things you should know about: What plugins are going to be enabled under gerrit? Re: [openstack-dev] [SWIFT] swift and authorization policy I haven't done a full review but I like what you did and this should be the proper way to handle ACL for keystoneauth. I am not sure tho that forking oslo.common.policy is any better than copy/pasting it with its dependences. I would suggest we move `swift-keystoneauth` to its own project part Re: [openstack-dev] [Devstack] add support for ceph On Fri, Apr 18, 2014 at 6:32 AM, Sean Dague s...@dague.net wrote: That being said, there are 2 devstack sessions available at design summit. So proposing something around addressing the ceph situation might be a good one. It's a big and interesting problem. I have add a session that just do Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh FWIW: we are using bash in devstack if we were going to try to make it POSIX bourne shell (or whatever /bin/sh is) it would have been a huge pain. 
On Tue, Apr 15, 2014 at 1:25 PM, Dougal Matthews dou...@redhat.com wrote: Another +1 for using bash. Sounds like an easy win. On 15/04/14 12:31, [openstack-dev] [Devstack] add support for ceph Hello, We had quite a lengthy discussion on this review : about a patch that seb has sent to add ceph support to devstack. The main issues seems to resolve around the fact that in devstack we support only packages that are in the distros and not having [openstack-dev] [Aws as a seervice] Jumpgate review A good article mentioned here: for me, it's a gateway instead of I think our better approach of drivers inside openstack. I would imagine it's not a static one and would pass down everything it doesn't know about. If Re: [openstack-dev] [marconi] sample config files should be ignored in git... On Thu, Mar 27, 2014 at 7:29 PM, Kurt Griffiths kurt.griffi...@rackspace.com wrote: P.S. - Any particular reason this script wasn't written in Python? Seems like that would avoid a lot of cross-platform gotchyas. I think it just need to have someone stepping up doing it. Chmouel Re: [openstack-dev] [nova][qa][all] Home of rendered specs Thierry Carrez wrote: specs instead of docs because docs.openstack.org should only contain what is actually implemented so keeping specs in another subdomain is an attempt to avoid confusion as we don't expect every approved blueprint to get implemented. Great Re: [openstack-dev] [MagnetoDB] Best practices for uploading large amounts of data Maksym Iarmak wrote: I suggest taking a look, how Swift and Ceph do such things. under swift (and CEPH via the radosgw which implement swift API) we are using POST and PUT which has been working relatively well Chmouel ___ OpenStack-dev mailing list Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate On Tue, Mar 18, 2014 at 2:09 PM, Sean Dague s...@dague.net wrote: We've not required UCA for any other project to pass the gate. 
Is it that bad to have UCA in default devstack, as far as I know UCA is the official way to do OpenStack on ubuntu, right? Chmouel. Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate On Tue, Mar 18, 2014 at 5:21 PM, Sean Dague s...@dague.net wrote: So I'm still -1 at the point in making UCA our default run environment until it's provably functional for a period of time. Because working around upstream distro breaks is no fun. I agree, if UCA is not very stable ATM, this Re: [openstack-dev] Moving swift3 to stackforge (was: Re: [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka) On Sat, Mar 15, 2014 at 4:28 AM, Pete Zaitcev zait...@redhat.com wrote: I think we should've not kicked it out. Maybe just re-fold it back into Swift? we probably would need to have a vote/chat of some sort first. Chmouel ___ OpenStack-dev mailing [openstack-dev] Moving swift3 to stackforge (was: Re: [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka) On Thu, Mar 13, 2014 at 3:44 PM, Sean Dague s...@dague.net wrote: In Juno I'd really be pro removing all the devstack references to git repos not on git.openstack.org, because these kinds of failures have real impact. Currently we have 4 repositories that fit this bill: Re: [openstack-dev] Replication multi cloud You may be interested by this project as well : you would need to replicate your keystone in both way via mysql replication or something like this (and have endpoint url changed as well obviously there). Chmouel On Thu, Mar 13, 2014 at 5:25 PM, Marco Re: [openstack-dev] [Nova client] gate-python-novaclient-pypy FYI: this is all over for all clients that gates with pypy : On Mon, Mar 10, 2014 at 10:37 AM, Gary Kotton gkot...@vmware.com wrote: Hi, The client gate seems to be broken with the following error: Re: [openstack-dev] [Neutron][FYI] Bookmarklet for neutron gerrit review if peoples like this why don't we have it directly on the reviews? 
Chmouel. On Tue, Mar 4, 2014 at 10:00 PM, Carl Baldwin c...@ecbaldwin.net wrote: Nachi, Great! I'd been meaning to do something like this. I took yours and tweaked it a bit to highlight failed Jenkins builds in red and Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support On Fri, Mar 7, 2014 at 12:30 AM, Matt Riedemann mrie...@linux.vnet.ibm.comwrote: What would be awesome in Juno is some CI around RBD/Ceph. I'd feel a lot more comfortable with this code if we had CI running Tempest Seb has been working to add ceph support into devstack which could be a Re: [openstack-dev] [oslo][all] config sample tools on os x On Thu, Feb 20, 2014 at 10:22 AM, Julien Danjou jul...@danjou.info wrote: I'm pretty sure it'd be OK to use getopt in a portable way rather than specifically the GNU version, but I had no idea if it was acceptable. If everybody think it is, I can give it a try. In which sort of system setup Re: [openstack-dev] [openstack-announce] python-heatclient 0.2.7 released On Wed, Feb 19, 2014 at 1:50 AM, Steve Baker sba...@redhat.com wrote: Changes in this release: It is probably worth mentioning[1] that python-heatclient is now using the requests library instead of its homegrown httpclient library which Re: [openstack-dev] [oslo][all] config sample tools on os x On Thu, Feb 20, 2014 at 12:17 AM, Sergey Lukjanov slukja...@mirantis.comwrote: tools/config/generate_sample.sh isn't working on OS X due to the getopt usage. Any recipes / proposals to fix it? I have a workaround at least. thanks for the workaround, I had a look on this while reporting bug Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh On Wed, Feb 5, 2014 at 4:20 PM, Doug Hellmann doug.hellm...@dreamhost.comwrote: Including the config file in either the developer documentation or the packaging build makes more sense. 
I'm still worried that adding it to the sdist generation means you would have to have a lot of tools Re: [openstack-dev] Python 3 compatibility On Mon, Feb 3, 2014 at 5:29 PM, Julien Danjou jul...@danjou.info wrote: Last, but not least, trollius has been created by Victor Stinner, who actually did that work with porting OpenStack in mind and as the first objective. AFAIK: victor had plans to send a mail about it to the list later Re: [openstack-dev] [Heat] How to model resources in Heat Zane Bitter zbit...@redhat.com writes: As I said, figuring this all out is really hard to do, and the existing resources in Heat are by no means perfect (we even had a session at the Design Summit devoted to fixing some of them[1]). If anyone has a question about a specific model, feel free Re: [openstack-dev] Hacking repair scripts On Tue, Jan 28, 2014 at 2:13 AM, Joshua Harlow harlo...@yahoo-inc.comwrote: Thought people might find it useful and it could become a part of automatic repairing/style adjustments in the future (similar to I guess what go has in `gofmt`). nice, it would be cool if this can be hooked Re: [openstack-dev] Hierarchicical Multitenancy Discussion On Tue, Jan 28, 2014 at 7:35 PM, Vishvananda Ishaya vishvana...@gmail.comwrote: The key use case here is to delegate administration rights for a group of tenants to a specific user/role. There is something in Keystone called a domain which supports part of this functionality, but without Re: [openstack-dev] [requirements][oslo] Upgrade six to 1.5.2? On Wed, Jan 22, 2014 at 11:17 AM, Julien Danjou jul...@danjou.info wrote: On Tue, Jan 21 2014, ZhiQiang Fan wrote: six 1.5.2 has been released on 2014-01-06, it provides urllib/urlparse compatibility. Is there any plan to upgrade six to 1.5.2? 
(since it is fresh new, may need some time to Re: [openstack-dev] a common client library On Thu, Jan 16, 2014 at 12:38 PM, Chris Jones c...@tenshu.net wrote: Once a common library is in place, is there any intention to (or resistance against) collapsing the clients into a single project or even a single command (a la busybox)? that's what openstackclient is here for Re: [openstack-dev] a common client library On Thu, Jan 16, 2014 at 4:37 PM, Jesse Noller jesse.nol...@rackspace.comwrote: Can you detail out noauth for me; and I would say the defacto httplib in python today is python-requests - urllib3 is also good but I would say from a *consumer* standpoint requests offers more in terms of usability Re: [openstack-dev] a common client library On Thu, Jan 16, 2014 at 5:23 PM, Jay Pipes jaypi...@gmail.com wrote: Right, but requests supports chunked-transfer encoding properly, so really there's no reason those clients could not move to a requests-based codebase. We had that discussion for swiftclient and we are not against it but Re: [openstack-dev] a common client library On Thu, Jan 16, 2014 at 8:40 PM, Donald Stufft don...@stufft.io wrote: On Jan 16, 2014, at 2:36 PM, Joe Gordon joe.gord...@gmail.com wrote: 2) major overhaul of client libraries so they are all based off a common base library. This would cover namespace changes, and possible a push to move Re: [openstack-dev] [all] Organizing a Gate Blocking Bug Fix Day On Thu, Jan 9, 2014 at 1:46 PM, Sean Dague s...@dague.net wrote: Specifically I'd like to get commitments from as many PTLs as possible that they'll both directly participate in the day, as well as encourage the rest of their project to do the same I'll be more than happy to participate (or Re: [openstack-dev] [oslo.config] Centralized config management On Thu, Jan 9, 2014 at 7:53 PM, Nachi Ueno na...@ntti3.com wrote: One example of such case is neuron + nova vif parameter configuration regarding to security group. 
The workflow is something like this. nova asks vif configuration information for neutron server. Neutron server ask Re: [openstack-dev] [Nova] libvirt unit test errors On Tue, Jan 7, 2014 at 2:28 PM, Gary Kotton gkot...@vmware.com wrote: 2014-01-07 11:59:47.428 Requirement already satisfied (use --upgrade to upgrade): markupsafe in Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer On Mon, Jan 6, 2014 at 12:52 PM, Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo) vijayakumar.kodam@nsn.com wrote: In this case, simply changing the meter properties in a configuration file should be enough. There should be an inotify signal which shall notify ceilometer of the Re: [openstack-dev] [OpenStack-Dev] IDE extensions in .gitignore I am not sure if this is the global .gitignore you are thinking of but this is the one I am in favor of: Maintaining .gitignore in 30+ repositories for a potentially infinite number of editors is very hard, and thankfully we [openstack-dev] [testr] debugging failing testr runs Hello, I was wondering what was the strategy to debug a failed run with tox? I was trying to see which tests was failing with python-keystoneclient and py33 and this is the type of error i am getting : (bet it with tox directly or from the Re: [openstack-dev] [Keystoneclient] [Keystone] [Solum] Last released version of keystoneclient does not work with python33 On 10 Dec 2013, at 09:20, Morgan Fainberg m...@metacloud.com wrote: I think the correct way to sync is to run the update.py script and submit a review (I don’t think it’s changed recently). Seems pretty straightforward then, thanks. let see how the review goes here then : Re: [openstack-dev] Catalog type naming guidelines On Tue, Dec 10, 2013 at 3:57 PM, Sergey Lukjanov slukja...@mirantis.comwrote: we have a concern about the correct catalog type naming for projects who'd like to have a name like data processing. 
just out of interests why is it a problem and need to be standardized, isn't that supposed to be [openstack-dev] Fwd: [qa][tempest] Bug Triage Day - Thu 12th - Prep would be nice to participate for the ppl interested with tempest Chmouel. -- Forwarded message -- From: Adalberto Medeiros adal...@linux.vnet.ibm.com Date: Tue, Dec 10, 2013 at 3:52 PM Subject: [openstack-dev] [qa][tempest] Bug Triage Day - Thu 12th - Prep To: OpenStack Re: [openstack-dev] OK to Use Flufl.enum On Tue, Dec 10, 2013 at 5:23 PM, Jay Pipes jaypi...@gmail.com wrote: The IntEnum is my new definition of the most worthless class ever invented in the Python ecosystem -- taking the place of zope.interface on my personal wall of worthlessness. this is the kind of things you can do with the Re: [openstack-dev] [heat] Core criteria, review stats vs reality On Tue, Dec 10, 2013 at 11:26 AM, Thierry Carrez thie...@openstack.orgwrote: That's why I thought creating VIP parties for +2 reviewers (or giving them special badges or T-shirts) is spreading the wrong message, and encourage people to hang on to the extra rights associated with the duty. [openstack-dev] [Swift] python-swiftclient, verifying SSL certs by default Hello, There has been a lengthy discussion going on for quite sometime on a review for swiftclient here : The review change the way works swiftclient to refuse to connect to insecure (i.e: self signed) SSL swift proxies unless you are specifying the
https://www.mail-archive.com/search?l=openstack-dev%40lists.openstack.org&q=from:%22Chmouel+Boudjnah%22&o=newest
CC-MAIN-2019-43
refinedweb
5,892
60.85
Get started with Roland, a random selection tool for the command line

Get help making hard choices with Roland, the seventh of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.

Roland

By the time the workday has ended, often the only thing I want to think about is hitting the couch and playing the video game of the week. But even though my professional obligations stop at the end of the workday, I still have to manage my household. Laundry, pet care, making sure my teenager has what he needs, and most important: deciding what to make for dinner. Like many people, I often suffer from decision fatigue, and I make less-than-healthy choices for dinner based on speed, ease of preparation, and (quite frankly) whatever causes me the least stress.

Roland makes planning my meals much easier. Roland is a Perl application designed for tabletop role-playing games. It picks randomly from a list of items, such as monsters and hirelings. In essence, Roland does the same thing at the command line that a game master does when rolling physical dice to look up things in a table from the Game Master's Big Book of Bad Things to Do to Players. With minor modifications, Roland can do so much more. For example, just by adding a table, I can enable Roland to help me choose what to cook for dinner.

The first step is installing Roland and all its dependencies.

git clone git@github.com:rjbs/Roland.git
cpan install Getopt::Long::Descriptive Moose \
  namespace::autoclean List::AllUtils Games::Dice \
  Sort::ByExample Data::Bucketeer Text::Autoformat \
  YAML::XS
cd Roland

Next, I create a YAML document named dinner and enter all our meal options.
type: list
pick: 1
items:
  - "frozen pizza"
  - "chipotle black beans"
  - "huevos rancheros"
  - "nachos"
  - "pork roast"
  - "15 bean soup"
  - "roast chicken"
  - "pot roast"
  - "grilled cheese sandwiches"

Running the command bin/roland dinner will read the file and pick one of the options. I like to plan for the week ahead so I can shop for all my ingredients in advance. The pick command determines how many items to choose from the list, and right now, the pick option is set to 1. If I want to plan a full week's dinner menu, I can just change pick: 1 to pick: 7 and it will give me a week's worth of dinners. You can also use the -m command line option to manually enter the choices.

You can also do fun things with Roland, like adding a file named 8ball with some classic phrases. You can create all kinds of files to help with common decisions that seem so stressful after a long day of work. And even if you don't use it for that, you can still use it to decide which devious trap to set up for tonight's game.

4 Comments

What about the good old fortune command? It was part of BSD games and has been included in virtually all Linux distros. Adding your private fortune database is pretty trivial, no more complex than a YAML file: your fortunes are (multi)lines of text separated by a % line, and you need to add a .dat file with the command strfile -c % dinner dinner.dat, after which the command fortune dinner achieves essentially the same thing. On Ubuntu, sudo apt install fortunes, and you can view some samples in /usr/share/games/fortunes, some of which are great fun.

fortune was covered in an article in December 2018.

This is, without any doubt, an application that fills out a much-needed gap in the Linux world.

Great little article. Made everyone in my office laugh, especially when someone spotted the inevitable "git clone ... Roland.git". Glad that a namesake of mine helps you plan your meals! Hope Roland McGrath (see glibc) knows about this...
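The 8ball file mentioned in the article follows the same table format as dinner. A hypothetical version (the phrases here are my own illustration, not taken from the article) might look like:

```yaml
# 8ball -- a Magic 8-Ball style table for Roland
type: list
pick: 1
items:
  - "It is certain"
  - "Ask again later"
  - "Outlook not so good"
  - "Signs point to yes"
  - "Very doubtful"
```

Running bin/roland 8ball would then print one of the phrases at random, just like the dinner table.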
https://opensource.com/article/19/1/productivity-tools-roland
CC-MAIN-2022-05
refinedweb
673
70.33
08 July 2011 11:43 [Source: ICIS news]

LONDON (ICIS)--“Such a consolidation process is an overture toward the full privatisation of this part of the chemical sector,” said the Polish treasury minister, Aleksander Grad. He also said that a strong, single entity should prove attractive in a sell-off, which he said would be conducted “in the not-too-distant future”.

On 7 July, ZAT launched bookbuilding for its share call in which it is seeking to buy up to 66% of ZChP. Shareholders have until 16 August to respond to the call. Together, both companies would become

The ZAT group is a producer of nitrogen fertilizers, caprolactam (capro), and nylon 6 (or polyamide 6). Late last year it became the majority owner of Zaklady Azotowe Kedzierzyn (ZAK), a fellow producer of nitrogen fertilizers and maker of oxo-alcohols.

Ciech – another state-controlled group – is currently Poland’s largest chemical maker. Ciech is also Europe’s second-largest soda ash maker and a producer of toluene diisocyanate (TDI), bringing in approximately Zl 4bn in yearly revenues. Previous attempts to separately privatise ZAT, ZChP, ZAK and Ciech had failed.

($1 = Zl 2.74, €1 = Zl 3
http://www.icis.com/Articles/2011/07/08/9476047/polish-treasury-allows-zaklady-azoty-tarnows-zchp-acquisition.html
CC-MAIN-2013-48
refinedweb
198
54.42
aniBlock Importer for Poser [commercial]
edited December 1969 in The Commons

aniBlocks can now be imported into Poser. Go Go Go!

Someone asked on another thread, so I will also clarify here. You do not need to own anIMate2 or DAZ Studio to use aniBlock Importer for Poser.

Hey Go Figure, I love the aniBlocks. We haven't had a new set in a while. hint hint hint

I have been using the importer for Carrara and it works great. Poser users will love it. I love aniblocks, as well.

I bought this earlier today but I'm having trouble getting it to work. I get this:

Traceback (most recent call last):
  File "H:\Program Files\Poser 9\Runtime\Python\poserScripts\ScriptsMenu\Import\aniBlock.py", line 1, in <module>
    import gofigure.aniblock
ImportError: DLL load failed: The specified module could not be found.

I'm not too familiar with Poser plugins... is there a special way I need to install this? I ran the installer and specified a temp folder. In the temp folder there is a runtime folder (with the nested python/addons and python/poserscripts). I copied the runtime folder to my Poser 9 runtime folder. My aniblocks are in a separate runtime folder---does this matter?

I am pretty sure that it should not have asked for a temp folder but for your Poser folder. Is H:\Program Files\Poser 9 where your Poser is installed? Look in “H:\Program Files\Poser 9\Runtime\Python\addons” and see if you see a gofigure folder. If not, try installing again, or go look at the temp folder you chose, look for a Runtime\python\addons\gofigure folder, and move it to the right spot.
Yes, the temp folder has the GoFigure folder:

H:\temp\Runtime\Python\addons\gofigure\__init__.py
H:\temp\Runtime\Python\addons\gofigure\aniblock.pyd
H:\temp\Runtime\Python\poserScripts\ScriptsMenu\Import\aniBlock.py

I install to a temp folder so I can see what is going to be installed, then move it to the Program folder, which is H:\Program Files\Poser 9\ So I end with:

H:\Program Files\Poser 9\Runtime\Python\addons\gofigure\__init__.py
H:\Program Files\Poser 9\Runtime\Python\addons\gofigure\aniblock.pyd
H:\Program Files\Poser 9\Runtime\Python\poserScripts\ScriptsMenu\Import\aniBlock.py

I have installed mine the same way and it works fine. Do you have the latest Poser update installed? This product requires SR3. TD

I have pretty much the same problem. But I installed the plugin directly to my Poser installation at P:\Poser Pro 2012 SR3\Poser Pro 2012. The addon is there, at P:\Poser Pro 2012 SR3\Poser Pro 2012\Runtime\Python\addons\gofigure. Both __init__.pyc and aniblock.pyd are there. There was initially a __init__.py file there as well. It was 0 bytes, but removing it had no effect. Could it be that the addon has a problem because the install location of Poser is not default? I can see the python import script in the menu, but nothing appears in the addons menu. I am running SR3 (or actually a few builds later).

Unfortunately there is zero documentation, not even a readme which tells you to go to the addon folder. The link DAZ provides in the installer gives a Page not Found.

Hmm. Odd. Wonder why it works here. One more thought: There are two versions to download. Make sure you have the right one for your system (32 bit vs 64 bit). It might be that these call different DLLs. Also: Do you have a figure loaded when importing? TD

Ok, I read the product description again. There is no UI at all except for the python import script. That is a bit disappointing.
In that case, having the aniblocks distributed as pose files would have been better - at least then you have some visual feedback of what you are loading.

I tried both versions--the 64 bit version gives me an error (slightly different, but it indicates that 64 bit is the wrong version). I have M4 loaded and selected (tried both hip and body). I am using Windows 7, 64 bit. Poser 9 (standard, not Pro). Could it be because Poser 9 standard is 32 bit, but my machine is 64 bit?

I'm having the same problem. Back to the drawing board. A pretty big disappointment for me. Don't know how this slipped through QA. This is what I get.

Traceback (most recent call last):
  File "C:\Program Files\Smith Micro\Poser Pro 2012\Runtime\Python\poserScripts\ScriptsMenu\Import\aniBlock.py", line 1, in <module>
    import gofigure.aniblock
ImportError: No module named gofigure.aniblock

Yup, no interface. You simply point it to the aniblock and it loads into the current animation layer. I actually prefer this script as I can use it to convert any of my aniblocks, including my home made ones. If they would offer pose files they would undoubtedly cost much more, as they would have to be offered for each pack separately. TD

Weird. Same system here. Just using Pro 2012, but that is not it, as wimvdb has the same issue with Pro 2012. I do not know what is different on my system, but it works without any problem so far. I tried several blocks with V4 selected. All imported fine. TD
I assume you would load the aniblock by selecting Scripts/Import/Aniblock, and then a dialogue should ask you to locate the aniblock--is that right? Maybe I'm trying to import it the wrong way....

No, that is correct. You get a dialog and select one of the aniblock files. It will import into the current layer; if there are not enough frames, it will give you an error and tell you how many frames you have to set as a minimum. TD

There is something wrong in your P9 install. The 64 bit programs are installed in Program Files and the 32 bit programs (Poser 9) are installed in Program Files (x86). If your Poser 9 is really installed in Program Files, the aniblock addin may be looking in the wrong place.

I have it running now (sort of).. I went into the python file and ran the scripts from the run scripts file. The results, so far, are only so-so. Many of the aniBlocks don't work because they twist the figure into pretzel-like moves. So far, IMHO... this is not ready for Prime Time.

Hi pete! Which one caused you pretzel problems? I have tried quite a number so far and not encountered any issue. If you give an example that did not work, I could try and see if I can reproduce it here. TD
A couple people have mentioned it and many have the following question "Why on earth does the importer tell you how many frames you need? Why not just make room for the frames automatically?" Answer: The PoserPython API does not alow it. I would guess this is an oversite on the part of Poser. They have a Scene.NumFrames, but did not provide a Scene.SetNumFrames. Hopefully they will add this in the next release. The other question is "Why does it require SR3?" Answer: It could be done and in fact it was working, but import took soooo long prior to SR3 because they did not have the Parameter.SetValueFrame. We are talking minutes to import a small animation. I still have this same problem. Can someone please tell me how you fixed it. Probably something GoFigure should do. I am running Poser 9 with SR3 on Windows 7. Poser is installed in C:\Program Files (x86)\Smith Micro\Poser 9. I am getting the following error message when I select "Import/aniBlock" from the "Scripts" menu. Traceback (most recent call last): File "C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\poserScripts\ScriptsMenu\Import\aniBlock.py", line 1, in import gofigure.aniblock ImportError: DLL load failed: The specified module could not be found. Thanks, Joe This error means that poser could not find gofigure.aniblock. Then there must be a file named aniblock.pyd located in folder C:\Program Files (x86)\Smith Micro\Poser 9\Runtime\Python\addons\gofigure Is it there? What bitness of the installer did you install? Yes, aniBlock.pyd is in the specified folder. I installed the 32 bit version. I have tried installing both versions, but nothing helped. Thanks, Joe I have the same problem. Running Poser 2012 64 bit. Files are in the right folder but no DLL. 
aniblock.pyd is in Runtime/Python/addons and aniblock.py can be found in Runtime/Python/poserScripts/ScriptsMenu. When attempting to import using scripts I get the display:

Traceback (most recent call last):
File "F:\Poser Pro 2012\Runtime\Python\poserScripts\ScriptsMenu\Import\aniBlock.py", line 1, in
import gofigure.aniblock
ImportError: DLL load failed: The specified module could not be found.

Has this ever been resolved? Thanks in advance.

Just a follow up. I moved the aniblock.pyd file to the DLL folder and now get the prompt:

Traceback (most recent call last):
File "F:\Poser Pro 2012\Runtime\Python\poserScripts\ScriptsMenu\Import\aniBlock.py", line 1, in
import gofigure.aniblock
ImportError: No module named aniblock

Additionally, once I get this script working, where should the aniBlock files reside - in Poses? Or will I import them from an external file on a per-use basis? Again, thanks in advance!
3.12. Weight Decay

In the previous section, we encountered overfitting and the need for capacity control. While increasing the training data set may mitigate overfitting, obtaining additional training data is often costly; hence it is preferable to control the complexity of the functions we use. In particular, we saw that we could control the complexity of a polynomial by adjusting its degree. While this might be a fine strategy for problems on one-dimensional data, it quickly becomes difficult to manage and too coarse. For instance, for vectors of dimensionality \(D\) the number of monomials of a given degree \(d\) is \({D -1 + d} \choose {D-1}\). Hence, instead of controlling the number of functions, we need a more fine-grained tool for adjusting function complexity.

3.12.1. Squared Norm Regularization

One of the most commonly used techniques is weight decay. It relies on the notion that among all functions \(f\), the function \(f = 0\) is the simplest of all. Hence we can measure functions by their proximity to zero. There are many ways of doing this; in fact, entire branches of mathematics, e.g. functional analysis and the theory of Banach spaces, are devoted to this question. For our purpose something much simpler will suffice: a linear function \(f(\mathbf{x}) = \mathbf{w}^\top \mathbf{x}\) can be considered simple if its weight vector is small. We can measure this via \(\|\mathbf{w}\|^2\). One way of keeping the weight vector small is to add its norm as a penalty to the problem of minimizing the loss. This way, if the weight vector becomes too large, the learning algorithm will prioritize minimizing \(\|\mathbf{w}\|^2\) over minimizing the training error. That's exactly what we want. To illustrate things in code, consider the previous section on "Linear Regression".
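Measuring a linear function's "simplicity" by the size of its weight vector can be sketched in a few lines of plain Python. This is an illustrative stand-in, not the chapter's MXNet code; the function names and example vectors here are made up:

```python
# Measuring the "simplicity" of a linear function f(x) = w^T x by the size of
# its weight vector w. Plain-Python sketch; the chapter's own MXNet version of
# the squared penalty appears later as l2_penalty.

def squared_norm(w):
    """||w||^2: the sum of squared entries of the weight vector."""
    return sum(wi * wi for wi in w)

def lp_norm(w, p):
    """The more general l_p norm: (sum_i |w_i|^p)^(1/p)."""
    return sum(abs(wi) ** p for wi in w) ** (1.0 / p)

w_small = [0.5, 0.5, 0.5, 0.5]    # close to zero -> "simple"
w_large = [1.0, -2.0, 3.0, -4.0]  # far from zero -> "complex"

print(squared_norm(w_small))   # 1.0
print(squared_norm(w_large))   # 30.0
print(lp_norm([3.0, 4.0], 2))  # 5.0 (the Euclidean norm)
```

A small squared norm marks the function as close to \(f = 0\), which is exactly the quantity the penalty below keeps in check.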
There the loss is given by

\[l(\mathbf{w}, b) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{2}\left(\mathbf{w}^\top \mathbf{x}^{(i)} + b - y^{(i)}\right)^2\]

Recall that \(\mathbf{x}^{(i)}\) are the observations, \(y^{(i)}\) are labels, and \((\mathbf{w}, b)\) are the weight and bias parameters respectively. To arrive at the new loss function which penalizes the size of the weight vector we need to add \(\|\mathbf{w}\|^2\), but how much should we add? This is where the regularization constant (hyperparameter) \(\lambda\) comes in:

\[l(\mathbf{w}, b) + \frac{\lambda}{2} \|\mathbf{w}\|^2\]

\(\lambda \geq 0\) governs the amount of regularization. For \(\lambda = 0\) we recover the previous loss function, whereas for \(\lambda > 0\) we ensure that \(\mathbf{w}\) cannot grow too large. The astute reader might wonder why we are squaring the weight vector. This is done both for computational convenience, since it leads to easy-to-compute derivatives, and for statistical performance, as it penalizes large weight vectors a lot more than small ones. The stochastic gradient descent updates look as follows:

\[\mathbf{w} \leftarrow \left(1 - \eta\lambda\right) \mathbf{w} - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \mathbf{x}^{(i)} \left(\mathbf{w}^\top \mathbf{x}^{(i)} + b - y^{(i)}\right)\]

As before, we update \(\mathbf{w}\) in accordance with the amount to which our estimate differs from the observation. However, we also shrink the size of \(\mathbf{w}\) towards \(0\), i.e. the weight 'decays'. This is much more convenient than having to pick the number of parameters as we did for polynomials. In particular, we now have a continuous mechanism for adjusting the complexity of \(f\). Small values of \(\lambda\) correspond to fairly unconstrained \(\mathbf{w}\), whereas large values of \(\lambda\) constrain \(\mathbf{w}\) considerably. Since we don't want to have large bias terms either, we often add \(b^2\) as a penalty, too.

3.12.2. High-dimensional Linear Regression

For high-dimensional regression it is difficult to pick the 'right' dimensions to omit. Weight-decay regularization is a much more convenient alternative. We will illustrate this below. But first we need to generate some data via

\[y = 0.05 + \sum_{i=1}^{d} 0.01\, x_i + \epsilon \quad \text{where } \epsilon \sim \mathcal{N}(0, 0.01)\]

That is, we have additive Gaussian noise with zero mean and variance 0.01.
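The "decay" in the stochastic gradient descent update described above can be sketched in plain Python. This is an illustration only; the function name and toy numbers are made up, and the chapter's actual implementation uses MXNet:

```python
# Sketch of one SGD step with weight decay:
#   w <- (1 - eta * lam) * w - eta * grad
# Plain Python with made-up names; the chapter's real code uses MXNet's gb.sgd.

def sgd_weight_decay_step(w, grad, eta, lam):
    """Apply one weight-decay SGD step elementwise to a list of weights."""
    # The penalty (lam/2) * ||w||^2 contributes lam * w to the gradient, which
    # is the same as first shrinking each weight by the factor (1 - eta * lam).
    return [(1 - eta * lam) * wi - eta * gi for wi, gi in zip(w, grad)]

w = [1.0, -2.0]
# With a zero loss gradient only the decay acts: each weight shrinks by
# the factor (1 - 0.1 * 0.5) = 0.95.
w = sgd_weight_decay_step(w, [0.0, 0.0], eta=0.1, lam=0.5)
print([round(wi, 10) for wi in w])  # [0.95, -1.9]
```

Even with no gradient signal at all, every step pulls the weights a little closer to zero, which is where the name "weight decay" comes from.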
In order to observe overfitting more easily we pick a high-dimensional problem with \(d = 200\) and a deliberately low number of training examples, e.g. 20. As before we begin with our import ritual (and data generation).

In [1]:

%matplotlib inline
import gluonbook as gb
from mxnet import autograd, gluon, init, nd
from mxnet.gluon import data as gdata, loss as gloss, nn

n_train, n_test, num_inputs = 20, 100, 200
true_w, true_b = nd.ones((num_inputs, 1)) * 0.01, 0.05

features = nd.random.normal(shape=(n_train + n_test, num_inputs))
labels = nd.dot(features, true_w) + true_b
labels += nd.random.normal(scale=0.01, shape=labels.shape)
train_features, test_features = features[:n_train, :], features[n_train:, :]
train_labels, test_labels = labels[:n_train], labels[n_train:]

3.12.3. Weight Decay from Scratch

Next, we will show how to implement weight decay from scratch. For this we simply add the \(\ell_2\) penalty as an additional loss term after the target function. The squared norm penalty derives its name from the fact that we are adding the second power \(\sum_i w_i^2\). There are many other related penalties. In particular, the \(\ell_p\) norm is defined as

\[\|\mathbf{w}\|_p^p := \sum_{i=1}^{d} |w_i|^p\]

3.12.3.1. Initialize Model Parameters

First, define a function that randomly initializes model parameters. This function attaches a gradient to each parameter.

In [2]:

def init_params():
    w = nd.random.normal(scale=1, shape=(num_inputs, 1))
    b = nd.zeros(shape=(1,))
    w.attach_grad()
    b.attach_grad()
    return [w, b]

3.12.3.2. Define \(\ell_2\) Norm Penalty

A convenient way of defining this penalty is by squaring all terms in place and summing them up. We divide by \(2\) to keep the math looking nice and simple.

In [3]:

def l2_penalty(w):
    return (w**2).sum() / 2

3.12.3.3. Define Training and Testing

The following defines how to train and test the model separately on the training data set and the test data set.
Unlike the previous sections, here the \(\ell_2\) norm penalty term is added when calculating the final loss function. The linear network and the squared loss are as before and thus imported via gb.linreg and gb.squared_loss respectively.

In [4]:

batch_size, num_epochs, lr = 1, 100, 0.003
net, loss = gb.linreg, gb.squared_loss
train_iter = gdata.DataLoader(gdata.ArrayDataset(
    train_features, train_labels), batch_size, shuffle=True)

def fit_and_plot(lambd):
    w, b = init_params()
    train_ls, test_ls = [], []
    for _ in range(num_epochs):
        for X, y in train_iter:
            with autograd.record():
                # The L2 norm penalty term has been added
                l = loss(net(X, w, b), y) + lambd * l2_penalty(w)
            l.backward()
            gb.sgd([w, b], lr, batch_size)
        train_ls.append(loss(net(train_features, w, b),
                             train_labels).mean().asscalar())
        test_ls.append(loss(net(test_features, w, b),
                            test_labels).mean().asscalar())
    gb.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'loss',
                range(1, num_epochs + 1), test_ls, ['train', 'test'])
    print('l2 norm of w:', w.norm().asscalar())

In [5]:

fit_and_plot(lambd=0)

l2 norm of w: 11.611943

In [6]:

fit_and_plot(lambd=3)

l2 norm of w: 0.041254018

3.12.4. Weight Decay in Gluon

Weight decay in Gluon is quite convenient (and also a bit special) insofar as it is typically integrated with the optimization algorithm itself. The reason for this is that it is much faster (in terms of runtime) for the optimizer to take care of weight decay and related things right inside the optimization algorithm itself, since the optimizer itself needs to touch all parameters anyway. Here, we directly specify the weight decay hyperparameter through the wd parameter when constructing the Trainer instance. By default, Gluon decays weight and bias simultaneously. Note that we can have different optimizers for different sets of parameters. For instance, we can have one Trainer with weight decay and one without, to take care of \(\mathbf{w}\) and \(b\) respectively.
In [7]:

def fit_and_plot_gluon(wd):
    net = nn.Sequential()
    net.add(nn.Dense(1))
    net.initialize(init.Normal(sigma=1))
    # The weight parameter has been decayed. Weight names generally end with
    # "weight".
    trainer_w = gluon.Trainer(net.collect_params('.*weight'), 'sgd',
                              {'learning_rate': lr, 'wd': wd})
    # The bias parameter has not decayed. Bias names generally end with "bias".
    trainer_b = gluon.Trainer(net.collect_params('.*bias'), 'sgd',
                              {'learning_rate': lr})
    train_ls, test_ls = [], []
    for _ in range(num_epochs):
        for X, y in train_iter:
            with autograd.record():
                l = loss(net(X), y)
            l.backward()
            # Call the step function on each of the two Trainer instances to
            # update the weight and bias separately.
            trainer_w.step(batch_size)
            trainer_b.step(batch_size)
        train_ls.append(loss(net(train_features),
                             train_labels).mean().asscalar())
        test_ls.append(loss(net(test_features),
                            test_labels).mean().asscalar())
    gb.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'loss',
                range(1, num_epochs + 1), test_ls, ['train', 'test'])
    print('L2 norm of w:', net[0].weight.data().norm().asscalar())

The plots look just the same as when we implemented weight decay from scratch (but they run a bit faster and are a bit easier to implement, in particular for large problems).

In [8]:

fit_and_plot_gluon(0)

L2 norm of w: 13.311797

In [9]:

fit_and_plot_gluon(3)

L2 norm of w: 0.032502275

So far we only touched upon what constitutes a simple linear function. For nonlinear functions answering this question is way more complex. For instance, there exist Reproducing Kernel Hilbert Spaces which allow one to use many of the tools introduced for linear functions in a nonlinear context. Unfortunately, algorithms using them do not always scale well to very large amounts of data. For all intents and purposes of this book we limit ourselves to simply summing over the weights for different layers, e.g. via \(\sum_l \|\mathbf{w}_l\|^2\), which is equivalent to weight decay applied to all layers.

3.12.6. Problems

Recall that \(p(w|x) \propto p(x|w) p(w)\). How can you identify \(p(w)\) with regularization?
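To tie the two variants in this section together (adding (lambda/2) * ||w||^2 to the loss, as in the scratch implementation, versus letting the optimizer decay weights via wd, as in Gluon), here is a small pure-Python check that the two updates coincide for plain SGD. This is an illustrative sketch with made-up names and numbers, not part of the book's code:

```python
# For plain SGD, the two formulations of weight decay agree:
#   (a) add (lam/2) * w**2 to the loss, so the gradient gains lam * w
#       (the "from scratch" version), and
#   (b) shrink w by (1 - eta * lam) before the gradient step
#       (what an optimizer-side weight-decay parameter does for sgd).
# Pure-Python check on a single weight; all names and numbers are made up.

def step_penalty_in_loss(w, grad, eta, lam):
    # Gradient of loss + (lam/2) * w**2 is grad + lam * w.
    return w - eta * (grad + lam * w)

def step_decay_in_optimizer(w, grad, eta, lam):
    # Decay the weight first, then take the usual gradient step.
    return (1 - eta * lam) * w - eta * grad

w, grad, eta, lam = 0.8, 0.3, 0.05, 2.0
a = step_penalty_in_loss(w, grad, eta, lam)
b = step_decay_in_optimizer(w, grad, eta, lam)
print(abs(a - b) < 1e-12)  # True
```

Note that this equivalence is specific to plain SGD; for adaptive optimizers the two formulations differ, which is why decoupled weight decay is treated as its own technique there.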
In your cell, you can cancel the image load if it's going to get re-used. In your UITableViewCell subclass add the following:

-(void)prepareForReuse {
    [super prepareForReuse];
    [self.imageView cancelCurrentImageLoad]; // UIImageView for whatever image you need to cancel the loading for
}

Be sure to #import "UIImageView+WebCache.h" as well. Your app shouldn't be crashing, though; I cannot help you without seeing some code, since it's not possible to pinpoint the cause of the crash from your description above.

Seems like a few lines of code with Camel, i.e. from("file:/..").to("hdfs:..") plus some init and project setup. Not sure how much easier (fewer lines of code) you can do it using any method. If the HDFS options in Camel are enough for configuration and flexibility, then I guess this approach is the best. Should take you just a matter of hours (or even minutes) to have some test cases up and running.

You can find where the class resides (inside a jar or whatever), via:

<!-- This classes URL: <%= getClass().getProtectionDomain().getCodeSource()
        .getLocation().toExternalForm(); %> -->

The getLocation call delivers a URL.

Text::CSV_XS (XS is the C version of the module, and is faster than native Perl Text::CSV) is the usual tool of choice. It handles quoted (and comma containing) fields easily, can be used for both reading and writing, and can switch between delimiters so you can have a writer object using TAB. Example (sans error handling):

my $csv_in  = Text::CSV_XS->new ({ binary => 1 });
my $csv_out = Text::CSV_XS->new ({ binary => 1, sep_char => "\t", eol => "\n" });
open my $fh_in,  "<", "file_in.csv"  or die "file_in.csv: $!";
open my $fh_out, ">", "file_out.csv" or die "file_out.csv: $!";
while (my $row = $csv_in->getline($fh_in)) { $csv_out->print ($fh_out, $row) }
close $fh_in;
close $fh_out;

So I figured out where the problem was. In my engine's engine.rb file I had code like this for initialization. The problem was with the receiver of config.
Since I provide an app instance to the block, the receiver of config is app. And that caused the problem:

initializer("my_engine.locales") do |app|
  tracking_logger = Logger.new(app.root.join('log', "my_engine_log.log"), 10, 30*1024*1024)
  config.i18n.load_path += Dir[root.join('my', 'locales', '*.{rb,yml}').to_s]
  config.i18n.default_locale = :ru
  config.i18n.fallbacks = [:en]
  tracking_logger.debug "MyEngine::Engine specific locale settings are set. Def locale == :ru "
end

So I changed the receiver to MyEngine and now everything works just fine:

initializer("my_engine.locales") do |app|
  tracking_logger = Logger.new(app.root.join(

It does not seem like there is a way to select multiple files below API 19. See Select multiple files with Intent.ACTION_GET_CONTENT

I would rather, before merging devel to A, make sure all devel-ignored files are ignored as well in A. The trick for that is to remove everything from the index of A, update the .gitignore content, and add everything back!

git checkout A
# update the .gitignore file from devel in A
git checkout devel -- .gitignore
# remove/add everything
git rm --cached -r .
git add .
git commit -m "new commit with devel gitignore files out"
# then
git merge devel

I have a pair of utility files that do this, so I can call them from either the command line or another batch file. This manages several versions of the file in the same folder, and I would call it before moving your new file in. Usage is:

archivefile 5 "c:\temp\songs" .txt [move]

Where 5 is the number of versions to keep. The base file name in this example is "c:\temp\songs.txt" and it creates (in c:\temp) 5 files:

songs_1.txt
songs_2.txt
songs_3.txt
songs_4.txt
songs_5.txt

If you specify move it will move the original songs.txt, otherwise it copies it.
Note, if you're using it from another batch file (which you'll probably want to), you need to use call or it won't return to your original batch file (both batch files will exit instead, which isn't what you want):

call archivefile 5 "c:\temp\song

I want to add security in my classes so that no one can de-compile my class files or open jar files. You can't. It is mathematically impossible. If someone can get hold of the JAR file / ".class" files, they can extract and decompile the classes. (There are ways to make it a bit harder ... but there is nothing you can do to stop a skilled and motivated attacker ... apart from not delivering code to him, in any form.) I also want to add constraints in my web.xml file so that no one can try to access from another servlet. Also impossible: If you are talking about another servlet in the same web container, then anything you can put into the web.xml file can be edited out by the person who controls the web container. If you are talking about a servlet in another web container, th

Assuming by shell you mean bash, here is a skeleton to start with:

luk32:~/projects/tests$ cat ./process_files.sh
#!/bin/bash
DEST=./copies
for num in "$@"; do
    file="AllResponses_"$num"_6_20_2013.txt"
    if [ -f $file ]; then
        cp $file $DEST
    else
        touch $DEST/$file
    fi
done;

It takes numbers as arguments, then tries to find a file with the given pattern in the current working directory. If found, it copies the file to the destination folder; otherwise it touches the file. You will probably have to tinker a little bit to get friendlier than hard-coded date handling. Example:

luk32:~/projects/tests$ ls -l
total 40116
-rw-r--r-- 1 luk32 luk32 4 cze 21 11:33 AllResponses_1_6_20_2013.txt
-rw-r--r-- 1 luk32 luk32 5 cze 21 11:33 AllResponses_3_6_20_2013.txt
-rw-r--r-- 1 luk32 luk32 0 cze 21 11:32 AllResponses_4_6_20

The issue here is that the entire IF expression is evaluated before the SET /P statement within it can be executed.
InstalledVersion is not set yet, and so this invalid expression is evaluated:

IF GEQ 2 (

Nothing inside of the IF expression executes because it cannot be completely evaluated. A solution is to enable delayed expansion and replace %InstalledVersion% with !InstalledVersion!, as described in this post. You can also restructure the code so the GEQ comparison happens after the IF expression.

Something like this?

DEBUG=echo
cd ${directory_with_files}
for file in * ; do
    dest=$(stat -c %y "$file" | head -c 7)
    mkdir -p $dest
    ${DEBUG} mv -v "$file" $dest/$(echo "$file" | sed -e 's/.* (.*)/1/')
done

DISCLAIMER: test this in a safe copy of your files. I won't be responsible for any loss of data ;-).

Since code files are generated using a tool (ILSpy), code style and formatting (whitespace and indentation) will be the same for almost every code file. What I would do is: check if decompiling that 5000+ class library is against the law; write a simple parser to parse code files and find class/method/property definitions; check if what I've found matches what is in the xml. If everything is ok, then I would copy the found xml tags to their new position in the code files, since I now know the positions of the definitions in a file. PS: just a reminder, to ease string replacing/inserting, always do inserting from the bottom of the file to the top. This way, whenever you insert a line, the indexes of the other lines to be inserted will remain the same.

Using more than one <input type="file" /> element for your form is the best you can do. It's not possible to select multiple files from different directories in a single upload dialog.

In answer to the first question "How come I only get one result when I am printing excelfiles()": this is because your return statement is within the nested loop, so the function will stop on the first iteration. I would try building up a list instead and then return this list. You could also combine this with the issue of checking the name, e.g.
import os, fnmatch

# globals
start_dir = os.getenv('md')

def excelfiles(pattern):
    file_list = []
    for root, dirs, files in os.walk(start_dir):
        for filename in files:
            if fnmatch.fnmatch(filename.lower(), pattern):
                if filename.endswith(".xls") or filename.endswith(".xlsx") or filename.endswith(".xlsm"):
                    file_list.append(os.path.join(root, filename))
    return file_list

file_list = ex

You may need to edit your .csproj manually, something like this:

[...]
<ItemGroup>
  <Compile Include="RequestPage.cs">
    <DependentUpon>RequestPage.uitest</DependentUpon>
  </Compile>
  <Compile Include="RequestPage.Designer.cs">
    <DependentUpon>RequestPage.uitest</DependentUpon>
  </Compile>
[...]

Streams are slow. Part of this is because the constraints that apply to them are onerous, and part is because implementations have a tendency to be poorly optimized. Try using Boost.Spirit parsers. While the syntax takes a bit of getting used to and compilation can sometimes be very slow, the runtime performance of Spirit is very high.

Added a loop around file processing, and collecting all log files before that:

#!/usr/bin/perl
use strict;
use warnings;
use POSIX 'strftime';

# my $current_date = strftime "%Y%m%d", localtime;
# my $filename = "/home/ado/log/log.$current_date";
my @filenames = reverse sort glob("/home/ado/log/log.*");
if (@filenames > 7) { $#filenames = 6; }

for my $filename (@filenames) {
    my %output;
    open my $file, "<", $filename or die("$!: $filename");
    while (<$file>) {
        if (/item_id:(\d+)\s*,\s*start/) {
            $output{$1}++;
        }
    }
    close $file;
    for my $item (keys %output) {
        print "$item->$output{$item}\n";
    }
}

You could do it like this:

InputStream is = new FileInputStream(inputFile); // inputFile is the path to your input file
ANTLRInputStream input = new ANTLRInputStream(is);
GeneratedLexer lex = new GeneratedLexer(input);
lex.setTokenFactory(new CommonTokenFactory(true));
TokenStream tokens = new UnbufferedTokenStream<CommonToken>(lex);
GeneratedParser parser = new GeneratedParser(tokens);
parser.setBuildParseTree(false); //!!
parser.top_level_rule();

And if the file is quite big, forget about listener or visitor - I would be creating objects directly in the grammar. Just put them all in some structure (i.e. HashMap, Vector...) and retrieve them as needed. This way creating the parse tree (and this is what really takes a lot of memory) is avoided.

Cocoa error 513 is NSFileWriteNoPermissionError. You can find it in the Foundation Constants Reference. Maybe you don't have write permission for /var.

What you do right now reads the whole directory (more or less) into memory only to discard that content for its count. Avoid that by streaming the directory instead:

my $count;
opendir(my $dh, $curDir) or die "opendir($curDir): $!";
while (my $de = readdir($dh)) {
    next if $de =~ /^\./ or $de =~ /config_file/;
    $count++;
}
closedir($dh);

Importantly, don't use glob() in any of its forms. glob() will expensively stat() every entry, which is not overhead you want. Now, you might have much more sophisticated and lighter-weight ways of doing this depending on OS capabilities or filesystem capabilities (Linux, by way of comparison, offers inotify), but streaming the dir as above is about as good as you'll portably get.

It's a pretty sophisticated solution using Lo-Dash debounce ;-) (in a sec...) Know that when you used your older code of:

grunt.config(['coffee', 'glob_to_multiple', 'src'], filepath);

Grunt is instructed to run the coffee task with the new file. The problem with this is that it's a synchronous process, and so when another file is changed (usually this happens in a matter of milliseconds) Grunt Watch won't allow you to run another process until the debounceDelay has been reached. The default debounceDelay is 500 ms, but this can be changed using the options of the watch task. (Read more about option.debounceDelay.) Basically when you save multiple files, as you saw - only the first file saved is changed.
In order to bypass this, a great utility for delaying (debouncing) function run i

You can define environment variables and use those during compilation. For example, say:

INCDIR=/home/me/dir1/dir2
LIBDIR=/home/me/dir1/dir2/lib

and execute gfortran by saying:

gfortran myprog.f90 -I${INCDIR} -L${LIBDIR}

Can I make one simple exe tool for this? Of course. How do I find the missing files - can anyone give some guidance? You need two lists. You'll need to read and fill one, and then read and compare to determine what's missing. The code might look something like this:

var targetFiles = new List<string>();
foreach (var f in Directory.GetFiles(targetDir))
{
    targetFiles.Add(Path.GetFileName(f));
}
var missingFiles = new List<string>();
foreach (var f in Directory.GetFiles(sourceDir))
{
    if (targetFiles.Contains(Path.GetFileName(f)))
    {
        continue;
    }
    missingFiles.Add(Path.GetFileName(f));
}

Now all of the missing files in the target are in the missingFiles list. You can then loop through that list to copy those missing files to the target.

Since you've tagged your question with git I assume you are asking about git usage for this. Well, SQL dumps are normal text files so it makes perfect sense to track them with git. Just create a repository and store them in it. When you get a new version of a file, simply overwrite it and commit; git will figure out everything for you, and you'll be able to see modification dates. The same is true for .xlsx if you decompress them. .xlsx files are zipped-up directories of XML files (See How to properly assemble a valid xlsx file from its internal sub-components?). Git will view them as binary unless decompressed. It is possible to unzip the .xlsx and track the changes to the individual XML files inside of the archive.

For the second NP_len_*.fa pattern the regex can be like .+NP_len_\d{1,3}\.fa and for the first one, where you do not want the N, use this: .+?[^N]P_len_\d{1,3}\.fa - this one will match all patterns except those with N before P.
I have considered that folder names might grow in the future. About your xaa part: you can alternatively match for a string of length 3 also.

If you absolutely have to do this: Remove all locale files except those in the /en subdirectory. Remove all named Readers and Writers and associated subdirectories except those you need (you still need the Abstracts and the Interfaces though). Remove all CachedObjectStorage options except Memory/Icache/CacheBase. Remove Charts if you don't use them. Remove Shared PCLZip, OLE files and Escher... and hope that I've not miscalculated.

I don't know if there's a way to get a clean build via the UI (I haven't found it yet, anyway), but it's pretty easy via the command line. From the root directory of your project:

./gradlew clean

I also tried "recursive directory listing" at Emacs startup. It simply is way too loooong to be usable. So, try to stick with a limited number of "root" directories where you put your agenda files. Even better is sticking with "~/org", and possibly a few subdirs like "work" and "personal", and that's it. You can put your agenda files there, and have links inside them to your real project root dirs. I know, this is not optimal, but I don't have anything better right now.

The following code will work. In your case you were getting a permission-denied issue; you may want to check the folder in the operating system installation partition (C:). The following will work:

Path path = Paths.get("D:\\TestFolder");
if (Files.exists(path)) {
    System.out.println("exist");
}
if (Files.notExists(path)) {
    System.out.println("not exist");
}

The javadoc says about Files.notExists(): Tests whether the file located by this path does not exist. This method is intended for cases where it is required to take action when it can be confirmed that a file does not exist. Note that this method is not the complement of the exists method. Where it is not possible to determine if a file exists or not then both methods return false. As with the exists method,?
Something like this should work:

#!/bin/bash
for file in *.html
do
    lowercase=`echo $file | tr '[A-Z]' '[a-z]'`
    mv "$file" "$lowercase"
    for f in *.html
    do
        sed -i "s/$file/$lowercase/g" "$f"
    done
done

Change *.html to *.<extension> if you're working with something other than html files.

I'm not clear on this line of code:

public ActionResult ResourceType() { return View("ResourceType/Index"); }

Does this mean that you are loading the resource type view from the Parameters controller? If that's the case, it's not what you want to do. You want to use RedirectToAction. Edit: you could also supply the model in your existing code using a different View() overload. I assume this isn't what you want, though, since you do have a ResourceTypeController. Edit #2: Based on your comment, "I then figured that once those sub views were loaded, their own individual controllers would take over?" No, it actually works the other way around. The controller loads the view (and passes the model, if one is required). It looks like you are trying to load the view, and expecting

I ended up placing my package root in a subdirectory and creating a build script at the root that created all the empty directories I needed, so I wouldn't need .gitignore files and the .git file would no longer get packaged.

You can create an src directory within lib and put your files there. If the files are executable, you can put them inside bin, or within a directory inside bin. See the package layout conventions for details.

You are trying to reuse the Text_File fstream. To do this, you have to call close() to flush the stream after you are done writing to a csv file. Please see this question: C++ can I reuse fstream to open and write multiple files?
Also: Here's my Google search for this question:

Reading the tomcat6 documentation, the part Deploying on a running Tomcat server states on the last line: Note that web application reloading can also be configured in the loader, in which case loaded classes will be tracked for changes. Therefore, going to the loader documentation, you can see that setting the reloadable attribute of your loader configuration will do exactly what you are asking. You can also use the Manager web app.

The problem is that you're linking the object files, not just compiling them. Make sure that you only compile the files, don't link them! You do that using the -c option. Do not use the -l option, you don't want to link anything at this stage. So:

gcc -c -o usb_comm.o usb_comm.c
gcc -c -o hex2bin.o hex2bin.c
gcc -c -o hex_read.o hex_read.c
gcc -c -o crc32.o crc32.c

(I omitted the -I flags to save space here.) Then finally link all those object files into a shared library, and link against usb-1.0:

gcc -shared -o libhello.so usb_comm.o hex2bin.o hex_read.o crc32.o -lusb-1.0

You should use a Makefile for this, though. Or, even better, use a proper build system, like CMake, which is very easy to use. It's provided by all common Linux distros, so simply install it with the package manager.

Is there a way to commit part of modified files (all files are staged) by using libgit2sharp? Currently, there's no way to perform a partial staging/unstaging in LibGit2Sharp. I'd suggest you subscribe to Issue 195 in order to be notified when this is available. There are no Commit methods in Repository that take a path parameter. Actually, the action of committing consists of taking a snapshot of the Index and creating a durable Commit git object in the object database. As such, the Commit API doesn't accept paths. In order to create a Commit from a file (or list of files) on your file system, you'd first have to add them to the Index with repo.Index.Stage(), then invoke the repo.Commit() method.
I use a fairly vanilla Atom install. Every so often in my python code I need to set a debugger breakpoint, so I type: import ipdb; ipdb.set_trace() # DEBUG However, recently Atom started autocompleting the word ipdb in my files so that as soon as I type ip it helpfully suggests: import ipdb; ipdb.set_trace() I’d like to change this text, so that it includes the # DEBUG at the end. However, I can’t figure out where this is coming from, as it appears even when I have no other files open that have the text ipdb so it’s clearly some kind of user-dictionary. I checked snippets, but they’re empty other than the default ‘lorem’ provider. Where is this user dictionary coming from and how can I edit it?
Nicola Pero, Thank you for continuing this discussion. Thanks to the others too. I note the points about -j, namespaces and subprojects made and will reply to them in the context of Nicola's message. Dealing with the last point first: I was not accusing the current maintainer of not maintaining it enough. It seemed a popular opinion that the current design isn't documented as much as it could be and I've struggled to link the Documentation with what exists at the minute, so I concluded that there are people who would like to help who don't understand the current code. There's more love which it could get. What is the Master idea? Why can't we set the environment and then exec the user's preferred shell again instead of sourcing GNUstep.sh? Those are probably obvious questions and I didn't find answers. I know shell and some make, but I don't know gnustep-make and it's complex enough to deter me. I admit I am rusty on this. My first published computing article was about applying graph theory to compilation, but I've not had opportunity to work on this for years.

Nicola Pero <address@hidden> wrote:

> Well, I tried to explain in words. You can't iterate in make. You can't
> execute pieces of code iteratively -- you can only execute them once, or
> never. this does not make it exactly easy to iterate over files, read
> each of them, and then execute the same rules for each of them with
> different variables. You CAN NOT execute the same rules with different
> variables in the same make invocation, which is what we want to do here.

Are you sure it's what we want to do? It's what we would want to do if we were using an imperative programming language like Objective-C to drive the build, but we agree that make is not a full programming language. According to its manual, GNU make "constructs a dependency graph of all the targets and their prerequisites."
To that graph, the above situation needs to look like the final stage of building the files *depends on* having completed the previous stage for all files. Why can't that dependency be declared, rather than relying on implicit execution order in the version of make used? I think this may be why using -j is breaking for some people. Have I understood Helge's point correctly? > Moreover, there are no local variables or rules. Everything is global. Yes, this point is understood. There will be times when we need to do something specific with variables or have two subprojects that interfere with each other. Is this the general case, though? If not, can we accommodate the special cases without penalising the general? In the few GNUmakefiles I've just looked at, it seems that things could be rewritten with prefixes or +=, or are already using prefixes. Some things, like APP_NAME, can already take a list. Most packages don't do anything more than the minimum of setting a few things and doing some includes - is that generally true? > Btw, how much better ... are you sure that that is what gives you slow > build times ? [...] Even ignoring the point about -j, having to revisit every directory on a large project that was already built is a performance problem too. One feature of make is that it can do just the work needed to rebuild. > > Yes, the subprojects that you use in one project have to be compatible, > > but that's already true. > No, it's not. You can do > ADDITIONAL_OBJCFLAGS += -lPincoPallino > in one GNUmakefile, and that will *not* be seen by another GNUmakefile in > another directory. If you are messing with the general *CFLAGS for something which should not be passed to all final outputs, isn't that asking for trouble? Why not AppName_OBJCFLAGS += -lPincoPallino (or similar) if it's only for that app? > > Recursion doesn't change that fundamentally either. > Of course it does. 
ADDITIONAL_OBJCFLAGS defined in subproject A have no > effect on ADDITIONAL_OBJCFLAGS defined in subproject B because they are > read in different, sandboxed, make invocations. Depending on their relative positions in the hierarchy, subproject A can pollute subproject B even across make invocations. It's not global, but allowing it is still a bug, isn't it? Some variables are reset with some make invocations, but it still alarms me to see it called "isolation" or "sandboxing". > > Not isolation at all. Assuming isolation seems dangerous to me. > I don't agree. "Isolation" is a basic design pattern of good software > architecture, and it looks like an essential requirement for a maintainable, > usable system to assume isolation of subproject makefile code. You are assuming isolation where it doesn't exist, which is the problem. How can a subproject's make be "isolated" from all other makefiles yet still allow settings passed from the environment? > > Recursion does allow "dirty" makefile practices to continue longer. Maybe > > sometimes recursion would be necessary, but I don't see why it seems to > > be the default for so many GNUstep packages. Is it just lack of time to > > hack an elegant solution, which is what your message suggests? > > No - it's the way that make works that forces you to do this. Are you sure? The gnustep-make DESIGN file looks ideal for using a straight make rather than recursing. > make is supposed to let you define rules and targets and use them. (btw, > generally, make does encourage you to use recursive invocations to iterate > over directories). Make encourages you and then penalises you for doing it? I'd take the advice of such an awkward friend with caution ;-) Why do you think it's encouraged? It's documented, but that might be to stop people doing it badly. The case given as an example in the manual is fairly narrow.
Other people are very scathing about it, such as "Recursive make Considered Harmful" by Peter Miller (1998), Journal of AUUG. I think that gnustep-make is better than automake, but I wonder if more speed can be obtained by looking again for simplicity and seeing if any of the special-case complexity can be refactored. More efficient rebuilds and more use of -j seem desirable. The DESIGN looks sound, but I can't relate it to the code very well. I acknowledge the edge case of incompatible projects, but I still haven't seen anything which looks like a general problem. The problem causing iteration over files looked solvable by other means, but maybe I misunderstood it. I await any replies with interest. -- MJR/slef
http://lists.gnu.org/archive/html/discuss-gnustep/2005-01/msg00164.html
CC-MAIN-2014-15
refinedweb
1,109
64.91
On Wed, 2009-03-25 at 10:03 +0100, Ingo Molnar wrote:

> * Paul Mackerras <paulus@samba.org> wrote:
>
> > Ingo Molnar writes:
> >
> > > * Paul Mackerras <paulus@samba.org> wrote:
> > >
> > > > +++ b/kernel/perf_counter.c
> > > > @@ -1362,8 +1362,13 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
> > > >  	vma_size = vma->vm_end - vma->vm_start;
> > > >  	nr_pages = (vma_size / PAGE_SIZE) - 1;

To answer your question below... this: ^^^

> > > > -	if (nr_pages == 0 || !is_power_of_2(nr_pages))
> > > > -		return -EINVAL;
> > > > +	if (counter->hw_event.record_type == PERF_RECORD_SIMPLE) {
> > > > +		if (nr_pages)
> > > > +			return -EINVAL;
> > > > +	} else {
> > > > +		if (nr_pages == 0 || !is_power_of_2(nr_pages))
> > > > +			return -EINVAL;
> > > > +	}
> > >
> > > Hm, is_power_of_2() is buggy then as 1 page is a power of two as
> > > well: 1 == 2^0.
> > >
> > > Hm, it seems fine:
> > >
> > > static inline __attribute__((const))
> > > bool is_power_of_2(unsigned long n)
> > > {
> > > 	return (n != 0 && ((n & (n - 1)) == 0));
> > > }
> > >
> > > that should return true for an input of 1.
> > >
> > > What am i missing?
> > >
> > > 	Ingo
> >
> > We have one page as a header that contains the info for reading
> > the counter value in userspace plus the head pointer, followed by
> > (for a sampling counter) 2^N pages of ring buffer.
>
> ah - ok. Morning confusion. (any email from me that comes at single
> digit hour local time should be considered fundamentally suspect ;-)
>
> Wouldnt it still be better to keep the symmetry between counting and
> sampling counters? In theory we could transit between these states
> and 'switch off' a sampling counter or 'switch on' a counting
> counter - via an ioctl or so. Shouldnt counting counters be sampling
> counters that were created while disabled temporarily?

I think I initially intended 0 pages to be ok, even for sampling
counters. I just messed up that if stmt.

	if (nr_pages != 0 && !is_power_of_2(nr_pages))
		return -EINVAL;

would, I think, do what I intended.
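Paul's corrected condition is easy to sanity-check outside the kernel. Here is a small Python rendering of the same bit trick and the intended size rule (a convenience sketch, not the kernel code itself):

```python
def is_power_of_2(n):
    # Same bit trick as the kernel helper: a power of two has exactly
    # one bit set, so clearing the lowest set bit must yield zero.
    return n != 0 and (n & (n - 1)) == 0

def mmap_size_ok(nr_pages):
    # Paul's intended check: zero pages (header page only) is fine;
    # otherwise the ring buffer must span a power-of-two pages.
    return not (nr_pages != 0 and not is_power_of_2(nr_pages))
```

For nr_pages values 0, 1, 2 and 4 the check passes, while 3, 5 and 6 are rejected, matching the 1 == 2^0 point raised above.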
https://lkml.org/lkml/2009/3/25/117
CC-MAIN-2016-36
refinedweb
275
65.93
While attempting to apply a host template, I keep running into this error/notification:

Host must have a single version of CDH installed

I didn't notice multiple CM agents running on the host; is there anything else I need to check? Thanks!

Hi buntu, apart from having the CM Agent service running - did you send a parcel to the host? (This is done by adding it to the cluster.) The API endpoint below should be called with POST /clusters/{{ cluster }}/hosts and payload: {'items': ['my_node_fqdn', 'other_node_fqdn'] } Hope this helps others.

Created 03-19-2020 10:14 AM
Thank you. This worked for me.

Hello, in my case I'm trying to deploy a new host with the Cloudera CM API (using Python). All runs well, but when I want to apply a host template, I'm receiving this error:

ApiException: Host must have a single version of CDH installed. (error 400)

I know that Cloudera automatically distributes and activates parcels on a new host, but how can I make the same code recognize that the activation has completed and only then apply the host template? While the script runs, the parcel state reads Activated even though the parcel is actually still being distributed.
This is my code:

    from cm_api.api_client import ApiResource, ApiException
    from time import sleep
    import ssl
    from time import gmtime, strftime
    import socket

    repo_url = ''
    hostnames = ["clouderapre-node3"]
    hosts = []
    host_password = 'cloudera'
    parcel_version = "5.15.0-1.cdh5.15.0.p0.21"
    CM_SERVER_HOST = "clouderapre-mgr.fintonic.com"

    cxt = ssl.create_default_context(cafile="/home/rlopez/certs/CA.crt.pem")
    api = ApiResource(CM_SERVER_HOST, username="cmapi", password="B1gd4t@", use_tls=True, ssl_context=cxt)
    cm = api.get_cloudera_manager()
    cluster = api.get_cluster('cluster')
    parcel = cluster.get_parcel('CDH', parcel_version)
    template = cluster.get_host_template(name='nodo-computo')

    for hostname in hostnames:
        create_host = api.create_host(hostname, hostname, socket.gethostbyname(hostname), "/default")
        hosts.append(create_host)
        install_cm_agent = cm.host_install(user_name='root', host_names=[hostname], cm_repo_url=repo_url, password=host_password)
        while install_cm_agent.success == None:
            sleep(5)
            install_cm_agent = install_cm_agent.fetch()
        if install_cm_agent.success != True:
            print "cm_host_install failed: " + install_cm_agent.resultMessage
            exit(0)

    sleep(60)

    for host in hosts:
        hostId = [host.hostId]
        add_host = cluster.add_hosts(hostId)
        install_template = template.apply_host_template(host_ids=hostId, start_roles=True)
        while install_template.resultMessage == "ApiException: Host must have a single version of CDH installed. (error 400)":
            sleep(5)
            install_template = template.apply_host_template(host_ids=hostId, start_roles=True).wait()

    exit(0)

Created 11-23-2018 10:47 AM
Facing a similar issue, did you get any solution yet? Regards, Smith

Hi, do you have a solution for this?

Hi, from my point of view, while the CDH parcel packages are syncing, the figure below is the total size of the parcels directory. The sync may take some time depending on your network transfer rate; after it has finished you can continue with the next step, such as applying the host template to the hosts.

2.7G /opt/cloudera/parcels

Cheers,
Hua
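A generic way to close the race described above is to poll the parcel's state and only apply the host template once it reports the stage you want. The helper below is deliberately API-agnostic: the 'ACTIVATED' stage name and the suggested get_stage wiring mirror the cm_api usage in this thread, but treat those details as assumptions to verify against your CM version.

```python
import time

def wait_for_stage(get_stage, wanted="ACTIVATED", timeout=600, interval=5):
    """Poll get_stage() until it returns `wanted`, or give up after timeout.

    In the cm_api script above, get_stage would be something like
        lambda: cluster.get_parcel('CDH', parcel_version).stage
    (hypothetical wiring - check it against your client library).
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_stage() == wanted:
            return True
        time.sleep(interval)
    return False
```

Calling wait_for_stage(...) right before template.apply_host_template(...) should keep the 400 error from firing while the parcel is still distributing.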
https://community.cloudera.com/t5/Support-Questions/Apply-host-template-Host-must-have-a-single-version-of-CDH/m-p/81499
CC-MAIN-2020-34
refinedweb
457
51.95
Quoting Matthew Hague <matthewhague at zoho.com>: >> Date: >> >> When evince is FullFloating I got problems when opening more than one >> instance of it, like a second paper. It just lets the first one disappear >> under the freshly opened one. > > The solutions discussed already are probably the way to go for a clean > approach, but just a remark on this point, as it's a general problem with > floating windows in xmonad. > > There's a (undocumented, i think) function raiseWindow that you can > use to put > a window in front of all others. I guess this isn't called by default since > tiled windows don't need to be raised in front of each other (whereas floats > do). > > So you can add this to your config to get around it, but maybe it's a hack: > > raiseFocused :: X () > raiseFocused = do > disp <- asks display > mw <- gets (W.peek . windowset) > maybe (return ()) (io . (raiseWindow disp)) mw > > myLogHook = ... <+> raiseFocused <+> ... The way to raise the focused floating window without going behind xmonad's back is to use shiftMaster. mod+click does this by default. ...prolly don't want to stick that in your log hook, though. ~d
http://www.haskell.org/pipermail/xmonad/2012-July/012836.html
CC-MAIN-2014-35
refinedweb
192
72.97
Jython is incredibly useful. I want to maintain some shared code, basically a few data structures, config items, convenience functions. I try adding a shared.py in the jss/jython/workflow directory, and I can then easily "import shared" in my Jython scripts ... until I need to revise shared.py ... an old version gets cached in JIRA/Jython somewhere and I don't know how to get the revised version re-interpreted. I have tried removing shared$py.class as if it were an errant .pyc file, but nothing happens. A new class file is not even re-created. So, I have to copy all my shared functions manually back into my scripts. Talk about a bad scene! How can I convince Jython to re-load my updated shared module?

So, a colleague got this into my scripts, and it appears to do the trick:

    import shared
    reload(shared)

Can anyone posting bounty here confirm this is valid?
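For what it's worth, this is standard Python module-cache behaviour rather than anything JIRA-specific: a module is executed once and cached in sys.modules, so edits on disk are invisible until you force a re-read. The sketch below demonstrates the same reload() trick with a throwaway module (written with importlib.reload, the Python 3 spelling of the builtin reload that Jython 2 provides):

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True          # keep the demo free of .pyc caching
tmpdir = tempfile.mkdtemp()
sys.path.insert(0, tmpdir)
path = os.path.join(tmpdir, "shared_demo.py")

with open(path, "w") as f:
    f.write("VALUE = 1\n")
import shared_demo                      # first import: module is cached
first = shared_demo.VALUE

with open(path, "w") as f:
    f.write("VALUE = 22\n")             # edit the module on disk
importlib.invalidate_caches()
importlib.reload(shared_demo)           # without this, VALUE stays stale
second = shared_demo.VALUE
```

After the reload, second picks up the new value while a plain re-import would have kept returning the cached one.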
https://community.atlassian.com/t5/Jira-questions/How-do-I-make-Jython-un-cache-modules/qaq-p/104775
CC-MAIN-2018-22
refinedweb
180
77.13
There are numerous (hundreds?) of different measures for memory usage on a Linux machine, but what is a good heuristic/metric to use to help determine if a server needs more memory? Some ideas:

- /proc/meminfo
- /proc/vmstat

There's no right answer to this. Peter is correct in saying that the values you need to be looking at are reported in top and free (you can get the source code for the procps package, which shows how to get these values from 'C' - but for scripts it's simpler to just run 'free'). If the system has unused memory (the first line of output from free) then it's unlikely to get much faster by adding more memory - but it might get faster by reducing the VFS cache pressure (keep stuff in cache longer). Although there's no right answer, there are lots of wrong ones - you can't tell from userspace which pages are shared but accessed via different locations - looking at memory usage to determine how much memory is free just doesn't work. As a starting point then, you should be looking at the two values for free memory reported by 'free'.

You can run the command top to see an overview of all the major components in Linux, including memory usage. When viewing top for the first time, do note that memory used includes buffers and cache, if any. There is also the free command for memory; you can run free -m to view free memory in megabytes. There are many more tools, but I think that has sufficiently answered the tool part of the question. As to when you need more memory, it depends on the application you are running. Does it need burst capacity? Does it benefit heavily from a large cache size? But generally, if you're hitting swap, and often, you really need more RAM.

If I were you, I would collect data on load, free memory, free -m and the main performance characteristic of your server (e.g.
latency per request), and graph it in Calc/Excel, trying to discern the "swapout cliff" for several data points (memory configurations - 8 G, 16 G, 32 G etc.). Then, I would try various regressions to find the link between the "cliff" and memory available. A search of existing literature at CiteSeerX would also help.

I have said this before: the best measure of real-time memory requirements is to observe the Committed_AS field in /proc/meminfo and compare it over time to see how much memory you need. Theoretically, if your Committed_AS always stays below (MemFree + SwapFree) then you are fine. But if it exceeds that and you keep accumulating workload on the system over time, you are approaching an OOM situation. The Committed_AS value is how much memory the system would require if all the memory requests it has granted were used at this very instant. Monitoring it over time is a good way to see whether you need to increase RAM or decrease the workload.

Really it all depends on the application(s); however, you can use the method employed by the kernel to determine memory pressure, which should give you a general overview of the host's ability to manage its memory. Memory pressure is a handy signal since it is devoid of worrying about page cache, swappiness or even how much memory you actually have. Memory pressure is effectively a count of how many pages want to be marked active as per /proc/meminfo. The kernel measures memory pressure by keeping track of how many pages go from 'inactive' to 'active' in the page table. A lot of shifting between these two statuses indicates you probably do not have a lot of spare memory available to make more pages active. Low memory pressure is indicated by having very few promotions from inactive to active (because the kernel clearly has enough space to make active pages stay active).
The idea here is that you graph the data with the Y axis centred on 0. In ideal circumstances the graph should be a horizontal line following 0. If the lines regularly spike away from 0 (particularly 'Active' being positive, or spiking quite high regularly), the memory pressure on the host is high and more memory would be beneficial. This script will measure pressure every PERIODIC seconds. The more data you can collect the better.

    import re
    import time

    PERIODIC = 1

    # Active: and Inactive: appear on adjacent lines of /proc/meminfo
    pgs = re.compile(r'Active:\s+([0-9]+) kB\nInactive:\s+([0-9]+) kB', re.M)
    meminfo = open('/proc/meminfo')

    def read_meminfo():
        content = meminfo.read(4096)
        m = pgs.search(content)
        active, inactive = int(m.group(1)), int(m.group(2))
        # meminfo reports kB; divide by 4 to get 4 kB pages
        active //= 4
        inactive //= 4
        meminfo.seek(0, 0)
        return active, inactive

    if __name__ == "__main__":
        oldac, oldin = read_meminfo()
        while True:
            time.sleep(PERIODIC)
            active, inactive = read_meminfo()
            print("Inactive Pressure:\t%d" % (inactive - oldin))
            print("Active Pressure:\t%d" % (active - oldac))
            oldac = active
            oldin = inactive
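The Committed_AS heuristic suggested earlier in the thread is just as easy to script. The sketch below parses meminfo-style text and applies the Committed_AS > (MemFree + SwapFree) test; it takes the text as a string so you can feed it the contents of /proc/meminfo yourself (the comparison is the heuristic from this thread, not an official kernel rule):

```python
def parse_meminfo(text):
    # Each /proc/meminfo line looks like "MemFree:  123456 kB".
    info = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0].endswith(":"):
            info[parts[0].rstrip(":")] = int(parts[1])
    return info

def overcommitted(info):
    # True when the memory already promised to processes would not fit
    # into free RAM plus free swap if it were all touched at once.
    return info["Committed_AS"] > info["MemFree"] + info["SwapFree"]
```

Pointing parse_meminfo at open('/proc/meminfo').read() and sampling overcommitted() over time gives you the trend the answer above recommends watching.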
http://serverfault.com/questions/437138/which-metric-should-i-use-to-determine-when-a-server-is-low-on-memory/437321
CC-MAIN-2015-14
refinedweb
858
60.35
as kevin said thats ur hardluck and mine as well !!

@123 brother, show me even one post or message of mine in which I wrote that they are unrepaired !!

Usman bhai, tell this poster to show me one message or one post where I said they are unrepaired. In fact I honored my words that they are in working condition. When I bought these I was told that they are unrepaired and in working condition, whereas both of them were dead.

seller came here to clear his name, i give him credit for that

thanks for the support !!! the best way to support is a money-back guarantee.. as the buyer is not satisfied

as from what i can understand, this was not a blind deal. the buyer heard it, saw it, held it, and then decided to buy it. therefore this was not a blind deal. hence if the buyer was not satisfied, he shouldn't have bought it. but such things shouldn't happen.

ban cool_ash who did a blind deal with qadir and made him a fool. or ban the guy who also gifted me 2 "repaired subs". i threw them away, but not everyone is willing to do that.

Moneyback is not a right! It depends on the seller!

@ aq dude, in any case, u should have told them their status and story. Only then could he have made a proper decision. U know most of the rookies can't judge a sub's condition and are quick to make the deal, so don't exploit that. U should have stated that they are repaired and perfectly working, it's a matter of a darn 5k man. right or wrong, ur good will and repute has been damaged in my opinion.

This poster is selling and passing on that cv sub as well, so he's not an honest guy either. This is what i posted in his sale thread:

@ poster, there were a couple of spoiled cvs gone to karachi which got repaired. u must promise to the purchaser it's not one of them, and check with the guy u bought from as well. no criticism, just a warning so later u don't face someone's anger. best of luck

Nice try kiddo

A lot of buyers of electricity are not happy, so do you think the government will return their money over their unsatisfactory experience? I am not satisfied with my University. Will they give me my fee back?

P.S. my statement is just concerned with the 2c and has NO link with the OP

it's not the seller's fault if the buyer doesn't know what he is buying. he is not exploiting the buyer; instead he gave it to him for 5k when his demand was 7k. "buyers must be aware"

^agreed..

they can give you a partial fee back as you haven't used their services for the full duration of time. but again, there are 100 unis offering the same program, so why did you take admission in this particular uni? This shows you took an informed decision, so now it's at the uni's discretion whether they will refund or not. if it's me then i will never return your money

I always knew you were a swindler... Accountants and returning money. NEVER EVER!

come on, u know me better from our past dealings :D EVEN THOUGH THEY WERE NOT VERY MATERIAL LOL. You've really taken it to heart!

^+1 and it's such a shame. i have always believed in buying stuff from users over the internet instead of going to the market, believing that unlike the mostly dishonest shopkeepers, the educated people on the internet would not commit fraud in a one to one deal. they would at least care about their own honour, i.e. self respect. but time and again i have been proved wrong. Yet, i myself keep getting fooled again every now and then :P. Remember guys, hiding the true facts is as much a bad thing to do as lying about them. And look at 1234567890, making a thread about the fraud when he was not able to sell the stuff even though he lied as much about it as he could. what a show guys, saddening. I pray that all of us, including me, develop the courage and character not to sell our souls at such low prices. There are bigger tests ahead in this life and the one hereafter. Hope we stay prepared.

no.. boss, when i asked u for a one-day warranty so i could get it checked at any shop.. then u said "yaar, what warranty should i give.. it's not repaired and has no other fault"...
https://www.pakwheels.com/forums/t/aqgmbest-abdul-qadir-is-a-fraud/160017?page=2
CC-MAIN-2017-17
refinedweb
776
79.7
@fxm: Not sure what you are doing, but for me - both compile without -exx as noted.

@coderjeff: What is the difference between step 4 and step 6?

END/STOP and destructors

Re: END/STOP and destructors

speedfixer wrote:

Code:

    .....
    ''---------- case 3 -------------
    'dim as string a(5)
    'print a(9)
    ' compile with no -exx:
    ' screen crash, seg fault
    ''---------- case 4 -------------
    'dim as string a(5)
    'print "AAA"
    'print a(9); "XXX"
    ' compile with no -exx:
    ' AAA, XXX, ZZZ, DDD, no message
    .....

Cases 3 and 4 do not compile at all. (Test it if you are not convinced.) With "redim as string a(5)", this would compile.

fxm wrote: (the value limits of numeric literals used as static array indexes are always checked at compile phase)

Re: END/STOP and destructors

speedfixer wrote: What is the difference between step 4 and step 6?

Yeah, it's almost the same thing; I was attempting to highlight what I think is an important detail in the startup:

4) call fbc MAIN() - this is YOUR (implicit) main function in the main module

This is an actual procedure entry point "main(argc, argv[])" in the linker's namespace, like in C. It's the "main" entry point for your program from the perspective of the CRT start-up code. If you are running a debugger and set a breakpoint on "main", this is where it stops.

5) call to fb_Init() to finish init of the program

This is automatically added by fbc to the start of your implicit main function, before any of your statements. Initialization of fbrt is not quite finished yet: the call to fb_Init() passes argc/argv[] to fbrt so that COMMAND() will work, and passes the compile-time __FB_LANG__ constant so that the rtlib can respect dialect-dependent behaviours (I think only the RND() function is affected).

6) YOUR PROGRAM HERE

These are the statements written in your main source module.
From a simplistic point of view this is where the FreeBASIC program actually starts.
https://www.freebasic.net/forum/viewtopic.php?p=246176&amp
CC-MAIN-2019-43
refinedweb
379
66.98
You have to use addobservermulti, which adds the observer as many times as there are data feeds in the system. It is actually missing from the docs, even if documented in the source. To be added. The actual docstring:

    def addobservermulti(self, obscls, *args, **kwargs):
        '''
        Adds an ``Observer`` class to the mix. Instantiation will be done at
        ``run`` time

        It will be added once per "data" in the system. A use case is a
        buy/sell observer which observes individual datas.

        A counter-example is the CashValue, which observes system-wide values
        '''

You seem to be looking for the parameter barplot of the BuySell observer, which adds extra distance if True. The distance is controlled by the parameter bardist. See the BuySell reference: Docs - Observers Reference

@backtrader Thank you! Helped.
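The "once per data" behaviour is easy to picture with a toy model. The classes below are NOT backtrader internals, just a sketch of the difference between the two registration styles described in the docstring:

```python
class Cerebro:
    """Toy model of the two registration styles (not the real backtrader class)."""
    def __init__(self, datas):
        self.datas = datas
        self.observers = []

    def addobserver(self, obscls):
        # One system-wide instance (CashValue-style).
        self.observers.append(obscls(None))

    def addobservermulti(self, obscls):
        # One instance per data feed (BuySell-style).
        for data in self.datas:
            self.observers.append(obscls(data))

class Observer:
    def __init__(self, data):
        self.data = data   # None means "observes the whole system"

cerebro = Cerebro(datas=["feed1", "feed2", "feed3"])
cerebro.addobservermulti(Observer)   # three instances, one per feed
cerebro.addobserver(Observer)        # a single system-wide instance
```

With three feeds, addobservermulti creates three observer instances bound to individual feeds, which is why BuySell markers land on the corresponding price plots.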
https://community.backtrader.com/topic/344/plot-observer-stoploss-takeprofit-on-corresponding-price-plots
CC-MAIN-2017-30
refinedweb
125
56.15
Introduction : In this tutorial, we will learn how to clone or copy a vector object in Java. The program will take user inputs to create one vector and then it will clone the vector to a different variable. Vector is like a dynamic array in Java. Arrays are fixed. You can’t add extra items to an array. But vectors are of variable size. You can add as many items you want. The size of the vector will increase whenever you keep adding items to it. Our program will first ask the user to enter the count of elements of the vector. It will then take the inputs of each element of the vector using a loop. Finally, it will clone the vector to a different variable and print out the result. Example Java Program : import java.util.Scanner; import java.util.Vector; public class Example { public static void main(String[] args) { //1 int count; //2 Scanner s = new Scanner(System.in); Vector vector = new Vector<>(); //3 System.out.println("Enter total number of elements you want to add : "); count = s.nextInt(); //4 for (int i = 0; i < count; i++) { System.out.print("Enter string for position " + (i + 1) + " : "); vector.add(s.next()); } //5 Vector cloneVector = (Vector) vector.clone(); //6 System.out.println("New vector is : "); for (Object aCloneVector : cloneVector) { System.out.println(aCloneVector); } } } Explanation : The commented numbers in the above program denote the step numbers below : - Create one integer variable count to store the total size of the vector. - Create one Scanner variable s to read the user input. Also, create one Vector vector to hold string inputs. - Ask the user to enter the total size of the vector. Read the user input value using the Scanner s and store it in count variable. - Now, run one for loop to take the inputs for the vector from the user. On each iteration, read the user input and add it to the vector using add() method. We are reading the user input value using next() method. - This step is used for cloning the vector. 
For cloning, we have one built-in method called clone(). This new vector is stored in the cloneVector variable. Note that we need to cast the new value to a Vector. - Finally, print out the new vector to the user. We are using one for each loop to print out the content of the newly created vector. Sample Output : Enter total number of elements you want to add : 3 Enter string for position 1 : Hello Enter string for position 2 : World Enter string for position 3 : !! New vector is : Hello World !! Enter total number of elements you want to add : 2 Enter string for position 1 : 1 Enter string for position 2 : 1 New vector is : 1 1 Conclusion : Cloning a vector is easy using its built-in clone method. In this example, we have learned how to create a vector using user inputs, how to clone a vector and also how to loop through all elements of the vector. Try to run the example program we have shown above and drop one comment below if you have any queries. Similar tutorials : - Java Program to print the sum of square series 1^2 +2^2 + ?.+n^2 - Java Program to calculate BMI or Body Mass Index - Java program to find the area and perimeter of an equilateral triangle - Java Program to Multiply Two Matrices - Java Program to find Transpose of a matrix - Java program to find maximum and minimum values of a list in a range
https://www.codevscolor.com/java-clone-vector
CC-MAIN-2020-40
refinedweb
590
66.44
In Python there are two 'similar' data structures:

- list - CPython's lists are really variable-length arrays
- set - Unordered collections of unique elements

Which one you use can make a huge difference for the programmer, the code logic and the performance. In this post you will find when to use a list and when to use a set, several examples and performance tests.

Some key differences between lists and sets in Python, with examples:

Lists allow duplicated values. Sets can't contain duplicates

The major difference for me is that a list can contain duplicates while a set holds only unique values. This can be seen in the example below:

    mylist = [1,2,3,4,5, 1,2,3,4,5]
    myset = {1,2,3,4,5}
    list_to_set = set(mylist)
    print(type(mylist))
    print(type(myset))
    print('{} - {}'.format(type(list_to_set), list_to_set))

result:

    <class 'list'>
    <class 'set'>
    <class 'set'> - {1, 2, 3, 4, 5}

Sets are unordered

Another key difference is that lists are ordered while sets are not. In other words, if you try to get the first element of a set you will end up with an error: TypeError: 'set' object does not support indexing, as the example below shows:

    mylist = [1,2,3,4,5, 1,2,3,4,5]
    myset = {1,2,3,4,5}
    print(mylist[0])
    print(myset[0])

result:

    1
    TypeError: 'set' object does not support indexing

Sets are more efficient than lists

Hash lookup is used for searching in sets, which means they are considerably faster to search than lists. The next example demonstrates how much faster sets are in comparison to lists. For 100000 membership searches in a list and in a set we have the following times:

- list - 49.663 seconds
- set - 0.007 seconds

    import cProfile

    def before():
        for i in range(1, 100000):
            i in mylist

    def after():
        for i in range(1, 100000):
            i in myset

    mylist = []
    for i in range(1, 100000):
        mylist.append(i)
    myset = set(mylist)

    cProfile.run('before()')
    cProfile.run('after()')

result:

    4 function calls in 49.663 seconds
    4 function calls in 0.007 seconds

As you can see, searching in a list is much slower in comparison to a set. So if you want to improve the performance of your Python applications, you can consider using sets where possible.

Lists can store anything while sets store only hashable items

This code example demonstrates the problem:

    mylist = (([5],['b']))
    myset = {((5),('b'))}
    myset = {([5],['b'])}

The third line will raise an error: TypeError: unhashable type: 'list', because sets work only with hashable items. In other words, you can add tuples to a set but not lists. So if you need to store lists of lists, you have to use a list.
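As a follow-up to the hashability point: when you genuinely need a "set of sets", Python offers frozenset, an immutable and therefore hashable set type. A short sketch:

```python
# Tuples are hashable, so they can go in a set; lists are not.
myset = {(5, 'b')}

# frozenset is an immutable, hashable set - use it to nest sets
# inside other sets or as dictionary keys.
nested = {frozenset([1, 2]), frozenset([3])}

error = None
try:
    {[5], ['b']}          # lists are unhashable -> TypeError
except TypeError as exc:
    error = str(exc)
```

The attempted set of lists fails with the same "unhashable type: 'list'" error shown above, while the frozenset version works.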
https://blog.softhints.com/python-list-set-examples/
CC-MAIN-2021-25
refinedweb
449
67.49
NAME
sched_setaffinity, sched_getaffinity - set and get a process's CPU affinity mask

SYNOPSIS
#include <sched.h>

int sched_setaffinity(pid_t pid, unsigned int len, unsigned long *mask);
int sched_getaffinity(pid_t pid, unsigned int len, unsigned long *mask);

DESCRIPTION
sched_setaffinity() sets the CPU affinity mask of the process identified by pid to the value pointed to by mask. A set bit corresponds to a legally schedulable CPU while an unset bit corresponds to an illegally schedulable CPU. In other words, a process is bound to and will only run on processors whose corresponding bit is set. Usually, all bits in the mask are set. If the process specified by pid is not currently running on one of the CPUs specified in mask, then that process is migrated to one of the CPUs specified in mask. The argument len is the length (in bytes) of the data pointed to by mask.

sched_getaffinity() writes into the location pointed to by mask, which has size len, the affinity mask of process pid. If pid is zero, then the mask of the current process is returned.

RETURN VALUE
On success, sched_setaffinity() returns 0. On error, -1 is returned, and errno is set appropriately.

On success, sched_getaffinity() always returns the size (in bytes) of the affinity mask used by the kernel. On error, -1 is returned, and errno is set appropriately.

ERRORS
EFAULT A supplied memory address was invalid.
EINVAL The affinity bitmask mask contains no processors that are physically on the system, or the length len is smaller than the size of the affinity mask used by the kernel.

NOTES
The affinity mask is actually a per-thread attribute that can be adjusted independently for each of the threads in a thread group. The value returned from a call to gettid(2) can be passed in the argument pid. The glibc prototype has changed across versions: the len field was altered, and still later versions reverted again.

SEE ALSO
clone(2), getpriority(2), gettid(2), nice(2), sched_get_priority_max(2), sched_get_priority_min(2), sched_getscheduler(2), sched_setscheduler(2), setpriority(2), capabilities(7)

sched_setscheduler(2) has a description of the Linux scheduling scheme.
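For quick experiments from user space, Python's os module wraps these Linux-only calls as os.sched_getaffinity() and os.sched_setaffinity(), with pid 0 again meaning the calling process. The snippet below restricts the current process to a single CPU and then restores the original mask (a sketch; it assumes a Linux host):

```python
import os

# pid 0 means "the calling process", as in the C interface.
allowed = os.sched_getaffinity(0)     # e.g. {0, 1, 2, 3}

# Pin ourselves to the lowest-numbered CPU we may run on...
one_cpu = {min(allowed)}
os.sched_setaffinity(0, one_cpu)
restricted = os.sched_getaffinity(0)

# ...then restore the original affinity mask.
os.sched_setaffinity(0, allowed)
```

A process may shrink its own affinity mask without special privileges, which makes this a convenient way to observe the EINVAL and migration behaviour described above.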
http://manpages.ubuntu.com/manpages/dapper/man2/sched_getaffinity.2.html
Hi everyone, newcomer here. I tried this puzzle in Lua. I had the exact same solution but with other variable names, and it didn't work (highestMountainH instead of max and highestMountainID instead of imax). After some renaming attempts and copy-pasting, I found out that I can use whatever name I want for imax, but I have to use max, otherwise it won't work. Do we have to use the variable names of the solution? Is that standard behaviour for all puzzles? How are we supposed to know which name was used? Only mountainH was mentioned in the constraints. Thank you in advance for your help!

EDIT: OK, half an hour later, I reloaded the page, and now it works! I used Ctrl-Z and came back to the exact same code I had, and it doesn't produce errors. Not too sure what happened. Did you sneaky devs quickly fix it?

I have tried the C++ test and found it irrelevant, since the compiler version is quite old - 4.9. Thank you!

Help please, I don't know how to succeed. If someone can help me, please, thanks.

I could use some help... I don't know how to solve this in JavaScript, except for the order of mountains to fire at, but when I do that, the mountains change, so my order doesn't work. Can an expert help a noob out please? If anybody is listening, I need the help!

Hi! I'm just beginning to learn coding, but I don't have any basics. Is there an easy introduction to the JavaScript language somewhere? I couldn't even understand what i, i++ or even parseInt() mean.

You should read some documentation, like the basics of this language.

Hi! Is this site for professionals, or is it a learning site? I'm a beginner and I totally do not understand how to solve this puzzle. What commands should I know?

With what language are you trying to solve this puzzle?

With Python!

OK, so find a Python tutorial somewhere before trying to solve puzzles. You have to read the input, then print your answer on the standard output, in order to destroy a mountain without being killed.
Hello, I am a beginner at Java and I was trying to solve this puzzle. I got past the descending mountains. However, once I get to the scattered mountains, I am clueless. I tried the provided solution, but it doesn't work either. This is what I have:

    int hmax = 0;
    int imax = 0;
    // game loop
    while (true) {
        for (int i = 0; i < 8; i++) {
            int mountainH = in.nextInt(); // represents the height of one mountain.
            if (mountainH > hmax) {
                hmax =- mountainH;
                imax =+ i;
            }

The solution does not have hmax =- mountainH. When I try it without the subtraction sign, not even the descending mountains test case works. Any help would be appreciated!

You need to reset hmax and imax each turn.

Can you please explain? I've tried looking up what you mean and come up empty handed.

You define imax and hmax ONE time, for the first turn. So after reading the inputs, you set imax/hmax to the biggest mountain and fire at it properly. Since you don't reset imax/hmax after firing (or before reading new input), you'll keep firing at the same mountain even if it's not the biggest mountain anymore.

Terrible explanation of input and terrible explanation of what you have to do.

Hi, so I have a few questions on the first puzzle in C++. It goes "for i = 0, i < 8, i++" - why isn't i initialized, and what is i++? I get that i is somehow the mountain that I am currently getting a cin for the height of, but... what? Why can't I initialize the variables for the highest mountain and which mountain to fire on outside of the while loop? This is what I had:

    using System;
    using System.Linq;
    using System.IO;
    using System.Text;
    using System.Collections;
    using System.Collections.Generic;

    class Player
    {
        static void Main(string[] args)
        {
            int HighestMountainHeight = 0;
            int MountainToFire = 0;
            while (true)
            {
                for (int i = 0; i < 8; i++)
                {
                    int mountainH = int.Parse(Console.ReadLine()); // represents the height of one mountain.
                    if (mountainH > HighestMountainHeight)
                    {
                        HighestMountainHeight = mountainH;
                        MountainToFire = i;
                    }
                }
                Console.WriteLine(MountainToFire); // The index of the mountain to fire on.
            }
        }
    }

Your two init lines are not well placed. On the first iteration of the loop, you find the highest mountain, right? Once you've found it, you save the height. On the second iteration, all mountains have the same height as during the first iteration but one (the previous highest one, which you shot). So, what happens with this condition: if (mountainH > HighestMountainHeight)? It's never true, because no current mountain is higher than the previous highest mountain. Do you understand the issue?
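To make the "reset each turn" advice concrete, here is a small language-agnostic sketch in Python (mine, not from the thread; the real puzzle reads the eight heights from standard input each turn, which is simulated here with a plain list):

```python
def mountain_to_fire(heights):
    """Return the index of the highest mountain for ONE game turn.

    The trackers are (re)initialized every time the function runs,
    which is exactly the per-turn reset the replies above describe.
    """
    hmax = -1  # reset each turn, not once before the game loop
    imax = 0
    for i, h in enumerate(heights):
        if h > hmax:  # plain assignments below, not the =- / =+ typos
            hmax = h
            imax = i
    return imax

# Simulate a few turns: firing reduces the chosen mountain to 0.
heights = [9, 8, 0, 7, 0, 0, 0, 6]
order = []
while any(heights):
    target = mountain_to_fire(heights)
    order.append(target)
    heights[target] = 0

print(order)  # fires highest-first: [0, 1, 3, 7]
```

Because hmax and imax live inside the per-turn function, the bug from the quoted code (keeping stale values across turns) cannot happen.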
http://forum.codingame.com/t/the-descent-puzzle-discussion/1332?page=7
This article is going to be using some of the new features available with the new Visual Studio 2008 Beta 2 Professional edition. Download it now if you haven't already. I have been using it for almost 3 months now, and I have not looked back. I now use it for all my HTML and CSS editing, including writing this very article. Not only does it have a host of new features, but you get performance improvements too, and even though it is still a beta version, I have had absolutely no problems or crashes whilst using it. I am not going to explain what some of the new features are, but rather leave it to Scott Guthrie and let his blog posts do the talking.

One of the features that caught my eye straight away was Extension Methods. Extension methods "allow developers to add new methods to the public contract of an existing CLR type, without having to sub-class it or recompile the original type." They also make other powerful features, like LINQ, possible. I also then realized that it is now possible to "add" functionality to the existing framework classes, for example, the System.String class. I have always had a class of static string utility methods that I use in all my projects and applications. It is a part of a utils class library that I have built and grown over the years while using .NET. So this was the perfect opportunity to rewrite my utilities class library using version 3.5 and take advantage of (and learn) the new features. I have already started doing this, and I am blogging about it as I go.

An extremely simple (and useless!) example of an Extension Method can be seen in the following code:

    namespace Utils.MyExtensions
    {
        public static class JunkExtensions
        {
            public static string ToSomethingElse(this string input)
            {
                return input.PadLeft(10);
            }
        }
    }

Please note that your Extension Methods must be defined inside a non-generic static class.
Now, to use the Extension Method, include the namespace in your code:

    using Utils.MyExtensions;

...and your new Extension Method is now available to you, as you can see in the intellisense:

I will be the first to admit that Extension Methods can be both useful and powerful, but when is enough enough? Scott Guthrie says in his post: "As with any extensibility mechanism, I'd really caution about not going overboard creating new extension methods to begin with. Just because you have a shiny new hammer doesn't mean that everything in the world has suddenly become a nail!" You couldn't say it any better than that. Listen to Scott - he knows best.

You must not convert all your static utility methods to Extension Methods. What is the point? Do you really want to see a string class that has hundreds of methods available, but you only use 10% of them most of the time? Obviously not. You should only convert your most reusable, everyday static methods. Don't just create an Extension Method because you can; create one because you know it will help you code smarter. I create Extension Methods that, I think, improve and extend the framework, and that I feel should also be part of the framework. When I find myself asking the question: "But why didn't they include this method in the framework?", a new Extension Method usually follows shortly after.

Also, make your Extension Methods just wrappers around your utility methods. This way, the utility code and the Extension Method code can be kept separate, so the developer has the choice to use the Extension Methods or not.
This can easily be explained with an example utility method:

    public static string CutEnd(string input, int length)
    {
        if (IsEmpty(input))
            return input;

        if (input.Length <= length)
            return string.Empty;

        return input.Substring(0, input.Length - length);
    }

...and here is my Extension Method:

    public static string CutEnd(this string input, int length)
    {
        return Utils.Strings.CutEnd(input, length);
    }

This gives the developer the freedom he/she wants, whereby either the Extension Method or the static method can be used. Also, only one set of tests needs to be written for both methods.

Now, I would have thought that creating an Extension Method that checks for nulls is pretty pointless. My thinking was: "how could I check for null if the object is null - won't I get a NullReferenceException?" Well, I tested it, and apparently no - you won't get the exception. Here's the Extension Method:

    public static bool IsNullOrEmpty(this string input)
    {
        if (input == null)
            return true;

        return input.Length == 0;
    }

Here's some code to test it:

    string str = null;
    if (!str.IsNullOrEmpty())
    {
        Console.Write(str);
    }

This code runs fine - no problems! Very strange indeed, but at the same time, useful. (It works because an extension method call compiles down to an ordinary static method call, so no instance is ever dereferenced.) More can be read about this at.

When developing any code that will be reused again and again (including Extension Methods), writing tests for that code becomes a necessity. This is to ensure the code is both reliable and bug free. When creating a reusable framework that is reused on every project and is also likely to change a number of times in its lifetime, testing becomes an absolute requirement! The only smart way to utilize these types of tests is by writing Unit Tests. Unit tests allow you to keep a set of tests that you can run at any time, over and over again, to check all use-case scenarios. So, when you make a change to a utility method, run the unit tests for that method to make sure it still does what you expect.
There are many unit testing frameworks out there, including NUnit, which is an Open Source framework written in C#. You can also purchase and/or download add-ins for Visual Studio to allow for unit testing within the IDE, but these are no longer required. There is now built-in Unit Testing functionality in VS2008 Pro. Woohoo! With a few simple clicks of the mouse, you can create unit tests for code that you have written. Let's do that for our Extension Method:

It's that simple! We have now created a new project called UtilsTests that contains all our tests for the Extension Methods we just created:

Now all you need to do is customize the tests to suit your needs. Here is the generated test code:

    /// <summary>
    ///A test for ToSomethingElse
    ///</summary>
    [TestMethod()]
    public void ToSomethingElseTest()
    {
        string input = string.Empty; // TODO: Initialize to an appropriate value
        string expected = string.Empty; // TODO: Initialize to an appropriate value
        string actual;
        actual = JunkExtensions.ToSomethingElse(input);
        Assert.AreEqual(expected, actual);
        Assert.Inconclusive("Verify the correctness of this test method.");
    }

As you can see, it intelligently looks at the input parameters of the method and generates the test for you. All you need to do now is change the input and the expected output, then remove the Assert.Inconclusive line, add a few tweaks, and you have a working unit test. This is what the test should look like now:

    [TestMethod()]
    public void ToSomethingElseTest()
    {
        string input = "abc123";
        string expected = "    abc123";
        string actual = input.ToSomethingElse();
        Assert.AreEqual(expected, actual);
    }

Obviously, a unit test is only as good as the coder who wrote it, so make sure your unit tests are extensive. Actually, be more extensive than you usually would, and take into account even the simplest cases. In fact, be paranoid when you write your unit tests. This way, you will be sure your tests cover every angle.
Now run your tests by clicking Test > Run > All Tests in Solution:

...and see the result in VS2008:

As you can see from the screenshot, not all our tests passed. We now need to delve into the code and see why. So, like you would normally do as a VS developer, place a breakpoint into your failed test code. But this time, instead of just running your tests, choose to run them in Debug mode by clicking Test > Debug > All Tests in Solution:

Now you can step through your code and see why the test is failing:

So we have run through a very simple example of what Extension Methods are and how to write unit tests to test them. I hope this article has created some interest in the new features available in VS2008, as they have dramatically helped me code smarter. I use the unit test features daily with my new utilities project written for the .NET Framework 3.5. For example, every day I seem to change or add functionality to my string utils class, and what better way to know and trust that my changes work than by having a set of unit tests that I can run at the click of a button.

Go check out my String Utils class written in .NET 3.5 with a few handy Extension Methods. There you can download the source, which includes the full unit tests for the methods.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

    public static class FrameworkExtensions
    {
        public static string CutEnd(this string input, int length)
        {
            return Utils.Strings.CutEnd(input, length);
        }
    }

    string x = "xyz";

    // Use extension as utility static method
    FrameworkExtensions.CutEnd(x, 2);

    // Use as extension method
    x.CutEnd(2);
http://www.codeproject.com/script/Articles/View.aspx?aid=21173
I read recently that in just a couple of years over a third of all development will be cloud based. That's a big shift. So I started to look into some of the technologies and what it takes to develop applications for the cloud. Now to be clear, I think there might be some misconceptions regarding what a cloud application is. I'm not referring to web server based applications. The development that I'm talking about would be applications for platforms like Azure. The only way that I know to learn is by doing; experience is definitely my best teacher. Unfortunately, the day job rarely provides the diversity in challenges or the opportunities in creativity to explore all the new technologies. So for me, an alternative is to always think like an entrepreneur. I look at the new technologies and think of new businesses or ventures that could be started using them. In examining this new paradigm, I find that there is an overwhelming number of potential opportunities. I say that because I don't think the change is going to be simply moving existing applications to a new environment. Instead, I think the technologies, tools, and infrastructure are all in place for a whole new wave of solutions that were previously not feasible. We may argue, but I believe a key player in these new solutions will be the device we all carry around with us, the cell phone. Actually, I don't think we should call them cell phones any more, because they are really 'wireless personal computers' that happen to also allow you to make phone calls. These devices have power and capabilities that surpass those of a laptop of just a couple of years ago. Given the availability of broadband speed to support these devices, the sky is the limit as far as application capability goes. And I'm not just talking about cute pastime applets or games; I'm talking about real world business solutions.
So the primary goal here is to learn what it takes to develop applications for Azure, but not simply the mechanics of packaging and deploying an application. Specifically, I want to explore using Silverlight on both desktop and phone (as a single solution) deployed on Azure. But this goal is still too broad for the purpose of conceptualizing specific applications. So to narrow things down, we'll bring in Bing Maps as the central theme in exploring solutions for the cloud. You'd be amazed at how many usages can be devised besides the typical traveling directions. Since Bing Maps will form the core of the exploration, we'll start by getting acquainted with that service and the associated Silverlight control in this article. In the next article, I'll describe some cloud application concepts centered around Bing Maps. At the same time, we'll expand on the map functionality presented in this article to highlight some functionality of the concepts. And in the third article, we'll cover a complete solution for the cloud, again utilizing Bing Maps. One final introductory note: since this is a learning exercise for me, the prose is more along the lines of a tutorial, in a step by step fashion. The student is really me, but you are invited to follow along.

In order for you to get access to the map services provided by Microsoft, you need to sign up and get an account. You can sign up for a developer account at. Once you sign up, you'll be able to get a Bing Maps key, which you can then use to get free access to the service. All the information you will need is provided at the above link. We are also going to make extensive use of the Silverlight Map control, so you will need to download that also. You can find the SDK and associated instructions here:. Now that we're done with the external pre-requisites, let's see how we can apply them. Start by creating a basic Silverlight application, name it SLMapTest, and accept the default ASP web hosting.
To start with, since we want to use the Map control, you'll need to add a reference for the DLLs to the project (see below). You should find them in the Program Files folder under Bing Maps Silverlight Control-Libraries. Now, add the map control to your page, along with the appropriate namespace, as shown below. Note that you will have to insert your own Bing Maps credentials key (in place of 'Your Key') in order to get access to the service.

    xmlns:m="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"
    ...
    <Grid x:Name="LayoutRoot">
        <m:Map x:Name="bingMap" CredentialsProvider="Your Key">
        </m:Map>
    </Grid>
    ...

Compile and run. You have just provided your user with a fully interactive map. The user can scroll through the world map, change view modes, and zoom in to any desired area. Not bad for essentially no work on our part. Although that's not very impressive from the user's perspective (they can do that with Bing), it does prove that we have all the necessary pieces hooked up correctly. Now let's add some basic functionality that will allow the user to enter a desired location/address and then have the system display a close up view of the area if it was found. Modify the page as shown below:

    <m:Map x:Name="bingMap" CredentialsProvider="Your Key">
        <TextBox Height="23" HorizontalAlignment="Right" Margin="0,26,50,0"
                 Name="textAddress" VerticalAlignment="Top" Width="120">
        </TextBox>
        <Button Content="Find" Height="23" HorizontalAlignment="Right" Margin="0,26,0,0"
                Name="buttonFind" VerticalAlignment="Top" Width="44"
                Click="buttonFind_Click">
        </Button>
        <TextBlock Height="23" HorizontalAlignment="Right" Name="textResult"
                   VerticalAlignment="Top" Width="166"/>
    </m:Map>

All we've done is to add a textbox for user entry, a button to invoke the request, and a TextBlock to display any abnormal results.
Here is what it will look like after we've hooked things up and the user has entered a location:

You have programmatic control of the map control through the methods and properties that it exposes (the documentation provided with the SDK has a full description). The method we are interested in is SetView, which centers the map on a set of coordinates that corresponds to the address of the location the user entered. Unfortunately, the map control does not have a method which will accept a street address, only geo coordinates (latitude/longitude). That means that we have to perform the translation between a location address (as a string) and geo coordinates. Fortunately for us, Microsoft makes available a service which provides that functionality.

To continue with the project, we'll need to set up a proxy to the Bing Geocode service, which will provide the translation we need. Right click on the Solution Explorer reference node and select "Add Service Reference...". Type in the following for the address:. Click GO to retrieve the file. Once the file is found and loaded, you should see the contract definition for the service in the left pane. Before you hit OK to have the wizard generate the proxy, make two additional changes. First, change the default namespace to something more descriptive, like "BingGeocodeService" or whatever you'd like. Second, the proxy class that gets generated by default brings in some type definitions that will clash with types already defined in the map control. To eliminate this, click on the Advanced button and then select "Reuse types in specified reference assemblies". Proceed by selecting the two map assemblies. This will eliminate those definitions from the generated proxy namespace. Finally, click OK to generate the proxy.
In order for us to make use of the service, add a member variable for the proxy in the code-behind file, as shown below (don't forget the namespace):

    #region private property

    private BingGeocodeService.GeocodeServiceClient geocodeClient;

    private BingGeocodeService.GeocodeServiceClient GeocodeClient
    {
        get
        {
            if (null == geocodeClient)
            {
                BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None);
                UriBuilder serviceUri = new UriBuilder(
                    "" + "v1/GeocodeService/GeocodeService.svc");

                // Create the Service Client
                geocodeClient = new BingGeocodeService.GeocodeServiceClient(
                    binding, new EndpointAddress(serviceUri.Uri));
                geocodeClient.GeocodeCompleted +=
                    new EventHandler<BingGeocodeService.GeocodeCompletedEventArgs>(GeocodeCompleted);
            }
            return geocodeClient;
        }
    }

    #endregion

We are ready to start hooking things together. Create an event handler for the button Click event. This will occur when the user presses the Find button. In that handler, we'll add some code that will use our newly created proxy to make a call to the Bing Geocode service. The contract exposes a method, appropriately named GeocodeAsync, which can take an address string (in the form of a query) and return, through a callback, the latitude/longitude values for the location. We will then, in turn, pass those values to the map control to center the view around the coordinates.

Since the call to the service is an asynchronous call, we created an event handler (GeocodeCompleted) for the reply when the endpoint was being instantiated (see above). In the button click event handler, we grab the address string the user entered and pass it to a helper method, GeocodeAddress, where the call to the service is made. The following code shows the three methods:

    private void buttonFind_Click(object sender, RoutedEventArgs e)
    {
        // Make sure there's something there
        if (textAddress.Text.Length > 0)
        {
            GeocodeAddress(textAddress.Text);
        }
    }
    private void GeocodeAddress(string address)
    {
        // Pack up a request
        BingGeocodeService.GeocodeRequest request = new BingGeocodeService.GeocodeRequest();
        request.Query = address; // what the user entered

        // Don't raise exceptions.
        request.ExecutionOptions = new BingGeocodeService.ExecutionOptions();
        request.ExecutionOptions.SuppressFaults = true;

        // Only accept results with high confidence.
        request.Options = new BingGeocodeService.GeocodeOptions();

        // ObservableCollection is the default for Silverlight proxy generation.
        request.Options.Filters =
            new ObservableCollection<BingGeocodeService.FilterBase>();
        BingGeocodeService.ConfidenceFilter filter = new BingGeocodeService.ConfidenceFilter();
        filter.MinimumConfidence = BingGeocodeService.Confidence.High;
        request.Options.Filters.Add(filter);

        // Need to add your key here
        request.Credentials = new Credentials();
        request.Credentials.ApplicationId = (string)App.Current.Resources["BingCredentialsKey"];

        // Make the call
        GeocodeClient.GeocodeAsync(request);
    }

    private void GeocodeCompleted(object sender, BingGeocodeService.GeocodeCompletedEventArgs e)
    {
        string callResult = "";
        try
        {
            // Was the service able to parse it? And did it return anything?
            if (e.Result.ResponseSummary.StatusCode != BingGeocodeService.ResponseStatusCode.Success ||
                e.Result.Results.Count == 0)
            {
                callResult = "Could not find address.";
            }
            else
            {
                // Center and zoom the map to the location of the item.
                bingMap.SetView(e.Result.Results[0].Locations[0], 18);
            }
        }
        catch
        {
            callResult = "Error processing request.";
        }
        textResult.Text = callResult;
    }

Try it. Compile and run the application to make sure you can access the geocode service. Yeah, I know, you can just go to Bing Maps and get a lot more information returned. But the plan is not to provide that type of service, but instead to make use of what's readily available in new ways. At this point, we are still just checking the plumbing.

But before we start on the road of exploring new functionality that we can create using the map control, I think I'd like to take a little detour into the land of MVVMness. Primarily because once we start adding more code, things will get pretty messy if we continue with our current approach. And secondarily because...
But before we start on the road of exploring new functionality that we can create using the map control, I think I'd like to take a little detour into the land of MVVMness. Primarily because once we start adding more code, things will get pretty messy if we continue with our current approach. And secondarily because... There is a lot to be said about MVVM. And in fact, a simple search will overwhelm you with all that is being said. How can there be so much diversity in opinions, and approaches to a pattern? A pattern by definition is supposed to do the opposite, that is, provide a simple, easy to understand approach to solving a problem. A pattern is a communication mechanism. In my opinion, we're missing the forest for the trees. If we step back and look at it from another perspective, perhaps it may become clearer (or not). I think, as a community, we've come to a realization of what we want (and need) and have labeled it with the 'umbrella' term MVVM. MVVM is really a collection of techniques (some available through the framework, others improvised) that provide us the capability to satisfy the goals of good design: separated presentation; separation of concerns; testability; etc. That's really too much to be defined as a pattern. Martin Fowler writing about MVC here states: "It's often referred to as a pattern, but I don't find it terribly useful to think of it as a pattern because it contains quite a few different ideas." MVVM, being a variation of MVC, would fall under the same critique. And in this blog on MVVM, John Grosman writes: "My team has been using the pattern feverishly for a couple of years, and we still don't have a clean definition of the term ViewModel" (note, emphasis mine). "One problem is (that) ViewModel actually combines several separate, orthogonal concepts: view state, value conversion, commands for performing operations on the model." 
I think each of the 'orthogonal concepts' mentioned above could by itself be described by its own pattern. And not only is it too much stuff to describe as a simple pattern, the mechanisms to implement those 'orthogonal concepts' have not been there (they have been dribbling in over time). In addition, the support that has been added has varied between the platforms! That should be enough to explain why there is so much confusion surrounding MVVM, but there is more in my opinion. If an application consisted simply of a single View and its associated ViewModel/Model, then things could be OK (after you figure out how to implement the 'orthogonal concepts'). However, rarely are things that simple. In even the simplest applications, there will be several Views. In fact, simply displaying a dialog constitutes a separate View. Typically, you'll have several Views which will undoubtedly have some state/data linkage to each other. Buying into the separation of concerns requires that there be some kind of messaging infrastructure available to the ViewModels. This is one additional hole that exists in the support required to implement MVVMness. Fortunately, this gap can be filled through the use of various community toolkits and libraries. The dilemma with this approach is that the lifetime of your application may be longer than the support from the community. And Microsoft may have other plans which may not be completely in line with the library you selected. So hopefully, a messaging mechanism will be incorporated into future releases of the .NET Framework (and a dependency injection facility would be nice too). Yeah, I know. Most readers won't relate. Anyway, within the context of our sample application, the break up is going to be facilitated by using the MVVM Light Toolkit, which you can find here. 
The toolkit provides several 'facilities' that assist in pursuing MVVMness, including: a mechanism to tie the View and the ViewModel together; an implementation of ICommand; and a nice messaging mechanism. And it also integrates nicely with Visual Studio and Blend. There are other toolkits and libraries available; I just happened to use this one for this application.

Once you've installed the MVVM Light Toolkit, the first thing that's needed to continue with our project is to add references to the GalaSoft MvvmLight library DLLs. There are separate libraries for the different platforms, so pull in the ones for Silverlight 4 for this project. Then create a folder in the project, call it ViewModel, and add a MvvmViewModel(SL) as MainViewModel, and a MvvmViewModelLocator(SL) as ViewModelLocator, to the new folder. These templates were added as part of the MvvmLight installation. The MvvmLight template classes provide commented instructions within them that indicate what's needed. But we'll go through the steps here. First, add the ViewModelLocator to the application resources (in App.xaml) as shown below:

    <Application x:Class="SLMapTest.App"
                 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                 xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                 xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                 xmlns:sys="clr-namespace:System;assembly=mscorlib"
                 xmlns:vm="clr-namespace:SLMapTest.ViewModel"
                 mc:Ignorable="d">
        <Application.Resources>
            <!--Global View Model Locator-->
            <vm:ViewModelLocator x:Key="Locator" d:IsDataSource="True" />
        </Application.Resources>
    </Application>

Separated presentation in MVVMness means that the View and its associated ViewModel can be designed independently. However, at some point, they need to be introduced to each other. One mechanism to accomplish this is provided by the aptly named ViewModelLocator. It is essentially a global lookup for a View:ViewModel association. So we'll continue by adding an entry in the ViewModelLocator for the new ViewModel class we created above, MainViewModel.
You can do this with a code snippet provided with the MvvmLight installation and described in the ViewModelLocator template comments. The end result is that the MainViewModel is added as a bindable property to the locator class.

Now we can bind the MainPage view to the associated MainViewModel class by modifying MainPage.xaml as follows:

    ...
    xmlns:m="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"
    mc:Ignorable="d"
    DataContext="{Binding Main, Source={StaticResource Locator}}"
    ...

If you compile and run the application, you may notice that nothing has really changed. But if you place a breakpoint in the MainViewModel class constructor, you will see that it is indeed being created, so now we just need to make use of it.

So the whole idea behind MVVMness is to have as little as possible (or nothing) in the code-behind file, so that the View can be designed independently from the business logic (starting with the ViewModel), and so that the business logic can also be tested independently from the View, thus requiring no user interaction. However, I think that the former should be a goal, not a mandate. If you have to complicate the application to an extreme simply to eliminate something from the code-behind, stop and think about it. Is the result worth the effort or complexity created?

Continuing with our project, in order for the MainViewModel to support MainPage (the View), we need to add some binding properties. Specifically, we need support for the button and the two text boxes. The code below shows the class ready for binding with the View:

    public class MainViewModel : ViewModelBase
    {
        RelayCommand findButton;
        string findString;
        string resultString;

        /// <summary>
        /// Initializes a new instance of the MainViewModel class.
/// </summary> public MainViewModel() { findButton = new RelayCommand(FindButtonClick); } #region Commanding public ICommand FindButtonCommand { get { return findButton; } } void FindButtonClick() { } #endregion #region data context public string FindString { get { return findString; } set { findString = value; } } public string ResultString { get { return resultString; } set { resultString = value; } } #endregion } If you bring up the properties window for the controls, you'll be able to do all of the bindings interactively; it beats typing. But make sure you compile first, otherwise the bindable properties won't show up. Here is how the binding was defined: ... <TextBox Height="23" HorizontalAlignment="Right" Margin="0,26,50,0" Name="textAddress" VerticalAlignment="Top" Width="120" Text="{Binding Path=FindString, Mode=TwoWay}"> </TextBox> <Button Content="Find" Height="23" HorizontalAlignment="Right" Margin="0,26,0,0" Name="buttonFind" VerticalAlignment="Top" Width="44" Command="{Binding Path=FindButtonCommand}"> </Button> <TextBlock Height="23" HorizontalAlignment="Right" Name="textResult" VerticalAlignment="Top" Width="166" Text="{Binding Path=ResultString}" Foreground="#FFEA1818" /> ... Now we can move the remaining code that we had in the code-behind file for MainPage to its ViewModel. You can peruse through the complete code in the downloadable project. There's only one hiccup at this point (there will be many others) in moving the code out of the code-behind file. The MainViewModel class does not know about the map control. And it shouldn't. The map control is controlled through method calls, so there is no easy binding mechanism that allows access to all the functionality we'll need. I did some searching to see if there was a quick answer to the problem, but did not come up with anything. I also know that I intend to add some more interaction than is typical for a map viewer, so I don't want to paint myself into a corner from the beginning. 
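One detail worth calling out: for the View to pick up changes to ResultString, the ViewModel must raise a change notification when the property is set (in MVVM Light, ViewModelBase exposes a RaisePropertyChanged helper for this). The mechanics behind that contract can be sketched in a few lines of Python; the names here are illustrative stand-ins, not the Silverlight INotifyPropertyChanged API itself.

```python
# Sketch of the change-notification contract behind data binding
# (the role INotifyPropertyChanged plays in Silverlight; names illustrative).

class BindableBase:
    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        # The binding engine subscribes to property-change events.
        self._handlers.append(handler)

    def raise_property_changed(self, name):
        for handler in self._handlers:
            handler(name)

class MainViewModel(BindableBase):
    def __init__(self):
        super().__init__()
        self._result_string = ""

    @property
    def result_string(self):
        return self._result_string

    @result_string.setter
    def result_string(self, value):
        self._result_string = value
        # Without this notification, the View never re-reads the value.
        self.raise_property_changed("result_string")

vm = MainViewModel()
updates = []
vm.subscribe(updates.append)        # stand-in for the {Binding} subscription
vm.result_string = "Address found"
assert updates == ["result_string"]
```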
If I later find a way to move the functionality to the ViewModel and make it cleaner, I'll do it then. For now, we'll leave some of that interaction in the code-behind file for MainPage.

So how can the MainViewModel handle the button click but have the View react to the event, without the MainViewModel having a linkage to the View? The answer is to use messages. Included with the MVVM Light Toolkit is a nice generic messaging infrastructure. It's essentially a publication service that allows subscribers to 'register' to receive a message and publishers to 'send' (publish) a message. It is very similar to the EventAggregator available in Prism.

In our situation, we can have the MainPage (View) register to be notified when the MainViewModel processes the button click event. The following code shows the change in both classes. First, a definition of a SetMapViewMsg message:

```csharp
namespace SLMapTest.Messages
{
    public struct SetMapViewMsg
    {
        public Location CenterLocation;
    }
}
```

The message contains the latitude/longitude values for the translated address the user entered, as a Location property. The MainPage (View) subscribes to the message, since it knows what to do with the values, that is, pass them to the map control. And the MainViewModel publishes the message, since it has access to the service which does the translation. Here's how the subscription is done:

```csharp
public MainPage()
{
    InitializeComponent();

    //Subscribe to message, we know what to do with user input
    Messenger.Default.Register<SetMapViewMsg>(this, SetViewHandler);
}

#region message handler

void SetViewHandler(SetMapViewMsg msgData)
{
    bingMap.SetView(msgData.CenterLocation, 18);
}

#endregion
```

And here is the revised GeocodeCompleted in MainViewModel to implement the publication:

```csharp
...
else
{
    //What are we getting back?
    Location geoLocation = new Location(
        e.Result.Results[0].Locations[0].Latitude,
        e.Result.Results[0].Locations[0].Longitude);

    // Zoom the map to the desired location
    Messenger.Default.Send<SetMapViewMsg>(
        new SetMapViewMsg() { CenterLocation = geoLocation });
}
...
```

Now if we compile and run the application, we should be back to where we were, but with a better foundation on which to add the functionality to come. But before we start on that, there is just one annoying (to me) part of the UI that needs to be addressed. When the user enters an address in the textbox, the most natural action is to terminate it with the Enter key. But as you can see if you run the app, nothing happens: the user needs to click the button in order to process the entry. That's annoying, so I think we need to address it.

The result we want is that when the Enter key is pressed in the textbox, the system behaves the same way as if the Find button was pressed. It's just more intuitive. To do this, we need to inspect each character as it is entered, so we know when the Enter key comes through. Normally, you can do this by implementing a handler for the KeyUp event. But since we are pursuing MVVMness, we can achieve that functionality using behaviors, and specifically in this solution, we can make use of the EventToCommand behavior provided with the MVVM Light Toolkit. The cool feature here is that it allows us to pass the event arguments, which we need. Here is the change to the MainPage XAML. It essentially generates a method call to KeyUpCommand, passing the event parameters to the method, whenever the KeyUp event is raised on the textbox.
```xml
...
<TextBox Height="23" HorizontalAlignment="Right" Margin="0,26,50,0"
         Name="textAddress" VerticalAlignment="Top" Width="120"
         KeyUp="textAddress_KeyUp"
         Text="{Binding Path=FindString, Mode=TwoWay}">
    <i:Interaction.Triggers>
        <i:EventTrigger EventName="KeyUp">
            <cmd:EventToCommand Command="{Binding KeyUpCommand}"
                                PassEventArgsToCommand="True" />
        </i:EventTrigger>
    </i:Interaction.Triggers>
</TextBox>
...
```

As you can see, we are setting a trigger on the KeyUp event which will call the method (delegate) defined by the KeyUpCommand, and we are passing the event args to the method. So in the MainViewModel, we have to create the RelayCommand (which implements ICommand) property and define the method to be called for each KeyUp event. Here's the code in summary form (you can see the completed code in the downloadable project):

```csharp
...
RelayCommand<KeyEventArgs> keyUpCommand;

/// <summary>
/// Initializes a new instance of the MainViewModel class.
/// </summary>
public MainViewModel()
{
    findButton = new RelayCommand(FindButtonClick);

    //We want to know when user presses ENTER
    //in the find address text field
    keyUpCommand = new RelayCommand<KeyEventArgs>(TextFieldCharEntry);
}
...
public ICommand KeyUpCommand
{
    get { return keyUpCommand; }
}

void TextFieldCharEntry(KeyEventArgs e)
{
    //If it's ENTER key...
    if (e.Key.Equals(Key.Enter))
    {
        if (findAddress != null && findAddress.Length > 0)
        {
            GeocodeAddress(findAddress);
        }
    }
}
...
```

Pretty straightforward on this side, right? So go ahead and compile and run the application. Enter an address and press the Enter key in the textbox to verify you don't need to use the Find button. If you were following along, you will have noticed that nothing happened! So let's see if we're at least getting the keystroke events. Place a breakpoint inside the block that checks for the Enter key, shown in the code block above, and run it again. You'll note that the characters are indeed being passed and the Enter key is being detected. But if you examine the findAddress property, you'll notice that it is null!
And that's why the code is not being executed. The problem is that the textbox does not perform the binding update until it loses (input) focus. That makes sense, but it causes us a problem, as you can see. It is also why it works when the button is pressed (the focus changes to the button).

Since we are getting each character as it is entered, we could just update the local variable (for the address) at the same time we are checking for the Enter key. But there is another way that may be more appropriate and general, or just maybe more elegant. It also demonstrates that there can be code in the code-behind file which is completely appropriate and does not detract from MVVMness; I don't think it takes anything away from the testability of the ViewModel. The solution is to add a KeyUp event handler to the MainPage in the code-behind file. Then, in the handler, we can force the binding update when the Enter key is detected. Here's the code for the handler:

```csharp
...
private void textAddress_KeyUp(object sender, KeyEventArgs e)
{
    if (e.Key == System.Windows.Input.Key.Enter)
    {
        //We need to force the update manually...
        var binding = textAddress.GetBindingExpression(TextBox.TextProperty);
        binding.UpdateSource();
    }

    base.OnKeyUp(e);
}
...
```

That'll do it. Now compile and run the application. When the user presses the Enter key, the system responds in the same manner as pressing the Find button.

Now we are ready to start adding some new map-related functionality to the application; we've got a good foundation to build on. We'll be looking at the map control with the goal of seeing how it could be used to improve typical business processes, as well as how it might be used to create new and novel solutions. Even though Bing Maps is the visual glue, the final goal is to tie the triumvirate of Silverlight, Azure, and Phone into a complete solution; that's my ultimate learning goal.
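Before moving on, the Messenger pattern we leaned on above is worth distilling. A minimal publish/subscribe registry can be sketched in a few lines of Python; this is illustrative only, since MVVM Light's real Messenger also handles weak references, derived message types, and registration tokens.

```python
# Minimal publish/subscribe sketch of the Messenger idea used above.
# Illustrative names only; not MVVM Light's actual implementation.

class Messenger:
    def __init__(self):
        self._recipients = {}  # message type -> list of handlers

    def register(self, message_type, handler):
        self._recipients.setdefault(message_type, []).append(handler)

    def send(self, message):
        for handler in self._recipients.get(type(message), []):
            handler(message)

class SetMapViewMsg:
    def __init__(self, center_location):
        self.center_location = center_location

messenger = Messenger()
moves = []

# The View registers: it owns the map control and knows how to move it.
messenger.register(SetMapViewMsg, lambda m: moves.append(m.center_location))

# The ViewModel publishes once geocoding succeeds; it never touches the View.
messenger.send(SetMapViewMsg((47.6097, -122.3331)))
assert moves == [(47.6097, -122.3331)]
```

The point of the indirection is exactly what the article describes: publisher and subscriber share only the message type, so the ViewModel stays free of any reference to the View.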
So in part 2, we'll continue by learning how to draw on the map, and we'll do reverse geocoding, routing, and other cool stuff. To implement some of that, we'll make use of some other Bing Maps services. That's as far as I can see from this vantage point. I'm sure there are going to be some surprises and challenges that I can't foresee, and that's OK, because that's going to add to the fun of it.
http://www.codeproject.com/Articles/141606/Exploring-Opportunites-for-The-Cloud-Part-1/?fid=1603245&df=90&mpp=10&sort=Position&tid=3720539
Talk:Proposed features/shrubbery

Topiary

One of the suggestions we received was to add a way to mark shrubberies pruned and clipped into aesthetically pleasing shapes. For this purpose we have added shrubbery:topiary=* to the proposal. If this tag name is confusing or unclear, please let us know. --JeroenHoek (talk) 10:52, 9 May 2021 (UTC)

- Topiary seems to be the technical term, but maybe shrubbery:shape is easier, especially for non-native speakers. For roofs, roof:shape is used. --Lkw (talk) 16:31, 9 May 2021 (UTC)
- Noted. shrubbery:shape might be a good alternative. --JeroenHoek (talk) 07:59, 12 May 2021 (UTC)

Images in sample section

As far as I remember, you have received some more comments, e.g. to use a term for the hodgepodge (AE) that you want to map this under that is not confusing to native speakers of BE, the lingua franca in OSM. Are you going to address those? --Hungerburg (talk) 23:42, 9 May 2021 (UTC)

- Not wanting to sound rude; I hope the above did not come out like that. Meanwhile I learned that the scope of the previous proposal got limited, but the pictures remained the same, and I have a hard time telling from them what the shrubbery is. Please annotate the pictures graphically, indicating the bounds projected to flat azimuth, e.g. with yellow marker lines. From a German forum thread hint on the reduction, I conclude that some of the photos currently under the "samples" section would be better placed under the "does not apply" section. --Hungerburg (talk) 21:40, 11 May 2021 (UTC)
- Thanks for the tip. We'll review the images and work on making that section clearer. I think it could be improved to highlight the reduced scope and clarify the boundary between the tags. --JeroenHoek (talk) 07:56, 12 May 2021 (UTC)

Hungerburg, please don't abbreviate. What is AE and what is BE? --Kogacarlo (talk) 14:02, 13 May 2021 (UTC)

- American English and British English, I suspect.
--JeroenHoek (talk) 14:41, 13 May 2021 (UTC)

Keep it simple

You propose a new value for the natural key, to tag patches of trimmed shrubs with a decorative/barrier/filler function. Shrubbery looks like an appropriate collectivum for a patch of shrubs, so natural=shrubbery should be fine. As for density, just tag density=*. It's overly complicated and redundant (and thus a risk for the proposal) to include the object definer in the dimension qualifier. You tag shrubbery, then the density is the density of the shrubbery, and the recommended values are sparse, medium, and dense. There is no need for a kind of reverse namespacing, where you namespace main keys just to subdivide the value set of a secondary attribute. As for the examples, I know many patches of shrubbery that look like some of your examples of not-shrubbery-but-scrub, but are in fact intentionally trimmed in a fashionable "wild" look. --Peter Elderson (talk) 16:43, 14 May 2021 (UTC)

- @Pelderson: Do you have example images for us so that we can improve? You can also send them to me via an OSM message. --Cartographer10 (talk) 16:01, 2 June 2021 (UTC)

What about these? Would these patches be another type of scrub, another type of shrubbery, or another type of natural?

- These shrubs would be shrubbery for sure, and not scrub. They are there for decorative and space-filling purposes. We are still thinking about whether we can subdefine them in a shrubbery tag. --Cartographer10 (talk) 15:26, 2 June 2021 (UTC)
- Thing is, there are no shrubs. Flowers, herbs, grass & grassy plants, heath, trees, but almost no woody plants. I couldn't call this a type of shrubbery. I wouldn't call it scrub either. Probably leisure=garden would be the best fit. That said, next year all these patches may be shrubs, or just grass, or all tulips. --Peter Elderson (talk) 17:14, 2 June 2021 (UTC)
- You are right, trick question. I didn't look well enough. Then this is not covered by our shrubbery tag, because there already is a tag for it.
--Cartographer10 (talk) 18:07, 2 June 2021 (UTC)

Suggestion to reconsider the values

The basic idea of introducing supplemental tags for natural=scrub is good, I think. But I would suggest reconsidering the values used:

- Under the criterion of Verifiability, the choice of values is not good. If the pointless fights over natural=wood vs. landuse=forest are any indicator, there would - with the current proposal - be many who will say any natural=scrub in Central Europe is cultivated=full or cultivated=semi, and this way demonstrate the classification to be pointless.
- For the current suggestion regarding scrub:density, it should be noted that people will often map scrubland from imagery, so having a tag defined through walkability while people will mostly tag it based on foliage cover density is very likely to result in low-quality mapping.

One option for example would be:

- cultivated=intensely - there are visible signs that the scrubland is intensely managed and gardened by humans to match certain (usually esthetic) requirements.
- cultivated=yes - there are visible signs that the scrubland receives systematic human maintenance, like the removal of dead twigs and weeds, potentially also occasional cutting back, but no intense shaping of the scrubs as with cultivated=intensely.
- cultivated=no - the scrubland receives no systematic human maintenance on a regular basis. This does not rule out local human interference, like cutting free a path.
- scrub:foliage_density=dense - the scrubland has a full or nearly full foliage cover.
- scrub:foliage_density=open - the scrubland has an open foliage cover (so ground or underbrush is visible from above), but the foliage still forms a continuous pattern over the area.
- scrub:foliage_density=sparse - there is only sparse coverage with scrubs, with on average so much spacing in between that there is no continuous foliage cover.
- scrub:ground_density=sparse - the scrubs stand so sparsely that you can/could walk through without needing to touch or push through between the scrubs.
- scrub:ground_density=open - the scrubs stand more densely, but with still enough open space between them that an able-bodied person could push through.
- scrub:ground_density=dense - the scrubs form a dense thicket that is impassable without damaging the plants.

--Imagico (talk) 10:14, 15 July 2021 (UTC)

- Thanks for the critical view.
- About the values: basically what you describe is also what I intend. Indeed, cutting a path free through a forest or scrubland does not make it cultivated. Apparently, this is still not clear in the definitions.
- About the values: cultivated=intensely is also cultivated, so yes. If yes is introduced, then I think it is better to switch it: the highest value is then cultivated=yes, then cultivated=semi, and then cultivated=no. Is that an idea? You then only have the potential problem that yes will also be used instead of cultivated=semi. But apart from this, intensely is maybe better than fully.
- Values like `semi` or `medium` are inherently vague. A mapper who has doubts about what to tag (because they have no local knowledge, for example, or only had a quick look) will in many cases choose semi/medium as a substitute for unknown. I know of no case in OSM where such values have been used that does not suffer from poor data quality as a result of the vagueness. Cultivated as yes or no, plus intensely as a variation of yes, is in my eyes much clearer and better defined. --Imagico (talk) 19:31, 15 July 2021 (UTC)
- I understand your way of thinking. I will take it into consideration. --Cartographer10 (talk) 19:51, 15 July 2021 (UTC)
- About the density: I understand the point, and I think for trees that would be the case. A dense canopy does not mean you cannot walk below the trees. But is it so for scrub?
Scrub often touches the ground, so the dense foliage you see also applies to the ground, right? --Cartographer10 (talk) 18:54, 15 July 2021 (UTC)

- natural=scrub is used in a lot of different situations, and for many there is a significant difference between the foliage density and the ground density. natural=scrub in OSM means any woody plants higher than heath but less high than full-grown trees. An area with young broadleaved trees, for example, will often develop a dense foliage cover rather quickly but will often be perfectly walkable on the ground. And there are scrublands which are clearly impassable but without full foliage cover.
- But as soon as you can walk under it, shouldn't it then be more wood/forest? At least with scrub I think of scrub that touches the ground. Maybe if you crawl you can get through scrub, but most often not. --Cartographer10 (talk) 19:51, 15 July 2021 (UTC)
- As said, natural=scrub in OSM is used for all kinds of woody vegetation higher than heath and less high than full-grown trees. The correlation between foliage cover and walkability is often not very strong. I remember many cases of young tree patches, maybe 3-4 m high, like beech or maple, with full foliage cover but still perfectly walkable. I think it would be good to allow both on-the-ground and remote mappers to tag the properties observable to them, and not have to guess one that they cannot reliably observe. --Imagico (talk) 13:59, 16 July 2021 (UTC)
- We discussed your density proposal, but we decided to stay with density=*. The tags you propose are really specific and outside the scope of this proposal. Feel free to create a proposal for your detailed density tags. We think that density=* forms a solid basis, and via namespacing, people who know more can specify what you propose. --Cartographer10 (talk) 18:28, 3 August 2021 (UTC)
- Starting to see why you want "cultivated", not "manicured". Looks like the proposal changed scope more than a little.
From my point of view, the most interesting new twist is "density". As my main interest is in hiking, including off-path, I would welcome and support an outcome that can be applied to woods/forests just the same.

- In the area that I am comfortable with, scrub mostly means impassable, whereas wood means passable. That is fine and dandy; no need for any extra tags or attributes. If it were not for the, quite large, areas where shrubs or trees are spaced far from each other to allow grasses to get enough light to grow. That spacing makes the area passable per pedes in the scrub case, and a joyful experience in the wood case. Occasionally, this is purposefully mapped as a dual landuse/natural, which looks nice in OSM-Carto, but is not actually concise in its meaning.
- This is not meant to subvert even more granular distinctions. It is just an example of what will be affected when the vote comes to pass. --Hungerburg (talk) 21:54, 24 July 2021 (UTC)
- Well, manicured just doesn't seem right. Cultivated is better here. That indeed changed the scope more than a little, but that is what it takes to continue the shrubbery proposal as an extension to natural=scrub. About the density: several people already mentioned being interested in that no matter the outcome of the proposal. Because you wrote under Imagico's topic: what do you think about the density tags as described by Imagico? One density tag for all, or two separate ones? --Cartographer10 (talk) 12:38, 25 July 2021 (UTC)
- Hmm, I did post under this section because it goes into depth about density, and also because it mentions woods. And I feel woods are in dire need of a density attribute too! Perhaps I just made some noise; please excuse me if this is too much off-topic.
- As has been said above, and more recently below, mapping customs in deciding between wood and scrub are not to be trusted too much. Around my home place, lots of areas with trees are mapped as scrub. BTW, it mentions the term "cultivation".
The occasional and rare clearcuts or areas devastated by avalanches are also mapped as "scrub" quite often. First come the shrubs; only then come the young trees, even if planted. The area gets swapped to wood/forest only after the trees have grown so tall that the shrubs have lost the game.

- Following this scheme, ground density with shrubs is always "dense" and with trees is almost always "no". From that perspective, the distinction between ground and top layer (one based on the impression from aerial imagery, the other based on walkability) makes little sense.
- So, thinking further about how to avoid namespacing, toying around with the words "canopy" (top) and "understorey" (bottom) layers, I realised that wood needs three "densities", perhaps four, because, you know, the understorey can be made of shrubs! All the while I think that, if it were not for the example mentioned above with some young woods, scrub could perhaps use a single one: not density, but* --Hungerburg (talk) 23:34, 26 July 2021 (UTC)
- Spacing (the proposal) has metrical distances. From a mapping point of view, for scrubs and woods (orchards, …), two values, "dense" and "light/sparse", might be enough, IMO; defined as something that can be seen from aerial imagery. Passability/traversability would be another key then, between impenetrable and fine, running a high risk of being called subjective.
- If called "density", would that be better then? Could that be proposed without namespacing? --Hungerburg (talk) 08:15, 27 July 2021 (UTC)

Please do not widen the definition of natural=scrub

natural=scrub is used to tag scrub(land), i.e. uncultivated land covered with shrubs, bushes or stunted trees. This is what the tag value implies and what is defined on the wiki. Scrubland and areas covered with ornamental shrubs are very different, both in terms of appearance, species and ecosystem. Therefore, I think it is a bad idea to widen the definition of natural=scrub to also include cultivated shrubs (shrubbery).
Besides, many natural=scrub areas would suffer information loss. Unfortunately, I don't have a better solution than what you proposed before (natural=shrubbery). It's a pity that the proposal counts as rejected although 55% were in favour (what would be considered a large majority in a plebiscite).

Edit: The number of nays in the archived proposal is wrong. There were 31 nays, not 21. Maybe the threshold at which a proposal is considered accepted should be lowered to an absolute majority (50% + 1 vote). --Dafadllyn (talk) 21:42, 23 July 2021 (UTC)

- I understand your position, but the cold facts are that there is, unfortunately, not enough support for a separate tag. So there are two options: this proposal is accepted, and cultivated will give you a way to separate man-managed shrubbery and hedges from wild scrubland (not ideal, but with presets and perhaps even rendering support this could become common for cultivated shrubs); or the status quo is kept, which means that you have absolutely no way of knowing that any natural=scrub is actually a bunch of planted and neatly pruned bushes in the middle of a parking lot. No amount of wiki documentation stressing that natural=scrub is only for wild scrubland is going to turn back the clock on that one. --JeroenHoek (talk) 06:52, 24 July 2021 (UTC)
- I'm unsure there is enough support for widening the definition of natural=scrub either. Only 10 people (14%) thought that natural=scrub should also be used for "areas with managed shrubs". Unfortunately, they didn't seem to understand that the difference between scrubland and shrubbery is not only about being managed or not, but about native vs. alien or cultivated shrubs. (Of the other people who rejected the proposal, 8 thought that the natural=* key should not be used for something that is not purely natural, although natural=* is already used for other features created or modified by humans, and another 13 people rejected the proposal for other reasons.)
- I see at least two other options:
- In my opinion, misusing a tag harms the quality of OpenStreetMap and should be corrected, not accepted. If there is no approved or de facto tag, one could either remove the wrong tag and add a note=*, or use a new tag that one thinks makes most sense. --Dafadllyn (talk) 20:54, 24 July 2021 (UTC)
- The voting tally is a little more complex. Many no-votes without any explanation supported the standpoint of @Jo Cassel:, which comes down to "use natural=scrub for areas". (Some of the German no-voters did not explain so on the voting page; they did do so on the German OSM forum though. These vocal no-voters are currently all on holiday, apparently.)
- Using natural=shrubbery as an 'in use' tag is a possible strategy, and a change in voting procedure is as well. We'll let this version run its course first, though. --JeroenHoek (talk) 06:23, 7 August 2021 (UTC)

Please do not widen the definition of natural

natural=* was originally used for natural features. I see in the chronology that, by adding a few words, the natural domain seems no longer just natural only: why do we need to mix things belonging to different domains? When OSM was a young project, we did it with amenity; now amenity is a huge collection of very different things. So let's try to learn from our juvenile errors. --Ale Zena IT (talk) 08:54, 26 July 2021 (UTC)

Please do not explicitly exclude forests/woods

The question of whether scrubland-like forests (for instance, compartments under natural regeneration) are to be mapped with natural=scrub or landuse=forest/natural=wood has never been decided, AFAIK. During the talks about two failed proposals about mapping forests, some mappers raised the issue, and it seems that, though using natural=scrub to map such areas is a disputed custom, some mappers still map such areas this way, simply because the area looks like a scrubland and by-passers may not be able to distinguish it from a true, natural scrubland.
My point is that the proposal should either try to solve the dispute (I agree this may be impractical) or not choose a side by explicitly excluding forests/woods. IMO, it should leave the question open and say something like: "You may use this tag for forests/woods which look like scrubland, but be warned that using natural=scrub for such areas is a disputed tagging custom." This way, the issue would be explicitly left out of the proposal's scope, and the proposal could gather wider consensus by preventing such natural=scrub mappers from voting no because they feel that it invalidates this mapping custom. Penegal (talk) 18:01, 25 July 2021 (UTC)

- This proposal really presents no opinion on this matter; it is way out of scope. I can add a note to that effect if that reassures you. --JeroenHoek (talk) 06:45, 4 August 2021 (UTC)
- This was a misunderstanding. The user was referring to the sentence "For vegetation where the bushes act as ground-cover for a forest or clump of trees, the tags landuse=forest or natural=wood are preferred." That was not intended as any response to the dispute you are referring to. It was meant as "if the forest floor is covered in bush/shrubbery, then look at natural=wood or landuse=forest instead"; so it was just a clarification that these should be tagged as forest. --Cartographer10 (talk) 18:32, 4 August 2021 (UTC)

The tagging section

The tagging section utterly fails to address the consequences of recent changes in the proposal: you went from creating a new value "shrubbery" for the natural key to proposing subkeys for natural=scrub. Still, out of 9 proposed tags, 8 only apply to "shrubbery" and only 1 applies to "scrub", namely "maintained=no". Your reactions to talking points show that you yourselves seem unaware of that shift either. --Hungerburg (talk) 19:49, 4 August 2021 (UTC)

- We are proposing three keys.
scrub:density=* is applicable to all of natural=scrub, and is in fact already in use; we co-opted it. The other key, maintained=*, applies to all of natural=scrub too. Only scrub:shape=* is specific to topiaries. You yourself suggested the use of a subkey in the comment for your no-vote on natural=shrubbery. Of course, anyone who wishes to can just ignore these tags and keep on mapping actual wild scrublands with just natural=scrub. Who knows, if these additional keys become popular enough, the documented default of natural=scrub can even become 'wild natural shrubs or scrubland' (maintained=no). Of course, without a proposal like this, the end result is that natural=scrub will simply remain 'any sort of bushy area', which is the (overwhelming) status quo due to lack of alternatives. --JeroenHoek (talk) 07:11, 5 August 2021 (UTC)

- I am not principally against sub-tagging; I only think the promise "can be used on any natural feature" does not deliver. --Hungerburg (talk) 21:34, 6 August 2021 (UTC)
- I also consider a sentence like "Who knows … the documented default can even become 'wild natural shrubs or scrubland'", i.e. what it is now, a bit weird. --Hungerburg (talk) 21:39, 6 August 2021 (UTC)
- The natural=scrub page explicitly mentions that it is used for shrubberies as well (not that it should be used like that, but that it undeniably is). We tried to introduce a separate tag for that, which didn't succeed and which you voted against yourself. So now we offer a way to distinguish between true natural scrubland and cultivated beds of shrubs. If that succeeds, the documentation can at some point move towards defaulting to natural scrubland if maintained=* is missing, and consider shrubberies (i.e., man-managed shrubs) mapped as natural=scrub without maintained=* a mapping error. Currently, data consumers cannot make this assumption, because of the de facto use of natural=scrub for cultivated areas.
By voting against this as well you are implicitly stating that the status quo should continue, and natural=scrub will remain the go-to tag for mappers adding shrubby details to urban areas. --JeroenHoek (talk) 06:12, 7 August 2021 (UTC)
Final state of the proposal page
This proposal was voted on in two voting rounds. The second iteration represented a compromise incorporating the biggest issue raised by voters in the first iteration: i.e., no separate tag should be introduced; use natural=scrub. This second iteration was rejected even more strongly, and we as proposal authors have completely abandoned that approach. Instead, we have started using natural=shrubbery as proposed in the first iteration, with a few minor adjustments based on feedback received during the voting. To keep the voting history and the state of both proposal iterations clear, this proposal page links to the post-voting states of both iterations. Please leave it like that unless a third iteration is proposed. @Adamant1: Reverting the proposal page to the post-voting state of the second iteration isn't helpful in this particular case. We've left it as an index to both iterations on purpose, in particular because we are championing natural=shrubbery. --JeroenHoek (talk) 06:34, 23 August 2021 (UTC) - People gave important reasons for why the proposal was rejected in their votes. How exactly is it not helpful to have that information? It's not like you can't retain the information/votes and also have it as an index to both iterations. They're literally just links. In the meantime, the complete context of why a proposal is rejected or accepted should be retained in the article where people can read over it if they want to. Retaining the final state of the second proposal on the page is a perfectly fine way to do that. Especially since it's what the information box is about, not the first one.
--Adamant1 (talk) 14:06, 24 August 2021 (UTC) - There were two voting rounds, equally important for the whole story of this proposal. People gave important reasons during both — hence the index page to provide quick access to both post-voting pages. Archiving the proposal also has the benefit of not clashing with the current documentation, and, more importantly, of not further propagating the compromise proposal of augmenting natural=scrub. If the comments made one thing clear, it is that that variant is seen as inferior to natural=shrubbery by those who supported it as a compromise. If the proposal page should show anything, it would be the first iteration voted upon, but since this is already covered by natural=shrubbery this would only lead to more confusion. Besides, archiving a proposal is not uncommon or anything. What exactly are you worried about? The information is right there. --JeroenHoek (talk) 16:07, 24 August 2021 (UTC)
In this article, we'll examine the inner workings of the map() function in Python. As always, we're using Python 3. The function's goal may appear rather simple (help the developer work with large sets of data), but there's always more to learn. Here is some of the Python-focused content we have produced of late: Last but not least, do check our Python interview questions.

In theory, articles like these shouldn't be necessary — if you encounter a function you don't understand well, the go-to source of knowledge is supposed to be the Python built-in help() module.

C:\Python\Test_folder> Python
Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:06:47) [MSC v.1914 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>

You pass the function you want to learn about as the argument…

>>> help(map)

… and get an explanation that (hopefully) clears up all misunderstandings. Here's what the Python help() module has to say about map():

Make an iterator that computes the function using arguments from each of the iterables. Stops when the shortest iterable is exhausted.

Well, the folks at the Python Software Foundation have condensed a lot of information into this neat little description — so much so that many novice Python programmers have a hard time understanding how they can apply map() in their programs. It probably looks like this to a beginner Python programmer: Quite easy to follow, don't you think? However, the explanation above introduces two terms which are essential to Python: iterator and iterable. Since they appear frequently throughout various Python documentation files, let's explain them now:

- An iterable is any object capable of returning its members one at a time: lists, tuples, strings, dictionaries, and so on.
- An iterator is an object representing a stream of data: it remembers its position and produces the next value on demand (for example via next()), raising StopIteration when nothing is left.

With these terms defined, we can now move to the next point — analyzing what map() does and how it can be applied.
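To make these two terms concrete, here is a minimal sketch (the variable names are ours, not from the help text) showing an iterable, the iterator it produces, and the "stops when the shortest iterable is exhausted" behaviour that the help() entry mentions:

```python
# A list is an iterable; iter() produces an iterator from it,
# and next() pulls one value at a time from that iterator.
numbers = [1, 2, 3]
it = iter(numbers)
first = next(it)   # 1
second = next(it)  # 2

# map() accepts several iterables at once and, as help() promises,
# stops as soon as the shortest iterable is exhausted.
sums = list(map(lambda a, b: a + b, [1, 2, 3], [10, 20]))
print(sums)  # [11, 22]
```

Note that the third element of the first list is simply ignored, because the second list runs out first.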
The map() function applies the function you provide to each item in an iterable. The result is a map object, which is an iterator. Here's the syntax of map():

map(func, *iterables)

Without map(), we'd be forced to write complex code to "loop" the given function over a number of items. Let's imagine a neat little experiment: we have a list of 10 words.

test_list = ["effort", "circle", "yearly", "woolen", "accept", "lurker", "island", "faucet", "glossy", "evader"]

We suspect that some of them may be abecedarian (i.e. the letters appear in alphabetical order). We write a function called is_abecedarian to check if a given word is abecedarian:

def is_abecedarian(input_word):
    index = 0
    for letter in input_word[0:-1]:
        if ord(input_word[index]) > ord(input_word[index + 1]):
            return False
        else:
            index += 1
    return True

Now, we want to apply our function to the word list and create a new list which will hold True and False values, signifying whether the words are indeed abecedarian. One method would involve initializing a new list, then using a for loop to iterate over the list elements:

value_list = []
for item in test_list:
    value = is_abecedarian(item)
    value_list.append(value)

This will output:

[True, False, False, False, True, False, False, False, True, False]

Thanks to map(), however, we can reduce the code above to a neat little one-liner:

map(is_abecedarian, test_list)

However, we should be mindful that map() doesn't return a list — it returns a map object instead. You might be curious which words are actually abecedarian — let's code the answer to your question:

for item in test_list:
    if is_abecedarian(item):
        print(f"The word '{item}' is abecedarian. :)")
    else:
        print(f"The word '{item}' is not abecedarian. :(")

This will output:

The word 'effort' is abecedarian. :)
The word 'circle' is not abecedarian. :(
The word 'yearly' is not abecedarian. :(
The word 'woolen' is not abecedarian. :(
The word 'accept' is abecedarian. :)
The word 'lurker' is not abecedarian. :(
The word 'island' is not abecedarian. :(
The word 'faucet' is not abecedarian. :(
The word 'glossy' is abecedarian. :)
The word 'evader' is not abecedarian. :(

We can also visualize this explanation to help you understand it even better: Almost feels like magic. It's also useful to define map and mapping — we can use the definitions provided by Allen B. Downey in his Think Python book: Since map() doesn't return a list/tuple/set, we need to take an extra step to convert the resulting map object:

def capitalize_word(input_word):
    return input_word.capitalize()

map_object = map(capitalize_word, ['strength', 'agility', 'intelligence'])
test_list = list(map_object)
print(test_list)

map_object = map(capitalize_word, ['health', 'mana', 'gold'])
test_set = set(map_object)
print(test_set)

map_object = map(capitalize_word, ['armor', 'weapon', 'spell'])
test_tuple = tuple(map_object)
print(test_tuple)

This will output:

['Strength', 'Agility', 'Intelligence']
{'Mana', 'Health', 'Gold'}
('Armor', 'Weapon', 'Spell')

(Note that sets are unordered, so the order of the second line may differ from run to run.)

From multiple lines to a pretty one-liner

Lambda expressions are a fine addition to our arsenal: combining them with map() makes your Python program even smaller, line-wise. Lambda expressions create anonymous functions, i.e. functions that aren't bound to a specific identifier. Conversely, creating a function via the def keyword binds the function to its unique identifier (e.g. def my_function creates the identifier my_function). However, lambda expressions have limitations: each of them can contain only a single expression, and they are typically written at the one place where they are used. They're often used in conjunction with other functions, so let's see how we can utilize them and map() simultaneously.
For instance, we can turn the code below…

cities = ["caracas", "bern", "oslo", "ottawa", "bangkok"]

def capitalize_word(input_word):
    return input_word.capitalize()

capitalized_cities = map(capitalize_word, cities)

… into a more concise version:

cities = ["caracas", "bern", "oslo", "ottawa", "bangkok"]
capitalized_cities = map(lambda s: s.capitalize(), cities)

However, there's a caveat: both map() and lambda expressions provide the ability to condense multiple-line code into a single line. While this functionality is pretty awesome, we need to keep in mind one of the golden rules of programming: Code is read more often than it is written. This means that both map() and lambda expressions can improve code brevity, but sacrifice code clarity. Sadly, there aren't really any clear guidelines on how readable your code should be — you'll figure this out yourself as your programming experience grows. Naturally, map() is well-suited for scenarios when you want to iterate over a dictionary. Let's imagine we have a dictionary containing the prices of apples, pears, and cherries. We need to update the price list by applying a 15% discount to it. Here's how we can do it:

price_list = {
    "pear": 0.60,
    "cherries": 0.90,
    "apple": 0.35,
}

def calculates_discount(item_price):
    return (item_price[0], round(item_price[1] * 0.85, 2))

new_price_list = dict(map(calculates_discount, price_list.items()))

The output will be:

{'pear': 0.51, 'cherries': 0.77, 'apple': 0.3}

(Exact cent values can vary with binary floating-point rounding; for money, the decimal module avoids such surprises.) Programming is especially interesting when you start to combine multiple functions — a good example is using map() and lambda expressions simultaneously to iterate over a dictionary. In the code snippet below, we initialize a list of dictionaries and pass each dictionary as a parameter to the lambda function.
list_of_ds = [{'user': 'Jane', 'posts': 18}, {'user': 'Amina', 'posts': 64}]

list(map(lambda x: x['user'], list_of_ds))
# Output: ['Jane', 'Amina']

list(map(lambda x: x['posts'] * 10, list_of_ds))
# Output: [180, 640]

list(map(lambda x: x['user'] == "Jane", list_of_ds))
# Output: [True, False]

(We wrap each call in list() because, as noted earlier, map() returns a map object rather than a list.)

Like with all technologies/products/techniques/approaches/etc., some Python developers consider the map() function to be somewhat un-Pythonic (i.e. not following the spirit and design philosophies of how Python programs should be built). They suggest using list comprehensions instead, so that

map(f, iterable)

turns into

[f(x) for x in iterable]

Speed- and performance-wise, map() and list comprehensions are roughly equal, so it's highly unlikely that you'll see a significant decrease in execution times — experienced Python developer Wesley Chun addressed this question in his talk Python 103: Memory Model & Best Practices (the discussion starts around 5:50 if the timestamp didn't work correctly). Phew — turns out this map() function in Python isn't so complicated after all! To solidify your knowledge, however, you should play around with these examples and tweak them — by breaking and fixing these code snippets, you may very well run into certain aspects of the map() function that we didn't cover. Good luck! 🙂
Will it be attached to flight simulator software? Autopilots are not stand-alone devices. They only run with an airplane they are mounted on, and it is this airplane (or its simulator) which A) supplies the flight data, such as airspeed, heading, rate of climb, accelerations, and many other relevant data, and B) is steered by the autopilot, which influences the flight path in the desired way by deflecting control surfaces, which in turn changes the flight data as feedback. These deflections will stabilize when the dynamic feedback of the flight data equals the control input. In other words, when the aircraft flies to the destination you want it to fly to, despite disturbances such as wind changes and fuel burn. If you want to run the autopilot class stand-alone, you must at least have a bare-bones flight model which simulates the airplane's characteristics. Unless you have some knowledge of aeronautical engineering, you may not be in a position to make a meaningful model. Just to give you an idea of the complexity: the most important force vector of an airplane is the lift vector. This has to be bigger than the weight of the airplane to start flying. Here is a simplified formula: L = (1/2) d v² s CL, where L is lift (which must equal the airplane's weight in pounds for level flight), d is air density, v is airspeed, s is wing area, and CL is the lift coefficient. If you feel it is doable for you, go to the NASA learning pages at and teach yourself some aeronautics. Simulators work in such a way that they use a fixed time raster to recalculate the aircraft formula. This time may be, for example, 100 milliseconds. Every 100 msec the flight data are updated by the changes which occurred within the past 100 msec. This will give a pretty good approximation of a real airplane. jack.net (aeronautical engineer, programmer, pilot and CFI) For example, if the current heading is 330 and the desired heading is 030, what do you do? If you're not turning, turn right. If you're already in a right turn, do nothing. If you're in a left turn, turn right.
Likewise, if you're on the desired heading, you have to stop turning if you're in a turning state. The same thing goes for altitude changes. Once you have those basics established, you can work on how to implement those actions. And, as above, we don't worry about control surfaces yet, but instead we use basic actions needed to perform the higher-level actions above. For example, to initiate a right turn, we roll the aircraft into a 30 degree right bank and maintain that bank angle. To stop the right turn, we roll the aircraft to the left until the wings are level. Similarly, to initiate a climb or descent, add or remove a standard amount of power to achieve a standard climb or descent rate. To level off, return the power to the previous setting. NOW you can worry about control surfaces. Well, the ailerons. Maybe. To initiate a roll right, raise the right aileron and lower the left aileron by some standard amount. To stop the roll, center the ailerons. In real life, the ailerons are cross-linked so that turning the yoke causes appropriate, simultaneous aileron deflection, so you may want to consider a yoke controller instead of an aileron controller. You can make it a bit more realistic by realizing that rolling back to level from a given bank angle takes a palpable amount of time. So, in order to roll out level on a given heading, you have to start the roll back x many degrees ahead of the desired heading, x depending on bank angle and roll rate. === The primary function of the rudder is to keep the nose pointed properly during the turn. The amount of yaw needed depends on the rate of turn, which is a function of bank angle. Insufficient yaw, i.e., the nose is pointing outside the turn, may cause the up wing to lose lift or stall, thereby causing the aircraft to flop back level.
Too much yaw, i.e., the nose is pointing to the inside of the turn, may cause the down wing to lose lift or stall, thereby causing the bank angle to suddenly increase, possibly inverting the aircraft. === A good simplification of flying, but probably too simplified. It will be very tough to build an autopilot based on your observations. You mix up cause and effect. It is true that most of your statements appear to the pilot the way you describe them, but they are technically wrong. For example: ==== If you add power, the aircraft first accelerates its forward velocity, since the power vector is higher than the total drag. Speed will slightly increase; as a result, total drag increases roughly with the square of the speed increase. Drag and power vectors will balance out, but we have a higher resulting speed. This in turn will increase the lift vector which opposes the aircraft's weight. Since the weight is stable at this point, the aircraft starts to climb with the higher lift. Unless you respect such fundamentals of aircraft control, your autopilot will never be an "Automatic Pilot", but just a useless piece of code. Or can you show me an autopilot which primarily uses autothrottle to do altitude hold? Most GA planes do not have autothrottle, but they may have a perfectly working altitude hold. So my recommendation is: if someone wants to program an autopilot which deserves this name, you have to start with the fundamentals of flight the way they physically work. It is not enough to use the training talk your CFI tells you when you try your first 360. Happy landings. Jack.net The autopilot is a very, very basic one, and I am not even worried about the actual computeMovement class, but any suggestions for an algorithm would be appreciated, though not 100% needed.
I am more worried about the framework of an autopilot, rather than the intricate movements, etc. Similarly, a climb state variable CState indicates climb rate in feet per second. For example, -10 indicates a 10-foot-per-second descent. We also use variables CurrentHeading, DesiredHeading, CurrentAltitude, and DesiredAltitude as expected. Once per second, update the Current* variables based on the values of *State, then modify the *State variables as appropriate. For example, in a one-second timer handler:

// Update heading and altitude
CurrentHeading += TState;
CurrentAltitude += CState;
// Decide if we need to start or stop a turn
if (CurrentHeading == DesiredHeading)
{
    TState = 0; // Stop turn, if we were turning
}
else
{
    // Look at DesiredHeading and CurrentHeading to decide which direction to turn,
    // then set TState accordingly (3 degrees per second is a common rate).
    // (Note that by always setting TState, we don't care what it was)
}
// Do same sort of thing for altitude.
// Display updated states

1) Now, do I have to adjust each individual part of the plane, i.e. rudder, elevator, aileron? 2) Also, how should I go about getting each individual setting for the three parts? They are going to be sent to me in an object with methods of extracting them included, I am fairly certain. 3) And are my set* methods OK as is? I've written a Timer class (basically a stopwatch) which I think might be useful as far as helping the FCA make its calculations. Where do you think I should utilize it (if, of course, you think I should), and how?

// Creates the timer
Timer timer = new Timer();
// Start timer
timer.start();

Here's something I did this evening. Note the slowing of turns and altitude changes as the target is approached. using System; using System.Collections.Generic; using System.Windows.Forms; using System.Timers; namespace AutoPilot { partial class Form1 { /// <summary> /// Required designer variable.
/// </summary> private System.ComponentModel.ICon double DesHdg=0; double DesAlt=10000; double CurHdg=0; double CurAlt=10000; double TState=0; double CState=0; double Epsilon=0.4; System.Timers.Timer APTimer; /// <summary> /// Clean up any resources being used. /// </summary> /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param> protected override void Dispose(bool disposing) { if(disposing && (components != null)) { components.Dispose(); } base.Dispose(disposing); } protected override void OnActivated(EventArgs e) { this.tbCmdAlt.Text=DesAlt. this.tbCmdHdg.Text=DesHdg. this.lbCurHdg.Text="Cur Hdg: "+String.Format("{0:###}", this.lbCurAlt.Text="Cur Alt: "+String.Format("{0:##,### } #region Windows Form Designer generated code /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. /// </summary> private void InitializeComponent() { this.tbCmdHdg = new System.Windows.Forms.TextB this.tbCmdAlt = new System.Windows.Forms.TextB this.label1 = new System.Windows.Forms.Label this.label2 = new System.Windows.Forms.Label this.groupBox1 = new System.Windows.Forms.Group this.cbSet = new System.Windows.Forms.Butto this.lbCurHdg = new System.Windows.Forms.Label this.lbCurAlt = new System.Windows.Forms.Label this.APTimer = new System.Timers.Timer(); this.groupBox1.SuspendLayo ((System.ComponentModel.IS this.SuspendLayout(); // // tbCmdHdg // this.tbCmdHdg.Font = new System.Drawing.Font("Arial this.tbCmdHdg.Location = new System.Drawing.Point(11,37 this.tbCmdHdg.Name = "tbCmdHdg"; this.tbCmdHdg.Size = new System.Drawing.Size(109,26 this.tbCmdHdg.TabIndex = 0; // // tbCmdAlt // this.tbCmdAlt.Font = new System.Drawing.Font("Arial this.tbCmdAlt.Location = new System.Drawing.Point(144,3 this.tbCmdAlt.Name = "tbCmdAlt"; this.tbCmdAlt.Size = new System.Drawing.Size(109,26 this.tbCmdAlt.TabIndex = 1; // // label1 // this.label1.AutoSize = true; this.label1.Font = new 
System.Drawing.Font("Arial this.label1.Location = new System.Drawing.Point(38,21 this.label1.Name = "label1"; this.label1.Size = new System.Drawing.Size(43,18) this.label1.TabIndex = 2; this.label1.Text = "HDG"; // // label2 // this.label2.AutoSize = true; this.label2.Font = new System.Drawing.Font("Arial this.label2.Location = new System.Drawing.Point(184,2 this.label2.Name = "label2"; this.label2.Size = new System.Drawing.Size(37,18) this.label2.TabIndex = 3; this.label2.Text = "ALT"; // // groupBox1 // this.groupBox1.Controls.Ad this.groupBox1.Controls.Ad this.groupBox1.Controls.Ad this.groupBox1.Controls.Ad this.groupBox1.Controls.Ad this.groupBox1.Font = new System.Drawing.Font("Arial this.groupBox1.Location = new System.Drawing.Point(12,12 this.groupBox1.Name = "groupBox1"; this.groupBox1.Size = new System.Drawing.Size(274,10 this.groupBox1.TabIndex = 4; this.groupBox1.TabStop = false; this.groupBox1.Text = "CMD"; // // cbSet // this.cbSet.Font = new System.Drawing.Font("Arial this.cbSet.Location = new System.Drawing.Point(91,69 this.cbSet.Name = "cbSet"; this.cbSet.Size = new System.Drawing.Size(85,30) this.cbSet.TabIndex = 4; this.cbSet.Text = "SET"; this.cbSet.UseVisualStyleB this.cbSet.Click += new System.EventHandler(this.S // // lbCurHdg // this.lbCurHdg.AutoSize = true; this.lbCurHdg.Font = new System.Drawing.Font("Arial this.lbCurHdg.Location = new System.Drawing.Point(19,18 this.lbCurHdg.Name = "lbCurHdg"; this.lbCurHdg.Size = new System.Drawing.Size(129,22 this.lbCurHdg.TabIndex = 5; this.lbCurHdg.Text = "Curr Hdg: 000"; // // lbCurAlt // this.lbCurAlt.AutoSize = true; this.lbCurAlt.Font = new System.Drawing.Font("Arial this.lbCurAlt.Location = new System.Drawing.Point(19,20 this.lbCurAlt.Name = "lbCurAlt"; this.lbCurAlt.Size = new System.Drawing.Size(143,22 this.lbCurAlt.TabIndex = 6; this.lbCurAlt.Text = "Curr Alt: 10,000"; // // APTimer // this.APTimer.Enabled = true; this.APTimer.Interval = 333; this.APTimer.Synchronizing this.APTimer.Elapsed += 
new System.Timers.ElapsedEvent // // Form1 // this.AutoScaleDimensions = new System.Drawing.SizeF(6F,13 this.AutoScaleMode = System.Windows.Forms.AutoS this.ClientSize = new System.Drawing.Size(309,24 this.Controls.Add(this.lbC this.Controls.Add(this.lbC this.Controls.Add(this.gro this.Name = "Form1"; this.Text = "Form1"; this.groupBox1.ResumeLayou this.groupBox1.PerformLayo ((System.ComponentModel.IS this.ResumeLayout(false); this.PerformLayout(); }

// //////////////////////////
// //////////////////////////
void APTimer_Elapsed(object source, ElapsedEventArgs evt)
{
    APTimer.Enabled = false;
    // Update heading and altitude
    CurHdg += TState;
    if (CurHdg > 359) CurHdg -= 360;
    if (CurHdg < 0) CurHdg += 360;
    CurAlt += CState;
    // Decide if we need to start or stop a turn
    if (Math.Abs(CurHdg - DesHdg) < Epsilon)
    {
        TState = 0; // Stay on this heading
    }
    else
    {
        // We have to turn (or continue a turn)
        // Look at DesiredHeading and CurrentHeading to decide which direction to turn,
        // then set TState accordingly (4 degrees per second is reasonable).
        // (Note that by always setting TState, we don't care what it was)
        // Decide which way to turn and how much of the turn is left
        double TurnDir = 1; // 1 => right, -1 => left
        double TurnAmt = 0;
        if (CurHdg > DesHdg)
        {
            if ((CurHdg - DesHdg) > 180) { TurnDir = 1; TurnAmt = DesHdg + 360 - CurHdg; }
            else { TurnDir = -1; TurnAmt = CurHdg - DesHdg; }
        }
        else // DesHdg > CurHdg
        {
            if ((DesHdg - CurHdg) > 180) { TurnDir = -1; TurnAmt = CurHdg + 360 - DesHdg; }
            else { TurnDir = 1; TurnAmt = DesHdg - CurHdg; }
        }
        TState = 1.5 * TurnDir; // ca. 4 deg/sec
        // Now decide if we should slow the turn
        if (TurnAmt < 15) TState /= 2; // Cut turn rate in half when within 15 degrees
        if (TurnAmt < 5) TState /= 2;  // Cut in half again when within 5 degrees
    }
    // Do same sort of thing for altitude.
    if (Math.Abs(CurAlt - DesAlt) < 1
    {
        CState = 0;
    }
    else
    {
        double ClimbDir = 1;
        if (CurAlt > DesAlt) ClimbDir = -1;
        CState = 20 * ClimbDir;
        if (Math.Abs(CurAlt - DesAlt) < 1
        if (Math.Abs(CurAlt - DesAlt) < 1
        if (Math.Abs(CurAlt - DesAlt) < 1
    }
    // Display updated states
    this.lbCurHdg.Text = "Cur Hdg: " + String.Format("{0:000}",
    this.lbCurAlt.Text = "Cur Alt: " + String.Format("{0:##,###
    this.lbCurAlt.Refresh();
    this.lbCurHdg.Refresh();
    APTimer.Enabled = true;
}

private void Set_Click(object sender, System.EventArgs e)
{
    DesHdg = Convert.ToInt32(thi
    DesAlt = Convert.ToInt32(thi
}
#endregion

private System.Windows.Forms.TextB
private System.Windows.Forms.TextB
private System.Windows.Forms.Label
private System.Windows.Forms.Label
private System.Windows.Forms.Group
private System.Windows.Forms.Butto
private System.Windows.Forms.Label
private System.Windows.Forms.Label
}
}
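The heading logic in the C# handler above (pick the shorter way around the compass, then ease out of the turn near the target) can be distilled into a language-neutral sketch; here in Python, with illustrative names and thresholds mirroring the 15-degree and 5-degree slow-down checks:

```python
def turn_state(current_hdg, desired_hdg, max_rate=3.0, epsilon=0.4):
    """Signed turn rate in degrees per tick: positive turns right.

    Illustrative names and default values; not taken from the thread.
    """
    diff = (desired_hdg - current_hdg) % 360  # 0..360, measured as a right turn
    if diff < epsilon or 360 - diff < epsilon:
        return 0.0  # already on heading: stop turning
    if diff <= 180:
        direction, remaining = 1, diff         # right turn is shorter
    else:
        direction, remaining = -1, 360 - diff  # left turn is shorter
    rate = max_rate
    if remaining < 15:
        rate /= 2  # ease out when within 15 degrees
    if remaining < 5:
        rate /= 2  # and again within 5 degrees
    return direction * rate

print(turn_state(330, 30))  # 3.0: turning right through north is shorter
```

The modulo arithmetic replaces the four-way comparison in the C# version: reducing the signed difference into 0..360 makes "which way is shorter" a single comparison against 180.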
Computing Pi in C

Dik T. Winter wrote a 160-byte C program to compute the first 800 digits of pi. We analyze his code here. The original code

int a=10000,b,c=2800,d,e,f[2801],g;main(){for(;b-c;)f[b++]=a/5;
for(;d=0,g=c*2;c-=14,printf("%.4d",e+d/a),e=d%a)for(b=c;d+=f[b]*a,
f[b]=d%--g,d/=g--,--b;d*=b);}

can be rewritten as:

#include <stdio.h>

int f[2801];

int main(void) {
    int a = 10000, b, c = 2800, d, e = 0, g, k;
    for (b = 0; b < c; b++) f[b] = a / 5;
    for (k = 0; k < 200; k++) {  /* the k loop: one group of four digits per pass */
        d = 0;
        g = c * 2;
        for (b = c; ; ) {
            d += f[b] * a;
            f[b] = d % --g;
            d /= g--;
            if (--b == 0) break;
            d *= b;
        }
        printf("%.4d", e + d / a);
        e = d % a;
        c -= 14;
    }
    return 0;
}

Then we see that during the first iteration of the $k$ loop, the code is essentially computing the digits of

$$\pi = 2 + \frac{1}{3}\left(2 + \frac{2}{5}\left(2 + \frac{3}{7}\left(2 + \cdots\right)\right)\right)$$

By Beeler et al. 1972, Item 120, this is an approximation of $\pi$. We note each term in the approximation gives an additional bit of precision (see above link); thus 14 terms give 4 decimal digits of precision each time (since $2^{14} \gt 10^4$). This is why $2800 = 14 \times 200$ terms are used. To get a better idea of what the program does, let us write the truncated approximation using fractions $a_i/b_i = i/(2i+1)$, where each $a_i, b_i$ are coprime positive integers. Set $n = 2799$. Then our goal is to compute the digits of the truncated expression. Let $x = 2 \cdot 10^7$, the value we compute with in the first iteration. Let $x = q_n b_n + r_n$ and $q_i b_i + r_i = x + q_{i+1}b_{i+1}$ for $i \lt n$, where the $q_i, r_i$ are positive integers. Then $P_0 = q_0$, and we have bounds on how large the remainders can grow. The program then prints out $q_0 \bmod 10^4$, which should be the first four digits of an eight-digit number. The first four digits are subtracted away. Then on the next iteration, we roughly compute $10^4$ times the error term, add any possible carry to the unprinted digits from the last iteration, and print the first four digits. This process is repeated until 200 groups of four digits (800 digits in all) have been printed. At each step, we can forget about 14 terms because from the above bounds we know they cannot possibly affect any other digits. However there is one thing I don't understand. Why is it guaranteed only four digits will be printed each time? For example, if we add a carry to 9999 then it will take five digits. In fact, 9999 appears in the first 800 digits, but fortunately there is no carry immediately after that.
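As a sanity check on the analysis, the C program ports almost mechanically to Python; the comma-operator sequences become explicit statements, and the names follow the C original:

```python
def pi_800():
    """Port of Dik Winter's program: returns the first 800 digits of pi."""
    a = 10000
    c = 2800
    e = 0
    f = [a // 5] * c + [0]  # f[0..2799] = 2000; f[2800] starts at 0, as in C
    digits = []
    while c:                 # 200 passes, four digits per pass
        d = 0
        g = 2 * c
        b = c
        while True:          # the C comma-operator loop, spelled out
            d += f[b] * a
            g -= 1
            f[b] = d % g
            d //= g
            g -= 1
            b -= 1
            if b == 0:
                break
            d *= b
        digits.append("%04d" % (e + d // a))
        e = d % a
        c -= 14
    return "".join(digits)

print(pi_800()[:20])  # 31415926535897932384
```

Each pass through the outer loop drops 14 terms from the tail of the nested fraction and emits one four-digit group, exactly as the error analysis above predicts.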
Type: Posts; User: chaos2oo2 Hi, I've written a namespace extension (C++). What I want to do now is to copy the Windows Explorer's 'Open with' menu. I've implemented my context menu using IContextMenu, IContextMenu2 and... the application is the Windows Explorer in Windows 7, does it ask for IID_IExtractIconA? Examples of IExtractIcon implementations I can find just do: if (riid == IID_IExtractIcon) { ... } Hi, I've implemented IShellFolder and IExtractIcon. In IShellFolder I have the following code: STDMETHODIMP CILCShellFolder::GetUIObjectOf(HWND hwndOwner, UINT cidl, PCUITEMID_CHILD_ARRAY... I've found it out... thank you anyway! HKLM { NoRemove Software { NoRemove Microsoft Hi, I have created a Namespace Extension (I hope so) by creating an ATL Project with MFC support as a DLL in Visual Studio 2010. Now I have an implementation of IShellFolder: // ... Hi, I have developed a namespace extension. Now for a 64-bit OS we need to register both DLLs, 32-bit and 64-bit, in parallel. Registering both DLLs using regsvr32 works perfectly and everything is fine. ... Hello, I have a Dialog with a CTabCtrl. Each tab is a single dialog template with an underlying class. Now the behaviour is as follows: Switching from tab 1 to tab 2 to tab 3, everything is... Hello, I have a Namespace Extension which works very well so far. All virtual folders. For several different folders, I have about 10 .ico files in the resource file of my DLL. Now from time to time the... Hello, I have a template in Visual Studio with a size of 250x80 containing a CStatic and a CEdit. This template (dialog resource) is used by a FormView which is embedded in another dialog, so... Hello, I have a CDialogEx in my DLL which is displayed correctly. This CDialogEx does have a group box. Now I need to display a dynamic amount of edit boxes and labels in a panel/canvas or... so as far as I see there is no easy way; I don't have the time to implement the whole data exchange on my own.
What must be passed are lists of instances from my data handler. A process queries... Hello, I have a problem described here (). I've developed an namespace extension with a datahandler. This datahandler needs to be... will it be possible to pass a pointer to an instance of an object (which will be in the shared memory) to the service method then? In case of a seperate process? @Igor: How is servicing data... Hmm, doesn't this only work for files? Maybe I should describe a bit more detailed what I got. Currently I have a data handler class which provides several public functions and is a singleton... Hello, I have a developed an application which uses a complex data structure as data source. This data structure should exist only one time in memory. Every instance of my appliction should be... there is only how I create it, but I just have a hwnd and the running explorer instance and need it from those. I can't recreate it. Yes I did, but maybe missed your point of view somehow...maybe my english is not good enough. I know now that I can't just make it writeable by casting my CString. and after SomeWindowsFunc was called 'fooB'must have been changed too, right? so you don't think this cast above was the source of my problem? Or do you think it was? Agree...the problem was this: CString description; // points internal to TEXT("") m_descriptionEdit.GetWindowTextW((LPTSTR)(LPCTSTR)description, m_descriptionEdit.GetWindowTextLengthW() +... I figured out that this here messed up my programm: void CNewDocumentDialog::okClicked() { if(m_spsDropDown.GetCurSel() != CB_ERR){ Hello, I have the following code segments: CString fooA = TEXT(""); CString fooB = TEXT(""); The question has been asked several time in this board, but without reply. I'm developing a Namespace extension and I need the IShellBrowser of the Explorer to get the IShellView for triggering a... 
The complete code in snippets ^^ was the following: CRecordElement* instElement = new CRecordElement(description, instNodeID, INSTANCE); instElements->push_back(instElement); It wasn't a pointer. In my 'item.h" I just declared CString nodeid; as public member but did not assign anything in the constructor. After adding
5.11. Batch Normalization

Training very deep models is difficult, and it can be tricky to get them to converge (or to converge within a reasonable amount of time). It can be equally challenging to ensure that they do not overfit. This is one of the reasons why it took a long time for very deep networks with over 100 layers to gain popularity.

5.11.1. Training Deep Networks

Let's review some of the practical challenges of training deep networks.

- Data preprocessing is a key aspect of effective statistical modeling. Recall our discussion when we applied deep networks to Predicting House Prices. There we standardized input data to zero mean and unit variance. Standardizing input data makes the distribution of features similar, which generally makes it easier to train effective models since parameters are a priori at a similar scale.
- As we train the model, the activations in intermediate layers of the network will assume rather different orders of magnitude. This can lead to issues with the convergence of the network due to the scale of the activations - if one layer has activation values that are 100x those of another layer, we need to adjust learning rates adaptively per layer (or even per parameter group per layer).
- Deeper networks are fairly complex and are more prone to overfitting. This means that regularization becomes more critical. That said, dropout is nontrivial to use in convolutional layers and does not perform as well there, hence we need a more appropriate type of regularization.
- When training deep networks, the last layers will converge first, at which point the layers below start converging. Unfortunately, once this happens, the weights for the last layers are no longer optimal and they need to converge again. As training progresses, this gets worse.

Batch normalization (BN), as proposed by Ioffe and Szegedy, 2015, can be used to cope with the challenges of deep model training.
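The standardization described in the first bullet can be sketched in a few lines of NumPy. This is an illustrative snippet, not the book's code; the matrix `X` is a made-up stand-in for input features on very different scales:

```python
import numpy as np

# Hypothetical design matrix: 4 samples, 3 features on very different scales.
X = np.array([[100.0, 0.001, 10.0],
              [200.0, 0.002, 20.0],
              [300.0, 0.003, 30.0],
              [400.0, 0.004, 40.0]])

# Standardize each feature (column) to zero mean and unit variance.
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_std = (X - mu) / sigma

# After this, every column has (near-)zero mean and unit standard deviation,
# so all features sit at a similar scale for the optimizer.
```

Batch normalization applies the same idea, but to intermediate activations and per minibatch, rather than once to the raw inputs.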
During training, BN continuously adjusts the intermediate output of the neural network by utilizing the mean and standard deviation of the minibatch. In effect, this makes the optimization landscape of the model smoother, allowing the model to reach a local minimum and to be trained faster. That being said, one has to be careful in order to avoid the already troubling trends in machine learning (Lipton et al., 2018). Batch normalization has been shown (Santurkar et al., 2018) to have no relation at all to internal covariate shift; in fact, it has been shown to cause the opposite of what was originally intended, a point made by Lipton et al., 2018 as well.

In a nutshell, the idea is to standardize an activation using estimated minibatch statistics and then map it back to a given order of magnitude via \(\mathbf{\mu}\) and \(\sigma\). Consequently we can be more aggressive in picking large learning rates on the data. To address the fact that in some cases the activations actually need to differ from standardized data, we introduce a scaling coefficient \(\mathbf{\gamma}\) and an offset \(\mathbf{\beta}\).

We use training data to estimate mean and variance. Unfortunately, the statistics change as we train our model. To address this, we use the current minibatch also for estimating \(\hat{\mathbf{\mu}}\) and \(\hat\sigma\). This is fairly straightforward. All we need to do is aggregate over a small set of activations, such as a minibatch of data. Hence the name Batch Normalization.

A minibatch gives us potentially very noisy estimates of mean and variance. Normally we would consider this a problem. After all, each minibatch has different data, different labels and, with them, different activations, predictions and errors. As it turns out, this is actually beneficial. This natural variation acts as regularization which prevents models from overfitting too badly.
There is some preliminary work by Teye, Azizpour and Smith, 2018 and by Luo et al., 2018 which relates the properties of Batch Normalization (BN) to Bayesian priors and penalties, respectively. In particular, this resolves the puzzle of why BN works best for moderate minibatch sizes, i.e. of size 50-100.

Lastly, let us briefly review the original motivation of BN, namely covariate shift correction due to training. Obviously, rescaling activations to zero mean and unit variance does not entirely remove covariate shift (in fact, recent work suggests that it actually increases it). If it did, it would render deep networks entirely useless; after all, we want the activations to become more meaningful for solving estimation problems. However, at least it prevents mean and variance from diverging, and thus decouples one of the more problematic aspects of training and inference. After a lot of theory, let's look at how BN works in practice. Empirically it appears to stabilize the gradient (fewer exploding or vanishing values), and batch-normalized models appear to overfit less. In fact, batch-normalized models seldom even use dropout.

5.11.2. Batch Normalization Layers

Given a minibatch \(\mathcal{B}\) with estimated mean \(\hat{\mathbf{\mu}}\) and standard deviation \(\hat\sigma\), the batch norm transforms an activation \(\mathbf{x}\) via

\[\mathrm{BN}(\mathbf{x}) = \mathbf{\gamma} \odot \frac{\mathbf{x} - \hat{\mathbf{\mu}}}{\hat\sigma} + \mathbf{\beta}.\]

Recall that mean and variance are computed on the same minibatch \(\mathcal{B}\) to which this transformation is applied. Also recall that the scaling coefficient \(\mathbf{\gamma}\) and the offset \(\mathbf{\beta}\) are parameters that need to be learned. They ensure that the effect of batch normalization can be neutralized as needed.

Having the output for one example depend on the other examples that happen to share its minibatch is highly undesirable once we've trained the model. One way to mitigate this is to compute more stable estimates on a larger set once (e.g. via a moving average) and then fix them at prediction time. Consequently, Batch Normalization behaves differently during training and test time (just like we already saw in the case of Dropout).

5.11.3. Implementation from Scratch

Next, we will implement the batch normalization layer via the NDArray from scratch.
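The train/test asymmetry just described is the heart of any batch-norm implementation. Before turning to the NDArray version, here is a framework-free NumPy sketch of the forward pass for 2-D inputs. This is illustrative only; the function name, the momentum value, and the 2-D restriction are our own simplifications, not the chapter's code. It normalizes with minibatch statistics during training and maintains moving averages for use at prediction time:

```python
import numpy as np

def batch_norm(X, gamma, beta, moving_mean, moving_var,
               eps=1e-5, momentum=0.9, training=True):
    """Batch normalization sketch for 2-D inputs of shape (batch, features)."""
    if training:
        # Use the statistics of the current minibatch...
        mean = X.mean(axis=0)
        var = X.var(axis=0)
        # ...and keep exponential moving averages for prediction time.
        moving_mean = momentum * moving_mean + (1.0 - momentum) * mean
        moving_var = momentum * moving_var + (1.0 - momentum) * var
    else:
        # At prediction time, use the fixed moving averages instead.
        mean, var = moving_mean, moving_var
    X_hat = (X - mean) / np.sqrt(var + eps)
    # Scale and shift, so the effect of BN can be neutralized if needed.
    return gamma * X_hat + beta, moving_mean, moving_var

# Tiny usage example with hypothetical values.
rng = np.random.default_rng(0)
X = rng.normal(5.0, 3.0, size=(8, 4))      # a fake minibatch
gamma, beta = np.ones(4), np.zeros(4)      # learnable parameters
mm, mv = np.zeros(4), np.ones(4)           # moving averages (state)
Y, mm, mv = batch_norm(X, gamma, beta, mm, mv, training=True)
```

With `gamma = 1` and `beta = 0`, each output feature has (near-)zero mean and unit variance over the minibatch; in `training=False` mode the same call uses the stored moving averages, so single examples can be processed consistently.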
In [1]:

import gluonbook as gb

Next, we wrap this computation in a layer class; the layer itself takes care of maintaining the moving averages of mean and variance.

In [2]:

5.11.4. Use a Batch Normalization LeNet

Next, we will modify the LeNet model in order to apply the batch normalization layer. We add the batch normalization layer after all the convolutional layers and after all the fully connected layers. As discussed, we add it before the activation layer.

In [3]:

Now we can train the modified network.

In [4]:

lr, num_epochs, batch_size, ctx = 1.0, 5, 256, gb.try_gpu()
net.initialize(ctx=ctx, init=init.Xavier())
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
train_iter, test_iter = gb.load_data_fashion_mnist(batch_size)
gb.train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx, num_epochs)

training on gpu(0)
epoch 1, loss 0.6442, train acc 0.770, test acc 0.828, time 3.7 sec
epoch 2, loss 0.3957, train acc 0.857, test acc 0.819, time 3.4 sec
epoch 3, loss 0.3439, train acc 0.877, test acc 0.866, time 3.4 sec
epoch 4, loss 0.3169, train acc 0.886, test acc 0.871, time 3.5 sec
epoch 5, loss 0.2996, train acc 0.892, test acc 0.879, time 3.5 sec

Let's have a look at the scale parameter gamma and the shift parameter beta learned from the first batch normalization layer.

In [5]:

net[1].gamma.data().reshape((-1,)), net[1].beta.data().reshape((-1,))

Out[5]:

(
 [1.8761728 1.545957  2.051373  1.5199137 1.1039094 1.8849751]
 <NDArray 6 @gpu(0)>,
 [ 1.2102104  -0.09512085 -0.01889443  1.0439981  -0.3983889  -2.0144103 ]
 <NDArray 6 @gpu(0)>)

5.11.5. Gluon Implementation for Batch Normalization

Gluon ships a ready-made nn.BatchNorm layer, so we can build the same network without our own implementation.

In [6]:

As always, the Gluon variant runs a lot faster, since the code that is being executed is compiled C++/CUDA rather than interpreted Python.
In [7]:

net.initialize(ctx=ctx, init=init.Xavier())
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
gb.train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx, num_epochs)

training on gpu(0)
epoch 1, loss 0.6374, train acc 0.776, test acc 0.858, time 1.9 sec
epoch 2, loss 0.3929, train acc 0.857, test acc 0.832, time 1.9 sec
epoch 3, loss 0.3452, train acc 0.874, test acc 0.854, time 1.9 sec
epoch 4, loss 0.3207, train acc 0.883, test acc 0.846, time 1.9 sec
epoch 5, loss 0.2987, train acc 0.890, test acc 0.850, time 1.9 sec

5.11.7. Problems

5.11.8. References

[1] Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
{- |
   Module: Bio.Sequence.Fasta

   This module incorporates functionality for reading and writing
   sequence data in the Fasta format.  Each sequence consists of a
   header (with a '>' prefix) and a set of lines containing the
   sequence data.

   As Fasta is used for both amino acids and nucleotides, the
   resulting 'Sequence's are type-tagged with 'Unknown'.  If you know
   the type of sequence you are reading, use 'castToAmino' or
   'castToNuc'.
-}

module Bio.Sequence.Fasta
    ( -- * Reading and writing plain FASTA files
      readFasta, writeFasta, hReadFasta, hWriteFasta
      -- * Reading and writing quality files
    , readQual, writeQual, hWriteQual
      -- * Combining FASTA and quality files
    , readFastaQual, hWriteFastaQual, writeFastaQual
      -- * Counting sequences in a FASTA file
    , countSeqs
      -- * Helper function for reading your own sequences
    , mkSeqs
      -- * Data structures
    , Qual
    ) where

import Data.Char (chr) -- isSpace
import Data.List (groupBy, intersperse)
import System.IO
import qualified Data.ByteString.Lazy.Char8 as B
import qualified Data.ByteString.Lazy as BB
import Data.ByteString.Lazy.Char8 (ByteString)

import Bio.Sequence.SeqData

splitsAt :: Offset -> ByteString -> [ByteString]
splitsAt n s = let (s1, s2) = B.splitAt n s
               in if B.null s2 then [s1] else s1 : splitsAt n s2

{- -- ugly?
class SeqType sd where
    toSeq   :: sd -> sd -> Sequence
    fromSeq :: Sequence -> (sd, sd)

instance SeqType B.ByteString where
    toSeq = Seq
    fromSeq (Seq x y) = (x, y)

instance SeqType BS.ByteString where
    toSeq h s = Seq (B.fromChunks [h]) (B.fromChunks [s])
    fromSeq (Seq x y) = (c x, c y)
        where c = BS.concat . B.toChunks
-}

-- | Lazily read sequences from a FASTA-formatted file
readFasta :: FilePath -> IO [Sequence Unknown]
readFasta f = (mkSeqs . blines) `fmap` B.readFile f

-- | Replacement for 'lines' that gobbles control-M's.
-- Some tools, like CLC, like to add these to the end of each line.
blines :: B.ByteString -> [B.ByteString]
blines = B.lines . B.filter (/= Data.Char.chr 13)

-- | Write sequences to a FASTA-formatted file.  Line length is 60.
writeFasta :: FilePath -> [Sequence a] -> IO ()
writeFasta f ss = do
  h <- openFile f WriteMode
  hWriteFasta h ss
  hClose h

-- | Read quality data for sequences from a file.
readQual :: FilePath -> IO [Sequence Unknown]
readQual f = (mkQual . blines) `fmap` B.readFile f

-- | Write quality data for sequences to a file.
writeQual :: FilePath -> [Sequence a] -> IO ()
writeQual f ss = do
  h <- openFile f WriteMode
  hWriteQual h ss
  hClose h

-- | Read sequences and associated quality data.  Will error if the
-- sequences and qualities do not match one-to-one in sequence.
readFastaQual :: FilePath -> FilePath -> IO [Sequence Unknown]
readFastaQual s q = do
  ss <- readFasta s
  qs <- readQual q
  -- warning: assumes correct qual file!
  let mkseq s1@(Seq x y _) s2@(Seq _ _ (Just z))
          | seqlabel s1 /= seqlabel s2 =
              error ("mismatching sequence and quality: "
                     ++ toStr (seqlabel s1) ++ "," ++ toStr (seqlabel s2))
          | B.length y /= B.length z =
              error ("mismatching sequence and quality lengths:"
                     ++ toStr (seqlabel s1) ++ ","
                     ++ show (B.length y, B.length z))
          | otherwise = Seq x y (Just z)
      mkseq _ _ =
          error "readFastaQual: could not combine Fasta and Qual information"
      zipWith' f (x:xs) (y:ys) = f x y : zipWith' f xs ys
      zipWith' _ []     []     = []
      zipWith' _ _      _      =
          error "readFastaQual: Unbalanced sequence and quality"
  return $ zipWith' mkseq ss qs

-- | Write sequence and quality data simultaneously.
-- This may be more laziness-friendly.
writeFastaQual :: FilePath -> FilePath -> [Sequence a] -> IO ()
writeFastaQual f q ss = do
  hf <- openFile f WriteMode
  hq <- openFile q WriteMode
  hWriteFastaQual hf hq ss
  hClose hq
  hClose hf

hWriteFastaQual :: Handle -> Handle -> [Sequence a] -> IO ()
hWriteFastaQual hf hq = mapM_ wFQ
    where wFQ s = wFasta hf s >> wQual hq s

-- | Lazily read sequences from a handle
hReadFasta :: Handle -> IO [Sequence Unknown]
hReadFasta h = (mkSeqs . blines) `fmap` B.hGetContents h

-- | Write sequences in FASTA format to a handle.
hWriteFasta :: Handle -> [Sequence a] -> IO ()
hWriteFasta h = mapM_ (wFasta h)

wHead :: Handle -> SeqData -> IO ()
wHead h l = do
  B.hPut h $ B.pack ">"
  B.hPut h l
  B.hPut h $ B.pack "\n"

wFasta :: Handle -> Sequence a -> IO ()
wFasta h (Seq l d _) = do
  wHead h l
  let ls = splitsAt 60 d
  mapM_ (B.hPut h) $ intersperse (B.pack "\n") ls
  B.hPut h $ B.pack "\n"

hWriteQual :: Handle -> [Sequence a] -> IO ()
hWriteQual h = mapM_ (wQual h)

wQual :: Handle -> Sequence a -> IO ()
wQual h (Seq l _ (Just q)) = do
  wHead h l
  let qls = splitsAt 20 q
      qs  = B.pack . unwords . map show . BB.unpack
  mapM_ ((\l' -> B.hPut h l' >> B.hPut h (B.pack "\n")) . qs) qls
wQual _ (Seq _ _ Nothing) = return ()

-- | Convert a list of FASTA-formatted lines into a list of sequences.
-- Blank lines are ignored.
-- Comment lines starting with "#" are allowed between sequences (and ignored).
-- Lines starting with ">" initiate a new sequence.
mkSeqs :: [ByteString] -> [Sequence Unknown]
mkSeqs = map mkSeq . blocks

mkSeq :: [ByteString] -> Sequence Unknown
mkSeq (l:ls) -- maybe check this?
    -- | B.length l < 2 || isSpace (B.head $ B.tail l)
    --     = error "Trying to read sequence without a name...and failing."
    | otherwise = Seq (B.drop 1 l) (B.concat $ takeWhile isSeq ls) Nothing
    where isSeq s = (not . B.null) s
                    && ((flip elem) (['A'..'Z'] ++ ['a'..'z']) . B.head) s
mkSeq [] = error "empty input to mkSeq"

mkQual :: [ByteString] -> [Sequence Unknown]
mkQual = map f . blocks
    where f []     = error "mkQual on empty input - this is impossible"
          f (l:ls) = Seq (B.drop 1 l) B.empty (Just $ BB.pack $ go 0 ls)
          isDigit x = x <= 58 && x >= 48
          go i (s:ss) = case BB.uncons s of
              Just (c, rs) ->
                  if isDigit c
                  then go (c - 48 + 10*i) (rs:ss)
                  else let rs' = BB.dropWhile (not . isDigit) rs
                       in if BB.null rs'
                          then i : go 0 ss
                          else i : go 0 (rs':ss)
              Nothing -> i : go 0 ss
          go _ [] = []

-- | Split lines into blocks starting with '>' characters.
-- Filter out # comments (but not semicolons?)
blocks :: [ByteString] -> [[ByteString]]
blocks = groupBy (const (('>' /=) . B.head))
         . filter ((/= '#') . B.head)
         . dropWhile (('>' /=) . B.head)
         . filter (not . B.null)

countSeqs :: FilePath -> IO Int
countSeqs f = do
  ss <- B.readFile f
  let hdrs = filter (('>' ==) . B.head) $ filter (not . B.null) $ blines ss
  return (length hdrs)
C# comes with lots of built-in datatypes, but not everything we might want to use. We start with a very simple example of building your own new type of object: Contact information for a person involves several pieces of data, and they are all unified by the fact that they are for one person, so we would like to store them together as a unit. For simplicity, let us just consider the contact information to be name, phone number, and email address. You could always keep three independent string variables, but conceptually the main idea is the contact. It just happens to have parts.

In order to treat a contact as one entity, we create a class, Contact. This way we can have a single variable refer to a Contact object. Such an object is an instance of the class.

It is important to distinguish between a class and an instance of a class: A class provides a template or instructions to make new instance objects. A common comparison is that a class is like a cookie cutter while an instance of the class is like a cookie. You might consider constructor parameters as being for different decorations on different cookies, so not all cookies must end up completely the same.

Later we will see an example for rational numbers, The Rational Class, where the parts of the class are more tightly integrated, but that is more complicated, so we defer it.

We have already considered built-in types with internal state, like a List: Each List can contain different data, and the internal data can be changed. The idea of creating a new type of object opens new ground for managing data. Thus far we have stored data as local variables, and we have called functions, with two ways to get information in and out of functions: through parameters and through returned values. We have stored and passed around built-in types of object using this model.

We have alternatives for storing and accessing data in the methods within a new class that we write. Now we have the idea of an object that has internal state (like a contact with a name, phone, and email).
We shall see that this state is not stored in local variables and does not need to be passed through parameters for methods within the class. Pay careful attention as we introduce this new location for data and the new ways of interacting with it. This is quite a shift. Do not take it lightly.

We can create a new object with the new syntax. We can give parameters defining the initial state of the new object. In our example, the obvious thing to do is supply parameters giving values for the three parts of the object's state, so we can plan that

Contact c = new Contact("Marie Ortiz", "773-508-7890", "mortiz2@luc.edu");

would create a new Contact storing the data. A Contact object, created with new, is an instance of the class Contact.

Like with built-in types, we can have the natural operations on the type as methods. For instance we can call GetName, GetPhone and GetEmail.

Thinking ahead to what we would like for our Contact objects, here is the testing code of contact1/test_contact1.cs:

using System;

namespace IntroCS
{
   public class TestContact
   {
      public static void Main()
      {
         Contact c1 = new Contact("Marie Ortiz", "773-508-7890",
                                  "mortiz2@luc.edu");
         Contact c2 = new Contact("Otto Heinz", "773-508-9999",
                                  "oheinz@luc.edu");
         Console.WriteLine("Marie's full name: {0}", c1.GetName());
         Console.WriteLine("Her phone number: {0}", c1.GetPhone());
         Console.WriteLine("Her email: {0}", c1.GetEmail());
         Console.WriteLine("\nFull contact info for Otto:");
         c2.Print();
      }
   }
}

When running this testing code, we would like the results:

Marie's full name: Marie Ortiz
Her phone number: 773-508-7890
Her email: mortiz2@luc.edu

Full contact info for Otto:
Name: Otto Heinz
Phone: 773-508-9999
Email: oheinz@luc.edu

We are using the same object-oriented notation that we have for many other classes: Calls to instance methods are always attached to a specific object. That has always been the part through the . of object.method(...).

So far we have been thinking and illustrating how we would like objects in this Contact class to look and behave from the outside. We could be describing another library class. Now, for the first time, we start to delve inside, to the code and concepts needed to make this happen. We start with the most basic parts.
First we need a class: Our code is nested inside

public class Contact
{
   // ... fields, constructor, code for Contact omitted
}

This is the same sort of wrapper we have used for our Main programs! Before, everything inside was labeled static. Now we see what happens with the static keyword omitted....

A Contact has a name, phone number and email address. We must remember that data. Each individual Contact that we use will have its own name, phone number and email address. We have used some static variables before in classes, with the keyword static, where there is just one copy for the whole class. If we omit the static we get an instance variable, that is, the particular data for an individual Contact, for an individual instance of the class. This is our new place to store data: We declare these in the class and outside any method declaration. (This is in the same place as we would store Static Variables.) They are fields of the class. As we will discuss more in terms of safety and security, we add the word "private" at the beginning:

public class Contact
{
   private string name;
   private string phone;
   private string email;

   // ... constructor, code for Contact omitted
}

You also see that we are lazy in this example, and abbreviate the longer descriptions fullName, phoneNumber and emailAddress.

It is important to distinguish instance variables of a class from local variables. A local variable is only accessible inside the block (surrounded by braces) where it was declared, and is destroyed at the end of the execution of that block. However, the class fields name, phone and email live on with the object. The lifetime of a variable is from when it is first created until it is no longer accessible by the program. We repeat:

Note

Instance variables have a completely different lifetime and scope from local variables. An object and its instance variables persist from the time a new object is created with new for as long as the object remains referenced in the program.
We have used constructors for built-in types. Now for the first time we create one. The constructor is a slight variation on a regular method:

- Its name is the same as the kind of object constructed, so here it is the class name, Contact.
- It has no return type (and no static). Implicitly you are creating the kind of object named, a Contact in this case.
- The constructor can have parameters like a regular method.

We will certainly want to give a state to our new object. That means giving values to its fields. Recall that we want to store this state in the instance variables name, phone and email:

public Contact(string fullName, string phoneNumber,
               string emailAddress)
{
   name = fullName;
   phone = phoneNumber;
   email = emailAddress;
}

While the local variables in the formal parameters disappear after the constructor terminates, we want the data to live on as the state of the object. In order to remember state after the constructor terminates, we must make sure the information gets into the instance variables for the object. This is the basic operation of most constructors: Copy desired formal parameters in to initialize the state in the fields. That is all our simple code above does.

Note that name, phone and email appear here as bare names, with no object reference and dot. So far we have always been referring to a part of a built-in type of object defined in a different class, like arrayObject.Length. The constructor is creating an object, and the use of the bare instance variable names is understood to be giving values to the instance variables in this Contact object that is being constructed. Inside a constructor, and also inside an instance method (discussed below), C# allows this shorthand notation without the someObject. prefix. Instance variable names and method names used without an object reference and dot refer to the current instance.

Whenever a constructor or non-static method in the class is called, there is always a current object:

- Main must create the first object.
- If methodName is called with explicit dot notation, someObject.methodName(), then it is acting on the current object someObject.

In any program's execution, the first call to an instance method must either be in this explicit form or from within a constructor for a new object. Again, this means that in execution, whenever an instance method is called, there is a current specific object. This is the object associated with any instance variable or method referred to in that method, if there is not an explicit prefix in the someObject. form. This will take practice to get used to.

In instance methods you have an extra way of getting data in and out of the method: reading or setting instance variables. (As we have just pointed out, in execution there will always be a current object with its specific state.) The simplest methods do nothing but read or set instance variables. We start with those.

The private in front of the field declarations was important to keep code outside the class from messing with the values. On the other hand, we do want others to be able to inspect the name, phone and email, so how do we do that? Use public methods. Since the fields are accessible anywhere inside the class's instance methods, and public methods can be used from outside the class, we can simply code

public string GetName()
{
   return name;
}

public string GetPhone()
{
   return phone;
}

public string GetEmail()
{
   return email;
}

These methods allow one-way communication of the name, phone and email values out from the object. These are examples of a simple category of methods: A getter simply returns the value of a part of the object's state, without changing the object at all. Note again that there is no static in the method heading. The field value for the current Contact is returned.

A standard convention that we are following: Have getter method names start with "Get", followed by the name of the data to be returned.
In this first simple version of Contact we add one further method, to print all the contact information with labels.

public void Print()
{
   Console.WriteLine(@"Name: {0}
Phone: {1}
Email: {2}", name, phone, email);
}

Again, we use the instance variable names, plugging them into a format string. Remember the @ syntax for multiline strings from String Special Cases.

You can see the entire Contact class in contact1/contact1.cs. This is our first complete class defining a new type of object. Look carefully to get used to the features introduced, before we add more ideas.

We will be making an elaboration on the Contact class from here on. We introduce new parts individually, but the whole code is in contact2/contact2.cs.

The current object is implicit inside a constructor or instance method definition, but it can be referred to explicitly. It is called this. In a constructor or instance method, this is automatically a legal local variable to reference. You usually do not need to use it explicitly, but you could. For example, the current Contact object's name field could be referred to as either this.name or the shorter plain name. In our next version of the Contact class we will see several places where an explicit this is useful.

In the first version of the constructor, repeated here,

public Contact(string fullName, string phoneNumber,
               string emailAddress)
{
   name = fullName;
   phone = phoneNumber;
   email = emailAddress;
}

we used different names for the instance variables and for the formal parameters that initialize them. We chose reasonable names, but we are adding extra names that we are not going to use later, and it can be confusing. The most obvious names for the formal parameters that will initialize the instance variables are the same names. If we are not careful, there is a problem with that.

An instance variable, however named, and a local variable are not the same.
This is nonsensical:

public Contact(string name, string phone, string email)
{
   name = name; // ????
   ...

Logically, we want this pseudo-code in the constructor:

   instance variable name = local variable name

We have to disambiguate the two uses. The compiler always looks for local variable identifiers first, so plain name will refer to the local variable name declared in the formal parameter list. This local variable identifier hides the matching instance variable identifier. We have to do something else to refer to the instance variable. The explicit this object comes to the rescue: this.name refers to a part of this object. It must refer to the instance variable, not the local variable. Our alternate constructor is:

public Contact(string name, string phone, string email)
{
   this.name = name;
   this.phone = phone;
   this.email = email;
}

The original version of Contact makes a Contact object immutable: Once it is created with the constructor, there is no way to change its internal state. The only assignments to the private instance variables are the ones in the constructor. In real life people can change their email addresses. We might like to allow that with our Contact objects. Users can read the data in a Contact with the getter methods. Now we need setter methods. The naming convention is similar: start with "Set". In this case we must supply the new data, so setter methods need a parameter:

public void SetPhone(string newPhone)
{
   phone = newPhone;
}

In SetPhone, like in our original constructor, we illustrate using a new name for the parameter that sets the instance variable.
For comparison, we use the alternate identifier-matching approach in the other setter:

public void SetEmail(string email)
{
   this.email = email;
}

Now we can alter the contents of a Contact:

Contact c1 = new Contact("Marie Ortiz", "773-508-7890",
                         "mortiz2@luc.edu");
c1.SetEmail("maria.ortiz@gmail.com");
c1.SetPhone("555-555-5555");
c1.Print();

would print

Name: Marie Ortiz
Phone: 555-555-5555
Email: maria.ortiz@gmail.com

We created the Print method to display the state of a Contact. A good design decision is to separate the different actions: the first is to generate the 3-line string showing the full state of the object. Once we have this string, we can easily print it, or write it to a file, or ....  Hence we want a method to generate a descriptive string.

Think more generally about string representations: All the built-in types can be concatenated into strings with the '+' operator, or displayed with Console.Write. We would like that behavior with our custom types, too. How can the compiler know how to handle types that were not invented when the compiler was written? The answer is to have common features among all objects. Any object has a ToString method, and that method is used implicitly when an object is used with string concatenation, and also for Write. The default version supplied by the system is not very useful for an object that it knows nothing about! You need to define your own version, one that knows how you have defined your type, with its own specific instance variables. You need to have that version used in place of the default: You need to override the default. To emphasize the change in meaning, the word override must be in the heading:

public override string ToString()
{
   return string.Format(@"Name: {0} {1} {2}", name, phone, email);
}

See what the method does: it uses the object state to create and return a single string representation of the object.
For any kind of new object that you create and want to be able to implicitly convert to a string, you need a ToString method with the exact same heading as the ToString for a Contact. A more complete discussion of override would lead us into class hierarchies and inheritance, which we are not emphasizing in this book.

We still might like to have a convenience method Print. We want one instance method, Print, to call another instance method, ToString, for the same object. How does this work? It is like when the instance method GetName refers to an instance variable name without using dot notation. Then name is assumed to refer to this object, the one associated with the call to GetName. We can change our Print method:

public void Print()
{
   Console.WriteLine(ToString());
}

Here ToString() is a method called without dot notation explicitly attaching it to an object. As with instance variables, it is implicitly attached to this object, the one attached to the call to Print.

Again, the whole code for the elaborated Contact is in example contact2/contact2.cs. New testing code is in contact2/test_contact2.cs. Run the project and check that it does what you would expect. There are several new features illustrated in the testing code:

Console.WriteLine("All together:\n{0}", c1);
Console.WriteLine("Full contact info for Otto:");
c2.Print();
c1.SetEmail("maria.ortiz@gmail.com");
c2.SetPhone("123-456-7890");
Contact c3 = new Contact("Amy Li", "847-111-2222",
                         "amy.li22@yahoo.com");
Console.WriteLine("With changes and added contact:");
var allc = new List<Contact>(new Contact[] {c1, c2, c3});
foreach (Contact c in allc)
{
   Console.WriteLine("\n" + c);
}

Contact is now a type we can use with other types. Main ends by creating a List<Contact> and an array of Contacts, and processing the Contacts in the List with a foreach loop. We mentioned that this particular signature in the ToString heading means that the system recognizes it in string concatenation and in substitutions into a Write or WriteLine format string. Find both in Main.
The ToString override also means that the body of our Print method could be even simpler, like this:

public void Print()
{
   Console.WriteLine(this);
}

When we use Console.WriteLine on this current object, which is not already a string, there is an automatic call to ToString.

A common error is for students to try to declare the instance variables twice: once in the regular instance variable declarations, outside of any constructor or method, and then again inside a constructor, like:

public Contact(string fullName, string phoneNumber, string emailAddress)
{
   string name = fullName;       // LOGICAL ERROR!
   string phone = phoneNumber;   // LOGICAL ERROR!
   string email = emailAddress;  // LOGICAL ERROR!
}

This is deadly. It is worse than redeclaring a local variable, which at least will trigger a compiler error.

Warning

Instance variables only get declared outside of all functions and constructors. Same-name local variable declarations hide the instance variables, but compile just fine. The local variables disappear after the constructor ends, leaving the instance variables without your desired initialization. Instead, the hidden instance variables just get the default initialization: null for an object or 0 for a number.

There is a related strange compiler error. This is not likely to happen frequently, but thinking through its logic (or illogic) could be helpful in understanding local and instance variables: Generally, when you get a compiler error, the error is at or before the location where the error is referenced, but with local variables covering instance variables, the real cause can come later in the text of the method. Below, when you first refer to r in BadNames, it appears to be correctly referring to the instance variable r:

class ForwardError
{
   private int r = 3;
   // ...

   void BadNames(int a, int b)
   {
      int n = a*r + b;  // legal in text *just* to here; instance field r
      //...
      int r = a % b;    // r declaration makes *earlier* line wrong
      //...
   }
}

The compiler scans through all of BadNames, and sees the r declared locally in its scope. The error may be marked on the earlier line, where the compiler then assumes r is the later-declared local int variable, not the instance variable. The error it sees is a local variable used before declaration. This is based on a real student example.

This example points to a second issue: using variable names that are too short and not descriptive of the variable meaning, and so may easily be the same name as something unrelated.

Be careful to distinguish lifetime and scope. Either a local variable or an instance variable can be temporarily out of scope, but still be alive. Can you construct an example to illustrate that? One of ours is lifetime_scope/lifetime_scope.cs.
http://books.cs.luc.edu/introcs-csharp/classes/a-first-class.html
Hi, I am trying to store pointers to some inherited classes within a linked list, but am stuck with the first part. I understand that each node contains two pointers to the next and previous, but how should I create the first 'head node' that points to the first object that will be added? Here is my code:

header file:
Code:
class Vehicle
{
    friend class Listclass;
public:
    Vehicle(int publicmpg = 0);
protected:
    Vehicle *llink, *rlink;
    int protectedmpg;
};

Vehicle::Vehicle(int publicmpg)
{
    protectedmpg = publicmpg;
}

//-------------------------
//
//-------------------------

class Listclass
{
public:
    //addvehicle()
    Listclass(int publicfirst = 0);
protected:
    int protectedfirst;
};

Listclass::Listclass(int publicfirst)
{
    protectedfirst = publicfirst;
}

//-------------------------
//
//-------------------------

class SportsCar : public Vehicle
{
public:
    SportsCar(int publicsixspeed = 0, int publicmpg = 0);
protected:
    int protectedsixspeed;
};

SportsCar::SportsCar(int publicsixspeed, int publicmpg) : Vehicle(publicmpg)
{
    protectedsixspeed = publicsixspeed;
}

main.cpp:

How/should I create the 'head node' first, and then add the first items to the list?
Code:
#include <iostream>
#include <list>
#include "Vehicle.h"
using namespace std;

int main()
{
    list<Vehicle*> listname;

    Listclass headnode; // Should I create the 'head node' here before
                        // going on to creating actual objects?

    int publicmpg = 11;
    int publicsixspeed = 22;

    SportsCar node(publicsixspeed, publicmpg);
    SportsCar *ptrToSportsCar;
    ptrToSportsCar = &node;

    listname.push_back(ptrToSportsCar);
    cout << "past push" << endl;

    system("PAUSE");
    return 0;
}

I've read up and understand the 'head node' will only contain a pointer to the first node in the list, but how do I create it? I thought it might be easiest to create a singly linked list first, then try to modify it into a doubly linked list.
Once I have the list established I think I'll be fine adding new items to the list, but I'm stuck 'getting it all set up' if you like. Appreciate anyone's advice. Thanks!
https://cboard.cprogramming.com/cplusplus-programming/116093-lists-first-step-create-head-node.html
- Abstraction
- Flavors of Pen
- Need a pen?

Pens used to live in RoboFab. With the development of FontParts, RoboFab pens were reorganized and moved to other packages. Some pens now live in ufoLib, some in fontPens, and some in fontTools. See RoboFab vs. FontParts APIs > Pens for a complete list of RoboFab pens and their new locations.

Abstraction

Using the pen as an intermediate, the code that just wants to draw a glyph doesn’t have to know the internal functions of the glyph, and in turn, the glyph doesn’t have to learn anything about specific drawing environments. Different kinds of glyph objects work very differently on the inside:

- fontParts.nonelab.RGlyph stores the coordinates itself and writes to GLIF
- mojo.roboFont.RGlyph stores data in RoboFont
- robofab.objects.objectsRF.RGlyph stores data in FontLab
- etc.

Despite the differences, all RGlyph objects have a draw method which follows the same abstract drawing procedures. So the code that uses RGlyph.draw(pen) is not aware of the difference between the three kinds of glyphs.

Why pens?

In order to make a glyph draw in, for instance, a new graphics environment, you only need to write a new pen and implement the standard methods for the specifics of the environment. When that’s done, all glyphs can draw in the new world.

There are basically two different kinds of pen, Pen and PointsPen, which do different things for different purposes, and are intended for different methods in RGlyph.

Pen

The normal Pen object and pens that descend from it can be passed to aGlyph.draw(aPen). The glyph calls these methods of the pen object to draw. It’s very similar to “Drawing like PostScript”.

- moveTo(pt) - Move the pen to the (x, y) in pt.
- lineTo(pt) - Draw a straight line to the (x, y) coordinate in pt.
- curveTo(pt1, pt2, pt3) - Draw a classic Cubic Bezier (“PostScript”) curve through pt1 (offcurve), pt2 (also offcurve) and pt3, which is oncurve again.
- qCurveTo(*pts, **kwargs) - Draw a Quadratic (“TrueType”) curve through, well, any number of offcurve points. This is not the place to discuss Quadratic esoterics, but at least: this pen can deal with them and draw them. If the last point is set to None, no oncurve point is needed; the implied start point lies in between the first and last offcurve points.
- closePath() - Tell the pen the path is finished.
- addComponent(baseName, transform) - Tell the pen to add a component of baseName, with a transformation matrix transform: a 6-tuple containing an affine transformation (xx, xy, yx, yy, x, y).

PointsPen

A PointsPen works with on-curve and off-curve points. The PointsPen is passed to aGlyph.drawPoints(aPointsPen).

- beginPath(identifier=None) - Start a new sub path.
- endPath() - End the current sub path.
- addPoint(pt, segmentType=None, smooth=False, name=None, **kwargs) - Add a point to the current sub path.
- addComponent(baseName, transform) - Add a component of baseName, with a transformation matrix transform: a 6-tuple containing an affine transformation (xx, xy, yx, yy, x, y).

Need a pen?

If you need a pen to do some drawing in a RGlyph object, you can ask the glyph to get you one. Depending on the environment you’re in, the RGlyph will get you the right kind of pen object to do the drawing.

In RoboFont

newGlyph = CurrentGlyph()
pen = newGlyph.getPen()

In NoneLab using FontParts

from fontParts.nonelab import RGlyph
newGlyph = RGlyph()
pen = newGlyph.getPen()

In FontLab using RoboFab

from robofab.world import CurrentGlyph
newGlyph = CurrentGlyph()
pen = newGlyph.getPen()

For a more in-depth look at pens, see Using pens.

Adapted from the RoboFab documentation.
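As an aside, the Pen protocol described above is easy to implement: here is a minimal stand-alone pen that just records the calls made to it. (fontTools ships a similar RecordingPen; this toy version is independent of any font library, and the class name is our own.)

```python
# A minimal pen-like object implementing the drawing protocol above.
# A glyph's draw() method would call these; here we call them by hand.
class RecordingPen:
    def __init__(self):
        self.commands = []

    def moveTo(self, pt):
        self.commands.append(("moveTo", pt))

    def lineTo(self, pt):
        self.commands.append(("lineTo", pt))

    def curveTo(self, pt1, pt2, pt3):
        self.commands.append(("curveTo", pt1, pt2, pt3))

    def closePath(self):
        self.commands.append(("closePath",))

    def addComponent(self, baseName, transform):
        self.commands.append(("addComponent", baseName, transform))


# A triangle "drawn" through the pen:
pen = RecordingPen()
pen.moveTo((0, 0))
pen.lineTo((100, 0))
pen.lineTo((50, 80))
pen.closePath()
print(pen.commands[0])  # ('moveTo', (0, 0))
```

Any environment that wants glyphs to draw in it only has to supply an object with these methods — that is the whole abstraction.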
https://doc.robofont.com/documentation/topics/pens/
Prerequisites: Socket Programming in Java

This article assumes that you have basic knowledge of socket programming in Java and the basic details of the client-server model used in communication.

Why use threads in network programming?

The reason is simple: we don’t want only a single client to connect to the server at a particular time, but many clients simultaneously. We want our architecture to support multiple clients at the same time. For this reason, we must use threads on the server side so that whenever a client request comes, a separate thread can be assigned to handle each request.

Let us take an example. Suppose a Date-Time server is located at a place, say X. Being a generic server, it does not serve any particular client, but rather a whole set of generic clients. Also suppose that at a particular time, two requests arrive at the server. With our basic server-client program, the request which comes even a nanosecond earlier would be able to connect to the server and the other request would be rejected, as no mechanism is provided for handling multiple requests simultaneously. To overcome this problem, we use threading in network programming.

The following article will focus on creating a simple Date-Time server that handles multiple client requests at the same time.

Quick Overview

As usual, we will create two Java files, Server.java and Client.java. The server file contains two classes, namely Server (a public class for creating the server) and ClientHandler (for handling any client using multithreading). The client file contains only one public class, Client (for creating a client). Below is the flow diagram of how these three classes interact with each other.

Server Side Programming (Server.java)

- Server class: The steps involved on the server side are similar to the article Socket Programming in Java, with a slight change to create the thread object after obtaining the streams and port number.
- Establishing the Connection: The server socket object is initialized and, inside a while loop, a socket object continuously accepts incoming connections.
- Obtaining the Streams: The input stream object and output stream object are extracted from the current request’s socket object.
- Creating a handler object: After obtaining the streams and port number, a new ClientHandler object (the above class) is created with these parameters.
- Invoking the start() method: The start() method is invoked on this newly created thread object.

ClientHandler class: As we will be using separate threads for each request, let’s understand the working and implementation of the ClientHandler class extending Thread. An object of this class will be instantiated each time a request comes.

- First of all, this class extends Thread so that its objects assume all the properties of threads.
- Secondly, the constructor of this class takes three parameters which can uniquely identify any incoming request, i.e. a Socket, a DataInputStream to read from, and a DataOutputStream to write to. Whenever we receive any request from a client, the server extracts its port number, the DataInputStream object and the DataOutputStream object, creates a new thread object of this class, and invokes the start() method on it.

Note: Every request will always have a triplet of socket, input stream and output stream. This ensures that each object of this class writes on one specific stream rather than on multiple streams.

- Inside the run() method of this class, it performs three operations: request the user to specify whether the time or date is needed, read the answer from the input stream object, and accordingly write the output on the output stream object.

Output

A new client is connected : Socket[addr=/127.0.0.1,port=60536,localport=5056]
Assigning new thread for this client
Client Socket[addr=/127.0.0.1,port=60536,localport=5056] sends exit...
Closing this connection.
Connection closed

Client Side Programming (Client.java)

Client side programming is similar to a general socket programming program, with the following steps:

- Establish a Socket Connection
- Communication

Output:

What do you want?[Date | Time].. Type Exit to terminate connection.
Date
2017/06/16
What do you want?[Date | Time].. Type Exit to terminate connection.
Time
05:35:28
What do you want?[Date | Time].. Type Exit to terminate connection.
Geeks
Invalid input
What do you want?[Date | Time].. Type Exit to terminate connection.
Exit
Closing this connection : Socket[addr=localhost/127.0.0.1,port=5056,localport=60536]
Connection closed

How do these programs work together?

- When a client, say client1, sends a request to connect to the server, the server assigns a new thread to handle this request. The newly assigned thread is given access to the streams for communicating with the client.
- After assigning the new thread, the server, via its while loop, again comes into the accepting state.
- When a second request comes while the first is still in process, the server accepts this request and again assigns a new thread to process it. In this way, multiple requests can be handled even when some requests are still in process.

How to test the above program on your system?

Save the two programs in the same package or anywhere. Then first run Server.java followed by Client.java. You can either copy the client program into two or three separate files and run them individually, or, if you have an IDE like Eclipse, run multiple instances from the same program. The output shown above is from a single client program; similar results will be achieved if multiple clients are used.
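The thread-per-client pattern described above can be boiled down to a short runnable sketch. The names here (MiniServer, reply) are ours, not the article's classes, and the date/time strings are canned stand-ins mirroring the article's sample output rather than real clock reads:

```java
import java.io.*;
import java.net.*;

public class MiniServer {
    // Canned replies standing in for the article's date/time logic.
    static String reply(String request) {
        switch (request) {
            case "Date": return "2017/06/16";
            case "Time": return "05:35:28";
            default:     return "Invalid input";
        }
    }

    // One thread runs this per accepted client, using that client's streams.
    static void handle(Socket s) {
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println(reply(in.readLine()));
        } catch (IOException e) { /* client went away */ }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket ss = new ServerSocket(0);       // pick a free port
        new Thread(() -> {                           // the accept loop
            try {
                while (true) {
                    Socket s = ss.accept();
                    new Thread(() -> handle(s)).start();  // thread per client
                }
            } catch (IOException e) { /* server socket closed */ }
        }).start();

        // Two sequential clients, as in the article's test run.
        for (String req : new String[] {"Date", "Time"}) {
            try (Socket c = new Socket("localhost", ss.getLocalPort());
                 PrintWriter out = new PrintWriter(c.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                     new InputStreamReader(c.getInputStream()))) {
                out.println(req);
                System.out.println(req + " -> " + in.readLine());
            }
        }
        ss.close();
    }
}
```

Because each accepted socket is handed to its own thread, the accept loop returns to the accepting state immediately — which is exactly why a second client is not rejected while the first is being served.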
https://www.geeksforgeeks.org/introducing-threads-socket-programming-java/?ref=lbp
JSONP Example with Observables

Learning Objectives

Know what JSONP is and how it overcomes CORS. Know how to make JSONP requests in Angular.

What is JSONP?

JSONP is a method of performing API requests which gets around the issue of CORS. CORS is a security measure implemented in all browsers that stops you from using an API in a potentially unsolicited way, and most APIs, including the iTunes API, are protected by it. Because of CORS, if we tried to make a request to the iTunes API URL with the http client library, the browser would issue a CORS error.

The explanation is deep and complex, but a quick summary is that unless an API sets certain headers in the response, a browser will reject the response. The iTunes API we use doesn’t set those headers, so by default any response gets rejected.

So far in this section we’ve solved this by installing a Chrome plugin which intercepts the response and adds the needed headers, tricking the browser into thinking everything is OK — but it’s not a solution you can use if you want to release your app to the public. If we switch off the plugin and try our application again, we get this error in the console:

XMLHttpRequest cannot load. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '' is therefore not allowed access.

JSONP is a solution to this problem: it treats the API as if it were a javascript file. It dynamically adds the API URL as if it were a script tag in our HTML, like so:

<script src=""></script>

The browser then just downloads the javascript file, and since browsers don’t check for CORS when downloading javascript files, this works around the issue of CORS.

Imagine the response from the server was {hello:'world'}. We don’t just want to download the file, because:

The browser would download the json and try to execute it; since json isn’t javascript, nothing would actually happen.

We want to do something with the json once it’s downloaded, ideally call a function passing it the json data.

So APIs that support JSONP return something that looks like javascript. For example, an API might return something like so:

process_response({hello:'world'});

When the file is downloaded, it’s executed as if it were javascript, and it calls a function called process_response, passing in the json data from the API. Of course we need a function called process_response ready and waiting in our application so it can be called, but if we are using a framework that supports JSONP, these details are handled for us.

That’s JSONP in a nutshell. We treat the API as a javascript file. The API wraps the JSON response in a function whose name we define. When the browser downloads the fake API script, it runs it, calling the function and passing it the JSON data.

We can only use JSONP when:
So APIs that support JSONP return something that looks like javascript, for example it might return something like so: process_response({hello:'world'}); When the file is downloaded it’s executed as if it was javascript and calls a function called process_response passing in the json data from the API. Of course we need a function called process_response ready and waiting in our application so it can be called but if we are using a framework that supports JSONP these details are handled for us. That’s is JSONP in a nutshell. We treat the API as a javascript file. The API wraps the JSON response in a function who’s name we define. When the browser downloads the fake API script it runs it, it calls the function passing it the JSON data. We can only use JSONP when: The API itself supports JSONP. It needs to return the JSON response wrapped in a function and it usually lets us pass in the function name we want it to use as one of the query params. We can only use it for GET requests, it doesn’t work for PUT/POST/DELETE and so on. Refactor to JSONP Now we know a little bit about JSONP and since the iTunes API supports JSONP lets refactor our observable application to one that uses JSONP instead. For the most part the functionality is the same as our Http example but we use another client lib called Jsonp and the providers for our Jsonp solution is in the JsonpModule. So firstly lets import and the new JsonpModule and replace all instances of Http with Jsonp. import {JsonpModule, Jsonp, Response} from '@angular/http'; . . . 
@NgModule({
  imports: [
    BrowserModule,
    ReactiveFormsModule,
    FormsModule,
    JsonpModule (1)
  ],
  declarations: [AppComponent],
  bootstrap: [AppComponent],
  providers: [SearchService]
})
class AppModule { }

Then let’s change our SearchService to use the Jsonp library instead of Http, like so:

@Injectable()
export class SearchService {
  apiRoot: string = '';

  constructor(private jsonp: Jsonp) { (1)
  }

  search(term: string) {
    let apiURL = `${this.apiRoot}?term=${term}&media=music&limit=20&callback=JSONP_CALLBACK`; (2)
    return this.jsonp.request(apiURL) (3)
      .map(res => {
        return res.json().results.map(item => {
          return new SearchItem(
            item.trackName,
            item.artistName,
            item.trackViewUrl,
            item.artworkUrl30,
            item.artistId
          );
        });
      });
  }
}

Important: The iTunes API supports JSONP; we just need to tell it what name to use via the callback query parameter. We passed it the special string JSONP_CALLBACK. Angular will replace JSONP_CALLBACK with an automatically generated function name every time we make a request.

That’s it; the rest of the code is exactly the same, and the application works like before.
Listing

import { NgModule, Component, Injectable } from "@angular/core";
import { BrowserModule } from "@angular/platform-browser";
import { platformBrowserDynamic } from "@angular/platform-browser-dynamic";
import { HttpClientJsonpModule, HttpClientModule, HttpClient } from "@angular/common/http";
import { ReactiveFormsModule, FormControl, FormsModule } from "@angular/forms";
import { Observable } from "rxjs";
import { map, debounceTime, distinctUntilChanged, switchMap, tap } from "rxjs/operators";

class SearchItem {
  constructor(
    public track: string,
    public artist: string,
    public link: string,
    public thumbnail: string,
    public artistId: string
  ) {}
}

@Injectable()
export class SearchService {
  apiRoot: string = "";

  constructor(private http: HttpClient) {}

  // search(term) builds the iTunes query URL and maps the JSONP
  // response to SearchItems, as in the service shown earlier.
}

@Component({
  selector: "app",
  template: `
<form>
  <div class="form-group">
    <input type="search" class="form-control" placeholder="Enter search string" [formControl]="searchField">
  </div>
</form>
<div class="text-center">
  <p class="lead" *ngIf="loading">Loading...</p>
</div>
<ul class="list-group">
  <li class="list-group-item" *ngFor="let track of results | async">
    <img src="{{track.thumbnail}}">
    <a target="_blank" href="{{track.link}}">{{ track.track }} </a>
  </li>
</ul>
`
})
class AppComponent {
  private loading: boolean = false;
  private results: Observable<SearchItem[]>;
  private searchField: FormControl;

  constructor(private itunes: SearchService) {}

  ngOnInit() {
    this.searchField = new FormControl();
    this.results = this.searchField.valueChanges.pipe(
      debounceTime(400),
      distinctUntilChanged(),
      tap(_ => (this.loading = true)),
      switchMap(term => this.itunes.search(term)),
      tap(_ => (this.loading = false))
    );
  }
}

@NgModule({
  imports: [
    BrowserModule,
    ReactiveFormsModule,
    FormsModule,
    HttpClientModule,
    HttpClientJsonpModule
  ],
  declarations: [AppComponent],
  bootstrap: [AppComponent],
  providers: [SearchService]
})
class AppModule {}

platformBrowserDynamic().bootstrapModule(AppModule);
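The core JSONP mechanics described earlier — the server wrapping its JSON payload in a call to the callback name the client asked for, and the client having a function of that name ready — can be sketched outside Angular entirely. The names wrapAsJsonp and the sample payload below are purely illustrative, not part of any API:

```typescript
// What a JSONP-enabled server does with ?callback=process_response:
// wrap the JSON body in a call to that function name.
function wrapAsJsonp(callbackName: string, payload: object): string {
  return `${callbackName}(${JSON.stringify(payload)});`;
}

// The "fake API script" the browser would download:
const body = wrapAsJsonp("process_response", { hello: "world" });
console.log(body);  // process_response({"hello":"world"});

// The client side: a function of that name, ready to receive the data.
const received: object[] = [];
function process_response(data: object) {
  received.push(data);
}

// Executing the downloaded "script" is equivalent to this call:
process_response({ hello: "world" });
console.log(received.length);  // 1
```

Angular's Jsonp client automates exactly this: it generates the callback function, substitutes its name for JSONP_CALLBACK in the URL, injects the script tag, and resolves the Observable when the callback fires.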
https://codecraft.tv/courses/angular/http/jsonp-with-observables/
This! Thank you!

> This is a targeted update, addressing some key areas of customer feedback since the Visual Studio 2013 release.

Are you saying you fixed the GUI and brought back setup and deployment projects?

Has search by wildcard made its way back? That's what I want to know.

I'll only update with an update that includes Setup & Deployment, Windows Forms and WPF improvements, and fixes to the user interface and icons.

Awesome, thanks so much.

Add a file to a project, then delete that file but not in the project. Sometimes, when IntelliSense fails to update in time, the navigate popup (Ctrl+,) can list this false file. If you try to open this file from the navigator, the navigator goes into a bug state and remains on screen, unable to close, unable to navigate into any file. It seems like some internal exception is not caught. This problem is especially bad with large C++ projects. I turned off the navigator preview option, if that matters.

Get well soon!

No SSDT-BI support yet, I presume, right?

Not sure what people's complaints are about the VS UI. Though I do miss the MSI template; not sure why Microsoft chose to ditch that feature, as InstallShield Limited Edition is pretty… limited, to say the least. It requires upgrading if you want to use 90% of its features. The other 10% I can use PowerShell or batch scripts for, so it begs the question of what's the point of the limited edition of InstallShield?

I have to agree, I don't see any issues with the UI (I rather like it). Imho, the removal of deployment projects is part of MS's overall move toward wanting us to write apps for Metro (or Modern UI or whatever they're calling it now).

@DLLE – I've reported this issue for you at connect.microsoft.com/…/navigate-to-ctrl-can-get-stuck-after-deleting-files.

ROSLYN!!!!

Nope.

Thank you @Luke

I want to be a member

Can't wait to get this

This update is really buggy here (Win7/x64)! It even crashes during the startup phase without any project loaded, causing Windows error reporting.
I cannot use it and uninstalled it immediately.

@TorstenR: I've reported this issue for you at connect.microsoft.com/…/update-1-crashes-on-win7-x64. Feel free to add additional details to the report.

Issues I've got with VS2013 after using it for just a couple of days: 1) IntelliSense stopped working pretty quickly, so I lost all autocomplete when entering namespaces or class names (a restart brought it back, but boy did it break fast), 2) I STRONGLY dislike that it now adds closing braces any time I add one (when I go up to code I already wrote, it'll insert an extra one even though I already have one below), 3) it's driving me nuts with stale code… I've got a WCF service whose breakpoints stop working because the code has changed; even restarting IIS Express isn't fixing it, so I'm having to change over to a command-line service to debug, 4) you changed the comment/uncomment icons AGAIN?!??!

well done pondicherry…

Holy cow, I just installed Update 1, and now I'm restarting VS2013 every 5 minutes because it locks some PDBs (a simple console app using a simple WCF service). I'm just about fed up enough with it to revert to VS2012. Not what I was hoping for out of a bug fix pack at all.

@Keith Patrick: Regarding the WCF debugging issues you mention, any chance you could report these at connect.microsoft.com/VisualStudio with details about your repro? You also mentioned not being a fan of automatic brace completion. This can be configured in Quick Launch (Ctrl+Q) by searching for "automatic brace completion". On the comment/uncomment icon, VS2013 actually uses an icon that is nearly identical to VS2010.

@Luke Hoban (MS) – There's already an item for it (connect.microsoft.com/…/533411). MS marked it fixed, even though, as you can see from the comments, it's actually gotten worse with VS2013.
This is a typical workflow for MS Connect issues, which is why I gave up on entering bugs there (about half of mine get closed for one reason or another, while the rest get replied to with a "We'll look at it in a future version", which I interpret as "Hopefully you'll forget about it in a few years").

@Keith Patrick, regarding your comment that IntelliSense has stopped working for you but a restart cured the issue: we would like to follow up with you to hear more about the problem if you are still experiencing it. I saw your response regarding Connect, but would definitely encourage you to open a Connect bug on this issue so we can take a look. Thanks, Mark (VS Editor Team)

I installed Update 1 some days ago. In the last few days it has occurred multiple times that IntelliSense (in C#) didn't work anymore, but I cannot see an action responsible for that. Before installing the update, this happened only very seldom, so I think it has to do with the update. As a consequence, some wrong errors are shown in the text editor (red-underlined code parts), while the code compiles without problems.
https://blogs.msdn.microsoft.com/somasegar/2014/01/20/visual-studio-2013-update-1/
CC-MAIN-2016-30
refinedweb
885
67.38
This HTML version of the book is provided for convenience, but it is not the best format for the book. In particular, some of the symbols are not rendered correctly. You might prefer to read the PDF version. You can buy this book at Amazon.

The signals we have worked with so far are periodic, which means that they repeat forever. It also means that the frequency components they contain do not change over time. In this chapter, we consider non-periodic signals, whose frequency components do change over time. In other words, pretty much all sound signals. This chapter also presents spectrograms, a common way to visualize non-periodic signals.

The code for this chapter is in chap03.ipynb, which is in the repository for this book (see Section 0.2). You can also view it at.

Figure 3.1: Chirp waveform near the beginning, middle, and end.

We’ll start with a chirp, which is a signal with variable frequency. thinkdsp provides a Signal called Chirp that makes a sinusoid that sweeps linearly through a range of frequencies. Here’s an example that sweeps from 220 to 880 Hz, which is two octaves from A3 to A5:

signal = thinkdsp.Chirp(start=220, end=880)
wave = signal.make_wave()

Figure 3.1 shows segments of this wave near the beginning, middle, and end. It’s clear that the frequency is increasing.

Before we go on, let’s see how Chirp is implemented. Here is the class definition:

class Chirp(Signal):

    def __init__(self, start=440, end=880, amp=1.0):
        self.start = start
        self.end = end
        self.amp = amp

start and end are the frequencies, in Hz, at the start and end of the chirp. amp is amplitude.

Here is the function that evaluates the signal:

def evaluate(self, ts):
    freqs = np.linspace(self.start, self.end, len(ts)-1)
    return self._evaluate(ts, freqs)

ts is the sequence of points in time where the signal should be evaluated; to keep this function simple, I assume they are equally spaced. If the length of ts is n, you can think of it as a sequence of n−1 intervals of time.
To compute the frequency during each interval, I use np.linspace, which returns a NumPy array of n−1 values between start and end.

_evaluate is a private method that does the rest of the math:

def _evaluate(self, ts, freqs):
    dts = np.diff(ts)
    dphis = PI2 * freqs * dts
    phases = np.cumsum(dphis)
    phases = np.insert(phases, 0, 0)
    ys = self.amp * np.cos(phases)
    return ys

np.diff computes the difference between adjacent elements of ts, returning the length of each interval in seconds. If the elements of ts are equally spaced, the dts are all the same.

The next step is to figure out how much the phase changes during each interval. In Section 1.7 we saw that when frequency is constant, the phase, φ, increases linearly over time:

    φ = 2π f t

When frequency is a function of time, the change in phase during a short time interval, Δt, is:

    Δφ = 2π f(t) Δt

In Python, since freqs contains f(t) and dts contains the time intervals, we can write

dphis = PI2 * freqs * dts

Now, since dphis contains the changes in phase, we can get the total phase at each timestep by adding up the changes:

phases = np.cumsum(dphis)
phases = np.insert(phases, 0, 0)

np.cumsum computes the cumulative sum, which is almost what we want, but it doesn’t start at 0. So I use np.insert to add a 0 at the beginning. The result is a NumPy array where the ith element contains the sum of the first i terms from dphis; that is, the total phase at the end of the ith interval.

Finally, np.cos computes the amplitude of the wave as a function of phase (remember that phase is expressed in radians).

If you know calculus, you might notice that the limit as Δt gets small is

    dφ = 2π f(t) dt

Dividing through by dt yields

    dφ/dt = 2π f(t)

In other words, frequency is the derivative of phase. Conversely, phase is the integral of frequency. When we used cumsum to go from frequency to phase, we were approximating integration.

When you listen to this chirp, you might notice that the pitch rises quickly at first and then slows down.
The chirp spans two octaves, but it only takes 2/3 s to span the first octave, and twice as long to span the second.

The reason is that our perception of pitch depends on the logarithm of frequency. As a result, the interval we hear between two notes depends on the ratio of their frequencies, not the difference. “Interval” is the musical term for the perceived difference between two pitches.

For example, an octave is an interval where the ratio of two pitches is 2. So the interval from 220 to 440 is one octave and the interval from 440 to 880 is also one octave. The difference in frequency is bigger, but the ratio is the same.

As a result, if frequency increases linearly, as in a linear chirp, the perceived pitch increases logarithmically. If you want the perceived pitch to increase linearly, the frequency has to increase exponentially. A signal with that shape is called an exponential chirp. Here’s the code that makes one:

class ExpoChirp(Chirp):

    def evaluate(self, ts):
        start, end = np.log10(self.start), np.log10(self.end)
        freqs = np.logspace(start, end, len(ts)-1)
        return self._evaluate(ts, freqs)

Instead of np.linspace, this version of evaluate uses np.logspace, which creates a series of frequencies whose logarithms are equally spaced, which means that they increase exponentially.

That’s it; everything else is the same as Chirp. Here’s the code that makes one:

signal = thinkdsp.ExpoChirp(start=220, end=880)
wave = signal.make_wave(duration=1)

You can listen to these examples in chap03.ipynb and compare the linear and exponential chirps.

Figure 3.2: Spectrum of a one-second one-octave chirp.

What do you think happens if you compute the spectrum of a chirp? Here’s an example that constructs a one-second, one-octave chirp and its spectrum:

signal = thinkdsp.Chirp(start=220, end=440)
wave = signal.make_wave(duration=1)
spectrum = wave.make_spectrum()

Figure 3.2 shows the result. The spectrum has components at every frequency from 220 to 440 Hz, with variations that look a little like the Eye of Sauron. The spectrum is approximately flat between 220 and 440 Hz, which indicates that the signal spends equal time at each frequency in this range. Based on that observation, you should be able to guess what the spectrum of an exponential chirp looks like.

The spectrum gives hints about the structure of the signal, but it obscures the relationship between frequency and time. For example, we cannot tell by looking at this spectrum whether the frequency went up or down, or both.
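We can check the claim that the spectrum is concentrated between 220 and 440 Hz without thinkdsp, by generating the chirp with the same phase-integration recipe as _evaluate and taking an FFT directly with NumPy:

```python
import numpy as np

# A linear 220-440 Hz chirp, built with the same phase-integration
# recipe as Chirp._evaluate, then its spectrum via the FFT directly.
framerate = 11025
ts = np.linspace(0, 1, framerate + 1)
freqs = np.linspace(220, 440, len(ts) - 1)

dphis = 2 * np.pi * freqs * np.diff(ts)   # phase change per interval
phases = np.insert(np.cumsum(dphis), 0, 0)
ys = np.cos(phases)

amps = np.abs(np.fft.rfft(ys))
fs = np.fft.rfftfreq(len(ys), d=1.0 / framerate)

# Most of the energy should be inside (or just around) the swept band:
band = (fs >= 200) & (fs <= 460)
frac = (amps[band] ** 2).sum() / (amps ** 2).sum()
print(frac > 0.9)  # True
```

The small fraction of energy outside the band is the ripple visible at the edges of the spectrum in Figure 3.2.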
The spectrum has components at every frequency from 220 to 440 Hz, with variations that look a little like the Eye of Sauron.

The spectrum is approximately flat between 220 and 440 Hz, which indicates that the signal spends equal time at each frequency in this range. Based on that observation, you should be able to guess what the spectrum of an exponential chirp looks like.

The spectrum gives hints about the structure of the signal, but it obscures the relationship between frequency and time. For example, we cannot tell by looking at this spectrum whether the frequency went up or down, or both.

Figure 3.3: Spectrogram of a one-second one-octave chirp.

To recover the relationship between frequency and time, we can break the chirp into segments and plot the spectrum of each segment. The result is called a short-time Fourier transform (STFT).

There are several ways to visualize an STFT, but the most common is a spectrogram, which shows time on the x-axis and frequency on the y-axis. Each column in the spectrogram shows the spectrum of a short segment, using color or grayscale to represent amplitude.

As an example, I'll compute the spectrogram of this chirp:

    signal = thinkdsp.Chirp(start=220, end=440)
    wave = signal.make_wave(duration=1, framerate=11025)

Wave provides make_spectrogram, which returns a Spectrogram object:

    spectrogram = wave.make_spectrogram(seg_length=512)
    spectrogram.plot(high=700)

seg_length is the number of samples in each segment. I chose 512 because FFT is most efficient when the number of samples is a power of 2.

Figure 3.3 shows the result. The x-axis shows time from 0 to 1 seconds. The y-axis shows frequency from 0 to 700 Hz. I cut off the top part of the spectrogram; the full range goes to 5512.5 Hz, which is half of the frame rate.

The spectrogram shows clearly that frequency increases linearly over time. However, notice that the peak in each column is blurred across 2–3 cells.
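A quick way to check the "approximately flat between 220 and 440 Hz" claim without thinkdsp is to synthesize the chirp in plain NumPy and measure what fraction of the spectral energy lands inside that band (a rough sketch; with these parameters the fraction should come out close to 1):

```python
import numpy as np

framerate = 11025
ts = np.linspace(0, 1, framerate)
freqs = np.linspace(220, 440, len(ts) - 1)        # linear chirp frequencies
dphis = 2 * np.pi * freqs * np.diff(ts)           # phase step per interval
ys = np.cos(np.insert(np.cumsum(dphis), 0, 0))    # synthesize the chirp

amps = np.abs(np.fft.rfft(ys))                    # spectrum magnitudes
fs = np.fft.rfftfreq(len(ys), d=1.0 / framerate)  # corresponding frequencies

band = (fs >= 220) & (fs <= 440)
energy_in_band = np.sum(amps[band] ** 2) / np.sum(amps ** 2)
```

The small amount of energy outside the band comes from the abrupt start and end of the segment, which is a preview of the leakage discussion coming up.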
This blurring reflects the limited resolution of the spectrogram.

The time resolution of the spectrogram is the duration of the segments, which corresponds to the width of the cells in the spectrogram. Since each segment is 512 frames, and there are 11,025 frames per second, the duration of each segment is about 0.046 seconds.

The frequency resolution is the frequency range between elements in the spectrum, which corresponds to the height of the cells. With 512 frames, we get 256 frequency components over a range from 0 to 5512.5 Hz, so the range between components is 21.6 Hz.

More generally, if n is the segment length, the spectrum contains n/2 components. If the frame rate is r, the maximum frequency in the spectrum is r/2. So the time resolution is n/r and the frequency resolution is (r/2)/(n/2), which is r/n.

Ideally we would like time resolution to be small, so we can see rapid changes in frequency. And we would like frequency resolution to be small so we can see small changes in frequency. But you can't have both. Notice that time resolution, n/r, is the inverse of frequency resolution, r/n. So if one gets smaller, the other gets bigger.

For example, if you double the segment length, you cut frequency resolution in half (which is good), but you double time resolution (which is bad). Even increasing the frame rate doesn't help. You get more samples, but the range of frequencies increases at the same time.

This tradeoff is called the Gabor limit and it is a fundamental limitation of this kind of time-frequency analysis.

Figure 3.4: Spectrum of a periodic segment of a sinusoid (left), a non-periodic segment (middle), a windowed non-periodic segment (right).

In order to explain how make_spectrogram works, I have to explain windowing; and in order to explain windowing, I have to show you the problem it is meant to address, which is leakage.
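Before moving on, the resolution arithmetic above is worth packaging as a small helper (illustrative, not part of thinkdsp). It also makes the Gabor tradeoff explicit: the product of the two resolutions is always 1, so improving one necessarily worsens the other.

```python
def resolutions(seg_length, framerate):
    """Return (time_resolution_s, frequency_resolution_hz) for an STFT."""
    time_res = seg_length / framerate  # duration of each segment, n/r
    freq_res = framerate / seg_length  # spacing between components, r/n
    return time_res, freq_res

t_res, f_res = resolutions(512, 11025)    # the values used in the text
t2, f2 = resolutions(1024, 11025)         # doubling the segment length
```

With seg_length=512 and framerate=11025 this reproduces the numbers above (about 0.046 s and about 21.5 Hz), and doubling the segment length halves the frequency resolution while doubling the time resolution.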
The Discrete Fourier Transform (DFT), which we use to compute Spectrums, treats waves as if they are periodic; that is, it assumes that the finite segment it operates on is a complete period from an infinite signal that repeats over all time. As an example, let's start with a sine signal that contains only one frequency component at 440 Hz:

    signal = thinkdsp.SinSignal(freq=440)

If we select a segment that happens to be an integer multiple of the period, the end of the segment connects smoothly with the beginning, and DFT behaves well:

    duration = signal.period * 30
    wave = signal.make_wave(duration)
    spectrum = wave.make_spectrum()

Figure 3.4 (left) shows the result. As expected, there is a single peak at 440 Hz.

But if the duration is not a multiple of the period, bad things happen. With duration = signal.period * 30.25, the signal starts at 0 and ends at 1. When DFT treats this segment as periodic, the jump from 1 back to 0 creates a discontinuity, and some of the energy that is actually at the fundamental frequency spills into neighboring frequencies; Figure 3.4 (middle) shows the resulting spread of components around the peak. This effect is called leakage.

Figure 3.5: Segment of a sinusoid (top), Hamming window (middle), product of the segment and the window (bottom).

We can reduce leakage by smoothing out the discontinuity between the beginning and end of the segment, and one way to do that is windowing: multiplying the segment by a window function that tapers to zero at both ends, like the Hamming window shown in Figure 3.5.

Here's what the code looks like. Wave provides window, which applies a Hamming window:

    #class Wave:
        def window(self, window):
            self.ys *= window

And NumPy provides hamming, which computes a Hamming window with a given length:

    window = np.hamming(len(wave))
    wave.window(window)

NumPy provides functions to compute other window functions, including bartlett, blackman, hanning, and kaiser. One of the exercises at the end of this chapter asks you to experiment with these other windows.

Figure 3.6: Overlapping Hamming windows.

Now that we understand windowing, we can understand the implementation of make_spectrogram.
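Before getting to make_spectrogram, here is a self-contained sketch of the leakage experiment in plain NumPy (the parameter choices are mine): it compares how much spectral energy lands away from the peak for a non-periodic segment, with and without a Hamming window.

```python
import numpy as np

framerate = 10000
freq = 440.0

def spectrum(duration, windowed=False):
    """Magnitude spectrum of a sinusoid segment, optionally Hamming-windowed."""
    n = int(duration * framerate)
    ts = np.arange(n) / framerate
    ys = np.sin(2 * np.pi * freq * ts)
    if windowed:
        ys = ys * np.hamming(n)
    return np.abs(np.fft.rfft(ys))

def off_peak_energy(amps, k=3):
    """Fraction of energy more than k bins away from the spectral peak."""
    peak = int(np.argmax(amps))
    mask = np.ones(len(amps), bool)
    mask[max(0, peak - k):peak + k + 1] = False
    return np.sum(amps[mask] ** 2) / np.sum(amps ** 2)

duration = 30.25 / freq            # NOT a whole number of periods -> leakage
plain = spectrum(duration)
windowed = spectrum(duration, windowed=True)
```

With no window a noticeable fraction of the energy leaks into sidelobes; the Hamming window reduces that fraction substantially, at the cost of a slightly wider main peak.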
Here is the Wave method that computes spectrograms:

    #class Wave:
        def make_spectrogram(self, seg_length):
            window = np.hamming(seg_length)
            i, j = 0, seg_length
            step = seg_length // 2

            spec_map = {}

            while j < len(self.ys):
                segment = self.slice(i, j)
                segment.window(window)

                t = (segment.start + segment.end) / 2
                spec_map[t] = segment.make_spectrum()

                i += step
                j += step

            return Spectrogram(spec_map, seg_length)

This is the longest function in the book, so if you can handle this, you can handle anything.

The parameter, self, is a Wave object. seg_length is the number of samples in each segment.

window is a Hamming window with the same length as the segments.

i and j are the slice indices that select segments from the wave. step is the offset between segments. Since step is half of seg_length, the segments overlap by half. Figure 3.6 shows what these overlapping windows look like.

spec_map is a dictionary that maps from a timestamp to a Spectrum.

Inside the while loop, we select a slice from the wave and apply the window; then we construct a Spectrum object and add it to spec_map. The nominal time of each segment, t, is the midpoint.

Then we advance i and j, and continue as long as j doesn't go past the end of the Wave.

Finally, the method constructs and returns a Spectrogram object. Here is the definition of the class:

    class Spectrogram(object):
        def __init__(self, spec_map, seg_length):
            self.spec_map = spec_map
            self.seg_length = seg_length

Like many init methods, this one just stores the parameters as attributes.

Spectrogram provides plot, which generates a pseudocolor plot with time along the x-axis and frequency along the y-axis. And that's how Spectrograms are implemented.

Solutions to these exercises are in chap03soln.ipynb.

Exercise: In the leakage example, try replacing the Hamming window with one of the other windows provided by NumPy, and see what effect they have on leakage.

Exercise: Write a class called SawtoothChirp that extends Chirp and overrides evaluate to generate a sawtooth waveform with a frequency that increases linearly over time. Hint: combine the evaluate functions from Chirp and SawtoothSignal.
Draw a sketch of what you think the spectrogram of this signal looks like, and then plot it. The effect of aliasing should be visually apparent, and if you listen carefully, you can hear it.

Exercise: Find or make a recording of a glissando and plot a spectrogram of the first few seconds. One suggestion: George Gershwin's Rhapsody in Blue starts with a famous clarinet glissando, which you can download from.

Exercise: A trombone player plays a glissando by moving the slide, which changes the length of the tube and therefore the pitch. Assuming that the player moves the slide at a constant speed, how does frequency vary with time?

Write a class called TromboneGliss that extends Chirp and provides evaluate. Make a wave that simulates a trombone glissando from C3 up to F3 and back down to C3. C3 is 262 Hz; F3 is 349 Hz.

Plot a spectrogram of the resulting wave. Is a trombone glissando more like a linear or exponential chirp?
http://greenteapress.com/thinkdsp/html/thinkdsp004.html
Hello Monks,

I am trying to figure out why this code performs so poorly against its C counterpart:

Perl (complete program)

    #!/usr/bin/perl
    OUTER:
    for (my $i = 20; ; $i += 20) {
        foreach (my $j = 1; $j < 20; $j++) {
            next OUTER if ($i % $j);
        }
        print "Number: $i\n";
    }

C (complete program)

    #include <stdio.h>

    int main(void)
    {
        int i, j;
        for (i = 20; ; i += 20) {
            for (j = 1; j < 20; j++) {
                if (i % j) break;
            }
            if (j == 20) {
                printf("Number: %d\n", i);
                break;
            }
        }
        return 0;
    }

On my machine, the C variant (Linux GCC 4.3.2, no optimization flags, not stripped) runs in 1.36 +/- 0.02 user+system seconds. The Perl variant (5.10.0) takes 48.7 user+system seconds on the same machine. Why is Perl so much slower at running the same algorithm? (Actually the C algorithm is slightly worse due to the extra comparison.)

Keep in mind, the purpose of the code samples is completely tangential to this discussion -- I am seeing similar performance with all tight processing loops. I am aware of the ability to use native C libraries with Perl (and other optimization techniques), but my question is not about optimization. I just want to get a better idea of what is making Perl so terribly slow with simple loops like the above.

Let's look at what is really going on when these programs run. So, whatever term suits you, which may include one of the following: <opcodes|bytecode|syntax tree|abstract syntax tree|other>. (Dodge issue!) This is the output from perl -MO=Concise YourScript.pl:

So the answer to your question is: when you understand why those four instructions in the C version require those 700+ lines of assembler for the perl version, then you'll understand why the performance difference exists. And also why it isn't a problem!

At root Perl is interpreted, not compiled. For things like string manipulation and regular expression parsing, Perl performs very well because most of the time is spent in well crafted C code (in the interpreter).
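The "opcodes vs. machine instructions" point applies to any bytecode interpreter, not just Perl. As an analogous illustration (sketched in Python rather than Perl, since Python's dis module makes the opcode stream easy to inspect; perl -MO=Concise plays the same role), here is a translation of the inner loop and a count of the interpreter opcodes its body compiles to. Each opcode costs a full trip through the interpreter's dispatch loop on every iteration:

```python
import dis

def divisible_by_all(i):
    """Inner loop of the benchmark: is i divisible by every j in 1..19?"""
    for j in range(1, 20):
        if i % j:
            return False
    return True

# Each entry here is one interpreter opcode; compare with the handful of
# machine instructions the equivalent C loop compiles to.
ops = list(dis.get_instructions(divisible_by_all))
print(len(ops), "opcodes for a four-line loop body")
```

The exact count varies by interpreter version, but it is always an order of magnitude more work per iteration than the C loop's few machine instructions, which is the same point the Concise dump makes for Perl.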
For code such as your sample, which is CPU intensive but doesn't take advantage of Perl's strengths, you simply hit the penalty for using an interpreter.

For most applications that take advantage of Perl's strengths the performance is fine. Even for many applications that don't play so well to Perl's run time strengths, the time to craft a solution in Perl and execute it is very often shorter than the time to write and execute a similar application using a different language, even though the run time may be much shorter in the other language.

But even after re-writing to match your original algorithm this takes 20 odd seconds on my machine. When I add use integer; it cuts it down to 13 seconds.

Thanks for the comments. Here are a few more results, based on your suggestions and comments:

Yes, I acknowledged in the original post that the C algorithm is actually a bit less efficient. Putting the missing comparison into the Perl version instead of the loop label, as you have done, actually had no measurable effect on runtime.

Moving my ($i, $j) outside the loop had no measurable effect on runtime. Also not too surprising.

Running your version exactly as listed took on average 48.8 +/- 0.1 sec -- again no measurable change from my previous results. True, I could use more rigorous methods to more accurately measure the effect of the above changes, but at best we'd be looking at a fraction of a percentage difference. Nowhere near the 3000+ % difference in the C version.

Adding use integer; to your version caused it to run in 46.8 sec, which is a marginal improvement. (My test machine has an FPU.)

Not forgetting my original question of why this runs so slow, your optimization ideas do help shed a little light on a few of the factors that may be influencing performance.
It's perhaps important to underscore that my question is not, "what can I do to speed up this random little demonstration program", but instead, "why is Perl so much slower than C on some arbitrary, computationally expensive algorithm". Perhaps more to the point, "what are the factors influencing the performance of tight loops in Perl". (Or where could I read more about that topic!) Hope that makes a bit more sense. Thanks again for your insights!

Because the C code and the Perl code end up doing roughly equivalent things, except that the C code runs through a few dozen machine-language instructions while the Perl code runs through a few dozen "opnodes". Machine-language instructions are the fastest unit of execution on a computer. The slowness of opnode dispatch (each requires several dozen or even hundreds of machine-language instructions) is one of the motivations for some of the design changes in Perl 6.

- tye

That seems hardly a fair comparison, does it? How long did it take you to compile, link and execute this C-program (and you are not allowed to put it in a batch or shell script, because that is part of another language)? I bet the difference would be much smaller.

    you are not allowed to put it in a batch or shell script, because that is part of another language

What? Well, OK, since it's pretty tough to automate compilation and execution without some sort of "script", the following example takes my corrected typing speed into account, too! Still less than four seconds, and most of that was thanks to my clumsy typing. Fair enough, no?

Anyway, compile time is irrelevant for a bunch of reasons, but mainly: compile time will be negligible for any CPU-bound problem of significant size. Such problems are more or less the point of this node.

Finally, to really look at the compilation step fairly, how many real-world programs (including real-world Perl programs) really need to be recompiled every time they are run?
For many applications, an "interpreted" compilation step is just another nail in the comparative performance coffin.

You don't have to throw the baby out with the bath water in order to benefit from good performance:

    #! perl -slw
    use strict;
    use Inline C => Config => BUILD_NOISY => 1;
    use Inline C => <<'END_C', NAME => '_729090', CLEAN_AFTER_BUILD => 0;
    #include <stdlib.h>
    #include <string.h>
    #include <stdio.h>

    SV* thing() {
        int i, j;
        for (i = 20; ; i += 20) {
            for (j = 1; j < 20; j++) {
                if (i % j) break;
            }
            if (j == 20) {
                return newSViv( i );
                break;
            }
        }
    }
    END_C

    print time;
    print thing();
    print time;
    __END__

    C:\test>729090-IC.pl
    1228813945
    232792560
    1228813946

Of course, if you want high execution speed, Perl is not a wise choice. You should choose Assembler or pure machine code then. In almost all other cases though, Perl is "fast enough" and much more convenient to use.

A: The benefits of a higher abstraction level have to be paid for with performance!

Might be more interesting also to compare to Java, which runs on a VM too! (But Perl is still more abstract.)

Cheers Rolf

PS: Tuning effects like Java's JIT compiler are features Perl6/Parrot tries to implement for high-speed code execution.

UPDATE: More benchmarks!

C-style "fixed" typing, especially when dealing with basic numeric types, is still hard to beat for a language as dynamic as Perl, but as you hinted, a good enough JIT compiler (like Java's) could get pretty close to (and in some cases probably beat) the speed of a C implementation.

A few existing dynamic languages that have good compilers (some need hinting with type declarations at the tight portions of code -- which may be cheating, but good Common Lisp implementations use it, and AFAIK they don't even use a JIT compiler) can already come really close to C's speed.
As for performance in dynamic languages in general, I'll also drop a link to Clojure here -- it's a Lisp (dynamic typing included) implemented in Java, which for speed is probably close to the top of dynamic languages and has very interesting multithreading/concurrency properties.

UPDATE: It took some time to get it right; I'm new at Clojure and the OP's program doesn't translate into idiomatic functional constructs (mainly because the OP's algorithm is stupidly inefficient) -- but I've tried to keep it as close to the original constructs as possible:

    (defn inner [i]
      (loop [j 1]
        (if (= 0 (rem i j))
          (if (< j 20)
            (recur (+ 1 j))
            i)
          nil)))

    (defn loops []
      (loop [i 20]
        (let [result (inner i)]
          (if result
            (print (format "Number %d\n" result))
            (recur (+ i 20))))))

    (defn time-me []
      (time (loops)))

    (time-me)
    Number 232792560
    "Elapsed time: 10865.715 msecs"
    nil

    joost-diepenmaats-macbook:~ joost$ time perl test.pl
    Number: 232792560

    real    0m19.568s
    user    0m19.078s
    sys     0m0.060s

updated again to convert tabs to spaces in pasted code, and again because the timing for the clojure code was way off due to stupid programmer syndrome

My code is still way slow compared to the OP's reported C time, but about twice as fast as perl -- not that bad for one of my first attempts at clojure.

thanx for the insight... well another good reason to overcome my parens-allergy ; ) ... maybe starting with elisp.

PS: I suppose the dynamic structure of Lisp (data == code) enforces late compiling (comparable to JIT) and therefore the potential for high optimization ...
In other words, real-life programs usually spend a small amount of CPU-time setting up for the next I/O operation. The thing which really needs to be "optimized for" is the human time spent writing and debugging and maintaining the program. For those few truly CPU-intensive tasks that we must do from time to time, Perl allows you to define C-language extensions to itself or, more easily, to invoke external programs to do certain things. The times when we actually have to do that are in the minority, but they certainly do exist. When you devise a CPU-intensive algorithm in "straight Perl," don't be surprised when it takes much longer than "straight C." Also note that the opposite is definitely also true: write a hash-table algorithm in "C" and you sure are wasting your time, vis-a-vis doing the same thing in a couple of lines of Perl. "Tools for the job." Any software tool is going to be human-inefficient at doing a task it was not designed to do, because the design of every tool is a carefully calculated trade-off between opposing goals. Perl is really handy for text manipulation. That doesn't preclude that manipulation being CPU-intensive - it often is. For text manipulation tasks, regardless of any IO, Perl can generally easily keep up with compiled languages because the CPU-intensive work is actually done in compiled C (the Perl interpreter is compiled C). If you are using regular expressions, map, grep, index, substr and all the other Perl goodness then you really are not at a speed disadvantage compared with compiled languages. On the other hand there are good hash implementations for C++ - a good tool is a good tool in most languages. They aren't as much fun to use as Perl's hashes because strict typing means Perlish techniques are cumbersome, but they are there when you need em. Hello, It would be interesting to see how well the code performs under perl 5.8 and perl 5.6 as well. Regards,. 
    sub new {
        my $i = 0;
        while ($i += 20) {
            next if ($i % 1);
            next if ($i % 2);
            next if ($i % 3);
            next if ($i % 4);
            next if ($i % 5);
            next if ($i % 6);
            next if ($i % 7);
            next if ($i % 8);
            next if ($i % 9);
            next if ($i % 10);
            next if ($i % 11);
            next if ($i % 12);
            next if ($i % 13);
            next if ($i % 14);
            next if ($i % 15);
            next if ($i % 16);
            next if ($i % 17);
            next if ($i % 18);
            next if ($i % 19);
            print "Number: $i\n";
            last;
        }
    }

              s/iter original    new
    original    25.7       --   -63%
    new         9.57     168%     --

    #!/usr/bin/perl
    use integer;
    my $i = 0;
    while ($i += 20) {
        last unless ($i % 3  or $i % 6  or $i % 7  or $i % 8  or
                     $i % 9  or $i % 11 or $i % 12 or $i % 13 or
                     $i % 14 or $i % 15 or $i % 16 or $i % 17 or
                     $i % 18 or $i % 19 or $i % 20);
    }
    print "Number: $i\n";

For values of x > 20: if x % 12 == 0, then x % 6 == 0.

Update: In other words, order the list in descending order, removing any exact multiples of previous items. When you reach 16, one of two things will happen. Either there will be a remainder and it will exit the loop, or there won't be a remainder and it will continue. There is no need to test 8 after 16.

Cheers - L~R
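The divisor-pruning rule in that update is mechanical enough to automate. As an illustration (sketched in Python rather than Perl; the function name is mine), walk the candidate divisors in descending order and drop any divisor that an already-kept divisor is a multiple of:

```python
def prune_divisors(divisors):
    """Keep j only if no already-kept (larger) divisor is a multiple of j."""
    keep = []
    for j in sorted(divisors, reverse=True):
        if not any(k % j == 0 for k in keep):
            keep.append(j)
    return sorted(keep)

# Testing divisibility by these ten divisors implies divisibility by all of 1..19,
# because every j in 1..9 has a multiple (2j) in 10..18.
print(prune_divisors(range(1, 20)))  # → [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
```

So the tightest version of the loop only needs the divisors 10 through 19, half the modulo operations of the original.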
http://www.perlmonks.org/?node_id=729070
How to Build iPad Apps with Xcode and Interface Builder

A few weeks ago, Andy White gave us a quick tour of Xcode and walked us through the development of a simple "Hello, World" app for the iPhone. In this tutorial, we'll build on those learnings and introduce you to Interface Builder, a companion to Xcode designed to allow you to quickly and easily develop interfaces for your apps. In the process, we'll be building a simple app for the iPad: a big ol' calculator we'll call GrandeCalc HD.

Creating the Project

Creating the project starts with launching Xcode and using the File menu to create a new project. For this application we want to create a View-based application. Make sure to select iPad, because the default device family is iPhone. Don't worry, your project will create code that is compatible with both devices; this selection just means that you'll be using an iPad layout as a default for Interface Builder.

The next section of the project wizard asks you to name the project. I called mine GrandeCalc HD and gave it the namespace of com.jherrington.calculatorhd. You can call yours whatever you like. When the project is created, you'll see something like this:

This shows you the two primary Objective-C classes that drive the application. An Objective-C class has two components: the first is the interface definition of the class (including things like member variables, properties, and public methods), which lives in the .h header file; the other is the implementation of the class, in the .m file.

The first of your two classes is the application delegate. This is the class that handles application-centric events like starting up, shutting down, and so on. We won't be touching that class at all. The other is the view controller. This is the class that hooks up to the interface elements in the view and responds to the user tapping on them. We'll be adding some Objective-C code there.

Now that we have our application's code skeleton, it's time to build an interface.
Start by looking in the Resources portion of the project as shown below. The display area of the view is defined in a .xib file. This is an Interface Builder file that has localized (in this case, just English) versions of the interface. That file includes all of the controls, their layout and sizing, their text, tags, connections to the corresponding Objective-C classes, and so on. We start editing that file by double-clicking it.

Building the Interface

Double-clicking the .xib file will bring up Interface Builder. Once there, you'll see three windows. One is a large window containing the contents of the view; a second smaller window has the toolbox of user interface elements that you see below. This is where we'll be grabbing the text display and buttons for our calculator.

The third window shows the contents of the .xib file as seen here. This window will become important when we link the buttons and the label to the Objective-C class that does all the work. In this case, the First Responder object actually represents the Objective-C class that will handle all the events.

The next step is to drag and drop some buttons and a UITextField onto the view, and then start editing them. You can lay out and style your buttons however you like; the only key point is to set the Tag value of each digit button to the numeric value of the button. This way, we'll be able to use the same event handler for each button, using the tag value in our code as the relevant numeric value. Here's how to set the Tag value of the number 6 button, for example:

The layout I came up with looked like this:

Don't worry if yours looks a little different. As long as all the relevant buttons and the label are in place, we'll be able to wire it up and make it into a functional calculator.
There's one last step to take care of before we move on to developing our app's logic: we want to make sure our interface handles device orientation changes (from portrait to landscape, or vice versa) gracefully. You can simulate the appearance of your app's interface by choosing Simulate Interface from the File menu. Once the simulator has launched, try rotating the simulated iPad left and right with Cmd-left or Cmd-right. Whoops! Our interface doesn't handle landscape mode very well at the moment. Let's see what we can do about that.

Each element in your layout has a set of options that determine how it will respond to changes in the device's orientation. Select one of your buttons, then choose the ruler icon at the top of the properties window. You want to adjust your settings to look like this:

The important section is Autosizing. This defines how the element changes shape and position during a re-layout of the interface. The lines along the edge of the box define whether the position of the element is fixed or floating relative to each edge of the screen. The controls on the inside of the box determine whether the element changes height or width on a re-layout.

For the buttons and the text display, specify that they're floating and that they change both width and height. To do this, make sure both arrows inside the box are solid red, and that the lines outside the box are dashed. Once that's done, test the interface again: it should look fine in both portrait and landscape modes.

With the UI layout done, it's time to head back to Xcode to work on the Objective-C.

The Logic

Objective-C view classes have two critical elements: IBActions and IBOutlets. An IBAction is a method that responds to an event (for instance, a button touch). An IBOutlet is a property that the class uses to connect to a user interface element (such as the number label in the calculator). If you want to listen for events, you'll want to add IBActions.
If you want to change the state of the interface, or read its current state, you'll need IBOutlets. Because IBActions and IBOutlets are public, they go into the header file. The header file for the calculator view looks like this:

    #import <UIKit/UIKit.h>

    @interface GrandeCalc_HDViewController : UIViewController {
        IBOutlet UITextField* numberDisplay;
        float heldValue;
        int lastOpDirection;
    }

    @property (nonatomic, retain) UITextField* numberDisplay;

    -(IBAction)numberClicked:(id)sender;
    -(IBAction)dotClicked:(id)sender;
    -(IBAction)plusClicked:(id)sender;
    -(IBAction)minusClicked:(id)sender;
    -(IBAction)equalsClicked:(id)sender;

    @end

The only outlet is the numberDisplay field, which is connected to the number display in the view. There are then five actions; these correspond respectively to a number being pressed, and the dot, plus, minus, and equals buttons being pressed. In each case, a sender object is passed along. This sender is the UI element that generated the event. For example, this could be the number button that was pressed. Since all the number buttons go to the same event handler, we'll use the Tag value of those buttons to distinguish their numeric values. In the case of the dot, plus, minus, and equals, we'll just ignore the sender, since we'll only hook each one up to a single UI element.

If you're new to Objective-C, you should note that the member variables (like numberDisplay, heldValue, and lastOpDirection) are defined in the @interface block. Properties and methods are defined after that. In this case, there's one property, numberDisplay, and five public methods. The numberDisplay property will be used by the interface to set and get the object pointer to the number display element in the UI.
The implementation for the view, which is held in the .m file, is shown below:

    #import "GrandeCalc_HDViewController.h"

    @implementation GrandeCalc_HDViewController

    @synthesize numberDisplay;

    - (BOOL)shouldAutorotateToInterfaceOrientation:
        (UIInterfaceOrientation)interfaceOrientation {
        return YES;
    }

    - (void)didReceiveMemoryWarning {
        [super didReceiveMemoryWarning];
    }

    - (void)dealloc {
        [super dealloc];
    }

    -(IBAction)numberClicked:(id)sender {
        UIButton *buttonPressed = (UIButton *)sender;
        int val = buttonPressed.tag;
        if ( [numberDisplay.text compare:@"0"] == 0 ) {
            numberDisplay.text = [NSString stringWithFormat:@"%d", val ];
        } else {
            numberDisplay.text = [NSString stringWithFormat:@"%@%d",
                numberDisplay.text, val ];
        }
    }

    -(IBAction)dotClicked:(id)sender {
        numberDisplay.text = [NSString stringWithFormat:@"%@.",
            numberDisplay.text ];
    }

    -(IBAction)plusClicked:(id)sender {
        float curValue = [numberDisplay.text floatValue];
        numberDisplay.text = [NSString stringWithString:@"0" ];
        heldValue = curValue;
        lastOpDirection = 1;
    }

    -(IBAction)minusClicked:(id)sender {
        float curValue = [numberDisplay.text floatValue];
        numberDisplay.text = [NSString stringWithString:@"0" ];
        heldValue = curValue;
        lastOpDirection = -1;
    }

    -(IBAction)equalsClicked:(id)sender {
        float newValue = heldValue +
            ( [numberDisplay.text floatValue] * lastOpDirection );
        numberDisplay.text = [NSString stringWithFormat:@"%g", newValue ];
        heldValue = 0.0f;
        lastOpDirection = 0;
    }

    @end

Again, if you're new to Objective-C, all this will take some getting used to, but even though the syntax is a little odd, you should be able to see some object-oriented patterns familiar from Java and C++. Just browsing across the code, you can see that each method starts with a minus and then has a method declaration. The minus means that it's an object method; a plus would indicate a class method. The syntax of the method is exactly the same as in the header file, except that in this case there's also a body to the method.
Within each method, you'll find Objective-C code to implement the method. In that code, you'll find basic C operations that you're probably familiar with; for example, the arithmetic operators, and the way that variables are defined. The really unique part is the object-oriented invocation syntax of Objective-C. Let's have a look:

    [NSString stringWithString:@"0" ]

This means create a new string with the value '0'. The @ symbol specifies that we want an Objective-C string, as opposed to a C string. And in this case, we're calling a class method on NSString. Now look at this command:

    [NSString stringWithFormat:@"%@%d", numberDisplay.text, val ];

This is roughly equivalent to a call to sprintf. The format string in this case takes the current text value of the numeric display and appends the value of the digit that was pressed. All those brackets are confusing at first, but once you become familiar with them, they'll begin to make sense.

Now with the code in hand and the interface set up, it's time to connect the two using Interface Builder. Before doing that, you'll need to build your project by clicking Build and Run in Xcode; this will ensure that Interface Builder has all the inputs and outputs available to it, so you can hook them up with your interface components.

Connecting the Interface to the Code

Interface Builder looks for the IBOutlet and IBAction elements of the Objective-C view class, and provides us with an interface to wire controls to them. To link up the buttons, first select a button, then go to the connections panel of the Inspector window (represented by a blue circle with an arrow). From here, you can see all the events associated with the button. You can click on any of the circles and drag it to the File's Owner item in the contents window. For our buttons, we'll use the "Touch Up Inside" event. When you drop your event on the File's Owner, you'll see a popup that shows all the available IBAction methods.
Just choose the appropriate one: numberClicked for a number button, plusClicked for the plus sign, and so on. Connect up all the buttons in this same way.

The final step is to connect the numberDisplay variable to the number display UI element. Start by going to the contents window and selecting the File's Owner. That should show something like the figure below in the Inspector window. You can then drag the connector for the numberDisplay to the user interface element in the layout area to link the two up. At this point you can save the interface and close Interface Builder.

Then try running your application from the Xcode IDE. It should work more or less the way you expect a calculator to work. Of course, our application logic is very simple; there are plenty of ways you could refine the app's behavior.

If the calculator fails to work, the issue is likely in the connections that you defined between the user interface (the .xib file) and the Objective-C file. Follow the instructions from the last article to add breakpoints to the click methods on the Objective-C class, and see if the methods are getting called. If they aren't, go back to Interface Builder to make sure that you wired up the correct events to the IBAction methods in the Objective-C class.

Where to Go from Here

This is just the tip of the iceberg when it comes to learning about Objective-C and development for iOS devices. In this article, we've learned how to put together a project, build out a user interface, connect it to the back-end Objective-C class, and make it do something. If the application you have in mind uses the network, there's a robust HTTP library for you to use. If your ideal application is more graphical in nature, there's an amazing Quartz graphics and effects library just waiting for your enjoyment. Feel free to use the code in this article as a starting point.
If you come up with something great, be sure to let me know and I’ll buy it on the App Store (assuming you keep it relatively cheap!).
https://www.sitepoint.com/how-to-build-ipad-apps-with-xcode-and-interface-builder/
Functions to encode and decode a QR bar-code in images

52 Downloads | Updated 02 Nov 2010

A wrapper to the zxing library. This submission includes files to encode a QR code from a string message, and decode a string message from an image containing an existing QR code. With little work these functions can be expanded to search for multiple bar-codes in images, decode multiple bar-codes in a single image, etc.

Chia Chang (view profile): To get encode_qr to work I made the following changes:

qr = zeros(M_java.getHeight, M_java.getWidth);
for i = 1:M_java.getHeight
    for j = 1:M_java.getWidth
        qr(i,j) = M_java.get(j-1, i-1);
    end
end

Mathias Magdowski (view profile): Thanks to Andres Puerto, this perfectly works for me using MATLAB Version 8.4.0.150421 (R2014b) to get the functions running without errors or warnings.

Andres Puerto (view profile): Hi, I downloaded the 3.3 jars, and the functions are inside different folders for this matlab code, so you have to change the imports this way.

For the encode function:

import com.google.zxing.qrcode.*;
import com.google.zxing.*;

For the decode function:

import com.google.zxing.qrcode.*;
import com.google.zxing.client.j2se.*;
import com.google.zxing.*;
import com.google.zxing.common.*;
import com.google.zxing.Result.*;

There are other changes to make: in the encode function, M_java.height and M_java.width are invalid; use M_java.getHeight() and M_java.getWidth() instead. Special thanks to Ari Bejarano, the Java master.

Ankit Suhagiya (view profile): I can run test_qr.jpg only, but other images containing a QR code cannot be decoded with this test_qr.m file... can anyone tell me the solution, as I am getting an empty string after running any QR code other than test_qr.jpg?
megh shah (view profile) hii i cannot find zxing file in this site ,would you suggest me another site to download zxing and also please tell me where to install this file. and please tell me procedure for installing zxing. Ibraheem (view profile) Argo Argo (view profile) s_il (view profile) Hello. I cannot run this code. When i run encode_qr.m, I get an error 'File: encode_qr.m Line: 25 Column: 8 Arguments to IMPORT must either end with ".*" or else specify a fully qualified class name: "com.google.zxing.qrcode.QRCodeWriter" fails this test.' When I run test_qr.m I get this error and also 'Error in test_qr (line 10) test_encode = encode_qr('la la la', [32 32]);' Could you help me please? Yudha Viki (view profile) I cnnot run this code, when I run the decode_qr.m, im getting an Error "Error: File: decode_qr.m Line: 26 Column: 8 Arguments to IMPORT must either end with ".*" or else specify a fully qualified class name: "com.google.zxing.qrcode.QRCodeReader" fails this test." and when i run test_qr.m im also getting an error "Warning: Invalid file or directory 'D:\TELKOM UNIVERSITY\IF\FINAL WORK PLANING\PROJECT\QR CODE DECODER\DECODE_MATLAB\3rd_party\zxing-1.6\core\core.jar'. > In javaclasspath>local_validate_dynamic_path (line 266) In javaclasspath>local_javapath (line 182) In javaclasspath (line 119) In javaaddpath (line 71) In test_qr (line 6) Warning: Invalid file or directory 'D:\TELKOM UNIVERSITY\IF\FINAL WORK PLANING\PROJECT\QR CODE DECODER\DECODE_MATLAB\3rd_party\zxing-1.6\javase\javase.jar'. > In javaclasspath>local_validate_dynamic_path (line 266) In javaclasspath>local_javapath (line 182) In javaclasspath (line 119) In javaaddpath (line 71) In test_qr (line 7) Error: File: encode_qr.m Line: 25 Column: 8 Arguments to IMPORT must either end with ".*" or else specify a fully qualified class name: "com.google.zxing.qrcode.QRCodeWriter" fails this test. Error in test_qr (line 10) test_encode = encode_qr('la la la', [32 32]);" please help me.... Thank. 
GP (view profile) Mina Fouad (view profile) VIBHATH V B (view profile) I cannot run this code. I am looking for decoding an image containing qr code. How to do that? When I run the decode-qr code, I am getting an error "Error using im2java2d Expected input number 1, Image, to be one of these types:uint8, uint16, double, logical". Apurva Bhargava (view profile) hello,when i am running the file test_qr, qr code is being generated and then i save the image at desktop location,But when i use another file named "decode_qr" to decode the same saved image, which was generated by test_qr, it displays an error titled "no qr code found." this decode_qr.m file is working for other downloaded qr codes. kindly help me out. mathivanan ponnambalm (view profile) I have Two questions related to QR Code 1. what the maximum size of data that can be embedded inside a QR Code 2. whether 7% of error is acceptable to decode 100 data paria (view profile) Hey, when I want use the decode file, it shows error like: Error in decode_qr (line 34) source = BufferedImageLuminanceSource(jimg); would you please help me how to solve it? kailash singh (view profile) JAKAPONG JAIBOONRUANG (view profile) Ok, it worked for me. Thank you very much. Nasir Ali (view profile) it gives an error on "BufferedImage" This is error message.can you please tell me what is the solution? ??? Undefined function or method 'BufferedImageLuminanceSource' for input arguments of type 'java.awt.image.BufferedImage'. Error in ==> decode_qr at 34 source = BufferedImageLuminanceSource(jimg); li (view profile) the great program of QR encoder,i have a quetion how to use RSENC in matlab to get error codewords with rs code mathod.i only know how to use BCHENC .hope your help ,thank you! Bahtosai (view profile) Hi all, I am curently working on QR Barcode. I found the link zxing have move. Can someone share to me or guide me step by step how to encode/decode QR barcode using matlab. 
I just know to use Matlab, and no eperience with JAVA etc. Many thanks. if anyone have fully matlab source code for QR barcode encode/decode please share with me. Thanks. dreamsyoung (view profile) Hello,have you build this qrcode file successifully?I have some error problems in this matlab file building,can you send your successful qrcode file to my email?thank you! haem (view profile) Peng Wenbin (view profile) Better to use java directly. Yanshuai Tu (view profile) the two jar files can be download via google search. Does anyone know how to control the Error Correction Level in generation Eugenijus Januskevicius (view profile) Hi Wang, i had thee same problem with Lithuanian symbols. Here is the patch: add qr_hints.put(EncodeHintType.CHARACTER_SET, 'UTF-8'); just after qr_hints.put(EncodeHintType.ERROR_CORRECTION, qr_quality); in the 'encode_qr.m' It will probably make QR code unreadable by any other QR decoder, except ZXing's. wang xiaoer (view profile) I got two problems:one was that a question mark would be found in the front of the information embedded when it contained a chinese character at least;the other was that,I got nothing when it contained more than sixty-six characters including punctuations,blank spaces.Can you give me some suggestions?Thanks in advance. Rob (view profile). Rob (view profile) ejs (view profile) Thanks to Gonzalo, i was able to get the code to work. Seems zxing made an Unexpected Update recently. Besides, i've made it to use error correction hints Gonzalo Garateguy (view profile) Matt, I was having the same problem but I find out that the method to obtain the height and width are slightly different you have to change M_java.height for M_java.getHeight() and M_jata.width for M_java.getWidth() Adnan Jahangir (view profile) There is a problem in encoding algorithm. Due to this you cannot decode the code generated by 'encode_qr'. To solve this problem open the function file 'encode_qr.m'. 
At the end of the file you will find the line - 'qr = logical(qr);'. Replace this with 'qr = 1-logical(qr);' (without apostrophe). Now plot the image with 'imagesc' command and type 'colormap(gray)' in command window. Now save the image to any image file type. Now you will be able to decode it with 'decode_qr' command. ARIZ ZUBAIR (view profile) when i run test_qr.m following error is generated:- Warning: Duplicate entry,C:\Users\ARIZ ZUBAIR\Desktop\qrcode\3rd_party\zxing-1.6\core\core.jar > In javaclasspath>local_validate_dynamic_path at 247 In javaclasspath>local_javapath at 157 In javaclasspath at 102 In javaaddpath at 68 In test_qr at 6 Warning: Duplicate entry,C:\Users\ARIZ ZUBAIR\Desktop\qrcode\3rd_party\zxing-1.6\javase\javase.jar > In javaclasspath>local_validate_dynamic_path at 247 In javaclasspath>local_javapath at 157 In javaclasspath at 102 In javaaddpath at 68 In test_qr at 7 ??? Undefined function or variable 'QRCodeWriter'. Error in ==> encode_qr at 28 qr_writer = QRCodeWriter; Error in ==> test_qr at 10 test_encode = encode_qr('la la la', [32 32]); please help me. Matt (view profile) Lior, Thanks for doing this code. I have installed ZXing and compiled the core and javase jar files. My problem is that the test_qr.m function is giving me an error when I try to run it. The actual error is 'No appropriate method, property, or field height for class com.google.zxing.common.BitMatrix'. It indicates the error is in encoder_qr.m file on line 31. When I set the debugger to stop at line 31, I saw that M_java var on line 30 was only of size 1x1. Is this variable supposed to behave like a struct and have some field values like .height and .width? So the source of the error was that Matlab did not recognize M_java.height on line 31. When I type M_java in the command line it gives me a picture of the QR code printed out, thought it doesn't appear to be sized 32 by 32. Any help would be appreciated. 
I'm a TA in class where we would like to used the QR codes to give cmds to the students' projects. Thank you, Matt Ahmed (view profile) i am getting this. help me out Warning: Invalid file or directory 'C:\Users\shehzil\Desktop\Lata\zxing-1.6\core\core.jar'. > In javaclasspath>local_validate_dynamic_path at 270 In javaclasspath>local_javapath at 184 In javaclasspath at 119 In javaaddpath at 69 In test_qr at 6 Warning: Invalid file or directory 'C:\Users\shehzil\Desktop\Lata\zxing-1.6\javase\javasc.jar'. > In javaclasspath>local_validate_dynamic_path at 270 In javaclasspath>local_javapath at 184 In javaclasspath at 119 In javaaddpath at 69 In test_qr at 7 ??? Error: File: encode_qr.m Line: 25 Column: 8 Arguments to IMPORT must either end with ".*" or else specify a fully qualified class name: "com.google.zxing.qrcode.QRCodeReader" fails this test. Andreas Kraushaar (view profile) I downlowed both Zxing1.6 and 1.7 but I can't import com.google.zxing.client.j2se.BufferedImageLuminanceSource, which is needed for QR-decode. All other resources work fine. Jay (view profile) Very handy! I am not familiar with Java programming, so what do I need to be able to tweak the encoding parameters such as the recommended 30% error correction level? Can I include it in my function call or I have to dive into the Java source code? Jay (view profile) Jackie (view profile) Can't decode form QR-code image(include the encode function generator image) Brandon (view profile) Never mind; I think I found the correct version of the Zxing.zip file. Brandon (view profile) where do I find Zxing.zip? Thanks! Michael Chan (view profile) could you update the link to download zxing? The link you provided is not longer valid. Thank you. mingming (view profile) Sven (view profile) Sorry, my bad - works perfectly, had a bad path issue. 
Tim Zaman (view profile) Lior Shapira (view profile) See the example in test_qr.m, you need to add the java path to the QR library: javaaddpath('...\zxing-1.6\core\core.jar'); javaaddpath('...\zxing-1.6\javase\javase.jar'); Make sure you unzip the zxing library and build it if necessary so you have the jar's Mohammed Huq (view profile) how can i add the QR code class path! i do not find any of "QRreader" folder in "Zxing.zip" can you please help!!
http://www.mathworks.com/matlabcentral/fileexchange/29239-qr-code-encode-and-decode?requestedDomain=true&nocookie=true
Display two dimensional array in matrix form using one loop in Java

Java: display a two dimensional array in matrix form using a single loop.

import java.util.*;

class Display2dArrayUsingOneLoop {
    public static void main(String args[]) {
        int j = 0, k = 0;
        int a[][] = { {1, 2, 4},
                      {5, 6, 7} };
        while (true) {
            System.out.print(a[j][k] + " ");
            if (a[j].length == k + 1) {
                k = 0;
                j++;
                System.out.println();
            } else {
                k++;
            }
            if (a.length == j) {
                break;
            }
        }
    }
}

(The title asks for one for loop; the example uses a single while loop, which has the same effect — the matrix is printed row by row with only one loop.)
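An alternative single-loop formulation derives the row and column indices from one flat index. Here is that idea sketched in JavaScript (illustration only; the function name and the use of console.log are my own, not from the original answer):

```javascript
// Print a 2D array in matrix form using a single loop over a flat index.
// Assumes all rows have the same length, as in the example {{1,2,4},{5,6,7}}.
function matrixToString(a) {
  const rows = a.length;
  const cols = a[0].length;
  let out = "";
  for (let i = 0; i < rows * cols; i++) {
    const r = Math.floor(i / cols); // row index from the flat index
    const c = i % cols;             // column index from the flat index
    out += a[r][c] + (c === cols - 1 ? "\n" : " ");
  }
  return out;
}

console.log(matrixToString([[1, 2, 4], [5, 6, 7]]));
// prints:
// 1 2 4
// 5 6 7
```

This version avoids the manual counter bookkeeping of the while-loop approach: the row/column arithmetic replaces the reset-and-increment logic.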
http://www.roseindia.net/answers/viewqa/Java-Interview-Questions/16509-Display-tow-dimensional-array-by-matrix-form--using-one-for-loop-in-java.html
Adding analytics to a React Native app. In this tutorial, we’re going to look at how to integrate Segment in a React Native app. We’re specifically going to use Google Analytics as the destination for the analytics data that we gather from the app. We’ll also be using Keen IO to inspect and analyze the data. Adding analytics to a mobile app is a necessary step in order to understand your users better, so you can provide them with a better user experience. When selecting an analytics tool, it’s important to select one that allows you to save engineering time and marketing effort. One such tool is Segment. They provide you with integrations to analytics and error tracking tools such as Google Analytics, Mixpanel, Sentry, and Rollbar. They also provide integrations to marketing tools such as Mailchimp, OneSignal, and Blueshift. You can visit their integrations page for a full list of supported integrations. Prerequisites. Setting up Google Analytics This section is divided into two sub-sections: one for users who don’t have an existing Google Analytics account, and another for those who have existing accounts. Only read through the guide that applies to you. New accounts If you don’t have a Google account yet, go ahead and create one. Next, visit analytics.google.com and sign in with your Google Account. If you don’t have an existing account yet, you’ll be greeted with the following screen. Click on Sign up to start using Google Analytics: It will then ask you for some information about your project: Once you’ve filled out all the text fields, click on the Get Tracking ID button. A modal window will pop-up asking you to agree to the terms of service and license agreement. If it returns an error while signing up, select United States as your country even if you live somewhere else. That should solve the error. Once that’s done, you should be provided with a tracking ID. 
Existing accounts If you have an existing account Google Analytics account, click on the Admin menu at the bottom left of the screen and on the Property column, click on Create Property: But instead of selecting Mobile app, you have to select Website. This is because Google has required new properties to use Firebase for mobile app analytics. This requires you to create a Firebase app as an additional step. More importantly, you have to connect Firebase in your Segment account later on. So to keep things simple, we’re going to stick with good old Google Analytics for this tutorial. Once you’re on the property creation page, enter the details of your app. The Website URL can be your app’s website or company’s website: Once the property is created, you should be provided with a tracking ID. Take note of this as you will be needing it later when you connect to Segment. Next, create a new view by clicking on the Admin menu and clicking Create View on the third column. This time, select Mobile app and enter your app details: If at anytime you want to view your tracking ID, go to Settings → Property Settings and that should show your tracking ID: Setting up Segment If you haven’t done so already, create a new Segment account. Once you have an account, create a new workspace. Each workspace equates to a single project. A project can have multiple sources and destinations, depending on where the data is coming from (sources) and where you want to put your data (destinations): Next, you need to select the platform that you want to work with. In this case, we’re going to select My Android App. This will serve as your source: React Native deploys to both Android and iOS devices, we’re selecting Android in this case. If you want to try to add iOS later on, you can do so by adding a new source and select iOS. Once the source is created, you can now proceed to adding a destination. 
Click on the Add Destination button: Select Google Analytics from the list that shows: Update the settings with the Mobile Tracking ID. This should be the tracking ID you got from the Google Analytics website earlier. Click on Save once you’ve added it: The last step for setting up Segment is to get your write key. You can find that under your Sources. Earlier, we’ve added Android as a source so you can go ahead and select that and view its settings. Once you’re on the settings page, click on the API Keys menu on the left side of the screen. Copy the Write Key as this will be the one that you’re going to need to put in the React Native app later on: Creating the app To give the analytics data some context, I’ve built a very simple app that allows the user to perform some actions so that we can record data while they’re using it. The app allows the user to view a list of Pokemon. The user can perform three actions to each item: - View - Bookmark Here’s what the app looks like. Each icon corresponds to the actions mentioned above: You can find the full source code on its GitHub repo. To keep the focus on analytics, I won’t be walking you through the code of the sample app. I’ll only be explaining the code used for implementing analytics. Few existing code from the sample app will be shown, but it will only be used to provide some context for the analytics code. Start by creating a new React Native app: react-native init RNSegmentAnalytics Next, clone the repo in another directory: git clone Once it’s cloned, copy the src folder and App.js file to the new React Native project you created earlier. Installing the packages Next, install the following packages: npm install react-native-analytics-segment-io react-native-device-info react-native-vector-icons --save Here’s what each package does: - react-native-analytics-segment-io - for implementing Segment within React Native. - react-native-device-info - for getting relevant device info. 
This will be used as additional data for analytics.
- react-native-vector-icons - for adding icons in the app.

Note that only the first two are relevant to what we're trying to achieve. react-native-vector-icons is only for aesthetic purposes.

Linking the packages

The three packages require an additional step for linking the native modules. I didn't have any luck setting up react-native-analytics-segment-io this way, but you should be able to with the other two packages:

react-native link react-native-device-info
react-native link react-native-vector-icons

Once those are linked, the next step is to manually link the react-native-analytics-segment-io package.

Linking on Android

For Android, open the android/settings.gradle file and add the following right before include ':app':

include ':react-native-analytics-segment-io'
project(':react-native-analytics-segment-io').projectDir = new File(rootProject.projectDir, '../node_modules/react-native-analytics-segment-io/android')

Next, update the android/app/build.gradle file and include the following under the dependencies:

compile 'com.segment.analytics.android:analytics:4.3.1'
compile project(':react-native-analytics-segment-io')

Once that's done, your dependencies should look like this:

dependencies {
    compile fileTree(dir: "libs", include: ["*.jar"])
    compile "com.android.support:appcompat-v7:23.0.1"
    compile "com.segment.analytics.android:analytics:4.3.1" // add this
    compile "com.facebook.react:react-native:+"
    compile project(':react-native-vector-icons')
    compile project(':react-native-device-info')
    compile project(':react-native-analytics-segment-io') // add this
}

The final step is to implement the package in the android/app/src/main/java/com/rnsegmentanalytics/MainApplication.java file.
Add this right after the last import:

import com.leo_pharma.analytics.AnalyticsPackage;

Then initialize it in the package list:

@Override
protected List<ReactPackage> getPackages() {
    return Arrays.<ReactPackage>asList(
        new MainReactPackage(),
        new VectorIconsPackage(),
        new AnalyticsPackage(), // add this
        new RNDeviceInfo()
    );
}

Linking on iOS

By default, a new React Native project doesn't really come with a Podfile. You can create one by navigating inside the ios directory and executing pod init. This will create the Podfile; update it so it contains the following:

platform :ios, '9.0'

target 'RNSegmentAnalytics' do
  pod 'Analytics' # add this
  pod 'Segment-GoogleAnalytics' # add this
end

In the above Podfile, we've added the Analytics and Segment-GoogleAnalytics pods. Once that's done, execute pod install to install those pods.

Next, follow the instructions on the project's README to ensure the build order. Once that's done, click the Play button in Xcode or execute react-native run-ios in your terminal to run the app. Note that you can use the repo you cloned earlier as a basis for what each of the files should look like after the packages are linked.

Adding the tracking code

In the App.js file at the root of the project directory, import the react-native-analytics-segment-io package that you installed earlier. Optionally, you can extract AnalyticsConstants; this allows you to get the correct values for the package setup options, as you'll see later:

import Analytics, {
  AnalyticsConstants // optional
} from "react-native-analytics-segment-io";

Next, inside componentDidMount, initialize the package by supplying the write key you got earlier:

componentDidMount() {
  Analytics.setup("YOUR WRITE KEY")
    .then(() => {
      // setup succeeded
      this.initializeUser();
    })
    .catch((err) => {
      // setup failed
      this.initializeUser();
    });
}

In the code above, the setup function returns a promise. Whether it succeeds or fails, we call the method which will initialize the user.
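The "call initializeUser whether setup resolves or rejects" pattern can be seen in isolation with plain promises. A minimal standalone sketch (the function names here are hypothetical, not part of the Segment SDK):

```javascript
// Standalone sketch: run follow-up work whether an async setup
// resolves or rejects, mirroring the componentDidMount logic above.
function setupOnce(alreadySetUp) {
  return alreadySetUp
    ? Promise.reject(new Error("setup already called")) // e.g. a reload during development
    : Promise.resolve("ok");
}

function init(alreadySetUp, onReady) {
  return setupOnce(alreadySetUp)
    .then(() => onReady())   // setup succeeded
    .catch(() => onReady()); // setup failed; proceed anyway
}

let calls = 0;
Promise.all([
  init(false, () => calls++), // first launch
  init(true, () => calls++),  // app reloaded, setup already done
]).then(() => console.log(calls)); // 2 — onReady ran in both cases
```

This is why the tutorial's .catch doesn't abort: the error only signals that setup already happened, so the app can continue initializing the user either way.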
I'll explain what the initializeUser method does later. As for why we're calling the same method whether it succeeds or fails: Analytics.setup only needs to be called once. This means that if you make changes to the code while developing, and then you reload the app, it will return an error if it's called again. This is normal behavior; it allows us to call the setup method only once, when the user launches the app. Calling the initializeUser method on failure allows us to do the same operation. That way, we can still proceed smoothly with the testing.

If you want to customize the default options, you can supply a second argument to the setup method. You usually don't have to do this, except for special cases, because some of these options are potentially invasive. Examples of these options include:

- shouldUseBluetooth - whether the analytics client should record Bluetooth information.
- shouldUseLocationServices - whether the analytics client should use location services.

You may want to supply the following options though:

- flushAt - the number of events to batch before sending out the analytics data to the server. By default, this is set to 20. It's a generous amount, so if you want to change it to something lower, you can. Just remember that a lower number means a higher impact on battery, as the app will have to perform the operation more often.
- trackDeepLinks - whether to automatically track deep links.
- trackApplicationLifecycleEvents - whether to automatically make a track call for application lifecycle events. This applies to things like when the app is installed, or the app is updated.

Here's an example of how to set up with options supplied (note that flushAt takes a number, not a boolean):

Analytics.setup('YOUR WRITE KEY', {
  [AnalyticsConstants.flushAt]: 10,
  [AnalyticsConstants.trackApplicationLifecycleEvents]: true
});

Here's the initializeUser method. What it does is identify the current user with their user ID. The current user's email and other traits (such as the device info) are optional.
In the code below, I've used placeholder values for the email and user ID; you should replace those with the ones assigned by your app when the user signed up. The method is also used to specify the screen the user is currently on. This gives some context to the events that we will be recording later on:

initializeUser = () => {
  let user_data = {
    email: "SOME EMAIL",
    device: {
      model: DeviceInfo.getModel(),
      userAgent: DeviceInfo.getUserAgent()
    },
    timezone: DeviceInfo.getTimezone()
  };

  Analytics.identify("SOME USER ID", user_data); // identify the user using the unique ID your app assigned to them
  Analytics.screen("Home Screen");
}

In the sample App.js provided, we have the following tracking code, with an optional object containing the details of the action:

viewAction = name => {
  Analytics.track("View Pokemon", { pokemon: name });
};

bookmarkAction = name => {
  Analytics.track("Bookmark Pokemon", { pokemon: name });
};

shareAction = name => {
  Analytics.track("Share Pokemon", { pokemon: name });
};

If you want the user to have the ability to opt out of data collection, you can call the disable function based on their preferences:

Analytics.disable();

Calling this function means all subsequent calls to the track and screen methods will be ignored. Data collection can be switched back on by calling the enable method:

Analytics.enable();

To make sure that the SDK's internal stores are cleared when a user logs out of the app, be sure to call the reset method. This is useful for cases where you're expecting the user to switch accounts often:

Analytics.reset();

Running the app

At this point, you can now run the app on your Android or iOS device or emulator (Genymotion for Android or the iOS simulator for iOS):

react-native run-android
react-native run-ios

Inspecting analytics data

If you haven't done so already, click on the buttons on each card in the app to record some data. You can also change the email address if you want.
But unless it’s a different emulator or device instance, Google Analytics will still consider it as the same user. To view your analytics data, you can go to the Google Analytics website and view the real-time reports. Once you’re in that page, reload the app and you should see the active user and active screen: You can also click on Events to view the events happening in real-time. Aside from that, there’s isn’t really much to see in the Google Analytics’ dashboard, unless you dig deeper into the audience demographics and other features which isn’t enabled by default. Adding Keen IO For us to view more details on the data we’ve gathered from the app, we can use Keen IO. This service allows us to dig deeper into the data that we’ve collected. It provides us with querying and visualization tools to gain more insight about the data. The first step to add Keen IO is to sign up for an account. Once you have an account, it will ask you a few questions about how you plan to use their service. Once that’s done, create a new project and navigate to the Access tab. This will show you the details that you’ll need to add to Segment. Next, open the Segment website on a new browser tab, add a new destination and select Keen IO. Once selected, it will ask you which source you’d like to use. In this case, you can select the Android source we’ve added earlier: Once Keen IO is added, it will ask you about the details of your Keen IO project. This is where you add the details you saw on the Access tab of your Keen IO project from earlier: Once you’ve added the Project ID and Write Key, enable Keen IO by checking the toggle for the destination. At this point, you can now reload the app and do some actions. These actions will also now be recorded on Keen IO and ready for you to inspect. If you go to the Explorer tab of your Keen IO project, you can view all the data which relates to an event by selecting extraction as the Analysis Type. 
The events recorded from the app can be selected in the Event Collection drop-down. Once you’ve selected an event, click on the Run Extraction button to execute the query. By default, this will format the data as table, but you can also select JSON from the drop-down on the upper right side of the screen: You can also set the Analysis Type to count and then group the data based on a specific field. In this case, I’ve selected pokemon. Unlike the extraction query, this allows you to present the data as a pie chart: I’ve only shown you a couple of queries, but there’s so much more you could do with Keen IO to learn more about your users. Alternatives In this section, we’re going to look at some of the alternative platforms to Segment, and other React Native libraries that you can use to implement analytics. Alternatives to Segment Segment is a great analytics tool, but if you want to explore your options, here are a few alternative platforms. All of these platforms integrate with a number of analytics and marketing tools just like Segment: Alternative analytics libraries for React Native Here are some React Native libraries that allows you to implement analytics: - react-native-google-analytics or react-native-google-analytics-bridge - for implementing Google Analytics within a React Native app. The only difference between the two is that the latter uses the native implementation of Google Analytics while the former is only the JavaScript implementation. The latter gives you information about the device because it has access to native functionality for getting device information. - react-native-fabric - for crashlytics implementation with Fabric.io - react-native-mixpanel - for Mixpanel tracking implementation. - react-native-mparticle - for mParticle implementation. - react-native-td - for Treasure Data implementation. Conclusion That’s it! In this tutorial, you’ve learned how to add analytics to a React Native app using Segment. 
As you have seen, Segment is a valuable analytics tool that allows you to save engineering time through its integration with a number of analytics services and marketing tools. September 26, 2018 by Wern Ancheta
Matt Austern, Robert Bowdidge, Geoff Keating The stree project is based on three fundamental premises. First: for an important class of development tasks (roughly: GUI programs written in a relatively simple subset of C++, compiled at -O0 -g), compilation time is dominated by the C++ front end. Second: the performance of the C++ front end is dominated by memory allocation and management. This includes memory allocation, initializing newly allocated objects, and bookkeeping for garbage collection. Reducing front end memory usage should thus improve front end performance. Third: many programs consist of small source files that include truly enormous header files. Such header files include <iostream> (25,000 lines), Apple's <Carbon/Carbon.h> (91,000 lines), and the X11 headers. Any given translation unit only uses a tiny fraction of the declarations in one of these headers. The goal of this project is to reduce the time and memory required for handling unused declarations. The main idea of the stree project is to avoid generating decl trees when possible. Instead the parser will generate a compact flat representation for declarations, called an stree, and expand the stree to a decl tree when necessary. Strees are not a substitute for trees. The middle-end and back end will still understand trees, not strees. Some immediate implications of this basic idea: Consider the front end data structure for a simple enumeration declaration, enum foo { a, b };. We have two enumerators. For each one we need to know its name, its type, the underlying integer type used to represent it, and its value. At present we represent enumerators with CONST_DECL nodes, so each enumerator takes 128 bytes for the tree_decl node, plus additional memory for cp-tree.h's version of lang_decl. Each enumerator has an entry in the hash table, an identifier. Each identifier has a pointer to a binding of type cxx_binding (this is the bindings field in lang_identifier, defined in name_lookup.h). 
The binding for foo itself points to a tree_type, and the bindings for a and b point to CONST_DECL nodes. Each CONST_DECL node has pointers to the name and to the ENUMERAL_TYPE node, and additionally has a pointer to a node representing the enumerator's value. In simple examples like this one each enumerator's value is an INTEGER_CST, giving us another 36 bytes each. (An INTEGER_CST node contains a tree_common subobject, with all the generality that implies.) We don't need 200 bytes to represent the fact that the enumerator a has the value 0. First: as an stree it's unnecessary to store a pointer to the name of this enumerator. The stree will only be accessed via a cxx_binding, so any code that accesses the stree already knows the name. Second: it isn't necessary to use anything so large as an INTEGER_CST to represent the value "0". Most of the information stored in an INTEGER_CST (chain nodes, type pointers, etc.) is unnecessary, since we already know we're getting to the value through an enumerator. We only need to store two pieces of information: the enumeration that this enumerator is associated with, and its initial value. This allows us to represent the enumerator in six bytes: a one-byte code for the type of the stree (specifically: the TREE_CODE of the tree that this stree corresponds to), four bytes (a pointer or the equivalent) for the enumeration, and one byte for the value. Note that this implies a variable-width encoding for the integer values; some enumerations will require seven or more bytes. Our current implementation is limited to enumerations defined at namespace scope. First, enumerations defined at class scope require additional context information. Second, enumerators declared at class scope might have values that depend on template parameters, meaning that we can't necessarily represent the values as simple integers. Neither is a serious problem. 
Because a cxx_binding's value can be either a tree or an stree, we can use strees for the common, simple cases, and default to trees otherwise. Because strees are a variable-sized representation, we can add additional values needed for building trees for the complex case as needed without bloating the simpler cases. The stree data structure itself is defined in stree.[ch]. Strees are tightly-packed, serialized representations of simple declarations. Strees are stored on the gc heap, but not directly: instead, they are stored in multi-page blocks of virtual memory ("chunks"), where a single chunk may contain multiple strees. Each stree is represented by an index; a separate table maps each index to the appropriate chunk and position within that chunk. We thus don't traffic in pointers to strees, but rather in integer indices referencing a location in memory. Storing strees in this manner avoids creating new objects and additional work for the garbage collector, and simplifies precompiled headers by ensuring that the chunks don't need to be placed at a specific address when reloaded; only the table pointers need to be swizzled. Clients access stree data via an iterator: given an stree with index s, the function get_s_tree_iter (declared in stree.h) creates an iterator pointing to the beginning of s. Other functions declared in stree.h access the iterator to extract each serialized value in turn. This scheme allows us to store data in the most compressed representation possible, and in a way such that clients are insulated from the details of the representation. For enumerators, for example, instead of using a full INTEGER_CST for each value, we can use one or two bytes in the (typical) case where the values are small. Strees are created with build_s_tree, a varargs function defined in stree.c. Its first argument is the stree code, and its remaining arguments are the contents of that stree and tags to identify their types.
There is no function for creating an stree by treating it as a "stream" to which values are written one at a time; eventually there probably will need to be one. It won't be hard to add it. The files stree.h and stree.c are language-independent, since, at bottom, strees are just a way of packing bytes and integers into chunks. Creation and expansion of strees are language dependent. The present implementation is focused on C++. We change cxx_binding::value from type tree to type s_tree_i_or_tree (a tagged union), and we change IDENTIFIER_VALUE so that it returns the tree value, expanding the stree if necessary. A few changes are required in functions that manipulate cxx_binding directly, but those changes are largely mechanical and are localized to cp/name_lookup.[ch]. Strees are expanded by the s_tree_to_tree function, defined in cp/decl.c. There are three points to notice about it. First, as described above, it uses the stree iterator interface. Second, the first byte of the stree is the stree code; s_tree_to_tree uses that code to determine what kind of tree to create. Third, at present s_tree_to_tree doesn't handle any cases other than enumerators. The major changes required to use strees for enumerators are in build_enumerator. First, we need to separate parsing and error checking from tree generation, deferring the latter until later. Second, for simple cases we use build_s_tree to create the stree and push_s_decl to enter the stree into the current lexical scope. In principle push_s_decl would need to know all of the same logic that pushdecl does; in practice we only use push_s_decl for the simplest cases, deferring to pushdecl (i.e. using trees instead of strees) for the more complicated cases. This design has the virtue that most of the C++ front end doesn't have to know about strees: code that goes through bindings to get trees looks exactly as before. It has the defect that, as presently written, it requires code duplication. 
The code required to generate an enumerator node is in both build_enumerator and s_tree_to_tree. Additionally, s_tree_to_tree is manageable only because at the moment it only handles a single case. If this project succeeds, and we're handling many kinds of strees, it would become a monstrosity. The right solution will probably be to replace s_tree_to_tree with a wrapper function that examines the stree code and dispatches to a function for the appropriate code, and, for each code, to write an implementation function that's shared between the tree and stree versions. Similarly, we can probably achieve better code sharing between pushdecl and push_s_decl. At present the compiler will not generate debugging information for unexpanded strees. This is potentially a serious issue. In principle, there are two ways of dealing with this issue: either figure out a way to generate debugging information without expanding strees, or else decide that it's acceptable to omit debugging information for "unused" declarations. (Note that by "unused" we mean declarations that are irrelevant to the compilation of the code, rather than the weaker definition of "never executed". As soon as a declaration's name is seen elsewhere in the code, we create a decl tree node for the name.) We don't believe this will be a serious problem. Consider the effect of missing debug information for unused declarations: More importantly, some gcc versions already remove unneeded declarations from debug information. GCC 3.4 does not generate DWARF debug info for function declarations, and does not generate debug info for unused types unless -fno-eliminate-unused-debug-types is specified. Apple's gcc has stripped "unused" symbols out of STABS debugging format for the last two and a half years. The debugger team expected many bugs from users trying to examine unused declarations, but have been surprised at how few bugs they've received. 
One of the few complaints was from a user who had a "debug" version of a struct that was used only for pretty-printing the real structure, and was stripped out because it was never actually referenced. Discussion of this project takes place on the GCC mailing lists; please put [stree] in the subject line. Copyright (C) Free Software Foundation, Inc. Verbatim copying and distribution of this entire article is permitted in any medium, provided this notice is preserved. These pages are maintained by the GCC team. Last modified 2016-01-30.
M5Burner on Windows 7 cannot flash

Hi, I just received my M5StickC and wanted to play with UIFlow, so I downloaded M5Burner and the appropriate firmware. M5Burner says it's OK to erase, but it cannot burn the firmware:

Error loading Python DLL 'C:\Users\xxx\AppData\Local\Temp_MEI68722\python36.dll'. LoadLibrary: The specified module could not be found.

I do have Python on this computer. I checked, but the folder AppData\Local\Temp_MEI68722\ does not exist. Where should I put python36.dll so M5Burner "finds" it?

Is your Python version 3.6?

I have both 2.7 and 3.6. Is there a "full" M5Burner release with integrated Python?

OK... I used another PC and now I can connect. Now I try to use the gyro and accelerometer, but when I use get X acc or get X gyr it says "Execute code successfully" while the stick says I2C bus error(6). Do you have an example of using the IMU with UIFlow?

from m5stack import *
from m5ui import *
from uiflow import *
import imu

setScreenColor(0x111111)
imu0 = imu.IMU()
label0 = M5TextBox(27, 7, "AccX", lcd.FONT_Default, 0xFFFFFF, rotate=0)
setScreenColor(0x33ff33)
lcd.font(lcd.FONT_DejaVu24)
lcd.print('FelX', 10, 30, 0xcc33cc)
wait(3)
while True:
    label0.setText(str(imu0.acceleration[0]))
    wait(1)
    wait_ms(2)

When I try the exact same blocks as in the example (label show get X gyr), I get this "I2C bus error(6)", yet from the Arduino IDE I get values from the IMU. So I guess it has to do with the UIFlow interface?
In computer programming, it will often be necessary to iterate through a sequence of things in order to find a successful match. A For Loop in C# is one tool to help us with this sort of data manipulation.

C# For Loop Syntax

The syntax for creating a for loop in C# is as follows:

for (int i = 0; i < length; i++)
{

}

This syntax uses the for keyword followed by three statements in parentheses. Let's break down each of these three sections.

In the first section of our for statement on Line 9, we see int i = 0;. This is called the initializer section, and it is responsible for initializing our counter variable i. This variable will only be accessible inside the loop. We can call the counter variable any name we want.

In the middle section of our for statement, we see i < 10;. This section is called the condition section. Our loop will iterate as long as the value of this condition is true. As soon as this condition is false, our loop will end.

The last section of our for loop code says i++. This is known as the iterator section. Each time our loop runs, the command in the iterator section will execute, thus incrementing the value of variable i. The i++ syntax signifies that we want to increment the value of i by one. Using this increment operator is functionally equivalent to i = i + 1 or i += 1.

Notice there is no semi-colon ; after the iterator section. The semi-colon serves as a separator between the sections of the for statement, but it is not used after the third section.

C# For Loop Example

We can create a simple example to illustrate this useful function. Launch a new .NET Core Command Line project called ForLoop and add the highlighted code.

using System;

namespace ForLoop
{
    class Program
    {
        static void Main(string[] args)
        {
            for (int i = 0; i < 10; i++)
            {
                Console.WriteLine(i);
            }
            Console.ReadLine();
        }
    }
}

You will see the For loop syntax on Line 9. This program will run through a loop 10 times.
Each time the loop iterates, it will print one number to the console, beginning with the number 0. Once the loop has run ten times, the numbers 0-9 will all be printed to the console. At this point, the loop will end and our program will wait on Line 13 for some user input.

The Bottom Line

Iterations and loops are important parts of computer programming. There are other loops which we will visit later in our tutorial series, but the for loop is an important tool in your C# toolbox. You will use it regularly. How do you plan to use for loops in your programs? Let me know in the comments!
Chevron Corporation (Symbol: CVX). So this week we highlight one interesting put contract, and one interesting call contract, from the January 2019 expiration for CVX. The put contract our YieldBoost algorithm identified as particularly interesting is at the $55 strike, which has a bid at the time of this writing of 98 cents. Collecting that bid as the premium represents a 1.8% return against the $55 commitment, or a 1% annualized rate of return (at Stock Options Channel we call this the YieldBoost). Turning to the other side of the option chain, we highlight one call contract of particular interest for the January 2019 expiration, for shareholders of Chevron Corporation (Symbol: CVX) looking to boost their income beyond the stock's 4% annualized dividend yield. Selling the covered call at the $120 strike and collecting the premium based on the $4.15 bid annualizes to an additional 2.1% rate of return against the current stock price (this is what we at Stock Options Channel refer to as the YieldBoost), for a total of 6.1% annualized rate in the scenario where the stock is not called away. Any upside above $120 would be lost if the stock rises there and is called away, but CVX shares would have to climb 11.8% from current levels for that to happen, meaning that in the scenario where the stock is called, the shareholder has earned a 15.6% return from this trading level, in addition to any dividends collected before the stock was called.
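The arithmetic behind these figures is straightforward: the simple return is the premium divided by the capital committed (the strike for a cash-secured put, or the current share price for a covered call), and the annualized figure scales that by the time remaining to expiration. A quick sketch in Python; the 651-day figure is an assumed time to the January 2019 expiration, not a number from the article:

```python
def simple_return(premium, basis):
    """Premium collected divided by the capital committed."""
    return premium / basis

def annualized(rate, days_to_expiration):
    """Scale a simple return up to an annual rate."""
    return rate * 365 / days_to_expiration

# The $55 put with a 98-cent bid from the article:
put_yield = simple_return(0.98, 55.0)
print(f"put simple return: {put_yield:.1%}")                   # 1.8%
print(f"put annualized:    {annualized(put_yield, 651):.1%}")  # 1.0%
```

With roughly 21 months to expiration, the 1.8% simple return works out to about the 1% annualized rate quoted in the article.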
Source: I am just trying to keep the reading material organised for people and myself.

C++ provides the following classes to perform output and input of characters to/from files:

ofstream: Stream class to write on files
ifstream: Stream class to read from files
fstream: Stream class to both read and write from/to files

For writing into a file we declare an object of class ofstream, and for reading a file, an object of class ifstream. The steps are as follows:

1. Creating the stream object
2. Opening the file
3. Performing tasks (reading/writing)
4. Closing the file

1. Creating the Object

ofstream myfile; // to write
// or
ifstream myfile; // to read

2. Opening the File

ofstream myfile;
myfile.open ("example.bin", ios::out | ios::app | ios::binary);

The general form is open (filename, mode), where filename is a string representing the name of the file to be opened, and mode is an optional parameter with a combination of the following flags:

ios::in - Open for input operations
ios::out - Open for output operations
ios::binary - Open in binary mode
ios::ate - Set the initial position at the end of the file (otherwise it is the beginning of the file)
ios::app - All output operations are performed at the end of the file, appending to its current content
ios::trunc - If the file already exists, its previous content is deleted and replaced by the new one

Combining step 1 and step 2:

ofstream myfile ("example.bin", ios::out | ios::app | ios::binary); // using the constructor of the class ofstream

For the ifstream and ofstream classes, ios::in and ios::out are automatically and respectively assumed, even if a mode that does not include them is passed as the second argument to the open member function (the flags are combined). For fstream, the default value is only applied if the function is called without specifying any value for the mode parameter. If the function is called with any value in that parameter, the default mode (which is both input and output) is overridden, not combined.

To check if a file stream was successful in opening a file, call the member function is_open. This member function returns a bool value of true in the case that the stream object is indeed associated with an open file, or false otherwise:

if (myfile.is_open()) { /* ok, proceed with output */ }

3. Performing Operations

Text Files

// writing on a text file
#include <iostream>
#include <fstream>
using namespace std;

int main () {
  ofstream myfile ("example.txt");
  if (myfile.is_open())
  {
    myfile << "This is a line.\n";
    myfile << "This is another line.\n";
    myfile.close();
  }
  else cout << "Unable to open file";
  return 0;
}

// reading a text file
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main () {
  string line;
  ifstream myfile ("example.txt");
  if (myfile.is_open())
  {
    while ( getline (myfile,line) )
    {
      cout << line << '\n';
    }
    myfile.close();
  }
  else cout << "Unable to open file";
  return 0;
}

get and put stream positions: ifstream, like istream, keeps an internal get position with the location of the element to be read in the next input operation. ofstream, like ostream, keeps an internal put position with the location where the next element has to be written. Finally, fstream keeps both the get and the put position, like iostream. These internal stream positions point to the locations within the stream where the next reading or writing operation is performed.

tellg() and tellp() take no parameters and return a value of the member type streampos: the current get position (in the case of tellg) or the put position (in the case of tellp).

seekg() and seekp() allow changing the location of the get and put positions. Both functions are overloaded with two different prototypes. The first form is:

seekg ( position );
seekp ( position );

Using this prototype, the stream pointer is changed to the absolute position position (counting from the beginning of the file). The type for this parameter is streampos, which is the same type as returned by functions tellg and tellp.

The other form for these functions is:

seekg ( offset, direction );
seekp ( offset, direction );

Using this prototype, the get or put position is set to an offset value relative to some specific point determined by the parameter direction. offset is of type streamoff.
And direction is of type seekdir, which is an enumerated type that determines the point from which offset is counted, and can take any of the following values:

ios::beg - Offset counted from the beginning of the stream
ios::cur - Offset counted from the current position
ios::end - Offset counted from the end of the stream

Binary Files

For binary files, reading and writing data with the extraction and insertion operators (<< and >>) and functions like getline is not efficient, since we do not need to format any data, and data is likely not formatted in lines. File streams include two member functions specifically designed to read and write binary data sequentially: write and read. The first one (write) is a member function of ostream (inherited by ofstream), and read is a member function of istream (inherited by ifstream). Objects of class fstream have both. Their prototypes are:

write ( memory_block, size );
read ( memory_block, size );

// obtaining file size
#include <iostream>
#include <fstream>
using namespace std;

int main () {
  streampos begin,end;
  ifstream myfile ("example.bin", ios::binary);
  begin = myfile.tellg();
  myfile.seekg (0, ios::end);
  end = myfile.tellg();
  myfile.close();
  cout << "size is: " << (end-begin) << " bytes.\n";
  return 0;
}

// reading an entire binary file
#include <iostream>
#include <fstream>
using namespace std;

int main () {
  streampos size;
  char * memblock;

  ifstream file ("example.bin", ios::in|ios::binary|ios::ate);
  if (file.is_open())
  {
    size = file.tellg();
    memblock = new char [size]; // request a memory block large enough to hold the entire file
    file.seekg (0, ios::beg);
    file.read (memblock, size);
    file.close();

    cout << "the entire file content is in memory";

    delete[] memblock;
  }
  else cout << "Unable to open file";
  return 0;
}

4. Closing a file

A file is closed by calling the member function close(). Once this member function is called, the stream object can be re-used to open another file, and the file is available again to be opened by other processes.
myfile.close();

In case an object is destroyed while still associated with an open file, the destructor automatically calls the member function close().

Concept of Buffer and Synchronization

File streams are associated with an internal buffer, and the operating system may also define other layers of buffering for reading and writing to files. When the buffer is flushed, all the data contained in it is written to the physical medium (if it is an output stream). This process is called synchronization and takes place under any of the following circumstances:

- When the file is closed: before closing a file, all buffers that have not yet been flushed are synchronized, and all pending data is written or read to the physical medium.
- When the buffer is full: buffers have a certain size; when the buffer is full, it is automatically synchronized.
- Explicitly, with manipulators: when certain manipulators are used on the stream, an explicit synchronization takes place. These manipulators are flush and endl.
- Explicitly, with member function sync(): calling the stream's member function sync() causes an immediate synchronization. This function returns an int value equal to -1 if the stream has no associated buffer or in case of failure. Otherwise (if the stream buffer was successfully synchronized) it returns 0.
NASA Technical Memorandum 86672
December 1984
National Aeronautics and Space Administration
By Mark D. Ardema, Ames Research Center, Moffett Field, CA 94035, USA

TABLE OF CONTENTS
0. PREFACE
1. INTRODUCTION
1.1 General
1.2 Historical Overview
1.3 State-of-the-Art Assessment
1.4 References
3. VERTICAL HEAVY-LIFT

Around 1970 a resurgence of interest about lighter-than-air vehicles (airships) occurred in both the public at large and in certain isolated elements of the aerospace industry. Such renewals of airship enthusiasm are not new and have, in fact, occurred regularly since the days of the Hindenburg and other large rigid airships. However, the interest that developed in the early 1970's has been particularly strong and self-sustaining for a number of good reasons. The first is the rapid increase in fuel prices over the last decade and the common belief (usually true) that airships are the most fuel efficient means of air transportation. Second, a number of new mission needs have arisen, particularly in surveillance and patrol and in vertical heavy-lift, which would seem to be well-suited to airship capabilities. The third reason is the recent proposal of many new and innovative airship concepts. Finally, there is the prospect of adapting to airships the tremendous amount of new aeronautical technology which has been developed in the past few decades, thereby obtaining dramatic new airship capabilities. The primary purpose of this volume is to survey the results of studies, conducted over the last 15 years, to assess missions and vehicle concepts for modern propelled lighter-than-air vehicles.

1. INTRODUCTION

1.1 General

Several workshops and studies in the early 1970's, sponsored by the National Aeronautics and Space Administration and others (Refs. 1.1-1.19), arrived at positive conclusions regarding modern airships and largely verified the potential of airships for operationally and economically satisfying many current mission needs. Noteworthy among more recent airship activities has been the series of Conferences on Lighter-Than-Air Systems Technology sponsored by the American Institute of Aeronautics and Astronautics. The 1979 Conference is reviewed in Refs. 1.20 and 1.21. Based on the positive early study conclusions, several organizations have analyzed specific airship concepts in greater detail and, in a few cases, have initiated development of flight test and demonstration vehicles. It is the purpose of this volume to survey the results of these activities. It will be useful in later discussions to have a clear understanding of the definitions of various types of airships and how they are related (Fig. 1). A lighter-than-air craft (LTA) is an airborne vehicle that obtains all or part of its lift from the displacement of air by a lighter gas. LTA's are conveniently divided into airships (synonymous with dirigibles) and balloons, the former being distinguished by their capability for controlled flight. Only airships are considered here. In Fig. 1, the term "conventional" applies to the class of approximately ellipsoidal fully-buoyant airships developed in the past. It is traditional to classify conventional airships according to their structural concept (rigid, nonrigid, or semirigid). Hybrid airships are herein classified according to the means by which the aerodynamic or propulsive portion of the lift is generated. Hybrid airship is a term which is used to describe a vehicle that generates only a fraction of its total lift from buoyancy, the remainder being generated aerodynamically or by the propulsion system or both. The distinguishing characteristics of the two major conventional airship concepts--rigid and nonrigid--will be discussed briefly.
The third type, semirigid, is essentially a variant of the nonrigid type, differing only in the addition of a rigid keel. Specific hybrid concepts will be discussed in detail in subsequent chapters. A typical nonrigid airship (Fig. 1.2) consists of a flexible envelope, usually fabric, filled with lifting gas and slightly pressurized. Internal air compartments (called ballonets) expand and contract to maintain the pressure in the envelope as atmospheric pressure and temperature vary, as well as to maintain longitudinal trim. Ballonet volume is controlled by ducted air from the propwash or by electric blowers. The weights of the car structure, propulsion system, and other concentrated loads are supported by catenary systems attached to the envelope.

The other major type of airship was classified rigid because of its rigid structure (Fig. 1.3). This structure was usually an aluminum ring-and-girder frame. An outer covering was attached to the frame to provide a suitable aerodynamic surface. Several gas cells were arrayed longitudinally within the frame. These cells were free to expand and contract, thereby allowing for pressure and temperature variations. Thus, despite their nearly identical outward appearance, rigid and nonrigid airships were significantly different in their construction and operation. The principal development trends of the three types of conventional airships are depicted in Fig. 1.4. The nonrigid airships are historically significant for two reasons. First, a nonrigid airship was the first aircraft of any type to achieve controllable flight, nearly 125 years ago. Second, nonrigid airships were the last type to be used on an extensive operational basis; the U.S. Navy decommissioned the last of its nonrigid airship fleet in the early 1960's. During the many years the Navy operated nonrigid airships, a high degree of availability and reliability was achieved.
this development was a direct result of the wreck of both airships, the Akron and the _acon, which had been built for this purpose, The only significant past commercial airship operations were those of the Zeppelin Company and its subsidiary DELAG. The highlights of these operations are listed on Table I.!. None of these commercial operations can be considered a _inancial success and most were heavily subsidized by the Germa_ govern- ment. For example, the transatlantic service with the Graf Zeppelin in 1933-1937 required a break-even load _actor of g3-g8_, a value seldom achieved, despite carrying postage at rates over ten times higher than 1Q75 air mail rates. Throuqhout most qf these Commercial operations, there was little or no competition from heavier- than-air craft. _owever, ai,plane technology was making rapid strides and airplane speed, range, and productivity were rising steadily. Airships and airplanes are difficult to compare because of the remoteness of the time oerlod and toe limited operational experience. Nevertheless, by the time of the Hindenburg disaster in Ia37, it seems clear that the most advanced airplane, the DC-3, had lower oper- atin_ COSTS as well _S hi_her C_uisinq speeds than the most advanced airship, the Hindenburg CReFs, ],2_ an_ I._?). Of course, this tended to b_ offset bv the Hindenburg'S luxurv and longer range. Neverthe- less, it is clear that althouQh the hurninQ of the Hindenburg hastened the end of the COmmercial airship era, it was not the primary causm; the airship had become economically uncompetitive. Bv aTl accounts, the use of nonrioid airships by the U.S, Navy in World War IS and subsequent years was very successf_Jl. The Navy's Fleet of nonrigids increased from I0 vehicles at the beginning of the War to I_5 at the end, and over _00,000 flioht hours were logged during the War. The airships were used For ocean patrol a,¢ surveillance, primarily as reTated to surface vessel escort and antisubmarine operations. 
The decommissioning of the Navy's airship fleet in 1961 was due apparently to austere peacetime military budgets and not to any operational deficiency.

We will conclude this Introduction with a discussion of the technical, operational and economic characteristics of past airships and indicate how modern technology could be used to improve the performance of all airship designs.

All three types of conventional airships evolved into a common shape, the familiar "cigar shape" with circular cross sections and a nearly elliptical profile. The fineness ratio of the later rigid airships was typically in the range 6-8. The fineness ratio of the nonrigid airships, which tended to be smaller and slower than the rigid ones, was typically in the range 4-5. It is generally acknowledged today that past conventional, fully buoyant airship designs were very nearly optimum for this class of vehicle in terms of aerodynamic shape and fineness ratio. Thus a modern conventional airship could not be expected to show much improvement in this regard. It is estimated that a drag reduction of approximately 10% would be possible with adequate attention to surface smoothness. Use of boundary-layer control may give significantly greater drag reduction (Ref. 1.24). Reviews of airship aerodynamics for both conventional and hybrid configurations may be found in Refs. 1.25 and 1.26. Also of interest for aerodynamic analysis is Ref. 1.27.

The early airships were designed primarily by empirical methods, and the only company to accumulate sufficient experience to design successful rigid airships was the Zeppelin Company. Two areas in which there was a serious lack of knowledge were aerodynamic loads and design criteria. Work in these areas was continued after the decommissioning of the last rigid airship in expectation of further developments. Significant progress was made in both analytical and experimental techniques, but further work would need to be done in these areas for a modern airship.
The frames of most of the past rigid airships consisted of built-up rings and longitudinal girders stabilized with wire bracing. The rings and longitudinals were typically made of aluminum alloy and the bracing was steel. This structure was very light and efficient, even by present standards. However, this construction was highly complex and labor intensive, and any modern airship of this type would have to have a much simpler construction. Possibilities include the use of metalclad monocoque, sandwich, or geodesic frame construction. Materials would be modern aluminum alloys or filamentary composite materials. A good candidate for wire bracing, if required, is Kevlar rope. It is estimated that the use of modern construction and materials would result in a hull weight saving of approximately 25% compared with a past design such as the Macon.

There have been dramatic improvements in softgoods with applications for airships in the past two decades. Softgoods are used for gas cells and outer coverings for rigid airships and for envelopes for nonrigid airships. The material most often used in past airships for these applications was neoprene-coated cotton, although the envelopes of the later nonrigid airships were of dacron. The dramatic improvement in strength of modern softgoods compared with cotton is shown in Fig. 1.5. Kevlar appears to be the best material, but it has not been fully developed for use in large airships. It is estimated that use of modern softgoods would result in component weight reductions of 40-70% compared with past designs. Coating films also have been improved greatly, which will result in a tenfold improvement in gas cell and envelope permeability.

With a few explainable exceptions, past airships have all had about the same structural efficiency (as measured by empty weight/gas-volume ratio) despite differences in size, design concept, year of development, and lifting gas.
The insensitivity to size is a reflection of the airship "cube-cube law" (i.e., both the lifting capability and the structural weight increase in proportion to the cube of the principal dimension for a constant shape). Since fixed-wing heavier-than-air craft follow a "square-cube law," airships will compare more favorably with heavier-than-air craft as size is increased. Smaller airships have tended to have nonrigid or semirigid construction, whereas the larger airships have been rigid, and this would be true of modern vehicles as well.

Either Otto- or Diesel-cycle engines were used on the large airships of the 1930's. The internal combustion engine has lower fuel consumption in small sizes; however, the turbine engine can be adapted for a variety of fuels and is lighter and quieter. As compared with engines of the 1930's, modern engines have about 90% of the specific fuel consumption and as low as 10% of the specific weight and volume. Perhaps more important than these improvements is the greatly improved reliability and maintainability of modern turboshaft engines. Thrustors will be either prop/rotors or ducted fans; ducted fans are quieter, safer for ground personnel, and have higher thrust.

There are also some longer-term alternative propulsion systems for airships. The Diesel engine is attractive because of its low fuel consumption. However, no Diesel currently available is suitable for airship use. Another possible propulsion system is a nuclear powerplant, particularly for long endurance missions and large airships. An extensive development program will be required to develop a nuclear-powered airship.

Engine controls of the rigid airships consisted of an engine telegraph that transmitted engine control commands from the helmsman to an engine mechanic, who would then manually make the required engine control changes. Modern electronic power management systems will eliminate this cumbersome system and greatly increase the responsiveness, accuracy, and reliability of engine controls.
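The "cube-cube" versus "square-cube" scaling argument discussed above can be made concrete with a short numerical sketch. The baseline coefficient (0.4) and the scale factors below are arbitrary illustrative values, not figures from this report:

```python
# Illustrative comparison of the "cube-cube" law (airships) with the
# "square-cube" law (fixed-wing aircraft).  The 0.4 coefficient is an
# arbitrary baseline chosen for illustration only.

def airship_weight_fraction(scale):
    """Airship: buoyant lift and structural weight both scale as L^3,
    so the structure/lift ratio is independent of size."""
    lift = scale ** 3            # lift ~ enclosed volume ~ L^3
    structure = 0.4 * scale ** 3 # structural weight also ~ L^3
    return structure / lift

def airplane_weight_fraction(scale):
    """Fixed-wing aircraft: lift scales with wing area (~L^2) while
    structural weight scales as ~L^3, so the ratio worsens with size."""
    lift = scale ** 2
    structure = 0.4 * scale ** 3
    return structure / lift

for s in (1.0, 2.0, 4.0):
    print(f"scale x{s:g}: airship {airship_weight_fraction(s):.2f}, "
          f"airplane {airplane_weight_fraction(s):.2f}")
```

The constant ratio for the airship is why airships compare increasingly well with heavier-than-air craft as size grows.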
Control of the thrust vector orientation by tilting mechanisms will also be greatly enhanced with modern systems.

Flight-control systems on past airships have been largely mechanical. Commands from the helm (one each for vertical and horizontal surfaces) were transmitted by cable and pulley systems to the control surfaces. In addition, there were manual controls for releasing ballast and valving lifting gas. For a large modern airship, a fly-by-wire or fly-by-light control system has obvious advantages and would likely be employed. This system would use many airplane- and/or helicopter-type components. An autopilot would also be provided.

Between the 1930's and the present, there has been a vast improvement in avionics systems due largely to the dramatic changes in electronic communications devices. For example, as compared with 1930 components, modern aviation radio equipment is about one-tenth the size and weight and is much more versatile and reliable. Progress in the development of electronic components has also made possible the introduction of many navigation devices not available in the 1930's (e.g., VOR/DME/ILS, TACAN, radar, LORAN, OMEGA, and inertial systems).

The various improvements in controls, avionics, and instrumentation will only modestly reduce the empty weight of the airship, but will significantly improve its controllability and reliability. Of course, a large increase in acquisition cost will be associated with these modern systems and components, but this will be offset by lower operating costs due to manpower reductions.

The operation of the 1930's airships was as labor intensive as their construction. In flight, large onboard crews were required to constantly monitor and adjust the trim of the ship and maintain nearly neutral buoyancy.
Trim and neutral buoyancy were maintained by one or more of the following procedures: valving lifting gas, dropping ballast, transferring fuel or other materials within the airship, collecting water from the atmosphere and engine exhaust, and moving crew members within the airship. Also, it was not unusual to repair the structure and the engines in flight. It is obvious that modern structural concepts, engines, avionics, control systems, and instrumentation will decrease the workload of the onboard crew considerably.

The experience of the U.S. Navy in the 1940's and 1950's with nonrigid airships indicates that modern airships can be designed to have all-weather capability at least equivalent to that of modern airplanes. High winds and other inclement weather need not endanger the safety of the airship and its crew either in flight or on the ground. However, high adverse winds will continue to have a negative impact on the operational capability of airships due to their low airspeeds.

Extremely large ground crews were needed to handle the early Zeppelins. These airships were walked in and out of their storage sheds by manpower. Up to 700 men were used to handle the Zeppelin military airships. The first significant change was the development of the high-mast mooring system by the British. The U.S. Navy then developed the low-mast system, which was more convenient, less expensive, and allowed the airship to be unattended while moored. Important developments in ground handling subsequent to the 1930's were made by the Navy in connection with its nonrigid airship operations. By 1960, the largest nonrigid airships were routinely being handled on the ground by small crews that used mobile masts and "mules." These mules were highly maneuverable tractors with constant-tension winches. Some further improvement in ground-handling procedures would be possible with a modern airship. Handling "heavy" or hybrid airships would be particularly easy. As shown in Fig.
1.6, the flyaway costs per pound of empty weight of the rigid airships of the 1930's were comparable with those of transport airplanes of the same era. Since then, the costs of transport airplanes have steadily risen, even when inflationary effects are factored out, because the steady introduction of new technology has made succeeding generations of airplanes more sophisticated and expensive. The increased costs have paid off in increased safety, reliability, and productivity. As discussed above, a modern airship would have several systems and components that are highly advanced compared with 1930's technology. Thus it seems likely that rigid-airship flyaway costs would follow the trend of fixed wing aircraft (Fig. 1.6), and therefore a modern rigid airship should cost about the same as an equivalent weight modern airplane. A modern nonrigid airship could cost somewhat less.

1.4 REFERENCES

1.1 Bloetscher, F.: Feasibility Study of Modern Airships, Phase I, Final Report. Vol. I. Summary and Mission Analysis. NASA CR-137692, Aug. 1975.
1.2 Davis, S. J.; and Rosenstein, H.: Computer Aided Airship Design. AIAA Paper 75-945, 1975.
1.3 Faurote, G. L.: Feasibility Study of Modern Airships, Phase I, Final Report. Vol. III. Historical Overview. NASA CR-137692(3), 1975.
1.4 Faurote, G. L.: Potential Missions for Modern Airship Vehicles. AIAA Paper 75-947, 1975.
1.5 Grant, D. T.; and Joner, B. A.: Potential Missions for Advanced Airships. AIAA Paper 75-946, 1975.
1.6 Huston, R. R.; and Faurote, G. L.: LTA Vehicles -- Historical Operations, Civil and Military. AIAA Paper 75-939, 1975.
1.7 Joner, B.; Grant, D.; Rosenstein, H.; and Schneider, J.: Feasibility Study of Modern Airships. Final Report, Phase I, Vol. I. NASA CR-137691, May 1975.
1.8 Joner, B. A.; and Schneider, J. J.: Evaluation of Advanced Airship Concepts. AIAA Paper 75-930, 1975.
1.9 Lancaster, J. W.: Feasibility Study of Modern Airships, Phase I, Final Report. Vol. II. Parametric Analysis. NASA CR-137692, Aug.
1975.
1.10 Lancaster, J. W.: Feasibility Study of Modern Airships, Phase I, Final Report. Vol. IV. Appendices. NASA CR-137692, Aug. 1975.
1.11 Lancaster, J. W.: LTA Vehicle Concepts to Six Million Pounds Gross Lift. AIAA Paper 75-931, 1975.
1.12 Anon.: Feasibility Study of Modern Airships, Phase II, Vol. I. Heavy Lift Airship Vehicle. Book I. Overall Study Results. NASA CR-151917, Sept. 1976.
1.13 Anon.: Feasibility Study of Modern Airships, Phase II, Vol. I. Heavy Lift Airship Vehicle. Book II. Appendices to Book I. NASA CR-151918, Sept. 1976.
1.14 Anon.: Feasibility Study of Modern Airships, Phase II, Vol. I. Heavy Lift Airship Vehicle. Book III. Aerodynamic Characteristics of Heavy Lift Airships as Measured at Low Speeds. NASA CR-151919, Sept. 1976.
1.15 Anon.: Feasibility Study of Modern Airships, Phase II, Vol. II. Airport Feeder Vehicle. NASA CR-151920, Sept. 1976.
1.16 Anon.: Feasibility Study of Modern Airships, Phase II. Executive Summary. NASA CR-2922, 1977.
1.17 Huston, R. R.; and Ardema, M. D.: Feasibility of Modern Airships -- Design Definition and Performance of Selected Concepts. AIAA Paper 77-331, Jan. 1977.
1.18 Ardema, M. D.: Feasibility of Modern Airships -- Preliminary Assessment. J. Aircraft, vol. 14, no. 11, Nov. 1977, pp. 1140-1148.
1.19 Anon.: Proceedings of the Interagency Workshop on Lighter-Than-Air Vehicles. Flight Transportation Laboratory Report R75-2, Cambridge, MA, Jan. 1975.
1.20 Ardema, M. D.: Assessment of an Emerging Technology. Astronautics and Aeronautics, July/August 1980, pp. 54.
1.21 Ardema, M. D.: In-Depth Review of the 1979 AIAA Lighter-Than-Air Systems Technology Conference. NASA TM-81158, Nov. 1979.
1.22 Ardema, M. D.: Economics of Modern Long-Haul Cargo Airships. AIAA Paper 77-1192, 1977.
1.23 Ardema, M. D.: Comparative Study of Modern Long-Haul Cargo Airships. NASA TM X-73,168, June 1976.
1.24 Goldschmied, F. A.: Integrated Hull Design, Boundary-Layer Control and Propulsion of Submerged Bodies.
AIAA Paper 66-658, 1966.
1.25 Curtiss, H. C.; Hazen, D. C.; and Putman, W. F.: LTA Aerodynamic Data Revisited. AIAA Paper 75-951, 1975.
1.26 Putman, W. F.: Aerodynamic Characteristics of LTA Vehicles. AIAA Paper 77-1176, 1977.
1.27 Jones, S. P.; and DeLaurier, J. D.: Aerodynamic Estimation Techniques for Aerostats and Airships. J. Aircraft, vol. 20, no. 1, Jan. 1983, pp. 120.

[Figure: Classification of aircraft -- heavier-than-air (fixed wing, rotary wing) and lighter-than-air (airships, balloons); airships divided into conventional (rigid, semi-rigid, nonrigid) and hybrid (fixed wing, rotary wing, lifting body, unique) types; balloons into free and tethered.]

[Figure: Rigid airship structural arrangement -- catenary curtain, suspension cables, longitudinal girders, intermediate rings (frames), engine car, control car.]

[Figure: Timeline of representative rigid, semi-rigid, and nonrigid airships, ca. 1880-1960.]

[Fig. 1.5: Strength of modern softgoods compared with cotton and neoprene, by weight (oz/yd^2), 1930-1980.]

It was mentioned in the Introduction that the most successful past employment of airships was their use for ocean patrol and surveillance by the U.S. Navy during World War II and subsequent years. For two major reasons, there has been recently a sharp rekindling of interest in improving patrol and surveillance capability, particularly over water. First, the rapidly increasing sophistication and numbers of Soviet combat ships, particularly submarines, have increased the need for deep ocean surveillance platforms (with high endurance and high dash speeds) capable of employing a wide variety of electronic and acoustic devices. Second, the recent extension of territorial water limits to 200 miles offshore has greatly increased the need for coastal patrols for a wide variety of maritime tasks.
Missions similar to coastal patrol and deep ocean surveillance, in terms of vehicle design requirements, are disaster relief and law enforcement.

It is not difficult to see why airships are being considered for this class of mission. Relative to conventional surface ships, the airship has greater dash speed, is not affected by adverse sea conditions, and has a better observational vantage point. It is less detectable by underwater forces, more visually observable to surface vessels and other aircraft, and can be made less visible to radar. Relative to other types of aircraft, the airship has the ability to station-keep with low fuel expenditure (and thus has longer endurance), can deliver a substantial payload over long distances, and has relatively low noise and vibration. In effect, the airship as a vehicle class can be thought of as filling the gap between heavier-than-air craft and surface vessels in terms of both speed and endurance (Fig. 2.1) and speed and payload (Fig. 2.2). These figures are for coastal patrol platforms but the same could be said for deep ocean surveillance vehicles as well. In the final analysis, perhaps the biggest stimulus for the renewed interest in airships for these missions is the present high cost of petroleum-based fuels.

Thus there are many fundamental reasons why the airship enjoyed success in its past patrol and surveillance role with the Navy and why there is considerable interest in this application for the future. In fact, many recent studies have arrived at positive conclusions for using airships for these missions (Refs. 2.1-2.6). However, it must be kept in mind that the airship is not the panacea for all patrol and surveillance applications. For situations in which either sustained or exceptionally high dash speed is crucial, or high altitude is highly desirable, or the transfer of large amounts of material to another vessel is required, or hostile forces are present, another vehicle type would likely be superior.
An airship enjoys its high endurance and payload performance only at low speed and altitudes. High dash speed is possible, but requires high fuel consumption; therefore, performance will be poor unless dash speed is used only sparingly. Payload capability falls off rapidly as altitude increases and, additionally, fuel consumption increases for station-keeping because of higher relative winds at higher altitudes.

In view of the premium on endurance in most patrol and surveillance missions, a fully or nearly fully buoyant airship of classical nearly ellipsoidal shape is indicated, and most recent studies have considered only this basic vehicle type (Refs. 2.3, 2.5, and 2.7). Because of the dramatic improvement in softgoods over the past few decades, mentioned in the previous section, attention has been focused on the nonrigid concept. Using modern materials, nonrigid airships are now probably superior to rigid designs at least up to a size of 5 x 10^6 ft^3 and possibly well beyond. The two major variables affecting vehicle design for the various patrol and surveillance missions are vehicle size (driven primarily by payload and endurance requirements) and degree of "hoverability" required.

It must be mentioned that several operational issues remain at least partly unresolved for airships performing the missions under consideration here. Many of these questions will likely be resolved only by operational experience with actual vehicles. One of these issues is weather. By the very nature of most patrol and surveillance tasks, any vehicle must be able to operate in an extremely wide variety of weather conditions. Operational locations cover the entire globe and thus climates range from arctic to tropical. Missions must be performed in all weather and in fact for some applications, such as rescue work, operational requirements increase as weather conditions deteriorate.
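The rapid fall-off of payload with altitude noted above follows directly from the buoyancy relation: gross lift is (air density minus gas density) times envelope volume, and air density decreases roughly exponentially with altitude. A minimal sketch, using an isothermal-atmosphere approximation and a hypothetical envelope volume (the scale height and vehicle numbers are illustrative assumptions, not figures from this report):

```python
import math

def gross_lift_lb(volume_ft3, altitude_ft):
    """Approximate helium gross lift vs. altitude.  Uses a simple
    exponential-atmosphere density model (scale height ~27,700 ft) and
    assumes the lifting gas expands with altitude, so lift per unit
    volume tracks local air density."""
    rho_air_sl = 0.0765   # sea-level air density, lb/ft^3
    rho_he_sl = 0.0106    # sea-level helium density, lb/ft^3
    scale_height = 27700.0  # ft, rough isothermal scale height
    density_factor = math.exp(-altitude_ft / scale_height)
    return (rho_air_sl - rho_he_sl) * density_factor * volume_ft3

V = 800e3  # hypothetical 800,000 ft^3 coastal-patrol envelope
for h in (0, 5000, 10000):
    print(f"{h:>6} ft: {gross_lift_lb(V, h):,.0f} lb gross lift")
```

Since the empty weight is fixed, payload (gross lift minus empty weight) shrinks even faster, fractionally, than the gross lift shown here.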
The Navy's experience with airships in the 1940's and 1950's indicates that airships can be designed to have the same all-weather performance as other aircraft. Even though some doubts still remain, modern design methods should be able to improve even further the ability of airships to operate in heavy weather.

Another question is that of low speed control. The classical fully-buoyant large airship, having only aerodynamic controls, was largely uncontrollable at airspeeds below 15 knots (Ref. 2.7). This would be operationally unacceptable for most patrol and surveillance missions. This was also a primary cause of the ground handling problems experienced by past airship operations. It is clear that a low speed control system, probably utilizing propulsive forces, will be required.

The question of how to ground-handle airships would seem to be the major unresolved issue. Past airship operations were characterized by large manpower requirements, large ground facilities, and frequent damage to the vehicles. Although the U.S. Navy made considerable improvements in its nonrigid airship operations towards the end, there is still a definite need for improvement. An essential requirement would seem to be the development of an all-weather, outdoor mooring system with minimal ground crew requirements. Addition of a low speed control system to the vehicle should help considerably.

Finally, assuming all operational questions have been satisfactorily resolved, the development of airships for patrol and surveillance will hinge on their cost effectiveness in performing these tasks. Most of these applications can be done by other existing and proposed vehicle types and therefore a careful comparative economic analysis will be required.

In the past few years there has been a great deal of interest in the use of airships by the U.S. Coast Guard.
This stems primarily from the extension of the limits of territorial waters to 200 miles offshore and the dramatic increase in fuel prices over the last 10 years. The U.S. Coast Guard and the U.S. Navy, with support from NASA, have conducted and sponsored numerous studies of the application of airships to various Coast Guard missions (Refs. 2.1-2.3, 2.7, 2.8). A study of the use of airships in Canada is reported in Ref. 2.9. Almost without exception, these studies have concluded that airships would be both cost effective and fuel efficient when compared with existing and planned Coast Guard aircraft for many coastal patrol tasks.

To quote Ref. 2.8: "The predominant need within Coast Guard mission areas is for a cost effective aerial surveillance platform. The object of surveillance may be an oil slick, an individual in the water, an iceberg or pack ice, small craft, fishing vessel or even a submersible. [In all these cases] the need exists for the mission platform to search, detect, and identify or examine. Consequently any airship design for Coast Guard applications must consider the capability to use a variety of sensors operating throughout the electromagnetic spectrum. Undoubtedly, the primary long range sensor for most missions will be some form of radar. It would also be desirable for such a platform to be able to directly interact with the surface -- to deploy and retrieve a small boat; to tow small craft, oil spill cleanup devices, and sensors; and to deliver bulky, moderate weight payloads to the scene of pollution incidents. If an airship were capable of routinely directly interacting with the surface, such an airship could serve as a very effective multimission platform. However, the airship must serve predominately as a fuel efficient aerial surveillance platform."

With these basic requirements in mind, a recent study (Refs. 2.2, 2.3) identified eight Coast Guard tasks for which airships seem to be potentially suitable.
The characteristics and requirements of these tasks are listed in Table 2.1. The maximum capability required for each mission parameter is underlined. At the present time, the Coast Guard uses a mix of boats, ships, helicopters, and fixed-wing aircraft to perform these tasks. However, many typical mission profiles for the applications listed in Table 2.1 seem to be better tailored to the airship's natural attributes, in that endurance is of prime importance and high speed dash and precision hover occur only infrequently and for relatively short duration (Ref. 2.1). To summarize airship vehicle mission requirements, in Ref. 2.8 it is concluded that the following qualities are needed: (1) endurance of 1 to 4 days, depending on cruise speed; (2) dash speed of 90 knots; (3) fuel efficient operation at speeds of 20 to 50 knots; (4) controllability and hoverability in winds from 0 to 45 knots; (5) ability to operate in almost all climates and weather conditions; and (6) ability to survive, both on the ground and in the air, in all weather conditions.

Two recent industry studies (Refs. 2.10 and 2.11) have conceptually designed airships to meet the mission requirements listed in Table 2.1. The size of airship required ranges from a volume of about 300 x 10^3 ft^3 for the Port Safety and Security (PSS) mission to about 1000 x 10^3 ft^3 for the Marine Science Activities (MSA) mission. All studies concluded that an airship of about 800 x 10^3 ft^3 volume and 2000 horsepower could perform every mission except MSA, and could even do that mission with a somewhat reduced capability. The specifications and performance of a typical conceptual design are indicated in Table 2.2 (Refs. 2.7, 2.10). As stated in Ref. 2.7, such a vehicle would employ modern but proven technology and be well within the size range of past successful nonrigid designs. Therefore, the technical risk would be low.
The most significant difference in the design of a modern coastal patrol nonrigid airship, as compared with past Navy vehicles, will be the use of propulsive lift to achieve low speed controllability and hoverability. In fact, the power requirements and the number and placement of propulsors is likely to be determined from hoverability requirements rather than from cruise performance. Such a vehicle would also be capable of vertical takeoff and landing (VTOL) performance although increased payloads would be possible in short takeoff and landing (STOL) operation.

Two different approaches to a modern coastal patrol airship are shown in Figs. 2.3 and 2.4 (Refs. 2.3, 2.10-2.12). The trirotor Goodyear design (the characteristics of which are listed in Table 2.2) mounts two tilting propellers forward on the hull and the third at the stern. Movable surfaces, on an inverted V-tail supporting the stern propeller and on the wings supporting the forward propellers, provide forces and moments in hover. A notable advantage of this concept is the greater cruise efficiency of the stern propeller, resulting from operating in the airship's wake. The quadrotor Bell design is an adaptation of the Piasecki Heli-stat, or buoyant quadrotor concept, under consideration for vertical heavy lift and described in Section 2.2. In the quadrotor approach, two diagonally opposed rotors carry a steady down load while the other two produce an upward force. By this means, rotor lift forces are available for cyclic deflection to produce control forces and moments. A significant feature of this concept is that no ballast recovery would be necessary.

A preliminary study of the acquisition and operating costs of the type of maritime patrol airship just described has been undertaken (Refs. 2.2, 2.3). Briefly, this study arrived at a unit cost of about $5 million per airship (based on a production of 50 units).
When the required investment in ground facilities and training is factored in, the total initial investment cost rises to $6.4 million per airship. The life-cycle costs, when prorated on a flight hour basis, were estimated to range between $750 to $1150 per flight hour, depending on the mission. These costs are very competitive with those of existing mission-capable aircraft and surface vessels, and a preliminary survey of Coast Guard needs identified a potential requirement for more than 75 airships. The study concluded that airships appear to be technically and operationally feasible, cost-effective, and fuel-efficient for many maritime patrol needs.

The remaining unresolved technical issues for a coastal patrol airship all have to deal with hoverability. The following questions all need more precise answers than are available today: What is the degree of hoverability required for mission effectiveness? What is the best design concept for a hoverable airship? What is the trade-off between performance in cruise and in hover?

A major step toward answering these questions is being taken in the current flight tests of the AI-500 (Skyship) by the U.S. Coast Guard and U.S. Navy. The AI-500 is a development of Airship Industries of the United Kingdom. It is a nonrigid airship of 181,000 ft^3 volume and has many advanced design features such as composite material structures and vectored thrust propulsion. In addition to the maritime patrol flight demonstrations in the U.S., the airship is being tested in England for the purpose of obtaining an airworthiness certificate (Ref. 2.13) for commercial and military use.

As mentioned previously, there is increasing concern over the growing threat of Soviet seapower and this has led to a renewed interest in airships for patrol and surveillance at locations far removed from the shore. As compared to the coastal patrol missions, modern airships for deep ocean missions have been analyzed in only a very preliminary way.
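The way per-flight-hour figures like those quoted above are derived can be sketched simply: total life-cycle cost (initial investment plus cumulative operating cost) prorated over total flight hours in the service life. The operating cost, service life, and utilization inputs below are hypothetical placeholders, not the study's actual assumptions:

```python
# Minimal life-cycle cost proration sketch.  Only the $6.4M initial
# investment comes from the text above; the other inputs are assumed.

def cost_per_flight_hour(initial_investment, annual_operating_cost,
                         service_life_yr, utilization_hr_per_yr):
    """Prorate total life-cycle cost over total flight hours."""
    total_hours = service_life_yr * utilization_hr_per_yr
    life_cycle_cost = (initial_investment
                       + annual_operating_cost * service_life_yr)
    return life_cycle_cost / total_hours

# Hypothetical: $2.2M/yr operating cost, 15-yr life, 2,500 flight hr/yr.
rate = cost_per_flight_hour(6.4e6, 2.2e6, 15, 2500)
print(f"${rate:,.0f} per flight hour")
```

With these assumed inputs the result lands in the general vicinity of the study's $750-$1150 range, which is the point of the exercise: the per-hour figure is dominated by utilization and annual operating cost, not by the acquisition price.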
Since the biggest threat seems to be from submarines, we will concentrate here on the anti-submarine warfare (ASW) class of missions, but applications to sea control escort, electronic warfare, and oceanography (the latter largely a civil application) will be considered briefly as well. The principal references for the discussion which follows are Refs. 2.4-2.6, and particularly Ref. 2.4, which focuses on the ASW mission.

According to a quote in Ref. 2.4, "The Soviet submarine force continues to be a primary threat to our vital sea lanes of communications and to our naval forces during an armed conflict." A basic mission need thus exists "...to provide the Navy with an affordable, improved ASW capability to counter a growing submarine threat to our merchant ships, projection forces, and ballistic missile firing submarines." Compounding the problem is the fact that the oceans are getting "noisier," due to increased activity from ships, weapons, and counter measures, at the same time that advancing technology is rendering submarines "quieter." ASW was a key element of the Navy's efforts in World War II (Ref. 2.14) and it is clear that, if anything, it will be even more important in the future.

Basically, in ASW an area of the ocean must be patrolled in a given period of time to detect, classify, locate, and either trail or attack the submarines found. This requires placing a vehicle in the required location and providing it with the sensors and weapons necessary to perform these duties. There is really no one "ASW mission" but rather a wide variety of tasks. Among the mission parameters which will affect vehicle design and performance are: distance to the operating area, time on station, response time, extent of the area to be searched, and the functions to be performed. Because of the complex nature of ASW, the U.S. Navy currently depends upon a variety of air and surface platforms and sensors used in a coordinated manner.
An airship, if developed for this purpose, would work in conjunction with other vehicle types, doing only those aspects of ASW for which it is best suited.

It must be mentioned that the airship is by no means the only "advanced concept" being considered for ASW and related Navy applications. Figure 2.5 shows several possible advanced vehicle concepts including the surface effect ship (SES), the small water area twin hull (SWATH) ship, the patrol hydrofoil, the sea-loiter aircraft, the advanced land-based maritime patrol aircraft, and the helicopter and other V/STOL aircraft. Preliminary conclusions regarding many of these concepts have been positive. The recent Advanced Naval Vehicles Concept Evaluation Program has been the most detailed comparative study of these vehicle concepts to date (Ref. 2.15). Since not all, if any, of these concepts can be developed by the Navy in the near future, much careful vehicle analysis remains to be done.

Reference 2.4 has provided a preliminary analysis of the principal features of a deep ocean patrol airship. It would be a conventionally shaped airship of about 4 x 10^6 ft^3 volume, provided that refueling at sea is done routinely (but probably considerably larger if required to be completely self-sufficient). It should have a maximum speed of at least 85 knots and a service ceiling of at least 10,000 ft. The crew size would be approximately 15-18 people and, with refueling and resupply done at sea, the airship should be able to stay on station almost indefinitely. It is obvious that such a platform would be attractive for many ASW tasks.

One of its outstanding attributes is the airship's capability for carrying ASW sensors. Reference 2.4 concludes that an airship can use almost all of the existing and proposed sensors, although some may require slight modification.
As compared to existing sensor platforms, the airship provides a unique combination of high payload, large size, low vibration, long-term station-keeping ability, and low noise propagated into the water. It would be particularly effective in towing large acoustic arrays. On the negative side, airships may have some disadvantages with regard to offensive combat capability and vulnerability to both weapons and weather. The question of all-weather capability for airships was discussed in Section 2.2, where it was conjectured that this will not be more of a problem than for other vehicles. The question of vulnerability to weapons is perhaps also not as serious a problem as it would first appear. It is true that an airship would be in most respects the most visible of all possible ASW platforms. However, the radar cross section could probably be made to be no larger than that of fixed-wing aircraft because it should be possible to make the envelope transparent to radar. An airship vehicle may be no more vulnerable to weapons than any other platform because impact to the envelope would not generally be lethal. The suitability of an airship as a weapons platform remains to be resolved.

Most ships and aircraft in use by any navy are multifunctional by necessity, and an airship, as any new vehicle, would be expected to be likewise. There appear to be several other missions for which an airship designed primarily for ASW could provide support; these include anti-surface warfare, anti-air warfare, airborne early warning, electronic warfare, mine warfare, logistics resupply, and oceanography. Many of the airship's natural attributes could be used to advantage in these missions. One interesting possibility is that the airship could be designed for maximal, instead of minimal, radar cross section and could be used to simulate a carrier task group. It would also be an excellent platform for electronic support measures.
The potential of airships for sea control and task force escort missions has been examined in Ref. 2.5. The basic problem is to protect a task force from long-range anti-ship cruise missiles, requiring over-the-horizon detection. This function is now performed by carrier-based aircraft but they are not well suited for this purpose and their use in this role decreases the task force offensive capability. The role of the airship would be to provide standoff airborne early warning (AEW) as well as command and control for counterattack systems. Reference 2.5 estimates that the use of airships in this way would increase the cost-effectiveness and striking power of the carrier task force, primarily by freeing heavier-than-air craft for other missions. An aspect of the AEW mission which is not well suited to airships is the need for high altitude in order to attain as large a radar horizon as possible. In Ref. 2.5 an operating altitude of 15,000 ft is proposed as a good compromise between airship size and radar horizon. At this altitude, for a payload requirement of 60,000 lb, a 7 x 10^6 ft^3 vehicle is required. Thus, although the AEW airship could perform many ASW tasks, a vehicle designed for ASW would be too small and would have insufficient altitude capability for most AEW tasks.

One final deep ocean mission which deserves mention is oceanography. Although this application is too limited ever to justify airship vehicle development on its own, if a deep ocean naval airship were ever developed such a vehicle would have many interesting civil and military oceanographic applications (Ref. 2.6). Basically, airships could make ocean measurements that are difficult, or impossible, to make from existing platforms. For example, an improved ability to conduct remote sensing experiments of both the sea surface and the lower marine atmosphere is badly needed. The airship would work in conjunction with existing satellite systems and oceanographic ships.
To conclude this section, we paraphrase the conclusion in Ref. 2.4. Lighter-than-air vehicles seem to be a viable vehicle choice for many ASW missions and other deep ocean missions. Their unique features give them many advantages over surface vessels and other aircraft for these applications. An ocean patrol airship would have multimission capability and would work well in concert with existing vehicles. Development of such a vehicle would require minimal new vehicle technology and would not require the development of new sensor and other systems.

2.4 REFERENCES

2.1 Williams, K. E.; and Milton, J. T.: Coast Guard Missions for Lighter-Than-Air Vehicles. AIAA Paper 79-1570, 1979.
2.2 Rappoport, H. K.: Analysis of Coast Guard Missions for a Maritime Patrol Airship. AIAA Paper 79-1571, 1979.
2.3 Bailey, D. B.; and Rappoport, H. K.: Maritime Patrol Airship Study. AIAA Journal, Vol. 18, No. 9, Sept. 1981.
2.4 Handler, G. S.: Lighter-Than-Air Vehicles for Open Ocean Patrol. AIAA Paper 79-1576, 1979 (also Naval Weapons Center TP 3984).
2.5 Kinney, D. G.: Modern Rigid Airships as Sea Control Escort Platforms. AIAA Paper 79-1575, 1979.
2.6 Stevenson, R. E.: The Potential Role of Airships for Oceanography. AIAA Paper 79-1574, 1979.
2.7 Brown, N. D.: Tri-Rotor Coast Guard Airship. AIAA Paper 79-1573, 1979.
2.8 Nivert, L. J.; and Williams, K. E.: Coast Guard Airship Development. AIAA Paper 81-1311, 1981.
2.9 Unwin, C. L. R.: The Use of Non-Rigid Airships for Maritime Patrol in Canada. AIAA Paper 83-1971, 1983.
2.10 Brown, N. D.: Goodyear Aerospace Conceptual Design Maritime Patrol Airship -- ZP3G. NAVAIRDEVCEN Rep. NADC-78075-60, April 1979.
2.11 Bell, J. C.; Marketos, J. D.; and Topping, A. D.: Maritime Patrol Airship Concept Study. NAVAIRDEVCEN Rep. NADC-75074-60, Nov. 1978.
2.12 Eney, J. A.: Twin-Rotor Patrol Airship Flying Model. AIAA Paper 81-1312, 1981.
2.13 Bennett, A. F. C.; and Razavi, N.: Flight Testing and Operational Demonstration of a Modern Non-Rigid Airship.
AIAA Paper 83-1999, 1983.
2.15 Meeks, T. L.; and Mantle, P. J.: Evaluation of Advanced Navy Vehicle Concepts. AIAA/SNAME Paper 76-546, 1976.

[Figure: comparison of aircraft, airships, and cutters in payload (1,000 lb) and endurance (hr).]
Fig. 2.4 Bell Aerospace patrol airship design (length 324 ft, diameter 73 ft)
[Fig. 2.5: advanced naval vehicle concepts -- air-loiter airplane, sea-loiter airplane, LTA, helicopter, planing craft, and submarine.]

3. VERTICAL HEAVY-LIFT

Early studies (Refs. 1.1-1.18 and 3.1-3.9) concluded that modern air-buoyant vehicles could satisfy the need for vertical lift and transport of heavy or outsized payloads over short distances. There are two reasons that such aircraft, called heavy-lift airships (HLAs), appear attractive for both military and civil heavy-lift applications. First, buoyant lift does not lead to inherent limitations on payload capacity as does dynamic lift. This is because buoyant-lift aircraft follow a "cube-cube" growth law whereas dynamic-lift aircraft follow a "square-cube" law, as discussed in Section 1.3. Figure 3.1 shows the history of rotorcraft vertical-lift capability. Current maximum payload of free world helicopters is about 18 tons. Listed in the figure are several payload candidates for airborne vertical lift that are beyond this 18-ton payload weight limit, indicating a market for increased lift capability. Noteworthy military payloads beyond the existing vertical-lift capability are the main battle tank and large seaborne containers. Extension of rotorcraft lift to a 35-ton payload is possible with existing technology (Refs. 3.10, 3.11), and future development of conventional rotorcraft up to a 75-ton payload appears feasible (Ref. 3.11).
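The "square-cube" versus "cube-cube" distinction can be made concrete with a toy scaling calculation (illustrative normalized numbers only, not data from the cited studies):

```python
# Toy illustration of the "square-cube" vs. "cube-cube" growth laws.
# All quantities are normalized; no real vehicle data are used.

def scaled_lift_over_weight(scale, lift_exponent):
    """Lift/weight ratio after scaling all linear dimensions by `scale`.

    Weight is taken to grow with volume (scale**3); lift grows with
    scale**lift_exponent -- 2 for wing/rotor area, 3 for buoyant volume.
    """
    return scale**lift_exponent / scale**3

for scale in (1, 2, 4):
    dynamic = scaled_lift_over_weight(scale, 2)  # square-cube: ratio degrades
    buoyant = scaled_lift_over_weight(scale, 3)  # cube-cube: ratio constant
    print(f"scale {scale}: dynamic {dynamic:.2f}, buoyant {buoyant:.2f}")
```

Doubling the linear dimensions halves the lift-to-weight ratio of a purely dynamic-lift vehicle, while a buoyant vehicle's ratio is unchanged; this is the basic reason buoyant lift imposes no inherent payload ceiling.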
With HLA concepts, however, payload capability of up to 80 tons is possible using existing propulsion-system technology or even, if desired, existing rotorcraft propulsion-system hardware.

The second reason airships appear attractive for heavy lift is cost. Most HLA concepts are projected to offer lower development, manufacturing, maintenance, and fuel costs than large rotorcraft with the same payloads; thus total operating and life-cycle costs may be lower. The lower development cost arises from extensive use of existing propulsion-system technology or hardware, or both, making major new propulsion-system development unnecessary. Low manufacturing and maintenance costs accrue because buoyant-lift components are less expensive to produce and maintain than dynamic-lift components. Lower fuel costs follow directly from lower fuel consumption. As fuel prices increase, the high fuel efficiency of HLAs will become increasingly important. HLA costs and fuel efficiency will be discussed in more detail later.

Because the market for vertical lift of payloads in excess of 80 tons is a new one for aerial vehicles, the size and characteristics of the market are somewhat uncertain. As a result, several studies have been undertaken. Many of these studies have been privately funded and their results are proprietary, but the results of some have been published (Refs. 3.8, 3.9, 3.12-3.15). HLA market-study conclusions have been generally favorable. Table 3.1 summarizes the results of one of these, the NASA-sponsored study of civil markets for HLAs (Refs. 3.12, 3.13). The HLA civil market tends to fall into two categories. The first consists of services that are now or could be performed by helicopters, but perhaps only on a very limited basis. Payloads are low to moderate, ranging from about 15 to 80 tons. Specific markets include logging, containership offloading (of interest also to the military), transmission tower erection, and support of remote drill rigs.
HLAs would be able to capture greater shares of these markets than helicopters because of their projected lower operating costs. Most of these applications are relatively sensitive to cost. The largest market in terms of the potential number of vehicles required is logging.

The second HLA market category involves heavy payloads of 180 to 800 tons -- a totally new application of vertical aerial lift. This market is concerned primarily with support of heavy construction projects, especially power-generating plant construction. The availability of vertical aerial lift in this payload range will make the expensive infrastructure associated with surface movements of heavy or bulky items largely unnecessary. It would also allow more freedom in the selection of plant sites by eliminating the restrictions imposed by the necessity for readily accessible heavy surface transportation. Further, it could substantially reduce construction costs of complex assemblies by allowing more extensive pre-assembly in manufacturing areas. This application is relatively insensitive to cost of service. There would be military as well as civil application of ultraheavy lift.

The classical fully-buoyant airship is unsuitable for most vertical heavy-lift applications because of poor low-speed control and ground-handling characteristics. Therefore, almost all HLA concepts that have been proposed are of the "hybrid" type. Because buoyant lift can be scaled up to large sizes at low cost per pound of lift (as previously described), these concepts use buoyancy for the bulk of the lift and rely on dynamic-lift elements to provide low-speed control as well as a portion of the total lift. The characteristics of hybrid aircraft and their potential for the heavy-lift mission were first clearly recognized by Piasecki (Refs. 1.12, 3.3), by Nichols (Ref. 3.2), and by Nichols and Doolittle (Ref. 3.6). References 3.2 and 3.6, in particular, describe a wide variety of possible hybrid HLA concepts. In the following sections, specific hybrid airship concepts for heavy-lift applications will be discussed.
A heavy-lift airship concept which has received a great deal of attention is the buoyant quad-rotor (BQR), which combines helicopter engine/rotor systems with airship hulls. This basic idea is not new. In the 1920's and 1930's a French engineer, E. Oehmichen, not only conceived this idea, but successfully built and flight-tested such aircraft, which he called the Helicostat (Ref. 3.8). One of his first designs (Fig. 3.2a) had two rotors driven by a single engine mounted beneath a cylindrical buoyant hull. According to Ref. 3.8, Oehmichen's purpose in adding the buoyant hull to the rotor system was threefold: "...to provide the helicopter with perfect stability, to reduce the load on the lift-rotors, and to slow down descent with optimum efficiency." Oehmichen's later effort was a quad-rotor design with two rotors mounted in the vertical plane and two in the horizontal (Fig. 3.2b). The hull was changed to an aerodynamic shape more characteristic of classical airships. Existing motion pictures of successful flights of the Helicostat demonstrate that the BQR concept was proven feasible in the 1930's.

The modern form of the concept was first proposed by Piasecki (Refs. 1.12, 3.3). Piasecki's idea is to combine existing, somewhat modified, helicopters with a buoyant hull as exemplified in Figure 3.3. The configuration shown in Figure 3.3 will be called the "original" BQR concept. The attraction of the idea lies in its minimal development cost. In particular, no new major propulsion-system components would be needed (propulsion systems are historically the most expensive part of an all-new aircraft development). A fly-by-wire master control system would command the conventional controls within each helicopter to provide for lift augmentation, propulsive thrust, and control power.

Other variants of the BQR idea are currently under study. A design by Goodyear Aerospace (Ref. 3.16) is shown in Figure 3.4. As compared with the original concept (Fig.
3.3), this design (called the "advanced" concept) has a new propulsion system, auxiliary horizontal-thrusting propellers, and aerodynamic tail surfaces and controls. The four propulsion-system modules would make extensive use of existing rotorcraft components and technology but would be designed specifically for the BQR. The horizontal-thrusting propellers would be shaft-driven from the main rotor engines. These propulsion modules would be designed more for high reliability and low maintenance costs, and less for low empty weight, than are typical helicopter propulsion systems. They would be "derated" relative to current systems, leading to further reductions in maintenance costs.

In a revival of the Helicostat concept, a buoyant dual-rotor HLA has been studied by Aerospatiale (Ref. 3.8). It would use the engines and rotors from a small helicopter, but propellers would be fitted for forward propulsion and yaw control (Fig. 3.5). Payload would be about 4 tons; the principal application is envisioned to be logging.

The performance capability of the BQR design (Fig. 3.3) was examined in the feasibility studies of Refs. 1.12-1.14 and 1.16 and is listed in Table 3.2. This design employs four CH-54B helicopters, somewhat modified, and a nonrigid envelope of 2.5 x 10^6 ft^3. Total gross weight with one engine inoperative is about 325,000 lb, of which 150,000 lb is payload. Empty-to-gross weight fraction is 0.455 and design cruise speed is 60 knots. Range with maximum payload is estimated to be 100 n. mi.; with the payload replaced by auxiliary fuel, the unrefueled ferry range would be more than 1,000 n. mi.

In References 1.12, 1.16, and 3.3, the ratio of buoyant-to-total lift (β) is chosen so that the vehicle is slightly "heavy" when completely unloaded. In effect, the buoyant lift supports the vehicle empty weight, leaving the rotor lift to support the useful load (payload and fuel). A different approach has been suggested and studied by Bell et al. (Ref. 3.17).
Bell et al. proposed that β be selected so that the buoyancy supports the empty weight plus half the useful load. It is then necessary for the rotors to thrust downward when the vehicle is empty with the same magnitude that they must thrust upward when the vehicle is fully loaded. This same principle has been used in the studies of the rotor-balloon, discussed in the following section. Use of the approach suggested by Bell et al. (high β), as opposed to the approach assumed in Table 3.2 (low β), has the potential of offering lower operating costs since buoyant lift is less expensive than rotor lift. Also, the Bell approach has better control when lightly loaded, because higher rotor forces are available. In comparison, the low-β approach may result in a vehicle that is easier to handle on the ground (since it is heavy when empty) and one that is more efficient in cruise or ferry when lightly loaded or with no payload (because of low rotor forces). Selection of the best value of β depends on these and many other factors and will require a better technical knowledge of the concept.

The BQR vehicle will be efficient in both cruise and hover compared with conventional-design heavy-lift helicopters (HLH). This arises primarily from the cost advantages of buoyant lift compared with rotor lift on a per-unit-of-lift basis, as discussed earlier. Fuel consumption of the BQR vehicle in hover will be approximately one-half that of an equivalent HLH. Relative fuel consumption of the BQR in cruise may be even lower because of the possibility of generating dynamic lift on the hull, thereby reducing or eliminating the need for rotor lift in cruising flight.

When cruising with a slung payload, the cruising speeds of HLH and BQR vehicles will be approximately the same since external load is generally the limiting factor on maximum speed. When cruising without a payload, as in a ferry mission, the speed of the BQR will be lower than that of an HLH.
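The two sizing rules can be compared with a short numerical sketch. The weights below are rounded from the BQR design-study values quoted above; the split of the useful load between payload and fuel is ignored.

```python
# Lift bookkeeping for the two buoyancy-ratio (beta) sizing rules discussed
# in the text. Weights are rounded from the BQR design study; illustrative only.

EMPTY_WEIGHT = 148_000  # lb
USEFUL_LOAD = 177_000   # lb (payload + fuel, not split here)
GROSS = EMPTY_WEIGHT + USEFUL_LOAD  # 325,000 lb

def rotor_thrust_range(buoyant_lift):
    """(thrust required empty, thrust required at gross weight), lb.

    Negative values mean the rotors must thrust downward.
    """
    return (EMPTY_WEIGHT - buoyant_lift, GROSS - buoyant_lift)

# Low-beta rule: buoyancy carries roughly the empty weight.
low_beta_lift = EMPTY_WEIGHT
# High-beta rule (Bell et al.): buoyancy carries empty weight + half useful load.
high_beta_lift = EMPTY_WEIGHT + USEFUL_LOAD // 2

for name, lift in (("low beta", low_beta_lift), ("high beta", high_beta_lift)):
    empty_t, gross_t = rotor_thrust_range(lift)
    print(f"{name}: beta = {lift / GROSS:.3f}, "
          f"rotor thrust {empty_t:+,} lb empty, {gross_t:+,} lb at gross")
```

With these numbers the low-β rule gives β of about 0.455 and rotor thrust ranging from zero (empty) to +177,000 lb (loaded), while the high-β rule gives β of about 0.73 and a symmetric rotor-thrust requirement of about ±88,500 lb, illustrating the equal-magnitude up/down thrusting described in the text.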
The many HLA studies have shown, however, that the higher efficiency of the BQR more than offsets this speed disadvantage. Therefore, the BQR should have appreciably lower operating costs per ton-mile in either the loaded or unloaded condition. Total operating costs per ton of payload per mile in cruise flight are compared in Fig. 3.6 (based on data provided by Goodyear). The figure shows that the advanced BQR concept offers a decrease in operating costs by as much as a factor of 3 compared with existing helicopters. Of course, much of this cost advantage results from the larger payload of the BQR (approximately eight times larger). Operating costs in cruise flight of the advanced concept are lower compared with those of the original concept. This arises from the use of propellers instead of rotor cyclic pitch for forward propulsion, from lower assumed propulsion maintenance costs, and from lower drag due to a more streamlined interconnecting structure. The advanced-concept BQR would be particularly efficient when cruising lightly loaded (as in ferry), since it would operate essentially as a classical fully-buoyant airship.

Studies have shown that precision hover and station-keeping abilities approaching those of proposed HLHs are possible with BQR designs (Refs. 1.12, 3.3, 3.18-3.20). Automated precision hover systems recently developed for an HLH (Ref. 3.10) can be adapted for BQR use. Recent studies of BQR dynamics and control are reported in Refs. 3.21-3.24.

In a program funded by the U.S. Forest Service and managed by the U.S. Navy, Piasecki Aircraft Corporation is currently assembling a demonstration vehicle of the BQR type. The flight vehicle will combine four H-34 helicopters with a 1,000,000 ft^3 nonrigid envelope. It will have a 25-ton payload and will be used to demonstrate aerial logging.
An early hybrid HLA concept, which has subsequently received a significant amount of study and some initial development, is a rotor-balloon configuration (called Aerocrane by its inventors, the All American Engineering Company). Early discussions of this concept appear in References 3.1, 3.2, 3.5-3.7; two versions of the Aerocrane are depicted in Fig. 3.7. The original configuration consisted of a spherical helium-inflated balloon with four rotors (airfoils) mounted at the equator. Propulsors and aerodynamic control surfaces were mounted on the rotors. The entire structure (except the crew cabin and payload support, which were kept stationary by a retrograde drive system) rotated (typically at a rate of 10 rpm) to provide dynamic rotor lift and control. Principal applications envisioned for the rotor-balloon are logging and containership offloading.

Study and technology development of the rotor-balloon concept have been pursued by All American Engineering and others, partly under U.S. Navy sponsorship. Emphasis of the program has been on devising a suitable control system. A remotely controlled flying model was built to investigate stability, control, and flying qualities (Fig. 3.8). Results (Refs. 3.25-3.27) have shown that the rotor-balloon is controllable and that it promises to be a vehicle with a relatively low empty-to-gross weight ratio and low acquisition cost across a wide range of vehicle sizes. Technical issues that emerged were (1) the magnitude and effect of the Magnus force on a large rotating sphere and (2) the high-acceleration environment (about 6 g in most designs) of the propulsors. Although the rotor-balloon technical issues are thought to be solvable, two characteristics emerged as being operationally limiting. First, large vehicle tilt angles were required to obtain the necessary control forces in some operating conditions.
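The roughly 6-g propulsor environment follows directly from centripetal acceleration at 10 rpm. The mounting radii below are illustrative guesses for a large spherical envelope, not design values:

```python
import math

# Centripetal acceleration seen by equipment mounted at radius r on a
# structure rotating at 10 rpm (the rate quoted for the Aerocrane).
# The radii are illustrative assumptions, not design data.

G = 9.81  # m/s^2

def centripetal_g(rpm, radius_m):
    """Centripetal acceleration in g's at the given rotation rate and radius."""
    omega = rpm * 2.0 * math.pi / 60.0  # rad/s
    return omega**2 * radius_m / G

for r in (20.0, 40.0, 55.0):
    print(f"r = {r:4.0f} m -> {centripetal_g(10.0, r):.1f} g")
```

At 10 rpm the acceleration reaches about 6 g only at a radius near 55 m, which is consistent with propulsors mounted at the equator of a very large balloon.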
Second, the high drag associated with the spherical shape resulted in very low cruise speeds, typically 25 mph for a 16-ton payload vehicle. This low speed meant that operation in winds of over 20 mph probably was not possible and that the efficiency of operation in even light winds was significantly degraded. Even with no wind, the low speed resulted in low productivity. Thus, the original rotor-balloon concept was limited to very short-range applications in very light winds.

The advanced-configuration rotor-balloon depicted in Figure 3.7 (Ref. 3.28) is designed to overcome the operational shortcomings of the original concept. Winglets with aerodynamic control systems are fitted to allow generation of large lateral-control forces, thereby alleviating the need to tilt the vehicle. A lenticular shape is used for the lifting gas envelope to decrease the aerodynamic drag. The increase in cruise speed of the advanced concept is, however, accompanied by some increase in design complexity and structural weight.

A more substantial departure from the original Aerocrane concept has been proposed recently. The Cyclo-Crane (Refs. 3.29, 3.30) is essentially a new HLA configuration concept (Fig. 3.9). It consists of an ellipsoidal lifting gas envelope with four strut-mounted airfoils at the midsection. The propulsors are also located on these struts. This entire structure rotates about the longitudinal axis of the envelope to provide control forces during hover. Isolated from the rotating structure by bearings are the control cabin at the nose and the aerodynamic surfaces at the tail. The payload is supported by a sling attached to the nose and tail. The rotation speed and yaw angles of the wings on their struts are controlled to keep the airspeed over the wings at a constant value; namely, a value equal to the vehicle cruise speed. Thus, for hover in still air, the wingspan axes are aligned with the envelope longitudinal axis.
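The constant-wing-airspeed control law just described can be sketched kinematically. The cruise speed and strut radius below are illustrative assumptions, and a single equivalent radius is used for each wing:

```python
import math

# Kinematic sketch of the Cyclo-Crane wing scheduling: rotation rate and wing
# yaw are varied so the airspeed over the wings always equals the design
# cruise speed. Speeds and radius are illustrative assumptions.

def wing_schedule(forward_speed, cruise_speed, wing_radius):
    """Return (rotation rate rad/s, wing yaw deg) holding wing airspeed constant.

    The wing sees the vector sum of the tangential speed (omega * r) and the
    axial forward speed; yaw aligns the wing with the resultant flow.
    """
    tangential = math.sqrt(max(cruise_speed**2 - forward_speed**2, 0.0))
    omega = tangential / wing_radius
    yaw_deg = math.degrees(math.atan2(forward_speed, tangential))
    return omega, yaw_deg

V_CRUISE, RADIUS = 30.0, 15.0  # m/s, m (assumed values)
hover = wing_schedule(0.0, V_CRUISE, RADIUS)       # full rotation, 0 deg yaw
cruise = wing_schedule(V_CRUISE, V_CRUISE, RADIUS) # rotation stopped, 90 deg yaw
```

In hover the wings are unyawed and driven entirely by rotation; at cruise speed the rotation rate falls to zero and the wings are yawed 90 degrees, perpendicular to the flight path, matching the behavior described in the text.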
As forward speed is increased, the vehicle rotational speed decreases and the wings are yawed until, at cruise speed, the rotation is stopped and the wingspan axes are perpendicular to the forward velocity. Hence, in cruising flight the Cyclo-Crane acts as a winged airship. Preliminary analysis of the Cyclo-Crane has indicated that a cruising speed of about 70 mph would be possible with a 16-ton payload vehicle and that the economic performance would be favorable (Ref. 3.31). The Aerolift Company is currently building a Cyclo-Crane flight demonstration vehicle at Tillamook, Oregon. It is scheduled to be flight tested in logging operations in 1985.

Another recent rotating hybrid airship concept under development is the LTA 20-1 of the Magnus Aerospace Corporation (Refs. 3.32, 3.33). The configuration consists of a spinning helium-filled spherical envelope and a ring-wing type gondola (Fig. 3.10). The combination of buoyancy, Magnus lift, and vectored thrust results in a vehicle with controllable heavy-lift capability.

Perhaps the simplest and least expensive of the HLA concepts are those which combine the buoyant- and dynamic-lift elements in discrete fashion without major modification. Examples, taken from References 1.7 and 3.6, are shown in Figure 3.11. Although such systems will obviously require minimal development of new hardware, there may be serious operational problems associated with them. Safety and controllability considerations would likely restrict operation to fair weather. Further, cruise speeds would be extremely low. The concept from Ref. 3.6 that is shown in Figure 3.11 was rejected by the authors of Ref. 3.6 because of the catastrophic failure which would result from an inadvertent balloon deflation.

Another approach to heavy lift with buoyant forces is the clustering of several small buoyant elements. Examples of this are the ONERA concept (Ref. 1.7) and the Grumman concept (Ref. 3.34) shown in Fig. 3.12.
In the Grumman idea, three airships of approximately conventional design, such as the one shown, are used to lift moderate payloads. When heavy lift is needed, the three vehicles are lashed together temporarily while in the air. The technique for joining the vehicles and the controllability of the combined system need further study.

Finally, another HLA concept that has received some attention is the "ducted-fan hybrid" shown in Fig. 3.13 (Ref. 3.6). In this vehicle, a toroidal-shaped lifting gas envelope provides a duct or shroud for a centrally located fan or rotor. There has been too little study of the ducted-fan hybrid, however, to permit an assessment of its potential.

3.5 REFERENCES

3.1 Carson, B. H.: An Economic Comparison of Three Heavy Lift Airborne Systems. In: Proceedings of the Interagency Workshop on Lighter Than Air Vehicles, Jan. 1975, pp. 75-85.
3.2 Nichols, J. B.: The Basic Characteristics of Hybrid Aircraft. In: Proceedings of the Interagency Workshop on Lighter Than Air Vehicles, Jan. 1975, pp. 415-430.
3.3 Piasecki, F. N.: Ultra-Heavy Vertical Lift System: The Heli-Stat -- Helicopter-Airship Combination for Materials Handling. In: Proceedings of the Interagency Workshop on Lighter Than Air Vehicles, Jan. 1975, pp. 465-476.
3.4 Keating, S. J., Jr.: The Transport of Nuclear Power Plant Components. In: Proceedings of the Interagency Workshop on Lighter Than Air Vehicles, Jan. 1975, pp. 539-549.
3.5 Perkins, R. G., Jr.; and Doolittle, D. B.: Aerocrane -- A Hybrid LTA Aircraft for Aerial Crane Applications. In: Proceedings of the Interagency Workshop on Lighter Than Air Vehicles, Jan. 1975, pp. 571-584.
3.6 Nichols, J. B.; and Doolittle, D. B.: Hybrid Aircraft for Heavy Lift -- Combined Helicopter and Lighter-Than-Air Elements. Presented at 30th Annual National V/STOL Forum of the American Helicopter Society, Washington, D. C., Preprint 814, May 1974.
3.7 Anon.: Feasibility Study on the Aerocrane Heavy Lift Vehicle, Summary Report. Canadair Rept. RAX-268-100, Nov. 1977.
3.8 Helicostat. Societe Nationale Industrielle Aerospatiale, Cedex, France, brochure, undated.
3.9 Anon.: Alberta Modern Airship Study, Final Report. Goodyear Aerospace Corp. GER-16559, June 1978.
3.10 Niven, A. J.: Heavy Lift Helicopter Flight Control System. Vol. 1, Production Recommendations. USAAMRDL-TR-77-40A, Sept. 1977.
3.11 Rosenstein, H.: Feasibility Study of a 75-Ton Payload Helicopter. Boeing Company Rept. D210-11401-1, June 1979.
3.12 Mettam, P. J.; Hansen, D.; Byrne, R. W.; and Ardema, M. D.: A Study of Civil Markets for Heavy Lift Airships. AIAA Paper 79-1579, 1979.
3.13 Mettam, P. J.; Hansen, D.; and Byrne, R. W.: Study of Civil Markets for Heavy Lift Airships. NASA CR-152202, Dec. 1978.
3.14 Sander, B. J.: The Potential Use of the Aerocrane in British Columbia Logging Conditions. Forest Engineering Research Institute of Canada Report, undated.
3.15 Erickson, J. R.: Potential for Harvesting Timber with Lighter-Than-Air Vehicles. AIAA Paper 79-1580, 1979.
3.16 Kelley, J. B.: An Overview of Goodyear Heavy Lift Development Activity. AIAA Paper 79-1611, 1979.
3.17 Bell, J. C.; Marketos, J. D.; and Topping, A. D.: Parametric Design Definition Study of the Unballasted Heavy-Lift Airship. NASA CR-152314, July 1979.
3.18 Nagabhushan, B. L.; and Tomlinson, N. P.: Flight Dynamics Analyses and Simulation of Heavy Lift Airship. AIAA Paper 79-1593, 1979.
3.19 Meyers, D. N.; and Piasecki, F. N.: Controllability of Heavy Vertical Lift Ships, The Piasecki Heli-Stat. AIAA Paper 79-1594, 1979.
3.20 Pavlecka, V. H.: Thruster Control for Airships. AIAA Paper 79-1595, 1979.
3.21 Nagabhushan, B. L.: Dynamic Stability of a Buoyant Quad-Rotor Aircraft, J. Aircraft, vol. 20, no. 3, March 1983, pg. 243.
3.22 Tischler, M. B.; Ringland, R. F.; and Jex, H. R.: Heavy-Lift Airship Dynamics, J. Aircraft, vol. 20, no. 5, May 1983, pg. 425.
3.23 Tischler, M. B.; and Jex, H. R.: Effects of Atmospheric Turbulence on a Quadrotor Heavy Lift Airship, J. Aircraft, vol. 20, no. 12, Dec.
1983, pg. 1051.
3.24 Talbot, P. D.; and Gelhausen, P. A.: Effect of Buoyancy and Power Design Parameters on Hybrid Airship Performance, AIAA Paper 83-1976, 1983.
3.25 Putman, W. F.; and Curtiss, H. C., Jr.: Precision Hover Capabilities of the Aerocrane. AIAA Paper 77-1174, 1977.
3.26 Curtiss, H. C., Jr.; Putman, W. F.; and McKillip, R. M., Jr.: A Study of the Precision Hover Capabilities of the Aerocrane Hybrid Heavy Lift Vehicles. AIAA Paper 79-1592, 1979.
3.27 Putman, W. F.; and Curtiss, H. C., Jr.: An Analytical and Experimental Investigation of the Hovering Dynamics of the Aerocrane Hybrid Heavy Lift Vehicle. Naval Air Development Center Rept. OS-137, June 1976.
3.28 Elias, A. L.: Wing-Tip-Winglet Propulsion for Aerocrane-Type Hybrid Lift Vehicles. AIAA Paper 75-944, 1975.
3.29 Crimmins, A. G.: The Cyclocrane Concept. Aerocranes of Canada Rept., Feb. 1979.
3.30 Curtiss, H. C.: A Preliminary Investigation of the Aerodynamics and Control of the Cyclocrane Hybrid Heavy Lift Vehicle. Department of Mechanical and Aerospace Engineering Rept. 1444, Princeton University, May 1979.
3.31 Crimmins, A. G.; and Doolittle, D. B.: The Cyclo-Crane -- A Hybrid Aircraft Concept, AIAA Paper 83-2004, 1983.
3.32 Scholaert, H. S. B.: Dynamic Analysis of the Magnus Aerospace Corporation LTA 20-1 Heavy-Lift Aircraft, AIAA Paper 83-1977, 1983.
3.33 DeLaurier, J. D.; McKinney, W. D.; Kung, W. L.; Green, G. M.; and Scholaert, H. S. B.: Development of the Magnus Aerospace Corporation's Rotating-Sphere Airship, AIAA Paper 83-2003, 1983.
3.34 Munier, A. E.; and Epps, L. M.: The Heavy Lift Airship -- Potential, Problems, and Plans. Proceedings of the 9th AFGL Scientific Balloon Symposium, G. F. Nolan, ed., AFGL-TR-76-0306, Dec. 1976.
Table 3.1 Principal heavy-lift airship markets

    Market area                                            Useful load,   Number of
                                                           tons           vehicles required
    Heavy-lift
      Logging                                              25-75          >1000
      Unloading cargo in congested ports                   16-80          200
      High-voltage transmission tower erection             13-25          10
      Support of remote drill-rig installations            25-150         15
    Ultraheavy-lift
      Support of power-generating plant construction       180-900        30
      Support of oil-gas offshore platform construction    500            3
      Other transportation                                 25-800         10

Table 3.2 Buoyant quad-rotor design characteristics

    Gross weight,(a) lb                          324,950
    Rotor lift, lb                               180,800
    Buoyant lift, lb                             144,150
    Empty weight, lb                             148,070
    Useful load,(a) lb                           176,880
    Payload, lb                                  150,000
    Static heaviness,(a) lb                      3,920
    Envelope volume, ft^3                        2.5 x 10^6
    Ballonet volume, ft^3                        5.75 x 10^5
    Ballonet ceiling, ft                         3,500
    Hull fineness ratio                          3.2
    Design speed (TAS), knots                    60
    Design range with maximum payload, n. mi.    100

    (a) One engine inoperative.

[Fig. 3.1: history of rotorcraft vertical-lift capability, 1950-1990 -- candidate payloads beyond the helicopter lift limit include industrial and construction equipment, a 40-ton crane/shovel, international-standard and Sealand commercial containers, and minimum-efficient-size logging loads.]
Fig. 3.3 Buoyant quad-rotor, original concept (Heli-Stat)
Fig. 3.4 Buoyant quad-rotor, advanced concept
Fig. 3.5 Modern Helicostat
[Fig. 3.6: relative heavy-lift operating costs vs. utilization (hr/year) for the S-64 helicopter, the buoyant quad-rotor with S-64 helicopters, and the buoyant quad-rotor with rotor modules.]
Fig. 3.11 Combined discrete concepts (Grumman; ONERA)

4. HIGH ALTITUDE PLATFORMS

The obvious benefits of aerial observations caused the balloon to be used as a military surveillance platform only 10 years after its conception and development by French experimenters in the 18th century. Cables or lines between the balloon and ground anchor points were used to achieve fixed spatial locations.
Improved, more stable tethered balloons were developed later using cylindrical or ellipsoidal envelope forms equipped with air-inflated tail surfaces. These types were used in World War I as manned observation platforms and in World Wars I and II as unmanned "barrage balloons" to discourage low altitude aerial attack. Tethered balloons continue to serve as sensor platforms and for other applications in military service. Civil versions are currently being used as telecommunications centers flying at 3000 m altitudes. There are also important military and civil applications for platforms which can fly at altitudes beyond the capabilities and limitations of tethered systems. Since much success has been achieved with free flying stratospheric balloons, it has seemed reasonable that this technology could be applied to development of powered versions with station-keeping capability; namely, high altitude airships or dirigibles. Consequently, a number of developmental programs and studies have been addressed to achieving this objective. This section is a review of these efforts.

Two prime military needs continue to require improved observational or sensing techniques: (1) early evaluation of threat danger, and (2) location and neutralization of enemy forces. In modern times, these needs have driven sensing altitudes into the stratosphere and even beyond into space. Satellites and airplanes perform some of these required functions but are limited by payload capacity, location flexibility, and high cost (Ref. 4.1). A high altitude platform at 21,000 m can extend a detection perimeter outward to a radius of 330 nautical miles (600 km) for surface threats and to 440 n. mi. (800 km) for aircraft flying at 3000 m. Since the platform can be located at the radius distance from the command and control center, the distances between the threat and the target are essentially doubled relative to existing aircraft. This provides more time for detection and interception (Ref.
4.2).

Turning to civil needs, a high altitude geo-stationary platform can provide many of the functions of synchronous satellites plus a host of other services at a fraction of the cost (Ref. 4.3). Continuous regional coverage without the radio path losses associated with space-based systems is possible. A further national advantage is the avoidance of the problem of frequency saturation and other international complications. Civil telecommunications is the outstanding application for platforms and would include the following services: (1) Direct TV home telecast, (2) Remote area telecast, (3) Communications experiments, (4) Educational and medical information, and (5) Mobile telephone relay and personal receivers. Other potential benefits have also been identified (Refs. 4.3, 4.4) such as: (1) Forest area surveillance, (2) Ice mapping, (3) Coastal surveillance of air and sea traffic, pollution monitoring and weather observation, and (4) Scientific experiments.

Minimum expenditure of energy for station-keeping requires operation in minimum winds. All studies of platforms have assumed, therefore, that the operating altitudes would be in the stratonull region of the atmosphere. This is a zone of low winds, which varies in dimension and altitude depending on location and season. For airship design, a nominal pressure altitude of 50 mb has been assumed, which under standard conditions equates to a geometric altitude of approximately 20,700 m. Detailed analyses of wind data show that design for a peak velocity of 50 knots would satisfy a 95 percentile probability for operations over most U.S. locations (Ref. 4.5), and design for 75 knots would be sufficient for most worldwide points of interest (Ref. 4.6).

The maintenance of flight at any altitude requires elimination of, or provision for, changes in static lift caused by atmospheric and radiation effects.
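The detection radii and the 50 mb operating level described above can be sanity-checked with two standard approximations, which are assumptions of this sketch rather than methods taken from the report: a 4/3-earth radio-horizon model and the ideal-gas law with U.S. Standard Atmosphere values.

```python
import math

# Assumed constants (not from the report): mean Earth radius and the standard
# 4/3 effective-radius factor used for refracted radio propagation.
R_EARTH = 6.371e6
K = 4.0 / 3.0

def radio_horizon_m(alt_m):
    """Line-of-sight radio range from a given altitude over a smooth Earth."""
    return math.sqrt(2.0 * K * R_EARTH * alt_m)

d_surface = radio_horizon_m(21_000.0)               # platform near the 50 mb level
d_aircraft = d_surface + radio_horizon_m(3_000.0)   # target itself flying at 3000 m

print(f"surface targets : {d_surface / 1000:.0f} km ({d_surface / 1852:.0f} n. mi.)")
print(f"aircraft at 3 km: {d_aircraft / 1000:.0f} km ({d_aircraft / 1852:.0f} n. mi.)")

# Air density ratio at the 50 mb level (~216.65 K in the standard atmosphere)
# relative to sea level (1013.25 mb, 288.15 K), via rho ~ p / T.
ratio = (50.0 / 1013.25) * (288.15 / 216.65)
print(f"density ratio   : {ratio:.3f}")
```

With standard refraction the two ranges land near the 600 km and 800 km radii cited, and the density ratio comes out a few percent of sea level, consistent with the sizing figures used later in this section.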
The most important is the variation in superheat, which is the differential temperature between the lifting gas and the atmosphere. Low pressure scientific balloons on short endurance flights use a combination of gas venting (to control rise) or dropping ballast (to stop descent). Low altitude airships are able to use aerodynamic lift (positive or negative) while under way. This latter means is also available to high altitude platform types, and studies have shown that the magnitude of the compensating forces required does not exceed the capabilities of the airships to generate them (Ref. 4.7). However, flying the airship at some pitch angle may compromise its mission performance. A further disadvantage is the need for circling flight (to maintain station) when wind velocities are below the airspeed required for aerodynamic lift.

Another means of altitude control is the use of superpressure. This principle involves maintaining a constant volume of lifting gas while allowing the internal pressure to vary between that required for structural integrity and aerodynamic function and that produced by superheat effects. This principle is used in high pressure scientific balloons where long endurance and constant altitude is required, and works well. It involves use of stronger, hence heavier, envelopes, and therefore larger envelope volumes are required for equivalent payloads. Vectored thrust could be considered where propellers or rotors are used to produce vertical thrust, similar to the hybrid heavy-lift airships described in Section 3. These types would be heavier and have higher drag for a given payload and may also complicate the accommodation of payloads. Other methods of controlled lift could include use of artificial superheat at night (derived from propulsive heat); that is, lifting gas could be compressed and stored in the daytime and released at night.
Alternatively, compound gas systems, employing the ballasting effects of vapor-liquid gas states, could be used (Ref. 4.8). Each approach has its advantages and limitations. The only one used for long endurance balloons thus far has been the superpressure principle. High altitude conditions allow consideration of concepts which would not be practical for low altitude airships, such as the gas compression principle, which is limited to low rates of gas volume change.

At the 50 mb pressure altitude, the air density is only 0.06 that of sea level. This requires a 94 percent gas volume change between launch (or takeoff) and operating altitude. One method of accommodating this change is to launch the airship as a free balloon with a small bubble of helium in the top of its envelope. In this case, the airship must be flown initially with its major axis vertical and most of the envelope suspended in a flaccid condition. The ascent to altitude is a drifting flight and essentially uncontrolled. Launch is limited to the same conditions as those for balloons, namely low winds. A second method requires the airship to be fully inflated (94% air) and launched like a conventional low altitude airship. Under these conditions, the vehicle can be flown to altitude under control. A disadvantage is that of ground-handling a large airship in such manner as to avoid damaging the structure. This method offers some flexibility over the balloon launch technique but is also limited to times of very low winds on the ground.

The choice of design concepts involves the many interrelated factors usually associated with aircraft design; but for high altitude airships, which take about 17 m^3 of helium to lift 1 kg (at 50 mb), most design choices are heavily influenced by their effects on weight. Some initial investigations utilized powered scientific balloons as platforms. Two experiments (HI-PLATFORM I and POBAL) were flown by the U.S.
Air Force in the 1960's using natural shaped polyethylene balloons to support battery-powered propulsion modules. A later Air Force project involved a small solar powered airship (HI-PLATFORM II). This was flown at 20,420 m for a total of 2 hours (Ref. 4.9).

The first major effort toward long duration flight was a U.S. Navy sponsored program known as High Altitude Superpressure Powered Aerostat (HASPA). This program was designed to demonstrate station-keeping at 21,335 m while supporting a 90 kg payload for a flight duration of 30 days. An airship approach was used employing a modified class C envelope shape with a volume of 22,656 m^3. Constant altitude control was to be achieved using the superpressure principle. Propulsion was provided by electric motors driving a vectorable (for control) stern mounted propeller. Electric power was to be furnished from batteries, fuel cells, or solar cells. Launch was to be accomplished in the free balloon manner, and only the payload and power supply system were to be recovered. Two flights were attempted but none were successful due to material failures at launch. The program was subsequently terminated and replaced by HI-SPOT (Ref. 4.10). These early programs are summarized in Table 4.1.

The U.S. Navy program, "High Altitude Surveillance Platform for Over the Horizon Targeting" (HI-SPOT), incorporates the major objectives of HASPA but also includes a mission scenario. The latter requirement involves launch from a U.S. base, flight at 19-22,000 m altitude over a distance of 6000 nautical miles to a station-keeping location for a 19-day surveillance period (assuming 44.6 knot average winds), and carrying a 250-kg payload. Transit to and from the station assumes utilization of wind patterns so that power and fuel requirements are equivalent to flying a round trip of 1000 nautical miles in still air.
These requirements have resulted in a vehicle design concept with a hull volume of 141,600 m^3 and a maximum speed of 75 knots, equipped with a 158 H.P. propulsion system (Figs. 4.1 and 4.2). A key feature of the HI-SPOT concept is a low drag envelope. This design is based on the principle of maintaining a laminar flow boundary layer over the forward half of the hull. This is achieved by using a "Carmichael" dolphin shape (Ref. 4.11), with its maximum diameter located at 50-60% of the hull length. Very smooth and accurate hull contours are also required, and if these can be achieved, a total drag coefficient of 0.016 is expected. The HI-SPOT would use a "4 layer" envelope material designed to minimize diurnal temperature effects. Power is provided by a hydrogen fueled internal combustion system driving a single gimballed propeller which is also used as the primary means of directional control. High metacentric stability is relied upon for longitudinal balance and augmented by trimming effects from ballonets and water ballast.

The HI-SPOT airship is intended to be launched and recovered as a constant volume hull; i.e., completely inflated at all times. Helium and air would be separated by two bulkheads and three ballonets for trim control during takeoff and climb. Once maximum altitude is achieved, a superpressure mode could be used. Constant mass would be maintained by use of engine exhaust water recovery. It is planned to allow air to mix with helium on descent and use ballonets for trim (Ref. 4.12). Initial studies of the concept have been completed. The next phase, if accomplished, would include scaled demonstration flights and some technology development.

The benefits projected for the use of high altitude powered platforms (HAPP) for telecommunications and other civil applications have been investigated in a series of studies by NASA which focused on missions, power supply systems, and vehicle concepts.
All of these studies were based on the assumption of a geo-stationary vehicle operating at the 50-mb level over various sites in the U.S. It was also assumed that the airship would be launched and recovered at or near the locations over which it would fly, and essentially no transit would be required. These requirements allow serious consideration of the use of microwave energy projected from a ground station as a power source for propulsion and payload. On this basis the endurance of the airship is not limited by fuel supply, and very long time on station becomes a possibility (Ref. 4.13).

Several concepts have been considered in studies of the HAPP vehicle. A first approach assumed use of a conventional nonrigid-type hull equipped with ballonets and using dynamic lift to counteract static lift changes. Subsequently, hull shapes similar to the HI-SPOT have been identified as more desirable. The difference in requirements between the military and civil systems and the use of microwave power results in a much smaller airship. The HAPP would lift a 675-kg payload but would only need an envelope volume of 70,800 m^3 (Ref. 4.14).

4.4 Propulsion

At present, there are no existing propulsion systems which are readily applicable to high altitude platforms. Some near term configurations may be possible using existing components, such as photovoltaic units and electric motors; but in general, a technology development program is indicated for any operational applications. There are several basic power options for propulsion of high altitude platforms. These include: chemical, electro-chemical, electro-radio, electro-optical, nuclear, and solar-thermal. Some of these are compared in Fig. 4.3, which assumes a constant cruise requirement of 75 knots. The interrelationship between mission, vehicle, and power train requirements dictates the choice of a suitable system.
For example, a vehicle which must cruise from base to a distant location, such as the HI-SPOT, is not able to use microwave power even though this is the most efficient system. Likewise, some of the other systems (solar cells) which do not change weight with duration are not applicable because the surface area requirements are excessive. Other aspects which must be considered include minimum fuel consumption, high reliability, low heat generation and/or high heat rejection capability, minimum hazard effects (which tend to rule out nuclear systems), and low development risk and cost. As previously noted, high altitude airships are extremely sensitive to weight effects, so that minimum mass/thrust power ratio remains a most important criterion. These various factors were considered in current studies of military and civil vehicles and the propulsion systems were chosen accordingly.

The propulsion system for HI-SPOT has been projected as a liquid-cooled, turbocharged, reciprocating engine assembly driving a single 26 m dia. propeller and fueled with hydrogen. The engine assembly would consist of four four-cylinder powerplants, each producing 39 kW of power. They would be coupled to the single propeller shaft through a 30:1 reduction gear. The hydrogen fuel would be stored in liquid form in spherical insulated tanks. Air would be delivered to the engines via a 20:1 turbocharger. The choice of this approach reflected, among other things, the state of technology development for the components involved.

The very high endurance of the HAPP vehicle and the non-transit aspect allowed a choice of the low mass/power ratio system available in microwaves. The transmittal of microwave power is also considered a near term technology. This system involves generation of microwave frequency energy on the ground, beaming this energy to the aircraft using a suitable transmitting antenna, receiving the microwaves on the airship, and converting them to DC electric power.
A rectifying antenna on the airship accomplishes this latter function. The power density in the microwave transmission can be selected to enable practical sizes of antennas and rectennas to be used. A transmitting frequency of 2.45 GHz was used in all studies since it is relatively insensitive to atmospheric attenuation, represents a current state of development, and is acceptable from a hazard standpoint. If it is assumed that part, or perhaps all, of the envelope is transparent to microwave energy, the rectenna can be mounted within the gas or air space to obtain minimum drag.

4.5 REFERENCES

4.1 Rich, R.: "Navy HASPA Missions," U.S. Navy High Altitude Platform Workshop, July 1978.
4.2 Kuhn, Ira F., Jr.: "High Altitude Long Endurance Sensor Platform for Wide Area Defense of the Fleet," U.S. Navy High Altitude Platform Workshop, July 1978.
4.3 Kuhner, M. B., et al.: "Applications for a High Altitude Powered Platform (HAPP)," Battelle Report BLC-OA-TER-77-R, Sept. 1977.
4.4 Kuhner, M. B.; and McDowell, D. R.: "User Definition and Mission Requirements for Unmanned Airborne Platforms," Battelle Report, Feb. 1979.
4.5 Strganac, T. W.: "Wind Study for High Altitude Powered Platform Design," NASA Reference Publication 1044, Dec. 1979.
4.6 U.S. Navy Contract N62269-82-C-0276, HI-SPOT Mid-Term Program Review, Lockheed Missiles and Space Co., Nov. 1981.
4.7 Sinko, J. W.: "Circling Flight in Wind for HAPP Aircraft," Stanford Research Inst. Report, Aug. 1978.
4.8 Mayer, N. J.: "High Altitude Airships, Performance, Feasibility, Development," EASCON 1979 Conference, Oct. 1979.
4.9 Korn, A. O.: "Unmanned Power Balloons," Eighth AFCRL Scientific Balloon Symposium, Sept. 1974.
4.10 Petrone, F. J.; and Wessel, P. R.: "HASPA Design and Flight Test Objectives," AIAA Paper 75-924, 1975.
4.11 Carmichael, B. H.: "Underwater Vehicle Drag Reduction Through Choice of Shape," AIAA Paper 66-657, 1966.
4.12 Final Report, "HI-SPOT Conceptual Design Study," Lockheed Missiles and Space Co., March 1982.
4.13 Mayer, N. J.; and Needleman, H. C.: "NASA Studies of a High Altitude Powered Platform -- HAPP," Tenth AFGL Scientific Balloon Symposium, March 1979.
4.14 NASA Contract NAS 6-3131, "HAPP Technical Assessment and Concept Development Progress Report," ILC Dover Corp., Feb. 1981.

    Program            Sponsor  Volume      Type                            Contractor       Date        Status    Remarks
    High Platform I    A.F.     3,000 m^3   Free balloon + powered gondola  Goodyear/Winzen  9-68        Complete  Demonstrated initial feasibility at 21,335 m.
    High Platform II   A.F.     1,048 m^3   Airship                         Raven            5-70        Complete  2 hr flight at 20,420 m; solar powered, balloon launched.
    High Platform III  A.F.     16,990 m^3  --                              Raven            Study only  Complete  Study completed 8-71; stern propelled, solar powered concept.
    POBAL              A.F.     20,136 m^3  Free balloon + powered gondola  Goodyear         9-72        Complete  3 hr flight at 18,287 m.

Table 4.1 Early high altitude platform programs

[Fig. 4.1 High altitude surveillance platform (performance and airship characteristics)]
[Fig. 4.2 Typical antenna installations for over-the-horizon targeting]
[Fig. 4.3 Power system comparison vs. elapsed time at cruise, days: H2-O2 reciprocating, H2-air fuel cell, hydrocarbon fuel cell, and solar regenerative options]

As mentioned in Section 1, one of the past uses of airships was commercial long-haul transportation by the Zeppelin Company. This mission has also received attention in many comprehensive studies of modern airships, such as the Feasibility Study of Modern Airships (Refs. 1.1-1.18), and has been the primary focus of many other assessments (Refs. 1.12, 1.23, 5.1-5.18). Our main goal in this section will be to analyze the potential of modern airships to compete in the transportation market.

The rapid growth of air transportation over the last 50 years has been due primarily to the economic gains resulting from the steady increase in the size and cruise speed of transport airplanes.
Historically, productivity (cruise speed x payload weight) has been the most important parameter in long-haul transportation, because higher productivity leads directly to higher revenues and lower operating costs per ton-mile. The economics of size are obvious, but the economies of speed are frequently misunderstood. High cruise speed is desirable for many reasons. First and most importantly, at least to the operators, higher speed means the hourly-based components of operating cost may be spread out over more miles, and thus costs per mile will be lower. A second advantage of a higher speed air vehicle is that it is less susceptible to weather delay than a slower one, because headwinds will have less of an effect on ground speed, and adverse weather can be more easily avoided. Finally, there is the customer appeal of shorter trip times. Recent increases in airplane speed have been possible because the flight efficiency of the jet transport airplane tends to increase with increasing speed, at least up to about Mach 0.8. Of course, it has taken a great deal of development to realize the high speeds and flight efficiencies of today's airplanes.

The effect that increasing productivity has had on transcontinental air fares is discussed in Ref. 1.??. In the early days of commercial airplane transportation, fares dropped rapidly until about the time of the introduction of the DC-3. Then, fares remained approximately constant for nearly 30 years. Thus the increasing productivity had the effect of nullifying inflationary effects for three decades, and air travel was a much better value in real terms in 1967 than it was in 1937. More recently, fares have tended to follow the general inflationary trend. This is primarily true because there have been no speed increases since 1958.

The effect of cruise speed on the flight efficiency of fully-buoyant airships is quite different from that of airplanes.
The flight efficiency of fully-buoyant airships inevitably and rapidly decreases with increasing speed, and no amount of development will significantly alter this trend. References 5.1 and 5.19 indicate that a modern airship with a cruise speed of 120 mph, or about one-fourth the speed of today's fanjet transport airplanes, will have the same flight efficiency and empty weight fraction as the airplane. Therefore, for equivalent sizes we may expect that such an airship will have only one-fourth the productivity of the airplane.

We conclude this subsection by directly comparing past commercial airship operations with airplane operations of the same era. There is no question that initially, until about 1930, airships were superior to airplanes for long-haul transportation in terms of performance, capacity, economics, and safety. However, neither form of air transportation was truly competitive with surface modes at that time. In the 1930's the airplane surpassed the airship in terms of speed, operating cost, and even safety (Ref. 5.2). (It should be noted, however, that the limited operating experience, especially with large rigid airships, makes any statement of this type somewhat conjectural.) In 1937, the most advanced passenger airplane (the DC-3) had double the cruising speed of the most advanced airship (the Hindenburg). References 1.3, 5.20, and 5.21 indicate that in 1937 the DC-3 had total operating costs per seat-mile between one-half and one-third those of the Hindenburg. Although the Hindenburg disaster and the approach of World War II hastened the end of commercial airship operations, it is clear that the fundamental cause was the growing inability of the airship to compete economically with the airplane in long-haul transportation.
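The one-fourth figure follows directly from the productivity definition: with equal payloads and equal empty weight fractions, the productivity ratio reduces to the speed ratio. A minimal check, in which the fanjet cruise speed is an assumed round number rather than a value from the report:

```python
# Productivity = cruise speed x payload weight. With identical payloads and
# identical empty weight fractions, only the speed ratio survives.
# 480 mph is an assumed round number for fanjet cruise, four times the
# airship's 120 mph.
airship_mph = 120.0
fanjet_mph = 480.0
payload_tons = 50.0   # any common payload cancels in the ratio

productivity_ratio = (airship_mph * payload_tons) / (fanjet_mph * payload_tons)
print(productivity_ratio)   # 0.25: one-fourth the airplane's productivity
```

Since revenue accrues in proportion to productivity, this ratio carries straight through to the economic comparison in the rest of the section.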
Although past commercial airship operations have consisted primarily of long-haul transportation of passengers along with freight and mail, because of the airship's low speed and productivity this is not a likely mission for a modern airship. One passenger-carrying possibility is a cruise ship type of operation, but the market size for this application is likely too low for development incentive. Because of an airship's natural attributes and drawbacks compared with other transportation modes, attention for passenger airships is drawn to short-haul applications. For short stage lengths, the speed disadvantage of airships as compared with airplanes is relatively unimportant. However, the V/STOL capability and the relatively low noise and fuel consumption (due to lower power levels) of the airship become important advantages. These advantages may allow an airship to penetrate short-haul markets which have to date been unavailable to heavier-than-air craft. In fact, there are passenger markets not presently serviced by the trunk or local airlines because of their short stage lengths or other factors. Specific missions are service between city centers, between minor airports, and airport feeder service. Vehicles in the 30- to 150-passenger range would be required, and stage lengths would lie between 20 and 200 miles. Air modes offer no advantages over ground modes at stage lengths less than about 20 miles, and passenger airships probably cannot compete with airplanes at stage lengths greater than 200 miles. Presently existing competing modes include general aviation fixed and rotary wing aircraft as well as ground modes. Air modes have been able to ... cases they allow savings in door-to-door times. An airship has a good chance to be competitive because of the relatively high operating costs of the competing heavier-than-air craft. In fact, Airship Industries envisions the short-haul passenger market as one application of its AI-600 airship.
Turning now to the transportation of cargo, speed is not as significant to shippers as to passengers, as is evidenced by the relatively low percentage of cargo that travels by air. For example, the air mode carries only 0.5% of the total cargo by weight in the U.S.-Europe market and less than 0.2% of the U.S. domestic freight. Because of the higher availability of trucks and their more numerous terminals, trucks generally give faster door-to-door service (as well as lower cost) than airplanes at stage lengths less than 500 miles. Because of the airship's low productivity, it is not likely it will be able to compete economically with either existing air or ground modes of cargo transportation. However, there may be a range of stage lengths centered around 500 miles for which an airship service could offer lower door-to-door trip times than any other mode could offer. Thus there may be a limited market for airship transportation of speed-sensitive, high-value cargo over moderate ranges.

In addition to the conventional cargo transportation missions just discussed, there may be special cargo missions for which the airship is uniquely suited. An example is transportation in less developed regions where ground mode infrastructure and air terminals do not exist (Refs. 5.22, 5.23). Agricultural commodities are a particularly attractive application since their transportation is one-time-only, or seasonal, in nature and crop locations are often in remote regions with difficult terrain. Closely related to this application is timber transportation in remote areas. The problem with this class of application is that the market size is not well-defined at present and may be too small to warrant a vehicle development. There is the same problem with long-haul transport of heavy and/or outsized cargo. Short haul of heavy cargo, on the other hand, appears to be a viable application, and this mission was discussed in Section 3.
An airship application frequently mentioned a few years ago is the transportation of natural gas. This application is unique in the sense that the cargo itself would serve as the lifting gas and possibly even as the fuel. Significant advantages of an airship over pipeline and liquid-natural-gas tanker ships are increased route flexibility and decreased capital investment in facilities in countries which are potentially politically unstable. However, an early study (Ref. 1.7) found that, because of the extremely low costs of transportation by pipelines and tankers, airship costs would be several times higher than the transportation costs of existing systems. Thus, in spite of some obvious advantages, the transportation of natural gas does not seem to be a viable mission for airships.

For military long-haul missions, as opposed to civil missions, there are many important considerations other than operating cost. For example, vehicle requirements include extremely long range, very large payloads, low observable properties, and a high degree of self-sufficiency (minimum dependence on fixed ground facilities). Since an airship would compare very favorably with airplanes for many of these requirements, several authors have considered airships for the strategic airlift mission. Interest in this airship application stems not only from deficiencies in existing strategic aircraft but also from a severe capacity deficiency in the entire military airlift system. For example, the United States possesses about one-third of the airlift capacity that would be required in the event of a major NATO-Warsaw Pact conflict (Ref. 5.24). The question of how to provide the additional needed capability is obviously of vital importance. Because of the limited amount of resources available for military forces and the global commitments of these forces, the United States and other western military powers have adopted a policy of limited forward deployment of forces.
Strategic mobility is then required for reinforcement in the event of hostilities. In the early stages of a conflict, this reinforcement would be provided by conventional airlift. As sealift becomes effective (about 30 days for sealift between the United States and Europe), airlift would be used only for the resupply of high-value or critically needed supplies (Ref. 5.24). In this scenario, an airship could supplement the existing airlift and sealift capability by providing faster response time than sealift and greater payload-range performance than conventional airlift.

The advantage of an airship over an airplane for strategic mobility comes from the airship's characteristic of retaining its efficiency as vehicle size is increased (see Section 3.1). This allows consideration of vehicles with payloads several times those of existing transport airplanes. Figure 5.1, taken from Ref. 5.24, shows that an airship of 40 x 10^6 ft^3 volume could transport a payload of 300 tons from the middle of the continental United States to Europe and return (a distance of about 9000 nautical miles) without refueling. Thus fuel supplies at the offloading base would not be depleted. This capability is far in excess of what is possible with the C-5 airplane. The main question is whether or not such an increase in capability is affordable.

Both conventional and hybrid airship concepts have been proposed for transportation missions. We have previously discussed conventional airships and hybrid concepts for vertical heavy-lift. We now discuss hybrid airship concepts proposed primarily for transportation missions. These concepts include airships with wings, "lifting-body" shapes, multiple cylindrical hulls, and concepts which combine propeller/rotor systems with buoyant hulls. Both VTOL and STOL versions of these vehicles have been studied. Early studies (Refs.
1.1-1.18) quickly eliminated both the more radical concepts (because of design uncertainty) and the multiple hull concepts (because of their relatively high surface area-to-volume ratios). More detailed analysis showed that winged airships are generally inferior to the lifting bodies. Therefore, the subsequent discussion will consider only lifting-body hybrids for long-haul missions and prop/rotor hybrids for short haul.

Many different lifting-body airship concepts were studied in Refs. 1.1-1.18. We will select the Aereon Dynairship (Ref. 5.14) as representative of this class of vehicle because of the background of information available on the delta planform lifting-body shape and because this vehicle has received the most attention. The Aereon Dynairship (Fig. 5.2) consists of a buoyant hull of approximately delta planform with an aspect ratio in the range of 1.5 to 2.0. Control surfaces and propulsors are arrayed along the vehicle trailing edge for maximum efficiency. The Dynairship concept has received considerable analysis and development, including the construction of a flight vehicle. The basic idea of the Dynairship, as with all lifting-body hybrids, is to "flatten" the buoyant hull to obtain a shape with higher lift efficiency. On the negative side, this flattening increases the surface area, which tends to increase friction drag and structural weight. There has been considerable disagreement in the literature as to the net effect of these trends. This question will be taken up in more detail in the following section.

A vehicle concept for the short-haul transportation mission, called the airport feeder vehicle, was studied in Refs. 1.15 and 1.16. The concept is a semibuoyant airship capable of transporting passengers or cargo to major conventional takeoff and landing hub terminals from suburban and downtown depots. The basic configuration and operational concept are depicted in Fig. 5.3.
The hull is of the classical shape and is a pressurized metalclad construction of 428,500 ft^3. The vehicle gross weight is 67,500 lb; 35% of the total lift is provided by buoyant force with the remainder provided by dynamic forces. The propulsion system consists of four fully cross-shafted, tilting prop/rotors. At low speeds the propulsors are tilted to provide vertical lift and at cruise they are tilted to provide horizontal thrust, with the dynamic lift then provided by the hull being flown at a positive angle-of-attack. The design has an 80-passenger capacity and controllable VTOL capability. The cruise velocity for maximum specific productivity was estimated to be 130 knots at an altitude of 2000 ft. The noise level at takeoff was estimated to be 86.5 PNdB and the fuel consumption to be 0.25 gallons/ton mile. The major areas of technical uncertainty were identified to be the hover/transition phase stability, and the control characteristics and flying/ride qualities in turbulent air. Turning to the military strategic airlift mission, a recent study (Ref. 5.25) has analyzed both conventional rigid and lifting-body hybrid airship designs for this application. It was found that both vehicle concepts had about the same performance, but the lifting-body design was judged superior due to the problem of ballasting for buoyancy control in conventional airships. The lifting-body airship proposed in Ref. 5.25 is shown in Fig. 5.4. It is a delta-planform configuration of low aspect ratio with a cylindrical forebody. Actually it is closer in appearance and performance characteristics to a classical airship than to the "high" aspect ratio delta-planform hybrids, such as the Aereon Dynairship. It can in fact be viewed as a conventional airship with a "faired-in" horizontal tail which is flown "heavy." The design features VTOL and hover capability, 115 knot cruise speed, and a payload of 363 tons.
The configuration parameters were selected based on a parametric study of this class of shape. In this section we take up in more detail the question of the productivity of modern airships. Specific productivity (cruise speed times payload weight, divided by empty weight) will be used as a figure of merit. Productivity is a vehicle's rate of doing useful work and is directly proportional to the rate of generation of revenue. Assuming vehicle cost to be proportional to empty weight, specific productivity is then a direct measure of return on investment. Early studies have resulted in a wide variety of conclusions regarding the performance of airships in transportation missions. In particular, some studies have concluded that delta-planform hybrids have inferior productivity characteristics and operating economics when compared with classical, fully-buoyant, approximately ellipsoidal airships, and that neither vehicle is competitive with transport airplanes. On the other hand, other studies have concluded that deltoids are greatly superior to ellipsoids and, in fact, are competitive with existing and anticipated airplanes. Reference 5.18 identified substantial differences in estimating aerodynamic performance and, most significantly, empty weight, as the cause of these discrepancies. This subsection is based on Ref. 5.18 and the results are in basic agreement with another similar study (Ref. 5.15). In the parametric study of Ref. 5.18, four vehicle classes and two empty weight estimation formulas were analyzed for three standard missions.
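Restating the figure of merit just defined in symbolic form (the notation here is mine, not the report's):

$$P_s = \frac{V \, W_p}{W_e}$$

where $V$ is cruise speed, $W_p$ is payload weight, and $W_e$ is empty weight. Doubling cruise speed, doubling payload, or halving empty weight each doubles the specific productivity $P_s$, which is why the empty-weight estimation formulas matter so much in the comparisons that follow.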
Specifically, the cases considered were (1) a classical, fully-buoyant, ellipsoidal airship whose weight is estimated by a "baseline" formula; (2) the same vehicle, but whose weight is estimated to be one-half that given by the baseline formula; (3) a conventionally-shaped airship flown with dynamic lift (and therefore a "hybrid"); (4) a "high" aspect ratio (1.74) delta-planform hybrid with baseline empty weight, similar to the Dynairship of Fig. 5.2; (5) the same vehicle with one-half the empty weight; and (6) a low aspect ratio (0.58) delta-planform hybrid similar to the vehicle shown in Fig. 5.4 with baseline weight. In all cases, it is assumed that ballast is collected to maintain constant gross weight during flight. Two empty weight estimation formulas are included because of the large discrepancies in this parameter in the literature. The three missions are (1) a short-range mission (300 n.mi. range, 2,000 ft. altitude, 100,000 lb. gross takeoff weight); (2) a transcontinental mission (_,000 n.mi. range, 13,000 ft. altitude, 600,000 lb. gross takeoff weight); and (3) an intercontinental mission (5,000 n.mi. range, 2,000 ft. altitude, 1,000,000 lb. gross takeoff weight). The six specific vehicles were optimized with respect to cruise speed and buoyancy ratio in terms of maximum specific productivity for each mission. The results of the analysis are shown in Figs. 5.5-5.7. 1. Empty-weight fraction has a relatively large effect on airship specific productivity. Reducing the empty weight by one-half and reoptimizing the vehicles results in higher best speeds and large increases in specific productivity (between 200% and 500%, depending on vehicle shape and mission). 2. Deltoids are more sensitive to empty weight than ellipsoids. (Because large, high-aspect-ratio deltoid hybrid airships have never before been designed, built, and flown, there is significant uncertainty regarding their structural weights.) 3.
Low-aspect-ratio (0.58) deltoid hybrid airships have higher specific productivity than fully-buoyant ellipsoidal vehicles, except at long ranges where they are comparable. Among the vehicle concepts considered, it is the best airship for all three missions, considered from a specific productivity standpoint. Such a vehicle seems to be an effective compromise between the good aerodynamic efficiency of the high-aspect-ratio deltoid and the good structural efficiency of the classical ellipsoidal airship. At longer ranges than those considered here, the classical airship would tend to be slightly superior. 4. For equivalent empty weight fractions, airships cannot compete with existing transport airplanes on a specific productivity basis. Values of airship specific productivity were approximately one-third, one-fifth, and one-third those of equivalent size airplanes for the short range, transcontinental, and intercontinental missions, respectively. 5. The cruise speeds for maximum specific productivity of airships are very low compared with those of jet transport airplanes. This is particularly true for fully-buoyant airships at intermediate to long ranges, for which optimum cruise speeds of 60 knots are typical. The fuel efficiencies of fully-buoyant, ellipsoidal airships were found to be about five times better than those of transport airplanes. The fuel efficiencies of deltoid hybrid airships are intermediate between those of fully-buoyant ellipsoidal airships and airplanes, ranging from one and one-half to five times better than those for airplanes. Because airship fuel efficiency is highly sensitive to cruise speed, fuel efficiencies will be greatly reduced if higher speeds are adopted for operational reasons. In any event, airships will use less fuel than airplanes and will, therefore, become increasingly more competitive as fuel prices increase. Direct operating cost (DOC) is the usual criterion by which a transportation vehicle is judged.
Unfortunately, as is the case for productivity estimates, there has also been a great deal of disagreement between the various published estimates of airship DOCs. Some studies (Refs. 5.1, 5.3-5.5, 5.8) have concluded that airships are economically superior to transport airplanes, some (Refs. 5.6, 5.7, 5.9) have concluded they are about equal, and some (Refs. 1.22, 1.23, 5.20, 5.21, 5.26) have predicted that the DOC of a modern airship would be much greater than that of existing airplanes. These studies are critically reviewed in Ref. 1.22, where the discrepancies are found to result from differences in study ground rules and in differing degrees of optimism in technical and economic assumptions. To compute the operating cost elements of depreciation and insurance, an estimate of vehicle unit acquisition cost is needed, and here already is a major cause of published disagreement. Although an accurate estimate of airship vehicle acquisition cost has yet to be made, Fig. 1.6 indicates the plausible conclusion that the development and manufacturing costs of airships will be roughly the same as those for airplanes and thus major capital investments will be required. Table 5.1 compares an airship DOC as estimated in Ref. 1.22 with the DOC being experienced for the Boeing 747 (Refs. 5.26, 5.27). The airship is a 10 x 10^6 ft^3 modern rigid design; all costs are in 1975 U.S. dollars. The table shows that the airship has been assumed to have a lower unit cost and much higher annual utilization (due to its lower speed) but has only one-fifth the block speed of the 747. On an hourly basis, the airship has lower depreciation, insurance, and maintenance costs, and much lower fuel costs. This results in an hourly cost for the airship which is about one-third that of the airplane. However, when converted to a per-mile basis, the airship DOC is about 2.4 times that of the airplane.
Assuming reasonable values of indirect operating costs, profit, and load factor, and using the DOC estimate just discussed, required airship revenues were also computed in Ref. 1.22. These revenues are compared to the national average revenues of several modes in 1975 (Ref. 5.28) in Fig. 5.8. The figure shows that the revenue required for a profitable airship cargo operation is substantially greater than transport airplane revenues and many times greater than the revenues of surface modes. When one considers short-haul VTOL airship operations, the economic competitiveness of airships improves considerably. This is because existing and anticipated heavier-than-air VTOL vehicles, mainly helicopters, are relatively expensive to operate as compared with conventional fixed-wing aircraft. An estimated breakdown of DOC for the airport feeder airship concept of Fig. 5.3 is shown in Table 5.2 (Refs. 1.15, 1.16). In comparison with other advanced, conceptual VTOL aircraft, the airship DOC of 5.52¢ per available-seat statute mile is economically competitive. In comparison with actual helicopter airline experience, it is superior by about a factor of two. The fuel consumption is estimated to be about 30% better than for current helicopters. To conclude this section, all evidence points to the conclusion that airships will have difficulty competing with airplanes over established transportation routes. It will take a strong combination of several of the following requirements to make a transport airship viable: (1) large payload, (2) extremely long or very short range, (3) expensive or limited fuel, (4) low noise, (5) VTOL, (6) undeveloped infrastructure, and (7) high-value or critical cargo. The best possibilities therefore seem to be either a short-haul VTOL passenger vehicle or a large, long-range strategic military vehicle.
5.6 REFERENCES

5.1 Southern California Aviation Council, Inc., Committee on Lighter Than Air: Technical Task Force Report, Pasadena, CA, May 15, 1974.
5.2 Smith, C. L.; and Ardema, M. D.: Preliminary Estimates of Operating Costs for Lighter Than Air Transports, Proceedings of the Interagency Workshop on Lighter Than Air Vehicles, Flight Transportation Laboratory Report R75-2, Cambridge, MA, January 1975 (cited in following references as "Proceedings").
5.4 Lightspeed USA, Inc.: Executive Summary of Lightship Development, February 1976.
5.5 Reier, G. J.; and Hidalgo, G. C.: Roles for Airships in Economic Development, Proceedings.
5.6 Madden, R. T.; and Bloetscher, F.: Effect of Present Technology on Airship Capabilities, Proceedings.
5.7 Toliver, R. D., et al.: Airships As An Alternate Form of Transportation, A Systems Study, prepared by the Eglin AFB Class of the Master's Degree Program at the University of West Florida, March 1976.
5.8 Coughlin, S.: The Application of the Airship to Regions Lacking in Transport Infrastructure, Proceedings.
5.11 Munk, R.: Action Rather Than Words, presented at the Symposium on the Future of the Airship -- A Technical Appraisal, London, England, November 20, 1975.
5.12 Putman, W. F.: Performance Comparisons for a Conceptual Point-Design Dynairship In and Out of Ground Effect, Final Report of NADC Contract N62269-77-M-2502, Feb. 1978.
5.13 Semi Air Buoyant Vehicle -- SABV Parametric Analysis and Conceptual Design Study, Goodyear Aerospace Corporation, Final Report of NADC Contract N-62269-78-C-0468, June 1977.
5.14 Miller, W. M., Jr.: The Dynairship, Proceedings of the Interagency Workshop on Lighter Than Air Vehicles, Flight Transportation Laboratory Report R75-2, Cambridge, Mass., Jan. 1975.
5.15 Brewer, W.: The Productivity of Airships in Long Range Transportation, AIAA Paper 79-1598, 1979.
5.16 Havill, C. D.; and Williams, L. J.: Study of Buoyancy Systems for Flight Vehicles, NASA TM X-62,168, 1977.
5.17 Glod, J.
E.: Airship Potential in Strategic Airlift Operations, AIAA Paper 79-1598, 1979.
5.18 Ardema, M. D.; and Flaig, K.: Parametric Study of Modern Airship Productivity, NASA TM 81151, July 1980.
5.19 Schneider, John J.: Future Lighter-Than-Air Concepts, SAE Paper No. 750618, presented at the Air Transportation Meeting, Hartford, CT, May 6-8, 1975.
5.20 Brooks, P. W.: Why the Airship Failed, Aeronautical Journal, October 1975.
5.21 Shevell, R. S.: Technology, Efficiency, and Future Transport Aircraft, Astronautics and Aeronautics, Sept. 1975.
5.22 Mayer, N. J.: A Study of Dirigibles for Use in the Peruvian Selva Central Region, AIAA Paper 83-1970, 1983.
5.23 Cahn-Hildago, G. R. A.: Barriers and Possibilities for the Use of Airships in Developing Countries, AIAA Paper 83-1914, 1983.
5.24 Pasquet, G. A.: Lighter-Than-Air Craft for Strategic Mobility, AIAA Paper 79-1597, 1979.
5.25 Glod, J. E.: Airship Potential in Strategic Airlift Operations, AIAA Paper 79-1598, 1979.
5.26 Vittek, J. F.: The Economic Realities of Air Transport, presented at the Symposium on the Future of the Airship -- A Technical Appraisal, London, England, November 20, 1975.
5.27 Ray and Ray: Operating and Cost Data 747, DC-10, and L-1011 -- Second Quarter, 1975, Aviation Week and Space Technology, vol. 103, no. 12, September 23, 1975, p. 38.
5.28 Summary of National Transportation Statistics, Report No. DOT-TSC-OST-76-18, June 1976, U.S. Department of Transportation.

[Table 5.1: Comparison of an airship DOC estimate with actual Boeing 747 data, 1975 dollars; partial recovered entries in cents per available-seat statute mile: depreciation 1.37, crew 0.75, fuel 1.25, insurance 0.26, maintenance 1.78]
[Fig. 5.1: Payload-range capability of a 40 x 10^6 ft^3 airship compared with the airplane (payload in tons)]
[Fig. 5.3: Airport feeder operational concept -- airways to major airports, cargo/passenger access]
[Fig. 5.4: Lifting-body hybrid airship design for strategic airlift (planform area 340,071 ft^2, gross weight 2,500,000 lb)]
[Figs. 5.5-5.7: Airship specific productivity versus buoyancy ratio for the short-range, transcontinental (Fig. 5.6), and intercontinental (Fig. 5.7) missions, with optimum speeds annotated for the ellipsoid, ellipsoid hybrid, and deltoid vehicles at baseline and half-baseline weight]
[Fig. 5.8: Required airship revenue compared with 1975 national average revenues for the 747, air, truck, rail, ship, and pipeline modes]

Report Documentation Page: NASA TM 86672, "Missions and Vehicle Concepts for Modern Propelled Lighter-Than-Air Vehicles," December 1984. Point of Contact: Mark D. Ardema, Ames Research Center, MS 210-9, Moffett Field, CA 94035, (415) 694-5450 or FTS 448-5450. Subject Category - 01
#include <stdio.h>
#include <string.h>
#include <rte_common.h>

String-related functions as replacement for libc equivalents.

Definition in file rte_string_fns.h.

rte_strsplit(): Takes string "string" parameter and splits it at character "delim" up to maxtokens-1 times, to give "maxtokens" resulting tokens. Like the strtok or strsep functions, this modifies its input string, by replacing instances of "delim" with '\0'. All resultant tokens are returned in the "tokens" array, which must have enough entries to hold "maxtokens".

strlcpy(): Copy string src to buffer dst of size dsize. At most dsize-1 chars will be copied. Always NUL-terminates, unless (dsize == 0).
Create a global variable called myUniqueList. It should be an empty list to start. Next, create a function that allows you to add things to that list. Anything that's passed to this function should get added to myUniqueList, unless its value already exists in myUniqueList. If the value doesn't exist already, it should be added and the function should return True. If the value does exist, it should not be added, and the function should return False. As an extra, the rejected values should go into a list called myLeftovers.

myUniqueList = ()
myLeftovers = ()

def addUniqueElement(b):
    if b not in myUniqueList:
        print(myUniqueList.append(b))
        return True
    else:
        myLeftovers.append(newElement)
        return False

print(addUniqueElement())
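For reference, a corrected version of the snippet above. The fixes are: the containers must be lists ([]) rather than tuples (()); list.append mutates the list and returns None, so its result shouldn't be printed as the answer; the else branch should append the rejected value b (newElement is never defined); and the call at the bottom needs an argument.

```python
myUniqueList = []   # values seen for the first time
myLeftovers = []    # rejected duplicates (the "extra" part)

def addUniqueElement(b):
    if b not in myUniqueList:
        myUniqueList.append(b)  # append mutates the list, returns None
        return True
    else:
        myLeftovers.append(b)   # collect the duplicate instead
        return False

print(addUniqueElement("spam"))  # True  - first time we see "spam"
print(addUniqueElement("spam"))  # False - duplicate
print(myUniqueList)              # ['spam']
print(myLeftovers)               # ['spam']
```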
Something I really like about FastAPI and Typer, both from the same author, Sebastián Ramírez, AKA Tiangolo, is the super-convenient dependency injection. In a recent project, I wasn't able to use FastAPI, so I decided to roll my own. In this post, I'll describe how. But first, let me show you how nice Tiangolo's libraries are to use.

Typer

Typer is for building Python command-line tools. Often you want to be able to call Python scripts from the command line with extra arguments to do all manner of automation tasks. Python has a low-level method of fetching the values passed from the command line using `sys.argv`, which contains a list of arguments:

```
python sys_test.py 123
```

```python
import sys

print(f"Your number was {sys.argv[1]}")
```

This would print "Your number was 123".

But this gets a bit complex when you have more than one argument, and particularly with optional arguments as well. It also doesn't do anything to help you produce errors when expected args aren't provided.

Using argparse

Python does have a solution to this though: it comes with a higher-level library for building a command-line interface (allowing for optional arguments, help text, etc.) using the `argparse` module:

```python
from argparse import ArgumentParser

def main(foo: str, bar: int):
    print(foo)
    print(bar)

if __name__ == "__main__":
    parser = ArgumentParser(description="Function that does things and stuff")
    parser.add_argument("foo", type=str, help="Required Foo")
    parser.add_argument("--bar", type=int, help="Optional Bar", default=1)
    args = parser.parse_args()
    main(args.foo, args.bar)
```

As you can see, argparse works by having you write some code to prepare all of the arguments, and gives you an opportunity to define some documentation for these arguments as well.
Running the script with `python argparser_example.py --help` looks like this:

```
usage: argparser_example.py [-h] [--bar BAR] foo

Function that does things and stuff

positional arguments:
  foo         Required Foo

optional arguments:
  -h, --help  show this help message and exit
  --bar BAR   Optional Bar
```

But there are some drawbacks to this. Firstly, it's quite a lot of code just to accept a couple of arguments; I can never remember how to write these with argparse, particularly how to deal with optional and default arguments. Secondly, you need to define all your arguments before calling the business function (`main()` in this case), passing the arguments on. While this is fine for some use-cases, in others it's a bit annoying to maintain.

Fortunately there are several other libraries in Python's package ecosystem, some of which are more convenient to use or more powerful. One such library I previously used is Google's Fire, while another that I use now is Tiangolo's Typer.

Using Typer

The Typer equivalent of the above code is substantially shorter:

```python
import typer

def main(
    foo: str = typer.Argument(..., help="Required Foo"),
    bar: int = typer.Argument(1, help="Optional Bar"),
):
    """
    Function that does things and stuff
    """
    print(foo)
    print(bar)

if __name__ == '__main__':
    typer.run(main)
```

Now, instead of argument parsing having to live by itself, where it's easy to forget things and end up with the wrong arguments, mismatched types, or help text, the CLI arguments are introspected from the function's arguments, including the type hints, and injected by Typer! This is the kind of injection we want to replicate later. The benefit is the CLI arguments and documentation live alongside the function arguments, making it much easier and neater to maintain.
Running `python typer_example.py --help` now generates the following:

```
Usage: c.py [OPTIONS] FOO [BAR]

  Function that does things and stuff

Arguments:
  FOO    Required Foo  [required]
  [BAR]  Optional Bar  [default: 1]

Options:
  --install-completion  Install completion for the current shell.
  --show-completion     Show completion for the current shell, to copy it or
                        customize the installation.
  --help                Show this message and exit.
```

It's useful to note that you can run `--install-completion` to install shell autocomplete! This is one of the many things that Typer provides in addition to basic CLI argument handling. It's worth noting at this point that Google's Fire library works in a similar way.

Fast API

Another example of this kind of dependency injection at work is in FastAPI, a high-level HTTP server framework, which leverages a package called Pydantic to automatically decode an API request's parameters, payload, and path variables, saving you from a lot of boilerplate.

To compare, we can look at Flask, another popular HTTP server framework, that is often the starting point for new Python students.

Using Flask

A simple Flask app to return a simple payload might look like this:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/foo/<foo>", methods=["post"])
def handle_foo(foo):
    payload = request.json
    bar = payload.get("bar", 1)
    return jsonify(foo=foo, bar=bar)
```

There are a few things going on here. First, `foo` is taken from the path, so if I were to hit the server at `/foo/hello` then the `foo` argument would be `hello`. This is nice, as Flask handles this for us. The next thing is, it expects a JSON payload, and I will try to read the `"bar"` key from it, defaulting to the value 1. So basically I want the payload to look like:

```json
{
    "bar": 123
}
```

But there's not much validation here. If I provide a string instead of an int, there wouldn't be an error. If I provided extra values, there's no error.
While this is a trivial case to check, imagine you had tens of payload values (potentially even nested structures!) to check, and you didn't want to have to write that out in a big chain of if statements. Python does have a solution to this: higher-level frameworks handle the validation for you.

Using FastAPI

FastAPI's approach is to define the data models up-front with Pydantic, and use these models in the arguments of the function. Not only does this allow FastAPI to automatically decode the value and provide it to you, but it also automatically generates documentation!

```python
from pydantic import BaseModel, Field
from fastapi import FastAPI

app = FastAPI()

class Payload(BaseModel):
    bar: int = Field(1, title="Optional Bar")

class RetModel(BaseModel):
    foo: str = Field(..., title="Result Foo")
    bar: int = Field(..., title="Result Bar")

@app.post("/foo/{foo}", response_model=RetModel)
async def handle_foo(foo: str, payload: Payload):
    return RetModel(foo=foo, bar=payload.bar)
```

Here there's more work pre-defining a bunch of models, but this has several benefits:

- Pydantic will be used to parse the data. If the incoming data is the wrong format, an error message will be generated that explains exactly what is wrong with it. This saves you from having to write a big data validation chunk of code.
- It's automatically injected so you don't have to handle it.
- Documentation can be automatically generated for you!

Even this tiny bit of code generates fully-fledged API documentation available when you run the server, complete with descriptions and correct variable types.

Rolling your own

So we've seen Tiangolo's excellent work with dependency injection, making use of Python's introspection capabilities for the code to look at your function's arguments at run-time, figure out what the function wants, and inject them. As you can see from not only the FastAPI and Typer examples, but also Flask too, the function is wrapped with a decorator which allows metaprogramming, something Python is fairly strong at.
This decorator is what allows us to do this injection, and where we'll put the introspection code. In this example, we're going to retrofit Flask with FastAPI-style injection of payload models. This is actually not necessary in real life, because there are many libraries out there that do this or a similar thing already, but this is here for illustrative purposes.

To do this, we first start with our previous Flask example:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/foo/<foo>", methods=["post"])
def handle_foo(foo):
    payload = request.json
    bar = payload.get("bar", 1)
    return jsonify(foo=foo, bar=bar)
```

Next, we'll start creating a decorator that will do our injection job. A blank decorator that doesn't do anything yet looks like this:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def inject_pydantic_parse(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@app.route("/foo/<foo>", methods=["post"])
@inject_pydantic_parse
def handle_foo(foo):
    payload = request.json
    bar = payload.get("bar", 1)
    return jsonify(foo=foo, bar=bar)
```

For now, this does nothing at all; it passes straight through. But now that the function is being decorated by wrapper, we can start doing stuff to it. Our first job is to detect any Pydantic models in the function arguments.
We can do this using `get_type_hints()` from the typing module, which lets us introspect our function (passed in as `func`):

```python
from typing import get_type_hints

from flask import Flask, request, jsonify
from pydantic import BaseModel, Field

app = Flask(__name__)

class Payload(BaseModel):
    bar: int = Field(1, title="Optional Bar")

def inject_pydantic_parse(func):
    def wrapper(*args, **kwargs):
        for arg_name, arg_type in get_type_hints(func).items():
            parse_raw = getattr(arg_type, "parse_raw", None)
            if callable(parse_raw):
                kwargs[arg_name] = parse_raw(request.data)
        return func(*args, **kwargs)
    return wrapper

@app.route("/foo/<foo>", methods=["post"])
@inject_pydantic_parse
def handle_foo(foo, payload: Payload):
    return jsonify(foo=foo, bar=payload.bar)
```

Here, this loop is going over all the type hints in the function, and for each of them, it tests whether there is a `parse_raw` callable, calls it with `request.data`, and inserts the results into the keyword arguments that will be used to call the function later. This is the "injection"!

```python
for arg_name, arg_type in get_type_hints(func).items():
    parse_raw = getattr(arg_type, "parse_raw", None)
    if callable(parse_raw):
        kwargs[arg_name] = parse_raw(request.data)
```

Finally, since this wrapper can handle the return value as well, we can also deal with that in a similar way, and collect the return value as a Pydantic model. To do this we need an extra layer of wrapping, due to the way Python handles decorator arguments.
There are actually neater ways to do this with some built-in libraries, but I'll show it here without:

```python
from typing import get_type_hints

from flask import Flask, request
from pydantic import BaseModel, Field

app = Flask(__name__)

class Payload(BaseModel):
    bar: int = Field(1, title="Optional Bar")

class RetModel(BaseModel):
    foo: str = Field(..., title="Result Foo")
    bar: int = Field(..., title="Result Bar")

def inject_pydantic_parse(response_model):
    def wrap(func):
        def wrapped(*args, **kwargs):
            for arg_name, arg_type in get_type_hints(func).items():
                parse_raw = getattr(arg_type, "parse_raw", None)
                if callable(parse_raw):
                    kwargs[arg_name] = parse_raw(request.data)
            retval = func(*args, **kwargs)
            if isinstance(retval, response_model):
                return retval.dict()
            return retval
        return wrapped
    return wrap

@app.route("/foo/<foo>", methods=["post"])
@inject_pydantic_parse(response_model=RetModel)
def handle_foo(foo, payload: Payload):
    return RetModel(foo=foo, bar=payload.bar)
```

Now, by annotating the `handle_foo()` function with `@inject_pydantic_parse(response_model=RetModel)`, the wrapper will check whether the return object from `handle_foo` was indeed the desired `response_model`, and then decode it (in this case using the `dict()` method that Pydantic models have, which the rest of Flask will handle turning into JSON for us). Checking for the nominated response model is safe, as it doesn't interfere with other possible return values from the function.

While this seems like a lot of extra code, this `inject_pydantic_parse()` function can be tidied away in a utility module, and now you can decorate all of your Flask endpoints in FastAPI style...

...though, not really. This is a super-simple example and doesn't handle a lot of edge cases. To use this for real, you'd need to add in more error handling, and perhaps combine both decorators so you only need one. You might as well just switch to FastAPI if you had the option!
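One of those neater built-in helpers is `functools.wraps`, which copies the wrapped function's name and docstring onto the wrapper. Flask registers endpoints by function name, so without it, decorating two routes this way would clash (both would be named "wrapped"). A minimal, Flask-free sketch of the same wrapping pattern:

```python
import functools
from typing import get_type_hints

def introspecting_decorator(func):
    """Decorator skeleton in the same style as inject_pydantic_parse,
    but using functools.wraps to keep func's metadata intact."""
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        hints = get_type_hints(func)  # inspection/injection would use this
        return func(*args, **kwargs)
    return wrapped

@introspecting_decorator
def greet(name: str) -> str:
    """Say hello."""
    return f"Hello {name}"

print(greet("world"))  # Hello world
print(greet.__name__)  # greet (not "wrapped", thanks to functools.wraps)
print(greet.__doc__)   # Say hello.
```

Without the `@functools.wraps(func)` line, `greet.__name__` would report "wrapped" and the docstring would be lost.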
(I did not, as I was using Google Cloud Functions, and so this option helped me a great deal.)

Cover Photo by Jr R on Unsplash
Null and Attributes

ReSharper and Rider users have been enjoying static analysis for nullable types for over 10 years already. Meanwhile, this feature is now also supported by the compiler itself, and is commonly known as nullable reference types (NRT). It turned out that adding support for C#'s NRT inside ReSharper and Rider was quite challenging, especially since the innards are still a moving target, and our developers were able to find lots of edge cases. All those years we've worked with nullability analysis have finally paid off! In this post, we won't be talking about the boring things, like that we've re-implemented related compiler warnings with our own fast code inspections that show up the moment you've finished typing. Rather, we will focus on the new convenient quick-fixes that ReSharper and Rider are offering, and how everything works together to help migrate a codebase to use nullable reference types. For users of JetBrains Annotations for null-checking, we will also briefly discuss the differences with Roslyn's own attributes in a follow-up post. Let the journey begin!

How to deal with cascading warnings

As mentioned in other articles, migrating to nullable reference types often causes a lot of propagating warnings. This is a good opportunity to note that ReSharper and Rider come with actions to jump to the next issue in either the current file or the complete solution. For the latter to work, we need to include warnings in solution-wide analysis. Both the fast code analysis and these shortcuts enable us to efficiently introduce NRT to our codebase.

Tip: If there are too many warnings, we can temporarily raise the severity for NRT warnings to the error level.

Codebase meets nullable reference types

Since its early public discussion, the Person class has always served as a good example to illustrate the power of nullable reference types. Let's not make an exception here. As a first step, we've already declared the middleName parameter as being nullable:
As a first step, we've already declared the middleName parameter as being nullable:

public class Person
{
    public string FirstName { get; }
    public string MiddleName { get; }
    public string LastName { get; }

    public Person(string firstName, string? middleName, string lastName)
    {
        FirstName = firstName;
        // CS8601: Possible null reference assignment.
        MiddleName = middleName;
        LastName = lastName;
    }
}

The property initialization of MiddleName now shows a possible null reference assignment (CS8601) warning. A common task during our migration is to change the type of the property to its nullable form to bring everything in line.

Let's consider another example of an IPersonRepository interface that provides an API to search for a Person instance by name. In a simpler form, its definition and implementation could look something like this:

public interface IPersonRepository
{
    Person GetPerson(string lastName);
}

public class PersonRepository : IPersonRepository
{
    public Person GetPerson(string lastName)
    {
        // CS8603: Possible null reference return.
        return _people.SingleOrDefault(x => x.LastName == lastName);
    }
}

So our API could return null, causing the compiler to issue a possible null reference return (CS8603) warning. In order to fix this, we need to change the method signature to use Person? as the return type. Easy enough. And while we're at it, let's also update the derived interface. Depending on our preferences, we could have also changed the signature directly. Here we can choose to either use the return type from the base type, or to adapt the base declaration. Note that overriding and implementing members always allow strengthening the contract for output values and weakening it for input values (similar to covariance and contravariance).
Moving forward, we might use our IPersonRepository interface in a situation like the following, where we first make sure that our retrieved Person object is valid (not null) and then continue working with it:

public void M()
{
    var p = _repository.GetPerson("Mads");
    if (IsValid(p))
    {
        // CS8602: Dereference of a possibly null reference.
        Console.WriteLine($"Great job {p.FirstName}!");
    }
}

private bool IsValid(Person? person)
{
    return !string.IsNullOrEmpty(person?.FirstName)
        && !string.IsNullOrEmpty(person?.LastName);
}

Again we're facing a propagating warning – dereference of a possibly null reference (CS8602). Generally, we could silence the compiler by using the null-conditional operator (?.). However, since we know that our object won't be null, we can be more explicit and make use of the null-forgiving operator (!), which suppresses the warning and essentially tells the compiler that we know better.

Another way of fixing this is to add a [NotNullWhen(true)] attribute to the person parameter. This way, the compiler knows that after IsValid has returned true, the object can be dereferenced. ReSharper has been nicely covering such situations since way before NRT: we can just add a ContractAnnotation attribute to our IsValid method and define the contract "=> true, person: notnull; => false", which means that if the method returns true, we can also be sure that our object won't be null.

Let's look at another NRT situation in our codebase. We have a WorkItem that must be assigned to a person that we've just received from our IPersonRepository:

public void N()
{
    var person = _repository.GetPerson("Doe");
    var task = new WorkItem("Fix all the bugs!");
    // CS8604: Possible null reference argument.
    task.AssignTo(person);
}

internal class WorkItem
{
    public WorkItem(string description) => Description = description;

    public string Description { get; }

    public void AssignTo(Person person) { }
}

The AssignTo method can't deal with null values, so the compiler will issue a warning of a possible null reference argument (CS8604). With another quick-fix, we can just use the null-coalescing operator (??) to provide a fallback value. This also works nicely with ReSharper's and Rider's smart completion.

No manager at hand? No worries! We can also just use a throw expression to throw an exception. If the argument is a parameter of the enclosing function, a default of ArgumentNullException will be used. Otherwise, we can define the exception type ourselves.

As we can see, the new nullable reference types also work very nicely with some of the previously introduced language features. And with ReSharper and Rider, the transitions to better code are just an Alt+Enter keystroke away. We hope these new quick-fixes prove helpful in easily migrating your codebase to use nullable reference types and taking advantage of the new language feature. Tell us how it goes, what you think, or if you're missing anything. Download ReSharper 2020.1 or check out Rider 2020.1.

3 Responses to Nullable Reference Types: Migrating a Codebase – A Look at New Language Features in C# 8

Harald Hansen says: April 21, 2020
I hope I don't appear snide if I suggest people read the now classic "Falsehoods Programmers Believe About Names" blog by Patrick McKenzie. Other than that, great blog and love your product! (I use R# every day)

Mike-E says: April 30, 2020
Indeed, great post. Digging the subtle shout of respect to Mads, as well!

Bart Koelman says: April 30, 2020
For anyone that's been using [NotNull] / [CanBeNull] attributes in their codebase, the analyzer at can convert them to the new C# NRT syntax. Works on entire file/project/solution with a single click!
https://blog.jetbrains.com/dotnet/2020/04/20/nullable-reference-types-migration/
0.7 Changelog

0.7.11

No release date.

engine

- [engine] [bug] The regexp used by the make_url() function now parses ipv6 addresses, e.g. surrounded by brackets.

sql

- [sql] [bug] Fixed regression dating back to 0.7.9 whereby the name of a CTE might not be properly quoted if it was referred to in multiple FROM clauses.
- [sql] [bug] [cte] Fixed bug in the common table expression system where, if the CTE were used only as an alias() construct, it would not render using the WITH keyword.
- [sql] [bug] Fixed bug in CheckConstraint DDL where the "quote" flag from a Column object would not be propagated.

misc

- [bug] [tests] Fixed an import of "logging" in test_execute which was not working on some Linux platforms. References: #2669, pull request 41.

0.7.10

Released: Thu Feb 7 2013

orm

- [orm] [bug] Query.merge_result() can now load rows from an outer join where an entity may be None without throwing an error.

engine

- [engine] [bug] Fixed MetaData.reflect() to correctly use the given Connection, if given, without opening a second connection from that connection's Engine.

sql

- [sql] [bug] Backported adjustment to __repr__ for TypeDecorator to 0.7; allows PickleType to produce a clean repr() to help with Alembic.
- [sql] [bug] Fixed bug where Table.tometadata() would fail if a Column had both a foreign key as well as an alternate ".key" name for the column.

mssql

- [mssql] [bug] Fixed bug whereby using "key" with Column in conjunction with "schema" for the owning Table would fail to locate result rows, due to the MSSQL dialect's "schema rendering" logic failing to take .key into account.
- [mssql] [bug] Added a Py3K conditional around an unnecessary .decode() call in the mssql information schema; fixes reflection in Py3K.

oracle

- [oracle] [bug] The Oracle LONG type, while an unbounded text type, does not appear to use the cx_Oracle.LOB type when result rows are returned, so the dialect has been repaired to exclude LONG from having cx_Oracle.LOB filtering applied.

0.7.9

Released: Mon Oct 01 2012

orm

- [orm] [bug] A warning is emitted when lazy='dynamic' is combined with uselist=False. This will be an exception raise in 0.8.
- [orm] [bug] Fixed bug whereby user error in related-object assignment could cause recursion overflow if the assignment triggered a backref of the same name as a bi-directional attribute on the incorrect class to the same target. An informative error is raised now.
- [orm] [bug] Fixed bug where incorrect type information would be passed when the ORM would bind the "version" column, when using the "version" feature. Tests courtesy Daniel Miller.

engine

- [engine] [feature] Dramatic improvement in memory usage of the event system; instance-level collections are no longer created for a particular type of event until instance-level listeners are established for that event.
- [engine] [bug] Added gaerdbms import to mysql/__init__.py, the absence of which was preventing the new GAE dialect from being loaded.
- [engine] [bug] Fixed cextension bug whereby the "ambiguous column error" would fail to function properly if the given index were a Column object and not a string. Note there are still some column-targeting issues here which are fixed in 0.8.
- [engine] [bug] Fixed the repr() of Enum to include the "name" and "native_enum" flags. Helps Alembic autogenerate.

sql

- [sql] [bug] Fixed the DropIndex construct to support an Index associated with a Table in a remote schema.
- [sql] [bug] Fixed bug in the over() construct whereby passing an empty list for either partition_by or order_by, as opposed to None, would fail to generate correctly. Courtesy Gunnlaugur Þór Briem.
- [sql] [bug] Fixed CTE bug whereby positional bound parameters present in the CTEs themselves would corrupt the overall ordering of bound parameters. This primarily affected SQL Server as the platform with positional binds + CTE support.
- [sql] [bug] Quoting is applied to the column names inside the WITH RECURSIVE clause of a common table expression according to the quoting rules for the originating Column.
- [sql] [bug] Fixed regression introduced in 0.7.6 whereby the FROM list of a SELECT statement could be incorrect in certain "clone+replace" scenarios.
- [sql] [bug] Fixed bug whereby usage of a UNION or similar inside of an embedded subquery would interfere with result-column targeting, in the case that a result-column had the same ultimate name as a name inside the embedded UNION.
- [sql] [bug] Added missing operators is_(), isnot() to the ColumnOperators base, so that these long-available operators are present as methods like all the other operators.

postgresql

- [postgresql] [bug] Columns in a reflected primary key constraint are now returned in the order in which the constraint itself defines them, rather than how the table orders them. Courtesy Gunnlaugur Þór Briem.
- [postgresql] [bug] Added 'terminating connection' to the list of messages we use to detect a disconnect with PG, which appears to be present in some versions when the server is restarted.

sqlite

- [sqlite] [feature] Added support for the localtimestamp() SQL function implemented in SQLite, courtesy Richard Mitchell.
- [sqlite] [bug] Adjusted column default reflection code to convert non-string values to string, to accommodate old SQLite versions that don't deliver default info as a string.

mssql

- [mssql] [bug] Fixed compiler bug whereby using a correlated subquery within an ORDER BY would fail to render correctly if the statement also used LIMIT/OFFSET, due to mis-rendering within the ROW_NUMBER() OVER clause.
Fix courtesy sayap.

- [mssql] [bug] Fixed compiler bug whereby a given select() would be modified if it had an "offset" attribute, causing the construct to not compile correctly a second time.
- [mssql] [bug] Fixed bug where reflection of a primary key constraint would double up columns if the same constraint/table existed in multiple schemas.

0.7.8

Released: Sat Jun 16 2012

orm

- [orm] [feature] The 'objects' argument to flush() is no longer deprecated, as some valid use cases have been identified.
- [orm] [bug] Fixed bug whereby subqueryload() from a polymorphic mapping to a target would incur a new invocation of the query for each distinct class encountered in the polymorphic result.
- [orm] [bug] Fixed bug in declarative whereby the precedence of columns in a joined-table, composite column (typically for id) would fail to be correct if the columns contained names distinct from their attribute names. This would cause things like primaryjoin conditions made against the entity attributes to be incorrect.
- [orm] [bug] Fixed identity_key() function, which was not accepting a scalar argument for the identity.
- [orm] [bug] Fixed bug whereby the populate_existing option would not propagate to subquery eager loaders.

engine

- [engine] [bug] Fixed memory leak in the C version of the result proxy whereby DBAPIs which don't deliver pure Python tuples for result rows would fail to decrement refcounts correctly. The most prominently affected DBAPI is pyodbc.
- [engine] [bug] Fixed bug affecting Py3K whereby string positional parameters passed to engine/connection execute() would fail to be interpreted correctly, due to __iter__ being present on Py3K strings.

sql

- [sql] [bug] Added BIGINT to types.__all__, and BIGINT, BINARY, VARBINARY to the sqlalchemy module namespace, plus a test to ensure this breakage doesn't occur again.
- [sql] [bug] Repaired common table expression rendering to function correctly when the SELECT statement contains UNION or other compound expressions, courtesy btbuilder.
- [sql] [bug] Fixed bug whereby append_column() wouldn't function correctly on a cloned select() construct, courtesy Gunnlaugur Þór Briem.

0.7.7

Released: Sat May 05 2012

orm

- [orm] [feature] Added prefix_with() method to Query; calls upon select().prefix_with() to allow placement of MySQL SELECT directives in statements. Courtesy Diana Clarke.
- [orm] [feature] Added new flag include_removes to @validates. When True, collection remove and attribute del events will also be sent to the validation function, which accepts an additional argument "is_remove" when this flag is used.
- [orm] [bug] Fixed issue in the unit of work whereby setting a non-None self-referential many-to-one relationship to None would fail to persist the change if the former value was not already loaded.
- [orm] [bug] Fixed bug introduced in 0.7.6 whereby column_mapped_collection used against columns that were mapped as joins or other indirect selectables would fail to function.
- [orm] [bug] Fixed bug whereby a polymorphic_on column that's not otherwise mapped on the class would be incorrectly included in a merge() operation, raising an error.
- [orm] [bug] Fixed bug in expression annotation mechanics which could lead to incorrect rendering of SELECT statements with aliases and joins, particularly when using column_property().
- [orm] [bug] Fixed bug which would prevent OrderingList from being pickleable.
Courtesy Jeff Dairiki.

- [orm] [bug] Fixed bug in relationship comparisons whereby calling unimplemented methods like SomeClass.somerelationship.like() would produce a recursion overflow, instead of NotImplementedError.

sql

- [sql] [feature] Added new connection event dbapi_error(). It is called for all DBAPI-level errors, passing the original DBAPI exception before SQLAlchemy modifies the state of the cursor.
- [sql] [bug] Removed warning when Index is created with no columns; while this might not be what the user intended, it is a valid use case as an Index could be a placeholder for just an index of a certain name.
- [sql] [bug] If conn.begin() fails when calling "with engine.begin()", the newly acquired Connection is closed explicitly before propagating the exception onward normally.
- [sql] [bug] Added BINARY, VARBINARY to types.__all__.

postgresql

- [postgresql] [feature] Added new for_update/with_lockmode() options for PostgreSQL: for_update="read" / with_lockmode("read"), for_update="read_nowait" / with_lockmode("read_nowait"). These emit "FOR SHARE" and "FOR SHARE NOWAIT", respectively. Courtesy Diana Clarke.
- [postgresql] [bug] Removed unnecessary table clause when reflecting domains.

mysql

- [mysql] [bug] Fixed bug whereby a column name inside of the "KEY" clause for an autoincrement composite column with InnoDB would double quote a name that's a reserved word. Courtesy Jeff Dairiki.
- [mysql] [bug] Fixed bug whereby get_view_names() for the "information_schema" schema would fail to retrieve views marked as "SYSTEM VIEW". Courtesy Matthew Turland.

sqlite

- [sqlite] [feature] Added SQLite execution option "sqlite_raw_colnames=True"; will bypass attempts to remove "." from column names returned by SQLite cursor.description.
- [sqlite] [bug] When the primary key column of a Table is replaced, such as via extend_existing, the "auto increment" column used by insert() constructs is reset. Previously it would remain referring to the previous primary key column.

mssql

- [mssql] [feature] Added interim create_engine flag supports_unicode_binds to the PyODBC dialect, to force whether or not the dialect passes Python unicode literals to PyODBC.

0.7.6

Released: Wed Mar 14 2012

orm

- [orm] [feature] Added "no_autoflush" context manager to Session; used with "with:", it will temporarily disable autoflush.
- [orm] [feature] Added cte() method to Query; invokes common table expression support from the Core (see below).
- [orm] [feature] Added the ability to query for Table-bound column names when using query(sometable).filter_by(colname=value).
- [orm] [bug] Fixed event registration bug which would primarily show up as events not being registered with sessionmaker() instances created after the event was associated with the Session class.
- [orm] [bug] Fixed bug whereby a primaryjoin condition with a "literal" in it would raise an error on compile with certain kinds of deeply nested expressions which also needed to render the same bound parameter name more than once.
- [orm] [bug] Fixed bug whereby objects using attribute_mapped_collection or column_mapped_collection could not be pickled.
- [orm] [bug] Fixed bug whereby MappedCollection would not get the appropriate collection instrumentation if it were only used in a custom subclass that used @collection.internally_instrumented.
- [orm] [bug] Fixed bug whereby SQL adaption mechanics would fail in a very nested scenario involving joined-inheritance, joinedload(), limit(), and a derived function in the columns clause.
- [orm] [bug] Fixed the repr() for CascadeOptions to include refresh-expire.
Also reworked CascadeOptions to be a <frozenset>.

- [orm] [bug] Improved the "declarative reflection" example to support single-table inheritance, multiple calls to prepare(), tables that are present in alternate schemas, and establishing only a subset of classes as reflected.
- [orm] [bug] Scaled back the test applied within flush() to check for UPDATE against a partially NULL PK within one table, to only actually happen if there's really an UPDATE to occur.
- [orm] [bug] Fixed bug whereby if a method name conflicted with a column name, a TypeError would be raised when the mapper tried to inspect the __get__() method on the method object.

engine

- [engine] [feature] Added pool_reset_on_return argument to create_engine; allows control over "connection return" behavior. Also added new arguments 'rollback', 'commit', None to pool.reset_on_return to allow more control over connection return activity.
- [engine] [feature] Added some decent context managers to Engine, Connection:

  with engine.begin() as conn:
      <work with conn in a transaction>

  and:

  with engine.connect() as conn:
      <work with conn>

  Both close out the connection when done; engine.begin() commits the transaction, or rolls it back on error.
- [engine] [bug] Added execution_options() call to MockConnection (i.e., that used with strategy="mock") which acts as a pass-through for arguments.

sql

- [sql] [feature] Added support for SQL standard common table expressions (CTE), allowing SELECT objects as the CTE source (DML not yet supported). This is invoked via the cte() method on any select() construct.
- [sql] [bug] Fixed memory leak in core which would occur when C extensions were used with particular types of result fetches, in particular when orm query.count() was called.
- [sql] [bug] Fixed issue whereby attribute-based column access on a row would raise AttributeError with the non-C version, NoSuchColumnError with the C version. Now raises AttributeError in both cases.
- [sql] [bug] A warning is emitted when a not-present column is stated in the values() clause of an insert() or update() construct. Will move to an exception in 0.8.
- [sql] [bug] Fixed bug in the new "autoload_replace" flag which would fail to preserve the primary key constraint of the reflected table.
- [sql] [bug] Index will raise when arguments passed cannot be interpreted as columns or expressions. Will warn when Index is created with no columns at all.

mysql

- [mysql] [feature] Added support for MySQL index and primary key constraint types (i.e. USING) via new mysql_using parameter to Index and PrimaryKeyConstraint, courtesy Diana Clarke.
- [mysql] [feature] Added support for the "isolation_level" parameter to all MySQL dialects. Thanks to mu_mind for the patch here.

oracle

- [oracle] [feature] Added a new create_engine() flag coerce_to_decimal=False; disables the precision numeric handling which can add lots of overhead by converting all numeric values to Decimal.
- [oracle] [bug] Added missing compilation support for LONG.
- [oracle] [bug] Added 'LEVEL' to the list of reserved words for Oracle.

0.7.5

Released: Sat Jan 28 2012

orm

- [orm] [feature] Added "class_registry" argument to declarative_base(). Allows two or more declarative bases to share the same registry of class names.
- [orm] [feature] query.filter() accepts multiple criteria which will join via AND, i.e.
query.filter(x==y, z>q, …).

- [orm] [feature] New declarative reflection example added; illustrates how best to mix table reflection with declarative, as well as using some new features.
- [orm] [bug] Fixed regression from 0.7.4 whereby using an already instrumented column from a superclass as "polymorphic_on" failed to resolve the underlying Column.
- [orm] [bug] Raise an exception if xyzload_all() is used inappropriately with two non-connected relationships.
- [orm] [bug] Fixed bug whereby event.listen(SomeClass) forced an entirely unnecessary compile of the mapper, making events very hard to set up at module import time (nobody noticed this ??).
- [orm] [bug] Fixed bug whereby hybrid_property didn't work as a kw arg in any(), has().
- [orm] [bug] Ensure pickleability of all ORM exceptions for multiprocessing compatibility.
- [orm] [bug] Implemented standard "can't set attribute" / "can't delete attribute" AttributeError when setattr/delattr is used on a hybrid that doesn't define fset or fdel.
- [orm] [bug] Fixed bug where an unpickled object didn't have enough of its state set up to work correctly within the unpickle() event established by the mutable object extension, if the object needed ORM attribute access within __eq__() or similar.
- [orm] [bug] Fixed bug where the "merge" cascade could mis-interpret an unloaded attribute, if the load_on_pending flag were used with relationship(). Thanks to Kent Bower for tests.
- [orm] Fixed regression from 0.6 whereby if the "load_on_pending" relationship() flag were used where a non-"get()" lazy clause needed to be emitted on a pending object, it would fail to load.

engine

- [engine] [bug] Added __reduce__ to StatementError, DBAPIError, and column errors so that exceptions are pickleable, as when using multiprocessing. However, not all DBAPIs support this yet, such as psycopg2.
- [engine] [bug] Improved error messages when a non-string or invalid string is passed to any of the date/time processors used by SQLite, including C and Python versions.
- [engine] [bug] Fixed bug whereby a table-bound Column object named "<a>_<b>" which matched a column labeled as "<tablename>_<colname>" could match inappropriately when targeting in a result set row.
- [engine] [bug] Fixed bug in "mock" strategy whereby the correct DDL visit method wasn't called, resulting in "CREATE/DROP SEQUENCE" statements being duplicated.

sql

- [sql] [feature] Added "false()" and "true()" expression constructs to the sqlalchemy.sql namespace, though not part of __all__ as of yet.
- [sql] [feature] Dialect-specific compilers now raise CompileError for all type/statement compilation issues, instead of InvalidRequestError or ArgumentError. The DDL for CREATE TABLE will re-raise CompileError to include table/column information for the problematic column.
- [sql] [bug] Improved the API for add_column() such that if the same column is added to its own table, an error is not raised and the constraints don't get doubled up. Also helps with some reflection/declarative patterns.
- [sql] [bug] Fixed issue where the "required" exception would not be raised for bindparam() with required=True, if the statement were given no parameters at all.

sqlite

- [sqlite] [bug] The "name" of an FK constraint in SQLite is reflected as "None", not "0" or another integer value.
SQLite does not appear to support constraint naming in any case.

- [sqlite] [bug] sql.false() and sql.true() compile to 0 and 1, respectively, in sqlite.
- [sqlite] [bug] Removed an erroneous "raise" in the SQLite dialect when getting table names and view names, where logic is in place to fall back to an older version of SQLite that doesn't have the "sqlite_temp_master" table.

misc

- [feature] [examples] Simplified the versioning example a bit to use a declarative mixin as well as an event listener, instead of a metaclass + SessionExtension.
- [bug] [core] Changed LRUCache, used by the mapper to cache INSERT/UPDATE/DELETE statements, to use an incrementing counter instead of a timestamp to track entries, for greater reliability versus using time.time(), which can cause test failures on some platforms.
- [bug] [py3k] Fixed inappropriate usage of the util.py3k flag and renamed it to util.py3k_warning, since this flag is intended to detect the -3 flag series of import restrictions only.
- [bug] [examples] Fixed large_collection.py to close the session before dropping tables.

0.7.4

Released: Fri Dec 09 2011

orm

- [orm] [feature] IdentitySet supports the - operator the same as difference(); handy when dealing with Session.dirty etc.
- [orm] [feature] Added new value for Column autoincrement called "ignore_fk"; can be used to force autoincrement on a column that's still part of a ForeignKeyConstraint. A new example in the relationship docs illustrates its use.
- [orm] [bug] Fixed backref behavior when "popping" the value off of a many-to-one in response to a removal from a stale one-to-many - the operation is skipped, since the many-to-one has since been updated.
- [orm] [bug] Fixed inappropriate evaluation of a user-mapped object in a boolean context within query.get(). Also in 0.6.9.
- [orm] [bug] Added missing comma to PASSIVE_RETURN_NEVER_SET symbol.
- [orm] [bug] Cls.column.collate("some collation") now works. Also in 0.6.9.
- [orm] [bug] The value of a composite attribute is now expired after an insert or update operation, instead of regenerated in place. This ensures that a column value which is expired within a flush will be loaded first, before the composite is regenerated using that value.
- [orm] [bug] Fixed bug whereby a subclass of a subclass using concrete inheritance in conjunction with the new ConcreteBase or AbstractConcreteBase would fail to apply the subclasses deeper than one level to the "polymorphic loader" of each base.
- [orm] [bug] Fixed bug whereby a subclass of a subclass using the new AbstractConcreteBase would fail to acquire the correct "base_mapper" attribute when the "base" mapper was generated, thereby causing failures later on.
- [orm] [bug] Fixed bug whereby column_property() created against an ORM-level column could be treated as a distinct entity when producing certain kinds of joined-inh joins.
- [orm] [bug] Fixed the error formatting raised when a tuple is inadvertently passed to session.query(). Also in 0.6.9.
- [orm] [bug] __table_args__ can now be passed as an empty tuple as well as an empty dict. Thanks to Fayaz Yusuf Khan for the patch.
- [orm] [bug] Updated warning message when setting delete-orphan without delete to no longer refer to 0.6, as we never got around to upgrading this to an exception.
Ideally this might be better as an exception, but it's not critical either way.

- [orm] [bug] Fixed bug in get_history() when referring to a composite attribute that has no value; added coverage for get_history() regarding composites, which is otherwise just a userland function.

sql

- [sql] [feature] Added accessor to types called "python_type"; returns the rudimentary Python type object for a particular TypeEngine instance, if known, else raises NotImplementedError.

schema

- [schema] [feature] Added new support for remote "schemas".
- [schema] [bug] Fixed bug whereby TypeDecorator would return a stale value for _type_affinity, when using a TypeDecorator that "switches" types, like the CHAR/UUID type.
- [schema] [bug] Fixed bug whereby the "order_by='foreign_key'" option to Inspector.get_table_names wasn't implementing the sort properly; replaced with the existing sort algorithm.
- [schema] [bug] The "name" of a column-level CHECK constraint, if present, is now rendered in the CREATE TABLE statement using "CONSTRAINT <name> CHECK <expression>".
- [schema] MetaData() accepts "schema" and "quote_schema" arguments, which will be applied to the same-named arguments of a Table or Sequence which leaves these at their default of None.
- [schema] Sequence accepts a "quote_schema" argument.
- [schema] tometadata() for Table will use the "schema" of the incoming MetaData for the new Table if the schema argument is explicitly "None".
- [schema] Added CreateSchema and DropSchema DDL constructs - these accept just the string name of a schema and a "quote" flag.
- [schema] When using a default "schema" with MetaData, ForeignKey will also assume the "default" schema when locating a remote table. This allows the "schema" argument on MetaData to be applied to any set of Table objects that otherwise don't have a "schema".
- [schema] A "has_schema" method has been implemented on dialect, but only works on PostgreSQL so far. Courtesy Manlio Perillo.

postgresql

- [postgresql] [feature] Added create_type constructor argument to pg.ENUM. When False, no CREATE/DROP or checking for the type will be performed as part of a table create/drop event; only the create()/drop() methods called directly will do this. Helps with Alembic "offline" scripts.

mssql

- [mssql] [feature] Lifted the restriction on SAVEPOINT for SQL Server. All tests pass using it; it's not known if there are deeper issues, however.
- [mssql] [bug] Repaired the with_hint() feature which wasn't implemented correctly on MSSQL - usually used for the "WITH (NOLOCK)" hint (which you shouldn't be using anyway! use snapshot isolation instead :) ).
- [mssql] [bug] Use new pyodbc version detection for the _need_decimal_fix option.
- [mssql] [bug] Don't cast "table name" as NVARCHAR on SQL Server 2000.
Still mostly in the dark what incantations are needed to make PyODBC work fully with FreeTDS 0.91 here, however.¶ [mssql] [bug] Decode incoming values when retrieving list of index names and the names of columns within those indexes.¶ misc¶ [feature] [ext].¶ [bug] [pyodbc] pyodbc-based dialects now parse the pyodbc accurately as far as observed pyodbc strings, including such gems as “py3-3.0.1-beta4”¶ [bug] [ext] the @compiles decorator raises an informative error message when no “default” compilation handler is present, rather than KeyError.¶ [bug] [examples] Fixed bug in history_meta.py example where the “unique” flag was not removed from a single-table-inheritance subclass which generates columns to put up onto the base.¶ 0.7.3¶Released: Sun Oct 16 2011 general¶ [general] Adjusted the “importlater” mechanism, which is used internally to resolve import cycles, such that the usage of __import__ is completed when the import of sqlalchemy or sqlalchemy.orm is done, thereby avoiding any usage of __import__ after the application starts new threads, fixes. Also in 0.6.9.¶ orm¶ [orm].¶ [orm].¶ [orm].¶ [orm] Added new flag expire_on_flush=False to column_property(), marks those properties that would otherwise be considered to be “readonly”, i.e. derived from SQL expressions, to retain their value after a flush has occurred, including if the parent object itself was involved in an update.¶ [orm].¶ [orm].¶ [orm] Fixed a variety of synonym()-related regressions from 0.6: - making a synonym against a synonym now works. - synonyms made against a relationship() can be passed to query.join(), options sent to query.options(), passed by name to query.with_parent(). [orm] Fixed bug whereby mapper.order_by attribute would be ignored in the “inner” query within a subquery eager load. . 
Also in 0.6.9.
- [orm] Identity map .discard() uses dict.pop(key, None) internally instead of "del" to avoid a KeyError/warning during a non-determinate gc teardown.
- [orm] Fixed regression in the new composite rewrite where the deferred=True option failed due to a missing import.
- [orm] Reinstated the "comparator_factory" argument to composite(), removed when 0.7 was released.
- [orm] Fixed bug in query.join() which would occur in a complex multiple-overlapping path scenario, where the same table could be joined to twice. Thanks much to Dave Vitek for the excellent fix here.
- [orm] Query will convert an OFFSET of zero when slicing into None, so that needless OFFSET clauses are not invoked.
- [orm] Repaired edge case where mapper would fail to fully update internal state when a relationship on a new mapper would establish a backref on the first mapper.
- [orm] Fixed bug whereby if __eq__() was redefined, a relationship many-to-one lazyload would hit the __eq__() and fail. Does not apply to 0.6.9.
- [orm] Calling class_mapper() and passing in an object that is not a "type" (i.e. a class that could potentially be mapped) now raises an informative ArgumentError, rather than UnmappedClassError.
- [orm] New event hook, MapperEvents.after_configured(). Called after a configure() step has completed and mappers were in fact affected. Theoretically this event is called once per application, unless new mappings are constructed after existing ones have been used already.
- [orm] When an open Session is garbage collected, the objects within it which remain are considered detached again when they are add()-ed to a new Session. This is accomplished by an extra check that the previous "session_key" doesn't actually exist among the pool of Sessions.
- [orm] Declarative will warn when a subclass' base uses @declared_attr for a regular column - this attribute does not propagate to subclasses.
- [orm] The integer "id" used to link a mapped instance with its owning Session is now generated by a sequence generation function rather than id(Session), to eliminate the possibility of recycled id() values causing an incorrect result; no need to check that the object is actually in the session.
- [orm] Behavioral improvement: empty conjunctions such as and_() and or_() will be flattened in the context of an enclosing conjunction, i.e. and_(x, or_()) will produce 'X' and not 'X AND ()'.
- [orm] Fixed bug whereby the with_only_columns() method of Select would fail if a selectable were passed. Also in 0.6.9.

engine

- [engine] The recreate() method in all pool classes uses self.__class__ to get at the type of pool to produce, in the case of subclassing. Note there's no usual need to subclass pools.
- [engine] Improvement to multi-param statement logging: long lists of bound parameter sets will be compressed with an informative indicator of the compression taking place. Exception messages use the same improved formatting.
- [engine] Added optional "sa_pool_key" argument to pool.manage(dbapi).connect() so that serialization of args is not necessary.
- [engine] The entry point resolution supported by create_engine() now supports resolution of individual DBAPI drivers on top of a built-in or entry point-resolved dialect, using the standard '+' notation - it's converted to a '.' before being resolved as an entry point.
- [engine] Added an exception catch + warning for the "return unicode detection" step within connect; allows databases that crash on NVARCHAR to continue initializing, assuming no NVARCHAR type is implemented.

schema

- [schema] Modified Column.copy() to use _constructor(), which defaults to self.__class__, in order to create the new object.
This allows easier support of subclassing Column.
- [schema] Added a slightly nicer __repr__() to SchemaItem classes. Note the repr here can't fully support the "repr is the constructor" idea, since schema items can be very deeply nested/cyclical, have late initialization of some things, etc.

postgresql

- [postgresql] Added "postgresql_using" argument to Index(); produces a USING clause to specify the index implementation for PG. Thanks to Ryan P. Kelly for the patch.
- [postgresql] Added client_encoding parameter to create_engine() when the postgresql+psycopg2 dialect is used; calls the psycopg2 set_client_encoding() method with the value upon connect.
- [postgresql] Fixed bug whereby the same modified index behavior in PG 9 affected primary key reflection on a renamed column. Also in 0.6.9.
- [postgresql] Reflection functions for Table, Sequence are no longer case insensitive. Names can differ only in case and will be correctly distinguished.
- [postgresql] Use an atomic counter as the "random number" source for server side cursor names; conflicts have been reported in rare cases.

mysql

sqlite

mssql

- [mssql] "0" is accepted as an argument for limit(), which will produce "TOP 0".

oracle

- [oracle] Fixed ReturningResultProxy for the zxjdbc dialect. Regression from 0.6.

misc

- [types] Extra keyword arguments to the base Float type beyond "precision" and "asdecimal" are ignored; added a deprecation warning here and additional docs.
- [ext] SQLSoup will not be included in version 0.8 of SQLAlchemy; while useful, we would like to keep SQLAlchemy itself focused on one ORM usage paradigm. SQLSoup will hopefully soon be superseded by a third party project.
- [ext] Added local_attr, remote_attr, attr accessors to AssociationProxy, providing quick access to the proxied attributes at the class level.
- [ext] Changed the update() method on the association proxy dictionary to use a duck typing approach, i.e.
checks for "keys", to discern between update({}) and update((a, b)). Previously, passing a dictionary that had tuples as keys would be misinterpreted as a sequence.
- [examples] Adjusted the dictlike-polymorphic.py example to apply the CAST such that it works on PG and other databases. Also in 0.6.9.

0.7.2
Released: Sun Jul 31 2011

orm

- [orm] A rework of "replacement traversal" within the ORM, as it alters selectables to be against aliases of things (i.e. clause adaptation), includes a fix for multiply-nested any()/has() constructs against a joined table structure.
- [orm] Fixed bug where query.join() + aliased=True from a joined-inh structure to itself on relationship() with join condition on the child table would convert the lead entity into the joined one inappropriately. Also in 0.6.9.
- [orm] Load of a deferred() attribute on an object where the row can't be located raises ObjectDeletedError instead of failing later on; improved the message in ObjectDeletedError to include other conditions besides a simple "delete".
- [orm] Fixed regression from 0.6 where a get history operation on some relationship() based attributes would fail when a lazyload would emit; this could trigger within a flush() under certain conditions. Thanks to the user who submitted the great test for this.
- [orm] Fixed bug apparent only in Python 3 whereby sorting of persistent + pending objects during flush would produce an illegal comparison, if the persistent object primary key is not a single integer. Also in 0.6.9.
- [orm] Fixed bug whereby the source clause used by query.join() would be inconsistent if against a column expression that combined multiple entities together. Also in 0.6.9.
- [orm] Added public attribute ".validators" to Mapper, an immutable dictionary view of all attributes that have been decorated with the @validates decorator.
Courtesy Stefano Fontanelli.
- [orm] Fixed subtle bug that caused SQL to blow up if: column_property() against subquery + joinedload + LIMIT + order by the column property() occurred. Also in 0.6.9.
- [orm] The join condition produced by with_parent, as well as when using a "dynamic" relationship against a parent, will generate unique bindparams, rather than incorrectly repeating the same bindparam. Also in 0.6.9.
- [orm] Added the same "columns-only" check to mapper.polymorphic_on as used when receiving user arguments to relationship.order_by, foreign_keys, remote_side, etc.
- [orm] Fixed bug whereby comparison of a column expression to a Query() would not call as_scalar() on the underlying SELECT statement to produce a scalar subquery, in the way that occurs if you called it on Query().subquery().
- [orm] Fixed declarative bug where a class inheriting from a superclass of the same name would fail due to an unnecessary lookup of the name in the _decl_class_registry.
- [orm] Repaired the "no statement condition" assertion in Query which would attempt to raise if a generative method were called after from_statement() were called. Also in 0.6.9.

engine

- [engine] The context manager provided by Connection.begin() will issue rollback() if the commit() fails, not just if an exception occurs.
- [engine] Use urllib.parse_qsl() in Python 2.6 and above; no deprecation warning about cgi.parse_qsl().
- [engine] Added mixin class sqlalchemy.ext.DontWrapMixin. User-defined exceptions of this type are never wrapped in StatementException when they occur in the context of a statement execution.
- [engine] StatementException wrapping will display the original exception class in the message.
- [engine] Failures on connect which raise dbapi.Error will forward the error to dialect.is_disconnect() and set the "connection_invalidated" flag if the dialect knows this to be a potentially "retryable" condition.
Only Oracle ORA-01033 implemented for now.

sql

schema

- [schema] New feature: with_variant() method on all types. Produces an instance of Variant(), a special TypeDecorator which will select the usage of a different type based on the dialect in use.
- [schema] Added an informative error message when ForeignKeyConstraint refers to a column name in the parent that is not found. Also in 0.6.9.
- [schema] Fixed bug whereby adaptation of the old append_ddl_listener() function was passing unexpected **kw through to the Table event. Table gets no kws; the MetaData event in 0.6 would get "tables=somecollection"; this behavior is preserved.
- [schema] Fixed bug where "autoincrement" detection on Table would fail if the type had no "affinity" value; in particular this would occur when using the UUID example on the site that uses TypeEngine as the "impl".
- [schema] Added an improved repr() to TypeEngine objects that will only display constructor args which are positional or kwargs that deviate from the default.

postgresql

mysql

sqlite

mssql

oracle

- [oracle] Added ORA-00028 to disconnect codes; use cx_oracle _Error.code to get at the code. Also in 0.6.9.
- [oracle] Added ORA-01033 to disconnect codes, which can be caught during a connection event.
- [oracle] Repaired the oracle.RAW type, which did not generate the correct DDL. Also in 0.6.9.
- [oracle] Added CURRENT to the reserved word list. Also in 0.6.9.
- [oracle] Fixed bug in the mutable extension whereby if the same type were used twice in one mapping, the attributes beyond the first would not get instrumented.
- [oracle] Fixed bug in the mutable extension whereby if None or a non-corresponding type were set, an error would be raised.
None is now accepted, which assigns None to all attributes; illegal values raise ValueError.

misc

- [examples] Repaired the examples/versioning test runner to not rely upon SQLAlchemy test libs; nosetests must be run from within examples/versioning to get around setup.cfg breaking it.
- [examples] Tweak to examples/versioning to pick the correct foreign key in a multi-level inheritance situation.
- [examples] Fixed the attribute shard example to check for the bind param callable correctly in 0.7 style.

0.7.1
Released: Sun Jun 05 2011

general

orm

- [orm] "delete-orphan" cascade is now allowed on self-referential relationships - this since SQLA 0.7 no longer enforces "parent with no child" at the ORM level; this check is left up to foreign key nullability.
- [orm] Repaired the new "mutable" extension to propagate events to subclasses correctly; don't create multiple event listeners for subclasses either.
- [orm] Modified the text of the message which occurs when the "identity" key isn't detected on flush, to include the common cause that the Column isn't set up to detect auto-increment correctly. Also in 0.6.8.
- [orm] Fixed bug where the transaction-level "deleted" collection wouldn't be cleared of expunged states, raising an error if they later became transient. Also in 0.6.8.

engine

- [engine] Deprecated schema/SQL-oriented methods on Connection/Engine that were never well known and are redundant: reflecttable(), create(), drop(), text(), engine.func.
- [engine] Adjusted the __contains__() method of a RowProxy result row such that no exception throw is generated internally; NoSuchColumnError() also will generate its message regardless of whether or not the column construct can be coerced to a string. Also in 0.6.8.

sql

- [sql] Fixed bug whereby metadata.reflect(bind) would close a Connection passed as a bind argument. Regression from 0.6.
- [sql] Streamlined the process by which a Select determines what's in its '.c' collection.
Behaves identically, except that a raw ClauseList() passed to select([]) (which is not a documented case anyway) will now be expanded into its individual column elements instead of being ignored.

postgresql

mysql

- [mysql] Unit tests pass 100% on MySQL installed on Windows.
- [mysql] supports_sane_rowcount will be set to False if using MySQLdb and the DBAPI doesn't provide the constants.CLIENT module.

0.7.0
Released: Fri May 20 2011

orm

- [orm] query.count() emits "count(*)" instead of "count(1)".
- [orm] It is an error to call query.get() when the given entity is not a single, full class entity or mapper (i.e. a column). This is a deprecation warning in 0.6.8.
- [orm] Fixed a potential KeyError which under some circumstances could occur with the identity map.
- [orm] Added Query.with_session() method; switches Query to use a different session.
- [orm] Horizontal shard query should use execution options per connection.
- [orm] Fixed the error message emitted for "can't execute syncrule for destination column 'q'; mapper 'X' does not map this column" to reference the correct mapper. Also in 0.6.8.
- [orm] polymorphic_union() gets a "cast_nulls" option; disables the usage of CAST when it renders the labeled NULL columns.
- [orm] polymorphic_union() renders the columns in their original table order, as according to the first table/selectable in the list of polymorphic unions in which they appear (which is itself an unordered mapping unless you pass an OrderedDict).
- [orm] Fixed bug whereby a mapper mapped to an anonymous alias would fail if logging were used, due to an unescaped % sign in the alias name. Also in 0.6.8.

sql

- [sql] Fixed bug whereby nesting a label of a select() with another label in it would produce incorrect exported columns. Among other things this would break an ORM column_property() mapping against another column_property().
Also in 0.6.8.
- [sql] Some improvements to error handling inside of the execute procedure, to ensure auto-close connections are really closed when very unusual DBAPI errors occur.
- [sql] metadata.reflect() and reflection.Inspector() had some reliance on GC to close connections which were internally procured; fixed this.
- [sql] Added explicit check for when Column .name is assigned a blank string.
- [sql] Fixed bug whereby if FetchedValue was passed to column server_onupdate, it would not have its parent "column" assigned; added test coverage for all column default assignment patterns. Also in 0.6.8.

postgresql

mssql

misc

This section documents those changes from 0.7b4 to 0.7.0.

- [documentation] Removed the usage of the "collections.MutableMapping" abc from the ext.mutable docs, as it was being used incorrectly and makes the example more difficult to understand in any case.
- [examples] Removed the ancient "polymorphic association" examples and replaced them with an updated set of examples that use declarative mixins, "generic_associations". Each presents an alternative table layout.
- [ext] Fixed bugs in the sqlalchemy.ext.mutable extension where None was not appropriately handled, and replacement events were not appropriately handled.

0.7.0b4
Released: Sun Apr 17 2011

general

- [general] Changes to the format of CHANGES, this file. The format changes have been applied to the 0.7 releases.
- [general] The "-declarative" changes will now be listed directly under the "-orm" section, as these are closely related.
- [general] The 0.5 series changes have been moved to the file CHANGES_PRE_06, which replaces CHANGES_PRE_05.

orm

- [orm] Some fixes to "evaluate" and "fetch" evaluation when query.update(), query.delete() are called.
The retrieval of records is done after autoflush in all cases, and before the update/delete is emitted, guarding against unflushed data present as well as expired objects failing during the evaluation.
- [orm] Reworded the exception raised when a flush is attempted of a subclass that is not polymorphic against the supertype.
- [orm] Still more wording adjustments when a query option can't find the target entity. Explain that the path must be from one of the root entities.
- [orm] Some fixes to the state handling regarding backrefs, typically when autoflush=False, where the back-referenced collection wouldn't properly handle add/removes with no net change. Thanks to Richard Murri for the test case + patch. (also in 0.6.7)
- [orm] Added checks inside the UOW to detect the unusual condition of being asked to UPDATE or DELETE on a primary key value that contains NULL in it.
- [orm] A "having" clause would be copied from the inside to the outside query if from_self() were used; in particular this would break a 0.7 style count() query. (also in 0.6.7)
- [orm] The Query.execution_options() method now passes those options to the Connection rather than the SELECT statement, so that all available options including isolation level and compiled cache may be used.

engine

sql

- [sql] The "compiled_cache" execution option now raises an error when passed to a SELECT statement rather than a Connection. Previously it was being ignored entirely. We may look into having this option work on a per-statement level at some point.
- [sql] Restored the "catchall" constructor on the base TypeEngine class, with a deprecation warning. This so that code which does something like Integer(11) still succeeds.
- [sql] Fixed regression whereby MetaData() coming back from unpickling did not keep track of new things it keeps track of now, i.e.
collection of Sequence objects, list of schema names.
- [sql] The limit/offset keywords to select(), as well as the value passed to select.limit()/offset(), will be coerced to integer. (also in 0.6.7)
- [sql] Fixed bug where "from" clause gathering from an over() clause would be an itertools.chain() and not a list, causing a "can only concatenate list" TypeError when combined with other clauses.
- [sql] Fixed incorrect usage of "," in an over() clause being placed between the "partition" and "order by" clauses.
- [sql] Before/after attach events for PrimaryKeyConstraint now function; tests added for before/after events on all constraint types.
- [sql] Added explicit true()/false() constructs to the expression lib - coercion rules will intercept "False"/"True" into these constructs. In 0.6, the constructs were typically converted straight to string, which was no longer accepted in 0.7.

schema

postgresql

sqlite

oracle

- [oracle] Using column names that would require quotes for the column itself or for a name-generated bind parameter, such as names with special characters, underscores, non-ascii characters, now properly translate bind parameter keys when talking to cx_oracle. (also in 0.6.7)
- [oracle] The Oracle dialect adds a use_binds_for_limits=False create_engine() flag, which will render the LIMIT/OFFSET values inline instead of as binds, reported to modify the execution plan used by Oracle. (also in 0.6.7)

misc

- [types] REAL has been added to the core types. Supported by PostgreSQL, SQL Server, MySQL, SQLite.
Note that the SQL Server and MySQL versions, which add extra arguments, are also still available from those dialects.
- [types] Added @event.listens_for() decorator; given target + event name, applies the decorated function as a listener.
- [pool] AssertionPool now stores the traceback indicating where the currently checked out connection was acquired; this traceback is reported within the assertion raised upon a second concurrent checkout; courtesy Gunnlaugur Briem.
- [pool] The "pool.manage" feature doesn't use pickle anymore to hash the arguments for each pool.
- [documentation] Documented SQLite DATE/TIME/DATETIME types. (also in 0.6.7)
- [documentation] Fixed mutable extension docs to show the correct type-association methods.

0.7.0b3
Released: Sun Mar 20 2011

orm

- [orm] Improvements to the error messages emitted when querying against column-only entities in conjunction with (typically incorrectly) using loader options, where the parent entity is not fully present.
- [orm] Fixed bug in query.options() whereby a path applied to a lazyload using string keys could overlap a same-named attribute on the wrong entity. Note 0.6.7 has a more conservative fix to this.

engine

sql

- [sql] Added a fully descriptive error message for the case where Column is subclassed and _make_proxy() fails to make a copy due to a TypeError on the constructor. The method _constructor should be implemented in this case.
- [sql] Added new event "column_reflect" for Table objects. Receives the info dictionary about a Column before the object is generated within reflection, and allows modification to the dictionary for control over most aspects of the resulting Column, including key, name, type, and info dictionary.
- [sql] Added new generic function "next_value()"; accepts a Sequence object as its argument and renders the appropriate "next value" generation string on the target platform, if supported.
Also provides a ".next_value()" method on Sequence itself.
- [sql] func.next_value() or another SQL expression can be embedded directly into an insert() construct, and if implicit or explicit "returning" is used in conjunction with a primary key column, the newly generated value will be present in result.inserted_primary_key.
- [sql] Added accessors to ResultProxy: "returns_rows", "is_insert". (also in 0.6.7)

postgresql

mssql

firebird

misc

- [declarative] Arguments in __mapper_args__ that aren't "hashable" aren't mistaken for always-hashable, possibly-column arguments. (also in 0.6.7)
- [informix] Added RESERVED_WORDS to the informix dialect. (also in 0.6.7)
- [ext] The horizontal_shard ShardedSession class accepts the common Session argument "query_cls" as a constructor argument, to enable further subclassing of ShardedQuery. (also in 0.6.7)
- [examples] Updated the association, association proxy examples to use declarative; added a new example dict_of_sets_with_default.py, a "pushing the envelope" example of association proxy.
- [examples] The Beaker caching example allows a "query_cls" argument to the query_callable() function. (also in 0.6.7)

0.7.0b2
Released: Sat Feb 19 2011

orm

sql

- [sql] Renamed the EngineEvents event class to ConnectionEvents. As these classes are never accessed directly by end-user code, this strictly is a documentation change for end users. Also simplified how events get linked to engines and connections internally.
- [sql] The Sequence() construct, when passed a MetaData() object via its 'metadata' argument, will be included in CREATE/DROP statements within metadata.create_all() and metadata.drop_all(), including "checkfirst" logic.
- [sql] The Column.references() method now returns True if it has a foreign key referencing the given column exactly, not just its parent table.

postgresql

misc

- [declarative] Fixed regression whereby composite() with Column objects placed inline would fail to initialize.
The Column objects can now be inline with the composite() or external and pulled in via name or object ref.
- [declarative] Fixed error message referencing the old @classproperty name to reference @declared_attr. (also in 0.6.7)
- [declarative] The dictionary at the end of the __table_args__ tuple is now optional.
- [ext] Association proxy now has correct behavior for any(), has(), and contains() when proxying a many-to-one scalar attribute to a one-to-many collection (i.e. the reverse of the 'typical' association proxy use case).
- [examples] The Beaker example now takes into account 'limit' and 'offset', and bind params within embedded FROM clauses (like when you use union() or from_self()), when generating a cache key.

0.7.0b1
Released: Sat Feb 12 2011

general

- [general] New event system, supersedes all extensions, listeners, etc.
- [general] Logging enhancements.
- [general] Setup no longer installs a Nose plugin.
- [general] The "sqlalchemy.exceptions" alias in sys.modules has been removed. Base SQLA exceptions are available via "from sqlalchemy import exc".
The "exceptions" alias for "exc" remains in "sqlalchemy" for now; it's just not patched into sys.modules.

orm

- [orm] More succinct form of query.join(target, onclause).
- [orm] Hybrid Attributes, implements/supersedes synonym().
- [orm] Rewrite of composites.
- [orm] Mutation Event Extension, supersedes "mutable=True".
- [orm] PickleType and ARRAY mutability turned off by default.
- [orm] Simplified polymorphic_on assignment.
- [orm] Flushing of Orphans that have no parent is allowed.
- [orm] Warnings generated when collection members, scalar referents not part of the flush.
- [orm] Non-Table-derived constructs can be mapped.
- [orm] Tuple label names in Query improved.
- [orm] Mapped column attributes reference the most specific column first.
- [orm] Mapping to joins with two or more same-named columns requires explicit declaration.
- [orm] Mapper requires that polymorphic_on column be present in the mapped selectable.
- [orm] compile_mappers() renamed configure_mappers(); simplified configuration internals.
- [orm] The aliased() function, if passed a SQL FromClause element (i.e. not a mapped class), will return element.alias() instead of raising an error on AliasedClass.
- [orm] Session.merge() will check the version id of the incoming state against that of the database, assuming the mapping uses version ids and the incoming state has a version_id assigned, and raise StaleDataError if they don't match.
- [orm] Session.connection(), Session.execute() accept 'bind', to allow execute/connection operations to participate in the open transaction of an engine explicitly.
- [orm] Query.join(), Query.outerjoin(), eagerload(), eagerload_all(), and others no longer allow lists of attributes as arguments (i.e.
option([x, y, z]) form, deprecated since 0.5).
- [orm] ScopedSession.mapper is removed (deprecated since 0.5).
- [orm] Horizontal shard query places 'shard_id' in context.attributes, where it's accessible by the "load()" event.
- [orm] A single contains_eager() call across multiple entities will indicate all collections along that path should load, instead of requiring distinct contains_eager() calls for each endpoint (which was never correctly documented).
- [orm] The "name" field used in orm.aliased() now renders in the resulting SQL statement.
- [orm] Session weak_instance_dict=False is deprecated.
- [orm] An exception is raised in the unusual case that an append or similar event on a collection occurs after the parent object has been dereferenced, which prevents the parent from being marked as "dirty" in the session. Was a warning in 0.6.6.
- [orm] Query.distinct() now accepts column expressions as *args, interpreted by the PostgreSQL dialect as DISTINCT ON (<expr>).
- [orm] The value of "passive" as passed to attributes.get_history() should be one of the constants defined in the attributes package. Sending True or False is deprecated.
- [orm] Added a name argument to Query.subquery(), to allow a fixed name to be assigned to the alias object. (also in 0.6.7)
- [orm] A warning is emitted when a joined-table inheriting mapper has no primary keys on the locally mapped table (but has pks on the superclass table). (also in 0.6.7)
- [orm] Fixed bug where a column with a SQL or server side default that was excluded from a mapping with include_properties or exclude_properties would result in UnmappedColumnError. (also in 0.6.7)

sql

- [sql] Added over() function, method to FunctionElement classes; produces the _Over() construct which in turn generates "window functions", i.e.
"<window function> OVER (PARTITION BY <partition by>, ORDER BY <order by>)".
- [sql] LIMIT/OFFSET clauses now use bind parameters.
- [sql] select.distinct() now accepts column expressions as *args, interpreted by the PostgreSQL dialect as DISTINCT ON (<expr>). Note this was already available via passing a list to the distinct keyword argument of select().
- [sql] select.prefix_with() accepts multiple expressions (i.e. *expr); the 'prefix' keyword argument to select() accepts a list or tuple.
- [sql] Passing a string to the distinct keyword argument of select() for the purpose of emitting special MySQL keywords (DISTINCTROW etc.) is deprecated - use prefix_with() for this.
- [sql] TypeDecorator works with primary key columns.
- [sql] DDL() constructs now escape percent signs.
- [sql] Table.c / MetaData.tables refined a bit; don't allow direct mutation.
- [sql] Callables passed to bindparam() don't get evaluated.
- [sql] types.type_map is now private, types._type_map.
- [sql] Non-public Pool methods underscored.
- [sql] Added NULLS FIRST and NULLS LAST support.
It's implemented as an extension to the asc() and desc() operators, called nullsfirst() and nullslast().
- [sql] The Index() construct can be created inline with a Table definition, using strings as column names, as an alternative to the creation of the index outside of the Table.
- [sql] execution_options() on Connection accepts an "isolation_level" argument; sets the transaction isolation level for that connection only, until returned to the connection pool, for those backends which support it (SQLite, PostgreSQL).
- [sql] A TypeDecorator of Integer can be used with a primary key column, and the "autoincrement" feature of various dialects as well as the "sqlite_autoincrement" flag will honor the underlying database type as being Integer-based.
- [sql] Result-row processors are applied to pre-executed SQL defaults, as well as cursor.lastrowid, when determining the contents of result.inserted_primary_key.
- [sql] Bind parameters present in the "columns clause" of a select are now auto-labeled like other "anonymous" clauses, which among other things allows their "type" to be meaningful when the row is fetched, as in result row processors.
- [sql] TypeDecorator is present in the "sqlalchemy" import space.
- [sql] Column.copy(), as used in table.tometadata(), copies the 'doc' attribute. (also in 0.6.7)
- [sql] Added some defs to the resultproxy.c extension so that the extension compiles and runs on Python 2.4. (also in 0.6.7)
- [sql] The compiler extension now supports overriding the default compilation of expression._BindParamClause, including that the auto-generated binds within the VALUES/SET clause of an insert()/update() statement will also use the new compilation rules. (also in 0.6.7)
- [sql] The SQLite dialect now uses NullPool for file-based databases.
- [sql] The path given as the location of a sqlite database is now normalized via os.path.abspath(), so that directory changes within the process don't affect the ultimate location of a relative file path.
(also in 0.6.7)
- [postgresql] Added an additional libpq message to the list of "disconnect" exceptions: "could not receive data from server". (also in 0.6.7)

mysql

mssql

firebird

misc

- [declarative] Added an explicit check for the case that the name 'metadata' is used for a column attribute on a declarative class. (also in 0.6.7)
https://docs.sqlalchemy.org/en/rel_1_1/changelog/changelog_07.html
#pragma is just kind of the standard way for a compiler to give an interface to do nonstandard things-- by definition, #pragma arguments are to be interpreted in whatever arbitrary way the compiler decides. Meaning the old GCC "feature" above was not an easter egg as some have claimed, but actual support for #pragma-- just an odd implementation, that's all. The problem with this is that it's perfectly acceptable for compilers to introduce pragma directives with the same keyname but different effects. Ariels has an excellent example of how this can get painful in "#pragma is useless". This is why even today GNU tries to avoid #pragma to the greatest extent possible, using language extensions such as are described in Ariels' node, and in some cases (though i can't seem to find the exact documentation) having the programmer #define specific strings in order to denote which gcc-specific compilation options they wish to use. When they do use #pragma for things, they request programmers preface their #pragma calls with "GCC" in order to prevent ugly namespace collisions and ugly ifdef kludges.

In CodeWarrior-style editors, which show a popup menu of the functions in the current file, #pragma mark label adds an item named label to that menu; selecting label jumps you to the #pragma. You can also say #pragma mark -, which adds a divider to the menu, meaning you can cleanly arrange your functions into logical groups within the menu. This directive has no effect in the primary source file (main()?); it only works on #included files.

C's #pragma exists so we can pass various nonstandard information to the C compiler. As alluded to above, it isn't necessarily well-liked by compiler writers. Why not? After all, there are lots of good reasons to want to communicate some nonstandard information to your C compiler. If this information is not necessary to compile the program, but only to reduce warnings or produce better code, why not use #pragma? Of course. It's meant for nonstandard features.
Only, most "nonstandard" features will still be supported by almost all compilers. And each will do it differently. Let's look at a typical example. Many C compilers support inlining of functions (at least within the same compilation unit). Some heuristics are used to determine inlinability, but you might want to override them. Surely we can use #pragma to do it! Well, almost. Depending on the compiler, the directive might be spelled

  #pragma inline global function
  #pragma inline function
  #pragma inline
  #pragma inline (function)

The way to deal with compiler dependencies like this (and they are unavoidable) is to use #ifdef to determine which compiler we're using. So we could say

  /* Define function, then ... */
  #ifdef __sgi
  #pragma inline global function
  #elif defined(__alpha)
  #pragma inline (function)
  #elif defined(__sun)
  #pragma inline function
  #endif

The code above is a maintenance nightmare. If we want to add support for the new ariel Si Si compiler, we have to add a #elif clause on every inlining. The solution to this is abstraction: create some higher level construct INLINE(function) and use that instead. The definition of the construct will use #ifdefs, but they'll all be in one spot, and easy to maintain and modify. Presumably, INLINE will be a macro, and we'll #define it appropriately, something like

  #ifdef __GNUC__
  #define INLINE(f) __inline__ f
  #else
  #define INLINE(f) f
  #endif

Excuse me, did someone say "macro"? Because you cannot standardly emit preprocessor directives from a macro expansion, and anything beginning with #, including #pragma, belongs to the preprocessor. (The gcc spelling above only works because __inline__ is a language extension, not a preprocessor directive.) So there is no maintainable way to use #pragmas in your code.
http://everything2.com/title/%2523pragma
I am trying to customize a workflow initiation form for the "Approval - SharePoint 2010" workflow. I wrote a jQuery function to hide some rows; the script is included in the master page. When the workflow form is loaded (with URL /15/IniWrkflIP.aspx) my function fires and it works as expected. However, when I click on the "Add a New Stage" button, it calls the default handler (return InDocUI.OnClick(this, event)) and the form triggers a postback (towards /_layouts/15/Postback.FormServer.aspx). Then all the rows come back again! How can I trigger my jQuery function again after every postback?

Hi,

Please try to add your code into a setInterval() method. Or please provide some screenshots and more detailed information for further research. The IniWrkflIP.aspx and Postback.FormServer.aspx pages are in C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\TEMPLATE\LAYOUTS, so you can also try to write the code into the pages.

Best Regards,
Dennis

Please remember to mark the replies as answers if they help. If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com
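One hedged alternative to polling, assuming the form's postback is an ASP.NET AJAX partial postback rather than a full page navigation: hook the PageRequestManager's endRequest event so the hiding logic re-runs after each async postback. The function names and the "tr.approval-stage-extra" selector below are made up for illustration; if the button really performs a full navigation, endRequest never fires and the setInterval() suggestion above still applies.

```javascript
// Sketch: re-apply row hiding after every async postback.
// Assumes the page loads jQuery as $ and ASP.NET AJAX as Sys;
// both accesses are guarded so the sketch is inert elsewhere.

function hideExtraRows() {
  if (typeof $ === "undefined") return 0;   // jQuery not loaded: do nothing
  var rows = $("tr.approval-stage-extra");  // hypothetical selector
  rows.hide();
  return rows.length;                       // how many rows were hidden
}

function wireUpRowHiding() {
  hideExtraRows(); // run once on the initial load of IniWrkflIP.aspx

  // Re-run after every partial postback, e.g. "Add a New Stage".
  if (typeof Sys !== "undefined" && Sys.WebForms) {
    Sys.WebForms.PageRequestManager.getInstance().add_endRequest(hideExtraRows);
  }
}

wireUpRowHiding();
```

The endRequest handler fires after the updated markup is back in the DOM, which is exactly when the newly re-rendered rows need hiding again.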
https://social.msdn.microsoft.com/Forums/en-US/53850339-b5a6-47a3-9c55-7a71484452e3/approval-workflow-customization?forum=sharepointdevelopment
problem with j2me crypto class

Hi all... the decryption code below works on the Java SDK, but when I try it on J2ME, the error that comes out says it cannot find symbol constructor SecretKeySpec and IvParameterSpec. Does anyone have a clue about this problem? please help... thanks a lot.

    import javax.crypto.*;

    byte[] raw = ciphertext.getBytes();
    Cipher ciphertype = Cipher.getInstance("AES/CBC");
    SecretKeySpec keySpec = new SecretKeySpec(raw, "AES");
    IvParameterSpec ivSpec = new IvParameterSpec("7g^1m)3h4%czl*v1".getBytes());
    ciphertype.init(Cipher.DECRYPT_MODE, keySpec, ivSpec);
    byte[] outText = ciphertype.doFinal(hexToBytes("f1936fe02eed7bb472311e58e6de9677"));

JavaME MIDP2.0 does not have the javax.crypto package by default. What tool are you using, WTK, or an IDE with some mobile plugin? Make sure you select to have the SATSA jars included, then also add

    import javax.crypto.spec.*;

to your imports. -Shawn
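For reference, here is a hedged desktop-Java (JCE) sketch of the same decrypt flow; it is not MIDP code, but the SATSA crypto packages mirror this API shape. Note the full "AES/CBC/PKCS5Padding" transformation string: the bare "AES/CBC" in the post is rejected by most JCE providers, which expect either just an algorithm or the full algorithm/mode/padding form. The class name and sample plaintext are made up for illustration; the IV here reuses the key purely for the demo.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class AesCbcRoundTrip {
    // Encrypt or decrypt data with AES in CBC mode, depending on mode.
    static byte[] crypt(int mode, byte[] key, byte[] iv, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        // 16 ASCII bytes = a valid AES-128 key.
        byte[] key = "7g^1m)3h4%czl*v1".getBytes(StandardCharsets.US_ASCII);
        byte[] iv  = key; // demo only; never reuse the key as the IV in real code
        byte[] ct  = crypt(Cipher.ENCRYPT_MODE, key, iv,
                           "hello".getBytes(StandardCharsets.US_ASCII));
        byte[] pt  = crypt(Cipher.DECRYPT_MODE, key, iv, ct);
        System.out.println(new String(pt, StandardCharsets.US_ASCII)); // prints "hello"
    }
}
```

The compile error in the question comes from the missing javax.crypto.spec import (and, on MIDP, the missing SATSA jars); the round trip above shows the constructors being resolved once those classes are on the classpath.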
https://www.java.net/node/695113
@(#) $Header: /tcpdump/master/libpcap/INSTALL.txt,v 1.29 2008-06-12 20:21:51 guy Exp $ (LBL) To build libpcap, run "./configure" (a shell script). The configure script will determine your system attributes and generate an appropriate Makefile from Makefile.in. Next run "make". If everything goes well you can su to root and run "make install". However, you need not install libpcap if you just want to build tcpdump; just make sure the tcpdump and libpcap directory trees have the same parent directory. If configure says:

    configure: warning: cannot determine packet capture interface
    configure: warning: (see INSTALL for more info)

then your system either does not support packet capture or your system does support packet capture but libpcap does not support that particular type. (If you have HP-UX, see below.) If your system uses a packet capture not supported by libpcap, please send us patches; don't forget to include an autoconf fragment suitable for use in configure.in. It is possible to override the default packet capture type, although the circumstances where this works are limited. For example if you have installed bpf under SunOS 4 and wish to build a snit libpcap:

    ./configure --with-pcap=snit

Another example is to force a supported packet capture type in the case where the configure script fails to detect it. You will need an ANSI C compiler to build libpcap. The configure script will abort if your compiler is not ANSI compliant. If this happens, use the GNU C compiler, available via anonymous ftp: If you use flex, you must use version 2.4.6 or higher. The configure script automatically detects the version of flex and will not use it unless it is new enough. You can use "flex -V" to see what version you have (unless it's really old). The current version of flex is available via anonymous ftp:-*.tar.Z As of this writing, the current version is 2.5.4. If you use bison, you must use flex (and vice versa).
The configure script automatically falls back to lex and yacc if both flex and bison are not found. Sometimes the stock C compiler does not interact well with flex and bison. The list of problems includes undefined references for alloca. You can get around this by installing gcc or manually disabling flex and bison with:

    ./configure --without-flex --without-bison

If your system only has AT&T lex, this is okay unless your libpcap program uses other lex/yacc generated code. (Although it's possible to map the yy* identifiers with a script, we use flex and bison so we don't feel this is necessary.) Some systems support the Berkeley Packet Filter natively; for example out of the box OSF and BSD/OS have bpf. If your system does not support bpf, you will need to pick up:-*.tar.Z Note well: you MUST have kernel source for your operating system in order to install bpf. An exception is SunOS 4; the bpf distribution includes replacement kernel objects for some of the standard SunOS 4 network device drivers. See the bpf INSTALL document for more information. If you use Solaris, there is a bug with bufmod(7) that is fixed in Solaris 2.3.2 (aka SunOS 5.3.2). Setting a snapshot length with the broken bufmod(7) results in data being truncated from the FRONT of the packet instead of the end. The workaround is to not set a snapshot length, but this results in performance problems since the entire packet is copied to user space. If you must run an older version of Solaris, there is a patch available from Sun; ask for bugid 1149065. After installing the patch, use "setenv BUFMOD_FIXED" to enable use of bufmod(7). However, we recommend you run a more current release of Solaris. If you use the SPARCompiler, you must be careful to not use the /usr/ucb/cc interface. If you do, you will get bogus warnings and perhaps errors. Either make sure your path has /opt/SUNWspro/bin before /usr/ucb or else:

    setenv CC /opt/SUNWspro/bin/cc

before running configure.
(You might have to do a "make distclean" if you already ran configure once). Also note that "make depend" won't work; while all of the known universe uses -M, the SPARCompiler uses -xM to generate makefile dependencies. If you are trying to do packet capture with a FORE ATM card, you may or may not be able to. They usually only release their driver in object code so unless their driver supports packet capture, there's not much libpcap can do. If you get an error like: tcpdump: recv_ack: bind error 0x??? when using DLPI, look for the DL_ERROR_ACK error return values, usually in /usr/include/sys/dlpi.h, and find the corresponding value. Under {DEC OSF/1, Digital UNIX, Tru64 UNIX}, packet capture must be enabled before it can be used. For instructions on how to enable packet filter support, see: Look for the "How do I configure the Berkeley Packet Filter and capture tcpdump traces?" item. Once you enable packet filter support, your OSF system will support bpf natively. Under Ultrix, packet capture must be enabled before it can be used. For instructions on how to enable packet filter support, see: If you use HP-UX, you must have at least version 9 and either the version of cc that supports ANSI C (cc -Aa) or else use the GNU C compiler. You must also buy the optional streams package. If you don't have: /usr/include/sys/dlpi.h /usr/include/sys/dlpi_ext.h then you don't have the streams package. In addition, we believe you need to install the "9.X LAN and DLPI drivers cumulative" patch (PHNE_6855) to make the version 9 DLPI work with libpcap. The DLPI streams package is standard starting with HP-UX 10. The HP implementation of DLPI is a little bit eccentric. Unlike Solaris, you must attach /dev/dlpi instead of the specific /dev/* network pseudo device entry in order to capture packets. The PPA is based on the ifnet "index" number. Under HP-UX 9, it is necessary to read /dev/kmem and the kernel symbol file (/hp-ux). 
Under HP-UX 10, DLPI can provide information for determining the PPA. It does not seem to be possible to trace the loopback interface. Unlike other DLPI implementations, PHYS implies MULTI and SAP and you get an error if you try to enable more than one promiscuous mode at a time. It is impossible to capture outbound packets on HP-UX 9. To do so on HP-UX 10, you will, apparently, need a late "LAN products cumulative patch" (at one point, it was claimed that this would be PHNE_18173 for s700/10.20; at another point, it was claimed that the required patches were PHNE_20892, PHNE_20725 and PHCO_10947, or newer patches), and to do so on HP-UX 11 you will, apparently, need the latest lancommon/DLPI patches and the latest driver patch for the interface(s) in use on HP-UX 11 (at one point, it was claimed that patches PHNE_19766, PHNE_19826, PHNE_20008, and PHNE_20735 did the trick). Furthermore, on HP-UX 10, you will need to turn on a kernel switch by doing

    echo 'lanc_outbound_promisc_flag/W 1' | adb -w /stand/vmunix /dev/mem

You would have to arrange that this happens on reboots; the right way to do that would probably be to put it into an executable script file "/sbin/init.d/outbound_promisc" and make "/sbin/rc2.d/S350outbound_promisc" a symbolic link to that script. Finally, testing shows that there can't be more than one simultaneous DLPI user per network interface. If you use Linux, this version of libpcap is known to compile and run under Red Hat 4.0 with the 2.0.25 kernel. It may work with earlier 2.X versions but is guaranteed not to work with 1.X kernels. Running more than one libpcap program at a time, on a system with a 2.0.X kernel, can cause problems since promiscuous mode is implemented by twiddling the interface flags from the libpcap application; the packet capture mechanism in the 2.2 and later kernels doesn't have this problem. Also, packet timestamps aren't very good. This appears to be due to haphazard handling of the timestamp in the kernel.
Note well: there is rumoured to be a version of tcpdump floating around called 3.0.3 that includes libpcap and is supposed to support Linux. You should be advised that neither the Network Research Group at LBNL nor the Tcpdump Group ever generated a release with this version number. The LBNL Network Research Group notes with interest that a standard cracker trick to get people to install trojans is to distribute bogus packages that have a version number higher than the current release. They also noted with annoyance that 90% of the Linux-related bug reports they got were due to changes made to unofficial versions of their page. If you are having trouble but aren't using a version that came from tcpdump.org, please try that before submitting a bug report! On Linux, libpcap will not work if the kernel does not have the packet socket option enabled; see the README.linux file for information about this. If you use AIX, you may not be able to build libpcap from this release. We do not have an AIX system in house so it's impossible for us to test AIX patches submitted to us. We are told that you must link against /lib/pse.exp, that you must use AIX cc or a GNU C compiler newer than 2.7.2, and that you may need to run strload before running a libpcap application. Read the README.aix file for information on installing libpcap and configuring your system to be able to support libpcap. If you use NeXTSTEP, you will not be able to build libpcap from this release. If you use SINIX, you should be able to build libpcap from this release. It is known to compile and run on SINIX-Y/N 5.42 with the C-DS V1.0 or V1.1 compiler. But note that in some releases of SINIX, yacc emits incorrect code; if grammar.y fails to compile, change every occurrence of:

    #ifdef YYDEBUG

to:

    #if YYDEBUG

Another workaround is to use flex and bison. If you use SCO, you might have trouble building libpcap from this release.
We do not have a machine running SCO and have not had reports of anyone successfully building on it; the current release of libpcap does not compile on SCO OpenServer 5. Although SCO apparently supports DLPI to some extent, the DLPI in OpenServer 5 is very non-standard, and it appears that completely new code would need to be written to capture network traffic. SCO do not appear to provide tcpdump binaries for OpenServer 5 or OpenServer 6 as part of SCO Skunkware: If you use UnixWare, you might be able to build libpcap from this release, or you might not. We do not have a machine running UnixWare, so we have not tested it; however, SCO provide packages for libpcap 0.6.2 and tcpdump 3.7.1 in the UnixWare 7/Open UNIX 8 part of SCO Skunkware, and the source package for libpcap 0.6.2 is not changed from the libpcap 0.6.2 source release, so this release of libpcap might also build without changes on UnixWare 7. If linking tcpdump fails with "Undefined: _alloca" when using bison on a Sun4, your version of bison is broken. In any case version 1.16 or higher is recommended (1.14 is known to cause problems; 1.16 is known to work). Either pick up a current version from: or hack around it by inserting the lines:

    #ifdef __GNUC__
    #define alloca __builtin_alloca
    #else
    #ifdef sparc
    #include <alloca.h>
    #else
    char *alloca ();
    #endif
    #endif

right after the (100 line!) GNU license comment in bison.simple, remove grammar.[co] and fire up make again. If you use SunOS 4, your kernel must support streams NIT. If you run a libpcap program and it dies with:

    /dev/nit: No such device

You must add streams NIT support to your kernel configuration, run config and boot the new kernel.
If you are running a version of SunOS earlier than 4.1, you will need to replace the Sun supplied /sys/sun{3,4,4c}/OBJ/nit_if.o with the appropriate version from this distribution's SUNOS4 subdirectory and build a new kernel:

    nit_if.o.sun3-sunos4        (any flavor of sun3)
    nit_if.o.sun4c-sunos4.0.3c  (SS1, SS1+, IPC, SLC, etc.)
    nit_if.o.sun4-sunos4        (Sun4's not covered by nit_if.o.sun4c-sunos4.0.3c)

These nit replacements fix a bug that makes nit essentially unusable in pre-SunOS 4.1. In addition, our sun4c-sunos4.0.3c nit gives you timestamps to the resolution of the SS-1 clock (1 us) rather than the lousy 20ms timestamps Sun gives you (tcpdump will print out the full timestamp resolution if it finds it's running on a SS-1).

FILES
-----
CHANGES - description of differences between releases
ChmodBPF/* - Mac OS X startup item to set ownership and permissions on /dev/bpf*
CREDITS - people that have helped libpcap along
INSTALL.txt - this file
LICENSE - the license under which tcpdump is distributed
Makefile.in - compilation rules (input to the configure script)
README - description of distribution
README.aix - notes on using libpcap on AIX
README.dag - notes on using libpcap to capture on Endace DAG devices
README.hpux - notes on using libpcap on HP-UX
README.linux - notes on using libpcap on Linux
README.macosx - notes on using libpcap on Mac OS X
README.septel - notes on using libpcap to capture on Intel/Septel devices
README.sita - notes on using libpcap to capture on SITA devices
README.tru64 - notes on using libpcap on Digital/Tru64 UNIX
README.Win32 - notes on using libpcap on Win32 systems (with WinPcap)
SUNOS4 - pre-SunOS 4.1 replacement kernel nit modules
VERSION - version of this release
acconfig.h - support for post-2.13 autoconf
aclocal.m4 - autoconf macros
arcnet.h - ARCNET definitions
atmuni31.h - ATM Q.2931 definitions
bpf/net - copy of bpf_filter.c
bpf_dump.c - BPF program printing routines
bpf_filter.c - symlink to bpf/net/bpf_filter.c
bpf_image.c - BPF disassembly routine
config.guess - autoconf support
config.h.in - autoconf input
config.sub - autoconf support
configure - configure script (run this first)
configure.in - configure script source
dlpisubs.c - DLPI-related functions for pcap-dlpi.c and pcap-libdlpi.c
dlpisubs.h - DLPI-related function declarations
etherent.c - /etc/ethers support routines
ethertype.h - Ethernet protocol types and names definitions
fad-getad.c - pcap_findalldevs() for systems with getifaddrs()
fad-gifc.c - pcap_findalldevs() for systems with only SIOCGIFLIST
fad-glifc.c - pcap_findalldevs() for systems with SIOCGLIFCONF
fad-null.c - pcap_findalldevs() for systems without capture support
fad-sita.c - pcap_findalldevs() for systems with SITA support
fad-win32.c - pcap_findalldevs() for WinPcap
filtertest.c - test program for BPF compiler
findalldevstest.c - test program for pcap_findalldevs()
gencode.c - BPF code generation routines
gencode.h - BPF code generation definitions
grammar.y - filter string grammar
ieee80211.h - 802.11 definitions
inet.c - network routines
install-sh - BSD style install script
lbl/os-*.h - OS-dependent defines and prototypes
llc.h - 802.2 LLC SAP definitions
missing/* - replacements for missing library functions
mkdep - construct Makefile dependency list
msdos/* - drivers for MS-DOS capture support
nametoaddr.c - hostname to address routines
nlpid.h - OSI network layer protocol identifier definitions
net - symlink to bpf/net
optimize.c - BPF optimization routines
packaging - packaging information for building libpcap RPMs
pcap/bluetooth.h - public definition of DLT_BLUETOOTH_HCI_H4_WITH_PHDR header
pcap/bpf.h - BPF definitions
pcap/namedb.h - public libpcap name database definitions
pcap/pcap.h - public libpcap definitions
pcap/sll.h - public definition of DLT_LINUX_SLL header
pcap/usb.h - public definition of DLT_USB header
pcap-bpf.c - BSD Packet Filter support
pcap-bpf.h - header for backwards compatibility
pcap-bt-linux.c - Bluetooth capture support for Linux
pcap-bt-linux.h - Bluetooth capture support for Linux
pcap-dag.c - Endace DAG device capture support
pcap-dag.h - Endace DAG device capture support
pcap-dlpi.c - Data Link Provider Interface support
pcap-dos.c - MS-DOS capture support
pcap-dos.h - headers for MS-DOS capture support
pcap-enet.c - enet support
pcap-int.h - internal libpcap definitions
pcap-libdlpi.c - Data Link Provider Interface support for systems with libdlpi
pcap-linux.c - Linux packet socket support
pcap-namedb.h - header for backwards compatibility
pcap-nit.c - SunOS Network Interface Tap support
pcap-nit.h - SunOS Network Interface Tap definitions
pcap-null.c - dummy monitor support (allows offline use of libpcap)
pcap-pf.c - Ultrix and Digital/Tru64 UNIX Packet Filter support
pcap-pf.h - Ultrix and Digital/Tru64 UNIX Packet Filter definitions
pcap-septel.c - Intel/Septel device capture support
pcap-septel.h - Intel/Septel device capture support
pcap-sita.c - SITA device capture support
pcap-sita.h - SITA device capture support
pcap-sita.html - SITA device capture documentation
pcap-stdinc.h - includes and #defines for compiling on Win32 systems
pcap-snit.c - SunOS 4.x STREAMS-based Network Interface Tap support
pcap-snoop.c - IRIX Snoop network monitoring support
pcap-usb-linux.c - USB capture support for Linux
pcap-usb-linux.h - USB capture support for Linux
pcap-win32.c - WinPcap capture support
pcap.3pcap - manual entry for the library
pcap.c - pcap utility routines
pcap.h - header for backwards compatibility
pcap_*.3pcap - manual entries for library functions
pcap-filter.4 - manual entry for filter syntax
pcap-linktype.4 - manual entry for link-layer header types
ppp.h - Point to Point Protocol definitions
runlex.sh - wrapper for Lex/Flex
savefile.c - offline support
scanner.l - filter string scanner
sunatmpos.h - definitions for SunATM capturing
Win32 - headers and routines for building on Win32 systems
http://opensource.apple.com//source/libpcap/libpcap-29/libpcap/INSTALL.txt
Reliving past pains
Residential school victim shares stories of violence, sexual assault and humiliation

NORTHWEST TERRITORIES
MONDAY, APRIL 18, 2011
Volume 65 Issue 51
Joe Handley, Sandy Lee, Bonnie Dawson
$1 (includes GST)

Flipping for judo in Hay River

Eli Purchase, Dennis Bevington

On the campaign trail
Federal candidates begin debate circuit; discuss Nutrition North

Documentary to showcase Kole Crook

Steve Beck
First RCMP aboriginal community constable
Paul Bickford/NNSL photo

Publication mail Contract #40012157

QUOTE: "We figure (there's) millions spent on bingo in Yellowknife." – Whati Chief Alfonz Nitsiza, on why a new online course offered by De Beers to teach money management is welcome in the community, page 33.

NEWS/NORTH NWT, Monday, April 18, 2011

feature news

Correction
The photo caption on page 3 of the April 11 edition of News/North should have read from left to right are Joseph and Danny Bayha. News/North apologizes for any embarrassment or confusion the errors may have caused.

NEWS Briefs

Candidates' forum at school
A candidates' forum for the upcoming federal election will be held at Fort Providence's Deh Gah School on April 19. The school-sponsored event, which begins at 10 a.m., is to let students meet candidates, and will also be open to community members.
Three candidates have committed to attend the forum – Western Arctic MP and NDP candidate Dennis Bevington, Liberal candidate Joe Handley and Eli Purchase of the Green Party. Conservative candidate Sandy Lee and Bonnie Dawson of the Animal Alliance Environment Voters Party of Canada are unable to attend. – Paul Bickford

Alice Perrin shared her residential school experience and how she overcame it at the Explorer Hotel in Yellowknife on Saturday. Perrin was asked to speak at an event in Yellowknife as part of National Victim of Crime Awareness Week hosted by Yellowknife Victim Services Program. photo courtesy of Alice Perrin

Principal heading to Harvard
Sophie Call, the principal of Ecole Boreale in Hay River, will be on leave next school year to attend Harvard University in Boston. Call will study for a master's of education in the mind, the brain and learning. The degree deals with the neuroscience of the learning process and cognition. Call said she is very excited to be chosen for the program. She also noted her ability to attend Harvard was greatly helped by financial assistance from the NWT Teachers' Association. – Paul Bickford

Telling her story
Alice Perrin, a victim of the residential school system, speaks in Yellowknife
by Katherine Hudson

Vote mob
Youth between the ages of 18 and 30 are encouraged to assemble over the lunch hour at Yellowknife city hall Monday for a vote mob. The mob is being held during Liberal leader Michael Ignatieff's campaign barbeque. Vote mobs have been organized around the country to show political leaders that Canadian youth are voting in the May 2 federal election. The gathering is a non-partisan event, with the intention of demonstrating the power of the youth voice. – Nicole Veerman

Ice crossing closed
The Mackenzie River ice crossing near Fort Providence closed for the season last Monday because of deteriorating road conditions. The closure was about five days earlier than the April 16 average.
Every winter, there is about a one-month period where there is no road and no ferry crossing, but this year the Department of Transportation has predicted the opening of the ferry crossing will be delayed past its usual May 13 operation date. The delay is due to low and fluctuating water levels in Great Slave Lake. – Nicole Veerman

Somba K'e/Yellowknife
Northern News Services

A handkerchief soaked with tears, a loneliness so strong it was palpable and a feeling of guilt that carved itself into her being were what Alice Perrin said she took from her years at residential schools.

No hugs were offered to the four-year-old girl who found herself so utterly alone in a residential school in Fort Resolution in the 1950s.

Alice Perrin found a way to ease the pain through words, through talking about her suffering.

She shared her story and how she overcame it in Yellowknife on Saturday, to conclude National Victim of Crime Awareness Week at an event hosted by Yellowknife Victim Services.

Perrin was born on the north shore of Great Bear Lake, in Cameron Bay, a community that is no longer there.

She arrived at Saint Joseph's Indian Mission, a Roman Catholic residential school in Fort Resolution, in 1952 at the age of four.

She could speak Slavey when she entered the system, however when the nuns could not communicate with her, Perrin said they became "mean and rough with us."

She said communication was a constant struggle, with nuns reading prayer books in French and speaking in Latin.

"When we first went there, they stripped us of our aboriginal clothes and then gave us a bath right away. They put us in cold water and started to wash us and pulled on our hair and because I was crying, I got hit right away," she said.

"In order to have me stop talking my Dene language, they'd hit me under the chin and sometimes I'd be biting my tongue at the same time."

She said these were her first memories.

She often cried because as a young girl away from her family, she would be overwhelmed by loneliness; instead of receiving comfort, the nuns would hit her.

"I guess they were trying to break us in. It didn’t work. What I ended up doing was crying quietly, without a sound. My handkerchief would be soaking wet from crying all the time."

"I was stuck there without going home for 72 months. That’s six years," she said.

She was torn away from her home again and wound up in the residential school system for a total of 12 years, until 1964. She spent some time at Catholic Lapointe Hall in Fort Simpson and Akaitcho Hall, Yellowknife's vocational school.

"They were trying to break us."

She said her chores as a young girl consisted of cleaning the stairway that exited the dining area. She said she would be bent down, using a dustpan and a brush.

"A few times they pulled my ears and yelled at me and put my head right near the floor, pointing at two little hairs at the stairway that I had left there."

She said the residential school students submitted to the punishment because they had no one to turn to for help.

"We were in confinement. There was nobody around to protect us. They never gave us any hugs. No love, no compassion," she said.

The students would have liked to have been spared the humiliation of having their sheets presented to the whole school after they wet the bed. Perrin would have liked protection from a man who touched her between the legs.

She said as she grew up, she carried with her the immense shame of seeing what happened to her "sisters" – the girls at the school – and being unable to help. "We didn’t want to let anybody know about it. It wasn’t our fault. It wasn’t our fault."

Reflecting on her painful experiences, Perrin wrote a book – sometimes crying over her keyboard, sometimes needing her husband to hold her for support. It is titled "How My Heart Shook Like a Drum" and it took her six years to complete.

"It was a healing journey for me," said Perrin.

Although she lives in Chelsea, Que., now, Perrin visited Yellowknife this past weekend where she spoke at the Explorer Hotel.

"I hope that the residential school students, the former students, find a way to heal themselves. I would like to let them know that whatever happened to them in residential schools was not their fault.

"When you're abused like that you really do have to deal with it. Cry, scream or yell – whatever you need to let it out in order to recognize and acknowledge that it happened and move toward harmony," she said.

Marie Speakman, a worker at the Yellowknife Victim Services Program, recommended that Perrin come to speak.

"I have gone through it myself," said Speakman of abuse. Speakman has been sober for 28 years. "It takes a long time to get to be where you are at. It doesn't happen overnight.

"It's important to see the positiveness. Even if you've gone through the abuse, you can still make changes," she said.

Perrin's talk comes right after the Truth and Reconciliation Commission was in Yellowknife on Thursday, where others had the chance to share their experiences from residential schools.
As things now stand there is a lot of work to do here, with only 44 per cent of our aboriginal students graduating high school compared to 70 per cent of others. In all of the regions covered so far – Yellowknife, the Sahtu, Tlicho, Deh Cho and just recently the South Slave – there seems to be an agreement that a lot of it has to do with early childhood education. There have been studies done before on including more of a Dene curriculum, but with varying levels of success. On this one topic I would agree with Chief Roy Fabien when he said we are in danger of losing our Dene culture and language entirely unless we have Dene immersion the way the French people do. Of course this all involves present government policy, but in our case it is also a matter of our treaty right to education. In any other place in southern Canada the money for the education of First Nations students goes directly to the individual band councils and is then disbursed as needed. In the North, this same money goes to the Government of the NWT's Department of Education. As a student I have to apply each year for my funding the same way anyone who has been in the North for only six months does, although the treaty clearly states that this is my right under law. Things being the way they are, though, we the Dene also have to opt for being heard as just another "special interest" group and have to deal with budget restrictions besides. A couple of years ago, filmmaker Raymond Yakeleya of Tulita brought forth the disturbing case of his nephew who couldn't get into the post-secondary program of education he wanted to take in Edmonton because the diploma he earned in the NWT was simply not good enough.
A MOUNTAIN View: Antoine Mountain is a Dene artist and writer originally from Radilih Koe'/Fort Good Hope. He can be reached at.
All in all, I think students in the smaller communities face many more challenges today with the added temptations of drugs and alcohol and the resultant apathy of an uninspiring peer group. But there are exceptions, the ones who will go all the way with their schooling no matter what. I have been taught to be that way myself, too; to keep going no matter what. As a student you have to deal with the lack of money to eat properly and the isolation from family and friends. But it is worth it, and hopefully the people at the top, such as our Minister of Education Jackson Lafferty, will see wise to help implement some of these recommendations to help our students of the future. Mahsi, thank you.
Federal candidates weigh in on new Nutrition North Canada
Liberals, NDP say program a disaster, no real saving for customers; Conservatives keen on working out kinks
by Andrew Livingstone
Northern News Services
NWT
Nutrition North, the new incarnation of the Food Mail program, rolled out on April 1 and residents across the NWT have been making their opinions heard on whether or not the new program is helping to reduce the cost of food for people in remote communities. Since the program started, residents in Norman Wells have seen a dramatic increase in food demand, leading to rows of empty shelves, all for what one resident said were minimal savings. Due to increased cost in shipping and a mountain of new paperwork, specifically keeping track of every product that goes to each community monthly, grocery stores in Yellowknife and Winnipeg have cancelled their personal order programs, leaving hundreds of customers to rely on their local stores. With voters going to the polls on May 2, News/North asked the five candidates vying for the Western Arctic seat their thoughts on the $60 million program.
NDP incumbent Dennis Bevington said no matter what community he visits, the complaints are all the same – residents are upset with the program and the added bureaucracy is forcing some grocers to get out of the personal order business. "They're very concerned about the supplies," he said, pointing to the lack of availability of fresh foods in some communities. "When the Conservatives brought in the Nutrition North program, we wanted people to still have personal orders. They agreed to have personal food orders, but it's not practical to do that now." Conservative candidate Sandy Lee said the program, while still in its infancy, has some issues that need to be worked out, but added it's something that can work in the long run with some improvements. "We are hearing some transitional issues we need to address, so I will be working on that," she said, if elected to office. "That would be my focus should I be in there." Lee said the program's base funding has increased to $60 million from $40 million and hasn't been cut like Bevington said in a media interview last Wednesday. "This process is more transparent in that the savings will be shown and it'll be passed onto the consumers," she said. "Under the Food Mail program, Canada Post would get most of the money. It was unclear who was getting the benefit of the subsidy. At the end of the day, once we work out these wrinkles, it will benefit the Northerners." Liberal candidate Joe Handley said the program is a complete disaster. He said the program was built from Ottawa down and the old program should come back into place until the government can do more consultation with community governments and residents to find a way to make it more effective and cost-saving. "I haven't heard one person say this is a good program," he said, adding restaurant owners in remote communities are feeling the same pinch and higher meal costs could be in the future.
He cited the same problems about the program as Bevington has – no real saving at grocery stores and the loss of personal orders. "It's only a few pennies saved. People are saying it's costing us more now than it did before Nutrition North came in," he said. Bevington said the Conservatives "really missed the boat" and the program needs to be seriously reworked or even scrapped. "(Nutrition North) needs to change and it may have to be scrapped altogether or get rid of the regulations that are the result of this boondoggle," he said. "It's been poorly thought out for such an important program. "We will have to identify where the problems are and how to fix them. We can't spend two years doing this, it needs to be fixed." Eli Purchase, candidate for the Green Party, said the program needs to be monitored closely to make sure it's doing what it is set up to do – reduce the cost of living for residents and provide reasonably priced, healthy food. "This program is supposed to be reducing the cost of groceries and everyday necessities for everyday people in isolated communities," he said. "This is something where we have to be really careful to ensure that this program is doing what it set out to do." Bonnie Dawson, candidate for the Animal Alliance Environment Voters Party of Canada, said her party doesn't have a specific stance in its platform on the issue of Nutrition North. However, on a personal level, she said the program needs major improvement.
Top Weekly Winners: Renee Sanderson and William Beaulieu are this week's winners in the News/North Hockey Pool with points each.
Overall NWT/Nunavut leaders: Lynda Smith (NT) 1,432; Clarence Tutcho (NT) 1,427; Caleb Manuel (NT) 1,411. As of games played up to and including April 11, 2011. NNSL Hockey Pool is not affiliated in any way with the National Hockey League.
Kakisa grayling protection plan
Fishers fear Deh Cho Bridge will increase access to the run
by Roxanna Thompson
Northern News Services
Ka'a'gee Tu/Kakisa
Fishers who frequent the spring grayling run near Kakisa are worried the Deh Cho Bridge will increase pressure on fish stocks. Mac Stark has been fishing the Kakisa River's annual grayling run since 1983 and in that time has only missed three years. Stark, who lives in Hay River, is one of the regular fly fishers who fish the annual run. These fishers have volunteered to do a creel count, or angler survey, to accurately record how many grayling and other species are caught during the run, where in the river they are caught and, in the case of grayling, their size. The group started gathering data last year and will continue again this year. The purpose is to gather information before the Deh Cho Bridge opens, said Stark. "The next five years, I think we're going to see a change with that bridge being opened," he said.
Stark and other Kakisa River fly fishers are concerned that when the bridge opens, anglers from Yellowknife will have road access to the annual grayling runs. The run generally begins around April 15, peaks the last week of the month and lasts into early May, he said. Normally fishers from Yellowknife miss the run because the Mackenzie River ice crossing is closed and the ferry hasn't opened yet. The only thing keeping them away from the run has been the cost of flying to Hay River, said Stark. A five-hour drive for a weekend of fishing won't be a barrier, he added. The number of fish taken out of the Kakisa River isn't the concern so much as how they are handled, he said. The current regulations for Arctic grayling are one daily and one in possession. This means an angler can take one fish home a day, but as long as they have that fish in their possession they can't take home another. The grayling caught and released have to be handled properly, said Stark. They need to be put back in the water as quickly as possible. The grayling are only in the river for their spawning run and are easy to catch because they feed aggressively during that time. "It's exhausting for them," he said. Because of the run's short duration in the Kakisa River, it could be damaged very easily, said Stark. What's at risk is a unique site. "We really want to see this preserved," said Stark. The information gathered will be added to the creel count that is entering its third year in Kakisa. Kakisa has an aquatic monitoring program funded by the federal Aboriginal Aquatic Resource and Oceans Management Program (AAROM). Due to the pressure from sport fishing, Kakisa has been focusing on gathering data, said Mike Low, the AAROM technical adviser for Dehcho First Nations. The creel count from the fly fishers will be an important piece of that data because the community monitor normally doesn't begin work until May, he said.
"Having them out there is just really good for the river," said Low about Stark and his fellow sport anglers. The opening of the Deh Cho Bridge will increase the traffic on Highway 1, but Low said the regulations and the monitoring program should minimize the effects of increased fishing during the grayling run. If changes are noticed, the monitoring plan could be adapted to focus on the area of concern, or the data could be taken to the Department of Fisheries and Oceans Canada so regulations can be changed at that level, Low said.
Editorial & Opinions
COMMENTS AND VIEWS FROM NEWS/NORTH AND LETTERS TO THE EDITOR
Published Mondays
YELLOWKNIFE OFFICE: Box 2820, Yellowknife, NT, X1A 2R1. Phone: (867) 873-4031. E-mail: nnsl@nnsl.com, editorial@nnsl.com, advertising@nnsl.com, circulation@nnsl.com, nnsladmin@nnsl.com
DEH CHO OFFICE, FORT SIMPSON: Roxanna Thompson, Bureau Chief. E-mail: dehchodrum@nnsl.com
SOUTH SLAVE OFFICE, HAY RIVER: Paul Bickford, Bureau Chief. Phone: (867) 874-2802. Fax: (867) 874-2804. E-mail: editor@ssimicro.com
MACKENZIE DELTA OFFICE, INUVIK: Samantha Stokell. Fax: (867) 777-4412. E-mail: newsinuvik@nnsl.com
BAFFIN OFFICE, IQALUIT: Emily Ridlington, Jeanne Gagnon. Fax: (867) 979-6010. E-mail: editor@nunavutnews.com
KIVALLIQ OFFICE, RANKIN INLET: Darrell Greer, Bureau Chief. E-mail: kivalliqnews@nnsl.com
PUBLISHER: J.W.
(Sig) Sigvaldason – jsig@nnsl.com
GENERAL MANAGER: Michael Scott – mscott@nnsl.com
MANAGING EDITOR: Bruce Valpy – valpy@nnsl.com
ACCOUNTING – nnsladmin@nnsl.com: Judy Triffo, Florie Mariano, Myra Bowerman, Courtney Brebner, Lisa Bisson, Kirsten Hobbs
EDITORIAL PRODUCTION: editorial@nnsl.com
Co-ordinating editor: Derek Neary. Assignment editor: Chris Puglia. Production manager: Steve Hatch. Photo co-ordinator: Ian Vaydik
News editors: Chris Puglia, Jennifer Geens, Mike W. Bryant, Tim Edwards
Sections – Sports: James McCarthy – sports@nnsl.com. Business: Guy Quenneville – business@nnsl.com. Arts: Adrian Lysenko – entertainment@nnsl.com
General news: Andrew Livingstone, Nicole Veerman, Katherine Hudson, Terrence McEachern
Editorial board: Bruce Valpy, Mike W. Bryant, Derek Neary, Jennifer Geens, Chris Puglia, Tim Edwards
ADVERTISING – advertising@nnsl.com. Manager: Petra Ehrke. Representatives: Kimberly Doyle, Orlene Williams, Ed Kaminski, Terry Dobbin, Dawn Janz, Jolene Hughes, Chrissie Morgan, Isaac Wood, Pallavi Kute
CUSTOMER SERVICE: Ruthcil Barbosa-Leonardis
SYSTEMS: David Houghton
CIRCULATION – circulation@nnsl.com: Jody Miller, Gagik Smbatyan, Samvel Balasanyan
Subscriptions: one year mail; two year mail; online entire content per year. Individual subscriptions, multiple user rates on request.
NORTHERN NEWS SERVICES LIMITED – 100% Northern owned and operated
Publishers of: Deh Cho Drum, Inuvik Drum, Kivalliq News, Yellowknifer, NWT News/North, Nunavut News/North
Member of: Canadian Community Newspapers Association, Ontario Community Newspapers Association, Manitoba Community Newspapers Association, Saskatchewan Weekly Newspapers Association, Alberta Weekly Newspapers Association, Ontario, Manitoba and Alberta Press Councils, Yellowknife Chamber of Commerce
Contents copyright – printed in the North by Canarctic Graphics Limited
Member of the Ontario Press Council.
The Ontario Press Council was created to defend freedom. E-mail: Info@ontpress.com
SEND US YOUR COMMENTS: E-mail us at editorial@nnsl.com with the subject line "My opinion"; or send mail to News/North at Box 2820, Yellowknife X1A 2R1; or drop your letter off at our office at Street. All letters submitted must be signed with a return address and daytime telephone number. We will do our best to ensure that letters submitted by 3 p.m. on Thursday are printed in Monday's News/North.
NNSL file photo: Nutrition North Canada, the replacement for the old Food Mail program designed to get healthier foods into the hands of Northerners, has been met with criticism from customers and grocers who say the new program is not living up to expectations.
April fools
Nutrition North hasn't fixed what was wrong with Food Mail
Northern News Services
When the federal government announced it was launching a new plan to replace the Food Mail program, many were optimistic, hoping for more affordable nutritious foods in their communities. Instead, on April 1 the price of many healthful perishable foods dropped by an unimpressive five to seven per cent. Many Northerners are finding the overall cost of their groceries has increased, and many no longer have the option of avoiding local retail prices by ordering their own food from southern stores, as paperwork headaches are causing those grocers to opt out of the program in droves. The old Food Mail program wasn't perfect. The same complaints people had about that program – the lack of transparency on the part of retailers and obstacles to personal orders – continue with Nutrition North. More research should have been done to explain why prices are so high to begin with, and that information should have been used to fine-tune the Food Mail program. In Yellowknife, a shopper can pick up four litres of milk for $4.99.
During the Food Mail era, Canada Post could ship that four-litre jug of milk, weighing approximately four kilograms, for 80 cents per kilogram to Norman Wells. That cost about $3.20 for each jug. That brings the price to about $8.19. Keep in mind the shelf price of milk at a Yellowknife store already includes the mark-up for overhead. But shoppers in Norman Wells were paying $13.99 for that jug of milk. What was the reason for that extra $7.79 over and above the shipping cost? Is the cost of doing business in remote stores that high? Answer that question and you'd solve the dilemma of high food prices in the North. In Norman Wells, that same quantity of milk is now $12.49 under Nutrition North Canada, a modest reduction. Pop and chips are still far more affordable.
THE ISSUE: FOOD PRICES. WE SAY: WHERE'S THE ACCOUNTABILITY?
Part of the hype of the new system was there would be greater accountability on the part of retailers. We hope that is so, but we have not seen it yet. Stores must be forced to show Northern consumers line-by-line the breakdown of product cost – base price, shipping cost, stocking and overhead mark-up, and profit – on subsidized items. That information is vital to targeting the cause of high food prices and truly making basic staples affordable. We asked the North West Company for this breakdown. The company wouldn't tell us, saying it was "competitive information." Food security is at or near the top of the list of pressing social issues in the NWT, Nunavut and in other locations around the world as, we must also acknowledge, global food prices have been climbing steadily over the past several months. Yet cheaper – and less nutritious – food options can lead to obesity, diabetes, rickets, and increase risk factors for some forms of cancer. Back when Health Minister Leona Aglukkaq was running for Nunavut MP in 2008, she campaigned on changing Food Mail. She told News/North, "Where's the subsidy?
I don't see the subsidy. I'll use the pineapple as an example. It's bought for $3.39 or something in Yellowknife. By the time it hits the Taloyoak store, it's a $15 pineapple. So where is the subsidy going and how are the stores using that subsidy? I think they owe us an explanation." They still do. And we're not getting it from Nutrition North. The federal government has the choice of either investing in Northern nutrition now or paying more over the long term for our health-care bills. We need a solution. The fact food prices remain a burden on Northern families is a black mark on the reputation of our nation and no government should allow the problem to persist.
BUSH – the lighter side
INUVIK Drum, with Andrew Livingstone
Inuvik Works a vital service
Northern News Services
Every community needs support programs for its residents. Whether it be support groups for addictions or work placement programs, communities are built around the foundation of being able to provide for their residents. The sudden close of the Inuvik Works program last week due to loss of funding, while said to be temporary, will be a blow to those who benefit from it – those who use the program and those who see the benefit. The program has been running for more than 10 years and has provided support to the community and its residents. Whether it be cleaning up garbage around town or helping with snow removal at an elders home, it provided a much-needed service. Not only did it help elders and keep the town clean, it also helped residents – single mothers, low income families, elders and the disabled – get the training or work they needed to live a normal life. Margaret Gordon, chairperson for the committee overseeing the program, gave the example of one individual in town who benefited from the program. Through training at Inuvik Works, the young man was able to procure employment and become a contributing part of the community – something she said he wouldn't have been able to do otherwise. Gordon said the Gwich'in Tribal Council, Inuvialuit Regional Corporation and the town, along with the program committee, are working to restructure Inuvik Works to fit back into funding criteria with the hopes of getting it up and running again by summer. It's this effort to refocus the program that proves how important Inuvik Works is to helping those who are marginalized get the support they need to be a contributing part of society. By refocusing what Inuvik Works does, it will be more successful and help more people in the community. The loss of funding, while it puts a strain on the already successful program and the people it serves, is almost a blessing in disguise. Reinvention sometimes is needed to keep things fresh.
BEATS: Running for community support
Huge congratulations are in order for the organizers, volunteers and runners who helped raise more than $20,000 for the Inuvik Homeless Shelter this past week. Ultra marathon runner Alicja Barahona completed a 374-km run from Inuvik to Tuktoyaktuk and back, getting into Inuvik on Sunday at 4 p.m. with about a dozen local runners who completed the final 30 kilometres with her. The effort that went into raising the money goes to show how dedicated people in the community are to making Inuvik a better place for everyone. It's these kinds of events that show how passionate people are about this town and what they are willing to do to improve the overall quality of life for those who might be falling through the cracks. Kudos to all involved!
DEH CHO Drum, with Roxanna Thompson
Protecting what's important
Northern News Services
The list of potential effects the Deh Cho Bridge will have on the region just keeps growing. While it's probably not in the forefront of the minds of most people following the Deh Cho Bridge saga, the health of the grayling run on the Kakisa River is yet another item the bridge could change. Those who have thought about this consequence are the ones closest to it – the people of Kakisa and the group of anglers who fish the run each year. While it may be hard for non-anglers to imagine, there are apparently a large number of fishers in Yellowknife and the surrounding area who have been eying the spring grayling run with covetous eyes. While those farther north are still waiting for the ice to melt off their lakes and rivers, anglers are enjoying free-flowing water and hungry grayling that are eager to bite. While the grayling are safe this year, the fear is that by next year, when the Deh Cho Bridge is open, Yellowknife anglers won't think twice about driving five hours to Kakisa to cast a line. More anglers will mean more fish out of the water, which will add further stress to the grayling that are already exhausted from their spawning trip. Some anglers believe the grayling population and future runs could suffer as a result, thereby damaging a world-class fishing site. What's being done to prevent that from happening is a testament to foresight and to the importance of locally driven initiatives. While most planners linked to the bridge and fishery regulations probably didn't think of Kakisa River when construction on the bridge began, people in Kakisa and fly fishers did. Both groups are doing their parts in Kakisa to protect something that is important to them. Kakisa has been using the Aboriginal Aquatic Resource and Oceans Management program (AAROM) to develop and fund a customized program that allows them to track aquatic issues such as pressure from sport anglers. By using the program, Kakisa has already gathered two years worth of data about which types of fish are being taken out of the Kakisa River and the conditions in the river. This data will be supplemented by the creel count that a group of fly fishers have volunteered to undertake. The count will form a picture of what the grayling run looks like before the bridge opens. The information gathered between these two initiatives will allow both groups to determine if increased access to the site is having a detrimental effect on the grayling and give them a basis on which to demand changes to better protect the run. Kakisa River may prove to be a template for how other Deh Cho communities can use available resources and make partnerships in order to protect valued resources.
GONAEWO – Our way of life, with John B. Zoe
Vote for the future
John B. Zoe is the acting executive director with the Tlicho government and a former land claims negotiator. He holds an honorary doctorate of law.
Just when it looks like spring, we get a good dump of snow as a reminder that winter's grip is still strong. If it were not for technology giving the forecast, we would be in the dark to just accept the weather as is, similar to the wait-and-see of which political party will lead us into the future. The result of the election could very well rest on the colour of the bouquet associated with one of Canada's political parties that will be tossed by the new princess following her royal wedding to Prince William at the end of April. Just before the election, many Christians will have celebrated Easter, a time of reflection and renewal. It is a time to shed old issues and be in tune with our Creator, self, others and the greater community. Canadians may very well vote with their souls, recently cleansed of judgment, to decide which party will be proper to lead the country. One thing that is predictable is climate change, and we know that the North will be the most affected; the fallout will be felt in small bites, giving a false sense that things are still the same. The whole of the North, the cradle of the extraction business, will be joined with more land to explore with the recent denial by Canada to discontinue the subsurface protection of Edehzhie. Edehzhie is a candidate for protection, and until the process is complete an interim withdrawal of the sub-surface had been in place, extended by a couple of renewals. In a surprise move, the minister of Indian and Northern Affairs Canada pulled the fundamental plug to drain the life out of the strategy. Edehzhie, as described in the many years of research, could very well be the North's Serengeti. There could be no other candidate areas on the horizon that will be pushed for protection, especially with the present climate of decision-making by the higher-ups. Edehzhie is the most significant candidate area for protection. Now that it is not moving ahead, we may have to resign ourselves to the fact that there will not be any areas set aside for protection in the near future. The fact that the federal government will modify the present northern regulatory processes by bending and stretching them without supposedly breaking them will be interesting. Turning the free-wheeling federal reins over to the government of the Northwest Territories for land and resources management by devolution does not give any more assurance for any form of development. There needs to be a balance between sustainable development and sites protected from development. All these big-ticket decisions are made at levels not reached by individual Northerners. Elections can be defining moments when individuals vote to send the right message through whoever will represent the North in Ottawa.
NNSL WEB POLL
How has the new Nutrition North Canada program affected you since it started on April 1?
It is harder to get the groceries I need, especially healthy food. 42%
My personal food orders have become more expensive. 29%
I am saving money and have been able to buy healthier food.
29%
Have recent forums to discuss the pains caused by residential schools helped your healing process? Have your say at. Poll results will be published in next Monday's News/North.
Cartoons – BUSH: Another look at life in the North
DEH CHO DRUM: Students in Fort Simpson's Bompas Elementary School were introduced to the art of eating healthy earlier this month.
INUVIK DRUM: Former Inuvik Drum editor Kira Curtis had the opportunity to try muktuk during a community feast.
NUNAVUT NEWS/NORTH: An Iglulik team won this year's bridge building competition hosted by the Northwest Territories Association of Professional Engineers and Geologists.
Federal candidates face off
Western Arctic election forum held in Hay River
by Paul Bickford
Northern News Services
Hay River
The NWT's first all-candidates forum for the upcoming federal election was held in Hay River last week. The five candidates in the Western Arctic riding appeared on April 14 before just under 100 people, who heard the politicians' views on a wide variety of topics. However, one recurring issue – driven by Conservative candidate Sandy Lee – was whether the NWT would be better served by an MP in the governing party, instead of current representative Dennis Bevington of the New Democratic Party. Lee thinks so, expressing confidence the Conservatives will likely form a majority government or at least a minority government after the May 2 election. "Having an MP in the NDP party is not helping us," she said. Lee said she likes Bevington as a person. "I don't want to attack him too much, but his record does not stand for his position that he puts the interest of the North before the interest of the party," she said. Bevington questioned how much influence a Conservative backbencher would have, noting a federal cabinet minister from Nunavut was unable to prevent federal cuts in the North.
"Nobody tells me to be quiet, and I think that's something that everybody here should understand about the nature of Parliament and the (Prime Minister) Stephen Harper government, which is the most dictatorial government that we have seen, especially with its own members," he said. Bevington noted he is rated fifth among MPs in voting against his own party, including on issues such as the gun registry. The most interesting interaction of the forum was between Lee and Liberal candidate Joe Handley. The former colleagues in the territorial government – Handley as premier and Lee, who served as MLA at the time and then later as minister of Health and Social Services – exchanged pointed and sometimes humorous barbs. Lee pointed out a proposed but ultimately abandoned change to supplementary health care that introduced a means test for non-aboriginal people was signed by Handley as premier. "Around the same time that you signed the bridge deal," she added, referring to the controversial Deh Cho Bridge over the Mackenzie River. That drew the biggest reaction of the evening – laughter and a smattering of applause from the audience. "We have to keep our presentations factual," countered Handley, noting the GNWT gave the minister of Health and Social Services – Lee – the authority to check the impact of changes to supplementary health. Handley also took his own digs at Lee. "Unlike other parties, and one other party in particular, I wasn't anointed," he said, referring to Lee's controversial route to the Conservative nomination. The most interesting commitment of the evening came from Green Party candidate Eli Purchase, who noted the starting salary for an MP is $157,000 a year, while the average Northerner makes about $45,000 a year.
"So I'm promising that half of every pay cheque after taxes and half of every income tax refund I get goes to registered charities and community organizations in the Northwest Territories," Purchase said.

Bonnie Dawson of the Animal Alliance Environment Voters Party of Canada emphasized the link between animal cruelty and violence against humans, noting the NWT has the highest rate of violent and domestic crime in Canada. The forum, held at the community hall and sponsored by the Hay River Chamber of Commerce, touched on a number of other issues, including aboriginal self-government, economic development, devolution and support for seniors.

A day at Mezi school
SCHOOL Feature by Guy Quenneville
Northern News Services
Whati/Lac La Martre

With a population of about 500, Whati, resting on the banks of Lac La Martre, has only one school, Mezi Community School, named after Mezi Beaulieu, an elder who built a cabin in the area in the 1980s. When News/North visited Whati on April 13, the school was bursting with activity thanks to a free book fair hosted by De Beers Canada and the Yellowknife Book Cellar.

Grade 3 student Javen Nitsiza gets distracted during a Dogrib language class taught by Mary Adele Flunkie.
Here's a peek at what else was going on – in and out of the classroom – at Mezi that day. Grade 5 student Myles Beaverho gets away from the rat race of life by immersing himself in The Hulk inside Mezi's cozy, quiet library. Alan C. Moosenose stirs the pot of cream of squash soup during nutrition class.
Official Court Notice

September 19, 2011 is the deadline for Common Experience Payment applications. The Residential Schools Settlement Agreement. The healing continues.

On September 19, 2007 the Residential Schools Settlement became effective. At the time, it was estimated that 80,000 former students were alive in 2007. As of January 1, 2011, Common Experience Payments have been issued to 76,623 former students. An important deadline is now approaching. Under the terms of the Settlement, September 19, 2011 is the Common Experience Payment (CEP) Application Deadline.
How do I apply for CEP? To apply for a Common Experience Payment, please complete and submit an application form by September 19, 2011. To get an application form, please call 1-866-879-4913, go to the website or visit a Service Canada Centre. Service Canada staff members are available to help applicants complete the CEP application form.

What if I have already applied for a Common Experience Payment? If you have already applied please do not submit a new application. If you have not received a decision or have questions about your CEP application, please contact the phone number below.

What is a Common Experience Payment? It is a payment made under the Residential Schools Settlement Agreement to former students who lived at a recognized Residential School under the Residential Schools Settlement Agreement and who were alive on May 30, 2005. Payments are $10,000 for the first school year (or part of a school year) plus $3,000 for each additional school year (or part of a school year).

Which schools are included? The list of recognized Residential Schools has been updated. Six Residential Schools have been added; decisions regarding a number of other schools are in progress. A complete and updated list of recognized residential schools is available at the website listed above.

What about the Independent Assessment Process? The Independent Assessment Process (IAP) is a separate out-of-court process for the resolution of claims of sexual abuse, serious physical abuse, and other wrongful acts suffered at residential schools. The IAP is a complex process and it is strongly recommended that you hire a lawyer if you wish to submit an IAP application. CEP and IAP are separate processes and former students may apply for the CEP, or for the IAP, or for both the CEP and IAP. The deadline to apply for an IAP payment is September 19, 2012.

For more information call 1-866-879-4913 or visit the website. The IRS Crisis Line (1-866-925-4419) provides immediate and culturally appropriate counselling support to former students who are experiencing distress.

Community constable in Hay River
Steve Beck part of first troop to graduate training in Regina
by Paul Bickford
Northern News Services
Hay River

Hay River is getting the Northwest Territories' first and, for the time being, only aboriginal community constable, a new position created by the RCMP. The community's Steve Beck was among the first troop of aboriginal community constables to graduate on April 12 from the RCMP's training academy in Regina. Beck, 36, said it has always been his ambition to join the RCMP. "I think it's every kid's dream at some point, whether they admit it or not, to be a Mountie," he said, noting the force is respected around the world. As an aboriginal community constable, he will be an armed, uniformed officer with the rank of special constable. His focus will be crime prevention and reduction, and building relationships between the community and the RCMP.
STEVE BECK: Hay River resident graduates RCMP training to become the NWT's first aboriginal community constable.

"It will be something different," he said. "Hopefully, we can make a change." Beck, who is of Metis heritage, is anxious to get to work at the Hay River RCMP detachment, beginning April 18.

Bridging the gap

Among his goals is to start bridging the gap between young people and the police. Before joining the RCMP, he spent 17 years with the GNWT's Department of Justice, including the past 11 years as a deputy sheriff. In addition, he runs a trapping program to take young people on the land to experience a traditional lifestyle. Although he is a fully-certified police officer, Beck said his main priority will not be leading investigations. "They want to make sure that my priority is trying to stop the crime before it happens, as opposed to just reacting to it," he explained. Beck was among seven cadets to graduate as aboriginal community constables out of the 10 who started the training. The troop began training in November as part of a three-year pilot project. Beck is being welcomed by the Hay River detachment. "Having an aboriginal community constable at the post is definitely going to help," said Cpl. Robert Gallant. "It's going to help bridge the gap with aboriginal people." Gallant explained Beck will help other members of the detachment better understand aboriginal culture. The corporal called Beck a great person, noting his long service as a deputy sheriff and his family's involvement in dog racing and other traditional activities.

Paul Bickford/NNSL photo
IMAGINARY RIDE Three young children from Fort Smith – left to right, Finnlay Rutherford-Simon, Anais Aubrey-Smith and her brother Leif Aubrey-Smith – go for an imaginary ride on a parked snowmobile at the Hay River Ski Club on April 9. The children and other family members were in Hay River to compete in ski races.
It had originally been hoped three aboriginal community constables would have been coming to the NWT, specifically Fort Smith and Yellowknife along with Hay River. However, Cpl. Wes Heron, media relations officer with the RCMP's 'G' Division, said the two other NWT recruits didn't complete the training in Regina, explaining that can happen for many reasons. Heron said that means the NWT will have only one aboriginal community constable for the duration of the pilot project. "Steve is going to be a role model," the corporal said, adding Beck will also help with recruiting and as a contact with aboriginal groups. There is one other graduate from northern Canada – Adrian Pilakapsi of Rankin Inlet, Nunavut. Four graduates are from Manitoba, while another is from Alberta. "Among other qualities, these cadets will bring to our organization linguistic, cultural and community skills and knowledge that go beyond those taught at depot," said RCMP Commissioner William Elliot in a news release, noting aboriginal communities identified the need for an alternate service delivery. The aboriginal community constables will enhance but not replace the work of general-duty constables and can provide tactical, enforcement and investigational support to other officers, if required.
STREET talk with Guy Quenneville
WHATI/LAC LA MARTRE
Once the snow has cleared, what are you looking forward to doing?

Karan Bernice Nitsiza: "Walking to the airport."
Joseph Football: "Going fishing with my dad."
Benzi Nitsiza: "Jogging and walking."
Janita Bishop: "Biking and walking."
Helen Wedamin: "Boat riding with my dad."
Freda Flunkie: "Walking around town."

Inuvik to Tuk and back for $20,000
Marathon raises money for homeless shelter
by Andrew Livingstone
Northern News Services
Inuvik

In the early morning hours Sunday, Alicja Barahona wondered if she would make it to the end of her long journey. The ultra marathon runner was 48 km outside of Inuvik, on the last leg of her 374-km trek from Inuvik to Tuktoyaktuk, running to raise money for the Inuvik Homeless Shelter.
Cynthia Wicks was with her at the time, supporting the runner during a dark and frigid morning – temperatures hovering around -35 C – as she faced the equivalent of her ninth full marathon in four days. "She said 'I don't know how I'm going to do this,'" Wicks said. "It's the first time I've seen someone get to that wall and hit those mental and physical challenges. It was hard for us because there was nothing we could do. It was scary. (She faced) pure exhaustion and her blood sugar levels were dropping. She was just depleted of all energy." But Barahona, 57, was joined by about a dozen runners from Inuvik for her last 30 km and completed her journey Sunday afternoon, helping to raise more than $20,000 for the Inuvik Homeless Shelter in the process. For Wicks, organizer of the Arctic Challenge – who also ran approximately 60 km with Barahona from April 6 to 10, including those last 30 – it truly showed just how incredible the human spirit can be. "It's been amazing. It's changed a lot of people's lives, the inspiration and strength she's shown," Wicks said only moments after reaching the Inuvik Legion at 4 p.m. to celebrate with about two dozen residents and her fellow runners and volunteers. "To have the support of the community, it's been great." Runners hugged and cheered upon arriving at the Legion, chanting Barahona's name as she came out of the support van that travelled with her for every kilometre she ran. For the Polish-born, New York-based Barahona it was a test of unimaginable endurance. For the 12 or so runners who joined Barahona to complete the final 30 km, it was an opportunity to be part of something special. Soura Rosen, who spent the final two days on the run supporting Barahona, said it was an inspiring moment to be a part of. "We'd all take turns running with her," she said of how the support crew took turns braving the cold weather to run with Barahona. "It was also really inspiring. 
You're watching this woman go, go, go, and you'd run with her for some time, and then get back in the car and you'd watch her and that's where it would get inspiring." Alana Mero, town councillor and member of the Interagency Committee, said the goal of the shelter is to teach people how to overcome challenges in life and the run showed that overcoming large obstacles can be done. "We don't have the resources that people have down south and the challenges people face here can be more life-threatening here than elsewhere," she said. "Just as the run was very hard, for some in our community life is very hard. We want to teach people to get past the barriers that are there. It takes a rare person to run to Tuk and back. Thank you for giving us your heart and soul." Kathleen Selkirk, coordinator for the shelter, said she spent two days with Barahona on her trek and was in awe of the 57-year-old's determination to complete the run.

Andrew Livingstone/NNSL photo
Runners with the Inuvik Running Club brave frigid temperatures to complete the final 30 km of ultra marathon runner Alicja Barahona's quest to raise money for the Inuvik Homeless Shelter.

"She was just killing herself for us," Selkirk said at the celebration event. "She believed in our cause and it was amazing. "It's an amazing thing to see people coming together for the cause." Selkirk said the money raised will have immediate impact in keeping the shelter functioning. She said in the long run, it may have an even bigger impact because of the national and international attention the Arctic Challenge received. Wicks said there are plans to make the run an annual event with hopes of attracting other ultra marathoners to the region to try their luck at Barahona's incredible achievement. "I think it shows a lot of people, it gives them hope," she said. "She's 57 and she's run almost 400 km and (the runners are) so inspired by that and it gives them the sense they can do anything.
Most of us have just run the most we've ever done. It's an incredible feeling."

Around the North

Paul Bickford/NNSL photo
BOTTLE DRIVE Five-year-old Charlotte Buth, left, and her sister Sarah Buth, 7, were among the hockey players going door-to-door on April 10 for Hay River Minor Hockey's annual spring bottle drive.

Fort Smith joins 'Not Us!'
Thebacha/Fort Smith

Fort Smith is the latest NWT community to join the 'Not Us!' campaign against drugs and alcohol. 'Not Us!' is an initiative of the GNWT's Department of Justice. It provides funding and support for communities to promote drug-free, healthy lifestyles. The Fort Smith project will receive $10,000. The sponsored agency is the South Slave Divisional Education Council, and the initiative is supported by many other Fort Smith organizations. "With 'Not Us!', communities are taking a stand and telling drug dealers they are not welcome, and teaching our kids that using drugs and alcohol is not acceptable," said Justice Minister Jackson Lafferty in a news release. "Through strong partnerships, we are moving towards safer, healthier communities." Since the launch of 'Not Us!' in March of last year, the GNWT has funded campaigns in Hay River, Inuvik, Dettah and Ndilo. Several other communities are hoping to launch their own initiatives in the coming year. – Paul Bickford

Language program on reserve
K'atlodeeche/Hay River Reserve

Beginning in the fall, Aurora College will offer its aboriginal language and culture instructor program on the Hay River Reserve. The Department of Education, Culture and Employment is providing $300,000 to the college to cover program expenses. The college plans to deliver the program in partnership with K'atlodeeche First Nation and the South Slave Divisional Education Council.
Graduates of the two-year diploma program on the Hay River Reserve will be eligible for certification to teach the South Slavey language and culture from kindergarten to Grade 12 in NWT schools. "It is our goal, under the Northwest Territories Strategy for Teacher Education, to increase the number of aboriginal language teachers in all regions of the NWT," stated Education, Culture and Employment Minister Jackson Lafferty in a news release. The program was offered for the first time in Behchoko in 2007. The development of an Inuvik-based program is also underway. – Paul Bickford Elder activities Lli Goline/Norman Wells Elders in the community looking to get out for some exercise, good food and health education can do so for the next two Wednesdays, April 20 and 27 at the community hall. Elders are welcome to attend Active Living at 11 a.m. which will be followed by a short health presentation. Rides are available from the local taxi company. – Andrew Livingstone Trade show in Fort Smith Thebacha/Fort Smith The sixth annual Fort Smith Trade Show is set for the end of this month. It is scheduled from 9 a.m. to 5 p.m. on April 30 at Centennial Arena. The event is a joint initiative of Thebacha Business Development Services, the Fort Smith Chamber of Commerce, and the Department of Industry, Tourism and Investment. – Paul Bickford Science fair winners Aklavik Three of the 10 students from Aklavik's Moose Kerr School took home medals from the Beaufort-Delta Regional Science Fair on April 9. Alannis McKee, in Grade 9, took home second place in the junior division and was selected as a representative of the Mackenzie Delta at the Canada Wide Science Fair in Toronto from May 14 to 21. Two other students, Carly Sayers and Theiron John in Grade 10 took home first and third in the senior division for their projects. 
– Samantha Stokell Rubber Boots Festival on its way Radilih Koe'/Fort Good Hope This year the Rubber Boots Festival will be split up over two weekends, to accommodate Holy Week – the sombre last week of Lent leading up to, but not including, Easter Sunday. Children and youth activities will be held on April 24 and 25 – Easter Sunday and Monday – and include an Easter egg hunt, snowshoe races, arrow shooting and lots of races such as three-legged, plank walk, piggyback, rubber boots, relay and gunny sack. Other games such as musical chairs and back push will be there for the kids to have fun too. The adult events will be held on April 29 and 30. Events for those over the age of 16 include a traditional tent set-up and judging, snowshoe races, arrow shoot, axe throw and tea making. On April 30 there will be a traditional talent show with animal calling and jigging. – Samantha Stokell Gwich'in Day in Fort McPherson Tetlit 'Zheh/Fort McPherson On April 22 the hamlet of Fort McPherson will celebrate the annual Gwich'in Day. Fun games and a cookout will be held to entertain the community and celebrate the day the Gwich'in signed their land claim. It'll be a big event, with games and activities for elders, children and adults. It'll start in the afternoon at the Charles Koe Building and go through the afternoon. Another goal this year for the event will be to promote healthy living and traditional culture. There'll be door prizes and prizes for events held. The Gwich'in signed a land claim agreement with the federal and territorial governments on April 22, 1992, 71 years after Treaty 11 in 1921. – Samantha Stokell Four-on-Four hockey tournament Paulatuk This past weekend, the community of Paulatuk was to gather at the Leonce Dehurtevent Arena for a Four-on-Four hockey tournament. Players were to submit their names and then the teams were to be formed randomly, by pulling names out of a hat.
A skills competition was also scheduled to give players a chance to shine. The competition was to include fastest skater, puck handling, shooting accuracy and the shootout. The final games were to be held on April 17 and the winners were to receive bragging rights. – Samantha Stokell Community Corporation meeting Ulukhaktok/Holman The Ulukhaktok Community Corporation will meet on April 21 for its annual general meeting. The corporation is a division of the Inuvialuit Regional Corporation and has many goals and objectives. It sets criteria for membership, identifies active members of the corporation, governs matters of local concern, exercises control over any development activity, provides grants for the community and establishes hunter and trapper committees. – Samantha Stokell Spring break for students Tetlit 'Zheh/Fort McPherson Students staying in Fort McPherson over their spring break will have a bounty of fun activities to choose from. School's out from April 18 to 26, and each day the recreation department will hold different activities to entertain the youth and children. There will be outdoor events such as pond hockey and sledding, as well as indoor fun like curling and movie nights. Food, of course, will be offered, including hotdog lunches or pizza. For more information, call the hamlet office. – Samantha Stokell Helping families grow Kathleen Roberts building better family relationships through literacy by Andrew Livingstone Northern News Services Lli Goline/Norman Wells For Kathleen Roberts, learning doesn't begin on the first day of school, it begins on the first day of life. The adult educator in Norman Wells, who moved to the community in January this year from Yellowknife, said the opportunity to put together a family literacy program in the Sahtu community was a chance for her to give back to her new community.
Roberts is running a 13-week family literacy program every Sunday at the community literacy centre. She said the program is a chance for parents and guardians to connect with their little ones on the importance of literacy. "We try to address the many ways that families can learn together and parents and caregivers are the children's first and most important teachers," she said. "We try to focus on things you can do at home that are literacy activities, anything from writing a grocery list together or writing a thank you note or going out on the land and learning about culture and traditions and cooking together." Roberts said they focus on activities that help promote literacy, like singing songs and reading books – but also to provide parents with tools to help with reading and writing in the home. "It's a chance to help prepare children for school, too," she said. Through funding from the NWT Literacy Council, Roberts said a big focus of the program is to help make reading fun and "both parents and children learn to develop a love for reading rather than it being a threat. "Part of it is to enrich relationships in families through spending time with each other and parents become more interested in their own reading," she said. The program's first session was on April 9 with 13 participants – five adults and eight children – coming out to take advantage of the event. Roberts said parents get to take home books they read together with their children during the day's programming. She said she includes a list of tips on how to bring literacy into everyday activities at home. NORTHERNER photo courtesy of Kathleen Roberts Kathleen Roberts is a Norman Wells adult educator dedicated to improving the overall quality of literacy among families and young children. Roberts has organized a series of weekend literacy events at the community learning centre to help promote family literacy.
"For instance, the book was called On Mother's Lap and I included some sleep tips for every child," she said, adding they do songs and rhymes that relate to the theme. "I also encourage parents to bring books from home that they read to their children to share with everyone." Roberts said it's been a while since there has been a literacy program in the community. Having done a program similar to this a year ago, she said it was a good opportunity to give back. "It benefited the community and gave mothers and children a chance to socialize because sometimes you can feel very isolated as a caregiver," she said, adding parents can compare notes on caregiving and grow from that. "They have a chance to meet new friends and what I really like is when dads can come. "I see this as a chance to do something to help the community and it's a need that I kind of picked up on here." Roberts feels strongly that learning is a lifelong skill. "It begins from day one," she said. "It's why I'm doing this." Head over heels for Judo Fort Smith's Ryan Tourangeau, left, is instructed by Yellowknife's Mario DesForges, the president and head coach with the NWT Judo Association. JUDO Feature by Paul Bickford Northern News Services Hay River The first gathering of NWT judo enthusiasts – instructors and learners – took place in Hay River this month. The April 8 to 10 camp at Ecole Boreale was held by the NWT Judo Association. It brought together about 40 young people and the NWT's four instructors known as sensei – two black belts and two brown belts. (Sensei is a Japanese word meaning master or teacher.) The two black belts are Mario DesForges and Maxence Jaillet, both of Yellowknife. The two brown belts are Dean Harvey of Fort Simpson and Chantal Rioux of Yellowknife. The students came from Yellowknife, Fort Smith, Fort Simpson and Hay River. It is hoped the territorial judo gathering will become an annual event.
Chris Stipdonk of Fort Simpson throws Maxence Jaillet over his shoulder. Jaillet – a black belt and vice-president of the NWT Judo Association – took repeated falls to teach the move's proper technique. Six-year-old Mia Steinwand of Hay River does some warm-up exercises. Dejah Clarke of Hay River practises a judo move. Making a house a home Monique Gagnier works in Wesclean's new section for home decor by Paul Bickford Northern News Services Hay River Monique Gagnier helps her customers find that extra decorative touch to turn a house into a home. "It all ties together with the painting, the decorating, the pictures, the vases, the knickknacks," she said. "It all goes hand-in-hand." Since September, Gagnier has worked in the new home decor section of Hay River's Wesclean Northern Sales Ltd., which started out in the 1970s selling industrial cleaning supplies. "It's a new venture for me and it's a new line for Wesclean," she said of home decor, adding she is also responsible for residential flooring sales. Gagnier, a 36-year-old mother of two, is new to home decorating and flooring sales. Her previous job was as an office manager at an auto body company. "I just needed a change," she said, adding she has always been interested in home decor. Wesclean's new section features things like decorative vases, wall art, metal abstract art and wooden balls for coffee tables. "It just adds that little extra touch to your home that maybe it's missing," Gagnier said. Wesclean president Brad Mapes offered Gagnier the opportunity to work at the company and she said she jumped all over it. At about the same time, Wesclean also launched a home decor section at its Yellowknife branch, Aurora Decorating Centre. Gagnier said this is the time of year when people think about freshening up their homes. "Everyone is so excited. It's spring," she said. "They want something fresh. They want something new in their house that's bright."
Gagnier said she offers her own personal touch when helping customers, but always remembers that not everyone will like what she likes. "So you have to have a broader mind, because you're helping a wide array of people and not everyone has the same taste," she explained, adding she listens to what the customer wants and goes from there. Gagnier, who was born and raised in Hay River, said she enjoys working at Wesclean. ON THE Job Paul Bickford/NNSL photo Monique Gagnier has been working since September in the home decor and flooring section of Hay River's Wesclean Northern Sales Ltd. "Obviously, it's still new to me," she said. "I haven't even been here a year yet. It's something different and it's something that I'm very interested in, so my days go by really fast." She especially enjoys when customers come back and say they love the changes to their homes. "It gives you a nice feeling knowing you can help them out and they feel good about that." Gagnier said the biggest challenge was learning about flooring products, which include things like hardwood, laminate, carpet, linoleum, ceramic tile and cork. As a homeowner, she knew some things about flooring, she said. "But to get in there and be able to help your customers there's a lot of info that you need to research." While Gagnier is ready to talk home decor, flooring and curtains, customers often want to talk about something else – curling. Gagnier competed four times at The Scotties, the national women's curling championship, and other Canadian tournaments. "Curling was my life for years and years growing up. It's still a big part of my life, just not to the extent that it once was," she said, adding she is now a recreational curler. 
Helicopter voting for hunters: MLA Norman Yakeleya says election may be swayed by spring hunt by Paul Bickford Northern News Services Tulita/Fort Norman Some aboriginal hunters in the NWT won't be casting ballots in the May 2 federal election because they will be out in the bush on traditional spring hunts. However, Sahtu MLA Norman Yakeleya is suggesting a solution for Elections Canada – a helicopter-borne polling station to go to the hunters. "Every vote counts and everybody has the right to participate in a democratic process such as voting for our representative to speak for us in Ottawa in the federal government," he said. "It's still not too late for Elections Canada to go around and have the people vote. It's nothing for them to fly to, for example, a camp out in Tulita called Willow Lake for the people to vote." Yakeleya said Elections Canada didn't properly take into account people going on the annual spring hunts. "This is very culturally insensitive to the aboriginal people and their cultures and traditions," he said. NORMAN YAKELEYA: Sahtu MLA says Elections Canada has been culturally insensitive by not taking into consideration the traditional spring hunt. Yakeleya said, while chartering a helicopter would be expensive, Elections Canada could chalk it up as a lesson learned. The MLA estimated 400 people, including about 150-200 eligible voters, are heading to the spring hunt from the Sahtu communities of Tulita, Deline, Fort Good Hope, Colville Lake and Norman Wells. Of those hunters, he said a high percentage won't be able to vote because of the timing of advance polls and the election. In addition, he said spring hunts occur in other regions of the NWT. Yakeleya believes enough hunters will not vote as to impact the election outcome. "So Elections Canada is somewhat in the driver's seat here in terms of determining who may be our next MP in Ottawa," he said.
Yakeleya explained the spring hunt cannot be delayed because hunters travel by snowmobile and the snow is disappearing. The MLA himself will head out on the spring hunt in late April, but will vote before then. Eddy McPherson Jr. of Tulita will miss the election as he will be about 50 km outside the community on the spring hunt. McPherson, the president of the Fort Norman Metis Land and Financial Corporation, agrees with Yakeleya that Elections Canada should seek out the hunters' votes. "It's a big thing missing the election, but, if they had a system where they could say 'there were 20 voters or more, they should go out to these bush camps'," he said. McPherson, who was to head out on the land late last week and not return until May 15, said most hunters stay about a month, while others spend a couple of weeks in the bush. While declining comment on specific situations, Elections Canada spokesperson Diane Benson explained there are a number of ways to vote in the electoral process. That includes dropping into the electoral office in Yellowknife to fill out a ballot, voting by mail by requesting a special ballot, and casting votes at advance polls or on election day. However, Yakeleya said Elections Canada's mail-in process is unrealistic for communities in the Sahtu. "In theory it sounds like this is a good process," he said. "However, you know at times the mail to get to our communities sometimes takes a week." Plus, he said English is the second language for some of his constituents and they don't know the mail-in process. "No one has come around to explain it in their language." Benson said Elections Canada reaches people in different languages. "I do know that we do provide voter information in 11 aboriginal languages," she said. Advance polls will take place in Inuvik, Norman Wells, Yellowknife, Behchoko, Fort Simpson, Hay River and Fort Smith on April 22, 23 and 25, Benson added. "Those dates are actually set by law."
Roxanna Thompson/NNSL photo SAWING TO THE FINISH LINE Margaret Jumbo gives a last burst of power while trying to secure a fast time during the women's log sawing competition at the Ndu Tah Spring Carnival in Trout Lake last month.
Entertainment & Arts ENTERTAINMENT HOTLINE • ADRIAN LYSENKO • E-mail entertainment@nnsl.com Inuvik singer-songwriter nominated for award Page 28 NWT teams perform well at Balsillie Cup Page 30 Documenting a music legend's legacy Filmmakers aim to create documentary about the life of Kole Crook by Adrian Lysenko Northern News Services NWT TEN YEARS after his tragic death, Hay River fiddler Kole Crook's legacy has carried on through the creation of the Kole Crook Fiddle Society and in the memories of various people he inspired during his short life. Now, two Northern filmmakers are in pre-production of a documentary that will tell Crook's story from the perspective of the people who knew him, played music with him or were touched by his genuine character. "We're going to travel where he travelled, we're going to try and show the North the way he saw it in his time," said filmmaker Bob Ellison, who is co-producing and co-directing the project along with filmmaker Keith MacNeill. "We're trying to rebuild his life story, trying to get a sense of where he was and what he did and because he's not here anymore we need to rely on his friends, relatives and acquaintances to tell us those stories." Kole was one of the Northwest Territories' best known fiddlers. On Dec. 31, 2001, Crook was on his way to a New Year's Please see Known as, page 28
ENTERTAINMENT Notes with Adrian Lysenko entertainment@nnsl.com Arts council appointed NWT Four Northwest Territories residents were appointed to the NWT Arts Council by Minister of Education, Culture and Employment Jackson Lafferty last Tuesday. They are Margaret Nazon of Inuvik, Vivian Edgi-Manuel of Fort Good Hope, Barb Tsetso of Fort Simpson, and Leela Gilday of Yellowknife. The council exists to promote the arts in the NWT. It makes recommendations to the minister on financial contributions to NWT residents for creative artistic projects. Funding for literacy projects NWT The NWT Literacy Council is accepting applications for the 2011-2012 funding year for projects aimed at younger children. The projects must benefit pre-school children, up to the age of six, and their families. Funding will be given up to a maximum of $3,000 and can go toward more than one project. At least one person on the project must have family literacy training through the council. Applications can be submitted now and into next year, but projects must be completed by March 31, 2012 and a short report will be required as part of the funding agreement. For more information visit the NWT Literacy Council website. – Andrew Livingstone The Gumboots are coming Hay River and Inuvik Yellowknife folk trio the Gumboots are going to be playing in Hay River and Inuvik this week. With a variety of musical instruments that include guitars, harmonicas, tin whistles, a mandolin, fiddle, recorder, accordion, bodhran, banjo, and occasionally the cello, the Gumboots perform and record original folk music focusing on Northern history. The trio will be playing at the Riverview Cineplex in Hay River on Thursday and at the Igloo Church (Our Lady of Victory) in Inuvik on Saturday. Tickets are available on the Northern Arts and Cultural Centre's website or at the door of the venues.
NNSL file photo Filmmakers Bob Ellison and Keith MacNeill are in the pre-production phase of creating a documentary about the life and impact of the late Hay River fiddler Kole Crook (pictured). Singer songwriter nominated for award Leanne Goose sets sights on Native American Indigenous Image Award for best country album by Andrew Livingstone Northern News Services Inuvik SINGER SONGWRITER LEANNE GOOSE'S third studio album, Got You Covered, is getting recognition for its down-home country sound and was recently nominated for a Native American Indigenous Image Award for best country album. "I'm pretty proud, it was unexpected," Goose said. "You see the advertising come out for award shows and you don't really expect if you submit you'll be nominated. There is so much great talent out there and to be selected as one of the people to be nominated, I'm quite happy and very proud." Goose grew as a musician by playing at community events like the annual Muskrat Jamboree and Northern talent shows – essentially any event where musicians were asked to perform. Got You Covered is an album for all the people that have been there to support her through her career, she said. She felt compelled to return to her country roots after releasing Anywhere, her first full-length album. She described that album as a full-on rock album that garnered her more than a half-dozen nominations at the 2008 Canadian Aboriginal Music Awards and Aboriginal Peoples Choice Awards. "I got a lot of e-mails and comments from people who wanted to hear the old country songs and they started to give me lists of the songs they wanted to hear when I played shows," she said, adding eight of the tracks on Got You Covered were the most requested songs by her fans. "You have to go back and acknowledge the people who got you here.
Got You Covered is a tribute to home and my friends and family and supporters, and I wouldn't be able to do what I'm doing today if it wasn't for them. "It's those connections I can never forget and I'm eternally grateful for." With the success of her most recent album, Goose is already planning her next release. She was recently busy writing a tune about her mother's residential school experiences. "She told me the story about when the plane came to get her," she said. "I had the first line and then I had the second and then I had 12 lines." Goose said her song ideas stew in her mind, something clicks and then it pours out onto paper. "It'll start with something, I'll hear coffee perking or the phone will ring or someone will say something and it'll have a rhythm to it," she said of her song-writing inspiration. The yet-to-be-titled album posed a chance for her to return to writing after spending some time living in Winnipeg, where she took music lessons and searched for more exposure. "It's different when you live up here," she said. "There is a lot of homegrown talent. To move into a bigger arena and you're in competition for gigs and you're trying to make sure you get airplay and exposure and you're making enough money to put bread on the table, it's a hard go. "I wanted to get back to writing. I've been home for a year and I've got three songs that I feel are strong and I've got five or six in the works and hope to have the new album recorded sometime in the fall." Andrew Livingstone/NNSL photo Singer-songwriter Leanne Goose has been nominated for a Native American Indigenous Image Award in the category for Outstanding Country Album. Goose is hoping to attend the awards ceremony in Albuquerque, New Mexico, on April 29 and is fundraising towards it. "It's an expensive trip to make so I'm hoping I can raise enough money to go," she said.
Known as 'an old person in a young person's body' Documenting, from page 27 Eve performance in Norman Wells when the plane he was travelling in crashed into a steep ridge south of Fort Good Hope. "I had met Kole in the summer of 2001 and been really impressed. I was really blown away by his mastery of music then and was quite shocked, like everyone else, six months later (when he died)," said MacNeill. The idea of the documentary started in 2007 when Ellison came across a Kole Crook Fiddle Camp in Fort Providence while travelling across the North producing Our Dene Elders with Native Communications Society. "You see both sides of life in the North and sometimes when you see something this positive it's pretty overwhelming," said Ellison. "I walked in on an orchestra in a school in Fort Providence and it was incredible." Later, the filmmaker approached MacNeill with the idea for the documentary. "Bob first told me about his idea, told me about all the different people who had a Kole story and all the different, really great moments that people remembered about him and the really positive way he had touched a lot of people's lives," said MacNeill. "This was exactly the kind of idea that I was really interested in pursuing." From communities like Fort Good Hope, Deline, Hay River, Fort Smith, Inuvik and all the way to the east coast of Canada, the filmmakers found someone who had a story about Crook's positive influence on their life. "The thing that has struck me is, with every person we have talked to, we discover another person who has a story or another person who has a great memory of Kole," said MacNeill. "He was well known among people all over the North as an old person in a young person's body, and that says a lot to me about the values the guy had, his respect for traditional beliefs, his respect for elders and his maturity and the way he could reach out and help people."
MacNeill and Ellison are still in the research, fundraising and planning phase of the project and hope to begin filming this spring through to the summer and fall. They are also still looking for more stories. "We are really encouraging anybody who has a picture, a video, a sound recording, a memory, a story, anything they want to share about knowing Kole and the effect he had on people and the music, but also just the warmth and the friendship," said MacNeill. The filmmakers can be contacted at koleproject@hotmail.ca. WIN A PRIZE! for the best story about this picture WORD quest Congratulations to Preston Tutcho, last week's Word Quest winner. Ts'ezeh means yelling in North Slavey. Word Quest continues with: Ascenseur Hint: It helps you up. The official languages in Nunavut are Inuktitut, English, French and Inuinnaqtun. The official languages of the Northwest Territories are Cree, Chipewyan, Inuvialuktun, Inuktitut, Inuinnaqtun, Tlicho, North Slavey, South Slavey, Gwich'in, English and French. That's a total of 11 language groups for the two territories. Test your language skills! If you know what language the Word Quest word is in and what it means, you may win a prize from News/North. A winner for this week's Word Quest will be drawn from all the correct answers received. Answers should give the meaning of the word and the language that it is in. Please include your name, address and telephone number. Send your entries to: Word Quest, Northern News Services Ltd., Box 2820, Yellowknife, NT X1A 2R1 or fax us at (867) 873-8507. Northern News Services acknowledges the assistance of the language commissioner of Nunavut and the Teaching & Learning/Aboriginal Language Centres of the NWT for their assistance. Every picture tells a story. Take a good look at this photo and write a short story based on what you see. Your story should be no longer than 10 sentences.
Winning entries will be published on this page in an upcoming issue of the newspaper. Each week we'll give away a prize for the best story. Send entries to Paper Game. You can e-mail editorial@nnsl.com, or fax us at (867) 873-8507 or mail to Northern News Services Ltd., Box 2820, Yellowknife, NT, X1A 2R1. Entries must be received within two weeks of this publication date. Be sure to include your name, age, address and telephone number. Expect four to six weeks for delivery of your News/North hat. If you don't receive it after that time call collect (867) 873-4031. Words and translations provided by Department of Culture, Language, Elders and Youth, GN Iliqusiliriyiit, Uqausiliriyiit, Innamariliriyiit, Uvikkaliriyillu MEMORY test 1: Who won the D division of the Balsillie Cup in Yellowknife? 2: What is the name of the late and well-known NWT fiddler who will be featured in an upcoming documentary? 3: Which important skills will a new program in Whati strive to teach residents? 4: Who is the NWT's first aboriginal community constable with the RCMP? Where will he be working? 5: What do hunters who will be out on the land during the May 2 election want Elections Canada to do to help them vote? Paul Bickford/NNSL photo ROLLING ALONG On April 9, four Hay River women – left to right, Caryn Hirst with son Hudson, Nikki Ashton with daughter Jersey (obscured), Kelsey Gill with Ashton's daughter Avaia, and Sarah Froese with daughter Victoria – push strollers along Hay River's Miron Drive, a popular area for people out for a walk. Good advertising is good business!
For advertising information, call collect (867) 873-4031 Sports & Recreation Money-making course Page 33 SPORTS HOTLINE • JAMES McCARTHY • E-mail sports@nnsl.com Fast and female Page 31 Communities win big at Balsillie Cup Hay River and Fort Simpson grab titles at annual oldtimers hockey tournament by James McCarthy Northern News Services Somba K'e/Yellowknife The Hay River Rusty Blades and the Fort Simpson Sub-Arctic Eagles made the trip to Yellowknife worthwhile. Both teams came away from the 28th annual Canadian North Balsillie Cup in Yellowknife on April 10 with titles in their back pockets. The Rusty Blades managed to knock off Yellowknife's Coldwell Banker Blades by a score of 5-2, while the Eagles defeated the Hay River Rusty Blades Ds in the B division championship final, 6-3. Hay River goaltender Marc Miltenberger said the score wasn't really indicative of the game. "It was good, tough hockey," he said. "We got down in the first three minutes but came back and got it done." Indeed, the Rusty Blades were behind 2-0 after the first three minutes of the game but managed to fight back and score five unanswered markers to seal the deal. Miltenberger said the plan was to key in on two of the Blades' better players and ensure they didn't have room to move. "Rob Redshaw and John Kelly are two of the better players from the (Yellowknife) league and we planned for that," he said. "We planned to cover those guys off and once we managed to do that, the rest was just playing good, hard hockey." The Eagles were a mixed Please see Fort, page 32 SPORTS Check with James McCarthy e-mail: sports@nnsl.com Fishing for votes NWT The NWT is right in the thick of things when it comes to the World Fishing Network's search for the "Ultimate Fishing Town" in Canada.
Three NWT communities – Yellowknife, Lutsel K'e and Deline – are on this year's list for the North region and Lutsel K'e is currently the top NWT community as of press time with 18 votes. Yellowknife and Deline are well back of the leaders. Iqaluit presently sits as the top Northern community. You can vote by visiting the World Fishing Network website. Should your chosen community receive enough votes to be declared one of the top three towns in the North region, it will face the top 20 towns in the final round of voting, which begins on May 10. The winning town will receive a $25,000 donation for fishing-related causes in the town, 10 WaveSpin reels and a feature about the town on the World Fishing Network. Sarah Daitch one of the best Thebacha/Fort Smith Sarah Daitch had a great season and her efforts were recognized by Cross Country Canada earlier this month. The governing body for cross-country skiing released its final standings for the 2010-2011 season on April 6 and the Fort Smith native was the fourth best skier in the Haywood FIS Series, featuring skiers from across Canada and parts of the United States. The series included sprint, middle and long-distance races. Daitch was also fifth overall in the Teck Sprint Series. Soccer teams lining up Somba K'e/Yellowknife The 2011 edition of Junior Super Soccer is coming up fast and teams are already beginning to line up for the big weekend. This year's tournament will take place from April 28 to May 1 and 17 teams from both the NWT and Nunavut have already confirmed their attendance as of press time. The NWT teams include entries from Inuvik and Lutsel K'e along with the Yellowknife schools, as well as Baker Lake, Taloyoak, Iglulik, Iqaluit and Rankin Inlet from Nunavut. 
James McCarthy/NNSL photo
Hay River Rusty Blades goaltender Marc Miltenberger makes a stop in traffic during B division action at the 28th annual Canadian North Balsillie Cup in Yellowknife on April 10.

Fast and female in the NWT
Almost four dozen young skiers get chance to attend girls-only camp

by James McCarthy
Northern News Services
Lli Goline/Norman Wells

It was a skiing camp just for the girls in Norman Wells. The Sahtu community hosted the second annual Fast and Female ski camp, which wrapped up April 10. A total of 46 girls from around the NWT got the chance to improve their skiing and life skills under the watchful eye of NWT cross-country skiing star Sarah Daitch and legendary NWT skier Sharon Firth, both of whom served as ambassadors for this year's camp.

Daitch said it was a chance for girls between the ages of nine and 19 to come together and ski in a non-competitive environment. "We wanted to promote friendship between everyone, have everyone meet new people and collaborate together on being active and healthy," she said.

While skiing was the main focus of the weekend, there were also workshops on leadership and life skills, video presentations and talks with Firth, and even some non-skiing workouts involving gymnastics from instructor Desiree Gautreau from NWT Gymnastics and a Zumba workout courtesy of instructor Tara Newbigging.

"The motto of our camp this year was 'Empowerment Through Sport,'" said Daitch. "We wanted the girls to experience and try all sorts of new things. We even had some biathlon work and the girls really took to that."

The camp was brought to the Sahtu to allow more girls from the northern portion of the territory to take part, including Colville Lake. The community was able to send six girls to this year's camp with their chaperone, Marie LaForme, a teacher at Colville Lake School. She said the girls were extremely excited about going, especially before the flight out.
"They had made calendars a couple of weeks before leaving and they were marking off the days," she said. "There's not a lot of opportunity to get out and do events like this and it was exciting for them. They really enjoyed the biathlon part."

Perhaps the most impressive part of the camp itself was the cost, or lack thereof. None of the girls had to pay anything out of pocket to get to the camp, and Daitch said it took a lot of work to pull that off. "We had to do a lot of fundraising, but we wanted everyone to be able to take part in this," she said.

LaForme was also happy about the no-fee deal, saying she didn't know if there would have been as many girls who would have gone otherwise. "Maybe two or three, but I know Sarah did a lot of work to make sure as many people got the chance to go," she said.

There has been some discussion about where next year's camp will be, with the frontrunners being either Hay River or Inuvik, but Daitch said she wants to see it continue for a long time. "It's so inspiring to see all of these girls come together," she said. "I don't want to stop doing this."

The bug has also hit the Colville Lake gang, as LaForme said the camp has given the community something to look forward to. "Everyone in the community was asking how they did when they got back," she said. "It just gave us a lot of positive energy and there's even two girls who say they're planning on trying out for the Arctic Winter Games next year because they saw some athletes who have inspired them, and I'm excited about that."

photo courtesy of Sarah Daitch
Fast and Female ambassador Sharon Firth, left, shares some laughs with Ruth Hanthorn of Fort McPherson during the second edition of the all-girls skiing camp in Norman Wells on April 10.

SPORTS CARD
BASKETBALL
ELDON HORASSI
Age: 14. Community: Tulita
Eldon likes basketball because of the positive influence it has had on his life. "It's let me do great things and it's helped me to become a better person in life," he said.
Eldon said he prefers watching college basketball over the professional ranks.

SPORTS Talk with James McCarthy

Jacques Lemaire retires again – for now
And why Marty Turco could have a new career in sports wagering

Northern News Services

• Jacques Lemaire has called it a career, again, following the New Jersey Devils' failure to reach the post-season. Not that he didn't try, because the Devils were on a roll, but just ran out of time. Of course, this isn't the first time Lemaire almost rescued the Devils from misery and it's only a matter of time before he's called on again. Bets on a January return next season, anyone?

• Watching the Masters golf tournament last weekend, did anyone get the feeling Rory McIlroy's Sunday brain fart was a mirror image of Greg Norman's epic 1996 collapse? Boy, it was painful to watch, but he's only 21 and he'll get lots of chances to make it right in the future. Speaking of the Masters, was it only me or is it painful to hear the commentators talk about the scenery more than the tournament itself? Yes, the Augusta National course is pretty, but I'm watching for golf, not azaleas.

• Here's a reason to watch the Indianapolis 500: Team Hot Wheels, a new stunt driving team, is planning on breaking a world record for the longest distance jump in a four-wheeled vehicle on May 29, which currently stands at 302 feet. The plan is to have a masked driver speed down a 90-foot ramp, built on a 10-story door, and leap across the track's infield. This is disaster waiting to happen.

• You knew in the age of camera phones, fax machines and dial-up modems, this was about to happen. Trading card company Panini has come out with the world's first video trading card and they will feature NBA players such as Blake Griffin, among others. Just like video killed the radio star, video has just killed the trading card.

• Professional athletes aren't the sharpest tools in the shed, as we all know, but you would think they would at least get this right. Green Bay Packers linebacker Clay Matthews on his Twitter account on the final day of the Masters: "Tiger, the gold jacket's yours... McIlroy's gonna choke!!" Too much "Happy Gilmore" watching, I see.

• Did you read about how Marty Turco was betting some fan in Montreal last week? Yeah, the Chicago Blackhawks goaltender, who hasn't played in a game since February, decided to take on a fan who bet him five bucks the 'Hawks wouldn't score in the first period. He won, then went double or nothing on the next goal, won again. Triple or nothing? Winner again, but no luck in overtime as the odds went to 5 to 1 and he paid out the fan. Good, clean fun, I say. He ain't no Pete Rose, though.

• Here's something I pulled out of my rear end: Just read about how the Dayton Dragons minor league baseball team is planning on breaking the American pro sports record for the most consecutive sellouts at 814 this season, even after losing 24 straight at home last season. Either they have the best fans in the world or Dayton really is a town with nothing else better to do.

• Separated at birth: golfer Alvaro Quiros and Borat.

• And finally, another installment of "Good Idea, Bad Idea". Good idea: Manny Ramirez calling it a career in baseball. Bad idea: Manny Ramirez calling it a career in baseball. If ever there was one person who could command an audience for doing absolutely nothing, it was Ramirez. Forget the Barry Bonds stuff. All Ramirez had to do was open his mouth and you were guaranteed entertainment. Oh well, see you, Manny. Just not in Cooperstown. Until next time, folks...

Fort Simpson comeback
Communities, from page 30

bag, with players coming from Fort Simpson, Fort Providence and four other communities, organized by Fort Simpson's Mike Squirrel. He said it took a while to get their legs going, but it all worked out in the end. "The guys were out there to have fun, winning was a bonus," he said.
The record for the Eagles was even at one win and one loss following round-robin play. Thanks to a superior goal differential of exactly one, the Eagles were the top team in the D division and met Hay River in the final. Down 2-1, things were looking grim for Fort Simpson at the halfway point of the final. The team, however, rallied back to score five more goals in the second period to win 6-3.

Fort Simpson had outskated and outshot Hay River for three periods but the Rusty Blades had a good goalie. It wasn't until the last period of the final that the goals started getting around him, Squirrel said.

Inuvik was the lone non-Yellowknife squad in the A division and managed to finish third overall.

One thing Miltenberger was happy to see was the relaxed style of officiating. He said having a couple of young referees letting the players play was a big deal. "We got to play that old-time sort of hockey and that's a bonus," he said.

As much as the hockey is the most important part, Miltenberger said being around fellow hockey players for a weekend of tournament play is the big thing. "It's more of an occasion for us as opposed to a tournament now," he said. "That's the nice thing about oldtimers hockey. You get to see old friends, you get great hospitality and it's just great fun."

Business & Labour

Course to help residents manage money
De Beers' online program discusses budgeting, saving and debt management

by Guy Quenneville
Northern News Services
Whati/Lac La Martre

There's money in diamond mining; the challenge is making it last. That's the impetus behind a new online course on money management De Beers Canada launched in Whati last week.
The interactive tool, called Your Money Matters, was developed by the British Columbia-based Association of Service Providers for Employability and Career Training (ASPECT), a non-profit association of community-based trainers, and is licensed by De Beers, owner of the Snap Lake diamond mine, for use in Whati, Gameti, Wekweeti, Behchoko, Ndilo, Dettah and Lutsel K'e.

Made up of five modules, Your Money Matters takes users through the process of reading pay stubs, banking, managing debt, budgeting and saving – crucial skills in a territory where underground miners living in remote communities can make as much as $100,000 a year but face steep living costs.

"This is long in coming and we're kind of excited about it," said Alfonz Nitsiza, chief of Whati, a community of about 500 people where more than – Please see Earnings, page 35

BUSINESS Briefs
with Kevin Allerston – e-mail: business@nnsl.com

Discovery Air launches Discovery Air Innovations
Somba K'e/Yellowknife
Discovery Air is launching a new company to identify innovative business opportunities for the company and its subsidiaries. Discovery Air Innovations will seek out opportunities for Discovery Air's subsidiaries Air Tindi, Great Slave Helicopters and Discovery Mining Services to provide specialty aviation services such as military training, forest fire management, utility and charter services.

"One thing Discovery Air is trying to do is expand the current services we are providing in the regions," said Sheila Venman, investor relations spokesperson for Discovery Air. "So if we identify something that we feel would fit for one of our companies, we would have them deliver it. If it doesn't fit with one of them, we will look at starting a company to provide it," said Venman.

Frobuild under new management team
Iqaluit
The Qikitaaluk Corporation and Nunasi Corporation announced Tuesday that they have hired Northern Industrial Sales (NIS) to manage Iqaluit-based Frobuild Construction.
"We had a couple of rough years trying to rebuild our company, so we decided to bring in NIS as a new management team because they have a lot of experience operating in the North and have a good history," said Chris West, president of Frobuild Construction. "I am excited about this opportunity. Having them on board will freshen up business and they have the experience to help rebuild confidence from our consumers," said West.

Frobuild is a lumber and hardware-retail outlet jointly owned by Nunasi and the Qikitaaluk Corporation since 2006.

Shear to visit Cambridge Bay
Ikaluktutiak/Cambridge Bay
Representatives from Shear Minerals, which recently purchased the Jericho Diamond Mine, will be visiting Cambridge Bay tomorrow to consult with members of the community about the project. "This is part of our ongoing presence in the community. It gives us a chance to meet with the people, update them on the project and hear their ideas and concerns," said Pamela Strand, president of Shear Minerals, who will be among the representatives visiting the community. Shear Minerals purchased the mine in July 2010 after the previous owner, Tahera Diamonds, ran into financial and operating problems.

Guy Quenneville/NNSL photo
Dennis Camsell, a Whati resident who works at BHP Billiton's Ekati Diamond Mine, said many residents of Whati gamble away the money they make working at the territory's diamond mines.

New centre to help new entrepreneurs
Multipurpose building will include business incubator, tourism training centre

by Kevin Allerston
Northern News Services
Iqaluit

Carrefour Nunavut is in the final planning stages for a facility it hopes will help new entrepreneurs and people wanting to enter the territory's tourism sector.
In January, Carrefour Nunavut announced it received $140,000 in funding from the Canadian Northern Economic Development Agency (CanNor) to support the planning and design of a new multiservice centre slated to be built in Iqaluit. The 40,000-square-foot facility would be spread over four floors and would include a conference centre, a training centre and what they are calling a business incubator.

"One thing we felt we needed is a space where new businesses can get started and get a foot in the door," said Daniel Cuerrier, director general of Carrefour Nunavut. "It doesn't matter if they are francophone or not, we want to help support all entrepreneurs."

He said the business incubator will be a place where new businesses can access basic services like legal, reception and accounting services while paying a low rent for office space. "You know, office space is outrageously expensive in Iqaluit, and we know how frustrating it can be for entrepreneurs operating from a home office. So we wanted to do this to give businesses a chance to get started," said Cuerrier.

Tourism training

The multipurpose centre will also include a tourism training centre to help people in the region learn the basics of the tourism services industry. "We previously had been inviting tourists to stay in local Inuit homes so that they can get the full experience of our cultural events. From there, we decided we wanted to improve on the training side of things, which is where we got the idea for this training centre," said Cuerrier. "We will start them off with the basics of the tourism sector and from there they can decide if they would like to study it further at the college."

Cuerrier said the next step is to find a company that can handle the construction and design of the facility, which he estimates will cost from $18 million to $20 million to construct. "The next step is to partner with a construction company that will set it up," said Cuerrier.
"In the North things can take time, but hopefully we will have it built in the next two years."

Cuerrier said Carrefour Nunavut had three design plans for the facility, but they didn't match their vision and were scrapped. "We want it to be a landmark in the community and draw attention. We are also wanting it to be carbon neutral," said Cuerrier.

Hal Timar, executive director of the Baffin Regional Chamber of Commerce, said he supports the idea of a tourism training centre. "Tourism is an important pillar of our economy and it's an area with tremendous potential, so anything more that can be done to support the industry is great," said Timar. However, he said the facility should be a part of a larger strategy. "With the business incubator, I hope it is a part of a larger strategy. The danger is that if they get used to working in a situation with lower overhead costs, that that is the only way they are viable. So it is important that they evaluate the business and its prospect for growth," said Timar.

photo courtesy of Carrefour Nunavut
Daniel Cuerrier from Carrefour Nunavut is planning a multipurpose centre for Iqaluit. It would include a conference centre, a tourism training centre and a business incubator, where new entrepreneurs can benefit from low overhead costs as the business develops.

TAX Break: Tax filing deadline

Northern News Services

Apparently, we can move awfully fast if we want to. I once heard of a gentleman who moves faster than time itself! His son, who hangs around with my son, bragged his dad can do just that. This is how he explained it: "My Dad can move super fast. He goes to work every day and when he leaves his office at five, he gets home at 4:30!"

And then there are those of you who move like molasses when it comes to filing your personal tax return. So please note this: Monday, May 2 is your tax filing deadline if you have not filed your 2010 tax return.
The normal deadline is April 30 and since that day falls on the weekend, the deadline is extended to the next working day – Monday, May 2.

You may be living precariously if you have not filed your 2010 tax return or plan to miss the deadline. You will be charged late-filing penalties if you owe taxes and you file late. I don't mean to scare you, so here is the good news first. There are no late-filing penalties if you don't owe taxes, because these penalties only apply to taxes owing. Therefore, there are no late-filing penalties if you expect a refund or have no taxes owing by the deadline.

Nonetheless, even if you expect a refund, you should file on time to avoid other potential tax traps. Here's an example. Do you own foreign property with a cost greater than $100,000 (Google 'T1135' or read Tax Break, April 11, 2010)? If you do, you must file an additional Form T1135 or face a costly late-filing penalty of $25/day. For example, if you late-file your tax return (that has a refund) along with the Form T1135 (if it is required) on, say, June 30, your Form T1135 late-filing penalty will be $1,525 ($25 x 61 days)!

Need more convincing to file your tax return on time? There is an automatic five per cent penalty on the taxes owing if you are late. For example, if you file your return on May 3 and owe $3,000, the late-filing penalty is $150 ($3,000 x five per cent). A five per cent penalty is cheap but that's just the warm-up. The penalty grows by one per cent for each additional month you are late. The penalty is six per cent if you file in June and 12 per cent by December. It tops out at 17 per cent – thankfully – even if you late-file beyond the 12-month period.

And it gets worse. The penalty more than doubles if you are a repeat late-filer, which simply means you have paid a late-filing penalty in any of the previous three tax years (2007-2009). This super late-penalty is an automatic 10 per cent plus two per cent per month of taxes owing, to a maximum of 20 months or 50 per cent of taxes owing.

Paying a late-filing penalty is not the only hit to your pocketbook. You will also be charged interest on both the taxes owing and the penalties, and it may well be the largest of the three amounts.

What if you know you owe taxes but cannot file on time? Pay your estimated taxes by the May 2 deadline. Penalties won't apply even if you file late if you owe nothing by the filing due date. If you think you owe $5,000, pay it by May 2. When you do file later on and your actual taxes owing are only $4,000, the overpayment of $1,000 will be refunded. If you miscalculated and owe $5,500, the late-filing penalties only apply to the balance owing of $500.

The May 2 personal tax deadline does not apply to everyone. If you or your spouse report self-employment income from a partnership or proprietorship, the filing deadline is extended to June 15 for both of you.

Andy Wong, CGA, CFP is a tax consultant at MacKay LLP, Chartered Accountants, in Yellowknife. He can be reached at andywong@yel.mackay.ca.

TRENDS AT A GLANCE – past 13 weeks: Bank rate 0.9979%; Prime rate 3.00%; Mortgage (five-year closed) 5.69%; Dollar ($CDN vs $US) US$1.0389; Gold (London) US$1,476.75/oz; Brent crude oil US$123.79/barrel.
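For readers who want to check the arithmetic, the penalty schedule described in the column can be sketched as a short calculation. This is an illustrative sketch only – the function name and rounding are my own, the rates are simply those quoted in the column, and current rules should be confirmed with the Canada Revenue Agency before relying on any figure.

```python
def late_filing_penalty(taxes_owing, months_late, repeat_offender=False):
    """Estimate the late-filing penalty using the rates quoted in the column.

    First-time late filer: 5% of taxes owing, plus 1% per additional month
    late, topping out after 12 months (17% maximum).
    Repeat late filer: 10% plus 2% per month, to a maximum of 20 months (50%).
    Penalties apply only to a balance owing; a refund position owes nothing.
    """
    if taxes_owing <= 0:
        return 0.0  # no taxes owing means no late-filing penalty
    if repeat_offender:
        rate = 0.10 + 0.02 * min(months_late, 20)
    else:
        rate = 0.05 + 0.01 * min(months_late, 12)
    return round(taxes_owing * rate, 2)
```

Using the column's own example: filing on May 3 with $3,000 owing gives `late_filing_penalty(3000, 0)`, which works out to the $150 quoted above.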
Visitors' centre for Gjoa Haven
by Kevin Allerston
Northern News Services
Uqsuqtuuq/Gjoa Haven

Guy Quenneville/NNSL photo
From left, Bruce Spencer, a training co-ordinator with De Beers Canada, Jim Stauffer, an Aurora College community adult educator in Whati, and Cathie Bolstad, director of external and corporate affairs for De Beers, try out a new online course centred on money management.

Earnings wasted: chief
Course, from page 33

10 people work at the Snap Lake, Ekati and Diavik diamond mines. Tlicho chiefs have noted with concern the tendency among some workers across the region to let their substantial earnings go to waste in bingo halls and other gambling venues, said Nitsiza.

"Particularly in Whati, we have a lot of people who work in the mine that have been working there for a long time, and they've done well," he said. "Some have a mortgage now in the community and some moved to Yellowknife and bought their own homes and they have vehicles and their spouses work in some cases. Those are the ones that are successful, I may say. But others, maybe half, have worked about the same length of time but really have nothing to show for it ... We figure (there's) millions spent in bingo in Yellowknife."

Dennis Camsell, a Whati resident who has worked at BHP Billiton's Ekati diamond mine for 13 years, echoed Nitsiza's concerns about gambling and added the same temptations apply when residents travel outside the territory. Recalling a trip he recently took to Alberta with a close friend, Camsell said, "On a Saturday, they went to bingo – twice in one day."

The importance of saving money takes on added urgency when considering the limited operating lives of mines, said Cathie Bolstad, director of external and corporate affairs for De Beers Canada. Snap Lake, for instance, opened in 2008 with an expected mine life of about 20 years. "People really have to stretch those paycheques," said Bolstad.
Educational tools like Your Money Matters – as well as a program being considered by the Tlicho Government, in which high school graduates travel door-to-door in communities to talk to householders about the importance of saving – are effective ways of deterring people from needless spending, but they'll take time to register, said Nitsiza. "Education is the way to get there. We'll get there, but it's slow-going," he said.

A quarter of NWT mine workers do not have a high school diploma, according to a 2009 NWT Survey of Mining Employees conducted by the NWT Bureau of Statistics.

The Hamlet of Gjoa Haven and the Kitikmeot Inuit Association have announced that they have awarded Arctic Canada Construction Ltd. a contract to design and build a visitors' centre for the community. The goal of the centre will be to highlight the arts, crafts and culture of the region and promote tourism in the area.

"The community is excited. This is something they've wanted for a long time and are looking forward to the benefits it will bring," said Ed Stewart, economic development officer for the Hamlet of Gjoa Haven. "The centre will be 1,500 square feet of world-class arts and crafts and examples of our culture."

The project is supported through funds provided by the Kitikmeot Inuit Association and Nunavut Tunngavik Inc., Conservation Area Inuit Impact and Benefit Agreements, and long-term management funding through the Government of Nunavut's Community Economic Development fund, a program through the Department of Economic Development and Transportation. The centre is expected to be completed by the summer of 2012.

Growing the Inuvik chamber
by Guy Quenneville
Northern News Services
Inuvik

The president of the Inuvik Chamber of Commerce has expansion in mind. Lee Smallwood, who took over as president last year from Larry Peckford, said that in addition to ensuring the chamber retains a stable supply of directors, he wants to grow the membership of the chamber, which currently stands at about 45.
"We want to increase our members by about 40 per cent over the next year," said Smallwood. The chamber's first annual general meeting is scheduled to take place May 5.

Lee Smallwood

MARKETPLACE
"Job Bank" online. NNSL word classifieds now run in five NWT papers – Deh Cho Drum, Inuvik Drum, NWT News/North, Yellowknifer, Weekender – plus NNSL classifieds online. Book your classified online or email classifieds@nnsl.com.

PERSONALS
LOOKING FOR Christopher. I'm looking for my grandnephew named Christopher, whose father is Michel Sedlar. Christopher's father worked at the Raglan Mine in 1996 and most likely at the Terre de Baffin. Christopher's father and mother met while at the mine. She is of English (England) and Inuit origins. If you know Christopher, please tell him to contact me at the following e-mail address: la_chouette45@hotmail.ca

$$$ 1ST, 2nd, 3rd mortgages – tax arrears, renovations, debt consolidation, no CMHC fees. $50K, you pay $208.33/month (OAC). No income, bad credit, power of sale stopped!! Better Option Mortgages, call 1-800-282-1169 (LIC# 10969).

APRIL 27 is the Canadian Cancer Society's Daffodil Day. Do something helpful for someone experiencing cancer. Show your support by wearing a daffodil lapel pin during Daffodil Month. www.DaffodilsForLife.ca

RVs
2008 18 FT Pioneer Spirit travel trailer. Selling one very clean camper; it has only been used two weeks total. Sleeps seven, skylight over shower, perfect for a couple or small family. Comes turn-key with forks, knives, closet dividers, etc. Can be pulled by a half-ton easily. Located in Hay River. Asking $16,000 OBO; paid $21,000 new. Make an offer. Call (867) 875-8533.

BOATS & MOTORS
25.5 FT Bayliner boat. New lower price, excellent condition. Sleeps six, kitchen and bathroom with shower. Fridge and stove. Freshwater boat. New canopy top and portable AC unit.
Beautiful boat, runs and looks great. Comes with a 7,500 lb galvanized trailer. Asking $15,900 OBO. Must sell; call (867) 874-6997 for more information and pictures.

FIREWOOD FOR sale: cut, split, dried & delivered, $325 a cord. Just blocked, $300. Call Kerry at (867) 444-6305 or (867) 669-3189.

28 FOOT sailboat (Viking 28) located in Yellowknife. Includes complete set of sails plus gennaker and spinnaker. Inboard engine, head, galley with stove, sink and ice chest, BBQ, heater, cockpit dodger, depth/speed indicator, VHF and stereo. Comes with heavy-duty trailer and mooring. Asking $17,500 OBO. Call (613) 328-6051.

NATIVE TANNED moose hides and tanned beaver and other furs available at reasonable prices. Phone (780) 355-3557 or (780) 461-9677, or write Lodge Fur and Hides, Box 87, Faust, Alberta, T0G 0X0.

FIND YOUR life partner. Misty River Introductions is Ontario's traditional matchmaker. Call (705) 734-1292.

DATING SERVICE. Long-term/short-term relationships, call now. 1-877-297-9883. Exchange voice messages, voice mailboxes. 1-888-534-6984. Live adult casual conversations – 1on1, 1-866-311-9640, meet on chat-lines. Local single ladies. 1-877-804-5381. (18+)

$500$ LOAN, no credit refused. Fast, easy and secure. 1-877-776-1660.

AS SEEN on TV – 1st, 2nd, home equity loans, bad credit, self-employed, bankrupt, foreclosure, power of sale and need to re-finance?? Let us fight for you because we understand – Life Happens!! Call toll-free 1-877-733-4424. The Refinancing Specialists (MortgageBrokers.com LIC#10408).

DEBT CONSOLIDATION Program. Helping Canadians repay debts, reduce/eliminate interest, regardless of your credit. Steady income? You may qualify for instant help. Considering bankruptcy? Call 1-877-220-3328. Free consultation, government approved, BBB member.

CRIMINAL RECORD? Guaranteed record removal. 100% free information booklet. 1-8-NOW-PARDON (1-866-972-7366). Speak with a specialist – no obligation. A+ BBB rating. 20+ yrs experience.
Confidential. Fast. Affordable.

ALCOHOLICS ANONYMOUS has daily meetings. Call (867) 444-4230 for more info.

THE NORTHERN Cancer Support Group will be holding their monthly meeting on Tuesday, May 3 at 7 p.m. in the Yellowknife Public Library meeting room. If you or someone you know is affected by cancer, you are welcome to attend. For more information contact Walt Humphries at (867) 873-5486.

HAVELOCK COUNTRY Jamboree, Canada's largest live country music & camping festival, Aug. 18-21/11. Announcing Martina McBride, Billy Currington, Joe Nichols and more – over 25 entertainers. Tickets: 1-800-539-3353.

ATTENTION RESIDENTIAL school survivors! If you received the CEP (Common Experience Payment), you may be eligible for further cash compensation. To see if you qualify, phone toll-free 1-877-988-1145 now. Free service!

CHILD CARE available in private day home located on Jeske Crescent. Welcoming children 3-4 years old. I provide two snacks plus lunch daily. Reasonable rates. Full and/or part-time spaces open. Call Mariam at (867) 873-5455 for more details.

LOOKING FOR a live-in or non-live-in full-time nanny and housekeeper. Call (867) 446-1886 for more info.

SMALL WORLD Licensed Day Home will have openings in mid-August 2011 for two infants (one-year-olds) and one preschool child. For information and an interview, call (867) 669-4080 or e-mail r-a@theedge.ca.

Lost & Found
FOUND: SET of keys with special fob, found on the ice next to an island in YK Bay on April 9. Call to identify at (867) 873-4826.

LOST: BLACK BlackBerry Curve with a black cover. Lost in front of Domino's/liquor store in Yellowknife on April 5. If you found it, contact (867) 873-3715. Reward offered.

1989 FORD F-150. Good running condition, recently serviced. CD player w/ Pioneer sound system. Asking $1,800 OBO, call (867) 445-8140.

1993 CHEVY pickup truck, small V8 automatic, fully inspected, runs great. Asking $2,200. Please call (867) 446-2016.
1994 CHEVY pickup 4x4 with canopy, great-looking truck, command start, new transmission. Call (867) 920-4504. Asking $3,500 OBO.

2003 BLUE Chev. ext. cab 1/2-ton 4x4. Call (867) 765-8952.

2006 CHEVY Silverado Z71. 39,000 km. Maintained in excellent condition. Leather interior plus all the extras you want. Extended warranty. Asking $18,500. Call (867) 445-2412.

2007 QUAD with bucket loader system. 2007 Midwest WRX 400 quad, bought new in 2008, less than 1,000 km, 400 cc 4-stroke, 5-speed semi-auto w/ reverse, electric start, 2/4 WD, signal lights, backup horn, front/rear racks, hitch, winch. In great shape. Comes with rare Groundhog bucket loader system: electric/hydraulic lift, lower and dump. Lift height 60" (high enough to load a pickup). Aux 12V marine deep-cycle battery tied to ATV battery. Battery maintenance charger. Loader system can be removed in 1/2 hour and installed in under an hour. Asking $7,100 for quad & loader. Call (867) 445-7062 or (867) 873-8045.

ANYONE having claim to a 1998 F-150 p/u, VIN 2FTZX1764WCA84943, call (867) 765-0772.

LOST GLASSES: Found near the CBC building. Black frames in a light blue case. Call (867) 920-5400.

FOR SALE: 1997 Explorer XL. 185,000 km. Needs work, engine runs good. Best offer. Call (867) 446-0824.

LOST – PLEASE help me find my dog! He is a two-and-a-half-year-old male Papillon. He is white and sable and answers to "Lennie". Lennie weighs about 7 pounds and was wearing a purple puff parka with a purple harness underneath when he went missing in the Rat Lake area at approximately 7:00 a.m. on March 1. He is not from Yellowknife so he will be scared and disoriented. His family is very worried and desperately wants him back. Please call 306-502-2875 or 867-669-7573 if you have any information about Lennie.

FOR SALE: 2006 Chevrolet Cobalt LS sedan. Automatic transmission, 84,000 km. Asking $5,500. Contact Dale at (867) 766-4266 anytime.

GIANT ALASKAN Malamute pups, C.K.C. reg., vet checked, ready to go
May 15, taking deposit now. Call (867) 874-6916. TO GIVE away complete 29 volume leather-bound encyclopedia Britannica, 1987 edition, with 17 years annual updates to 2003. Call dale at (867)766-4266 anytime. YELLOWKNIFE: 2003 Blue Chev. Ext. cab 1/2 ton 4x4. Call (867)765-8952. GUARANTEED APPROVAL Drive away today! We lend money to everyone. Fast approvals, best interest rates. Over 500 vehicles sale priced for immediate delivery OAC. 1-877-796-0514. www. yourapprovedonline.com. 2006 SKIDOO GSX 800 HO-E for sale. Machine has skid plate and A-arm protectors, tunnel bag, plus 1 seat and cover. The sled has been serviced annually, stored in a heated garage and has low milage (2,570 km). Priced at $5,499.00. Contact Duncan on (867)445-1609. 2006 POLARIS FST Switchback with 2 up seat, excellent condition. Great Value! Asking $5,495.00. Call (867) 920-2225 and ask for sales. FOR SALE 2003 350 Marine Motor For Sale, only 270hrs on motor. Asking $5200 OBO. Please phone or leave a message @ (867)446-0932. TITAN INFLATABLE Boat-11’9â€? lenght with aluminun oorboards and inatable keel. Used three seasons; always stored in heated garage. Comes with cover, stowaway aluminum oars, removable bench seat, foot-pump, repair kit and carrying bags. Recently cleaned and ready to go for the season. Over $3000 new; ďŹ rst $1500 takes it. Call (867) 669-0248 after 6pm. CAMERA LENSES: 1-Tamron AF 28-80mm f/3.5-5.6 Aspherical D Special Edition for Nikon SLR w/ HOOD (58). 1-Tamron AF 75300mm f/4-5.6 D N Macro built in 1.3 Special Edition of Nikon SLR w/ HOOD (62). 1-500mm f/8.0 PRO HD high resolution digital lens “BOWER ORIGINALâ€?. 1-650-1300mm f/8.0-16.0 Pro high deďŹ nition digital lens “BOWER ORIGINALâ€?. 1- nikon 50 mm f/1.8 AF Nikon prime lens. 1-58 mm high deďŹ nition 2X digital multi coated telephoto lens. 1-58 mm high deďŹ nition PRO wide angle lens w/ Macro lens attachment. 1-TMT T- Mount adapter ring for Nikon DSLR. Asking $1500. 
Call (867)446-1886 CONTEMPORARY SOFA and matching large chair, brown in color, brushed suede-look fabric. Excellent condition, used one year. We installed a pellet stove and the set is too large for the small room. Paid $12000 asking $400. Two toddler sleeping cots and sheets, A1 condition. Asking $25 for each set. Quilted double bed cover, matching shams, and bed skirt. Pattern: Blue with rose and matching colored accessories asking $30. Eighty year old Singer Sewing Machine best price offered. Call 867-669-4080. MOVING: ITEM for sale. 1 yr. old Kenmore washer and dryer, used only once weekly. Asking $300. for pair. 1 yr. old apartment size kenmore freezer asking $150. Canada goose resolute black parka (men’s large). Asking $200. 1967 Goya Spanish guitar. Good condition c/w case. Large assortment of books ranging from paperback to antique leather-bound asst. prices. Home winemaking kit, includes 2 glass carboys, primary fermenter, corker, etc. Asking $40. Contact Dale at (867)766-4266 anytime. FOR SALE: Parkhurst 1961 Hockey Cards Toronto, Montreal, Detroit, been in storage for 45 yrs. very good to excellent condition. Gordie Howie to Tim Horton. Just a very few missing from set. Contact John at (867) 872-0405. MARTIN 12 String acoustic guitar. Martin DM-12 guitar in excellent condition. Sold spruce top with mahogany body and neck. Matte ďŹ nish, new strings and old case. Asking $700. Please call (867)920-4341. STEEL BUILDINGS 20X24, 100X100-others. Get a bargain, Buy Now! Not avail. Later. Price on the Move. com, Source # 1 KM. Call 1 800964-8335. STEEL BUILDINGS 30X40, 50X100-others. Time to Buy Now at old Price, Prices going up!, Source # 1KM. Call 1 800-964-8335. WINTER ITEMS: Aerolite Snowshoe (29X9) original package $160. Cold Wave snowmobile suit Men’s med. black $200. Bombardier boots (size 6) $25. All excellent condition. Call (867) 445-2412 AT LAST! An iron ďŹ lter that works. IronEater! Fully patented Canada/U.S.A. 
Removes iron, hardness, sulfur, smell, manganese from well water. Since 1957. Phone 1-800-BIG IRON;. DIESEL ENGINES Remanufactured. Save time, money and headaches. Most medium duty applications 5.9L, 8.3L, ISB, CAT, DT466, 6.0L. Ready to run. Call today 1-800-667-6879 GENERATOR SETS. Buy direct and save. Oilpatch, farm, cabin or residential. Buy or rent - youĂ•ll get the best deal from DSG. 1-800-667-6879. com Coupon # SWANA G1101 MAJOR ENGINE manufacturers say that quality fuel treatments are an essential part of diesel engine protection. Get the best value with 4Plus 1-800-667-6879 MORE POWER Less Fuel for diesel farm equipment. Tractors, combines, sprayers or grain trucks. Find out about safe electronics from DSG. Call today 1-800-667-6879. WALKER POPLAR, plugs: $1.69/ each for a box of 210 ($354.90). Full range of trees, shrubs, cherries & berries. Free shipping. 1-866-873-3846 or treetime.ca. DO-IT-YOURSELF STEEL buildings priced for spring clearance - ask about free delivery to most areas! Call for quick quote and free brochure - 1-800-5422. SAWMILLS - Band/chainsaw - spring sale - cut lumber any dimension, anytime. Make money and save money in stock ready to ship. Starting at $1,195.00. www. NorwoodSawmills.com/400OT 1-800-566-6899 Ext:400OT. FAST RELIEF the ďŹ rst night!! Restless leg syndrome and leg cramps gone. Sleep soundly, safe with medication, proven results.. 1-800-7658660. NEWS/NORTH NWT, Monday, April 18, 2011 37 t.JTDGPS4BMF t.JTDGPS4BMF CAN’T GET up your stairs? Acorn Stairlifts can help. Call Acorn Stairlifts now! Mention this ad and get 10% off your new Stairlift. Call 1-866-981-6590. START YOUR university education at Lakeland CollegeĂ• s Lloydminster campus. BeneďŹ t from small class sizes, approachable faculty, and cutting-edge science labs. Popular transfer routes include Arts, Commerce, Education, General Studies, Science, and Social Work. 
Lakeland also offers pre-professional studies towards pre-dentistry, pre-medicine, prepharmacy, pre-veterinary medicine, and new this year University of Saskatchewan pre-nursing. Grade 11 marks 85% plus? You may receive a scholarship of $1,500 to $3,500. Visit or phone 1 800 661 6490, ext. 5429. APPLIANCE REPAIR service & electrical work. Most brand names, stoves, fridges, washers, dryers etc. 15 years experience. honest & reliable. Reasonable rates. journeyman redseal. Call Lloyd 867-446-2890. BETTER LOAN rates! Get cash now! Regain financial freedom! Get out of debt now! Why wait? Need cash fast! Good, bad credit, even bankruptcy, debt consolidations! Personal loans, business start up avail. Home Reno, 1st & 2nd mortgage, medical bills loans available from $2,500K to $1M no application fees, no processing fees, free consultations, quick, easy and confidential. Call 24 hrs. Toll free 1(800)4668135. J O U R N E Y M A N CARPENTER. Available for misc. work and Reno’s. Contact Cameron at (867)444-0547. LYNNE’S CLEANING Services does cleaning for residential and commercial cleaning. Call @ (867)669-0195 FURNITURE REPAIR services. Complete repairs (ReďŹ nishing, recovering, etc.) to upholstery (leather and fabric) and wooden pieces (incl. cabinetry). Please call Lorne at (867) 445-6969. No job too small. READY FOR A Career Change? Less stress? Better pay? Consider Massage Therapy. Independent Study in Calgary or Edmonton. Excellent instructors, great results. Affordable upgrade to 2200 hours. 1-866-491-0574;.. COM “YOUR long term solar partnersâ€? - system sales/installations/ďŹ nancing/dealership. Start making money with the ‘MicroFIT Program’ today! Call now! Tollfree 1-877-255-9580. READY TO change your life? Reach your goals, live your dreams. Work from home online. Real training and Support. Evaluate our system.. TUNDRA COMICS REAL ESTATE t'PS3FOU t'PS3FOU $500 WEEKLY, furnished motel suites. Located downtown. full cable. Direct dial phone. 
Wireless internet, parking available. Pet friendly. Daily, weekly & monthly rates. Call (867) 873-6023. PALM SPRINGS vacation rental. Brand new three bedroom luxury home in gated golf course community with swimming pool, spa and built-in barbeque. Monthly rentals only. Go online: www. vrbo.com/332144 to view details. Contact Lynda Sorensen at lynda5086@gmail.com for further information. $$$ MAKE fast cash - start your own business - driveway sealing systems, possible payback in 2 weeks. Part-time, full-time. Call today toll-free 1-800-465-0024. Visit:. ROOM FOR Rent. Semi furnished with son and single father. Mature nonsmokers please. Call (867)4440547. BE YOUR own boss with Great Canadian Dollar Store. New franchise opportunities in your area. Call 1-877-388-0123 ext. 229 or visit our website:. com today. FOR RENT: Secure self storage in Kam Lake. 8’x10’ and 8’x20’. On or offsite seacan rentals. Please call Great Slave Storage at (867) 873-5022 for pricing or e-mail: terryhme@hotmail.com. ROOM FOR Rent in a two bedroom luxury apartment. Private bathroom & laundry facilities, inhouse gym, Jacuzzi, good maintenance, security. Range Lake call (867)446-2940. Available immediately $900/month. Females preferred, reference required. HOME BASED business. Established franchise network, serving the legal profession, seeks self-motivated individuals. No up-front fees. Exclusive territory. Complete training. Continuous Operational Advertising Support;. ARIES - Mar 21/Apr 20 Aries, a change in scenery would be well timed. While it’s not good to run away from your problems, some time away could provide a new perspective. LIBRA - Sept 23/Oct 23 Preparation is essential to avoid feeling out of control, Libra. Don’t worry, when you put your mind to it, you can accomplish just about anything. TAURUS - Apr 21/May 21 Taurus, foster closer relationships with family this week because you might need them in the days to come. It always helps to have someone you can trust nearby. 
SCORPIO - Oct 24/Nov 22 Scorpio, it may be time for you to start over, but this is not necessarily a bad thing. You may find a new path that is much more to your liking and new relationships to boot. GEMINI - May 22/Jun 21 Think again before you make a large purchase, Gemini. Overspending may not be prudent at this juncture in time. Big expenses loom on the horizon, and you need to be prepared. CANCER - Jun 22/Jul 22 Someone is thinking about you, Cancer. It could lead to romantic endeavors. The excitement will be in discovering just who has his or her eyes pointed in your direction. LEO - Jul 23/Aug 23 Leo, a complete change of direction is possible this week. Indecision could cause you to act rashly and that could lead to irreversible damage. VIRGO - Aug 24/Sept 22 Virgo, stop trying to prove yourself to others. Be your own person and live your own life and you will be much happier for it. Realize that you can’t compete on the same level all the time. SAGITTARIUS - Nov 23/Dec 21 Don’t try to push your point of view on someone else, Sagittarius. It won’t be well received at this juncture in time. Let others have their opinions for the moment. CAPRICORN - Dec 22/Jan 20 There’s no time to relax, Capricorn. Just when you tackle one project, another takes its place. Fortunately, you have an abundance of energy to keep you going. AQUARIUS - Jan 21/Feb 18 Aquarius, you may get some news you didn’t expect and it will take a while to absorb all of this information. When you think about it, the change could be good. PISCES - Feb 19/Mar 20 Pisces, soliciting help doesn’t mean you are abandoning your independence. It just means you’re smart. /0/$0..&3$*"25 words or less added words .10 ea Includes GST - Prepayment required. POFJTTVF XFFLT (Mondays-Wednesdays -Fridays) '3&&"%754 Misc. 
for sale Items under $500 - 25 words or less Lost and found and to give aways 25 words or less (Mondays-Wednesdays -Fridays) 2 weeks #JSUIT#JSUIEBZT &OHBHFNFOUT8FEEJOHT Anniversaries 0CJUVBSJFT.FNPSJVNT Two standard sizes: (1 1â „2 "x4") (3 1â „4"x4") $24 $46 Enter the ONLINE CLASSIFIEDS Plus 6% GST - Must be prepaid. No charge for photo supplied. Additional charges for photos taken at our office. COMMERCIAL 25 words $10 - Boxed $15 Additional words .15 ea Includes GST. Commercial classifieds can be charged only by established clients and there will be a handling charge of $5 per month unless display advertising is running in the same period. If no account, prepayment is required $6450.$-"44*'*&%4 $15 per inch x 1 column. Includes GST. Boxed, Bold, Centred with NO art or photos. 461&341&$*"- Full range of display options including: art, logos and pictures - One size only! 20 lines on 2 columns - boxed QMVT(45 $-"44*'*&%%*41-": Boxed Advertising - minimum 1 col. x 3". Rates available upon request. If no account - prepayment is required. '3&& All classified ads published in our newspapers are also posted online at at no additional cost. DEADLINES 4:00 p.m. Monday for Wednesday’s Yellowknifer; 4:00 p.m. Wednesday for Friday’s Yellowknifer; 4:00 p.m. Thursday for Monday’s News/ North. 1257+:(677(55,725,(6 Horoscopes April 17 - 23, 2011 options for every budget ANNOUNCEMENTS STAY HOME and build a career! Build a business working from home in your spare time. Free training, great retirement income.. TWO WHEELIN’ excitement Learn to repair street, off-road and dual sport bikes. Hands-on training. On-campus residences. Great instructors. Challenge 1st year apprenticeship exam. 1-888-9997882;. $-"44*'*&%3"5&441&$4 at classifieds.nnsl.com PRIZES DAILY! No registration or purchase necessary. Ph: (867) 873-9673 Fax: (867) 873-8507 E-mail: classifieds@nnsl.com P.O. Box 2820, Yellowknife, N.W.T. 
X1A 2R1 Monday-Friday 8:30 am - 5:00 pm 5108-50th Street Entry forms online under these categories: - Cars/Trucks - Boats/Motors - Misc for Sale - Personals - Announcements - Real Estate For Sale - Real Estate Rentals $-"44*'*&%10-*$: Northern News Services reserves the right to refuse to publish any advertisement, to correctly classify any advertisement and to delete objectionable words or phrases. Publication of an advertisement does not constitute an agreement for continued publication. Occasionally instructions are misunderstood and an error may occur in an ad. If this happens to you please contact us and we will be happy to correct it as soon as possible. However, we will not be responsible for errors that appear in advertisements after the first available business day following publication of the error. Changes other than price or phone number may be considered a new ad and may affect your rate. W I N N I N G H I N T: $-"44*'*$"5*0/4 ENTER EVERY DAY – ENTER ALL 7 CONTESTS. (Entry information is used solely to select a winner and for no other purpose. Entry data is destroyed after each draw). Enter now and while you are at it, CHECK OUT THE BARGAINS or PLACE A CLASSIFIED at: classifieds.nnsl.com ClassiďŹ eds 10 15 20 30 40 70 75 80 100 110 115 Personal Regular Meetings Announcements Situations Wanted Childcare/Domestic Help Lost & Found Pets To Give Away Motorcycles & RVs Vehicles Snowmobiles 120 125 130 140 150 160 165 170 180 190 Boats & Motors Aircraft Garage Sales Misc. for Sale Misc. Wanted Business Services Business Opportunities For Rent Wanted To Rent Real Estate Good advertising doesn’t cost...it pays! 38 NEWS/NORTH NWT, Monday, April 18, 2011 REAL ESTATE t'PS3FOU t'PS3FOU t'PS3FOU t3FBM&TUBUF t3FBM&TUBUF t3FBM&TUBUF 2 BEDROOM, 2 Bathroom adult Non-Smoking, No pet apartment with heated garage. Expressions of interest for large bright 2 bedroom, 2 bathroom, 2 bathroom adult non-smoking, no pet unit bordering Range Lake available July 1, 2011. 
Proof of employment and landlord references required. Please email moira.young@ gmail.com with personal description of yourselves with contact number. Rent $2400/month including utilities, analoque cable and 6 appliances. Features include heated garage parking for one vehicle, balcony with lake view, spacious bedrooms and entertainment area, in suite laundry in just over 1300 sq. ft. Suitable for quiet, clean, couple that prefer to live in residential area. FULLY FURNISHED room with double bed in very quiet, cozy, non-smoking, climate controlled residence in downtown Yellowknife beside Aurora College. Own full size fridge, linen, towels, utilities, cutlery, kitchen utensils, washer, dryer, free local calls, cable, DVD player, all cleaning, and hi-speed wireless internet included. Parking with plug-in available. $1290 per month, $420 per week or $120 per night. $300 cash deposit required. Short stays welcome - no notice required. Stay a week, stay a month, stay a year. Pay only for the time you need. Excellent choice for contract or camp employees and students. Call (867) 445-7813 or Toll Free: 1-877-445-7813. address 5405 50th Avenue. Email: littlebrownhouse@ymail.com. TWO SMOKE free small basement bachelor units downtown. Small spaces still get you utilities, cable, Wi-Fi, most furnishings, laundry, full bathroom and separate entry at $890/month for the larger unit and $850 for the smaller unit on a one year lease. Smaller unit available June 1, larger unit available sooner. References required, N/S, N/P, employed. Phone (867) 873-4625 before 7:00 p.m. daily. 2004 STICK built 3 bedroom house pinned to bedrock in Northlands 1144 sq. ft. $260,000.00. Assumable mgr. available (867)8734088 #73 GOLD City Court: 3 bedroom, 1 basement suite 1/2 bathroom, double wide and Jacuzzi in the master bedroom. Full bathroom up stairs. Second floor 1/2 bathroom. New washer, dyer, counter top, hot water tank, new paint and new laminated floor. Asking price $360,000. 
Please call (867) 873-8459 to view, serious applicant only. 3 BDRM trailer, 1.5 bath, heated garage, close to schools, quiet area, no freeze ups in winter. Asking $278,000. Call (867)445-3336 or e-mail: davco48@yahoo.ca. FURNISHED ROOM in Frame Lake South. $750 per month included cable TV and internet. Call (867)445-2295. WANTED: 3 bedroom for April 1st - Old Town Downtown. We are looking for a 3 bedroom apartment/suite/or house for April 1st. 1 year lease (negotiable). Downtown or old Town preferred. Furnished or unfurnished. We have no pets. References available. Please leave a message at (867)444-9757 for Jenn. ACREAGE IN Paradise! Acreage in Paradise Valley for Sale. Perfect place to raise a family, plenty of room and space, ideal for horse lovers. Enjoy the country life in this 1400 sq. ft. 4 bedroom, 2 bathroom bungalow with full basement and covered veranda. Property is located on aprox. 6 acres of cleared land with river frontage at both front and back, 24kms from Hay River. Detached triple garage, fenced areas with animal shelters for raising your own livestock, extensive landscaping, fruit and berry trees. Listed at $365,000. Please email ecoleman@northwestel.net or phone (867)874-2342 after 6pm for more info. or to set up a viewing appointment. $055"(& out buildings for Sale, 16 km from YK. Asking 45,000.00. Call (867) 445-1218. MORTGAGES- LOWER rates, Clifford Sabirsh, broker. Tel: (780) 850-5236. Email cliffmortgage@ shaw.ca, Google-enter-clifford sabirsh REGISTER NOW! Saskatoon Active Adult Large Ground Level Townhomes place.ca FOR SALE20 ft. Aluminum houseboat on trailer. Solar panels, hot and cold running water, propane heater, refrigerator, cook stove, radio, potty and shower. $5000 or best price. Can be seen at Fort Providence gas station, NT. Contact Dale Hayunga at dhayunga@hotmail.com. Whatsit? WANTED FOR sale or option mining claims, land and land with mineral rights, former operating mines, gravel pits. Exposure to our wide client base. 
1-888-259-1121. HOUSE FOR Sale, Paradise Valley, Hay River. 202 Paradise RD. 24 km from town, quiet. Riverside, family home on 1 acre, backyard fully fenced. 2 kitchens, 2 living rooms, 4 bedrooms, 2 bathrooms. Attached garage 25x27 and more storage. $325,000. Please call (867) 874-2150. Email: d_rever@hotmail.com. Well maintained (sale by owner). HOUSE FOR sale at 3140 Iqaluit (Apex). Land lease $1.00/year. 1 bedroom at 14’x12’ and 2nd room by 14’x10’. Living room 25’x14’. Kitchen 18’x21’. Walk in 14’x4’. Mechanical room 10’x7’. Bathroom 10’x7’. Col. porch 7’x6’x11’. Landing 32’x8’ with guard rail. Asking $330,000. For more information please call (867) 979-5545 or (867) 222-2154. winner The for the March 28th whatsit is Ed Dowbush. It was a witch. HOUSE FOR Sale: 14 Cranberry Crescent. 1400 sq. ft modular home. 3 bedrooms, 2 full baths, Renovated addition ‘08, Garden shed and multipurpose shelter. Beautiful yard and garden. Various shrubs and hedges. Comes with all appliances, window air conditioner, patio furniture and much more. Must see to appreciate!! Asking price $199,000, seller motivated. Call at (867)874-6406 Guess Whatsit this week and you could win a toque!! Entries must be received within 10 days of this publication date. Send your answers to NNSL by: E-mail: classifieds@nnsl.com Fax: (867) 873-8507 Or mail to: WHATSIT, C/0 News/North, Box 2820, Yellowknife X1A 2R1 (please - no phone calls) The following information is required: My Guess is __________________________________________ Name ________________________________________________ Daytime Phone No. _____________________________________ Mailing Address _______________________________________ _____________________________________________________ Name and date of publication ____________________________ 04/18/11 HOUSE FOR Sale by owner. Executive quality built home in Range lake North on corner of Crescent/cul-desac. 4 bedroom, 2 1/2 baths. 
Huge corner lot with flat grass yard, fully fenced and landscaped. Skylight, oak hardwood flooring on main floor. Sunken living room and family room with pellet stove. Office/den or 5th bedroom on main floor. Bonus room/large bedroom over garage. Spacious kitchen with walk-in pantry. Double garage, RV parking. Asking price: $609,000, Call to view at (867)669-7474. recycle NEWS/NORTH NWT, Monday, April 18, 2011 39 REAL ESTATE EMPLOYMENT For more Employment Advertising, from all Northern News Services Newspapers Go to our website at Click the “jobs” icon EMPLOYMENT 40 NEWS/NORTH NWT, Monday, April 18, 2011 EMPLOYMENT NEED CASH? FIND IT FAST in the classifieds Sell or trade your unwanted items and get cash fast CALL US TODAY FOR ALL YOUR ADVERTISING NEEDS! Ph: (867) 8739673 Fax: (867) 873-8507 Did you have the winning bid? Check out all awarded contracts on Updated every Monday Stay Alive... Don’t drink and drive! NEWS/NORTH NWT, Monday, April 18, 2011 41 EMPLOYMENT Tenders on the web All tenders advertised in the current editions of Deh Cho Drum - Inuvik Drum NWT News North Nunavut News North - Kivalliq News Yellowknifer Help others help themselves... are also available on the nnsl web site. For more information on how to access them, contact circulation@nnsl.com Give to your favourite charity. reduce, reuse, recycle 42 NEWS/NORTH NWT, Monday, April 18, 2011 EMPLOYMENT EMPLOYMENT OPPORTUNITIES OUTSIDE THE NORTH COMPANY DRIVER (Class 1), Tank Truck Drivers, Acid Haulers, Pressure Truck Operators & Vacuum Truck Operators Required. Johnstone Tank Trucking is seeking reliable and experienced drivers in our Frobisher location. Apply at or fax resume to 306-486-2022. TENDERS/NOTICES Give to your favourite charity. TIPS To Help You Write Your Classified Ad CONCRETE FINISHERS. Edmonton-based company seeks experienced concrete finishers for work in Edmonton and Northern Alberta. Subsistence and accommodations provided for out of town work; John@RaidersConcrete.com. 
Cell 780-660-7130. Fax 780444-7103. 1. Identify - begin with JOURNEYMAN MECHANICS required immediately, NW Alberta. Heavy Duty and Automotive positions, competitive wages, benefit plan. Caterpillar experience. More info:. Fax 780-351-3764. Email: info@ritchiebr.com. information you provide to the reader the better the re-sponses. Put yourself in the buyer’s place. What would you want to know? EXPERIENCED WINCH tractor and bed truck drivers for drilling, rig moving trucking company. Phone, fax, email or mail. Email: rigmove@telus.net. Phone 780-842-6444. Fax 780-842-6581. H & E Oilfield Services Ltd., 2202 - 1 Ave., Wainwright, AB, T9W 1L7. HOLIDAY ON Horseback in Banff, Alberta. Seeking individuals interested in riding in the Rockies! Hiring for trail guides, cooks, carriage drivers and packers. Horse experience required. Also looking for sales clerks/reservation agents in busy western shop. Must share enthusiasm for the western lifestyle! Staff accommodation available; warner@horseback.com;. IDEAL FOR semi-retired couples: Service Master Security is accepting applications for contract oilfield security workers from mature responsible couples. Skills & requirements: Basic computer literacy, excellent communication skills & work ethics, reliable 4x4 transportation, handy-man & equipment maintenance abilities an asset, must pass criminal records check & qualify for Guard Licensing, must be willing to obtain Safety training as required. Job specific training is provided. Contact for details: 403-348-5513. Fax resume: 403-348-5681. Email: servicemasters@telus.net. JOHNSTONE TANK Trucking is looking for a Lead Hand/Shop Foreman for the Frobisher Shop. Apply online or fax resume to 1-403-206-4175. PASSIONATE ABOUT safety and looking out for the well-being of workers, the Saskatchewan Association for Safe Workplaces in Health is the newest Safety Association in Saskatchewan working towards eliminating injuries in the workplace. 
Established in 2010, the SASWH Board is continuing to move the Association forward, but to fully accomplish this would like to welcome someone into the inaugural position ofƒ Chief Executive Officer Working in partnership with the Board of Directors, the CEO will be looked upon to steer the direction of the Association which includes developing an operational plan to achieve the strategic objectives; leading the review and re-engineering of education programs; leading the development of a website for the Association; and moving health and safety to the forefront of Stakeholder organization agendas. The ideal candidate for this role will have a post-secondary education in business/commerce or a relevant health-related discipline, and demonstrated experience in managing and leading an organization or department. The successful candidate will have experience working with a Board; be strong in developing relationships with multiple stakeholders; have a proven ability to develop and execute strategic plans; and most importantly, possess an engaging and empowering leadership style that inspires those around them. This individual should be credible and competent, with a down to earth and diplomatic style that garners the trust of the people they work with. For more information, please contact.. Executive Source Partners at 306-359-2550/866-399-2550 or search@executivesource.ca. the item for sale or service that you are offering. 2. Describe - the more 3. Don’t Exaggerate - list the features and the condition. Make your description attra-ctive but believable! 4. Include Price - research shows that people are more interested when they know the price. If the price is negotiable, say so. 5. Be Home - when you run your ad, be home or specify the hours buyers can call. Most people won’t call back. These are tips to help you get started. For additional assistance, call us today. 
NORTHERN NEWS SERVICES Ph: (867) 873-9673 Fax: (867) 873-8507 classifieds@nnsl.com For advertising information call collect (867) 979-5990 NEWS/NORTH NWT, Monday, April 18, 2011 43 TENDERS/NOTICES WHEN IT’S TIME FOR A CHANGE... wake up to a world of new career opportunities with the “Employment” section of the classifieds. Check out new listings every week. Your seatbelt won’t work if you don’t wear it! Find jobs in your own area of expertise or set out on a new career path. You’ll also find information about area employment agencies and career management centers, whose services can simplify your job search. So, don’t delay; turn to the classifieds and get started today! 44 NEWS/NORTH NWT, Monday, April 18, 2011 76 Hours -30 CELSIUS 370 Kilometres $20,000 for the Shelter Alicja Barahona has made Canadian history, becoming the first person to complete an ultra marathon in the Arctic Circle. It began and ended in temperatures below minus 30 Celsius over the 370 kilometre run from Inuvik to Tuktoyaktuk and back again. The Arctic Challenge was a personal triumph for Alicja. It was also a point of personal commitment as she raised awareness about homelessness in the Canadian North and raised funds for the Inuvik Homeless Shelter. “It’s amazing the simple things you can do that make such big changes. This run brought a community together to support each other and that is a great feeling.” Pledges and donations totalled $20,000 for Canada’s northern-most emergency shelter. Thank you Alicja for bringing the Arctic Challenge to life. And thank you to our donors, corporate sponsors and the Inuvik Run Club for making it a success. NORTHWEST TERRITORIES POWER CORPORATION Up to the Challenge An NT Hydro Company The Ladies Auxiliary Branch 220 Tyson’s Catering Newspaper for the Northwest Territories, released every monday.
Linux Kernel notes

How to configure and build the Linux kernel and write device drivers for it (e.g. adding a device driver for the Wii Nunchuck by going through the Free Electrons tutorials).

References / To Read

- Linux Kernel In A Nutshell, Greg Kroah-Hartman, O'Reilly Media (PDF of the entire book released by Free Electrons!).
- Essential Linux Device Drivers, Sreekrishnan Venkateswaran, Prentice Hall (not sure if this copy is legit?).
- Exploiting Recursion in the Linux Kernel.
- Linux Kernel and Driver Development Training Lab Book, FreeElectrons.com.
- Linux Kernel and Driver Development Training Slides, FreeElectrons.com.
- Linux Device Tree For Dummies, Thomas Petazzoni.
- Device Tree: The Disaster so Far, Mark Rutland, 2013.
- Device Tree presentations, papers and articles.
- Linux Insides, GitBook by @0xAX.
- The sysfs Filesystem.
- Linus on COE.
- include/linux/jiffies.h and kernel/time/jiffies.c.
- Interrupts.
- Scripts to build a minimal Linux from scratch.
- Time, delays, etc.
- Memory barriers.
- SMATCH.
- The Zen Of KObjects.
- KObjects and sysfs.

The Kernel Development Timeline

[Information from.]

Configure & Build the Linux Kernel

Downloading The Kernel Source & A Look At Its Structure

To download the latest, bleeding-edge kernel source, do the following:

git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

(The same repository is also reachable over the git:// protocol.) Now we can have a look at the first level of the directory structure.

The arch directory contains all of the architecture-dependent code. This code is specific to each platform that the kernel runs on. The kernel itself is kept as generic as possible, but at this low level it does have to be concerned with architecture specifics, and this is where you'll find those implementations.

The block directory contains the Linux block I/O layer files, which are used to help manage block devices such as your hard disk.
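Once the clone finishes, you can tell exactly which kernel version you checked out from the version fields at the top of the top-level Makefile (VERSION, PATCHLEVEL, SUBLEVEL, EXTRAVERSION). A small Python sketch of the idea, run here against an illustrative Makefile fragment rather than a real tree:

```python
import re

def kernel_version(makefile_text):
    """Read 'VERSION.PATCHLEVEL.SUBLEVEL[EXTRAVERSION]' from the top of
    a kernel top-level Makefile, which begins with lines like 'VERSION = 4'."""
    fields = {}
    for line in makefile_text.splitlines():
        m = re.match(r'(VERSION|PATCHLEVEL|SUBLEVEL|EXTRAVERSION)\s*=\s*(\S*)', line)
        if m:
            fields[m.group(1)] = m.group(2)
        if len(fields) == 4:
            break
    return '{VERSION}.{PATCHLEVEL}.{SUBLEVEL}{EXTRAVERSION}'.format(**fields)

# Illustrative fragment; on a real tree you would use open('Makefile').read().
sample = '''VERSION = 4
PATCHLEVEL = 4
SUBLEVEL = 0
EXTRAVERSION = -rc3
NAME = Blurry Fish Butt
'''
print(kernel_version(sample))  # 4.4.0-rc3
```

Handy when you have several trees checked out and have lost track of which is which.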
The Documentation directory is very useful! Here you can find many bits of documentation on the Linux kernel. The files are mostly text files, but under Documentation/DocBook you will find a set of files that can be built using make to produce PDF or HTML documentation. If, for example, you are interested in learning about the kernel debugger, there is a book on that in kgdb.xml. If you want to compile the documentation for just one book, edit Documentation/DocBook/Makefile and change the variable DOCBOOKS: rename or comment out the existing assignment and re-define it with just the book you want (you might want to do this because the build of the complete set of books sometimes fails, or at least it did for me). To compile your DocBook, return to the root directory of the Linux source and run make pdfdocs. The PDF will be built and stored under the Documentation/DocBook directory.

The drivers directory contains a slew of device drivers, organised by type. So, for example, input device drivers (think keyboards, touchpads, etc.) are found under drivers/input and PCI device drivers under drivers/pci. There may be further hierarchical organisation too: under drivers/input there are subdirectories for keyboards, touch screens, mice and so on. Other character device drivers, such as the one behind /dev/null, are found under drivers/char.

The fs directory houses the file system modules. These support a variety of file systems, like FAT, NTFS, ext3, ext4 and more, allowing the Linux user to mount Windows, Linux and Mac file systems, among others.

Configuring The Kernel

The kernel is a pretty complex beast, so there is a lot of configuration that can be done. Most hardware providers will have reference configurations that are used as a basis for product-specific configurations. Some distributions provide the configuration file used to build the kernel in the /boot directory.
Or sometimes in /proc/config.gz, although this is often not enabled as a build option. Anyway, all these configurations live in configuration files (.config files), which are generally many thousands of lines long. Given that there are so many options, Linux provides utilities to help with the configuration process (as well as reference configurations from various distros/manufacturers etc). There are four main ways to configure the Linux kernel. Each of them uses the current config and gives you a way to update it:
- config - a basic command-line interface: a ton of yes/no-style questions, which you don't want to use!
- menuconfig - an ncurses interface with help options etc.
- nconfig - a newer version of menuconfig with an improved user interface, still ncurses based.
- xconfig - a GUI-based interface.

These targets can all be found in scripts/kconfig/Makefile. The current config is stored in the root directory in the file .config. The default config file used can be taken from your system. For example, when I ran menuconfig it created the configuration file from /boot/config-4.4.0-78-generic. When you are compiling your own kernel you will probably specify the architecture, so you'd write something like make ARCH=arm menuconfig, for example. But this will still use a default config and probably not the one you want. Therefore the full sequence will be something like make ARCH=arm corgi_defconfig (which loads arch/arm/configs/corgi_defconfig as the starting .config) followed by make ARCH=arm menuconfig, if you're building for the ARM Corgi platform. When you do this, the old .config will be overwritten by your new configuration. Interestingly, but perhaps unsurprisingly, launching most of the config tools involves a compilation step on the host machine. You can see the exact steps taken by running the make command with the extra option V=1. The make step is required to build the config tool being used.
Menuconfig
Menuconfig presents a nice ncurses display that you can navigate around using the instructions at the top of the window. If you do not have ncurses installed, install it using:

sudo apt-get install libncurses5-dev

You can run menuconfig using:

make menuconfig

What you'll notice is that there is a nice set of menus, and when you use the help you will be given some information about the item you've selected. For example, see the help for System Type > MMU-based Paged Memory Management Support. Nice, but how does menuconfig know all this stuff?! When it runs it reads the main Kconfig file, found in the root directory. This will source the main Kconfig for the architecture you are compiling for. The variable SRCARCH is used, as it is a slightly modified (platform-dependent) version of ARCH. The main Kconfig file is taken from arch/$SRCARCH/Kconfig. So, if we're compiling for arm, the Kconfig file would be found in arch/arm/Kconfig. This file defines a lot of things but also sources a ton of other Kconfig files! We can look in this file to get an idea of the syntax:

...
menu "System Type"

config MMU
	bool "MMU-based Paged Memory Management Support"
	default y
	help
	  Select if you want MMU-based virtualised addressing space
	  support by paged memory management. If unsure, say 'Y'.
...

We can see that a menu called "System Type" is being defined. In the first menuconfig screenshot we can see that menu item highlighted. We can drill into this menu to find the menu item "MMU-based Paged Memory Management Support". If we then select the help feature for this item, we see the second screenshot shown above, which matches the description we found in our main Kconfig file :) Not every menu item is defined in the main file, however. It sources many others and will also pull in all the Kconfig files it finds. For example, there is a Kconfig file in most of the leaf directories of the drivers tree.

Xconfig
A nicer GUI application, which requires Qt.
If you do not have Qt installed, you can do so by running the following command:

sudo apt-get install libqt4-dev

Invoked in the same way (following our example so far, having first loaded the defconfig as described above):

make ARCH=arm xconfig

Xconfig has a few nice advantages, like a GUI which shows you the config symbols next to the menu descriptions, and beefed-up find functionality.

Building The Kernel
The kernel's own documentation in Documentation/kbuild/makefiles.txt is very comprehensive! The makefile target make oldconfig reads the existing .config file and prompts the user for options in the current kernel source that are not found in the file. This is useful when taking an existing configuration and moving it to a new kernel [Ref]. Before the command is run you would have copied an older kernel config file into the kernel root as .config. This make target "refreshes" it by asking questions for newer options not found in the config file.

# From kernel README
"make ${PLATFORM}_defconfig"
	Create a ./.config file by using the default symbol values from
	arch/$ARCH/configs/${PLATFORM}_defconfig.
	Use "make help" to get a list of all available platforms of your
	architecture.

# See also scripts/kconfig/Makefile:
%_defconfig: $(obj)/conf
	$(Q)$< --defconfig=arch/$(SRCARCH)/configs/$@ $(Kconfig)

Aaaah, cross-compiling for Android: make sure you are using the Android prebuilt toolchains!

export PATH=/path-to-android-src-root/prebuilts/gcc/linux-x86/aarch64/aarch64-linux-android-4.9/bin:$PATH
make prepare ARCH=arm64 CROSS_COMPILE=aarch64-linux-android- V=1

V=1 - saving my arse! Modules.symvers ??? (Open question at the time; Module.symvers is looked at in the module-loading notes further down.)

Unbind A Kernel Driver
Most drivers have an entry under /sys/bus/xxx/drivers/yyy. For example, if you ls the directory /sys/bus you will see something similar to the following (list snipped to shorten it):

/sys/bus
|-- i2c
|-- pci
|-- pci_express
|-- platform
|-- pnp
|-- scsi
|-- sdio
|-- serio
|-- spi
|-- usb
<snip>

The different types of buses are listed.
Under each bus, the following structure is seen:

/sys/bus/XXX
|-- devices
\-- drivers

We're interested in the drivers subdirectory. Let's take a look at a sample directory for I2C drivers:

/sys/bus/i2c/drivers
|-- 88PM860x
|-- aat2870
<snip>

Under the drivers directory we can see (above) a list of devices for which drivers exist.

Interrupts & Input Devices
=========================================
/proc/interrupts - look at interrupt lines claimed by drivers - architecture dependent
/proc/stat - shows interrupt counts

Input driver model docs: Documentation/input/, and in particular input.txt. When a driver registers as an input device, the following is auto-created: /dev/input/eventX (cat this to get a dump of Linux input events as they happen), as well as sysfs entries.

/proc/bus/input/devices - where the system keeps track of all input devices
/sys/class/input/eventX - directory - info about the device associated with this event

cat /sys/class/input/eventX/uevent to get device info such as major and minor numbers etc.

getevent and sendevent commands: system/core/toolbox/(get|send)event.c. For example, 'getevent -l' lists the devices, their human-readable names and the associated "eventX". Both tools seem to use /dev/input/eventXX to read and write events. To write events you need root access to make /dev/input/eventXX writeable!

# kbuild supports saving output files in a separate directory.
# To locate output files in a separate directory two syntaxes are supported.
# In both cases the working directory must be the root of the kernel src.
# 1) O=
# Use "make O=dir/to/store/output/files/"
#
# 2) Set KBUILD_OUTPUT
# Set the environment variable KBUILD_OUTPUT to point to the directory
# where the output files shall be placed.
# export KBUILD_OUTPUT=dir/to/store/output/files/
# make
#
# The O= assignment takes precedence over the KBUILD_OUTPUT environment
# variable.

###
# External module support.
# When building external modules the kernel used as basis is considered
# read-only, and no consistency checks are made and the make
# system is not used on the basis kernel. If updates are required
# in the basis kernel ordinary make commands (without M=...) must
# be used.
#
# The following are the only valid targets when building external
# modules.
# make M=dir clean           Delete all automatically generated files
# make M=dir modules         Make all modules in specified dir
# make M=dir                 Same as 'make M=dir modules'
# make M=dir modules_install
#                            Install the modules built in the module directory
#                            Assumes install directory is already created

Linux Kernel and Driver Development Training Lab Book

References:
- Linux Kernel and Driver Development Training Lab Book, FreeElectrons.com.
- Linux Kernel and Driver Development Training Slides, Free Electrons.
- BeagleBone Black System Reference Manual, Rev C.1.
- AM335x Sitara Processors Datasheet.
- AM335x and AMIC110 Sitara Processors Technical Reference Manual.
- BeagleBone, Robert Nelson.
- Setting Up the BeagleBone Black's GPIO Pins.
- BBB Schematic.

Todo read: writes to onboard eMMC.

Notes created whilst working through the Linux Kernel and Driver Development Training Lab Book by FreeElectrons.com, whilst consulting the accompanying slides. Use the lab slides! To begin with I hadn't found them and thought the lab book was a little short on descriptions. When I found out they also had slides it made a lot more sense!

Setup
Note: To begin with I had a rather limited setup where I was doing this. I was compiling on a Linux server but only had a Windows desktop PC. Setting up an NFS server from Windows, or from a VirtualBox running Linux, or through Cygwin took way too much time and was getting in the way of actually learning anything, so I gave up and used a pure Linux system... I'd advise anyone reading this to do the same!!!
Download The Source
Download the Linux kernel:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git --depth=1

The flag --depth=1 means we ignore all history to make the download faster. Use stable releases:

cd ~/linux-kernel-labs/src/linux/
git remote add stable git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
git fetch stable [--depth=1]

Again you can use --depth=1 to speed up the download, but note you won't have any log history. Select a branch:

# list available branches
git branch -a
# Create a local branch starting from that remote branch
git checkout -b 4.9.y stable/linux-4.9.y

Setup USB to Serial
Picocom is a minimal dumb-terminal emulation program. Type Ctrl+A, Ctrl+X to exit once connected.

sudo apt-get install picocom
sudo adduser $USER dialout
picocom -b 115200 /dev/ttyUSB0

Setup TFTP Server

sudo apt-get install tftpd-hpa

By default all files you want to be TFTP'able should be placed in /var/lib/tftpboot. In my install, the directory was owned by root and not in any group, which was a pain, so run sudo chgrp YOUR_GROUP /var/lib/tftpboot to get easier access to this directory. YOUR_GROUP should be a group you already belong to, or a new group - up to you. If you wanted to add a new group, say tftp_users, for example, you could run sudo groupadd tftp_users, then sudo adduser jh tftp_users to add yourself to the group, and then sudo chgrp tftp_users /var/lib/tftpboot to put the directory in this same group. Then probably sudo chmod g+w /var/lib/tftpboot to give this new group write permissions on the directory. Alternatively you could change the root directory that the server uses (see below). To start/stop etc the server use the following commands:

service tftpd-hpa status   # Usefully also prints out logs
service tftpd-hpa stop
service tftpd-hpa start
service tftpd-hpa restart

To edit the server configuration edit /etc/default/tftpd-hpa.
The default when I installed it was this:

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"

To the options I added --verbose.

Setup NFS Server
The Linux kernel, as configured for the lab book exercises, mounts the root file system over NFS. Thus we need to set up an NFS server on our machine. Note: I tried doing this from a Windows box and it is torture, so just use a native Linux machine and save yourself the heartache...

sudo apt-get install nfs-kernel-server

Run sudo vi /etc/exports to add this line, replacing "<user>" with your username and the IP address with your board's IP address:

/home/<user>/linux-kernel-labs/modules/nfsroot 192.168.2.100(rw,no_root_squash,no_subtree_check)

Restart the server:

sudo /etc/init.d/nfs-kernel-server restart

Now unpack the lab files so that you get the directory mentioned above:

cd /home/<user>
wget <URL of the lab archive>
tar xvf linux-kernel-labs.tar.xz

For further information on how to configure the NFS server, and even shorten the export path used, see the Ubuntu community guide SettingUpNFSHowTo.

Setup UBoot
The version of UBoot I had by default on my BB did not have the "saveenv" command :( So, for now at least, I will type the boot commands manually rather than worrying about creating a new UBoot image. Can do that later if this becomes a real pain.

setenv serverip 192.168.2.1
setenv ipaddr 192.168.2.100
tftp 0x81000000 test.txt
md 0x81000000

The memory dump showed the contents of test.txt, so we know the TFTP server is running correctly. NOTE: In UBoot, if you want to copy-paste these into picocom, copy the text into an editor and then replace newlines by "&&" to get a one-liner.

Compiling & Booting The Kernel (+ DTB)

References
- Linux Kernel Makefiles, Kernel Documentation.
- Device Tree For Dummies, T Petazzoni.
- A Symphony of Flavours: Using the device tree to describe embedded hardware, G Likely, J Boyer.
An Alternative Setup
I needed to create a kernel build and file system from "kinda-scratch" to boot off an SD card. The instructions I had to follow were slightly different and I've put some brief notes here.

Compiling The Kernel
This section of the lab book was a little thin. Install the cross-compiler toolchain first:

sudo apt-get install gcc-arm-linux-gnueabi   # Get the cross compiler tool chain
dpkg -L gcc-arm-linux-gnueabi                # Find out path and name of tool chain

If the folder for the gcc-arm-linux-gnueabi- toolchain is not on your PATH, add it, then export the following environment variables:

export CROSS_COMPILE=arm-linux-gnueabi-
export ARCH=arm

To configure the kernel type:

make omap2plus_defconfig
make menuconfig

Note the file omap2plus_defconfig is found in arch/arm/configs/. Make sure the config has CONFIG_ROOT_NFS=y set. To find this, hit "/" and type CONFIG_ROOT_NFS. The search results will look like the screenshot: the red box highlights the information we need - how to find this configuration option in the menu system, which can be quite a maze. Navigate to File systems > Network File Systems and you should see the menu shown below. The image shows a [*] next to "NFS client support". This means that the support will be compiled into the kernel image. All of the options with an [M] next to them are compiled as modules: they will not be included in the compiled kernel image but can be loaded separately as modules later. Options marked with [ ] are not compiled at all. Exit and save your config. You will see the exit message:

# configuration written to .config

The config system has copied the file arch/arm/configs/omap2plus_defconfig to the root directory's .config and merged in any changes you made from the menuconfig utility. Now build the kernel:

# * -jX (optional) sets number of threads used for parallel build.
# * V=1 (optional) puts the build into verbose mode so that you can
#   see the toolchain commands being invoked.
make -j16 V=1

The last build message you should see is:

Kernel: arch/arm/boot/zImage is ready

If you list the directory arch/arm/boot you will see the zImage file and another file named dts/am335x-boneblack.dtb. This is the Linux Device Tree Blob. It gives the entire hardware description of the board in a format the kernel can read and understand [Ref]. Copy both files to the TFTP home directory.

What Are The zImage and DTB Files?
So, we have two files: arch/arm/boot/zImage and arch/arm/boot/dts/am335x-boneblack.dtb, but what are they? The zImage is a compressed kernel image with a little bit of code in front of the compressed blob that makes it bootable and self-extracting, so that the kernel proper can be decompressed into RAM and then run. The am335x-boneblack.dtb is a Linux Device Tree Blob. This is a binary format that the kernel can read and which contains the description of the hardware [Ref]. This method was adopted to try to standardise how hardware information is passed to the kernel, because in the embedded world there was little standardisation and systems varied considerably [Ref]. So, this is what the .dtb file is. It is generated from many .dts files, which are text files containing a human-readable "map" (tree, really) of the system's hardware. Using DTBs we can somewhat decouple the kernel build from the hardware it is running on, rather than having to do a specific kernel build for every hardware variant out there! Most things are moving towards using device trees. If CONFIG_OF is defined you know you are in a kernel with device tree and Open Firmware support enabled.

Booting The Kernel
Now we can boot the kernel.
From UBoot type:

setenv bootargs root=/dev/nfs rw ip=192.168.2.100 console=ttyO0 nfsroot=192.168.2.1:/home/<user>/linux-kernel-labs/modules/nfsroot
setenv serverip 192.168.2.1
setenv ipaddr 192.168.2.100
tftp 0x81000000 zImage
tftp 0x82000000 am335x-boneblack.dtb
bootz 0x81000000 - 0x82000000

The NFS directory it booted from (you should see a message "VFS: Mounted root (nfs filesystem) on device 0:14") should contain the data from the course lab TAR file. If it does not, you will see a kernel panic saying "Kernel panic - not syncing: No working init found". If you see this, make sure that the directory you exported as /home/<user>/linux-kernel-labs/modules/nfsroot contains the modules/nfsroot folder from the lab file.

Based on the previous section we can understand what we have done here. We are booting a zImage. The first argument to bootz is the address of the zImage. The second argument, the hyphen, specifies the address of the initrd in memory as non-existent, i.e., there is no ramfs. The third argument is the address of the DTB that UBoot will pass to the kernel. The other important environment variable set is bootargs: a string that UBoot passes to the kernel. In this case it contains all the information required to tell the kernel to try to mount the root directory over NFS and boot from it.

Writing Modules

References
- Linux Kernel Development, Third Edition, Robert Love.
- Building External Modules, Linux Kernel Documentation.
- Building A Linux Kernel Module Without The Exact Kernel Headers, Glandium.org.
- My solutions to exercises.

Building and Loading
The lab book instructions at this point really are pretty sparse. Writing the module isn't hard, and grepping the source code for files with "version" in their name brought me to <linux-root>/fs/proc/version.c, which shows how to get the Linux version.
My solution is here on GitHub and a snapshot is shown below:

static int __init hello_init(void) {...}
static void __exit hello_exit(void) {...}
module_init(hello_init);
module_exit(hello_exit);

The only interesting things in the snippet are the tokens __init and __exit. The functions marked with __init can be discarded after kernel boot or module load completes, and the functions marked with __exit can be removed by the compiler if the code is built into the kernel, as they would never be called.

Navigate to the NFS file system you unpacked and exported over NFS. Change to the root/hello directory. In the Makefile, I have changed the value of $KDIR as I stashed the kernel source in a different location to that specified in the lab book. Run make all to build. Once built, from the board's console you can type insmod /root/hello/hello_version.ko to load the module. In my GitHub area, I've used my own external build, outside the lab directory, but doing it the lab way will also do an external build. I preferred the former so that I could stash everything in my GitHub repo more easily.
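For reference, external-module Makefiles follow a standard kbuild pattern, so a minimal sketch looks like the fragment below. The KDIR default is a placeholder assumption - point it at wherever you actually stashed and built the kernel tree for the board:

```make
# Minimal kbuild external-module Makefile sketch.
# KDIR is a placeholder - set it to the kernel source/build tree
# that the board's kernel was compiled from.
obj-m := hello_version.o

KDIR ?= $(HOME)/linux-kernel-labs/src/linux

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean
```

The -C $(KDIR) M=$(PWD) invocation is exactly the external-module mechanism described in the kbuild comment block quoted earlier.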
This set of exported kernel symbols is known as the exported kernel interfaces. Modules can only use functions which are explicitly marked as exported in the kernel code by the macro EXPORT_SYMBOL[_GPL] (the _GPL suffix is optional, i.e., we have two macros here). Another file of importance is hello_version.mod.c, which is auto-generated by the build and compiled into the resulting .ko kernel object file. The file contains some information that is used to "tag" the module with the version of the kernel it was built against, so that when it is loaded, the running kernel can check that the module was compiled against the correct kernel headers [Ref]. In hello_version.mod.c, the macro MODULE_INFO(vermagic, VERMAGIC_STRING) is used (defined in linux/moduleparam.h). This macro defines a static, constant variable in the section .modinfo. The variable name is based on the tag, which in this case is "vermagic". The value is defined by VERMAGIC_STRING, which is based on the kernel version and git repo status. Thus, when the module is loaded, the kernel can scan the .modinfo section of the module object file for the "vermagic" symbol and check that it matches the kernel's own version magic, in this way checking that the module being loaded is indeed written for this version of the kernel. In this file we also see how the module initialisation and exit functions are found by the kernel, and also what the macros module_init and module_exit do:

__visible struct module __this_module
__attribute__((section(".gnu.linkonce.this_module"))) = {
    .name = KBUILD_MODNAME,
    .init = init_module,
#ifdef CONFIG_MODULE_UNLOAD
    .exit = cleanup_module,
#endif
    .arch = MODULE_ARCH_INIT,
};

So here we can see a structure that is stored in a specific section. When loading the module the kernel will be able to find this structure in this section and consult the member variables .init and .exit to locate the module's initialisation and exit functions.

Debugging
Bugger!
The first problem I saw was when I tried to load the module using insmod:

[ 7121.807971] hello_version: disagrees about version of symbol module_layout
insmod: can't insert 'hello_version.ko': invalid module format

This implies a mismatch between the kernel version and the header files the module was compiled against! This is very strange as I must surely be building against the right kernel... I've built the kernel and loaded it on the BB after all! Referring back to the file hello_version.mod.c, this symbol can be found:

static const struct modversion_info ____versions[]
__used
__attribute__((section("__versions"))) = {
    { 0xb1dd2595, __VMLINUX_SYMBOL_STR(module_layout) },
    ...

This structure is declared in include/linux/module.h and looks like this:

struct modversion_info {
    unsigned long crc;
    char name[MODULE_NAME_LEN];
};

So we can see that the .mod.c file is creating an array of modversion_info structs in the section named __versions. The symbol named "module_layout" (the macro __VMLINUX_SYMBOL_STR just stringifies its argument) is given a CRC value of 0xb1dd2595. This CRC value has been read out of Module.symvers. So, the "invalid module format" message is due to a mismatch between the module's recorded CRC for the symbol module_layout and the CRC the kernel expects (see check_version() in kernel/module.c). The question is, how on earth has this happened?! Just using modprobe -f <module> [Ref] won't get me out of trouble here either :(

So, I decided to enable debug for just kernel/module.c so that the pr_debug() macros become non-empty and emit debugging information [Ref]. When re-building the kernel I used the following:

make clean && make -j16 CFLAGS_module.o=-DDEBUG

If you type dmesg with debug enabled for module.c, you'll see a lot more information output to the system log. Balls! Cleaning and recompiling the kernel solved the issue!
Adding A Parameter To The Module
Modules can be given parameters either at boot time, if compiled into the kernel, or on module load. They allow some flexibility in module configuration so that, for instance, you could take the same binary and run it on different systems by just toggling a parameter... useful! Define one using:

/* params:
 * name - name of the parameter variable you declared in your code and
 *        the name exposed to the user (to use different names use
 *        module_param_named())
 * type - parameter's data type: byte | [u]short | [u]int | [u]long |
 *        charp | [inv]bool
 * perm - octal format or by or'ing S_IRUGO | S_IWUSR etc etc
 */
... definition-of-your-variable-name ...
module_param(name, type, perm);

So for the lab exercise, we have to add:

static char *who_param = NULL;
module_param(who_param, charp, 0644);

Another way of doing this would be to get the kernel to copy the string into a buffer:

#define MAX_WHO_SIZE 25
static char who_param_buf[MAX_WHO_SIZE];
module_param_string(who_param, who_param_buf, MAX_WHO_SIZE, 0644);

Recompile the module and then load it by typing, for example:

insmod hello_version.ko who_param=JEHTech

If you want to see the parameters with which a module was loaded, you can use this for a list of parameters:

ls /sys/module/<module name>/parameters

To find out what a parameter's value is, cat the file with the parameter's name you're interested in.

Adding Time Information
See include/linux/timekeeping.h.

/*** In your driver */
#include <linux/time.h>
void do_gettimeofday(struct timeval *tv)

/*** From include/uapi/linux/time.h */
struct timeval {
    __kernel_time_t      tv_sec;  /* seconds */
    __kernel_suseconds_t tv_usec; /* microseconds */
};

So for our driver, we need to record the time when it was loaded, then record the time when it was removed and calculate the difference. Also note that floating-point operations should not be done in kernel code, in case you were thinking of converting values to doubles to do the maths.
/*** Global variables: */
struct timeval load_time, unload_time;

/*** In the init function: */
do_gettimeofday(&load_time);

/*** In the exit function */
struct timeval diff_time;
do_gettimeofday(&unload_time);
diff_time.tv_sec = unload_time.tv_sec - load_time.tv_sec;
diff_time.tv_usec = unload_time.tv_usec - load_time.tv_usec;
if (diff_time.tv_usec < 0) {
    diff_time.tv_usec += 1000000;
    diff_time.tv_sec -= 1;
}
printk(KERN_ALERT "Driver loaded for %ld seconds, %ld usec\n",
       diff_time.tv_sec, diff_time.tv_usec);
printk(KERN_ALERT "Goodbye\n");

Follow Linux Kernel Coding Standards

~/linux-kernel-labs/src/linux/scripts/checkpatch.pl --file --no-tree hello_version.c

I2C Driver For Wii Nunchuck

References
- Essential Linux Device Drivers, S. Venkateswaran, Prentice Hall.
- How To Instantiate I2C Devices, Linux Docs.
- Writing I2C Clients, Linux Docs.
- Checking I2C Functionality Supported, Linux Docs.
- Free Electrons Guide To The Wii Nunchuck I2C Interface.

Device Tree Setup
The BeagleBone Black DTS file is in arch/arm/boot/dts/am335x-boneblack.dts. So, I need to do three things: first, check that the pinmux puts the I2C bus on the P9 pins; second, enable the second I2C bus; and third, create a definition for the new device on that bus. Snapshot of P9 muxing from the data sheet: the ARM Cortex-A8 memory map can be found in the AM335x Tech Ref Manual [Ref]; the pad control registers (base 0x44E1_0000) and the I2C1 registers (0x4802_A000) can be found in section 9.2.2 of the same document. For the register offsets see section 9.3.1. If we look in the am33xx.dtsi file for our chip, we will find that the i2c1 interface is located at that address:

i2c1: i2c@4802a000 {
    compatible = "ti,omap4-i2c";
    ...

This is the only place where i2c1 is defined in the BeagleBone Black device tree files. What we're interested in is the MUX mode: in the image above we can see that we want either mode 2 or mode 3, as these are the only two modes for which the I2C1 pins are muxed onto the P9 connector.
What is a little annoying is that in the above image there are a lot of signal "names". These names reflect the pin functionality when used in a particular way (i.e., the signal that will be output), but there is also a specific name for the actual pin. To find this out one has to refer to the AM335x data sheet. To investigate the I2C setup a little further we'll look back at the first I2C bus, I2C0, to see how it is being configured in the DTS files... The pin mux for the first I2C bus looks like this in arch/arm/boot/dts/am335x-bone-common.dtsi:

am33xx_pinmux {
    pinctrl-names = "default";
    pinctrl-0 = <&clkout2_pin>;
    ...
    i2c0_pins: pinmux_i2c0_pins {
        pinctrl-single,pins = <
            AM33XX_IOPAD(0x988, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c0_sda.i2c0_sda */
            AM33XX_IOPAD(0x98c, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c0_scl.i2c0_scl */
        >;
    };

This block of DTS script defines the pin controller device. It uses the "alternative" binding discussed in the Linux pin control bindings doc [Ref]. This means that the pin binding is specified using a "hardware based index and a number of pin configuration values", which we see specified in the AM33XX_IOPAD() macro. The node "pinmux_i2c0_pins", annotated with the label "i2c0_pins", is a pin configuration child node that a client device (any module whose signals are affected by pin configuration) can reference. So, we would expect the I2C0 config to reference this... and it does:

&i2c0 {
    pinctrl-names = "default";
    pinctrl-0 = <&i2c0_pins>;

Now some vocabulary: a client device is any module whose signals are affected by pin configuration [Ref]. The node i2c0 shown above is a client device, as its signals are affected by how the pinmux controller is configured. Each client device can have one or more "pin states", which are assigned contiguous numbers starting from zero. This client device only has the one "state". The field "pinctrl-0" is state number 0.
It specifies a list of phandles pointing at pin config nodes, which must be (grand)children of the pin controller device. The field "pinctrl-names" gives a textual name to this particular state. Were there "pinctrl-1"..."pinctrl-N" states, the names list would have N+1 members: the first name would name "pinctrl-0", the second "pinctrl-1" and so on. In this case there is only one state, named "default".

Back to the node labelled "i2c0_pins" in the pin controller device... what does the child property "pinctrl-single,pins" mean? Well, "the contents of each of those ... child nodes is defined entirely by the binding for the individual pin controller device. There exists no common standard for this content" [Ref]. There is some clue, however: there does seem to be some kind of, at least de-facto, naming convention. Where we see "pinctrl-single,pins", the part before the comma is generally the driver name. So we know (it also helps that the lab book tells us) that the pinctrl-single driver is being used, and a quick search leads us to drivers/pinctrl/pinctrl-single.c. If we look at the driver we can see that it is described as a one-register-per-pin type device tree based pinctrl driver, and that in the function pcs_parse_one_pinctrl_entry() it parses the device tree looking for nodes with the property "pinctrl-single,pins". The telling part of the function is:

/* Index plus one value cell */
offset = pinctrl_spec.args[0];
vals[found].reg = pcs->base + offset;
vals[found].val = pinctrl_spec.args[1];

The "pinctrl-single,pins" property is a list of offset and value "tuples", which agrees with what we noted about the pin binding being specified using a "hardware based index and a number of pin configuration values". In the AM335x, each configurable pin has its own configuration register for pull-up/down control and for the assignment to a given module: okay good, that matches the pinctrl-single driver's description.
Section 9.3 of the TRM, table 9-10, lists the memory-mapped registers for the control module. From this we can see that registers 0x988 and 0x98C are labelled conf_i2c0_sda and conf_i2c0_scl respectively, which is what we would expect for the I2C0 interface. Great, i2c0 looks good, which gives us confidence to construct something for i2c1.

We have already looked up the pin names for i2c1: "spi0_cs0" and "spi0_d1" (we're using mode 2), so we need to look these up in the same table. The table gives us the respective offsets 0x95C and 0x958, so we can construct our addition to the DTSI file... The pin controller and i2c bus/devices are defined in am335x-bone-common.dtsi and am335x-boneblack.dts. So, as the lab book instructs, I will put my definitions in a new am335x-customboneblack.dts file where I can define the extension to the pin controller and a new i2c1 bus node and its nunchuck device node:

    am33xx_pinmux {
        i2c1_pins: pinmux_i2c1_pins {
            pinctrl-single,pins = <
                AM33XX_IOPAD(0x958, PIN_INPUT_PULLUP | MUX_MODE2) /* i2c1_sda */
                AM33XX_IOPAD(0x95c, PIN_INPUT_PULLUP | MUX_MODE2) /* i2c1_scl */
            >;
        };
    };

    &i2c1 {
        pinctrl-names = "default";
        pinctrl-0 = <&i2c1_pins>;
        status = "okay";
        clock-frequency = <100000>;

        wii_nunchuck {
            compatible = "nintendo,nunchuk";
            reg = <0x52>;
        };
    };

One last question is where the make system gets information about what DTB to construct for our target. The file arch/arm/boot/dts/Makefile lists which DTBs should be generated at build time [Ref]. In this file we can see the Linux make target dtb-$(CONFIG_SOC_AM33XX), which lists am335x-boneblack.dtb as a target. This is a classic kbuild definition: if CONFIG_SOC_AM33XX is defined then this target will be built, otherwise it is ignored. Once built, the precise build commands used are saved in arch/arm/boot/dts/.am335x-boneblack.dtb.cmd. I'm copying am335x-boneblack.dts to am335x-boneblack-wiinunchuck.dts and adding its DTB as a dependency for the dtb-$(CONFIG_SOC_AM33XX) target.
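The Makefile edit itself is tiny. A sketch of what the amended list might look like (the surrounding entries are elided and the exact list varies by kernel version):

```make
# arch/arm/boot/dts/Makefile (sketch -- entry list varies by kernel version)
dtb-$(CONFIG_SOC_AM33XX) += \
	am335x-bone.dtb \
	am335x-boneblack.dtb \
	am335x-boneblack-wiinunchuck.dtb
```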
Can build it using:

    make dtbs

This outputs the file arch/arm/boot/dts/am335x-boneblack-wiinunchuck.dtb. This needs to be copied to the tftp directory on the host and the U-Boot boot command changed to download this rather than the "stock" DTB file.

Urg... working on another BBB kernel compilation and have created the following, which explains how the pinctrl driver is found... This still left me with the question: how does the device that requests a certain pin-muxing get that pin-muxing? After all, for most devices I see requesting a pin-muxing in the device tree, there doesn't appear to be anything to read the pin-muxing in their probe() function. The reason is this (kinda - see the paragraphs after the quote too!):

    When a device driver is about to probe, the device core will automatically attempt to issue pinctrl_get_select_default() on these devices. This way driver writers do not need to add any of the boilerplate code ... So if you just want to put the pins for a certain device into the default state and be done with it, there is nothing you need to do besides providing the proper mapping table. The device core will take care of the rest.

Err... is this true? I've had a little search through the 4.4 kernel and the references I find to pinctrl_get_select_default() are pretty minimal. They seem to occur in some device specific files and then ones that look more generic like gpio-of-helper.c::gpio_of_helper_probe(). But even that seems like it is a specific driver, which probably does the pinctrl "boilerplate", but would need to be compiled into the kernel, and in any case apparently only works for GPIOs. A search for the device manager and where it might probe devices revealed dd.c::really_probe(), part of the attempt to bind a device with a driver, which calls pinctrl_bind_pins() before probing the device. The comment for pinctrl_bind_pins() says it is "...called by the device core before probe...", which gives us the answer for this kernel.
And, in fact, it calls pinctrl_set_state() for the default state. What I also found is that there is an "init" state too that will supersede the default state at initialisation. The above quote is either inaccurate or for another kernel version.

The Linux I2C Core

The I2C core is a set of convenience functions for driver developers that "hook" into the Linux driver infrastructure... The I2C core is a code base consisting of routines and data structures available to host adapter drivers and client drivers. Common code in the core makes the driver developer's job easier.

The Linux Device Tree

References:
- Device Tree Usage, ELinux.org
- Linux Device Tree For Dummies, Thomas Petazzoni.
- A Symphony of Flavours: Using the device tree to describe embedded hardware, G. Likely, J. Boyer.
- Device Tree presentations, papers, articles.
- Device Tree Compiler Manual.
- How To Compile And Install The Device Tree Compiler On Ubuntu.

Linux uses device trees to avoid having to recompile the kernel for every single variation of a board. So, on one architecture, say for example 64-bit Arm, we don't have to recompile the kernel for every possible combination of 64-bit Arm chip and its peripherals. Instead, a device tree is stored somewhere, for example an EPROM, and can be passed to the kernel at boot time so that the same binary kernel image can run on multiple platforms and the platforms themselves tell the kernel binary what they "look" like.

The following image gives a basic, high-level introduction to the device tree file syntax... Device tree files can also import other device tree files and also C header files, so that it is easy and less error prone to share named quantities between kernel code and the device tree. The device tree is exposed in Linux under the /proc/device-tree/ directory. The directories are paths in the device tree and the nodes can all be cat'ed.
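For reference, a basic device tree source fragment looks something like the following. This is made up for illustration (the model string, node names, GPIO numbers and included files are all invented), but it shows the main syntactic elements: the root node, properties as strings and cells, child nodes, labels, unit addresses and includes:

```dts
/dts-v1/;
#include "soc-base.dtsi"            /* DTS files can include other DTS files */
#include <dt-bindings/gpio/gpio.h>  /* ...and C header files for named constants */

/ {                                        /* the root node */
	model = "Acme Example Board";      /* property = string */
	#address-cells = <1>;              /* property = cell (32-bit value) */

	leds {                             /* a child node */
		status_led: led@0 {        /* label: node-name@unit-address */
			gpios = <&gpio1 21 GPIO_ACTIVE_HIGH>;
		};
	};
};
```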
If you have the device tree compiler on your system, which is contained in the device-tree-compiler package, you can use dtc -I fs /sys/firmware/devicetree/base [Ref].

Building DTC From Scratch

I wanted to run dtc natively on my target to double check that the DTB blobs it was using were correct. To cross build the DTC itself:

    git clone git://git.kernel.org/pub/scm/utils/dtc/dtc.git
    cd dtc
    export CROSS_COMPILE=arm-linux-gnueabihf-
    export ARCH=arm
    make V=1 NO_PYTHON=1 # NO_PYTHON stops it trying to build Python library/bindings

Linux DTB Accessor Functions Source

The device tree access functions are found under the directory drivers/of:

Interrupts

References:
- Interrupt definitions in DTS (device tree) files for Xilinx Zynq-7000 / ARM, Eli Billauer's Tech Blog.

In the above image we saw the part of the DTS file that read interrupts = <intr-specifiers>. The interrupt specifiers are an n-tuple, the meaning of which seems specific to the interrupt controller. For example, PCI interrupt numbers only use one cell, whereas the system interrupt controller uses 2 cells for the irq number and flags. When it is a 3-tuple, it looks like it is likely to mean the following:

- SPI flag - 0 means the device is not using a Shared Peripheral Interrupt (SPI), anything else means that it is a shared interrupt.
- Interrupt line number. This is the hardware interrupt number. It is a peripheral interrupt identifier in the actual hardware interrupt controller.
- Interrupt type - edge, level triggered etc. This can be an ORed combination of some of the flags in irq.h. For example, it could be IRQ_TYPE_LEVEL_HIGH or IRQ_TYPE_EDGE_RISING.
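Putting the three cells together, a hypothetical device node using such a specifier might look like this. The node name, addresses, compatible string and line number are all made up for illustration; the IRQ_TYPE_* names come from the dt-bindings headers that DTS files can include:

```dts
#include <dt-bindings/interrupt-controller/irq.h>

/* Hypothetical peripheral using a 3-cell interrupt specifier */
uart5: serial@48060000 {
	compatible = "acme,example-uart";  /* made-up compatible string */
	reg = <0x48060000 0x1000>;
	interrupt-parent = <&intc>;
	/* <SPI-flag line-number type>: hw line 73, level-triggered high */
	interrupts = <0 73 IRQ_TYPE_LEVEL_HIGH>;
};
```

The interrupt-parent phandle is what tells the kernel which controller's binding defines how to decode those three cells.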
TODO: interrupt numbers lose all correspondence to hardware interrupt numbers. There is a mechanism to separate controller-local interrupt numbers, called hardware irqs, from Linux IRQ numbers: the irq_alloc_desc*() and irq_free_desc*() APIs provide allocation of irq numbers.

The #address-cells property indicates how many cells (i.e. 32 bit values) are needed to form the base address part in the reg property. #interrupt-cells indicates the number of cells in the interrupts property for the interrupts managed by the selected interrupt controller. The #interrupt-cells property is used by the root of an interrupt domain to define the number of <u32> values needed to encode an interrupt specifier.

include/linux/of.h:

    struct device_node {
        const char *name;
        const char *type;
        phandle phandle;
        char *full_name;
        struct property *properties;
        struct property *deadprops;     /* removed properties */
        struct device_node *parent;
        struct device_node *child;
        struct device_node *sibling;
        struct device_node *next;       /* next device of same type */
        struct device_node *allnext;    /* next in list of all nodes */
        struct proc_dir_entry *pde;     /* this node's proc directory */
        struct kref kref;
        unsigned long _flags;
        void *data;
    #if defined(CONFIG_SPARC)
        char *path_component_name;
        unsigned int unique_id;
        struct of_irq_controller *irq_trans;
    #endif
    };

    struct property {
        char *name;
        int length;
        void *value;
        struct property *next;
        unsigned long _flags;
        unsigned int unique_id;
    };

include/linux/of_irq.h:

    struct of_irq {
        struct device_node *controller; /* Interrupt controller node */
        u32 size;                       /* Specifier size */
        u32 specifier[OF_MAX_IRQ_SPEC]; /* Specifier copy */
    };

irq_of_parse_and_map() (and then irq_create_of_mapping() in kernel/irq/irqdomain.c)... In of_irq_map_one():

- Get the interrupts property - returns an array of u32s.
- Get the reg property.
- Get the number of interrupt cells.
- Get the parent interrupt controller - returns a device_node struct pointer.
- Call of_irq_map_raw(p, intspec + index * intsize, intsize, addr, out_irq), where:
  - p: the parent interrupt controller.
  - intspec: pointer to the first interrupt property value in the list, plus index * intsize to select the interrupt to resolve.
  - intsize: the number of values (#cells) in an interrupt property.
  - addr: the reg property.
  - out_irq: the of_irq struct that gets filled in.

of_irq_map_raw() then searches up the interrupt tree to find the first #interrupt-cells property, searches there and further up to find the interrupt-controller, and then splits out the intspec into the out_irq struct's specifier[] array. irq_create_of_mapping() translates a device tree interrupt specifier to a valid Linux irq number.

LOOK IN LOGS FOR:

    pr_debug("%s: mapped hwirq=%i to irq=%i, flags=%x\n", controller->full_name, (int)hwirq, irq, type);

Linux Device Drivers

References:
- Linux Device Driver Model Overview, Linux Docs.
- Device Drivers, Linux Kernel Docs.
- Linux Kernel and Driver Development Training Slides, FreeElectrons.com.

Intro

The slides do a fantastic job of describing the Linux driver infrastructure. The following image is an annotated rehash of their diagrams with some extras added in from the Linux docs...

So what is the Linux device driver core? It is a central "repository" that tracks all the device busses in the system and their types, all the available drivers and all the devices, and marries drivers with devices when they are detected. Bus types define, surprisingly, a type of bus, for example PCI or I2C. The physical hardware that manages access to a physical instance of the bus type is called an "adapter driver". The device structure represents physical devices that are attached to an adapter driver, and device_drivers are bound to devices when a device is detected by the system. All devices have a class so that the user can generalise devices by their use. Having looked through the driver core's code a little, it looks like devices are detected like this (hot pluggable devices must have another route to driver binding).
The driver will register the device(s) for which it caters either at system start-up or dynamically (when insmod is called to load a driver). This will eventually result in a call to the driver core's device_add(), which will do things like creating the sysfs entries for the device, adding the device to the bus_type via bus_add_devices(), and finally calling bus_probe_device(), which will eventually, via either the bus_type or directly, call the driver's probe() method. Buses that support hot-plug devices, such as USB for example, must have another way to alert the core that a device has become available. For now, I'm ending my learning-via-code-base as it's probably a bit more depth than I need right now.

Which Drivers

You can see which drivers are statically compiled into the kernel using the following command [Ref]:

    cat /lib/modules/$(uname -r)/modules.builtin

You can see which drivers are loaded as modules using the following:

    cat /proc/modules
    # Or to pretty print it...
    lsmod

Device Attributes

References:
- Sysfs - The file system for exporting kernel objects, Linux Kernel documentation.

Attributes can be represented in the sysfs file system for kobjects, which is useful from a device point of view because it allows the driver to intercept reads and writes of these attributes in order to control a device. For example, the Atmel MaxTouch device driver creates several attributes, such as "pause", which when written to can be used to pause the touch screen device being driven. Or it could export an attribute that would let the user configure touch sensitivity, for example.
Attributes are defined via the device_attribute structure:

    struct device_attribute {
        struct attribute attr;
        ssize_t (*show)(struct device *dev, struct device_attribute *attr,
                        char *buf);
        ssize_t (*store)(struct device *dev, struct device_attribute *attr,
                         const char *buf, size_t count);
    };

You'd make one of these for each attribute you want to create and then register it using:

    int device_create_file(struct device *, const struct device_attribute *);

To make defining these structures easier use:

    DEVICE_ATTR(myname, mode, showFunc, storeFunc)

Normally you always have a read but may not want to allow a write, in which case storeFunc can be NULL. It will create and initialise a device_attribute struct for you and give it the name dev_attr_myname, which you can then pass to device_create_file(). The MaxTouch driver has quite a few attributes so it stores them in a (NULL-terminated) array of struct attribute pointers, which it then embeds in a struct attribute_group, which can be passed in bulk to sysfs_create_group():

    static DEVICE_ATTR(myname1, mode1, showFunc1, storeFunc1);
    ...
    static DEVICE_ATTR(mynameN, modeN, showFuncN, storeFuncN);

    static struct attribute *mxt_attrs[] = {
        &dev_attr_myname1.attr,
        ...
        &dev_attr_mynameN.attr,
        NULL
    };

    static const struct attribute_group mxt_attr_group = {
        .attrs = mxt_attrs,
    };

    ...
    // Later during initialisation...
    error = sysfs_create_group(&dev.kobj, &mxt_attr_group);

Pin Multiplexing

References:
- Linux Kernel and Driver Development Training Slides, Free Electrons.
- PINCTRL (PIN CONTROL) subsystem, Linux Docs.
- Pin Control Bindings, Linux Docs.
- BeagleBone Black System Reference Manual, Rev C.1.
- GPIOs on the Beaglebone Black using the Device Tree Overlays, Derek Molloy.

Introduced in v3.2: Hardware modules that control pin multiplexing or configuration parameters such as pull-up/down, tri-state, drive-strength etc are designated as pin controllers. Each pin controller must be represented as a node in the device tree, just like any other hardware module.
The pinmux core takes care of preventing conflicts on pins and calling the pin controller driver to execute different settings. For example, the Beagle Bone Black has a limited number of output pins and you can choose what gets output via the P9 header: P9 pin 17 can be used by either the SPI, I2C, UART or PWM components of the SoC [Ref]. A list of a load of pin control device-tree bindings can be found in the Linux source tree documentation folder under devicetree/bindings/pinctrl.

From a driver's point of view, it must request a certain pin muxing from the pin control subsystem, although generally it is discouraged to let individual drivers get and enable pin control. So, for example, in the Free Electrons lab book we would assume that our Wii Nunchuck is permanently connected to the system and so our BeagleBone configuration would always be set up to enable the I2C muxing on the P9 connector.

To see a list of the claimed pin muxes (and the GPIOs) you can type the following:

    cat /sys/kernel/debug/pinctrl/44e10800.pinmux/pinmux-pins

Just remember these are the pins of the chip, not just, for example, the GPIO pins!!

Wait Queues

References:
- Driver porting: sleeping and waking up, Corbet on LWN.net.
- Wait queues vs semaphores in Linux, answer on StackOverflow by user Anirudh Ramanathan.
- My little toy example. Wait queues and timers.

The wait queue is a mechanism that lets the kernel keep track of kernel threads that are waiting for some event. The event is programmer defined, so each bit of code can declare its own wait queues, add kthreads to a queue when they should block waiting on a condition, and then signal all kthreads waiting on that condition when it is met. The queue, accessed through the wait-queue API, is always accessed atomically, so many threads can use the same queue to wait on the same condition(s).
The basic code structure for waiting on a condition (adapted from the reference, with comments added) is:

    // Define the root of the list of kthreads that wish to
    // sleep on whatever condition this queue represents
    DECLARE_WAIT_QUEUE_HEAD(queue);

    // The wait_queue_t structure that we will add to the above list. This
    // struct will hold our task details so we can be woken up later on
    // when the condition is met
    DECLARE_WAITQUEUE(wait, current);

    while (true) {
        // Add this task to the wait queue
        add_wait_queue(&queue, &wait);

        // Tell the scheduler to take us off the runnable queue.
        set_current_state(TASK_INTERRUPTIBLE);

        // The condition wasn't met so allow the scheduler to put someone
        // else on the CPU (it won't be us because we've said we don't
        // want to be on the ready queue by setting our state above)
        schedule();

        // We're awake again, which means we either got a signal or
        // the condition has been met
        remove_wait_queue(&queue, &wait);

        // Check for the condition that is set somewhere else...
        if (condition)
            break;

        if (signal_pending(current))
            return -ERESTARTSYS;
    }
    set_current_state(TASK_RUNNING);

There's one small thing to note in the above example vs the reference... I had to move where the condition was checked. I think their logic was a little wrong because if you don't check after the schedule, you'll always do one more wait than you need to! One or two examples I came across use something called interruptible_sleep_on(), but apparently this is deprecated since 2.6 because it is prone to race conditions [Ref].

One thing that I wondered was why I would use a wait queue over a semaphore or vice versa. Luckily, just typing that into Google led me to this answer:

    A semaphore is more of a concept, rather than a specific implementation ... The Linux semaphore data structure implementation uses a wait-queue. Without a wait queue, you wouldn't know which process demanded the resource first, which could lead to very large wait times for some.
    The wait-queue ensures fairness, and abates the resource starvation problem.

Or, instead of using macros to initialise static variables, you can use:

    init_waitqueue_head(&waitq_head);
    init_waitqueue_entry(&waitq_entry, current);

Work Queues

Work queues allow you to request that code be run at some point in the future...

How Drivers Request Firmware

Snippets from kernel v3.18.20, so possibly already quite out of date :(. Drivers for devices that require firmware may want to provide a facility by which the firmware can be updated. To do this the driver will call the kernel function request_firmware():

    request_firmware(const struct firmware **firmware_p, const char *name, struct device *device)

Where firmware_p is used to return the firmware image identified by the name for the specified device. Following the code one can see that there are various ways that the firmware can be located. The first location to be searched is the "built-in" location, i.e., in the kernel image itself. A special section of the kernel image, called .builtin_fw, is created to hold the name and data associated with firmware images and is searched as follows:

    // include/linux/firmware.h
    struct builtin_fw {
        char *name;
        void *data;
        unsigned long size;
    };

    // drivers/base/firmware_class.c
    extern struct builtin_fw __start_builtin_fw[];
    extern struct builtin_fw __end_builtin_fw[];

    static bool fw_get_builtin_firmware(struct firmware *fw, const char *name)
    {
        struct builtin_fw *b_fw;

        for (b_fw = __start_builtin_fw; b_fw != __end_builtin_fw; b_fw++) {
            if (strcmp(name, b_fw->name) == 0) {
                fw->size = b_fw->size;
                fw->data = b_fw->data;
                return true;
            }
        }

        return false;
    }

But how do these files make it into the firmware image in the first place? The answer comes from the Makefile firmware/Makefile, a snippet of which appears below. Note that the kernel must have been built with CONFIG_FIRMWARE_IN_KERNEL=y.
    # ./firmware/Makefile
    cmd_fwbin = $@;\
        echo "    .section .rodata" >>$@;\
        echo "    .p2align $${ASM_ALIGN}" >>$@;\
        echo "_fw_$${FWSTR}_bin:" >>$@;\
        echo "    .incbin \"$(2)\"" >>$@;\                        ## The binary firmware data inserted here
        echo "_fw_end:" >>$@;\
        echo "    .section .rodata.str,\"aMS\",$${PROGBITS},1" >>$@;\
        echo "    .p2align $${ASM_ALIGN}" >>$@;\
        echo "_fw_$${FWSTR}_name:" >>$@;\
        echo "    .string \"$$FWNAME\"" >>$@;\
        echo "    .section .builtin_fw,\"a\",$${PROGBITS}" >>$@;\ ## Section identified by symbol __start_builtin_fw in gcc
        echo "    .p2align $${ASM_ALIGN}" >>$@;\
        echo "    $${ASM_WORD} _fw_$${FWSTR}_name" >>$@;\         ## struct builtin_fw->name
        echo "    $${ASM_WORD} _fw_$${FWSTR}_bin" >>$@;\          ## struct builtin_fw->data
        echo "    $${ASM_WORD} _fw_end - _fw_$${FWSTR}_bin" >>$@; ## struct builtin_fw->size

    <snip>

    $(patsubst %,$(obj)/%.gen.S, $(fw-shipped-y)): %: $(wordsize_deps)
        $(call cmd,fwbin,$(patsubst %.gen.S,%,$@))

The patsubst macro finds all white-space separated words in $(fw-shipped-y) and replaces them with $(obj)/<word>.gen.S to create a list of targets for this command. For each of those targets the command cmd_fwbin is called to create the target by echoing the linker section asm commands into an asm file (the .incbin puts the image into the section so that the firmware becomes part of the compiled kernel image). Thus once the asm files are created, built and linked into the kernel image, the kernel includes these bits of firmware in its final binary. $(fw-shipped-y) is part of the KBuild syntax:

    fw-shipped-$(CONFIG_DRIVER_NAME) += ...

When a driver defines this variable it will, via the KBuild magic, get included in $(fw-shipped-y), assuming that the kernel was built using the CONFIG_FIRMWARE_IN_KERNEL=y config option.

If the image is not found in the built-ins, the next port of call is to engage a user-mode helper to grab the contents of the firmware from a file and "pipe" it into a driver buffer. How do we know where on the file system to look?
The answer is found in drivers/base/firmware_class.c:

    /* direct firmware loading support */
    static char fw_path_para[256];
    static const char * const fw_path[] = {
        fw_path_para,
        "/lib/firmware/updates/" UTS_RELEASE,
        "/lib/firmware/updates",
        "/lib/firmware/" UTS_RELEASE,
        "/lib/firmware",
        "/firmware/image"
    };

Where fw_path_para is a string that can be passed to the kernel via the command line to allow boot time configuration of where firmware files can be looked for.

Debugging

FTrace

References:
- Debugging the kernel using Ftrace - part 1.
- Debugging the kernel using Ftrace - part 2.
- trace-cmd: A front-end for Ftrace.
- Secrets of the Ftrace function tracer.

The above are some really useful references on how to use ftrace to see what is going on inside the kernel, using files found in /sys/kernel/debug/tracing. Use the article "Secrets of the Ftrace function tracer" to see how to limit the incredible amount of information that can be thrown at you through the trace. To get a nicer interface to the above, take a look at trace-cmd.

TO DO

Firmware

Jiffies

Defined in . See functions like time_after(a,b) or time_in_range(a,b,c) to deal with timer wrapping.

Other time

    const u64 now = ktime_to_ns(ktime_get());
    const u64 end = ktime_to_ns(ktime_get()) + 10; /* 10ns */

Sleeping

Kernel Threads

See:

    struct task_struct *kthread_create(int (*threadfn)(void *data), void *data, const char *namefmt, ...);
    struct task_struct *kthread_run(int (*threadfn)(void *data), void *data, const char *namefmt, ...);
    int kthread_stop(struct task_struct *thread);

    /**
     * kthread_stop - stop a thread created by kthread_create().
     * @k: thread created by kthread_create().
     *
     * Sets kthread_should_stop() for @k to return true, wakes it, and
     * waits for it to exit. This can also be called after kthread_create()
     * instead of calling wake_up_process(): the thread will exit without
     * calling threadfn().
     *
     * If threadfn() may call do_exit() itself, the caller must ensure
     * task_struct can't go away.
     *
     * Returns the result of threadfn(), or %-EINTR if wake_up_process()
     * was never called.
     */

Kernel Semaphores

    #include <linux/semaphore.h>

    if (down_interruptible(&dev->sem))
        return -ERESTARTSYS;

Kernel Timers

To schedule a recurring short task that executes in interrupt context and on the CPU that created the timer. Reference:

    void timed_function(unsigned long tmrdata)
    {
        struct my_data *data = (struct my_data *)tmrdata;

        /* Code to generate the key press */

        /* Atomically access the direction */
        if (I should still be running) {
            /* Atomically access configured delay */
            timer.expires += msecs_to_jiffies(10);
            add_timer(&data->something.timer);
        }
    }

    struct timer_list timer;
    init_timer(&timer);
    timer.data = (unsigned long)&mxt_data;
    timer.function = timed_function;
    timer.expires = jiffies + msecs_to_jiffies(10);

See also del_timer() to remove a timer from the queue, and del_timer_sync().

Build On Ubuntu

ref:

Wanted this to quickly test and play with some driver stuff that didn't require any actual hardware... just get familiar with some APIs.

    $ uname -r
    4.4.0-83-generic

Misc Notes

Building Kernel and Drivers

You need to know exactly the configuration of the kernel on which your modules will be loaded, which means you must compile your kernel before compiling your modules so that you have what is needed to compile your modules.

More and more of Linux's functionality is built as modules:
1. Reduces the size of the kernel.
2. Speeds up the boot time.

The kernel build makefile is very complex and needs at least two options to cross compile it - ARCH and CROSS_COMPILE. The default ARCH is the architecture of the host machine, not necessarily x86. GCC is the only compiler that can compile the Linux kernel. The Linux kernel is not a C compliant program because it uses features of GCC that are not specified by the language.
But these features are needed to make a kernel, which makes the kernel specific to the compiler on which it was tested, as these are undefined features of the C language.

Use the "O" option to compile out of source so that you can use the same source for different build variants.

Never modify .config manually, because:
1. Tools can check this automatically - use them to detect dependencies between options etc. This avoids incorrect configs.
2. The C compiler cannot read this file. It reads .h files that provide the same options. The config tools generate the .h files in an autogenerated location in the build tree that duplicates the configuration options provided in the .config file. The source is .config and the output is the .h.

There are 1000s of config options, so start with a default config for your board: [my_board_or_soc]_defconfig. These used to be board specific but are now SoC specific. The device tree is the BSP. The kernel tends to be generic for all boards made from one given SoC, or even SoCs of a given family, and the device tree provides the customisations due to constraints on the system designed using the SoC, e.g. PCB layout.

Build outputs are zImage and uImage. U-Boot traditionally used uImage, an image packaged for U-Boot, but now U-Boot doesn't need this anymore and usually uses a zImage or a plain image. Images generally include a checksum of some form. This can avoid mix-and-match attacks when the ram disk, kernel version and device tree are not what is expected. This is built into the new images - FITs - which is why uImage isn't used any more. FIT has the same syntax as a device tree, with options that can specify how to boot the kernel. FIT = flattened image tree. [See].

zImage is not always favoured: decompression works without the MMU and without caches, so it can be faster to read a non-compressed image. Also, before uncompressing itself the image has to relocate itself, otherwise it is doing decompression in place, which doesn't work.
So on 64 bit systems (ARM) you normally boot with a plain image, but 32 bit uses zImage.

vmlinux is the ELF file that corresponds to the Linux kernel as it will be loaded in virtual memory for execution. It is exactly the same format as a Linux application, and not directly bootable.

INSTALL_PATH and INSTALL_MOD_PATH are useful and used in installation makefile targets. Modules are generated throughout the build tree so these have to be grouped into a known location on your ram disk - so use 'make modules_install'. Once there, you must use 'depmod -a' to create the files needed for module loading to work correctly.

When compiling out-of-source, render the src tree read-only (chmod a=rX) - it is good practice so that you don't screw up your source tree and need to start using make commands such as 'distclean' etc.

Linux Modules

init_module and cleanup_module should have a module specific name. It is good practice to print loading and unloading messages on module load and unload events. But use short messages, because printk() disables interrupts whilst using busy-waiting to poll for characters - eek! Don't use printk() elsewhere for this reason - there are new and better debug options!
https://jehtech.com/linux_kernel.html
How to store Windowlayout?

Hey guys, I have a QtQuick application for Windows. I would like to store the window position and size when I close my application. In C++ I would use QSettings, but I have no idea how I can save these settings from QML. As an alternative I could save the window layout from C++, but I don't know how to get the position of my main window from C++. Maybe I can create some slots (windowPosLeft, windowPosTop, windowWidth, windowHeight) and connect them in QML. What is the recommended way?

CU mts

Use Qt.labs.settings — just bind the needed values to properties of a Settings object. This will save them for you; just add "import Qt.labs.settings 1.0", then store there the values you are interested in.

    import QtQuick 2.2
    import QtQuick.Controls 1.1
    import Qt.labs.settings 1.0

    ApplicationWindow {
        id: window
        width: 360
        height: 360

        Settings {
            property alias x: window.x
            property alias y: window.y
            property alias width: window.width
            property alias height: window.height
        }
    }

Thanks! Looks like exactly what I was looking for.

CU mts
https://forum.qt.io/topic/53108/how-to-store-windowlayout
If you've spent much time in the command line, especially if you script, you will have noticed that some programs produce colored output. This output, when piped to another program such as less, is no longer colored. I personally have written quite a few scripts that have colored or styled (bold) output so the end user of my scripts has an easier time reading things. However, when I want to pipe that output to a program such as less, my output is mangled.

For example, a script like this…

```bash
#!/usr/bin/env bash
echo -e "[ \e[32mINFO\e[0m ] This is a useful info message."
echo -e "[ \e[33mWARN\e[0m ] This is an ominous warning message."
echo -e "[ \e[31mERROR\e[0m ] This is a scary error message."
```

…outputs something like this in the terminal:

    [ INFO ] This is a useful info message.
    [ WARN ] This is an ominous warning message.
    [ ERROR ] This is a scary error message.

…but something like this when piped to less:

    [ ^[[32mINFO^[[0m ] This is a useful info message.
    [ ^[[33mWARN^[[0m ] This is an ominous warning message.
    [ ^[[31mERROR^[[0m ] This is a scary error message.

As you can see, the escape codes are sent verbatim to the program on the other side of the pipe. If that program doesn't render them by default (which newer versions of GNU less now do), you'll see those escape codes in your output. This is particularly annoying in the case of log files, and the same behavior occurs if stdout is redirected into a file.

So how do we detect that the end user has redirected output with a pipe or a redirection, so we can strip the escape codes and just output plain text? Bash provides a really useful test operator, -t. From the bash man page:

    -t fd
        True if file descriptor fd is open and refers to a terminal.

Let's give this a try.

```bash
#!/usr/bin/env bash
if [[ -t 1 ]]; then
    echo -e "\e[32mYay! Colored text is cool!\e[0m"
else
    echo -e "Boo! Plain text is lame."
fi
```

If we execute this with ./test.sh, we'll see the green colored text "Yay! Colored text is cool!". If we then execute this with ./test.sh | less, we'll see the plain text "Boo! Plain text is lame." It also outputs plain text if you redirect the output using ./test.sh > test.out.

When I saw this functionality in bash, I immediately wanted to know how it worked. Sure, I'd read the documentation, but there's no understanding like the understanding that comes from writing it in C yourself.

```c
#include <stdio.h>
#include <sys/stat.h>

/**
 * Detects if the specified file descriptor is a character device, or
 * something else. Useful for determining if colored output is supported.
 *
 * @param FILE* fd File descriptor to check
 *
 * @return int FD is char dev (1) or not (0)
 */
int ischardev(FILE* fd)
{
    struct stat d;

    // stat the file descriptor
    fstat(fileno(fd), &d);

    // Check st_mode (see "man 2 stat" for more information about this)
    if (S_ISCHR(d.st_mode))
        return 1;

    return 0;
}

int main(int argc, char* argv[])
{
    if (ischardev(stdout)) {
        printf("Character dev. Colors supported\n");
    } else {
        printf("Something else. Colors not supported\n");
    }

    return 0;
}
```

In this code, we can see how to detect if stdout will support formatting escape characters. In the ischardev function, we stat the stdout file descriptor and check if it is a character device (S_ISCHR). Only character devices support escape character styling, so if the stdout file descriptor is anything other than a character device, it is safe to assume it does not support escape character formatting.

Last edited: January 18, 2017
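The same check is available in most languages, usually wrapping the same underlying isatty(3) call as bash's `-t 1`. As a sketch, here is a Python version (the `supports_color` helper name is my own, not part of any library):

```python
import sys

def supports_color(stream=sys.stdout):
    """Return True if *stream* is attached to a terminal, meaning ANSI
    escape codes will render instead of leaking into pipes or files."""
    # File objects expose isatty(), which performs the same
    # character-device check as bash's `-t` and the C fstat example.
    return hasattr(stream, "isatty") and stream.isatty()

if __name__ == "__main__":
    if supports_color():
        print("\033[32mYay! Colored text is cool!\033[0m")
    else:
        print("Boo! Plain text is lame.")
```

Run directly in a terminal this prints the colored message; piped to less or redirected to a file it falls back to plain text, just like the bash version.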
C# preprocessor directives

Although the compiler doesn't have a separate preprocessor, the directives described in this section are processed as if there were one. You use them to help in conditional compilation. Unlike C and C++ directives, you can't use these directives to create macros. A preprocessor directive must be the only instruction on a line.

Nullable context

The #nullable preprocessor directive sets the nullable annotation context and nullable warning context. This directive controls whether nullable annotations have effect, and whether nullability warnings are given. Each context is either disabled or enabled. Both contexts can be specified at the project level (outside of C# source code). The #nullable directive controls the annotation and warning contexts and takes precedence over the project-level settings. A directive sets the context(s) it controls.

Conditional compilation

You use four preprocessor directives to control conditional compilation:

- #if: Opens a conditional compilation, where code is compiled only if the specified symbol is defined.
- #elif: Closes the preceding conditional compilation and opens a new conditional compilation based on whether the specified symbol is defined.
- #else: Closes the preceding conditional compilation and opens a new conditional compilation if the previous specified symbol isn't defined.
- #endif: Closes the preceding conditional compilation.

When the C# compiler finds an #if directive, followed eventually by an #endif directive, it compiles the code between the directives only if the specified symbol is defined. Unlike C and C++, you can't assign a numeric value to a symbol. The #if statement in C# is Boolean and only tests whether the symbol has been defined or not. For example:

```csharp
#if DEBUG
    Console.WriteLine("Debug version");
#endif
```

You can use the operators == (equality) and != (inequality) to test for the bool values true or false. true means the symbol is defined.
The statement #if DEBUG has the same meaning as #if (DEBUG == true). You can use the && (and), || (or), and ! (not) operators to evaluate whether multiple symbols have been defined. You can also group symbols and operators with parentheses.

#if, along with the #else, #elif, #endif, #define, and #undef directives, lets you include or exclude code based on the existence of one or more symbols. Conditional compilation can be useful when compiling code for a debug build or when compiling for a specific configuration. A conditional directive beginning with an #if directive must explicitly be terminated with an #endif directive.

#define lets you define a symbol. By using the symbol as the expression passed to the #if directive, the expression evaluates to true. You can also define a symbol with the DefineConstants compiler option. You can undefine a symbol with #undef. The scope of a symbol created with #define is the file in which it was defined. A symbol that you define with DefineConstants or with #define doesn't conflict with a variable of the same name. That is, a variable name shouldn't be passed to a preprocessor directive, and a symbol can only be evaluated by a preprocessor directive.

#elif lets you create a compound conditional directive. The #elif expression will be evaluated if neither the preceding #if nor any preceding, optional, #elif directive expressions evaluate to true. If an #elif expression evaluates to true, the compiler evaluates all the code between the #elif and the next conditional directive. For example:

```csharp
#define VC7
//...
#if debug
    Console.WriteLine("Debug build");
#elif VC7
    Console.WriteLine("Visual Studio 7");
#endif
```

#else lets you create a compound conditional directive, so that, if none of the expressions in the preceding #if or (optional) #elif directives evaluate to true, the compiler will evaluate all code between #else and the next #endif. #endif must be the next preprocessor directive after #else.
#endif specifies the end of a conditional directive, which began with the #if directive.

The build system is also aware of predefined preprocessor symbols representing different target frameworks in SDK-style projects. They're useful when creating applications that can target more than one .NET version, such as NET20, NET20_OR_GREATER, NET11_OR_GREATER, and NET10_OR_GREATER. These are different from the target framework monikers (TFMs) used by the MSBuild TargetFramework property and NuGet.

Note: For traditional, non-SDK-style projects, you have to configure the conditional compilation symbols for the different target frameworks manually.

Defining symbols

You use the following two preprocessor directives to define or undefine symbols for conditional compilation:

- #define: Define a symbol.
- #undef: Undefine a symbol.

You use #define to define a symbol. When you use the symbol as the expression that's passed to the #if directive, the expression will evaluate to true, as the following example shows:

```csharp
#define VERBOSE

#if VERBOSE
    Console.WriteLine("Verbose output version");
#endif
```

You can't assign a value to a symbol. The #define directive must appear in the file before you use any instructions that aren't also preprocessor directives. You can also define a symbol with the DefineConstants compiler option. You can undefine a symbol with #undef.

Defining regions

You can define regions of code that can be collapsed in an outline using the following two preprocessor directives:

- #region: Start a region.
- #endregion: End a region.

#region lets you specify a block of code that you can expand or collapse when using the outlining feature of the code editor. In longer code files, it's convenient to collapse or hide one or more regions so that you can focus on the part of the file that you're currently working on. The following example shows how to define a region:

```csharp
#region MyClass definition
public class MyClass
{
    static void Main()
    {
    }
}
#endregion
```

A #region block must be terminated with an #endregion directive. A #region block can't overlap with an #if block.
However, a #region block can be nested in an #if block, and an #if block can be nested in a #region block.

Error and warning information

You instruct the compiler to generate user-defined compiler errors and warnings, and control line information using the following directives:

- #error: Generate a compiler error with a specified message.
- #warning: Generate a compiler warning, with a specific message.
- #line: Change the line number printed with compiler messages.

#error lets you generate a CS1029 user-defined error from a specific location in your code. For example:

```csharp
#error Deprecated code in this method.
```

Note: The compiler treats #error version in a special way and reports a compiler error, CS8304, with a message containing the used compiler and language versions.

#warning lets you generate a CS1030 level one compiler warning from a specific location in your code. For example:

```csharp
#warning Deprecated code in this method.
```

The #line hidden directive hides the successive lines from the debugger: when the developer steps through the code, any lines between a #line hidden directive and the next #line directive (provided that it isn't another #line hidden directive) will be stepped over. This option can also be used to allow ASP.NET to differentiate between user-defined and machine-generated code. Although ASP.NET is the primary consumer of this feature, it's likely that more source generators will make use of it.

A #line hidden directive doesn't affect file names or line numbers in error reporting. That is, if the compiler finds an error in a hidden block, it reports the current file name and line number of the error.

Beginning with C# 10, you can use a new form of the #line directive:

```csharp
#line (1, 1) - (5, 60) 10 "partial-class.g.cs"
/*34567*/int b = 0;
```

The components of this form are:

- (1, 1): The start line and column for the first character on the line that follows the directive. In this example, the next line would be reported as line 1, column 1.
- (5, 60): The end line and column for the marked region.
- 10: The column offset for the #line directive to take effect. In this example, the 10th column would be reported as column one. That's where the declaration int b = 0; begins. This field is optional. If omitted, the directive takes effect on the first column.
- "partial-class.g.cs": The name of the output file.

The preceding example would generate the following warning:

    partial-class.g.cs(1,5,1,6): warning CS0219: The variable 'b' is assigned but its value is never used

After remapping, the variable, b, is on the first line, at character six. Domain-specific languages (DSLs) typically use this format to provide a better mapping from the source file to the generated C# output. To see more examples of this format, see the feature specification in the section on examples.

Pragmas

#pragma gives the compiler special instructions for the compilation of the file in which it appears. The instructions must be supported by the compiler. In other words, you can't use #pragma to create custom preprocessing instructions.

- #pragma warning: Enable or disable warnings.
- #pragma checksum: Generate a checksum.

```csharp
#pragma pragma-name pragma-arguments
```

Where pragma-name is the name of a recognized pragma and pragma-arguments is the pragma-specific arguments.

#pragma warning

#pragma warning can enable or disable certain warnings.

```csharp
#pragma warning disable warning-list
#pragma warning restore warning-list
```

Where warning-list is a comma-separated list of warning numbers. The "CS" prefix is optional. When no warning numbers are specified, disable disables all warnings and restore enables all warnings.

Note: To find warning numbers in Visual Studio, build your project and then look for the warning numbers in the Output window.

The disable takes effect beginning on the next line of the source file. The warning is restored on the line following the restore. If there's no restore in the file, the warnings are restored to their default state at the first line of any later files in the same compilation.
```csharp
// pragma_warning.cs
using System;

#pragma warning disable 414, CS3021
[CLSCompliant(false)]
public class C
{
    int i = 1;
    static void Main()
    {
    }
}
#pragma warning restore CS3021
[CLSCompliant(false)]  // CS3021
public class D
{
    int i = 1;
    public static void F()
    {
    }
}
```

#pragma checksum

Generates checksums for source files to aid with debugging ASP.NET pages.

```csharp
#pragma checksum "filename" "{guid}" "checksum bytes"
```

Where "filename" is the name of the file that requires monitoring for changes or updates, "{guid}" is the Globally Unique Identifier (GUID) for the hash algorithm, and "checksum_bytes" is the string of hexadecimal digits representing the bytes of the checksum, which must be an even number of hexadecimal digits. An odd number of digits results in a compile-time warning, and the directive is ignored.

If the compiler doesn't find a #pragma checksum directive in the file, it computes the checksum and writes the value to the PDB file.

```csharp
class TestClass
{
    static int Main()
    {
        #pragma checksum "file.cs" "{406EA660-64CF-4C82-B6F0-42D48172A799}" "ab007f1d23d9" // New checksum
    }
}
```
#if !defined(lint) && !defined(LINT)
static const char rcsid[] =
  "$FreeBSD: src/usr.sbin/cron/cron/do_command.c,v 1.20 2001/03/17 00:21:54 peter Exp $";
#endif

#include "cron.h"
#include <sys/signal.h>
#include <stdlib.h>
#if defined(sequent)
# include <sys/universe.h>
#endif
#if defined(SYSLOG)
# include <syslog.h>
#endif
#if defined(LOGIN_CAP)
# include <login_cap.h>
#endif

static void	child_process __P((entry *, user *)),
		do_univ __P((user *));

void
do_command(e, u)
	entry	*e;
	user	*u;
{
	Debug(DPROC, ("[%d] do_command(%s, (%s,%d,%d))\n",
		getpid(), e->cmd, u->name, e->uid, e->gid))

	/* fork to become asynchronous -- parent process is done immediately,
	 * and continues to run the normal cron code, which means return to
	 * tick().  the child and grandchild don't leave this function, alive.
	 *
	 * vfork() is unsuitable, since we have much to do, and the parent
	 * needs to be able to run off and fork other processes.
	 */
	switch (fork()) {
	case -1:
		log_it("CRON",getpid(),"error","can't fork");
		break;
	case 0:
		/* child process */
		acquire_daemonlock(1);
		child_process(e, u);
		Debug(DPROC, ("[%d] child process done, exiting\n", getpid()))
		_exit(OK_EXIT);
		break;
	default:
		/* parent process */
		break;
	}
	Debug(DPROC, ("[%d] main process returning to work\n", getpid()))
}

static void
child_process(e, u)
	entry	*e;
	user	*u;
{
	int		stdin_pipe[2], stdout_pipe[2];
	char		*input_data;
	char		*usernm, *mailto;
	int		children = 0;
# if defined(LOGIN_CAP)
	struct passwd	*pwd;
	login_cap_t	*lc;
# endif

	Debug(DPROC, ("[%d] child_process('%s')\n", getpid(), e->cmd))

	/* mark ourselves as different to PS command watchers by upshifting
	 * our program name.  This has no effect on some kernels.
	 */
#ifdef __APPLE__
	/*local*/{
		register char	*pch;

		for (pch = ProgramName; *pch; pch++)
			*pch = MkUpper(*pch);
	}
#else
	setproctitle("running job");
#endif

	/* discover some useful and important environment settings
	 */
	usernm = env_get("LOGNAME", e->envp);
	mailto = env_get("MAILTO", e->envp);

	(void) signal(SIGCHLD, SIG_DFL);

	/* create some pipes to talk to our future child
	 */
	pipe(stdin_pipe);	/* child's stdin */
	pipe(stdout_pipe);	/* child's stdout */

	/* since we are a forked process, we can diddle the command string
	 * we were passed -- nobody else is going to use it again, right?
	 *
	 * if a % is present in the command, previous characters are the
	 * command, and subsequent characters are the additional input to
	 * the command.  Subsequent %'s will be transformed into newlines,
	 * but that happens later.
	 *
	 * If there are escaped %'s, remove the escape character.
	 */
	/*local*/{
		register int escaped = FALSE;
		register int ch;
		register char *p;

		for (input_data = p = e->cmd;
		     (ch = *input_data);
		     input_data++, p++) {
			if (p != input_data)
				*p = ch;
			if (escaped) {
				if (ch == '%' || ch == '\\')
					*--p = ch;
				escaped = FALSE;
				continue;
			}
			if (ch == '\\') {
				escaped = TRUE;
				continue;
			}
			if (ch == '%') {
				*input_data++ = '\0';
				break;
			}
		}
		*p = '\0';
	}

	/* fork again, this time so we can exec the user's command.
	 */
	switch (vfork()) {
	case -1:
		log_it("CRON",getpid(),"error","can't vfork");
		exit(ERROR_EXIT);
		/*NOTREACHED*/
	case 0:
		Debug(DPROC, ("[%d] grandchild process Vfork()'ed\n",
			      getpid()))

		/* write a log message.  we've waited this long to do it
		 * because it was not until now that we knew the PID that
		 * the actual user command shell was going to get and the
		 * PID is part of the log message.
		 */
		/*local*/{
			char *x = mkprints((u_char *)e->cmd, strlen(e->cmd));

			log_it(usernm, getpid(), "CMD", x);
			free(x);
		}

		/* that's the last thing we'll log.  close the log files.
		 */
#ifdef SYSLOG
		closelog();
#endif

		/* get new pgrp, void tty, etc.
		 */
		(void) setsid();

		/* close the pipe ends that we won't use.
		 * this doesn't affect the parent, who has to read and write
		 * them; it keeps the kernel from recording us as a potential
		 * client TWICE -- which would keep it from sending SIGPIPE
		 * in otherwise appropriate circumstances.
		 */
		close(stdin_pipe[WRITE_PIPE]);
		close(stdout_pipe[READ_PIPE]);

		/* grandchild process.  make std{in,out} be the ends of
		 * pipes opened by our daddy; make stderr go to stdout.
		 */
		close(STDIN);	dup2(stdin_pipe[READ_PIPE], STDIN);
		close(STDOUT);	dup2(stdout_pipe[WRITE_PIPE], STDOUT);
		close(STDERR);	dup2(STDOUT, STDERR);

		/* close the pipes we just dup'ed.  The resources will remain.
		 */
		close(stdin_pipe[READ_PIPE]);
		close(stdout_pipe[WRITE_PIPE]);

		/* set our login universe.  Do this in the grandchild
		 * so that the child can invoke /usr/lib/sendmail
		 * without surprises.
		 */
		do_univ(u);

# if defined(LOGIN_CAP)
		/* Set user's entire context, but skip the environment
		 * as cron provides a separate interface for this
		 */
		if ((pwd = getpwnam(usernm)) == NULL)
			pwd = getpwuid(e->uid);
		lc = NULL;
		if (pwd != NULL) {
			pwd->pw_gid = e->gid;
			if (e->class != NULL)
				lc = login_getclass(e->class);
		}
		if (pwd &&
		    setusercontext(lc, pwd, e->uid,
			    LOGIN_SETALL & ~(LOGIN_SETPATH|LOGIN_SETENV)) == 0)
			(void) endpwent();
		else {
			/* fall back to the old method */
			(void) endpwent();
# endif
			/* set our directory, uid and gid.  Set gid first,
			 * since once we set uid, we've lost root privileges.
			 */
			setgid(e->gid);
# if defined(BSD)
			initgroups(usernm, e->gid);
# endif
			setlogin(usernm);
			setuid(e->uid);	/* we aren't root after this..*/
#if defined(LOGIN_CAP)
		}
		if (lc != NULL)
			login_close(lc);
#endif
		chdir(env_get("HOME", e->envp));

		/* exec the command.
		 */
		{
			char	*shell = env_get("SHELL", e->envp);

# if DEBUGGING
			if (DebugFlags & DTEST) {
				fprintf(stderr,
				"debug DTEST is on, not exec'ing command.\n");
				fprintf(stderr,
				"\tcmd='%s' shell='%s'\n", e->cmd, shell);
				_exit(OK_EXIT);
			}
# endif /*DEBUGGING*/
			execle(shell, shell, "-c", e->cmd, (char *)0, e->envp);
			warn("execl: couldn't exec `%s'", shell);
			_exit(ERROR_EXIT);
		}
		break;
	default:
		/* parent process */
		break;
	}

	children++;

	/* middle process, child of original cron, parent of process running
	 * the user's command.
	 */

	Debug(DPROC, ("[%d] child continues, closing pipes\n", getpid()))

	/* close the ends of the pipe that will only be referenced in the
	 * grandchild process...
	 */
	close(stdin_pipe[READ_PIPE]);
	close(stdout_pipe[WRITE_PIPE]);

	/*
	 * write, to the pipe connected to child's stdin, any input specified
	 * after a % in the crontab entry.  while we copy, convert any
	 * additional %'s to newlines.  when done, if some characters were
	 * written and the last one wasn't a newline, write a newline.
	 *
	 * Note that if the input data won't fit into one pipe buffer (2K
	 * or 4K on most BSD systems), and the child doesn't read its stdin,
	 * we would block here.  thus we must fork again.
	 */

	if (*input_data && fork() == 0) {
		register FILE	*out = fdopen(stdin_pipe[WRITE_PIPE], "w");
		register int	need_newline = FALSE;
		register int	escaped = FALSE;
		register int	ch;

		if (out == NULL) {
			warn("fdopen failed in child2");
			_exit(ERROR_EXIT);
		}

		Debug(DPROC, ("[%d] child2 sending data to grandchild\n",
			      getpid()))

		/* close the pipe we don't use, since we inherited it and
		 * are part of its reference count now.
		 */
		close(stdout_pipe[READ_PIPE]);

		/* translation:
		 *	\% -> %
		 *	%  -> \n
		 *	\x -> \x	for all x != %
		 */
		while ((ch = *input_data++)) {
			if (escaped) {
				if (ch != '%')
					putc('\\', out);
			} else {
				if (ch == '%')
					ch = '\n';
			}

			if (!(escaped = (ch == '\\'))) {
				putc(ch, out);
				need_newline = (ch != '\n');
			}
		}
		if (escaped)
			putc('\\', out);
		if (need_newline)
			putc('\n', out);

		/* close the pipe, causing an EOF condition.  fclose causes
		 * stdin_pipe[WRITE_PIPE] to be closed, too.
		 */
		fclose(out);

		Debug(DPROC, ("[%d] child2 done sending to grandchild\n",
			      getpid()))
		exit(0);
	}

	/* close the pipe to the grandkiddie's stdin, since its wicked uncle
	 * ernie back there has it open and will close it when he's done.
	 */
	close(stdin_pipe[WRITE_PIPE]);

	children++;

	/*
	 * read output from the grandchild.  its stderr has been redirected to
	 * its stdout, which has been redirected to our pipe.  if there is any
	 * output, we'll be mailing it to the user whose crontab this is...
	 * when the grandchild exits, we'll get EOF.
	 */

	Debug(DPROC, ("[%d] child reading output from grandchild\n", getpid()))

	/*local*/{
		register FILE	*in = fdopen(stdout_pipe[READ_PIPE], "r");
		register int	ch = getc(in);

		if (in == NULL) {
			warn("fdopen failed in child");
			_exit(ERROR_EXIT);
		}

		if (ch != EOF) {
			register FILE	*mail = NULL;
			register int	bytes = 1;
			int		status = 0;

			Debug(DPROC|DEXT,
				("[%d] got data (%x:%c) from grandchild\n",
					getpid(), ch, ch))

			/* get name of recipient.  this is MAILTO if set to a
			 * valid local username; USER otherwise.
			 */
			if (mailto) {
				/* MAILTO was present in the environment
				 */
				if (!*mailto) {
					/* ... but it's empty. set to NULL
					 */
					mailto = NULL;
				}
			} else {
				/* MAILTO not present, set to USER.
				 */
				mailto = usernm;
			}

			/* if we are supposed to be mailing, MAILTO will
			 * be non-NULL.  only in this case should we set
			 * up the mail command and subjects and stuff...
			 */
			if (mailto) {
				register char	**env;
				auto char	mailcmd[MAX_COMMAND];
				auto char	hostname[MAXHOSTNAMELEN];

				(void) gethostname(hostname, MAXHOSTNAMELEN);
				(void) snprintf(mailcmd, sizeof(mailcmd),
						MAILARGS, MAILCMD);
				if (!(mail = cron_popen(mailcmd, "w", e))) {
					warn("%s", MAILCMD);
					(void) _exit(ERROR_EXIT);
				}
				fprintf(mail, "From: %s (Cron Daemon)\n",
					usernm);
				fprintf(mail, "To: %s\n", mailto);
				fprintf(mail, "Subject: Cron <%s@%s> %s\n",
					usernm, first_word(hostname, "."),
					e->cmd);
# if defined(MAIL_DATE)
				fprintf(mail, "Date: %s\n",
					arpadate(&TargetTime));
# endif /* MAIL_DATE */
				for (env = e->envp; *env; env++)
					fprintf(mail, "X-Cron-Env: <%s>\n",
						*env);
				fprintf(mail, "\n");

				/* this was the first char from the pipe
				 */
				putc(ch, mail);
			}

			/* we have to read the input pipe no matter whether
			 * we mail or not, but obviously we only write to
			 * mail pipe if we ARE mailing.
			 */

			while (EOF != (ch = getc(in))) {
				bytes++;
				if (mailto)
					putc(ch, mail);
			}

			/* only close pipe if we opened it -- i.e., we're
			 * mailing...
			 */
			if (mailto) {
				Debug(DPROC, ("[%d] closing pipe to mail\n",
					getpid()))
				/* Note: the pclose will probably see
				 * the termination of the grandchild
				 * in addition to the mail process, since
				 * it (the grandchild) is likely to exit
				 * after closing its stdout.
				 */
				status = cron_pclose(mail);
			}

			/* if there was output and we could not mail it,
			 * log the facts so the poor user can figure out
			 * what's going on.
			 */
			if (mailto && status) {
				char buf[MAX_TEMPSTR];

				snprintf(buf, sizeof(buf),
			"mailed %d byte%s of output but got status 0x%04x\n",
					bytes, (bytes==1)?"":"s", status);
				log_it(usernm, getpid(), "MAIL", buf);
			}

		} /*if data from grandchild*/

		Debug(DPROC, ("[%d] got EOF from grandchild\n", getpid()))

		fclose(in);	/* also closes stdout_pipe[READ_PIPE] */
	}

	/* wait for children to die.
	 */
	for (; children > 0; children--) {
		WAIT_T		waiter;
		PID_T		pid;

		Debug(DPROC, ("[%d] waiting for grandchild #%d to finish\n",
			getpid(), children))
		pid = wait(&waiter);
		if (pid < OK) {
			Debug(DPROC,
			    ("[%d] no more grandchildren--mail written?\n",
				getpid()))
			break;
		}
		Debug(DPROC, ("[%d] grandchild #%d finished, status=%04x",
			getpid(), pid, WEXITSTATUS(waiter)))
		if (WIFSIGNALED(waiter) && WCOREDUMP(waiter))
			Debug(DPROC, (", dumped core"))
		Debug(DPROC, ("\n"))
	}
}

static void
do_univ(u)
	user	*u;
{
#if defined(sequent)
	/* Dynix (Sequent) hack to put the user associated with
	 * the passed user structure into the ATT universe if
	 * necessary.  We have to dig the gecos info out of
	 * the user's password entry to see if the magic
	 * "universe(att)" string is present.
	 */
	struct passwd	*p;
	char		*s;
	int		i;

	p = getpwuid(u->uid);
	(void) endpwent();

	if (p == NULL)
		return;

	s = p->pw_gecos;

	for (i = 0; i < 4; i++) {
		if ((s = strchr(s, ',')) == NULL)
			return;
		s++;
	}
	if (strcmp(s, "universe(att)"))
		return;

	(void) universe(U_ATT);
#endif
}
Pylons works well with many different types of databases, in addition to other database object-relational mappers. In addition to the declarative style, SQLAlchemy has a default approach that is more verbose and explicit. Here is a sample model/__init__.py with a "persons" table, based on the default SQLAlchemy approach:

```python
"""The application's model objects"""
import sqlalchemy as sa
from sqlalchemy import orm

from myapp.model import meta

def init_model(engine):
    meta.Session.configure(bind=engine)
    meta.engine = engine

t_persons = sa.Table("persons", meta.metadata,
    sa.Column("id", sa.types.Integer, primary_key=True),
    sa.Column("name", sa.types.String(100), primary_key=True),
    sa.Column("email", sa.types.String(100)),
    )

class Person(object):
    pass

orm.mapper(Person, t_persons)
```

This model has one table, "persons", assigned to the variable t_persons. Person is an ORM class which is bound to the table via the mapper.

Here's an example of a Person and an Address class with a many:many relationship on people.my_addresses. See Relational Databases for People in a Hurry and the `SQLAlchemy manual`_ for more information.

If the table already exists, SQLAlchemy can read the column definitions directly from the database. This is called reflecting the table. The advantage of this approach is that it allows you to dispense with the task of specifying the column types in Python code. Reflecting existing database tables must be done inside init_model() because to perform the reflection, a live database engine is required and this is not available when the module is imported. A live database engine is bound explicitly in the init_model() function and so enables reflection. (An engine is a SQLAlchemy object that knows how to connect to a particular database.)
Here’s the second example with reflection: """The application's model objects""" import sqlalchemy as sa from sqlalchemy import orm from myapp.model import meta def init_model(engine): """Call me before using any of the tables or classes in the model""" # Reflected tables must be defined and mapped here global t_persons t_persons = sa.Table("persons", meta.metadata, autoload=True, autoload_with=engine) orm.mapper(Person, t_persons) meta.Session.configure(bind=engine) meta.engine = engine t_persons = None class Person(object): pass Note how t_persons and the orm.mapper() call moved into init_model(), while the Person class didn’t have to. Also note the global t_persons statement. This tells Python that t_persons is a global variable outside the function. global is required when assigning to a global variable inside a function. It’s not required if you’re merely modifying a mutable object in place, which is why meta doesn’t have to be declared global. You now have everything necessary to use the model in a standalone script such as a cron job, or to test it interactively. You just need to create a SQLAlchemy engine and connect it to the model. This example uses a database “test.sqlite” in the current directory: % python Python 2.5.1 (r251:54863, Oct 5 2007, 13:36:32) [GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sqlalchemy as sa >>> engine = sa.create_engine("sqlite:///test.sqlite") >>> from myapp import model >>> model.init_model(engine) Now you can use the tables, classes, and Session as described in the `SQLAlchemy manual`_. For example: #!/usr/bin/env python import sqlalchemy as sa import tmpapp.model as model import tmpapp.model.meta as meta DB_URL = "sqlite:///test.sqlite" engine = sa.create_engine(DB_URL) model.init_model(engine) # Create all tables, overwriting them if they exist. 
if hasattr(model, "_Base"): # SQLAlchemy 0.5 Declarative syntax model._Base.metadata.drop_all(bind=engine, checkfirst=True) model._Base.metadata.create_all(bind=engine) else: # SQLAlchemy 0.4 and 0.5 syntax without Declarative meta.metadata.drop_all(bind=engine, checkfirst=True) meta.metadataa.create_all(bind=engine) # Create two records and insert them into the database using the ORM. a = model.Person() a.name = "Aaa" a.email = "aaa@example.com" meta.Session.add(a) b = model.Person() b.name = "Bbb" b.email = "bbb@example.com" meta.Session.add(b) meta.Session.commit() # Display all records in the persons table. print "Database data:" for p in meta.Session.query(model.Person): print "id:", p.id print "name:", p.name print "email:", p.email print Some applications need to connect to multiple databases (engines). Some always bind certain tables to the same engines (e.g., a general database and a logging database); this is called “horizontal partitioning”. Other applications have several databases with the same structure, and choose one or another depending on the current request. A blogging app with a separate database for each blog, for instance. A few large applications store different records from the same logical table in different databases to prevent the database size from getting too large; this is called “vertical partitioning” or “sharding”. The pattern above can accommodate any of these schemes with a few minor changes. First, you can define multiple engines in your config file like this: sqlalchemy.default.url = "mysql://..." sqlalchemy.default.pool_recycle = 3600 sqlalchemy.log.url = "sqlite://..." This defines two engines, “default” and “log”, each with its own set of options. Now you have to instantiate every engine you want to use. 
```python
default_engine = engine_from_config(config, 'sqlalchemy.default.')
log_engine = engine_from_config(config, 'sqlalchemy.log.')
init_model(default_engine, log_engine)
```

Of course you'll have to modify init_model() to accept both arguments and create two engines.

To bind different tables to different databases, but always with a particular table going to the same engine, use the binds argument to sessionmaker rather than bind:

```python
binds = {"table1": engine1, "table2": engine2}
Session = scoped_session(sessionmaker(binds=binds))
```

To choose the bindings on a per-request basis, skip the sessionmaker bind(s) argument, and instead put this in your base controller's __call__ method before the superclass call, or directly in a specific action method:

```python
meta.Session.configure(bind=meta.engine)
```

binds= works the same way here too.

If you're running multiple instances of the _same_ Pylons application in the same WSGI process (e.g., with Paste HTTPServer's "composite" application), you may run into concurrency issues. The problem is that Session is thread-local but not application-instance-local. We're not sure how much this is really an issue if Session.remove() is properly called in the base controller, but just in case it becomes an issue, here are possible remedies:

```python
def pylons_scope():
    import thread
    from pylons import config
    return "Pylons|%s|%s" % (thread.get_ident(), config._current_obj())

Session = scoped_session(sessionmaker(), pylons_scope)
```

If you're affected by this, or think you might be, please bring it up on the pylons-discuss mailing list. We need feedback from actual users in this situation to verify that our advice is correct.

Most of these expose only the object-relational mapper; their SQL builder and connection pool are not meant to be used directly.
All the SQL libraries above are built on top of Python's DB-API, which provides a common low-level interface for interacting with several database engines: MySQL, PostgreSQL, SQLite, Oracle, Firebird, MS-SQL, Access via ODBC, etc. Most programmers do not use DB-API directly because its API is low-level and repetitive and does not provide a connection pool. There's no "DB-API package" to install because it's an abstract interface rather than software. Instead, install the Python package for the particular engine you're interested in. Python's Database Topic Guide describes the DB-API and lists the package required for each engine. The sqlite3 package for SQLite is included in Python 2.5.

Object databases store Python dicts, lists, and classes in pickles, allowing you to access hierarchical data using normal Python statements rather than having to map them to tables, relations, and a foreign language (SQL).

Pylons can also work with other database systems, such as the following:

- Schevo uses Durus to combine some features of relational and object databases. It is written in Python.
- CouchDb is a document-based database. It features a Python API.
- The Datastore database in Google App Engine.
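To make the "low-level and repetitive" description of the DB-API above concrete, here is a minimal sketch of the raw connect/cursor/execute/fetch cycle using the bundled sqlite3 driver (the table and data are invented for illustration):

```python
import sqlite3  # DB-API 2.0 driver included with Python

# A throwaway in-memory database, just to show the pattern.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE persons (id INTEGER PRIMARY KEY, name TEXT)")
# Parameters are passed separately from the SQL; even the placeholder
# style ("?" here) varies between DB-API drivers.
cur.execute("INSERT INTO persons (name) VALUES (?)", ("Aaa",))
conn.commit()
cur.execute("SELECT name FROM persons")
rows = cur.fetchall()  # a list of tuples
conn.close()
```

Every step is explicit and must be repeated for each query; an ORM such as SQLAlchemy generates the SQL and manages connections and pooling for you, which is why Pylons applications usually sit a layer higher.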
http://docs.pylonsproject.org/projects/pylons-webframework/en/latest/advanced_models.html
* Lars Marius Garshol
|
| I know, but it's much better to simply modify the output from expat
| (preferably in C source) than to implement namespaces in Python.

* Paul Prescod
|
| I'm not clear what route you are advocating:
|
| [...]

What I mean is that expat already has namespace handling, but with an interface that is tuned for C and not for Python. I think that rather than just directly translating the C interface into Python we should try to make it more convenient.

At the moment expat represents namespace names as 'uri localname'. What I would want pyexpat.c to do is to turn those cooked strings into ('uri', 'localname') tuples. So it's not a matter of reimplementing anything, but just of improving the interface.

--Lars M.
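The conversion Lars describes can be sketched in a few lines of Python. The split_name function and the sample URI are illustrative, not actual pyexpat code.

```python
# Sketch of the proposed interface improvement: expat's namespace
# handling reports names as a single "uri localname" string, and the
# wrapper would split that into a ('uri', 'localname') tuple.
def split_name(cooked):
    parts = cooked.split(" ", 1)  # expat separates uri and localname with a space
    if len(parts) == 2:
        return (parts[0], parts[1])
    return (None, parts[0])       # name without a namespace

pair = split_name("http://example.org/ns title")  # -> ('http://example.org/ns', 'title')
```

Handler code can then unpack uri and localname directly instead of re-parsing the cooked string at every call.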
https://mail.python.org/pipermail/xml-sig/2000-July/002924.html
Why Python?

The importance of programming languages is often overstated. What I mean by that is that people who are new to programming tend to worry far too much about what language to learn. The choice of programming language does matter, of course, but it matters far less than most people think it does. To put it another way, choosing the "wrong" programming language is very unlikely to mean the difference between failure and success when learning. Other factors (motivation, having time to devote to learning, helpful colleagues) are far more important, yet receive less attention.

The reason that people place so much weight on the "what language should I learn?" question is that it's a big, obvious question, and it's not difficult to find people who will give you strong opinions on the subject. It's also the first big question that beginners have to answer once they've decided to learn programming, so it assumes a great deal of importance in their minds.

There are three main reasons why choice of programming language is not as important as most people think it is. Firstly, nearly everybody who spends any significant amount of time programming as part of their job will eventually end up using multiple languages. Partly this is just down to the simple constraints of various languages – if you want to write a web application you'll probably do it in Javascript, if you want to write a graphical user interface you'll probably use something like Java, and if you want to write low-level algorithms you'll probably use C.

Secondly, learning a first programming language gets you 90% of the way towards learning a second, third, and fourth one. Learning to think like a programmer in the way that you break down complex tasks into simple ones is a skill that cuts across all languages – so if you spend a few months learning Python and then discover that you really need to write in C, your time won't have been wasted as you'll be able to pick it up much quicker.
Thirdly, the kinds of problems that we want to solve in biology are generally amenable to being solved in any language, even though different programming languages are good at different things. In other words, as a beginner, your choice of language is vanishingly unlikely to prevent you from solving the problems that you need to solve.

Having said all of the above, when learning to program we do need to pick a language to work in, so we might as well pick one that's going to make the job easier. Python is such a language for a number of reasons:

- It has a consistent syntax, so you can generally learn one way of doing things and then apply it in multiple places
- It has a sensible set of built-in libraries for doing lots of common tasks
- It is designed in such a way that there's an obvious way of doing most things
- It's one of the most widely used languages in the world, and there's a lot of advice, documentation and tutorials available on the web
- It's designed in a way that lets you start to write useful programs as soon as possible
- Its use of indentation, while annoying to people who aren't used to it, is great for beginners as it enforces a certain amount of readability

Python also has a couple of points to recommend it to biologists and scientists specifically:

- It's widely used in the scientific community
- It has a couple of very well designed libraries for doing complex scientific computing (although we won't encounter them in this book)
- It lends itself well to being integrated with other, existing tools
- It has features which make it easy to manipulate strings of characters (for example, strings of DNA bases and protein amino acid residues, which we as biologists are particularly fond of)

Python vs. Perl

For biologists, the question "what language should I learn?" often really comes down to the question "should I learn Perl or Python?", so let's answer it head on.
Perl and Python are both perfectly good languages for solving a wide variety of biological problems. However, after extensive experience teaching both Perl and Python to biologists, I've come to the conclusion that Python is an easier language to learn by virtue of being more consistent and more readable.

An important thing to understand about Perl and Python is that they are incredibly similar (despite the fact that they look very different), so the point above about learning a second language applies doubly. Many Python and Perl features have a one-to-one correspondence, and so if you find that you have to work in Perl after learning Python you'll find it quite familiar.

Formatting

When discussing programming, we use lots of special types of text – we'll need to look at examples of Python code and output, the contents of files, and technical terms. Take a minute to note the typographic conventions we'll be using.

In the main text of this book, bold type is used to emphasize important points and italics for technical terms and filenames. Where code is mixed in with normal text it's written in a monospaced font with a red tint like this.

Example Python code looks like this:

    Some example code goes here

Sometimes it's useful to refer to a specific line of code inside an example. For this, we'll use numbered circles like this❶:

    a line of example code
    another line of example code
    this is the important line❶
    here is another line

Example output (i.e. what we see on the screen when we run the code) looks like this:

    Some output goes here

Often we want to look at the code and the output it produces together. In these situations, you'll see a block of code immediately followed by its output. Other blocks of text (usually file contents or typed command lines) look the same as code output – hopefully it'll be clear from context what they are.

Often when looking at larger examples, or when looking at large amounts of output, we don't need to see the whole thing.
In these cases, I'll use ellipses (...) to indicate that some text has been missed out.

I have used UK English spelling throughout, which I hope will not prove distracting to US readers.

In programming, we use different types of brackets for different purposes, so it's important to have different names for them. Throughout this book, I will use the word parentheses to refer to (), square brackets to refer to [], and curly brackets to refer to {}.

Getting in touch

Learning to program is a difficult task, and my one goal in writing these pages is to make it as easy and accessible as possible to get started. So, if you find anything that is hard to understand, or you think may contain an error, please get in touch – just drop me an email at martin@pythonforbiologists.com and I promise to get back to you.

Setting up your environment

All that you need in order to follow the examples is a standard Python installation and a text editor. All the code in this book will run on either Linux, Mac or Windows machines. The slight differences between operating systems are explained in the text.

Python 2 vs. Python 3

As will quickly become clear if you spend any amount of time on the official Python website, there are two versions of Python currently available. The Python world is, at the time of writing, in the middle of a transition from version 2 to version 3. A discussion of the pros and cons of each version is well beyond the scope of this book1, but here's what you need to know: install Python 3 if possible, but if you end up with Python 2, don't worry – all the code examples in the book will work with both versions.
If you're going to use Python 2, there is just one thing that you have to do in order to make some of the code examples work: include this line at the start of all your programs:

    from __future__ import division

We won't go into the explanation behind this line, except to say that it's necessary in order to correct a small quirk with the way that Python 2 handles division of numbers. Depending on what version you use, you might see slight differences between the output on these pages and the output you get when you run the code on your computer. I've tried to note these differences in the text where possible.

Installing Python

The process of installing Python depends on the type of computer you're running on. If you're using Windows, start by going to this page: then follow the link at the top of the page to the latest release. From here you can download and run the Windows installer.

If you're using Mac OS X, head to this page: then follow the link at the top of the page to the latest release. From here you can download and run the OS X installer.

If you're running a mainstream Linux distribution like Ubuntu, Python is probably already installed. If your Linux installation doesn't already have Python installed, try installing it with your package manager – the command will probably be either

    sudo apt-get install python idle

or

    sudo yum install python idle

Editing and running Python programs

In order to learn Python, we need two things: the ability to edit Python programs, and the ability to run them and view the output. There are two different ways to do this – using a text editor from the command line, or using Python's graphical editor program.

Using the command line

If you're already comfortable using the command line, then this will probably be the easiest way to get started. Firstly, you'll need to be able to open a new terminal. If you're using Windows, you can do this by running the command prompt program.
If you're using OS X, run the terminal program from inside the Utilities folder. If you're using Linux, you probably already know how to open a new terminal – the program is probably called something like Terminal Emulator.

Since a Python program is just a text file, you can create and edit it with any text editor of your choice. Note that by a text editor I don't mean a word processor – do not try to edit Python programs with Microsoft Word, LibreOffice Writer, or similar tools, as they tend to insert special formatting marks that Python cannot read.

When choosing a text editor, there is one feature that is essential2 to have, and one which is nice to have. The essential feature is something that's usually called tab emulation. The effect of this feature at first seems quite odd; when enabled, it replaces any tab characters that you type with an equivalent number of space characters (usually set to four). The reason why this is useful is discussed at length in chapter 4, but here's a brief explanation: Python is very fussy about your use of tabs and spaces, and unless you are very disciplined when typing, it's easy to end up with a mixture of tabs and spaces in your programs. This causes very infuriating problems, because they look the same to you, but not to Python! Tab emulation fixes the problem by making it effectively impossible for you to type a tab character.

The feature that is nice to have is syntax highlighting. This will apply different colours to different parts of your Python code, and can help you spot errors more easily. Recommended text editors are Notepad++ for Windows3, TextWrangler for Mac OSX4, and gedit for Linux5, all of which are freely available.

To run a Python program from the command line, just type the name of the Python executable (python.exe on Windows, python on OS X and Linux) followed by the name of the Python file you've created.
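As a quick check that everything is set up, the steps above can be tried with a two-line program. For illustration the run is driven from Python itself via the standard subprocess module; at a real terminal you would simply type python demo.py. The filename demo.py is just an example, and the program also demonstrates the __future__ import discussed earlier, which makes 3 / 2 print 1.5 under Python 2 as well as Python 3.

```python
# Write a tiny program to a file, then run it with the Python
# executable, mirroring the command-line steps described above.
# sys.executable is the interpreter running this script.
import subprocess
import sys

with open("demo.py", "w") as f:
    f.write("from __future__ import division\nprint(3 / 2)\n")

out = subprocess.check_output([sys.executable, "demo.py"])
# out holds the program's printed output: b"1.5\n"
```

If this prints 1.5 rather than 1, your setup handles division the way the rest of the book expects.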
If any of the above doesn't work or seems complicated, just use the graphical editor as described in the next section.

Using a graphical editor

Python comes with a program called IDLE which provides a friendly graphical interface for writing and running Python code. IDLE is an example of an Integrated Development Environment (sometimes shortened to IDE). IDLE works identically on Windows, OS X and Linux.

To create a new Python file, just start the IDLE program and select New File from the File menu. This will open a new window in which you can type and edit Python code. When you want to run your Python program, use the File menu to save it (remember that the filename should end with .py) then select Run Module from the Run menu. The output will appear in the Python Shell window.

You can also use IDLE as a text editor – for example, to view input and output files. Just select Open from the File menu and pick the file that you want to view. To open a non-Python file, you'll have to select All files from the Files of type drop-down menu.

Reading the documentation

Part of the teaching philosophy that I've used in writing these pages is that it's better to introduce a few useful features and functions rather than overwhelm you with a comprehensive list. The best place to go when you do want a complete list of the options available in Python is the official documentation which, compared to many languages, is very readable.
https://pythonforbiologists.com/introduction/
I am trying to execute the following Python code:

    def example(p,q):
    a = p.find(" ")
    b = q.find(" ")
    str_p = p[0:a]
    str_q = p[b+1:]
    if str_p == str_q:
    result = True
    else:
    result = False
    return result

And I get the following error:

    IndentationError: expected an indented block

Python requires its code to be indented and spaced properly. Never mix tabs and spaces. If you're using tabs stick to tabs and if you're using spaces stick to spaces.

Also in your code, change this part:

    if str_p == str_q:
        result = True
    else:
        result = False
    return result

to

    return str_p == str_q
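Putting the answer together, here is the fully indented, simplified version of the function. Note that the question's code sliced p in both places (str_q = p[b+1:]), which was probably a slip; this sketch uses q instead, so treat that change as an assumption about the asker's intent.

```python
# Properly indented version of the question's function, simplified to
# return the comparison directly as the answer suggests. str_q is
# taken from q here (the question's code took it from p, likely by
# mistake).
def example(p, q):
    a = p.find(" ")
    b = q.find(" ")
    str_p = p[0:a]       # text before the first space in p
    str_q = q[b + 1:]    # text after the first space in q
    return str_p == str_q

result = example("hello world", "say hello")  # -> True
```

With consistent four-space indentation throughout, the IndentationError disappears.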
https://www.edureka.co/community/49336/python-error-indentationerror-expected-an-indented-block