text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
: Hey Alex, Aren't unidirectional associations similar to aggregation. How can be differentiate them. The parent in an aggregation owns the relationship with the child. If the parent dies, the children die too. In an association, this isn't true. in the doctor patient example, if we delete doctor james, then let the program to print doctor-patient relation, the output will still be correct? No, if you delete the Doctor and then try to print the Doctor-Patient relationship, the program will exhibit undefined behavior, because you'll be accessing memory that has been deleted. When the Doctor is deleted, you also need to ensure that all pointers to the Doctor are removed. This can be a bit of a management challenge. Fortunately, C++ provides a class that can help with this: std::shared_ptr, which we cover in chapter 15. ." Can you please give a code example of what the other way would look like. I tried to make it myself but I am still confused on the way to set it up. Hi Maxpro! I have a question about your design for the Patient/Doctor classes. Isn't it more appropriate to allow the patients to choose their doctors and no the other way around? Not really -- at least in the US, patients can decide what doctors they'd like to see, but ultimately it's up to the doctors to decide if they want to see the patient or not (some doctors are full and not accepting new patients). But, for what it's worth, the code would support patient's adding doctors if that made more sense for your use case. First of all, these tutorials are good stuff. This is probably the wrong lesson to ask these questions, but the car lot example brought them to mind. Is a pure static class just a namespace? As in, is equivalent to Actually, I realised while writing this that namespaces can't have private members, but are there any other differences? Secondly, is and is that why it's called an enum class? Static classes and namespaces are kind of similar. There's a reasonable discussion of when to use which one in stack overflow. An enum class and a class with a public enum aren't the same, though the usage of both would be similar. I haven't read any anecdotes to indicate whether the naming of an "enum class" is related to a having a class with an enum, or whether it was just trying to save on adding new keywords. Maybe both. I have seen in this nad 10.3 chapter that contrarry to what is taught in chapter on overloading operator<<(we just do friend std::ostream& operator<<... foward declaration and then define it outside class) friend std::ostream& operator<< is defind here, in class(by defined I mean adding {....}). Where to use each of these two techniques? Thanks for grad tutorial! Generally it's better to define your functions outside the class (in a separate .cpp file). In most of the tutorials here, we do it inside the class to keep the examples concise and make it easier for you to try yourself. In first example I can't figure out where "m_patient.push_back(pat)" comes from, specificaly where "push_back" is defined. Can anyone pleas help me with that? It's part of the std::vector functionality. See and search for Push_back. "friend Doctor;" --> "friend class Doctor;"? friend Doctor is okay in C++11 and newer. But friend class Doctor works everywhere, so I've updated the lesson accordingly. 1.Could you explain why both the vector types are pointers ( vector< Doctor*>...)? 2. Furthermore, could you please elaborate how this step works? {pat->addDoctor(this);} 1) If the vectors didn't contain pointers to Doctors, then the vector would manage the existence of the Doctor. Having the vector hold pointers to Doctors means the vector only owns that pointer, not the Doctor itself. 2) When addPatient(pat) is called, "this" points to the Doctor the Patient is being added to, and "pat" points to the Patient. Calling pat->addDoctor() should be obvious -- we're calling the Patient::addDoctor() member function on Patient pat. Passing in "this" gives us a way to pass the implicit Doctor object from the Doctor::addPatient() function to the Patient::addDoctor() function. 1) What if you made the vectors hold actual Doctor- and Patient-objects (rather than pointers), then made the addPatient- and addDoctor-functions take references to such objects as parameters? Then the Doctor- and Patient-objects would still exist independently of one another and the vectors would only manage the existence of the references, right? Wouldn't that work just as well as using pointers (in terms of functionality at least, I don't know about performance)? > What if you made the vectors hold actual Doctor- and Patient-objects (rather than pointers) Then you'd end up with a lot of duplicate copies of Doctors and Patients. For example, Patient's Dave and Betsy would both have an independent copy of Doctor Scott. But that's a bit weird, since the Patient's don't "own" the doctors. If we wanted to update Scott's information (e.g. his age, or specialties), we'd have to ensure all the copies got updated. So no, it wouldn't work as well. Even if we added references to the original doctors/patients to the vectors, rather than copies? Like this: So here I made m_patient into a vector of Patient-objects rather than pointers, and made the addPatient-function take a Patient-reference (&pat) rather than a pointer to a Patient, and then this reference is added to the vector. If I now define p1 and d1 as objects rather than pointers and then pass p1 as an argument into d1's addPatient-function, like so: wouldn't this result in a reference to p1 being added to d1.m_patients, and not a copy? So if any changes are made to p1 in the future, those changes will automatically apply to the reference inside d1.m_patients as well? So there wouldn't be any issue with multiple copies of Patients and Doctors, nor any issue with making sure copies are updated properly? And of course I mean for the Patient-class to do the same, by having a vector of Doctor-objects and by having its addDoctor-function take a reference to the Doctor as parameter. Maybe I'm missing something here, but hopefully you can see where I'm coming from. The references just prevent the Doctor or Patient from being copied when you pass it to the add function. Your std::vectors will still hold copies. In order to do what you're suggesting, you'd have to create arrays of references. But C++ don't allow you to do this. So we have to use arrays of pointers. Aaaah, I see, so when I pass the Patient to the addPatient-function, it will be passed as a reference and not a copy, but when I try to add that reference to m_patients, it actually makes a copy of the reference and adds that to the vector? So the vector doesn't actually hold a reference to the original Patient, but a copy that was copied from the reference? That makes sense, thank you! Yes, exactly that! In the deleted constructor is made private. Since nothing is supposed to call it, wouldn't it be better to make it public? My point is, if you try to call it, the compiler will complain that's illegal because of it being private, rather than your intention of not using it. What are other side effects of making the deleted constructor private/public? (I have read something about static functions accessing it or not at StackOverflow but was a poor explanation. That's why I love this tutorial it explains everything so well it makes me actually want to take my phone and keep learning at any moment as if it was an addictive phone game :) Yes, what you're saying makes sense -- because we don't want people creating CarLot() objects, if we make the deleted constructor public, the compiler will give a clearer warning that this is disallowed rather than being masked by the private access control. I've updated the example. Nice thinking! Static members (such as getCar()) can still access the static members, but that's what we're desiring in this case. Thank you Alex Please help.I don't get it this pat.m_doctor[count]->getName() and doc.m_patient[count]->getName() Here, are we not already mentioned the name. ( doc.m_patient[count] = "Dave" ? Why is that we use ->getname(). doc.m_patient[count] returns a Doctor object. We need to use the getName() member function to get the Doctor's name from it. For example I wrote this code.Can you explain me please ? If I add .getName at the end of the a.array[count] ı get an error.Normally it works well I know yours is different but I don’t get it exactly what I don’t understand. #include <vector> #include <string> #include <iostream> class A{ private: std::string m_name; std::vector<std::string> array; public: A(){} std::string getName(){return m_name;} void AddName(std::string a){ array.push_back(a); } friend std::ostream& operator<<(std::ostream &out,A &a){ int length = a.array.size(); for (int count = 0; count < length ; ++count) out << a.array[count] << std::endl; return out; } }; int main(){ A object; object.AddName("a"); object.AddName("b"); std::cout << object << std::endl; return 0; } Ok man I got it thank you. :) Your vector is a vector of std::string, so you can insert and modify strings directly. My vector was a vector of some class containing a std::string. Because the std::string was contained inside the class object, we needed to use a member function to retrieve it. "Consider the simplified case where a Course can only have prerequisite." should be replaced with: "Consider the simplified case where a Course can only have one prerequisite." Quite right. Thanks! Hi Alex, I wounder in exempel patient and doctor. let's asume the program doesnt end when u delete patient and doctor. then the std::vector(*patient/doctor) would be left with dangling point so u need to set them to nullptr, right? Is there a way to pop_back a specific of them? lets asume i wanna just take away p2? You're correct, if we deleted any of the Doctors or Patients without removing them from the vectors, the vectors would be holding dangling pointers. That's not a problem here, because we don't do the deletes until the program is ending anyway. But it's a good question in general. While moving the last element of a vector is easy (use pop_back), removing an arbitrary element from a vector is not straightforward. But you can do so like this: Hi Alex! Thanks for ur great respond!:) I had problem with the one u used and found another that worked vec.erase(vec.begin() + i); // where i is the possition of the vector! I got one more follow up question. let's say Dave (patient) change doctor so Scott is no longer having him ( let's just focus on so we just wanna delete from doctor m_patient). So i wanna make a function that give Doctor option to write a name and it will delete it from vector ( just wanna know how it works) lets say i got a std::cin>> deletePatient; and he types "Dave" how do i search in the vector for "Dave" possition which is m_patient[0] so i can put it in vec.erase and delete him from vector? Many thanks really good learning page:) well i meant the order should be like this: 1. First delete Dave from heap memory. 2. point Dave to nullptr. 3. then take it out from the std::vector through vec.erase right? No. First remove Dave from the vector, then delete it. If you delete Dave before removing it from the vector, the vector will be pointing at deallocated memory, which means when you go to see if the element is Dave, then you'll be accessing deallocated memory, which will cause undefined behavior. Thanks for making it clear. I wounder if i did vector.push_back(new Patient("Dave")). and if i did erase from vector first can i still access it so i can delete it? like in last quiz in 12.x after we add all those circle and triangle and then we deleted them. should we not pop_back so the vector is empty? Yes, if you push_back a Patient, you can pop_back that same Patient off the stack and then delete it. If we intended to continue using the vector, we'd definitely want to get rid of all the stuff we'd deleted manually. However, since the program is ending anyway, it doesn't matter. The vector will clean up after itself (note: it will not delete the pointers, which is good, since we've already done that manually). This is also more difficult than it seems like it should be. See these answers for some various ways to do this. Ok, i really didnt understand that, in the "old_name_" do i put "Dave" ? Yes, if you have the pointer pointing to the Dave object. If not (and you only know the name "Dave") then things are even more complicated. See this thread. Regarding: "We'll implement this function below Doctor since we need Doctor to be defined at that point." Consider: "We'll *define* this function below Doctor *definition* since we need Doctor to be defined *already for this function to be successfully defined*." Thanks for the suggestion. Text updated. Regarding: "They should use Doctor::addPatient() instead, which is publicly exposed" Accurate but does not explicitely expose to the "unseasoned" the association scheme to be used. Consider something of the sort: "We plan the association patient-doctor to occur at the same place where the association doctor-patient occurs. Thus when Doctor::addPatient(...) will be launched, it will launch Patient::addDoctor(...) appropriately so that the two associations will be properly implemented. For this scheme to work Patient::addDoctor(...) needs to be visible to Doctor objects (only, thus not being public) as will be arranged for below through appropriate befriending." Is this correct? Yep! Alex Bro I love you. I learned a lot from this site. thank you senpai Alex would you please explain how we might use that Course/prerequisite example? I'm interested in its design... It has no string members for example! It probably makes more sense for each course to have a name (or numeric id) so we can differentiate them. I've updated the example to include a name member for each course. please I need clarification on how the following lines of codes executes please I can't move on without understanding the above. Thanks in advance The data looks like this: d1->m_patient = ["Dave"] d2->m_patient = ["Dave", "Besty"] p1->m_doctor = ["James", "Scott"] p2->m_doctor = [] p3->m_doctor = ["Scott"] The code you pasted is from a Doctor member function, where doc is set to whichever doc we called (*d1 or *d2). It looks through all of the patients for that Doctor and prints their names. The Patient code works similarly, but iterates through each Patient's Doctors. In the examples above (and in the previous chapter), why do we need to use pointers for some of the objects? Hi, I have a question. On line 82 how come you do not include the Class Name Patient to the overloaded operator << like the way you did the ? Is it not required in this context - if not how come? Because operator<< is a friend function. Friend functions aren't considered members of the class. Grammar fix (plural doctors doesn't quite make sense): Change "The relationship between a doctors and patients is a great example of an association." To: "The relationship between a doctor and its patients is a great example of an association." Fixed, thanks! Is there a reason you used a normal for loop rather than a for each loop? No particular reason. Typo fixed, and good idea. Lesson updated. Under the "Reflexive Association" header, in the first sentence you accidentally called it "reflective" association. Great tutorials, thanks! Also, in the indirect association example (cars in a lot) it may be more clear to say: CarLot::getCar(d.getCarId()); ...rather than hard-coding the "17" again, to demonstrate how the Driver object is associated with the Car object. Hi Alex, I'm so lucky to find ur tutorials. I wanted to know if there is any difference between aggregation and association other than the direction. Thanks In terms of how aggregation and associations are implemented, there's usually little difference in C++. The differences between the two are mostly conceptual. Hi Alex, Can you please explain the syntax in line 28 of the Doctor-Paitent code : [code] std::string getName() const [code] i cannot seem to understand the role of 'const' at the end of function getName() thanks in advance Const in this context means getName() is a const member function -- that is, getName() promises not to modify any of the member variables, or call any non-const functions. Const objects can only call const member functions (and constructors/destructors). Hello Alex First of all, thank you for your generosity on sharing your tremendous knowledge of c++. There is one thing, regarding the DOCTOR and PATIENT program, I am not sure about. I copied and pasted part that I don't know void addPatient(Patient *pat) { // Our doctor will add this patient m_patient.push_back(pat); // and the patient will also add this doctor pat->addDoctor(this); } how does that pointer-to-member operator, -> , work here? because later you write like following in your program d1->addPatient(p1); d2->addPatient(p1); d2->addPatient(p3); Can you explain the mechanism behind there with this pointer-to-member operator? Thank you! Remember that "a->b" is the same as "(*a).b". So, when we say pat->addDoctor(this), we're really just saying, "Get the Patient that pointer pat is pointing at, and then call member function addDoctor with argument this." See lesson 6.12 for more info. Thanks Alex! Typo at the beginning: "quality" should be "qualify" I think. Yup, thanks for catching that. Typo at the begininng, after the "Association" sub-heading, first sentence... I think "and" should be "an". Also, your overloaded operator<< functions for both Doctor and Patient use "std:cout" instead of "out". Thanks, all updated! Alex, You still have "std::cout" instead of "out" in one of the operator<< functions. Thanks, fixed! Great explanation , thank you :)) Name (required) Website
https://www.learncpp.com/cpp-tutorial/10-4-association/comment-page-1/
CC-MAIN-2019-13
en
refinedweb
I'd like to define a general function foo that takes data, perhaps manipulates underlying class variables, and returns an int. However, when I attempt to create a separate function that takes a vector of foo objects, the compiler fails to deduce the template parameter. The following illustrates what I've tried: #include <vector> template <typename T> class Base { public: virtual int foo(const T& x) const = 0; }; template <typename T> class Derived : public Base<std::vector<T> > { // specialize for vector data public: virtual int foo(const std::vector<T>& x) const { return 0;} }; template <typename T> int bar(const T& x, const std::vector< Base<T> >& y) { if(y.size() > 0) return y[0].foo(x); } int main(int argc, char** argv) { std::vector<double> x; std::vector< Derived<double> > y; bar(x, y); } This fails to find a matching function for bar, with the notes: main.cc:16:5: note: template argument deduction/substitution failed: main.cc:24:11: note: mismatched types ‘Base<T>’ and ‘Derived<double>’ and main.cc:24:11: note: ‘std::vector<Derived<double> >’ is not derived \ from ‘const std::vector<Base<T> >’ Forgive me if the answer lies in an already-posted thread; I've read quite a number that do seem related, but don't, to my knowledge, address this issue. First note that std::vector<Base<T> > and std::vector<Derived<T> > are different types, even if Base<std::vector<T>> is the base of Derived<T>. Type conversion doesn't happen in template type deduction. So T cannot be deduced by matching the second argument y of type std::vector<Derived<double>> that you pass to bar with std::vector<Base<T>>. Next, suppose we make y of the "right" type std::vector< Base<double> > y; so you can pass it to bar. Now in principle we can deduce T by matching the second parameter in bar of type std::vector<Base<T>> with the type std::vector< Base<double> > of y. So T is deduced as double, however don't forget that x, which you pass as the first parameter to bar, has type vector<double>, so from x we will deduce T as vector<double>, which of course is inconsistent with double deduced from y. So type deduction fails. Here is a simplified example that replicates your issue.
http://databasefaq.com/index.php/answer/46168/c-templates-vector-passing-vector-of-derived-templated-class
CC-MAIN-2019-13
en
refinedweb
tag:blogger.com,1999:blog-24080016269578604432018-12-05T11:08:54.396+00:00Thoughts.In descending chronological order.Adam Creeger Grails developers can learn from the Github/Rails Mass Assignment Vulnerability<div style="padding:5px; background-color: rgba(165, 175, 200, .2);"><strong>In short:</strong> Github's security was breached due to a "vulnerability" in Rails. Grails also suffers from the same vulnerability, but there are ways to protect your app. Check your code for instances of:<pre name="code" class="java">new DomainModel(params)</pre> and replace them with <a target="_blank" href="">bindData</a> or command objects. When using bindData, use the "includes" option - it is safer than "excludes". The following regular expression might help you find some offending code:<pre name="code" class="java">new .*?\(params\)</pre><br />In addition, make sure that all your domain objects have comprehensive constraints to protect from malicious users. For more info, read on.</div><hr style="margin: 20px auto; border:1px solid silver; width:75%" />Over the weekend, Github suffered a <a href="" target="_blank">security breach</a> that allowed an unauthorized user to make a <a href="" target="_blank">commit</a> to the main <a href="" target="_blank">rails/rails</a> repo. Fortunately, the user had no malicious intent, and only made the commit to bring awareness to the issue. Whether this was the best approach to achieve this is another discussion, and not the subject of this post.<br /><br /.<br /><br /><h4>Weapons of Mass Assignment</h4>Rails, just like many modern web frameworks, allows you to quickly create an object using the request's parameters. In Rails, this code looks a little like this:<pre name="code" class="ruby"><br />@user.update_attributes(params[:user])<br /></pre>- or - <br /><pre name="code" class="ruby">@user = User.new(params[:user])</pre><br />Basically, what is happening here is that any of the <span class="code">user</span> instance's properties that have a corresponding request parameter get the value of that parameter. In this case, imagine the User object class had at least two properties: <ul><li>name</li><li>isAdmin</li></ul> Now imagine that the request to create a user had one parameter, name. Under normal operating circumstances, this would work - but it is inherently insecure. An attacker only has to guess that a property exists on the User object with the name 'isAdmin', then add isAdmin=true to the HTTP request. When they do this, the user will be created with isAdmin set to true. Bad news.<br /><br />This is known as the "mass-assignment vulnerability" and is, along with mitigation strategies, described in many places - including the <a href="" target="_blank">Ruby on Rails Security Guide</a>.<br /><br /><h4>What happened at Github?</h4>According to the <a href="" target="_blank">github blog</a>, the "malicious" user used the mass-assignment vulnerability to compromise the form that allows you to set authorized SSH public keys for your repo, and added his public key to the rails repo, effectively giving him permission to directly commit there.<br /><br /><h4>What about Grails?</h4>Grails also has the concept of mass assignment, but tends to refer to it as "data binding" or "batch updating". It is <a href="" target="_blank">covered in detail</a> in the Grails Reference, but i'll summarize it here.<br /><br /><pre name="code" class="java"><br />//Create a user object, initializing it with values taken directly from the request.<br />def user = new User(params)<br /></pre>- or -<br /><pre name="code" class="java"><br />def user = new User()<br />//Update the user object, with values taken directly from the request.<br />user.properties = params<br /></pre>This is exactly equivalent to the ruby code I posted above. It also suffers from the same vulnerability. Any property on the User object that grants that user elevated privileges will be open to attack via HTTP.<br /><br /><h4>If it's insecure, why would anyone use it?</h4>Well, two reasons:<ol><li>Many people don't know it's insecure</li><li>They RTFM. But only briefly.</li></ol> To address my first point, most Grails developers, like those using any other language/frameworks, are mid-level devs under pressure to get features out. In this situation, it's also unlikely that their code will get peer-reviewed, and unlikely that they'll benefit from an experienced developer pointing out the error of their ways. This is what has happened in the Rails world, and I'm sure it's happened with Grails too.<br /><br />So, what about the documentation? Grails docs are awesome. They've improved leaps and bounds in the last couple of years. They're clear, concise and accurate. However, in this case, the security risks and mitigation strategies are buried deep down in the topic of <a href="" target="_blank">data binding</a>. "Data Binding and Security concerns" appears as the 7th and final section, after the section entitled "Data binding and type conversion errors". Up until this point, every example shows the use of the <span style="font-family:Courier">new User(params)</span> method of object creation.<br /><br / <span style="font-family:Courier">new User(params)</span>.<br /><br /><h4>So, what should I do?</h4:<h5>Using bindData</h5><pre name="code" class="java"><br />//Update the Person object, with values taken directly from the request - including only properties known to be safe<br />def p = new Person()<br />bindData(p, params, [include: ['firstName', 'lastName]])<br /></pre><br />For more, see the <a href="" target="_blank">bindData docs</a>. You'll notice that there are ways to blacklist/<strong>exclude</strong> certain properties - this is ok, but it is more prudent to always white-list/<strong>include</strong> allowed properties instead.<br /><br /><h5>Using the subscript operator</h5><pre name="code" class="java"><br />//Retrieve a Person object, and update it with values taken directly from the request - including only properties known to be safe<br />def p = Person.get(1)<br />p.properties['firstName','lastName'] = params<br /></pre><br /><br />This is a "white-list" approach, that achieves the same as the bindData method above, but only works on Domain Objects. It is also not as flexible as bindData.<br /><br /><h5>Using Command Objects or Action Arguments</h5><a href="" target="_blank">Command Objects</a>:<br /><br /><pre name="code" class="java"><br />class UserController {<br /> def authService<br /><br /> def create = { CreateUserCommand cmd -><br /> if (cmd.hasErrors()) {<br /> redirect(action: 'createForm')<br /> return<br /> }<br /><br /> authService.createUser(cmd)<br /> }<br />}<br /><br />class CreateUserCommand {<br /> String username<br /> String password<br /><br /> static constraints = {<br /> username(blank: false, minSize: 6)<br /> password(blank: false, minSize: 6)<br /> }<br />}<br /></pre><br /><br />If you have one or two properties you need to update, you could use action arguments (new in Grails 2.0). They allow you to do something similar:<br /><br /><pre name="code" class="java"><br />def create = { String firstName, String lastName-><br /> def user = new User(firstName: firstName, lastName: lastName)<br /> // save the user, checking for validation errors<br />}</pre><br /><br /><strong>Important:</strong>.<br /><br /><h4>What about the Grails Framework?</h4>Thankfully, the Grails core team is wanting to address this, and are asking for feedback from the community. Here's mine...<br /><h5>Update the docs for all Grails versions</h5>Remove.<br /><h5>Introduce a dataBindable static property</h5>Grails doesn't have an equivalent of Rails's <a href="" target="_blank">attr_accessible</a> or <a href="" target="_blank">attr_protected</a> attributes. But it should. For example:<br /><br /><pre name="code" class="java"><br />class User {<br /> //Fields listed here would not be processed bindData, or new User(params)<br /> static dataBindable = ["username", "firstName", "lastName"]<br /> <br /> String username<br /> String firstName<br /> String lastName<br /> <br /> boolean isActive = false<br /> int failedPasswordAttemptCount = 0<br /> <br /> Date dateCreated<br /> Date lastUpdated<br /><br /> static constraints = {<br /> //strict constraints go here<br /> }<br />}<br /></pre>Crucially, a class with an empty or missing dataBindable property would not be processed at all by bindData or other batch updating mechanisms. This is a harsh breaking change, but makes it clear that the developer must think about security. Since it is a breaking change, there should be a configurable "legacy mode" that could be configured to "true" to enable old behavior for all objects, or it could also be configured to a list of classes or namespaces to enable gradual migration. "dateCreated" and "lastUpdated" should never be updated via bindData or similar.<br /><h5>Update the scaffolding scripts</h5:<ul><li>Not display properties that are not in the "dataBindable" property</li><li>Update these properties using explicit setters in the controller code, with comments explaining why this is so.</li></ul><br /><h4>In Conclusion</h4>Grails isn't immune from this kind of vulnerability - in fact it is likely that many Grails apps suffer from it. Use what happened to Github as inspiration for auditing your code, and make sure that you use sensible methods of handling data from HTTP requests. I hope both the Grails framework and its community are able to benefit from this weekend's events.Adam Creeger is all about Tweet Shortening, not just URL shortening<strong>In short:</strong> Break the habit of using cryptic short urls and then adding some context to them manually. Instead, use a shortened version of your [brand] name as your domain, then write each "URL alias" in the style of a short facebook status update. For example: <blockquote><a target="_blank" href=""></a></blockquote>If you're tweeting, you can then use the rest of the 140 characters much more efficiently - or not at all.<br /><br /><h4>The Background</h4>A.<br /><br />A few days ago, I got around to building it. Sure, I could have bought it, but I wanted the challenge of maintaining it and hosting it too. To make a short story shorter, it is now live. I used it for real this morning, in <a href="" target="_blank">this tweet</a>. To save you a click, here it is:<br /><br /><img src="" border="0" alt="" id="BLOGGER_PHOTO_ID_5569547201399869202" /><br /><br /, <strong>it has become my personal Tweet Shortening Service.</strong><br /><br /><h4>Tweet Shortening in Action</h4>The link to this page is <a href=""><:<br /><blockquote>I wrote a blog post about being smarter with URLs: <a href=""></a></blockquote>But instead, I think I'll just tweet:<br /><blockquote> - kind of.</blockquote>Pompous? Definitely. Inaccurate? Maybe. Concise? Yes. Intriguing? I hope so. Effective? We'll see. (For the record, I don't really believe I've invented anything)<br /><br /><h4>How to start get your own Tweet Shortening Service</h4>In the off chance you want to join in, you can follow these steps:<ol><li>First of all, you should buy the domain you want to use to identify yourself. You might want to check out <a href="" target="_blank">iwantmyname.com</a> as a start.</li><li>Now, you want to get a white-labeled URL shortener. I rolled my own, but I wouldn't recommend it for most people. It looks like <a href="" target="_blank">ShortSwitch</a> is a good option. For the majority of individuals, their $4 a month plan should suffice.</li><li>Start writing short tweets! I suggest your first should be something like: "<em></em>" linking to this blog post of course :-)</li></ol><br /><h4>The problems with Tweet Shortening</h4>There (<a href=""></a>) in TweetDeck, it auto-converts it to <a href=""></a>. Damage done. Thankfully, you can disable this otherwise useful feature. Those smart folks at TweetDeck were also kind enough to make it easy to toggle on and off on a per-link basis.<br /><br />That's it! I'm enjoying messing around with short tweets, I hope you do too.Adam Creeger Days In: Some RockMelt Tips and Tricks<strong>UPDATE: <span style="text-decoration:line-through">I have 50 Rockmelt invites, on a first come, first served basis. Click here for your invite!</span> Sorry, all the invites were used in only 45 minutes...</strong><br /><br />It's been almost 3 whole days since I got hold of <a href="" target="_blank" onClick="recordOutboundLink(this, 'Outbound Links', 'rockmelt.com');">RockMelt</a>,.<br /><br /><h4>Learn the lingo, get an edge</h4>An "edge" is a simply a toolbar that appears at the edge of the screen. RockMelt has two edges out of the box, the friend edge (on the left) and the app edge (on the right).<br /><br /><strong>The friend edge</strong> will show you either your friends that are online, or your favorites (more on that later). <strong>The app edge</strong>.<br /><br /><em>Side note: Graph geeks amongst you may well have though an edge was a relationship between two friends - but no, in RockMelt, it's just a toolbar.</em><br /><br /><h4>Share a link with a friend, in a couple of clicks</h4>Like a page you're looking at? Want to share it with someone in particular? That's pretty easy.<br /><br />Drag and drop the URL from the address bar (using the globe <img style="border:none;vertical-align:bottom;padding:0" src="" /> or the padlock <img style="border:none;vertical-align:bottom;padding:0" src="" />) onto your friend in the friend edge on the left. Then choose one of the options:<br /><br /><img style="border:none" src="" /><br /><br />Type your own message, and your done!<br /><br /><h4>Open search results in way that suits you</h4>Take a close look at the search results pane, and you'll see two features you may find useful:<br /><img style="border:none" src="" /><ul><li>If you would rather open each search result in a new background tab, click on the small plus icon <img style="border:none;vertical-align:bottom;padding:0" src="" /> that appears when you move your mouse over each result.</li><li>If you decide you want to see your search results in one tab, just like a search in most other browsers, then click the "View in tab" link at the top.</li><br /></ul>I also use the arrow keys to quickly preview each result, then the enter key to go there.<br /><br /><h4>Let RockMelt feed you</h4>After a little while, RockMelt can start suggesting new feeds for you to consume. Use the browser for a few days, then use the the "Add Feeds" button <img style="border:none;vertical-align:bottom;padding:0" src="" /> on the app edge (on the right) to have RockMelt automatically suggest feeds for sites you've been using. Click the star next to a feed to add it to the app edge.<br /><br /><h4>Get friendly with the address bar</h4>You can get to your friend's Facebook profile pages easily in RockMelt. Just type their name into the address bar, click on the suggestion, then click on your friend's profile image that appears.<br /><br /><h4>Pick favorites</h4>Chances are you've got a lot of friends. Cut down the noise and choose some <img style="border:none;vertical-align:bottom;padding:0" src="" /> favorites. The easiest way is to add favorites is to:<ol><br /><li>Click on the star in the online/favorite toggle button <img style="border:none;vertical-align:bottom;padding:0" src="" /> at the top of the friend edge.</li><br /><li>Click the "Show Friends" button <img style="border:none;vertical-align:bottom;padding:0" src="" /> towards the bottom of the friend edge.</li><br /><li>Use the search box to find the friends you really want to know about, then make them a favorite by clicking the star <img style="border:none;vertical-align:bottom;padding:0" src="" /> next to their name.</li><br /><li>(Optional) Unfriend everyone else. :-)</li><br /></ol><h4>Business up front, party in the back</h4>Being so connected to your friends is great, but what if you need to focus on your work and not get distracted? Use the <span style="font-family:courier">Ctrl-Shift-Space</span> key combo to take the edges off your RockMelt. Think of it as a modern-day <a target="_blank" onClick="recordOutboundLink(this, 'Outbound Links', 'en.wikipedia.org/wiki/Boss_key');" href="">Boss Key</a>. If you want to hide just one of the edges you can use the <span style="font-family:courier">Ctrl-Shift-LeftArrow</span> and <span style="font-family:courier">Ctrl-Shift-RightArrow</span> to control your "friend edge" and your "app edge" respectively. The same key combo will bring them back.<br /><br /><h4>Use your invites, wisely</h4>RockMelt seems to give out an invite or two every couple of days (so far!) It also very cleverly suggests friends that have requested an invite, so hook a friend up, send them an invite - you're sure to get more. Use the "Open Invites" button <img style="border:none;vertical-align:bottom;padding:0" src="" /> to send invites to the friends who you know really want one.Adam Creeger launches<span style="font-style:italic;">Note: The opinions expressed in this post (and all others) are my own and are not necessarily representative of those of AKQA, HealthPartners or any party involved in virtuwell. Please read <a target="_blank" title="Press, then release." href="">this press release</a> for more information.</span><br /><br />Today is a good day. This morning, at around 7am PST, <a target="_blank" title="The smartest, friendliest insurance company on the planet" href="">HealthPartners </a>launched an application called <a target="_blank" title="Say hi to the bouncing ball, he's called Fred" href="">virtuwell</a>. It was created in partnership with <a target="_blank" href="" title="Shameless plug.">AKQA</a> (the company I work for) over the last 15 months or so.<br /><br />The premise of the application is simple. If you or your kids are feeling sick and you are short of time, or perhaps if you don't have insurance, then virtuwell may be for you. It offers online diagnosis and treatment (including prescriptions) at a very affordable cost - it may even be covered by your health plan.<br /><br />As Technical Architect, virtuwell was definitely the most challenging project of my career. We were responsible for developing the entire application, and had to meet the strict security and quality requirements that come hand in hand with a healthcare app without sacrificing a clean, friendly user interface. On top of that, I've had to master an entirely new technology stack. We have all learned a lot.<br /><br />As a recent arrival in the USA, I have barely began to understand the challenges facing the health system here, but I am proud to have been part of a passionate and dedicated team that has made strides towards making healthcare more accessible to all.<br /><br />It is also my mother's birthday.<br /><br />As I said earlier, today is a good day.Adam Creeger 3.1.1 and paragraph tagsRight now I am getting to grips with the finer details of an Alfresco v3.1.1 installation. It has been fun*.<br /><br />Today.<br /><br />I found <a target="_blank" href="">this bug report</a>,:<br /><br />(DISCLAIMER: The following changes will be lost if you upgrade/replace your Alfresco installation. But since this issue doesn't occur in any other version of Alfresco, that should be ok.)<br /><br />Step 1: Open up <tomcat>/webapps/alfresco/scripts/ajax/xforms.js<br /><br />Step 2: Find the definition of alfresco.constants.TINY_MCE_DEFAULT_SETTINGS (it is near the end) and change it to be:<br /><pre name="code" class="javascript"><br />alfresco.constants.TINY_MCE_DEFAULT_SETTINGS =<br />{<br />theme: "advanced",<br />mode: "exact",<br />plugins: alfresco.constants.TINY_MCE_DEFAULT_PLUGINS,<br />width: -1,<br />height: -1,<br />auto_resize: false,<br />force_p_newlines: false,<br />encoding: "UTF-8",<br />entity_encoding: "raw",<br />add_unload_trigger: false,<br />add_form_submit_trigger: false,<br />theme_advanced_toolbar_location: "top",<br />theme_advanced_toolbar_align: "left",<br />theme_advanced_buttons1: "",<br />theme_advanced_buttons2: "",<br />theme_advanced_buttons3: "",<br />urlconverter_callback: "alfresco_TinyMCE_urlconverter_callback",<br />file_browser_callback: "alfresco_TinyMCE_file_browser_callback",<br />forced_root_block: false,<br />force_br_newlines: true<br />};<br /></pre><br /><br />Note the two last lines.<br /><br />When you are done, all you need to do is clear your browser's cache, and go edit some web content in Alfresco. Anything you create from now on will no longer be wrapped in the usually wonderful <p> tags.<br /><br />*This depends on your definition of fun.</p>Adam Creeger version of FBConnectAuth released: 1.0One year on, I've just released a minor enhancement to the tiny open source project I created called <a target="_blank" href="">FBConnectAuth - Facebook Connect Authentication for ASP.NET</a>.<br /><br />This release contains two enhancements:<br /><div><ul><li>It supports Facebook's new <a target="_blank" href="">Graph API Javscript SDK</a> (but remains backwards compatible)</li><li>It works in partially trusted environments</li></ul><div>It is specifically targeted at .NET 2.0 (as was the previous release) for the benefit of those who don't have control over their production environment.</div></div><div><br /></div><div>Interestingly, I noticed that the new Graph API requires the use of the Facebook Application's "Application ID", rather than "API Key". This means that an example of using FBConnectAuth looks with the Graph API like this:</div><br /><pre name="code" class="c-sharp"><br />//Note this is the "app id", not "api Key"<br />FBConnectAuthentication auth = new FBConnectAuthentication(appId,appSecret);<br />if (auth.Validate() != ValidationState.Valid)<br />{<br /> // The request does not contain the details<br /> // of a valid Facebook connect session.<br /> // You'll probably want to throw an error here.<br />}<br />else<br />{<br /> FBConnectSession fbSession = auth.GetSession();<br /><br /> string userId = fbSession.UserID;<br /> string sessionKey = fbSession.SessionKey;<br /><br /> //This is the Graph API access token<br /> //(available only when using the Graph API)<br /> string accessToken = fbSession.AccessToken;<br /><br /> // The above values can now be used to communicate<br /> // with Facebook on behalf of your user,<br /> // perhaps using the Facebook Developer Toolkit<br /><br /> // The expiry time and session secret is also available.<br />}<br /></pre><br /><br />If you are interested, go <a target="_blank" href="">take a look</a>.Adam Creeger Grails tip: Using a DB reserved word as a domain class name in GrailsWe recently came across a situation where we couldn't our Grails app was failing because it was trying to create a table with the name of 'Condition', which turns out to be a reserved word in MySQL... We worked around it by changing the name of the table to 'conditions' by using the <a target="_blank" href="">Grails ORM DSL</a>, but it turns out there is another way.<div><br /></div><div><b>Backtick to the rescue...</b></div><div>Hibernate allows you to use backticks (`) to indicate that a name should be escaped - you can simply use this in your grails mapping. For example, we could have used:</div><br /><pre class="code"><br />class Condition {<br /> String property1<br /> String property2<br /> ...<br /><br /> static mapping = {<br /> table '`condition`'<br /> ...<br /> }<br />}<br /></pre><br /><br />To be honest, I'm not sure why Grails and/or hibernate don't escape all table and column names by default (I'm sure there is a good reason) - there is an <a href="" target="_blank">open JIRA issue</a> in Grails around this very problem...Adam Creeger location of the User Profile for Network Service on Windows Server 2008 & 7This kind of thing should be easy to find, but I couldn't hunt it down on google. So to save someone else some pain, here it is - the location of the %USERPROFILE% / home directory for the NT AUTHORITY\NetworkService user:<div><br /></div><div>(drum roll...)</div><div><br /></div><div>%systemroot%\ServiceProfiles\NetworkService</div><div><br /></div><div>which usually translates as:</div><div><br /></div><div>c:\Windows\ServiceProfiles\NetworkService</div><div><br /></div><div>The user profiles for other "well known" service accounts (such as LocalService) are siblings of this directory.</div><div><br /></div><div>I hope that saves someone some time...</div><div><br /></div>Adam Creeger - A good year...2009 draws to a close, and it just struck me what a phenomenal year it has been.<br /><br />This year I:<br /><br /><ul><li>Traveled through Morocco</li><li>Visited Paris</li><li>Got engaged!</li><li>Drove across the USA, from New York to California, via Nashville, New Orleans, Austin and Roswell</li><li>Moved to San Francisco</li><li>Helped AKQA and Fiat win <a href="">loads of awards</a> for eco:Drive</li><li><a href="">Hung out with a supermodel</a>, and got paid for it.</li><li>Got <a href="">interviewed by Wired Magazine</a>, along with a cool photo shoot.</li><li>Joined a great team at AKQA SF, and helped make it even greater</li><li>Released <a href="">FBConnectAuth</a>, a tiny open source component for ASP.NET that helps with Facebook Connect authentication.</li><li>Met someone who was actually using FBConnectAuth. </li><li>Got to live in a great house in SF, in an awesome neighborhood.</li><li>Learnt to speak American.</li><li>Got to learn Groovy and Grails</li><li>Worked (and still working) on a massive (and still super-secret) grails project</li><li>Earned a new nickname ("Piping Hot" - for my terrible Halo skills)</li><li>Made new friends with funny accents.</li></ul><br />I have a lot to be thankful for.<br /><br />I hope you all (that's you mum) have a wonderful 2010! <div><br /></div><div>Happy Holidays (see, check out my American skills)</div>Adam Creeger winning...18 months ago a few of us started work on something rather special. 12 hours ago, we won <a href="">the advertising industry's biggest award</a>.<br /><br />It is sometimes the ones who shout the loudest that get the praise, so I want to take a moment to thank those that worked incredibly hard on an amazing product. To the small core of us that sat in that cosy, sunny room, developing a language of our own: Mark, Harald, Stuart (who pretty much became my wife), James, Richard S, Tristan, Martin, Zahid, Kevin - you rock! We can now, officially, get a woop woop.<br /><br />To the guys who worked so closely with us the whole time, crafting words, designing t-shirts and making everything look wonderful: Chris, Richard B, Andy - you guys were the best creative team a bunch of geeks could have ever asked for.<br /><br />Alison, you made the complicated simple. James and Nick, the branding was amazing - the video was a masterpiece.<br /><br />Thanks to Bonnie, Eli and Livia for kicking things off and keeping them going. Deep gratitude to Neville and Miriam for all your advice.<br /><br />And not forgetting our client - Luis. Without such a visionary and passionate figure sitting on the other side of the table, none of this would have happened.<br /><br />That is enough gushing for now - I'm off to bed. Ciao!Adam Creeger Adobe AIR files with Authenticode CertificatesThere are some things that always seem a little trickier than they should be. Renting an apartment in London that has working heating and hot water is one of them, converting an Authenticode certificate from SPC and PVK files to a format that you can sign Adobe AIR files with is another.<br /><br />Since I still haven't figured out the former, I'm going to write about the latter.<br /><br />First things first, if you're going to get a certificate <span style="font-weight: bold;">solely</span> for signing AIR files, then buy one from <a target="_blank" href="">Verisign</a> or <a target="_blank" href="">Thawte</a> specifically for AIR. It's just easier.<br /><br /.<br /><br /...<br /><br />Firstly, you need to ask yourself two questions:<ol><br /><li>Do you feel lucky?</li><br /><li>Can you find a tool called pvk2pfx on your machine. This is a pain to get hold of, but lives in bin directory of most Microsoft SDKs.</li><br /></ol>If you're lucky AND you can find pvk2pfx, then <a href="#IAmFeelingLucky">skip forward a little</a>. If you can't find it, then read on.<br /><a name="IAmNotFeelingLucky"></a><br /><br /><h3>I don't have pvk2pfx, I need something else</h3><br />Worry not, this is still completely possible...<br /><ol><br /><li>Get a tool called pvkimprt. You can <a target="_blank" href="">download it from Microsoft</a>. Run the self extracting whatsit, then run the installer, and make a note of its final resting place.</li><br /><li>Open up a commmand prompt (Start, Run, "cmd").</li><br /><li>Change to the directory where pvkimprt ended up.</li><br /><li>Run:<br /><span style="font-family:Courier;">pvkimprt -PFX <path\to\cert.spc> <path\to\key.pvk></span></li><br /><li>Choose the following options:<br /><br /><a title="Choose to export the private key..." onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 320px; height: 246px;" src="" alt="Choose to export the private key..." id="BLOGGER_PHOTO_ID_5277745364691796066" border="0" /></a><br /><br /><a title="Choose the PFX format, choosing to include certificate chain and use strong protection." onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 246px;" src="" border="0" alt="Choose the PFX format, choosing to include certificate chain and use strong protection." id="BLOGGER_PHOTO_ID_5277746167415234386" /></a><br /><br /><a title="Choose a password..." onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 246px;" src="" border="0" alt="Choose a password..." id="BLOGGER_PHOTO_ID_5277746167726973346" /></a><br /><br /><a title="Select a location to save the pfx file..." onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 246px;" src="" border="0" alt="Select a location to save the pfx file" id="BLOGGER_PHOTO_ID_5277746171691645074" /></a><br /><br /><a title="This is what success looks like..." onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 183px; height: 100px;" src="" border="0" alt="This is what success looks like..." id="BLOGGER_PHOTO_ID_5277746172288301090" /></a><br /></li><br /><li>You can use the resulting pfx file to <a target="_blank" href="">sign an Adobe AIR file with adt</a>.</li></ol><br /><br />You're done! You might want to ignore the rest of this post, it will make you wish you were luckier. Any questions/mistakes/omissions, feel free to ask...<br /><br /><a name="IAmFeelingLucky"></a><h3>I'm lucky, I have pvk2pfx</h3>This is much easier. Simply:<ol><br /><li>Open up a commmand prompt (Start, Run, "cmd")</li><br /><li>Change to directory where you found pvk2pfx.</li><br /><li>Run:<br /><span style="font-family:Courier;">pvk2pfx -pvk <path\to\key.pvk> -pi <pvk password> -spc <path\to\cert.spc> -pfx <path\to\output.pfx> -po <new password for pfx file></span> (all on one line)</li><br /><li>You can use the resulting pfx file to <a target="_blank" href="">sign an Adobe AIR file with adt</a>.</li><br /></ol><br /><br />I hope that was helpful!Adam Creeger Anatomy of a Seriously Sophisticated AIR ApplicationAs promised, I've uploaded the slides that <a href="" target="_blank" title="Mr. Pixel Pod">Rick Williams</a> and I presented at Adobe MAX Milan on December 2nd 2008. Enjoy!<br /><br /><embed src="" flashvars="id=e756537e-239c-4c8d-b3da-c22f91ce9469" width="500" height="375" allowFullScreen="true" type="application/x-shockwave-flash"></embed>Adam Creeger to all those who came to see us talk...Just a quick note to say thanks to all the folks that came to see <a href="" target="_blank" title="Mr. Rick Williams">Rick</a> and I present "The Anatomy of a Sophisticated Air Application" at Adobe MAX in Milan. It was also great to speak to so many people who loved Fiat eco:Drive.<br /><br />We will the get the slides up ASAP. Over the next few weeks, I'll blog about some of the topics we covered in a bit more detail - leave a comment if there is something specific you would like to know about.<br /><br />Thanks!Adam Creeger kind of waterfall development I like...The process cynic in me says you don't often hear the words "waterfall" and "innovation" in the same sentence... This, however, is an Creeger Sequence Diagrams: It doesn't get any more exciting than this...Ok, a slight exaggeration. You might even accuse me of lying, but I find this pretty damn cool.<br /><br />Last year I stumbled upon an ingenious tool - <a target="_blank" title="Get to know Alice, Bob and co..." href="">Web Sequence Diagrams</a>. By using a markup language, you can draw sequence diagrams without the fuss of Visio, or even Gliffy.<p><img src="" alt="A day in the life of a builder, sequence diagram style..." /></p><br /.<br /><br />So, I set about writing one. And got no further. Life, well actually work, intervened, as usual.<br /><br />But fear not (for I know you are trembling with fear and anticipation), others have come to the rescue with a <a title="Oh the excitment..." target="_blank" href="">plugin </a>for <a target="_blank" title="The uber-wiki" href="">Confluence</a> and a <a href="" target="_blank" title="What sheer bliss...">plugin</a> for the <a target="_blank" title="Ok, it has some flaws, but not many..." href="">rather wonderful Trac</a>. On top of all this, you can also find a <a target="_blank" title="With these scripts, you are truly spoiling us" href="">whole set of example scripts for Python, Java, and Ruby</a>. Apparently, you can even render inline markup by using a bit of JavaScript magic, but I haven't quite got that working yet...<br /><br />Genius.Adam Creeger prediction about projection...<p>This morning, <a title="David Pogue, talking about a small projector" target="_blank" href="">news about a tiny projector</a> dropped into my inbox.</p><p>To summarise, this is a battery powered projector, perfect for showing movies from your ipod. Great. The author of that article got it right, this is game changing stuff. Very cool.<br /><br />But I think it is bigger than that.<br /><br />You see, when the MP3 format first came on the scene, the people rejoiced. "Woo", said Bob, "I can burn a CD with 200 songs". "Amazing", said Esmeralda, "I can FTP this song in just 1 hour!". "Holy crap!", said <a title="Now making music production software..." target="_blank" href="">Justin<.<br /> <a title="Not quite like this, but nearly..." target="_blank" href="">futuristic input mechanism</a>, and you'll have a pocket powerhouse.<br /><br />I'm guessing this is why Google never bothered trying to make a desktop operating system - mobile will be the new desktop.Adam Creeger belated first post...<p>There seem to be a few questions to answer before actually starting a blog. What will the content be? How often will I post? How will I start it? When will I stop procrastinating and actually do it?</p><p>This sentence answers all those questions in one - (see below; whenever something substantial falls out of my brain; with a random post; a few posts ago)</p><p>With the realisation that a blog that has a shaky start is better than no blog at all, it's time to get this thing off the ground. First let me introduce myself. My name is Adam Creeger. I work as a "Technical Architect" for <a id="neq1" title="Check out the recruitment section..." href="" target="_blank">AKQA</a> London, a leading light in the murky world of Digital Advertising. They used to call themselves an interactive marketing shop, but I guess the advertising industry shifted itself our way, and swallowed us up in the process.</p><p.</p><p <a id="t4p5" title="the obligatory wikipedia link" href="" target="_blank">dynamic language</a> (including Powershell). At the end of last year, I stepped back in time and did some <a title="Fancy fancy..." href="" target="_blank">award winning</a>, brain-taxing <a href="" title="Fiat 500? Paint it black..." target="_blank">work</a> (<a id="hm6p" title="enjoy" href="" target="_blank">transitive closures</a>, <a title="Say Ciao! to Franco, Merv and Claudia..." target="_blank" href="" id="pv6t">Fiat eco:Drive</a> - one heck of an AIR project. Most importantly though, I get to work with some inspirational people who astound me with their passion for this game.</p><p.</p><br /><p>Thank you, and good night.</p>Adam Creeger in Flex/Air: Filters behaving badly...I'm a firm believer in diagnostics - in .NET land I just can't live without log4net. The mx.logging.* namespace in Flex 3 appears to give us <span style="font-style: italic;">some </span>similar functionality (I'm talking about restricting loggers to certain "categories" here) but without the <a target="_blank" href="">lovely configuration framework</a> of log4net.<br /><br />In an app we're writing at the moment, I was seeing some wierd behaviour. Basically, the logging targets were just not obeying their filters, meaning every message was getting written everywhere. Not good. At all.<br /><br />So I did some digging and found two things that surprised me:<br /><br /><h4>1. Setting a level on a target will call Log.addTarget() - probably prematurely</h4><br />When stepping through the source code of the Flex logging framework, I found this bit of code in AbstractTarget:<br /><br /><pre class="code">public function set level(value:int):void<br />{<br /> // A change of level may impact<br /> // the target level for Log.<br /> Log.removeTarget(this);<br /> _level = value;<br /> <strong style="color: rgb(204, 0, 0);">Log.addTarget(this);</strong><br />}</pre><br />This highlighted line is the culprit. The contents of the addTarget method set up logging restrictions based on the value of the filters array. This has two consequences. Firstly, if you follow <a target="_blank" href="">Adobe's guidelines</a> you'll end up running through the addTarget code twice, once when you set the level, and once when you actually call Log.addTarget yourself - not<span style="font-weight: bold;"> </span><span style="font-style: italic;">awful, </span>but not great either - lots of loops, which gets bad when you've got a lot of Loggers.<br /><br />The other consequence was the cause of my pain...<br /><br /><h4>2. You have to set your filters before you set your level</h4><br />Yep. That's right, the following code is bad:<br /><pre class="code">//BAD CODE, DON'T USE<br />var target : TraceTarget = new TraceTarget();<br />target.level = level; //This should not come first.<br />target.filters = filters; //This is pretty much ignored</pre><br />Here's why:<br /><ol><li>The default value of target.filters is ["*"], which will means the target listens to log messages from every logger.</li><li>When you set target.level, Log.addTarget is called, and all the filter magic happens. Except that you haven't set the filters yet. So it uses the default value ["*"], and your filter listens to everything. This was what was happening in our code.</li></ol>So, the correct code should be: <pre class="code">var target : TraceTarget = new TraceTarget();<br />target.filters = [com.mydomain.myproject.MyClass,mx.rpc.*]; //Set first<br />target.level = level; // set second.</pre><br /().<br /><br />The observant among you may have noticed that if Adobe improve the behaviour of setting target.level in the future, you may find that your logging stops because addTarget never gets called. If you're worried about that, you can do the following: <pre class="code">var target : TraceTarget = new TraceTarget();<br />target.filters = [com.mydomain.myproject.MyClass,mx.rpc.*];<br />Log.addTarget(target); //honours the filters<br />// setting target.level will<br />// remove the target, then add it again.<br />target.level = level;<br /></pre>Still inefficient, but at least it is future proof. I hope that helps somebody, somewhere...Adam Creeger gets that search thing wrong...Microsoft are rather obviously obsessed with getting their search revenues up. That's no surprise - pretty much everyone knows about their [twice] failed Yahoo bid. Then there is their <a target="_blank" href="">cashback </a>offer, and now the announcement of a plan to <a target="_blank" href="">provide search facilities</a> (and therefore advertising revenue) on Facebook. It makes sense. Search is big business.<br /><br />It seems, however, that Steve Ballmer has forgotten why people actually use search engines. Apparently he thinks that:<br /><br /><blockquote>"advertisers don’t want to sell on Live Search unless there’s more people using the site, and people don’t want to search on the site unless there are more relevant ads."<br />Source: <a target="_blank" href="">CNN</a><br /></blockquote>Oh dear.<br /><br />Surely Ballmer cannot believe that Google's near domination of the global search market was down to relevant ads? Can he? That would make him <a target="_blank" href="">slightly crazy</a>...Adam Creeger while using mxmlc and antMy first blog post. Very practical too...<br /><br />Basically, I'm working on a fancy Adobe AIR app right now. As part of that, we are implementing an automated build process using CruiseControl and such. I'm using <a href="">ant</a>, and the <a href="">flexTasks</a> provided by Adobe. All was going pretty well, until this happened:<br /><pre class="code">[mxmlc] Loading configuration file C:\Program Files\Adobe\Flex3SDK\frameworks\air-config.xml<br />[mxmlc] Error: null<br />[mxmlc]<br />[mxmlc] java.lang.OutOfMemoryError<br /></pre><br />After a bit of googling, there was no <span style="font-style: italic;">obvious</span> solution. I did see mentions of setting the ANT_OPTS environment variable. So this is what I did (I'm running Windows Server 2003 btw...):<br /><br /><ol><li>Hit Windows-Break to open the "System Properties" dialog.</li><li>Go to the advanced tab.</li><li>Click "environment variables"</li><li>In the System variables section, click New.</li><li>Enter the variable name as <span style="font-family:courier new;">ANT_OPTS</span>, and the value as <span style="font-family:courier new;">-Xmx512m</span>.</li><li>Click "OK".</li></ol>If you were using a command prompt, you'll need to close it down and re-open it to use the new settings. Try running your ant script again, and your error should be gone.<br /><br />You can find a much nicer visualisation of that process <a href="">here</a>. If you are on a UNIX box, then you'll need to:<br /><pre class="code">set ANT_OPTS=-Xmx512M; export ANT_OPTS</pre>from the shell.<br /><br />If you want to know more, you should read about the <a href="">Java virtual machine's memory tuning options</a>. I couldn't find any documentation about the <span style="font-family: courier new;">ANT_OPTS</span> environment variable.Adam Creeger
http://feeds.feedburner.com/adamcreeger/thoughts
CC-MAIN-2019-13
en
refinedweb
Programming multicore microcontrollers Contents Introduction Microcontroller until now were mostly single cores based, but XMOS introduced a microcontroller where multiple processes can run at once. This can be a great advantage where time critical processes can be handled simultaneously. If you want to do that on a single core microcontroller, you would have to handle the two threads with interrupts or some other means of dividing the available processing power onto the several tasks. This project shows how multicore microcontrollers are programmed, and how several tasks are handled in an extension to the C language. It also shows how different tasks can communicate with each other, and how timed I/O can assist in achieving higher communication transfer speeds. XMOS microcontrollers XMOS is a silicon manufacturer who specialise in multicore microcontrollers. Each chip can contain 4 to 16 XCOREs, each capable of running a seperate task. They share on chip memory, and can be connected to external I/O. An XMOS controller can also be fitted with analog capabilities, or physical interfaces like an USB PHY. Also, the XCOREs inside the microcontroller is equipped with a dedicated multiplier, ideal for performing DSP tasks. The controller runs at a decent 400MHz-500MHz and is capable of doing 1000 MIPS. With these numbers, XMOS controllers easily outperform any Arduino board, and give twice the processing power of the fastest single core microcontrollers. XMOS controllers are programmed by a JTAG interface, and the IDE is Eclipse based. It accepts normal C/C++ programming but has an extension to the C language for handling multiprocessing instructions, XC. With these extensions parallel running tasks can be initiated and specialised I/O can be configured. I/O : ports and clocks I/O on an XMOS controller is regulated by ports. Each port is identified by its width, and can be multiplexed with other ports. The widths these ports come in are 32, 16, 8, 4 and 1 bit. Writing to or reading from a port means all the I/O pins are accessed at once. This is a big advantage over adressing individual pins in terms of speed. Also, a port can be configured with an internal or external clock, or a data valid signal. Buffered ports can be used to accumulate multiple I/O actions into one single bigger variable. In the name of the port, the port width is given, so is XS1_PORT_1D a 1 bit wide port, and XS1_PORT_8B means an 8 bit port. Because there is a limited number of physical pins on the microcontroller, more than one port can be multiplexed onto the same pin. By default, port mapped with a smaller width have priority over a larger width port assigment. A one bit wide port can be configured as a clock input or output, or accompany a data port as a data valid indicator. When used in this form, a clock block indicates a group of ports which work on the same clock cycle. Apart from these ports, pins can also be assign to a link. These link pins form a 5 bit bidirectional bus, and can be used for interconnection between multiple XMOS chips. This link bus is used for inter-process communication. This way more XCOREs can work together, even across a multi-chip system. Link buses always have priority over port mapped pins. In XC, you can easily specify to input from a port: in port input_port = XS1_PORT_8B; unsigned char variable; input_port :> variable; The microcontroller will wait on the input instruction until data becomes available. This is because all ports are triggered by a clock block. The input for a clock block can be an 1 bit wide input port or the (divided) processor clock. An example of how an external signal can be used as a clock. You will first have to assign a clock block to be triggered by a 1 bit input port. In this example, this is the clock_port port. Next, assign the input port to use the clock block as a clock source: in port clock_port = XS1_PORT_1A; clock input_clock = XS1_CLKBLK_1; configure_clk_src(input_clock, clock_port); configure_in_port(input_clock, input_port); Output data to a port is very similar In the following example, an 8 bit wide variable is used with a 4 bit wide output port. Only the 4 LSB's of the variable will be outputted: out port output_port = XS1_PORT_4C; unsigned char data_to_output = 0x0d; output_port <: data_to_output; A port can also be bidirectional. It acts like a normal output or input port. If you want the port to be tristated, just perform an input on the port. port bidir_port = XS1_PORT_1F; bidir_port <: data_to_output; // port is driven to output the LSB of the variable. bidir_port :> variable; // same port is now tristated on the next clock cycle, and the LSB of variable now contains the port value. Running parallel processes The XC language extension also has instructions to run tasks in parallel to each other. >> Insert info here << First project: Blinking leds The GPIO slice is connected to the square slot on the xCORE-USB slicekit. It contains 4 leds which are connected to XS1_PORT_4A on tile 1 of the processor. We can let those leds blink as a 4 bit counter: #include <platform.h> #include <xs1.h> #include <timer.h> on tile[1]: out port ledjes = XS1_PORT_4A; void task1(void){ unsigned char led_status = 0; while (1){ if(led_status < 16){ ledjes <: led_status; led_status++; delay_milliseconds(50); } else { led_status=0; } } } int main(){ par{ on tile[1]: task1(); } return 0; }
https://ackspace.nl/w/index.php?title=Programming_multicore_microcontrollers&oldid=6154
CC-MAIN-2019-13
en
refinedweb
I am sure that I have seen someone have a part of their prompt aligned to the right in their terminal window and then have the actual cursor start on a second line. I know that I can achieve the second line with a "\n" in the PS1, but I cannot figure out how to align part of it to the right. Was what I saw just whitespace added between the two strings? What you want can fairly easily be done by displaying the first line before displaying the prompt. For example, the following displays a prompt of \w on the left of the first line and a prompt of \u@\h on the right of the first line. It makes use of the $COLUMNS variable which contains the width of the terminal and the $PROMPT_COMMAND parameter which is evaluated before bash displays the prompt. print_pre_prompt () { PS1L=$PWD if [[ $PS1L/ = "$HOME"/* ]]; then PS1L=\~${PS1L#$HOME}; fi PS1R=$USER@$HOSTNAME printf "%s%$(($COLUMNS-${#PS1L}))s" "$PS1L" "$PS1R" } PROMPT_COMMAND=print_pre_prompt - 3 - 1Both this and the highest voted answer don't work correctly if .inputrchas set show-mode-in-prompt on. Both don't calculate the length of the non-prinable ANSI CSI codes, and don't properly enclose them in \[and \]as mentioned by @Mu Mind. See this answer for a resolution. – Tom Hale Apr 26 '17 at 10:58 Based on the information I found here I was able to discover a simpler solution to right align while accommodating variable length content on the right or left including support for colour. Added here for your convenience... Note on colours: using the \033 escape in favour of alternatives, without \[\] groupings, proves most compatible and therefor recommended. The trick is to write the right hand side first, then use carriage return ( \r) to return to start of line and continue to overwrite the left hand side content on top of that, as follows: prompt() { PS1=$(printf "%*s\r%s\n\$ " "$(tput cols)" 'right' 'left') } PROMPT_COMMAND=prompt I am using tput cols on Mac OS X to retrieve the terminal/console width from terminfo since my $COLUMNS var is not populated in env but you may substitute the replaceable " *" value in %*s, by providing " ${COLUMNS}", or any other value you prefer, instead. The next example uses $RANDOM to generate different length content includes colours and shows how you might extract functions to refactor the implementation to reusable functions. function prompt_right() { echo -e "\033[0;36m$(echo ${RANDOM})\033[0m" } function prompt_left() { echo -e "\033[0;35m${RANDOM}\033[0m" } function prompt() { compensate=11 PS1=$(printf "%*s\r%s\n\$ " "$(($(tput cols)+${compensate}))" "$(prompt_right)" "$(prompt_left)") } PROMPT_COMMAND=prompt Since printf assumes the length of string to be the # of characters we need to compensate for the amount of characters required to render the colours, you will find it always short of the end of screen because of the non printed ANSI characters without compensation. The characters required for colour remains constant and you will find that also printf takes into account the change in length, as returned by $RANDOM for example', which keeps our right alignment in tact. This is not the case with special bash prompt escape sequences (ie. \u, \w, \h, \t) though, as these will only record a length of 2 because bash will only translate them when the prompt is displayed, after printf has rendered the string. This does not affect the left hand side but best to avoid them on the right. Of no consequence if the generated content will remain at constant length though. Like with the time \t option which will always render the same amount of characters (8) for 24 time. We only need to factor in the compensation required to accommodate for the difference between 2 characters counted which results to 8 characters when printed, in these cases. Keep in mind that you may need to triple escape \\\ some escape sequences which otherwise hold meaning to strings. As with the following example the current working directory escape \w holds no meaning otherwise so it works as expected but the time \t, which means a tab character, does not work as expected without triple escaping it first. function prompt_right() { echo -e "\033[0;36m\\\t\033[0m" } function prompt_left() { echo -e "\033[0;35m\w\033[0m" } function prompt() { compensate=5 PS1=$(printf "%*s\r%s\n\$ " "$(($(tput cols)+${compensate}))" "$(prompt_right)" "$(prompt_left)") } PROMPT_COMMAND=prompt nJoy! Using printf with $COLUMNS worked really well, something like: printf "%${COLUMNS}s\n" "hello" It right justified it perfectly for me. The following will put the current date and time in RED on the RHS of the terminal. # Create a string like: "[ Apr 25 16:06 ]" with time in RED. printf -v PS1RHS "\e[0m[ \e[0;1;31m%(%b %d %H:%M)T \e[0m]" -1 # -1 is current time # Strip ANSI commands before counting length # From: PS1RHS_stripped=$(sed "s,\x1B\[[0-9;]*[a-zA-Z],,g" <<<"$PS1RHS") # Reference: local Save='\e[s' # Save cursor position local Rest='\e[u' # Restore cursor to save point # Save cursor position, jump to right hand edge, then go left N columns where # N is the length of the printable RHS string. Print the RHS string, then # return to the saved position and print the LHS prompt. # Note: "\[" and "\]" are used so that bash can calculate the number of # printed characters so that the prompt doesn't do strange things when # editing the entered text. PS1="\[${Save}\e[${COLUMNS:-$(tput cols)}C\e[${#PS1RHS_stripped}D${PS1RHS}${Rest}\]${PS1}" Advantages: - Works correctly with colours and ANSI CSI codes in the RHS prompt - No subprocesses. shellcheckclean. - Works correctly if .inputrchas set show-mode-in-prompt on. - Correctly encapsulates the non-prompt-length-giving characters in \[and \]so that editing text entered at the prompt doesn't cause the prompt to reprint strangely. Note: You'll need to ensure that any colour sequences in the $PS1 before this code is exeucted are properly enclosed in \[ and \] and that there is no nesting of them. - while i do like this approach in theory, in practice it doesn't work out of the box (ubuntu 18.04, GNU bash 4.4.19): appending the code directly into .bashrc first gives the error bash: local: can only be used in a function, which is trivial to fix, and after that, it doesn't show anything because COLUMNSis not defined: it has to be substituted with $(tput cols). same outcome if the snippet is saved on a different file, and then sourced into .bashrc. – Polentino Sep 3 '18 at 17:56 - 1 I just thought I would throw mine in here. It's almost exactly the same as the GRML zsh prompt (except zsh updates it's prompt a little better on new lines and back spaces - which is impossible to replicate in bash ... well very difficult at this point in time, at least). I spent a good three days on this (only tested on a laptop running arch), so here's a screenshot and then the stuff that goes in my ~/.bashrc :) warning - it's a little crazy important aside - every ^[ (such as ^[[34m) is really the escape character (char)27. The only way I know how to insert this is to enter ctrl+([v) (i.e. hit both [ and v while ctrl is held down. # grml battery? GRML_DISPLAY_BATTERY=1 # battery dir if [ -d /sys/class/power_supply/BAT0 ]; then _PS1_bat_dir='BAT0'; else _PS1_bat_dir='BAT1'; fi # ps1 return and battery _PS1_ret(){ # should be at beg of line (otherwise more complex stuff needed) RET=$?; # battery if [[ "$GRML_DISPLAY_BATTERY" == "1" ]]; then if [ -d /sys/class/power_supply/$_PS1_bat_dir ]; then # linux STATUS="$( cat /sys/class/power_supply/$_PS1_bat_dir/status )"; if [ "$STATUS" = "Discharging" ]; then bat=$( printf ' v%d%%' "$( cat /sys/class/power_supply/$_PS1_bat_dir/capacity )" ); elif [ "$STATUS" = "Charging" ]; then bat=$( printf ' ^%d%%' "$( cat /sys/class/power_supply/$_PS1_bat_dir/capacity )" ); elif [ "$STATUS" = "Full" ] || [ "$STATUS" = "Unknown" ] && [ "$(cat /sys/class/power_supply/$_PS1_bat_dir/capacity)" -gt "98" ]; then bat=$( printf ' =%d%%' "$( cat /sys/class/power_supply/$_PS1_bat_dir/capacity )" ); else bat=$( printf ' ?%d%%' "$( cat /sys/class/power_supply/$_PS1_bat_dir/capacity )" ); fi; fi fi if [[ "$RET" -ne "0" ]]; then printf '\001%*s%s\r%s\002%s ' "$(tput cols)" ":( $bat " "^[[0;31;1m" "$RET" else printf '\001%*s%s\r\002' "$(tput cols)" "$bat " fi; } _HAS_GIT=$( type 'git' &> /dev/null ); # ps1 git branch _PS1_git(){ if ! $_HAS_GIT; then return 1; fi; if [ ! "$( git rev-parse --is-inside-git-dir 2> /dev/null )" ]; then return 2; fi branch="$( git symbolic-ref --short -q HEAD 2> /dev/null )" if [ "$branch" ]; then printf ' \001%s\002(\001%s\002git\001%s\002)\001%s\002-\001%s\002[\001%s\002%s\001%s\002]\001%s\002' "^[[0;35m" "^[[39m" "^[[35m" "^[[39m" "^[[35m" "^[[32m" "${branch}" "^[[35m" "^[[39m" fi; } # grml PS1 string PS1="\n\[\e[F\e[0m\]\$(_PS1_ret)\[\e[34;1m\]${debian_chroot:+($debian_chroot)}\u\[\e[0m\]@\h \[\e[01m\]\w\$(_PS1_git) \[\e[0m\]% " I'm still working on making the colors configurable, but I am happy with the colors as they are now. Currently working on a fix for the crazy ^[ character and easy color switching :) - It's not Ctrl + [ and v simultaneously, it's Ctrl + v followed by Ctrl + [. – NieDzejkob Feb 25 '18 at 13:45 You can use printf to do right alignment: $ printf "%10s\n" "hello" hello $ PS1='$(printf "%10s" "$somevar")\w\$ ' Adding on Giles' answer, I wrote something to handle colors better (provided they're properly enclosed in \[ and \]. It's case-by-case and doesn't handle every case, but it lets me set my PS1L in the same syntax as PS1 and uses the (uncolored) date as PS1R. function title { case "$TERM" in xterm*|rxvt*) echo -en "\033]2;$1\007" ;; *) ;; esac } print_pre_prompt() { PS1R=$(date) PS1L_exp="${PS1L//\\u/$USER}" PS1L_exp="${PS1L_exp//\\h/$HOSTNAME}" SHORT_PWD=${PWD/$HOME/~} PS1L_exp="${PS1L_exp//\\w/$SHORT_PWD}" PS1L_clean="$(sed -r 's:\\\[([^\\]|\\[^]])*\\\]::g' <<<$PS1L_exp)" PS1L_exp=${PS1L_exp//\\\[/} PS1L_exp=${PS1L_exp//\\\]/} PS1L_exp=$(eval echo '"'$PS1L_exp'"') PS1L_clean=$(eval echo -e $PS1L_clean) title $PS1L_clean printf "%b%$(($COLUMNS-${#PS1L_clean}))b\n" "$PS1L_exp" "$PS1R" } Here it is on github: dbarnett/dotfiles/right_prompt.sh. I use it in my .bashrc like this: source $HOME/dotfiles/right_prompt.sh PS1L='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]' PS1='\[\033[01;34m\]\w\[\033[00m\]\$ ' PROMPT_COMMAND=print_pre_prompt Note: I also added a newline after PS1R, which makes no visual difference, but seems to keep the prompt from getting garbled if you scroll back through certain commands in your command history. I'm sure someone else can improve on this, and maybe generalize some of the special-case-iness. Here is a solution based on PROMPT_COMMAND and tput: function __prompt_command() { local EXIT="$?" # This needs to be first history -a local COL=$(expr `tput cols` - 8) PS1="💻 \[$(tput setaf 196)\][\[$(tput setaf 21)\]\W\[$(tput setaf 196)\]]\[$(tput setaf 190)\]" local DATE=$(date "+%H:%M:%S") if [ $EXIT != 0 ]; then PS1+="\[$(tput setaf 196)\]\$" # Add red if exit code non 0 tput sc;tput cuu1; tput cuf $COL;echo "$(tput setaf 196)$DATE"; tput rc else PS1+="\[$(tput setaf 118)\]\$" tput sc;tput cuu1; tput cuf $COL;echo "$(tput setaf 118)$DATE"; tput rc fi PS1+="\[$(tput setaf 255)\] " } PROMPT_COMMAND="__prompt_command" The magic is performed by: tput sc;tput cuu1; tput cuf $COL;echo "$(tput setaf 196)$DATE"; tput rc Which is broken down by: tput sc # saved the cursor position tput cuu1 # up one line tput cuf $COL # move $COL characters left echo "$(tput setaf 196)$DATE" # set the colour and print the date tput rc # restore the cursor position In PS1, tput is escaped with \[ \] so that it is not counted in displayed length.
https://superuser.com/questions/187455/right-align-part-of-prompt/1203400
CC-MAIN-2020-05
en
refinedweb
Provided by: libacl1-dev_2.2.53-4_amd64 NAME acl_get_entry — get an ACL entry LIBRARY Linux Access Control Lists library (libacl, -lacl). SYNOPSIS #include <sys/types.h> #include <sys/acl.h> int acl_get_entry(acl_t acl, int entry_id, acl_entry_t *entry_p); DESCRIPTION descriptors that refer to entries within the ACL continue to refer to those entries. Any existing ACL pointers that refer to the ACL referred to by acl continue to refer to the ACL. RETURN VALUE. ERRORS. STANDARDS IEEE Std 1003.1e draft 17 (“POSIX.1e”, abandoned) SEE ALSO acl_calc_mask(3), acl_create_entry(3), acl_copy_entry(3), acl_delete_entry(3), acl_get>.
http://manpages.ubuntu.com/manpages/disco/man3/acl_get_entry.3.html
CC-MAIN-2020-05
en
refinedweb
We're at the point we need to do something with the Story we're displaying. Or the job. Either one. My thoughts on this is to have two actions. A tap and a swipe. The swipe is going to require a change in the list itself (I think). A tap is pretty easy, we'll do that first. Part of this task is going to be to build a new wrapper for an OS component. The one I'll focus on here is the Toast. I suspect I'll wrap it just like I've wrapped the AlertDialog. Though this time I'll have to TDD it into existence, so it won't QUITE be as extensive from the start, if anytime soon. :) Such is the cost of other's not TDDing thier code. FOR SHAME! :-P My expectation here is to use toasts to indicate that, yes, the tap is working. Then figure out how to refactor to a swipe. It feels like a super silly small step. I know that it's not. But to not just slap something "so simple" into place and try it a few times... It's still fighting 15 years of habit. Unfortunately; I have this idea of what I'm aiming to do... and I just don't feel it right now. I need a windows laptop that doesn't suck so I can work on the UWP app. That's currently hooked my thoughts and desires. So... Until I'm thinking android... time passes And down time at the gymnastics studio has gotten a smidge of android written! The big this is getting a toast to show up when an item is tapped. It uses the title and puts up a toast. The Toast isn't wrapped. And technically... I could get away with how it is... but that creates a few lines of uncovered code. I need to get a test in place that actually calls and verifies the behavior in the onClickListener. This will come soon; probably with longer nights at the gymnastics studio. I want to get a wrapper around the Toast to be able to unit test it. This seems like a great place to be able to start that! ... There's a test in here that needs massive refactoring... time passes And I wish I'd made mention of what test required refactoring... Oh Well... I'm sure I'll find it again... Toast has some difficulties being wrapped. The use of static methods to instantiate makes wrapping that behavior... mostly not doable. Not "impossible", but I'm avoiding tools like PowerMock to rewrite test code for test. My current plan is to just bounce out if it's in a test environment. I expect some grief over this plan; "Test code in production code?!?!?" ... I also probably mention this every time I'm making on of these posts. Having hooks into production just to support testing, I agree is bad; but I'm more opposed to tramp data. I'm noticing the construction of my wrapper is heavily slanted towards how I use a toast; which; honestly - Yes. I'm not looking to build a perfect wrapper class.... heh... WELL... not perfect but that's for a later challenge. Right now, I'm looking to be able to TDD code that will use a toast. This wrapper can, and if you're doing it for your own code - should, only involve the functionality you need. There's no reason to build a general purpose wrapper; it'll make it more complex than it needs to be. Evolve it as your needs evolve. ... Think about that while I go delete the Toast API I just put into my Toaster class. OH... That test... Yes. That needs refactoring. For future me; it's the onBindViewHolderShouldSet* tests. They're kinda big... and repetitive. Now I'm going to, more or less, C&P the giant test that needs massive refactoring to be able to test that my Toaster class works as a wrapper around the toast. Here's what I currently have as a wrapper for the toast public class Toaster { private static Toast replacementToast; private Toast currentToast; @VisibleForTesting(otherwise = VisibleForTesting.NONE) /* package */ static void setReplacementToast(final Toast toast){ replacementToast = toast; } public Toaster makeToast(final Context context, final CharSequence text, final int duration) { currentToast = replacementToast == null ? new Toast(context) : replacementToast; currentToast.setText(text); currentToast.setDuration(duration); return this; } public void show() { currentToast.show(); } } It's very small as I'm only implementing what I need. The tests that drove this are equally compact public class ToasterTests extends QacTestClass { @Mock Context mockContext; @Test public void makeToastSetsValues(){ final Toast mockToast = Mockito.mock(Toast.class); Toaster.setReplacementToast(mockToast); final Toaster toast = new Toaster(); toast.makeToast(mockContext, "SomeText", Toast.LENGTH_LONG); toast.show(); Mockito.verify(mockToast).setText("SomeText"); Mockito.verify(mockToast).setDuration(Toast.LENGTH_LONG); } @Test public void showShows(){ final Toast mockToast = Mockito.mock(Toast.class); Toaster.setReplacementToast(mockToast); final Toaster toast = new Toaster(); toast.makeToast(mockContext, "SomeText", Toast.LENGTH_LONG); toast.show(); Mockito.verify(mockToast).show(); } } The core of the change to the test to validate the Toast functionality is in the previous assert section. This test's "Arrange" is longer. topItemsAdapter.onBindViewHolder(viewHolder, position); Toast mockToast = Mockito.mock(Toast.class); Toaster.setReplacementToast(mockToast); assertThat(containerCaptor.getValue()).isNotNull(); Mockito.when(mockView.getContext()).thenReturn(Mockito.mock(Context.class)); //Act containerCaptor.getValue().onClick(mockView); //Assert Mockito.verify(mockToast).show(); Summary I'm going to end this one here. I've gotten some interactivity in place; and I've created a new OS Dependency Wrapper. Not as full featured, I feel, as the AlertDialog. I may end up rewriting that one to be TDD'd and follow a different pattern. It works for now; and I think it's pretty nice; but I also think it's trying to be a bit too dynamic. I could drop the generic aspect, I think. Anyway - A tiny bit more functionality; but a whole new useful tool decoupling code from the underlying OS - That should come in useful later.
https://quinngil.com/2017/07/30/android-hacker-news-ineractive-item/
CC-MAIN-2020-05
en
refinedweb
Error on empty date cell Excel allows setting the number format of an empty cell. If the cell is set to the date format, the value is None, and the attempt to read the value raises a Type Error, as it can't add the calendar offset to None. Not sure I understand the from_excel() function in date_time.init fully - would it be an acceptable patch just to return None in this case if the value is None? from openpyxl import load_workbook wb = load_workbook("test_date.xlsx") ws = wb.active ws.cell('A1').value # datetime.datetime(2014, 1, 31, 0, 0) ws.cell('A2').value ## Returns Error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "openpyxl/cell/cell.py", line 347, in value value = from_excel(value, self.base_date) File "openpyxl/compat/functools.py", line 122, in wrapper result = user_function(*args, **kwds) File "openpyxl/date_time/__init__.py", line 54, in from_excel parts = list(jd2gcal(MJD_0, value + offset - MJD_0)) TypeError: unsupported operand type(s) for +: 'NoneType' and 'float' I guess it's important to catch the exception. I am slightly worried about applying a format to Noneespecially in the light of today's discussion on empty cells. But as Excel insists on treating dates as integer (there is actually a cell-type dfor datetimes, it's just that nobody seems to use it). Update docs, resolves #380 → <<cset 90c5ce8e271d>> Removing version: 2.1.x (automated comment)
https://bitbucket.org/openpyxl/openpyxl/issues/380/error-on-empty-date-cell
CC-MAIN-2020-05
en
refinedweb
Intro These days i have been working on my library, mainly on improve log. But in adition i had work on integration with azure and and serverless functions. This will be a short post, because microsoft tutorials are really good and library integration has no breaking changes. So here is a how-to use both. Local server with Azure signalr Here you can read a guide of how to create a signalr resource on azure. On this option you only need to stablish a connection with our local server. It will log througth azure and return a url. With that url we can connect to azure signalr. This will occur automatically, user has nothing to do. local server and library do all work. Implementation is like a normal chat. This code can be checked here import logging from signalrcore.hub_connection_builder import HubConnectionBuilder # Create custom handler handler = logging.StreamHandler() handler.setLevel(logging.DEBUG) formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') handler.setFormatter(formatter) # build connection ... hub_connection = HubConnectionBuilder() \ .with_url(server_url, options={ "verify_ssl": False, "headers": { } }) \ .configure_logging(logging.DEBUG, socket_trace=True, handler=handler) \ .with_automatic_reconnect({ "type": "interval", "keep_alive_interval": 10, "intervals": [1, 3, 5, 6, 7, 87, 3] }).build() Azure functions Other implementation choice throught azure is a serverles function. With this Tutorial you can implement. Tutorial has also a tiny web code in html/javascript, with this you can verify that your serverless function is working properly. Client is a bit different, messages are sended with an http post instead of of sending it through the socket. import logging import sys import requests from signalrcore.hub_connection_builder import HubConnectionBuilder def input_with_default(input_text, default_value): value = input(input_text.format(default_value)) return default_value if value is None or value.strip() == "" else value server_url = input_with_default( 'Enter your server url(default: {0}): ', "localhost:7071/api") username = input_with_default('Enter your username (default: {0}): ', "mandrewcito") handler = logging.StreamHandler() handler.setLevel(logging.DEBUG) formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') handler.setFormatter(formatter) hub_connection = HubConnectionBuilder() \ .with_url("ws://"+server_url, options={ "verify_ssl": False, "access_token_factory": lambda: "", "headers": { } }) \ .configure_logging(logging.DEBUG, socket_trace=True, handler=handler) \ .build() hub_connection.on_open(lambda: print("connection opened and handshake received ready to send messages")) hub_connection.on_close(lambda: print("connection closed")) hub_connection.on("newMessage", print) hub_connection.start() message = None # Do login while message != "exit()": message = input(">> ") if message is not None and message is not "" and message is not "exit()": # hub_connection.send("sendMessage", [username, message]) requests.post("", json={"sender": username, "text": message}) hub_connection.stop() sys.exit(0) this changes will be realsed on the new version of library 0.8.4 Links Thank you for reading, and write any thought below :D Discussion (1) Is it possible to connect a locally running Python script to an Azure SignalR instance so that the Python script is just behaving as a client? If I use the Azure SignalR connection string as a server_urlvalue, it causes errors in the requestsmodule. Azure SignalR connection strings are in the following structure: Endpoint=;AccessKey=yyy=;Version=1.0;
https://practicaldev-herokuapp-com.global.ssl.fastly.net/mandrewcito/signalr-core-python-client-vii-azure-18ma
CC-MAIN-2021-31
en
refinedweb
I am trying to loop over a set of graphs as shown in the snippet from the main script below: import pickle for timeOfDay in range(1440): # total number of minutes in a day=1440 with open("G_" + timeOfDay +'.pickle', 'rb') as handle: G = pickle.load(handle) ## do something with G I could load all the Graphs G_1.pickle to G_1440.pickle in the RAM but it exhausts my RAM. So, I tried to create a dictionary G_RAM which has 1440 keys with G_RAM[i] = the graph loaded from G_{i}.pickle. But it runs into a whole lot of issues, sometimes getting stuck sometimes not deleting the older graphs at the right time. I would like to know if there is a standard way to do this without having to re-invent the wheel. Any help is appreciated! Thanks in advance! Now, I have a background thread which does the following: - reads the global value of timeOfDayvariable - uses Nprocesses to populate the G_RAMfor keys timeOfDayto timeOfDay + Nand deletes all keys G_RAM[i] for i < timeOfDay Please note that the loop above cannot be parallelized because the graph at G_i.pickle must be processed before G_{i+1}.pickle My attempt is reproduced below: import multiprocessing import pickle import gc HORIZON_PARALLEL_LOADING_OF_GRAPHS = 10 THREADS_PARALLEL_LOADING_OF_GRAPHS = 5 G_RAM = {} ##### GLOBAL VARIABLES # global variable to keep track of which timeOfDays are already being worked at by multiple process # Simply checking for the keys in the G_RAM is not enough because it is possible that some process is # working to populate it already but it is not complete, hence we keep track of which timeOfDays are already # being worked at already_being_worked_at = [] timeOfDay = 0 # Inspired by from def background(f): ''' a threading decorator use @background above the function you want to run in the background ''' def backgrnd_func(*a, **kw): threading.Thread(target=f, args=a, kwargs=kw).start() return backgrnd_func def mp_worker(key): # load graph at time "key" with open('G' + str(key) + '.pickle', 'rb') as handle: read_data = pickle.load(handle) return (key, read_data) def mp_handler(data): p = multiprocessing.Pool(THREADS_PARALLEL_LOADING_OF_GRAPHS) res = p.map(mp_worker, data) # populating the global dict here global G_RAM for k_v in res: G_RAM[k_v[0]] = k_v[1] p.close() p.join() @background def updateDict(): # This will print the count for every second # G_RAM is a global variable consisting of the graphs at different times of day global timeOfDay time.sleep(0.01) gd_keys = list(G_RAM.keys()) # remove the used graphs (before time timeOfDay) for key in gd_keys: if key < timeOfDay or (key > min(timeOfDay + HORIZON_PARALLEL_LOADING_OF_GRAPHS, 1440)): del G_RAM[key] gc.collect() # check which keys (upto t+HORIZON_PARALLEL_LOADING_OF_GRAPHS) are missing from the G_RAM data = [] for key in range(t, min(t + HORIZON_PARALLEL_LOADING_OF_GRAPHS, 288)): if key not in G_RAM: data.append(key) # removing already processed t to avoid creating duplicate processes global already_being_worked_at # we want to call the worker thread again only for the timeOfDays which are not being worked at by worker processes temp = [] for key in data: if key not in already_being_worked_at: temp.append(key) data = temp # if the missing timeOfDays are already being worked at by the processes, we have no action item # otherwise, we call the worker threads with the new timeOfDays which we need but are not being # worked at if len(data) > 0: mp_handler(data) # we update our list of already_being_worked_at already_being_worked_at = already_being_worked_at + data ### main script follows updateDict() for dayNumber in range(100): already_being_worked_at = [] # reset the list of populated keys at the start of the new day for timeOfDay in range(1440): # total number of minutes in a day=1440 with open("G_" + timeOfDay +'.pickle', 'rb') as handle: G = pickle.load(handle) ## do something with G Source: Python-3x Questions
https://askpythonquestions.com/2021/05/18/how-to-create-a-data-loader-which-populates-a-dictionary-in-background/
CC-MAIN-2021-31
en
refinedweb
Originally posted on dev. When we start building a Python project that goes beyond simple scripts, we tend to start using third-party dependencies. When working on a larger project, we need to think about managing these dependencies in an efficient manner. And when installing dependencies, we always want to be inside virtual environments. It helps keep things nice and clean. It also helps avoid messing up our Python environment. Why do we need Python Virtual Environments? We can use Pip to install packages to our Python project. And it is common to have multiple packages installed in a single Python project. This can lead to some issues regarding the versions of the packages installed and their dependencies. When we use pip install in a project, we are installing the package and its dependencies in the global Python namespace. And this will install the package for the specific python version that we have configured Python for. We can find out where this directory is by using $ python3.7 -c "import sys; print('\n'.join(sys.path))" /usr/lib/python27.zip /usr/lib/python2.7 /usr/lib/python2.7/lib-dynload /usr/lib/python2.7/site-packages And if install the same package using pip3 install , it will be installed in a separate directory with the Python 3 version. We can overcome this by using the following command: python2.7 -m pip install <package name> This still does not solve our problem of packages being installed system-wide. It can lead to the following problems: - Different projects having different versions of the same package will conflict with one another - A project’s dependencies can conflict with system-level dependencies which can break the system altogether - Multi-user projects is not a possibility - Testing code against different python and library versions is a challenging task To avoid those problems, Python developers use Virtual Environments. These virtual environments make use of isolated contexts (directories) for installing packages and dependencies. Creating virtual environments We need a tool to make use of Python virtual environments. The tool used to make these is known as venv. It is built into the standard Python library for Python 3.3+. If we were using python 2, we would have had to install it manually. This is one of the few packages that we do want to install globally. python2 -m pip install virtualenv Note: We will talk more about venv in this post and Python 3 since there are a few differences between it and virtualenv. The commands are a bit different and the tools work differently under the hood. We will start by making a new directory wherein we want to work with our project. $ mkdir my-python-project && cd my-python-project Then we will create a new virtual environment: $ python3 -m venv virtualenv # creates a virtual environment called virtualenv, the name can be anything we want This will create a directory called virtualenv in the directory that we just created. The directory will contain a bin folder, a lib folder, an include folder, and an environment configuration file. All these files ensure that all Python code gets executed within the context of the current environment. This helps achieve isolation from global environments and avoid the problems we discussed earlier. In order to start using this environment, we need to activate it. Doing so will also change our command prompt to the current context. $ source env/bin/activate (virtualenv) $ The prompt is also an indicator that the virtual environment is active and python code executes under that environment. Inside our environment, system-wide packages are not accessible and any packages installed inside the environment are not available outside. Only pip and setuptools are installed by default in a virtual environment. After activating an environment, the path variable gets modified to achieve the concept of virtual environments. When we are done and want to switch to the global environment, we can exit using the deactivate command. (virtualenv) $ deactivate $ Managing dependencies across environments Now that we have our virtual environments setup, we do not want to keep sharing the packages that can be installed using pip. We want to exclude our virtual environment folder, and be able to reproduce our work on a different system. We can do this by making use of a requirements file in the root directory of our project. Let us assume we installed Flask in our virtual environment. After that, if we run pip freeze, which will list the packages that we have installed and their version numbers. (virtualenv) $ pip freeze Flask==1.1.2 We can write this to a requirements.txt file to upload to git, or share with other people in any other form. (virtualenv) $ pip freeze > requirements.txt This command can be used to update the file too. And then, whenever someone wants to run our project on their computer, all they need to do is: $ cd copied-project/ $ python3 -m venv virtualenv/ $ python3 -m pip install -r requirements.txt And everything will work as it was on our system. Now we can manage python virtual environments and thus manage dependencies and packages as needed. If you have any questions regarding this, feel free to drop a comment below. Originally published at on February 23, 2021.
https://learningactors.com/managing-python-dependencies-using-virtual-environments/
CC-MAIN-2021-31
en
refinedweb
Writing and Using Higher-Order and Anonymous Functions in Scala The hands-on lab is part of this learning path Ready for the real environment experience? Description When you use the functional programming paradigm, the function is the core component you need to focus on and use. It can be used as a value, so you can set a value for it (defining the body of a function), and you can use it as a function's argument, and also as a return value. A function that takes a function as an argument or returns it as a value is called a higher-order function. Most of the time, the functions you need to pass as an argument to another function are not shared and used by other components. In this situation, creating the functions could pollute the environment namespace (you have a lot of functions that are only used by a single entity or a single time). To avoid this scenario, you can pass the body of a function to another function without defining the function you need to pass as an argument; you simply pass the value. This kind of function is called anonymous function (you could hear about it also as a lambda function). That's because it doesn't have a formal definition, and can't be referred to. In this lab, you will start using higher-order functions and anonymous, with a focus on the three most used higher-order functions in functional programming: map, filter, and reduce. Learning Objectives Upon completion of this beginner level lab, you will be able to: - Understand higher-order functions and anonymous functions - Start using higher-order functions and anonymous functions to better solve problems Intended Audience This lab is intended for: - Software engineers focusing on the functional programming paradigm - Data engineers need to follow a clear way to handle and manipulate data Prerequisites To get the most out of this lab, you should have basic knowledge of Scala. To achieve this, the following labs are suggested:
https://cloudacademy.com/lab/writing-using-higher-order-anonymous-functions-scala/
CC-MAIN-2021-31
en
refinedweb
cfree — free allocated memory #include <stdlib.h> /* In SunOS 4 */ /* In glibc or FreeBSD libcompat */ /* In SCO OpenServer */ /* In Solaris watchmalloc.so.1 */ This function should never be used. Use free(3) instead. In glibc, the function cfree() is a synonym for free(3), "added for compatibility with SunOS". Other systems have other functions with this name. The declaration is sometimes in < stdlib.h > and sometimes in < malloc.h > Some SCO and Solaris versions have malloc libraries with a 3-argument cfree(), apparently as an analog to calloc. The 3-argument version of cfree() as used by SCO conforms to the iBCSe2 standard: Intel386 Binary Compatibility Specification, Edition 2.
https://man.linuxexplore.com/htmlman3/cfree.3.html
CC-MAIN-2021-31
en
refinedweb
Introduction In this article I will show you how you can build an application using Flutter with Supabase as a backend. Supabase is a Firebase alternative which uses Postgres, so that's something different and that's even amazing. Postgres is also a relational database which is also open source and a powerful tool. We will learn and build a simple grocery application using Flutter with Supabase. I am not going to go over setting up Flutter step by step since how I built it uses the Stacked architecture which follows an MVVM style so I will just show you how to write Supabase code in Dart with-in a Flutter application instead. You can learn more about Stacked architecture at FilledStacks :D My repository for SupaGrocery will be shared at the end of this tutorial. So you can go ahead and download it. Demo Database Design But before we start everything else, we'll take a look at our database design. See attached image below. For our simple grocery app, we'll only require 4 of these tables. app_users: This is the table where we will store our users, it will have the same primary ID with the supabase auth users. I was not able to use just the userstable since it cannot be read publicly so I had to create this table. groceries: All the grocery list of each user will be stored in this table. products: All of the products created by the user will be stored in this table. grocery_products: This is where we sort of link the products with a grocery. This is what we call a pivot table. Relationships In relation databases, table relationships are very common thing and is what I love the most about in relational databases. These two are the most common relationships: - One to One - One to Many - Many to Many (Pivot table) Our app_users table has a One to Many relationship with the two tables we created namely products and groceries since a user can have many grocery listing and can also have many products in that grocery listing. Then for our groceries table we have the created_by column as a foreign key so that will link to the app_users table which will then identify it as part of the user's grocery listing in our application. The same goes for products table with the created_by column as a foreign key as well. Then for our pivot table which is a Many to Many relationship, because a grocery listing can have many products and a product can belong to many grocery listing. Supabase setup Create your first Supabase account! Head over to that is their official website. Should take you to this wonderful dark themed site :D Now go ahead and click that button "Start your project" It will show you this auth0 page, so just continue with GitHub to get you registered in no time! Then just sign in with your GitHub credentials. Once you are done with creating your first account, you might already be in your dashboard which will have a listing of all your projects created in Supabase. Now click on "New Project" and select any organization as you wish. I'll just select "Personal" which I modified. When taken to this page, just fill in the following fields: Name: "Grocery App" Database Password: "s0m3Str0ng_PassWord!@#" (You should use your own password) Region: (Select anything that is near you) When that is done click on "Create new project"! It will then redirect you to this page. It will take a few minutes, so please wait :) Creating Tables When the Supabase is setup and you have created a new project. It shall take you up into this page. Now let's click on "Create a new table" We'll put up all the details on what we have from our database design so this setup should be pretty quick. What I would suggest is to uncheck "Include primary key" and just add a primary key later on when the table is created. There is some sort of bug which I cannot have a default value for the uuid key to just generate a uuid when a new record is created. Then just click on "Save" at the upper right corner to finally create the table. When that table is created, we can proceed to add our primary key which is a uuid. Click ahead on that plus icon to add a new column for the table. Then name the column as id and it will be a primary key and a type of uuid then have the default value "Automatically generate UUID" and click "Save" once that is done. Once that is done, we can proceed to create more of those columns that we defined from our database design. Next is we will create a table for products and we'll have a foreign key setup with this table since a product belongs to a user. So we'll learn how to do that quickly. So given that you already created a primary key id and its corresponding column name as a varchar, let's create one last field which is created_by and setup this as a foreign key that links up with the app_users table. Now click on "Add foreign key relation" button at the bottom Then select the table app_users and the id field, when that is done click "Save" Should then show you it is now linked up with the app_users table, so that is pretty amazing. That is all you need to know for setting up foreign keys. Now the rest of the tables is up to you now. You got this! Flutter Datamodels We will be setting up our data models using freezed package with json_serializable and make sure to have a builder_runner setup in your project. The following is our application datamodels import 'package:freezed_annotation/freezed_annotation.dart'; part 'application_models.freezed.dart'; part 'application_models.g.dart'; @freezed class AppUser with _$AppUser { const factory AppUser({ required String id, required String name, required String email, }) = _AppUser; factory AppUser.fromJson(Map<String, dynamic> json) => _$AppUserFromJson(json); } @freezed class Grocery with _$Grocery { const Grocery._(); const factory Grocery({ required String id, required String name, @JsonKey(name: 'created_by') required String createdBy, @Default([]) @JsonKey( name: 'grocery_products', fromJson: Grocery._productsFromJson, toJson: Grocery._productsToJson, ) List<GroceryProduct>? groceryProducts, }) = _Grocery; bool get hasGroceryProducts => groceryProducts!.length > 0; List<Product?>? get products { if (!hasGroceryProducts) return []; return groceryProducts!.map((e) => e.product).toList(); } factory Grocery.fromJson(Map<String, dynamic> json) => _$GroceryFromJson(json); static List<GroceryProduct>? _productsFromJson(List<dynamic>? list) { if (list == null) { return []; } return list.map((e) => GroceryProduct.fromJson(e)).toList(); } static List<Map<String, dynamic>>? _productsToJson( List<GroceryProduct>? list) { if (list == null) { return []; } return list.map((e) => e.toJson()).toList(); } } @freezed class GroceryDto with _$GroceryDto { const factory GroceryDto({ required String name, @JsonKey(name: 'created_by') required String createdBy, }) = _GroceryDto; factory GroceryDto.fromJson(Map<String, dynamic> json) => _$GroceryDtoFromJson(json); } @freezed class Product with _$Product { const factory Product({ required String id, required String name, @JsonKey(name: 'created_by') required String createdBy, }) = _Product; factory Product.fromJson(Map<String, dynamic> json) => _$ProductFromJson(json); } @freezed class ProductDto with _$ProductDto { const factory ProductDto({ required String name, @JsonKey(name: 'created_by') required String createdBy, }) = _ProductDto; factory ProductDto.fromJson(Map<String, dynamic> json) => _$ProductDtoFromJson(json); } @freezed class GroceryProduct with _$GroceryProduct { const factory GroceryProduct({ required String id, @JsonKey(name: 'grocery_id') required String groceryId, @JsonKey(name: 'product_id') required String productId, required int quantity, @JsonKey(name: 'products') Product? product, @Default('') String? unit, }) = _GroceryProduct; factory GroceryProduct.fromJson(Map<String, dynamic> json) => _$GroceryProductFromJson(json); } @freezed class GroceryProductDto with _$GroceryProductDto { const factory GroceryProductDto({ @JsonKey(name: 'grocery_id') required String groceryId, @JsonKey(name: 'product_id') required String productId, @Default(1) int quantity, String? unit, }) = _GroceryProductDto; factory GroceryProductDto.fromJson(Map<String, dynamic> json) => _$GroceryProductDtoFromJson(json); } @freezed class AuthDto with _$AuthDto { const factory AuthDto({ required String email, required String password, String? name, }) = _AuthDto; factory AuthDto.fromJson(Map<String, dynamic> json) => _$AuthDtoFromJson(json); } The code above will generate us the following files We don't have to write everything we just let it auto generate using build_runner To break it down for you regarding our data models, we see we have our primary tables for our grocery application - AppUser - Grocery - Product - GroceryProduct DTOs - GroceryDto - ProductDto - GroceryProductDto - AuthDto But what are those datamodels with "Dto" name on them? DTO simply means Data Transfer Object, I like to use DTOs in any API request that I make.. Flutter Setup Install a Flutter application and set it up. Then have the following dependencies to setup Supabase with it. packages: I added postgrest since I want to take all typings from the package and Supabase is using those. When that is done, you can proceed to setting up your Supabase client import 'package:supabase/supabase.dart'; // use your own SUPABASE_URL const String SUPABASE_URL = ''; // use your own SUPABASE_SECRET key const String SUPABASE_SECRET = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiYW5vbiIsImlhdCI6MTYxOTMwODI5MCwiZXhwIjoxOTM0ODg0MjkwfQ.Kk1ckyjzCB98aWyBPtJsoWuTsbq2wyYfiUxG7fH4yAg'; final SupabaseClient supabase = SupabaseClient(SUPABASE_URL, SUPABASE_SECRET); These can be found from your project settings in API tab. To get the SUPABASE_URL And the SUPABASE_SECRET Then we can make queries when this is already setup! Supabase Queries If you know SQL or familiar with it, it should feel very similar. But these will be auto generated from Supabase itself, so don't worry in case you don't know how to construct a Supabase query. Just check on the project API which will be dynamically generated for you whenever you update table or change any columns. To compare it, this is a RAW SQL query. SELECT * FROM products And this is how you write queries with Supabase in Dart supabase.from("products").select().execute(); Make sure you always have the execute at the last part otherwise it will not get all data from products table. What about querying for a single record? In SQL we have, SELECT * FROM products WHERE id = "uuid-string"; In Supabase Dart we have, supabase.from("products").select().eq("id", "uuid-string").single().execute(); There are more queries to show from your Supabase project, so be sure to check it out here Authentication In every application, one thing you can secure your user's data is to have an authentication system. So with Supabase it is very easy to get started with authentication right away as they provide a very simple and intuitive API! class AuthenticationService { final _logger = Logger(); final _localStorageService = locator<LocalStorageService>(); AppUser? _user = null; AppUser? get user => _user; bool get hasUser => _user != null; Future<void> initialize() async {} Future<AppUser?> signIn({required AuthDto payload}) async {} Future<AppUser?> signUp({required AuthDto payload}) async {} Future<void> signOut() async {} Future<AppUser?> fetchUser({required String id}) async {} Future<PostgrestResponse> _createUser(User user, AuthDto payload) {} } To the code above, to break it down. This is dependent with the local storage service (Shared Preferences) which is where we will store out JWT auth token / refresh token and the Logger which can be useful for debugging. So I like to have a Logger with me. We have a private propery _user which is where we store our user with its own getter and a boolean getter to check if the user is logged in if the _user property is not null. Inside the initialize() method is where we will perform auto login. So if the user has a refresh token stored in their local storage, we will proceed to login this user and get user data and store it in _user property so the hasUser boolean getter will be true. Future<void> initialize() async { final accessToken = await _localStorageService.getItem('token'); _logger.i(accessToken); if (accessToken == null) { return; } final response = await supabase.auth.api.getUser(accessToken); if (response.error != null) { return; } final user = response.data!; _logger.i(user.toJson()); await fetchUser(id: user.id); } Next is the AuthDto that contains password field. When a user provided correct and existing email, we will take their access token and store it in local storage. Future<AppUser?> signIn({required AuthDto payload}) async { final response = await supabase.auth.signIn( email: payload.email, password: payload.password, ); if (response.error != null) { _logger.e(response.error!.message); return null; } _logger.i(response.data); await _localStorageService.setItem('token', response.data!.accessToken); return await fetchUser(id: response.data!.user!.id); } We use the signUp method whenever we have a new user that wants to use our app. When a new user is created, we take the access token and save it to local storage. We will also proceed to creating a new user record in the app_users table but it will be in a different method called _createUser Future<AppUser?> signUp({required AuthDto payload}) async { final response = await supabase.auth.signUp(payload.email, payload.password); if (response.error != null) { _logger.e(response.error!.message); return null; } final user = response.data!.user!; _logger.i(user.toJson()); await _createUser(user, payload); await _localStorageService.setItem('token', response.data!.accessToken); return await fetchUser(id: user.id); } _createdUser will create a new user record inside app_users table. Future<PostgrestResponse> _createUser(User user, AuthDto payload) { return supabase .from("app_users") .insert( AppUser( id: user.id, name: payload.name!, email: user.email, ), ) .execute(); } Then the signOut which is already self explanatory. Here we just remove the access token from the local storage when user decides to signOut Future<void> signOut() async { final response = await supabase.auth.signOut(); if (response.error != null) { _logger.e(response.error!.message); return; } _logger.i(response.rawData); await _localStorageService.removeItem('token'); return; } And lastly we have the fetchUser method, that will fetch the user record that is currently authenticated so we'll have their information across the entire application whenever we need it. Future<AppUser?> fetchUser({required String id}) async { final response = await supabase .from("app_users") .select() .eq('id', id) .single() .execute(); _logger.i( 'Count: ${response.count}, Status: ${response.status}, Data: ${response.data}', ); if (response.error != null) { _logger.e(response.error!.message); return null; } _logger.i(response.data); final data = AppUser.fromJson(response.data); _user = data; return data; } Supabase Service We finished handling our data models and authentication, then we can create and handle read write operations for our application. Thanks to the concept of abstraction, we don't have to write up a lot of code for the same functionality, we will be writing less code and have this functionality extended to other service that requires it. The following will be the abstract class that handles CRUD operations (Cread, Read, Update, Delete) import 'package:logger/logger.dart'; import 'package:postgrest/postgrest.dart'; import 'package:supagrocery/app/app.locator.dart'; import 'package:supagrocery/app/supabase_api.dart'; import 'package:supagrocery/services/authentication_service.dart'; abstract class SupabaseService<T> { final _authService = locator<AuthenticationService>(); final _logger = Logger(); String tableName() { return ""; } Future<PostgrestResponse> all() async { _logger.i(tableName()); final response = await supabase .from(tableName()) .select() .eq('created_by', _authService.user!.id) .execute(); _logger.i(response.toJson()); return response; } Future<PostgrestResponse> find(String id) async { _logger.i(tableName() + ' ' + id); final response = await supabase .from(tableName()) .select() .eq('id', id) .single() .execute(); _logger.i(response.toJson()); return response; } Future<PostgrestResponse> create(Map<String, dynamic> json) async { _logger.i(tableName() + ' ' + json.toString()); final response = await supabase.from(tableName()).insert(json).execute(); _logger.i(response.toJson()); return response; } Future<PostgrestResponse> update({ required String id, required Map<String, dynamic> json, }) async { _logger.i(tableName() + ' ' + json.toString()); final response = await supabase.from(tableName()).update(json).eq('id', id).execute(); _logger.i(response.toJson()); return response; } Future<PostgrestResponse> delete(String id) async { _logger.i(tableName() + ' ' + id); final response = await supabase.from(tableName()).delete().eq('id', id).execute(); _logger.i(response.toJson()); return response; } } This abstract class has a dependency on the AuthenticationService that we just created so we'll be able to attach the user's ID every time they create records in our database. And we'll have the tableName to override for each feature services that requires it. So when creating our ProductService and GroceryService, we simply extend this class and that override tableName with their corresponding table names. This is an example for ProductService/authentication_service.dart'; import 'package:supagrocery/services/supabase_service.dart'; class ProductService extends SupabaseService<Product> { final _authService = locator<AuthenticationService>(); @override String tableName() { return "products"; } Future<PostgrestResponse> fetchProducts() async { return await supabase .from("products") .select("*") .eq('created_by', _authService.user!.id) .execute(); } } This will also have the methods from SupabaseService abstract class that we created and won't have to rewrite anything of it, we only need to override the tableName and return the name of that table. With that inside the ProductService we can then write up any method that is relevant to the business logic. Then this is our GroceryService/supabase_service.dart'; import 'authentication_service.dart'; class GroceryService extends SupabaseService<Grocery> { final _authService = locator<AuthenticationService>(); @override String tableName() { return "groceries"; } Future<PostgrestResponse> fetchGroceryList({required String id}) async { return await supabase .from("groceries") .select("*, grocery_products(*, products(*) )") .eq('id', id) .eq('created_by', _authService.user!.id) .single() .execute(); } Future<PostgrestResponse> addProductsToList({ required String id, required List<Product?> products, }) async { return await supabase .from("grocery_products") .insert( products.map((e) { return GroceryProductDto( groceryId: id, productId: e!.id, ).toJson(); }).toList(), ) .execute(); } Future<PostgrestResponse> markProductChecked( {required GroceryProduct payload}) async { return await supabase .from("grocery_products") .update(payload.toJson()) .eq('id', payload.id) .execute(); } Future<PostgrestResponse> removeProduct({required String id}) async { return await supabase .from("grocery_products") .delete() .eq('id', id) .execute(); } } Summary We covered database design, setting up Supabase, implementing an authentication system with Supabase API, and using abstraction to easily implement new features. I hope this gave you an idea and was useful in any sort of way. Thanks for reading and hope you enjoyed! Discussion (7) Can you suggest me some good resources to learn this syntax? .select(", grocery_products(, products(*) )") I can't find good explanatory material I can't find any too. But I'll create a tutorial for that. Stay tuned my friend! This was helpful postgrest.org/en/v7.0.0/api.html#r... Wow, cool. Thanks for sharing it! How did you make your database picture? It is really cool. Got that from here drawsql.app/ Thank you
https://practicaldev-herokuapp-com.global.ssl.fastly.net/carlomigueldy/building-a-simple-grocery-app-in-flutter-with-supabase-5fad
CC-MAIN-2021-31
en
refinedweb
NAME¶isympy - interactive shell for SymPy SYNOPSIS¶ isympy [-c | --console] [-p ENCODING | --pretty ENCODING] [-t TYPE | --types TYPE] [-o ORDER | --order ORDER] [-q | --quiet] [-d | --doctest] [-C | --no-cache] [-a | --auto] [-D | --debug] [ -- | PYTHONOPTIONS] isympy [ {-h | --help} | {-v | --version} ] DESCRIPTION¶isympy is a Python shell for SymPy. It is just a normal python shell (ipython shell if you have the ipython package installed) that executes the following commands so that you don't have to: >>> from __future__ import division >>> from sympy import * >>> x, y, z = symbols("x,y,z") >>> k, m, n = symbols("k,m,n",¶ - -c SHELL, --console=SHELL - Use the specified shell (python or ipython) as console backend instead of the default one (ipython if present or python otherwise). Example: isympy -c python SHELL could be either 'ipython' or 'python' - -p ENCODING, --pretty=ENCODING - Setup pretty printing in SymPy. By default, the most pretty, unicode printing is enabled (if the terminal supports it). You can use less pretty ASCII printing instead or no pretty printing at all. Example: isympy -p no ENCODING must be one of 'unicode', 'ascii' or 'no'. - -t TYPE, --types=TYPE - Setup the ground types for the polys. By default, gmpy ground types are used if gmpy2 or gmpy is installed, otherwise it falls back to python ground types, which are a little bit slower. You can manually choose python ground types even if gmpy is installed (e.g., for testing purposes). Note that sympy ground types are not supported, and should be used only for experimental purposes. Note that the gmpy1 ground type is primarily intended for testing; it the use of gmpy even if gmpy2 is available. This is the same as setting the environment variable SYMPY_GROUND_TYPES to the given ground type (e.g., SYMPY_GROUND_TYPES='gmpy') The ground types can be determined interactively from the variable sympy.polys.domains.GROUND_TYPES inside the isympy shell itself. Example: isympy -t python TYPE must be one of 'gmpy', 'gmpy1' or 'python'. - -o ORDER, --order=ORDER - Setup the ordering of terms for printing. The default is lex, which orders terms lexicographically (e.g., x**2 + x + 1). You can choose other orderings, such as rev-lex, which will use reverse lexicographic ordering (e.g., 1 + x + x**2). Note that for very large expressions, ORDER='none' may speed up printing considerably, with the tradeoff that the order of the terms in the printed expression will have no canonical order Example: isympy -o rev-lax ORDER must be one of 'lex', 'rev-lex', 'grlex', 'rev-grlex', 'grevlex', 'rev-grevlex', 'old', or 'none'. - -q, --quiet - Print only Python's and SymPy's versions to stdout at startup, and nothing else. - -d, --doctest - Use the same format that should be used for doctests. This is equivalent to 'isympy -c python -p no'. - -C, --no-cache - Disable the caching mechanism. Disabling the cache may slow certain operations down considerably. This is useful for testing the cache, or for benchmarking, as the cache can result in deceptive benchmark timings. This is the same as setting the environment variable SYMPY_USE_CACHE to 'no'. - -a, --auto - Automatically create missing symbols. Normally, typing a name of a Symbol that has not been instantiated first would raise NameError, but with this option enabled, any undefined name will be automatically created as a Symbol. This only works in IPython 0.11. Note that this is intended only for interactive, calculator style usage. In a script that uses SymPy, Symbols should be instantiated at the top, so that it's clear what they are. This will not override any names that are already defined, which includes the single character letters represented by the mnemonic QCOSINE (see the "Gotchas and Pitfalls" document in the documentation). You can delete existing names by executing "del name" in the shell itself. You can see if a name is defined by typing "'name' in globals()". The Symbols that are created using this have default assumptions. If you want to place assumptions on symbols, you should create them using symbols() or var(). Finally, this only works in the top level namespace. So, for example, if you define a function in isympy with an undefined Symbol, it will not work. - -D, --debug - Enable debugging output. This is the same as setting the environment variable SYMPY_DEBUG to 'True'. The debug status is set in the variable SYMPY_DEBUG within isympy. - -- PYTHONOPTIONS - These options will be passed on to ipython (1) shell. Only supported when ipython is being used (standard python shell not supported). Two dashes (--) are required to separate PYTHONOPTIONS from the other isympy options. For example, to run iSymPy without startup banner and colors: isympy -q -c ipython -- --colors=NoColor - -h, --help - Print help output and exit. - -v, --version - Print isympy version information and exit. FILES¶ - ${HOME}/.sympy-history - Saves the history of commands when using the python shell as backend.
https://manpages.debian.org/buster/isympy-common/isympy.1.en.html
CC-MAIN-2021-31
en
refinedweb
Class defining default implementations for some spherical geometry methods. More... #include <StelSphereGeometry.hpp> All methods are reentrant. This method is heavily used and therefore needs to be very fast. The returned SphericalCap doesn't have to be the smallest one, but smaller is better. Reimplemented from SphericalRegion. It can be used for safe computation of intersection/union in the general case. Implements SphericalRegion. The format is: it is a list of closed contours, with each points defined by ra dec in degree in the ICRS frame. Implements SphericalRegion. Reimplemented in SphericalTexturedPolygon.
http://stellarium.org/doc/0.17/classSphericalPolygon.html
CC-MAIN-2019-13
en
refinedweb
Is it possible to write a single method total to do a sum of all elements of an ArrayList, where it is of type <Integer> or <Long>? I cannot just write public long total(ArrayList<Integer> list) and public long total(ArrayList<Long> I have POJO class Student like this class Student { private int score; private String FirstName; //Getters and setters ................. } I am creating ArrayList like this public static void main(String[] args) { List<Student> al_students= new Arra I am trying to sort List using java comparator the reference is import java.util.Comparator; import net.java.dev.wadl.x2009.x02.ResourceDocument.Resource; publi How do you get all unique groups from contacts. I am already able to get all instances of groups but I guess I have more than one account on the device so I am receiving multiple instances of the same groups like Coworkers,Coworkers,Coworkers,Coworke code is from Java SCJP6. It's from the topic of The Comparable Interface from chapter 7 on Collections. In line 4 we are casting 'Object o' to DVDInfo type. I don't understand this. Why are we casting it as to DVDInfo? class DVDInfo implements C I'm running the following code, but getting error The method trimToSize() is undefined for the type List<Integer> public class ListPerformance { public static void main(String args[]) { List<Integer> array = new ArrayList<Integer>(); Lis So I have been creating a Word-Puzzle which I recently got stuck on a index out of bounds problem. This has been resolved however the program is not doing what I would like it to do. The Idea i that the test class will print 3 words in an array e.g. public class Human { private int age; private float height; private float weight; private String name = new String(); public Human() { } public Human(String name, int age, float height, float weight) { this.name = name; this.age = age; this.height = I've successfully parsed this data into my Android application. This JSON takes the format of [ {...}, {...}, {...} ] My issue is that I need to parse JSON in the format of { "count":3, "result":[ {...} I Browse some Base adapter tuts and did this, I don't know how to get the contacts set to the list can any one explain what I missed Here is the code: class SingleRow{ String name; String number; int image; SingleRow(String name,String number){ this. I am new to html/javascript This is my class: TravelObjects{ String sourceName; String lattitude; String longitude; ... getters and setters } I am recieving array of TravelObject class as server response whose variables I want to use on html side. Ho This question already has an answer here: How to unserialize PHP Serialized array/variable/class and return suitable object in C# 3 answers In the picture you can see what is in ht, but how do i parse the values from ht.. its a php serialization that Collections.sort(orderedStudents, new Comparator<Student>() { public int compare(Student s1, Student s2) { return s2.getAggregate().compareTo(s1.getAggregate()); } }); This is the method i used. --------------Solutions------------- The problem is th Don't know why I'm getting this error. Working with ArrayLists and sorting words into appropriate ArrayLists by alphabetical order. If anyone can help me understand why I'm getting the error and how to fix it that would be great! import java.util.*; I found that this worked :) I found a way to make it work. Instead of this: for(int i=0; i<alla.size(); i++){ if(alla.get(i).getClass().getName().equals("Aktie")){ alla.get(i).setKurs(0.0); } } I got this to work: for(Värdesak v : alla){ if(v I am Rest Assured for testing REST API and have a case where I need to extract values (costCenterId and organizationId) from a GET request getTenantFloorManagers = given(authToken).when().get(“/costcenter/manager/floormanagers").asString(); that retu I have a game that is based on a 9x9 grid array in which the user attempts to escape but at random positioning in the array there are blocks in which the user cannot move to or it will end the game. 3=user, 1=safe, 2=wall, 0=safezone. essentially I w I am receiving input through stdin in the form of a line (there will be many many lines in the input), and what I want to do is take information from each line and store it into an arraybased list but I am having trouble with how to continue after a
http://www.dskims.com/tag/arraylist/
CC-MAIN-2019-13
en
refinedweb
Obviously we are going to need our conversion routine more than once so it makes sense to convert it into a function that returns a function that approximates the data. However there are a few small changed that are worth making. The first is that the plot is from 0 to n-1 which depends on the number of points in the data set. Much better to normalize it to be in the range 0 to 1 no matter what. The original data points at 0,1,2,3 and so on are now at k/n where k runs from 0 to n-1. The second problem is that if you examine the function returned xp[t] you will find that it contains long decimal constants that look something like: 0.101036 Sin[3.28439 - 10 π t] The examples in Wolfram Alpha have nice rational fractions in the formulas to make them look more hand crafted. Fortunately Mathematica has a Rationalize function which will convert decimal values into exact or approximate fractions. For example: Rationalize[0.101036 Sin[3.28439-10 π t], 0.001] where the 0,001 specifies the accuracy of the approximation gives: which looks a lot more friendly - no really, it does look easier! So we need to modify the equation a little to produce a function that does the job: trigSeries[x_] := ( f = Chop[Fourier[x]]; n = Length[x]; hn = Ceiling[n/2]; A0 = First[f]; f = Take[f, {2, hn}]; A = Abs[f]; P = Arg[f]; Function[t, Rationalize[A0/Sqrt[n] + 2/Sqrt[n]*Total[A*Sin[Pi/2+ 2*Pi*Range[1, hn - 1]*t + P]], .001]]) If you now try this out using the original data in x and y: x = {0, 1, 4, 6, 8, 10, 8, 6, 4, 1, 0};y = {0, 1, 1.5, 1.5, 1, 0,-1,-1.5,-1.5,-1,0};xp := trigSeries[x];yp := trigSeries[y];ParametricPlot[{xp[t], yp[t]}, {t, 0, 1}] You will see a shape that looks a lot like the original but smoother. The reason for the smoothing is that the Fourier series is only an exact fit at the points x,y you specify. If you were to draw straight lines between these points you would have something that looks much more like the original: Notice the curve given above is created by plotting the two parametric formulas: which look like the sorts of equations you can see listed in Wolfram Alpha and which would have taken a long time to find by trial and error or manual skill. This may not be producing impressive graphics, but that is because what we started with wasn't impressive. Take a much bigger curve and digitize it to a set of co-ordinates and the same method will give you a pair of equations that approximate your original curve. From here it is only a short step to a portrait of Einstein - well it still quite a big step really. The next trick is to implement the same idea in Python. You might initially think that this is impossible as Python isn't an math processing language like Mathematica. However we can do the same sort of trick by building up a string that has the same formula and then using the eval function. You might also think that Python isn't ideal because it doesn't have a DFT function and not much in the way of plotting but these are easy to fix by importing MatPlotLib, Numpy and SciPy. So assuming we have these modules imported let's go though the steps to build up the formula and then put the whole thing together as function later. As a sort of exercise in Pythonic code no for loops were used or harmed in the development of this function. There are very definitely places were a for loop might have done the job more efficiently but... First we need to perform a DFT and this is just a call to the SciPy fft function: import scipy as spimport numpy as npf=sp.fft(x) The only problem is that SciPy uses a different definition of the DFT to Mathematica in that it doesn't use 1/sqrt(N) in the forward transform and uses 1/N in the inverse transform. These are the sort of minor differences that are sent to make numerical programming more difficult. The main point is that where we had 1/sqrt(N) in the formula we now need to use 1/N. Next we can extract the first term A0 from f: n=len(x)A0=abs(f[0])/n Notice that we have already divided by n. In the case of Python it is easier to combine as many of the numerical values together as early as possible. Before moving on it would be a good idea to convert A0 into a fraction - i.e. as we did with Mathematica's Rationalize function. Python has a fractions module that can be used to find rational approximations. You need to add from fractions import Fraction and you can do the conversion using: A0=Fraction(A0).limit_denominator(1000) The final argument specifies the largest denominator that is allowed - the bigger the more accurate the approximation. Notice that Fraction is a constructor for a fraction object and its methods. Next we need to process the rest of the list f to get the other amplitudes and the phase angles: hn=np.ceil(n/2) f=f[1:hn] A=2*abs(f)/n P=sp.pi/2-sp.angle(f) In this case we remove the first term from the first half of the list and then calculate 2^A/n and the Pi/2 minus the phase angle. Why are we now subtracting the phase angle rather than adding it? The SciPi angle method returns the phase angle computed from a different origin. Again these things are sent to make math programming difficult.
https://www.i-programmer.info/projects/119-graphics-and-games/5735-how-to-draw-einsteins-face-parametrically.html?start=2
CC-MAIN-2019-13
en
refinedweb
I would like to create a simple Calculator service that has a single method to add numbers. This Add async public class RateLimitingCalculator { public async Task<int> Add(int a, int b) { //... } } I don't think using Rx makes sense here, unless you can rewrite your method into something like public IObservable<int> Add(IObservable<Tuple<int, int>> values), as suggested by Enigmativity in a comment. What I would do is to separate the concern of rate limiting into a separate class. That way, your code could look something like this: public class RateLimitingCalculator { private RateLimiter rateLimiter = new RateLimiter(5, TimeSpan.FromSeconds(1)); public async Task<int> Add(int a, int b) { rateLimiter.ThrowIfRateExceeded(); //... } } The implementation of RateLimiter depends on your exact requirements, but a very simple, not-thread-safe version could look like this: class RateLimiter { private readonly int rate; private readonly TimeSpan perTime; private DateTime secondStart = DateTime.MinValue; private int count = 0; public RateLimiter(int rate, TimeSpan perTime) { this.rate = rate; this.perTime = perTime; } public void ThrowIfRateExceeded() { var now = DateTime.UtcNow; if (now - secondStart > perTime) { secondStart = now; count = 1; return; } if (count >= rate) throw new RateLimitExceededException(); count++; } }
https://codedump.io/share/KMf3BQ3O8o9o/1/how-to-build-a-rate-limiting-api-with-observables
CC-MAIN-2017-17
en
refinedweb
Accessing a Remote Namespace The main purpose of the IWMIExtension interface is to provide developers with access to a remote namespace through ADSI. By using ADSI, developers can query Active Directory for more information about directory structure and computer network location. After Active Directory returns the location of a computer, developers can use the IWMIExtension::GetWMIServices method to access the WMI namespace on that computer. Developers can then perform any task that WMI allows, such as creating an object or querying the Windows Management service. Note For more information about support and installation of this component on a specific operating system, see Operating System Availability of WMI Components. The following procedure describes how to access a remote namespace using ADSI. To access a remote namespace using ADSI - Determine the name of the computer that contains the namespace you wish to access. You need the name of the computer to locate the computer on the enterprise. The most common way to locate a computer is to query Active Directory. Active Directory uses the current Directory Services security settings for the user. Make sure the user has the correct level of access before attempting to use Active Directory. - Retrieve the Active Directory Computer object that represents the computer. The following example shows how to query Active Directory in Microsoft Visual Basic with the GetObject function. Similarly, a C++ application can use a COM call to query Active Directory. The following example shows how to query Active Directory using the ADsGetObject interface. The C++ code requires the following references and #include statements to compile correctly. - Use the Active Directory Computer object to access the WMI \cimv2 namespace with the GetWMIServices method. Visual Basic developers can use the IWMIExtension.GetWMIServices method, as shown in the following example. C/C++ developers can query COM for a pointer to IWbemServices, as shown in the following example. - Use the namespace to access the methods and objects inside the namespace. The following Visual Basic example uses a connection to an SWbemServices object named "WMIServices" to connect to a Win32_LogicalDisk object in the default namespace. The example shows how to display the type of file system that the C:\ hard disk drive contains. The following C++ sample also shows how to display the file system type that the "C:" hard disk drive contains. ISWbemObject* pObj = NULL; BSTR strObjPath = SysAllocString( L"Win32_LogicalDisk.DeviceID=\"C:\""); hRes = pSvc->Get(strObjPath, NULL, NULL, &pObj); SysFreeString(strObjPath); BSTR strObjText = NULL; hRes = pObj->GetObjectText_(0, &strObjText); wprintf(L"%s\n", strObjText); SysFreeString(strObjText); Related topics
https://msdn.microsoft.com/en-us/library/windows/desktop/aa384706(v=vs.85).aspx
CC-MAIN-2017-17
en
refinedweb
Forward and inverse kinematics calculations using Tekkotsu output indices. More... #include <Kinematics.h> Forward and inverse kinematics calculations using Tekkotsu output indices.. Wherever a reference frame index is requested, you can simply supply one of the output indexes in the usual manner: kine->link->linkToBase(CameraFrameOffset); Example code: // Find the ray from the camera to whatever the near-field IR is hitting: fmat::Transform T = kine->linkToLink(NearIRFrameOffset,CameraFrameOffset); fmat::Column<3> camera_ray = T*fmat::pack(0,0,state->sensors[NearIRDistOffset]); float x; // x will be in the range ±1 for resolution layer independence float y; // y ranges ±y_dim/x_dim (i.e. ±1/aspectRatio) config->vision.computePixel(camera_ray[0],camera_ray[1],camera_ray[2],x,y); Finally, for each model we have created a database of "interest points" -- locations of notable interest on the body of the robot. These may be of use to people attempting to use the limbs to manipulate objects. To access these interest points, call getInterestPoint with the name of the interest point, obtained from the diagrams. Note that you can pass a comma separated list of interest point names and the result will be the midpoint of those interest points: kine->getInterestPoint(BaseFrameOffset,"LowerInnerFrontLFrShin,LowerOuterFrontLFrShin"); Definition at line 68 of file Kinematics.h. List of all members. [protected] we'll be using the hash_map to store named interest points Definition at line 290 of file Kinematics.h. Constructor, pass the full path to the kinematics configuration file. Definition at line 71 of file Kinematics.h. Copy constructor, everything is either update-before-use or static, copy is normal init. Definition at line 76 of file Kinematics.h. [virtual] Destructor. Definition at line 40 of file Kinematics.cc. Returns a matrix for transforming from the base frame to link j frame. Definition at line 118 of file Kinematics.h. Referenced by RawCam::drawShapesIntoBuffer(), projectShapeToCamera(), and PostureEngine::solveLinkVector(). returns a transformation to account for standing pose, where the origin of the "local" space is the projection of the base frame origin along the ground plane normal Definition at line 44 of file Kinematics.cc. Definition at line 131 of file Kinematics.h. Referenced by baseToLocal(), Grasper::ReleaseArm::doStart(), and localToBase(). Calculate the leg heights along a given "down" vector (0 is level with base frame). This can be based on either the gravity vector from accelerometer readings, or if that may be unreliable due to being in motion, you could do some basic balance modeling and pass a predicted vector. This uses the interest point database to find the lowest interest point for each leg Definition at line 115 of file Kinematics.cc. Referenced by findUnusedLeg(). Find the ground plane by fitting a plane to the lowest 3 interest points. Definition at line 173 of file Kinematics.cc. This function merely calls the other version of calculateGroundPlane with the current gravity vector as the "down" vector. Definition at line 159 of file Kinematics.cc. Referenced by baseToLocal(), DualCoding::MapBuilder::calculateGroundPlane(), LookAtMarkers::Search::doEvent(), LookAtMarkers::TrackMarker::doEvent(), localToBase(), DualCoding::Lookout::processSearchEvent(), projectToGround(), DualCoding::MapBuilder::projectToLocal(), and DualCoding::Lookout::setupTrack(). [static, protected] checks that statics have been initialized, and calls initStatics if they are missing Definition at line 261 of file Kinematics.h. Referenced by init(). Find the leg which is in least contact with ground. Definition at line 147 of file Kinematics.cc. Returns the location of a named point, relative to any desired reference frame. You can pass a comma separated list of interest point names and the result will be the midpoint of those IPs. If an interest point is not found, a std::runtime_error is thrown. Definition at line 89 of file Kinematics.cc. false Returns the location of a named point and the link it is attached to. If name is not found, link will be -1 and ip will be all 0's. Definition at line 67 of file Kinematics.cc. Referenced by getInterestPoint(). returns the KinematicJoint structure for the specified Tekkotsu output or reference frame offset Definition at line 98 of file Kinematics.h. Referenced by ArmController::ArmController(), CBracketGrasperPredicate< N >::CBracketGrasperPredicate(), RRTNode3DR< N >::CollisionChecker::CollisionChecker(), RRTNode2DR< N >::CollisionChecker::CollisionChecker(), Grasper::computeGoalStates(), XWalkParameters::computeNeutralPos(), Grasper::PlanArmApproach::doStart(), ArmController::doStart(), Grasper::MoveArm::executeMove(), Grasper::getCurrentState(), IKCalliope::IKCalliope(), HeadPointerMC::lookAtPoint(), HeadPointerMC::lookInDirection(), ArmMC::moveOffsetToPoint(), ArmMC::moveOffsetToPointWithOrientation(), ArmController::setJoint(), ShapeSpaceCollisionCheckerBase< N >::ShapeSpaceCollisionCheckerBase(), ShapeSpacePlanner2DR< N >::ShapeSpacePlanner2DR(), and ShapeSpacePlanner3DR< N >::ShapeSpacePlanner3DR(). Definition at line 100 of file Kinematics.h. Referenced by Grasper::PlanBodyTransport::doStart(), Grasper::ArmRaise::doStart(), Grasper::ArmNudge::doStart(), Grasper::PlanArmApproach::doStart(), Grasper::PlanBodyApproach::doStart(), ArmController::pointPicked(), and ArmMC::setFingerGap(). Returns a pointer to the root of the kinematic tree. Definition at line 95(). initializes static variables -- only call if not staticsInited Definition at line 34 of file Kinematics.cc. Referenced by checkStatics(). Returns a matrix for transforming from link frame j to base frame. Definition at line 109 of file Kinematics.h. Referenced by baseToLink(), LookAtMarkers::Search::doEvent(), LookAtMarkers::TrackMarker::doEvent(), KoduInterpreter::GiveActionRunner::GiveActionSend::doStart(), KoduInterpreter::GrabActionRunner::ExecuteGrabAction::PrepareForAnotherGrasp::RepositionBody::doStart(), KoduInterpreter::GrabActionRunner::ExecuteGrabAction::VerifyObjectWasGrabbed::VerifyObjectInGripper::doStart(), KoduInterpreter::GrabActionRunner::ExecuteGrabAction::VerifyObjectWasGrabbed::LookAtTheGripper:::VerifyObjectWasGrabbed::VerifyObjectInGripper::doStart(), KoduInterpreter::PerceptualMultiplexor::FailureRecovery::ObjectManipRecovery::VerifyObjectWasGrabbed::LookAtTheGripper::doStart(), Grasper::ReleaseArm::doStart(), Grasper::Verify::CheckCross::doStart(), Grasper::Verify::CheckDomino::doStart(), Grasper::DoBodyApproach3::doStart(), Grasper::DoBodyApproach2::doStart(), DualCoding::Lookout::findLocationFor(), DualCoding::MapBuilder::getCamCrosses(), DualCoding::MapBuilder::getCamCylinders(), DualCoding::MapBuilder::getCamDominoes(), DualCoding::MapBuilder::grabCameraImageAndGo(), HeadPointerMC::lookAtJoint(), DualCoding::Lookout::processPointAtEvent(), DualCoding::Lookout::processSearchEvent(), DualCoding::MapBuilder::projectToLocal(), and DualCoding::Lookout::setupTrack(). Returns a matrix for transforming from link iL to link oL. Definition at line 123 of file Kinematics.h. Referenced by getInterestPoint(), IKCalliope::IKCalliope(), and ArmMC::setFingerGap(). Definition at line 140 of file Kinematics.h. Referenced by localToBase(). Definition at line 137 of file Kinematics.h. Assignment operator, everything is either update-before-use or static, assignment is no-op. Definition at line 83 of file Kinematics.h. [static] A simple utility function, converts x,y,z,h to a fmat::Column<4> Definition at line 242 of file Kinematics.h. A simple utility function, converts x,y,z,h to a fmat::Column<3> Definition at line 234 of file Kinematics.h. Referenced by calculateGroundPlane(), getInterestPoint(), DualCoding::Lookout::moveHeadToPoint(), pack(), projectToPlane(), and PostureEngine::solveLinkVector(). 0 Find the location of an object on the ground with a custom ground plane specification. gndPlane must be specified relative to the base frame, in the form , Definition at line 542 of file Kinematics.cc. Find the location of an object on the ground (the easy way from a vision object event (i.e. EventBase::visObjEGID)). Definition at line 219 of file Kinematics.h. Find the location of an object on the ground from an arbitrary ray r_j in reference frame j (probably CameraFrameOffset). Definition at line 211 of file Kinematics.h. Referenced by projectToGround(). Find the point of intersection between a ray and a plane. p_b should be of the form For projecting to the ground plane, one of the specialized projectToGround() functions may be more convenient. Mathematical implementation: We'll convert the ray to the plane's reference frame, solve there. We find a point on the ray (ro_b) and the direction of the ray (rv_b). rv_b does not need to be normalized because we're going to find a scaling factor for it, and that factor accounts for current magnitude. Proof, p=plane normal vector, d=plane displacement, r = ray direction, o = ray offset, x = [x y z] coordinates, t = scaling factor Find distance from the ray offset (ro_b) and the closest point on the plane. Object height is applied along the plane normal toward the ray origin (we assume the ray source is "above" ground) Find scaling factor by projecting ray vector (rv_b) onto plane normal. Intersection point will be rv_b*dist/align + ro_b, but need to watch out for case where align==0 (rv_b and plane are parallel, no intersection) Definition at line 468 of file Kinematics.cc. A simple utility function, pulls the first 3 rows of the first column, divides each by the fourth row, and stores into ox, oy, and oz. Definition at line 249 of file Kinematics.h. refresh the joint settings in root from WorldState::outputs Reimplemented in PostureEngine. Definition at line 554 of file Kinematics.cc. Referenced by calcLegHeights(), calculateGroundPlane(), getKinematicJoint(), getPosition(), linkToBase(), linkToLink(), and projectToPlane(). these interest points are shared by all Kinematics classes (i.e. all PostureEngines) this is to reduce initialization time, but does mean one robot can't do interest point calculations regarding a different model robot... Definition at line 294 of file Kinematics.h. Referenced by calcLegHeights(), and calculateGroundPlane(). holds mapping from tekkotsu output index to chain and link indicies Definition at line 284 of file Kinematics.h. Referenced by calcLegHeights(), calculateGroundPlane(), getInterestPoint(), getKinematicJoint(), getPosition(), init(), Kinematics(), linkToBase(), linkToLink(), operator=(), projectToPlane(), PostureEngine::solveLink(), PostureEngine::solveLinkOrientation(), PostureEngine::solveLinkPosition(), PostureEngine::update(), and update(). [mutable, protected] determine if the joints are up to date (compare to WorldState::lastSensorUpdateTime) Definition at line 281 of file Kinematics.h. Referenced by update(). the root of the kinematic tree Definition at line 278 of file Kinematics.h. Referenced by getRoot(), init(), Kinematics(), and operator=(). initially false, set to true after first Kinematics is initialized Definition at line 287 of file Kinematics.h. Referenced by checkStatics(), and initStatics().
http://tekkotsu.org/dox/classKinematics.html
CC-MAIN-2017-17
en
refinedweb
interface for notifications from Wireless More... #include <SocketListener.h> interface for notifications from Wireless Definition at line 5 of file SocketListener.h. List of all members. . Referenced by Wireless::pollProcess().
http://tekkotsu.org/dox/classSocketListener.html
CC-MAIN-2017-17
en
refinedweb
This was a quite nice challenge about elliptic curves, giving 40 points. The problem is stated as follows: Ed25519-sign the flag with the same private key and implementation flaw used to sign the messages below. The flag is the base64 representation of the signature. Public key: sctf.io Signature: 2016 Q1 Signature: OK, so the two signatures have the same prefix . Hmm… let us first look at the Ed25519 signing algorithm and try to identify the flaw: Clearly, is equal in for two different messages implying that either: - There is a collision in SHA512, which is so unlikely that we can rule that out completely, or, is independent of the message. Now, we know that the implementation is flawed. It is rather obvious that the marked (see above image) connection from the message is missing. First, define . The -part is computed as Since does not differ in the flawed implementation, we have that and therefore, So, we may obtain and, conquently, . To forge a signature for message , we now compute . Using the previously obtain and , we can compute . The signature becomes . OK, so how does it look in code? import ed25519, hashlib, libnum, binascii, base64 def bit(h,i): return (ord(h[i/8]) >> (i%8)) & 1 def Hint(m): h = hashlib.sha512(m).digest() return sum(2**i * bit(h,i) for i in range(2*256)) def decodeint(s): return sum(2**i * bit(s,i) for i in range(0,256)) def encodeint(y): bits = [(y >> i) & 1 for i in range(256)] return ''.join([chr(sum([bits[i * 8 + j] << j for j in range(8)])) for i in range(256/8)]) # modular field size n = 2**252 + 27742317777372353535851937790883648493 pkey = b'5bfcb1cd3938f3f6f3092da5f7d7a1bdb1d694a725d0585a99208787554e110d' # messages message1 = 'sctf.io' message2 = '2016 Q1' message3 = 'the flag' # provided test-case signatures sig1 = b'68299a51b6b592e2db83c26ca3594bdd81bdbb9f11c597a1deb823da7c8b9de8e2224855125b1acbeab1468bf4860c1eeb05b6d2375e2214c55bdfe808a6c106' sig2 = b'68299a51b6b592e2db83c26ca3594bdd81bdbb9f11c597a1deb823da7c8b9de825ad01a05a0cce69258d41d42ed046956e7d4586eb21ff031bf8ac03243d5e04' # extract required data rbin = binascii.unhexlify(sig1[:64]) pkbin = binascii.unhexlify(pkey) s1 = decodeint(binascii.unhexlify(sig1[64:])) s2 = decodeint(binascii.unhexlify(sig2[64:])) # compute H(R,A,M) hram1 = Hint(rbin+pkbin+message1) hram2 = Hint(rbin+pkbin+message2) # ok, now we got the information we need a = (s1-s2)*libnum.modular.invmod(hram1-hram2,n) % n r = (s1-hram1*a) % n # compute new hram hram = Hint(rbin+pkbin+message3) s = (r+hram*a) % n # this is our new fresh signature flagsig = sig1[:64] + binascii.hexlify(encodeint(s)) Converting it to the correct format base64.b64encode(binascii.unhexlify(flagsig)) This gives us the flag sctf{aCmaUba1kuLbg8Jso1lL3YG9u58RxZeh3rgj2nyLneh12Mf7NAvaREdBikQrkpWSa3UT15wUmunsrceSa3CUCQ==}. (Image inspired by)
https://grocid.net/2016/04/14/sctf-ed25519/
CC-MAIN-2017-17
en
refinedweb
! With input from client experts and industry partners, Integrated Informatics Inc. developed the Geodetics Toolkit, a commercial product. With this toolkit, their users can load, analyze, map, audit, and generate reports for seismic and well surveys. The tools in the toolbox take text and binary data formatted according to both custom and industry standards (i.e. UKOOA, SEG-Y, SEGP1, and so on) and load it into a schema of feature classes and tables. Below, Integrated Informatics Inc. discusses reasons for choosing Python as the language for developing and delivering their commercial product. They discuss how Python is used to their advantage in the software development environment and show examples of how Python is much more than just a scripting language. Finally, they discuss the use of open source libraries in their product. Seismic and Well survey data are integral parts of any oil and gas company's database and are used in a variety of manners such as visualization, resource discovery, and inventory. The oil and gas industry has a long and rich history of computer utilization and digital data storage and over this history many different standards for survey storage have arisen and companies have chosen the standard that suits them best or even developed their own. The result is a proliferation of survey files that appear similar but are often different in very subtle ways. Since survey information has existed for a long time it is not surprising that there are many tools available for processing the data. The problem is that such tools are not typically a part of a larger software solution. They are built for viewing and doing some processing but they often do not work well with the rest of the enterprise database. This is not ideal as the full benefit of these data can only be realized when they are overlaid, integrated, and analyzed together. Given that most oil and gas companies employ the ESRI stack as their main GIS software it is compelling to introduce capabilities for loading well and seismic from a variety of formats directly into ArcGIS. For the development of the Geodetics Tools we had the option of using an ArcObjects-based approach but we choose Python for two primary reasons: 1) Speed of Development and 2) Ease of Deployment. Python and the Geoprocessing Framework are tightly integrated and Python is the recommended language for implementing script tools. Choosing Python means that as a developer you can spend your time coding functionality rather than coding a user interface. This in turn means you can deliver products to your clients faster than you might otherwise expect because core parts of the project (the user interface) are already taken care of. For example, when you create a tool through the geoprocessing framework you get a tool that looks and acts like the core tools delivered with ArcGIS. This means default validation, user interface controls and a documentation style that allows your custom tools to blend in seamlessly with other parts of the application like Model Builder. It is worth noting that development time for the budding programmer is also decreased due to the nature of Python itself. Guido van Rossum, the author of Python, created the language to be easy and intuitive, open source, understandable as plain English and suitable to everyday tasks. To you, the GIS Analyst turned developer or dabbler, this means that Python is easy to learn and easy to read. You'll spend less time learning and more time creating solutions and improving workflow. The Geodetics Toolbox currently contains over 40 tools. An enormous amount of time was saved because we didn't have to program UI's for each of the tools. The time saved allowed us to invest heavily into testing the tools, building a test harness, improving performance, and introducing innovations and new functionality. One of the beautiful things about a Python solution is the ease of deployment. There are no dlls to register, no complicated installations to run and no COM dependencies to worry about. With the Geodetics Toolbox, we are able to simply zip the solution in our office and unzip it in an accessible location on the client's network. With the code in place, the client need only to add the toolbox to ArcGIS Desktop and they have access to the functionality. In many large corporations the software installation is the dominion of the IT department and pushing out software to the employees of the company is their job and often their headache. As a solution provider the easier you can make installation the more appreciative IT will be and the quicker your clients can access new functionality. For Integrated Informatics, the use of Python means that we are ensuring both the easiest possible installation process and the quickest possible request turnaround time for our clients and that translates into happy clients. The Python code for the Geodetics Toolbox contains well over 5000 lines of code (not including open source packages). As with all commercial products our customers send us enhancement requests on a regular basis and which we try to fulfill as quickly as possible. Requests range from new parameters to whole new toolsets. It is therefore important for us to know when changes impact existing functionality and whether any changes have caused tools to fail. With over 40 tools, each of which have up to dozens of parameters it is simply not effective to manually test each tool by hand. To achieve the desired turn-around times on requests at the expected level of quality, automated testing is an absolute must. Python is, as they say, "batteries included" so for testing we only needed to look at Python itself for at least part of the solution. Python has an excellent standard module called unittest which is a part of the core Python install. For each tool we have a separate test script to test the many different permutations of parameters and data inputs. With individual test suites for each tool we can be very precise about what we are testing and efficient with time during the business day. Test suites for each tool are a good step but it is critical that these test scripts be run in an automated fashion on a regular basis, not just when you remember to run them. In vogue of late is the notion of 'continuous integration', the concept that after each change checked in to the code base a trigger initiates all the tests in the test suite. This can be an excellent option for certain types of code base and certain types of tests, however, for tools that do heavy processing such high frequency testing is not always practical. More important is the idea that the test be triggered on a regular basis. This can be nightly, weekly or even monthly depending on the frequency with which the code base is updated. With an automated run of your test suite you always have a finger on the pulse of your code so that if and when a bug is introduced to the code base you can know quickly that there is a problem and correct it. One of the basic tenets in programming is to avoid code duplication. There are a couple ways that you can avoid code duplication and make your code base more efficient and effective. First you can use functions to contain pieces of code that you use over and over again. Functions can do a lot and are the first step on the way to reducing code duplication. For large code bases, functions are limited in what they can do (though they do have their place), and in many cases it is appropriate to start creating classes. Though Python is thought of as a scripting language in the context of esri's software (for example note the use of 'scripting' in the name 'arcgisscripting'), it is actually a fully Object Oriented (OO) programming language. As such, programmers have the ability to create classes and full class hierarchies using inheritance. In large code bases, well written OO style code can help conceptually abstract complex ideas, reduce code duplication and isolate code into small concise chunks making it easier to change, manage, and test. The very simple example below is intended to be a gentle introduction to Python classes and a small class hierarchy. from os.path import basename class AbstractReport(object): """ Base parsing class """ _table_fields = None def __init__(self, file_path, records): """ initializes the class """ self._file_path = file_path self._records = records def calc_coords(self): """ calculates coordinates to be written to report table """ raise NotImplementedError def write_table(self): """ parses the records in the file """ coords = self.calc_coords() print ('writes a table using fields %s ' '\nand calculated coords %s to file %s' % (self._table_fields, coords, basename(self._file_path))) class OrthoCheckReport(AbstractReport): """ Orthongonal Check Report Class """ _table_fields = ['FLD_A', 'FLD_B', 'FLD_C'] def calc_coords(self): """ calculates coordinates to be written to report table """ print ('special Orthogonal Check report calculations using records %s' % self._records) return ['ortho', 'check', 'results'] class QAQCReport(AbstractReport): """ QAQC Report class """ _table_fields = ['FLD_X', 'FLD_Y', 'FLD_Z'] def calc_coords(self): """ calculates coordinates to be written to report table """ print ('special QAQC report calculations using records %s' % self._records) return ['qaqc', 'report', 'results'] if __name__ == '__main__': input_file = r'c:\test\seismic_file.txt' records = ['reca', 'recb', 'recc'] ocr = OrthoCheckReport(input_file, records) qqr = QAQCReport(input_file, records) ocr.write_table() qqr.write_table() When run, this code prints: special Orthogonal Check report calculations using records ['reca', 'recb', 'recc'] writes a table using fields ['FLD_A', 'FLD_B', 'FLD_C'] and calculated coords ['ortho', 'check', 'results'] to file seismic_file.txt special QAQC report calculations using records ['reca', 'recb', 'recc'] writes a table using fields ['FLD_X', 'FLD_Y', 'FLD_Z'] and calculated coords ['qaqc', 'report', 'results'] to file seismic_file.txt In the hierarchy above the code for the write_table method is only present in one of the classes (the AbstractReport class) but instances of the other classes ( OrthoCheckReport or QAQCReport) can still call that method. This is because both OrthoCheckReport and QAQCReport are "subclasses" of the AbtractReport "base class" and "inherit". A subclass that inherits from a base class has access to all the methods and properties of the base class. This means that regardless of what class is created above, calls to write_report will go through the same method. The calc_coords method demonstrates what happens when code needs to be different in the subclasses from the base class. Each of the subclasses has a different way of calculating the coordinates for the table and therefore has unique code. To ensure this, the subclasses 'overload' the calc_coords method from the base class. As demonstrated above, 'overloading' in Python is as simple as just adding a method with the same name to your subclasses. This means that even though the write_table method has the exact same code for all of the classes, when it calls calc_coords it will follow a unique path through the subclasses. By doing this, unnecessary logic (like extra "if" statements) are eliminated from the code making it more streamlined and much easier to read. To make a class inherit from another class simply include the name of the desired base class in the class declaration: class OrthoCheckReport(AbstractReport): When you do this make sure that the initialization (__init__) of the subclass is the same as the initialization of the base class. If it is different you will need to write an __init__ for the subclasses as well. Check the documentation and help for examples. Python has become one of the most popular open source programming languages. As such, the users of Python have created literally thousands of open source packages, many of which are directly applicable to the kind of things you want to do in your applications. In the Geodetics Toolbox we use many open source packages to achieve our client's goals. As an example, consider a common client's requests: create a pdf report from analysis performed in ArcGIS. As it turns out there is an easy-to-use cross platform open source package available called the ReportLab Toolkit which has the ability to create pdf documents. This package contains comprehensive and robust pdf manipulation capabilities as well as excellent documentation and a tutorial to help people get started. Using this package we were able to write reports and data to pdf documents with relative ease in a very short total development time. So next time you get a request, ask yourself the question "has someone else already done this" and search the internet before diving directly into development. When you find a package that does exactly what you need the first thing to do is to read the license. Open Source licenses come in a variety of different forms and many are written to prevent software from being "closed source". It is extremely important to read the license very closely and make sure that you are using the package correctly. In the case of the ReportLab Toolkit, the license is a form of the Berkeley Software Distribution license (commonly referred to as the "BSD" license). This license is very permissive and allows the software to be used and distributed in other proprietary software given a few minor conditions. Other licenses are not nearly as permissive and are designed to ensure that software which uses other open source software is open source as well (for example the GPL). Take the time to familiarize yourself with the most common licenses so you know what and how you can use open source packages. A useful table of licenses can be found here:. Another very useful site is which contains information on all open source licenses. For the Geodetics Toolbox we incorporated the ReportLab Toolkit as a subpackage to our package which means we actually delivered the code with our code. While it may seem simple to reference open source packages in your code, it is important that you actually incorporate the package. This allows you to control the version of the package being used and ensures that the package is available on the clients machine. Requesting that the client install the package themselves is a hassle to the client and should be avoided. Again, when you deliver an open source package with your code it is imperative that you read the license and fulfill the obligations in that license. To deliver the functionality in the Geodetics Toolbox, we chose Python for its development language. Python and the Geoprocessing Framework enabled us to deliver the functionality quickly and have it look and feel exactly like the rest of the ArcGIS product. Using modules that are part of any Python install we created a suite of unittests to ensure the quality of our product over many code deliveries and new functionality requests. We leveraged Python's capabilities as an Object Oriented programming language to reduce code duplication in our code base making the code easier to maintain. Finally, because Python has such a large open source community we were able to find open source package that shortened our development time and helped us meet client needs. Integrated Informatics Inc. is a leading consultancy for Geographic Information System implementation and development. Founded in 2002, Integrated provides spatial data management and automated mapping solutions to clients throughout North America with offices in Calgary, Alberta, St. John's, Newfoundland and Houston, Texas. Integrated has longstanding relationships with its clients who comprise the major and super-major independent and integrated Energy companies, Provincial and State Governments, and Engineering and Environmental consultancies. We have a proven track record of developing and implementing strategies, systems, and technologies that support business goals and deliver corporate value. Our strength is from our people. Our team is comprised of experienced professionals in the fields of Geographic Information Systems, Spatial Data Management, Project Data Management, Application Development and discipline specific experts. We have experts on staff in the disciplines of pipeline engineering, environmental and geoscience analysis. Our work environment promotes ideas, innovation, and implementation through professional development, continued training, industry involvement, and internally funded research and development. Integrated is a Silver Tier International Esri business partner, an active participant in the beta program for holistic testing for Esri and sits on the advisory board for the GIS Program at the Southern Alberta Institute of Technology (SAIT). Visit us at or contact us at gis@integrated-informatics.com for more information about our unique solutions.
http://resources.arcgis.com/en/communities/python/01r500000005000000.htm
CC-MAIN-2017-17
en
refinedweb
WebSocket is a recent technology that provides two-way communication over a TCP connection. This allows us to create real-time web apps where servers can push data to clients. In this blog post, I’ll demonstrate how this can be done by building a simple chat app using ASP.NET WebAPI and ASP.NET’s new support for WebSockets in .NET 4.5. Before we get started, there are a few requirements for using WebSockets. It must be supported by both the browser and the web server. The WebSocket protocol is currently supported in Chrome, Firefox, and Safari and will be supported in the upcoming Internet Explorer 10 release. On the server side of things, you will need Windows 8 (or Windows Server 2012) to support WebSockets. Now you may not always be able to guarantee that your client browser and your web server support WebSockets. If that’s your case, I highly recommend you take a look at SignalR. SignalR provides you with the abstraction of a real-time, persistent connection without having to worry about how the data is being sent back and forth between the browser and the server. Now once you’ve met the requirements, you’ll need to enable support for WebSockets on IIS. You can do so by going through Control Panel > Programs > Turn Windows features on or off. You’ll then need to make sure the following boxes are checked: - Internet Information Services > World Wide Web Services > Application Development Features > ASP.NET 4.5 - Internet Information Services > World Wide Web Services > Application Development Features > WebSocket Protocol - .NET Framework 4.5 Advanced Services > ASP.NET 4.5 Next, it’s time to write our app. Start by creating a new Empty ASP.NET MVC 4 App in Visual Studio 2012. Create a new HTML page called “chat.htm” with this in the body: 1: <form id="chatform" action=""> 2: <input id="inputbox" /> 3: </form> 4: <div id="messages" /> The HTML here is just a simple chat field to enter messages and a <div> to display our broadcast messages at. Next, let’s implement our client-side WebSocket functionality: 1: $(document).ready(function () { 2: 3: var username = prompt('Please enter a username:'); 4: 5: var uri = 'ws://' + window.location.hostname + window.location.pathname.replace('chat.htm', 'api/Chat') + '?username=' + username; 6: websocket = new WebSocket(uri); 7: 8: websocket.onopen = function () { 9: $('#messages').prepend('<div>Connected.</div>'); 10: 11: $('#chatform').submit(function (event) { 12: websocket.send($('#inputbox').val()); 13: $('#inputbox').val(''); 14: event.preventDefault(); 15: }); 16: }; 17: 18: websocket.onerror = function (event) { 19: $('#messages').prepend('<div>ERROR</div>'); 20: }; 21: 22: websocket.onmessage = function (event) { 23: $('#messages').prepend('<div>' + event.data + '</div>'); 24: }; 25: }); This JavaScript depends on the JQuery library. Note how we’re sending a username in the query string for our initial web socket, and note how we’re sending messages over the websocket whenever a message is submitted, and that we display messages received over the websocket as well. Now it’s time to implement our server-side WebSocket handler. First, let’s make sure that the right route is configured for WebAPI to receive the WebSocket upgrade request. You’ll need to run this command in your Application_Start method: 1: routes.MapHttpRoute( 2: name: "DefaultApi", 3: routeTemplate: "api/{controller}/{id}", 4: defaults: new { id = RouteParameter.Optional } 5: ); If you’re using an empty MVC project template, this should already be there for you in RouteConfig.cs. Next, let’s implement our WebAPI controller: 1: public class ChatController : ApiController 2: { 3: public HttpResponseMessage Get(string username) 4: { 5: HttpContext.Current.AcceptWebSocketRequest(new ChatWebSocketHandler(username)); 6: return Request.CreateResponse(HttpStatusCode.SwitchingProtocols); 7: } 8: 9: class ChatWebSocketHandler : WebSocketHandler 10: { 11: private static WebSocketCollection _chatClients = new WebSocketCollection(); 12: private string _username; 13: 14: public ChatWebSocketHandler(string username) 15: { 16: _username = username; 17: } 18: 19: public override void OnOpen() 20: { 21: _chatClients.Add(this); 22: } 23: 24: public override void OnMessage(string message) 25: { 26: _chatClients.Broadcast(_username + ": " + message); 27: } 28: } 29: } Our controller has just one method, Get, which listens for WebSocket upgrade requests. Since the initial upgrade handshake for WebSockets looks just like an HTTP request/response, this allows us to go through the entire WebAPI pipeline just as if it were any other HTTP request/response. This means for example that message handlers will run, action filters get called, and model binding lets us bind the request to action parameters. In the example above, we’ve bound the username from the query string as an action parameter. The Get action then does two things. It first accepts the web socket request and sets up a WebSocketHandler to manage the connection. It then sends back a response with HTTP status code 101, notifying the client that is has in fact agreed to switch to the WebSocket protocol. The ChatWebSocketHandler then just takes care of managing a list of chat clients and broadcasting the message to all clients when it receives a message. So we’re able to have WebAPI handle the initial WebSocket upgrade request and create a web socket handler to manage the lifetime of the connection. You’ll need the Microsoft.WebSockets NuGet package to be able to build the controller above. Note that because of limitations of the WebSocket feature, you’ll need to deploy the app to IIS or IIS Express. This won’t work in the Visual Studio Development Server. Finally, you’ll need this in your Web.config to get WebSockets working: 1: <configuration> 2: <appSettings> 3: <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" /> 4: </appSettings> 5: </configuration> And there you go, you should be able to deploy your app to IIS and test your chat app. You can try opening multiple browser windows to see chat messages broadcast in real-time to all the chat clients. The information in your post about ASP .NET is more interesting. and this info is more useful for the developers to develop the .Net Application. Thanks for share this valuable info. brucewhitney.devhub.com This is a really great post, do you have any example how to implement WebSocketsin web forms rather than MVC. Thanks Any chance you could upload the source code ? Good info, but it's now 3 1/2 years old. Is it still correct and current? Is there any way to keep the connection open between multiple pages and use the full MVC routing structure?
https://blogs.msdn.microsoft.com/youssefm/2012/07/17/building-real-time-web-apps-with-asp-net-webapi-and-websockets/
CC-MAIN-2017-17
en
refinedweb
On Sun, Aug 11, 2013 at 07:16:30AM +0200, Michael Haggerty wrote: > On 08/11/2013 01:20 AM, Fredrik Gustafsson wrote: > > [...] > > It would be very hard to do a tool such as you describe, the reason is > > that there's no sane way to order your tags. Git today show tags > > alphabetically, all versions does not have a alphabtic order. [...] > > It would be quite easy to make a script that create such branch for you, > > if you only can sort the tags somehow. > > GNU sort has a nice option that can sort this way: > > -V, --version-sort > Sort by version name and number. It behaves like a standard sort, > except that each sequence of decimal digits is treated numerically > as an index/version number. Advertising That's a nice feature, I remember we had that one as a feature request for git tag just a few days ago. It works well with git.git version numbers but won't be usefull in this case for git.git since git.git has other tags too (like the gitgui version tags). However if you've a nice namespace for the tags where you only tags versions, it might be an alternative. --
https://www.mail-archive.com/git@vger.kernel.org/msg33852.html
CC-MAIN-2017-17
en
refinedweb
tst man page tst — ternary search trie functions Synopsis #include <inn/tst.h> struct tst; struct tst *tst_init(int node_line_width); void tst_cleanup(struct tst *tst); int tst_insert(struct tst *tst, const unsigned char *key, void *data, int option, void **exist_ptr); void *tst_search(struct tst *tst, const unsigned char *key); void *tst_delete(struct tst *tst, const unsigned char *key); Description tst_init allocates memory for members of struct tst, and allocates the first node_line_width nodes. A NULL pointer is returned by tst_init if any part of the memory allocation fails. On success, a pointer to a struct tst is returned. The value for node_line_width must be chosen very carefully. One node is required for every character in the tree. If you choose a value that is too small, your application will spend too much time calling malloc(3) and your node space will be too spread out. Too large a value is just a waste of space. tst_cleanup frees all memory allocated to nodes, internal structures, as well as tst itself. tst_insert inserts the string key into the tree. Behavior when a duplicate key is inserted is controlled by option. If key is already in the tree then TST_DUPLICATE_KEY is returned, and the data pointer for the existing key is placed in exist_ptr. If option is set to TST_REPLACE then the existing data pointer for the existing key is replaced by data. Note that the old data pointer will still be placed in exist_ptr. If a duplicate key is encountered and option is not set to TST_REPLACE then TST_DUPLICATE_KEY is returned. If key is zero length then TST_NULL_KEY is returned. A successful insert or replace returns TST_OK. A return value of TST_ERROR indicates that a memory allocation error occurred while trying to grow the node free. Note that the data argument must never be NULL. If it is, then calls to tst_search will fail for a key that exists because the data value was set to NULL, which is what tst_search returns. If you just want a simple existence tree, use the tst pointer as the data pointer. tst_search finds the string key in the tree if it exists and returns the data pointer associated with that key. If key is not found then NULL is returned, otherwise the data pointer associated with key is returned. tst_delete deletes the string key from the tree if it exists and returns the data pointer assocaited with that key. If key is not found then NULL is returned, otherwise the data pointer associated with key is returned. History Converted to POD from Peter A. Friend's ternary search trie documentation by Alex Kiernan <alex.kiernan@thus.net> for InterNetNews 2.4.0. $Id: tst.pod 9073 2010-05-31 19:00:23Z iulius $
https://www.mankier.com/3/tst
CC-MAIN-2017-17
en
refinedweb
Microsoft Store Services SDK launches support for interstitial banner ads We are excited to announce the launch of interstitial banner ads support in the Microsoft Store Services SDK. Interstitial banner ads have been one of the top Windows Dev Center feature requests since we introduced support for interstitial video ads. What are interstitial banner ads? Interstitial banner ads are a very popular way of monetizing apps and games because they offer much higher eCPMs than standard banner ads. They can earn up to 8-10 times more than standard banner ads for the following reasons: -. Mobile Interstitial Ad Sample Tablet/Desktop Interstitial Ad Sample Where are interstitial banner ads available? This new ad format is available in the latest release of the Microsoft Store Services SDK for Universal Windows Platform (UWP) apps for Windows 10. If you haven’t started monetizing your app or game with ads using the Microsoft Store Services SDK, this is a great time to start. We have lined up some advertising networks to enable interstitial banner ads to earn good revenue for developers, and we will continue to on-board more networks to increase the earning potential of this and other ad formats. To learn more about the options for adding ads to your apps with this SDK, see this article. How do I get started? Since we are in beta, please mail out to aiacare@microsoft.com for onboarding your app or game to the beta program. Once you have an interstitial ad unit created by us, you can continue with the following steps to incorporate into your app or game. To add an interstitial banner ad into your game or app, use the InterstitialAd class from the Microsoft Store Services SDK. If you are already familiar with the steps for adding interstitial video ads to a game or app, the process for adding interstitial banner ads is nearly identical. The only difference is that when you call the RequestAd method to fetch an ad, you specify AdType.Display for the ad type (this is a new enum value in the latest release of the SDK). using Microsoft.Advertising.WinRT.UI; … … // declare the object and set your app parameters InterstitialAd myInterstitialAd = null; string myAppId = "d25517cb-12d4-4699-8bdc-52040c712cab"; string myAdUnitId = "11389925"; … … // instantiate the Ad myInterstitialAd = new InterstitialAd(); myInterstitialAd.AdReady += MyInterstitialAd_AdReady; myInterstitialAd.ErrorOccurred += MyInterstitialAd_ErrorOccurred; myInterstitialAd.Completed += MyInterstitialAd_Completed; myInterstitialAd.Cancelled += MyInterstitialAd_Cancelled; … … // request for the ad a few seconds before you intend to display myInterstitialAd.RequestAd(AdType.Display, myAppId, myAdUnitId); … … // display the ad if (InterstitialAdState.Ready == myInterstitialAd.State) { myInterstitialAd.Show(); } … … For the complete steps and code samples for adding an interstitial banner or interstitial video ad to your game or app, see this article. What are the best practices? Because of their larger size, interstitial banner ads require more bandwidth on user devices than standard banner ads. To accommodate this, we recommend that you use the RequestAd method to fetch an interstitial banner ad around 3-4 seconds before you want to display it. To make the most of interstitial ads in your game or app, check out our interstitial guidelines. Join the conversation Thanks For sharing happy new year 2018 Very good Info Provided by you!!
https://blogs.windows.com/buildingapps/2017/03/27/microsoft-store-services-sdk-launches-support-interstitial-banner-ads/
CC-MAIN-2017-17
en
refinedweb
On Tuesday 04 November 2003 01:02 am, Delaney, Timothy C (Timothy) wrote: > > From: Alex Martelli [mailto:aleaxit at yahoo.com] > > > > BTW, when we do come around to PEP 318, I would suggest the 'as' > > clause on a class statement as the best way to specify a metaclass. > > I just realised what has been bugging me about the idea of > > def foop() as staticmethod: > > and it applies equally well to > > class Newstyle as type: > > Basically, it completely changes the semantics associated with 'as' in > Python - which are to give something a different name (technically, to > rebind the object to a different name). Yes, that's what the 'as' clause means in from and import statements, of course. > OTOH, the first case above means 'create this (function) object, call this > decorator, and bind the name to the new object'. So instead of taking an > existing object (with an existing name) and rebinding it to a new name, it > is creating an object, doing something to it and binding it to a name. A > definite deviation from the current 'as' semantics, but understandable. I'm not sure I follow. "import X as y" means basically y = __import__('X') (give or take a little:-). 'def foo() as staticmethod:' would mean instead foo = staticmethod(new.function(<codeobj>, globals(), 'foo')) so what comes after the 'as' is a name to bind in the existing case, it's a callable to call in the new proposed syntax. There is a binding in each case, and in each case something is called to obtain the object to bind; I think the distinction between new and existing object is spurious -- __import__ can perfectly well be creating a new object -- but the real distinction is that the name to bind is given after 'as' in the existing case, it's NOT so given in the new proposed one. > However, the second case above is doing something completely different. It Not at all -- it does: Newstyle = type('Newstyle', (), <classdict>) where <classdict> is built from the body of the 'class' statement, just like, above, <codeobj> is built from the body of the 'def' statement. I find this rather close to the 'as staticmethod' case: that one calls staticmethod (the callable after the 'as') and binds the result to the name before the 'as', this one calls type (the callable after the 'as') and binds the result to the name before the 'as'. > is creating a new object (a class) and binding it to a name. As a side > effect, it is changing the metaclass of the object. The 'as' in this case "changing"? From what? It's _establishing_ the type of the name it's binding, just as (e.g.) staticmethod(...) is. I.e., stripping the syntax we have in today's Python: >>> xx = type('xx', (), {'ba':23}) >>> type(xx) <type 'type'> >>> xx = staticmethod(lambda ba: 23) >>> type(xx) <type 'staticmethod'> ...so where's the "completely different" or the "changing" in one case and not the other...? > has nothing whatsoever to do with binding the object name, but a name in > the object's namespace. It has everything to do with determining the type of the object, just like e.g. staticmethod would. > I suppose you could make the argument that the metaclass has to act as a > decorator (like in the function def above) and set the __metaclass__ > attribute, but that would mean that existing metaclasses couldn't work. It > would also mean you were defining the semantics at an implementation level. I'm sure I've lost you completely here, sorry. >>> class xx(object): pass ... >>> xx.__metaclass__ Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: type object 'xx' has no attribute '__metaclass__' why would a class created this way have to set '__metaclass__', again? A metaclass is the class object's type, and it's called to create the class object. If I do "xx = type('xx', (), {})" I get exactly the same result as with the above "class xx" statement -- no more, no less. "class" just gives me neat syntax to determine the 3 arguments with which the metaclass is called -- a string that's the classname, a tuple of bases, and a dictionary. That "__metaclass__ attribute" is just an optional hack which Python can decide to determine _which_ metaclass to call (in alternative to others, even today) for a certain 'class' statement. > I'm worried that I'm being too picky here, because I *like* the way the > above reads. I'm just worried about overloading 'as' with too many > essentially unrelated meanings. I accept that in both 'def foo() as X' and 'class foo as X' the X in "as X" is very different from its role in 'import foo as X' -- in the import statement, X is just a name to which to bind an object, while in the def and class statements X would be a callable to call in order to get the object -- and the name to bind would be the one right after the def or class keywords instead. So maybe we should do as Phillip Eby suggests and use 'is' instead - that's slightly stretched too, because after "def foo() is staticmethod:" it would NOT be the case that 'foo is staticmethod' holds, but, rather, that isinstance(foo, staticmethod) [so we're saying "IS-A", not really "IS"]. But the def and class statements cases are SO close -- in both what comes after the 'is' (or 'as') is a callable anyway. The debate is then just, should said callable be called with an already prepared (function or class) object, just to decorate it; or should it rather be called with the elementary "bricks" needed to build the object, so it can build it properly. Incidentally, it seems to me that it might not be a problem to overload e.g. staticmethod so it can be called with multiple arguments (same as new.function) and internally calls new.function itself, should there be any need for that (not that I can see any use case right now, just musing...). Alex
https://mail.python.org/pipermail/python-dev/2003-November/039941.html
CC-MAIN-2017-17
en
refinedweb
ping_iterator_get_info - Receive information about a host #include <oping.h> int ping_iterator_get_info (pingobj_iter_t *iter, int info, void *buffer, size_t *buffer_len);.h" and. The buffer argument is a pointer to an appropriately sized area of memory where the result of the call will be stored. The buffer_len value is used as input and output: When calling ping_iterator_get_info it reports the size of the memory region pointed to by buffer. The method will write the number of bytes actually written to the memory into buffer_len before returning. ping_iterator_get_info returns zero if it succeeds. EINVAL is returned if the value passed as info is unknown. Both, buffer and buffer_len, will be left untouched in this case. If the requested information didn’t fit into buffer then the size that would have been needed is written into buffer_len; buffer itself is left untouched. The return value is ENOMEM in this case. · PING_INFO_RECV_TTL is not available under Debian Etch due to a missing define in the header files. ping_iterator_get(3), liboping(3) liboping is written by Florian octo Forster <octo at verplant.org>. Its homepage can be found at <>. (c) 2005-2009 by Florian octo Forster.
http://huge-man-linux.net/man3/ping_iterator_get_info.html
CC-MAIN-2017-17
en
refinedweb
.... Printable View .... Code: #include <stdio.h> #include <stdlib.h> /* self referential structure */ struct listNode { char data; /* each listNode contains a character */ struct listNode *nextPtr; /* pointer to the next node */ struct listNode *prevPtr; /* pointer to previous node */ }; /* end structure listNode */ typedef struct listNode ListNode; /* synonym for struct listNode */ typedef ListNode *ListNodePtr; /* synonym for ListNode* */ /* prototypes */ void insert( ListNodePtr *sPtr, char value ); char delete( ListNodePtr *sPtr, char value ); int isEmpty( ListNodePtr sPtr ); void printList( ListNodePtr currentPtr ); void instructions( void ); void printBackwards (ListNodePtr currentPtr ); int main() { ListNodePtr startPtr = NULL; /* initially there are no nodes */ int choice; /* users choice */ char item; /* char entered by user */ instructions(); /* display the menu */ printf( "? " ); scanf( "%d", &choice ); /* loop while user does not choose 3 */ while ( choice != 3 ) { switch ( choice ) { case 1 : printf( "Enter a character: "); scanf( "\n%c", &item ); insert( &startPtr, item ); /* insert item in the list */ printList( startPtr ); printBackwards( startPtr ); break; case 2: /* if list is not empty */ if (!isEmpty(startPtr )) { printf( "Enter character to be deleted: "); scanf( "\n%c",&item); /* if character is found remove it */ if (delete( &startPtr, item ) ) { printf( "%c deleted.\n", item); printList( startPtr ); printBackwards( startPtr ); } /* end if*/ else { printf( " %c not found.\n\n", item); } /* end else */ } /* end if */ else { printf( "List is empty.\n\n" ); } /* end else */ break; default: printf( "Invalid choice.\n\n" ); instructions(); break; } /* end switch */ printf("? "); scanf( "%d", &choice ); } /* end while */ printf( "end of run.\n" ); return 0; /* indicate sucessful termination */ } /* end main */ /* display program instructions to user */ void instructions ( void ) { printf( "Enter your choice:\n" " 1 to insert an element into the list.\n" " 2 to delete an element from the list.\n" " 3 to end.\n" ); } /* end of instructions */ /* insert a new value into the list in sorted order */ void insert ( ListNodePtr *sPtr, char value) { ListNodePtr newPtr; /* pointer to a new node */ ListNodePtr previousPtr; /* pointer to previous node in list */ ListNodePtr currentPtr; /* pointer to current node on list */ newPtr = malloc( sizeof( ListNode )); /* create node */ if ( newPtr != NULL ) { /* is space available */ newPtr->data = value; /*place value in node */ newPtr->nextPtr = NULL; /*node does not link to another node */ newPtr->prevPtr = NULL; previousPtr = NULL; currentPtr = *sPtr; /* loop to find the correct location in the list */ while (currentPtr != NULL && value > currentPtr->data) { previousPtr = currentPtr; /* walk to........*/ currentPtr = currentPtr->nextPtr; /* .....next node */ /* add here , point up stream */ } /* end while */ /* insert new node at begining of list */ if ( previousPtr == NULL ) { newPtr->nextPtr = *sPtr; if(*sPtr != NULL) (*sPtr)->prevPtr = newPtr; *sPtr = newPtr; } /* end if */ else { /* insert new node between previousPtr and currentPtr */ newPtr->prevPtr = previousPtr; previousPtr->nextPtr = newPtr; newPtr->nextPtr = currentPtr; if (currentPtr != NULL) currentPtr->prevPtr = newPtr; } /* end else */ } /* end if */ else { printf( "%c not inserted. No memory available.\n", value ); } /* end else */ } /* end function insert */ /* delete a list element */ char delete ( ListNodePtr *sPtr, char value ) { ListNodePtr previousPtr; /* pointer to previous node on list */ ListNodePtr currentPtr; /* pointer to current node on list */ ListNodePtr tempPtr; /* temperary node pointer */ /* delete first node */ if (value == ( *sPtr )->data) { tempPtr = *sPtr; /* hold onto node being removed */ *sPtr = ( *sPtr )->nextPtr; /* de-thread the node */ free( tempPtr ); /* free the de-threaded node */ return value; } /* end if */ else { previousPtr = *sPtr; currentPtr = ( *sPtr )->nextPtr; /* loop to find correct location on list */ while ( currentPtr != NULL && currentPtr->data != value ) { previousPtr = currentPtr; /* walk to ....*/ currentPtr = currentPtr->nextPtr; /*....next node*/ } /* end while */ /* delete node at currentPtr */ if ( currentPtr != NULL ) { tempPtr = currentPtr; previousPtr->nextPtr = currentPtr->nextPtr; free ( tempPtr ); return value; } /* end if */ } /* end else */ return '\0'; } /* end function delete */ /* return 1 if list is empty, 0 otherwise */ int isEmpty ( ListNodePtr sPtr ) { return sPtr == NULL; } /* end function isEmpty */ /* Print the list */ void printList ( ListNodePtr currentPtr ) { /* if list is empty */ if ( currentPtr == NULL) { printf( " The list is empty.\n\n" ); } /* end if */ else { printf( " The list is:\n" ); /* while not the end of the list */ while ( currentPtr != NULL ) { printf( "%c --> ", currentPtr->data ); currentPtr = currentPtr->nextPtr; } /* end while */ printf( "NULL\n\n" ); }/* end else */ } /* end function printlist */ void printBackwards ( ListNodePtr currentPtr ) { ListNodePtr temp = NULL; while ( currentPtr != NULL ) { temp = currentPtr; currentPtr = currentPtr->nextPtr; } printf( "\nThe list in reverse is:\n" ); printf( "NULL" ); currentPtr = temp; while ( currentPtr != NULL) { printf( " <-- %c", currentPtr->data ); currentPtr = currentPtr->prevPtr; } printf("\n\n"); } wow thank you so much. This topic in C has been driving me crazy.. Ive been stumped on that part for a while.. Thanks a lot, now thats one less thing to worry about, now its time to work on the delete function, heh :) thank you again Why did you remove your original question, now no one else knows for sure what this was about. I agree with citizen you should not have removed your original post. You might want to repost your question for the sake of everyone. I'm not a fan of playing Jeopardy either. Next time we'll either quote your entire post to reply, or we simply wont answer at all. If I wanted to take a stab at "why" my guess would be that the teacher wonders through the forums periodically. But who knows. My original problem was that I was missing part of the link in the program for prevPtr. When I ran the program, the print forward would work fine, but the print backwards wasn't working correctly. I was losing part of my list in my insert function and couldn't figure out why. For example.. When I entered the word dog.. when my program would display.. the "g" would be lost when printed in reverse and that showed my prevPtr had something wrong with it. Okay cool. Thanks for coming back to put that back in. i have a question, the double link list right, i was told it points to both the previous and the next nodes but i dont fully understand how they work and also the circular link list. I use circular linked lists for memory management frequently. The proven advantage of circular linked lists is it very easy to insert elements without having any sort of "special case"
https://cboard.cprogramming.com/c-programming/108647-modify-make-doubly-linked-list-printable-thread.html
CC-MAIN-2017-17
en
refinedweb
Hello, I've got a function that returns a memory addres of an array. However making the char * static seems to be the only solution. However it since its static it doesnt get cleaned out so everything is kept. #include <iostream> // cout << endl ... #include <fstream> // for reading in the file ... #include <cstring> // for comparing strings ... using namespace std; char temp[100]; // Global array to hold line by line char * getsym(); void file_input(void); int main() { char * a; file_input(); a = getsym(); cout <<a; a = getsym(); cout << a; return 0; } //sorts out the temp array and returns tokens. char * getsym() { char buffer[10]; char ch; cout << "getting the next token" << endl; static int i = 0; ch = temp[i]; if(!isdigit(ch)) { while(isalpha(ch)) { buffer[i] = ch; cout << "***Is alpha function called *** " << endl; i++; ch = temp[i]; } return buffer; } if(!isalpha(ch)) { while(isdigit(ch)) { buffer[i] = ch; cout << "***Is digit function called *** " << endl; i++; ch = temp[i]; } return buffer; } if(ch == '+') { cout << "\nLanguage symbol found = " << ch << endl; buffer[0] = '+'; return (buffer); } if(ch == '$') { cout << "\nReserved symbol found = " << ch << endl; i++; } return NULL; } I've got a theory of how to fix it. put a for loop in that loops round 10 times filling it with. Err Don't know. Can anyone help with this? point me in the right direction ? (no pun intended? ) Thanks.
https://www.daniweb.com/programming/software-development/threads/39428/code-bug
CC-MAIN-2017-17
en
refinedweb
One of the routine tasks in many Java applications is to convert a string to a number. For instance, you may have a form where user submits his or her age. The input will come to your Java application as a String but if you need the age to do some calculation you need to convert that String into a number. Like, int age = new Integer(ageString).intValue(); But there is a possibility that the user might have entered an invalid number. Probably they entered “thirty” as their age. If you try to convert “thirty” to a number you will get NumberFormatException. One way to avoid this is to catch and handle the NumberFormatException. But this is not the ideal and the most elegant solution to convert a string to a number in Java. Another approach is to validate the input string before performing the conversion using Java regular expression. I like that second approach because it is more elegant and it will keep your code clean. Here is how you can validate if a String value is numeric using Java regular expression. import java.util.regex.Matcher; import java.util.regex.Pattern; /*A Java class to verify if a String variable is a number*/ public class IsStringANumber{ public static boolean isNumeric(String number){ boolean isValid = false; /*Explaination: [-+]?: Can have an optional - or + sign at the beginning. [0-9]*: Can have any numbers of digits between 0 and 9 \\.? : the digits may have an optional decimal point. [0-9]+$: The string must have a digit at the end. If you want to consider x. as a valid number change the expression as follows. (but I treat this as an invalid number.). String expression = "[-+]?[0-9]*\\.?[0-9\\.]+$"; */ String expression = "[-+]?[0-9]*\\.?[0-9]+$"; CharSequence inputStr = number; Pattern pattern = Pattern.compile(expression); Matcher matcher = pattern.matcher(inputStr); if(matcher.matches()){ isValid = true; } return isValid; } } The code above is self-explanatory. I explained the logic on how to build the regular expression to validate that a string has all numeric digits. I then use Matcher and Pattern classes to test the input string against the regular expression. If the input string passes the regular expression validation the method will return true, indicating that the string contains a numeric value. If the input string does not match the regular expression the method will return false. Enjoy.! It does not work for exponential numbers (e.g.: 0.1e-10). Maybe using this regex is enough: “[-+]?[0-9]*\\.?[0-9]+e[-]?[0-9]+$” (I’ve basically added ‘e[-]?[0-9]+’ at the end of your regex) Sorry, the right regex is: ^[-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?$ Hello there, You have done a fantastic job. I will certainly digg it and personally suggest to my friends. I am confident they’ll be benefited from this web site. my web site; horror movies 2013 At times, the by a bowel deals too much or even as well powerfully, and so foodstuff techniques from the digestive system too quickly, causing diarrhea. Other instances, the particular intestinal muscle tissue deal nevertheless don’t unwind all over again, or even these people commitment very slowly but surely, leading to constipation. These kind of crazy, out-of-sync lean muscle motions usually are powering this agony of IBS, comparable to lean muscle jerks in your lower-leg bring about the ache of the charley horse. All of us get intestinal tract gasoline, but for people with IBS, it could come to be stuck interior, causing bloating as well as distention. Also visit my web blog IBS Cure Greetings! Very helpful advice within this post! It is the little changes which will make the most important changes. Thanks a lot for sharing! We stumbld over here by a different page and thought I may as well check thigs out. I like wat I see so now i’m following you. Look forward to exploring your web page again. I go to see every day a few web pages and websites to read content, except this website presents quality based content.! Thanks for another fantastic article. Where else may anyone get that kind of information in such an ideal means of writing? I have a presentation subsequent week, and I’m at the look for such information. I’ll right away grasp your rss as I can not find your e-mail subscription link or e-newsletter service. Do you have any? Please permit me know so that I may subscribe. Thanks. If you’re searching for tips related to penny stocks, there are several obtainable. The most striking reason is that the recession period is going on throughout the world and the investors can invest with a small amount of money. Are you looking to trade penny stocks to earn a good return on your investment. Hi, i feel that i saw you visdited my web site thus i came to go back the prefer?.I’m attempting to in finding issues to improve mmy web site!I assume its good enough to use some of your ideas!! A personal blog is a smart statement, so why not start blogging now. Whenever I write an article, I see to it that I will have a good transition leading my readers to my website. Then to share the good news of what you can do to control it and not let it ruin your life. It is common for people to blog instead of set up an online site because they lack the funds. – And the simplest way to make profit from blogging is by allowing Google Adsense or any other ads on your blog. my web page :: nitin yadav personal blog post; casette da giardino It’s remarkable in support of me to have a web site, which is beneficial designed for my knowledge. thanks admin Feel free to surf to my webpage … Nam Khoa I read this piece of writing completely about the resemblance of most recent and earlier technologies, it’s remarkable article. Hi there, I found your website by way of Google even as searching for a comparable subject, your site got here up, it appears good. I’ve bookmarked it in my google bookmarks. Hello there, simply became aware of your blog through Google, and found that it’s really informative. I am gonna watch out for brussels. I’ll appreciate in the event you proceed this in future. Lots :: endometriosis Having read this I thought it was very informative. I appreciate you taking the time and energy to put this article together. I once again find myself spending way too much time both reading and commenting. But so what, it was still worth it! Have a look at my page radang payudara Hey There. I discovered your blog using msn. This is an extremely neatly written article. I’ll make sure to bookmark it and come back to learn extra of your useful information. Thank you for the post. I will certainly return. Here is my homepage :: obat penghilang bau ketiak Hi there, after reading this awesome paragraph i am as well happy to share my knowledge here with colleagues. Here is my web site – kulup panjang I like what you guys are usually up too. This kind of clever work and coverage! Keep up the wonderful works guys I’ve added you guys to our blogroll. Also visit my web site; radang panggul magnificent post, very informative. I ponder why the other specialists of this sector don’t understand this. You must continue your writing. I’m sure, you have a great readers’ base already! My webpage: endometriosis This article presents clear idea for the new people of blogging, that truly how to do blogging and site-building Just desire rewarding work. What’s up to every body, it’s my first go to see of this blog; this website consists of amazing and truly excellent stuff in support of visitors. Also visit my homepage … Impotensi Hey There. I discovered your weblog the usage of msn. That is a really smartly written article. I will make sure to bookmark it and come back to learn extra of your helpful information. Thank you for the post. I’ll certainly return. my weblog – bau ketiak An outstanding share! I have just forwarded this onto a coworker who had been conducting a little research on this. And he in fact bought me dinner simply because I stumbled upon it for him… lol. So let me reword this…. Thanks for the meal!! But yeah, thanks for spending time to discuss this matter here on your internet site. I all the time emailed this webpage post page to all my contacts, as if like to read it next my contacts willl too. I am now not certain the place you’re getting your info, however good topic. I needs to spend some time studying more or understanding more. Thanks for fantastic information I was looking for this information for my mission. This is my first time pay a visit at here and i am really happy to read everthing at single place. you’re truly a good webmaster. The web site loading velocity is incredible. It sort of feels that you’re doing any distinctive trick. Furthermore, The contents are masterpiece. you have performed a wonderful job on this subject! This is my very first time to visit here. I found so many entertaining stuff in your weblog, especially in its discussion. I guess I’m not the only one having all the entertainment here! Keep up the outstanding work. I am regular reader, how are you everybody? This paragraph posted at this site is really nice. “Space Travel and the Effects of Weightlessness on the Human Body” Candian Space Agency:. Here is an outline of the different types of jobs available for travel photographers and what kind of equipment they will need to have. Everyone admires the idea of travelling though the motives, passions and preferences of travelling are completely different. ” is perfectly grammatically correct but is passive in its voice. This open architecture, in and of itself, leads to much of the confusion about voice. The DSP Inverter Technology is the true digital signal processor with added 20 percent power conversion rate compare to transistor generators. I’m not sure exactly why butt this website iss oading incrwdibly slow for me. Is anyone else having this problem or iis it a issue on my end? I’ll checck back later on aand see if the problem srill exists. Wow tha was odd. I just wroe an extremely long comment but after I clicked submit my comment didn’t appear. Grrrr… well I’m nnot writiong all that oer again. Anyways, just wanhted to say great blog! Hello, this weekend is good in support of me, because this occasion i am reading this great informative paragraph here at my home.
http://zparacha.com/best-way-to-check-if-a-java-string-is-a-number/
CC-MAIN-2017-17
en
refinedweb
an attribute that can be put on methods of NetworkBehaviour classes to allow them to be invoked on clients from a server. [ClientRPC] functions are called by code on Unity Multiplayer servers, and then invoked on corresponding GameObjects on clients connected to the server. The arguments to the RPC call are serialized across the network, so that the client function is invoked with the same values as the function on the server. These functions must begin with the prefix "Rpc" and cannot be static. #pragma strict public class Example extends NetworkBehaviour { var counter: int; @ClientRpc public function RpcDoMagic(extra: int) { Debug.Log("Magic = " + (123 + extra)); } function Update() { counter += 1; if (counter % 100 == 0 && NetworkServer.active) { RpcDoMagic(counter); } } } using UnityEngine; using UnityEngine.Networking; public class Example : NetworkBehaviour { int counter; [ClientRpc] public void RpcDoMagic(int extra) { Debug.Log("Magic = " + (123 + extra)); } void Update() { counter += 1; if (counter % 100 == 0 && NetworkServer.active) { RpcDoMagic(counter); } } } The allowed argument types are; • Basic type (byte, int, float, string, UInt64, etc) • Built-in Unity math type (Vector3, Quaternion, etc), • Arrays of basic types • Structs containing allowable types • NetworkIdentity • NetworkInstanceId • NetworkHash128 • GameObject with a NetworkIdentity component attached.
https://docs.unity3d.com/ScriptReference/Networking.ClientRpcAttribute.html
CC-MAIN-2017-17
en
refinedweb
Deploy your Django project as directory This method of deployment is convenient if you have a non wildcard SSL certificate for your domain. This way, you can add another application to your domain in a new directory (eg. example.com/foo/). The first step is to set up nginx to pass SCRIPT_NAME to your app: location /foo/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /foo; uwsgi_modifier1 30; uwsgi_pass unix:/run/uwsgi/app/foo/socket; } In your Django application, you must always use the URL resolver to generate URLs (reverse()), if you really want to hack your URLs you can use get_script_prefix() with something like that: from django.core.urlresolvers import get_script_prefix script_prefix = get_script_prefix() The last step is to define STATIC_URL to ‘/foo/static/’ (and MEDIA_URL if used). Note: admin URLs use the ‘admin’ namespace so for example, you can do resolve(‘admin:index’) to resolve as ‘/foo/admin/’.
https://medium.com/@sraimbault/deploy-your-django-project-as-directory-3689023d9f1b?source=user_profile---------1-----------
CC-MAIN-2017-17
en
refinedweb
24 August 2012 10:42 [Source: ICIS news] SINGAPORE (ICIS)--Lucite International is planning to shut its 120,000 tonne/year methyl methacrylate (MMA) plant at ?xml:namespace> The company will shut the plant for around three weeks and will begin production early to mid-October. The company uses ethylene, carbon monoxide and methanol as feedstock. Lucite International is a subsidiary of Japanese producer Mitsubishi Rayon Co (MRC), which is one of the largest MMA
http://www.icis.com/Articles/2012/08/24/9589684/lucite-to-shut-singapore-mma-unit-end-sept-for-maintenance.html
CC-MAIN-2015-06
en
refinedweb
I’ve been tracking the Rich Internet Application (RIA) framework technology scene lately. That’s a broad category. As the technology is put to use, its applicability grows into other domains. Mobile or otherwise-embedded devices, set-top boxes or game consoles, tables, and stand-alone or kiosk applications are all targets now. Web applications are still the largest niche for RIA platforms, so I’ve compiled a list of the web-oriented technologies for comparison. Since I most enjoy writing web applications in Ruby, I’m tracking the way each platform supports Ruby integration—specifically Ruby on Rails. Here are the contenders, in order of fitness for web application development, according to my own opinion: 1. OpenLaszlo - Development tools are few but seldom needed. Flash 8 and a good editor suffice. - They beat everyone else to the punch: OL3 has been stable since mid-2006; OL4 has been stable for a few months - Language is LZX, an XML + EcmaScript variant. - Clear XML syntax (no messy namespaces, doesn’t rely on XSLT) - 3.3.3 release only works with Flash plugin, which is not slated to become open-source - Also runs in browser alone with DHTML engine (in OL4) - Open source everything else, including free tools, ease of contribution, available community - Strong cross-platform support. I’ve used it on Windows, Linux and OS X. - No way to run Ruby in-place, although it can be a front-end for Rails - Widget toolkit is custom “LPS” widgets 2. Silverlight - Seems farther ahead than Apollo with respect to tools and availability - Tools are good but proprietary (Expression, Visual Studio) - XAML + DLR means you can use C#, which I always prefer over JavaScript - No Linux support until the Mono project writes it - May eventually become somewhat open-source. Maybe. My guess is the runtime will never be open. - May eventually run Ruby, but can probably integrate with Rails as a front-end currently - Browser plugin only, although the code is “portable” to other CLR runtimes - No widget toolkit yet, although it’s reportedly in development 3. Apollo - Tools are lagging behind, except for FlexBuilder, which is proprietary - Still only an “alpha” release - MXML/Actionscript language – looks like a mess compared to LZX, in my opinion - Open-source tools. Claims an open-source runtime by the end of the year. - Windows and OS X support now with Linux coming soon - Includes a built-in WebKit browser component - Can’t run Ruby, but demos of it as a Rails front-end exist - Desktop-app-like integration with file choosers and native installers - Flex widgets 4. JavaFX - Almost no tools yet, except those for Java which would be unfair to count here. - JavaFXScript, which is not Java nor JavaScript. New language, some interesting features, but no public libraries yet. - Open-source - Runs on JVM, which is also open-source - Can run Ruby via JRuby via Java, although there is no direct JavaFXScript<->Ruby integration yet that I’m aware of - Could certainly be a Rails front-end - Cross-platform, can use Java for filesystem access(?) - Browser or WebStart, which is semi-desktop - Swing widgets - Can use any Java library - Worst product name of the lot There are a few items missing; I’m not considering every facet of these technologies (commercial support, accessibility features, ease of deployment, etc., spring to mind). I skipped XUL because I see Firefox-only as a serious limitation, although things like Songbird are quite impressive. I didn’t include Flex itself as Apollo seems to be Adobe’s strongest entry in this arena. I like the article that Tech Team Lead News has on the same topic. Ryan Stewart has a good post that looks at Apollo and Silverlight from different web developer roles. They both missed OpenLaszlo, which seems to be common in the midst of the marketing blitzes from Microsoft and Adobe. I’m interested in what the community thinks about my assessments. What features are the most important to your consideration? What experiences have you had with these platforms that would rearrange my “best-liked” ordering? 2 Comments FWIW – If you have Apollo on your list, you can put XUL/XULRunner on the list too. Apollo is not browser-based. XULRunner is a runtime that allows XUL-based apps to run as desktop apps too. On Windows, Linux and Mac. Today. The fact that Silverlight can be embedded in HTML means that it has a lower risk profile, enabling gentle uptake and a smaller learning curve.
http://spin.atomicobject.com/2007/06/01/rich-internet-application-platform-shoot-out/
CC-MAIN-2015-06
en
refinedweb
. Guys I am working with millions of lines of data that "should" all be in the same format, however I am finding hundreds of lines that are not. A line may have ~10 fields space delimited. Suppose I am splitting the line into scalars that I can work with, and perform math on. Suppose fields 6-8 are supposed to be numeric and available for math. I am getting non-numeric warnings on several of them and just want to write the line out to an "errors" file so that I can resolve the formatting. How can I do something to test if scalar 6, 7, or 8 is not numeric (or even empty), write the line to a file and move to the next line. I can handle the "else". I'd prefer to use standard perl as it is very difficult at my company to pull in additional packages. my $input='rnbqkbnr'; my $search='n'; my $index=0; my @result; foreach(split //,$input) { if($_ eq $search) { push(@result,$index); } $index++; } print "Result: ",join(' , ',@result); [download] Esteemed monks, I wrote a small darkpan browser for $work and I have the following issue... The app is built as a Dancer web app. One of the features is that module PODs (ours, or dependencies from CPAN's) should be rendered to HTML and displayed in a page. I use Pod::Simple::XHTML to do the heavy lifting here. Since I also want the app to be relocatable without breaking all the inter-POD links, I have done this: my $pod_renderer = Pod::Simple::XHTML->new; # this is going to be inserted in a larger document $pod_renderer->html_header('[% TAGS [- -] %]'); $pod_renderer->html_footer(''); # $pod_renderer->perldoc_url_prefix('[- request.uri_base -]/mirror/[ +- selected_mirror -]/module/'); $pod_renderer->output_string(\my $html); $pod_renderer->parse_file($module); # spew $html into a file on disk [download] (ignore the [- selected_mirror -] part, that's just because we have multiple Pinto instances and the app can generate links e.g. from the integration to preproduction releases of a module) When rendering the full module page, I have a total of three (!) template rendering passes: once to turn the POD-rendered-as-template into HTML; once to render the regular .tt file, which includes the previously rendered HTML directly; and finally once because the .tt file has a layout and Dancer's template engine works like this. The first pass (POD to .tt) uses custom TAGS because otherwise the Template.pm PODs would not render properly (they of course include lots of [% %] everywhere). [- -] turned out to be a very bad idea (USAGE: foobar [-optional]) and I need to fix this. The second and third pass are just Dancer's standard template to HTML rendering. The module page has other things besides the rendered POD, so it needs to be its own template that somehow includes the templated POD... I feel like the way I wrote it is now a cluster[beep] of badly thought-out fixes upon badly thought-out fixes. Have you guys done something like this? How did you do it? Alternatively, do you see a simpler/better solution? Hello Monks, I am getting this error from a script that I have created. Initially I script with many subroutines inside, and I decide to split it on *.pm files so it can be easier readable/understandable. Since I tried to modify it, although that I am at the final steps of my script I am getting this error: Use of uninitialized value $selector in split at /usr/local/share/perl +/5.18.2/Net/OpenSSH/Parallel.pm line 141. [download] I opened the Net::OpenSSH::Parallel module at the specific line: my @parts = split /\s*,\s*/, $selector; [download] My main.pl script is pasted underneath but I do not know if it can provide much of assistance: In case you need me to post all my modules please feel free to ask me to. Update: Adding all modules. processLogFiles.pm FiLeS.pm The conf.ini file: The directories.ini file: Thanks in advance everyone for their time and effort to assist me. Hello Monks This is a continuation of last nights question [SOLVED][SOAP::Lite] Obtain request body before request is sent? but I think I need to step back and understand what's going on before moving forward. First of all, I think what I'm really after is the request envelope. I'd like to be able to print or otherwise store in a variable the XML markup in the envelope. Second of all, I need to modify the HTTP request header (not the XML header as I proposed in the above linked thread). I had my terms mixed up last night but I think I'm starting to understand better. Anyway. What I'm really after is what happens when you call $client->my_api_method(\%params); where is the XML actually built? Here is the code I'm working with <#!/usr/bin/env perl use strict; use warnings; use 5.010; use LWP::UserAgent; use SOAP::Lite; use LWP::Debug; LWP::Debug::level('+'); SOAP::Lite->import(+trace => ' +all'); my $client = SOAP::Lite->proxy($proxy) ->ns($namespace, 'foo') # I'm not sure I understand this. ->uri($uri) ->on_action(sub { sprintf '%s', $_[0] }) ->on_fault(sub { my($soap, $result) = @_; die ref $result ? "Fault Code: " . $result->faultcode . "\n" . "Fault String: " . $result->faultstring . "\n" : $soap->transport->status, "\n"; }); my $params = { foo => 'bar', biz => 'baz'}; #my $data = SOAP::Data->name($params); #my $serial = $client->serializer; #my $xml = $serial->envelope($data); #print Dumper $xml; #my $result = $client->my_api_method($params); my $result = $client->call('my_api_method' => $params); [download] I've figured out that $client->my_api_method(\%params) is the same as $client->call('my_api_method' => \%params) When running with trace, it looks like SOAP::Serializer is getting a hash, but when I call $serial->envelope($data) I get an error that it's "Wrong type of envelope (SOAP::Data=HASH(0x8feb78)) for SOAP call". (Of course all the docs point to the SOAP::Serialize docs say that it's used by SOAP::Lite and all the SOAP ::Lite docs say to see SOAP::Serialize... :/ I've looked at the code for SOAP::Serializer, I'm not convinced it's what's building the XML.) What modules used by SOAP::Lite actually generate the XML here? I can generate a SOAP::Data hash, but then what do I do with it? I know I can pass it to $client->call so something behind that is generating the xml. Thanks for the assistance. Here is my input: foo_1-a foo_2-b foo_3-b foo_4-b bar_1-a bar_2-a bar_3-b bar_4-a bar_5-b [download] And my desired output: foo 4 foo_1-a foo_2-b foo_3-b foo_4-b bar 5 bar_1-a bar_2-a bar_3-b bar_4-a bar_5_b [download] I wish that I had code to show you, but I don't know where to start and am thinking perl may not be the best thing for the job. I want to build a hash for each foo or bar, where foo/bar are the values and the hash keys are the full word. e.g.: %hash_bar =() bar_1-a => bar bar_2-a => bar bar_3-b => bar bar_4-a => bar bar_5-b => bar %hash_foo =() foo_1-a => foo foo_2-b => foo foo_3-b => foo foo_4-b => foo [download] Then I want to print any value in the hash (since they are all the same) followed by the number of keys in the hash, followed by the keys for each hash It's a stretch to call this pseudo code, but just to clarify my question: open FH, "<file.txt"; while (<FH>) { if (/((\S+)_\S+-\S+)) { #for each unique $1; %hash_$1 =(); # populate hash with keys $2 and values $1 $hash_$1($2)=$1; } } [download] I'm not expecting anyone to do this for me, but any direction to the function needed for this would be much appreciated. I know that I wouldn't be able to create the hashes in that if statement. I would need to create a hash of all the unique $1 first (so that I could use the exists function) and then for each key in that hash, read through the file again, creating a new hash for each key in the original hash. But that seems very inelegant, and I didn't know how to even write the pseudo code. Am I just totally on the wrong track? Thanks! I have a piece of code that works, but when I add use strict I get an error "Can't use string ("0") as a HASH ref while "strict refs" in use" I comment out use strict and its fine. #!/usr/local/bin/perl use strict; #use warnings; my %input; $input{"1"} = 0; $input{"2"} = 1; $input{"3"}->{a} = "ah"; $input{"3"}->{b} = "bee"; $input{"4"} = 'string'; jscript(%input); sub jscript { my %input = @_; my $total = scalar(keys(%input)); my $subtotal = $total--; my $cnt = 0; my $count = 0; my ($j,$NEST); print "Javascript literal\n\[ "; foreach my $number (sort keys %input) { $count++; unless ($input{$number} =~ /HASH/) { if ($count < $total) { print "\"$input{$number}\"\, "; } else { print "\"$input{$number}\""; } } foreach my $subject (keys %{ $input{$number} }) { if ($cnt == 0 ) { $j = '"'; } if ($cnt > 0 ) { $j = ', "';} $NEST .= "${j}${subject}\" : \"$input{$number}{$subject}\" +"; $cnt++; } if ($cnt > 0 && $count < $subtotal) { print "\{ $NEST \}\, "; $cnt = 0; $total++; $NEST = ""; } elsif ($cnt > 0 && $count == $subtotal) { print "\{ $NEST \} "; $cnt = 0; $NEST = ""; } } print " \]\n"; } [download] Script output without strict, which is perfect Javascript literal [ "0", "1", { "a" : "ah", "b" : "bee" }, "string" ] [download] Script output with strict Javascript literal Can't use string ("0") as a HASH ref while "strict refs" in use at ./r +.pl line 37. [
http://www.perlmonks.org/?next=10;node_id=479
CC-MAIN-2015-06
en
refinedweb
Tutorial for: Dajax Requirements: Dajax enables you to build web applications using only Python code and HTML, with little to no JavaScript required. This is the third and final tutorial in the Django and AJAX set. This tutorial will focus on building a simple application which uses Dajax to load data from a model and update data back into a model. This tutorial will use some JavaScript, mainly to build the request required when updating or querying objects. For an introduction on what AJAX is and how it works, please refer to the first tutorial in this set. In order to begin using Dajax, you will need to install Dajaxice. Here is what needs to be added to your settings.py to make Dajax work:', 'dajax', ..... ) and Dajax JavaScript from. Since Dajax is more complex than Dajaxice, there are additional JavaScript files which need to be loaded: {% load dajaxice_templatetags %} ... <head> ... <script type="text/javascript" src="jquery-1.7.1.min.js"></script> <script type="text/javascript" src="jquery.ba-serializeobject.min.js"></script> <script type="text/javascript" src="jquery.dajax.core.js"></script> {% dajaxice_js_import %} ... </head> This tutorial will be using jquery.ba-serializeobject.js again, so go ahead and download it if you do not have it, and place it into your static directory. You will also notice jquery.dajax.core.js above. This file does all the magic on the client side, and is required for Dajax to work. That being said, Dajax also supports other popular JavaScript frameworks such as Prototype, Dojo, and Mootools. This will allow you to use Dajax in almost any existing environment. Given that you are using Django to power the backend that is. That's really it for configuration, so lets do a simple test to confirm that you have Dajax all up and running in your Django project. Create a file called ajax.py in your applications folder, not the project, and enter in the following: from dajaxice.decorators import dajaxice_register from dajax.core import Dajax @dajaxice_register def say_hello(req): dajax = Dajax() dajax.alert("Hello World!") return dajax.json() A very simple Hello World message. This is just about the most simplest Dajax callback you can build. Well, you could remove the alert and return nothing, but what's the point of a callback that simple? Here's a small bit of a code to insert into your template's body tag to test the callback out: <script type="text/javascript"> Dajaxice.dajaxapp.say_hello(Dajax.process); </script> Here our application name is dajaxapp, change this to the label of your application's package. This should be for the most part very easy to read and understand code. Dajax.process is the data processor for the returning data, and this JavaScript function manages the part of taking what we do in Python and making it work in JavaScript. Here is what the returned data looks like, if you are curious: [{"cmd": "alert", "val": "Hello World!"}] Dajax takes what you did in Python, and turns it into something which JavaScript can parse and do something with. In other examples, I will also provide the response from Dajax, so that you can further understand what is happening under the hood. Now that the most simplest example is out of the way, lets build a model and use that with Dajax to fetch data and display it to the end-user. First we need some sort of model: from django.db import models class Person(models.Model): name = models.CharField(max_length=80) birthday = models.DateField() gift_bought = models.BooleanField() def __unicode__(self): return u"%s" % self.name This is a very simple birthday reminder model, which keeps track of people, their important days, and of course if you already bought their gift. Nothing too special. Go ahead and populate the model with some data using either a Python shell or the admin interface. First we'll put together the Dajax callback which will use the query and grab the data from the database. It will then assign the data to HTML IDs. We will also perform some formatting within Python to make the output look prettier:() p = Person.objects.get(pk=pk) dajax.assign('#idName', 'innerHTML', p.name) dajax.assign('#idDay', 'innerHTML', p.birthday.strftime("%B %d")) gift = 'Yes' if p.gift_bought else 'No' dajax.assign('#idGift', 'innerHTML', gift) return dajax.json() This Dajax callback will accept one argument, which is the primary key of the Person object. Here is the HTML code to make this all work: <body> <script type="text/javascript"> Dajaxice.dajaxapp.get_person(Dajax.process, {'pk':1}); </script> Name: <span id="idName"></span><br/> Birthday: <span id="idDay"></span><br/> Gift bought: <span id="idGift"></span><br/> </body> Fairly straightforward. Here is the response body of the AJAX request: [ {"cmd": "as", "id": "#idName", "val": "John Smith", "prop": "innerHTML"}, {"cmd": "as", "id": "#idDay", "val": "October 19", "prop": "innerHTML"}, {"cmd": "as", "id": "#idGift", "val": "No", "prop": "innerHTML"} ] You can see above all the assignment calls being performed. Using Dajax, you can build web applications fairly quickly; and easily add new JavaScript functions to use. The best part about using Dajax over other solutions, is that you only need to create a Python function, there is no need to do anything in JavaScript. Once the Python function is complete, you can go ahead and use it directly in your HTML page to load data and do other various tasks. Let's go a bit further in this example, and make the data easily browsable by using 2 simple buttons. This will allow end-users to easily browse the data in the database, either based on a particular filter, or all the data. This will expand on the get_person Dajax callback and enable it to control a simple browsing widget. This time, we'll begin with the HTML code, as that has changed a large amount: <style> .hideIt { display: none; } </style> </head><body> <script type="text/javascript"> var cur = 1; Dajaxice.dajaxapp.get_person(Dajax.process, {'pk':cur}); </script> <div id="idError" class="hideIt">You have reached the end of the database.</div> <table bgcolor="#acacac" border="1"> <tr><th>Name</th><td id="idName"></td></tr> <tr><th>Birthday</th><td id="idDay"></td></tr> <tr><th>Gift bought</th><td id="idGift"></td></tr> </table> <a href="#" onclick="Dajaxice.dajaxapp.get_person(Dajax.process, {'pk':cur-1});"><</a> <a href="#" onclick="Dajaxice.dajaxapp.get_person(Dajax.process, {'pk':cur+1});">></a> </body> This example uses a little more JavaScript, it is used to keep track of the application's current state. In this case, it is keeping track of the current pk being loaded from the database. The buttons merely alter this variable when requesting an update from the server. Here is the updated ajax.py to make this widget work:() try: p = Person.objects.get(pk=pk) except Person.DoesNotExist: dajax.remove_css_class('#idError', 'hideIt') return dajax.json() dajax.add_css_class('#idError', 'hideIt') dajax.assign('#idName', 'innerHTML', p.name) dajax.assign('#idDay', 'innerHTML', p.birthday.strftime("%B %d")) gift = 'Yes' if p.gift_bought else 'No' dajax.assign('#idGift', 'innerHTML', gift) dajax.script("cur = %d;" % pk) return dajax.json() The server code will be controlling the class for the idError. This will allow the application to alert the end-user that they have reached the end or beginning of the dataset. This code will obviously not work too well in a production environment, due to the fact that when a Person is deleted, it will cause the error to be displayed. However, for simplicities sake, this example will not go through all those checks, and will avoid using a queryset to limit the results. In this example, you will notice a few new Dajax commands being used, two of which relate to CSS alterations, and the other one runs some JavaScript on the client side. This JavaScript is used to update the state of the application in the browser. This could have been done on the client side directly, but in case of any errors, we don't want this variable to be incorrect or out of sync. Setting it this way, will ensure that the various will always be what you expect it to be. The next obvious addition, is the ability to tell the application that a birthday gift has indeed been purchased for this specific person. We will implement this in the form of making the No clickable. If the user clicks it, it will signal to the application that the record needs to be updated to reflect that change. This change only requires modifying the ajax.py file: from dajaxice.decorators import dajaxice_register from dajax.core import Dajax from ajaxsite.dajaxapp.models import Person def _get_person(p): dajax = Dajax() dajax.add_css_class('#idError', 'hideIt') dajax.assign('#idName', 'innerHTML', p.name) dajax.assign('#idDay', 'innerHTML', p.birthday.strftime("%B %d")) gift = 'Yes' if p.gift_bought else '<a href="#" onclick="Dajaxice.dajaxapp.bought_gift(Dajax.process, {\'pk\':cur});">No</a>' dajax.assign('#idGift', 'innerHTML', gift) dajax.script("cur = %d;" % p.pk) return dajax.json() @dajaxice_register def get_person(req, pk): try: p = Person.objects.get(pk=pk) except Person.DoesNotExist: dajax = Dajax() dajax.remove_css_class('#idError', 'hideIt') return dajax.json() return _get_person(p) @dajaxice_register def bought_gift(req, pk): p = Person.objects.get(pk=pk) p.gift_bought = True p.save() return _get_person(p) Here I decided to separate the functions a bit. This will allow future functions to just return _get_person with the Person object. We will be using this new function in the next example. The next example will be a create form, this will allow us to add a new Person. I will not be building an edit form in this example, as people rarely, if ever change their birthday... However, this example should be enough to allow you to understand form development and create such a callback yourself: from ajaxsite.dajaxapp.forms import PersonForm .... @dajaxice_register def add_person(req, form): f = PersonForm(form) if f.is_valid(): return _get_person(f.save()) dajax = Dajax() dajax.assign('#person_errors', 'innerHTML', 'Correct the following fields: %s' % f.errors) return dajax.json() The rest of ajax.py is the same, also you will need to create a ModelForm for the Person model. Here are the new additions to the HTML code: ... <script type="text/javascript"> var cur = 1; function add_person(){ data = $('#person_form').serializeObject(); Dajaxice.dajaxapp.add_person(Dajax.process, {'form':data}); return false; } Dajaxice.dajaxapp.get_person(Dajax.process, {'pk':cur}); </script> ... <div id="person_errors"></div> <form action="" method="post" id="person_form" accept-{% csrf_token %} <table>{{form}}</table> <input type="button" value="Add Person" onclick="add_person();" /> </form> There you have it, a complete Dajax example, which shows how to build a simple dataset browser, and how to add new functions with very little work involved. The largest selling point of using Dajax is that once the initial configuration is done, adding new AJAX callbacks takes almost no time at all, and is mostly done through Python. Be sure to read through both the Dajaxice and Dajax documentation, they explain some very important deployment configurations. This concludes the Django and AJAX tutorial set. I may create additional tutorials on Dajax, but it will not be part of this tutorial set and will focus on more advanced usage scenarios. Hi, Nice tutorial, very clear. I followed many times, step by step, but I keep getting this error: Invalid block tag: 'dajaxice_js_import' Looks like template tag is not available, but I'm sure that everything it was done as you said. Maybe some path, url. Where should I start looking? Please help me! I would appreciate it. Leandro, the error you are receiving is due to not having the dajaxice in your Django's INSTALLED_APPS, or you did not install Dajaxice correctly. Dajax does require Dajaxice. Hi, very good tutorial. One question: _get_person(p) is called with p being a Person model and also a Person form ( in return _get_person(f.save()) ) ...is this posiible in django, i.e., a model object and a form objetc from that model can be treated as the same type? Hello Adolfo. Thank you for your comment, and I am sorry about the issues you experienced while posting the comment. Python isn't statically typed, meaning you can pass any object as a parameter. Since this was an example, I did not do any checks to confirm the incoming object was what I am expecting, so if another programming uses this function and gives it a object that it cannot process, it will generate an exception. It is a good habit in dynamically typed languages to use "Assert" or do extensive exception handling to avoid any application errors. If you are the sole coding of your code, and know for sure what a function accepts, you are free to do as you please when passing around objects. Both the Form and Model objects in Django have similar enough properties that I can pass either or into this function and it will work without complaint. Before creating a function like this one, read into the Django docs to make sure it will work with both object types you plan on accepting or use "isinstance" to route the object around the function as needed. Hi Kevin, I worked through your tutorials here and managed to get all of your examples working fine. I'm now trying to submit a form and save an object from within ajax.py, but can't seem to get it working. What I'd like to be able to do is to pass additional objects to the ajax function, and use both them and the form cleaned data to create a new object within ajax.py. It seems that this sort of thing should be relatively simple, but in the error log I get nothing except: 'Dajaxice: Something went wrong', which is not very helpful. Is there any change you would be willing to expand on this tutorial by doing something similar to my aims? It seems like it must be a fairly common use case. If not, do you have any suggestions as to how I could debug my code? Many thanks, I've found this set of tutorials extremely useful. Good post but I was wondering if you could write a litte more on this topic? I'd be very grateful if you could elaborate a little bit further. Cheers!|
http://pythondiary.com/tutorials/django-and-ajax-dajax.html
CC-MAIN-2015-06
en
refinedweb
On Wed, Aug 02, 2006 at 05:20:40PM +0200, Rolf Eike Beer wrote:> As suggested by Muli Ben-Yehuda this function is moved to generic code as> may be useful for all archs.I like it, but ...> diff --git a/include/asm-x86_64/dma-mapping.h b/include/asm-x86_64/dma-mapping.h> index b6da83d..10174b1 100644> --- a/include/asm-x86_64/dma-mapping.h> +++ b/include/asm-x86_64/dma-mapping.h> @@ -55,13 +55,6 @@ extern dma_addr_t bad_dma_address;> extern struct dma_mapping_ops* dma_ops;> extern int iommu_merge;> > -static inline int valid_dma_direction(int dma_direction)> -{> - return ((dma_direction == DMA_BIDIRECTIONAL) ||> - (dma_direction == DMA_TO_DEVICE) ||> - (dma_direction == DMA_FROM_DEVICE));> -}> -Several files include asm/dma-mapping.h directly, which will now causethem to fail to compile on x86-64 due to the missing definition forvalid_dma_direction, unless by chance another header already broughtit in indirectly. I guess the right thing to do is convert them all tousing linux/dma-mapping.h instead..>./include/linux/dma-mapping.h:27:#include <asm/dma-mapping.h>Cheers,Muli-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2006/8/2/191
CC-MAIN-2015-06
en
refinedweb
Answered: Call Controller function outside Ext namespace Answered: Call Controller function outside Ext namespace Hello, everyone. I want to call controller method in <script> section in index.html. Can I do it and how? Thanks for advice Resolved (by using gloval variable App) Resolved (by using gloval variable App) Can you give an example please? I have the same problem. Thanks. Hello! Of course. index.html Code: ... <link href="resources/css/main.css" media="screen" rel="stylesheet" type="text/css"> <script id="microloader" type="text/javascript" src="sdk/microloader/development.js"></script> <script type="text/javascript"> var g_App = undefined, .... Code: ... launch: function() { // Destroy the #appLoadingIndicator element Ext.fly('appLoadingIndicator').destroy(); g_App = this; // Initialize the main view }, ... Code: g_App.getController('News').LoadData(); Doesn't work for me, the global variable remains undefined while assigned one's scope is launch() function <myAppName>.app.getControllerInstances()['<controllerName>'] does the job though Can you attach your unworking project? Irrelevant now, got it working apparently. Thanks for your reply.
http://www.sencha.com/forum/showthread.php?242243-Call-Controller-function-outside-Ext-namespace&p=901283
CC-MAIN-2015-06
en
refinedweb
#include <qgscrscache.h> Definition at line 24 of file qgscrscache.h. Definition at line 35 of file qgscrscache.cpp. Definition at line 31 of file qgscrscache.cpp. Referenced by instance(). Returns the CRS for authid, e.g. 'EPSG:4326' (or an invalid CRS in case of error) Definition at line 40 of file qgscrscache.cpp. References QgsCoordinateReferenceSystem::createFromOgcWmsCrs(), mCRS, and mInvalidCRS. Referenced by crsByEpsgId(). Definition at line 58 of file qgscrscache.cpp. References crsByAuthId(). Definition at line 22 of file qgscrscache.cpp. References mInstance, and QgsCRSCache(). Referenced by QgsCoordinateReferenceSystem::readXML(). Definition at line 38 of file qgscrscache.h. Referenced by crsByAuthId(). Definition at line 37 of file qgscrscache.h. Referenced by instance(), and ~QgsCRSCache(). CRS that is not initialised (returned in case of error) Definition at line 40 of file qgscrscache.h. Referenced by crsByAuthId().
http://qgis.org/api/1.8/classQgsCRSCache.html
CC-MAIN-2015-06
en
refinedweb
. Memoization The word memoization was coined by Donald Michie, a British artificial-intelligence researcher, to refer to function-level caching for repeating values. Today, memoization is common in functional programming languages, either as a built-in feature or as one that's relatively easy to implement. Memoization helps in the following scenario. Suppose you have a performance-intensive function that you must call repeatedly. A common solution is to build an internal cache. Each time you calculate the value for a certain set of parameters, you put that value in the cache, keyed to the parameter value(s). In the future, if the function is invoked with previous parameters, return the value from the cache rather than recalculate it. Function caching is a classic computer science trade-off: It uses more memory (which we frequently have in abundance) to achieve better performance over time. Functions must be pure for the caching technique to work. A pure function is one that has no side effects: It references no other mutable class fields, doesn't set any values other than the return value, and relies only on the parameters for input. All the methods in the java.lang.Math class are excellent examples of pure functions. Obviously, you can reuse cached results successfully only if the function reliably returns the same values for a given set of parameters. Memoization in Groovy Memoization is trivial in Groovy, which includes a family of memoize() functions on the Closure class. For example, suppose you have an expensive hashing algorithm, leading you to cache the results for efficiency. You can do so by using closure-block syntax to define the method and calling the memoize() function on the return, as shown in Listing 1. (I don't mean to suggest that the ROT13 algorithm—a version of the Caesar Cipher—used in Listing 1 is performance-challenged, so just pretend that caching is worth it in this example.) Listing 1. Memoization in Groovy class NameHash { def static hash = {name -> name.collect{rot13(it)}.join() }.memoize() public static char rot13(s) { char c = s switch (c) { case 'A'..'M': case 'a'..'m': return c + 13 case 'N'..'Z': case 'n'..'z': return c - 13 default: return c } } } class NameHashTest extends GroovyTestCase { void testHash() { assertEquals("ubzre", NameHash.hash.call("homer")) } } Normally, Groovy function definitions look like rot13() in Listing 1, with the method body following the parameter list. The hash() function definition uses slightly different syntax, assigning the code block to the hash variable. The last part of the definition is the call to memoize(), which automatically creates an internal cache for repeating values, keyed on parameter. The memoize() method is really a family of methods, giving you some control over caching characteristics, as shown in Table 1. Table 1. Groovy's memoize() family The methods in Table 1 give you coarse-grained control over caching characteristics — not fine-grained ways to tune cache characteristics directly. Memoization is meant to be a general-purpose mechanism for easily optimizing common caching cases. Memoization in Clojure Memoization is built into Clojure. You can memoize any function by using the built-in (memoize ) function. For example, if you have an existing (hash "homer") function, you can memoize it via (memoize (hash "homer")) for a caching version. Listing 2 implements the name-hashing example from Listing 1 in Clojure. Listing 2. Clojure memoization (defn name-hash [name] (apply str (map #(rot13 %) (split name #"\d")))) (def name-hash-m (memoize name-hash)) (testing "name hash" (is (= "ubzre" (name-hash "homer")))) (testing "memoized name hash" (is (= "ubzre" (name-hash-m "homer"))))) Note that in Listing 1, calling the memoized function requires an invocation of the call() method. In the Clojure version, the memoized method call is exactly the same on the surface, with the added indirection and caching invisible to the method's user. Memoization in Scala Scala doesn't implement memoization directly but has a collection method named getOrElseUpdate() that handles most of the work of implementing it, as shown in Listing 3. Listing 3. Scala memoization def memoize[A, B](f: A => B) = new (A => B) { val cache = scala.collection.mutable.Map[A, B]() def apply(x: A): B = cache.getOrElseUpdate(x, f(x)) } def nameHash = memoize(hash) The getOrElseUpdate() function in Listing 3 is the perfect operator for building a cache. It either retrieves the matching value or creates a new entry when none exists. Combining functional features In the preceding section and in the last few Java.next installments, I've covered several details of functional programming, particularly as they pertain to the Java.next languages. However, the real power of functional programming lies in the combination of features and the way solutions are approached. Object-oriented programmers tend to create new data structures and attendant operations constantly. After all, building new classes and messages between them is the predominant language paradigm. But building so much bespoke structure makes building reusable code at the lowest level difficult. Functional programming languages prefer a few core data structures and build optimized machinery for understanding them. Here's an example. Listing 4 shows the indexOfAny() method from the Apache Commons framework (which provides a slew of helpers for Java programming). Listing 4. indexOfAny() from Apache Commons // From Apache Commons Lang, public static int indexOfAny(String str, char[] searchChars) { if (isEmpty(str) || ArrayUtils.isEmpty(searchChars)) { return INDEX_NOT_FOUND; }; } The first third of the code in Listing 4 concerns edge-case checks and initialization of the variables needed for the nested iteration to come. I'll gradually transform this code into Clojure. For the first step, I remove the corner cases, as shown in Listing 5. Listing 5. Removing corner cases public static int indexOfAny(String str, char[] searchChars) { when(searchChars) {; } } Clojure intelligently handles the null and empty cases and has intelligent functions such as (when ...), which returns true only when characters are present. Clojure is a dynamically (but strongly) typed, eliminating the need to declare variable types before use. Thus, I can remove the type declarations, resulting in the code in Listing 6. Listing 6. Removing type declarations indexOfAny(str, searchChars) { when(searchChars) { csLen = str.length(); csLast = csLen - 1; searchLen = searchChars.length; searchLast = searchLen - 1; for (i = 0; i < csLen; i++) { ch = str.charAt(i); for (j = 0; j < searchLen; j++) { if (searchChars[j] == ch) { if (i < csLast && j < searchLast && CharUtils.isHighSurrogate(ch)) { if (searchChars[j + 1] == str.charAt(i + 1)) { return i; } } else { return i; } } } } return INDEX_NOT_FOUND; } } The for loop — a staple of imperative languages — allows access to each element in turn. Functional languages tend to rely more on collection methods that already understand (or avoid) edge cases, so I can remove methods such as isHighSurrogate() (which checks for character encodings) and manipulation of index pointers. The result of this transformation appears in Listing 7. Listing 7. A when clause to replace the innermost for // when clause for innermost for indexOfAny(str, searchChars) { when(searchChars) { csLen = str.length(); for (i = 0; i < csLen; i++) { ch = str.charAt(i); when (searchChars(ch)) i; } } } In Listing 7, I collapse the code into a method that checks for the presence of the sought-after characters and returns the index when they're found. While I'm in neither Java nor Clojure but a strange pseudocode place, this when method doesn't quite exist. But the (when ) method in Clojure, which this code is slowly becoming, does. Next, I replace the topmost for loop with a more concise substitute, using the for comprehension: a macro that combines access and filtering (among other things) for collections. The evolved code appears in Listing 8. Listing 8. Adding a comprehension // add comprehension indexOfAny(str, searchChars) { when(searchChars) { for ([i, ch] in indexed(str)) { when (searchChars(ch)) i; } } } To understand the for comprehension in Listing 8, you must first understand a few parts. The (indexed ...) function in Clojure accepts a Sequence and returns a sequence that includes numbered elements. For example, if I call (indexed '(a b c)), the return is ([0 a] [1 b] [2 c]). (The single apostrophe indicates to Clojure that I want a literal sequence of characters, not that I want to execute an (a ) method with two parameters.) The for comprehension creates this sequence over my search characters, then applies the inner when to find the index of the matching characters. The last step in this transformation is to convert the code into proper Clojure syntax and restore the presence of real functions and syntax, as shown in Listing 9. Listing 9. Clojure-ifying the code // Clojure-ify (defn index-filter [pred coll] (when pred (for [[index element] (indexed coll) :when (pred element)] index))) In the final Clojure version in Listing 9, I convert the syntax to proper Clojure and add one upgrade: Callers of this function can now pass any predicate function (one that returns a Boolean result), not just the check for an empty string. One Clojure's goals is the ability to create readable code (after you assimilate the parentheses), and this function exemplifies this ability: For the indexed collection, when your predicate matches the element, return the index. Another Clojure goal is expressiveness with the least number of characters; Java suffers terribly in comparison with Clojure in this regard. Table 2 compares "moving parts" quantities in Listing 4 to those in Listing 9. Table 2. Comparison of "moving parts" The difference in complexity is telling. Yet although the Clojure code is simpler, it is also more general. Here I index a sequence of coin flips, modeled as the Clojure :h (heads) and :t (tails) keywords: (index-filter #{:h} [:t :t :h :t :h :t :t :t :h :h]) -> (2 4 8 9) Notice that the return value is a sequence of all matching index positions, not just the first. List operations in Clojure are lazy when possible, including this one. If I only want the first value, I can (take 1 ) from the result, or I can print them all, as I've done here. My (index-filter ) function is generic, so I can use it on numbers. For example, I can determine the first number whose Fibonacci value exceeds 1,000: (first (index-filter #(> % 1000) (fibo))) -> 17 The (fibo) function returns an infinite but lazy sequence of Fibonacci numbers; (index-filter ) finds the first one that exceeds 1,000. (It turns out that the Fibonacci of 18 is 1,597.) The combination of functional constructs, dynamic typing, laziness, and concise syntax yields great power. Conclusion Functional programming constructs yield benefits when used piecemeal, but they offer even more advantages when they're combined. All the Java.next languages are functional to one degree or another, enabling increasing use of this style of development. In this installment, I discussed how functional programming eliminates moving parts — making programming less error-prone — and the benefits of combining functional features. In the next installment, I begin an even more powerful illustration of this concept as I discuss how the Java.next languages make concurrency on the JVM easier. Resources Learn - ROT13: ROT13 is an example of the Caesar Cipher, an ancient encryption algorithm used by Julius Caesar. - Apache Commons: Commons is a popular utility framework in the Java ecosystem. - Groovy: Groovy is a dynamic variant of the Java language, with updated syntax and capabilities. - Scala: Scala is a modern, functional language on the JVM. - Clojure: Clojure is a modern, functional Lisp that runs on the JVM. - Functional thinking: Explore functional programming in Neal Ford's column series on developerWorks. - "Execution in the Kingdom of Nouns" (Steve Yegge, March 2006): An entertaining rant about some aspects of Java language design. -.
http://www.ibm.com/developerworks/library/j-jn12/
CC-MAIN-2015-06
en
refinedweb
30 March 2010 10:02 [Source: ICIS news] By Mahua Chakravarty SINGAPORE (ICIS news)--Benzene prices in Asia hit a two-month high on Tuesday and may rise further in the next few weeks on the back of bullish aromatics markets in the US and Europe and strong crude values, said traders and producers. Spot benzene prices breached the $1,000/tonne (€740/tonne) FOB (free on board) ?xml:namespace> Prices had gone up by about $105-110/tonne in the past two weeks, based on ICIS pricing data. On Tuesday, sellers quoted higher offers of $1,025-1,030/tonne for May-loading cargoes, but bids remained much lower at $1,000/tonne, market sources said. Bids for second-half April loading lots also surfaced at $995-1,000/tonne, but no sellers stepped forth with offers, they added. An overnight spike in crude futures prices above $82/bbl, along with continuing firmness in the “[Going forward] there seems to be more upside for benzene,” said a key regional trader, citing the current supply tightness in the European benzene market that should keep prices buoyant across three regions. Asian exporters were heard to be looking at fixing parcels for April shipment to the Asia is a net benzene exporter to the Sentiment in the spot market was also more bullish for May as demand from the key downstream styrene monomer (SM) segment was expected to improve when SM plants restart, market sources said. A slew of turnarounds at SM units in northeast Asia from March had slowed down demand for benzene as the sector absorbs about half of The expected resurgence in demand would help address the benzene surplus in the region. Prices for May-loading cargoes were fetching a premium of about $5-10/tonne against second-half April shipments, they said. Supply of aromatics may also fall as a number of regional crackers were anticipated to switch to using liquefied petroleum gas (LPG) as feedstock instead of naphtha, starting end April or May, market sources said. Cracking of LPG is known to reduce aromatics supply, said a Singapore-based trader. ($1 = €0.74)
http://www.icis.com/Articles/2010/03/30/9346879/asian-benzene-prices-hit-two-month-high-may-strengthen-further.html
CC-MAIN-2015-06
en
refinedweb
Blog Map Data-bound content controls are a powerful and convenient way to separate the semantic business data from the markup of an Open XML document. After binding content controls to custom XML, you can query the document for the business data by looking in the custom XML part rather than examining the markup. Querying custom XML is much simpler than querying the document body. However, it’s a little bit involved to create data-bound content controls (but only a little bit). But there is a trick we can do – we can take a document that has un-bound content controls, generate a custom XML part automatically (inferring the elements of the custom XML from the content controls), and then bind the content controls to the custom XML part. This blog is inactive.New blog: EricWhite.com/blogBlog TOC(Update March 10, 2009 - modified code to work with latest Open XML SDK.) This approach has two benefits – first, it can serve as a way to conveniently create a document with data-bound content controls, and second, it serves to demonstrate exactly what you must do to create data-bound content controls. This example uses the following approach: This example uses the Open XML SDK V1 and LINQ to XML. Data-Bound Content Controls A document that contains properly set-up data-bound content control has the following characteristics: The following screen clipping shows the word document with content controls in the cells of a table: To set the properties of the content control, click on the Content Controls Properties button (on the Developer tab of the ribbon): In this example, the element name in the custom XML part comes from the Tag field in the content control properties window: The following screen clipping (using the Open XML Package Editor, which comes with Visual Studio Power Tools) shows that there is a relation from the main document part (document.xml) to the custom XML part (../customXml/item1.xml): The following shows the relation from the custom XML part to the custom XML properties part (itemProps1.xml): The custom XML for the example included with this post looks like this: <?xmlversion="1.0"encoding="utf-8"?><Root> <Name>Eric White</Name> <Company>Microsoft Corporation</Company> <Address>One Microsoft Way</Address> <City>Redmond</City> <State>WA</State> <Country>USA</Country> <PostalCode>98052</PostalCode></Root> This custom XML is automatically generated by this example. The custom XML properties part looks like this: <? <ds:schemaRefs/></ds:datastoreItem> The GUID in the ds:itemID attribute is generated when the example is run. The content control with properly set-up data binding looks like this: <w:sdt> <w:sdtPr> <w:aliasw: <w:tagw: <w:idw: <w:placeholder> <w:docPartw: </w:placeholder> <w:dataBinding w: <w:text/> </w:sdtPr> <w:sdtContent> <w:tc> <w:tcPr> <w:tcWw: </w:tcPr> <w:pw: <w:r> <w:t>Eric White</w:t> </w:r> </w:p> </w:tc> </w:sdtContent></w:sdt> The GUID in the w:storeItemID attribute is the same as in the custom XML properties part. This creates the association between the data-bound content control and its custom XML part. If you edit the document that has bound content controls, and change the contents in one of them, the custom XML is modified to reflect the changed content. For instance, if you edit the document and change the name to Tai Yee, then the custom XML will be: <?xmlversion="1.0"encoding="utf-8"?><Root> <Name>Tai Yee</Name> <Company>Microsoft Corporation</Company> <Address>One Microsoft Way</Address> <City>Redmond</City> <State>WA</State> <Country>USA</Country> <PostalCode>98052</PostalCode></Root> Because the GUID that creates the association is in the custom XML properties part and not in the custom XML itself, the custom XML can have any schema you desire. You can take XML from any source, with any schema, and place it, unmodified, in a custom XML part, and create the appropriate data-binding to content controls. Example using the Open XML SDK V1 and LINQ to XML The example first copies Template.docx to Test.docx. It opens Test.docx using the Open XML SDK, creates the custom XML part, creates the custom XML properties part, and then adds the data binding elements to the content controls in the main document part. using System;using System.Collections.Generic;using System.Linq;using System.Text;using System.IO;using System.Xml;using System.Xml.Linq;using DocumentFormat.OpenXml;using DocumentFormat.OpenXml.Packaging;public static class LocalExtensions{ public static string StringConcatenate<T>(this IEnumerable<T> source, Func<T, string> func) { StringBuilder sb = new StringBuilder(); foreach (T item in source) sb.Append(func(item)); return sb.ToString(); } public static string StringConcatenate(this IEnumerable<string> source) { StringBuilder sb = new StringBuilder(); foreach (string item in source) sb.Append(item); return sb.ToString(); } public static XDocument GetXDocument(this OpenXmlPart part) { XDocument xdoc = part.Annotation<XDocument>(); if (xdoc != null) return xdoc; using (Stream str = part.GetStream()) using (StreamReader streamReader = new StreamReader(str)) using (XmlReader xr = XmlReader.Create(streamReader)) xdoc = XDocument.Load(xr); part.AddAnnotation(xdoc); return xdoc; }}class Program{ private static XNamespace w = ""; private static XName r = w + "r"; private static XName ins = w + "ins"; private static XNamespace ds = ""; static string GetTextFromContentControl(XElement contentControlNode) { return contentControlNode.Descendants(w + "p") .Select( p => p.Elements() .Where(z => z.Name == r || z.Name == ins) .Descendants(w + "t") .StringConcatenate(element => (string)element) + Environment.NewLine ).StringConcatenate(); } static void Main(string[] args) { File.Delete("Test.docx"); File.Copy("Template.docx", "Test.docx"); // Open the Open XML doc as a word processing doc using (WordprocessingDocument document = WordprocessingDocument.Open("Test.docx", true)) { // Create the contents of the custom XML part XElement customXml = new XElement("Root", document .MainDocumentPart .GetXDocument() .Descendants(w + "sdt") .Select(sdt => new XElement( sdt.Element(w + "sdtPr") .Element(w + "tag") .Attribute(w + "val").Value, GetTextFromContentControl(sdt).Trim()) ) ); // Create a new custom XML part CustomXmlPart customXmlPart = document.MainDocumentPart.AddCustomXmlPart(CustomXmlPartType.CustomXml); using (Stream str = customXmlPart.GetStream( FileMode.Create, FileAccess.ReadWrite)) using (XmlWriter xw = XmlWriter.Create(str)) customXml.Save(xw); Guid idGuid = Guid.NewGuid(); // Create the contents of the properties part XDocument propertyPartXDoc = new XDocument( new XElement(ds + "datastoreItem", new XAttribute(ds + "itemID", "{" + idGuid.ToString().ToUpper() + "}"), new XAttribute(XNamespace.Xmlns + "ds", ds.NamespaceName), new XElement(ds + "schemaRefs") ) ); // Add the custom XML properties part CustomXmlPropertiesPart customXmlPropertyPart = customXmlPart.AddNewPart<CustomXmlPropertiesPart>(); using (Stream str = customXmlPropertyPart.GetStream( FileMode.Create, FileAccess.ReadWrite)) using (XmlWriter xw = XmlWriter.Create(str)) propertyPartXDoc.Save(xw); // Load the main document part into an XDocument XDocument mainDocumentXDoc; using (Stream str = document.MainDocumentPart.GetStream()) using (XmlReader xr = XmlReader.Create(str)) mainDocumentXDoc = XDocument.Load(xr); // Add the data binding elements to the main document foreach (XElement sdt in mainDocumentXDoc.Descendants(w + "sdt")) sdt.Element(w + "sdtPr") .Element(w + "placeholder") .AddAfterSelf( new XElement(w + "dataBinding", new XAttribute(w + "xpath", "/Root/" + sdt.Element(w + "sdtPr") .Element(w + "tag") .Attribute(w + "val").Value), new XAttribute(w + "storeItemID", "{" + idGuid.ToString().ToUpper() + "}") ) ); // Serialize the XDocument back into the part using (Stream str = document.MainDocumentPart.GetStream( FileMode.Create, FileAccess.Write)) using (XmlWriter xw = XmlWriter.Create(str)) mainDocumentXDoc.Save(xw); } }} Code is attached. PingBack from Stephen McGibbon has screenshots of the Open XML and ODF support coming in Windows 7 Wordpad , as announced Suite à la PDC 2008 et au workshop Open XML donné par Microsoft à Redmond ( Doug , encore mille excuses Question regarding the GetTextFromContentControl method in your example. This looks for "p" elements and there is normally (as far as I've seen) no "p" tags within the "sdt" elements, which is the parameter into the method. Looking at some of my own Open XML documents, it looks like the following example would be more correct. Yet, this example does not support placeholders that allows carriage returns. e.Element(w + "sdtContent").Element(w + "r").Element(w + "t").Value.Trim() Additionally the code will fail whenever there is placeholders that does not have any tag specified, to avoid this you can make a check in the foreach loops, something like: if (sdt.Element(w + "sdtPr").Element(w + "tag") != null) Thanks for a great example! I just read Brian Jones' <a href="" title="Taking Advantage of Bound Content Controls">post</a> where he completely swaps out the custom XML part. The code appears much more concise, but does it lack in the area of property reconstructing the Custom XML Part Properties? Hi Eric, Can we do the custom binding for content controls that are in header and footer parts? Is there any way I can toggle the content control bordering and highlighting? I have some content controls that are very close together and they exhibit some really strange behavior. Hi Eric Great blog, and good info on content controls here, but, I'm using Word 2007, and when I create a docx with one content control nested within another, save and then try to reload that document, Word throws an error, and offers to "correct the currupted document" When I click YES, the doc loads, but the nested content control has been stripped and converted to text. This is in a completely fresh doc on a system with a fresh install of office 2007, so I'm a bit stumped. Are nested content controls +really+ supported? Thanks @Engr_Muneer, have you taken a look at "design mode" for content controls? It can really help with how you interact with them. Take a look at this post: @Darin, I tried creating nested content controls using Word 2007, and it worked just fine for me. I tried on multiple installs. Can you try on some other Word 2007 installs, see if it works elsewhere? -Eric @satchi, yes, you can link content controls in headers/footers to custom XML. The XPath expression refers to elements/attributes in the custom XML part that is related to the main document part. Very strange. I'm running word 12.0.4518.1000 (ie original shpping version from what I can tell). It definitely says that the doc has been corrupted once it's saved and reloaded. I went to a colleague's desk, he's running 12.0.6500,5000 (it says it's SP2) and his version works completely differently. No matter what we do at has desk, we can't get it to insert a nested content control at all. The ribbon buttons for controls on the developer ribbon are greyed when the cursor is in a content control. Strange I'm running Win Update on this image now. Just have to see if maybe the Sp has something to do with it. Aha! Finally figured out what's up with this. Just fyi for anyone else that might come across this page, You can't insert a content control into a "Plain text" content control. But it appears that you CAN insert a content control into a "Rich text" content control. I suppose that makes a certain amount of sense, but it sure wasn't clear (and the older Word 2007 definitely would LET me do it, even though it appears that it shouldn't have. And the controls do appear to save and reload properly, without Word stripping them out. Thank you Darin for figuring this out. This was one of those assumptions that was so ingrained in my mind that I forgot to mention. I'm going to update the nested content control blog post to tell this. Since the custom XML part is removed from Word from January 10.... Does anyone knows how to achieve content-controls/custom XML mapping in Word 2010? In other words, how it will be done in Word 2010? We are using method specified in this article for filling content controls from custom XML (Word 2007 - before January 10.), but how will we achieve that in Word 2010. I'm thinking about a future...
http://blogs.msdn.com/b/ericwhite/archive/2008/10/19/creating-data-bound-content-controls-using-the-open-xml-sdk-and-linq-to-xml.aspx?PageIndex=1
CC-MAIN-2015-06
en
refinedweb
Teleconference.2008.06.18/Agenda From OWL Revision as of 16:37, 18 June 2008 by RinkeHoekstra (Talk | contribs) Contents Call in details When joining please don't identify yourself verbally; instead, Identify Yourself to ZAKIM on IRC - Date of Call: Wednesday June 18,: Peter F. Patel-Schneider (Scribe List) - Link to Agenda: Agenda - ADMIN (20 min) - Roll call - Agenda amendments? - PROPOSED: Accept Previous Previous Minutes (04 June) - PROPOSED: Accept Previous Minutes (11 - Due and overdue Actions - Action 42 Improve examples for rich annotations / Bijan Parsia - Action 150 Create a new document as the spec for owl:internationalizedString / rif:text, including open issue discussion of namespace / Jie Bao - 109 OWL XML namespace -- new or reuse RDF namespace. Note that we should have read Ivan's email and be ready to vote on this issue. - Issue 112 per Boris's email (top role added to spec) - Other Issue Discussions - Issue 21 and Issue 24 Imports and Versioning, per update from Boris and subsequent minor changes. The intention is that: Issue 21 was resolved by requiring the import target to be equal either to the ontology URI or to the version URI of the ontology to be imported. Issue 24 was resolved by saying that an import closure containing either two different versions of the same ontology or two ontologies that are explicitly asserted to be incompatible (via an owl:incompatibleWith annotation) should be considered syntactically invalid. Sea also recent email thread. - Issue 108 Need to name the OWL Profiles (see Rinke's email amongst others). Current top choices are the one and two letter suffix names. Straw poll? - Issue 67 and Issue 81 Reification (see wiki page on Reification Alternatives) - Issue 116 Should Axiomatic Triples added to OWL-R Full? - General Discussion (20 min) - Additional other business (10 min) Next Week(s) - General Discussions (not necessarily in this order) Regrets - Carsten Lutz (travelling) - Sandro Hawke (travelling) - EvanWallace (unavoidable schedule conflict) - Ivan Herman (travelling) - Elisa Kendall (conflicting business meeting, will join if possible) - Jeff Z. Pan (conflicting meeting) - Rinke Hoekstra (oficially on holiday)
http://www.w3.org/2007/OWL/wiki/index.php?title=Teleconference.2008.06.18/Agenda&oldid=8722
CC-MAIN-2015-06
en
refinedweb
Python Code Indentation Python uses whitespace (eg spaces and tabs) to group together lines of code that belong together. Any time a line ends with a colon, you are telling Python "hey, a block of code follows" and Python expects to see some indented code. Example if x = 1: call_functionA() call_functionB() next_function() The code call_functionA and call_functionB will be executed if x equals 1, so they are grouped together with the same indenting. The code next_function() isn't part of the "if" block so it will be always be executed next, regardless of whether x equals 1 or not. Example of bad indentation: In the following code, the second line is not a block. print('Sane math.') print('foobar') Therefore Python will tell you: File "test.py", line 2 print('foobar') ^ IndentationError: unexpected indent Notice how Python tells you the file and line where the indentation problem occurred. Instead you need to do print('Sane math.') print('foobar') Example error when a block not being indented If you have this code, which defines a function: def foo(): print('foo') Python will tell you: File "main.py", line 2 print('foo') ^ IndentationError: expected an indented block Here, Python is expecting at least one line of real code -- it can't be just a comment.Instead you need to do def foo(): print('foo') Notes Four spaces for indentation is preferred. Python's indentation is similar to functional programming language syntax but takes a while to get used to if you have used other programming languages that use curly braces { } to indicate a code block. Python does not use the semi-colon ; character at the end of code line, unlike many other programming languages. For readability, the end of a code block should be followed by a blank line, just like you use a blank line between paragraphs: Example - less readable: if 1 == 2: print('Wow, what kind of math are you using?') if 2 == 2: print('Sane math.') Example - more readable: if 1 == 2: print('Wow, what kind of math are you using?') if 2 == 2: print('Sane math.')
https://reference.codeproject.com/python3/structures/python-indentation
CC-MAIN-2021-43
en
refinedweb
UK High Altitude Society There are two peripheral libraries which can be used. Firstly is the official ST libraries, or alternately libopencm3. There are more examples available for the ST libraries, however libopencm3 is nicer to use, but still in development. STM32 Intro (Jon) slides here: stm32_intro_ukhas14.pdf Ideally you should be able to apt-get install the tools, however the version in the repo is broken slightly and doesnt have the 'nano' version of newlib (printf etc) export PATH=/usr/local/gcc-arm-none-eabi-4_8-2014q2/bin:$PATH to your ~/.bashrc (assuming using bash). $ source ~/.bashrc $ sudo add-apt-repository ppa:terry.guo/gcc-arm-embedded $ sudo apt-get update $ sudo apt-get install gcc-arm-none-eabi $ arm-none-eabi-gcc –version cdinto the root of the repo $ git submodule update –initto fetch libopencm3. cdto the libopencm3 directory and run $ maketo build. We've not got things set up for the F1 (or other) series yet, give us a shout if this is what you're after and we can help. Open up Makefile in firmware/src/. Adjust the last line for either the F0 or F4 as appropriate include ../common/Makefile.f0.includeOR include ../common/Makefile.f4.include Now open up common/stm32f0-discovery.ld or common/stm32f4-discovery. Adjust the RAM and ROM lengths as appropriate for your particular device. If you want to change the name of this file to satisfy your OCD, do so and then make the relevant change in firmware/common/Makefile.fx.include (where x=0 or 4 as appropriate). st-utiland st-flashbinaries A libopencm3 LED blink example is provided in firmware/src/main.c. cd to this directory and build the firmware with $ make - this should produce the main.elf file. If you're having issues then use $ make V=1 for more verbose build output. $ st-flash erase $ make bin $ st-flash write main.bin 0x8000000 On Linux, st-flash needs root privileges ( sudo ./st-flash …) to access the USB system until you set up udev rules In theory you should be able to follow the linux instructions. You will need to have make installed, as well as python. This guide will get a windows IDE based toolchain up and running. There is the option of either using ST or openlibcm3 libraries (see above). This guide uses 'coIDE,' however there are several available. This has the advantage that the ST libraries are 'built in', and so you just need to click on the ones you want and they are copied into the project. libopencm3 can still be used, but it requires a few more tweeks These are the drivers and downloading program for ST-Link, which is found on the ST development boards The STM32s also have a UART booloader, the tool to download is here: (you dont need this if you intend to use the SWD interface on the development boards) (skip to the next step if you want to use libopencm3) This example will now get an LED flashing. After project creation the 'repository' window should be showing, which has a selection of libraries that can be copied to the project. (if not, go view→repository) With the repository showing, click 'GPIO'. A whole load of files should have been copied into the project. Also click 'C library' (for printf/sprintf etc). Open main.c, and copy in the sample code below Firstly add the libopencm3 files to the project directory. This can be fetched from the libopencm3 repository and compiled as per above instructions, or run git submodule add .\firmware\libopencm3 if you want to add it to an existing repository. Since it is unlikely for all the tools (make, python and others) to be set up correctly. As a result, a precompiled version is available here. Unzip the contents into a separate libopencm3 folder along side your project The final file needed is part of the linker script. Copy this file (f4) or this file (f0) along side your project. Note that this file needs editing depending on how much flash/RAM the target has Now set up the IDE with these files: To run the program: coIDE defaults to STLink to download programs, however if you are having issues, check 'Download' settings in the project configuration #include "stm32f0xx.h" #include "stm32f0xx_gpio.h" #include "stm32f0xx_rcc.h" //include further headers as more peripherals are used int main(void) { //turn on GPIOC //IMPORTANT: every peripheral must be turned on before use RCC_AHBPeriphClockCmd(RCC_AHBPeriph_GPIOC, ENABLE); //init structure for GPIO GPIO_InitTypeDef GPIO_InitS; GPIO_InitS.GPIO_Pin = GPIO_Pin_9; //the pin we are configuring GPIO_InitS.GPIO_Mode = GPIO_Mode_OUT; //set to output mode GPIO_InitS.GPIO_OType = GPIO_OType_PP; //set to push/pull GPIO_InitS.GPIO_PuPd = GPIO_PuPd_NOPULL; //no pullup resistors GPIO_InitS.GPIO_Speed = GPIO_Speed_50MHz; //set to max speed GPIO_Init(GPIOC, &GPIO_InitS); //write this config to GPIOC while(1) //flash forever { GPIO_SetBits(GPIOC, GPIO_Pin_9); //set pin on int32_t i = 4800000; while(i) i--; //delay a bit GPIO_ResetBits(GPIOC, GPIO_Pin_9); //set pin off i = 4800000; while(i) i--; //delay a bit } } #include <libopencm3/stm32/rcc.h> #include <libopencm3/stm32/gpio.h> #define LED_PORT GPIOC #define LED_PIN GPIO9 int main(void) { // Set clock to 48MHz (max) rcc_clock_setup_in_hsi_out_48mhz(); // IMPORTANT: every peripheral must be clocked before use rcc_periph_clock_enable(RCC_GPIOC); // Configure GPIO C.9 as an output gpio_mode_setup(LED_PORT, GPIO_MODE_OUTPUT, GPIO_PUPD_NONE, LED_PIN); // Flash the pin forever while(1) { gpio_set(LED_PORT, LED_PIN); int32_t i = 4800000; while(i) i--; gpio_clear(LED_PORT, LED_PIN); i = 4800000; while(i) i--; } } while(1) //flash forever { GPIOC->BSRR |= (1<<9); int32_t i = 4800000; while(i) i--; GPIOC->BRR |= (1<<9); i = 4800000; while(i) i--; }
https://ukhas.org.uk/guides:stm32toolchain
CC-MAIN-2021-43
en
refinedweb
Getting a Document The first task in any process involving JDOM is to obtain a JDOM Document object. The Document object in JDOM is the core class that represents an XML document. Note Like all other objects within the JDOM model, the org.jdom.Document class is detailed in Appendix A, and all its method signatures are listed. Additionally, complete Javadoc on JDOM is available at. There are two ways to obtain a JDOM Document object: create one from scratch, when no existing XML data must be read, and build one from existing XML data. Starting from Scratch When no existing XML data is needed as a starting point, creating a JDOM Document is simply a matter of invoking a constructor: Document doc = new Document(new Element("root")); As we mentioned earlier, JDOM is a set of concrete classes, not a set of interfaces. This means that the more complicated code using factories to create objects as needed to create an org.w3c.dom.Element in DOM is unnecessary in JDOM. We simply perform the new operation on the Document object, and we have a viable JDOM Document that can be used. This Document is not tied to any particular parser, either. XML often needs to be created from a blank template, rather than an existing XML data source, so there is a JDOM constructor for org.jdom.Document that requires only a root Element as a parameter. Example 8.3 builds an XML document from scratch using JDOM. Example 8-3. Building a Document import org.jdom.Document; import org.jdom.Element; ... Get Java and XML now with O’Reilly online learning. O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers.
https://www.oreilly.com/library/view/java-and-xml/0596000162/ch08s03.html
CC-MAIN-2021-43
en
refinedweb
tensorflow:: ops:: SparseConcat #include <sparse_ops.h> Concatenates a list of SparseTensor along the specified dimension. Summary Concatenation is with respect to the dense versions of these sparse tensors. It is assumed that each input is a SparseTensor whose elements are ordered along increasing dimension number. All inputs' shapes must match, except for the concat dimension. The indices, values, and shapes lists must have the same length. The output shape is identical to the inputs', except along the concat dimension, where it is the sum of the inputs' sizes along concat_dim = ] Arguments: - scope: A Scope object - indices: 2-D. Indices of each input SparseTensor. - values: 1-D. Non-empty values of each SparseTensor. - shapes: 1-D. Shapes of each SparseTensor. - concat_dim: Dimension to concatenate along. Must be in range [-rank, rank), where rank is the number of dimensions in each input SparseTensor. Returns: Outputoutput_indices: 2-D. Indices of the concatenated SparseTensor. Outputoutput_values: 1-D. Non-empty values of the concatenated SparseTensor. Outputoutput_shape: 1-D. Shape of the concatenated SparseTensor. Public attributes output_indices ::tensorflow::Output output_indices output_shape ::tensorflow::Output output_shape output_values ::tensorflow::Output output_values Public functions SparseConcat SparseConcat( const ::tensorflow::Scope & scope, ::tensorflow::InputList indices, ::tensorflow::InputList values, ::tensorflow::InputList shapes, int64 concat_dim )
https://www.tensorflow.org/versions/r2.3/api_docs/cc/class/tensorflow/ops/sparse-concat?hl=nb-NO
CC-MAIN-2021-43
en
refinedweb
Hi Flavio, On an unrelated note, you will be happy to know that wxagg will be included in the next release, probably early next week. It's currently in CVS if you want to get started right away. font support has been thoroughly revised and improved by Paul Barrett, and these changes are not currently documented, so be don't be surprised if you get some unexpected font warnings in the CVS version. > Hi john , I was doing a pure TeX plot (a bunch of > equations inside a box) and I noticed that the > \sqrt{}command does not work even though it listed in the > help page for mathtext. > \frac and \dfrac would be a nice addition too... Agreed. And I guess asking for &=& array layout will be coming soon > feel free to used this little script as an example of > another use of mathtext... I will - they look nice! If you add more to it, be sure to send me the updated version. John Gill has written a Cell class for his Table class which is basically a rectangular box with a text instance inside. It might be nice to generalize that code to allow multiple lines of text to be added cell.add_line(t1) cell.add_line(t2) Cell already handles autosizing of the box to surround the text, and you wouldn't have to mess with turning off the ticks, etc.... John might be willing to do this, and it could be wrapped in a nice interface command textbox. > Flavio """ This script create a box with a series of > equations Your code revealed one bug unrelated to the sqrt problem you described, but you need to make the change below to have your example render properly. In mathtext.py, in the function math_parse_s, change maxoy = max(oys) to maxoy = abs(max(oys)) Now on to your script. A couple of minor comments first text(1,9,r'$dx/dt = \alpha y^{2}$', fontsize=15) the brackets for superscripts are not required; eg, the following is ok text(1,9,r'$dx/dt = \alpha y^2$', fontsize=15) Normally math functions like sin, cos, exp are in roman type, so I would use text(1,7,r'$dz/dt = \gamma x^2+\rm{sin}(2\pi y+\phi)$', fontsize=15) As for sqrt, the mathtext syntax differs from TeX. The main reason is that I don't draw an overbar with the sqrt symbol group, though this is something I can add (probably when I get around to dealing with frac, etc, all of which require some additional drawing and layout). The point is, you can't use the curly brackets with sqrt or you get a (silent) parse error. I'll try and amend the parser to allow the group. In the meantime, just do text(1,5,r'$\phi = zy + \Sqrt\alpha\beta $', fontsize=15) I noticed there is a small clipping bug with sqrt. There are still some hacks in the way I layout the cmex fonts which are discussed in the mathtext documentation - the clipping problem likely arises from this hack. Also, note that spaces are respected in font mode, so a hackish way to include them is \rm{ }. I've put adding the TeX small space command \/ on the (growing at an alarming rate) TODO list. So if you want a space after zy, you can do text(1,5,r'$\phi = zy\rm{ } + \sqrt\alpha\beta $', fontsize=15) That's it; here is the modified script that looks great! from matplotlib.matlab import * figure(1, figsize=(5,5), dpi=100) subplot(111) plot([0]) a=axis([0,10,0,10]) title('Equation Box') set(gca(),'xticklabels',[]) set(gca(),'yticklabels',[]) set(gca(),'xticks',[]) set(gca(),'yticks',[]) text(1,9,r'$dx/dt = \alpha y^2$', fontsize=15) text(1,8,r'$dy/dt = \beta x^2$', fontsize=15) text(1,7,r'$dz/dt = \gamma x^2+\rm{sin}(2\pi y+\phi)$', fontsize=15) text(1,5,r'$\phi = zy\rm{ } + \sqrt\alpha\beta $', fontsize=15) show()
https://discourse.matplotlib.org/t/equation-box/503
CC-MAIN-2021-43
en
refinedweb
avatica alternatives and similar packages Based on the "Relational Databases" category. Alternatively, view avatica alternatives based on common mentions on social networks and blogs. go-sql-driver/mysql9.8 5.7 avatica VS go-sql-driver/mysqlGo MySQL Driver is a MySQL driver for Go's (golang) database/sql package sqlx9.7 6.1 avatica VS sqlxgeneral purpose extensions to golang's database/sql pq9.5 6.5 avatica VS pqPure Go Postgres driver for database/sql go-sqlite39.4 4.6 L3 avatica VS go-sqlite3sqlite3 driver for go using database/sql pgx9.2 8.1 avatica VS pgxPostgreSQL driver and toolkit for Go go-mssqldb8.4 6.4 avatica VS go-mssqldbMicrosoft SQL server driver written in go language go-oci87.5 1.0 avatica VS go-oci8Oracle driver for Go using database/sql goracle6.3 1.3 avatica VS goracleOracle driver for Go, using the ODPI-C driver. godror6.2 8.6 avatica VS godrorGO DRiver for ORacle DB firebirdsql5.6 6.9 avatica VS firebirdsqlFirebird RDBMS sql driver for Go (golang) gofreetds5.4 0.0 avatica VS gofreetdsGo Sql Server database driver. go-bqstreamer5.1 0.0 avatica VS go-bqstreamerBigQuery fast and concurrent stream insert. go-adodb5.1 0.0 avatica VS go-adodbMicrosoft ActiveX Object DataBase driver for go that using exp/sql vertica-sql-go4.2 3.8 avatica VS vertica-sql-goOfficial native Go client for the Vertica Analytics Database. Sqinn-Go3.8 3.7 avatica VS Sqinn-GoSQLite with pure Go bgc2.9 0.0 avatica VS bgcDatastore Connectivity for BigQuery in go pig0.5 4.1 avatica VS pigSimple pgx wrapper to execute and scan query results * Code Quality Rankings and insights are calculated and provided by Lumnify. They vary from L1 to L5 with "L5" being the highest. Do you think we are missing an alternative of avatica or a related project? README Repository Deprecated This repository has moved to apache/calcite-avatica-go. Development will continue in the new repository. The code has been donated to the Apache Calcite project and is now part of the Apache Foundation. We recommend updating your import paths from github.com/Boostport/avatica to github.com/apache/calcite-avatica-go. This repository will be archived, but will still be readable for backwards compatibility. Apache Phoenix/Avatica SQL Driver An Apache Phoenix/Avatica driver for Go's database/sql package Getting started Install using the go tool or your dependency management tool: $ go get github.com/Boostport/avatica Usage The Phoenix/Avatica driver implements Go's database/sql/driver interface, so, import Go's database/sql package and the driver: import "database/sql" import _ "github.com/Boostport/avatica" db, err := sql.Open("avatica", "") Then simply use the database connection to query some data, for example: rows := db.Query("SELECT COUNT(*) FROM test") DSN (Data Source Name) The DSN has the following format (optional parts are marked by square brackets): http://[username:[email protected]]address:port[/schema][?parameter1=value&...parameterN=value] In other words, the scheme (http), address and port is mandatory, but the schema and parameters are optional. username This is the JDBC username that is passed directly to the backing database. It is NOT used for authenticating against Avatica. This is the JDBC password that is passed directly to the backing database. It is NOT used for authenticating against Avatica. schema The schema path sets the default schema to use for this connection. For example, if you set it to myschema, then executing the query SELECT * FROM my_table will have the equivalence of SELECT * FROM myschema.my_table. If schema is set, you can still work on tables in other schemas by supplying a schema prefix: SELECT * FROM myotherschema.my_other_table. The following parameters are supported: authentication The authentication type to use when authenticating against Avatica. Valid values are BASIC for HTTP Basic authentication, DIGEST for HTTP Digest authentication, and SPNEGO for Kerberos with SPNEGO authentication. avaticaUser The user to use when authenticating against Avatica. This parameter is required if authentication is BASIC or DIGEST. avaticaPassword The password to use when authenticating against Avatica. This parameter is required if authentication is BASIC or DIGEST. principal The Kerberos principal to use when authenticating against Avatica. It should be in the form primary/[email protected], where the instance is optional. This parameter is required if authentication is SPNEGO and you want the driver to perform the Kerberos login. keytab The path to the Kerberos keytab to use when authenticating against Avatica. This parameter is required if authentication is SPNEGO and you want the driver to perform the Kerberos login. krb5Conf The path to the Kerberos configuration to use when authenticating against Avatica. This parameter is required if authentication is SPNEGO and you want the driver to perform the Kerberos login. krb5CredentialsCache The path to the Kerberos credential cache file to use when authenticating against Avatica. This parameter is required if authentication is SPNEGO and you have logged into Kerberos already and want the driver to use the existing credentials. location The location will be set as the location of unserialized time.Time values. It must be a valid timezone. If you want to use the local timezone, use Local. By default, this is set to UTC. maxRowsTotal The maxRowsTotal parameter sets the maximum number of rows to return for a given query. By default, this is set to -1, so that there is no limit on the number of rows returned. frameMaxSize The frameMaxSize parameter sets the maximum number of rows to return in a frame. Depending on the number of rows returned and subject to the limits of maxRowsTotal, a query result set can contain rows in multiple frames. These additional frames are then fetched on a as-needed basis. frameMaxSize allows you to control the number of rows in each frame to suit your application's performance profile. By default this is set to -1, so that there is no limit on the number of rows in a frame. transactionIsolation Setting transactionIsolation allows you to set the isolation level for transactions using the connection. The value should be a positive integer analogous to the transaction levels defined by the JDBC specification. The default value is 0, which means transactions are not supported. This is to deal with the fact that Calcite/Avatica works with many types of backends, with some backends having no transaction support. If you are using Apache Phoenix 4.7 onwards, we recommend setting it to 4, which is the maximum isolation level supported. The supported values for transactionIsolation are: time.Time support The following Phoenix/Avatica datatypes are automatically converted to and from time.Time: TIME, DATE and TIMESTAMP. It is important to understand that avatica and the underlying database ignores the timezone. If you save a time.Time to the database, the timezone is ignored and vice-versa. This is why you need to make sure the location parameter in your DSN is set to the same value as the location of the time.Time values you are inserting into the database. We recommend using UTC, which is the default value of location. Version compatibility Development To run tests, but skip tests in the vendor directory, run: go test $(go list ./... | grep -v /vendor/) The driver is not feature-complete yet, so contributions are very appreciated. Updating protocol buffer definitions To update the procotol buffer definitions, update CALCITE_VER in gen-protobuf.bat and gen-protobuf.sh to match the version included by Phoenix and then run the appropriate script for your platform. About the moby.yml file The moby.yml file is used by our internal tool to automatically reload and test the code during development. We hope to have this tool open-sourced soon. License The driver is licensed under the Apache 2 license. *Note that all licence references and agreements mentioned in the avatica README section above are relevant to that project's source code only.
https://go.libhunt.com/avatica-alternatives
CC-MAIN-2021-43
en
refinedweb
The CData Python Connector for SharePoint enables you to create ETL applications and pipelines for SharePoint data in Python with petl. The rich ecosystem of Python modules lets you get to work quickly and integrate your systems more effectively. With the CData Python Connector for SharePoint and the petl framework, you can build SharePoint-connected applications and pipelines for extracting, transforming, and loading SharePoint data. This article shows how to connect to SharePoint with the CData Python Connector and use petl and pandas to extract, transform, and load SharePoint data. With built-in, optimized data processing, the CData Python Connector offers unmatched performance for interacting with live SharePoint data in Python. When you issue complex SQL queries from SharePoint, the driver After installing the CData SharePoint Connector, follow the procedure below to install the other required modules and start accessing SharePoint through Python objects. Install Required Modules Use the pip utility to install the required modules and frameworks: pip install petl pip install pandas Build an ETL App for SharePoint.sharepoint as mod You can now connect with a connection string. Use the connect function for the CData SharePoint Connector to create a connection for working with SharePoint data. cnxn = mod.connect("User=myuseraccount;Password=mypassword;Auth Scheme=NTLM;URL=;SharePointEdition=SharePointOnPremise;") Create a SQL Statement to Query SharePoint Use SQL to create a statement for querying SharePoint. In this article, we read data from the MyCustomList entity. sql = "SELECT Name, Revenue FROM MyCustomList WHERE Location = 'Chapel Hill'" Extract, Transform, and Load the SharePoint Data With the query results stored in a DataFrame, we can use petl to extract, transform, and load the SharePoint data. In this example, we extract SharePoint data, sort the data by the Revenue column, and load the data into a CSV file. table1 = etl.fromdb(cnxn,sql) table2 = etl.sort(table1,'Revenue') etl.tocsv(table2,'mycustomlist_data.csv') In the following example, we add new rows to the MyCustomList table. Adding New Rows to SharePoint table1 = [ ['Name','Revenue'], ['NewName1','NewRevenue1'], ['NewName2','NewRevenue2'], ['NewName3','NewRevenue3'] ] etl.appenddb(table1, cnxn, 'MyCustomList') With the CData Python Connector for SharePoint, you can work with SharePoint data just like you would with any database, including direct access to data in ETL packages like petl. Free Trial & More Information Download a free, 30-day trial of the SharePoint Python Connector to start building Python apps and scripts with connectivity to SharePoint data. Reach out to our Support Team if you have any questions. Full Source Code import petl as etl import pandas as pd import cdata.sharepoint as mod cnxn = mod.connect("User=myuseraccount;Password=mypassword;Auth Scheme=NTLM;URL=;SharePointEdition=SharePointOnPremise;") sql = "SELECT Name, Revenue FROM MyCustomList WHERE Location = 'Chapel Hill'" table1 = etl.fromdb(cnxn,sql) table2 = etl.sort(table1,'Revenue') etl.tocsv(table2,'mycustomlist_data.csv') table3 = [ ['Name','Revenue'], ['NewName1','NewRevenue1'], ['NewName2','NewRevenue2'], ['NewName3','NewRevenue3'] ] etl.appenddb(table3, cnxn, 'MyCustomList')
https://www.cdata.com/kb/tech/sharepoint-python-petl.rst
CC-MAIN-2021-43
en
refinedweb
Do you often ask yourself... I have built an ASP.NET Core Web Application, now what? How do I take it to the next level using tools and platforms like GitHub, Docker, CI/CD and Microsoft Azure with my application? If YES!! You are at the right place!. So, stay with me, there is a lot to cover. Let's get started! Three parts in series: The first step is to setup a repository on GitHub where you are going to put all your code. This is required, because at a later stage, you will setup Azure to trigger a build and release, each time you do a push to your repository. push. Open up a terminal or powershell session, navigate to the directory where you want to clone the repository and execute the command: $ git clone <repository-clone-url> With that set, we are ready to start developing our application. In order to create the application, you must install .NET Core (version >= 2.0). Go to .NET Core download page and install the SDK compatible with the underlying system. Now, if you are on Windows, open Visual Studio and create a new ASP.NET Core Web Application Project. style="width: 640px; height: 286px" data-src="/KB/azure/3239411/0c564a91-f648-4426-8ffe-ceb4d9147c8c.Png" class="lazyload" data-sizes="auto" data-> New ASP.NET Core Web Application Project On the next page, select the Web Application (Model-View-Controller) template and ensure that "Configure for HTTPS" is unchecked. Just keep it simple. Click OK. If you are on Linux or Mac OS, open a terminal and navigate to the clone directory. You can create a .NET Core MVC Web App using the following command: $ dotnet new mvc --name <your-project-name> Once done, you can now use your favorite editor to make required changes. For further learning or solving an issue, I would highly recommend you checkout my YouTube video playlist about Getting started with .NET Core on Linux or Mac OS. For further learning or solving an issue, I would highly recommend you checkout my YouTube video playlist about Getting started with .NET Core on Linux or Mac OS. At this point, I leave it to you, my dear reader, to come up with an idea of your simple application and bring it to life. Or you can follow along as I make: ... -Docker WebApp</a> </div> <div class="navbar-collapse collapse"> <ul class="nav navbar-nav"> <li><a asp-Home</a></li> <li><a asp-Articles</a></li> </ul> </div> </div> </nav> ... Now, when we click on the Articles link, an HTTP GET request will be sent Articles action in the Home controller, which is yet to be updated. Under the Models directory, add a .cs file ArticlesViewModel.cs, which will have the required model classes: using System; using System.Collections.Generic; namespace WebApp.Models { public class ArticlesViewModel { public List<Article> Articles { get; set; } public ArticlesViewModel() { Articles = new List<Article>(); } } public class Article { public int Id { get; set; } public string Title { get; set; } public string Author { get; set; } public DateTime PublishedOn { get; set; } public string Content { get; set; } } } Next, let's add some static data to our blog using the ArticleRepository.cs: using System; using System.Collections.Generic; using System.Linq; using WebApp.Models; namespace WebApp { public class ArticleRepository { private List<Article> articles = new List<Article> { new Article { Id = 1, Title = "What is Lorem Ipsum?", Author= "Gaurav Gahlot", PublishedOn = new DateTime(2019, 01, 20), Content = "Lorem Ipsum is simply dummy text of the printing and typesetting industry." }, } public List<Article> GetLatest() { return articles; } } } The Home controller needs to have an action that can handle the requests coming for list of latest articles. Here is the new controller with Articles action for GET requests: GET using Microsoft.AspNetCore.Mvc; using System.Collections.Generic; using System.Net.Http; using WebApp.Models; namespace WebApp.Controllers { public class HomeController : Controller { public IActionResult Index() { return View(); } public IActionResult Articles() { var model = new ArticlesViewModel(); model.Articles = new ArticleRepository().GetLatest(); return View(model); } } } The last thing we need to do is to add a View that will render our data. So, add an Articles View under Views/Home/ with the following code to render the latest articles: @{ ViewData["Title"] = "Articles"; } <div id="myCarousel" class="carousel slide" data- <div class="carousel-inner" role="listbox"> <div class="item active"> <img src="~/images/docker.png" alt="Docker" class="img-responsive" /> <div class="carousel-caption" role="option"> <p style="font-size:xx-large; color:darkslategrey"> If you see me swim, your application is up and running in Docker. </p> </div> </div> </div> </div> Once you are done making the code changes, it's time to push your code on GitHub. Open a terminal and navigate to the project directory. You can check the status of your local repository using the command: $ git status You should see all the files and directories being added or updated. Now, to stage all the changes in your repository, run the command: $ git add . Let's commit the changes with a short meaningful message: $ git commit -m "your-commit-message" Finally, push all the committed changes to remote branch on GitHub: $ git push origin Note: It is not a good practice to commit all the changes at the end. In fact, you should frequently commit all the changes that can be grouped logically. Note: It is not a good practice to commit all the changes at the end. In fact, you should frequently commit all the changes that can be grouped logically. At this point, I am assuming that your application is working well and you are ready to containerize your application with Docker. According to Docker documentation: A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Docker can build images automatically by reading the instructions from a Dockerfile. Here is the Dockerfile for my application: # STAGE01 - Build application and its dependencies FROM microsoft/dotnet:2.1-sdk AS build-env WORKDIR /app COPY WebApp/*.csproj ./ COPY . ./ RUN dotnet restore # STAGE02 - Publish the application FROM build-env AS publish RUN dotnet publish -c Release -o /app # STAGE03 - Create the final image FROM microsoft/dotnet:2.1-aspnetcore-runtime WORKDIR /app LABEL Author="Gaurav Gahlot" LABEL Maintainer="quickdevnotes" COPY --from=publish /app . ENTRYPOINT ["dotnet", "WebApp.dll", "--server.urls", "http://*:80"] Note that I'm using a multi-stage build to ensure that the final image is as small as possible.: $ docker build -t webapp . This will build a Docker image and keep it on our local system. To test our image and application, we will now run a container using the command: $ docker run -d -p 5000:80 --rm --name webapp webapp 19c758fdb9bfb608c4b261c9f223d314fce91c6d71d33d972b79860c89dd9f15 The above command creates a container and prints the container ID as output. You may verify that the container is running using the docker ps command. docker ps $ docker ps CONTAINER ID IMAGE PORTS NAMES MOUNTS 19c758fdb9bf webapp 0.0.0.0:5000->80/tcp webapp Now, open a browser and go to the URL. If everything is working fine, you must see your web application's home page. In my case, it looks like this: style="width: 640px; height: 413px" data-src="/KB/azure/3239411/086f5eae-4595-44ff-9070-936b23072160.Jpeg" class="lazyload" data-sizes="auto" data-> Test your application and once you are sure that it's working, commit and push the Dockerfile to your GitHub repository. master branch on the repository. The pipeline will create a Docker image and push it to Docker Hub..
https://www.codeproject.com:443/Articles/3239411/Build-and-Deploy-an-ASP-NET-Core-Web-Application-a?msg=5674815&PageFlow=FixedWidth
CC-MAIN-2021-43
en
refinedweb
Not Enough Standards is a modern header-only C++17 and C++20 library that provides platform-independent utilities. The goal of this library is to extend the standard library with recurent features, such as process management, shared library loading or thread pools. To reach that goal the library is written in a very standard compliant way, from the coding-style to the naming convention. Not Enough Standards works on any posix-compliant system and also on Windows. Not Enough Standards requires a C++17 compiler, and a C++20 compiler for thread pools. As any header only library, Not Enough Standards is designed to be directly included in your project, by copying the files you need in your project's directory. You may also use it as a CMake subproject using add_subdirectory, and use it as any other library: target_link_libraries(xxx not_enough_standards) target_include_directories(xxx PRIVATE ${NES_INCLUDE_DIR}) The files of the library are independent from each others, so if you only need one specific feature, you can use only the header that contains it. Actually the only file with a dependency is process.hpp which defines more features if pipe.hpp is available. Here is a short example using Not Enough Standards: #include <iostream> #include <nes/process.hpp> int main() { //nes::this_process namespace can be used to modify current process or get informations about it. std::cout << "Current process has id " << nes::this_process::get_id() << std::endl; std::cout << "Its current directory is \"" << nes::this_process::working_directory() << "\"" << std::endl; //Create a child process nes::process other{"other_process", {"Hey!", "\\\"12\"\"\\\\", "\\42\\", "It's \"me\"!"}, nes::process_options::grab_stdout}; //Read the entire standard output of the child process. (nes::process_options::grab_stdout must be specified on process creation) std::cout << other.stdout_stream().rdbuf() << std::endl; //As a std::thread, a nes::process must be joined if it is not detached. if(other.joinable()) other.join(); //Once joined, we can check its return code. std::cout << "Other process ended with code: " << other.return_code() << std::endl; } #include <iostream> #include <nes/process.hpp> int main(int argc, char** argv) { //Output some informations about this process std::cout << "Hello world! I'm Other!\n"; std::cout << "You gave me " << argc << " arguments:"; for(int i{}; i < argc; ++i) std::cout << "[" << argv[i] << "] "; std::cout << '\n'; std::cout << "My working directory is \"" << nes::this_process::working_directory() << "\"" << std::endl; } Current process has id 3612 Its current directory is "/..." Hello world! I'm Other! You gave me 5 arguments:[not_enough_standards_other.exe] [Hey!] [\"12""\\\] [\42\] [It's "me"!] My working directory is "/..." Other process ended with code: 0 Not Enough Standards use the MIT license.
https://awesomeopensource.com/project/Alairion/not-enough-standards
CC-MAIN-2021-43
en
refinedweb
HTTP Since Camel 2.3 Only producer is supported The HTTP component provides HTTP based endpoints for calling> URI format http:hostname[:port][/resourceUri][?options] Will by default use port 80 for HTTP and 443 for. Using System Properties When setting useSystemProperties to true, the HTTP Client will look for the following System Properties and it will use it: ssl.TrustManagerFactory.algorithm javax.net.ssl.trustStoreType - javax.net.ssl.trustStoreProvider javax.net.ssl.trustStorePassword java.home ssl.KeyManagerFactory.algorithm javax.net.ssl.keyStoreType - javax.net.ssl.keyStoreProvider javax.net.ssl.keyStorePassword http.proxyHost http.proxyPort http.nonProxyHosts http.keepAlive http.maxConnections. Exceptions HttpOperationFailedException> You can override the HTTP endpoint URI by adding a header with the key, Exchange.HTTP_URI, on the message. from("direct:start") .setHeader(Exchange.HTTP_URI, constant("")) .to(""); In the sample above Camel will call the despite the endpoint is configured with. If the http endpoint is working in bridge mode, it will ignore the message header of Exchange.HTTP_URI. Configuring URI Parameters The http producer supports URI parameters to be sent to the HTTP server. The URI parameters can either be set directly on the endpoint URI or as a header with the key Exchange.HTTP_QUERY on the message. from("direct:start") .to(""); Or options provided in a header: from("direct:start") .setHeader(Exchange.HTTP_QUERY, constant("order=123&detail=short")) .to(""); How to set the http method (GET/PATCH/POST/PUT/DELETE/HEAD/OPTIONS/TRACE) to the HTTP producer The HTTP component provides a way to set the HTTP request method by setting the message header. Here is an example: from("direct:start") .setHeader(Exchange name="CamelHttpMethod"> <constant>POST</constant> </setHeader> <to uri=""/> <to uri="mock:results"/> </route> </camelContext> Using client timeout - SO_TIMEOUT See the HttpSOTimeoutTest unit test. Configuring a Proxy The HTTP component provides a way to configure a proxy. from("direct:start") .to(""); There is also support for proxy authentication via the proxyAuthUsername and proxyAuthPassword options. Using proxy settings outside of URI To avoid System properties conflicts, you can set proxy configuration only from the CamelContext or URI. Java DSL : context.getGlobalOptions().put("http.proxyHost", "172.168.18.9"); context.getGlobalOptions().put("http.proxyPort", "8080"); Spring XML . There is also a http.proxyScheme property you can set to explicit configure the scheme to use. Configuring charset If you are using POST to send data you can configure the charset using the Exchange property: exchange.setProperty(Exchange.CHARSET_NAME, "ISO-8859-1"); Sample with scheduled poll This sample polls the Google homepage every 10 seconds and write the page to the file message.html: from("timer://foo?fixedRate=true&delay=0&period=10000") .to("") .setHeader(FileComponent.HEADER_FILE_NAME, "message.html") .to("file:target/google");("", null); URI Parameters from the Message Map headers = new HashMap(); headers.put(Exchange.HTTP_QUERY, "q=Camel&lr=lang_en"); // we query for Camel and English language at Google template.sendBody("", null, headers); In the header value above notice that it should not be prefixed with ? and you can separate parameters as usual with the & char.); Disabling Cookies To disable cookies you can set the HTTP Client to ignore cookies by adding this URI option: httpClient.cookieSpec=ignoreCookies Basic auth with the streaming message body In order to avoid the NonRepeatableRequestException, you need to do the Preemptive Basic Authentication by adding the option: authenticationPreemptive=true Advanced Usage If you need more control over the HTTP producer you should use the HttpComponent where you can set various classes to give you custom behavior. Setting up SSL for HTTP Client Using the JSSE Configuration Utility The HTTP); HttpComponent httpComponent = getContext().getComponent("https", HttpComponent.class); Configuring Apache HTTP Client Directly Basically camel-http component is built on the top of Apache HttpClient. Please refer to SSL/TLS customization for details or have a look into the org.apache.camel.component.http.HttpsServerTestSupport unit test base class. You can also: KeyStore keystore = ...; KeyStore truststore = ...; SchemeRegistry registry = new SchemeRegistry(); registry.register(new Scheme("https", 443, new SSLSocketFactory(keystore, "mypassword", truststore)));. Using HTTPS to authenticate gotchas An end user reported that he had problem with authenticating with HTTPS. The problem was eventually resolved by providing a custom configured org.apache.http.protocol.HttpContext: 1. Create a (Spring) factory for HttpContexts: public class HttpContextFactory { private String httpHost = "localhost"; private String httpPort = 9001; private BasicHttpContext httpContext = new BasicHttpContext(); private BasicAuthCache authCache = new BasicAuthCache(); private BasicScheme basicAuth = new BasicScheme(); public HttpContext getObject() { authCache.put(new HttpHost(httpHost, httpPort), basicAuth); httpContext.setAttribute(ClientContext.AUTH_CACHE, authCache); return httpContext; } // getter and setter } 2. Declare an HttpContext in the Spring application context file: <bean id="myHttpContext" factory- 3. Reference the context in the http URL: <to uri=""/> Using different SSLContextParameters The HTTP component only support one instance of org.apache.camel.support.jsse.SSLContextParameters per component. If you need to use 2 or more different instances, then you need to setup multiple HTTP components as shown below. Where we have 2 components, each using their own instance of sslContextParameters property. <bean id="http-foo" class="org.apache.camel.component.http.HttpComponent"> <property name="sslContextParameters" ref="sslContextParams1"/> <property name="x509HostnameVerifier" ref="hostnameVerifier"/> </bean> <bean id="http-bar" class="org.apache.camel.component.http.HttpComponent"> <property name="sslContextParameters" ref="sslContextParams2"/> <property name="x509HostnameVerifier" ref="hostnameVerifier"/> </bean> Spring Boot Auto-Configuration When using http with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-http-starter</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> The component supports 38 options, which are listed below.
https://camel.apache.org/components/latest/http-component.html
CC-MAIN-2021-43
en
refinedweb
Scala Driver for ArangoDB Profig was a file @darkfrog26 ; Hi, Would You give any idea to process this?, Please. I had not used circe before and I have some troubles to make it work my project. I have this error: could not find implicit value for parameter e: io.circe.Encoder[types_scarango.Recipe] ompile / compileIncremental 4s [error] override val serialization: Serialization[Recipe] = Serialization.auto[Recipe] I tried some implicits but didn worked because I think is related to java.util.Date. this is the code: code import io.circe._ import io.circe.generic.auto._ import io.circe.syntax._ case class Recipe ( fiscal_period: Int, folio: String, date: Date, id_area: Option[String], area: Option[String], servicio: Option[String], id_patient: Option[String], patient_name: Option[String], file_id: Option[String], physician_name: Option[String], diagnostic: Option[String], recomendation: Option[String], observation: Option[String] ) extends Document[Recipe] object Recipe extends DocumentModel[Recipe] { implicit val dateEncoder: Encoder[Date] = Encoder.instance( a => "01/01/2021".asJson ) implicit val dateDencoder: Decoder[Date] = Decoder.instance( a => a.as[Long].map(new Date(_))) override def indexes: List[Index] = Nil override val collectionName: String = "tblrecipe" override val serialization: Serialization[Recipe] = Serialization.auto[Recipe] } Long
https://gitter.im/outr/scarango?at=608d813d20d4f0263197428a
CC-MAIN-2021-43
en
refinedweb
First, let's have a look at the problem CSS isolation is intended to mitigate. As you build a web application, you will generally place CSS styles in a global style sheet that is referenced in the main layout file. That way, the declarations within the style sheet are available to all pages that make use of the styles, whether they actually need them or not. As you continue to develop the application, new styles will be added that relate to specific sections or even pages. You might want to change the default font for a single page, for example, so you add a new CSS selector to your style sheet that you can use to target elements on that page only, and update the class attributes of the target elements accordingly. Your global style sheet grows and grows. Your primary style management tool is Ctrl + F. Over time, you forget which style declarations are actually being used and which can safely be removed. Now, it has always been possible to create page-specific style sheets and to use the Razor sections feature to inject them into the layout on a page-by-page basis. This works ok, but requires you to remember to define the section and add the relevant HTML to include the style sheet. It also means that there is an additional HTTP call for each page-specific style sheet, until they are cached by the browser - unless you configure bundling for your page-specific styles. CSS isolation in Razor Pages basically removes the reliance on sections and includes bundling for free. So how does CSS isolation work? The feature will be enabled by default in Razor Pages, so there is no need to add additional packages or configure any services or middleware. All you have to do is to place a style sheet in the Pages folder alongside the page that it is intended to affect. You just need to follow a specific naming convention, which is the Razor page file name with .css on the end. In my case, I only want to affect the font for <p> elements in the website's home page - the Index.cshtml file, so I add a file named Index.cshtml.css to the folder where the file is, and Visual Studio helpfully groups and nests it with the existing Razor page files: The content of the file sets the font for the selector to the new one that comes with VS 2022 (or your default monospace font): p { font-family: 'Cascadia Mono', monospace; } All style sheets need to be referenced, and this one is no different, except that the reference is in the format name_of_application.styles.css. In my case, the name of the project is CssIsolationDemo, so I use the nameof operator, passing in the application's namespace. The link reference goes in the layout file, just like other global style sheet references: <link rel="stylesheet" href="@(nameof(CssIsolationDemo)).styles.css" /> When I run the application, I can see that the paragraph font on the home page has been styled appropriately: whereas the style on the Privacy page is unaffected: So how does it work? Well, if we look at the rendered source for the home page, we can see that an additional attribute has been injected into every element that was generated by the Index.cshtml template: That attribute is used as part of the selector in the CSS file that was generated and is served from the reference that we added in the layout file (): If you want to add a CSS file that affects another Index.cshtml page in the application, simply add it to the folder where the target Index file resides: The contents of multiple isolated CSS files are automatically bundled, You can see that a different attribute value is generated for each page: The bundled file itself is placed in the wwwroot folder when the application is published. Just one thing to note, CSS isolation is a build step, so it doesn't work with Razor Runtime Compliation and there don't appear to be any plans to change this for the foreseeable future. So if you find that this feature doesn't seem to work for you, it's worth checking that you haven't enabled runtime compilation of your Razor files as a first troubleshooting step. Summary SPA frameworks like Angular, React and Vuejs have supported the ability to scope CSS to individual components for a while, and Blazor had to jump on board in the last release of .NET to keep up. It's nice that this is being added to Razor Pages (and MVC, if you still want to generate HTML that way) from .NET 6 onwards.
https://www.mikesdotnetting.com/article/355/css-isolation-in-razor-pages
CC-MAIN-2021-43
en
refinedweb
Important: Please read the Qt Code of Conduct - pyside2 qPushButton shape i downloaded an image black and white and hollow from middle i want yo use that as my button but when i use QIcon or QPixmap, none of the permutations seem to work can any one suggest a way i am using this for my proj - KazuoAsano Qt Champions 2018 last edited by Hi, @blossomsg Please refer as below. It's sample of push button with image using QPixmap / QIcon. import sys from PySide2.QtWidgets import QApplication, QPushButton from PySide2.QtGui import QPixmap, QIcon from PySide2.QtCore import QSize def clicked(): print("Button clicked!") app = QApplication(sys.argv) # Set Push Button button = QPushButton("Qt Chan") # Set image in Push Button pixmap = QPixmap("QtChan.png") button_icon = QIcon(pixmap) button.setIcon(button_icon) button.setIconSize(QSize(100,100)) button.show() button.clicked.connect(clicked) app.exec_() Thanks for the reply, The answer that i am seeking for is how to change the shape of the button but not just add a picture to the button, i want a shape but not the whole box shape, QPushButton tends to provide the whole button, i even tried using setMask but no success. below are eg the kind of shapes i want my button to look These buttons are references of 3d package - Maya. Thank You. - mrjj Lifetime Qt Champion last edited by mrjj Hi QToolbutton has a autoraise property that hides buttons borders until activated providing a flat look. Alternatively, you can apply style sheet to PushButton containing border:0 to disable normal drawing and only image will be visible. If you want a press down effect, you can also use stylesheet to supply a second image to be shown when pressed @mrjj Hello, sorry for late reply was busy related to the project. below is the reference code i wrote.png); border:none} QPushButton:hover{image:url(D:\All_Projs\NitinProj\anim_conglomeration\chess.png); border:none}") vlayout_wid.addWidget(test_button_1) vlayout_wid.addWidget(test_button_2) window_wid.setLayout(vlayout_wid) window_wid.show() the result that i am seeking for i was able to achieve it in linux or rather see in linux the highlight whenever i hover mouse on the image, but windows is not able to show the stylesheet hover and pressed image url(highlights - darkening of the button). But i am fine with the result and can continue right now. am open for suggestions Thank You. @blossomsg Hi Normally you would put the images in a qresource file. Are those available with python ? this is with toggleCheckable-piece.png); border:none}") vlayout_wid.addWidget(test_button_1) vlayout_wid.addWidget(test_button_2) window_wid.setLayout(vlayout_wid) window_wid.show() Ok, pretty much the same as without :) Hi, i'll look into the qresource and will check and keep you posted but for now i think so i'll resolve it if its ok. otherwise the project will lag. Thanks anyways but will open a new thread for qresource. @blossomsg Hi That is fine, please marked as solved then :)
https://forum.qt.io/topic/98025/pyside2-qpushbutton-shape
CC-MAIN-2021-43
en
refinedweb
DecodingFailure(CNil, List(DownArray))when decoding a list of sealed trait elements. Any thoughts? could not find implicit value for evidence parameter of type sttp.tapir.Schema[ Endpoints and then interpret them (to a server, say). Or is the idea to interpret Endpoints separately and combine them using the capabilities of the target HTTP library? If so, why is that? hi :) I want fine-grained errors, meaning for all endpoints I'd use errorBaseEndpoint (like in documentation) and just for some endpoints also add BadRequest. Is it possible to "flatten"/untuple the errors , because when I try: BaseEndpoints.errorBaseEndpoint .prependErrorOut( oneOf[ErrorInfo]( statusMapping(StatusCode.BadRequest, jsonBody[ErrorInfo.BadRequest].description("bad request")) ) ) My error type is (ErrorInfo, ErrorInfo) Hi All, I'd like to declare a post endpoint which takes a bunch of people case class Person(id: Long, name: String) without id's, i.e. the request body is like [ { "name": "a" }, { "name": "b" } ] The response will be those people with id's generated on the server with status code of 201 (CREATED). My endpoint is val postPeopleEndpoint: Endpoint[String, ErrorInfo, Iterable[Person], Any] = endpoint .post .in("people") .in(stringBody) // this is string, not jsonBody[Iterable[Person]] .errorOut(oneOf[ErrorInfo]( statusMapping(StatusCode.BadRequest, jsonBody[BadRequest]) )) .out(jsonBody[Iterable[Person]]) That endpoint will return a status code of 200. How can I make it return 201? Btw, I have an example of an authenticated endpoint blueprint (an endpoint that is a PartialServerEndpoint, with the authentication logic built in around an optional bearer input). That same endpoint is then extended by each final endpoint, which implements the custom logic. The problem I'm facing is that those final endpoints also have inputs (body inputs to be exact, parsing json). So I cannot make a test that fails with a 401, because the body input makes the route fail before, with a 400 error, when no body is in the request. Is there an option to make it fail with a 401 when no body is there? @GMadorell Thank you very much for your help. So I cannot make a test that fails with a 401 I think one issue with this declarative way is that it's not flexible. I dealt with your issue with .in(stringBody).errorOut(oneOf ...and let the server logic return a particular Left[ParticularType]which is mapped to a particular http status code. I wish that the Tapir had more examples in github. @adamw I have a strange issue where I modify a schema to add a description, and for one field it works, and for the other is does not. implicit val responseSchema: Schema[DesignExecutionResponse] = implicitly[Derived[Schema[DesignExecutionResponse]]].value .modify(_.score)(_.description("The score that was used to generate candidates")) .modify(_.descriptors)( _.description("The descriptors that correspond to all the variable keys in the design results")) The second description is generated, but the first is not. Any thought on where I would look to debug that? val routes: Route = withRequestTimeoutResponse(request => HttpResponse(StatusCodes.EnhanceYourCalm, entity = "Unable to serve response within time limit, please enhance your calm.") ) { loginUser.toRoute { _ => Future(Right(logic)) } } I think I'm missing a simple example of server logic error handling. I have F[MyType] as my effect (closely following ). endpoint shape is endpoint.in("somePath").out(jsonBody[MyType]) At this point, the type is ServerEndpoint[Unit, Unit, MyType, Any, F]. I'm trying to produce a ServerEndpoint that returns no body on error, but returns an appropriate status code on error. I was thinking serverLogic(_ => existingLogic) where existingLogic returns F[MyType]. Using serverLogic with attempt has a type issue as the endpoint thus far delares error type to be Unit (expects F[Either[Unit, MyType]]. Using serverLogicRecoverErrorssimilarly already expects E <:< Throwable but E has only been built up to be Unit so far. Is there a simple way of doing this? perhaps more specifically: documentation in Exception Handling states: If the logic function, which is passed to the server interpreter, fails (i.e. throws an exception, which results in a failed Future or IO/Task), this is propagated to the library (akka-http or http4s). But I fail to see how it gets propagated; seems like the errorOut declaration (to coerce the error type to Throwable) would result in this being caught/handled by tapir. At the last, it shows up in OpenAPI documentation with whatever status code mapping is applied to Throwable toRoutesand toOpenAPI). Looking forward to more tapir/sttp awesomeness :) @description("blah")to the case class. Using automatic derivation, I do not see this description in the resulting doc. But using implicitly[Derived[Schema[MyType]].value.description("but this works")I do see such a thing. Hi, I'm digging around AsyncAPI docs generation, got success with some initial stuff, but not sure how to customize documentation of in/out messages. For example: case class Response(hello: String) val wsEndpoint: Endpoint[Unit, Unit, Flow[String, Response, Any], AkkaStreams with WebSockets] = endpoint.get.in("ping").out(webSocketBody[String, CodecFormat.TextPlain, Response, CodecFormat.Json](AkkaStreams)) How to specify the 'description' tag of 'Response' here? Array[Tuple2[TypeA, TypeB]]and I can't find any auto- or semiauto-derived way of generating a valid schema. [ [ { /* type a */ }, { /* type b */ } ], [ ... ] ]. I'm not entirely sure if this is supported by openapi but if so, could tapir generate such a data type? Hi! I am using tapir to create a post endpoint (with http4s as backend) that can receive a file (as byte array) through multipart submission and I am looking to add a filter/check that would stop submitters from uploading files over a certain size, e.g. 100MB (this is mainly to protect the server from malicious use). What would be the idiomatic way to implement such a check using tapir? I am looking to fail the upload if the uploaded content exceeds the limit (and not wait until the full upload has completed). I've considered a couple of options but none seem great: content-lengthheader (but it could be missing or inaccurate and I only seem to be able to evaluate it after the data has been received by http4s) Filesize after uploaded (again too late) Codec.mapDecode- if I'm reading this correctly, it will always reset the validatorto Validator.pass. map(Mapping.fromDecode(...))which: Mapping.fromDecodehard-codes Validator.pass map(codec: Mapping[H, HH]])calls outer.schema.contramap(codec.encode).validate(codec.validator)which will replace the Schemas validator with the new always passversion, won't it? Hello folks! First, thank you for the awesome library! I'm trying learn enough of it to protect myself against Spring Boot at work. Spring allows to generate typed http clients in a single step: Something like (Api, Url) => ClientApiImpl. This is what people usually want: just an object with ( I => Future[O]) method and all http stuff being encapsulated. As far as I could find, Tapir requires to do some plumbing to achieve that. 1) create an http interpreter val cl = SttpClientInterpreter.toRequestUnsafe(apiDef, url) 2) create a backend val backend = HttpURLConnectionBackend() 3) Manually declare an http-free interface, write boilerplate implementation like i => backend.send(cl(i)) Is there an interpreter that takes an api, url, backend (optional?) and does it for me? That would be much more easy to sell. def mkSimpleClient[I, E, O](api: ZEndpoint[I, E, O], uri: String): (I => Either[E, O]) = { val backend = HttpURLConnectionBackend() val fn = SttpClientInterpreter .toRequestUnsafe(api, Uri(uri)) i: I => { backend.send(fn(i)).body } } @adam
https://gitter.im/softwaremill/tapir?at=5fe1fedb69ee7f0422b910a4
CC-MAIN-2021-43
en
refinedweb
OutlinedInput API API documentation for the React OutlinedInput component. Learn about the available props and the CSS API. Import You can learn about the difference by reading this guide on minimizing bundle size. import OutlinedInput from '@mui/material/OutlinedInput'; // or import { OutlinedInput } from '@mui/material'; Component nameThe name MuiOutlined Outlined.
https://mui.com/api/outlined-input/
CC-MAIN-2021-43
en
refinedweb
Hi All, I've just joined the project and as a start I've took a look at some reported bugs whether I can fix some. So I've picked this one: - Dir not decorated when file is deleted The problem is that when a resource stored in svn is being deleted, to mark parent directories as dirty, the resource cannot be deleted directly but rather turned into "phantom". (See Resoure#deleteResource) The Resource#synchronizing(boolean) is checked wheter it can be turned into phantom, which futher checks for presence of non-empty getSyncInfo(). And there lies the problem, the syncInfo is always empty. I've tried to trace it down where it might get set and have found that PersistantResourceVariantByteStore#setBytes is the only reasonable place being used. However the SVNWorkspaceSubscriber does not use this PersistantResourceVariantByteStore but rather SessionResourceVariantByteStore which does not maintain the resource's syncInfo. When ? Regards, Martin Patch proposal: Index: src/org/tigris/subversion/subclipse/core/sync/SVNWorkspaceSubscriber.java =================================================================== --- src/org/tigris/subversion/subclipse/core/sync/SVNWorkspaceSubscriber.java (revision 1367) +++ src/org/tigris/subversion/subclipse/core/sync/SVNWorkspaceSubscriber.java (working copy) @@ -32,6 +32,7 @@ import org.eclipse.core.runtime.IStatus; import org.eclipse.core.runtime.MultiStatus; import org.eclipse.core.runtime.Path; +import org.eclipse.core.runtime.QualifiedName; import org.eclipse.core.runtime.Status; import org.eclipse.team.core.RepositoryProvider; import org.eclipse.team.core.TeamException; @@ -41,8 +42,8 @@ import org.eclipse.team.core.subscribers.SubscriberChangeEvent; import org.eclipse.team.core.synchronize.SyncInfo; import org.eclipse.team.core.variants.IResourceVariantComparator; +import org.eclipse.team.core.variants.PersistantResourceVariantByteStore; import org.eclipse.team.core.variants.ResourceVariantByteStore; -import org.eclipse.team.core.variants.SessionResourceVariantByteStore; import org.eclipse.team.internal.core.TeamPlugin; import org.tigris.subversion.subclipse.core.IResourceStateChangeListener; import org.tigris.subversion.subclipse.core.ISVNLocalResource; @@ -76,7 +77,7 @@ protected SVNRevisionComparator comparator = new SVNRevisionComparator(); - protected ResourceVariantByteStore remoteSyncStateStore = new SessionResourceVariantByteStore(); + protected ResourceVariantByteStore remoteSyncStateStore = new PersistantResourceVariantByteStore(new QualifiedName(SVNProviderPlugin.ID, "workspaceSubscriber")); public SVNWorkspaceSubscriber() { SVNProviderPlugin.addResourceStateChangeListener(this); Received on Fri Jun 10 18:44:56 2005 This is an archived mail posted to the Subclipse Dev mailing list.
https://svn.haxx.se/subdev/archive-2005-06/0007.shtml
CC-MAIN-2021-43
en
refinedweb
Detection of Trojan control channels Last Updated: 2008-11-16 09:22:41 UTC by Maarten Van Horenbeeck (Version: 1) Recently I was working with an organization whose network had been deeply compromised by a persistent threat agent: they had very little remaining trust in the network. A full rebuild of the network was not financially feasible for this organization, as it would have meant losing much of the unique intellectual property the organization had to offer–truly a scenario that was not acceptable. Given that a “nuke from high orbit” would not be feasible, we worked on several techniques to identify those hosts which had been compromised. Note that we did not want to identify internal data being trafficked out per se: while Data Loss Prevention solutions have greatly improved over the last few years, there are hundreds of ways to smuggle a binary piece of data out in a difficult-to-detect form. Our goal was to detect behavior indicating an active Trojan on a system. - Initially we worked on increasing situational awareness. While in our case this did include costly measures such as implementing intrusion detection systems, situational awareness can also be significantly improved by small configuration changes, such as configuring BIND to log all DNS queries, storing netflows and extending firewalls to log accepted connections; - In order to detect variants of existing, known, Trojans, we deployed an IDS on the perimeter, and installed the virus-rules from EmergingThreats. Matt Jonkman’s team regularly publishes updated signatures for known Command and Control channels. If setting up such system sounds like a bit of work, have a look at BotHunter; - We started sniffing all DNS requests from hosts on the internal network, and then applied several heuristics on the resulting DNS data: - DNS responses which had a low to very low TTL (time to live) value, which is somewhat unusual; - DNS responses which contained a domain that belonged to one of a long list of dynamic DNS providers; - DNS queries which were issued more frequently by the client than would be expected given the TTL for that hostname; - DNS requests for a hostname outside of the local namespace which were responded to with a resource record pointing to an IP address within either 127.0.0.0/8, 0.0.0.0/32, RFC1918 IP space, or anywhere inside the public or private IP space of the organization; - Consecutive DNS responses for a single unique hostname which contained only a single resource record, but which changed more than twice every 24 hours. - Anomaly detection of network traffic can be a very powerful tool in detecting command & control channels. Unfortunately, to be most effective the baselining (defining what is “good” about the network) should take place before the first compromise. However, some forms of anomaly detection still add tremendous value: - We wrote a quick set of signatures to ensure that each TCP session on port 80 and 443 consisted of valid HTTP or SSL traffic, respectively. You can also do this using a tool such as FlowGrep, or by reviewing your proxy logs for failures. This would be a useful exercise in general for all traffic that is not relayed through an application proxy, and is not blocked from direct access to internet resources. - Persistent connections to HTTP servers on the internet, even outside regular office hours, can be normal: just think of software update mechanisms. However, they should be exceptions, not the rule, so these valid exceptions can be filtered out, making this a potent mechanism to identify compromises. Is the attacker operating from the same time zone as your organization? - Persistent requests for the same file on a remote web server, but using a different parameter can indicate data smuggling over HTTP. We also took some action on the host based front. A shortlist was created of anti virus vendors that were successful on so-called “proactive detection tests” (such as the AV-Comparatives one), where month old signature sets are tested against today’s malware. We licensed the software appropriately and created a live-cd that ran each solution sequentially across all local hard drives. This CD was distributed to the offices and ran on a large sample of systems over a weekend. Upon completing the scan, the CD logged into a central FTP server and stored all suspicious binaries on this share. Each of the samples was afterwards analyzed in depth, and if found malicious, detection logic was created and deployed onto the various network based detection mechanisms. On a set of critical systems, we deployed a logon policy which ran Sysinternals’ RootkitRevealer and stored its report on a remote network share. Once these reports were verified and we had some assurance that the file system API was not hooked to hide specific files, we ran a copy of Mandiant’s Red Curtain on the system to identify suspicious binaries. These were once again hooked into the analysis process above. Regardless of whether you go for a pure-play network or host based aproach, or a combination, the investigative approach should be to identify that which is unusual, validate whether it is a manifestation of a threat, and reapply what is learned to our detection probes, or identify additional monitoring that would add value. The next step is to improve our understanding of the threat agent and how it interfaces with our network. One way to get there is nodal link analysis, an analytical technique which we'll cover in a future diary entry. If you have other ideas on how to approach this problem, do get in touch! -- Maarten Van Horenbeeck
https://dshield.org/diary/Detection+of+Trojan+control+channels/5345
CC-MAIN-2021-43
en
refinedweb
So is there a sort of ETA on when this will be part of Guile? On Fri, Aug 20, 2010 at 4:30 PM, Andy Wingo <address@hidden> wrote: > Hello, Mr. Lucy! > > At some point I might escape the need to apologize at every mail I send, > but until then: sorry for the late response! > > On Thu 05 Aug 2010 23:40, Michael Lucy <address@hidden> writes: > >> On Wed, Jul 28, 2010 at 12:13 AM, Michael Lucy <address@hidden> wrote: >>> I've officially eliminated the last define-macro expression. >>> >>> However, I get the feeling that things may not be exactly as desired. >>> The original program made extensive use of functions in building the >>> macros, and I originally tried to replace these with macros. This >>> turned out to be a little difficult to debug, however (read: I was >>> unable to make the code actually work). I eventually abandoned this >>> and just made datum->syntax calls. > > I'll have to check and see what the deal is. However note that with > procedural macros you can still use helper functions that operate on > syntax objects, destructing them via syntax and building up syntax > objects using `syntax'. Think of a procedural macro as consisting of one > helper function :) > >>> The downside is that one doesn't get all the same benefits of >>> referential transparency, so I still have gensyms in the functions >>> etc. Is this a problem? > > Yep! But it probably won't be a big deal to fix. > >>> Another question about module namespaces: I have some syntax that I'd >>> like to be available to code generated by macros in my module, but >>> which I'd rather not export to the user (to avoid clobbering their >>> functions). Is there a standard way of doing this? > > Phil mentioned @ and @@, but the normal case is that things Just Work, > due to the referential-transparency-preserving properties of > syntax-case. > > For example: > > (define-module (a) > #:export (b)) > > (define-syntax b > (lambda (x) > (syntax-case x () > ((_ exp) > #'(c exp))))) > > (define-syntax c > (syntax-rules () > ((_ exp) (car exp)))) > > (define-module (d) > #:use-module (a)) > > (b '(1 2 3)) > => 1 > > You see that the expansion of `(b '(1 2 3))' in the module `(d)' > produced a reference to `c' -- but `c' is private in the `(a)' > module. Barring the use of datum->syntax, syntax-case macros *scope free > identifiers within the lexical conext and module in which they > appear*. That's what "hygiene" is. > > Anyway, I hope to have time to poke this next week. I'm very much > looking forward to having a good PEG parser! > > Cheers, > > Andy > -- > > >
https://lists.gnu.org/archive/html/guile-devel/2010-08/msg00058.html
CC-MAIN-2021-43
en
refinedweb
In the previous article, we have discussed about C++ Programming Tutorial for Beginners. Let us learn About Multithreading Tutorial PDF in C++ Program. Excited to learn more about Multithreading in C++? Viewing BTech Geeks best & free online C++11 Multithreading Tutorial can be your savior in providing better learnings and great knowledge. The support of multithreading was initiated in C++11. Earlier, POSIX threads or p threads library in C is used to call threads. Get into this ultimate online tutorial of C++11 Multi-thread and check out the information that you require like definitions, functions, methods, working of <thread>, etc. with Examples. - Concepts in C++11 Multithreading Tutorial - C++11 Threads FAQ Topics List - What is Thread? - What is Multithreading in C++? - Working of Thread in C++11 - C++ Thread Class Example - Different Ways of Creating a Thread in C++ - Uses of Multi-threading in C++11 - Top 10 C++11 Multithreading Interview Questions List Concepts in C++11 Multithreading Tutorial Firstly, before understanding the basics about C++11 Multi-thread, you guys should aware of the topics included in this C++11 Multithreading Tutorial for a quick reference. Just take a glance at the direct links furnished below and get in-depth knowledge regarding std::thread in C++. - Part 1: Three Ways to Create Threads - Part 2: Joining and Detaching Threads - Part 3: Passing Arguments to Threads - Part 4: Sharing Data & Race Conditions - Part 5: Fixing Race Conditions using mutex - Part 6: Need of Event Handling - Part 7: Condition Variables - Part 8: std::future and std::promise - Part 9: std::async Tutorial & Example - Part 10: std::packaged_task<> Tutorial C++11 Threads FAQ Topics List - C++11: Start thread by the member function - C++11: How to put a thread to sleep - C++11: How to get a Thread ID? - C++11: Vector of Thread Objects - C++11: std::thread as a member variable in class - C++11: How to Stop a Thread After going through the above links, you’ll definitely retain all core C++11 multithreading concepts. But now, we will be discussing some fundamentals and major information like What is Thread, Uses of Multithreading, Different ways of launching threads, and most importantly quiz and interview questions on Multithreading in C++11. Do Check Related C++ Tutorials: What is Thread? A thread is a Class that represents individual threads of execution. Every single thread shares memory, file descriptors, and diverse system resources. Actually, earlier in Linux, all thread functions are stated in <pthread.h> header file but it is unavailable in standard C++ programming. What is Multithreading in C++? A specialized form of multitasking that accepts your computer to work two or more programs concurrently is known as Multithreading. Basically, multitasking is divided into two types. They are process-based and thread-based. In C++, a Multi-threaded program includes two or more parts that execute concurrently. All limitations that are covered in the prior threads library in C are defeated with this std::thread. Related Classes and Functions of thread are defined in the thread header file. Working of <thread> in C++11 std::thread is the thread class that provides a single thread in C++. For working of thread, we have to create a new thread object & then the executing code is passed and it will be called (ie., a callable object) into the constructor of the object. After creating an object a new thread is initiated that will execute the code stated in callable. Syntax: #include<thread> std::thread thread_object(callable) C++ Thread Class Example The below-illustrated example is on how we can create a simple HelloWorld program with threads: #include <iostream> #include <thread> //This function will be called from a thread void call_from_thread() { std::cout << "Hello, World" << std::endl; } int main() { //Launch a thread std::thread t1(call_from_thread); //Join the thread with the main thread t1.join(); return 0; } Different Ways of Creating a Thread in C++ Basically, there are four ways of launching a thread in C++ and they are as such: - Launching a thread using a function pointer - Launching a thread using a function object - Launching a thread using a lambda - Launching a thread using a member function From the below multithreading in C++ program, you will observe creating the three threads from the main function by using the three main callable objects that are listed above: // CPP program to demonstrate multithreading // using three different callables. #include <iostream> #include <thread> using namespace std; // A dummy function void foo(int Z) { for (int i = 0; i < Z; i++) { cout << "Thread using function" " pointer as callable\n"; } } // A callable object class thread_obj { public: void operator()(int x) { for (int i = 0; i < x; i++) cout << "Thread using function" " object as callable\n"; } }; int main() { cout << "Threads 1 and 2 and 3 " "operating independently" << endl; // This thread is launched by using // function pointer as callable thread th1(foo, 3); // This thread is launched by using // function object as callable thread th2(thread_obj(), 3); // Define a Lambda Expression auto f = [](int x) { for (int i = 0; i < x; i++) cout << "Thread using lambda" " expression as callable\n"; }; // This thread is launched by using // lamda expression as callable thread th3(f, 3); // Wait for the threads to finish // Wait for thread t1 to finish th1.join(); // Wait for thread t2 to finish th2.join(); // Wait for thread t3 to finish th3.join(); return 0; } Result: Uses of Multi-threading in C++11 A multithreading environment lets you drive many activities concurrently, where diverse threads are responsible for diverse activities. You can explore different uses of multi-threading in C++ while learning or coding on your own. But for now, we are going to see some of them below. They are as follows: - Better resource utilization. - More responsive programs. - Simpler program design. Top 10 C++11 Multithreading Interview Questions List The list of 10 best and most common interview questions on multithreading in C++11 is prevailing here for all freshers who are preparing & appearing for the software jobs in the C++ Programming language. - What is multithreading? - What are the ways to create a thread in C++? - Brief me about the available models in Multithreading? - What is C++11 thread local storage (thread_local)? - What is the difference between a thread and a process? - Name the design pattern for the thread? - What are the 6 synchronizations primitive available in Multithreading? - What is a thread pool? - What is thread starvation? - How can you create background tasks with C++11 threads?
https://btechgeeks.com/cpp11-multithreading-tutorial/
CC-MAIN-2021-43
en
refinedweb
Alexa.PlaybackController Interface (VSK Fire TV) When users make transport control utterances (e.g., Stop, Rewind, Play, etc.), the Alexa.PlaybackController interface sends transport control directives ( Pause, Play, Stop, Resume, FastForward, Rewind, StartOver) to provide instructions for controlling media. PlaybackController, it's recommended that you use Android MediaSession for transport controls instead. Media sessions provides the same features with less latency and a more consistent customer experience. See Step 2: Integrate with MediaSession for details. - Utterances for Transport Controls Directives - Handling PlaybackController Directives - Request Example - Experience Types - Handling PlaybackController Directives - Response Example - Declaring Capability Support for this Interface Utterances for Transport Controls Directives Alexa sends transport control directives to your app (for app-only integrations) or to your Lambda (for cloudside integrations) when users say the following utterances. As with other directives, when you receive a Discover directive, you must specify the PlaybackController capabilities that your video skill supports. Handling PlaybackController Directives There are several types of directives that the PlaybackController interface sends, each described in the following sections. Request Example EXTRA_DIRECTIVE_NAMESPACE: Alexa.PlaybackController EXTRA_DIRECTIVE_NAME: <transport control command> EXTRA_DIRECTIVE_PAYLOAD_VERSION: 3 EXTRA_DIRECTIVE_PAYLOAD: payload payload contains one optional field: "payload": { "experience": { "mode": "VOICE_OPTIMIZED", } } For <transport control command>, the value can be any of the following: pause play stop resume rewind fastForward startOver payload contains an empty object: {} { "directive": { "header": { "namespace": "Alexa.PlaybackController", "name": "<transport control command>", "messageId": "abc-123-def-456", "payloadVersion": "3" }, "endpoint": { "scope": { "type": "BearerToken", "token": "access-token-from-skill" }, "endpointId": "VSKTV", "cookie": { } }, "payload": { "experience": { "mode": "VOICE_OPTIMIZED", } } } } For <transport control command>, the value can be any of the following: pause play stop resume rewind fastForward startOver payload contains an empty object: {} Experience Types The payload contains one optional field, the experience object. PlaybackController Directives PlaybackController directives contain transport control commands used during media playback. Many of the actions are similar to the equivalent actions when a user would press the equivalent button on a remote control. The following sections provide the expected guidance for handling the various PlaybackController directives. Pause Directives pause should take the same action as if the user had pressed the Pause button on the remote. Play Directives play should take the same action as if the user had pressed the Play button on the remote. Stop Directives stop should request to stop playback of audio or video content. Resume Directives resume should take the same action as if the user had unpaused playback through their remote (that is, pressing the "play" button after having already paused playback earlier). Next Directives next should take the user to the next episode. If that's not possible, it should take the user to the next related video content that you choose to show the viewer (or whatever's next in the playlist you choose to use). Previous Directives previous should take the user to the previous episode. If that's not possible, it take the user to whatever was earlier in the playlist you choose to use. Fast-Forward Directives fastForward should fast-forward playback by 30 seconds. Do not take the user to a fast-forward screen with a slowly moving cursor as if they were fast-forwarding via the remote. Also, do not require the user to say "Alexa, play" after having already requested to fast-forward via voice. Users prefer to simply fast-forward 30 seconds and resume playback automatically when they use this command. If you want to fast-forward more, you can use the SeekController Directives. Rewind Directives rewind should rewind playback by 30 seconds. Do not take the user to a Rewind screen with a slowly moving cursor as if they were rewinding via the remote. Also, do not require the user to say "Alexa, play" after having already requested to rewind via voice. Users prefer to simply rewind 30 seconds and resume playback automatically when they use this command. If you want to rewind more, you can use the SeekController Directives. StartOver Directives startOver should start playback from the beginning of the audio or visual content.. Send a response back to Alexa when you receive a transport control directive from the PlaybackController interface. { "context": { "properties": [] }, "event": { "header": { "messageId": "abc-123-def-456", "namespace": "Alexa", "name": "Response", "payloadVersion": "3" }, "endpoint":{ "endpointId":"VSKTV" }, "payload":{ } } } If you cannot complete the customer request for some reason, reply with an error. See Error Handling for more details. Declaring Capability Support for this Interface To receive PlaybackController directives in your app, you must indicate support for this interface when you declare your capabilities. See the following for more information on declaring capabilities with app-only integrations: For Alexa to send PlaybackController directives to your Lambda, you must indicate support for it in your response to the Discover directive sent through the Alexa.Discovery interface. More details are provided in Alexa.Discovery. Last updated: Jun 09, 2021
https://www.developer.amazon.com/zh/docs/video-skills-fire-tv-apps/playbackcontroller.html
CC-MAIN-2021-43
en
refinedweb
React Hooks While I was learning React while in The Flatiron School’s Software Engineering bootcamp I learned about creating components by using class components only. We learned about some lifecycle methods in general and state was to be managed with a constructor. Once we were encouraged to check out functional components and useState to manage our state I enjoyed the look and feel of this style of React. Using functional components and useState amongst the other react Hooks make the code look much cleaner and easier to understand. I have been using useState and useEffect in my recent projects and I have enjoyed using them. I want to dig deeper into the full lifecycle and become familiar all of the Hooks to become a more experienced React developer. In this blog I will be covering The 3 most common hooks in detail and a brief introduction to the other 7. What is a Hook? To learn more about Hooks we should start with the definition. The official react documentation says, “Hooks are functions that let you “hook into” React state and lifecycle features from function components. Hooks don’t work inside classes — they let you use React without classes.” Hooks are functions that are prepended buy the “use”. Rules of Hooks - Only call Hooks at the top level - Only use Hooks in functional components - You cannot call Hooks from regular JavaScript functions they must be React functions. - Node version 6 or above - NPM version 5.2 or above - Import all appropriate hooks you plan to use at the top of a component. import { useState } from 'react'; What are the Hooks available? There are currently 10 hooks available as well as custom hooks. - useState - useEffect - useContext - useReducer - useCallback - useMemo - useRef - useImperativeHandle - useLayoutEffect - useDebugValue useState() The most important and often used hook. The purpose is to handle reactive data in the form of state. When a change is made to state you want it to update throughout the component. UseState takes 1 optional argument which is the default state. UseState returns an array of 2 values. The first value is the reactive value of the state and the second is the setter in which you call to change the state when necessary. These are local variables that can be named anything but the setter must be prepended by “set”. [name, setName] = useState("programmer") //name in state will default to programmerconst eventToChangeName = () => { setName("Adam") } //name will be Adam after function is called useEffect() useEffect is one of the more confusing Hooks. To understand useEffect you must understand the React component lifecycle, which I will likely do a blog on soon. A simple refresher of lifcycle methods are componentDidMount(), componentDidUpdate(), componentWillUnmount(). UseEffect allows us to handle these lifecycle methods in one function. UseEffect takes it’s first argumet as a function you define. This will run once when mounted then every time state changes. An issue I have run into with this in my own projects has been doing a fetch request inside useEffect to set state asynchronously. The fetch will run then set state. After that is completed state will be updated and the componentDidUpdate() role of useEffect will run again, creating an infinite loop. To avoid this useEffect takes a second argument which is an array of dependencies. If you pass in an empty array it will run once. If you add state in tyhe array of dependencies it will run every time thaty state is updated. useEffect(() => { eventToChangeName() }, []) useContext() UseContext allows us to work with React’s context API which shared data throughout the entire component tree without passing props. You can create a context and call it from useContext to use in a component on a different level than the one you are on. cont pets = { bobo: 'Cat' , lucy: 'Dog' }const PetContext = createContext(pets);function App(props) { return( <PetContext.Provider value={pets.bobo}> <FindTheAnimal /> </PetContext.Provider> ) }function FindTheAnimal = () => { const animal = useContext(PetContext) return ( <p> animal </p> ) } useRef() UseRef allows you to create a mutable object that will keep the same reference between renders. This is used when you want to store a value like useState but do not want to trigger a re-render of the page. A common use for useRef is to grab HTML elements from the DOM. function App() { const myButton = useRef(null) const click = () => myButton.current.click() return( <button ref={myButton}> </button> ) } useReducer() useReducer is a state management hook that manages state in a different way. This Hook is used in Redux to dispatch actions to the store. The useReducer function takes in an argument of a reducer you are using and returns an array of the state and the dispatch. It can also take a second argument of a default value of the state. Using the Redux pattern and useReducer is helpful in a large app whith many components to manage state. useMemo() UseMemo is used to optimize computation and improve performance. You can use this when you know there is something hurting performance. Like useEffect, you can set a dependency to determine when these computations take place. const [counter, setCounter] = useState(2)const expensiveCount = useMemo(() => { return count ** 2 }, count) This function only will take place when count changes to avoid happening every re-render. This will memoize a return value. If you want to memoize an entire function you would use the next Hook. useCallback() When you define a function in a component a new function object is created on render. This is slow when passing the same function down to multiple children components. Wrapping the function in useCallback will increase performance rendering the same thing multiple times. useImperativeHandle() If you build a reusable React library un react you may need to a native DOM element. This hook comes in if you want to change the behavior of that ref. useLayoutEffect() This is just like useEffect but with one difference. It will run but before updates have been rendered. React will wait for the code to run before it updates you the user. useDebugValue() Use useDebugValue inside custom hooks to see the name of the Hooks inside dev tools along with the Hooks involved to create the custom Hook.
https://adamadolfo8.medium.com/react-hooks-b259ee985b5d
CC-MAIN-2021-43
en
refinedweb
Async I/O and ThreadPool Deadlock (Part 2) Async I/O and ThreadPool Deadlock (Part 2) Join the DZone community and get the full member experience.Join For Free Sensu is an open source monitoring event pipeline. Try it today. Parallel Execution Now, let’s complicate our lives with some concurrency, shall we? If we are to spawn many processes, we could (and should) utilize all the cores at our disposal. Thanks to Parallel.For and Parallel.ForEach this task is made much simpler than otherwise. public static void ExecAll(List<KeyValuePair<string, string>> pathArgs, int timeout) { Parallel.ForEach(pathArgs, arg => ExecWithAsyncTasks(arg.Key, arg.Value, timeout)); } Things couldn’t be any simpler! We pass a list of executable paths and their arguments as KeyValuePair and a timeout in milliseconds. Except, this won’t work… at least not always. First, let’s discuss how it will not work, then let’s understand the why before we attempt to fix it. When Abstraction Backfires The above code works like a charm in many cases. When it doesn’t, a number of waits timeout. This is unacceptable as we wouldn’t know if we got all the output or part of it, unless we get a clean exit with no timeouts. I first noticed this issue in a completely different way. I was looking at the task manager Process Explorer (if not using it, start now and I promise not to tell anyone,) to see how amazingly faster things are with that single ForEach line. I was expecting to see a dozen or so (on a 12-core machine) child processes spawning and vanishing in quick succession. Instead, and to my chagrin, I saw most of the time just one child! One! And after many trials and head-scratching and reading, it became clear that the waits were timing out, even though clearly the children had finished and exited. Indeed, because typically a process would run in much less time than the timeout, it was now slower with the parallelized code than with the sequential version. This wasn’t obvious at first, and reasonably I suspected some children were taking too long, or they had too much to write to the output pipes that could be deadlocking (which wasn’t unfamiliar to me). Testbed To troubleshoot something as complex as this, one should start with clean test-case, with minimum number of variables. This calls for a dummy child that would do exactly as I told it, so that I could simulate different scenarios. One such scenario would be not to spawn any children at all, and just test the Parallel.ForEach with some in-proc task (i.e. just a local function that does similar work to that of a child). using System; using System.Threading; namespace Child { class Program { static void Main(string[] args) { if (args.Length < 2 || args.Length % 2 != 0) { Console.WriteLine("Usage: [echo|fill|sleep|return] "); return; } DoJob(args); } private static void DoJob(string[] args) { for (int argIdx = 0; argIdx < args.Length; argIdx += 2) { switch (args[argIdx].ToLowerInvariant()) { case "echo": Console.WriteLine(args[argIdx + 1]); break; case "fill": var rd = new Random(); int bytes = int.Parse(args[argIdx + 1]); while (bytes-- > 0) { // Generate a random string as long as the . Console.Write(rd.Next('a', 'z')); } break; case "sleep": Thread.Sleep(int.Parse(args[argIdx + 1])); break; case "return": Environment.ExitCode = int.Parse(args[argIdx + 1]); break; default: Console.WriteLine("Unknown command [" + args[argIdx] + "]. Skipping."); break; } } } } } Now we can give the child process commands to change its behavior, from dumping data to its output to sleeping to returning immediately. Once the problem is reproduced, we can narrow it down to pin-point the source. Running the exact same command in the same process (i. e. without spawning another process) results in no problems at all. Calling DoJob 500 times directly in Parallel.ForEach finishes in under 500ms (often under 450ms). So we can be sure Parallel.ForEach is working fine. public static void ExecAll(List<KeyValuePair<string, string>> pathArgs, int timeout) { Parallel.ForEach(pathArgs, arg => Task.Factory.StartNew(() => DoJob(arg.Value.Split(' '))).Wait() ); } Even executing as a new task (within the Parallel.ForEach) doesn’t result in any noticeable different in time. The reason for this good performance when running the jobs in new tasks is probably because the ThreadPool scheduler does fetch the task to execute immediately when we call Wait() and executes it. That is, because both the Task.Factory.StartNew() call as well as the DoJob() call are executed ultimately on the ThreadPool, and because Task is designed specifically to utilize it, when we call Wait() on the task, it knows that it should schedule the next job in the queue, which in this case is the job of the task on which we executed the Wait! Since the caller of Wait() happens to be running on the ThreadPool, it simply executes it instead of scheduling it on a different thread and blocking. Dumping the Thread.CurrentThread.ManagedThreadId from before the Task.Factory.StartNew() call and from within DoJob shows that indeed both are executed in the same thread. The overhead of creating and scheduling a Task is negligible, so we don’t see much of a change in time over 500 executions. All this is great and comforting, but still doesn’t help us resolve the problem at hand: why aren’t our processes spawned and executed at the highest possible efficiency? And why are they timing out? In the next part we’ll dive deep into the problem and find out what is going on. Sensu: workflow automation for monitoring. Learn more—download the whitepaper. }}
https://dzone.com/articles/async-io-and-threadpool-0
CC-MAIN-2018-51
en
refinedweb
Inko 0.3.0 released Inko 0.3.0 has been released. Noteworthy changes in 0.3.0 - Foreign Function Interface - Process Pinning - Seconds are now the base unit for timeouts - More specific platform names - VM instruction changes - musl executables are no longer provided The full list of changes can be found in the CHANGELOG. In the 0.2.4 release post we announced that for 0.3.0 we would be working towards supporting network operations, such as opening TCP sockets. Due to it still not being entirely clear how we will implement this, we decided to postpone this until at least 0.4.0. Foreign Function Interface Support for interfacing with C code is now possible using Inko's new Foreign Function Interface (FFI). The FFI is available using the module std:ffi. For example, we can use floor() from the C standard library as follows: import std::ffi::Library import std::ffi::types import std::stdio::stdout # Library.new is used to open a C library, using one or more names or paths to # find the library. let libm = Library.new(['libm.so.6']) # Using `libm.function` here we attach the `floor()` function. The type `f64` # translates to the C `double` type. let floor = libm.function('floor', [types.f64], types.f64) # Sending `call` to `floor` will execute the function. Since the return type is # `Dynamic`, we have to cast it to `Float` ourselves. let number = floor.call(1.1234) as Float stdout.print(number) We can also use C structures. For example, here is how we would use gettimeofday() in Inko: import std::ffi::(self, Library, Pointer) import std::ffi::types import std::stdio::stdout let libc = Library.new(['libc.so.6']) # int gettimeofday(void*, void*) let gettimeofday = libc .function('gettimeofday', [types.pointer, types.pointer], types.i32) # void* malloc(size_t) let malloc = libc.function('malloc', [types.size_t], types.pointer) # free(void*) let free = libc.function('free', [types.pointer], types.void) # This defines a structure similar to the following C code: # # struct timeval { # time_t tv_sec; # suseconds_t tv_usec; # } # # The exact type used (i64, i32, etc) may differ per platform. let timeval = ffi.struct do (struct) { struct['tv_sec'] = types.i64 struct['tv_usec'] = types.i64 } # Since `malloc.call` returns a `Dynamic`, we need to cast it to a `Pointer` # ourselves. let time_pointer = malloc.call(timeval.size) as Pointer gettimeofday.call(time_pointer, Pointer.null) # This will wrap the pointer in an instance of our `timeval` structure defined # earlier. let time_struct = timeval.from_pointer(time_pointer) # We can read the values of a structure by sending `[]` to it. To write a value # we would use `[]=`. stdout.print(time_struct['tv_sec'] as Integer) # Now that we're done we can release the memory of the structure. free.call(time_pointer) The Foreign Function Interface does come with some limitations. Most notably: - Variadic functions (such as printf()) are not supported at the moment. - Using Inko blocks as callbacks for C functions is not supported. This means that currently it's not possible to use C libraries that make use of callbacks, such as libuv. Variadic functions will almost certainly be supported in the future, but right now they are not a big priority. C callbacks are unlikely to be supported any time soon due to the complexity involved. For example, Inko processes can be suspended at various points in time for a variety of reasons. This means we need to somehow deal with this when this happens when calling back into Inko from C. Since we do not yet have solutions for these problems, we decided not to support calling back into Inko from C at this time. For more information, refer to the source code of std::ffi. Process Pinning Certain C functions use thread-local storage. For example, GUI libraries typically require that all operations are performed on the same thread that initialised the GUI. To support this, Inko now allows pinning of processes to OS threads. Pinning a process will result in two things happening: - The process will always run on the same OS thread. - The OS thread will only run the process that was pinned. To pin a process, use std::process.pinned: import std::process process.pinned { # All code in this block will be pinned to the current OS thread. } Because the OS thread will only run the pinned process, pinning processes should only be used when absolutely necessary. For example, say you have 8 threads, 8 pinned processes, and 2 unpinned processes. If the pinned processes are pinned before the unpinned processes start, the unpinned processes will never run as there are no threads available for them to run on. Seconds are now the base unit for timeouts The std::process module provided various methods that support timeouts. For example, std::process.receive allows you to specify the number of seconds after which this method should return: import std::process process.receive(100) # Wait for at most 100 milliseconds. Starting with 0.3.0, the base unit used is now seconds instead of milliseconds. This means that the above code on 0.3.0 will result in the process being suspended for at most 100 seconds, instead of 100 milliseconds. To suspend for at most 100 milliseconds in 0.3.0, we need to write the following: import std::process process.receive(0.1) # Wait for at most 0.1 seconds, or 100 milliseconds. This change applies to the following methods: std::process.receive_if std::process.receive std::process.suspend std::process::Receiver.receive More specific platform names The method std::os.platform now returns more specific platform names. Prior to 0.3.0, it would return one of the following values: - other - unix - windows As of 0.3.0, the following values can be returned: - android - bitrig - dragonfly - freebsd - ios - linux - macos - netbsd - openbsd - unix - unknown - windows VM instruction changes A variety of virtual machine instructions have been changed or merged together. For example, the various instructions for obtaining object prototypes ( GetIntegerPrototype, GetFloatPrototype, etc) were merged together into the GetPrototype instruction. Other instructions, such as ProcessSpawn and ProcessSuspendCurrent take different types of values as their arguments. musl executables are no longer provided Up until 0.3.0, Inko provided executables of the VM that used musl. These executables were more portable, as they did not dynamically link to the system's C standard library (e.g. GNU libc). Unfortunately, musl does not support dlopen(), which is required to support Inko's FFI. This meant we had one of two options: - Continue providing musl executables, but without support for Inko's FFI. - Stop providing musl executables altogether. Option one would most likely result in a lot of confusion, especially since ienv preferred to install musl executables over regular ones. It also didn't quite feel right to provide a build of Inko that doesn't support all of its features. Because of this, we decided to stop providing musl executables. This means that from 0.3.0 on, all executables will dynamically link to the system's C standard library, and ienv will no longer prefer to install musl executables over regular ones.
https://inko-lang.org/news/inko-0-3-0-released/
CC-MAIN-2018-51
en
refinedweb
exportar olm a pst olm in pst konvertieren olm to pst convertor converter olm para pst covert pst to olm convert olm naar pst convert olm file in pst olm datei in pst konvertieren export ost database to pst utility for export pst from edb file Convert OLM file with Export OLM to PST converter tool and convert MAC OLM file ... / Size :7,266K / Shareware Export OLM to PST file thorugh our advanced MAC OLM to PST exporter utility. Thr... / Size :7,252K / Shareware The Free Mac to PC file transfer will convert the OLM file into PST totally free... / Size :15,527K / Shareware Highly used Email Client is Windows Outlook that uses PST file format. Therefore many opt for Export Mac Mail to PST. OLM to PST Converter Pro is professional software for Instant Corrupted OLM Repair and Export Mac Mail to PST. OLM to PST Converter Free Demo is the first step to take for any user who needs to complete demand of OLM Conversion to PST Outlook. . Outlook 2011 OLM to PST converter software is available for converting and moving entire MAC Outlook 2011 database from MAC machine to Windows machine. MAC Outlook OLM to PST converter software helps you to move. import and export MAC Outlook 2011 OLM to PST file format. Download now free conversion tool for OLM file to PST file format (folder). Software is able to export unlimited heavy OLM files from OLM file to PST files. . outlook 2011 olm to pst , olm to pst converter , outlook 2011 olm file to pst , 2011 olm to pst , outlook olm file to pst , outlook olm 2011 to pst importer/exporter tool to import and export PST files from Apple MAC machine. . import olm to pst import outlook olm to pst , olm to pst converter , convert mac olm to pst , import olm file , outlook olm to pst , free mac outlook olm to pst , export olm to pst Export Outlook OLM to PST file format without any hassle and at less time. The tool OLM conversion will make it possible.. You can believe in OLM Converter to for getting conversion task safely. . export outlook olm to pst , convert outlook olm file to pst , convert olm file to pst converter , convert mac olm file to pst like Windows 8. outlook olm to pst converter app , convert outlook 2011 olm to pst , olm to pst converter software , mac olm database to pst , mac mailbox to pst mailbox If. contacts etc. from Mac OLM file into PST file. You can run our latest OLM to PST converter software on all Windows version like Windows 8. latest olm to pst converter , convert olm file to pst , convert mac outlook to pst , outlook 2011 to pst , free olm to pst. . how to export mac outlook 2011 to pst , export outlook olm to pst , export outlook 2011 to pst , download olm to pst exporter , export mac outlook 2011 olm file to pst , olm file to pst exporter The migration of OLM to PST Outlook without any alteration done to the Meta Data Information of Mac OLM file while executing the conversion can only be reachable with an effective export from Outlook 2011 to PST FREE Utility which has adequate supremacy in it to make one comprehend the query such as “How to convert OLM to EML” file. The OLM converter software after the implementation of its restructured edition has now become supportive of Windows 8 and Outlook 2013 both which makes it supportive of ALL the versions of Windows with Outlook. The Export from Outlook 2011 to PST free tool after the completion of OLM to PST conversion provides users the option to SPLIT Large Sized PST files in two or more. However the OLM converter software at all times does not fail to keep absolute Data-Accuracy within the exported files and their respective attachments. . export from outlook 2011 to pst free , olm to pst file , convert olm to pst , olm to pst conversion , olm converter , migration of olm to pst outlook , how to convert olm to eml , olm to msg outlook , free export mac olm file to pst Comprehend the technique of exporting . olm to Windows . pst file format in a constructive manner by means of OLM to PST Windows Converter Software. The tool on the other hand has been reorganized with some of the most exclusive attributes of all obtainable in the market which has made the productivity to export Mac OLM to Windows PST more precise. However the tool earlier to initiate the conversion process proffers users a self devoted Scan aspect as in to scan the entire Mac OLM file with the intent to remove the odds of Data-Corruption. The Mac Outlook export to PST once the OLM files gets exported to the desired file provide users the option to SPLIT Large Sized PST files in two or more. Furthermore to this it also has the efficiency to export Outlook 2011 to PST in ANSI as well as in UNICODE PST both systematically. It even previews the exported files and their respective attachments in Horizontal and Vertical View both. . exporting olm to windows pst , export mac olm to windows pst , olm to pst windows , changing olm to pst for outlook 2007 , export oulook 2011 to pst , mac outlook export. . how to convert olm to pst , demo of olm to pst , olm to pst conversion procedure , safe olm pst conversion
http://freedownloadsapps.com/s/export-olm-to-pst/1/
CC-MAIN-2018-51
en
refinedweb
Listener and Adapters are two different way to accomplish the event handling in java. all the event sources have different type of class support. There are mouse and key events, window and focus events, srcoll bar and checkboxes event and many more. Events Handling is a powerful aspect of Java language. It allows you to create the Graphical Applications with easy to do operations. Event handling means, to create a reaction for every action one does with any component on the application screen. There are different event classes but we shall look at them briefly after we define the element of event handling mechanism. What is Event ? Event, in deep microprocessor terms is an interrupt signal to the processor, but let us not go deep down in computer hardware and The event is actually an object in Java which represents the state change of the component. This state change could be the clicking of any button, changing size of window, minimizing, maximizing or closing of frame, moving scrollbar or moving mouse, clicking mouse, pressing keyboard and many more. All these actions generate the event object. What are Event sources? Event Sources are the components on which the action is performed, for example if we are clicking the button, the source of event would be the button. It will generate a MouseClicked event or ActionEvent. These sources are binded with event listeners, don't worry we are going to describe listeners next. But when a event occurs and event source generates an event it passes it to EventListener to handle. What are Event Listener? Event Listener are the Objects that gets notification when any event is created. Java provide different type of listener for different type of objects. Also you must remember to bind the listener with the event source so that listener know if an event is generated. Java provides support for various event types. All the event classes are decedent from EventObject class and for each type of event class we also have corresponding Listener interface and Adapter class also. Here is how we brief the Event classes we have got. Action Event: Generated when a button is press, list item is double clicked or a menu is selected. AdjustmentEvent: Generated when a scroll bar is moved. ComponentEvent: Generated when a component is set to non-visibility or visibility, moved or resized. ContainerEvent: Generated when a component is added to or removed from a container. FocusEvent: Generated when a component receives or loses focus. InputEvent: It is an abstract class for Input type of events. ItemEvent: Generated when a check box or list item is clicked also when menu item is selected or deselected. KeyEvent: Generated when input is received from keyboard. MouseEvent and MouseWheeelEVent Generated when the mouse action is performed like click, move or wheel scroll etc. TextEvent: Generated when value of text is changed in text field or area. WindowEvnet: Generated when a window is active, closed, minimized, restored etc. Each of these class has its own meaning, they contains the methods and MASKs which helps to work with the event on machine level. There are two well known ways to work with events. Using the EventListener interfaces and using the adapter classes. Now we will be describing the two ways separately. First let us look at the listeners. package events; import java.awt.event.*; import java.awt.*; public class ListenersEvent extends Frame implements ActionListener,MouseMotionListener{ Button b; ListenersEvent(){ super("listener demo"); setLayout(new FlowLayout()); setSize(300,300); setVisible(true); b = new Button("click me"); b.addActionListener(this); add(b); addMouseMotionListener(this); } @Override public void actionPerformed(ActionEvent ae) { if(ae.getSource()== b){ System.out.println("button clicked"); } } @Override public void mouseDragged(MouseEvent me) { System.out.println("mouse was dragged"); } @Override public void mouseMoved(MouseEvent me) { System.out.println("mouse was moved"); } public static void main(String[] args){ new ListenersEvent(); } } In this program the Frame will look like this and the output will look somewhat like this mouse was moved mouse was moved mouse was moved mouse was moved button clicked mouse was dragged mouse was dragged mouse was dragged Well this is just an example however the output you may observe will be according to your action on the frame. You will have to close the frame using either ctrl+c or stop icon on your IDE. Anyway let us divert our focus to the program and event. First we have implement two type of listeners ActionListener and MouseMotionListener these relate to MouseEvent and ActionEvent class. Well, there is only one method in ActionListener interface and two in MouseMotionListener interface. As you already know that we have to implement all the method of an interface weather we want it or not. We create frame and a button on the frame. Then we bind action listener with the button and MouseMotionListener with the Frame, so when we move or drag mouse on the Framed window we will be generating the events for MouseEvent type. Which may be handled in the corresponding method provided by the interface. You might notice that we have done something with the action event in actionPerformed method. Well, getSource() is method provided in the class EventObject so it can be used on any type of the event class. Here we try and check if the action event was performed on the button we created. It is good in case you want to make sure the button action does not overlap. We are done with the Listeners now let us tell you about the Adapters. Adapter are the classes which helps to reduce the code when we don't want all the method of an interfaces implemented. The methods in the adapter classes are same as in the interface but You need not to implemented all of them if you need only one or two In following program we are going to use anonymous class concepts to display the use of adapter class. We will be creating an inner class and and anonymous class which will help you reduce the code according to some level. package events; import java.awt.event.*; import java.awt.*; public class AdapterEvent extends Frame{ AdapterEvent(){ super("Adapter Event Demo"); setSize(300,300); setVisible(true); setLayout(new FlowLayout()); //event handling using anonymous class addWindowListener(new WindowAdapter() { public void windowClosing(WindowEvent we){ System.exit(0); } }); addMouseMotionListener(new MouseInnerClass()); } //Inner class to do event handling class MouseInnerClass extends MouseMotionAdapter{ public void mouseDragged(MouseEvent me){ System.out.println("mouse was dragged"); } } public static void main(String[] args){ new AdapterEvent(); } } In this program we have created an Inner class and an anonymous inner class. The anonymous inner class creates a listener for the window event which will close the program when we create the frame by clicking close button. mouse was dragged mouse was dragged mouse was dragged mouse was dragged mouse was dragged mouse was dragged mouse was dragged mouse was dragged We have added the windowListener to the frame and then we create a anonymous class inside that. This class uses the WindowApapter class and then override the windowClosing() method which takes the WindowEvent type reference as parameter. This will do for the anonymous class. When we create Inner class for event handling all we need to do is to pass the object of the inner class when we bind the event. We are binding mouse motion listener with the frame and because we are going to use mouse motion adapter to accomplish the work we will be passing the object of inner class as the parameter in the binding method. rest remains same as the anonymous class.
http://www.examsmyantra.com/article/70/java/event-handling-in-java-using-listener-and-adapters
CC-MAIN-2018-51
en
refinedweb
import itertools def overlap(a, b, min_length=3): """ Return length of longest suffix of 'a' matching a prefix of 'b' that is at least 'min_length' characters long. If no such overlap exists, return 0. """ start = 0 # start all the way at the left while True: start = a.find(b[:min_length], start) # look for b's suffx in a if start == -1: # no more occurrences to right return 0 # found occurrence; check for full suffix/prefix match if b.startswith(a[start:]): return len(a)-start start += 1 # move just past previous match def scs(ss): """ Returns shortest common superstring of given strings, which must be the same length """ shortest_sup = None for ssperm in itertools.permutations(ss): sup = ssperm[0] # superstring starts as first string for i in range(len(ss)-1): # overlap adjacent strings A and B in the permutation olen = overlap(ssperm[i], ssperm[i+1], min_length=1) # add non-overlapping portion of B to superstring sup += ssperm[i+1][olen:] if shortest_sup is None or len(sup) < len(shortest_sup): shortest_sup = sup # found shorter superstring return shortest_sup # return shortest scs(['BAA', 'AAB', 'BBA', 'ABA', 'ABB', 'BBB', 'AAA', 'BAB']) 'BAAABABBBA' scs(['ABCD', 'CDBC', 'BCDA']) 'ABCDBCDA'
http://nbviewer.jupyter.org/github/BenLangmead/comp-genomics-class/blob/master/notebooks/CG_SCS.ipynb
CC-MAIN-2018-51
en
refinedweb
import bb.cascades 1.0 Page { content: Button { text: "Rotate!" onClicked: { rotationZ = 155; } } } Implicit animations An implicit animation occurs when a visual property of a control changes while your app is running. When you change a property that can be animated, the control doesn't update its appearance immediately to reflect the new value of the property. Instead, the control animates between the old and new values of the property. For example, if you change the opacity of a Button from 1.0 to 0.0, the button fades out gradually, instead of immediately becoming invisible. Cascades performs implicit animations automatically; you don't need to do anything except change the property that you want to animate. The amount of time that an implicit animation takes to complete is predefined and can't be changed. Here's how to create a button that rotates using an implicit animation when it's clicked: // In your application source file // Create the root page and button Page* root = new Page; mButton = Button::create("Rotate!"); // Connect the button's clicked() signal to a slot. Make sure // to test the return value to detect any errors bool res = QObject::connect(mButton, SIGNAL(clicked()), this, SLOT(onButtonClicked())); Q_ASSERT(res); // Indicate that the variable res isn't used in the rest of the // app, to prevent a compiler warning Q_UNUSED(res); // Set the content of the page and display it root->setContent(mButton); app->setScene(root); You also define the onButtonClicked() slot function in your application source file: // A slot that handles a button click and rotates the button void App::onButtonClicked() { mButton->setRotationZ(155); } In your header file, you declare the Button variable and the onButtonClicked() slot: Button* mButton; public slots: void onButtonClicked(); Not applicable When you change the value of a control's property, the control is animated visually to reach the new property value, but the property value itself is changed immediately. For example, if you change the translationX property of a Label from 0 to 45, the value of the property is immediately set to 45. You'll then see the x position of the label change smoothly from 0 to 45 as the control is animated to reach the new property value. You can use implicit animations for the following types of properties: - Properties that determine how a control looks, such as rotation, translation, and opacity - Properties that determine the layout of a control in a container, such as preferred width and preferred height Here's how to change the layout of a container using an implicit animation. The container includes two buttons, and when the first button is clicked, the layout is changed from a left-to-right stack layout to a top-to-bottom stack layout. import bb.cascades 1.0 Page { Container { id: container // Create a stack layout object for each layout that // the app uses, and add them to the attached objects // list attachedObjects: [ StackLayout { id: layout1 orientation: LayoutOrientation.TopToBottom }, StackLayout { id: layout2 orientation: LayoutOrientation.LeftToRight } ] layout: layout1 Button { text: "Click me!" onClicked: { container.layout = layout2; } } Button { text: "Don't click me" } } } // The variables mContainer, mStackLayout1, and // mStackLayout2 are declared in a header file Page* root = new Page; mContainer = new Container; // Create the layouts mStackLayout1 = StackLayout::create() .orientation(LayoutOrientation::LeftToRight); mStackLayout2 = StackLayout::create() .orientation(LayoutOrientation::TopToBottom); mContainer->setLayout(mStackLayout1); // Create the buttons and connect the first button's clicked() // signal to a slot function. Make sure to test the return // value to detect any errors Button* myButton1 = Button::create("Click me!"); Button* myButton2 = Button::create("Don't click me"); bool res = QObject::connect(myButton1, SIGNAL(clicked()), this, SLOT(onButtonClicked())); Q_ASSERT(res); // Indicate that the variable res isn't used in the rest of the // app, to prevent a compiler warning Q_UNUSED(res); // Add the buttons to the container mContainer->add(myButton1); mContainer->add(myButton2); root->setContent(mContainer); app->setScene(root); A slot that handles the button click and changes the layout is declared in a header file. void App::onButtonClicked() { mContainer->setLayout(mStackLayout2); } Not applicable Controlling implicit animations You can use implicit animations to create smooth transitions and effects without writing a lot of code, but you don't have very much control over the animation that occurs. You can't change its duration or specify a sequence of animations that should occur one after another. However, you can use the ImplicitAnimationController class to define whether you want a particular visual property to use implicit animations. This class lets you turn on and turn off implicit animations for all visual properties of a control, or for specific properties only. You can use an ImplicitAnimationController to enable or disable implicit animations for the following properties of a control: - translationX, translationY - rotationZ - scaleX, scaleY - pivotX, pivotY - opacity You can't enable or disable implicit animations for layout properties (such as preferredWidth and preferredHeight). Using an ImplicitAnimationController You can add an ImplicitAnimationController to any UI control (that is, any control that inherits from the UIObject class). You specify ImplicitAnimationController objects by using the attachedObjects list for the control. The attachedObjects list allows you to use C++ objects directly in QML without creating a custom class and registering it for use in QML. To learn more about the attachedObjects list, see QML and C++ integration. Here's how to create a button that moves when it's clicked. The button uses an ImplicitAnimationController to prevent this horizontal movement from using an implicit animation. The ImplicitAnimationController includes two properties: propertyName and enabled. The propertyName property is a string that specifies the visual property that you want to control, and must be one of the visual properties that are listed in the previous section. The enabled property is a Boolean value that indicates whether implicit animations are enabled for the property. If you omit the propertyName property in ImplicitAnimationController, then all implicit animations for the control are enabled or disabled according to the value of the enabled property. // A button that moves from its starting position // to its new position without an animation import bb.cascades 1.0 Page { content: Container { Button { text: "Click me" attachedObjects: [ ImplicitAnimationController { propertyName: "translationX" enabled: false } ] onClicked: { translationX += 20; } } } } // Compare animated and non-animated transitions // by turning the toggle switch on or off import bb.cascades 1.0 Page { content: Container { Button { text: "Click me" attachedObjects: [ ImplicitAnimationController { id: allAnimationController enabled: animationToggle.checked } ] onClicked: { translationX += 20; translationY += 20; rotationZ += 20; } } ToggleButton { id: animationToggle checked: true } } } You can create an ImplicitAnimationController by using the builder design pattern and specifying the following: - The control that the ImplicitAnimationController applies to - The name of the property that the ImplicitAnimationController applies to (optional) - Whether implicit animations should be enabled or disabled If you don't specify a property name, the ImplicitAnimationController applies to all visual properties of the control. Here's how to create a button that, when it's clicked, rotates to the right and fades to an opacity of 0.5. Both of these properties use their own ImplicitAnimationController objects to disable implicit animations. Because the ImplicitAnimationController objects are created inside the scope of onButtonClicked(), they are effective only while this function executes. If you change the rotation and opacity values elsewhere in the app, implicit animations would be enabled for those changes. // The variable mButton is declared in a header file. // Create the root page and button Page* root = new Page; mButton = Button::create("Click me"); // Connect the button's clicked() signal to a slot function. // Make sure to test the return value to detect any errors. // If any Q_ASSERT statement(s) indicate that the slot failed // to connect to the signal, make sure you know exactly why // this has happened. This is not normal, and will cause your // app to stop working!! bool res = QObject::connect(mButton, SIGNAL(clicked()), this, SLOT(onButtonClicked())); // This is only available in Debug builds. Q_ASSERT(res); // Indicate that the variable res isn't used in the rest of the // app, to prevent a compiler warning. Q_UNUSED(res); // Set the content of the page and display it root->setContent(mButton); app->setScene(root); // A slot that handles the button click, and rotates and // fades the button. This slot is also declared in a // header file. void App::onButtonClicked() { ImplicitAnimationController rotateController = ImplicitAnimationController::create(mButton, "rotationZ") .enabled(false); ImplicitAnimationController opacityController = ImplicitAnimationController::create(mButton, "opacity") .enabled(false); mButton->setRotationZ(20); mButton->setOpacity(0.5); } Not applicable Where to use an ImplicitAnimationController When you use an ImplicitAnimationController, there are a few things you need to know about where to place the controller to get the result that you're looking for. You must understand the relationship between where the controller is placed and the component being animated. If you set an ImplicitAnimationController on a parent control, the child components don't inherit the controller even if they may be impacted by an implicit animation. In addition, adding an ImplicitAnimationController to a child object doesn't have an effect on a parent control. The following code sample shows an ImplicitAnimationController attached to an ImageView: Container { onTouch: { if(event.touchType == TouchType.Down) { moveContainer.translationX = 500; } } Container { id: moveContainer ImageView { imageSource: "asset:///image.png" attachedObjects: [ ImplicitAnimationController { propertyName: "translationX" enabled: false } ] } } } The above sample doesn't disable implicit animations on the Container that holds the ImageView because the controller is attached to the ImageView. If you want to disable the animation on the parent, your code should look like this: Container { onTouch: { if(event.touchType == TouchType.Down) { moveContainer.translationX = 500; } } Container { id: moveContainer ImageView { imageSource: "asset:///image.png" } attachedObjects: [ ImplicitAnimationController { propertyName: "translationX" enabled: false } ] } } If you change the opacity of a parent Container, all children are implicitly animated even if you disable implicit animations for the child controls using an ImplicitAnimationController. The following code sample demonstrates this behavior: Container { id: testCont Container { Button { text: "Change opacity" onClicked: { testCont.opacity = 0 } attachedObjects: [ ImplicitAnimationController { propertyName: "opacity" enabled: false } ] } } } In the above sample, the Button control is subject to an implicit animation caused by the change in opacity of the parent Container. If you want to disable the implicit animation on the Button, you need to attach the controller to the parent Container or change the opacity of Button instead of the root Container. The following code sample sets an ImplicitAnimationController on a parent Container: Label *label = new Label(“My label”); Container *container = new Container(); container->add(label); ImplicitAnimationController controller = ImplicitAnimationController::create(container) .enabled(false); label->setTranslationX(500); In the above sample, the Label is still subject to an implicit animation even though its parent Container disables them. If you want to disable the implicit animation for the Label, you need to add the ImplicitAnimationController to the Label itself. ImplicitAnimationController controller = ImplicitAnimationController::create(label) .enabled(false); label->setTranslationX(500); Not applicable The implicitLayoutAnimationsEnabled property Cascades also includes the implicitLayoutAnimationsEnabled property. You can set the implicitLayoutAnimationsEnabled property on individual controls to disable implicit animations that relate to changes in layout (such as changing the preferred height or width of a control). The following code sample demonstrates this behavior: Container { Container { id: testCont background: Color.Red Button { text: "Change height" onClicked: { testCont.implicitLayoutAnimationsEnabled = false; testCont.preferredHeight = 1280; } } } } // In your application source file // Create a page, two containers and a button Page *page = new Page(); Container *root = Container::create(); mytestcont = Container::create().background(Color::Red); Button *myButton = Button::create().text((const char*) "Change Height"); // Connect the button's clicked() signal to a slot. Make sure // to test the return value to detect any errors bool connectResult; Q_UNUSED(connectResult); connectResult = connect(myButton, SIGNAL (clicked()), this, SLOT(onChangeHeightButtonClicked())); Q_ASSERT(connectResult); mytestcont->add(myButton); // Set the content of the page and display it root->add(mytestcont); page->setContent(root); Application::instance()->setScene(page); } You also define the onChangeHeightButtonClicked() slot function in your application source file: void ApplicationUI::onChangeHeightButtonClicked() { // Change the button text when clicked mytestcont->bb::cascades::Control::setImplicitLayoutAnimationsEnabled(false); mytestcont->bb::cascades::Control::setPreferredHeight(1280); } In your header file, you declare the Button variable and the onChangeHeightButtonClicked()slot: private: bb::cascades::Container *mytestcont; private slots: void onChangeHeightButtonClicked(); Not applicable Last modified: 2015-05-07 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
https://developer.blackberry.com/native/documentation/ui/animations/implicit_animations.html
CC-MAIN-2018-51
en
refinedweb
With DataFrames giving us the opportunity to store huge grids of data, we will sometimes want to group particular parts of our data set for analysis. Pandas’ ‘groupby’ method gives us an easy way to do so. Let’s put together a DataFrame of a team’s recent results with a dictionary: import pandas as pd #data is in a dictionary, each entry will be a column #The first part of the entry is the column name, the second the values data = {'Opponent': ["Atletico Jave","Newtown FC", "Bunton Town", "Fentborough Dynamo"], 'Location': ["Home","Away","Away","Home"], 'GoalsFor': [2,4,3,0], 'GoalsAgainst': [4,0,2,2]} Matches = pd.DataFrame(data) Matches An obvious way to group this data is by home and away matches. Let’s use the ‘.groupby()’ method to do so. We just have to provide the column that we want to group by. In this case, location. We’ll assign that to a variable, then call ‘.mean()’ to find the average. HAMatches = Matches.groupby('Location') HAMatches.mean() Or cut out the variable and chain the ‘.mean()’ onto the end. Or chain another method: Matches.groupby('Location').mean() #Describes the dataset for each variable within - this is awesome! Matches.groupby('Location').describe() #Let's step up the chaining... #'Groupby' location, then describe it to me... #Then 'transpose' it (flip it onto its side)... #Finally, just give me 'Away' data Matches.groupby('Location').describe().transpose()['Away'] GoalsAgainst count 2.000000 mean 1.000000 std 1.414214 min 0.000000 25% 0.500000 50% 1.000000 75% 1.500000 max 2.000000 GoalsFor count 2.000000 mean 3.500000 std 0.707107 min 3.000000 25% 3.250000 50% 3.500000 75% 3.750000 max 4.000000 Name: Away, dtype: float64 print("All that work done in just " + str(len("Matches.groupby('Location').describe().transpose()['Away']")) + " characters!") All that work done in just 58 characters! Summary It is staggering how easily we can not only group data, but to also use Pandas to get some insight into our data. Really good job following this far. We learned how to use ‘groupby()’ to group by location – home or away. We then used methods to describe our data, find averages and even to change the shape of our data frames. Really impressive stuff! Next up, you might want to take a look at how we can join dataFrames together, how to deal with missing values or how to use even more operations.
http://fcpython.com/data-analysis/grouping-data
CC-MAIN-2018-51
en
refinedweb
Stochastic gradient Hamiltonian Monte Carlo package Project description sghmc has sghmc, hmc, U, gradU #!/usr/bin/env python from sghmc import sghmc from sghmc import hmc from sghmc import U from sghmc import gradU sghmc(gradU, eta, L, alpha, x, V) hmc(U, gradU, m, dt, nstep, x, MH) gradU(data, minibatch, beta) U(data, beta) Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/sghmc/
CC-MAIN-2018-51
en
refinedweb
Standard C++ Library Copyright 1998, Rogue Wave Software, Inc. divides - Returns the result of dividing its first argument by its second. #include <functional> template <class T> struct divides; divides is a binary function object. Its operator() returns the result of dividing x by y. You can pass a divides object to any algorithm that requires a binary function. For exam- ple, the transform algorithm applies a binary operation to corresponding values in two collections and stores the result. divides would be used in that algorithm in the following manner: vector<int> vec1; vector<int> vec2; vector<int> vecResult; transform(vec1.begin(), vec1.end(), vec2.begin(), vecResult.begin(), divides<int>()); After this call to transform, vecResult[n] contains vec1[n] divided by vec2[n]. template <class T> struct divides : binary_function<T, T, T> { T operator() (const T&, const T&) const; }; binary_function, Function_Objects
http://docs.oracle.com/cd/E19205-01/820-4180/man3c++/divides.3.html
CC-MAIN-2016-40
en
refinedweb
I'm looking to try out monotone, but can't seem to get it compiled on OpenBSD 4.0. Here are the steps I've taken: Build boost according to instructions in INSTALL, then copy includes and libs to /usr/local. ./configure --enable-boost-static --disable-nls (this finds and tests boost ok) Then I issue "gmake" and here is the error I get: In file included from /usr/local/include/boost/config.hpp:44, from /usr/local/include/boost/tuple/tuple.hpp:23, from cmd_list.cc:15: /usr/local/include/boost/config/stdlib/libstdcpp3.hpp:48:1: warning: "BOOST_DISABLE_THREADS" redefined <command line>:8:1: warning: this is the location of the previous definition cmd_list.cc: In member function `virtual std::string commands::cmd_ls::desc()': cmd_list.cc:506: error: use of namespace `std' as expression cmd_list.cc:506: error: syntax error before `:' token cmd_list.cc:506: error: `result' undeclared (first use this function) cmd_list.cc:506: error: (Each undeclared identifier is reported only once for each function it appears in.) gmake[2]: *** [mtn-cmd_list.o] Error 1 gmake[2]: Leaving directory `/root/tmp/monotone-0.30' gmake[1]: *** [all-recursive] Error 1 gmake[1]: Leaving directory `/root/tmp/monotone-0.30' gmake: *** [all] Error 2 Any help from the c++ gurus here would be appreciated. As I'm not on the list, please CC address@hidden, but I will check the archives and see if there is a solution posted. Also I'm sucking the lasted revisions down on my linux box and will see if that source compiles. Thanks, Jeb -- Jeb Campbell address@hidden
http://lists.gnu.org/archive/html/monotone-devel/2006-10/msg00446.html
CC-MAIN-2016-40
en
refinedweb
READDIR(3) Linux Programmer's Manual READDIR(3) readdir - read a directory #include <dirent.h> struct dirent *readdir(DIR *dirp);:.. EBADF Invalid directory stream descriptor dirp. For an explanation of the terms used in this section, see attributes(7). ┌──────────┬───────────────┬──────────────────────────┐ │Interface │ Attribute │ Value │ ├──────────┼───────────────┼──────────────────────────┤ │readdir() │ Thread safety │ MT-Unsafe race:dirstream │ └──────────┴───────────────┴──────────────────────────┘. POSIX.1-2001, POSIX.1-2008, SVr4, 4.3BSD. A directory stream is opened using opendir(3). The order in which filenames are read by successive calls to readdir() depends on the filesystem implementation; it us ('\0').. getdents(2), read(2), closedir(3), dirfd(3), ftw(3), offsetof(3), opendir(3), readdir_r(3), rewinddir(3), scandir(3), seekdir(3), telldir(3) This page is part of release 4.07 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. 2016-03-15 READDIR(3)
http://man7.org/linux/man-pages/man3/readdir.3.html
CC-MAIN-2016-40
en
refinedweb
CFD Online Discussion Forums ( ) - FLUENT ( ) - - Continuing User Defined Real Gas Model issues ( ) aeroman October 19, 2010 23:00 Continuing User Defined Real Gas Model issues I am trying to model a gun blast in fluent. I am not modeling the round, rather I am patching the barrel to known internal pressure, temperature and velocity distributions. I can supply two different "materials", one to the air domain and one to the gun domain (i.e. molecular weights and Cp). My domain is a quarter section. I am interested in implementing a new equation of state (abel-nobel) and have written and successfully compiled a user defined real gas model that runs fine in serial and parallel. The issue is that as far as I can tell I can only specify one molecular weight and Cp when using the UDRGM. So the question is, is there any way that I can specify two different molecular weights and CP's when using a UDRGM? I know it must be a possibility since examples such as this: are not difficult to come by. Thanks in advance for any help on this one. Here is my UDRGM, I have just modified the real gas example given in the fluent documentation: #include "udf.h" #include "stdio.h" #include "ctype.h" #include "stdarg.h" static int (*usersMessage)(char *,...); static void (*usersError)(char *,...); #define covolume 0.0010044 #define TDatum 288.15 #define PDatum 1.01325e5 #define rDatum 1.2941 #define MW 23 #define RGAS (UNIVERSAL_GAS_CONSTANT/MW) DEFINE_ON_DEMAND(I_do_nothing) { /* This is a dummy function to allow us to use */ /* the Compiled UDFs utility */ } void IDEAL_error(int err, char *f, char *msg) { if (err) usersError("IDEAL_error (%d) from function: %s\n%s\n",err,f,msg); } void IDEAL_Setup(Domain *domain, cxboolean vapor_phase, char *filename, int (*messagefunc)(char *format, ...), void (*errorfunc)(char *format, ...)) { /* Use this function for any initialization or model setups*/ usersMessage = messagefunc; usersError = errorfunc; usersMessage("\nLoading Real-Ideal Library: %s\n", filename); } double IDEAL_density(double Temp, double press, double yi[]) { double r=press/((RGAS*Temp)+(covolume*press)); /* Density at Temp & press */ return r; /* (Kg/m^3) */ } double IDEAL_specific_heat(double Temp, double density, double P, double yi[]) { double cp= 1807.39; return cp; /* (J/Kg/K) */ } double IDEAL_enthalpy(double Temp, double density, double P, double yi[]) { double h=Temp*IDEAL_specific_heat(Temp, density, P, yi); return h; /* (J/Kg) */ } double IDEAL_entropy(double Temp, double density, double P, double yi[]) { double CV=RGAS/IDEAL_specific_heat(Temp, density, P, yi); double gamma=IDEAL_specific_heat(Temp, density, P, yi)/CV; double s=CV*log(fabs(Temp/TDatum))+ RGAS*log(fabs((1.0/density-covolume)/(1.0/rDatum-covolume))); return s; /* (J/Kg/K) */ } double IDEAL_mw(double yi[]) { return MW; /* (Kg/Kmol) */ } double IDEAL_speed_of_sound(double Temp, double density, double P, double yi[]) { double CV=RGAS/IDEAL_specific_heat(Temp, density, P, yi); double gamma=IDEAL_specific_heat(Temp, density, P, yi)/CV; return sqrt(Temp*RGAS*gamma)*(1.0/(1.0-(covolume*density))); /* m/s */ } double IDEAL_viscosity(double Temp, double density, double P, double yi[]) { double mu=1.7894e-05; return mu; /* (Kg/m/s) */ } double IDEAL_thermal_conductivity(double Temp, double density, double P, double yi[]) { double ktc=0.0242; return ktc; /* W/m/K */ } double IDEAL_rho_t(double Temp, double density, double P, double yi[]) { /* derivative of rho wrt. Temp at constant p */ double rho_t=((covolume*density*density)-density)/Temp; return rho_t; /* (Kg/m^3/K) */ } double IDEAL_rho_p(double Temp, double density, double P, double yi[]) { /* derivative of rho wrt. pressure at constant T */ double rho_p=((1.0-(density*covolume))*(1.0-(density*covolume)))/(RGAS*Temp); return rho_p; /* (Kg/m^3/Pa) */ } double IDEAL_enthalpy_t(double Temp, double density, double P, double yi[]) { /* derivative of enthalpy wrt. Temp at constant p */ return IDEAL_specific_heat(Temp, density, P, yi); } double IDEAL_enthalpy_p(double Temp, double density, double P, double yi[]) { /* derivative of enthalpy wrt. pressure at constant T */ /* general form dh/dp|T = (1/rho)*[ 1 + (T/rho)*drho/dT|p] */ /* but for ideal gas dh/dp = 0 */ return covolume ; } UDF_EXPORT RGAS_Functions RealGasFunctionList = { IDEAL_Setup, /* initialize */ IDEAL_density, /* density */ IDEAL_enthalpy, /* enthalpy */ IDEAL_entropy, /* entropy */ IDEAL_specific_heat, /* specific_heat */ IDEAL_mw, /* molecular_weight */ IDEAL_speed_of_sound, /* speed_of_sound */ IDEAL_viscosity, /* viscosity */ IDEAL_thermal_conductivity, /* thermal_conductivity */ IDEAL_rho_t, /* drho/dT |const p */ IDEAL_rho_p, /* drho/dp |const T */ IDEAL_enthalpy_t, /* dh/dT |const p */ IDEAL_enthalpy_p /* dh/dp |const T */ }; /************************************************** ************/ aeroman October 21, 2010 15:13 Perhaps not the problem It turns out that I may be attacking my problem the wrong way. Perhaps somebody can help me out with what I think will answer this question. From the first post, I have two types of gas one is the gun gas and one is just air. So, since air is already avaiable, I made a new material called "gun_gas" which included my required cp and mw and changed the cell zone for the gun to include the new material. I then patched the gun to the pressure, temperature and velocity distributions as described. However, it would appear that I have forced the "cell zones" associated with the gun to always be the material I have described.. So the question is now, how do I specify an initial condition in the gun that has a different cp molecular weight (gamma, R etc), pressure, temperature, and velocity distribution than in the fluid domain? I suppose this must be similar to forcing some species of gas into the ambient, sort of like a venting tank. Yet I don't seem to be able to find any exmples of this. aeroman October 22, 2010 15:28 Problem Solved All, I'll do my usual trick and answer this question in case anybody is interested in the answer. For this you need to write a user defined multispecies real gas model (example provided in Fluent documentation). In my case, I had two species of gas. When you initialize the problem you can set the mas fraction of gas in each fluid zone that was specified in the UDRGM. timclark11 April 29, 2015 10:56 Hi Aeroman, I don't know if you're still active on these forums but it'd be great if you were. I'm trying to tackle almost exactly the same problem as you, I initially tried to solve it use a dynamic mesh but that proved way to computationally intensive so I've moved onto to trying to implement the same solution as you. I was just wondering if you'd be able to shed any more light on how you solved this problem? Did you work on a 64 bit system? I've been trying to compile a real gas UDF and have gotten absolutely nowhere. aeroman April 29, 2015 20:34 Wow this was a while ago. I'm not sure "active" is the right word for it, but I still use the site primarily as a resource. I'd be happy to help if i can. I ended up getting very good results as compared to experimental data and wrote a paper and presented the results. However it was important in my case that I used both multiple species as well as dynamic adaptive meshing. This helped achieve the blast wave propagated appropriatly and that the peak pressure at the shock was not reduced due to smearing across cells. I used a high performance computing cluster for this work. I'd have to double check specifics. It may be easier if you email me to discuss. timclark11 May 1, 2015 02:57 Cheers Aeroman I'd really appreciate it, I've sent you a pm with a little more info and my contact information putti007 April 8, 2016 03:34 Hi Aeroman and timclark11, i am facing the same problem can you please help me i have tried to do adaptive meshing but with no success. o can u plz sen me the details on how you tackled the problem. my mail id is- abhilashputti37@gmail.com waiting for your reply. thanking you in advance. All times are GMT -4. The time now is 20:36 .
http://www.cfd-online.com/Forums/fluent/81208-continuing-user-defined-real-gas-model-issues-print.html
CC-MAIN-2016-40
en
refinedweb
computertemp crashed with OSError in listdir()/ ProcEnviron: LANG=en_US.UTF-8 PATH=/ SHELL=/bin/bash PythonArgs: ['/usr/ SourcePackage: computertemp Title: computertemp crashed with OSError in listdir() Uname: Linux 2.6.27-3-generic i686 UserGroups: adm admin cdrom dialout fuse lpadmin plugdev sambashare vboxusers Upgraded to Karmic and tried the sensor option "Kernel i2c .. (hwmon)". After choosing this option it chrashed. Now I'm using APCI as before for getting the cpu temp same with Karmic I've tryed to install it but it crashes and I cannot install LM-sensors right so I can't read my temperatures.... This problem is occurring on Lucid with IBM ACPI sensors as well. it is caused by computertemp assuming that the sensor info always is under the device/ symlink in the sysfs hwmon directory, but Documentation/ > Up to lm-sensors 3.0.0, libsensors looks for hardware monitoring attributes > in the "physical" device directory. Since lm-sensors 3.0.1, attributes found > in the hwmon "class" device directory are also supported. Complex drivers > (e.g. drivers for multifunction chips) may want to use this possibility to > avoid namespace pollution. The only drawback will be that older versions of > libsensors won't support the driver in question. So computertemp should look directly in /sys/class/ Could you please test if latest code from SVN fixes this bug? Thanks! PS: Repository is in http:// Hi Adolfo, I'm not quite sure what you want me to do with the link. I clicked on it and it took me to a website with some information like a directory. Can you please tell me what i should do with this link? Sorry but i'm a newbie to this... Regards, Kaizer. ------- Kaizer Billimoria "'It is no measure of health to be well adjusted to a profoundly sick society." - J.Krishnamurti. "In obedience there is always fear, and fear darkens the mind." - J.Krishnamurti See My Google profile: http:// Google Wave: - <email address hidden> ------- 2010/6/5 Adolfo González Blázquez <email address hidden> > Could you please test if latest code from SVN fixes this bug? Thanks! > > PS: Repository is in > http:// > > -- > computertemp crashed with OSError in listdir() > https:/ > You received this bug notification because you are a direct subscriber > of a duplicate bug. > > Status in “computertemp” package in Ubuntu: Triaged > >/ > --oaf-activate- > ProcEnviron: > LANG=en_US.UTF-8 > > PATH=/usr/ > SHELL=/bin/bash > PythonArgs: ['/usr/ > '--oaf- > '--oaf-ior-fd=28'] > SourcePackage: computertemp > Title: computertemp crashed with OSError in listdir() > Uname: Linux 2.6.27-3-generic i686 > UserGroups: adm admin cdrom dialout fuse lpadmin plugdev sambashare > vboxusers > > To unsubscribe from this bug, go to: > > https:/ > Ok, follow this: $ sudo apt-get remove computertemp $ sudo apt-get build-dep computertemp $ sudo apt-get install gnome-common subversion $ svn co http:// $ cd computertemp $ sh autogen.sh --prefix /usr $ make $ sudo make install Then logout/login, add computertemp to your panel, and see if it works. Still present in Maverick (version 0.9.6.1-1.1) This bug's still exist in 10.04... It crashes everyday :( This bug's still exist in 10.04... It crashes everyday... just got the same thing when upgrading to the latest Karmic Koala alpha
https://bugs.launchpad.net/ubuntu/+source/computertemp/+bug/272326
CC-MAIN-2016-40
en
refinedweb
Combining Silverlight and Windows Azure projects Standard Silverlight applications require that they be hosted on HTML pages, so that they can be loaded in a browser. Developers who work with the .Net framework will usually host this page within an ASP.Net website. The easiest way to host a Silverlight application on Azure is to create a single web role that contains an ASP.Net application to host the Silverlight application. Hosting the Silverlight application in this way enables you, as a developer, to take advantage of the full .Net framework to support your Silverlight application. Supporting functionalities can be provided such as hosting WCF services, RIA services, Entity Framework, and so on. In the upcoming chapters, we will explore ways by which RIA services, OData, Entity Framework, and a few other technologies can be used together. For the rest of this chapter, we will focus on the basics of hosting a Silverlight application within Azure and integrating a hosted WCF service. Creating a Silverlight or Azure solution Your system should already be fully configured with all Silverlight and Azure tools. In this section, we are going to create a simple Silverlight application that is hosted inside an Azure web role. This will be the basic template that is used throughout the book as we explore different ways in which we can integrate the technologies together: - Start Visual Studio as an administrator. You can do this by opening the Start Menu and finding Visual Studio, then right-clicking on it, and selecting Run as Administrator. This is required for the Azure compute emulator to run successfully. - Create a new Windows Azure Cloud Service. The solution name used in the following example screenshot is Chapter3Exercise1: - Add a single ASP.Net Web Role as shown in the following screenshot. For this exercise, the default name of WebRole1 will be used. The name of the role can be changed by clicking on the pencil icon next to the WebRole1 name: - Visual Studio should now be loaded with a single Azure project and an ASP. Net project. In the following screenshot, you can see that Visual Studio is opened with a solution named Chapter3Exercise1. The solution contains a Windows Azure Cloud project, also called Chapter3Exercise1. Finally, the ASP.Net project can be seen named as WebRole1: - Right-click on the ASP.Net project named WebRole1 and select Properties. - In the WebRole1 properties screen, click on the Silverlight Applications tab. - Click on Add to add a new Silverlight project into the solution. The Add button has been highlighted in the following screenshot: - For this exercise, rename the project to HelloWorldSilverlightProject. Click on Add to create the Silverlight project. The rest of the options can be left to their default settings, as shown in the following screenshot. - Visual Studio will now create the Silverlight project and add it to the solution. The resulting solution should now have three projects as shown in the following screenshot. These include the original Azure project, Chapter3Exercise1; the ASP.Net web role, WebRole1; and the third new project HelloWorldSilverlightProject: - Open MainPage.xaml in design view, if not already open. - Change the grid to a StackPanel. - Inside the StackPanel, add a button named button1 with a height of 40 and a content that displays Click me!. - Inside the StackPanel, underneath button1, add a text block named textBlock1 with a height of 20 - The final XAML should look similar to this code snippet: - Double-click on button1 in the designer to have Visual Studio automatically create a click event. The final XAML in the designer should look similar to the following screenshot: - Open the MainPage.xaml.cs code behind the file and find the button1_Click method. Add a code that will update textBlock1 to display Hello World and the current time as follows: - Build the project to ensure that everything compiles correctly. Now that the solution has been built, it is ready to be run and debugged within the Windows Azure compute emulator. The next section will explore what happens while running an Azure application on the compute emulator. (Move the mouse over the image to enlarge.) <UserControl> <StackPanel x: <Button x: <TextBlock x: </StackPanel> </UserControl> private void button1_Click(object sender, RoutedEventArgs e) { textBlock1.Text = "Hello World at " + DateTime.Now.ToLongTimeString(); } Running an Azure application on the Azure compute emulator With the solution built, it is ready to run on the Azure simulation: the compute emulator. The compute emulator is the local simulation of the Windows Azure compute emulator which Microsoft runs on the Azure servers it hosts. When you start debugging by pressing F5 (or by selecting Debug | Start Debugging from the menu), Visual Studio will automatically package the Azure project, then start the Azure compute emulator simulation. The package will be copied to a local folder used by the compute emulator. The compute emulator will then start a Windows process to host or execute the roles, one of which will be started as per the instance request for each role. Once the compute emulator has been successfully initialized, Visual Studio will then launch the browser and attach the debugger to the correct places. This is similar to the way Visual Studio handles debugging of an ASP.Net application with the ASP. Net Development Server. The following steps will take you through the process of running and debugging applications on top of the compute emulator: - In Solution Explorer, inside the HelloWorldSilverlightProject, right-click on HelloWorldSilverlightProjectTestPage.aspx, and select Set as startup page. - Ensure that the Azure project (Chapter3Exercise1) is still set as the start-up project. - In Visual Studio, press F5 to start debugging (or from the menu select Debug | Start Debugging). Visual Studio will compile the project, and if successful, begins to launch the Azure compute emulator as shown in the following screenshot: - Once the compute emulator has been started and the Azure package deployed to it, Visual Studio will launch Internet Explorer. Internet Explorer will display the page set as the start-up page (which was set to in an earlier step HelloWorldSilverlightProjectTestPage.aspx). - Once the Silverlight application has been loaded, click on the Click me! button. The TextBlock should be updated with the current time, as shown in the following screenshot: Upon this completion, you should now have successfully deployed a Silverlight application on top of the Windows Azure compute emulator. You can now use this base project to build more advanced features and integration with other services. Consuming an Azure-hosted WCF service within a Silverlight application A standalone Silverlight application will not be able to do much by itself. Most applications will require that they consume data from a data source, such as to get a list of products or customer orders. A common way to send data between .Net applications is through WCF services. The following steps will explore how to add a WCF service to your Azure web role, and then consume it from within the Silverlight application: - In Visual Studio, right-click on the ASP.Net web role project (WebRole1) and click Add | New Item. - Add a new WCF service named HelloWorldService.svc as shown in the following screenshot: - Once the WCF service has been added into the project, three new files will be added to the project: IHelloWorldService.cs, HelloWorldService.svc, and HelloWorldService.svc.cs. - Open IHelloWorldService.cs and change the interface, so that it defines a single method named GenerateHelloWorldGreeting that takes no parameters and returns a string. The entire file should look similar to the following code snippet: - Open HelloWorldService.svc.cs and modify the code, so that it implements the GenerateHelloWorldGreeting method as follows (the method in the code snippet returns Hello World, as well as the current server time): - Add a breakpoint on the line of code that returns the "Hello world" message. This breakpoint will be used in a later step. - Build the solution to ensure there are no syntax errors. If the solution does not build, then runtime errors can occur when trying to add the service reference. - Right-click on the Silverlight project HelloWorldSilverlightProject and select Add Service Reference. Click on Discover to allow Visual Studio to automatically detect the WCF service in the solution. Select the service and name the reference HelloWorldServiceReference, as shown in the screenshot, and click OK: - With the WCF service reference added to the Silverlight application, we will change the functionality of the Click me! button. Currently when clicked, the event handler will update the TextBlock with a "Hello world" message and the current time on the client side. This will be changed, so that clicking on the button will cause the Silverlight application to call the WCF service and have the "Hello world" message generated on the server side. In Visual Studio, within the Silverlight project, open MainPage.xaml.cs. - Modify the button1_Click method, so that it calls the WCF service and updates textBlock1 with the returned value. Due to the dynamic nature of developing with Azure, the address of the service endpoint can change many times through the development lifecycle. Each time Visual Studio deploys the project onto the compute emulator, a different port number can be assigned if the previous deployment has not been de-provisioned yet. Deploying to the Windows Azure staging environment will also give it a new address, while deploying to production will provide yet another endpoint address. The following code shows one technique to automatically handle the Silverlight application being hosted at different addresses. The Silverlight application invokes the WCF service by accessing it relative to where the Silverlight application is currently being hosted. This is in contrast to the usual behavior of calling WCF services which require an absolute address that would need to be updated with each deployment. - Compile the application to check that there are no syntax errors. - Press F5 to run the whole application in a debug mode. The Azure compute emulator should start up and Internet Explorer should be launched again with the Silverlight application. - Click on the Click me! button. The Silverlight client will call the WCF service causing Visual Studio to hit the breakpoint that was set earlier inside the WCF service. This shows that even though we are running and debugging a Silverlight application, we are still able to debug WCF services that are being hosted inside the Azure compute emulator. - Remove the breakpoint and continue the execution. Click on the button a few more times to watch the TextBlock update itself. The results should look similar to the following screenshot. Be sure to keep the browser open for the next steps: - Open the Azure compute emulator. Do this by right-clicking on the Azure icon in the system tray, and then clicking on Show Compute Emulator UI. - The compute emulator UI should now be open and look similar to the following screenshot. In the screenshot, you can see that there is a single deployment (the 29th one that has been deployed to the compute emulator). The deployment has one Azure project named Chapter3Exercise1. This Azure project has a single web role named WebRole1, which is currently executing a single instance. Clicking on the instance will show the output terminal of that instance. Here the trance information can be seen, being an output to the window: using System.ServiceModel; namespace WebRole1 { [ServiceContract] public interface IHelloWorldService { [OperationContract] string GenerateHelloWorldGreeting(); } } using System; namespace WebRole1 { public class HelloWorldService : IHelloWorldService { public string GenerateHelloWorldGreeting() { var currentTime = DateTime.Now.ToLongTimeString(); return "Hello World! The server time is " + currentTime; } } } using System; using System.ServiceModel; using System.Windows; using System.Windows.Controls; using HelloWorldSilverlightProject.HelloWorldServiceReference; namespace HelloWorldSilverlightProject { public partial class MainPage : UserControl { public MainPage() { InitializeComponent(); } private void button1_Click(object sender, RoutedEventArgs e) { //Find the URL for the current Silverlight .xap file. Go up one level to get to the root of the site. var url = Application.Current.Host.Source.OriginalString; var urlBase = url.Substring(0, url.IndexOf("/ClientBin", StringComparison.InvariantCultureIgnoreCase)); //Create a proxy object for the WCF service. Use the root path of the site and append the service name var proxy = new HelloWorldServiceClient(); proxy.Endpoint.Address = new EndpointAddress(urlBase + "/HelloWorldService.svc"); proxy.GenerateHelloWorldGreetingCompleted += proxy_GenerateHelloWorldGreetingCompleted; proxy.GenerateHelloWorldGreetingAsync(); } void proxy_GenerateHelloWorldGreetingCompleted(object sender, GenerateHelloWorldGreetingCompletedEventArgs e) { textBlock1.Text = e.Result; } } } Relative WCF services The code in the code snippet shows a technique for calling WCF services relative to the currently executing Silverlight application. This technique means that the Silverlight application is not dependent on the service address being updated for each deployment. This allows the whole ASP.Net application to be hosted and deployed on a number of environments without configuration changes, such as the ASP.Net development server, Azure compute emulator, Azure staging or production environments, or on any other IIS host. Configuring the number of web roles The power in Azure comes from running multiple instances of a single role and distributing the computational load. It is important to understand how to configure the size and number of instances of a role that should be initialized. The following steps will explain how this can be done within the Visual Studio: - Stop debugging the application and return to Visual Studio. - Inside the Azure project Chapter3Exercise1, right-click on WebRole1, and select Properties. The role properties window is used to specify both the size of the instances that should be used, as well as the number of instances that should be used. The VM size has no effect on the compute emulator, as you are still constrained by the local development machine. The VM size setting is used when the package is deployed onto the Windows Azure servers. It defines the number of CPUs and amounts of RAM allocated to each instance. These settings determine the charges Microsoft will accrue to your account. In the earlier stages of development, it can be useful to set the VM size to extra small to save consumption costs. This can be done in situations where performance is not a high requirement, such as when a few developers are testing their deployments. Extra small instances The extra small instances are great while developing as they are much cheaper instances to deploy. However, they are low-resourced and also have bandwidth restrictions enforced on them. They are not recommended for use in a high performance production environment. The Instance count is used to specify the number of instances of the role that should be created. Creating multiple instances of a role can assist in testing concurrency while working with the compute emulator. Be aware that you are still constrained by the local development box, setting this to a very large number can lower the performance of your machine: - Set the Instance count to 4 as shown in the following screenshot. If you are planning to deploy the application to the Windows Azure servers, it is a good idea to set the VM size to Extra Small while testing: - Open HelloWorldService.svc.cs and modify the service implementation. The service will now use the Windows Azure SDK to retrieve instance ID that is currently handling the request: - Press F5 to debug the project again. - Open the Azure compute emulator UI. There should now be four instances handling the request. - In Internet Explorer, click on the Click me! button multiple times. The text will update with the server time and the instance that handled the request. The following screenshot shows that instance 1 was handling the request. If the instance ID does not change after multiple clicks, try to launch a second browser, and click again. Sometimes, affinity can cause a request to get sticky and stay with a single instance: using System; using Microsoft.WindowsAzure.ServiceRuntime; namespace WebRole1 { public class HelloWorldService : IHelloWorldService { public string GenerateHelloWorldGreeting() { var currentTime = DateTime.Now.ToLongTimeString(); var instanceId = RoleEnvironment.CurrentRoleInstance.Id; return string.Format("Hello World! The server time is {0}. Processed by {1}", currentTime, instanceId); } } } This exercise demonstrated requests for a WCF service being load balanced over a number of Azure instances. The following diagram shows that as requests from the Silverlight client come in, the load balancer will distribute the requests across the instances. It is important to keep this in mind while developing Azure services and develop each role to be stateless services when working with multiple instances. Each request may be handled by a different instance each time, requiring you to not keep any session state inside an instance: Summary In this article, we created a new Silverlight application that was hosted with an Azure project. We then created a WCF service that was also hosted within the Azure project, and consumed it from the Silverlight application. The WCF service was then scaled to 4 instances to demonstrate how WCF requests can be load balanced across multiple instances. A technique was also shown to allow a WCF service to be consumed through a relative path, allowing the website to be hosted anywhere without the service address needing to be changed for each deployment.
https://www.packtpub.com/books/content/combining-silverlight-and-windows-azure-projects
CC-MAIN-2016-40
en
refinedweb
Working on my connector sandbox I've been trying to run stuff in pax-exam and in unmodified karaf. I ran into some rather hard to diagnose problems due to package versioning. Our stax 1.2 api bundle includes the javax.xml.namespace package and it apparently matches the same packages as implemented in java 5 and java 6. We are exporting it at version 1.0 but pax-exam/pax-runner and karaf export it with no version (version 0.0.0.0). We've modified our copy of karaf to export at version 1.0. Trying to use our stax bundles and bundles compiled against it (jaxb spec, jaxb impl, woodstox for example) cause mysterious CNFE and NCDFE inside jaxb impl. If I rebuild everything without this package version then I can run tests in pax-exam and deploy stuff in regular karaf. I think we should remove this package version. I see that we also have versions for javax.jws;version="2.0", \ javax.jws.soap;version="2.0", \ and wonder if we will encounter similar problems with pax-exam and plain karaf with those packages. I think as a general policy that unless there's a compelling reason such as with javax.transaction, until there is a spec defining package versions for stuff coming with java, we should not version these packages. I opened and I'm going to commit changes to at least the affected bundles if not geronimo trunk. thanks david jencks
http://mail-archives.apache.org/mod_mbox/geronimo-dev/201103.mbox/%3C735E8CD7-9C1E-4C1C-8B19-88C26576ABE8@yahoo.com%3E
CC-MAIN-2016-40
en
refinedweb
For many applications, the SQL Server database doesn't only hold business data. There is a good chance that the sys.messages table holds custom messages for the application and that another table may be used for application wide defaults. These repositories for messages, defaults, and so on help the developer to maintain the vital attribute of consistency. When it comes to developing an ASP.NET application, we need to make regular references to these tables to determine the message ID of a particular message from the database, or the exact name or value of a default. My experience has been that this can lead to shortcuts and inconsistency. What is needed is a hassle-free way of generating a class and enums for my messages. What I required was all my messages for the application to be held in sys.messages and all my application defaults to be held in my table PortDefaults. In SQL Server, they are available to other developers who want to run ad hoc queries directly on the database, and to Stored Procedures and user defined functions which are the only access the ASP.NET application has to the database. When writing VB.NET code for the application, I required intellisense to offer me a list of the available messages and defaults for me to choose from so I did not need to constantly be referring back to my SQL tables. Most of all, I wanted all this to be fuss and maintenance free. If another developer had added a new set of messages, I wanted them to be available to me without any need to change other tables or code. If I added a new default to the database, I wanted it to show on my intellisense prompt. The fabulous BuildProvider class in conjunction with the CodeDom allowed these goals to be achieved easily, with considerable help from two excellent articles: BuildProvider The code in this article was developed in Visual Studio 2005, using VB.NET 2005. The application I am dealing with has hundreds of messages associated with it, and I realized that having the entire list appear in the intellisense drop down each time was going to be too much, so I decided that tables that I would use would have three columns: sys.messages doesn't have a "group" column, so I created a view to supply one: SELECT message_id, CASE WHEN message_id < 60000 THEN 'Information' WHEN message_id < 70000 THEN 'Warning' WHEN message_id < 80000 THEN 'Error' END AS [group], text FROM sys.messages WHERE (message_id > 50000) There are three distinct elements to our task. The first is to establish where our data tables are and which columns we are interested in. The second is generating the code based on the contents of our data tables. The third is to get Visual Studio to create the code automatically when we are developing code. The code we want to create is going to be something similar to: Namespace repository Class SqlMessage Enum Information The_task_has_completed_successfully = 50001 End Enum Enum Warning Stock_of_this_item_is_now_low = 60001 This_supplier_will_not_deliver_at_weekends = 60002 End Enum Enum Error No_items_were_found = 70001 This_account_has_not_been_authorised = 70002 End Enum End Class End Namespace We start by creating an XML file to hold information about our SQL connection, data tables, and columns, and a few details about what we want to create. We will give the file an extension of .repos. Any unused extension will do, but the extension will be important later. The name of the file is not important. Our XML file will look similar to this: <?xml version="1.0" encoding="utf-8" ?> <repositorys namespace="repository"> <repository connectionString= "SERVER=.\SQLEXPRESS; DATABASE=portsys;Integrated Security=SSPI" tableName="PortMessagesView" numberColumnName="message_id" groupColumnName="group" textColumnName="text" className="SqlMessage" /> <repository connectionString= "SERVER=.\SQLEXPRESS; DATABASE=portsys;Integrated Security=SSPI" tableName="PortDefaultsView" numberColumnName="uid" groupColumnName="group" textColumnName="name" className="PortDefaults" /> </repositorys> <repositorys> has the namespace attribute which specifies the namespace that our created code will be in. <repositorys> namespace I have shown two <repository>s here to demonstrate that multiple repository entries can be made in the same file. The attributes for <repository> are: <repository> connectionString tableName numberColumnName groupColumnName textColumnName className You must specify all attributes for each repository. We now know enough to move on to the second task: generating the code. If you are not familiar with the CodeDom, this is not the article to learn much from, but hopefully will be enough to inspire you to investigate further. We will navigate through our XML file, creating a CodeCompileUnit and adding our namespace on the way. CodeCompileUnit 'get the xml input file Try Dim filename As String = MyBase.VirtualPath Dim xmlStream As Stream = VirtualPathProvider.OpenFile(MyBase.VirtualPath) xmlFile.Load(xmlStream) Catch ex As XPath.XPathException System.Console.WriteLine("XML Exception:" & ex.Message) Catch ex As Exception System.Console.WriteLine("Exception:" & ex.Message) End Try 'and create our navigator navigator = xmlFile.CreateNavigator 'now on to the business of creating the code 'somewhere to put our code Dim createdCode As New CodeCompileUnit 'create the namespace Dim createdNamespace As New CodeNamespace 'and find its name and name it Dim ns As String = "" iterator = navigator.Select("/repositorys") iterator.MoveNext() ns = iterator.Current.GetAttribute("namespace", "") If ns = "" Then ns = "DefaultRepository" System.Console.WriteLine("No namespace found - using default") End If createdNamespace.Name = ns createdCode.Namespaces.Add(createdNamespace) 'add commentary Dim comment As New CodeCommentStatement("This code has " & _ "been generated by the message repository tool") createdNamespace.Comments.Add(comment) 'now we iterate through the individual repository(s) pulling 'of the attributes we need to access the data 'so that we can enumerate the datarows iterator = navigator.Select("/repositorys/repository") Do While iterator.MoveNext Dim cs As String = iterator.Current.GetAttribute("connectionString", "") If cs = "" Then System.Console.WriteLine("connectionString not specified " & _ "for repository " & iterator.Current.Name) Exit Sub End If '... and so on for our other attributes (tn(tablename), 'nc(numberColumn), gc(groupColumn) tc(textColumn) and cn(className) ... We now know what all our attributes are, so we can go on to fill the namespace with a class using CodeTypeDeclaration. Then, fill the class with one or more enums (depending on how many groups there are) using CodeTypeDeclaration with isEnum set to True. Each enum is filled with declarations using CodeMemberField to create the field and CodePrimitiveExpression to set its value. The field name must comprise only alphas and underscores, so a quick function filterName will clean the text up for use. CodeTypeDeclaration isEnum True CodeMemberField CodePrimitiveExpression filterName Private Function filterName(ByVal source As String) As String Dim filtered As String = "" For Each letter As Char In source.ToCharArray If Not Char.IsLetter(letter) Then If letter = "%"c Then filtered &= "PARM" Else letter = "_"c filtered &= letter End If Else filtered &= letter End If Return filtered End Function The filtered function returns PARM in place of the percent sign, just to highlight that a parameter is expected for the message. Not perfect, as it does not deal with escaped % signs, but is adequate for our purposes. filtered 'create our top level class with the classname Dim messageClass As CodeTypeDeclaration = New CodeTypeDeclaration(cn) messageClass.Name = cn createdNamespace.Types.Add(messageClass) 'class is the default type 'now access the data 'get the data we need Dim allDa As SqlDataAdapter = New SqlDataAdapter("select * from " & tn, cs) Dim allDs As DataSet = New DataSet allDa.Fill(allDs) 'and and a list of the distinct groups in the table which will become enums Dim groupsDa As SqlDataAdapter = _ New SqlDataAdapter("select distinct [" & _ gc & "] from " & tn, cs) Dim groupsDs As DataSet = New DataSet groupsDa.Fill(groupsDs) For Each group As DataRow In groupsDs.Tables(0).Rows 'zero is the only table Dim currentGroup As String = group.Item(0) ' there is only column zero 'now create an enum for this group Dim createEnum As CodeTypeDeclaration = New CodeTypeDeclaration(currentGroup) createEnum.IsEnum = True 'need to specify enum for this type 'and add it to our message class messageClass.Members.Add(createEnum) 'now fill it with declarations For Each datarow As DataRow In allDs.Tables(0).Select(_ "[" & gc & "]='" & _ currentGroup & "'") 'our field name is derived from the text, 'replacing punctuation with underscores using filterName function Dim fieldName As String = filterName(datarow.Item(tc).ToString) 'and our value is the value form the numbercolumn Dim fieldValue As Integer = CInt(datarow.Item(nc)) 'create the field Dim field As CodeMemberField = New CodeMemberField field.Name = fieldName field.InitExpression = New CodePrimitiveExpression(fieldValue) 'add to the current group enumeration createEnum.Members.Add(field) We now have everything in our CodeCompileUnit. Of course we have not done anything with it yet. Our next task is to get the code in our CodeCompileUnit to be made available to our application. For this we use the BuildProvider facilities available to ASP.NET. If you have never come across the BuildProvider before, then be warned - this really is as easy as it looks! First, we need to tell ASP.NET about our provider, which we do in web.config. I have created a folder in my App_Code folder called CustomBuilders, which is where I will put the builder. We specify this in <codeSubDirectories>. The namespace and class of my BuildProvider will be CustomBuilders.ReposBuilder. We specify this in the type attribute of <add> in <buildProviders>. Earlier, you will remember, we created our input XML file with a file extension of .repos. This is specified in the extension attribute of <add>. The entry in the web.config file will be similar to the example below: <codeSubDirectories> CustomBuilders.ReposBuilder type <add> <buildProviders> extension <system.web> <!-- Set compilation debug="true" to insert debugging symbols into the compiled page. Because this affects performance, set this value to true only during development. --> <compilation debug="true"> <codeSubDirectories> <add directoryName="CustomBuilders"/> </codeSubDirectories> <assemblies> <add assembly="System.Design,="VSLangProj, Version=7.0.3300.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A"/> </assemblies> <buildProviders> <add extension=".repos" type="CustomBuilders.ReposBuilder"/> </buildProviders> </compilation> ... </system.web> The extension attribute of .repos (or whatever extension you chose for your XML input file earlier on) is the wonderful thing about the BuildProvider. Now, every time you place a file with the .repos (or whatever) extension into the App_Code folder, the BuildProvider will be triggered to generate the code you have specified. You won't see the code (just as you don't see so much of the code in ASP.NET 2.0), but it's there, and as if by magic, your newly generated namespace and classes will be there for you to use. So (at last!), it is time to bring things together by creating our custom builder namespace (CustomBuilders) containing our build provider (ReposBuilder). We inherit the BuildProvider class and provide just one override for the GenerateCode method which will contain our code-generating code and a couple of lines to write out the code. CustomBuilders ReposBuilder GenerateCode Imports Microsoft.VisualBasic Imports System Imports System.IO Imports System.Text Imports System.Web Imports System.Web.UI Imports System.Web.Hosting Imports System.Web.Compilation Imports System.CodeDom Imports System.Xml Imports System.Data Imports System.Data.SqlClient Namespace CustomBuilders <BuildProviderAppliesTo(BuildProviderAppliesTo.Code)> _ Public Class ReposBuilder Inherits BuildProvider Private xmlFile As New XmlDocument Private navigator As XPath.XPathNavigator Private iterator As XPath.XPathNodeIterator Public Overrides Sub GenerateCode(ByVal assemblyBuilder _ As System.Web.Compilation.AssemblyBuilder) MyBase.GenerateCode(assemblyBuilder) '... 'in here, our code for reading our attributes and creating our CodeComplieUnit '... If Not (createdCode Is Nothing) Then assemblyBuilder.AddCodeCompileUnit(Me, createdCode) End If End Sub End Class End Namespace This VB file needs to be placed in the App_Code/CustomBuilders folder that we created earlier. No need to compile - nothing else required beyond this code, only the web.config entries and our XML input file with the .repos extension in the App_Code folder. So what do we get? When you add your .repos file to the App_Code folder, ASP.NET will see to the code creation for you. If you go to the VB code for a page and add an import, you will see (in our case) he {}repository come up on the list. Having imported it, you can use a simple statement like: dim t as integer = message.error.No_items_were_found When you enter the dot after the message, the intellisense will offer you Error|Information|Warning, and as you enter the dot after Error, the intellisense dropdown offers you all your error messages. The variable t will be assigned the message number from your message table. The designer is even kind enough to put them all in alphabetical order for you! t If, like me, you tend to shy away from some of the less obvious features of ASP.NET, because you don't have time to acquire the skills or feel that the return on the effort would not be worthwhile, think again when it comes to the BuildProvider. It really is so straightforward to use, and even a simple application like this could reap gains in a very short time, not to mention improvements in consistency and reductions in.
http://www.codeproject.com/Articles/14321/Automatically-generate-classes-and-enums-from-SQL-
CC-MAIN-2016-40
en
refinedweb
Color Chooser: System Color - Online Code Description This is a code which chooses the color but it have another additional functionality of providing the System Colors viz. Windows Default Color, Control Shadow, etc. Source Code import java.awt.BorderLayout; import java.awt.Color; import java.awt.Component; import java.awt.Container; import java.awt.Graphics; import java.awt.Polygon; import java.awt.SystemColor; import java.awt.event.ActionEven... (login or register to view full code) To view full code, you must Login or Register, its FREE. Hey, registering yourself just takes less than a minute and opens up a whole new GetGyan experience.
http://www.getgyan.com/show/1133/Color_Chooser%3A_System_Color
CC-MAIN-2016-40
en
refinedweb
parsimonious 0.7.0 (Soon to be) the fastest pure-Python PEG parser I could muster. Goals - Speed - Frugal RAM use - Minimalistic, understandable, idiomatic Python code - Readable grammars - Extensible grammars - Complete test coverage - Separation of concerns. Some Python parsing kits mix recognition with instructions about how to turn the resulting tree into some kind of other representation. This is limiting when you want to do several different things with a tree: for example, render wiki markup to HTML or to text. - Good error reporting. I want the parser to work with me as I develop a grammar. Example Usage Here’s how to build a simple grammar: >>> from parsimonious.grammar import Grammar >>> grammar = Grammar( ... """ ... bold_text = bold_open text bold_close ... text = ~"[A-Z 0-9]*"i ... bold_open = "((" ... bold_close = "))" ... """) You can have forward references and even right recursion; it’s all taken care of by the grammar compiler. The first rule is taken to be the default start symbol, but you can override that. Next, let’s parse something and get an abstract syntax tree: >>> print grammar.parse('((bold stuff))') <Node called "bold_text" matching "((bold stuff))"> <Node called "bold_open" matching "(("> <RegexNode called "text" matching "bold stuff"> <Node called "bold_close" matching "))"> You’d typically then use a nodes.NodeVisitor subclass (see below) to walk the tree and do something useful with it. Status - Everything that exists works. Test coverage is good. - I don’t plan on making any backward-incompatible changes to the rule syntax in the future, so you can write grammars with confidence. - It may be slow and use a lot of RAM; I haven’t measured either yet. However, I have yet to begin optimizing in earnest. - Error reporting is now in place. repr methods of expressions, grammars, and nodes are clear and helpful as well. The Grammar ones are even round-trippable! - The grammar extensibility story is underdeveloped at the moment. You should be able to extend a grammar by simply concatening more rules onto the existing ones; later rules of the same name should override previous ones. However, this is untested and may not be the final story. - Sphinx docs are coming, but the docstrings are quite useful now. - Note that there may be API changes until we get to 1.0, so be sure to pin to the version you’re using. - Optimizations to make Parsimonious worthy of its name - Tighter RAM use - Better-thought-out grammar extensibility story - Amazing grammar debugging A Little About PEG Parsers PEG parsers don’t draw a distinction between lexing and parsing; everything is done at once. As a result, there is no lookahead limit, as there is with, for instance, Yacc. And, due to both of these properties, PEG grammars are easier to write: they’re basically just a more practical dialect of EBNF. With caching, they take O(grammar size * text length) memory (though I plan to do better), but they run in O(text length) time. More Technically PEGs can describe a superset of LL(k) languages, any deterministic LR(k) language, and many others—including some that aren’t context-free (). They can also deal with what would be ambiguous languages if described in canonical EBNF. They do this by trading the | alternation operator for the / operator, which works the same except that it makes priority explicit: a / b / c first tries matching a. If that fails, it tries b, and, failing that, moves on to c. Thus, ambiguity is resolved by always yielding the first successful recognition. Writing Grammars Grammars are defined by a series of rules. The syntax should be familiar to anyone who uses regexes or reads programming language manuals. An example will serve best: my_grammar = Grammar(r""" styled_text = bold_text / italic_text bold_text = "((" text "))" italic_text = "''" text "''" text = ~"[A-Z 0-9]*"i """) You can wrap a rule across multiple lines if you like; the syntax is very forgiving. Syntax Reference Optimizing Grammars Don’t Repeat Expressions If you need a ~"[a-z0-9]"i at two points. In the future, we may identify repeated subexpressions automatically and factor them up while building the grammar.. Quantifiers Bring your ? and * quantifiers up to the highest level you can. Otherwise, lower-level patterns could succeed but be empty and put a bunch of useless nodes in your tree that didn’t really match anything. Processing Parse Trees A parse tree has a node for each expression matched, even if it matched a zero-length string, like "thing"? might. The NodeVisitor class provides an inversion-of-control framework for walking a tree and returning a new construct (tree, string, or whatever) based on it. For now, have a look at its docstrings for more detail. There’s also a good example in grammar.RuleVisitor. Notice how we take advantage of nodes’ iterability by using tuple unpacks in the formal parameter lists: def visit_or_term(self, or_term, (slash, _, term)): ... For reference, here is the production the above unpacks: or_term = "/" _ term When something goes wrong in your visitor, you get a nice error like this: [normal traceback here...] VisitationException: 'Node' object has no attribute 'foo' Parse tree: <Node called "rules" matching "number = ~"[0-9]+""> <-- *** We were here. *** <Node matching "number = ~"[0-9]+""> <Node called "rule" matching "number = ~"[0-9]+""> <Node matching ""> <Node called "label" matching "number"> <Node matching " "> <Node called "_" matching " "> <Node matching "="> <Node matching " "> <Node called "_" matching " "> <Node called "rhs" matching "~"[0-9]+""> <Node called "term" matching "~"[0-9]+""> <Node called "atom" matching "~"[0-9]+""> <Node called "regex" matching "~"[0-9]+""> <Node matching "~"> <Node called "literal" matching ""[0-9]+""> <Node matching ""> <Node matching ""> <Node called "eol" matching " "> <Node matching ""> The parse tree is tacked onto the exception, and the node whose visitor method raised the error is pointed out. Why No Streaming Tree Processing? Some have asked why we don’t process the tree as we go, SAX-style. There are two main reasons: - It wouldn’t work. With a PEG parser, no parsing decision is final until the whole text is parsed. If we had to change a decision, we’d have to backtrack and redo the SAX-style interpretation as well, which would involve reconstituting part of the AST and quite possibly scuttling whatever you were doing with the streaming output. (Note that some bursty SAX-style processing may be possible in the future if we use cuts.) - It interferes with the ability to derive multiple representations from the AST: for example, turning wiki markup into first HTML and then text. Future Directions Rule Syntax Changes - Maybe support left-recursive rules like PyMeta, if anybody cares. - Ultimately, I’d like to get rid of explicit regexes and break them into more atomic things like character classes. Then we can dynamically compile bits of the grammar into regexes as necessary to boost speed. Optimizations - Make RAM use almost constant by automatically inserting “cuts”, as described in. This would also improve error reporting, as we wouldn’t backtrack out of everything informative before finally failing. - Find all the distinct subexpressions, and unify duplicates for a better cache hit ratio. - Think about having the user (optionally) provide some representative input along with a grammar. We can then profile against it, see which expressions are worth caching, and annotate the grammar. Perhaps there will even be positions at which a given expression is more worth caching. Or we could keep a count of how many times each cache entry has been used and evict the most useless ones as RAM use grows. - We could possibly compile the grammar into VM instructions, like in “A parsing machine for PEGs” by Medeiros. - If the recursion gets too deep in practice, use trampolining to dodge it. Niceties - Pijnu has a raft of tree manipulators. I don’t think I want all of them, but a judicious subset might be nice. Don’t get into mixing formatting with tree manipulation.. PyPy’s parsing lib exposes a sane subset:. Version History - 0.7.0 - Add experimental token-based parsing, via TokenGrammar class, for those operating on pre-lexed streams of tokens. This can, for example, help parse indentation-sensitive languages that use the “off-side rule”, like Python. (Erik Rose) - Common codebase for Python 2 and 3: no more 2to3 translation step (Mattias Urlichs, Lucas Wiman) - Drop Python 3.1 and 3.2 support. - Fix a bug in Grammar.__repr__ which fails to work on Python 3 since the string_escape codec is gone in Python 3. (Lucas Wiman) - Don’t lose parentheses when printing representations of expressions. (Michael Kelly) - Make Grammar an immutable mapping (until we add automatic recompilation). (Michael Kelly) - 0.6.2 - Make grammar compilation 100x faster. Thanks to dmoisset for the initial patch. - 0.6.1 - Fix bug which made the default rule of a grammar invalid when it contained a forward reference. - 0.6 Warning This release makes backward-incompatible changes: - The default_rule arg to Grammar’s constructor has been replaced with a method, some_grammar.default('rule_name'), which returns a new grammar just like the old except with its default rule changed. This is to free up the constructor kwargs for custom rules. - UndefinedLabel is no longer a subclass of VisitationError. This matters only in the unlikely case that you were catching VisitationError exceptions and expecting to thus also catch UndefinedLabel. - Add support for “custom rules” in Grammars. These provide a hook for simple custom parsing hooks spelled as Python lambdas. For heavy-duty needs, you can put in Compound Expressions with LazyReferences as subexpressions, and the Grammar will hook them up for optimal efficiency–no calling __getitem__ on Grammar at parse time. - Allow grammars without a default rule (in cases where there are no string rules), which leads to also allowing empty grammars. Perhaps someone building up grammars dynamically will find that useful. - Add @rule decorator, allowing grammars to be constructed out of notations on NodeVisitor methods. This saves looking back and forth between the visitor and the grammar when there is only one visitor per grammar. - Add parse() and match() convenience methods to NodeVisitor. This makes the common case of parsing a string and applying exactly one visitor to the AST shorter and simpler. - Improve exception message when you forget to declare a visitor method. - Add unwrapped_exceptions attribute to NodeVisitor, letting you name certain exceptions which propagate out of visitors without being wrapped by VisitationError exceptions. - Expose much more of the library in __init__, making your imports shorter. - Drastically simplify reference resolution machinery. (Vladimir Keleshev) - 0.5 Warning This release makes some backward-incompatible changes. See below. - Add alpha-quality error reporting. Now, rather than returning None, parse() and match() raise ParseError if they don’t succeed. This makes more sense, since you’d rarely attempt to parse something and not care if it succeeds. It was too easy before to forget to check for a None result. ParseError gives you a human-readable unicode representation as well as some attributes that let you construct your own custom presentation. - Grammar construction now raises ParseError rather than BadGrammar if it can’t parse your rules. - parse() now takes an optional pos argument, like match(). - Make the _str__() method of UndefinedLabel return the right type. - Support splitting rules across multiple lines, interleaving comments, putting multiple rules on one line (but don’t do that) and all sorts of other horrific behavior. - Tolerate whitespace after opening parens. - Add support for single-quoted literals. - 0.4 - Support Python 3. - Fix import * for parsimonious.expressions. - Rewrite grammar compiler so right-recursive rules can be compiled and parsing no longer fails in some cases with forward rule references. - 0.3 - Support comments, the ! (“not”) operator, and parentheses in grammar definition syntax. - Change the & operator to a prefix operator to conform to the original PEG syntax. The version in Parsing Techniques was infix, and that’s what I used as a reference. However, the unary version is more convenient, as it lets you spell AB & A as simply A &B. - Take the print statements out of the benchmark tests. - Give Node an evaluate-able __repr__. - 0.2 - Support matching of prefixes and other not-to-the-end slices of strings by making match() public and able to initialize a new cache. Add match() callthrough method to Grammar. - Report a BadGrammar exception (rather than crashing) when there are mistakes in a grammar definition. - Simplify grammar compilation internals: get rid of superfluous visitor methods and factor up repetitive ones. Simplify rule grammar as well. - Add NodeVisitor.lift_child convenience method. - Rename VisitationException to VisitationError for consistency with the standard Python exception hierarchy. - Rework repr and str values for grammars and expressions. Now they both look like rule syntax. Grammars are even round-trippable! This fixes a unicode encoding error when printing nodes that had parsed unicode text. - Add tox for testing. Stop advertising Python 2.5 support, which never worked (and won’t unless somebody cares a lot, since it makes Python 3 support harder). - Settle (hopefully) on the term “rule” to mean “the string representation of a production”. Get rid of the vague, mysterious “DSL”. - 0.1 - A rough but useable preview release Thanks to Wiki Loves Monuments Panama for showing their support with a generous gift. - Author: Erik Rose - Keywords: parse,parser,parsing,peg,packrat,grammar,language - License: MIT - Categories - Development Status :: 3 - Alpha - Intended Audience :: Developers - - Topic :: Scientific/Engineering :: Information Analysis - Topic :: Software Development :: Libraries - Topic :: Text Processing :: General - Package Index Owner: erikrose - DOAP record: parsimonious-0.7.0.xml
https://pypi.python.org/pypi/parsimonious
CC-MAIN-2016-40
en
refinedweb
Last chance to post a question for Frank Wierzbicki...the "Ask Frank" question and answer session for Jython Monthly will come to a close this Friday, July 11. If you are interested in posting a question for Frank to answer, please visit the following URL or send email to me at the address below. Thanks to Frank and to those who have already posted questions. -- Josh Juneau juneau001@... On Sat, Jun 21, 2008 at 9:50 AM, Josh Juneau <juneau001@...> wrote: > J email them to! > > -- > Josh Juneau > juneau001@... > > > Since they are all based on the same components, take a look at or Hth, Greg. From: jython-users-bounces@... [mailto:jython-users-bounces@...] On Behalf Of DOUTCH GARETH-GDO003 Sent: Monday, July 07, 2008 6:25 AM To: jython-users@... Subject: [Jython-users] Hide unwanted commons logging trace? Hi all, I am using htmlunit as part of my project and I want to disable the log messages it prints to my command line whenever I load a web page (usually ones with Javascript are the most verbose). The example: from com.gargoylesoftware.htmlunit import * w = WebClient(BrowserVersion.FIREFOX_2) w.getPage('';) Generates the output: 07-Jul-2008 14:18:44 com.gargoylesoftware.htmlunit.javascript.host.Document jsxSet_cookie INFO: Added cookie: testcookie=1 07-Jul-2008 14:18:44 com.gargoylesoftware.htmlunit.javascript.host.Document jsxSet_cookie INFO: Added cookie: testcookie= 07-Jul-2008 14:18:44 com.gargoylesoftware.htmlunit.javascript.host.Document jsxSet_cookie INFO: Added cookie: khcookie=fzwq2gh2pz1eDnO5bRamCTbugf_q4fmi-ww4Hg Exception in declaration() HtmlPage()@5602395 The project uses the commons logging package, and I haven't a clue how to disable the output. Can anybody help? <> Regards, Gareth. I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/jython/mailman/jython-users/?viewmonth=200807&viewday=8
CC-MAIN-2016-40
en
refinedweb
Download it from: Screenshots: For those who never heard of spe: Stani's Python Editor (spe) An attempt for a plausible Python IDE for Blender and abroad. (c) 2003 Installation: Type 'python setup.py install' (For more details see README.txt in distribution.) Requirements: Python22 wxPython-2.4.0.7u Blender 2.27 Contents: Features Some useful information (see below!) General issues Linux issues Features: Sidebar per file *class/method browser (jump to source) *automatic todo list, highlighting the most important ones (jump to source) *automatic alphabetic index of classes and methods (jump to source) *notes Tools: *interactive shell *locals browser (left click to open, right click to run) *seperate session recording *quick access to python files in folder and its subfolders (click to open) *unlimited recent file list (left click to open, right click to run) *automatic todo list of all open files, highlighting the most important ones *automatic alphabetic index of all open files (jump to source) Python related: *sytax-checking *syntax-coloring *auto-indentation *auto-completion *calltips Drag & Drop: drop any amount of files or folders... *on main frame to open them *on shell to run them *on recent files to add them *on browser to add folders General *context help defined everywhere *add your own menus and toolbar buttons *exit & remember:all open files will next time automatically be loaded (handy for Blender sessions) *wxPython gui, so should be cross-platform (Windows,Linux,Mac) *scripts can be executed in different ways:run,run verbose & import *after error jump to line in source code *remember toggle:remembers open files between sessions. last but not least... Blender related: *redraw the Blender screen on idle (no blackout) *Blender object tree browser (cameras,objects,lamps,...) *add your favorite scripts to the menu *100% Blender compatible:can run within Blender, so all these features are available within Blender *** Some usefull information: It is recommended to check out all the context help, to get familiar with the features of spe. Some information which didn't fit there, comes here: *Blender:.) *Psyco:. *Refreshing: Spe has a lot of features like explore tree,index,todo list, and so on... This updated every time the file is saved or every time the refresh command is given. This can be done by pressing F5 on the keyboard, the refresh toolbar button or clicking the View>Refresh menu. *Remember option: This can be activated by checking File>Remember or by pressing the heart toolbar button. It will open automatically the scripts which were open in the last session. Useful for Blender if you have to switch continously between Blender and spe. *Running files: Spe provides many ways to run files: -Run (F9): Use this by default, unless you have specific reasons to use the other ones. It will run in the namespace of the interactive shell. So all the objects and functions of your program become available in the shell and in the locals browser (the tab next to the shell). -Run with profile (Ctrl-P): Same as above but with a profile added. A profile is a report of the program execution which shows which processes or functions are time consuming. So if you want to speed up your code, you can define the priorities based on this report. -Run in seperate namespace (Ctrl-R):. -Run verbose (Alt-R): This is for very simple programs, which do not indent more than once. It will send all source lines, as if they were typed in the interactive shell. It is probably a good learning tool for beginners. -Import (F10): Imports the source file as a module. ->For running files, they don't have to be saved. For importing files, it is recommended to save them first. *Syntax-checking: Every time you save spe does syntax checking. If there is any error, spe will jump to the line in the source code and try to highlight the error. *** General issues: - Undo might sometimes takes big steps back. - editors.txt describes possibly installed editors. It may not work 'out of the box' for many people. If so adapt the file to your system. Linux issues: *unicode: There might be problems when wx lib is compiled with unicode/gtk2, since spe uses wxSTC. One user reported that she recently compiled wx 2.4.0 with it and the scintilla widget didn't work at all. Spe: Python IDE for Blender released Scripting in Blender with Python, and working on the API Moderators: jesterKing, stiv 1 post • Page 1 of 1 1 post • Page 1 of 1 Who is online Users browsing this forum: No registered users and 1 guest
https://www.blender.org/forum/viewtopic.php?p=9652
CC-MAIN-2016-40
en
refinedweb
Generating RFC 822-style Date Strings Fredrik Lundh | June 2003 | Originally posted to online.effbot.org The RSS 2.0 (dead link) specification uses RFC 822-style date strings to store publication dates and build dates. Here’s a snippet from my publishing tool that takes an “yyyymmdd” or “yyyymmddhhmmss” string, and generates an RSS-compatible string. It uses Python’s calendar module to calculate weekdays, and to get day and month names. Tweak as necessary: def formatpubdate(date): # convert a yyyymmddhhmmss (UTC) string to RSS pubDate format from calendar import weekday, month_abbr, day_abbr year, month, day = date[:4], date[4:6], date[6:8] hour, minute, second = date[8:10], date[10:12], date[12:14] if not hour: hour = "12" if not minute: minute = "00" if not second: second = "00" wday = weekday(int(year), int(month), int(day)) return "%s, %s %s %s %s:%s:%s GMT" % ( day_abbr[wday], day, month_abbr[int(month)], year, hour, minute, second )
http://effbot.org/zone/generating-rfc822-dates.htm
CC-MAIN-2016-40
en
refinedweb
Alexandru Popescu ? wrote: > ... >). > ... Backwards compatibility was a goal for JSR-283. Making XPath optional breaks that in a big way, and I personally hope that public review will show that people do not like that. So, instead of opening pandora's box even wider, let's try to close it :-). Because, *if* backwards compatibility stops being a concern, I have *lots* of things I'd like to get rid of, such as same-name siblings, certain naming restrictions, addressing, namespace remapping... Best regards, Julian
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200707.mbox/%3C46A0FD4F.1080503@gmx.de%3E
CC-MAIN-2016-40
en
refinedweb
If I define the type in C# as a public class instead of as a struct, I still get a compile error on the same line as before in VB6. This time the error message is: “Only user-defined types defined in public object modules can be coerced to or from a variant or passed to late-bound functions”. Cool. Not. user defined types in VB are not something related to COM, but are a VB thing. THe error clearly states that the user defined type is stored inside a variant, it is not used as a com object.
https://blogs.msdn.microsoft.com/jblizzard/2003/08/04/com-interop-c-type-consumed-by-vb6-program-part-2-still-searching/
CC-MAIN-2016-40
en
refinedweb
>>.'" So? (Score:2, Insightful) Re:Perfect example (Score:4, Insightful) There's no IP. There is copyright, patents and trademarks. This sounds like a trademark thing, so no need to confuse the issue. Re:Non-issue (Score:3, Insightful) Re:So? (Score:5, Insightful) Some things are ethically questionable even when there is no legal problem involved. A concept often forgotten in the corporate world. They should plan better (Score:2, Insightful) Google simply does not care. (Score:2, Insightful) Re:They should plan better (Score:5, Insightful) As someone stated before, this is not a legal issue. It's just about basic politeness. They should change it... (Score:3, Insightful) Re:Go! (Score:3, Insightful) I don't know if there's a Poet Laureate position for Slashdot, but either way I nominate this guy. Brilliant!. Re:I said it yesterday, but... (Score:3, Insightful) Re:Hmmm... :Slashdot needs a voting mecahnism for this (Score:2, Insightful) A poll would be interesting. Personally, I think that "Go and "Go! are two different names, so there is no problem. Unless you get excited about the first one... Re:Go! (Score:3, Insightful) Re:Go! (Score:1, Insightful) That little light on your dashboard? That's your "broken sarcasm detector" indicator light. You should get that checked out. Re:Go! (Score:1, Insightful) It's worse than that. You'd think Google would have a comprehensive understanding of the value of picking a term that would make web searches easier. "Go" is rather a common word. There's the game, the other programming language, and it's everyday uses. Talk about namespace collision! Maybe they should have named it "GoTwo"? :-) Re:Hmmm... :Tingo? (Score:1, Insightful) I have recommended gingo (gingo is not go). Re:How come they didnt google "Go" lol (Score:3, Insightful) Because Googling for "go" gets you 2,950,000,000 hits. Yes, that's billions. And yet they didn't see that choosing such a common word for a language name was a bad idea. Ah, how the mighty goof up. Re:Go! (Score:3, Insightful) So what? (Score:3, Insightful) "From what I've read, Go! was pretty much unknown to anyone outside a very small group 2 years ago." From what I've read, Go was pretty much unknown outside of Google until about a week:So? (Score:3, Insightful) "Like reusing the name of an obscure project that seemingly died years ago and nobody here has even heard of?" Right. If Slashdotters haven't heard of it, there's no ethical issue. Re thinking about offering it. I saw mention on a tv special about google over a year ago that they were working on a language with short compile times. So unless you have something better than nu uh to reply with save the text. I won't be feeding the trolls.
https://developers.slashdot.org/story/09/11/12/1256234/Google-Under-Fire-For-Calling-Their-Language-Go/insightful-comments
CC-MAIN-2016-40
en
refinedweb
). Activity - All - Work Log - History - Activity - Transitions Is this not a problem whenever the target portlet method loads resources via the classloader – config files, images, etc.? I think it's important for code running in the target portlet (edit: or container) to be able to rely on the context classloader being an appropriate one to use to access the portlet's resources, but it seems you're saying that this is not a safe assumption in Pluto. I'm not saying Pluto shouldn't switch to SLF4J – I have no opinion on that. I do think there is a deeper issue here than logging, however. Also, I don't follow what makes a portlet container different from any other servlet container in this regard. For instance, why isn't your issue a problem for Tomcat? Hi John, The problem with commons-logging is different from the typical classloader usage and handling within portlets and portletcontainer and I'll try to explain. Portlets loading resources (including classes) typically do so through their own (webapp) classloader, nothing extraordinary here or different from plain web applications. So you are correct, and you can rely on the context classloader to access the portlet resources. Pluto (or better: the webcontainer) handling is save to be used for that. When a portlet invokes a portletcontainer method however it most likely will mean a "cross-context" invocation because typically (depending on your embedding portal setup) the portletcontainer code will reside in another webapplication (the portal). If that happens, its the responsibility of the portletcontainer to determine the right classloader to use (either from the portlet application or the embedding portal). A good example of this is the PortletEvent payload handling. When a portlet sets a new PortletEvent using a complex payload, the (Pluto) portletcontainer will unmarshall that payload using jaxb for which jaxb will be told to use a different classloader (the one from the portlet application in this case). This kind of cross-context/multiple classloader situations are known and recognized and explicit handling is in place to deal with them. For logging configuration however, things are a bit different. First of all, logging is usually configured using a static initializer, e.g. private static final Logger log = LogFactory.getLogger(<classname>). Such static initializers are "executed" as soon as a class is accessed/loaded, so on demand, by the loading classloader (typically the classloader of the class referencing the to be loaded resource/class) If the portal application would, during startup, preload every possible class and resource from its own webapplication, all things would be fine as then you would be guaranteed the expected classloaded to be used. However, that's unpractical, undesirable and not doable in practice. An alternative to "fix" this commons-logging static initialization could have been wrapping it and temporarily setting the CurrentClassLoader to that of the current class, somewhat similar to how we deal with the PortletEvent payload unmarshalling over jaxb for instance, but then the other way around. But that would just be a "workaround" for a wrong usage/pattern with respect how log configuration is intended to be used. The static/compile time binding as applied by slf4j is much more "natural" and doing exactly as what you expect to happen in this case, and allows us to use logging configuration for the container (and portal) classes just as for any other class and application. All of this is not so much a problem of using a portletcontainer, but of using cross-context webapplication interactions in a webserver as used/required for portals in general. Tomcat is no different in this respect than any other servlet container and I actually "hit" this problem while testing against Tomcat. However, Tomcat in general is "easier" to use than for instance JBoss or Websphere as those webservers by default use a PARENT_FIRST webapplication classloader scheme, contrary to the advised (and IMO required) recommendation of the servlet specification itself (see last paragraph of section SRV.9.5 of Servlet Spec 2.4). As a consequence, when deploying a portal (like Pluto or Jetspeed) and your own portlet applications on JBoss or Websphere you always have to make sure to override this default to use a PARENT_LAST (or CHILD_FIRST) classloader scheme to ensure the expected behavior (at least, from a portlet/portal POV). Hi Ate, I've been following this issue since it popped up on the Commons Dev mailing list. Would you mind explaining in more detail what problems you are experiencing using Commons Logging, due to the differences in class loading described above. Is it the selection/configuration of which logging implementation (Log4J, Java Util Logging etc.) to use that is the problem? Or is it something else? Hi Dennis, I wasn't aware of the discussion on the commons-dev list but I've just subscribed and responded there. As hopefully will be clear from my explanation (there) it has nothing to do with the actual logging implementation choice but only the way CL uses the current ContextClassLoader for selecting it. For anyone else interested (and it is an interesting and already long thread), here is a link on Nabble: Migration to slf4j has been completed. What I just noticed from reviewing the commit message is that in this commit another change accidentally was also merged in which I intended to do separately. This concerns two things: Testing pluto/jetspeed on Websphere showed that the stax-api-1.0.1 is invalidly packaged as it incorrectly also contains the javax.xml.namespace.QName class causing jaxb to break on Websphere 6.1 The stax-api-1.0-2.jar is clean and AFAIK otherwise the same (coming from SUN while the stax-api-1.0.1 comes from codehaus)
https://issues.apache.org/jira/browse/PLUTO-553
CC-MAIN-2016-40
en
refinedweb
Opened 7 years ago Closed 4 years ago Last modified 4 years ago #6845 closed defect (wontfix) RequestDone not allowed in INagivationContributor Description Testing on Trac 0.11.7, it appears that redirect() in an INagivationContributor will cause problems because the RequestDone exception is not handled and will result in a 500 error. It seems like this plugin would make more sense as a IRequestFilter. from trac.core import * from trac.web import IRequestFilter class AuthRequired(Component): """AuthRequiredPlugin Require anonymous users to authenticate using the form based login. This has been greatly simplified from the original implementation thanks to a hint from coderanger. """ implements(IRequestFilter) def pre_process_request(self, req, handler): if ((req.authname and req.authname != 'anonymous') or \ req.path_info.startswith('/login') or \ req.path_info.startswith('/reset_password') or \ req.path_info.startswith('/register') or \ return handler self.log.debug('Redirecting anonymous request to /login') #req.redirect(req.href.login()) # Testing new redirect syntax. Thanks to jfernandez@ist.psu.edu req.redirect(req.href.login(), {'referer':req.abs_href(req.path_info)}) return handler def post_process_request(req, template, data, content_type): return Attachments (0) Change History (2) comment:1 Changed 4 years ago by rjollos - Resolution set to wontfix - Status changed from new to closed comment:2 Changed 4 years ago by rjollos Note: See TracTickets for help on using tickets. Plugin is deprecated. See the PermRedirectPlugin.
https://trac-hacks.org/ticket/6845
CC-MAIN-2016-40
en
refinedweb
This action might not be possible to undo. Are you sure you want to continue? \ Part I: Using OpenLDAP on Debian Woody to serve Linux and Samba Users Note on Debian Sarge: Please check out Part II: OpenLDAP on Debian Pre-Sarge below for some information on LDAP with a pre-release version of Debian 3.1, aka "Debian Sarge". Table of Contents For Debian 3.0 aka Debian Woody: Introduction PDF version, Document History, Security Advisory, Licensing, Disclaimer, Original Author, Hosted By External Resources What Is LDAP? Install OpenLDAP Configure OpenLDAP Database Population Start OpenLDAP NSS: Name Service Switch NSS: Introduction NSS: Installation PAM: Pluggable Authentication Module PAM: Introduction PAM: Clients vs. Server Configuration PAM: Installation PAM: Passwords: Facts PAM: Passwords: How To Change Them PAM: The user "root" and other system UIDs Host Specific Access Introduction Approach 1: pam_check_host_attr Approach 2: Filters SSL Encryption Activate SSL Encryption for the Clients’ Queries OpenLDAP Startup Script So Far So Good, Part 1 Migrate Your Linux Users Migrate Linux: Prerequisites Migrate Linux: The Scripts Samba 2.2.x and LDAP Samba: Introduction Samba: Installation and Setup Samba: Test your Setup Samba: Add (Windows) Users Samba: Migration Summary Samba: Join Windows-Workstations to our Samba-PDC Domain Samba: Join Samba-Workstations to our Samba-PDC Domain Samba: Miscellaneous So Far So Good, Part 2 ToDo: LDAP-Client-Interfaces Directory Administrator GQ ToDo: phpLDAPadmin ToDo: Miscellaneous Miscellaneous For Debian 3.1 aka Debian Sarge pre-release version: Part II: Using OpenLDAP on Debian Sarge to serve Linux Users User Comments: User Comments Introduction LDAP is one hell of a tool: it can be used to store any kind of information, starting with your network’s users (which is what we’ll do) and not even ending with your favorite cooking recipes. As LDAP is one hell of a tool, it is all a pain in the you-know-what-I-mean to get to know it and to get it up and running. I spent lots of time with basics just to understand it. One problem for me was, that I didn’t find any good documentation on this topic for a long time. Anyway, here is first of all a small list of IMHO good documentation on this topic as well as my stuff to get it working: OpenLDAP (the software written to host the database and do some other stuff) implements one part of the whole LDAP-specification, AFAICT. We’ll use it to do the major work: host the database. This "LDAP-server" (ldap.subnet.at) will serve to Linux and Windows workstations hosting the local users and corresponding information. Later on, it shall also serve the upcoming new Linux-based mailserver. As Debian GNU/Linux is our distribution of choice, I’ll focus on the description for Debian Woody. Nevertheless, lot’s of stuff is generic material and you should be able to use it on other distributions too. I’d like to thank all people I know and those I don’t know which made this LDAP solution possible - just to mention a few groups: Debian, OpenLDAP, Samba, #ldap (irc.debian.org), the authors of all these Howto’s and other documentations, package maintainers, etc. etc. etc. Thanks! This document was created during my work as network admin at subnet - platform for media art and experimental technologies. This document’s home is. PDF version of this document As Postscript- or PDF-versions of this document have been requested several times: I created this file’s HTML/PHP code directly using vim -- which makes it a bit harder to create a proper PDF document that’s up to date. (If only I knew that this document became this large -- I’d really spent the time to learn DocBook first, or had used LyX -- or whatever.) Still, Andreas Heinzen pointed out to me, how to easily create a PDF version of this document (the latest version is as of June 11, 2005). Many thanks again to Andreas for his work and feedback on this! (Just to let you know - in case you want to do this yourself: Use html2ps and ps2pdf to create the document. Beforehand, the feedback-form and the counter should be removed from the source code.) Document History 05-09-18: Added comment for correct handling of command-line parameter in tcsh shell. 05-06-11: Added notes on work with Sarge-Pre-Release packages in Part II of this document. Updated the PDF version accordingly. 05-05-08: Uploaded Andreas Heinzen’s PDF version of this document. 04-12-22: Changed the style-sheet to reflect a more common design. Mentioned that Samba packages version 2.2.3a-13 are vulnerable. 04-11-05: Added the section "NSCD and /etc/libnss-ldap.conf" describing some possible security-improvements. 04-10-22: Added a comment about the format of ldap.secret. Use dpkg-buildpackage to compile the Deb-packages (instead of debian/rules directly). 04-09-19: Added a link to Gerald Carter’s LDAP book to the resources section as well as some links in the miscellaneous section. 04-09-02: Eventually, re-compiled and added security-fixed Samba-packages based on version 2.2.3a-13. 04-07-15: Added and updated some links to LDAP client programs and user-interfaces. 04-03-25: Mention Samba security update (see DSA-463). 04-02-18: Dual-licensed the document under GPL and GFDL. 04-02-09: I’m considering to dual-license the document, adding the GPL as another choice (as Debian considers the GFDL to be a non-free license). Mind: According to the "draft" of the Debian Documentation Policy () the GFDL as is applied here should be considered "free" even for Debian. 03-10-10: Added section "What Is LDAP?". 03-08-29: Added two links to the resources section. 03-08-18: Releasing a security advisory. According changes to the available packages and their descriptions. Minor changes to clear things up. 03-08-13: Add information on how to have Samba automatically create a machine account. (Change script "create-machine-account.sh" for this too.) 03-08-12: Minor changes to clarify things up. Release the document under the GNU Free Documentation License. Add information on the MigrationTools. Add chapter "Miscellaneous". Further minor changes. 03-08-11: "Version 1.0"! Finally, the first official release with only minor ToDo’s left. Security Advisory I wouldn’t have thought it to be necessary with a HOWTO, but it is: This section is for security issues coming up. 03-08-18 Overview: The self-compiled Samba packages (DSA-280-1 samba -- buffer overflow) as well as the self-compiled LDAP packages (DSA-227-1 openldap2 -- buffer overflows and other bugs) used in previous versions of this HOWTO unfortunately are based on vulnerable versions of those packages. If you’ve simply downloaded and used the packages from this site, you are strongly encouraged to either recompile them yourself or use the new upgraded packages provided here. Description: For some reason I did not include "deb-src woody/updates main contrib non-free" in my /etc/apt/sources.list file when initially downloading and compiling the source packages, this means I used Woody’s original packages which meanwhile turned out to be vulnerable here and there.) Mind: As to my knowledge, packages can now be considered "secure" currently. Nevertheless, Today’s security advisory does not mean I necessarily put possibly needed packages up here in the future as well. Don’t rely on this howto, keep track of security issues yourself! 04-03-25 There is a local root exploit in Samba 2.2.3a-12 that is fixed in Woody’s 2.2.3a-13 packages (check out DSA-463). 04-09-02: Recompiled packages based on the fixed version 2.2.3a-13 are provided below now. 04-12-22 Samba 2.2.3a-13 is vulnerable, see DSA-600. I removed the compiled packages, please follow the instructions below to build them yourself. Licensing This document is published under the licenses GPL and GFDL (see notes below for details). You may choose, which license to apply. GNU Free Documentation. GNU General Public License (GPL) This document may be. Norbert Klasen wrote his master thesis "Directory Services for Linux in comparison with Novell NDS and Microsoft Active Directory" about LDAP and similar setups.platform for media art and experimental technologies External Resources If you want to get to know what LDAP is and how its data is organized. Use this at your own risk! Original Author Markus Amersdorfer (markus. While I haven’t read it: Gerald Carter’s LDAP System Administration (ISBN: 1-56592-491-6).x and Implementing Disconnected Authentication and .Part I (Basics). Exim)! Getting LDAP up and running: Torsten Landschoff’s Using LDAP for name resolution is a compressed article that describes LDAP on Debian for NIS-like users (using Debian’s default LDAP packages without recompilations). Learning about LDAP: A really great and short introduction to LDAP by A.at) Hosted by subnet . nor does it necessarily hold correct information..on Debian made simple! by "mawi" does exactly what it says it does :). Exploring LDAP -. Some good articles are over at www. Lot’s of good ideas and fine-tuning. It also links to the author’s more complete 3-part series: Exploring LDAP -. including how to design an LDAP-Tree. 2003.Permission is granted to copy and distribute translations of this document into another language. Here’s a great step-by-step guidance on how to get LDAP up and running on Debian Woody. Postfix. under the above conditions for modified versions. please check out the docs in the following subsection "Learning about LDAP".x is Using OpenLDAP for Authentication.) A step-by-step guidance for Mandrake 9.. Samba & LDAP . Further articles ’round Mandrake and LDAP: Implementing a Samba LDAP Primary Domain Controller Setup on Mandrake 9. Disclaimer This document comes without any warranty and does not claim to be complete. The author(s) can not be held reliable for any loss of data or corrupted hardware or any other miscomfort due to information of this document. Frisch. Great! (Thanks to "mawi".amersdorfer <at> subnet.org. comment from Aug 26.Part III (Advanced Topics). except that this permission notice may be included in translations approved by the Free Software Foundation instead of in the original English.Part II (Directories) and LDAP -. Lots of basic as well as detailed information there. The sample chapter is about "Email and LDAP" and covers both configuration of MUAs and MTAs (Sendmail.ldapman. which is work in progress currently. Also see LDAP-Client-Interfaces and Miscellaneous below. author of the third Mandrake-LDAP-HOWTO mentioned here. user data. pointed me to a different server for his document.) Building a LAMP Server w/ LDAP Authentication. see section External Resources.a GINA for Windows 2000/XP. What Is LDAP? I will not go into details.2) PDC LDAP v. Lots of my description (above all to get this "LDAP-thing" do something close to what I wanted) is based on this doc and thus my howto cuts off some details which can be found at aphroland. passwords. OpenLDAP uses the Sleepycat Berkeley DB.c=country" for the LDAP tree. pgina. (Thanks to "unkown". comment from Aug 14.x is dealt with in the according Samba (v. Samba 3.org.com/ .3) PDC LDAP howto. There are resources out there which can and do explain this. www. I’d like to cite from the article Building an LDAP Server on Linux.net" seems to be down currently (03-08-11) as the server is moved (AFAIK).3 howto.org.xpasystems. I’d like to notice one difference in advance: while aphroland’s description uses a base structure like "o=domain. The homepage of Christof Meerwald with some notes on LDAP and the patched Debian package libpam-ldap.) Here’s a SAMBA (v 2. I’m OK with calling the whole works a database and being done with it. "mandrakesecure." Install OpenLDAP First thing to do is to install the OpenLDAP-server. Part 1 by Carla Schroder as she points out some IMHO crucial thing concerning the LDAP-world: "Let’s get like all pedantic for a moment (please put on your geek beard and pocket protector for this). RFC 2254 describes "The String Representation of LDAP Search Filters" and concludes with a few examples on how to use the filters. it might be slightly out-of-date. This doc helped me a lot by describing the installation from a Debian user’s point of view. I’d like to thank the author of the LDAP HOWTO over at howto. Nevertheless. It accesses a special kind of database that is optimized for fast reads.dc=country". but you should be able find these document in Google’s cache. customer data. Tools and stuff: The LDAP Schema Viewer (in case this one’s down.de. I’ll use the more common "dc=domain. I’m not the pedant police. not a database.PDC/BDC Relationships Using Samba and OpenLDAP. what LDAP really is and how to best design an LDAP tree . this actually depends on your taste (among other things) .at least not for now. Miscellaneous: The IRC channel "#ldap" on irc. Use it for relatively static information.debian. Having said all that. and security keys. 2003.aphroland.ldapguru. Nevertheless. such as company directories. (According to him.de a lot. Buchan Milne. you may also try this link). LDAP--Lightweight Directory Access Protocol--is a protocol. and is just kind of a naming-convention.deb libldap2_2. While this works. With LDAP it is possible to reproduce the available database to other servers on the network too.deb-packages in ~/slapd_woody-source/.d/slapd stop .0.0.23-6_i386. (If for some reason the file debian/rules should not be executable. We want our server (as well as the clients later on) to support SSL.0. (As described at aphroland. which you’d use "slurpd" for (which we won’t do). the somewhat more official way seems to me to be using dpkg-buildpackage instead. which you should install blindly accepting the default-values presented by Debconf.23-6_i386.de.deb libldap2-dev_2. (BTW: Run date -R to get the correct date string for the changelog. run chmod +x debian/rules.deb ldap-utils_2.0.) cd ~/slapd_woody-source/ dpkg -i slapd_2.23 vi debian/rules --> and replace --without-tls with --with-tls [ vi debian/changelog ] Compile the packages: dpkg-buildpackage -b -us -uc FYI: "slapd" is the part of OpenLDAP which "is the server". Mind: The packages provided on this page have an edited debian/changelog as well to hold information about what was changed: Additionally to mentioning the addition of SSL support here. we’ll wipe out the default-stuff and start from scratch on ourselves. so we’ll have to recompile and install our own Debian packages: Get the source: cd ~ mkdir slapd_woody-source cd slapd_woody-source apt-get source slapd apt-get build-dep slapd apt-get install libssl-dev Activate SSL: cd openldap2-2.0.deb [ Get subnet’s self-compiled slapd packages ] /etc/init.23-6_i386.23-6_i386. the package names are changed to contain the suffix "subnet".) Note: In previous versions of this document I stated to run ./debian/rules binary to compile the Deb packages.) Invoking dpkg-buildpackage -b -us -uc creates the . .In order to prevent the packages to be replaced by the ones from the Debian-repository.slapd /var/lib/ldap chmod 750 /var/lib/ldap rm /var/lib/ldap/* chown -R slapd. /etc/ldap/slapd. find /etc/ldap -type d -exec chmod 770 {} \.) But be aware to keep track of possible security-updates for these packages on your own from now on! (Upgrading to the possibly new packages then should be easily possible by running "dpkg -i .args {CRYPT} /var/lib/ldap/replog 256 ldbm suffix "dc=subnet.schema /etc/ldap/schema/nis..conf: ######################### /etc/ldap/slapd.schema on /home_local/slapd/slapd.conf slapd. set them to HOLD.at/~max/ldap/ # # Basic slapd. Here’s the first basic version of our main configuration file.schema /etc/ldap/schema/misc. chown -R slapd.dc=at" # use "/usr/sbin/slappasswd -h {CRYPT}" to create a rootpw-string below . (Use dselect or a command like "echo "slapd hold" | dpkg --set-selections" for this.) Configure OpenLDAP Wiping out Debian’s default configuration and setting up our own one works as follows: adduser slapd chown -R slapd.schema /etc/ldap/schema/inetorgperson.slapd /var/spool/slurpd rm /var/spool/slurpd/* cd /etc/ldap/ mv slapd.schema /etc/ldap/schema/cosine.subnet.conf include include include include include schemacheck pidfile argsfile password-hash replogfile loglevel database /etc/ldap/schema/core. Make sure to have backups of your configuration before as well as to set the packages to HOLD afterwards again.conf_DEB-orig This way.slapd /etc/ldap chmod 770 /etc/ldap find /etc/ldap -type f -exec chmod 440 {} \." again.pid /home_local/slapd/slapd. the user "slapd" (which we’ll use to run the LDAP-server later-on) is the only one who can read the LDAP configuration as well as the database.conf ######################### #. # (note: if you use the tcsh shell. besides editing the rootpw-line in your slapd. run some more file-system stuff: # chown slapd. we need to populate it.dc=at objectClass: organization o: subnet dn: cn=manager.slapd slapd.dc=at objectClass: organizationalRole objectClass: simpleSecurityObject cn: admin description: LDAP administrator userPassword: {CRYPT}xxxxxxxxxx dn: cn=nss. you should already know that this file is in "LDIF"-format: dn: dc=subnet.conf # chmod 440 slapd. you will have to use single quotes # to surround the {CRYPT}.conf_DEB-orig Database Population As our database is currently less than empty.: /usr/sbin/slappasswd -h ’{CRYPT}’) rootpw {CRYPT}xxxxxxxxxx directory "/var/lib/ldap" index objectClass eq lastmod on access to attribute=userPassword by dn="cn=manager. i.dc=at" write by dn="cn=nss.dc=at" write by anonymous auth by * none access to * by dn="cn=manager. dc=subnet.dc=subnet. use a file like the following which holds the basic data to be added.conf. As you’ve already checked out some general documents on LDAP (haven’t you?).e.conf # ll total 12 drwxrwx---r--r-----r--r----- 2 slapd 1 slapd 1 slapd slapd slapd slapd 4096 Jun 864 Jun 1928 Jun 3 14:38 schema 3 14:41 slapd.) Next. dc=subnet.dc=subnet.conf 3 14:38 slapd. To be able to test the setup. (See below for details.dc=at" read by * auth ####################################################################### Differences to aphroland’s description include using {CRYPT}-hashes instead of {MD5}-ones as well as starting the server as root and have it drop his privileges in order to become the user "slapd" as soon as it has bound to the ports 389 and 636.dc=at objectClass: organizationalRole objectClass: simpleSecurityObject .dc=subnet. To work around this.dc=at objectClass: posixGroup objectClass: top cn: maxldap gidNumber: 12345 Don’t forget to run the "/usr/sbin/slappasswd -h {CRYPT}"-command to create password-hashes for the users with {CRYPT}-entries listed in the .email.ldif-file.dc=subnet.dc=at objectClass: inetOrgPerson objectClass: posixAccount objectClass: top objectClass: shadowAccount objectClass: organizationalPerson objectClass: inetLocalMailRecipient uid: maxldap cn: Markus LDAP Test User Amersdorfer sn: Amersdorfer givenname: Markus LDAP Test User title: Admin departmentNumber: IT mobile: 012-345-6789 postalAddress: AddressLine1$AddressLine2$AddressLine3 telephoneNumber: 1234-567890 facsimileTelephoneNumber: 012-345-6789 userpassword: {CRYPT}xxxxxxxxxx labeleduri:. if you use the tcsh shell.subnet. dc=subnet. run the following command instead: "/usr/sbin/slappasswd -h ’{CRYPT}’".account@mail. dc=subnet.email.com mailRoutingAddress: my.com loginShell: /bin/bash uidNumber: 12345 gidNumber: 12345 homeDirectory: /home_local/maxldap/ gecos: maxldap_gecos-field description: Not Available localityName: Bellevue dn: cn=maxldap. Also see this OpenLDAP mailing-list article on this issue.dc=at objectClass: organizationalUnit ou: People dn: ou=Group. this might produce an error stating something like "Password generation failed for scheme CRYPT: scheme not recognized". .com mail: my. surround the parameter with single quotes.e.address@example.cn: nss description: LDAP NSS user for user-lookups userPassword: {CRYPT}xxxxxxxxxx dn: ou=People.ou=Group.address@example.email.alternate. i.server.at/~max/ mail: my. (Again. ou=People.example.dc=subnet.dc=at objectclass: top objectclass: organizationalUnit ou: Group dn: uid=maxldap. I’ll describe the latter scenario.0." or the other way round).ou=.. Start OpenLDAP You can now .slapd $ /usr/sbin/slapadd -l /etc/ldap/basics-subnet.0/ -d 255 This starts the OpenLDAP server "slapd" initially as root. While it basically boils down to a matter of taste on the one hand (whether you prefer "uid=maxldap.g.x and LDAP below) use this pattern.dc=at"). Nevertheless. The special user cn=nss: With our current ACLs. or to use this cn=nss user so that NSS can lookup the users. . You can and should add the data above to the (currently not running) OpenLDAP database by executing: # su . Smith for pointing this out!) The normal user uid=maxldap: Mind the naming pattern used here for the normal user "maxldap": its distinguished name "dn:" (which is unique within the global LDAP namespace) is constructed by using the user’s "uid=maxldap" attribute (which equals to the Linux user’s login name.dc=at".2.at" ("dc=subnet. It depends on your situation to either set read-rights for everyone to ou=People. The simple reason is that both the MigrationTools (see section Migrate Your Linux Users below) and Samba (see section Samba 2. we can deal with the client-side now.0. the corresponding Linux user’s UID can be found as LDAP’s attribute "uidNumber") prefixing the tree "ou=People.ou=People.. This can be useful for debugging processes. You’ll save yourself a lot of time if you stick with "uid=.dc=.start the OpenLDAP server (without having it disappear into daemon-mode): # /usr/sbin/slapd -u slapd -h ldap://0. to be able to become a user (e. at least to the Name Switch Service (NSS) (see section NSS: Name Service Switch below).) Having the OpenLDAP server up and running..ldif Using "slapcat" you get the database’s current contents without having to perform "ldapsearch" or similar.Many thanks to Martin B... nobody except cn=manager and cn=nss can perform standard read functionality on our LDAP tree.dc=subnet. This places the user in the organizational unit "ou=People" of "subnet. (Hint: Browse through this stuff to get a feeling for OpenLDAPs debugging information and error messages. the tree must be readable." over "cn=Markus Amersdorfer.. binds to the corresponding port (TCP/389) on all local interfaces..ou=People. using "su user") or to get information about the user ("finger user"). drops root privileges by becoming the user "slapd" and presents you with ’live’ debugging information on your screen.being root .dc=". on the other hand it’s definitely better to use "uid=" here. Some sites use the users’ common names ("cn:") instead of the uid’s to differentiate between single LDAP entries (users).. accomplish some other (stackable and thus highly configurable) tasks and finally decide for example whether the user may login or not). we again need to compile the packages ourselves to support SSL.e. This description goes for both your network-clients as well as the LDAP-server itself (as we want the server’s Linux-system too to know the users and other information stored using OpenLDAP)! NSS: Installation (with SSL capable packages) In order to have any traffic between the clients and server be encrypted. see below) is used to accomplish a user’s authentication (i. But be aware to keep track of possible security-updates for these packages on your own from now on! (Upgrading to the possibly new packages then should be easily possible by running "dpkg -i . While the PAM system (Pluggable Authentication Module.) cd ~ mkdir libnss-ldap_woody-source cd libnss-ldap_woody-source apt-get source libnss-ldap cd libnss-ldap-186 vi debian/rules --> and replace --disable-ssl with --enable-ssl [ vi debian/changelog ] dpkg-buildpackage -b -us -uc dpkg -i libnss-ldap_186-1_i386.. The first task now is to set up the NSS correctly to query the OpenLDAP server additionally to the local passwd-files (and/or the already used NIS). files. Make sure to have backups of your configuration before as well as to set the packages to HOLD afterwards again. listing. To get your local machine’s or network’s listing.conf /etc/libnss-ldap. Nowadays." again.o.) . checking if provided login and password are correct.deb [ Get subnet’s self-compiled libnss-ldap package ] echo "libnss-ldap hold" | dpkg --set-selections mv /etc/libnss-ldap. accessing the users-database is not just looking up the passwd/shadow/a.s.s.conf_DEB-orig The final two commands install the new libnss-ldap-package and set it to HOLD status.NSS: Name Service Switch NSS: Introduction On Linux (and some other UNIX-flavours).o. most applications use library calls to get user information or accomplish user authentication. the Name Service Switch is a service which provides you with a user/group/a. This is done by installing the package "libnss-ldap" and configuring the nss-processes to use it. just run "getent passwd".. (The actual configuration of encrypted communication can be found later in the document. conf: passwd: group: shadow: ldap compat ldap compat ldap compat This way. In order to be able to browse through the capabilites later (and perhaps activate some of them).subnet.dc=at ####################################################################### The bindpw-entry is the password for the NSS-user (cn=nss. Once the package is installed.dc=subnet. do not use the {CRYPT}-hash.subnet. See section PAM: The user "root" and other system UIDs below for details.dc=at) you created above when populating the LDAP database.dc=subnet. lookups for passwd.conf does not specify all of the module’s options.conf file to configure the new functionality correctly: ######################### /etc/libnss-ldap.conf-file.at/ ldap_version 3 binddn cn=nss.dc=subnet. The password has to be stated as plaintext here. we made a backup of Debian’s original and (throughout the file itself) well-documented libnss-ldap. This feature is used in the setup described here to have the user "root" both be served from LDAP and . group and shadow try LDAP first ("ldap") and NIS and the local files next ("compat"). include the LDAP NSS module in the system lookups by editing /etc/nsswitch.at No mail.subnet.) It should be possible to lookup the LDAP-user "maxldap" using finger or getent now: # finger maxldap Login: maxldap Name: maxldap_gecos-field Directory: /home_local/maxldap/ Shell: /bin/bash Last login Mon Jun 2 16:53 (CEST) on pts/1 from some-client. use the following /etc/libnss-ldap.at base ou=People.at/~max/ldap/ host ldap.subnet.dc=at uri ldap://ldap. Now.dc=at bindpw the_one_you_set_above_in_the_ldif-file__as-plaintext nss_base_passwd ou=People.dc=subnet.have it stored locally.conf ######################## #. .Mind: The manual page for libnss-ldap.as a fallback in case of LDAP wasn’t reachable . it will also show up twice in the output.dc=subnet. (If a user is listed both locally and in LDAP. No Plan.dc=at nss_base_group ou=Group. compiled ourselves to support SSL. it also might lead to the situation where it might take some time for an update of user-data to reach all clients.conf file holds some clear-text information necessary to be able to perform NSS-lookups. While this makes NSS-lookups faster.this file has to be world-readable. Server Configuration As with NSS. installing "nscd" might definitely be a good idea from the security point of view: The above mentioned /etc/libnss-ldap. it’s the same process here for the package "libpam-ldap" as it was with libnss-ldap above: recompilation with SSL enabled and installation of the new package. user lookups are seperated from user authentication on Linux systems. this filter patch has a bug. we want the LDAP-server itself too to be able to use the LDAP-based users for authentication. the same configuration applies to the server as it does to the client machines (= Linux/Unix stations not running the LDAP server but just querying it for user lookups and authentication. at least). which means we’ll need to install a patched version of libpam-ldap. In this setup. NSCD can help you solve this issue: Just install it and set the file-access-rights for /etc/libnss-ldap. Nevertheless. We’ll use this feature to be able to allow users to login to some specific workstations and block access on the network’s other workstations (see below for details). PAM: Pluggable Authentication Module PAM: Introduction As mentioned above. which in turn runs with root-privileges (I guess.resulting in funny situations such as the prompt saying "i have no name!" instead of the actual user’s login-name . the Debian Woody package has a special patch applied to be able to use filters when checking whether a user is allowed to login or not. Thus. the second is usually dealt with using PAM nowadays. basically. and thus can read the credentials from the config-file and perform the corresponding DB-lookup. the corresponding library-request executed with the user’s rights should be handled by NSCD. Basically. this means that everybody knows about the credentials of the "cn=nss" user and can do everything this special user can (which depends on the access-lists of the LDAP server). group and host lookups for running programs and caches the results for the next query".) . (Or is there some "pushing".or any other mechanism that solves this?) Anyway. Though I haven’t tried it yet.conf NSCD is "a daemon which handles passwd. While the first is covered by NSS. But then. This patched version is available from Christof Meerwalds Debian section. PAM: Clients vs. In order to prevent the users from not knowing who they are . Unfortunately.NSCD and /etc/libnss-ldap.conf to "600" (owned by root). no matter which configuration you chose. (This is not totally correct: root can change a user’s password. . but it has to know the user’s old password to able to set a new one.Nevertheless. root can not add users . I’ll describe a setup here where root can change any user’s password. Reasons are that I want root to be able to change the passwords by simply running "passwd $user" (without having to know the user’s old password).org/files/debian woody libpam-ldap Run: cd ~ mkdir libpam-ldap_cmeerw-source cd libpam-ldap_cmeerw-source apt-get update apt-get source libpam-ldap vi debian/rules --> and replace --disable-ssl with --enable-ssl [ vi debian/changelog ] dpkg-buildpackage -b -us -uc cd .conf" using the option "rootbinddn" on the server and "binddn" on all other machines. This is the same behaviour as if the users theirselves would change their passwords. This can be bad (additional "overhead". the server needs the file "/etc/ldap. You can configure a machine so that there is no almighty root anymore concerning the users. but only on the machine running the OpenLDAP server. This doesn’t seem to be easy to administer (especially in the case where the manager’s password changes) on the one hand. If not stated otherwise.list: # Patched libpam-ldap for Woody (.. depending on your needs and wishes.) 2.root can’t even change the users’ passwords. 1.dc=" to be stored on such a machine locally in a file. It’s all about the user "root" and about changing user passwords. this ability includes the need for the password for "cn=manager.dc=. this can/must be done by someone else) . additionally it has to be in plaintext. Furthermore. you have several options here now. The only difference lies in the file "/etc/pam_ldap. PAM: Installation (with SSL capable packages) Add to /etc/apt/sources. necessary change of habits) or good (the system administrator "root" is not responsible/able to change the users’ passwords. it’s the same for both possible setups.depending on your needs. If you do so. Most of the steps following are the same for all machines. You can keep the system-administrator (responsible for a machine’s uptime) seperated from the users administration.secret" which holds the manager-user’s password in plaintext (with access rights "600".org/debian/) deb-src. and it doesn’t seem to be very secure either for obvious reasons on the other hand. Nevertheless. owned by "root"). You can configure a machine to behave as if users were "installed" locally (in passwd|shadow) so that root can change them and their passwords. conf ######################### #. we set its status to HOLD. first.at/~max/ldap/ # # pam_ldap.conf_DEB-orig Again after installing the package.deb [ Get subnet’s self-compiled libpam-ldap package ] echo "libpam-ldap hold" | dpkg --set-selections mv /etc/pam_ldap.conf: ########################## /etc/pam_ldap.. Make sure to have backups of your configuration before as well as to set the packages to HOLD afterwards again.subnet.at/~max/ldap/ # # pam_ldap.dc=subnet.at base dc=subnet. using the patched package and recompiling it with our modifications we’re ready for these things to come later on.dc=at bindpw the_one_you_set_above_in_the_ldif-file__as-plaintext pam_password crypt ####################################################################### On the server (or on all machines where you want root to be able to change the users’ passwords).conf: ########################## /etc/pam_ldap.conf does not specify all of the module’s options." again.conf for the server (where root can change user passwords) host ldap.at/ ldap_version 3 binddn cn=nss. Next.at base dc=subnet..at/ ldap_version 3 rootbinddn cn=manager.conf ######################### #. configure the new PAM module using this /etc/pam_ldap.dc=at . on all the clients (where root is not able to change the users’ passwords).conf for all client machines host ldap.conf /etc/pam_ldap.dc=at uri ldap://ldap. Mind: The manual page for pam_ldap.subnet.dpkg -i libpam-ldap_140-1cmeerw_i386.subnet. use this /etc/pam_ldap.subnet. In order to be able to browse through the capabilites later (and perhaps activate some of them). we made a backup of Debian’s original and (throughout the file itself) well-documented pam_ldap. But be aware to keep track of possible security-updates for these packages on your own from now on! (Upgrading to the possibly new packages then should be easily possible by running "dpkg -i .dc=at uri ldap://ldap.) Though we’ll use a setup without SSL and without host-specific access controls for the moment.conf-file.subnet.dc=subnet. #auth required pam_nologin.so pam_unix. Most PAM-aware applications have their own PAM-stack they use. we need to include it into the PAM process: We’ll check out the modifications to be able to log in using ssh and su. if I’m correct.so password required pam_unix.so pam_ldap.subnet. only root can read the file) which holds the plaintext-password for LDAP’s "cn=manager..secret with access rights "600" and owned by "root" (i. Debian stores these configuration-files in /etc/pam. as we want to be able to fall back to the (local) pam_unix in case pam_ldap does not authenticate successfully. ("sufficient" can not be replaced with "required" here.so account account session session session session session session sufficient required sufficient required optional optional optional required pam_ldap.) (The "session" section might have to be re-ordered too.. # so there’s no need for pam_nologin in /etc/pam.so auth required pam_unix.so standard noenv # [1] pam_limits. This is necessary as once pam_ldap authenticates successfully. of course. one has to place them before pam_ldap.so # [1] pam_motd. there has to be a blank second line in this file.e.d/. no other "auth"-lines are used due to pam_ldap’s "sufficient" attribute. In order to have pam_env and similar be used..d/ssh.so # Woody’s SSHD checks for /etc/nologin automatically.d/ssh ############################ #.".so"-lines.d/ssh: ########################### /etc/pam.so auth sufficient pam_ldap. Re-ordering the "auth"-lines.so password sufficient pam_ldap.so # [1] pam_mail. I didn’t have such in my setup originally: according to Setting up LDAP for use with Samba. (While. Here is /etc/pam.) Now that we have the PAM module configured.at/~max/ldap/ auth required pam_env.secret pam_password crypt ####################################################################### and second don’t forget to create the file /etc/ldap.# don’t forget /etc/ldap.) .so pam_unix. This might be the case if for example the LDAP server could not be contacted for some reason.so ####################################################################### The changes to the original file are: The addition of the "pam_ldap.so pam_lastlog. "use_first_pass" is not needed in pam. The option "use_first_pass" is passed to pam_unix. or this module can’t authenticate the users anymore.at maxldap@ldap..subnet. Best thing is to restart it everytime to be sure changes are activated.) Things to take care of: Do not forget to edit the files /etc/pam. More up to date systems often use "md5" somewhere in the process of hashing the password. do not use "use_first_pass" on the first password-querying auth-module of your service-configuration.Here is /etc/pam. Otherwise.. I haven’t figure out yet when it is necessary to restart e.d/su ############################# #. SSH after changing its PAM-stack file.so required pam_unix.d/login similarly.subnet.d/ssh above.so required pam_unix.so module.subnet.] pam_ldap". (For some reason. you would not be able to login to any of your machines . the latter is used if somebody logs in locally (not via SSH or something like that). The first is used if some application without a specific service-file in /etc/pam.so use_first_pass sufficient pam_ldap... This way the pam_unix-module uses the password which was provided to "auth [.at/~max/ldap/ auth auth auth account account sufficient pam_rootok.so"-lines were added. MD5: The good old shadow file typically stores passwords as hashes using the "crypt" algorithm.so session sufficient pam_ldap.so ####################################################################### The changes to the original file are: Again. (These . one would have to enter the password twice for users not in LDAP. Logging in via SSH or su’ing to the LDAP-user "maxldap" should work now: $ ssh maxldap@ldap. Even if all your users (including "root") are stored in the LDAP database.d/ uses PAM-calls. In case your LDAP server wouldn’t be reachable for whatever reason. it would not be a good idea to remove the pam_unix.so sufficient pam_ldap.g. Should you re-order your PAM stack.d/other and /etc/pam.so. the "pam_ldap.at’s password: Last login: Tue Jun 3 15:11:30 2003 from some-client maxldap@ldap:~$ PAM: Passwords: Facts CRYPT vs.so session required pam_unix.not even as root (and that’s exactly what you might need to do in this case to debug or administer your network).d/su as another example: ########################### /etc/pam. ) The Mandrake-based document on mandrakesecure. just treat it as a system account and do not migrate it. Only the "LDAP Manager" is allowed to see the hash. haven’t you? PAM: The user "root" and other system UIDs (You actually don’t have to worry about that just yet. BTW: One reason for even me thinking that "crypt" really is secure enough is that the password is never sent in any way in plaintext over the network. just bare in mind you will have to decide later (when actually doing all the migration stuff in section Migrate Your Linux Users below) what to do with all the system accounts and with "root". this is the only one I found where both root and the users themselves (both LDAP-based users and local-only ones in passwd|shadow) can change their passwords. I couldn’t use these hashes. This would have been nice!) Oh.d/passwd. either Debian doesn’t support it or I simply couldn’t get this thing to work.d/passwd ########################### #. I think the general knowledge about it belongs to PAM and thus here. as every traffic between clients and server is secured using SSL (see below). migrate "root" to LDAP. (Especially. but you’ve already read it anyway.so. and of course. Otherwise. (It seems I managed to have passwords being created using MD5 and have them stored in the LDAP database this way. Unfortunately. Anyway.) I definitly recommend to keep your system UIDs (Debian currently treats the UIDs 1-999 as such) only locally on every machine. I really would have liked the MD5-thing as it creates longer password hashes and uses passwords with more than just 8 characters..conf concerning root’s non/ability to change users’ passwords explained in section PAM Clients vs.).so or using use_first_pass with pam_unix. Nevertheless. as it’s about user accounts.at/~max/ldap/ password sufficient pam_ldap. but unfortunately.so password required pam_unix. ########################### /etc/pam. Unfortunately."md5"-passwords can be distinguished from the "crypt"-only ones by starting with "$1$".) Being well known for my paranoia . good old CRYPT will (have to) suffice. If you prefer to have one root account for all machines (as I do).net (see section External Resources) describes a way to use exactly the newer MD5-based approach with your LDAP database. the users can change their passwords on their own from the command-line by simply executing passwd. . Server Configuration above.so nullok obscure min=4 max=8 ######################################################################### I tried several versions here. I never got a user to authenticate successfully.subnet. Even a local "root" user can’t see the password-hash by executing "getent shadow". Using it. The only (really bad) thing you should remember is: "Crypt"-passwords have a maximum length of 8 characters! PAM: Passwords: How To Change Them Here is my /etc/pam. don’t forget the setup of /etc/pam_ldap. I could not get the stuff working using pam_cracklib.. do not migrate them into LDAP! It’s up to you what you do with "root". this also depends on section PAM Clients vs.) ToDo . you will upgrade your machines from Woody to Sarge. the user "mysql" should only be available on a machine hosting a MySQL server).conf. it also needs to know its old password. logging in as "root" will use the LDAP-account by default. as the user "root" should not be able to simply change those LDAP values. It can (and probably will) happen that according to a new policy some system accounts changed their UID. Debian offers several ways to achieve our goal.verify this again: Oh.. With the setup explained here. what about the second? (Well. and before I forget and because it differs from standard Linux behaviour: if "root" tries to change its LDAP-account password using "passwd". if a user shall has access to one host. But be sure to also have "root" locally in /etc/(passwd|shadow|group). specifying that this user is allowed to login to the listed host(s).. If for some reason your LDAP server was not reachable. described in section NSS: Installation (with SSL capable packages). but that’s probably what you will do anyway. if you want to serve your users to client machines which do not run Debian Woody and thus use a different system accounts scheme. ok. Host Specific Access Introduction Up to now. they actually have access to all hosts on the network.) And of course (above all).g. Only if the LDAP server is not accessible. Even if it worked properly for one server. UID 0: "root" If you want to migrate the user "root" into LDAP (like I did). you will automatically fall back to the local user/password from the flat files..) Additionally. won’t you? :) . from "simple" to "advanced": . actually. Server Configuration. As described in the Mandrake based LDAP-article already mentioned in the resources section. you’ll need to change it directly in the LDAP database.The system UIDs If you need reasons for not migrating them to LDAP: Different services on different servers/machines will need different system users to be present (e. Try this yourself to make sure your machines behave properly... you’ll have problems on your hands. just as any other user does! (If you can’t remember the password anymore. but IMHO you really don’t want it this way :) . everything might turn out to work somehow. One day. (Mind the order "ldap compat" in /etc/nsswitch.. you still can log in to your (LDAP)-client machines using the local "root" account. one can define one or more "host"-attributes (host is part of "objectClass: account") in the user’s LDAP-entry. Upgrading just only one server will not work in this context. simply migrate it as described below in section Migrate Your Linux Users. simply add "host: *" to the LDAP entry. they are allowed to login. that in the /etc/pam.conf indicates.conf [.. If the user is allowed to login to all hosts. .mydomain. you can add the following there: /etc/pam_ldap.conf"." the user will be granted access to the host.net".. This means..so". The "pam_check_host_attr" option can be found in "/etc/pam_ldap.so filter=|(host=this-host. you’ll have to replace "account sufficient pam_ldap. you already know that Debian uses the split-up approach.conf" and "pam_ldap. As we’ve installed the corrected version from cmeerw.d/.conf" file. (Otherwise.so" with "account required pam_ldap.so auth sufficient pam_ldap. As already mentioned in section PAM: Pluggable Authentication Module. despite the message "Access denied.d/<service> files.org. add an attribute similar to "host: allowed-host.g. While Mandrake seems to have merged "libnss-ldap.d/ssh): #auth sufficient pam_ldap.net" or "host: *". or by editing /etc/pam_ldap..) Approach 2: Filters The more powerful way is the following one: Debian’s package libpam-ldap has a "filtering" patch applied.net)(host=\*) Only if the user’s LDAP entry contains "host: this-host.net)(host=\*) For all services at once (/etc/pam_ldap.conf" to on single "ldap.conf): pam_filter |(host=this-host. add an attribute similar to "host: allowed-host.mydomain.mydomain. /etc/pam. This way it is possible to accompany the LDAP PAM module with some filtering-rules which must match in order for the module to return successfully and allow the authentication-process to proceed. So here’s how to do it: For every host that the user shall be able to login. the standard Woody package’s filter patch has a bug which needs to be fixed. you’ll need pam_ldap.net".so to be "configured for account management (authorization)".] pam_check_host_attr yes But be careful: As the comment in Debian’s original pam_ldap.mydomain.Approach 1: pam_check_host_attr For every host the user shall be able to login. we already have a working version.mydomain. Next thing is to adapt the PAM stack: This can either be done by editing one (or more) specific service’s config file in /etc/pam.conf (which has influence on all services at once): For single services only (e. g.0. loglevel xxx ] TLSCipherSuite HIGH:MEDIUM:+SSLv2 TLSCertificateFile /etc/ldap/server. any traffic between the server and the clients was unencrypted.. let’s move on and make everything safer: Up to now.dc=subnet.0.dc=subnet.dc=at" \ -H "ldaps://ldap..at/" -W -x "(uid=maxldap)" Both commands.subnet..dc=at" -LLL -D "cn=manager.Things to take care of: Mind the backslash "\" in the filter "host=\*".0.cert TLSCertificateKeyFile /etc/ldap/server. you’ll allow access to every user who has at least one "host"-attribute (with any content)! Be sure to add "host: *" for user "root" (if it is served via LDAP) if you want it to be able to login via SSH! Add-On: Of course this setup can be extended for example by using self-defined attributes representing groups of hosts (e.dc=at" -LLL -D "cn=manager. It’s all up to you.dc=at" \ -H "ldap://ldap. With a little imagination it could perhaps also be possible to assign a user different shells on different hosts!? I don’t know yet.0/ ldaps://0...dc=subnet.at/" -W -x "(uid=maxldap)" ldapsearch -b "ou=People.. Let’s activate the SSL-encryption our packages already are capable of (as we recompiled them ourselves). with and without SSL encryption.dc=subnet.key TLSCACertificateFile /etc/ldap/ca. .cert TLSVerifyClient 0 [ . If you miss it (ending up with "host=*"). You can find a description of how to create an SSL certificate here: HOWTO Create an SSL Certificate.subnet. database ldbm ] Start the LDAP server with the following command: # /usr/sbin/slapd -u slapd -h ’ldap://0. "my-hosts: workstations") and adapting the PAM module’s filters.0/’ -d 1 You can test the server’s SSL capabilities (of course the client you are executing the second command needs the "ldap-utils"-package with SSL-support compiled into it!): ldapsearch -b "ou=People.conf: [ . should return the entry for the user "maxldap". :) SSL Encryption Now that we have a running LDAP-server which provides user-information and also have clients authenticating using these LDAP-users. you need to configure your OpenLDAP server to use it: Add to your /etc/ldap/slapd.0. Once you have your signed certificate. -u slapd \ -h ’ldap://0. for example /etc/pam. on the client side (this means every machine querying OpenLDAP.conf. Add-On: To test if your clients really are communicating with the server using an encrypted connection.0. Next. All you have to set up on each "client" is: Install our self-compiled libnss-ldap package. . Edit /etc/nsswitch.all we have to do is to change one line: # start-stop-daemon --start --quiet --pidfile "$pf" --exec /usr/sbin/slapd start-stop-daemon --start --quiet --pidfile "$pf" --exec /usr/sbin/slapd -.subnet. this is as easy as changing "uri ldap://ldap.conf and /etc/lib_pam. this most probably also includes the "server" where OpenLDAP is running on) we have configured the Linux clients to use it. lot’s of stuff.conf and /etc/pam_ldap. we should look at the changes necessary to /etc/init. While using ldap:// you should be able to find some cleartext in the data garbage. make user-queries and logins with both settings ldap:// and ldaps:// and a concurrently running "tcpdump -X host ldapserver".conf (clients) respectively /etc/lib_pam. Install our self-compiled libpam-ldap package based on cmeerw. "slapd" now logs to /var/log/debug.d/..0. well.0.at/"> in both /etc/libnss-ldap. /etc/libnss-ldap. Edit the corresponding service files in /etc/pam.d/ssh. we have the process’ user changed to "slapd" as well as the server listen for ldap-traffic on port 389 and ldaps-traffic on port 636.at/" to "uri ldaps://ldap.0/ ldaps://0. OpenLDAP Startup Script Now that we have everything set up correctly concerning the connections (including SSL-support).org.conf.subnet.Activate SSL Encryption for the Clients’ Queries Next step is to adapt the clients’ setup: Having already installed our re-compiled library-packages. on the server side we have accomplished to set up and populate the OpenLDAP server.conf (server) and /etc/ldap. after switching to ldaps:// you should only see . Communication between the clients and the server is secured using SSL encryption.secret (server).d/slapd to have the OpenLDAP server started correctly every time the machine boots .0/’ This way. So Far So Good..0. they can look up the users and use those to log into machines. Part 1 So far. but no plaintext information. This tcpdump-command shows you in ASCII the transmitted data. so no need for this here. The newer version of Debian Sarge (currently version 44-6) definitely works better." 03-08-12: As Buchan Milne pointed out in a mail on the Samba mailing-list and his Howto Implementing disconnected authentication and PDC/BDC relationships using Samba and OpenLDAP. "givenname" an "sn".Migrate Your Linux Users Migrate Linux: Prerequisites Of course you want to migrate your current Linux users from passwd/shadow to the new OpenLDAP server. but also still produces junk values for the attributes "cn". Debian offers help here by providing the MigrationTools from padl. $DEFAULT_MAIL_HOST = "mail. $EXTENDED_SCHEMA = 1. The default base is "dc=subnet.g. edit /usr/local/MigrationTools-44/migrate_common.at". as already mentioned above in section PAM: The user "root" and other system UIDs.com as the Debian package migrationtools.dc=at". Unfortunately.subnet.pl" produced junk values for the attributes "cn". the latter also uses different letter cases in a newer version than Woody’s one does (e. ("hosts" would be good to migrate too. I used version 44.conf. "givenName" instead of "givenname").pl" forgot an "s" with "dc: subnet". in my case this was our NIS server (which was a different machine than the upcoming LDAP server). After exploding the tar-ball.dc=at" which should be identical to the suffix defined in slapd. but we have a local DNS server running. the package in Debian Woody (version 40-1) is rather buggy: "migrate_base.pl" didn’t produce any output and "migrate_passwd. It’s possible to migrate nearly all data to LDAP (including /etc/(hosts|protocols|services) etc. $DEFAULT_BASE = "dc=subnet. which works fine for me. Furthermore.at".) Furthermore.ph is not necessary: Setting and exporting according environment variables (for example export LDAP_DEFAULT_MAIL_DOMAIN="subnet.at". nevertheless.). In either case. in this case "mail. In order to get a working package you should download the original version of the MigrationTools from padl.at" which will assign all users a default email address of "user@subnet. Here we set the default mail domain. the only things we’ll use LDAP for are users and groups. The default mail host is the SMTP server used to send mail.com. The extended schema is set to 1 to support more general object classes. you’ll have to execute the scripts on the machine which currently holds your users already. in this case "subnet. so install the migration-package there.at") works too and would survive an eventual upgrade (though of course migrating users will probably be performed only once).at". Citing the initial Mandrake-LDAP document on this one (with "localisation"): "This sets some defaults for the migrated data.subnet. you’ll have to exclude your system accounts (Debian Woody uses the UIDs 1-999 for this) and decide on what to . changing those values in migrate_common. so you could apt-get install this package. "migrate_group.ph and adapt the following variables: $DEFAULT_MAIL_DOMAIN = "subnet. at (Here is the original base. the only things we’ll migrate are users and groups.ldif: dn: dc=subnet. we basically do have this base structure already.pl cd /usr/share/migrationtools/ ./migrate_base.ldif file with all the additional entries. As I’ve already mentioned.dc=at ou: People objectClass: top objectClass: organizationalUnit objectClass: domainRelatedObject associatedDomain: subnet.ldif The only entries which are or might become interesting for us are the following base. Especially keep in mind when migrating both groups and users to delete all system accounts from the .ldif-file created by the migration scripts! ToDo: It should be possible to do this using some /sed/awk/grep/etc/ scripts.dc=subnet.do with "root" (UID 0). So we won’t do anything here at the moment. and of course we’ll check out the base structure.dc=at dc: subnet objectClass: top objectClass: domain objectClass: domainRelatedObject associatedDomain: subnet. Migrate Linux: The Scripts The migration scripts are located in /usr/local/MigrationTools-44/ (or wherever you saved them to): migrate_base.) Well.dc=at ou: Group objectClass: top objectClass: organizationalUnit objectClass: domainRelatedObject associatedDomain: subnet. .at dn: ou=Group.pl creates the LDAP tree’s base structure.at dn: ou=People. The migrate_all_* migrate everything by simply calling the single perl scripts.pl > base. so let’s start with this one: migrate_base.dc=subnet. The only difference is that we are missing "objectClass: domainRelatedObject" and its associated attribute. which most probably holds your passwords.ldif # Restrict access as this file holds all passwords: chmod 600 passwd./migrate_passwd. An entry in /etc/group like "somegrp:x:12345:userone.pl /etc/passwd passwd. so we have to remove the corresponding objectClass.dc=subnet. You can simply adapt all the entries by executing "sed" like the following: . we didn’t include the Kerberos schema in our slapd.subnet.dc=at" -x -W -f $FILE BTW: The main purpose of a group is to hold several users :).usertwo" can be accomplished by adding a "memberUid" attribute for each user to the group’s LDAP entry: "memberUid: userone" and "memberUid: usertwo". migrate_passwd.dc=at objectClass: posixGroup objectClass: top cn: users userPassword: {crypt}x gidNumber: 100 dn: cn=nogroup. it is called "inetLocalMailRecipient".at/ -D "cn=manager. Additionally.pl /etc/passwd passwd.dc=at objectClass: posixGroup objectClass: top cn: max userPassword: {crypt}x gidNumber: 1000 dn: cn=users. This should ensure that the script find the /etc/shadow file./migrate_group.pl Here is the minimized output from .dc=subnet.migrate_group.ldif: dn: cn=max. Instead.conf.ou=Group.ou=Group. Debian Woody’s schema files do not provide the objectClass "mailRecipient"./migrate_passwd.dc=subnet.ou=Group. try the command with an explicit environment variable set: ETC_SHADOW=/etc/shadow .ldif.pl /etc/group group.ldif If you don’t have any user passwords in your ldif file.pl .dc=at objectClass: posixGroup objectClass: top cn: nogroup userPassword: {crypt}x gidNumber: 65534 This information can now be added to the LDAP server by executing something like the following: ldapadd -H ldap://ldap.dc=subnet. .sed s/mailRecipient/inetLocalMailRecipient/g passwd.at/ -D "cn=manager. . As no flavour of Windows (AFAIK) can access the LDAP database directly.dc=subnet.at mailRoutingAddress: THEUSERNAME@mail. Thus.ldif_corrected Enter LDAP Password: adding new entry "uid=max. the corrected objectClass and other values): dn: uid=max. the Windows machines will allow access based on what the (Samba) PDC says which again makes its decisions on what the users in the LDAP database look like.dc=at" -x -W -f passwd. it would be useful to have the network’s Windows machines use this database too.dc=subnet.dc=at" Samba 2. This information can again be added to the LDAP server by executing something like the following: ldapadd -H ldap://ldap.ldif | \ sed ’/^objectClass: kerberosSecurityObject$/d’ | sed ’/^krbName: /d’ \ > passwd. 2003.subnet. of course.ldif_corrected Here is the corresponding passwd.ou=People..ou=People.subnet.ldif_corrected chmod 600 passwd.ldif file (again with one user only.x and LDAP Samba: Introduction Now that we have a working LDAP server to hold all our Linux-users. The Samba-PDC queries LDAP for entries which are "objectClass: sambaAccount".2. (This means. all Windows/Samba clients query the Samba-PDC.) 03-08-15: See user comment from Aug 14.at mailHost: mail. the Windows machines (both servers and clients) will have to be part of a domain which is controlled by a Samba PDC.subnet.at objectClass: inetLocalMailRecipient objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: account objectClass: posixAccount objectClass: top objectClass: shadowAccount userPassword: {crypt}$1$_my-password-hash-from-/etc/shadow shadowLastChange: 12174 shadowMax: 99999 shadowWarning: 7 loginShell: /bin/bash uidNumber: 1000 gidNumber: 1000 homeDirectory: /data/home/max gecos: Markus Amersdorfer.dc=subnet.dc=at uid: max cn: Markus Amersdorfer givenname: Markus sn: Amersdorfer mail: THEUSERNAME@subnet. Oh.x are as welcome and will be included here as any other feedback!) Another add-on: You only have to install this "special LDAP-Samba version" on the machine which performs as your network’s PDC.2. ] Mind: In order for Samba to support ACLs. Conclusion: Recompilation is necessary.3a-13subnet_i386.deb [ Currently now packages available.deb samba-doc_2.2..3a" currently. The default behaviour is to use the flat smbpasswd file. so "Samba" refers to "Samba 2.deb samba_2. Again.deb smbfs_2. apt-get install the mentioned packages and run "dpkg-buildpackage" again in this case. "dpkg-buildpackage" might not compile the packages at the first time you run it due to some missing build-dependencies.) Samba: Installation and Setup Samba itself (if it’s not configured to query some other server for user-authentication) can either use the smbpasswd file to hold its users or use an LDAP server to accomplish this.3a vi debian/rules --> add "--with-ldapsam \" just before "--with-msdfs" [ vi debian/changelog ] dpkg-buildpackage -b -us -uc [ Due to missing build-dependencies: apt-get install libreadline4-dev libcupsys2-dev [with extra-packages "libcupsys2 libjpeg62 libtiff3g"] dpkg-buildpackage -b -us -uc ] cd .2.2.3a-13subnet_i386. and by the way. please build them yourself . dpkg -i samba-common_2.deb smbclient_2.2. All other Samba machines should be configured to use this one Samba PDC as their "oracle" for user authentication. (Infos on Samba 3.cache the value "ac_cv_header_sys_acl_h=${ac_cv_header_sys_acl_h=no}" to "ac_cv_header_sys_acl_h=${ac_cv_header_sys_acl_h=yes}". (You’ll use options like "security = DOMAIN" and "password server = MYPDC" to accomplish this.3a-13subnet_i386.. Debian Woody’s Samba packages of course defaults to the default in this case. set the packages to HOLD using dselect or something like "echo "samba-common .2.. you have to decide at compile time. It’s not possible to do both at the same time.3a-13subnet_i386.2. remember that this HOWTO-document uses Debian Woody as the underlying distribution. cd ~ mkdir samba-source cd samba-source apt-get source samba cd samba-2. Again. I also added "--with-acl-support" and changed in debian/config.3a-13subnet_all. .. of course the Samba server needs to be able to query the LDAP server for users.hold" | dpkg --set-selections"..at ldap suffix = ou=People.g.gz chown slapd.slapd samba. This leads us to the next (and one of the last) steps to do: configure smb.dc=subnet.dc=subnet." again.gz /etc/ldap/schema/ cd /etc/ldap/schema/ gunzip samba.lmPassword.*..conf: include /etc/ldap/schema/samba. Samba has to know about this user and it’s password. Make sure to have backups of your configuration before as well as to set the packages to HOLD afterwards again. Of course.g..dc=subnet. I tried to think of a setup which would not use "cn=manager.] Restart OpenLDAP: /etc/init.schema.schema chmod 440 samba.d/samba restart .schema Add to /etc/ldap/slapd.dc=at # Plus these options for SSL support: #ldap port = 636 #ldap ssl = on Restart Samba: /etc/init.d/slapd restart Now that you have an "LDAP capable" Samba installed and now that your OpenLDAP knows about the new attributes..dc=subnet.dc=at" attribute=userPassword. using "smbpasswd") and above all edit or even add LDAP entries (e. teach your LDAP server the possibilities of Samba by adding the Samba schema file and restrict the access to the users’ Samba passwords using the Access Control Lists: Run: cp /usr/share/doc/samba-doc/examples/examples/LDAP/samba." for this.dc=at" attribute=userPassword access to dn=".ntPassword [.. Here are the additions to be added to the [global] section of your Samba PDC: /etc/samba/smb. The main reasons are that Samba (in some way) must be able to change the passwords (e.conf. machine accounts for new SMB-clients joining the domain).schema.dc=at ldap server = ldap.conf: # access to dn=". add to [global]: # Without SSL: ldap admin dn = cn=manager.) Next.*.conf (and thus Samba and its tools) correctly to use the LDAP server properly. But be aware to keep track of possible security-updates for these packages on your own from now on! (Upgrading to the possibly new packages then should be easily possible by running "dpkg -i .subnet.schema Change the already exiting password ACL rule in /etc/ldap/slapd. but I just couldn’t figure one out. BTW: As Implementing a Samba LDAP Primary Domain Controller Setup on Mandrake 9.business as usual. change its password using "smbpasswd $user". The "only" difference is that now the LDAP server is used to hold the information which is usually stored in smbpasswd: . If you don’t want to do this. Added user maxsmb.conf’s option "ldap admin dn" in the file /var/lib/samba/secrets.tdb. etc. you set an environment variable to the corresponding password and use it in the next command to tell Samba about it.g.dc=subnet. .dc=at" -LLL -D "cn=manager. you might want to test if the new setup works properly.Here is a sample smb.dc=subnet. Samba: Test your Setup Before messing around too much with your (already existing?) LDAP users. (That’s a very neat trick I saw at B. Not saving it in LDAP at this time yet helps keeping things seperated.conf file. Milne’s "Implementing Disconnected Authentication and PDC/BDC Relationships Using Samba and OpenLDAP" to keep the password from your shell’s history file and similar. we simply add one to the local flat files (passwd|shadow|group). just go on and proceed with the next section Samba: Add (Windows) Users. password: 12345] Add the "Samba user" (to LDAP): # smbpasswd -a maxsmb --> output: New SMB password: abcde Retype new SMB password: abcde LDAP search "(&(uid=maxsmb)(objectclass=sambaAccount))" returned 0 entries. it stores a password hash for the user of smb. simple and "a way which is already known".dc=subnet. ’cause that’s what we want actually. It will not show up anywhere :).of course .) "smbpasswd -w $LDAP_BINDPW" is the actual command of interest.dc=at" -W -x "(uid=maxsmb)" One can now access the Samba server using this user.be stored in the LDAP database. we have to tell Samba the corresponding password: # read -s -p "Enter LDAP Root DN Password: " LDAP_BINDPW # smbpasswd -w $LDAP_BINDPW --> output: Setting stored password for "cn=manager. the corresponding Samba user to be added will . we probably don’t need (and thus don’t want) the overhead of encryption on the very same system where both OpenLDAP and Samba are running on. Add the "Linux user" (to local flat files): # adduser --no-create-home maxsmb [e.dc=at" in secrets.x states. Nevertheless. Check yourself: # getent passwd | grep maxsmb # ldapsearch -b "ou=People. Last but not least.tdb This way. Basically. you can add the two options "ldap port" and "ldap ssl" accordingly. As Samba needs a "Linux user" for/below every "Samba user". (If the two services run on different machines. 2.255.170. If you just have a few of them.141.170.ou=People.119 bcast=193.2.3a-12 for Debian] smb: \> Now that everything is proven to work properly. . LDAP-Test)) ADMIN$ Disk IPC Service (yellow server (Samba 2.2. it simply updates the password-hashes (no matter whether it’s invoked as "smbpasswd $user" or "smbpasswd -a $user").128 Password: Anonymous login successful Domain=[SUBLDAP] OS=[Unix] Server=[Samba 2.141. there’s nothing easier than to simply add them: # smbpasswd -a $user This command seems to perform an LDAP search for an already existing entry matching "(&(uid=maxsmb)(objectclass=sambaAccount))".dc=subnet.127 nmask=255. If it finds an entry matching its query (which means the user already exists as a Samba-user).255.3a-12.170.3a-12 for Debian] smb: \> client$ smbclient //ldap/maxsmb added interface ip=193.2.141.141.2.3a-12 for Debian] tree connect failed: NT_STATUS_WRONG_PASSWORD client$ smbclient //ldap/maxsmb -U maxsmb added interface ip=193.255.170. or it uses an already existing normal Unix-user and adds the corresponding Samba attributes. don’t forget to remove this section’s test user "maxsmb": # ldapdelete -D "cn=manager. If none is found.dc=subnet.3a-12.141.127 nmask=255.255.dc=at" -W -x "uid=maxsmb.119 bcast=193.141.dc=at" # deluser --remove-home maxsmb Samba: Add (Windows) Users In order for Samba to allow access to shares for certain users (and don’t allow for others).170.128 Password: Anonymous login successful Domain=[SUBLDAP] OS=[Unix] Server=[Samba 2.119 bcast=193.128 Password: abcde Domain=[SUBLDAP] OS=[Unix] Server=[Samba 2.client$ smbclient -L //ldap Password: Anonymous login successful Domain=[SUBLDAP] OS=[Unix] Server=[Samba 2. But before doing that.2. you can extend your LDAP users to become Samba capable.170.255.3a-12 for Debian] Sharename Type Comment -----------------tmp Disk maxsmb Disk IPC$ IPC IPC Service (yellow server (Samba 2.127 nmask=255. LDAP-Test)) client$ smbclient //ldap/tmp added interface ip=193. it needs to know these users. it either creates the entire user (as is the case in the example in section Samba: Test your Setup).255. by .txt. done sed s/^/"smbpasswd -a "/ users-with-samba-passwords.dc=subnet./make-them-samba-users. You can easily tell your users about their passwords as they are stored in users-with-samba-passwords.) Samba: Migration Summary If you need to migrate lot’s of users from a Windows-PDC (such as Windows NT).g. "smbpasswd" does all the work for us. (This seems to me to be easier than to create an . perfect!) Samba: Join Windows-Workstations to our Samba-PDC Domain 03-08-13: In contrast to the initial release of this howto. Both possible ways.’ and not ".txt chmod 700 make-them-samba-users.If you have lots of users already existing in the LDAP tree (e.0 will allow to migrate a complete NT-securitydatabase to a Samba-PDC by calling "net rpc vampire"..".txt for user in ‘cat linux-and-not-samba-users. check out /usr/share/doc/samba-doc/examples/.dc=at" \ -W -x ’(&(objectClass=posixAccount)(!(objectClass=sambaAccount)))’ | grep "uid: " \ | awk ’{print $2}’ > linux-and-not-samba-users.sh This takes all Linux-users which are not Samba-users already. Samba 3.txt > make-them-samba-users.g.. ToDo: Some parts of the script should be rewritten to clear things up and make the script simpler (e. create the posixAccount-users (e. due to migrating them as described above).dc=at" -LLL -D "cn=manager. but I didn’t try it in large scale: ldapsearch -b "ou=People.sh .. or if you have lots of "Windows-users" to add.. you might check out smbldap-tools. adding the account manually or have Samba add it automatically (if it doesn’t exist already). by migrating them as described above) and afterwards run the commands mentioned in section Samba: Add (Windows) Users to "smbpasswd -a" all posixAccounts and hereby make them sambaAccounts too. (Somehow the BASH messes something up when using double-quotes. If you need to create an LDAP user database with lot’s of users which are to become Samba-users.) Also.dc=subnet.. you’ll need a script to do the work: # Warning: This should work. I meanwhile figured out how to have a machine account automatically be added to the LDAP-tree.ldif-file holding posixAccounts and sambaAccounts in the first place and add this . Scott Phelps migrated a Windows-PDC to a Samba-PDC without the clients noticing a change: He used pwdump2 to keep the user passwords and rpcclient and smbpasswd to have the Samba-PDC use the old Windows-PDC’s SID for the domain. makes them Samba-users and assigns them a random password (creating the random passwords using makepasswd might take a while!).sh. If you need to migrate lot’s of users from an already existing Samba-PDC with the users being stored in a flat smbpasswd file. (ToDo: Insert link to the Howto once available..txt.. do echo $user ‘makepasswd‘ \ >> users-with-samba-passwords.txt‘.ldif-file to the LDAP server. even when escaping & and ! using \. Mind that the filter is included in ’.sh chmod 600 users-with-samba-passwords. use this script I wrote: create-machine-account..g. To add the machine manually. searching for MachineAccounts is easy here too: just "ldapsearch" for "(gecos=MachineAccount)". add the following option to your smb. Afterwards. Milne’s document it should be possible to have all "Domain Administrators" join a machine to the domain. Nevertheless.dc=.) In this setup. Usage: # . In "non-interactive" mode (i. all the script’s status messages are logged using /usr/bin/logger. machines use a uidNumber-range which is seperated from the normal Linux users. If everything went fine until here..sh NewMachineName$ I". This group will be the group of all machines. According to B. To have Samba add the account automatically while the machine joins the domain. it creates the Linux-account.conf and ldap. run ". (But be careful: You’ll need the user "root" as mentioned in step 2. it creates the group "machines" (gidNumber $GIDNUMBER. default 20000). Seperation is done by simply using uidNumbers above $DEFAULTUID. . sorry!) You can now log on to this machine using all valid LDAP sambaAccount entries. it exits. If necessary. (In this setup. I didn’t try this yet.sh %u What create-machine-account. it makes this new entry a full Samba-Machine-Account using smbpasswd -a -m.dc=" for machine accounts.conf’s [global] section: add user script = /usr/local/sbin/create-machine-account. adding at least the "objectClass: sambaAccount" part to LDAP "is left to the reader as an exercise".secret for this./create-machine-account. Joining the domain on the Windows machine works as usual./create-machine-account. 1. Just perform steps 1 and 3 of this mini-Howto describing exactly that: Howto join a Windows client to a domain. I now. you probably hate this phrase as much as I do. but it works. BTW: Using "ou=People" is different from the smbldap-tools (smbldap-tools work-log) which use "ou=Computers. 2.) Next. Option "I" activates the interactive mode (printing status messages to stdout and possibly querying you for the rootbinddn-password). If it doesn’t already exist.e. Mind: At the beginning of the script are three options which can be changed. machine-accounts can be distinguished from others as they are posixAccounts with "gecos=MachineAccount". It checks if the machine-account already exists. If so. defaults to 20000. but are consistent with our setup here by default.. it finds the highest uidNumber of any already existing machine-account.sh NewMachineName$ <I> (WITH the machine account’s trailing "$").sh does basically boils down to: Get the necessary data to be able to connect to the LDAP-server. This indicates it’s best run on the LDAP-server/PDC itself. (The script uses settings in pam_ldap.using functions to print the status messages). It’s a little mess currently. without option "I"). g.com: We created ldif files for the base structure. The most important options are "workgroup = SUBLDAP".conf of a joining workstation. So Far So Good.conf-option "domain admin group = @staff @winadmins".sh".SID or similar stuff.conf.SID on the joining Samba machine. Mind the very useful smb. since both systems behave the same and talk the same protocol. and just to make sure: I performed this stuff with /etc/samba/ containing only smb. Joined domain SUBLDAP. after creating the machine account (for both Linux and Samba) using our script "create-machine-account.d/samba start Here is the according sample smb. join the client to the domain by running the following commands on the new workstation: client:~# /etc/init. Part 1). On the Server: Create the Linux-User account "MachineName$" 2. we migrated the current Linux-users of the network (which a NIS-server might have provided to the clients) using the official MigrationTools package from padl. On the Server: Make it a Samba-machine-account 3. Samba: Miscellaneous Here’s a document describing my experiences with the smbldap-tools: my smbldap-tools work-log. to exclude system users) prior to adding this . This results in in the same steps as described above in section Samba: Join Windows-Workstations to our Samba-PDC Domain: 1. Oh. On the Client: Join the domain So. of course. the groups and the users (including shadow-passwords) and edited them (e. client:~# /etc/init. That makes sense. Part 2 After successfully setting up the server and the (Linux-) clients (see So Far So Good. "security = DOMAIN". Just for your information: Joining the domain creates the file /etc/samba/MACHINE. and no other files such as MACHINE. (Thus it’s possible to re-use machine-accounts in case a machine left the domain without having to delete the machine-account first.Add-on: Joining a Windows machine to the domain if the machine-account already exists works just fine too.d/samba stop client:~# smbpasswd -j SUBLDAP -r YELLOW 2003/08/08 20:24:31 : change_trust_account_password: Changed password for domain SUBLDAP.) Samba: Join Samba-Workstations to our Samba-PDC Domain Joining a Samba-Workstation to our Samba-PDC controlled domain involves the same steps on the server-side as does joining a Windows-Workstations. "password server = YELLOW" and "local master = No". GQ GQ is a GTK-based LDAP client. and copy objects. assigning random passwords to the users. we added sambaAccount’s to our posixAccount’s users. Debian packages for a current version backported to Woody can be found in Christof Meerwald’s Debian section. Besides the Gnome. ToDo: phpLDAPadmin Citing the homepage: phpLDAPadmin is a web-based LDAP application for managing all aspects of your LDAP server. You can browse your LDAP tree.and web-based tools mentioned here. Check out this list of Graphical LDAP tools over at the LDAP Linux HOWTO. Using some commands. this Samba-server will become the domain’s PDC with all Windows. Directory Administrator makes adding/editing/deleting users/groups really easy! If it wasn’t for all our wish to understand what’s behind all those GUI tools. ToDo: LDAP-Client-Interfaces As the header indicates. As basically goes for the total HOWTO: I’ll add things as soon as they are ready. Next. corrections as soon as bugs are found. one would probably use this one from the beginning already. create. there are also KDE tools. Furthermore.and Samba-clients querying this PDC.information to the LDAP-Server. of course.") . We taught the OpenLDAP-server to use the new Samba-attributes. this section is currently still marked "ToDo". Directory Administrator Directory Administrator is a GTK-based LDAP client. we discussed joining SMB-clients (both Windows and Samba) to the domain using a custom script to set up the corresponding machine accounts. perform searches. Citing from its description: "DaveDAP is a web-based LDAP admin tool written in PHP. Debian packages for a current version backported to Woody can be found in Christof Meerwald’s Debian section. You can even copy objects between two LDAP servers and recursively delete or copy entire trees." (phpLDAPadmin was formerly known as "DaveDAP". Easy access to all available attributes etc. In our network. delete. we set up LDAP-capable Samba-packages on the server only. edit. GQ is great to browse your overall LDAP tree and get a good feeling of what’s where. and view your server’s schema. and Part 3.Console based LDAP user management tool.de/ (Found on. Part 2...ebenfalls deutschsprachig. LDAP Explorer Tool... II: The Differences to the Woody Documentation Above II: Feedback!? II: The Work-Log II: Basic Installation and Configuration II: LDAP Clients II: Database Population II: Configuring NSS II: Configuring PAM II: Have the Server Processes Run as Non-root II: Miscellaneous ToDos include .ToDo: Miscellaneous. Part II: Using OpenLDAP on Debian Sarge to serve Linux Users Table of Contents (Part II: Sarge) II: Introduction II: On the Versions . Some links about LDAP I found in a posting by Georg Wallner on the LUG-TS-Mailing-List: Lots of links to and info ’bout LDAP.8a (or later) if you want to get a Samba-BDC working fine. Kerberos and LDAP slides. Deutsches LDAP-HOWTO zu Debian Potato.html). libapache-mod-ldap .Apache authentication via LDAP directory.pro-linux.iit. . HOWTO: LDAP Programming in Python Only book Understanding LDAP featured by IBM.2. Milne points out in a mail on the Samba mailing-list. Part 1.gonicus. it might be good to use Samba 2.edu/~gawojar/ldap/ cpu -. Verschiedenes zu LDAP . Carla Schroder’s Building an LDAP Server on Linux. Miscellaneous As B.sourceforge.de/news/2002/4491.net/. Done The following NEW packages will be installed: db4. Sorry ’bout that ... please treat it as such: A work-log.xx.Sarge: Feedback!? Comments and corrections are welcome indeed.Sarge: On the Versions . After unpacking 4026kB of additional disk space will be used.Part II .30-3) as well as the dependencies and recommendations: # dselect Reading Package Lists. 0 to remove and 0 not upgraded.) Still.1. If you’re running Debian Sarge. and I do not have access to e. (This state of information is also reflected in the state of structuring: It’s more a "bunch of" than a "list of". while the final Debian Sarge ships LDAP packages of versions 2.. For more information on some issues. The default configuration is not removed anymore.) Part II .. for example.2-util ldap-utils libiodbc2 libltdl3 libsasl2-modules slapd 0 upgraded.g.30-3) and "ldap-utils" (2. 6 newly installed. Done Building Dependency Tree. Do you want to continue? [Y/n] . the following work-log is currently not based on the final version of Sarge. Note though that my notes below are based on LDAP packages of versions 2. I’ll gladly do so. (I moved to the UK. the necessary hardware resources here anymore.. Need to get 1559kB of archives. give the descriptions below a chance.... As much as I would have liked to re-work this document for Debian 3. as always! Part II . Recompilation of the packages is also not necessary .2.1..Sarge: The Work-Log Part II . (The general LDAP principles are still the same.1 ("Debian Sarge").1.Sarge: Introduction Part II . I would like to share my experiences with a preliminary version of Debian Sarge. should I get the time and resources again to work on this any further.xx.Sarge: Basic Installation and Configuration Install the packages "slapd" (2.Sarge: The Differences to the Woody Documentation Above With the Sarge version. please check the Woody documentation above. of course!) Part II . I tried to stick more to the Debian default packages as they are. Since. thus. dated back about half a year ago to January 2005.. I unfortunately neither have the time nor the resources to do so. conf file is to add (near the beginning of the file) the "misc.dc=com" \ -H "ldap://yourserver.Concerning the configuration of these packages: enter your DNS domain: example.example. For some reason....] Don’t forget to restart the LDAP server afterwards: # /etc/init.dc=com" -LLL -D "cn=admin..com" -W -x If you want a graphical client.example. ("allow bind_v2" would have been added to slapd.com</ldaphost> <ldapport>389</ldapport> <basedn>dc=example.conf ######################### # Note: We need this to be able to have entries with attributes such as # "mailRoutingAddress" (which we need as we’ll use the LDAP-server # to host the mail-users and -domains for the Postfix SMTP server). The solution is to fire up your editor and add the following section to your (already existing as you should start once and stop GQ first) ~/.Sarge: LDAP Clients In order to connect to the server from an Ubuntu Linux client (which is what I use on my client machine).gq file: <ldapserver> <name>peach</name> <ldaphost>yourserver.dc=com" If you want to connect as the LDAP-admin from anywhere on the network.dc=com</binddn> <pw-encoding>Base64</pw-encoding> <search-attribute>cn</search-attribute> </ldapserver> .) The only thing to change in the /etc/ldap/slapd.dc=example.schema # [.com Name or your organization: example.dc=com</basedn> <binddn>cn=admin.dc=example.0beta1-1.] include /etc/ldap/schema/misc.conf in case we allowed v2. install e. simply install the "ldap-utils" package and run something like: $ ldapserach -x -b "dc=example.schema" to be used: ######################### /etc/ldap/slapd.com Admin password: adminpassword Allow (old) LDAPv2? No. Ubuntu 4. use something like this $ ldapsearch -b "dc=example. the package "gq". # # [. I could not add a new server to its config using the graphical menu.d/slapd restart Part II .10 (aka "Warty") comes with version 1.g. com mail: my.account@mail.example.dc=example.email.dc=com objectClass: posixGroup objectClass: top cn: maxldap gidNumber: 12345 .dc=example.dc=example.address@example. using the following file "basic-test-user.ldif": dn: ou=People.com loginShell: /bin/bash uidNumber: 12345 gidNumber: 12345 homeDirectory: /home_local/maxldap/ gecos: maxldap_gecos-field description: Not Available localityName: I dont know dn: cn=maxldap.dc=com objectClass: organizationalUnit ou: People dn: ou=Group. but with the above config. you should already be able to connect to the server as "admin".com mailRoutingAddress: my.email.email.ou=People.dc=example.You can now modify the server settings.server.ou=Group.dc=com objectclass: top objectclass: organizationalUnit ou: Group dn: uid=maxldap.Sarge: Database Population Next.alternate.at/~max/ mail: my.address@example.dc=com objectClass: inetOrgPerson objectClass: posixAccount objectClass: top objectClass: shadowAccount objectClass: organizationalPerson objectClass: inetLocalMailRecipient uid: maxldap cn: Markus Peach LDAP User Amersdorfer sn: Amersdorfer givenname: Markus Peach LDAP User title: Dipl. Note though that all your data (including the password) is transferred in cleartext! Part II .-Ing. departmentNumber: IT mobile: 012-345-6789 postalAddress: AddressLine1$AddressLine2$AddressLine3 telephoneNumber: 1234-567890 facsimileTelephoneNumber: 012-345-6789 userpassword: {CRYPT}SOME-CHARACTERS-OF-YOUR-PASSWORD-HERE labeleduri:. add some test-data to your LDAP database. dc=subnet. which is all more than interesting and could be necessary if you use a non-Debian-default setup..dc=subnet.ldif": dn: uid=maxldap.) (From one of those Debconf-dialogs: "Note: As a sanity check.ou=People. use the following file "basic-remove..To add this to the running slapd-LDAP-server’s database.dc=at" adding new entry "ou=Group.dc=example. you’ll notice that some additional attributes such as "creatorsName" and "modifyTimestamp" were added automatically.ou=Group.ou=Group. it might really be a good idea to use .dc=at" deleting entry "cn=maxldap.dc=example.dc=com" Running "slapcat" as root on your LDAP-server.dc=example. we use the defaults.ou=People.ldif -D "cn=admin.dc=example.dc=com changetype: delete dn: ou=People.dc=at" deleting entry "ou=Group.ldif -D "cn=admin.dc=com" adding new entry "cn=maxldap.dc=example.dc=com changetype: delete dn: cn=maxldap.dc=example." Thus. Note: AFAIK.dc=com changetype: delete dn: ou=Group.dc=example.dc=at" -x -W Enter LDAP Password: deleting entry "uid=maxldap.ou=Group.ou=People.dc=example.Sarge: Configuring NSS # apt-get install libnss-ldap You are asked questions such as "is a login needed to retrieve data from the LDAP db?" and "should the libnss-ldap configuration file be readable and writable only by the file owner?". (This might be used to harden the installation later.dc=subnet. but at the moment.dc=com" -W -x Enter LDAP Password: adding new entry "ou=People.dc=subnet. run the following: $ ldapadd -f basic-test-user.dc=at" Part II .dc=com" adding new entry "uid=maxldap. you should use the "slapcat" and "slapadd" commands only when the slapd-process is stopped! If you wanted to remove these entries again.dc=com changetype: delete and run the following command: $ ldapmodify -f basic-remove.dc=example.dc=subnet.dc=at" deleting entry "ou=People. libnss-ldap will check if you have nscd installed and will only set the mode to 0600 if nscd is present. .. Nevertheless..] You should now be able to see the user via the NSS library calls: # finger maxldap Login: maxldap Directory: /home_local/maxldap/ Never logged in.conf /etc/nsswitch.conf_05-01-05 # $EDIT /etc/nsswitch. this sounds like it would be a good idea.) I adapted common-account. No Plan.conf a little bit .Sarge: Configuring PAM In order to be able to authenticate against the LDAP server using the user’s password. common-auth and common-password -. section "NSCD and /etc/libnss-ldap.php#nss-install.subnet.."nscd"! See making a backup of it: # cp /etc/nsswitch. but we’ll stick with "crypt" for the moment though.so account required pam_unix..d/common-account .slightly different changes each: # /etc/pam.conf" for more details!) The example-file /usr/share/doc/libnss-ldap/examples/nsswitch. (Note: You could choose to use "MD5" password hashes here!? Again.org/~torsten/ldapnss.conf.05-01-05: # To activate LDAP support. Name: maxldap_gecos-field Shell: /bin/bash # getent passwd|grep maxldap maxldap:x:12345:12345:maxldap_gecos-field:/home_local/maxldap/:/bin/bash Great.so try_first_pass . we’ll stick with the basics at the moment and just change the existing /etc/nsswitch..at/~max/ldap/index.authorization settings common to all services # markus -.html. we need to adapt the PAM service: # apt-get install libpam-ldap I basically used the same choices are as mentioned in holds really good information for decent configuration of our /etc/nsswitch. as described as follows. Part II . No mail.so account sufficient pam_ldap.debian. comment the default and add the LDAP config #account required pam_unix.conf [.] passwd: ldap compat group: ldap compat shadow: ldap compat [. as well as a "ldap"-user with the GID of the just mentioned group. reject) as well as the maxldap-LDAP-user!! :) "su" and "ssh" work too! (Note: For "ssh" to work properly.so nullok_secure auth sufficient pam_ldap. you have to restart the ssh-service first!) Part II . common-session goes unchanged. section "9. There should be a group called "ldap". create the group and user as follows: # addgroup --system ldap # adduser --system --no-create-home --group ldap Check /etc/passwd and /etc/group.05-01-05 # To activate LDAP support. At the moment I do not know why? Shouldn’t we make the ldap-changes there too?) It is now possible to log in on the command-line using NIS-users (correct password: accept.txt.2. Both IDs should be between 100-999.05-01-05 # To activate LDAP support. change the ownership of the files under /var/: [<0> root@peach ldap]# ll /var/lib/ldap/ -d drwxr-xr-x 2 root root 4096 Jan 3 15:04 /var/lib/ldap/ [<0> root@peach ldap]# ll /var/lib/ldap/ -a total 540 drwxr-xr-x 2 root root 4096 Jan 3 15:04 . comment the default and add the LDAP config #auth required pam_unix.Sarge: Have the Server Processes Run as Non-root In order to get the "slapd" process to run as a different user and group than root.d/common-password .so nullok obscure min=4 max=8 md5 use_first_pass (Note: As mentioned in Torsten’s Howto. .) You need to enable "ldap" to read the config-file: # chown ldap /etc/ldap/slapd. the system-UIDs 100-999 can be assigned for system-purposes on a dynamical basis. After adding "ldap" to SLAPD_USER and SLAPD_GROUP in /etc/default/slapd. UID and GID classes".so nullok obscure min=4 max=8 md5 password sufficient pam_ldap.2.so password required pam_unix.so nullok_secure use_first_pass # /etc/pam.so auth required pam_unix.. trying to restart the slapd-process will result in error.d/common-auth .password-related modules common to all services # markus -..conf Next. (Check /var/log/syslog for more information.authentication settings common to all services # markus -. Only 0-99 must not be used on a per-machine-basis! Thus. comment the default and add the LDAP config #password required pam_unix.gz (package "debian-policy"). wrong password. check out "/etc/default/slapd" first of all. According to /usr/share/doc/debian-policy/policy.# /etc/pam. which was fixed with 2.drwxr-xr-x 16 root root 4096 Jan 3 15:04 . .bdb -rw------1 root root 32768 Jan 7 16:31 id2entry.. SSL ..1 release versions . SSL: There was a SSL-Bugs in slapd..bdb -rw------1 root root 97140 Jan 7 16:31 log. -rw-r--r-1 root root 0 Jan 3 15:04 suffix_change [<0> root@peach ldap]# ll /var/run/slapd/ -d drwxr-xr-x 2 root root 4096 Jan 7 16:31 /var/run/slapd/ [<0> root@peach ldap]# ll /var/run/slapd/ -a total 8 drwxr-xr-x 2 root root 4096 Jan 7 16:31 .5 32916 4480 ? [.005 -rw------1 root root 8192 Jan 7 16:31 dn2id.23-1.Sarge: Miscellaneous ToDos include . have Postfix use the LDAP server for its SMTP services.d/slapd start)..004 -rw------1 root root 16384 Jan 3 15:04 __db. Apart from updating these notes to the actual Debian 3. and it should run as "ldap"-user now instead of "root": # ps aux|grep slapd ldap 2039 0.question: Do libnss-ldap and libpam-ldap support SSL properly? SSL: /etc/default/slapd: Activate "SSL"-startup-option there (and NOT in /etc/init.23-8 being in the final Sarge release (see bug 205452).001 -rw------1 root root 270336 Jan 3 15:04 __db.0000000001 -rw------1 root root 12288 Jan 7 16:31 objectClass.. .bdb [<0> root@peach ldap]# ll /var/lib/slapd/ -d drwxr-xr-x 2 root root 4096 Jan 3 15:04 /var/lib/slapd/ [<0> root@peach ldap]# ll /var/lib/slapd/ -a total 8 drwxr-xr-x 2 root root 4096 Jan 3 15:04 .. if necessary.2. # chown ldap /var/run/ldap/ -R # chown ldap /var/lib/slapd/ -R # chown ldap /var/run/slapd/ -R Starting slapd again should work now (# /etc/init. -rw------1 root root 8192 Jan 3 15:04 __db.0 0.002 -rw------1 root root 98304 Jan 3 15:04 __db.2..003 -rw------1 root root 368640 Jan 3 15:04 __db. with 2..d/slapd itself)! As a larger part of the project..] Ss 16:39 0:00 /usr/sbin/slapd -g ldap -u ldap Part II . drwxr-xr-x 6 root root 4096 Jan 7 16:31 .. drwxr-xr-x 16 root root 4096 Jan 3 15:04 . the following steps are some of those that needed to be performed next: Get the "hosts"-attribute and the according filtering to work. Include further Mail-services to use the LDAP users.. 12 Dec 2003 06:59:59 +0100 I would implement it if you could get md5 to work. admin at cs . Project homepage:. 14 Aug 2003 22:57:35 +0200 Section: Samba 2. Fri.com Fri.html Soon done with the main thing .xpasystems.com/ (new) pGina LDAPAuth: Comments I would be glad to hear about your opinion. 24 Nov 2003 14:58:00 +0100 A great document.sourceforge.x and LDAP SubSection: Samba: Introduction Paragraph: 1 There is a replacement GINA for Windows 2000/XP that allows several other methods of authentication. But the background color and the text color make it a bit difficult to read. Tue. 12 Aug 2003 12:17:23 +0200 Great document! Thank you very much for taking the time to write it. it will save me much time. I was more than happy to find such a practical howto on this topic thx really :) Mon. Thu. I will spread the word. (The form is at the end of this file.com/plugins/ldapauth. 16 Sep 2003 12:00:09 +0200 hi. I will spread the word. any corrections or additions. thank you very much. Congratulations! Kablan BOGNINI.xpasystems. montana . your howto is very good. bognini@hotmail.great doc! I put my own notes (based on this to a large extent) into doc format here:.) Tue. tough i let out most of the encryption stuff as this is just a private plaything.org/sambaldap/Samba_and_LDAP_on_Debian. edu .a \"quick cheat sheet\".html Tue. Check it out! //mawi Tue.net/ (old). thanks to your howto.2. 26 Aug 2003 16:00:55 +0200 As I mailed you before . 12 Aug 2003 09:20:25 +0200 Great document! Thank you very much for taking the time to write it. Amersdorfer. 7 Nov 2003 17:29:03 +0100 just got into ldap. 04-01-16: Added a few words to make it clearer.nice work.. I\’ll try to improve the relevant packages so that all the rebuilding steps can be left out.Mon. 9 Feb 2004 12:31:18 +0100 Nice document..ldif. [ Max. thanks! ] . 04-01-16: This was due to a copy’n’paste-error.org> Tue.ldif\’ containts lot\’s of spaces here. Wed. I\’ve been running LDAP for a while now.. thanks! . Mon. 11 Feb 2004 19:10:32 +0100 Thank you for your felpfool Page. Robert Fri.at/~max/ldap/basics_subnet. Added a link from my small tutorial on getting it running without all the bells and whistles you added.. It\’s just I\’m looking for :) Serz Tue.. Removing the emtry \’spaced\’ line by a linefeed solved that.subnet. 15 Dec 2003 02:57:09 +0100 Excellent article.. As a Debian developer I wanted to show what you can do without recompiling and all that of course. I found out that my system was not compiled with SSL.. The file \’. 17 Feb 2004 23:08:00 +0100 Interesting.. The file \’.. 10 Feb 2004 20:59:49 +0100 This is the good stuff. Torsten Landschoff <torsten@debian. Fixed.with slapadd -v -l populate. :) .ldif\’ containts lot\’s of spaces here. ] .at/~max/ldap/basics_subnet. Possible the \’Activate SSL:\’ step must be done right over the \’apt-get source slapd\’ step. But I got it working now!!! yeaaaaaa Mon. Thanks. 29 Dec 2003 15:34:46 +0100 A very helpfull page. I tried to start is as my slapd user but that didn\’t work out.. this gave me problems using . [ Max. 9 Jan 2004 21:49:29 +0100 Good document.subnet..I believe that the slapd package should be set on hold to. I wish I found this page a year a go.After the \’Start openLDAP\’ you can add that you have to be root again (using exit). I received the message \’attribute \’gidNumber\’ cannot have multiple values.I believe but not sure that the installing directions in the \’Install OpenLDAP\’ section is wrong. and I learned a couple things . .idealx. wenns in deiner Vatersprache zu lesen gewesen wäre. ] Wed.org/index. debhelper package is required in order to build libnss-ldap and libpam-ldap. 23 Feb 2004 17:05:54 +0100. 04-03-24: I checked again. it runs as "slapd" here. die einem beim Umstellen auf LDAP blüht. ] Fri.Das beste. The GPL is the \"General Public License\". not tested on woody yet): one: tell the server to use md5 (add password-hash {MD5} to the configuration) two: tell pam to crypt locally: (pam_password md5 and pam_md5 local) You might want to check this though. I was looking for something like this." before "-u slapd" to identify the rest of the line as command-line arguments to "slapd" (instead of "start-stop-daemon")? ] Wed. lord knows whatever else I changed while messing around :-) [ Max.. about the database poopulation: anyone have experience with populating the database with users/groups/aliases/. 20 Feb 2004 01:19:17 +0100 Great tutorial but just a few hangups on the way. [ Max. Mon. I\’ve recently come to the conclusion that I\’m looking after too many machines.. One question though. encore cent fois . [ Max.\" The problem is that you chose to rebuid slapd --with-tls. noted and changed accordingly. even htough it does if run \"by hand\". 18 Feb 2004 14:50:26 +0100 Thanks. 24 Mar 2004 02:03:41 +0100 Slapd seems not to drop root when started from /etc/init. s\’ il te plait . there\’s no such thing. Merci bien et. Also for libpam-ldap libpam-dev is required. 28 Apr 2004 01:33:58 +0200 Please leave a comment (but mind the netiquette). 18 Feb 2004 23:47:15 +0100 Please note that the GPL is not the \"GNU Public License\". 20 Mar 2004 18:57:39 +0100 I got md5 working as follows (on sarge. Nicht auszudenken.d/slapd with -u slapd. wie sichs gelesen hätte.) ] Fri. 04-03-25: "apt-get build-dep" is very helpful here.en. for multiple domains (sometimes distinct organisations)? Wed. das ich bisher über die Odyssee.Wed.html Sat. Thanks for the hint :).. Perhaps your script misses the " -. (Debian woody stable) [ Max. (I’ll have to integrate information about it in the HOWTO eventually. lesen und auch verwenden hab können. see its man-page. 04-02-19: Ooops. 20 Feb 2004 15:51:58 +0100 \"Unfortunately. 04-03-23: Removed multiple postings. spread out over the internet to not use LDAP. either Debian doesn\’t support it or I simply couldn\’t get this (MD5 crypt) thing to work. Without the use of ssl you can use md5 without problems. The problem seems to be in the linking of openssl (woody version) function crypt() before the usual libc crypt(). 04-07-15: Added this link. I’d suggest to run ‘lynx -dump. 6 Oct 2004 17:48:45 +0200 Great article. confirms by ldap set up is reasonable. 15 Oct 2004 00:46:47 +0200 Thanks. ] Fri. 04-10-22: Sorry. May I suggest that you make it downloadable in PDF (or ASCII) format? [ Max. . thanks for the hint.. 1 Oct 2004 01:38:42 +0200 Thanks for this! It\’s really cleared a few things up for me! Wed. 2 Sep 2004 17:39:13 +0200 Excellent document. like that. how about using LDAP Explorer Tool? It\’s a GPL win32/linux ported client to work with ldap. For PDF. thanks for the hint. works great. schade das nicht einmal auf Kerberos hingewiesen wird. man stolpert zwangsweise darüber wenn man es ldap länger einsetzt und es wäre schöner gewesen alles auf einmal zu migrieren Wed. 19 Jun 2004 22:53:47 +0200 Schönes HowTo. thank you ! Fri.at/~max/ > LDAP-Howto. 04-04-29: Removed multiple postings. 24 Jun 2004 17:52:58 +0200 realy great info. this is a very good document. 31 Aug 2004 03:54:06 +0200 Great document! Thanks Max I\’m Thanks for that you taking the time to write it.txt‘. I did get root\’s passwd usage fixed because of this document though. 25 Jun 2004 05:21:47 +0200 for ldap clients. as (unfortunately!) I didn’t use DocBook or sth.net are broken. 23 Jun 2004 11:38:12 +0200 Your link to LDAP Schema Viewer is broken: the target page does not exist. Fri.subnet. Thu. I don’t have any other version than this HTML/PHP. For ASCII. 04-07-15: Added a working link to this resource.. ] Thu. ] Thu. I’d suggest to "Print to file" and convert the PS to PDF. FYI: all links to mandrakesecure. 22 Jul 2004 11:20:15 +0200 Thanks Max for your nice doc. ] Sat.net/ [ Max.sourceforge. they seemed to have re-arranged their site and dropped all docs. do you have an experiance with samba 3 + ldap + squid + qmail . May I suggest phpldapadmin instead? It is in debian unstable: apt-get install phpldapadmin [ Max. single sing-on system Tue.[ Max. Just with a normal NIS-solution. its too exhausting to read this. DO NOT follow this howto blindly if you are running testing/unstable..Debian for slapd (Sarge) and it seems unclear if the problems with ldap and GNUTLS/OpenSSL-TLS have been resolved in the current version.. but can\’t do anything to tell the system what uidnumber. The crucial point i met is that pam_ldap is OK to verify the user password.Thu. I don’t know anything about Sarge yet.. 18 Nov 2004 06:32:25 +0100 what do you mean by this --> add "--with-ldapsam \" just before "--with-msdfs" yasanthau@pabnk. 04-12-08: Since this seems to be an issue: I’ll change the style-sheet as soon as I’ve got some time for "eye-candy" again :) . guess u don´t have to :-) Tue. you define that Samba is to be compiled with the capability to host its users-database using the LDAP-server. Thanks a lot Thu.. uhm. shell and home dir to use. 05-01-03: Apart from slapd having support for GNUTLS included (see /usr/share/doc/slapd/README. hope this helps . Are there known problems with slapd’s SSL-support? Thu. 16 Dec 2004 21:20:55 +0100 Update to my last comment Things have signifcantly changed with LDAP under Debian Sarge./configure’ etc. 25 Nov 2004 16:33:20 +0100 due to colour layout I decided to NOT read this page.. In . its too exhausting to >> read this. 4 Nov 2004 03:23:06 +0100 just great. [ Max. more common . In particular.). 14 Dec 2004 17:49:35 +0100 Thank you for your great writeup! I was reading through the changelog. etc. NSS on the other hand is responsible to perform the "lookups" for login-name/uidnumber.lk [ Max. 04-12-08: Removed multiple postings.. you can’t have one working properly without the other. Do you know what the current (Dec/04) state is? Thanks again for this document! David.org [ Max.). and it works now. gidnumber.harmel@afpamp. ] Thu.] [ Max. ] Thu. 04-12-08: By adding the line "--with-ldapsam \" in the rules file. newspapers have dark letters on light paper. etc. Am i misleding? (Is it possible to have it working with pam_ldap AND WITHOUT nss_ldap?) p. you add such options to the configure-script to state which functionality is to be included (and which is not). 25 Nov 2004 17:37:29 +0100 >> due to colour layout I decided to NOT read this page.Debian. thats the wayit should be [ Max. 22 Nov 2004 17:11:06 +0100 I tried to have login working with LDAP. do not delete the default directories and files. As to my knowledge. ] Mon. you will need both: PAM on the one hand is responsible for "authentication" (meaning to check whether a user is allowed to login using the given password. If you compiled Samba manually (using ’. 04-12-22: I changed the style-sheet to something. Your LDAP install will just not work and you will have to apt-get remove --purge and reinstall. shell. the system settings. but they were piecely done. This is still a very intersting write up and the suggestions on how to NOT run ldap as root are very valuable . Keep up the effort! syzygy. I have met some terrible documentations assuming that i knew everything and i had every tool.conf the database line must be the full path to /var/lib/ldap/.. 29 Apr 2005 00:44:55 +0200 I have look briefly at alot of documentations on this subject. ] [ Max. 1 Apr 2005 06:31:26 +0200 Amazing. and the module you want to use.. partly because I knew I\’d need to refer to some parts of it later. You explained things clearly. 25 Dec 2004 12:55:56 +0100 Hey. 04-12-22: Thanks for your information! I’m going to install a Sarge-based LDAP-server soon and am already wondering what it will be like . I\’d bookmarked this. 1 Apr 2005 07:44:13 +0200 Hey. Layout (look and feel) counts! Great job and thank you. the basics worked. I\’m not new to LDAP. and was exhausted! Almost didn\’t recognise the site when I re-visited. however. 26 Dec 2004 14:44:56 +0100 Very informative. I wish Linux documentations had this in general. Your doc is JUST GREAT. being the follower kind of person that I am. [ Max. 16 Feb 2005 23:17:37 +0100 Great documentation thank you very much. I can send you some of my experiences.but as the title at top says. and I tried to stick with the default packages and configurations . although a welcome change to the colour scheme! It\’s great to see you\’re maintaining this HOWTO. gchelmis@aueb.] Sat. and I see you\’re maintaining it. this is for WOODY. and I just followed it. .. but this page provides a lot of information. partly because the colour scheme was difficult to read (which I noticed you\’ve changed). Greetings. Your guide may not be up to date on latest Samba 3. what am I doing here? Some guy posted a link. After trying multiple times.gr Fri. using diffrent HOWTO\’s yours is the only one that worked and was up to date. Not Sarge. Thanks! Fri. Yeah I figured I\’d type something here as to be useful.. Though I didn’t complete the full process.. serving files via FTP and Samba to a bunch of clients (not as a Domain Controller).to the point and concise. Maybe. but anyways. 05-06-25 -.Update: As you can see. Fri. Thanks! If you appreciate. I\’m planning to get a Debian Sarge fileserver running in the next few weeks.slapd.. I added some notes on pre-release version of Sarge I worked with.. Hans van Kranenburg (The Netherlands) Sun. Because I want all the login/user stuff in LDAP I\’m looking around for information. it was easy to read interms of the fonts/layout/and style .au Wed. ] Thu.com/jxplorer/) is a LDAP browser deluxe. 23 Jun 2005 12:54:13 +0200 Thanks for a great document! Just set up ldap on sarge (stable). the libnss-ldap and libpam-ldap seem to support ssl without a hitch. It seemed to be a lot of work until I read this document. 21 Jun 2005 15:53:48 +0200 I would like to thank you for this great guide. Tank you.sh? thanks [ Max.csr\" I would like to ask what is .sh new. 05-06-25: This script is used to sign the "certificate signing request" to have a signed certificate be created. Here’s a copy of sign. 18 May 2005 09:14:02 +0200 JXplorer (. Peter Hopfgartner Tue. Tanks a lot for this doc! I\’ve been looking for a tutorial like this. had everything up & running in about an hour (samba with ldap authentication) Great job! Thanks! Thu.sh./sign. 24 May 2005 20:08:29 +0200 Max. Kalle Happonen . Also.Wed. Claudio Lobo (from Brazil) Tue. 23 Jun 2005 10:22:37 +0200 According to \". It really helped me out after a recent sarge install./sigh. It\’s really works. This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue reading from where you left off, or restart the preview.
https://www.scribd.com/document/48146940/debian-ldap
CC-MAIN-2016-40
en
refinedweb
/* Data structures associated with tracepoints in GDB. Copyright 1997, (TRACEPOINT_H) #define TRACEPOINT_H 1 /* The data structure for an action: */ struct action_line { struct action_line *next; char *action; }; /* The data structure for a tracepoint: */ struct tracepoint { struct tracepoint *next; int enabled_p; #if 0 /* Type of tracepoint. (MVS FIXME: needed?) */ enum tptype type; /* What to do with this tracepoint after we hit it MVS FIXME: needed?). */ enum tpdisp disposition; #endif /* Number assigned to distinguish tracepoints. */ int number; /* Address to trace at, or NULL if not an instruction tracepoint. (MVS ?) */ CORE_ADDR address; /* Line number of this address. Only matters if address is non-NULL. */ int line_number; /* Source file name of this address. Only matters if address is non-NULL. */ char *source_file; /* Number of times this tracepoint should single-step and collect additional data. */ long step_count; /* Number of times this tracepoint should be hit before disabling/ending. */ int pass_count; /* Chain of action lines to execute when this tracepoint is hit. */ struct action_line *actions; /* Conditional (MVS ?). */ struct expression *cond; /* String we used to set the tracepoint (malloc'd). Only matters if address is non-NULL. */ char *addr_string; /* Language we used to set the tracepoint. */ enum language language; /* Input radix we used to set the tracepoint. */ int input_radix; /* Count of the number of times this tracepoint was taken, dumped with the info, but not used for anything else. Useful for seeing how many times you hit a tracepoint prior to the program aborting, so you can back up to just before the abort. */ int hit_count; /* Thread number for thread-specific tracepoint, or -1 if don't care. */ int thread; /* BFD section, in case of overlays: no, I don't know if tracepoints are really gonna work with overlays. */ asection *section; }; enum actionline_type { BADLINE = -1, GENERIC = 0, END = 1, STEPPING = 2 }; /* The tracepoint chain of all tracepoints. */ extern struct tracepoint *tracepoint_chain; extern unsigned long trace_running_p; /* A hook used to notify the UI of tracepoint operations. */ void (*deprecated_create_tracepoint_hook) (struct tracepoint *); void (*deprecated_delete_tracepoint_hook) (struct tracepoint *); void (*deprecated_modify_tracepoint_hook) (struct tracepoint *); void (*deprecated_trace_find_hook) (char *arg, int from_tty); void (*deprecated_trace_start_stop_hook) (int start, int from_tty); struct tracepoint *get_tracepoint_by_number (char **, int, int); int get_traceframe_number (void); void free_actions (struct tracepoint *); enum actionline_type validate_actionline (char **, struct tracepoint *); /* Walk the following statement or block through all tracepoints. ALL_TRACEPOINTS_SAFE does so even if the statment deletes the current breakpoint. */ #define ALL_TRACEPOINTS(t) for (t = tracepoint_chain; t; t = t->next) #define ALL_TRACEPOINTS_SAFE(t,tmp) \ for (t = tracepoint_chain; \ t ? (tmp = t->next, 1) : 0;\ t = tmp) #endif /* TRACEPOINT_H */
http://opensource.apple.com/source/gdb/gdb-1344/src/gdb/tracepoint.h
CC-MAIN-2016-40
en
refinedweb
Do you use bootstrap and Django together? If so, I have 2 powerful templatetags which may come in handy when developing your bootstrap enabled project. One filter turns a BooleanField result into a compatible bootstrap icon. The other filter can be used to display a specific amount of stars or any icon you can think of, I use this for rating stars. If you do not have already, create a new template library, which is basically a templatetags Python package inside your app. In this Python package, create a new Python file which will contain the filter, that you can load into your templates. In this file, enter in the following code: from django import template from django.utils.safestring import mark_safe register = template.Library() @register.filter def yesnoicon(value):</i>' % icon) @register.filter def ratingicon(value): return mark_safe('<i class="icon-star"></i>' * value) There you have it, now in your templates you can load the template library and use the filters like so: {% load bootstrap_filters %} {{object.is_available|yesnoicon}} {{object.rating|ratingicon}} A super simple filter interface to make using bootstrap in Django that much easier. Enjoy!
http://pythondiary.com/blog/Aug.24,2012/bootstrap-template-filters-django.html
CC-MAIN-2016-40
en
refinedweb
QOTW: "Python the language doesn't try to satisfy all tastes in language design equally." - Guido van Rossum Is it really necesary to explicitely close open files? Tips for using Unicode text (specially with non-Latin alphabets): How are class attributes exactly inherited, and how they relate to instance attributes: Automatic attribute assignment during class inheritance: Composition: delegating all method calls to the contained object may be tedious to write -- alternatives? Calling all bases implementation of overriden methods in cases of multiple inheritance: Sometimes, a scope for local variables smaller than a function is desired: More ways to define an empty function that you ever imagined: There is real advantage in putting the main program body inside a function: Using several Python interpreters in a multithreaded C++ program: Best way to store global application parameters: A portable way to open a document using its associated application: Getting your first job as a Python programmer: Idea: a namespace object (nested attribute container): Idea: multithreading might be easier if most objects were immutable: ========================================================================.
https://mail.python.org/pipermail/python-list/2009-September/551915.html
CC-MAIN-2016-40
en
refinedweb
Add Geronimo plan to JSR-77 API for deployed modules/configurations ------------------------------------------------------------------- Key: GERONIMO-1522 URL: Project: Geronimo Type: Improvement Components: management Versions: 1.0 Reporter: Aaron Mulder Fix For: 1.1 The basic JSR-77 API lets you load the J2EE deployment descriptor for a module. We should add a Geronimo method to get the Geronimo deployment plan for the module. David J and I would lean toward providing the "corrected" plan (after namespace translation, etc.). We should probably omit or encrypt the contents of anything we can identify as a password (e.g. the content of an element that contains only text and also contains the attribute "name" with a value including "password" or something like that). -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: - For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200601.mbox/%3C664381717.1137870401973.JavaMail.jira@ajax.apache.org%3E
CC-MAIN-2016-40
en
refinedweb
Opened 11 years ago Closed 7 years ago Last modified 3 years ago #694 closed enhancement (wontfix) [patch] TEMPLATE_DIRS should allow project root relative paths Description (last modified by ) Many people develop their projects on one machine and deploy on another. The two (or more) different computers may not have the same filesystem layout and may not even be running the same OS. As such, it'd be nice if the requirement for absolute paths could be eliminated. For my current django project, I have something like: project/ apps/ templates/ It'd be nice if the templates directory could be specified relative to the project root. Change History (12) comment:1 Changed 11 years ago by comment:2 Changed 11 years ago by (Fixed formatting in the description.) TEMPLATE_DIRS is a setting, and each of your Django installations should have a separate settings file. This is how you designate different database passwords for different servers, for instance. The solution is to use separate settings files for your multiple environments. Also, if that doesn't float your boat, you can use the "app_directories" template loader. See . comment:3 Changed 11 years ago by And one should allways remember that settings files are just python: so you can just use "from basesettings import *" to pull in common settings that are the same accross different projects. Project settings really only need to set what is special for exactly this project. Another idiom I find quite useful: import os TEMPLATE_DIRS = ( os.path.expanduser('~/project/something/templates'), ) works quite nice in projects that move to other servers and users when going production, but don't change in their structure itself. comment:4 Changed 11 years ago by Actually, adrian, that's a really poor excuse for not implementing this feature; if you specify a relative path in your TEMPLATE_DIRS the only reasonable interpretation is that it's relative to either your project-root or your settings-file. And, the different settings argument is hardly relevant, relative TEMPLATE_DIRS are also broken for single-installation projects. Besides, it's so simple to implement: --- django/conf/settings.py (/django/trunk) (revision 3075) +++ django/conf/settings.py (/django/patches/project-template_dirs) (revision 3075) @@ -44,6 +44,21 @@ setting_value = (setting_value,) # In case the user forgot the comma. setattr(me, setting, setting_value) +# TEMPLATE_DIRS are relative to the directory containing the top-level +# module of DJANGO_SETTINGS_MODULE. +if '.' in me.SETTINGS_MODULE: + name, _ = me.SETTINGS_MODULE.split('.', 1) + project_mod = __import__(name, '', '', ['']) +else: + project_mod = mod +project_root = os.path.dirname(project_mod.__file__) + +if os.path.isdir(project_root): + me.TEMPLATE_DIRS = [ + os.path.abspath(os.path.join(project_root, os.path.expanduser(path))) + for path in me.TEMPLATE_DIRS + ] + # save DJANGO_SETTINGS_MODULE in case anyone in the future cares me.SETTINGS_MODULE = os.environ.get(ENVIRONMENT_VARIABLE, '') comment:5 Changed 11 years ago by Because the settings files are pure Python, you can calculate the relative paths directly in the settings files. I'm marking this (again) as a wontfix. comment:6 Changed 8 years ago by The posted solution, to have a settings file for every installation location, is insufficient for our needs. That solution creates two distinct problems: - Every developer on the project is forced to have one or more settings files as they check out the project source code into different folders. Or each developer hacks their settings.py file to match their own situation which leads to merge conflicts. - Requiring manual changes between production and test environments increases the risk of problems that only occur during production deployment. We implemented the alternative solution from adrian's last comment and I will share it here for quick reference for other users who run into this particular limitation. In settings.py, import os PROJECT_PATH = os.path.abspath(os.path.split(__file__)[0]) TEMPLATE_DIRS = ( os.path.join(PROJECT_PATH, "templates"), ) If this ticket is revisited in the future it is my opinion that it should be implemented as suggested to promote 'batteries included' and 'keep it simple' philosophies in Django. comment:7 Changed 7 years ago by I'm newbie to django, and this page is the result of one of my few searches about django. It means, IMHO, maybe many newbies can confuse on this issue like me. my question is: "why we should use relative path for templates directory setting? It is always same in relative(for many cases) but it is different in absolute.(and how many times is it placed on outside of project?) then why we all do add three same lines into settings.py everytime? (import os, project_path=..., os.path.join...)" can I get some reasonable reason? "app_directories template loader" is not a good solution because the look&feel is site specific rather than application specific. (app_directories template loader is usefull for drop-in applications instead.) please excuse my poor English. thank you for your consideration. comment:8 follow-up: 10 Changed 7 years ago by This was marked "wontfix" by a core developer. If you disagree, start a thread on the django-developers list. comment:9 Changed 6 years ago by Please reconsider this issue, it is #1 having more than 100 votes on Implementing this will improve user experience and simplify deployment. The fact that people can change the code is not an excuse not to do the right thing. comment:10 Changed 6 years ago by Quoting myself: If you disagree, start a thread on the django-developers list. So if you feel strongly about this, you know what to do. comment:11 Changed 3 years ago by comment:12 Changed 3 years ago by Has this been fixed? It appears that didn't format as well as I had planned. The idea is that project/ is the parent of apps/ and templates/.
https://code.djangoproject.com/ticket/694
CC-MAIN-2016-40
en
refinedweb
I have a class called ArrayLinearList public class ArrayLinearList implements LinearList { protected Object [] element; protected int size; I have made a subclass of ArrayLinearList public class ArrayEventsList extends ArrayLinearList { } A screen will come up asking the user what option they want. For example: Press 'A' to add an item to the list Press 'b' to remove item from the list When the user presses 'A' it calls my method addItem() public void addItem() { ArrayLinearList.add(item); //item is whatever string the user just entered from the scanner for example "mobile" //addItem is then supposed to add mobile to the list } How would I go about adding an item the user enters on screen and add it to the ArrayLinkedList? Thank you
http://www.javaprogrammingforums.com/java-theory-questions/18387-arraylinearlist-itss-sub-class-help.html
CC-MAIN-2013-20
en
refinedweb