At 12/15/2010 11:20 PM, Eric Blake wrote:
> On 12/14/2010 07:34 PM, Wen Congyang wrote:
>
> In addition to Hu's comments, and the fact that you are probably going
> to revise the exposed interface anyway, here are some additional points.
>
>> * src/util/timer.c src/util/timer.h src/util/timer_linux.c src/util/timer_win32.c:
>>   timer implementation
>> * src/Makefile.am: build timer
>> * src/libvirt_private.syms: Export public functions
>> * src/libvirt.c: Initialize timer
>> * configure.ac: check the functions in librt used by timer
>>
>> Signed-off-by: Wen Congyang <wency cn fujitsu com>
>>
>> -EXTRA_DIST += util/threads-pthread.c util/threads-win32.c
>> +EXTRA_DIST += util/threads-pthread.c util/threads-win32.c \
>> +              util/timer_linux.c
>
> timer-win32.c? Also, I'd go with timer-linux.c, not timer_linux.c.
>
>> +# timer.h
>> +get_clock;
>
> Bad idea to pollute the namespace with get_clock; better would be
> something like virGetClock.
>
>> +virNewTimer;
>> +virFreeTimer;
>> +virModTimer;
>> +virDelTimer;
>>
>> # usb.h
>>
>> +static virTimerPtr timer_list = NULL;
>> +static void realarm_timer(void);
>> +static void __realarm_timer(uint64_t);
>
> It is dangerous to declare functions in the __ namespace, since that is
> reserved for libc and friends.
>
>> +uint64_t get_clock(void)
>> +{
>> +    struct timespec ts;
>> +
>> +    clock_gettime(CLOCK_MONOTONIC, &ts);
>> +    return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
>
> You probably ought to check for overflow here. Dealing with raw
> nanoseconds is rather fine-grained; is it any better to go with micro or
> even milliseconds, or does libvirt really require something as precise
> as nanosecond timeouts?

Thanks for your comment.
ts.tv_sec * 1000000000ULL will overflow only after the host OS has been
running for about 585 years, so in practice it does not overflow.
I think we do not require nanosecond accuracy; millisecond accuracy is
good enough.
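As a quick sanity check of the 585-year figure claimed above (a calculation added for illustration, not part of the original thread):

```python
# A uint64_t nanosecond counter of the form ts.tv_sec * 1e9 + ts.tv_nsec
# wraps only after roughly 585 years of uptime, so the multiplication in
# get_clock() is safe in practice.

UINT64_MAX = 2**64 - 1
NS_PER_SEC = 1_000_000_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years_until_overflow = UINT64_MAX / NS_PER_SEC / SECONDS_PER_YEAR
print(f"uint64 nanosecond clock overflows after ~{years_until_overflow:.0f} years")
```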
https://www.redhat.com/archives/libvir-list/2010-December/msg00613.html
will 0.5.1

A friendly python hipchat bot

Meet Will. Will is the friendliest, easiest-to-teach bot you've ever used. He works on hipchat, in rooms and 1-1 chats.

He makes teaching your chat bot this simple:

```python
@respond_to("hi")
def say_hello(self, message):
    self.say("oh, hello!")
```

Will was built by Steven Skoczen in the Greenkahuna Skunkworks, and has been contributed to by lots of folks.

Will has docs, including a quickstart and lots of screenshots. Check them out!

Downloads (All Versions):
- 225 downloads in the last day
- 730 downloads in the last week
- 796 downloads in the last month
https://pypi.python.org/pypi/will/0.5.1
From: Eric Niebler (eric_at_[hidden])
Date: 2006-04-18 01:50:08

Peter Dimov wrote:
>

You're right, because in xpressive alone I use counted_base in no less than 3 places where the virtual destructor would be needless overhead! Clearly, it can't be all that uncommon.

> and Andreas wouldn't use it because of
> the lack of bool NeedsLocking parameter.

I thought NeedsLocking was reasonable until I noticed that atomic_count is a typedef for long if BOOST_HAS_THREADS is not defined. That should be good enough, IMO.

> Anyway, ...
>
>> Tried that. Under certain circumstances, VC7.1 wasn't finding the
>> friend functions. Hence my counted_base_access hack, and the
>> intrusive_ptr_add_ref/release functions at namespace scope.
>
> ... I'm interested in those certain circumstances.

The following program exposes the problem for me with vc7.1, vc8 and gcc 3.4.4. If you comment out the friend functions and uncomment the global free functions, the problem goes away. I don't understand it -- is some name look-up subtlety at play here?

    #include <boost/intrusive_ptr.hpp>

    template<typename Derived>
    struct counted_base
    {
        friend void intrusive_ptr_add_ref(counted_base<Derived> *that){}
        friend void intrusive_ptr_release(counted_base<Derived> *that){}
    };

    //template<typename Derived>
    //void intrusive_ptr_add_ref(counted_base<Derived> *that){}
    //template<typename Derived>
    //void intrusive_ptr_release(counted_base<Derived> *that){}

    template<typename T>
    struct impl : counted_base<impl<T> >
    {
    };

    template<typename T>
    struct outer
    {
        boost::intrusive_ptr<impl<T> > p;
    };

    int main()
    {
        outer<int> impl;
        return 0;
    }

--
Eric Niebler
Boost Consulting

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2006/04/103293.php
Blog about debugging and compiler features of C#.

When you say "6. The Type parameter used must be defined on the method and not the parent class." what do you mean by "Type parameter"? I'm not sure which parameter this refers to. Is it the type of the instance parameter, or some other parameter?

That method is generic, so it has a type parameter list (the names between < and > are the type parameters). So when he says "the Type parameter" he is referring to the T. You can also make generic classes; that's when the class itself has a type parameter list. Number six is saying that with extension methods you can't use any type parameters from the containing class, only type parameters from the (generic) extension method.

There's no way to add static methods or new operators using extension methods, is there?

All extension methods are static methods and can be called as such. No, extension methods can't be operators; they are always bound as methods: Object.ExtensionName(arguments ...)

Sure, the extension methods themselves are static methods, but they appear as instance methods on the type being extended. Is there a technical reason why you couldn't extend types with new static methods (and operators), or was this just a design decision to support the minimal amount necessary to make LINQ work nicely?

Extension methods were added so that we can call the methods with instance syntax. This is useful for writing code that reads more like the intent of the programmer, making the code more comprehensible and easier to change and maintain. So we can write order.Search((o) => o.Name == custName).SendToBilling(); It is important to note that this is just a change in the binding rules for instance methods and is supported by IntelliSense and debugging, which use the compiler's binding code.
Similar changes to binding rules for operators would be a breaking change for existing code, and would make client code confusing, since a + b would work where the extensions were imported and nowhere else. Also, relational operators can only be overloaded in pairs (ooh, my head is hurting with the possible validation checks for this one, and we are only getting started). Combine this with the fact that completion lists would be of no help to you; all in all, this is solvable but very problematic. Static methods... well, since extension methods are static to begin with, does it matter which type you are calling the method on (the extension type or your own type)?

You've been kicked (a good thing) - Trackback from DotNetKicks.com

I opened a CodePlex project for an extension methods library.

Welcome to the 27th Community Convergence. I use this column to keep you informed of events in the C#

Please explain to me where this thing came from. We can write client code like del1<int> d1 = t.func<int>;

Sorry for the incomplete example... basically we are creating a delegate and calling the extension method Exec on it. The important information missing is the definition of the method func that created the delegate:

    public class Test
    {
        public static void Main()
        {
            Test t = new Test();
            del1<int> d1 = t.func<int>;
            Console.WriteLine(d1.Exec(100));
        }

        public int func() { return 10; }

        public T func<T>(T val) { return val; }
    }

We started using some of the new C# 3.0 features with our functional web testing tool InCisif.net, including extension methods and lambda expressions. Our goal is to be able to simplify the test code. And it is great. See our posts. Feedback welcome.

I did a French webcast about extension methods.

re: With extension methods, this == null is fine.

Hi, can you please explain how we can add images using extension methods?

avantika, I am not sure I understand. Extension methods add useful call semantics; it's up to the user to implement whatever they want in the method body.
Thanks for the thorough description. Just what I was looking for.

Hello, today I tried to add some mathematical-notation methods to my classes to work on voxel volume data. I wanted to implement an operator * which works on the volume. Is it possible to use extension methods to add operator functions?
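For comparison only (this is not C#, and not from the original post): in Python, the closest analogue to an extension method is attaching a plain function to an existing class after the fact, so call sites can use instance syntax. The Volume class and the scaled helper below are invented for the sketch:

```python
class Volume:
    """A toy stand-in for a user-defined class (hypothetical example)."""
    def __init__(self, voxels):
        self.voxels = voxels

# "Extension method" analogue: a plain function attached to the class
# after the fact, so callers can use instance syntax.
def scaled(self, factor):
    return Volume([v * factor for v in self.voxels])

Volume.scaled = scaled  # binding happens at attribute lookup, not at compile time

v = Volume([1, 2, 3])
print(v.scaled(2).voxels)  # [2, 4, 6]
```

Unlike C#'s binding-rule approach, this mutates the class object itself, and Python will even let you attach operator hooks such as `__mul__` the same way, which is exactly what the C# compiler rules out for extension methods, as discussed above.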
http://blogs.msdn.com/sreekarc/archive/2007/04/25/extension-methods.aspx
Clearpath and Christie demo 3D video game with robots

By Ryan Gariepy

For our recent "hack week" we teamed up with one of the most innovative visual technology companies in the world, Christie, to make a 3D video game with robots. How did it all come together? As seems to be the norm, let's start with the awesome:

Inspiration

In late October, a video from MIT was making the rounds at Clearpath. Since we usually have a lot of robots on hand and like doing interesting demos, we reached out to Shayegan to see if we could get some background information. He provided some insights into the challenges he faced when implementing this, and I was convinced that our team could pull a similar demo together so we could see how it worked in person. Here's what MIT's setup looked like:

(Video: Melanie Gonick, MIT News. Additional footage and computer animations: Shayegan Omidshafiei)

At the most fundamental level, this demo needs a robot fleet, a way to determine each robot's location, a computer to render the visualization, and projectors to display it. In MIT's case, they have a combination of iRobot Creates and quadrotors as their robot fleet, a VICON motion capture system for determining robot location, a Windows computer running ROS in a virtual machine as well as projector edge-blending software, and a set of 6 projectors for the lab.

There were three things we wanted to improve on for our demo. First, we wanted to run all-ROS, and use RViz for visualizations. That removes the performance hit from running RViz in a VM, and also means that any visualization plugins we came up with could be used anywhere Clearpath uses ROS. Second, we wanted to avoid using the VICON system. Though we have a VICON system on hand in our lab and are big fans, we were already using it for some long-term navigation characterization at the time, so it wasn't available to us. Finally, we wanted to make the demo more interactive.

Improvement #1: All-ROS, all the time!

To get this taken care of, we needed a way to either run edge-blending software on Linux, or to use projectors that did the edge blending themselves.
Fortunately, something that might be a little-known fact to our regular audience is that Christie is about 10 minutes away from Clearpath HQ, and they make some of the best digital projectors in the world that, yes, do edge blending and more. A few emails back and forth, and they were in! For this project, Christie arrived with four Christie HD14K-M 14,000 lumen DLP projectors.

Improvement #2: No motion capture

Getting rid of the motion capture system was even easier. We already have localization and mapping software for our robots, and the Jackals we had on hand already had LIDARs mounted. It was a relatively simple matter to map out the world we'd operate in and share the map between the two robots.

Now, I will make a slight aside here... Multi-robot operation in ROS is still not yet what one would call smooth. There are a few good solutions, but not one that is clearly "the one" to use. Since all of our work here had to fit into a week, we took the quick way out. We configured all of the robots to talk to a single ROS Master running on the computer connected to the projectors, and used namespaces to ensure the data for each robot stayed tied to that robot. The resulting architecture was as follows:

All we had to do to sync the robot position and the projector position was to start training the map from a marked (0,0,0) point on the floor.

Improvement #3: More interactivity

This was the fun part. We had two robots and everyone loves video games, so we wrote a new package that uses Python (via rospy), GDAL, and Shapely to create a real-life PvP game with our Jackals. Each Jackal was controlled by a person and had the usual features we all expect from video games: weapons, recharging shields, hitpoints, and sound effects. All of the data was rendered and projected in real time along with our robots' understanding of their environment. And, as a final bonus, we used our existing path planning code to create an entire "AI" for the robots.
Since the robots already know where they are and how to plan paths, this part was done in literally minutes.

The real question: how do I get one for myself?

Robots: Obviously, we sell these. I'd personally like to see this redone with Grizzlies.

Projectors: I'm sure there are open-source options or other prosumer options similar to how MIT did it, but if you want it done really well, Christie will be happy to help.

Software: There is an experimental RViz branch here which enables four extra output windows from RViz. The majority of the on-robot software is either standard with the Jackal or is slightly modified to accommodate the multi-robot situation (and can also be found at our Jackal github repository). We intend on contributing our RViz plugins back, but they too are a little messy. Fortunately, there's a good general tutorial here on creating new plugins. The game itself is very messy code, so we're still keeping it hidden for now. Sorry! If you're a large school or a research group, please get in touch directly and we'll see how we can help.

Happy gaming!

Follow @ClearpathRobots

November 11, 2017
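The namespacing trick described above (one ROS master, with each robot's topics pushed under its own namespace) can be sketched in a few lines of plain Python. This is a hypothetical illustration of ROS name resolution, not Clearpath's code:

```python
# With one ROS master, each robot's topics are kept separate by resolving
# relative names under a per-robot namespace, e.g. /jackal1/scan vs
# /jackal2/scan. Absolute names (leading "/") bypass the namespace, which
# is how shared topics like /tf stay global.

def namespaced(robot_ns, topic):
    """Resolve a relative topic name under a robot's namespace."""
    if topic.startswith("/"):  # absolute names are left untouched
        return topic
    return f"/{robot_ns.strip('/')}/{topic}"

for robot in ("jackal1", "jackal2"):
    print(namespaced(robot, "scan"), namespaced(robot, "cmd_vel"))
```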
http://robohub.org/clearpath-and-christie-demo-3d-video-game-with-robots/
Another day, another short article. Today we will focus on validation of numeric query parameters. Everyone values their time, so here is the TL;DR: each query parameter comes in as a string, so it's not possible to validate numeric params directly. Simple solution: use the @Type() decorator from the class-transformer library, and declare the numeric field as Number, i.e. @Type(() => Number).

Let's take an example URL with a few query parameters:

/users?country=1&name=joe

Our controller would probably look like this:

```typescript
@Controller('users')
class UsersController {
  @Get()
  getUsers(@Query() queryParams) {}
}
```

Ok, at the beginning we've extracted our query parameters with the @Query() decorator. Time to impose some validation on them. Here is our validation class:

```typescript
class GetUsersQuery {
  @IsInt()
  country: number;

  @IsString()
  name: string;
}
```

We've defined very basic validation constraints for the country and name fields. Then we need to make a small change in our controller method:

```typescript
@Controller('users')
class UsersController {
  @Get()
  getUsers(@Query() queryParams: GetUsersQuery) {
    console.log(queryParams);
  }
}
```

Ok. Time to check if our validation works correctly. Let's try to send a GET request to the previously mentioned URL. Everything should be just fine, right? Well, not really; here is what we got:

```
{
  "statusCode": 400,
  "message": [
    "country must be a number conforming to the specified constraints"
  ],
  "error": "Bad Request"
}
```

What? But country is a numeric field! It's even an integer! Unfortunately, not for our application. Let's take one step back, remove the validation, and check what kind of parameters the query object contains:

```
{ country: '1', name: 'joe' }
```

Ok, now you can see that each field is passed as a string value, even an integer field. What can we do about it? After all, we need to validate whether the country field is an integer, or not, right? Once again, the class-transformer library has a simple solution for us.
Use the @Type decorator, and declare the country field as Number:

```typescript
class GetUsersQuery {
  @IsInt()
  @Type(() => Number)
  country: number;

  @IsString()
  name: string;
}
```

Now our request will pass validation, and the query object will look like this:

```
{ country: 1, name: 'joe' }
```

The country field has a numeric type now. When we send an invalid value, for example a string, we'll get:

```
{
  "statusCode": 400,
  "message": [
    "country must be an integer number"
  ],
  "error": "Bad Request"
}
```

But an integer-type parameter will pass. Finally, our integer validation works correctly. Hope this short article helps you validate numeric values passed through URL parameters. Cheers!

Discussion (11)

Just a little issue with this way of transforming & validating query params: try @Type(() => Boolean), and you'll have to use @Transform and do it yourself.

Hi! Thank you for your comment. I'm not sure what you want to achieve with your code example. Could you give me an example of a URL and the final object with the values you want to get? Regards!

Just try with some boolean param, or even an array:

```typescript
export class FilterQuery {
  @IsOptional()
  @IsBoolean()
  @Type(() => Boolean)
  prova?: boolean
}
```

Setting 'prova' to 'false' will cast it to true. As you said, those query params are just strings that must be deserialized first in some way. So, in some cases like Number it just works, but in others it follows JS conversion rules... I partially solved it by using @Transform instead of @Type and looking at the Nest.js parse implementations: github.com/nestjs/nest/tree/99ee3f...

You are totally right! And this is the reason why this article is titled "Validating numeric query params" 😁 I think it's the most common issue Nest users encounter (at least in my experience). For an array of values in query params I use (exactly as you said) the Transform decorator from the class-transformer package. For bools I prefer to pass a boolean in numeric form (0 or 1) or (depending on context) a simple string like yes/no.
But I avoid true/false, because as you noted, it can result in some false positives. Anyway, thank you for your comment; I'm sure somebody will find it helpful! Cheers!

Thank you for the article; it's my first time with Nest.js and I was looking for how to parse & validate params declaratively and without too much hassle. I hadn't noticed "numeric" in the title, sorry 😅 How do you manage arrays, i.e. numeric[]? At the moment I use ParseArray (which is also the only Parse* function that can natively handle missing query params).

I've struggled with arrays a lot, and ended up with something like this. I'm not very proud of it, but it works for me. I decorate the expected field with a @Transform-based decorator, and the mapStringToNumber implementation it calls allows me to pass an array of numbers, or just a string of numbers separated by commas. It also cleans up non-numeric values. So, when you pass e.g. categories=1,a,2, you will get an array of [1,2]. I'm not the biggest fan of using extra pipes for validating; I'd rather keep my validation in one place (in the validation class with class-validator in this case).

If the example is not working in your code, please check whether you enabled ValidationPipe. Find more here: docs.nestjs.com/techniques/validat...

Furthermore, I discovered it is required to pass in the following options to ValidationPipe, else the type of the query param value at runtime will still be string, even though it may pass the class-validator validations.

It would be great if we could do something like:

```typescript
import { IsEmail } from 'class-validator'

@Query('email', IsEmail) email: string
```

Awesome, this is working properly now... thanks buddy.

Nice to hear that!
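The commenter's mapStringToNumber code was lost in extraction. As a rough, language-neutral illustration of the behavior described (split on commas, drop non-numeric entries), here is the same logic sketched in Python; this is not the commenter's TypeScript, and the function name is borrowed only for clarity:

```python
def map_string_to_number(value):
    """Turn '1,a,2' (or a list of strings) into [1, 2], dropping non-numeric parts."""
    parts = value.split(",") if isinstance(value, str) else value
    result = []
    for part in parts:
        try:
            result.append(int(part))
        except (TypeError, ValueError):
            pass  # silently drop values like 'a', mirroring the described cleanup
    return result

print(map_string_to_number("1,a,2"))  # [1, 2]
```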
https://dev.to/avantar/validating-numeric-query-parameters-in-nestjs-gk9
In object-oriented design, a class is the most specific type of an object in relation to a specific layer. Programming languages that support classes all subtly differ in their support for various class-related features. Most support various forms of class inheritance. Many languages also support features providing encapsulation, such as access specifiers.

With classes, GUI items that are similar to windows (such as dialog boxes) can simply inherit most of their functionality and data structures from the window class. The programmer then need only add code to the dialog class that is unique to its operation. Indeed, GUIs are a very common and useful application of classes, and GUI programming is generally much easier with a good class framework.

Instantiation

A class is used to create new instances (objects) by instantiating the class. Instances of a class share the same set of attributes, yet may differ in what those attributes contain. For example, a class "Person" would describe the attributes common to all instances of the Person class. Each person generally has his or her own values for those attributes.

Interfaces and methods

Note: the term "interface" here isn't referring to a Java interface, although the two are closely related. Objects define their interaction with the outside world through the methods that they expose. A method, or instance method, is a subroutine (function) with the special property that it has access to data stored in an object (instance). Methods that manipulate the data of the object and perform tasks are sometimes described as behavior. Methods form the object's interface with the outside world; they access the data of an instance. Languages that support class inheritance also allow classes to inherit interfaces from the classes that they are derived from. Methods that are not exposed in this way are hidden from client code and thus are not part of the interface.
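The ideas above (a shared attribute structure, per-instance values, and methods as the object's interface) can be sketched in a few lines of Python; the Person class and its fields are illustrative only:

```python
class Person:
    """All instances share the same attribute structure, but hold their own values."""
    def __init__(self, name, age):
        self.name = name   # attribute: the same slot exists in every instance
        self.age = age     # ...but each instance stores its own value

    def greet(self):       # method: part of the instance's public interface
        return f"Hi, I'm {self.name}"

alice = Person("Alice", 30)   # instantiation creates distinct objects
bob = Person("Bob", 25)
print(alice.greet(), "/", bob.greet())
```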
The object-oriented programming methodology is designed in such a way that the operations of any interface of a class are usually chosen to be independent of each other. This results in a client-server (or layered) design where servers do not depend in any way on the clients. An interface places no requirement for clients to invoke the operations of one interface in any particular order. This approach has the benefit that client code can assume that the operations of an interface are available for use whenever the client holds a valid reference to the object.

Structure of a class

Along with having an interface, a class contains a description of the structure of the data stored in the instances of the class. The data is partitioned into attributes (or properties, fields, data members). Going back to the television set example, the myriad attributes, such as size and whether it supports color, together comprise its structure. A class represents the full description of a television, including its attributes (structure) and buttons (interface). The state of an instance's data is stored in some resource, such as memory or a file. The storage is assumed to be located in a specific location, such that it is possible to access the instance through references to the identity of the instance. However, the actual storage location associated with an instance may change with time. In such situations, the identity of the object does not change. The state is encapsulated, and every access to the state occurs through methods. This restriction is what distinguishes classes from plain data types; that is, a class does not allow use of all possible values for the state of the object, and instead allows only those values that are well-defined by the semantics of the intended use of the data type. The set of supported (public) methods often implicitly establishes an invariant. Some programming languages support specification of invariants as part of the definition of the class, and enforce them through the type system.
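A minimal Python sketch of the invariant idea, reusing the article's television example (the attribute names and the registration counter are invented): construction fails if the invariant cannot be established, and a class attribute holds state shared by all instances.

```python
class Television:
    # Class attribute: shared across all instances (a "static attribute").
    instances_created = 0

    def __init__(self, size_inches):
        # The constructor establishes the class invariant (a positive
        # screen size here), failing if it cannot hold.
        if size_inches <= 0:
            raise ValueError("size must be positive")
        self.size_inches = size_inches
        Television.instances_created += 1

tv = Television(42)
print(Television.instances_created)  # 1

try:
    Television(-5)  # invariant violated: construction fails
except ValueError as err:
    print("rejected:", err)
```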
Encapsulation of state is necessary for being able to enforce the invariants of the class. Some languages allow an implementation of a class to specify constructor (or initializer) and destructor (or finalizer) methods that specify how instances of the class are created and destroyed, respectively. A constructor that takes arguments can be used to create an instance from passed-in data. The main purpose of a constructor is to establish the invariant of the class, failing if the invariant isn't valid. The main purpose of a destructor is to destroy the identity of the instance, invalidating any references in the process. Constructors and destructors are often used to reserve and release, respectively, resources associated with the object. In some languages, a destructor can return a value which can then be used to obtain a public representation (transfer encoding) of an instance of a class and simultaneously destroy the copy of the instance stored in the current thread's memory.

A class may also contain static attributes or class attributes, which contain data that are specific to the class yet are common to all instances of the class. If the class itself is treated as an instance of a hypothetical metaclass, static attributes and static methods would be instance attributes and instance methods of that metaclass.

Run-time representation of classes

Information hiding and encapsulation

Many languages support the concept of information hiding and encapsulation, typically with access specifiers for class members. Access specifiers specify constraints on who can access which class members. Some access specifiers may also control how classes inherit such constraints. Their primary purpose is to separate the interface of a class from its implementation. A common set of access specifiers that many object-oriented languages support is:

- private: accessible only within the class itself
- protected: accessible within the class and its subclasses
- public: accessible from any client code

Note that although many languages support the above access specifiers, their semantics may subtly differ in each.
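Python has no enforced access specifiers, but the private-data-plus-public-accessor pattern described here can be approximated with its conventions; BankAccount is a hypothetical example (a leading underscore marks the field as private by convention, and a property exposes controlled read access):

```python
class BankAccount:
    def __init__(self, balance):
        self._balance = balance          # "private" by convention only

    @property
    def balance(self):                   # public accessor: inspect the data...
        return self._balance

    def deposit(self, amount):           # ...and controlled mutation, which
        if amount <= 0:                  # lets the class keep its invariant
            raise ValueError("deposit must be positive")
        self._balance += amount

acct = BankAccount(100)
acct.deposit(50)
print(acct.balance)  # 150
```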
A common usage of access specifiers is to separate the internal data structures of a class from its interface; that is, the internal data structures are private. Public accessor methods can be used to inspect or alter such private data. The various object-oriented programming languages enforce this to various degrees. For example, the Java language does not allow client code to access the private data of a class at all, whereas in languages like Objective-C or Perl client code can do whatever it wants. In the C++ language, private methods are visible but not accessible in the interface; however, they are commonly made invisible by explicitly declaring fully abstract classes that represent the interfaces of the class.

Access specifiers do not necessarily control visibility, in that even private members may be visible to client code. In some languages, an inaccessible but visible member may be referred to at run-time (e.g. a pointer to it can be returned from member functions), but all attempts to use it by referring to the name of the member from client code will be prevented by the type checker. Object-oriented design uses the access specifiers in conjunction with careful design of public method implementations to enforce class invariants. Access specifiers are intended to protect against accidental use of members by clients, but are not suitable for run-time protection of an object's data.

Associations

In object-oriented design and in UML, an association between two classes is a type of link between the corresponding objects. A (two-way) association between classes A and B describes a relationship between each object of class A and some objects of class B, and vice versa. Associations are often named with a verb, such as "subscribes-to". An association role type describes the role type of an instance of a class when the instance participates in an association. An association role type is related to each end of the association.
A role describes an instance of a class from the point of view of a situation in which the instance participates in the association. Role types are collections of roles (instances) grouped by their similar properties. For example, a "subscriber" role type describes the property common to instances of the class "Person" when they participate in a "subscribes-to" relationship with the class "Magazine". Also, a "Magazine" has the "subscribed magazine" role type when the subscribers subscribe-to it. Association role multiplicity describes how many instances correspond to each instance of the other class(es) of the association. Common multiplicities are "0..1", "1..1", "1..*" and "0..*", where the "*" specifies any number of instances. There are some special kinds of associations between classes.

Composition

Composition between class A and class B describes a has-a relationship where instances of class B have a shorter or equal lifetime compared to the corresponding instances of the enclosing class. Class B is said to be a part of class A. This is often implemented in programming languages by allocating the data storage of instances of class A to contain a representation of instances of class B. Aggregation is a variation of composition that describes that instances of a class are part of instances of the other class, but the constraint on the lifetime of the instances is not required. The implementation of aggregation is often via a pointer or reference to the contained instance. In both cases, method implementations of the enclosing class can invoke methods of the part class. A common example of aggregation is a list class. When a list's lifetime is over, it does not necessarily mean the lifetimes of the objects within the list are also over.

Inheritance

Another type of class association is inheritance, which involves subclasses and superclasses, also known respectively as child classes (or derived classes) and parent classes (or base classes).
If [car] were a class, then [station wagon] and [mini-van] might be two subclasses. If [Button] is a subclass of [Control], then all buttons are controls. In other words, inheritance is an is-a relationship between two classes. Subclasses usually consist of several kinds of modifications (customizations) to their respective superclasses: addition of new instance variables, addition of new methods, and overriding of existing methods to support the new instance variables. Conceptually, a superclass should be considered a common part of its subclasses. This factoring of commonality is one mechanism for providing reuse. At the same time, extending a superclass by modifying the existing class is also likely to narrow its applicability in various situations. In object-oriented design, a careful balance between the applicability and functionality of superclasses should be considered.

Subclassing is different from subtyping in that subtyping deals with common behaviour whereas subclassing is concerned with common structure.

Some programming languages (for example, C++) allow multiple inheritance: they allow a child class to have more than one parent class. This technique has been criticized by some for its unnecessary complexity and for being difficult to implement efficiently, though some projects have certainly benefited from its use. Java, for example, has no multiple inheritance, as its designers felt that it would add unnecessary complexity; Java instead allows inheriting from multiple pure abstract classes (called interfaces in Java).

Sub- and superclasses are considered to exist within a hierarchy defined by the inheritance relationship. If multiple inheritance is allowed, this hierarchy is a directed acyclic graph (or DAG for short); otherwise it is a tree. The hierarchy has classes as nodes and inheritance relationships as links. The levels of this hierarchy are called layers or levels of abstraction.
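Python supports the multiple inheritance described above; a small sketch with invented classes, showing the is-a relationship and the linearized class hierarchy (the DAG, flattened into a method resolution order):

```python
class Control:
    def render(self):
        return "control"

class Clickable:
    def click(self):
        return "clicked"

class Button(Control, Clickable):   # multiple inheritance: the class
    pass                            # hierarchy becomes a DAG, not a tree

b = Button()
print(isinstance(b, Control))                 # True: a Button is-a Control
print([c.__name__ for c in Button.__mro__])   # linearized ancestor order
```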
Classes in the same level are more likely to be associated than classes in different levels. There are two slightly different points of view as to whether subclasses of the same class are required to be disjoint. Sometimes, subclasses of a particular class are considered to be completely disjoint. That is, every instance of a class has exactly one most-derived class, which is a subclass of every class that the instance has. This view does not allow dynamic change of an object's class, as objects are assumed to be created with a fixed most-derived class. The basis for not allowing changes to an object's class is that the class is a compile-time type, which does not usually change at runtime, and polymorphism is utilized for any dynamic change to the object's behavior, so this ability is not necessary. A design that does not need to perform changes to an object's type will also be more robust and easier to use from the point of view of the users of the class.

From another point of view, subclasses are not required to be disjoint. Then there is no concept of a most-derived class, and all types in the inheritance hierarchy that are types of the instance are considered to be equally types of the instance. This view is based on a dynamic classification of objects, such that an object may change its class at runtime. The object's class is then considered to be its current structure, but changes to it are allowed. The basis for allowing changes to an object's class is the perceived inconvenience of replacing an instance with another instance of a different type, since all references to the original instance would have to be changed to refer to the new instance. When changing the object's class instead, references to the existing instance do not need to be replaced. However, this ability is not readily available in all programming languages.
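Python is one of the languages where this second, dynamic view is directly available: an object's class can be reassigned at runtime, and existing references pick up the new behavior without being replaced. The classes below are toy examples:

```python
class Caterpillar:
    def move(self):
        return "crawl"

class Butterfly:
    def move(self):
        return "fly"

bug = Caterpillar()
ref = bug                  # a second reference to the same instance
print(bug.move())          # crawl

bug.__class__ = Butterfly  # dynamic reclassification: no new object is created
print(ref.move())          # fly: existing references need no updating
```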
This analysis depends on the proposition that dynamic changes to object structure are common; this may or may not be the case in practice.

When specifying an abstract class, the programmer is referring to a class which has elements that are meant to be implemented by inheritance. The abstraction of the class methods to be implemented by the subclasses is meant to simplify software development; it also enables the programmer to focus on planning and design. Most object-oriented programming languages allow the programmer to specify which classes are considered abstract and will not allow these to be instantiated. For example, in Java the keyword abstract is used. In C++, an abstract class is a class having at least one abstract method (a pure virtual function in C++ parlance).

One common type of class is an inner class or nested class, which is a class defined within another class. Since it involves two classes, this can also be treated as another type of class association. The methods of an inner class can access static methods of the enclosing class(es). An inner class is typically not associated with instances of the enclosing class, i.e. an inner class is not instantiated along with its enclosing class. Depending on the language, it may or may not be possible to refer to the class from outside the enclosing class. A related concept is inner types (a.k.a. inner data types or nested types), which is a generalization of the concept of inner classes. C++ is an example of a language that supports both inner classes and inner types (via typedef declarations).

Another type is a local class, which is a class defined within a procedure or function. This limits references to the class name to within the scope where the class is declared. Depending on the semantic rules of the language, there may be additional restrictions on local classes compared to non-local ones.
One common restriction is to disallow local class methods to access local variables of the enclosing function. For example, in C++, a local class may refer to static variables declared within its enclosing function, but may not access the function's automatic variables.

Named vs. anonymous classes

In most languages, a class is bound to a name or identifier upon definition. However, some languages allow classes to be defined without names. Such a class is called an anonymous class (analogous to named vs. anonymous functions).

Metaclasses

Metaclasses are classes whose instances are classes. A metaclass describes a common structure of a collection of classes. A metaclass can implement a design pattern or describe a shorthand for particular kinds of classes, and metaclasses are often used to describe frameworks. In some languages, such as Python, Ruby, Java, and Smalltalk, a class is also an object; thus each class is an instance of the unique metaclass that is built into the language. For example, in Objective-C, each object and class is an instance of NSObject. The Common Lisp Object System (CLOS) provides metaobject protocols (MOPs) to implement those classes and metaclasses.

Partial classes

Partial classes are classes that can be split over multiple definitions (typically over multiple files), making it easier to deal with large quantities of code. At compile time the partial classes are grouped together, so they make no logical difference to the output. An example of the use of partial classes is the separation of user interface logic and processing logic. A primary benefit of partial classes is that they allow different programmers to work on different parts of the same class at the same time. They also make automatically generated code easier to interpret, as it is separated from other code into a partial class.

C++
Example 2

class MyAbstractClass {
public:
    virtual void MyVirtualMethod() = 0;
};

class MyConcreteClass : public MyAbstractClass {
public:
    void MyVirtualMethod() {
        // do something
    }
};

An object of class MyAbstractClass cannot be created because the function MyVirtualMethod has not been defined (the = 0 is C++ syntax for a pure virtual function, a function that must be defined in any derived concrete class but is not defined in the abstract base class). The MyConcreteClass class is a concrete class because its functions (in this case, only one function) have been declared and implemented.

C#

ECMAScript

ECMAScript (and JavaScript) doesn't directly support classes; it is a prototype-based language. However, classes are often emulated in practice, as shown in the following example.

Example 1

// Constructor
function Hello(s) {
    // Private variable
    var what = String(s);
    // Public method
    this.say = function() {
        print("Hello " + what + "!\n");
    };
}

var hello_world = new Hello("world");
hello_world.say();

This example is a port of the first C++ example.

Java

Example 1

public class Example1 {
    // This is a Java class; it automatically extends the class Object
    public static void main(String args[]) {
        System.out.println("Hello world!");
    }
}

This example shows a simple hello world program.

Example 2

public class Example2 extends Example1 {
    // This is a class that extends the class created in Example 1.
    protected int data;

    public Example2() {
        // This is a constructor for the class. It does not have a return type.
        data = 1;
    }

    public int getData() {
        return data;
    }

    public void setData(int d) {
        data = d;
    }
}

This example shows a class that has a defined constructor, one member (data), an accessor method (getData) and a modifier method (setData) for that member. It extends the previous example's class. Note that in Java all classes automatically extend the class Object, which allows you to write generic code to deal with objects of any type.

Objective-C
PHP

Example 1

<?php
class A {
    public function foo() {
        if (isset($this)) {
            echo '$this is defined (';
            echo get_class($this);
            echo ")\n";
        } else {
            echo "\$this is not defined.\n";
        }
    }
}
?>

Example 2

<?php
class DateObject {
    public function getTime() {
        /** For E_STRICT mode:
         *
         * date_default_timezone_set('CET');
         */
        return(time());
    }

    public function getDate() {
        return(date('jS F, Y', $this->getTime()));
    }
}
?>

Python

Ruby

Example 1

class Hello
  def hello
    "Hello world!"
  end
end

A Ruby class Hello, with one method, hello, which returns "Hello world!".

Visual Basic .NET

Example 1

Class Hello
    Private what As String

    Sub New(ByVal s As String)
        what = s
    End Sub

    Sub Say()
        MessageBox.Show("Hello " & what)
    End Sub
End Class

Sub Main()
    Dim h As New Hello("Foobar")
    h.Say()
End Sub

This example is a port of the C++ example above. It demonstrates how to make a class named Hello with a private property named what, shows the proper use of a constructor, and has a public method named Say. It also demonstrates how to instantiate the class and call the Say method.
http://www.answers.com/topic/class-computer-science
Jim Jewett <report@bugs.python.org> wrote:
> ???)

OK, as a basis for discussion I've added:

I didn't mention the main reason why _decimal.c and libmpdec are in a flat directory: building the library first and then the module from the library led to problems on at least Windows and AIX. That's why I started to treat all libmpdec files as part of the module, list them as dependencies in setup.py and let distutils figure everything out. Distutils also can figure out automatically if a Mac OS build happens to be a "universal" build and things like that. The build process is very well tested by now and it took quite a while to figure everything out, so I'd be reluctant to change the flat hierarchy.

> python/ -> extended module tests
>
> I would really expect that to still be under tests, and I would expect
> a directory called python to contain code written in python, or at
> least python bindings. Could you explain?

The python/ directory contains deccheck.py, formathelper.py etc.

> "Infinity", "InFinItY", "iNF" are all allowed by the specification.
>
> OK; so is io.c part of the library, or part of the python binding?

I see a potential source of confusion: io.c is firmly part of the library.

> Good enough, though I would rather see that as a comment near the assembly.

Comments on how to enforce an ANSI build (much slower!) are in LIBTEST.txt and now also in FILEMAP.txt.

It's the opposite: names from decimal.py starting with an underscore that are not in _decimal are removed. If I don't use that trick, I end up with about 50 additional symbols from decimal.py:

import decimal  # the C version
dir(decimal)
...
'_ContextManager', '_Infinity', '_Log10Memoize', ...
https://bugs.python.org/msg155070
Casts also have a cost. Casts that can be resolved at compile time can be eliminated by the compiler (and are eliminated by the JDK compiler). Consider the two lines:

Integer i = new Integer(3);
Integer j = (Integer) i;

These two lines are compiled as if they were written as:

Integer i = new Integer(3);
Integer j = i;

On the other hand, casts not resolvable at compile time must be executed at runtime. But note that an instanceof test cannot be fully resolved at compile time:

Integer integer = new Integer(3);
if (integer instanceof Integer) {
    Integer j = integer;
}

The test in the if statement here cannot be resolved by most compilers because instanceof can return false if the first operand (integer) is null. (A more intelligent compiler might resolve this particular case by determining that integer was definitely not null for this code fragment, but most compilers are not that sophisticated.)

Primitive data type casts (ints, bytes, etc.) are quicker than object data type casts because there is no test involved, only a straightforward data conversion. But a primitive data type cast is still a runtime operation and has an associated cost. Object type casts basically confirm that the object is of the required type. It appears that a VM with a JIT compiler is capable of reducing the cost of some casts to practically nothing. The following test, when run under JDK 1.2 without a JIT, shows object casts having a small but measurable cost.
With the JIT compiler running, the cast has no measurable effect (see Table 6-5):

package tuning.exception;

public class CastTest {
    public static void main(String[] args) {
        Integer i = new Integer(3);
        int REPEAT = 500000000;
        Integer res;

        long time = System.currentTimeMillis();
        for (int j = REPEAT; j > 0; j--)
            res = test1(i);
        time = System.currentTimeMillis() - time;
        System.out.println("test1(i) took " + time);

        time = System.currentTimeMillis();
        for (int j = REPEAT; j > 0; j--)
            res = test2(i);
        time = System.currentTimeMillis() - time;
        System.out.println("test2(i) took " + time);

        // ... and the same test for test2(i) and test1(i)
    }

    public static Integer test1(Object o) {
        Integer i = (Integer) o;
        return i;
    }

    public static Integer test2(Integer o) {
        Integer i = (Integer) o;
        return i;
    }
}

However, the cost of an object type cast is not constant: it depends on the depth of the hierarchy and whether the casting type is an interface or a class. Interfaces are generally more expensive to use in casting, and the further back in the hierarchy (and ordering of interfaces in the class definition), the longer the cast takes to execute. Remember, though: never change the design of the application for minor performance gains.

It is best to avoid casts whenever possible; for example, use type-specific collection classes instead of generic collection classes. Rather than use a standard List to store a list of Strings, you gain better performance with a StringList class. You should always try to type the variable as precisely as possible. In Chapter 9, you can see that by rewriting a sort implementation to eliminate casts, the sorting time can be halved.

If a variable needs casting several times, cast once and save the object into a temporary variable of the cast type. Use that temporary variable instead of repeatedly casting; avoid the following kind of code:

if (obj instanceof Something)
    return ((Something)obj).x + ((Something)obj).y + ((Something)obj).z;
...
Instead, use a temporary variable:[6]

[6] This is a special case of common subexpression elimination. See Section 3.8.2.14 in Chapter 3.

if (obj instanceof Something) {
    Something something = (Something) obj;
    return something.x + something.y + something.z;
}
...

The revised code is also more readable. In tight loops, you may need to evaluate the cost of repeatedly assigning values to a temporary variable (see Chapter 7).
http://etutorials.org/Programming/Java+performance+tuning/Chapter+6.+Exceptions+Assertions+Casts+and+Variables/6.3+Casts/
A common question from programmers who have an intermediate amount of experience with the STL is, "How do I write an STL allocator?". Writing an STL allocator is not especially difficult - only two member functions are interesting, allocate() and deallocate(). However, STL allocators must satisfy a number of other requirements (given by section 20.1.5 of the International Standard for C++, ISO/IEC 14882:2003), and the code to do so takes roughly 80 editor lines. Figuring out the code from the requirements can be overwhelming, but once you see the code, it's easy.

One thing some programmers try is to derive from std::allocator. I recommend against this; it's more trouble than it's worth. Looking at std::allocator's implementation is also painful.

Therefore, I've written an example STL allocator, whose purpose in life is to wrap malloc() and free(), which I've imaginatively called Mallocator. I've carefully implemented all of the integer overflow checks and so forth that would be required in real production code. And I've exhaustively commented which parts of Mallocator are boilerplate (common to all, virtually all, or all stateless allocators), and which parts you would have to customize. Hopefully, this should demystify the implementation of STL allocators:

C:\Temp>type mallocator.cpp

// The following headers are required for all allocators.
#include <stddef.h>  // Required for size_t and ptrdiff_t and NULL
#include <new>       // Required for placement new and std::bad_alloc
#include <stdexcept> // Required for std::length_error

// The following headers contain stuff that Mallocator uses.
#include <stdlib.h>  // For malloc() and free()
#include <iostream>  // For std::cout
#include <ostream>   // For std::endl

// The following headers contain stuff that main() uses.
#include <list>      // For std::list

template <typename T> class Mallocator {
public:

    // The following will be the same for virtually all allocators.
    typedef T * pointer;
    typedef const T * const_pointer;
    typedef T& reference;
    typedef const T& const_reference;
    typedef T value_type;
    typedef size_t size_type;
    typedef ptrdiff_t difference_type;

    T * address(T& r) const {
        return &r;
    }

    const T * address(const T& s) const {
        return &s;
    }

    size_t max_size() const {
        // The following has been carefully written to be independent of
        // the definition of size_t and to avoid signed/unsigned warnings.
        return (static_cast<size_t>(0) - static_cast<size_t>(1)) / sizeof(T);
    }

    // The following must be the same for all allocators.
    template <typename U> struct rebind {
        typedef Mallocator<U> other;
    };

    bool operator!=(const Mallocator& other) const {
        return !(*this == other);
    }

    void construct(T * const p, const T& t) const {
        void * const pv = static_cast<void *>(p);
        new (pv) T(t);
    }

    void destroy(T * const p) const; // Defined below.

    // Returns true if and only if storage allocated from *this
    // can be deallocated from other, and vice versa.
    // Always returns true for stateless allocators.
    bool operator==(const Mallocator& other) const {
        return true;
    }

    // Default constructor, copy constructor, rebinding constructor, and destructor.
    // Empty for stateless allocators.
    Mallocator() { }
    Mallocator(const Mallocator&) { }
    template <typename U> Mallocator(const Mallocator<U>&) { }
    ~Mallocator() { }

    // The following will be different for each allocator.
    T * allocate(const size_t n) const {
        // Mallocator prints a diagnostic message to demonstrate
        // what it's doing. Real allocators won't do this.
        std::cout << "Allocating " << n << (n == 1 ? " object" : " objects")
            << " of size " << sizeof(T) << "." << std::endl;

        // The return value of allocate(0) is unspecified.
        // Mallocator returns NULL in order to avoid depending
        // on malloc(0)'s implementation-defined behavior
        // (the implementation can define malloc(0) to return NULL,
        // in which case the bad_alloc check below would fire).
        // All allocators can return NULL in this case.
        if (n == 0) {
            return NULL;
        }

        // All allocators should contain an integer overflow check.
        // The Standardization Committee recommends that std::length_error
        // be thrown in the case of integer overflow.
        if (n > max_size()) {
            throw std::length_error("Mallocator<T>::allocate() - Integer overflow.");
        }

        // Mallocator wraps malloc().
        void * const pv = malloc(n * sizeof(T));

        // Allocators should throw std::bad_alloc in the case of memory allocation failure.
        if (pv == NULL) {
            throw std::bad_alloc();
        }

        return static_cast<T *>(pv);
    }

    void deallocate(T * const p, const size_t n) const {
        // Mallocator prints a diagnostic message to demonstrate
        // what it's doing. Real allocators won't do this.
        std::cout << "Deallocating " << n << (n == 1 ? " object" : " objects")
            << " of size " << sizeof(T) << "." << std::endl;

        // Mallocator wraps free().
        free(p);
    }

    // The following will be the same for all allocators that ignore hints.
    template <typename U> T * allocate(const size_t n, const U * /* const hint */) const {
        return allocate(n);
    }

    // Allocators are not required to be assignable, so
    // all allocators should have a private unimplemented
    // assignment operator. Note that this will trigger the
    // off-by-default (enabled under /Wall) warning C4626
    // "assignment operator could not be generated because a
    // base class assignment operator is inaccessible" within
    // the STL headers, but that warning is useless.
private:
    Mallocator& operator=(const Mallocator&);
};

// A compiler bug causes it to believe that p->~T() doesn't reference p.
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable: 4100) // unreferenced formal parameter
#endif

// The definition of destroy() must be the same for all allocators.
template <typename T> void Mallocator<T>::destroy(T * const p) const {
    p->~T();
}

#ifdef _MSC_VER
#pragma warning(pop)
#endif

int main() {
    using namespace std;

    cout << "Constructing l:" << endl;
    list<int, Mallocator<int> > l;

    cout << endl << "l.push_back(1729):" << endl;
    l.push_back(1729);

    cout << endl << "l.push_back(2161):" << endl;
    l.push_back(2161);

    cout << endl;
    for (list<int, Mallocator<int> >::const_iterator i = l.begin(); i != l.end(); ++i) {
        cout << "Element: " << *i << endl;
    }

    cout << endl << "Destroying l:" << endl;
}

C:\Temp>cl /EHsc /nologo /W4 mallocator.cpp
mallocator.cpp

C:\Temp>mallocator
Constructing l:
Allocating 1 object of size 4.
Allocating 1 object of size 12.

l.push_back(1729):
l.push_back(2161):
Element: 1729
Element: 2161
Destroying l:
Deallocating 1 object of size 12.
Deallocating 1 object of size 4.

To explain this output: the object of size 4 is the aux object, and the objects of size 12 are doubly-linked list nodes. In VC's implementation, default-constructed lists allocate a sentinel node, which is where end iterators point. (End iterators have to be decrementable, so they can't point at NULL.)

Stephan T. Lavavej
Visual C++ Libraries Developer

Why this:

return (static_cast<size_t>(0) - static_cast<size_t>(1)) / sizeof(T);

instead of this:

return std::numeric_limits<std::size_t>::max() / sizeof(T);

Also, <cstddef>, <cstdlib>, and std::size_t etc. should be used!

[phrosty]
> why this: [...] instead of this: [...]

That works (the only difference is that it's not an integral constant expression). Reading 4.7 [conv.integral]/2, static_cast<size_t>(-1) is also guaranteed to work. This is what our Standard Library implementation does (see <xmemory>).

> also, <cstddef>, <cstdlib>, and std::size_t etc should be used!

I used to be very careful about that. C++98 had a splendid dream wherein <cfoo> would declare everything within namespace std, and <foo.h> would include <cfoo> and then drag everything into the global namespace with using-declarations.
(This is D.5 [depr.c.headers].) This was ignored by lots of implementers (some of which had very little control over the C Standard Library headers). So, C++0x has been changed to match reality. As of the N2723 Working Paper, <cfoo> is guaranteed to declare everything within namespace std, and may or may not declare things within the global namespace. <foo.h> is the opposite: it is guaranteed to declare everything within the global namespace, and may or may not declare things within namespace std.

In reality and in C++0x, including <cfoo> is no safeguard against everything getting declared in the global namespace anyways. That's why I'm ceasing to bother with <cfoo>. This was Library Issue 456. (C++0x still deprecates the <foo.h> headers from the C Standard Library, which is hilarious.)

Nice info for beginners.

> One thing some programmers try is to derive from std::allocator. I recommend against this; it's more trouble than it's worth.

Why do you recommend against it? I've just written a stateful allocator and I've found it more appealing to derive from the standard allocator because it saved me from writing boilerplate code, and for my allocator I do like to rely on the standard, especially when it comes to the typedefs and construction/destruction (only allocate() and deallocate() need to be overwritten).

[klaus triendl]
> Why do you recommend against it?

People usually screw up, especially when it comes to rebinding. When they screw up, they usually find it hard to understand what went wrong. And when they succeed (or, when their bugs are subtle enough to go unnoticed for the time being), people usually find it hard to understand the finished product, since they can't see half of the code. This adds to the undeserved aura of mystery.

Small, clean and explained. Thanks!

[Stephan T. Lavavej]
> [...]

Yes, you're right, it's easier to screw up.
But as my allocator affects all our projects, I tried to be really careful: I tried to write it as generically as possible, read a lot about allocators and what the standard requires (not to mention my discovery that stateful allocators are not trivial at all), and double-checked my code as to whether I reimplemented all that you mentioned.

> default-constructed lists allocate a sentinel node

Not too related to allocators I guess, but I was wondering what the VC implementation does for associative containers regarding the end iterator - its implementation, or where it points to. Can it change after elements have been inserted/erased etc.? The reason I ask is for some code that requires concurrent access, and whether lock scoping should watch out for end iterator comparison.

I've had occasion to conduct many security assessments: blackbox/binary, whitebox/source, all languages, managed or otherwise, for major financial institutions, pure software development teams, or anybody else. One thing is clear: the fundamentals in life are what matter; oddly, or fundamentally ;), the problem is how deceptively simple an activity may be - something we take for granted all the time, like using memory, or adding and subtracting. How can this be so hard?

I do have a particular fondness for this class of flaw - memory allocator flaws - quite an effective means for exploitation, as they would typically perform immediate consumption of supplied payloads. This particular allocator, though quite capable in many cases, has some failure cases, aside from the high performance toll where each list item is atomically requested (perhaps this is my C upbringing expecting a sizeof(foo)*elements request to be executed, so this perf characteristic may well be by design for this class).

Having been bitten by the bad news bug, I'm not going to post the flaw in its entirety, even though this is not a shipping product; it's just too hard to explain the IT industry semantics to anybody who does not live it.
But this should demonstrate that it is real, though I'm sure post-regression the team here will have a number of caveats to re-assert() this blog's thesis; however, from right _NOW_ until that point, we're in the spooky, the famous, and just plain nebulous window of opportunity.

Here's the repro, compiled by MSVC9 SP1 with default settings for a console project. No matter debug or release, CPU (d)word size, character encoding, whatever - even large-memory aware - I tested all combinations with no effect other than the AV. My locallydefinedtype is totally simple: fixed-length static array fields.

int _tmain(int argc, _TCHAR* argv[])
{
    list<locallydefinedtype, Mallocator<locallydefinedtype> >::const_iterator it;
    try {
        list<locallydefinedtype, Mallocator<locallydefinedtype>> l(1);
        it = l.begin();
        memset((void*) it->a, 0x42, sizeof(it->a));
    } catch(...) {
        printf("catched\n");
    }
    printf("value = %x", it->n);
    _getch();
    return(0);
}

I'll say it again: this list allocator is quite capable, and I would say easily a 1%'er of upper quality in what is written. I'll also restate my particular fondness for this category of flaw; personally I think we should move 1 farther from 0 to prevent this sort of issue ;) These guys just do not play well together.

Here are the last few instructions at the time of this failure case:

00ad94ce 8bc8            mov ecx,eax
00ad94d0 c1e010          shl eax,10h
00ad94d3 03c1            add eax,ecx
00ad94d5 8bca            mov ecx,edx
00ad94d7 83e203          and edx,3
00ad94da c1e902          shr ecx,2
00ad94dd 7406            je functor!memset+0x65 (00ad94e5)
00ad94df f3ab            rep stos dword ptr es:[edi]
functor!memset+0x5f:
00ad94df f3ab            rep stos dword ptr es:[edi]

Here's the memory:

00ca6fee 42 42 42 42 42 42 42 42 42 42  BBBBBBBBBB
00ca6ff8 42 42 42 42 42 42 42 42 ?? ??  BBBBBBBB??

That's all for now; I'll post the team the repro and I suppose there may be some changes. Even with /analyze, the only additional warning is "1>c:\fs\test\functor\functor.cpp(227) : warning C6031: Return value ignored: '_getch'", and that ain't it...
Anyhow, take care,
Shane Macaulay

Just in case anybody's wondering: for sure that is the exact code. The reason why it's running out of the "functor.cpp" project at line 227 is that I came to this blog post by pure chance while working on some TR1-implemented proxy classes that are thunking (and saving my butt) with a side-by-side hosted CLR app and some activation context issues I had punted for a while. So for sure that is the exact code. I really should have edited that down more, but, you know, hindsight and all...

[e]
> Not too related to allocator I guess, but I was
> wondering what the VC implementation does for
> associative containers regarding end iterator
> implementation/or where it points to.
> Can it change after elements have been
> inserted/erased etc?

That's a good question - you're asking about invalidation. C++03 23.1.2 [lib.associative.reqmts]/8: "The insert members shall not affect the validity of iterators and references to the container, and the erase members shall invalidate only iterators and references to the erased elements." std::list behaves identically, as it's also a node-based container (23.2.2.3 [lib.list.modifiers]/1,3).

[ShaneK2]
> Aside from the high performance toll, where each
> list item is atomically requested (perhaps this is
> my C upbringing expecting sizeof(foo)*elements
> request to be executed, so this perf characteristic
> may well be by design for this class).

This doesn't make sense. By design, std::list is a node-based container, just like any linked list in C and C++. If you want a block-based container, you know where to find std::vector and std::deque. I chose std::list for my demonstration precisely because it performs multiple dynamic memory allocations.

> Having been bitten by the bad news bug, I'm not
> going to post the flaw in its entirety

This isn't helpful.
If my Mallocator contains a bug, which I don't believe is the case (*especially* given that it just wraps malloc() and free() and does nothing tricky), you need to demonstrate it with a self-contained repro. In fact, the bug almost certainly lies within your own code:

> memset((void*) it->a, 0x42, sizeof(it->a));

It appears that you intend to scribble over it->a with 0x42 bytes. However, you've C-casted it->a to void *. This is almost certainly wrong. The first argument to memset() should be &it->a, making a cast unnecessary. There is also a const-correctness violation here (as "it" is a const_iterator).

Thanks,
Stephan T. Lavavej, Visual C++ Libraries Developer

Stephan, perhaps it's best to take this offline. Feel free to email me, or get in touch: shane at theurldoman.com. I don't like full disclosure of flaws of any kind until I'm sure the effect is limited. Also, the forum appears to be limiting my submission.

In the middle of providing you a couple more examples and exploring the edge cases where these failures occur, I did start to get errors back from the build process. The error, in your code however, was more a failure in the compiler than a syntactical lapse; in the age of dynamically/runtime-emitted applications, it would be interesting to apply this condition in various forms in a CLR app. I eventually was able to get this error, and can send you the various versions - the silently failing ones and the one that generated this other failure:

Please choose the Technical Support command on the Visual C++ Help menu, or open the Technical Support help file for more information
LINK : fatal error LNK1000 ExceptionAddress = 5E38FBBE (5E300000) "D:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\c2.dll"

I'll likely send the bug onto VC tech support if I do not hear back. Taking your critique into account, I factored this a bit; it should ensure less focus on unimportant details.
I'll leave any additional comments/responses I have for a direct email / designated support communiqué.

All fields of the local type are "unsigned char MEMBER[STATIC-SIZE];":

struct locallydefinedtype {
    unsigned char a[2];
    ...
    unsigned char n[0xf];
};

int i = 0;
list<locallydefinedtype, Mallocator<locallydefinedtype> >::iterator it;
locallydefinedtype *p = NULL, ValuOnStak;
list<locallydefinedtype, Mallocator<locallydefinedtype>> l;

// to get some heap allocations around the overwrite to ensure we see our overwrite
memset(&ValuOnStak, 0, sizeof(ValuOnStak));
l.push_front(ValuOnStak);
l.push_back(ValuOnStak);
l.pop_back();
p = static_cast<locallydefinedtype *>(&*it);
printf("p is @ %p\n", p);
for(i=0; i < sizeof(p->a)-1; p->a[i++] = 'A') p->a[i]='\0';
for(i=0; i < sizeof(p->b)-1; p->b[i++] = 'B') p->b[i]='\0';
for(i=0; i < sizeof(p->c)-1; p->c[i++] = 'C') p->c[i]='\0';
for(i=0; i < sizeof(p->d)-1; p->d[i++] = 'D') p->d[i]='\0';
for(i=0; i < sizeof(p->e)-1; p->e[i++] = 'E') p->e[i]='\0';
printf("%s %s %s %s %s", p->a, p->b, p->c, p->d, p->e);
printf("\nwere out\n");
return(_getch());

@Shane

In your first example attempting to show a bug, the problem is that in the call to printf() just before the _getch() call you are dereferencing an iterator which has been invalidated (the list<> the iterator was pointing into was local to the try{} block). The memset() call looks OK.

I'm not sure what problem your second example is supposed to show - it compiles and runs OK on my setup (which has not been updated to SP1 yet). From the LNK1000 error message, it looks like there's maybe a defect in the linker causing it to crash - that may or may not be because the linker is getting confused over some problem in the Mallocator<>, but it would require a bit more work to verify that it's not simply a linker bug unrelated to Mallocator<>. But I wonder - is the code you posted the code that crashes the linker? It seems too simple to show a linker defect by itself.
I've tried your tests on several compilers (including non-Microsoft ones) and I haven't noticed a problem with the Mallocator<>. However, so far only VC 9.0 crashes due to dereferencing the invalid iterator (which is what I consider the preferred behavior - at least for a non-optimized build).

[mikeb]
> the list<> the iterator was pointing into was local to the try{} block

Excellent catch! I totally missed that massive bug (which _HAS_ITERATOR_DEBUGGING would have detected).

> The memset() call looks OK.

Only because it->a happened to refer to an array, and the name of an array decays into a pointer to its first element. If it->a had referred to a non-array, then terrible badness would have ensued, as I explained. Either way, casting the first argument to memset() to void * is exceedingly dangerous and buys you nothing.
http://blogs.msdn.com/b/vcblog/archive/2008/08/28/the-mallocator.aspx
finalize_callback

The callback that is fired once the execution loop is exited, but before glview is destroyed.

Synopsis:

#include <glview/glview.h>

typedef void(* finalize_callback)(void *callback_data);

Since: BlackBerry 10.0.0

Library: libglview (For the qcc command, use the -l glview option to link against this library)

Description:

The finalize_callback function is invoked within the glview_loop() function after the application is given an exit event but before the graphic stack is destroyed. Once the finalize_callback function returns, the graphic stack will be taken down and glview will be destroyed.

Last modified: 2014-05-14
http://developer.blackberry.com/native/reference/core/com.qnx.doc.glview.lib_ref/topic/finalize_callback.html
IRC log of tagmem on 2005-04-19 Timestamps are in UTC. 16:59:25 [RRSAgent] RRSAgent has joined #tagmem 16:59:25 [RRSAgent] logging to 16:59:31 [Norm] We'll be thin on the ground, take it up with the chair :-) 17:00:19 [DanC] thin on the ground... yeah... if all the potential regrets turn into actual regrets, we'll have to be unanimous, among those on the phone, to adopt my new issue 17:00:22 [Roy] Roy has joined #tagmem 17:00:38 [Zakim] TAG_Weekly()12:30PM has now started 17:00:45 [Zakim] +Norm 17:01:34 [Zakim] +[INRIA] 17:01:38 [Zakim] +Roy 17:02:06 [Roy] Scribe: Roy 17:02:35 [Zakim] + +1.804.740.aaaa 17:03:16 [Ed] Ed has joined #tagmem 17:03:30 [Vincent] Zakim, [INRIA} is Vincent 17:03:30 [Zakim] sorry, Vincent, I do not recognize a party named '[INRIA}' 17:03:40 [Zakim] +Dave_Orchard 17:03:46 [Roy] Chair: Vincent 17:04:09 [dorchard] dorchard has joined #tagmem 17:04:30 [Roy] NM: I will scribe next week 17:04:46 [Zakim] +??P1 17:04:51 [Zakim] +DanC 17:05:25 [Roy] Zakim, who is here 17:05:25 [Zakim] Roy, you need to end that query with '?' 17:05:28 [Vincent] Zakim, who is here 17:05:28 [Zakim] Vincent, you need to end that query with '?' 17:05:29 [DanC] Zakim, who is on the phone? 17:05:30 [Zakim] On the phone I see Norm, [INRIA], Roy, +1.804.740.aaaa, Dave_Orchard, ??P1, DanC 17:05:31 [Ed] Ed has joined #tagmem 17:05:56 [Roy] Topic: Roll Call 17:06:38 [Roy] DC: we haven't published anything in a while 17:06:53 [Roy] DC: a working draft sort of thing 17:07:07 [dorchard] possible regrets, may be on a plane 17:07:11 [Roy] DC: regrets for next week 17:08:11 [Roy] Vincent: Accept the minutes of 12 April telcon 17:08:22 [Roy] [no objections] 17:08:59 [Roy] Topic: New issue? 
Squatting on link relationship names, x-tokens, registries, and URI-based extensibility 17:09:16 [Roy] 17:09:37 [Roy] DC: it would be great if they used a URI, but they don't 17:09:49 [Roy] DC: unlear if this is a new issue or an old one 17:09:57 [Roy] s/unlear/unclear 17:10:54 [Roy] VQ: Can it be addressed in the context of one of the existing issues [missed which ones] 17:11:25 [Roy] VQ: preferencce for issue 41 17:12:00 [Roy] NM: 41 carries too much baggage already, perhaps 9 (no) 17:12:29 [Roy] NW: perhaps if it is hard to find the issue, it deserves a new one 17:12:32 [DanC] "The decision to identify XML namespaces with URIs was an architectural mistake that has caused much suffering for XML users and needless complexity for XML tools. " 17:13:00 [DanC] ack danc 17:13:00 [Zakim] DanC, you wanted to note 17:13:11 [Roy] DC: it seems we have failed to convince at least one person 17:13:37 [Roy] VQ: inclined to proceed with a new issue 17:13:41 [DanC] issue makingNewTerms ... 17:13:49 [Roy] RF: okay by me 17:13:59 [DanC] issue linkRelationshipNames 17:14:15 [Roy] DC: trying to think up a name 17:15:06 [Norm] Interestingly Atom "gets this right" AFAICS. "foo" => 17:15:37 [Roy] NM: preamble about media types would indicate relationship to issue 9, but I guess that was just a lead in -- we should be clearer that this is about the short name issue 17:15:58 [Roy] q+ 17:16:21 [Roy] VQ: just link relationships, or broader? 
17:16:38 [Roy] NW: would prefer broader issue of short names 17:17:02 [Roy] no dead typing 17:17:36 [DanC] standarizedAttributeValues 17:17:38 [DanC] hmm 17:17:41 [Roy] RF: standardized attribute names in content 17:17:56 [Roy] RF: right, Values is better 17:18:56 [DanC] standarizedFieldValues 17:19:11 [Roy] RF: I meant attribute values in general, not just in XML syntax 17:19:26 [Roy] standardizedFieldValues 17:19:41 [DanC] works for me 17:20:02 [Norm] Works for me 17:20:06 [Roy] [no objections] 17:21:12 [Roy] RESOLVED: new issue standardizedFieldValues-51 17:23:07 [Roy] DC: does anyone else think they should use URIs? 17:23:41 [Roy] RF: like the IANA-based registry used by Atom relationships 17:25:18 [Roy] DC: what does it mean to add a relation name? Can they be a URI? How do you get an IANA name? 17:25:25 [Norm] I suppose someone should define the mechanism for adding to the registry, writing an RFC maybe? 17:25:46 [Roy] DC: suppose I just introduce a short name 17:26:11 [Roy] NM: is it formally defined as a relative URI? 17:27:13 [Roy] RF: it is formally defined as a suffix of the IANA base URI if the value is not already a URI 17:28:03 [Roy] ACTION: DanC to introduce new issue standardizedFieldValues-51 17:28:59 [DanC] (hmm... "standardized" is perhaps narrower than I'd like, but no matter) 17:28:59 [Roy] VQ: shall we wait and see the feedback from the introduction before continuing? 17:29:03 [Roy] [agree] 17:29:33 [Roy] Topic: Close issue xmlIDSemantics-32? 
17:30:05 [Roy] NW: CR was published, in the process of implementation reports 17:30:24 [Roy] NW: inclined to continue this until PR 17:31:20 [Roy] DC: looking at how this impacts other (existing) specs and what tests are needed 17:32:06 [Roy] DC: trying to address concern about W3C having many individual specs that don't always work well together 17:32:48 [Roy] DC: for example, Chris looked at this and provided examples where various specs (like CSS) should be updated/revised to reflect the change 17:34:24 [Roy] NW: I don't think it would be appropriate for CSS to say anything about xml:id because the current [algorithm?] will pick up the new id automatically because it starts with the infoset 17:35:26 [Roy] NM: I think both views of this are right, there is a case to be said that the infoset way is architecturally better 17:36:09 [Roy] NM: OTOH, Dan is right as well and we need to provide [details missing] 17:36:32 [Roy] NW: agrees in general 17:37:56 [Roy] ACTION: Norm to raise the issue of synchronizing xml:id with CSS spec to Core WG 17:37:58 [Norm] ACTION: NW to point out to the Core WG that it would be good to get the CSS working group to buy into xml:id 17:38:42 [DanC] (the corresponding concern applies to xpath etc.) 17:38:53 [Roy] VQ: I guess we can conclude that we should not close the issue? Do we agree? 17:40:08 [Roy] DC: yes, but would like to hear from other TAG members 17:40:46 [Roy] NW: Xpath 2 has cupport for xml:id construction, Xpath 1 can support it providing that it starts with an infoset 17:41:20 [Roy] DC: surprised, so that means something that used to conform will no longer conform? 17:41:40 [Roy] NW: both still conform 17:41:54 [Zakim] -Norm 17:41:57 [Zakim] +Norm 17:44:17 [Roy] [technical discussion of XML processing continues by NW, NM, DC] 17:46:43 [Roy] q? 
17:46:47 [Roy] q- 17:51:00 [Roy] VQ: let's get back to the specific issue at hand 17:51:11 [DanC] ( ) 17:51:46 [Roy] DC: I would like for this section C to have a test case prior to going into effect 17:52:09 [Roy] NW: would like Dan to send mail to public-xml to that effect 17:54:05 [Roy] NW: I don't know how to construct a test for CSS, but the introduction of xml:id does not change historical documents and its presence will be ignored by parsers ignorant of xml:id. The CSS spec doesn't need to say anything about that. 17:55:30 [Roy] VQ: suggest revisiting this after Norm completes action ... is there any other spec beyond CSS that are impacted? 17:55:37 [Roy] DC: six specs are listed 17:56:11 [Roy] VQ: let's move on 17:56:24 [Roy] Topic: Review of XRI documents 17:56:52 [Roy] VQ; Henry is not here, but can Ed provide his feedback? 17:57:20 [Roy] Ed: I sent HT some feedback this morning but haven't heard back yet 17:57:37 [Roy] Ed: (my delay) 17:57:59 [Roy] VQ: we need to reply to the WG by the end of this month 17:58:17 [Roy] VQ: and work on a longer document 17:59:19 [Roy] Ed: HT is working on the longer document ... after feedback, will have a better idea how to proceed toward sending comments to WG 18:00:00 [Roy] VQ: should we prepare something specific for the XRI team? 18:00:20 [Roy] Ed: yes, they deserve a direct feedback as opposed to a general reference 18:00:54 [Roy] RF: agree on direct response (as well as later general document) 18:01:18 [Roy] VQ: running out of time, do we have time to make a TAG decision? 18:02:10 [Roy] DC: we already have general feedback in the form of the webarch doc 18:04:38 [Roy] DO: we need to take a look at the examples given and explain how we can solve those problems using URI, HTTP, etc. 18:05:25 [DanC] q+ to propose: 1. XRI follows the pattern of an administrative hierarchy of delegation ending with a path, 2. http/dns handle that case, and is ubiquitously deployed 3. 
new URI schemes should not be introduced when existing schemes handle them 4. ergo XRI should not be introduced. 18:05:40 [Roy] DO: on my blog, I got comments about change of ownership of a domain and broke it down into examples 18:05:56 [Vincent] ack DanC 18:05:56 [Zakim] DanC, you wanted to propose: 1. XRI follows the pattern of an administrative hierarchy of delegation ending with a path, 2. http/dns handle that case, and is ubiquitously deployed 18:05:59 [Zakim] ... 3. new URI schemes should not be introduced when existing schemes handle them 4. ergo XRI should not be introduced. 18:06:00 [Roy] DO: that show how the points can be responded to 18:06:27 [dorchard] q+ 18:06:40 [Vincent] ack dorchard 18:07:03 [Roy] DC: wonder what parts of the argument would fail to convince 18:07:05 [DanC] DO: what's not established is "http/dns handle that case" 18:08:39 [Roy] DO: they wrote a document that shows (in their mind) why 2) is not the case. What we need to do is come up with examples that show an alternative interpretation/solution to the examples they provided in the documents. 18:08:57 [Roy] DC: did their writing convince you? 18:09:17 [Roy] DO: it did give me pause to wonder about the two scenarios already mentioned 18:10:02 [Roy] Ed: most cases of domain change can be handled by redirects 18:10:46 [DanC] dorchard, did you mail something to www-tag on this item? 18:11:36 [Roy] NM: is it obvious to people on this call that redirect is something that we can point to for longevity of URIs? 18:11:57 [Zakim] - +1.804.740.aaaa 18:12:02 [Roy] DC: yes, one of the many reasons why HTTP is better for this type of thing 18:12:37 [Zakim] + +1.804.740.aabb 18:12:53 [Roy] DC: I am willing to try to make that case (am writing a related article) 18:13:08 [dorchard] try 18:13:46 [Roy] VQ: can you do that such that we have something to approve next week? 18:13:52 [DanC] (my target is now end of day weds) 18:14:09 [DanC] (I see ) 18:14:27 [dorchard] 18:14:39 [DanC] rogrer. 
tx 18:15:08 [DanC] ACTION DanC: elaborate on "http/dns handle the case of an administrative hierarchy followed by a path" 18:15:09 [Roy] DO: above are links to related blog entries 18:16:23 [Roy] Ed: we should be able to have enough material to reply next week 18:16:37 [Roy] Topic: <html><body></body></html> 18:16:48 [Roy] Topic: Review of Binary XML documents 18:17:59 [DanC] sounds like ed's action continues 18:18:11 [Roy] Ed: I have this afternoon set aside for this 18:19:04 [Roy] Ed: should it be in finding form or just an email? 18:19:27 [Roy] NM: perhaps less formality is desired for a response to a WG 18:19:31 [Roy] Ed: agree 18:19:52 [Roy] Topic: Reviewing WS-Addressing Core and SOAP Binding 18:20:43 [Roy] 18:21:35 [Roy] DC: endPointRefs-47 18:21:38 [DanC] 18:22:04 [Roy] DC: suppose we just withdrew this from our list? 18:22:43 [Ed] Ed has joined #tagmem 18:23:33 [Roy] NM: what about the general concern that they are using something other than URIs for general identity? 18:25:34 [Roy] NM: what WSA did was remove the distinction that indicated a parameter was being used for identity, but they didn't remove the mechanism itself. Some people still use that feature for the purpose of identification. 18:26:19 [DanC] q+ to note my struggles reviewing WS-addressing... can anybody sketch a test scenario? how can I tell if a hunk of software is doing ws-addressing right or not? 18:27:06 [Roy] DC: similar to cookies in that WSA does not prevent the use of those fields for the sake of passing identifying data 18:27:23 [Roy] DC: but not all such fields are used in that way 18:27:58 [DanC] s/DC: similar/DO: similar/ 18:28:05 [DanC] s/DC: but/DO: but/ 18:28:18 [DanC] (hmm... 
[[ WS-Addressing is conformant to the SOAP 1.2 [SOAP 1.2 Part 1: Messaging Framework] processing model ...]] ) 18:29:42 [Roy] NM: I think these questions are still present, and though not directly tied to this issue it may be our last chance to deal with non-URI addressing 18:29:50 [Roy] VQ: out of time 18:30:52 [Zakim] -Norm 18:31:02 [Zakim] -Dave_Orchard 18:31:10 [Roy] ADJOURNED 18:31:14 [Zakim] - +1.804.740.aabb 18:31:15 [Zakim] -DanC 18:31:16 [Zakim] -??P1 18:31:17 [Zakim] -[INRIA] 18:31:26 [Zakim] -Roy 18:31:27 [Zakim] TAG_Weekly()12:30PM has ended 18:31:28 [Zakim] Attendees were Norm, [INRIA], Roy, +1.804.740.aaaa, Dave_Orchard, DanC, +1.804.740.aabb 18:31:38 [Roy] rrsagent, pointer? 18:31:38 [RRSAgent] See 18:31:53 [DanC] RRSAgent, make logs world-access 18:32:18 [DanC] do you want it to draft HTML minutes, roy? 18:32:25 [Roy] I can do it 18:32:29 [DanC] ok 19:31:32 [dorchard] dorchard has joined #tagmem 20:51:48 [Zakim] Zakim has left #tagmem
http://www.w3.org/2005/04/19-tagmem-irc
Here's some pseudocode I wrote.

def bee(): #Bees! runs on global variables
    if nearby(Predator): #uses weighted random numbers to determine if a predator is near enough to warrant removal and returns bool
        sting(Predator)
    elif low_food(): #uses method described above to determine if food is low and returns bool
        get_food()
    elif too_hot(): #reads temperature and returns bool based on rands
        vent()
    else: #Nothing to do
        dawdle() #basically wander around doing nothing

for bee in bees: #basically a list of dictionaries with each bee's position and state
    bee()

There would also be a map and predators and whatnot. Unfortunately, I'm a complete n00b and can't do this. I don't know what modules to use, but would prefer the builtins (I don't have admin privileges, so I can't install any extra modules). Consider it a challenge.
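A runnable, stdlib-only sketch of what the pseudocode above seems to be after. Every concrete choice here is my guess, not the poster's design: bees are plain dicts as described, the "weighted random" checks are simple random.random() draws, and the world is a dict of counters.

```python
import random

def make_bee(x=0, y=0):
    # A bee is just a dict of position and state, as the pseudocode suggests.
    return {"x": x, "y": y, "state": "idle"}

def nearby_predator(world):
    # Stand-in for the weighted-random predator check.
    return world["predators"] > 0 and random.random() < 0.3

def low_food(world):
    return world["food"] < 20

def too_hot(world):
    return world["temperature"] > 35

def step_bee(bee, world):
    if nearby_predator(world):
        bee["state"] = "stinging"
        world["predators"] -= 1
    elif low_food(world):
        bee["state"] = "foraging"
        world["food"] += 5
    elif too_hot(world):
        bee["state"] = "venting"
        world["temperature"] -= 1
    else:
        bee["state"] = "dawdling"   # wander around doing nothing
        bee["x"] += random.choice([-1, 0, 1])
        bee["y"] += random.choice([-1, 0, 1])

def run(world, bees, ticks=10):
    for _ in range(ticks):
        for bee in bees:
            step_bee(bee, world)

world = {"predators": 2, "food": 10, "temperature": 30}
bees = [make_bee() for _ in range(5)]
run(world, bees)
```

Only `random` is needed, so it respects the no-extra-modules constraint; a map could be added as a dict keyed by (x, y).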
http://www.python-forum.org/viewtopic.php?f=10&t=6855&p=8733
While the contingency is not all that probable, it cannot be totally ruled out that a C# or VB.NET component developer might inadvertently use a C++ keyword as a class name or a public field name. Anyway, the C++ team decided that they had to allow for this probability and gave us the __identifier keyword, which allows us to use C++ keywords as identifiers in C++ source code. See the following C# class library:

/* csc /t:library Class1.cs */

public class gcnew
{
    public gcnew()
    {
        generic = 25;
    }

    public int generic;
}

To use it from C++, you can use the __identifier keyword as shown below:

/* cl /clr /FUClass1.dll Prog1.cpp */

int main()
{
    __identifier(gcnew)^ g = gcnew __identifier(gcnew)();
    int x = g->__identifier(generic);
    return 0;
}
https://voidnish.wordpress.com/2005/03/
Improving Performance with active_tol

Note: This is an experimental tweak to performance. Using it requires a high level of understanding of optimization and the structure of your problem.

Some optimizers use an active set method, whereby their constraints are marked as active or inactive depending on their proximity to the feasible region. If a constraint is far enough from the feasible region, it is essentially redundant to the set, so an optimizer can ignore it and mark it as inactive. When this occurs, it no longer needs functional or gradient evaluations for that constraint. Since gradient calculation can be a major source of computation, some performance can be gained if we can omit calculating the derivatives for inactive constraints.

The ideal way to do this would be to gain access to the optimizer internals and promote that information to OpenMDAO. This was not conveniently available, so we have instead provided an argument to add_constraint that lets you specify how far from your constraint boundary you need to go before you consider it to be inactive. This is most easily used on geometric problems where you can clearly visualize when a constraint is completely occluded by other constraints.

The following restrictions apply to using the active tolerance:

- The optimizer must support active set methods (only SNOPT in pyoptsparse at present)
- It only works for adjoint mode, so "mode" in the root linear solver must be set to "rev"
- Relevance reduction must be enabled ("single_voi_relevance_reduction" set to True in the root linear solver)

Let's consider a problem where we have 7 discs with a 1 cm diameter, and we would like to arrange them on a line as closely together as possible without overlapping. We can do this by minimizing the sum of the distances between each disc and its 6 neighbors. Now, we don't want any of our discs to overlap, so we need to constrain each of them so that the distance to every other disc is greater than 1 diameter. The code for this is below.
We used an ExecComp because the equation for distance is simple to write. To make a point about derivative calculation, we use OpenMDAO's built-in profiling. This time, when we set up the profiler, we tell it to only count every time apply_linear (which is the workhorse derivatives function) is called on Component. The output will be placed in a file that we can process later.

from __future__ import print_function
from six.moves import range

import numpy as np

from openmdao.api import Problem, Group, pyOptSparseDriver, ExecComp, IndepVarComp

if __name__ == '__main__':

    # So we compare the same starting locations.
    np.random.seed(123)

    diam = 1.0
    pin = 15.0
    n_disc = 7

    prob = Problem()
    prob.root = root = Group()

    driver = prob.driver = pyOptSparseDriver()
    driver.options['optimizer'] = 'SNOPT'
    driver.options['print_results'] = False

    # Note, active tolerance requires relevance reduction to work.
    root.ln_solver.options['single_voi_relevance_reduction'] = True

    # Also, need to be in adjoint
    root.ln_solver.options['mode'] = 'rev'

    obj_expr = 'obj = '
    sep = ''
    for i in range(n_disc):

        dist = "dist_%d" % i
        x1var = 'x_%d' % i

        # First disc is pinned
        if i == 0:
            root.add('p_%d' % i, IndepVarComp(x1var, pin), promotes=(x1var, ))

        # The rest are design variables for the optimizer.
        else:
            init_val = 5.0*np.random.random() - 5.0 + pin
            root.add('p_%d' % i, IndepVarComp(x1var, init_val), promotes=(x1var, ))
            driver.add_desvar(x1var)

        for j in range(i):

            x2var = 'x_%d' % j
            yvar = 'y_%d_%d' % (i, j)
            name = dist + "_%d" % j

            expr = '%s = (%s - %s)**2' % (yvar, x1var, x2var)
            root.add(name, ExecComp(expr), promotes=(x1var, x2var, yvar))

            # Constraint (you can experiment with turning on/off the active_tol)
            #driver.add_constraint(yvar, lower=diam)
            driver.add_constraint(yvar, lower=diam, active_tol=diam*2.0)

            # This pair's contribution to objective
            obj_expr += sep + yvar
            sep = ' + '

    root.add('sum_dist', ExecComp(obj_expr), promotes=('*', ))
    driver.add_objective('obj')

    prob.setup()

    print("Initial Locations")
    for i in range(n_disc):
        xvar = 'x_%d' % i
        print(prob[xvar])

    # Run with profiling turned on so that we can count the total derivative
    # component calls.
    from openmdao.api import profile, Component
    profile.setup(prob, methods={'apply_linear' : (Component, )})
    profile.start()

    prob.run()

    profile.stop()

    print("\nFinal Locations")
    for i in range(n_disc):
        xvar = 'x_%d' % i
        print(prob[xvar])

    total_apply = 0
    for syst in root.subsystems(recurse=True):
        if 'dist_' in syst.name:
            total_apply += syst.total_calls

    print("\ntotal apply_linear calls:", total_apply)

Note that we defined the variable "n_disc" for the number of discs, so component and variable names such as "dist_1_2" and "y_2" had to be created with some string operations.

Initial Locations
15.0
13.482345928
11.4306966748
11.1342572678
12.7565738454
13.5973448489
12.1155323006

Final Locations
15.0
12.9999999413
9.99999993369
8.99999991376
11.9999999405
13.9999999687
10.9999999358

Note that this lines our discs up neatly so that they are touching each other, with their centers ranging from 9 to 15. Note that we chose a distance of 2.0 times the disc diameter as our "active_tol".
When we do this and have 3 discs in a row, the distance constraint between disc 1 and disc 3 is inactive, so its gradient is not calculated.

We can look at the profiling output by issuing the following command at your operating system command line:

proftotals prof_raw.0

We want the grand totals, which are printed last:

Grand Totals
-------------
Function Name, Total Time, Calls
apply_linear, 0.00373601913452, 183

So, did our active tolerance really do anything? If we turn it off and compare the number of apply_linear calls by running and postprocessing a second time:

Initial Locations
15.0
13.482345928
11.4306966748
11.1342572678
12.7565738454
13.5973448489
12.1155323006

Final Locations
15.0
12.9999998135
9.99999980586
8.99999978593
11.9999998126
13.9999999687
10.999999808

The optimum is essentially the same. The time:

Grand Totals
-------------
Function Name, Total Time, Calls
apply_linear, 0.00686693191528, 344

So almost half of the apply_linear calls turn out to be unneeded. This would normally be a pretty bad case to run in adjoint mode, because the number of constraints varies with n_disc by (n_disc**2)/2 - n_disc, while the number of design variables only varies by n_disc. However, a good choice for "active_tol" cuts out a significant number of the extra gradient calculations.
http://openmdao.readthedocs.io/en/1.7.3/usr-guide/examples/active_tol.html
I have an app in which I am trying to generate some Google Maps. The Google Maps aren't the problem; I am tripping over the proper way to handle what I want to do. I have a DB for map markers.

Path 1: When the user makes a call to the controller => "mymap", action => "show", I grab the id parameter. From here, if 15 minutes have passed since it was last checked, I make a call to a function check_map_update that makes an RSS call to an outside repository and puts the changes in the map marker DB. Show then goes on to read the map markers and generate a map shown by the view.

Path 2: In my view, I want it to pull up some thumbnails of other maps, so I make a helper called "show_thumbnail", in which I put a <% render %> call that returns the image to display.

MapController
  def show
  def check_map_update

MapHelper
  def map_thumbnail

View:
  Application.html.erb
  map/show.html.erb

My Problem: I want to be able to call check_map_update from the function in the helper. Or is the helper the best place for this? render :partial seems to be little help, since it doesn't run controller code either.

What I really would have liked is a controller that calls a couple of other actions like show/1, show/53 and includes those as part of the page, but I can't quite figure out the proper place for that, and I can't find a tutorial or decent document that says how to do it.

My current thought is that check_map_update should be in the map model. But how would I go about showing multiple maps from the controller?
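One common Rails-style arrangement matching the question is: the freshness check lives on the model, the controller triggers it, and helpers/views only render. This is a plain-Ruby sketch with no Rails and no real RSS call; Map, fetch_remote_markers, and the 15-minute window are stand-ins for the poster's actual classes, not a definitive implementation.

```ruby
# Stand-in for the ActiveRecord model; holds markers and a last-checked stamp.
class Map
  FRESHNESS = 15 * 60 # seconds

  attr_reader :markers, :last_checked_at

  def initialize
    @markers = []
    @last_checked_at = nil
  end

  # The model owns the update policy: refresh only when stale.
  def check_map_update(now = Time.now)
    return @markers unless stale?(now)
    @markers = fetch_remote_markers
    @last_checked_at = now
    @markers
  end

  def stale?(now = Time.now)
    @last_checked_at.nil? || (now - @last_checked_at) > FRESHNESS
  end

  private

  # Placeholder for the real RSS pull into the marker DB.
  def fetch_remote_markers
    [{ lat: 10.0, lng: 20.0 }]
  end
end

# Sketched controller: it just asks the model, so a helper never needs
# to reach back into controller code.
class MapController
  def initialize(map)
    @map = map
  end

  def show
    @map.check_map_update
    @map.markers # handed to the view
  end
end

map = Map.new
markers = MapController.new(map).show
```

With the policy on the model, showing multiple maps from one action is just a loop over Map records, each calling check_map_update before render.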
https://www.ruby-forum.com/t/trying-to-figure-out-if-this-is-a-helper-or-a-controller-or-a-partial/184674
Working with physical files using the API

CMS.IO is a namespace that serves as an interlayer between the business layer of Kentico and the storages used for physical files. CMS.IO is used throughout the system instead of the default System.IO library provided by the .NET Framework in order to be compatible with various types of file storages.

CMS.IO is an abstract layer, which accesses file storages by means of a provider built on top of it. CMS.IO defines the classes and their methods and properties, which the provider overrides in order to manipulate files in a storage. Depending on the storage type that you use for your files, the system utilizes one of the following providers:

- File system storage – used by default for files stored in the Windows file system. The provider is only a wrapper for the standard System.IO library.
- Azure storage – used when storing files in the Microsoft Azure Blob storage.
- Amazon storage – used when storing files in the Amazon S3 storage service.

The following figure demonstrates how CMS.IO allows you to access different types of file storage by building providers:

This documentation section assumes that you are familiar with the System.IO library and know how to use it to manipulate files and directories in the Windows file system. Learn how to use System.IO.
https://docs.xperience.io/k12sp/custom-development/working-with-physical-files-using-the-api
The Fibonacci sequence is a sequence of numbers such that any number, except for the first and second, is the sum of the previous two. For example: 0, 1, 1, 2, 3, 5, 8, 13, 21...

Fun fact: November 23 is celebrated as Fibonacci day because when the date is written in the mm/dd format (11/23), the digits in the date form a Fibonacci sequence: 1,1,2,3.

We can represent this in the formula:

fib(n) = fib(n-1) + fib(n-2)

With this formula, we can write a simple recursive function to solve for fib(n). Note: Only use this to test for small numbers, preferably n < 10. I'll explain later.

def fib(n):
    if n < 2:  # base case
        return n
    return fib(n-1) + fib(n-2)

if __name__ == "__main__":
    print(fib(7))

Why do we need a base case?

It's easy enough to convert the formula directly into:

def fib(n):
    return fib(n-1) + fib(n-2)

The problem with this, though, is that when you run it, it throws a RecursionError. This simply means that there's possibly an infinite loop in the program. A base case in a recursive function tells the function when to stop (to avoid going into an infinite loop) and is usually something that is already known or that can be solved for easily without needing the function. In our case here, we know from the definition that any number in the sequence, except for the first and second, is the sum of the previous two. We use that to form our base case: if n < 2: return n.

Making it more efficient

Remember when I told you to only test the program with small values of n? Here's why. As it stands, every call to fib() results in two more calls to fib() in the return statement. The call tree grows exponentially: 15 calls are required to compute fib(5), 177 calls for fib(10), 21,891 for fib(20)... you get the point.

To solve this problem, we can use memoization. Memoization helps avoid unnecessary calculation of the same values if they were already previously calculated. It works just like memorization for humans.
You already have 2 x 2 memorized and can give the answer immediately without having to use a calculator. 799 x 377? You probably need to use a calculator for that. If, for some reason, you find that you get asked 799 x 377 a lot, it would be nice to have it memorized so you don't have to calculate it every other time. The value of 799 x 377 will always remain the same, so all you have to do is calculate it once, save the value in your "cache" (memory), and retrieve it every time you need it.

Luckily, Python has a built-in decorator that does just that:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:  # base case
        return n
    return fib(n-1) + fib(n-2)

if __name__ == "__main__":
    print(fib(50))

Now you can test with large numbers safely.
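The same caching can also be written by hand with a plain dict, which makes the "calculate once, store, retrieve" idea explicit. This is an equivalent sketch, not from the original post:

```python
# Hand-rolled memoization: the dict maps already-computed n -> fib(n).
memo = {0: 0, 1: 1}  # base cases pre-seeded

def fib(n):
    if n not in memo:
        # Compute once, store, and reuse on every later request.
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]

print(fib(50))  # → 12586269025
```

lru_cache does exactly this bookkeeping for you (plus eviction when maxsize is set), which is why the decorator version stays so short.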
https://practicaldev-herokuapp-com.global.ssl.fastly.net/wangonya/fibonacci-sequence-with-python-recursion-and-memoization-f7h
I am producing a publication-quality plot to be embedded in LaTeX, and I would like to be very precise in terms of sizes and fonts (so that fonts are the same size in the article as in the plot). To prevent the plot from scaling in LaTeX I would like to have it at its exact size, but I cannot. Here is my code:

import matplotlib.pyplot as plt
from matplotlib import rc, rcParams
from numpy import sin

rc('text', usetex=True)
rc('font', family='serif', serif='Computer Modern Roman', size=8)
rc('legend', fontsize=10)

width_in = 5

fig = plt.figure(1, figsize=(width_in, 2))
ax = fig.add_subplot(111)
ax.plot(range(0,100), sin(range(0,100)))
fig.tight_layout()
fig.savefig('test.eps', bbox_inches='tight', pad_inches=0)
plt.close()

The problem is with bbox_inches='tight' and pad_inches=0. Adding those options makes my plot 4.76 inches wide instead of the declared 5 inches. But I want them to save space. So how do I solve it?

Edit: Well, the answers suggest removing bbox_inches='tight' and pad_inches=0 and using just tight_layout(). Then the image is the right size, but it still has some white padding around it. I can remove it with fig.tight_layout(pad=0), but then the figure title is moved inside the box, which looks ugly. On the other hand, I can use tight_layout(rect=[...]) and obtain the desired result, but it is manual work to get the numbers right - I don't like it. Thus, currently I don't see any easy and general solution to my problem.

The problem you are having is that bbox_inches='tight' just removes all of the extra white space around your figure; it does not actually re-arrange anything in your figure after it has been rendered. You might need to tweak the parameters you pass to tight_layout (tutorial) to get your desired effect. Hopefully this gets you pointed in the right direction.

Your original plot must have whitespace around it, otherwise bbox_inches='tight' would not remove any of the area.
There are two solutions to this that I know of:

The simple method is to use tight_layout, as mentioned by tcaswell.

The more complicated, but more controllable, method is to avoid using fig.add_subplot(111) and instead use fig.add_axes(), which allows you to be much more strict about how large your axes are when you define the axes instance. You can then tweak the size of your axes to take up as much of the figure area as required to maintain a 5" figure area. Once you have done this, I would recommend simply not using bbox_inches, or setting it to None (the default), to avoid unnecessary cropping.

fig.add_axes() requires a rect parameter as its first argument, which consists of [left_position, bottom_position, width, height], each of which ranges from 0 to 1.

Edit: After going through the tight_layout tutorial again, I have realized that it covers pretty much everything. I hadn't realized that it was able to maintain the aspect ratio of the axes instance easily, even if the aspect ratio of the figure is different from that of the axes instance. I tend to be very explicit when I define my axes areas because I deal with satellite imagery and try to keep it at the native sensor resolution or some factor of that sensor resolution, which requires a little bit more control.
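Since the rect for fig.add_axes() is in figure fractions, converting desired physical margins into a rect is simple arithmetic. A sketch of that conversion (the margin sizes below are made-up values, and no matplotlib is needed to check the numbers):

```python
def margins_to_rect(fig_w_in, fig_h_in, left_in, bottom_in, right_in, top_in):
    """Convert margins given in inches into the [left, bottom, width, height]
    figure-fraction rect that fig.add_axes() expects."""
    left = left_in / fig_w_in
    bottom = bottom_in / fig_h_in
    width = (fig_w_in - left_in - right_in) / fig_w_in
    height = (fig_h_in - bottom_in - top_in) / fig_h_in
    return [left, bottom, width, height]

# A 5 x 2 inch figure with 0.5" left, 0.4" bottom, 0.1" right/top margins:
rect = margins_to_rect(5.0, 2.0, 0.5, 0.4, 0.1, 0.1)
print(rect)  # approximately [0.1, 0.2, 0.88, 0.75]
```

The result would then be passed straight through as `ax = fig.add_axes(rect)`, keeping the figure at exactly figsize without any post-render cropping.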
http://m.dlxedu.com/m/askdetail/3/fcaa830179dd31d37024a2c4e57ceae4.html
Red Hat Bugzilla – Bug 14942 glibc: libio, used from libstdc++, not exception aware

Last modified: 2008-05-01 11:37:57 EDT

Hi,

This is no Red Hat specific bug, but it can be fixed easily with a little patch. libstdc++ uses parts of the basic libio functions in libc. Problems arise if the C++ code called by these functions throws exceptions. Consider the following example:

------8<-- brokenioex.cc -----------------------------
/*
 * No Exception awareness in IO-lib
 * Provided by <hzeller@to.com>
 */
#include <iostream>

class mysb : public std::streambuf {
protected:
    virtual int overflow(int c) { throw 1; }
    virtual int underflow()     { throw 1; }
};

int main() {
    mysb *sb = new mysb();
    iostream out(sb);
    try {
        // this calls implicitly functions in
        // genops.c
        out << "foo";
    }
    catch (...) {
        // catch: IO lib exception aware
        // otherwise this prog will abort
        return 0;
    }
}
-------------------------------------------

Calling the streambuffer's methods implicitly by using the iostream will crash this program; the coredump reveals that:

----------------------------------------------
$ gdb ./a.out core
(gdb) bt
#0  0x4009e861 in __kill () from /lib/libc.so.6
#1  0x4009e548 in raise (sig=6) at ../sysdeps/posix/raise.c:27
#2  0x4009fc71 in abort () at ../sysdeps/generic/abort.c:88
#3  0x4003e80b in __default_terminate () from /usr/lib/libstdc++-libc6.1-2.so.3
#4  0x4003e82c in __terminate () from /usr/lib/libstdc++-libc6.1-2.so.3
#5  0x4003f284 in throw_helper (eh=0x40060f84, pc=0x400cd4f3, my_udata=0xbffff85c, offset_p=0xbffff858) from /usr/lib/libstdc++-libc6.1-2.so.3
#6  0x4003f43c in __throw () from /usr/lib/libstdc++-libc6.1-2.so.3
#7  0x804908b in mysb::overflow ()
#8  0x400cd4f4 in _IO_default_xsputn (f=0x804a540, data=0x804913c, n=3) at genops.c:197
#9  0x4003a722 in streambuf::xsputn () from /usr/lib/libstdc++-libc6.1-2.so.3
#10 0x40038624 in ostream::operator<< () from /usr/lib/libstdc++-libc6.1-2.so.3
#11 0x8048e4c in main ()
-------------------------------------
the overflow() method is called from _IO_default_xsputn() which resides in the libio/genops.c of the glibc. Unfortunatly, this file is not compiled with '-fexceptions' so the program will crash here. my (quick'n hacky) Solution: Add CPPFLAGS += -fexceptions to the libio/Makefile in glibc-2.1.3 make this work. This probably can be narrowed down to the actual functions affected (anyone has drawn a generic call graph ?) I haven't checked yet, but this bug may be hidden in newer glibc's as well. The actual problem is, that libstdc++ relies on the fact, that the c functions, it is called back by, are exception save. So beyond this very problem described here, it should be checked in libstdc++, which c-functions do callbacks of C++ methods -- and make them exception save in libc. This should be reported to the glibc/libstdc++-maintainers in the first place since this is no RedHat specific bug. I added this as patch: =================== glibc-ioexceptions.patch --- libio/Makefile Wed Jun 30 08:54:14 1999 +++ libio/Makefile.new Mon Jul 31 18:58:47 2000 @@ -42,6 +42,8 @@ include ../Makeconfig +CPPFLAGS += -fexceptions + ifeq ($(versioning),yes) routines += oldiofopen oldiofdopen oldiofclose oldiopopen oldpclose \ oldtmpfile =================== IMHO, this should be fixed soon, because this renders the very improved exception handling in recent gcc versions almost useless if you have to work around these kinds of problems - the average programmer may not even be aware of. Thank you for your time. ciao, -hen glibc-2.1.92 has libio which you can partly throw exceptions through.
https://bugzilla.redhat.com/show_bug.cgi?id=14942
Why Pandas for Data Analysis?

Real 'raw' data needs a lot of 'wrangling' operations before it is ready for dissection by a data scientist. One of the popular tools for data wrangling in Python is Pandas, because of the availability of widespread packages for almost every possible function. The Pandas library is one such package that makes life easier, especially for data analysis, through its extensive built-in functions for manipulation and visualization.

Pandas First Steps

If you are using Anaconda, you will automatically have pandas. If for some reason you do not have it, just run this command:

    conda install pandas

If you are not using Anaconda, install via pip:

    pip install pandas

Importing

To import pandas, use:

    import pandas as pd
    import numpy as np

It is better to import NumPy together with pandas, to have access to more NumPy features; in short, it will help us in Exploratory Data Analysis (EDA).

Pandas Data Structures

Pandas has two main data structures:

- Series
- DataFrames

SERIES

The basic syntax to create a pandas Series is as follows:

    newSeries = pd.Series(data, index)

Data can be of almost any type, from a Python dictionary to a list or tuple. It can also be a NumPy array.

Let's build a Series from a Python list:

    mylist = ['Tanishka', 'Machine Learning', 24, 'India']
    labels = ['Name', 'Career', 'Age', 'Country']
    newSeries = pd.Series(mylist, labels)
    print(newSeries)

In addition, let's see how we can create a Series using a Python dictionary:

    myDict = {'Name': 'Tanishka', 'Career': 'Machine Learning', 'Age': 24, 'Country': 'India'}
    mySeries = pd.Series(myDict)
    print(mySeries)

Accessing data from a Series

The normal pattern to access the data in a pandas Series is:

    seriesName['IndexName']

Let's take the example of the mySeries we created earlier. To get the value of Name, Age, and Career, all we have to do is:

    print(mySeries['Name'])
    print(mySeries['Age'])
    print(mySeries['Career'])

Basic Operations on a Pandas Series

For instance, let's create two new Series to perform operations on them:

    newSeries1 = pd.Series([10, 20, 30, 40], index=['LONDON', 'NEWYORK', 'Washington', 'Singapore'])
    newSeries2 = pd.Series([10, 20, 35, 46], index=['LONDON', 'NEWYORK', 'INDIA', 'CANADA'])
    print(newSeries1, newSeries2, sep='\n\n')

Basic arithmetic operations include + - * /. These are done over the index, so let's perform them:

    newSeries1 + newSeries2

Here we can see that since the LONDON and NEWYORK labels are present in both Series, their values are added; the output for the rest is NaN (Not a Number).

    newSeries1 * newSeries2
    newSeries1 / newSeries2

DATAFRAMES

Creating a DataFrame using a list:

    import pandas as pd

    # list of strings
    new_list = ['Mango', 'Kiwi', 'Strawberry', 'Pineapple']

    # calling the DataFrame constructor on the list
    df = pd.DataFrame(new_list)
    print(df)

Now, using a dict of ndarrays/lists:

    import pandas as pd

    # initialise data of lists
    new_list = {'Name': ['Mango', 'Kiwi', 'Strawberry', 'Pineapple'], 'Price': [20, 21, 19, 18]}

    # create the DataFrame
    df = pd.DataFrame(new_list)

    # print the output
    print(df)

Indexing and Selecting Data

In Pandas, indexing simply means selecting particular rows and columns of data from a DataFrame. It could mean selecting all the rows and some of the columns, some of the rows and all of the columns, or some of each of the rows and columns. Indexing is also known as subset selection. The indexing operator refers to the square brackets following an object. The .loc and .iloc indexers also use the indexing operator to make selections; in this context, the indexing operator refers to df[].

Selecting a single column:

    # importing the pandas package
    import pandas as pd

    # initialise data of lists
    data = {'Name': ['Mango', 'Kiwi', 'Strawberry', 'Pineapple'], 'Price': [20, 21, 19, 18]}
    df = pd.DataFrame(data)

    # retrieving a column by the indexing operator
    first = df["Price"]
    print(first)

Selecting a single row using .loc:

    import pandas as pd

    data = {
        "Name": ['Mango', 'Kiwi', 'Strawberry', 'Pineapple'],
        "Price": [20, 21, 19, 18]
    }

    # load data into a DataFrame object
    df = pd.DataFrame(data)

    # refer to the row by its index label
    print(df.loc[0])

Selecting a single row using .iloc:

    import pandas as pd

    data = {
        "Name": ['Mango', 'Kiwi', 'Strawberry', 'Pineapple'],
        "Price": [20, 21, 19, 18]
    }

    # load data into a DataFrame object
    df = pd.DataFrame(data)

    # refer to the row by its integer position
    print(df.iloc[3])
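As a follow-up to the Series arithmetic above (a sketch, not from the original post): the NaN entries produced by adding Series with mismatched indexes can be avoided with Series.add and its fill_value parameter, which treats a label missing on one side as 0.

```python
import pandas as pd

# Same two Series as in the arithmetic example above.
newSeries1 = pd.Series([10, 20, 30, 40], index=['LONDON', 'NEWYORK', 'Washington', 'Singapore'])
newSeries2 = pd.Series([10, 20, 35, 46], index=['LONDON', 'NEWYORK', 'INDIA', 'CANADA'])

plain = newSeries1 + newSeries2                     # labels in only one Series become NaN
filled = newSeries1.add(newSeries2, fill_value=0)   # missing labels treated as 0

print(pd.isna(plain['INDIA']))   # True
print(filled['INDIA'])           # 35.0
print(filled['LONDON'])          # 20.0
```

The sub, mul, and div methods accept the same fill_value argument, so the other arithmetic examples can be aligned the same way.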
https://blog.knoldus.com/pandas-for-data-analysis/
Using Streams and Lambda Expressions in Java Java has fancy methods that make optimal use of streams and lambda expressions. With streams and lambda expressions, you can create an assembly line. The assembly-line solution uses concepts from functional programming. The assembly line consists of several methods. Each method takes the data, transforms the data in some way or other, and hands its results to the next method in line. Here’s an assembly line. Each box represents a bunch of raw materials as they’re transformed along an assembly line. Each arrow represents a method (or, metaphorically, a worker on the assembly line). For example, in the transition from the second box to the third box, a worker method (the filter method) sifts out sales of items that aren’t DVDs. Imagine Lucy Ricardo standing between the second and third boxes, removing each book or CD from the assembly line and tossing it carelessly onto the floor. The parameter to Java’s filter method is a Predicate — a lambda expression whose result is boolean. The filter method sifts out items that don’t pass the lambda expression’s true / false test. In the transition from the third box to the fourth box, a worker method (the map method) pulls the price out of each sale. From that worker’s place onward, the assembly line contains only price values. To be more precise, Java’s map method takes a Function such as (sale) -> sale.getPrice() and applies the Function to each value in a stream. So the map method takes an incoming stream of sale objects and creates an outgoing stream of price values. In the transition from the fourth box to the fifth box, a worker method (the reduce method) adds up the prices of DVD sales. Java’s reduce method takes two parameters: The first parameter is an initial value. In the image above, the initial value is 0.0. The second parameter is a BinaryOperator. 
In the image above, the reduce method's BinaryOperator is

    (price1, price2) -> price1 + price2

The reduce method uses its BinaryOperator to combine the values from the incoming stream. The initial value serves as the starting point for all the combining. So the reduce method does two additions.

For comparison, imagine calling the method

    reduce(10.0, (value1, value2) -> value1 * value2)

with a stream whose values include 3.0, 2.0, and 5.0. The resulting action multiplies the initial value by each stream value in turn: 10.0 * 3.0 = 30.0, then 30.0 * 2.0 = 60.0, then 60.0 * 5.0 = 300.0.

You might have heard of Google's MapReduce programming model. The similarity between the programming model's name and the Java method names map and reduce is not a coincidence.

Taken as a whole, the entire assembly line adds up the prices of DVDs sold. The following code is a complete program using the streams and lambda expressions from the first image above.

    import java.text.NumberFormat;
    import java.util.ArrayList;

    public class TallySales {

        public static void main(String[] args) {
            ArrayList<Sale> sales = new ArrayList<>();
            NumberFormat currency = NumberFormat.getCurrencyInstance();

            fillTheList(sales);

            double total = sales.stream()
                    .filter((sale) -> sale.getItem().equals("DVD"))
                    .map((sale) -> sale.getPrice())
                    .reduce(0.0, (price1, price2) -> price1 + price2);

            System.out.println(currency.format(total));
        }

        static void fillTheList(ArrayList<Sale> sales) {
            sales.add(new Sale("DVD", 15.00));
            sales.add(new Sale("Book", 12.00));
            sales.add(new Sale("DVD", 21.00));
            sales.add(new Sale("CD", 5.25));
        }
    }

The code requires Java 8 or later. If your IDE is set for an earlier Java version, you might have to tinker with the IDE's settings. You may even have to download a newer version of Java.

The assignment to total is one big Java assignment statement. The right side of the statement contains a sequence of method calls. Each method call returns an object, and each such object is the thing before the dot in the next method call. That's how you form the assembly line.
For example, near the start of the boldface code, the name sales refers to an ArrayList object. Each ArrayList object has a stream method. In the code above, sales.stream() is a call to that ArrayList object’s stream method. The stream method returns an instance of Java’s Stream class. (What a surprise!) So sales.stream() refers to a Stream object. Every Stream object has a filter method. So sales.stream().filter((sale) -> sale.getItem().equals("DVD")) is a call to the Stream object’s filter method. The pattern continues. The Stream object’s map method returns yet another Stream object — a Stream object containing prices. To that Stream of prices you apply the reduce method, which yields one double value — the total of the DVD prices.
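The multiply version of reduce described above can be tried directly with a few lines (a sketch, not from the article; the class name ReduceDemo is made up):

```java
import java.util.stream.Stream;

// Walk-through of reduce(10.0, (value1, value2) -> value1 * value2):
// 10.0 * 3.0 = 30.0, then 30.0 * 2.0 = 60.0, then 60.0 * 5.0 = 300.0.
public class ReduceDemo {
    public static void main(String[] args) {
        double result = Stream.of(3.0, 2.0, 5.0)
                .reduce(10.0, (value1, value2) -> value1 * value2);
        System.out.println(result);  // prints 300.0
    }
}
```

The first argument to reduce is the starting value (10.0 here, 0.0 in the price-tallying program), and the second is the BinaryOperator that folds each stream value into the running result.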
http://www.dummies.com/programming/java/using-streams-lambda-expressions-java/
Config::AutoConf - A module to implement some of the AutoConf macros in pure Perl.

With this module I intend to simulate some of the tasks AutoConf macros do: detect a command, detect a library, etc.

    use Config::AutoConf;

    Config::AutoConf->check_prog("agrep");
    my $grep = Config::AutoConf->check_progs("agrep", "egrep", "grep");

    Config::AutoConf->check_header("ncurses.h");
    my $curses = Config::AutoConf->check_headers("ncurses.h", "curses.h");

    Config::AutoConf->check_prog_awk;
    Config::AutoConf->check_prog_egrep;

    Config::AutoConf->check_cc();

    Config::AutoConf->check_lib("ncurses", "tgoto");

    Config::AutoConf->check_file("/etc/passwd"); # -f && -r

This function instantiates a new instance of Config::AutoConf, e.g. to configure child components. The constructor also adds values set via the environment variable PERL5_AUTOCONF_OPTS.

This function checks if a file exists in the system and is readable by the user. Returns a boolean. You can use '-f $file && -r $file', so you don't need to use a function call.

This function checks if a set of files exist in the system and are readable by the user. Returns a boolean.

This function checks for a program with the supplied name. On success it returns the full path of the executable. An optional array reference containing a list of directories to be searched instead of $PATH is gracefully honored.

This function takes a list of program names. It returns the full path of the first one found on the system, or undef if none was found. An optional array reference containing a list of directories to be searched instead of $PATH is gracefully honored.

From the autoconf documentation:

    If `bison' is found, set [...] `bison -y'. Otherwise, if `byacc' is found, set [...] `byacc'. Otherwise set [...] `yacc'.

The result of this test can be influenced by setting the variable YACC or the cache variable ac_cv_prog_YACC. Returns the full path, if found.
From the autoconf documentation:

    Check for `gawk', `mawk', `nawk', and `awk', in that order, and set output [...] to the first one that is found. It tries `gawk' first because that is reported to be the best implementation.

The result can be overridden by setting the variable AWK or the cache variable ac_cv_prog_AWK. Note that it returns the full path, if found.

From the autoconf documentation:

    Check for `grep -E' and `egrep', in that order, and [...] output [...] the first one that is found.

The result can be overridden by setting the EGREP variable and is cached in the ac_cv_path_EGREP variable. Note that it returns the full path, if found.

From the autoconf documentation:

    If flex is found, set output [...] to `flex' and [...] to -lfl, if that library is in a standard place. Otherwise set output [...] to `lex' and [...] to -ll, if found. If [...] packages [...] ship the generated file.yy.c alongside the source file.l, this [...] allows users without a lexer generator to still build the package even if the timestamp for file.l is inadvertently changed.

Note that it returns the full path, if found. The structure $self->{lex} is set with the attributes

    prog => $LEX
    lib  => $LEXLIB
    root => $lex_root

From the autoconf documentation:

    Set output variable [...] to variable and is cached in the ac_cv_path_SED variable.

Note that it returns the full path, if found.

Checks for the pkg-config program. No additional tests are made for it ...

This function checks if you have a running C compiler.

Prints "Checking @_ ...". Prints the result, followed by "\n". Prints "configure: " @_ to stdout. Prints "configure: " @_ to stderr. Prints "configure: " @_ to stderr and exits with exit code 0 (tells the toolchain to stop here and report an unsupported environment). Prints "configure: " @_ to stderr and exits with exit code 0 (tells the toolchain to stop here and report an unsupported environment); additional details are provided in config.log (probably more information in a later stage).
Defines a check variable for later use in further checks or code to compile.

Writes the defined constants into the given target:

    Config::AutoConf->write_config_h( "config.h" );

Puts the currently used language on the stack and uses the specified language for subsequent operations, until the ending pop_lang call.

Pops the currently used language from the stack and restores the previously used language. If lang is specified, it's asserted that the currently used language equals the specified language (helps finding control-flow bugs).

Builds a program which simply calls the given function. When given, prologue is prepended; otherwise, the default includes are used.

Builds a program for the currently chosen language. If no prologue is given (undef), the default headers are used. If body is missing, the default body is used. A typical call of

    Config::AutoConf->lang_build_program(
        "const char hw[] = \"Hello, World\\n\";",
        "fputs (hw, stdout);" )

will create

    const char hw[] = "Hello, World\n";

    /* Override any gcc2 internal prototype to avoid an error. */
    #ifdef __cplusplus
    extern "C" {
    #endif

    int
    main (int argc, char **argv)
    {
      (void)argc;
      (void)argv;
      fputs (hw, stdout);;
      return 0;
    }

    #ifdef __cplusplus
    }
    #endif

Builds a static test which will fail to compile when test evaluates to false. If @decls is given, it's prepended before the test code at the variable-definition place.

Adds the given list of directories to the preprocessor/compiler invocation. It is not proven that this allows adding directories which might be created during the build.

Adds the given flags to the parameter list for preprocessor invocation. Adds the given flags to the parameter list for compiler invocation. Adds the given list of libraries to the parameter list for linker invocation. Adds the given list of library paths to the parameter list for linker invocation. Adds the given flags to the parameter list for linker invocation.

This function tries to compile the specified code and runs action-if-true on success or action-if-false otherwise.
Returns a boolean value containing the check success state.

This function tries to compile and link the specified code and runs action-if-true on success or action-if-false otherwise. Returns a boolean value containing the check success state.

This function checks whether a specified cache variable is set or not, and if not, sets it using the specified sub-to-check.

This function returns the value of a previous check_cached call.

If symbol (a function, variable or constant) is not declared in includes and a declaration is needed, run the code ref given in action-if-not-found, otherwise action-if-found. includes is a series of include directives, defaulting to default includes, which are used prior to the declaration under test. For example:

    Config::AutoConf->check_decl("basename(char *)")

This method caches its result in the ac_cv_decl_<set lang>_symbol variable.

For each of the symbols (with optional function argument types for C++ overloads), run check_decl. If action-if-not-found is given, it is additional code to execute when one of the symbol declarations is needed; otherwise action-if-found is executed. Contrary to GNU autoconf, this method does not declare HAVE_DECL_symbol macros for the resulting confdefs.h, because check_decl differs between compiled languages.

Check whether type is defined. It may be a compiler builtin type or defined by the includes. prologue should be a series of include directives, defaulting to default includes, which are used prior to the type under test. In C, type must be a type-name, so that the expression sizeof (type) is valid (but sizeof ((type)) is not). If type type is defined, the preprocessor macro HAVE_type (in all capitals, with "*" replaced by "P" and spaces and dots replaced by underscores) is defined. This method caches its result in the ac_cv_type_type variable.

For each type, check_type is called to check for the type. If action-if-found is given, it is additionally executed when all of the types are found.
If action-if-not-found is given, it is executed when one of the types is not found.

Returns the value of the integer expression. The value should fit in an initializer in a C variable of type signed long. It should be possible to evaluate the expression at compile time. If no includes are specified, the default includes are used. Executes action-if-fails if the value cannot be determined correctly.

Checks for the size of the specified type by compiling. If no size can be determined, action-if-not-found is invoked when given. Otherwise action-if-found is invoked and SIZEOF_type is defined using the determined size. Unlike GNU AutoConf, this method can determine the size of structure members, e.g.

    $ac->check_sizeof_type( "SV.sv_refcnt", undef, undef, $include_perl );
    # or
    $ac->check_sizeof_type( "struct utmpx.ut_id", undef, undef, "#include <utmpx.h>" );

This method caches its result in the ac_cv_sizeof_<set lang>_type variable.

For each type, check_sizeof_type is called to check for the size of the type. If action-if-found is given, it is additionally executed when all of the sizes of the types could be determined. If action-if-not-found is given, it is executed when one size of the types could not be determined.

Define ALIGNOF_type to be the alignment in bytes of type. "type y;" must be valid as a structure member declaration, or type must be a structure member itself. This method caches its result in the ac_cv_alignof_<set lang>_type variable, with * mapped to p and other characters not suitable for a variable name mapped to underscores.

For each type, check_alignof_type is called to check for the alignment of the type. If action-if-found is given, it is additionally executed when all of the alignments of the types could be determined. If action-if-not-found is given, it is executed when one alignment of the types could not be determined.

Check whether member is in the form aggregate.member and member is a member of the aggregate aggregate.
prologue should be a series of include directives, defaulting to default includes, which are used prior to the aggregate under test.

    Config::AutoConf->check_member(
        "struct STRUCT_SV.sv_refcnt",
        undef,
        sub { Config::AutoConf->msg_failure( "sv_refcnt member required for struct STRUCT_SV" ); },
        "#include <EXTERN.h>\n#include <perl.h>" );

If aggregate aggregate has member member, the preprocessor macro HAVE_aggregate_MEMBER (in all capitals, with spaces and dots replaced by underscores) is defined. This macro caches its result in the ac_cv_aggr_member variable.

For each member, check_member is called to check for the member of the aggregate. If action-if-found is given, it is additionally executed when all of the aggregate members are found. If action-if-not-found is given, it is executed when one of the aggregate members is not found.

This function uses check_header to check if a set of include files exists in the system and can be included and compiled by the available compiler. Returns the name of the first header file found.

This function is used to check if a specific header file is present in the system: whether we detect it and whether we can compile anything with that header included. Note that normally you want to check for a header first, and then check for the corresponding library (not all at once). The standard usage for this module is:

    Config::AutoConf->check_header("ncurses.h");

This function will return a true value (1) on success, and a false value if the header is not present or not available for common usage.

This function checks each given header for usability.

Checks for the standard C89 headers, namely stdlib.h, stdarg.h, string.h and float.h. If those are found, additionally all remaining C89 headers are checked: assert.h, ctype.h, errno.h, limits.h, locale.h, math.h, setjmp.h, signal.h, stddef.h, stdio.h and time.h. Returns a false value if it fails.
This function checks for some default headers: the standard C89 headers plus sys/types.h, sys/stat.h, memory.h, strings.h, inttypes.h, stdint.h and unistd.h.

Check for the following header files. For the first one that is found and defines 'DIR', define the listed C preprocessor macro:

    dirent.h      HAVE_DIRENT_H
    sys/ndir.h    HAVE_SYS_NDIR_H
    sys/dir.h     HAVE_SYS_DIR_H
    ndir.h        HAVE_NDIR_H

This macro might be obsolescent, as all current systems with directory libraries have <dirent.h>. Programs supporting only newer OSes might not need to use this macro.

This method provides the program source which is suitable to do basic compile/link tests to prove a perl development environment.

This method can be used from other checks to prove whether we have a perl development environment or not (perl.h, reasonable basic checks - types, etc.).

This method can be used from other checks to prove whether we have a perl development environment or not (perl.h, reasonable basic checks - types, etc.).

This method can be used from other checks to prove whether we have a perl development environment including a suitable libperl or not (perl.h, reasonable basic checks - types, etc.). The caller must ensure that the linker flags are set appropriately (-lperl or similar).
Call it with the library name (without the lib portion), and the name of the function you want to test: Config::AutoConf->check_lib("z", "gzopen"); It returns 1 if the function exist, 0 otherwise. action-if-found and action-if-not-found can be CODE references whereby the default action in case of function found is to define the HAVE_LIBlibrary (all in capitals) preprocessor macro with 1 and add $lib to the list of libraries to link. If linking with library results in unresolved symbols that would be resolved by linking with additional libraries, give those libraries as the other-libs argument: e.g., [qw(Xt X11)]. Otherwise, this routine may fail to detect that library is present, because linking the test program can fail with unresolved symbols. The other-libraries argument should be limited to cases where it is desirable to test for one library in the presence of another that is not already in LIBS. It's recommended to use search_libs instead of check_lib these days. Search for a library defining function if it's not already available. This equates to calling Config::AutoConf->link_if_else( Config::AutoConf->lang_call( "", "$function" ) ); first with no libraries, then for each library listed in search-libs. search-libs must be specified as an array reference to avoid confusion in argument order. Prepend -llibrary to LIBS for the first library found to contain function, and run action-if-found. If the function is not found, run action-if-not-found. If linking with library results in unresolved symbols that would be resolved by linking with additional libraries, give those libraries as the other-libraries argument: e.g., [qw(Xt X11)]. Otherwise, this method fails to detect that function is present, because linking the test program always fails with unresolved symbols. 
The result of this test is cached in the ac_cv_search_function variable as "none required" if function is already available, as 0 if no library containing function was found, otherwise as the -llibrary option that needs to be prepended to LIBS. Search for pkg-config flags for package as specified. The flags which are extracted are --cflags and --libs. The extracted flags are appended to the global extra_compile_flags and extra_link_flags, respectively. Call it with the package you're looking for and optional callback whether found or not. This method proves the _argv attribute and (when set) the PERL_MM_OPT whether they contain PUREPERL_ONLY=(0|1) or not. The attribute _force_xs is set appropriate, which allows a compile test to bail out when Makefile.PL is called with PUREPERL_ONLY=0. This method proves the _argv attribute and (when set) the PERL_MB_OPT whether they contain --pureperl-only or not. This method calls _check_mm_pureperl_build_wanted when running under ExtUtils::MakeMaker ( Makefile.PL) or _check_mb_pureperl_build_wanted when running under a Build.PL (Module::Build compatible) environment. When neither is found ( $0 contains neither Makefile.PL nor Build.PL), simply 0 is returned. This check method proves whether a pureperl build is wanted or not by cached-checking $self->_check_pureperl_build_wanted. The result might lead to further checks, eg. "_check_compile_perl_api". This routine checks whether XS can be sanely used. Therefore it does following checks in given order: check_cc) and disable XS if none found ExtensivePerlAPIis enabled, check wether perl extensions can be linked or die with error otherwise When all checks passed successfully, return a true value. Intended to act as a helper for evaluating given command line arguments. Stores given arguments in instances _argv attribute. 
Call once at very begin of Makefile.PL or Build.PL: Your::Pkg::Config::AutoConf->_set_args(@ARGV); returns a string containing default includes for program prologue taken from autoconf/headers.m4: returns a string containing default includes for program prologue containing _default_includes plus #include <EXTERN.h> #include <perl.h> Push new file handles at end of log-handles to allow tee-ing log-output Removes specified log file handles. This method allows you to shoot you in your foot - it doesn't prove whether the primary nor the last handle is removed. Use with caution. Alberto Simões, <ambs@cpan.org> Jens Rehsack, <rehsack@cpan.org> Although a lot of work needs to be done, this is the next steps I intent to take. - detect flex/lex - detect yacc/bison/byacc - detect ranlib (not sure about its importance) These are the ones I think not too much important, and will be addressed later, or by request. - detect an 'install' command - detect a 'ln -s' command -- there should be a module doing this kind of task. A lot. Portability is a pain. <Patches welcome!>. Please report any bugs or feature requests to bug-extutils-autoconf@rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes. Michael Schwern for kind MacOS X help. Ken Williams for ExtUtils::CBuilder This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/Config-AutoConf/lib/Config/AutoConf.pm
STRCMP(3)                 BSD Programmer's Manual                 STRCMP(3)

NAME
     strcmp, strncmp - compare strings

SYNOPSIS
     #include <string.h>

     int
     strcmp(const char *s1, const char *s2);

     int
     strncmp(const char *s1, const char *s2, size_t len);

DESCRIPTION
     The strcmp() and strncmp() functions lexicographically compare the
     NUL-terminated strings s1 and s2.

RETURN VALUES
     The strcmp() and strncmp() functions return an integer greater than,
     equal to, or less than 0, according to whether the string s1 is greater
     than, equal to, or less than the string s2. The comparison is done using
     unsigned characters, so that '\200' is greater than '\0'. strncmp()
     compares at most len characters.

SEE ALSO
     bcmp(3), memcmp(3), strcasecmp(3), strcoll(3), strxfrm(3)

STANDARDS
     The strcmp() and strncmp() functions conform to ANSI X3.159-1989
     ("ANSI C").

MirOS BSD #10-current          June 29, 1991.
https://www.mirbsd.org/htman/sparc/man3/strncmp.htm
WebReference.com - Excerpt from Inside XSLT, Chapter 2, Part 1 (4/4)

Each node has a number of set properties associated with it in XSLT, and the following list includes the kinds of properties that the writers of XSLT processors keep track of for each node:

- name. The name of the node.
- string-value. The text of the node.
- base-URI. The node's base URI (the XML version of a URL).
- child. A list of child nodes; null if there are no children.
- parent. The node's parent node.
- has-attribute. Specifies an element node's attributes, if it has any.
- has-namespace. Specifies an element node's namespace nodes.

There's another consideration to take into account when working with trees: XSLT processors are built on top of XML parsers, and the rules for XML parsers and XSLT processors are slightly different, which can lead to problems. This issue can become important in some cases, so the following section discusses it briefly.

The Information Set Model Versus the XSLT Tree Model

XML parsers pass on only certain information, as dictated by the core XML Information Set specification, which you can find at (see New Rider's Inside XML for more information on XML Information Sets), whereas XSLT processors adhere to the XSLT tree model. These models, and what they consider important, are different, which can lead to problems. For example, two XML items are part of the core information set but are not available in XSLT: notations and skipped entity references (entity references that the XML parser has chosen not to expand). In practice, this means that even if the XML parser passes on information about these items, the XSLT processor can't do anything with it. However, notations are rarely used, and very few XML parsers generate skipped entity references, so this is not a significant problem. On the other hand, XML parsers can strip comments out of XML documents, which is something you should know about, because the XSLT model is supposed to include them.
In addition, DTD information is not passed on from the XML parser to the XSLT processor (perhaps because W3C is planning more widespread use of XML schemas in XSLT 2.0, although there's still no official mechanism to connect XML schemas with XML documents yet). That's not usually a problem, because it's up to the XML parser to validate the XML document, except in one case: when an attribute is declared of type ID. In XML, you can declare an attribute with any name to be of type ID, so the XSLT processor has no idea which attributes are of this type unless the processor has access to the DTD. This is important when you're using stylesheets that are embedded in XML documents, because then the XSLT processor needs to be able to know which element in the document holds the stylesheet you want to use to transform the document. In this case, some XSLT processors, like Saxon, exceed the XSLT recommendation and scan the DTD, if there is one, to see which attributes are of type ID.

There are a few more items that you also might want to know about. For example, the XSLT processing model makes namespace prefixes available in the input tree, but it gives you very little control over them in the output tree, where they are handled automatically. Also, the XSLT processing model defines a base URI for every node in a tree, which is the URI of the external entity from which the node was derived. (In the XSLT 1.1 working draft, that's been extended to support the XML Base specification, as you'll see near the end of this chapter.) However, in the XML information set, base URIs are considered peripheral, which means that the XML parser may not pass that information on to the XSL processor.

All in all, you should know that XSLT processors use XML parsers to read XML documents, and that the junction between those packages is not a seamless one.
If you find you're missing some necessary information in an XSLT transformation, that's something to bear in mind. In fact, the mismatch between the XML infoset and the XSLT tree model is one of the areas that XSLT 2.0 is supposed to address. Among other things, XSLT 2.0 is supposed to make it easier to recover ID and key information from the source document, as well as to recover information from the source document's XML declaration, such as XML version and encoding.

Created: September 12, 2001
Revised: September 12, 2001
URL:
http://www.webreference.com/authoring/languages/xml/insidexslt/chap2/1/4.html
I wrote my first Perl 6 program (that is, one that worked) the day before the London Perl Workshop, where I proudly told people. So, JJ says "Why don't you write an Advent calendar post for Perl 6?" With only one program under my belt, what do I have to write about? Well … I authored Astro::Constants in Perl 5, so how hard would it be to migrate it to Perl 6? Since I couldn't keep my big gob shut, I give you the tale of a Perl 5 module author wandering through the Perl 6 landscape. If on Day 5 you were "diagnosed" as a Constant, then you need today's post.

We're used to using variables for counting things and holding stuff. Totals change as we get more "stuff". Constants are those values that should never change. Things like the number of seconds in a day or the speed of light. Sure, we use them like variables in our calculations, but you don't want to accidentally change a constant through assignment such as

$SPEED_LIGHT = 30;

or even accidentally when you meant to test if it was equal to some value, like this

if $SECONDS_IN_A_DAY = $seconds_waited {

when you really meant

if $SECONDS_IN_A_DAY == $seconds_waited {

In these cases, you want the compiler to say "I'm sorry, Dave. I'm afraid I can't do that." The Perl compiler comes close. If you try the first line, you'll get

Cannot modify an immutable Num (299792458) in block at im_sorry_dave.p6 line 12

How to make a Constant

To make a variable constant, you declare it with the constant keyword.

constant $tau = pi * 2;

and Hey! the sigil is optional, so I can use my favourite style for constant declarations

my constant SPEED_LIGHT is export = 2.99792458e8;

And all this fun is not just for scalars. Lists and hashes can be declared constant, too! Why would you want to make a List constant? How about for the months of the year, which prevents this

my constant @months = <January February March April May June July August September October November December>;
...
@months[9] = 'Hacktober'; # changing a name
push @months, 'Hogmannay'; # we'd all like more time after Christmas

if you try either of these, you get

Cannot modify an immutable Str (D) # error for the assignment
# or
Cannot call 'push' on an immutable 'List' # error for the push

As an aside, tau, pi, e and i are all built-in constants in Perl 6, along with their Unicode equivalents, τ, π and 𝑒. Also, it seems that you can get the same behaviour with sigil-less variables, but let's not go there today.

Exporting Constants from a Module

If you're going to be using the same constants over and over in your code, it makes sense to put them in a module (a separate file) and load that into your programs. Now I have to admit a touch of cargo-cult programming here, but this is what worked for me and I'll try and explain it to the best of my initiate's ability.

use v6;
unit class Astro::Constants:ver<0.0.1>:auth<cpan:DUFFEE>;

my constant SPEED_LIGHT is export = 2.99792458e8;
...

Line 1 – easy start. use v6; tells the compiler that this is a Perl 6 program. Wait! I don't need that there. It's just a hold-over from writing the programs. I can get rid of it.

Line 2
- unit means that this file provides only one module – no idea what that implies
- class creates the lexical scope of the file – but I probably could have used module instead, which is for code which doesn't belong inside a class. Hmmm, guess I'll have to think about my code design a bit deeper. But it still worked!
- Astro::Constants – the module name
- ver – the version string, now up in the package declaration.
- auth – well, this is the author. It's a little freaky to be adding higher dimensions to package names, but it does let you specify whose version of a module to use. No more issues with name-space camping in PAUSE.

Line N – my for the lexical scope; constant to make it read-only; SPEED_LIGHT is the name of the constant; is export allows the constant to be used outside the module, i.e.
in your code; and 2.99792458e8 is just the Perl way of expressing 2.99 × 10⁸.

… and for completeness, how about finishing the module with a version method and some documentation

method ver { v0.0.1 }

=begin pod

=head1 DESCRIPTION

A set of constants used in Physics, Chemistry and Math

=end pod

A side effect of putting your constants into a module is that they are computed at compile time. It should make your code run faster, but the compiled module persists. This is great for constants, but if your module contains something that you might want to change, it will need to be re-compiled.

Using Modules in your Programs

Once you have a module, how do you use it in a program? In this example, I created a directory mylib/Astro and put the module in a file called mylib/Astro/Constants.pm6. This is my program

use v6;
use lib 'mylib';
use Astro::Constants;

say "speed of light =\t", SPEED_LIGHT;

and it worked! To explain the first 3 lines: use v6 says to use Perl 6; use lib says to add the path 'mylib' to the library search path; and use Astro::Constants says to search the library path for a file Astro/Constants.pm6 and load it.

Do I have to do all this? … No.

Why re-invent the wheel? The aforementioned JJ has previous form with constants, but you'll want a package manager to do the work of installing it. In Fedora 28, use dnf install rakudo-zef to install the package manager, zef. Then you can search for any modules that deal with constants. Running zef search Constants will give you at least 15 packages registered in the ecosystem; not all of them are what you're looking for. You could get started right away with zef install Math::Constants and use JJ's module, or you could use the search to see if I've found the time to upload my attempt (probably named Physics::Constants by then), coming in 2019 with all the summer blockbusters.
Finally, a few comments on code maintenance

For me, code maintenance is the most important consideration in scientific programming. Think of the new science student who walks in the door of $graduate-school and gets handed your code to maintain on Day 1. Guaranteed by Day 20, they've been asked to make a change. For their sake, I like to write for clarity rather than brevity, because there are so many overloaded symbols in science. Because of this, I'm wary about throwing symbols into my calculations. Maybe I'm worrying for nothing, but the only way to find out is to do it and see if it hurts. One possibility that's occurring to me right now is to be able to specify what constants you are referring to. This made-up example looks a little like Python. Might be worth stealing.

import GRAVITATIONAL as G;
...
$F_between_two_bodies = G * $mass1 * $mass2 / $distance**2;

I'll be reading Perl 6 Deep Dive over Christmas and I'll let you know how I got on next year. Happy Science-ing!

3 thoughts on "Day 9 – Let's get astroPhysical! – Constants in Perl 6"

Very nice description of a migration from Perl 5! With regards to the version method: you don't have to create that, as you can introspect it. The same goes for all of the other additional naming info.

Great! More boilerplate that I don't have to write. Love it! I was going to ask if this `version` for free was only available for `class`es, because I was wondering about writing my code as a `module` instead. The docs say that you get the `^ver` and `^auth` with modules as well. Nice.
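The made-up example in the post really is nearly valid Python. A runnable sketch for comparison (the GRAVITATIONAL value is the CODATA figure; the masses and distance are just illustrative Earth/Moon numbers, and the module name in the comment is hypothetical):

```python
# What the made-up example would look like as real Python.
# GRAVITATIONAL would normally live in its own module, imported as
# "from physics_constants import GRAVITATIONAL as G" -- that module
# name is made up, so the constant is defined inline here instead.
GRAVITATIONAL = 6.67430e-11  # m^3 kg^-1 s^-2

G = GRAVITATIONAL
mass1 = 5.972e24      # Earth, kg
mass2 = 7.348e22      # Moon, kg
distance = 3.844e8    # mean Earth-Moon distance, m

F_between_two_bodies = G * mass1 * mass2 / distance**2
print(F_between_two_bodies)  # roughly 2e20 newtons
```

Worth noting: Python will not stop you from reassigning GRAVITATIONAL later; the ALL_CAPS name is only a convention, which is exactly the gap Perl 6's constant keyword closes.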
https://perl6advent.wordpress.com/2018/12/09/lets-get-astrophysical-constants-in-perl-6/
So you want to implement an auto-documenting API?

Project description

A Flask extension that implements Swagger support ()

What's Swagger? Swagger is a spec to help you document your APIs. It's flexible and produces beautiful API documentation that can then be used to build API-explorer-type sites. To read more about the Swagger spec, head over to the Swagger project pages.

Git Repository and issue tracker:
Documentation:

Why do I want it?

- You want your API to be easy to read.
- You want other people to be able to use your API easily.
- You'd like to build a really cool API explorer.
- It's Friday night and your friend just ditched on milkshakes.

How do I get it?

From your favorite shell:

$ pip install flask-sillywalk

How do I use it? I'm glad you asked. In order to use this code, you need to first instantiate a SwaggerApiRegistry, which will keep track of all your API endpoints and documentation. Usage:

from flask import Flask
from flask.ext.sillywalk import SwaggerApiRegistry, ApiParameter, ApiErrorResponse

app = Flask("my_api")
registry = SwaggerApiRegistry(
    app,
    baseurl="",
    api_version="1.0",
    api_descriptions={"cheese": "Operations with cheese."})
register = registry.register
registerModel = registry.registerModel

Then, instead of using the "@app.route" decorator that you're used to using with Flask, you use the "register" decorator you defined above (or "registerModel" if you're registering a class that describes a possible API return value). Now that we've got an API registry, we can register some functions. The @register decorator, when just given a path (like @app.route), will register a GET method with no possible parameters. In order to document a method with parameters, we can feed the @register function some parameters. Usage:

@register("/api/v1/cheese/random")
def get_random_cheese():
    """Fetch a random Cheese from the database.
    Throws OutOfCheeseException if this is not a cheese shop."""
    return htmlify(db.cheeses.random())

@register("/api/v1/cheese/<cheeseName>",
    parameters=[
        ApiParameter(
            name="cheeseName",
            description="The name of the cheese to fetch",
            required=True,
            dataType="str",
            paramType="path",
            allowMultiple=False)
    ],
    responseMessages=[
        ApiErrorResponse(400, "Sorry, we're fresh out of that cheese.")
    ])
def get_cheese(cheeseName):
    """Gets a single cheese from the database."""
    return htmlify(db.cheeses.fetch(name=cheeseName))

Now, if you navigate to the documentation endpoint, you should see the automatic API documentation, including the documentation for all the cheese endpoints.

What's left to do? Well, lots, actually. This release:

- Doesn't support XML (but do we really want to?)
- Doesn't support the full swagger spec (e.g. "type" in data models)
- Lots more. Let me know!
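The registry pattern that @register builds on can be sketched framework-free: a decorator factory records each handler, its path, and its metadata in a dictionary, which is everything needed later to render Swagger docs. All names below are illustrative, not the library's actual internals:

```python
# Framework-free sketch of decorator-based endpoint registration,
# the pattern flask-sillywalk's @register builds on. Names here are
# illustrative only; they are NOT the library's real internals.
registry = {}

def register(path, parameters=None):
    """Record the decorated function under its URL path."""
    def decorator(func):
        registry[path] = {
            "handler": func,
            "parameters": parameters or [],
            "doc": func.__doc__,
        }
        return func  # the function itself is unchanged
    return decorator

@register("/api/v1/cheese/random")
def get_random_cheese():
    """Fetch a random Cheese from the database."""
    return "gouda"

# The registry now holds everything a docs page would need:
print(registry["/api/v1/cheese/random"]["doc"])
print(registry["/api/v1/cheese/random"]["handler"]())
```

Because the decorator returns the function untouched, registration is a pure side effect: you can still call get_random_cheese() directly, exactly as with @app.route.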
https://pypi.org/project/Flask-Sillywalk/
I am trying to build a library of game objects of instruments that will, later on, be used in many different configurations to animate and simulate simple Flash builds. The instruments, being interactive, will have a variable that can be set from the Inspector as, e.g., FROM or TO or both, depending on whether the object only inputs, only outputs, or does both. I know the name of the game object but I don't know the name of the script. How can I address that? The documentation writes about GetComponent, but that requires knowing the name of the script ... unless I missed something. Alternatively, I can call all the scripts the same thing, e.g. "script", and place them in different folders, together with the object, textures, etc. But is it wise? My question is essential, as what I try to organize now is what I will have to work with for the years to come. I want the architecture to be correct now, before I end up with a lot of objects that need restructuring. Thanks in advance, Michel

First of all, thank you, everyone, for your answers. You are very kind, especially when you write code to help me; it takes time, I know! Second, ... I am 65 years old and I barely understand. This should be done by someone younger in my company but ... 3D is so much fun that I want to do it myself, even if it takes all my evenings and weekends! ;-) Third, I am terrible at explaining what I want, perhaps because I am not sure I understand it myself! ;-) No, reflections won't work with Flash, and that is also valid for most of the fancy stuff in Unity3D. Yes, I will look at SendMessage, which I already use on two occasions. Perhaps I am making things more difficult than they need to be. But imagine a pump connected to a valve. I drag a pump onto the scene and give it a name. I drag a valve and also give it a name, because I will need many valves and I must be able to identify them by name.
So, I know the names of the GameObjects but I don't know the name of the script, because next time I might not connect the valve to a pump but to a fuel tank. So, when I write e.g. the code:

var From:GameObject;

and I have selected it in the Inspector window, I can do my business to read its variables if I know the script name. One solution, though, is to have all the scripts of objects that can be connected called e.g. mainScript. SendMessage would work to send messages downstream of a flow. But then, it is often the object downstream that needs to look upstream to know what to do next. Cheers, Michel

PS: Am I answering in the correct window of the forum, to address everyone? Excuse an old newbie! ;-)

SendMessage will work in any direction; it is just sending a message to a particular object. Then the engine goes through all the scripts of the object to find the matching function. That is why it is slow and should be avoided. I am just confused about why you should not know the name of the script; if you implement the program, you should know it. If you think you won't remember, write documentation.

Yes, it will go in both directions, and I use it already to find, via a Raycast, which instrument the user is trying to interact with. But I find it simpler if I can directly read a variable when I know which script it is. The reason I may not know the name of the script is this: I may have a valve object with a script named valve. It can be connected to, say, a pump. But perhaps I will need another valve that I will call rotaryValve, also able to connect to a pump. This may contain, in addition to the valve variables, something like maximumRevolution. If they all have a script called the same, I can then have the pump read the valve script without bothering about the type. The thing is: I am the art director of a company making online CBT for seafarers.
In the future, the actual material will be made by our subcontractors in the UK, Poland, India and The Philippines. But if something goes wrong, they'll blame me. I have to set the rules needed to build a library of maritime process objects that can be re-used for many years. Since my company doesn't sell but leases the CBTs, we must be able to update them easily on a yearly basis. Thanks for your interest in my little problem! ;-)

Answer by asafsitner · Jan 16, 2013 at 10:23 AM

There are many ways to go about this. You can use **Interfaces**. If all your instrument scripts implement an IInstrument interface, you can reference them that way, without having to know their specific names. Interfaces allow you to ensure the script you're referencing has the defined behaviour and properties, such as input/output properties, and methods that are shared by all of your instruments, while each of them has its own implementation of the interface's requirements.

Another way to go about it is with an **Abstract Class**. It's very similar to an interface, with the major difference being that your script can't inherit from multiple abstract classes (note that interfaces are implemented while abstract classes are inherited). From my experience they behave more nicely with Unity's serialization and inspector than interfaces, so it's a viable option.

Another way altogether is simply to create a base class, have the scripts inherit from it, and **override** its methods with an implementation of their own. This allows you to define default behaviour straight in the class definition stage and doesn't force you to implement anything in the inherited classes, which is arguably worse practice than the previous options.

On another note, you may also want to take a look at **Generics**, another powerful feature of C# that comes in handy quite often.
:)

Thank you very much but, with experience only in Flash AS2, I started Unity3D writing JavaScript, which I find closer to ActionScript. What you suggest is ... beyond my capacity of understanding. ;-(

@asafsitner, why is a base class with overrides "arguably worse practice than the previous options"?

That's why it's arguably ;) If you develop all subclasses yourself it's not really a problem, unless you can't remember your own implementation. When you provide the base class "as kind of an interface" and want others to write subclasses, it might be a problem. But again, it depends on the case.

But this solution would mean that in the end you know the script name, since you implemented it, but for some reason he cannot explain, he is not able to know the script name beforehand. So the whole Interface/Abstract class approach is somehow not applicable here.

@fafase - While this solution doesn't answer the question in the header, it answers the actual question, in the body. Instead of calling all his scripts 'mainScript' and placing them in different namespaces, he could just as well make them all implement the same interface or inherit the same base class (abstract or not) and refer to that. I think it's a better approach to what he is seeking - a way to access scripts other than by name.
https://answers.unity.com/questions/381529/how-to-address-scripts-without-knowing-their-names.html
Dart example program to replace a substring

Introduction:

In this tutorial, we will learn how to replace a part of a string in Dart. The Dart String class comes with a method called replaceRange that we can use to replace a part of the string with a different string. In this tutorial, we will learn how to use replaceRange with an example in Dart.

Definition:

replaceRange is defined as below:

String replaceRange ( int start, int end, String str )

It replaces the substring from index start (inclusive) to end (exclusive) with the different string str. It returns one new string, i.e. the modified string. The start and end indices should be in a valid range. If the value of end is null, it defaults to the length of the string.

Example:

import "dart:core";

void main() {
  final givenString = "hello world";
  final newString = givenString.replaceRange(0, 3, "1");
  print(newString);
}

It will print the below output:

1lo world

As you can see in the above example, it replaced the part of the string from index 0 (inclusive) to 3 (exclusive) with 1. Try to run the above example with different indices and different strings to learn more about how replaceRange works.
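For comparison, the same operation in Python: Python strings have no replaceRange method, but slicing with the same inclusive-start/exclusive-end convention does the job (the helper name replace_range is made up for this sketch):

```python
# Python equivalent of Dart's replaceRange, built from string slices.
# Like Dart, start is inclusive and end is exclusive, and a new string
# is returned because strings are immutable in both languages.
def replace_range(s, start, end, repl):
    return s[:start] + repl + s[end:]

print(replace_range("hello world", 0, 3, "1"))  # 1lo world
```

Because both languages return a new string rather than mutating in place, the original "hello world" is left untouched either way.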
https://www.codevscolor.com/dart-replace-substring/
I want to use only the stdio.h library to convert a decimal number to a binary number, using an array to store the remainders, but the result is not correct. Maybe I have a problem with memory allocation or the return value is wrong; please help me check it. Thank you so much!

#include <stdio.h>

int n = 0;

int* DecimalToBinary(int number){
    int a[10];
    while(number != 0){
        a[n++] = number % 2;
        number /= 2;
    }
    return a;
}

void main(){
    int *d1 = DecimalToBinary(5);
    int *d2 = DecimalToBinary(10);
    for(int i = n-1; i >= 0; i--)
        printf(" %d", d1[i]);
    printf("\n");
    for(int i = n-1; i >= 0; i--)
        printf(" %d", d2[i]);
}

You are returning a pointer to a locally allocated array. It is allocated on the stack, and goes away when the function returns, leaving your pointer pointing to garbage. You have a few options.

You could pass an array in to fill:

void DecimalToBinary(int result[10], int number){
    while(number != 0){
        result[n++] = number % 2;
        number /= 2;
    }
}

// usage example:
int b[10];
DecimalToBinary(b, 42);

Or you could allocate an array on the heap:

int* DecimalToBinary(int number){
    int *a = (int *)malloc(sizeof(int) * 10);
    while(number != 0){
        a[n++] = number % 2;
        number /= 2;
    }
    return a;
}

// usage example
int *b = DecimalToBinary(42);
free(b); // when finished with it

Or you could wrap the array in a struct:

typedef struct {
    int b[10];
} result;

result DecimalToBinary(int number){
    result r;
    while(number != 0){
        r.b[n++] = number % 2;
        number /= 2;
    }
    return r;
}

// usage example
result r = DecimalToBinary(42);

If you do the malloc() option, do not forget to free() the returned data when you're done with it, otherwise it will hang around. This is called a memory leak. In more complex programs, it can lead to serious issues.

Note: By the way, if your number is larger than 1023 (10 binary digits), you'll overrun the array. You may also wish to explicitly stop once you've stored 10 digits, or pass the size of the array in, or compute the required size first and allocate that much space.
Also, you will get some odd results if your number is negative; you might want to use number&1 instead of number%2.

Note 2: As noted elsewhere, you should make n local, or at the very least reinitialize it to 0 each time the function is called, otherwise it will just accumulate and eventually you'll go past the end of the array.

You return a pointer to a local array. That local array is on the stack, and when the function returns the array goes out of scope and that stack memory will be reused when you call the next function. This means that the pointer will now point to some other data, and not the original array. There are two solutions to this:

1. Create the array outside DecimalToBinary and pass it as an argument.
2. Allocate the array on the heap (with malloc) and return that pointer.

The problem with method 2 is that it might create a memory leak if you don't free the returned pointer.

As noted by Craig, there is a third solution: make the array static inside the function. However, in this case it brings other and bigger problems than the two solutions I originally listed, and that's why I didn't list it. There is also another serious problem with the code, as noted by Uchia Itachi, and that is that the array is indexed by a global variable. If the DecimalToBinary function is called with a too-big number, or too many times, this global index variable will be too big for the array and the access will be out of bounds. Both the problem of dereferencing a pointer to an out-of-scope array and the indexing out of bounds lead to undefined behavior. Undefined behavior will, if you're lucky, just lead to the wrong result being printed. If you're unlucky it will cause the program to crash.

int[10] is not the same as int *; not only is the former created on the stack, it is a different type altogether. You need to create an actual int * like so:

int *a = malloc (10 * sizeof (int));

Of course, don't forget to free() it after use!
What you can also do, and what is commonly done in C, is create the array in the caller and pass a pointer to it into the function. This way the array lives on the stack of the calling function, not the called one. We also have to pass the size of the array to the function, since the function cannot know how many elements the pointer points to.

void DecimalToBinary( int number, int* output, unsigned size )
{
    /* adapt this to your liking */
    int i;
    for ( i = 0; i < size && number != 0; i++ ) {
        output[i] = number % 2;
        number /= 2;
    }
}

and in your main function you would call it like this:

int array[10];
DecimalToBinary( 5, array, sizeof(array)/sizeof(array[0]) );

now array has the same result as a would have had in your example.

The problem in your code lies here:

int * DecimalToBinary(int number){
    int a[10];
    while(number != 0){
        a[n++] = number % 2;
        number /= 2;
    }
    return a;
}

The array a's scope is limited to this function. Once this function terminates, the memory allocated for this array will be released, so you either need to use dynamic memory allocation or make the array global.

This is the correct program:

#include <stdio.h>

int n = 0;
int a[10] = {0};

int* DecimalToBinary(int number){
    n = 0;
    while(number != 0){
        a[n++] = number % 2;
        number = number / 2;
    }
    return a;
}

int main(){
    int *d1;
    int *d2;
    int i;

    d1 = DecimalToBinary(5);
    for(i = n-1; i >= 0; i--)
        printf(" %d", d1[i]);
    printf("\n");

    d2 = DecimalToBinary(10);
    for(i = n-1; i >= 0; i--)
        printf(" %d", d2[i]);
    printf("\n");
}
http://m.dlxedu.com/m/askdetail/3/4c61f03639daaed92eaa5a1e2dfbf01b.html
FilePickerViewMode

Since: BlackBerry 10.0.0

#include <bb/cascades/pickers/FilePickerViewMode>

To link against this class, add the following line to your .pro file:

LIBS += -lbbcascadespickers

Defines view modes in FilePicker. If the view mode is not specified, FilePicker will choose the view mode based on the FileType, for example, GridView for pictures and videos and ListView for documents and music.

Public Types

ViewModes associated with FilePicker:

- Default: FilePicker will choose the view mode based on the type of files being displayed. Since: BlackBerry 10.0.0
- ListView: Displays files and folders in a list. Since: BlackBerry 10.0.0
- GridView: Displays files and folders in a grid. Since: BlackBerry 10.0.0
https://developer.blackberry.com/native/reference/cascades/bb__cascades__pickers__filepickerviewmode.html
What's New in ASP.NET MVC 4

Download Web Camps Training Kit

ASP.NET MVC 4 is a framework for building scalable, standards-based web applications using well-established design patterns and the power of ASP.NET and the .NET Framework. This new, fourth version of the framework focuses on making mobile web application development easier. To begin with, when you create a new ASP.NET MVC 4 project there is now a mobile application project template you can use to build a standalone app specifically for mobile devices. Additionally, ASP.NET MVC 4 integrates with jQuery Mobile through a jQuery.Mobile.MVC NuGet package. jQuery Mobile is an HTML5-based framework for developing web apps that are compatible with all popular mobile device platforms, including Windows Phone, iPhone, Android and so on. However, if you need specialization, ASP.NET MVC 4 also enables you to serve different views for different devices and provide device-specific optimizations.

In this hands-on lab, you will start with the ASP.NET MVC 4 "Internet Application" project template to create a Photo Gallery application. You will progressively enhance the app using jQuery Mobile and ASP.NET MVC 4's new features to make it compatible with different mobile devices and desktop web browsers. You will also learn about new code recipes for code generation and how ASP.NET MVC 4 makes it easier for you to write asynchronous action methods by supporting Task<ActionResult> return types.

Note: All sample code and snippets are included in the Web Camps Training Kit, available at Microsoft-Web/WebCampTrainingKit Releases. The project specific to this lab is available at What's New in Web Forms in ASP.NET 4.5.

Objectives

In this hands-on lab, you will learn how to:

Prerequisites

You must have the following items to complete this lab:

- Microsoft Visual Studio Express 2012 for Web or superior (read Appendix B for instructions on how to install it).
- ASP.NET MVC 4 (included in the Microsoft Visual Studio 2012 installation)
- Windows Phone Emulator (included in the Windows Phone 7.1.1 SDK)
- Optional: WebMatrix 2 with the Electric Plum iPhone Simulator extension (only for Exercise 3, used to browse the web application with an iPhone simulator)

Setup

Throughout the lab document, you will be instructed to insert code blocks. For your convenience, most of that code is provided as Visual Studio Code Snippets, which you can use from within Visual Studio to avoid having to add it manually.

To install the code snippets:

- Open a Windows Explorer window and browse to the lab's Source\Setup folder.
- Double-click the Setup.cmd file in this folder to install the Visual Studio code snippets.

If you are not familiar with the Visual Studio Code Snippets, and want to learn how to use them, you can refer to the appendix from this document, "Appendix A: Using Code Snippets".

Exercises

This hands-on lab includes the following exercises:

- New ASP.NET MVC 4 Project Templates
- Creating the Photo Gallery Web Application
- Adding Support for Mobile Devices
- Using Asynchronous Controllers

Note: Each exercise is accompanied by an End folder containing the resulting solution you should obtain after completing the exercises. You can use this solution as a guide if you need additional help working through the exercises.

Estimated time to complete this lab: 60 minutes.

Exercise 1: New ASP.NET MVC 4 Project Templates

In this exercise, you will explore the enhancements in the ASP.NET MVC 4 project templates. In addition to the Internet Application template, already present in MVC 3, this version now includes a separate template for mobile applications. First, you will look at some relevant features of each of the templates. Then, you will work on rendering your page properly on the different platforms by using the right approach.

Task 1 - Exploring the Internet Application Template

Open Visual Studio.
Select the File | New | Project menu command. In the New Project dialog, select the Visual C# | Web template on the left pane tree, and choose ASP.NET MVC 4 Web Application. Name the project PhotoGallery, select a location (or leave the default) and click OK.

Note: You will later customize the PhotoGallery ASP.NET MVC 4 solution you are now creating.

Creating a new project

In the New ASP.NET MVC 4 Project dialog, select the Internet Application project template and click OK. Make sure you have selected Razor as the view engine.

Creating a new ASP.NET MVC 4 Internet Application

Note: Razor syntax was introduced in ASP.NET MVC 3. Its goal is to minimize the number of characters and keystrokes required in a file, enabling a fast and fluid coding workflow. Razor leverages existing C# / VB (or other) language skills and delivers a template markup syntax that enables an awesome HTML construction workflow.

Press F5 to run the solution and see the renewed templates. You can check out the following features:

Modern-style templates

The templates have been renewed, providing more modern-looking styles.

ASP.NET MVC 4 restyled templates

New Contact page

Adaptive Rendering

Try resizing the browser window and notice how the page layout dynamically adapts to the new window size. These templates use the adaptive rendering technique to render properly on both desktop and mobile platforms without any customization.

ASP.NET MVC 4 project template in different browser sizes

Richer UI with JavaScript

Another enhancement to the default project templates is the use of JavaScript to provide a more interactive user experience. The Login and Register links used in the template exemplify how to use jQuery Validation to validate the input fields on the client side.
jQuery Validation

Note Notice the two log-in sections: in the first section you can log in using an account registered on the site, and in the second section you can alternatively log in using another authentication service, like Google (disabled by default).

Close the browser to stop the debugger and return to Visual Studio.

Open the file AuthConfig.cs located under the App_Start folder. Uncomment the last line of the RegisterAuth method to register the Google client for OAuth authentication:

OAuthWebSecurity.RegisterGoogleClient();

Note Notice you can easily enable authentication using any OpenID or OAuth service, like Facebook, Twitter, Microsoft, etc.

Press F5 to run the solution and navigate to the login page. Select the Google service to log in.

Selecting the log in service

Log in using your Google account.

Allow the site (localhost) to retrieve information from your Google account.

Finally, you will have to register in the site to associate the Google account.

Associating your Google account

Close the browser to stop the debugger and return to Visual Studio.

Now explore the solution to check out some other new features introduced by ASP.NET MVC 4 in the project template.

The ASP.NET MVC 4 Internet Application Project Template

HTML 5 Markup

Browse template views to find out the new theme markup.

New template, using Razor and HTML5 markup (About.cshtml).

Updated JavaScript libraries

The ASP.NET MVC 4 default template now includes KnockoutJS, a JavaScript MVVM framework that lets you create rich and highly responsive web applications using JavaScript and HTML. As in MVC 3, the jQuery and jQuery UI libraries are also included in ASP.NET MVC 4.

Note You can get more information about the KnockoutJS library, as well as jQuery and jQuery UI, in their respective project documentation.

Task 2 - Exploring the Mobile Application Template

ASP.NET MVC 4 facilitates the development of websites for mobile and tablet browsers.
This template has the same application structure as the Internet Application template (notice that the controller code is practically identical), but its style was modified to render properly on touch-based mobile devices.

Select the File | New | Project menu command. In the New Project dialog, select the Visual C# | Web template on the left pane tree, and choose the ASP.NET MVC 4 Web Application. Name the project PhotoGallery.Mobile, select a location (or leave the default), select "Add to solution" and click OK.

In the New ASP.NET MVC 4 Project dialog, select the Mobile Application project template and click OK. Make sure you have selected Razor as the view engine.

Creating a new ASP.NET MVC 4 Mobile Application

Now you are able to explore the solution and check out some of the new features introduced by the ASP.NET MVC 4 solution template for mobile:

jQuery Mobile Library

The Mobile Application project template includes the jQuery Mobile library, which is an open source library for mobile browser compatibility. jQuery Mobile applies progressive enhancement to mobile browsers that support CSS and JavaScript. Progressive enhancement enables all browsers to display the basic content of a web page, while enabling only the most powerful browsers to display the rich content. The JavaScript and CSS files included in the jQuery Mobile style help mobile browsers fit the content on the screen without making any change to the page markup.

jQuery mobile library included in the template

HTML5 based markup

Mobile application template using HTML5 markup (Login.cshtml and index.cshtml)

Press F5 to run the solution.

Open the Windows Phone 7 Emulator. In the phone start screen, open Internet Explorer. Check out the URL where the desktop application started and browse to that URL from the phone (e.g. http://localhost:[PortNumber]/). Now you are able to enter the login page or check out the about page. Notice that the style of the website is based on the new Metro style for mobile.
The ASP.NET MVC 4 project template is correctly displayed on mobile devices, making sure all the elements of the page are visible and enabled. Notice that the links on the header are big enough to be clicked or tapped.

Project template pages in a mobile device

The new template also uses the Viewport meta tag. Most mobile browsers define a width for a virtual browser window or "viewport", which is larger than the actual width of the mobile device. This enables mobile browsers to display the entire web page inside the virtual display. The Viewport meta tag allows web developers to set the width, height and scale of the browser area on mobile devices. The ASP.NET MVC 4 template for Mobile Applications sets the viewport to the device width ("width=device-width") in the layout template (Views\Shared\_Layout.cshtml), so that all the pages will have their viewport set to the device screen width. Notice that the Viewport meta tag alone will not change the default browser view.

Open _Layout.cshtml, located in the Views | Shared folder, and comment out the Viewport meta tag. Run the application, if not already running, and check out the differences.

...
<meta charset="utf-8" />
<title>@ViewBag.Title</title>
@* <meta name="viewport" content="width=device-width" /> *@
...

The site after commenting the viewport meta tag

In Visual Studio, press SHIFT + F5 to stop debugging the application.

Uncomment the viewport meta tag.

...
<meta charset="utf-8" />
<title>@ViewBag.Title</title>
<meta name="viewport" content="width=device-width" />
...

Task 3 - Using Adaptive Rendering

In this task, you will learn another method to render a Web page correctly on mobile devices and Web browsers at the same time, without any customization. You have already used the Viewport meta tag with a similar purpose. Now you will meet another powerful method: adaptive rendering. Adaptive rendering is a technique that uses CSS3 media queries to customize the style applied to a page.
Media queries define conditions inside a style sheet, grouping CSS styles under a certain condition. The style is applied to the declared objects only when the condition is true. The flexibility provided by the adaptive rendering technique enables any customization for displaying the site on different devices. You can define as many styles as you want on a single style sheet without writing logic code to choose the style. Therefore, it is a very neat way of adapting page styles, as it reduces the amount of duplicated code and logic for rendering purposes. On the other hand, bandwidth consumption could increase, as the size of your CSS files may grow marginally.

By using the adaptive rendering technique, your site will be displayed properly regardless of the browser. However, you should consider whether the extra bandwidth load is a concern.

Note The basic format of a media query is: @media [Scope: all | handheld | print | projection | screen] ([property:value] and ... [property:value])

Examples of media queries:

@media all and (max-width: 1000px) and (min-width: 700px) { ... }: For all the resolutions between 700px and 1000px.

@media screen and (min-width: 400px) and (max-width: 700px) { ... }: Only for screens. The resolution must be between 400px and 700px.

@media handheld and (min-width: 20em), screen and (min-width: 20em) { ... }: For handhelds (mobile devices) and screens. The minimum width must be greater than 20em.

You can find more information about this on the W3C site.

You will now explore how adaptive rendering works, improving the readability of the ASP.NET MVC 4 default website template.

Open the PhotoGallery.sln solution you have created in Task 1 and select the PhotoGallery project.

Press F5 to run the solution.

Resize the browser's width, setting the window to half or less than a quarter of its original size. Notice what happens with the items in the header: some elements will not appear in the visible area of the header.
Open the Site.css file from the Visual Studio Solution Explorer, located in the Content project folder. Press CTRL + F to open the Visual Studio integrated search, and type @media to locate the CSS media query.

The media query condition defined in this template works this way: when the browser's window size is below 850px, the CSS rules applied are the ones defined inside this media block.

Locating the media query

Replace the max-width attribute value of 850px with 10px, in order to disable the adaptive rendering, and press CTRL + S to save the changes.

Return to the browser and press CTRL + F5 to refresh the page with the changes you have made. Notice the differences in both pages when adjusting the width of the window.

On the left, the page is applying the @media style; on the right, the style is omitted

Now, let's check out what happens on mobile devices:

On the left, the page is applying the @media style; on the right, the style is omitted

Although you will notice that the changes when the page is rendered in a Web browser are not very significant, when using a mobile device the differences become more obvious. On the left side of the image, we can see that the custom style improved the readability.

Adaptive rendering can be used in many more scenarios, making it easier to apply conditional styling to a Web site and solving common style issues with a neat approach. The Viewport meta tag and CSS media queries are not specific to ASP.NET MVC 4, so you can take advantage of these features in any web application.

In Visual Studio, press SHIFT + F5 to stop debugging the application.
Exercise 2: Creating the Photo Gallery Web Application In this exercise, you will work on a Photo Gallery application to display photos. You will start with the ASP.NET MVC 4 project template, and then you will add a feature to retrieve photos from a service and display them in the home page. In the following exercise, you will update this solution to enhance the way it is displayed on mobile devices. Task 1 - Creating a Mock Photo Service In this task, you will create a mock of the photo service to retrieve the content that will be displayed in the gallery. To do this, you will add a new controller that will simply return a JSON file with the data of each photo. Open Visual Studio if not already opened. Select the File | New | Project menu command. In the New Project dialog, select the Visual C# | Web template on the left pane tree, and choose ASP.NET MVC 4 Web Application. Name the project PhotoGallery, select a location (or leave the default) and click OK. Alternatively, you can continue working from your existing ASP.NET MVC 4 Internet Application solution from Exercise 1 and skip the next step. In the New ASP.NET MVC 4 Project dialog box, select the Internet Application project template and click OK. Make sure you have Razor selected as the View Engine. In the Solution Explorer, right-click the App_Data folder of your project, and select Add | Existing Item. Browse to the Source\Assets\App_Data folder of this lab and add the Photos.json file. Create a new controller with the name PhotoController. To do this, right-click on the Controllers folder, go to Add and select Controller. Complete the controller name, leave the Empty MVC controller template and click Add. Adding the PhotoController Replace the Index method with the following Gallery action, and return the content from the JSON file you have recently added to the project. 
(Code Snippet - ASP.NET MVC 4 Lab - Ex02 - Gallery Action)

public class PhotoController : Controller
{
    public ActionResult Gallery()
    {
        return this.File("~/App_Data/Photos.json", "application/json");
    }
}

Press F5 to run the solution, and then browse to the following URL in order to test the mocked photo service: http://localhost:[port]/photo/gallery (the [port] value depends on the current port where the application was launched). The request to this URL should retrieve the content of the Photos.json file.

Testing the mocked photo service

In a real implementation you could use ASP.NET Web API to implement the Photo Gallery service. ASP.NET Web API is a framework that makes it easy to build HTTP services that reach a broad range of clients, including browsers and mobile devices. ASP.NET Web API is an ideal platform for building RESTful applications on the .NET Framework.

Task 2 - Displaying the Photo Gallery

In this task, you will update the Home page to show the photo gallery by using the mocked service you created in the first task of this exercise. You will add model files and update the gallery views.

In Visual Studio, press SHIFT + F5 to stop debugging the application.

Create the Photo class in the Models folder. To do this, right-click on the Models folder, select Add and click Class. Then, set the name to Photo.cs and click Add.

Add the following members to the Photo class.

(Code Snippet - ASP.NET MVC 4 Lab - Ex02 - Photo model)

public class Photo
{
    public string Title { get; set; }
    public string FileName { get; set; }
    public string Description { get; set; }
    public DateTime UploadDate { get; set; }
}

Open the HomeController.cs file from the Controllers folder. Add the following using statements.
(Code Snippet - ASP.NET MVC 4 Lab - Ex02 - HomeController Usings)

using System.Net.Http;
using System.Web.Script.Serialization;
using Newtonsoft.Json;
using PhotoGallery.Models;

Update the Index action to use HttpClient to retrieve the gallery data, and then use Json.NET's JsonConvert to deserialize it to the view model.

(Code Snippet - ASP.NET MVC 4 Lab - Ex02 - Index Action)

public ActionResult Index()
{
    var client = new HttpClient();
    var response = client.GetAsync(Url.Action("gallery", "photo", null, Request.Url.Scheme)).Result;
    var value = response.Content.ReadAsStringAsync().Result;
    var result = JsonConvert.DeserializeObject<List<Photo>>(value);
    return View(result);
}

Open the Index.cshtml file located under the Views\Home folder and replace all the content with the following code. This code loops through all the photos retrieved from the service and displays them in an unordered list.

(Code Snippet - ASP.NET MVC 4 Lab - Ex02 - Photo List)

@model List<PhotoGallery.Models.Photo>

@{
    ViewBag.Title = "Home Page";
}

<ul>
    @foreach (var photo in Model) {
        <li class="item">
            <a href="@Url.Content("~/photos/" + photo.FileName)">
                <img alt="@photo.Description" src="@Url.Content("~/photos/" + photo.FileName)" class="thumbnail-border" width="180" />
            </a>
            <span class="image-overlay">@photo.Title</span>
        </li>
    }
</ul>

In the Solution Explorer, right-click the Content folder of your project, and select Add | Existing Item. Browse to the Source\Assets\Content folder of this lab and add the Site.css file. You will have to confirm its replacement. If you have the Site.css file open, you will also have to confirm reloading the file.

Open File Explorer and copy the entire Photos folder located under the Source\Assets folder of this lab to the root folder of your project in Solution Explorer.

Run the application. You should now see the home page displaying the photos in the gallery.

Photo Gallery

In Visual Studio, press SHIFT + F5 to stop debugging the application.
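If you want to sanity-check the deserialization step in isolation, the JsonConvert call can be exercised in a plain console program. The sketch below is not part of the lab; the sample JSON string is made up, but it matches the shape of the Photo model above:

```csharp
// Standalone sketch (not part of the lab): deserializing the same JSON
// shape the Gallery action serves, using Json.NET's JsonConvert.
using System;
using System.Collections.Generic;
using Newtonsoft.Json;

public class Photo
{
    public string Title { get; set; }
    public string FileName { get; set; }
    public string Description { get; set; }
    public DateTime UploadDate { get; set; }
}

class Demo
{
    static void Main()
    {
        // Hypothetical sample data, not the lab's Photos.json file.
        var json = @"[{""Title"":""Tulips"",""FileName"":""Tulips.jpg""}]";
        var photos = JsonConvert.DeserializeObject<List<Photo>>(json);
        Console.WriteLine(photos[0].Title); // prints "Tulips"
    }
}
```

Missing properties (Description, UploadDate here) are simply left at their default values, which is why the Gallery service can evolve without breaking the view model.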
Exercise 3: Adding Support for Mobile Devices

One of the key updates in ASP.NET MVC 4 is the support for mobile development. In this exercise, you will explore the ASP.NET MVC 4 new features for mobile applications by extending the PhotoGallery solution you have created in the previous exercise.

Task 1 - Installing jQuery Mobile in an ASP.NET MVC 4 Application

Open the Begin solution located at Source/Ex3-MobileSupport. Then, open the NuGet Package Manager Console by clicking the Tools > NuGet Package Manager > Package Manager Console menu option.

Opening the NuGet Package Manager Console

In the Package Manager Console, run the following command to install the jQuery.Mobile.MVC package. jQuery Mobile is an open source library for building touch-optimized web UI. The jQuery.Mobile.MVC NuGet package includes helpers to use jQuery Mobile with an ASP.NET MVC 4 application.

Note By running the following command, you will be downloading the jQuery.Mobile.MVC library from NuGet.

PM> Install-Package jQuery.Mobile.MVC

This command installs jQuery Mobile and some helper files, including the following:

Views/Shared/_Layout.Mobile.cshtml: a jQuery Mobile-based layout optimized for a smaller screen. When the website receives a request from a mobile browser, it will replace the original layout (_Layout.cshtml) with this one.

A view-switcher component: consists of the Views/Shared/_ViewSwitcher.cshtml partial view and the ViewSwitcherController.cs controller. This component will show a link on mobile browsers to enable users to switch to the desktop version of the page.

Photo Gallery project with mobile support

Register the mobile bundles. To do this, open the Global.asax.cs file and add the BundleMobileConfig line shown below to the Application_Start method.

(Code Snippet - ASP.NET MVC 4 Lab - Ex03 - Register Mobile Bundles)

protected void Application_Start()
{
    // ...
    BundleMobileConfig.RegisterBundles(BundleTable.Bundles);
    AuthConfig.RegisterAuth();
}

Run the application using a desktop web browser.
Open the Windows Phone 7 Emulator, located in Start Menu | All Programs | Windows Phone SDK 7.1 | Windows Phone Emulator.

In the phone start screen, open Internet Explorer. Check out the URL where the application started and browse to that URL with the phone browser (e.g. http://localhost:[PortNumber]/).

You will notice that your application looks different in the Windows Phone emulator, as the jQuery.Mobile.MVC package has created new assets in your project that show views optimized for mobile devices. Notice the message at the top of the phone, showing the link that switches to the Desktop view. Additionally, the _Layout.Mobile.cshtml layout that was created by the package you have installed is providing a different layout for the application.

Note So far, there is no link to get back to the mobile view. It will be included in later versions.

Mobile view of the Photo Gallery Home page

In Visual Studio, press SHIFT + F5 to stop debugging the application.

Task 2 - Creating Mobile Views

In this task, you will create a mobile version of the Index view with content adapted for better appearance on mobile devices.

Copy the Views\Home\Index.cshtml view and paste it to create a copy, then rename the new file to Index.Mobile.cshtml.

Open the newly created Index.Mobile.cshtml view and replace the existing <ul> tag with this code. By doing this, you will be updating the <ul> tag with jQuery Mobile data annotations to use the mobile themes from jQuery.

<ul data-role="listview" data-inset="true" data-filter="true">

Note Notice that:

The data-role attribute set to listview will render the list using the listview styles.

The data-inset attribute set to true will show the list with a rounded border and margin.

The data-filter attribute set to true will generate a search box.

You can learn more about jQuery Mobile conventions in the project documentation.

Press CTRL + S to save the changes.

Switch to the Windows Phone Emulator and refresh the site. Notice the new look and feel of the gallery list, as well as the new search box located at the top.
Then, type a word in the search box (for instance, Tulips) to start a search in the photo gallery.

Gallery using listview style with filtering

To summarize, you have created a copy of the Index view with the ".Mobile" suffix. This suffix indicates to ASP.NET MVC 4 that every request generated from a mobile device will use this copy of the view. Additionally, you have updated the mobile version of the Index view to use jQuery Mobile for enhancing the site's look and feel on mobile devices.

Go back to Visual Studio and open Site.Mobile.css located under the Content folder.

Fix the positioning of the photo title to make it show at the right side of the image. To do this, add the following code to the Site.Mobile.css file.

CSS

.ui-li .ui-btn-inner a.ui-link-inherit, .ui-li-static.ui-li
{
    padding: 0px !important;
}

li.item span.image-overlay
{
    position: relative;
    left: 100px;
    top: -40px;
    height: 0px;
    display: block;
}

Press CTRL + S to save the changes.

Switch back to the Windows Phone Emulator and refresh the site. Notice the photo title is properly positioned now.

Title positioned on the right side of the image

Task 3 - jQuery Mobile Themes

Every layout and widget in jQuery Mobile is designed around a new object-oriented CSS framework that makes it possible to apply a complete unified visual design theme to sites and applications. jQuery Mobile's default theme includes 5 swatches that are given letters (a, b, c, d, e) for quick reference. In this task, you will update the mobile layout to use a theme different from the default one.

Switch back to Visual Studio.

Open the _Layout.Mobile.cshtml file located in Views\Shared. Find the div element with the data-role attribute set to "page" and update its data-theme attribute to "e".

<div data-role="page" data-theme="e">

Press CTRL + S to save the changes.

Refresh the site in the Windows Phone Emulator and notice the new color scheme.
Mobile layout with a different color scheme

Task 4 - Using the View-Switcher Component and Browser Overriding

The view-switcher component is used in the _Layout.Mobile.cshtml view to render a link that switches to the Desktop view.

Link to switch to Desktop View

The view switcher uses a new feature called Browser Overriding. This feature lets your application treat requests as if they were coming from a different browser (user agent) than the one they are actually coming from. In this task, you will explore the sample implementation of a view-switcher added by jQuery.Mobile.MVC and the new browser overriding features in ASP.NET MVC 4.

Switch back to Visual Studio.

Open the _Layout.Mobile.cshtml view located under the Views\Shared folder and notice the view-switcher component being referenced as a partial view.

Mobile layout using View-Switcher component

Open the _ViewSwitcher.cshtml partial view. The partial view uses the new ViewContext.HttpContext.GetOverriddenBrowser() method to determine the origin of the web request and show the corresponding link to switch either to the Desktop or the Mobile view. The GetOverriddenBrowser method returns an HttpBrowserCapabilitiesBase instance that corresponds to the user agent currently set for the request (actual or overridden). You can use this value to get properties such as IsMobileDevice.

ViewSwitcher partial view

Open the ViewSwitcherController.cs class located in the Controllers folder. Notice that the SwitchView action is called by the link in the ViewSwitcher component, and check out the new HttpContext methods:

The HttpContext.ClearOverriddenBrowser() method removes any overridden user agent for the current request.

The HttpContext.SetOverriddenBrowser() method overrides the request's actual user agent value using the specified user agent.

ViewSwitcher Controller

Browser Overriding is a core feature of ASP.NET MVC 4, which is available even if you do not install the jQuery.Mobile.MVC package.
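To make the switching logic described above concrete, here is a sketch that approximates the SwitchView action the jQuery.Mobile.MVC package generates; check the ViewSwitcherController.cs file in your own project for the exact code, as this is a reconstruction:

```csharp
// Approximation of the view-switcher controller installed by
// jQuery.Mobile.MVC (consult the generated file for the exact source).
using System.Web.Mvc;
using System.Web.WebPages;

public class ViewSwitcherController : Controller
{
    public RedirectResult SwitchView(bool mobile, string returnUrl)
    {
        // If the requested mode already matches the browser's real
        // capabilities, drop any override; otherwise force the mode.
        if (Request.Browser.IsMobileDevice == mobile)
            HttpContext.ClearOverriddenBrowser();
        else
            HttpContext.SetOverriddenBrowser(
                mobile ? BrowserOverride.Mobile : BrowserOverride.Desktop);

        return Redirect(returnUrl);
    }
}
```

The override is stored per user (in a cookie), so subsequent requests keep rendering the chosen Desktop or Mobile views until the override is cleared.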
However, this feature affects only views, layouts, and partial views; it does not affect any other feature that depends on the Request.Browser object.

Task 5 - Adding the View-Switcher in the Desktop View

In this task, you will update the desktop layout to include the view-switcher. This will allow mobile users to go back to the mobile view when browsing the desktop view.

Refresh the site in the Windows Phone Emulator. Click on the Desktop view link at the top of the gallery. Notice there is no view-switcher in the desktop view to allow you to return to the mobile view.

Go back to Visual Studio and open the _Layout.cshtml view. Find the login section and insert a call to render the _ViewSwitcher partial view below the _LogOnPartial partial view. Then, press CTRL + S to save the changes.

<div class="float-right">
    <section id="login">
        @Html.Partial("_LogOnPartial")
        @Html.Partial("_ViewSwitcher")
    </section>
    <nav>

Refresh the page in the Windows Phone Emulator and double-click the screen to zoom in. Notice that the home page now shows the Mobile view link that switches back to the mobile view.

View Switcher rendered in desktop view

Switch to the Mobile view again and browse to the About page (http://localhost:[port]/Home/About). Notice that, even though you haven't created an About.Mobile.cshtml view, the About page is displayed using the mobile layout (_Layout.Mobile.cshtml).

About page

Finally, open the site in a desktop Web browser. Notice that none of the previous updates has affected the desktop view.

PhotoGallery desktop view

Task 6 - Creating New Display Modes

The new display modes feature lets an application select views depending on the browser that is generating the request. For example, if a desktop browser requests the Home page, the application will return the Views\Home\Index.cshtml template. Then, if a mobile browser requests the Home page, the application will return the Views\Home\Index.mobile.cshtml template.
In this task, you will create a customized layout for iPhone devices, and you will have to simulate requests from iPhone devices. To do this, you can use either an iPhone emulator/simulator (like Electric Mobile Simulator) or a browser with add-ons that modify the user agent. For instructions on how to set the user agent string in a Safari browser to emulate an iPhone, see "How to let Safari pretend it's IE" in David Alison's blog.

Notice that this task is optional and you can continue throughout the lab without executing it.

In Visual Studio, press SHIFT + F5 to stop debugging the application.

Open Global.asax.cs and add the following using statement.

using System.Web.WebPages;

Add the following highlighted code into the Application_Start method.

(Code Snippet - ASP.NET MVC 4 Lab - Ex03 - iPhone DisplayMode)

protected void Application_Start()
{
    // ...

    DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode("iPhone")
    {
        ContextCondition = context =>
            context.Request.UserAgent != null &&
            context.Request.UserAgent.IndexOf("iPhone", StringComparison.OrdinalIgnoreCase) >= 0
    });
}

You have registered a new DefaultDisplayMode named "iPhone", within the static DisplayModeProvider.Instance.Modes list, that will be matched against each incoming request. If the incoming request's user agent contains the string "iPhone", ASP.NET MVC will look for views whose names contain the "iPhone" suffix. Inserting the mode at index 0 makes it more specific than the general ".mobile" rule that matches requests from any mobile device, so it is evaluated first.

After this code runs, when an iPhone browser generates a request, your application will use the Views\Shared\_Layout.iPhone.cshtml layout you will create in the next steps.

Note This way of testing the request for iPhone has been simplified for demo purposes and might not work as expected for every iPhone user agent string.
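The same registration pattern extends to any device family. As a purely illustrative example (the "WP" suffix and the substring check below are assumptions for illustration, not part of this lab), the following mode would make ASP.NET MVC prefer views such as Index.WP.cshtml for Windows Phone user agents:

```csharp
// Hypothetical additional display mode (illustration only): requests whose
// user agent contains "Windows Phone" prefer views with a ".WP" suffix
// (e.g. Index.WP.cshtml), falling back to ".mobile" and then the default.
using System;
using System.Web.WebPages;

DisplayModeProvider.Instance.Modes.Insert(0, new DefaultDisplayMode("WP")
{
    ContextCondition = context =>
        context.Request.UserAgent != null &&
        context.Request.UserAgent.IndexOf("Windows Phone",
            StringComparison.OrdinalIgnoreCase) >= 0
});
```

As with the iPhone mode, the insertion index controls precedence: modes earlier in the list win when several conditions match the same request.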
- Create a copy of the _Layout.Mobile.cshtml file in the Views\Shared folder and rename the copy to "_Layout.iPhone.cshtml".
- Open the _Layout.iPhone.cshtml file you created in the previous step.
- Find the div element with the data-role attribute set to page and change the data-theme attribute to "a".

<body>
    <div data-role="page" data-theme="a">
        @Html.Partial("_ViewSwitcher")
        ...

Now you have 3 layouts in your ASP.NET MVC 4 application:

_Layout.cshtml: default layout used for desktop browsers.

_Layout.mobile.cshtml: default layout used for mobile devices.

_Layout.iPhone.cshtml: specific layout for iPhone devices, using a different color scheme to differentiate it from _Layout.mobile.cshtml.

Press F5 to run the application and browse the site in the Windows Phone Emulator.

Open an iPhone simulator (see Appendix C for instructions on how to install and configure an iPhone simulator), and browse to the site too. Notice that each phone is using its specific template.

Using different views for each mobile device

Exercise 4: Using Asynchronous Controllers

Microsoft .NET Framework 4.5 introduces new language features in C# and Visual Basic to provide a new foundation for asynchrony in .NET programming. This new foundation makes asynchronous programming similar to - and about as straightforward as - synchronous programming. You are now able to write asynchronous action methods in ASP.NET MVC 4 by using the AsyncController class. You can use asynchronous action methods for long-running, non-CPU-bound requests. This avoids blocking the Web server from performing work while the request is being processed. The AsyncController class is typically used for long-running Web service calls.

This exercise explains the basics of asynchronous operation in ASP.NET MVC 4. If you want a deeper dive, you can check out the asynchronous programming documentation on MSDN.

Task 1 - Implementing an Asynchronous Controller

Open the Begin solution located at Source/Ex4-Async. Then, open the HomeController.cs class from the Controllers folder.
Add the following using statement.

using System.Threading.Tasks;

Update the HomeController class to inherit from AsyncController. Controllers that derive from AsyncController enable ASP.NET to process asynchronous requests, and they can still service synchronous action methods.

public class HomeController : AsyncController
{

Add the async keyword to the Index method and make it return the type Task<ActionResult>.

public async Task<ActionResult> Index()
{
    ...

Note The async keyword is one of the new keywords the .NET Framework 4.5 provides; it tells the compiler that this method contains asynchronous code. A Task object represents an asynchronous operation that may complete at some point in the future.

Replace the client.GetAsync() call with the fully asynchronous version using the await keyword, as shown below.

(Code Snippet - ASP.NET MVC 4 Lab - Ex04 - GetAsync)

public async Task<ActionResult> Index()
{
    var client = new HttpClient();
    var response = await client.GetAsync(Url.Action("gallery", "photo", null, Request.Url.Scheme));
    ...

Note In the previous version, you were using the Result property of the Task object to block the thread until the result was returned (sync version). Adding the await keyword tells the compiler to asynchronously wait for the task returned from the method call. This means that the rest of the code will be executed as a callback only after the awaited method completes. Another thing to notice is that you do not need to change your try-catch block in order to make this work: the exceptions that happen in the background or in the foreground will still be caught without any extra work, using a handler provided by the framework.
Change the code to continue with the asynchronous implementation by replacing the lines with the new code, as shown below.

(Code Snippet - ASP.NET MVC 4 Lab - Ex04 - ReadAsStringAsync)

public async Task<ActionResult> Index()
{
    var client = new HttpClient();
    var response = await client.GetAsync(Url.Action("gallery", "photo", null, Request.Url.Scheme));
    var value = await response.Content.ReadAsStringAsync();
    var result = await JsonConvert.DeserializeObjectAsync<List<Photo>>(value);
    return View(result);
}

Run the application. You will notice no major changes, but your code will not block a thread from the thread pool, making better use of server resources and improving performance.

Note You can learn more about the new asynchronous programming features in the lab "Asynchronous Programming in .NET 4.5 with C# and Visual Basic" included in the Visual Studio Training Kit.

Task 2 - Handling Time-Outs with Cancellation Tokens

Asynchronous action methods that return Task instances can also support time-outs. In this task, you will update the Index method code to handle a time-out scenario using a cancellation token.

Go back to Visual Studio and press SHIFT + F5 to stop debugging.

Add the following using statement to the HomeController.cs file.

using System.Threading;

Update the Index action to receive a CancellationToken argument.

public async Task<ActionResult> Index(CancellationToken cancellationToken)
{
    ...

Update the GetAsync call to pass the cancellation token.
(Code Snippet - ASP.NET MVC 4 Lab - Ex04 - SendAsync with CancellationToken)

    public async Task<ActionResult> Index(CancellationToken cancellationToken)
    {
        var client = new HttpClient();
        var response = await client.GetAsync(Url.Action("gallery", "photo", null, Request.Url.Scheme), cancellationToken);
        var value = await response.Content.ReadAsStringAsync();
        var result = await JsonConvert.DeserializeObjectAsync<List<Photo>>(value);
        return View(result);
    }

Decorate the Index method with an AsyncTimeout attribute set to 500 milliseconds, and a HandleError attribute configured to handle TimeoutException by redirecting to a TimedOut view.

(Code Snippet - ASP.NET MVC 4 Lab - Ex04 - Attributes)

    [AsyncTimeout(500)]
    [HandleError(ExceptionType = typeof(TimeoutException), View = "TimedOut")]
    public async Task<ActionResult> Index(CancellationToken cancellationToken)
    {

Open the PhotoController class and update the Gallery method to delay execution by 1000 milliseconds (1 second) to simulate a long-running task:

    public ActionResult Gallery()
    {
        System.Threading.Thread.Sleep(1000);
        return this.File("~/App_Data/Photos.json", "application/json");
    }

Open the Web.config file and enable custom errors by adding the following element:

    <system.web>
      <customErrors mode="On"></customErrors>
      ...

Create a new view in Views\Shared named TimedOut and use the default layout. In the Solution Explorer, right-click the Views\Shared folder and select Add | View.

Update the TimedOut view content as shown below:

    @{
        ViewBag.Title = "TimedOut";
        Layout = "~/Views/Shared/_Layout.cshtml";
    }

    <h2>Timed Out!</h2>

Run the application and navigate to the root URL. Because you added a Thread.Sleep of 1000 milliseconds, you will get a time-out error, generated by the AsyncTimeout attribute and caught by the HandleError attribute.
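The AsyncTimeout/CancellationToken combination has a direct analogue in Python's asyncio.wait_for, which cancels the awaited task and raises a timeout error once the deadline passes. A sketch follows, with timings chosen to mirror the lab's 500 ms timeout against a 1000 ms task (the function names are invented for illustration):

```python
import asyncio

async def gallery():
    # The simulated long-running task (the lab's Thread.Sleep(1000))
    await asyncio.sleep(1.0)
    return "photos"

async def index():
    try:
        # Equivalent of [AsyncTimeout(500)]: cancel gallery() after 0.5 s
        return await asyncio.wait_for(gallery(), timeout=0.5)
    except asyncio.TimeoutError:
        # Equivalent of HandleError redirecting to the TimedOut view
        return "TimedOut"

print(asyncio.run(index()))  # prints TimedOut
```

As in the lab, the timeout mechanism cancels the underlying work rather than leaving it running, and the caller handles the resulting exception in one place.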
Time-out exception handled

Note: Additionally, you can deploy this application to Windows Azure Web Sites following Appendix D: Publishing an ASP.NET MVC 4 Application using Web Deploy.

Summary

In this hands-on lab, you have observed some of the new features in ASP.NET MVC 4.

Appendix: Installing WebMatrix 2 and iPhone Simulator

To run your site in a simulated iPhone device you can use the WebMatrix extension "Electric Mobile Simulator for the iPhone". You can also configure the same extension to run the simulator from Visual Studio 2012.

Task 1 - Installing WebMatrix 2

Go to [](). Alternatively, if you have already installed Web Platform Installer, you can open it and search for the product "WebMatrix 2".

Click Install Now. If you do not have Web Platform Installer, you will be redirected to download and install it first.

Once Web Platform Installer is open, click Install to start the setup.

Read all the products' licenses and terms and click I Accept to continue.

Wait until the download and installation process completes.

When the installation completes, click Finish.

Click Exit to close Web Platform Installer.

Task 2 - Installing the iPhone Simulator Extension

Run WebMatrix and open any existing Web site or create a new one.

Click the Run button from the Home ribbon and select Add new.

Select iPhone Simulator and click Install.

In the package details, click Install to continue with the extension installation.

Read and accept the extension EULA.

Now you can run your Web site from WebMatrix using the iPhone Simulator option.

Task 3 - Configuring Visual Studio 2012 to run iPhone Simulator

Open Visual Studio 2012 and open any Web site or create a new project.
Click the down arrow next to the Run button and select Browse with.

In the "Browse With" dialog, click Add.

In the "Add Program" dialog, use the following values:

    Program: C:\Users\{CurrentUser}\AppData\Local\Microsoft\WebMatrix\Extensions\20\iPhoneSimulator\ElectricMobileSim\ElectricMobileSim.exe
             (update the path accordingly)
    Arguments: "1"
    Friendly name: iPhone Simulator

Click OK and close the dialogs. Now you are able to run your Web applications in the iPhone simulator from Visual Studio 2012.

Appendix: Windows Azure
https://learn.microsoft.com/en-us/aspnet/mvc/overview/older-versions/hands-on-labs/whats-new-in-aspnet-mvc-4
Last week I added a new test bundle to Spring Integration. This bundle contains matchers that help with writing tests for Spring Integration configurations. It builds on JUnit 4, Hamcrest and Mockito. Even if you're not a user of Spring Integration, I will make sure that this post gives you a few ideas for using matchers to write more readable tests.

First, a few pointers to Spring Integration to give you some background. Spring Integration is a framework that aims to implement the Enterprise Integration Patterns described in the book by Hohpe and Woolf. The book is nice, but using those patterns requires some boilerplate and plumbing that is a good match for a framework; hence Spring Integration.

JUnit 4 you should know about already, but a feature that was added recently deserves another mention. In the latest versions of JUnit you can use assertThat(T actual, Matcher<T> matcher), which builds on Hamcrest matchers. This already makes your assertions much more readable, but combined with Mockito, test cases become poetry. I will assume for now that I don't need to tell you about all this, but if you have no clue what I'm talking about you should really enlighten yourself.

EIP frequently involves asynchronous handoff, and it involves wrapping object payloads in messages. Both of these can get in the way when writing tests. Luckily, we can easily abstract this boilerplate away. Today I've committed the solution for the unwrapping problem; I'll talk about a solution to the other problem another time.

Unwrapping messages requires some boilerplate that makes the test harder to read. The following snippet from Spring Integration's AggregatorEndpointTests clearly shows both problems:

    @Test
    public void testCompleteGroupWithinTimeoutWithSameId() throws Exception {
        //...
        Message<?> reply = replyChannel.receive(500);
        assertNotNull(reply);
        assertEquals("123456789", reply.getPayload());
    }

In a few seconds it's clear what we're trying to do here, but it doesn't look very elegant to me. With the new test project we aim to make this type of code more readable. To do this we've created some utility classes. First there are the HeaderMatcher and PayloadMatcher classes. They allow you to do things like:

    import static ...PayloadMatcher.*;

    @Test
    public void testCompleteGroupWithinTimeoutWithSameId() throws Exception {
        //...
        Message<?> reply = replyChannel.receive(500);
        assertThat(reply, hasPayload(is(String.class))); // passing in a Matcher
        assertThat(reply, hasPayload("123456789"));      // based on equals check
    }

The HeaderMatcher produces matchers that inspect the headers of a message in a similar way. This is topped off with a little Mockito magic in the MockitoMessageMatchers:

    @Test
    public void anyMatcher_withWhenArgumentMatcherAndEqualPayload_matching() throws Exception {
        when(channel.send(messageWithPayload(SOME_PAYLOAD))).thenReturn(true);
        assertThat(channel.send(message), is(true));
    }

I'm very happy with the elegance of this, and I think these techniques are useful in any domain. Taking a bit of extra time to create matcher utilities that actually match your domain will pay you back in test case maintenance. Whenever you find yourself using argThat(..) in Mockito a lot, or scattering BaseMatcher implementations about, think about this blog. I hope you've found this example useful; if you want to look into the details of the Spring Integration test project you can look at the commits around this issue. If you see something wacky, be sure to tell me or someone else on the Spring Integration team before the next release. Last but not least: kudos to Alexander Peters, who wrote a patch for this.
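The matcher idea the post advocates is portable well beyond Hamcrest and Java. Here is a minimal, library-free Python sketch of the same pattern: a domain-specific has_payload factory producing a matcher that carries its own failure description (the Message class and names are invented for illustration, not part of the Spring Integration project):

```python
class Message:
    """Tiny stand-in for a Spring Integration message wrapper."""
    def __init__(self, payload):
        self.payload = payload

def has_payload(expected):
    # Returns a matcher: a predicate plus a readable description,
    # much like a Hamcrest Matcher object
    def matcher(message):
        return message.payload == expected
    matcher.description = "a message with payload %r" % (expected,)
    return matcher

def assert_that(actual, matcher):
    # assertThat-style entry point: failures print the matcher's description
    assert matcher(actual), "Expected %s, got %r" % (
        matcher.description, actual.payload)

reply = Message("123456789")
assert_that(reply, has_payload("123456789"))  # passes
```

As in the post, the payoff is maintenance: the unwrapping boilerplate lives in one matcher instead of being repeated in every test.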
http://blog.xebia.com/2009/08/03/improved-spring-integration-test-support-with-mockito-and-hamcrest/
After our open house, a few of SRT's consultants stayed to have a little Coding Sprint with Castle -- specifically, ActiveRecord. Darrell Hawley, Jay Wren, Rocky Krcatovich and I started with the typical employee database design: a company, which has many departments and many employees. Each employee exists in a department. Pretty simple.

Darrell and Jay have used Castle before. I just played around with it a little this weekend in preparation for this Coding Sprint. Darrell and I did a little pair programming -- I watched him set up the database, the Company class and unit tests for creating a new company. Later, I did the Department and Employee classes along with their unit tests, which he watched.

Darrell began by creating a SQL Server database. Jay decided to go the open-source route and use SQLite. He was hoping that SQLite's in-memory database option would speed up his unit tests, but I don't think he ever got that far. Darrell finished his Company setup (table, class and unit tests) before Jay had his database set up. :)

When it came to my turn, I typed out the Department class pretty quickly, but when it came to my unit tests, I kept getting an error from ActiveRecord that I hadn't initialized my Department type. We looked over the code for close to an hour and could not figure out what we did wrong.
Here's a super-simple recreation that will show you the error:

    using System;
    using System.Collections.Generic;
    using Castle.ActiveRecord.Framework.Config;
    using Castle.ActiveRecord;

    namespace ARoops
    {
        public class MainApp
        {
            static void Main()
            {
                XmlConfigurationSource source = new XmlConfigurationSource("appconfig.xml");
                ActiveRecordStarter.Initialize(source, typeof(Department));

                Department dept = new Department();
                dept.Name = "Accounting";
                dept.Create();
            }
        }

        public class Department : ActiveRecordBase<Department>
        {
            private int _id;
            private string _name;

            [PrimaryKey]
            public int DeptID
            {
                get { return _id; }
                set { _id = value; }
            }

            [Property]
            public string Name
            {
                get { return _name; }
                set { _name = value; }
            }
        }
    }

If you create this ActiveRecord project and run it, you'll get the following error:

    You have accessed an ActiveRecord class that wasn't properly initialized. The only explanation is that the call to ActiveRecordStarter.Initialize() didn't include Department class

I'm sure many of you who are familiar with ActiveRecord will spot the error right away. When Darrell finally found it, we were both pretty disappointed that Castle didn't catch this right away. And it was such a simple little error. What did we forget? We didn't decorate the Department class with the ActiveRecord attribute:

    [ActiveRecord]
    public class Department : ActiveRecordBase<Department>
    {

Once we did that, everything worked fine. I wonder why the ActiveRecordStarter.Initialize() method doesn't check to make sure the types passed to it are decorated with the ActiveRecord attribute?
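The gotcha here is a registration step that silently accepts a class missing its marker attribute. The defensive pattern the post wishes for can be sketched in a few lines of Python (this is an illustration of the idea, not Castle's actual behavior): have the initializer verify the marker up front and fail fast with a pointed message.

```python
def active_record(cls):
    # Marker decorator, playing the role of the [ActiveRecord] attribute
    cls._is_active_record = True
    return cls

def initialize(*classes):
    # The check the blog wishes Initialize() performed:
    # reject any class that lacks the marker, before any mapping work
    for cls in classes:
        if not getattr(cls, "_is_active_record", False):
            raise TypeError(
                "%s was passed to initialize() but is not decorated "
                "with @active_record" % cls.__name__)
    # ... real mapping setup would happen here ...

@active_record
class Department:
    pass

class Employee:  # oops: forgot the decorator
    pass

initialize(Department)   # fine
try:
    initialize(Employee)  # fails fast with a clear message
except TypeError as e:
    print(e)
```

Validating at registration time turns an hour of head-scratching at first use into an immediate, self-explanatory error.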
http://weblogs.asp.net/psteele/archive/2007/09/26/srt-s-activerecord-coding-sprint-and-a-quot-gotcha-quot-to-watch-out-for.aspx
Details

- Type: Improvement
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: None
- Component/s: core/search, modules/other
- Labels: None
- Lucene Fields: New, Patch Available

Description

Iirc the boolean logic in contrib/queries is defined in two places: ChainedFilter and BooleanFilter. Ideally these could be merged and their functions implemented by the DocIdSetIterators underlying the current scorers used by BooleanScorer2 (Conjunction/Disjunction/ReqOpt/ReqExcl).

See also the comments of Micheal Bush and Eks Dev at the end of Jan 2008 at LUCENE-584.

That's it for the moment. Sorry for the mess; at least it should save others from reviewing all the comments at LUCENE-584. I hope I have not missed anything...

Paul, I think there is one CHECKME in DisjunctionSumScorer I have stumbled upon recently, when I realized a (token1+ token2+) query works way faster than (token1 token2).setMinimumShouldMatch(2). It is not directly related to LUCENE-584, but just as a reminder.

Also, I think there is a hard_to_detect_small_maybe_performance_bug in ConjunctionScorer:

    // If first-time skip distance is any predictor of
    // scorer sparseness, then we should always try to skip first on
    // those scorers.
    // Keep last scorer in it's last place (it will be the first
    // to be skipped on), but reverse all of the others so that
    // they will be skipped on in order of original high skip.
    int end = (scorers.length - 1) - 1;
    for (int i = 0; i < (end >> 1); i++) {
      Scorer tmp = scorers[i];
      scorers[i] = scorers[end - i];
      scorers[end - i] = tmp;
    }

It has not been detected so far as it has only performance implications (I think?), and it sometimes works and sometimes not, depending on the number of scorers. To see what I am talking about, try this "simulator":

    public static void main(String[] args) {
      int[] scorers = new int[7]; // 3 and 7 do not work
      for (int i = 0; i < scorers.length; i++) {
        scorers[i] = i;
      }
      System.out.println(Arrays.toString(scorers));

      int end = (scorers.length - 1) - 1;
      for (int i = 0; i < (end >> 1); i++) {
        int tmp = scorers[i];
        scorers[i] = scorers[end - i];
        scorers[end - i] = tmp;
      }
      System.out.println(Arrays.toString(scorers));
    }

For 7 you get:

    [0, 1, 2, 3, 4, 5, 6]
    [5, 4, 2, 3, 1, 0, 6]

instead of [5, 4, 3, 2, 1, 0, 6], and for 3:

    [0, 1, 2]
    [0, 1, 2]

(should be [1, 0, 2])

Eks,

As both issues you mention are not related to filters, could you open a new issue for each of them? For the first issue: iirc BooleanScorer2 will use a ConjunctionScorer in the case when all clauses are actually required in a disjunction, so normal usage via BooleanQuery should not have a performance problem there. The second issue is beyond me at the moment.

Regards, Paul Elschot

> also I think there is a hard_to_detect_small_maybe_performance_bug in ConjunctionScorer

Yup, I introduced that feature + bug. I just committed a fix that makes the code do what the comments say (no correctness implications, just perhaps minor performance).

Attached javadocsZero2Match.patch that replaces the last few occurrences of 'zero scoring' in the javadocs of org.apache.lucene.search by 'matching'.

Test that now fails with ChainedFilter.
    java.lang.ClassCastException: org.apache.lucene.util.OpenBitSet cannot be cast to java.util.BitSet
        at org.apache.lucene.search.CachingWrapperFilter.bits(CachingWrapperFilter.java:55)
        at org.apache.lucene.misc.ChainedFilter.doChain(ChainedFilter.java:258)
        at org.apache.lucene.misc.ChainedFilter.bits(ChainedFilter.java:193)
        at org.apache.lucene.misc.ChainedFilter.bits(ChainedFilter.java:156)
        at org.apache.lucene.search.Filter.getDocIdSet(Filter.java:49)
        at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:141)

> Test that now fails with ChainedFilter.

The reason apparently is that the core moved from BitSets to OpenBitSets, whereas the contrib packages haven't. If we change the contrib packages to also use OpenBitSets, then this is still not completely backwards compatible. For example, if a user upgrades to 2.4 and uses a ChainedFilter to combine a 2.4 core filter with their own custom Filter that is based on 2.3 and thus uses a BitSet, then it won't work. So a simple drop-in replacement with the new lucene jar would not be possible; the user would have to change their own filters.

Maybe we should introduce a DocIdSetFactory in the core? For backwards compatibility a factory that produces BitSets can be used, for speed one that creates OpenBitSets. Thoughts?

Michael,

I do not think we need to add a Factory (for this particular reason); the DocIdSet type should not be assumed, as we could come up with smart ways to select the optimal Filter representation depending on doc-id distribution, size, etc. The only problem we have is that the contrib classes ChainedFilter and BooleanFilter assume BitSet. And the solution for this would be to add just a few methods to DocIdSet that are able to do AND/OR/NOT on DocIdSet[] using DocIdSetIterator(), e.g.
    DocIdSet or(DocIdSet[], int minimumShouldMatch);
    DocIdSet or(DocIdSet[]);

Optimized code for these basic operations already exists; it can be copied from the Conjunction/Disjunction/ReqOpt/ReqExcl Scorer classes by simply stripping off the scoring part. With these utility methods in DocIdSet, rewriting ChainedFilter/BooleanFilter to work with DocIdSet (in a way that works on all implementations of Filter/DocIdSet) is a 10-minute job. Then, if needed, this implementation can be optimized to cover type-specific cases. Imo, BooleanFilter is the better bet; we do not need both of them. Unfortunately I do not have time to play with it for the next 3-4 weeks, but it should be no more than 2 days of work (remember, we have the difficult part already done in the Scorers). Having so much code duplication is not something really good, but we can later "merge" these somehow.

I started adding getDocIdSet() to BooleanFilter of contrib/queries. When trying to collect the interim results into an OpenBitSet I soon needed OpenBitSet.conjunction(DocIdSetIterator), as well as similar disjunction() and exclusion() methods. Would it be ok to add such methods to Lucene's OpenBitSet, or would it be preferable to subclass OpenBitSet for this? At first sight I prefer subclassing, but I'd like to hear some opinions on this before going further.

The OpenBitSetDISI-20080322.patch illustrates my previous question. DISI means DocIdSetIterator, for want of a better name. The patch compiles, but is untested.

For the record: the and() method in the patch misses a last statement: clear(index, size());

BooleanFilter20080325.patch contains a non-BitSet version of BooleanFilter. The contrib/queries tests pass, and it includes a finished version of the OpenBitSetDISI of a few days ago. I've also changed the indentation of all of BooleanFilter and added some minor refactorings. This makes the patch itself somewhat less easy to read, but I couldn't leave the indentation in different styles.
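The or(DocIdSet[]) operation Eks proposes above is, at bottom, a k-way merge of sorted doc-id streams with duplicates collapsed, and the minimumShouldMatch variant just counts how many streams contributed each doc. A compact Python sketch of that core, using plain sorted lists as stand-ins for real DocIdSetIterators:

```python
import heapq

def or_docidsets(doc_id_sets, minimum_should_match=1):
    """Union of sorted doc-id streams; keep docs that appear in at
    least minimum_should_match of the input sets."""
    result = []
    current, count = None, 0
    for doc in heapq.merge(*doc_id_sets):   # k-way merge of sorted iterators
        if doc == current:
            count += 1
        else:
            if current is not None and count >= minimum_should_match:
                result.append(current)
            current, count = doc, 1
    if current is not None and count >= minimum_should_match:
        result.append(current)
    return result

print(or_docidsets([[1, 3, 7], [3, 4, 7]]))     # [1, 3, 4, 7]
print(or_docidsets([[1, 3, 7], [3, 4, 7]], 2))  # [3, 7]
```

The scorer classes named above implement exactly this shape of merge (plus score accumulation), which is why stripping the scoring part yields the filter operations almost for free.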
All tests pass, except the one for ChainedFilter provided here by Mark Miller. Stay tuned.

The Contrib20080325 patch also includes Mark Miller's test patch, and a non-BitSet version of ChainedFilter. It passes the tests in contrib/miscellaneous; no time left to run all tests. Contrib20080325.patch should supersede all patches currently here, except the javadoc patch. There are no type checks yet to use the inter-OpenBitSet boolean operations directly, but at least it should work.

Remember to profile before adding such optimisations; for sparse OpenBitSets this could well be competitive.

Well, I'd hope so.

Contrib20080326.patch supersedes the 20080325 version. Generally the same as yesterday, with some extensions:

- fix a possible synchronisation issue by using a local int[1] array instead of an object int attribute,
- return a SortedVIntList when it is definitely smaller than an OpenBitSet; the method doing this is protected,
- all constructors in OpenBitSetDISI now also take an initial size argument (still called maxSize, perhaps better renamed to initialSize).

Both ChainedFilter and BooleanFilter should work normally, except perhaps using less memory because of the SortedVIntList. ChainedFilter still has the 1.1 ASL; it's probably time to upgrade it, but I did not change it in the patch.

Thanks for your patches, Paul. I'll be traveling the next days, but I'll try to look at the patches next week.

One thing: I added the max size parameter to the OpenBitSetDISI ctor rather late, so there is probably some room to use more of the fast... bit access methods of OpenBitSet.

Contrib20080427.patch is the same as the previous one from March, except for OpenBitSetDISI: added javadocs there and used the fast... bit access methods consistently.

Paul,

Good work. Just tried the patch and ran some pre- and post-patch benchmarks. I wanted to measure the overhead of the new OpenBitSetDISI.inPlaceOr(DocIdSetIterator) vs the previous scheme of BitSet.or(BitSet).
My test was on the biggest index I have here, which was 3 million Wikipedia docs. I had 2 cached TermFilters on very popular terms (500k docs in each) and was measuring the cost of combining these as 2 "shoulds" in a BooleanFilter. The expectation was that the new scheme would add some overhead in extra method calls.

The average cost of iterating across BooleanFilter.getDocIdSet() was:

    old BitSet scheme: 78 milliseconds
    new DISI scheme: 156 milliseconds

To address this I tried adding this optimisation into BooleanFilter...

    DocIdSet dis = ((Filter) shouldFilters.get(i)).getDocIdSet(reader);
    if (dis instanceof OpenBitSet) {
        // [optimised OpenBitSet path elided in the original comment]
    } else {
        res.inPlaceOr(getDISI(shouldFilters, i, reader)); // your patch code
    }

Before I could benchmark this I had to amend TermsFilter to use OpenBitSet rather than plain old BitSet.

    avg speed of your patch with OpenBitSet-enabled TermFilter: 100 milliseconds
    avg speed of your patch with OpenBitSet-enabled TermFilter and above optimisation: 70 milliseconds

I'll try and post a proper patch when I get more time to look at this...

Cheers,
Mark

That sounds like the overhead of the DocIdSetIterator is never more than directly using an OpenBitSet in the dense cases that you tested so far (1 in 6 is more than 1 bit per byte). That means that a DocIdSetIterator should have acceptable performance in the sparse case when it uses a SortedVIntList underneath. Could you share some test results for sparse cases as well? I'd expect it to outperform OpenBitSet even in CPU time when it is sparse enough.

Good benchmarking, Mark.

I'm actually wondering about the performance of BooleanFilter when used with a mix of Filters, where some use OpenBitSets and others use BitSets. This will happen when users upgrade to Lucene 2.4 (core filters use OpenBitSets now) and keep using their own custom Filters that use BitSets. Your patch, Paul, makes this combination possible, and thus guarantees backwards-compatibility of BooleanFilter and ChainedFilter, which is great!
I'm just wondering if those users might encounter bad performance surprises?

I would not expect bad performance problems mixing OpenBitSet and BitSet for Filters using this patch, although some performance may be lost. However, if a problem surfaces, the solution is to upgrade the existing Filter from BitSet to OpenBitSet, which amounts to the expected work after deprecation of BitSet in Filter. A direct implementation of the bit manipulation operations on an OpenBitSet from a BitSet would probably be faster (taking the DocIdSetIterator out of the loop), but at the moment I see no good reason to implement that.

> the solution is to upgrade the existing Filter from BitSet to OpenBitSet, which amounts to the expected work after deprecation of BitSet in Filter.

I agree. The patches look good to me, Paul! I'm attaching a new file that contains both your patches Contrib20080427.patch and javadocsZero2Match.patch. I also added the optimizations for OpenBitSets that Mark suggested to BooleanFilter and ChainedFilter. And I added the check (disi.doc() < size()) to OpenBitSetDISI.inPlaceOr() and OpenBitSetDISI.inPlaceXor().

All unit tests pass; however, I think they don't cover all code paths now. We should test both the ChainedFilter and the BooleanFilter on OpenBitSet filters only, as well as on combinations of different filters. Paul, maybe you can help me with adding those tests? Then I would go ahead and commit this patch soon; otherwise I probably won't have time before next week.

With the size tests added in OpenBitSetDISI, the javadocs of the changed methods could also be relaxed. The filter tests in the contrib modules misc and queries look fairly complete to me in their current state. Did I overlook anything there? I don't have a coverage test tool here.

I added a test helper class called OldBitSetFilterWrapper that helps to test compatibility with old filters based on BitSets.
I changed ChainedFilterTest and BooleanFilterTest to run all tests on new (OpenBitSet) and old (BitSet) filters, to cover both code paths that we have for OpenBitSet and DocIdSetIterator. I verified with a code coverage tool that all those paths are now covered and all tests pass. Would be nice if you could quickly review the patch, Paul. If you're ok with it then I'll commit it tomorrow.

Thanks for the backward compatibility additions in the tests. One nice little detail: the patch contains this in OldBitSetFilterWrapper:

    + BitSet bits = new BitSet(reader.maxDoc());
    + DocIdSetIterator it = filter.getDocIdSet(reader).iterator();
    ...

but I expected:

    BitSet bits = filter.bits(reader); // use deprecated method

On old filters both versions will work, but the alternative makes it explicit that the filter must be an 'old' one. Using the deprecated method would have the advantage that it (the whole wrapper class, in fact) would have to be removed in 3.0.

> Using the deprecated method would have the advantage that it (the whole wrapper class in fact) would have to be removed in 3.0.

Thanks for reviewing! You're right; I will change it to use the deprecated method and also deprecate the wrapper class itself.

Committed with mentioned changes to OldBitSetFilterWrapper. Thanks Paul!

I missed TermsFilter initially, so I had another look there. It could use sth like this:

    /** Provide a SortedVIntList when it is definitely smaller than an OpenBitSet */
    protected DocIdSet finalResult(OpenBitSetDISI result, int maxDocs) {
      return (result.cardinality() < (maxDocs / 9))
          ? (DocIdSet) new SortedVIntList(result)
          : (DocIdSet) result;
    }

But that would leave three copies of this finalResult method in the patch, which is just beyond my refactoring tolerance level.
Perhaps this method could move to a static method in the o.a.l.search.DocIdSet class under a better name, sth like defaultDocIdSet(), or into a new helper class o.a.l.util.DefaultDocIdSet, to prepare for the availability of better implementations in the future. And in that case, the first argument could also be changed from OpenBitSetDISI to OpenBitSet.

Do we actually know about the performance of SortedVIntList? I'm a little worried, because it doesn't have a skip list.

OpenBitSet does not have a skip list either, so I'd expect SortedVIntList to be faster when the underlying set is sparse enough.

As I missed the commit, I'll provide a patch for my latest comments in the next few days.

> I'll provide a patch for my latest comments in the next few days

Sounds good!

While considering DefaultDocIdSet as a class, I thought that perhaps a better way would be to add a method to class Filter that takes the usual DocIdSet and provides the DocIdSet that should be used for caching, for example in CachingWrapperFilter. Sth like this:

    public class Filter {
      ... the abstract bits() deprecated method ... ;

      public DocIdSet getDocIdSet(IndexReader reader) {
        // unchanged implementation for now. to become abstract later.
      }

      public DocIdSet getDocIdSetForCache(IndexReader reader) {
        // Use a default implementation here that provides a tradeoff for caching,
        // fairly compact when possible, but still fast.
        // For the moment this could be close to the code of the finalResult() method
        // mentioned above:
        DocIdSet result = getDocIdSet(reader);
        if (!(result instanceof SortedVIntList)
            && (result.cardinality() < (reader.maxDoc() / 9))) {
          return new SortedVIntList(result);
        }
        return result;
      }
    }

(One minor problem with this is that DocIdSet does not have a cardinality() method and that SortedVIntList does not have a constructor for a DocIdSet.)

The question is: how about adding such a getDocIdSetForCache() method to Filter?
Or is there a better place for this functionality, for example in CachingWrapperFilter?

As this has had some time to settle, I think the cache should decide what it wants to store. That means I'm in favour of changing CachingWrapperFilter to let it decide which DocIdSet implementation to cache, sth. like this:

    protected DocIdSet docIdSetToCache(DocIdSet dis) { ... }

where dis is the result of getDocIdSet(reader) on the wrapped Filter. At the same time the protected finalResult() methods in the contrib code could be removed.

Sounds good to me Paul. Could you open a separate issue and attach a patch?

After the commit here, I'm opening a new issue for filter caching.

Just to be complete, the new issue for filter caching is LUCENE-1296.

I did something wrong here; I wanted to review the text above before posting it. I'm sorry about that. I'll just continue here; when it gets too messy, another jira issue can easily be opened.
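As an aside on the maxDocs/9 cutoff used in the finalResult() sketch earlier in this thread: it falls out of a rough size comparison. A bit set always costs maxDocs/8 bytes regardless of density, while a sorted VInt list costs roughly a byte or more per stored doc id, so below about one matching doc in nine the list is clearly smaller. The arithmetic can be sketched in Python (the one-byte-per-entry figure is a rough assumption for illustration, not a measurement from Lucene):

```python
def choose_docidset(cardinality, max_docs):
    # Bit set: one bit per possible doc -> fixed cost, independent of density
    bitset_bytes = max_docs // 8
    # Sorted VInt list: assume ~1 byte per stored delta when gaps are small
    vint_bytes = cardinality
    # The heuristic: take the list only when it is definitely smaller
    kind = "SortedVIntList" if cardinality < max_docs // 9 else "OpenBitSet"
    return kind, bitset_bytes, vint_bytes

print(choose_docidset(1000, 1_000_000))     # ('SortedVIntList', 125000, 1000)
print(choose_docidset(500_000, 1_000_000))  # ('OpenBitSet', 125000, 500000)
```

The margin between /8 and /9 leaves headroom for VInt entries that need more than one byte, which is why the threshold is phrased as "definitely smaller".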
https://issues.apache.org/jira/browse/LUCENE-1187
I wrote this program, whose purpose is to visit the 18th link in a list of links and then, on the new page, visit the 18th link again. The program works as intended, but it's a little repetitive and inelegant. I was wondering if you have any ideas on how to make it simpler, without using any functions. If I wanted to repeat the process 10 or 100 times, this would become very long. Thanks for any suggestions!

    # Note - this code must run in Python 2.x and you must download
    # BeautifulSoup into the same folder as this program
    import urllib
    from BeautifulSoup import *

    url = raw_input('Enter - ')
    if len(url) < 1:
        url = ''
    html = urllib.urlopen(url).read()
    soup = BeautifulSoup(html)

    # Retrieve all of the anchor tags
    tags = soup('a')
    urllist = list()
    count = 0
    loopcount = 0
    for tag in tags:
        count = count + 1
        tg = tag.get('href', None)
        if count == 18:
            print count, tg
            urllist.append(tg)

    url2 = urllist[0]
    html2 = urllib.urlopen(url2).read()
    soup2 = BeautifulSoup(html2)
    tags2 = soup2('a')
    count2 = 0
    for tag2 in tags2:
        count2 = count2 + 1
        tg2 = tag2.get('href', None)
        if count2 == 18:
            print count2, tg2
            urllist.append(tg2)

This is what you could do: index the tag list directly instead of counting through it (note raw_input for Python 2, and a distinct variable for each step's result):

    import urllib
    from BeautifulSoup import *

    url_1 = raw_input('Enter - ') or ''
    html_1 = urllib.urlopen(url_1).read()
    soup_1 = BeautifulSoup(html_1)
    tags_1 = soup_1('a')
    url_retr1 = tags_1[17].get('href', None)   # the 18th link, zero-indexed

    html_2 = urllib.urlopen(url_retr1).read()
    soup_2 = BeautifulSoup(html_2)
    tags_2 = soup_2('a')
    url_retr2 = tags_2[17].get('href', None)
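To address the repetition the question actually asks about: each pass only needs the URL produced by the previous pass, so the duplicated fetch-and-take-the-18th-link block collapses into a single loop, with no functions required. The control flow is shown below with an in-memory stand-in for the page fetch (a dict mapping each URL to its links), so the pattern itself is visible and runnable without a network; in the real program the two commented lines would be the urlopen/BeautifulSoup calls.

```python
# Fake "web": each URL maps to its list of outgoing links (a stand-in
# for urlopen + soup('a') so this sketch runs offline)
pages = {
    "start": ["l%d" % i for i in range(1, 30)],
    "l18":   ["m%d" % i for i in range(1, 30)],
    "m18":   ["end%d" % i for i in range(1, 30)],
}

url = "start"
repeat = 3          # 2 in the original question; 10 or 100 works the same way
for _ in range(repeat):
    links = pages[url]      # real code: soup('a') on the fetched page
    url = links[17]         # the 18th link, zero-indexed
    print(url)
```

Running this prints l18, m18, end18: the loop body is written once, and the repeat count is the only thing that changes for 10 or 100 hops.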
https://codedump.io/share/gDrnX79DBfE6/1/how-would-you-simplify-this-program-python
Getting Started With GLib in Emacs

Recently, I decided to start doing some C. In the past, I've used GLib in my C programs, and I'm a fan. I decided that I'd like to use GLib in my current endeavors. All that said, before I can use it, I have to be able to build against it. Unfortunately, nothing in my life just works, so it took some configuring.

The Makefile

In my last post, I talked about creating a Makefile and walked through it. I forgot one huge thing though: pkg-config! Previously in DMP Photobooth, I used pkg-config to manage my library compiler flags. To that end, let's make some changes to the Makefile I wrote previously. First, let's refer back to what I wrote before:

    COMPILE_FLAGS = -c -g -Wall -Wextra -std=c11 $(OPTIMIZE_LEVEL)
    LINK_FLAGS = -g -Wall -Wextra -std=c11 $(OPTIMIZE_LEVEL)
    LINKER_LIBS = -lSDL2 -ldl -lGL

It's pretty straightforward. I have one flag set for compiling a .o, and one for compiling a program. I also have a LINKER_LIBS variable to pass to the compile command. This isn't part of COMPILE/LINK_FLAGS because the sources and object code being compiled must appear first or GCC complains.

Now, let's take a look at the new snippet:

    COMPILE_FLAGS = -c -g -Wall -Wextra -std=c11 $(OPTIMIZE_LEVEL) \
        $(shell pkg-config --cflags $(PKG_CONFIG_LIBS))
    LINK_FLAGS = -g -Wall -Wextra -std=c11 $(OPTIMIZE_LEVEL) \
        $(shell pkg-config --cflags $(PKG_CONFIG_LIBS))
    PKG_CONFIG_LIBS = glib-2.0 gl sdl2
    MANUAL_LIBS = -ldl
    LINKER_LIBS = $(MANUAL_LIBS) $(shell pkg-config --libs $(PKG_CONFIG_LIBS))

Things are getting just a bit more complicated now. You'll notice there are three LIBS-related variables. PKG_CONFIG_LIBS is the list of libraries to be passed to the pkg-config command. MANUAL_LIBS, as the name implies, is a list of manually configured -l strings. For the life of me, I couldn't figure out what to pass to pkg-config to get it to spit out -ldl, so I'm forced to do it this way.
Regardless, LINKER_LIBS now contains the MANUAL_LIBS, and the output of $(shell pkg-config --libs $(PKG_CONFIG_LIBS)), which produces the necessary -l strings for all the PKG_CONFIG_LIBS. On top of that, I’ve added the output of $(shell pkg-config --cflags $(PKG_CONFIG_LIBS)) to the COMPILE_FLAGS and LINK_FLAGS. This will ensure that if any pkg-config library needs special compiler flags, they get used. Great, now that’s done. A quick make, and everything seems to be working. We’re in business! …right?

Convincing Flycheck

If only it could be that easy. I created a new source and entered the following:

#include <glib.h>

Flycheck wasn’t convinced though; it put some red jaggies under this, and a quick mouse-over of the error shows that flycheck doesn’t think that file exists. I began getting deja vu. After some googling, I determined that I can add arbitrary paths to flycheck-clang-include-path (I’m using the flycheck clang checker; if you’re using gcc this variable is going to be different — I’m guessing flycheck-gcc-include-path) to rectify the issue. To do this, enter:

M-x customize-variable [ENTER] flycheck-clang-include-path [ENTER]

This will get you a customize window for this variable. I added the following:

/usr/include/glib-2.0
/usr/lib/x86_64-linux-gnu/glib-2.0/include

…and things seem to be working fine. That said, I imagine if I get more involved in the GLib stack, I’m going to have to add all of these guys (the full list of GLib include directories, omitted here). Not a huge deal, but I’ll cross that bridge when I come to it.

Fun With Makefiles

Lately, I’ve been toying with the idea of trying my hand at some graphics programming. After spending the better part of yesterday trying to figure out how to even get started, I think I have a way ahead. Building hello triangle using OpenGL is a fairly involved task. First, you need to settle on a graphics library. There are two choices here: OpenGL and DirectX. Obviously, I’ll be selecting OpenGL in order to avoid Microsoft vendor lock-in.
Next, you need a library to display a window for you. Sure, you could do it yourself, but then you’d get bogged down in a quagmire of platform-specific issues. If you’ve been reading the blog, you know I don’t care for this sort of platform-dependent nonsense, so I’ve tentatively settled on SDL 2. SDL is a cross-platform multimedia library that handles sound, input, window creation, and the like. I plan to use this in conjunction with an OpenGL context to do my work. After you have that in order, you need an OpenGL Function Loader. Apparently the folks at Khronos were inspired by DMP Photobooth’s module system: there isn’t some opengl.h file you can just include and get your functions — you get to call dlopen and use dlsym to get function pointers. This wouldn’t be a huge issue if there were just a few functions, but there are thousands of them. In light of this, I’ve elected to go with GL3W for the time being. GL3W is a simple python script that generates a .c file containing the function pointers, and a .h to include. All of this leads us to the topic of today’s post. How do we build this mess of libraries and random .c files?

We’ll Make it Work

The obvious answer here is that we need to use some sort of build system. Given my past experience with the abominations produced by NetBeans, I’ve elected to roll my own. Let’s take a look:

.DEFAULT_GOAL := all

First, we have the default goal. By default, the default goal is the first one in the file. However, I like to make things explicit. Here, we set the default goal to “all”, which builds the code for all targets. Next, we define some variables:

CC = gcc
COMPILE_FLAGS = -c -g -Wall -Wextra -std=c11 $(OPTIMIZE_LEVEL)
LINK_FLAGS = -g -Wall -Wextra -std=c11 $(OPTIMIZE_LEVEL)
OPTIMIZE_LEVEL = -Og
LINKER_LIBS = -lSDL2 -ldl -lGL
RM = rm -f
UNIVERSAL = gl3w gl_includes.h

The first variable, CC, is built-in, and defaults to gcc. Again, I’m redefining it here to be explicit.
After that, I define COMPILE_FLAGS and LINK_FLAGS, which are the flags I want to pass when I’m compiling something to be linked at a future time, and when I’m compiling and linking, respectively. I define OPTIMIZE_LEVEL separately, because I want to potentially change it, and I don’t want to have to worry about whether the two are in sync. LINKER_LIBS are the libraries I’m going to be using. RM is the rm command, with flags, to be used in the clean target. UNIVERSAL is a list of files and targets that all build targets depend on.

all : chapter1 chapter2 chapter3 chapter4 chapter5 chapter6 chapter7 chapter8 \
	chapter9 chapter10 chapter11 chapter12 chapter13 chapter14 chapter15 \
	chapter16 chapter17

chapter1 : $(UNIVERSAL) chapter1.c
	@echo "Building chapter 1:"
	$(CC) -o chapter1 $(LINK_FLAGS) chapter1.c gl3w.o $(LINKER_LIBS)

...

chapter17 : $(UNIVERSAL)
	@echo "Building chapter 17:"

Here we have the meat of our makefile. The tutorial I’m following has 17 chapters, and I’ll be building code from each. We have an “all” target that builds each chapter, and we have a target for each chapter that builds an executable. Each chapter target depends on UNIVERSAL and its own files.

gl3w : gl3w.c GL/gl3w.h GL/glcorearb.h
	$(CC) $(COMPILE_FLAGS) gl3w.c

Here we build the source files that GL3W produces. I’m compiling it into a .o file so that it can be linked into the code for the various chapters.

clean:
	@echo "Deleting .o files..."
	$(RM) *.o
	@echo "Deleting core..."
	$(RM) core
	@echo "Deleting chapters..."
	$(RM) chapter1
	$(RM) chapter2
	$(RM) chapter3
	$(RM) chapter4
	$(RM) chapter5
	$(RM) chapter6
	$(RM) chapter7
	$(RM) chapter8
	$(RM) chapter9
	$(RM) chapter10
	$(RM) chapter11
	$(RM) chapter12
	$(RM) chapter13
	$(RM) chapter14
	$(RM) chapter15
	$(RM) chapter16
	$(RM) chapter17

Finally, I have my clean target. Here we delete all the cruft that builds up in the build process. It’s a simple makefile, but I feel it’ll make this process easier.
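The 17 near-identical chapter targets could also be collapsed with a pattern rule. A sketch, assuming GNU Make (and a shell with seq available); the variable names other than those defined above are invented here:

```make
# Generate "chapter1 chapter2 ... chapter17" instead of typing them out.
CHAPTERS = $(addprefix chapter,$(shell seq 1 17))

all : $(CHAPTERS)

# One rule covers every chapter: $* is the stem (the chapter number),
# $@ the target, $< the first prerequisite (the matching .c file).
chapter% : chapter%.c $(UNIVERSAL)
	@echo "Building chapter $*:"
	$(CC) -o $@ $(LINK_FLAGS) $< gl3w.o $(LINKER_LIBS)

clean :
	$(RM) *.o core $(CHAPTERS)
```

With this shape, adding a chapter 18 only means adjusting the seq bound, and the clean target picks it up for free.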
I can just do the exercises and hopefully spend less time fiddling with gcc.
https://doingmyprogramming.wordpress.com/category/software/tools/make/
#include <ble_gatts.h>

GATT Attribute.

- Initial attribute value length in bytes.
- Initial attribute value offset in bytes. If different from zero, the first init_offs bytes of the attribute value will be left uninitialized.
- Maximum attribute value length in bytes; see Maximum attribute lengths for maximum values.
- Pointer to the attribute metadata structure.
- Pointer to the attribute UUID.
- Pointer to the attribute data. Please note that if the @ref BLE_GATTS_VLOC_USER value location is selected in the attribute metadata, this will have to point to a buffer that remains valid through the lifetime of the attribute. This excludes usage of automatic variables that may go out of scope or any other temporary location. The stack may access that memory directly without the application's knowledge. For writable characteristics, this value must not be a location in flash memory.
http://infocenter.nordicsemi.com/topic/com.nordic.infocenter.s132.api.v5.0.0/structble__gatts__attr__t.html
thingspeak_5f86 (community library)

Summary

Spark library for sending values to thingspeak

Example Build Testing

Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully.

Library Read Me

This content is provided by the library maintainer and has not been validated or approved.

About

Spark Core library for sending values to a ThingSpeak channel. This library can be used in the Spark IDE.

Getting Started

1. Include the library

Include the library from the Spark IDE or download it to your local machine. Below is an example of including the library within the Spark IDE:

#include "thingspeak/thingspeak.h"

2. Create thingspeak object

To use the library, an instance of the thingspeak class should be instantiated. The constructor requires a channel API key to be provided. This key can be obtained from the thingspeak website; more details are available here.

ThingSpeakLibrary::ThingSpeak thingspeak ("YOUR-CHANNEL-KEY");

3. Set values

There are two steps in sending values to thingspeak. Initially, values must be given to some or all of the 9 fields supported by the thingspeak channel. The example below sets a random integer to field 1:

int rand = random(100);
thingspeak.recordValue(1, String(rand, DEC));

4. Send values

After setting, the values must be sent to thingspeak. The sendValues method constructs a GET query to the thingspeak API using any set values. Once the sendValues method has been called, all the values are emptied. The example below sends any set values:

bool valsSent = thingspeak.sendValues();
if (valsSent) {
    Serial.println("Value successfully sent to thingspeak");
} else {
    Serial.println("Sending to thingspeak failed");
}

Note: Values will be emptied after each send; if you want to maintain a value, it should be set before each sendValues call.

Configuration

Set connection timeout

After sending values to thingspeak, the library will wait up to a specified timeout for a response.
Once a response is received or the timeout is met, the connection is closed. Waiting for a response before closing the connection ensures that the data was successfully sent. The value of the timeout can be set using the setConnectionTimeout method. The default is 1500 ms.

thingspeak.setConnectionTimeout(1500);

Examples

A full example can be found in firmware/examples, which can be run within the Spark IDE.

Contributing

Please feel free to suggest new features or contribute code.

Browse Library Files
https://docs.particle.io/reference/device-os/libraries/t/thingspeak_5f86/
That's a code sequencing issue. Change the end of Ward's qewd-start.js file to this:

let qewd = require('qewd').master;

// start the QEWD server now ...
qewd.start(config);

// define the internal QEWD Express module instance
let xp = qewd.intercept();

// define a basic test path to test if our QEWD server is up and running
xp.app.get('/testme', function(req, res) {
  console.log('*** /testme query: ', req.query);
  res.send({
    hello: 'world',
    query: req.query
  });
});

and it should work for you.

Hi Evgeny

Yes - in a previous article I described how QEWD works with IRIS - it simply uses the iris.node connector instead of the cache.node one. Similarly, you can use the new QEWD-Up technology with IRIS. Just make two changes:

1) In your QEWD-Up application's config.json file, where the Cache version specifies this:

"database": {
  "type": "cache",
  "params": {
    "path": "/opt/cachesys/mgr",
    "username": "_SYSTEM",
    "password": "SYS",
    "namespace": "USER"
  }
}

just change the database type to "iris", eg:

"database": {
  "type": "iris",   // <====== ****** change this *****
  "params": {
    "path": "/opt/cachesys/mgr",
    "username": "_SYSTEM",
    "password": "SYS",
    "namespace": "USER"
  }
}

Change the other IRIS connection parameters if needed and as required for your IRIS system (ie path, username, password, namespace).
Of course, since neither Cache nor IRIS are Open Source products, and since, for whatever reason, neither the cache.node nor iris.node modules are distributed via NPM (which is what Node.js developers would expect), you need to figure out how to manually construct your own licensed Cache or IRIS QEWD-Up Containers. If you're wanting a native implementation, for the same reasons I'm unable to provide the kind of pre-constructed installation scripts that I otherwise provide for other supported QEWD environments that make life easier for developers. IMO these kind of issues are a real barrier to wider uptake of Cache and IRIS in the current world of IT, but there we go. Anyway, if you're interested in using QEWD with IRIS, the best I can suggest is to take a look at the Dockerfile I've created and figure out how to adapt it for use with IRIS using my notes above...or figure out how to adapt an InterSystems-provided IRIS container to make use of QEWD-Up. IRIS certainly works very well with QEWD-Up when you do get it working, so it's worth the effort! As to Open Exchange - I'll look into it as time permits. However, everything I create is Open Source and I publish everything I do on Github for anyone in this community to look at and use.
https://community.intersystems.com/user/27171/comments?page=4
version::Normal - More normal forms for version objects

version 0.1.0

  use version::Normal;

  version->parse('0.4')->normal2;  # 'v0.400'
  version->parse('0.4')->normal3;  # '0.400.0'

version::Normal implements the following methods and installs them into version namespace.

$string = $version->normal2();

Returns a string with a normalized dotted-decimal form with a leading v, at least 2 components, and no superfluous trailing 0. Some examples are:

  V           version->parse(V)->normal2()
  1.10        v1.100
  0.3.10      v0.3.10
  v0.0.0.0    v0.0
  v0.1.0.0    v0.1

This form looks good when describing the version of a software component for humans to read (eg. Changes file, --version output, etc.)

$string = $version->normal3();

Returns a string with a normalized dotted-decimal form with no leading-v, at least 3 components, and no superfluous trailing 0. Some examples are:

  V        version->parse(V)->normal3()
  0.1      0.100.0
  v0.1     0.1.0
  v1       1.0.0

The development of this library has been partially sponsored by Connectivity, Inc.
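The trailing-zero-stripping and component-padding rules described above can be sketched outside Perl. This Python illustration (names invented; the real module is Perl and lives in the version namespace) mimics Perl's convention that a plain decimal like '0.4' means v0.400.0, while a leading v or multiple dots means dotted-decimal:

```python
def parse_version(s):
    """Parse a Perl-style version string into a list of integer components."""
    if s.startswith('v') or s.count('.') != 1:
        # dotted-decimal form: v1.2.3 or 0.3.10
        return [int(p) for p in s.lstrip('v').split('.')]
    # decimal form: fractional digits are grouped in threes, zero-padded
    whole, frac = s.split('.')
    frac += '0' * (-len(frac) % 3)
    return [int(whole)] + [int(frac[i:i + 3]) for i in range(0, len(frac), 3)]

def _trim_pad(parts, n):
    parts = list(parts)
    while len(parts) > n and parts[-1] == 0:
        parts.pop()          # drop superfluous trailing zeros
    while len(parts) < n:
        parts.append(0)      # pad up to the minimum component count
    return parts

def normal2(s):
    """Leading v, at least 2 components, no superfluous trailing 0."""
    return 'v' + '.'.join(map(str, _trim_pad(parse_version(s), 2)))

def normal3(s):
    """No leading v, at least 3 components, no superfluous trailing 0."""
    return '.'.join(map(str, _trim_pad(parse_version(s), 3)))

print(normal2("0.4"))       # v0.400
print(normal3("v0.1.0.0"))  # 0.1.0
```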
http://search.cpan.org/~ferreira/version-Normal-0.1.0/lib/version/Normal.pm
MojoMojo::Controller::Page - Page controller This controller is the main juice of MojoMojo. It handles all the actions related to wiki pages. Actions are redispatched to this controller based on MojoMojo's custom prepare_path method. Every private action here expects to have a page path in args. They can be called with urls like "/page1/page2.action". This is probably the most common action in MojoMojo. A lot of the other actions redispatch to this one. It will prepare the stash for page view, and set the template to view.tt, unless another is already set. It also takes an optional 'rev' parameter, in which case it will load the provided revision instead. This action is called as .search on the current page when the user performs a search. The user can choose to search the entire site or a subtree starting from the current page. This action is the same as the "view" action, but with a printer-friendly template. Same as "view" action, but with a template that only outputs the barebones body of the page. There are no headers, footers, or navigation bars. Useful for transclusion (see MojoMojo::Formatter::Include). Tag list for the bottom of page views. Filters an array of pages, returning only those that the given user has permission to view. All nodes in this namespace. Computes tags, all pages, backlinks, wanted and orphan pages. Display all pages that are part of the subtree for the current node. Recently changed pages in this namespace. Also computes the most used tags. Overview of available feeds for this node. RSS feed with headlines of recent nodes in this namespace. Full content ATOM feed of recent nodes in this namespace. Full content RSS feed of recent nodes in this namespace. Page showing available export options. "Page not found" page, suggesting alternatives, and allowing creation of the page. Root::auto detaches here for actions on nonexistent pages (e.g. bogus.export). Search results embeddable in another page (for use with "suggest").
Meta information about the current page: revision list, content size, number of children and descendants, links to/from, attachments. Marcus Ramberg <mramberg@cpan.org> This library is free software. You can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~mramberg/MojoMojo-1.10/lib/MojoMojo/Controller/Page.pm
User:Skate1168/Iostream.h From Uncyclopedia, the content-free encyclopedia “#include <iostream>” “#include <iostream.h>” “ERROR: iostream.h FILE NOT FOUND” “Told You, old man.” “Shut up.” “For a moment, nothing happened. Then, after a second or so, nothing continued to happen.” “... one of the main causes of the fall of the Roman Empire was that, lacking zero, they had no way to indicate successful termination of their C programs.” “I love the "Input" part.” “In Soviet Russia, IOSTREAM includes YOU!!” This is Io. This is a stream. See stream run. Run, stream run! edit Abstract In C, well, if you are lucky, if the compiler you use(d)- that was before you threw the computer out of the window ...- which is not iostream attitude though , if that compiler is compliant to ISO/ANSI stuff & standards.. and if you have a nicely setup CLI (Comodor^H^HCommand Language Interface) and if the accurate CSL (Standard C Leviticus ) and if the API or|and elsif the evul Win32 API, well, THEN, all files ending with that mystic .h stand for : C-header. Aristotle would be proud while the rest of us are confused. In layman's terms, IF you have a bunch of horse manure that works right, THEN the stuff that ends in .h means HEADER. It has taken scientists and Bill Gates Multiple eons to realize this is because it comes at the top of something. Otherwise it would be called sider or footer. Sometimes to be taken literally : "see the friggin' header!", in the same spirit as RTFM. *.h files are header files in C, and also in C++ if you are lucky and if the compiler and if etc++... C comes with a bunch of header files. C++ comes with even more header files and Visual C++ comes with a plethora of header files that can induce severe panic attacks accompanied by bouts of crying into your spaghetti or other pasta of choice. To quote Lovecraft, "In London there is a man who screams when the church bells ring." 
Luckily, Microsoft discarded the footer file project and kept that for their modules, so there is no such thing as iostream.f. (on the other hand, there's nastiness like windows.h), or simply the more generic nastiness, Windows. Dun Dun Dun!!! (key for music) Header files are used to make source code look more g33ky and they eventually import useful stuff that you might use one day. But even if you never use them, that's ok. Your computer doesn't hate you...yet. edit Very abstract A lot of that so called standard library of C header files in the whole C-hype looks like a jungle, plus poisonous tree frogs and angry pygmies, of course. Many keyboard heroes and nerdy cubicle gnomes won't admit it but chasing all the required header files for your C/C++ program to actually work and *do* something spectacular is close to the daily rant of pyramid building slave's at the times of the Pharaoh in ancient China or a man with no duct tape and a burning desire to wallpaper his boudoir. That's why a lot of application in C/C++ are written with Delphi in clusters of code, bombast and weary as menhirs. (Translated: It sucks). Now, memorize that line, say it 5 times fast and you can impress your colleagues and friends. Conclusion : C header files are a jungle, sans Dr. Livingston and Bugs Bunny. edit Practical usage ( über, über, über abstract )iostream.h is old C but it remains a classic. Like the Beatles, if you're into that freaky stuff. You have to have it, yup yup yup ! If you include iostream.h in your programs, you are in business, if you don't, well.. you are in trouble or typing nonsense in a Visual C++ Gui editor. Such things happen, but are easily remedied with an application of leeches which are easily found in any jungle. Really, it works. Your PC if you don't use iostream.h .... You've been warned. 
Modern C , or C with class or C++ uses just the standard library studio.h and they go like : #include <iostream> using namespace std; int Flush(turds*) { int wc *turds.std::woooooosh.plonk(); } All that .. to use the same basic Input/Output stream. Confused ? Consolation : there is no standard usage and there are plenty of workarounds but many of them don't work for all, work at all, or they just don't like you because your mom packs you lunches with non-brand Fruit Roll-Ups and all the cool kids have Blastin' Berry Hot Colors and you are stuck with the stuff that never separates from the plastic so you cry and cry all lunch hour and you are finally so hungry you eat the damn thing, plastic and all. But you still aren't cool and no one invites you to their birthday party. Bill Gates said it best, .". So make your own header file while you are at it ! Compile it if you dare ! Experts ( even the type of experts on about.com, you know.. ) advise : manage to include that damn header file ! tak^H^H include no prisoners!. So you might want to use that header, oh yes... Because if you don't, your computer will hate you. but afterward take plenty of prisoners. One of the most used - but not the most wanted - workaround is using the C++ invocation to include iostream.h which is exactly : #include <iostream> using namespace whatevur_you_gave_the_namespace_to_include_the_iostream_into_your_PROJECT; Which is kinda cool but it still depends on your Fruit Roll-Ups. edit IO-stream do not confuse with BIOS that's Basic Input Output Serenade ! As you can C, (or not-in which case, did I recommend the leeches already??), the plain text version of the iostream.h here below has two (2) TYPEDEF's calling back to a previous, alternative,[other], hierachically superior[else] header / library / file / source / system call : that would be the basic IO (Input/Output) of your workstation[ship]. Clear. 
So, iostream.h is a header file that asks to be included in order to get your snip^H^H code to handle InputOutput stream correctly on your platform first and eventually on someone else's platform. That's as obvious as a monkey with a fez. IOSTREAM does not handle FIFO ( First In/Out First Out/In ) it just digests the arguments you give it, one by one, one after the other, like building a pyramid, a Chinese wall, the Quick-Digesting Gink or the drunk uncle who, at Christmas, eats your share of the pie then laughs because if you say anything he'll tell your family about that one time, at band camp... A BSOD option, released with Vista. You've been warned... Many console Rambos feel that peculiar satisfying ecstatic thrill when they managed to code & compile something using the iostream.h because it allows them to actually see "Hello w0rld" on the screen. edit Common usage In the real world ( and in Soviet Russia ), iostream.h is used in a variety of applications, divina, fauna and flora. Documentaries of Discovery Channel cover and illustrate the usage of C headers and our iostream.h in a worldwide span of diversity like sours, rivers, ATM machines, Seychelles, true shells, stock markets, anorexia, wallets, pipelines, digestives, reality TV and even the Interneth and computars. edit In Art and Literature "istream, ustream we all stream for iostream" edit Source ~> cat /usr/include/g++/backward/iostream.h // Copyright (C) 1797+-1899°++, 2000 Free C-food Foundation, Inc. #ifndef _CPP_BACKWARD_IOSTREAM_H #define _CPP_BACKWARD_IOSTREAM_H 1 #include "backyard_warming.h" #include <iostream> using std::iostream; using std::ostream; using std::you_scream; using std::mainstream; using std::eggcream; using std::istream; using std::fifostream; using std::we_all_want* using std::for *the fun; using std::icecream; using std::ios; using std::std; /* heheeeeee, weeeeee ! */ using std::again! 
using std::streambuf; using std::cout; using std::cin; using std::cerr; using std::bigErr; using std::ftalerr; /* Win32 API recommended */ using std::clog; /* actually misspelled 'clock' , gawd!, those l77t can't tyep.*/ #ifdef KDELIBCPP_USE_WCHAR_T /* this is more a promotion stunt and it takes a while() to get the END OF FILE ( EOF ) */ note : do not try this at home ! edit General misconceptions - "iostream.h is the source file of the BIOS" ( anonym alumni atthe M.I.T.) -- there is no source file of BIOS systems ( yes this is a tautology ) - "iostream.h doesn't work !" -- it is not supposed to work, it is supposed to be included to make something else work, nitwith ! - "I found several different versions of iostream.h on the Net" -- Which one did successfully compile !? mail me at xglorb at Yahooo! dot com ! Thnaks ! - "my console program uses iostream.h and it does the input but not the output !?" -- Press [ENTER].. - "I rename^H^H accidentally compiled iostream.h with option [-o] to `iostream.out', help !?" ( another anonym alumni at theM.I.T. --Bhwahaha, rename it iostream.NET and press [ENTER] ! - "Internet's iostream is down !" -- take a shower while rebooting the machine. - Iz ît possible to be fed iostream.h with a RSS !? (caller: Igor Jorge Menda (Brazil)" -- Only if you hace a dialup connection. - error: iostream.h: No such file or directory -- standard response by compiler when encountering C without telling you it's a C++ compiler, and that you need to translate your C code to C++. RTFM before compiling!
http://uncyclopedia.wikia.com/wiki/User:Skate1168/Iostream.h?oldid=5269789
Debugging ZODB Bloat

Your Data.fs is growing at an alarming rate, but what's the cause? Digging into the ZODB could help find the cause.

About

Having spent a lot of time tracking down the cause of ZODB bloat in an Archetypes application, I thought I'd share my experience in case it was useful to anyone else.

Step 1: Analysis

The first step was to analyse the extent of the bloat. The analyze.py script in the Zope bin directory allows you to see what sort of objects are using up space in your database and how many are current or old revisions. If you pack the db, then add an object and run analyze.py, you can see how much bloat is being caused by the number of old object revisions around. Add objects and analyze a few times and you can see what sort of objects are the cause of the bloat. From this I could see that BTrees.IOBtree.IOBucket objects were the culprit. I know they're used in the catalog, but where?

Step 2: Manually look at the contents of the ZODB

analyze.py shows how to open a file storage and inspect the contents. So I iterated to the transaction in question and listed the records:

fs = FileStorage(path_to_Data_fs, read_only=1)
fsi = fs.iterator()
TCOUNT = 2000  # or whatever
for n in xrange(TCOUNT):
    fsi.next()
txn = fsi.next()
records = list(txn)

You can get the size of each record and the oid using

[(len(rec.data), rec.oid) for rec in records]

From a Zope debug console you can get the object with

ob = app._p_jar[oid]

For some objects though (like IOBuckets) all you get is the C data structure back, which is not all that helpful for our purposes.

Step 3: Reconstruct the object path

I needed to get the path of the object represented. Fortunately ZODB gives you the tools to make a good guess at it.
Using the attached utility methods you can first build a map of object references and then try to reconstruct the object path:

from inspectZodbUtils import buildRefmap, doSearch

target = rec.oid  # assuming rec is the record you're interested in
refmap = buildRefmap(fs)
path, additionals = doSearch(target, refmap)
print path

Use app._p_jar[oid] from a Zope debug console to see what sort of object it is.

Note: not valid under Zope 2.9: from ZODB.referencesf import referencesf
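The idea behind buildRefmap/doSearch can be illustrated without a real FileStorage: given a map of which oid references which, walk referrer links from the target back up to the root to recover a path. A toy sketch (oids here are just strings; the attached inspectZodbUtils does the equivalent against real object records):

```python
def build_parent_map(refmap):
    # refmap: oid -> list of oids it references (its children)
    parents = {}
    for oid, children in refmap.items():
        for child in children:
            parents.setdefault(child, []).append(oid)
    return parents

def find_path(target, refmap, root='root'):
    """Walk upward from target to root, following the first unseen referrer.

    A real search would try alternatives on dead ends; this greedy
    version is enough to show the idea.
    """
    parents = build_parent_map(refmap)
    path = [target]
    seen = {target}
    while path[0] != root:
        cands = [p for p in parents.get(path[0], []) if p not in seen]
        if not cands:
            return None  # unreachable from the root
        path.insert(0, cands[0])
        seen.add(cands[0])
    return path

refmap = {'root': ['app'], 'app': ['catalog'], 'catalog': ['bucket1']}
print(find_path('bucket1', refmap))  # ['root', 'app', 'catalog', 'bucket1']
```

Once the path comes back as something like root/app/catalog, you know which container (here, the catalog) is producing all those IOBucket revisions.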
http://plone.org/documentation/how-to/debug-zodb-bloat
On Mon, 12 Sep 2005, Scott Eade wrote:
>> Therefore I am inclined to follow the proposed solution and throw an
>> exception in the constructor of LargeSelect if the chosen DB does not
>> support native limit and offset.
>> ...
>>
>> Are there any objections ? Is everybody ok with throwing a
>> RuntimeException, or would it be better to add the TorqueException to
>> the throws clause, possibly breaking people's code because
>> TorqueException is not caught ?
>
> My only comment would be that as a heavy user of LargeSelect, if I were
> to ever switch to a database that did not support limit and offset I
> could not easily produce an executing (though obviously poorly
> performing when it comes to LargeSelect) version of my application.
>
> I would prefer us to add an appropriately worded statement in the docs
> and javadocs highlighting the issue with a recommendation that it not be
> used for the affected databases.
>
The comment already exists, though maybe one could highlight it more. The
problem is that the performance is much poorer than it need be, because
fetching one page at a time is exactly the thing you should not do if you
do not have native limit/offset.

Thinking over it again, the real solution would be to fetch the whole
memoryLimit at once and do the offset by hand if the database does not
support native limit/offset. The bad thing is that I do not know the
LargeSelect code and have not the time to do it at the moment (sigh).

>> I would also guess that the problems with databases which do not
>> support native limit/offset have led to the exclusion of the
>> LargeSelectTest from the runtimetest. Does anybody object to include
>> the LargeSelectTest and print an error but not execute the test if the
>> database does not support native limit/offset ? Sample code would be
>>
>> public class LargeSelectTest extends BaseRuntimeTestCase
>> {
>> ....
>>
>> public void testLargeSelect() throws TorqueException
>> {
>>     if (Torque.getDB(Torque.getDefaultDB()).getLimitStyle()
>>             == DB.LIMIT_STYLE_NONE)
>>     {
>>         log.error("LargeSelect is known not to work for databases "
>>             + "which do not support native limit/offset");
>>         return;
>>     }
>>
>> ....
>>
>> If one adds this, the LargeSelectTest also runs for hsqldb which does
>> not support native limit/offset.
>
> Does the test case fail at present?

At the moment, the method LargeSelectTest.testLargeSelect() fails.

> If we continue to let the code
> execute for the reason given above then the test case should at least be
> enabled to make sure that the simulated limit and offset are indeed
> compatible with the behaviour when the database supports them.
>
If we manage to do the limit/offset manually, one could take it out again.

Thomas

---------------------------------------------------------------------
To unsubscribe, e-mail: torque-dev-unsubscribe@db.apache.org
For additional commands, e-mail: torque-dev-help@db.apache.org
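The "do the offset by hand" fallback Thomas describes — fetch from the start and discard rows when the database has no native limit/offset — can be sketched as follows. This is illustrative only, not actual Torque code; the class and method names are made up, and a plain list iterator stands in for a JDBC result set:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ManualLimitOffset {

    /** Emulate LIMIT/OFFSET by skipping and truncating a full result iterator. */
    static <T> List<T> page(Iterator<T> results, int offset, int limit) {
        List<T> out = new ArrayList<>();
        // Discard the rows before the requested page.
        for (int i = 0; i < offset && results.hasNext(); i++) {
            results.next();
        }
        // Collect at most 'limit' rows for the page itself.
        while (out.size() < limit && results.hasNext()) {
            out.add(results.next());
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            rows.add(i);
        }
        // Page of size 3 starting at offset 4.
        System.out.println(page(rows.iterator(), 4, 3)); // [4, 5, 6]
    }
}
```

This also makes the performance point in the thread concrete: every page re-reads all the rows before it, which is exactly why fetching one page at a time is the wrong strategy without native limit/offset.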
http://mail-archives.apache.org/mod_mbox/db-torque-dev/200509.mbox/%3C20050911225103.H42165@minotaur.apache.org%3E
{-# LANGUAGE CPP #-}
{-# LANGUAGE PackageImports #-}
#ifdef __HASTE__
import Haste.DOM
import Haste.Events
#endif
import Text.ParserCombinators.Parsec hiding (State)
import Text.Read hiding (get)
import Control.Arrow
import Control.Monad
import "mtl" Control.Monad.State  -- Haste has 2 versions of the State Monad.

Outcoding UNIX geniuses

Static types prevent disasters by catching bugs at compile time. Yet many languages have self-defeating types. Type annotation can be so laborious that weary programmers give up and switch to unsafe languages. Adding insult to injury, some of these prolix type systems are simultaneously inexpressive. Some lack parametric polymorphism, forcing programmers to duplicate code or subvert static typing with casts. A purist might escape this dilemma by writing code to generate code, but this leads to another set of problems. Type theory shows how to avoid these pitfalls, but mainstream programmers seem unaware: Popular authors Bruce Eckel and Robert C. Martin mistakenly believe strong typing implies verbosity, and worse still, testing conquers all. Tests are undoubtedly invaluable, but at best they “prove” by example. As in mathematics, the one true path lies in rigorous proofs of correctness. That is, we need strong static types so that logic can work its magic. One could even argue a test-heavy approach helps attackers find exploits: the test cases you choose may hint at the bugs you overlooked. The designers of the Go language, including famed former Bell Labs researchers, have been stumped by polymorphism for years. Why is this so? Perhaps people think the theory is arcane, dry, and impractical? By working through some programming interview questions, we’ll find the relevant theory is surprisingly accessible. We quickly arrive at a simple type inference or type reconstruction algorithm that seems too good to be true: it powers strongly typed languages that support parametric polymorphism without requiring any type declarations.
The above type inference demo is a bit ugly; our next interpreter will have a parser for a nice input language, but for now we'll make do without one. The default value of the input text area describes the abstract syntax trees of:

length
length "hello"
\x -> x 2
\x -> (+) x 42
\x -> (+) (x 42)
\x -> x
\x y -> x
\x y z -> x z (y z)

Clicking the button infers their types:

String -> Int
Int
(Int -> a) -> a
Int -> Int
(Int -> Int) -> Int -> Int
a -> a
a -> b -> a
(a -> b -> c) -> (a -> b) -> a -> c

Only the length and (+) functions have predefined types. An algorithm figures out the rest.

Before presenting the questions, let's get some paperwork out of the way.

Lastly, to be fair to Go: for full-blown generics, we need algebraic data types and type operators to define, say, a binary tree containing values of any given type. Even then, parametric polymorphism is only half the problem. The other half is ad hoc polymorphism, which Haskell researchers only neatly solved in the late 1980s with type classes. Practical Haskell compilers also need more type trickery for unboxing.

1. Identifying twins

Determine if two binary trees of integers are equal.

Solution: I'd love to be asked this question so I could give the two-word answer deriving Eq:

data Tree a = Leaf a | Branch (Tree a) (Tree a) deriving Eq

Haskell's derived instance feature automatically works on any algebraic data type built on any type for which equality makes sense. It even works for mutually recursive data types (see Data.Tree):

data Tree a = Node a (Forest a) deriving Eq
data Forest a = Forest [Tree a] deriving Eq

Perhaps my interviewer would ask me to explain what deriving Eq does. Roughly speaking, it generates code like the following, saving the programmer from stating the obvious:

data Tree a = Leaf a | Branch (Tree a) (Tree a)

eq (Leaf x) (Leaf y) = x == y
eq (Branch xl xr) (Branch yl yr) = eq xl yl && eq xr yr
eq _ _ = False

2. On assignment

This time, one of the trees may contain variables in place of integers. Can we assign integers to all variables so the trees are equal? The same variable may appear more than once.

Solution: We extend our data structure to hold variables:

data Tree a = Var String | Leaf a | Branch (Tree a) (Tree a)

As before, we traverse both trees and look for nodes that differ in value or type. If one is a variable, then we record a constraint, that is, a variable assignment that is required for the trees to be equal, such as a = 4 or b = 2. If there are conflicting values for the same variable, then we indicate failure by returning Nothing. Otherwise we return Just the list of assignments found.

solve (Leaf x) (Leaf y) as | x == y = Just as
solve (Var v) (Leaf x) as = addConstraint v x as
solve l@(Leaf _) r@(Var _) as = solve r l as
solve (Branch xl xr) (Branch yl yr) as = solve xl yl as >>= solve xr yr
solve _ _ _ = Nothing

addConstraint v x cs = case lookup v cs of
  Nothing -> Just $ (v, x):cs
  Just x' | x == x' -> Just cs
  _ -> Nothing

3. Both Sides, Now

Now suppose leaf nodes in both trees can hold integer variables. Can two trees be made equal by assigning certain integers to the variables? If so, find the most general solution.

Solution: We proceed as before, but now we may encounter constraints such as a = b, which equate two variables. To handle such a constraint, we pick one of the variables, such as a, and replace all occurrences of a with the other side, which is b in our example. This eliminates a from all constraints. Eventually, all our constraints have an integer on at least one side, which we check for consistency. We discard redundant constraints where the same variable appears on both sides, such as a = a. Thus a variable may wind up with no integer assigned to it, which means that if a solution exists, it can take any value.

For clarity, we separate the gathering of constraints from their unification.
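Before the Haskell version, the two-phase plan just described can be sketched in another language. Here is a rough Python version for this variant of the problem (leaves hold integers or variables); the tuple encoding and function names are my own, not the article's:

```python
# Trees are tuples: ("leaf", n), ("var", name), or ("branch", left, right).
# Phase 1 gathers equations; phase 2 eliminates variables one by one.

def gather(x, y):
    """Collect (lhs, rhs) equations, or return None on a structural clash."""
    if x[0] == "branch" and y[0] == "branch":
        left = gather(x[1], y[1])
        right = gather(x[2], y[2])
        if left is None or right is None:
            return None
        return left + right
    if x[0] == "var":
        # In this variant a variable stands for an integer, never a subtree.
        return None if y[0] == "branch" else [(x, y)]
    if y[0] == "var":
        return gather(y, x)
    return [] if x == y else None  # two leaves

def substitute(name, value, term):
    return value if term == ("var", name) else term

def unify(equations):
    """Eliminate variables; return a dict of assignments or None."""
    solution = {}
    while equations:
        lhs, rhs = equations.pop()
        if lhs == rhs:
            continue  # discard redundant constraints like a = a
        if rhs[0] == "var" and lhs[0] != "var":
            lhs, rhs = rhs, lhs  # keep the variable on the left
        if lhs[0] == "var":
            name = lhs[1]
            solution[name] = rhs
            # Replace the eliminated variable everywhere.
            equations = [(substitute(name, rhs, a), substitute(name, rhs, b))
                         for a, b in equations]
            solution = {k: substitute(name, rhs, v) for k, v in solution.items()}
        else:
            return None  # two different leaves
    return solution

def solve(x, y):
    eqs = gather(x, y)
    return None if eqs is None else unify(eqs)
```

A variable that never receives an integer simply stays out of the returned dict, matching the "most general solution" reading above.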
Lazy evaluation means these steps are actually interleaved, but our code will appear to solve the problem in two phases. Also for clarity, our code is inefficient: it is likely faster to maintain a Data.Map of substitutions, have each new substitution affect this map, and only apply the substitution at the last minute.

data Tree a = Var String | Leaf a | Branch (Tree a) (Tree a) deriving Show

gather (Leaf x) (Leaf y) | x == y = Just []
gather (Branch xl xr) (Branch yl yr) = (++) <$> gather xl yl <*> gather xr yr
gather (Var _) (Branch _ _) = Nothing
gather x@(Var v) y = Just [(x, y)]
gather x y@(Var _) = gather y x
gather _ _ = Nothing

unify acc [] = Just acc
unify acc ((Leaf a, Leaf a') :rest) | a == a' = unify acc rest
unify acc ((Var x , Var x') :rest) | x == x' = unify acc rest
unify acc ((Var x , t) :rest) = unify ((x, t):acc) $ join (***) (sub x t) <$> rest
unify acc ((t , v@(Var _)):rest) = unify acc ((v, t):rest)
unify _ _ = Nothing

sub x t a = case a of
  Var x' | x == x' -> t
  Branch l r -> Branch (sub x t l) (sub x t r)
  _ -> a

solve t u = unify [] =<< gather t u

The peppering of acc throughout the definition of unify is mildly irritating. We can remove a few with an explicit case statement (which is what happens behind the scenes anyway):

unify acc a = case a of
  [] -> Just acc
  ((Leaf a, Leaf a') :rest) | a == a' -> unify acc rest
  ((Var x , Var x') :rest) | x == x' -> unify acc rest
  ((Var x , t) :rest) -> unify ((x, t):acc) $ join (***) (sub x t) <$> rest
  ((t , v@(Var _)):rest) -> unify acc ((v, t):rest)
  _ -> Nothing

We'll soon see a more thorough way to clean up the code.

4. Once more, with subtrees

What if variables can represent subtrees?

Solution: Although we've significantly generalized the problem, our answer remains almost the same. We remove one case from the gather function, as it is now legal to equate a variable to a subtree.
Then we modify one case of the unify function: before we perform a substitution, we first check that our variable only appears on one side, to avoid infinite recursion. Lastly, we add a case to unify for when both sides are branches.

We take this opportunity to define unify using the State monad, which saves us from explicitly referring to the list of assignments found so far: the list formerly known as acc. To a first approximation, we're employing macros to hide uninteresting code.

data Tree a = Var String | Leaf a | Branch (Tree a) (Tree a) deriving Show

treeSolve :: (Show a, Eq a) => Tree a -> Tree a -> Maybe [(String, Tree a)]
treeSolve t1 t2 = (`evalState` []) . unify =<< gather t1 t2 where
  gather (Branch xl xr) (Branch yl yr) = (++) <$> gather xl yl <*> gather xr yr
  gather (Leaf x) (Leaf y) | x == y = Just []
  gather v@(Var _) x = Just [(v, x)]
  gather t v@(Var _) = gather v t
  gather _ _ = Nothing
  unify :: Eq a => [(Tree a, Tree a)] -> State [(String, Tree a)] (Maybe [(String, Tree a)])
  unify [] = Just <$> get
  unify ((Branch a b, Branch a' b'):rest) = unify $ (a, a'):(b, b'):rest
  unify ((Leaf a, Leaf a'):rest) | a == a' = unify rest
  unify ((Var x, Var x'):rest) | x == x' = unify rest
  unify ((Var x, t):rest) = if twoSided t
      then pure Nothing
      else modify ((x, t):) >> unify (join (***) (sub x t) <$> rest)
    where
      twoSided (Branch l r) = twoSided l || twoSided r
      twoSided (Var y) | x == y = True
      twoSided _ = False
  unify ((t, v@(Var _)):rest) = unify $ (v, t):rest
  unify _ = pure Nothing
  sub x t a = case a of
    Var x' | x == x' -> t
    Branch l r -> Branch (sub x t l) (sub x t r)
    _ -> a

Here's a demo of the above code:

The given example ought to be enough to understand the input format, which is parsed by the following:

treePair :: Parser (Tree Int, Tree Int)
treePair = do
  spaces
  t <- tree
  spaces
  u <- tree
  spaces
  eof
  pure (t, u)

tree :: Read a => Parser (Tree a)
tree = tr where
  tr = leaf <|> branch
  branch = between (char '(') (char ')') $ do
    spaces
    l <- tr
    spaces
    r <- tr
    spaces
    pure $ Branch l r
  leaf = do
    s <- many1 alphaNum
    pure $ case readMaybe s of
      Nothing -> Var s
      Just a -> Leaf a

5. Type inference!

Design a language based on lambda calculus where integers and strings are primitive types, and where we can deduce whether a given expression is typable, that is, whether types can be assigned to the untyped bindings so that the expression is well-typed. If so, find the most general type.

For example, the expression \f -> f 2, which takes its first argument f and applies it to the integer 2, must have type (Int -> u) -> u. Here, u is a type variable, that is, we can substitute u with any type. This is known as parametric polymorphism.

More precisely, the inferred type is most general, or principal, if:

- Substituting types such as Int or (Int -> Int) -> Int (sometimes called type constants for clarity) for all the type variables results in a well-typed closed term.
- There are no other ways of typing the given expression.

Solution: We define an abstract syntax tree for an expression in our language: applications, lambda abstractions, variables, integers, and strings:

infixl 5 :@
data Expr = Expr :@ Expr | Lam String Expr | V String | I Int | S String deriving Read

So far we have simultaneously traversed two trees to generate constraints. This time, we traverse a single abstract syntax tree. The constraints we generate along the way equate types, which are represented with another data type:

infixr 5 :->
data Type = T String | Type :-> Type | TV String deriving Show

The T constructor is for primitive data types, which are Int and String. The (:->) constructor is for functions, and the TV constructor is for type variables.

The rules for building constraints from expressions are what we might expect:

- The type of a primitive value is its corresponding type; for example, 5 has type T "Int" and "Hello, World" has type T "String".
- For an application f x, we recursively determine the type tf of f and tx of x (possibly gathering new constraints along the way), generate a new type variable tfx to return, and generate the constraint that tf is tx :-> tfx.
- For a lambda abstraction \x.t, we generate a new type variable tx to represent the type of x. Then we recursively find the type tt of t, being careful to assign the type tx to any free occurrence of x, and return the type tx :-> tt for the lambda.

Bookkeeping is fiddly. To guarantee a unique name for each type variable, we maintain a counter which we increment for each new name. We also maintain an environment gamma that records the types of variables in lambda abstractions.

We want more than assignments satisfying the constraints: we also want the type of the given expression. Accordingly, we modify gather to return the type of an expression as well as the type constraints it requires.

gather :: [(String, Type)] -> Expr -> State ([(Type, Type)], Int) Type
gather gamma expr = case expr of
  I _ -> pure $ T "Int"
  S _ -> pure $ T "String"
  f :@ x -> do
    tf <- gather gamma f
    tx <- gather gamma x
    tfx <- newTV
    (cs, i) <- get
    put ((tf, tx :-> tfx):cs, i)
    pure tfx
  V s -> let Just tv = lookup s gamma in pure tv
  Lam x t -> do
    tx <- newTV
    tt <- gather ((x, tx):gamma) t
    pure $ tx :-> tt
 where
  newTV = do
    (cs, i) <- get
    put (cs, i + 1)
    pure $ TV $ 't':show i

We employ the same unification strategy:

- If there are no constraints left, then we have successfully inferred the type.
- If both sides have the form s -> t for some type expressions s and t, then add two new constraints to the set: one equating the type expressions before the -> type constructor, and the other equating those after.
- If both sides of a constraint are the same, then we simply move on.
- If one side is a type variable t, and t also appears somewhere on the other side, then we are attempting to create an infinite type, which is forbidden.
- Otherwise the constraint is something like t = u -> (Int -> u), and we substitute all occurrences of t in the constraint set with the type expression on the other side.
- If none of the above applies, then the given term is untypable.

unify :: [(Type, Type)] -> State [(String, Type)] (Maybe [(String, Type)])
unify [] = Just <$> get
unify ((tx :-> ty, ux :-> uy):rest) = unify $ (tx, ux):(ty, uy):rest
unify ((T t, T u) :rest) | t == u = unify rest
unify ((TV v, TV w):rest) | v == w = unify rest
unify ((TV x, t) :rest) = if twoSided t
    then pure Nothing
    else modify ((x, t):) >> unify (join (***) (sub (x, t)) <$> rest)
  where
    twoSided (t :-> u) = twoSided t || twoSided u
    twoSided (TV y) | x == y = True
    twoSided _ = False
unify ((t, v@(TV _)):rest) = unify ((v, t):rest)
unify _ = pure Nothing

sub (x, t) y = case y of
  TV x' | x == x' -> t
  a :-> b -> sub (x, t) a :-> sub (x, t) b
  _ -> y

solve gamma x = foldr sub ty <$> evalState (unify cs) [] where
  (ty, (cs, _)) = runState (gather gamma x) ([], 0)

This algorithm is known as Algorithm W, and is the heart of the Hindley-Milner type system, or HM for short.

Example

Let's walk through an example. The expression \f -> f 2 would be represented as the Expr tree Lam "f" (V "f" :@ I 2). Calling gather on this tree consists of the following:

1. Generate a new type variable t.
2. Recursively invoke gather on the lambda body to find its type u, with the local constraint that the symbol f has type t.
3. Return the type t -> u.

Step 2 expands to the following:

1. Recursively invoke gather on the left and right children of the (:@) node to find their types a and b.
2. Generate a new type variable c.
3. Add the global constraint that a has type b -> c.
4. Return the type c.

In step 1, the left child is the symbol f, which has type t because of the local constraint generated by the Lam case, while the right child has type T "Int" because it is the integer constant 2. Neither child generates any more constraints.
Unification combines these constraints to find that \f -> f 2 has type (Int -> u) -> u.

UI

We predefine type signatures of certain functions:

prelude :: [(String, Type)]
prelude = [ ("+", T "Int" :-> T "Int" :-> T "Int"),
            ("length", T "String" :-> T "Int")]

These become the initial environment in our demo:

#ifdef __HASTE__
main = withElems ["treeIn", "treeOut", "treeB", "input", "output", "inferB"] $
    \[treeIn, treeOut, treeB, iEl, oEl, inferB] -> do
  treeB `onEvent` Click $ const $ do
    s <- getProp treeIn "value"
    case parse treePair "" s of
      Left err -> setProp treeOut "value" $ "parse error: " ++ show err
      Right (t, u) -> case treeSolve t u of
        Nothing -> setProp treeOut "value" "no solution"
        Just as -> setProp treeOut "value" $ unlines $ show <$> as
  inferB `onEvent` Click $ const $ do
    s <- getProp iEl "value"
    setProp oEl "value" $ unlines $ map (\xstr -> case readMaybe xstr of
      Nothing -> "READ ERROR"
      Just x -> maybe "BAD TYPE" show (solve prelude x)) $ lines s
#else
main = do
  print $ solve [] (Lam "x" (V "x" :@ I 2))
  print $ solve prelude (Lam "x" (V "+" :@ V "x" :@ I 42))
#endif
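The whole gather-then-unify pipeline is compact enough to sketch in another language. The following Python version is a hypothetical re-implementation of the same idea, not the article's code: the tuple encodings and names are mine, and it omits the occurs check for brevity. It infers the type of \f -> f 2:

```python
import itertools

# Types: ("T", name) primitives, ("->", a, b) functions, ("TV", n) variables.
# Expressions: ("int", n), ("app", f, x), ("lam", var, body), ("sym", name).

def gather(env, expr, constraints, counter):
    """Return the type of expr, appending type equations to constraints."""
    kind = expr[0]
    if kind == "int":
        return ("T", "Int")
    if kind == "sym":
        return env[expr[1]]          # local binding introduced by a lambda
    if kind == "app":
        tf = gather(env, expr[1], constraints, counter)
        tx = gather(env, expr[2], constraints, counter)
        tfx = ("TV", next(counter))
        constraints.append((tf, ("->", tx, tfx)))  # tf must be tx -> tfx
        return tfx
    if kind == "lam":
        tx = ("TV", next(counter))
        tt = gather(dict(env, **{expr[1]: tx}), expr[2], constraints, counter)
        return ("->", tx, tt)
    raise ValueError(expr)

def resolve(ty, solution):
    """Apply the substitution produced by unification to a type."""
    if ty[0] == "TV" and ty[1] in solution:
        return resolve(solution[ty[1]], solution)
    if ty[0] == "->":
        return ("->", resolve(ty[1], solution), resolve(ty[2], solution))
    return ty

def unify(constraints):
    solution = {}
    while constraints:
        a, b = constraints.pop()
        a, b = resolve(a, solution), resolve(b, solution)
        if a == b:
            continue
        if b[0] == "TV":
            a, b = b, a
        if a[0] == "TV":
            solution[a[1]] = b       # (occurs check omitted in this sketch)
        elif a[0] == "->" and b[0] == "->":
            constraints += [(a[1], b[1]), (a[2], b[2])]
        else:
            return None              # e.g. Int versus String: untypable
    return solution

def infer(expr):
    constraints, counter = [], itertools.count()
    ty = gather({}, expr, constraints, counter)
    solution = unify(constraints)
    return None if solution is None else resolve(ty, solution)
```

Running infer on the encoding of Lam "f" (V "f" :@ I 2) yields the tuple form of (Int -> t1) -> t1, matching the walkthrough above.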
http://crypto.stanford.edu/~blynn/lambda/hm.html
> The other thing bLIPs do is do away with the whole "human picks the number of > documents", and "don't assign your own number, you must wait". So TL;DR BIPs and BOLTs sometimes require waiting for things (like review and consensus) and there should be a new acronym and process ("bLIPs") to avoid us having to wait for things. I just think "bLIPs" adds confusion e.g. should something be a bLIP or a BOLT? Does a bLIP eventually become a BOLT when it is mature enough? This tendency to fragment and introduce new acronyms and new processes should be resisted imo. If a new process is introduced every time there is a disagreement or perceived friction it just erodes the value of existing processes and means they all get bypassed. Strengthen and improve existing processes and only introduce a new one as an absolute last resort. Other than the minor frictions described above I don't see why "bLIPs" can't just be draft BOLTs. >. On Fri, Jul 2, 2021 at 9:01 AM Bastien TEINTURIER <bast...@acinq.fr> wrote: >> >>. > > > It's only my 2 cents, but I'm afraid it will indeed add more fragmentation, > because > the fact that there exists a bLIP for feature XXX will likely act as a green > light to > deploy it faster instead of spending more time talking about it with the > community > and thinking about potential issues, forward-compatibility, etc. > > But I agree with you that it also gives more freedom to experiment in the > real world, > which helps find issues and correct them, paving the way for better features > for > end users. > >> It's also likely the case that already implementations, or typically forks >> of implementations are already using "undocumented" TLVs or feature bits in >> the wild today. > > > But today we're usually very careful when we do that, and use numbers in high > ranges > for these use-cases. 
In our case for example we use message type 35007 for our > swap-in and we expect that to change once standardized, so we did extra work > to > ensure we wouldn't paint ourselves into a corner when switching to a standard > version. > > I think that if we have a centralized bLIP repo, we can take this opportunity > to safely > assign "final" values for types and feature bits that are used by each bLIP, and stronger > guarantees that they will not conflict with another bLIP or BOLT. Of course > that doesn't > stop anyone from deploying a conflict, but their use of the same bits won't > be documented > so it shouldn't be widely deployed, and browsing the BOLTs and bLIPs should > let anyone > see what the "correct" meaning of those bits should be. > > Cheers, > Bastien > > > On Thu, Jul 1, 2021 at 10:43 PM Olaoluwa Osuntokun <laol...@gmail.com> wrote: >> I think this is a fair characterization that I agree with. I also agree that >> there isn't really a way to fundamentally address it. The issue of scarce >> review resources is something just about any large open source project needs >> to deal with: everyone wants to make a PR, but no one wants to review the >> PRs of others, unless it scratches some tangential itch they may have. IMO >> it's also the case that the problem/solution space of LN is so large, that >> it's hard to expect every developer to review each new proposal that comes >> in, as they themselves have their own set of priorities (product, >> businesses, protocol, personal, etc). >> >> In the end though, I think when there've been critical items that affect all >> implementations and/or the existence of the protocol itself, developers >> typically band together to commit resources to help a proposal move forward. >> One upcoming example of this will be the "base" taproot channel type (the >> design space is pretty large in that it even permits a new type of symmetric >> state revocation-based channel).
>> >> > it will add fragmentation to the network, it will add maintenance costs >> > and backwards-compatibility issues >> >>. As usual, it's >> up to other implementations if they want to adopt it or not, or advise >> against its use. >> >> > many bLIPs will be sub-optimal solutions to the problem they try to solve >> > and some bLIPs will be simply insecure and may put users' funds at risk >> > (L2 protocols are hard and have subtle issues that can be easily missed) >> >> This may be the case, but I guess at times it's hard to know if something is >> objectively sub-optimal without further exploration of the design space, >> which usually means either more people involved, or more time examining the >> problem. Ultimately, different wallets/implementations may also be willing >> to make different usability/security trade-offs. One example here is zero >> conf channels: they assume a greater degree of trust with the party you're >> _accepting_ the channel from, as if you receive funds over the channel, they >> can be double spent away. However it's undeniable that they improve the UX >> by reducing the amount of time a user needs to wait around before they can >> actually jump in and use LN. >> >> In the end though, there's no grand global committee that prevents people >> from deploying software they think is interesting or useful. In the long >> run, I guess one simply needs to hope that bad ideas die out, or speak out >> against them to the public. As LN sits a layer above the base protocol, >> widespread global consensus isn't really required to make certain classes of >> changes, and you can't stop people from experimenting on their own. >> >> > We can't have collisions on any of these three things. >> >> Yeah, collisions are def possible. IMO, this is where the interplay with >> BOLTs comes in: BOLTs are the global feature bit/tlv/message namespace. A >> bLIP might come with the amendment of BOLT 9 to define feature bits they >> used. 
Of course, this should be done on a best effort basis, as even if you >> assign a bit for your idea, someone can just go ahead and deploy something >> else w/ that same bit, and they may never really intersect depending on the >> nature or how widespread the new feature is. >> >> It's also likely the case that already implementations, or typically forks >> of implementations are already using "undocumented" TLVs or feature bits in >> the wild today. I don't know exactly which TLV type things like applications >> that tunnel messages over the network use, but afaik so far there haven't >> been any disastrous collisions in the wild. >> >> -- Laolu >> >> On Thu, Jul 1, 2021 at 2:19 AM Bastien TEINTURIER <bast...@acinq.fr> wrote: >>> >>> Thanks for starting that discussion. >>> >>> In my opinion, what we're really trying to address here are the two >>> following >>> points (at least from the point of view of someone who works on the spec and >>> an implementation): >>> >>> - Implementers get frustrated when they've worked on something that they >>> think >>> is useful and they can't get it into the BOLTs (the spec PR isn't reviewed, >>> it progresses too slowly or there isn't enough agreement to merge it) >>> - Implementers expect other implementers to specify the optional features >>> they >>> ship: we don't want to have to reverse-engineer a sub-protocol when users >>> want our implementation to provide support for feature XXX >>> >>> Note that these are two very different concerns. 
>>> >>> bLIPs/SPARKS/BIPs clearly address the second point, which is why I am mostly in favor of this solution, but I want to highlight that it isn't >>> only rainbows and unicorns: it will add fragmentation to the network, it >>> will >>> add maintenance costs and backwards-compatibility issues, many bLIPs will be >>> sub-optimal solutions to the problem they try to solve and some bLIPs will >>> be >>> simply insecure and may put users' funds at risk (L2 protocols are hard and >>> have >>> subtle issues that can be easily missed). On the other hand, it allows for >>> real >>> world experimentation and iteration, and it's easier to amend a bLIP than >>> the >>> BOLTs. >>> >>> On the nuts-and-bolts (see the pun?) side, bLIPs cannot embrace a fully >>> bazaar >>> style of evolution. Most of them will need: >>> >>> - to assign feature bit(s) >>> - to insert new tlv fields in existing messages >>> - to create new messages >>> >>> We can't have collisions on any of these three things. bLIP XXX cannot use >>> the >>> same tlv types as bLIP YYY otherwise we're creating network >>> incompatibilities. >>> So they really need to be centralized, and we need a process to assign these >>> and ensure they don't collide. It's not a hard problem, but we need to be >>> clear >>> about the process around those. >>> >>> Regarding the details of where they live, I don't have a strong opinion, >>> but I >>> think they must be easy to find and browse, and I think it's easier for >>> readers >>> if they're inside the spec repository. We already have PRs that use a >>> dedicated >>> "proposals" folder (e.g. [1], [2]). >>> >>> Cheers, >>> Bastien >>> >>> [1] >>> [2] >>> >>> On Thu, Jul 1, 2021 at 2:31 AM Ariel Luaces <ariellua...@gmail.com> wrote: >>>> >>>> now more rigid. BOLTs must be followed >>>> strictly to ensure a node is interoperable with the network. And BOLTs >>>> should be rigid, as rigid as any widely used BIP like 32 for example.
>>>> Even though BOLTs were flexible when being drafted their purpose has >>>> changed from descriptive to prescriptive. >>>> Any alternatives, or optional features should be extracted out of >>>> BOLTs, written as BIPs. The BIP should then reference the BOLT and the >>>> required flags set, messages sent, or alterations made to signal that >>>> the BIP's feature is enabled. >>>> >>>> A BOLT may at some point organically change to reference a BIP. For >>>> example if a BIP was drafted as an optional feature but then becomes >>>> more widespread and then turns out to be crucial for the proper >>>> operation of the network then a BOLT can be changed to just reference >>>> the BIP as mandatory. There isn't anything wrong with this. >>>> >>>> All of the above would work exactly the same if there was a bLIP >>>> repository instead. I don't see the value in having both bLIPs and >>>> BIPs since AFAICT they seem to be functionally equivalent and BIPs are >>>> not restricted to exclude lightning, and never have been. >>>> >>>> I believe the reason this move to BIPs hasn't happened organically is >>>> because many still perceive the BOLTs available for editing, so >>>> changes continue to be made. If instead BOLTs were perceived as more >>>> "consensus critical", not subject to change, and more people were >>>> strongly encouraged to write specs for new lightning features >>>> elsewhere (like the BIP repo) then you would see this issue of growing >>>> BOLTs resolved. >>>> >>>> Cheers >>>> Ariel Lorenzo-Luaces >>>> >>>> On Wed, Jun 30, 2021 at 1:16 PM Olaoluwa Osuntokun <laol...@gmail.com> >>>> wrote: >>>> > >>>> > >? >>>> > >>>> > I think part of what bLIPs are trying to solve here is promoting more >>>> > loosely >>>> > coupled evolution of the network. I think the BOLTs do a good job >>>> > currently of >>>> > specifying what _base_ functionality is required for a routing node in a >>>> > prescriptive manner (you must forward an HTLC like this, etc). 
However >>>> > there's >>>> > a rather large gap in describing functionality that has emerged over >>>> > time due >>>> > to progressive evolution, and aren't absolutely necessary, but enhance >>>> > node/wallet operation. >>>> > >>>> > Examples of include things like: path finding heuristics (BOLTs just >>>> > say you >>>> > should get from Alice to Bob, but provides no recommendations w.r.t >>>> > _how_ to do >>>> > so), fee bumping heuristics, breach retribution handling, channel >>>> > management, >>>> > rebalancing, custom records usage (like the podcast index meta-data, >>>> > messaging, >>>> > etc), JIT channel opening, hosted channels, randomized channel IDs, fee >>>> > optimization, initial channel boostrapping, etc. >>>> > >>>> > All these examples are effectively optional as they aren't required for >>>> > base >>>> > node operation, but they've organically evolved over time as node >>>> > implementations and wallet seek to solve UX and operational problems for >>>> > their users. bLIPs can be a _descriptive_ (this is how things can be >>>> > done) >>>> > home for these types of standards, while BOLTs can be reserved for >>>> > _prescriptive_ measures (an HTLC looks like this, etc). >>>> > >>>> > The protocol as implemented today has a number of extensions (TLVs, >>>> > message >>>> > types, feature bits, etc) that allow implementations to spin out their >>>> > own >>>> > sub-protocols, many of which won't be considered absolutely necessary >>>> > for node >>>> > operation. IMO we should embrace more of a "bazaar" style of evolution, >>>> > and >>>> > acknowledge that loosely coupled evolution allows participants to more >>>> > broadly >>>> > explore the design space, without the constraints of "it isn't a thing >>>> > until N >>>> > of us start to do it". >>>> > >>>> > Historically, BOLTs have also had a rather monolithic structure. 
We've >>>> > used >>>> > the same 11 or so documents for the past few years with the size of the >>>> > documents swelling over time with new exceptions, features, requirements, >>>> > etc. If you were hired to work on a new codebase and saw that everything >>>> > is >>>> > defined in 11 "functions" that have been growing linearly over time, >>>> > you'd >>>> > probably declare the codebase as being unmaintainable. By having distinct >>>> > documents for proposals/standards, bLIPs (author documents really), each >>>> > new >>>> > standard/proposal is able to be more effectively explained, motivated, >>>> > versionsed, >>>> > etc. >>>> > >>>> > -- Laolu >>>> > >>>> > >>>> > On Wed, Jun 30, 2021 at 7:35 AM René Pickhardt via Lightning-dev >>>> > <lightning-dev@lists.linuxfoundation.org> wrote: >>>> >> >>>> >> Hey everyone, >>>> >> >>>> >> just for reference when I was new here (and did not understand the >>>> >> processes well enough) I proposed a similar idea (called LIP) in 2018 >>>> >> c.f.: >>>> >> >>>> >> >>>> >> I wonder what exactly has changed in the reasoning by roasbeef which I >>>> >> will repeat here: >>>> >> >>>> >> > We already have the equiv of improvement proposals: BOLTs. >>>> >> > Historically >>>> >> >>>> >> > new standardization documents are proposed initially as issues or >>>> >> > PR's when >>>> >> >>>> >> > ultimately accepted. Why do we need another repo? >>>> >> >>>> >> >>>> >> As far as I can tell there was always some form of (invisible?) barrier >>>> >> to participate in the BOLTs but there are also new BOLTs being offered: >>>> >> * BOLT 12: >>>> >> * BOLT 14: >>>> >> and topics to be included like: >>>> >> * dual funding >>>> >> * splicing >>>> >> * the examples given by Ryan >>>> >> >>>> >> I don't see how a new repo would reduce that barrier - Actually I think >>>> >> it would even create more confusion as I for example would not know >>>> >> where something belongs.? 
>>>> >> One thing that I can say from answering lightning-network questions on >>>> >> stackexchange is that it would certainly help if the BOLTs where >>>> >> referenced on lightning.network web page and in the whitepaper as the >>>> >> place to be if one wants to learn about the Lightning Network >>>> >> >>>> >> with kind regards Rene >>>> >> >>>> >> On Wed, Jun 30, 2021 at 4:10 PM > -- Michael Folkson Email: michaelfolk...@gmail.com Keybase: michaelfolkson PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3 _______________________________________________ Lightning-dev mailing list Lightning-dev@lists.linuxfoundation.org
https://www.mail-archive.com/lightning-dev@lists.linuxfoundation.org/msg02355.html
Functions for picking a single node in ANSYS and retrieving the node number.

This is more of a hack, since it uses a predefined UIDL function designed for an ANSYS tool. It happens to be useful for picking a single node since it returns the node number as a variable in ANSYS, which we can retrieve.

You will need the code in the source. Then you create a button and assign the pickSingleNode procedure to its command. Notice that the UIDL call in the pickSingleNode procedure receives the code inside the getNodeNumber procedure as a parameter. The UIDL function will execute this code when it's done.

The getNodeNumber procedure retrieves the number of the picked node, which was saved in the ANSYS variable _Z1, and stores it in a variable. The destination variable is pkNode in this case. You need to change this to the variable assigned to the destination widget, for example, an entry box.

proc pickSingleNode {args} {
    uidl::callFnc Fnc_P26_N_p [namespace code getNodeNumber]
}

proc getNodeNumber {uidlButton} {
    if { $uidlButton == 1 } {
        # Cancel
        return
    }
    set pkNode [ans_getvalue PARM,_Z1,VALU]
    set ::FMWizard::node1 $pkNode
}
http://snipplr.com/view/65509/
Introduction

Split testing, also known as A/B testing, is a very important way to optimize a site's conversion rate. In this Wagtail tutorial, I will show you how to do split testing in Wagtail CMS. After reading this tutorial, you will:

- Learn how to use wagtail-experiments to set up an A/B test in Wagtail.
- Understand how wagtail-experiments works.
- Get some tips and useful code snippets to work with wagtail-experiments.

Step 1: Install wagtail-experiments

Note: The source code used in this tutorial is located in wagtail-experiments-tutorial. You can help other people by giving it a star.

First, let's install wagtail-experiments in our Wagtail project.

pip install wagtail-experiments

Then edit settings/base.py and add the packages to INSTALLED_APPS; modeladmin is also needed here.

INSTALLED_APPS = [
    'wagtail.contrib.modeladmin',
    'experiments',
]

After the config is done, migrate the database schema.

python manage.py migrate
python manage.py runserver

Step 2: Set up an A/B test in Wagtail admin

Let's assume you want to make some modification to an existing blog post, because you wish more people to subscribe to your blog. Here I will teach you how to do it in Wagtail admin.

You have one post, welcome; we call it Page A. Copy the page and make the new page (Page B) a child of Page A. To make them easier to distinguish here, Page B has the title Welcome-alternative and "This is the alternative version" in the page body. The only difference between Page A and Page B is the body content. You should not test many things at once because this might lead to the wrong conclusion.

So the page structure in your Wagtail project would look like this:

Blog Home
  Welcome Page (Page A)
    Welcome-alternative (Page B)
  Contact Page

Let's assume that if the reader visits the Contact page after visiting Page A or Page B, we treat it as a conversion. Now let's set up the A/B test in Wagtail admin.
Go to Settings/Experiments and create a new experiment:

- Set Control page to Welcome Page. When a reader visits this page's URL, wagtail-experiments will handle the request.
- Set Alternatives to Welcome-alternative. wagtail-experiments will pick one page from this list to render.
- Set Goal to Contact. If the reader visits the welcome page and then the contact page, Wagtail will record the conversion.
- Change the status to live to make this experiment take effect.

Step 3: Check that the A/B test works

Now visit the welcome page in an incognito window. You might see Page A or Page B; if you cannot see This is the alternative version in the page content, just clear cookies and try again. After you see the alternative page, click the contact link in the top menu to trigger a conversion.

Now go to Wagtail admin, Settings/Experiments/Show report, and check. You will see that your conversion has been recorded.

Step 4: How wagtail-experiments works

Now that you have a basic understanding of how an A/B test works in Wagtail, let me give you more detail about how it works under the hood.

When you visit the control page, wagtail-experiments checks whether any experiment has that page as its control page. If it finds a record, it decides which page to render based on the Django session. That is why clearing the cookie makes it start over again.

When you visit the contact page, wagtail-experiments checks whether any experiment has that page as its goal page. If it finds a record that has not converted yet, it marks it as a conversion. Note that even if you complete the goal many times in one Django session, only one conversion is recorded.

Step 5: How to make JavaScript work with wagtail-experiments

Sometimes you want to record a conversion using JavaScript; wagtail-experiments supports this too. Add the code below to Django's urls.py.

    from experiments import views as experiment_views

    urlpatterns = [
        # ...
        url(r'^experiments/complete/([^/]+)/$',
            experiment_views.record_completion,
            name="experiments_complete"),
        # ...
    ]

    <script>
    $(function() {
      $('form').on('submit', function (event) {
        event.preventDefault();
        var form = this;
        $.ajax({
          type: "GET",
          url: "{% url 'experiments_complete' experiment_slug %}",
          complete: function (obj) {
            form.submit();
          },
        });
      });
    });
    </script>

You can use the JavaScript code above in your Wagtail page template; experiment_slug should be configured as a page variable. When the user submits a form on that page, an AJAX request is sent to wagtail-experiments to record the conversion, and then the form is submitted. You can modify the code to make it work in your project.

Step 6: How to A/B test page design

The method above can help you find the copywriting with the higher conversion rate, but design is also a very important part of a good product. So how do you test different designs in a Wagtail project? The key is the template variable. We can give Page A and Page B different templates, then test which page has the better conversion rate.

    class AlternateTemplateMixin(models.Model):
        alternate_template = models.CharField(max_length=100, blank=True)
        experiment_slug = models.CharField(max_length=50, blank=True)

        settings_panels = [
            FieldPanel('alternate_template'),
            FieldPanel('experiment_slug'),
        ]

        class Meta:
            abstract = True

        def get_template(self, request, *args, **kwargs):
            if self.alternate_template:
                return self.alternate_template
            # Pass the original arguments through so the default
            # template resolution keeps working.
            return super().get_template(request, *args, **kwargs)

You can then use it like this:

    class PostPage(AlternateTemplateMixin, Page):
        pass

Step 7: Something you should know

There is no silver bullet in this world, and this solution has some limitations.

- wagtail-experiments is database-backed, so it might not work very well on a large site. You can solve this issue by building a custom backend.
- wagtail-experiments is built on the Django session, so the statistics might not be very accurate in some cases.

Conclusion

In this Wagtail tutorial, we learned how to set up A/B tests in a Wagtail project.
You can get all the source code for this article here: wagtail-experiments-tutorial. If you have any questions about your Wagtail project, feel free to contact us.
https://www.accordbox.com/blog/how-do-ab-testing-wagtail-cms/
CC-MAIN-2019-43
refinedweb
990
56.45
Add and configure Apollo Client

This article is part of the Storefront UI Development Guide.

- Previous Task: None
- Next Task: Build a product listing page

To add Apollo Client to your UI app, read the excellent Apollo docs. For local development, point Apollo at the Reaction GraphQL endpoint URI; we recommend storing that value in app config where it can be set differently per environment.

For your test query, try this:

    import gql from "graphql-tag";

    const testQuery = gql`{
      primaryShop {
        _id
        name
      }
    }`;

    client
      .query({ query: testQuery })
      .then(result => console.log(result));

If it doesn't work in your storefront UI code, try it directly in the GraphQL Playground. If it works there, then check over your Apollo Client setup code again and look for any errors.

Assuming your test query works, you're ready to start building your storefront UI. You will eventually need to configure authentication, but most of a storefront UI can be built without authenticating, so we'll do that later.

Next Task: Build a product listing page
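As an aside, if the test query fails in your app but works in the Playground, it can help to reproduce it as a raw GraphQL HTTP POST, which is what the Playground itself sends under the hood. The sketch below is mine, not part of this guide: the helper name is made up, and the endpoint URL is assumed to come from your app config.

```javascript
// Build the JSON body of a standard GraphQL HTTP request. This is the same
// shape the Playground sends: { query, variables }.
function buildGraphQLBody(query, variables = {}) {
  return JSON.stringify({ query: query, variables: variables });
}

const testQuery = `{
  primaryShop {
    _id
    name
  }
}`;

// Usage sketch (requires a running GraphQL endpoint; graphqlUrl is a
// placeholder for the value in your app config):
//
// fetch(graphqlUrl, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: buildGraphQLBody(testQuery),
// }).then((res) => res.json()).then(console.log);

console.log(buildGraphQLBody(testQuery));
```

If the raw POST succeeds but the Apollo call does not, the problem is in the client setup rather than the server.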
https://docs.reactioncommerce.com/docs/next/storefront-apollo-client
CC-MAIN-2019-22
refinedweb
171
63.29
26 CFR 5c.168(f)(8)-8 - Loss of section 168(f)(8) protection; recapture.

(a) In general. Upon the occurrence of an event that causes an agreement to cease to be characterized as a lease under section 168(f)(8), the characterization of the lessor and lessee shall be determined without regard to section 168(f)(8).

(1) The lessor sells or assigns its interest in the lease or in the qualified leased property in a taxable transaction.

(2) The failure by the lessor to file a copy of the information return (or applicable statement) with its income tax return as required in § 5c.168(f)(8)-2(a)(3)(iii).

(3) The lessee (or any transferee of the lessee's interest) sells or assigns its interest in the lease or in the qualified leased property in a transaction not described in § 5c.168(f)(8)-2(a)(6) and the transferee fails to execute, within the prescribed time, the consent described in § 5c.168(f)(8)-2(a)(5), or either the lessor or the transferee fail to file statements with their income tax returns as required by that paragraph.

(4) The property ceases to be section 38 property as defined in § 1.48-1 in the hands of the lessor or lessee, for example, due to its conversion to personal use or to use predominantly outside the United States, or to use by a lessee exempt from Federal income taxation.

(5) The lessor ceases to be a qualified lessor by becoming an electing small business corporation or a personal holding company (within the meaning of section 542(a)).

(6) The minimum investment of the lessor becomes less than 10 percent of the adjusted basis of the qualified leased property as described in section 168(f)(8)(B)(ii) and § 5c.168(f)(8)-4.

(8) The property becomes subject to more than one lease for which an election is made under section 168(f)(8).
(10) The property is transferred in a bankruptcy or similar proceeding and the lessor fails either to furnish the appropriate notification or to file a statement with its income tax return as required by § 5c.168(f)(8)-2(a)(6).

(11) The property is transferred in a bankruptcy or similar proceeding and not all lenders with perfected and timely interests in the property specifically exclude or release the Federal income tax ownership of the property as required under § 5c.168(f)(8)-2(a)(6)(iii).

(12) The property is transferred subsequent to a bankruptcy or similar proceeding and the lessor fails to furnish notice to the transferee prior to the transfer or fails to file a statement with its income tax return, and either the lessor fails to secure the transferee's consent or the lessor or the transferee fail to file statements with their returns.

(13) The property is leased under the provisions of section 168(f)(8)(D)(iii) and § 5c.168(f)(8)-6(b)(3) and ceases to be a qualified mass commuting vehicle.

(14) The failure by the lessor to file the required information return described in § 5c.168(f)(8)-2(a)(3)(ii) by January 31, 1982, unless the lessee files such return by January 31, 1982.

(c) Recapture. The required amount of recapture of the investment tax credit and of accelerated cost recovery deductions after a disqualifying event shall be determined under sections 47 and 1245, respectively.

(d) Consequences of loss of safe harbor protection. The tax consequences of a disqualifying event depend upon the characterization of the parties without regard to section 168(f)(8). If the lessee would be the owner of the property without regard to section 168(f)(8), the disqualifying event will be deemed to be a sale of the qualified leased property by the lessor to the lessee. The amount realized by the lessor on the sale will include the outstanding amount (if any) of the lessor's debt on the property plus the sum of any other consideration received by the lessor.
A disposition that results from a disqualifying event shall not be treated as an installment sale under section 453.

(e) Examples. The application of the provisions of this section may be illustrated by the following examples:

Example (1). M Corp. and N Corp. enter into a sale and leaseback transaction in which the leaseback agreement is characterized as a lease under section 168(f)(8) and M is treated as the lessor. In the second year of the lease, M becomes an electing small business corporation under subchapter S. The agreement ceases to be treated as a lease under section 168(f)(8) as of the date of the subchapter S election. Without respect to section 168(f)(8), N would be considered the owner of the property. The disqualification of M will be treated as a sale of the qualified leased property from M to N for the amount of the purchase money debt on the property then outstanding. M will realize gain or loss, depending upon its basis, with applicable investment tax credit and section 1245 recapture. N will acquire the property with a basis equal to the amount of the outstanding obligation. The property will not be used section 38 property to N under § 1.48-3(a)(2).

Example (2). Q Corp. (as lessor) and P Corp. (as lessee) enter into a lease that is characterized as a lease under section 168(f)(8). The lease has a 6-year term. P has no option to renew the lease or to purchase the property. At the end of 6 years, if P would be considered the owner of the property without regard to section 168(f)(8), upon the termination of the lease the property will be deemed to be sold by Q to P for the amount of the purchase money debt outstanding with respect to the property.

[T.D. 7791, 46 FR 51907, Oct. 23, 1981, as amended by T.D. 7795, 46 FR 56150, Nov. 13, 1981; T.D. 7800, 46 FR 63259, Dec. 31, 1981]

Title 26 published on 2014-04-01. No entries appear in the Federal Register after this date.
http://www.law.cornell.edu/cfr/text/26/5c.168(f)(8)-8
CC-MAIN-2014-35
refinedweb
1,037
60.45
Given a Matcher object that already has a pattern to match and a string value to search:

1) Is the .find() method similar to looping over a character sequence just to find a matching pattern?

    for (int x = 0; x < someString.length(); x++) {
        // do some string processing
    }

2) Does the .find() method start from the beginning, something like index 0?

3) Is there a way to reverse the search, from last to beginning? Like .find(), but starting from the end of the string value:

    for (int x = someString.length(); x >= 0; x--) {
        // do some string processing
    }

The questions above arise from the code problem below:

Code:

    import java.util.regex.Pattern;
    import java.util.regex.Matcher;

    public class Ch9Exercise_20 {
        public static void main(String[] args) {
            String word = "I love Java";
            Pattern pattern = Pattern.compile("[aeiou]", Pattern.CASE_INSENSITIVE);
            Matcher matcher = pattern.matcher(word);
            StringBuffer built = new StringBuffer(word);

            if (matcher.find()) {
                System.out.println("as");
                built.insert(matcher.start(), "egg");
            }

            // 2nd approach: it complies with the problem perfectly
            //
            // for (int x = word.length() - 1; x >= 0; x--) {
            //     if (("" + word.charAt(x)).matches(pattern.pattern())) {
            //         built.insert(x, "egg");
            //     }
            // }

            System.out.println(built);
        }
    }

Problem: Add the string "egg" before every vowel you can find; in this case the phrase "I love Java" should become "eggI leggovegge Jeggavegga". With the second approach (commented out), the problem is easy to deal with, but I'm trying to practice the Pattern, Matcher, and StringBuilder/StringBuffer classes for more complex but safer string processing. The problem is, I don't have a way to make the matcher object do the search (.find()) from the last index to the beginning index of "I love Java". I notice that the pattern only matches the character 'I', which results in the output "eggI love java", and I don't have any idea whether the .find() method searches EACH character in the sequence ...
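For what it's worth: find() with no arguments does scan forward, starting from the beginning of the input (and from the end of the previous match on repeated calls), and java.util.regex has no built-in reverse find(). One workaround, sketched below, is to run find() forward once, collect the match positions, and then do the insertions from the last position back to the first so earlier indices are not shifted. The class and method names here are mine, just for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ReverseInsert {

    // Insert "egg" before every vowel, processing matches back to front.
    public static String insertBeforeVowels(String word) {
        Pattern pattern = Pattern.compile("[aeiou]", Pattern.CASE_INSENSITIVE);
        Matcher matcher = pattern.matcher(word);

        // find() walks forward; record where each match starts.
        List<Integer> starts = new ArrayList<>();
        while (matcher.find()) {
            starts.add(matcher.start());
        }

        // Insert from the last match backwards so the earlier
        // indices are not invalidated by the insertions.
        StringBuilder built = new StringBuilder(word);
        for (int i = starts.size() - 1; i >= 0; i--) {
            built.insert(starts.get(i), "egg");
        }
        return built.toString();
    }

    public static void main(String[] args) {
        System.out.println(insertBeforeVowels("I love Java"));
        // prints: eggI leggovegge Jeggavegga
    }
}
```

The forward pass plus backward insertion gives the same result as a true reverse search would, without fighting the Matcher API.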
http://www.javaprogrammingforums.com/%20java-se-apis/10746-matcher-object-find-method-question-printingthethread.html
CC-MAIN-2015-18
refinedweb
316
65.62
Make continuation-local storage play nice with node-redis.

node-redis is fast. It uses a lot of smart techniques to do this that lean heavily upon Redis's architecture. One of its biggest wins is the way that it takes advantage of pipelining to batch up commands and push them to the Redis server in chunks. This works great, unless you're using CLS, which wants to provide consistent access to stored values across entire asynchronous call chains. You could use ns.bind to put all your Redis callbacks on the correct continuation chain, but that breaks down if you forget even one callback passed to client.get(). This shim's job is to take care of the bookkeeping for you. It monkeypatches the Redis driver to ensure that all the callbacks you provide are bound to the CLS namespace you provide to the shim. Use it like so:

    // module names reconstructed from the packages discussed in this
    // README; the namespace name is illustrative
    var cls = require('continuation-local-storage');
    var ns = cls.createNamespace('my-session');
    var patchRedis = require('cls-redis');
    patchRedis(ns);
    var redis = require('redis');
    var client = redis.createClient();

You can patch Redis for more than one namespace, but you're going to notice the performance impact pretty quickly, so try not to do that. Also, if you're using CLS with Q, you're probably going to want to take a look at cls-q as well. At some point, I may figure out how to eliminate the need for it, but both Q and node-redis like to hide their callbacks in such a way that CLS and the asyncListener infrastructure have a hard time capturing them.

The tests assume a Redis server is up and running on localhost on the standard port.
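The core trick — intercepting every callback argument and binding it before the driver sees it — is easy to sketch generically. The code below is an illustration of that idea with stand-in objects, not the actual cls-redis source.

```javascript
// Wrap a method so that any function arguments (callbacks) are passed
// through a binder before the original implementation sees them.
function patchMethod(obj, name, bind) {
  var original = obj[name];
  obj[name] = function () {
    var args = Array.prototype.slice.call(arguments).map(function (arg) {
      return typeof arg === 'function' ? bind(arg) : arg;
    });
    return original.apply(this, args);
  };
}

// A stand-in "driver" with a node-style async method.
var fakeClient = {
  get: function (key, cb) { cb(null, key.toUpperCase()); }
};

// A stand-in for ns.bind: here it just counts how often it runs.
var bound = 0;
function fakeBind(fn) {
  bound += 1;
  return fn;
}

patchMethod(fakeClient, 'get', fakeBind);
fakeClient.get('foo', function (err, value) {
  console.log(bound, value); // 1 'FOO'
});
```

In the real shim, bind would be ns.bind for your CLS namespace, so every callback runs with the right context regardless of how the driver batches commands.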
https://www.npmjs.com/package/cls-redis
CC-MAIN-2017-09
refinedweb
268
68.6
Delaying an Edge Animate asset until visible - Part 5

This post is more than 2 years old.

So, in my last post on this topic, I mentioned that I was surprised at how many times this "simple" topic kept coming up on my blog. In a way, this has turned into the series that just won't die - no matter how many times I think I've covered every little detail. But much like how some movie franchises seem to find a way to just keep chugging along, so is this particular series of tips on Edge Animate. Today's post is more of an addendum though so be sure you've read the earlier articles. (I've linked them all at the bottom.)

This weekend I worked with a client who asked me to simply add my "Don't start until visible" code to the page. I thought it was going to be easy but I ran into two issues. The first one was that he was using the minified code in production and needed a way to disable autoplay. He still had the EA project files, but unfortunately they appeared to be a different version. Why do I say that? I opened his project, disabled autoplay, saved, and then did a diff on the generated files. I saw multiple changes, not just one. Maybe I did something wrong, or maybe I was worrying about something I didn't need to, but I began looking at the minified JS file to see if I could correct it there. I was able to find the code I needed to change, but I urge people to consider carefully before they follow this advice. I'm not confident that how the Edge code is minified in the future won't change. This worked for me this weekend, but I'd suggest against modifying the code this way unless you really have to.

With the warning in mind, if you find the foo_edge.js file (where foo is the name of your project), look for "Default Time" in the code. You should see something like this:

    tl:{"Default Timeline":{fS:a,tS:"",d:3000,a:y,tt:[]}}

In the block above, the a key represents autoplay. Setting it to n will disable it.
    tl:{"Default Timeline":{fS:a,tS:"",d:3000,a:n,tt:[]}}

Ok, so that ended up being the easy issue. What I found next was that my code always thought it was visible, even when it wasn't. Turns out, the client was using an object tag to embed the EA asset. I've seen other people do this as well. It is a way to embed an EA asset into an already designed page. Here's the issue though. My code looks at the value of window to see how far scrolled the user is and how it compares to the position of the asset. In a case where the asset is in an object tag, the window object represents the object tag, not the "real" window. In that case, the asset is positioned on top, which means my code thinks it is immediately visible. The modification to fix this isn't too hard though. Here is the entire block that goes inside the creationComplete handler. (And again, I'm assuming you've read the earlier posts. If you have not, please do so first.)

    Symbol.bindSymbolAction(compId, symbolName, "creationComplete", function(sym, e) {
        // insert code to be run when the symbol is created here
        var wasHidden = true;

        function isScrolledIntoView(elem) {
            var docViewTop = $(window.parent).scrollTop();
            var docViewBottom = docViewTop + $(window.parent).height();

            var elemTop = $(elem).offset().top;
            var elemBottom = elemTop + $(elem).height();

            return ((elemBottom >= docViewTop) && (elemTop <= docViewBottom)
                && (elemBottom <= docViewBottom) && (elemTop >= docViewTop));
        }

        var element = sym.element;
        element = $("#EdgeID", window.parent.document);

        if (isScrolledIntoView(element)) {
            console.log('on load vis');
            wasHidden = false;
            sym.play();
        }

        function doStart() {
            if (isScrolledIntoView(element)) {
                if (wasHidden) {
                    console.log('Start me up ID5');
                    sym.play();
                }
                wasHidden = false;
            } else {
                sym.stop();
                wasHidden = true;
            }
        }

        $(window.parent).on("scroll", doStart);
    });

The first change is to redefine element, the asset.
Instead of referring to the asset itself, I'm using the ID value of the object tag in the parent document. This is important as I'm changing the entire context of how I check for visibility. Switching from "asset in window" to "thing holding my asset to the top window." The next change is to the isScrolledIntoView method. I changed the two references of window to window.parent. And that's it.

Now - to be clear - this assumes a "two level" DOM. It is possible that someone could do an object tag pointing to an HTML page that used yet another object tag. But that would be overkill and I'm sure no one will. (And because I said that - someone will.) Anyway, I believe I ran into people who were using object tags before and I'm also thinking that my code failed for them. Hopefully this version will work better.

Archived Comments

Hello Raymond, first of all thanks for all the! Is it online where I can see?

Yes -it's in process- but here it is.... The hole on the ground is the animation. The guy comes out of that hole.

I see you are using an iframe, which is slightly different than what I dealt with above. In theory it should have worked the same. I'm going to have to try to debug this myself on a simple iframe demo locally. It's that or you give me FTP access to your box to try to diagnose it directly there.

Actually it may be simpler. See this line:

    element = $("#EdgeID", window.parent.document);

This expected to match the OBJECT id in the parent document. You are using an iframe with id of U1132_animation. Try using that value. (Don't forget to put the # in front.)

Hello Raymond, many thanks on your replies..

Hmm - if you can't stop that from happening, I'm not sure what to suggest. This may, stress, MAY work:

    element = $("iframe", window.parent.document);

Thanks Raymond, I'm burning my brain right now...I think I smell barbecue.
I will definitely post the solution here if I ever find it...but I'm sure it has to do with that iframe id that's changing and I can't find a way to say to the code "this is the animation"

I just noticed you have 2 iframes on the page, so my code would have matched the first one. You would need to handle that.

Those iframes are 2 different Adobe Edge animations (the plane and the hole). I'm working the website inside Muse cos like I said earlier I'm quite a noob on coding. I bet there's a way to name them like I want to and put that name on your code, I just can't find it.

Hi Raymond, here is my current .html ...... and associated project files ...... thanks for any assistance :)
https://www.raymondcamden.com/2014/06/09/Delaying-an-Edge-Animate-asset-until-visible-Part-5
CC-MAIN-2021-17
refinedweb
1,167
76.01
This article is an overview of our API with a focus on our Python API Client, but we also support other clients. For a more in-depth discussion about our underlying server API, see API overview.

An overview introduction to our Python API and key concepts.

At ftrack, we know that collaboration is vital to producing great output; individual effort is rarely enough. We extend this viewpoint to our own system: we recognise the importance of empowering developers, enabling them to extend and build on our tools to enhance the experience for the end user. Written from scratch for the Python pipeline, our API provides greater flexibility and power whilst remaining approachable and intuitive. Whether you're writing your very first script or you're an experienced software architect, you'll find our API works the way you need it to. And while we had the wrenches out, we figured why not go that extra mile and build in some of the features that we see developers having to continually implement in-house across different companies – features such as caching and support for custom pipeline extensions and more. In essence, we decided to build the API that, as pipeline developers, we had always wanted from our production tracking and asset management systems.

Quick setup

Probably the most important aspect of any API is making it easy to get started, so we made the decision to do a little extra and integrate with existing package repositories for our API clients. To install the Python client for the API, all you need to do today is the familiar:

    pip install ftrack-python-api

Approachable

An API needs to be approachable, so we built the ftrack API to feel intuitive and familiar. We bundle all the core functionality into one place – a session – with consistent methods for interacting with entities in the system:

    import ftrack_api
    session = ftrack_api.Session()

The core methods are straightforward:

- Session.create – create a new entity, like a new version.
- Session.query – fetch entities from the server using a powerful query language.
- Session.delete – delete existing entities.
- Session.commit – commit all changes in one efficient call.

In addition, all entities in the API now act like simple Python dictionaries, with some additional helper methods where appropriate. If you know a little Python (or even if you don't), getting up to speed should be a breeze:

    >>> print user.keys()
    ['first_name', 'last_name', 'email', ...]
    >>> print user['email']
    'old@example.com'
    >>> user['email'] = 'new@example.com'

And of course, relationships between entities are reflected in a natural way as well:

    new_timelog = session.create('Timelog', {...})
    task['timelogs'].append(new_timelog)

Powerful query language

At the heart of the API is a powerful query language that scales with requirements. It's easy to get started with...

    session.query('Project where name is "My Project"').one()

...and flexible enough to express your needs in optimised queries:

    tasks = session.query(
        'select name, parent.name from Task '
        'where project.full_name is "My Project" '
        'and status.type.name is "DONE" '
        'and not timelogs any ()'
    ).all()

The above fetches all tasks for "My Project" that are done but have no time logs. It also pre-fetches related information about each task's parent – all in one efficient query.

Built-in caching

In our API we chose to tackle some of the common issues that developers face using an API in larger productions. Our first significant contribution is a built-in caching system to optimise retrieval of frequently used data within a session. The cache is present by default so everyone benefits from the default setup, but if you want to take it further, rest assured that we have you covered. For example, configuring a per-site, selective persistent cache is just a few lines of code away.

Extendable

There comes a point with most APIs where you need more than just the basics.
We didn't want reaching that point to incur a large cost or necessitate precious development time wrapping our API to provide additional functionality. We decided to embrace the need for deeper functionality by building the higher-level API out of exposed lower-level building blocks that external developers can reuse. As a result, the new API contains powerful hooks for customising entity classes and methods to seamlessly integrate your specific pipeline methodologies into the core.

More

- Learn more about our API clients
- Learn more about Key concepts
- Visit our forum for all API-related things
http://help.ftrack.com/en/articles/1054630-getting-started-with-the-api
CC-MAIN-2019-35
refinedweb
740
54.52
The symbol encodes the number modulo 37, 37 being the least prime number greater than 32. That is, we take the remainder when the base 32 number is divided by 37 and append the corresponding symbol to the original encoded number. The remainder could be larger than 31, so we need to expand our alphabet of symbols. Crockford recommends using the symbols *, ~, $, =, and U to represent remainders of 32, 33, 34, 35, and 36.

Crockford says his check sum will "detect wrong-symbol and transposed-symbol errors." We will show that this is the case in the proof below.

Python example

Here's a little Python code to demonstrate how the checksum works.

    from base32_crockford import encode, decode

    s = "H88CMK9BVJ1V"
    w = "H88CMK9BVJ1W"  # wrong last char
    t = "H88CMK9BVJV1"  # transposed last chars

    def append_checksum(s):
        return encode(decode(s), checksum=True)

    print(append_checksum(s))
    print(append_checksum(w))
    print(append_checksum(t))

This produces the following output.

    H88CMK9BVJ1VP
    H88CMK9BVJ1WQ
    H88CMK9BVJV1E

The checksum character of the original string is P. When the last character is changed, the checksum changes from P to Q. Similarly, transposing the last two characters changes the checksum from P to E.

The following code illustrates that the check sum can be a non-alphabetic character.

    s = "H88CMK9BVJ10"
    n = decode(s)
    r = n % 37
    print(r)
    print(encode(n, checksum=True))

This produces

    32
    H88CMK9BVJ10*

As we said above, a remainder of 32 is represented by *.

Proof

If you change one character in a base 32 number, its remainder by 37 changes as well, and so does the check sum. Specifically, if you change the nth digit from the right, counting from 0, by an amount k, then you change the number by k·32^n. Since 0 < |k| < 32, k is not divisible by 37, nor is 32^n. Because 37 is prime, k·32^n is not divisible by 37 [1]. The same argument holds if we replace 37 by any larger prime.

Now what about transpositions?
If you swap consecutive digits a and b in a number, you also change the remainder by 37 (or any larger prime) and hence the check sum. Again, let's be specific. Suppose we transpose the nth and (n+1)st digits from the right, again counting from 0. Denote these digits by a and b respectively. Then swapping these two digits changes the number by an amount

    (b·32^(n+1) + a·32^n) − (a·32^(n+1) + b·32^n) = (b − a)(32^(n+1) − 32^n) = 31(b − a)·32^n

If a ≠ b, then b − a is a number between −31 and 31, but not 0, and so b − a is not divisible by 37. Neither 31 nor any power of 32 is divisible by 37, so we've changed the remainder by 37, i.e. changed the check sum. And as before, the same argument works for any prime larger than 37.

Related posts

[1] A prime p divides a product ab only if it divides a or it divides b. This isn't true for composite numbers. For example, 6 divides 4*9 = 36, but 6 doesn't divide 4 or 9.

One thought on "Check sums and error detection"

Minor correction: In the first proof, rather than changing the number by (k.2^n), it should be (k.32^n). Proof of course still holds since 37 cannot divide 32^n.
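As an appendix, the whole scheme is small enough to implement without the base32_crockford dependency. This sketch hard-codes Crockford's symbol set (digits and letters, excluding I, L, O, and U, plus the five extra check symbols) and reproduces the outputs shown in the post above.

```python
# Crockford base 32 alphabet: 0-9 and letters, excluding I, L, O, U.
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"
# Check symbols extend the alphabet so remainders 32-36 are representable.
CHECK_SYMBOLS = ALPHABET + "*~$=U"

def decode(s):
    """Interpret s as a Crockford base 32 number."""
    n = 0
    for ch in s:
        n = 32 * n + ALPHABET.index(ch)
    return n

def append_checksum(s):
    """Append the mod-37 check symbol to an encoded string."""
    return s + CHECK_SYMBOLS[decode(s) % 37]

print(append_checksum("H88CMK9BVJ1V"))  # H88CMK9BVJ1VP
print(append_checksum("H88CMK9BVJ1W"))  # H88CMK9BVJ1WQ
print(append_checksum("H88CMK9BVJV1"))  # H88CMK9BVJV1E
```

Verifying a received string is the mirror image: strip the last symbol, recompute the remainder, and compare.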
https://www.johndcook.com/blog/2018/12/28/check-sums-and-error-detection/
CC-MAIN-2020-40
refinedweb
550
71.95
Do you have an iGoogle account? Well, if you are like the rest of the world that uses the Internet, you will have, which also means that you have seen the very cool (and useful) "Password strength" control. This control has a very intelligent API to determine if the password you entered is any good. It is "intelligent" in the sense that it does not just check that you have a password longer than six characters; e.g., a password "aaaaaaaaaaaa" will come out as "Weak", "my password" as "Good", and "grty3657DF?£hr4" as (yes, you guessed it!) "Strong".

The big secret is that this is actually a public API from Google, to which you can pass a password and it will return the password strength from 1 (least secure) to 4 (most secure). You can view it here. And here is the "but": there is no interface for the control and it is not openly advertised by Google. When I found a use for such a control on a website I'm currently building, I first looked at the Microsoft AJAX Toolkit. At first this worked great; however, I felt that the algorithm used was not as strong as the Google one, and I kept on getting JavaScript errors due to that control. Bring on this control.

The easiest way is to add a reference to GooglePasswordStrength.dll, then add a section to the web.config:

    <pages>
      <controls>
        <add tagPrefix="google" namespace="GooglePasswordStrength" assembly="GooglePasswordStrength"/>
      </controls>
    </pages>

Then, add the control to your page and attach it to an asp:TextBox:

    <table>
      <tr>
        <td><asp:TextBox</td>
        <td><google:PasswordStrength</td>
      </tr>
    </table>

The control utilises AJAX, but does not require any third-party AJAX library. However, if your application uses the Microsoft AJAX Library, the control invokes a Client Script Proxy that was written by Rick Strahl to handle all the client scripts. The XMLHttpRequest makes a call to the Google Password Strength API directly, so some browser settings may cause a "Permission Denied" error.
This is because the XMLHttpRequest is making a call to a page outside of the local domain. In Microsoft Internet Explorer, you can change this setting under the Security Custom Level settings. Look for "Miscellaneous -> Access data sources across domains". For a more robust, permanent solution, you will need to change the call to a page on the same domain and make a WebRequest to the Google API from there:

- In GooglePasswordStrength.WebProject, create a new WebForm called GetPassword.aspx.
- Strip the page down to the @Page directive line.
- In the Page_Load method, add the following code:

    string passwd = Request.QueryString["Passwd"];
    string GUrl = string.Format("" +
        "accounts/RatePassword?Passwd={0}", passwd);
    WebRequest webRequest = WebRequest.Create(GUrl);
    WebResponse webResponse = webRequest.GetResponse();
    StreamReader reader = new StreamReader(webResponse.GetResponseStream());
    Response.Clear();
    Response.Output.Write(reader.ReadToEnd());
    Response.End();

- Find the xmlHttpObj.open method call. Replace "" with "GetPassword.aspx?Passwd=". (innerText is IE only.)

Now the XMLHttpRequest object will make a call to a file on the same domain, and you will no longer get the security error.
http://www.codeproject.com/KB/ajax/GooglePasswordStrength.aspx
terminate called after throwing an instance of 'ros::TimeNotInitializedException'
  what(): Cannot use ros::Time::now() before the first NodeHandle has been created or ros::start() has been called.

I tried googling for an answer, but it seems like the answers are targeted at rospy and not roscpp. Also, I am using ROS Indigo. The code I am trying to run is pretty simple:

#include "generate_global_map.hpp"
#include <iostream>

int main(int argc, char **argv)
{
    ros::init(argc, argv, "test_generate_global_map");
    std::cout << "hello world" << std::endl;
    ros::Rate loop_rate(5); // 5 Hz loop rate
    while (ros::ok()) {
        ros::spinOnce(); // spin so that ROS threading does not leave subscribers unprocessed
        loop_rate.sleep();
    }
    return 0;
}

When I try to rosrun this, I get this error:

hello world
terminate called after throwing an instance of 'ros::TimeNotInitializedException'
  what(): Cannot use ros::Time::now() before the first NodeHandle has been created or ros::start() has been called. If this is a standalone app or test that just uses ros::Time and does not communicate over ROS, you may also call ros::Time::init()
Aborted (core dumped)

Anyone know what the issue is?
https://answers.ros.org/question/210252/terminate-called-after-throwing-an-instance-of-rostimenotinitializedexception-what-cannot-use-rostimenow-before-the-first-nodehandle-has-been-created/
Sudoku cell with notes

I am trying to figure out the best way to create a sudoku cell that can accommodate notes. Since the cell needs to be touched, I am thinking of starting with a ui button. This is simple without notes, as the cell will be blank or have the single number that goes in it. But adding notes brings a few challenges. One is that the notes "view" in the single cell needs to display a 3x3 grid for the numbers 1-9, with each number centered in its grid cell. The other is switching between the single solution number and the notes grid. In searching, I saw something about adding subviews to a button, but it was not clear how to add multiple subviews and switch between them. The other question is how best to do a 3x3 grid in a view: is each one a subview that has to have absolute coordinates, or is there a layout-manager approach?

One other thing not mentioned in the subject is drawing lines. In the documentation there is mention of needing a context, but I'm not sure how that relates to a view. I have not looked much into this yet, as I want to get the cells figured out first. Just mentioning it in case anyone has good examples of this.

Thanks in advance
Jay

Hey Jay,

You might start with a bare-bones example like TicTacToe. Clicking the squares is already done for you, so you can focus on 1. viewing the note, 2. editing the note, 3. changing the value. Not sure if you have ever played sudoku through the NYT games app or website; their interface is nice -- you tap a cell to select it, then you can tag "allowable" values in a cell by tapping a separate keyboard entry -- each value tapped gets shown in the cell as smaller text. A custom keyboard may be what you want here, since you can add buttons to go to the next cell vertically or horizontally, etc. @cvp has some good examples of how to make custom keyboards.

A custom view class can define a custom draw method, which gets called initially and then whenever you call set_needs_display.
That sets up a drawing context for you, so all you need to do is use the ui.Path draw/stroke methods. A custom view also lets you define touch-handling methods, so it might be better than a button. The alternative for drawing is that you have to define your own context and stuff the resulting image into an image view.

@jm2466 not sure, as usual, that I correctly understand, but try this:

import ui

class MyButton(ui.View):
    def __init__(self, seconds=True, *args, **kwargs):
        super().__init__(*args, **kwargs)
        y = 0
        for row in range(3):
            x = 0
            for col in range(3):
                n = ui.Button()
                n.frame = (x, y, self.width/3, self.height/3)
                n.border_width = 1
                n.title = str(1 + row + 3*col)
                n.action = self.note_action
                self.add_subview(n)
                x += self.width/3
            y += self.height/3

    def note_action(self, sender):
        for n in self.subviews:
            self.remove_subview(n)
        b = ui.Button()
        b.frame = (0, 0, self.width, self.height)
        b.border_width = 1
        b.title = sender.title
        self.add_subview(b)

v = ui.View()
v.background_color = 'white'
v.name = 'Sudoku cell with notes'
d = 69
e = 11
w = 4*e + 3*d
v.frame = (0, 0, w, w)
y = e
for row in range(3):
    x = e
    for col in range(3):
        b = MyButton(frame=(x, y, d, d))
        v.add_subview(b)
        x += d + e
    y += d + e
v.present('sheet')

@jm2466 of course, you can make this script better, for instance by disabling digits already tapped:

def note_action(self, sender):
    for n in self.subviews:
        self.remove_subview(n)
    b = ui.Button()
    b.frame = (0, 0, self.width, self.height)
    b.name = 'cell'
    b.border_width = 1
    b.title = sender.title
    self.add_subview(b)
    self.desactivate(sender.title)

def desactivate(self, note):
    for myb in self.superview.subviews:
        for sv in myb.subviews:
            if sv.name == 'note':
                if sv.title == note:
                    sv.enabled = False

Thanks for the replies! I was able to create a custom view suggested by JonB to draw the grid lines. I also experimented with multiple subviews on a button, but cvp's example is definitely more complete.
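The 3x3 note-grid arithmetic in the snippet above (titles computed as 1 + row + 3*col, frames as thirds of the cell) can be factored into a plain helper that is easy to unit-test outside the ui framework — a sketch using the same column-major numbering:

```python
def note_frame(n, cell_w, cell_h):
    """Return the (x, y, w, h) frame of note digit n (1-9) inside a cell.

    Uses the same column-major layout as the forum snippet, so the
    top row reads 1, 4, 7 from left to right.
    """
    col, row = divmod(n - 1, 3)   # digit 1 -> (0, 0), digit 9 -> (2, 2)
    w, h = cell_w / 3, cell_h / 3
    return (col * w, row * h, w, h)

# Digit 5 sits in the middle of a 69x69 cell:
print(note_frame(5, 69, 69))  # (23.0, 23.0, 23.0, 23.0)
```

Keeping the geometry in a pure function like this means the same logic can drive either sub-buttons or custom drawing.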
One other thing I need is a toggle button, per se -- basically a button that changes from normal to a "depressed" look when tapped. Another thing is scaling the font on the label/button. I want to make this work for different screen sizes (iPhone and iPad) and be as dynamic as possible. I am planning to size the cells based on the screen width, which is in pixels. I did not see anything on the button for font size, but did see a scaling option on the label; however, that just seems to be a best fit if more text is added (did not test it). I would like the single number in the cell to be larger than the notes numbers. With those and the replies, I think I have what I need to start putting this together. Thanks again!

> did not see anything on the button for font size

def note_action(self, sender):
    for n in self.subviews:
        self.remove_subview(n)
    b = ui.Button()
    b.frame = (0, 0, self.width, self.height)
    b.name = 'cell'
    b.border_width = 1
    b.title = sender.title
    b.font = ('Menlo', 32)
    self.add_subview(b)
    self.desactivate(sender.title)

Perfect! Thanks! Can you now write a button that toggles to look pressed??? 😀

@jm2466 ok, did it quickly but not very nice...
Edit: try with different d = values, like d = 16

def note_action(self, sender):
    for n in self.subviews:
        self.remove_subview(n)
    bt = ui.Button()
    bt.frame = (0, 0, self.width, self.height)
    bt.name = 'cell'
    bt.border_width = 1
    bt.title = sender.title
    with ui.ImageContext(bt.width, bt.height) as ctx:
        d = 8
        x = 0
        y = 0
        wp = bt.width - 2*d
        hp = bt.height - 2*d
        path1 = ui.Path()
        r, g, b, alpha = (0.8, 0.8, 0.8, 1)
        ui.set_color((r, g, b))
        path1.move_to(x, y)
        path1.line_to(x+d, y+d)
        path1.line_to(x+d, y+d+hp)
        path1.line_to(x, y+d+hp+d)
        path1.close()
        path1.fill()
        path1.stroke()
        path2 = ui.Path()
        r, g, b = (r-0.05, g-0.05, b-0.05)
        ui.set_color((r, g, b))
        path2.move_to(x, y)
        path2.line_to(x+d, y+d)
        path2.line_to(x+d+wp, y+d)
        path2.line_to(x+d+wp+d, y)
        path2.close()
        path2.fill()
        path2.stroke()
        path3 = ui.Path()
        r, g, b = (r-0.05, g-0.05, b-0.05)
        ui.set_color((r, g, b))
        path3.move_to(x+d+wp+d, y)
        path3.line_to(x+d+wp+d, y+d+hp+d)
        path3.line_to(x+d+wp, y+d+hp)
        path3.line_to(x+d+wp, y+d)
        path3.close()
        path3.fill()
        path3.stroke()
        path4 = ui.Path()
        r, g, b = (r-0.05, g-0.05, b-0.05)
        ui.set_color((r, g, b))
        path4.move_to(x+d+wp+d, y+d+hp+d)
        path4.line_to(x, y+d+hp+d)
        path4.line_to(x+d, y+d+hp)
        path4.line_to(x+d+wp, y+d+hp)
        path4.close()
        path4.fill()
        path4.stroke()
        bt.background_image = ctx.get_image()
    bt.font = ('Menlo', 32)
    self.add_subview(bt)
    self.desactivate(sender.title)

Thanks again! Have been busy with work and have not had a chance to work on this. Hopefully will in a few days.
https://forum.omz-software.com/topic/6918/sudoku-cell-with-notes
A. What is setup.py?

According to Python's official documentation, Distutils is part of the Python standard library and is extended by tools you're probably more familiar with, like setuptools, pip, and pipenv. Every package you download off of the Python Package Index (PyPI) has a setup.py. It is used by pipenv/pip/setuptools/distutils to figure out where the Python module is in your project, what dependencies it needs, what scripts it needs to install, and more.

Why?

So why should a Django project use setup.py?

Django is Just Python

While very early versions of Django eschewed many Python best-practices, this was universally seen as a "bad thing" (read some history about the "magic removal branch"). Great steps have been made to make Django work just like any other Python code. Ask a core developer and they'll tell you, "Django is just Python." The reason Django is a parenthetical in the title of this post is because the answer doesn't have anything to do with Django. One reason we use setup.py is because it is the Python standard, and Django projects should be following the Python standards.

Your Project Is (or Should Be) a Python Module

The last line in The Zen of Python says:

Namespaces are one honking great idea -- let's do more of those!

Python modules are namespaces, and your project should define one (and only one) top-level namespace which it uses. If you used Django's startproject command to create your project, it created that top-level module for you. Any more Python modules you create (including Django apps) should live within that module/namespace. There's a few reasons for this:

- You reduce the risk of name clashes. If every app in your project uses the top-level namespace, you're likely to run into a conflict with some third-party library eventually.
- It simplifies logging configuration. Rather than having to configure a logger for every module in your project, you just configure one.
- This is circular reasoning, but it makes it easier to install your project using setup.py. Hopefully this post convinces you that is important.

You Want to Build, Distribute, and Install Your Project

Distributing Python code is not just done via PyPI. Even if you have no intention of uploading your project to PyPI, you almost certainly plan on distributing and installing it somewhere other than your local development machine. The act of running your locally developed code on a server qualifies as "building, distributing, and installing" a Python module. As per the Python docs quoted above, the setup script is the center of all that activity.

How?

Adding a setup.py to your project is trivial in most cases. The simplest example looks like this:

#!/usr/bin/env python
from setuptools import setup, find_packages

setup(name='myproject',
      version='1.0',
      packages=find_packages())

The directory structure (as created by startproject) would be:

setup.py
myproject/__init__.py
myproject/...   <- All your Python files in this directory

The setup function supports many other arguments, but this is good enough to get you started for our purposes.

What You Get

Consistent Imports

With your setup.py in place, you can run pip install -e . and myproject is now a module on your Python path that can be imported and used by Python. Passing the -e flag (also accepted by pipenv) means it is an "editable" install, and changes you make in your directory will be reflected in the Python module (they are one and the same). You may be saying, "My Django project works fine without this, why should I bother?" Chances are that you're depending on a feature of Python that inserts the current directory onto the Python path. That is a fragile dependency and one that is likely to burn you at some point in deployment.
To test this, simply try the following in your project's activated virtualenv:

$ cd ~/path/to/myproject
$ python -c "import myproject"
$ cd  # move to home directory
$ python -c "import myproject"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named myproject

I, and probably most developers, would prefer that the same imports are available to the Python interpreter no matter what directory you are in when it runs.

💡 Note: If you ever want to do non-editable installs, look into the zip_safe and package_data arguments to setup to ensure non-Python files like templates and static assets are included in your installation.

Bonus: manage.py on the Path

I typically add an additional scripts argument to setup so it looks like this:

setup(name='myproject',
      version='1.0',
      packages=find_packages(),
      scripts=['manage.py'])

With this in place, manage.py will be on the $PATH of the virtualenv, allowing you to run management commands from any directory like you would any other shell command, for example:

$ manage.py runserver

An even nicer perk of this is that you can call it directly without activating your virtualenv and it will use the virtualenv's Python, ensuring all the correct modules are available. In that case, you'd need to call it by the full path of the script that setuptools installs:

$ /path/to/virtualenv/bin/manage.py migrate --noinput

This is awesome for running one-off tasks as part of a configuration management system or cronjob/Systemd timer. Without this in place, it's common to see ugly one-liners like this:

# don't do this
$ . /path/to/virtualenv/bin/activate; cd /path/to/myproject; python manage.py migrate --noinput
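The current-directory fragility described above can also be reproduced without a virtualenv. A small sketch — the demo_project package here is a throwaway created in a temp directory, and the sys.path.insert stands in for what pip install -e . arranges permanently:

```python
import sys
import tempfile
from pathlib import Path

# Hypothetical project layout, created in a temp directory for the demo.
root = Path(tempfile.mkdtemp())
pkg = root / 'demo_project'
pkg.mkdir()
(pkg / '__init__.py').write_text('ANSWER = 42\n')

# Without the project root on sys.path, the import fails...
try:
    import demo_project
    importable_before = True
except ImportError:
    importable_before = False

# ...which is roughly what `pip install -e .` fixes: it puts the project
# root on the interpreter's path regardless of the working directory.
sys.path.insert(0, str(root))
import demo_project

print(importable_before, demo_project.ANSWER)  # False 42
```

Relying on the implicit "current directory on sys.path" behaviour gives you the first half of this demo; an editable install gives you the second.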
I've found this is a surprisingly polarizing topic in the Python/Django world. If you have strong feelings about including (or excluding) setup.py in your project, I'd love to hear them.
https://lincolnloop.com/blog/using-setuppy-your-django-project/
AWS Cloud Operations & Migrations Blog

You can visualize your application metadata to get a better understanding of your application costs. You will be able to track the cost over time, gain insights into cost trends, and make better investment decisions.

Solution overview

Amazon EventBridge is a serverless event bus service that makes it easy to connect your applications with data from various sources. In EventBridge, you can create rules that define a schedule, so that a rule is triggered regularly at the interval you specify. This is a great fit for frequently retrieving application metadata, which lets you visualize data and see how it changes over time.

Before data can be visualized, it usually undergoes a transformation process where data can be enriched and structured into a format that is optimal for a visualization tool. AWS Step Functions lets you create and coordinate individual tasks into a flexible workflow. Step Functions includes built-in error handling, parameter passing, recommended security settings, and state management. This reduces the amount of code you must write and maintain.

After the data is transformed and uploaded into a data store, it can be visualized. Amazon QuickSight is a cloud-scale business intelligence (BI) service that you can utilize to deliver easy-to-understand insights. QuickSight connects to your data in the cloud and combines data from many different sources. In a single data dashboard, QuickSight can include AWS data, third-party data, big data, spreadsheet data, SaaS data, B2B data, and more. As a fully managed cloud-based service, Amazon QuickSight provides enterprise-grade security, global availability, and built-in redundancy. Furthermore, it provides the user-management tools you need to scale, all without any infrastructure to deploy or manage.

Figure 1 illustrates the flow for retrieving the metadata from your applications, processing it, and visualizing it in QuickSight.
Figure 1: Retrieving, processing, and visualizing application metadata

Let's go through the flow to understand what happens at each step, as shown in Figure 1:

- On a defined scheduled interval, EventBridge invokes an AWS Lambda function.
- The Lambda function retrieves all existing applications and the corresponding associated AWS CloudFormation resources and attribute group names, and invokes the Step Function, passing this data as input.
- The Step Function Workflow processes each application's metadata one at a time. It utilizes AWS Cost Explorer to retrieve costs for each CloudFormation stack associated with the application. Next, it retrieves metadata information from each attribute group. Then, it formats the data and uploads it to an Amazon Simple Storage Service (Amazon S3) bucket. Lastly, it notifies QuickSight to pull the latest data from Amazon S3.
- QuickSight refreshes the dataset and updates the dashboards.

Visualizing the AppRegistry applications

Let's assume the scheduler in EventBridge is configured to trigger the Lambda function daily, and you have one AppRegistry application with one CloudFormation stack and one attribute group associated with the application. Let's also assume that the attribute group stores the following metadata:

When the Step Functions Workflow starts the execution, it collects the information about the application, and:

- queries Cost Explorer to identify the daily cost of the CloudFormation stack,
- queries the attribute group to identify other costs associated with this application that run outside the AWS environment,
- and uploads the identified information to S3, after which QuickSight pulls the new data and updates the dashboard.

This process is repeated every day, and your dashboards begin building up over time.
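The "query Cost Explorer, flatten, upload" step in the workflow above boils down to turning Cost Explorer results into flat rows before the S3 upload. A hypothetical sketch — the row schema and function name are my own, and the input follows the shape of boto3's get_cost_and_usage response:

```python
def rows_from_cost_explorer(app_name, ce_response):
    """Flatten a Cost Explorer GetCostAndUsage response into per-day rows
    ready to be written out as CSV/JSON for QuickSight."""
    rows = []
    for result in ce_response.get('ResultsByTime', []):
        rows.append({
            'ApplicationName': app_name,
            'Date': result['TimePeriod']['Start'],
            'Cost': round(float(result['Total']['UnblendedCost']['Amount']), 2),
        })
    return rows

# Example with a stubbed response (no AWS call is made here):
response = {
    'ResultsByTime': [
        {'TimePeriod': {'Start': '2022-08-01', 'End': '2022-08-02'},
         'Total': {'UnblendedCost': {'Amount': '3.456', 'Unit': 'USD'}}},
        {'TimePeriod': {'Start': '2022-08-02', 'End': '2022-08-03'},
         'Total': {'UnblendedCost': {'Amount': '4.2', 'Unit': 'USD'}}},
    ]
}
rows = rows_from_cost_explorer('my-app', response)
print(rows[0]['Cost'], rows[1]['Cost'])  # 3.46 4.2
```

Keeping the transform as a pure function like this makes the Step Functions task easy to test locally with stubbed responses.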
As time progresses and you continue to utilize AppRegistry as the repository of your applications and associated resources, you can start to answer some of the most common questions about your application costs, such as:

- What are the total costs of all of my applications over the past six months?
- How did the costs of my applications change over the last six months?
- What are the costs of specific applications over the last three months, and which application component is most expensive?

A QuickSight dashboard is a read-only snapshot of an analysis that you can share with other Amazon QuickSight users. A dashboard preserves the analysis configuration at the time that you publish it, including things such as filtering, parameters, controls, and sort order.

To answer the first question, create a Donut chart, grouping by application name and costs, and filtering by time period. Figure 2 illustrates an example Donut chart.

Figure 2: Donut chart – total costs of all applications per application over last six months

To answer the second question, create an Area line chart, grouping by costs and date and filtering by time period. Figure 3 illustrates an example Area line chart.

Figure 3: Area line chart – total costs of all applications over last six months

To answer the third question, create a Pie chart, grouping by resource and costs and filtering by time period. Figure 4 illustrates an example Pie chart.

Figure 4: Pie chart – total costs of Applications per Resource over the last three months

Prerequisites

For this solution, you need the following prerequisites:

- The Cost Explorer cost allocation tag aws:cloudformation:stack-id is activated in the Root account. To activate this tag, follow the guide Activating the AWS-Generated Cost Allocation Tags. Note: It can take up to 24 hours for tags to activate.
- An existing S3 bucket where data will be stored and from which QuickSight will load the data used by the visuals.
- You’re signed up for a QuickSight Standard or Enterprise subscription. To sign up, follow the guide Signing Up for an Amazon QuickSight Subscription. - QuickSight has read permissions to the S3 bucket that is the data source. To grant QuickSight with read permissions to the S3 bucket, follow the guide authorize Amazon QuickSight to access your Amazon S3 bucket. - Free SPICE capacity in the Region where QuickSight resources are deployed. To learn more about your current SPICE capacity, and how to add additional capacity, see Viewing SPICE Capacity and Usage in an AWS Region and Purchasing SPICE Capacity in an AWS Region. - AWS Region in which QuickSight is offered. To find out which Regions offer QuickSight, see AWS Regional Services List. Implementation and deployment details In this section, you create a CloudFormation stack that creates AWS resources for this solution. Next, you create a QuickSight analysis and publish the QuickSight dashboard, To start the deployment process, select the following Launch Stack button. Note: If your AWS account has a QuickSight enterprise subscription, then you can skip the steps below to create an analysis and publish the dashboard, which are created as part of the CloudFormation stack deployment. . You also can download the CloudFormation template if you want to modify the code before the deployment. The template in Figure 3 takes several parameters. Let’s go over the key parameters. Figure 5: CloudFormation stack parameters The key parameters are: - AttributeGroupCostsEnabled: You have costs in your AppRegistry attribute groups that you want to be added and reflected in the Amazon QuickSight dashboard. - DeployQuickSight: Whether or not to deploy Amazon QuickSight resources in the CloudFormation stack deployment Region. AppRegistry application is a Regional service. You can deploy QuickSight in one Region but deploy this solution in every Region where you have the AppRegistry application. 
All data will be stored in one central S3 bucket, and one QuickSight deployment is utilized to visualize the data.

- QuickSightSubscription: Amazon QuickSight subscription edition in your AWS Account.
- QuickSightUsername: User name of a QuickSight author/admin from the default namespace (as displayed in the QuickSight admin panel). The dashboard created by this template will be shared with this user. To find the user names of your QuickSight users, see Managing User Access Inside Amazon QuickSight.
- S3BucketName: Amazon S3 bucket name where costs data files are stored for Amazon QuickSight to access and upload.

All other input fields have default values that you can either accept or override. Once you provide the parameter input values and reach the final screen, choose Create stack to deploy the CloudFormation stack. This template creates several resources in your AWS account, as follows:

- An EventBridge rule that triggers a Lambda function to collect information about current AppRegistry applications.
- Lambda functions to collect, process, and store AppRegistry applications information and associated costs, as well as store this information in an S3 bucket.
- A Step Function Workflow that starts the flow of processing the information by using Lambda functions.
- QuickSight resources connecting to the S3 bucket that you specified in the S3BucketName CloudFormation Template parameter as the data source.

Creating QuickSight analysis

Now that the CloudFormation stack is successfully deployed, create an analysis where you create the charts. Follow these steps to create a new analysis:

- Navigate to the QuickSight analysis view, and click New analysis.
- Select the AppRegistryVisualizedS3 dataset.
- Click the Create analysis button.

Next, create the three visuals as shown above in Figures 2, 3, and 4. If you don't have any AppRegistry applications at this point, then those visuals don't show any data, as shown in Figure 6 below.
However, over time, every time the scheduler is triggered, QuickSight will pull new data and start showing data in your visuals.

Figure 6: Creating donut chart – total costs of all applications per application

To create a visual showing the total costs of all AppRegistry applications per application:

- In Fields list, select ApplicationName and Cost.
- In Visual types, select Donut chart.

To create a visual showing the total costs of all applications over time, first click the empty sheet area to deselect the current visual, then:

- In Fields list, select Cost and Date.
- In Visual types, select Area line chart.

Figure 7: Creating area line chart – total costs of all applications over time

To create a visual showing the total costs of a specific application per resource, deselect the current visual and:

- In Fields list, select Cost and ResourceName.
- In Visual types, select Pie chart.

Figure 8: Creating pie chart – total costs of all applications per resource

Publishing QuickSight dashboard

In your QuickSight analysis, you now have three visuals. As a final step, publish a dashboard from this analysis. To publish a dashboard:

- Click Share, and select Publish dashboard.

Figure 9: Analysis view – Publish QuickSight dashboard

- In Publish new dashboard as, enter a name for your dashboard.
- Click Publish dashboard.

Figure 10: Publishing QuickSight dashboard

- (Optional) Share the dashboard with specific users or everyone in your account.

Every time the scheduler triggers the process, new AppRegistry applications data is collected and stored in an S3 bucket, and the QuickSight dashboard is automatically refreshed to reflect the latest data. When you share a dashboard, you specify which users have access to it. Users who are dashboard viewers can view and filter the dashboard data. Any selections to filters, controls, or sorting that users apply while viewing the dashboard exist only while the user is viewing the dashboard. These aren't saved once it's closed.
Users who are dashboard owners can edit and share the dashboard. To learn more about QuickSight dashboards, see Working with Dashboards.

Applying filters to QuickSight visuals

Utilize filters to refine the data displayed in a visual. By default, a filter applies only to the item selected when the filter was created. You can set the scope of a filter to one or more visualizations, and if you need to, you can change the scope of a filter after you create it. By using filters, you can create visuals that will, for example, show the costs over the last N months, or filter the overall costs to the costs of a specific AppRegistry application. To learn more about how to create and manage filters, see Filtering Data.

Clean up

To avoid incurring future charges, make sure to remove the resources you created when you're done using them.

Conclusion

This post demonstrated how to visualize your applications in AWS using AppRegistry, helping you track application costs over time and understand the context of your applications and resources across your environments. EventBridge helped you schedule the data collection, and QuickSight created the dashboard with several analyses. You can use QuickSight or any other data visualization dashboard to track cost and make decisions on logical groups of AWS services, driving better control and visibility across your organization.
https://aws.amazon.com/blogs/mt/visualize-application-costs-using-aws-service-catalog-appregistry-and-amazon-quicksight/
Opened 2 years ago
Closed 2 years ago

#20759 closed Bug (duplicate)

Test DB Creation doesn't create PointField with db_index=True

Description

I want to test my GeoDjango project, but during creation of the test database I got the following error:

Creating test database for alias 'default'...
Failed to install index for map.OSMPlace model: column "point" does not exist

It is correct: the table map_osmplace exists without a point column.

from django.contrib.gis.db import models

point = models.PointField(db_index=True)

Another table without an index on the point field contains the point column.

Change History (1)

comment:1 Changed 2 years ago by claudep

- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to duplicate
- Status changed from new to closed

Same issue as #20758
https://code.djangoproject.com/ticket/20759
Introduction

Vite is a build tool developed by Evan You, the author of Vue. It uses native ES Module imports to provide a fast-running development environment with no bundling required. Vue3, React, and Preact are also supported. In this article, I'll build a Vue3 project environment using Vite. You can find the template here.

Things to do

The goal is to get you close to the default vue/cli template, and I'll implement the necessary tools for development. I'm going to walk you through each of these tools so that you can introduce them individually:

- Typescript
- ESLint
- Prettier
- Stylelint
- husky and lint-staged
- Path Alias
- vue-tsc

Building Environments

Verified in the next version: @vitejs/create-app@2.4.5

First, let's expand the vite template.

yarn create @vitejs/app <project-name> --template vue-ts
cd <project-name>
yarn

Once the development server is up, you'll be impressed by how fast it is.

Typescript

Next, let's make your project Typescripted. Since Vue3 has Typescript by default, you only need to do the following three things:

1. Add lang="ts" to the script tag in all .vue files.
2. Change main.js to main.ts.
3. Change the src of the script tag of index.html to /src/main.ts.

Now you can start up the development server and see that it runs without any problem. It will actually work on its own, but you can add more settings to improve the user experience in the editor. If you're using VSCode, you should see a main.ts with a ts(2307) error. To fix this, we need to create a type declaration file for vue.

declare module '*.vue' {
  import type { DefineComponent } from 'vue'
  const component: DefineComponent<Record<string, unknown>, Record<string, unknown>, unknown>
  export default component
}

Place the tsconfig.json in your project root. This will tell the editor to recognize the project as a Typescript project.
{
  "compilerOptions": {
    "target": "es5",
    "module": "esnext",
    "strict": true,
    "jsx": "preserve",
    "importHelpers": true,
    "moduleResolution": "node",
    "skipLibCheck": true,
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true,
    "sourceMap": true,
    "baseUrl": ".",
    "paths": {
      "/@/*": [
        // / to begin with.
        "src/*"
      ]
    },
    "lib": ["esnext", "dom", "dom.iterable", "scripthost"]
  },
  "include": ["src/**/*.ts", "src/**/*.tsx", "src/**/*.vue"],
  "exclude": ["node_modules"]
}

That's the end of Typescript.

Introducing ESLint

Development without a linter is tough, so be sure to install it.

yarn add -D eslint eslint-plugin-vue @vue/eslint-config-typescript @typescript-eslint/parser @typescript-eslint/eslint-plugin

{
  "root": true,
  "env": {
    "browser": true,
    "es2021": true,
    "node": true
  },
  "extends": [
    "plugin:vue/vue3-recommended",
    "eslint:recommended",
    "@vue/typescript/recommended"
  ],
  "parserOptions": {
    "ecmaVersion": 2021
  },
  "plugins": [],
  "rules": {}
}

This will result in an error, so we will modify the type definition.

declare module '*.vue' {
  import type { DefineComponent } from 'vue'
  const component: DefineComponent<Record<string, unknown>, Record<string, unknown>, unknown>
  export default component
}

It is easy to prepare a linting command in the scripts section of the package.json for later:

"scripts": {
  "lint:script": "eslint --ext .ts,.vue src"
}

yarn add -D prettier eslint-plugin-prettier @vue/eslint-config-prettier

{
  "singleQuote": true,
  "semi": false,
  "vueIndentScriptAndStyle": true
}

When ESLint and Prettier are used together, we need to fix the .eslintrc to avoid duplicate rules:

{
  "extends": [
    "plugin:vue/vue3-recommended",
    "eslint:recommended",
    "@vue/typescript/recommended",
    // Add under other rules
    "@vue/prettier",
    "@vue/prettier/@typescript-eslint"
  ]
}

The command to execute the formatter:

yarn prettier -w -u .

I want to apply automatic formatting before committing, so add the setting to lint-staged:

{
  "lint-staged": {
    "*.{ts,vue}": "eslint --fix",
    "*": "prettier -w -u"  // Set prettier to last
  }
}

I want to include styles as a target for linting as well.
yarn add -D stylelint stylelint-config-recommended stylelint-config-standard

{
  "extends": ["stylelint-config-recommended", "stylelint-config-standard"]
}

Edit the package.json and set the commands and lint-staged:

{
  "scripts": {
    "lint:style": "stylelint src/**/*.{css,scss,vue}"
  },
  "lint-staged": {
    "*.{ts,tsx}": "eslint --fix",
    "*.{css,scss,vue}": "stylelint --fix",
    "*": "prettier -w -u"
  }
}

VSCode users can format automatically with the following settings. Extensions are required, so if you don't have them, install them (see here).

That's the end of the basic setup of the linter and formatter.

Configuring Path Alias

The import of a module is relative by default, but you may want to set an alias so imports always refer to the same root.

import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
import { resolve } from 'path'

export default defineConfig({
  resolve: {
    alias: {
      '@': resolve(__dirname, 'src')
    }
  },
  plugins: [vue()]
})

Also change tsconfig.json:

{
  "compilerOptions": {
    // ...
    "baseUrl": ".",
    "paths": {
      "@/*": ["./src/*"]
    }
  }
}

Now you can set up the alias. I'll use it like this:

<script lang="ts">
  import HelloWorld from '@/components/HelloWorld.vue'
</script>

Checking templates statically with vue-tsc

You can statically check your template tags with vue-tsc. It is installed when the template is generated. However, as of @vue/runtime-dom@3.1.4, it does not work unless skipLibCheck in tsconfig.json is set to true.

{
  "compilerOptions": {
    // ...
    "skipLibCheck": true
  }
}

{
  "scripts": {
    // ...
    "lint:markup": "vue-tsc --noEmit"
  }
}

I recommend running static checks in CI rather than before committing, as static checks can take quite a bit of time as the number of Vue files increases.

That's the minimum environment you'll need to build.
https://miyauchi.dev/posts/vite-vue3-typescript/
[EMAIL PROTECTED] (2002-04-12 at 1217.06 -0400):

> I don't currently have a digital camera. I use the new Kodak format and for

What Kodak format? Are you talking about a typical reflex camera with 35mm film, or something else?

> the developing process I request digitized photos. I don't now recall the
> size of each photo-file returned (via CD) but I think each photo-file is a
> jpeg file under 1 Meg.

And in pixels? That is what really counts... and they could also give you something better than JPEG (i.e. something lossless).

> So my real question is should I buy a $200 HP camera at 1.3 Meg pixels or a
> $200 HP scanner?

If you can afford the development costs (both negative and a mid/small size paper copy), the scanner. Also, film has better dynamic range.

GSR
https://www.mail-archive.com/gimp-user@lists.xcf.berkeley.edu/msg01156.html
## Fluent

Fluent is an ORM framework for Swift. It takes advantage of Swift's strong type system to provide an easy-to-use interface for your database. Using Fluent centers around the creation of model types which represent data structures in your database. These models are then used to perform create, read, update, and delete operations instead of writing raw queries.

## Configuration

When creating a project using `vapor new`, answer "yes" to including Fluent and choose which database driver you want to use. This will automatically add the dependencies to your new project as well as example configuration code.

### Existing Project

If you have an existing project that you want to add Fluent to, you will need to add two dependencies to your package:

- vapor/fluent@4.0.0
- One (or more) Fluent driver(s) of your choice

```swift
.package(url: "", from: "4.0.0"),
.package(url: "-<db>-driver.git", from: <version>),
```

```swift
.target(name: "App", dependencies: [
    .product(name: "Fluent", package: "fluent"),
    .product(name: "Fluent<db>Driver", package: "fluent-<db>-driver"),
    .product(name: "Vapor", package: "vapor"),
]),
```

Once the packages are added as dependencies, you can configure your databases using `app.databases` in `configure.swift`.

```swift
import Fluent
import Fluent<db>Driver

app.databases.use(<db config>, as: <identifier>)
```

Each of the Fluent drivers below has more specific instructions for configuration.

## Drivers

Fluent currently has four officially supported drivers. You can search GitHub for the tag `fluent-driver` for a full list of official and third-party Fluent database drivers.

### PostgreSQL

PostgreSQL is an open source, standards-compliant SQL database. It is easily configurable on most cloud hosting providers. This is Fluent's recommended database driver.

To use PostgreSQL, add the following dependencies to your package.

```swift
.package(url: "", from: "2.0.0")
```

```swift
.product(name: "FluentPostgresDriver", package: "fluent-postgres-driver")
```

Once the dependencies are added, configure the database's credentials with Fluent using `app.databases.use` in `configure.swift`.

```swift
import Fluent
import FluentPostgresDriver

app.databases.use(.postgres(
    hostname: "localhost",
    username: "vapor",
    password: "vapor",
    database: "vapor"
), as: .psql)
```

You can also parse the credentials from a database connection string.

```swift
try app.databases.use(.postgres(url: "<connection string>"), as: .psql)
```

### SQLite

SQLite is an open source, embedded SQL database. Its simplistic nature makes it a great candidate for prototyping and testing.

To use SQLite, add the following dependencies to your package.

```swift
.package(url: "", from: "4.0.0")
```

```swift
.product(name: "FluentSQLiteDriver", package: "fluent-sqlite-driver")
```

Once the dependencies are added, configure the database with Fluent using `app.databases.use` in `configure.swift`.

```swift
import Fluent
import FluentSQLiteDriver

app.databases.use(.sqlite(.file("db.sqlite")), as: .sqlite)
```

You can also configure SQLite to store the database ephemerally in memory.

```swift
app.databases.use(.sqlite(.memory), as: .sqlite)
```

If you use an in-memory database, make sure to set Fluent to migrate automatically using `--auto-migrate` or run `app.autoMigrate()` after adding migrations.

```swift
app.migrations.add(CreateTodo())
try app.autoMigrate().wait()
```

Tip: The SQLite configuration automatically enables foreign key constraints on all created connections, but does not alter foreign key configurations in the database itself. Deleting records in a database directly might violate foreign key constraints and triggers.

### MySQL

MySQL is a popular open source SQL database. It is available on many cloud hosting providers. This driver also supports MariaDB.

To use MySQL, add the following dependencies to your package.

```swift
.package(url: "", from: "4.0.0-beta")
```

```swift
.product(name: "FluentMySQLDriver", package: "fluent-mysql-driver")
```

Once the dependencies are added, configure the database's credentials with Fluent using `app.databases.use` in `configure.swift`.

```swift
import Fluent
import FluentMySQLDriver

app.databases.use(.mysql(
    hostname: "localhost",
    username: "vapor",
    password: "vapor",
    database: "vapor"
), as: .mysql)
```

You can also parse the credentials from a database connection string.

```swift
try app.databases.use(.mysql(url: "<connection string>"), as: .mysql)
```

To configure a local connection without an SSL certificate involved, you should disable certificate verification. You might need to do this, for example, if connecting to a MySQL 8 database in Docker.

```swift
app.databases.use(.mysql(
    hostname: "localhost",
    username: "vapor",
    password: "vapor",
    database: "vapor",
    tlsConfiguration: .forClient(certificateVerification: .none)
), as: .mysql)
```

Warning: Do not disable certificate verification in production. You should provide a certificate to the `TLSConfiguration` to verify against.

### MongoDB

MongoDB is a popular schemaless NoSQL database designed for programmers. The driver supports all cloud hosting providers and self-hosted installations from version 3.4 and up.

Note: This driver is powered by a community created and maintained MongoDB client called MongoKitten. For the official MongoDB client, see mongo-swift-driver.

To use MongoDB, add the following dependencies to your package.

```swift
.package(url: "", from: "1.0.0"),
```

```swift
.product(name: "FluentMongoDriver", package: "fluent-mongo-driver")
```

Once the dependencies are added, configure the database's credentials with Fluent using `app.databases.use` in `configure.swift`. To connect, pass a connection string in the standard MongoDB connection URI format.

```swift
import Fluent
import FluentMongoDriver

try app.databases.use(.mongo(connectionString: "<connection string>"), as: .mongo)
```

## Models

Models represent fixed data structures in your database, like tables or collections. Models have one or more fields that store codable values. All models also have a unique identifier. Property wrappers are used to denote identifiers and fields as well as more complex mappings mentioned later. Take a look at the following model which represents a galaxy.

```swift
final class Galaxy: Model {
    // Name of the table or collection.
    static let schema = "galaxies"

    // Unique identifier for this Galaxy.
    @ID(key: .id)
    var id: UUID?

    // The Galaxy's name.
    @Field(key: "name")
    var name: String

    // Creates a new, empty Galaxy.
    init() { }

    // Creates a new Galaxy with all properties set.
    init(id: UUID? = nil, name: String) {
        self.id = id
        self.name = name
    }
}
```

To create a new model, create a new class conforming to `Model`.

Tip: It's recommended to mark model classes `final` to improve performance and simplify conformance requirements.

The `Model` protocol's first requirement is the static string `schema`.

```swift
static let schema = "galaxies"
```

This property tells Fluent which table or collection the model corresponds to. This can be a table that already exists in the database or one that you will create with a migration. The schema is usually `snake_case` and plural.

### Identifier

The next requirement is an identifier field named `id`.

```swift
@ID(key: .id)
var id: UUID?
```

This field must use the `@ID` property wrapper. Fluent recommends using `UUID` and the special `.id` field key since this is compatible with all of Fluent's drivers. If you want to use a custom ID key or type, use the `@ID(custom:)` overload.

### Fields

After the identifier is added, you can add however many fields you'd like to store additional information. In this example, the only additional field is the galaxy's name.

```swift
@Field(key: "name")
var name: String
```

For simple fields, the `@Field` property wrapper is used. Like `@ID`, the `key` parameter specifies the field's name in the database. This is especially useful for cases where the database field naming convention may be different than in Swift, e.g., using `snake_case` instead of `camelCase`.

Next, all models require an empty `init`. This allows Fluent to create new instances of the model.

```swift
init() { }
```

Finally, you can add a convenience `init` for your model that sets all of its properties.

```swift
init(id: UUID? = nil, name: String) {
    self.id = id
    self.name = name
}
```

Using convenience inits is especially helpful if you add new properties to your model, as you can get compile-time errors if the init method changes.

## Migrations

If your database uses pre-defined schemas, like SQL databases, you will need a migration to prepare the database for your model. Migrations are also useful for seeding databases with data. To create a migration, define a new type conforming to the `Migration` protocol. Take a look at the following migration for the previously defined `Galaxy` model.

```swift
struct CreateGalaxy: Migration {
    // Prepares the database for storing Galaxy models.
    func prepare(on database: Database) -> EventLoopFuture<Void> {
        database.schema("galaxies")
            .id()
            .field("name", .string)
            .create()
    }

    // Optionally reverts the changes made in the prepare method.
    func revert(on database: Database) -> EventLoopFuture<Void> {
        database.schema("galaxies").delete()
    }
}
```

The `prepare` method is used for preparing the database to store `Galaxy` models.

### Schema

In this method, `database.schema(_:)` is used to create a new `SchemaBuilder`. One or more fields are then added to the builder before calling `create()` to create the schema.

Each field added to the builder has a name, type, and optional constraints.

```swift
field(<name>, <type>, <optional constraints>)
```

There is a convenience `id()` method for adding `@ID` properties using Fluent's recommended defaults.

Reverting the migration undoes any changes made in the prepare method. In this case, that means deleting the Galaxy's schema.

Once the migration is defined, you must tell Fluent about it by adding it to `app.migrations` in `configure.swift`.

```swift
app.migrations.add(CreateGalaxy())
```

### Migrate

To run migrations, call `vapor run migrate` from the command line or add `migrate` as an argument to Xcode's Run scheme.

```
$ vapor run migrate
Migrate Command: Prepare
The following migration(s) will be prepared:
+ CreateGalaxy on default
Would you like to continue?
y/n> y
Migration successful
```

## Querying

Now that you've successfully created a model and migrated your database, you're ready to make your first query.

### All

Take a look at the following route which will return an array of all the galaxies in the database.

```swift
app.get("galaxies") { req in
    Galaxy.query(on: req.db).all()
}
```

In order to return a `Galaxy` directly in a route closure, add conformance to `Content`.

```swift
final class Galaxy: Model, Content {
    ...
}
```

`Galaxy.query` is used to create a new query builder for the model. `req.db` is a reference to the default database for your application. Finally, `all()` returns all of the models stored in the database.

If you compile and run the project and request `GET /galaxies`, you should see an empty array returned. Let's add a route for creating a new galaxy.

### Create

Following RESTful convention, use the `POST /galaxies` endpoint for creating a new galaxy. Since models are codable, you can decode a galaxy directly from the request body.

```swift
app.post("galaxies") { req -> EventLoopFuture<Galaxy> in
    let galaxy = try req.content.decode(Galaxy.self)
    return galaxy.create(on: req.db)
        .map { galaxy }
}
```

See also: Content → Overview for more information about decoding request bodies.

Once you have an instance of the model, calling `create(on:)` saves the model to the database. This returns an `EventLoopFuture<Void>` which signals that the save has completed. Once the save completes, return the newly created model using `map`.

Build and run the project and send the following request.

```http
POST /galaxies HTTP/1.1
content-length: 21
content-type: application/json

{
    "name": "Milky Way"
}
```

You should get the created model back with an identifier as the response.

```json
{
    "id": ...,
    "name": "Milky Way"
}
```

Now, if you query `GET /galaxies` again, you should see the newly created galaxy returned in the array.

## Relations

What are galaxies without stars! Let's take a quick look at Fluent's powerful relational features by adding a one-to-many relation between `Galaxy` and a new `Star` model.

```swift
final class Star: Model, Content {
    // Name of the table or collection.
    static let schema = "stars"

    // Unique identifier for this Star.
    @ID(key: .id)
    var id: UUID?

    // The Star's name.
    @Field(key: "name")
    var name: String

    // Reference to the Galaxy this Star is in.
    @Parent(key: "galaxy_id")
    var galaxy: Galaxy

    // Creates a new, empty Star.
    init() { }

    // Creates a new Star with all properties set.
    init(id: UUID? = nil, name: String, galaxyID: UUID) {
        self.id = id
        self.name = name
        self.$galaxy.id = galaxyID
    }
}
```

### Parent

The new `Star` model is very similar to `Galaxy` except for a new field type: `@Parent`.

```swift
@Parent(key: "galaxy_id")
var galaxy: Galaxy
```

The parent property is a field that stores another model's identifier. The model holding the reference is called the "child" and the referenced model is called the "parent". This type of relation is also known as "one-to-many". The `key` parameter to the property specifies the field name that should be used to store the parent's key in the database.

In the init method, the parent identifier is set using `$galaxy`.

```swift
self.$galaxy.id = galaxyID
```

By prefixing the parent property's name with `$`, you access the underlying property wrapper. This is required for getting access to the internal `@Field` that stores the actual identifier value.

See also: Check out the Swift Evolution proposal for property wrappers for more information: [SE-0258] Property Wrappers

Next, create a migration to prepare the database for handling `Star`.

```swift
struct CreateStar: Migration {
    // Prepares the database for storing Star models.
    func prepare(on database: Database) -> EventLoopFuture<Void> {
        database.schema("stars")
            .id()
            .field("name", .string)
            .field("galaxy_id", .uuid, .references("galaxies", "id"))
            .create()
    }

    // Optionally reverts the changes made in the prepare method.
    func revert(on database: Database) -> EventLoopFuture<Void> {
        database.schema("stars").delete()
    }
}
```

This is mostly the same as the galaxy's migration, except for the additional field to store the parent galaxy's identifier.

```swift
.field("galaxy_id", .uuid, .references("galaxies", "id"))
```

This field specifies an optional constraint telling the database that the field's value references the field "id" in the "galaxies" schema. This is also known as a foreign key and helps ensure data integrity.

Once the migration is created, add it to `app.migrations` after the `CreateGalaxy` migration.

```swift
app.migrations.add(CreateGalaxy())
app.migrations.add(CreateStar())
```

Since migrations run in order, and `CreateStar` references the galaxies schema, ordering is important. Finally, run the migrations to prepare the database.

Add a route for creating new stars.

```swift
app.post("stars") { req -> EventLoopFuture<Star> in
    let star = try req.content.decode(Star.self)
    return star.create(on: req.db)
        .map { star }
}
```

Create a new star referencing the previously created galaxy using the following HTTP request.

```http
POST /stars HTTP/1.1
content-length: 36
content-type: application/json

{
    "name": "Sun",
    "galaxy": {
        "id": ...
    }
}
```

You should see the newly created star returned with a unique identifier.

```json
{
    "id": ...,
    "name": "Sun",
    "galaxy": {
        "id": ...
    }
}
```

### Children

Now let's take a look at how you can utilize Fluent's eager-loading feature to automatically return a galaxy's stars in the `GET /galaxies` route. Add the following property to the `Galaxy` model.

```swift
// All the Stars in this Galaxy.
@Children(for: \.$galaxy)
var stars: [Star]
```

The `@Children` property wrapper is the inverse of `@Parent`. It takes a key-path to the child's `@Parent` field as the `for` argument. Its value is an array of children, since zero or more child models may exist. No changes to the galaxy's migration are needed, since all the information needed for this relation is stored on `Star`.

### Eager Load

Now that the relation is complete, you can use the `with` method on the query builder to automatically fetch and serialize the galaxy-star relation.

```swift
app.get("galaxies") { req in
    Galaxy.query(on: req.db).with(\.$stars).all()
}
```

A key-path to the `@Children` relation is passed to `with` to tell Fluent to automatically load this relation in all of the resulting models. Build and run and send another request to `GET /galaxies`. You should now see the stars automatically included in the response.

```json
[
    {
        "id": ...,
        "name": "Milky Way",
        "stars": [
            {
                "id": ...,
                "name": "Sun",
                "galaxy": {
                    "id": ...
                }
            }
        ]
    }
]
```

## Next steps

Congratulations on creating your first models and migrations and performing basic create and read operations. For more in-depth information on all of these features, check out their respective sections in the Fluent guide.
https://docs.vapor.codes/4.0/fluent/overview/
24 thoughts on "Develop Custom Workflow Activity for SharePoint 2010 Workflow"

Where do you create your CreateSurveyList? Is it a class?

Thanks for the article. I have got to the point where I can see my custom activity in the insert-action drop-down; however, when I click on it, it does not get added to the workflow step. There are no errors, it just won't add. Any help would be much appreciated.

I have the same problem as the one reported above: it shows up in the Actions list but won't get added to the workflow step.

I think you have something wrong in your code; it's been impossible for me to recreate it. Could you provide a simple example of how to create a simple list with a custom workflow action? Thanks a lot. Loogares

I followed the above steps and created the same, but I couldn't add the action in the workflow.

Could you create a project for this, so I will be able to download it? Thank you in any case!

Me, too! Is there source code for download?

I always get this error: Error 1: Both "TestWorkflowAction.csproj" and "TestWorkflowAction" contain a file that deploys to the same package location: TestWorkflowAction.dll (D:\Projects\CVS-visual-studio-projects\TestWorkflowAction\TestWorkflowAction\Package\Package.package, TestWorkflowAction)

Hi "a visitor" — the custom activity will NOT show up unless you ensure that the authorizedType is added to the web.config file of the SharePoint web application.

I've added the authorizedType to web.config. The custom activity 'Create Survey List' appears in SharePoint Designer 2010 under actions, but when I try to add it, nothing happens.

Nice article, but it has some errors:
- You can't have a class (CreateSurveyLibrary) and a method with the same name! This creates a compilation error.
- In the .action file, the class is called 'CreateSurveyList'.
- It is not specified where to put the 'authorizedType' line (for others: this line goes in the web.config of the target web application).
- Also, it is worth mentioning that the public key token must be changed, because it's not the same if you create 2 different assemblies.

It would be helpful if a solution with these projects were available for download.

Hey, thanks for the article! This was a great introduction to creating custom activities for SharePoint Designer.

Thanks for this article. This is a great introduction and works for me. I just need to replace the public key token "38b1d60938e39f46" with the one in c:\windows\Assembly\CreateActivityDemo.dll.

Can we debug this activity? I have written an activity to copy an item from one site to another. When I run the workflow it simply throws an object reference exception. Any help in debugging would be appreciated.

Hi, this is a very nice article. When I tried to create a custom action as you mentioned above, it was successfully created and added to the SharePoint Designer workflow action list, but when I go to add it into a workflow from the action list, it does not get added to the workflow. Please help.

I too am unable to use the action; although it does appear in SharePoint Designer, selecting it doesn't do anything. Has anyone managed to resolve this?!

Hello all. Ironically, after looking at this for 4 hours and then posting a request for help... I have found the problem(!). In case it helps anyone in the future: if you see the activity in SharePoint Designer but nothing happens when you select it, then double-check your "authorizedType" setup (under the System.Workflow.ComponentModel.WorkflowCompiler section). You must add a line with the assembly and the correct namespace(!). Having corrected a typo and restarted Designer, I can now add the activity to my workflow. Hope this helps someone. Best to all.

This is a good introduction for me to custom workflows. I did everything as shown in this article, and the solution is deployed to the target web application. But whenever I create a new workflow, a popup appears with an error message: "The list of workflow actions on the server references an assembly that does not exist. The assembly strong name is CreateActiviyDemo, Version=1.0.0.0, Culture=neutral, PublicKeyToken=38b1d60938e39f47. Contact your server administrator for more information." Please help me out with this.

Hi Dina G: how do I check the "authorizedType" value? Could you explain it briefly? By "System.Workflow.ComponentModel.WorkflowCompiler section", are you denoting the assembly? -John

I followed the same steps above and everything works well, but the issue is that when I try to use the action in Designer nothing happens... help would be much appreciated.

Great blog, really helpful. I recommend including some steps like creating a strong name for the activity library project. At first I got an error due to this strong name while deploying into the GAC; later I resolved it and deployed successfully.

If you have an app server and web front ends, be sure to update the web.configs on each server.

A good reference on how to add the safe entry with minimum hard-coded information: Registering Workflow Activities for Declarative Workflows
http://sundarnarasiman.net/2010/12/26/develop-custom-workflow-activity-for-sharepoint-2010-workflow/
I have a tab-separated .txt file that stores numbers as a matrix. The number of lines is 904,652 and the number of columns is 26,600 (tab-separated). The total size of the file is around 48 GB. I need to load this file as a matrix and take the transpose of the matrix to extract training and testing data. I am using Python, pandas, and sklearn.

```python
import pandas

csv_delimiter = "\t"  # tab-separated input

def open_with_pandas_read_csv(filename):
    df = pandas.read_csv(filename, sep=csv_delimiter, header=None)
    data = df.values
    return data
```

I found a solution (I believe there are still more efficient and logical ones) on Stack Overflow. The np.fromfile() method loads huge files more efficiently than np.loadtxt() and np.genfromtxt(), and even pandas.read_csv(). It took just around 274 GB without any modification or compression. I thank everyone who tried to help me on this issue.
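As a small-scale illustration of the load-then-transpose step (not a solution to the 48 GB memory problem itself — a file that size generally calls for memory-mapped or chunked I/O), here is a self-contained sketch using only the standard library; the sample data and delimiter are assumptions:

```python
import csv
import io

def load_matrix(text, delimiter="\t"):
    """Parse delimiter-separated text into a row-major list-of-lists of floats."""
    reader = csv.reader(io.StringIO(text), delimiter=delimiter)
    return [[float(cell) for cell in row] for row in reader]

def transpose(matrix):
    """Transpose a row-major matrix; zip(*m) pairs the i-th element of every row."""
    return [list(col) for col in zip(*matrix)]

sample = "1\t2\t3\n4\t5\t6\n"
m = load_matrix(sample)   # 2 rows x 3 columns
t = transpose(m)          # 3 rows x 2 columns
print(t)  # [[1.0, 4.0], [2.0, 5.0], [3.0, 6.0]]
```

The same transpose idiom works on a NumPy array (`data.T`) once the matrix fits in memory.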
https://codedump.io/share/UlXrHeIFbGSY/1/loading-a-very-large-txt-file-and-taking-transpose
You are doing it backwards:

1) create an array of pointers
2) for each pointer in this array, create an array to be pointed to by this pointer.

```cpp
int **pa = new int*[n];
for (int i = 0; i < n; ++i)
    pa[i] = new int[m];
```

You can't create a static array with runtime dimensions. You'll need to use a dynamically allocated array. Take a look at this tutorial:

```cpp
#include <cstdlib>

int main(int argc, char **argv)
{
    int a = atoi(argv[1]);
    int b = atoi(argv[2]);
    int matriz[a][b];  // variable-length array: a g++ extension, not standard C++
}
```

With this you'd have to enter the dimensions of the array as command line arguments.

Most of today's C++ compilers won't allow definition of an array with non-const size parameters (though a future standard might require it). It seems I have been spoiled by g++, which supports it as an extension. Variable-length arrays are allowed in C99, but are not covered by the C++ standard. So my initial response was correct after all (I should rely on my intuition a bit more, lol). You have to use dynamically allocated memory for variable-length arrays in C++, unless you want to make use of a compiler extension like g++'s.

errang, that doesn't work. I'm trying to make the game Tic Tac Toe, but the user has to input the size of the matrix. If the matrix is static it's easy, but they want the matrix dynamic. I made this after seeing some examples, but I don't understand it.

```cpp
#include <iostream>
using namespace std;

int main()
{
    int i = 0, a, b;
    int **matrix;
    cout << "Insert the size of the Matrix.\n";
    cin >> a;
    b = a;
    matrix = new int*[a];
    for (; i < a; i++) {
        matrix[i] = new int[b];
    }
    system("PAUSE");
    return 0;
}
```

And how can I send some information inside it? I can't visualize in my mind how this works. Thank you all.

That looks OK. It creates a dynamically allocated 2-dimensional array. For more information, see the tutorial I mentioned earlier.

You can access the dynamically allocated array the same way as you access a normal array, i.e. matrix[i][j] gets the value at row i, column j.

`matrix = new int*[a];` creates an array of pointers which are supposed to point to the first element of a row. `matrix[i] = new int[b];` creates an array of int for the row with index i, and the pointer to that array is assigned into the rows array:

```
matrix        0  1  2 ... b-1
[row 0]   -> [] [] [] ... []
[row 1]   -> [] [] [] ... []
....
[row a-1] -> [] [] [] ... []
```

So actually the matrix is an array of pointers (the left column in the diagram), and the second index works horizontally within each row (pointer).

For a fixed-size matrix it is different:

```cpp
const int a = 4;
const int b = 5;
int matrix[a][b] = { 0 };
```

```
 0  1  2  3  4
[] [] [] [] []
[] [] [] [] []
[] [] [] [] []
[] [] [] [] []
```

Here we have no additional pointer array but only one int array of 20 elements, accessible as 4 rows with 5 columns each. Nevertheless, for both we have the same expression when accessing one single element:

```cpp
matrix[2][3] = 12345;
```

Thanks

```cpp
int *matrix = new int[ROWSIZE * COLSIZE];

int access(int *matrix, int row, int col)
{
    return matrix[ROWSIZE * row + col];
}
```

```cpp
void matriz()
{
    int a, b, c = 1;
    int **matrix;
    cout << "Insert the size of the Matrix.\n";
    cin >> a;
    b = a;
    for (int i = 0; i < a; ++i) {
        matrix = new int*[a];        // note: allocated inside the loop
        for (int j = 0; j < a; ++j) {
            matrix[i] = new int[b];  // note: allocated inside the inner loop
            matrix[i][j] = c;
            c++;
            cout << matrix[i][j] << " ";
        }
        cout << endl;
    }
}
```

Maybe it's not the right way, but it works. Can you give me your feedback, please? If you agree with this way I will close this question and give the points. Thank you all.

`matrix = new int *[a];` is _inside_ a for loop, so it will overwrite matrix each time. Move that statement above the loop. After that, I think it looks good.

```cpp
void matrix()
{
    int a, b, c = 1;
    int *matrix;
    cout << "Insert the size of the Matrix.\n";
    cin >> a;
    b = a;
    matrix = new int[a * b];
    for (int i = 0; i < a; ++i) {
        for (int j = 0; j < a; ++j) {
            matrix[i * a + j] = c;
            c++;
            cout << matrix[i * a + j] << " ";
        }
        cout << endl;
    }
    delete [] matrix;
}
```

As mrjoltcola already pointed out, there are a few problems with that code, memory leaks not being the least of them. But note that you posted correct code earlier here: http:#24059085. Why change that?

Another question: is it possible to call another function inside of a function?
https://www.experts-exchange.com/questions/24290726/How-to-create-a-multi-dimension-array-in-C-Plus-Plus.html
In this example you will learn about sorting an ArrayList in Java. By default, an ArrayList keeps its elements in the order they were inserted. In Java, ArrayList extends AbstractList and implements the List interface. An ArrayList is created with an initial capacity and grows as elements are added. This example explains sorting an ArrayList using the Collections.sort() method; custom orderings can additionally be defined with Comparator and Comparable.

ArrayLists are dynamic in nature, so they are also called dynamic arrays. Standard arrays, by contrast, are not dynamic: once an array is created it cannot grow or shrink, as it is fixed in size.

ArrayList doesn't have its own sort() method. We can use the sort method of the Collections class, which sorts the collection.

Now, here is the example to sort an ArrayList in Java.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortArrayList {
    public static void main(String[] args) {
        List<String> list = new ArrayList<String>();
        list.add("D");
        list.add("G");
        list.add("A");
        list.add("XX");
        list.add("HHH");
        list.add("KK");
        list.add("abc");
        list.add("123");
        list.add("980");

        // before sort
        System.out.println("ArrayList before sort");
        for (String var : list) {
            System.out.println(var);
        }

        // sort the list
        Collections.sort(list);

        // after sort
        System.out.println("ArrayList after sort");
        for (String var : list) {
            System.out.println(var);
        }
    }
}
```

After sorting, the list is in the natural String order: "123", "980", "A", "D", "G", "HHH", "KK", "XX", "abc" (digits sort before uppercase letters, which sort before lowercase letters).
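The introduction mentions Comparator, but the example above only shows natural ordering. As an illustrative sketch (the class name and sample values are my own, not from the tutorial), here is how a Comparator imposes a custom order — in this case, case-insensitive:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortWithComparator {
    public static void main(String[] args) {
        List<String> list = new ArrayList<String>();
        list.add("Banana");
        list.add("apple");
        list.add("Cherry");

        // Natural String order is case-sensitive: uppercase sorts before lowercase.
        Collections.sort(list);
        System.out.println(list); // [Banana, Cherry, apple]

        // A Comparator imposes a custom order without touching the element type.
        Collections.sort(list, new Comparator<String>() {
            public int compare(String a, String b) {
                return a.compareToIgnoreCase(b);
            }
        });
        System.out.println(list); // [apple, Banana, Cherry]
    }
}
```

The built-in String.CASE_INSENSITIVE_ORDER comparator achieves the same result without writing the anonymous class.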
http://www.roseindia.net/java/beginners/java-arraylist-sort.shtml
Created on 2009-10-17 21:29 by joe.amenta, last changed 2014-07-03 05:58 by berker.peksag.

Steps to reproduce:

```shell
$ echo 'file("/some/file")' | python `which 2to3` -
```

(replace python with whichever Python executable version you wish to use, e.g. ~/python-trunk/python or /usr/local/bin/python3.2)

Expected result: anything referring to the fact that the "file" type no longer exists in Python 3.x, whether it be a warning or a refactoring.

Actual result:

```
RefactoringTool: No files need to be modified.
```

Just a report for now, I don't have the time to write a patch.

Patch attached. Unit test and documentation included. COMMITMSG: Adds a new fixer to lib2to3 which replaces the deprecated builtin "file" with "open".

Woops. Removed the extra comments. Updated patch attached.

Unfortunately this patch would also replace legitimate uses of a "file" function.

I thought the whole point was that file[1] was removed in 3.0[2]? Or are you saying that somebody may have shadowed file with a `def file(...)`? If that is the case, would it be reasonable to check like this?

```pycon
>>> file in list(__builtins__.__dict__.values())
True
>>> def file():
...     pass
...
>>> file in list(__builtins__.__dict__.values())
False
```

[1] -
[2] -

Yes, the idea was that it doesn't seem outlandish for someone to do:

```python
def file(something):
    do_stuff()
```

You can use lib2to3.fixer_util.is_probably_builtin for this... modified the patch and attached.
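The shadowing check the thread converges on can be sketched without lib2to3 (which has since been deprecated). This is an illustrative stand-alone version of the "is it probably the builtin?" idea, not the actual fixer's implementation:

```python
import ast

def calls_builtin_file(source):
    """Return True if `source` calls a name `file` that is never assigned
    or defined anywhere in the module (so it is probably the builtin)."""
    tree = ast.parse(source)
    # Any def/class named `file`, or any assignment to `file`, shadows the builtin.
    shadowed = any(
        (isinstance(node, (ast.FunctionDef, ast.ClassDef)) and node.name == "file")
        or (isinstance(node, ast.Name) and node.id == "file"
            and isinstance(node.ctx, ast.Store))
        for node in ast.walk(tree)
    )
    # Look for a plain call of the name `file(...)`.
    called = any(
        isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "file"
        for node in ast.walk(tree)
    )
    return called and not shadowed

print(calls_builtin_file('file("/some/file")'))               # True: probably the builtin
print(calls_builtin_file('def file(x):\n    pass\nfile(1)'))  # False: locally shadowed
```

A whole-module check like this is coarser than is_probably_builtin, which inspects the enclosing scopes of each individual use, but it captures the same intent.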
https://bugs.python.org/issue7162
Gitweb: ;a=commit;h=24037a8b69dbf15bfed8fd42a2a2e442d7b0395b
Commit: 24037a8b69dbf15bfed8fd42a2a2e442d7b0395b
Parent: 688340ea34c61ad12473ccd837325b59aada9a93
Author: Jeremy Fitzhardinge <[EMAIL PROTECTED]>
AuthorDate: Tue Jul 17 18:37:04 2007 -0700
Committer: Jeremy Fitzhardinge <[EMAIL PROTECTED]>
CommitDate: Wed Jul 18 08:47:42 2007 -0700

    Add nosegneg capability to the vsyscall page notes.

    Signed-off-by: Ian Pratt <[EMAIL PROTECTED]>
    Signed-off-by: Christian Limpach <[EMAIL PROTECTED]>
    Signed-off-by: Chris Wright <[EMAIL PROTECTED]>
    Signed-off-by: Jeremy Fitzhardinge <[EMAIL PROTECTED]>
    Acked-by: Zachary Amsden <[EMAIL PROTECTED]>
    Cc: Roland McGrath <[EMAIL PROTECTED]>
    Cc: Ulrich Drepper <[EMAIL PROTECTED]>
---
 arch/i386/kernel/vsyscall-note.S | 28 ++++++++++++++++++++++++++++
 1 files changed, 28 insertions(+), 0 deletions(-)

diff --git a/arch/i386/kernel/vsyscall-note.S b/arch/i386/kernel/vsyscall-note.S
index 52e0cbb..271f16a 100644
--- a/arch/i386/kernel/vsyscall-note.S
+++ b/arch/i386/kernel/vsyscall-note.S
@@ -12,3 +12,31 @@
 ELFNOTE_START(Linux, 0, "a")
 	.long LINUX_VERSION_CODE
 ELFNOTE_END
+
+#ifdef CONFIG_XEN
+
+/*
+ * 1 nosegneg
+ * to match the mapping of bit to name that we give here.
+ */
+
+/* Bit used for the pseudo-hwcap for non-negative segments. We use
+   bit 1 to avoid bugs in some versions of glibc when bit 0 is
+   used; the choice is otherwise arbitrary. */
+#define VDSO_NOTE_NONEGSEG_BIT 1
+
+ELFNOTE_START(GNU, 2, "a")
+	.long 1, 1<<VDSO_NOTE_NONEGSEG_BIT	/* ncaps, mask */
+	.byte VDSO_NOTE_NONEGSEG_BIT; .asciz "nosegneg"	/* bit, name */
+ELFNOTE_END
+#endif
https://www.mail-archive.com/git-commits-head@vger.kernel.org/msg17881.html
CC-MAIN-2017-30
refinedweb
243
55.64
I am trying to split my profile disk and my diff disk onto two different DFS shares. I have configured the GPO to specify the two different paths. Whenever a user logs in, the diff disks (for both the O365 and Profile containers) end up in the same folder as the profile disks.

In the profile-.log I see:

The log indicates that it sees the diff disk path setting from the GPO as \\domain.local\dfsnamespace\FSLogixRWDisks, but it creates the diff disk alongside the profile disk at \\domain.local\dfsnamespace\FSLogixProfiles instead.

FSLogix version is 2.9.7349.30108. Desktop is Windows 10 Enterprise, 1909.

Any ideas as to what I am missing? Thanks
https://social.msdn.microsoft.com/Forums/en-US/0d1ea33c-f9ed-49c8-8e1a-2b03c9455fdd/diff-disk-specifying-different-location-than-profile-disk?forum=FSLogix
CC-MAIN-2020-34
refinedweb
111
67.15
Intruder Alarm

You will need:

- LED
- Active piezo buzzer
- Jumper wires

Connect the laser sensor

Think of all those movies where a secret agent or thief has to get past some lasers guarding an object: break the beam and the alarm will go off. That’s what we’ll be doing here. For this tutorial, we’re using the laser sensor from the [Waveshare Sensors Pack](magpi.cc/wavesensors), available in the UK from The Pi Hut, and also sold separately, but any similar sensor should work in a similar way. It continually emits a laser beam, and its receiver only detects a reflected beam of the exact same wavelength (650 nm), so it won’t be triggered by other visible light. When it detects the beam, its digital pin outputs 1; when the beam is broken, it’s 0.

With the power turned off, connect the laser sensor to Raspberry Pi as in Figure 1. We’re powering it from Raspberry Pi’s 3V3 pin, grounding it with a GND pin (both via the breadboard side rails), and the digital output (marked DOUT on the sensor) is going to GPIO 21.

Warning! The laser sensor used here continually emits a laser beam. Be very careful not to point it towards anyone’s head as it could potentially damage their eyesight. More information on laser safety.

Laser positioning

With the laser sensor wired up, turn on Raspberry Pi. You should see the sensor’s red power LED (on the right) light up if it’s connected correctly. It should also be emitting a laser beam from the metal tube, so be careful never to look straight into it. Aim the beam at a nearby wall (up to 1.5 m away) and check that its left LED (marked DAT) is lit, confirming that it is detecting the laser beam. You may need to adjust the vertical and horizontal tilt of the sensor, or move it closer to the wall. For the finished alarm, we recommend you place the laser sensor fairly near the floor so that anyone walking through it will break the beam and it won’t be anywhere near their eyes.
Laser test

To begin, we’ll create a simple Python program, as in the laser_test.py listing, to read the sensor’s digital output and print out a message to show when the beam is broken. From the desktop menu, go to Programming and open the Thonny IDE to start coding.

    from gpiozero import Button

    laser = Button(21)
    msg = ""

    while True:
        if laser.value == 0:
            msg = "Intruder!"
        else:
            msg = "All clear"
        print(msg, end = "\r")

As before, we’re using the GPIO Zero library; at the top of the code, we import the Button method from it. We’ll use this to sense when the digital output from the sensor is high, in effect the equivalent of a push-button being pressed. As it’s connected to GPIO 21, we assign the laser object to this with laser = Button(21).

In an infinite while True: loop, we check whether the pin is low (if laser.value == 0), which means the beam has been broken, and set the message (msg variable) that we’ll be printing to the Shell area accordingly. In our print statement, we add the end = "\r" parameter so the message is always printed on the same line.

Run the laser_test.py code and then try breaking the beam with your hand and see if the message changes to ‘Intruder!’. You may find that it works better with your hand more distant from the sensor. Even if the DAT LED only flickers off momentarily, that should be enough to trigger our alarm later.

Add a sound sensor

Now that we have our laser sensor working, let’s make our setup even more intruder-proof by adding a sound sensor. We’re using a Waveshare sound sensor for this, as featured in the Sensors Pack, but other similar sensors are available, along with USB mics. Our sensor has pins for analogue and digital outputs, but we only need the digital output for our alarm. With the power turned off, we connect that pin (DOUT) to GPIO 14, and the VCC and GND pins to 3V3 and GND (shared with the laser sensor via the breadboard side rails), as in Figure 1.
Turning Raspberry Pi back on, you’ll see the power LED on the left of the sound sensor is lit up. Make a loud noise and you should see the LED on the right light up to show it has been detected.

Sound test

Let’s create a similar program to test the sensor. In the sound_test.py code listing, we assign the sound object to GPIO 14 with sound = Button(14). Again, we use the Button method to detect when the pin is triggered.

    from gpiozero import Button

    sound = Button(14)
    msg = ""

    while True:
        if sound.value == 1:
            msg = "Intruder!"
        else:
            msg = "All clear"
        print(msg, end = "\r")

This time in our while True: loop, we test whether the pin is high (there is a loud enough noise to trigger the sound sensor). As before, this determines which message (in the msg variable) is printed to the Shell area.

Make a noise

Now it’s time to test our sound sensor to check it’s wired up and working correctly. Run the sound_test.py Python code and then make a loud noise to make the DAT LED on the right of the sensor light up. You may find that you need to be noisy for a second or so and that there’s a short delay before the message changes briefly from ‘All clear’ to ‘Intruder!’. If you’re having trouble triggering it, try altering the sensitivity of the sound sensor by adjusting the lower potentiometer screw (marked D for digital) on it: turning it anticlockwise increases the sensitivity, but don’t overdo it or the DAT LED will be lit up constantly.

Add a visual alert

If your sensors and code are working correctly, it’s time to move on to the next part. Printed messages are all very well, but for a proper alarm you need a visual and/or audible alert. As in last month’s guide, we’ll add a standard red LED for a visual alert. Ours is 5 mm, but you can use a different size. As always, a resistor is needed to limit the current to the LED to ensure it doesn’t receive too much and potentially burn out.
With the LED placed in the breadboard, with legs in different unconnected rows, we connect a 330 Ω resistor between the negative (shorter) leg and the ground rail of the breadboard. The positive (bent, longer) leg is connected to GPIO 16 on Raspberry Pi, as in the Figure 1 wiring diagram.

Sound the alarm

For our audible alert, we’ll use a small active piezo buzzer to make a beeping noise. You could use something else to sound the alarm. The buzzer has a longer positive leg and a shorter negative one; their positions may also be marked on its top. Connect the negative pin to the breadboard’s ground rail and the positive pin to GPIO 25 (as in Figure 1).

Alarm code

With everything wired up as in Figure 1, you’re now ready to program your intruder alarm. In the final code, intruder_alarm.py, we add LED and Buzzer to the gpiozero imports at the top. We also import sleep from the time library, to use as a delay.

    from gpiozero import Button, LED, Buzzer
    from time import sleep

    laser = Button(21)
    sound = Button(14)
    led = LED(16)
    buzzer = Buzzer(25)

    def alarm():
        print("Intruder alert!", end = "\r")
        for i in range(10):
            led.toggle()
            buzzer.toggle()
            sleep(0.5)

    while True:
        if laser.value == 0 or sound.value == 1:
            alarm()
        else:
            print("All clear ", end = "\r")
            led.off()
            buzzer.off()

If you wanted, you could create a separate function with a different message for each alarm (like our fire and gas alarm last issue), but this time we’ve kept it simple with a single alarm function, as we’re not bothered how an intruder is detected. When triggered, this executes a for loop which toggles the LED and buzzer on and off a set number of times, with a 0.5-second sleep delay each time.

In a while True: loop, we check the pin values from both sensors and trigger the alarm when the laser beam is broken (laser.value == 0) or the sound threshold is exceeded (sound.value == 1). If neither is triggered, we show the default message and ensure the LED and buzzer are turned off.
Test the alarm

Now to test the alarm system. As before, try breaking the laser beam: the LED should then blink and the buzzer will beep. Do the same for the sound sensor by making a prolonged loud noise; the alarm will trigger again. Each time, the ‘Intruder!’ message will show in the Shell area.

Taking it further

We now have a simple intruder alarm. To improve it, you could add extra sensors such as a PIR or even a camera to detect movement. You could trigger a larger light and/or play an alert sound or spoken message on a connected speaker. You could also send an email or push notification alert to your phone whenever the alarm is triggered.

Next time we’ll create a weather station using temperature, humidity, and ultraviolet light sensors. See you then.
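If you want to check the alarm behaviour without the hardware attached, the trigger decision from intruder_alarm.py can be pulled out into a plain function. This is just a sketch for testing on any machine; the helper name alarm_triggered is ours, not part of the tutorial code:

```python
def alarm_triggered(laser_value, sound_value):
    # Mirrors the condition in the while loop: the laser sensor's digital
    # output drops to 0 when the beam is broken, and the sound sensor's
    # output rises to 1 on a loud noise. Either one should sound the alarm.
    return laser_value == 0 or sound_value == 1

assert alarm_triggered(0, 0)      # beam broken
assert alarm_triggered(1, 1)      # loud noise
assert not alarm_triggered(1, 0)  # all clear
```

Separating the decision from the GPIO calls like this also makes it easy to extend later, for example when adding the PIR or camera inputs suggested above.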
https://magpi.raspberrypi.com/articles/make-an-intruder-alarm-with-raspberry-pi
CC-MAIN-2022-05
refinedweb
1,594
79.6
Opened 7 years ago
Closed 2 years ago

#3050 defect closed fixed (fixed)

t.p.basic.LineReceiver StackOverflow

Description

If a LineReceiver-based protocol switches back and forth between line and data modes several times on the same packet, it can hit the maximum stack recursion depth. Attached is a patch to fix that in LineReceiver, with a unit test that fails with trunk and passes with the patch applied.

Attachments (3)

Change History (24)

Changed 7 years ago by ghazel

comment:1 Changed 5 years ago by ivank

comment:2 Changed 5 years ago by ivank
- Cc ivank added

comment:3 Changed 4 years ago by ghazel
- Keywords review added

comment:4 Changed 4 years ago by ghazel
- Owner teratorn deleted

comment:5 Changed 4 years ago by rwall
- Keywords review removed
- Owner set to ghazel

ghazel, did you mean to put this up for review? Your patch was added 2 years ago and your comment 15 months ago mentions a new attempt to fix this problem in #3803. Did you mean to mark this as a duplicate of #3803? If the existing patch _is_ up for review, it'll need a news file () and tests fixing:

    trial twisted/test/test_protocols.py
    ...
    [ERROR]
    Traceback (most recent call last):
      File "/home/richard/Projects/Twisted/trunk/twisted/test/test_protocols.py", line 338, in testStackRecursion
        t = StringIOWithoutClosing()
    exceptions.NameError: global name 'StringIOWithoutClosing' is not defined

comment:6 Changed 4 years ago by ghazel

Changed 4 years ago by ghazel
Fixed unittest, and added News file

comment:7 Changed 4 years ago by ghazel
- Keywords review added
- Owner ghazel deleted

Here you go. Please review linereceiver_depth.2.diff

comment:8 Changed 4 years ago by ivank
- Keywords review removed

I believe this changes the behavior of LineReceiver.setLineMode when it is called not under LineReceiver.dataReceived (i.e. called by some application code when it feels like it). There are several places in Twisted that call setLineMode like this.
I think the setLineMode implementation should look more like this:

    if self._busy:
        self.__buffer += extra
    else:
        self.dataReceived(extra)

(and of course LineReceiver has to set _busy when appropriate; something like this, and also when calling self.lineReceived(...))

    self._busy = True
    try:
        why = self.rawDataReceived(line)
    finally:
        self._busy = False

I did not review the rest of the change.

comment:9 Changed 4 years ago by ivank

Here's a new test for the above problem I described:

    def test_setLineModeWhenNotBusyCausesDelivery(self):
        """
        If setLineMode is called *not* under a rawDataReceived or
        lineReceived callback, the data passed to extra= is immediately
        delivered (if possible).
        """
        class Receiver(basic.LineReceiver):
            MAX_LENGTH = 4
            def lineReceived(self, line):
                #print 'lines was', self.lines
                lines.append(line)

        for setRawFirst in (True, False):
            lines = []
            protocol = Receiver()
            if setRawFirst:
                protocol.setRawMode()
            protocol.setLineMode(extra='test\r\nthis')
            self.assertEqual(lines, ['test'])

comment:10 Changed 4 years ago by ghazel

Does anything actually call setLineMode with extra data outside of a rawDataReceived callback and expect it to work? The comment says: "…". It even says to *not* call this function from within a lineReceived callback. Could we simply extend the comment to say not to call it from outside a rawDataReceived callback at all?

comment:11 Changed 4 years ago by ivank

I don't think the compatibility policy allows a change to its non-buggy behavior, even though it's a subtle change that doesn't cause any of Twisted's tests to fail (last I checked).

comment:12 Changed 4 years ago by ghazel

Well, I disagree. I'm abandoning this effort. If someone else would like to try to fix it to handle imaginary, undocumented cases that's fine with me.
comment:13 Changed 4 years ago by <automation>

comment:14 Changed 2 years ago by exarkun

Changed 2 years ago by graham
Updated version of the previous patch that accommodates calling setLineMode from outside a rawDataReceived method.

comment:15 Changed 2 years ago by graham

I hit this in production and was amazed to find this patch languishing for 5 years when it would have saved me (and probably a lot of other people) a huge headache. I agree with ghazel that the scenario presented for rejecting his patch is unlikely, but frankly I'm not willing to leave a monkey patch in production code just so it doesn't blow up under load. So here's a patch that might appeal to you better.

I feel like the discussion on this patch to date treated it as an academic issue, but the reality is that I doubt you could even use the memcache driver included with Twisted under load without hitting this problem, and it's a bit of a pain to diagnose. It's also a genuine DoS target, since it'd be fairly easy to craft packet patterns that would trigger this behaviour on any protocol that uses line-mode switching.

comment:16 Changed 2 years ago by graham
- Keywords review added

comment:17 Changed 2 years ago by therve
- Branch set to branches/linereceiver-stack-3050

comment:18 Changed 2 years ago by therve

I applied the latest patch in a branch, with some minor fixes like py3 handling and coding standard.

comment:19 Changed 2 years ago by graham

Nice. Those changes seem entirely reasonable to me.

comment:20 Changed 2 years ago by glyph
- Keywords review removed
- Owner set to therve

These changes look basically good to me. However, the diff covers all the lines in dataReceived, but there are still a bunch of lines - mostly the line-length-exceeded case - that are still uncovered. Can you write a test or two to hit those lines, then commit? It should be pretty straightforward, so no need for a re-review. Thanks to everybody who kept this ticket alive and got it to the finish line :).
comment:21 Changed 2 years ago by therve
- Resolution set to fixed
- Status changed from new to closed

diff of the LineReceiver patch and unit test
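The fix that eventually landed follows the shape ivank sketched in comment 8: a busy flag plus a pending buffer, so a re-entrant call queues its data for the outer call's loop instead of recursing. A standalone illustration of that pattern, with no Twisted dependency (the class and method names here are ours, not Twisted's actual code):

```python
class GuardedReceiver:
    """Delivers CRLF-terminated lines with bounded stack depth."""

    def __init__(self):
        self._busy = False
        self._pending = b""
        self.lines = []

    def data_received(self, data):
        # Always append first; if a callback higher up the stack is already
        # draining the buffer, return immediately rather than recursing.
        # The outer invocation's while-loop will consume the queued bytes,
        # so stack depth stays constant per packet.
        self._pending += data
        if self._busy:
            return
        self._busy = True
        try:
            while b"\r\n" in self._pending:
                line, self._pending = self._pending.split(b"\r\n", 1)
                self.lines.append(line)
        finally:
            self._busy = False

r = GuardedReceiver()
r.data_received(b"alpha\r\nbeta\r\ngam")
r.data_received(b"ma\r\n")
assert r.lines == [b"alpha", b"beta", b"gamma"]
```

The try/finally mirrors the snippet in comment 8: the guard is cleared even if a callback raises, so a single exception can't wedge the receiver in "busy" state.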
http://twistedmatrix.com/trac/ticket/3050
CC-MAIN-2014-42
refinedweb
995
60.35
A simple tool for building graph databases from multiple relational sources.

Project description

Autoneo

Autoneo makes it easy to build complex graphs in Neo4j. This tool allows users to represent their graph structure in a JSON schema, with the JSON containing the properties to load into Neo4j but also how to extract the information from a data source.

Using autoneo

Prerequisites

Set up Neo4j to allow imports from external sources. This allows data from CSV files to be loaded into Neo4j (which is how this tool does its job!).

- Go to the config file for your instance. (You can find this location by starting Neo4j from the command line. There will be a line of output showing the location of the config directory. The config file is inside it.)
- Comment out this line: dbms.directories.import=import (add a #)
- Uncomment this line: #dbms.security.allow_csv_import_from_file_urls=true (remove the #)

Install autoneo

    pip install autoneo

Using autoneo in code

You can use autoneo like:

    from autoneo import builder

    builder.build(
        "path/to/config.json",
        "path/to/data_dir",
        "path/to/query_dir",
        "path/to/csv_dir")

See config.json for an example of how to define a graph. Currently we support extraction from sqlite3 databases and CSV files. Support for other database formats is on the way.

The data directory will contain .db files. The query dir is where all scripts for retrieving data are placed. The CSV dir is where you can drop a CSV for when you don't need a database file/query file. If you provide those, then a CSV will be generated by autoneo and placed in this directory.

Working on the project

To build: python setup.py sdist bdist_wheel

To upload: python -m twine upload dist/* (token name: pypi_all_purpose)

To run tests: nosetests (to run all tests)
https://pypi.org/project/autoneo/
CC-MAIN-2022-05
refinedweb
323
66.64
when building (any branch) on arm64 the file mono/mono/arch/arm64/arm64-codegen.h contains:

    #include "../../../../mono-extensions/mono/arch/arm64/arm64-codegen.h"

but this path does not exist, and make stops with an error message. Since a package "mono-extensions" does not exist, this #include statement may be wrong.

arm64 support in mono is currently not OSS.

Seriously, NO. I don't consider this answer a resolution and it's not a proper answer. A proper solution would be adding a comment there, or in the installation docs, explaining why the arm64 support is missing, and probably where one can obtain the required extension. The extension seems to exist and to be available in the OS X products of Xamarin (actually, of Microsoft's new Mono division, SCNR).

it works! the actual version 4.5.0 compiles without any problem on an arm64 machine

Seriously, I have a hard time believing that. Just tested, with an aarch64 cross-compiler (some Yocto SDK), and:

    CC       mono-networkinterfaces.lo
    In file included from ../../mono/utils/mono-stack-unwinding.h:10:0,
                     from ../../mono/utils/mono-threads.h:14,
                     from mono-mmap.c:36:
    ../../mono/utils/mono-context.h:288:43: fatal error: mono/arch/arm64/arm64-codegen.h: No such file or directory
     #include <mono/arch/arm64/arm64-codegen.h>
                                               ^
    compilation terminated.

it really works. But you need to get it from the right sources. I did this on an Odroid-C2 and a Lemaker HiKey (both 64-bit Ubuntu, and both work well):

    git clone git://github.com/mono/mono.git
    cd mono
    ./autogen.sh --prefix=/usr/local
    make
    sudo make install

and mono works. I have had it running for two days on 64-bit and it works without any problems.

Confirming that a build from git is possible (using monolite to bootstrap). arm64-codegen.h exists in both the master and 4.5.0 branches, but appears to have been stripped from the nightlies...

Should be fixed by mono master 096d2488cc881406443331df5050ec8e87ab335b.
https://bugzilla.xamarin.com/show_bug.cgi?format=multiple&id=38223
CC-MAIN-2018-30
refinedweb
334
61.12
Extracting just Month and Year from Pandas Datetime column (Python)

I have a DataFrame, df, with the following column:

    df['ArrivalDate'] =
    ...
    936    2012-12-31
    938    2012-12-29
    965    2012-12-31
    966    2012-12-31
    967    2012-12-31
    968    2012-12-31
    969    2012-12-31
    970    2012-12-29
    971    2012-12-31
    972    2012-12-29
    973    2012-12-29
    ...

The elements of the column are pandas.tslib.Timestamp. I want to just include the year and month. I thought there would be a simple way to do it, but I can't figure it out.

Here's what I've tried:

    df['ArrivalDate'].resample('M', how = 'mean')

I got the following error:

    Only valid with DatetimeIndex or PeriodIndex

Then I tried:

    df['ArrivalDate'].apply(lambda(x):x[:-2])

I got the following error:

    'Timestamp' object has no attribute '__getitem__'

Any suggestions?

Edit: I sort of figured it out.

    df.index = df['ArrivalDate']

Then, I can resample another column using the index. But I'd still like a method for reconfiguring the entire column. Any ideas?

You can directly access the year and month attributes, or request a datetime.datetime:

    In [15]: t = pandas.tslib.Timestamp.now()
    In [16]: t
    Out[16]: Timestamp('2014-08-05 14:49:39.643701', tz=None)
    In [17]: t.to_datetime()
    Out[17]: datetime.datetime(2014, 8, 5, 14, 49, 39, 643701)
    In [18]: t.day
    Out[18]: 5
    In [19]: t.month
    Out[19]: 8
    In [20]: t.year
    Out[20]: 2014

One way to combine year and month is to make an integer encoding them, such as 201408 for August, 2014. Along a whole column, you could do this as:

    df['YearMonth'] = df['ArrivalDate'].map(lambda x: 100*x.year + x.month)

or many variants thereof. I'm not a big fan of doing this, though, since it makes date alignment and arithmetic painful later and especially painful for others who come upon your code or data without this same convention. A better way is to choose a day-of-month convention, such as final non-US-holiday weekday, or first day, etc., and leave the data in a date/time format with the chosen date convention.
The calendar module is useful for obtaining the number value of certain days such as the final weekday. Then you could do something like:

    import calendar
    import datetime

    df['AdjustedDateToEndOfMonth'] = df['ArrivalDate'].map(
        lambda x: datetime.datetime(
            x.year,
            x.month,
            max(calendar.monthcalendar(x.year, x.month)[-1][:5])
        )
    )

If you happen to be looking for a way to solve the simpler problem of just formatting the datetime column into some stringified representation, for that you can just make use of the strftime function from the datetime.datetime class, like this:

    In [5]: df
    Out[5]:
                 date_time
    0  2014-10-17 22:00:03

    In [6]: df.date_time
    Out[6]:
    0   2014-10-17 22:00:03
    Name: date_time, dtype: datetime64[ns]

    In [7]: df.date_time.map(lambda x: x.strftime('%Y-%m-%d'))
    Out[7]:
    0    2014-10-17
    Name: date_time, dtype: object

From: stackoverflow.com/q/25146121
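On current pandas versions (the `.dt` accessor landed in 0.15, after this question was asked), the per-element `map` calls above can be replaced with vectorized column operations. A brief sketch, assuming `df['ArrivalDate']` is a datetime64 column:

```python
import pandas as pd

df = pd.DataFrame({"ArrivalDate": pd.to_datetime(["2012-12-31", "2012-12-29"])})

# Vectorized access to the date components via the .dt accessor
df["Year"] = df["ArrivalDate"].dt.year
df["Month"] = df["ArrivalDate"].dt.month

# Or keep a proper monthly Period instead of an integer like 201212,
# which preserves date alignment and arithmetic
df["YearMonth"] = df["ArrivalDate"].dt.to_period("M")
```

`dt.to_period("M")` addresses the answer's objection to integer encodings: periods still sort, group, and offset like dates rather than like opaque numbers.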
https://python-decompiler.com/article/2014-08/extracting-just-month-and-year-from-pandas-datetime-column-python
CC-MAIN-2019-26
refinedweb
532
67.45
In this tutorial you will learn how to create variables in the Java programming language. Variables are used to store data in a program. A variable is like a ‘container’ that can store a value that can be used within the program. These values can be accessed, used and modified throughout the code. Examples of values that you might need to store in a program include the score in a game, the user’s name, a password, or numbers used in calculations. Variables can store values of different data types – we’ll look at these in the next tutorial.

Variables have three important properties:

- Variable name (identifier) – the actual name of the variable, e.g. score, username, age, price. Each variable must have a unique name. Some variable names can’t be used because they clash with a reserved word used elsewhere in the language for other things, such as a function. Variable names often cannot contain spaces or start with digits (rules vary in different languages).
- Data type – the type of data that the variable will be storing, such as text or numbers. There are special names for different data types that we will look at in the next tutorial.
- Value – the actual information being stored in the variable, such as “Bob” for a variable called firstName or 26 for a variable called age.

The example below shows a new variable called message being created in the Java language. The variable is of the String data type (text that can contain letters, numbers and different characters) and is given an initial value of “Hello”. When you create a variable in a program you declare the variable. This means you give it a name and specify the data type. You may decide not to give it a value at that point in the program and give it a value later on, or you may decide to initialise the variable with a value (that can also be changed later). In the example above, the variable is declared and given an initial value all in one line of code.
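To make the declare-then-assign option concrete, here is a minimal sketch (the class name DeclareThenAssign is ours, not from the tutorial) showing a variable declared without a value and initialised later:

```java
public class DeclareThenAssign {
    public static void main(String[] args) {
        String greeting;              // declaration: name and data type only
        greeting = "Hello";           // value assigned later in the program
        System.out.println(greeting); // prints: Hello
    }
}
```

Both styles produce the same variable; initialising at the point of declaration is simply more compact when you already know the starting value.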
Sample code

The sample code below shows a variable called ‘message’ being declared and given a value of “Hello”. The value is displayed on the screen to the user. Then, the value is changed to “Hello there” and this is displayed as output on the screen. Lastly, the message is displayed with the text “friend.” added on the end, so that the message being displayed is “Hello there friend.”. This is an example of concatenation (a fancy word for joining), where two strings are joined together.

You might notice that some lines of code begin with two forward slashes (//). These are comments in the code explaining what is going on. Comments are not carried out as instructions in the code but are used to annotate your code with explanations of what the code is meant to do; you can also use them to add information about the author of the code, the program’s purpose, or when it was created/modified.
https://www.codemahal.com/video/variables-in-java/
CC-MAIN-2020-24
refinedweb
563
61.36
I want to scrape data from the site, but the site does not let me in.

    import requests

    url = '…'   # login page URL (elided in the original post)
    url2 = '…'  # target page URL (elided in the original post)

    payload = {
        'username': 'log',
        'password': 'pass'
    }

    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'}

    with requests.session() as c:
        c.post(url, headers=headers, data=payload)
        r = c.get(url2, headers=headers)
        print(r.text)

This prints the authorization page. What should I do so that the site lets me through?

No way; the site does not remember the session this way. Through Selenium everything works fine.

Answer 1, Authority 100%

Homework tasks can be scraped using the API of their mobile application — use the ruobr_api module:

    pip install ruobr_api

    from ruobr_api import Ruobr

    r = Ruobr('username', 'password')
    print(r.getHomework("2020-12-18", "2020-12-31"))
    # The first date is today, the second is usually two weeks later
    # [Lesson(id=190272322, topic='Lesson topic', task=Task(id=24898438, title='Task title', doc=False, requires_solutions=False, deadline=datetime.date(2020, 12, 18), test_id=None, type='group'), time_start=datetime.time(9, 45), date=datetime.date(2020, 12, 18), subject='Subject', time_end=datetime.time(10, 25), staff='Teacher surname, name, patronymic'), ...]

Link to the source repository:
https://computicket.co.za/python-cant-go-to-cabinet-ruobr-ru-via-requests/
CC-MAIN-2022-21
refinedweb
206
71
Hi guys!

We are running an NPO and as such we also have a Facebook fan page. I am now looking for a possibility to add Facebook information to a Confluence page. I have had some success with RSS feeds, but it does not give you the same "look and feel" as Facebook has. Any ideas from your side how to show a Facebook fan page in Confluence?

thanks, Christian

"A nice option would be"

Why reload jQuery at all? This works fine with jQuery 1.5.2 as used by Confluence 4. Just prefix/namespace the jQuery object. This should work:

    <script src="/path/to/jquery.neosmart.fb.wall.js" type="text/javascript"></script>
    <script type="text/javascript">
    AJS.toInit(function(){
        AJS.$('#example1').fbWall({
            id: '162253007120080',
            accessToken: '206158599425293|7809823973348bcf8cd72f6d.1-100000221135225|BW9n2eoyL7EYvJs7GEmv61NbBFk'
        });
    });
    </script>
    <div id="example1"></div>

Hi David!

Ok, now I do not understand anything anymore... If I load the script "jquery.neosmart.fb.wall.js" inside a page it works; if I load it via custom HTML in the admin panel it does not work at all. Anyway, thanks for your great help! Christian

I'm not sure what you're doing to get that. I'd bundle a variation of the above code snippet in a user macro and place it on the appropriate pages -- likely turning the id & accessToken into macro parameters.

If you just want to include the fan page, you could add it to the page as an iframe. Seems a bit lame, but not knowing your requirements, it may be sufficient.

Hi! The iframe macro does not work once you try to use it. It just shows a blank page -- all other addresses work without any problem. Isn't there a Facebook fan page widget which you can simply include?

You are absolutely right - unfortunately the Facebook social gadgets do not show the wall post information. What would make sense to see is what you can also get via an RSS feed from a fan page.
A nice option would be … But unfortunately loading jQuery manually leads to problems with RefinedWiki.
https://community.atlassian.com/t5/Confluence-questions/Include-facebook/qaq-p/316511
CC-MAIN-2019-18
refinedweb
363
75.71
OK, I'm running into a problem when I publish my files to my Longhorn server. I am developing this project in VS 2008 "Orcas". The error is:

    Could not load type 'System.Web.UI.IScriptManager' from assembly 'System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.

The line of code that the error happens on is:

    <asp:ScriptManager></asp:ScriptManager>

I have installed 3.5 for the LINQ information, but I can't find where to select it on my app pool, or maybe it's something else that is wrong. Any information on this is greatly appreciated. I've been down for a while and need to get moving past this error quickly.

Thanks, Josh

Just a shot in the dark. I am using the same server platform, but on a .NET 2.0 site, with no issues. I also use the ScriptManager quite a bit. Are you sure that the DLL containing the System.Web assembly installed in the GAC on the server is the correct version? Check it; it is possible some things have been deprecated and you need to install it, however according to how Microsoft publishes the data this should not be an issue.

Also check in IIS 7 that the version of ASP.NET for the particular site is correct. This has solved some issues for me in the past real quick. I would attempt to put the version of that DLL you are running in development into the GAC, using the correct public key, if it is not already there. This error means that the DLL contains the namespace but not the class you're looking for.
https://www.webdeveloper.com/forum/d/165206-longhorn-and-vs-2008-publishing
CC-MAIN-2018-17
refinedweb
278
75.91
This blueprint adds the basic tag catalog back after it was deferred from Juno due to time constraints. The original, approved spec included supporting tag libraries in the metadata definitions catalog. However, at the end of Juno, the original spec was updated to remove tags since they weren’t implemented. This spec is actually a reduction of the tag concepts in the original approved Juno spec. In the original Juno spec, tags had a dynamic hierarchy capability. This spec does not include the tag hierarchy in order to simplify this spec. That aspect of tags will be deferred to a later spec. The implemented Juno spec is here for reference:

A challenge with using OpenStack is discovering, sharing, and correlating tags across services and different types of resources. We believe this affects both end users and administrators. For example, a cloud operator or vendor may have a predefined set of “tags” they want to be used as a starting point for images and instances. Currently, OpenStack does not have a facility for the cloud operator to include that base set of tags. This means that every deployment and every project may end up with its own disparate set of tags. This leads to inconsistencies, but also is extra hassle for end users who end up reinventing all the “tags” in every project. For example, is the tag “postgres” the same as “PostgreSQL”? If a base library of tags is used, any user typing “pos” would be prompted with the one that already exists in the tag library and would choose it. Future searches based on tags would ensure consistent results.

Terminology

The term metadata can become very overloaded and confusing. This proposed enhancement is about the additional metadata that is set as “tags” (name only) across various artifacts and OpenStack services. Different APIs may use tags and key / value pairs differently. Tags typically are not used to drive runtime behavior.
However, key / value pairs are often used by the system to potentially drive runtime behavior, such as scheduling, quality of service, or driver behavior. A few examples of metadata today:

We are proposing enhancements to the Metadata Definitions Catalog. The following subsections detail the enhancements to the catalog.

A catalog of possible tags that can be used to help ensure tag name consistency across users, resource types, and services. So, when a user goes to apply a tag on a resource, they will be able to either create new tags or choose from tags that have been used elsewhere in the system on different types of resources. For example, the same tag could be used for Images, Volumes, and Instances. Tags are not case sensitive (BigData is equivalent to bigdata but is different from Big-Data).

A key use case is the collaboration on tags using a common catalog. This is complementary to tags being added ad-hoc across all services. We think the metadata API could also be backed by a search indexer across services to include ad-hoc metadata as well as defined metadata. However, that is not the focus of this blueprint.

This will use a relational database and exist in the same database as the existing Glance Metadata Definitions Catalog. It will be additive to the existing schema. The following DB schema is the initial suggested schema. We will improve it and take comments during code review. Constraints are not shown for readability.

Suggested Basic Schema:

CREATE TABLE `metadef_tags` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `namespace_id` int(11) NOT NULL,
    `name` varchar(80) NOT NULL,
    `created_at` timestamp NOT NULL,
    `updated_at` timestamp
)

This will not include tag descriptions in this revision. In the REST API everything is referred to by namespace and name rather than synthetic IDs. This helps to achieve portability (import / export using JSON). APIs should allow coarse grain and fine grain access to information in order to control data transfer bandwidth requirements.
Working with Namespaces

Basic interaction is:
- Get list of namespaces with overview info based on the desired filters.
- Get tags

Common Response Codes
- Create Success: 201 Created
- Modify Success: 200 OK
- Delete Success: 204 No Content
- Failure: 400 Bad Request with details.
- Forbidden: 403 Forbidden
- Not found: 404 Not Found if the specific entity is not found

API Version

All URLs will be under the v2 Glance API. If it is not explicitly specified, assume /v2/<url>

A namespace may optionally contain the following in addition to the basic fields:
- resource_types
- properties
- objects

This spec adds Tags.

GET /metadefs/namespaces/{namespace}/tags

Filter by adding query parameters:
- limit = Use to request a specific page size. Expect a response to a limited request to return between zero and limit items.
- marker = Specifies the name of the last-seen tag. The typical pattern of limit and marker is to make an initial limited request and then to use the name of the last tag from the response as the marker parameter in a subsequent limited request.

Design note: We want a format that allows for additional information such as a description to be added without changing the base response. For this reason, we used a dictionary for each tag rather than just a flat list of tags.

Example Body:

{
    "tags": [
        { "name": "Databases" },
        { "name": "BigData" },
        { "name": "MySQL" },
        { "name": "PostgreSQL" },
        { "name": "MongoDB" }
    ]
}

POST /metadefs/namespaces/{namespace}/tags/
POST /metadefs/namespaces/{namespace}/tags/{tag}
DELETE /metadefs/namespaces/{namespace}/tags/{tag}
DELETE /metadefs/namespaces/{namespace}/tags

We intend to expose this via Horizon and are working on related blueprints. Update python-glanceclient as needed. None anticipated. This is expected to be called from Horizon when an admin wants to annotate tags onto things like images and instances. This API would be hit for them to get available tags or create new ones.
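The limit/marker paging pattern described above can be sketched in a few lines. This is an illustrative model of the client-visible behavior, not Glance's actual implementation; `page_tags` is a hypothetical helper name.

```python
# Illustrative model of limit/marker paging over a tag list.
# `page_tags` is a hypothetical helper, not part of Glance itself.
def page_tags(tags, limit, marker=None):
    """Return up to `limit` tags that come after the tag named `marker`."""
    names = [t["name"] for t in tags]
    start = names.index(marker) + 1 if marker in names else 0
    return tags[start:start + limit]

catalog = [{"name": n} for n in
           ["Databases", "BigData", "MySQL", "PostgreSQL", "MongoDB"]]

first = page_tags(catalog, limit=2)
# Pass the name of the last-seen tag as the marker for the next page.
second = page_tags(catalog, limit=2, marker=first[-1]["name"])
```

A request past the final tag simply returns an empty page, which is how a client knows it has reached the end.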
DB Schema Creation for new API. Default / sample tag libraries will be checked into Glance. Deployers can customize these and provide additional definition files suitable to their cloud deployment. glance-manage will include loading tags.

Unit tests will be added for all possible code, with a goal of being able to isolate functionality as much as possible. Tempest tests will be added wherever possible.

Youtube summit recap of the Graffiti Juno POC demo that included tags. Current glance metadata definition catalog documentation.

Simple application category tags (no hierarchy): Images, volumes, and software applications can be assigned to a category. Similarly, a flavor or host aggregate could be “tagged” as supporting a category of application, such as “BigData” or “Encryption”. Using the matching of categories, flavors or host aggregates that support that category of application can be easily paired up.

Note: If a resource type doesn't provide a “Tag” mechanism (only key value pairs), a blueprint should be added to support tags on that type of resource. In lieu of that, a key of “tags” with a comma separated list of tags as the value could be set on the resource.

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://specs.openstack.org/openstack/glance-specs/specs/kilo/metadefs-tags.html
24 February 2010 13:54 [Source: ICIS news] LONDON (ICIS news)--Polypropylene (PP) buyers in Europe will be under heavy pressure to accept higher prices in March as product availability remains restricted, largely due to propylene constraints that have been exacerbated by recent strike action in France, sources said on Wednesday. The March monthly propylene contract price was settled on 23 February at €910/tonne ($1,247/tonne), up by €35/tonne from the February contract price. INEOS said it planned to increase March PP prices by €70/tonne from February. “Our aim is to regain margin,” said a source with INEOS. “We are not happy with the increases we have achieved so far this year.” “Our margins are squeezed, mainly due to the high cost of spot propylene, and this sort of increase reflects the tight situation in PP at the moment,” said the INEOS source. PP prices had increased by €150-170/tonne in January and February combined, leaving homopolymer injection net prices in the range of €1,020-1,050/tonne FD (free delivered) NWE (northwest Europe). Propylene monthly contracts rose by €125/tonne in the same period. PP buying was done strictly on a hand-to-mouth basis in Europe, as prices were high and expectations of lower prices were rife due to new capacities coming on stream in the Middle East. The new plants were slow to achieve full capacity, however, and European buyers were still in the hands of domestic producers. “If buyers are buying on contract, they will have to pay higher prices in March. The market is tight and the likes of INEOS and Total will dictate market terms,” said a trader. “That tightness will go as more Middle Eastern producers offer for the second half of 2010 and 2011,” said the trader. Not all producers were aiming to go above the March monomer increase of €35/tonne. “We will be looking to cover monomer,” said another European PP producer source. “The market has misjudged the import situation enormously,” the source continued.
“For two years now, we have been talking about a tsunami of imports which would impact the European market. Now it looks likely to be absorbed more naturally into the market.” However, several sources said they still expected Middle Eastern production to affect the market more actively. Buyers were not happy with INEOS’s announcement on Wednesday of a €70/tonne hike for March PP. “I don’t think they understand our situation. They are showing a total disregard for their markets. Ultimately, they will damage their converters. It’s too much,” said a frustrated buyer. Another buyer said: “No way will they get a €70/tonne hike in March. The market is too weak.” Buying was expected to remain low as buyers waited for better prices. But sellers pointed out that March was a seasonally strong month, and they were confident in their approach for business next month. ($1 = €0.74)
http://www.icis.com/Articles/2010/02/24/9337653/europe-pp-buyers-face-higher-prices-for-march-amid-tight-supply.html
from cenpy import products
import matplotlib.pyplot as plt
%matplotlib inline

chicago = products.ACS(2017).from_place('Chicago, IL', level='tract',
                                        variables=['B00002*', 'B01002H_001E'])

Matched: Chicago, IL to Chicago city within layer Incorporated Places

Install the prerelease candidate using:

pip install --pre cenpy

I plan to make a full 1.0 release in July. File bugs, rough edges, things you want me to know about, and interesting behavior! I'll also maintain a roadmap here.

Cenpy started as an interface to explore and query the US Census API and return Pandas Dataframes. This was mainly intended as a wrapper over the basic functionality provided by the census bureau. I was initially inspired by acs.R in its functionality and structure. In addition to cenpy, a few other census packages exist out there in the Python ecosystem, such as: And, I've also heard/seen folks use requests raw on the Census API to extract the data they want.

All of the packages I've seen (including cenpy itself) involved a very stilted/specific API query due to the way the census API worked. Basically, it's difficult to construct an efficient query against the census API without knowing the so-called "geographic hierarchy" in which your query fell: The main census API does not allow a user to leave middle levels of the hierarchy vague: For you to get a collection of census tracts in a state, you need to query for all the counties in that state, then express your query about tracts in terms of a query about all the tracts in those counties. Even tidycensus in R requires this in many common cases. Say, to ask for all the blocks in Arizona, you'd need to send a few separate queries:

- what are the counties in Arizona?
- what are the tracts in all of these counties?
- what are the blocks in all of these tracts in all of these counties?

This was necessary because of the way the hierarchy diagram (shown above) is structured.
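Those level-by-level queries amount to a tree walk, one round trip per parent geography. A toy sketch (fake data, not the real Census API) makes the query cost explicit:

```python
# Toy hierarchy standing in for the Census geographic hierarchy.
hierarchy = {
    "Arizona": {
        "Maricopa": {"tract-1": ["block-A", "block-B"]},
        "Pima": {"tract-2": ["block-C"]},
    }
}

queries = 0

def query(children):
    """Stand-in for one round trip to the API; returns a geography's children."""
    global queries
    queries += 1
    return children

def blocks_in_state(state):
    """Enumerate blocks by walking the hierarchy one level at a time."""
    blocks = []
    for tracts in query(hierarchy[state]).values():   # counties in the state
        for blks in query(tracts).values():           # tracts in each county
            blocks.extend(query(blks))                # blocks in each tract
    return blocks

result = blocks_in_state("Arizona")
```

Even this tiny example costs five round trips (one for the state's counties, one per county for tracts, one per tract for blocks); the count grows with every intermediate geography, which is exactly why this style of search is slow over the network.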
Blocks don't have a unique identifier outside of their own tract; if you ask for block 001010, there might be a bunch of blocks around the country that match that identifier. Sometimes, this meant conducting a very large number of repetitive queries, since the packages are trying to build out a correct search tree hierarchy. This style of tree search is relatively slow, especially when conducting this search over the internet...

So, if we focus on the geo-in-geo style queries using the hierarchy above, we're in a tough spot if we want to also make the API easy for humans to use. Fortunately for us, a geographic information system can figure out these kinds of nesting relationships without having to know each of the levels above or below. This lets us use very natural query types, like: what are the blocks *within* Arizona?

There is a geographic information system that cenpy had access to, called the Tiger Web Mapping Service. These are ESRI Mapservices that allow for a fairly complex set of queries to extract information. But, in general, neither census nor censusdata used the TIGER web map service API. Cenpy's cenpy.tiger was a fully-featured wrapper around the ESRI Mapservice, but was mainly not used by the package itself to solve this tricky problem of building many queries to solve the geo-in-geo problem. Instead, cenpy 1.0.0 uses the TIGER Web mapping service to intelligently get all the required geographies, and then queries for those geographies in a very parsimonious way. This means that, instead of tying our user interface to the census's datastructures, we can have some much more natural place-based query styles. Let's grab all the tracts in Los Angeles. And, let's get the Race table, P004.

from cenpy import products
import matplotlib.pyplot as plt
%matplotlib inline

The new cenpy API revolves around products, which integrate the geographic and the data APIs together.
For starters, we'll use the 2010 Decennial API:

dectest = products.Decennial2010()

Now, since we don't need to worry about entering geo-in-geo structures for our queries, we can request Race data for all the tracts in Los Angeles County using the following method:

la = dectest.from_county('Los Angeles, CA', level='tract', variables=['^P004'])

Matched: Los Angeles, CA to Los Angeles, CA within layer Counties

And, making a pretty plot of the Hispanic population in LA: How this works from a software perspective is a significant improvement on how the other packages, like cenpy itself, work:

- Match the target within a level of the census geography (e.g. match Los Angeles, CA to Los Angeles County).
- Query the Web Mapping Service for the geographies at the requested level that fall within the target.

Since the Web Mapping Service provides us all the information needed to build a complete geo-in-geo query, we don't need to use repeated queries. Further, since we are using spatial querying to do the heavy lifting, there's no need for the user to specify a detailed geo-in-geo hierarchy: using the Census GIS, we can build the hierarchy for free. Thus, this even works for grabbing block information over a very large area, such as the Austin, TX MSA:

aus = dectest.from_msa('Austin, TX', level='block', variables=['^P003', 'P001001'])

Matched: Austin, TX to Austin-Round Rock-San Marcos, TX within layer Metropolitan Statistical Areas
https://nbviewer.jupyter.org/gist/ljwolf/3481aeadf1b0fbb46b72553a08bfc4e6?flush_cache=true
ASF Bugzilla – Bug 42239 ECDSA signature value interoperability patch. Last modified: 2009-07-14 04:38:48 UTC

I've recently tried to verify a signature from the austrian citizen security card (), which uses ECDSA signatures. Unfortunately, the code in SignatureECDSA.java passes the SignatureValue directly to the JCE provider. However, the ECDSA xml-security spec at states that the ECDSA SignatureValue is a concatenation of the raw BigIntegers. This is in line with the semantics of SignatureValue for conventional DSA signatures (SignatureDSA.java), where the SignatureValue is converted to the ASN.1 representation used by the JCE provider. The attached patch adopts the procedure of converting the SignatureValue to ASN.1 for the ECDSA algorithm. With this patch applied to xmlsec-1.4.0 I can verify the signatures of my austrian card. (An example is attached) Regards, Wolfgang

Created attachment 20038 [details] Patch to the ECDSA signature algorithm implementation.

Created attachment 20039 [details] A sample signature from the austrian citizen card.

For me it is ok to add this to the 1.4.1 release; what do other people think?

Applying this patch for 1.4.1 would be a great favor to me. TIA, Wolfgang

Can you recreate the patch with svn diff, or eclipse patch? It is the only thing left for the release. Regards, Raul

My original patch applies well against the current svn tree using 'patch -p0 xx' from the project's root directory (the directory where build.xml resides...) However, I will attach the output of 'svn diff' of the current svn with my patch applied.

Created attachment 20087 [details] output of svn diff for my proposed patch.

Thanks, the last patch works. One last thing: can you send a junit test case that stresses the new code? If you don't do this I don't know how long it will keep working. Regards, Raul

The problem with generating a junit test is that the signature attached to this issue uses complex xpointer references with namespaces.
URI="#xmlns(etsi=)%20xpointer(<some_xpath_expression>)"

I've implemented an xpointer resolver, which is capable of resolving such things, but this resolver relies on java 1.5's XPath API plus schema validation, which makes the thing not suitable for a junit test (schema lookup through http, test will only run in java-1.5...). The only thing I can do is to generate a signature with xmlsec in a way that it may be resolved by the austrian security layer implementation. I've already contacted the guys at IAIK.at about receiving an official test vector, but the staff hasn't been very cooperative at this point.

Created attachment 20117 [details] patch for making ECDSA aware of variable keylengths.

This patch makes the new ECDSA SignatureValue conversion aware of multiple keylengths, as discovered by the junit test, which will follow.

Created attachment 20118 [details] JUnit tests for ECDSA.

These junit tests test the interoperability of the ECDSA signature with a test vector from the austrian security card, as well as the consistency of the ECDSA implementation. Please note that the third test fails with bouncycastle 129. I had to upgrade to the current bouncycastle (136) in order to get the interoperability test working.

I just recognized that the variable keylength patch as well as the junit tests have not yet been checked into the svn tree of the xml security project. Raul, would it be possible to do so? TIA, Wolfgang

I've tried the xpointer resolver for generating signatures for the german banking interface EBICS, which uses #xpointer(//*[@authenticate='true']) to sign all elements with this attribute. Unfortunately, signing fails because CanonicalizerBase.canonicalizeXPathNodeSet strips away subnodes, because isVisibleDO returns 0 for subnodes of nodes in the xpathnodeset.

Well, how is the XPointer resolver related to this issue? Supposedly you should post XPointer issues to Regards, Wolfgang

I guess you are right that it fits better into issue 42599.
The reason I posted it here was that this is where you published the resolver that triggered this bug. I will add a comment to 42599. Thanks, Christian

Wolfgang, I'm working on integrating this patch. I have an issue with the test. I'd rather it didn't depend on the BouncyCastle APIs to generate a certificate and EC keypair. This seems easy to work around; instead you can read them from a keystore file which will contain the private key and X.509 certificate. Is this something you can change for me? Thanks, Sean

Created attachment 23914 [details] A patch (zip file) for this issue

See the attached patch, which we can apply to close this issue. It's a reworked version of the previous attachment "20118 - JUnit tests for ECDSA". Instead of explicitly using BouncyCastle APIs to create an ECDSA keypair each time the test is run, I used the code to write the keypair out to a keystore, and then commented the BouncyCastle code out. The test tries to load the BouncyCastle provider via reflection; if it fails, then the tests just return without an error, as to run the tests we need to have BouncyCastle installed to provide ECDSA support. The patch is a zip file consisting of the actual patch and the keystore, which should be placed in data\org\apache\xml\security\samples\input. Note that this patch also needs attachment "20117 - patch for making ECDSA aware of variable keylengths" that is attached to this issue but was never applied.

Sean, can you take a look at the license at the top of the ResourceResolver implementation in the patch to make sure it's ok? Colm.

(In reply to comment #17) > Sean, can you take a look at the license at the top of the ResourceResolver > implementation in the patch to make sure it's ok? Are you referring to this?
:
+/***********************************************************
+ * $Id: XPointerResourceResolver.java 52 2007-04-07 19:45:06Z wglas $
+ *
+ * Tapestry support for the austrian security card: Apr 6, 2007
+ *
+ * Author: wglas
+ *
+ ***********************************************************/

I don't think this is acceptable. See for more information. It specifically states: "Each source file should include the following license header -- note that there should be no copyright notice in the header"

First the standard disclaimer: IANAL. I think the easiest thing to do is ask Wolfgang whether he really needs to include their own copyright (implying ownership) of this code. If they do, then I think we need to have a CLA in place. In any case, I think this copyright issue will delay getting this fix into 1.4.

(In reply to comment Thanks for the quick reply Wolfgang. I read this to mean that you aren't claiming ownership of the code (no copyright), is that right? If so, Colm, I think this means we can keep the 2.0 notice, and just replace the copyright with the ASF copyright and not the notice in. Note: it may be the case that the existing copyright is ok and doesn't require a CLA because it is specified within an Apache 2.0 license. However, I'm not really sure and would want to check with legal first. So if we can just remove the non-Apache copyright and replace it with an Apache copyright, that seems like the quickest solution. Thanks, Sean

Patch applied. Colm.
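The core of the conversion discussed in this thread can be sketched independently of the Java patch: the XML-Signature ECDSA SignatureValue is the fixed-width big-endian concatenation r || s, while JCE-style providers expect an ASN.1 DER SEQUENCE of two INTEGERs. A hedged illustration (Python rather than the actual xmlsec code; helper names are made up, and only short-form DER lengths are handled, i.e. curves where the encoded body stays under 128 bytes):

```python
def der_int(n):
    """DER-encode a non-negative integer: minimal big-endian bytes,
    with a leading 0x00 pad if the high bit is set (keeps it positive)."""
    body = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
    if body[0] & 0x80:
        body = b"\x00" + body
    return bytes([0x02, len(body)]) + body  # 0x02 = INTEGER tag

def raw_to_der(sig):
    """Convert the raw r||s SignatureValue into SEQUENCE { INTEGER r, INTEGER s }.
    Splits the fixed-width value in half, which is why the variable-keylength
    follow-up patch mattered."""
    half = len(sig) // 2
    r = int.from_bytes(sig[:half], "big")
    s = int.from_bytes(sig[half:], "big")
    body = der_int(r) + der_int(s)
    return bytes([0x30, len(body)]) + body  # 0x30 = SEQUENCE tag
```

Verification is the mirror image: parse the provider's DER output back into r and s and re-concatenate them at the curve's field width.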
https://bz.apache.org/bugzilla/show_bug.cgi?id=42239
I am trying to learn C++. I wrote a little calculator using a class to see if I can do it and get some practice. The following is the code:

#include <iostream>
using namespace std;

class Calculator
{
    float a, b;
public:
    float add(float, float);
    float subtract(float, float);
    float multiply(float, float);
    float divide(float, float);
};

float Calculator::add(float a, float b)
{
    return (a+b);
}

float Calculator::subtract(float a, float b)
{
    return (a-b);
}

int main()
{
    Calculator calc;
    float a,c;
    char b;
    cout << "Enter math problem (eg. 1 + 2)" << endl;
    cin >> a >> b >> c;
    switch (b)
    {
    case '+':
        calc.add(a,b);
        cin.ignore();
    default:
        cout << "Please Enter correct math problem" << endl;
    }
}

When the command line pops up, it asks me to enter a math problem and I do, such as 1+2, and it returns the default switch message "Please Enter correct math problem". Then I tried adding cout << "True"; to see if it was actually going to the correct switch, and it printed true. So it seems that it isn't doing calc.add(a,b), though I am not sure why not.
https://www.daniweb.com/programming/software-development/threads/337659/c-class-calculator
The Python WSGI Utility Library

Werkzeug is a WSGI utility library for Python. It's widely used and BSD licensed.

from werkzeug.wrappers import Request, Response

@Request.application
def application(request):
    return Response('Hello World!')

if __name__ == '__main__':
    from werkzeug.serving import run_simple
    run_simple('localhost', 4000, application)

Werkzeug is the base of frameworks such as Flask, as well as of in-house frameworks developed for commercial products and websites.

Have you looked at werkzeug.routing? It's hard to find anything that's simpler, more self-contained, or purer-WSGI than Werkzeug, in general — I'm quite a fan of it! — Alex Martelli

Found a bug? Have a good idea for improving Werkzeug? Head over to Werkzeug's new github page and create a new ticket or fork. If you just want to chat with fellow developers, visit the IRC channel or join the mailing list.
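For comparison, here is roughly the same hello-world written against the raw WSGI interface using only the standard library. This is an illustrative sketch, not Werkzeug code; it shows the status-line and header bookkeeping that the Request/Response wrappers above take care of for you.

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    # Raw WSGI: build the status line and header list by hand.
    body = b"Hello World!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app without a server, using a synthetic WSGI environ.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status

result = b"".join(application(environ, start_response))
```

Because both versions are plain WSGI callables underneath, either one can be served by run_simple or any other WSGI server.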
http://werkzeug.pocoo.org/
import "github.com/google/battery-historian/csv"

Package csv contains functions to store battery history events and convert them to and from CSV format.

const (
    // FileHeader is outputted as the first line in csv files.
    FileHeader = "metric,type,start_time,end_time,value,opt"
    // UnknownWakeup is emitted for running events if no wake up reason is set for it.
    UnknownWakeup = "Unknown wakeup reason"
    // CPURunning is the string outputted for CPU Running events.
    CPURunning = "CPU running"
    // Reboot is the string outputted for reboot events.
    Reboot = "Reboot"
)

ExtractEvents returns all events matching any of the given metric names. If a metric has no matching events, the map will contain a nil slice for that metric. If the metrics slice is nil, all events will be extracted. Errors encountered during parsing will be collected into an errors slice, and parsing will continue with the remaining events.

type Entry struct {
    Desc  string
    Start int64
    Type  string
    Value string
    // Additional data associated with the entry.
    // Currently this is used to hold the UID (string) of a service (ServiceUID),
    // and is an empty string for other types.
    Opt string
    // Unique identifier for the event. e.g. The name of the app that triggered the event.
    Identifier string
}

Entry contains the details of the start of a state. GetKey returns the unique identifier for the entry. GetStartTime returns the start time of the entry. GetType returns the type of the entry. GetValue returns the stored value of the entry.

type EntryState interface {
    // GetStartTime returns the start time of the entry.
    GetStartTime() int64
    // GetType returns the type of the entry:
    // "string", "bool", "float", "group", "int", "service", or "summary".
    GetType() string
    // GetValue returns the stored value of the entry.
    GetValue() string
    // GetKey returns the unique identifier for the entry.
    GetKey(string) Key
}

EntryState is a common interface for the various types, so the Entries can access them the same way.
type Event struct {
    Type       string
    Start, End int64
    Value      string
    Opt        string
    AppName    string // For populating from package info.
}

Event stores the details contained in a CSV line. MergeEvents merges all overlapping events. Key is the unique identifier for an entry. RunningEvent contains the details required for printing a running event. State holds the csv writer, and the map from metric key to active entry. NewState returns a new State.

func (s *State) AddEntry(desc string, newState EntryState, curTime int64)

AddEntry adds the given entry into the existing map. If the entry already exists, it prints out the entry and deletes it. AddEntryWithOpt adds the given entry into the existing map, with the optional value set. If the entry already exists, it prints out the entry and deletes it.

func (s *State) AddOptToEntry(desc string, state EntryState, opt string)

AddOptToEntry adds the given optional value to an existing entry in the map. No changes are made if the entry doesn't already exist. AddRebootEvent stores the entry for the reboot event, using the given curTime as the start time. EndEvent marks an event as finished at the given timestamp. Does nothing if the event is not currently active. EndWakeupReason adds the wakeup reason to the wakeup reason buffer. HasEvent returns whether an event for the metric with the given identifier is currently active. HasRebootEvent returns true if a reboot event is currently stored, false otherwise. Print directly prints a csv entry to CSV format and writes it to the writer. PrintActiveEvent prints out all active entries for the given metric name with the given end time, and deletes those entries from the map. PrintAllReset prints all active entries and resets the map. PrintEvent writes an event extracted by ExtractEvents to the writer. PrintInstantEvent converts the given data to CSV format and writes it to the writer. PrintRebootEvent prints out the stored reboot event, using the given curTime as the end time.
StartEvent marks an event as beginning at the given timestamp. Does nothing if the event is already active. For events without a duration, PrintInstantEvent should be used instead. StartWakeupReason adds the wakeup reason to the wakeup reason buffer.

Package csv imports 10 packages and is imported by 13 packages. Updated 2018-05-03.
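The output the package produces is ordinary CSV beginning with the FileHeader documented above. Purely as an illustration (the package itself is Go, and this sample row is made up), a row in that layout can be read back with any CSV reader:

```python
import csv
import io

# Header taken from the FileHeader constant documented above.
FILE_HEADER = "metric,type,start_time,end_time,value,opt"

# A made-up event row using the documented CPURunning and UnknownWakeup strings.
sample = FILE_HEADER + "\n" + "CPU running,service,1000,2000,Unknown wakeup reason,\n"

rows = list(csv.DictReader(io.StringIO(sample)))
```

Each row maps the six header columns to one event, with `opt` left empty when no optional value (such as a service UID) was recorded.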
https://godoc.org/github.com/google/battery-historian/csv
clock() not working as expected. What's up? allene Jul 27, 2015 10:54 AM

I am trying to get a timestamp (see other thread) and tried the function clock() that should read the system clock. I am printing out the value of clock once a second, timed by a GPS input. I can see a steady display of the gps data from a printf statement but the associated clock() values are not constant. Every 5th one has about 50% more digits between the numbers. I call this once a second and the printf output comes out once a second, but the number x3 is not at all constant.

int time_stamp(){
    unsigned long xyzzy;
    static unsigned long x2 = 0;
    xyzzy = (unsigned long)( clock());
    unsigned long x3 = xyzzy - x2;
    printf("** timestamp is %lu %lu %lu\n", xyzzy, x2, x3);
    x2 = xyzzy;
    return (int)xyzzy;
}

So, can I fix this or is there another way to get a sub second clock?

1. Re: clock() not working as expected. What's up? Steven Moy Jul 27, 2015 6:53 PM (in response to allene)

Hm, do you want wall time or time spent by your program? Reading the man page of "clock", it seems it's reporting the cpu time spent by your program. Below seems to be a good tutorial on time related things in C programming. It calls out that "clock" is rarely used. Time, Clock, and Calendar Programming In C

2. Re: clock() not working as expected. What's up? allene Jul 28, 2015 9:30 AM (in response to Steven Moy)

I understand what you are saying about clock(). Obviously the wrong thing to use to get absolute timestamps. I tried the example here and got a segmentation fault. Perhaps I need to link rt. I can't test it until later but will post the results. Allen

3. Re: clock() not working as expected. What's up? Steven Moy Jul 28, 2015 10:15 AM (in response to allene)

Hi, what kind of environment are u compiling your code? I dropped the tutorial code onto edison and compiled on edison. Works for me.
root@redacted:~/test_clock# gcc -o gettime gettime.c -lrt
root@redacted:~/test_clock# ls
gettime  gettime.c
root@redacted:~/test_clock# ./gettime
elapsed time = 1000536234 nanoseconds
elapsed process CPU time = 68750 nanoseconds

4. Re: clock() not working as expected. What's up? allene Jul 28, 2015 10:49 AM (in response to Steven Moy)

I didn't do the link, so I assume that is my problem. Thanks so much for showing that this works. I very much appreciate it. Allen
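The distinction Steven points to (wall-clock time versus CPU time consumed by the process) is exactly what the gettime output above shows, and it exists in most runtimes. A quick illustration in Python, used here only for brevity: time.monotonic() plays the role of clock_gettime(CLOCK_MONOTONIC), while time.process_time() behaves like C's clock() and only advances while the process is actually using the CPU.

```python
import time

start_wall = time.monotonic()    # analogous to clock_gettime(CLOCK_MONOTONIC)
start_cpu = time.process_time()  # analogous to C's clock(): process CPU time only

time.sleep(0.2)                  # sleeping consumes wall time but almost no CPU

wall_elapsed = time.monotonic() - start_wall
cpu_elapsed = time.process_time() - start_cpu
```

After the sleep, the wall-clock delta is about 0.2 s while the CPU-time delta stays near zero, which is why a clock()-based timestamp looks erratic when the program spends most of its second waiting for GPS input.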
https://communities.intel.com/thread/77620
Using os_log to Log Messages

This episode is part of a series: Unified Logging and Activity Tracing.

Episode Links

Defining Shared OSLog instances

You will likely want to log statements and reuse the same subsystem and category throughout your application. For this, you can define a struct with a static member:

import os.log

struct Log {
    static var general = OSLog(subsystem: "com.myapp.my_target", category: "general")
}

Logging Messages

Then you can use the shared log instance to log messages with os_log:

os_log("The app did something interesting", log: Log.general, type: .info)

Including values in your log messages

We can include values in our log messages by using one of the built-in formatting type specifiers: Sometimes we may want to give the system some information on how we want these values to be interpreted and formatted on display. One example is using time_t to get a formatted date to be displayed:

os_log("The record was modified at %{time_t}d", log: Log.general, type: .info, time_t(date.timeIntervalSince1970))
https://nsscreencast.com/episodes/344-unified-logging-using-os-log
Well I didn't have anywhere else to turn, so I came here. I don't know how many people here have actually worked with SDL, but I'm hoping there are a few of you out there that can help me.

I downloaded the file under win32 I think; it ended with a VC6.zip. So I unzip it and find VisualC.html. I read it, and see it says how to compile your own files (SDL.dll, SDL.lib, SDLmain.lib). I find a directory that already has them. So I skip this part and continue on. It tells me how to create a project with SDL. I follow the directions step by step. I then compile this:

#include "SDL.h"

int main( int argc, char* argv[] )
{
    // Body of the program goes here.
    return 0;
}

I get 2 errors and a warning.

LINK : warning LNK4001: no object files specified; libraries used
LINK : error LNK2001: unresolved external symbol _WinMainCRTStartup
Debug/SDL1.exe : fatal error LNK1120: 1 unresolved externals

So I copy the first error and put it in a search engine. I find someone that gets the same error, due to using a borland lib instead of MSVC or something like this. So I go back to the first step to try and build my own.

It says to unzip VisualC.zip to the same directory as the VisualC.html. I don't have a VisualC.zip, so I search libsdl.org and find a version 1.4 of the file and unzip it. It then says to go to the newly created VisualC directory, ok. Next I have to open the workspace "SDL.dsw". Problem: there is only "SDL_ttf.dsw" in this directory. I have no "SDL.dsw" anywhere. So I open "SDL_ttf.dsw" instead. It then says to use fileview on the workspace area and build the files by simply right clicking what I see in the area and choosing build. I should have a few warnings, but no errors. I get 1 error and 1 warning for each. I can't figure out what's wrong. Help me somebody.
https://cboard.cprogramming.com/c-programming/40893-need-help-setting-up-sdl-msvc-printable-thread.html
You can prevent unauthorized users from receiving events to which they should not have access. Your event provider can supply instances of its own event classes, just as the System Registry Provider supplies classes such as RegistryKeyChangeEvent. Your event provider can also deliver intrinsic events such as __InstanceCreationEvent. For more information, see Writing an Event Provider.

An event provider can control access to event recipients in the following ways:

- The provider determines whether the consumer has privileges to receive a requested event. If the consumer lacks sufficient privileges to register, WMI returns an access-denied error. Use this mode when the provider can make the decision about who can receive events. For example, a provider may supply security-related events and can require that the consumer have administrator privileges with the SeSecurityPrivilege privilege enabled. For examples, see Implementing Access Control.
- WMI performs access checks based on a security descriptor (SD). Use this mode when the provider cannot decide who is allowed to consume its events but can decide on an SD for a specific sink. For example, use IWbemEventSink::SetSinkSecurity if your event provider obtained several sinks through calls to IWbemEventSink::GetRestrictedSink and you want a security descriptor for each sink. For examples, see Implementing Sink Security.
- Each event carries its own security descriptor. Use this approach when each event delivered to the sink can have a different security descriptor. To use it, derive any of the extrinsic event classes defined by your provider from __Event or __ExtrinsicEvent so that your class contains the SECURITY_DESCRIPTOR property. For example, your event provider may publish both secure and normal events through one sink; in this case, use the Administrators account security descriptor for secure events and a NULL security descriptor for normal events that anyone can receive. For examples, see Setting SD for a Specific Event Type.
Decoupled event providers differ from nondecoupled event providers in the way that they register with WMI. The call to IWbemEventProviderSecurity::AccessCheck for events from a decoupled provider never propagates the client access token. WMI handles the access control in the same manner as for nondecoupled event providers. For more information about writing a decoupled provider, see Incorporating a Provider in an Application. Only administrators with the FULL_WRITE privilege set in WMI Control of the Control Panel are allowed to raise events for a namespace. For more information, see Setting Namespace Security with the WMI Control. Windows 2000 and Windows NT: Use IWbemServices::QueryObjectSink to obtain a sink from an IWbemServices proxy object for a namespace. Send comments about this topic to Microsoft Build date: 6/15/2009
http://msdn.microsoft.com/en-us/library/aa392894(VS.85).aspx
shovell p.243 test fails, browser view works

Original post: Simply Rails 2, chapter 7, p. 243 — the second test in stories_controller_test.rb is:

```ruby
def test_should_show_story_votes_elements
  get :show, :id => stories(:one)
  assert_select 'h2 span#vote_score'
  assert_select 'ul#vote_history li', :count => 2
  assert_select 'div#vote_form form'
end
```

When rake test runs, it shows a failure on the assert_select 'ul#vote_history li' line. Here it is:

1) Failure:
test_should_show_story_votes_elements(StoriesControllerTest)
[C:/InstantRails/ruby/lib/ruby/gems/1.8/gems/actionpack-2.0.2/lib/action_controller/assertions/selector_assertions.rb:296:in `assert_select'
./test/functional/stories_controller_test.rb:46:in `test_should_show_story_votes_elements'
C:/InstantRails/ruby/lib/ruby/gems/1.8/gems/activesupport-2.0.2/lib/active_support/testing/default.rb:7:in `run']:
Expected at least 2 elements matching "ul#vote_history li", found 0.
<false> is not true.

The puzzling thing about it is that it shows fine in the browser, and "shoving" the story again and again shows new scores and a refreshed vote_history list. What is wrong here? Does anybody know?

Follow-up (same poster): As a matter of fact, it does not show in the browser either. I was confused because voting by clicking works fine, and the Ajax too, and then it shows the vote_history. But when a story is first entered through '', the vote_history does not show. There is a small empty grey square with a darker border that reminds me there should have been a vote_history there. It does, though, show the line 'No shoves yet!' when that is true. So it is not only a test failure; it is a real functional problem. Please HELP!

Reply: Does your test\fixtures\stories.yml file look like:

```yaml
# Read about fixtures at
one:
  name: My shiny weblog
  link:

two:
  name: SitePoint Forums
  link:
```

Resolution (original poster): Thanks for the try, and for the link, I'll study it. But as I mentioned in the additional note, this was a REAL problem, not only a test failure. So I tested the <ul> section of show.html.erb by reversing the condition (!) and the branches, and what I saw in the browser was a different problem, but one that proved I had hit the spot. Examining the section again revealed a typo in the else condition: I had typed it as < else > instead of <% else %>. Now the test passes, and so does the show page in the browser. Anyway — thanks a lot for the bother.
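The typo described in the resolution — writing `< else >` instead of `<% else %>` — is easy to reproduce with plain ERB from the Ruby standard library. The miniature templates below are made up for illustration (they are not the book's actual view), but they show the exact symptom: with the broken tag, the whole body falls inside the if-branch and nothing renders when votes exist.

```ruby
require 'erb'

votes = ["shove 1", "shove 2"]

# Correct template: <% else %> is an ERB tag, so exactly one branch renders.
good = ERB.new('<% if votes.empty? %>No shoves yet!<% else %>shoves: <%= votes.size %><% end %>')
puts good.result(binding)   # => "shoves: 2"

# Broken template: "< else >" is plain text, not a tag, so everything up to
# <% end %> belongs to the if-branch and nothing renders for non-empty votes.
bad = ERB.new('<% if votes.empty? %>No shoves yet! < else > shoves: <%= votes.size %><% end %>')
puts bad.result(binding)    # => "" — the empty vote_history seen in the browser
```

Note that the broken template still compiles to valid Ruby, which is why there was no error message — only missing output.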
http://www.sitepoint.com/forums/showthread.php?579413-shovell-p-243-test-fails-browser-view-works&p=4014849&viewfull=1
Can someone please explain to me the use of super(t) in the following code? Thanks a lot.

```java
import javax.swing.*;
import java.awt.*;

public class ComponentExample(String t){
    super(t);
    Container cp = getContentPane();
    cp.add(new JButton("Click", "North");
    pack();
}
```

Reply: Java works in a hierarchical manner; each class is either a parent or a child of another class, but in the end they all come back to the Object class. So when you say super(t) you are calling the constructor of the class that your class is a child of. So in your case you are calling the Object constructor and passing it the String t — not too exciting, not even necessary...

A kram a day keeps the doctor......guessing

Reply (original poster): How come, when the string t is passed, it's going to become the title of the window created? I don't get the relationship between how that works and the fact that it passes the String to the Object constructor. Can you please explain further? Thanks a lot.

Reply: The constructor of the super class that you are passing the value to must set it up as one of the parameters for your window. It's simply one of the operations of the constructor of the super class. Keep in mind that your class is derived "from" the Object class, so it's like your class's parent, and that is the relationship. If you have a class that "extends" JApplet, for example, then the JApplet class will be the super/parent class.

Reply: As it is, the code above is invalid. You seem to have grouped together the class definition and the constructor all into one thing. Maybe it should look something like this?

```java
import javax.swing.*;
import java.awt.*;

public class ComponentExample extends Component { // class definition
    public ComponentExample(String t) { // constructor
        super(t);
        Container cp = getContentPane();
        cp.add(new JButton("Click"), "North");
        pack();
    }
}
```
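The key point in the replies — that the superclass constructor stores the value it receives, which is why super(t) can end up as the window title — can be sketched without Swing. The class names below are invented for illustration (JFrame does the equivalent internally with its title), so this is a minimal sketch of the mechanism, not the actual Swing code:

```java
// TitledWindow stands in for JFrame: its constructor stores the string it receives.
class TitledWindow {
    private final String title;

    TitledWindow(String title) {
        this.title = title;
    }

    String getTitle() {
        return title;
    }
}

// The subclass forwards its argument to the parent constructor with super(t),
// just as the ComponentExample in the thread forwards the window title.
class ComponentExample extends TitledWindow {
    ComponentExample(String t) {
        super(t); // the parent stores t as the title
    }
}

public class SuperDemo {
    public static void main(String[] args) {
        ComponentExample w = new ComponentExample("My Window");
        System.out.println(w.getTitle()); // prints "My Window"
    }
}
```

The same value flows up the inheritance chain: the child never stores the title itself, yet getTitle() returns it, because super(t) handed it to the parent's constructor.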
http://forums.devx.com/showthread.php?140378-super()-question&p=415574
RS232 Fail test — userc_43312, Aug 18, 2017 2:15 PM

Hi, I just received a PSoC 4 unit to evaluate for a potential project/product. I installed Creator 2.2 and I have been looking at videos and examples online. I tried creating a simple RS232 communication test: receive something from an external source, append a little string to it, and send it back. I must be missing something, because I can't get it to work. I assigned TX to P0.5 and RX to P0.4. The RX input pin is set to High Impedance Digital [low(0)]; TX is set to Resistive Pull Up. [I tried different options; not sure what to use.] I would appreciate any help I can get, and also any information about training resources. The C code looks like this:

```c
#include <device.h>

void main()
{
    /* Place your initialization/startup code here (e.g. MyInst_Start()) */
    uint8 ch;

    UART_1_Start();
    UART_1_PutString("Testing RS232 :");
    /* CyGlobalIntEnable; */ /* Uncomment this line to enable global interrupts. */

    for(;;)
    {
        /* Place your application code here. */
        ch = UART_1_GetChar();
        if (0u != ch)
        {
            UART_1_PutString("got this ="+ch);
        }
    }
}
```

(Attachment: RS232_Component.png)

1. Re: RS232 Fail test — userc_43312: I forgot to mention that I connected the signal ground (RS232) to pin GND on J3. [I do not want to set the board up as a bridge as suggested in the Pioneer kit guide, since the device where I plan to use it doesn't have the USB option.] For testing I am using a terminal program and trying to send/receive something: 9600,8,N,1.

2. Re: RS232 Fail test — DaKn_263916: The Pioneer kit does not have an RS232 translator on it to get the levels necessary for the physical layer to talk to a PC. Unless you are doing a loopback test at the CMOS interface, you cannot get the PC to talk to the UART over RS232. There are a plethora of RS232 translator chips, and you can always google "RS232 PC Interface" and do it with discretes. Or use a USB-to-RS232 dongle on the PC side. Regards, Dana.

3. Re: RS232 Fail test — userc_43312: Hi Dana, thank you for the reply. Just to be clear: does this mean I would need a TTL level shifter (like this)? In the past I have used a MAXIM 232CPE; would either of these two options work for me? Also, what is the maximum number of UARTs [RS232] that I should be able to define? [I think I read that the PSoC 4 has 2 UARTs.] Can I only define 2, or can the board "play some tricks" to expand that number? Thank you so much for your help, Cristian.

4. Re: RS232 Fail test — userc_40792: Hi, something for you. 1) Permit global interrupts; the user module might use them. 2) I think the TX pin should be Strong Drive with an initial value of high; that would be better. 3) What is this? [ UART_1_PutString("got this ="+ch); ] This is C, which does not have a string class like C++.

5. Re: RS232 Fail test — DaKn_263916: Maxim 232 — excellent choice. As far as max UARTs, take a look at the software UART and its API sizes. It uses no UDB resources. Regards, Dana.

6. Re: RS232 Fail test — JoMe_264151: UART_1_PutString("got this ="+ch); is valid C, but it does not do what you expect. Use two statements: one to send the fixed text as a string and one to send the character as a byte. Bob

7. Re: RS232 Fail test — JoMe_264151: There are some UDBs within the PSoC 4 which can be used to build additional hardware. Try to implement one or two UART modules, connect the pins (needed to prevent the UARTs from getting optimized out), and see if the project fits. I did not try that yet. Bob

8. Re: RS232 Fail test — userc_43312: Thank you for your comments. I added a MAX232xxx and after that I had communication both ways. The buffer seems to be limited to 4 bytes. I tried to just change the component (UART) buffer size, but that seems to require the use of interrupts. Are there any examples of the use/implementation of these interrupts, or of sending larger string messages? Thank you for your help, Cristian.

9. Re: RS232 Fail test — ki.leung: I normally use a ring buffer for RX and my own interrupt RX routine.

10. Re: RS232 Fail test — JoMe_264151: When you extend the buffer to more than 4 bytes, an INTERNAL interrupt routine is started, so there is no need (except for enabling global interrupts) for you to act. The routines are blocking, so there is no need to increase the buffer; the routine will return when all bytes are sent. May I suggest a different approach we always recommend here, since it is a VERY good exercise and is rather helpful: write an interrupt-driven routine that accepts characters from the UART into a (so-called) circular buffer. Same (or similar) for a circular buffer from which characters are fetched and transferred via UART. Bob

11. Re: RS232 Fail test — DaKn_263916: 3) What is this? [ UART_1_PutString("got this ="+ch); ] You cannot concatenate a value onto a string this way in C. Convert the numeric value to a string, then use char *strcat(char *dest, const char *src); from the standard library to concatenate the strings. Don't forget to size the receiving string array to the total length expected + 1 for the nul termination character. Regards, Dana.
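The circular (ring) buffer that ki.leung and Bob recommend for interrupt-driven RX can be sketched in portable C. The buffer size and function names below are illustrative, not Cypress API names; on the PSoC, rx_put() would be called from the UART RX interrupt handler and rx_get() from the main loop:

```c
#include <stdbool.h>
#include <stdint.h>

#define RX_BUF_SIZE 64u /* must be a power of two for the index mask to work */

static volatile uint8_t  rx_buf[RX_BUF_SIZE];
static volatile uint32_t rx_head = 0; /* written only by the ISR side */
static volatile uint32_t rx_tail = 0; /* written only by the main loop */

/* ISR side: store one received byte; returns false (byte dropped) if full. */
bool rx_put(uint8_t ch)
{
    uint32_t next = (rx_head + 1u) & (RX_BUF_SIZE - 1u);
    if (next == rx_tail) {
        return false; /* buffer full — one slot is sacrificed to detect this */
    }
    rx_buf[rx_head] = ch;
    rx_head = next;
    return true;
}

/* Main-loop side: fetch one byte if available; returns false when empty. */
bool rx_get(uint8_t *ch)
{
    if (rx_tail == rx_head) {
        return false; /* buffer empty */
    }
    *ch = rx_buf[rx_tail];
    rx_tail = (rx_tail + 1u) & (RX_BUF_SIZE - 1u);
    return true;
}
```

Because the ISR only advances rx_head and the main loop only advances rx_tail, this single-producer/single-consumer layout needs no locking beyond the volatile qualifiers on a single-core MCU; bytes arriving while the main loop is busy simply accumulate until it catches up.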
https://community.cypress.com/thread/21223
The Plant Creation Wizard can be used to create a new plant using the appearance of the currently selected plant. This is useful when you wish to use an existing plant as a starting point for your own variations.

Note: Due to their complexity, you cannot currently create custom variations of UltraRes Plants®.

To create a custom plant:

1. Add the plant that you would like to use to your landscape design. For instructions, see Adding a Plant.
2. Modify the color and brightness of the plant as needed. Either adjust the Color and Brightness of the plant, or edit it using the Realtime Picture Editor.
3. Ensure that the plant is selected.
4. Click Tools and Plant Creation Wizard.
5. Click the Next button.
6. Click the Set File Name button, enter the desired Common Name for the new plant, and then click the Next button.
7. Enter the desired information for your plant and then click the Next button. This information is optional, but it is recommended to accurately enter the Mature age and Mature height, as they affect how the plant is sized when it is added to your landscape design.
8. Click the Finish button to complete the wizard. Your new plant will appear in the Custom category of the plant library.

Tips:
• If you wish to import a 3D plant in 3ds or skp format, use the Model Import Wizard.
• Variations of 3D plants can also be added using the Model Creation Wizard.
• To import a picture of a plant, use the Picture Import Wizard.
• Custom plants are saved to the following directory: \Users\user\Documents\Realtime Landscaping Plus version\Custom Data\type\Custom\ — where user is your Windows user name, version is the software version number, and type is "plant" for 3D plant models and "plant_picture" for 2D plant pictures.

See also: Picture Import Wizard, Model Import Wizard, Model Creation Wizard, Selecting Objects, Editing Materials
https://www.ideaspectrum.com/help/2016/plus/plantcreationwizard.php
Kotlin provides many advanced language features. However, these features can be difficult for beginners to understand, because in many cases we use them in conjunction with lambda syntax. Several built-in extension functions, provided in the Standard.kt file of the Kotlin library, streamline Kotlin coding.

Commonly used higher-order functions:

- filter {}: the filter function.
- takeWhile {}: takes elements in order and stops at the first one that fails the condition.
- let {}: calling let on an object makes that object the parameter of the function block, referred to inside the block as `it`. Another use of let is to avoid writing explicit null checks. The return value is the last line of the function block or a specified return expression.
- apply {}: member variables and methods of the object can be invoked directly inside the block. The return value is the calling object itself. apply can be combined with let, which is convenient.
- with(){}: the object is passed in as a parameter, and its member variables and methods can be called directly, without referring to the object through `it` the way let does.
- run {}: run is effectively a combination of let and with. It receives only a lambda as a parameter and returns, closure-style, the value of the last line or a specified return expression. It fits any scenario where let or with applies. Because run combines the two, it makes up for let's requirement that the object be referenced through the `it` parameter — in run, as in with, `it` can be omitted and the instance's public properties and methods accessed directly. It also makes up for with's lack of a null check on the object passed in — in run, null can be handled just as with let.
- also {}: calling also on an object makes that object the parameter of the function block, referred to inside the block as `it`. The return value is the object itself. The only structural difference between also and let is the return value: let returns, closure-style, the value of the last line of the block (or a default Unit value if the last action produces nothing), while also returns the object itself. also is typically used for chained calls of multiple extension functions.
- use {}: usually used for IO operations — that is, operations on resources that must be closed. With use, the resource is closed automatically. `it` refers to the object in scope.

*** it and this: ***

- let, also, and use all use `it` to refer to the current object.
- with, run, and apply all use `this` to refer to the current object, or omit it entirely.
- let, with, and run return closure-style (the value of the last line).
- apply and also return the object itself.

*** Usage scenarios: ***

- let: suitable when the object may be null and you want to operate only on a non-null value.
- with: when calling multiple methods of the same object, the object name can be omitted and its methods called directly. In Android it is often used in RecyclerView's onBindViewHolder to map the properties of a data model onto the UI.
- run: fits any scenario where let or with applies.
- also: fits any scenario where let applies; can be used for chained calls of multiple extension functions.
- apply:
  - Fits any scenario where run applies; generally used to initialize an object instance, operate on its properties, and finally return the object itself.
  - Can also be used when dynamically inflating an XML view and binding data to it.
  - Generally usable for chained calls of multiple extension functions.
  - Handles null checks when unwrapping multi-level nested data models.
Usage examples (the java.io imports were missing from the original listing and have been added so the file compiles):

```kotlin
import java.io.BufferedReader
import java.io.File
import java.io.FileReader

fun main(args: Array<String>) {
//    testFilter()
//    testLet()
//    testApply()
//    testApplyAndLet()
//    testWith()
//    testRun()
//    testAlso()
//    testReadFile()
//    testUse()
}

fun testFilter() {
    // Find the factorial of 0..6
    (0..6).map(::factorial).forEach(::println) // map converts List<T> into List<R>
    // Keep only the factorials that are odd
    println((0..6).map(::factorial).filter { it % 2 == 1 }) // [1, 1]
    // Keep the factorials at odd indices
    println((0..6).map(::factorial).filterIndexed { index, i -> index % 2 == 1 }) // [1, 6, 120]
}

fun testTakeWhile() {
    // Take numbers in order while the condition holds; stop at the first failure
    println((0..6).map(::factorial).takeWhile { it % 2 == 1 })
}

// Factorial formula: n! = 1 * 2 * 3 * ... * (n-1) * n
private fun factorial(n: Int): Int {
    if (n == 0) return 1 // the factorial of 0 is 1
    return (1..n).reduce { acc, total -> acc * total }
}

fun testLet() {
    val r = findTest()?.let {
        it.aa()
        println(it.a)
        1 + 1
    }
    println(r) // prints 2
}

fun testApply() {
    val r = findTest()?.apply {
        // Member variables and methods of Test can be used directly
        aa()
        println(a)
    }
    println(r?.b) // apply returns the receiver itself, so member variable b is available
}

fun testApplyAndLet() {
    // apply and let used together
    findTest()?.apply {
        aa()
        println(a)
        findCeShi()?.let {
            it.cc()
            it.c
            // Here `this` refers to the receiver of the apply call
            println(this.b)
        }
    }?.bb() // apply returns the receiver, so its members can be called here
}

fun testWith() {
    // The object is passed as a parameter of with
    val r = with(findTest()) {
        println(this?.a)
        this?.aa()
        // The return value is the last line of the block or a return expression
        this?.b
    }
    println(r) // prints parameter b
}

fun testRun() {
    val r = findTest()?.run {
        println(a)
        aa()
        b
    }
    println(r) // prints parameter b
}

fun testAlso() {
    findTest()?.also {
        it.a
        it.aa()
        it.b
    }?.bb() // also returns the receiver itself, so bb() can be chained
}

// Reading a file the ordinary way
fun testReadFile() {
    val file = File("E:\\blog-note\\test.txt")
    // Read the file contents through a buffered reader
    val bufferedReader = BufferedReader(FileReader(file))
    var line: String
    while (true) {
        // Read one line at a time; exit the loop at end of file
        line = bufferedReader.readLine() ?: break
        println(line)
    }
    bufferedReader.close() // close the buffered reader
}

// Reading the file with use; the result is the same as above
fun testUse() {
    val file = File("E:\\blog-note\\test.txt")
    // use closes the BufferedReader automatically
    BufferedReader(FileReader(file)).use {
        var line: String
        while (true) {
            line = it.readLine() ?: break
            println(line)
        }
    }
}

fun findTest(): Test? {
    return Test("a parameter", "b parameter")
}

data class Test(var a: String, var b: String) {
    fun aa() {
        println("aa is exe!")
    }

    fun bb() {
        println("bb is exe!")
    }
}

fun findCeShi(): CeShi? {
    return CeShi("c parameter", "d parameter")
}

data class CeShi(var c: String, var d: String) {
    fun cc() {
        println("cc is exe!")
    }
}
```
https://www.fatalerrors.org/a/commonly-used-higher-order-functions.html
Errata for Programming Groovy

The latest version of the book is P1.0, released Apr-09.

- PDF page: 1 — I'm reading the mobi version of the book on my Kindle, so I don't have a page number. The Kindle reports it as location 480-89. The section entitled "Groovy as Lightweight Java" seems to be missing an example after the last bullet point. It reads "For example in the following code, the learn methods return the class so you can chain calls to learn methods:", but there is no example — the JavaBeans section begins immediately after the colon. I assume this is an issue with the mobi formatting, but haven't verified. --Bryan Young

- Reported in: P1.0 (02-Sep-08), Paper page: 37 — According to the context, subsequent examples, etc., def access(location, weight, fragile) should be def access(weight, location, fragile). --will krespan

- Reported in: P1.0 (26-May-09), Paper page: 37 — I see the point about the reasoning of the p. 37 example. It would be nice if we could delete our errata if we made them incorrectly. --will krespan

- Reported in: P1.0 (12-Aug-08), Paper page: 37 — The last paragraph of the page reads "it leads to a problems, such as when..."; "a problems" is not correct. --Ethan M

- Reported in: P1.0 (10-May-09), Paper page: 37 — Will Krespan's erratum for page 37 is wrong. Venkat clearly states that they are out of order; it was done for illustrative purposes. Furthermore, in this example, the first parameter (location) is assumed to be a Map. Changing the order would make weight a Map, not something we want to do. --Andy O'Brien

- Reported in: P1.0 (01-Jun-11), PDF page: 39 — Substitute "print()" with println() in "The $ in front of the variable tells the method println()...". --Giacomo Cosenza

- Reported in: P1.0 (09-Apr-08), PDF page: 41 — In comparing Java code to Groovy you use the evocative example of the sword-fight scene from Raiders of the Lost Ark. However, your reference to a YouTube video () results in an error; YouTube has removed the video. You *can* still find it at AOL (video.aol.com/video-detail/indiana-jones-sword-fight/2100620337). Interestingly, it is a YouTube video! --Srivaths Sankaran

- Reported in: P1.0 (08-Mar-09), Paper page: 44 — In the table "Type x Condition for truth", for Iterator, instead of "has text" it should be "has next", shouldn't it? --Enio Pereira

- Reported in: P1.0 (14-Jun-11), PDF page: 47 — MacBook Pro, Snow Leopard, Groovy Version 1.7.6, JVM 1.6.0_24: on page 47 of the PDF, the Car definition's drive() method produces a compilation error, since a "final" variable cannot be modified outside of the constructor. --madhav

- Reported in: P1.0 (10-Oct-11), PDF page: 49 — In the second paragraph after the code example, the sentence is not clear. It says that the first argument is assumed to be a map, but in the call of the method the paired value arrives in second position. Confusing. --Ioan Le Gué

- Reported in: P1.0 (10-May-08), PDF page: 49 — "leads to a problems" should read "leads to problems".

- Reported in: P1.0 (25-Apr-08), PDF page: 66 — "Generics.java:10: cannot find symbol" needs to be "Generics.java:11: cannot find symbol", since line 11 is passing a String to the List. --Fabian Topfstedt

- Reported in: P1.0 (22-Jul-09), PDF page: 67 — Running the code in Groovy I get an error (the book says it will run):

```
1
2
hello
Exception in thread "main" org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object '3hello' with class 'java.lang.String' to class 'java.lang.Integer'
    at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.castToNumber(DefaultTypeTransformation.java:127)
    at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.castToType(DefaultTypeTransformation.java:256)
    at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.castToType(ScriptBytecodeAdapter.java:598)
    at Generics.main(GTest.groovy:18)
```

I am running: Groovy Version 1.6.3, JVM 1.5.0_17. --Sean M

- Reported in: P1.0 (14-Feb-11), PDF page: 68 — Executing the code

```groovy
def isPalindrome2(str) {
  if (str) {
    str == str.reverse()
  } else {
    false
  }
}
println "mom is palindrome? ${isPalindrome2('mom')}"
```

in a Groovy console (version 1.7.7) prints "mom is palindrome? true" instead of "mom is palindrome? null". --Jorge Lee

- Reported in: P1.0 (28-Mar-11), PDF page: 76 — In the C code example provided at the bottom of page 76, the char* argument is referred to as "argv" in the formal parameter list, but it is referred to as the variable "value" in the code block within the method definition. --Rick Manocchi

- Reported in: P1.0 (29-Sep-08), Paper page: 76 — The text says "BigNumber" instead of "BigDecimal" in the 2nd paragraph (not that I don't wish my raise was a BigNumber...). --Alan Thompson

- Reported in: P1.0 (14-Apr-08), PDF page: 80 — Section 4.4: "They allow us to agree upon a certain expectations to be fulfilled." should read "They allow us to agree upon certain expectations to be fulfilled." --Srivaths Sankaran

- Reported in: P1.0 (21-Oct-08), PDF page: 89 — 2nd paragraph: "So, the compiler treats the instance of BigNumber as Number." BigNumber should be BigDecimal.

- Reported in: B2.0 (28-Mar-08), PDF page: 109 — Missing: fix reference to figure here. --David Potts

- Reported in: P1.0 (16-Apr-08), PDF page: 121 — The page ends with ...elegant cousin of '\\d*\\w*". That should be ...elegant cousin of '\\d*\\w*'. --Srivaths Sankaran

- Reported in: P1.0 (03-Jun-08), Paper page: 129 — "Here's an example: friends refers to a map" — friends is a list, not a map. Also, the short names in the list definition (e.g. briang) are never used. --Alan Thompson

- Reported in: P1.0 (04-Dec-08), Paper page: 149 — The code works with Groovy 1.5.6, JVM 1.5.0_17-b04 on Ubuntu 8.10. I haven't found a JVM that works with Groovy 1.5.7; if I use 1.5.7 to run UsingDOMCategory.groovy I get "Caught: java.lang.StackOverflowError". Also, the languages.xml file in the code download differs from the example in the book on page 148. The XML file in the book works fine, but the one in the download returns empty results.

- Reported in: P1.0 (23-Nov-10), PDF page: 158 — "dynamic tying" should be "dynamic typing" (line directly after the header "Using XMLParser"): "The class groovy.util.XMLParser exploits the dynamic tying and metaprogramming capabilities of Groovy". --Norbert Beckers

- Reported in: P1.0 (15-May-08), PDF page: 162 — The code

```groovy
langs = ['C++' : 'Stroustrup', 'Java' : 'Gosling', 'Lisp' : 'McCarthy']
xmlDocument = new groovy.xml.StreamingMarkupBuilder().bind {
  mkp.xmlDeclaration()
  mkp.declareNamespace(computer: "Computer")
  languages {
    comment << "Created using StreamingMarkupBuilder"
    langs.each { key, value ->
      computer.language(name: key) {
        author(value)
      }
    }
  }
}
println xmlDocument
```

does not produce an indented XML document, but a single-line string. This is really important to mention, and confusing. I don't think there is a way to make the output indented with that builder. --Fred Janon

- Reported in: P1.0 (20-Jun-09), PDF page: 187 — Figure 12.2 has multiple instances of "it's" used as a possessive. Also, my PDF says it is the P2.0 printing, while this errata page says that P1.0 is the latest. --Grammar Police

- Reported in: P1.0 (27-Mar-09), Paper page: 197 — "one more more classes" -> "one or more classes". --Jeremy Flowers

- Reported in: P1.0 (15-Jun-08), PDF page: 202 — 4th paragraph, 1st sentence: "...the names of methods you want to add to one more more classes." The word "more" is repeated twice; I'm guessing the first "more" should really be "or". --Justin Spradlin

- Reported in: B2.0 (28-Mar-08), PDF page: 203 — Missing: please make the sect2's in this chapter appear in the table of contents — I don't know how to do that, but I think these section titles are important enough to appear in the toc. Missing: they should be title 1s then, if they're that important. –CE --David Potts

- Reported in: P1.0 (21-Jul-08), PDF page: 203, Paper page: 218 — I suggest discussing propertyMissing. I tried using method injection based on chapter 14 for a property and wondered why I was not seeing setXXX() and getXXX() like I expected. --Christopher M. Judd

- Reported in: B2.0 (28-Mar-08), PDF page: 211 — Integer.metaClass.static.isEven should be Integer.metaClass.'static'.isEven. --Tim Orr

- Reported in: B2.0 (28-Mar-08), PDF page: 214 — System.out.println at the bottom of the page, where the more consistent println could be used. Same issue on pages 216-221. --Tim Orr

- Reported in: P1.0 (25-Sep-10), PDF page: 228 — The source code delegateTo.invokeMethod(name, *varArgs) should be delegateTo.invokeMethod(name, varArgs); the original source code throws groovy.lang.MissingMethodException, and after the change the code works. --Steve Zhang

- Reported in: B2.0 (28-Mar-08), PDF page: 274 — Missing: fix the following xref. --David Potts

- Reported in: P1.0 (04-Jul-09), Paper page: 284 — The first line should read: "... of the method at() ...". --André

- Reported in: B2.0 (28-Mar-08), PDF page: 286 — Missing: fix the following xref. --David Potts

- Reported in: B2.0 (02-Aug-09), Paper page: 295 — Add the 'as' operator to the index, as described in section 3.4 on page 39. Glen Smith was discussing this here: grailspodcast.com/episode/68 (have posted a comment on the podcast site too). About 13 minutes in, he mentions 'as' as being the silver bullet for handling anonymous inner classes when doing SWT work. I know I'd read that inner classes weren't in Groovy at the time of reading either your book or GinA. --Jeremy Fowers
https://pragprog.com/titles/vslg/errata
Man Page Manual Section (3) - page: wctomb

NAME
    wctomb - convert a wide character to a multibyte sequence

SYNOPSIS
    #include <stdlib.h>

    int wctomb(char *s, wchar_t wc);

DESCRIPTION
    If s is not NULL, the wctomb() function converts the wide character wc
    to its multibyte representation and stores it at the beginning of the
    character array pointed to by s. It updates the shift state, which is
    stored in a static anonymous variable known only to the wctomb()
    function, and returns the length of said multibyte representation,
    that is, the number of bytes written at s. The programmer must ensure
    that there is room for at least MB_CUR_MAX bytes at s.

    If s is NULL, the wctomb() function resets the shift state, known only
    to this function, to the initial state, and returns nonzero if the
    encoding has nontrivial shift state, or zero if the encoding is
    stateless.

RETURN VALUE
    If s is not NULL, the wctomb() function returns the number of bytes
    that have been written to the byte array at s. If wc cannot be
    represented as a multibyte sequence (according to the current locale),
    -1 is returned.

    If s is NULL, the wctomb() function returns nonzero if the encoding
    has nontrivial shift state, or zero if the encoding is stateless.

CONFORMING TO
    C99.

NOTES
    The behavior of wctomb() depends on the LC_CTYPE category of the
    current locale.

    This function is not multithread safe. The function wcrtomb(3)
    provides a better interface to the same functionality.

SEE ALSO
    MB_CUR_MAX(3), wcrtomb(3), wcstombs(3)
http://linux.co.uk/documentation/man-pages/subroutines-3/man-page/?section=3&page=wctomb
Hi Sirs,

I have a class with two "overloaded" methods, i.e. the same method name with different numbers of arguments, and when I invoke one through an instance of the class I get a wrong-number-of-arguments error. Please see the code below:

    class Person
      def print_details(name)
        "Hey My Name is #{name}"
      end

      def print_details(name, age)
        "Hey My Name is #{name} and #{age}"
      end
    end

    person1 = Person.new
    puts person1.print_details("peter")
    puts person1.print_details("pk", 25)

I got the error: wrong number of arguments (1 for 2) (ArgumentError). Why does Ruby not support method overloading? Could you please explain the reason? Thanking you in advance.

on 2011-01-17 06:38
on 2011-01-17 08:31
on 2011-01-17 08:50

> class Person
>
>   def print_details(name)
>     "Hey My Name is #{name}"
>   end
>
>   def print_details(name, age)
>     "Hey My Name is #{name} and #{age}"
>   end
> end

You can use default arguments in this case:

    def print_details(name, age = nil)
      s = "Hey My Name is #{name}"
      s << " and age #{age}" if age
      puts s
    end

A piece of advice: a method's name and action should match. In this case you name the method print_details but do not print anything. It seems you expect to use the return value elsewhere; call the method just 'details' in that case.

> person1 = Person.new
> puts person1.print_details("peter")
> puts person1.print_details("pk",25)
>
> I got the error wrong number of arguments (1 for 2) (ArgumentError).

Yeah. The second definition now binds to 'print_details'.

> Why does Ruby not support method overloading?

That's the Ruby way. Ruby makes up for the lack of method overloading with its flexible argument processing, as Victor pointed out.

on 2011-01-17 10:22

Victor D. wrote in post #975387:
>.

OK, thank you. Yes, it overwrites the previous definition with the current one based on the method name; it does not consider the arguments, only the method name. So Ruby supports method overwriting, not method overloading.
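To make the accepted fix concrete, here is a self-contained sketch of the default-argument approach, renaming the method to details and returning the string, as the reply suggests:

```ruby
# One definition per method name: a default argument covers both
# call shapes that the question tried to handle with overloading.
class Person
  def details(name, age = nil)
    s = "Hey My Name is #{name}"
    s << " and age #{age}" if age
    s
  end
end

person = Person.new
puts person.details("peter")    # => Hey My Name is peter
puts person.details("pk", 25)   # => Hey My Name is pk and age 25
```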
Thank you for giving me an idea.

on 2011-01-18 02:09

From my understanding, taking the number of arguments, the runtime types of arguments, or even the formal parameter names into account for message-to-method mapping would complicate the language implementation. This is also the reason why Ruby does not support multiple dispatch at the language level. Common Lisp does support multiple dispatch, but it does so by including a type modifier on the arguments so that they can be checked against the actual runtime types of the arguments, which, IMO, is as unpleasant as doing something like the following in a single method definition:

    case arg
    when String
      # ...
    when Symbol
      # ...
    end

Like Anurag said, Ruby already has flexible argument processing; it satisfies our needs, though in a different style.

on 2011-01-18 02:36

On Mon, Jan 17, 2011 at 7:09 PM, Su Zhang <zhangsu@live.com> wrote:
> # ...
> end

I went through a Lisp book over the winter break, and I took it a different way: that it was essentially a way to define a receiver for a function (not sure if that is good terminology). In other words, I took it like:

    class Array
      def size
        # Array size-finding code
      end
    end

    class Hash
      def size
        # Hash size-finding code
      end
    end

Now hash.size invokes a different method than array.size. So in Lisp this would maybe look like:

    (defmethod size ((self array))
      ; array size-finding code
    )

    (defmethod size ((self hash-table))
      ; hash-table size-finding code
    )

And unlike all the other functions in Lisp, which share the same namespace, size knows which function to invoke: (size ary) invokes a different function than (size hash). So I think defmethod is an appropriate name, because that is how I consider them: modular like methods, as opposed to created in one canonical place like a case statement.
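The receiver-based dispatch sketched in that post can be made runnable in Ruby. The describe method name below is hypothetical, chosen so as not to redefine the real Array#size:

```ruby
# Single dispatch on the receiver: each class carries its own
# implementation under the same method name, so no case statement
# is needed at the call site.
class Array
  def describe
    "array of #{size} element(s)"
  end
end

class Hash
  def describe
    "hash with #{size} key(s)"
  end
end

puts [1, 2, 3].describe   # => array of 3 element(s)
puts({a: 1}.describe)     # => hash with 1 key(s)
```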
on 2011-01-18 09:02

On Tue, Jan 18, 2011 at 2:09 AM, Su Zhang <zhangsu@live.com> wrote:
> From my understanding, taking the number of arguments, runtime type of
> arguments or even the formal parameter names into account for message to
> method mapping will complicate the language implementation.

... and method lookup tables will potentially be larger. I don't know whether this is a critical factor, though.

> # ...
> when Symbol
> # ...
> end

For multiple arguments the unpleasant variant looks like this (1.9 and later):

    def f(*args)
      case args.map(&:class)
      when [Array, Array]
        puts "Array * 2"
      when [String, Array]
        puts "String, Array"
      when [Hash]
        puts "Hash"
      else
        raise ArgumentError, "Don't know what to do with %p" % [args]
      end
    end

Cheers

robert

on 2011-01-18 10:28

On Tue, Jan 18, 2011 at 2:09 AM, Su Zhang <zhangsu@live.com> wrote:
>

There is also a more fundamental reason to consider disallowing method overloading with different argument signatures: every method that shares the same name (identifier) should do the same thing. If, however, the arguments have different types, the methods cannot do the same thing, strictly speaking. Of course, this is an extreme point of view, but you have to ask yourself where you draw the line for "same". In languages with interfaces (e.g. Java), ideally all appropriate argument types implement the same interface and you need only one method anyway, or all argument types inherit from a common base class. In Ruby, however, which has duck typing, you can pass in anything as long as "anything" responds to all the methods needed, hence there is less demand for overloading. You can do:

    class IntSum
      def initialize; @s = 0; end
      def add(x)
        @s += x.to_int  # or x.to_i
      end
    end

where you would need multiple methods for different argument types in other languages.
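Robert's class-list dispatch runs as posted; the variant below returns strings rather than printing them, a small modification (not from the thread) that makes the results easy to check:

```ruby
# Dispatch on the classes of all arguments at once: case/when uses ===,
# and Array#=== is element-wise ==, so [Array, Array] matches exactly
# a two-Array argument list.
def f(*args)
  case args.map(&:class)
  when [Array, Array]  then "Array * 2"
  when [String, Array] then "String, Array"
  when [Hash]          then "Hash"
  else
    raise ArgumentError, "Don't know what to do with %p" % [args]
  end
end

puts f([1], [2])        # => Array * 2
puts f("a", [1, 2])     # => String, Array
puts f({"k" => 1})      # => Hash
```

Any argument combination not listed falls through to the else branch and raises ArgumentError, which mirrors how an unmatched overload would fail at compile time in a statically typed language.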
https://www.ruby-forum.com/topic/890619