Bryan Thompson 2005-07-14

Hello, It seems to me that the cache eviction policy is too eager. As it stands, in the default configuration the CacheRecordManager coordinates with the MRU CachePolicy to evict the least recently used object from the cache once the cache is full. However, if the application holds a hard reference to this object, then the data associated with the cache entry is still in memory. Hence the only savings is that the cache hashtable size is kept to a maximum value. (Side note: it looks like the Hashtable should be initialized to the configured maximum cache size to avoid sporadic delays as the hash table grows in capacity.)

Further, there appear to be three layers of cache in play. The CacheRecordManager has its private CacheEntry objects, which are what it inserts into the MRU CachePolicy (and hence into the Hashtable). However, the MRU class has its own private CacheEntry class, which adds the prior and next links used by that CachePolicy. Finally, there is the actual application object - a reference to this is held by the CacheRecordManager's CacheEntry object.

Each call to CacheRecordManager.update() is required to test against the cache. It seems that an application could dramatically improve update performance by coordinating with the cache entry, e.g., by holding a hard reference to the cache entry. This would obviate the need to test for a cache hit in update() altogether, since the application object would directly hold the reference to the cache data. Another way to do this is to have the application object optionally implement a cache entry interface. If that interface is implemented, then we can avoid cache tests for in-memory objects.

Finally, cache eviction is only reasonable when the application object is finalizable. As I noted before, if there is a hard reference to the application object, then evicting the object from the cache appears to do nothing (other than force a sync to disk).
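Bryan's observation holds in any garbage-collected runtime: removing a cache entry frees nothing while the application still holds a hard reference to the object. A minimal Python sketch (the Record class and recid values are hypothetical stand-ins, not jdbm code) makes this visible with a weak-reference probe:

```python
import gc
import weakref

class Record:
    """Hypothetical stand-in for a cached application object."""
    def __init__(self, recid):
        self.recid = recid

cache = {}        # plays the role of the MRU cache's hashtable
rec = Record(1)   # the application holds a hard reference
cache[1] = rec

probe = weakref.ref(rec)  # observes whether the object is still alive

# "Evicting" the record saves only the hashtable slot: the application's
# hard reference keeps the data itself in memory.
del cache[1]
assert probe() is not None

# Only when the application drops its last reference can the object go away.
del rec
gc.collect()
assert probe() is None
```

In CPython the last assertion holds as soon as the final reference is dropped; the gc.collect() call is only there for runtimes without reference counting.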
I see that there is code in CacheRecordManager.update() to place the application object back into the cache so that the live object is always used in preference to the object on disk, which ensures that we never get more than one reference to a persistent application object with the same recid. If we used a protocol to perform cache eviction only when the application object was finalized, then we could drop this, since there would never be an opportunity to update an object that was not already in the cache. Thoughts?

-bryan

Alex Boisvert 2005-07-15

Bryan, I see where you're coming from. The best cache management policy is usually a function of the type of application/usage. It seems to me that what you're looking for is a cache policy that would use weak references. This type of cache would keep track of the objects you have in memory and lower the cost of updating the LRU structures for every access. What do you think?

alex

If anyone does decide to pursue this, I have a WeakValueHashMap that I put together a few years ago for a similar purpose. It's been in use for about 3 or 4 years now, so the kinks are pretty much worked out. In terms of optimization, there may be some tuning opportunities in determining when to flush references (right now I do it every time an object is inserted - it may be better to have a configurable load factor and only purge every XXX puts...). Here's the code:

package com.trumpetinc.util;

import java.util.*;
import java.lang.ref.*;

/**
 * @version 1.0
 * @author Kevin Day (Trumpet, Inc.)
 */
public class WeakValueHashMap extends HashMap {

    private static final long serialVersionUID = 3977303234983573560L;

    ReferenceQueue refQ = new ReferenceQueue();
    HashMap reverseMap = new HashMap();

    /**
     * Constructor for WeakValueHashMap.
     * @param initialCapacity
     * @param loadFactor
     */
    public WeakValueHashMap(int initialCapacity, float loadFactor) {
        super(initialCapacity, loadFactor);
    }

    /**
     * Constructor for WeakValueHashMap.
     * @param initialCapacity
     */
    public WeakValueHashMap(int initialCapacity) {
        super(initialCapacity);
    }

    /**
     * Constructor for WeakValueHashMap.
     */
    public WeakValueHashMap() {
        super();
    }

    /**
     * Constructor for WeakValueHashMap.
     * @param m
     */
    public WeakValueHashMap(Map m) {
        super(m);
    }

    /*
     * @see Map#get(Object)
     */
    public synchronized Object get(Object key) {
        clearUnreferencedValues();
        Reference val = (Reference) super.get(key);
        return val != null ? val.get() : null;
    }

    public synchronized void clearUnreferencedValues() {
        Object val = refQ.poll();
        while (val != null) {
            Object key = reverseMap.remove(val);
            this.remove(key);
            val = refQ.poll();
        }
    }

    /**
     * Puts the specified value into the Map. Ensures that the value
     * is a weak reference. Note that it is not possible to store
     * WeakReferences themselves into this Map - they will be dereferenced
     * during get().
     * If an attempt is made to store a null value (or if a Reference is
     * passed in that returns null from its get() method), the key and
     * value will NOT be added to the map, and null will be returned.
     * @see Map#put(Object, Object)
     */
    public synchronized Object put(Object key, Object value) {
        clearUnreferencedValues();
        if (value instanceof Reference)
            value = ((Reference) value).get();
        if (value != null) {
            Reference nextRef = new WeakReference(value, refQ);
            reverseMap.put(nextRef, key);
            return super.put(key, nextRef);
        } else {
            return null;
        }
    }

    /**
     * Returns a collection containing the values contained within
     * this map. This collection is a snapshot of the map at the time
     * the method is called.
     * @see Map#values()
     */
    public synchronized Collection values() {
        Collection snapshot = new ArrayList(size());
        for (Iterator it = super.values().iterator(); it.hasNext();) {
            Reference ref = (Reference) it.next();
            Object val = ref.get();
            if (val != null)
                snapshot.add(val);
        }
        return snapshot;
    }

    /*
     * @see Map#containsKey(Object)
     */
    public synchronized boolean containsKey(Object key) {
        return get(key) != null;
    }
}

Bryan Thompson 2005-07-28

Kevin, If it is OK with you, I would like to pursue this and create another CachePolicy implementation that uses the WeakValueHashMap and contribute the result back into the jdbm project, e.g., as files within the jdbm.helper package and a patch to the record manager initialization so that the policy may be configured easily. The contract for this policy would be that objects remain in cache until they are no longer in use. As a consequence, the cache size can grow without limit, though I would expect that a reasonable upper bound would be observed in practice.

Right now I see two problems with the default jdbm MRU cache policy: (1) cache eviction is too eager, since objects are evicted from the cache once it reaches its initial capacity; and (2) since objects may be evicted eagerly from the cache, it is possible to have a runtime object for some persistent record, have it flushed from the cache, and then have a second runtime object for the same persistent data created, since the original runtime object is no longer in the cache even though it still exists and is in use by the application. The latter is a BIG problem for our application since it breaks the guarantee that there is never more than one runtime object for a given persistent object. That guarantee is what allows us to use Java reference tests, e.g., x == y, to test whether two persistent objects are the same. This problem does not show up, of course, until you reach capacity on the cache.

Thanks, -bryan

I'm glad to hear that you are pursuing this.
It's definitely been on my mind as well. Please let me know if you'd like to exchange design ideas, etc... Some thoughts:

1. It will probably be a good idea to combine the MRU with the weak reference. Caching is often extremely useful, even if an object would technically have been removed by the GC. I've been thinking about the best way to do this - I'm thinking something along the lines of a two-level cache. MRU is the top level, weak reference is the next. When an object is evicted from the MRU, it gets added to the WR cache. If an object is pulled from cache, it gets added to the top of the MRU, regardless of whether it was pulled from the MRU or the WR.

2. Because the objects in the cache are actually the BTree pages, which, in turn, hold references to their value objects, a WR cache implemented at the record manager level may not behave as expected. It is quite possible for the BTreePage to be evicted from cache while you still maintain a reference to a value object. I haven't spent enough time thinking about the correct solution here... you don't really want to hold a reference to the BTreePage in the value object. If the BTree didn't contain objects, but instead contained recid values that were used to load from the record manager, then a WR cache would work properly - but you will wind up with poor disk performance (see the other thread in the forum about NIO implementations for a discussion on why this would be). It almost seems like we need to cache value objects in the BTree itself instead of in the record manager. During restore of a primary BTree page, it would look the primary key up in the BTree's object cache prior to deserialization of a given object.

This is some really rough thinking right now - I just want to make sure that the entire problem is considered before we start writing code :-) I'm eager to hear your thoughts on the above!
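The two-level idea above (a bounded MRU front cache backed by a weak-reference level) can be sketched in a few lines of Python, using OrderedDict for the MRU level and the standard library's WeakValueDictionary for the weak level. Class and method names here are illustrative, not jdbm's:

```python
from collections import OrderedDict
from weakref import WeakValueDictionary

class TwoLevelCache:
    """Sketch of a two-level cache: bounded MRU in front, weak references behind.

    Objects evicted from the MRU are demoted to the weak level, so they stay
    findable for as long as the application still holds a hard reference."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.mru = OrderedDict()            # level 1: strong references, bounded
        self.weak = WeakValueDictionary()   # level 2: weak references, unbounded

    def put(self, recid, obj):
        self.mru[recid] = obj
        self.mru.move_to_end(recid)         # most recently used goes to the back
        if len(self.mru) > self.capacity:
            old_id, old_obj = self.mru.popitem(last=False)
            self.weak[old_id] = old_obj     # demote instead of discarding

    def get(self, recid):
        obj = self.mru.get(recid)
        if obj is None:
            obj = self.weak.get(recid)      # second chance via the weak level
        if obj is not None:
            self.put(recid, obj)            # promote back to the MRU front
        return obj
```

Because the weak level never pins an object, a demoted entry lives exactly as long as the application still references it, which preserves the one-runtime-object-per-recid guarantee discussed above.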
- K

Bryan Thompson 2005-07-28

Kevin, I was also thinking that an MRU policy would still be useful, but I was thinking to migrate the objects to the MRU cache when they were evicted from the WR cache. MRU cache objects would be "inserted" on eviction from the WR cache and roll off the end of the MRU cache if they are not fetched before too many evictions of other objects go by. This almost sounds like a ring buffer for the MRU cache. I'm not sure which design (if either) is "right", but your point is well taken. There may (or may not - this is an empirical question and is probably application specific) be a bias for fetches against persistent objects that have already been fetched, even after the JVM collects the corresponding runtime objects.

One thing that I have been playing with in our application objects is a small local MRU cache for references resolved from that object. A similar MRU cache could be placed in front of a btree for application objects resolved against that btree. This is something that I am planning to do within our object manager for certain well-used btrees. There is no reason why this could not be done for any btree. In fact, it still makes sense when the btree would resolve a recid against the record manager, since you can just cache the object that would be resolved rather than the recid of that object. I tend to hold a reference to well-used btrees rather than letting them get collected. I could see wrapping a BTree with a cache policy, much like the recman, but perhaps with an optional "Resolver" interface that could be used to cache objects that are being indirected through the btree.

Also, there are two things that bother me about the MRU cache policy: (1) the hashtable should be initialized to its capacity; and (2) we are coining Long objects all over the place, since the Hashtable requires an object (rather than a primitive long) for its key. It would be nice to get rid of that object allocation overhead.
-bryan

I don't think you want to have the WRQ in front. Remember, when you add the object to the MRU, it will get a reference, but it will no longer be in the WRQ. When the MRU evicts the object, you will be in danger of violating your A1 == A2 requirement.

Per your comment on holding the reference to the BTree - the BTree does not hold references to the value objects. The BTreePage holds the reference, and the page is *not* referenced by the BTree itself.

Per your comment on the current MRU implementation (with hashtable): In terms of maxing the capacity up front: sure - that's always a good idea with this kind of thing. But it won't make much difference. I suspect that the hash table will be at its final size within a few minutes of running...

With respect to your comment on Long vs. long: It does seem to be a bit of a performance drain to be continuously creating and destroying Long objects... I believe that the Jakarta project has a sub-project with implementations of HashMap that use longs as keys - it may be worth thinking about. That said, remember that the key in the BTreePage is already going to be a Long - so the object is already created, really. The optimization may be better performed by overriding the record manager methods that accept long so they also accept Long. That way, the record manager could re-use the Long object that is already created by the BTreePage... Just something to think about. I don't think it would be a good idea to try to make the BTree implementation use native longs as keys - the existing design is way too elegant - so you are going to have Long objects floating around anyway... optimizing the cache manager hashtable to use native longs may not make much of a difference.

I've actually been thinking quite a bit about a slightly improved BTree implementation. In my app (and I think this would be an appropriate strategy for many apps), I differentiate between primary key BTrees and secondary key BTrees.
If we added WRQ caching to the primary BTree implementation, and leave the MRU processing in the record manager, I think it might make for a very elegant solution. In fact, it gives the behavior you described above, without any of the potential pitfalls I described (unless I'm missing something :-) ). The object wouldn't come up for eviction until the BTreePage that references it is evicted from the MRU, so you still effectively get MRU caching behavior. The BTreePage would then check its parent BTree to see if a given key is in the WRQ before performing deserialization. If it is in the WRQ, it retrieves it and adds it to its internal list. If it is not in the WRQ, it creates a new one, then adds it to the WRQ.

Some other features that would be extremely nice that I'm thinking about:

1. A key-aware serializer. This would add an extra method to the serializer interface, overloading the deserialize() method so it accepts the key in addition to the byte array. This would allow the deserialize method to populate the created object with data from the key, instead of having to store the key inside the byte array itself (which is just a silly waste of disk space).

2. Index triggers - I haven't thought this one through all the way, but it would be insanely cool if a BTree could be configured with listeners that get told when a value is being added or removed. This could be used to keep a primary key BTree in sync with several secondary key BTrees, without having to manually code for every situation. The listeners should have an opportunity to approve the change with one callback, then execute the change on a second callback. The approval is to give a secondary index an opportunity to "just say no" if a key violation would occur. I have a bunch of code that keeps indexes consistent, but it feels like I've written the same code over and over again for each business object, and I'm sure that there must be a way to factor out at least some of the behavior.

3. Encryption - there is no reason the BTreePage serializer couldn't encrypt its contents. I'd be inclined to use a low-level implementation (such as Bouncy Castle) instead of the Java CAPI, so it will continue to work with older JVMs, and you won't have the junk overhead of the Java Cryptography API.

Whew - it looks like I've got my hands full :-)

- K

Bryan Thompson 2005-07-29

Kevin, I'll respond in more detail later. For the moment, a few things:

- I would like to proceed by small changes initially.
- Maybe we should take this to a telecon to talk about what we want. I think that we are actually pretty well aligned and that speaking directly might clear things up.
- I am currently working on a jdbm-related project and I can devote some time to feature improvement during the course of that effort.

Also, for the current jdbm maintainers: I would be happy to do a 1.0 release of jdbm since that seems to be stalled. I would do this as an rc1 first to verify things and then a 1.0 in a few weeks. This would be a feature-freeze release. I've done SourceForge releases before, so this should not be too difficult. This might also make it easier to start thinking in terms of 2.0 feature development.

-bryan

Bryan Thompson 2005-07-29

Kevin, The reason that I mentioned holding a reference to the btree was that if you do this, then it is easy to wrap the btree with an object (vs. BPage) cache. This works even if you need to indirect through a recid to an object globally stored in the recman vs. in the value[] on the BPage. I've done this sort of index trigger thing in our framework, in which the primary data structure is a distributed linked list. I have it slated to do a variant in which the primary data structure is a clustered index - just like your "primary key index" - and in which the secondary indices are value indices. This is all open source already, but there may be aspects that could be re-factored into jdbm.
A related feature would be an Iterator (or modified TupleBrowser) for the BTree that supports concurrent modification, i.e., it registers a transient listener for the BTree and updates its state accordingly when there is a change to the state of the BTree that would affect the traversal ordering. This is something that I need to solve anyway for our application. Unlike the iterator/tuple browser, secondary indices would need to register persistent listeners.

-bryan

OK - now that I think about it, this is exactly what I do. I have a BTree factory that caches the top-level tree object when it is loaded for the first time. What I don't do is store the reference to the tree in the value objects created inside the tree - I always grab the trees from the factory when I need them. That is nitty-gritty detail that probably doesn't matter much. Of course, you are correct that you need to hold references to the BTree in my WRQ strategy. Otherwise, you could wind up with two BTree objects holding different copies of what should be the same object.

In terms of loading directly from the record manager, I'm not convinced yet that this is an appropriate strategy for most implementations. I originally had my app configured to do exactly this, but the performance was terrible. This may imply that the underlying record manager has some performance problems. In fact, I'm going to start another thread with my thinking on that...

Your idea of a "change safe" tuple browser is definitely good. I've been able to refactor my code so I don't need that capability in my current application, but there really is no good reason not to have it (outside of the fact that it requires digging into some really low-level stuff in the BTree implementation, and will probably require a heck of a lot of testing to ensure that it works properly as the BTree reorganizes itself as records are added and removed...).
The interface for this type of browsing that I've always found most useful from a top-level application perspective is to be able to choose a particular index, then be able to walk that index forwards or backwards (or search for a particular element in the index), making changes as you go.

Just had a thought: if we are using a WRQ, then it should be possible to not have to store the recid in the value object at all. If the object is going to be deleted, it can be looked up in the WRQ and the recid can be obtained that way. That may make JDBM significantly more like a transparent object database...

Anyway - all of this is still in the very early "noodling around" phase. I think it's worth continuing the conversation until we start to home in on the "best" way to do this... Cheers!

- K
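The "look the recid up instead of storing it" idea from the thread can be sketched in Python: a forward weak-value map from recid to object, plus a reverse map keyed on object identity that is pruned when the object is collected. All names here are illustrative; this is not jdbm code.

```python
import gc
import weakref

class RecidRegistry:
    """Sketch: recover an object's recid without storing it on the object."""

    def __init__(self):
        self.by_recid = weakref.WeakValueDictionary()  # recid -> object
        self.by_id = {}                                # id(object) -> recid

    def register(self, recid, obj):
        self.by_recid[recid] = obj
        self.by_id[id(obj)] = recid
        # prune the reverse entry once the object is garbage collected
        weakref.finalize(obj, self.by_id.pop, id(obj), None)

    def recid_of(self, obj):
        return self.by_id[id(obj)]

# Usage: the value object itself carries no recid field.
class Value:
    pass

registry = RecidRegistry()
v = Value()
registry.register(42, v)
assert registry.recid_of(v) == 42
assert registry.by_recid[42] is v
```

One caveat for a real implementation: identity values like id() can be reused after collection, which is why the finalizer must remove the reverse entry before a recycled identity could collide with a live one.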
http://sourceforge.net/p/jdbm/discussion/12569/thread/a91476cc
You must have seen a lot of very fancy status bars in different samples and commercial applications, with progress bars, animation, images, etc. Here, I present a technique for making a text-only status bar with many text-only panes and its own tool tips, extracted from the status bar panes themselves. You can easily replace the standard status bar in an existing SDI/MDI app by including #include "TextualStatusBar.h" at the top. For a dialog-based app you can create it in OnCreate(). Although this might not be the best status bar around, I've shown you the way to deal with a status bar and a tool tip control as a child window. Furthermore, there are a couple of other (read: better!) ways for adding tool tips to any control. The technique I used in the sample is the same one that I used in an app because it was a requirement. I caught WM_NCHITTEST over the status bar and updated the tool tip text. This example also teaches how to get to the individual panes of the status bar and perform an operation on them. It also illustrates the tight connection between the MFC CStatusBar and CStatusBarCtrl classes. Please do not hesitate to mail me any bug, suggestion, clarification, or query.
http://www.codeproject.com/Articles/497/Text-Only-Status-Bar?fid=625&df=90&mpp=10&sort=Position&spc=None&tid=77680
✨✨Big Python Tutorial✨✨

✨✨Hello✨There✨✨ Welcome to my boring fun 🐍🐍Python🐍🐍 tutorial, where I will be teaching YOU Python. This tutorial will cover the basics of Python, and I promise you that you will learn a lot. Without further ado, go and learn Python!

Credits to @RhinoRunner for helping with the tutorial. Me and @RhinoRunner will be rolling out a part 2 soon. (Please don't think I am cycle squeezing; this tutorial took me around 1000 lines of markdown.) I have attached a repl down below that is a copy of this tutorial. (It's split up into 3 files, because the file became too large for our computers to handle.)

Note: Feel free to tell me if something is unclear or if it isn't correct, or if there is anything I can add.

Edit: Added Functions
Edit2: A website for all of this is in the works! I will be adding it soon.

Course Content

Before I get started, here is the course content:

- Comments
  - Single-Line
  - Multi-Line
- Print Statements
  - Single-Line
  - Multi-Line
- Data Types
  - Strings
  - Integers
  - Floating Points
  - Booleans
- Variables
  - Proper Variable Names
  - Printing them
  - Changing them
  - Type Function
- f-strings
- .format
- String Methods
  - Indexing
  - Slicing
- Concatenation
  - Printing Using Concatenation
  - Assigning Variables Using Concatenation
- Type Casting
- Operators
  - Basic Operators
  - Assignment Operators
  - Comparison Operators
- Getting Inputs
  - Storing inputs in a variable
  - Specifying input type
  - Practice Problem
- Lists
  - List Methods
- Conditional Statements
  - if
  - else
  - elif
  - pass
  - logic
  - Practice Problem
- Loops
  - for Loops
  - while Loops
  - Practice Problem
- Escape Codes
- Functions
  - Return
  - Parameters
  - Lambda
- Scope
  - Global
  - Local

Let's Get Started!

Single-Line Comments

Comments are blocks of code that are ignored by the computer and can be used to say stuff about your code. The way to tell your computer that you are entering a comment is with a #.
So here's an example:

#This is a comment

Multi-Line Comments

You can also have comments that span multiple lines, which are done with three single or double quotes to start and three to end. Here's an example:

#This is a single line comment
'''
This
is a
multiline
comment
'''

Comments are mostly used when you want to say something about how your code works, so when you revisit your code you aren't confused about what you were doing. Another common use is when you want to leave some code out of your program and not have it run, without completely removing it from your code.

Printing

The print() function prints (or outputs) something to the console.

Single-Line Printing

Here's an example:

print("Python Rules") #You can use either double quotes or single quotes, but using double quotes is the best practice
print('Python Rules') #This works too

This would output

Python Rules
Python Rules

Multi-Line Printing

If you want to print text across multiple lines, then use three single quotes or double quotes to start and three to end. Here's an example:

print("Here's my to do list")
print("""
To Do:
[1] Code In Python
[2] Code In Python
[3] Code In Python
""")

The Output:

Here's my to do list

To Do:
[1] Code In Python
[2] Code In Python
[3] Code In Python

Data Types

str or string is a data type that is a sequence of characters and is surrounded by double or single quotes Ex: "Python Rocks", 'Python is Fun'

int or integer is any positive or negative whole number (without decimals) Ex: 5, 69, -568

float is any positive or negative number with a decimal Ex: 69.69, 5.0, -15.89

bool or boolean is a True or False value (it can only be True or False) that can be used for logic (make sure to capitalize it)

Variables

Variables are names that are assigned to values and are used to store information.

Proper Variable Names

They can use numbers, letters, and underscores, but can't start with numbers. No spaces are allowed either (use underscores instead).
You should use variable names that relate to the value being stored, and variable names that are also short. Here are a few examples of valid and good variable names:

Apple_Cost = 0.49
Apples_Amnt = 5

Printing Variables

You can also print variables by putting them in the parentheses of the print() function:

name = "IntellectualGuy"
print(name)

It would output

IntellectualGuy

Reassigning Variables

You can also change the value of a variable by simply reassigning it:

Pog = False
Pog = True

You can also switch the data type:

Bananas = False
Bananas = 3

Type Function

After you change a variable a lot, you might want to find out what data type it is, so you use the type() function.

Apples = 5
print(type(Apples))

It would output <class 'int'>, which basically means that the data type of the variable Apples is an int, or integer.

F-Strings

F-strings print out text with variables inserted in between. This will become more useful when you learn about inputs. Here's an example:

username = "IntellectualGuy"
print(f"Hello {username}")
cycles = 100
print(f"Hello {username} you have {cycles} cycles")

It would output:

Hello IntellectualGuy
Hello IntellectualGuy you have 100 cycles

.format()

Another way to do that is using the .format() method, which you use like so:

print("Hello {}".format("IntellectualGuy"))
#You can insert more than one value
print("Hello {} you have {} cycles".format(username, cycles)) #You can use variables
print("How many {fruit} would you like to buy? Each costs ${cost}.".format(fruit = "bananas", cost = 0.75)) #You can use named placeholders
#You can also use indexing
print("Hello {0} you bought {1} bananas".format("IntellectualGuy", 3))

Output:

Hello IntellectualGuy
Hello IntellectualGuy you have 100 cycles
How many bananas would you like to buy? Each costs $0.75.
Hello IntellectualGuy you bought 3 bananas

String Methods

There are many different string methods that you can use. String methods are basically functions that you can apply to your strings.
For example, the print() function that I explained earlier can print a string to the console. Something to know before starting string methods is that each character in a string has an index value, starting from 0. Here's an example:

"H e l l o"
 0 1 2 3 4
#This shows, in the string "Hello", what characters are in what index positions

Index positions can also be in the negatives, like so:

"H  e  l  l  o"
-5 -4 -3 -2 -1

Negative index positions are mainly used for things like getting the last character of a string.

Indexing

Indexing is getting the value at a certain index of a string, and you use it like so:

greeting = "Hello"
print(greeting[3])

Output:

l

You can also use negative indexing to get a character:

greeting = "Hello"
print(greeting[-1])

Output:

o

Slicing

Slicing is used to get a certain part of a string, using a start index and an end index (the end index is not included), like so:

greeting = "Hello"
print(greeting[1:3])

Output:

el

Length

The len() function is used to get the length of a string, with the syntax len(string), and is used like so:

greeting = "Hello"
print(len(greeting))

Output:

5

Upper

The .upper() method is used to uppercase a string, like so:

name = "bob"
print(name.upper())

Output:

BOB

Lower

The .lower() method is used to lowercase a string, like so:

name = "SAM"
print(name.lower())

Output:

sam

Concatenation

Concatenation is when you add/join two or more strings together. For example:

User_Type = "Admin"
print("You are the " + User_Type)

Output:

You are the Admin

You can also assign a variable using concatenation. For example:

User_Type = "Guest"
Message = "You are a " + User_Type
print(Message)

Output:

You are a Guest

Remember, you can concatenate more than two strings together. For example:

User_Type = "Admin"
Message = "Hello " + User_Type + " What would you like to do?"
print(Message)

Output:

Hello Admin What would you like to do?

Type Casting

You can't concatenate strings with integers/floats/booleans directly; you would have to cast the type. Let's say you wanted to print out a message with a number using concatenation; then you would use type casting, which lets you change the type of a variable.
Ex:

age = 5
print("Bob you are " + age + " years old.")
#That would produce a TypeError, and the way to fix that is to cast the integer age into a string.

#Let's do it again without producing an error
age = 5
print("Bob you are " + str(age) + " years old.")

#The syntax for type casting is the data type you want to convert the variable to, with the variable inside parentheses
#datatype(variable)

#Let's try another example
Name = "Bob"
print("I know someone who is 5 years old, his name is " + int(Name))
#This would produce an error because you can't convert the string into an integer, as "Bob" is not a number

Note: Type casting only temporarily changes the data type of the variable, not permanently. If you wanted to change it permanently, though, you could reassign the variable to the casted value, like so:

age = 5
age = str(age)
#Now instead of being the integer 5, age is the string "5"

Operators

Basic Operators

You can do basic math with Python.

Addition - you can add two numbers with a plus sign + Ex: 5 + 5, 6.9 + 9.6 Output: 10, 16.5

Subtraction - you can subtract two numbers with a minus sign - Ex: 10 - 5, 9.6 - 6.9 Output: 5, 2.7 (note that float results can show small rounding errors)

Multiplication - you can multiply two numbers with an asterisk * Ex: 5 * 5, 5.5 * 3 Output: 25, 16.5

Division - you can divide two numbers with a forward slash / (division always gives a float) Ex: 50 / 5, 20.4 / 4 Output: 10.0, 5.1

Power - you can get a power of a number with two asterisks ** Ex: 2 ** 3 Output: 8

Modulo - you can get the remainder between two numbers with a percentage sign % Ex: 69 % 6, 25 % 5 Output: 3, 0

Floor Division - division that discards the decimal part, with two forward slashes // Ex: 10 // 3, 17 // 4 Output: 3, 4

You can assign variables to use them for math Ex:

a = 10
b = 5
print(a + b)
print(a - b)
print(a * b)
print(a / b)
print(a % b)

Output:

15
5
50
2.0
0

Assignment Operators

Assignment operators are used to assign variables using operations. Here is a list of them:
+= -= *= /= %= //= **= Now let's have a few examples of using them a = 15 b = 10 #After I print each value out prentend that I am reseting the value of a a += b print(a) a -= b print(a) a *= b print(a) a /= b print(a) a %= b print(a) a //= b print(a) a **= b print(a) #I won't show the output for this because it is to big, but it would output the result of 15**10 Output: 25 5 150 1.5 5 1 Output of 15**10 The Assignment operators are technically just shortened down Basic Operators. For example a += b is just saying a = a + b,it's just another way to write it, however I still recommend using the shortened version. Comparison Operators Comparison operators are used to get a true or false value. Here is a list of them Checking if a value is equal to another value. If it is equal to the other value then the it is True. Otherwise if the other value is not equal to the value it is False - == Checking if a value is not equal to another value. If it is equal to the other value be compared then it is false. Otherwise if the value is not equal to the other value then it is False - != Checking if a value is greater than another value. If it is greater than the other value then it is True. Otherwise if the value is less than or equal to the other value then it is False - > Checking if a value is less than another value. If it is less than the other value then it is true. Otherwise if the value is greater than or equal to the other value it is False - < Checking if a value is greater than or equal to another value. If it is greater than or equal to the other value then it is True. Otherwise if the value is less than the other value then it is False - >= Checking if a value is less than or equal to another value. If it is less than or equal to the other value then it is True. Otherwise if the value is greater than the other value then it is False. 
- <=
Something to remember is that there are opposite comparison operators. For example, == and != are opposites, because == checks if the values are the same and != checks if the values are different.
Opposite Pairs:
== and !=
> and <=
< and >=

Getting Inputs
You can also get the user to input something using the input() function. Here's an example of using it:
input("How old are you: ")
Output:
How old are you: 69
It would output "How old are you" and then I could type in whatever my age was.

Storing inputs in a variable
You can store the input that you get from a user into a variable like so
age = input("How old are you?")
Output:
How old are you: 69
Remember it doesn't print 69, I am just using 69 as an example input.

Specifying input type
input() always gives you back a string, so you can also cast the input to the data type you want like so
age = int(input("How old are you?"))
Output:
How old are you: 69

Practice Problem 1
Small Exercise
Try to make a multiplication calculator where you ask the user to input 2 numbers and then output the product of the 2 numbers.
Solution:
#Getting a number from the user
number1 = int(input("Enter a number"))
#Getting another from the user
number2 = int(input("Enter another number"))
#Multiplying the 2 numbers together
product = number1 * number2
#Printing out a message to the user telling them the product of the two numbers
print(f"The product of {number1} and {number2} is {product}")

Lists
Lists are one of the more complex Python data types. Lists hold other data types like strings, integers, floats, and even other lists (we will go over nested lists in the next tutorial). They are declared using this syntax
fruits = ['apple','banana','orange']
Each list item has an index, like strings. The index starts from 0. Here's an example to show
fruits = ['apple','banana','orange']
#           0        1         2
Lists also use negative indexing
fruits = ['apple','banana','orange']
#          -3       -2        -1

List Methods
There are many methods that you can use with lists.
The .append() method adds an item to the end of a list.
Example:
fruits = ['apple','banana','orange']
fruits.append('mango')
Now fruits is ['apple','banana','orange','mango']

The .pop() method removes the item at a specific index.
Example:
fruits = ['apple','banana','orange','mango']
fruits.pop(3)
Now fruits is ['apple','banana','orange']

The .remove() method removes a certain item from a list.
Example:
fruits = ['apple','banana','orange','mango']
fruits.remove('orange')
Now fruits is ['apple','banana','mango']

The del keyword can delete a whole list or a specific index of a list.
Example:
fruits = ['apple','banana','orange','mango']
del fruits[0]
Now fruits is ['banana','orange','mango']
Another Example:
fruits = ['apple','banana','orange','mango']
del fruits
Now there is no fruits list.

Conditionals
Conditionals can be used to run code only when a certain condition is met. The syntax is like so
if [condition]:
    code
By default, the code inside an if statement runs when the condition is true.
Here's an example of a valid if statement using comparison operators
age = 69
if age == 69:
    print("You are 69 years old")
Output:
You are 69 years old
Another Example
age = 69
if age == 13:
    print("You are 13 years old")
Output:
#Nothing, because the variable age is not 13, and only if the age is 13 will it say "You are 13 years old".

Elif
Elif lets you have multiple conditions, each checked in turn if the if statement is not true. You can have as many elif statements as you want.
Example:
age = 9
if age == 13:
    print("You are 13 years old")
elif age == 9:
    print("You are 9 years old")
Output:
You are 9 years old
Another Example
age = 11
if age == 13:
    print("You are 13 years old")
elif age == 4:
    print("You are 4 years old")
Output:
#Nothing, because in the first statement age is not 13, so it won't run, and in the second statement age is not 4, so it won't run either

Else
An else block runs when none of your if or elif conditions were true.
Note: In a single conditional, only one of the if, elif, and else branches will run. So let's say you have multiple conditions, and 2 of them are true. The one that was stated first will run, and the second one won't.
Here's an example
age = 8
if age == 13:
    print("You are 13 years old")
elif age == 9:
    print("You are 9 years old")
else:
    print("You are not 9 or 13 years old")
Output:
You are not 9 or 13 years old
Another Example:
age = 13
name = "Bob"
if age == 13:
    print("You are 13 years old")
elif age == 9:
    print("You are 9 years old")
elif name == "Bob":
    print("Your name is Bob")
elif name == "Mike":
    print("Your name is Mike")
else:
    print("You are not Mike or Bob, and you are not 13 or 9.")
Output:
You are 13 years old
Remember, the reason it won't output "Your name is Bob" is because only one branch will run in a single conditional statement.

pass
The pass keyword is used as a placeholder. When you have an if statement in which you want nothing to execute, use the pass keyword.
Example:
age = 15
if age == 15:
    pass
else:
    print("You are not 15")
This wouldn't output anything, because nothing happens in the if statement other than the pass.

Logic
You can have multiple conditions in a single if statement, using the and keyword and the or keyword. When you use the and keyword, the code will only run when all of the conditions are true. When you use the or keyword, the code will run if one or more of the conditions are true.
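As a quick sketch of the and and or keywords just described (the age and has_ticket values here are made up for illustration):

```python
age = 15
has_ticket = True

# and: every condition must be true for the block to run
if age >= 13 and has_ticket:
    print("You may watch the movie")

# or: at least one condition must be true
if age < 5 or age >= 65:
    print("You get a discount")
else:
    print("Full price")
```

This prints "You may watch the movie" and then "Full price", because both and conditions are true but neither or condition is.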
Practice Problem 2
Small Exercise
Try to make an odd and even checker, where you get a number from user input and then the computer will check if the number is odd or even. If it is odd then print out the number and say that it is odd, and if it is even then print out the number and say that it is even. Go make a new repl and try it out! If it becomes too hard then feel free to check out the solution.
Solution
#First let's get the number input from the user, and let's call it number_input
number_input = int(input("What number would you like to check?"))
#Then let's create the conditional statements
#Checking if the number is even
if number_input % 2 == 0:
    #Printing out the message that says the number is even
    print(f"Your number {number_input} is even.")
#Now we'll have an else statement, because if the number is not even then it must be odd. But it's okay if you used an elif.
else:
    #Printing out the message that says the number is odd
    print(f"Your number {number_input} is odd.")

Loops
Loops are used to run things multiple times, and there are 2 main types: for loops and while loops.

For Loops
For loops are used to iterate through items in a list, dictionary, range, or string. They can be used to do something for every item in a variable.
Example:
example_list = ['item1','other item','last item'] #we now have a list
for i in example_list: #loops through everything in the list
    #and assigns the current thing to the variable i
    print(i)
output:
item1
other item
last item
As you can see, it looped through every value in the list example_list. The reason the i is there is to have a variable assigned to the item it is currently looping over. You can change i to any other name as long as another variable doesn't have the same name.
Another thing you can use is range.
for i in range(10):
    print(i)
output:
0
1
2
3
4
5
6
7
8
9
It starts from 0 and goes to the number before the number you said, so it has 10 numbers in it.
If you want to exit a for loop, you can use break.
for i in range(10):
    if i == 5:
        break
    else:
        print(i)
output:
0
1
2
3
4
As you can see, when the number reached 5, the loop ended.
You can also skip iterations using continue.
for i in range(10):
    if i == 5:
        continue
    else:
        print(i)
output:
0
1
2
3
4
6
7
8
9
When the loop reached 5, it skipped the print statement and went back to the beginning of the loop.

While Loops
While loops are used to keep running a block of code for as long as a stated condition is true.
Example:
num = 6
other = 1
while num > other:
    print(other)
    other += 1
output:
1
2
3
4
5
So, as long as num is greater than other, the loop keeps printing other. But when they became equal (both at 6), the loop stopped.
You can also use break and continue in while loops. They work the same as they did in for loops.
The most common usage for while loops is with bool values.
var = True
while var:
    print('hello')
output:
hello
hello
hello
hello
hello
...
Since var is never changed (it always stays True), the loop goes on forever.

Practice Problem 3
Make a program where for every number from 1-100: if the number is divisible by 15, print Fizzbuzz; if the number is divisible by 5, print Fizz; if the number is divisible by 3, print Buzz; otherwise just print the number. Then ask the user if they want to start the program again; if yes then restart it, otherwise break out of the loop. Go make a new repl and try it out! If it becomes too hard then feel free to check the solutions.
Challenge: Ask the user the maximum number they want the program to run to.
Solution:
#A while loop to keep on running the program if the user wants it to
while True:
    #A for loop to iterate through each number in the range of 1-100
    for number in range(1,101):
        if number % 15 == 0:
            print("Fizzbuzz")
        elif number % 5 == 0:
            print("Fizz")
        elif number % 3 == 0:
            print("Buzz")
        else:
            print(number)
    #Asking the user if they want to run the program again
    if input("Start again? (yes/no) ") != "yes":
        break
Challenge solution:
#A while loop to keep on running the program if the user wants it to
while True:
    #Asking the user what they would like the maximum number to be
    max_range = int(input("What would you like the range to be? "))
    #A for loop to iterate through each number in the range of 1 to whatever number they chose
    for number in range(1, max_range + 1):
        if number % 15 == 0:
            print("Fizzbuzz")
        elif number % 5 == 0:
            print("Fizz")
        elif number % 3 == 0:
            print("Buzz")
        else:
            print(number)
    #Asking the user if they want to run the program again
    if input("Start again? (yes/no) ") != "yes":
        break

Escape Codes
There are multiple escape codes that you can use in your code to do different things.
\n This prints the text on a new line.
Ex:
Code: print("Hello\nTesting\n\nBackslash n")
Output:
Hello
Testing

Backslash n
\t This prints a tab.
Ex:
Code: print('\ttab\t\t\tmore tabs')
Output:
    tab            more tabs

Functions
A function is a reusable block of code. Functions are mainly used when you want to run some code again and again, without having to write it multiple times. Here is the syntax for it
def function_name():
    code_to_run
Here is an example:
def add():
    total = 5 + 5
    print(total)
The way that you use (or call) functions is through this syntax
add()
Output:
10
And you can use the function multiple times.

return
There is a special keyword that can be used in functions, and it is the return keyword. What the return keyword does is give the function call a value. So here's an example
def try_func():
    x = 5
    return x
try_func()
You might think that calling try_func() would print out 5, but really the call just produces the value 5, which is thrown away unless you use it. If you wanted it to print out 5, then you would have to do
print(try_func())

Parameters
Parameters are something used in a function to make it more versatile and interactive. Parameters are variables that receive the values (called arguments) that you pass in when calling a function. Here's the syntax:
def function_name(parameters_needed):
    code_to_run
Syntax to run it:
function_name(arguments)
Here's an example of a function using parameters
def add(num1, num2):
    return int(num1) + int(num2)
print(add(9,10)) # 9 and 10 are the arguments; this should output 19
Basically, the function is just saying: give me two numbers, and I'll add them and return the sum.

Lambda
Lambdas are short, anonymous functions that you can use in your code, and they can be assigned to variable names.
Here's the syntax
variable_name = lambda parameters : code_to_run
Calling syntax:
variable_name(parameters)
And here's an example
multiply_by5 = lambda num: num * 5
Calling it:
print(multiply_by5(5))
Output:
25
You can also use lambdas inside functions; they are mostly used in return statements.
def concatenator(string):
    return lambda string2 : string2 + string
er_concatenator = concatenator("er")
print(er_concatenator("Program"))
Output:
Programer

Scopes
A scope is where a variable is accessible from, such as whether it's inside of a function.

Global
The global scope contains variables that can be accessed by all of your code. e.g.
x = 'This is in the global scope'

Local
The local scope is the scope of a function. You may notice that when you set a variable inside of a function, you can't access it outside the function, and you will get an error. This happens because that variable only exists inside of the function. Here is an example of the error.
def y():
    x = 1
y()
print(x)
The way you can fix this is by using the global keyword and declaring that variable as a global variable
x = 0
def y():
    global x # Declaring x to be a global variable
    x += 1
y()
print(x)
The global keyword basically moves the x variable from the local scope to the global scope, and all changes that are made when the function is called are applied to the global version of x.

Conclusion
Wow, did you actually read all of that, or did you just skip to the ending? That was (pretty much) all of the Python basics! If you want a further explanation of something, or feel as if we missed something, put it in the comments and we will make revisions!
MAIN CONTRIBUTOR: @IntellectualGuy
PARTIAL CONTRIBUTOR: @RhinoRunner
Credit for a bug - @mitiok
Here's a udemy course that I got the fizzbuzz challenge from

@IntellectualGuy no its not that i don't like it, but there are too many python tutorials out there.

@IntellectualGuy this isn't trash it's great but wayyyyy tooooo many Python tutorials.
And BTW, you need to credit Udemy for the FizzBuzz challenge. I'm pretty sure that is from the Python course which I bought on Udemy. You may have bought it too. So PLEASE credit them... @IntellectualGuy

@OldWizard209 It's a very common thing that is used, but sure I will. Also I see you haven't upvoted, anything wrong?

@RhinoRunner I know, I'm just asking if there is anything wrong, not asking for upvotes.

Sorry, lol. I upvoted and closed the tab quickly, and there were many apps running so the processing speed might be slow. I will upvote again. @IntellectualGuy

@OldWizard209 You didn't have to, I was just asking if anything was wrong because I really want this tutorial to be good cause I spent a lot of time on it.

@Andrewsmith291 Np, I have a part 2 coming up soon that has more intermediate concepts.

Epic tutorial, I can see a lot of effort has been put into this

This is really helpful in making my jaeger :D. I plan to teach others python while doing this, so this is like the meaning of life for me.

Just gotta say something. Everyone can make a tutorial up to functions. But no one tries classes, because that is where all hell breaks loose.

Just gotta say something. Everybody can make a tutorial about Python. But no one tries C++ because they are too lazy to learn a new language. No offense, just pointing out a flaw in the tutorials system

@OldWizard209 Hi, there's a great free course on - Object-oriented Programming in Python - covers classes, objects, methods etc

This is definitely the best python tutorial I have seen on the web for free! Really appreciate your hard work! Thanks again!!

Bro if u hate dis then ya gotta try doing one yourself. Python tutorials are hard to make. I've tried to make one................... I went way too fast. I talked about printing then skipped to like functions then like modules.... it's not ez

@CookieSnowOwl I'm not saying it's difficult, I'm saying it's overdone, redundant, and unnecessary.
im making a text adventure game with this and am trying to make an ascii house
 /\
 | 0|
but it won't let me print the '/' symbol. how would I do that?

@cannonthepom123 Here
import os
# Whatever code you have
#Clearing the screen
os.system('clear')
I will also include this in the part 2

Great overview! In the fizzbuzz exercise, the range should be 101 if we're going up to 100 ;-)

@IntellectualGuy no need to credit. I just did the exercise myself and then went to compare it to your solution, so spotted it. Thanks for the good overview. It was good to see all the things I kinda know in one place! Looking forward to part 2!

Imma upvote bc
• This probably took ages
• It's very helpful
• I like it :)
• I'm nice yuh

JUST REALISING THAT I UPVOTED YESTERDAY-

@Rainbowstuff Lollllllll XD

@pythongeek1010 XD
How to File Tax on Rental Property in a Different State Than You Live In

Instructions
- 1 Complete your federal tax return (form 1040, 1040A or 1040EZ), listing all income from your rental property. You should also list any deductions associated with the property, as the income is considered business income and standard business deductions apply.
- 2 Prepare a nonresident tax return for the state in which the rental property is located, reporting the rental income from that property.
- 3 Prepare a state tax return for your home state. Even though you already listed the income associated with the out-of-state rental property on the nonresident tax return, you must also report this income on your home state's tax return.
- 4 Claim a credit on your home state's return for the income taxes paid to the other state, where your home state allows it.

Tips & Warnings
Every state income tax return is slightly different, including where to report rental income, what deductions and withholdings are appropriate, and how to take the paid taxes credit. If you are not confident in your ability to prepare multiple state tax returns, enlist the aid of a competent tax professional to help you.
I haven't developed an AppEngine application yet; I'm just taking a look around the documentation and seeing what stands out for me. It's not the much-speculated super cluster VM. AppEngine is solidly grounded in code and structure. It reminds me a little of the guy who ran a website out of S3, with a splash of Heroku thrown in as a chaser. The idea is clearly to take advantage of our massive multi-core future by creating a shared-nothing infrastructure based firmly on a core set of infinitely scalable database, storage, and CPU services. Don't forget Google also has a few other services to leverage: email, login, blogs, video, search, ads, metrics, and apps.

A shared-nothing request is a simple beast. By their very nature, shared-nothing architectures must be composed of services which are themselves already scalable, and Google is signing up to supply that scalable infrastructure. Google has been busy creating a platform of out-of-the-box scalable services to build on. Now they have their scripting engine to bind it all together. Everything that could have tied you to a machine is tossed. No disk access, no threads, no sockets, no root, no system calls, no nothing but service-based access. Services are king because they are easily made scalable by load balancing and other tricks of the trade that can be applied behind the scenes, without any application awareness or involvement.

Using the CGI interface was not a mistake. CGI is the perfect metaphor for our brave new app-container world: get a request, process the request, die, repeat. Using AppEngine you have no choice but to write an app that can be splayed across a pointy, well-sharpened CPU grid. CGI was devalued because a new process had to be started for every request. It was too slow, too resource intensive. Ironic that in the cloud that's exactly what you want, because that's exactly how you cause yourself fewer problems and buy yourself more flexibility.

The model is pure abstraction.
The implementation is pure pragmatism. Your application exists in the cloud and is in no way tied to any single machine or cluster of machines. CPUs run parallel through your application like a swarm of busy bees, while wizards safely hidden in a pocket of space-time can bend reality as much as they desire without the muggles taking notice. Yet the abstraction is implemented in a very specific dynamic language that they already have experience with and have confidence they can make work. It's a pretty smart approach. No surprise, I guess.

One might ask: is LAMP dead? Certainly not in the way Microsoft was hoping. AppEngine is so much easier to use than the AWS environment of EC2, S3, SQS, and SDB. Creating an app in AWS takes real expertise. That's why I made the comparison of AppEngine to Heroku. Heroku is a load-and-go approach for RoR, whereas AppEngine uses Python. You basically make a Python app using services, and it scales. Simple. So simple you can't do much beyond making a web app. Nobody is going to make a super scalable transcoding service out of AppEngine. You simply can't load the needed software because you don't have your own servers. This is where Amazon wins big. But AppEngine does hit a sweet spot in the market: website builders who might previously have gone with LAMP.

What isn't scalable about AppEngine is the complexity of the applications you can build. It's a simple request-response system. I didn't notice a cron service, for example. Since you can't write your own services, a cron service would give you an opportunity to get a little CPU time of your own to do work. To extend this notion a bit, what I would like to see is an event-driven state machine service that could drive web services. If email needs to be sent every hour, for example, who will invoke your service every hour so you can get the CPU to send the email?
If you have a long-running, seven-step, asynchronous, event-driven algorithm to follow, how will you get the CPU to implement the steps? This may be Google's intent. Or somewhere in the development cycle we may get more features of this sort. But for now it's a serious weakness.

Here's a quick tour of a few interesting points. Please note I'm copying large chunks of their documentation in this post, as that seems the quickest way to the finish line...

import wsgiref.handlers

from google.appengine.ext import webapp

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello, webapp World!')

def main():
    application = webapp.WSGIApplication([('/', MainPage)], debug=True)
    wsgiref.handlers.CGIHandler().run(application)

if __name__ == "__main__":
    main()

This code defines one request handler, MainPage, mapped to the root URL (/). When webapp receives an HTTP GET request to the URL /, it instantiates the MainPage class and calls the instance's get method. Inside the method, information about the request is available using self.request. Typically, the method sets properties on self.response to prepare the response, then exits. webapp sends a response based on the final state of the MainPage instance.

The application itself is represented by a webapp.WSGIApplication instance. The parameter debug=True passed to its constructor tells webapp to print stack traces to the browser output if a handler encounters an error or raises an uncaught exception. You may wish to remove this option from the final version of your application.

Example of creation:

Example of get, modify, save:

if users.get_current_user():
    user_pets = db.GqlQuery("SELECT * FROM Pet WHERE pet.owner = :1",
                            users.get_current_user())
    for pet in user_pets:
        pet.spayed_or_neutered = True
    db.put(user_pets)

Looks like your normal overly complex data access. Me, I appreciate the simplicity of a string-based property interface.
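The missing cron service called out above has a common workaround: put the periodic work behind an ordinary URL and have a machine you control GET it on a schedule. Here is a minimal sketch of that pattern using only the standard library's WSGI support so it runs anywhere; on AppEngine the handler would be a webapp.RequestHandler instead, and the /tasks/hourly path and send_hourly_email stub are made-up names for illustration.

```python
from wsgiref.simple_server import make_server

def send_hourly_email():
    # Hypothetical placeholder for whatever periodic work needs CPU time.
    return "sent"

def app(environ, start_response):
    # An external scheduler GETs /tasks/hourly on a schedule, giving the
    # app its slice of CPU for background work.
    if environ.get('PATH_INFO') == '/tasks/hourly':
        body = send_hourly_email().encode()
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [body]
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return [b'not found']

# To serve it locally: make_server('', 8080, app).serve_forever()
```

A crontab entry on any box you control, something like `0 * * * * curl http://yourapp.example.com/tasks/hourly`, then drives the periodic work from outside.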
Re: Google AppEngine - A First Look
Nice article. The background (cron) job is said to come in the near future. Also, using your own domain names is possible now.

Re: Google AppEngine - A First Look
Call me when it'll run MySQL. I can't handle this SimpleDB nonsense. Also, in the demo, there's a clear "runtime" variable that the guy sets to python. I fully expect PHP, Perl, or whatever else to show up here soon.

Re: Google AppEngine - A First Look
This is a great start, but it's 100% the basics. There's not enough there yet to build anything truly robust and scalable... but I suspect it's coming.

Re: Google AppEngine + cron
"Any software problem can be solved by adding another layer of indirection." - Steven Bellovin
If you really want a cron job, you could have a cron job on a local web-connected machine that fires a GET or POST request to a URL on your AppEngine site, which then sends your hourly emails or does your bit of processing. Of course, this means AppEngine will not be a total replacement for your server resources, but I think it is intended to fit a tighter niche than that. Their marketing all points to getting web application developers up and running and building scalable web services. Sure, my shared hosting gives me cron and a handful of background processes, and if I really need more power I could get a dedicated machine or two, but it would get awfully expensive to match the distributed load balancing of Google applications. A lot of the other limitations reflect this, too. The persistence model, the inability to access local resources or use any Python code that could touch the machine, and especially the way it reacts to spikes in request frequency, all point to a specialized market, as you point out.

Re: Google AppEngine - A First Look
I guess MySQL isn't scalable at that kind of level. And it is not aiming to be a full RDBMS, only a simple datastore.
Complex calculations can also be done in the application code, which is likely to be more scalable. Google always works with Python, and I think it is perfect for doing the job. They can't just support all kinds of programming languages, because it is aimed to be very simple, and with more languages it would only get more complicated. I think this is a good development. I have played around with it a bit and I like it :)

Re: Google AppEngine - A First Look
I can't get the Hello World webapp to display on Windows XP, any idea why?

Re: Google AppEngine - A First Look
Nice article. The background (cron) job is said to come in the near future. Also, using your own domain names is possible now.

web2py on the appengine
You can also run code developed with a high-level framework like web2py on the appengine. Here is an example

Re: Google AppEngine - A First Look
The background (cron) job is said to come in the near future. Also, using your own domain names is possible now.
To navigate from one page to another, you can take advantage of the Navigate method on the page's read-only NavigationService property. This passes your request to the NavigationService singleton for your application, and greatly simplifies the navigation syntax.

A Quick Demo
Demos should, in my opinion, be absolutely as simple as humanly possible, to keep the focus on the topic at hand. In that spirit, we'll create a new application with just three pages:
- MainPage
- Page1
- Page2
For each page we'll adjust the application title and page title, so that the page is self-identifying; e.g.,

<StackPanel x:
    <TextBlock x:
    <TextBlock x:
</StackPanel>

On MainPage we'll place two buttons, one saying Page 1 and the other saying Page 2. We'll also implement event handlers for each that use the navigation service. Here is the complete code-behind for MainPage.

using System;
using System.Windows;
using Microsoft.Phone.Controls;

namespace PhoneNav
{
    public partial class MainPage : PhoneApplicationPage
    {
        public MainPage()
        {
            InitializeComponent();
            Page1.Click += Page1_Click;
            Page2.Click += Page2_Click;
        }

        void Page2_Click( object sender, RoutedEventArgs e )
        {
            NavigationService.Navigate( new Uri( "/Page2.xaml", UriKind.Relative ) );
        }

        void Page1_Click( object sender, RoutedEventArgs e )
        {
            NavigationService.Navigate( new Uri( "/Page1.xaml", UriKind.Relative ) );
        }
    }
}

Run the application. Clicking Page1 takes you there instantly (we may want to add animation at some point to make the transition less confusing). For now, to navigate back, use the back button. In the next mini-tutorial we'll look at the events that are raised during navigation and how you can override them to take greater control of the navigation.

Sounds like a great application, hope you'll build it. For this series, it seems a bit too advanced, and I'm already in the middle of another start-to-finish project with The Full Stack. But still, a cool idea.
Just to clarify why spoken directions are desirable: this would give you turn-by-turn navigation, useful in a car and vital on a motorcycle. On my bike I cannot use a display-type GPS, but I -can- listen to directions with my in-helmet Bluetooth headset.

If you really want to impress me, Jessie, walk us through an app that wraps the Maps application (or if we can't do that, then we'll have to recreate it using the Bing Maps Silverlight control) and reads out the waypoint instructions as you approach each waypoint. To trigger a total nerdgasm in millions of readers, get Jen Taylor to do the voice (Cortana from Halo).
It may be that I am tired, but I cannot seem to figure out how to place a script that doesn't inherit from MonoBehaviour into an array of that type in the editor/inspector.

Example:

[System.Serializable]
public class BaseWeapon
{
    public string Name {get; set;}
    public int Damage {get; set;}
    public int Ammo {get; set;}

    // Constructor
    public BaseWeapon(string name, int dmg, int ammo)
    {
        Name = name;
        Damage = dmg;
        Ammo = ammo;
    }
}

// DIFFERENT SCRIPT
public class WeaponPickup : MonoBehaviour
{
    public BaseWeapon[] weapons;
    private BaseWeapon weaponToDrop;

    void Awake()
    {
        if(weapons.Length > 1)
            weaponToDrop = GetRandomWeapon();
        else
            weaponToDrop = weapons[0];
    }

    void OnTriggerEnter(Collider other)
    {
        if(other.gameObject.tag == "Player")
        {
            other.SendMessage("GetWeapon", weaponToDrop);
        }
    }
}

// DIFFERENT SCRIPT
public class Handgun : BaseWeapon
{
    public BaseWeapon weapon = new BaseWeapon("Handgun", 5, 30);
}

// DIFFERENT SCRIPT
public class PlayerScript : MonoBehaviour
{
    public Projectile projectile;
    private BaseWeapon equippedWeapon;
    private BaseWeapon[] weapInventory;

    void GetWeapon(BaseWeapon weap)
    {
        int i = weapInventory.Length;
        weapInventory[i] = weap;
        equippedWeapon = weapInventory[i];
    }

    void Update()
    {
        if(Input.GetAxis("Fire1") > 0)
        {
            FireWeapon();
        }
    }

    void FireWeapon()
    {
        if(equippedWeapon.Ammo > 0)
        {
            GameObject tempProj = (GameObject)Instantiate(projectile, transform.position, transform.rotation);
            Projectile proj = tempProj.GetComponent<Projectile>();
            proj.damage = equippedWeapon.Damage;
            equippedWeapon.Ammo--;
        }
    }
}

Now in the above code I have a MonoBehaviour script attached to a GameObject, and it displays the weapons array, but when I add more elements to it I can't just drag the BaseWeapon script onto an element.
I'm wanting to do this so the level designers can easily drag the weapon that they want the pickup to drop into the array; there's also code in place to select one at random if more than one weapon is in the array. EDIT: Added additional code. Projectile is just a MonoBehaviour which has a public var damage and simply moves the GameObject forward and sends the damage variable to whatever it hits. This code is not my original as I am not at home atm, so I just hand-typed this myself and it is not complete; for instance, there is also a function to allow the player to cycle through their weapons, but I left it out for this... If any additional info is needed let me know. Thanks in advance. asked Feb 28, 2011 at 03:53 AM by unitydev0008, edited Mar 01, 2011 at 12:02 AM

You seem to have a misunderstanding of what public BaseWeapon[] weapons; is. It is an array of references to instantiated objects, not an array of classes. If you have a prefab with a BaseWeapon object attached to it, then you can drag that prefab onto a slot in the weapons array. If that's not what you want to do, my next guess would be that you want an array of enums. If this isn't enough to get you going, further description and code for what you want to do will be necessary to help you. answered Feb 28, 2011 at 03:29 PM by Jessy

Hi Jessy, what I am trying to do is develop a top-down shooter game. The BaseWeapon class is the default class that I derive from when I make a new weapon script; this class holds all the base stats of weapons, such as the name, rate of fire, accuracy, etc. The array I am making in WeaponPickup is of type BaseWeapon so that I can drag the weapons created that inherit from BaseWeapon into the array, so the WeaponPickup can "drop" that weapon. This array is used to send the weapon script to the player and add that weapon to their inventory so it can be used by the player... Thanks again.

Again, code will help us help you. "Weapons created" and "send the weapon script to the player" are not clear phrases to me.

Added additional code which was hand-typed and not in VS, so it may not be correct but should help in understanding what I'm trying to do... thanks again for helping, Jessy.

Hey man, thanks for the help; however, I have decided to go about it a different way by just having a MonoBehaviour class create an instance of my base weapon, and it seems to be working just fine... thanks again man, and I accepted your answer and +1 to show my gratitude.

I'm sorry I didn't help you more directly; I got REALLY busy. Glad to hear you have a good solution.
http://answers.unity3d.com/questions/49795/how-to-place-a-non-monobehaviour-script-into-an-ar.html
preface

This article is included in the album; click to unlock more knowledge of data structures and algorithms. Hello, I'm Brother Tong, a hard-core man who climbs 26 floors every day and doesn't forget to read the source code. Array, linked list, queue and stack are the four most basic structures in data structures. Array and linked list are the foundation of the foundation; all of the more complex data structures that follow evolve from them. In this section, we will review these four structures.

array

We are already quite familiar with the array. It is a linear data structure that uses a group of continuous memory spaces to store a group of data of the same type. There are three key phrases in this concept: linear, continuous, and the same type. Linear means that there is no bifurcation: any element has at most one element directly before it and one directly after it. Linked lists, queues, and so on are linear in the same way. Continuous means that its storage in memory space is continuous and uninterrupted: two adjacent elements sit right next to each other, with no gap. The same type means that the elements stored in the array must be of the same type. Of course, in Java you can use Object to represent all types; in essence, they are still of the same type. It is with the above three features that the array gains the property of random access. So what is random access? In short, you can quickly locate an element in the array by its subscript, and the time complexity is O(1). How does it do that? We know that there are only 0 and 1 in the computer; everything can be regarded as various combinations of 0 and 1, and memory is the same. When we create an array, such as int[] array = new int[]{2, 5, 8, 7}; what is actually returned is the location (address) of the array in memory. We know that an int type takes up four bytes, that is, 32 bits of 0s or 1s. When we access the element with array subscript 0, we can go directly to the array address, take 32 bits, and convert them to an int.
Similarly, when we access the element with array subscript 1, we take the array address plus an offset of 32 * 1, read 32 bits, and convert them to an int, and so on. This is also the reason why array subscripts start from 0 in most languages. Imagine that if subscripts started from 1, then the memory address calculation would become address + 32 * (i - 1), which would obviously cause some performance loss.

Linked list

The linked list is also a linear data structure, but unlike the array, it is not necessarily stored sequentially in memory. To preserve the ordering of the elements in the linked list, each node generally uses a pointer to find the next element. The figure above is a typical singly linked list structure, in which each node has only one pointer, pointing to the next element. If you want to use a Java class to represent an element node in a singly linked list, it will look like this:

class Node {
    int value;
    Node next;
}

Therefore, the linked list does not have the property of random access; finding an element by index in a (singly) linked list can only start from the head, so its time complexity is O(n). What we described above is a singly linked list. If we add a predecessor pointer (a pointer to the previous element) to each node, it becomes a doubly linked list. The LinkedList in Java is a typical doubly linked list structure. It can be used as a queue or a stack, which is very convenient. If you add the capabilities of HashMap on top of the doubly linked list, it becomes LinkedHashMap... cough, I'm getting off track. Students who want to study the source code analysis of LinkedList and LinkedHashMap can follow my public account, "tongge read the source code". Queues were mentioned just now. So, what is a queue?

queue

The so-called queue is really the same as a queue in real life: elements enter from one end and go out from the other end.
In English, this is called first in, first out, abbreviated as FIFO. From this figure, we can see that the simplest way to implement a queue is to use a linked list (reverse the arrows in the figure above). When joining the queue, add the element to the end of the list. When leaving the queue, delete the first element and point the head node to the next node. Let's take a look at a simple implementation of a queue using a linked list:

public class LinkedQueue {
    Node head;
    Node tail;

    void offer(Integer value) {
        if (value == null) {
            throw new NullPointerException();
        }
        Node node = new Node(value);
        if (head == null) {
            head = tail = node;
        } else {
            tail.next = node;
            tail = node;
        }
    }

    Integer poll() {
        Node first = head;
        if (first != null) {
            head = first.next;
            first.next = null;
            return first.value;
        } else {
            return null;
        }
    }

    static class Node {
        int value;
        Node next;

        public Node(int value) {
            this.value = value;
        }
    }
}

Isn't that simple? Can arrays implement queues? The answer is yes. There are many ways to implement a queue with an array. One is to use two pointers: the in pointer and the out pointer, which point to the next slot to enqueue into and the next element to dequeue, respectively. When joining the queue, put the element at the in pointer and move the in pointer backward. When leaving the queue, take out the element at the out pointer and return it, moving the out pointer backward at the same time. When a pointer reaches the end of the array, it wraps around to the beginning of the array. This forms an array that can be recycled, commonly known as a circular array. At this point, consider a problem: when the queue is empty or full, the two pointers point to the same location, which seems hard to distinguish. In fact, it's very simple: introduce a size variable to track how many elements there are in the queue. So, how can this be done? Show me the code!
public class ArrayQueue {
    int[] array;
    int offerIndex;
    int pollIndex;
    int size;

    public ArrayQueue(int capacity) {
        this.array = new int[capacity];
        this.offerIndex = this.pollIndex = 0;
        this.size = 0;
    }

    boolean offer(Integer value) {
        if (value == null) {
            throw new NullPointerException();
        }
        if (size == array.length) {
            return false;
        }
        array[offerIndex] = value;
        offerIndex = (offerIndex + 1) % array.length;
        size++;
        return true;
    }

    Integer poll() {
        if (size == 0) {
            return null;
        }
        int value = array[pollIndex];
        pollIndex = (pollIndex + 1) % array.length;
        size--;
        return value;
    }
}

OK, the above is the queue implemented with an array. You can see that, compared with the queue implemented with a linked list, it needs a specified capacity; this is called a bounded queue. If you need an array-based unbounded queue, you need to add an expansion (resizing) mechanism. Interested students can implement it themselves. Next, let's look at another basic data structure: the stack.

Stack

The stack is a data structure completely opposite to the queue. Its elements go in first and come out last, just like putting things into a cup: the first thing put in sits at the bottom, and only when the things on top are taken out can we take out the things pressed below. This kind of behavior is called first in, last out, or FILO for short. The stack has many uses; a lot of processing in the computer is carried out through the stack data structure, such as arithmetic expression evaluation. Prepare two stacks, one for storing numbers and the other for storing operators. Push the characters onto the two stacks in turn from the beginning of the expression. When an incoming operator's priority is lower than that of the operator on top of the operator stack, pop the top operator first. After all the characters have been pushed, elements are repeatedly taken from the number stack two at a time, along with one operator from the operator stack, for calculation.
The result is pushed back onto the number stack, and this continues until the operator stack is empty, or until there is only one element left in the number stack; the popped number is the final result. Take 3 + 2 * 4 - 1 as an example. Well, that's a brief introduction to the stack; we will encounter it in a large number of data structures later.

Postscript

In this section, we reviewed the four most basic data structures: array, linked list, queue and stack. Speaking of arrays, we can see that memory itself is one large array whose elements are 0 and 1. Can we directly operate on these 0s and 1s? The answer is yes. In the next section, we will introduce bit operations and the bitmap data structure, and we will describe in detail how to use a bitmap to implement the 12306 ticketing logic. Follow me to get the latest posts in time. Follow the public account "tongge read the source code" to unlock more knowledge of source code, foundations, and architecture.
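The two-stack arithmetic procedure described above can be sketched in code. This is an illustrative Python rendering of the idea, not code from the original article; it handles only non-negative integers and the + - * / operators, with integer division:

```python
def evaluate(expression):
    """Evaluate a space-separated infix expression with a number stack
    and an operator stack, as described above."""
    precedence = {'+': 1, '-': 1, '*': 2, '/': 2}
    numbers, operators = [], []

    def apply_top():
        # Take two numbers and one operator, compute, push the result back.
        right, left = numbers.pop(), numbers.pop()
        op = operators.pop()
        numbers.append({'+': left + right, '-': left - right,
                        '*': left * right, '/': left // right}[op])

    for token in expression.split():
        if token.isdigit():
            numbers.append(int(token))
        else:
            # Reduce while the operator on top has higher-or-equal priority.
            while operators and precedence[operators[-1]] >= precedence[token]:
                apply_top()
            operators.append(token)
    while operators:  # drain the remaining operators
        apply_top()
    return numbers[0]

print(evaluate("3 + 2 * 4 - 1"))  # → 10
```

Tracing the article's example 3 + 2 * 4 - 1: the pending * is applied when - arrives (2 * 4 = 8), then + (3 + 8 = 11), and the final drain computes 11 - 1 = 10.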
https://developpaper.com/review-four-basic-data-structures-array-linked-list-queue-and-stack/
VERSIONversion 0.05 SYNOPSIS my $result = $.cache->get('key'); if (!defined($result)) { ... compute $result ... $.cache->set('key', $result, '5 minutes'); } ... % $.Cache('key2', '1 hour') {{ <!-- this will be cached for an hour --> % }} DESCRIPTIONAdds a "cache" method and "Cache" filter to access a cache (CHI) object with a namespace unique to the component. INTERP PARAMETERS - cache_defaults - Hash of parameters passed to cache constructor. Defaults to driver=>'File', root_dir => 'DATA_DIR/cache' which will create a basic file cache under Mason's data directory. - cache_root_class - Class used to create a cache. Defaults to CHI. COMPONENT CLASS METHODS - cache - Returns a new cache object with the namespace set to the component's path. Parameters to this method, if any, are combined with cache_defaults and passed to the cache_root_class constructor. The cache object is memoized when no parameters are passed. my $result = $.cache->get('key'); REQUEST METHODS - cache - Same as calling "cache" on the current component class. This usage will be familiar to Mason 1 users. my $result = $m->cache->get('key'); FILTERS - Cache ($key, $options, [%cache_params]) - --> % }} SUPPORTThe mailing list for Mason and Mason plugins is [email protected] You must be subscribed to send a message. To subscribe, visit <>. You can also visit us at "#mason" on <irc://irc.perl.org/#mason>. Bugs and feature requests will be tracked at RT: [email protected] The latest source code can be browsed and fetched at: git clone git://github.com/jonswar/perl-mason-plugin-cache.git AUTHORJonathan Swartz <[email protected]> COPYRIGHT AND LICENSEThis software is copyright (c) 2011 by Jonathan Swartz. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
http://manpages.org/masonplugincache/3
> The relocation support might be worse implementing it or not, we should
> discuss this. Does anyone use it?

I do not use rpm's relocation feature, but only because my only Linux box is a Debian. But read further. My desktop environment is GNUstep (a Cocoa/OpenSTEP clone). This environment supports applications in a user's separate namespace: $HOME/Applications is in his path. A user can install his own applications and tools. I think it is worth having this feature in mainstream OSs too. The package system should be able to handle installs by a user in his own namespace. This makes relocation necessary. btw: to fully support users' own sets of libraries and applications, we need a changed ld.so. I once changed the Linux ld.so to build its own ld.so.cache per user; this is worth implementing too. ~ibotty
http://leaf.dragonflybsd.org/mailarchive/kernel/2003-09/msg00066.html
Hello, welcome to therichpost.com. In this post, I will tell you: what is RequestOptions in Angular? I am very familiar with Angular 1, but Angular 2 is totally different from Angular 1. Angular 2 is totally command based. RequestOptions is used when we post or send data; for example, if we want to send JSON data, then we will set the json type in the header. Here is working code for RequestOptions, and you can add this into your Angular 2 component or Angular 2 service:

import { Http, Response, RequestOptions, Headers } from '@angular/http';

let headers = new Headers({ 'Content-Type': 'application/json' });
let options = new RequestOptions({ headers: headers });

There is much more code in Angular 2 and WordPress, and I will let you know it all. Please do comment if you have any query related to this post. Thank you.
https://therichpost.com/what-is-requestoptions-in-angular/
C UConverter functions to aid the writers of callbacks. More... #include "unicode/utypes.h" #include "unicode/ucnv.h" #include "unicode/ucnv_err.h" Go to the source code of this file. C UConverter functions to aid the writers of callbacks. These functions are provided here for the convenience of the callback writer. If you are just looking for callback functions to use, please see ucnv_err.h. DO NOT call these functions directly when you are working with converters, unless your code has been called as a callback via ucnv_setFromUCallback or ucnv_setToUCallback !! A note about error codes and overflow. Unlike other ICU functions, these functions do not expect the error status to be U_ZERO_ERROR. Callbacks must be much more careful about their error codes. The error codes used here are in/out parameters, which should be passed back in the callback's error parameter. For example, if you call ucnv_cbfromUWriteBytes to write data out to the output codepage, it may return U_BUFFER_OVERFLOW_ERROR if the data did not fit in the target. But this isn't a failing error, in fact, ucnv_cbfromUWriteBytes may be called AGAIN with the error status still U_BUFFER_OVERFLOW_ERROR to attempt to write further bytes, which will also go into the internal overflow buffers. Concerning offsets, the 'offset' parameters here are relative to the start of SOURCE. For example, Suppose the string "ABCD" was being converted from Unicode into a codepage which doesn't have a mapping for 'B'. 'A' will be written out correctly, but The FromU Callback will be called on an unassigned character for 'B'. At this point, this is the state of the world: Target: A [..] [points after A] Source: A B [C] D [points to C - B has been consumed] 0 1 2 3 codePoint = "B" [the unassigned codepoint] Now, suppose a callback wants to write the substitution character '?' to the target. It calls ucnv_cbFromUWriteBytes() to write the ?. 
It should pass ZERO as the offset, because the offset as far as the callback is concerned is relative to the SOURCE pointer [which points before 'C'.] If the callback goes into the args and consumes 'C' also, it would call FromUWriteBytes with an offset of 1 (and advance the source pointer). Definition in file ucnv_cb.h. ONLY used by FromU callback functions. Writes out the specified byte output bytes to the target byte buffer or to converter internal buffers. ONLY used by FromU callback functions. This function will write out the correct substitution character sequence to the target. ONLY used by fromU callback functions. This function will write out the error character(s) to the target UChar buffer. ONLY used by ToU callback functions. This function will write out the Unicode substitution character (U+FFFD). ONLY used by ToU callback functions. This function will write out the specified characters to the target UChar buffer.
https://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/ucnv__cb_8h.html
On Wed, May 28, 2008 at 12:46 AM, Olivier Boudry <olivier.boudry at gmail.com> wrote:
> If the calling convention is stdcall on Windows and ccall on other OS then
> it should be defined based on the OS. This can be done by updating the .hsc
> files to define the calling convention as a "macro" depending on the OS
> type.
>
> #ifdef mingw32_HOST_OS
> #let
> #else
> #let
> #endif
>
> And the foreign import should use CALLCONV instead of ccall.
>
> This should make it work on Windows and not break it on Linux.

Thanks Olivier, that's neater than I thought. I'll put a patch together.

-- Andrew
http://www.haskell.org/pipermail/haskell-cafe/2008-May/043667.html
S-expression

S-expression is an actively used meta language created in 1960. In computing, s-expressions. Read more on Wikipedia...
- S-expression ranks in the top 25% of languages
- the S-expression wikipedia page
- S-expression first appeared in 1960
- See also: lisp, scheme, c, common-lisp, xml, python, islisp, rfc, i-expressions, bayer-expressions
- Have a question about S-expression not answered here? Email me and let me know how I can help.

Example code from Wikipedia:

def parse_sexp(string):
    """
    >>> parse_sexp("(+ 5 (+ 3 5))")
    [['+', '5', ['+', '3', '5']]]
    """
    sexp = [[]]
    word = ''
    in_str = False
    for char in string:
        if char == '(' and not in_str:
            sexp.append([])
        elif char == ')' and not in_str:
            if word:
                sexp[-1].append(word)
                word = ''
            temp = sexp.pop()
            sexp[-1].append(temp)
        elif char in (' ', '\n', '\t') and not in_str:
            if word:
                sexp[-1].append(word)
                word = ''
        elif char == '"':
            in_str = not in_str
        else:
            word += char
    return sexp[0]

Last updated August 9th, 2020
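As a companion to the parser shown above, the inverse direction is just as short. This serializer is an illustrative sketch, not code from the page; it renders a nested list of string atoms back into s-expression text:

```python
def write_sexp(exp):
    """Render a nested list of string atoms back into s-expression text."""
    if isinstance(exp, list):
        # Recurse into sublists, joining the rendered children with spaces.
        return "(" + " ".join(write_sexp(e) for e in exp) + ")"
    return exp  # a bare atom is emitted as-is

print(write_sexp(['+', '5', ['+', '3', '5']]))  # → (+ 5 (+ 3 5))
```

Fed the structure that parse_sexp produces for "(+ 5 (+ 3 5))", it reproduces the original text (up to whitespace normalization).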
https://codelani.com/languages/s-expressions.html
Initializing a static variable inside a namespace
Discussion in 'C++' started by ik, Sep 22, 2004. 501 views; last reply by Adam Smith, Apr 15, 2004.

Similar threads:
- Initializing static class member (Avi Uziel, Sep 24, 2003, in forum: C++): 14 replies, 821 views; last reply by WW, Sep 26, 2003.
- Initializing a static const member array of a class type (Steven T. Hatton, Apr 19, 2004, in forum: C++): 1 reply, 6,785 views; last reply by Siemel Naran, Apr 19, 2004.
- Good practice question: Declaring/Initializing variables inside or outside a loop? (SM, Apr 30, 2007, in forum: Javascript): 8 replies, 169 views; last reply by -Lost, May 1, 2007.
- Initializing a global namespace object (Stanimir Stamenkov, Jul 26, 2010, in forum: Javascript): 39 replies, 384 views; last reply by David Mark, Jul 30, 2010.
http://www.thecodingforums.com/threads/initializing-a-static-variable-inside-a-namespace.285786/
You just need to have the list of instances that you want to delete and then pass it as an argument, and you are done. You can use the following code for the same:

import boto3

ids = ['i-1', 'i-2', 'i-3', 'i-4', 'i-5']
ec2 = boto3.resource('ec2')
ec2.instances.filter(InstanceIds=ids).terminate()

This will serve the purpose. Hope this helps.
https://www.edureka.co/community/32011/how-to-delete-an-ec2-instance-using-python-boto3
Principles, Patterns, and Practices: The Strategy, Template Method, and Bridge Patterns The Strategy, Template Method, and Bridge Patterns One of the great benefits of object-oriented programming is polymorphism; i.e., the ability to send a message to an object without knowing the true type of the object. Perhaps no pattern illustrates this better than the Strategy pattern. To illustrate the Strategy pattern let's assume that we are working on a debug logger. Debug loggers are often very useful devices. Programmers can send messages to these loggers at strategic places within the code. If the system misbehaves in some way, the debug log can provide clues about what the system was doing internally at the time of the failure. In order to be effective, loggers need to be simple for programmers to use. Programmers aren't going to frequently use something that is inconvenient. You should be able to emit a log message with something no more complicated than: logger.log("My Message"); On the other hand, what we want to see in the log is quite a bit more complex. At very least we are going to want to see the time and date of the message. We'll also probably want to see the thread ID. Indeed, there may be a whole laundry list of system states that we want to log along with the message. So the logger needs to gather all of this peripheral information together, format it into a log message, and then add it to the growing list of logged messages. Where should the logged messages be stored? Sometimes we might like them stored in a text file. Sometimes we might want to see them added to a database table. Sometimes we might like them accumulated in RAM. The choices seem endless. However, the final destination of the logged messages has nothing to do with the format of the messages themselves. We have two algorithms: one formats the logged message, and the other records the logged message. 
These two algorithms are both in the flow of logging a message, but both can vary independently of each other. The formatter does not care where the message is recorded, and the recorder does not care about the format of the message. Whenever we have two connected but independent algorithms, we can use the Strategy pattern to connect them. Consider the following structure: Here the user calls the log method of the Logger class. The log method formats the message and then calls the record method of the Recorder interface. There are many possible implementations of the Recorder interface. Each does the recording in a different way. The structure of the Logger and Recorder is exemplified by the following unit test, which uses the Adapter pattern:

public class LoggerTest extends TestCase {
  private String recordedMessage;
  protected String message;

  public void testLogger() throws Exception {
    Logger logger = new Logger(new Recorder() {
      public void record(String message) {
        recordedMessage = message;
      }
    });
    message = "myMessage";
    logger.log(message);
    checkFormat();
  }

  private void checkFormat() {
    String datePattern = "\\d{2}/\\d{2}/\\d{4} \\d{2}:\\d{2}:\\d{2}.\\d{3}";
    String messagePattern = datePattern + " " + message;
    if (!Pattern.matches(messagePattern, recordedMessage)) {
      fail(recordedMessage + " does not match pattern");
    }
  }
}

As you can see, the Logger is constructed with an instance of an object that implements the Recorder interface. Logger does not care what that implementation does. It simply builds the string to be logged and then calls the record method. This is very powerful decoupling. It allows the formatting and recording algorithms to change independently of each other. Logger is a simple class that simply formats the message and forwards it to the Recorder.
public class Logger {
  private Recorder recorder;

  public Logger(Recorder recorder) {
    this.recorder = recorder;
  }

  public void log(String message) {
    DateFormat format = new SimpleDateFormat("MM/dd/yyyy kk:mm:ss.SSS");
    Date now = new Date();
    String prefix = format.format(now);
    recorder.record(prefix + " " + message);
  }
}

And Recorder is an even simpler interface.

public interface Recorder {
  void record(String message);
}

The canonical form of the Strategy pattern is shown below. One algorithm (the context) is shielded from the other (the strategy) by an interface. The context is unaware of how the strategy is implemented, or of how many different implementations there are. The context typically holds a pointer or reference to the strategy object with which it was constructed. In our Logger example, the Logger is the context, the Recorder is the strategy interface, and the anonymous inner class within the unit test acts as one of the implemented strategies. If you have been an object-oriented programmer for any length of time, you have seen this pattern many times. Indeed, it is so common that some folks shake their heads and wonder why it even has a name. It's rather like giving the name "DO NEXT STATEMENT" to the fact that execution proceeds statement by statement. However, there is a good reason to give this pattern a name. It turns out that there is another pattern that solves the same problem in a slightly different way; and the two names help us differentiate between them. This second pattern is called Template Method, and we can see it by adding the next obvious layer of polymorphism to the Logger example. We already have one layer that allows us to change the way log messages are recorded. We could add another layer to allow us to change how log messages are formatted. Let's suppose, for instance, that we want to support two different formats. One prepends the time and date to the message as above; the other prepends only the time.
Clearly, this is just another problem in polymorphism, and we could use the Strategy pattern once again. If we did, the design might look like this: Here we see two uses of the Strategy pattern. One provides polymorphic recording and the other provides polymorphic formatting. This is a common enough solution, but it is not the only solution. Indeed, we might have opted for a solution that looked more like this: Notice the format method of Logger. It is protected (that's what the # means) and it is abstract (that's what the italics mean). The log method of Logger calls its own abstract format method, which deploys to one of the derived classes. The formatted string is then passed to the record method of the Recorder. Consider the unit test below. It shows tests for both the TimeLogger and the TimeDateLogger. Notice that each test method creates the appropriate Logger derivative and passes a Recorder instance into it.

import junit.framework.TestCase;
import java.util.regex.Pattern;

public class LoggerTest extends TestCase {
  private String recordedMessage;
  protected String message;
  private static final String timeDateFormat = "\\d{2}/\\d{2}/\\d{4} \\d{2}:\\d{2}:\\d{2}.\\d{3}";
  private static final String timeFormat = "\\d{2}:\\d{2}:\\d{2}.\\d{3}";

  private Recorder recorder = new Recorder() {
    public void record(String message) {
      recordedMessage = message;
    }
  };

  public void testTimeDateLogger() throws Exception {
    Logger logger = new TimeDateLogger(recorder);
    message = "myMessage";
    logger.log(message);
    checkFormat(timeDateFormat);
  }

  public void testTimeLogger() throws Exception {
    Logger logger = new TimeLogger(recorder);
    message = "myMessage";
    logger.log(message);
    checkFormat(timeFormat);
  }

  private void checkFormat(String prefix) {
    String messagePattern = prefix + " " + message;
    if (!Pattern.matches(messagePattern, recordedMessage)) {
      fail(recordedMessage + " does not match pattern");
    }
  }
}

The Logger has changed as follows. Notice the protected abstract format method.
public abstract class Logger {
  private Recorder recorder;

  public Logger(Recorder recorder) {
    this.recorder = recorder;
  }

  public void log(String message) {
    recorder.record(format(message));
  }

  protected abstract String format(String message);
}

TimeLogger and TimeDateLogger simply implement the format method appropriate to their type, as shown below:

import java.text.*;
import java.util.Date;

public class TimeLogger extends Logger {
  public TimeLogger(Recorder recorder) {
    super(recorder);
  }

  protected String format(String message) {
    DateFormat format = new SimpleDateFormat("kk:mm:ss.SSS");
    Date now = new Date();
    String prefix = format.format(now);
    return prefix + " " + message;
  }
}

import java.text.*;
import java.util.Date;

public class TimeDateLogger extends Logger {
  public TimeDateLogger(Recorder recorder) {
    super(recorder);
  }

  protected String format(String message) {
    DateFormat format = new SimpleDateFormat("MM/dd/yyyy kk:mm:ss.SSS");
    Date now = new Date();
    String prefix = format.format(now);
    return prefix + " " + message;
  }
}

The canonical form of Template Method looks like this: The Context class has at least two functions. One (here called function) is generally public, and represents some high-level algorithm. The other function (here called subFunction) represents some lower-level algorithm called by the higher-level algorithm. The derivatives of Context implement subFunction in different ways. It should be clear how Strategy and Template Method solve the same problem. The problem is simply to separate a high-level algorithm from a lower-level algorithm in such a way that the two can vary independently. In the case of Strategy, this is solved by creating an interface for the lower-level algorithm. In the Template Method case, it is solved by creating an abstract method. Strategy is preferable to Template Method when the lower-level algorithm needs to change at run time.
This can be accomplished with Strategy simply by swapping in an instance of a different derivative. Template Method is not so fortunate; once created, its lower-level algorithm is locked in. On the other hand, Strategy has a slight time and space penalty compared to Template Method, and is more complex to set up. So Strategy should be used when flexibility is important, and Template Method should be used when time and space efficiency and simplicity are more important. Could we have used Template Method to solve the whole Logger problem? Yes, but the result is not pleasant. Consider the following diagram. Notice that there is one derivative for each possible combination. This is the dreaded m x n problem. Given two polymorphic degrees of freedom (e.g., recording and format) the number of derivatives is the product of those degrees. This problem is common enough that the combined use of Strategy and Template Method to solve it (as we did in the previous example) is a pattern in and of itself, called Bridge.
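The contrast the article draws between the two patterns can be condensed into a few lines. This is an illustrative Python rendering of the same idea (the article's own code is the Java above; the class and method names here are invented for the sketch):

```python
from abc import ABC, abstractmethod

class StrategyLogger:
    """Strategy: the low-level step is an object passed in, swappable at run time."""
    def __init__(self, formatter):
        self.formatter = formatter  # any callable plays the strategy role

    def log(self, message):
        return self.formatter(message)

class TemplateLogger(ABC):
    """Template Method: log() is the high-level algorithm; format() is the
    abstract hook that each subclass locks in at creation time."""
    def log(self, message):
        return self.format(message)

    @abstractmethod
    def format(self, message):
        raise NotImplementedError

class BangLogger(TemplateLogger):
    def format(self, message):
        return message + "!"

s = StrategyLogger(str.upper)
print(s.log("hello"))          # → HELLO
s.formatter = str.lower        # Strategy: swap the algorithm at run time
print(s.log("HELLO"))          # → hello
print(BangLogger().log("hi"))  # → hi!
```

Note how the StrategyLogger instance changes its formatting behavior mid-flight, while a BangLogger can never format any other way: that is exactly the flexibility-versus-simplicity trade-off described above.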
http://today.java.net/pub/a/today/2004/10/29/patterns.html
I'm new to Python, coming from PHP, so I hardly understand it. I have a very basic "launch switch" attached to GPIO 23 and GPIO GND and am running the Python below. But it works in reverse: when the switch is off or disconnected, the "Switch" counter ticks, but when I activate the switch it stops. Is this a problem in my Python? I have tried switching the cables around.

Code: Select all

import time
import RPi.GPIO as io

io.setmode(io.BCM)
door_pin = 23
io.setup(door_pin, io.IN)
counter = 0
while True:
    if io.input(door_pin):
        counter += 1
        print "Switch:", counter
    time.sleep(0.5)

Another slight problem is that it does not tick smoothly: I get a response every half second or so for about 11 ticks, then it pauses, then another 11 or so ticks, then it pauses again...
https://lb.raspberrypi.org/forums/viewtopic.php?f=31&t=35502
Linux is a very dynamic system with constantly changing computing needs. The representation of the computational needs of Linux centers around the common abstraction of the process. Processes can be short-lived (a command executed from the command line) or long-lived (a network service). For this reason, the general management of processes and their scheduling is very important. From user-space, processes are represented by process identifiers (PIDs). From the user's perspective, a PID is a numeric value that uniquely identifies the process. A PID doesn't change during the life of a process, but PIDs can be reused after a process dies, so it's not always ideal to cache them. In user-space, you can create processes in any of several ways. You can execute a program (which results in the creation of a new process) or, within a program, you can invoke a fork or exec system call. The fork call results in the creation of a child process, while an exec call replaces the current process context with the new program. I discuss each of these methods to understand how they work. For this article, I build the description of processes by first showing the kernel representation of processes and how they're managed in the kernel, then review the various means by which processes are created and scheduled on one or more processors, and finally, what happens if they die. Process representation Within the Linux kernel, a process is represented by a rather large structure called task_struct. This structure contains all of the necessary data to represent the process, along with a plethora of other data for accounting and to maintain relationships with other processes (parents and children). A full description of the task_struct is beyond the scope of this article, but a portion of task_struct is shown in Listing 1. This code contains the specific elements this article explores. Note that task_struct resides in ./linux/include/linux/sched.h. Listing 1. 
A small portion of task_struct

struct task_struct {
    volatile long state;
    void *stack;
    unsigned int flags;

    int prio, static_prio;

    struct list_head tasks;

    struct mm_struct *mm, *active_mm;

    pid_t pid;
    pid_t tgid;

    struct task_struct *real_parent;

    char comm[TASK_COMM_LEN];

    struct thread_struct thread;

    struct files_struct *files;

    ...
};

In Listing 1, you can see several items that you'd expect, such as the state of execution, a stack, a set of flags, the parent process, the thread of execution (of which there can be many), and open files. I explore these later in the article but will introduce a few here.

The state variable is a set of bits that indicate the state of the task. The most common states indicate that the process is running or in a run queue about to be running (TASK_RUNNING), sleeping (TASK_INTERRUPTIBLE), sleeping but unable to be woken up (TASK_UNINTERRUPTIBLE), stopped (TASK_STOPPED), or a few others. A complete list of these flags is available in ./linux/include/linux/sched.h. The flags word defines a large number of indicators, indicating everything from whether the process is being created (PF_STARTING) or exiting (PF_EXITING), or even if the process is currently allocating memory (PF_MEMALLOC). The name of the executable (excluding the path) occupies the comm (command) field.

Each process is also given a priority (called static_prio), but the actual priority of the process is determined dynamically based on loading and other factors. The lower the priority value, the higher its actual priority.

The tasks field provides the linked-list capability. It contains a prev pointer (pointing to the previous task) and a next pointer (pointing to the next task). The process's address space is represented by the mm and active_mm fields. The mm represents the process's memory descriptors, while the active_mm is the previous process's memory descriptors (an optimization to improve context switch times).
Finally, the thread_struct identifies the stored state of the process. This element depends on the particular architecture on which Linux is running, but you can see an example of this in ./linux/include/asm-i386/processor.h. In this structure, you'll find the storage for the process when it is switched from the executing context (hardware registers, program counter, and so on). Process management Now, let's explore how you manage processes within Linux. In most cases, processes are dynamically created and represented by a dynamically allocated task_struct. One exception is the init process itself, which always exists and is represented by a statically allocated task_struct. You can see an example of this in ./linux/arch/i386/kernel/init_task.c. All processes in Linux are collected in two different ways. The first is a hash table, which is hashed by the PID value; the second is a circular doubly linked list. The circular list is ideal for iterating through the task list. As the list is circular, there's no head or tail; but as the init_task always exists, you can use it as an anchor point to iterate further. Let's look at an example of this to walk through the current set of tasks. The task list is not accessible from user-space, but you can easily solve that problem by inserting code into the kernel in the form of a module. A very simple program is shown in Listing 2 that iterates the task list and provides a small amount of information about each task ( name, pid, and parent name). Note here that the module uses printk to emit the output. To view the output, you need to view the /var/log/messages file with the cat utility (or tail -f /var/log/messages in real time). The next_task function is a macro in sched.h that simplifies the iteration of the task list (returns a task_struct reference of the next task). Listing 2. 
Simple kernel module to emit task information (procsview.c)

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched.h>

int init_module( void )
{
  /* Set up the anchor point */
  struct task_struct *task = &init_task;

  /* Walk through the task list, until we hit the init_task again */
  do {
    printk( KERN_INFO "*** %s [%d] parent %s\n",
            task->comm, task->pid, task->parent->comm );
  } while ( (task = next_task(task)) != &init_task );

  return 0;
}

void cleanup_module( void )
{
  return;
}

You can compile this module with the Makefile shown in Listing 3. When compiled, you can insert the kernel object with insmod procsview.ko and remove it with rmmod procsview.

Listing 3. Makefile to build the kernel module

obj-m += procsview.o

KDIR := /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)

default:
	$(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules

After insertion, /var/log/messages displays output as shown below. You can see here the idle task (called swapper) and the init task (pid 1).

Nov 12 22:19:51 mtj-desktop kernel: [8503.873310] *** swapper [0] parent swapper
Nov 12 22:19:51 mtj-desktop kernel: [8503.904182] *** init [1] parent swapper
Nov 12 22:19:51 mtj-desktop kernel: [8503.904215] *** kthreadd [2] parent swapper
Nov 12 22:19:51 mtj-desktop kernel: [8503.904233] *** migration/0 [3] parent kthreadd
...

Note that it's also possible to identify the currently running task. Linux maintains a symbol called current that is the currently running process (of type task_struct). If at the end of init_module you add the line:

printk( KERN_INFO "Current task is %s [%d]\n", current->comm, current->pid );

you would see:

Nov 12 22:48:45 mtj-desktop kernel: [10233.323662] Current task is insmod [6538]

Note that the current task is insmod, because the init_module function executes within the context of the execution of the insmod command.
The current symbol actually refers to a function ( get_current) and can be found in an arch-specific header (for example, ./linux/include/asm-i386/current.h). Process creation So, let's walk through the creation of a process from user-space. The underlying mechanism is the same for user-space tasks and kernel tasks, as both eventually rely on a function called do_fork to create the new process. In the case of creating a kernel thread, the kernel calls a function called kernel_thread (see ./linux/arch/i386/kernel/process.c), which performs some initialization, then calls do_fork. A similar action occurs for user-space process creation. In user-space, a program calls fork, which results in a system call to the kernel function called sys_fork (see ./linux/arch/i386/kernel/process.c). The function relationships are shown graphically in Figure 1. Figure 1. Function hierarchy for process creation From Figure 1, you can see that do_fork provides the basis for process creation. You can find the do_fork function in ./linux/kernel/fork.c (along with the partner function, copy_process). The do_fork function begins with a call to alloc_pidmap, which allocates a new PID. Next, do_fork checks to see whether the debugger is tracing the parent process. If it is, the CLONE_PTRACE flag is set in the clone_flags in preparation for forking. The do_fork function then continues with a call to copy_process, passing the flags, stack, registers, parent process, and newly allocated PID. The copy_process function is where the new process is created as a copy of the parent. This function performs all actions except for starting the process, which is handled later. The first step in copy_process is validation of the CLONE flags to ensure that they're consistent. If they're not, an EINVAL error is returned. Next, the Linux Security Module (LSM) is consulted to see whether the current task may create a new task. 
To learn more about LSMs in the context of Security-Enhanced Linux (SELinux), check out the Resources section. Next, the dup_task_struct function (found in ./linux/kernel/fork.c) is called, which allocates a new task_struct and copies the current process's descriptors into it. After a new thread stack is set up, some state information is initialized and control returns to copy_process. Back in copy_process, some housekeeping is performed in addition to several other limit and security checks, including a variety of initialization on your new task_struct. A sequence of copy functions is then invoked that copy individual aspects of the process, from copying open file descriptors ( copy_files), copying signal information ( copy_sighand and copy_signal), copying process memory ( copy_mm), and finally copying the thread ( copy_thread). The new task is then assigned to a processor, with some additional checking based on the processors on which the process is allowed to execute ( cpus_allowed). After the priority of the new process inherits the priority of the parent, a small amount additional housekeeping is performed, and control returns to do_fork. At this point, your new process exists but is not yet running. The do_fork function fixes this with a call to wake_up_new_task. This function, which you can find in ./linux/kernel/sched.c), initializes some of the scheduler housekeeping information, places the new process in a run queue, then wakes it up for execution. Finally, upon returning to do_fork, the PID value is returned to the caller and the process is complete. Process scheduling While a process exists in Linux, it can potentially be scheduled through the Linux scheduler. Although outside of the scope of this article, the Linux scheduler maintains a set of lists for each priority level on which task_struct references reside. 
Tasks are invoked through the schedule function (available in ./linux/kernel/sched.c), which determines the best process to run based on loading and prior process execution history. You can learn more about the Linux version 2.6 scheduler in Resources. Process destruction Process destruction can be driven by several events—from normal process termination, through a signal, or through a call to the exit function. However process exit is driven, the process ends through a call to the kernel function do_exit (available in ./linux/kernel/exit.c). This process is shown graphically in Figure 2. Figure 2. Function hierarchy for process destruction The purpose behind do_exit is to remove all references to the current process from the operating system (for all resources that are not shared). The destruction process first indicates that the process is exiting by setting the PF_EXITING flag. Other aspects of the kernel use this indication to avoid manipulating this process while it's being removed. The cycle of detaching the process from the various resources that it attained during its life is performed through a series of calls, including exit_mm (to remove memory pages) to exit_keys (which disposes of per-thread session and process security keys). The do_exit function performs various accountings for the disposal of the process, then a series of notifications (for example, to signal the parent that the child is exiting) is performed through a call to exit_notify. Finally, the process state is changed to PF_DEAD, and the schedule function is called to select a new process to execute. Note that if signalling is required to the parent (or the process is being traced), the task will not completely disappear. If no signalling is necessary, a call to release_task will actually reclaim the memory that the process used. Going further Linux continues to evolve, and one area that will see further innovation and optimization is process management. 
While keeping true to UNIX principles, Linux continues to push the boundaries. New processor architectures, symmetrical multiprocessing (SMP), and virtualization will drive new advances in this area of the kernel. One example is the new O(1) scheduler introduced in Linux version 2.6, which provides scalability for systems with large numbers of tasks. Another is the updated threading model using the Native POSIX Thread Library (NPTL), which enables efficient threading beyond the prior LinuxThreads model. You can learn more about these innovations and what's ahead in Resources.

Resources

Learn

- For a great look at memory management in Linux, check out Mel Gorman's Understanding the Linux Virtual Memory Manager (Prentice Hall, 2004), which is available in PDF form. This book provides a detailed but accessible presentation of memory management in Linux, including a chapter on process address spaces.
- For a nice introduction to process management, see Performance Tuning for Linux: An Introduction to Kernels (Prentice Hall, 2005). A sample chapter is available from IBM Press.
- Linux provides an interesting approach to system calls that involves transitioning between user-space and the kernel (separate address spaces). You can read more about this in "Kernel command using Linux system calls" (developerWorks, March 2007).
- In this article, you saw cases in which the kernel checked the security capabilities of the caller. The basic interface between the kernel and the security framework is called the Linux Security Module. To explore this module in the context of SELinux, read "Anatomy of Security-Enhanced Linux (SELinux)" (developerWorks, April 2008).
- The Portable Operating System Interface (POSIX) standard for threads defines a standard application programming interface (API) for creating and managing threads. You can find implementations for POSIX on Linux, Sun Solaris, and even non-UNIX-based operating systems.
- The Native POSIX Thread Library is a threading implementation in the Linux kernel for efficiently executing POSIX threads. This technology was introduced into the 2.6 kernel, where the prior implementation was called LinuxThreads.
- Read "TASK_KILLABLE: New process state in Linux" (developerWorks, September 2008) for an introduction to a useful alternative to the TASK_UNINTERRUPTIBLE and TASK_INTERRUPTIBLE process states.
- Read more of Tim's articles on developerWorks.
http://www.ibm.com/developerworks/library/l-linux-process-management/
Servlets: Java Servlet technology questions

Q: What are filters in Java servlets?
A: Filters are powerful tools in the servlet environment. They add functionality to servlets beyond the basic request/response processing paradigm.

Q: What is a ResultSet?
A: A ResultSet is a Java object used in database connectivity to hold the data returned by a SELECT query. When we run a SELECT query, it returns the data in a table format.

Q: How can I run a Java servlet thread-safety program using the Tomcat server? Please give a step-by-step procedure. My program is a demo program for thread safety (package serv, import java...).

Q: How can I connect Java servlets with MySQL? I am using Apache Tomcat 5.5.

Q: What is the difference between servlets and JSP?
A: A JSP is layered on top of Java Servlets; the Java Servlet is the fundamental component for using Java on the web to respond to HTTP requests.

Q: What is the scope of context parameters versus servlet init parameters?
A: Context parameters are available to all the servlets within an application. Init parameters are specified for a particular servlet and are unknown to other servlets.

Q: How can I run a servlet program on my system? I wrote the code in Java, taking the HelloWorld program as an example.

Q: How do I deploy servlets using Tomcat?
A: Prepare the required files (/web.xml, /addstudent.html), compile the Java files, place them in the web application directory structure, and deploy to Tomcat.

Q: (JDBC) I am using an Oracle database, and it asks for 1. user name, 2. password, 3. host string. If we don't give the host string, it doesn't connect to Oracle.

Q: How would you set an error message in the servlet and send the user back to the JSP page? Please give Java or pseudo-code examples.

Q: What are the different ways you can communicate between servlets?
A: 1) Using a RequestDispatcher object.

Q: (Online examination project) I have built a question page with 25 questions. I use four radio buttons for each question (i.e., four choices). How can I send the value and name of each?

Q: I get the following error when I try to run a Java file that connects to MySQL: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
http://www.roseindia.net/tutorialhelp/comment/92155
Developing extensions to the headless framework (Buckminster) From Eclipsepedia Adding a Headless Command Not surprisingly, the framework deals with making it possible to add a new command, for example 'helloworld', so you can write buckminster helloworld The name, an optional namespace (in addition to the plugin namespace, which is always prefixed to the name) and 0-M aliases are declared in the extension point along with an implementation class. The implementation class must derive from the base class AbstractCommand. In principle, only the method 'run' needs to be overridden, but likely you wish to override others, for example to declare option support and callbacks for detection of them. Also, the declaration can tag the command as 'hidden' (works, but not shown on listings) or 'disabled' (doesn't work). Finally, you may indicate whether the framework should automatically add support for the option flags '-?' '--help' (recommended). Assuming the framework manages the help options, you should add a file with the same name as the command class name, but with a '.help' extension with the help text. As noted, the long name of a command is always prefixed with the plugin id. Thus, a plugin with the id 'org.somedomain.plugin' declaring a command with the name 'helloworld' and alias 'hw' will in reality be known as 'org.somedomain.plugin.helloworld' and 'org.somedomain.plugin.hw'. As another example, the name can contain further namespace levels - e.g. 'my.name.space.helloworld', leading to 'org.somedomain.plugin.my.name.space.helloworld'. However, note that any aliases do not contain a namespace — they will just work in the same namespace as the main/real name. Commands are supposed to cooperate with some settings supplied by the framework (which may be reflections of user selections): currently this deals with a suggested location for storing tmp files, and an implementation of progress monitor to provide to lower layers. See the 'context' for details. 
What is this context? where can I read about it? Commands can typically write freely to System.out and System.err. Note that both or either may have been trapped by the frameworks logging mechanism. Example Here is an example: Adding a helloworld headless command Adding a headless preference mapping Write this section Adding a headless progress monitor extension Write this section
http://wiki.eclipse.org/Developing_extensions_to_the_headless_framework_%28Buckminster%29
I am looking at ways to implement infinite scrolling with React. I have come across react-infinite-scroll and found it inefficient, as it just adds nodes to the DOM and doesn't remove them. Is there any proven solution with React that will add, remove, and maintain a constant number of nodes in the DOM?

Here is the JSFiddle problem. In this problem, I want to have only 50 elements in the DOM at a time; others should be loaded and removed as the user scrolls up and down. We started using React because of its optimization algorithms, but now I can't find a solution to this problem. I have come across Airbnb Infinite JS, but it is implemented with jQuery. To use that infinite scroll, I would have to lose the React optimization, which I don't want to do.

Sample code I want to add scrolling to (here I am loading all items; my goal is to load only 50 items at a time):

/** @jsx React.DOM */
var Hello = React.createClass({
  render: function() {
    return (<li>Hello {this.props.name}</li>);
  }
});

var HelloList = React.createClass({
  getInitialState: function() {
    var numbers = [];
    for (var i = 1; i < 10000; i++) {
      numbers.push(i);
    }
    return { data: numbers };
  },
  render: function() {
    var response = this.state.data.map(function(contact) {
      return (<Hello name="World"></Hello>);
    });
    return (<ul>{response}</ul>);
  }
});

React.renderComponent(<HelloList/>, document.getElementById('content'));

Looking for help...

Basically, when scrolling, you want to decide which elements are visible and then rerender to display only those elements, with a single spacer element on top and bottom to represent the offscreen elements.
Vjeux made a fiddle here which you can look at. Upon scrolling it executes:

scrollState: function(scroll) {
  var visibleStart = Math.floor(scroll / this.state.recordHeight);
  var visibleEnd = Math.min(visibleStart + this.state.recordsPerBody, this.state.total - 1);

  var displayStart = Math.max(0, Math.floor(scroll / this.state.recordHeight) - this.state.recordsPerBody * 1.5);
  var displayEnd = Math.min(displayStart + 4 * this.state.recordsPerBody, this.state.total - 1);

  this.setState({
    visibleStart: visibleStart,
    visibleEnd: visibleEnd,
    displayStart: displayStart,
    displayEnd: displayEnd,
    scroll: scroll
  });
},

and then the render function will display only the rows in the range displayStart..displayEnd. You may also be interested in ReactJS: Modeling Bi-Directional Infinite Scrolling.

Update December 2016

I've actually been using react-virtualized in a lot of my projects recently and find that it covers the majority of use cases a lot better. Both libraries are good; it depends on exactly what you're looking for. For instance, react-virtualized supports variable height JIT measuring via an HOC called CellMeasurer, example here.
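The range arithmetic inside scrollState can be isolated into a plain function, which makes the windowing idea easy to reason about outside React. This is a sketch: the state fields become plain parameters, and the 1.5x padding is floored to keep indices integral:

```javascript
// Compute which rows to render for a given scroll offset.
// A small "visible" range is padded into a larger "display" range
// so there is already-rendered content above and below the viewport.
function computeWindow(scroll, recordHeight, recordsPerBody, total) {
  var visibleStart = Math.floor(scroll / recordHeight);
  var visibleEnd = Math.min(visibleStart + recordsPerBody, total - 1);

  var displayStart = Math.max(0, visibleStart - Math.floor(recordsPerBody * 1.5));
  var displayEnd = Math.min(displayStart + 4 * recordsPerBody, total - 1);

  return {
    visibleStart: visibleStart,
    visibleEnd: visibleEnd,
    displayStart: displayStart,
    displayEnd: displayEnd
  };
}

// At the top of a 10,000-row list with 20px rows and 10 rows per
// viewport, only rows 0..40 are rendered instead of all 10,000.
console.log(computeWindow(0, 20, 10, 10000));
```

A component would call this on every scroll event, render rows displayStart..displayEnd, and size the top and bottom spacers as displayStart * recordHeight and (total - 1 - displayEnd) * recordHeight.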
https://exceptionshub.com/infinite-scrolling-with-react-js-2.html
In this tutorial we'll dive back into the Vuforia Augmented Reality (AR) library, exploring one of its most interesting resources—the Image Target. We'll expand on the Shoot the Cubes game that we created in earlier lessons, adding a new level where the player needs to defend their base from attacking cubes. This tutorial can be completed alone, although if you want an introduction to AR with Vuforia and Unity3D, check out the earlier posts in the series. - Mobile DevelopmentPokémon GO Style Augmented Reality With VuforiaTin Megali - Mobile DevelopmentCreate a Pokémon GO Style Augmented Reality Game With VuforiaTin Megali Image Targets Any kind of image can be a Vuforia Image Target. However, the more detailed and intricate the image, the better it will be recognized by the algorithm. A lot of factors will be part of the recognizing calculation, but basically the image must have a reasonable level of contrast, resolution, and distinguishing elements. A blue sky photograph wouldn't work very well, but a picture of some grass would work gracefully. Image Targets can be shipped with the application, uploaded to the application through the cloud, or directly created in the app by the user. Adding a Target Let’s begin by adding an ImageTarget element to our Unity project. First, download the course assets from the button in the sidebar. Then, in your Unity project, create a new scene called DefendTheBase: in the Project window, select the Scenes folder and click on Create > Scene. Now open that scene and remove all the default scene objects from the hierarchy. Next we'll add a light and camera. Click on Add > Light > Directional Light to add a directional light. Select this new light and set Soft Shadow as the Shadow Type option. After that, drag and drop an ARCamera object from Vuforia > Prefabs. Select the ARCamera object and in the inspector panel, set the App License Key created on the Vuforia developer page (see the first tutorial for instructions). 
Select DEVICE_TRACKING for the World Center Mod. Finally, drag and drop an ImageTarget to the hierarchy from Vuforia > Prefabs. Now we have to add a Vuforia Database. First, navigate to. Click on Add Database and choose a name. There are three types of Database to choose from: - Device: The Database is saved on the device and all targets are updated locally. - Cloud: Database on the Vuforia servers. - VuMark: Database exclusive to VuMark targets. It is also saved on the device. In this case, choose the Device option and click on create. Select the new database so we can start adding targets to it. Now it is time to add targets to the database. For now, we’ll just use the Single Image option. Navigate to the previously downloaded files, pick ImageTarget1, and set its Width to 1 and click on Add. (Note: If you prefer to create your own Image Target, read the guide first.) Now you can download the database, selecting Unity Editor as the chosen platform. Open the file and select all elements to be imported. We must also prepare our Unity scene to recognize the ImageTarget with this database we have created. In the Unity editor, click on the ImageTarget object. First, find and expand Image Target Behavior in the object inspector. Select a Type of Predefined. Choose the image target we created earlier for Database. Finally, make sure that the Enable Extended Tracking and Enable Smart Terrain options are both disabled. The ImageTarget prefab is made of a series of components, including some scripts like Image Target Behavior, Turn Off Behavior, and Default Tracker Event Handler. If you want to deeply understand how the system works, read those scripts and try to understand their relationship to other components. For this tutorial, we won't dig too deep, though. We’ll only need to concentrate on the Default Tracker Event Handler, which receives calls when the image target tracking status changes. So let’s use this script as a base to create our own script behavior. 
Create a copy of this script that we can extend. First select Default Tracker Event Handler, click on options and select Edit Script. Now, make a copy of the script. If you’re using MonoDevelop, click File > Save As and save as ImageTargetBehavior, saving it in the Scripts folder. The TargetBehaviorScript Script We won’t need the Vuforia namespace in our script. Remove the line “ namespace Vuforia” and the brackets. That means we'll need to explicitly reference the Vuforia namespace when we want to access its classes: using UnityEngine; using System.Collections; public class BaseScript : MonoBehaviour, Vuforia.ITrackableEventHandler { // code here } The most important method in this class will be the OnTrackableStateChanged method that receives calls when the image target is found or lost by the camera device. According to the target status, it calls OnTrackingFound or OnTrackingLost, and we’ll need to edit those methods as well. But first, let’s think about how we want the image target to behave. In this game, the user will defend a base that appears on an image target. Let’s consider the following game mechanics: - Once the target is recognized by the system, the base appears and enemies start to spawn and fly toward the base in a kamikaze style. - Every time an enemy hits the base, the base will take some damage and the enemy will be destroyed. - To win the game the user must shoot and destroy all enemies before the base is destroyed. - If the image target is lost (is no longer visible from the device camera), the game will start a countdown timer. If the timer gets to zero, the game is lost. While the target is lost, all enemies will stop advancing toward the base. So we’ll need to adapt those game mechanics on top of what we built in the last tutorial. We'll create the enemy spawning logic in the next section with an empty object named _SpawnController, using the same logic adopted in the first part of the game. For now, let's look at the tracking found logic. 
Back in the Unity editor, we can create the base object that will be spawned by the spawn controller. First, on the ImageTarget object, disable the Default Trackable Event Handler script. Next, click on Add Component and select the Target Behavior Script. From the Hierarchy panel, right click on ImageTarget and create a new cube named "Base". This cube should be inserted inside the ImageTarget object. Make sure that the Base has Box Collider and Mesh Renderer enabled. Optionally, you could also insert a Plane object inside the ImageTarget using the ImageTarget submitted earlier in Vuforia as a texture. This would create an interesting effect, projecting shadows from the target and creating a richer experience.

Adapting the SpawnScript

Now we will adapt the _SpawnController used in the last tutorial. Save the current scene and open ShootTheCubesMain from the last tutorial. In the Hierarchy panel, select the _SpawnController and drag it to the Prefabs folder to make it a Unity Prefab. Save this new scene and reopen DefendTheBase. Drag _SpawnController from the prefabs folder to the Hierarchy panel. With the _SpawnController selected, click on Add Tag on the Inspector panel. Name the new tag _SpawnController and apply it to the object. In the Project window, select the Cube element in the Prefab folder and set its Tag, back on its inspector, to 'Enemy'. Finally, open the Scripts folder and open SpawnScript. We need to make this script adapt itself to the loaded scene. Next, we need to create two public methods to receive calls from TargetBehaviorScript when the target is found or lost: BaseOn(Vector3 basePosition) will be called when the target is found by the camera and the Base object is shown. It will change the spawning position, start the process, and inform all cubes that were previously added to the stage that the base is visible. The BaseOff() method will be used when the target is lost.
It will stop the staging process and inform all cube elements that the base was lost. The SetPosition (System.Nullable<Vector3> pos) uses the target’s current position to modify the object x, y, and z axes, and it can also receive a null value when the scene loaded is ShootTheCubesMain. ); } } } InformBaseOnToCubes() and InformBaseOffToCubes() are responsible for informing all staged cubes of the current base status. //); } } The SpawnLoop() and SpawnElement() methods are using almost the same logic as the last tutorial. //++; } yield return new WaitForSeconds (Random.Range (mTimeToSpawn, mTimeToSpawn * 3)); } } // Spawn a cube private GameObject SpawnElement () { // spawn the element on a random position, inside a imaginary sphere GameObject cube = Instantiate (mCubeObj, (Random.insideUnitSphere * 4) + transform.position, transform.rotation) as GameObject; // define a random scale for the cube float scale = Random.Range (0.5f, 2f); // change the cube scale cube.transform.localScale = new Vector3 (scale, scale, scale); return cube; } #endregion // PRIVATE_METHODS Creating the Enemies Now we’ll need to create some enemies. We'll use the Cube object that we created in the last tutorial, making some modifications to its script. In the Prefabs folder, add a Cube object to the hierarchy. Then select the object and edit the CubeBehaviorScript. We’ll preserve almost the same logic in this script, but with the following differences: - The Cube will pursue the Base when the target is found by the camera. - When the Cube hits the Base, it will destroy itself and give some damage to the Base. - The script needs to know the name of the scene loaded and adapt itself accordingly. If the scene's name is DefendTheBase, it must find the Base object and start to move towards it. The CubeSettings() also need to adapt according to the scene loaded. The Cube only orbits on the y-axis for the DefendTheBase scene. > (); } We’ll add some new logic to the RotateCube() method. 
The cube objects will rotate around the base while the target is visible. When the target is not visible, they will continue to rotate around the Camera, using the same logic as in the last tutorial. //); } // Scale object from 0 to 1 private void ScaleObj(){ // growing obj if ( transform.localScale != mCubeMaxScale ) transform.localScale = Vector3.Lerp( transform.localScale, mCubeMaxScale, Time.deltaTime * mGrowingSpeed ); else mIsCubeScaled = true; } To move the object toward the base, we’ll need to check first if the base is present, and then apply the position steps to the object. //); } } The DestroyCube() method is the same as before, but now we'll add a new method—the TargetHit(GameObject) method—that will be called when the base is hit. Note that the BaseHealthScript referenced in TargetHit() hasn't been created yet. // Finally, we’ll add the public methods to be called when the cube takes a hit, when it collides with the base, or when the base changes status. Controlling the Base Health. Let’s begin adding the health bar. In the Hierarchy panel in the Unity editor, click on Create > UI > Slider. A new Canvas element will be added to the hierarchy. It contains UI elements, including the new Slider. Expand the Canvas and select the Slider. Change the slider element name to UIHealth. In the Inspector panel, expand Rect Transform and set Width to 400 and Height to 40. Set Pos X to -220, Pos Y to 30, and Pos Z to 0. Now expand the slider script in the hierarchy. Unselect the Interactable option. For Target Graphic, click on the small ‘dot’ on the right side and select the Background image. - Set the Min Value to 0 and Max Value to 100. - Select Whole Numbers. - Set Value to 100. Now, expand the Slider panel to expose its child elements: Background, Fill Area, and Handle Slide Area. - Delete Handle Slide Area. - Select Background and set its Color to a darker shade of green, like #12F568FF. 
- Expand Fill Area and select the Fill object and set its color to #7FEA89FF. This is how the Game Window should look with the health bar. The Base Health Script MyBase. } Now we need to add and configure the script. Select the Base in the hierarchy, click on Add Component, and add an Audio Source. Now drag MyBase to the Base element and, in the Inspector panel, expand MyBase. Select a sound effect for the explosion and hit. I’ve used the explosion clip used in the last tutorial, but feel free to add your own. Finally, in the Health Slider, select the UISlider element. Defending the Base Our new game experience is almost done. We only need to shoot some lasers to start defending our base. Let's create a script for the laser! First drag the _PlayerController from the Prefab folder to the hierarchy. Expand _PlayerController and select _LaserController. In the Inspector panel, find Laser Script and click on Edit. The only thing that we need to change in this script is the position of the laser. // Shot the Laser private void Fire () { // Get ARCamera Transform Transform cam = Camera.main.transform; // Define the time of the next fire mNextFire = Time.time + mFireRate; // Set the origin of the RayCast Vector3 rayOrigin = cam.position; //); } } Trying Out the Game! At this point, you have a good understanding of how the Vuforia system works and how to use it with Unity. I expect that you've enjoyed this journey as much as I have. See you soon! To learn more about Augmented Reality with Vuforia and Unity, check out our video course here on Envato Tuts+! Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/create-a-pokemon-go-style-augmented-reality-game-with-vuforia-part-3--cms-28246
Python Programming, news on the Voidspace Python Projects and all things techie. PyCon UK and Mix UK Photos Unfortunately I forgot my camera at both the PyCon UK Mix UK conferences I attended recently. Luckily other people weren't so dopey. A picture of me giving my presentation at PyCon UK, from Chris Miles: The next one is by Craig Murphy, and you can just about see me with Zi watching Scott Guthrie demonstrating the next generation of ASP.NET (a great improvement): Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-09-19 22:14:38 | | Resolver Screencast: Web Application in Ten Minutes Giles has put up a ten minute screencast of Resolver in action. As you will probably guess from the title, it covers both the desktop version and the web server. If after all my descriptions you still haven't got a clue what Resolver is actually for, this shows off some of the best features in just ten minutes. If this whets your appetite, you can still sign up for the Resolver beta program. The next phase of the beta is still a couple of weeks away, but shouldn't be much longer. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-09-19 21:40:48 | | Categories: Work, IronPython Python Interactive Interpreter in a Browser Ok, so I lied. One more entry on Silverlight. I've got a proof of concept 'Interactive Interpreter in the Browser' working with IronPython & Silverlight. There are lots of ways it could be improved (I'll discuss these in a minute), but it is still quite fun. This could be embedded into web based tutorials (like Crunchy) to provide live examples. Because the Python code executes in the browser (not on the server) there are no security issues to worry about. It doesn't actually use the Silverlight canvas, just an HTML textarea and text input field, so you could have several in a page if you wanted. You could also prepopulate the namespace(s) with live objects. It has several limitations, most of which someone (you?) 
could fix and one of which will have to wait for the next update to Silverlight. If you fancy downloading it as it is, head over to my IronPython & Silverlight Page. - It would be greatly improved by being in a single textarea field. Only the last line after the prompt should be editable. This is fiddly to do in a cross-browser way, but not very difficult [1]. - Evaluated expressions aren't shown as they are in the 'normal' interactive interpreter (you have to use print to see the result). This is actually due to a bug in IronPython 2, which will be fixed in the next release (basically a one line fix!) [2]!. - You have to provide the standard library if you want it to be available! - In Silverlight the file type doesn't exist (for a good reason), so more stuff than usual is broken. Note that this does use some standard library modules, which I've modified slightly to work with IronPython & Silverlight. These are included in the download. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-09-19 21:08:20 | | Categories: Website, Python, IronPython, Hacking IronPython & Silverlight Articles & Resources Ok, last post on Silverlight for a while. I've turned my PyCon / Mix UK talks into a series of articles. You can access the articles, downloads and online examples from: The articles are: - Introduction to IronPython & Silverlight - An IronPython Silverlight Application - The Silverlight APIs - From Silverlight to Javascript and Back Again - Embedding IronPython in a C# Silverlight Application For Mix I added a couple of extra examples to the Web IDE: You can download the source files from the main articles page. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-09-17 13:47:07 | | Categories: Writing, IronPython, Hacking Archives This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License. Counter...
http://www.voidspace.org.uk/python/weblog/arch_d7_2007_09_15.shtml
Java stacks certainly are tall. You have your web server, your application server, servlet container, an IoC container, JPA, JAAS, JAX-RS, and that’s before you actually write any code. The Play! framework seems set to change all that. It throws nearly all of the enterprise out of Java and instead provides you with a very structured, very Rails-like, web environment. You’ve got routes, controllers, something that resembles ActiveRecord, background jobs, built-in authentication, loads of nice plugins. In general, it’s very refreshing. For example, in my (relatively flat) stack of Jersey and Jetty, it took me off-and-on about a week to implement Facebook authentication. Lots of fiddling with callback urls and hand-rolling Apache Shiro handlers. I got it working in the end, but it was pretty nasty. By comparison, using Play! was as simple as adding play -> fbconnect 0.5 to the dependencies.yml file (yes, that’s YAML in Java, not XML!) and changing my view to include a new #{fbconnect.button}. That’s it! Play! also has a fairly unique feature in Java-land, and that’s dynamic reloading and compilation of classes. It’s just like using Ruby. Edit a file, refresh your browser and your changes are immediately visible; not just changes to views, but to the controllers and models too. A great improvement over the regular rebuild/reload cycle. All in all, Play! has turned out to be an almost perfect Java web framework. Almost. Then we get to the testing story. I’m going to be blunt here. Despite Play! promoting how easy it is to test, I’m fairly sure the developers don’t actually do much testing; at the very least, they don’t do much unit testing. Where to start? Dependency injection I’m not talking containers here. 
A fairly ubiquitous practice for testing web applications is to use constructor injection in your Controllers, injecting any services your controller needs into the constructor; those services are then used by the action methods to do their job, but more importantly they can be mocked or stubbed as part of a unit test. An ASP.Net MVC example would look something like this: public class MyController : Controller { readonly IMyService myService; public MyController(IMyService myService) { this.myService = myService; } public void Index() { myService.DoSomething(); } } That way, in a unit test we can do this: [Test] public void should_do_something() { var myService = MockRepository.GenerateMock<IMyService>(); new MyController(myService) .Index(); myService.AssertWasCalled(x => x.DoSomething()); } Piece of cake. Play! on the other hand is not so simple. Play! requires controller action methods to be static; the justification for this is that controllers have no state, and thus are static. That makes sense, but it does so at the (in my opinion, fairly large) expense of constructor injection. You can’t call a static constructor, so you can’t pass in a dependency, so you can’t mock your dependency. The equivalent example in Play! would be this: public class MyController extends Controller { public static void index() { MyService myService = new MyServiceImpl(); myService.doSomething(); } } How can we test that controller in isolation? We can’t very easily. At least, not without using something like PowerMock (think TypeMock) to do some bytecode/reflection magic. One proposed solution to this is to use an IoC container like Google Guice and inject a static field. public class MyController extends Controller { @Inject MyService myService; public static void index() { myService.doSomething(); } } That’s an improvement, but without constructor injection we have to bring a full container into the unit tests or make the field public and overwrite it manually. 
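To make the "overwrite it manually" option concrete, here is a rough sketch. MyService and MyController are the hypothetical classes from the examples above; no container or Play! runtime is involved, which is exactly the point — and also why it feels like a workaround:

```java
// Stand-ins for the examples above; in real Play! code the field would
// be populated by Guice's static injection.
interface MyService {
    void doSomething();
}

class MyController {
    static MyService myService;   // public/static so the test can reach it

    public static void index() {
        myService.doSomething();
    }
}

// A hand-rolled fake that records whether it was called.
class RecordingService implements MyService {
    boolean called = false;
    public void doSomething() { called = true; }
}

public class StaticInjectionTest {
    public static void main(String[] args) {
        RecordingService fake = new RecordingService();
        MyController.myService = fake;   // manual overwrite, no framework
        MyController.index();
        System.out.println(fake.called ? "service called" : "not called");
    }
}
```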
Not exactly pretty. Another reason bandied around is “anaemic domain model”. Models should do things, I get that; however, we’re not in Ruby here, if my entity takes a hard-dependency on a service, how exactly am I supposed to test that in isolation? If an email should be sent when a user is created, I don’t want to have an actual SMTP server running just to execute my unit tests. In Ruby we could do some monkey patching and replace the SMTP service at runtime, but this is Java and we can’t do that (without resorting to service locators or singletons). I had an idea of using a JPA interceptor and injecting dependencies into an entity when it’s hydrated by Hibernate, but that just seems like a recipe for disaster. So, deal breaker number 1: No easy way to mock dependencies, one way or another. A brief diversion: Play! doesn’t seem to really do unit testing. It refers to things as unit tests, but really they’re all integration tests. As mentioned already, you can’t easily replace your dependencies with stubs or mocks, so inevitably you need to run your tests against a real database, your emails to a real SMTP service, and your messages to a real messaging queue. This sucks. I’m all for integration tests, and if I had to pick between them and unit tests, I’d put my money on integration tests; however, I’m not yet of the belief that I can live entirely without unit tests. Some things should still be tested in isolation; specifically, if I’m dealing with external services, I shouldn’t need them up-and-running to run a unit test. IDE support Java is where IDEs thrive. Whilst I know Play! is heavily influenced by Rails, I don’t yet think I could live without an IDE. IDEs have their strong points, and unit test runners are one of them. Great little things, one keyboard shortcut and all your tests are spinning away. Not for Play! though, or not very easily anyway. Despite Play!s “unit tests” being based on JUnit, they can’t actually be ran as plain-old-JUnit tests. 
If you interact with any of the models, or any of the Play! specific classes, you need the full web environment to be available. In fact, the default runner for unit tests is the website itself. I’m all for running QUnit tests in the browser, but JUnit tests, really? No thanks. Deal breaker number 2: Can’t run unit tests in the IDE. It takes 6 seconds on my fairly meaty laptop to run one unit test. That’s unbelievable. In addition, as Play! requires the web environment to run tests, that also means it kicks off any start-up jobs your application has. So whenever I run a test, it spins up my message queue, my database connection, and runs my data import routines (when in test mode). Deal breaker number 3: Can’t run unit tests without spinning up the entire website (and that’s not fast). Example projects So there’s me thinking “It can’t possibly be this bad”. I decided to have a hunt around and see if there are any open-source applications built with Play!, or at the very least some reasonably sized examples. There were a few; however, none of them had test suites. In fact, nearly all of them still had the default tests that are provided with a new project. public class BasicTest extends UnitTest { @Test public void aVeryImportantThingToTest() { assertEquals(2, 1 + 1); } } Finally, one thing that really made me feel that the developers don’t really get testing was their “mock” SMTP service. Take a look at line 36 of their Mail.java. A hand-rolled mock, in the main service. I don’t say this often but: WTF. Is this what’s considered good practice? I’m so incredibly disappointed in Play!. It’s a wonderful framework which was obviously designed by people who don’t really do testing; or at the very least, don’t do anything other than end-to-end integration tests. I’d love to use Play!, but I just don’t know if I can get past these issues. Everything else about it has been such an improvement over my previous stack, but all of that is next to worthless if I can’t test it. 
If anyone has any information or experiences to the contrary, I’d gladly be shown the err in my ways. How do you test with Play!? Integration tests only or is there some secret sauce I’m missing out on? I really do want to like Play! but it just seems so difficult to do proper testing. Some links:
https://lostechies.com/jamesgregory/2011/09/18/observations-on-the-play-framework/
Here is the problem: Given a list of integers, A0, A1, ..., An-1, each of which may be positive or negative, find the sublist of integers Ai, ..., Aj which has the maximum sum of any sublist. If all the integers are negative, return 0. A sublist has to start at some position in the original list, finish at some later position, and include every number which is in the original list between those positions. So, for example, if the list is {10,-20,11,-4,13,3,-5,-17,2,15,1,-7,8} then the answer is 23, since this is the sum of the sublist {11,-4,13,3} and no other sublist has a higher sum. The answer is always at least 0, because the empty list is always a possible sublist, and its sum is 0. What I have done is

Code :

    import java.util.*;

    class Exercise6c {
        // Front end code for the problem set as Exercise 6
        public static void main(String[] args) {
            Scanner input = new Scanner(System.in);
            System.out.println("Enter some numbers (all on one line, separated by commas):");
            String line = input.nextLine();
            String[] numbers = line.split(",");
            int[] array = new int[numbers.length];
            for (int i = 0; i < array.length; i++)
                array[i] = Integer.parseInt(numbers[i].trim());
            int highSum = highestSum(array);
            System.out.println("The highest sum of a sublist of the numbers is: " + highSum);
        }

        public static int highestSum(int[] a) {
            int sum = 0;
            int sm = 0;
            for (int i = 0; i < a.length; i++) {
                for (int j = 1; j < a.length; j++) {
                    // sum = a[i];
                    for (int k = 0; k < (j - i) + 1; k++) {
                        sum = sum + a[i + k];
                        if (sum > sm) {
                            sm = sum;
                        }
                    }
                }
            }
            return sm; // ... To be filled in ...
        }
    }

Just look at the method highestSum. (Apparently one of the applications is stock market analysis.) Can anyone solve the problem by just altering the method highestSum to find the solution to the task?
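For what it's worth, one standard way to fill in highestSum is Kadane's algorithm, which needs only a single pass instead of three nested loops. This is a suggested rewrite, not the poster's code: it keeps a running sum of the best sublist ending at the current position and resets it whenever it drops below zero.

```java
public class MaxSublist {
    public static int highestSum(int[] a) {
        int best = 0;      // the empty sublist is allowed, so never below 0
        int current = 0;   // best sum of a sublist ending at index i
        for (int i = 0; i < a.length; i++) {
            current += a[i];
            if (current < 0) current = 0;  // a negative prefix never helps
            if (current > best) best = current;
        }
        return best;
    }

    public static void main(String[] args) {
        int[] numbers = {10, -20, 11, -4, 13, 3, -5, -17, 2, 15, 1, -7, 8};
        System.out.println(highestSum(numbers)); // the example expects 23
    }
}
```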
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/1384-i-have-algorithm-problem-printingthethread.html
Important changes to forums and questions All forums and questions are now archived. To start a new conversation or read the latest updates go to forums.mbed.com. 4 years, 5 months ago. Saving string from (RX,TX) to an array Hello! I need help in saving a string data type coming from RS232 (RX,TX) to an array. I have tried the code below, it saves the input but the problem is that when you try to enter the same input again, the end character of your input becomes the first character and it stays that way. Please help me with this. Thank you. #include "mbed.h" Serial pc(USBTX, USBRX); int main() { char command[8]; while(1) { if (pc.readable()) { pc.gets(command, 8); for (int i=0; i<8; i++) { pc.printf("%x ", command[i]); } pc.printf("\n"); } } } 4 Answers 4 years, 5 months ago. Hello Jas, Another alternative is to use scanf function: #include "mbed.h" int main() {Serial pc(USBTX, USBRX); volatile char command[8]; while(1) { pc.scanf("%s", command); pc.printf("%s\r\n", command); } } The scanf function will read (wait for) subsequent characters until a whitespace is found (whitespace characters are considered to be blank, newline and tab). Make sure the length of string passed over the serial connection does not exceed the length of command array. Hello Jas, Yes, scanf shall work with any available (valid) Serial device. And I also agree with Oliver that one should prevent exceeding the array by specifying the maximum number of characters to be read in one reading operation. #include "mbed.h" Serial pc(USBTX, USBRX); Serial device(PA_9, PA_10); int main() { volatile char command[8]; while(1) { device.scanf("%7s", command); pc.printf("%s\r\n", command); } } If I understand correctly you can (and should) qualify %s with a length argument like %7s to prevent exceeding the array.posted by 18 May 2016 Will this work if the data is coming from a serial device? Serial device(PA_9,PA_10);posted by 24 May 2016 Yes, if it is "Serial" it will support scanf. 
It should work for any "streams" based input device. For example it works on Serial but not RawSerial.posted by 24 May 2016 Do you happen to know how to do a simple serial interrupt? Because I need to activate a sensor when a pin goes high? Thank you.posted by 30 May 2016 Jas do you mean you want the program to continue running while it is receiving serial data?posted by 30 May 2016 4 years, 5 months ago. Try something like that by attach below code as interrupt to USART. It reads incoming bytes until carriage return and works for my communication. void rf_rx_isr() { node.rf_buffer_char = nodeRF.getc(); if (node.rf_buffer_char != '\r') { node.rfRxBuffer[node.rfRxBufferCounter] = node.rf_buffer_char; node.rfRxBufferCounter++; } else if (node.rf_buffer_char == '\r') { nodeRF.attach(NULL, Serial::RxIrq); node.rfRxBufferCounter = 0; node.rfInterruptComplete = true; } } 4 years, 5 months ago. As to the initial question: "gets" with a size parameter is really "fgets", and the reason for the character hanging over is that it is waiting for and returning seven characters. abcdefg fulfils this so the "CR" was queued till the next line. I wondered why I could press Enter seven times and get d d d d d d d 0 but then I realised it is probably waiting for a newline, 0A, and putty sends 0D. I think you will have to pull characters one by one using "getc" but whether you actually need an ISR is up to you, a "while" loop may do it. Can you give example inputs and outputs? 
There can sometimes be issues with gets() detecting the ends of strings.posted by Andy A 13 May 2016 This is an example output of the code: Sent: 5/17/2016 1:33:43 PM abcdefg Received: 5/17/2016 1:33:43 PM 61 62 63 64 65 66 67 0 Sent: 5/17/2016 1:33:44 PM abcdefg Received: 5/17/2016 1:33:44 PM d 61 62 63 64 65 66 0 Sent: 5/17/2016 1:33:47 PM abcdefg Received: 5/17/2016 1:33:47 PM 67 61 62 63 64 65 66 0 Sent: 5/17/2016 1:33:50 PM abcdefg Received: 5/17/2016 1:33:50 PM 67 61 62 63 64 65 66 0posted by Jas Y 17 May 2016
https://os.mbed.com/questions/68964/Saving-string-from-RXTX-to-an-array/
However, unlike the other components, Webel runs entirely off web services. All cheminformatics analysis is carried out using Rajarshi's REST services (which use the CDK and are hosted at Uppsala) and the NIH's Chemical Identifier Resolver (by Markus Sitzmann, and which uses Cactvs for much of its backend). To use Webel, all you need to do is download webel.py, and type "import webel" at a Python prompt (see example code below - it's basically the same as using Pybel if you're familiar with that). So what are the advantages of running off webservices? First, as should be clear, there is the ease of installation. This means that Webel could easily be bundled in with some other software to provide some useful functionality. Second, Webel can still be used in environments where installation of a cheminformatics toolkit is simply not possible (more on this next week!). Third, webservices may provide additional functionality not available elsewhere (e.g. the Chemical Resolver provides name-to-structure conversion as well as InChIKey resolution). Fourth, webservices are accessed across HTTP rather than through some type of language binding. As a result, Webel works equally well from CPython, Jython or IronPython. And finally, it's just a cool idea. :-) If you can think of any other advantages or potential applications, I'd be interested to hear them. In the meanwhile, here's some code that calculates the molecular weight of aspirin, its LogP, its InChI, gives alternate names for aspirin, and creates the PNG above: ...which gives......which gives... 
import webel mol = webel.readstring("name", "aspirin") print "The molecular weight is %.1f" % mol.molwt print "The InChI is %s" % mol.write("inchi") print "LogP values are: %s" % mol.calcdesc(["ALOGPDescriptor"]) print "Aspirin is also known as: %s" % mol.write("names") mol.draw(filename="aspirin.png", show=False) C:\Tools\cinfony\trunk\cinfony>python example.py The molecular weight is 180.2 The InChI is InChI=1/C9H8O4/c1-6(10)13-8-5-3-2-4-7(8)9(11)12/h2-5H,1H3,(H,11,12) /f/h11H AuxInfo=1/1/N:5,3,4,1,2,12,6,7,11,9,8,10,13/E:(11,12)/F:5,3,4,1,2,12,6,7 ,11,9,10,8,13/rA:21CCCCCCCOOOCCOHHHHHHHH/rB:;a1;a2a3;;a1;a2a6;;;;s6d8s10;s5d9;s7 s12;s10;s1;s2;s3;s4;s5;s5;s5;/rC:6.3301,-.56,0;4.5981,-1.56,0;6.3301,-1.56,0;5.4 641,-2.06,0;2,-.06,0;5.4641,-.06,0;4.5981,-.56,0;4.5981,1.44,0;2.866,-1.56,0;6.3 301,1.44,0;5.4641,.94,0;2.866,-.56,0;3.7321,-.06,0;6.3301,2.06,0;6.8671,-.25,0;4 .0611,-1.87,0;6.8671,-1.87,0;5.4641,-2.68,0;2.31,.4769,0;1.4631,.25,0;1.69,-.596 9,0; LogP values are: {'ALOGPDescriptor_ALogp2': 0.10304100000000004, 'ALOGPDescripto r_AMR': 18.935400000000001} Aspirin is also known as: ['2-Acetoxybenzoic acid', '50-78-2', '2-Acetoxybenzene carboxylic acid', 'Acetylsalicylate', 'Acetylsalicylic acid', 'Aspirin', ... 'Claradin', 'Clariprin', 'Colfarit', 'Decaten', 'Dolean pH 8', ... 'Acetylsalicylsaure [German]', 'Acide acetylsalicylique [French]', ... 'A6810_SIGMA', 'Spectrum5_000740', 'CHEBI:15365',...] 6 comments: Does this handle stereochemistry? The underlying data model is a SMILES string. This is capable of storing cis/trans and tetrahedral stereochemistry. So the question then is, do the webservices honor stereochemistry? In the case of the CDK webservices, there is no problem as stereochemistry doesn't affect the results (e.g. the molecular weight). In the case of the NCI services, stereochemistry appears to be preserved (you can try a chiral SMILES-->InChI-->SMILES roundtrip to test this). Nice work! 
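The chiral round-trip test suggested in the comment above might look like this with Webel. Treat it as a sketch: it needs network access to the resolver services, and the "inchi" and "smi" input formats are my assumption based on the Pybel-like API (the post itself only demonstrates "name" input):

```python
import webel

# Chiral SMILES -> InChI -> SMILES round trip for L-alanine.
# Assumes webel.readstring accepts "smi" and "inchi" format strings;
# verify against the cinfony docs before relying on this.
mol = webel.readstring("smi", "N[C@@H](C)C(=O)O")
inchi = mol.write("inchi")
roundtrip = webel.readstring("inchi", inchi).write("smi")
print(roundtrip)  # the '@@' stereocentre should survive if stereo is honored
```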
After reading about Cinfony, I was hesitating to use it, but this new module (Webel) seems to be what I needed. In particular, I want to align some molecules and I came across Obfit that seems to require SMARTS pattern provided by the user. What I need is a method that would not require this input and Kabsch alignment implemented in CDK and described in chem-bla-ics Blog seems to be what I need. I'll see if I can do this with Webel. Keep up the good work! Thanks for the encouragement Sargis! I think you will need access to the core CDK API to do what you want, so the Cinfony cdk module is the way to go. I intend to write a similar blog post on using this. Hi Noel, as I wrote to you by email I really like the idea of Webel - nice work! @nyc_dad: As Noel wrote: at the very backend the Chemical Structure Resolver uses Cactvs which deals with stereochemistry very carefully. However, if you encounter any problems please report them to us Just realised that your name isn't mentioned, Markus. I've corrected that omission.
http://baoilleach.blogspot.com/2009/11/introducing-webel-cheminformatics.html
Then I guess you need to fix up the debugger code, take a fork and mend it. Use a logic analyzer, Oscilloscope or console out to find out why it is not working. Hi Guys, Has anyone had any experience in getting this to work in any way, or pointers of how to go about it. I have success so far in making it hard fault :) Cheers Andy Have a look here: So if I was to wipe the MKL02 and solder a JTAG connector to the relevant pins then the Teensy can be flashed and debugged over JTAG and the only downside would be that the teensy flasher app would... Thanks for the info Kurt. Now I know it's not me doing things wrong :) I was wondering if the longer LVGL times were caused by not calling lv_task_handler() regularly enough due to the fact... I Just tested with updateScreen() and that is only updating changed areas 865us to 4246us Hi Kurt, One thing I am looking into is that updateScreenAsync() always seems to send the entire screen (around 16ms at 100Mhz) even with a call to updateChangedAreasOnly(true); Is this what... Also just had a thought those timings above are using 20 ms/50fps Here we are with 30 ms/33.33 fps as in the original posts: lv_task_handler 1us, flush 0us, touchscreen 0us... Ok, I put some timing in. This is for the demo code from the video above, so probably worst case scenario for screen updating!: lv_task_handler = total time in lv_task_handler(); flush = total... Hi Kurt, I did put some timing code in the flush code and a small % was only used there, I didn't put any in the touchscreen handling though, i will check that. Thanks for the info. I will give that approach a go initially. No the midi code is not there, its just a demo from the LGVL examples. I did test Luni's idea of using the interval timer, just to see it was called correctly and the LGVL stuff still worked. ...
In case anyone is interested it looks like the code is able to run nicely at 50fps on the Teensy4: Hi, Thanks for the reply. I was just a little worried about calling all the midi functions from within an interrupt, I will give it a go though and see how it goes... Cheers Andy Hi Guys, I wonder if anyone can help me. I am building a MIDI router/processor, supporting DIN (serial), USB Host and USB Device. I also want an LCD for the GUI, currently using an... The plugin was JavaAppletPlugin.plugin and located in /Library/Internet Plug-ins/ When I looked at the files arduino was using it was using this as well as the embedded JRE. The tools needed to... Thanks for the info. I looked into it a bit and the JRE that arduino uses is included with it. So I looked at the files opened when running teensyarduino and noticed that it was also using the... Thanks for all the info, from the command line: $ java -XshowSettings:vm -version 2>&1 VM settings: Max. Heap Size (Estimated): 7.11G Ergonomics Machine Class: server Using VM:... Hi @defragster, Thanks for the reply. I had actually started using the all in one to update from a previous beta of 1.53 that was installed in arduino and also had the problem. So I thought... Thanks for the reply, I have a 12 core, 32gb Ram, NVME drive. It’s a pretty fast machine. Arduino without the teensy software installed runs fine, with the teensy software installed I get very... Hi Guys, Is there a fix somewhere for the Aduino Gui running really slow on OSX? For instance 10 seconds of spinning beach ball when I click on a menu. Cheers Andy Just had this again, I am getting a little nervous using the teensy now! Hi Paul, Just got a panic and restart here and it looks like it may be do do with teensy_serialmon. I had Code/PlatformIO open and was using the Arduino serial monitor for logging, I think this... Thanks, I will stick that in and see if there is any difference. The midi looks like it may be behaving better than the audio though! 
Actually it is even worse, the audio is drifting backwards in time while the midi is drifting forward in time: 21551

So with that code above and recording the audio and midi, the midi drifts but the audio doesn't: At the start of test: 21549 A few minutes in: 21550

Here you go, different code same problem:
#include <Arduino.h>
#include <Audio.h>
AudioSynthSimpleDrum drum1;
AudioOutputUSB usb1;
AudioConnection ...

It isn't code I will be using, just some code to show the issue ;)

Hi Paul and Mark, Thanks for the info and for looking into this. The timings I see on the generated waveform are: Teensy 4.0: 999.996 Teensy 4.1: 999.987

I found usbMIDI.send_now(); which I guess is the equivalent of the flush, didn't help though.

Hi, Thanks for all the info. I think it seems to be a constant drift on the midi usb, I guess I need to keep it running for a while to check. I attach a 1khz signal to a pin and run the...

Thanks for that. The timing output on teensy serial looks right, on midi monitor I see the message received in lock with the serial display but I see the drift in timing. I'm thinking of...

Actually I think it might be the midi usb stuff that is playing up? I just tested on a 4.0 as well: If I use the timer to generate a 1khz wave and use a hardware freq counter I am seeing:...

Hi, Thanks for the interest, here is an example using IntervalTimer, 1ms, every 1000 send midi message:
#include <Arduino.h>
IntervalTimer myTimer;
elapsedMicros since = 0;

Hi Guys, I'm looking at running code at set periods over time and would like the timing to be as stable as possible. I have looked at IntervalTimer and elapsedMicros and on the 4.1 I am using...

Hi Richard, it is done: Busidle fix #9

Hi Richard, I'll try to get the PR done this weekend... Cheers Andy

OK, I think the problem is that the BUSIDLE time is being set incorrectly. From the datasheet page 2751 the minimum BUSIDLE is: Also on page 2772 it states that you can disable it totally...
So I investigated a little further, so this is where we are seeing the problem: Basically it doesn't think the bus is idle, bit weird as SCL and SDA are high and it doesn't start till the bus...

Hi Kurt, Managed to get some time on this. The async calls work nicely in teensy4_i2c, apart from a bit of a strange delay, details in last post here:...

Hi Richard, Thanks very much for this, I was looking for an async I2C implementation and you have done all the work for me :) I have a slight issue though, there seems to be around a 300-350us...

I'm really not interested in it being DMA based, just asynchronous with a callback so the processing is not blocked. I worded the question incorrectly I guess. Edit: I looked for a way to...

Thanks for the reply Kurt. I'm going to have a look at teensy4_i2c async calls in a while, maybe that will give me what I need. I did take a quick look at the NXP side and there is a...

On closer look it seems that teensy4_i2c has async methods...

Hi Guys, I have searched around a bit on google and the forum and can't seem to find an answer to this. Is it possible to use DMA for sends and receives over i2c on the teensy 4, I have found...

Hi, I have a few tips that may save others a few hours :)
1. If using hardware serial and you want > 9600 baud you need to use begin():
Serial1.begin(115200);
debug.begin(Serial1);

Hi, thanks for the info. I'm on a mac, I managed to get it going with single serial by just including your debug files and building in PlatformIO. Also managed to get the VSCode debugger...

Hi Guys, Has anyone managed to get "Take over serial" to work? In GDB I see: [no_device]: No such file or directory. I cannot see the serial device available either. The only way to get...

Wow, good job frtias. Will look at giving all this a go at the weekend, thanks!

For tracing you also need a SWO pin, the 1050/1060 M7s do have SWO.
I have a 1050 EVK and the SWO is not available on the debug header and you have to grab it on GPIO_B0_13 which is SW7 pin...
https://forum.pjrc.com/search.php?s=4449edf77d9c35d433c01cb725c39e56&searchid=6876741
Created on 2020-01-19 20:23 by wchargin, last changed 2020-03-04 07:06 by ned.deily. This issue is now closed.

The `gzip` module properly uses the user-specified compression level to control the underlying zlib stream compression level, but always writes metadata that indicates that the maximum compression level was used. Repro:

```
import gzip

blob = b"The quick brown fox jumps over the lazy dog." * 32

with gzip.GzipFile("fast.gz", mode="wb", compresslevel=1) as outfile:
    outfile.write(blob)

with gzip.GzipFile("best.gz", mode="wb", compresslevel=9) as outfile:
    outfile.write(blob)
```

Run this script, then run `wc -c *.gz` and `file *.gz`:

```
$ wc -c *.gz
 82 best.gz
 84 fast.gz
166 total
$ file *.gz
best.gz: gzip compressed data, was "best", last modified: Sun Jan 19 20:15:23 2020, max compression
fast.gz: gzip compressed data, was "fast", last modified: Sun Jan 19 20:15:23 2020, max compression
```

The file sizes correctly reflect the difference, but `file` thinks that both archives are written at max compression. The error is that the ninth byte of the header in the output stream is hard-coded to `\002` at Lib/gzip.py:260 (as of 558f07891170), which indicates maximum compression. The correct value to indicate maximum speed is `\004`. See RFC 1952, section 2.3.1: <>

Using GNU `gzip(1)` with `--fast` creates the same output file as the one emitted by the `gzip` module, except for two bytes: the metadata and the OS (the ninth and tenth bytes).

(The commit reference above was meant to be git558f07891170, not a Mercurial reference. Pardon the churn; I'm new here. :-) )

Looks reasonable. gzip should write b'\002' for compresslevel == _COMPRESS_LEVEL_BEST, b'\004' for compresslevel == _COMPRESS_LEVEL_FAST, and b'\000' otherwise. Do you mind creating a PR, William?

Sure, PR sent (pull_request17470).
PR URL, for reference: <>

New changeset eab3b3f1c60afecfb4db3c3619109684cb04bd60 by Serhiy Storchaka (William Chargin) in branch 'master':
bpo-39389: gzip: fix compression level metadata (GH-18077)

Thank you for your contribution William!

New changeset ab0d8e356ecd351d55f89519a6a97a1e69c0dfab by Miss Islington (bot) in branch '3.8':
bpo-39389: gzip: fix compression level metadata (GH-18077)

My pleasure; thanks for the triage and review!

Ping. The 3.7.x backport (PR 18101) for this issue is still open and either needs to be fixed or closed.

New changeset 12c45efe828a90a2f2f58a1f95c85d792a0d9c0a by Miss Islington (bot) in branch '3.7':
[3.7] bpo-39389: gzip: fix compression level metadata (GH-18077) (GH-18101)
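With the fix applied, the XFL byte of the gzip header tracks the compression level as described in the thread. As an illustration (this snippet is not part of the original report), you can inspect the ninth header byte of an in-memory gzip stream on a current Python (3.8 or later):

```python
import gzip
import io

def xfl_byte(compresslevel):
    """Compress a small payload and return the XFL header byte (RFC 1952, section 2.3.1)."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb", compresslevel=compresslevel) as f:
        f.write(b"The quick brown fox jumps over the lazy dog." * 32)
    # Header layout: magic (2) + CM (1) + FLG (1) + MTIME (4) + XFL (1) + OS (1)
    return buf.getvalue()[8]

# 2 = maximum compression, 4 = fastest, 0 = anything in between
print(xfl_byte(9), xfl_byte(1), xfl_byte(5))
```

On a pre-fix interpreter the same probe would print 2 for every level, which is exactly the bug reported above.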
https://bugs.python.org/issue39389
How YOU can Learn Mock testing in .NET Core and C# with Moq

Chris Noring

Follow me on Twitter, happy to take your suggestions on topics or improvements /Chris

When we test we just want to test one thing - the business logic of the method. Often our method needs the help of dependencies to be able to carry out its job properly. Depending on what these dependencies answer - there might be several paths through a method.

So what is Mock testing? It's about testing only one thing, in isolation, by mocking how your dependencies should behave. In this article we will cover the following:

- Why test, it's important to understand why we test our code. Is it to ensure our code works? Or maybe we are adding tests for defensive reasons so that future refactors don't mess up the business logic?
- What to test, normally this question has many answers. We want to ensure that our method does what it says it does, e.g. 1+1 equals 2. We might also want to ensure that we test all the different paths through the method, the happy path as well as alternate/erroneous paths. Lastly, we might want to assert that a certain behavior takes place.
- Demo, let's write some code that has more than one execution path and introduce the Mocking library Moq and see how it can help us fulfill the above.

References

- xUnit testing, this page describes how to use xUnit with .NET Core
- nUnit testing, this page describes how to use nUnit with .NET Core
- dotnet test, this page describes the terminal command dotnet test and all the different arguments you can call it with
- dotnet selective test, this page describes how to do selective testing and how to set up filters and query using filters
- .NET Core Series on NuGet, Serverless and much more

Why test

As we mentioned already there are many answers to this question. So how do we know? Well, I usually see the following reasons:

- Ensuring Quality, because I'm not an all-knowing being I will make mistakes.
Writing tests ensures that at least the worst mistakes are avoided.
- Is my code testable, before I've written tests for my code it might be hard to tell whether it lends itself to being tested. Of course, I need to ask myself at this point whether this code should be tested. My advice here: if it's not obvious what running the method will produce, or if there is more than one execution path, it should be tested.
- Being defensive, software tends to be maintained over several years. The people doing the maintaining might be you or someone else. One way to communicate what code is important is to write tests that absolutely should work regardless of what refactorings you, or anyone else, attempt to carry out.
- Documentation, documentation sounds like a good idea at first but we all know that out-of-sync documentation is worse than no documentation. For that reason, we tend to not write it in the first place, or maybe feel ok with high-level documentation only, or rely on tools like Swagger for example. Believe it or not, tests are usually really good documentation. It's one developer saying to another: this is how I think the code should be used. So for the sake of that future maintainer, communicate what your intentions were/are.

What to test

So what should we test? Well, my first response here is all the paths through the method, the happy path as well as alternate paths. My second response is to understand whether we are testing a function to produce a certain result, like 1+1 equals 2, or whether it's more a behavior like: we should have been paid before we can ship the items in the cart.

Demo - let's test it

What are we doing? Well, we have talked repeatedly about that Shopping Cart in an e-commerce application so let's use that as an example for our demo. This is clearly a case of behavior testing. We want the Cart items to be shipped to a customer providing we got paid.
That means we need to verify that the payment is carried out correctly and we also need a way to assert what happens if the payment fails. We will need the following:

- A CartController, this will contain logic such as trying to get paid for a cart's content. If we are successfully paid then ship the items in the cart to a specified address.
- Helper services, we need a few helper services to figure this out:
  - ICartService, this should help us calculate how much the items in the cart cost but also tell us exactly what the content is, so we can send this out to a customer once we have gotten paid.
  - IPaymentService, this should charge a card with a specified sum.
  - IShipmentService, this should be able to ship the cart content to a specific address.

Creating the code

We will need two different .NET Core projects for this:

- a webapi project, this should contain our production code and carry out the business logic as stated by the CartController and its helper services.
- a test project, this project will contain all the tests and a reference to the above project.

The API project

For this project, this could be an app using the template mvc, webapp or webapi.

First, let's create a solution.
Create a directory like so:

mkdir <new directory name>
cd <new directory name>

Thereafter create a new solution like so:

dotnet new sln

To create our API project we just need to instantiate it like so:

dotnet new webapi -o api

and lastly add it to the solution like so:

dotnet sln add api/api.csproj

Controllers/CartController.cs

Add the file CartController.cs under the directory Controllers and give it the following content:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Services;

namespace api.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class CartController
    {
        private readonly ICartService _cartService;
        private readonly IPaymentService _paymentService;
        private readonly IShipmentService _shipmentService;

        public CartController(
            ICartService cartService,
            IPaymentService paymentService,
            IShipmentService shipmentService)
        {
            _cartService = cartService;
            _paymentService = paymentService;
            _shipmentService = shipmentService;
        }

        [HttpPost]
        public string CheckOut(ICard card, IAddressInfo addressInfo)
        {
            var result = _paymentService.Charge(_cartService.Total(), card);
            if (result)
            {
                _shipmentService.Ship(addressInfo, _cartService.Items());
                return "charged";
            }
            else
            {
                return "not charged";
            }
        }
    }
}

Ok, our controller is created but it has quite a few dependencies in place that we need to create, namely ICartService, IPaymentService and IShipmentService. Note how we will not create any concrete implementations of our services at this point. We are more interested in establishing and testing the behavior of our code. That means that concrete service implementations can come later.
Services/ICartService.cs

Create the file ICartService.cs under the directory Services and give it the following content:

using System.Collections.Generic;

namespace Services
{
    public interface ICartService
    {
        double Total();
        IEnumerable<CartItem> Items();
    }
}

This interface is just a representation of a shopping cart. It is able to tell us what is in the cart through the method Items() and how to calculate its total value through the method Total().

Services/IPaymentService.cs

Let's create the file IPaymentService.cs in the directory Services and give it the following content:

namespace Services
{
    public interface IPaymentService
    {
        bool Charge(double total, ICard card);
    }
}

Now we have a payment service that is able to take total, the amount to be charged, and card, which is the debit/credit card that contains all the information needed to charge it.

Services/IShipmentService.cs

For our last service let's create the file IShipmentService.cs under the directory Services with the following content:

using System.Collections.Generic;

namespace Services
{
    public interface IShipmentService
    {
        void Ship(IAddressInfo info, IEnumerable<CartItem> items);
    }
}

This contains a method Ship() that will allow us to ship a cart's content to the customer.

Services/Models.cs

Create the file Models.cs in the directory Services with the following content:

using System;

namespace Services
{
    public interface IAddressInfo
    {
        string Street { get; set; }
        string Address { get; set; }
        string City { get; set; }
        string PostalCode { get; set; }
        string PhoneNumber { get; set; }
    }

    public interface ICard
    {
        string CardNumber { get; set; }
        string Name { get; set; }
        DateTime ValidTo { get; set; }
    }

    public interface CartItem
    {
        string ProductId { get; set; }
        int Quantity { get; set; }
        double Price { get; set; }
    }
}

This contains some supporting interfaces that we need for our services.
Creating a test project

Our test project is interested in testing the behavior of CartController. First off we will need a test project. There are quite a few test templates supported in .NET Core like nunit, xunit and mstest. We'll go with nunit. To create our test project we type:

dotnet new nunit -o test

Let's add it to the solution like so:

dotnet sln add test/test.csproj

Thereafter add a reference to the API project from the test project, so we are able to test the API project:

dotnet add test/test.csproj reference api/api.csproj

Finally, we need to install our mocking library Moq into the test project, with the following command:

dotnet add test/test.csproj package Moq

Moq, how it works

Let's talk quickly about our mocking library Moq. The idea is to create a concrete implementation of an interface and control how certain methods on that interface respond when called. This will allow us to essentially test all of the paths through the code.

Creating our first Mock

Let's create our first Mock with the following code:

var paymentServiceMock = new Mock<IPaymentService>();

The above is not a concrete implementation but a Mock object. A Mock can be:

- Instructed, you can tell a mock that if a certain method is called then it can answer with a certain response
- Verified, verification is something you carry out after your production code has been called. You carry this out to verify that a certain method has been called with specific arguments

Instruct our Mock

Now we have a Mock object that we can instruct. To instruct it we use the method Setup() like so:

paymentServiceMock.Setup(p => p.Charge()).Returns(true)

Of course, the above won't compile; we need to give the Charge() method the arguments it needs.
There are two ways we can give the Charge() method the arguments it needs:

- Exact arguments, this is when we give it some concrete values like so:

var card = new Card("owner", "number", "CVV number");
paymentServiceMock.Setup(p => p.Charge(114, card)).Returns(true)

- General arguments, here we can use the helper It, which will allow us to instruct the method Charge() that any value of a certain data type can be passed through:

paymentServiceMock.Setup(p => p.Charge(It.IsAny<double>(), card)).Returns(true)

Accessing our implementation

We will need to pass an implementation of our Mock when we call the actual production code. So how do we do that? There's an Object property on the Mock that represents the concrete implementation. Below we are using just that. We first construct cardMock and then we pass cardMock.Object to the Charge() method.

var cardMock = new Mock<ICard>();
paymentServiceMock.Setup(p => p.Charge(It.IsAny<double>(), cardMock.Object)).Returns(true)

Add unit tests

Let's rename the default test file we got to CartControllerTest.cs. Next, let's discuss our approach. We want to:

- Test all the execution paths, there are currently two different paths through our CartController depending on whether _paymentService.Charge() answers with true or false
- Write two tests, we need at least two different tests, one for each execution path
- Assert, we need to ensure that the correct thing happens. In our case, that means if we successfully get paid then we should ship, so that means asserting that the shipmentService is being called.
Let's write our first test:

// CartControllerTest.cs
[Test]
public void ShouldReturnCharged()
{
    // arrange
    paymentServiceMock.Setup(p => p.Charge(It.IsAny<double>(), cardMock.Object)).Returns(true);

    // act
    var result = controller.CheckOut(cardMock.Object, addressInfoMock.Object);

    // assert
    shipmentServiceMock.Verify(s => s.Ship(addressInfoMock.Object, items.AsEnumerable()), Times.Once());
    Assert.AreEqual("charged", result);
}

We have three phases above.

Arrange

Let's have a look at the code:

paymentServiceMock.Setup(p => p.Charge(It.IsAny<double>(), cardMock.Object)).Returns(true);

Here we are setting things up and saying that if our paymentService.Charge() method is called with any double value, It.IsAny<double>(), and with a card object, cardMock.Object, then we should return true, aka .Returns(true). This means we have set up a happy path and are ready to go to the next phase, Act.

Act

Here we call the actual code:

var result = controller.CheckOut(cardMock.Object, addressInfoMock.Object);

As we can see above we get the answer assigned to the variable result. This takes us to our next phase, Assert.

Assert

Let's have a look at the code:

shipmentServiceMock.Verify(s => s.Ship(addressInfoMock.Object, items.AsEnumerable()), Times.Once());
Assert.AreEqual("charged", result);

Now, there are two pieces of assertion that take place here. First, we have a Mock assertion. We see that as we are calling the method Verify(), which essentially says: I expect the Ship() method to have been called with an addressInfo object and a cartItem list, and that it was called only once. That all seems reasonable: our paymentService says it was paid, we set it up to respond true. Next, we have a more normal-looking assertion, namely this code:

Assert.AreEqual("charged", result);

It says our result variable should contain the value charged.

A second test

So far we have tested the happy path. As we stated earlier, there are two paths through this code.
The paymentService could decline our payment and then we shouldn't ship any cart content. Let's see what the code looks like for that:

[Test]
public void ShouldReturnNotCharged()
{
    // arrange
    paymentServiceMock.Setup(p => p.Charge(It.IsAny<double>(), cardMock.Object)).Returns(false);

    // act
    var result = controller.CheckOut(cardMock.Object, addressInfoMock.Object);

    // assert
    shipmentServiceMock.Verify(s => s.Ship(addressInfoMock.Object, items.AsEnumerable()), Times.Never());
    Assert.AreEqual("not charged", result);
}

Above we see that we have again the three phases Arrange, Act and Assert.

Arrange

This time around we are ensuring that our paymentService mock is returning false, aka payment bounced:

paymentServiceMock.Setup(p => p.Charge(It.IsAny<double>(), cardMock.Object)).Returns(false);

Act

This part looks exactly the same:

var result = controller.CheckOut(cardMock.Object, addressInfoMock.Object);

Assert

We are still testing two pieces of assertion - behavior and value assertion:

shipmentServiceMock.Verify(s => s.Ship(addressInfoMock.Object, items.AsEnumerable()), Times.Never());
Assert.AreEqual("not charged", result);

Looking at the code above, however, we are asserting that shipmentService is not called, Times.Never(). That's important to verify, as shipping without payment would lose us money. The second assertion just tests that the result variable now says not charged.
Full code

Let's have a look at the full code so you are able to test this out for yourself:

// CartControllerTest.cs
using System;
using Services;
using Moq;
using NUnit.Framework;
using api.Controllers;
using System.Linq;
using System.Collections.Generic;

namespace test
{
    public class Tests
    {
        private CartController controller;
        private Mock<IPaymentService> paymentServiceMock;
        private Mock<ICartService> cartServiceMock;
        private Mock<IShipmentService> shipmentServiceMock;
        private Mock<ICard> cardMock;
        private Mock<IAddressInfo> addressInfoMock;
        private List<CartItem> items;

        [SetUp]
        public void Setup()
        {
            cartServiceMock = new Mock<ICartService>();
            paymentServiceMock = new Mock<IPaymentService>();
            shipmentServiceMock = new Mock<IShipmentService>();

            // arrange
            cardMock = new Mock<ICard>();
            addressInfoMock = new Mock<IAddressInfo>();

            var cartItemMock = new Mock<CartItem>();
            cartItemMock.Setup(item => item.Price).Returns(10);
            items = new List<CartItem>() { cartItemMock.Object };
            cartServiceMock.Setup(c => c.Items()).Returns(items.AsEnumerable());

            controller = new CartController(cartServiceMock.Object, paymentServiceMock.Object, shipmentServiceMock.Object);
        }

        [Test]
        public void ShouldReturnCharged()
        {
            // arrange
            paymentServiceMock.Setup(p => p.Charge(It.IsAny<double>(), cardMock.Object)).Returns(true);

            // act
            var result = controller.CheckOut(cardMock.Object, addressInfoMock.Object);

            // assert
            shipmentServiceMock.Verify(s => s.Ship(addressInfoMock.Object, items.AsEnumerable()), Times.Once());
            Assert.AreEqual("charged", result);
        }

        [Test]
        public void ShouldReturnNotCharged()
        {
            // arrange
            paymentServiceMock.Setup(p => p.Charge(It.IsAny<double>(), cardMock.Object)).Returns(false);

            // act
            var result = controller.CheckOut(cardMock.Object, addressInfoMock.Object);

            // assert
            shipmentServiceMock.Verify(s => s.Ship(addressInfoMock.Object, items.AsEnumerable()), Times.Never());
            Assert.AreEqual("not charged", result);
        }
    }
}

Final thoughts

So we have managed to test out the two major paths through our code, but there are more tests, more assertions we could be doing. For example, we could ensure that the value of the Cart corresponds to what the customer is actually being charged. As we all know, in the real world things are more complicated.
We might need to update the API code to consider timeouts or errors being thrown from the Shipment service as well as the payment service.

Summary

I've hopefully been able to convey some good reasons for why you should test your code. Additionally, I hope you think the library Moq looks like a good candidate to help you with the more behavioral aspects of your code.

Comments:

Great post! You can increase the readability of your tests by using FluentAssertions and it makes your Assert statements independent from the test library. Magic string are no go imo for tests. Replacing expected value for a variable increase readability as well.

yea.. It wasn't meant to look like a clean code, perfect looking test but rather show the usage of moq.. Agree with you though.. no magic strings
https://practicaldev-herokuapp-com.global.ssl.fastly.net/dotnet/how-you-can-learn-mock-testing-in-net-core-and-c-with-moq-4ikd
Robert Klarer, Oct 26 2014

Disclosure: I intend to vote for Morgan Baskin tomorrow.

Imagine that there's going to be a municipal election in your town on Monday, in which the town's mayor will be selected. Furthermore, let's assume that there are three major candidates in the race. Sound familiar? Okay, let's make the following, somewhat more hypothetical assumptions, too:

For example, let's say that our reliable polls tell us that there is a 33% chance that any single voter will select Candidate T, a 33% percent chance that the same voter will select Candidate C, and a 32% percent chance that the voter will select Candidate F. The remaining 2% represents the possibility of the voter selecting some other, dark horse candidate.

Remember, we're talking about probabilities; if we treat each ballot cast as a random event, what are Candidate F's odds of prevailing on election night, and becoming the mayor-elect?

To answer that question, I wrote some C++ code. You don't need to read the code to follow the rest of this analysis, but here it is anyway:

#include <algorithm>
#include <iomanip>
#include <iostream>
#include <iterator>
#include <random>
#include <string>
#include <type_traits>

int main()
{
    const int nsimulations = 10000;
    const int nballots = 852;
    const int voteshare[] = { 33, 32, 33, 2 };
    const std::string candidatename[] = { "Tory", "Ford", "Chow", "other" };
    const int ncandidates = std::extent<decltype(voteshare)>();
    int wins[ncandidates] = {};

    std::mt19937 engine;
    std::discrete_distribution<> d(voteshare, std::end(voteshare));

    for (int i = 0; i < nsimulations; ++i)
    {
        int votescast[ncandidates] = {};
        for (int j = 0; j < nballots; ++j)
        {
            ++votescast[d(engine)];
        }
        ++wins[std::max_element(votescast, std::end(votescast)) - votescast];
    }

    for (int k = 0; k < ncandidates; ++k)
    {
        std::cout << std::setw(6) << candidatename[k]
                  << std::setw(9) << wins[k] << std::endl;
    }
}

Oh, and did I mention that Candidate T's full name is Tory, Candidate C's full name is Chow, and Candidate F's full
name is Ford? No? Well, let's go with that. Those seem like good hypothetical names for a TOTALLY HYPOTHETICAL election, right?

Anyway, the program simulates 10,000 elections in which exactly 852 ballots are cast. The ballots are generated randomly, using the 33/33/32/2 probability split I mentioned earlier. Here is the actual output from the program:

  Tory     4051
  Ford     2060
  Chow     3889
 other        0

The number to the right of each candidate's name is the number of simulated elections (out of a total of 10,000) that were won by that candidate.

What this experiment shows is that reducing the probability of a ballot being marked for Ford by just 1%, from 33% to 32%, reduces the number of randomly-generated elections in which Ford wins from 33% to 20%.

A recent Forum poll showed that a sample of 852 likely voters and early-bird voters who had voted in advance polls - when weighted according to Forum's demographic model - supported Ford about 29% of the time, plus or minus 3%. If you believe Forum's numbers (I know, I know), then you're obliged to believe that the above simulation represents Ford's ideal outcome: a (29 + 3 = 32)% chance of any single voter supporting him, plus an equal split of non-Ford votes between his two main opponents, Tory and Chow.

What if the split between Tory and Chow is not equal? Let's change the line of the program that reads

const int voteshare[] = { 33, 32, 33, 2 };

to

const int voteshare[] = { 35, 32, 31, 2 };

This leaves the probability of a ballot going to Ford at 32%, as before, but it gives Tory a 35% chance of winning over any single voter, while reducing the chance of Chow's winning over any single voter to 31%. Let's run the program again:

  Tory     8228
  Ford     1262
  Chow      510
 other        0

Ford now wins fewer than 13% of the 10,000 simulated elections, and we didn't even change his share of the vote.

What if we choose to believe Forum's top-line prediction of 29% support for Ford?
Let's make this as advantageous as possible for Ford, and assume an equal split between Tory and Chow:

const int voteshare[] = { 35, 29, 35, 1 };

Here's the outcome:

  Tory     5049
  Ford       18
  Chow     4933
 other        0

Ford wins just 18 out of 10,000 simulations. That's 0.18%, folks.

OK, one more. Let's assume that all of Forum's top line numbers are correct:

const int voteshare[] = { 43, 29, 25, 3 };

The outcome looks decisive:

  Tory    10000
  Ford        0
  Chow        0
 other        0

What can we conclude from this experiment?

A predictable counter-argument to everything that you've just read is this: What if the polls are wrong, and what if the particular poll whose results I plugged into the program is really, really wrong? I have two responses to this:

The whole point of strategic voting is that it is an informed response to known information about how others will vote. If you choose to believe that the polls are wrong you CANNOT VOTE STRATEGICALLY. At best, you can vote based on a hunch and flatter yourself with the false belief that doing so is smart, somehow.

Just vote for the candidate you like. Good luck.

Oct 27 2014

OK, Forum released the results of a new poll Sunday night (October 26th). Let's run the simulation with these new results:

const int voteshare[] = { 44, 32, 21, 3 };

These numbers still predict decisive victory for Tory:

  Tory    10000
  Ford        0
  Chow        0
 other        0

How much of Tory's share of the vote is attributable to strategic voting? I have no way of knowing. Is it possible that, without the strategic vote, Tory and Chow would split the anti-Ford vote exactly evenly? Yes, it's possible, but decidedly improbable. Still, at 32%, an even split of the anti-Ford vote between Tory and Chow lets Doug win only 20% of 10,000 simulated elections (see above).

Should you vote strategically, then? Here's a guide:
And if you want to vote "strategically," then go for it, but be aware that the math doesn't support you.
http://klarer.ca/
I was actually asked a question around this topic in an interview a few years ago. It is quite an interesting area and delves into some deep parts of C++.

A common scenario in C++ is to create a container of user defined objects. In quantitative finance, this might be a vector of derivative contracts, for instance. One (bad) way to achieve this is by using a container of pointers to the option object.

Container of "Dumb" Pointers

#include <vector>

class Option
{
public:
    // ..
};

void vec_option(std::vector<Option*>& vec)
{
    vec.push_back(new Option());
    // .. Some additional code ..
    delete vec.back();  // Skipping this line causes a memory leak
    vec.pop_back();     // Causes a dangling pointer if this line isn't reached
}

If the code prior to delete vec.back(); causes an exception to be thrown or returns from the function, this will lead to a memory leak or a dangling pointer. Why is this? Simply because, although the destructor is called by std::vector, it does NOT free the allocations that were made by the new operator when creating the Option object.

Containers of Auto_ptrs

Although the above code will compile and run, it is bad practice to use such containers of "dumb" pointers due to the fragility of the function. The solution to such a problem is to use a smart pointer. One such smart pointer is std::auto_ptr<>. Let's modify the function prototype code above to make use of it:

#include <vector>
#include <memory>  // Needed for std::auto_ptr<>

class Option
{
public:
    // ..
};

void vec_option(std::vector<std::auto_ptr<Option> >& vec)
{
    vec.push_back(new Option());
    // ..
    vec.pop_back();
}

Now when we try to compile this code we receive a compiler error! What just happened? It all comes down to the contract that std::auto_ptr<> makes with you when you agree to use it in your code. std::auto_ptr<> does not fulfill the requirements of being copy-constructible and assignable.
Unlike types that do satisfy this requirement, when an auto_ptr is copied or assigned the source and destination are not logically independent. This is because auto_ptr has strict-ownership semantics and is thus solely responsible for an object during that object's life cycle. If we copy an auto_ptr, the source auto_ptr loses its reference to the underlying object. Since objects within an STL container must be copy-constructible and assignable, a compile-time error is produced if an auto_ptr is used within a container. Algorithms, such as those involved in sorting STL containers, often copy objects while carrying out their tasks. Hence, there would be large scope for memory leaks and/or dangling pointers if containers of auto_ptrs were allowed.

Boost Smart Pointers and STL Containers

Prior to the C++11 standard the best way to overcome this problem was to use the Boost library smart pointers. In this instance we could use the Boost shared pointer - boost::shared_ptr. A shared pointer is useful because it removes the possibility of a memory leak caused by forgetting to iterate over the vector and call delete on each item. Let's modify the example above to make use of the shared pointer:

#include <vector>
#include <boost/shared_ptr.hpp>  // Need to include the Boost header for shared_ptr

class Option {
public:
  ..
};

typedef boost::shared_ptr<Option> option_ptr;  // This typedef stops excessive C++ syntax later

void vec_option(std::vector<option_ptr>& vec) {
  option_ptr ptr_opt(new Option());  // Separate allocation to avoid problems if exceptions are thrown
  vec.push_back(ptr_opt);
  // ..
  vec.pop_back();
}

So why does this work? Shared pointers make use of reference counting, which ensures that ownership of the allocated Option object (ptr_opt) is correctly shared with the vector in this line: vec.push_back(ptr_opt);.
Although it is not absolutely necessary to use a shared_ptr here, I have done so because debugging dangling pointers across different pointer types is extremely painful. It is much easier to optimise working code than to try to prematurely optimise non-working code!

C++11 Smart Pointers and STL Containers

In modern C++, which utilises the C++11 standard, the use of auto_ptr has been deprecated. In addition, new smart pointers have made it into the standard. The Boost shared_ptr previously described was first added to C++ via the TR1 extensions, then eventually made it into C++11, with some additional modifications. It is straightforward to modify the above Boost code to use the C++11 shared pointer:

#include <vector>
#include <memory>  // std::shared_ptr is included in the "memory" header

class Option {
public:
  ..
};

typedef std::shared_ptr<Option> option_ptr;  // This typedef stops excessive C++ syntax later

void vec_option(std::vector<option_ptr>& vec) {
  option_ptr ptr_opt(new Option());  // Separate allocation to avoid problems if exceptions are thrown
  vec.push_back(ptr_opt);
  // ..
  vec.pop_back();
}

One final note: liberal use of shared pointers can be considered bad practice, as it can mean that the programmer has not given sufficient thought to the lifetime of the object or where it should actually be deleted. Another suggestion is to use std::unique_ptr for situations where the container object "owns" the elements within it, rather than the elements having a distinct external lifetime of their own. I won't dwell on this too much here as it is really a discussion long enough for another article!
https://quantstart.com/articles/STL-Containers-and-Auto_ptrs-Why-They-Dont-Mix/
Many apps, especially companion apps for IoT devices, need to pair with hardware by programmatically connecting to wifi networks. In this post, I'll go over how to add this feature to your Xamarin.Forms app. This creates a smoother pairing process for users and their new IoT devices.

Shared Code

To begin, we'll need to add an interface for the wifi connector. This interface will be implemented by each native wifi connector, both of which will have their own platform-specific version of the ConnectToWifi method.

namespace MyApp
{
    public interface IWifiConnector
    {
        void ConnectToWifi(string ssid, string password);
    }
}

This method will take two strings: the SSID and the password for the network where we want to connect. You can obtain these strings in a number of ways. For example, you could ask the user to manually enter the information. Alternatively, you could hard-code the values. Or you could even use a QR code scanner to obtain the strings from a barcode. For this example, it doesn't matter how we get the values, as long as they're available when we're ready to connect.

Both the Android and iOS versions will use the Xamarin.Forms DependencyService to inject each class where it needs to go. But first, you'll need to register each class by adding the following attribute to each version of the class:

[assembly: Dependency(typeof(WifiConnector))]

To use the WifiConnector, add the following line of code to access the interface from anywhere in your Forms app:

var wifiConnector = Xamarin.Forms.DependencyService.Get<IWifiConnector>();

Android

Connecting to wifi within your Xamarin.Forms Android app requires you to enable two permissions: CHANGE_WIFI_STATE and ACCESS_WIFI_STATE. These will be added to the AndroidManifest.xml file.

<uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />

Xamarin uses the WifiManager class in Android to connect to wifi networks. This manager allows access to wifi information, from configured networks to the current wifi state. We will use it to add a new network to the device's list of configured networks.
The wifi manager is created by grabbing the WifiService from the Android context. The service comes back from the context as a Java object, so we'll have to cast it as a Xamarin WifiManager to continue.

var wifiManager = (WifiManager)Android.App.Application.Context
    .GetSystemService(Context.WifiService);

Before we can add the network to our configured networks, though, we have to create a wifi configuration from the SSID and password that were passed into the method. When we create the configuration, we need to make sure to format the strings with extra quotes, as per the Android documentation.

var formattedSsid = $"\"{ssid}\"";
var formattedPassword = $"\"{password}\"";

var wifiConfig = new WifiConfiguration
{
    Ssid = formattedSsid,
    PreSharedKey = formattedPassword
};

Now we'll add the network configuration to our list of configured networks.

var addNetwork = wifiManager.AddNetwork(wifiConfig);

Once we add the network configuration, we can double-check that the network is properly configured. If not, we don't want to keep trying to connect to it. However, if all is well, we can continue our attempt to connect. (Note that ConfiguredNetworks stores the SSID in its quoted form, so we compare against formattedSsid.)

var network = wifiManager.ConfiguredNetworks
    .FirstOrDefault(n => n.Ssid == formattedSsid);
if (network == null)
{
    Console.WriteLine($"Cannot connect to network: {ssid}");
    return;
}

At this point, we'll need to disconnect from any wireless network where we're currently connected. Then we can enable the network to create the actual connection.

wifiManager.Disconnect();
var enableNetwork = wifiManager.EnableNetwork(network.NetworkId, true);

Finally, we're connecting to the network we want. Although Android does not notify the user that their wifi network is changing, it would be helpful to share that information with the user at this point.

iOS

With the release of iOS 11 in Fall 2017, Apple provided a way for developers to implement wifi connections within their apps.
This also allowed any cross-platform framework, including Xamarin.Forms, to implement the classes required for configuring wireless networks. Before we can actually connect to a wifi network from our app, we have to add the Hotspot service to our provisioning profile. You can access this through the Apple Developer Portal.

Xamarin makes use of the NEHotspotConfigurationManager from the NetworkExtension library to handle connections to wireless networks.

var wifiManager = new NEHotspotConfigurationManager();

Just like Android, the iOS version of the WifiConnector implements IWifiConnector, so the ConnectToWifi method takes the SSID and password for the requested network. We'll use these parameters to create a configuration (this time an NEHotspotConfiguration), passing in the SSID, password, and WEP flag (false because we're using a WPA/WPA2 connection).

var wifiConfig = new NEHotspotConfiguration(ssid, password, false);

From this, we'll use the configuration manager we created to "apply" this configuration, passing in a lambda to handle any errors that may occur when trying to connect. If no errors occur, the operating system will notify the user that the app wants to change the wireless network.

wifiManager.ApplyConfiguration(wifiConfig, (error) =>
{
    if (error != null)
    {
        Console.WriteLine($"Error while connecting to WiFi network {ssid}: {error}");
    }
});

Once the user selects "Join," the phone will connect to the network. Success!

21 Comments

Hi Molly Alger, how do I get the wireless network list with NetworkExtension? I tried to find the solution but it is not easy. Can you help me? Thanks!

With iOS, it is possible to get a list of the networks that have been configured by your app by calling _wifiManager.GetConfiguredSsids(). The parameter for this will be an action to handle the string array of configured SSIDs, such as removing all the networks.
More information about this can be found in the Apple developer docs.

Thank you!

Hi Molly Alger, in a Xamarin iOS project, when I connect to a particular wireless network, although I have enabled Hotspot in the provisioning profile, one error still occurs. The error content is: "Exception of type 'Foundation.NSErrorException' was thrown." My source looks like:

public async Task ConnectToWifi_IOS(string ssid, string password)
{
    try
    {
        var config = new NEHotspotConfiguration(ssid, password, false);
        await wifiManager.ApplyConfigurationAsync(config);
        Console.WriteLine("Connected!");
    }
    catch (Foundation.NSErrorException error)
    {
        Console.WriteLine(error.Message);
        return false;
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
        return false;
    }
    return true;
}

Hello, what about iOS 9.3 and newer? The sample only works on iOS 11 and later, maybe!

@khoicva Just set "Enable HotspotConfiguration" in Entitlements.plist, so that it is found without an exception and the pop-up appears.

Is this in a public repository?

On Android, when searching for the configured network, you should query against formattedSsid. So the following line:

var network = wifiManager.ConfiguredNetworks.FirstOrDefault(n => n.Ssid == ssid);

should be:

var network = wifiManager.ConfiguredNetworks.FirstOrDefault(n => n.Ssid == formattedSsid);

Has anyone gotten this to work? Even with the last change from Ingweland, I have yet to make a connection.

Everything works fine! What problem do you have?

The Android side works fine – no problem there.
This does not work (Apple side):

NEHotspotConfigurationManager wifiManager = new NEHotspotConfigurationManager();
NEHotspotConfiguration wifiConfig = new NEHotspotConfiguration(ssid, password, false);
wifiManager.ApplyConfiguration(wifiConfig, (error) =>
{
    if (error != null)
    {
        Debug.WriteLine($"Error while connecting to WiFi network {ssid}: {error}");
        Thread.CurrentThread.Abort();
    }
    else
    {
        Debug.WriteLine("Connected!");
    }
});

// make this async
SocketPermission permission = new SocketPermission(
    NetworkAccess.Connect,     // Connection permission
    TransportType.Tcp,         // Defines transport types
    "",                        // Gets the IP addresses
    SocketPermission.AllPorts  // All ports
);

// Ensures the code has permission to access a Socket
permission.Demand();

Socket Client = null;
TcpClient ClientSocket = new TcpClient();
Client = ClientSocket.Client;
Client.ReceiveBufferSize = 4096;

try
{
    Client.Connect(config_ipaddress, config_portnumber);
}
catch (Exception ex)
{
    Client.Disconnect(false);
    Console.WriteLine("*** Reset the server board. ***");
    Console.WriteLine(ex);
    Environment.Exit(0);
}

Unfortunately – for some odd reason that I am going back and forth on with Microsoft – I can't write to the console or set a breakpoint in Visual Studio 2017 (using the latest version).

Apple is giving me a hard time getting access to the Hotspot service: is there any specific explanation or wording to make sure the request is granted?

Hello Molly, great explanation, but I wonder if there is any NuGet plugin that includes this functionality. It would help novice programmers. Thanks.

Hi Molly Alger, please let me know how to get the list of all available wifi networks in my XForms.iOS app. I can get it in XForms.Android using the WifiManager.ScanResults property, but in iOS I can't find the solution. Can you help me? Thanks in advance, Ashok N

Hi Molly Alger, please let me know how to get the list of all available wifi networks in my XForms.iOS app.
I can get it in XForms.Android using the WifiManager.ScanResults property, but in iOS I can't find the solution. I am new to iOS, can you help me? Thanks in advance, Ashok N

Hi Molly! Thanks for a great post ;) I have a question. I created an application connecting to WiFi in iOS. Everything works fine, but iOS always asks "Wants to Join Wi-Fi Network". Is there any possibility that it would not ask? For my application, this popup is a problem. Maybe a list of preferred networks?

Hi Maciek, you said that your app works fine, but my Visual Studio 2017 says "To be added" for GetConfiguredSsidsAsync(), GetConfiguredSsids() and ApplyConfiguration(), which means they are not implemented yet. The version of Xamarin.iOS is 4.12.3.77 (the latest version). My code is exactly as in this post and the comments. Which Xamarin.iOS.dll do you use? Can you show your code?

Hi Molly, my Visual Studio 2017 says "To be added" for GetConfiguredSsidsAsync(), GetConfiguredSsids() and ApplyConfiguration(). The version of Xamarin.iOS is 4.12.3.77 (the latest version). Does it mean all the examples in your post are not working now?

I contacted James Montemagno – he said you can't programmatically connect to a wifi network on an iPhone. The user would have to select the wifi and you go from there. On Android you can, and I have it working in Android in Xamarin.

Thank you, Robert! I found a GitHub project, but there I see the same result for the WiFi methods: "To be added". All the examples I found are not working for the same reason.

Hi Molly, thank you for the article, it was really well written. I have one doubt: I managed to disconnect from my current network and connect to the one I want, but it stays connected for like 2 or 3 seconds and then goes back to the old network. Is this happening to anyone else?
https://spin.atomicobject.com/2018/02/15/connecting-wifi-xamarin-forms/
Ajay <abra9823 at mail.usyd.edu.au> wrote:
> the statement attribs.getNamedItem("appel:connective") however returns
> None.

Oh dear me. This is issue 20 from:

Which I believed had been fixed in PyXML 0.7, but apparently not; certainly I can see the problem again in 0.8.3. Using namespace-unaware methods to access attributes which have namespaces just doesn't seem to work in 4DOM. That's quite bad really.

> now i think its substituting the namespace for appel but then how would you
> access the attribute, just 'connective' doesn't work, 'appel:connective'
> doesn't either.

You'd need one of the DOM Level 2 namespace-aware methods for this:

attrs.getNamedItemNS('', 'connective')
element.getAttributeNS('', 'connective')
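For comparison, the DOM Level 2 namespace-aware accessors behave as expected in the standard library's xml.dom.minidom (shown here purely as an illustration of the (namespaceURI, localName) calling convention; this is minidom, not 4DOM, and the namespace URI is made up for the example):

```python
from xml.dom import minidom

# Hypothetical namespace URI bound to the "appel" prefix.
NS = "http://example.org/appel"

doc = minidom.parseString(
    '<root xmlns:appel="%s" appel:connective="and"/>' % NS
)
elem = doc.documentElement

# DOM Level 2 namespace-aware lookups take (namespaceURI, localName):
print(elem.getAttributeNS(NS, "connective"))                   # and
print(elem.attributes.getNamedItemNS(NS, "connective").value)  # and
```

Note that these calls pass the full namespace URI, not the prefix; the prefix is only a document-local shorthand for the URI.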
https://mail.python.org/pipermail/xml-sig/2004-August/010487.html
Red Hat Bugzilla – Bug 13030: Patch to compile rpm-3.0.5 under Tru64 Unix

Last modified: 2008-05-01 11:37:56 EDT

rpm-3.0.5 as released doesn't build under Compaq's Tru64 Unix 4.0F with gcc. Here's the patch:

diff -rc rpm-3.0.5.orig/lib/md5.h rpm-3.0.5/lib/md5.h
*** rpm-3.0.5.orig/lib/md5.h    Wed Jun 21 00:14:00 2000
--- rpm-3.0.5/lib/md5.h Mon Jun 26 09:40:27 2000
***************
*** 3,9 ****
--- 3,13 ----
  #include <sys/types.h>
  
+ #ifdef __alpha
+ typedef unsigned int uint32;
+ #else
  typedef u_int32_t uint32;
+ #endif
  
  struct MD5Context {
          uint32 buf[4];

Fixed. Will be in final rpm-3.0.5 release. Thanks for the report.
https://bugzilla.redhat.com/show_bug.cgi?id=13030
This is the mail archive of the gdb@sourceware.org mailing list for the GDB project. Feel free to ignore this; it's mostly a brain dump. Or feel free to read down as far as the start of README.AVAIL and stop there. I've created a new branch: gdb-csl-available-20060303-branch This is for the "available registers as a target property" work that I described on this list almost a year ago. I've been working hard on this, and made a lot of progress over the last few weeks, but I'm going to have to go out of town on business for most of next week; and I'm always fearsomely behind on my projects when I get back from a trip. So, I may not be able to work on it again for a while. I wanted to get my current state into CVS before I left. The stuff that's there is not, in my opinion, ready. It needs some more development, and a few of the items from my TODO list (see below) are really important to consider before we start using this. But, what's on the branch works! I hacked up a local embedded ARM stub to supply the sample description below, and the associated registers, and GDB will successfully display them. At least it did a few days ago when I last tried it. So, it's not done, but it's real. Here's the DTD and the current README.AVAIL file; I'll probably think of more TODO items that I forgot to write down, and there's plenty of them in the code already. I'm always happy for folks to comment on either the code or the approach; one of the problems in designing something extensible is allowing it to extend to things I haven't thought of yet. That's why I eventually settled on XML for the descriptions, by the way. With the caching items mentioned in the TODO, and support for reading the descriptions from files instead of from the target, I don't consider the extra bandwidth to be a problem. And being able to use e.g. GUI XML editors to generate the descriptions and standards-compliant XML validators to verify the syntax has already been really handy. 
So I think this is the right choice. I've put expat 2.0.0 on the branch, with a small change to make it build in our environment; this won't impose any new build requirements. It's not as trivial to handle as a full-featured XML library with DOM (Document Object Model) support, but it's also not very large. I considered using libxml2 instead so that I could take advantage of the DOM, but now that I've written the expat parser, I think it's not _too_ horrible. It's a somewhat weird style to me, though.

README.AVAIL:

Notes for the branch "gdb-csl-available-20060303-branch"
--------------------------------------------------------

This branch implements a new mechanism which allows GDB to ask a target "what features do you have?" GDB can then interpret the response and dynamically present those features to the user. Some features require corresponding support in GDB, and must be specially recognized by the target architecture. Others do not require additional GDB support, e.g. additional registers (the only type of feature implemented so far).

The branch does not have a ChangeLog relative to mainline; one will be written later. So far, only the ARM target has any support for available features. The most interesting portion of this document is the TODO list; the rest may get somewhat out of date.

Control flow in GDB
-------------------

After connecting to the target, we check the new architecture-provided setting gdbarch_available_features_support. If it is set, we query the target for its available features, interpret the response, and switch to a new gdbarch, derived from the current one, with these features recorded. In order for the derivation process to work, the architecture's gdbarch_init must correctly support filling in defaults based on the last used architecture. If it does not (for example, if it caches something read from the ELF binary in gdbarch_tdep), the architecture is likely to get out of sync with the debuggee.
During debugging, GDB can query information from the current set of features. This is currently done in architecture-specific hooks, but may be done in common code in the future.

Writing a feature description
-----------------------------

Feature descriptions are written in XML. The current DTD is in gdb/features/gdb-target.dtd. There are some limits beyond those expressed in the DTD - many of these limits are not yet documented and not yet relevant until additional GDB support has been implemented. See the TODO.

Here's a simple sample description:

<?xml version="1.0"?>
<!DOCTYPE target SYSTEM "gdb-target.dtd">
<target>
  <feature name="bar">
    <reg name="s0" bitsize="32"/>
    <reg name="s1" bitsize="32" type="float"/>
  </feature>
  <feature-set>
    <feature-ref name="bar"/>
  </feature-set>
</target>

This describes a simple target feature set which only contains two registers, named s0 (32-bit, integer) and s1 (32-bit, floating point).

You can spread a description over multiple files by using the standardized XInclude mechanism - but only a very simplistic form of XInclude is supported. The xpointer attribute must be provided, using a bare ID rather than a more complicated XPointer expression. The href argument should also be provided, using a bare basename. GDB will query the target for the file, if it has not already seen it. Presently only <feature> elements may be read using XInclude.

You can validate the description using any XML validator which supports XInclude. For instance, with "xmllint" (shipped as part of the GNOME libxml2 package):

xmllint --xinclude --postvalid my-target.xml

Post validation is usually appropriate when using XInclude; the DTD describes the document after XInclude processing.

TODO items and unsettled (or unasked) questions
-----------------------------------------------

When finished, this documentation needs to move into the user and internals manuals.

The "ro" and "save-restore" tags may not express enough information about how to treat system registers.
Richard Earnshaw suggested a more detailed categorization; I need to consider it.

Reading and writing registers using the 'p' and 'P' packets is very inefficient; a target mechanism and remote protocol packets for multiple register batching would probably help a lot.

For ARM VFP, there are two views of some registers: s0 / s1 are single precision registers overlayed in storage with d0, a double precision register. Many other targets do the same thing. Should we express this to GDB, or just allow it to read them both from the target (somewhat wasteful)? GDB already assumes that modifying one register may modify another unpredictably, so writing is OK.

The DTD allows for description fields, including multi-lingual ones, but there is no GDB support for descriptions. It would be good to present them to the user.

Should we convey the information read from the target (after e.g. XInclude processing) to MI front ends, or are the changes to the register cache sufficient? For instance, Eclipse would probably be happy to display most of this data (especially descriptions).

The current DTD and GDB support does not allow for nested features. This is probably useful.

GDB needs additional error checking for its assumptions of unique names. For instance, there may be multiple registers with the same name in GDB's feature database, but they had better not be instantiated in the same feature set.

Feature sets should probably have names, so that they can be referenced uniquely and cached, to minimize data the target has to supply.

We need a naming scheme for features (and maybe feature sets). Some considerations:

- Names should generally be globally unique, so that we can efficiently cache features and even ship their descriptions with GDB. Having the feature on the target not match GDB's cached value is likely to lead to mayhem. When caching is implemented, perhaps we should also have a maint command to check that the cache is correct, for targets which can supply their features in detail (it's possible that the target can't, and instead relies on GDB loading them from files).

- It should be hierarchical, so that vendors may create their own names without risk of interfering with future GDB development or other vendors.

- There should probably be a namespace which will not be cached, for convenience during development, or for features which dynamically reconfigure on the target.

Should known features be compiled in to GDB, or loaded from the filesystem? The most essential features should probably be compiled in, so that the GDB binary is useful standalone.

GDB should support reading features and feature sets from disk instead of from the target.

GDB should support caching features read from the target in a user-specified directory.

Should GDB warn about unrecognized features which require additional GDB support, or silently ignore them?

If the name field of features is hierarchical, and the description is free-form, there should probably be a "short description" field - a user label without the uniqueness constraint.

Another suggested type of feature is a memory map - which areas of memory the debugger may read from / write to.

How should features interact with the standard architecture support? Basic options:

- Target features must not specify the standard registers.

- Target features may specify the standard registers, and GDB will handle any duplication.

- Target features must specify the standard registers. GDB can provide a standard feature for the architecture, which must be referenced (or a set of such standard features, at least one of which must be referenced).

I'm somewhat leaning towards #3, but it requires buy-in from target maintainers who wish to support this feature.
It's nice in that it moves a bit of code from the tdep files out into a description file; but less nice in that it might lose flexibility that we need; the target might have to run-time generate the XML. Are there any examples where that would be necessary?

The DTD:

<!-- The root element of a GDB target description is <target>. It
     contains a list of feature definitions, followed by a feature-set.
     This is also the only point at which xi:include is supported; it
     must be used with xpointer to fetch a feature, from a document
     whose root element is either target or feature. -->
<!ELEMENT target (feature*, feature-set)>
<!ATTLIST target
  xmlns:xi CDATA #FIXED "http://www.w3.org/2001/XInclude">

<!ELEMENT feature-set (description*, feature-ref+)>

<!-- QUESTION: Is there any reason for feature-ref to have its own
     descriptions? Or a short name field (descriptive)? -->
<!ELEMENT feature-ref EMPTY>
<!ATTLIST feature-ref
  name IDREF #REQUIRED
  base-regnum CDATA #IMPLIED>

<!-- TODO: Handle arch_data, maybe as unvalidated fields; do we want
     to define a namespace for arch-specific fields? Issue for feature
     and for reg. -->
<!-- QUESTION: Should the feature also have a short description to
     identify it? The format of its "name" field is restricted and
     probably not user-appropriate. -->
<!ELEMENT feature (description*, reg*)>
<!ATTLIST feature
  name ID #REQUIRED>

<!-- TODO: GDB does not yet support descriptions. -->

<!-- Registers do not have an explicit register number field; they are
     numbered sequentially from the containing feature's base-regnum
     when the feature is referenced. -->
<!-- arch_data; see above -->
<!-- Kill save-restore in favor of a more complete scheme -->
<!ELEMENT reg (description*)>
<!ATTLIST reg
  name CDATA #REQUIRED
  bitsize CDATA #REQUIRED
  readonly (yes | no) 'no'
  save-restore (yes | no) 'yes'
  type CDATA 'int'
  group CDATA #IMPLIED>

<!ELEMENT description (#PCDATA)>
<!ATTLIST description
  xml:lang CDATA #IMPLIED>

-- 
Daniel Jacobowitz
CodeSourcery
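[Editor's note] The expat parsing style the email calls "somewhat weird" can be illustrated with Python's stdlib expat bindings, pulling the reg definitions out of the sample description shown earlier. This is only a sketch of the callback-driven style, not GDB's actual C parser; the DOCTYPE line is dropped so expat need not see the DTD:

```python
import xml.parsers.expat

SAMPLE = """<?xml version="1.0"?>
<target>
  <feature name="bar">
    <reg name="s0" bitsize="32"/>
    <reg name="s1" bitsize="32" type="float"/>
  </feature>
</target>"""

regs = []

def start_element(name, attrs):
    # expat is event-driven: no tree is built, we react to each tag.
    if name == "reg":
        regs.append((attrs["name"], int(attrs["bitsize"]),
                     attrs.get("type", "int")))  # DTD default for type

parser = xml.parsers.expat.ParserCreate()
parser.StartElementHandler = start_element
parser.Parse(SAMPLE, True)

print(regs)  # [('s0', 32, 'int'), ('s1', 32, 'float')]
```

The contrast with a DOM library is visible here: instead of querying a document tree after the fact, all state (the regs list) must be accumulated inside handlers as the parse streams by.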
http://sourceware.org/ml/gdb/2006-03/msg00031.html
Originally posted on my website on June 14th 2020.

Array.prototype.filter()

The filter array method can be used to select the entries of an API response that match a certain criterion.

// MDN Docs:

const trekkies = [
  { id: 0, name: "Piccard", planet: "Earth" },
  { id: 1, name: "Spock", planet: "Vulcan" },
  { id: 2, name: "Kirk", planet: "Earth" },
  { id: 3, name: "Worf", planet: "Gault" }
];

const findTrekkiesByPlanet = planet => {
  return trekkies.filter(trekkie => trekkie.planet === planet);
};

console.log(findTrekkiesByPlanet("Earth"));
// [0: Object {id: 0 name: "Piccard" planet: "Earth"}
//  1: Object {id: 2 name: "Kirk" planet: "Earth"}]

Array.prototype.find()

The find array method can be used to find a single entry in an API response based on a certain criterion.

// MDN Docs:

const friends = [
  { id: 0, name: "joey", quote: "How you doin?" },
  { id: 1, name: "ross", quote: "We were on a break" },
  { id: 2, name: "phoebe", quote: "She's your lobster" }
];

const findFriendById = id => {
  return friends.find(friend => friend.id === id);
};

console.log(findFriendById(0));
// Object {id: 0, name: "joey", quote: "How you doin?"}

Array.from()

The from array method creates a new array from some arbitrary data. Here we are going to use it to conform API response data to something we can pass to a React component.

// MDN Docs:

const apiCategories = [
  { id: 0, title: "javascript", description: "...", other: "..." },
  { id: 1, title: "React", description: "...", other: "..." }
];

const transformApiCategories = () => {
  return Array.from(apiCategories, category => {
    return { label: category.title, value: category.id };
  });
};

console.log(transformApiCategories());
// [0: Object {label: "javascript" value: 0}
//  1: Object {label: "React" value: 1}]

// Example use in a react select component.
return (<SelectControl options={ transformApiCategories() } />);

Inside each example there is a comment with a link to that specific method's doc page. And you can check out all the array methods in the MDN documentation.
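These methods also combine nicely. A small sketch (the API data here is made up for illustration) that uses filter to narrow a response and Array.from's second argument to reshape it in one pass:

```javascript
const apiUsers = [
  { id: 0, name: "Ada", active: true },
  { id: 1, name: "Grace", active: false },
  { id: 2, name: "Alan", active: true },
];

// Keep only active users, then map each one down to just its name.
// Array.from's second argument works like a built-in .map() step.
const activeNames = Array.from(
  apiUsers.filter(user => user.active),
  user => user.name
);

console.log(activeNames); // ["Ada", "Alan"]

// find still works on the raw response for single lookups:
console.log(apiUsers.find(user => user.id === 1).name); // "Grace"
```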
Let's connect on Twitter @Vanaf1979 or here on Dev.to @Vanaf1979 so I can notify you about new articles and other WordPress development related resources. Thanks for reading and stay safe.

Discussion (2)

Good post, thank you.

Thank you! You're welcome :)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/vanaf1979/my-favourite-javascript-array-methods-for-dealing-with-api-data-4i5i
It all started with a task to do: print all members of a group within Active Directory, including members of the nested groups. And a deadline: 15 minutes. Given the deadline, I had no chance to get it done in time. Having 15 minutes means you need to get it right on the first run.

Googling for groovy ldap brought up Gldapo. But after looking at it and seeing how much configuration has to be done, I searched for some alternatives. Groovy LDAP was beautifully simple and had no external dependencies. I downloaded the jar, dropped it into my GROOVY_HOME/lib directory and started to write the script:

import org.apache.directory.groovyldap.LDAP

ldap = LDAP.newInstance('ldap://ldap.mycompany.com:389/dc=mycompany,dc=com')
ldap.eachEntry('(&(objectClass=person)(memberOf=cn=mygroup))') { person ->
    println "${person.displayName} (${person.cn})"
}

After reading through the sample scripts, I already had the main part. I saved it as listGroup.groovy and ran it from the command line:

groovy listGroup

It worked out of the box, printing all the members of the group on the console:

John Smith (smithj)
Amanda McDonald (mcdonaa)
Isabelle Dupre (duprei)

Of course, the script was not printing members of the nested groups. In order to do that, I had to turn the snippet into a recursive Groovy function and avoid hardcoding the group's name in favor of taking it as a command line parameter. Here is the entire script:

import org.apache.directory.groovyldap.LDAP
import org.apache.directory.groovyldap.SearchScope

List getMembersOfAGroup(connection, groupName) {
    def members = []
    def result = connection.searchUnique("cn=$groupName")
    connection.eachEntry("memberOf=${result.dn}") { member ->
        if (member.objectclass.contains("group"))
            members.addAll(getMembersOfAGroup(connection, member.cn))
        else
            members.add("${member.displayName} (${member.cn})")
    }
    return members
}

LDAP ldap = LDAP.newInstance("ldap://ldap.mycompany.com:389/dc=mycompany,dc=com")
getMembersOfAGroup(ldap, args[0]).each { println it }

If your directory contains circular group relations, the script has to be further adjusted. This detail has been omitted for simplicity reasons. Please note that the examples in this article work only with Microsoft Active Directory, because they use vendor-specific structure and schema elements. In other directory solutions, for instance, group membership is often stored in group entries only, while in Active Directory it is stored in both the group and the member object. But the examples can easily be adjusted to fit another directory solution, e.g. by modifying filter expressions.

What is this LDAP thing you're talking about?

LDAP 101: LDAP stands for Lightweight Directory Access Protocol. A directory is a storage organized as a tree of directory entries. The tree usually reflects political, geographical and/or organizational boundaries. Every directory entry consists of a set of attributes (name/value pairs). These attributes are defined in the LDAP schema. Each directory entry has a unique identifier named DN (Distinguished Name). For more information please read the Apache Directory introductory article.

Project background

Groovy LDAP is a small library started by Stefan Zoerner from the Apache Directory project.
Its goal was to create a minimalistic LDAP API for Groovy, with metaphors understood by the LDAP community (e.g. members of the Apache Directory team). As such, the only two dependencies of Groovy LDAP are:

- Java SE (5 or later)
- Groovy 1.0 or later

Under the hood, JNDI is used to perform LDAP queries, but fortunately Groovy LDAP hides it and lets you use a bunch of useful methods and objects instead. It actually reminds me of the time when the Netscape LDAP API was widely used. It defines a set of methods to perform the basic LDAP operations: create, modify, delete, compare, search. Groovy LDAP is written in Java, not Groovy. The only Groovy dependency is a reference to the Closure class, which is used as a parameter in a couple of search methods. So, with the exception of the methods taking a closure, the others can also be used in Java programs.

How to get it: The simplest way is to get the binaries from the Groovy LDAP download page. After downloading and expanding the zip file, look for groovy-ldap.jar in the dist directory. Drop it into your GROOVY_HOME/lib directory and you're ready to write your first script.

How to build it: If you want to build the library on your own, you will need Ant and Ivy. After you download and install Ant, drop Ivy's jar (ivy-1.4.1.jar) into your ANT_HOME/lib directory. Now you can check out the source files from the Apache Directory sandbox Subversion repository. Once the files are checked out, just type ant and wait until the distribution jar is built in the dist directory.

Connecting to the directory: The first thing you will want to do is to connect to the directory. Groovy LDAP offers two types of connection here: anonymous bind and simple bind. Anonymous bind happens when you connect to the directory without providing your credentials. Many directories allow anonymous bind if the client is only reading from the directory. In corporations anonymous bind is often disabled for security reasons.
So, in order to connect you need to instantiate the LDAP class using the newInstance() method, which comes in the following variants:

    public LDAP newInstance()
    public LDAP newInstance(url)

The no-parameter method connects to the default address, which is localhost:389. It proves to be useful for various short proof-of-concept scripts. The second method takes the URL of the directory as a parameter. If anonymous bind is not allowed or not sufficient, there is an equivalent method that additionally takes user credentials:

    public LDAP newInstance(url, user, password)

Once the connection is established, you can perform any other actions. One tip is to always provide a baseDN as part of the connection URL, e.g.

    ldap://ldap.mycompany.com:389/dc=mycompany,dc=com

By doing so you define the default base upon which searches will be performed, which in turn allows you to use the convenient one-parameter search methods instead of specifying a search base and scope each time.

Reading and searching directory entries: You may want to start by checking if a specific directory entry exists:

    def found = ldap.exists('cn=smithj,dc=mycompany,dc=com')

The exists() method searches the directory by DN (Distinguished Name) and returns a boolean result detailing whether an entry was found. As a companion there is the read() method, which reads the directory entry specified by its DN:

    if (found)
        def entry = ldap.read('cn=smithj,dc=mycompany,dc=com')

These methods return a boolean value or the given entry, respectively. But there might be cases when you do not want to search by DN, but by another attribute which is also unique. A good example of this is a userId attribute, which is usually unique within a company.

    def entry = ldap.searchUnique('userId=smithj')

This method assumes uniqueness of the object. If more than one result is returned from the search, you will get an exception.
When more results are expected, you can use the search() method and then iterate over the result set:

    results = ldap.search('(objectClass=user)')
    println "Found: ${results.size()} entries"
    results.each { entry -> println entry.dn }

Searches can also be performed with the more compact and more Groovy eachEntry() method, taking a closure as the last parameter:

    ldap.eachEntry('(objectClass=user)') { entry -> println entry.dn }

As you see, when you have the entry object, you can reference all its properties using native map syntax, e.g. entry.dn. This is possible because all result objects returned from Groovy LDAP search methods are Maps or Lists of Maps. But how does Groovy LDAP know in which subtree you would like to perform your search? It doesn't, because you haven't specified anything else but the basic query. So it assumed you want to search in the baseDN (hopefully specified when connecting to the directory). When you want to have more control over how the query is performed, there are versions of the search(), searchUnique() and eachEntry() methods that support it, e.g.

    public List<Object> search( String filter, String base, SearchScope scope )

They define additional parameters such as the base upon which a search is performed and the search scope, being one of three possible constants:

- SearchScope.BASE – searches only the base
- SearchScope.ONE – searches one level below the base, excluding the base
- SearchScope.SUB – searches the entire subtree below the base, including the base

So an example search could look like:

    ldap.search('objectclass=user', 'ou=hr,dc=mycompany,dc=com', SearchScope.SUB)

There are also more sophisticated alternatives, taking a Map<String, Object> or a Search class instance as parameters, but we'll leave them aside for now. When you deal with LDAP directories as a part of your daily job, you may want to have a look at Apache Directory Studio, a full-fledged LDAP client tool, which allows you to connect to, browse and modify any LDAP-compatible directory.
It can also be used as a diagnostic tool when your query in Groovy LDAP doesn't work as expected.

Adding, modifying and deleting directory entries: Now that you know how to search and read from the directory, it's time to do some modifications. Let's start by adding a new entry:

    def attributes = [
        objectclass: ['top', 'person'],
        cn: 'smithc',
        displayName: 'John Smith'
    ]
    ldap.add('cn=smithc,dc=example,dc=com', attributes)

The add() method takes a DN and a Map of attributes as parameters. You need to remember not to put the DN in the attributes map, as it is not an attribute but rather the unique identifier of the entry. Removing a directory entry is even more straightforward:

    ldap.delete('cn=smithc,dc=example,dc=com')

The delete() method will throw an exception if an object with the given DN does not exist. Modifying a directory entry is not very Groovyish for the time being. Adding single attributes is still relatively easy:

    def dn = 'cn=smithj,dc=mycompany,dc=com'
    def email = [ email: 'john.smith@mycompany.com' ]
    ldap.modify(dn, 'ADD', email)

Performing batch modifications could be more readable using a builder-like syntax. The current way to do this is the following:

    def modifications = [
        [ 'REPLACE', [email: 'jsmith@mycompany.com'] ],
        [ 'ADD', [phone: '+48 99 999 99 99'] ]
    ]
    ldap.modify(dn, modifications)

The same operation, using more expressive syntax, would potentially look like:

    ldap.modify('cn=smithj,dc=mycompany,dc=com') {
        replace(email: 'jsmith@mycompany.com')
        add(phone: '+48 99 999 99 99')
    }

Summary: As you can see, Groovy LDAP is a neat little library delivering a simple but convenient API to deal with LDAP directories, which makes it an ideal candidate for various administrator scripts and short programs. As a project it resides in the Apache Directory sandbox, so when you have a chance, contribute and help Groovy LDAP become an official subproject of the Apache Directory.
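The article warns that its recursive script breaks on circular group relations and leaves the fix out for brevity. Here is a directory-free sketch of the same traversal with cycle protection; the nested dict standing in for the directory is a made-up example, not Groovy LDAP's API:

```python
# Hypothetical in-memory stand-in for the directory: group name -> members.
# An entry that is itself a key is a nested group; anything else is a person.
GROUPS = {
    "mygroup": ["developers", "smithj"],
    "developers": ["mcdonaa", "mygroup"],  # circular reference back to mygroup
}

def members_of(group, seen=None):
    """Recursively expand a group, skipping groups already visited."""
    seen = set() if seen is None else seen
    if group in seen:          # circular relation: stop instead of recursing forever
        return []
    seen.add(group)
    members = []
    for entry in GROUPS.get(group, []):
        if entry in GROUPS:    # nested group -> recurse
            members.extend(members_of(entry, seen))
        else:                  # leaf entry -> a person
            members.append(entry)
    return members

print(members_of("mygroup"))   # ['mcdonaa', 'smithj']
```

Threading a `seen` set through the recursion is essentially the adjustment the article omits: a group that is already being expanded is simply skipped.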
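The batch modification list the article shows is plain data: pairs of an operation name and an attribute map. A minimal sketch of applying such a list to an entry modeled as a dict of attribute-value lists (the entry model and the apply helper are illustrative, not part of Groovy LDAP):

```python
def apply_modifications(entry, modifications):
    """Apply ('REPLACE'|'ADD'|'DELETE', attrs) pairs to a dict-based entry."""
    for op, attrs in modifications:
        for name, value in attrs.items():
            if op == 'REPLACE':
                entry[name] = [value]                 # replace all existing values
            elif op == 'ADD':
                entry.setdefault(name, []).append(value)
            elif op == 'DELETE':
                entry.pop(name, None)                 # remove the attribute entirely
    return entry

entry = {'email': ['john.smith@mycompany.com']}
modifications = [
    ('REPLACE', {'email': 'jsmith@mycompany.com'}),
    ('ADD',     {'phone': '+48 99 999 99 99'}),
]
print(apply_modifications(entry, modifications))
# {'email': ['jsmith@mycompany.com'], 'phone': ['+48 99 999 99 99']}
```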
Thanks: I would like to thank Stefan Zoerner and Carolyn Harman for their thorough review of the article.

Matt Stine replied on Mon, 2009/02/23 - 1:36pm

Kirk Remignanti replied on Wed, 2010/11/10 - 10:41am: I just found this article, tried to navigate to the Groovy LDAP download page and got a 404 error. Is Groovy LDAP still a viable method for connecting to an LDAP directory? Update (11/10/10) - I found the download for Groovy LDAP here: and I was able to implement it by dropping the .jar file into my GRAILS_HOME/lib directory. The problem I ran into, though, is when I tried to use the "import org.apache.directory.groovyldap.LDAP" code in my script to load the LDAP class. It might be because I'm running this script from a Grails environment, but I needed to load the class manually using the "classLoader" method. Here's how I was able to get the script to run correctly: Does anyone know why I would get the error message "unable to assign a value to the class 'org.apache.directory.groovyldap.LDAP'" when I try to use "import org.apache.directory.groovyldap.LDAP" in my script? (My apologies if this is a dumb question, I'm fairly new to the Grails/Groovy programming environment.)

Punit Ashra replied on Tue, 2011/11/08 - 4:22am: Hi, I have a problem regarding Groovy LDAP. I wanted to add already existing users in Active Directory to a group. It would be helpful if I could get any information regarding this.

Vinod Damle replied on Thu, 2013/01/31 - 10:54pm: Over the past few days, I tried to get both Apache LDAP (M14) and Gldapo (0.8.2) to work with an OpenLDAP server via Grails (2.2). Apache LDAP doesn't work out of the box like the other user commented (duh! I read the comments after a day of wasted effort) and I heard the same on their mailing list. Gldapo has a plugin for Grails STS but it is very poorly documented. Exceptions don't work if you have Java 7 (the plugin uses Java 6 if I'm not mistaken) due to some compatibility issue.
I spent a couple of days getting a query working, but at the end of it there was still a problem with the schema mapping classes for which I never found an answer. Maybe it's Murphy's law: I got hold of UnboundID's LDAP SDK right at the very end and it works like a charm. All I had to do was drop it in the <proj>/lib folder in Grails, and the examples/API guide in their javadoc are clear enough to get scenarios working. I would strongly recommend UnboundID's SDK.
http://groovy.dzone.com/articles/programming-ldap-groovy
    ////////////////////
    // my .CXR file
    //
    // here is my password definition:
    // CXRP = "SexyBeast"
    //
    // here are some strings:

    // my first string
    const char* pString1 = _CXR("AbcdEfg1234 blah\tblah");

    // string #2
    const char* pString2 = _CXR("This is a long one, not that it should matter...");

As you can see, the only difference between this and standard C/C++ is the _CXR specifier. The comment line with the password is required, and any text you want encrypted must be inside a _CXR(...) specifier. Anything else in the file will be copied over to the output .CPP unchanged. So, that's how you set up the .CXR file. When you build your project, the CXR parser will read this file and generate a .CPP that looks like this:

    ///////////////////////////
    #ifdef _USING_CXR
    // my first string
    const char* pString1 = "ab63103ff470cb642b7c319cb56e2dbd591b63a93cf88a";
    #else
    // my first string
    const char* pString1 = _CXR("AbcdEfg1234 blah\tblah");
    #endif

    ///////////////////////////
    #ifdef _USING_CXR
    // string #2
    const char* pString2 = "baff195a3b712e15ee7af636065910969bb24997c49c6d0cc6a40d3ec1...";
    #else
    // string #2
    const char* pString2 = _CXR("This is a long one, not that it should matter...");
    #endif

    ...more stuff below...

Presto. The CXR parser has encrypted your strings. To get a string back at run time, pass the encrypted pointer through the _CXR macro:

    CString csString1 = _CXR(pString1);
    // pString1 = "ab63103ff470cb642b7c319cb56e2dbd591b63a93cf88a"
    // and now csString1 = "AbcdEfg1234 blah\tblah";

Note the #ifdef _USING_CXR tags. Because of these, you can disable the CXR strings by simply changing a single #define, to make testing easier. If _USING_CXR is not defined, all your strings revert to their unencrypted form and _CXR turns into a macro that does nothing. If _USING_CXR is defined, your strings take on their encrypted forms and the _CXR macro becomes a call to the decrypting code. It's (almost) totally seamless. The password definition looks like this:

    // CXRP = "MyPasswordString"

where the password string is any string you want. The "// CXRP =" part is required, and any text you want encrypted must be inside a _CXR(...) tag.
_CXR(...) tags cannot span multiple lines.

Declare the strings as usual wherever they are needed:

    extern const char* pString1;
    extern const char* pString2;

In the custom build step for the .CXR file, enter:

    cxr.exe -i $(InputPath) -o $(ProjDir)\$(InputName).cpp

This will cause Visual Studio to call CXR.EXE with the name of your .CXR file as input and the same name, but with a .CPP extension, as output. In the "Outputs" section, enter:

    $(ProjDir)\$(InputName).cpp

This tells the compiler that this file needs to be recompiled when the .CXR file changes.

    #ifndef CXRHeaderH
    #define CXRHeaderH

    #define _USING_CXR

    #ifndef _USING_CXR
    #define _CXR(x) x
    #else
    #define _CXR(x) __CXRDecrypt(x)
    extern CString __CXRDecrypt(const char *pIn);
    #endif

    #endif

This file defines the macros you need to use to get your strings. This is also a good place to turn off the _USING_CXR macro, if you want to use your strings un-encrypted for any reason.

The parser reads the .CXR file looking for the password and _CXR(...) tags. Anything else it just copies to the output as-is. When it finds a _CXR tag, it encrypts the string with your password, converts the encrypted data to printable characters and outputs the encrypted version, along with the original version, in an #ifdef...#endif chunk. The parser is somewhat dumb; it doesn't understand much C/C++. It only knows to look for the password and _CXR("..."), which is what prevents the use of multi-line text and Unicode. It does, however, understand the syntax of C/C++ literal strings (including all escapes documented in K&R v2). The encryption code is based on the quick and small Sapphire II stream cipher, from Crypto++. XOR would work equally well, since we're only concerned with obfuscating the strings, not keeping them secure against decryption attack.
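The closing remark says plain XOR would serve the obfuscation goal equally well. A small sketch of that idea: XOR against a repeating password and hex-encode the result, reversible with the same password (a stand-in for the Sapphire II cipher, not the actual CXR code):

```python
from itertools import cycle

def obfuscate(text, password):
    """XOR each byte with the repeating password and hex-encode the result."""
    data = bytes(b ^ k for b, k in zip(text.encode(), cycle(password.encode())))
    return data.hex()

def deobfuscate(hex_text, password):
    """Reverse of obfuscate(): hex-decode, then XOR with the same password."""
    data = bytes.fromhex(hex_text)
    return bytes(b ^ k for b, k in zip(data, cycle(password.encode()))).decode()

hidden = obfuscate("AbcdEfg1234 blah\tblah", "SexyBeast")
assert "AbcdEfg1234" not in hidden                      # no plaintext left in the binary
assert deobfuscate(hidden, "SexyBeast") == "AbcdEfg1234 blah\tblah"
```

As the article stresses, this keeps strings out of a casual hex-dump; it is obfuscation, not security against a determined attacker.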
http://www.codeproject.com/KB/string/cxr.aspx
A Color Picker for Windows 32/64 that has no external dependencies. Tested on Windows XP 32-bit and Windows 7 64-bit using Sublime 2 build 2165. Screenshot: Instructions and source code available at github.com/animehunter/SublimeC ... indowsOnly If you spot any bugs, please post here.

Good job! It worked great here. I think that finally adds one for all the OSs.

Thanks for the feedback.

Hi, can you expand your plugin to allow us to use a pipette to choose the color from any element on the screen? Regards, Highend

Due to the difficulty of customizing the Color editor dialog, I've decided not to implement the pipette feature for now. As an alternative, I've come up with another method to choose a color from any pixel on the screen. See the latest readme at my repository for more details.

Ok, installed the updated files manually (package control didn't list an upgrade for your plugin) and it's working fine so far (ctrl + shift + alt + c). Thanks a lot.

This plugin is absolutely awesome! Thank you!

This is perfect. Works like a charm! Thank you.

Now we have also a multi-platform plugin: github.com/weslly/ColorPicker thanks to @animehunter!

Nice! I was going to suggest folding these two together. I kind of like having my platforms use the same plugins. It makes syncing settings and everything much easier.

Is there a way to call a third-party color picker? I found this plugin pretty useful, but the standard Windows color picker is awful and pretty useless!

Yes, it is very possible. I have done a test to show it is possible. In this example I have simply used a local Python on my machine with wxPython installed, and grabbed the third-party Python color picker dialog from xoomer.virgilio.it/infinity77/AG ... ialog.html called CubeColourDialog.
I created a simple program that provides the CubeColourDialog GUI, based off a demo they had posted:

    import wx
    import wx.lib.agw.cubecolourdialog as CCD
    import json

    app = wx.App(0)

    colourData = wx.ColourData()
    dlg = CCD.CubeColourDialog(None, colourData)

    if dlg.ShowModal() == wx.ID_OK:
        # If the user selected OK, then the dialog's wx.ColourData will
        # contain valid information. Fetch the data ...
        colourData = dlg.GetColourData()
        h, s, v, a = dlg.GetHSVAColour()

        # ... then do something with it. The actual colour data will be
        # returned as a three-tuple (r, g, b) in this particular case.
        colour = colourData.GetColour()
        r, g, b, alpha = colour.Red(), colour.Green(), colour.Blue(), colour.Alpha()
        colors = {
            "rgba": (r, g, b, alpha),
            "hsva": (h, s, v, a)
        }
        print json.dumps(colors)
    else:
        print "{}"

    dlg.Destroy()
    app.MainLoop()

I compiled it as an EXE with PyInstaller (pyinstaller.org) and then accessed the executable via a Sublime plugin and output the result in the debug panel.

    import sublime
    import sublime_plugin
    import subprocess as sp
    import os.path as path
    import json

    def check_output(command):
        process = sp.Popen(command, shell=True, stdout=sp.PIPE,
                           stderr=sp.STDOUT, universal_newlines=True)
        output = process.communicate()
        retcode = process.poll()
        if retcode:
            raise(sp.CalledProcessError(retcode, command, output=output[0]))
        return output[0]

    class ColorTestCommand(sublime_plugin.TextCommand):
        def run(self, edit):
            print("ColorTest: Custom Color Dialog Feasibility Demo")
            cmd = path.join(sublime.packages_path(), "ColorTest", "clr_pick.exe")
            colors = json.loads(check_output(cmd))
            if "rgba" in colors and "hsva" in colors:
                print("ColorTest (rgba): r=%d, g=%d, b=%d, a=%d" % tuple(colors["rgba"]))
                print("ColorTest (hsva): h=%d, s=%d, v=%d, a=%d" % tuple(colors["hsva"]))
            else:
                print("No color selected!")

Results:

    >>> sublime.active_window().active_view().run_command("color_test")
    ColorTest: Custom Color Dialog Feasibility Demo
    ColorTest (rgba): r=67, g=188, b=117, a=255
    ColorTest (hsva): h=145, s=163, v=188, a=255

It would be better if I threaded this, and the dialog is a little buggy, but this shows you can create your own custom dialogs etc. and access them from Sublime. You don't have to create the executable in Python like I did; this just shows you can create your own custom GUI to do stuff. I think this is an untapped approach that developers could really do some neat stuff with.

thanks for the explanation facelessuser, I've managed to set my favourite picker in the plugin

Glad I could help.
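The contract between the plugin and the external picker in this thread is just JSON on stdout: a dict with "rgba" and "hsva" on success, "{}" on cancel. A sketch of that handshake with the picker process replaced by a canned string so it runs anywhere (the helper name is made up for illustration):

```python
import json

def parse_picker_output(raw):
    """Parse the picker's stdout; return (rgba, hsva) tuples or None if cancelled."""
    colors = json.loads(raw)
    if "rgba" in colors and "hsva" in colors:
        return tuple(colors["rgba"]), tuple(colors["hsva"])
    return None  # the picker printed "{}" because the user cancelled the dialog

# Stand-in for check_output(cmd) run against a real picker executable:
raw = '{"rgba": [67, 188, 117, 255], "hsva": [145, 163, 188, 255]}'
rgba, hsva = parse_picker_output(raw)
print("r=%d, g=%d, b=%d, a=%d" % rgba)    # r=67, g=188, b=117, a=255
assert parse_picker_output("{}") is None  # cancelled dialog yields no color
```

Keeping the interchange format this small is what makes the picker executable swappable, which is how the later replies manage to plug in their own favourite pickers.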
https://forum.sublimetext.com/t/color-picker-for-windows/4061/4
    #include "koolplot.h"

    int main()
    {
        Plotdata x(-3.0, 3.0), y = sin(x) - 0.5*x;
        plot(x, y);
        return 0;
    }

You can add #pragma comment(lib,"user32.lib") to your code (which basically achieves the same). To add libs using VS, open your project's properties (ALT+F7), go to "Configuration Properties|Linker|Input" and enter the lib(s) under "Additional Dependencies". If the lib you want to use does not reside in one of the standard directories or your project's build directory (Debug or Release), you can set that directory under "Configuration Properties|Linker|General". The koolplot website says "koolplot is available for the MingW (GCC port) compiler". There may be a way, but it sounds easier if you use MingW.
https://www.experts-exchange.com/questions/26846520/how-to-include-a-library-into-a-MS-VS-2008.html
Main class of the Angle Measure plug-in.

#include <AngleMeasure.hpp>

Main class of the Angle Measure plug-in. Definition at line 70 of file AngleMeasure.hpp.

Detect or show the configuration GUI elements for the module. This is to be used with plugins to display a configuration dialog from the plugin list window. Reimplemented from StelModule.

Handle key events. Please note that most of the interactions will be done through the GUI module. Reimplemented from StelModule.

Handle mouse clicks. Please note that most of the interactions will be done through the GUI module. Reimplemented from StelModule.

Handle mouse moves. Please note that most of the interactions will be done through the GUI module. Reimplemented from StelModule.

Initialize itself. If the initialization takes significant time, the progress should be displayed on the loading bar. Implements StelModule.

Load the plug-in's settings from the configuration file. Settings are kept in the "AngleMeasure" section of the configuration file.

Update the module with respect to the time. Implements StelModule.
http://stellarium.org/doc/0.15/classAngleMeasure.html
I'm trying to disable a component in C# that isn't attached to the current object the script is running on. I was using the below code to disable components attached to the current object, but how can I 'reach out' and disable others?

    GetComponent<Camera>().enabled = false;

I know there is also GetComponentInChildren, but it's not a child object; it's a completely separate object on its own. Thanks for any help provided!

You simply need a reference to the other object, and then you do things through that reference, like this:

    GameObject enemy;
    enemy.GetComponent<Camera>().enabled = false;

Answer by ScottYann · Aug 03, 2013 at 11:40 PM

Enabling a component works just as you say:

    GetComponent<Camera>().enabled = false;

but if I read you right, you want to do that to a different object. All you need then is the object reference. There are several ways of getting one. You can use Find like this:

    GameObject.Find("myDesiredObject").GetComponent<Camera>().enabled = false;

This can work; however, it is very costly in terms of CPU usage. You can get away with doing this if the line is only called once in a while, such as the result of pressing a button. If you are calling this line in an Update or an IEnumerator, the penalty is significant, especially on mobile. If you've pre-placed your target object in your scene, why not just reference it through a public variable?

    public GameObject myTarget;

    void OnMyEvent(){
        myTarget.GetComponent<Camera>().enabled = false;
    }

Then, if you haven't placed your target object in your scene and you've not done anything to reference it beforehand, a sure-fire way of getting to it with the smallest possible penalty is via an object tag. Make a new tag, and assign your target object that tag in its prefab. Then do this to call it:

    GameObject.FindGameObjectWithTag("myTag").GetComponent<Camera>().enabled = false;

You can have dozens of tags, and they make it easy to work with collections of objects. Say you had a smart bomb that blows up all your enemies.
Just have a tag called "enemy", assign enemy objects that tag, and do this:

    foreach (GameObject enemy in GameObject.FindGameObjectsWithTag("enemy")){
        Destroy(enemy);
    }

Answer by Jamora · Jul 31, 2013 at 05:31 PM

You need a reference to the other GameObject. The easiest, but not the most efficient, way is to use GameObject.Find with the name of the GameObject as a parameter. There are other ways to get a reference to other objects without using GameObject.Find, but I think you should look into those if you ever find your game's frame rates dropping.

I had come across that one, and as you stated, it can drop frame rate. You say that's the easiest, as if you know of another option? What other ways could this be accomplished? Thanks for your help so far!

Answer by Lovrenc · Jul 31, 2013 at 05:32 PM

You need a reference to the other object:

    otherObject.GetComponent<Camera>().enabled = false;

But this can become confusing spaghetti code really fast, so be careful!

So something like:

    CameraObject otherObject;
    otherObject.GetComponent<Camera>().enabled = false;

Cause I tried something similar to that; the issue is it's multiplayer, and the object isn't created until the player is in-game, and it puts the objects in place, so when I try to reference it, I get an error that it doesn't exist. Thanks for the help so far!

Answer by eaglemaster7 · Jul 31, 2013 at 05:32 PM

Try using:

    GameObject.Find("Your object name").GetComponent<Camera>().enabled = false;

Thanks for the reply! That was one option I came across, but I kept seeing comments about it eating a lot of resources. I had this same problem on a 'test' project, and had a working disable script, but for the life of me can't remember what I did. It was like 2 lines per component, any ideas?

Answer by StormMuller · Dec 23, 2017 at 09:46 AM

So this is a really old post, but maybe I can help other people coming to this question.
The original poster's problem was that he needed to reference a gameObject other than the one the script is on. Remember, when using GetComponent<T>() on its own, you are calling a member of the MonoBehaviour class, which knows which GameObject you are looking for (the GameObject your script is attached to).

He could do this with any of the GameObject.Find methods. Or, better yet, create a public GameObject otherGameObject; then you can drag and drop another gameobject from the hierarchy or a prefab into the field in the inspector.

It's also important to note that the Component class does not have an enabled property to toggle. This is because some components cannot be disabled (like the Transform component). However, most components that can be disabled (Lights, Cameras, Colliders, etc.) are actually Behaviours (including your own custom MonoBehaviours), and Behaviours do have an enabled property.

A common use case: you have a script that will turn certain components on for the local player in a networked game.

    using UnityEngine;
    using UnityEngine.Networking;

    public class LocalPlayerEnabler : NetworkBehaviour
    {
        public Behaviour[] behavioursToTurnOn;

        void Start()
        {
            if (!isLocalPlayer)
            {
                return;
            }

            foreach (var behaviour in behavioursToTurnOn)
            {
                behaviour.enabled = true;
            }
        }
    }
https://answers.unity.com/questions/505010/disabling-components-c.html
This is an article that briefly explains the use of a group of classes I created that represent a typical set of playing cards. I wanted to create a game of cribbage, and began by looking for open source code that would have the basic playing card classes built for me. Surprisingly, I didn't find any that suited my needs. So, I decided the first things I would need to create for my game of cribbage were the playing cards. These included a card class, a card array (a collection of cards), a card comparer (for sorting the card array in different ways), a standard 52 card deck, and shoe classes. I originally wrote these classes in C#, but translated them to VB using the free online tool. I hope you find this useful.

The project file actually has the solution file as well as three projects in it. There is one project for the C# card classes, one for VB.NET card classes, and the third is a simple C# console app that is designed to show you the uses of the classes. The C# and VB.NET classes are identical in functionality, and only differ in the language they were written in.

One of the most common mistakes I saw when looking at how others approached the card classes is to not use enumeration when representing the card properties. By using strongly typed data, the code will be easier to read and less prone to usage errors. Another shortcoming I saw in other similar projects is the lack of methods for sorting the cards. There are many different ways to order a set of cards, such as face, value, suit, ascending, descending, etc. I decided to create a static class that uses the IComparer interface to make sorting easier. The actual implementation of the sort algorithm is hidden in a private nested class, as seen below:

    public static class CardComparer
    {
        public static IComparer SortFaceAscending()
        {
            return (IComparer)new SortFaceAscendingHelper();
        }

        ....

        private class SortFaceAscendingHelper : IComparer
        {
            int IComparer.Compare(object a, object b)
            {
                if (a is Card && b is Card)
                {
                    Card cardA = (Card)a;
                    Card cardB = (Card)b;
                    if (cardA.Face > cardB.Face)
                    {
                        return 1;
                    }
                    else if (cardA.Face < cardB.Face)
                    {
                        return -1;
                    }
                    else
                    {
                        return 0;
                    }
                }
                else
                {
                    throw new ArgumentException("Object is not of type Card.");
                }
            }
        }
    }

I built this code as a foundation for a cribbage game that I am currently working on. It could be used or easily modified to play any card game that you want. The biggest difference between the various card games is the actual value of each card, which is why I made the face and suit properties separate from the actual value of the card. This could be improved by basing the value of the card on the current game being played, perhaps through a static class passed to the card's constructor.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
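The comparer pattern above is language-neutral: a comparison function returning -1, 0 or 1, hidden behind a small factory. A rough Python analogue of sorting cards by face with such a comparer (the Card stand-in here is invented for the sketch, not the article's class):

```python
from functools import cmp_to_key

# Minimal stand-in for the article's Card class: face 2..14, suit as a string.
class Card:
    def __init__(self, face, suit):
        self.face, self.suit = face, suit

def sort_face_ascending(a, b):
    """Mirror of SortFaceAscendingHelper.Compare: -1, 0 or 1 by face."""
    return (a.face > b.face) - (a.face < b.face)

hand = [Card(13, 'Spades'), Card(2, 'Hearts'), Card(7, 'Clubs')]
hand.sort(key=cmp_to_key(sort_face_ascending))
print([c.face for c in hand])  # [2, 7, 13]
```

Swapping in a different comparison function (by suit, by game-specific value, descending) changes the ordering without touching the Card class, which is exactly the separation the article's static comparer class is after.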
https://www.codeproject.com/Articles/28958/Playing-Cards?msg=4605880
12 April 2013 18:00 [Source: ICIS news]

HOUSTON (ICIS)--Here is Friday's midday Americas markets summary:

CRUDE: May WTI: $91.10/bbl, down $2.41; May Brent: $101.98/bbl, down $2.29. NYMEX WTI crude futures sold down sharply in response to released data showing weak retail sales and downbeat consumer sentiment, validating recent forecasts for a decline in energy demand growth. WTI bottomed out at $90.27/bbl before rebounding.

RBOB: May: $2.7895/gal, down 4.15 cents/gal. Reformulated blendstock for oxygen blending (RBOB) gasoline futures prices fell during morning trading, extending Thursday's loss and tracking lower crude futures.

NATURAL GAS: May: $4.220/MMBtu, up 8.1 cents. The front month on the NYMEX natural gas market started Friday's session up for the third consecutive day, boosted by the strong short-term demand outlook.

ETHANE: lower at 28.75 cents/gal. Ethane spot prices were 0.25 cents/gal lower in a quiet market.

AROMATICS: Benzene down at $4.41-4.48/gal. Prompt April benzene spot prices were discussed lower this morning, sources said. The morning range was down from $4.45-4.49/gal FOB (free on board) the previous day.

OLEFINS: ethylene offered higher at 64 cents/lb, PGP wider at 57-59 cents/lb. April ethylene offers moved to 64.00 cents/lb, higher than a deal done late on Thursday at 60.25 cents/lb, on continued cracker outages.
http://www.icis.com/Articles/2013/04/12/9658775/noon-snapshot-americas-markets-summary.html
New Pages Listed below are the most recently created pages on wikiHow. Select the drop-down menu below to view new pages from a specific namespace, or enter a user name (not real name) to view pages recently created by that individual. Showing below up to 50 results starting with #1. View (previous 50) (next 50) (20 | 50 | 100 | 250 | 500). - 21:35, 4 July 2009 Be a Crazy Person (hist) [1,255 bytes] PunkyKid (Talk | contribs) (beginning) - 21:34, 4 July 2009 Draw Manga Wings (hist) [3,305 bytes] Smartkitty314 (Talk | contribs) (New page: Your Finished WingsA manga character is always cool, but why not add that little extra something to your manga hero or animal companion by adding som...) - 20:44, 4 July 2009 Get Your Website/Webshow Well Known (hist) [2,350 bytes] 24.184.241.96 (Talk | contribs) (New page: Do you have a website/webshow that doesn't have as many viewers as you wish you had? Do you want your website/webshow to be known? This article can help. == Steps == # Get yourself sign...) - 20:42, 4 July 2009 Use Mcafee Siteadvisor in Safari (hist) [1,436 bytes] 75.110.131.56 (Talk | contribs) (New page: SiteAdvisor is a McAfee toolbar for Internet Explorer and Firefox. It allows you to distinguish between safe and unsafe sites on the net. Unfortunately, there is no toolbar for Safari. Usi...) - 20:09, 4 July 2009 Get Gum out of Your Hair the Old Fashioned Way (hist) [843 bytes] 96.255.203.37 (Talk | contribs) (New page: Gum in your hair stinks! It looks bad and is very hard to get out, unless you follow these simple and old-fashioned directions. :) == Steps == # Keep the rest of your hair away from the p...) - 19:54, 4 July 2009 Make a Key Necklace (hist) [870 bytes] OMIGAWD (Talk | contribs) (categorization) - 19:47, 4 July 2009 Start a Club for Younger Kids (hist) [1,341 bytes] 80.2.16.184 (Talk | contribs) (I hope you have enjoyed reading this article and that it has been helpful and encouraging for you to start your own Club.) 
- 19:26, 4 July 2009 Make Lip Scrub (hist) [1,986 bytes] 75.73.225.186 (Talk | contribs) (New page: Using a homemade lip scrub once a week espesially in the winter time is very important. Your lips have the thinnest skin on your whole body and should not get chapped or dry. Many people...) - 19:13, 4 July 2009 Kill Fleas and Ticks in Your Home (hist) [3,950 bytes] Cat Camille (Talk | contribs) (New page: How to kill fleas and ticks in your home. This is an informative article on how to kill fleas and how to get rid of fleas and ticks in your home. Start killing fleas and ticks in your home...) - 18:36, 4 July 2009 Use a Search Engine to Search a Site (hist) [991 bytes] Ttrimm (Talk | contribs) (categorization, weaving the web of links) - 17:27, 4 July 2009 Survive on a Gluten Free Casien Free Diet (hist) [327 bytes] MasterChickDanceChick (Talk | contribs) (New page: Oh Great. Dr. Julie Buckley has decided to place you on a GF/CF diet. Well you do not have to worry, this article will come to the rescue! == Steps == # Learn which foods have said things...) - 16:59, 4 July 2009 Make Addie's "Get over Your Heartbreak!!!" Food (hist) [822 bytes] AddieHoney (Talk | contribs) (New page: When you are down, and just broke up with your boyfriend, or just got fired from your job, make this and pop in a movie and cry. :) It's one of the ultimate comforting foods for depressing...) - 16:26, 4 July 2009 Be Scary/Freaky on Clubpenguin (hist) [2,383 bytes] 75.45.182.50 (Talk | contribs) (New page: You've seen ghosts,vampires ,werewolfs, and many other scary things on club penguin,and you've wanted to be like them?Well now you can! == Steps == #Put on clothes that make you look "myst...) - 16:18, 4 July 2009 Backup a Joomla Web Site (hist) [1,675 bytes] MichaelScottMcGinn (Talk | contribs) (New page: Have you ever needed to backup your Joomla site and move it to a different web host or to save an off line copy of your Joomla web and mysql database files? 
There are a few ways to do this...) - 15:50, 4 July 2009 Write a Song for Children (hist) [1,089 bytes] 68.9.233.196 (Talk | contribs) (New page: Do you love to write songs for kids and want a catchy, fun song to write? This is for you! == Steps == #Come up with a catchy song title that kids can relate to. (ex. missing the bus, eati...) - 14:16, 4 July 2009 Use Youtube Uploader (hist) [2,384 bytes] IngeborgK (Talk | contribs) (New page: ==Steps== #'''Download and install AVS YouTube Uploader'''<br> ===Create a YouTube account (if you don't have one already)=== #'''Open AVS YouTube Uploader.''' #Click this button [[Image:A...) - 14:13, 4 July 2009 Make a Pet Rock Have Babies (hist) [714 bytes] Daviddummy (Talk | contribs) (New page: Once you have had your pet rock for a good while (eg:5 days to a week) here's how to make it have babies/pebbles == Steps == #There comes a time when your pet rock has to have babies/pebbl...) - 13:21, 4 July 2009 Make Your Life Exactly Like Bella Swan's (hist) [2,981 bytes] Xkawaiixhawtx (Talk | contribs) (New page: This article will tell you how to make your life exactly like Bella Swan's from Twilight. == Steps == # Change your name to Isabella Marie Swan and get people to call you Bella- for short...) - 12:13, 4 July 2009 Make Your Parents Treat You Like a Star (hist) [624 bytes] Team blackstar (Talk | contribs) (Its an ok how to but its not the good for one I mean people should try things by them selfs. Reamber live your life =]) - 10:29, 4 July 2009 Make American Flags That Are Safe for Children (hist) [721 bytes] 80.187.104.232 (Talk | contribs) (New page: The American flag is a wonderful thing to look at. If you are an American and loved North America, this is the craft for you! It is also safe for children. == Steps == # Draw the American...) 
- 10:13, 4 July 2009 Develop a Naming Scheme (hist) [3,115 bytes] GTAddict (Talk | contribs) (New page: Too many files and folders!Ever had to deal with a large number of files or folders, but got drowned in them? Here's how to name them so you'll find them quick...) - 10:13, 4 July 2009 Make Fireworks That Are Safe for Children (hist) [908 bytes] 80.187.104.232 (Talk | contribs) (New page: These fireworks are great to make, and fun to stick in your food on a picnic on the 4th of July. They are safe for children, so easy that children can make them. You may make as many as yo...) - 09:57, 4 July 2009 Make a Rock Band Drum Set Cake (hist) [6,569 bytes] NathanJ1979 (Talk | contribs) (New page: So, you have a friend or relative with a birthday coming up who is a wizard at the drums on Rock Band, and want to give them a cake they will never forget? With a bit of careful planning ...) - 08:52, 4 July 2009 Download Music and Video in Firefox With Net Video Hunter (hist) [1,895 bytes] IngeborgK (Talk | contribs) (adding video) - 08:43, 4 July 2009 Be a Master at Manhunt (hist) [1,713 bytes] 24.24.81.124 (Talk | contribs) (New page: manhunt is a game of stealth when played at night, follow these steps and you'll win, I beat 18 year olds and me being 15. This is also useful in hunting, airsoft and paintball. == Steps =...) - 07:36, 4 July 2009 Prepare for Exams (hist) [93 bytes] 117.193.4.75 (Talk | contribs) (New page: night before the exam, on the day of exam, ultimate tools for the exams == Steps == #THE PREPARATION BEFORE THE EXAM : #NIGHT BEFORE THE EXAM : #Keep a few pens of different shapes, thick...) 
- 07:14, 4 July 2009 Earn High Commissions Online (hist) [1,442 bytes] 72.237.51.46 (Talk | contribs) (home based buisness, work at home jobs, earn a living online, freedom, cash, money making, real jobs) - 06:01, 4 July 2009 Make Hamster Treats (hist) [1,856 bytes] Gracierogers (Talk | contribs) (New page: If you don't feel like buying treats for your hamster and have a minute or to to spare, then you could have fun making a treat. This has the advantages of being free and only containing th...) - 05:31, 4 July 2009 Have a Secret Relationship at Camp (hist) [3,732 bytes] AddieHoney (Talk | contribs) (New page: 80% of camps in the US DO NOT allow relationships! It is one of the sad, yet true rules that most kids are supposed to obey... but who said that we ever actually listen to those dumb rules...) - 05:01, 4 July 2009 Be Like Izzy and Annie (hist) [729 bytes] 99.50.245.214 (Talk | contribs) (New page: izziy and annie are twins in resses witherspoons, leaggley blonds! ther are some steps to become as faboulas as them! == Steps == # think pink! izzy and annies fav color is pink! go for ea...) - 04:37, 4 July 2009 Repair Your Lace Wigs (hist) [1,976 bytes] Xantuslacewigs.com (Talk | contribs) (New page: DON'T THROW AWAY THAT OLD WIG. Get a [ Lace Wig Repair] and make it look like new again. Your wig may become worn. Perhaps, the hair and la...) - 04:33, 4 July 2009 Draw a Do Job Man (hist) [1,564 bytes] Smartkitty314 (Talk | contribs) (New page: A cute completed do-Job man!Bored and need to doodle? A do-job man is a fun and cute little piece of notebook art which will take your mind off of any jo...) - 03:42, 4 July 2009 Build a Desktop Computer (hist) [810 bytes] 69.141.141.232 (Talk | contribs) (New page: like being a model is all about pratice not just about how your body is well it si but there are more things you need to do to be a model == Steps == # 1. If you want to be a model you ha...) 
- 03:31, 4 July 2009 Make a Lace Front Wig (hist) [1,842 bytes] Xantuslacewigs.com (Talk | contribs) (New page: Learning [ How to Make Lace Front Wigs] is a valuable skill. That will give you unmeasurable joy, and a means to create extra dollars. It's like ...) - 02:59, 4 July 2009 Shoot for Depth of Field (hist) [3,261 bytes] Ttrimm (Talk | contribs) (categorization and work in progress) - 02:33, 4 July 2009 How Get a Emo and Goth Girl 2 Go out on a Date Wit U (hist) [460 bytes] Mr.Love Goru (Talk | contribs) (New page: Hi single teens Do u like a Goth or Emo boy or Girl u come 2 the right place == Steps == #The important thing 2 keep in mind is no goth or Emo like 2 b bothered or teased by their look or ...) - 02:31, 4 July 2009 Make a Flash Bang Grenade (hist) [1,432 bytes] 70.211.125.203 (Talk | contribs) (New page: Have you ever wanted to make a quick get away or you love paintball and you want to get the drop on your opponent?If you said yes to any of these suggestions than this is the article for y...) - 01:29, 4 July 2009 Make a Paper Scroll (hist) [2,123 bytes] Smartkitty314 (Talk | contribs) (categorization) - 00:46, 4 July 2009 Press a Button (hist) [1,337 bytes] Kimmer1234 (Talk | contribs) (categorization) - 22:43, 3 July 2009 Work out a Song Meaning (hist) [2,016 bytes] Halftimelord (Talk | contribs) (New page: Ever listened to a song and wondered what the meaning behind it was? Finding it hard to find the real meaning? Well this article should help you! == Steps == # LISTEN TO THE SONG! By just...) - 22:22, 3 July 2009 Find out if California Cat Food Is the Right Cat Food for Cats (hist) [3,172 bytes] Cat Camille (Talk | contribs) (New page: Is california Cat Food the right cat food for your cat? What is in California Cat Food for Cats? Is california natural Cat Food for cats all natural cat food or is it chemical mixed? Every...) 
- 21:41, 3 July 2009 Make the Most out of Your Flickr Account (hist) [1,822 bytes] Ttrimm (Talk | contribs) (weaving the web of links) - 21:40, 3 July 2009 Promote Tourism in Limassol Cyprus (hist) [646 bytes] 213.7.8.250 (Talk | contribs) (New page: Ideas about how we can enhance the tourism package in Limassol cyprus.What in your opinion sombody which visit Limassol in Cyprus is looking for. == Steps == # Limassol is a beautifull sma...) - 21:14, 3 July 2009 Make a Unique Bedroom(Girl) (hist) [677 bytes] 76.235.73.121 (Talk | contribs) (New page: You want a unique bedroom say your moving and don't want that old bedroom and you want something new like the other girls. If you do this is the right guide for you. == Steps == #1.You hav...) - 20:48, 3 July 2009 Pray to a Saint (hist) [2,055 bytes] Hurricaneseye (Talk | contribs) (New page: Have you ever been fascinated by the saints, historical Christian figures that are believed to have lived exceptionally holy lives? Have you ever wondered if they play a role in your life?...) - 20:07, 3 July 2009 Enjoy Camp Chrysalis, Mendo Session (hist) [4,328 bytes] Ochixay (Talk | contribs) (New page: Going to Mendo this summer? Don't trip. It's going to be awesome! == Steps == # Pack right. Generally the camp packing list on Camp Chrysalis' website is good enough, but keep in mind tha...) - 19:56, 3 July 2009 Sit in a Chair Properly (hist) [1,494 bytes] Emilyxoxo16 (Talk | contribs) (New page: If you don't have good posture while sitting, you could get many back problems, including arthritis, sore back muscles and many other symptoms. If you notice that you have bad posture whil...) - 19:56, 3 July 2009 Teach a Miniature Horse to 'pose (hist) [1,415 bytes] Hayleylee (Talk | contribs) (miniature horses) - 19:34, 3 July 2009 Make a Bottle Bomb (hist) [358 bytes] Baileymoore (Talk | contribs) (New page: Bottle bombs are very cool. == Steps == # 1. Fill up a rolled up piece of newspaper with baking soda. # 2. 
Take a water bottle filled up with vinegar and put the newspaper in.# 3. Shake i...) - 19:12, 3 July 2009 Make a House for Your Stuffed Animals (hist) [622 bytes] Bigkittyc (Talk | contribs) (New page: if you have alot of stuffed animals, you dont want them to just sit in a corner! you build and your stuffed animals canh have fun buliding them a house! == Steps == # start by getting a bi...) View (previous 50) (next 50) (20 | 50 | 100 | 250 | 500).
http://www.wikihow.com/Special:Newpages
the current waveform, and in the case of capacitive loading it forces the current waveform to lead the voltage waveform. You can see the waveforms of an inductive load: a phase shift of 30 degrees is present in the current waveform. The power factor is simply the cosine of the angle by which the current lags the voltage. In other words, the current lags the voltage by some angle, and taking the cosine of that angle gives the power factor. Now, how do we get that lagging angle? That is the only problem left. If we can somehow measure the time difference between the two waveforms, we can find the required angle using the formula below:

Angle (degrees) = t × f × 360

where t is the measured time difference and frequency (f) is the frequency of the system, which may be 50 or 60 Hz.

Zero Cross Detection: Zero cross detection is a method that lets us measure the time between the voltage and current waveforms. In this technique we get a high value (i.e. 1) whenever the waveform crosses zero. There are many ways to implement it. But remember, this technique is the heart of this project, so the implementation must be accurate. In this project we implemented zero crossing using the LM358, an 8-pin IC containing dual op-amps. For zero crossing, we need a "high" value whenever a waveform crosses zero. To get that value we use each amplifier as a comparator, which compares its input against the reference on the non-inverting pin and switches accordingly. We will use a 16x2 LCD to show our results, and an ATmega8 or ATmega16 can be used for the project. In the simulation, take the upper sine generator as the output of the Potential Transformer (PT) and the lower sine generator as the output of the Current Transformer (CT). The reason for using a CT and PT is that we cannot feed high voltage into the LM358 — it would burn the IC badly. So first step down the voltage and the current to the point that the highest peak of either is no more than 5 V. If you have no idea how CTs and PTs are used in real systems, see the links below.
1. AC Voltage Measurement Using Atmel AVR Microcontroller
2. AC Current Measurement Using Atmel AVR Microcontroller

Now, coming back to the zero crossing method: you can see that we set a reference of zero volts at the non-inverting pin (+) of both amplifiers. So, by comparator action, each amplifier gives a high value (1) at its output whenever its waveform crosses zero. The outputs of the comparators are shown below in the figure, in which yellow is the output for voltage and blue is the output for current, lagging slightly.

Implementation: I implemented a voltage divider after the PT because the output of the PT is 6 V, which has a peak of about 1.4 × 6 = 8.4 V — harmful to the IC. So I placed a voltage divider to cut the peak down to 4.2 V. The resistor in front of the CT is the burden resistor, which is essential for a CT: never leave the secondary of a CT open-circuit.

Components Needed: Click on the component name to buy the product from reliable sources: - ATMEGA8A-PU - Blue screen 16x2 Character LCD Display Module - 1-phase transformers 220V/6V (1 A or less) or 110V/6V (1 A or less) - 30A Toroidal Core Current Transformer - USB to TTL converter UART module CH340 - Resistor pack 1/4W 1% - Printed Circuit Board - Female Single Row Pin Headers - Male Single Row Pin Headers

Coding part: The code is written in CodeVision. You can download the whole code from the link below; if you want code for Atmel Studio, email us. Here we will discuss the two main functions of the code.

//_________ function to get the time difference _____________
void pf_func(){
    while(1){
        if ( PINC.4==1 ){        // zero crossing of voltage
            TCNT1=0;
            TCCR1B = 0x01;       // Start Timer1 at Fcpu/1
            break;
        } else {
            continue;
        }
    }
    while(1){
        if ( PINC.3 == 1 ){      // zero crossing of current
            TCCR1B = 0x00;       // Stop Timer1
            g=TCNT1;
            break;
        } else {
            continue;
        }
    }
}

In this function, I start Timer1 of the microcontroller when the zero crossing of the voltage occurs at PINC.4, and stop Timer1 when the zero crossing of the current occurs at PINC.3.
The variable g then holds the final timer count for the time difference.

//________ function to calculate the power factor ______
int powerfactor(){
    k=0;
    // To complete number of counts
    g=g+1;                       // Value from the timer
    // To convert into seconds
    pf=(float)g/1000000;
    // To convert into radians
    pf=pf*50*360*(3.14/180);
    // power factor
    pf = cos(pf);
    // power factor into percentage
    k=abs(ceil(pf*100));
    return k;
}

In this function, I just convert the delay into seconds and then convert those seconds into an angle using the formula I mentioned above.

Warning: Implement this project only if you know how to handle electricity, and take precautions. Don't use any component without understanding it completely.

76 thoughts to “Power factor measurement using ATmega8/16”

Thank you so much sir!.... Sir, I need the full code with extension .pde or .ino so that I can open this source file in the Arduino IDE directly, or tell me some way to convert the C file to .pde. Please mail me at 15103122-011@uog.edu.pk

This one is working good: #include <LiquidCrystal.h> LiquidCrystal lcd(12, 11, 5, 4, 3, 2); int pin = 13; float rads = 57.29577951; // 1 radian = approx 57 deg. float degree = 360; float frequency = 50; float nano = 1 * pow (10,-6); // Multiplication factor to convert nano seconds into seconds // Define floats to contain calculations float pf; float angle; float pf_max = 0; float angle_max = 0; int ctr; void setup() { pinMode(pin, INPUT); Serial.begin(9600); lcd.begin(16, 2); } void loop() { for (ctr = 0; ctr angle_max) // Test if the angle is maximum angle { angle_max = angle; // If maximum record in variable "angle_max" pf_max = cos(angle_max / rads); // Calc PF from "angle_max" } } if (angle_max > 360) // If the calculation is higher than 360 do following... { angle_max = 0; // assign the 0 to "angle_max" pf_max = 1; // Assign the Unity PF to "pf_max" } if (angle_max == 0) // If the calculation is higher than 360 do following...
{ angle_max = 0; // assign the 0 to "angle_max" pf_max = 1; // Assign the Unity PF to "pf_max" } Serial.print(angle_max, 2); // Print the result Serial.print(","); Serial.println(pf_max, 2); lcd.clear(); lcd.setCursor(0,0); lcd.print("PF="); lcd.setCursor(4,0); lcd.print(pf_max); lcd.print(" "); lcd.setCursor(0,1); lcd.print("Ph-Shift="); lcd.setCursor(10,1); lcd.print(angle_max); lcd.print(" "); //delay(500); angle = 0; // Reset variables for next test angle_max = 0; } Just paste this code to arduino IDE S.H.Rony can you share your final code with me plz...it will be a great help Hi sir, I am getting trouble in simulation on Proteus, i have made the program in Atmel studio by the same logic you used above, but its not working. kindly help me out please. Code is below. #define F_CPU 1000000UL #include #include #include #include #include "MRLCD.h" unsigned char buf[10]; unsigned int k=0,x=0,g=0; float pf; uint16_t t; void loop() { while(1) { if(PINA0 == 1) { TCNT1 = 0; TCCR1B = 0x01;//start timer// break; } else { continue; } while(1) { if(PINA1 == 1) { TCCR1B = 0x00; g=TCNT1; break; } else { continue; } } } } int powerfactor() { k=0; g=g+1; // to complete number of counts pf = (float)g/1000000; // convert into seconds pf = pf*50*360*(3.1428/180); pf = cos(pf); k = abs(ceil(pf*100)); return k; } int main(void) { DDRA = 0; LCD_Init(); LCD_String_xy(0,0); LCD_String("POWER FACTOR"); while (1) { loop(); x=powerfactor(); LCD_String_xy(1,0); itoa(x,buf,10); LCD_String(buf); } } "The above program is display everything except The POWER FACTOR". Kindly help me its my final year project. Please help me sir. Send me the code on my mail. i do not have mote knowledge about programming. ashutoshpradhan1011@gmail.com can i display current ,voltage and power? then how can i add code for same circuit.. sir I have having problem in measuring power factor ,..I am using Arduino uno... kindly help us ..as it is very urgent.. I implemented the zcd as shown in your simulation. 
The output of zcd-implemented on hardware-is square wave with 50% duty cycle. The LCD does not show power factor value. But "Power Factor" is displayed. Please help. Thanks so much.. your code can help me to my skription.. This my phone whatsapp.. 🙂 +685604174884 Calibrate it, use for input signal generator Visual Analyser with knowed phase shift as a input. hello. excuse me sir. if we use transformer than the transformer will shift the phase, am i right ? so how we could measure appropiate power factor of our system ? sir i opened your proteus file....i compiled also...it is showing only 87% as power factor if change waveforms in cro... how can i vary power factor ...plz tell me Change the Phase degree by double clicking the SINE wave input probe. how can we get the voltage and current individually?? Study the article below: Thank you for such project. Can you please explain how to use this code to implement it on arduino mega 2560? Hello How to convert your code for stm32f103 Regards Ted When I compile in ATmel STUDIO 7 these errors appear: 1- recipe for target'main.o' failed 2- mega8.h:No surch file or directory What I do? Use CodeVisionAVR compiler. Hi ISMAIL i instilled "atmel studio" but it gives an error "reciep for target 'main.o' failed" while building the same programe... what i have to do now. please help me any one... thanks in advance hello sir! while building this code on ''MicroC PRO for AVR'' it gives error in #include and #include , i think these file are not present in its source files... i need your help, please give me suggestion , The code is written on Atmel Studio. Use Atmel Studio to compile the code. sir!!! can i have power factor correction code for this pic18 Sir as I extracted above file there are 3 file .dbk .dsn and .pwi so which file is associated with proteus schematic please help me pf.dsn file. Open it using Proteus 8. Can anyone say how to open that proteus file in proteus software please Use Proteus 8 to open file. 
In above senario most probably it will work according But what for capacitive load, in which current leads and there's 1 on pin 3 first so timer will give wrong durations What do u think?? Yes you are right. But the issue is capacitive load is not a real world thing. It's just an ideal assumption for calculations. You cannot achieve capacitive load while doing work on practical things. Hi Ismail, i used your code on16f877a pic controller but it dosent gets out of loop function, Any idea what might be the problem?? Any kind of help will be greatly appreciated Thanks in Advance!! Most probably problem lie in circuit. First use SPDT switches in places of comparators. hi sir. please help me i need atmel studio code for power factor meter. please consider my request. i tried many times but i am not able to make it. so please send the code on this email address majidmanzoor07@gmail.com sir i have burned the hex file given in the folder but it is not working,. what should i do now ? please help me . kindly mail me the atmel studio code of this project that can be burn and can be implement. Hi Sir. i want to implement this circuit using atmel studio, so can you please mail me the atmel studio code of this circuit? its very urgent please help me. my email address is below: majidmanzoor07@gmail.com Your required code is present right below the article. Sir, I have used some of the pins of the arduino for other measuring purposes. how do i know which pin is to be connected for powerfactor? can you please provide the code that we usually use in arduino IDE. Please. No I can't provide you that. Instead you should study some basics of Atmel AVR microcontrollers and how to control thier pins. Hi,, Please do help me. Its very urgent. I would like to find the powerfactor as a part of my project. Circuits are available. but I couldnt make a program for the same. I am using aTmega 328 p. And the code given above is entirely different language than I use. 
please do provide me a program and circuit for measuring power factor. My main project is based on Demand Response .. email id- pranavs405@gmail.com .. Thanks.. Same code can be used for ATmega328 as well. Just use codeVisionAVR software. i used dtostr in order to convert the float pf value into string, but its giving an error "undefined reference to dtostr" Use ftoa function Sir. I read this given article but in this article only explain about the relay. i want programming of ATMEGA8 for conditions if pf less than 0.8 then PD0 (pin no.-2) goes high. how to apply this logic in ATMEGA8 programming ? " If pf<0.8 then PD0 (pin no.-2) goes HIGH else PD0 goes LOW " Cause I don't know how to apply pf<0.8 and how to enable PD0 (pin no.-2) in ATMEGA8. Thank you. Study this article to understand I/O pins of AVR and how to control them. Secondly to get power factor less than 0.8, apply some inductive load or use some low wattage LED bulbs. Sir. Which instructions can i use if pf is less than 0.7 then pin-0 of port-D goes high and connect the capacitor through the relay for improving pf. Any examples ? Study this article to understand relay functioning. Thank you so much sir. It is worked power factor indicate between 0 to 1. It took some minor changes. Just this three lines are removed and it is worked. lcd_print("-"); k=abs(ceil(pf*100)); lcd_print("%"); k=abs(ceil(pf*100)); This code is used for converting pf into percentage right. If i remove this line then simulation not work. How to get pf in between 0 to 1 ?? Check your mail Sir. Is it required in this programme to convert pf into percentage ?? Because i put this line ftoa(x,2,buf); Instead itoa but power factor indicates 88.00 instead it must be like 0.88 Sry. Sir but how to use foat function. Can you give me any examples of this function ?? ftoa(x,2,buf); Just put this line instead itoa function. Thank u Ismail sir. For your quick reply. Can u send me the code for measuring Power factor between 0 to 1. 
And if the Power factor is bellow 0.8 then i want any output pin of ATMEGA goes High for improving the power factor by connecting capacitor. Because i don't know about the programming. Please help me sir. Send me the code on my mail. kotadiya46@gmail.com Ask me anything. I will help you throughout your project but don't force me to complete your project all by myself. Hello, sir I want the power factor in between 0 to 1 not in percentage. What can i do ?? What changes can i make in this programme ??? pf = cos(pf); Take pf from this line and through ftoa function convert pf to character type and then display it on LCD. sir.. good day to you... i have a question pertaining to the connection of PT and CT in between the load and source, if i substitute all of the PT and CT circuit with a voltage sensor and current sensor and fed it to the arduino will it work ? why the increment 'g=g+1'? In timer counter formula of Atmel AVR, you will see a '-1'. It is the reason, I am adding +1 to complete the number of counts. Sir can you give a suggestion for the CT ? like a brand link or Amazon link ? Because i didnt find the "1:1500" ct at the stores..I mean i didt get actually,which CT is i have to use for this project? Use this link to get the CT. This CT is rather cheap and gives really accurate results as well. Does it required external crystal for practical implementation..? No it doesn't. can i have power factor code for atmel studio plz? For Atmega 8 You can change this code for atmel studio. A little bit hard work will be required which will help you further in understanding the things as well. I tried. but time difference mesurement is not correct. it can not come out from the loop. so I need help. Can you assist me with the code in Atmel studio please IDE please, I'm having trouble with the transformation. At which part you are having trouble? 
Sorry I meant the code in arduino IDE, I am getting stuck with the function to get time difference please Put different serial commands in loops and try to figure out at which loop and at which state of comparators you are having issue. I have done it correctly. yes, I am successful. Happy to hear that 🙂
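Since several commenters above ask about porting the calculation to other platforms, here is a hedged, platform-neutral sketch of just the power-factor arithmetic from the article's powerfactor() function. The constant names are mine; it assumes the same 1 MHz timer clock (one count = 1 µs) and a 50 Hz system, as in the original CodeVision code.

```python
import math

TIMER_HZ = 1_000_000   # timer counts per second (Fcpu/1 at 1 MHz)
MAINS_HZ = 50          # system frequency

def power_factor_percent(counts):
    """Power factor (as a percentage) from the timer count measured
    between the voltage and current zero crossings."""
    t = (counts + 1) / TIMER_HZ        # +1 completes the count, as in the article
    angle_deg = t * MAINS_HZ * 360     # time difference -> phase angle in degrees
    pf = math.cos(math.radians(angle_deg))
    return abs(math.ceil(pf * 100))    # shown on the LCD as a percentage
```

For example, a 30° lag at 50 Hz corresponds to about 1667 µs between zero crossings, which this function turns into a power factor of roughly 87%.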
http://engineerexperiences.com/power-factor-measurement.html
David works for a company that writes applications that care about when things happen. Needing a bit more time to work on other things in-house, they outsourced some module development to China. In his own words, "They helped us to develop some modules, and my boss asked me to review one of them. It seemed to be working fine, so I decided to look into the source code. I shouldn't have done it."

function weekday($year,$month,$day)
{
    $corrected_year=$year;
    if(($month<3))
        $corrected_year--;
    $leap_years=(intval($corrected_year/4));
    switch($month)
    {
        default:
        case 1:  $month_year_day=0;   break;
        case 2:  $month_year_day=31;  break;
        case 3:  $month_year_day=59;  break;
        case 4:  $month_year_day=90;  break;
        case 5:  $month_year_day=120; break;
        case 6:  $month_year_day=151; break;
        case 7:  $month_year_day=181; break;
        case 8:  $month_year_day=212; break;
        case 9:  $month_year_day=243; break;
        case 10: $month_year_day=273; break;
        case 11: $month_year_day=304; break;
        case 12: $month_year_day=334; break;
    }
    return (intval((intval((-473+365*($year-1970)+$leap_years-
        intval($leap_years/25)+((intval($leap_years % 25)<0) ? 1 : 0)+
        intval((intval($leap_years/25))/4)+$month_year_day+
        $day-1) % 7)+7) % 7));
}

After much contemplation, David thinks this code is supposed to return the day of the week for a specific date, ignoring the strange leap year calculations. However, he wonders why the developer didn't just build on mktime(0,0,0,$month,$day,$year). Now David has less time: he found the code, so he gets to rewrite the module.
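The contractor's formula can be checked against a standard library. Below is a hedged Python port (function names are mine); it reproduces the PHP for positive years — where the negative-modulus fixups in the original never fire — and compares it with datetime, which shows the function computes the day of the week with Sunday = 0.

```python
from datetime import date

# Day-of-year offsets for the start of each month in a non-leap year,
# indexed by month number (index 0 unused), as in the PHP switch.
MONTH_OFFSETS = [None, 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334]

def contractor_weekday(year, month, day):
    """Port of the outsourced PHP weekday(): Sunday = 0."""
    corrected_year = year - 1 if month < 3 else year
    leap_years = corrected_year // 4
    # Days since the epoch anchor, with a rough Gregorian century correction.
    days = (-473 + 365 * (year - 1970) + leap_years
            - leap_years // 25 + (leap_years // 25) // 4
            + MONTH_OFFSETS[month] + day - 1)
    return days % 7

def stdlib_weekday(year, month, day):
    # date.weekday() is Monday = 0; shift to Sunday = 0 to match.
    return (date(year, month, day).weekday() + 1) % 7
```

For 1970-01-01 (a Thursday) both return 4, and they agree on leap days and on the non-leap century year 2100, so the strange-looking leap year terms are actually a working Gregorian approximation.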
http://thedailywtf.com/Articles/Making_Time_for_UNIX.aspx
30 November 2010 09:40 [Source: ICIS news] HO CHI MINH (ICIS)--Demand for polymers in Vietnam is expected to grow by 16% annually over the next five years. "Consumption of polyethylene (PE), polypropylene (PP), polyvinyl chloride (PVC) and polystyrene (PS) is expected to record robust growth on the back of strong packaging and construction growth," said Dao Duy Kha, deputy general director of Vinaplast, a leading PVC maker in Vietnam. The household products and industrial plastics segments would also contribute significantly to the polymer demand growth, he said. "Demand for PE and PP is forecast to grow from 1.08m tonnes/year and 1.05m tonnes/year in 2010 to 2.4m tonnes/year and 2.2m tonnes/year respectively in 2015," Kha said. PVC and PS demand was expected to increase to 1.2m tonnes/year and 500,000 tonnes/year from 550,000 tonnes/year and 200,000 tonnes/year respectively over the same period, he added.
http://www.icis.com/Articles/2010/11/30/9414964/vietnam-polymer-demand-to-grow-16-annually-in-next-five-years.html
Enable Sphinx to generate HTML5 valid files

Note: A Japanese version of this document is also available, on the site or in the doc folder of this package.

Introduction

As of June 1, 2015, Sphinx 1.3.1 cannot generate localized headings in index pages and glossary directives. This package opens the gate to making Sphinx internationalized. This means a companion package is required to finish the job; I will make one named 'Gosyu' — please find it on PyPI. This package does not depend on Sphinx, so you can use it with other text-processing products.

Requirements

Tested with the 32-bit version of Python 2.7.9 and the 64-bit version of 3.4.3, both on Microsoft Windows 8.1 Pro 64-bit edition, but other versions and other OSs should also be usable.

How to install

You can install this package as you would any other. Open a console and do pip install sortorder. On MS-Windows: <python_installed_path>\Scripts\pip.exe install sortorder. Or, when you have a zip archive like sortorder-2.0.6.zip (where '2.0.6' is the version number), change the current directory to the folder that has the zip file and do pip install sortorder-2.0.6.zip. On MS-Windows: <python_installed_path>\Scripts\pip.exe install sortorder-2.0.6.zip. Or — this way is Sphinx-specific — you can use this package just extracted into any folder you want; conf.py then enables you to use it from your themes and extensions (for example by adding that folder to sys.path).

How to use (with Sphinx)

To learn how to use a sort order you already have, see yogosyu or gosyu. If you want to use one of the preset sort orders this package ships, just set language = 'xx'; it causes sortorder.xx to be loaded automatically. Currently, presets are available for ja, eo, el and ru. To learn how to make your own sort order, see sortorder/__init__.py. In short:

- determine the filename of the new module and create it; if you name it sort_order_xx.py, it will be loaded automatically when language = 'xx' is declared.
- write import sortorder.
- make a class that inherits from sortorder.SortOrderBase.
- override get_string_to_sortand get_group_name. - make get_default_sort_orderreturns the instance of the class. - make setup. see any of sortorder.xx included this package. How to use (General) If you have this module not installed by pip, you should first do sys.path.insert(0, '<the_folder_you_copied_the_extension_file>'). Second, if you want directly use ja.py (Japanese), eo.py (Esperanto), el.py (Greek) or ru.py (Russian), Just do import sortorder.xxx where xxx is language code like ja, eo, etc. Otherwise, you should make the your sort-order module as you want. You should define the new class which inherits sortorder.SortOrderBase. The filename of the module should prefixed sort_order_, like sort_order_xx.py. get_default_sort_order and setup methods are only used by the Sphinx document generator. After you make sort_order_xx.py or you have it someone gives, add the path of the .py file to sys.path like above. Next import the module: sys.path.insert(0, '<the_folder_you_copied_the_extension_file>') # (snip...) import sort_order_xx # may automatically import sortorder.__init__.py But sortorder.__init__ has the method get_sort_order. You can add your code some automatic feature like used with the Sphinx, by defining get_default_sort_order method in your module. History 2.0.6(2015-07-04): Fix document(this file) for PyPI. 2.0.5(2015-07-04): 2013-12-07: Add Python 3 support. 2013-12-06: Updated to meet Sphinx 1.2. 2011-06-28: Russian and Greek versions added. 2011-05-24: First release. Included in yogosyu extension. Japanese and Esperanto versions included. Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/sortorder/
<eparsons> Looks like webex will not start until 08:00 GMT so I will set up a hangout in the meantime... Use
<eparsons> Might get webex sooner if phila can change it...
<mlefranc> hello, no webex until 9 ?
<eparsons> Webex in 5 mins please wait - phila is here now
<eparsons> Webex now live 111
<eparsons> please move back over...
<eparsons> I will close the hangout...
Topic: SOSA-SSN Integration
<eparsons> Kerry: ok to work through options with everyone
mlefranc: to speak to rob's remark, … to say we have to write a rule to retrieve the right ontology … this applies to every single option on the table … all of them need 303(?) rules due to slash uris … i don't see a difference
roba: the diff is (and I have built them), with the second option, that is an ontology per namespace, you only have to substitute a slash for a hash … and you can do this for every term in one rewrite rule
<tidoust> Remaining Options for SOSA-SSN Integration
roba: but for this method, unlike others, you have to write a rule for every term … so if someone wanted another ontology that builds on it they have to put new rules in your infrastructure … so less scalable
roba: because you need to negotiate a new rule for each new tool … whereas extensions all in their own namespace collapse
mlefranc: in the group we have closed little sets so it's easy to write the rules.. … but having one single namespace puts the burden on the ... looking for those terms … I wanted to highlight we have 2 options … there are some questions that can be separated … e.g. option 8 relies on content neg..
… we could also write an option 1 that relies on content negotiation … so we could do this to [missed] but we should keep clear that conneg is not an intrinsic part of the picture
<Zakim> phila, you wanted to talk about infrastructure
mlefranc: i don't think it is a good thing to return different parts of the doc
<mlefranc> speak up please
<mlefranc> (can't hear)
phila: any other examples where content neg is [missed] … if u want w3c to have one set of rules [cannot hear] … if you ..feels wrong is not web architecture.. … one uri giving multiple resources breaks a fundamental architecture principle
roba: option 1 or option 8 do not need content negotiation … fact that... fundamental distinction. Option 1 and 5 you need to do tricks … I think this is interesting for this problem -- 8 options when not much guidance … obviously not a trivial problem
mlefranc: we all agree that we do not want content neg then … let's get rid of this
<phila> phila: Web fundamental - different resources have different URIs
<phila> phila: If resources differ by anything other than serialisation, they're different resources and should have a different URI
<Zakim> roba, you wanted to respond to Kerry's question about which options rely on content neg..
roba: the limitation of option 1 is there is no way to find axioms for sosa
<joshlieberman> It shouldn't be necessary to go to any resource that imports a particular term other than the originally defining resource.
roba: but could use content negotiation to do that … option 5 also has the same navigation... ssn or whatever imports sosa and uses content negotiation. but option 8 does not use content negotiation.
<phila> +1 to Rob ... just uses owl imports
joshlieberman: not sure that content neg is a problem
<mlefranc> * kerry you type, I can't hear
[cannot hear] ... expect to resolve a term where it is originally defined ...anyone can import an ontology and extend it
mlefranc: not a problem eg.
for platform ..can use metadata to say that ssn extends sosa
+1 ...would like to mod option 5 to get rid of content negotiation ...so there is a clear difference between them!
+1 yes please agree to that change
<phila> phila: Didn't say conneg is bad. Just that it is limited in its meaning. It gives you different representations of the one resource identified by the URI
... kerry [missed]
kerry: [missed]
<joshlieberman> It seems the difference between 5 and 8 now is the role of the SSN ontology (add axioms versus add terms). Can we combine that?
roba: challenge is that one of the functional requirements is the inability to navigate between a term and its stronger axiomatisation … my problem is that we cannot talk about the ability to discover axioms is not about pros and cons … option 1 and option 5 both rely on conneg to provide traversal to full axioms … option 1 the nav path is ... [not sure]... if you resolve the uri you get the sosa term … if u can provide evidence otherwise we need to assess that. … you cannot resolve the uris without using conneg … option 5 has a standard web pattern and you could use conneg to get the stronger axiomatisation
option 8 is a response to the identification of that challenge ... it occurs to me that imports and refactorisation of axioms makes it work ....there may be a counter arg ...we can't take it out ...but if we can simplify it!
joshlieberman: so could serve full ssn via content neg ... if you intend to get a diff ontology then you should ask for it … option 5 and option 8 have different scope for ssn ontology … we should say ssn should import sosa, add new axioms and new terms … if you want and only know of the deeper sosa axiomatisation you should know that and go to the stronger ontology and import it.
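[Editorial aside] To make the owl:imports navigation being debated above concrete, here is a hedged sketch of the imports closure an OWL-aware tool computes. The imports graph is a stand-in Python dictionary rather than real web retrieval, and the wiring shown (only SSN importing SOSA) is one reading of option 5; under option 8, SOSA itself would also import a stronger axiom module.

```python
# Hedged sketch of the owl:imports navigation discussed above. The
# imports graph below is a stand-in dictionary, not real web retrieval,
# and the wiring is illustrative: here only SSN imports SOSA (option 5),
# so nothing leads from SOSA back to the stronger axioms.
IMPORTS = {
    "http://www.w3.org/ns/ssn/": ["http://www.w3.org/ns/sosa/"],
    "http://www.w3.org/ns/sosa/": [],
}

def imports_closure(start, imports):
    """All ontologies an OWL-aware tool loads, starting from `start`.

    The `seen` set makes the walk safe even if owl:imports ever loops.
    """
    seen, stack = set(), [start]
    while stack:
        uri = stack.pop()
        if uri in seen:
            continue
        seen.add(uri)
        stack.extend(imports.get(uri, []))
    return seen

# Loading SSN pulls in SOSA; loading SOSA alone pulls in nothing extra,
# which is why a SOSA user cannot discover the stronger SSN axioms.
print(sorted(imports_closure("http://www.w3.org/ns/ssn/", IMPORTS)))
```

This also illustrates the asymmetry at issue: the closure from SSN reaches SOSA, but the closure from SOSA never reaches SSN unless conneg or an explicit import provides the link.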
mlefranc: if you want stronger go to ssn; if you want lightweight go to sosa … in this case you need to know where it is defined … so option 1 also does this … back to option 8 [end of sosa] as sosa:imports a stronger ontology
<roba> correct "if you follow OWL entailment" - but if you don't then there is no problem
mlefranc: you have sosa plus all the owl dl in sosa because of the imports, and most users would see this and therefore get the owl dl when they do not want it
<phila> kerry: It sounds as if we have consensus that conneg should not decide which ontology you receive
<phila> eparsons: Sounds like sense but wait until Armin is here
<mlefranc> back at 9 ?
eparsons: [calls a coffee break until 9am]
<phila> [Back at the top of the hour]
<roba> so i don't lose the thought - the proposition is that people editing in Protege will be confused by getting more via imports. Surely this is explicitly not the SOSA use case - the SOSA design goal is to be lightweight and usable outside of tools that do more complex entailment. My proposition is to allow complex tools to do the right thing by explicitly including the relevant statements (owl:imports) but assume simple cases can work with the resources they find.
<roba> furthermore the decision that "SSN extends _and_ adds axioms" was a design goal - we now know it has implications that we cannot discover axioms from sosa automatically, and a refactor would allow us to do so using standard behaviours.
eparsons: Explains that we've had a preliminary discussion about SSN options, now on to coverages
billroberts: Main business to get through if we can is around the CoverageJSON … Then a progress report on the other 2 docs (eo-qb and qb4st)
billroberts: On CoverageJSON - we discussed this in London … essence was whether the spec ... which is still evolving... should be part of the doc being produced here or referenced from it.
… Conclusion was that it would live elsewhere and our doc would discuss its content and it may be standardised by OGC later … So there was some restructuring to do. … Jon apologises for not being able to join the call today. He's done most of the work in the last few weeks
<eparsons> Doc is here
<eparsons> billroberts: Minor things to do, tidying up references etc. … Need to add section to cover relationship with SDW-BP … I now think it's a good candidate for FPWD … Take feedback and then do one more version before the end of the WG
<billroberts> [Pause for reading]
eparsons: What are the expectations wrt future OGC process and time frame?
joshlieberman: There's an OGC Community standard which is accepted by a community but it's not formalised. OGC can adopt but not change
AndreaPerego: If you want to modify it, you'd need to go through the OGC process?
joshlieberman: If the community agrees
billroberts: The status of the spec is that it's close to being final but has not yet been widely applied … Jon, as instigator, wants to encourage use beyond his own group.
billroberts: Jon might aim for a Community Standard
joshlieberman: What we've done here in this WG is simultaneous publication. There's been work done to line up different doc types
joshlieberman: Another evolution that might happen with a Community Standard - in the process of getting something to work, it's not 100% compatible with other OGC standards … Do we leave it as it is or do some reconciliation work? … e.g. CIS model cf. CoverageJSON
joshlieberman: KML was a case in point. It was widely adopted. Then there was some pain to align with Simple Features
<Zakim> AndreaPerego, you wanted to ask whether consolidating it via a W3C CG could be an option prior to contributing to OGC
AndreaPerego: You said that it's not widely implemented? Does that make it difficult to be a Community Standard at OGC? … Maybe a W3C CG might be a good vehicle?
billroberts: Maybe there's a route to continue in W3C/OGC collaboration … This WG is about to finish
<eparsons> phila: work this week with OGC staff to agree a new joint working group
<joshlieberman> So there may be 3 options: 1) further community development followed by community standard adoption 2) incorporation into an OGC standards development process (SWG) and 3) further OGC-W3C work item
<eparsons> phila: Jon could publish this as a note - but he wants to follow a non-W3C route, phila could not convince him...
<joshlieberman> OGC document types in order of process weight: Best Practice / Community Standard / Implementation Standard
<eparsons> phila: Not sure we should publish this without advice from W3C staff
eparsons: What would be the status of this doc if we were to publish it now?
tidoust: It would be a FPWD
eparsons: Of what
tidoust: Of itself
tidoust: The doc will say that it's going to be a Note, not a Rec
billroberts: I think that's well established
billroberts: It's not going to be a Rec, we know that … So it's a Note or nothing
<joshlieberman> There is some difference in OGC between a discussion paper (interesting work) and a best practice (give it a try, first step of standards track).
joshlieberman: There is a discussion paper, and a best practice which is more heading towards a standard
eparsons: Does it need normative content to be OK? … Can it just point to other docs?
tidoust: What is the intention? No need to add normative content
tidoust: It could be formulated as a Note about coverages on the Web, with strong ties to CoverageJSON … and what could be standardised in the future.
billroberts: I understand what you're asking but I don't think it would be useful at this stage. … a broader doc about coverages on the Web is not what this is for. … Notes can be very lightweight … The group has been discussing this. Others can look at it and see what it's about.
… It's a review of an existing format
phila: Retracting a little - the WG can publish the doc if it so chooses. I'm registering my own disappointment that Jon is unwilling to publish the spec itself as an OGC/W3C doc (not a formal standard)
billroberts: Would publishing this preclude changing the doc to include the full spec in future?
Linda: I want to support that. We can as a group say that we think it's better to include the spec if Jon can be persuaded
Linda: We could wait and then publish but I don't mind publishing what we have now.
tidoust: Putting the CoverageJSON in this doc does not mean that we will end up with a standard in future … It can be at a group level
… tidoust: If the WG feels it's important to standardise CoverageJSON, that can be future work.
eparsons: I agree with Linda that we can publish this now and then raise the issue of potentially adding in the full spec … So we publish as is, and then raise the issue and see what Jon has to say after that.
billroberts: The essence of the doc wouldn't change substantially. At the moment it has links to sections - we'd essentially bring the text in.
eparsons: The JWOC might provide the environment for future work.
AndreaPerego: The section around the RDF/JSON-LD implementation. The namespace is covjson.org - how stable is that? Another option is w3 space?
<tidoust> Phil: I am working with the management and systems team to have a simple feature that allows people to publish stuff to w3.org/ns, e.g. from GitHub through one button. I hope to have that soon.
billroberts: covjson.org doesn't have the longevity of w3.org of course. A move in future might be possible
joshlieberman: I think we've decided that if something is in an OGC BP doc, then it can have an OGC namespace
billroberts: When you start the work, you want something you can manage.
<eparsons> PROPOSED: That the editors' current draft of the coverageJSON doc at be published by W3C and OGC as First Public Working Draft.
<billroberts> +1
<eparsons> +1
<AndreaPerego> +1
<Linda> +1
<kerry> +1
<joshlieberman> +1
<RaulGarciaCastro> +1
<ahaller2_> +1
Resolved: That the editors' current draft of the coverageJSON doc at be published by W3C and OGC as First Public Working Draft.
<eparsons> Congratulations and Thanks to the Editors !!!
billroberts: Thanks everyone. I'll have those discussions with Jon.
<tidoust> [+ note the issue about the future of CoverageJSON near the top of the document before publication]
billroberts: Should we try and schedule a session later today that Jon could join?
eparsons: We can mull it over and have that discussion offline
eparsons: It's not a controversy, we want the best home for the excellent work that Jon and co have done.
billroberts: Plan is to have one more iteration each … So there should be new docs in the coming couple of months. … We had discussion of that on our last call.
sam_toyer: At the moment, eo-qb has only a handful of issues. Need to add references … Also some feedback we need to incorporate, but they're minor. Missing a short section on implementation and notes for developers
phila: What is your actual time line?
sam_toyer: I can do some this evening. Some is dependent on others. University time line shouldn't present a problem. One student's availability might pose a prob until April
<eparsons> Armin ack next
kerry: I agree with what Sam said. We were hoping to have a new WD by now, but there's not much change since last time so we're not proposing a new publication today?
billroberts: That's right. … One more publication before the end of the WG.
billroberts: QB4ST - status is similar. If anything it's even closer to being finished. Rob has done all the work on this. … On the call we had recently, the main discussion was around adding examples and there has been some input from Jon on that in the last couple of days as some of the CovJSON egs are good for this
roba: I've done some edits - minor editorial edits from Kerry etc.
There were some open issues, mostly wish lists for richer examples
roba: I've clarified a number of pieces around the potential mechanism around hierarchies ... I've simply put in some suggestions … I've suggested that we can use skos:broader to preserve the hierarchies
roba: No substantive changes to the content or the vocab … Call to Bill - is it better to go as is? I have no further changes to make. … I don't have the bandwidth to go into deep detail on the CoverageJSON examples
billroberts: My feeling was that the doc is pretty complete. But more examples would be helpful. It can survive without but it would be better with them. … It's not details of CovJSON, it's finding a couple of realistic coverages where the terms in your ontology, the temporal dimensions etc, could be demonstrated
roba: I should be able to find some time to work on those. … Early feedback suggests it's not far from being complete
billroberts: Maybe I should have a go at the examples
roba: We can do a screen share session to work it out.
<Zakim> phila, you wanted to talk about the UN
billroberts: That gives me a chance to be an external tester
<eparsons> phila: reports back from UN meeting of stats division UNGGIM, Geo etc
<eparsons> phila: stats guys pleased with QB4ST !!!
billroberts: On the hierarchies... it makes sense to me to be out of scope for that doc. From a stats POV, I've been working with commercial clients and EU projects … Talk of reviving a W3C CG on semantic statistics which might be a place to take this forward
roba: Then I think what we have is pretty much done
billroberts: We could publish what we have now then? … The doc is certainly ready to be published. What do people think?
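[Editorial aside] roba's skos:broader suggestion above can be illustrated with a small, self-contained sketch. It is hedged: plain Python tuples stand in for RDF triples, the example geography is invented, and the hierarchy is assumed acyclic.

```python
# Hypothetical illustration of preserving a hierarchy with skos:broader
# links, as suggested for QB4ST above. The triples below are plain
# Python tuples and the geography is invented for this example.
BROADER = "skos:broader"

triples = [
    ("ex:Bristol", BROADER, "ex:SouthWestEngland"),
    ("ex:SouthWestEngland", BROADER, "ex:England"),
    ("ex:England", BROADER, "ex:UnitedKingdom"),
]

def broader_chain(term, triples):
    """Walk skos:broader links upward from `term` to the top concept.

    Assumes the hierarchy is acyclic and each term has at most one
    broader concept, which is the usual shape for a roll-up hierarchy.
    """
    links = {s: o for s, p, o in triples if p == BROADER}
    chain = [term]
    while chain[-1] in links:
        chain.append(links[chain[-1]])
    return chain

print(broader_chain("ex:Bristol", triples))
```

A consumer of the cube could use such a chain to roll observations up the spatial hierarchy without any bespoke hierarchy vocabulary, which is the appeal of reusing skos:broader here.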
eparsons: Don't keep it secret
billroberts: The ED is also available
ahaller2_: Talked about SemStats CG and its revival
PROPOSED: That the current working draft of QB4ST at be published as a (non-final) WD
<eparsons> +1
<billroberts> +1
<Linda> +1
<kerry> +1
<AndreaPerego> +1
<joshlieberman> +1
<roba> +1
<ahaller2_> +1
Resolved: That the current working draft of QB4ST at be published as a (non-final) WD
<RaulGarciaCastro> +1
eparsons: Thanks Bill
[5 minute break]
eparsons: Can Kerry fill Armin in on the earlier discussion about options for namespaces
kerry: There was some discussion about conneg and how that did and didn't relate to the options on the table
kerry: I think there was consensus around not depending on conneg and that might help us move forward … Rob was raising issues about what happens when you resolve a URI in the ontology
roba: I'll clarify. It's not what happens when you resolve, it's how you get to the stronger axioms … I think we're happy on what happens when you resolve the URIs … The issue raised was that different options have different expectations for user agent behaviour … Option 2, without conneg, you have to know in advance whether you want to load SSN or SOSA … Option 3, non-OWL aware tools won't follow owl:imports
<kerry>
<ahaller2_> can we try to use Option 1, Option 5 and Option 8, just to not confuse anyone
mlefranc: I really think that the distinction between options 1 and 5 doesn't exist.
Meaning when you look up a term, because we don't use conneg to serve different resources … we need to decide to which ontology each term redirects
<ahaller2_> all options need 303 redirects
mlefranc: It's the same in all options … There's no complexity added by option 1 … For each term, you need to know to which ontology it redirects
roba: It's not whether the redirect gets the stronger axioms
mlefranc: But this has not been decided … We don't know how SOSA will point to SSN
<ahaller2_> Option 1: If you care about that every term has the same namespace, but don’t mind that the stronger axiomatisation of a term may not be directly accessible in a linked data fashion you will like Option 1.
ahaller2_: I want to bring people back to yesterday... there are a couple of people who were here yesterday... the main attributes of each option I posted earlier
<ahaller2_> Mean vote result: 0
<ahaller2_> Option 1: If you care about that every term has the same namespace, but don’t mind that the stronger axiomatisation of a term may not be directly accessible in a linked data fashion you will like Option 1.
<mlefranc> (same for the other options)
<ahaller2_> Mean vote result: 0
<ahaller2_> Option 5: If you care about the same reuse mechanism of terms as in Option 1, but don’t mind to have two namespaces and that the stronger axiomatisation of a term may not be directly accessible in a linked data fashion you will like Option 5.
<ahaller2_> Mean vote result: 0.11
<ahaller2_> Option 8: If you like Option 5 and you want to be able to access the stronger axiomatisation of a term in a linked data fashion, but don’t mind that SOSA imports an expressive OWL ontology, you will like Option 8.
<roba> yes - those are accurately described - well done
<ahaller2_> Mean vote result: 0.11
ahaller2_: Option 5 had no -ve votes but 8 did
mlefranc: I'd like the opinions of people who were not here yesterday or this morning
RaulGarciaCastro: In my case, I'm in favour of option 1.
That gives the user a unified view of the ontology … For me, option 8 works, but explaining that they have to use one which imports another etc. seems unnecessary
<kerry> +1 to agree entirely with Raul
<ahaller2_> +1 with Raul that importing owl-dl axioms is confusing for web developers who just want schema.org style
DanhLePhuoc: I also support Raul. The unified view. In the context of schema.org, people are looking at SOSA and SSN, they're interested in integrating this. And the WoT WG … They don't want to be confused and don't care about OWL … They see everything as a data schema
kerry: The 3 options on the table, to summarise and to forget conneg. Option 5 is what everyone has ever seen before. … They're different ontologies.
<roba> Option 1 does work but it means abandoning the decision to separate a simple SOSA core
kerry: We have the clear goal to modularise the ontology. We have a simple form of the ontology and then we have the more complex version with more terms
<ahaller2_> @roba in Option 1 we are not abandoning a simple SOSA core
kerry: This strikes me as a sensible modularity option.
kerry: Option 8 adds an owl:imports into the simple core. My big problem is that it's artificial. I also don't think it's even possible. … What we have is some sort of factorisation of what we have: SOSA core, then SOSA axioms, then SSN … That's a long way from what we have now
<ahaller2_> +1 to kerry
kerry: I think option 8 would be very difficult to do.
<roba> I will make the observation that currently SOSA-OWL-DL would be empty :-)
<roba> so not difficult at all
mlefranc: I agree that option 5 works with slash and hash based URIs … Option 1 has one namespace that works only with slash-based URIs … What Simon says is that people's expectation is that one namespace gives one ontology, not multiple ontologies. … I agree with Kerry that it would be very difficult to split things. … So we end up with something like an option between 5 and 8.
… We assume that non-OWL tools will not understand and implement owl:imports … Non-OWL tools will be confused and think that SOSA does not exist … I think this is dangerous. The imports may lead to loops.
roba: Lots of issues here that might collapse into each other. … Is this possible or not? Most of the constraints are about subclasses … Currently SOSA-OWL-DL is unpopulated … OWL explicitly specifies that the closure of import loops is not to be treated as a problem. So we could base our decision on tools that ignore the OWL spec … So it comes down to whether you want one namespace or two. … If you want to have one namespace then you're abandoning the idea of SOSA as a pull-out simplification. … Other options are OK but are inconsistent with what we've already agreed. Option 1 does away with SOSA core
ahaller2_: +1 to Danh and Raul … People who expect a schema.org-like simple ontology will get a bunch of axioms they won't understand or expect with option 5 … What Kerry and Maxime said - it would be a challenge to separate the axioms envisaged in option 8
<roba> @Raul - in what way exactly
<RaulGarciaCastro> @roba I will explain
ahaller2_: We are running out of time and option 8 seems to be the one that causes most problems so I'd like to reduce it to a choice between 1 and 5 … Option 1 still has 2 files
joshlieberman: I've spent enough time railing against namespaces and prefixes.
The difficulty of managing ontologies on the Web - we want to do the most expected … One ontology for one prefix
<roba> i disagree about the time commitment - 1 will require major refactoring of sosa + ssn currently in separate namespaces, 8 has 0 extra effort - just means SOSA axioms go in a particular file
joshlieberman: The SOSA vocab is most likely to be most useful if there is no connection to SSN … You don't want unexpected axioms
<mlefranc> @roba, I commit to do the refactoring
<Linda> +1 to josh
<ahaller2_> @roba Option 8 needs careful consideration of every axiom that spans SOSA and SSN terms
joshlieberman: I think I'm supporting 5, but that puts you in an environment...
<Zakim> phila, you wanted to talk about where the modularisation happens
<roba> @ahaller - no - if it spans SSN it goes in SSN - very simple
<ahaller2_> @roba, no it can't if the term is introduced in SOSA
<roba> SSN terms, by def, are not in SOSA.. they are in SSN
<tidoust> Phil: What harm is done by a tool that doesn't understand some triples? SSN was too complicated, so modularization was in order. You can do that in documentation. The harm, as Josh said, is that people won't understand the complex stuff if it gets returned with the simple things.
<Zakim> RaulGarciaCastro, you wanted to mention that another "Major con" for option 8 is that it does not conform to the standard expectation of how ontologies are implemented
<tidoust> ... The idea of having redirections based on terms, I strongly disagree with.
RaulGarciaCastro: Option 1 doesn't conform to standard expectations, but nor does option 8
<ahaller2_> @eparsons what about another straw poll?
<eparsons> ahaller2_ Yes I think we are nearly there...
RaulGarciaCastro: @@@ Missed point, sorry@@@
<RaulGarciaCastro> We are proposing to define a set of axioms in an ontology and then import those axioms in another ontology that is the one that is declaring all the ontology terms.
<RaulGarciaCastro> This is something that I haven't ever seen. And not the expected behaviour.
<RaulGarciaCastro> And we have the risk of people starting to copy that way of doing things in the ontologies that they implement!
mlefranc: I understand that one namespace implies one ontology … LOV assumes this
mlefranc: We can show creativity in this. We're working with Ghislain on breaking the 1 - 1 relationship between namespaces and ontologies … LOV will support one prefix for multiple ontologies
<mlefranc> I will contribute to LOV to do so, talked with Ghislain about this early Jan
<ahaller2_> Option 1:
<mlefranc> +1
<roba> 0
<kerry> +1
<ahaller2_> 0 -1
<RaulGarciaCastro> +1
<Linda> -1
<joshlieberman> -1
<eparsons> _1
<eparsons> -1
<AndreaPerego> -1
<mlefranc> note Simon's vote for option 1 is: -1
<mlefranc> and danh just got disconnected ?
<ahaller2_> jano said 0 -1
<mlefranc> Jano said: -1 to 0. in case this is the chosen option, then +1 for unify = sosa
<ahaller2_> danh: +1
<ahaller2_> Option 5:
<ahaller2_> 0
<mlefranc> 0 -1
<Linda> +1
<roba> 0
<kerry> 0
<eparsons> 0
<RaulGarciaCastro> 0
<mlefranc> note Simon's vote for option 5 is: 0
<AndreaPerego> 0 (my -1 based on what's written in the wiki)
<ahaller2_> danh: 0
<roba> is it possible to tease out phil's issue here?
<ahaller2_> Option 8:
<kerry> -1
<ahaller2_> -1
<RaulGarciaCastro> -1
phila: My issue is the conneg one
<mlefranc> -1
<eparsons> -1
<Linda> -1
<roba> 0 0
<AndreaPerego> -1
<mlefranc> @phila, we agreed that conneg would be removed from option 5
<mlefranc> note Simon's vote for option 5 is: +1
<joshlieberman> +1 to 5, -1 to 8
<roba> yep - cannot get to axioms - that's all
Then I don't see how option 5 works, mlefranc
<DanhLePhuoc> -1 for 8
<mlefranc> @phila, what do you mean "work" ?
it works like skos and skos-xl for instance
<roba> yes option 1 and 5 cannot get axioms, 1 works if we abandon SOSA core and have one ontology
<Zakim> kerry, you wanted to suggest we drop option 8 off and reconsider 1 and 5 -- they are very similar!
phila: If we get rid of weird conneg behaviour, then I'm probably OK with 5
<kerry> yes josh!
joshlieberman: It is true that without some sort of negotiation, there is no path from SOSA to the SSN extras - which is as it should be
ahaller2_: +1 to joshlieberman
<roba> yes josh - and that's a design decision we need to make with our eyes open
<ahaller2_> just removed the content negotiation sentence in Option 5
<ahaller2_> Option 1:
<ahaller2_> 0
<mlefranc> +1
<RaulGarciaCastro> +1
<Linda> -1 -1
<eparsons> -1
<roba> -1
<DanhLePhuoc> +1
<kerry> +1
<AndreaPerego> -1
<joshlieberman> -1
<mlefranc> Simon: -1
<ahaller2_> jano: -1 to 0
<ahaller2_> Option 5:
<ahaller2_> 0
<mlefranc> +1 +1
<eparsons> +1
<Linda> +1
<joshlieberman> +1
<RaulGarciaCastro> 0
<roba> +1
<DanhLePhuoc> 0
<mlefranc> oups --> 0
<AndreaPerego> +1
<kerry> 0 but really sorry to miss the chance to do much better
kerry: I'm curious why people who have yet to speak are concerned about option 1?
<mlefranc> (difficult to hear you linda)
Linda: I'm concerned that option 1 will confuse people. There are potential users of SOSA who want something simple. … It's clear, it's simple … It provides a simple core that people can use with its own namespace. People don't need to know about the more complex one
AndreaPerego: +1 to Linda. There needs to be a clear separation between the two vocabs. Linking the two would lead to confusion
<mlefranc> +1 for what kerry is just saying
kerry: To respond to that.
If you're a SOSA user, and the SOSA file is separate, you'd never know about the extension
<roba> agree - but if you ask for unify:system you suddenly get a rich OWL ontology which looks nothing like unify:Platform
kerry: But in option 1, the complex knows about the simpler
kerry: The difference is how the extended relates to the simple
joshlieberman: This is another example of not the mechanism, it's what people do. … If someone looks for terms with the SOSA prefix on the Web, they'll get a load of different terms … As a technical matter, you are right, Kerry. But add humans and 5 wins (scribe paraphrase)
<ahaller2_> PROPOSED: Use the implementation in Option 5 for the integration of SOSA and SSN
+1
<eparsons> +1
<mlefranc> 0
<Linda> +1
<RaulGarciaCastro> 0
<joshlieberman> +1
<kerry> 0
<ahaller2_> 0
<AndreaPerego> +1
<DanhLePhuoc> 0
<roba> +1
Resolved: Use the implementation in Option 5 for the integration of SOSA and SSN
<kerry> [applause!]
joshlieberman: Maybe we can propose another resolution that we agree that the options for modularisation are unsatisfactory
<kerry> like that idea!
joshlieberman: Something to feel better about
ahaller2_: We can make that point in the doc … As a direct consequence of not having a unified namespace, there's a long-running issue around the SOSA name
<ahaller2_> "Sensor, Observation, Sample, and Actuator (SOSA) Core Ontology"
ahaller2_: [explaining SOSA name] … I don't know if anyone has a better name, or if at that point in time, we should just continue with SOSA
kerry: My concern is around our explicit attempts to modularize SSN. Recently, I'm really happy that there is a simple core with a clean separation from the more complex vocabulary. … Name matters for the document, for branding stuff. … I think we want to capture that notion of a core and of extension.
<roba> would be nice ..
<RaulGarciaCastro> +1 for consistent naming
kerry: Consistent naming that would make it look not as 2 independent things would be good.
Phil: I agree with you, Kerry. SSN-lite would fine. SSN-core. SSN and SSN-extended would be good as well. I agree that branding is important. The existing names are new enough that they can be replaced without problem. mlefranc: SOSA-core, SOSA-extended, would that work too? kerry: That would work as well. SOSA-core and simply SOSA would be good as well. I don't like SOSA-extended very much. <mlefranc> prefixes: sosa-core: for SOSA Core, and sosa: for SOSA <joshlieberman> This is why the whole namespace mechanism is archaic. <phila> phila: 'XL' might be a suffix for the full version? <mlefranc> @josh, this is why option 1 ;-) <phila> phila: (as in SKOS-XL) ahaller2_: I do understand that there is a wide community for the SSN ontology. It's probably a problem to rename SSN. I would prefer to rename both though. <Zakim> AndreaPerego, you wanted to note that is you rename SSN into SOSA this will be intuitively considered a different ontology ahaller2_: The prefix could be ssn for the name SOSA extended, perhaps. AndreaPerego: If I were a previous user of SSN and saw sosa, my impression would be that it would be different from the ontology I know. For branding, personally I would not change the acronym <joshlieberman> Andrea makes a good point that this is a question of branding, not explicit identity or semantics. mlefranc: On the other hand, it's easy to add a name in the spec to explain why we did not keep the SSN name. It would make it easy for us as well, as we would not have to redirect the old SSN ontology. <ahaller2_> +1 for what maxime said mlefranc: The semantics are changing, the name sosa illustrates the changes that occurred in the ontology. kerry: I agree with Andrea. Keeping the branding of SSN is important. I appreciate Maxime's point as well, though, so won't vote against any of these options. <mlefranc> so two options: option1 is sosa-core/sosa, and option2 is ssn-lite/ssn ? eparsons: I don't think we got agreement right now. 
<ahaller2_> can't hear phila phila: Whatever is called, we can agree that it's called "thing-core", then "thing" for the extended version. <kerry> +1 <mlefranc> +1 <RaulGarciaCastro> +1 <roba> do we want thing and thingxl or thing-core and thing? <RaulGarciaCastro> :) <phila> PROPOSED: That the 'two' ontologies share a common name with either prefix or suffix to distinguish, e.g. 'lite' or 'full' <ahaller2_> 0 <roba> +1 <mlefranc> +1 <RaulGarciaCastro> +1 <phila> +1 <eparsons> +1 <AndreaPerego> +1 <Linda> +1 <kerry> +1 <mlefranc> * sosa-core or ssn-core will be yet another prefix that is cannot be registered in prefix.cc because this tool doesn't allow prefixes with '-' characters ;-) Resolved: That the 'two' ontologies share a common name with either prefix or suffix to distinguish, e.g. 'lite' or 'full' phila: OK, now on to the more contentious part. <phila> PROPOSED: That the common name should be either SSN or SOSA (please state preference) <eparsons> SSN <ahaller2_> SOSA <Linda> I'm neutral on this <mlefranc> SOSA <RaulGarciaCastro> SOSA mlefranc: I think it's not a problem if we use hyphens, it's just that prefixes need to change with some tools. <AndreaPerego> SSN <joshlieberman> No preference <kerry> SSN <roba> SOSA - but not strongly <mlefranc> * @ tidoust, to me it's the implementatino of prefix.cc that should be updated phila: That seems inconclusive <DanhLePhuoc> SOSA <RaulGarciaCastro> Let’s vote for both and look for -1s <roba> jano would be strong on this I think, <ahaller2_> Jano would be SOSA, Simon I don't know <mlefranc> I think Simon would also be in favour of SOSA <mlefranc> he cares about 'sampling' being part of the name :-) <phila> PROPOSED: That the base name of the ontologies is SOSA <mlefranc> +1 <ahaller2_> +1 <RaulGarciaCastro> +1 <Linda> 0 <kerry> 0 <joshlieberman> Since it is a branding issue, maybe we can do a focus group? 
<phila> 0 <AndreaPerego> 0 <eparsons> 0 <DanhLePhuoc> +1 <roba> 0 <ahaller2_> @joshlieberman if we get an extension of the working group, we can do focus groups ;-) <phila> PROPOSED: That the base name of the ontologies is SSN <phila> 0 <Linda> 0 <ahaller2_> 0 <RaulGarciaCastro> +0.5 <kerry> +1 <AndreaPerego> +1 <DanhLePhuoc> 0 <mlefranc> 0 <eparsons> +1 <roba> 0 <RaulGarciaCastro> To advance I can round it to 0 <kerry> yes to that! phila: If the core was called SSN-lite, SSN-core, I think I would probably go for SSN. In other words, the new suffix to SSN handles the political issue that in some people's eyes, SSN was too complicated. <eparsons> +1 to phila phila: But I don't feel strongly enough about that to oppose renaming to SOSA though. <RaulGarciaCastro> +1 to phila eparsons: The value-add is to simplify SSN. My preference would be SSN-simple. <roba> ssn-bigly-tremendous <ahaller2_> maybe we postpone that issue? eparsons: People coming from SSN would find SOSA otherwise, and be confused. kerry: Should we go back and look at the suffix and then go back to the name? eparsons: The actual name is more the issue <kerry> thing-core, thing-lite, thing-full etc <kerry> ...thing-simple eparsons: My vote would be thing and thing-simple, with thing being SSN. <roba> thing-terms\ <ahaller2_> I don't think that people care too much about core, lite, full or simple, but more the name <mlefranc> thing-xs and thing-xl ? <phila> phila: I'd say ssn-lite and ssn (maybe ssn-full) but I am not wedded to these eparsons: We could spend all of our time talking about these branding issues. Linda: In light of what you and Phil just said, I would vote +1 on SSN now. … We are, in this WG, working to make it more simple. mlefranc: I think it's a question that is sensitive enough that we cannot ignore the fact that Simon and Jano are not here. <roba> agree with maxime mlefranc: I would rather postpone to next week. ahaller2_: Agree with Maxime. 
It would be unfair for them to resolve without them. … I would also agree, given that no one has a strong opinion here, to postpone. kerry: I'm ok with that, but we should record the opinion of people here who may not be attending an SSN meeting. eparsons: Right, that's what minutes give us. <phila> PROPOSED: That the SSN Sub Group takes up the naming issue, noting that the F2F participants have a slight preference for SSN <ahaller2_> do it in a general SDW meeting <eparsons> +1 <Linda> +1 <phila> +1 <roba> +1 <DanhLePhuoc> +1 <kerry> +1 <AndreaPerego> +1 <RaulGarciaCastro> +1 (but prefer in a general meeting) <mlefranc> +1 <ahaller2_> +1 (prefer also in the SDW meeting) <kerry> agree -- on a plenary agenda <phila> PROPOSAL: That the SSN Sub Group takes up the naming issue, noting that the F2F participants have a slight preference for SSN, but the decision will be made in a plenary call <eparsons> +1 <RaulGarciaCastro> +1 <Linda> +1 <kerry> +1 <phila> +1 <ahaller2_> +1 eparsons: Good point, Raul, maybe if the SSN sub group can bring the naming issue back to the group, that would be beneficial. <mlefranc> +1 <ahaller2_> next plenary please Resolved: That the SSN Sub Group takes up the naming issue, noting that the F2F Meeting Day 2 participants have a slight preference for SSN, but the decision will be made in a plenary call <Zakim> kerry, you wanted to comment on agenda kerry: We had planned for a vote on OWL-Time. Not sure if we can do that today. Lots of changes made to OWL-Time and I haven't had time to look closely at them. If we don't have Simon or Chris, what should we do? eparsons: Agree we cannot do much in their absence. … My sense would be not to do it, and break earlier if we can. phila: Next, I'll talk about continuation of this work after this group <ahaller2_> phila can you come closer to the mic, please phila: Sending you the document that Scott has written. 
Not discussed at all at W3C so far, because we had hardly signed the MoU (still under review by our membership).
<Zakim> kerry, you wanted to also remind phil about ssn timeline topic that got skipped yesterday due to traffic.
phila: I expect to discuss this document with Denise and Scott some time today. Input from the group would be helpful. Don't make the document public at this stage, though.
Phil: About the timeline, we need to think about whether any of, or both, ontologies can make it to CR by June.
… There's a bunch of options on the table, but we basically have 2 months left.
<ahaller2_> @phila I am still optimistic about CR, there are not too many issues left
Phil: From my point of view, the chance of reaching that level for either of those is small, but I suggest discussing it.
[15mn break]
<phila> The jaws that bite, the claws that catch!
<phila> Beware the Jubjub bird, and shun
<phila> The frumious Bandersnatch!
<roba> oh frabjous day!
<ahaller2_>
<mlefranc> I can give it a try
<ahaller2_> thanks mlefranc
kerry: minCardinality 0 suggests that you can relate this concept to another one using this property
… it's useful for documentation reasons
… although it conveys no semantics at all
… my proposal would be to keep it
RaulGarciaCastro: people that use the ontology might start doing the same in their own ontology
… is it the right way of documenting an ontology?
… my proposal would be to remove these axioms
kerry: because we do not have domain/range, it's good to have some local trace that this property is a property of observation
<ahaller2_> +1 to that this actually should help the user
kerry: to me using these axioms does more good than harm
<ahaller2_> mlefranc: proposes to use that in the comment of the property
mlefranc: I propose to move these meaningless axioms into the comments
… of the Observation class
RaulGarciaCastro: I won't fight for one or the other, at least the decision should be documented
ahaller2_: <summarizes what kerry, mlefranc, RaulGarciaCastro just said>
<ahaller2_> PROPOSED: Remove minCardinality 0 on ssn:qualityOfObservation ssn:observationResultTime ssn:observationSamplingTime on Observation and put them in comments of the properties
roba: do we have evidence of users of SSN that think this is useful?
kerry: the observationResultTime is used <missed the point>.. but I don't recall anyone complaining about them
… it's there so you use it, although it holds no semantics
+1
<Zakim> phila, you wanted to talk about profiles
phila: do you really need cardinality constraints in the spec, or can you move them into another profile?
<Zakim> kerry, you wanted to answer phila
phila: in general, you can relax or make more precise the semantics of ssn, which would define a new application profile
… referring to a new working group
kerry: this specific cardinality constraint is "at least zero", so that acts as:
… a. a faithful representation of the UML model that is represented here
… b. the only way of documenting an expectation of how the property should be used in the ontology
phila: why not put them in the comment?
kerry: <explains historic reasons>
phila: as a developer, I may be frustrated to spend time implementing stuff that is actually useless
<phila> That's another good reason to leave out formal axioms - data bloat that just slows down the system, Danh, yes
<RaulGarciaCastro> agree with Danh
DanhLePhuoc: whatever you put in cardinality constraints, in practice, in 2 EU projects I am involved in, cardinality constraints slow the implementations
roba: is cardinality 0 equivalent to domainIncludes in SOSA ... ?
<ahaller2_> +1 for roba, schema:domainIncludes and schema:rangeIncludes
<ahaller2_> PROPOSED: Remove minCardinality 0 on ssn:qualityOfObservation ssn:observationResultTime ssn:observationSamplingTime on Observation and put them in comments of the properties
<ahaller2_> 0
<kerry> 0
-1
<eparsons> 0
<roba> 0
<AndreaPerego> 0 "+1 if you write: and put them in comments of the Observation"
<DanhLePhuoc> +1
<RaulGarciaCastro> 0 (if also documenting the class)
<phila> +1
<ahaller2_> PROPOSED: Remove minCardinality 0 on ssn:qualityOfObservation ssn:observationResultTime ssn:observationSamplingTime on Observation and put them in comments of the properties and the Observation class (potentially using schema:domainIncludes schema:rangeIncludes)
+1
<ahaller2_> 0
<DanhLePhuoc> +1
<kerry> 0
<RaulGarciaCastro> +1 (-1 to schema properties in SSN)
<phila> 0
<eparsons> 0
<roba> +1
<AndreaPerego> 0
+1
Resolved: Remove minCardinality 0 on ssn:qualityOfObservation ssn:observationResultTime ssn:observationSamplingTime on Observation and put them in comments of the properties and the Observation class
<RaulGarciaCastro> totally agree
<ahaller2_> mlefranc: remove mincardinalities of 1 in all cases
mlefranc: maybe using someValuesFrom instead can help SSN become valid OWL 2 EL
kerry: not wanting to vote on a proposal to change these axioms in SSN regardless of the precise locations
… I strongly believe we can fall back to OWL 2 EL anyway
<ahaller2_> The property ssn:observedProperty must be 1 1 time(s)
ahaller2_: the only occurrence of maxCardinality is on Observation
<ahaller2_> let's have a quick look at where it occurs
<ahaller2_> owl:qualifiedCardinality "1"^^xsd:nonNegativeInteger ;
[ rdf:type owl:Restriction ;
  owl:onProperty sosa:featureOfInterest ;
  owl:qualifiedCardinality "1"^^xsd:nonNegativeInteger ;
  owl:onClass sosa:FeatureOfInterest ] ,
[ rdf:type owl:Restriction ;
  owl:onProperty sosa:isObservedBy ;
ahaller2_: 4 places: sosa:featureOfInterest sosa:isObservedBy ssn:observedProperty ssn:sensingMethodUsed
kerry: that's O&M's UML model to OWL conversion
* (I corrected the minutes)
RaulGarciaCastro: with these strong restrictions we are forcing people to instantiate the Sensing class (for example), which most users do not do
<ahaller2_> +1 to RaulGarciaCastro
roba: open world assumption makes it interesting to use axioms to implicitly say for instance: the class of observations that observed this exact foi, ..
… <missed the rest, sorry>
kerry: it's important to allow for some classes to not be instantiated .. anyway, a reasoner can infer that "some instance" exists
… the max cardinality remains important
<roba> ok - sounds like it's a good thing to leave in then - it's useful semantics
<roba> there is no "feature of interest" class really...
<roba> we can infer that a feature is acting as a feature of interest because it is related to an observation
kerry: can we maybe just say maxCardinality 1 instead of qualifiedCardinality 1?
roba: should we maybe ask the list about the implications of each option?
<Zakim> kerry, you wanted to ask why we want to change it?
<ahaller2_> +1 to mlefranc
<ahaller2_> mlefranc: what if a property is a FeatureOfInterest
mlefranc: what if a property is also a property of a featureOfInterest
ahaller2_: mlefranc will post an email to the list first
<roba> +1
<phila> issue-153?
<trackbot> issue-153 -- reconsider role of device in ssn as a result of the changes to platform required for sosa -- raised
<trackbot>
ahaller2_: role of device class in SSN: sensor can be attached to sensor and is then itself a device.. need to discuss the implications
<kerry> prefer to break but can go on too
<eparsons> Delft will return at 13:45 to talk about what happens next...
<RaulGarciaCastro> prefer to break for lunch
<RaulGarciaCastro> does not seem a "fast" issue
<phila> [Adjourned]
* @armin, I think we could also solve quickly the issue about Measurement and Operating property, that may apply to actuators...talk
<ahaller2_> @mlefranc i had this issue on my list too, yes, but it seems we can't do any issue anymore today. will postpone to next week
Phil: The wording of the document I sent around was written by Scott
… From an OGC perspective, the TC would set up a subcommittee which would be a permanent group. Modelled after the ISO TC211 group.
… From a W3C perspective, this would be an Interest Group (IG).
… The group would not be constituted to create standards.
… It could publish notes, use cases, etc. but no standards.
… The question is: is that valuable? Can we think of things that this group could do right away?
eparsons: My concern would be that, as constituted, it would be a group that discusses issues and identifies gaps where standards could be useful, and can do the incubation. My concern is that these groups often end up being a talking shop.
<roba> +1 ed
eparsons: The danger would be that we just sit down 3 times a year, have a nice talk, and be done with that.
Linda: I also do not want a talking shop. What would happen when we want a standard? Would it be easy to create a group?
phila: A W3C IG can write a charter for a new WG. Yes, it would be easy for that JWOC group to write a charter. It then needs to go through the Membership for review.
… So it does make it easier, but does not change the rules for creating a WG either.
ClemensPortele: The group would not ask for presentations. The scope of the group should be clear.
… This should be the place where people agree about "is it a W3C charter?", "is it an OGC charter?", "is it a joint charter?"
… I think there is value, but we need a clear standing agenda, and not presentations of relevant stuff
roba: The issue would be what you need in JWOC to identify gaps. It's been useful in the SDW WG to discuss and identify technical gaps.
… We've done a great job at this. I'm not 100% convinced that it's going to transition to another group.
billroberts: It sounds like a good idea to me. I can see a useful role for the JWOC. The SDW WG proves it's useful to have joint works. Coordination is good, so setting up some group to make that possible seems a useful thing to have.
kerry: Are we expecting JWOC to incubate ideas or are we expecting others to make proposals?
eparsons: That's a good point.
… The two communities themselves will have ideas.
… JWOC would have a coordination role, deciding what's the best place to develop something.
… But I'm not sure you can justify having it be the incubator.
kerry: So I think there needs to be something more explicit for making decisions, a path for influence.
ClemensPortele: The proposal says that every WG chair in both organizations is part of the group, with voting rights.
… We need this kind of mechanism for Scott to bless a topic as related to spatial data on the web, and to be discussed in JWOC. Similar mechanism at W3C.
… Everyone should have the possibility to raise such an issue.
phila: JWOC is only concerned with either doing work that is joint, or proposing work that is joint.
… When a topic does not need to be done as joint work, it should pass outside of scope for JWOC.
… In terms of voting rights, I'm trying to phrase it in a way that is both open and balanced. Other WGs at W3C that come to mind: Geolocation, Web of Things, etc.
… At each charter renewal, there should be a clear list of things that need to be done.
… The overlap between data on the web, spatial data, etc. is enormous.
eparsons: Would there be a way for JWOC to take on deliverables such as the ones we've been working on, meaning non-normative documents.
… That would scope things down.
phila: I would love to see the Coverage work continue for instance, with the question of whether JWOC could help bring that to a state where standardization could be done.
… Statistical data on the web best practices could be an example.
<roba> QB4ST needs to be integrated with OLAP models - how dimensions are "rolled up" in spatio-temporal contexts
phila: We could identify specific areas that should be put in the charter.
roba: [mentioning QB4ST, what functions exist to convert day month year]
… Spatial use cases that use cube would provide a perfect example of why a joint work could be useful.
eparsons: Playing devil's advocate. If JWOC had existed 10 years ago, you could argue that most of the work done by OGC would have better been done as Web specifications.
… Probably a bigger problem for OGC than W3C. Lots of things are not spatial enough to be a spatial spec.
roba: I've been at OGC for a long time and struggled with that for a long time.
… Spatial use cases have been running up against the architecture
… The nature of spatial data is that it naturally fits within distributed systems. Most of the patterns of distributed systems are spatial by definition.
… I think we'd be a lot further ahead if we had started the work a long time ago.
Linda: What if there is something that comes up that needs standardization and JWOC says "W3C should do it", but OGC members do not contribute?
phila: Then it doesn't happen. We can only do what our members tell us to do and are willing to do.
… If the JWOC were to say, we need to create a WG to work on that, then my instinct tells me that all WG participants will need to be part of both organizations.
… One of the member organizations that was a member of both left, another did not contribute, etc.
… We want to encourage membership, which may make the proposal impractical.
billroberts: Members would choose the cheaper, of course. It's important to keep self-funded contributors engaged as well.
kerry: I think that's true. We have had people looking at options. I wonder if we might think of joint membership fees longer term, perhaps with some responsibility.
<roba> i was thinking along the same lines
phila: We did merge with IDPF recently, but that's not going to work here because we serve different communities.
… Now, there's another mechanism on the W3C side: a Business Group (BG), which you can join for a restricted fee.
ClemensPortele: Nobody joins just for JWOC. No one will join OGC if interested in W3C.
… Membership fees at W3C are a substantive amount of money, making it a difficult case to make. It would be good to have a mechanism for companies to join the works that they are interested in.
phila: The BG fees won't allow you to join a WG
ClemensPortele: making it a bad deal overall.
eparsons: Maybe some elements in this space could be funded by research projects.
ahaller2_: There is an introductory industry membership at W3C for large companies that allows them to join an IG. There's nothing similar for small companies.
… You could imagine a similar mechanism for a couple of years.
phila: If JWOC is free, and it creates a WG, would you agree to pay 2K?
ClemensPortele: Depends on the group
eparsons: If the client is willing to pay. We cannot really tell whether a group is going to be worth a particular amount today.
<phila> Potential work items for JWOC
<phila> - Continue supporting development of CoverageJSON
<phila> - Continue development of QB4ST
<phila> - Continue development of EO-QB
<phila> - Statistical Data on the Web BPs
phila: To run back a bit, the general feedback is that there is value in the JWOC proposal if it has specific goals.
… I listed 4 (see above)
roba: I actually think the nature of EO-QB is more around best practices. I would tend to merge QB4ST and EO-QB as one work item.
… I also think that the SSN stuff is going to be specialization of SSN for specific domains. Classes of sensors which relate to the hot topic of the day.
… Applications of SSN.
ClemensPortele: One question is, would small updates to the best practices that we have today be included? Minor updates to keep the document up-to-date?
<roba> +1 to that
phila: Yes, the JWOC could do that directly.
Linda: We also mentioned the spatial ontology before. To be considered.
… Also GeoDCAT.
phila: That actually raises an issue. If you want an extension to DCAT as opposed to GeoDCAT-AP, then the new WG that I'm currently proposing can take care of that.
… JWOC would be well positioned to influence discussions in both organizations. An OGC SWIG would be well advised to take some advice from JWOC on web-related stuff.
AndreaPerego: There will be a meeting tomorrow of a group looking at DCAT for geospatial data. There is a risk of overlap.
kerry: I used to follow the Research Data Alliance, who decided to make their own standards. I'm wondering whether taking this concept to them to get feedback, and seeing whether they would be willing to bring their standards if JWOC existed, could be useful.
Linda: Another thing around topical sensors. Already mentioned. That seems worth investigating.
<Zakim> phila, you wanted to talk about ANDS, RDA, etc.
roba: [relationship with RDA]
phila: When RDA started, I missed the first plenary and went to the following ones. Trying to improve relationships here. They serve a different community. It does not surprise me that they came up with their own standards.
[discussion about RDA]
eparsons: Summary: there's value in JWOC as a coordination point, and also there is on-going work that this group could take on. These are both useful things.
tidoust: I note the current proposal is not very explicit about JWOC doing incubation or spec work. Could be worth emphasizing.
<phila> - Common spatial ontology
<phila> - GeoDCAT
<phila> - Generic Sensor API
<roba> Applications of SSN?
<roba> and i'd include EO as a use case for QB4ST
<phila> - Common spatial ontology
<phila> - GeoDCAT
<phila> - Generic Sensor API
<phila> - SSN Applications
<phila> - Nieuhaven narrative
<phila> tidoust: Patent commitments would be important
<phila> tidoust: In the Web and TV IG, there are problems with patent commitment
<phila> ...Not in other IGs
<phila> phila: I'd assume we're talking about royalty-free outputs.
<phila> RDA Plenary Barcelona
AndreaPerego: The RDA plenary will be in a couple of weeks in Barcelona. It may be worth doing a presentation in the geospatial group
… Maybe there are other opportunities. Poster, keynote.
roba: I don't know enough about the plenary.
kerry: I think it's a good idea, but who would be going?
roba: Simon might be going.
eparsons: We do need to advertise the best practices. In a future plenary call, we should discuss how we do that.
kerry: The WWW conference is in Australia in a couple of weeks. A few of us will be presenting, including Armin, Byron and myself.
<ahaller2_> Roba: I'll be presenting stuff as well.
eparsons: Thanks, any other points before we conclude?
<RaulGarciaCastro> Bye
<ahaller2_> bye
<roba> Actually Nick Car will be reporting on state of play with ref to some stuff
<kerry> bye!
https://www.w3.org/2017/03/21-sdw-minutes
Compares two specified String objects, ignoring or honoring their case.

Compare the path name to "file" using an ordinal comparison. The correct code to do this is given in the reference documentation for System.String.Compare.

The following example demonstrates comparing strings with and without case sensitivity.

C# Example

using System;

public class StringCompareExample
{
    public static void Main()
    {
        string strA = "A STRING";
        string strB = "a string";
        int first = String.Compare( strA, strB, true );
        int second = String.Compare( strA, strB, false );
        Console.WriteLine( "When 'A STRING' is compared to 'a string' in a case-insensitive manner, the return value is {0}.", first );
        Console.WriteLine( "When 'A STRING' is compared to 'a string' in a case-sensitive manner, the return value is {0}.", second );
    }
}

The output is: When 'A STRING' is compared to 'a string' in a case-insensitive manner, the return value is 0.
http://docs.go-mono.com/monodoc.ashx?link=M%3ASystem.String.Compare(System.String%2CSystem.String%2CSystem.Boolean)
I've been working with this program trying to come up with this output:

I can't get the box or zeros to appear right. The first and last rows of the box seem to get messed up, and when I set the counter equal to the rows (not displayed in code) it won't seem to work. Anyone got a solution? Thanks!

Drawing program
Do you want to start(Y/N)? Y
How many rows/columns(5-21)? 7
0******
*0*****
**0****
***0***
****0**
*****0*
******0
Do you want to continue(Y/N)? Y
How many rows/columns(5-21)? 55
Invalid number. Range 5 - 21. Enter again: 5
0****
*0***
**0**
***0*
****0
Do you want to continue(Y/N)? N

This is the code I've been playing around with. Anyone know what needs to be changed? I'm stumped! lol

Code:
#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    int factorial = 0;
    int held_Num = 0;
    char answer = 'y';
    int i = 0;
    int m = 0;
    char star = '*';

    while (answer == 'y')
    {
        cout << "Please enter a number to work with: " << endl;
        cin >> held_Num;
        while (held_Num < 5 || held_Num > 21)
        {
            cout << "Please enter a number to work with: " << endl;
            cin >> held_Num;
        }
        for (i = 1; i <= held_Num; i++)
        {
            cout << star << endl;
            for (m = 1; m <= held_Num; m++)
            {
                cout << star;
            }
        }
        cout << endl;
        cout << "Would you like to continue? (y/n): " << endl;
        cin >> answer;
    }
    return 0;
    std::cin.get();
}//END OF MAIN
http://cboard.cprogramming.com/cplusplus-programming/94697-quick-question-nested-loops.html
Indeed, there is a GAC in the .NET Compact Framework where you can deploy your own assemblies for shared reuse. It comes preloaded with a subset of the .NET base class libraries, which are CPU- and operating system-independent managed DLLs you can call from your applications through references to reduce the overall amount of code required. These class libraries follow the same hierarchical structure of namespaces as the ones found in the .NET Framework 1.0 and 1.1. I'll come back to .NETcf namespaces very soon.

I mentioned earlier that only Visual Basic .NET and Visual C# .NET were supported in the .NET Compact Framework. This is not entirely true. Since the CLR only runs IL code, any .NET language could potentially be used to create .NET Compact Framework assemblies. The limitation to these two languages actually rests on the SDE, which initially supports only VB .NET and C# from Microsoft. Should another company decide to port their .NET language to .NETcf and create an SDE-style compiler that understands the limitations, the compact CLR would accommodate it just as well. I have not seen any official announcement yet as far as .NETcf third-party languages are concerned.

One interesting difference in the architecture lies in the way error messages are handled. Typically, when an error is raised, an error message accompanies it and the developer can choose to display it or display their own message to the end user. In the .NETcf, Microsoft extracted all the error messages for memory considerations and put them in separate error string files (SYSTEM.SR.dll). There is one such file per supported language, and you can choose whether or not to deploy an error string file along with your mobile application.

.NET Framework: Compact/Desktop Commonalities

As stated earlier, the .NET Framework and the .NET Compact Framework share many commonalities, and many of these have to do with the .NET programming model.

For instance, both benefit from verifiable, type-safe execution of assembly code in a managed environment, thanks to CLR services. This means that (unlike Embedded Visual Basic 3.0) you cannot rely on uninitialized variables or unsafe casts. Nor can you have bad array indexing or bad pointer math, which could corrupt your application. And while it may be compact, the "mobile" CLR is just as complete. The Garbage Collector (GC) eliminates the need for reference counting (à la COM), allocates and deallocates memory, and prevents memory leaks. The "Blue Screen of Death" may not exist in Windows CE, but memory corruption can still occur, and the .NETcf GC safeguards what happens in those tiny application domains.

Just-in-Time (JIT) compilation is also inherited from the desktop cousin. .NETcf strictly runs IL code and nothing else, and with many more CPUs on mobile devices than on the desktop, this design feature brings many more benefits, such as portability of assemblies, facilitated deployment, and more.

The object model is also the same, giving you full access to true OOP on mobile devices. This is a given since .NETcf supports the same IL assembler as .NET, which means you're using the same Visual Basic .NET and Visual C# .NET languages as .NET, and not some stripped-down variant. With this CLS (Common Language Specification) compliance come many code mechanisms and development benefits such as object calling, cross-language inheritance, and source-level debugging across different languages.
25 May 2010 17:21 [Source: ICIS news] MUMBAI (ICIS news)--India's Deepak Fertilisers has posted an 11.4% year-on-year increase in its fourth-quarter net profit to Indian rupees (Rs) 441.6m ($9.4m, €7.6m), on better capacity utilization and improved availability of feedstock, the company said on Tuesday.

Net profit for the 2009-10 financial year rose 15.7% year on year to Rs1.72bn, it said in a statement to the Bombay Stock Exchange. Sales in Deepak's agri-business in fiscal 2009-2010 dropped 23.3% year on year to Rs4.42bn, while sales in the chemicals segment rose marginally to Rs8.52bn from the Rs8.27bn recorded in the 2008-2009 accounting period.

Looking ahead, the company planned to begin operations at its new 300,000 tonne/year technical ammonium nitrate (TAN) plant at Taloja in India. "With this plant coming on stream, the company will be the fifth largest manufacturer of this product in the world and will derive considerable advantages from the higher scale," Deepak said in a statement. Deepak Fertilisers also announced that it has signed ammonia contracts for the project with a leading supplier.

($1 = Rs46.9, €1 = Rs58.2)
This tutorial assumes you're already familiar with how to configure a new Django project. If you need help, please refer to Django for Beginners, which covers the topic in more detail. Complete source code can be found on Github.

Setup

Start by creating a new Django project. This code can live anywhere on your computer. On a Mac, the desktop is a convenient place and that's where we'll put this code. We can do all of the normal configuration from the command line:

- create a new accounts directory for our code on the Desktop
- install Django with Pipenv
- start the virtual environment shell
- create a new Django project called config
- create a new SQLite database with migrate
- run the local server

Here are the commands to run:

$ cd ~/Desktop
$ mkdir accounts && cd accounts
$ pipenv install django==3.0
$ pipenv shell
(accounts) $ django-admin.py startproject config .
(accounts) $ python manage.py migrate
(accounts) $ python manage.py runserver

If you navigate to the local server you'll see the Django welcome screen.

The Django auth app

Django automatically installs the auth app when a new project is created. Look in the config/settings.py file under INSTALLED_APPS and you can see auth is one of several built-in apps Django has installed for us.

# config/settings.py
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',  # Yoohoo!!!!
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]

To use the auth app we need to add it to our project-level urls.py file. Make sure to add include on the second line. I've chosen to include the auth app at accounts/ but you can use any url pattern you want.
# config/urls.py
from django.contrib import admin
from django.urls import path, include  # new

urlpatterns = [
    path('admin/', admin.site.urls),
    path('accounts/', include('django.contrib.auth.urls')),  # new
]

The auth app we've now included provides us with several authentication views and URLs for handling login, logout, and password management. The URLs provided by auth are:

accounts/login/ [name='login']
accounts/logout/ [name='logout']
accounts/password_change/ [name='password_change']
accounts/password_change/done/ [name='password_change_done']
accounts/password_reset/ [name='password_reset']
accounts/password_reset/done/ [name='password_reset_done']
accounts/reset/<uidb64>/<token>/ [name='password_reset_confirm']
accounts/reset/done/ [name='password_reset_complete']

There are associated auth views for each URL pattern, too. That means we only need to create a template to use each!

Login Page

Let's make our login page! Django by default will look within a templates folder called registration for auth templates. The login template is called login.html. Create a new directory called registration and the requisite login.html file within it. From the command line type Control-c to quit our local server and enter the following commands:

(accounts) $ mkdir templates
(accounts) $ mkdir templates/registration
(accounts) $ touch templates/registration/login.html

Note: Make sure that templates is created at the project level, not within an existing directory such as config. You can see the official source code here for further confirmation your structure is correct.

Then include this template code in our login.html file:

<!-- templates/registration/login.html -->
<h2>Login</h2>
<form method="post">
  {% csrf_token %}
  {{ form.as_p }}
  <button type="submit">Login</button>
</form>

This is a standard Django form using POST to send data and {% csrf_token %} tags for security concerns, namely to prevent a CSRF attack. The form's contents are output between paragraph tags thanks to {{ form.as_p }}, and then we add a "submit" button.

Next, update the settings.py file to tell Django to look for a templates folder at the project level. Update the DIRS setting within TEMPLATES as follows. This is a one-line change.

# config/settings.py
TEMPLATES = [
    {
        ...
        'DIRS': [os.path.join(BASE_DIR, 'templates')],
        ...
    },
]

Our login functionality now works, but to make it better we should specify where to redirect the user upon a successful login. In other words, once a user has logged in, where should they be sent on the site? We use the LOGIN_REDIRECT_URL setting to specify this route. At the bottom of the settings.py file add the following to redirect the user to the homepage.

# config/settings.py
LOGIN_REDIRECT_URL = '/'

We're actually done at this point! If you now start up the Django server again with python manage.py runserver and navigate to our login page, you'll see the following.

Create users

But there's one missing piece: we haven't created any users yet. Let's quickly do that by making a superuser account from the command line. Quit the server with Control+c and then run the command python manage.py createsuperuser. Answer the prompts and note that your password will not appear on the screen when typing, for security reasons.

(accounts) $ python manage.py createsuperuser
Username (leave blank to use 'wsv'):
Email address: will@learndjango.com
Password:
Password (again):
Superuser created successfully.

Now spin up the server again with python manage.py runserver and refresh the login page. Enter the login info for your just-created user. We know that our login worked because we were redirected to the homepage, but we haven't created it yet, so we see the error Page not found. Let's fix that!

Create a homepage

We want a simple homepage that will display one message to logged-out users and another to logged-in users. First quit the local server with Control+c and then create new base.html and home.html files. Note that these are located within the templates folder but not within templates/registration/, where Django auth looks by default for user auth templates.

(accounts) $ touch templates/base.html
(accounts) $ touch templates/home.html

Add the following code to each (the templates below are reconstructed in the standard shape for this tutorial, since the extraction cut them off):

<!-- templates/base.html -->
<html>
<head>
  <title>{% block title %}Django Auth Tutorial{% endblock %}</title>
</head>
<body>
  {% block content %}
  {% endblock %}
</body>
</html>

<!-- templates/home.html -->
{% extends 'base.html' %}

{% block title %}Home{% endblock %}

{% block content %}
{% if user.is_authenticated %}
Hi {{ user.username }}!
{% else %}
<p>You are not logged in</p>
<a href="{% url 'login' %}">login</a>
{% endif %}
{% endblock %}

While we're at it, we can update login.html too, to extend our new base.html file:

<!-- templates/registration/login.html -->
{% extends 'base.html' %}

{% block title %}Login{% endblock %}

{% block content %}
<h2>Login</h2>
<form method="post">
  {% csrf_token %}
  {{ form.as_p }}
  <button type="submit">Login</button>
</form>
{% endblock %}

Now update our urls.py file so we can display the homepage. Normally I would prefer to create a dedicated pages app for this purpose, but we don't have to, and for simplicity we'll just add it to our existing config/urls.py file. Make sure to import TemplateView on the third line and then add a urlpattern for it at the path ''.

# config/urls.py
from django.contrib import admin
from django.urls import path, include
from django.views.generic.base import TemplateView  # new

urlpatterns = [
    path('admin/', admin.site.urls),
    path('accounts/', include('django.contrib.auth.urls')),
    path('', TemplateView.as_view(template_name='home.html'), name='home'),  # new
]

And we're done. If you start the Django server again with python manage.py runserver and navigate to the homepage, you'll see the following:

It worked! But how do we log out? The only option currently is to go into the admin panel and click on the "Logout" link in the upper right corner. This will log us out, as seen by the redirect page. If you go to the homepage again and refresh the page, we can see we're logged out.

Logout link

Let's add a logout link to our page so users can easily toggle back and forth between the two states. Fortunately, the Django auth app already provides us with a built-in url and view for this. And if you think about it, we don't need to display anything on logout, so there's no need for a template. All we really do after a successful "logout" request is redirect to another page.
So let's first add a link to the built-in logout url in our home.html file:

<!-- templates/home.html -->
{% extends 'base.html' %}

{% block title %}Home{% endblock %}

{% block content %}
{% if user.is_authenticated %}
Hi {{ user.username }}!
<p><a href="{% url 'logout' %}">logout</a></p>
{% else %}
<p>You are not logged in</p>
<a href="{% url 'login' %}">login</a>
{% endif %}
{% endblock %}

Then update settings.py with our redirect link, which is called LOGOUT_REDIRECT_URL. Add it right next to our login redirect so the bottom of the settings.py file looks as follows:

# config/settings.py
LOGIN_REDIRECT_URL = '/'
LOGOUT_REDIRECT_URL = '/'

Actually, now that we have a homepage view we should use that instead of our current hardcoded approach. What's the url name of our homepage? It's home, which we named in our config/urls.py file:

# config/urls.py
...
path('', TemplateView.as_view(template_name='home.html'), name='home'),
...

So we can replace '/' with home at the bottom of the settings.py file:

# config/settings.py
LOGIN_REDIRECT_URL = 'home'
LOGOUT_REDIRECT_URL = 'home'

Now if you revisit the homepage and log in, you'll be redirected to the new homepage that has a "logout" link for logged-in users. Clicking it takes you back to the homepage with a "login" link.

Conclusion

With very little code we have a robust login and logout authentication system. It probably feels a bit like magic, since the auth app did much of the heavy lifting for us. One of the nice things about Django is that while it provides a lot of functionality out of the box, it's designed to be customizable too. In the next post, Django Signup Tutorial, we'll learn how to add a signup page to register new users.
Originally posted by Iain Emsley: I don't have the book, but wouldn't you need something like public class party rather than just class party at the beginning?

Originally posted by Campbell Ritchie: I don't have that book, but I believe some of the earlier examples are incomplete and will compile but not run...

Originally posted by blingo james: Pasted your code, javac, and it worked for me (some warnings, but no errors), and .class file created. Is what you've pasted in here exactly what you have on file?

Originally posted by Campbell Ritchie: ... Since the previous post is 6 weeks old, is the previous poster likely to be reading?
Using ASSERT(), VERIFY(), and TRACE() in non-MFC Applications
by Gabriel Fleseriu

When it comes to game development under C++, few people choose to use MFC. Still, I find the ASSERT(), VERIFY() and TRACE() macros useful. So I thought to write my own versions that work for any kind of project for Windows platforms.

A few reminders: ASSERT() is supposed to evaluate its parameter, and if this is zero, to break the execution. In release mode, ASSERT() should expand to nothing. VERIFY() is very similar to ASSERT(), except that in release mode, it is supposed to expand to its parameter. ASSERT() should be used with expressions that do not include any function call. For expressions that include a function call, you should use VERIFY(), so the function call is preserved in release mode. TRACE() is the counterpart of printf(), except that it prints to the debug window. In release mode, TRACE() also should expand to nothing. None of the three macros imply any runtime penalty in release mode.

The macros distinguish between debug and release mode by the pre-defined _DEBUG macro. This is specific to Microsoft Visual C++. If you are using some other compiler you might have to use some appropriate macro.

There are two files needed to support ASSERT(), VERIFY() and TRACE(): debug.h and debug.cpp. You should include debug.h in some main header of your project. It does not pollute recurrent inclusions, since it does not include any file itself. You also should add debug.cpp to the source files of your project. Here they are:

// file debug.h
#ifndef __DEBUG_H__
#define __DEBUG_H__

#ifdef _DEBUG
void _trace(char *fmt, ...);
#define ASSERT(x) {if(!(x)) _asm{int 0x03}}
#define VERIFY(x) {if(!(x)) _asm{int 0x03}}
#else
#define ASSERT(x)
#define VERIFY(x) x
#endif

#ifdef _DEBUG
#define TRACE _trace
#else
inline void _trace(LPCTSTR fmt, ...) { }
#define TRACE 1 ? (void)0 : _trace
#endif

#endif // __DEBUG_H__

// file debug.cpp
#ifdef _DEBUG

#include <stdio.h>
#include <stdarg.h>
#include <windows.h>

void _trace(char *fmt, ...)
{
    char out[1024];
    va_list body;
    va_start(body, fmt);
    vsprintf(out, fmt, body);
    va_end(body);
    OutputDebugString(out);
}

#endif

Date this article was posted to GameDev.net: 7/23/2002
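With a modern C++ compiler, the same three macros can be sketched portably. This is an adaptation, not the article's code: it keys off the standard NDEBUG flag instead of MSVC's _DEBUG, substitutes abort() for the inline breakpoint and stderr for OutputDebugString(), and the load_resource()/demo() helpers are invented here purely to illustrate the ASSERT-vs-VERIFY distinction the article describes:

```cpp
// Portable sketch of ASSERT/VERIFY/TRACE (adaptation of the article's idea):
// abort() stands in for the MSVC inline breakpoint, stderr for the debug window.
#include <cstdarg>
#include <cstdio>
#include <cstdlib>

#ifndef NDEBUG   // NDEBUG is the standard release-build flag
inline void _trace(const char *fmt, ...) {
    va_list body;
    va_start(body, fmt);
    vfprintf(stderr, fmt, body);   // bounded: no fixed 1024-byte buffer to overflow
    va_end(body);
}
#define ASSERT(x) do { if (!(x)) abort(); } while (0)
#define VERIFY(x) do { if (!(x)) abort(); } while (0)
#define TRACE _trace
#else
#define ASSERT(x) ((void)0)
#define VERIFY(x) ((void)(x))      // expression (and its side effects) survive in release
#define TRACE(...) ((void)0)       // requires C++11 variadic macros
#endif

// Illustrates the ASSERT-vs-VERIFY rule: the function call inside
// VERIFY() still executes when NDEBUG is defined.
inline int load_resource(int *counter) {
    ++*counter;                    // observable side effect
    return 1;                      // pretend the load succeeded
}

inline int demo() {
    int calls = 0;
    VERIFY(load_resource(&calls)); // call preserved in both build modes
    ASSERT(calls == 1);            // stripped entirely in release mode
    TRACE("loaded %d resource(s)\n", calls);
    return calls;
}
```

The article's release-mode `#define TRACE 1 ? (void)0 : _trace` trick exists because Visual C++ of that era had no variadic macros: the always-true conditional makes the `_trace(...)` call the dead branch, which the compiler optimizes away. With variadic macros available, `#define TRACE(...) ((void)0)` achieves the same effect more directly.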
JavaRanch » Java Forums » Java » Swing / AWT / SWT Tom McAmmond Ranch Hand Joined: Feb 16, 2004 Posts: 58 posted Aug 13, 2004 14:26:00 0 Cross-posted from Intermediate Java--- In summary, what I want to do seems deceptively simple. There are 3 classes: Config, Account, and Mail. I do something like this: Config c = new Config(); Account acc = new Account(c); Mail mail = new Mail(acc); int num = mail.checkMail(); Now I have a swing application that does this in Main(). I want it to run mail.checkMail(); repeatedly at specified intervals (in a thread), and I want the swing class to update a label with the new int upon every check. I can't figure out how to do that - there seem to be so many ways to do threads, and they're all a bit confusing. Thoughts are welcome - I'm still gaining experience and have no "best practices" guide Ernest Friedman-Hill author and iconoclast Marshal Joined: Jul 08, 2003 Posts: 24189 posted Aug 13, 2004 14:55:00 0 One simple and effective way would be to use javax.swing.Timer . You create a Timer, and register an ActionListener with it (you've already written many of those, I'm sure.) The actionPerformed method of the listeners will be called at an interval that you can specify. In your actionPerformed method, you'd call checkMail() and then use SwingUtilities.invokeLater() to set the label text. To implement this, you'd create the Timer in the same place where you actually assemble the GUI, and call start() on the timer after the GUI is all put together. The ActionListener would likely be a separate class. The Runnable you have to write for invokeLater() could be an anonymous class. Let us know if you get stuck.
[Jess in Action] [AskingGoodQuestions] Tom McAmmond Ranch Hand Joined: Feb 16, 2004 Posts: 58 posted Aug 14, 2004 15:00:00 0 Thank you for the suggestion - I'll be writing that code in the next week or so, and I'll write back if I have difficulties.... clio katz Ranch Hand Joined: Apr 30, 2004 Posts: 101 posted Aug 14, 2004 20:17:00 0 a peanut from the peanut-gallery: since _checkMail() . is potentially a long-running task . needs Config and Account instance data, . may require error/exception interpretation, and . returns something ... you may want to constrain Timer actionlistener to _only_ start a _checkMail thread (as appropriate) ... and let another listener be responsible for GUI update the 'other' action/event listener (GUI updater) would be fired at _checkMail completion. I left out a few things:-) It's more code, but can save some later hair-pulling if you plan to continue to use/distribute the app. As an example, you could implement a MailMgr Class containing - num (last available mail msg count, or -1 at init) - setters/getters for Config & Acct info - state variable (OK, POLLING, ERROR, etc) - listener overrides (_addMailListener(listener) etc) - checkMail() - to start MailThread, if appropriate - internal MailThread Class the MailThread Class might contain - 'listener' overrides (same set as above, invoked by MailMgr, keeps a list of listeners that we notify) - event notifier(s) (_fireMailCounted, _fireMailError ..) - run method! (retrieve mailcount, notify listeners of outcome) Finally, you'd need to write the Event obj and Listener interface for event(s) you want to catch ... 
something like // checkMail event public class MailUpdateEvent extends EventObject { public MailUpdateEvent(MailMgr source) { super(source); } } // event listener interface public interface MailEventListener extends EventListener { public void MailCounted(MailUpdateEvent e); public void MailError(MailUpdateEvent e); } In this scenario, you could do something like: MailMgr mm = new MailMgr(configDat, acctDat); mm.addListener(new MailEventListener()); mm.checkMail(); when you start the app. When you receive a Timer event, you could then do something like if (mm.getState() == mm.READY) { mm.checkMail(); } else { // skip it, or adjust // timer, or ... whatever } Likewise, upon 'hearing' completion events, the MailEventListener can trigger the appropriate GUI updates. [You'll need to write a class to do this...:-] SOOOOOooo what would these few peanuts buy you? 1) a little less pain when things go wrong . won't tie-up a listener event q waiting for checkMail completion . allows you to trigger visual cues (GUI) about poll state . app can maintain info about data/connection issues without interrupting other gui tasks . facilitates implementation of mail-status-widget to dynamically review poll state 2) extensibility . facilitates customizable timer parameters (timer evt 'smart' threadlaunch code can de-queue overlapping events) . can pretty easily support multiple mail accts per MailMgr (just vector-ize thread-specific resources), or multiple MailMgrs... seems like a lot of work, but it's probably worthwhile to consider a scheme like this if you will be using the app for a while... just my two cents - hope it helps! [ August 14, 2004: Message edited by: clio katz ] [ August 14, 2004: Message edited by: clio katz ] Ernest Friedman-Hill author and iconoclast Marshal Joined: Jul 08, 2003 Posts: 24189 34 I like... posted Aug 14, 2004 21:11:00 0 Clio is right, although I would use only a slightly different approach than what I originally described. 
I wasn't really thinking when I recommended javax.swing.Timer -- I should have recommended java.util.Timer instead. If you use javax.swing.Timer , the mail checking will be done on the GUI thread -- not what you want, as it will block the GUI. I don't normally use this class, but I recommended it to you as it sounded like you'd be more confortable with the idea of an ActionListener than a Runnable. Mea Culpa. Anyway, instead, use java.util.Timer (which wants a Runnable instead of an ActionListener , but otherwise, it's basically the same idea as the Swing timer) and checkMail will run on an independent thread, leaving the GUI to run in peace. Again, use SwingUtilities.invokeLater() to update the GUI when you're done. Tom McAmmond Ranch Hand Joined: Feb 16, 2004 Posts: 58 posted Aug 15, 2004 00:43:00 0 Definitely lots to think about - thank you both for your advice. This level of Java coding seems to be quite a bit more advanced than the level I'm at, but without a challenge how can you learn....? I'll do my best at implementing something like that, since it's definitely a better idea to have the timer run in its own thread (separate from the GUI), and the extensibility it offers is tempting. In other words, expect more questions on this stuff as I attempt it Can either of you recommend any particularly good literature that might help me improve my GUI coding skills? clio katz Ranch Hand Joined: Apr 30, 2004 Posts: 101 posted Aug 16, 2004 20:09:00 0 I agree with Ernest's approach, and it's certainly much clearer than the approach I suggested. I was trying to provide some 'food for thought' based on some snags i have run into in the past... kind of 'defensive programming':-) As you may have seen in some earlier posts, I'm still struggling with 'best practices' for dealing w the Swing event Q myself ... 
I can't get used to a single Q for GUI events, although I *think* I understand why it may have been architected this way [my "naive" theory is: because swing must be portable and operate the same across (theoretically 'any') gui windowing system, java must abstract a 'protected mode'-ish graphics kernel. i can only guess that swing's single asynch q simplifies 'kernel' programming/maint and awt backsupport ... as i say, it's a naive theory ...]. SwingWorker appears to acknowledge the limitations of Swing. The fact that it's not included in the SDK keeps some of us from shooting ourselves in the foot before we have learned enough about the particulars of Swing. Probably in the typical case, a programmer may never need a SwingWorker thread. The bottom line is that we all wind up taking Swing design into account (eg _invokeLater) once the 'event-based' part of our need comes into play... But to answer your question, I'm afraid I don't know of a 'best practices' type book. There was an author of a Swing book on this forum maybe a few months ago, and Gregg (moderator) asked him about this very topic. As I recall, the book didn't take a position on best practices, but offered various approaches. The book may have been reviewed elsewhere on this site. I don't believe the samples were online back then when I looked... but it's probably worth a look-see at the javaranch book reviews. - I got (and get) most help from downloaded tutorials, examples, and open source: 1. The SwingSet2 demo that comes with the SDK is a tour-de-force of designs... I really like their initialization code, and the init infrastructure for launching the components. it's kind of a bean-starter lesson - a component-ized function orientation instead of the traditional top-down sequential design. 2. For open source, jEdit really impresses me, even though the code itself is way 'advanced java'. it may take a long while to understand the detail, but it's always instructive to study it. 
there are lots of neat tricks and thoughtful methods there... 3. Again on the 'advanced' side: if you want a case-study of a java e-mail filtering application using very advanced concepts (model-view-controller, thread pools, etc), you can download BlackMamba . In this article the author explains how and why he chose to design the app the way he did. 4. Finally, if you check out a handful of Sun examples, you'll see a pattern of their "_createAndShowGUI()" to get the main window displayed. The Sun coders are particularly mindful of the Q limitations, and by reviewing some of their examples, you'll begin to get a feel for when a deceptively simple need can trigger the need for a slew of defensive programming:^-) hth! Tom McAmmond Ranch Hand Joined: Feb 16, 2004 Posts: 58 posted Aug 19, 2004 23:24:00 0 I have to admit - I know enough to see that Clio's post is probably very good advice but it's so far beyond my understanding that I'm not sure what to do with it. I'll study some more examples (like the ones you spoke of), and see if it makes a bit more sense in a few days. Thanks again, Tom Tom McAmmond Ranch Hand Joined: Feb 16, 2004 Posts: 58 posted Aug 21, 2004 09:54:00 0 Need some help with threads... In my GUI class main method I have a timer. I have a TimerTask now that, every few minutes, should launch a new instance of Mail, which is a Runnable object, in a new thread. class PollingTask extends TimerTask{ public void run(){ System.out.println("Launching polling thread"); Mail mm = new Mail(Account, Config) Thread t = new Thread(mm); t.start(); } } The run() method of Mail will do all the necessary operations to check my mail. I don't understand how to use invokeLater() in my Mail class, at the end of the run() method, to update the Gui class that has the timer so that the GUI will change to reflect the mail operation's outcome (i.e. # of new messages). 
The Mail class doesn't know much about the GUI class, so it can't very well call methods in that class that will update the GUI, can it? Can someone explain, by any chance? Thank you! [ August 21, 2004: Message edited by: Tom McAmmond ] clio katz Ranch Hand Joined: Apr 30, 2004 Posts: 101 posted Aug 21, 2004 11:36:00 0 Hi Tom, I know my previous posts have been kind of dense, but I'll try to cut that out:-) Basically, you will be able to design the mail thread in the way that best suits your needs. you can design it as (1) a reusable component (operates same way each time, regardless), or (2) a specialized class, with access to the caller's resources (i.e. it can directly or indirectly 'know' about and manipulate GUI view resources) My first response was a strategy for case #1. This is considered a java/oo 'best practice', and it is the basis for the "bean" concept. a bean is just basically a reusable component. in case #1, you would write an event class to represent the real-world "you've got mail!" event. (i called this the "MailCounted" event) Then your Mail thread would do "fireMailCounted()" when he's done. somewhere else you would have an eventlistener ready to 'do the right thing' when the event is fired.. ... but there's nothing wrong w opting for strategy #2! w strategy #2, your Mail thread will be designed to be a member/inner class of the invoking class. in this scenario, the Mail instance can get access to the caller's resources (such as class-accessible methods/variables). For example, if the caller has a method

public void setMailCount(int count) {
    final String strCount = String.valueOf(count);
    SwingUtilities.invokeLater(new Runnable() {
        public void run() {
            lblMailCount.setText(strCount);
        }
    });
}

the Mail thread can invoke it... that's just an example - the design options are open. but i'll stop right here before i get to blabbing on-n-on hth:-) p.s. thanks for your feedback!
it's valuable to know if responses are helpful or not Tom McAmmond Ranch Hand Joined: Feb 16, 2004 Posts: 58 posted Aug 21, 2004 13:10:00 0 I really appreciate your help on this clio & Ernest. I realize these are pretty basic newbie questions. I like option number 1 - make the Mail class a reusable component. The service I'm accessing has more than just Mail to offer, and eventually I would like to incorporate other reusable components into this program. Here's what I think has to be done - please correct me if I'm wrong: - So I'm going to learn how to create my own event classes (something I haven't done before). - Then my TimerTask inner class in my GUI instantiates a new Mail class (Mail implements runnable()). Every time the timer goes off, it creates a new thread and passes the Mail class to the thread. - The Mail class, upon t.start(), does a bunch of stuff to check the mail. It fires a new MailCounted (an Event class internal to the Mail class?) event upon completion. - The GUI has a listener for this MailCounted event, and it updates the display with current information when it sees this event being fired. Questions (some of them, I realize, extremely basic. I'm still researching and reading docs/tutorials constantly - might figure these out on my own soon): - How and where, exactly, does the GUI listen for MailCounted events? - How does the GUI get info from the Mail class when it's finished doing what it does? The thread is dead at that point... - I take it I have to synchronize the methods in the Mail class that deal with updating, say, _numUnread - Should I attempt to stop my timer from starting new Mail threads if one already exists? Probably.... So most of my confusion relates to "How does the Mail thread update the GUI? Does the GUI access the Mail class when it's finished doing stuff, or does the Mail class send info to the GUI along with the Event?" 
Once again, thanks for the help clio katz Ranch Hand Joined: Apr 30, 2004 Posts: 101 posted Aug 21, 2004 22:01:00 0 Hi Tom, I think you're just nanoseconds away from working your way through - what i think of as - a pretty advanced programming task! Don't be turned off by the event stuff - it's pretty quick to write once you assemble the pieces: event class, listener interface, listener. you don't really need to think about using an event adapter until you have more than one event (an event adapter is just a programming convenience, so your eventlistener instances can pick and choose to override methods in the interface...) on the design side, either Timer or Mail can start the checkMail thread. since you'll be firing an event at completion, you won't need to synchronize/join threads. when the task finishes, your listener will be notified of the event, and java implicitly zombie-fies the thread. my pref would be to use one thread per distinct mail account. your Mail class can be the thread 'controller'. timer event just needs to trigger a Mail fetch ... Mail class can ignore the call if a thread is already running for the given mail acct however you will want to use more than _isAlive() for checking thread 'state'... why? java threads are a little dodgey when it comes to querying or managing state - i'm sure you've read about this. you will need to set and check your own state var(s) so that you can manage execution (stop, interrupt, 'isExecuting', etc). Thread.stop() is a no-no ... you basically have to write code to give the thread a poison pill (as needed) on to your central question: your event listener uses the same mechanisms as the other java listeners (actionListener, windowListener etc). once you register the listener (eg _addActionListener), it's like registering a callback: java will load and run your listener-associated code whenever the so-named event (actionPerformed, windowClosing etc) is fired. 
to try to clarify 'flow', i'll sketch out a skeletal design: model Classes RootWin Timer Mail MailUpdateEvent Interfaces MailEventListener code fragments (see above code for MailUpdateEvent class, MailEventListener interface) // // to simplify, rootWin can be our eventListener // public class RootWin implements MailEventListener { // member vars JLabel lblMailCount; Mail mm; ... // methods public void setMailCount(int count){ // see prev post:-) ... } // MailEventListener "MailCounted" event handler // // ==HERE's where the Mail info gets to the GUI== public void mailCounted(MailUpdateEvent e){ //setMailCount( mm.getMailCount() ); // // or .. more like ... Mail mail = (Mail) e.getSource(); setMailCount( mail.getMailCount() ); } } i know i'm forgetting things (like inner-workings of Mail class!), but i hope this helps to get you started ... let us know! [ August 21, 2004: Message edited by: clio katz ] Tom McAmmond Ranch Hand Joined: Feb 16, 2004 Posts: 58 posted Aug 22, 2004 17:30:00 0 It works!! Thanks for not making it too easy - I learned lots Still some finishing touches and a little reorganizing to do, but it works! It's amazing how as you gain experience you start looking at your old code and saying "What was I thinking?". Plus, rereading your earlier posts now they become a lot clearer, clio. This project has the potential to suck up plenty more of my time and I'll probably be back to ask more questions at some point, but the basic structure is laid our pretty much like you suggested, and I can't thank you enough. I've got events, listeners, interfaces, threads... all sorts of good stuff... I check my mail in a thread, it fires events based on errors or new mail, the GUI gets the messages and updates itself, the timer works.... Now to clean it all up and document it so I remember what I did One more thing - Would you recommend putting the MailEvent and MailEventListener classes inside the MailMgr class, or can they go in their own class files? 
I might find out soon that you can only do it one way... Once again, thanks so much for your help. Tom

clio katz Ranch Hand Joined: Apr 30, 2004 Posts: 101 posted Aug 23, 2004 10:20:00 0

great work! accept my praise for your persistence and follow-through - rare qualities. you pulled together many complex/confusing bits into a working whole - you have yourself to thank. you probably already solved your packaging issue, but since you asked my opinion: having them in separate pkg/class probably fits best w the reusability goal. at the rate you're going, next time i expect to be thanking _you_ for help:-)

subject: Swing thread - polling
http://www.coderanch.com/t/338829/GUI/java/Swing-thread-polling
Hi everyone, I'm having problems getting a flipped binary number (one where all the 1's of a number become 0's and vice versa) to display correctly. My program converts a value from decimal to binary and then should flip the converted binary number so that all 1's become 0's and 0's become 1's. It is not displaying the flipped number properly. I was wondering how I can get it to be displayed so that 1's become 0's and 0's become 1's. I tried using the standard tilde symbol for accomplishing this but it didn't work. Here is the code, with the flipping fixed: the ~ operator flips every bit of the int that stores the decimal-digit representation (sign bit included), which is why it produced garbage. Since the "binary number" here is really a string of decimal digits, each digit has to be flipped individually as it is produced (1 - rem):

Code:
#include <iostream>
#include <stdlib.h>
#include <conio.h>
#include <string>
#include <bitset>

using namespace std;

class BitHandler
{
private:
    unsigned int value;
public:
    void print();
    long binary(int);
    long binflip(int);
};

void BitHandler::print()
{
    value = 128;
    cout << "Decimal Value: " << value << endl;
    cout << "Binary Value: " << binary(value) << endl;
    cout << "Flipped Value: " << binflip(value) << endl;
}

long BitHandler::binary(int value)
{   // converts to binary, stored as decimal digits
    int rem;
    long x = 0;
    if (value == 0)
    {
        return 0;
    }
    rem = value % 2;
    value /= 2;
    x = binary(value) * 10 + rem;
    return x;
}

long BitHandler::binflip(int value)
{   // converts to binary with each bit flipped
    // ~x would flip the bits of the decimal representation, not the digits;
    // instead flip each remainder (1 - rem) as the digits are built up.
    // Note: leading zeros are dropped by this decimal-digit representation.
    int rem;
    long x = 0;
    if (value == 0)
    {
        return 0;
    }
    rem = value % 2;
    value /= 2;
    x = binflip(value) * 10 + (1 - rem);
    return x;
}

int main()
{
    BitHandler bits;
    bits.print();
    getche();
    return 0;
}

Any help or suggestions would be greatly appreciated. Thanks.
http://cboard.cprogramming.com/cplusplus-programming/67991-working-binary-numbers-cplusplus.html
Back in the SharePoint 2007 timeframe, I wrote my checklist for optimizing SharePoint sites – this was an aggregation of knowledge from various sources (referenced in the article) and from diagnosing performance issues for my clients, and it’s still one of my more popular posts. Nearly all of the recommendations there are still valid for SP 2010, and the core tips like output caching, BLOB caching, IIS compression etc. can have a huge impact on the speed of your site. Those who developed SharePoint internet sites may remember that suppressing large JavaScript files such as core.js was another key step, since SharePoint 2007 added these to every page, even for anonymous users. This meant that the ‘page weight’ for SharePoint pages was pretty bad, with a lot of data going over the wire for each page load. This made SharePoint internet sites slower than they needed to be, since anonymous users didn’t actually need core.js (since it facilitates editing functionality typically only needed for authenticated users) and indeed Microsoft published a workaround using custom code here. The SP2010 problem To alleviate some of this problem, SharePoint 2010 introduces the Script On Demand framework (SOD) – this is designed to only send JavaScript files which are actually needed, and in many cases can load them in the background after the page has finished loading. Additionally, the JavaScript files themselves are minified so they are much smaller. Sounds great. However, in my experience it doesn’t completely solve the issue, and there are many variables such as how the developers reference JavaScript files. I’m guessing this is an area where Your Mileage May Vary, but certainly on my current employer’s site () we were concerned that SP2010 was still adding some heavy JS files for anonymous users, albeit some apparently after page load thanks to SOD. 
Some of the bigger files were for ribbon functionality, and this seemed crazy since our site doesn’t even use the ribbon for anonymous users. I’ve been asked about the issue several times now, so clearly other people have the same concern. Waldek also has an awesome solution to this problem involving creation of two sets of master pages/page layouts for authenticated/anonymous users, but that wasn’t an option in our case. N.B. Remember that we are primarily discussing the “first-time” user experience here – on subsequent page loads, files will be cached by the browser. However, on internet sites it’s the first-time experience that we tend to care a lot about! When I use Firebug, I can see that no less than 480KB of JavaScript is being loaded, with an overall page weight of 888KB (and consider that, although this is an image-heavy site, it is fairly optimized with sprite maps for images etc.): If we had a way to suppress some of those bigger files for anonymous users entirely, we’d have 123KB of JavaScript with an overall page weight of 478.5KB (70% of it now being the images):

But what about page load times?

Right now, if you’ve been paying attention you should be saying “But Chris, those files should be loading after the UI anyway due to Script On Demand, so who cares? Users won’t notice!”. That’s what I thought too. However, this doesn’t seem to add up when you take measurements. I thought long and hard about which tool to measure this with – I decided to use Hammerhead, a tool developed by highly-regarded web performance specialist Steve Souders of Google. Hammerhead makes it easy to hit a website say 10 times, then average the results. As a sidenote, Hammerhead and Firebug do reassuringly record the same page load time – if you’ve ever wondered about this in Firebug, it’s the red line in Firebug which we care about. Mozilla documentation defines the blue and red lines (shown in the screenshots above) as: - Blue = DOMContentLoaded.
Fired when the page's DOM is ready, but the referenced stylesheets, images, and subframes may not be done loading.
- Red = load. Use the “load” event to detect a fully-loaded page.

Additionally, Hammerhead conveniently simulates first-time site visitors (“Empty cache”) and returning visitors (“Primed cache”) - I’m focusing primarily on the first category. Here are the page load times I recorded:

Without large JS files suppressed:

With large JS files suppressed:

Reading into the page load times

Brief statistics diversion - I suggest we consider both the median and average (arithmetic mean) when comparing, in case you disagree with my logic on this. Personally I think we can use average, since we might have outliers but that’s fairly representative of any server and its workload. Anyway, by my maths the differences (using both measures) for a new visitor are:

- Median – 16% faster with JS suppressed
- Average – 24% faster with JS suppressed

Either way, I’ll definitely take that for one optimization. We’ve also shaved something off the subsequent page loads which is nice. The next thing to consider here is network latency. The tests were performed locally on my dev VM – this means that in terms of geographic distance between user and server, it’s approximately 0.0 metres, or 0.000 if you prefer that to 3 decimal places. Unless your global website audience happens to be camped out in your server room, real-life conditions would clearly be ‘worse’, meaning the benefit could be greater than my stats suggest. This would especially be the case if your site has visitors located in other continents to the servers or if users otherwise have slow connections – in these cases, page weight is accepted to be an even bigger factor in site performance than usual.

How it’s done

The approach I took was to prevent SharePoint from adding the unnecessary JS files to the page in the first place.
This is actually tricky because script references can originate from anywhere (user controls, web parts, delegate controls etc.) – however, SharePoint typically adds the large JS files using a ClientScriptManager or ScriptLink control and both work the same way. Controls on the page register which JS files they need during the page init cycle (early), and then the respective links get added to the page during the prerender phase (late). Since I know that some files aren’t actually needed, we can simply remove registrations from the collection (it’s in HttpContext.Current.Items) before the rendering happens – this is done via a control in the master page. The bad news is that some reflection is required in the code (to read, not write), but frankly we’re fine with that if it means a faster website. If you’re interested in the details, it’s because it’s not a collection of strings which are stored in HttpContext.Current.Items, but Microsoft.SharePoint.WebControls.ScriptLinkInfo objects (internal).

Control reference (note that files to suppress is configurable):

<!-- the SuppressScriptsForAnonymous control MUST go before the ScriptLink control in the master page -->
<COB:SuppressScriptsForAnonymous
<SharePoint:ScriptLink

The code:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Reflection;
using System.Web;
using System.Web.UI;

namespace COB.SharePoint.WebControls
{
    /// <summary>
    /// Ensures anonymous users of a SharePoint 2010 site do not receive unnecessary large JavaScript files (slows down first page load). Files to suppress are specified
    /// in the FilesToSuppress property (a semi-colon separated list). This control *must* be placed before the main OOTB ScriptLink control (Microsoft.SharePoint.WebControls.ScriptLink) in the
    /// markup for the master page.
    /// </summary>
    /// <remarks>
    /// This control works by manipulating the HttpContext.Current.Items key which contains the script links added by various server-side registrations. Since SharePoint uses sealed/internal
    /// code to manage this list, some minor reflection is required to read values. However, this is preferable to end-users downloading huge JS files which they do not need.
    /// </remarks>
    [ToolboxData("<{0}:SuppressScriptsForAnonymous runat=\"server\" />")]
    public class SuppressScriptsForAnonymous : Control
    {
        private const string HTTPCONTEXT_SCRIPTLINKS = "sp-scriptlinks";
        private List<string> files = new List<string>();
        private List<int> indiciesOfFilesToBeRemoved = new List<int>();

        public string FilesToSuppress { get; set; }

        protected override void OnInit(EventArgs e)
        {
            files.AddRange(FilesToSuppress.Split(';'));
            base.OnInit(e);
        }

        protected override void OnPreRender(EventArgs e)
        {
            // only process if user is anonymous..
            if (!HttpContext.Current.User.Identity.IsAuthenticated)
            {
                // get list of registered script files which will be loaded..
                object oFiles = HttpContext.Current.Items[HTTPCONTEXT_SCRIPTLINKS];
                IList registeredFiles = (IList)oFiles;
                int i = 0;
                foreach (var file in registeredFiles)
                {
                    // use reflection to get the ScriptLinkInfo.Filename property, then check if in
                    // FilesToSuppress list and remove from collection if so..
                    Type t = file.GetType();
                    PropertyInfo prop = t.GetProperty("Filename");
                    if (prop != null)
                    {
                        string filename = prop.GetValue(file, null).ToString();
                        if (!string.IsNullOrEmpty(files.Find(delegate(string sFound) { return filename.ToLower().Contains(sFound.ToLower()); })))
                        {
                            indiciesOfFilesToBeRemoved.Add(i);
                        }
                    }
                    i++;
                }
                int iRemoved = 0;
                foreach (int j in indiciesOfFilesToBeRemoved)
                {
                    registeredFiles.RemoveAt(j - iRemoved);
                    iRemoved++;
                }
                // overwrite cached value with amended collection..
                HttpContext.Current.Items[HTTPCONTEXT_SCRIPTLINKS] = registeredFiles;
            }
            base.OnPreRender(e);
        }
    }
}

Usage considerations

For us, this was an entirely acceptable solution. It’s hard to say whether an approach like this would be officially supported, but it would be simple to add a “disable” switch to potentially assuage those concerns for support calls. Ultimately, it doesn’t feel too different to the approach used in the 2007 timeframe to me, but in any case it would be an implementation decision for each deployment and it may not be suitable for all. Interestingly, I’ve shared this code previously with some folks and last I heard it was probably going to be used on a high-traffic *.microsoft.com site running SP2010, so it was interesting for me to hear those guys were fine with it too. Additionally, you need to consider if your site uses any of the JavaScript we’re trying to suppress. Examples of this could be SharePoint 2010’s modal dialogs, status/notification bars, or Client OM etc. Finally, even better results could probably be achieved by tweaking the files to suppress (some sites may not need init.js for example), and extending the control to deal with CSS files also. Even if you weren’t to do this, test, test, test of course.

Summary

Although there are many ways to optimize SharePoint internet sites, dealing with page weight is a key step, and in SharePoint much of it is caused by JavaScript files which are usually unnecessary for anonymous users. Compression can certainly help here, but comes with a trade-off of additional server load, and it’s not easy to calculate load/benefit to arrive at the right compression level. It seems to me that it would be better to just not send those unnecessary files down the pipe in the first place if we care about performance, and that’s where I went with my approach.
I’d love to hear from you if you think my testing or analysis is flawed in any way, since ultimately a good outcome for me would be to discover it’s a problem which doesn’t really need solving so that the whole issue goes away!
http://www.sharepointnutsandbolts.com/2011_01_01_archive.html
public class TestClass
{
    public TestClass()
    {
        // here you can write File.Open
        Console.WriteLine("Constructor");
    }

    ~TestClass()
    {
        // here you can write File.Close
        Console.WriteLine("Destructor");
    }
}

Related Read: Using or Using?

public class TestClass : IDisposable
{
    public TestClass()
    {
        // here you can write File.Open
        Console.WriteLine("Constructor");
    }

    ~TestClass()
    {
        // here you can write File.Close
        Console.WriteLine("Destructor");
    }

    public void Dispose()
    {
        // Close the file here
        GC.SuppressFinalize(this);
    }
}

Related Read: Using Fixed Keyword in C#.

I hope this post will give you a better understanding of the using block and allow you to write better programs in the long run.

Good Tip. Our servers in dell.com have high CPU due to intensive garbage collection. Is it possible to indicate to the GC to suppress finalization without using IDisposable?

Hi Vijay, are you using a pattern to handle your factory methods? If so, what you can do is call GC.SuppressFinalize for each object created on your system. Remember, there is always a catch: if you use SuppressFinalize for an object that uses managed code and you don't clean this up, you will have a memory leak. For servers there is a provision of Background GC recently introduced, which differs widely from the Concurrent GC we are aware of in previous versions of .NET. For servers, if you are using .NET 4.0 or above, I think Background GC is enabled by default. This ensures that garbage collection does not need to suspend the execution engine but can collect on the fly. You can read my book, but I am also going to put up another tip to handle high-performance use cases on the server tonight. So stay tuned.
https://dailydotnettips.com/benefit-of-using-in-dispose-for-net-objects-why-and-when/
bt_gatt_srv_init()

Initialize resources required for the Generic Attribute (GATT) server.

Synopsis:

#include <btapi/btgattsrv.h>

int bt_gatt_srv_init(void)

Arguments:

None.

Library:

libbtapi (For the qcc command, use the -l btapi option to link against this library)

Description:

The function also starts a new thread. Most callbacks invoked in the new thread are thread safe unless otherwise specified. You must call this function before calling any other functions in this file.

Returns:

- EACCESS: Insufficient permissions to initialize functionality.
- ENOMEM: Insufficient memory was available to perform the request.
- ENOTSUP: The current library version is not supported.
- ESRVRFAULT: An internal error has occurred.

Last modified: 2014-05-14
https://developer.blackberry.com/native/reference/core/com.qnx.doc.bluetooth.lib_ref/topic/bt_gatt_srv_init.html
Hi, I'm working on a turn-based tactics game in which the action takes place on a 2D grid. (I should probably mention that this is my first major Unity project.) I'd like to use Dijkstra's algorithm for finding the movement and attack ranges of units in Manhattan distance. However, Unity doesn't appear to support some of the data structures I need for this implementation (specifically priority queue, map, and list types). What are the best ways to resolve this?

Right now, I'd like to import standard JavaScript library implementations, but it doesn't seem like that's possible. I might be overlooking something really obvious, but as far as I can tell so far, Unity isn't compatible with standard JavaScript. My background is in design and not engineering, so I'm not too enthusiastic about having to write efficient custom implementations of these. Any recommendations? Thanks in advance.

Use .NET classes. It would be much easier to port C# code to UnityScript than JavaScript code.

Answer by tingham · Feb 06, 2012 at 02:13 AM

You might want to check out the source for Three.js. Many of the algorithms can be ported over to Unity by simply fixing the namespace of the class references and function names. If nothing else you could probably prototype your game in WebGL using Three.js and then work backwards porting into Unity from there (if it's even required.)

Well, one of the main reasons I'm using Unity for this project was to be able to prototype in-engine. I'm also not seeing the classes I'm looking for in Three.js, although it's got enough source content that I might be overlooking them somewhere, or not recognizing them. Thanks for the recommendation.
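For what it's worth, the algorithm itself is small even without a dedicated priority-queue type. Below is a plain-JavaScript sketch of Dijkstra over a 2D cost grid using a naive array-as-priority-queue; everything here (the grid, costs, and function name) is invented for illustration, and in Unity you would port it to C# and the .NET collections suggested above (e.g. List<T> and Dictionary<K,V>):

```javascript
// Dijkstra on a 4-neighbour grid with a naive "priority queue" (array + sort).
// Illustration only — port to C#/.NET collections for actual Unity use.
function dijkstra(grid, start) {
  const rows = grid.length, cols = grid[0].length;
  const dist = Array.from({ length: rows }, () => Array(cols).fill(Infinity));
  dist[start[0]][start[1]] = 0;
  const queue = [[0, start[0], start[1]]]; // entries are [distance, row, col]
  while (queue.length) {
    queue.sort((a, b) => a[0] - b[0]);     // cheapest entry first
    const [d, r, c] = queue.shift();
    if (d > dist[r][c]) continue;          // skip stale queue entries
    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nr = r + dr, nc = c + dc;
      if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
      const nd = d + grid[nr][nc];         // cost of stepping into the cell
      if (nd < dist[nr][nc]) {
        dist[nr][nc] = nd;
        queue.push([nd, nr, nc]);
      }
    }
  }
  return dist; // dist[r][c] = cheapest movement cost from start to (r, c)
}

const grid = [
  [1, 1, 1],
  [1, 9, 1],
  [1, 1, 1],
];
console.log(dijkstra(grid, [0, 0])[2][2]); // 4 — routes around the expensive cell
```

Re-sorting the whole queue on every pop is O(n log n) per step, which is fine for small tactics-game grids; a real binary-heap priority queue only starts to matter on much larger maps.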
https://answers.unity.com/questions/214235/implementing-dijkstras-algorithm-in-unityscript.html?sort=oldest
Differences Between Static Generated Sites And Server-Side Rendered Apps

JavaScript currently has three types of applications that you can build: Single Page Applications (SPAs), pre-rendered/static generated sites, and server-side rendered applications. SPAs come with many challenges, one of which is Search Engine Optimization (SEO). Possible solutions are to make use of Static Site Generators or Server-Side Rendering (SSR). In this article, I’m going to explain them alongside listing their pros and cons so you have a balanced view. We’re going to look at what static generation/pre-rendering is, as well as frameworks such as Gatsby and VuePress that help in creating statically generated sites. We’re also going to look at what server-side rendered (SSR) applications are, as well as frameworks like Next.js and Nuxt.js that can help you create SSR applications. Finally, we’re going to cover the differences between these two methods and which of them you should use when building your next application.

Note: You can find all the code snippets in this article on GitHub.

What Is A Static Site Generator?

A Static Site Generator (SSG) is a software application that creates HTML pages from templates or components and a given content source. You give it some text files and content, and the generator gives you back a complete website, and this completed website is referred to as a static generated site. This means that your site’s pages are generated at build time, and your site’s content does not change unless you add new content or components and rebuild — you have to rebuild your site if you want it to be updated with new content. This approach is good for building applications whose content does not change too often — sites whose content does not have to change depending on the user, and sites that do not have a lot of user-generated content. An example of such a site is a blog or a personal website.
Let’s look at some advantages of using static generated sites.

PROS

- Fast website: Since all of your site’s pages and content are generated at build time, you do not have to worry about API calls to the server for content, and this makes your site very fast.
- Easy to deploy: After your static site has been generated, you are left with static files, which can easily be deployed to platforms like Netlify.
- Security: Static generated sites are composed solely of static files, so the risk of being vulnerable to cyber attacks is minimal. Because static generated sites have no database, attackers cannot inject malicious code or exploit your database.
- You can use version control software (e.g. git) to manage and track changes to your content. This can come in handy when you want to roll back changes you made to the content on your site.

CONS

- Content can become stale if it changes too quickly.
- To update its content, you have to rebuild the site.
- Build time increases with the size of the application.

Examples of static site generators are GatsbyJS and VuePress. Let us take a look at how to create static sites using these two generators.

Gatsby

According to their official website, “Gatsby is a free and open-source framework based on React that helps developers build blazing-fast websites and apps.” This means developers familiar with React will find it easy to get started with Gatsby. To use this generator, you first have to install it using NPM:

npm install -g gatsby-cli

This installs Gatsby globally on your machine; you only have to run this command once. After the installation is complete, you can create your first static site using the following command:

gatsby new demo-gatsby

This command creates a new Gatsby project that I have named demo-gatsby.
When this is done, you can start up your app server by running the following commands:

cd demo-gatsby
gatsby develop

Your Gatsby application should be running on localhost:8000. The folder structure for this app looks like this:

--| gatsby-browser.js
--| LICENSE
--| README.md
--| gatsby-config.js
--| node_modules/
--| src/
----| components
----| pages
----| images
--| gatsby-node.js
--| package.json
--| yarn.lock
--| gatsby-ssr.js
--| public/
----| icons
----| page-data
----| static

For this tutorial, we’re only going to look at the src/pages folder. This folder contains the files that are generated into routes on your site. To test this, let us add a new file (newPage.js) to this folder:

import React from "react"
import { Link } from "gatsby"
import Layout from "../components/layout"
import SEO from "../components/seo"

const NewPage = () => (
  <Layout>
    <SEO title="My New Page" />
    <h1>Hello Gatsby</h1>
    <p>This is my first Gatsby Page</p>
    <button>
      <Link to='/'>Home</Link>
    </button>
  </Layout>
)

export default NewPage

Here, we import React from the react package so that when your code is transpiled to pure JavaScript, references to React will appear there. We also import a Link component from gatsby; this is a routing component used in place of the native anchor tag ( <a href='#'>Link</a> ). It accepts a to prop that takes a route as a value. We import a Layout component that was added to your app by default. This component handles the layout of pages nested inside it. We also import the SEO component into this new file. This component accepts a title prop and configures this value as part of your page’s metadata. Finally, we export the function NewPage, which returns JSX containing your new page’s content.
And in your index.js file, add a link to this new page we just created:

  {/* new link */}
  <button>
    <Link to="/newPage/">Go to New Page</Link>
  </button>
</Layout>
)

export default IndexPage

Here, we import the same components that were used in the newPage.js file, and they perform the same function in this file. We also import an Image component from our components folder. This component is added by default to your Gatsby application, and it helps in lazy loading images and serving reduced file sizes. Finally, we export a function IndexPage that returns JSX containing our new link and some default content.

Now, if we open our browser, we should see our new link at the bottom of the page. And if you click on Go To New Page, it should take you to your newly added page.
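Stepping back from the Gatsby specifics for a moment, the "pages are generated at build time" idea can be sketched in a few lines of plain JavaScript. This is a deliberately naive illustration — the page data and helper function are invented, and it is nothing like Gatsby's real pipeline — but it shows why a statically generated site needs a rebuild to change its content:

```javascript
// A toy sketch of the "build step" at the heart of any static site generator.
// NOT Gatsby's actual pipeline — just the concept in miniature.
const pages = [
  { route: 'index', title: 'Home', body: 'Welcome!' },
  { route: 'about', title: 'About', body: 'About this site.' },
];

// Every page is rendered to a finished HTML string once, at build time.
function build(pages) {
  const out = {};
  for (const page of pages) {
    out[page.route + '.html'] =
      `<html><head><title>${page.title}</title></head>` +
      `<body><h1>${page.title}</h1><p>${page.body}</p></body></html>`;
  }
  return out; // a real generator would write these files to disk
}

const site = build(pages);
console.log(Object.keys(site)); // -> [ 'index.html', 'about.html' ]
```

Once build has run, serving the site is just handing out the pre-made files — no per-request work is needed, which is where the speed of static sites comes from, and also why the output goes stale until the next rebuild.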
Update your README.md file with the following: # Hello VuePress _VuePress Rocks_ > **Yes!** _It supports JavaScript interpolation code_ > **{{new Date()}}** <p v-{{i}}</p> If you go back to your browser, your page should look like this: To add a new page to your VuePress site, you add a new markdown file to the root directory and name it whatever you want the route to be. In this case, I’ve gone ahead to name it Page-2.md and added the following to the file: # hello. And now, if you navigate to /page-2 in your browser, we should see this: What Is Server-Side Rendering? (SSR) Server-Side Rendering (SSR), is the process of displaying web-pages on the server and passing it to the browser/client-side instead of rendering it in the browser. Server-side sends a fully rendered page to the client; the client’s JavaScript bundle takes over and allows the SPA framework to operate. This means if you have an application that is server-side rendered, your content is fetched on the server side and passed to your browser to display to your user. With client-side rendering it is different, you would have to navigate to that page first before it fetches data from your server meaning your user would have to wait for some seconds before they’re served with the content on that page. Applications that have SSR enabled are called Server-side rendered applications. This approach is good for building complex applications that require user interaction, rely on a database, or where the content changes very often. This is because content on these sites changes very often and the users need to see the updated content as soon as they’re updated. It is also good for applications that have tailored content depending on who is viewing it and applications where you need to store user-specific data like email and user preference while also catering for SEO. An example of this is a large e-commerce platform or a social media site. 
Let us look at some of the advantages of server-side rendering your applications. Pros - Content is up to date because it fetches content on the go; - Your site loads fast because it fetches its content on the server-side before rendering it to the user; - Since in SSR JavaScript is rendered server-side, your users’ devices have little relevance to the load time of your page and this leads to better performance. CONS - More API calls to the server since they’re made per request; - Cannot deploy to a static CDN. Further examples of frameworks that offer SSR are Next.js and Nuxt.js. Next.js Next.js is a React.js framework that helps in building static sites, server-side rendered applications, and so on. Since it was built on React, knowledge of React is required to use this framework. To create a Next.js app, you need to run the following: npm init next-app # or yarn create next-app You would be prompted to choose a name your application, I have named my application demo-next. The next option would be to select a template and I’ve selected the Default starter app after which it begins to set up your app. When this is done, we can now start our application cd demo-next yarn dev # or npm run dev Your application should be running on localhost:3000 and you should see this in your browser; The page that is being rendered can be found in pages/index.js so if you open this file and modify the JSX inside the Home function, it would reflect in your browser. 
Replace the JSX with this: import Head from 'next/head' export default function Home() { return ( <div className="container"> <Head> <title>Hello Next.js</title> <link rel="icon" href="/favicon.ico" /> </Head> <main> <h1 className="title"> Welcome to <a href="">Next.js!</a> </h1> <p className='description'>Nextjs Rocks!</p> </main> <style jsx>{` main { padding: 5rem 0; flex: 1; display: flex; flex-direction: column; justify-content: center; align-items: center; } .title a { color: #0070f3; text-decoration: none; } .title a:hover, .title a:focus, .title a:active { text-decoration: underline; } .title { margin: 0; line-height: 1.15; font-size: 4rem; } .title, .description { text-align: center; } .description { line-height: 1.5; font-size: 1.5rem; } `}</style> <style jsx global>{` html, body { padding: 0; margin: 0; font-family: -apple-system, BlinkMacSystemFont, Segoe UI, Roboto, Oxygen, Ubuntu, Cantarell, Fira Sans, Droid Sans, Helvetica Neue, sans-serif; } * { box-sizing: border-box; } `}</style> </div> ) } In this file, we make use of Next.js Head component to set our page’s metadata title and favicon for this page. We also export a Home function that returns a JSX containing our page’s content. This JSX contains our Head component together with our main page’s content. It also contains two style tags, one for styling this page and the other for the global styling of the app. Now, you should see that the content on your app has changed to this: Now if we want to add a new page to our app, we have to add a new file inside the /pages folder. 
Routes are automatically created based on the /pages folder structure, this means that if you have a folder structure that looks like this: --| pages ----| index.js ==> '/' ----| about.js ==> '/about' ----| projects ------| next.js ==> '/projects/next' So in your pages folder, add a new file and name it hello.js then add the following to it: import Head from 'next/head' export default function Hello() { return ( <div> <Head> <title>Hello World</title> <link rel="icon" href="/favicon.ico" /> </Head> <main className='container'> <h1 className='title'> Hello <a href="">World</a> </h1> <p className='subtitle'>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Voluptatem provident soluta, sit explicabo impedit nobis accusantium? Nihil beatae, accusamus modi assumenda, optio omnis aliquid nobis magnam facilis ipsam eum saepe!</p> </main> <style jsx> {` .container { margin: 0 auto; min-height: 100vh; max-width: 800px;: 22px; color: #526488; word-spacing: 5px; padding-bottom: 15px; } `} </style> </div> ) } This page is identical to the landing page we already have, we only changed the content and added new styling to the JSX. Now if we visit localhost:3000/hello, we should see our new page: Finally, we need to add a link to this new page on our index.js page, and to do this, we make use of Next’s Link component. To do that, we have to import it first. # index.js import Link from 'next/link' #Add this to your JSX <Link href='/hello'> <Link href='/hello'> <a>Next</a> </Link> This link component is how we add links to pages created in Next in our application. Now if we go back to our homepage and click on this link, it would take us to our /hello page. Nuxt.js According to their official documentation: a great developer experience in mind.” It is based on Vue.js so that means Vue.js developers would find it easy getting started with it and knowledge of Vue.js is required to use this framework. 
To create a Nuxt.js app, you need to run the following command in your terminal:

yarn create nuxt-app <project-name>
# or npx
npx create-nuxt-app <project-name>

This will prompt you to select a name along with some other options. I named mine demo-nuxt and selected the default choices for the other options. When this is done, you can open your app folder and open pages/index.vue. Every file in this folder is turned into a route, and so our landing page is controlled by the index.vue file. So update it with the following:

<template>
  <div class="container">
    <div>
      <logo />
      <h1 class="title">
        Hello Nuxt
      </h1>
      <h2 class="subtitle">
        Nuxt.js Rocks!
      </h2>
      <div class="links">
        <a href="" target="_blank" class="button--green">
          Documentation
        </a>
        <a href="" target="_blank" class="button--grey">
          GitHub
        </a>
      </div>
    </div>
  </div>
</template>

<script>
import Logo from '~/components/Logo.vue'

export default {
  components: {
    Logo
  }
}
</script>

<style>
.container {
  margin: 0 auto;
  min-height: 100vh;
  display: flex;
  justify-content: center;
  align-items: center;
}
.subtitle {
  font-size: 42px;
  color: #526488;
  word-spacing: 5px;
  padding-bottom: 15px;
}
.links {
  padding-top: 15px;
}
</style>

And run your application:

cd demo-nuxt
# start your application
yarn dev
# or
npm run dev

Your application should be running on localhost:3000 and you should see this: We can see that this page displays the content we added in index.vue. The router structure works the same way the Next.js router works; it renders every file inside the /pages folder into a page. So let us add a new page (hello.vue) to our application.

<template>
  <div>
    <h1>Hello World!</h1>
    <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Id ipsa vitae tempora perferendis, voluptate a accusantium itaque vel ex, provident autem quod rem saepe ullam hic explicabo voluptas, libero distinctio?</p>
  </div>
</template>

<script>
export default {};
</script>

<style>
</style>

So if you open localhost:3000/hello, you should see your new page in your browser.
Taking A Closer Look At The Differences Now that we have looked at both static-site generators and server-side rendering and how to get started with them by using some popular tools, let us look at the differences between them. Conclusion We can see why it is so easy to think both static generated sites and server-side rendered applications are the same. Now that we know the differences between them are, I would advise that we try to learn more on how to build both static generated sites and server-side rendered applications in order to fully understand the differences between them. Further Resources Here are some useful links that are bound to help you get started in no time: - “Getting Started With Gatsby,” Gatsby official website - “Getting Started With VuePress,” VuePress official website - “VuePress: Documentation Made Easy,” Ben Hong, Smashing Magazine - “Getting Started With Next.js,” Next.js by Vercel official website - “Why Do People Use A Static-Site Generator?,” Quora - “Static Site Generator,” Gatsby official website - “An Introduction To VuePress,” Joshua Bemenderfer, DigitalOcean - “What Is Server-Side Rendering?,” Edpresso, Educative.io - “What Is A Static Site Generator? And 3 Ways To Find The Best One ,” Phil Hawksworth, The Netlify Blog - “The Benefits Of Server Side Rendering Over Client Side Rendering,” Alex Grigoryan, Medium
https://www.smashingmagazine.com/2020/07/differences-static-generated-sites-server-side-rendered-apps/
CC-MAIN-2020-34
refinedweb
3,024
61.56
PIPE(2)                   NetBSD System Calls Manual                   PIPE(2)

NAME
     pipe, pipe2 -- create descriptor pair for interprocess communication

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <unistd.h>

     int pipe(int fildes[2]);

     #include <fcntl.h>
     #include <unistd.h>

     int pipe2(int fildes[2], int flags);

DESCRIPTION
     The pipe2() function behaves exactly like pipe() only it allows extra
     flags to be set on the returned file descriptors. The following flags
     are valid:

     O_CLOEXEC      Set the ``close-on-exec'' property.
     O_NONBLOCK     Sets non-blocking I/O.
     O_NOSIGPIPE    Return EPIPE instead of raising SIGPIPE.

RETURN VALUES
     On successful creation of the pipe, zero is returned. Otherwise, a value
     of -1 is returned and the variable errno set to indicate the error.

ERRORS
     The pipe() and pipe2() calls will fail if:

     [EFAULT]    The fildes buffer is in an invalid area of the process's
                 address space. The reliable detection of this error cannot
                 be guaranteed; when not detected, a signal may be delivered
                 to the process, indicating an address violation.
     [EMFILE]    Too many descriptors are active.
     [ENFILE]    The system file table is full.
     [ENOMEM]    Not enough kernel memory to establish a pipe.

     pipe2() will also fail if:

     [EINVAL]    flags contains an invalid value.

SEE ALSO
     sh(1), fork(2), read(2), socketpair(2), write(2)

STANDARDS
     The pipe() function conforms to ISO/IEC 9945-1:1990 (``POSIX.1'').

HISTORY
     A pipe() function call appeared in Version 3 AT&T UNIX. Since Version 4
     AT&T UNIX, it allocates two distinct file descriptors.

     The pipe2() function is inspired from Linux and appeared in NetBSD 6.0.

NetBSD 9.99                     November 27, 2020                  NetBSD 9.99
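The SYNOPSIS above maps directly onto Python's os module, which is a quick way to experiment with these semantics. A minimal sketch, assuming a Unix-like platform; os.set_blocking is a Python convenience standing in for pipe2()'s O_NONBLOCK flag, not something taken from this page:

```python
import os

# pipe() gives back a (read, write) descriptor pair, like int fildes[2] in C.
r, w = os.pipe()

# Rough equivalent of pipe2()'s O_NONBLOCK, applied after the fact:
os.set_blocking(w, False)
os.set_blocking(w, True)   # restore blocking mode for the demo below

os.write(w, b"hello, pipe\n")
os.close(w)                # closing the write end gives the reader EOF
data = os.read(r, 1024)
os.close(r)
print(data)                # b'hello, pipe\n'
```

On platforms that provide it, os.pipe2(os.O_CLOEXEC | os.O_NONBLOCK) mirrors the pipe2() call directly, setting the flags atomically at creation time.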
http://man.netbsd.org/pipe.2
CC-MAIN-2021-10
refinedweb
289
60.21
On Sun, Oct 16, 2011 at 4:39 PM, Eduardo Ochs <eduardoochs@gmail.com> wrote: > In my opinion the two worst functions in Lua 5.1 are "require" and > "module". They contain too much magic, and - in my opinion again - > their implementation details are somewhat arbitrary. require() is quite good, it does a few things more than strictly necessary, but that allows you some flexibility in how your modules are defined but still can be require()ed module(), on the other hand, is the one polluting the global namespace, and also having to meddle around with the local environment. that's why when i saw that it was going to be deprecated, i simply switched to: local Modulename = {} .... populate Modulename return Modulename and it works perfectly well for my needs. when I want a slightly more OO-style, i might add a metatable to the module table, that's just a couple lines extra. nothing to write home about. module() is gone, the replacement is far simpler, the structure is unchanged. sounds good to me, but a very trivial adjustment. I really don't get why to keep discussing about it. -- Javier
http://lua-users.org/lists/lua-l/2011-10/msg00518.html
CC-MAIN-2013-20
refinedweb
192
64.41
ASP.NET Overview. This topic describes the following features of ASP.NET and of Visual Web Developer, the development environment for creating ASP.NET applications. Some of the most important namespaces in the .NET Framework class library that pertain to ASP.NET are the following: For a complete list of .NET Framework namespaces, with links to API reference topics for them, see .NET Framework Class Library. ASP.NET always runs with a particular Windows identity, so you can secure your application using Windows capabilities such as NTFS Access Control Lists (ACLs), database permissions, and so on. ASP.NET configuration settings are stored in XML-based files, which can be changed with minimal effect on operational Web applications and servers.
http://msdn.microsoft.com/en-US/library/4w3ex9c2(v=vs.100).aspx
CC-MAIN-2014-41
refinedweb
125
52.97
Compiler Error CS0246 The type or namespace name 'type/namespace' could not be found (are you missing a using directive or an assembly reference?) A type or namespace that is used in the program was not found. You might have forgotten to reference (/reference) the assembly that contains the type, or you might not have added the required using directive. Or, there might be an issue with the assembly you are trying to reference. For more information, see Managing references in a project. If you get this error in code that was previously working, first look for missing or unresolved references in Solution Explorer. Do you need to re-install a NuGet package? For information about how the build system searches for references, see Resolving file references in team build. If all references seem to be correct, look in your source control history to see what has changed in your .csproj file and/or your local source file. If you haven't successfully accessed the reference yet, use the Object Browser to inspect the assembly that is supposed to contain this namespace and verify that the namespace is present. If you verify with Object Browser that the assembly contains the namespace, try removing the "using" directive for the namespace and see what else breaks. The root problem may be with some other type in another assembly.
https://msdn.microsoft.com/en-us/library/w7xf6dxs.aspx
CC-MAIN-2016-07
refinedweb
225
62.68
Django Admin - Dynamically pick list_display fields (user defined)

Some of my models have a lot of fields and the user may not need to see all of them at any given point in time. I am trying to add functionality to allow the user to select which fields are displayed from the front end, without having to change the list_display definition in the admin.py file. I also don't want to just dump all of the fields out there for them either. I am hoping someone may be able to point me at something on GitHub or give me some advice on how to go about doing this. Thanks in advance.

- Return Zip file with HttpResponse using StringIO, Django, Python

I'm trying to return a zip file with HttpResponse, using StringIO() because I'm not storing it in the DB or on the hard drive. My issue is that my response returns 200 when I request the file, but the OS never asks me if I want to save the file, or the file is never saved. I think that the browser is receiving the file, because I have seen in the Network Activity (inspect panel) that a 6.4 MB file of type zip is returned. I'm taking a .step file (text file) from a DB's URL, extracting the content, zipping it and returning it, that's all.
This is my code:

def function(request, url_file=None):
    # retrieving info
    name_file = url_file.split('/')[-1]
    file_content = urllib2.urlopen(url_file).read()
    stream_content = StringIO(file_content)
    upload_name = name_file.split('.')[0]

    # Create a new stream and write to it
    write_stream = StringIO()
    zip_file = ZipFile(write_stream, "w")
    try:
        zip_file.writestr(name_file, stream_content.getvalue().encode('utf-8'))
    except:
        zip_file.writestr(name_file, stream_content.getvalue().encode('utf-8', 'ignore'))
    zip_file.close()

    response = HttpResponse(write_stream.getvalue(), mimetype="application/x-zip-compressed")
    response['Content-Disposition'] = 'attachment; filename=%s.zip' % upload_name
    response['Content-Language'] = 'en'
    response['Content-Length'] = write_stream.tell()
    return response

- Django finding element by reverse relation vs filter function

I am currently working on a Django project. I am using a reverse relationship for finding the element, but at the same time I can also use the filter function. For example, the models are:

class Group(models.Model):
    # some attributes

class Profile(models.Model):
    group = models.ForeignKey(Group, related_name='profile')
    # more attributes

If I have an instance of Group (group) then I could use:

group.profile.all()

but also:

Profile.objects.filter(group=group)

What's the difference, and which one is more efficient? I tried to find out on Google but was unable to get a good answer. What if I am using the reverse relationship three or four times to find an element?

- Add new variable to DRF Response

I have a working function; I need to add a new variable to it, the value of which will depend on which part of the code is executed.
working_code.py

class Youtube)
except Youtube.DoesNotExist:
    p = Platform(user=request.user, platform=y, description=description)
return Response(
    PlatformSerializer(p, context={'request': request}).data
)

Now I add the variable NEW:

class My)
NEW = False
except Youtube.DoesNotExist:
    p = Platform(user=request.user, platform=y, description=description)
    NEW = True
return Response(?????)

How do I correctly include the NEW variable in the returned Response? Something like:

PlatformSerializer(p, context={'request': request, 'new':new}).data

- Django Admin (1.11.14) showing 404 error after login ()

I'm using a2hosing passenger_wsgi.py. My Django version is 1.11.14. The frontend looks good; I can even see the Django admin login page. But when I try to log in as superuser (of course with valid credentials), it takes me, with a 404 error, to:

The error msg:

Page not found (404)
Request Method: POST
Request URL: mysite/admin/login/?next=/admin/
Raised by: django.contrib.admin.sites.login

Using the URLconf defined in mysite.urls, Django tried these URL patterns, in this order:

^admin/

The current path, login/, didn't match any of these.

One last thing: when I test django-admin check from the console (ssh), I'm also getting:

ModuleNotFoundError: No module named 'mysite'

- Django Admin upload file when form is invalid

I have a field of type FileField in my model. In the admin, when the form is invalid, the uploaded file is lost and the error is shown in the admin page without the file uploaded. How do I solve this?

- Change format for Duration field in django admin

I'm using the Django DurationField in a model to calculate the time difference between two DateTimeField values. On the admin site for that model, the field shows up like this:

# short duration
Cumulative Total time: 0:51:33
# longer duration
Cumulative Total time: 2 days, 3:10:21

Is it possible to change the format for this field to the format given below?

Cumulative Total time: 2 days, 3 hours, 10 minutes, 21 seconds
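For the zip-in-memory question above, the core of the answer is independent of Django: build the archive in an io.BytesIO buffer (the Python 3 counterpart of StringIO for binary data) and hand its bytes to the response. A framework-free sketch; the HttpResponse wiring is shown only as comments, and the file name is made up:

```python
import io
import zipfile

def zip_in_memory(name, content):
    """Return the bytes of a zip archive holding one file, built entirely in RAM."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(name, content)
    return buf.getvalue()

payload = zip_in_memory("model.step", b"ISO-10303-21; ...")

# In a Django view you would then do roughly:
#   response = HttpResponse(payload, content_type="application/zip")
#   response['Content-Disposition'] = 'attachment; filename="model.zip"'
#   response['Content-Length'] = str(len(payload))
print(len(payload) > 0)   # True
```

Getting the browser to show a save dialog then comes down to the Content-Disposition: attachment header reaching the client unmodified.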
http://quabr.com/51277214/django-admin-dynamically-pick-list-display-fields-user-defined
CC-MAIN-2018-34
refinedweb
787
58.08
Dan Garry <dga...@wikimedia.org> changed:

           What     |Removed    |Added
----------------------------------------------------------------------------
         Status     |RESOLVED   |REOPENED
             CC     |           |dga...@wikimedia.org
     Resolution     |FIXED      |---

--- Comment #4 from Dan Garry <dga...@wikimedia.org> ---
This issue is still occurring for me on my own install of Vagrant. I followed
the instructions on mediawiki.org for enabling Flow on the User talk
namespace, and now when I go to a user talk page I get this error. Reopening
as such.

--
You are receiving this mail because:
You are the assignee for the bug.
You are on the CC list for the bug.
_______________________________________________
Wikibugs-l mailing list
Wikibugs-l@lists.wikimedia.org
https://www.mail-archive.com/wikibugs-l@lists.wikimedia.org/msg351719.html
CC-MAIN-2018-30
refinedweb
104
70.8
Opened 12 years ago Closed 12 years ago

#4312 closed (wontfix) addition of defaults argument to newforms save()

Description

I'd like the ability to call save() on a form and insert some values for fields (e.g., non-editable fields) into the instance before it is saved. This way, calling save(False) to return an object which is subsequently manipulated then saved again can be avoided. I'm not sure if 'defaults' is the best name for this argument, but that's what I've implemented for now. For example:

Given these models:

from django.db import models

class Foo(models.Model):
    name = models.CharField(maxlength=16)

class Bar(models.Model):
    name = models.CharField(maxlength=16)
    foo = models.ForeignKey(Foo, editable=False)

Here is the example:

>>> from django import newforms as forms
>>> from mysite.myapp.models import *
>>>
>>> my_foo = Foo(name='My Foo')
>>> my_foo.save()
>>> Form1 = forms.form_for_model(Bar)
>>> form = Form1({ 'name': 'My Bar'})
>>> form.is_valid()
True
>>> form.save(defaults={ 'foo': my_foo })
<Bar: Bar object>
>>> Form2 = forms.form_for_model(Bar, fields=('baz'))
>>> form = Form2({})
>>> form.is_valid()
True
>>> form.save(defaults={ 'name': 'My Bar 2', 'foo': my_foo })
<Bar: Bar object>
>>>

Attachments (1)

Change History (4)

Changed 12 years ago by

comment:1 follow-up: 2 Changed 12 years ago by

comment:2 Changed 12 years ago by

Replying to Simon G. <dev@simon.net.nz>:

Hmm.. what benefits does this have over using initial values?

I'm not sure that "defaults" was the right terminology for this. This was not intended to provide an initial value for the Form field or a default value for the Model attribute. Instead, this was meant as a "hook" that would allow the developer to supply values for Model attributes that are not intended to be a form field (neither hidden nor visible)--usually fields that are not "editable" or are not included in the "fields" argument to newforms.models.form_for_instance() or newforms.models.form_for_model().
A better example might be something like this: from django.db import models from django.contrib.auth.models import User class Blog(models.Model): name = models.CharField(maxlength=16) class Entry(models.Model): blog = models.ForeignKey(Blog, editable=False) user = models.ForeignKey(User, editable=False) text = models.TextField() When an Entry is created, the blog and user attributes are required, but they won't be part of the form--they must be supplied by some other means (e.g., the session, URL, or query string). Then they are added to the instance by passing them into the "defaults" argument of the Form's save method. This way save(False) doesn't need to be called before the attributes are set. In this case it's mostly needed when creating the instance. If there is many_to_many data, then that would also have to be handled somehow (see docs for save_instance()). There may be other ways to do this, but I found this to be backwards compatible and relatively simple solution... comment:3 Changed 12 years ago by This functionality is already provided by the save(False) approach. Attaching data to the form that isn't required for form presentation isn't really a good idea, IMHO. There is a problem with the save(False) approach when it comes to m2m data. This problem is described in ticket #4001, but that problem is solvable. The solution is to fix #4001, not to make form_for_model a behemoth. Remember, form_for_model is a helper, not the end of the story on newforms. Hmm.. what benefits does this have over using initial values?
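Stripped of the framework, the pattern the ticket debates amounts to merging caller-supplied values into the validated form data before instantiating the object. A framework-free sketch using the ticket's Blog/Entry example (the helper name and plain classes are illustrative, not Django API):

```python
def build_instance(cls, cleaned_data, defaults=None):
    """Instantiate cls from validated form data, letting the caller supply
    values for fields that were never part of the form (the 'defaults' idea)."""
    data = dict(cleaned_data)      # copy so the form data is left untouched
    data.update(defaults or {})    # caller-supplied values win
    return cls(**data)

class Entry:
    def __init__(self, blog, user, text):
        self.blog, self.user, self.text = blog, user, text

# 'text' came from the form; 'blog' and 'user' come from the session/URL.
e = build_instance(Entry, {"text": "hello"}, defaults={"blog": "b1", "user": "u1"})
print(e.blog, e.user, e.text)   # b1 u1 hello
```

The save(False)/commit=False route the closing comment prefers does the same thing in two steps: construct first, then assign the extra attributes before saving.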
https://code.djangoproject.com/ticket/4312
CC-MAIN-2019-35
refinedweb
586
60.31
from PIL import Image
import glob, os

size = 128, 128
for infile in glob.glob("*.jpg"):
    file, ext = os.path.splitext(infile)
    im = Image.open(infile)
    im.thumbnail(size)
    im.save(file + ".thumbnail", "JPEG")

Do you type script.py, or do you type python script.py?

where python
where pip
assoc .py
ftype Python.File

import sys, time
print("My python is %r" % sys.executable)
time.sleep(2)

#!/usr/bin/env python3.3
# above line MUST be the very first line and should contain the version of python under which you'd like to run your script
# I'm not using windows a lot and never tried, but I think if you installed at least one python3 version the 'generic' launcher
# should be able to parse this line and launch the appropriate python to run your script
import sys, time
print("My python is %r" % sys.executable)
time.sleep(2)

V:\Orchids\Python\thumbnail>test.py
My python is 'C:\\Python35\\python.exe'

V:\Orchids\Python\thumbnail>

V:\Orchids\Python\thumbnail>test.py
Traceback (most recent call last):
  File "V:\Orchids\Python\thumbnail\test.py", line 2, in <module>
    import PIL
ImportError: No module named 'PIL'

V:\Orchids\Python\thumbnail>

Please tell me what's the output of:

ftype Python.File
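Image.thumbnail preserves the aspect ratio, so the loop above only yields square thumbnails when the source images are already square. One way to get genuinely square output is to crop a centered square first and then thumbnail it; the crop arithmetic needs no imaging library at all. A sketch (Pillow's ImageOps.fit offers comparable behavior, and the helper below is only illustrative):

```python
def centered_square_box(width, height):
    """Return the (left, upper, right, lower) box of the largest centered square."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# For a 400x300 photo, crop the middle 300x300 region before thumbnailing:
print(centered_square_box(400, 300))   # (50, 0, 350, 300)

# With Pillow this would be used roughly as:
#   im = im.crop(centered_square_box(*im.size))
#   im.thumbnail((128, 128))
```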
https://www.experts-exchange.com/questions/28957556/How-to-generate-square-thumbnail-using-perl.html
CC-MAIN-2017-13
refinedweb
246
57.87
Another post about Phobos usability. Again the relative code is a rosettacode Task: >Assume we have a collection of numbers, and want to find the one with the largest minimal prime factor (that is, the one that contains relatively large factors). To speed up the search, the factorization should be done in parallel using separate threads or processes, to take advantage of multi-core CPUs.< The original D2 implementation (probably by feepinCreature), with little changes: This D2 version is not bad, but there are some ways to improve it. ------------------------------------ 1) Finding the max or min of an iterable is a really common need. Probably I need to do it every 50-100 lines of code or less. This is how it is found in that program: reduce!q{max(a, b)}(0L, minFactors)); I strongly suggest the max() and min() to find the max of an iterable by themselves. And to optionally accept a mapping function (as schwartzSort). See: With that the line of code becomes more readable and shorter, and surely bug-free: max(minFactors); Similar considerations suggest the creation of a sum() function: A sum() is not just a reduce, because it is allowed to use an optimization. See issue 4725. ------------------------------------ 2) Clojure language shows how nice is a pmap(), parallel map. If necessary two different pmap() may be created, one where the single mapping operation is slow enough, and one where it is very cheap. For this Task I use the pmap() for costly mapping function. 
Using pmap() the D2 program may be shortened to (this loses the timings, but they are not required):

import std.stdio, std.math, std.algorithm, std.parallel;

pure ulong lowestFactor(immutable ulong n) {
    if (n % 2 == 0)
        return 2;
    else {
        immutable ulong limit = cast(ulong)sqrt(n) + 1;
        for (ulong i = 3; i < limit; i += 2)
            if (n % i == 0)
                return i;
    }
    return n;
}

void main() {
    auto numbers = [2UL^^61-1, 2UL^^61-1, 2UL^^61-1, 112272537195293,
                    115284584522153, 115280098190773, 115797840077099,
                    112582718962171, 112272537095293, 1099726829285419];
    writefln("Largest min. factor is %s.", max(pmap(&lowestFactor, numbers)));
}

pmap is able to see that the given mapping function is pure. Here lowestFactor() is strongly pure, so pmap() is free to ignore the execution order of the single lowestFactor functions. An alternative syntax for pmap is similar to the map:

pmap!(&lowest_factor)(numbers)

------------------------------------
3) For debugging I may want to print the array of lowest factors:

writeln(pmap(&lowest_factor, numbers));

Or even just:

writeln(map!(&lowest_factor)(numbers));

writeln() is able to print a lazy iterable, but it prints it just as an array. This is bad, because it's important to give cues to the person that reads the printout regarding the types printed. A simple way to tell apart lazy sequences from arrays is to use a semicolon instead of a comma to tell apart items (lists are sometimes written using a semicolon in other functional languages, so this is not a new thing):

[0; 1; 2; 3; 4]

See: Empty dynamic arrays, empty static arrays, empty associative arrays and empty lazy iterables need some kind of output when they are printed; I suggest a simple "[]" at least. With no output it's too easy to confuse "no output" with "empty array" with "missing writeln" with "not run code path", etc. (In bugzilla there is a bug report about this, not written by me, I don't remember its number.)

Bye,
bearophile
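For comparison, Python already ships both pieces bearophile asks Phobos for: max()/min() take an iterable plus an optional key function, and executor.map plays the role of pmap. A sketch of the same rosettacode task; the ThreadPoolExecutor choice and the smaller inputs are illustrative (CPython threads won't speed up this pure-Python loop, so a ProcessPoolExecutor would be the realistic pick for the post's large numbers):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def lowest_factor(n):
    """Smallest prime factor of n (n itself when n is prime)."""
    if n % 2 == 0:
        return 2
    for i in range(3, math.isqrt(n) + 2, 2):
        if n % i == 0:
            return i
    return n

# Smaller inputs than the post's, to keep the demo quick; the shape is the same.
numbers = [2147483647, 999999937, 123456789]

# pool.map is the pmap of the post: apply lowest_factor to each item in parallel.
# lowest_factor is pure, so evaluation order does not matter.
with ThreadPoolExecutor() as pool:
    min_factors = list(pool.map(lowest_factor, numbers))

# max() over an iterable; with the optional key argument this is exactly the
# max/min signature the post asks for, e.g. max(xs, key=abs).
print(max(min_factors))   # 2147483647
```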
http://forum.dlang.org/thread/if1lv8$1lcp$1@digitalmars.com
CC-MAIN-2015-18
refinedweb
571
59.03
Exploring Dynamic Scoping in Python

Experimenting with Code Objects & Bytecode

Introduction

Ruby has anonymous code blocks, Python doesn't. Anonymous code blocks are (apparently) an important feature in implementing DSLs, much touted by Ruby protagonists. As far as I can tell, the major difference between code blocks in Ruby and functions in Python is that code blocks are executed in the current scope. You can rebind local variables in the scope in which the code block is executed. Python functions have a lexical scope, with the exception that you can't rebind variables in the enclosing scope.

Note

It turns out that this is wrong. Ruby code blocks are lexically scoped like Python functions. This article is really an exploration of dynamic scoping.

If you define a function inside a function or method which uses the variable 'x', this will be loaded from the scope in which the function was defined; not the scope in which it is executed. This is enormously useful, but perhaps not always the desired behaviour. If a function assigns to the variable 'x' this will always be inside the scope of the function and not affect the scope the function was defined in or executed in. I thought it would be fun to try and implement this feature of anonymous code blocks for Python, using code objects. This should be a fun way to learn more about the implementation of Python scoping rules by experimenting with byte-code. If this sounds like it's a hack, then it's only because it is. It is interesting to note however that Aspect Oriented Programming is a well accepted technique in Java, and is mainly implemented at the bytecode level. This article looks at the byte-code operations used in code objects and experiments with creating new ones. Although the details of the byte-codes are shown, no great technical knowledge should be needed to follow the article.

Code Objects

Python doesn't have code blocks. It does have code objects.
These can be executed in the current scope, but they are inconvenient to create inside a program. The code must be stored as a string, compiled and then executed.

>>> codeObject = compile(codeString, '<CodeString>', 'exec')
>>> exec codeObject
3
>>> print x
7
>>>

Functions store a code object representing the body of the function as the func_code attribute. For a reference on function attributes, see the function type. The byte-code contains instructions telling the interpreter how to load and store values. It is a combination of the function attributes and the byte-code, including code object attributes, that implement the scoping rules. You can't just execute the code object of a function:

>>> def function():
...     print x
...     x = 7
...
>>> codeObject = function.func_code
>>> exec codeObject
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in function
UnboundLocalError: local variable 'x' referenced before assignment
>>>

The co_freevars attribute of the code object contains a list of the variables from the enclosing scope used by the code object. There are various other attributes like co_varnames which tell the interpreter how to load names. For a reference on code objects, see: Code Objects (Unofficial Reference Wiki). Code objects are immutable, or at least the interesting attributes are read only, so we can't just change the attributes we are interested in. We can create new code objects. The documentation doesn't seem to encourage this though:

>>> from types import CodeType
>>> print CodeType.__doc__
code(argcount, nlocals, stacksize, flags, codestring, constants, names,
     varnames, filename, name, firstlineno, lnotab[, freevars[, cellvars]])

Create a code object. Not for the faint of heart.
>>>

In order to implement code blocks I would like to take the code objects from a function and transform them into ones which can be executed in the current scope.
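The load/store distinction the article relies on is easy to see today with the standard dis module, no third-party tools needed: code compiled at top level names variables with the *_NAME opcodes, while a function body uses the *_FAST forms. A small present-day illustration (the opcode names are stable across recent CPython versions, though the exact bytecode around them differs):

```python
import dis

# Module-level code, like the article's compile(..., 'exec') example:
module_code = compile("y = 1\nz = y + 1", "<string>", "exec")
module_ops = {i.opname for i in dis.get_instructions(module_code)}

# The same statements inside a function body:
def f():
    y = 1
    return y + 1

function_ops = {i.opname for i in dis.get_instructions(f)}

print("STORE_NAME" in module_ops, "STORE_FAST" in module_ops)      # True False
print("STORE_FAST" in function_ops, "STORE_NAME" in function_ops)  # True False
```

This is exactly the asymmetry the article exploits: rewriting the *_FAST (and *_DEREF) operations into *_NAME operations makes a function body behave like top-level code, resolving names in whatever scope executes it.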
There is an interesting recipe which transforms bytecodes and creates new code objects in this way: Implementing the make statement by hacking bytecodes. Luckily there is an easier way. Byte-Codes There is a great module called Byteplay. This lets you manipulate byte-codes and create new code objects. Ideal for my purposes. It is also great for exploring byte-codes. Let's see what the byte-code looks like for some functions. The Python Byte Code Instructions comes in handy here. The following Python creates three code blocks and uses Byteplay to print out the names of the byte-codes operations. The three code blocks come from a function which is defined in the global scope, the same code (without the argument 'x') compiled from a string in the global scope, and a function defined inside another function. from pprint import pprint z = 1 def testFunction(x): y = 1 print x print y print z print 'From Function:' code = Code.from_code(testFunction.func_code) byteCode1 = code.code pprint(byteCode1) codeObject = compile(""" y = 1 print y print z""", '<Summink>', 'exec') print 'From current scope:' code = Code.from_code(codeObject) byteCode2 = code.code pprint(byteCode2) def anotherScope(): z = 1 def testFunction(x): y = 1 print x print y print z code = Code.from_code(testFunction.func_code) byteCode3 = code.code return byteCode3 byteCode3 = anotherScope() print 'Code defined in another scope, using a local rather than a global.' 
pprint(byteCode3) This prints out the following (you don't need to read it all) : From Function: [(SetLineno, 6), (LOAD_CONST, 1), (STORE_FAST, 'y'), (SetLineno, 7), (LOAD_FAST, 'x'), (PRINT_ITEM, None), (PRINT_NEWLINE, None), (SetLineno, 8), (LOAD_FAST, 'y'), (PRINT_ITEM, None), (PRINT_NEWLINE, None), (SetLineno, 9), (LOAD_GLOBAL, 'z'), (PRINT_ITEM, None), (PRINT_NEWLINE, None), (LOAD_CONST, None), (RETURN_VALUE, None)] From current scope: [(SetLineno, 2), (LOAD_CONST, 1), (STORE_NAME, 'y'), (SetLineno, 3), (LOAD_NAME, 'y'), (PRINT_ITEM, None), (PRINT_NEWLINE, None), (SetLineno, 4), (LOAD_NAME, 'z'), (PRINT_ITEM, None), (PRINT_NEWLINE, None), (LOAD_CONST, None), (RETURN_VALUE, None)] Code defined in another scope, using a local rather than a global. [(SetLineno, 66), (LOAD_CONST, 1), (STORE_FAST, 'y'), (SetLineno, 67), (LOAD_FAST, 'x'), (PRINT_ITEM, None), (PRINT_NEWLINE, None), (SetLineno, 68), (LOAD_FAST, 'y'), (PRINT_ITEM, None), (PRINT_NEWLINE, None), (SetLineno, 69), (LOAD_DEREF, 'z'), (PRINT_ITEM, None), (PRINT_NEWLINE, None), (LOAD_CONST, None), (RETURN_VALUE, None)] In summary, this tells us: Store a local variable: STORE_FASTLoad an argument: LOAD_FASTLoad a variable local to function: LOAD_FASTLoad a global: LOAD_GLOBALLoad a value from the enclosing scope: LOAD_DEREFLoad a value from the same scope: LOAD_NAMEStore a value in the same scope: STORE_NAME So in order to rescope a code block to execute in the current scope, we need to transform LOAD_FAST and LOAD_DEREF into LOAD_NAME, and STORE_FAST and STORE_DEREF (which we haven't seen here) into STORE_NAME. Transforming Byte-codes The Byteplay module allows us to iterate over the opcodes. It stores them as a list of tuples. Because lists are mutable we can replace the byte-codes we are interested in. The Byteplay module also has a dictionary called opmap, which is a mapping of opcode names to their symbolic values. 
LOAD_FAST = opmap['LOAD_FAST'] STORE_FAST = opmap['STORE_FAST'] LOAD_NAME = opmap['LOAD_NAME'] STORE_NAME = opmap['STORE_NAME'] LOAD_DEREF = opmap['LOAD_DEREF'] STORE_DEREF = opmap['STORE_DEREF'] def AnonymousCodeBlock(function):)) At the start of the function AnonymousCodeBlock we use Code.from_code to turn the function byte-code object into a Byteplay object. By the end, so far, we have a list newBytecode which holds our transformed bytecode. There is one more step. We need to turn this back into a code object, but one which executes in the current scope. This means that we need to set the freevars attribute to () (empty) and the newlocals attribute to False. code.newlocals = False code.freevars = () return code.to_code() Because we're not interested in functions which take arguments, we ought to check the function we've been passed. inspect.getargspec makes this easy. The full AnonymousCodeBlock, looks like this. from byteplay import Code, opmap LOAD_FAST = opmap['LOAD_FAST'] STORE_FAST = opmap['STORE_FAST'] LOAD_NAME = opmap['LOAD_NAME'] STORE_NAME = opmap['STORE_NAME'] LOAD_DEREF = opmap['LOAD_DEREF'] STORE_DEREF = opmap['STORE_DEREF'] def AnonymousCodeBlock(function): argSpec = inspect.getargspec(function) if [i for x in argSpec if x is not None for i in x]: raise TypeError("Function '%s' takes arguments" % function.func_name))) code.code = newBytecode code.newlocals = False code.freevars = () return code.to_code() Using AnonymousCodeBlock To use AnonymousCodeBlock you pass it a function. It returns a code object which represent the body of the function. You can execute this with a call to exec. Local variables used by the code, and names bound by it, will be looked up and bound in the scope in which you execute the code. 
    def thunk():
        print "In thunk"
        print x
        x = 45

    def getInnerThunk():
        x = 1
        z = 3
        def innerThunk():
            print 'In inner thunk'
            print x
            x = 7
            print z
        return innerThunk

    def main():
        x = 20
        z = 10
        codeObject = AnonymousCodeBlock(thunk)
        exec codeObject
        print x
        codeObject2 = getInnerThunk()
        exec AnonymousCodeBlock(codeObject2)
        print x

    main()

    x = 5
    z = 6
    print 'in local'
    exec AnonymousCodeBlock(getInnerThunk())
    print x

The above code uses two functions which work with the variables 'x' and 'z'. One of the functions (thunk) is used directly. The second (innerThunk) is obtained by calling getInnerThunk. If you run it (I won't spoil the surprise), you'll see that it does what it should. The variable 'x' is printed and then changed: whether the function comes from an inner scope or not, and whichever scope it is executed in. So there we have it, an implementation of anonymous code blocks for Python, sort of.

Note

Note that AnonymousCodeBlock doesn't change global lookups. You probably shouldn't use it in production code either.

Last edited Mon Apr 21 00:51:32 2008.
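To make the demo's behaviour concrete without running Python 2, here is a rough Python 3 emulation (my sketch, not the article's code): a module-scope compile stands in for AnonymousCodeBlock, and a list named out stands in for the print statements.

```python
# Stand-in for thunk's body: read x, record it, then rebind x to 45.
block = compile("out.append(x)\nx = 45\n", "<thunk>", "exec")

# "main" scope: x starts at 20, as in the article's demo.
main_scope = {"x": 20, "out": []}
exec(block, main_scope)
assert main_scope["out"] == [20]   # the old value of x was read...
assert main_scope["x"] == 45       # ...and x was then changed in that scope

# Module-level scope: x starts at 5; the very same block is reused.
mod_scope = {"x": 5, "out": []}
exec(block, mod_scope)
assert mod_scope["out"] == [5] and mod_scope["x"] == 45
```

This mirrors the article's claim that x "is printed and then changed" in whichever scope the block is executed.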
User:SPIKE/2014-3

user Kamek98

- Other conversations in '12.
- Conversations in '13 before Oppage. After oppage: 2 3 4
- RFCs received in 2013 of no enduring interest except to those studying the requestors.

too. -- RomArtus*Imperator ITRA (Orate) ® 13

You promised me you would edit my Welcome message to change the personalization to yourself (or you can remove the personalization entirely) before using it to welcome new users. Do not deliver remarks that seem to be from me "on behalf of" me. Separately, everything you know about good writing needs to be subordinated to evaluating how enforcement will look to a very new user. In particular, you should not engage in a revert war with a newly registered Uncyclopedian (who might not know how to see your rationale in the Change Summary) without explaining your disagreement, probably on his talk page. (In the case of Idontthinksomeonehasthisusername, this is now done.) Spıke. 18

Requests for Autopatrolled

BlogsyQuenz

Has gotten straight to work cleaning up errors in the site. Doesn't look like a spammer, and has potential for humor. --The Shield of Azunai DSA510My Edits! 01:38, July 29, 2013 (UTC)

- Mostly positive contributions; thank you for delivering praise. Simsie found one bad move (at 4chan). I'll keep an eye on him. Spıke Ѧ 12:39 29-Jul-13

Improving other articles

I have been reading the Uncyclopedia pages on all the countries of the world, and I noticed one happened to be missing: the United Kingdom. When you type in UK, there is a redirect to the page of Great Britain, which is an island that makes up a large portion of the UK. However, the article is written to describe the UK exactly (an island that constitutes a nation wouldn't have a prime minister, etc.), so I think this article needs a title change. Another article I'd like to improve is that of Pokémon. The article has one paragraph that constantly repeats itself throughout the article, but it is uneditable (I know that's not a word).
I'm sure other Uncyclopedia users and me could put together a pretty good article that actually contains more information, but this is impossible at the moment. Thanks Spike! --Chocolin (talk) 22:01, July 29, 2013 (UTC)Chocolin - On Great Britain, I agree with your comment on nomenclature. However, we only have the one article, and it doesn't bother me that "Great Britain" and "United Kingdom" point to the same text. If your point is that the wrong one is the redirect, it is valid. Would you please mention this to ScottPat, as he tends to have extremely strong ideas when it comes to GB and UK? Likewise if your point is that you'd like to write a second article with a different comedy take (which is explicitly allowed, and doesn't have to be consistent with the existing article). - Pokémon is presently a redirect to another page. It has been protected since 2008 so that only Admins can edit it. I have changed this protection so that you can make it something other than a redirect. However, please make it easy for readers who actually want to see the other page to get to it. Spıke Ѧ 04:28 30-Jul-13 The page that I was talking about that needed editing was the Pokémon (video games series) page. The same paragraph repeats over and over, and I think this could be a pretty good written article if it sprouted out of the paragraph and into other information. In my opinion, the redirect is good because we don't need to separate pages for Pokémon and Pokémon (video games series), since the franchise mainly is a video game series. I'm not sure if this page was originally written by an admin, but it still is locked from editing. Thanks Spike. --Chocolin (talk) 21:57, July 30, 2013 (UTC)Chocolin - Pokémon (video games series) was protected by departed Admin MrN9000 in April 2012. I find it tedious; the joke that Pokémon video games are distinguished mainly by having different signature colors and by nothing else, really does not have to be told 19 times. 
In his Change Summary in the Protection Log, he raises the possibility that a better article could be written, but wanted to see it done before letting his article be transformed. I tend to agree; tedious or not, it does what it is trying to do very well. If you have a different comedy strategy, I don't doubt that it will be better (or at least more diverse), but do pursue it in a separate page--for example, the now-editable Pokémon. - Separately, I appreciate your vote to feature Cap'n Crunch--but it ran last week. Spıke Ѧ 02:20 31-Jul-13 Thanks Spike. I might go ahead and do that, but I don't know if the articles I'm creating are good enough along the lines of humor! I feel dumb for not getting MrN9000's joke, and that makes the article have a pretty good humor strategy. Earlier, I mentioned the UK article, and found that ScottPat will not return from vacation until mid-August, so I guess I'll wait on that. If you have any sports, geography, or other articles that need completion or editing, I'd be happy to help! --Chocolin (talk) 03:10, July 31, 2013 (UTC) A risk-free option is to click here: User:Chocolin/Pokémon. For other suggestions, you can browse Uncyclopedia:Requested Articles or peruse our articles on nations of the world, states of the US, and states of Mexico, for gaping quality holes. Boston Red Sox is an article that departed user Kamek98 began with the strategy of exploiting cheap puns, and on which I insisted that it had to reflect some of the club's history, but never finished the job. Spıke Ѧ 03:28 31-Jul-13 Llwy's talk Sorry for bothering you but I just noticed that User talk:Llwy-ar-lawr is protected now but it is still not complete, as you did not put back all the deleted posts. Thanks. Anton (talk) 09:51, July 30, 2013 (UTC) - I don't know how that happened! but thank you for catching it. And you surely don't have to apologize for "bothering" me after I made a mistake! Spıke Ѧ 02:20 31-Jul-13 A bit hypocritical, much? 
For so long, I've seen the interwiki's. And I see no problem with the welsh and assorted interwiki's being added. Why hypocritical, you ask? Well, I haven't clicked every interwiki, but one of my more frequented uncyc's is, [gratuitous advertisement deleted]. I know we have an obligation to wikia, but, its not our first purpose to be wikia slaves. Our purpose is to spread humor of all kinds across the planet...( I know that sounded a bit corny, sorry.) The more languages we have, the farther the spread of our humor. Thats one thing I like about the fork. That is, they are (at least, in theory), more focused on the humor part, rather than politics. And I'm sure the TOS states that promotion of non-wikia stuff is not desired but it isn't barred. And what say you, about the tons of links still in articles? I know that some of that is a bit of "boosterism", but some is to add on to the article. Wikipedia, has interwikis, and links to external stuff. You're enforcing wikia policy too much. I know it should be enforced enough to keep us from degrading into encyclopedia dramatica, but keep in mind, we're a humor wiki. The rules are (or in my opinion, should be) relaxed a bit. Llwy is just trying to help out her division of uncyclopedia. Is that so wrong? I mean, uncyclopedia is already distanced from wikia, a bit, and its not like swarms of readers will leave us to go to the welsh site. Think about it, how many people here have even a basic grasp of Welsh? And much less a grasp of welsh humor. I am kindly requesting you to let Llwy do her thing, I won't complain if you throw in a clause to write for the english uncyc, but you shouldn't bar her from spreading a welsh uncyc. Again, how many users would actually be permanently diverted from this site to go to the welsh site. And also, don't mess with the japanese interwiki links. Sincerely, --The Shield of Azunai DSA510My Edits! 18:03, August 1, 2013 (UTC) - I'm kind of confused by this elipsical post here. 
The links are there...as far as I know...no one has taken them down. Llwy has been somewhat disruptive and has also claimed three times she was leaving. I also think that SPIKE only grudgingly adheres to wikia's policy when it goes against the interests of uncyclopedia and that he tries to work with wikia rather than against them. I'm not a fan of giving in too much to wikia and I don't think anyone else here is. --ShabiDOO 18:39, August 1, 2013 (UTC) - Llwy has not danced like she has never danced before and so she cannot edit here anymore. Concerning the interwikis, this really concerns the interwikis, doesn't it? User:Anton199/sig/Parody/Spike 18:48, August 1, 2013 (UTC) Reply Denza: I do not, and you have not claimed that I do, assert a privilege that I deny to other users. Therefore, there is no question of hypocrisy and I assume you started your message this way just to get my attention. My honor does not need defending, but if you engage other Uncyclopedians with such name-calling, I am sure we can do without you for a few days. Llwy was not "trying to help out her division of uncyclopedia"--another use of the "one community" mush that people use to divert attention away from misbehavior. She has chronically used this website to send traffic to, and serve the interests of, other websites, and her most recent crusade was that we voluntarily shut down this website. Llwy's only defense has been that (1) she was in a bad mood, (2) she has the courage to name her offenses explicitly and say she probably shouldn't have committed them, while committing new ones such as ban evasion, and while noting that banning will not be effective. She has explicitly stated that she does not intend to contribute more content (political screeds evidently excepted). 
I do not know how you became her spokesman, but it is a reminder that your good work patrolling this site has to take into account your recurring philosophy that the site is less about writing funny stuff than about playing games of personality politics; as before, making alliances and strategically doublecrossing them. It is a cheap shot that everyone who enforces Wikia's Terms-of-Use is a mind-numbed robot. There is lively discussion that you are not aware of. But I have been on this website for four years and I enthusiastically enforce the rule against supposed contributors sucking away its resources, and I don't care who owns the site that benefits. Wikia is free to cross-sell its other wikis in its ads in the footer. I recently deleted a post where someone who started a new Wikia wiki tried to recruit an inactive Uncyclopedian. I do not have mastery of the Interwikis, but apart from the useful function of sending foreign-language readers to a version of the page they would enjoy more--and apart from more faithfully spoofing Wikipedia--I am not sure why we do Interwikis either. Spıke Ѧ 12:03 2-Aug-13 Interwikis Spike, I would agree with Denza if I, myself, would notice that some interwikis get deleted. But I haven't and now I am addressing myself to you with a complicated issue: on the main page there is an interwiki to [1] while it can be considered dead. The site which is active is [2], although it is not a wikia site. Do you think we ought to change the link? Anton (talk) 16:34, August 2, 2013 (UTC) - You are saying that there are two Russian-language Uncyclopedias and one is more active than the other (though the UnNovosti in the "dead" one aren't entirely dead or ancient). In an unaligned world, the question of which of the Russian Uncyclopedias is "better" is an inherently political one on which we need not take a stand. 
The world is not unaligned, and I'd expect Wikia's position is that we should not actively divert traffic away from another Wikia franchise. There was a week, earlier this year, when Wikia seemed to be getting active in the defense of this site; namely, removing from the Interwiki table the links to some foreign wikis where en:does not link back here. I do not know whether there was any follow-through. - I do not think "we" ought to change the link; because "you" can't do it and "I" don't know how to do it, and especially because I concede it would be a Terms-of-Use violation to do Russian readers the "favor" of helping them avoid the relevant Wikia product. - We have a variety of foreign users whose only business here is adding Interwiki leaks to their own websites. This does not help our site except to become a giant table of pointers out, which is ultimately not a traffic-builder. We also have a huge table of Interwikis on the main page for no evident reason other than to pat ourselves on the back for being part of a World Movement. This table confronts and sidesteps the question of whether there is more than one suitable destination in the foreign language. I do not know why a reader interested in Russian-language humor comes here at all, nor why we would care to offer him an opinion on where he should go instead. Spıke Ѧ 17:50 3-Aug-13 - When I came here, I was very interested in what was happening here (and I still am). And I had no idea that there was a Russian site like this one. And guess what? I found it with the help of the main page but it created a lot of problems as I went to the old one first and all the users who are not wikia staff left it, so it is dead as a humor site, not as a wikia site. And it took me quite a while to understand which site was actually working. 
Anton (talk) 19:24, August 3, 2013 (UTC) Spike, I thought about this and think that what you said is extremely sensible and I actually remember that there still is some action going on on the old site. But I am very worried by the thought that, by forbidding interwikis to external websites, we enclose ourselves in the wikia family and our site becomes a complete wikia site. And wikia does not always equal humor. Anton (talk) 19:37, August 3, 2013 (UTC) - Interlinks are mutually beneficial as long as both are linking to each other. To be honest...with the languages I know...I've clicked on the other websites...noted the general lack of quality and a different (not always in a good way) sense of humour...read a few articles and never really saw the site again except to write a few articles that were utterly ignored. Readers on the other sites may visit and perhaps stay...considering we offer links to see the best of our features, openly encouraged to vote on the best articles, an (at times) outstanding news component which is constantly being rolled over with new and timely articles on a daily basis etc. etc. etc. In this sense...we are far more likely to retain readers than anglo-saxons who visit foreign websites as none of the other websites offer this to the extent and of the quality that we do. As long as the website is in the uncyclopedia tradition...and the website links to us for "english" ... I believe it is advantageous to keep the links. --ShabiDOO 20:13, August 3, 2013 (UTC) - Replying to 19:24: I don't concede your narrative. Even in this locale, some argue that the only people electing to remain on a Wikia website must be "Wikia staff" or at least bought off, which is the cheap shot, common in partisan politics, of accusing adversaries of being driven merely by lucre. The question is why this website should take a position on two websites in a foreign locale, especially why it should favor the non-Wikia one.
- Replying to 19:37: Of course it is unlikely that a Wikia product is always the best (funniest) at what it is trying to do. But Wikia is using us to build Wikia traffic, and I have always found that "price" of using its free services for instant global publication the least burdensome agreement I have ever worked under. By comparison, in almost any radio job or even in sports announcing, you will find actual words placed in your mouth. We do not "enclose ourselves in the Wikia family": We do not agree never to look outside. The Terms of Use as I understand them is that we simply agree not to recommend to prospective Wikia readers that they instead go elsewhere; and I don't mind that a bit. - I am not prepared to conclude that the Wikia Russian Uncyclopedia is the worse of two Russian Uncyclopedias. And if it were, the two options are to join (or assist) the exodus, or to stay and get the work done. Spıke Ѧ 20:27 3-Aug-13 Thank you because you might not understand how helpful your comment was! And because of this I just remembered that when I joined the Russian wikia, a user from the fork immediately sent me an e-mail that I should not be working there. Anton (talk) 20:59, August 3, 2013 (UTC) Further intercession I am not really sure if I am right to write this thing here, as you probably consider the situation over but: - Llwy blanked her talk page for the first time because, as she said, she did not know it was forbidden, seeing that others have done it. I, personally, have never seen anyone blanking his talk page and not being blocked but maybe she did. But who knows? Anyway, she got banned for one day. - Then she blanked her talk page twice. So, at the same time, she reverted an admin and did the same thing for which she got banned for the first time. But: "21:44, July 29, 2013 Llwy-ar-lawr (talk | contribs) . . (343 bytes) (-42,210) . . (Please let me do this.
A talk page is a place for contacting the user; if the user can only be found elsewhere, the talk page should point there)". This comment basically says that she does not know she is doing something wrong and I think that she did not believe that you banned her only for blanking (but for a political reason). And this is not surprising knowing everything she had said before and after. Anyway, she got blocked for three months. - Llwy made her unblock request, then changed her mind and decided not to stay here anymore but still argued for a while and left several messages. This is clearly ban evasion. But Llwy is neither shy, nor quiet: when she thinks that she is being attacked, she defends herself and I think it was impossible for her to leave without defending herself. And, in addition to this, going directly from one day block to a three months one is rather quick, isn't it? So... There is no right or wrong in this situation because Llwy is completely sure that the site is very authoritative and does not allow her at all. And when a person thinks that, he can fall apart and begin committing mistakes only because he thinks that he fights for the right causes. And I know this on my own example because I have almost the same thing on the Russian uncyc now and I am trying very hard to understand whether it is me who is causing problems or the admins who want to throw me away. Conclusion (if there is any): I disagree with Llwy because I think that the site is democratic and user-friendly. So if I am right, maybe she just deserves a chance to understand how everything works down here and see for herself that people do not get banned because they express their thoughts openly? I am sure it is none of my business but ... it is a question and not a request and I, myself, would help her in every way I can if she gets into further conflicts. Anton (talk) 12:45, August 4, 2013 (UTC) - I do consider the situation over.
I greeted her in February with my standard message, which includes: "Don't delete anyone's messages. In case of any controversy, we depend on an accurate record of what was written." My standard message does not include a table of punishments, nor should it. The first ban was for a token interval; the follow-on ban was for deliberate, repeated misconduct, and I have banned one Anon that was she, as I will ban any others. No Admin here bans users "for political reasons"--that is, on the basis of their opinions, even hers that this website owes it to others to go out of business. Personally, I do not think she blanked her talk page merely to resign but because my rebuttal of her final comments did not reflect favorably on her. - As you set out: She was not here to contribute content; she was a drama queen with a preconceived opposition to Wikia, to any rules she felt like disobeying, and to me. These are not even banning offenses, but together I have no motivation to set aside the ban. And no, it is none of your business, unless like DungeonSiege, you view this website not in terms of writing content but as a game of forming and breaking alliances. Spıke Ѧ 14:50 4-Aug-13 A question Hello. I was recently blocked. I am wondering if I could have more info on why (specifically what I vandalised) —The preceding unsigned comment was added by Phant0mhaX0r (talk • contribs) - Thank you for your inquiry. After typing the unhelpful, and unsigned, comment "Writer block is umm… er… let's just say meh" at Talk:Writer's block, you went to work on an actual article, Wikipedian, and I quote, "There is a reason why The Site Which Must Not Be Named has no spork label. It is a HUUUUUUUUUUUUUUUUUUGE spork." Together with a user page full of L33T, this struck me as vandalism and I banned you for 1 day. I also provided a detailed explanation along the same lines as this, at User talk:Phant0mhaX0r, plus reading suggestions for you. 
Spıke Ѧ 12:03 2-Aug-13 QVFD Spike, sorry, but could you, please, delete UnSignpost/Template:News (but it is a redirect page) and restore Uncyclopedia:UnSignpost_Template? Maybe I posted it the wrong way on the QVFD. Anton (talk) 19:21, August 4, 2013 (UTC) - Done. The reason the rules call for you to use {{Redirect}} on UN:QVFD is to keep us from clicking through it to the page redirected to. Spıke Ѧ 20:39 4-Aug-13 - No rush. However, I am baffled that the UK and Russia seem to have been annexed by France so that everyone has the entire month of August on vacation. Spıke Ѧ 15:54 7-Aug-13 - It's attractive; and it probably makes a lot more sense for us to mimic (and track) Wikipedia rather than the Fork. A couple years ago, UnNews was re-skinned to continue to imitate Wikinews. However, the word UnSignpost is annoyingly large; I think humor should be achieved by funny writing, not by large lettering. Also, the date of the articles seems to be coded as a section heading at the same level as the headline itself, which should not be the case. Spıke Ѧ 18:10 7-Aug-13 Ok, thanks. There is actually another problem to fix (except the two you mentioned): the newspaper can fold itself (see my talk page "test") but cannot get unfolded. Then delivering it makes no sense, if the readers won't have a means to open it. Anton (talk) 19:25, August 7, 2013 (UTC) - I finished this current issue and am planning to deliver it tomorrow. The only problem is that it does not fold itself (I fixed the first problem - the fact that it could not unfold itself). But this is impossible if the formatting is similar to Wikipedia's, and I don't think we really need this. You can write an article for the next issue (if there is anything to talk about), as I agree with you on the newsworthiness of recent events.
Anton (talk) 19:55, August 9, 2013 (UTC) - May I explain Spike's joke: The French are famous for their holiday routine in which every single Frenchman decides to take a holiday at the same time in August so that France basically closes down for a month. He presumed that because the Russians and Brits on this site have their holiday in August that they must be turning French., August 10, 2013 (UTC) - Oh, now I get it. Spike probably found my comment extremely stupid. I was baffled by the word "annexed" the meaning of which I did not know well. Thanks for explaining! Anton (talk) 20:21, August 10, 2013 (UTC) Token ban for Denza Spike, Denza is upset by his ban even though it is rather short. This is what he said exactly: "I just wanted to direct the user to an admin, but I now realize that I should have linked him to UN:AA". So he is sorry and asks you if you can unban him, please! Anton (talk) 19:00, August 13, 2013 (UTC) - Indeed, UN:AA is exactly the right way to do it. The system is not letting me unblock Denza, so he will have to wait another 3 minutes. Spıke Ѧ 19:49 13-Aug-13 - Should have used UN:AA, I'll remember that next time. --The Sieger of Dungeons Lord Denza Aetherwing Inventory 19:57, August 13, 2013 (UTC)(shiny new sig!) - Everyone should; and if you agree with me, vote with me at VFD, where I have nominated this piece of Performance Art for deletion. Spıke Ѧ 20:04 13-Aug-13 UnSignpost Subscription Just wanted to tell you that after V V I P's comments, I made another option for the UnSignpost subscribers: from now on they can choose to only receive a link to the new issue and the news summary (like on Wikipedia). I am just informing you, so you do not have to react to this in any way. Anton (talk) 16:18, August 15, 2013 (UTC) - This is a very good idea. Having the UnSignpost be a template means that, although it does not increase the character count of people's talk pages, it vastly increases the character count of their talk pages as rendered.
Even to select the correct section of an infrequent editor's talk page on which to comment, I have to download the complete text of all UnSignposts since he last archived each page! This is a concern for those of us on slow and metered Internet links. I would like this option to become the default (though I am not "voting" for this, as it isn't my talk page and I don't read it by subscription). Spıke Ѧ 16:28 15-Aug-13 User:Reverend P. Pennyfeather's block Spike, don't you think that a two week block is a bit much for being drunk and mis-spelling votes on VFH (or as the Reverend now calls it "VHF"). As far as I can tell he hasn't done anything bad to this site, so wouldn't a shorter block be:50, August 16, 2013 (UTC) PS - It was exam results day over here in England and Wales, which may explain his drunkenness (depression or celebration). I find it slightly amusing he started correcting spelling to British English from American English when usually he distances himself away from me when I do that (and not even I go as far as he did!) but all in all he's only drunk for one night so I don't think he ought to have that long a:07, August 16, 2013 (UTC) - Yikes! I meant two hours, not two weeks, as evidenced by my comment that he sleep it off, and not Rip Van Winkle style! You are right to object. Fixed now, and let me blame force-of-habit. Spıke Ѧ 11:10 16-Aug-13 Cotswold Olimpick Games Whoops. Mis-read. I thought it implied "two weeks earlier than the date that the reader is reading the article on." Thanks for the revert., August 17, 2013 (UTC) NASCAR You again. *sigh*... Do you know anyone who knows about NASCAR? I worked hard on that page, don't want it deleted... —The preceding unsigned comment was added by Aaronaraujo2013 (talk • contribs) - I don't think the article is at risk of being deleted, but more content is always better. Good move to call for help on the article's talk page. On the present page, it is always going to be "me again."
No Uncyclopedians who are also NASCAR fans have advertised that fact. Spıke Ѧ 20:24 17-Aug-13 UnBooks:My Summer Vacation in Saudi Arabia On the nomination I responded to your against vote. Are you voting against it because you think it should be in some other namespace? I agree...it ought to be in unbooks. --ShabiDOO 20:25, August 17, 2013 (UTC) - I do think it would be better in UnBooks; but I also think the function of the main page is to showcase our "encyclopedia" identity and not the fact that we have good writers doing relatively unrelated things. Spıke Ѧ 20:29 17-Aug-13 - It doesn't seem idiosyncratic to hold that the Uncyclopedia main page showcase Uncyclopedia rather than individuals. But I have suggested to the editors that the next UnSignpost flog the question of whether my opinion is "namespace bias" as is prohibited in VFH and what, if anything, should be done about it. Repeating a previous conversation in response to a complaint of Anton199, it is not one vote against that is fatal but persistent lack of votes in favor. Spıke Ѧ 22:09 17-Aug-13 - PS--Are you a car-racing fan? If so, see immediately preceding section. Spıke Ѧ 22:11 17-Aug-13 - The feature page was set up...from the beginning...to showcase the best of the best of uncyclopedia. This has...over the years of its existence included all sub projects which parody Wikipedia's projects. Based on the voting patterns of nearly all users in the last so and so years...the community clearly, openly and welcomly voted to feature non mainspace articles. The only exception I can think of was when there were four or five unnews in one week...and I believe the dissenters were often over-ruled by the votes of the rest of the community anyways. I could count on my hands the amount of times I've seen users openly vote down an UnProject article because it wasn't mainspace.
And even then...that featured mainspace articles should be thoroughly "encyclopedic" is an issue that I have never ever heard mentioned in votes or on forums about the vision of uncyclopedia. Perhaps there are a few rare moments or there were some of those before I came here. - I suppose if users became more active in voting against articles that didn't meld with their vision of the wiki...then indeed there would be more articles featured despite votes against and perhaps more balance. Perhaps we should all become more proactive by letting users know our own visions of the wiki and voting in kind. Not a bad idea considering the lack of users. Now is a great time to openly carve out a vision that encompasses how all the remaining and new users see the future uncyclopedia fork here. Great time for change. - Sorry...I don't know anything about NASCAR except for how southpark parodied them. It was an okay episode. --ShabiDOO 02:14, August 18, 2013 (UTC) - I agree with Spike that 1 vote does not kill an article. I have had a few featured recently with one and even two votes against (some of those votes against coming from both you and Spike). As for whether an UnBooks can be featured it is a dilemma. You don't want to spoil the parody effect of the front page but I reckon that chances are you'd have got the joke before you get to the front page because most people find it through article pages. - By the way what happened to the content warning as I no longer get it. Is it gone finally?:36, August 18, 2013 (UTC) - Wikia removed it after a discussion with the admins on this website. Regards VFH, the issue of voting against an article has been a long running issue that pre-dates the schism. I see no problem with people voting against an article. Regards whether an article should be 'encyclopedic', 'navelistic' or personal ('The Day My Fridge Ate the Postman' type story) is up to the people who are active on this site.
I have my own preferences but when it comes to VFH it is a matter of drumming up enough, August 18, 2013 (UTC) Sorry, but isn't it this: "Articles from all namespaces (including UnNews, UnTunes, HowTo, UnBooks, etc.) are eligible for VFH. Votes against articles based on namespace prejudice will be discarded"? Anton (talk) 13:45, August 18, 2013 (UTC) - Yes, it is. Now see Forum:Namespace prejudice. Spıke Ѧ 13:55 18-Aug-13 UnNews:Russia makes prison time a prerequisite for voting Thank you very much for proofreading my UnNews! However, I changed some of the sentences back, as probably you did not fully understand my concept. So, if you want to know more about Russian politics, here is some information: - Alexey Navalny is the greatest opposition leader in Russia right now (the greatest, because of the amount of his supporters and because of his actions against the government, and not because I like him the most). He is known for criticizing the government's actions openly. This is why he got arrested. This is not my opinion, this is just that all the evidence got falsified and after taking a close look at all the process, it will become obvious that he is innocent. So I am not parodying him but the government. - I did not name the sponsor, knowing that the law is fake, which is another aspect of my concept: I parodied all the silly and pointless laws that the government made recently and another reason for not talking who the sponsor was, is that anyone from the government could be him. - Finally, I did not want to annoy the reader with many Russian political details (what I am currently doing now) but I wrote about Alexey Navalny, knowing that his verdict became very famous outside Russia and caused a lot of people from different countries to express their opinion on the Russian democracy. I am not sure whether you know this or not, but the press (not the Russian one) even associated Navalny with Nelson Mandela. 
Thank you for your attention and sorry to bother you with all this! Anton (talk) 14:20, August 18, 2013 (UTC) - No, I did not want to know more about Russian politics. I just wanted it to read like a news story, which means it must open with what just happened, not how what happened was conceived. Failing to state the law's sponsor seriously detracts from the resemblance to news. What you should do instead is contrive an explanation for how the law's sponsor was unknown or unavailable. - "This is why he got arrested" is of course your opinion, as arrests in Russia do not correspond exactly to transgressions. You are welcome to describe this arrest as a reprisal, but you should do it delicately and with irony--not add a footnote saying essentially that people who don't come to your conclusion aren't paying attention. This feels like advocacy. It always works to have the "writer of the article" be credulous and repeat the explanations of Government without questioning them, even though the reader will. Spıke Ѧ 14:34 18-Aug-13 What I said in the UnNews is "People who disagree with the verdict did not study the affair closely", which is irony, as actually those who disagree with it are the people who studied the affair closely. And thank you for your criticism: I will include the reason for which the sponsor is unknown. Anton (talk) 14:37, August 18, 2013 (UTC) Cupar, the UnSignpost and your talkpage Do you think the article needs more work? I think that I did all I could. Anton (talk) 20:35, August 18, 2013 (UTC) - It is now in good shape, but not yet remarkable enough for the main page. The challenge on a VFD save is not to produce the best possible article. If that is your goal, I had a look at the Wikipedia article and there is material about history and commerce that might be sporked and ridiculed. Spıke Ѧ 20:58 18-Aug-13 I was actually not planning to feature it. But if it is good, I can add history and commerce and maybe nominate it.
Anton (talk) 09:52, August 19, 2013 (UTC) There will be too many small sections on your talkpage if I keep on creating new ones every time I have a question, so I just decided to post this here: - Please, could you take a look at this forum and vote, if you care about the UnSignpost format? - Do you know that before the contents of your talkpage are downloaded completely, there is the Uncyclopedia logo which appears at the upper left corner right where Spike the Dog's head is and disappears immediately? This gives the body of the dog and the Halloween Pumpkin at the top, because there is not enough time to notice that it is a potato. Anton (talk) 17:42, August 19, 2013 (UTC) Promising new Uncyclopedians Could you, please, make Tyrone McGee autopatrolled? His edits are very good and he is adopted by Scott, so they will get reviewed anyway. Thanks. Anton (talk) 19:13, August 19, 2013 (UTC) - Not yet. He aroused my suspicion by making a bee-line for Cunt and Masturbation (sport), and his Sir Swagsalot article, even after renaming, makes me regard him as part of a recent trio (with The TwaFFs and Kody-the-Fox) of promising new Uncyclopedians who have not yet bought into the idea of writing articles that "encyclopedia" readers will actually search for. Spıke Ѧ 21:28 19-Aug-13 - I agree that it is not the most encyclopedic; however, for a first article it is very promising.:15, August 19, 2013 (UTC) Ok, fine. Anton (talk) 11:38, August 20, 2013 (UTC)
http://uncyclopedia.wikia.com/wiki/User_talk:SPIKE?oldid=5725701
Details - Type: Improvement - Status: Closed - Priority: Major - Resolution: Fixed - Affects Version/s: None - Component/s: XML Configuration - Labels: None Description To better support ReST-style wildcard schemes ("/article/123/view"), the default action mapper should support action names that contain slashes. To do this, instead of assuming everything before the last slash is the namespace, the configured namespaces should be iterated over and explicitly matched. This fix broke the expected behavior of actions in the default namespace being allowed to be called from any path prefix. For example, in our showcase, we have an action name of "AjaxRemoteForm" in the default namespace (""), and it needs to be called, among other places, from "/ajax/remoteforms/AjaxRemoteForm". This capability of calling actions from any path prefix in the default namespace stretches back to WebWork 1 and is a somewhat frequently used feature. Therefore, I added a new struts.properties setting, struts.enable.SlashesInActionNames, which, well, enables slashes in action names, but by doing so, you lose the ability to call actions in the default namespace from any path prefix. Therefore, the default value will be "false". If you want to enable slashes in the action names, it is probably because you use wildcards frequently, in which case, you can simulate this feature by putting "*/" at the start of your action name. Therefore, to fix the above, the action name would be "*/AjaxRemoteForm", so now, that action could be called from anywhere. I think we should keep the default setting "false" to make WebWork 2 migrations easier, but perhaps for Struts 2.1 we'll default to "true". Hi Don, What do you think about having a different action mapper that allows slashes in action names, and allow switching by switching the action mapper? This way we could avoid having the struts.properties setting.
rgds Well, you'd still need to modify struts.properties to switch over to the new action mapper. Besides, I'm not sure how I feel about a ton of different ActionMappers, each with small features that separate them. We did that with ActionForms in Struts and that didn't turn out too well. I see. I guess if it didn't work out in Struts 1, we probably shouldn't try it with Struts 2 then. Got another random thought, Don. What do you think about the concept of a composite action mapper that would somehow allow multiple action mappers and somehow select one? From first thought, it would complicate the action mapper configuration; not sure if this is a good sign. Yeah, I think that deserves more thought. If you come up with something, put out a proposal on dev@. Even if we don't change anything, it is worthwhile, IMO, to raise the issue and start the discussion. Fixed in svn. The fix involved modifying the ActionMapper interface to support the passing in of Configuration instances.
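Putting the pieces of this thread together, a hedged sketch of the resulting configuration: the constant name and the "*/AjaxRemoteForm" wildcard pattern come from the comments above, while the action class and result page below are placeholders, not part of the issue.

```xml
<!-- struts.xml (sketch): enabling slashes in action names and simulating
     the old "callable from any path prefix" behavior with a wildcard. -->
<struts>
    <!-- Mirrors the struts.properties setting discussed above -->
    <constant name="struts.enable.SlashesInActionNames" value="true"/>
    <package name="default" extends="struts-default" namespace="">
        <!-- "*/AjaxRemoteForm" matches the action under any single path prefix,
             e.g. /ajax/remoteforms/AjaxRemoteForm. Class and result are placeholders. -->
        <action name="*/AjaxRemoteForm" class="example.AjaxRemoteFormAction">
            <result>/ajax/remoteForm.jsp</result>
        </action>
    </package>
</struts>
```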
https://issues.apache.org/jira/browse/WW-1383
The Android platform offers a wide range of storage options for use within your apps. In this tutorial series, we are going to explore some of the data storage facilities provided by the Android SDK by building a simple project: an ASCII art editor. This is the final part in a tutorial series on creating an ASCII art editor app for Android. In the first three parts, we created the user interface, implemented saving user ASCII pictures as image files, and set up a database to store saved pictures in. This allowed users to choose from the list of saved pictures to load back in for viewing, exporting, and editing. In this tutorial, we will facilitate saving pictures to the database, deleting pictures previously saved, and starting new pictures. This tutorial series on Creating a Simple ASCII Art Editor is in four parts: - Building the User Interface - Image Export & User Configuration - Database Creation & Querying - Saving and Deleting ASCII Pictures Step 1: Detect Button Clicks We will be working entirely in the application's main Activity class this time. Add the following imports at the top of the class: import java.text.SimpleDateFormat; import java.util.Calendar; import android.app.AlertDialog; import android.content.ContentValues; import android.content.DialogInterface; We will handle clicks on the save, delete, and new buttons. In your main onCreate method, set the class up to handle clicks: Button saveASCIIBtn = (Button)findViewById(R.id.save_btn); saveASCIIBtn.setOnClickListener(this); Button newBtn = (Button)findViewById(R.id.new_btn); newBtn.setOnClickListener(this); Button deleteBtn = (Button)findViewById(R.id.delete_btn); deleteBtn.setOnClickListener(this); We added these buttons to the layout files earlier in the series. 
In the onClick method, after the existing code, add to your chain of conditional blocks for these additional three buttons: //user has clicked new button else if(v.getId()==R.id.new_btn) { } //user has clicked save button else if(v.getId()==R.id.save_btn) { } //user has clicked delete button else if(v.getId()==R.id.delete_btn) { } We will add code to each of these blocks to implement the functionality. Step 2: Create New Pictures Let's start with the easiest function, users pressing the new button. In the conditional block in onClick for the new button, reset the text-field to an empty string ready for user input: textArea.setText(""); Remember that we used a variable to keep track of the ID of the currently displayed picture if it has been loaded from the database - reset it too: currentPic=-1; Step 3: Save the Current Picture Let's turn to the conditional block in onClick for the save button. When the user presses the save button, there are two possibilities. Either they are saving a new picture not yet stored in the database or they are saving a picture loaded from the database, then edited. If the user is saving a picture loaded from the database, rather than saving a new entry in the database, we will update the existing record. 
First get the content of the Edit Text: String enteredTxt = textArea.getText().toString(); To model the data we want to commit to the database, either as an insert for a new picture or an update for an existing one, we create a Content Values object: ContentValues picValues = new ContentValues(); The new data will include the text from the text-field, so add it to the Content Values object, using the table column name we defined last time, stored as a public variable in the database helper class: picValues.put(ImageDataHelper.ASCII_COL, enteredTxt); We are going to use a string including the current date and time for the picture name, so build that next: Date theDate = Calendar.getInstance().getTime(); SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd_hh.mm.ss"); String fileName = dateFormat.format(theDate); This is what the user will see in the list of saved pictures. Add it to the Content Values using the same technique: picValues.put(ImageDataHelper.CREATED_COL, fileName); Get a reference to the database: SQLiteDatabase savedPicsDB = imgData.getWritableDatabase(); Now we need to tailor what happens to whether the current picture is new or not. Add a conditional statement: if(currentPic<0){ } else{ } The variable will be less than zero if the current picture is not already in the database (as we set it to -1). If the currently displayed picture has been loaded from the database, this variable will have the picture ID from the database stored in it, in which case the else will execute. Inside the if block, we will save the picture as a new database record: long insertNum = savedPicsDB.insert("pics", null, picValues); This is an insert statement because it is a new record. We pass the table name and the Content Values we created. The middle parameter is for a column name, but we do not need it. We retrieve the result of the insert as a long value, which is the ID of the newly inserted record. 
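The date-based picture name above can be tried outside Android; this is a minimal standalone sketch (the class name is hypothetical, the format string is the one from the tutorial):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;

// Hypothetical standalone demo of the timestamp-based picture name.
class PictureName {
    static String generate() {
        // Same pattern as in the tutorial: year-month-day_hour.minute.second
        Date theDate = Calendar.getInstance().getTime();
        SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd_hh.mm.ss");
        return dateFormat.format(theDate);
    }

    public static void main(String[] args) {
        // Prints something like 2013-08-18_02.35.17
        System.out.println(generate());
    }
}
```

Because the name encodes seconds, two saves in the same second would collide, which is one reason you might later let users choose their own picture names.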
Update the variable so that any new edits saved will be written to the same database record: currentPic=(int)insertNum; Now output a confirmation message to the user, providing the insertion was a success: if(insertNum>=0) Toast.makeText(getApplicationContext(), "Image saved to database!", Toast.LENGTH_SHORT).show(); Now let's turn to the else, for updating a picture already stored in the database: int savedNum = savedPicsDB.update("pics", picValues, ImageDataHelper.ID_COL+"=?", new String[]{""+currentPic}); This time we use an update statement, passing the table name, Content Values and where details. The where part of the statement indicates the ID column and the value to match in it, specifying the current picture ID, so that the correct record is updated. The method expects a string array for the last parameter, even where there is only one value as in this case. Confirm the update to the user: if(savedNum>0) Toast.makeText(getApplicationContext(), "Image saved to database!", Toast.LENGTH_SHORT).show(); We are updating the picture name as well as content, but you can opt to leave the name as a reflection of when the picture was originally created if you prefer. After the else, close the connections: savedPicsDB.close(); imgData.close(); Step 4: Delete the Current Picture Now let's implement deleting the current picture. If the current picture has been loaded from the database, we will delete its record. Otherwise we will just empty the text-field. In the conditional section of the onClick method for the delete button, add a test for this as follows: if(currentPic>=0){ //picture has been loaded from the database - get user to confirm } else{ //picture has not been loaded from database } In the if section we will delete from the database. 
First get the user to confirm using an Alert Dialog: AlertDialog.Builder confirmBuilder = new AlertDialog.Builder(this); Set the dialog message and cancelable status: confirmBuilder.setMessage("Delete the saved picture?"); confirmBuilder.setCancelable(false); Now we need to specify what should happen when the user chooses to go ahead with the deletion, by defining the positive button: confirmBuilder.setPositiveButton("Yes", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { } }); Here we define a new click listener together with its onClick method. Inside the onClick method, we can delete the saved picture - get a connection to the database: SQLiteDatabase savedPicsDB = imgData.getWritableDatabase(); Now execute the deletion: int deleteResult = savedPicsDB.delete("pics", ImageDataHelper.ID_COL+"=?", new String[]{""+currentPic}); We specify the table name, the ID column and the value to match in it so that we delete the correct record. If deletion was successful, confirm to the user: if(deleteResult>0) Toast.makeText(getApplicationContext(), "Picture deleted", Toast.LENGTH_SHORT).show(); Still inside the Dialog Interface click listener onClick method, reset the picture ID variable, empty the text-field and close the database connections: currentPic=-1; textArea.setText(""); savedPicsDB.close(); imgData.close(); Now after the block in which you set the positive button, set the negative button: confirmBuilder.setNegativeButton("No", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { dialog.cancel(); } }); In this case we simply cancel the dialog. Now we can go ahead and show it: AlertDialog alert = confirmBuilder.create(); alert.show(); Now to complete the deletion section of the Activity onClick method, turn to your else statement for when the current picture has not been loaded from the database. 
In this case we will simply empty the Edit Text: textArea.setText(""); Step 5: Tidy Up! We have closed our database connections in each block we used to query, insert, delete or update records. However, in case the user exits the app while a connection is open, we should make sure all connections are closed. Add the onDestroy method to your Activity: @Override public void onDestroy() { } Inside it, close the database helper, then call the superclass method: imgData.close(); super.onDestroy(); Now you can test your app! Check that it correctly saves new pictures, updates existing pictures and deletes pictures on user request by saving a few then experimenting with them. Conclusion The simple ASCII art editor app is now complete. When you run the app, you should be able to enter text characters, save pictures, export them as image files, load, edit and delete previously saved pictures as well as configuring the display colors. The source code download contains all of the Java and XML files we have worked on during the series. There are lots of ways you could enhance this app if you want to explore it further. For example, you could improve the styling of the user interface. You could check whether the user wants to overwrite a stored picture before updating an existing database record, giving them the option of saving a new picture instead. A particularly productive enhancement would be to extend the code to use content providers and/or fragments to load data and target tablet devices effectively. You could also improve the picture saving process, for example by allowing the user to choose a name for each picture they save. In this series we have introduced a few of the basic processes involved in data storage on Android. We have focused on local storage, i.e. data stored on the user device. Another technique commonly used in Android apps is retrieving data over the Internet, which is also a task worth exploring. 
The Android platform facilitates a wide range of data storage and management options. You should now have basic familiarity with some of those most commonly used, giving you a solid foundation for approaching data in your future projects.
http://code.tutsplus.com/tutorials/build-an-ascii-art-editor-save-and-delete-ascii-pictures--mobile-13226
Scroll down to the script below, click on any sentence (including terminal blocks!) to jump to that spot in the video! If you liked what you've learned so far, dive in! video, code and script downloads. Stripe's API is really organized. Our code that talks to it is getting a little crazy, unless you like long, procedural code that you can't re-use. Please tell me that's not the case. Let's get this organized! At the very least, we should do this because eventually we're going to need to re-use some of this logic - particularly with subscriptions. Here's the goal of the next few minutes: move each thing we're doing in the controller into a set of nice, re-usable functions. To do that, inside AppBundle, create a new class called StripeClient: Make sure this has the AppBundle namespace. We're going to fill this with functions that work with Stripe, like createCustomer() or updateCustomerCard(). In the controller, the first thing we do is create a Customer: In StripeClient, add a new createCustomer() method that will accept the User object which should be associated with the customer, and the $paymentToken that was just submitted: Copy the logic from the controller and paste it here. Update $token to $paymentToken. Then, return the $customer at the bottom, just in case we need it: You'll see me do with this most functions in this class. The only problem is with the entity manager - the code used to update the user record in the database. The way we fix this is a bit specific to Symfony. First, add a public function __construct() with an EntityManager $em argument. Set this on a new $em property: Down below, just say $em = $this->em: To use the new function in our controller, we need to register it as a service. Open up app/config/services.yml. Add a service called stripe_client, set its class key to AppBundle\StripeClient and set autowire to true: With that, Symfony will guess the constructor arguments to the object. If you're not coding in Symfony, that's OK! 
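The services.yml entry just described might look like this (a sketch of Symfony 3-era syntax, using exactly the file path, service id, class, and autowire key named in the transcript):

```yaml
# app/config/services.yml — sketch of the service definition described above
services:
    stripe_client:
        class: AppBundle\StripeClient
        # autowire lets Symfony guess the constructor arguments
        autowire: true
```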
Do whatever you need to in order to have a set of re-usable functions for interacting with Stripe. In the controller, clear out all the code in the if statement, and before it, add a new variable called $stripeClient set to $this->get('stripe_client'): This will be an instance of that StripeClient class. In this if, call $stripeClient->createCustomer() and pass it the $user object and the $token: Done. Let's keep going! The second piece of logic is responsible for updating the card on an existing customer. In StripeClient, add a public function updateCustomerCard() with a User $user whose related Customer should be updated, and the new $paymentToken: Go copy the logic from the controller, paste it here, and update $token to $paymentToken. In OrderController, call this with $stripeClient->updateCustomerCard() passing it $user and $token: Now the StripeClient class is getting dangerous! But, there's one small problem. This will work now, but look at the setApiKey() method call that's above everything: We must call this before we make any API calls to Stripe. So, if we tried to use the StripeClient somewhere else in our code, but we forgot to call this line, we would have big problems. Instead, I want to guarantee that if somebody calls a method on StripeClient, setApiKey() will always be called first. To do that, copy that line, delete it and move it into StripeClient's __construct() method. Symfony users will know that the getParameter() method won't work here. To fix that, add a new first constructor argument called $secretKey. Then, use that: To tell Symfony to pass this, go back to services.yml and add an arguments key with one entry: %stripe_secret_key%: Thanks to auto-wiring, Symfony will pass the stripe_secret_key parameter as the first argument, but then autowire the second, EntityManager argument.
The end-result is this: when our StripeClient object is created, the API key is set immediately. Ok, the hard stuff is behind us: let's move the last two pieces of logic: creating an InvoiceItem and creating an Invoice. In StripeClient, add public function createInvoiceItem() with an $amount argument, the $user to attach it to and a $description: Copy that code from our controller, remove it, and paste it here. Update amount to use $amount and description to use $description. Add a return statement just in case: In OrderController, call this $stripeClient->createInvoiceItem() passing it $product->getPrice() * 100, $user and $product->getName(): Perfect! For the last piece, add a new public function createInvoice() with a $user whose customer we should invoice and a $payImmediately argument that defaults to true: Who knows, there might be some time in the future when we don't want to pay an invoice immediately. You know the drill: copy the invoice code from the controller, remove it and paste it into StripeClient. Wrap the pay() method inside if ($payImmediately). Finally, return the $invoice: Call that in the controller: $stripeClient->createInvoice() passing it $user and true to pay immediately: Phew! This was a giant step sideways - but not only is our code more re-usable, it just makes a lot more sense when you read it! Double-check to make sure it works. Add something to your cart. Check-out. Yes! No error! The system still works and this StripeClient is really, really sweet. // composer.json { "require": { "php": ">=5.5.9, <7.4", "symfony/symfony": "3.1.*", // v3.1.10 "doctrine/orm": "^2.5", // v2.7.2 "doctrine/doctrine-bundle": "^1.6", // 1.6", // 1.1.1 "twig/twig": "^1.24.1" // v1.35.2 }, "require-dev": { "sensio/generator-bundle": "^3.0", // v3.0.7 "symfony/phpunit-bridge": "^3.0", // v3.1.2 "hautelook/alice-bundle": "^1.3", // v1.3.1 "doctrine/data-fixtures": "^1.2" // v1.2.1 } }
https://symfonycasts.com/screencast/stripe/centralize-stripe-code
What is Amazon S3? Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can use Amazon S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides management features so that you can optimize, organize, and configure access to your data to meet your specific business, organizational, and compliance requirements. Features of Amazon S3 Storage classes Amazon S3 offers a range of storage classes designed for different use cases. For example, you can store mission-critical production data in S3 Standard for frequent access, save costs by storing infrequently accessed data in S3 Standard-IA or S3 One Zone-IA, and archive data at the lowest costs in S3 Glacier and S3 Glacier Deep Archive. You can store data with changing or unknown access patterns in S3 Intelligent-Tiering, which optimizes storage costs by automatically moving your data between four access tiers when your access patterns change. These four access tiers include two low-latency access tiers optimized for frequent and infrequent access, and two opt-in archive access tiers designed for asynchronous access for rarely accessed data. For more information, see Using Amazon S3 storage classes. For more information about S3 Glacier, see the Amazon S3 Glacier Developer Guide. Storage management Amazon S3 has storage management features that you can use to manage costs, meet regulatory requirements, reduce latency, and save multiple distinct copies of your data for compliance requirements. S3 Lifecycle – Configure a lifecycle policy to manage your objects and store them cost effectively throughout their lifecycle.
You can transition objects to other S3 storage classes or expire objects that reach the end of their lifetimes. S3 Object Lock – Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can use Object Lock to help meet regulatory requirements that require write-once-read-many (WORM) storage or to simply add another layer of protection against object changes and deletions. S3 Replication – Replicate objects and their respective metadata and object tags to one or more destination buckets in the same or different AWS Regions for reduced latency, compliance, security, and other use cases. S3 Batch Operations – Manage billions of objects at scale with a single S3 API request or a few clicks in the Amazon S3 console. You can use Batch Operations to perform operations such as Copy, Invoke AWS Lambda function, and Restore on millions or billions of objects. Access management Amazon S3 provides features for auditing and managing access to your buckets and objects. By default, S3 buckets and the objects in them are private. You have access only to the S3 resources that you create. To grant granular resource permissions that support your specific use case or to audit the permissions of your Amazon S3 resources, you can use the following features. S3 Block Public Access – Block public access to S3 buckets and objects. By default, Block Public Access settings are turned on at the account and bucket level. AWS Identity and Access Management (IAM) – Create IAM users for your AWS account to manage access to your Amazon S3 resources. For example, you can use IAM with Amazon S3 to control the type of access a user or group of users has to an S3 bucket that your AWS account owns. Bucket policies – Use IAM-based policy language to configure resource-based permissions for your S3 buckets and the objects in them. Access control lists (ACLs) – Grant read and write permissions for individual buckets and objects to authorized users.
Access Analyzer for S3 – Evaluate and monitor your S3 bucket access policies, ensuring that the policies provide only the intended access to your S3 resources. Data processing To transform data and trigger workflows to automate a variety of other processing activities at scale, you can use the following features. S3 Object Lambda – Add your own code to S3 GET requests to modify and process data as it is returned to an application. Filter rows, dynamically resize images, redact confidential data, and much more. Event notifications – Trigger workflows that use Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), and AWS Lambda when a change is made to your S3 resources. Storage logging and monitoring Amazon S3 provides logging and monitoring tools that you can use to monitor and control how your Amazon S3 resources are being used. For more information, see Monitoring tools. Automated monitoring tools Amazon CloudWatch metrics for Amazon S3 – Track the operational health of your S3 resources and configure billing alerts when estimated charges reach a user-defined threshold. AWS CloudTrail – Record actions taken by a user, a role, or an AWS service in Amazon S3. CloudTrail logs provide you with detailed API tracking for S3 bucket-level and object-level operations. Manual monitoring tools Server access logging – Get detailed records for the requests that are made to a bucket. You can use server access logs for many use cases, such as conducting security and access audits, learning about your customer base, and understanding your Amazon S3 bill. AWS Trusted Advisor – Evaluate your account by using AWS best practice checks to identify ways to optimize your AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas. You can then follow the recommendations to optimize your services and resources. 
Analytics and insights Amazon S3 offers features to help you gain visibility into your storage usage, which empowers you to better understand, analyze, and optimize your storage at scale. Amazon S3 Storage Lens – Understand, analyze, and optimize your storage. S3 Storage Lens provides 29+ usage and activity metrics and interactive dashboards to aggregate data for your entire organization, specific accounts, AWS Regions, buckets, or prefixes. Storage Class Analysis – Analyze storage access patterns to decide when it's time to move data to a more cost-effective storage class. S3 Inventory with Inventory reports – Audit and report on objects and their corresponding metadata and configure other Amazon S3 features to take action in Inventory reports. For example, you can report on the replication and encryption status of your objects. For a list of all the metadata available for each object in Inventory reports, see Amazon S3 Inventory list. Strong consistency Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions. This behavior applies to both writes of new objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access control lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. For more information, see Amazon S3 data consistency model. How Amazon S3 works Amazon S3 is an object storage service that stores data as objects within buckets. An object is a file and any metadata that describes the file. A bucket is a container for objects. To store your data in Amazon S3, you first create a bucket and specify a bucket name and AWS Region. Then, you upload your data to that bucket as objects in Amazon S3. Each object has a key (or key name), which is the unique identifier for the object within the bucket. 
S3 provides features that you can configure to support your specific use case. For example, you can use S3 Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects that are accidentally deleted or overwritten. Buckets and the objects in them are private and can be accessed only if you explicitly grant access permissions. You can use bucket policies, AWS Identity and Access Management (IAM) policies, access control lists (ACLs), and S3 Access Points to manage access. Buckets A bucket is a container for objects stored in Amazon S3. You can store any number of objects in a bucket and can have up to 100 buckets in your account. To request an increase, visit the Service Quotas Console. Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in the DOC-EXAMPLE-BUCKET bucket in the US West (Oregon) Region, then it is addressable using the URL. For more information, see Accessing a Bucket. When you create a bucket, you enter a bucket name and choose the AWS Region where the bucket will reside. After you create a bucket, you cannot change the name of the bucket or its Region. Bucket names must follow the bucket naming rules. You can also configure a bucket to use S3 Versioning or other storage management features. Buckets also: Organize the Amazon S3 namespace at the highest level. Identify the account responsible for storage and data transfer charges. Provide access control options, such as bucket policies, access control lists (ACLs), and S3 Access Points, that you can use to manage access to your Amazon S3 resources. Serve as the unit of aggregation for usage reporting. For more information about buckets, see Buckets overview. Objects Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The metadata is a set of name-value pairs that describe the object.
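The bucket-plus-key addressing described above can be illustrated as plain string construction. This is a sketch of the virtual-hosted-style URL scheme, not an AWS SDK call; the bucket name, Region, and key are the examples used in this document:

```java
// Illustration of how a bucket name, Region, and object key combine
// into a virtual-hosted-style object URL (a string sketch, not an SDK call).
class S3ObjectUrl {
    static String forObject(String bucket, String region, String key) {
        return "https://" + bucket + ".s3." + region + ".amazonaws.com/" + key;
    }

    public static void main(String[] args) {
        // The photos/puppy.jpg example from the text, in us-west-2 (US West Oregon)
        System.out.println(forObject("DOC-EXAMPLE-BUCKET", "us-west-2", "photos/puppy.jpg"));
        // → https://DOC-EXAMPLE-BUCKET.s3.us-west-2.amazonaws.com/photos/puppy.jpg
    }
}
```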
These pairs include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. You can also specify custom metadata at the time that the object is stored. An object is uniquely identified within a bucket by a key (name) and a version ID (if S3 Versioning is enabled on the bucket). For more information about objects, see Amazon S3 objects overview.

Keys

An object key (or key name) is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key. The combination of a bucket, object key, and, optionally, version ID (if S3 Versioning is enabled for the bucket) uniquely identifies each object. In the example above, DOC-EXAMPLE-BUCKET is the name of the bucket and /photos/puppy.jpg is the key. For more information about object keys, see Creating object key names.

S3 Versioning

You can use S3 Versioning to keep multiple variants of an object in the same bucket. With S3 Versioning, you can preserve, retrieve, and restore every version of every object stored in your buckets. You can easily recover from both unintended user actions and application failures. For more information, see Using versioning in S3 buckets.

Version ID

When you enable S3 Versioning in a bucket, Amazon S3 generates a unique version ID for each object added to the bucket. Objects that already existed in the bucket at the time that you enable versioning have a version ID of null. If you modify these (or any other) objects with other operations, such as CopyObject and PutObject, the new objects get a unique version ID. For more information, see Using versioning in S3 buckets.

Bucket policy

A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that you can use to grant access permissions to your bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner. Bucket policies are limited to 20 KB in size.
Bucket policies use the JSON-based access policy language that is standard across AWS. You can use bucket policies to add or deny permissions for the objects in a bucket. Bucket policies allow or deny requests based on the elements in the policy, including the requester, S3 actions, resources, and aspects or conditions of the request (for example, the IP address used to make the request). For example, you can create a bucket policy that grants cross-account permissions to upload objects to an S3 bucket while ensuring that the bucket owner has full control of the uploaded objects. For more information, see Bucket policy examples.

In your bucket policy, you can use wildcard characters in Amazon Resource Names (ARNs) and other values to grant permissions to a subset of objects. For example, you can control access to groups of objects that begin with a common prefix or end with a given extension, such as .html.

Access control lists (ACLs)

You can use ACLs to grant read and write permissions for individual buckets and objects to authorized users. Each bucket and object has an ACL attached to it as a subresource. The ACL defines which AWS accounts or groups are granted access and the type of access. For more information, see Access control list (ACL) overview.

S3 Access Points

Amazon S3 Access Points are named network endpoints with dedicated access policies that describe how data can be accessed using that endpoint. Access Points simplify managing data access at scale for shared datasets in Amazon S3. Access Points are named network endpoints attached to buckets that you can use to perform S3 object operations, such as GetObject and PutObject.

Each access point has its own IAM policy. You can configure Block Public Access settings for each access point. To restrict Amazon S3 data access to a private network, you can also configure any access point to accept requests only from a virtual private cloud (VPC).
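The wildcard-in-ARN pattern from the bucket policy discussion above can be sketched as a concrete policy document. The statement below (public read access to objects whose keys end in .html) is a hypothetical example assembled for illustration; the Sid name is made up, and you should verify the action name, ARN format, and policy Version string against the S3 bucket policy reference before using anything like it:

```python
import json

# Hypothetical bucket policy: allow anyone to GET objects in
# DOC-EXAMPLE-BUCKET whose keys end with .html, using a wildcard
# inside the Resource ARN.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPublicReadOfHtml",   # illustrative name
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*.html",
    }],
}
print(json.dumps(policy, indent=2))
```

The wildcard in the Resource value is what scopes the statement to .html keys, matching the prefix/extension example in the text.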
For more information, see Managing data access with Amazon S3 access points.

Regions

You can choose the geographical AWS Region where Amazon S3 stores the buckets that you create. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. Objects stored in an AWS Region never leave the Region unless you explicitly transfer or replicate them to another Region. For example, objects stored in the Europe (Ireland) Region never leave it.

You can access Amazon S3 and its features only in the AWS Regions that are enabled for your account. For more information about enabling a Region to create and manage AWS resources, see Managing AWS Regions in the AWS General Reference. For a list of Amazon S3 Regions and endpoints, see Regions and endpoints in the AWS General Reference.

Amazon S3 data consistency model

Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions. This behavior applies to writes of new objects as well as to PUT requests that overwrite existing objects and to DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access control lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent.

Updates to a single key are atomic. For example, if you make a PUT request to an existing key, any read (a GET or LIST request) that is initiated following the receipt of a successful PUT response will return the data written by that PUT request.

Here are examples of this behavior:

A process writes a new object to Amazon S3 and immediately lists keys within its bucket. The new object appears in the list.

A process replaces an existing object and immediately tries to read it. Amazon S3 returns the new data.

A process deletes an existing object and immediately tries to read it. Amazon S3 does not return any data because the object has been deleted.
A process deletes an existing object and immediately lists keys within its bucket. The object does not appear in the listing.

Amazon S3 does not support object locking for concurrent writers. If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins. If this is an issue, you must build an object-locking mechanism into your application. Bucket configurations, by contrast, have an eventual consistency model; after you change a bucket configuration (for example, by enabling versioning), it can take time for the change to fully propagate before you issue write operations (PUT or DELETE requests) on objects in the bucket.

Concurrent applications

This section provides examples of the behavior to be expected from Amazon S3 when multiple clients are writing to the same items.

In this example, both W1 (write 1) and W2 (write 2) finish before the start of R1 (read 1) and R2 (read 2). Because S3 is strongly consistent, R1 and R2 both return color = ruby.

In the next example, W2 does not finish before the start of R1. Therefore, R1 might return color = ruby or color = garnet. However, the order in which Amazon S3 receives the two writes cannot be predicted because of various factors, such as network latency. For example, W2 might be initiated by an Amazon EC2 instance in the same Region, while W1 might be initiated by a host that is farther away. The best way to determine the final value is to perform a read after both writes have been acknowledged.

Related services

After you load your data into Amazon S3, you can use it with other AWS services. The following are the services that you might use most frequently:

Amazon Elastic Compute Cloud (Amazon EC2) – Provides secure and scalable computing capacity in the AWS Cloud.

Amazon EMR – Helps businesses, researchers, data analysts, and developers easily and cost-effectively process vast amounts of data. Amazon EMR uses a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3.

AWS Snow Family – Helps customers that need to run operations in austere, non-data-center environments, and in locations where there's a lack of consistent network connectivity. You can use AWS Snow Family devices to locally and cost-effectively access the storage and compute power of the AWS Cloud in places where an internet connection might not be an option.
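The last-writer-wins rule for simultaneous PUT requests described in the consistency discussion above can be modeled in a few lines. This is a toy model of the stated rule (the request with the latest timestamp wins); it is not S3's implementation:

```python
def winning_value(puts):
    """puts: list of (timestamp, value) PUT requests to a single key.

    Per the rule above, the request with the latest timestamp
    determines the value that ends up stored.
    """
    return max(puts, key=lambda p: p[0])[1]

# W1 writes color = ruby, then W2 writes color = garnet:
print(winning_value([(1, "ruby"), (2, "garnet")]))  # garnet
```

Note that this only models the outcome once both writes are acknowledged; as the text says, the arrival order of in-flight concurrent writes cannot be predicted, so a read racing the writes may see either value.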
AWS Transfer Family – Provides fully managed support for file transfers directly into and out of Amazon S3 or Amazon Elastic File System (Amazon EFS) using Secure Shell (SSH) File Transfer Protocol (SFTP), File Transfer Protocol over SSL (FTPS), and File Transfer Protocol (FTP).

Accessing Amazon S3

You can work with Amazon S3 in any of the following ways:

AWS Management Console

The console is a web-based user interface for managing Amazon S3 and AWS resources. If you've signed up for an AWS account, you can access the Amazon S3 console by signing into the AWS Management Console and choosing S3 from the AWS Management Console home page.

AWS Command Line Interface

You can use the AWS command line tools to issue commands or build scripts at your system's command line to perform AWS (including S3) tasks. The AWS Command Line Interface (AWS CLI) provides a unified set of commands for a broad range of AWS services.

AWS SDKs

AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, and so on). The AWS SDKs provide a convenient way to create programmatic access to S3 and AWS. Amazon S3 is a REST service. You can send requests to Amazon S3 using the AWS SDK libraries, which wrap the underlying Amazon S3 REST API and simplify your programming tasks. For example, the SDKs take care of tasks such as calculating signatures, cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see Tools for AWS.

Every interaction with Amazon S3 is either authenticated or anonymous. If you are using the AWS SDKs, the libraries compute the signature for authentication from the keys that you provide. For more information about how to make requests to Amazon S3, see Making requests.
Amazon S3 REST API

The architecture of Amazon S3 is designed to be programming language-neutral, using AWS-supported interfaces to store and retrieve objects. You can access S3 and AWS programmatically by using the Amazon S3 REST API. The REST API is an HTTP interface to Amazon S3. With the REST API, you use standard HTTP requests to create, fetch, and delete buckets and objects.

To use the REST API, you can use any toolkit that supports HTTP. You can even use a browser to fetch objects, as long as they are anonymously readable. The REST API uses standard HTTP headers and status codes, so its use matches the style of standard HTTP usage. If you make direct REST API calls in your application, you must write the code to compute the signature and add it to the request. For more information about how to make requests to Amazon S3, see Making requests.

SOAP API support over HTTP is deprecated, but it is still available over HTTPS. Newer Amazon S3 features are not supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.

Paying for Amazon S3

Pricing for Amazon S3 is designed so that you don't have to plan for the storage requirements of your application. Most storage providers require you to purchase a predetermined amount of storage and network transfer capacity. With Amazon S3, by contrast, you pay only for what you actually use. This model gives you a variable-cost service that can grow with your business while giving you the cost advantages of the AWS infrastructure. For more information, see Amazon S3 pricing.

When you sign up for AWS, your AWS account is automatically signed up for all services in AWS, including Amazon S3. However, you are charged only for the services that you use. If you are a new Amazon S3 customer, you can get started with Amazon S3 for free. For more information, see AWS free tier.

To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console.

PCI DSS compliance

Amazon S3 supports the processing, storage, and transmission of credit card data and has been validated as compliant with the Payment Card Industry (PCI) Data Security Standard (DSS).
https://docs.aws.amazon.com/en_us/AmazonS3/latest/userguide/Welcome.html
I am facing the same problem with npm running a 0.17.x app with quasar v1.0.0-rc.4 on linux. Is there a solution with npm? I would not like to switch to yarn.

Hi @Hawkeye64, QCalendar is awesome and the documentation is very nice. I would like to ask: are you using some tool (like VuePress) for the documentation, or do you just have another quasar project for it? Thanx in advance and cheers, Oggo

Hi @syflex, it is great that you are ready to help me. So my use case is: I have dynamically created q-uploader fields (see the snippet below). What I want is, when a file gets uploaded, to know which q-uploader it belongs to. That means I want something like a map q-uploader -->> uploaded file. When I set the uploaded event handler I can either call a function with (file, xhr) parameters, which is helpful, but then I don't know from which q-uploader it is called. If I call my own callback function, for example fileUploaded(bIndex), I will know which q-uploader is calling, but there will be no information about the file uploaded. As a visual component I also think q-uploader is great, but I desperately need something like v-model for this component. Thanx again in advance, every help will be well appreciated.

<div v- <q-uploader </div>

Does anyone really use this component?!! How do you handle it without a model? I'm trying to get it to work with a maximum of 3 images uploaded, but I am failing to find out how. It's a showstopper and I am considering changing the framework! Is there someone able to give good advice?

Hi guys, I would like to have something like v-model for a q-uploader, so that when my form is submitted I can get the URL of the uploaded file. How can I achieve it?

Thanx guys. @metalsadman, hmmmm, it seems to work in your sample, so I will have to drill deeper to find my error. Thanx a lot for the sample!
Cheers, Oggo

Sure, here is the fiddle: I have no idea how to make the import in the fiddle, but I hope the use case is clear. As soon as I return the array, it works fine. When I use the imported function "getTabsMdl" the tabs are shown, but no tab is selected.

Hi guys, I'm using quasar 0.17 and trying to load the tabs for my layout dynamically from a file. Something like this:

<template>
<q-layout view="lHh Lpr lFf">
<q-layout-header>
…
<q-tabs v-model="selTabMdl">
<q-route-tab v-for="(bTab, bTIndex) in _getTabsMdl()" :to="bTab.to" :label="bTab.name" :key="bTIndex" slot="title" />
</q-tabs>
…

import { getTabsMdl } from './myTabsModel'
export default {
…
methods: {
_getTabsMdl () {
return getTabsMdl()
// return [{name: 'tt1', to: '/'}, {name: 'tt2', to: '/'}, {name: 'tt3', to: '/'}]
},
…

When I call the function getTabsMdl() from the file myTabsMdl.js the tabs show up, but there is no default tab selected. When I return the array directly (the commented code) everything is fine and the default tab is selected. Can someone give me advice on what I should do to get it to work? Thanx in advance!
https://forum.quasar-framework.org/user/oggo
can we assign a set to each index of a vector? if it is possible, how can we implement this? i did like this:

vector<set<pair> > vsp;
for i=1 to n
    take set as local variable
    set s;
    insert some values in s;
    and assign this set to vector
    vsp[i]=s;

but it is giving an error. what should i do? any help? thanks in advance.

What error are you getting?

is this correct? the error is a type mismatch on the last line: vsp[i] = s

You are using a set of pairs, so the syntax should be like this: vector<set<pair<int,int> > > vsp

i did the same thing but i didn't mention it here, sry for that.

Declare like this: vsp[1000000]

i already did the same thing. the error is a type mismatch: error: no match for 'operator=' (operand types are 'std::vector<std::set<std::pair<int, int> > >' and 'std::set<std::pair<int, int> >') v[i] = s;

@todumanish you have declared set<int,int> s;, I guess this should be set<pair<int,int> > s. I don't know whether this is the mistake, but if we assign v[i]=s where s is a set<int,int>, that would give an error (mismatch). After declaring, if you want to assign a set s to v[i], just write v[i]=s

thanks for telling me, but i already did the same. i already edited my question.

Can we assign like this in vectors? I mean, v[i] is used to randomly access the vector elements, but I never used it during insertion (i may be wrong). Have you tried push_back, or is it required to insert them at specific positions?

You can do that. Take a look.
#include <iostream>
#include <vector>
#include <set>
#include <algorithm>
#include <stdio.h>
#include <cstring>
using namespace std;
vector<set<pair<int, int>>> vsp;
set<pair<int,int>> s;
int main() {
    int t;
    //cin >> t;
    t = 5;
    for (int i = 0; i < t; i++) {
        for (int j = 0; j < i + 1; j++) {
            s.insert(make_pair(j, 2 * j));
        }
    }
    vsp.push_back(s);
    for (auto it : vsp[0]) {
        cout << it.first << " " << it.second << endl;
    }
}

I think the problem is in the vsp[i]=s part (as the i'th element doesn't exist); you must use push_back().

@todumanish I tried using v[i] for insertion but it showed an error. This might be the reason: when you assign v[i] to some set (like v[i]=s instead of v.push_back(s)) the compiler treats v[i] as an element of an empty vector, so we are assigning a value to an element that does not exist (which surely gives an error). This is the same as writing vector<int> v; and then v=3; So it is better to use push_back as @only4 has mentioned in his code. Does this make sense? Please correct me if wrong.

= is the assignment operator. push_back() adds a new element. You can't assign a value until the element is created.

"You can't assign a value until the element is created" True. Thanks, now it is clear.

right bro, you are correct. change vector<set<pair<int,int> > > vsp; to vector<set<pair<int,int> > > vsp(n); and set<int,int> s; to set<pair<int,int> > s;
https://discusstest.codechef.com/t/query-related-to-stl-vector-and-set/15071
This is a Java program to print the kth element of an array. Enter the size of the array and then enter all the elements of that array. Then enter the position k at which you want to find the element. The program prints the element at index k-1 of the given array (Java arrays are 0-indexed, so the kth element is at index k-1).

Here is the source code of the Java program to print the kth element of an array. The Java program is successfully compiled and run on a Windows system. The program output is also shown below.

import java.util.Scanner;

public class Position
{
    public static void main(String[] args)
    {
        int n;
        Scanner s = new Scanner(System.in);
        System.out.print("Enter no. of elements you want in array:");
        n = s.nextInt();
        int a[] = new int[n];
        System.out.println("Enter all the elements:");
        for (int i = 0; i < n; i++)
        {
            a[i] = s.nextInt();
        }
        System.out.print("Enter the k th position at which you want to check number:");
        int k = s.nextInt();
        System.out.println("Number:" + a[k-1]);
    }
}

Output:

$ javac Position.java
$ java Position
Enter no. of elements you want in array:5
Enter all the elements:
2
5
3
8
6
Enter the k th position at which you want to check number:3
Number:3

Sanfoundry Global Education & Learning Series – 1000 Java Programs.
https://www.sanfoundry.com/java-program-print-kth-element-array/
calling two procedures from one ksh ???

# 1 10-29-2009 shashi369

Hi to all, this is my first posting in this forum. I'm quite new to this KSH... I guess I'll have some fun...

Well, I have two individual korn shell scripts. Each calls a PL/SQL stored procedure from the korn shell script, and that works absolutely fine: once a procedure completes, the log is updated and the data is loaded into tables. The two scripts load data into two different tables, and they have a dependency, so I have to run A.ksh and later B.ksh. If I run B.ksh first then the data will not be loaded, since it has a dependency on A.ksh. So I was trying to merge them into one korn shell script: first run the 1st procedure; if it completes successfully, then run the 2nd procedure; if the 1st procedure fails, then don't run the 2nd procedure. So how do I accomplish this in a korn shell script? Any idea guys? Thank you so much in advance!!!

# 2 10-29-2009 adderek

I'm assuming that you are using sqlplus... you have not specified any details. Check the sqlplus manual; since it is very limited, you would need to parse its output. I would guess that PL/SQL might be the right choice, but you need to check it. In brief: according to POSIX, an application should return ($?) a value of 0 if it finished with a success. SQLPLUS can perform multiple operations, and you won't get the result from a single operation. You might want to try perl or python or java.
General idea: Korn shell should be used for simple things (although it can perform complex things too). If you do anything that is complex then you need to use some real programming language (perl, python, java, c++, ...).

# 3 10-29-2009 giannicello

Why can't you just add the command to kick off the second script (b.ksh) inside the first script (a.ksh) at the end?

# 4 10-29-2009 Scott

I agree with this:

Quote: Originally Posted by adderek
I'm assuming that you are using sqlplus... You have not specified any details.

Don't understand this:

Quote: Originally Posted by adderek
Check the sqlplus manual... Since it is very limited, you would need to parse its output. I would guess that PL/SQL might be the right choice... but you need to check it.

And completely disagree with this:

Quote: Originally Posted by adderek
General idea: Korn shell should be used for simple things (although it can perform complex things too). If you do anything that is complex then you need to use some real programming language (perl, python, java, c++, ...).

# 5 10-29-2009 steadyonabix

At work we have many scripts (ksh) that integrate with DB2 and perform many complex DB operations, way beyond simply running SQL scripts.
If you do want to perform complex DB operations from shell, it is good to break the code down into specific operations, such as:

- Connect to the database
- Set schema
- Update table
- Delete
- Insert
- Disconnect

and so on. Each of these actions is performed by specific functions written in a common library file that is dotted into scripts running under sudo. Expected return codes are passed to each function so they can fail if a code is out of range.

One script, for example, connects to a database with over 200 tables and verifies:

- No tables have changed their name
- No field names have changed
- No data types have changed
- No field sizes have changed

As a manual exercise this was a day's work for the testers and very error prone. Now it is performed in a few minutes with any changes presented in a report. So don't let anyone tell you that you cannot perform complex operations in shell. At the last line count the tool suite had almost 2,000,000 lines, albeit a lot of that standing data.

# 6 11-02-2009 adderek

Quote: Originally Posted by scottn
I agree with this: Don't understand this:

What is here to understand? He needs an interaction with the database, and it might be a better way to do it within a single transaction (or several related transactions if DDL is performed). Since that might be difficult in sqlplus, you need another option. Perl's DBI might be a choice. I believe that PL/SQL code (it might be anonymous) might be a better choice.

Quote: Originally Posted by scottn
And completely disagree with this

I have written that you should not (note the difference between "should not" and "cannot") use korn shell for complex code.
Not that it cannot be used - it can do artificial intelligence if you want it to. However, for complex things other choices (perl, python, java, ...) are usually much more optimal. If you don't agree then I wish you luck writing 3D accelerated games in ksh. Complex ksh code might be difficult to read and maintain. Code might be distributed across thousands of scripts executed as functions named like a_my_function which is in fact "${x}_${y}_${z}". There is no separation between modules, and people tend to use variables without declaration (try running "set -u"). You don't have references, objects, streams, inheritance, overloading, debugging, libraries, private methods, multi-threading, ... You can find several games written in shell... try reading that mess. Then compare it to other game sources.

As for the original question, you wanted to:

1. Run some script, let's say A.sql
2. Check if A.sql failed
3. Run B.sql if A.sql did not fail... and probably roll back the changes if A.sql failed

In shell you would need to spool the output from sqlplus, grep it, and guess what might fail (e.g. grep the output for the "error" string and be lucky enough to have nothing there like "create table my_errors(error char(10))"), while also checking the return code, and then execute the second script. A PL/SQL script might try to work in a single transaction and have "rollback" in the exception handler. If you use any DDL there then the flashback mechanism might be used.
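For the original question in this thread (run B.ksh only when A.ksh exits successfully), the direct ksh idiom is to test the exit status, for example ./A.ksh && ./B.ksh, or an explicit if on $?. Below is a runnable sketch of the same exit-status logic, written in Python for illustration; the A.ksh/B.ksh names come from the thread, and harmless stand-in shell commands are used so the sketch executes anywhere:

```python
import subprocess

def run_sequenced(first_cmd: str, second_cmd: str) -> int:
    """Run second_cmd only if first_cmd exits with status 0,
    mirroring the shell idiom `./A.ksh && ./B.ksh`."""
    first = subprocess.run(first_cmd, shell=True)
    if first.returncode != 0:
        print("first script failed; skipping second")
        return first.returncode
    return subprocess.run(second_cmd, shell=True).returncode

# Stand-ins for ./A.ksh and ./B.ksh:
ok = run_sequenced("exit 0", "echo second ran")    # first succeeds, second runs
bad = run_sequenced("exit 3", "echo never shown")  # first fails, second skipped
```

In the merged script itself, the whole dependency reduces to one line: chain the two calls with && so the second stored-procedure script starts only after the first returns exit status 0.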
https://www.unix.com/shell-programming-and-scripting/122473-calling-two-procedures-one-ksh.html
The IDE, shown in Figure 1.19, has become more complex than in previous versions of Visual Basic, and being able to use it, or at least knowing what the various parts are called, is a skill we'll need in the coming chapters. Part of the reason it's become more complex is that the same IDE is now shared by all Visual Studio languages, such as VB and C# (something Microsoft has promised for many years, but only implemented now). We've already seen the IDE at work, of course, but now it's time to take a more systematic look. There are so many independent windows in the IDE that it's easy to misplace or rearrange them inadvertently. The IDE windows are docking windows, which means you can use the mouse to move windows around as you like; when the windows are near an edge, they'll "dock" (adhere) to that edge, so you can reconfigure the IDE windows as you like. If you move IDE windows inadvertently, don't panic; just use the mouse to move them back. Also note that the windows in the IDE come with an X button at upper right, which means you can close them. I don't know about you, but I sometimes click these when I don't mean to, and a window I wanted disappears. It's easy to panic: The toolbox is gone! I'll have to reinstall everything! In fact, all you have to do is to find that window in the View menu again (such as View|Toolbox) to make it reappear. (Note that some windows are hidden in the View|Other Windows menu item, which opens a submenu of additional windows-there are simply too many windows to fit them all into one menu without needing to use a submenu.) There's so much packed into the IDE that Microsoft has started to make windows share space, and you can keep them separate using tabs such as those you can see above the form at the center of Figure 1.19.
If you click the Form1.vb[Design] tab, you see the form itself as it'll appear when the program runs; if you click the Form1.vb tab, you'll see the form's code, and if you click the Start Page tab, you'll see the Start page, which lets you select from among recent solutions to open. Also note at lower right that the Properties window and the Dynamic Help window-a new VB .NET feature-are sharing the same space, and you can select between them using tabs. The IDE is a very crowded place, and in an effort to unclutter the cluttered IDE a little, VB .NET adds a new button in dockable IDE windows-a little thumbtack button at upper right as you see in various windows in Figure 1.19, next to the X close button. This is the "auto-hide" feature, which lets you reduce a window to a tab connected to the edge it's docked on. For example, in Figure 1.19, the Server Explorer (which lets you explore data sources on servers) window is hidden and has become a tab at upper left in the IDE. If I let the mouse move over that tab, the full Server Explorer window will glide open, covering most of the toolbox. You can auto-hide most windows like this; for example, if I were to click the thumbtack button in the toolbox, it would close and become a tab under the Server Explorer tab in the IDE. To restore a window to stay-open status, just click the thumbtack again. And, of course, you can customize the IDE as well. For example, to customize IDE options such as the fonts and colors used to display code, you select the Tools|Options menu item and use the various items in the Environment folder. To customize menus and toolbars, such as specifying the toolbars to display (How many are there to choose from? Twenty-seven.), or what buttons go on what toolbars, use the Tools|Customize menu item. That's it for general discussion-it's time to get to the IDE itself, starting with the Start page.
We've already seen the Start page, which is what you see when you first start Visual Basic, and which appears outlined in Figure 1.20. You can use the Start page to select from recent projects; by default, the Get Started item is selected in the Start page at upper left. You can also create a new project here by clicking the New Project button. The Start page has other useful aspects as well: for example, because you use the same IDE for all Visual Studio languages, it'll also search through all those languages when you search the help files. To make it search only pertinent help files, you can select the My Profile item in the Start page, and select either Visual Basic or Visual Basic and Related (which is my preference) in the Help Filter drop-down list box. After you've started Visual Basic and have seen the Start page, you often turn to the menu system to proceed, as when you want to create a new project and use the File|New|Project menu item to bring up the New Project dialog box (you can do the same thing by clicking the New Project button in the Start page). The IDE menu system is very involved, with many items to choose from-and you don't even see it all at once. The menu system changes as you make selections in the rest of the IDE-for example, the Project menu will display 16 items if you first select a project in the Solution Explorer, but only 4 items if you have selected a solution, not a project. In fact, there are even more dramatic changes; for example, try clicking a form under design and you'll see a Data menu in the menu bar, used to generate datasets. If you then select not the form but the form's code, however (for example, double-click the form to open the code window), the Data menu disappears. 
There are hundreds of menu items here, and many useful ones that will quickly become favorites, such as File|New|Project that you use to create a new project, or the most recently used (MRU) list of files and projects that you can access from the Recent Files or Recent Projects items near the bottom of the File menu. The menu system also allows you to switch from debug to release modes if you use the Build|Configuration Manager item, lets you configure the IDE with the Tools|Options and Tools|Customize items, and so on. I'll introduce more and more menu items throughout the book as appropriate. The toolbars feature is another handy aspect of the IDE. These appear near the top of the IDE, as shown in Figure 1.21. There are plenty of toolbars to choose from, and sometimes VB .NET will choose for you, as when it displays the Debug toolbar when you've launched a program with the Start item in the Debug menu. Because the IDE displays tool tips (those small yellow windows with explanatory text that appear when you let the mouse rest over controls such as buttons in a toolbar), it's easy to get to know what the buttons in the toolbars do. As mentioned, you can also customize the toolbars in the IDE, selecting which toolbars to display or customizing which buttons appear in which toolbars with the Tools|Customize menu item, or you can right-click a toolbar itself to get a menu of the possible toolbars to display (the bottom item in this popup menu is Customize, which lets you customize which buttons go where), or you can open the Toolbars submenu in the View menu to do the same thing (as is often the case in VB, there's more than one way to do it). 
Toolbars provide a quick way to select menu items, and although I personally usually stick to using the menu system, there's no doubt that toolbar buttons can be quicker; for example, to save the file you're currently working on, you only need to click the diskette button in the standard toolbar (as you see in Figure 1.21), or the stacked diskettes button to save all the files in the solution. When you want to create a new project, you turn to the New Project dialog box. We've already used this quite a bit, and you can see it in Figure 1.22. In addition to letting you select from all the possible types of projects you can create in Visual Basic, you can also set the name of the project, and its location; for Windows projects, the location is a folder on disk, but for Web projects, you specify a server running IIS. Note also that you can add projects to the current solution using the New Project dialog box; just click the Add to Solution radio button instead of the Close Solution one (the default). If your project is entirely new, VB .NET will create an enclosing solution for the new project if there isn't already one. Finally, note the Setup and Deployment Projects folder, which you use to create projects for deploying your program as we'll do near the end of the book. When you're working on a project that has user interface elements-such as forms-VB .NET can display what those elements will look like at run time, and, of course, that's what makes Visual Basic visual. For example, when you're looking at a Windows form, you're actually looking at a Windows form designer, as you see in Figure 1.23, and you can manipulate the form, as well as add controls to it and so on. There are several different types of graphical designers, including:

Windows form designers
Web form designers
Component designers
XML designers

You may have noticed-or may already know from VB6-that Windows forms display a grid of dots, which you can see in Figure 1.23.
To set the grid spacing, and specify whether or not controls should "snap" to the grid (that is, position their corners on grid points), you can use the Tools|Options menu item to open the Options dialog box, and select the Windows Form Designer folder, displaying the possible options for you to set. Unlike graphical designers, code designers let you edit the code for a component, and you can see a code designer in Figure 1.24. You can use the tabs at the top center of the IDE to switch between graphical designers (such as the tabs Form1.vb[Design], which displays a graphical designer, and the Form1.vb tab, which displays the corresponding code designer). You can also switch between graphical and code designers using the Designer and Code items in the View menu, or you can use the top two buttons at left in the Solution Explorer. Note the two drop-down list boxes at the top of the code designer; the one on the left lets you select what object's code you're working with, and the one on the right lets you select the part of the code that you want to work on, letting you select between the declarations area, functions, Sub procedures, and methods (all of which we'll see starting in Chapter 2). The declarations area, which you select by selecting the (Declarations) item in the right-hand list box, is where you can put declarations of module-level objects, as we'll discover in Chapter 3 (see "Understanding Scope" in that chapter). Also note the + and - boxes in the code designer's text area, at left. Those are new in VB .NET, and were introduced because VB .NET now writes a great deal of code for your forms and components automatically. You can use the + and - buttons to show or hide that code. 
For example, here's what that code looks like for a typical Windows form:

#Region " Windows Form Designer generated code "

Friend WithEvents TextBox1 As System.Windows.Forms.TextBox
Friend WithEvents Button1 As System.Windows.Forms.Button

'Required by the Windows Form Designer
Private components As System.ComponentModel.Container

'NOTE: The following procedure is required by the Windows Form Designer
'It can be modified using the Windows Form Designer.
'Do not modify it using the code editor.
<System.Diagnostics.DebuggerStepThrough()> Private Sub _
    InitializeComponent()
    Me.TextBox1 = New System.Windows.Forms.TextBox()
    Me.Button1 = New System.Windows.Forms.Button()
    Me.SuspendLayout()
    '
    'TextBox1
    '
    Me.TextBox1.Location = New System.Drawing.Point(32, 128)
    Me.TextBox1.Name = "TextBox1"
    Me.TextBox1.Size = New System.Drawing.Size(224, 20)
    Me.TextBox1.TabIndex = 0
    Me.TextBox1.Text = ""
    '
    'Button1
    '
    Me.Button1.Location = New System.Drawing.Point(112, 56)
    Me.Button1.Name = "Button1"
    Me.Button1.TabIndex = 1
    Me.Button1.Text = "Click Me"
    '
    'Form1
    '
    Me.AutoScaleBaseSize = New System.Drawing.Size(5, 13)
    Me.ClientSize = New System.Drawing.Size(292, 213)
    Me.Controls.AddRange(New System.Windows.Forms.Control() _
        {Me.Button1, Me.TextBox1})
    Me.Name = "Form1"
    Me.Text = "Form1"
    Me.ResumeLayout(False)
End Sub

#End Region

We'll dissect what this code means when we start working with Windows applications in depth in Chapter 4; for now, note the #Region and #End Region directives at top and bottom of this code-those are how the code designer knows that this region of code can be collapsed or expanded with a + or - button. Visual Basic also automatically adds those + or - buttons for other programming constructions like procedures, enumerations, and so on, allowing you to hide the parts of your code you don't want to see. The IDE is cluttered enough, and this helps a little in uncluttering it.
As with the rest of the IDE, there are features upon features packed into code designers-for example, right-clicking a symbol lets you go to its definition, or its declaration, and so on. One useful feature of VB .NET code designers is Microsoft's IntelliSense. IntelliSense is what's responsible for those boxes that open as you write your code, listing all the possible options and even completing your typing for you. IntelliSense is one of the first things you encounter when you use VB .NET, and you can see an example in Figure 1.25, where I'm looking at all the members of a text box object. IntelliSense is made up of a number of options, including:

List Members-Lists the members of an object.
Parameter Info-Lists the arguments of procedure calls.
Quick Info-Displays information in tool tips as the mouse rests on elements in your code.
Complete Word-Completes typed words.
Automatic Brace Matching-Adds parentheses or braces as needed.

There's also a Visual Basic-specific IntelliSense, which offers syntax tips that display the syntax of the statement you're typing. That's great if you know what statement you want to use but don't recall its exact syntax, because its syntax is automatically displayed. IntelliSense is something you quickly get used to, and come to rely on. However, you can turn various parts of IntelliSense off if you want; just select the Tools|Options menu item, then select the Text Editor folder, then the Basic subfolder, and finally the General item in the Basic subfolder. You'll see a number of IntelliSense options you can turn on and off with check boxes. IntelliSense is useful because it tells you what syntax is correct automatically, or lists all the members of an object that are available. Another useful tool that's too often overlooked by Visual Basic programmers is the Object Explorer. This tool lets you look at all the members of an object at once, which is invaluable to pry into the heart of objects you've added to your code.
The Object Explorer helps open up any mysterious objects that Visual Basic has added to your code so you can see what's going on inside. To open the Object Explorer, select View|Other Windows|Object Explorer (see Figure 1.26.) The Object Explorer shows all the objects in your program and gives you access to what's going on in all of them. For example, in Figure 1.26, I'm looking at a Windows form, Form1, and all its internal members-and the parameters they require-are made visible. To close the Object Explorer, just click the X button at its upper right. The toolbox is something that all veteran Visual Basic developers are familiar with, and you can see it in Figure 1.27. Microsoft has crammed more into the toolbox with each successive version of Visual Basic, and now the toolbox uses tabs to divide its contents into categories; you can see these tabs, marked Data, Components, Windows Forms, and General, in Figure 1.27. The tabs available, as you might surmise, depend on the type of project you're working on-and even what type of designer you're working with. The Data, Components, Windows Forms, and General tabs appear when you're working with a Windows form in a Windows form designer, but when you switch to a code designer in the same project, all you'll see are General and Clipboard Ring (which displays recent items stored in the clipboard, and allows you to select from among them) in the toolbox. When you're working on a Web form, you'll see Data, Web Forms, Components, HTML, Clipboard Ring, and General, and so on. The Data tab displays tools for creating datasets and making data connections, the Windows Forms tab displays tools for adding controls to Windows forms, the Web Forms tab displays tools for adding server controls to Web forms, and so on. The General tab is empty by default, and is a place to store general components, controls, and fragments of code in.
(You can even add more tabs to the toolbox by right-clicking the toolbox and selecting the Add Tab item.) In fact, there are so many controls that even when you click a tab in the toolbox, you'll still most likely get a list that you have to scroll to see everything that's available. We've already discussed the Solution Explorer quite a bit; this window gives you an overview of the solution you're working with, including all the projects in it, and the items in those projects. (You can see the Solution Explorer in Figure 1.28.) This tool displays a hierarchy-with the solution at the top of the hierarchy, the projects one step down in the hierarchy, and the items in each project as the next step down. You can set the properties of various items in a project by selecting them in the Solution Explorer and then setting their properties in the properties window. And you can set properties of solutions and projects by right-clicking them and selecting the Properties item in the menu that appears, or you can select an item and click the properties button, which is the right-most button at the top of the Solution Explorer. If you're working on an object that has both a user interface and code, you can switch between graphical and code designers by using the buttons that appear at top left in the Solution Explorer when that object has been selected. You can right-click a solution and add a new project to it by selecting the Add|New Project menu item in the popup menu that appears. And you can specify which of multiple projects runs first-that is, is the startup project or projects-by right-clicking the project and selecting the Set As Startup Object item, or by right-clicking the solution and selecting the Set Startup Projects item. Much of what goes on in the VB .NET IDE depends on which solution or project is the current one, and you set that by selecting it in the Solution Explorer.
For example, you can specify what icon you want an application to use in Windows if you don't like the plain default one; to do that, you select its project in the Solution Explorer, select Properties in the Project menu, then open the Common Properties|Build folder, browse to the .ico (icon) file you want, and click OK. The Solution Explorer tracks the items in your projects; to add new items, you can use the menu items in the Project menu, such as Add Windows Form and Add User Control. To add new empty modules and classes to a project (we'll see what these terms mean in detail in the next chapter), you can use the Project|Add New Items menu item. The Solution Explorer sees things in terms of files, as you can see in Figure 1.28. There, the References folder holds the currently referenced items (such as namespaces) in a project, AssemblyInfo.vb is the file that holds information about the assembly you're creating, and Form1.vb is the file that holds the code for the form under design. However, there's another way of looking at object-oriented programs-in terms of classes-and the Class View Window does that. If you click the Class View tab under the Solution Explorer, you'll see the Class View window, as shown in Figure 1.29. This view presents solutions and projects in terms of the classes they contain, and the members of these classes. Using the Class View window gives you an easy way of jumping to a member of a class that you want to access quickly-just find it in the Class View window, and double-click it to bring it up in a code designer. The Properties window is another old favorite in Visual Basic, although now it shares its space with the Dynamic Help window. The Properties window appears in Figure 1.30. You set properties of various objects in Visual Basic to customize them; for example, we've set the Text property of a button in the WinHello project to "Click Me" to make that text appear in the button.
To set an object's properties when you're designing your program in Visual Basic-called design time (as opposed to run time)-you select that object (by clicking a control or form, or a project, or a solution), and then set the new property values you want in the Properties window. The Properties window is divided into two columns of text, with the properties on the left, and their settings on the right. The object you're setting properties for appears in the drop-down list box at the top of the Properties window, and you can select from all the available objects using that list box. When you select a property, Visual Basic will give you an explanation of the property in the panel at the bottom of the Properties window, as you see in Figure 1.30. And you can display the properties alphabetically by clicking the second button from the left at the top of the Properties window, or in categories by clicking the left-most button. To change a property's setting, you only have to click the right-hand column next to the name of the property, and enter the new setting. Often properties can have only a few specific values, in which case Visual Basic will display a drop-down list box next to the property's name when you click the right-hand column, and you can select values from that list. Sometimes, Visual Basic requires more information, as when you create data connections, and instead of a list box, a button with an ellipsis ("…") appears; when you click that button, Visual Basic will usually walk you through the steps it needs to get that information. Note also that, as usual with properties and methods in Visual Basic, not all properties of a form or control will be available at design time in the Properties window when you're designing your code-some will be available only at run time. In fact, there aren't many changes in the Properties window from VB6 (something VB6 programmers might be pleased to hear), so if you've used it before, you're all set. 
The window that shares the Properties window's space, however, is quite new-the Dynamic Help window. Visual Basic .NET includes the usual Help menu with Contents, Index, and Search items, of course, but it also now supports dynamic help, which looks things up for you automatically. You can see the Dynamic Help window by clicking the Dynamic Help tab under the Properties window, and you can see the Dynamic Help window in Figure 1.31. VB .NET looks up all kinds of help topics on the element you've selected automatically; for example, in Figure 1.31, I've selected a button on a Windows form, and dynamic help has responded by displaying all kinds of helpful links to information on buttons. This is more helpful than simply searching the whole help system for the word "button", because dynamic help will typically select introductory and overview help topics, not all the hundreds of topics with the word "button" in their text. If you click a help link in the Dynamic Help window, the corresponding help topic is opened in the central space of the IDE where the designers appear (and you can switch between designers and help topics using tabs). In VB6, when you added a component to a form, and that component wasn't visible at run time-such as a timer control-the timer would still appear on the form at design time. That's changed in VB .NET; now, when you add components that are invisible at run time, they'll appear in a component tray, which will appear automatically in the designer, as you see in Figure 1.32. You use the Server Explorer, which appears in Figure 1.33, to explore what's going on in a server, and it's a great tool to help make distant servers feel less distant, because you can see everything you need in an easy graphical environment. You can do more than just look using the Server Explorer too-you can drag and drop whole items onto Windows forms or Web forms from the Server Explorer.
For example, if you dragged a database table onto a form, VB .NET would create the connection and command objects you need to access that table from code. If you look at the bottom of the IDE, you'll see two tabs for the Output and Breakpoints windows. We'll look at the Breakpoints window when we discuss debugging, because it lets you manage the breakpoints at which program execution halts when you're debugging your code. The Output window, which you see in Figure 1.34, on the other hand, gives you the results of building and running programs. You can also send messages to the Output window yourself if you use the System.Diagnostics.Debug.Write method like this: System.Diagnostics.Debug.Write("Hello from the Output window!"). The Task List is another useful window that not many Visual Basic programmers know about. To see it, select the View|Show Tasks|All menu item; this window appears in Figure 1.35. As its name implies, the Task List displays tasks that VB .NET assumes you still have to take care of, and when you click a task, the corresponding location in a code designer appears. There are a number of such tasks; for example, if VB .NET has detected a syntax error, underlined with a wavy line as shown in Figure 1.35, that error will appear in the task list. If you've used a wizard, such as the Upgrade Wizard where VB .NET still wants you to take care of certain issues, it'll put a TODO comment into the code, as we saw earlier:

If blnDrawFlag Then
    'UPGRADE_ISSUE: Graphics statements can't be migrated.
    'Click for more: ms-help://MS.MSDNVS/vbcon/html/vbup2034.htm
    Line(X,Y)
End If

TODO comments like this will appear in the Task List. Plenty of other windows are available. For example, selecting View|Other Windows|Command Window opens the Command window, as you see in Figure 1.36.
This window is a little like the Immediate window in VB6, because you can enter commands like File.AddNewProject here and VB .NET will display the Add New Project dialog box. However, this window is not exactly like the Immediate window, because you can't enter Visual Basic code and have it executed. And there are other windows that we'll see as needed, such as when we're discussing debugging programs where we'll introduce the Call Stack window, the Breakpoints window, Watch and Value display windows, Autos and Locals windows, and so on. There's another new aspect of the IDE that bears mention-macros. You can use macros to execute a series of commands in the Visual Studio environment. If you want to give macros a try, take a look at the Macros submenu in the Tools menu. There's more to the IDE than we've been able to cover here, but now we've gotten the foundation we'll need in the coming chapters. I'll end this chapter by taking a look at coding practices in VB .NET; if you're not thoroughly familiar with Visual Basic yet, some of this might not make sense, so treat it as a section to refer back to later.
http://www.yaldex.com/vb-net-tutorial-2/library.books24x7.com/book/id_5526/viewer.asp@bookid=5526&chunkid=0408758138.htm
CC-MAIN-2015-32
refinedweb
4,986
67.38
The Q3ValueStack class is a value-based template class that provides a stack. More... #include <Q3ValueStack> This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information. Inherits: Q3ValueList<T>. Note that C++ defaults to field-by-field assignment operators and copy constructors if no explicit version is supplied. In many cases this is sufficient. Constructs an empty stack. Removes the top item from the stack and returns it. See also top() and push(). Adds element, d, to the top of the stack. Last in, first out. This function is equivalent to append(). See also pop(). This is an overloaded function. Returns a reference to the top item of the stack or the item referenced by end() if no such item exists. This function is equivalent to last(). See also pop(), push(), and Q3ValueList::fromLast().
http://doc.trolltech.com/main-snapshot/q3valuestack.html
crawl-003
refinedweb
160
71.21
A class is a type. Its name becomes a class-name ([class.name]) within its scope.

class-name:
    identifier
    simple-template-id

Class-specifiers and elaborated-type-specifiers are used to make class-names. An object of a class consists of a (possibly empty) sequence of members and base class objects.

class-specifier:
    class-head { member-specificationopt }

class-head:
    class-key attribute-specifier-seqopt class-head-name class-virt-specifieropt base-clauseopt
    class-key attribute-specifier-seqopt base-clauseopt

class-head-name:
    nested-name-specifieropt class-name

class-virt-specifier:
    final

class-key:
    class
    struct
    union

A class-specifier whose class-head omits the class-head-name defines an unnamed class. [ Note: An unnamed class thus can't be final. — end note ] A class-name is inserted into the scope in which it is declared immediately after the class-name is seen. The class-name is also inserted into the scope of the class itself; this is known as the injected-class-name. For purposes of access checking, the injected-class-name is treated as if it were a public member name. A class-specifier is commonly referred to as a class definition. A class is considered defined after the closing brace of its class-specifier has been seen even though its member functions are in general not yet defined. The optional attribute-specifier-seq appertains to the class; the attributes in the attribute-specifier-seq are thereafter considered attributes of the class whenever it is named. If a class is marked with the class-virt-specifier final and it appears as a class-or-decltype in a base-clause, the program is ill-formed. Complete objects and member subobjects of class type shall have nonzero size.107 [ Note: Class objects can be assigned, passed as arguments to functions, and returned by functions (except objects of classes for which copying or moving has been restricted; see [class.copy]). Other plausible operators, such as equality comparison, can be defined by the user; see [over.oper].
— end note ] A union is a class defined with the class-key union; it holds at most one data member at a time ([class.union]). [ Note: Aggregates of class type are described in [dcl.init.aggr]. — end note ] A trivially copyable class is a class:

— where each copy constructor, move constructor, copy assignment operator, and move assignment operator ([class.copy], [over.ass]) is either trivial or deleted,
— that has at least one non-deleted copy constructor, move constructor, copy assignment operator, or move assignment operator, and
— that has a trivial, non-deleted destructor.

[ Note: In particular, a trivially copyable or trivial class does not have virtual functions or virtual base classes. — end note ] A class S is a standard-layout class if it:

— has no non-static data members of type non-standard-layout class (or array of such types) or reference,
— has no virtual functions and no virtual base classes,
— has the same access control for all non-static data members,
— has no non-standard-layout base classes,
— has at most one base class subobject of any given type,
— has all non-static data members and bit-fields in the class and its base classes first declared in the same class, and
— has no element of the set M(S) of types (defined below) as a base class.108

M(X) is defined as follows:

— If X is a non-union class type with no (possibly inherited) non-static data members, the set M(X) is empty.
— If X is a non-union class type whose first non-static data member has type X0 (where said member may be an anonymous union), the set M(X) consists of X0 and the elements of M(X0).
— If X is a union type, the set M(X) is the union of all M(Ui) and the set containing all Ui, where each Ui is the type of the ith non-static data member of X.
— If X is an array type with element type Xe, the set M(X) consists of Xe and the elements of M(Xe).
— If X is a non-class, non-array type, the set M(X) is empty.

[ Note: M(X) is the set of the types of all non-base-class subobjects that are guaranteed in a standard-layout class to be at a zero offset in X.
— end note ] [ Example:

struct B { int i; };           // standard-layout class
struct C : B { };              // standard-layout class
struct D : C { };              // standard-layout class
struct E : D { char : 4; };    // not a standard-layout class

struct Q {};
struct S : Q { };
struct T : Q { };
struct U : S, T { };           // not a standard-layout class

— end example ] A standard-layout struct is a standard-layout class defined with the class-key struct or the class-key class. A standard-layout union is a standard-layout class defined with the class-key union. [ Note: Standard-layout classes are useful for communicating with code written in other programming languages. Their layout is specified in [class.mem]. — end note ] A POD struct109 is a non-union class that is both a trivial class and a standard-layout class, and has no non-static data members of type non-POD struct, non-POD union (or array of such types). If a class-head-name contains a nested-name-specifier, the class-specifier shall refer to a class that was previously declared directly in the class or namespace to which the nested-name-specifier refers, or in an element of the inline namespace set of that namespace. This ensures that two subobjects that have the same class type and that belong to the same most derived object are not allocated at the same address ([expr.eq]). [ Example: The definition

struct S { int a; };
struct S { int a; };   // error, double definition

is ill-formed because it defines S twice. — end example ] A class declaration introduces the class name into the scope where it is declared and hides any class, variable, function, or other declaration of that name in an enclosing scope. If a class name is declared in a scope where a variable, function, or enumerator of the same name is also declared, then when both declarations are in scope, the class can be referred to only using an elaborated-type-specifier. [ Example:

struct s { int a; };

void g() {
    struct s;               // hide global struct s with a block-scope declaration
    s* p;                   // refer to local struct s
    struct s { char* p; };  // define local struct s
    struct s;               // redeclaration, has no effect
}

— end example ] [ Note: Such declarations allow definition of classes that refer to each other. [ Example:

class Vector;

class Matrix {
    // ...
friend Vector operator*(const Matrix&, const Vector&); }; Declaration of friends is described in [class.friend], operator functions in [over.oper]. — end example ] — end note ] [ Note: An elaborated-type-specifier can also be used as a type-specifier as part of a declaration. It differs from a class declaration in that if a class of the elaborated name is in scope the elaborated name will refer to it. — end note ] [ Example: struct s { int a; }; void g(int s) { struct s* p = new struct s; // global s p->a = s; // parameter s } — end example ] [. — end note ] A typedef-name. member-specification: member-declaration member-specificationopt access-specifier : member-specificationopt member-declaration: attribute-specifier-seqopt decl-specifier-seqopt member-declarator-listopt ; function-definition using-declaration static_assert-declaration template-declaration deduction-guide alias-declaration empty. A direct member of a class X is a member of X that was first declared within the member-specification of X, including anonymous union objects ([class.union.anon]) and direct members thereof. Members of a class are data members, member functions, nested types, enumerators, and member templates and specializations thereof. [ Note: A specialization of a static data member template is a static data member. A specialization of a member function template is a member function. A specialization of a member class template is a nested class. — end note ] A member-declaration does not declare new members of the class if it is a static_assert-declaration, a using-declaration, or For any other member-declaration, each declared entity that is not an unnamed bit-field is a member of the class, and each such member-declaration shall either declare at least one member name of the class or declare at least one unnamed bit-field. A data member is a non-function member introduced by a member-declarator. A member function is a member that is a function. 
Nested types are classes ([class.name], [class.nest]) and enumerations declared in the class and arbitrary types declared as members by use of a typedef declaration or alias-declaration. The enumerators of an unscoped enumeration defined in the class are members of the class. A data member or member function may be declared static in its member-declaration, in which case it is a static member (see [class.static]) (a static data member ([class.static.data]) or static member function ([class.static.mfct]), respectively) of the class. Any other data member or member function is a non-static member (a non-static data member or non-static member function ([class.mfct.non-static]), respectively). [ Note: A non-static data member of non-reference type is a member subobject of a class object ([intro.object]). — end note ] A member shall not be declared twice in the member-specification, except that a nested class or member class template can be declared and then later defined, and an enumeration can be introduced with an opaque-enum-declaration and later redeclared with an enum-specifier. [ Note: A single name can denote several member functions provided their types are sufficiently different (Clause [over]). — end note ] A class is considered a completely-defined object type ([basic.types]) (or complete type) at the closing } of the class-specifier. Within the class member-specification, the class is regarded as complete within function bodies, default arguments, noexcept-specifiers, and default member initializers (including such things in nested classes). Otherwise it is regarded as incomplete within its own class member-specification. ] A brace-or-equal-initializer shall appear only in the declaration of a data member. (For static data members, see [class.static.data]; for non-static data members, see [class.base.init] and [dcl.init.aggr]). 
A brace-or-equal-initializer for a non-static data member specifies a default member initializer for the member, and shall not directly or indirectly cause the implicit definition of a defaulted default constructor for the enclosing class or the exception specification of that constructor. A member shall not be declared with the extern. A pure-specifier shall be used only in the declaration of a virtual function that is not a friend declaration.. Non-static data members of a (non-union) class with the same access control are allocated so that later members have higher addresses within a class object. The order of allocation of non-static data members with different access control is unspecified. Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; so might requirements for space for managing virtual functions and virtual base classes. If T is the name of a class, then each of the following shall have a name different from T: every static data member of class T; every member function of class T [ Note: This restriction does not apply to constructors, which do not have names — end note ] ; every member of class T that is itself a type; every member template of class T; every enumerator of every member of class T that is an unscoped enumerated type; and every member of every anonymous union that is a member of class T. In addition, if class T has a user-declared constructor, every non-static data member of class T shall have a name different from T. The common initial sequence of two standard-layout struct: struct A { int a; char b; }; struct B { const int b1; volatile char b2; }; struct C { int c; unsigned : 0; char b; }; struct D { int d; char b : 4; }; struct E { unsigned int e; char b; }; The common initial sequence of A and B comprises all members of either class. The common initial sequence of A and C and of A and D comprises the first member in each case. 
The common initial sequence of A and E is empty. — end example ] Two standard-layout struct types are layout-compatible classes if their common initial sequence comprises all members and bit-fields of both classes ([basic.types]). Two standard-layout unions are layout-compatible if they have the same number of non-static data members and corresponding non-static data members (in any order) have layout-compatible types. In a standard-layout union with an active member of struct type T1, it is permitted to read a non-static data member m of another union member of struct type T2 provided m is part of the common initial sequence of T1 and T2; the behavior is as if the corresponding member of T1 were nominated. [ Example: struct T1 { int a, b; }; struct T2 { int c; double d; }; union U { T1 t1; T2 t2; }; int f() { U u = { { 1, 2 } }; // active member is t1 return u.t2.c; // OK, as if u.t1.a were nominated } — end example ] [ Note: Reading a volatile object through a non-volatile glvalue has undefined behavior ([dcl.type.cv]). — end note ] ] [ Note: The object and its first subobject are pointer-interconvertible ([basic.compound], [expr.static.cast]). — end note ] A member function may be defined in its class definition, in which case it is an inline member function,. An inline member function (whether static or non-static) may also be defined outside of its class definition provided either its declaration in the class definition or its definition outside of the class definition declares the function as inline or constexpr. [ Note: Member functions of a class in namespace scope have the linkage of that class. Member functions of a local class have no linkage. See [basic.link]. — end note ] [ Note: There can be at most one definition of a non-inline member function in a program. There may be more than one inline member function definition in a program. See [basic.def.odr] and [dcl.inline]. 
— end note ] If the definition of a member function is lexically outside its class definition, the member function name shall be qualified by its class name using the :: operator. [ Note: A name used in a member function definition (that is, in the parameter-declaration-clause including the default arguments or in the member function body) is looked up as described in [basic.lookup]. — end note ] [ Example: struct X { typedef int T; static T count; void f(T); }; void X::f(T t = count) { } The member function f of class X is defined in global scope; the notation. — end example ] [ Note: A static local variable or local type in a member function always refers to the same entity, whether or not the member function is inline. — end note ] Member functions of a local class shall be defined inline in their class definition, if they are defined at all. [(); typedef void fvc() const; struct S { fv memfunc1; // equivalent to: void memfunc1(); void memfunc2(); fvc memfunc3; // equivalent to: void memfunc3() const; }; fv S::* pmfv1 = &S::memfunc1; fv S::* pmfv2 = &S::memfunc2; fvc S::* pmfv3 = &S::memfunc3; Also see [temp.arg]. — end note ] A non-static member function may be called for an object of its class type, or for an object of a class derived from its class type, using the class member access syntax ([over.match.call]). A non-static member function may also be called directly using the function call syntax ([expr.call], [over.match.call]) from within the body of a member function of its class or of a class derived from its class. If a non-static member function of a class X is called for an object that is not of type X, or of a type derived from X, the behavior is undefined. 
When an id-expression that is not part of a class member access syntax and not used to form a pointer to member ([expr.unary.op]) is used in a member of class X in a context where this can be used, if name lookup resolves the name in the id-expression to a non-static non-type member of some class C, and if either the id-expression is potentially evaluated or C is X or a base class of X, the id-expression is transformed into a class member access expression using (*this) as the postfix-expression to the left of the . operator. [ Note: If C is not X or a base class of X, the class member access expression is ill-formed. — end note ] Similarly during name lookup, when an unqualified-id used in the definition of a member function for class X resolves to a static member, an enumerator or a nested type of class X or of a base class of X, the unqualified-id is transformed into a qualified-id in which the nested-name-specifier names the class of the member function. These transformations do not apply in the template definition context ([temp.dep.type]). [ Example: struct tnode { char tword[20]; int count; tnode* left; tnode* right; void set(const char*, tnode* l, tnode* r); }; void tnode::set(const.110 — end example ] A non-static member function may be declared const, volatile, or const volatile. These cv-qualifiers affect the type of the this pointer. They also affect the function type of the member function; a member function declared const is a const member function, a member function declared volatile is a volatile member function and a member function declared const volatile is a const volatile member function. [ Example: struct X { void g() const; void h() const volatile; }; X::g is a const member function and X::h is a const volatile member function. — end example ] A non-static member function may be declared with a ref-qualifier ([dcl.fct]); see [over.match.funcs]. 
A non-static member function may be declared virtual ([class.virtual]) or pure virtual ([class.abstract]). See, for example, <cstring> ([c.strings]). In the body of a non-static member function, the keyword this is a prvalue expression*. [ Note: Thus in a const member function, the object for which the function is called is accessed through a const access path. — end note ] [ Example:. — end example ] Similarly, volatile semantics apply in volatile member functions when accessing the object and its non-static data members. A cv-qualified member function can be called on an object-expression. — end example ] Constructors and destructors shall not be declared const, volatile or const volatile. [ Note: However, these functions can be invoked to create and destroy objects with cv-qualified types, see [class.ctor] and [class.dtor]. — end note ] A static member s of class X may be referred to using the qualified-id expression X::s; it is not necessary to use the class member access syntax to refer to a static member. A static member may be referred to using the class member access syntax, in which case the object expression is evaluated. [ Example: struct process { static void reschedule(); }; process& g(); void f() { process::reschedule(); // OK: no object necessary g().reschedule(); // g() is called } — end example ] A static member may be referred to directly in the scope of its class or in the scope of a ] If an unqualified-id is used in the definition of a static member following the member's declarator-id, and name lookup. [ Note: See [expr.prim] for restrictions on the use of non-static data members and non-static member functions. — end note ] Static members obey the usual class member access rules. When used in the declaration of a class member, the static specifier shall only be used in the member declarations that appear within the member-specification of the class definition. 
[ Note: It cannot be specified in member declarations that appear in namespace scope. — end note ] [ Note: The rules described in [class.mfct] apply to static member functions. — end note ] [ Note: A static member function does not have a this pointer. — end note ] A static member function shall not be virtual. There shall not be a static and a non-static member function with the same name and the same parameter types ([over.load]). A static member function shall not be declared const, volatile, or const volatile. A static data member is not part of the subobjects of a class. If a static data member is declared thread_local there is one copy of the member per thread. If a static data member is not declared thread_local there is one copy of the data member that is shared by all the objects of the class. :: operator. The initial process. In the static data member definition, the initializer expression refers to the static data member running of class process. — end example ] [ Note: Once the static data member has been defined, it exists even if no objects of its class have been created. [ Example: In the example above, run_chain and running exist even if no objects of class process are created by the program. — end example ] — end note ] If a non-volatile. An inline static data member may be defined in the class definition and may specify a brace-or-equal-initializer. If the member is declared with the constexpr specifier, it may be redeclared in namespace scope with no initializer (this usage is deprecated; see ]). A member-declarator of the form identifieropt attribute-specifier-seqopt : constant-expression specifies a bit-field; its length is set off from the bit-field name by a colon. The optional attribute-specifier-seq appertains to the entity being declared. The bit-field attribute is not part of the type of the class member. The constant-expression shall be an integral constant expression with a value greater than or equal to zero. 
The value of the integral constant expression may be larger than the number of bits in the object representation of the bit-field's type; in such cases the extra bits are used as padding bits and do not participate in the value representation of the bit-field. be equal to zero. A bit-field shall not be a static member. A bit-field shall have integral or enumeration type ([basic.fundamental]).]. — end note ] { FALSE=0, TRUE=1 }; struct A { BOOL b:1; }; A a; void f() { a.b = TRUE; if (a.b == TRUE) // yields true { /* ... */ } } — end example ] A class can be declared within another class. A class declared within another is called a nested class. The name of a nested class is local to its enclosing class. The nested class is in the scope of its enclosing class. [ Note: See [expr.prim] for restrictions on the use of non-static data members and non-static member functions. — end note ] int x; int y; struct enclose { int x; static int s; struct inner { void f(int i) { int a = sizeof(x); // OK: operand of sizeof is an unevaluated operand ] Member functions and static data members of a nested class can be defined in a namespace scope enclosing the definition of their class. [ Example: struct enclose { struct inner { static int x; void f(int i); }; }; int enclose::inner::x = 1; void enclose::inner::f(int i) { /* ... */ } — end example ] ] Like a member function, a friend function defined within a nested class is in the lexical scope of that class; it obeys the same rules for name binding as a static member function of that class, but it has no special access rights to members of an enclosing class. Type names obey exactly the same scope rules as other names. In particular, type names defined within a class definition cannot be used outside their class without qualification. [ Example: struct X { typedef int I; class Y { /* ... 
*/ }; I a; }; I b; // error Y c; // error X::Y d; // OK X::I e; // OK — end example ] In a union, a non-static data member is active if its name refers to an object whose lifetime has begun and has not ended. At most one of the non-static data members of an object of union type, and if a non-static data member of an object of this standard-layout union type is active and is one of the standard-layout structs, it is permitted to inspect the common initial sequence of any of the standard-layout struct members; see [class.mem]. — end note ] The size of a union is sufficient to contain the largest of its non-static data members. Each non-static data member is allocated as if it were the sole member of a struct. [ Note: A union object and its non-static data members are pointer-interconvertible ([expr.static.cast]). As a consequence, all non-static data members of a union object have the same address. — end note ] A union can have member functions (including constructors and destructors), but it shall not have virtual functions. A union shall not have base classes. A union shall not be used as a base class. If a union contains a non-static data member of reference type the program is ill-formed. [ Note: Absent default member initializers, if any non-static data member of a union has a non-trivial default constructor ([class.ctor]), copy constructor, move constructor ([class.copy]), copy assignment operator, move assignment operator, or destructor, the corresponding member function of the union must be user-provided or it will be implicitly deleted for the union. — end note ] [ Example: Consider the following union: union U { int i; float f; std::string s; }; Since std::string declares non-trivial versions of all of the special member functions, U will have an implicitly deleted default constructor, copy/move constructor, copy/move assignment operator, and destructor. To use U, some or all of these member functions must be user-provided. 
— end example ] When the left operand of an assignment operator involves a member access expression, S(B) if B is of array type, and empty otherwise. Otherwise, S(E) is empty. In an assignment expression of the form E1 = E2 that uses either the built-in assignment operator or a trivial assignment operator, for each element X of S(E1), if modification of X would have undefined behavior under [basic.life], an object of the type of X is implicitly created in the nominated storage; no initialization is performed and the beginning of its lifetime is sequenced after the value computation of the left and right operands and before the assignment. [ Note: This ends the lifetime of the previously-active member of the union, if any. — end note ] [ Example: union A { int x; int y[4]; }; struct B { A a; }; union C { B b; int k; }; int f() { C c; // does not start lifetime of any union member c.b.a.y[3] = 4; // OK: S(c.b.a.y[3]) contains c.b and c.b.a.y; // creates objects to hold union members c.b and c.b.a.y return c.b.a.y[3]; // OK: c.b.a.y refers to newly created object (see [basic.life]) } struct X { const int a; int b; }; union Y { X x; int k; }; void g() { Y y = { { 1, 2 } }; // OK, y.x is active union member ([class.mem]) int n = y.x.a; y.k = 4; // OK: ends lifetime of y.x, y.k is active member of union y.x.b = n; // undefined behavior: y.x.b modified outside its lifetime, // S(y.x.b) is empty because X's default constructor is deleted, // so union member y.x's lifetime does not implicitly start } — end example ] [ Note: In general, one must use explicit destructor calls and placement new-expression-expression as follows: u.m.~M(); new (&u.n) N; — end example ]. [ Note: Nested types, anonymous unions,-member) member functions. A union for which objects, pointers, or references are declared is not an anonymous union. [ Example: void f() { default member initializer. 
[ Example: union U { int x = 0; union { int k; }; union { int z; int y = 1; // error: initialization for second variant member of U }; }; — end example ]..
https://timsong-cpp.github.io/cppwp/n4659/class
This article was written by the winner of 2021 Q.3.

Abstract

This article studies the implementation of the dropout method for predicting returns on the historical constituents of the Ibex 35. This methodology produces multiple predictions for the same input data, making it possible to obtain a mean and a standard deviation for the predictions. Using 100 predictions and a filter based on the standard deviations, some models generated positive returns in the test set, whereas the first individual prediction of each model lost money over the same period. These results illustrate the usefulness of including uncertainty in predictions. In addition, a custom metric was defined for training the models: it mirrors the Sharpe ratio, given that standard loss functions do not completely reflect reality for deep learning models applied in finance, because of the asymmetry in returns. Finally, the models are compared using different risk-adjusted return ratios, with the simple recurrent neural network being the worst-performing model in the test set. The LSTM and the GRU with the strictest filter obtained the best results for the ratios considered, and the 1D convolutional model performed better than the simple recurrent neural network.

1. Introduction

Deep learning has shown great advances in multiple fields. In this paper, it is used to predict next-day returns on the constituents of the Ibex 35. One of the great capabilities of deep learning is the automated extraction of features.
In that sense, four different models are compared, making predictions from the percentage changes of the open, close, high and low prices, together with the standard deviation, kurtosis and skewness of the returns over the last 22 days. These models always produce a prediction, even when there is high uncertainty in that prediction. Thus, by computing multiple predictions with each model, it is possible to average the results and compute the standard deviations to model the uncertainty of the prediction. In this paper this is achieved by randomly deactivating a percentage of the connections in the neural networks at prediction time, following the Monte Carlo dropout approach of Gal and Ghahramani (2016). Finally, these models are compared with a buy-and-hold of the Ibex 35 on the test set and with a single prediction of each model.

2. Literature review

There is a substantial body of literature studying the application of deep learning to predicting stock returns. Using monthly data and multiple input variables obtained from Refinitiv, different neural network structures are capable of predicting returns using MSE as the loss function and dropout layers to regularize the large number of parameters given the few data points available (Abe and Nakayama, 2018). A comparison of deep learning models against other machine learning algorithms finds the long short-term memory (LSTM) network to be the best structure for predicting stock market returns over multiple forecast windows (Nabipour et al., 2020), when evaluated on four regression losses, among them the MSE. Another study, on the constituents of the S&P 500, also finds LSTM cells to be the best-performing structure for predicting stock returns (Fischer and Krauss, 2018). Moreover, when using only a few stocks, an LSTM model performed better than a 1D convolutional layer in terms of MSE. Nevertheless, other studies show the predictive power of a convolutional layer when applied to a few stocks (Sayavong, Wu and Chalita, 2019).
In addition, using daily returns from the Chinese stock market, the convolutional layer shows some predictive capacity (Chen and He, 2018). As for the dropout technique for estimating uncertainty, other authors have proposed alternative methodologies such as variational dropout (Molchanov, Ashukha and Vetrov, 2017) or the single-shot MC dropout approximation (Brach, Sick and Dürr, 2020). For comparison, the Sharpe ratio (Sharpe, 1994), Sortino ratio (Sortino and Price, 1994) and information ratio (Goodwin, 1998) are computed for each model on the test set.

3. Methodology and analysis of results

In this part we start writing the required code. First of all we import the required libraries.

# !pip install tensorflow-gpu==1.15
# You might need to run the below:
# !pip install --upgrade pip
# !pip install --upgrade tensorflow-gpu
import eikon as ek
import pandas as pd
import sys
import plotly.graph_objects as go
import plotly as pyo
import numpy as np
import warnings
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow.experimental.numpy as tnp

The versions of the different libraries are shown to facilitate reproducibility.

print(sys.version)
for i, j in zip(["eikon", "pandas", "numpy", "plotly", "tensorflow"], [ek, pd, np, pyo, tf]):
    print(f"The Python library '{i}' imported in this script is version '{j.__version__}'")

3.7.5 (tags/v3.7.5:5c02a39a0b, Oct 15 2019, 00:11:34) [MSC v.1916 64 bit (AMD64)]
The Python library 'eikon' imported in this script is version '1.1.12'
The Python library 'pandas' imported in this script is version '1.3.0'
The Python library 'numpy' imported in this script is version '1.19.5'
The Python library 'plotly' imported in this script is version '5.1.0'
The Python library 'tensorflow' imported in this script is version '2.7.0'

In the next few steps, four neural networks predicting a stock's daily returns are compared.
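The three comparison ratios can be computed directly from a series of daily strategy returns. The following is a minimal sketch, not taken from the article's code: it assumes a zero risk-free rate, uses daily (population) standard deviations, and annualizes with 252 trading days; the function names are our own.

```python
import numpy as np

TRADING_DAYS = 252  # assumed annualization factor

def sharpe_ratio(returns, rf=0.0):
    # Mean excess return over its standard deviation, annualized.
    excess = np.asarray(returns) - rf
    return np.sqrt(TRADING_DAYS) * excess.mean() / excess.std()

def sortino_ratio(returns, rf=0.0):
    # Like the Sharpe ratio, but penalizes only downside deviation.
    excess = np.asarray(returns) - rf
    downside = np.minimum(excess, 0.0)
    return np.sqrt(TRADING_DAYS) * excess.mean() / np.sqrt((downside ** 2).mean())

def information_ratio(returns, benchmark):
    # Mean active return over tracking error, annualized.
    active = np.asarray(returns) - np.asarray(benchmark)
    return np.sqrt(TRADING_DAYS) * active.mean() / active.std()
```

In practice a small epsilon can be added to the denominators, since a strategy with constant returns (or one that exactly tracks its benchmark) would otherwise divide by zero.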
These models are composed of two layers, each followed by a batch normalization layer (Ioffe and Szegedy, 2015) and a dropout layer (Baldi and Sadowski, n.d.). After that there is a fully connected layer, and a final fully connected layer outputs the prediction. The two initial layers have 16 units each, the next fully connected layer has eight neurons, and the last layer uses a single neuron. The activation function is the hyperbolic tangent for every layer except the last one, which uses an exponential linear unit activation (Clevert, Unterthiner and Hochreiter, 2016) to reflect the asymmetry in stock returns, as a negative return cannot be lower than -100% while positive returns are unbounded. Finally, the dropout rate is set to 50%. The four models differ in the type of layer used for the two initial layers: two simple recurrent neural network cells, two gated recurrent units (GRU), two LSTM layers, or two one-dimensional convolutional layers, given their advantages for time series data (Kiranyaz et al., 2021), using a kernel size of five for the convolutional layers. To create these four models we define the following functions. The training parameter keeps the dropout layers active during prediction too.

def get_seq_model(neurons = 16, dr = 0.5, act = "tanh", cell = layers.LSTM):
    inputs = layers.Input(shape = (sequence, X_train.shape[1]))
    x = cell(neurons, activation = act, return_sequences = True)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(dr)(x, training = True)
    x = cell(neurons, activation = act)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(dr)(x, training = True)
    x = layers.Dense(8, activation = act)(x)
    outputs = layers.Dense(1, activation = "elu")(x)
    return tf.keras.Model(inputs, outputs)

def get_conv_model(neurons = 16, k = 5, dr = 0.5, act = "tanh"):
    inputs = layers.Input(shape = (sequence, X_train.shape[1]))
    x = layers.Conv1D(neurons, k, activation = act)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(dr)(x, training = True)
    x = layers.Conv1D(neurons, k, activation = act)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(dr)(x, training = True)
    x = layers.Flatten()(x)
    x = layers.Dense(8, activation = act)(x)
    outputs = layers.Dense(1, activation = "elu")(x)
    return tf.keras.Model(inputs, outputs)

We want to use the data for the constituents of the Ibex 35 from 1995 to 2020.
At the start of each year, the companies listed in the index are added to the study from that day onwards. We set the key for the API; this key is generated using the App Key Generator in Eikon. Note that an instance of Workspace or Eikon needs to be running on your machine for the cell below to work.

#()

The dates defined for the study are stored in the variable dates.

dates = pd.date_range('1995', '2020', freq = 'YS')

We create an empty dataframe that will be filled with the constituents of the Ibex 35, declare the chain for the index constituents, and define the required fields. We need at least one field to obtain the data; in this case the GICS sector was chosen.

df_full = pd.DataFrame()
chain = "0#.IBEX"  # Constituents of the Ibex 35
fields = ["TR.GICSSector"]

Then we fill this dataframe.

for date in dates:
    year = int(str(date.to_period('Y')))
    day = f"{year}-01-01"
    params = {'SDate': day, "EDate": day, "Frq": "FY"}
    # We obtain the constituents of the Ibex 35 at the start
    # of each year and store that date as a column
    df = ek.get_data(f'{chain}({day})', fields, params)[0]
    df["Date"] = f'{date.to_period("D")}'
    # Then we drop the missing values and append the rows to
    # the previously created dataframe
    df = df.replace('', np.nan).dropna()
    df_full = df_full.append(df)

We now have a dataframe containing the historical constituents of the Ibex 35 and the dates when they were listed in the index. We want to obtain another dataframe containing the daily returns of each company from the date it became part of the Ibex 35. In addition, given that the get_timeseries function of eikon only returns 3000 data points per call, we request at most 10 years with each call. As the high, low, close and volume information is not available until the market closes, we shift that data one day. Moreover, we compute the daily percentage change of the open price and create additional columns with rolling statistical measures of these changes.
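The rolling statistical measures mentioned above (standard deviation, kurtosis and skewness over the last 22 days) can be computed with pandas rolling windows. The snippet below is a hedged sketch rather than the article's own code: the 22-day window comes from the article's description, while the helper name and the derived column names are our assumptions.

```python
import pandas as pd

WINDOW = 22  # trading days, as described in the article

def add_rolling_features(df, col="OPEN", window=WINDOW):
    # Rolling moments of the daily percentage changes stored in `col`.
    out = df.copy()
    out[f"{col}_std"] = df[col].rolling(window).std()
    out[f"{col}_kurt"] = df[col].rolling(window).kurt()
    out[f"{col}_skew"] = df[col].rolling(window).skew()
    return out
```

The first window - 1 rows of each new column are NaN, which is one reason the per-stock dataframes are passed through dropna() before being appended.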
Finally, we create another column containing the next day's percentage change of the open, which will be used as the target for training the neural networks. There are a few companies for which data is not available, so we ignore those stocks.

df_full.T

# We store the names of the companies and the year they were listed in the index.
stocks = df_full.groupby("Instrument")["Date"].min().index
starts = df_full.groupby("Instrument")["Date"].min()

df_ts = pd.DataFrame()  # We will fill this dataframe with all the data

# This lambda function takes a year and returns the date range for the next `step` years
step = 10
to_date = lambda x: (f"{x}-01-01", f"{x+step}-12-31")
# e.g.: [to_date(x) for x in range(2000, 2020, step)]
# [('2000-01-01', '2010-12-31'), ('2010-01-01', '2020-12-31')]

cols = ['OPEN', 'CLOSE', 'HIGH', 'LOW', 'VOLUME']
for stock, start in zip(stocks, starts):
    year = int(start[:4])
    date_range = [to_date(x) for x in range(year, 2020, step)]
    for st, end in date_range:
        try:
            df = ek.get_timeseries(stock, cols, start_date=st, end_date = end, interval = "daily")
        except ek.EikonError:
            try:
                df = ek.get_timeseries(stock, cols, start_date=st, end_date = None, interval = "daily")
            except ek.EikonError:
                continue
        df["stock"] = stock
        df[cols[1:]] = df[cols[1:]].pct_change().shift(1)
        df["target"] = df["OPEN"].pct_change().shift(-1)
        df["OPEN"] = df["OPEN"].pct_change()
        # We create a multi-index appending the identifier of the stock to the 'dates' index.
        df = df.set_index("stock", append = True)
        df_ts = df_ts.append(df.dropna())

# Finally we sort the data by its index
df_ts = df_ts.sort_index()
df_ts

The obtained data is converted to numpy and scaled to lie between 0 and 1 using the statistics of the training set. It is divided into the training set, from the start of 1995 to the end of 2011; the validation set, from the start of 2012 to the end of 2015; and the test set, from the start of 2016 to the end of 2020.
for col in df_ts.columns:
    df_ts[col] = df_ts[col].astype(np.float32)

cols = df_ts.drop(columns="target").columns
X_train = df_ts.loc[:"2011", slice(None), :].drop(columns="target")
df_ts[cols] = (df_ts[cols] - X_train.min()) / (X_train.max() - X_train.min())
df_ts

X_train = df_ts.loc[:"2011", slice(None), :].drop(columns="target")
y_train = df_ts.loc[:"2011", slice(None), :].pop("target")
X_val = df_ts.loc["2012":"2015", slice(None), :].drop(columns="target")
y_val = df_ts.loc["2012":"2015", slice(None), :].pop("target")
X_test = df_ts.loc["2016":, slice(None), :].drop(columns="target")
y_test = df_ts.loc["2016":, slice(None), :].pop("target")
X_full = df_ts.drop(columns="target")

We will create sequences of 22 days of data that will be used as input for the neural networks.

sequence = 22
batch = 256
stocks = df_ts.index.get_level_values(1).unique()

We prepare the data and store it as TensorFlow datasets.

loc = lambda x, s: x.loc[slice(None), s, :]
to_ds = lambda x, y: tf.keras.preprocessing.timeseries_dataset_from_array(
    x, y, sequence_length=sequence, batch_size=batch)

ds_train = to_ds(loc(X_train, stocks[0]), loc(y_train, stocks[0]))
ds_val = to_ds(loc(X_val, stocks[0]), loc(y_val, stocks[0]))
ds_test = to_ds(loc(X_test, stocks[0]), loc(y_test, stocks[0]))

ds_err = {'train': [], 'val': [], 'test': []}
for stock in stocks[1:]:
    try:
        ds_train = ds_train.concatenate(to_ds(loc(X_train, stock), loc(y_train, stock)))
    except KeyError as err:
        ds_err['train'].append(err)
    try:
        ds_val = ds_val.concatenate(to_ds(loc(X_val, stock), loc(y_val, stock)))
    except KeyError as err:
        ds_err['val'].append(err)
    try:
        ds_test = ds_test.concatenate(to_ds(loc(X_test, stock), loc(y_test, stock)))
    except KeyError as err:
        ds_err['test'].append(err)
        continue

A custom loss is defined that attempts to mirror the Sharpe ratio, assuming 0% risk-free returns and 0.25% trading commissions, and adding a small constant to the denominator to avoid division by zero.
def loss(y_true, y_pred):
    dif = - tf.reduce_mean(y_true * y_pred - 0.0025 * tf.math.abs(y_pred)) / (
        tf.math.reduce_std(y_true * y_pred) + 1e-10)
    return dif

In addition, we want to stop the training if the loss does not improve by at least 0.01 for 15 consecutive epochs on the validation set or 7 consecutive epochs on the training set, restoring the weights that achieved the lowest loss.

early_stop_val = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=15, verbose=0,
    restore_best_weights=True, min_delta=0.01
)
early_stop_train = tf.keras.callbacks.EarlyStopping(
    monitor='loss', patience=7, verbose=0,
    restore_best_weights=True, min_delta=0.01
)

Finally, we want to schedule the learning rate during training. It will increase for the first 10 epochs; after that, the learning rate decreases following an exponential function.

def lr_sc(ep, lr):
    if ep <= 10:
        return 1.1 * lr
    return lr * tf.math.exp(-(ep - 10) / 100)

mod_lr = tf.keras.callbacks.LearningRateScheduler(lr_sc)

These models are trained using the RMSprop optimizer for a maximum of 100 epochs. The starting learning rate is set to 0.0003. We set the seed to 0 for reproducible results. Even though the neural networks are trained using multiprocessing (possible because the data was converted to a TensorFlow dataset), it might take some time until the training ends.

history = []
models = [get_seq_model(cell=layers.SimpleRNN),
          get_seq_model(cell=layers.GRU),
          get_seq_model(),
          get_conv_model()]

tf.random.set_seed(0)
for model in models:
    tf.keras.backend.clear_session()
    optimizer = tf.optimizers.RMSprop(3e-4)
    model.compile(optimizer=optimizer, loss=loss)
    history.append(
        model.fit(
            ds_train,
            epochs=100,
            validation_data=ds_val,
            verbose=0,  # This value can be increased to visualize the progress.
            callbacks=[early_stop_val, mod_lr, early_stop_train],
            use_multiprocessing=True,
        )
    )

To make the predictions of the models we define a function that takes the number of predictions we want to average. We store the average and the standard deviation of the predictions; the first prediction is also stored for comparison.

def make_predictions(samples=100):
    stocks = X_full.index.get_level_values(1).unique()
    for i, stock in enumerate(stocks):
        for m, model in enumerate(models):
            data = to_ds(X_full.loc[(slice(None), stock), :], None)
            predictions = [model.predict(data, use_multiprocessing=True)
                           for _ in range(samples)]
            mean = np.append([np.nan] * (sequence - 1),
                             tf.reduce_mean(predictions, axis=0).numpy())
            std = np.append([np.nan] * (sequence - 1),
                            tf.math.reduce_std(predictions, axis=0).numpy())
            first = np.append([np.nan] * (sequence - 1), predictions[0])
            df_ts.loc[(slice(None), stock), f"pred_mean_{m}"] = mean
            df_ts.loc[(slice(None), stock), f"pred_{m}"] = first
            df_ts.loc[(slice(None), stock), f"pred_std_{m}"] = std
        if (i % 5 == 0):
            print(f"{100 * i / len(stocks):.2f} %")

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    make_predictions()

Each day we take the sum of the absolute values of the predictions and divide each prediction by this number, so that the sum of these weighted predictions is one.

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    for m, model in enumerate(models):
        df_ts[f"pred_weighted_{m}"] = df_ts.groupby("Date")[f"pred_{m}"].apply(
            lambda x: x / (x.abs().sum()))

Finally, we compute the trading costs of 0.01% and define a few lambda functions to compute all the results.
cps = 0.0001
names = ["SimpleRNN", "GRU", "LSTM", "Convolutional"]

mask = lambda x, y: (df_ts.dropna()[f"pred_std_{x}"] * y
                     < df_ts.dropna()[f"pred_mean_{x}"].abs())
r_mean = lambda x, y: (
    df_ts.dropna()[mask(x, y)].groupby("Date")[f"pred_mean_{x}"]
    .apply(lambda z: z / (np.sum(np.abs(z))))
    * df_ts.dropna()[mask(x, y)]["target"]
).groupby("Date").sum()
costs_mean = lambda x, y: (
    df_ts.dropna()[mask(x, y)].groupby("stock")[f"pred_mean_{x}"]
    .apply(lambda z: z / (np.sum(np.abs(z))))
    .diff(1).abs() * cps
).groupby("Date").sum()
r_mean_total = lambda x, y: r_mean(x, y) - costs_mean(x, y)

r_single = lambda x: (df_ts.dropna()[f"pred_weighted_{x}"]
                      * df_ts.dropna()["target"]).groupby("Date").sum()
costs_single = lambda x: (df_ts.dropna().groupby("stock")[f"pred_weighted_{x}"]
                          .diff(1).abs() * cps).groupby("Date").sum()
r_single_total = lambda x: r_single(x) - costs_single(x)

# Setting up the sigma range, up to a maximum of 10:
max_sigma_l = []
for x in range(len(names)):
    max_sigma = 0
    while True in (mask(x, max_sigma + 1)).to_list() and max_sigma < 10:
        max_sigma += 1
    max_sigma_l.append(max_sigma)
max_sigma = min(max_sigma_l)

if max_sigma >= 10:
    sigmas = [0, 2, 3, 5, 10]
elif max_sigma > 5:
    sigmas = [0, 2, 3, 5, max_sigma]
elif max_sigma <= 5:
    sigmas = [0, 2, 3, 5]
    print(f"Have too few sigmas, only up to {max_sigma}")

rets_mean = [r_mean_total(x, sigma) + 1 for x, _ in enumerate(names) for sigma in sigmas]
rets_single = [r_single_total(x) + 1 for x, _ in enumerate(names)]
names_mean = ["Average prediction " + name + f" {y} sigmas filter"
              for name in names for y in sigmas]
names_single = ["Single prediction " + name for name in names]

For comparison we also take the data of the index in a variable.

df_ibex = ek.get_timeseries(chain[2:], "OPEN", start_date="2016-01-01",
                            end_date="2020-12-31", interval="daily")
df_ibex["OPEN"] = df_ibex["OPEN"].pct_change()
rets_b = df_ibex.dropna()["OPEN"] + 1

Finally, we define a function to compute the returns and plot its results.
def plot_returns(title="", date="2016", single=True, mean=True, std=max_sigma):
    rets_mean_ = [rets_mean[x] for x in range(sigmas.index(std), len(rets_mean), len(sigmas))]
    names_mean_ = [names_mean[x] for x in range(sigmas.index(std), len(names_mean), len(sigmas))]
    fig = go.Figure()
    if mean:
        [fig.add_trace(go.Scatter(
            x=r.loc[date:"2020"].index,
            y=(r.loc[date:"2020"]).cumprod(),
            name=n, mode="lines"))
         for r, n in zip(rets_mean_, names_mean_)]
    if single:
        [fig.add_trace(go.Scatter(
            x=r.loc[date:"2020"].index,
            y=(r.loc[date:"2020"]).cumprod(),
            name=n))
         for r, n in zip(rets_single, names_single)]
    fig.add_trace(go.Scatter(
        x=rets_b.loc[date:"2020"].index,
        y=(rets_b.loc[date:"2020"]).cumprod(),
        name="BuyHold"))
    fig.update_annotations(font=dict(size=20))
    fig.update_layout(
        title=f"{title}",
        xaxis_title="Date",
        yaxis_title="Returns (%)",
        yaxis_tickformat='%',
        template="plotly_dark",
        font=dict(size=16),
    )
    fig.show()

Finally we plot the results of our models, compared with buy and hold on the index, for the sigmas defined above.

plot_returns(title="First prediction of the models", mean=False)
for std in sigmas:
    plot_returns(title=f"Average predictions using {std} sigmas filter",
                 single=False, std=std)

The yearly Sharpe ratio, Sortino ratio and information ratio are computed for each model using the functions defined below. The risk-free return is assumed to be 0% and the benchmark for the information ratio is the return obtained through buy and hold on the Ibex 35, excluding commissions, during the test set.
def sharpe_ratio(returns):
    return np.sqrt(returns.groupby(returns.index.year).count().mean()) \
        * (returns - 1).mean() / (returns - 1).std()

def information_rate(returns, benchmark):
    return (returns - benchmark).mean() / (returns - benchmark).std()

def sortino_ratio(returns):
    return np.sqrt(returns.groupby(returns.index.year).count().mean()) \
        * (returns - 1).mean() / (returns[(returns - 1) < 0] - 1).std()

We store the results of our models in a DataFrame.

date = "2016"
df_rets = pd.DataFrame(columns=["Sharpe ratio", "IR", "Sortino ratio"], dtype=np.float64)
for i, ret in enumerate(rets_mean):
    ret = ret.loc[date:]
    rets_b = rets_b.loc[date:]
    df_rets.loc[names_mean[i]] = sharpe_ratio(ret), information_rate(ret, rets_b), sortino_ratio(ret)
for i, ret in enumerate(rets_single):
    ret = ret.loc[date:]
    rets_b = rets_b.loc[date:]
    df_rets.loc[names_single[i]] = sharpe_ratio(ret), information_rate(ret, rets_b), sortino_ratio(ret)
df_rets.loc["buyhold"] = sharpe_ratio(rets_b.loc[date:]), 0, sortino_ratio(rets_b.loc[date:])

Finally we show the results of this DataFrame sorted by its Sharpe ratio.

df_rets.sort_values(by="Sharpe ratio", ascending=False)

import pickle   # need to 'pip install pickle-mixin'
import inspect  # Needed to check what object is or isn't a module object; pickle cannot 'pickle' module objects.

alll, alll_err, alll_names = [[], []], [], []
pickle_out = open("MCDropout_005_module_test.pickle", "wb")
for i in dir():
    try:
        exec(f"pickle.dump({i}, pickle_out)")
        alll_names.append(i)
    except:
        alll_err.append(i)
pickle_out.close()

for i in alll_names:
    alll[0].append(i)
    exec(f"alll[1].append({i})")

# To save data out:
pickle_out = open("MCDropout_005.pickle", "wb")  # This creates the '.pickle' file where our data of choice will be saved. 'wb' stands for 'write bytes'.
pickle.dump(alll, pickle_out)  # 'pickle_out' specifies the '.pickle' file we want to write to (everything previously in that file will be overwritten)
pickle_out.close()  # We need to close this '.pickle' file; leaving it open could corrupt it.

# To load data in:
# pickle_in = open("MCDropout_005.pickle", "rb")  # 'rb' stands for 'read bytes'.
# allll = pickle.load(pickle_in)
# pickle_in.close()  # We ought to close the file we opened to allow other programs access if they need it.

4. Conclusions

By modelling the uncertainty through the standard deviation of the predictions, it is possible to obtain better results than using a single prediction. In addition, a few models could generate positive returns in the test set, even when incorporating trading commissions. Moreover, these models only used inputs based on the scaled raw percentage change of prices plus some statistical measures of the returns over sequences of 22 days; other inputs, such as fundamental data or some transformations of the data, might produce much better results.

Furthermore, even though the convolutional layer performs better than the simple recurrent neural network cells, the simple RNN outperformed it with the filter of highest standard deviations (9 in this case). However, the convolutional model using 5 standard deviations as a filter obtained a better, positive information ratio, while the GRU with the same filter produced a negative one.

To sum up, incorporating MC dropout into the predictions of deep learning models has shown a great effect for predicting returns for the historical constituents of the Ibex 35.

Future lines of work

Most of the models lost money on the test set even while ignoring trading commissions, thus more informative inputs might produce much better results.
In addition, given that the predicted returns were divided by the sum of the absolute values of the predictions for that date, using sequences of data for all the stocks and predicting a relative value for each of them might yield better results. Other methods for modelling uncertainty in predictions might be explored and compared with this methodology. In addition, performing more predictions under this methodology would produce a better estimation of the standard deviation of the predictions, although it requires high computing capacity, given that the predictions took more time than training the models.
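The core mechanism relied on throughout, keeping dropout active at prediction time so that repeated calls to the model give different outputs, can be sketched in plain NumPy (the tiny network, its fixed weights and the absolute-value nonlinearity are illustrative, not the article's models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed "trained" weights of a tiny one-layer network.
W = rng.normal(size=(4, 8))
w_out = rng.normal(size=8)

def predict_mc(x, drop=0.5):
    # Dropout stays on at prediction time: each call samples a fresh mask,
    # so repeated predictions for the same input differ.
    h = np.abs(x @ W)                       # nonlinearity kept positive for the demo
    mask = rng.random(8) >= drop            # random dropout mask
    return float((h * mask / (1 - drop)) @ w_out)

x = rng.normal(size=4)
preds = np.array([predict_mc(x) for _ in range(200)])

# The mean is the averaged prediction; the spread is the uncertainty estimate.
print(round(preds.mean(), 3), round(preds.std(), 3))
```

This is the same idea the article exploits when it filters signals whose mean prediction is small relative to its standard deviation.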
https://developers.uat.refinitiv.com/en/article-catalog/article/monte-carlo-dropout-for-predicting-prices-with-deep-learning-and-tensorflow
/*******************************************************************************
 * Copyright (c) ...
 *
 * Note: This class is copied from org.eclipse.core.resources
 *
 * @since 3.1
 *******************************************************************************/
package org.eclipse.core.internal.preferences;

import java.util.HashMap;

public final class StringPool {
    private int savings;
    private final HashMap map = new HashMap();

    /**
     * Adds a <code>String</code> to the pool. Returns a <code>String</code>
     * that is equal to the argument but that is unique within this pool.
     * @param string The string to add to the pool
     * @return A string that is equal to the argument.
     */
    public String add(String string) {
        if (string == null)
            return string;
        Object result = map.get(string);
        if (result != null) {
            if (result != string)
                savings += 44 + 2 * string.length();
            return (String) result;
        }
        map.put(string, string);
        return string;
    }

    /**
     * Returns an estimate of the size in bytes that was saved by sharing strings in
     * the pool. In particular, this returns the size of all strings that were added to the
     * pool after an equal string had already been added. This value can be used
     * to estimate the effectiveness of a string sharing operation, in order to
     * determine if or when it should be performed again.
     *
     * In some cases this does not precisely represent the number of bytes that
     * were saved. For example, say the pool already contains string S1. Now
     * string S2, which is equal to S1 but not identical, is added to the pool five
     * times. This method will return the size of string S2 multiplied by the
     * number of times it was added, even though the actual savings in this case
     * is only the size of a single copy of S2.
     */
    public int getSavedStringCount() {
        return savings;
    }
}
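A quick demonstration of the pooling behavior and the savings estimate (the pool is condensed here so the snippet is self-contained; the 44 + 2 * length figure is the class's own per-string overhead estimate):

```java
import java.util.HashMap;

public class StringPoolDemo {
    // Condensed copy of the StringPool above, for a runnable demo.
    static final class Pool {
        private int savings;
        private final HashMap<String, String> map = new HashMap<>();

        String add(String s) {
            if (s == null) return null;
            String result = map.get(s);
            if (result != null) {
                if (result != s) savings += 44 + 2 * s.length();
                return result;
            }
            map.put(s, s);
            return s;
        }

        int getSavedStringCount() { return savings; }
    }

    public static void main(String[] args) {
        Pool pool = new Pool();
        String a = new String("hello");
        String b = new String("hello"); // equal to a, but a distinct object
        String ra = pool.add(a);
        String rb = pool.add(b);
        // Both calls return the same pooled instance...
        System.out.println(ra == rb);                   // true
        // ...and the estimated saving is 44 + 2 * 5 = 54 bytes.
        System.out.println(pool.getSavedStringCount()); // 54
    }
}
```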
http://kickjava.com/src/org/eclipse/core/internal/preferences/StringPool.java.htm
no arrays. hi guys. this is my code. I've been changing it back and forth trying various things, but for the most part this is the general skeleton. really frustrating, I have spent about 30 hours on this and still can't get it, so it is extremely discouraging. all help will be greatly appreciated. my question isn't necessarily how to fix the error but to see if my approach makes sense: whether what I've done thus far can actually be completed, or if my code has to be rewritten or just slightly modified to work. I have scrapped the whole thing a total of 3 times and started from scratch, and now I am here.

import java.util.*;

class WhiteSpace {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        String text;   // variable for user's input
        String cTrim;
        System.out.println("enter desired width");
        int width = scan.nextInt();   // desired width input
        System.out.println("enter desired phrase");
        scan.nextLine();
        String keyboard = scan.nextLine();   // takes a line 2b formatted
        cTrim = keyboard.replaceAll("^\\s+", "");   // compresses whitespace
        text = cTrim.replaceAll("\\s+", " ");
        System.out.println(text.length());
        int i = 0;
        int lastspace = -1;
        int linestart = 0;
        int lastprinted = 0;
        int k = 0;
        // hello what are you d
        // oing
        // lastspace = 5, 10, 14, 18
        // at i = 20, lastspace = 18
        if (text.length() < width)
            System.out.println(text);
        else {
            while (i < text.length()) {
                while (i < linestart + width + 1) {   // while i < 21
                    if (text.charAt(i) == ' ')
                        lastspace = i;   // lastspace is 18
                    if (i > linestart + width - 1) {   // i is 20 if bigger than 19
                        if (lastspace != -1) {   // 18
                            for (i = lastprinted; i < lastspace; i++) {   // i = 0; < 18
                                System.out.print(text.charAt(i));   // 0 - 17 hello how are you
                                k++;
                            }
                            if (i == lastspace)   // i = 17
                                System.out.println();
                            i = lastprinted + k;   // i would be 17 + 0
                        }
                    }
                    i++;
                }
                linestart = lastspace + 1;   // linestart is now 19
                lastspace = -1;              // lastspace no longer 18
                lastprinted = i + 1;         // last printed is 18
            }
        }
    }
}

I've pretty much been testing it with a width of 20 and the input of "hello how are you doing". my desired output is as follows:

hello how are you
doing

so my comments are just for me to keep track of where i currently is, to be accurate. I'm trying to do left justification. my idea here is that I detect the last space in a line and println() at that spot so that the next printed char is in a new line. then I want to detect the last space in that 2nd line and so forth and so forth. my problem is figuring out how I can print all the characters while not screwing up the incrementing i that has to continue to discover the last space in a line. hope that makes sense. all help will be greatly appreciated. thanks in advance.
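For comparison, one common greedy approach tracks only the current line length instead of juggling indices into the whole string. A sketch (the class and method names are mine, and words longer than the width are not split):

```java
public class WrapDemo {
    // Greedy left-justified word wrap: start a new line before any word
    // that would push the current line past the width.
    public static String wrap(String text, int width) {
        String[] words = text.trim().split("\\s+");
        StringBuilder out = new StringBuilder();
        int lineLen = 0;
        for (String w : words) {
            if (lineLen > 0 && lineLen + 1 + w.length() > width) {
                out.append('\n');   // word would overflow: break the line
                lineLen = 0;
            } else if (lineLen > 0) {
                out.append(' ');    // separate words already on the line
                lineLen++;
            }
            out.append(w);
            lineLen += w.length();
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // With width 20 this prints "hello how are you" then "doing".
        System.out.println(wrap("hello how are you doing", 20));
    }
}
```

Because each word is consumed exactly once, there is no risk of the position counter getting out of sync with what has been printed.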
http://www.javaprogrammingforums.com/whats-wrong-my-code/7390-making-text-editor-using-string-class.html
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 1.12, "How to Add Your Own Methods to the String Class."

Scala FAQ: Can you share an example of how to create an implicit class in Scala 2.10 (and newer)?

This will let you write code like this:

"HAL".increment

instead of this:

StringUtilities.increment("HAL")

Solution

In Scala 2.10, you define an implicit class, and then define methods within that class to implement the behavior you want. You can see this in the REPL. First, define your implicit class and method(s):

scala> implicit class StringImprovements(s: String) {
     |   def increment = s.map(c => (c + 1).toChar)
     | }
defined class StringImprovements

Once this is done you can invoke your increment method on any String:

scala> val result = "HAL".increment
result: String = IBM

In real-world code, this is just slightly more complicated. According to SIP-13, Implicit Classes, "An implicit class must be defined in a scope where method definitions are allowed (not at the top level)." This means that your implicit class must be defined in one of these places:

- A class
- An object
- A package object

Put the implicit class in an object

One way to satisfy this condition is to put the implicit class inside an object. For instance, you can place the StringImprovements implicit class in an object such as a StringUtils object, as shown here:

package com.alvinalexander.utils

object StringUtils {
  implicit class StringImprovements(val s: String) {
    def increment = s.map(c => (c + 1).toChar)
  }
}

You can then use the increment method somewhere else in your code, after adding the proper import statement:

package foo.bar

import com.alvinalexander.utils.StringUtils._

object Main extends App {
  println("HAL".increment)
}

Put the implicit class in a package object

Another way to satisfy the requirement is to put the implicit class in a package object.
With this approach, place the following code in a file named package.scala, in the appropriate directory. If you're using SBT, you should place the file in the src/main/scala/com/alvinalexander directory of your project, containing the following code:

package com.alvinalexander

package object utils {
  implicit class StringImprovements(val s: String) {
    def increment = s.map(c => (c + 1).toChar)
  }
}

When you need to use the increment method in some other code, use a slightly different import statement from the previous example:

package foo.bar

import com.alvinalexander.utils._

object MainDriver extends App {
  println("HAL".increment)
}

See Recipe 6.7 of the Scala Cookbook, "Putting Common Code in Package Objects," for more information about package objects.

Using versions of Scala prior to version 2.10

If for some reason you need to use a version of Scala prior to version 2.10, you'll need to take a slightly different approach to solve this problem. In this case, define a method named increment in a normal Scala class:

class StringImprovements(val s: String) {
  def increment = s.map(c => (c + 1).toChar)
}

Next, define another method to handle the implicit conversion:

implicit def stringToString(s: String) = new StringImprovements(s)

The String parameter in the stringToString method essentially links the String class to the StringImprovements class. Now you can use increment as in the earlier examples:

"HAL".increment

Here's what this looks like in the REPL:

scala> class StringImprovements(val s: String) {
     |   def increment = s.map(c => (c + 1).toChar)
     | }
defined class StringImprovements

scala> implicit def stringToString(s: String) = new StringImprovements(s)
stringToString: (s: String)StringImprovements

scala> "HAL".increment
res0: String = IBM

Discussion

As you just saw, in Scala you can add new functionality to closed classes by writing implicit conversions and bringing them into scope when you need them.
A major benefit of this approach is that you don't have to extend existing classes to add the new functionality, as you would have to do in a more restricted OOP language. For instance, there's no need to create a new class named MyString that extends String and then use MyString throughout your code instead of String; instead, you define the behavior you want, and then add that behavior to all String objects in the current scope when you add the import statement.

Note that you can define as many methods as you need in your implicit class. The following code shows both increment and decrement methods, along with a method named hideAll that returns a String with all characters replaced by the * character:

implicit class StringImprovements(val s: String) {
  def increment = s.map(c => (c + 1).toChar)
  def decrement = s.map(c => (c - 1).toChar)
  def hideAll = s.replaceAll(".", "*")
}

Notice that except for the implicit keyword before the class name, the StringImprovements class and its methods are written as usual. By simply bringing the code into scope with an import statement, you can use these methods, as shown here in the REPL:

scala> "HAL".increment
res0: String = IBM

Here's a simplified description of how this works:

- The compiler sees a string literal HAL.
- The compiler sees that you're attempting to invoke a method named increment on the String.
- Because the compiler can't find that method on the String class, it begins looking around for implicit conversion methods that are in scope that accept a String argument.
- This leads the compiler to the StringImprovements class, where it finds the increment method.

That's an oversimplification of what happens, but it gives you the general idea of how implicit conversions work. For more details on what's happening here, see SIP-13, Implicit Classes.

Annotate your method return type

It's recommended that the return type of implicit method definitions should be annotated.
If you run into a situation where the compiler can't find your implicit methods, or you just want to be explicit when declaring your methods, add the return type to your method definitions. In the increment, decrement, and hideAll methods shown here, the return type of String is made explicit:

implicit class StringImprovements(val s: String) {
  // being explicit that each method returns a String
  def increment: String = s.map(c => (c + 1).toChar)
  def decrement: String = s.map(c => (c - 1).toChar)
  def hideAll: String = s.replaceAll(".", "*")
}

Returning other types

Although all of the methods shown so far have returned a String, you can return any type you need from your methods. The following class demonstrates several different types of string conversion methods:

implicit class StringImprovements(val s: String) {
  def increment = s.map(c => (c + 1).toChar)
  def decrement = s.map(c => (c - 1).toChar)
  def hideAll: String = s.replaceAll(".", "*")
  def plusOne = s.toInt + 1
  def asBoolean = s match {
    case "0" | "zero" | "" | " " => false
    case _ => true
  }
}

With these new methods you can now perform Int and Boolean conversions, in addition to the String conversions shown earlier:

scala> "4".plusOne
res0: Int = 5

scala> "0".asBoolean
res1: Boolean = false

scala> "1".asBoolean
res2: Boolean = true

Note that all of these methods have been simplified to keep them short and readable. In the real world, you'll want to add some error-checking.
https://alvinalexander.com/scala/scala-2.10-implicit-class-example/
I see XForms in particular as crucial to keeping the Web from being eaten by the Web Services and "Rich Internet Applications" that have been eating around its edges for a while. The Web has to move forward if it hopes to survive, as its promise of cheap interoperability (which it did remarkably well, even with the Browser Wars) is under assault once more from vendors who have a lot to gain by fragmenting the Web and Web applications into proprietary pieces under their control.

Much Needed

Yes, as both the Gecko and Mozilla engines are coming up to the level of standards support they should have had two years ago, I think it is strongly important that we start looking at XHTML 2.0 and make some improvements on where we would like to see the web go - visually. As much as my designer mind would like to see Flash improve, my programmer mind knows that can't be the future, and we need to start creating new standards to allow developers to better control the virtual environment which they are creating. XForms would be a great start... and hopefully the start of a new beginning.

Out of touch?

As a sometime Mozilla QA flunky, I must say that I sometimes feel that the HTML and CSS working groups are the only ones in touch with reality. I read www-tag occasionally, where there's always a tremendous amount of heat and light being generated over various abstractions: RDF! bags! namespaces! RDDL! what's an URI? REST! SOAP! XPointer, XML Schema, Cthulhu ftagn (as Joe English would say). Then I go triage Bugzilla bugs, and we're still trying to get people to close their tags so we don't pop a stack.

The HTML WG is making a valiant effort to clear up the masses of cruft that have accumulated on top of HTML (which didn't exactly start as a rich semantic language), and the CSS WG is developing CSS3 and whittling CSS2 down to a commonly implemented subset. All the X* stuff has its uses, and I'm sure XSLT, etc. are holding up the back ends of some web sites, providing web services, and so forth, but the-Web-as-it-is-browsed is only taking the first little steps towards being XML. We need a well-designed language expressing basic web semantics, so that we can crawl, before we can stand up and walk with ontologies and the Semantic Web.
http://www.oreillynet.com/xml/blog/2002/11/xhtml_20_and_the_health_of_the.html
public class Test {
    static String symbol;

    public static void main(String[] args) {
        String symbol = args[0];
        letter();
    }

    static void letter() {
        System.out.println(symbol);
    }
}

When you write String symbol = args[0]; in the main method you are declaring a new local variable, which shadows (hides) the static field called symbol. When you call letter, that local variable is no longer in scope (it is visible only inside main), and thus the value of the static field is printed, but that value is null since the static field was never assigned. Change that line to symbol = args[0]; so it stores the value of args[0] in the static field instead.
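For reference, here is the corrected class with the shadowing local removed (the args.length guard is an addition so the snippet also runs without arguments):

```java
public class Test {
    static String symbol;

    public static void main(String[] args) {
        // Assign to the static field instead of declaring a new local:
        symbol = args.length > 0 ? args[0] : "example";
        letter();
    }

    static void letter() {
        System.out.println(symbol); // prints the assigned value, no longer null
    }
}
```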
https://codedump.io/share/hizkUoz1J5FC/1/why-i-cant-use-the-value-of-a-static-variable
Modules API reference

Supported languages

Currently modules written in C and Lua(JIT) are supported.

The anatomy of an extension

A module is a shared object or script defining specific functions/fields; here's an overview. The X corresponds to the module name; if the module name is hints, the prefix for the constructor would be hints_init(). More details are in the docs for the kr_module and kr_layer_api structures.

Note: The modules get ordered - by default, in the same order in which they were loaded. The loading command can specify where in the order the module should be positioned.

..., especially when JIT-compilation is taken into account.

Note: The Lua functions receive an additional first parameter compared to their C counterparts - a "state". Most useful C functions and structures have Lua FFI wrappers, sometimes with extra sugar: some functions do not get called at all if state == kres.FAIL; see the docs for kr_layer_api for details.

Since the modules are like any other Lua modules, you can interact with them through the CLI and any interface.

Tip: Module discovery: kres_modules. is prepended to the module name and the Lua search path is used on that.

Writing a module in C

As almost all the functions are optional, the minimal module looks like this:

#include "lib/module.h"
/* Convenience macro to declare module ABI. */

Configuring modules

There is a callback X_config() that you can implement; see the hints module.

Exposing (e
https://knot-resolver.readthedocs.io/en/latest/modules_api.html
CC-MAIN-2019-39
BootFile.default is not a function

I have a small JS library that was written for Node.JS that I need to import into my Quasar project. The library works fine when bundled into the bundle.js file of a standard Webpack project, but in Chrome I get Lib1-boot.default is not a function (my Quasar boot file) when I try to use the library in my Quasar project. Here's my Quasar (Quasar CLI v1.0) setup: I've successfully added many boot files to other projects, so I'm familiar with the process. In this case, I've created the boot file with quasar new boot 'Lib1-boot'. That file contains the following:

// src/boot/Lib1-boot.js
import Lib1 from '../plugins/Lib1.js'

export default async ({ Vue }) => {
  Vue.use(Lib1)
};

I've added Lib1-boot to my Quasar.conf boot: [] section with my other boot files (all those boot files, including Axios, work fine). I've added import Lib1 from '../plugins/Lib1.js' to the top of the <script> block of Index.vue with my other imports. (I use VSCode, so it's easy to see that all my imported files are successfully detected and loaded when I start quasar dev, and my app loads fine.) For context, here's a simplified version of the code in my Lib1.js file:

// src/plugins/Lib1.js
// import library functions
var pb = require('./Lib2.js'); // A dependency of the Lib1 file in the same directory, which I've also created a `Lib2-boot.js` file for in `src/boot`.

Object.assign(module.exports, pb); // I think Quasar/Vue is probably choking on this.

module.exports.createMeParent = function(args) {
  var myFunc = createMe(args);
  return {
    getStuff: function(data) { return myFunc(data, pb.getStuffReq); },
    // lots of other function definitions that should be imported from Lib2
    . . . .
In my Index.vue template, when I click the button that's attached to my myFunction() method (which tries to call getStuff() and other functions exported by Lib1), I can see from the call stack that my method is trying to find the functions in my Lib1.js file, but it's not finding them because of the Lib1-boot.default is not a function error. So I know I'm calling the function from my method correctly, but it's simply not finding any of the exported functions in Lib1.js. I suspect that these import statements in my Lib1 file . . .

// src/plugins/Lib1.js
var pb = require('./Lib2.js');
Object.assign(module.exports, pb);

. . . need to somehow be adapted to the logic of my Lib1-boot file so that all the functions in the module.exports object are recognized and passed to Vue. I understand there's no default export defined in my lib files, which is probably part of the problem, but my knowledge of Quasar boot files (and Node.JS syntax) doesn't go beyond the examples that I've seen in the docs and various forum posts, so I'm not sure how to resolve the default export issue in this case. Rewriting the entire Lib1.js and Lib2.js files doesn't seem like it should be necessary. As I've been troubleshooting, I noticed that some libraries (e.g., Axios) are imported into Quasar with a different export default syntax using Vue.prototype... instead of Vue.use, e.g.:

export default async ({ Vue }) => {
  Vue.prototype.$axios = axios
}

So I tried using the same Vue.prototype.$Lib1 = Lib1 syntax, but that didn't help. I've spent a long time troubleshooting this. I think I'm close to resolving it, but I need some help please on how to properly import all the Lib1.js functions into my Quasar project. Please help!

Looks like it should be export default when looking at this: Lib1-boot.default is not a function

Thank you for your reply.
It seems like the problem is caused because the functions in the module.exports object in the Lib1 file are not being imported properly by Quasar/Vue, because the boot loading syntax in my src/boot/Lib1-boot file is not written with the syntax that the Quasar boot process is looking for. Does anybody understand why the Lib1.js functions are not being imported with the standard export default syntax?

UPDATE: I was able to successfully import all the functions from my Lib1 module (and its dependencies in Lib2) by simply bypassing the boot files and importing them directly into my Index.vue file like this:

import { func1, func2, func3, func4 . . . funcN } from './Lib1'

After solving a config issue with Babel, I realized the main issue was that I needed to import them all by name because there is no default export in Lib1. So everything works this way now, but is there any significant disadvantage to doing it this way, other than the inconvenience of having to import the modules in every page/component that I need them in? In other words, am I missing out on any other significant benefits by importing the modules directly into my pages/components, e.g., tree-shaking, code-splitting, etc.? Finally, now that we know what the problem was, can somebody please explain how I would apply the named imports to the boot files? So far, I've only seen boot file configurations like this . . .

import Lib1 from '../plugins/Lib1.js'

export default async ({ Vue }) => {
  Vue.use(Lib1)
};

. . . but what do we do when there is no default export? When I tried to import them in the boot file like I can in the page/component <script> block, it would not compile. So, as soon as I know how named imports would work in the boot files, I think I can use the boot files too, which would be great.
- metalsadman last edited by metalsadman
@Julia In a boot file you have access to Vue/store/router before the app is instantiated, so you can do something like binding your lib to a Vue prototype; then you can call it using this.$yourLib.someFunction somewhere in your SFC file. You can probably change how you do the exports in your lib, or try to import it in the boot file using a wildcard, i.e. import * as lib1 from 'yourPath'. Tree shaking is OK though if you don't need to instantiate anything before the Vue app, since you might not want to use some of your functions. Some reading.

- rstoenescu Admin last edited by
Don't mix require statements with import. Use import along with export ... syntax.

Thank you all for your feedback! @metalsadman Your suggestions guided me to the solution I was hoping to find. Now I have all the imports declared in the boot file, and it's easy to access all the functions from any component with this.$Lib1… Thank you!
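Putting the two replies together, a boot file that exposes the library's named exports might look like the sketch below. This is a hypothetical sketch following the paths and the $Lib1 name used in the thread, not code verified against a real Quasar project:

```javascript
// src/boot/Lib1-boot.js -- hypothetical sketch
// A wildcard import gathers every named export of Lib1 into one object,
// so no default export is required in Lib1.js.
import * as Lib1 from '../plugins/Lib1.js'

export default async ({ Vue }) => {
  // Bind the whole namespace to the Vue prototype; components can then
  // call this.$Lib1.getStuff(...) without importing Lib1 themselves.
  Vue.prototype.$Lib1 = Lib1
}
```

The design trade-off versus per-component named imports is convenience: the prototype binding makes every function reachable from any component, at the cost of pulling the whole library into the initial bundle rather than letting the bundler tree-shake unused functions.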
https://forum.quasar-framework.org/topic/3832/bootfile-default-is-not-a-function
CC-MAIN-2020-40
In one of the previous ASP.NET MVC tutorials, we discussed Html Helpers in ASP.NET MVC and got answers to the following questions: Standard Html Helpers are very useful but limited to common scenarios like rendering links and Html form elements. For a more specific scenario, we might need to create a Custom Html Helper. ASP.NET MVC lets us create a Custom Html Helper in the following simple ways: If we want a custom Html Helper to be used just like a standard Html helper, the available approach is to create an extension method on the HtmlHelper class. Custom Html Helpers we create using extension methods will be available through the Html property of the View. For the purpose of implementation, we will create a custom Html Helper, i.e. "CustomImage", using the extension method approach as follows:

namespace CustomHelpers
{
    public static class ImageHelper
    {
        public static MvcHtmlString CustomImage(this HtmlHelper htmlHelper, string src, string alt, int width, int height)
        {
            var imageTag = new TagBuilder("img"); // "img" is the correct HTML tag name
            imageTag.MergeAttribute("src", src);
            imageTag.MergeAttribute("alt", alt);
            imageTag.MergeAttribute("width", width.ToString());
            imageTag.MergeAttribute("height", height.ToString());
            return MvcHtmlString.Create(imageTag.ToString(TagRenderMode.SelfClosing));
        }
    }
}

In order to make this extension method available in all Views, we will add the CustomHelpers namespace to the namespaces section of the View's web.config as follows:

<add namespace="CustomHelpers" />

Now we can use the CustomImage helper in Views. We will pass the image source, alternate text, image width and height to our View as follows:

@Html.CustomImage("../Images/Ahmad.jpg", "Mohammad Ahmad", 150, 200)

Using the same approach, we can create any Custom Html Helper to simplify the task of writing lots of repetitive Html code. The second available approach for creating a Custom Html Helper is to use static methods. It's also as simple as the extension method approach above.
We will create a static method for TextBox that renders an HTML TextBox as a string.

namespace CustomHelpers
{
    public static class CustomTextBox
    {
        public static string TextBox(string name, string value)
        {
            // Close the input tag; the original snippet left it unterminated.
            return String.Format("<input id='{0}' name='{0}' value='{1}' />", name, value);
        }
    }
}

Verify that the namespace is added to the Web.Config namespaces section as we did before for the extension method approach. Now, we can simply use the CustomTextBox in our View as follows:

@CustomTextBox.TextBox("strStudentName", "Mohammad Ahmad")

We can use the static method approach to generate more HTML-rich Custom Helpers in ASP.NET MVC. The post 2 simple ways to create Custom Html Helper in ASP.NET MVC appeared first on Web Development.
http://www.codeproject.com/Articles/800862/simple-ways-to-create-Custom-Html-Helper-in-ASP-NE
CC-MAIN-2015-27
Name
Tcl_HandleAlloc, Tcl_HandleFree, Tcl_HandleTblInit, Tcl_HandleTblRelease, Tcl_HandleTblUseCount, Tcl_HandleWalk, Tcl_HandleXlate - dynamic, handle-addressable tables.

Synopsis
#include <tclExtend.h>

void_pt Tcl_HandleTblInit(const char *handleBase, int entrySize, int initEntries);
int Tcl_HandleTblUseCount(void_pt headerPtr, int amount);
void Tcl_HandleTblRelease(void_pt headerPtr);
void_pt Tcl_HandleAlloc(void_pt headerPtr, char *handlePtr);
void_pt Tcl_HandleXlate(Tcl_Interp *interp, void_pt headerPtr, const char *handle);
void_pt Tcl_HandleWalk(void_pt headerPtr, int *walkKeyPtr);
void Tcl_WalkKeyToHandle(void_pt headerPtr, int walkKey, char *handlePtr);
void Tcl_HandleFree(void_pt headerPtr, void_pt entryPtr);

Description
The Tcl handle facility provides a way to manage table entries that may be referenced by a textual handle from Tcl code. This is provided for applications that need to create data structures in one command, return a reference (i.e. a pointer) to that particular data structure, and then access that data structure in other commands. An example application is file handles. A handle consists of a base name, which is some unique, meaningful name such as 'file', and a numeric value appended to the base name (e.g. 'file3'). The handle facility is designed to provide a standard mechanism for building Tcl commands that allocate and access table entries based on an entry index. The tables are expanded when needed; consequently, pointers to entries should not be kept, as they become invalid when the table is expanded. If the table entries are large or pointers must be kept to the entries, then the entries should be allocated separately and pointers kept in the handle table. A use count is kept on the table. This use count is intended to determine when a table shared by multiple commands is to be released.

Tcl_HandleTblInit
Create and initialize a Tcl dynamic handle table.
The use count on the table is set to one.
Parameters:
o handleBase - The base name of the handle; the handle will be returned in the form "baseNN", where NN is the table entry number.
o entrySize - The size of an entry, in bytes.
o initEntries - Initial size of the table, in entries.
Returns: A pointer to the table header.

Tcl_HandleTblUseCount
Alter the handle table use count by the specified amount, which can be positive or negative. Amount may be zero to retrieve the use count.
Parameters:
o headerPtr - Pointer to the table header.
o amount - The amount to alter the use count by.
Returns: The resulting use count.

Tcl_HandleTblRelease
Decrement the use count on a Tcl dynamic handle table. If the count goes to zero or negative, then release the table.
Parameters:
o headerPtr - Pointer to the table header.

Tcl_HandleAlloc
Allocate an entry and associate a handle with it.
Parameters:
o headerPtr - A pointer to the table header.
o handlePtr - Buffer to return the handle in. It must be big enough to hold the name.
Returns: A pointer to the allocated entry (user part).

Tcl_HandleXlate
Translate a handle to an entry pointer.
Parameters:
o interp - An error message may be returned in the result.
o headerPtr - A pointer to the table header.
o handle - The handle assigned to the entry.
Returns: A pointer to the entry, or NULL if an error occurred.

Tcl_HandleWalk
Walk through and find every allocated entry in a table. Entries may be deallocated during a walk, but should not be allocated.
Parameters:
o headerPtr - A pointer to the table header.
o walkKeyPtr - Pointer to a variable used to keep track of the place in the table. The variable should be initialized to -1 before the first call.
Returns: A pointer to the next allocated entry, or NULL if there are no more.

Tcl_WalkKeyToHandle
Convert a walk key, as returned from a call to Tcl_HandleWalk, into a handle. The Tcl_HandleWalk must have succeeded.
Parameters:
o headerPtr - A pointer to the table header.
o walkKey - The walk key.
o handlePtr - Buffer to return handle in. It must be big enough to hold the name. Tcl_HandleFree Frees a handle table entry. Parameters: o headerPtr - A pointer to the table header. o entryPtr - Entry to free.
http://docs.activestate.com/activetcl/8.6/tcl/tclx/Handles.3.html
CC-MAIN-2019-04
putbq(9f) [bsd man page]

putbq(9F) Kernel Functions for Drivers putbq(9F)

NAME
putbq - place a message at the head of a queue

SYNOPSIS
#include <sys/stream.h>

int putbq(queue_t *q, mblk_t *bp);

INTERFACE LEVEL
Architecture independent level 1 (DDI/DKI).

PARAMETERS
q Pointer to the queue.
bp Pointer to the message block.

DESCRIPTION
putbq() places a message at the beginning of the appropriate section of the message queue. There are always sections for high priority and ordinary messages. If other priority bands are used, each will have its own section of the queue, in priority band order, after the high priority component. The flow control parameters are updated to reflect the change in the queue's status. If QNOENB is not set, the service routine is enabled.

RETURN VALUES
putbq() returns 1 upon success and 0 upon failure.
Note - Upon failure, the caller should call freemsg(9F) to free the pointer to the message block.

CONTEXT
putbq() can be called from user or interrupt context.

EXAMPLES
See the bufcall(9F) function page for an example of putbq().

SEE ALSO
bcanput(9F), bufcall(9F), canput(9F), getq(9F), putq(9F)
Writing Device Drivers
STREAMS Programming Guide

SunOS 5.10 28 Aug 2001 putbq(9F)

srv(9E) Driver Entry Points srv(9E)

NAME
srv - service queued messages

SYNOPSIS
#include <sys/types.h>
#include <sys/stream.h>
#include <sys/stropts.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

int prefixrsrv(queue_t *q); /* read side */
int prefixwsrv(queue_t *q); /* write side */

INTERFACE LEVEL
Architecture independent level 1 (DDI/DKI). This entry point is required for STREAMS.

ARGUMENTS
q Pointer to the queue(9S) structure.
DESCRIPTION
The optional service srv() routine may be included in a STREAMS module or driver for many possible reasons, including:
o to provide greater control over the flow of messages in a stream;
o to make it possible to defer the processing of some messages to avoid depleting system resources;
o to combine small messages into larger ones, or break large messages into smaller ones.
For each message, the service routine may, for example:
o Pass the message to the next stream component with putnext(9F).
At a minimum, a stream must distinguish between normal (priority zero) messages and high priority messages (such as M_IOCACK).

SunOS 5.10 12 Nov 1992 srv(9E)
https://www.unix.com/man-page/bsd/9f/putbq
CC-MAIN-2021-31
Get user data using django-social-auth

August 2018: Please note that this post was written for an old version of Django. It is left here for historical purposes only.

Recently we had to add support for social network login to an application we are developing, and we chose django-social-auth to work with. It is a well documented and easy to use Django application for authentication. But we wanted to do more than just authenticate the user; we wanted to get extra data like the profile picture, gender, etc. Fortunately, django-social-auth has a useful pipeline that you can extend to fulfill this kind of task. I will explain briefly how to achieve this. I will assume you already have authentication working, which is very well explained in the documentation. First of all, we should define the pipeline in our settings.py like this:

SOCIAL_AUTH_PIPELINE = (
    # ... the default pipeline entries, ending with:
    'social_auth.backends.pipeline.user.update_user_details',
    'auth_pipelines.pipelines.get_user_avatar',
)

After we have our pipeline ready, we need to implement the function, which looks like this:

from urllib2 import urlopen  # Python 2, as in the original post

# imports added for completeness (django-social-auth module paths)
from social_auth.backends.facebook import FacebookBackend
from social_auth.backends.twitter import TwitterBackend


def get_user_avatar(backend, details, response, social_user, uid, user, *args, **kwargs):
    url = None
    if backend.__class__ == FacebookBackend:
        # The image URL was lost from the original snippet; the Graph API
        # picture endpoint keyed by the user id fits the "%" placeholder.
        url = "http://graph.facebook.com/%s/picture" % response['id']
    elif backend.__class__ == TwitterBackend:
        url = response.get('profile_image_url', '').replace('_normal', '')
    if url:
        profile = user.get_profile()
        avatar = urlopen(url).read()
        fout = open(filepath, "wb")  # filepath is where to save the image
        fout.write(avatar)
        fout.close()
        profile.photo = url_to_image  # depends on where you saved it
        profile.save()

If you want to extend the functionality of getting the avatar from other social networks, you should check the backend and get the image URL accordingly. On the other hand, if you want to get another kind of information, like gender or age, you should define another pipeline item and do something like we did above. That's pretty much everything. Happy coding!
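The backend check inside get_user_avatar boils down to a small dispatch on the backend type. Here is a framework-free sketch of the same idea; the function name avatar_url is mine, and the Facebook Graph picture endpoint is an assumption, since the URL did not survive in the original snippet:

```python
def avatar_url(backend_name, response):
    """Return the profile-image URL for a given auth backend, or None."""
    if backend_name == "facebook":
        # Graph API picture endpoint keyed by the user's Facebook id
        # (assumed; the original post's URL was lost in extraction).
        return "http://graph.facebook.com/%s/picture" % response["id"]
    if backend_name == "twitter":
        # Twitter returns a thumbnail URL by default; dropping "_normal"
        # yields the full-size image, as in the original pipeline code.
        return response.get("profile_image_url", "").replace("_normal", "")
    return None
```

Adding support for another network is then just another branch that knows where that network keeps the image URL in its auth response.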
https://tryolabs.com/blog/2012/02/13/get-user-data-using-django-social-auth/
CC-MAIN-2020-34