#7521 closed defect (fixed) — ExtractUrlPlugin: setup.py doesn't work if you haven't already installed it

Opened 4 years ago. Closed 4 years ago.

Description:

Your setup.py contains this:

from tracextracturl.extracturl import __revision__ as coderev
from tracextracturl.macro import __revision__ as macrorev

setup.py should never import from your package. If you're lucky it works, but this can cause a host of problems. For one, pip can't determine the dependencies of your plugin unless everything your plugin imports is already installed, which makes installing a little difficult...

Attachments (0)

Change History (1)

comment:1 Changed 4 years ago by martin_s
- Resolution set to fixed
- Status changed from new to closed

Thanks for reporting this issue. Fixed in [8393].
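The usual way to avoid importing the package from setup.py is to read the revision string out of the source file with a regular expression instead of importing the module. The actual change in [8393] is not shown here, so the following is a sketch of the general technique, with a hypothetical helper name; the file path and the `__revision__ = "..."` assignment mirror the plugin's layout as described in the ticket:

```python
import re
from pathlib import Path

def read_revision(path):
    """Extract a __revision__ = "..." assignment from a source file
    without importing it, so setup.py works before installation."""
    text = Path(path).read_text()
    match = re.search(
        r'^__revision__\s*=\s*[\'"]([^\'"]+)[\'"]',
        text,
        re.MULTILINE,
    )
    if match is None:
        raise RuntimeError("no __revision__ found in %s" % path)
    return match.group(1)
```

In setup.py this would be called as, e.g., `coderev = read_revision("tracextracturl/extracturl.py")`, which pip can run without the package (or its dependencies) being installed.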
http://trac-hacks.org/ticket/7521
Introducing TurboSignals

Basics

TurboSignals is a simple library. It includes signal classes for up to ten parameters (Signal0, Signal1, …, Signal10) as well as a class for variable arguments (SignalN). These are paired with slot interfaces (Slot0, Slot1, …, Slot10 and SlotN) that you implement in order to receive the signal's callback. The purpose of an explicit slot type is to avoid using a Function variable, which is very slow. This was shown in my article on Runnables, along with how Runnables are a good strategy for avoiding that slowdown. Some helper classes (FunctionSlot0, FunctionSlot1, …, FunctionSlot10 and FunctionSlotN), however slow, are provided for when you really want to provide an arbitrary function.

Advanced Features

One nicety of TurboSignals is that the dispatch operation is "safe" in that calls to addSlot, removeSlot, and removeAllSlots during a dispatch will not affect which slots are called. One drawback is that the parameters to dispatch are all untyped (*), so there is no compile-time checking of the parameters and type errors are possible at runtime. The same is true of the Event/EventDispatcher system and of as3signals, though the latter explicitly checks for this problem at runtime to give more informative errors.

Usage Example (faster)

This example runs at maximum speed, which is probably not needed for simple button clicks.

public class MainMenu implements Slot0
{
    public function MainMenu(button:Button)
    {
        button.clicked.addSlot(this);
    }

    public function onSignal0(): void
    {
        trace("button was clicked");
    }
}

Usage Example (slower)

This example allows you to make your callback private and name it as you wish, courtesy of the FunctionSlot0 adapter class.

public class MainMenu
{
    public function MainMenu(button:Button)
    {
        button.clicked.addSlot(new FunctionSlot0(onButtonClicked));
    }

    private function onButtonClicked(): void
    {
        trace("button was clicked");
    }
}

Usage Example (complex)

This example runs at maximum speed with multiple buttons.
import com.jacksondunstan.signals.*;

public class Button extends Sprite
{
    public var clicked:Signal1 = new Signal1();

    public function Button()
    {
        addEventListener(MouseEvent.MOUSE_DOWN, onMouseDown);
    }

    private function onMouseDown(ev:MouseEvent): void
    {
        this.clicked.dispatch(this);
    }
}

public class MainMenu implements Slot1
{
    private var __button1:Button;
    private var __button2:Button;

    public function MainMenu(button1:Button, button2:Button)
    {
        __button1 = button1;
        __button2 = button2;
        button1.clicked.addSlot(this);
        button2.clicked.addSlot(this);
    }

    public function onSignal1(target:*): void
    {
        if (target == __button1)
        {
            trace("button 1 was clicked");
        }
        else if (target == __button2)
        {
            trace("button 2 was clicked");
        }
    }
}

Parameter Passing Strategy

As you can see above, functionality from alternative systems can be emulated in TurboSignals simply by adding more parameters. The Event/EventDispatcher system's Event.target is attained by passing a reference to this as a parameter to dispatch. Likewise, the type field can easily be passed. You may also choose to pass objects like Event that bundle these together, which may help with speed since multiple arguments slow down the dispatch operation.

Performance Data

The TurboSignals distribution includes a suite of performance tests for TurboSignals as well as as3signals and Event/EventDispatcher.
Here are the results for the first version:

- TurboSignals – 1 Listener (1,000,000 dispatches)
- TurboSignals – 10 Listeners (1,000,000 dispatches)
- TurboSignals – 1 Function Listener (1,000,000 dispatches)
- TurboSignals – 10 Function Listeners (1,000,000 dispatches)
- as3signals – 1 Function Listener (1,000,000 dispatches)
- as3signals – 10 Function Listeners (1,000,000 dispatches)
- Event/EventDispatcher – 1 Function Listener (1,000,000 dispatches)
- Event/EventDispatcher – 10 Function Listeners (1,000,000 dispatches)

[Performance graphs omitted]

Performance Analysis

The latest version of as3signals (as of today, 2/15/2010) goes a long way toward improving the performance of the Event/EventDispatcher system, especially on Mac OS X. TurboSignals goes a lot further though and nearly matches the speed of its inspiration: the simple list of Runnables. TurboSignals manages to dispatch events about 17 times faster than as3signals when implementing the slot directly, and about 3 times faster than as3signals when using a Function variable to allow the callback to be private, named, or anonymous. That said, as3signals is itself 4-13x faster than the Event/EventDispatcher system. So if you are planning on dispatching frequently or to many listeners, you should definitely take a look at TurboSignals.

#1 by Piergiorgio Niero on February 16th, 2010

What about adding/removing listeners at runtime? Is there any way to subscribe to or unsubscribe from a signal at runtime? I think allowing only the interface-implementation approach would become very limiting for medium/large projects. Is there any way to work around that?

#2 by jackson on February 16th, 2010

Adding/subscribing is done by addSlot, and removing/unsubscribing is done by either removeSlot or removeAllSlots. You can work around implementing your own slot, but only at the cost of a speed hit. The primary way of doing this is to use the FunctionSlotX classes, where X is 0-10 or N.
See the example titled "Usage Example (slower)" and the corresponding performance data and graphs for "function listeners". The idea is that TurboSignals gives you the option to implement the slot interface yourself for maximum speed where you really need it (see the "Why Would I Need Speed?" section of the TurboSignals project page).

#3 by whitered on February 16th, 2010

The next step to improve performance is to use Vector instead of Array. And here is my extremely simple implementation of signals:

#4 by jackson on February 16th, 2010

I originally used Vector instead of Array in my last article: Callback Strategies. I switched to Array to make the library usable in Flash 9 rather than restricting it to Flash 10. The performance difference was negligible anyhow. I did quite a lot of looking around for other implementations of signals and slots whilst working on my own, but didn't find yours. Thanks for the link! It is indeed a very simple implementation, but sometimes that's all you want. One performance-related trick that has helped Robert Penner (and myself, I suppose) is to only copy the callbacks list when it actually needs to be copied. That is, set a flag indicating whether you're dispatching, and if that flag is true in your addCallback or removeCallback, copy the list then. Robert Penner claims that doubled his performance!

#5 by whitered on February 16th, 2010

Wow, thanks for the useful idea. I'll surely apply it.

#6 by whitered on February 17th, 2010

Trying to implement this optimization in my project, I've found a bug that exists in TurboSignals. If we redispatch a signal that is already dispatching, its __slotsNeedCopying flag will be reset, so its slots can be changed mid-dispatch. See the demo:

#7 by jackson on February 17th, 2010

Nice find! I will look into this shortly and reply here with any resolution I come up with. If you have a way of resolving it, I do welcome patches.
Thank you for the detailed example!

#8 by whitered on February 17th, 2010

Just clone your slots if __slotsNeedCopying is set in the dispatch() method, as you do in the other methods.

#9 by jackson on February 17th, 2010

That would work, but I'd prefer not to copy the slots unless it's really necessary or it helps performance. I implemented the fix by adding a __numDispatchesInProgress integer that I use to determine whether I can safely set __slotsNeedCopying to false after the dispatch is done. I also added a unit test based on your demo code above. Version 1.0.1 is now up on the project page and Google Code, and it should run at virtually the same speed as version 1.0.

#10 by whitered on February 18th, 2010

This implementation has a hidden danger: the signal becomes broken when a slot throws an error. In that case the signal's __numDispatchesInProgress counter will always be greater than it should be, so the signal will often clone its slots when it isn't necessary. I think redispatches happen rather rarely, so I've decided to keep things simple and clone my callbacks in the dispatch method on redispatches.

#11 by jackson on February 18th, 2010

Yes, it would lead to lots of slot-list copying later on, and it would also lead to the rest of the slots not being called. The former is a performance problem and the latter is a correctness problem. Unfortunately, the only way I know of to fix the correctness problem is to wrap the slot call in a try/catch block. As I've shown before, this would introduce a big performance problem itself. Personally, I think that you should expect uncaught exceptions to do bad things to your program. TurboSignals is but a drop in the ocean of code that isn't handling thrown errors. Luckily, with TurboSignals you get off easy: the rest of your slots don't get called and it performs slower from then on, but at least it never crashes.
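[Editorial aside: the scheme under discussion — copy the slot list only when it is modified during a dispatch, and track dispatch depth so nested dispatches stay safe — can be sketched in Python. This is an illustration of the technique, not TurboSignals' actual AS3 code; note that Python's try/finally also sidesteps the stuck-counter problem raised in comment #10, though the cost of try blocks differs between runtimes.]

```python
class Signal:
    """Copy-on-write slot list: mutating the slots during a dispatch
    copies the list first, so every dispatch already in progress keeps
    iterating over the snapshot it started with."""

    def __init__(self):
        self._slots = []
        self._depth = 0  # number of dispatches currently in progress

    def add_slot(self, slot):
        if self._depth > 0:
            self._slots = list(self._slots)  # copy only when necessary
        self._slots.append(slot)

    def remove_slot(self, slot):
        if self._depth > 0:
            self._slots = list(self._slots)
        self._slots.remove(slot)

    def dispatch(self, *args):
        current = self._slots  # snapshot; later copies never touch this list
        self._depth += 1
        try:
            for slot in current:
                slot(*args)
        finally:
            self._depth -= 1  # restored even if a slot throws
```

A slot that unsubscribes itself mid-dispatch therefore cannot cause its neighbors to be skipped, and a recursive dispatch sees a consistent snapshot.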
I'd welcome anyone to reply here with comments on how TurboSignals should handle uncaught errors thrown by slots.

#12 by whitered on February 18th, 2010

Yes, it's true that you should expect uncaught exceptions to do bad things. But I think that you should not expect bad things from an exception that was caught and handled. Code like that can mislead a developer. On the other hand, my implementation results in slower dispatching, but does so cleanly and only in very rare situations (recursive dispatches). Wrapping the slot call in a try/catch block is, I think, the worst option.

#13 by jackson on February 18th, 2010

I like your logic a lot and I can totally see this point of view. It seems that in this situation, as you point out, we must choose where to optimize. A case like you show above could very well happen, and the current penalty in TurboSignals would be slower addSlot, removeSlot, and removeAllSlots calls from then on. This is indeed a steep penalty, so your approach optimizes to remove it. But in the process you have, as you also point out, made recursive dispatching slower. So the question is this: should we optimize for recursive dispatching (TurboSignals' current way) or for thrown errors (the way the signal class you linked works)? To me the matter comes down to the legitimacy of the two use cases. In my view, errors are bad but recursive dispatching is a valid feature of TurboSignals. I prefer not to punish the valid uses of TurboSignals for the sake of the invalid ones. You can, of course, do whatever you'd like in your own signal class. :)

#14 by Robert Penner on February 16th, 2010

This is great! I love your creative thinking and passion for performance testing. The more options, the better.

#15 by Robert Penner on February 16th, 2010

I noticed you're using AsUnit 4 alpha for your unit tests.
I looked through your test code and it looks really good so far (there's a lot of it). I'd love to hear your thoughts on unit testing and the role it played in TurboSignals' development.

#16 by jackson on February 16th, 2010

TurboSignals seemed to be a good place to apply unit testing since its functionality is what I call "pure data". That is, there is no graphical or audio output that can only be properly judged by the human eye and ear, no interaction requiring precision timing (especially with Flash Player!), and so forth. Further, I didn't have any (public) application to start using TurboSignals in, so I needed some test application. I started off with a homebrew set of tests like I've done in previous articles, but it started to get messy, overly verbose, and difficult to scale after about a dozen tests. All of this pointed toward using a unit testing framework. I went with AsUnit as it's clearly one of the most popular choices and the little I'd seen of it before (mostly from you!) looked pretty good. I couldn't find a SWC build of it though and didn't feel like setting up Ruby just to do the build, so I actually took your build of the version 4 alpha out of as3signals. It seems to run fast enough, very consistently, and was easy to set up. All in all, I'd say this was a good place to use unit testing and it's been quite successful. It certainly isn't a project built with TDD, as I wrote all of the code before the tests, but the validation part of it is very nice. :)

#17 by Joa Ebert on February 16th, 2010

Hey, TurboSignals looks very promising and I like the approach of having classes like Signal0. Scala does this very successfully as well. It's a shame you can't apply more syntactic sugar. Did you try what happens when you use linked lists and object pooling approaches as well? I think your life will be much easier if your listeners are nodes of a linked list.
Traversal should be faster, and removing listeners is not a big deal either. It can work without a copy. Most of the time you are removing a listener in its own callback, so that can even be done in O(1).

#18 by jackson on February 16th, 2010

I haven't tried linked lists yet, but I'm willing to give it a go. Earlier in development I tried making the iterator a private field and then checking the index being removed against it to see if it was at or before the current callback. In such a case there's no need to copy the list, which would be a nice optimization as far as speed, garbage creation, and (instantaneous) memory footprint. This turned out to be disastrous for performance though, as accessing a field is quite a bit slower than accessing a local variable. So it seems as though I'd still need to copy the list on addSlot/removeSlot/removeAllSlots if I switched from Array to a linked list. In my implementation from December, it looked like both copying and traversing were slower with linked lists. Do you have some suggestions on how to make the linked list implementation fast enough to make it worthwhile here? I'd much appreciate any suggestions you could give.

#19 by matthew on February 16th, 2010

It seems to me that the speed benefits aren't worth the cost in readability and required code. For one, this system requires you to check the target in the callback whenever you have multiple dispatchers using the same signal number. That could get out of hand pretty fast, as your example with two buttons shows. Chances are that every listener for more than one signal would need to act as a switch, delegating to other functions based on the target. It also couples the implementation of the listener and dispatcher in a strange, non-semantic way, requiring your listener to know an arbitrary piece of metadata (the slot number) in addition to the event type ("clicked").
A tiny change in the dispatcher's implementation (the slot number of a particular event) would require huge, cascading changes across an entire codebase. If you're really in need of the performance benefits gained from using the Observer pattern, maybe it would be better to use tools that facilitate that (templates, snippets, macros, etc.).

#20 by jackson on February 16th, 2010

Check out the example titled "Usage Example (slower)" using FunctionSlot. That shows how to use TurboSignals like you'd use Event/EventDispatcher or as3signals, with an arbitrary Function callback. Then check out the performance data to see how TurboSignals is still faster than Event/EventDispatcher or as3signals even when using FunctionSlots. If you only used Signal1 with FunctionSlot1 to dispatch Event objects (or similar), you'd still realize a speed gain without any of the drawbacks you mention. But the important part about TurboSignals is that it gives you the choice to go even further and speed up your callbacks by implementing the slot directly. Yes, you lose some code cleanliness in the process, but sometimes it's worth it. By providing FunctionSlot classes in addition to Slot interfaces, I leave that choice up to the individual programmer on his or her own project.

#21 by Alec McEachran on February 16th, 2010

Hi Jackson, congratulations, this is really interesting work. While I don't imagine I'll use it in many places in the projects I work on, since the syntax is somewhat confusing for team environments, I am definitely going to give this a good hard look for game loops and other performance-critical code. Now I understand why you were holding back on running the as3signals vs. Event analysis earlier! This is a great addition to the AS3 canon. Thanks.

#22 by jackson on February 16th, 2010

I'd like to hear your thoughts on how to improve the syntax.
Personally, I think the "Usage Example (slower)" code shows how to use it quite similarly to Event/EventDispatcher and as3signals, and the performance data shows that even there you get a nice speedup.

#23 by Winx Alex on March 24th, 2010

From all this we can conclude that events run faster if they are called on an object (not through a function pointer or through EventDispatcher) but as object.function(args). So you need to make a few changes to the original Robert Penner code. Take Signal.as: the "add" function should take add(object, "nameOfTheFunction"); the listeners array would then hold object/"nameOfTheFunction" pairs instead of function pointers; and the "dispatch" function, instead of listener.apply(null, valueObjects), should call object["nameOfTheFunction"](valueObjects), or maybe object["nameOfTheFunction"].apply(object, valueObjects).

#24 by jackson on March 24th, 2010

It would be fantastic if this resulted in a dramatic speedup similar to the one you get in TurboSignals. It would relieve the programmer of the need to create Slot derivatives and allow them to name callback functions however they wish, both downsides of using TurboSignals. However, I believe there are two reasons why this would result in code that runs at the same speed as, or possibly even slower than, as3signals right now. Firstly, the dynamic access (object["nameOfTheFunction"]) is quite slow. Secondly, what you get back is still just a Function variable, so calling it (.apply(object, valueObjects)) would be as slow as the original (listener.apply(null, valueObjects)). What makes TurboSignals fast is the strong guarantee afforded to the compiler and JIT by using a typed object: the Slot class. Removing the typing causes the slowdowns you see in both as3signals and EventDispatcher, although the latter has many more problems than just weak typing.

#25 by Winx Alex on March 25th, 2010

Could you make a test:
object["nameOfTheFunction"]() vs. object.nameOfTheFunction()? So we see what is quite slow. Everything is a compromise: a little bit slower but cleaner. Except in an atom accelerator. :)

#26 by jackson on March 26th, 2010

Sure. Given these simple slots [code omitted], I wrote this simple test [code omitted] and got these results [results omitted]: it's about 40x slower than a direct call. That's a bit much of a compromise for TurboSignals. :)

#27 by eco_bach on November 21st, 2010

Hi Jackson, interesting comparison with as3signals. What about bubbling? I absolutely rely on event bubbling in all my projects. Is this supported by TurboSignals?

#28 by jackson on November 21st, 2010

Bubbling is, unfortunately, not supported right now. It would certainly be slow, but so are FunctionSlots. The key would be to add a new set of signal classes, analogous to as3signals' DeluxeSignal, so as to not slow down the regular signals. TurboSignals is open source; want to volunteer? :)

#29 by Mark on February 18th, 2011

Hi, I am trying to use TurboSignals in my own project, following your "Usage Example (complex)". I think two things are wrong in this example. First, the clicked Signal1 never gets instantiated, which gives a null-object error when trying to call addSlot; for the example it would be an idea to add 'new Signal1()' somewhere. The other thing cost me more time to figure out: I have a class called Enemy, which has a Signal1, and a class called Game which implements Slot1. When I use a type on the parameter of the onSignal1 function in the Game class, I get this error: "Interface method onSignal1 in namespace com.jacksondunstan.signals:Slot1 is implemented with an incompatible signature in class Game." It is fixed when I leave the parameter untyped, but then I lose my code completion. How is it possible that your example uses a type at that place? I am using flex_sdk_4.1.0.16076, compiling for FP10.1.

#30 by jackson on February 18th, 2011

These are both excellent points.
I'm amazed that in over a year, no one caught these errors in the examples. I've updated the article to actually create the signals and to remove the type from the Signal1 callback, which is unfortunately necessary for compilation. If you want to get the type back, you'll need to do a cast. Also, if you use a FunctionSlot to mimic the EventDispatcher or as3signals approach, you can still type the parameter. You give up some speed (as shown in the article) and some compile-time checking (it can't make sure your arguments are of the correct types, the same problem you have with EventDispatcher and as3signals), but you do at least get to give your argument an explicit type. I wish I knew of a way that was fast, flexible, and type-safe. Unfortunately, AS3 seems to force a compromise here between TurboSignals with the types explicitly stated and TurboSignals using *. The key seems to be that all of AS3's dynamic functionality disposes of all the compile-time checking.

#31 by Mark on February 18th, 2011

Ah! I thought I was doing something wrong, since I have never used any kind of signals. Making my own type-safe signal would be the answer, but casting is not a problem, I think. Pure TurboSignals without FunctionSlots already loses some readability, so it feels a bit dirty anyway. The FunctionSlots are a better solution for that. I have explored the classes to see how they work. I am not a performance expert, but I think some small things could be improved:

- Use Vectors instead of Arrays (who uses FP9 anyway?)
- Optimize the dispatch function: remove the var from the function, and with a Vector there would be no need to cast to the type inside the loop
- Remove the default value 0 of i

Also, I don't see any reason why __numDispatchesInProgress counts up and down in the same function; won't it always end as 0? And what does __slotsNeedCopying mean? I think the signals are a very great concept.
I really like the fact that you don't have to create event-type Strings; the signals describe the event itself. They are just very simple (clicked:Signal). BTW, in most of my apps I create a singleton which extends EventDispatcher to have global events (bad, but very very handy ☺). How do signals perform when they are static?

#32 by jackson on February 18th, 2011

I've learned more about AS3 performance over the last year and now see that you have some good points. For example, defaulting i to 0 is pointless. The cast is also not helping, and a Vector could be used if Flash Player 9 support is dropped. As for removing the locally-cached __slots copy, that was done intentionally, because local variable access is much quicker than field access. As for __numDispatchesInProgress and __slotsNeedCopying, they are there to protect against modifications to the signal during the dispatch. Without them, certain conditions (errors during dispatch, adding/removing slots during dispatch, dispatching during dispatch) can corrupt the signal's state. Signals should perform the same when they are static, but the static lookup (e.g. MyGlobalSignals.enterFrame) is more expensive.

#33 by Mark on February 19th, 2011

Thanks for explaining. I wonder how MyGlobalSignals.enterFrame.dispatch() vs. MyGlobalEventDispatcher.dispatchEvent(new CustomEvent()) performs.

#34 by jackson on February 19th, 2011

The dispatch performance should be the same as the non-static dispatch performance you see in the article. Only the static access (when you add slots, remove slots, or call dispatch) would be slowed down.

#35 by Mark on February 23rd, 2011

How do you remove a slot if you are using a FunctionSlot? Is the only way to remove it to keep a variable referencing the FunctionSlot? It would be great if there were an alternative to this.
Maybe a FunctionSignal or something would be a great addition, which creates FunctionSlots internally so you can just call addSlot(func).

#36 by Mark on February 23rd, 2011

Just tried to create my own FunctionSignal1. All slots are now functions, so semantically they are a bit different, but the idea is clear. I don't know if this performs as fast as the normal Signal1 with a FunctionSlot (I haven't tested that), but I think this is more usable in normal projects since you can just pass the listener function.

#37 by jackson on February 23rd, 2011

TurboSignals is meant as a fast alternative to EventDispatcher and as3signals. If you're going to use Function objects directly, you've given up most of the speed advantages. If speed is not your goal, consider EventDispatcher or as3signals instead.

#38 by Mark on February 23rd, 2011

I am still using the FunctionSlot, and the dispatch function still calls the function from the interface. I think removing the slots is now simpler, since the example does not show how to remove slots. Do you think this is slower? Sorry if I removed the turbo from the signal. :) I think these signals are more readable than CustomEvents anyway; I am starting to like the idea of this lightweight event alternative.

#39 by whattatorpe on April 5th, 2012

One-year-old thread... well, I'll try anyway. I'm not a hardcore programmer, so maybe my question is kind of illiterate: the way TurboSignals or Signals work, isn't all this against encapsulation? Let's picture a menu and a menuButton for folding or unfolding it. If I use the Flash IDE events (addEventListener, dispatchEvent... you'll know better), menuButton needs no public vars and I can reuse it everywhere, as long as I add addEventListener(custom_event).
I can reuse it in other projects, classes, and clips on the stage, and if I remove it or simply don't use it, I don't need to make any changes in the code besides removing new menuButton. But using TurboSignals I cannot move my menuButton between classes or even different projects; I first have to implement access to every public TurboSignals var in menuButton, or, if I later regret it, I end up with a lot of referenceless variables I have to deal with. Is it that I'm missing something, or, to say it clearly, is the problem that I need serious programming tuition? Anyway, as far as the parts I'm able to understand go, great blog.

#40 by jackson on April 5th, 2012

Thanks for the kind words about the site. The way I see it, there is only one major stylistic difference between TurboSignals and EventDispatcher: TurboSignals does not use String event names. Consider these two simple classes [code omitted] and some code that uses them [code omitted]. If I understand your question correctly, you're wondering about how the user will change when the button changes. Say the two events are replaced by a single "click" event. Here's what the button classes would look like [code omitted], and here's the user code [code omitted]. To me, it's about the same level of maintenance required by both versions.

#41 by whattatorpe on April 7th, 2012

A year-old thread and I get an answer the same day... I don't know how to say how awesome I find this. Danke!! I think you understood my question even better than I did. After a while fiddling with both ED and TS, yep, code is moved around a bit here and there, but as long as you don't rely on event bubbling, things don't change that much. I guess that when I saw that public variables had to be declared, all those things actual programmers say about how bad dependencies are for encapsulation came to mind, but after a while dealing with your code I think I'm rather uninformed on the subject. Thanks again for sharing your knowledge.
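[Editorial aside: the stylistic difference jackson describes in comment #40 (string event names vs. signal-valued fields) can be illustrated in Python with two hypothetical button classes, not taken from either library. A misspelled string event name fails silently at runtime, while a misspelled signal field fails loudly.]

```python
class StringEventButton:
    """EventDispatcher style: listeners subscribe by string name,
    so a typo in the name is a silent no-op."""
    def __init__(self):
        self._listeners = {}

    def add_event_listener(self, name, fn):
        self._listeners.setdefault(name, []).append(fn)

    def dispatch_event(self, name):
        for fn in self._listeners.get(name, []):  # unknown names are ignored
            fn()

class Signal:
    """Minimal signal holding a list of callbacks."""
    def __init__(self):
        self._slots = []

    def add_slot(self, fn):
        self._slots.append(fn)

    def dispatch(self):
        for fn in list(self._slots):
            fn()

class SignalButton:
    """Signal style: the event is a field, so a misspelled event
    name raises AttributeError instead of failing silently."""
    def __init__(self):
        self.clicked = Signal()

log = []
b1 = StringEventButton()
b1.add_event_listener("clciked", lambda: log.append("string"))  # typo: never fires
b1.dispatch_event("clicked")

b2 = SignalButton()
b2.clicked.add_slot(lambda: log.append("signal"))
b2.clicked.dispatch()
```

After this runs, only "signal" is in the log; the string-keyed listener was lost to the typo without any error.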
#42 by jackson on April 7th, 2012

No problem; I'm happy to help. :) One more thought on the subject: you could separate your classes even further by using an interface, for example [code omitted]. This is a lot harder and messier with EventDispatcher, since classes using that approach don't have any strongly-typed definition of what events they dispatch. I'll leave it up to you to think about how you'd even go about making an interface like Clickable with the EventDispatcher system. :)

#43 by Shay Pierce on November 1st, 2012

Hey Jackson, this system seems very interesting and pragmatic; I had meant to take a stab at integrating it into an event-driven game of mine for a while, and finally did the other day. However, I found it rather cumbersome to convert existing EventDispatcher-based code over, and I became very concerned about the lack of specificity in the function names. I'd say that the strangest-feeling part of all this is the arbitrary-feeling yet constant intrusion of the concern about the "number of parameters". In particular, it bugged me that, for an object that listens for many different events (which I have at least a couple of), all of the unrelated events that happen to share the same number of parameters would call the same function, without even an event-type string being provided to distinguish them. (Obviously the FunctionSlot wrapper can be used to mitigate this, but even that has the painful clunkiness of worrying about the number of parameters.) Maybe it's silly to assume a developer of your expertise hasn't considered this, but I have to ask: have you looked at using the "args" rest-parameter array to support an arbitrary number of parameters to a function? It seems like at least providing this as a (slightly slower?) alternative would allow many things to be a great deal cleaner.
Of course I don't know the performance implications of using a function like that instead of a normal one, and in fact I haven't checked whether interfaces play well with that feature. Just curious if you've tried it. Thanks for creating the library; I think I'll be using it selectively in my game (and/or exploring the method I just described above, heh!).
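[Editorial aside: the arbitrary-arity dispatch asked about above is what the library's SignalN is for. The shape of such a signal can be sketched in Python, where *args plays the role of AS3's rest parameter; this is an illustration, not TurboSignals' actual code.]

```python
class SignalN:
    """Variable-arity signal: slots receive whatever arguments were
    passed to dispatch, at the cost of losing any arity checking."""

    def __init__(self):
        self._slots = []

    def add_slot(self, slot):
        self._slots.append(slot)

    def dispatch(self, *args):
        for slot in list(self._slots):  # copy so handlers may unsubscribe
            slot(*args)

received = []
sig = SignalN()
sig.add_slot(lambda *a: received.append(a))
sig.dispatch("clicked", 3, 4)  # any number of arguments goes through
```

The flexibility comes with exactly the trade-off discussed in the article: nothing checks, at compile time or otherwise, that the dispatcher and the slots agree on the argument list.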
https://jacksondunstan.com/articles/585?replytocom=604
Good time of day, everyone. I want to create a plugin for ST3, but I am not familiar with Python 3, so I decided to write it in another scripting language. I need a code snippet to run my script (an external program) in background mode and to do something when the script finishes. I have:

class FuncThread(threading.Thread):
    def __init__(self, target):
        self._target = target
        threading.Thread.__init__(self)

    def run(self):
        self._target()

Somewhere in "run":

thread = threading.Thread(target=my_long_calculate, args=(self.view, edit))
thread.start()

my_long_calculate looks approximately like:

subprocess.Popen(cmd, stdout=subprocess.PIPE, startupinfo=startupinfo).communicate()[0]

But my ST still hangs (is locked) while my_long_calculate() is running. What am I doing wrong?

What on earth are you doing that would make it easier to shell out to a different process than to clutz through Python? Also, how are you planning to turn the script's output into changes in Sublime Text?

It is not easier! I said that I don't know Python (I don't understand languages without curly brackets). I'm sure my task can be solved in Python, but I don't know it; everything is simple there. This is an example ST3 plugin, "Hello from node.js": gist.github.com/unlight/6262932. Run the view.run_command("hello_node") command in the ST console.

You don't really show enough code to see what is wrong. Also, you are not supposed to retain or pass the "edit" object between threads. Instead, call view.run_command with a second TextCommand that will apply the appropriate changes. As an aside, using the sublime.set_timeout_async function might be simpler than manually creating a thread.
@sapphirehamster: I uploaded the full code of the test plugin on GitHub: github.com/unlight/HelloNode. I used:

    sublime.set_timeout_async(lambda: self.view.run_command("long_loop"), 0)

In the long_loop command a "long calculation" occurs, which I emulate with the JavaScript setTimeout() function, and during this time ST3 is "hanging" (cannot move the cursor, cannot write anything, etc.).

I think you may have misunderstood when I said it needs to run a second TextCommand. You need to call set_timeout_async on your own function that will do the long computation, and then call the TextCommand. It would look roughly like this:

    class ExampleNodeCommand(sublime_plugin.TextCommand):
        def run(self, edit):
            sublime.set_timeout_async(self.long_command, 0)

        def long_command(self):
            args = [get_node_path(), PLUGIN_FOLDER + "/longloop.js"]
            for region in self.view.sel():
                output = run_process(args)
                pos = region.begin()
                self.view.run_command('example_node2', {'pos': pos, 'output': output})

    class ExampleNode2Command(sublime_plugin.TextCommand):
        def run(self, edit, pos=None, output=None):
            self.view.insert(edit, pos, output)

I didn't test this, so there may be typos, but it's the general idea. Also, you may have problems if there are multiple cursors (the positions may shift, so you may want to iterate over the Selection in reverse).

Nice. It works! Thank you very much.

Thank you for your comment and example code; this has been very helpful for me! Kudos!
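The pattern described in the replies — do the blocking work in a worker thread, then hand the result back through a callback — can be sketched outside Sublime with plain threading and subprocess. This is a minimal illustration, not Sublime API code: the function name is made up, and a real plugin would apply the result via a second TextCommand rather than a plain callback.

```python
import subprocess
import sys
import threading

def run_in_background(cmd, on_done):
    """Run `cmd` in a worker thread and call `on_done(output)` when it finishes."""
    def worker():
        # communicate() blocks, but only inside this worker thread;
        # the calling (UI) thread returns immediately after start().
        output = subprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()[0]
        on_done(output.decode())

    thread = threading.Thread(target=worker)
    thread.start()
    return thread

results = []
t = run_in_background([sys.executable, "-c", "print('done')"], results.append)
t.join()  # join only for this demo; a plugin would let the callback fire asynchronously
print(results[0].strip())  # → done
```

The key point, matching the advice above, is that the blocking `communicate()` call never runs on the thread that started the work.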
https://forum.sublimetext.com/t/execute-external-program-in-background-non-blocking-thread/11122
Hello everyone, I'm new to Java programming and especially new to this forum. I'm taking my first college computer science course. I did very well on my first exam (loops, methods, keyboard input, types, etc.). However, this question seriously has me stumped. How do I make the quarter circle/square? I attached a picture of the question.

Here's what I know:
- I know I will need a loop
- Need to declare a Random integer - how do I do this?
- I have no idea where to start

Code for program:

    import java.util.Random;

    class Dogs {
        public static void main(String[] args) {
            Random integer r = new Random // Completely wrong probably.
        }
    }

Any help or suggestions will be greatly appreciated. Thank you in advance.
http://www.javaprogrammingforums.com/java-theory-questions/32979-would-appreciate-help-beginner-programmer.html
Mutliple thread access to object method — Discussion in 'Java' started by Crouchez, Aug 30, 2007.
http://www.thecodingforums.com/threads/mutliple-thread-access-to-object-method.533799/
Opened 14 years ago. Closed 12 years ago. Last modified 12 years ago.

#6447 closed (fixed): cache backends that are primarily for local dev should enforce same key restrictions as memcached

Description

Using a locmem:// or postgres cache backend, I can generate a cache key using the following expression:

    repr(f.__module__) + repr(f.__name__) + repr(args) + repr(kwargs)

but if I configure memcached, it fails silently, causing an empty page to be sent to the client. I have confirmed this is the source of the problem by generating an md5 hexdigest of the above key - in which case, memcached works correctly.

I have included the context for the above key, in case it is relevant. It is a decorator which caches the return value of an arbitrary function, assuming its value depends solely on its parameters, and that the function never returns None:

    from hashlib import md5  # import added; the original snippet used md5() without showing its import
    from django.core.cache import cache as _djcache

    def cache(seconds=3600):
        def doCache(f):
            def x(*args, **kwargs):
                key = md5(repr(f.__module__) + repr(f.__name__) + repr(args) + repr(kwargs)).hexdigest()
                result = _djcache.get(key)
                if result is None:
                    result = f(*args, **kwargs)
                    _djcache.set(key, result, seconds)
                return result
            return x
        return doCache

Attachments (0)

Change History (20)

comment:2 Changed 14 years ago by

Decision needed. It appears that John has a good point. I'd suggest either fixing the code or adding a note to the docs. At the very least I'd suggest adding a note to the docs. I don't feel qualified in this area to attempt a shot at adding this to the docs.

comment:4 Changed 14 years ago by

This ticket was apparently closed because #7460 dealt with bad keys and a solution was committed for that ticket. The problem is that this happened only at the template tag interface. Applications that use the low-level interface to the cache system still break with bad keys.
The documentation for the cache system still does not mention this limitation of the memcache backend, and having inconsistent keying limitations (especially at such a basic level as whitespace vs. no whitespace) in various backends makes it much harder to build reusable apps (a cache key that works for my locmem:// cache may not work with the memcached backend being used on someone else's server).

Ideally, I'd like to see the memcached backend have similar logic applied to it as was used in #7460 (e.g. use urlquote to ensure keys have no spaces in them) so that all interfaces are protected from spaces. Lacking that, I'd suggest an update to the cache docs that draws attention to memcached's limitation and provides a recommendation to application programmers who are writing applications that use the low-level cache layer to always escape their keys if their app might be used on a server that utilizes memcached.

comment:5 Changed 12 years ago by

Following discussion with jacobkm and jezdez in IRC, the plan is to leave the memcached backend unmodified (as any key mangling there could slow down a critical code path), but modify the other builtin backends (locmem, dummy, file, db) so they throw an error (or perhaps just a warning?) if you pass them a key that would be invalid on memcached. This discourages/prevents writing cache code that would not be portable to memcached. I'm working on a patch.

comment:7 Changed 12 years ago by

Marking as accepted per the above IRC conversation with core committers.

Changed 12 years ago by

patch to enforce same key restrictions on all cache backends

comment:8 Changed 12 years ago by

- I chose to make an invalid key an exception, rather than a warning. My feeling is cache code ought to be backend-portable, so if it'll blow up on memcached it might as well blow up elsewhere too. I'll change this if core devs prefer it as a warning.
- If the idea is that cache code ought to be backend-portable, is there any reason for the dummy backend to still accept *args, **kwargs on most of its methods, where the other backends do not? What's the rationale for keeping this?
- As noted before, the design decision here is not to apply any additional key checking to the memcached backend, for speed reasons. The restrictions applied match memcached's restrictions (it appears that some control characters are accepted in keys by memcached itself, but python-memcached applies the same restrictions we apply here), so the result should be the same on all backends, except that memcached will raise a different exception.

Changed 12 years ago by

same as previous patch, with added doc note re key restrictions

Changed 12 years ago by

updated patch, applies to trunk

comment:11 Changed 12 years ago by

Since Django supports third-party cache backends, we can't control (thankfully!) what possible external caches people might choose to use in production. So an error here, restricting everybody to memcache's particular limitations, feels inappropriate. A warning -- in its own class so that it can be easily silenced via warnings.ignore() -- would be my preference. Definitely warning people of portability problems is a good plan. Forcing them to effectively choose memcache for now and forever is bad.

Changed 12 years ago by

updated patch using warnings

comment:12 Changed 12 years ago by

Added a new patch following design discussion with Russell K-M, Malcolm T, Jannis L, and Jeremy Dunck.

Changed 12 years ago by

fixed a couple of documentation issues in the previous patch

comment:14 Changed 12 years ago by

(In [13766]).

comment:15 Changed 12 years ago by

(In [13767]) [1.2.X]. Backport of r13766 from trunk.

The problem is that memcached doesn't allow whitespace in keys.
I ran into this a while back, and dealing with it in your code is the right solution. Changing the memcached backend to do it makes examining the cache next to impossible, as Jacob explained while closing #3241 as "wontfix". You could try urllib.quote(key) or re.sub('\s+', '', key) if you want to make the keys less opaque (and if you know they'll end up under 250 characters, I suppose).
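The portability check discussed in this ticket — warn when a key would be invalid on memcached — can be sketched in a few lines. The limits used here (250-character maximum; no whitespace or control characters) are memcached's documented key restrictions; the function name is made up for illustration, not Django's actual implementation.

```python
import re
import warnings

MEMCACHE_MAX_KEY_LENGTH = 250

def validate_key(key):
    """Warn if `key` would be rejected by memcached: too long, or
    containing whitespace/control characters."""
    if len(key) > MEMCACHE_MAX_KEY_LENGTH:
        warnings.warn("Cache key will cause errors if used with memcached: %r" % key)
    if re.search(r"[\x00-\x20\x7f]", key):
        warnings.warn(
            "Cache key contains characters that will cause errors if used "
            "with memcached: %r" % key)

validate_key("safe-key")  # no warning
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    validate_key("has a space")
print(len(caught))  # → 1
```

Running such a check on the development backends (and only warning, as the ticket ultimately decided) keeps code that works on locmem:// portable to memcached without slowing the memcached path itself.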
https://code.djangoproject.com/ticket/6447
hey,

some people already wondered (including me) about why you had to set CCVER=gcc34 manually when building -Devel nrelease isos on -Release. This is the situation:

    release1.2:/usr/src/nrelease % echo $CCVER
    CCVER: Undefined variable.
    release1.2:/usr/src/nrelease % grep CCVER /etc/make.conf
    release1.2:/usr/src/nrelease % make release
    ...
    ( cd /usr/src/nrelease/..; CCVER=gcc2 make buildworld )
    ...
    >>> stage 4a: populating /usr/obj/usr/src/world_i386/usr/include
    ...
    ===> include/rpcsvc
    rpcgen -C -h -DWANT_NFS3 /usr/src/include/rpcsvc/key_prot.x -o key_prot.h
    cpp: in path [/usr/obj/usr/src/ctools_i386_i386]/usr/libexec/gcc2/cpp: No such file or directory
    *** Error code 1
    ...

So, why does it pass CCVER=gcc2? This is because:

    release1.2:/usr/src % grep ^CCVER /usr/share/mk/bsd.cpu.mk
    CCVER ?= gcc2

Okay. This is a default setting for building when no CCVER is set. This should be overridden for -Devel builds. It is!

    release1.2:/usr/src % tail Makefile
    #
    # Build compatibility overrides
    #
    .ifdef CCVER
    .if ${CCVER} == "gcc2"
    CCVER= gcc34
    .endif
    .endif

It replaces CCVER=gcc2 with CCVER=gcc34! Still this doesn't work (see above). This is why:

    % cat Makefile
    FOO?= gcc2
    .makeenv FOO
    .if ${FOO} == "gcc2"
    FOO= gcc34
    .endif
    all:
            @echo FOO=${FOO}, env FOO=$${FOO}
    % make
    FOO=gcc34, env FOO=gcc34
    % make FOO=gcc2
    FOO=gcc2, env FOO=gcc2
    % env FOO=gcc2 make
    FOO=gcc34, env FOO=gcc2

As the last case shows, the reassignment changes the Makefile variable but not the value already present in the environment, which is what the sub-make sees. How do we solve this?

cheers
  simon
http://leaf.dragonflybsd.org/mailarchive/bugs/2005-08/msg00101.html
Deploy a Sample Custom Skill as a Web Service

The Java samples provided with the Alexa Skills Kit can be hosted as web services. This document provides the steps for this scenario – both running a sample on the command line and deploying a sample as a web service with AWS Elastic Beanstalk, a service offering by Amazon Web Services.

Note that it is generally faster and easier to deploy and test the samples as AWS Lambda functions on AWS Lambda (a service offering by Amazon Web Services). You can use Lambda with either the Java or Node.js samples. See Deploying a Sample Custom Skill to AWS Lambda for detailed steps.

The samples are available in GitHub repositories. For details about getting the samples and the specific samples included, see Using the Alexa Skills Kit Samples (Custom Skills).

- System Requirements
- Building and Running a Sample from the Command Line
- Setting up Hello World on Elastic Beanstalk
- Creating the Maven Webapp Project and Importing the Code
- Adding a SpeechletServlet and Setting up Servlet Mapping
- Deploying the Sample to Elastic Beanstalk
- Configuring the Elastic Beanstalk Environment to Use HTTPS
- Registering the Sample in the Developer Portal
- Testing the Sample
- Accessing Log Files
- Next Steps

System Requirements

Setting up and running the samples requires the following:

- The Java library (alexa-skills-kit-x.x.jar), provided in the repo directory.
- Java SE Development Kit (JDK) 7 or later. To download the JDK, visit.
- Maven for dependency management and building the samples.

Additional requirements depend on how you want to test the samples. See the relevant sections below.

Building and Running a Sample from the Command Line

These steps show how to build and run the Hello World sample from the command line on the Linux platform. Doing this lets you send HTTP requests using a command line tool such as cURL.
The Launcher class provided in the src/main/java folder starts a Jetty server that can host a Speechlet for one of the samples, such as the HelloWorldSpeechlet. This is a convenient way to run the samples from the command line. Note that running the sample in this way does not normally make it available through an Internet-accessible endpoint, so you cannot test your code with an actual device in this way. Testing on the command line can still be useful for understanding the requests and responses used by the web service for an Alexa skill. For an example of deploying the sample to the cloud instead, see Setting up Hello World on Elastic Beanstalk, below.

Running Hello World from the command line requires these tools, in addition to the Java requirements noted earlier.

To run a sample from the command line, do the following:

- Download or clone the amzn/alexa-skills-kit-java GitHub repository.
- Create a self-signed certificate, as described in Testing a Custom Skill.
- Use the private key and certificate to set up a Java KeyStore and include information about the KeyStore in the pom.xml file for the samples. See Setting Up an SSL/TLS Java KeyStore.
- Build and run the sample using Maven, then use cURL to send requests to the sample and note the responses. See Running the Sample from the Command Line.

In these instructions, <alexa-skills-kit-folder> is the folder you cloned from the GitHub repository.

Setting Up an SSL/TLS Java KeyStore

Follow these steps to create a Java KeyStore and to configure the Hello World sample to build using that KeyStore:

- Create a private key and self-signed certificate. See the detailed instructions in Testing a Custom Skill.
- Use the following openssl command to create a PKCS #12 archive file from your private key and certificate. Replace the private-key.pem and certificate.pem values shown here with the filenames for your key and certificate. Specify a password for the archive when prompted.
    openssl pkcs12 \
      -keypbe PBE-SHA1-3DES \
      -certpbe PBE-SHA1-3DES \
      -inkey private-key.pem \
      -in certificate.pem -export \
      -out keystore.pkcs12

- Use the following keytool command to import the PKCS #12 file into a Java KeyStore, specifying a password for both the destination KeyStore and source PKCS #12 archive:

    $JAVA_HOME/bin/keytool \
      -importkeystore \
      -destkeystore java-keystore.jks \
      -srckeystore keystore.pkcs12 \
      -srcstoretype PKCS12

- Edit the pom.xml file in <alexa-skills-kit-folder>/samples to provide settings for the javax.net.ssl.keyStore and javax.net.ssl.keyStorePassword system properties. Set the values to the path to the KeyStore and the password created in the previous step.

    <java classname="Launcher" classpathref="java.sdk.classpath" fork="true">
        <sysproperty key="javax.net.ssl.keyStore" value="/insert/your/path/java-keystore.jks" />
        <sysproperty key="javax.net.ssl.keyStorePassword" value="insert_your_password" />
    </java>

Running the Sample from the Command Line

Build and run the sample using Maven (mvn):

- In a command-line terminal, navigate to <alexa-skills-kit-folder>/samples. This folder contains the pom.xml file you previously edited.
- Build the samples with the following Maven command:

    mvn assembly:assembly -DdescriptorId=jar-with-dependencies package

- Once the samples are successfully built, run the sample with the following Maven command:

    mvn exec:java -Dexec.executable="java" -DdisableRequestSignatureCheck=true

The sample starts a server using the Launcher class. You can then use the command-line utility cURL to transmit HTTPS requests to the sample to test its behavior.

Note that the Maven command to run the samples includes the disableRequestSignatureCheck flag. This is because the SpeechletServlet included in the Java library only accepts incoming HTTPS requests that are signed by Alexa. The flag temporarily disables this signature verification.
To send a request using cURL, open a new terminal window and transmit a request to the sample, formatted as JSON. For example (JSON shown on multiple lines for clarity):

    curl -v -k --data-binary '{
      "version": "1.0",
      "session": {
        "new": true,
        "sessionId": "session1234",
        "application": {
          "applicationId": "amzn1.echo-sdk-ams.app.1234"
        },
        "attributes": {},
        "user": {
          "userId": null
        }
      },
      "request": {
        "type": "LaunchRequest",
        "requestId": "request5678",
        "timestamp": "2015-05-13T12:34:56Z"
      }
    }'

For details about the JSON syntax for a launch request, see JSON Interface Reference for Custom Skills.

The Hello World sample generates a simple response to all requests that includes text meant to be spoken back by Alexa and a card displayed in the Alexa App. The body of this response is in JSON format, similar to the following:

    {
      "version": "1.0",
      "response": {
        "outputSpeech": {
          "type": "PlainText",
          "text": "Welcome to the Alexa Skills Kit, you can say hello"
        },
        "card": {
          "type": "Simple",
          "title": "HelloWorld",
          "content": "Welcome to the Alexa Skills Kit, you can say hello"
        },
        "reprompt": {
          "outputSpeech": {
            "type": "PlainText",
            "text": "Welcome to the Alexa Skills Kit, you can say hello"
          }
        },
        "shouldEndSession": false
      },
      "sessionAttributes": {}
    }

Setting up Hello World on Elastic Beanstalk

To test a skill with the Service Simulator or an Alexa-enabled device, you need to host the web service for the Alexa skill at an Internet-accessible endpoint. There are many different ways you can accomplish this. One possible method is to host the skill on AWS Elastic Beanstalk (a service offering by Amazon Web Services). These steps walk you through setting up the Hello World sample on Elastic Beanstalk.
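As an aside before the Elastic Beanstalk steps: the launch-request body shown in the cURL example can also be built programmatically for scripted testing. The sketch below simply mirrors the JSON fields from that example in Python; the helper name is made up, and no endpoint URL is hard-coded since it depends on where your Launcher is listening.

```python
import json

def make_launch_request(session_id, request_id, app_id, timestamp):
    """Build the minimal LaunchRequest body shown above as a Python dict."""
    return {
        "version": "1.0",
        "session": {
            "new": True,
            "sessionId": session_id,
            "application": {"applicationId": app_id},
            "attributes": {},
            "user": {"userId": None},
        },
        "request": {
            "type": "LaunchRequest",
            "requestId": request_id,
            "timestamp": timestamp,
        },
    }

# Serialize to the same JSON the cURL example sends with --data-binary.
body = json.dumps(make_launch_request(
    "session1234", "request5678",
    "amzn1.echo-sdk-ams.app.1234", "2015-05-13T12:34:56Z"))
print(json.loads(body)["request"]["type"])  # → LaunchRequest
```

Any HTTP client can then POST `body` to the sample's endpoint, exactly as cURL does above.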
Deploying Hello World to AWS Elastic Beanstalk requires the following tools, in addition to the Java requirements noted earlier:

- OpenSSL
- An account on Amazon Web Services (AWS)
- AWS Toolkit for Eclipse (note that this plug-in requires Eclipse Java EE)
- AWS Command Line Interface
- Maven and Maven Integration for Eclipse

In these instructions, <alexa-skills-kit-folder> is the folder you cloned from the GitHub repository.

To deploy a sample to Elastic Beanstalk, do the following:

- Download or clone the amzn/alexa-skills-kit-java GitHub repository.
- If you do not already have an account on AWS, go to Amazon Web Services and create an account.
- Install the AWS Toolkit for Eclipse.
- Configure the AWS Toolkit for Eclipse with your account credentials:
  a. Log in to the AWS Console and navigate to Identity and Access Management (IAM).
  b. Create a new IAM user and grant the user full access to Elastic Beanstalk.
  c. Select the option to generate an access key for the new user. Choose to download these credentials.
  d. In Eclipse, click on Window and then Preferences. In the AWS Toolkit section, enter your Access Key ID and Secret Access Key.
- Run Maven once to add the Alexa Skills Kit Java library (alexa-skills-kit-x.x.jar) to your local repository:
  a. In a command line, navigate to <alexa-skills-kit-folder>/samples.
  b. Run the command: mvn install. This copies the compiled jar file from <alexa-skills-kit-folder>/repo into your local Maven repository, so it will be available when you build the sample later.
- Create a new Maven project using the maven-archetype-webapp archetype and import the sample code. See Creating the Maven Webapp Project and Importing the Code.
- Add a class that extends SpeechletServlet to host the HelloWorldSpeechlet and set up servlet mapping in web.xml. See Adding a SpeechletServlet and Setting up Servlet Mapping.
- Deploy the project to AWS Elastic Beanstalk. See Deploying the Sample to Elastic Beanstalk.
- Configure the new Elastic Beanstalk environment to use HTTPS. See Configuring the Elastic Beanstalk Environment to Use HTTPS.
- Register the sample in the Developer Portal. See Registering the Sample in the Developer Portal.
- Test the skill with either the Service Simulator on the developer portal, or an Alexa-enabled device registered to your developer portal account. See Testing the Sample.

Creating the Maven Webapp Project and Importing the Code

The Maven maven-archetype-webapp archetype creates a new project that is structured as a web application and set up to build with Maven. When setting up your project, note that you can drag files and folders from your file system into Eclipse. When prompted, select the option to copy the files into the project.

- In Eclipse, click File > New > Maven Project.
  - If this option is not available, choose File > New > Project, expand Maven and select the Maven Project option.
  - If Maven is not shown in the New Project dialog box, make sure that your version of Eclipse includes Maven Integration for Eclipse.
- Select a location for the project. Make sure the Create simple project option is not selected.
- When prompted to select an archetype, select maven-archetype-webapp. Note that the list of archetypes is long, so you may want to filter it by typing "webapp" in the filter box.
- Enter a Group Id and Artifact Id for the project. For instance, you can set both of these fields to alexa-skills-kit-samples.

This creates a new, blank project set up as a web app. You now need to add the sample source code and dependencies:

- The project should have a src/main folder. If main does not already contain a java folder, create a new folder called java. The end result should be a structure like this: src/main/java.
- Copy the helloworld folder from <alexa-skills-kit-folder>/samples/src/main/java into the src/main/java folder in the new project.
- You can do this either by dragging helloworld into Eclipse and selecting the Copy files and folders option, or copying the folder within the file system.
- The project should now contain a single package called helloworld, containing two classes (HelloWorldSpeechlet and HelloWorldSpeechletRequestStreamHandler; note that the second class is provided for Lambda and not used in this example).
- Copy the file <alexa-skills-kit-folder>/samples/pom.xml into the root of the Maven project, replacing the pom.xml that was generated when creating the project. The pom.xml provided with the samples is already set up with all of the dependencies necessary to build the sample. However, it is set up to build a JAR file rather than a WAR.
- Edit the pom.xml file and make the following changes:
  - Change the <packaging> setting to war.
  - Under <build>, add the following line: <finalName>alexa-skills-kit-samples</finalName>
- Right-click on the project, click Maven and then click Update Project.

Adding a SpeechletServlet and Setting up Servlet Mapping

The Java library includes a SpeechletServlet class. This implementation of a Java EE servlet handles serializing and deserializing the body of the HTTP request and calls the appropriate Speechlet methods based on the request (onLaunch(), onIntent(), and so on). To host your web service on Elastic Beanstalk, you need to extend this class and add a constructor that instantiates the appropriate Speechlet class in the sample (HelloWorldSpeechlet). Then, set up servlet mapping in the web.xml file for the project.

To create the SpeechletServlet:

- In the helloworld package, create a new class that extends com.amazon.speech.speechlet.servlet.SpeechletServlet. For this example, name the new class HelloWorldServlet.
- Implement a default constructor for the new class that instantiates HelloWorldSpeechlet by calling setSpeechlet().
The HelloWorldServlet class should look like this:

    package helloworld;

    import com.amazon.speech.speechlet.servlet.SpeechletServlet;

    public class HelloWorldServlet extends SpeechletServlet {
        public HelloWorldServlet() {
            this.setSpeechlet(new HelloWorldSpeechlet());
        }
    }

To set up the servlet mapping:

- Open the src/main/webapp/WEB-INF/web.xml file in the project.
- Add the <servlet> and <servlet-mapping> tags to identify the servlet class and map the class to a URL.

The <servlet> and <servlet-mapping> sections should look like this:

    <servlet>
        <servlet-name>HelloWorldServlet</servlet-name>
        <servlet-class>helloworld.HelloWorldServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>HelloWorldServlet</servlet-name>
        <url-pattern>/hello</url-pattern>
    </servlet-mapping>

After making all of these updates, save all files, then right-click on the project, click Maven and then click Update Project. At this point, the project should be fully set up. Do a build just to ensure that everything builds correctly: right-click the project, click Run As and then click Maven install. Verify that a war file is created in the target folder for the project.

Deploying the Sample to Elastic Beanstalk

You can deploy a web project directly from Eclipse if you are using the AWS Toolkit for Eclipse. The initial deployment creates a new application and environment. An environment represents a particular version of a web service, deployed onto AWS resources. The environment name is used to determine the endpoint for the web service. See Elastic Beanstalk Components for more information. Once the web service is deployed, you can make code changes in Eclipse and re-deploy to the same environment. This updates the environment with a new version of your web service.

AWS Pricing: There is no extra charge to use Elastic Beanstalk. You pay for the actual AWS resources your application uses (such as EC2). Note that pricing for EC2 usage differs between regions – see Amazon EC2 Pricing for details.
To deploy the Hello World sample to Elastic Beanstalk:

- Use Maven to build a war file for the project: right-click the project, click Run As and then click Maven install. Verify that a war file is created in the target folder for the project.
- In the Eclipse Project Explorer, right-click the project and click Amazon Web Services, then Deploy to AWS Elastic Beanstalk.
- Select the Manually define a new server option.
- For the server type, select AWS Elastic Beanstalk for Tomcat 8, then click Next.
- Select the Region where you want to deploy the application. Note: All calls from Alexa come from the US East (N. Virginia) AWS region. Therefore, for best performance and reduced cost, create your Elastic Beanstalk environment in the US East (N. Virginia) region.
- Enter an Application Name and Environment Name and click Finish.

Deploying the application to Elastic Beanstalk may take several minutes. At this point, you should be able to connect to the environment in a browser. However, your service cannot take requests from Alexa until you configure the environment to use HTTPS.

Configuring the Elastic Beanstalk Environment to Use HTTPS

To meet the security requirements of Alexa, the endpoint for your web service must present a valid SSL certificate. When testing on Elastic Beanstalk, you can use a self-signed certificate for this. You need to configure the Elastic Beanstalk environment to present the signed certificate. You do this by uploading your certificate as a server certificate and then configuring the Elastic Beanstalk Load Balancer for your environment to use HTTPS.

- Create a private key and self-signed certificate. See the detailed instructions in Testing a Custom Skill. Be sure to specify the domain name for your new Elastic Beanstalk endpoint when creating the certificate. Make a note of the .pem files for your certificate and private key.
- Install and configure the AWS Command Line Interface if you have not already done so.
Use the following AWS command to upload your self-signed certificate to AWS as a server certificate:

    aws iam upload-server-certificate \
      --server-certificate-name CertificateName \
      --certificate-body \
      --private-key

- For the server-certificate-name, specify a name to identify the certificate in AWS.
- For the certificate-body, specify the path and filename of the .pem file for the certificate.
- For the private-key, specify the path and filename of the .pem file for your private key.

Then:

- Log in to the AWS Management Console. Navigate to Elastic Beanstalk and then navigate to your new environment.
- In the left menu, click Configuration, then open the Load Balancing section. Make the following changes:
  - Set Secure listener port to 443.
  - Set Protocol to HTTPS.
  - From the SSL Certificate ID drop-down list, select the name of the server certificate you uploaded in step 3.
- Save your changes. Elastic Beanstalk automatically updates your environment.
- Verify you can connect to your environment's URL from a browser using HTTPS.

Registering the Sample in the Developer Portal

Finally, you need to register the sample in the developer portal. This makes it possible for you to test it. You don't need to fully complete all registration fields. For testing, you must enter at a minimum:

- Skill Information: Skill Type (Custom Interaction Model), Name, and Invocation Name.
- Interaction Model: Intent Schema, Sample Utterances, and Custom Slot Types.
- Configuration: Endpoint. Select the HTTPS option and enter the URL for your service's endpoint.
- SSL Certificate: Select one of the options. If you want to use a self-signed SSL certificate for testing, create that certificate and upload it to the Developer Portal.
- Test: Set to Enabled.

For the steps to register the skill, see Registering and Managing Custom Skills in the Developer Portal. Use the values in the following table.
Testing the Sample

Once the sample is deployed to Elastic Beanstalk, configured for HTTPS, and registered on the developer portal, you should be able to invoke the skill by saying one of the phrases for interacting with skills:

- "Alexa, ask <invocation name> to <command>."
- "Alexa, tell <invocation name> to <command>."
- "Alexa, talk to <invocation name>."

For a list of supported phrases, see Supported Phrases to Begin a Conversation. For details about the provided samples and possible intents, see Using the Alexa Skills Kit Samples (Custom Skills).

If you want to experiment with changes, make your code changes in the project and then re-deploy the code to Elastic Beanstalk. You can re-deploy to the same Elastic Beanstalk environment:

- In the Eclipse Project Explorer, right-click the project and choose Amazon Web Services > Deploy to AWS Elastic Beanstalk.
- In the Run On Server dialog box, select Choose an Existing Server, select the existing environment to which you want to deploy the web service, then click Finish.

Accessing Log Files

Log files can be very useful when debugging and troubleshooting. In an Elastic Beanstalk configuration set up as described here, you can access log files for the web service in three ways:

- View a snapshot of the log files from the AWS Console. This snapshot includes the last 100 lines from each log file, all in a single view or download.
- Download all the logs from the AWS Console.
- Configure your Beanstalk environment to automatically publish the logs to an AWS S3 bucket.

For details about these different options, see Working with Logs.

The provided samples use SLF4J and log4j for logging. By default this writes log messages to the console terminal. To write log messages to a log file stored with your Elastic Beanstalk environment, you need to configure the logger with a log4j.properties configuration file. Add this file to the Java Resources\src directory in your Java Web Project.
For example, the following log4j.properties file configures the logger to write log entries to a file called /var/log/tomcat8/helloworld.log:

log4j.logger.helloworld=info
log4j.rootLogger=info, tail

log4j.appender.tail=org.apache.log4j.FileAppender
log4j.appender.tail.File=${catalina.base}/logs/helloworld.log
log4j.appender.tail.layout=org.apache.log4j.PatternLayout
log4j.appender.tail.layout.ConversionPattern=%d: %-5p %c{1} - %m%n

See SLF4J and log4j for information about configuring logging.

The HelloWorldSpeechlet writes a log entry for each type of request the service receives:

private static final Logger log = LoggerFactory.getLogger(HelloWorldSpeechlet.class);
...
@Override
public SpeechletResponse onLaunch(final LaunchRequest request, final Session session)
        throws SpeechletException {
    log.info("onLaunch requestId={}, sessionId={}", request.getRequestId(),
            session.getSessionId());
    ...
}

Using the above configuration and launching Hello World generates log entries similar to the following (as viewed in the snapshot of the last 100 lines of logs in Elastic Beanstalk):

-------------------------------------
/var/log/tomcat8/helloworld.log
-------------------------------------
2015-03-27 18:06:48,976: INFO HelloWorldSpeechlet - onSessionStarted
2015-03-27 18:06:49,087: INFO HelloWorldSpeechlet - onLaunch
https://developer.amazon.com/docs/custom-skills/deploy-a-sample-skill-as-a-web-service.html
CC-MAIN-2017-43
refinedweb
3,508
56.96
JSP forwarding JSP forwarding What is JSP forwarding? Hi, You can use the JSP forward tag for jsp forwarding. This will help you forward your request to some other page on the same server. Here is the code for jsp forwarding problem in forwarding response problem in forwarding response My response is not getting displayed. i have used RequestDispatcher.forward,RequestDispatcher.include and response.sendRedirect.There is no error on console nor any javascript error Redirecting and forwarding to views In this section, you will learn about redirecting and forwarding views through prefixes Array Creation - JSP-Servlet Array Creation hi i have a requirement in which i need to convert... 1000 comma separated string values now i convert it into single array like String x[]=csvvalue.split(","); this gives me an array containing all the strings JSP Array JSP Array  .... The below example gives you a demo on use of Array in JSP. Understand with Example The Tutorial illustrate an example from 'JSP Array'. To understand the example we Array problem - JSP-Servlet Array problem Respected Sir/Madam, I am having a pop up window in which the database values are available.. Its in the format of: One radio button ID Name Like this there is a table with n JSP Array Length JSP Array Length JSP Array Length is performed when you want to compute the length of array in JSP. Understand with Example The Tutorial to access JQuery array in jsp page? how to access JQuery array in jsp page? 
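A typical jsp:forward usage, to go with the forwarding answer above (the page name and parameter are illustrative, not from the original snippet):

```jsp
<%-- Forward this request to another page on the same server. --%>
<jsp:forward page="/result.jsp">
    <%-- Optionally pass extra request parameters to the target page. --%>
    <jsp:param name="status" value="ok" />
</jsp:forward>
```

Control never returns to the forwarding page; the target page writes the whole response.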
JQury Array: var elems =this.value; var arr = jQuery.makeArray(elems); How to get in jsp page getting html list in a array getting html list in a array hi i want to get html unordered list in a array using jsp Convert a tsring to an array - JSP-Interview Questions want to convert to an array that i wwant to process it character by chracter...: The String is strContent the array is Mystr F11633 is a text file..., 0); } jsp:include to Collection The values into Array Java Struts JSP J2EE Download this example... Array to Collection In this example we are converting values of an array into collection create an array in which no element is duplicate. if duplicate then tell to user duplicate. duplicacy is tell to user during when he enter the element retrieve JSON array objects in javascript retrieve JSON array objects in javascript I am getting one value by JSON array, but how can i store multiple values in json array and how can i...", how? please ignore my db connection in jsp JSONObject How to pass Array of string from action class to jsp page How to pass Array of string from action class to jsp page this is my action class package login.ipm; import java.sql.*; import java.util.ArrayList... jsp page is <%-- Document : select_service Created on : Aug 31 Passing a 2 Dimentional Array From one Jsp to Another Jsp file and Retreving it Passing a 2 Dimentional Array From one Jsp to Another Jsp file and Retreving it Hi I have a 2 dimensional integer array declared and values are dynamically stored into it in one jsp file .I passed this array into another Jsp PHP Array Operator ] => JSP [P] => PHP [A] => ASP ) Values of array c are: Array ( [S] => JSP [H] => PHP [P] => ASP ) Union of arrays a and b: Array ( [J... of arrays b and c: Array ( [S] => JSP [H] => PHP [P] => ASP [j] => Storing content from file path to an array the contents in this text file to an array. I am using jsp. I can access my path but how to store the contents in an array? 
I am looking forward in hearing from you saving form bean with Array of objects (collection) - Struts saving form bean with Array of objects (collection) Hi all... thanks..:) I am facing problem to capture my array of objects(Order) in form bean into action class, the array i get from form is NULL..:( Let me explain jsp - JSP-Servlet JSP associative array Does JSP objects acts like associative array JSP - JSP-Servlet one JSP file to another file.This will redirect to the different page without... or forwarding. This is the servlet API equivalent to SSI includes. The uri Forwarding Messages using Java Mail Forwarding Messages using Java Mail This Example shows you how to forward a message using javamail api. there is no method to forward a mail from one user to another PHP Push Array to Array PHP Push Array to Array array_push() function using array in PHP Declaring string array Declaring string array An array is the collection of same data type. Suppose if we have a declare an array of type String JSP - JSP-Servlet Difference between jsp forward and sendRedirect What is difference between jsp forward and send Redirect? Difference between jsp forward and sendRedirectJSP forward action (<jsp: forward/>)jsp forward action jsp forward action tag application context as the forwarding JSP file. Syntax of forward action Tag: <...jsp forward action tag Defined jsp forward action tag ? The <jsp:forward> element forwards the request object containing JavaScript Array of checkboxes JavaScript Array of checkboxes  ... that help you in understanding JavaScript Array of checkboxes. For this we...;br> <input type=checkbox name=scripts value='Jsp java - JSP-Interview Questions file, another JSP file, or a servlet. It should be noted that the target file must be in the same application context as the forwarding JSP file...() and forward() methods? Hi JSP forward action transfers the control Access value of array using OGNL in struts2. Access value of array using OGNL in struts2. 
Here, you will see how to access value of array in struts2 using OGNL. 1-index.jsp <%@taglib...] : <s:property<br/> Value array list jsp forward not functioning ". Is it because of that, the jsp:forward is not forwarding...jsp forward not functioning If the form in AddStudent.jsp... the student in the database (works OK) } %>}else{%> <jsp array example array example giving input from outside for array example Array sort Array sort Program that uses a function to sort an array of integers Attendance Generation - JSP-Servlet no forwarding of leaves for next month otherwise (no loss of pay) 2 if i used more than 2 leaves no forwarding of leaves for next month but loss of pay should length in array length in array wat s length in array Array of structure Array of structure create employment details with necessary field using array of structure array ADT array ADT Write a program using array_ADT to retrieve a list of elements from a file array ADT array ADT Write a program using array_ADT to retrieve a list of elements from a file Java array Java array How can one prove that the array is not null but empty Array Sorter Array Sorter I need a program that will ask the user to either "Enter an Array" or to "Exit the program" If the user want to enter an array the are asked to enter ten numbers. Then those ten numbers are stored in an array Array and input Array and input if this is my array int briefcases [ ]={1,2,3,4,5,6,7,8,9,10,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26}; i want to ask user to input one of the above numbers then i want to print the array without the number array ADT array ADT Write a program using array_ADT to retrieve a list of URLs from a file. In your program take an array index as the input and delete the entry corresponding to that index Computer - JSP-Interview Questions ForwardServlet extends HttpServlet{ private static final String forwardTo = "/jsp/ResultServlets"; private static final String includeIn = "/jsp/ResultServlets... 
ServletException, IOException { String mode = req.getParameter("mode"); // Forwarding using array using array Circular left shift array element by one position array string array string how to sort strings with out using any functions length in array length in array hi wat is length in array and where s it used Array Name Array Name How To Set Value in Two-Dimension ArrayList array declaration array declaration String propArray = []; is this declaration correct? Hi Friend, No, This is the wrong way. Array is declared... links: Integer Array String Array Thanks array to string array to string hello how to assign value from array to string. nsstring *abc = [array objectAtindex:1]; you can use this code NSString *abc = [NSString stringWithString:[array objectAtIndex:i]]; where i Java array Java array Java program to find first two maximum numbers in an array,using single loop without sorting array array program array program write a java program which will take 10 elements as command line arguments and count how many times 3 occurs in array Array copy Array copy Hello all. could someone telle me how to copy an array (holding random numbers) to a new array...( then make randomnumbers on this new array ) but without changing the excisting array ? last . possibility Array search Array search Need a program which performs a searching operation on a one dimensional array called NUM, which is an N-element array stored in a file... for the search operation. he value with one or more of the array items. Search begins ARRAY and STACK ARRAY and STACK hello, What is the difference between ARRAY... would be the last removed. In Array the items can be entered or removed in any order. a stack is simply a special case of an array. You could say Jsp - JSP-Servlet Jsp Hello sir, And how to store an array of strings in access database please reply me sir Thank you sir. Hi Friend, We have created a table named names(id(autonumber),name(text),address(text)). 
Try Array in C Array in C Respected Sir, How can an array be an lvalue, if we can't assign to it? How can I set an array's size at run time? How can I avoid fixed-sized arrays? help me sir array declaration array declaration what is the difference between declaration of these two things integer[] i={1,2,3,4,5} and integer i[]={1,2,3,4,5
http://www.roseindia.net/tutorialhelp/comment/85559
CC-MAIN-2014-52
refinedweb
1,709
58.62
#include <FXExtentf.h>

List of all members:

- Default constructor.
- Copy constructor.
- Initialize from two vectors.
- Initialize from six numbers.
- Assignment.
- Indexing with 0..1.
- Comparison.
- Width of box.
- Height of box.
- Longest side.
- Shortest side.
- Length of diagonal.
- Get radius of box.
- Compute diagonal.
- Get center of box.
- Test if empty.
- Test if box contains point x,y.
- Test if box contains point p.
- Test if box properly contains another box.
- Include point.
- Include given range into extent.
- Get corner number 0..3.

Friend functions:

- Test if bounds overlap.
- Union of two boxes.
- Intersection of two boxes.
- Save object to a stream.
- Load object from a stream.
http://www.fox-toolkit.org/ref16/classFX_1_1FXExtentf.html
crawl-003
refinedweb
110
66.6
What is Lambda

geek, howto, linq, programming, tech
April 20th, 2008

A previous post contained a very small line of code that looked like a code fragment used as an argument to a function call.

cruncher.Expect(x => x.Add(2, 4)).Returns(6);
cruncher.Expect(yyy => yyy.Add(5, 2)).Returns(3);

Which brings us to a fork in the road. Down one path you can say, "Well that's interesting. Some magic was added to dotnet somewhere along the line I might use here or there," and down the other path you can say, "Well that's interesting. What is this doing, and exactly how does it work under the hood, because I'll probably use this here or there and I want to make sure I'm using the right tool for the job and I'm using it in the right way."

Going down either path is fine, but this post is for people who want to go down the second one.

Whenever I explain something I like to start with something to connect it with. In this case I came up with a song my mother-in-law used to listen to by Lyle Lovett - If I had a boat.

If I had a boat
I'd go out on the ocean
And if I had a pony
I'd ride him on my boat
And we could all together
Go out on the ocean
I said me upon my pony on my boat

So let's see some code to put that song in context:

class Program
{
    static public void Example()
    {
        Location theOcean = new Location { Name = "the ocean" };
        Action<Boat> x1 = boat => boat.GoOut(theOcean);

        Boat myBoat = new Boat { Name = "my boat" };
        x1(myBoat); // my boat go out on the ocean
    }
}

class Boat
{
    public string Name { get; set; }

    public void GoOut(Location location)
    {
        Console.WriteLine("{0} go out on {1}", this.Name, location.Name);
    }
}

class Location
{
    public string Name { get; set; }
}
In other words, the Action variable x1 holds a reference to a function that takes a boat as a parameter. The difference between Func and Action is that the last template parameter of Func is the return value, while the return value of Action is always void.

The value that's being assigned into it is boat => boat.GoOut(theOcean). That's where the song lyrics relate. Even though this looks like a function call, the method GoOut is not called at this time, and in fact the myBoat variable and Boat instance are not even in existence yet. What this is saying is "If I had a Boat (in a boat variable as far as I'm concerned) then I would use that boat to GoOut on theOcean (using theOcean Location variable in this scope)."

"Okay," you're now saying, "that's interesting. It's a slightly shorter way of declaring an inline delegate function. I've saved myself some keystrokes. Yay."

But wait! There's more! Let's look at what Reflector is showing us has taken place.

public static void Example()
{
    Location <>g__initLocal0 = new Location();
    <>g__initLocal0.Name = "the ocean";
    Location theOcean = <>g__initLocal0;

    Action<Boat> x1 = delegate (Boat boat) {
        boat.GoOut(theOcean);
    };

    Boat <>g__initLocal1 = new Boat();
    <>g__initLocal1.Name = "my boat";
    Boat myBoat = <>g__initLocal1;

    x1(myBoat);
}

D'oh! The people behind the compiler are one step ahead of us. It is producing the same IL you would have if you had simply used an inline delegate - so that's what Reflector has deduced the code for this method would have been.

As an aside, there's also an example of what the compiler is producing when you're using the property assignment syntax after a constructor - it's about what you'd expect - it's creating the object normally in a generated variable, setting the properties, and then using the object as it was intended. In this case they're simply assigned to a local variable.

But that's not going to stop us! Because we know that before the expression was used to make a delegate it was an Expression.
So let's use it as such.

class Program
{
    static public void Example()
    {
        Location theOcean = new Location { Name = "the ocean" };
        Expression<Action<Boat>> e1 = boat => boat.GoOut(theOcean);
        Action<Boat> x1 = e1.Compile();

        Boat myBoat = new Boat { Name = "my boat" };
        x1(myBoat); // my boat go out on the ocean

        theOcean = new Location { Name = "the pacific ocean" };
        x1(myBoat); // my boat go out on the pacific ocean
    }
}

And when we look at Reflector we can see how the compiler has given us what we're looking for. An Expression is really an object. Actually it's a fair number of related objects which model the statement. When the compiler hit this line it produced the IL needed to allocate and connect the expression objects that model this statement. Here's approximately the C# code Reflector believes would produce the same IL:

static public void Example()
{
    Location theOcean = new Location { Name = "the ocean" };

    ParameterExpression CS_0_0000;
    Expression<Action<Boat>> e1 = Expression.Lambda<Action<Boat>>(
        Expression.Call(
            CS_0_0000 = Expression.Parameter(typeof(Boat), "boat"),
            typeof(Boat).GetMethod("GoOut"),
            new Expression[] { Expression.Constant(theOcean) }
        ),
        new ParameterExpression[] { CS_0_0000 });
    Action<Boat> x1 = e1.Compile();

    Boat myBoat = new Boat { Name = "my boat" };
    x1(myBoat); // my boat go out on the ocean

    theOcean.Name = "the pacific ocean";
    x1(myBoat); // my boat go out on the pacific ocean

    theOcean = new Location { Name = "the atlantic ocean" };
    x1(myBoat); // my boat go out on the pacific ocean
}

You can compile an Expression into a delegate and execute the code it refers to, as we see above, but the real power comes from the fact that you can take a step back and work with the expression itself as a piece of information.

"And if I had a pony I'd ride him on my boat."

See how Lyle's not saying because I have a pony I'm riding it on my boat now. It's more hypothetical.
Expression<Action<IPony, ILocation>> LyleSays = (pony, boat) => pony.RideOn(boat);

so he's saying "hypothetically, given a pony and a location I'll refer to as a boat for the sake of argument, I'd ride one on the other." He's even put the statement out there as a line of a song you can parse and make of what you will. That last part is where the spooky-cool magic powers start to come from.

Let's take a look at the MoQ example again.

var cruncher = new Mock<ICruncher>();
cruncher.Expect(x => x.Add(2, 4)).Returns(6);

What we have done is called a Mock<Cruncher> method named Expect that takes an Expression<Action<ICruncher>> parameter. The parameter we provided says if I had a cruncher I called x I'd call Add on it with a 2 and a 4. The Mock takes this expression we've provided, looks into its guts, sees it's a method call with these parameters, and saves that information internally in a collection of rules. Later when the totally fake cruncher gets a call on the Add method it whips through the rules looking for matching parameters and performs whatever action was added to that rule.

So in this case the expression is never even compiled and certainly never executed directly. It's treated as data. The nice thing about this data though is that it's been through the intellisense, and the compiler, so you know it's good to go. It will also be updated if you use Visual Studio or Resharper to rename the Add method to something like Sum. Or even if the rename is missed it'll be a compiler error because there's no Add method.

This is also a corner-stone behind some of the new technologies that convert fairly complex expressions into sql statements (Linq to SQL, Linq to NHibernate) or that are used to calculate what url would be used for a given method call (Microsoft MVC).

Source code with this article is available online for browsing.
Or to get a local copy use: svn co

(Though honestly it's a pretty darn trivial and contrived example)

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Linq.Expressions;

namespace IfIHadABoat.Part4
{
    interface ILocation
    {
        string Name { get; set; }
    }

    interface IPony : ILocation
    {
        void Ride(ILocation location);
    }

    interface IBoat : ILocation
    {
        void GoOut(ILocation location);
    }

    class Program
    {
        static public void Example()
        {
            Action<IBoat, ILocation> x1 = (boat, place) => boat.GoOut(place);
            Action<IPony, ILocation> x2 = (pony, place) => pony.Ride(place);
            Action<IBoat, IPony, ILocation> x3 = (boat, pony, place) =>
            {
                if (boat != null) x1(boat, place);
                if (pony != null) x2(pony, boat ?? place);
            };

            var myPony = new Pony { Name = "my pony" };
            var myBoat = new Boat { Name = "my boat" };
            var theOcean = new Location { Name = "the ocean" };

            x3(myBoat, myPony, theOcean);
            // my boat go out on the ocean
            // my pony ride on my boat

            x3(null, new Pony(), new Location { Name = "through the desert" });
            // no name ride on through the desert
        }
    }

    class Location : ILocation
    {
        public string Name { get; set; }
    }

    class Pony : IPony
    {
        public string Name { get; set; }

        public void Ride(ILocation location)
        {
            Console.WriteLine("{0} ride on {1}", this.Name ?? "no name", location.Name);
        }
    }

    class Boat : IBoat
    {
        public string Name { get; set; }

        public void GoOut(ILocation location)
        {
            Console.WriteLine("{0} go out on {1}", this.Name ?? "no name", location.Name);
        }
    }
}

Updated: I added a hackish loop in the final version of program.cs that turns an object graph into an xml infoset, and then puts that infoset onto the console. Here is the infoset for the following lambda expression:

Expression<Action<IBoat, IPony, ILocation>> e3 = (boat, pony, place) => pony.Ride(boat == null ?
place : boat.GoOut(place));

<!--(boat, pony, place) => pony.Ride(IIF((boat = null), place, Convert(boat.GoOut(place))))-->
<e NodeType="Lambda">
  <!--pony.Ride(IIF((boat = null), place, Convert(boat.GoOut(place))))-->
  <MethodCallExpression-Body
  <RuntimeMethodInfo-Method
  <!--pony-->
  <ParameterExpression-Object
  <RuntimeType-Type
  </ParameterExpression-Object>
  <Arguments>
    <!--IIF((boat = null), place, Convert(boat.GoOut(place)))-->
    <ConditionalExpression NodeType="Conditional">
      <!--(boat = null)-->
      <BinaryExpression-Test
      <!--boat-->
      <ParameterExpression-Left
      <RuntimeType-Type
      </ParameterExpression-Left>
      <!--null-->
      <ConstantExpression-Right
      <RuntimeType-Type
      </ConstantExpression-Right>
      <RuntimeType-Type
      </BinaryExpression-Test>
      <!--place-->
      <ParameterExpression-IfTrue
      <RuntimeType-Type
      </ParameterExpression-IfTrue>
      <!--Convert(boat.GoOut(place))-->
      <UnaryExpression-IfFalse
      <!--boat.GoOut(place)-->
      <MethodCallExpression-Operand
      <RuntimeMethodInfo-Method
      <!--boat-->
      <ParameterExpression-Object
      <RuntimeType-Type
      </ParameterExpression-Object>
      <Arguments>
        <!--place-->
        <ParameterExpression Name="place" NodeType="Parameter">
        <RuntimeType-Type
        </ParameterExpression>
      </Arguments>
      <RuntimeType-Type
      </MethodCallExpression-Operand>
      <RuntimeType-Type
      </UnaryExpression-IfFalse>
      <RuntimeType-Type
    </ConditionalExpression>
  </Arguments>
  <RuntimeType-Type
  </MethodCallExpression-Body>
  <Parameters>
    <!--boat-->
    <ParameterExpression Name="boat" NodeType="Parameter">
    <RuntimeType-Type
    </ParameterExpression>
    <!--pony-->
    <ParameterExpression Name="pony" NodeType="Parameter">
    <RuntimeType-Type
    </ParameterExpression>
    <!--place-->
    <ParameterExpression Name="place" NodeType="Parameter">
    <RuntimeType-Type
    </ParameterExpression>
  </Parameters>
  <RuntimeType-Type
</e>
http://whereslou.com/2008/04/20/what-is-lambda
crawl-002
refinedweb
1,828
55.95
How to update QML chart during runtime?

I found this link in this forum. It works fine after integrating it into my application. But I am not able to add data during runtime and not able to display it.

In QMLChartData.js, how do I push data to the data variable?

var ChartLineData = {
    labels: ["January", "February", "March", "April", "May", "June", "July"],
    datasets: [{
        ...
    }]
}

I have imported and bound it like this:

import "QMLChartData.js" as ChartsData

property var chartLineData: ChartsData.ChartLineData

// push new data
chartLineData.datasets.data.push(100); // It says "cannot call method push of undefined."

But I am able to push data to labels:

chartLineData.labels.push("december")
chart_line.requestPaint()

but I am not able to display it on the graph even though I called requestPaint(). Any help please. I just want to draw a line graph based on the data from the C++ side. Any example would be really helpful. Thanks
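A likely cause of the "cannot call method push of undefined" error, assuming the usual Chart.js-style shape where datasets is an array of objects: the array has to be indexed before reaching data. A plain-JavaScript sketch (values are illustrative):

```javascript
// Illustrative shape: labels plus an array of dataset objects.
var chartLineData = {
    labels: ["January", "February", "March"],
    datasets: [
        { data: [10, 20, 30] }
    ]
};

// chartLineData.datasets.data is undefined because datasets is an array,
// so pick a dataset first, then push into its data array:
chartLineData.datasets[0].data.push(100);
chartLineData.labels.push("April");

console.log(chartLineData.datasets[0].data); // [10, 20, 30, 100]
```

After updating the arrays, chart_line.requestPaint() still has to be called so the Canvas item repaints with the new values.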
https://forum.qt.io/topic/57546/how-to-update-qml-chart-during-runtime
CC-MAIN-2018-30
refinedweb
147
62.24
In this article, I will describe the implementation of an efficient Aho-Corasick algorithm for pattern matching. In simple words, this algorithm can be used for searching a text for specified keywords. The following code is useful when you have a set of keywords and you want to find all occurrences of the keywords in the text or check if any of the keywords is present in the text. You should use this algorithm especially if you have a large number of keywords that don't change often, because in this case, it is much more efficient than other algorithms that can be simply implemented using the .NET class library.

In this section, I'll try to describe the concept of this algorithm. For more information and for a more exact explanation, please take a look at the links at the end of this article. The algorithm consists of two parts. The first part is the building of the tree from the keywords you want to search for, and the second part is searching the text for the keywords using the previously built tree (state machine). Searching for a keyword is very efficient, because it only moves through the states in the state machine. If a character is matching, it follows the goto function; otherwise it follows the fail function.

In the first phase of the tree building, keywords are added to the tree. In my implementation, I use the class StringSearch.TreeNode, which represents one letter. The root node is used only as a placeholder and contains links to other letters. Links created in this first step represent the goto function, which returns the next state when a character is matching.

During the second phase, the fail and output functions are found. The fail function is used when a character is not matching and the output function returns the found keywords for each reached state. For example, in the text "SHIS", the failure function is used to exit from the "SHE" branch to the "HIS" branch after the first two characters (because the third character is not matching). During the second phase, the BFS (breadth first search) algorithm is used for traversing through all the nodes. Functions are calculated in this order, because the fail function of the specified node is calculated using the fail function of the parent node.

Building of the keyword tree (figure 1 - after the first step, figure 2 - tree with the fail function)

As I already mentioned, searching only means traversing the previously built keyword tree (state machine). To demonstrate how this algorithm works, let's look at the commented method which returns all the matches of the specified keywords:
During the second phase, the BFS (breadth first search) algorithm is used for traversing through all the nodes. Functions are calculated in this order, because the fail function of the specified node is calculated using the fail function of the parent node. Building of the keyword tree (figure 1 - after the first step, figure 2 - tree with the fail function) As I already mentioned, searching only means traversing the previously built keyword tree (state machine). To demonstrate how this algorithm works, let's look at the commented method which returns all the matches of the specified keywords: // Searches passed text and returns all occurrences of any keyword // Returns array containing positions of found keywords public StringSearchResult[] FindAll(string text) { ArrayList ret=new ArrayList(); // List containing results TreeNode ptr=_root; // Current node (state) int index=0; // Index in text // Loop through characters while(index<text.Length) { // Find next state (if no transition exists, fail function is used) // walks through tree until transition is found or root is reached TreeNode trans=null; while(trans==null) { trans=ptr.GetTransition(text[index]); if (ptr==_root) break; if (trans==null) ptr=ptr.Failure; } if (trans!=null) ptr=trans; // Add results from node to output array and move to next character foreach(string found in ptr.Results) ret.Add(new StringSearchResult(index-found.Length+1,found)); index++; } // Convert results to array return (StringSearchResult[])ret.ToArray(typeof(StringSearchResult)); } Complexity of the first part is not so important, because it is executed only once. Complexity of the second part is O(m+z) where m is the length of the text and z is the number of found keywords (in simple words, it is very fast and it's speed doesn't drop quickly for longer texts or many keywords). 
The interesting thing is that for fewer than 70 keywords, it is better to use the simple method based on String.IndexOf. Regular expressions are almost always slower than the other algorithms. I also tried compiling the test under both .NET 1.1 and .NET 2.0 to see the difference. Although my measuring method may not be very precise, it looks like .NET 2.0 is a bit faster (about 5-10%), and the method with regular expressions gives much better results (about 60% faster).

Two charts comparing the speed of the three described algorithms - Aho-Corasick (green), IndexOf (blue), and Regex (yellow)

I decided to implement this algorithm when I had to ban some words in a community web page (vulgarisms etc.). This is a typical use case because searching should be really fast, but blocked keywords don't change often (and the creation of the keyword tree can be slower).

The search algorithm is implemented in the file StringSearch.cs. I created an interface that represents any search algorithm (so it is easy to replace it with another implementation). This interface is called IStringSearchAlgorithm, and it contains a property Keywords (gets or sets keywords to search for) and methods for searching. The method FindAll returns all the keywords in the passed text, and FindFirst returns the first match.
Matches are represented by the StringSearchResult structure that contains the found keyword and its position in the text. The last method is ContainsAny, which returns true when the passed text contains a keyword. The class that implements the Aho-Corasick algorithm is called StringSearch.

The following example shows how to load keywords from a database and create a SearchAlgorithm instance:

// Initialize DB connection
SqlConnection conn = new SqlConnection(connectionString);
SqlCommand cmd = new SqlCommand("SELECT BlockedWord" +
    " FROM BlockedWords", conn);
conn.Open();

// Read list of banned words
ArrayList listWords = new ArrayList();
using (SqlDataReader reader =
    cmd.ExecuteReader(CommandBehavior.CloseConnection))
{
    while (reader.Read()) listWords.Add(reader.GetString(0));
}
string[] arrayWords = (string[])listWords.ToArray(typeof(string));

// Create search algorithm instance
IStringSearchAlgorithm searchAlg = new StringSearch();
searchAlg.Keywords = arrayWords;

You can also use the StringSearch constructor which takes an array of keywords as a parameter. Searching the passed text for keywords is even easier. The following sample shows how to write all the matches to the console output:

// Find all matching keywords
StringSearchResult[] results = searchAlg.FindAll(textToSearch);

// Write all results
foreach (StringSearchResult r in results)
{
    Console.WriteLine("Keyword='{0}', Index={1}", r.Keyword, r.Index);
}

This implementation of the Aho-Corasick search algorithm is very efficient if you want to find a large number of keywords in a text of any length, but if you want to search only for a few keywords, it is better to use a simple method like String.IndexOf. The code can be compiled in both .NET 1.1 and .NET 2.0 without any modifications.
If you want to learn more about this algorithm, take a look at the link in the next section; it was very useful for me during the implementation of the algorithm and explains the theory behind it.
http://www.codeproject.com/Articles/12383/Aho-Corasick-string-matching-in-C?fid=240598&df=90&mpp=25&sort=Position&spc=Relaxed&select=2721299&tid=1321224
dirtyRSS - A dirty but self-contained RSS parser

    use dirtyRSS;
    $tree = parse($in);
    die("$tree\n") unless (ref $tree);
    disptree($tree, 0);

dirtyRSS is a terribly dirty RSS parser, which doesn't require any other module to work. It parses a string and creates a tree which represents the RSS feed. It doesn't support the complete XML syntax, only things that are commonly used in feeds. All tags are lowercased, namespace indicators are removed, and several typical non-RSS-2.0 tags are translated shamelessly to their 2.0 counterparts. There is also plenty of fiddling with the data on the way. The only good thing about this parser is that it works most of the time, and it makes the tree look as if it came from RSS 2.0 for a large part of feeds of various sorts. If the parse fails, an error message is passed via the return value, rather than a reference to an array. The following functions are exported: parse() and disptree(). The module is based upon trials and errors, so naturally there are going to be more errors. This module is released to the open domain. There are no restrictions on using it. The module is part of the Editaste site.

Eli Billauer, <perldev@editaste.com>
http://search.cpan.org/dist/dirtyRSS/dirtyRSS.pm
This article has 50 Spring interview questions based on those most frequently asked in Spring framework interviews. We will be adding more questions based on reader requests. If you are looking for any specific questions or have doubts, please post your queries in the comments section of this article. We will update the questions and send you a reminder about the update. Also, don't forget to leave feedback about the questions in this article and provide suggestions; it helps us compile more questions into the list. If you are looking for any specific topic on the Spring framework, please read the list of articles published on the Spring topic.

- Spring Framework Article
- Spring Framework Books (recommended)
- Introduction to Spring Framework
- Spring and Hibernate Integration

50 Spring Framework Interview Questions

1) What is Spring Framework?

Spring is a lightweight inversion of control and aspect-oriented container framework. Spring Framework's contribution to the Java community is immense, and the Spring community is the largest and most innovative community by size. They have numerous projects under their portfolio and have their own Spring dmServer for running Spring applications. This community was acquired by VMWare, a leading cloud computing company, for enabling Java applications in the cloud using Spring stacks. If you are looking to read more about the Spring framework and its products, please read their official site, Spring Source.

2) Explain Spring?

- Lightweight: Spring is lightweight when it comes to size and transparency.
- Inversion of control (IoC): Loose coupling is achieved in Spring; objects are given their dependencies instead of creating or looking for dependent objects.
- Aspect oriented: Spring supports aspect-oriented programming, separating business logic from system services.
- Container: Spring contains and manages the life cycle and configuration of application objects.
- Framework: Spring provides most of the intra functionality, leaving the rest of the coding to the developer.

3) What are the different modules in Spring framework?

- The Core container module
- Application context module
- AOP module (Aspect Oriented Programming)
- JDBC abstraction and DAO module
- O/R mapping integration module (Object/Relational)
- Web module
- MVC framework module

4) What is the structure of Spring framework?

[Diagram of the Spring framework modules]

5) What is the Core container module?
This module provides the fundamental functionality of the Spring framework. In this module, BeanFactory is the heart of any Spring-based application. The entire framework was built on top of this module. This module makes Spring a container.

6) What is the application context module?

The application context module makes Spring a framework. This module extends the concept of BeanFactory, providing support for internationalization (I18N) messages, application lifecycle events, and validation. It also supplies many enterprise services such as JNDI access, EJB integration, remoting, and scheduling, and provides support for other frameworks.

7) What is the AOP module?

The AOP module is used for developing aspects for our Spring-enabled application. Much of the support has been provided by the AOP Alliance in order to ensure the interoperability between Spring and other AOP frameworks. This module also introduces metadata programming to Spring. Using Spring's metadata support, we will be able to add annotations to our source code that instruct Spring on where and how to apply aspects.

8) What is the JDBC abstraction and DAO module?

Using this module we can keep the database code clean and simple, and prevent problems that result from a failure to close database resources. A new layer of meaningful exceptions on top of the error messages given by several database servers is bestowed in this module. In addition, this module uses Spring's AOP module to provide transaction management services for objects in a Spring application.

9) What is the object/relational mapping integration module?

Spring also supports using an object/relational mapping (ORM) tool over straight JDBC by providing the ORM module. Spring provides support to tie into several popular ORM frameworks, including Hibernate, JDO, and iBATIS SQL Maps. Spring's transaction management supports each of these ORM frameworks as well as JDBC.

10) What is the web module?

This module is built on the application context module, providing a context that is appropriate for web-based applications. It also contains support for several web-oriented tasks, such as transparently handling multipart requests for file uploads and programmatic binding of request parameters to your business objects.

11) What is the web MVC module?

Spring comes with a full-featured MVC framework for building web applications. Although Spring can easily be integrated with other MVC frameworks, such as Struts, Spring's MVC framework uses IoC to provide for a clean separation of controller logic from business objects. It also allows you to declaratively bind request parameters to your business objects, and it can take advantage of any of Spring's other services, such as I18N messaging and validation.

12) What is a BeanFactory?

A BeanFactory is an implementation of the factory pattern that applies Inversion of Control to separate the application's configuration and dependencies from the actual application code.

13) What is AOP Alliance?
AOP Alliance is an open-source project whose goal is to promote adoption of AOP and interoperability among different AOP implementations by defining a common set of interfaces and components.

14) What is the Spring configuration file?

The Spring configuration file is an XML file. This file contains the class information and describes how these classes are configured and introduced to each other.

15) What does a simple Spring application contain?

These applications are like any Java application. They are made up of several classes, each performing a specific purpose within the application. But these classes are configured and introduced to each other through an XML file, known as the Spring configuration file.

16) What is XmlBeanFactory?

BeanFactory has many implementations in Spring, but one of the most useful ones is org.springframework.beans.factory.xml.XmlBeanFactory, which loads its beans based on the definitions contained in an XML file. To create an XmlBeanFactory, pass a java.io.InputStream to the constructor. The InputStream will provide the XML to the factory. For example, the following code snippet uses a java.io.FileInputStream to provide a bean definition XML file to XmlBeanFactory:

BeanFactory factory = new XmlBeanFactory(new FileInputStream("beans.xml"));

To retrieve a bean from a BeanFactory, call the getBean() method, passing the name of the bean you want to retrieve:

MyBean myBean = (MyBean) factory.getBean("myBean");

17) What are important ApplicationContext implementations in the Spring framework?

- ClassPathXmlApplicationContext – This context loads a context definition from an XML file located in the class path, treating context definition files as class path resources.
- FileSystemXmlApplicationContext – This context loads a context definition from an XML file in the filesystem.
- XmlWebApplicationContext – This context loads the context definitions from an XML file contained within a web application.

18) Explain the bean lifecycle in the Spring framework?

- The Spring container finds the bean's definition in the XML file and instantiates the bean.
- Spring populates all of the properties as specified in the bean definition (dependency injection).
- If the bean implements the BeanNameAware interface, the factory calls setBeanName(), passing the bean's ID.
- If the bean implements the BeanFactoryAware interface, the factory calls setBeanFactory(), passing an instance of itself.
- If there are any BeanPostProcessors associated with the bean, their postProcessBeforeInitialization() methods are called.
- If an init-method is specified for the bean, it is called.
- Finally, if there are any BeanPostProcessors associated with the bean, their postProcessAfterInitialization() methods are called.

19) What is bean wiring?

Combining beans together within the Spring container is known as bean wiring, or wiring. When wiring beans, you should tell the container what beans are needed and how the container should use dependency injection to tie them together.

20) How do you add a bean in a Spring application?

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "">
<beans>
  <bean id="foo" class="com.act.Foo"/>
  <bean id="bar" class="com.act.Bar"/>
</beans>

In the bean tag, the id attribute specifies the bean name and the class attribute specifies the fully qualified class name.

21) What are singleton beans and how can you create prototype beans?

Beans defined in the Spring container are singletons by default. To get a new instance for every request, set the singleton attribute to false:

<beans>
  <bean id="bar" class="com.act.Foo" singleton="false"/>
</beans>

22) What are the important bean lifecycle methods?

There are two important bean lifecycle methods. The first one is setup, which is called when the bean is loaded into the container. The second is the teardown method, which is called when the bean is unloaded from the container.

23) How can you override the beans' default lifecycle methods?

The bean tag has two more important attributes with which you can define your own custom initialization and destroy methods. Here I have shown a small demonstration. Two new methods, fooSetup and fooTeardown, are to be added to your Foo class:

<beans>
  <bean id="bar" class="com.act.Foo" init-
</beans>

24) What are Inner Beans?

When wiring beans, if a bean element is embedded in a property tag directly, then that bean is said to be an inner bean. The drawback of this bean is that it cannot be reused anywhere else.

25) What are the different types of bean injections?

There are two types of bean injections:

- By setter
- By constructor

26) What is autowiring?

You can wire the beans as you wish.
But the Spring framework can also do this work for you. It can autowire the related beans together. All you have to do is set the autowire attribute of the bean tag to an autowire type:

<beans>
  <bean id="bar" class="com.act.Foo" autowire="autowire type"/>
</beans>

27) What are the different autowire types?

There are four different types by which autowiring can be done:

- byName
- byType
- constructor
- autodetect

28) What are the different types of events related to listeners?

There are a lot of events related to the ApplicationContext of the Spring framework. All the events are subclasses of org.springframework.context.ApplicationEvent:

- ContextClosedEvent – This is fired when the context is closed.
- ContextRefreshedEvent – This is fired when the context is initialized or refreshed.
- RequestHandledEvent – This is fired when the web context handles any request.

29) What is an Aspect?

An aspect is the cross-cutting functionality that you are implementing. It is the aspect of your application you are modularizing. An example of an aspect is logging. Logging is something that is required throughout an application. However, because applications tend to be broken down into layers based on functionality, reusing a logging module through inheritance does not make sense. Instead, you can create a logging aspect and apply it throughout your application using AOP.

30) What is a Joinpoint?

A joinpoint is a point in the execution of the application where an aspect can be plugged in. This point could be a method being called, an exception being thrown, or even a field being modified.

31) What is an Advice?

Advice is the implementation of an aspect. It is something like telling your application of a new behavior. Generally, an advice is inserted into an application at joinpoints.

32) What is a Pointcut?

A pointcut is something that defines at what joinpoints an advice should be applied. Advices can be applied at any joinpoint that is supported by the AOP framework. Pointcuts allow you to specify where the advice can be applied.

33) What is an Introduction in AOP?
An introduction allows the user to add new methods or attributes to an existing class. These can be introduced without having to change the structure of the class, giving it new behavior and state.

34) What is a Target?

A target is the class that is being advised. The class can be a third-party class or your own class to which you want to add your own custom behavior. By using the concepts of AOP, the target class is free to center on its major concern, unaware of any advice that is being applied.

35) What is a Proxy?

A proxy is an object that is created after applying advice to a target object. From the point of view of client objects, the target object and the proxy object are the same.

36) What is meant by Weaving?

The process of applying aspects to a target object to create a new proxy object is called weaving. The aspects are woven into the target object at the specified joinpoints.

37) What are the different points where weaving can be applied?

- Compile Time
- Classload Time
- Runtime

38) What are the different advice types in Spring?

- Around: Intercepts the calls to the target method
- Before: This is called before the target method is invoked
- After: This is called after the target method returns
- Throws: This is called when the target method throws an exception

The corresponding interfaces are:

- Around: org.aopalliance.intercept.MethodInterceptor
- Before: org.springframework.aop.BeforeAdvice
- After: org.springframework.aop.AfterReturningAdvice
- Throws: org.springframework.aop.ThrowsAdvice

39) What are the different types of autoproxying?

- BeanNameAutoProxyCreator
- DefaultAdvisorAutoProxyCreator
- Metadata autoproxying

40) What is the exception class related to all the exceptions that are thrown in Spring applications?

org.springframework.dao.DataAccessException

41) What kind of exceptions do the Spring DAO classes throw?

The Spring DAO classes do not throw any technology-related exceptions such as SQLException.
They throw exceptions which are subclasses of DataAccessException.

42) What is DataAccessException?

DataAccessException is a RuntimeException, i.e. an unchecked exception. The user is not forced to handle these kinds of exceptions.

43) How can you configure a bean to get a DataSource from JNDI?

<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
  <property name="jndiName">
    <value>java:comp/env/jdbc/myDatasource</value>
  </property>
</bean>

44) How can you create a DataSource connection pool?

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
  <property name="driver">
    <value>${db.driver}</value>
  </property>
  <property name="url">
    <value>${db.url}</value>
  </property>
  <property name="username">
    <value>${db.username}</value>
  </property>
  <property name="password">
    <value>${db.password}</value>
  </property>
</bean>

45) How can JDBC be used more efficiently in the Spring framework?

JDBC can be used more efficiently with the help of a template class provided by the Spring framework, called JdbcTemplate.

46) How can JdbcTemplate be used?

With the Spring JDBC framework, the burden of resource management and error handling is reduced a lot, leaving developers to write just the statements and queries that get data to and from the database.

JdbcTemplate template = new JdbcTemplate(myDataSource);

A simple DAO class looks like this:

public class StudentDaoJdbc implements StudentDao {
    private JdbcTemplate jdbcTemplate;

    public void setJdbcTemplate(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }
    // more ...
}

The configuration is shown below.
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
  <property name="dataSource">
    <ref bean="dataSource"/>
  </property>
</bean>

<bean id="studentDao" class="StudentDaoJdbc">
  <property name="jdbcTemplate">
    <ref bean="jdbcTemplate"/>
  </property>
</bean>

<bean id="courseDao" class="CourseDaoJdbc">
  <property name="jdbcTemplate">
    <ref bean="jdbcTemplate"/>
  </property>
</bean>

47) How do you write data to the backend in Spring using JdbcTemplate?

The JdbcTemplate uses several callback interfaces when writing data to the database. The usefulness you will find in each of these interfaces will vary. There are two simple interfaces: one is PreparedStatementCreator and the other is BatchPreparedStatementSetter.

48) Explain PreparedStatementCreator.

PreparedStatementCreator is one of the most commonly used interfaces for writing data to the database. The interface has one method, createPreparedStatement():

PreparedStatement createPreparedStatement(Connection conn) throws SQLException;

When this interface is implemented, we should create and return a PreparedStatement from the Connection argument, and the exception handling is automatically taken care of. When this interface is implemented, another interface, SqlProvider, is also implemented; it has a method called getSql() which is used to provide SQL strings to the JdbcTemplate.

49) Explain BatchPreparedStatementSetter.

If the user wants to update more than one row in one shot, they can go for BatchPreparedStatementSetter. This interface provides two methods:

void setValues(PreparedStatement ps, int i) throws SQLException;
int getBatchSize();

getBatchSize() tells the JdbcTemplate class how many statements to create, and this also determines how many times setValues() will be called.

50) Explain RowCallbackHandler and why it is used.

In order to navigate through the records, we generally go for ResultSet.
But Spring provides an interface that handles this entire burden and leaves the user to decide what to do with each row. The interface provided by Spring is RowCallbackHandler. There is a method, processRow(), which needs to be implemented so that it is applied to each and every row:

void processRow(java.sql.ResultSet rs);

Recommended Books for Spring Framework

Spring In Action: Covers Spring 3.0, by Craig Walls. Spring Framework is required knowledge for Java developers.

Spring Batch In Action, by Arnaud Cogoluegnes, Thierry Templier, Gary Gregory, and Olivier Bazoud.

Spring Dynamic Modules In Action, by Arnaud Cogoluegnes, Thierry Templier, and Andy Piper.

Professional Java Development With The Spring Framework, by Rod Johnson, Juergen Hoeller, Alef Arendsen, and Thomas R. The book covers the complete spectrum of Java development, including database access/persistence, container configuration, transaction management, remoting, and web MVC. It introduces well-known techniques, like design patterns, to solve some of these problems, as well as new and innovative approaches like Inversion of Control (IoC) and Aspect Oriented Programming (AOP). All solutions are implemented using the functions provided by the Spring Framework in conjunction with other popular open source technologies like Hibernate and Velocity.

Spring Web Flow 2 Web Development: Master Spring's well-designed web frameworks to develop powerful web applications, by Markus Stauble and Sven Luppken.

Comments:

"I need more information about Jdbctemplate, because i am using spring framework" – Refer this link:

"Thank you for posting these useful questions and answers. I would be happy if you would put more on Spring security and integration" – Thank you for the suggestion. We will add some more questions on these topics.
"realy good site for interview preparation" – Hello Bini, thank you for the nice words! We keep adding more questions for interview preparation. I would recommend this book for complete preparation for a Java interview:

"Hi, in question 20 various unnecessary characters are getting added, so it's not understandable. Please remove and add the proper code. Regards, Rahul Gupta"
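For the commenter asking for more on JdbcTemplate: the core idea behind callbacks such as RowCallbackHandler (question 50) can be sketched without Spring at all. In this hypothetical Java sketch (all names are mine, not Spring's API), the template owns the iteration, and the caller supplies only the per-row logic:

```java
import java.util.*;

// Spring-free sketch of the RowCallbackHandler idea from question 50.
// The "template" owns iteration (in real Spring it also owns resource
// handling), while the caller decides what to do with each row. Rows
// are plain maps here instead of a ResultSet; all names are mine.
interface RowHandler {
    void processRow(Map<String, Object> row);
}

class QueryTemplate {
    private final List<Map<String, Object>> rows;

    QueryTemplate(List<Map<String, Object>> rows) {
        this.rows = rows;
    }

    void query(RowHandler handler) {
        // The loop (and any open/close bookkeeping) lives in one place...
        for (Map<String, Object> row : rows) {
            handler.processRow(row);  // ...the caller only provides per-row logic.
        }
    }
}
```

Because RowHandler has a single method, the caller can pass a lambda, which mirrors how Spring users typically implement RowCallbackHandler inline.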
http://www.javabeat.net/spring-framework-interview-questions/
The Java If Else Statement is an extension of the Java If Statement (which we explained in our earlier post). We have already seen that the If statement only executes its statements when the given condition is true; if the condition is false, it will not execute any statement. In real applications it is often useful to execute something when the condition fails, and for that Java introduced the If Else statement. Here, the Else branch will execute its statements when the condition fails. Let us see the syntax of the Java If Else statement.

Java If Else Statement Syntax

The syntax of the If Else Statement in the Java programming language is as follows:

if (Test condition) {
    // If the condition is TRUE, these statements will be executed
    True statements;
} else {
    // If the condition is FALSE, these statements will be executed
    False statements;
}

If the test condition present in the above structure is true, then the True statements will be executed. And, if the condition is false, the False statements will be executed. Let us see the if else flow chart for better understanding.

Flow Chart of a Java If Else Statement

[Flow chart: the test condition branches to STATEMENT 1 (true) or STATEMENT 2 (false), both followed by STATEMENT N.]

If the test condition is true, then STATEMENT 1 is executed, followed by STATEMENT N. If the condition is false, then STATEMENT 2 is executed, followed by STATEMENT N. Here, STATEMENT N is executed irrespective of the test result, because it is outside the Java if else condition block and has nothing to do with the condition result.

Java If Else Statement Example

This Java program allows the user to enter his/her age, and it will check whether he is eligible to vote or not using the if else statement. In this Java if else program we are going to place 4 different System.out.println statements. If the condition is true, we will print 2 statements, and if the condition is false, we will print the other 2 statements.
// Program for Java If Else Statement
package ConditionalStatements;

import java.util.Scanner;

public class IfElseStatement {
    private static Scanner sc;

    public static void main(String[] args) {
        int age;
        sc = new Scanner(System.in);
        System.out.println(" Please Enter your Age: ");
        age = sc.nextInt();

        if (age >= 18) {
            System.out.println("You are eligible to Vote.");                    // St1
            System.out.println("Please carry Your Voter ID to Polling booth");  // St2
        } else {
            System.out.println("You are Not eligible to Vote.");  // St3
            System.out.println("We are Sorry for that");          // St4
        }
        System.out.println("This Message is coming from Outside the IF ELSE STATEMENT");
    }
}

ANALYSIS: The user enters his/her age. If the age is greater than or equal to 18, then St1 and St2 are printed. If the age is less than 18, then St3 and St4 are printed as output. We have also placed one System.out.println statement outside the Java If Else block. This statement executes irrespective of the condition result, because it is outside the If Else block.

OUTPUT 1: Let us enter the age as 25. The condition is TRUE.

OUTPUT 2: Let us enter age = 17 to deliberately fail the condition. The condition is FALSE.
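The tutorial reads the age from the console, which makes the branch hard to exercise programmatically. As a sketch (the class and method names below are my own, not part of the tutorial), the same check can be factored into a method, and the same two-way choice can equivalently be written with the conditional (ternary) operator:

```java
// The tutorial's age check, factored into a method so the branch can be
// exercised without console input. Class and method names are my own.
class VoteCheck {
    static String checkAge(int age) {
        if (age >= 18) {
            return "You are eligible to Vote.";      // the St1 branch
        } else {
            return "You are Not eligible to Vote.";  // the St3 branch
        }
    }

    // The same two-way choice written with the conditional (ternary) operator.
    static String checkAgeTernary(int age) {
        return (age >= 18) ? "You are eligible to Vote."
                           : "You are Not eligible to Vote.";
    }
}
```

Both forms pick exactly one of the two branches; the ternary operator is just a compact expression form of a two-way if else, so it is only suitable when each branch produces a single value.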
https://www.tutorialgateway.org/java-if-else-statement/
How to Use Msbuild.exe with CruiseControl.NET

I just updated my primer/tutorial/walkthrough on CruiseControl.NET with some information about how to use msbuild.exe instead of devenv.exe in your minimal cc.net configuration. One good reason to go with msbuild is that you don't need to install VS.NET on a dedicated build server, and you can also target unit tests, performance tests, code analysis etc. that you may have added using the Team Edition versions of VS.NET. Please check it out and comment on it if you please.

System Restore...

I'm not sure what has happened with the WinXP Automatic Updates lately, but a while ago I noticed (for the first time) I had an option to Shut down and install updates at the same time. I tried it the second time I saw it and naturally it hung my whole system. The machine wasn't working properly after that of course, but I managed to get in and install the updates more or less manually. Today I got new problems with the updates. It started with a crash in the svchost.exe process and I tried to debug and see what it was. Windows popped up a webpage telling me there was a fix for this specific "Generic Host bla bla bla" problem. I installed it, and it told me it won't fully work until I restart my machine. Alright, I kept running for a while, got the svchost.exe crash again and decided to restart. After the restart, WinXP wanted to start installing some updates it apparently had in queue! At login? Weird. It didn't ask and I couldn't stop it. Of course the installation failed as it hung my whole box. I let it sit there for a long time before I decided to restart the machine. Anyway, what saved me today was to restart the machine, hit F8, and select to start with a "configuration that worked" or whatever the menu option says. Now I'm in, and I've configured Automatic Updates so that I decide myself when to download and install. All seems to work WAY better now. I wonder how far away SP3 is...
I think I'm going for Vista as soon as I get my hands on the RTM version. UPDATE: LOL, I was too fast... after this "manual" Windows Update and the recommended reboot, I got a new crash after logging into Windows, now it was the AutoUpdate.exe crashing on my... yeah... LET ME WORK FOR LOVE'S SAKE! [Podcasts] Podcast Aggregator - Doppler My buddy Jan-Erik has been listening to podcasts for a while now, and he pointed me to a program called Doppler which seems to be a decent podcast aggregator. I've only just started to use it, but it looks nice and seems to behave well. It got all the bells and whistles we're getting used to nowadays - async downloads, system tray, notifications and so on, and you can turn these features on and off as you please. Go check Doppler out at Now I need to look for a seamless way to get these things over to my Sony Ericsson mobile without too many clicks. [Podcasts] New Podcast - Without a Name Found a newly started podcast, one which haven't got a name yet even. So far called "developer podcast", but they would like to get some ideas for a name. "They" are Derek Hatchard and Mike Mullen and you find links to their own blogs from their podcast site. They have recorded 2 shows so far and the sound quality could be better, but they give some good tips about Vista which are useful and have some links to resources they talked about on their blog. You find these guys at FolderShare - The Best Utility of the Year? Oh yeah! Thanks to Scott Hanselman for podcasting and blogging about this tool, and thanks Microsoft for making this one a free Windows Live Service. Most of you know that Scott is testing most of the software tools that ever gets created and let me quote some of the things he's saying about FolderShare: Sure, there's other applications that have tried to solve problems like this before, but holy crap FolderShare nails it. 
There's so much you can do with it, like automatically mirror pictures, music and so on across your machines with a few clicks. They just have to be connected to the Internet. It's also secure. I now have my IE Favourites synced between my machines, and it was done with like 2-3 clicks. Even though it's free now, you got a few limitations to how many folders and files you can share right now. That will probably change in the future. If you look at you see that there is a limit to 10 folders and 10.000 files at the moment, but hey, that will take you far. I've also seen a max filesize of 2GB being mentioned on the FolderShare web site so I'm not sure what's the deal here. It's beautiful anyway, go download already! [Podcasts] .NET Podcasts I've soon listened to all shows that have been recorded by Scott Hanselman and Carl Franklin on Hanselminutes (), and I've started to dig around for some more podcasts, preferably similar to the Hanselminutes stuff. I've been listening to .NET Rocks () for some time as well, and I just ran upon the Polymorphic Podcast () which seems to be just great. Polymorphic is about most things .NET related and it's hosted by Craig Shoemaker. Then there's the ASP.NET Podcasts, hosted by Wallace B. (Wally) McClure and Paul Glavich, which I haven't listened to (or looked at for that matter because they also have some viewable material) at or better yet If you know of any other similar podcasts, please let me know and I'll add them to this page. I need to think of a way to automatically download the files and have them synced with my SonyE ricsson mobile phone... Or get a better "download deal" with my phone company and create a small program that I can run on the mobile phone to download these podcasts directly. It's too expensive for me to download these files over the mobile network as I have to pay for the bytes... 
Quickstart on CruiseControl.NET

I just wrote a page about how to set up CruiseControl.NET to compile and test a VS.NET web app solution in a few minutes. It's over at my Walkthroughs and Tutorials section.

How to Hook Up a VS.NET 2005 Solution With CruiseControl.NET in a Few Minutes

<cruisecontrol>
  <project name="JohansTestSystem">
    <tasks>
      <msbuild>
        <executable>C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\MSBuild.exe</executable>
        <workingDirectory>c:\CI\JohansTestSystem</workingDirectory>
        <projectFile>JohansTestSystem.sln</projectFile>
        <buildArgs>/noconsolelogger /p:Configuration=Debug /v:diag</buildArgs>
        <targets>Build</targets>
        <timeout>15</timeout>
        <logger>ThoughtWorks.CruiseControl.MsBuild.XmlLogger,ThoughtWorks.CruiseControl.MsBuild.dll</logger>
      </msbuild>
      <nunit path="C:\nunit\bin\nunit-console.exe">
        <assemblies>
          <assembly>C:\CI\JohansTestSystem\Johan.Test\bin\Debug\Johan.Test.dll</assembly>
        </assemblies>
      </nunit>
    </tasks>
    <publishers>
      <xmllogger />
    </publishers>
  </project>
</cruisecontrol>

<xslFile>xsl\unittests.xsl</xslFile>
<xslFile>xsl\MsTestSummary.xsl</xslFile>
<xslFile>xsl\compile-msbuild.xsl</xslFile>
<xslFile>xsl\fxcop-summary...

Installing .NET 3.0 and Orcas... (or perhaps call it) An Ordinary Saturday in a Programmers' Life

Note of warning - don't follow the steps I did. Better make sure you install things in the right order. Have a quick look down at the end where I have a "Lessons Learned" :p

The Logbook

It's Saturday morning and I'm about to install .NET 3.0 and Orcas and whatever else I might need to get a proper 3.0 platform up and running on a WinXP VPC. It's 11am and I won't be sitting in front of the box all day to do this because I've got loads of other stuff to do. The plan is to uninstall a few old things, then do something else, then download one part, install it and do something else, and keep doing this until things seem OK.
Goal is to have a decent WinXP + .NET 3.0 + Orcas install to play around with by the end of the day :) 11:00am - Need a new VPC machine to install things on, so I start by taking a copy of my (almost) vanilla WinXP SP2 VPC file I always keep handy. 11:30am - Phew! That took a while to copy... I'm in there now, uninstalling a few old things I no longer need. I noticed I copied the wrong VPC file so I got a few old WinFX things I need to remove. 11:55am - All done, downloading the SDK setup file... BTW, Nicholas Allen got a page where he lists things you need to install: 12:10pm After reading up on some things, I started the install of "Microsoft® Windows® Software Development Kit for Windows Vista™ and .NET Framework 3.0 Runtime Components". It complains that I still got a few old things I should remove (FxCop etc.) and retry install. That's a good one Microsoft. Thanks. Uninstalling and retrying... this install is downloading during installation, so it might take a while. I'm off to do something else while that is running... BTW. The full install requires some 2.3 GB of disk space. 12:18pm BANG! Installation bombed. It crashed during the FxCop installation it seems. I wanted to read about the error, but when the "report to ms" dialog was done, it closed the installation dialogs... I guess I should restart XP and restart the installation again... gah! 12:25pm It says I must first install the old version... here goes. What old version? Taking a look at the installed programs and uninstalling whatever seems to be related to any SDKs... 12:40pm Ngggh... still refusing to install. Uninstalling even more and trying again. Lessons learned - start out with a very, very fresh XP + VS.NET 2005 box. Kids, don't try this on your daddy's machine at home. I think I'm off to cut my hair and have something to eat. 15:30pm Right, I'm back. 
I've unistalled everything I think is remotely related to the old WinFX stuff, and also let WinXP finish installing a few updates it had on queue... took a while it did. Starting a new install of .NET 3.0 SDK again. Next, next, next, next... BANG! My whole VPC now crashed as the installer was about to start doing its real work. Jeeeez... what is this? 15:35pm Starting VPC again and we'll see if the install works better this time. Wow... it's actually looking good now. 16:20pm Still installing... 16:25pm Done! Now on to the next install, Visual Studio 2005 extensions for .NET Framework 3.0 (WCF & WPF), November 2006 CTP, starting now... 16:55pm I'm back. Had something to eat. Right, installing the Orcas stuff didn't work, because it needs the .NET 3.0 runtime stuff on the machine... Weird, because one would think it got installed with the SDK... whatever, installing the runtime now. Seems that I didn't read right stuff :D 17:55pm This is your host this evening, back from driving my daughter to a street dance show she's in. The runtime is installed and I'm back in track again - time to install the WCF/WPF extensions... why does the task "merging of the help collections" give me the chills? Because "merging of the help collections might take some time"... 18:15pm Everything looks fine, now installing the Workflow extensions... Why does all these installation look different? Seems like every team at Microsoft is using their own installer and tools for making these packages? One would think that there were some kind of guidelines that they should try and use the same templates. Anyway, as long as it works I'm happy. 19:15pm Took a break. Workflow Foundation extensions seems to be installed OK, time to fire up VS.NET and see if it works :p (a few minutes later) It works!! Yay! I need a drink... Lessons learned: a) Make sure you got a few hours of spare time... ;) b) Try to have an as clean as possible machine from start - WinXP SP2 and VS.NET 2005, that's all. 
Use a virtual machine if possible; don't mess up the box you're working on every day.

c) Remove anything even remotely related to older .NET 3.0 / WinFX installations.

d) Install things in this order:

   1. Microsoft .NET Framework 3.0 Redistributable Package
   2. Microsoft® Windows® Software Development Kit for Windows Vista™ and .NET Framework 3.0 Runtime Components
   3. Visual Studio 2005 extensions for .NET Framework 3.0 (WCF & WPF), November 2006 CTP
   4. Visual Studio 2005 extensions for .NET Framework 3.0 (Windows Workflow Foundation)

e) Voilà, you're (hopefully) done, start making your first WPF app or something already!

[.NET 2.0] On Battery or Not

I'm thinking of writing a small utility to manage which of my apps gets started at startup of Windows, depending on whether my laptop is running on batteries or not. Normally, when I boot it up on batteries, I don't want to start things such as my blog reader and a few other things. The thing is, it's very simple to detect from .NET whether you are running on batteries or not. The SystemInformation type gives you all that and more:

    using System;
    using System.Windows.Forms;

    namespace BatteryConsole
    {
        class Program
        {
            static void Main(string[] args)
            {
                PowerStatus ps = SystemInformation.PowerStatus;
                if (ps.PowerLineStatus == PowerLineStatus.Online)
                    Console.Write("Your power cable is connected, and ");
                else
                    Console.Write("You are running on batteries, and ");
                Console.WriteLine("your battery level is at {0}",
                    (ps.BatteryLifePercent * 100) + "%");
            }
        }
    }

I'll get back if I ever get that small program written.

[Podcasts] Hanselminutes is Cool

I've started to listen to podcasts on my mobile phone while commuting to where I work at the moment, and it's pretty cool. I've listened to quite a few shows by Scott Hanselman and Carl Franklin available at and I am surprised (well not really) by the quality. It's relaxed, sit-back kind of shows and Carl asks very good questions. You also get some very good tips about tools and blogs.
The shows I've enjoyed most so far, I think, are the one on Test Driven and the follow-up on Mock Objects. The one on Infocards was pretty interesting as well, and... ;) Have to hunt for more podcasts... I spend lots of time on trains these days.

[.NET 2.0][Unit Testing] Good Tutorial on System.Transactions (and Unit Testing)

In my random blog reading and Googling around System.Transactions and Unit Testing, I ran into a series of short but great articles by Stuart Celarier which actually cover both topics. Stuart teaches the reader about System.Transactions by using unit tests in a very educational (and entertaining) way. He uses NUnit in his articles, but nothing will stop you from using the test mechanism built into VS.NET if you have the Team Edition version. Just use the [TestMethod] attribute instead of [Test] and all the sample code in the articles will compile and test well (eventually :)

Stuart also makes use of anonymous methods for testing events, which I recently blogged about. System.Transactions is good stuff; I wonder if it could be useful to manage "compensating transactions" when dealing with multiple web service calls within a transaction?

[.NET 2.0][Unit Testing] Using Anonymous Methods when Testing Events

[Team System] Getting Rid of Default Document Folders in the Quick Launch Bar

I just recently blogged about how to display document folders in the Quick Launch area of the TFS Sharepoint portal. Here's a follow-up to that one on how to remove the default folders which are created when you set up a new Team System project. It's all about modifying the Sharepoint template. You should only try this if you are creating a custom Team System Process Template.

1) Create a new test project in Team System; don't bother to add any source control to it.

2) In Team Explorer, remove the Document folders and files you're not interested in. This will be reflected on the project portal.
As is explained in detail at the MSDN site, you must also edit the Process Template XML files to reflect these changes to the template content. While at it, you may also delete any reports you don't want (or add some, for that matter), but again - modify the template XML files to reflect this. I'm pretty sure it will still work well, but you never know; things will probably be messed up if you don't. If you change the portal content, change the XML files (repeat as a mantra).

3) On the project portal, go to "Site Settings", "Go to Site Administration" and click "Save site as template". This will create an .stp file on the portal that you can download and put anywhere on your Team Foundation Server for now. Write down the name of the Sharepoint template because you need it in the following steps.

4) The rest - how to add the template to the server and so on - is well described on the TFS site on MSDN. In short: go to the server, add the template to the TFS by using the stsadm.exe tool and pointing at your .stp file. Then get back to your lovely Process Template files and refer to it (by name) in the <site> section of the WssTasks.xml file.

5) Now, upload the new Process Template to TFS, create a new project and see if it looks alright.

As I wrote earlier, to change the overall look and feel of the Sharepoint portal, you still need to use Frontpage, but that is something I will go into later.

[Team System] Displaying Document Folders in the Quick Launch Bar

Some people who are editing their Team System process templates and are new to Sharepoint wonder how to publish their own document folders in the menu of the portal home page. There are at least two ways - either use Frontpage (which I haven't tested yet), or in Sharepoint go to "Documents and Lists", click the folder in question, click "Modify settings and columns" in the left hand menu, then "Change general settings" and tell it to "Display this document library on the Quick Launch bar".
To modify the Quick Launch bar when it comes to reports and such, I think Frontpage is the tool. I'll try that as soon as I get Frontpage installed... not even sure which version of Frontpage to use, actually. The last thing to do is to modify the Sharepoint portal template in a way that suits the new process template, then create a new template out of it, make sure it's installed on the Team System server and then point at it from the WssTasks.xml file:

    <site template="Your_Sharepoint_template" language="1033" />

Who said creating your own process template for TFS was easy? :) I'll get back to this process in a later post.

[Team System] Great Tool for Modifying Process Template

I found a great tool for those of you who want to modify the process template - Process Template Editor from imagiNET, and it's free. It won't see to all your needs, but it's a great help and the best tool I've seen so far to get you started. You still need to do some manual XML editing if your goal is to create a completely new template, and I still need to find a good way to modify the document folders and the content within there. So far I've been editing the WssTasks.xml file by hand, and it *beeps*... :)
 \(1,2,3)

It's a very handy feature. I use it all the time (though I use it with variables, not constants). How would you do the same thing in Python? E.g. in fetching from databases:

 my $sth = $dbh->prepare("select this, that from table");
 $sth->execute();
 $sth->bind_columns(\my ($this, $that));
 while ($sth->fetch) {
     # Data is fetched to variables $this and $that
     print "This: $this That: $that\n";
 }

(clue: it's not the reference to a three-item list that you were probably expecting it to be)

Well, actually it is exactly a reference to a three-item list (update: no it's not, see below). ;-) The distinction between Perl lists and Perl arrays is a common source of confusion, though. For Pythoneers: in Perl a list is a free set of scalars. An array is a set of scalars bound to a variable or to an anonymous array. In scalar context (as above) an operation on a list is resolved to an operation on the last element of the list. So the expression above is the same as "\3" - a reference to a scalar of value 3. To yield a reference to an array one would write the expression above as:

 [1,2,3]

Anyway, I'm not a good teacher so you might better refer to "man perldata". -- MartinSchwartz

Perl's colorful "context sensitivity" works perhaps 99% of the time; and that's the hacker philosophy (I want a 90% solution right now.) But I'm trying to be an engineer, so I try to set Perl aside for anything over 100 LOC. I say "try" because it is addictive. Show me perl -p -ibak -e s/foo/bar/ in Java, please. -- PaulTaney

Apply the same discipline to Perl that you would to C++ or Java, and you might be surprised at how well it scales. I worked inside of a system that included an 80 KLOC Perl middle tier, and it was a real pleasure, particularly when compared to the raw pain of working in a C++ system of similar size and complexity.
-- DaveSmith

 DB<1> x \(1,2,3)
 0  SCALAR(0x1012e9c8)  -> 1
 1  SCALAR(0x1012e9d4)  -> 2
 2  SCALAR(0x1012e9e0)  -> 3
 DB<2> x \3
 0  SCALAR(0x1012e9bc)  -> 3

Ignoring the obvious (sic) "in a scalar context" RedHerring - nothing in the original was there to force it into a scalar context - this is actually a list of three references to scalars. It's even so documented in perlref(1).

 \(1,2,3)

(clue: it's not the reference to a three-item list that you were probably expecting it to be)

Of course it isn't. An anonymous list reference is created like this: \token, or [1, 2, 3]. Most of your complaints about Perl are because you're taking Perl in reference to other languages, but Perl isn't other languages, it's Perl. Because something is done differently doesn't mean it's done wrong. As for messy code, I'll take efficiency over readability any day. Besides, cryptic code is nothing a few comments won't fix.

 import pickle
 help(pickle)

Works even if the module has no documentation (but of course pickle does). You can also do:

 import pickle
 print pickle.__doc__
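As for the earlier, unanswered question of how to do the Perl bind_columns fetch loop in Python: there is no bind_columns in Python's DB-API, but tuple unpacking of each fetched row plays the same role. Here is a sketch using the standard sqlite3 module; the table name, column names and data are made up purely for illustration:

```python
import sqlite3

# A throwaway in-memory table, just so there is something to fetch.
conn = sqlite3.connect(":memory:")
conn.execute("create table t (this text, that text)")
conn.execute("insert into t values ('foo', 'bar')")

# Each fetched row is a tuple; unpacking it binds the columns to names.
for this, that in conn.execute("select this, that from t"):
    print("This: %s That: %s" % (this, that))
```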
Input - Process - Output

At the heart of all computer programming is a simple mantra:

- Input some raw data from the human world
- Have the computer manipulate that data according to a set of instructions called a program. We call this manipulation processing.
- Output the results to the human world.

Note: Most folks call processed data information. Information has a higher value to the user of the program. However, one person's information may be another person's data.

Designing a Program

We need to start our learning process somewhere, so let's start off by looking at the process we will use to solve any problem using a computer.

Note: Clearly, there will be some problems where using the computer is just not going to happen. That is just fine. What you will learn in this course is not really about computer programming; it is about solving problems that happen to involve computers. The techniques you study apply to all kinds of problems, and your concerns are almost the same regardless of the kind of tools you ultimately use to solve your problem.

We solve problems using computers by designing programs that make the computer do the required work. Programs are not just hacked together by a bunch of people pounding on a keyboard. At least, programs that have real value are not built that way. Most professionals will tell you that the actual keyboard work is a small part of what they do. Well, that is, the keyboard work that involves entering program code is a small part of their keyboard work. I have seen studies that claim that over 80 percent of the work you do as a programmer is all in your head. You are thinking about the problem you want to solve, studying how to solve it, how to test it, how to organize it, and how to document it. You are also very concerned with making sure the client who wants your program is getting exactly what they need! All of that is hard work, and some will say it is no fun.
But, I discovered that it can be fun, and can lead to a very rewarding career.

A Thinking Tool

We need a tool to help us learn how to think first; then we will translate that thinking into real code later. The book uses a tool called a flow chart, which dates back to the early days of programming. Flow charts are simple to draw, but they are a pain to build in real life without the aid of a program to help lay things out. We will not worry about drawing flow charts for our work, but it will help to look at them and use them to understand how your program will flow. Basically, that means how the computer will work through the lines of code you will ultimately write.

To get started, I am going to use Scratch as a simple aid to help you form your early programs. Scratch is a sort of flow charting tool, although it does not use the same symbols as your text. I think you will find it easy to use and fun to boot. Shoot, kids have a ball playing with Scratch!

Running your Design

What makes Scratch a powerful tool is its ability to let you drag programming constructs onto your design screen, then run the program easily to see how it works. We will not do much number crunching with Scratch, but we will do some animations and a bit of drawing. A tool like Scratch is not really needed to think through a solution to a problem. But it is fun to use, and some people are more "visual" than others. To them, a picture showing how we "flow" through the steps in our solution helps them think about whether it will really work or not.

Warning: Do not let the "fun" of Scratch distract you from what we are trying to learn. Each control you drag onto the screen has a purpose in life. It is very important that you know exactly what that control does, and what will happen when the program runs. BTW! It is not OK to pick just anything from the Scratch tools for this course. Only use the controls you are told to use. That will be important in later labs.
Actual program design

In this part of our learning, we can use nothing more complicated than a piece of paper and a pencil. (OK, fine! We probably will use a computer to write our thoughts down. That will make changing our minds and rearranging our thoughts easier and less messy than if we had to erase and rewrite things!) Most of the actual program design process involves thinking about your problem. So, warm up your brain, and we will start learning how to use it effectively.

We will start off this thinking process by thinking about it! (This sounds hard! Don't worry, it is not, and you do it all the time without thinking about it - HUH?) What I mean is that this is what you are doing when you figure things out as a normal human! There is no magic here, just common sense! You know when something makes sense; you know if it sounds like it will work. We will be doing the same thing here. We will write down our instructions on what to do to solve a problem, then we will study those thoughts and convince ourselves that this will really work! Perhaps while we are studying our current thoughts, it will occur to us that we could explain things more clearly. This realization is vital to the process we are going to use.

This computer thing is really dumb. It will not fill in any blanks we leave out. While humans might be able to do that, the computer will never do that. So as you think your problem through, remember that you need to explain it to this really dumb critter called a computer. In doing so, there is little chance that any intelligent human will not understand what you mean!

Problem statement

We will start off with some kind of problem statement. This statement will have been written by some poor human who needs help. The statement itself might not be so clear, but we have to start somewhere, and this is it! As we try to solve our given problem, we might need to go back to whoever gave us this problem, and fill in a few details later.
That is an essential part of making sure we solve the right problem. You are never to assume you know what is meant in the problem statement. You need to let the owner of the problem tell you; you might assume wrongly! The owner lives in the world where the problem came from. As a programmer, you might not live in that same world, and might not understand the terminology.

Breaking the Statement Apart

Our first job is to study the statement and see if we can figure out what data we need to collect to get the solution started. All programs need to process something and produce something. This step is focused on the something we want to process. We call that something the program inputs. Next, we need to figure out what we are supposed to produce. The whole point of this problem solving stuff is to figure out how to produce the required program outputs. In some cases there may be no tangible output. Instead the computer may just run some equipment a certain way. You can argue that the signals used to control that equipment are outputs, and you are right. We will just not be printing those signals out so us humans can read them!

Find a Solution

The last part is the toughest: figuring out what to do to convert the input stuff into the output stuff. We call this step "Processing". The entire process can be boiled down into that simple set of operations we mentioned at the beginning of this discussion:

    INPUT --> PROCESS --> OUTPUT (IPO)

Input a bunch of input data, process that data to produce something new, then output the new stuff and you are done! Simple, right? Well, doing this well takes practice, and we will practice that a lot in this course!

In your homework problem for our first week, you were to think about a process that will calculate the square root of some number.
While this seems silly to those of us who know how to do that - just pull out the calculator and punch a few keys - this kind of problem consumed the mathematicians of the day back when there were no computers, or even calculators. All they could do was work up a totally manual process that someone could follow to come up with the required answer. As you try to solve the problem yourself, you are going to get frustrated, because with no guidance, you may not even see a way to get started! Hopefully, after reading through all of this lecture, you will be able to get started, and perhaps even come up with the solution.

In your square-root problem, the input is easy to see: it is just a simple number. The output is also easy: another number. The conversion (processing) is not so easy. In other problems, it may not be so easy to spot the input data, but we have to do that to get started. We also have to explicitly figure out what we are supposed to produce in order to claim that we really have solved the problem. That process step in the middle is where all the real work in designing the solution can be found.

An Example Solution

By the way, here is a solution to that homework for you to think through. See if you agree it can do the job. If it has holes that you can spot, good for you! There are a few issues with the solution presented.

Thinking it through

What we need to do is to learn how to use a new toolbox to solve problems. We will actually only have a few basic tools in this toolbox. Our first job is to understand how each one of these new tools works, and learn how to figure out when to use each one. The basic ideas are simple, and it takes practice attacking small problems before you will be able to tackle hard problems. The basic tools are these: the sequence, the decision, and the loop!

Note: No fair inventing a new "left-handed-isostatic-framismeter" to put in your toolbox.
(That was something my friends and I thought up in a hobby store I worked in as a teenager. The thing was designed to confuse people. It was made up from junk parts we found in the shop, bolted together into a strange mess! All of us in the shop acted like it was the coolest tool around, just to watch the faces of the people around us who just knew we were nuts!) We will only use the simple tools we are given, to make sure we do NOT confuse people, especially ourselves!

We will only use these three simple tools. Unlike real toolkits with things like hammers and saws, these tools are things we have an infinite supply of. You pick one out of the toolbox, and stick it in a list of steps in our solution. What we are trying to learn is how to decide what tool to grab and stick in our solution list.

The Sequence tool

When we study a problem, we often see a way to break the big problem down into a set of smaller problems. If we line up that set, we see that a sequence of steps that solve each smaller problem will do the job. If these steps depend on each other, we must do them in a particular order. If they are not interrelated, we might do all the steps at the same time, given the right hardware. This kind of thinking leads to parallel processing programs. Much to my dismay, we do not study parallel programming here, in spite of the fact that the modern computer really can do more than one thing at a time, and we should be learning how to think that way!

In breaking problems down, we run into our first opportunity to name something. We do this all the time in programming. We name containers that will hold our data (something we call variables), and we also name blocks of code we will figure out in detail later. That "INPUT" block above is such a block. Eventually we need to figure out exactly what we need to input and where we will put that data while we process it later. Names are important, since they convey meaning to the reader.
A name like X24 is not very useful, but one like InputData helps us understand what is going on. We will go over rules for naming things later. For now, any name will do, but try to make your thinking clear in what you write, and keep names reasonably short. (Usually, we never put spaces in names we come up with. Instead, use an underscore between words, or just mash them together as shown in the example.) Using the sequence tool is usually the first major step in decomposing a big problem into a set of smaller, hopefully easier to figure out, sub-problems. Here is a simple diagram showing how things "flow":

The Decision tool

You will determine that you have reached a point in your code where what you do next depends on what has happened just before this point. You will have done some calculations, and you must ask a question about some aspect of your code and the data it manipulated to figure out what to do next. Depending on the answer to your question, you will process one set of instructions, or another set. This is called an IF-THEN-ELSE tool.

Note: We do allow the "then" part of this tool to be empty. That means you only do something if the answer to your question is "true". Otherwise, you skip that something and continue on to the next tool. This is the "IF-THEN" rather than the "IF-THEN-ELSE" tool. (There is no "else only" form. If you want something like that, reform your question, using the opposite of what you asked. For example, "if the number is zero" becomes "if the number is not zero"!)

Here is a simple diagram showing how a "decision" works:

The loop tool

This tool has a pretty obvious application. You have something to do over and over. We call this kind of thing a loop. It will usually be obvious that you need a loop, so you stick one in your design. You will need to also figure out how to stop the loop! However, in some problems, you might not see that you need a loop. Is one needed to figure out the square root of a number?
On first thinking about it, you probably do not see a loop in there anywhere. What those old mathematicians figured out, though, was that they could come up with a scheme that guesses at an answer, figures out how far off that answer is, then comes up with a way to adjust the guess so they can try it again, and so on. Guess what - there is a loop sitting there! There is also an interesting way to stop it (when you get close enough to the answer you want that you are satisfied!)

Finally, here is a simple diagram showing how the basic loop "flows":

That is it (for now). We will use just these three tools and try to carve our problem up into small pieces where one of these tools can be applied as we work towards a solution! We will take steps in our solution by adding one tool at a time to a list of tools we will create. The solution will involve using each tool, the way it is designed to be used, then using the next one and so on. When we reach the end of our list, the problem will have been solved.

Baby steps

I am a strong believer in taking small steps as you work through a design. You carve off a chunk of the problem that you think you can work out, then go back and carve off another one. One thing you need to do as you work through this process is to stop and "walk through" your design, asking yourself if this will really work. As you get better at this, it will become easier. Taking small steps is better than fighting with a big problem for hours. Take a small step, then look for another small step. Eventually you will end up with a full solution.

Testing

As you work, you should ask yourself how to tell if things are going well, or how to tell if things are going wrong. We really want to write two things: the program itself, and a set of tests that give us confidence in this new tool we are creating. None of this comes easily at first. Some folks are better than others at finding chunks to carve off and work.
The more you practice, the better you will get at all this problem solving stuff. Tools like Scratch give you a visual tool to create a diagram that you can sit back and study. Only when you feel that this will really work should you convert your diagram to code and try it out. (With Scratch we can try things out whenever we like!) Hopefully, in your thinking about testing things, you came up with some way to figure out if things are working properly. Testing will tell you if this is really doing what you want. Are you moving forward, towards a solution, or are you on the wrong track?

The classic old saying

One rule programmers learn early is this:

    Plan on throwing one solution away, you will anyway!

What this means is that your first try at solving a problem (or your second, or third) may not be that great. Maybe you should start over, using what you just learned to build a better solution the next time. When I wrote the program I used for my Master's Degree at Texas State, I used the fourth version that I wrote. I rewrote it three times before I was satisfied with it, and it ran with few problems for over four months with about 150 students pounding on it 24 hours a day!

Warning: Do not fall in love with your work! That may cause you to hold on to it beyond where you should have started over and done it better.

Pseudo Code

Many programmers use something called pseudo-code rather than drawing diagrams. These are easy to create using a simple text editor. Basically, we use simple sentences that express what we want to do, without worrying about a bunch of rules telling us EXACTLY how to write things. We sprinkle some special words around so we can spot our structures (constructs).
Here is an example: START set "sum" to zero set total to zero LOOP read a number from the user IF number is zero stop the loop END IF Add that number to "sum" add one to "total" END LOOP set "average" to the value you get when you divide "sum" by "total" STOP Note There are no rules telling you how to do this. The idea is to get your thoughts written down so you can study them and convince ourself that this “solution” will get you the answer you want! Note Those special, capitalized, words used in the above example mark our “tools”. We call then programming “structures” (sequence-decision-loop). I like to capitalize those words so they stand out. I also “style” my code by indenting things. That is very important. I can see the nesting of tools inside other tools with proper indentation. (Inside that loop is a decision statement!) This is far easier than drawing those “flow chart” diagrams. A problem to solve¶ Let’s try to solve a simple problem, not by programming it, but by thinking it through using the “pseudo code” stuff. The problem is fairly simple - balancing your checkbook. But this bank is mean. For every check you write, they are going to charge you a 10% fee (Boy, I want out of this bank as soon as I can move my money!). We will assume we start off our balancing session with some bank balance and a stack of checks and deposits we made during the month. We probably got all of this information in an envelop in the mail (I said this was an old-fashioned bank, they probably have not even heard of the Internet yet!). Our job is to come up with a process that will tell us exactly how to proceed with the calculations we need to perform to balance our checkbook. Identifying the data we will be working with¶ Before we begin, it would help to identify the data items we will be dealing with. We want to come up with names for these items and define the types of items they will be. 
Here is a start on this list:

Note: You should have completed your data type homework by now. We will explore the kinds of data you can use in a computer in our next lecture.

- bankBalance - floating point (dollars and cents need to be stored as a floating point number)
- checkAmount - floating point (each check will be a floating point number as well)
- depositAmount - floating point (each deposit will be a floating point number as well)
- checkFee - floating point (the bank's fee must be calculated)
- numChecks - integer (it might be nice to count the number of checks we process)
- numDeposits - integer (same for deposits)

All of these pieces of information are inputs. Or are they? As we process an item, we pull some input information off of that item: the amount of the item, whether it is a check or deposit, and what the initial bank balance is. Of those items, some are actually outputs. While we could count the checks manually, why would we do that? The computer can count them for us as we enter items. We can even count checks and deposits at the same time, since we will know what kind of item we are working with. So numChecks and even checkFee are output items! Can you think of anything else that might be nice to calculate and display? Perhaps some summary data would be nice. How about these additional output items:

- totalChecks - floating point (sum of all checks written)
- totalDeposits - floating point (sum of all deposits made)
- finalBalance - floating point (end of month balance)

Note: We have come up with a set of names for data items, some input and some output. We will call containers that can hold these numbers "variables". That means that as the program runs, the value stored in the container can "vary". As an example, we might start off a "counter" variable with a value of zero, then add one to it as we process an item. As we loop over all the items, we will end up with a total count of the items we processed!
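To make the "container" idea concrete, here is a tiny sketch in Python (my choice of language for illustration, not the course's). The names come from the data-item list above, and the dollar amounts are made up:

```python
# Variables are just named containers whose values can "vary" as we work.
bankBalance = 500.00   # floating point: the starting balance (made-up value)
numChecks = 0          # integer counter, starts at zero

# Processing one $25.00 check changes ("varies") both containers:
checkAmount = 25.00
bankBalance = bankBalance - checkAmount
numChecks = numChecks + 1

print(numChecks)       # we have counted one check so far
print(bankBalance)
```

Run this in a Python prompt and watch the two containers change; that is all a variable is.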
Getting started

When we stare at a problem like this, we need to see what drives the solution. Is it a sequence? Is it a loop? Hopefully, you see a loop in this problem, since we have to process a number of checks and deposits. But before we get to that point, we need to get started with some basic information from the bank:

- What is the starting balance we have at the beginning of the month?

We need to input that number from the bank statement we got in the mail. From our list of data items, we will input that and save it in the bankBalance variable. We probably do not want to code that in as an initial value, since we will want to run this program many times, and we do not want to recompile it every time we run it, so we will ask the user to input the value.

Once we have the initial value for the bankBalance, we can start up a loop. How many times we loop is not something we necessarily know, although we could count the checks to find out. It might be better to write the program so that we can just enter a check amount, and set up the program so that if we enter a bogus amount (like a negative amount), the program will halt. However, we want to be able to handle deposits as well, so how will we distinguish between deposits and checks? A positive number might mean a deposit, a negative number a check, and maybe a zero means stop. Or, we ask the user for some kind of code indicating what they want to enter. There are many ways to deal with this situation. You are in charge of determining which one you like the best. Whichever you choose, we do need a loop, and a way to stop it.

Processing inside the loop

We have two things to deal with inside the loop: checks and deposits. We will need to get the amount from the user. We have already set up named variables to use for this purpose. Let's take each case in turn.
We need to calculate the fee for the check based on the amount of the check entered, and subtract that amount from the bank balance in addition to the check amount itself (silly bank fees). For deposits, we add the amount to the bank balance. If we want to track the number of deposits and checks, we need to make sure we have initialized the counter variables outside of the loop, and add one each time we encounter a new check or new deposit.

Phew, that is a lot of work to do! Actually it is, but all you need to do is think through what has to happen each time you spin through the loop, and keep your mind on what you need to do for just that one transaction!

Wrapping up at the end of the loop

When you reach the end of the loop, you might want to output a summary of what happened. If you kept track of the total number of checks and deposits, you could print those numbers out. If you generated a new bank balance, you should print that out for sure. Do you want to show the total of the checks written, or the total of the deposits made? If so, did you keep track of that data? You might discover that you could have tracked that information with the addition of a few more variables, and simply add them to your code. The result would be a bit more useful program for your user.

Generating Pseudo-Code

Pseudo-code means just that. It is not real code, just something that looks like code. We have not looked at any code yet, so we cannot use that analogy to figure out what to write. Instead we do this: write short sentence-like statements that explain what you want each step in your solution to do. Remember, we only have three tools we are allowed to use, so the form of these sentences can be set down before we start. The most common way to do this is to use words from the name of the tool, and indentation to show each tool. Hmmm, let's see an example.

Sequence

This one is easy. One statement followed by another, top to bottom, just like we read real text.
We should see the sequence in this list of statements. Remember, you are explaining how to solve a problem using a sequence of steps. There is no story here, no plot, no characters. Just a simple statement describing the work you expect someone to do. That statement might be one of these:

- get a check amount from the check, place it in checkAmount
- set the value of numChecks to zero

See, short and as clear as possible.

How the Sequence Works

Why am I even asking this question? The answer is obvious to any rational human being. If we are handed a "to do" list, we might think we can do any item in the list in whatever order we like. True! That is not what a sequence is. Instead, we are being handed a list of instructions for assembling a Christmas bicycle that came in a box. If that list looked like this:

    Open the box and remove all the parts
    Find the handlebars and attach them to the front wheel post
    Find the seat and attach it where you will sit
    Attach the two wheels where they are supposed to go

then deciding to start off by doing that last step makes no sense at all. Instead we follow the instructions in order, from top to bottom. That is how the sequence works, and we have to think about what has to be the first step, then the second, and so forth. Perhaps it will hurt nothing if we swap two statements, but the tool still works by causing us to do one thing after another. We need to understand that, and so does anyone reading your solution and trying to see if it will really work.

If-Then-Else

There will be situations where you need to ask a question about what is going on at the moment in your calculations. You need to examine the value of some number you have either input or calculated. Perhaps we want to know if that number is zero, or if it is greater than some value. We will ask a question that has only a true/false, or yes/no, answer. Computers are not any good at dealing with "fuzzy" questions, like "how blue is the sky".
They can deal with "is it cloudy" (as long as you only answer "yes" or "no", not something like "kinda"). Here is a basic if-then-else in pseudo-code:

    IF the transactionAmount is positive THEN
        set depositAmount to transactionAmount
    ELSE
        set checkAmount to transactionAmount
    END

Note: At this point we have our first example of something called program style. Style is all about how you write your solution. In this example, we are using indentation to show the sequence of statements we will do in each part of the if-then-else tool. The indentation is important, and companies are very precise about how you do it. The most common way is to use four spaces to indent, and every statement INSIDE this if-then-else tool is indented by exactly that amount. The words IF, ELSE, and END are required to be aligned vertically. Why we do this has to do with presenting our code in a clean way, so everyone knows what to expect and can understand what you mean when they read your code.

This if-then-else thing hardly looks like a sentence, but it does look like program code in many programming languages. We have a minor problem here! We never mentioned a container named transactionAmount, and we are using it here. Perhaps we need to keep a list of variables we are using to make sure we have the right number for our solution!

Notice something else here as well. This tool has another tool buried inside it. Tools fit into other tools? Sure! That actually happened in the sequence, but it was not so obvious. It would have been if we had shown the sequence this way:

    SEQUENCE
        Statement 1
        Statement 2
    END SEQUENCE

While we could have done this, doing so seems kind of silly. We do not really need the markers at the top and bottom, so we pitch those and just show the statements in a list, one after the other. The "flow" of our solution is obvious. We are used to seeing lists of things we are supposed to do, one after the other, in our native human languages.
On the other hand, the flow through the if-then-else is not so obvious, unless you think about it. We only "flow" through one of the two sets of statements we allow in the if-then-else. Understanding that is an important part of understanding how this tool works. It is important to realize that a sequence of only one statement is still a sequence. It is a short one, but that does not really matter. It is still one of our tools! Here is an example if-then-else showing more than one statement inside our if-then-else tool:

    IF transactionAmount is greater than zero THEN
        set depositAmount to transactionAmount
        add one to numDeposits
    ELSE
        ...
    END

I used a slightly different form of question here. Are they the same thing? Is being positive (for a number) the same thing as saying the value of transactionAmount is "greater than zero"? Hopefully everyone agrees that is the same question. Realizing that will be important when we study doing arithmetic later in the course.

The actual design process

This is really how we design our solution with only three tools. Any place we allow a statement to occur, we can use any of the three statement forms. One item in a sequence can be a decision statement, another can be a single statement (a really short sequence), and another a loop statement, which we consider next.

Loop statement

The last of our three tools is the loop. There are actually two forms of loop in programming, but I am only going to show the most important one here. We will study loops in more detail later. Here is our loop:

    WHILE somethingIsTrue LOOP
        do something
        do something else
    END WHILE

Notice that, just like with the if-then-else tool, we have a special marker at the beginning of the loop and another at the end. We indent anything in between these markers. Notice, also, that there is a question in this tool. The answer to this question is a simple "yes" or "no". If the answer is "yes", we process the sequence (maybe only one) of statements indented after the LOOP word.
When we reach the END WHILE marker, we go back up to the question and ask it again. When the answer is finally "no", we skip the indented statements entirely and continue with whatever follows the END WHILE line. (Compare this with the if-then-else: there, a "no" answer skips the statements after the THEN and works through the statements indented after the ELSE, until we run into the END line.)

We talked our way through the program without showing the code. Let's do that now:

    input bankBalance
    numChecks = 0
    numDeposits = 0
    totalChecks = 0
    totalDeposits = 0
    moreTransactions = true
    WHILE we have moreTransactions LOOP
        get transactionAmount
        IF transactionAmount is negative THEN
            process a check
            checkAmount = -transactionAmount
            checkFee = checkAmount * 10%
            add one to numChecks
            totalChecks = totalChecks + checkAmount
            bankBalance = bankBalance - checkAmount - checkFee
        END
        IF transactionAmount is positive THEN
            process a deposit
            depositAmount = transactionAmount
            add one to numDeposits
            totalDeposits = totalDeposits + depositAmount
            bankBalance = bankBalance + depositAmount
        END
    END WHILE
    print bankBalance
    print numChecks
    print numDeposits
    print totalChecks
    print totalDeposits

Now, notice that we had to calculate the fee inside the loop each time we got a new check value. It is a common mistake to think that we can set up a formula for such a calculation outside of the loop and it will apply to calculations inside the loop automatically.

It helps to build small programs, and it helps to put them together in stages, running them to see how they work as you write them. Seeing the output as soon as you can makes sure you are on track. Your program will be incomplete, but you know that. As you add more code, the output will start to fall into place and look better. You are gaining confidence in what you see and in the correctness of your work as you do this.
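A sketch of the same program in a real language can help make the pseudo-code concrete. Here it is in Python, with two assumptions not fixed by the text: the bank's fee is taken to be 10% of each check, and a transaction amount of zero stops the loop. The sample transactions are invented for the demonstration:

```python
def process_statement(bank_balance, transactions):
    """Run the check/deposit loop sketched in the pseudo-code above."""
    num_checks = num_deposits = 0
    total_checks = total_deposits = 0.0
    for amount in transactions:
        if amount == 0:            # zero means "stop"
            break
        if amount < 0:             # negative means a check
            check_amount = -amount
            check_fee = check_amount * 0.10   # assumed 10% bank fee
            num_checks += 1
            total_checks += check_amount
            bank_balance -= check_amount + check_fee
        else:                      # positive means a deposit
            num_deposits += 1
            total_deposits += amount
            bank_balance += amount
    return bank_balance, num_checks, num_deposits, total_checks, total_deposits

# Invented sample data: a deposit, two checks, then the stop marker.
print(process_statement(100.0, [50.0, -20.0, -10.0, 0]))
# (117.0, 2, 1, 30.0, 50.0)
```

Note how every counter is initialized before the loop starts, and the fee is recomputed inside the loop for each check, exactly as the text warns.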
http://www.co-pylit.org/courses/cosc1315/problem-solving/00-input-process-output.html
CC-MAIN-2018-17
refinedweb
6,056
79.09
Submitted by roman-lygin (Intel).

Working on CAD Exchanger, I am designing one of its plugins – which converts 3D CAD data between ACIS and Open CASCADE (two modeling kernels) – to be parallel. Depending on the model size, the converter has to deal with multiple small objects allocated on the heap (e.g. 20,000+ objects, each taking 48 bytes plus additional object data such as lists, strings, etc). The translation works just fine, and concurrency analysis with Intel Parallel Amplifier indicates high concurrency levels. So far, so good.

However, I noticed that when translating the same ACIS file over and over again in the same test harness session, translation took longer and longer. Why could that be? So I launched the Amplifier to collect hotspots, and here is what I saw:

These two top hotspots relate to the memory manager layer (the Standard_MMgrRaw class), which simply forwards calls to malloc/free and new/delete. Trying to root-cause the problem, I switched to the mode that shows direct OS functions (toggling off the button on the Amplifier toolbar), and here is a new screenshot:

It shows that the hotspots are two system functions – RtlpFindAndCommitPages() and ZwWaitForSingleObject() – which are called from memory allocation/deallocation routines. It also shows that the nearest hotspot related to my code (BSplCLib::Bohm()) takes just 1/4 of the time consumed by ZwWaitForSingleObject() (0.47s vs 1.81s).

After experimenting with several runs and analyzing how the hotspot profile changes with a growing number of runs, I concluded that the first hotspot is explained by the fact that the ACIS converter creates multiple tiny objects of different sizes with short life spans (they are destroyed after every conversion). This seems to cause strong memory fragmentation, which forces the system to constantly look for new memory chunks. The second hotspot (ZwWaitForSingleObject()), which goes through a critical section, is caused by the default memory management mechanism on Windows, which uses a lock.
The execution of the locks & waits analysis also proves that the memory management lock is the greatest one adversely affecting concurrency. All this is caused by the direct use of calloc/malloc/free and new/delete, called dozens of thousands of times. It's worth mentioning that such hotspots did not exist in the serial implementation and popped up only when I started using the parallel one. The former used a memory manager (in a 3rd party lib) that allocated memory blocks and did not return them to the system, reusing them when the application requested new blocks. I couldn't reuse this memory manager because it was not thread-safe, and therefore had to switch to another manager that simply forwarded to malloc/free.

So I was almost forced to write my own memory manager that would implement the previous behavior and would be thread-safe and… fast! Challenges are good, but not when you need to re-write low-level components, which can take a lot of time and require diligent, thorough testing, delaying progress in your project, which already receives very limited attention. So, I approached my colleagues from the Threading Building Blocks team to check if there was anything TBB could help with. What was my surprise when they suggested I try the new release 2.2. Version 2.2 offers a mechanism to seamlessly replace the system memory manager with the tbb allocator. 'Seamlessly' really means it – all I had to do was add a single line of code to a C++ file:

    #include "tbb/tbbmalloc_proxy.h"

The outcome was immediate. Not only did the hotspot profile change completely, removing the OS hotspots (see the comparison mode screenshot below), but the overall speed-up (on the entire test case) was about 25%! One line of code, no need to re-write anything on my own – hours of coding saved, with such a return! Just incredible, to say the least.
The recently released 2.2 Update 1 includes further improvements which my app now benefits from (more reliable processing of realloc(), bug fixes for debug mode, etc). The colleagues later explained to me that the TBB allocator runs concurrently (seemingly without any locks inside) and reuses previously allocated blocks in a similar fashion. Thus, it was the entire application (not only its parallel part) which benefited from this substitution.

So, if you are migrating from a serial to a parallel implementation, you may encounter something unexpected – memory bottlenecks. If you have become accustomed to some nice single-threaded memory manager, you may be forced to consider migrating to an alternative. If this is the case, you may want to give the tbb allocator a try and see if it helps in your case.
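The reuse strategy described here – keep freed blocks and hand them back out instead of returning them to the system – is the classic free-list / object-pool idea. As a language-neutral illustration (sketched in Python; the code in the article is C++, and tbbmalloc works at the malloc level rather than with Python objects), it looks roughly like this:

```python
# A minimal free-list pool: released objects are kept and reused instead
# of going back to the underlying allocator. This only illustrates the
# reuse idea; it is not how tbbmalloc is actually implemented.

class Pool:
    def __init__(self, factory):
        self._factory = factory
        self._free = []          # the free list
        self.allocations = 0     # how many objects we actually created

    def acquire(self):
        if self._free:
            return self._free.pop()   # reuse a previously released object
        self.allocations += 1
        return self._factory()

    def release(self, obj):
        self._free.append(obj)        # keep it for later instead of freeing

pool = Pool(dict)
a = pool.acquire()
pool.release(a)
b = pool.acquire()            # no new allocation: 'a' is reused
print(pool.allocations)       # 1
```

The pay-off is the same as in the article: repeated allocate/free cycles of many short-lived objects stop hammering the system allocator (and, in a threaded program, its lock).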
https://software.intel.com/es-es/blogs/2009/10/28/memory-management-challenges-in-parallel-applications
So I was wondering what OO methodology you Perl Monks use. Some of the ones I'm familiar with are:

(Edit by tye: emphasize "you" in title, close B and I tags)

One.

    package Foo;
    use BaseClass;
    our @ISA = qw(BaseClass);

    attrs qw( foo bar baz );

    sub init {
        my $self = shift;
        my %options = @_;
        # Do some stuff here
        return $self->SUPER::init(%options);
    }

    sub foo {
        my $self = shift;
        # Do some validation here
        return $self->_foo(@_);
    }

It handles diamond inheritance, on-the-fly methods, and provides a simple (but not obtrusive) separation of interface and implementation. Also, you can't misspell a method name and have it DTWT silently, as direct access to hashes can. I've never uploaded it to CPAN because there are a plethora of class classes out there. I also never really cared to learn Class::Struct, Class::MethodMaker, and the like, because this does what I need it to do, and nothing more. *shrugs* I probably should, at some point, if only to get into the mainstream.

-----

I'm partial to const subs and namespace purity:

    package My::Class;
    use vars qw( $VERSION );
    BEGIN { $VERSION= 1.01 }

    package My::Class::_implement;
    BEGIN {
        my $offset= 0;
        for my $member ( qw( FOO BAR BAZ BIF ) ) {
            eval "sub $member() { $offset }; 1"
                or die "eval failed: $@";
            $offset++;
        }
    }

    sub My::Class::GetObjectPackage {
        my $class= shift(@_);
        return $class . "::Object";
    }

    sub My::Class::new {
        my $class= shift(@_);
        bless [], $class->GetObjectPackage();
    }

    sub My::Class::Object::Foo {
        my $self= shift(@_);
        my $value= $self->[FOO];
        $self->[FOO]= shift(@_) if @_;
        return $value;
    }

----

I wanted to explore how Perl's closures can be manipulated, and ended up creating an object system by accident.

-- Schemer

Note: All code is untested, unless otherwise stated.

I prefer to use Class::MethodMaker. In one instance, I extended it for my needs (to modify a data_accessor method so that it also calls the parent class).

-Dan.
Lately I use it in combination with Data::Type:

    class 'Idiot', {
        public => {
            string => [ qw( name title ) ],
            int    => [ qw( score ) ],
        },
        types => {
            string => STD::VARCHAR(80),
            int    => STD::INTEGER,
        }
    };

    my $murat = new Idiot name => 'murat', title => 'Mr.', score => 10;  # + out of 10

Murat

(Edit by tye: preserve formatting)

    class 'Idiot', {
        public => {
            string => [ qw( name title ) ],
            int    => [ qw( score ) ],
        },
        types => {
            string => STD::VARCHAR(80),
            int    => STD::INTEGER,
        }
    };

    sub Idiot::talk : method {
        my $this = shift;
        use IO::Extended qw(:all);
        printfln "I am %s and stupid", $this->name;
    }

    my $murat = new Idiot name => 'murat', title => 'Mr.', score => 10;

    $murat->talk;

Steffen

My OO style is simple and clean, thanks to Attribute::Property:

    package Some::Class;

    sub new : New {
        my ($self) = @_;
        exists $self->{$_} or croak "Mandatory argument '$_' missing" for qw(id foo);
        return $self;
    }

    sub id   : Property { /^\d+\z/ }
    sub foo  : Property;
    sub bar  : Property;
    sub blah : Property { $_ < 50 }

    sub _private : method {
        my ($self) = @_;
        ...
    }

    sub do_something : method {
        my ($self, $quux) = @_;
        ...
    }

    my $thing = Some::Class->new( id => 15, foo => "Hello" );
    $thing->foo =~ s/e/a/;
    $thing->id++;
    $thing->do_something($$);
    $thing->blah = 10;
    $thing->blah = 60;  # dies

Juerd # { site => 'juerd.nl', plp_site => 'plp.juerd.nl', do_not_use => 'spamtrap' }!
http://www.perlmonks.org/index.pl/jacques?node_id=296978
Brainstorm

Hi and welcome to devnet. What kind of tutorials are you thinking about? There are already several in Qt's own documentation.

@Pablo-J-Rogina thanks for the link - very useful indeed and a very good starting point. Ultimately I'm going for a fully-integrated solution which will have its own waterproof housing and could potentially be Wi-Fi linked to a home network (for email notifications, web access, etc).

No one has replied.

@GhostWolf said in Tomtom linux:

> The reason I wanted the TomTom was because I just have it laying here, and the program doesn't have to be fast. The only thing it needs to do is to send data if I press a button on the screen.

As @Wieland already said, most likely nobody here can help you with this. So even if it's possible (from the hardware side) to do what you want, you're on your own.

Just as a note: you can add as many forms as you like and have them auto-created like mainwindow is. P.S.: Hi :)

Thanks, the two of you. Document creator sounds like what I need. The lighter, the better.

@mzimmers Yes, you don't have to.

Basically I've got 2GB on my SSD and 2 other hard disks (2TB each); one is filled, the other is completely clean, so I installed it on the completely clean one, but I can't use the maintenance tool because my SSD doesn't have space for temporary files. That's my problem :/ Edit: found a way to clear space, I'll update when it's done! ^^

Your find command doesn't go through all of your machine, just the current folder – likely your home directory.

Thanks, but I'm looking at cloud MQTT, Firebase, etc. solutions. I don't want to build a database, I want to use an online system. But which one is the most compatible (or appropriate) for Qt on Android and my requirements?

@kshegunov got it... thanks for the clarification.

If you want to use D3.js without rewriting it in another language, your first options are either to embed a browser, or use QML. Sure, you can also run JS code in a plain QJSEngine and add your objects there, but that's going to be more work.

Qt 4 has reached EOL long ago. Please consider upgrading to Qt 5.

Thanks everyone for the insight. I have some idea now of where I should start.

@raven-worx Yes, I'm running the unmodified example code. I start the application from Qt Creator. You are correct that there are options to disable gestures via command line parameters. But options like "no-pan", "no-pinch" or "no-swipe" must be explicitly set to disable them. Since I'm running from Qt Creator, I haven't set any of these options. Hence all three gestures are enabled.

This library looks good as well: Found by simply typing "call java from c++" into Google.

Just published an article about how to use Snapshot Testing with TDD for automated GUI testing.

@mzimmers In this kind of scenario, you should use a timer to call a function periodically. NEVER EVER create a while-loop inside your GUI, this will freeze your app.

A suggested solution is

The GUI is

@SGaist said in Licenses craziness!:

> What happens usually is that developers use a different version of the database system that was used to build the Qt plugins hence the potential need to re-build the corresponding plugins.

This isn't an excuse, because I don't face any problem when I use commercial products (as I mentioned before, e.g. Delphi). Do other companies make a magical solution for this issue?! Of course not. Since I got to know Qt, I couldn't use any Qt plugin (specifically SQL plugins) by default, because I always needed to rebuild it from scratch (the old Qt binary distros didn't include binaries for SQL plugins except SQLite).

> Because Qt can be used under LGPL/GPL doesn't mean its development is free, nor that there is limitless hardware to build and produce the pre-built releases, nor thousands of developers working on it every day. I understand your frustration regarding licenses, but it looks like you don't realise the jungle that it is. Because a product is free doesn't mean you can do whatever you want with it.

This issue is not related to the open source version of Qt – the commercial version of Qt has the same problem!

Yeah, that was basically it. When I stitched together the examples from above, I neglected to notice that my worker object had two timers declared, and I wasn't consistent about which I used. Works now. I've tested multiple starts and stops, and verified that the "finished" signal works, too. I do believe I have a functional example. I also modified the program so the worker passes the value to the widget for display. So, to summarize what I've learned here: I think I have a basis for a reasonable design now. Thank you to everyone who participated in this.

@daeto did you specify an absolute or relative lib path in the project's AdditionalLibraryDirectories property? What libs did you add to the project's AdditionalDependencies property? And what do the errors exactly say?

Not even aware of what the "Fluent Design" tech/product name is until seeing your post... still not really knowing. What I can gather, it's a Microsoft heavy-gloss marketing attempt at a 3D interface/UI? Have you seen:

@tekojo Thank you for your detailed answer! This really helps me in the right direction! :D Most things are around what I expected them to be, which is great. I'll mark this thread as solved, thanks again for the help!

@Konstantin-Tokarev Thanks, I see it now.

@Konstantin-Tokarev that's exactly what I meant, thanks for the clarification.

@VRonin I guess there is actually no reason to test; if fglrx is deprecated and the open source drivers work fine, I'll just mark this topic as solved.

Hi, From the top of my head, you can try to set the alignment flag to Qt::AlignHCenter rather than 0.

@Drakkhan I should mention that this resize in resizeEvent can

Flow Layout Example, or nesting your custom widget inside another one that is affected by the layout and handles the positioning of your custom -square- widget. I prefer the 2nd option myself.

I'd avoid repeating the namespace name in sub-namespaces: Morph::MorphUtils vs Morph::Utils. The second is cleaner.

I'd recommend bringing this question to the interest mailing list, you'll find Qt's developers/maintainers there. This forum is more user oriented.

@tham Yeah, and really someone has to be interested enough in hacking your application to do it. That is ultimately pretty rare unless it gets really popular or is a security based app. Other than either of those scenarios, I'm betting nothing will ever happen to/with your encrypted password. :)

What does this have to do with Qt?

@Konstantin-Tokarev I am downloading it now! Thank you! The msys2 package works like a charm.

The Elastic Node Example from the Graphics View framework might give you some ideas to start from.
https://forum.qt.io/category/9/brainstorm
# PVS-Studio team's kanban board. Part 2: YouTrack

Hello everyone! Welcome to the second part of the PVS-Studio Team's Kanban Board story. This time we'll talk about YouTrack. You'll learn why we chose and implemented this task tracker and what challenges we encountered. We don't want to advertise or criticize YouTrack. Nevertheless, our team thinks JetBrains has done (and keeps doing) a great job.

![0853_Kanban_YouTrack/image1.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image1.png)

I discussed how we integrated kanban, and why we decided to switch from Bitbucket to a new task tracker, in the previous article, "[PVS-Studio Team's Kanban Board. Part 1: Agile](https://habr.com/en/company/pvs-studio/blog/549960/)". Do take a look if you haven't already – this will help you understand the concepts I discuss further on. Note that since YouTrack offers multiple features, I won't be able to cover them in detail. To see all the features, check out the [documentation](https://www.jetbrains.com/help/youtrack/incloud/YouTrack-InCloud.html). This article is for those who want to use or already use the tracker.

Background
----------

If you don't want to read the previous article, then here's a brief background story. After implementing the kanban method, we realized how useful a kanban board is for daily meetings. However, after we all switched to remote work, we could not use our usual physical board anymore. We struggled a bit and then decided to speed up the process of choosing an alternative task tracker to replace Bitbucket. We wanted a tool with an agile board. In fact, we were choosing between YouTrack and something else. Our C++ team lead (hey there, Phillip) encouraged us to use YouTrack. The main competitor of YouTrack was Jira. Eventually, we picked YouTrack.

The PVS-Studio team had been using Bitbucket since mid-2014. Bitbucket suited us well for a long time.
But as the company was growing, we needed more complex features that were not available in Bitbucket. Some functionalities were too difficult to implement:

* advanced notification settings;
* customizable workflows;
* task dependencies;
* time tracking;
* reports;
* agile boards.

Even after a cursory study of YouTrack, we realized that it offers these features out of the box. Inspired by YouTrack, we started switching to the new tool.

Getting ready
-------------

As I've mentioned in the previous article, overall our team created 5500 tasks in Bitbucket. Obviously, we wanted to export the existing tasks to YouTrack, but that was quite difficult to do. On the one hand, it's fairly easy to [export](https://support.atlassian.com/bitbucket-cloud/docs/export-or-import-issue-data/) tasks from Bitbucket to Jira. Alternatively, you can export data with the Bitbucket API into an intermediate json file of a certain [format](https://support.atlassian.com/bitbucket-cloud/docs/issue-import-and-export-data-format/). We tried to save the entire repository to a zip archive via the Bitbucket export menu. Unfortunately, after some time the process froze in any browser. Forum visitors link this long-standing issue to the new Bitbucket API version. Thus, the only workable option is to export tasks one by one into such a json file with the help of a script (you can find several similar scripts on the Internet). But that's only half the story.

On the other hand, YouTrack lets you [import](https://www.jetbrains.com/help/youtrack/standalone/imports.html) tasks from Jira and from some other systems. But Bitbucket is not among them. The only option is to import the data through [json](https://www.jetbrains.com/help/youtrack/incloud/custom-imports.html). To do this, you need to prepare the data source: convert Bitbucket's json file, taking into account all the differences in data formats. Exporting previous tasks to YouTrack turned out to be a complicated task.
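To give an idea of what such a conversion step involves, here is a minimal, hypothetical sketch in Python. The field names on both sides are simplified stand-ins (real Bitbucket exports and YouTrack imports have more fields and different structures), so treat this as an illustration of the mapping problem, not a working migration tool:

```python
# Hypothetical mapping from a Bitbucket-style issue record to a
# YouTrack-style import record. All field names are simplified stand-ins.

def bitbucket_to_youtrack(issue):
    """Convert one Bitbucket-like issue dict to a YouTrack-like dict."""
    state_map = {"new": "Open", "resolved": "Fixed", "on hold": "Postponed"}
    return {
        "summary": issue["title"],
        "description": issue.get("content", ""),
        "reporterName": issue["reporter"],
        "state": state_map.get(issue["status"], "Open"),
    }

# Invented sample data standing in for a Bitbucket export.
issues = [
    {"title": "Fix crash on import", "reporter": "phillip", "status": "resolved"},
    {"title": "Write release notes", "reporter": "kate", "status": "new",
     "content": "Draft the notes for 7.12."},
]

converted = [bitbucket_to_youtrack(i) for i in issues]
print(converted[0]["state"])  # Fixed
```

Even in this toy form you can see where the real complexity hides: every status, priority, and user name needs an explicit mapping, and anything the map doesn't cover needs a sensible default.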
Eventually, we gave up this idea. We decided to manually export a number of tasks from Bitbucket to YouTrack: about a hundred of those that we had not started yet (the backlog) and a couple dozen tasks that were on hold. As for Bitbucket, we kept it as an archive. Only five employees have access to Bitbucket, and we use its free version now.

Then we created our first project in YouTrack. We chose YouTrack InCloud because it has a quick setup. We had already had a YouTrack test repository, so we understood this system pretty well. Besides, YouTrack offers many learning tools, for example, interface tips and a test project with a detailed description of all the features the tracker offers.

The diversity of YouTrack features took some getting used to. The workflow management also differed from Bitbucket's. Bitbucket has few custom settings compared to YouTrack. In YouTrack, you can customize notifications, saved searches, reports, agile boards, and "Knowledge Base" articles.

I'd like to point out that we didn't have any serious technical difficulties or unsolvable issues when switching to YouTrack. Eventually, everything we needed, we made happen. Moreover, we discovered many additional features when we started using YouTrack daily – or started to appreciate them, to be precise. For example, spent time reports allow you to see employees who created or closed a task, and also those involved in the task. I'll tell you more about YouTrack time tracking a bit later.

As a side effect, we had to optimize our business processes. We implemented new workflow approaches we hadn't thought about before. YouTrack presented us with a technical challenge to configure workflows, which are written in JavaScript. But mostly we dealt with administrative challenges.
Here's what we had to do:

* choose employees to switch to YouTrack first, and decide whether to switch the entire team at once or start with one department;
* choose an architecture: one project for everyone, or different projects for the development and marketing teams, for example;
* create users, configure roles and groups;
* set up the project and define access permissions;
* configure field structures;
* create and configure kanban boards;
* choose a time tracking strategy;
* configure workflows;
* set up reports;
* train employees how to use YouTrack.

In this article, I'll review each of the challenges listed above.

Switching order, creating projects
----------------------------------

We had one main question: which team should migrate to YouTrack first? We were afraid to switch everyone at once. In that case, we would have had to configure YouTrack and refine it to accommodate our processes while a large number of users were using the system. We decided to start with the marketing team. Firstly, they had already been using kanban on a daily basis, the same as our developers. Secondly, our marketing team consists of 10 employees. This was enough to try out all the main scenarios of using the new tracker. Finally, ~~we just didn't feel sorry for them~~. Our marketing team is very creative and very flexible in terms of innovations. I'd like to thank the head of our marketing team, Ekaterina, for her endless patience and support while we were integrating YouTrack. :)

Another question was whether to create one project for the entire company or different projects for different teams. YouTrack offers convenient tools for managing several projects at the same time. At first, we wanted to separate the development team from the marketing team. We could create several projects and thus reduce the total number of values in the fields, the number of tags, etc. Moreover, it would simplify the tracker usage for users.
But there was a very important argument against it: our tasks are constantly reassigned between marketing and development employees. Take a task to write an article. Say a C++ developer writes the article. Then the marketing team employees get it to proofread and translate. The author gets the article back to post online. Finally, after the author publishes the article, the marketing team receives the task again to promote the article on social media and forum websites. Two projects would thus complicate our familiar interaction and workflow. YouTrack can move tasks between projects, but the fields of different projects must match. Also, some workflows linked to a certain project may depend on its fields. So, we decided to create one shared project, add all the teams there, and set it up so that they can work together simultaneously. The marketing team was the first to switch. On December 3, 2020, we created the "Marketing" project and later renamed it to "PVS-Studio".

![0853_Kanban_YouTrack/image3.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image3.png)

Users, groups, roles
--------------------

Managing users, groups, and access rights (roles) in YouTrack is simple but has some peculiarities, which I'll cover in more detail further on. Let's start with the hierarchy. YouTrack InCloud is a service that JetBrains offers alongside its other team products, and all JetBrains services are integrated with each other. The top-level service, [JetBrains Hub](https://www.jetbrains.com/hub/), allows you to control the other services and exchange data between them. JetBrains Hub also stores general settings — access rights, users, groups. They are called resources. The "Global" project is the parent project for all projects. By default, YouTrack suggests adding all created groups as resources to this project, so you can use them in all subsidiary projects. You can limit a group's visibility by adding it to a specific project instead.
![0853_Kanban_YouTrack/image4.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image4.png)

By default, the "All Users" parent group represents all users. Also, there's the "Registered Users" subgroup with an auto-join option for registered users. You can use YouTrack user groups to configure user access, in task search queries, and in workflows. Since we decided to use a shared project, we did not need to configure user access via groups. However, we added several groups that reflect the company's structure.

![0853_Kanban_YouTrack/image6.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image6.png)

We use these groups mainly when sending workflow notifications. An employee and a team lead receive a notification, for example, about an overdue task. It was easy to implement. The "Managers" group includes all team leads. Each team group contains employees and one or more team leads. When notifying a user, we check their group. If they are in the "Managers" group, only the manager receives the notification. Otherwise, we search for the user's workgroup. The user and all team leads from the working group receive the notification. I'll tell you more about our workflows later. Let's go back to the group list. Note that all groups (resources) belong to the "Global" project. The "Registered Users" group has the "Global Observer" role. A role in YouTrack is a named set of permissions of two types: "Hub" and "YouTrack". The "Hub" permissions provide access to global entities: groups, projects, roles, users, etc. The "YouTrack" permissions configure access to tracker features: working with tasks (creating, commenting, adding attachments, etc.), reports, saved searches, and so on. We did not create additional roles. The default entities proved sufficient.
![0853_Kanban_YouTrack/image8.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image8.png)

Each new user is automatically added to the "Registered Users" group. In accordance with the "Global Observer" role, a new user receives a minimum set of three permissions.

![0853_Kanban_YouTrack/image9.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image9.png)

These are the "Hub"-level permissions. We wanted to automate the process of adding new users as much as possible, so that they immediately get the set of permissions ("YouTrack"-level access) necessary to work in the "PVS-Studio" project. I'll show you how we implemented it. Let's open the properties of the "Registered Users" group.

![0853_Kanban_YouTrack/image11.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image11.png)

Besides the "Observer" role, the group also has the "Developer" role in the "PVS-Studio" project. By default, YouTrack uses the "Developer" role for working in a project. We explicitly assigned this role to the "All Users" parent group, so the role is inherited from that group.

![0853_Kanban_YouTrack/image13.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image13.png)

So that was pretty easy. Now that we've discussed the group settings, let's take a look at the project settings to fully understand YouTrack permissions. The project has a "Team" concept: a "Team" is a group of users who can work with the project, with specified roles. Our project has the following settings.

![0853_Kanban_YouTrack/image15.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image15.png)

Nothing surprising: the same "All Users" group and the "Developer" role. We can configure the team members' access (the "Edit" link) by selecting one or more roles.
![0853_Kanban_YouTrack/image16.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image16.png)

Finally, you can use the "Access" tab in the project settings to see users with access.

![0853_Kanban_YouTrack/image18.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image18.png)

Here you can see users' access permissions, directly assigned roles, and global project permissions. Roles themselves can be configured further. Since we had one "Developer" role for all project users, that was the only role we customized. We slightly changed the "Hub"-level permissions: namely, we added the "Read Group" option, which allows users to specify groups in issue filters. Note that if you change default settings, YouTrack displays the "changed" mark to the right of the checkbox.

![0853_Kanban_YouTrack/image20.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image20.png)

In addition, we customized the "YouTrack"-level permissions for the "Developer" role. We granted access to edit teammates' articles in the project's "Knowledge Base" and to delete/update comments to articles. We disabled the "Delete Issue" option, so only the administrator can delete tasks. We also hid our private fields from regular users. An example of such a field is our internal timer (date and time), which we use to track the time spent on a task.

![0853_Kanban_YouTrack/image21.png](https://import.viva64.com/docx/blog/0853_Kanban_YouTrack/image21.png)

Finally, a few words about users. Creating users and granting permissions is quite easy: you request the required number of licenses in the tracker's global settings, create the users, and set their emails and temporary passwords. By the way, blocked users do not consume licenses. Thus, we managed to create the entire company structure at the beginning of the migration. We unblocked only the marketing team users, so we didn't need to request additional licenses until all the other users were ready to migrate.
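The notification routing described in this section (a manager is notified alone; everyone else is notified together with their team leads) is easy to express as a small function. The following is a plain-JavaScript sketch, not our actual workflow code — in the real "common-users" module this logic runs on top of the scripting API's `isInGroup()` and `notify()`, and the group and user names below are made up for illustration.

```javascript
// Sketch of our notification routing (the real code lives in the
// "common-users" workflow module and uses the YouTrack scripting API).
// `groups` maps a group name to an array of user logins.
function notificationTargets(assignee, groups) {
  // A manager is notified alone about their own tasks.
  if (groups['Managers'].includes(assignee)) {
    return [assignee];
  }
  // Otherwise: find the assignee's team...
  const team = Object.keys(groups).find(
    (name) => name !== 'Managers' && groups[name].includes(assignee)
  );
  // ...and notify the assignee plus every team lead of that team.
  const leads = groups[team].filter((u) => groups['Managers'].includes(u));
  return [assignee, ...leads];
}
```

For example, with groups `{'Managers': ['kate'], 'Marketing': ['kate', 'ann']}`, an overdue task assigned to `ann` notifies both `ann` and her team lead `kate`, while a task assigned to `kate` notifies only `kate`.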
Now let's talk about the key component of any tracker – task fields.

Fields and tags
---------------

Initially, we wanted to recreate in YouTrack the same field structure we had in Bitbucket and then customize it to our needs. But we faced a few difficulties. Our physical kanban board was an add-on for Bitbucket, so many things our physical board provided (additional task states, card colors) were not available in Bitbucket itself. YouTrack's online agile board, on the other hand, is built on entities that you configure in the tracker. We wanted to use the board right away. So, at first, we had to establish the familiar field structure and then adjust it, taking into account how the online board displays the fields. In other words, we had to combine the Bitbucket fields and the entities of our physical board. I'll skip the details of how we matched the fields and chose a suitable configuration. It took us some time, but we finally settled on a field set that met all our requirements. Also, I'd like to point out that some workflows need additionally configured (service) fields. Now I'll describe our current fields in YouTrack. They are listed below.

![0853_Kanban_YouTrack/image23.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/50a/6aa/603/50a6aa603e5c646786bb5d9467541506.png)

**Project**. The list above does not include this field. However, every task card shows the project the task belongs to. We have one project – "PVS-Studio".

**Assignee**. Holds the task's current assignee. Before you can assign a task to a user, you need to explicitly add this user to the "user" type enumeration. The field values are used for naming swimlanes on the online kanban board (tasks are displayed by assignee).

**State**. Shows the current state of a task. May have the following values:

* Buffer — contains scheduled tasks. No one has started working on them yet.
If no assignee is specified for a task in Buffer, the task is considered backlog;
* In progress — a task someone is working on;
* Review — a task under review;
* On hold — a team member is working on a task, but the task is temporarily (for a short period) postponed;
* Resolved — a task is solved and closed (the expected result is achieved);
* Suspend — a team member had been working on a task but postponed it for a long period of time (with the intention to return to it);
* Wontfix — a task is closed unsolved (it could not be solved for some reason, was added by mistake, etc.);
* Duplicate — a task is closed unsolved because there is a duplicate task.

YouTrack uses the values of this field for the columns of the online kanban board.

**Component**. Specifies the department or sub-department. For example, "Marketing", "Office", "C++ (Rules)", "C# (Core)", etc.

**Type**. Contains the type of a task: a general task (Task), a task to fix a bug (Bug), a task to prepare an event or a report (Event), a task to write an article (Article), etc.

**Priority**. Defines a task's importance. Priority can have the following values: Minor, Normal, Major, Critical, Blocker.

**Scope**. The time (in hours, days, or weeks) we plan to spend on a task. A team member fills out the field when creating the task. If the task takes much more time than planned, we discuss the reasons. The system adds work items automatically while the task is "In progress" or under "Review" (the "State" field). To use this feature, you need to enable time tracking and configure the appropriate workflow in the project settings.

**Spent time**. Accumulates the total amount of time spent on the task. Spent time is the second field that you have to configure when enabling time tracking in the project settings. The field is read-only.
The amount of spent time (a work item) can be added in two ways:

* manually, using the "Add spent time" command;
* automatically (YouTrack adds the spent time and displays it as a special comment).

YouTrack logs time automatically when a task's "State" field is set to "In progress" or "Review". When you stop working on a task (put it "On hold" or move it to "Resolved"), the elapsed time is added to the value of the "Spent time" field. The time is also added when the assignee changes. Besides, YouTrack automatically updates the amount of time spent on each task at 9:00 am every workday. YouTrack records the last change (date and time) in the "TimerTime" field. You can customize all of this in the corresponding workflow, which I will describe later. Time tracking in YouTrack is very useful. You can see how much time each user spent working on a task and how much time the task took overall. As I mentioned earlier, you can use this information in special reports to get a complete picture of each employee's contribution to the work. If the amount of spent time exceeds the "Scope" value, a special progress indicator circle turns red (the indicator is shown on the task card and in the task list). Otherwise, the progress indicator shows a pie chart with the amount of time spent and the amount of time left.

![0853_Kanban_YouTrack/image24.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/700/0ff/e92/7000ffe92e89e68bd1cec1f5d826b0bc.png)

**TimerTime**. The TimerTime field is hidden from users. It stores the date and time of the last relevant change (a task state or an assignee) and helps to calculate the amount of time spent.

**Due Date**. This field allows you to set the date and time when a task must be resolved (the "Resolved" state). The reporter sets the due date.
If a task is overdue and not resolved by the due date, the assignee (and their higher-ups) receive daily notifications by email. The field drives these notifications.

Enumeration-type (state or enum) fields offer an additional setting: you can choose colors for their values. Task lists and boards display the colors. Here's an example of the "State" field:

![0853_Kanban_YouTrack/image25.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/4ab/824/aef/4ab824aef45dbeba018b0cdee17a857f.png)

Besides configuring fields, YouTrack lets you create special task labels, called tags. A task can have no tags or multiple tags. Tags are very useful: they let you express many different combinations of task properties. Note that you can also set multiple values for enum-type fields, but combining fields and tags seems more efficient. We currently use about 18 tags. Here's the head of a task card with tags.

![0853_Kanban_YouTrack/image26.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/ea2/4d9/900/ea24d990030305e6b9cd71d7e2335215.png)

Also, note the colors of the enumerated fields in the legend of the same task.

![0853_Kanban_YouTrack/image27.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/2cb/322/2e0/2cb3222e0f5645b882c1fbcf9f3db62a.png)

The fields and tags indicate that Phillip is the assignee and the task is on hold. The task has the C++ component (the "New Rule" and "Client" tags mean that the assignee is working on a new diagnostic at a client's request; the diagnostic belongs to the "General Analysis" class). This is a task with the Major priority. The task's reporter estimated the scope at four workdays (32 hours with an eight-hour working day). Note that the team has already been working on this task for 16 workdays. That gives us food for thought. :) The task has no "TimerTime". This means YouTrack is not recording time for this task at the moment (because "State" is set to "On hold").
The task has no deadline. Take a look at the task list.

![0853_Kanban_YouTrack/image29.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/66b/00a/dd2/66b00add245f39fc8b27afce40134aae.png)

By the way, we filtered the tasks by the C++ component (including all sub-departments). To do this, we used the following query:

```
Component: {C++ *}
```

The query gets all tasks whose component value begins with "C++". Search queries are one of YouTrack's most powerful features, and they are user-friendly. You can save tags and search queries, configure notifications for them, share queries with colleagues, or use queries privately. I'll talk a little more about queries when we get to workflows. You can use all fields and tags of a task both in search queries and in reports.

Kanban
------

I've said so much about online boards that I can't help but show at least one. :) Here is a screenshot of our C++ kanban board (tasks are grouped by assignee).

![0853_Kanban_YouTrack/image31.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/6b3/b69/0a8/6b3b690a80daa17ce2d213425a1c43b8.png)

The "Resolved" column is collapsed. YouTrack filters cards to match a query. You can set the query in the board settings:

![0853_Kanban_YouTrack/image32.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/78f/13a/01a/78f13a01a832c74c4891d7d2430519b9.png)

Thus, you see all unresolved tasks plus the tasks resolved after March 11, 2021 (the date of one of the latest releases). We decided to limit the resolved tasks to one release cycle to free up space on the board. Also, we filtered the task list by explicitly specified users to control whose swimlanes appear on the board. Of course, YouTrack has other options for building swimlanes. Recently, we changed the swimlane display: now we have boards with a summary for each team, which allows us to focus more on tasks and their positions on the board. The board is customizable. Now, let's talk about one more important feature of YouTrack boards.
Initially, we used the "Automatically add new issues" mode to configure our kanban board; YouTrack offers it by default. Like the "Manually assign issues" mode, it attaches a task to the board: the task and the board are linked, so the task literally belongs to the board. But then you can't, for example, filter tasks by date in the "Resolved" column, as we do now — the board would display all the tasks we've ever added. This is one of the reasons why we chose to filter cards by a query instead of the first two options. Moreover, in the "Automatically add new issues" mode, a newly created task appears on all the boards at once, which may be inconvenient. Still, this mode may be useful for working with multiple time-limited projects on one board, and it requires fewer additional settings. The rest of the board settings are easy to configure, so that's all here. Kanban is an out-of-the-box feature in YouTrack: you can use it right away. As for time tracking and workflows, you have to configure these features yourself. Now it's time for the most interesting part.

Time tracking and workflows
---------------------------

A kanban board was not the only thing we lacked in Bitbucket. We also wanted time tracking and workflow features. I'll cover both in one section, because without workflows you can't use automatic time tracking — only the manual mode. See the YouTrack [workflows documentation](https://www.jetbrains.com/help/youtrack/standalone/Workflow-Guide.html) for more details. In a nutshell, workflows are scripts written in JavaScript that automate various tracker activities. YouTrack provides a set of default workflows — so far, over 35 of them. You can customize the default workflows and easily roll your changes back.
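To give an idea of what these scripts look like, here is a minimal sketch of an "On-change" rule. In a real workflow, `entities` and `workflow` come from `require('@jetbrains/youtrack-scripting-api/...')`; below they are stubbed with just enough surface to show the structure, and the specific check is only an illustration, not a quote from any real rule.

```javascript
// Stand-ins for require('@jetbrains/youtrack-scripting-api/entities')
// and .../workflow; the real modules provide the members used below.
const entities = { Issue: { onChange: (rule) => rule } };
const workflow = {
  // workflow.check(cond, msg): abort the change and show msg if cond is false
  check: (cond, msg) => { if (!cond) throw new Error(msg); }
};

const rule = entities.Issue.onChange({
  title: 'Block users from setting an invalid assignee',
  // guard: decides whether the rule should fire for this change
  guard: (ctx) => ctx.issue.fields.isChanged('State'),
  // action: runs when the guard passes
  action: (ctx) => {
    const fields = ctx.issue.fields;
    workflow.check(
      fields.State.name !== 'Resolved' || fields.Assignee !== null,
      'A task cannot be resolved without an assignee.'
    );
  },
  // requirements: fields the rule relies on; YouTrack checks they exist
  requirements: {
    State: { name: 'State' },
    Assignee: { name: 'Assignee' }
  }
});
```

When a user tries to resolve an unassigned task, such a rule rejects the change and shows the message instead of applying it.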
You can create workflows for your own use, as well as shared modules (to reuse code across different scripts). So, YouTrack provides space for creativity. We used several default workflows that seemed necessary to us, modified some of them, and added a couple of our own.

### Time tracking

We had wanted to implement time tracking for a long time. We needed to estimate the amount of time spent on tasks and find out why some of them are time-consuming. We wanted to analyze our workflows. In Bitbucket, we tried to set scopes in square brackets in task headers. Task headers also included other information that we could not express through the Bitbucket fields. For example, we added the word [client] in square brackets, meaning that a team member was working on the task at a client's request. Here's an example of such a task: "[client][5 days] The task summary". Of course, Bitbucket could not analyze these makeshift tags. Therefore, we developed a utility that used the API to get information from Bitbucket and generate various reports. This utility also helped us notify team members about forgotten tasks (tasks without comments over the last three or four days, for example). That's how we tried to customize our workflows in Bitbucket. As for time tracking in Bitbucket, we could analyze tasks only manually: look at the creation date of a task, look for a scope value (which was not always present) in the description, and draw conclusions. Such a cumbersome process! We were very happy when we learned about time tracking in YouTrack. We set it up almost in three clicks: we just enabled the time tracking option in the project settings, which added a couple of fields.

![0853_Kanban_YouTrack/image34.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/917/f64/d00/917f64d00099ca79a0843aa0b9673ae6.png)

Right after that, we could manually add so-called work items and get special reports.
I've mentioned reports in the "Fields and tags" section. By the way, you can customize work items by adding your own types of work. This way, you can see what exactly each assignee did within a task during a particular period of time. After enabling time tracking, you could leave it at that. However, in this case, the users are responsible for logging the time spent on a task: after working on a task for a while, an employee must add a work item with the time spent to the task card. But employees often forget to do that, or they are too lazy to. That's why YouTrack offers automatic time logging. For time management, YouTrack provides two ready-to-use workflows with different approaches:

* [In Progress Work Timer](https://www.jetbrains.com/help/youtrack/standalone/Workflow-Work-Timer.html). This workflow tracks the task state and adjusts time tracking accordingly (starts and stops the timer);
* [Stopwatch-style Work Timer](https://www.jetbrains.com/help/youtrack/standalone/Workflow-Standalone-Work-Timer.html). This workflow allows you to start and stop the timer manually; it adds a dedicated field with the "Start" and "Stop" values.

The second workflow is close to the manual mode, but even such a semi-automatic approach is much better than the manual one: the user still has to start the process, but they don't have to estimate the time spent — the script tracks it. We chose In Progress Work Timer. It logs work items automatically but requires much more effort to configure. I'll talk about this workflow – and the others – in the next section.

### Workflows

You can access project workflows in the project properties. Here is our current set of workflows.

![0853_Kanban_YouTrack/image35.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/38d/858/6ce/38d8586cee6c7815939416d64b8d8644.png)

You can find the full set of workflows in the global settings. Each workflow can include several rules (modules). Each workflow has a human-readable title.
Moreover, each workflow and each of its rules also have a unique name, for example, "assignee-state". You can see the rules if you expand the workflow node.

![0853_Kanban_YouTrack/image36.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/4b9/e55/3c7/4b9e553c74e4758a4de3cd0c7feea2fe.png)

For rules, the name corresponds to the JS script file, for example, "check-assignee-state". The "@jetbrains/youtrack-workflow-" prefix in workflow names indicates default (pre-configured) YouTrack workflows. YouTrack has five types of rules:

* On-change: applied when a task is changed;
* On-schedule: applied to tasks according to a schedule;
* State-machine: regulates transitions from one value to another for custom fields;
* Action: executed when a user selects an action (a custom interface command);
* Custom: provides objects and functions for use in other workflow modules.

Let's move on and discuss our workflows.

**Assignee and State**

We created this workflow ourselves. It contains a single rule of the "On-change" type:

* Block users from setting an invalid assignee.

The rule tracks changes in the "State" and "Assignee" fields and prevents invalid combinations of their values. For example, someone tries to move an unassigned task to "In progress" or resolve it right away. In this case, the script displays an error, and the changes are not applied. YouTrack has the default "[Issue Property Combinations](https://www.jetbrains.com/help/youtrack/standalone/Workflow-Issue-Properties-Combinations.html)" workflow with similar options. However, we created our own workflow because it is more convenient. And because, why not? Besides, YouTrack uses default names for its customizable fields, while we renamed many fields to more suitable ones. The default field names would have complicated the modification of the default workflows.

**Assignee Visibility**

This is a default (pre-configured) [workflow](https://www.jetbrains.com/help/youtrack/standalone/Workflow-Assignee-Visibility-Group.html).
The workflow has one "On-change" rule:

* Warn when issue is not visible to assignee.

The rule makes sure assignees are members of the task's visibility group. It is useful for teams that run several projects and control access through user groups: if someone forgets to add a user to a group, that user won't be able to see tasks assigned to them. The rule checks the visibility before you assign a task to a team member. We are not that interested in this rule, since we have one project and all users have access to it by default. Nevertheless, the rule is auto-attached, so we keep it to be on the safe side.

**Common**

This is a custom workflow of ours with three modules:

* common-datetime (the "Custom" type);
* common-users (the "Custom" type);
* DEBUG ACTION (the "Action" type).

For the first two modules, I gave their names instead of titles: they contain blocks of code that we reuse in other rules, so they don't need human-readable titles. The third module is meant for debugging, so you don't have to (although you can) give it a title either; that's why I refer to it as DEBUG ACTION. The module's name is "common-debug-action". The "common-datetime" module contains time-tracking logic: it provides the business calendar for the current year and stores our function that calculates the number of minutes between two dates, counting only workdays. I'll tell you more about this module in the "In Progress Work Timer" section. The "common-users" module contains code that automatically notifies users about important changes. For example, you receive a notification if a task is overdue, or if your task has had no comments for the last three days. We decided to notify assignees and their higher-ups to increase productivity. I've already described this mechanism in the "Users, groups, roles" section. Now let's talk about the "DEBUG ACTION" module. You can guess its purpose.
We created the "DEBUG ACTION" module for debugging. This is an "Action" type rule. Such rules let you create quick actions for tasks in YouTrack. To see the list of available actions, click the toolbar item with three dots.

![0853_Kanban_YouTrack/image38.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/2c3/826/188/2c3826188b0dd6a75b6c4a9db9494946.png)

You can also select an action in the Apply Command window:

![0853_Kanban_YouTrack/image40.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/e72/166/ed4/e72166ed4baa7e18a19dd89ad8c06c6c.png)

When you select an action, its block of code is executed immediately. You can customize the visibility of such rules (menu items); for example, "DEBUG ACTION" is displayed only for one specified user (me). You can also configure in which scenarios actions are displayed, for example, when users create new tasks, and so on. What's this module for? We had some difficulties debugging workflow scripts in YouTrack InCloud. In fact, debugging happens after the workflow is executed: the workflow editor analyzes the output data. Such debugging seems sufficient, but we wanted to run custom code (modules) directly in the cloud. I created DEBUG ACTION to solve this issue. Perhaps there were smarter and more technically correct ways to run custom code, but my approach worked well enough.

**Dependencies**

This workflow is a default [workflow](https://www.jetbrains.com/help/youtrack/standalone/Workflow-Dependencies.html) that includes a single "On-change" rule:

* Block users from resolving issues with unresolved dependencies.

The rule controls linked tasks, namely tasks with the "Depend" relationship (depends on — is required for). This dependency means that one task depends on another: you cannot close a task while the task it depends on is unresolved. We often link tasks, so this rule is useful. YouTrack has one peculiarity here.
You can link tasks using the built-in (or customized) set of link types. But the behavior behind these links is not available without the appropriate workflows, so be careful when deactivating default YouTrack workflows. The same goes for custom field names: YouTrack workflows rely on default sets of fields, and if you change some names or values, be ready to edit your workflows. Check out [the documentation](https://www.jetbrains.com/help/youtrack/standalone/Link-Issues.html) for more information about links (dependencies).

**Due Date**

This [workflow](https://www.jetbrains.com/help/youtrack/standalone/Workflow-Due-Date.html) is also a default one. Due Date helps you set deadlines for tasks. The workflow contains two rules by default; however, we decided to use only the second one:

* Require due dates for submitted issues (we do not use it);
* Notify assignee about overdue issues.

The first one is an "On-change" rule that requires every reporter to set a "Due Date" when creating a task. We disabled this strict rule. The second rule we found useful: it notifies assignees and their higher-ups about overdue tasks. It is an "On-schedule" rule: it processes tasks on a schedule (the "cron" parameter) and uses a search query (the "search" parameter) to get the list of tasks. We slightly improved this rule. By default, an "On-schedule" rule checks the value in the Due Date field on workdays, from Monday to Friday. But sometimes public holidays fall on workdays, so we added a condition that checks the Due Date field only on workdays and skips the predefined public holidays. To do this, we used the verification functions from the "common-datetime" module. As I mentioned earlier, we also use the extended mailing feature from the "common-users" module to notify users.

**Duplicates**

This [workflow](https://www.jetbrains.com/help/youtrack/standalone/Workflow-Duplicates.html) manages duplicate tasks.
We rarely create duplicate tasks, but sometimes it happens. Therefore, we added this workflow to the project. The Duplicates workflow contains five "On-change" rules (we do not use one of them):

* Attach duplicate links to single duplicated issue;
* Raise priority when an issue is duplicated by another issue (we don't use it);
* Reopen issue when all duplicate links are removed;
* Set state to "Duplicate" when duplicates link is added to issue;
* Require links to duplicate issue when state becomes "Duplicate".

YouTrack provides two ways to mark a task as a duplicate:

* a user sets the "Duplicate" state on the duplicate task, and YouTrack automatically links the tasks (duplicates — is duplicated by);
* or vice versa: a user links the tasks, and YouTrack automatically sets the "Duplicate" state on the duplicating task.

Moreover, if you delete the link between duplicate tasks or change the "Duplicate" state, correct behavior has to be ensured. The workflow monitors all of this. We modified the rules of this workflow only slightly: we changed some task state names (ours differ from the defaults).

**In Progress Work Timer**

Now let's talk about a default [workflow](https://www.jetbrains.com/help/youtrack/standalone/Workflow-Work-Timer.html) that was challenging to configure. As I said earlier, we wanted to try time tracking, so we were glad to see this feature in the list of pre-installed ones. However, the out-of-the-box implementation was not good enough for us. The default process includes two "On-change" rules:

* Start timer when issue is in progress;
* Stop timer when issue is fixed.

The first rule tracks the task state. When an employee sets a task to "In progress", the "TimerTime" field logs the current date and time. When the employee finishes the task (changes the "In progress" state to any other), the script of the second rule automatically adds a work item.
The rules record the time span between the current date and the value saved earlier in the "TimerTime" field. The algorithm seems simple, but it has some disadvantages:

* the time is recorded only when a task is "In Progress". We have additional states which also require time tracking, for example the "Review" state. We wrote our additional states into the scripts; it was easy to do;
* the original version of the "Stop timer when issue is fixed" script specifies the current YouTrack user (ctx.CurrentUser) as the author of the work item it adds. This does not work for us, because a team lead often closes tasks at the end. In that case, the work item is recorded as performed by the one who resolved the task. It does not affect the total time spent on a task, but it makes a time report pointless: [a handy Time Report](https://www.jetbrains.com/help/youtrack/standalone/Time-Report.html) allows you to see the contribution of each employee. Our script adds a work item for the assignee of the task instead. The "Assignee and State" workflow, described earlier, checks that a task has an assignee;
* the change of assignee is ignored. Several assignees could work on a task, but a work item is added only for the person who closes it. To solve this issue, we added an additional module named "Restart timer when the assignee is changed". This simple rule tracks the change of assignees for tasks in progress. When the assignee changes, the script creates a work item for the previous assignee with the time that assignee spent on the task, and the timer restarts;
* this one is more of an improvement than a bug fix. If an assignee works on a task for a long time (for example, several days), then we don't know the exact amount of time spent, because a work item is added only when a state or an assignee changes.
Only then do we see the total time spent on a pie chart in the task list:

![0853_Kanban_YouTrack/image41.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/fb9/54d/43b/fb954d43b19dff3012a58219df68102f.png)

Let's say an assignee forgets about a task "In Progress" and takes another task. Then they come back to the first task and resolve it. In such a situation, a great amount of time is added during which no one actually worked on the task. In fact, that's not a big deal: previously created work items are editable (you can manually change the amount of time added). But we want a fully automated process, don't we? :) Therefore, we added another "On-schedule" rule named "Update timer on schedule". Every workday morning, this rule restarts the timer for tasks in progress. Our tasks thus get a new work item every day, and we can view the amount of time spent with an accuracy of one day;

* when a work item is added, "Stop timer when issue is fixed" uses the default "[intervalToWorkingMinutes](https://www.jetbrains.com/help/youtrack/devportal/v1-Project.html)" function from the "Project" class to compute the difference in minutes between two dates. The function uses a minimal set of YouTrack settings for time tracking:

![0853_Kanban_YouTrack/image42.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/b4f/1f5/599/b4f1f5599f33788e5690496b7766da75.png)

The function counts time only on the days specified as workdays and ignores lunch breaks and holidays. It gives a rough estimate in minutes. Let's say your workday starts at 9:00 and ends at 18:00 (as ours does). You start a task on Tuesday at 5:45 p.m. The next morning, on Wednesday, you continue working on the task and resolve it at 9:15 a.m. The "intervalToWorkingMinutes" function reports eight and a half working hours (510 minutes).
That is because switching the task to the next day immediately gives us one working day (1\*8\*60 = 480 minutes) plus the additional minutes actually worked (15 + 15). We wanted more accuracy, so we wrote our own version of the "intervalToWorkingMinutes" function which satisfies our needs. For the above example, our function returns 30 minutes. It also excludes the weekend (Saturday and Sunday) and the public holidays in our country. We use this function both to calculate the time between two dates and to time the notifications sent from other workflows (for example, about "Due Date"): users receive notifications during working hours only. The "intervalToWorkingMinutes" function and our function live in the shared "common-datetime" module.

Thus, we modified the default "In Progress Work Timer" workflow and expanded it to four rules:

* Start timer when issue is in progress;
* Stop timer when issue is fixed (renamed to "Stop timer when the issue is resolved or paused");
* Update timer on schedule (added);
* Restart timer when the assignee is changed (added).

You may think that we overdid it with time tracking, and this is partly true. But YouTrack encourages us to customize the workflows as much as possible, so we couldn't resist :) Besides, we didn't customize everything at once. We still keep tweaking things from time to time.

Now I'm going to tell you about one interesting error that we found after we started to use this workflow actively. It is a bug (or a feature) of the tracker related to the "Scope" field, which contains the estimated scope of the task. Initially, we set three working days (3d) as the default value for this field. Some time later, we noticed a strange thing: tasks with the three-day value did not have a pie chart.
Take a look at the screenshot:

![0853_Kanban_YouTrack/image44.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/fd1/0ed/d69/fd10edd69a662928dcba2f0806823325.png)

The second and the fifth tasks do not have pie charts. While working with YouTrack, we found out that we need to specify the value in the "Scope" field explicitly. Apparently, only then does something happen under the hood of YouTrack, and the value is handled correctly. We had to disable the default value. Besides, the requirement to specify the scope is useful, as it encourages us to estimate the required time at the beginning. Perhaps the YouTrack developers have already fixed the bug; I was too lazy to study this issue in detail.

Finally, I want to say a few words about the management side of time tracking. We hope that we've managed to create a convenient workflow mechanism, but that's not enough: we also need to learn how to use the tool and develop the best workflow. Here's a simple example: if you stop working on a task, don't forget to change its state (put it "On hold" to stop the timer), and vice versa. You have a clear picture of your workflows if you change states promptly.

However, this self-evident approach has a downside. At the beginning, employees tried to hack the system. They put a task on hold, so the timer stopped, but in fact they continued to work on the task. As a result, the team seemed to do nothing: there were only a few tasks in progress. Also, many users thought that their higher-ups would use time tracking to compute some secret KPIs, to raise or lower their salary, etc. Of course, we did not want to do that. We implemented this workflow mainly to increase the efficiency of task management and to estimate possible labor costs more precisely.

**Overdue work activities**

This is an untypical workflow. It involves a single "On-change" rule:

* Notify assignee on overdue work activities.
This module is very similar to the "Notify assignee about overdue issues" module from the "Due Date" workflow described above. It helps us to regularly notify assignees and their higher-ups if there were no comments on a task for a certain period. We use the following search query:

```
State:{In progress},Review commented:-{minus 3d}..Today created:-{minus 1d}..Today
```

The query displays all tasks in progress except those that received comments over the last three days or were created recently (the day before). The workflow helps us identify forgotten tasks and encourages employees to post progress updates in the comments.

**Subtasks**

This is a default [workflow](https://www.jetbrains.com/help/youtrack/standalone/Workflow-Subtasks.html). "Subtasks" has a set of two "On-change" rules that automate subtasks:

* Open parent task when subtask changes to an unresolved state;
* Fix parent task when all subtasks are resolved (we do not use it).

To create a hierarchy of tasks, YouTrack uses a link of the "Subtask" type (subtask of — parent for). I talked about links in the "Dependencies" section. Subtasks are very useful; we really lacked this feature in Bitbucket. By the way, we noticed an interesting thing: in YouTrack we have, on average, more tasks in progress per unit of time than we had in Bitbucket, with a comparable number of employees. I think one of the reasons is the frequent use of subtasks in YouTrack, which make it easier to decompose tasks.

Let's get back to the rules. The first rule checks the state of all subtasks of an already closed parent task: if at least one of them is taken back to work, the parent task is reopened. The second rule automatically resolves the parent task when all its subtasks are resolved. We found the second rule inconvenient, because we often use parent tasks as full-fledged tasks, not just for aggregating subtasks.
An employee may still be working on the task after all subtasks are resolved. Therefore, we disabled this rule. However, we needed another rule: we wanted to prohibit users from closing a parent task while at least one subtask is still in progress. So, we expanded the default workflow with our own rule, "Block users from resolving issues with unresolved subtasks".

That's all I wanted to say about our workflows. The article seems to be a long read, but I wanted to show that YouTrack is user-friendly and that you can easily customize your workflows. I guess our team succeeded.

Reports
-------

YouTrack has time reports, a useful feature that we also wanted to adopt. Time reports help to collect data and summarize statistics. For example, we often use the [built-in](https://www.jetbrains.com/help/youtrack/standalone/Time-Report.html) time report that I mentioned earlier. We do not monitor the time each employee spent on a task for its own sake; time reports help us to estimate employees' contributions to team progress, implement effective time management strategies, and analyze our workflows.

However, we got used to the reports we had in Bitbucket (we used our own report generator there) and did not want to give them up. Fortunately, YouTrack's API gives access to all tasks. Thus, every employee can customize and use any built-in YouTrack report, and our higher-ups receive a large summary report every month, as they did with Bitbucket.

Employee training
-----------------

Finally, I'll tell you how we trained our team to work with YouTrack. We used workflow tracking tools before, so our team did not have difficulties with the YouTrack interface. YouTrack has lots of features, and the number of options may confuse you at first. But it's no big deal: the YouTrack team keeps simplifying the interface. Recently, they added the "Light" mode, changed the menu structure, and so on. I often hear people say that YouTrack is too complex for users. It's partly true.
But I still think that it's a matter of time and practice to get used to the tool. However, it may be difficult to adjust to changed workflows, field structures, and task links, and to the new time tracking. Our task force introduced these features gradually, and employees had to keep up with every change. Team members made an effort to learn the new features, and some of them didn't like that. We gave two presentations to explain the YouTrack features, and we still get questions and objections.

I also want to add an Easter egg here for our employees. One of the purposes of this article is to provide our team members with additional information about YouTrack. Hard workers of PVS-Studio: if you've read this far and are ready to answer extra questions about YouTrack, I've got a gift for you. :)

Conclusion
----------

Looking back, I'm proud of and satisfied with the work done. I think the other employees who helped to implement kanban and switch to YouTrack feel the same. We've completed an important stage in the development of our company. Introducing YouTrack was difficult, but very exciting. We had to solve lots of management and technical issues. We held meetings and discussions, agreed on deadlines, defended our points of view, and wrote code in JavaScript. I want to thank everyone involved, and I wish us all to remain passionate about what we do. It's just the beginning. And that's great!
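P.S. For the technically curious, here is a rough, plain-Python sketch of the working-minutes calculation described in the "In Progress Work Timer" section. Our real implementation is a JavaScript rule inside YouTrack; the dates and the holiday set below are made up for illustration.

```python
from datetime import datetime, timedelta

WORK_START, WORK_END = 9, 18              # working day is 9:00-18:00
HOLIDAYS = {datetime(2021, 5, 3).date()}  # made-up public-holiday list

def working_minutes(start, end):
    """Count the minutes between two datetimes that fall inside working
    hours on working days (Mon-Fri, excluding HOLIDAYS).  Walks the span
    minute by minute: simple and obviously correct, if not fast."""
    minutes = 0
    t = start
    while t < end:
        if (t.weekday() < 5 and t.date() not in HOLIDAYS
                and WORK_START <= t.hour < WORK_END):
            minutes += 1
        t += timedelta(minutes=1)
    return minutes

# The example from the text: a task started on Tuesday at 5:45 p.m.
# and resolved on Wednesday at 9:15 a.m. gives 15 + 15 = 30 minutes.
spent = working_minutes(datetime(2021, 4, 20, 17, 45),
                        datetime(2021, 4, 21, 9, 15))
```

A production version would step over whole days instead of single minutes, but the minute walk keeps the rules easy to verify.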
https://habr.com/ru/post/572602/
Archive for the ‘NAG Library’ Category

What is Second Order Cone Programming (SOCP)?

Second Order Cone Programming (SOCP) problems are a type of optimisation problem that have applications in many areas of science, finance and engineering. A summary of the type of problems that can make use of SOCP, including things as diverse as designing antenna arrays, finite impulse response (FIR) filters and structural equilibrium problems, can be found in the paper ‘Applications of Second Order Cone Programming’ by Lobo et al. There are also a couple of examples of using SOCP for portfolio optimisation in the GitHub repository of the Numerical Algorithms Group (NAG).

A large scale SOCP solver was one of the highlights of the Mark 27 release of the NAG library (see here for a poster about its performance). Those who have used the NAG library for years will expect this solver to have interfaces in Fortran and C and, of course, they are there. In addition to this is the fact that Mark 27 of the NAG Library for Python was released at the same time as the Fortran and C interfaces, which reflects the importance of Python in today’s numerical computing landscape. Here’s a quick demo of how the new SOCP solver works in Python.

NAG’s handle_solve_socp_ipm function (also known as e04pt) is a solver from the NAG optimization modelling suite for large-scale second-order cone programming (SOCP) problems based on an interior point method (IPM). It solves problems of the form:
$$ \begin{array}{ll} {\underset{x \in \mathbb{R}^{n}}{minimize}\ } & {c^{T}x} \\ \text{subject to} & {l_{A} \leq Ax \leq u_{A}\text{,}} \\ & {l_{x} \leq x \leq u_{x}\text{,}} \\ & {x \in \mathcal{K}\text{,}} \\ \end{array} $$

where $\mathcal{K} = \mathcal{K}^{n_{1}} \times \cdots \times \mathcal{K}^{n_{r}} \times \mathbb{R}^{n_{l}}$ is a Cartesian product of quadratic (second-order type) cones and $n_{l}$-dimensional real space, and $n = \sum_{i = 1}^{r}n_{i} + n_{l}$ is the number of decision variables. Here $c$, $x$, $l_x$ and $u_x$ are $n$-dimensional vectors. $A$ is an $m$ by $n$ sparse matrix, and $l_A$ and $u_A$ are $m$-dimensional vectors. Note that $x \in \mathcal{K}$ partitions subsets of variables into quadratic cones, and each $\mathcal{K}^{n_{i}}$ can be either a quadratic cone or a rotated quadratic cone. These are defined as follows:

- Quadratic cone:

$$ \mathcal{K}_{q}^{n_{i}} := \left\{ {z = \left\{ {z_{1},z_{2},\ldots,z_{n_{i}}} \right\} \in {\mathbb{R}}^{n_{i}} \quad : \quad z_{1}^{2} \geq \sum\limits_{j = 2}^{n_{i}}z_{j}^{2},\quad z_{1} \geq 0} \right\}\text{.} $$

- Rotated quadratic cone:

$$ \mathcal{K}_{r}^{n_{i}} := \left\{ {z = \left\{ {z_{1},z_{2},\ldots,z_{n_{i}}} \right\} \in {\mathbb{R}}^{n_{i}}\quad:\quad 2z_{1}z_{2} \geq \sum\limits_{j = 3}^{n_{i}}z_{j}^{2}, \quad z_{1} \geq 0, \quad z_{2} \geq 0} \right\}\text{.} $$

For a full explanation of this routine, refer to e04ptc in the NAG Library Manual.

Using the NAG SOCP Solver from Python

This example, derived from the documentation for the handle_set_group function, solves the following SOCP problem: minimize

$${10.0x_{1} + 20.0x_{2} + x_{3}}$$

```python
from naginterfaces.base import utils
from naginterfaces.library import opt

# The problem size:
n = 3

# Create the problem handle:
handle = opt.handle_init(nvar=n)

# Set objective function
opt.handle_set_linobj(handle, cvec=[10.0, 20.0, 1.0])
```

subject to the bounds

$$ \begin{array}{rllll} {- 2.0} & \leq &
x_{1} & \leq & 2.0 \\ {- 2.0} & \leq & x_{2} & \leq & 2.0 \\ \end{array} $$

```python
# Set box constraints
opt.handle_set_simplebounds(
    handle,
    bl=[-2.0, -2.0, -1.e20],
    bu=[2.0, 2.0, 1.e20]
)
```

the general linear constraints

$$ \begin{array}{rcrcrcrcr} & & {- 0.1x_{1}} & - & {0.1x_{2}} & + & x_{3} & \leq & 1.5 \\ 1.0 & \leq & {- 0.06x_{1}} & + & x_{2} & + & x_{3} & & \end{array} $$

```python
# Set linear constraints
opt.handle_set_linconstr(
    handle,
    bl=[-1.e20, 1.0],
    bu=[1.5, 1.e20],
    irowb=[1, 1, 1, 2, 2, 2],
    icolb=[1, 2, 3, 1, 2, 3],
    b=[-0.1, -0.1, 1.0, -0.06, 1.0, 1.0]
)
```

and the cone constraint

$$\left( {x_{3},x_{1},x_{2}} \right) \in \mathcal{K}_{q}^{3}\text{.}$$

```python
# Set cone constraint
opt.handle_set_group(
    handle,
    gtype='Q',
    group=[3, 1, 2],
    idgroup=0
)
```

We set some algorithmic options. For more details on the options available, refer to the routine documentation.

```python
# Set some algorithmic options.
for option in [
        'Print Options = NO',
        'Print Level = 1'
]:
    opt.handle_opt_set(handle, option)

# Use an explicit I/O manager for abbreviated iteration output:
iom = utils.FileObjManager(locus_in_output=False)
```

Finally, we call the solver

```python
# Call SOCP interior point solver
result = opt.handle_solve_socp_ipm(handle, io_manager=iom)
```

```
 ------------------------------------------------
  E04PT, Interior point method for SOCP problems
 ------------------------------------------------

 Status: converged, an optimal solution found
 Final primal objective value -1.951817E+01
 Final dual objective value   -1.951817E+01
```

The optimal solution is

```python
result.x
```

```
array([-1.26819151, -0.4084294 ,  1.3323379 ])
```

and the objective function value is

```python
result.rinfo[0]
```

```
-19.51816515094211
```

Finally, we clean up after ourselves by destroying the handle:

```python
# Destroy the handle:
opt.handle_free(handle)
```

As you can see, the NAG Library for Python interface follows the mathematics quite closely. NAG also recently added support for the popular cvxpy modelling language, which I'll discuss another time.
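As a quick sanity check (not part of the original example), we can verify with plain Python that the solution printed above satisfies the constraints and reproduces the objective value. The numbers are copied from the solver output; the check itself needs nothing beyond the standard library.

```python
# Solution reported by the solver above (copied from result.x).
x1, x2, x3 = -1.26819151, -0.4084294, 1.3323379

# Objective c^T x with c = [10, 20, 1]; should match result.rinfo[0].
obj = 10.0 * x1 + 20.0 * x2 + 1.0 * x3

# Linear constraints: both turn out to be active (tight) at the optimum.
lin1 = -0.1 * x1 - 0.1 * x2 + x3   # upper-bounded by 1.5
lin2 = -0.06 * x1 + x2 + x3        # lower-bounded by 1.0

# Cone membership (x3, x1, x2) in K_q^3: x3 >= 0 and x3^2 >= x1^2 + x2^2.
cone_slack = x3 * x3 - (x1 * x1 + x2 * x2)

print(obj, lin1, lin2, cone_slack)
```

At the optimum both linear constraints and the cone constraint hold with equality to within solver tolerance, which is typical for an interior point method stopping at a nearly tight solution.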
I’ve been a user and supporter of the commercial numerical libraries from NAG for several years now, using them in MATLAB, Fortran, C and Python among others. They recently updated their C library to version 24, which now includes 1516 functions apparently. NAG’s routines are fast, accurate, extremely well supported and often based on cutting-edge numerical research (I know this because academics at my University, The University of Manchester, are responsible for some of said research). I often use functions from the NAG C library in MATLAB mex routines in order to help speed up researchers’ code. Here’s some of the new functionality in Mark 24 (a full list of functions is available on NAG’s website):

• Hypergeometric function (1f1 and 2f1)
• Nearest Correlation Matrix
• Elementwise weighted nearest correlation matrix
• Wavelet Transforms & FFTs
— Three dimensional discrete single level and multi-level wavelet transforms
— Fast Fourier Transforms (FFTs) for two dimensional and three dimensional real data
• Matrix Functions
— Matrix square roots and general powers
— Matrix exponentials (Schur-Parlett)
— Fréchet Derivative
— Calculation of condition numbers
• Interpolation
— Interpolation for 5D and higher dimensions
• Optimization
— Local optimization: Non-negative least squares
— Global optimization: Multi-start versions of general nonlinear programming and least squares routines
• Random Number Generators
— Brownian bridge and random fields
• Statistics
— Gaussian mixture model
— Best subsets of given size (branch and bound)
— Vectorized probabilities and probability density functions of distributions
— Inhomogeneous time series analysis, moving averages
— Routines that combine two sums of squares matrices to allow large datasets to be summarised
• Data fitting
— Fit of 2D scattered data by two-stage approximation (suitable for large datasets)
• Quadrature
— 1D adaptive for badly-behaved integrals
• Sparse eigenproblem
— Driver for real general matrix, driver for banded complex
eigenproblem
— Real and complex quadratic eigenvalue problems
• Sparse linear systems
— Block diagonal pre-conditioners and solvers
• ODE solvers
— Threadsafe initial value ODE solvers
• Volatility
— Heston model with term structure

In a recent Stack Overflow query, someone asked if you could switch off the balancing step when calculating eigenvalues in Python. In the document A case where balancing is harmful, David S. Watkins describes the balancing step as ‘the input matrix A is replaced by a rescaled matrix A* = D⁻¹AD, where D is a diagonal matrix chosen so that, for each i, the ith row and the ith column of A* have roughly the same norm.’ Such balancing is usually very useful and so is performed by default by software such as MATLAB or Numpy. There are times, however, when one would like to switch it off. In MATLAB, this is easy, and the following is taken from the online MATLAB documentation:

```matlab
A = [ 3.0  -2.0  -0.9  2*eps;
     -2.0   4.0   1.0  -eps;
     -eps/4 eps/2 -1.0  0;
     -0.5  -0.5   0.1  1.0];
[VN,DN] = eig(A,'nobalance')
```

```
VN =

    0.6153   -0.4176   -0.0000   -0.1528
   -0.7881   -0.3261         0    0.1345
   -0.0000   -0.0000   -0.0000   -0.9781
    0.0189    0.8481   -1.0000    0.0443

DN =

    5.5616         0         0         0
         0    1.4384         0         0
         0         0    1.0000         0
         0         0         0   -1.0000
```

At the time of writing, it is not possible to directly do this in Numpy (as far as I know at least). Numpy’s eig command currently uses the LAPACK routine DGEEV to do the heavy lifting for double precision matrices. We can see this by looking at the source code of numpy.linalg.eig, where the relevant subsection is

```python
lapack_routine = lapack_lite.dgeev
wr = zeros((n,), t)
wi = zeros((n,), t)
vr = zeros((n, n), t)
lwork = 1
work = zeros((lwork,), t)
results = lapack_routine(_N, _V, n, a, n, wr, wi, dummy, 1, vr, n, work, -1, 0)
lwork = int(work[0])
work = zeros((lwork,), t)
results = lapack_routine(_N, _V, n, a, n, wr, wi, dummy, 1, vr, n, work, lwork, 0)
```

My plan was to figure out how to tell DGEEV not to perform the balancing step and I’d be done.
Sadly, however, it turns out that this is not possible. Taking a look at the reference implementation of DGEEV, we can see that the balancing step is always performed and is not user controllable. Here’s the relevant bit of Fortran:

```fortran
*     Balance the matrix
*     (Workspace: need N)
*
      IBAL = 1
      CALL DGEBAL( 'B', N, A, LDA, ILO, IHI, WORK( IBAL ), IERR )
```

So, using DGEEV is a dead-end unless we are willing to modify and recompile the LAPACK source — something that’s rarely a good idea in my experience. There is another LAPACK routine that is of use, however, in the form of DGEEVX, which allows us to control balancing. Unfortunately, this routine is not part of the numpy.linalg.lapack_lite interface provided by Numpy and I’ve yet to figure out how to add extra routines to it. I’ve also discovered that this functionality is an open feature request in Numpy.

Enter the NAG Library

My University has a site license for the commercial Numerical Algorithms Group (NAG) library. Among other things, NAG offers an interface to all of LAPACK along with an interface to Python. So, I go through the installation and do

```python
import numpy as np
from ctypes import *
from nag4py.util import Nag_RowMajor, Nag_NoBalancing, Nag_NotLeftVecs, Nag_RightVecs, \
    Nag_RCondEigVecs, Integer, NagError, INIT_FAIL
from nag4py.f08 import nag_dgeevx

eps = np.spacing(1)
np.set_printoptions(precision=4, suppress=True)

def unbalanced_eig(A):
    """
    Compute the eigenvalues and right eigenvectors of a square array
    using DGEEVX via the NAG library.

    Requires the NAG C library and NAG's Python wrappers.
    The balancing step that's performed in DGEEV is not performed here.
    As such, this function is the same as the MATLAB command
    eig(A,'nobalance')

    Parameters
    ----------
    A : (M, M) Numpy array
        A square array of real elements.

        On exit: A is overwritten and contains the real Schur form of
        the balanced version of the input matrix.

    Returns
    -------
    w : (M,) ndarray
        The eigenvalues
    v : (M, M) ndarray
        The eigenvectors

    Author: Mike Croucher ()
    Testing has been minimal
    """
    order = Nag_RowMajor
    balanc = Nag_NoBalancing
    jobvl = Nag_NotLeftVecs
    jobvr = Nag_RightVecs
    sense = Nag_RCondEigVecs

    n = A.shape[0]
    pda = n
    pdvl = 1

    wr = np.zeros(n)
    wi = np.zeros(n)

    vl = np.zeros(1)
    pdvr = n
    vr = np.zeros(pdvr*n)

    ilo = c_long(0)
    ihi = c_long(0)

    scale = np.zeros(n)
    abnrm = c_double(0)
    rconde = np.zeros(n)
    rcondv = np.zeros(n)

    fail = NagError()
    INIT_FAIL(fail)

    nag_dgeevx(order, balanc, jobvl, jobvr, sense, n,
               A.ctypes.data_as(POINTER(c_double)), pda,
               wr.ctypes.data_as(POINTER(c_double)),
               wi.ctypes.data_as(POINTER(c_double)),
               vl.ctypes.data_as(POINTER(c_double)), pdvl,
               vr.ctypes.data_as(POINTER(c_double)), pdvr,
               ilo, ihi,
               scale.ctypes.data_as(POINTER(c_double)), abnrm,
               rconde.ctypes.data_as(POINTER(c_double)),
               rcondv.ctypes.data_as(POINTER(c_double)), fail)

    if all(wi == 0.0):
        w = wr
        v = vr.reshape(n, n)
    else:
        w = wr + 1j*wi
        v = np.array(vr, w.dtype).reshape(n, n)

    return (w, v)
```

Define a test matrix:

```python
A = np.array([[3.0, -2.0, -0.9, 2*eps],
              [-2.0, 4.0, 1.0, -eps],
              [-eps/4, eps/2, -1.0, 0],
              [-0.5, -0.5, 0.1, 1.0]])
```

Do the calculation

```python
(w, v) = unbalanced_eig(A)
```

which gives

```
(array([ 5.5616,  1.4384,  1.    , -1.    ]),
 array([[ 0.6153, -0.4176, -0.    , -0.1528],
        [-0.7881, -0.3261,  0.    ,  0.1345],
        [-0.    , -0.    , -0.    , -0.9781],
        [ 0.0189,  0.8481, -1.    ,  0.0443]]))
```

This is exactly what you get by running the MATLAB command eig(A,'nobalance').

Note that unbalanced_eig(A) changes the input matrix A to

```
array([[ 5.5616, -0.0662,  0.0571,  1.3399],
       [ 0.    ,  1.4384,  0.7017, -0.1561],
       [ 0.    ,  0.    ,  1.    , -0.0132],
       [ 0.    ,  0.    ,  0.    , -1.    ]])
```

According to the NAG documentation, this is the real Schur form of the balanced version of the input matrix. I can’t see how to ask NAG to not do this. I guess that if it’s not what you want unbalanced_eig() to do, you’ll need to pass a copy of the input matrix to NAG.
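As an aside, if you're curious what the balancing we just switched off actually does, here is a toy, pure-Python sketch of the idea: a diagonal similarity scaling by powers of two that makes each row and the matching column have comparable norms. This is only an illustration of the concept, not LAPACK's actual DGEBAL algorithm (which also permutes the matrix to isolate eigenvalues).

```python
def balance(a, radix=2.0):
    """Toy balancing sketch: repeatedly rescale row/column pairs by powers
    of the radix so that, for each i, the i-th row and i-th column have
    comparable 1-norms (off-diagonal part only).  The rescaling
    A -> D^-1 A D is a similarity transform, so the eigenvalues are
    unchanged while the matrix becomes better scaled.  Powers of two are
    used so that the scaling introduces no rounding error."""
    n = len(a)
    a = [row[:] for row in a]  # work on a copy
    done = False
    while not done:
        done = True
        for i in range(n):
            r = sum(abs(a[i][j]) for j in range(n) if j != i)
            c = sum(abs(a[j][i]) for j in range(n) if j != i)
            if r == 0.0 or c == 0.0:
                continue
            f = 1.0
            # Grow/shrink f until row and column norms are within a
            # factor of the radix of each other.
            while c < r / radix:
                c *= radix
                r /= radix
                f *= radix
            while c >= r * radix:
                c /= radix
                r *= radix
                f /= radix
            if f != 1.0:
                done = False
                # Apply the similarity: scale column i by f, row i by 1/f.
                for j in range(n):
                    a[j][i] *= f
                    a[i][j] /= f
    return a

# A badly scaled matrix: the off-diagonal entries differ by twelve
# orders of magnitude.
A = [[1.0, 1.0e6],
     [1.0e-6, 1.0]]
B = balance(A)
```

After balancing, both off-diagonal entries are of order one, while the trace and determinant (and hence the eigenvalues) are exactly those of the original matrix.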
The IPython notebook

The code for this article is available as an IPython Notebook.

The future

This blog post was written using Numpy version 1.7.1. There is an enhancement request for the functionality discussed in this article open in Numpy’s git repo, so I expect this article to become redundant pretty soon.

The Numerical Algorithms Group (NAG) are principally known for their numerical library, but they also offer products such as a MATLAB toolbox and a Fortran compiler. My employer, The University of Manchester, has a full site license for most of NAG’s products, and they are heavily used by both our students and researchers. While at a recent software conference, I saw a talk by NAG’s David Sayers in which he demonstrated some of the features of the NAG Fortran Compiler. During this talk he showed some examples of broken Fortran and asked us if we could spot how they were broken without compiler assistance. I enjoyed the talk and so asked David if he would mind writing a guest blog post on the subject for WalkingRandomly. He duly obliged. What follows is a guest blog post by David Sayers of NAG.

The principal developer of the compiler is Malcolm Cohen, co-author of the book Modern Fortran Explained along with Michael Metcalf and John Reid. Of all people, Malcolm Cohen should know Fortran and the way the standard should be enforced! His compiler reflects that knowledge and is designed to help the programmer detect how programs might be faulty due to a departure from the Fortran standard, or prone to trigger a run-time error. In either case the diagnostics produced by the compiler are clear and helpful and can save the developer many hours of laborious bug-tracing. Here are some particularly simple examples of faulty programs.
See if you can spot the mistakes, and think how difficult these might be to detect in programs that may be thousands of times longer: Example 1 Program test Real, Pointer :: x(:, :) Call make_dangle x(10, 10) = 0 Contains Subroutine make_dangle Real, Target :: y(100, 200) x => y End Subroutine make_dangle End Program test Example 2 Program dangle2 Real,Pointer :: x(:),y(:) Allocate(x(100)) y => x Deallocate(x) y = 3 End Example 3 program more integer n, i real r, s equivalence (n,r) i=3 r=2.5 i=n*n write(6,900) i, r 900 format(' i = ', i5, ' r = ', f10.4) stop 'ok' end Example 4 program trouble1 integer n parameter (n=11) integer iarray(n) integer i do 10 i=1,10 iarray(i) = i 10 continue write(6,900) iarray 900 format(' iarray = ',11i5) stop 'ok' end And finally if this is all too easy … Example 5 ! E04UCA Example Program Text ! Mark 23 Release. NAG Copyright 2011. ! .. Scalar Arguments .. REAL (KIND=nag_wp), INTENT (OUT) :: objf INTEGER, INTENT (INOUT) :: mode INTEGER, INTENT (IN) :: n, nstate ! .. Array Arguments .. REAL (KIND=nag_wp), INTENT (INOUT) :: objgrd(n), ruser(*) REAL (KIND=nag_wp), INTENT (IN) :: x(n) INTEGER, INTENT (INOUT) :: iuser(*) ! .. Executable Statements .. IF (mode==0 .OR. mode==2) THEN objf = x(1)*x(4)*(x(1)+x(2)+x(3)) + x(3) END IF IF (mode==1 .OR. mode==2) THEN objgrd(1) = x(4)*(x(1)+x(1)+x(2)+x(3)) objgrd(2) = x(1)*x(4) objgrd(3) = x(1)*x(4) + one objgrd(4) = x(1)*(x(1)+x(2)+x(3)) END IF RETURN END SUBROUTINE objfun SUBROUTINE confun(mode,ncnln,n,ldcj,needc,x,c,cjac,nstate,iuser,ruser) ! Routine to evaluate the nonlinear constraints and their 1st ! derivatives. ! .. Implicit None Statement .. IMPLICIT NONE ! .. Scalar Arguments .. INTEGER, INTENT (IN) :: ldcj, n, ncnln, nstate INTEGER, INTENT (INOUT) :: mode ! .. Array Arguments .. 
REAL (KIND=nag_wp), INTENT (OUT) :: c(ncnln) REAL (KIND=nag_wp), INTENT (INOUT) :: cjac(ldcj,n), ruser(*) REAL (KIND=nag_wp), INTENT (IN) :: x(n) INTEGER, INTENT (INOUT) :: iuser(*) INTEGER, INTENT (IN) :: needc(ncnln) ! .. Executable Statements .. IF (nstate==1) THEN ! First call to CONFUN. Set all Jacobian elements to zero. ! Note that this will only work when 'Derivative Level = 3' ! (the default; see Section 11.2). cjac(1:ncnln,1:n) = zero END IF IF (needc(1)>0) THEN IF (mode==0 .OR. mode==2) THEN c(1) = x(1)**2 + x(2)**2 + x(3)**2 + x(4)**2 END IF IF (mode==1 .OR. mode==2) THEN cjac(1,1) = x(1) + x(1) cjac(1,2) = x(2) + x(2) cjac(1,3) = x(3) + x(3) cjac(1,4) = x(4) + x(4) END IF END IF IF (needc(2)>0) THEN IF (mode==0 .OR. mode==2) THEN c(2) = x(1)*x(2)*x(3)*x(4) END IF IF (mode==1 .OR. mode==2) THEN cjac(2,1) = x(2)*x(3)*x(4) cjac(2,2) = x(1)*x(3)*x(4) cjac(2,3) = x(1)*x(2)*x(4) cjac(2,4) = x(1)*x(2)*x(3) END IF END IF RETURN END SUBROUTINE confun END MODULE e04ucae_mod PROGRAM e04ucae ! E04UCA Example Main Program ! .. Use Statements .. USE nag_library, ONLY : dgemv, e04uca, e04wbf, nag_wp USE e04ucae_mod, ONLY : confun, inc1, lcwsav, liwsav, llwsav, lrwsav, & nin, nout, objfun, one, zero ! .. Implicit None Statement .. ! IMPLICIT NONE ! .. Local Scalars .. ! REAL (KIND=nag_wp) :: objf INTEGER :: i, ifail, iter, j, lda, ldcj, & ldr, liwork, lwork, n, nclin, & ncnln, sda, sdcjac ! .. Local Arrays .. REAL (KIND=nag_wp), ALLOCATABLE :: a(:,:), bl(:), bu(:), c(:), & cjac(:,:), clamda(:), objgrd(:), & r(:,:), work(:), x(:) REAL (KIND=nag_wp) :: ruser(1), rwsav(lrwsav) INTEGER, ALLOCATABLE :: istate(:), iwork(:) INTEGER :: iuser(1), iwsav(liwsav) LOGICAL :: lwsav(llwsav) CHARACTER (80) :: cwsav(lcwsav) ! .. Intrinsic Functions .. INTRINSIC max ! .. Executable Statements .. WRITE (nout,*) 'E04UCA Example Program Results' ! 
Skip heading in data file READ (nin,*) READ (nin,*) n, nclin, ncnln liwork = 3*n + nclin + 2*ncnln lda = max(1,nclin) IF (nclin>0) THEN sda = n ELSE sda = 1 END IF ldcj = max(1,ncnln) IF (ncnln>0) THEN sdcjac = n ELSE sdcjac = 1 END IF ldr = n IF (ncnln==0 .AND. nclin>0) THEN lwork = 2*n**2 + 20*n + 11*nclin ELSE IF (ncnln>0 .AND. nclin>=0) THEN lwork = 2*n**2 + n*nclin + 2*n*ncnln + 20*n + 11*nclin + 21*ncnln ELSE lwork = 20*n END IF ALLOCATE (istate(n+nclin+ncnln),iwork(liwork),a(lda,sda), & bl(n+nclin+ncnln),bu(n+nclin+ncnln),c(max(1, & ncnln)),cjac(ldcj,sdcjac),clamda(n+nclin+ncnln),objgrd(n),r(ldr,n), & x(n),work(lwork)) IF (nclin>0) THEN READ (nin,*) (a(i,1:sda),i=1,nclin) END IF READ (nin,*) bl(1:(n+nclin+ncnln)) READ (nin,*) bu(1:(n+nclin+ncnln)) READ (nin,*) x(1:n) ! Initialise E04UCA ifail = 0 CALL e04wbf('E04UCA',cwsav,lcwsav,lwsav,llwsav,iwsav,liwsav,rwsav, & lrwsav,ifail) ! Solve the problem ifail = -1 CALL) SELECT CASE (ifail) CASE (0:6,8) WRITE (nout,*) WRITE (nout,99999) WRITE (nout,*) DO i = 1, n WRITE (nout,99998) i, istate(i), x(i), clamda(i) END DO IF (nclin>0) THEN ! A*x --> work. ! 
! The NAG name equivalent of dgemv is f06paf
    CALL dgemv('N',nclin,n,one,a,lda,x,inc1,zero,work,inc1)
    WRITE (nout,*)
    WRITE (nout,*)
    WRITE (nout,99997)
    WRITE (nout,*)
    DO i = n + 1, n + nclin
      j = i - n
      WRITE (nout,99996) j, istate(i), work(j), clamda(i)
    END DO
  END IF
  IF (ncnln>0) THEN
    WRITE (nout,*)
    WRITE (nout,*)
    WRITE (nout,99995)
    WRITE (nout,*)
    DO i = n + nclin + 1, n + nclin + ncnln
      j = i - n - nclin
      WRITE (nout,99994) j, istate(i), c(j), clamda(i)
    END DO
  END IF
  WRITE (nout,*)
  WRITE (nout,*)
  WRITE (nout,99993) objf
END SELECT

99999 FORMAT (1X,'Varbl',2X,'Istate',3X,'Value',9X,'Lagr Mult')
99998 FORMAT (1X,'V',2(1X,I3),4X,1P,G14.6,2X,1P,G12.4)
99997 FORMAT (1X,'L Con',2X,'Istate',3X,'Value',9X,'Lagr Mult')
99996 FORMAT (1X,'L',2(1X,I3),4X,1P,G14.6,2X,1P,G12.4)
99995 FORMAT (1X,'N Con',2X,'Istate',3X,'Value',9X,'Lagr Mult')
99994 FORMAT (1X,'N',2(1X,I3),4X,1P,G14.6,2X,1P,G12.4)
99993 FORMAT (1X,'Final objective value = ',1P,G15.7)

END PROGRAM e04ucae

Answers to this particular New Year quiz will be posted in a future blog post.

I recently installed MATLAB 2012a on a Windows machine along with a certain set of standard Mathworks toolboxes. In addition, I also installed the excellent NAG Toolbox for MATLAB, which is standard practice at my University. I later realised that I had not installed all of the Mathworks toolboxes I needed, so I fired up the MATLAB installer again and asked it to add the missing toolboxes. This extra installation never completed, however, and gave me the error message

The application encountered an unexpected error and needed to close. You may want to try re-installing your product(s).
More information can be found at C:\path_to_a_log_file

I took a look at the log file mentioned, which revealed a huge Java error that began with

java.util.concurrent.ExecutionException: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
at java.util.concurrent.FutureTask$Sync.innerGet(Unknown Source)
at java.util.concurrent.FutureTask.get(Unknown Source)
at javax.swing.SwingWorker.get(Unknown Source)
at com.mathworks.wizard.worker.WorkerImpl.done(WorkerImpl.java:33)
at javax.swing.SwingWorker$5.run(Unknown Source)

A little mucking around revealed that the installer was unhappy with the pathdef.m file at

C:\Program Files\MATLAB\R2012a\toolbox\local\pathdef.m

The installer for the NAG Toolbox modifies this file by adding the line

'C:\Program Files\MATLAB\R2012a\toolbox\NAG\mex.w64;' ...

near the beginning and the lines

'C:\Program Files\MATLAB\R2012a\help\toolbox\NAG;' ...
'C:\Program Files\MATLAB\R2012a\help\toolbox\NAGToolboxDemos;' ...

near the end, and it seems that the MATLAB installer really doesn't like this. So, what you do is create a copy of this pathdef.m file (pathdef.m.old, for example) and then remove the non-Mathworks lines in pathdef.m. Now you can install the extra Mathworks toolboxes you want. Once the installer has finished its work, you can re-add the non-Mathworks lines back into pathdef.m using your copy as a guide.

I'll be informing both NAG and The Mathworks about this particular issue but wanted to get this post out there as soon as possible to provide a workaround, since at least one other person has hit this problem at my University and I doubt that he will be the last. (It's also going to make SCCM deployment of MATLAB a pain, but that's another story.)

Update

The Mathworks technical support have sent me a better workaround than the one detailed above. What you need to do is to change

'C:\Program Files\MATLAB\R2012a\toolbox\NAG\mex.w64;' ...

to

'C:\Program Files\MATLAB\R2012a\toolbox\NAG\mex.w64;', ...
The Mathworks installer is unhappy about the missing comma.

The NAG C Library is one of the largest commercial collections of numerical software currently available, and I often find it very useful when writing MATLAB mex files. "Why is that?" I hear you ask.

One of the main reasons for writing a mex file is to gain more speed over native MATLAB. However, one of the main problems with writing mex files is that you have to do it in a low-level language such as Fortran or C, and so you lose much of the ease of use of MATLAB. In particular, you lose straightforward access to most of the massive collections of MATLAB routines that you take for granted. Technically speaking that's a lie, because you could use the mex function mexCallMATLAB to call a MATLAB routine from within your mex file, but then you'll be paying a time overhead every time you go in and out of the mex interface. Since you are going down the mex route in order to gain speed, this doesn't seem like the best idea in the world. This is also the reason why you'd use the NAG C Library and not the NAG Toolbox for MATLAB when writing mex functions.

One way out that I use often is to take advantage of the NAG C library, and it turns out that it is extremely easy to add the NAG C library to your mex projects on Windows. Let's look at a trivial example. The following code, nag_normcdf.c, uses the NAG function nag_cumul_normal to produce a simplified version of MATLAB's normcdf function (laziness is all that prevented me from implementing a full replacement).

/*"
void mexFunction( int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[] )
{
    double *in,*out;
    int rows,cols,num_elements,i;

    if(nrhs>1)
    {
        mexErrMsgIdAndTxt("NAG] = nag_cumul_normal(in[i]);
    }
}

To compile this in MATLAB, just use the following command

mex nag_normcdf.c CLW6I09DA_nag.lib

If your system is set up the same as mine then the above should 'just work' (see System Information at the bottom of this post).
The new function works just as you would expect it to

>> format long
>> format compact
>> nag_normcdf(1)
ans =
   0.841344746068543

Compare the result to normcdf from the statistics toolbox

>> normcdf(1)
ans =
   0.841344746068543

So far so good. I could stop the post here since all I really wanted to do was say 'The NAG C library is useful for MATLAB mex functions and it's a doddle to use – here's a toy example and here's the mex command to compile it'

However, out of curiosity, I looked to see if my toy version of normcdf was any faster than The Mathworks' version. Let there be 10 million numbers:

>> x=rand(1,10000000);

Let's time how long it takes MATLAB to take the normcdf of those numbers

>> tic;y=normcdf(x);toc
Elapsed time is 0.445883 seconds.
>> tic;y=normcdf(x);toc
Elapsed time is 0.405764 seconds.
>> tic;y=normcdf(x);toc
Elapsed time is 0.366708 seconds.
>> tic;y=normcdf(x);toc
Elapsed time is 0.409375 seconds.

Now let's look at my toy-version that uses NAG.

>> tic;y=nag_normcdf(x);toc
Elapsed time is 0.544642 seconds.
>> tic;y=nag_normcdf(x);toc
Elapsed time is 0.556883 seconds.
>> tic;y=nag_normcdf(x);toc
Elapsed time is 0.553920 seconds.
>> tic;y=nag_normcdf(x);toc
Elapsed time is 0.540510 seconds.

So my version is slower! Never mind, I'll just make my version parallel using OpenMP – Here is the code: nag_normcdf_openmp.c

/*"
#include <omp.h>

void do_calculation(double in[],double out[],int num_elements)
{
    int i,tid;

#pragma omp parallel for shared(in,out,num_elements) private(i,tid)
    for(i=0; i<num_elements; i++){
        out[i] = nag_cumul_normal(in[i]);
    }
}

void mexFunction( int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[] )
{
    double *in,*out;
    int rows,cols,num_elements;

    if(nrhs>1)
    {
        mexErrMsgIdAndTxt("NAG_NORMCDF);
}

Compile that with

mex COMPFLAGS="$COMPFLAGS /openmp" nag_normcdf_openmp.c CLW6I09DA_nag.lib

and on my quad-core machine I get the following timings

>> tic;y=nag_normcdf_openmp(x);toc
Elapsed time is 0.237925 seconds.
>> tic;y=nag_normcdf_openmp(x);toc
Elapsed time is 0.197531 seconds.
>> tic;y=nag_normcdf_openmp(x);toc
Elapsed time is 0.206511 seconds.
>> tic;y=nag_normcdf_openmp(x);toc
Elapsed time is 0.211416 seconds.

This is faster than MATLAB and so normal service is resumed :)

System Information
- 64bit Windows 7
- MATLAB 2011b
- NAG C Library Mark 9 – CLW6I09DAL
- Visual Studio 2008
- Intel Core i7-2630QM processor
http://www.walkingrandomly.com/?cat=28
AWS Compute Blog

Building, Testing and Deploying Java applications on AWS Lambda using Maven and Jenkins

Jeff Nunn, Solutions Architect

workflows. Using Git, Maven, and Jenkins, you can integrate, test, build and deploy your Lambda functions using these same paradigms. As a side note, we will be having a webinar on Continuous Delivery to AWS Lambda, which covers more methods of Continuous Delivery, on Thursday, April 28th. Register for the webinar.

Prerequisites

- Jenkins — Many of our customers use Jenkins as an automation server to perform continuous integration. While the setup of Jenkins is out of scope for this post, you can still learn about unit testing and pushing your code to Lambda by working through this example.
- A Git repository — This method uses Git commit hooks to perform builds against code to be checked into Git. You can use your existing Git repository, or create a new Git repository with AWS CodeCommit or other popular Git source control managers.

Getting started

In this example, you are building a simple document analysis system, in which metadata is extracted from PDF documents and written to a database, allowing indexing and searching based on that metadata. You use a Maven-based Java project to accomplish the PDF processing. To explain concepts, we show snippets of code throughout the post.

Overview of event-driven code

To accomplish the document analysis, an Amazon S3 bucket is created to hold PDF documents. When a new PDF document is uploaded to the bucket, a Lambda function analyzes the document for metadata (the title, author, and number of pages), and adds that data to a table in Amazon DynamoDB, allowing other users to search on those fields.

Lambda executes code in response to events. One such event would be the creation of an object in Amazon S3. When an object is created (or even updated or deleted), Lambda can run code using that object as a source.
Create an Amazon DynamoDB table

To hold the document metadata, create a table in DynamoDB, using the Title value of the document as the primary key. For this example, you can set your provisioned throughput to 1 write capacity unit and 1 read capacity unit.

Write the Java code for Lambda

The Java function takes the S3 event as a parameter, extracting the PDF object and analyzing the document for metadata using Apache PDFBox, and writing the results to DynamoDB.

// Get metadata from the document
PDDocument document = PDDocument.load(objectData);
PDDocumentInformation metadata = document.getDocumentInformation();
...
String title = metadata.getTitle();
if (title == null) {
    title = "Unknown Title";
}
...
Item item = new Item()
    .withPrimaryKey("Title", title)
    .withString("Author", author)
    .withString("Pages", Integer.toString(document.getNumberOfPages()));

The Maven project comes with a sample S3 event (/src/test/resources/s3-event.put.json) from which you can build your tests.

{
    " },
    "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
            "name": "builtonaws",
            "ownerIdentity": {
                "principalId": "EXAMPLE"
            },
            "arn": "arn:aws:s3:::builtonaws"
        },
        "object": {
            "key": "blogs/lambdapdf/aws-overview.pdf",
            "size": 558985,
            "eTag": "ac265da08a702b03699c4739c5a8269e"
        }
    }
}
]
}

Take care to replace the awsRegion, arn, and key to match your specific region, Amazon Resource Name, and key of the PDF document that you've uploaded.

Test your code

The sample code you've downloaded contains some basic unit tests.
One test gets an item from the DynamoDB table and verifies that the expected metadata exists:

@Test
public void checkMetadataResult() {
    DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient());
    Table table = dynamoDB.getTable("PDFMetadata");
    Item item = table.getItem("Title", "Overview of Amazon Web Services");

    assertEquals(31, item.getInt("Pages"));
    assertEquals("sajee@amazon.com", item.getString("Author"));
    assertEquals("Overview of Amazon Web Services", item.getString("Title"));
}

Before continuing, test your code to ensure that everything is working:

mvn test

After ensuring there are no errors, check your DynamoDB table to see the metadata now added to your table. The code executes because of the sample event in your Maven project, but how does it work when a new PDF is added to your bucket? To test this, complete the connection between Amazon S3 and Lambda.

Create a Lambda function

Use mvn package to package your working code, then upload the resulting JAR file to a Lambda function.

- In the Lambda console, create a new function and set runtime to Java 8.
- Set function package to the project JAR file Maven created in the target folder.
- Set handler to "example.S3EventProcessorExtractMetadata".
- Create a new value for role based on the Basic With DynamoDB option.

A role gives your function access to interact with other services from AWS. In this case, your function interacts with both Amazon S3 and Amazon DynamoDB. In the window that opens, choose View Policy Document, then choose Edit to edit your policy document. While this document gives your function access to your DynamoDB resources, you need to add access to your S3 resources as well. Use the policy document below to replace the original.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        },
        {
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:PutItem"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

It is important to note that this policy document allows your Lambda function access to all S3 and DynamoDB resources. You should lock your roles down to interact only with those specific resources that you wish the function to have access to. After completing your policy document and reviewing the function settings, choose Create Function.

Create Amazon S3 bucket

- Create a bucket in the Amazon S3 console. Note that buckets created in S3 are in a global namespace: you need to create a unique name.
- After the bucket is created, upload the Overview of Amazon Web Services PDF to your bucket. We've included this white paper to use in unit tests for debugging your Lambda function.
- Manage events for your S3 bucket by going to the root bucket properties and choosing Events.
- Give your event a name, such as "PDFUploaded".
- For Events, choose Object Created (all).
- For Prefix, list the key prefix for the subdirectory that holds your PDFs, if any. If you want to upload PDF documents to the root bucket, you can leave this blank. If you made a "subdirectory" called "pdf", then the prefix would be "pdf".
- Leave Suffix blank, choose Lambda function as the Send To option, and choose the Lambda function you created.
- Choose Save to save your S3 event.

Test everything

Test the entire process by uploading a new PDF to your bucket. Verify that a new entry was added to your DynamoDB table. (To troubleshoot any errors, choose Monitoring in your Lambda function to view logs generated by Amazon CloudWatch.)
Enter Jenkins

At this point, you have created a testable Java function for Lambda that uses an S3 event to analyze metadata from a PDF document and stores that information in a DynamoDB table. In a CI/CD environment, changes to the code might be made and uploaded to a code repository on a frequent basis. You can bring those principles into this project now by configuring Jenkins to perform builds, package a JAR file, and ultimately push the JAR file to Lambda. This process can be based off of a Git repo, by polling the repo for changes, or using Git's built-in hooks for post-receive or post-commit actions.

Build hooks

Use the post-commit hook to trigger a Jenkins build when a commit is made to the repo. (For the purposes of this post, the repo was cloned to the Jenkins master, allowing you to use the post-commit hook.) To enable Jenkins to build off Git commits, create a Jenkins project for your repo with the Git plugin, set Build Trigger to "Poll SCM", and leave Schedule blank. In your project folder, find .git/hooks/post-commit and add the following:

#!/bin/sh
curl http://<jenkins-master>:8080/job/<your-project-name>/build?delay=0sec

This ensures that when a commit is made in this project, a request is made to your project's build endpoint on Jenkins. You can try it by adding or modifying a file, committing it to your repo, and examining the build history and console output in your Jenkins dashboard for the status update. (For more information about implementing a post-receive hook, see the Integrating AWS CodeCommit with Jenkins AWS DevOps Blog post.)

Deploy code to Lambda

You may notice in the console output a command for aws sns publish --topic-arn .... In this project, we've added a post-build step to publish a message via Amazon Simple Notification Service (Amazon SNS) as an SMS message. You can add a similar build step to do the same, or take advantage of SNS to HTTP(S) endpoints to post status messages to team chat applications or a distributed list.
However, to be able to push the code to AWS Lambda after a successful commit and build, look at adding a post-build step.

- In the configuration settings for your project, choose Add build step and Invoke top-level Maven targets, setting Goal to "package". This packages up your project as a JAR file and places it into the target directory.
- Add a second build step by choosing Add build step and the Execute shell option.
- For Command, add the following Lambda CLI command (substitute the function-name variable and zip-file variable as necessary):

aws lambda update-function-code --function-name extractPDFMeta --zip-file fileb://target/lambda-java-example-1.0.jar

You have now added the necessary build steps for Jenkins to test and package your code, then upload it to Lambda. Test the entire process start to finish by adding a new PDF document into your S3 bucket and checking the DynamoDB table for changes.

Take it a step further

The code and architecture described here are meant to serve as illustrations for testing your Lambda functions and building out a continuous deployment process using Maven, Jenkins, and AWS Lambda. If you're running this in a production environment, there may be additional steps you would want to take. Here are a few:

- Add additional unit tests
- Build in additional features and sanity checks, for example, to make sure documents to be analyzed are actually PDF documents
- Adjust the Write Capacity Units (WCU) of your DynamoDB table to accommodate higher levels of traffic
- Add an additional Jenkins post-build step to integrate Amazon SNS to send a notification about a successful Lambda deployment
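On the first of those suggestions: handler logic that touches no AWS services at all, such as the null-title fallback shown earlier, can be covered by a plain unit test with no S3 or DynamoDB dependencies. A minimal sketch (the class and method names here are hypothetical, not part of the sample project):

```java
// TitleFallbackTest.java - plain-Java check of the handler's title fallback.
public class TitleFallbackTest {

    // Mirrors the logic in the Lambda handler: a missing PDF title
    // becomes "Unknown Title" before the item is written to DynamoDB.
    static String normalizeTitle(String title) {
        return (title == null) ? "Unknown Title" : title;
    }

    public static void main(String[] args) {
        if (!"Unknown Title".equals(normalizeTitle(null))) {
            throw new AssertionError("null title not normalized");
        }
        if (!"Overview of Amazon Web Services"
                .equals(normalizeTitle("Overview of Amazon Web Services"))) {
            throw new AssertionError("non-null title changed");
        }
        System.out.println("ok");
    }
}
```

Tests like this run in milliseconds under mvn test, so they cost nothing in the Jenkins build.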
https://aws.amazon.com/blogs/compute/building-testing-and-deploying-java-applications-on-aws-lambda-using-maven-and-jenkins/
House Overwhelmingly Passes Cybersecurity Bill

timothy posted more than 4 years ago | from the critical-mass-of-buzzwords dept.

Why Icecream has no bones (-1, Offtopic)
fibrewire (1132953) | more than 4 years ago | (#31025416)

The Federal Food Safety Act of 1921 prohibited bone-in ice cream and all ice cream and ice cream novelties had to be sold boneless after that. There was an interesting turn of events that led to the Federal Food Safety Act of 1921. Evidently Grover Cleveland, who was the US President at that time, had a daughter named Ruth. Ruth was very fond of ice cream, which back then was a rarity because electric refrigeration was not yet largely available to the public. They had to haul ice from the frozen lakes on mountains down to where the homes where at, in order to make ice cream. But I'm getting off onto another subject.

Anyway - the ice cream they made back then naturally had bones in it - how else would you make ice cream? Normally this was not an issue - every child back then knew how to hold a drumstick ice cream cone at the bottom and lick around the bone so that they would not accidentally choke on it. Although it was possible to make ice cream with out bones - it was very expensive to do it because only the Chinese craftsmen, who invented ice cream, were clever enough to debone and ice cream drumstick and not make a mess of it.

So anyway - Ruth was eating an ice cream cone at the exact instant her father was elected president of the United States. So shocked was she, that she accidentally swallowed the bone from the ice cream and began to choke. Luckily, Henry Heimlich Sr. (Father of Henry Heimlich Jr, inventor of the Heimlich maneuver) was nearby and was able to expel the bone from the windpipe of young Ms Cleveland and save her from certain death. Well - this was a great thing that Henry Heimlich had done, saving the daughter of the President of the United States.
However, Grover Cleveland was away for his inauguration while this happened and wasn't aware of it for several months afterward - when his daughter retold the story of how she almost died from an ice cream bone. Grover was pretty busy at the time and really didn't pay much attention to this until around the time of Ruth's next birthday. Taking time away from the war, Grover Cleveland asked his daughter what she wanted for her birthday. Giving it some thought, young Ruth finally said "Daddy, I want an sweet snack that I can eat that won't cause me to choke again." It was then when President Cleveland remembered the incident she told him of many months earlier. He asked Ruth - "What kind of ice cream were you eating when you almost died?" She told her dad she was eating a chocolate covered ice cream cone with nuts and caramel in it. Ruth's father thought about this and called his old friend James Curtis who owned the Curtis Candy Company in Chicago to see what could be done on Ruth's request. James told Grover that he would get back to him after trying out a few things, and they hung up the phone which only recently had been invented. While James Curtis was working on a treat in which Ruth would not possibly choke on - President Cleveland was bothered by the fact that if his adorable daughter almost died from eating ice cream with the bone still in - that there must be other children suffering the same fate. So - President Cleveland called up congress and told them he wanted them to pass a bill to outlaw bones in ice cream in an effort to save the children of the United States the dangers of choking on ice cream bones. Congress said "say what..??" But they decided there was something they could do and they hung up their phone. Congress and the president and the Curtis Candy Company had phones back then -but not many other people. 
So about that time - James Curtis called the President with the news that he had come up with a candy bar, with chocolate and nuts and caramel with NO BONES in it what so ever! And his daughter could eat them and not be afraid of choking on the bones because it had no bones.! President Cleveland told James Curtis to bring his newly invented confection to the White House and to come as quickly as possible because his daughter's birthday was next Thursday. James knew this was urgent so he flew by Jet, which actually had not been invented yet, but the secret underground government had been working on it for some time. So - anyway - James Curtis shows up at the White House with a box of his new candy bars at the same time Congress showed up at the back door, having stopped by for a beer after work. Grover invited James and Congress to the oval office for a drink and James showed him the candy bar and said "Here. President Cleveland - have your dear baby Ruth try this new candy bar." Now Ruth, who was eavesdropping from the Green Room as this meeting was going on - heard her name and waltzed in like nothing was going on - and gave her the candy bar and said "here, try this and try not to choke on it this time." Well - the candy bar was very delicious and Ruth really loved it and did not choke at all! She was so overjoyed by this she gave her father a Re:Why Icecream has no bones (3, Insightful) Anonymous Coward | more than 4 years ago | (#31025438) Re:Why Icecream has no bones (3, Interesting) fibrewire (1132953) | more than 4 years ago | (#31025596) Nah, I just get sick of cybersecurity bill garbage - not like anyone on slashdot is going to do anything about it. Re:Why Icecream has no bones (1) poetmatt (793785) | more than 4 years ago | (#31026008) so the real answer is that you're saying you forgot to check off "post anonymously", then. Re:Why Icecream has no bones (1) wizardforce (1005805) | more than 4 years ago | (#31026124) better to be ignorant of it then right? 
Re:Why Icecream has no bones (0, Offtopic) fibrewire (1132953) | more than 4 years ago | (#31026358) Yes - i like rice, too. Re:Why Icecream has no bones (0) Anonymous Coward | more than 4 years ago | (#31025634) Re:Why Icecream has no bones (1) thePowerOfGrayskull (905905) | more than 4 years ago | (#31026530) I wonder (5, Insightful) jwinster (1620555) | more than 4 years ago | (#31025418) Re:I wonder (3, Insightful) coinreturn (617535) | more than 4 years ago | (#31025480) Since this new body is designed to "represent the government in negotiations," I wonder if there's any relation to the ACTA treaty currently discussed behind closed doors. I don't wonder at all. Re:I wonder (5, Insightful) girlintraining (1395911) | more than 4 years ago | (#31025560). HITLER HAS A POSSE! (-1, Offtopic) Philip K Dickhead (906971) | more than 4 years ago | (#31025696) On thar IntarWebz! Heh! Moore's law states that I can introduce Hitler into a discussion more frequently than Hitlr was discussed during his entire lifetime! Take THAT, Mr. Godless! Re:I wonder (1, Informative) Anonymous Coward | more than 4 years ago | (#31025758):I wonder (0) Anonymous Coward | more than 4 years ago | (#31026400) On one hand, you're probably right. On the other, the absolute last thing I want is to have the person in charge of computer security in the US be an elected individual. I can only see that ending badly. Re:I wonder (0) Anonymous Coward | more than 4 years ago | (#31025862) What new body? WTF? There's a loose R&D planning council. But it's NIST that reps in negotiations. Does anybody read the source, or just comment on blogs? Yeah, I know, I'm new here etc. Re:I wonder (0) Anonymous Coward | more than 4 years ago | (#31026064) "new body?" NIST has been around for over a century. Re:I wonder (4, Informative) Tekfactory (937086) | more than 4 years ago | (#31026418) (1) forand (530402) | more than 4 years ago | (#31026662) OutSource 'em! 
(0) Anonymous Coward | more than 4 years ago | (#31025426) "the US needs 500 to 1,000 more 'cyber warriors' every year in order to keep up with potential enemies." Hey, there is plenty of skilled cyber warriors in China, India and Eastern Europe. Re:OutSource 'em! (0) Anonymous Coward | more than 4 years ago | (#31025498) Hi, I'm from the internet, and I say we take a band of 300 leet nerd-warriors, to where the inter-tubes enter the country, and hold back the armies of OVER ONE MILLION evil foreign hackers. Re:OutSource 'em! (0) Anonymous Coward | more than 4 years ago | (#31025592) I hope you're not marked as funny, since strangely enough this likely isn't a joke. What the * is a Cyber Warrior? (1) CoffeePlease (596791) | more than 4 years ago | (#31025736) Re:What the * is a Cyber Warrior? (0) Anonymous Coward | more than 4 years ago | (#31025782) A sexually-aware solider? A solider engaged in fetish or p0rn0graphic activity broadcast or otherwise distributed via the Internet? Re:What the * is a Cyber Warrior? (2, Insightful) Anonymous Coward | more than 4 years ago | (#31026106). At least (1) sleekware (1109351) | more than 4 years ago | (#31025446) Re:At least (0) Anonymous Coward | more than 4 years ago | (#31025778) off a cliff Re:At least (0) Anonymous Coward | more than 4 years ago | (#31025974) Re:At least (0) Anonymous Coward | more than 4 years ago | (#31026378) Perfect! Tell me it's a really high cliff with an impossible to survive landing... then, I might just be down for this! I need a job and this one fits my life to a tee. (2, Interesting) JDeane (1402533) | more than 4 years ago | (#31025450) Where do I sign up? Re:I need a job and this one fits my life to a tee (3, Interesting) chill (34294) | more than 4 years ago | (#31025492) [usajobs.gov] Re:I need a job and this one fits my life to a tee (3, Insightful) Anonymous Coward | more than 4 years ago | (#31025538) . Shivers... 
(0) Anonymous Coward | more than 4 years ago | (#31025458) Why did reading this article send shivers down my spine? Especially the last paragraph? Re:Shivers... (1) Narnie (1349029) | more than 4 years ago | (#31025700) Re:Shivers... (1) BerneAI (448306) | more than 4 years ago | (#31025838) why should it...the bill, like those that passed it, is meaningless. passed by the clueless, to impress the uninformed, or uniformed, your choice. more drivel.... Cyber Warrior positions available? (4, Funny) PingSpike (947548) | more than 4 years ago | (#31025464) I knew all those years playing Quake would come in handy eventually. Re:Cyber Warrior positions available? (2, Insightful) Hurricane78 (562437) | more than 4 years ago | (#31026170) I think you mean System Shock 1! Re:Cyber Warrior positions available? (1) ArundelCastle (1581543) | more than 4 years ago | (#31026192) Can you imagine handing out shiny new business cards? ----- [DHS logo] Bill "FraggleR0x0rs" Ferguson Cyber Warrior U.S. Government Cell: ### Skype: ### AIM: ### ----- Dude, awesome. eeep (3, Funny) the_Bionic_lemming (446569) | more than 4 years ago | (#31025484) The house overwhelmingly approved? That means it'll add to the deficit, be largely useless, and misused by RIAA. God help us all. Re:eeep (1) interkin3tic (1469267) | more than 4 years ago | (#31025670) That means it'll add to the deficit By this, do you mean to imply there's a tax cut hidden in there somewhere? Cyber Warriors.... (5, Funny) neogeographer (1568287) | more than 4 years ago | (#31025486) Re:Cyber Warriors.... (1) DarrenBaker (322210) | more than 4 years ago | (#31025636) Why is it that the Government, when referring to IT matters, always uses terminology like that... What is this, the United States of Johnny Mnemonic? Re:Cyber Warriors.... (1) Jawn98685 (687784) | more than 4 years ago | (#31025918) the threat and build or buy the best defenses. 
Big Corporate World will, eventually, come to a similar conclusion and pony up, though signs seem to indicate that this will be later, rather than sooner. The rest of us are on our own. "The government" can't/won't throw enough resources at the problem to keep the spam bots off grandma's PC. And honestly, I am not sure I want anyone, especially the government, that close (as in close enough to make a difference) to my edge of the network. Re:Cyber Warriors.... (1) DarrenBaker (322210) | more than 4 years ago | (#31026094):Cyber Warriors.... (1) tyrione (134248) | more than 4 years ago | (#31026112) Re:Cyber Warriors.... (1) Dalambertian (963810) | more than 4 years ago | (#31026442) Re:Cyber Warriors.... (1) sconeu (64226) | more than 4 years ago | (#31025870) Alas, most Slashdotters are too young to get your reference [imdb.com] . Re:Cyber Warriors.... (1) nine-times (778537) | more than 4 years ago | (#31025970). Re:Cyber Warriors.... (1) tyrione (134248) | more than 4 years ago | (#31026132). So now suddenly it's OK again? (3, Interesting) moz25 (262020) | more than 4 years ago | (#31025500) Too little, too late. For more than a decade, effort was done to *weaken* the domestic talent at developing themselves or helping (causing) to harden the existing infrastructure. Re:So now suddenly it's OK again? (1, Interesting) Anonymous Coward | more than 4 years ago | (#31025620) Standard operating procedure: Eradicate what's there, bring in your own guys. Re:So now suddenly it's OK again? (5, Informative) GovCheese (1062648) | more than 4 years ago | (#31025748) Re:So now suddenly it's OK again? (1, Interesting) Anonymous Coward | more than 4 years ago | (#31025950):So now suddenly it's OK again? (1) shaitand (626655) | more than 4 years ago | (#31026188):So now suddenly it's OK again? (3, Informative) Anonymous Coward | more than 4 years ago | (#31026062)". Cybersecurity? Wtf? 
(-1, Troll) Anonymous Coward | more than 4 years ago | (#31025504) Why is "cybersecurity" needed? (And does that really have anything to do with cybernetics?) If you've got important data you need locked down, keep it strictly on a closed network or offline. The only government machines that should be online are those to serve up the web site. For everything else, computers are not secure enough. Private sector (4, Funny) gmuslera (3436) | more than 4 years ago | (#31025510) Re:Private sector (1) Blakey Rat (99501) | more than 4 years ago | (#31025640) Did Weyland-Yutani bid on it? Re:Private sector (1) jellomizer (103300) | more than 4 years ago | (#31025662). Klingons (0) Anonymous Coward | more than 4 years ago | (#31025554) The Klingons are gearing up for a new field of battle. I guess all those ridiculous stories about "Chinese" attacks on various inconsequential web sites had a meaning. Orders? (-1) mcgrew (92797) | more than 4 years ago | (#31025598) I wasn't aware that Congress could order the White House to do anything. What part of the Constitution gives it this power? What about "separation of powers"? Re:Orders? (3, Insightful) mujadaddy (1238164) | more than 4 years ago | (#31025656) In other news, you really don't know what those words you said mean, do you? Re:Orders? (-1, Troll) mcgrew (92797) | more than 4 years ago | (#31025764) Yes, I do. Congress writes the laws, the President enforces the laws, and the judiciary judges both the laws and those accused of breaking them. Nothing in the Constitution gives anybody but the judiciary the power to order anybody to do anything (except the President is Commander in Chief of the military and can order troops around). Now, the SCOTUS can order the President to obey the laws Congress passes (and he or his predecessors sign), but Congress can't order him. This is junior high school shit, everyone should know it. Re:Orders? (2, Insightful) bsDaemon (87307) | more than 4 years ago | (#31026060) Re:Orders?
(1) mujadaddy (1238164) | more than 4 years ago | (#31026280) Now, the SCOTUS can order the President to obey the laws Andrew Jackson would disagree with that. This is Junior high school shit, everyone should know it. Congress issues requirements for the Executive branch all the time. Everyone should know this "Junior high school shit." Re:Orders? (1) tyrione (134248) | more than 4 years ago | (#31026176) Hi, I'm Separation of Powers, and I take laws that Congress makes and give them to the Executive branch so they can enforce them. In other news, you really don't know what those words you said mean, do you? Touché. Rule of law, which Congress writes... (1) weston (16146) | more than 4 years ago | (#31025776) latitude, arising partly from being the executor of the law, and partly from human sociology (most people have some natural aversion to adversarial actions against high-status individuals) and politics (sure, maybe Bush and Cheney are guilty of war crimes, but you open that can of worms and you're going to start a big fight and potentially find yourself staring down the barrel of similar accusations in the future). If anything, the executive branch is stronger in practice than it should be. Re:Rule of law, which Congress writes... (1) mcgrew (92797) | more than 4 years ago | (#31025966) If anything, the executive branch is stronger in practice than it should be. I certainly wouldn't argue with that. Re:Rule of law, which Congress writes... (5, Insightful) shaitand (626655) | more than 4 years ago | (#31026258) The entire federal government is dramatically more powerful than it should be. Just look how many powers it has stolen for itself by twisting a simple authority to regulate interstate commercial traffic. Re:Orders? (1) starfliz (922954) | more than 4 years ago | (#31026118) Google attack? (3, Interesting) antiaktiv (848995) | more than 4 years ago | (#31025608) BYOCT... (Bring your own conspiracy theory) Re:Google attack? 
(1) shaitand (626655) | more than 4 years ago | (#31026274) ALLEGEDLY Chinese? glad that's taken care of (0) Anonymous Coward | more than 4 years ago | (#31025642) now we can get back about the 'business' of surviving the escalating assault on ourselves, by us, & 'them'. consult with/trust in your creators, providing more than enough security, & everything else we need, with no personal gain motive, using an unending supply of newclear power, since/until forever. see you there? Where is their test environment? (2, Insightful) zerointeger (1587877) | more than 4 years ago | (#31025678) (2, Interesting) Anonymous Coward | more than 4 years ago | (#31025722) :Bleh (1) starfliz (922954) | more than 4 years ago | (#31026246) What the heck is a Cyber Warrior? (2, Informative) Qualin74 (1491297) | more than 4 years ago | (#31025730) Re:What the heck is a Cyber Warrior? (0) Anonymous Coward | more than 4 years ago | (#31025986) If they really want to be concerned about "Cyber Security", why don't they nuke all the computers running Bot nets? Like yours? Or your parent's? Or your grandma's? Or the XP-based prescription medication dispenser controlling the IV in my dad's hospital room that connects to the nurse's station with WEP? Why don't they go after the jerkoffs running the C&C servers? By air-dropping fully-armed Navy seals into China and India and Canada and California and France and Germany? Why don't they set up Honeypots acting as spam traps and go after all those spammers clogging up the pipes? We do. Why don't they go after the RBN equivalents out there? We are. Nobody would dare to sue a military unit, would they? Am I missing something here? The American people and the International Community would probably be pretty P.O.'d if the U.S. military starts responding to every IT security problem with bullets and missiles. Re:What the heck is a Cyber Warrior? (1) Akita24 (1080779) | more than 4 years ago | (#31026360) Re:What the heck is a Cyber Warrior? 
(1) Lord Ender (156273) | more than 4 years ago | (#31026366) our own systems? It's hard to say, but I'm guessing there might be some of each. Re:What the heck is a Cyber Warrior? (1) chrisG23 (812077) | more than 4 years ago | (#31026494), creating your own tools for the particular system. If they really want to be concerned about "Cyber Security", why don't they nuke all the computers running Bot nets? International law. They (the FBI) already goes after people operating the C2C servers inside the borders of this country (the USA). Most people don't know it when their computer is infected with a botnet, depending on the botnet. Why don't they go after the jerkoffs running the C&C servers? Why don't they set up Honeypots acting as spam traps and go after all those spammers clogging up the pipes? I think that is the idea of this whole thing actually. Why don't they go after the RBN equivalents out there? It is hard to find the ringleaders, and then even if the USA did, they would likely be in Russia, and Russia may not accept our evidence. (Begin rumors without citation) There are some that think the Russian government unofficially supports the RBN, as long as their activities do no mess with Russian interests.(/rumors) Nobody would dare to sue a military unit, would they? Am I missing something here? Military action is never a good first option, or second, third or fourth option for that matter. There are serious consequences for violating a sovereign nation with an act of war, unless they are really weak and poor and have no friends. If there is evidence that countries are beefing up their own cyber warfare capabilities, then it sorta the explicit and implicit responsibility of a government to its people to protect them. You don't see any countries in the world that can afford a military without one do you? 
Unless they can get away with it some other way (think Switzerland, or countries that are not allowed a sizable military as a condition of their surrender in a previous war by the winning country(ies).) Welcome to the future. It's like Robot Jox but without the robots and just the software. the 'Manhattan Project of our generation' (1) KharmaWidow (1504025) | more than 4 years ago | (#31025774) So does this mean that they are trying to wreak havoc on our lives like nuclear bombs have? ...Wars, threats of terrorism, devastating economic sanctions, preemptive wars, and cold wars? Reminds me of the DEVO song "It's a Beautiful Life" umm wat? (2, Insightful) nilbog (732352) | more than 4 years ago | (#31025808) Shouldn't treaties be made by people who are responsible to an electorate? Isn't that the point of our entire system of government? This seems really shady to me. Re:umm wat? (1) Tekfactory (937086) | more than 4 years ago | (#31026538) The Director of NIST is a confirmed presidential appointee. Appointees get chosen by the president and grilled by the Senate, all of whom are elected and in theory responsible to their electorate. I give it 6 months (0) Anonymous Coward | more than 4 years ago | (#31025818) I give the NIST six months before they're over-ruled by the NSA and DHS. Six months. Separating reality and fantasy (4, Insightful) Angst Badger (8636) | more than 4 years ago | (#31025848) Re:Separating reality and fantasy (0) Anonymous Coward | more than 4 years ago | (#31026128) So let me get this straight: as a "cyber warrior" we get to use swords, knives, maces, axes to dispense justice on those rotten scammers? Sign me up. Re:Separating reality and fantasy (1) spinkham (56603) | more than 4 years ago | (#31026302) I'm sure that kinetic response for network threats is part of the US strategy. Though we don't use swords much anymore... Re:Separating reality and fantasy (1) Lord Ender (156273) | more than 4 years ago | (#31026454).
WHO THE FUCK CARES (0) Anonymous Coward | more than 4 years ago | (#31025894) I wonder? (0) Anonymous Coward | more than 4 years ago | (#31025972) That's really great. Question: - Having mandated this conference and extension to our bloated government, did this knowledgeable legislator provide funding for this addition? If funding was NOT provided and since this clown thinks this is such an important issue, let's take the funding from his budget, and if that is not enough, cut his pay, and if that is not enough, he is going to have to get a 2nd job to cover the difference. This is called - being responsible. Google/NSA (1) Temujin_12 (832986) | more than 4 years ago | (#31026004) Hmmm.... this [cnet.com] wouldn't be related now, would it? Google is finalizing an agreement with the National Security Agency to help the search giant ward off cyberattacks, according to the Washington Post. millions get infected with malware (0) Anonymous Coward | more than 4 years ago | (#31026016) and the government does nothing. a F500 company gets hacked and all of a sudden we need cyberwarriors. good to know those priorities come election day. I give up (1) Quiet_Desperation (858215) | more than 4 years ago | (#31026244) Re:I give up (0) Anonymous Coward | more than 4 years ago | (#31026420) As a cyber warrior you need to tank the cyber criminals, while your teammates deal most of the damage and kill them. I wanna become a cyber warlock btw. Like the magic and stuff. Re:I give up (0) Anonymous Coward | more than 4 years ago | (#31026652) I'll be your John Wayne (3, Funny) elrous0 (869638) | more than 4 years ago | (#31026292). Right before elections? Good luck! (0) Anonymous Coward | more than 4 years ago | (#31026310) That was a pretty stupid move, now wasn't it? I know a certain 422 members of congress who likely won't be re-elected! too coincidental (1) JustNiz (692889) | more than 4 years ago | (#31026374)?
No different to a state post office (0) Anonymous Coward | more than 4 years ago | (#31026428) Usually a country's post office is given powers to represent the country in international postal negotiations and the UPU. This seems to be no different, except that it deals with standards. Cyber Warriors??? (1) Khan (19367) | more than 4 years ago | (#31026506) Phft! All you need is Jack Bauer and CTU. THAT'LL teach them not to mess with the US! ;-)
Hedwig

Hedwig is a Swift package which supplies a set of high-level APIs to let you send email to an SMTP server easily. If you are planning to send emails from your next amazing Swift server app, Hedwig might be a good choice.

Features

- [x] Connect to all SMTP servers, whether through a plain, SSL or TLS (STARTTLS) port.
- [x] Authentication with PLAIN, CRAM-MD5, LOGIN or XOAUTH2.
- [x] Send email with HTML body and attachments.
- [x] Customize the validation method and mail headers, to track your mail campaign.
- [x] Queued mail sending, without blocking your app. You can even send mails concurrently.
- [x] Works with Swift Package Manager, in the latest Swift syntax and cross-platform.
- [x] Fully tested and documented.

Installation

Add the URL of this repo to your Package.swift:

import PackageDescription

let package = Package(
    name: "YourAwesomeSoftware",
    dependencies: [
        .Package(url: "", majorVersion: 1)
    ]
)

Then run swift build whenever you are ready. (Also remember to grab a cup of coffee 😄) You can find more information on how to use Swift Package Manager on Apple's official page.
Usage

Sending text-only email

let hedwig = Hedwig(hostName: "smtp.example.com", user: "foo@bar.com", password: "password")
let mail = Mail(
    text: "Across the great wall we can reach every corner in the world.",
    from: "onev@onevcat.com",
    to: "foo@bar.com",
    subject: "Hello World"
)
hedwig.send(mail) { error in
    if error != nil { /* Error happened */ }
}

Sending HTML email

let hedwig = Hedwig(hostName: "smtp.example.com", user: "foo@bar.com", password: "password")
let attachment = Attachment(htmlContent: "<html><body><h1>Title</h1><p>Content</p></body></html>")
let mail = Mail(
    text: "Fallback text",
    from: "onev@onevcat.com",
    to: "foo@bar.com",
    subject: "Title",
    attachments: [attachment]
)
hedwig.send(mail) { error in
    if error != nil { /* Error happened */ }
}

CC and BCC

let hedwig = Hedwig(hostName: "smtp.example.com", user: "foo@bar.com", password: "password")
let mail = Mail(
    text: "Across the great wall we can reach every corner in the world.",
    from: "onev@onevcat.com",
    to: "foo@bar.com",
    cc: "Wei Wang <onev@onevcat.com>, tom@example.com", // Addresses will be parsed for you
    bcc: "My Group: onev@onevcat.com, foo@bar.com;", // Even with group syntax
    subject: "Hello World"
)
hedwig.send(mail) { error in
    if error != nil { /* Error happened */ }
}

Using different SMTP settings (security layer, auth method, etc.)

let hedwig = Hedwig(
    hostName: "smtp.example.com",
    user: "foo@bar.com",
    password: "password",
    port: 1234,                   // Determined from secure layer by default
    secure: .plain,               // .plain (Port 25) | .ssl (Port 465) | .tls (Port 587) (default)
    validation: .default,         // You can set your own certificate/cipher/protocols
    domainName: "onevcat.com",    // Used when saying hello to the SMTP server
    authMethods: [.plain, .login] // Default: [.plain, .cramMD5, .login, .xOauth2]
)

Send mails with inline image and other attachments

let html = Attachment(
    htmlContent: "<html> ... </body></html>",
    // If imageAttachment is only used embedded in the HTML, I recommend setting it as related.
    related: [imageAttachment]
)

// You can also create an attachment from raw data.
let data = "{\"key\": \"hello world\"}".data(using: .utf8)!
let json = Attachment(
    data: data,
    mime: "application/json",
    name: "file.json",
    inline: false // Send as a standalone attachment.
)
let mail = Mail(
    text: "Fallback text",
    from: "onev@onevcat.com",
    to: "foo@bar.com",
    subject: "Check the photo and json file!",
    attachments: [html, json]
)
hedwig.send(mail) { error in
    if error != nil { /* Error happened */ }
}

Send multiple mails

let mail1: Mail = //...
let mail2: Mail = //...

hedwig.send([mail1, mail2],
    progress: { (mail, error) in
        if error != nil {
            print("\(mail) failed. Error: \(error)")
        }
    },
    completion: { (sent, failed) in
        for mail in sent {
            print("Sent mail: \(mail.messageId)")
        }
        for (mail, error) in failed {
            print("Mail \(mail.messageId) errored: \(error)")
        }
    }
)

Help and Questions

Visit the documentation page for the full API reference. You can also run the tests (swift test) to see more examples of how to use Hedwig.

If you have found the framework to be useful, please consider a donation. Your kind contribution will help me afford more time on the project. Or if you are a Bitcoin fan and want to treat me to a cup of coffee, here is my wallet address: 1MqwfsxBJ5pJX4Qd2sRVhK3dKTQrWYooG5

I cannot send mails with Gmail SMTP.

Gmail uses an application-specific password. You need to create one and use that password when authenticating. See this.

I need to add/set some additional headers in the mail.

Both Mail and Attachment accept customized header fields. Pass your headers as additionalHeaders when creating the mail or attachment and Hedwig will handle it.

Can I use it in iOS?

At this time Swift Package Manager has no support for iOS, watchOS, or tvOS platforms. So the answer is no. But this framework does not use anything iOS-only (like UIKit), so as soon as Swift Package Manager supports iOS, you can use it there too.
Tell me about the name and logo of Hedwig

Yes, Hedwig (bird) was Harry Potter's pet snowy owl. The logo of Hedwig (this framework) was created by myself and it pays reverence to the novels and movies.

Other questions

Submit an issue if you find something wrong. Pull requests are warmly welcome, but I suggest discussing first. You can also follow and contact me on Twitter or Sina Weibo.

Enjoy sending your emails!

License

Hedwig is released under the MIT license. See LICENSE for details.
10.7. Adadelta

10.7.2. Implementation from Scratch

Adadelta needs to maintain two state variables for each independent variable, \(\boldsymbol{s}_t\) and \(\Delta\boldsymbol{x}_t\). We use the formula from the algorithm to implement Adadelta.

In [1]:
import sys
sys.path.insert(0, '..')
%matplotlib inline
import d2l
from mxnet import nd

features, labels = d2

In [2]:
d2l.train_ch9(adadelta, init_adadelta_states(), {'rho': 0.9}, features, labels)

loss: 0.244383, 0.490105 sec per epoch

10.7.3. Concise Implementation

Using the Trainer instance for the algorithm named "adadelta", we can implement Adadelta in Gluon. Its hyperparameter can be specified by rho.

In [3]:
d2l.train_gluon_ch9('adadelta', {'rho': 0.9}, features, labels)

loss: 0.245700, 0.409220 sec per epoch

10.7.4. Summary

- Adadelta has no learning rate hyperparameter; it uses an EWMA on the squares of the elements in the variation of the independent variable to replace the learning rate.

10.7.6. Reference

[1] Zeiler, M. D. (2012). ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.
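To make the summary concrete outside of MXNet/Gluon, here is a minimal pure-Python sketch of one Adadelta step for a single scalar parameter. This is illustrative only; the variable names and the eps constant are my own choices, not taken from the d2l code.

```python
# Minimal scalar Adadelta sketch (illustrative; not the d2l implementation).
# With rho the decay rate and eps a small constant for numerical stability:
#   s     <- rho * s + (1 - rho) * g^2            (EWMA of squared gradients)
#   g_adj  = sqrt((delta + eps) / (s + eps)) * g  (rescaled gradient)
#   x     <- x - g_adj
#   delta <- rho * delta + (1 - rho) * g_adj^2    (EWMA of squared updates)
import math

def adadelta_step(x, g, s, delta, rho=0.9, eps=1e-5):
    s = rho * s + (1 - rho) * g * g
    g_adj = math.sqrt((delta + eps) / (s + eps)) * g
    x = x - g_adj
    delta = rho * delta + (1 - rho) * g_adj * g_adj
    return x, s, delta

# Minimize f(x) = x^2 (gradient 2x) -- note there is no learning rate to tune.
x, s, delta = 5.0, 0.0, 0.0
for _ in range(500):
    x, s, delta = adadelta_step(x, 2 * x, s, delta)
# after enough steps the iterate has moved toward the minimum at 0
```

The EWMA of squared updates (`delta`) plays the role the learning rate would play in SGD, which is why the algorithm needs no learning rate hyperparameter.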
use @hook to define a function block to run. Hooks have a few limitations which affect the usefulness of this mechanism: Hooks are really meant to be used for internal workflow transitions. There are only 8 of them and they represent specific stages in the life cycle of a charm.

- install
- config-changed
- start
- upgrade-charm
- stop
- update-status
- leader-elected
- leader-settings-changed

The hook sequence is hardcoded. This is defined in juju/worker/uniter/operation/runhook.go. In other words, the transition from install to upgrade-charm is both guaranteed and mandatory. This limits how we can design our workflows.

States

Decorators: @not_unless, @only_once, @when, @when_all, @when_any, @when_file_changed, @when_none, @when_not, @when_not_all
Definition: charms.reactive/charms/reactive/decorators.py

States were probably designed to fix the limitations that Hooks present. States can be defined using arbitrary strings, except for the two reserved words juju and jujud. Further, the workflow of states is not fixed. States are evaluated iteratively, and a true condition will execute its associated function block.

When is a state true? What defines true? The code is in class StateWatch in the file charms.reactive/charms/reactive/bus.py. The value changed is true when there are states to monitor (set(states) is not empty) and there are changes (data['changes'] is not empty).

class StateWatch(object):
    ...
    @classmethod
    def watch(cls, watcher, states):
        data = cls._get()
        iteration = data['iteration']
        changed = bool(set(states) & set(data['changes']))
        return iteration == 0 or changed

What defines the values in data['changes']? There are only two places that set this value: set_state(...) and remove_state(...). Looking at set_state reveals that if the state is already in the old_states list, it will not be marked as changed; therefore the watch will fail to identify this state and will not trigger an execution.
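That edge-triggered behavior can be sketched with a toy model. This is hypothetical code, not the charms.reactive implementation — it only mimics the "handlers fire on transitions, not on levels" semantics described above:

```python
# Toy model of reactive's edge-triggered states (illustrative only).
class ToyStates:
    def __init__(self):
        self.active = set()
        self.changes = []              # states that flipped since last dispatch

    def set_state(self, state):
        if state not in self.active:   # an already-active state is ignored,
            self.active.add(state)     # so no change is recorded and no
            self.changes.append(state) # handler will fire for it

    def remove_state(self, state):
        if state in self.active:
            self.active.discard(state)
            self.changes.append(state)

    def dispatch(self, handlers):
        fired = [name for name, watched in handlers
                 if watched & set(self.changes)]
        self.changes = []              # transitions are consumed by dispatch
        return fired

states = ToyStates()
handlers = [("configure", {"config.ready"})]

states.set_state("config.ready")
print(states.dispatch(handlers))   # → ['configure']  (False -> True transition)
states.set_state("config.ready")   # state is already True: no transition
print(states.dispatch(handlers))   # → []  (nothing fires while it stays True)
```

Only after `remove_state("config.ready")` (a True → False transition) would a `@when_not`-style handler become eligible again.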
In other words, if a state becomes true, it will not be re-evaluated until it is reset to false first. A reactive charm triggers only on state transitions: False->True (@when) or True->False (@when_not).

def set_state(state, value=None):
    """
    Set the given state as active, optionally associating with a relation.
    """
    old_states = get_states()
    unitdata.kv().update({state: value}, prefix='reactive.states.')
    if state not in old_states:
        StateWatch.change(state)

Namespace

There isn't really a namespace concept in charm. I'm borrowing the term to illustrate state boundaries. The question is: if I can define arbitrary states, in which scope are they visible? Can state.xyz in charm A trigger an action in charm B? Or in another layer? Or in another unit? Or even a bundle?

To answer this, we need to examine how states are stored. The key information can be found in charmhelpers/core/unitdata.py — this is the storage class that charm uses to store states. Clearly the backend is a sqlite3 database.

class Storage(object):
    """Simple key value database for local unit state within charms.

    Modifications are not persisted unless :meth:`flush` is called.

    To support dicts, lists, integer, floats, and booleans values
    are automatically json encoded/decoded.
    """
    def __init__(self, path=None):
        self.db_path = path
        if path is None:
            if 'UNIT_STATE_DB' in os.environ:
                self.db_path = os.environ['UNIT_STATE_DB']
            else:
                self.db_path = os.path.join(
                    os.environ.get('CHARM_DIR', ''), '.unit-state.db')
        self.conn = sqlite3.connect('%s' % self.db_path)
        self.cursor = self.conn.cursor()
    ....

Function _init(self) reveals the table schema in this database — three tables: kv, kv_revisions and hooks. kv is the primary store, as can be seen in the set_state function above (unitdata.kv().update(...)).
def _init(self):
    self.cursor.execute('''
        create table if not exists kv (
            key text,
            data text,
            primary key (key)
        )''')
    self.cursor.execute('''
        create table if not exists kv_revisions (
            key text,
            revision integer,
            data text,
            primary key (key, revision)
        )''')
    self.cursor.execute('''
        create table if not exists hooks (
            version integer primary key autoincrement,
            hook text,
            date text
        )''')
    self.conn.commit()

What is not obvious is that each unit has its own DB. Therefore, the boundary of states is per charm unit. In other words, states are visible inside a unit. Using layers will package states inside a single charm, but at run time it is the unit boundary that matters. States do not go across charms. Using the same charm, states do not go across units either.

Dispatch

Reading the function dispatch in charms.reactive/charms/reactive/bus.py is interesting because there is certainly something no document has mentioned. Dispatch is done in two phases: hooks and other.

Hooks are run in the hooks phase. Each registered hook will run its test(), so this scan will test all hooks.

def _test(to_test):
    return list(filter(lambda h: h.test(), to_test))

....

unitdata.kv().set('reactive.dispatch.phase', 'hooks')
hook_handlers = _test(Handler.get_handlers())
_invoke(hook_handlers)
http://fengxia.co.s3-website-us-east-1.amazonaws.com/juju%20charm%20reactive.html
CC-MAIN-2019-09
refinedweb
863
52.05
We've built a very basic todo lists app using Django. It has views to deal with viewing lists, creating new lists, and adding to existing lists. Two of these views end up doing some similar work, which is to retrieve a list object from the database based on its list ID:

def add_item(request, list_id):
    list_ = List.objects.get(id=list_id)
    Item.objects.create(text=request.POST['item_text'], list=list_)
    return redirect('/lists/%d/' % (list_.id,))

def view_list(request, list_id):
    list_ = List.objects.get(id=list_id)
    return render(request, 'list.html', {'list': list_})

This is a good use case for a decorator. A decorator can be used to extract duplicated work, and also to change the arguments to a function. So we should be able to build a decorator that does the list-getting for us. Here's the target:

@get_list
def add_item(request, list_):
    Item.objects.create(text=request.POST['item_text'], list=list_)
    return redirect('/lists/%d/' % (list_.id,))

@get_list
def view_list(request, list_):
    return render(request, 'list.html', {'list': list_})

So how do we build a decorator that does that? A decorator is a function that takes a function, and returns another function that does a slightly modified version of the work the original function was doing. We want our decorator to transform the simplified view functions we have above into something that looks like the original functions. (You end up saying "function" a lot in any explanation of decorators...) Here's a template:

def get_list(view_fn):
    def decorated_view(...?):
        ???
        return view_fn(...?)
    return decorated_view

Can you get it working? Thankfully, our code has tests, so they'll tell you when you get it right...

git clone -b chapter_06
python3 manage.py test lists  # dependencies: django 1.7

Decorators definitely are a bit brain-melting, so it may take a bit of effort to wrap your head around it.
Once you get the hang of them, they're dead useful though. If you're finding it impossible, you could start with a simpler challenge... say, building a decorator to make functions return an absolute value:

def absolute(fn):
    # this decorator currently does nothing
    def modified_fn(x):
        return fn(x)
    return modified_fn

def foo(x):
    return 1 - x

assert foo(3) == -2

@absolute
def foo(x):
    return 1 - x

assert foo(3) == 2  # this will fail, get it passing!

Try it out:

git clone deccy
python3 deccy/deccy.py
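If you get completely stuck on the warm-up (spoiler ahead!), the trick is that the inner function gets to transform the wrapped function's return value before passing it on. One possible solution sketch — not necessarily the only good answer:

```python
# One possible solution to the warm-up challenge (spoiler!).
def absolute(fn):
    def modified_fn(x):
        return abs(fn(x))   # transform the wrapped function's return value
    return modified_fn

@absolute
def foo(x):
    return 1 - x

assert foo(3) == 2    # |1 - 3| = 2, so the failing assert now passes
assert foo(0) == 1
```

The same wrap-and-transform shape carries over to the `get_list` challenge, except there you transform the *arguments* on the way in rather than the return value on the way out.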
http://www.obeythetestinggoat.com/decorators.html
CC-MAIN-2021-17
refinedweb
414
57.57
pymarketstore

Python driver for MarketStore

Pymarketstore can query and write financial timeseries data from MarketStore. Tested with Python 3.3+.

How to install

$ pip install pymarketstore

Examples

In [1]: import pymarketstore as pymkts

## query data

In [2]: param = pymkts.Params('BTC', '1Min', 'OHLCV', limit=10)

In [3]: cli = pymkts.Client()

In [4]: reply = cli.query(param)

In [5]: reply.first().df()
Out[5]:
                               Open      High       Low     Close     Volume
Epoch
2018-01-17 17:19:00+00:00  10400.00  10400.25  10315.00  10337.25   7.772154
2018-01-17 17:20:00+00:00  10328.22  10359.00  10328.22  10337.00  14.206040
2018-01-17 17:21:00+00:00  10337.01  10337.01  10180.01  10192.15   7.906481
2018-01-17 17:22:00+00:00  10199.99  10200.00  10129.88  10160.08  28.119562
2018-01-17 17:23:00+00:00  10140.01  10161.00  10115.00  10115.01  11.283704
2018-01-17 17:24:00+00:00  10115.00  10194.99  10102.35  10194.99  10.617131
2018-01-17 17:25:00+00:00  10194.99  10240.00  10194.98  10220.00   8.586766
2018-01-17 17:26:00+00:00  10210.02  10210.02  10101.00  10138.00   6.616969
2018-01-17 17:27:00+00:00  10137.99  10138.00  10108.76  10124.94   9.962978
2018-01-17 17:28:00+00:00  10124.95  10142.39  10124.94  10142.39   2.262249

## write data

In [7]: import numpy as np

In [8]: import pandas as pd

In [9]: data = np.array([(pd.Timestamp('2017-01-01 00:00').value / 10**9, 10.0)], dtype=[('Epoch', 'i8'), ('Ask', 'f4')])

In [10]: cli.write(data, 'TEST/1Min/Tick')
Out[10]: {'responses': None}

In [11]: cli.query(pymkts.Params('TEST', '1Min', 'Tick')).first().df()
Out[11]:
                           Ask
Epoch
2017-01-01 00:00:00+00:00  10.0

Client

pymkts.Client(endpoint='')

Construct a client object with endpoint.

Query

pymkts.Client#query(symbols, timeframe, attrgroup, start=None, end=None, limit=None, limit_from_start=False)

You can build parameters using pymkts.Params.

- symbols: string for a single symbol, or a list of symbol strings for a multi-symbol query
- timeframe: timeframe string
- attrgroup: attribute group string.
symbols, timeframe and attrgroup compose a bucket key to query in the server.
- start: unix epoch second (int), datetime object or timestamp string. The result will include only data timestamped equal to or after this time.
- end: unix epoch second (int), datetime object or timestamp string. The result will include only data timestamped equal to or before this time.
- limit: the number of records to be returned, counting from either the start or end boundary.
- limit_from_start: boolean to indicate limit is from the start boundary. Defaults to False.

Pass one or multiple instances of Params to Client.query(). It will return a QueryReply object which holds the internal numpy array data returned from the server.

Write

pymkts.Client#write(data, tbk)

You can write a numpy array to the server via the Client.write() method. The data parameter must be numpy's recarray type with a column named Epoch of int64 type as the first column. tbk is the bucket key of the data records.

List Symbols

pymkts.Client#list_symbols()

The list of all symbols stored in the server is returned.

Server version

pymkts.Client#server_version()

Returns the string of the Marketstore-Version header from a server response.

Streaming

If the server supports WebSocket streaming, you can connect to it using the pymkts.StreamConn class. For convenience, you can call pymkts.Client#stream() to obtain an instance with the same server information as the REST client. Once you have this instance, you set up some event handlers with either the register() method or the @on() decorator. These methods accept regular expressions to filter which stream to act on. To actually connect and start receiving the messages from the server, you call run() with the stream names. By default, it subscribes to all by */*/*.
The endpoint string is a full URL with "ws" or "wss" scheme with the port and path. pymkts.StreamConn#register(stream_path, func) @pymkts.StreamConn#on(stream_path) Add a new message handler to the connection. The function will be called with handler(StreamConn, {"key": "...", "data": {...,}}) if the key (time bucket key) matches with the stream_path regular expression. The on method is a decorator version of pymkts.StreamConn#run([stream1, stream2, ...]) Start communication with the server and go into an indefinite loop. It does not return until unhandled exception is raised, in which case the connection is closed so you need to implement retry. Also, since this is a blocking method, you may need to run it in a background thread. An example code is as follows. import pymarketstore as pymkts conn = pymkts.StreamConn('ws://localhost:5993/ws') @conn.on(r'^BTC/') def on_btc(conn, msg): print('received btc', msg['data']) conn.run(['BTC/*/*']) # runs until exception -> received btc {'Open': 4370.0, 'High': 4372.93, 'Low': 4370.0, 'Close': 4371.74, 'Volume': 3.3880948699999993, 'Epoch': 1507299600} Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pymarketstore/
3 Prototyping the Main View

Written by Audrey Tam

Now for the fun part! In this chapter, you'll start creating a prototype of your app, which has four full-screen views:

- Welcome
- Exercise
- History
- Success

Creating the Exercise view

You'll start by laying out the Exercise view, because it contains the most subviews. Here's the list of what your user sees in this view:

- A title and page numbers are at the top of the view and a History button is at the bottom.
- The page numbers indicate there are four numbered pages.
- The exercise view contains a video player, a timer, a Start/Done button and rating symbols.

And here's the list rewritten as a list of subviews:

- Header with page numbers
- Video player
- Timer
- Start/Done button
- Rating
- History button

You could sketch your screens in an app like Sketch or Figma before translating the designs into SwiftUI views. But SwiftUI makes it easy to lay out views directly in your project, so that's what you'll do. The beauty of SwiftUI is it's declarative: You just declare the views you want to display, in the order you want them to appear. If you've created web pages, it's a similar experience.

Outlining the Exercise view

➤ Continue with your project from the previous chapter or open the project in this chapter's starter folder.

There's a lot to do in this view, so you'll start by creating an outline with placeholder Text views.

➤ Open ExerciseView.swift.

➤ The canvas preview uses the run destination simulated device by default.
You’ll start by laying out the interface for the iPad version of HIITFit, so select an iPad simulator: ➤ If the iPad doesn’t fit in the canvas, zoom out: ➤ ExerciseView has six subviews, so duplicate the Text(exerciseNames[index]) view, then edit the arguments — in the code or in the canvas — to create this list: VStack { Text(exerciseNames[index]) Text("Video player") Text("Timer") Text("Start/Done button") Text("Rating") Text("History button") } The first Text view is the starting point for the Header view. You’ll create the Header and Rating views in their own files. The video player, timer and buttons are simple views, so you’ll just create them directly in ExerciseView. Creating the Header view Skills you’ll learn in this section: modifying views using the Attributes inspector or auto-suggestions; method signatures with internal and external parameter names; using SF Symbols; Imageview; extracting and configuring subviews; working with previews You’ll create this Header view by adding code to ExerciseView, then you’ll extract it as a subview and move it to its own file. ➤ To prepare for the later extraction, embed the first Text view in a VStack: Hold down the Command key, click Text(exerciseNames[index]) then select Embed in VStack: Note: This version of the Command-click menu appears only when the canvas is open. If you don’t see the Embed in VStack option, press Option-Command-Return to open the canvas. Now this Text view is inside a VStack: VStack { Text(exerciseNames[index]) } The many ways to modify a view ➤ In the canvas, select the “Squat” Text view. To open the Attributes inspector, click the inspectors button in the toolbar, then select the Attributes inspector: This inspector has sections for the most commonly-used modifiers: Font, Padding and Frame. You could select a font size from the Font ▸ Font menu, but you’ll use the search field this time. This is a more general approach to adding modifiers. 
➤ Click in the Add Modifier field, then type font and select Font from the menu: The font size of “Squat” changes in both the canvas and in code: .font(.title) ➤ Xcode suggests the font size title, but this is only a placeholder. To “accept” this value, click .title, then press Return to set it as the value. Note: Xcode and SwiftUI do a good job of auto-suggesting or defaulting to an option that is probably what you want. ➤ To see other options, Control-Option-click font or title. This opens the font modifier’s pop-up Attributes inspector. In the Font section, click the selected Font option Title to see the Font menu: ➤ Select Large Title from the menu: Now “Squat” is even bigger! Note: Putting the modifier on its own line is a SwiftUI convention. A view often has several modifiers, each on its own line. This makes it easy to move a modifier up or down, because sometimes the order makes a difference. ➤ Here’s another way to see the font menu. Select .largeTitle and replace it with . — Xcode’s standard auto-suggest mechanism lists the possible values: ➤ Select largeTitle from the menu. ➤ Once you’re familiar with SwiftUI modifiers, you might prefer to just type. .font(.largeTitle) and type .font. Xcode auto-suggests two font methods: ➤ Select either method and Xcode auto-completes with (.title). Change this to .largeTitle. Swift Tip: The method signature func font(_ font: Font?) -> Textindicates this method takes one parameter of type Font?and returns a Textview. The “_” means there’s no external parameter name — you call it with font(.title), not with font(font: .title). Creating page numbers In addition to the name of the exercise, the header should display the page numbers with the current page number highlighted. This replaces the TabView index dots. You could just display Text("1"), Text("2") and so on, but Apple provides a wealth of configurable icons as SF Symbols. ➤ The SF Symbols app is the best way to view and search the collection. 
Download and install it from apple.co/3hWxn3G. Some symbols must be used only for specific Apple products like FaceTime or AirPods. You can check symbols for restrictions at sfsymbols.com. ➤ After installing the SF Symbols app, open it and select the Indices category. Scroll down to the numbers: You can choose black numbers on a white background or the other way around, in a circle or a square. The fill version is a good choice to represent the current page, with no-fill numbers for the other pages. ➤ SF Symbol names can be long, but it’s easy to copy them from the app. Select a symbol, then open the app’s Edit menu: Note: The keyboard shortcut is Shift-Command-C. ➤ Select the no-fill “1.circle” symbol, press Shift-Command-C, then use the name to add this line of code below the title Text: Image(systemName: "1.circle") Image is another built-in SwiftUI View, and it has an initializer that takes an SF Symbol name as a String. ➤ Before adding more numbers, Command-click this Image to embed it in an HStack, so the numbers will appear side by side. Then duplicate and edit more Image views to create the other three numbers: HStack { Image(systemName: "1.circle") Image(systemName: "2.circle") Image(systemName: "3.circle") Image(systemName: "4.circle") } And here’s your header: The page numbers look too small. Because SF Symbols are integrated into the San Francisco system font — that’s the “SF” in SF Symbols — you can treat them like text and use font to specify their size. ➤ You could add .font(.title) to each Image, but it’s quicker and neater to add it to the HStack container: HStack { Image(systemName: "1.circle") Image(systemName: "2.circle") Image(systemName: "3.circle") Image(systemName: "4.circle") } .font(.title2) The font size applies to all views in the HStack: ➤ You can modify an Image to override the HStack modifier. 
For example, modify the first number to make it extra large: Image(systemName: "1.circle") .font(.largeTitle) Now only the first symbol is larger: ➤ Delete the Image modifier, so all the numbers are the same size. Your ExerciseView now has a header, which you’ll reuse in WelcomeView. So you’re about to extract the header code to create a HeaderView. You’ll use Xcode’s refactoring tool, which works well. But it’s always a good idea to commit your code before a change like this, just in case. Select Source Control ▸ Commit… or press Option-Command-C. Extracting a subview OK, drum roll … ➤ Command-click the VStack containing the title Text and the page numbers HStack, then select Extract Subview from the menu: Xcode moves the whole VStack into the body property of a new view with the placeholder name ExtractedView. And ExtractedView() is where the VStack used to be: ➤ While the placeholders are still highlighted, type HeaderView and press Return. If you miss the moment, just edit both placeholders. The error flag shows where you need a parameter. The index property is local to ExerciseView, so you can’t use it in HeaderView. You could just pass index to HeaderView and ensure it can access the exerciseNames array. But it’s always better to pass just enough information. This makes it easier to set up the preview for HeaderView. Right now, HeaderView needs only the exercise name. ➤ Add this property to HeaderView, above the body property: let exerciseName: String ➤ And replace exerciseNames[index] in Text: Text(exerciseName) ➤ Scroll up to ExerciseView, where Xcode is complaining about a missing argument in HeaderView(). Click the error icon to click Fix, then complete the line to read: HeaderView(exerciseName: exerciseNames[index]) ➤ Now press Command-N to create a new SwiftUI View file and name it HeaderView.swift. Because you were in ExerciseView.swift when you pressed Command-N, the new file appears below it and in the same group folder. 
Your new file opens in the editor with two error flags: - Invalid redeclaration of ’HeaderView’. - Missing argument for parameter ’exerciseName’. ➤ To fix the first, in ExerciseView.swift, select the entire 17 lines of your new HeaderView and press Command-X to cut it — copy it to the clipboard and delete it from ExerciseView.swift. ➤ Back in HeaderView.swift, replace the 5-line boilerplate HeaderView with what’s in the clipboard. ➤ To fix the second error, in previews, let Xcode add the missing parameter, then enter any exercise name for the argument: HeaderView(exerciseName: "Squat") Because you pass only the exercise name to HeaderView, the preview doesn’t need access to the exerciseNames array. Working with previews The preview still uses the iPad simulator, which takes up a lot of space. You can modify the preview to show only the header. ➤ In HeaderView_Previews, Control-Option-click HeaderView(...) then type preview in the Add Modifier field: ➤ Select Preview Layout to add this modifier: .previewLayout(.sizeThatFits) ➤ The placeholder value is sizeThatFits, and this is what you want, but you must “accept” it. Click sizeThatFits, then press Return to set it as the value. ➤ Resume the preview to see just the header: ➤ Now you’re all set to see the power of previews. In the preview canvas, click the Duplicate Preview button: You’ve made a copy of the preview in the canvas and in the code: Group { HeaderView(exerciseName: "Squat") .previewLayout(.sizeThatFits) HeaderView(exerciseName: "Squat") .previewLayout(.sizeThatFits) } Just like when you duplicated the Text view, Xcode embeds the two views in a container view. This time it’s a Group, which doesn’t specify anything about layout. Its only purpose is to wrap multiple views into a single view. Swift Tip: The bodyand previewsproperties are computed properties. They must return a value of type some View, so what’s inside the closure must be a single view. ➤ Now you can modify the second preview. 
Click its Inspect Preview button: The inspector lets you set Color Scheme and Dynamic Type. ➤ Set Color Scheme to Dark and Dynamic Type to accessibilityLarge. That’s how easy it is to see how this view appears on a device with these settings. Now return to ExerciseView.swift, where the header is just the way you left it. Time to commit changes again: Select Source Control ▸ Commit… or press Option-Command-C. And this is the last time I’ll remind you. ;] Next, you’ll set up the video player. Playing a video Skills you’ll learn in this section: using AVPlayerand VideoPlayer; using bundle files; optional types; make conditional; using GeometryReader; adding padding ➤ In ExerciseView.swift, add this statement just below import SwiftUI: import AVKit AVKit is a framework in Apple’s software development kits (SDKs). Importing it allows you to use high-level types like AVPlayer to play videos with the usual playback controls. ➤ Now replace Text("Video player") with this line: VideoPlayer(player: AVPlayer(url: url)) Xcode complains it “cannot find ’url’ in scope”, so you’ll define this value next. Getting the URL of a bundle file You need the URL of the video file for this exercise. The videonames array lists the name part of the files. All the files have file extension .mp4. These files are in the project folder, which you can access as Bundle.main. Its method url(forResource:withExtension:) gets you the URL of a file in the main app bundle if it exists. Otherwise, it returns nil which means no value. The return type of this method is an Optional type, URL?. Swift Tip: Swift’s Optionaltype helps you avoid many hard-to-find bugs that are common in other programming languages. It’s usually declared as a type like Intor Stringfollowed by a question mark: Int?or String?. If you declare var index: Int?, indexcan contain an Intor no value at all. If you declare var index: Int— with no ?— indexmust always contain an Int. 
Use if let index = index {...}to check whether an optional has a value. The indexon the right of =is the optional value. If it has a value, the indexon the left of =is an Intand the condition is true. If the optional has no value, the assignment =is not performed and the condition is false. You can also check index != nil, which returns trueif indexhas a value. Note: You’ll learn more about the app bundle in Chapter 8, “Saving Settings” and about optionals in Chapter 9, “Saving History Data”. So you need to wrap an if let around the VideoPlayer. Yet another pair of braces! It can be hard to keep track of them all. But Xcode is here to help. ;] ➤ Command-click VideoPlayer and select Make Conditional. And there’s an if- else closure wrapping VideoPlayer! Xcode Tip: Take advantage of features like Embed in HStack and Make Conditional to let Xcode keep your braces matched. To adjust what’s included in the closure, use Option-Command-[ or Option-Command-] to move the closing brace up or down. ➤ Now replace if true { with: if let url = Bundle.main.url( forResource: videoNames[index], withExtension: "mp4") { ➤ In the else closure, replace EmptyView() with: Text("Couldn’t find \(videoNames[index]).mp4") .foregroundColor(.red) Swift Tip: The string interpolation code \(videoNames[index])inserts the value of videoNames[index]into the string literal. ➤ It’s easy to test this else code: Create a typo by changing the withExtension argument to np4, then refresh the preview: Actually, it’s squat.np4 that isn’t in the app bundle. ➤ Undo the np4 typo. ➤ Now click Live Preview, then click the play button to watch the video. If the play button disappears, try this: Click on the video then press Space. Getting the screen dimensions The video takes up a lot of space on the screen. You could set the width and height of its frame to some constant values that work on most devices, but it’s better if these measurements adapt to the size of the device. 
➤ In body, Command-click VStack and select Embed…. Change the Container { placeholder to this line: GeometryReader { geometry in GeometryReader is a container view that provides you with the screen’s measurements for whatever device you’re previewing or running on. ➤ Add this modifier to VideoPlayer: .frame(height: geometry.size.height * 0.45) The video player now uses only 45% of the screen height: Adding padding ➤ The header looks a little squashed. Control-Option-click HeaderView to add padding to its bottom: This gives you a new modifier padding(.bottom) and now there’s space between the header and the video: Note: You could have added padding to the VStackin HeaderView.swift, but HeaderViewis a little more reusable without padding. You can choose whether to add padding and how to customize it whenever you use HeaderViewin another view. Now head back to ContentView.swift and Live Preview your app. Swipe from one page to the next to see the different exercise videos. Finishing the Exercise view Skills you’ll learn in this section: Textwith date and style parameters; types in Swift; Date(); Button, Spacer, foregroundColor; repeating a view; unused closure parameter To finish off the Exercise view, add the timer and buttons, then create the Ratings view. Creating the Timer view ➤ Add this property to ExerciseView, above body: let interval: TimeInterval = 30 These are high-intensity interval exercises, so the timer counts down from 30 seconds. ➤ Replace Text("Timer") with this code: Text(Date().addingTimeInterval(interval), style: .timer) .font(.system(size: 90)) The default initializer Date() creates a value with the current date and time. The Date method addingTimeInterval(_ timeInterval:) adds interval seconds to this value. ➤ The Swift Date type has a lot of methods for manipulating date and time values. Option-click Date and Open in Developer Documentation to scan what’s available. You’ll dive a little deeper into Date when you create the History view. 
The timeInterval parameter’s type is TimeInterval. This is just an alias for Double. If you say interval is of type Double, you won’t get an error, but TimeInterval describes the value’s purpose more accurately. Swift Tip: Swift is a strongly typed language. This means that you must use the correct type. When using numbers, you can usually pass a value of a wrong type to the initializer of the correct type. For example, Double(myIntValue)creates a Doublevalue from an Intand Int(myDoubleValue)truncates a Doublevalue to create an Int. If you write code in languages that allow automatic conversion, it’s easy to create a bug that’s very hard to find. Swift makes sure you, and people reading your code, know that you’re converting one type to another. You’re using the Text view’s (_:style:) initializer for displaying dates and times. The timer and relative styles display the time interval between the current time and the date value, formatted as “mm:ss” or “mm min ss sec”, respectively. These two styles update the display every second. You set the system font size to 90 points to make a really big timer. ➤ Click Live Preview to watch the timer count down from 30 seconds: Because you set date to 30 seconds in the future, the displayed time interval decreases by 1 every second, as the current time approaches date. If you wait until it reaches 0 (change interval to 3 so you don’t have to wait so long), you’ll see it start counting up, as the current time moves away from date. Don’t worry, this Text timer is just for the prototype. You’ll replace it with a real timer in Chapter 7, “Observing Objects”. Creating buttons Creating buttons is simple, so you’ll do both now. ➤ Replace Text("Start/Done button") with this code: Button("Start/Done") { } .font(.title3) .padding() Here, you gave the Button the label Start/Done and an empty action. You’ll add the action in Chapter 7, “Observing Objects”. Then, you enlarged the font of its label and added padding all around it. 
➤ Replace Text("History button") with this code:

Spacer()
Button("History") { }
  .padding(.bottom)

The Spacer pushes the History button to the bottom of the screen. The padding pushes it back up a little, so it doesn't look squashed. You'll add this button's action in Chapter 6, "Adding Functionality to Your App".

Here's what ExerciseView looks like now:

Now for the last subview in ExerciseView: RatingView.

Creating the Rating view

➤ Create a new SwiftUI View file named RatingView.swift.

This will be a small view, so add this modifier to its preview:

.previewLayout(.sizeThatFits)

A rating view is usually five stars or hearts, but the rating for an exercise should reflect the user's exertion.

➤ To find a more suitable rating symbol, open the SF Symbols app and select the Health category:

➤ The ECG wave form seems just right for rating high-intensity exercises! Select it, then press Shift-Command-C to copy its name.

➤ Replace the boilerplate Text with this code, pasting the symbol name in between double quotation marks:

Image(systemName: "waveform.path.ecg")
  .foregroundColor(.gray)

You've added the SF Symbol as an Image and set its color to gray. A rating view needs five of these symbols, arranged horizontally.

➤ In the canvas or in the editor, Command-click the Image and select Repeat from the menu:

Xcode gives you a loop, with suggested range 0 ..< 5:

ForEach(0 ..< 5) { item in
  Image(systemName: "waveform.path.ecg")
    .foregroundColor(.gray)
}

➤ Click this range and press Return to accept it.

In the canvas, you see five separate previews! Xcode should have embedded them in a stack, like when you duplicated a view, but it didn't.

➤ Command-click ForEach and embed it in an HStack. Now your code looks like this:

HStack {
  ForEach(0 ..< 5) { item in
    Image(systemName: "waveform.path.ecg")
      .foregroundColor(.gray)
  }
}

That's better! Now the symbols are all in a row. But they're very small.

➤ Remember, you can use font to specify the size of SF Symbols.
So add this modifier to the Image: .font(.largeTitle) Bigger is better! One last detail: The code Xcode created for you contains an unused closure parameter item: ForEach(0 ..< 5) { item in ➤ You don’t use item in the loop code, so replace item with _: ForEach(0 ..< 5) { _ in Swift Tip: It’s good programming practice to replace unused parameter names with _. The alternative is to create a throwaway name, which takes a non-zero amount of time and focus and will confuse you and other programmers reading your code. ➤ Now head back to ExerciseView.swift to use your new view. Replace Text("Rating") with this code: RatingView() .padding() Your ECG wave forms now march across the screen! In Chapter 6, “Adding Functionality to Your App”, you’ll add code to let the user set a rating value and represent this value by setting the right number of symbols to red. And, in Chapter 8, “Saving Settings”, you’ll save the rating values so they persist across app launches. Key points - SwiftUI is declarative: Simply declare views in the order you want them to appear. - Create separate views for the elements of your user interface. This makes your code easier to read and maintain. - Use the SwiftUI convention of putting each modifier on its own line. This makes it easy to move or delete a modifier. - Xcode and SwiftUI provide auto-suggestions and default values that are often what you want. - Let Xcode help you avoid errors: Use the Command-menu to embed a view in a stack or in an if-elseclosure, or extract a view into a subview. - The SF Symbols app provides icon images you can configure like text. - Previews are an easy way to check how your interface appears for different user settings. - Swift is a strongly typed programming language. GeometryReaderenables you to set a view’s dimensions relative to the screen dimensions. Where to go from here? Your Exercise view is ready. In the next chapter, you’ll lay out the other three full-screen views your app needs.
https://koenig-assets.raywenderlich.com/books/swiftui-apprentice/v1.0/chapters/3-prototyping-the-main-view
Objects for depth-first traversal of HyperTrees.

#include <vtkHyperTreeCursor.h>

Objects that can perform depth-first traversal of HyperTrees. This is an abstract class. Cursors are created by the HyperTree implementation.

Definition at line 42 of file vtkHyperTreeCursor.h.

Member function documentation (definitions start at line 45 of vtkHyperTreeCursor.h):

- Return the HyperTree to which the cursor is pointing.
- Return the index of the current vertex in the tree.
- Is the cursor pointing to a leaf?
- Is the cursor at tree root?
- Return the level of the vertex pointed by the cursor.
- Return the child number of the current vertex relative to its parent.
- Move the cursor to the parent of the current vertex.
- Move the cursor to child 'child' of the current vertex.
- Move the cursor to the same vertex pointed by 'other'.
- Is 'this' equal to 'other'?
- Create a copy of 'this'.
- Are 'this' and 'other' pointing on the same hypertree?
- Return the number of children for each node (non-vertex leaf) of the tree.
- Return the dimension of the tree.
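To illustrate how a cursor API like this supports depth-first traversal, here is a small Python sketch. MiniCursor is a made-up stand-in (not VTK code) exposing the same style of navigation as the class above (ToChild, ToParent, IsLeaf, GetNumberOfChildren), and count_leaves walks a tree using only cursor moves, the way one would with a HyperTree cursor.

```python
class MiniCursor:
    """Toy analogue of a hypertree cursor over a nested-list tree:
    a list is an internal node, anything else is a leaf."""
    def __init__(self, root):
        self._path = [root]          # stack of nodes from root to current

    def is_leaf(self):
        return not isinstance(self._path[-1], list)

    def get_number_of_children(self):
        return 0 if self.is_leaf() else len(self._path[-1])

    def to_child(self, child):
        self._path.append(self._path[-1][child])

    def to_parent(self):
        self._path.pop()

def count_leaves(cursor):
    """Depth-first traversal using only cursor moves."""
    if cursor.is_leaf():
        return 1
    total = 0
    for child in range(cursor.get_number_of_children()):
        cursor.to_child(child)
        total += count_leaves(cursor)
        cursor.to_parent()           # restore the cursor before moving on
    return total

# internal nodes have 4 children each, like a quadtree refinement
tree = [[1, 2, 3, 4], 5, 6, [7, 8, [9, 10, 11, 12], 13]]
print(count_leaves(MiniCursor(tree)))  # 13
```

The key property, mirrored from the VTK design, is that the traversal never builds an explicit node list: all state lives in the cursor's path.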
https://vtk.org/doc/nightly/html/classvtkHyperTreeCursor.html
I'm curious about this game engine that is most commonly used. I'm not sure if I want to work in the gaming industry, but I want to ask this either way. So, I have the option to take a class called "Intro to MS Visual C++ .NET," whose description is:

Introduction to Visual C++ Graphical User Interface (GUI) programming, the Microsoft .NET Visual Studio, .NET Framework Library, and the Common Language Runtime (CLR). Includes Visual C++ Managed Extensions, control structures, methods, arrays, classes, Active Server Pages (ASP) .NET Web Services, database access, GUI windows forms, windows controls, event handling/delegates, files and streams, multithreading, namespaces and assemblies. Emphasis is on building the foundation necessary to thoroughly understand the capabilities of .NET and object-oriented, event-driven client/server GUI software development.

Is this knowledge used in the Unreal game engine? Do I need to know this to use Unreal more 'fluently'? If I don't, what would that knowledge be good for?

Technically you don't have to know anything to use a game engine 😉, that's half the purpose anyway, wouldn't you say? It creates a fancy environment for you, so you don't have to worry about certain things. Personally I think that if you feel you need to know something to help with using a tool that is supposed to facilitate game development, then something is wrong, yes? If you're fine with your engine now, then why bother?

Not really; that class would be more applicable when making applications for Windows from scratch. While Unreal is, I believe, written in C++, I don't think this particular class is exactly what you want.
https://repl.it/talk/ask/Unreal-Engine-using-C/16987
TaskQueue

A TaskQueue is basically a FIFO queue where tasks can be enqueued for execution. The tasks will be executed concurrently up to an allowed maximum number. A task is simply a non-throwing asynchronous function with a single parameter, a completion handler, which is called when the task finishes.

Features

- Employs the execution of asynchronous "non-blocking" tasks.
- The maximum number of concurrently executing tasks can be set, even during the execution of tasks.
- Employs a "barrier" task which serves as a synchronisation point and allows us to "join" all previously enqueued tasks.
- A task queue can be suspended and resumed.
- A task queue can have a target task queue where tasks which are ready for execution will be enqueued; the target then becomes responsible for executing the task (which again may actually be performed by another target task queue).
- Task and TaskQueue can be used as a replacement for NSOperation and NSOperationQueue.

With barriers, suspend and resume functionality, target relationships and control of the concurrency level, we can design complex systems where the execution of asynchronous tasks is governed by external conditions, interdependencies and the restrictions of system resources.

Description

With a TaskQueue we can control the maximum number of concurrent tasks that run "within" the task queue. In order to accomplish this, we enqueue tasks into the task queue. If the actual number of running tasks is less than the maximum, the enqueued task will be executed immediately. Otherwise, it will be delayed until enough previously enqueued tasks have completed. At any time, we can enqueue further tasks, while the maximum number of running tasks is continuously guaranteed. Furthermore, at any time, we can change the number of maximum concurrent tasks and the task queue will adapt until the constraints are fulfilled.
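The core behaviour described here, a queue that caps the number of concurrently running completion-handler tasks, can be sketched in a few lines of Python. MiniTaskQueue below is a toy model for illustration only: it gates concurrency with a semaphore but ignores FIFO ordering, barriers, suspension and target queues, and it is not the Swift library's API.

```python
import threading
import time

class MiniTaskQueue:
    """Toy model of the TaskQueue idea: a task is a function taking a
    single completion handler, and at most `max_concurrent_tasks`
    tasks run at once."""
    def __init__(self, max_concurrent_tasks):
        self._slots = threading.Semaphore(max_concurrent_tasks)

    def enqueue(self, task, completion):
        def run():
            with self._slots:          # blocks while the limit is reached
                task(completion)
        threading.Thread(target=run).start()

# instrument the tasks to record the peak number running concurrently
lock = threading.Lock()
active = peak = 0
done = threading.Semaphore(0)

def my_task(completion):
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)                   # simulate asynchronous work
    with lock:
        active -= 1
    completion("finished")

queue = MiniTaskQueue(max_concurrent_tasks=2)
for _ in range(6):
    queue.enqueue(my_task, lambda result: done.release())
for _ in range(6):
    done.acquire()                     # "join": wait for all completions
print("peak concurrency:", peak)       # never exceeds 2
```

Six tasks are enqueued but the semaphore guarantees no more than two overlap, which is the invariant the Description above promises.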
Installation

Note: Swift 4.0, 3.2 and 3.1 require slightly different syntax: for Swift 4 use version >= 0.9.0, for Swift 3.2 compatibility use version 0.8.0, and for Swift 3.1 use version 0.7.0.

Carthage

Add

github "couchdeveloper/TaskQueue"

to your Cartfile. This is appropriate for use with Swift 4; otherwise specify version constraints as noted above. In your source files, import the library as follows:

import TaskQueue

CocoaPods

Add the following line to your Podfile:

pod 'cdTaskQueue'

This is appropriate for use with Swift 4; otherwise specify version constraints as noted above. In your source files, import the library as follows:

import cdTaskQueue

SwiftPM

To use SwiftPM, add this to your Package.swift:

.Package(url: "")

Usage

Suppose there are one or more asynchronous tasks and we want to execute them in some controlled manner. In particular, we want to guarantee that no more than a set limit of those tasks execute concurrently. For example, many times we just want to ensure that only one task is running at a time. Furthermore, we want to be notified when all tasks of a certain set have completed and then take further actions, for example, based on the results, enqueue further tasks.

So, what's a task anyway? A task is a Swift function or closure which executes asynchronously, returns Void and has a single parameter, the completion handler. The completion handler has a single parameter where the eventual Result, which is computed by the underlying operation, will be passed when the task completes. We can use any type of "Result", for example a tuple (Value?, Error?) or more handy types like Result<T> or Try<T>.

Canonical task function:

func task(completion: @escaping (R)->()) { ... }

where R is, for example, (T?, Error?) or Result<T> or (Data?, Response?, Error?) etc.
Note that the type R may represent a Swift Tuple, for example (T?, Error?), and please note that there are syntax changes in Swift 4:

Caution: In Swift 4, please consider the following changes regarding tuple parameters: If a function type has only one parameter and that parameter's type is a tuple type, then the tuple type must be parenthesized when writing the function's type. For example, ((Int, Int)) -> Void is the type of a function that takes a single parameter of the tuple type (Int, Int) and doesn't return any value. In contrast, without parentheses, (Int, Int) -> Void is the type of a function that takes two Int parameters and doesn't return any value. Likewise, because Void is a type alias for (), the function type (Void) -> Void is the same as (()) -> (), a function that takes a single argument that is an empty tuple. These types are not the same as () -> (), a function that takes no arguments.

So, this means that if the result type of the task's completion handler is a Swift Tuple, for example (String?, Error?), the task must have the following signature:

func myTask(completion: @escaping ((String?, Error?))->()) { ... }

Now, create a task queue where we can enqueue a number of those tasks. We can control the number of maximum concurrently executing tasks in the initialiser:

let taskQueue = TaskQueue(maxConcurrentTasks: 1)

// Create 8 tasks and let them run:
(0...8).forEach { _ in
  taskQueue.enqueue(task: myTask) { (String?, Error?) in
    ...
  }
}

Note that the start of a task will be delayed until the current number of running tasks is below the allowed maximum number of concurrent tasks. In the above code, the asynchronous tasks are effectively serialised, since the maximum number of concurrent tasks is set to 1.

Using a barrier

A barrier function allows us to create a synchronisation point within the TaskQueue.
When the TaskQueue encounters a barrier function, it delays the execution of the barrier function and any further tasks until all tasks enqueued before the barrier have been completed. At that point, the barrier function executes exclusively. Upon completion, the TaskQueue resumes its normal execution behaviour.

let taskQueue = TaskQueue(maxConcurrentTasks: 4)

// Create 8 tasks and let them run (max 4 will run concurrently):
(0...8).forEach { _ in
  taskQueue.enqueue(task: myTask) { (String?, Error?) in
    ...
  }
}

taskQueue.enqueueBarrier {
  // This will execute exclusively on the task queue after all previously
  // enqueued tasks have been completed.
  print("All tasks finished")
}

// enqueue further tasks as you like

Specify a Dispatch Queue Where to Start the Task

Even though a task should always be designed such that it is irrelevant on which thread it will be called, the practice is often different. Fortunately, we can specify a dispatch queue in the enqueue function where the task will eventually be started by the task queue, if there should be such a limitation. If a queue is not specified, the task will be started on the global queue (DispatchQueue.global()).

taskQueue.enqueue(task: myTask, queue: DispatchQueue.main) { Result<String> in
  ...
}

Note that this affects only where the task will be started. The task's completion handler will be executed on whatever thread or dispatch queue the task chooses when it completes. There's no way in TaskQueue to specify the execution context for the completion handler.

Constructing a Suitable Task Function from Any Other Asynchronous Function

The function signature for enqueue requires that we pass a task function which has a single parameter completion and returns Void. The single parameter is the completion handler, that is, a function taking a single parameter or a tuple result and returning Void.
So, what if our asynchronous function does not have this signature, for example because it has additional parameters or even returns a result? Take a look at this asynchronous function from URLSession:

dataTask(with url: URL, completionHandler: @escaping (Data?, URLResponse?, Error?) -> Swift.Void) -> URLSessionDataTask

Here, besides the completion handler, we have an additional parameter url which is used to configure the task. The function also has a return value, the created URLSessionDataTask object. In order to use this function with TaskQueue, we need to ensure that the task is configured at the time we enqueue it, and that it has the right signature. We can accomplish both requirements by applying currying to the given function. The basic steps are as follows. Given any asynchronous function with one or more additional parameters and possibly a return value:

func asyncFoo(param: T, completion: @escaping (Result) -> ()) -> U {
    ...
}

we transform it to:

func task(param: T) -> (_ completion: @escaping (Result) -> ()) -> () {
    return { completion in
        let u = asyncFoo(param: param) { result in
            completion(result)
        }
        // handle return value from asyncFoo, if any.
    }
}

That is, we transform the function asyncFoo into another function whose parameters consist only of the configuring parameters, and which returns a function with the single remaining parameter, the completion handler, e.g. ((Result) -> ()) -> (). The signature of this returned function must be valid for the task function required by TaskQueue. "Result" can be a single parameter, e.g. Result<T>, or any tuple, e.g. (T?, Error?) or (T?, U?, Error?), etc. Note that any return value from the original function (here asyncFoo) will be ignored by the task queue; it should be handled by the implementation of the task function, though. You might want to examine this snippet a couple of times to get used to it ;) Then use it as follows:

taskQueue.enqueue(task: task(param: "Param")) { result in
    // handle result ...
}

This ensures that the task will be "configured" with the given parameters at the time it is enqueued. The execution, though, will be delayed until the task queue is ready to execute it.

Example

Here, we wrap a URLSessionTask executing a "GET" into a task function:

func get(_ url: URL) -> (_ completion: @escaping ((Data?, URLResponse?, Error?)) -> ()) -> () {
    return { completion in
        URLSession.shared.dataTask(with: url) { data, response, error in
            completion((data, response, error))
        }.resume()
    }
}

Then use it as follows:

let taskQueue = TaskQueue(maxConcurrentTasks: 4)
taskQueue.enqueue(task: get(url)) { (data, response, error) in
    // handle (data, response, error) ...
}

Having a list of URLs, enqueue them all at once and let them execute under the constraints set in the task queue:

let urls = [ ... ]
let taskQueue = TaskQueue(maxConcurrentTasks: 1) // serialise the tasks
urls.forEach {
    taskQueue.enqueue(task: get($0)) { (data, response, error) in
        // handle (data, response, error) ...
    }
}
https://swiftpack.co/package/couchdeveloper/TaskQueue
Zenith cone crusher supplier in the philippinesGet Best Price Send Message . good quality and inexpensive zenith of Qm flyash block machine. stone crushing machine, concrete batching plant concrete pole machine, vertial pipe making machine: mgmt. certification: iso 9001 Qt egg laying hollow block machine price philippines.. large basalt materials are fed to the jaw crusher evenly and gradually by vibrating feeder through a hopper for primary crushing. after first crushing, the basalt stone will be transferred to cone crusher by belt conveyor for secondary crushing; then the crushed basalt materials will be transferred to the vibrating screen for separating.. alibaba.com offers 867 mining equipment jaw crusher zenith sale products. about of these are crusher. wide variety of mining equipment jaw crusher zenith sale options are available to you, such as key selling points, applicable industries, and warranty.. philippines small crushing machine sealed jaw crusher for lab US $1000-$2000 set good reputation high efficiency good quality fine crushing mobile stone impact crusher. US $17688-$18160 unit shanghai zenith minerals Co ltd. . . price of ballmill for backyard mining in dava.. import quality zenith jaw crusher supplied by experienced manufacturers at global sources. We use cookies to give you the best possible experience on our website. for more details including how to change your cookie settings, please read our cookie policy. stone crusher machine from china zenith.this page is about the zenith stone crusher machine,or crusher machine,or crushing machine read more aggregate crushing plant for sale philippinesgravel . aug 21, 2020 china grinding mill supplier, stone crusher, jaw crusher manufacturers suppliers shanghai dingbo heavy industry machinery.. crusher machine manufacturer malaysia crusher iron ore mining company in malaysia xsm stone crusher machine. 
zenith is world leading mobile iron crushing machine manufacturer, we indonesia, philippines, malaysia .. stone crusher sand making machine product listings on seekpart.com choose quality stone crusher sand making machine products from large database of stone crusher equipment necessary for a gravel crusher plant,gravel washing . ton crushers complete commercial quarry equipment china crusher zenith 200 china crusher zenith 200 zenith crushers complete commercial quarry equipment 200 th complete crusher machine commercial quarry equipment page is about the 200 th contact supplier 300 get prices get a quote chat online learn more complete quarry equipmen price indrive. shanghai zenith minerals Co ltd jaw crusher impact. zenith can provide you the complete stone crushing and beneficiation plants besides standalone crushers grinding mills and beneficiation machines as well as their spare parts can be also available at zenith We hold pursuing the zenith technology and quality as read more. portable stone crusher, cone crushers, stone crusher manufacturer supplier in china, offering zenith high quality cone crusher with capacity ph, new type is ce approved stone crusher machine price, ph zenith stone used crushing plant for sale and so. used portable crushing plant for sale philippines. results for jaw crusher for sale philippines; for sale jaw crusher at sulit.com. ph kangwon portable 150 ton rock crusher jaw 3624 cone 1200 click & chat now. dimension stone crusher,dimension stone grinding mill,process the sbm mining machine is custom designed to reduce your operating costs and increase your mine production by getting your mining, haulage and conveying systems click & chat now. high performance stone crusher price stone crusher for sale US 1000 99999 set new impact crusher stone crusher from shanghai get price philippines sand machine stone crusher mining equipment zenith is one of the machinery manufacturers in the sand mining construction and recycling industry. 
china zenith large capacity jaw crusher PE 600 900 jaw crusher, stone jaw crusher, jaw crusher machine manufacturer supplier in china, offering zenith large capacity jaw crusher pe, best seller concrete mixing plant concrete mixing plant, mixing station, mh compact commercial concrete mixing plant factory concrete mixing station and so on. stone crusher zenith indonesia. zenith jaw crusher spares dealers royal tele shopping. zenith crusher plant spares in korea mical jhone. zenith jaw crusher spares dealers . zenith jaw crusher how to complete. stone crushers,if you want to konw more info about the zenith jaw crusher,you can contact zenith company offers complete series jaw you will get the price list and. the companys main business is various crushing equipment, including ore crusher equipment, mobile crushing station equipment, cone crusher and special material crushing equipment such as construction waste equipment. all of our machines meet our rigorous quality checks and are field tested before being shipped... rock crusher dealers mobile stone crusher machine. rock crusher dealers We are one of the largest suppliers of used crushers and crushing jaw crusher, search dealers. postal code. Or address. read more. small stone crusher manufacturer in the philippines. zenith has the high quality stone crusher and grinding mill to meet different customers such as jaw crusher, mobile crusher, cone . mar 11, 2016 the special and large equipment reach to sets which includes large boring machine, large cranes, cnc cutting machine, automatic welding equipment, etc. cad design has covered 100%, stone processing quality has reached standard, and machine-made sand has reached the standard.. . auto max 1300 cone crusher crusher machines. cone crusher 1200 1300 autocone & automax spare parts oil flinger, 1200. view image, bearing clamp, 1200.. zenith stone crusher india solutions be. 
stone crushing plant stone crushing plant is also called rock crushing plant and stone production line, which is the professional industrial machine to crush sand and stones it is mainly composed of vibrating feeder, jaw crusher, impact crusher... crusher for sale philippines stone crusher machine. crusher for sale price of stone crusher machine in. crusher for sale, price of stone crusher machine in leyte, eastern visayas, philippines date author zenith leyte is a province in the philippines located in the eastern visayas region, occupying the northern three. more. get price. used stone crushing machine for sale in philippines sep get price. secondary jaw crusher in the philippines stone crusher quarry We offer high quality jaw crusher and the most attentive service, resolve customer any worries, if you need our products please feel free to contact us. supplier of jaw crushers in the philippines small stone crusher machines In philippines cone crusher suppliersmall stone . zenith for concrete block machines pakistan crusher,stone gulin provide the zenith for concrete block machines solution case for you. vsi sand making machine belt conveyor sand washing machine grinding mill raymond mill. zenith machines 913 new price crusher south africa.. stone crusher project in the philippines. crushing plant aggregate sand and gravel philippines. stone crusher philippines or rock crusher philippines can produce . stone jaw crusher stone . watch stone breaking machine fine impact crusher sand brick making machine video by best in agency on tradekey.com. crushing equipment zenith crusher for sale. mobile impact crusher, also called mobile impact crushing plant, is a star product of zenith. equipped with the unit configuration, zeniths mobile impact crusher . view reliable crushing & culling machine manufacturers on made-in-china.com. this category presents crusher, wood chipper, from china crushing & culling machine suppliers to global buyers....
https://jozef-wilkon-galerie.de/rock-crusher/Aug_Friday_16395/
Using the Raspberry Pi to get weather from the internet

6/4

Created a semicircle marked with High/Medium/Low humidity ranges and converted the servo motor into a dial that moves according to the location the user inputs. Made some adjustments to the code so the dial will stay on high/medium/low instead of resetting to zero at the end of the program (now it resets to zero at the beginning of the program). The GIF below shows the humidity when I set the location to Hawaii.

5/21

Since the code to move the servo motor is lengthy, I created a function to move the servo by simply calling move_servo(num), where num is the duty-cycle value (which maps to an angle) the servo will move to. I decided to try to move the servo based on the humidity percentage outside. If the humidity was less than 50%, the servo would move to num=3, indicating low humidity. If the humidity was greater than 50 but less than 70, the servo would move to num=7.5, indicating normal humidity. If the humidity was greater than 70 but less than 120 (an arbitrary high bound), the servo would move to num=11, indicating high humidity.

def move_servo(num):        #function to move servo
    p = GPIO.PWM(19,50)     #19 is the pin
    p.start(2.5)            #Starting at 0
    time.sleep(0.5)
    p.ChangeDutyCycle(num)  #Changing degree based on humidity
    time.sleep(0.3)
    p.stop()
    time.sleep(0.3)
    p = GPIO.PWM(19,50)     #After calling p.stop() must put this information again
    p.start(num)            #Starting at the degree we left off at we will begin to go back to zero
    time.sleep(0.3)
    p.ChangeDutyCycle(2.5)  #Going back to 0
    p.stop()
    time.sleep(20)

5/7/18

I learned how to control a servo motor using the Raspberry Pi. This servo motor will most likely be used as a dial that will turn depending on the humidity (or temperature) of the weather outside.
The servo did initially shake a lot when going to different angles. The solution to reduce this shaking was, after going to a certain angle, to stop the servo for half a second and then start it again; below is the code of how I did this.

p.ChangeDutyCycle(7.5)
time.sleep(0.3)
p.stop()
time.sleep(0.5)
p = GPIO.PWM(19,50)
p.start(7.5)
p.ChangeDutyCycle(12.5)

First I start the servo motor at a duty cycle of 7.5, which corresponds to 90 degrees. I give it a third of a second to get there before stopping the servo motor for half a second. I then must state which pin I am using again (p = GPIO.PWM(19,50)) because it isn't saved after the servo is stopped. I then start the power again, making sure that I am starting at the same angle that I stopped at. Then I go to a duty cycle of 12.5, which is 180 degrees (a duty cycle of 2.5 is 0 degrees). The entire code can be found on my GitHub page in the Scratch Pad folder; the file is called LED.py. This code moves the servo motor from 0 to 90 to 180, as can be seen in the attached video.

Create an Open Weather Map account to receive a key. This key will allow your code to access the weather of different locations available on the Open Weather Map website. (My key is 9276659d9dc88d95bfbd5db39938c052)

To be able to use Open Weather Map, their pyowm library must first be installed. Once the library has been installed it is time to write the code! First we must import the library we just downloaded, so we type:

import pyowm

Next we insert the key into the code, which you either created an account to get or are using mine:

owm = pyowm.OWM('9276659d9dc88d95bfbd5db39938c052')

*Make sure the key is written in between apostrophes

To find the weather at a specific location, we use the weather_at_place function that is in the pyowm library. The city and the country must be written in the format ("City, Country").
The code below is an example of trying to find the weather at the city of Pasadena, which is in the United States. If there are cities with the same name, be sure to specify the location. For example, typing in ("Glendale, US") gives the weather of the Glendale in the state of Arizona. To get the weather information of the Glendale that is in California, the city's direction is specified, so it is written as ("North Glendale, US"). The following code shows the weather in Pasadena, United States.

observation = owm.weather_at_place("Pasadena, US")

Fetch the weather information in the specified location and set the data found to a variable:

w = observation.get_weather()

There are many different types of information we can choose to receive. We can get the wind speed by adding a function call to the variable we stored the weather information into; take that variable, which in my case was w, and add the function .get_wind() after it. You store this information into a variable (I chose to name this variable wind).

wind = w.get_wind()

To get the temperature in Fahrenheit you follow the same steps as above, but this time the function is get_temperature(), and inside the parentheses you specify whether you want the temperature in Fahrenheit or Celsius.

temperature = w.get_temperature('fahrenheit')

When you run this function, the maximum and minimum temperature is also returned. Let's say you just want the temperature and nothing else. Create a new variable and put temperature.get('temp'). The .get() function returns the value for the given key, so .get('temp') returns only the value of the temperature.

temp_value = temperature.get('temp')

To print out what we want, we use the print() function. Inside the parentheses we write the variable name that we want information from.
print(w) prints out the reference time
print(wind) prints the speed and degree of the wind
print(temperature) prints out the temperature as well as the highest and lowest temperatures of the day

3/19/18

Created a basic circuit to turn an LED on and off. Next week I will try to turn it on and off based on the temperature outside.
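Putting the two halves of the project logs together, the humidity-to-dial mapping can be exercised without any hardware attached. The thresholds (50%/70%) and duty-cycle values (3, 7.5, 11) come from the 5/21 log, and the 2.5-to-12.5 duty-cycle range from the servo log; the function names themselves are my own illustration, not part of the original project code.

```python
# Map a humidity percentage to the servo duty cycle used as a dial position.
# Thresholds (50% / 70%) and duty cycles (3, 7.5, 11) follow the 5/21 log.
def humidity_to_duty(humidity):
    if humidity < 50:
        return 3      # low humidity
    elif humidity < 70:
        return 7.5    # normal humidity
    else:
        return 11     # high humidity

# On the RPi.GPIO servo (50 Hz PWM), duty cycle 2.5 is 0 degrees and 12.5 is
# 180 degrees, so the angle maps linearly onto a 10-unit duty-cycle span.
def angle_to_duty(angle):
    return 2.5 + (angle / 180.0) * 10.0

if __name__ == "__main__":
    print(humidity_to_duty(40))   # 3
    print(humidity_to_duty(65))   # 7.5
    print(angle_to_duty(90))      # 7.5
```

On the real device, the value returned by humidity_to_duty would simply be passed to move_servo.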
https://hackaday.io/project/96002-raspberry-pi-weather-app
Technical Support On-Line Manuals RL-ARM User's Guide (MDK v4)

#include <net_config.h>

U16 tnet_process_cmd (
    U8* cmd,      /* Pointer to command string from the telnet client. */
    U8* buf,      /* Location where to write the return message. */
    U16 buflen,   /* Number of bytes in the output buffer. */
    U32* pvar );  /* Pointer to a storage variable. */

The tnet_process_cmd function processes and executes the command requested by the telnet client. The telnet server running on TCPnet calls the tnet_process_cmd function when it receives the consecutive Carriage Return (CR) and Line Feed (LF) character sequence from the telnet client (this is usually produced by the user pressing Enter on the telnet client terminal).

The argument cmd points to the message containing the command that is received from the telnet client.

The argument buf points to the output buffer where tnet_process_cmd must write the message to be returned to the telnet client.

The argument buflen specifies the length of the output buffer in bytes.

The argument pvar is a pointer to a variable that never gets altered by the Telnet Server. You can use *pvar as a repeat counter or simply to distinguish between different calls of the tnet_process_cmd function.

The tnet_process_cmd function is part of RL-TCPnet. The prototype is defined in net_config.h. You must customize the function in telnet_uif.c.

Note

The tnet_process_cmd function returns the number of bytes written to the output buffer. It also encodes the values of the repeat flag and the disconnect flag into the return value. If bit 14 (repeat flag) of the return value is set to 1, the telnet server running on TCPnet calls the tnet_process_cmd function again with the argument cmd and storage variable *pvar of the same value. The function tnet_process_cmd can then enter more data into the buffer buf. If bit 15 (disconnect flag) of the return value is set to 1, the telnet server disconnects the telnet session.
See also: tnet_cbfunc, tnet_ccmp

U16 tnet_process_cmd (U8 *cmd, U8 *buf, U16 buflen, U32 *pvar) {
  U32 len,val,ch;

  /* Simple Command line parser */
  len = strlen (cmd);
  if (tnet_ccmp (cmd, "BYE") == __TRUE) {
    /* 'BYE' command, send message and disconnect */
    len = str_copy (buf, "\r\nDisconnect...\r\n");
    /* Hi bit of return value is a disconnect flag */
    return (len | 0x8000);
  }
  if (tnet_ccmp (cmd, "ADIN") == __TRUE) {
    /* 'ADIN' command received */
    if (len >= 6) {
      sscanf (cmd+5,"%d",&ch);
      val = AD_in (ch);
      len = sprintf (buf,"\r\n ADIN %d = %d",ch,val);
      return (len);
    }
  }
  if (tnet_ccmp (cmd, "HELP") == __TRUE || tnet_ccmp (cmd, "?") == __TRUE) {
    /* 'HELP' command, display help text */
    len = str_copy (buf,tnet_help);
    return (len);
  }
  /* Unknown command, display message */
  len = str_copy (buf, "\r\n==> Unknown Command: ");
  len += str_copy (buf+len, cmd);
  return (len);
}
http://www.keil.com/support/man/docs/rlarm/rlarm_tnet_process_cmd.htm
June 14, 2006. This article was contributed by Jake Edge. A recent announcement about adding Sender Policy Framework (SPF) capabilities to the machine hosting the linux-kernel mailing list (lkml) has sparked a lively debate. The first step, it seems, is to add an SPF record for vger.kernel.org and later this summer to enable SPF checking on incoming email. Both steps are controversial and the majority of posters seem to be against the change, but Matti Aarnio, one of the postmasters for vger, plans to go ahead with the changes.

SPF is a technique that allows a domain to specify which hosts are allowed to send email that has an envelope sender (i.e. SMTP MAIL FROM) using that domain. A domain administrator adds a TXT record to the DNS entry for the domain that describes all hosts allowed to send mail. This allows receiving Mail Transfer Agents (MTAs) to look up the SPF record and determine whether the domain in the envelope has been forged -- at least in theory.

Unfortunately, there are a number of problems with this scheme, most having to do with email forwarding. Consider the case where a user has a yahoo.com email account that they are forwarding to their ISP. When Yahoo forwards email that it receives, it uses the original envelope sender, but that domain has almost certainly not listed yahoo.com as an authorized sender. The same issue occurs if a user is trying to use their yahoo.com email as the sender, but is required to use their ISP's SMTP server. In that case, Yahoo will rightly not have the ISP listed as a legitimate sender for their domain.

The SPF folks have suggested solutions for these problems, but many of them require fundamental changes in how MTAs operate. The Sender Rewriting Scheme (SRS) proposal in particular breaks longstanding email tradition by having forwarding MTAs change the envelope sender as they forward email.
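As a concrete (hypothetical) illustration, the TXT record such an administrator publishes is a single string in the domain's zone file; the domain and addresses below are placeholders, not vger's actual policy:

```
example.org.  IN  TXT  "v=spf1 ip4:192.0.2.0/24 mx a:mail.example.org -all"
```

Here the ip4:, mx, and a: mechanisms designate the hosts allowed to use example.org in the envelope sender, and -all tells a checking MTA to fail everything else (~all would produce a softer "suspicious" result instead). None of this record-publishing machinery addresses the SRS envelope-rewriting issue just described, which is where the objections concentrate.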
Opponents of SPF not only argue that changing this tradition is a bad idea, but also that it is very unlikely to be widely implemented any time soon. Additionally, Mail User Agents (MUAs) would need to learn about SRS encoding in order to parse sender addresses for filtering email at the user end.

SPF does provide a way to definitively determine that an email is coming from an authorized host, but failing the SPF check does not in any way imply that the email is invalid, as the mail could have been forwarded by a non-SRS compliant MTA. The main benefit for domains that publish SPF records may be a reduction in the blowback from a 'joe job' (a spammer uses a victim domain as the sender on a large amount of spam, some of which bounces, leaving the victim to deal with all the bounce messages). Opponents point out that because of the forwarding problems, publishing an SPF record for your domain essentially asks other MTAs to mark perfectly valid mail as suspicious at best and forged at worst. Worse yet, some mail administrators are configuring their MTAs to reject mail that fails SPF checking.

For the lkml, the immediate impact will be minimal, but still annoying to some. People who have subscribed using addresses that are forwarded to SPF-checking ISPs may no longer receive emails from the list. Some list archiving software may also be affected. Once SPF checking is enabled, some users may find their mail getting rejected depending on how strictly the SPF policy is enforced. Expect another hue and cry on the lkml when and if that happens.

SPF on vger
Posted Jun 15, 2006 0:44 UTC (Thu) by bangert (subscriber, #28342) [Link]

SPF is harmful. Adopt it.-...

SPF is dead. Long live CSV !
Posted Jun 15, 2006 1:06 UTC (Thu) by copsewood (subscriber, #199) [Link]
SPF: yes, ma'am
Posted Jun 15, 2006 4:11 UTC (Thu) by zmi (subscriber, #4829) [Link]

Posted Jun 15, 2006 5:50 UTC (Thu) by nim-nim (subscriber, #34454) [Link]

The question is not if it's capable of dropping bad messages, but if it can reasonably distinguish spam from ham. So far the evidence is SPF doesn't.

SPF: it's NOT AT ALL about SPAMfiltering...
Posted Jun 15, 2006 10:42 UTC (Thu) by zmi (subscriber, #4829) [Link]

Posted Jun 15, 2006 10:53 UTC (Thu) by iabervon (subscriber, #722) [Link]

Posted Jun 15, 2006 6:51 UTC (Thu) by job (guest, #670) [Link]

All this to combat joe-jobs, which aren't a problem to anyone. Spam is the problem. Which got many SPF proponents to start lying and talk about it as a spam solution. That was dishonest. I know you want to see your pet project adopted around the world, but sorry, it is a braindead idea. But don't take my word for it, read what Allman, Bernstein, Venema and all the other e-mail luminaries who actually know this stuff wrote. No one thought it was a good idea, and with very good reasons too.

Posted Jun 15, 2006 7:34 UTC (Thu) by cate (subscriber, #1359) [Link]

Posted Jun 15, 2006 8:04 UTC (Thu) by pizza (subscriber, #46) [Link]

Oddly enough, the former two tend to rely heavily on forged SMTP envelopes, which is precisely what SPF is intended to deal with, and it accomplishes that fairly well. Does it break certain practices? Well, yes. But what its detractors fail to understand is that this is a trade-off that many, many willingly make, especially when it is their reputation and/or money on the line. Don't forget that these problems exist because of the deficiencies of the original SMTP (and yes, DNS) systems. "Requiring most of the world to participate" is actually a feature of the Internet -- the network is dumb; the end-points are smart. But it also makes change very hard to implement.
As such, the disruption from replacing the whole shebang will be far greater, even though everyone agrees that it's what really needs to be done. And that will certainly break many things that work now.

Incidentally, is there an "official" use for TXT records? "Arbitrary Binary Data up to 255 characters" sounds like there isn't, and a domain owner choosing to use that "arbitrary data" for purposes of reducing forged mail being sent under their domain certainly sounds like an appropriate use. Using DNS TXT records is a cool idea because it doesn't require any new infrastructure, unlike, for example, using PGP signatures, which work well on an individual basis but otherwise scale terribly due to the necessity of establishing trust anonymously.

SPAM == Unsolicited bulk email. It's simple
Posted Jun 15, 2006 8:26 UTC (Thu) by dwheeler (subscriber, #1216) [Link]

Some spam has trojans, or other nasty stuff. It's a good idea to protect against other kinds of malicious email. But usually the malicious email is ALSO spam. If we could get rid of spam, we'd greatly reduce any other kind of problem as well on email.

Posted Jun 22, 2006 9:54 UTC (Thu) by forthy (guest, #1525) [Link]

I agree that SPF is not a good idea, but I support it (not just lip service), for two purposes: I view SPF not as a solution to a specific problem, but as a nail on the coffin of SMTP, and I'm ready to adopt the next nail if I can find one.

SPF, joe jobs, and phishing
Posted Jun 15, 2006 11:12 UTC (Thu) by rfunk (subscriber, #4054) [Link]

Posted Jun 15, 2006 12:13 UTC (Thu) by dwmw2 (subscriber, #2063) [Link]

Posted Jun 15, 2006 13:32 UTC (Thu) by dlang (subscriber, #313) [Link]

Posted Jun 15, 2006 15:32 UTC (Thu) by dwmw2 (subscriber, #2063) [Link]

You didn't actually read the why not SPF page linked above, did you?
550-Verification failed for <dwmw2@infradead.org>
550-Called:   2001:4bd0:203e::1
550-Sent:     RCPT TO:<dwmw2@infradead.org>
550-Response: 550-This address never sends messages directly, and should not accept bounces.
550-550-Please see or contact
550-550 postmaster@infradead.org for further information.
550 Sender verify failed

Posted Jun 22, 2006 17:51 UTC (Thu) by kitterma (subscriber, #4448) [Link]

Posted Jun 15, 2006 13:35 UTC (Thu) by rfunk (subscriber, #4054) [Link]

Posted Jun 22, 2006 17:48 UTC (Thu) by kitterma (subscriber, #4448) [Link]

SPF checking may be relatively rare, but in my experience it is enough that within a month of publishing a -all SPF record, bounce messages due to forged sending using my domains ended. There is enough SPF checking going on to provide deterrence. SPF is a horrible idea in theory. In practice, unless your user base sends to people who do a lot of forwarding, it works pretty well for many domains. Eventually, it will be obsolete because something better will come along. In the meantime, it does the job for me and lots of others.

SPF and vger
Posted Jun 22, 2006 21:24 UTC (Thu) by SDGathman (guest, #38604) [Link]

The proposed vger application involves publishing an SPF record. The only potential forwarding issue for publishers is web greeting card type sites that don't use their own domain for the return path. This is not a problem for vger. I have been publishing and checking SPF in production for 2 years. Most of the complaints from anti-SPF people are based on misunderstandings. For instance, the "forwarding problem" is a "doctor it hurts when I do this" problem. If you have no idea who you get forwarded mail from (even though you set them all up), then don't check SPF. Problem solved. SRS is actually not a good solution to handle forwarders. Simply listing the forwarders is much cleaner. Listing them is easier if the forwarders publish SPF - cause then you know their IPs automatically.
SRS is useful as a BATV alternative that also handles relaying. The roaming sender problem is elegantly solved using SMTP AUTH - which has been widely available for at least 10 years. If you publish CSV, you might as well publish SPF for your HELO names and pick up SPF checkers as well (caveat: only if the HELO name is distinct from the MFROM domain - an SPF namespace collision wart).

Posted Jun 23, 2006 18:52 UTC (Fri) by neilbrown (subscriber, #359) [Link]

That is: if an incoming message doesn't get a clear SPF-PASS, then don't ever send an automatic reply to it. This means no delivery-failure reports, and no vacation messages. I think it is much better that these don't get sent at all than that they get sent to the wrong place. Providing you use the standard 'guess' for domains that don't publish SPF, you still send most of the reports that you need to. And if this was widely adopted, it might even act as a gentle stick to encourage more people to regularise their mail sending, and always send through the right MTA...

Linux is a registered trademark of Linus Torvalds
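The no-auto-reply policy suggested in the last comment reduces to a one-line rule; this small Python sketch (my own illustration, not code from the article) makes it explicit:

```python
# Only send automatic replies (bounces, vacation messages) when the
# message's SPF check gave an unambiguous "pass".
def may_autoreply(spf_result):
    return spf_result == "pass"

# "none" (no record published), "softfail", "fail", etc. all suppress
# automatic replies, so misdirected blowback is never generated.
for result in ("pass", "none", "softfail", "fail"):
    print(result, may_autoreply(result))
```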
http://lwn.net/Articles/187736/
I just added a gr_basic_add_const and gr_basic_multiply_const. Nick added the volk (and orc) implementations for the float32 and complex float32 versions of the multiplier.

> Trying to avoid math so "from gnuradio import math" doesn't interfere with
> python's math import.

That's unfortunate, but I think this will be nicer with the python namespace changes you mentioned.

#example import:
from gnuradio import gr
gr.filter.fir_interp(...)
gr.math.add(...)
gr.digital.costas_loop(...)

Was that the goal? I think we can make this happen without too much trouble.

> expecting that since you've used different names for the blocks because of
> the data-type handling that they won't collide with what's in gnuradio-core
> right now.

So, my thoughts were: make new blocks that replicate the functionality in gr-core like filters, adders, etc. We don't delete the old blocks, so the API doesn't change. But if you want the performance improvement, you use the new blocks. Hopefully with these new blocks, we use volk where applicable, use a new naming convention to make it easier to fit alternative data types, and avoid generation when possible.

Eventually, gnuradio-core becomes nothing but the block framework and runtime. gr-filter takes the filter implementations and gr-<???> takes the very basic blocks like simple math operators, type conversions, and stream conversions.

How's that sound?
-Josh
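The namespace layout sketched in the example import above can be mocked up in plain Python to see how the collision with Python's built-in math module is avoided; the gr.math/gr.filter/gr.digital names come from that example, while the stub implementation is mine:

```python
import math as py_math          # Python's own math module stays importable
from types import SimpleNamespace

# Stand-in for "from gnuradio import gr": block categories live as
# attributes of gr, so gr.math never shadows the top-level math module.
gr = SimpleNamespace(
    math=SimpleNamespace(add=lambda a, b: [x + y for x, y in zip(a, b)]),
    filter=SimpleNamespace(),
    digital=SimpleNamespace(),
)

print(gr.math.add([1, 2], [3, 4]))   # [4, 6]
print(py_math.sqrt(16.0))            # 16's square root, from stdlib math
```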
http://lists.gnu.org/archive/html/discuss-gnuradio/2011-11/msg00194.html
If you are new to programming and have just started working with ASP.NET or learning C#, you might want to know what the .NET Framework is and understand its architecture. Sometimes you may also need to know about the .NET Framework because it is required to install some applications on Windows.

Introduction to .NET framework

The .NET Framework is a software development framework used for building and running applications on Windows. It is designed and developed by Microsoft. A framework usually means a collection of Application Programming Interfaces (APIs) and a shared library of code that developers use while developing any type of application. In the .NET Framework, that library of shared code is named the Framework Class Library (FCL). So, the .NET Framework is basically a collection of APIs and libraries which developers use to develop many different types of applications, such as desktop or web applications.

There are various implementations of .NET:

- .NET Framework: the original .NET implementation; it supports running websites, services, desktop apps, etc. on the Windows platform.
- .NET Core: an open-source and cross-platform version of .NET that is maintained by Microsoft and the .NET community on GitHub. It supports running websites, services, and console apps on Windows, Linux, and macOS.

.NET Standard is a formal specification of the APIs that are common across .NET implementations. It allows libraries to build against the agreed-on set of common APIs, ensuring they can be used in any .NET application—mobile, desktop, IoT, web, or anywhere you write .NET code.

The .NET Framework is used to develop desktop applications, web-based applications, and web services. There are many programming languages available on the .NET platform, VB.NET and C# being the most common ones.

What is .NET Framework used for?

The .NET Framework is used to create and run software applications.
.NET apps can run on many operating systems, using different implementations of .NET. The .NET Framework is used for running .NET apps on Windows, while .NET Core is used to build and run applications on Windows, Linux, or macOS.

Who uses .NET Framework?

- Users: people running applications built on the .NET Framework may need to download it from its official website, although it comes already installed on Windows 7/8/10. Windows 8 and Windows 10 include versions 3.5 and 4 (the current version right now being 4.6.1). They are installed on a first-time-needed basis, so the first time you install an app that needs one of those versions, Windows will add it automatically.
- Software developers: developers building web applications, Windows applications (WPF or form based), or web services may need to install the .NET Framework.

Architecture of .NET Framework

There are two major components of the .NET Framework:

- the common language runtime (CLR), which is the execution engine that handles running apps, and
- the .NET Framework Class Library, which provides a library of tested, reusable code that developers can call from their own apps.

What is the Common Language Runtime (CLR) in .NET?

The CLR is the execution engine that runs managed code and provides services like thread management, garbage collection, type-safety, exception handling, and more. All .NET-supported languages execute under this single, defined runtime layer. The just-in-time (JIT) compiler converts the managed code (compiled intermediate language code) into machine instructions, which are then executed on the CPU of the computer. The CLR first locates the referenced assembly, then loads it into memory, compiles the associated IL code into platform-specific instructions, performs security-related checks, and finally executes the code.

.NET Framework Class Library (FCL) overview

The .NET FCL is a library of tested, reusable classes, interfaces, and value types that .NET applications build on.
The first thing you need to know about the .NET Framework class library is that it is an object-oriented tree derived from a single root: System::Object. The next important characteristic is that the .NET Framework class library strictly follows the rules specified by the Common Language Specification (CLS). The key rules that you should be aware of are as follows:

- Global functions and variables are not allowed.
- There is no imposed case sensitivity (a consequence of the need to support languages like Visual Basic .NET), so all exposed types differ by more than their case. In other words, all public or protected members differ by more than just case.
- The primitive types allowed as parameters conform to the CLS, namely Byte, Int16, Int32, Int64, Single, Double, Boolean, Char, Decimal, IntPtr, and String.
- Variable-length parameter lists to methods are not allowed. Fixed-length arrays are used as parameters instead.
- Pointers are not allowed.
- Class types must inherit from a CLS-compliant class.
- Only single class inheritance is allowed, although multiple inheritance of interfaces is permitted.

The .NET Framework class library is broken up into nearly 100 namespaces.

.NET framework version history

If you are new to .NET and looking to learn ASP.NET or C#, you can start by learning C#.
https://qawithexperts.com/article/asp-net/what-is-net-framework-understanding-net-framework-architectu/268
Run in release crash

Hello 'verybody !

I got a very common error while running in Release mode but, in spite of all the answers I found on it, I can't find a solution or just do not understand them. My Debug works perfectly (almost... except two exceptions at the same spot where the release crashes that I couldn't find where they came from, but they do not stop the program: "Exception at 0x7fff... execution cannot be continued" --> cannot find a clue in this message as the very beginner that I am). Then, while I run in Release, after triggering a comboBox event, it crashes. Here is the .qml where it seems to crash:

onActivated: {
    //Change index of the model function of the comboBox index
    reader2.currentIndexReaderProvider = comboBoxReaderProvider.currentIndex
    reader2.buildReaderList();
    //Change the model of ComboBox ReaderUnit function of the ReaderProvider
    comboBoxReaderUnit.model = reader2.readerStringList
}

Maybe at buildReaderList where I used my library for the first time... I tried to put breakpoints in my code at different moments but it seems to go to the crash whether they are passed or not. I tried to run the .exe in the Release and Debug folders created: Debug doesn't do anything (I think that's kinda normal, right?). I checked that Qt5Qml.dll and Qt5Network.dll come from Qt\Qt5.9.2\bin. Just to be sure, I also tried with the 5.10 version that I downloaded before. I tried to delete the Qt5Qmld.dll (which seems to be for the Debug version). I tried to debug it with Visual Studio while the Release run crashed and I had this exception:

Unhandled exception at 0x00007FFEE90FAC62 (msvcr120d.dll) in MRCFinal.exe: 0xC0000005: Access violation reading location 0x0000000000000005.

static _Elem *__CLRCALL_OR_CDECL copy(_Elem *_First1, const _Elem *_First2, size_t _Count)
{   // copy [_First2, _First2 + _Count) to [_First1, ...)
    return (_Count == 0 ? _First1
        : (_Elem *)_CSTD memcpy(_First1, _First2, _Count)); // It happened on this line
}

<error reading characters of string.> on the _First2 — couldn't understand it and didn't find that much information online. I also tried to use Dependency Walker, which was advised a lot on multiple forums, but I don't get how to use it and I kinda get errors everywhere whatever I open with it.

Some people thought it was linked with the .pro and it wouldn't surprise me, since the two exceptions I told you about above seem to be related to the crash and started to appear when I first tried to use my library. Also, here is my .pro (could be messy; even though I tried to clean it, that's my first program in Qt so I did a lot of experiments to learn and explore that massive amount of possibilities):

QT += quick
CONFIG += c++11
QT += widgets
QT += qml
QT += core

# \
    comboboxmodel.cpp \
    cardinformation.cpp \
    definition.cpp \
    reader.cpp \
    readwrite

HEADERS += \
    comboboxmodel.h \
    cardinformation.h \
    header.h \
    reader.h \
    readwrite.h

win32:CONFIG(release, debug|release): LIBS += -L$$PWD/packages/lib/x64/release/ -lliblogicalaccess -lmifarecards
else:win32:CONFIG(debug, debug|release): LIBS += -L$$PWD/packages/lib/x64/debug/ -lliblogicalaccess -lmifarecards

INCLUDEPATH += $$PWD/packages/include
DEPENDPATH += $$PWD/packages/include

LIBS += -L$$PWD/packages/lib/x64/Debug/ -lliblogicalaccess -lmifarecards
LIBS += -L$$PWD/packages/lib/x64/Release/ -lliblogicalaccess -lmifarecards

Thankful for all the answers/propositions/advice from beginner/advanced/godlike programmers, and staying on call at every moment for any details.

- raven-worx Moderators

@Sillimon said in Run in release crash:

You are missing some dlls (at least Qt5Core) -> use the windeployqt tool to make sure the correct dlls are copied.

That's really useful! Allows to clean everything and just import what is needed! So I did, and now the .exe in release is as the debug build .exe --> it runs but doesn't do anything...
guess it's better than dependencies errors =) I thought that it would work so i followed a deployment tutorial online (yeah... it's my first deployment and i don't know anything about it ! I just remember my first programs when i naively thought that exporting all the project would work on another PC) In the tutorial, they advised to add a custom process step at the run so it would run windeployqt before each run (if i understood correctly) Got a little question about that : is that process not removing the external libraries after each run ? Or is it just adding missing Qt dll and not removing anything anyway ? Because after running it while opening the release folder, it seems that files are removed and some others (or probably the same) are added/generated. Then i tried two different things : FIRST ONE, i followed the tutorial and i created a folder "Deployment" as a C: child. I pasted into it the release .exe and others dll from Qt5.9.2 (bin/plugins/qml folders) I then renamed the Qt Folder into another name so i can run the .exe inside deployment. It did run perfectly until that moment when a messageBox appears to tell me the program crashed (just as before, when i had an exception in Debug build). I tried debugging with VS and the exception changed. The page displays : "symbol file not loaded" - Symbol load information : "Binary was not built with debug information" and the exception says : Unhandled exception at 0x00007FF7C0AB45B3 in MRCFinal.exe: 0xC0000005: Access violation reading location 0x0000000000000000. I'm right now searching for a nullptr or something like that. It might be something else but forums mainly turns around a use of a nullptr and i'm playing a lot with them in my code. i'm also triggered by that "built with debug information" as long as i'm sure that i took the good .exe in the release folder... meantime, can someone explain me what a symbol is and, eventually, where and what it's use for ? 
I also tried a SECOND THING when I saw it didn't work. I opened the project in Qt Creator and tried to run it in release now that I've done the windeployqt.exe.

First, I'd like to be sure of something. Qt Creator provides a "run" feature and a "start debugging" feature. I read somewhere that debug build and debugger aren't related. Which means that there is a run with and without debugger for debug build AND for release build, right? What I learnt about it was that "it does not attach the debugger so it doesn't hit breakpoints, doesn't show debug messages, etc..." Isn't that exactly what a release build is supposed to do? It looks like whatever you take between release or debug build, using the "run" or "start debugging" feature changes the build mode.

Also, I tried both and here is the result:

"Run" feature --> got a C:\Users...\path\of\my\release\folder\release.QQmlApplicationEngine failed to load component

And a bunch of errors I had not before using windeployqt on my .qml import lines: "module QtQuick is not installed" / "module QtQuick.Window is not installed" / "module QtQml.Models is not installed" / etc etc... Then it says that my .exe exited with code -1.

Successfully found the qml folder in C:\Qt ;D now it works, which leads back to the question above: "Is windeployqt.exe removing any file from the release folder?"

"Start debugging" feature --> program works fine until that fatal moment I told about above...

static _Elem *__CLRCALL_OR_CDECL copy(_Elem *_First1, const _Elem *_First2, size_t _Count)
{   // copy [_First2, _First2 + _Count) to [_First1, ...)
    return (_Count == 0 ? _First1
        // Exception on the line just below
        : (_Elem *)_CSTD memcpy(_First1, _First2, _Count));
}

I'm sorry to ask that much and I'm not expecting full answers, but these are the questions I'm asking myself or don't understand clearly, so I'm open to everything you could teach me about it.
- raven-worx Moderators

@Sillimon said in Run in release crash:

can someone explain me what a symbol is and, eventually, where and what it's used for?

A symbol is basically what your application/library consists of (variables, classes, methods, ...). If the symbol information is missing you won't be able to do any meaningful debugging.

Which means that there is a run with and without debugger for debug build AND for release build, right?

Yes. The difference is just that in one case the debugger is attached to the started application, and in the other it is not.

I'm right now searching for a nullptr or something like that. It might be something else, but forums mainly turn around a use of a nullptr and I'm playing a lot with them in my code.

Not necessarily a null pointer, but also an uninitialized / dangling (already deleted) pointer. But in the end we all can just guess and give tips without the code and a stacktrace of the moment the crash occurred (best in debug mode).

After a few researches and some help from internal, it appears that my links to libraries aren't clean enough, which leads to those random behaviours. So I'm working on making it cleaner for x32/x64 debug and release. Thanks a lot for your help! You made my mind way sharper!
https://forum.qt.io/topic/86929/run-in-release-crash
Both of the below work fine on the emulator (2.3.3), but on a real device (Nexus S with 4.1.2) no image is shown for the thumbnail. I will also try to run it on an Android 4 emulator. If I set a default android:src for the ImageView, it is not shown anymore then. This makes me think that it is replaced, but the ImageView is empty.

public class MainActivity extends Activity {
    ImageView img;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        img = (ImageView) findViewById(R.id.img_thumbnail);
        new MyAsync().execute("");
    }

    //This version is still not working, but it's more readable (edited: Selvin).
    public class MyAsync extends AsyncTask<String, Void, Bitmap> {
        @Override
        protected Bitmap doInBackground(String... objectURL) {
            //return ThumbnailUtils.createVideoThumbnail(objectURL[0], Thumbnails.MINI_KIND);
            return ThumbnailUtils.extractThumbnail(
                ThumbnailUtils.createVideoThumbnail(objectURL[0], Thumbnails.MINI_KIND), 100, 100);
        }

        @Override
        protected void onPostExecute(Bitmap result) {
            img.setImageBitmap(result);
        }
    }
}

I know that a similar question has been asked before, Displaying video thumbnails in an Android device from a remote video URL, but I have already tried this and got the same result. Why doesn't this work on the device, and how can I make it work?

Use FFmpegMediaMetadataRetriever to extract a thumbnail at the desired position.

Answer: This is not possible on Android and Java and, as far as I know, any other language without downloading the entire video (correct me if I'm wrong). It is with good reason that YouTube and any other large video service provide a simple API for getting a video thumb. So if the server where the videos reside doesn't support this kind of API call, you're out of luck.

Answer: Faced the same problem on 2.3 when trying to make a thumbnail from a file which was located in the cache folder.
Setting permission flags solved the problem:

videoFile.setReadable(true, false);

Tags: android, url, video
https://exceptionshub.com/it-is-possible-to-display-a-video-thumbnail-from-a-url-on-android-4-and-above.html
sorry for the new post but my old one was locked for some reason. is there a way to read keyboard up and down events and mouse up and down events outside of main without needing to pass a handle to the display? I want to make a simple 6 functions without needing to update them in main.. is there an easy way to do this?

EDIT: I think this might be able to be done with threads but I am afraid to use them due to mutex locks.

You could use global variables. Something like: Call the two event functions when you receive a key up/down event in main. Call the after_game_tick function when you're done handling game input to reset the down/up counters. Exactly the same will work for mouse.

--"Either help out or stop whining" - Evert

Your old topic wasn't locked. You just can't double post. Could mention the trick to get around that: Edit the post, preferably putting "APPEND:" or similar before the additions, then click on Send to top.

---Smokin' Guns - spaghetti western FPS action

, why? You just did it for me. :/

I meant that I could :p

@Elias is there a way to implement this so I don't have to handle the events in the main loop? I was thinking something like a simple init function.

@Edgar Reynaldo and @torhu sorry I didn't realize that. Does APPEND notify users of the topic that it has updated?

No, but the "Send to Top" button does (if it has been more than an hour since your last update).

--

The way I do it is I have an init() function and tick() function for the game and do everything there (and the actual main loop inside of main() does nothing but handling the Allegro events).

@bamccaig awesome I will keep that in mind. @Elias so is it possible to handle events for my gui in tick without a display? and if so is it possible to do it so I can use the same events for another module?

For now, you have to have a window to get input. Because that's where Windows sends you the input. There's no way around it.
Please explain what you are trying to do. It seems that the OP is having a case of the "XY Problem". He is asking about an attempted solution rather than his actual problem. In other words, he is trying to solve a problem X. He really thinks solution Y will work. Now, instead of asking about X, he is asking about Y. Can you start from 0 and tell us what you want to do? I saw a red flag when you mentioned threads. Most code doesn't need threads at all. You also said modules. Are you trying to abstract Allegro into different components or something like that? The more information and code you provide, the more likely someone here will give you a solution.

Examples are often made as a single file / function, but I understand the issue is about organizing the code in different modules (C files). The main event loop, wherever it is, could include this part: (adapted from Edgar's earlier example)

input.c:

input.h:
// the declaration of all the above methods
// mykeys is *not* declared here, because other modules don't need to know about it

so I wrote this yesterday. however I can't test it since it's in a dll project and I can't figure out how to link it into my executable

I cannot test this either, though one thing seems to be wrong - namely, second source file, at line 31:

for (int i = 0; i < ALLEGRO_KEY_MAX; i++)

I believe you should change ALLEGRO_KEY_MAX to a variable that you would set in your program's initialization part using the function call that Edgar suggested here:

MikiZX, nope. ALLEGRO_KEY_MAX is a constant that doesn't change. It's perfectly safe to use. There are two 'for' loops in Shadowblitz's source, one after the other. The first one is for the keys while the second one is for the mouse buttons. It seems the second loop is the one that accesses an array of 16 elements using an index well out of bounds - sorry, I should have explained it better in my previous post.

so I wrote this yesterday.
however I can't test it since it's in a dll project and I can't figure out how to link it into my executable You need to split it into header files and implementation (*.cpp) files. Function definitions go in the implementation files. Class/struct declarations, function prototypes, symbolic constants, typedefs, etc. go in the headers. Then you just add these files to your game project, which should be configured to build an executable. You don't need to create DLL's unless you have a specific reason to want that level of separation. Also, you don't really need multiple levels of namespaces in a small project, it's just more to type. Oh, MikiZX, you're right, sorry I can't read. No, you definitely don't want to check ALLEGRO_KEY_MAX mouse buttons. You need to split it into header files and implementation (*.cpp) files. Function definitions go in the implementation files. Class/struct declarations, function prototypes, symbolic constants, typedefs, etc. go in the headers. This is something good, I would like to bookmark it. And... I just did.
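The global key-state approach Elias describes early in the thread (the code block itself was lost from the page) might be sketched roughly like this. This is a hedged reconstruction: the function names, the helpers, and the array size are my own, not from the thread; ALLEGRO_KEY_MAX would normally come from <allegro5/allegro.h>.

```c
#include <stdbool.h>

#define MY_KEY_MAX 227                /* stand-in for ALLEGRO_KEY_MAX */

static bool key_down[MY_KEY_MAX];     /* is the key currently held?    */
static int  key_pressed[MY_KEY_MAX];  /* down events since last tick   */
static int  key_released[MY_KEY_MAX]; /* up events since last tick     */

/* Call these from the event loop in main() on key down/up events. */
void on_key_down(int keycode) { key_down[keycode] = true;  key_pressed[keycode]++; }
void on_key_up(int keycode)   { key_down[keycode] = false; key_released[keycode]++; }

/* Query helpers usable from any module, no display handle needed. */
bool is_key_held(int keycode)     { return key_down[keycode]; }
bool was_key_pressed(int keycode) { return key_pressed[keycode] > 0; }

/* Call once per game tick, after input handling, to reset the counters. */
void after_game_tick(void)
{
    for (int i = 0; i < MY_KEY_MAX; i++) {
        key_pressed[i] = 0;
        key_released[i] = 0;
    }
}
```

Only main() touches the event queue; every other module just reads the globals, which is exactly the separation the thread is after.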
https://www.allegro.cc/forums/thread/617986/1044160
12 Jul 09:25 2010
Re: ANNOUNCE: fgl-5.4.2.3
Ivan Lazar Miljenovic <ivan.miljenovic <at> gmail.com>
2010-07-12 07:25:39 GMT

A couple of points I meant to make here but forgot (I was busy hacking on this and my other three graph-related packages for over a week now, and especially this past weekend it cut into my sleeping...):

* Apart from bug-fixes, I don't intend on touching the 5.4 series any more. That said, I believe that this version is suitable for replacing 5.4.2.2 in the platform (what's the process on that?).

* After I get my generic graph class sorted out at AusHac this coming weekend, I intend to make a 5.5.0.0 release which extends the classes in this new library; this will probably _not_ be suitable for the platform and is intended to serve as a stepping stone to the replacement library Thomas Bereknyei and I are working on.

With that last point: Thomas and I are willing to call this new version/replacement something like "inductive-graphs" if that is the preference of the community. Does anyone know of a website that would let us have a survey we can use to determine which option people would prefer?

Note that even if we give it a new name (rather than just a new major version number), we still intend on using the Data.Graph.Inductive module namespace (as it makes even more sense with the new name), so there will still be clashes between this new version and fgl.

Ivan Lazar Miljenovic <ivan.miljenovic <at> gmail.com> writes:
> I
http://permalink.gmane.org/gmane.comp.lang.haskell.cafe/77326
Tag: packages Found 125 results for 'packages'. 1) javascript - How to change NPM version? 2) puppet - Why does updating puppet ppa not update puppet? 3) python - List all the modules that are part of a python package? 4) python - Upgrade python packages from requirements.txt using pip command 5) go - Workflow for creating packages in Go 6) debian - Where are Debian packages hosted? 7) linux - How can I exorcise a specific Ubuntu package that crashes on a postinstall script? 8) linux - How can I verify which repository a package resides on in Centos? 9) yum - How to only allow a package to be downloaded from a specific repository using Yum? 10) debian - Getting the URL to a debian .deb package 11) debian - DEB: "Provides:" field ignored 12) linux - How to force debian linux apt-get installer to install unstable/specific version of packages? 13) debian - Use Debian 6.0 (Squeeze) packages on Debian 5.0 (Lenny) instalation 14) debian - How to upgrade a package in Debian 8? 15) ubuntu - Is a Debian Testing package safety comparable to Ubuntu “Stable” 16) c# - Same class and namespace name 17) python - Single python file distribution: module or package? 18) java - How to solve circular package dependencies 19) design - Component design: getting cohesion right 20) open-source - When did the standard for packaging Linux source code become .tar.gz? 21) python - Module vs. Package? 22) r - How do I show the source code of an S4 function in a package? 23) libraries - CLI and Lib package, where to put module loading code 24) r - How do I show the source code of an S4 function in a package? 25) design - Is it 'safe' to expect myClasses to agree not to only call package Scope methods from other Package scope methods? 26) linux - CentOS working with older repos 27) design - Does it make sense for a package to depend on, e.g. import, its nested packages? 28) solaris - What is the package manager under OpenSolaris 5.11? 
29) python - How to pass arguments to main function within Python module? 30) packages - Is it reasonable for an R package to import another package just for coding convenience? 31) centos - How to install Gamooga server on centOS? 32) linux - ERROR with rpm_check_debug vs depsolve 33) python - How do I remove packages installed with Python's easy_install? 34) java - Distributing Java code into packages using a clustering approach 35) debian - debian- file to package mapping 36) java - C++ Namespaces, comparison to Java packages 37) javascript - Installing a local module using npm? 38) linux - Best practice to track custom software installations on Linux in /usr? 39) r - Elegant way to check for missing packages and install them? 40) java - Relationship of Package names and file structure 41) architecture - Should package structure closely resemble class hierarchy? 42) web-server - Make Yum recognize that httpd24-httpd (from SCL) provides the webserver virtual package 43) mac - How do I uninstall any Apple pkg Package file? 44) debian - Why isn't there a openjdk-8-jdk package on debian anymore? 45) terminology - What is the formal definition of a meta package? 46) debian - install whitelist of packages using unstable in debian 47) puppet - Puppet: Run Augeas only when a specific package exists 48) python - Should I include scripts inside a Python package? 49) open-source - Overtake the maintenance of a software package 50) ubuntu - Automatic storing package before installing it on .deb based system?
https://programmatic.solutions/tag/packages
Right Shift

The right shift operator, >>, shifts all of the bits in a value to the right a specified number of times. Here is the general form of the right shift operator in Java:

value >> num

Here, num specifies the number of positions to right-shift the value in value, i.e., the >> moves all of the bits in the specified value to the right by the number of bit positions specified by num. The following code fragment shifts the value 32 to the right by two positions, resulting in a being set to 8:

int a = 32;
a = a >> 2;  // a now contains 8

When a value has bits that are "shifted off," those bits are lost. For example, the next code fragment shifts the value 35 to the right two positions, which causes the two low-order bits to be lost, resulting again in a being set to 8:

int a = 35;
a = a >> 2;  // now a contains 8

Looking at the same operation in binary shows more clearly how this happens:

00100011   35
>> 2
00001000    8

Each time you shift a value to the right, it divides that value by two, and discards any remainder. In some cases, you can take advantage of this for high-performance integer division by 2. When you are shifting right, the top (leftmost) bits exposed by the right shift are filled in with the previous contents of the top bit. This is called sign extension and serves to preserve the sign of negative numbers when you shift them right. For example, -8 >> 1 is -4, which, in binary, is:

11111000   -8
>> 1
11111100   -4

It is interesting to note that if you shift -1 right, the result always remains -1, since sign extension keeps bringing in more ones in the high-order bits. Sometimes it is not desirable to sign-extend values when you are shifting them to the right. For example, the following program converts a byte value to its hexadecimal string representation. Notice that the shifted value is masked by ANDing it with 0x0f to discard any sign-extended bits so that the value can be used as an index into the array of hexadecimal characters.
Java Right Shift Operator Example

Here is an example program that helps you understand the concept of the right shift operator in Java:

/* Java Program Example - Java Right Shift
 * Masking sign extension. */
public class JavaProgram {
    public static void main(String args[]) {
        char hex[] = {
            '0', '1', '2', '3', '4', '5', '6', '7',
            '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'
        };
        byte b = (byte) 0xf1;

        System.out.println("b = 0x" + hex[(b >> 4) & 0x0f] + hex[b & 0x0f]);
    }
}

When the above Java program is compiled and executed, it will produce the following output:

b = 0xf1
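The shift facts stated above — >> divides by two, discards the remainder, and sign-extends negative values — are easy to verify directly. A small self-contained check (mine, not from the tutorial):

```java
// Verifies the right-shift behaviour described in the text.
public class ShiftDemo {
    public static void main(String[] args) {
        System.out.println(32 >> 2);          // 8
        System.out.println(35 >> 2);          // 8  (the remainder is discarded)
        System.out.println(-8 >> 1);          // -4 (the sign bit is copied in)
        System.out.println(-1 >> 1);          // -1 (stays -1 no matter how far you shift)
        System.out.println((-8 >> 1) & 0x0f); // 12 (masking drops the sign-extended bits)
    }
}
```

The last line shows the same masking trick the example program uses on its hex digits.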
https://codescracker.com/java/java-right-shift.htm
Here we go:

1. Correct Model Naming

It is generally recommended to use singular nouns for model naming, for example: User, Article. That is, the last component of the name should be a noun, e.g.: SomeNewShinyItem. Use the singular when one unit of the model does not contain information about several objects.

2. Relationship Field Naming

For relationships such as ForeignKey, OneToOneField, and ManyToManyField, it is sometimes better to specify a name. Imagine there is a model called Article, in which one of the relationships is a ForeignKey to the model User. If this field contains information about the author of the article, then author will be a more appropriate name than user.

3. Correct Related-Name

It is reasonable to indicate a related_name in the plural, as accessing the related name returns a queryset. Please do set adequate related names. In the majority of cases, the name of the model in the plural will be just right. For example:

class Owner(models.Model):
    pass

class Item(models.Model):
    owner = models.ForeignKey(Owner, related_name='items')

4. Do not use ForeignKey with unique=True

There is no point in using ForeignKey with unique=True, as there exists OneToOneField for such cases.

5. Attributes and Methods Order in a Model

Preferable attributes and methods order in a model (with an empty line between the points):

- constants (for choices and others)
- fields of the model
- custom manager indication
- Meta
- def __unicode__ (Python 2) or def __str__ (Python 3)
- other special methods
- def clean
- def save
- def get_absolute_url
- other methods

Please note that the given order was taken from the documentation and slightly expanded.

6. Adding a Model via Migration

If you need to add a model then, having created the model class, run the manage.py commands makemigrations and migrate in sequence (or use South for Django 1.6 and below).

7. Denormalisations

You should not allow thoughtless use of denormalisation in relational databases.
Always try to avoid it, except for the cases when you denormalise data consciously, for whatever the reason may be (e.g. performance). If at the database design stage you understand that you need to denormalise much of the data, a good option could be the use of NoSQL. However, if most of the data does not require denormalisation but some of it cannot avoid it, think about a relational base with a JSONField to store that data.

8. BooleanField

Do not use null=True or blank=True for BooleanField. It should also be pointed out that it is better to specify default values for such fields. If you realise that the field can remain empty, you need NullBooleanField.

9. Business Logic in Models

The best place to put business logic in your project is in models, namely model methods and model managers. Model methods may do nothing more than invoke other methods/functions. If it is inconvenient or impossible to put the logic in models, move it to forms/serializers or to tasks.

10. Field Duplication in ModelForm

Do not duplicate model fields in ModelForm or ModelSerializer without need. If you want to specify that the form uses all model fields, use Meta fields. If you need to redefine a widget for a field, with nothing else to be changed in this field, use Meta widgets to indicate the widgets.

11. Do not use ObjectDoesNotExist

Using ModelName.DoesNotExist instead of ObjectDoesNotExist makes your exception handling more specific, which is good practice.

12. Use of choices

While using choices, it is recommended to:

- keep strings instead of numbers in the database (although this is not the best option from the point of view of optimal database use, it is more convenient in practice, as strings are more descriptive, which allows the use of clear filters with get options out of the box in REST frameworks);
- make the variables that store the variants constants; that is why they must be written in uppercase;
- indicate the variants before the list of fields;
- if it is a list of statuses, indicate it in chronological order (e.g. new, in_progress, completed);
- you can use Choices from the model_utils library.

Take model Article, for instance:

    from model_utils import Choices

    class Article(models.Model):
        STATUSES = Choices(
            (0, 'draft', _('draft')),
            (1, 'published', _('published'))
        )
        status = models.IntegerField(choices=STATUSES, default=STATUSES.draft)
        ...

13. Why do you need an extra .all()?

Using the ORM, do not add an extra all() call before filter(), count(), etc.

14. Many flags in a model?

If it is justified, replace several BooleanFields with a single status-like field, e.g.:

    class Article(models.Model):
        is_published = models.BooleanField(default=False)
        is_verified = models.BooleanField(default=False)
        ...

Assume the logic of our application presupposes that an article is initially neither verified nor published; then it is verified and is_verified is set to True, and then it is published. You can notice that an article cannot be published without being verified. So there are 3 valid states in total, but with 2 boolean fields there are 4 possible combinations, and you have to make sure that no articles with a wrong combination of boolean values appear. That is why using one status field instead of two boolean fields is a better option:

    class Article(models.Model):
        STATUSES = Choices('new', 'verified', 'published')
        status = models.CharField(max_length=20, choices=STATUSES, default=STATUSES.new)
        ...

This example may not be very illustrative, but imagine that you have 3 or more such boolean fields in your model; validation control for these field value combinations can be really tiresome.

15. Redundant model name in a field name

Do not add the model name to field names if there is no need to do so; e.g. if the table User has a field user_status, you should rename the field into status, as long as there are no other statuses in this model.

16.
Dirty data should not be found in the database

Always use PositiveIntegerField instead of IntegerField where negative values make no sense, because "bad" data must not get into the database. For the same reason, you should always use unique and unique_together for logically unique data, and should not put required=False on fields that must in fact be filled.

17. Getting the earliest/latest object

You can use ModelName.objects.earliest('created') / latest('created') instead of order_by('created')[0], and you can also put get_latest_by in the model's Meta. You should keep in mind that latest/earliest, as well as get, can raise a DoesNotExist exception. Therefore, order_by('created').first() is the most convenient variant, since first() returns None when there is no result.

18. Never do len(queryset)

Do not use len to get the number of objects in a queryset. The count method can be used for this purpose. With len(ModelName.objects.all()), first a query selecting all data from the table will be carried out, then this data will be transformed into Python objects, and the length of this list will be found with len. It is highly recommended not to do this, as count will instead use the corresponding SQL function COUNT(). With count, a simpler query will be carried out in the database, and fewer resources will be required for the Python code to run.

19. if queryset is a bad idea

Do not use a queryset as a boolean value: instead of "if queryset: do something", use "if queryset.exists(): do something". Remember that querysets are lazy, and if you use a queryset as a boolean value, an unnecessarily heavy query fetching all rows will be carried out.

20. Using help_text as documentation

Using help_text in model fields as a part of documentation will definitely facilitate the understanding of the data structure by you, your colleagues, and admin users.

21. Money Information Storage

Do not use FloatField to store information about amounts of money. Use DecimalField for this purpose instead. You can also keep this information in cents, units, etc., as integers.

22.
Remove _id

Do not add an _id suffix to ForeignKey and OneToOneField names.

23. Define __unicode__ or __str__

In all non-abstract models, add a __unicode__ (Python 2) or __str__ (Python 3) method. These methods must always return strings.

24. Transparent fields list

Do not use Meta.exclude for the description of a model's fields list in a ModelForm. It is better to use Meta.fields for this, as it makes the fields list transparent. Do not use Meta.fields = "__all__" for the same reason.

25. Do not heap all files uploaded by users into the same folder

Sometimes even a separate folder for each FileField will not be enough if a large number of uploaded files is expected. Storing many files in one folder means the file system will search for a given file more slowly. To avoid such problems, you can do the following:

    def get_upload_path(instance, filename):
        return os.path.join('account/avatars/', now().date().strftime("%Y/%m/%d"), filename)

    class User(AbstractUser):
        avatar = models.ImageField(blank=True, upload_to=get_upload_path)
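The combinatorics behind tip 14 can be checked without Django at all. Here is a small plain-Python sketch (the state names are illustrative, not from the article): two independent boolean flags admit four combinations, of which only three are valid, while a single status field makes the invalid state unrepresentable.

```python
from itertools import product

# Two independent boolean flags (is_published, is_verified) give four combinations
flag_states = list(product([False, True], repeat=2))
print(len(flag_states))  # 4

# ...but "published without being verified" is not a real state,
# so only three combinations are meaningful:
valid = [(p, v) for (p, v) in flag_states if not (p and not v)]
print(len(valid))  # 3

# A single status field has exactly one value per valid state:
STATUSES = ('new', 'verified', 'published')
print(len(STATUSES))  # 3
```

With three or more flags the gap between representable and valid states grows quickly, which is exactly the validation burden the article warns about.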
https://steelkiwi.com/blog/best-practices-working-django-models-python/
Working with commands to the database: using these programs, we can perform several operations such as insertion, deletion, updating, and retrieval. Here, in this article, we will discuss working with MySQL BLOB in Python. With the help of the BLOB (Binary Large Object) data type in MySQL, we can store files or images in our database in binary format.

Installation of MySQL Connector: this connector will connect our Python program to the database. Just run this command:

    pip install mysql-connector-python

Important steps for Python database programming:

- Import the MySQL database module:

    import mysql.connector

- Create a connection between the Python program and the database. Using the connect() method, we will connect the Python program with our database:

    connection = mysql.connector.connect(host='localhost', database='<database_name>', user='<user_name>', password='<password>')

- Now, create a cursor object by using the cursor() method, for executing SQL queries and holding the result in an object:

    cursor = connection.cursor()

- For executing SQL queries, we will use the cursor object. For example:

    cursor.execute("select * from table_name")

- Finally, once we are done with our operations, we have to close the resources:

    cursor.close()
    connection.close()

We are done with the basic steps of connection. Now, let's discuss the main agenda of this article, which is the practical implementation of the BLOB data type in MySQL with Python:

- First, we need to create a database in MySQL using the below command:

    create database geeksforgeeks;

- Then we create a function through which we can convert images or files into binary format.
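A sketch of that conversion function; the name convert_to_binary and the sample file used for the demonstration are illustrative, not from the article:

```python
import os
import tempfile

def convert_to_binary(filename):
    # Read the whole file as raw bytes so it can go into a BLOB column
    with open(filename, 'rb') as f:
        return f.read()

# quick demonstration with a throwaway file
path = os.path.join(tempfile.gettempdir(), 'sample.bin')
with open(path, 'wb') as f:
    f.write(b'\x89PNG demo bytes')

data = convert_to_binary(path)
print(type(data).__name__, len(data))  # bytes 15
```

Opening the file in 'rb' mode matters: it returns bytes unchanged, which is what a BLOB column expects, whereas text mode would try to decode the content.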
Next, check whether the database connection is created or not using a Python program (for example, by calling connection.is_connected() after connect()). Once the connection works, the complete flow for inserting images or files into the MySQL database is:

- Establish the connection with the MySQL database.
- Write the create-table query and, using the cursor object, execute it.
- Now, write the insert query and store it in a query variable.
- Store the data in variables, such as student_id = "1" and student_name = "Shubham"; for images or files, first convert them into binary data and then store that in a variable.
- Using the cursor object, execute the query, passing the data in tuple format.
- Using the commit() method, save the data.
- After completing all operations, close all the resources, such as the cursor and connection objects.
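A minimal, runnable sketch of the insert pattern just described: create the table, take the file as bytes, run a parameterised INSERT with a tuple, commit, and close. It uses Python's built-in sqlite3 as a stand-in so it can run without a MySQL server; with mysql.connector you would change only the connect() call and use %s placeholders instead of ?. The table and column names here are illustrative.

```python
import sqlite3

# stand-in for: mysql.connector.connect(host=..., database=..., user=..., password=...)
connection = sqlite3.connect(':memory:')
cursor = connection.cursor()

cursor.execute(
    "CREATE TABLE students (student_id TEXT, student_name TEXT, photo BLOB)")

student_id = "1"
student_name = "Shubham"
photo = b'\xffillustrative image bytes'  # normally the output of the conversion function

# parameterised insert: the data travels as a tuple, never pasted into the SQL string
cursor.execute(
    "INSERT INTO students (student_id, student_name, photo) VALUES (?, ?, ?)",
    (student_id, student_name, photo))
connection.commit()

cursor.execute(
    "SELECT student_name, photo FROM students WHERE student_id = ?", ("1",))
name, blob = cursor.fetchone()
print(name, blob == photo)  # Shubham True

cursor.close()
connection.close()
```

The round trip at the end shows that the bytes come back from the BLOB column unchanged.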
https://www.geeksforgeeks.org/working-with-mysql-blob-in-python/?ref=rp
On Sun, Oct 19, 2003 at 11:20:33AM +0200, Josip Rodin wrote:

At this point I see only two alternatives:

1) use dependencies

   build: build-arch
   binary-indep: build-indep

It is not good to run build-indep as root, but only the maintainer runs binary-indep, and there is no need to change dpkg-buildpackage.

2) Set a make variable BUILD and do

   ifdef BUILD
   build: $(BUILD)
   else
   build: build-arch build-indep
   endif

This requires dpkg-buildpackage to call debian/rules with BUILD=build-arch or BUILD=build-indep accordingly. Of course, variants using environment variables exist.

Cheers,
--
Bill. <ballombe@debian.org>

Imagine a large red swirl here.
https://lists.debian.org/debian-policy/2003/10/msg00065.html
MidiListPlayer is a Prefab, available with the Pro version, able to play Midi music from a list of Midis in your MPTK database in Unity. Furthermore, you can play parts of Midi files, with overlap and looping.

- This Prefab is available with MPTK Pro.
- No line of script is necessary to use this Prefab.
- An API exists for a more complex integration in your application.
- See an example with source code at the end of this page.

Inspector parameters

Overlap: time in milliseconds of overlap between two Midis. The end of the current Midi gently overlaps the start of the next Midi.
Play On Start: if checked, playing starts when your application starts.
Loop on the List: if checked, playing restarts automatically at the first Midi when the end of the Midi list is reached.

When running: Integrated MidiFilePlayer

In the MidiListPlayer, select MidiFilePlayer_1; the classical MidiFilePlayer Inspector is displayed. See here for the description of each attribute and foldout.

- Foldout Midi Parameters: see here
- Foldout Events: see here
- Foldout Midi Info: see here
- Synth Parameters: see here
- Default Editor: see here

Integration of MidiListPlayer in your script

See TestMidiListPlayer.cs and the events associated with the canvas gameobjects of the TestListMidiPlay scene for the whole example.

    using MidiPlayerTK;
    ...
    /// <summary>
    /// MPTK component able to play a Midi list.
    /// This Prefab must be present in your scene.
    /// </summary>
    public MidiListPlayer midiListPlayer;
    ...
    private void Start()
    {
        if (!HelperDemo.CheckSFExists()) return;

        // Find the Midi external component
        if (midiListPlayer == null)
        {
            Debug.Log("No MidiListPlayer defined with the editor inspector, try to find one");
            MidiListPlayer fp = FindObjectOfType<MidiListPlayer>();
            if (fp == null)
                Debug.Log("Can't find a MidiListPlayer in the Hierarchy. No music will be played");
            else
                midiListPlayer = fp;
        }
    }

    // This method is fired from a button
    public void Next()
    {
        midiListPlayer.MPTK_Next();
    }
https://paxstellar.fr/midi-list-player-v2/
How to write code in Python where, if the input is 1, the output will be 2?

Hello everyone, if the user gives an input of 1, then the output will be 2. How to write this as code in Python?

You can go through this:

    def num(number):
        return number + 1

    print(num(1))
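Building on the answer, the same function can be fed a number typed by the user: int() converts the text returned by input() into an integer. The interactive line is left commented out so this sketch also runs non-interactively.

```python
def num(number):
    # the rule from the question: input 1 -> output 2
    return number + 1

# value = int(input("Enter a number: "))  # uncomment to read from the user
value = 1
print(num(value))  # 2
print(num(41))     # 42
```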
https://www.edureka.co/community/87505/how-write-code-python-where-input-will-then-the-output-will-be?show=87506
session tracking - JSP-Servlet

session tracking: hi, i m working on a real estate web site....which i have to submit as final year project. im having problem regarding session tracking... i had gone through roseindia but was unable to get enough.

Session tracking: How can I enable session tracking for JSP pages if the browser has disabled cookies? By default session tracking uses cookies; if the browser does not support cookies, or if cookies are disabled, you can still use session tracking via URL rewriting.
CC-MAIN-2015-18
refinedweb
2,704
64
Thanks Bert, that put on hopefully the right track to do the same thing in C++. I can almost do it completely, and I'll detail the code in another email. Thanks, Peter > > Here is my, considerably low-tech, code : > > At init time, > > ----------- > const std::string CatchOutput = > "class StdoutCatcher:\n" > "\tdef __init__(self):\n" > "\t\tself.data = ''\n" > "\tdef write(self, stuff):\n" > "\t\tself.data = self.data + stuff\n" > "\n" > "import sys\n" > "TheStdoutCatcher = StdoutCatcher()\n" > "sys.stdout = TheStdoutCatcher\n"; > > Run (CatchOutput); // thin wrapper to run a snippet > ----------- > > and then after some python code has run : > > ----------- > using namespace boost::python; > object Catcher (main_namespace ["TheStdoutCatcher"]); > object CatcherData (borrowed (PyObject_GetAttrString (Catcher.ptr (), > "data"))); > > const std::string &S = extract<std::string>(CatcherData); > Output += S; // some log string > Run ("sys.stdout.data=''\n"); > ----------- > > Note I'm pretty new at this, so the only warranty is > "the above didn't crash so far" :) Now that I paste > it, I'm not sure about that const ref to the result of > extract. > > > hth, > bert > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > > >
https://mail.python.org/pipermail/cplusplus-sig/2004-August/007683.html
CC-MAIN-2016-30
refinedweb
179
55.03
Cooperation between threads( ). Wait and notify It's thread's execution is suspended, and the lock on that object is released. There are two forms of wait( ). The first takes an argument in milliseconds that has the same meaning as in sleep( ): "Pause for this period of time." The difference is that in wait( ): The object lock is released during the wait( ). You can come out of the wait( ) due to a notify( ) or notifyAll( ), or by letting the clock run out. The second form of wait( ) takes no arguments; this version is more commonly used. This wait( ) continues indefinitely until the thread receives a notify( ) or notifyAll( ). One fairly unique aspect of wait( ), notify( ), and notifyAll( ) is that these methods are part of the base class Object and not part of Thread, as is sleep( ). Although this seems a bit strange at firstto have something that's exclusively for threading as part of the universal base classit's essential because they manipulate the lock that's nonsynchronized methods since it doesn't manipulate the lock). If you call any of these within a method that's not synchronized, the program will compile, but when you run it, you'll get an IllegalMonitorStateException with the somewhat nonintuitive message "current thread not owner." This message means that the thread calling wait( ), notify( ), or notifyAll( ) must "own" (acquire) the lock for the object before it can call any of these methods. You can ask another object to perform an operation that manipulates its own lock. To do this, you must first capture that object's lock. For example, if you want to notify( ) an object x, you must do so inside a synchronized block that acquires the lock for x: synchronized(x) { x.notify(); } Typically, wait( ) is used when you're waiting for some condition that is under the control of forces outside of the current method to change (typically, this condition will be changed by another thread). 
You don't want to idly wait while testing the condition inside your thread; this is called a "busy wait" and it's restaurant's "order window," restaurant.order. In run( ), the WaitPerson goes into wait( ) mode, stopping that thread until it is woken up with a notify( ) from the Chef. Since this is a very simple program, we know that only one thread will be waiting on the WaitPerson's lock: the WaitPerson thread itself. For this reason it's safe to call notify( ). In more complex situations, multiple threads may be waiting on a particular object lock, so you don't doesn't( ), it's guaranteed that two threads trying to call notify( ) on one object won't step on each other's. Using Pipes for I/O between threads It's often useful for threads to communicate with each other by using I/O. Threading libraries may provide support for inter-thread I/O in the form of pipes. These exist in the Java I/O library as the classes PipedWriter (which allows a thread to write into a pipe) and PipedReader (which allows a different thread to read from the same pipe). This can be thought of as a variation of the producer-consumer problem, where the pipe is the canned solution. 
Here's a simple example in which two threads use a pipe to communicate:

//: c13:PipedIO.java
// Using pipes for inter-thread I/O
import java.io.*;
import java.util.*;

class Sender extends Thread {
  private Random rand = new Random();
  private PipedWriter out = new PipedWriter();
  public PipedWriter getPipedWriter() { return out; }
  public void run() {
    while(true) {
      for(char c = 'A'; c <= 'z'; c++) {
        try {
          out.write(c);
          sleep(rand.nextInt(500));
        } catch(Exception e) {
          throw new RuntimeException(e);
        }
      }
    }
  }
}

class Receiver extends Thread {
  private PipedReader in;
  public Receiver(Sender sender) throws IOException {
    in = new PipedReader(sender.getPipedWriter());
  }
  public void run() {
    try {
      while(true) {
        // Blocks until characters are there:
        System.out.println("Read: " + (char)in.read());
      }
    } catch(IOException e) {
      throw new RuntimeException(e);
    }
  }
}

public class PipedIO {
  public static void main(String[] args) throws Exception {
    Sender sender = new Sender();
    Receiver receiver = new Receiver(sender);
    sender.start();
    receiver.start();
    new Timeout(4000, "Terminated");
  }
} ///:~

Sender and Receiver represent threads that are performing some tasks and need to communicate with each other. Sender creates a PipedWriter, which is a standalone object, but inside Receiver the creation of PipedReader must be associated with a PipedWriter in the constructor. The Sender puts data into the Writer and sleeps for a random amount of time. However, Receiver has no sleep( ) or wait( ). But when it does a read( ), it automatically blocks when there is no more data. You get the effect of a producer-consumer, but no wait( ) loop is necessary. Notice that the sender and receiver are started in main( ), after the objects are completely constructed. If you don't start completely constructed objects, the pipe can produce inconsistent behavior on different platforms.

More sophisticated cooperation
http://www.informit.com/articles/article.aspx?p=31682&seqNum=5
Hi Itsik,

You can use the STL list container, for example:

// Print the contents of an STL list of ints
#include <list>     // new-style header; <list.h> is the old pre-standard one
#include <cstdio>
using namespace std;

typedef list<int> INT_LIST;

int main() {
    INT_LIST myList;    // don't name it "list" -- that shadows std::list
    myList.push_back(1);
    myList.push_back(2);
    myList.push_back(3);

    INT_LIST::iterator theIterator;
    for (theIterator = myList.begin(); theIterator != myList.end(); theIterator++) {
        printf("%d\n", *theIterator);
    }

    myList.clear();
    return 0;
}

AlexFM's suggestion to use the STL containers is standard C++ and should be discussed at http:/Programming/Programming_Languages/Cplusplus/ . If it is neither here nor there to you, I'd go with AlexFM's suggestion. Poor old vanilla C doesn't have containers in its libraries, and non-specific requests for help in implementing linked lists in C usually get followed up by accusations of homework cheating... and quite rightly so :-)
https://www.experts-exchange.com/questions/20805833/Linked-list-in-c-list-h.html
The author selected the Open Source Initiative to receive a donation as part of the Write for DOnations program.

Introduction

Vue.js is a performant and progressive Javascript framework. It is a popular framework on GitHub and has an active and helpful community. In order to show the capabilities of the Vue web framework, this tutorial will lead you through building the shopping cart of an e-commerce app. This app will store product information and hold the products that the customer wants to buy for checkout later. To store the information, you will explore a widely used state management library for Vue.js: Vuex. This will allow the shopping cart application to persist data to a server. You will also handle asynchronous task management using Vuex. Once you finish the tutorial, you will have a functioning shopping cart application like the following:

Prerequisites

Step 1 — Setting Up the Application with Vue CLI

As of version 4.5.0, Vue CLI provides a built-in option to choose the Vue 3 preset when creating a new project. The latest version of Vue CLI allows you to use Vue 3 out of the box and to update your existing Vue 2 project to Vue 3. In this step, you will use the Vue CLI to make your project, then install the front-end dependencies.

First, install the latest version of Vue CLI by executing the following command from the terminal:

- npm install -g @vue/cli

This will install Vue CLI globally on your system. Check you have the right version with this command:

- vue --version

You will get output like the following:

Output
@vue/cli 4.5.10

Note: If you already have an older version of Vue CLI installed globally, execute the following command from the terminal to upgrade:

- npm update -g @vue/cli

Now, you can create a new project:

- vue create vuex-shopping-cart

This uses the Vue CLI command vue create to make a project named vuex-shopping-cart. For more information on the Vue CLI, check out How To Generate a Vue.js Single Page App With the Vue CLI.

Next, you will receive the following prompt:

Output
Vue CLI v4.5.10
?
Please pick a preset: (Use arrow keys)
❯ Default ([Vue 2] babel, eslint)
  Default (Vue 3 Preview) ([Vue 3] babel, eslint)
  Manually select features

Choose the Manually select features option from this list. Next, you will encounter the following prompt to customize your Vue app:

Output
...
 ◉ Choose Vue version
 ◯ Babel
 ◯ TypeScript
 ◯ Progressive Web App (PWA) Support
 ◉ Router
 ◉ Vuex
 ◯ CSS Pre-processors
 ◯ Linter / Formatter
❯◯ Unit Testing
 ◯ E2E Testing

From this list, select Choose Vue version, Router, and Vuex. This will allow you to choose your version of Vue and use Vuex and Vue Router. Next, choose 3.x (Preview) for your version of Vue, answer no (N) to history mode, and select the option to have your configurations In dedicated config file. Finally, answer N to avoid saving the setup for a future project.

At this point, Vue will create your application. After the project creation, move into the folder using the command:

- cd vuex-shopping-cart

To start, you'll install Bulma, a free, open-source CSS framework based on Flexbox. Add Bulma to your project by running the following command:

- npm install bulma

To use Bulma CSS in your project, open up your app's entry point, the main.js file:

- nano src/main.js

Then add the following highlighted import line:

vuex-shopping-cart/src/main.js
import { createApp } from 'vue'
import App from './App.vue'
import router from './router'
import store from './store'
import './../node_modules/bulma/css/bulma.css'

createApp(App).use(store).use(router).mount('#app')

Save and close the file.

In this app, you'll use the Axios module to make requests to your server. Add the Axios module by running the following command:

- npm install axios

Now, run the app to make sure it is working:

- npm run serve

Navigate to http://localhost:8080 in your browser of choice. You will find the Vue app welcome page:

Once you have confirmed that Vue is working, stop your server with CTRL+C.
In this step, you globally installed Vue CLI on your computer, created a Vue project, installed the required npm packages Axios and Bulma, and imported Bulma to the project in the main.js file. Next, you will set up a back-end API to store data for your app.

Step 2 — Setting Up the Backend

In this step, you will create a separate backend to work with your Vue project. This will be in a different project folder from your front-end Vue application. First, move out of your Vue directory:

- cd ..

Make a separate directory named cart-backend:

- mkdir cart-backend

Once you have your back-end folder, make it your working directory:

- cd cart-backend

You will get started by initializing the project with the necessary files. Create the file structure of your app with the following commands:

- touch server.js
- touch server-cart-data.json
- touch server-product-data.json

You use the touch command here to create empty files. The server.js file will hold your Node.js server, and the JSON files will hold data for the shop's products and the user's shopping cart.

Now run the following command to create a package.json file:

- npm init -y

For more information on npm and Node, check out our How To Code in Node.js series.

Install these back-end dependencies into your Node project:

- npm install concurrently express body-parser

Express is a Node framework for web applications, which will provide useful abstractions for handling API requests. Concurrently will be used to run the Express back-end server and the Vue.js development server simultaneously. Finally, body-parser is an Express middleware that will parse requests to your API.
Next, open the server.js file in the root of your application:

- nano server.js

Then add the following code:

cart-backend/server.js
const express = require('express');
const bodyParser = require('body-parser');
const fs = require('fs');
const path = require('path');

const app = express();

const PRODUCT_DATA_FILE = path.join(__dirname, 'server-product-data.json');
const CART_DATA_FILE = path.join(__dirname, 'server-cart-data.json');

app.set('port', (process.env.PORT || 3000));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use((req, res, next) => {
  res.setHeader('Cache-Control', 'no-cache, no-store, must-revalidate');
  res.setHeader('Pragma', 'no-cache');
  res.setHeader('Expires', '0');
  next();
});

app.listen(app.get('port'), () => {
  console.log(`Find the server at: http://localhost:${app.get('port')}/`);
});

This snippet first adds the Node modules to your backend, including the fs module to write to your filesystem and the path module to make defining filepaths easier. You then initialize the Express app and save references to your JSON files as PRODUCT_DATA_FILE and CART_DATA_FILE. These will be used as data repositories. Finally, you created an Express server, set the port, created a middleware to set the response headers, and set the server to listen on your port. For more information on Express, see the official Express documentation.

The setHeader method sets the header of the HTTP responses. In this case, you are using Cache-Control to direct the caching of your app. For more information on this, check out the Mozilla Developer Network article on Cache-Control.

Next, you will create an API endpoint that your frontend will query to add an item to the shopping cart. To do this, you will use app.post to listen for an HTTP POST request. Add the following code to server.js just after the last app.use() middleware:

cart-backend/server.js
...
app.use((req, res, next) => {
  res.setHeader('Cache-Control', 'no-cache, no-store, must-revalidate');
  res.setHeader('Pragma', 'no-cache');
  res.setHeader('Expires', '0');
  next();
});

app.post('/cart', (req, res) => {
  fs.readFile(CART_DATA_FILE, (err, data) => {
    const cartProducts = JSON.parse(data);
    const newCartProduct = {
      id: req.body.id,
      title: req.body.title,
      description: req.body.description,
      price: req.body.price,
      image_tag: req.body.image_tag,
      quantity: 1
    };
    let cartProductExists = false;
    cartProducts.map((cartProduct) => {
      if (cartProduct.id === newCartProduct.id) {
        cartProduct.quantity++;
        cartProductExists = true;
      }
    });
    if (!cartProductExists) cartProducts.push(newCartProduct);
    fs.writeFile(CART_DATA_FILE, JSON.stringify(cartProducts, null, 4), () => {
      res.setHeader('Cache-Control', 'no-cache');
      res.json(cartProducts);
    });
  });
});

app.listen(app.get('port'), () => {
  console.log(`Find the server at: http://localhost:${app.get('port')}/`);
});

This code receives the request object containing the cart item from the frontend and stores it in the server-cart-data.json file in the root of your project. Products here are JavaScript objects with id, title, description, price, image_tag, and quantity properties. The code also checks whether the product is already in the cart to ensure that requests for a repeated product only increase the quantity.

Now, add code to create an API endpoint to remove an item from the shopping cart. This time, you will use app.delete to listen for an HTTP DELETE request. Add the following code to server.js just after the previous endpoint:

cart-backend/server.js
...
    fs.writeFile(CART_DATA_FILE, JSON.stringify(cartProducts, null, 4), () => {
      res.setHeader('Cache-Control', 'no-cache');
      res.json(cartProducts);
    });
  });
});

app.delete('/cart/delete', (req, res) => {
  fs.readFile(CART_DATA_FILE, (err, data) => {
    let cartProducts = JSON.parse(data);
    cartProducts.map((cartProduct) => {
      if (cartProduct.id === req.body.id && cartProduct.quantity > 1) {
        cartProduct.quantity--;
      } else if (cartProduct.id === req.body.id && cartProduct.quantity === 1) {
        const cartIndexToRemove = cartProducts.findIndex(cartProduct => cartProduct.id === req.body.id);
        cartProducts.splice(cartIndexToRemove, 1);
      }
    });
    fs.writeFile(CART_DATA_FILE, JSON.stringify(cartProducts, null, 4), () => {
      res.setHeader('Cache-Control', 'no-cache');
      res.json(cartProducts);
    });
  });
});

app.listen(app.get('port'), () => {
  console.log(`Find the server at: http://localhost:${app.get('port')}/`); // eslint-disable-line no-console
});

This code receives the request object containing the item to be removed from the cart and checks the server-cart-data.json file for this item via its id. If it exists and the quantity is greater than one, then the quantity of the item in the cart is decremented. Otherwise, if the item's quantity is exactly 1, it will be removed from the cart, and the remaining items will be stored in the server-cart-data.json file.

To give your user additional functionality, you can now create an API endpoint to remove all items from the shopping cart. This will also listen for a DELETE request. Add the following highlighted code to server.js after the previous endpoint:

cart-backend/server.js
...
    fs.writeFile(CART_DATA_FILE, JSON.stringify(cartProducts, null, 4), () => {
      res.setHeader('Cache-Control', 'no-cache');
      res.json(cartProducts);
    });
  });
});

app.delete('/cart/delete/all', (req, res) => {
  fs.readFile(CART_DATA_FILE, () => {
    let emptyCart = [];
    fs.writeFile(CART_DATA_FILE, JSON.stringify(emptyCart, null, 4), () => {
      res.json(emptyCart);
    });
  });
});

app.listen(app.get('port'), () => {
  console.log(`Find the server at: http://localhost:${app.get('port')}/`); // eslint-disable-line no-console
});

This code is responsible for removing all the items from the cart by returning an empty array.

Next, you will create an API endpoint to retrieve all the products from the product storage. This will use app.get to listen for a GET request. Add the following code to server.js after the previous endpoint:

cart-backend/server.js
...
app.delete('/cart/delete/all', (req, res) => {
  fs.readFile(CART_DATA_FILE, () => {
    let emptyCart = [];
    fs.writeFile(CART_DATA_FILE, JSON.stringify(emptyCart, null, 4), () => {
      res.json(emptyCart);
    });
  });
});

app.get('/products', (req, res) => {
  fs.readFile(PRODUCT_DATA_FILE, (err, data) => {
    res.setHeader('Cache-Control', 'no-cache');
    res.json(JSON.parse(data));
  });
});
...

This code uses the file system's native readFile method to fetch all the data in the server-product-data.json file and returns it in JSON format.

Finally, you will create an API endpoint to retrieve all the items from the cart storage:

cart-backend/server.js
...
app.get('/products', (req, res) => {
  fs.readFile(PRODUCT_DATA_FILE, (err, data) => {
    res.setHeader('Cache-Control', 'no-cache');
    res.json(JSON.parse(data));
  });
});

app.get('/cart', (req, res) => {
  fs.readFile(CART_DATA_FILE, (err, data) => {
    res.setHeader('Cache-Control', 'no-cache');
    res.json(JSON.parse(data));
  });
});
...

Similarly, this code uses the file system's native readFile method to fetch all the data in the server-cart-data.json file and returns it in JSON format. Save and close the server.js file.
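The cart rules these endpoints implement (merge on add, decrement-then-remove on delete) can be exercised as plain functions, independent of Express and the filesystem. This is a sketch; addToCart and removeFromCart are hypothetical helper names for illustration, not part of the tutorial's server.js:

```javascript
// Sketch of the cart rules behind POST /cart and DELETE /cart/delete,
// written as pure functions over plain arrays.
// `addToCart` and `removeFromCart` are hypothetical names, not server.js code.
function addToCart(cartProducts, newCartProduct) {
  const cart = cartProducts.map(p => ({ ...p })); // work on a copy
  const existing = cart.find(p => p.id === newCartProduct.id);
  if (existing) {
    existing.quantity++; // repeated product: only the quantity grows
  } else {
    cart.push({ ...newCartProduct, quantity: 1 });
  }
  return cart;
}

function removeFromCart(cartProducts, id) {
  return cartProducts
    .map(p => (p.id === id ? { ...p, quantity: p.quantity - 1 } : p))
    .filter(p => p.quantity > 0); // quantity 1 -> the item leaves the cart
}

// Adding the same product twice yields one line item with quantity 2:
let cart = addToCart([], { id: 1, title: 'CAT Engine', price: 719.9 });
cart = addToCart(cart, { id: 1, title: 'CAT Engine', price: 719.9 });
console.log(cart.length, cart[0].quantity); // 1 2

// Removing twice empties the cart again:
cart = removeFromCart(removeFromCart(cart, 1), 1);
console.log(cart.length); // 0
```

Keeping this logic pure makes it easy to unit-test before wiring it to the filesystem callbacks.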
Next, you will add some mock data to your JSON files for testing purposes. Open up the server-cart-data.json file you created earlier:

- nano server-cart-data.json

Add the following array of product objects:

cart-backend/server-cart-data.json
[
  {
    "id": 2,
    "title": "Diesel Engine",
    "description": "A diesel engine in good condition",
    "price": 650.9,
    "image_tag": "diesel-engine.png",
    "quantity": 1
  },
  {
    "id": 3,
    "title": "Sefang Engine",
    "description": "A Sefang engine in good condition",
    "price": 619.9,
    "image_tag": "sefang-engine.png",
    "quantity": 1
  }
]

This shows two engines that will start out in the user's shopping cart. (The titles and descriptions are placeholders; feel free to substitute your own.) Save and close the file.

Now open the server-product-data.json file:

- nano server-product-data.json

Add the following data in the server-product-data.json file:

cart-backend/server-product-data.json
[
  {
    "id": 1,
    "title": "CAT Engine",
    "description": "A CAT engine in good condition",
    "image_tag": "CAT-engine.png",
    "created_at": 2020,
    "owner": "Colton",
    "owner_photo": "image-colton.jpg",
    "email": "colt@gmail.com",
    "price": 719.9
  },
  {
    "id": 2,
    "title": "Diesel Engine",
    "description": "A diesel engine in good condition",
    "image_tag": "diesel-engine.png",
    "created_at": 2020,
    "owner": "Colton",
    "owner_photo": "image-colton.jpg",
    "email": "colt@gmail.com",
    "price": 650.9
  },
  {
    "id": 3,
    "title": "Sefang Engine",
    "description": "A Sefang engine in good condition",
    "image_tag": "sefang-engine.png",
    "created_at": 2017,
    "owner": "Anne",
    "owner_photo": "image-anne.jpg",
    "email": "anne@gmail.com",
    "price": 619.9
  },
  {
    "id": 4,
    "title": "Lawn Mower",
    "description": "A lawn mower in good condition",
    "image_tag": "lawn-mower.png",
    "created_at": 2017,
    "owner": "Irene",
    "owner_photo": "image-irene.jpg",
    "email": "irene@gmail.com",
    "price": 319.9
  }
]

This will hold all the possible products that the user can put in their cart. Save and close the file.

Finally, execute this command to run the server:

- node server.js

You will receive something like this on your terminal:

Output
Find the server at: http://localhost:3000/

Leave this server running in this window.

Finally, you will set up a proxy server in your Vue app. This will enable the connection between the frontend and backend. Go to the root directory of your Vue app:

- cd ../vuex-shopping-cart

In the terminal, run this command to create a Vue configuration file:

- nano vue.config.js

Then, add this code:

vuex-shopping-cart/vue.config.js
module.exports = {
  devServer: {
    proxy: {
      '/api': {
        target: 'http://localhost:3000',
        changeOrigin: true,
        pathRewrite: {
          '^/api': ''
        }
      }
    }
  }
}

This will send requests from your frontend to your back-end server at http://localhost:3000.
For more information on proxy configuration, review the Vue devServer.proxy documentation.

Save and close the file.

In this step, you wrote server-side code that will handle API endpoints for your shopping cart. You started by creating the file structure and ended with adding the necessary code in the server.js file and data in your JSON files. Next, you will set up the state storage for your frontend.

Step 3 — Setting Up State Management with Vuex

In Vuex, the store is where the state of the application is kept. The application state can only be updated by dispatching actions within a component that will then trigger mutations in the store. The Vuex store is made up of the state, mutations, actions, and getters. In this step, you're going to build each of these pieces, after which you will couple everything together into a Vuex store.

State

Now you will create a place to store state for your application. The store folder in the root directory src of your project is automatically created at the time of the project setup. Locate the store folder in the src directory of your project, then create a new folder named modules:

- mkdir src/store/modules

Inside this folder, create the product and cart folders:

- mkdir src/store/modules/product
- mkdir src/store/modules/cart

These will hold all the state files for your product inventory and your user's cart. You will build these two files up at the same time, each open in a separate terminal. This way, you will be able to compare your mutations, getters, and actions side-by-side.

Finally, open an index.js file in the product folder:

- nano src/store/modules/product/index.js

Add the following code to create a state object containing your productItems:

vuex-shopping-cart/src/store/modules/product/index.js
import axios from 'axios';

const state = {
  productItems: []
}

Save the file and keep it open.
Similarly, in a new terminal, add an index.js file to the cart directory with the following:

- nano src/store/modules/cart/index.js

Then add code for the cartItems:

vuex-shopping-cart/src/store/modules/cart/index.js
import axios from 'axios';

const state = {
  cartItems: []
}

Save this file, but keep it open.

In these code snippets, you imported the Axios module and set the state. The state is a store object that holds the application-level data that needs to be shared between components.

Now that you've set the states, head over to mutations.

Mutations

Mutations are methods that modify the store state. They usually consist of a string type and a handler that accepts the state and payload as parameters. You will now create all the mutations for your application.

Add the following code in the product/index.js file just after the state section:

vuex-shopping-cart/src/store/modules/product/index.js
...
const mutations = {
  UPDATE_PRODUCT_ITEMS (state, payload) {
    state.productItems = payload;
  }
}

This creates a mutations object that holds an UPDATE_PRODUCT_ITEMS method that sets the productItems array to the payload value.

Similarly, add the following code in the cart/index.js file just after the state section:

vuex-shopping-cart/src/store/modules/cart/index.js
...
const mutations = {
  UPDATE_CART_ITEMS (state, payload) {
    state.cartItems = payload;
  }
}

This creates a similar UPDATE_CART_ITEMS for your user's shopping cart. Note that this follows the Flux architecture style of making references to mutations in capital letters.

Actions

Actions are methods that will handle mutations, so that mutations are insulated from the rest of your application code.

In product/index.js, create an actions object with all the actions for your application:

vuex-shopping-cart/src/store/modules/product/index.js
...
const actions = {
  getProductItems ({ commit }) {
    axios.get(`/api/products`).then((response) => {
      commit('UPDATE_PRODUCT_ITEMS', response.data)
    });
  }
}

Here the getProductItems method sends an asynchronous GET request to the server using the Axios package that you installed earlier. When the request is successful, the UPDATE_PRODUCT_ITEMS mutation is called with the response data as the payload.

Next, add the following actions object to cart/index.js:

vuex-shopping-cart/src/store/modules/cart/index.js
...
const actions = {
  getCartItems ({ commit }) {
    axios.get('/api/cart').then((response) => {
      commit('UPDATE_CART_ITEMS', response.data)
    });
  },
  addCartItem ({ commit }, cartItem) {
    axios.post('/api/cart', cartItem).then((response) => {
      commit('UPDATE_CART_ITEMS', response.data)
    });
  },
  removeCartItem ({ commit }, cartItem) {
    // axios.delete takes a config object; the request body goes in `data`
    axios.delete('/api/cart/delete', { data: cartItem }).then((response) => {
      commit('UPDATE_CART_ITEMS', response.data)
    });
  },
  removeAllCartItems ({ commit }) {
    axios.delete('/api/cart/delete/all').then((response) => {
      commit('UPDATE_CART_ITEMS', response.data)
    });
  }
}

In this file, you create the getCartItems method, which sends an asynchronous GET request to the server. When the request is successful, the UPDATE_CART_ITEMS mutation is called with the response data as the payload. The same happens with the removeAllCartItems method, although it makes a DELETE request to the server. The removeCartItem and addCartItem methods receive the cartItem object as a parameter for making a DELETE or POST request. After a successful request, the UPDATE_CART_ITEMS mutation is called with the response data as the payload.

You used ES6 destructuring to decouple the commit method from the Vuex context object. This is similar to using context.commit.

Getters

Getters are to an application store what computed properties are to a component: they return information computed from the store state.
Next, create a getters object to get all the information for the product module:

vuex-shopping-cart/src/store/modules/product/index.js
...
const getters = {
  productItems: state => state.productItems,
  productItemById: (state) => (id) => {
    return state.productItems.find(productItem => productItem.id === id)
  }
}

Here, you made a method productItems that returns the list of product items in the state, followed by productItemById, a higher-order function that returns a single product by its id.

Next, create a getters object in cart/index.js:

vuex-shopping-cart/src/store/modules/cart/index.js
...
const getters = {
  cartItems: state => state.cartItems,
  cartTotal: state => {
    return state.cartItems.reduce((acc, cartItem) => {
      return (cartItem.quantity * cartItem.price) + acc;
    }, 0).toFixed(2);
  },
  cartQuantity: state => {
    return state.cartItems.reduce((acc, cartItem) => {
      return cartItem.quantity + acc;
    }, 0);
  }
}

In this snippet, you made the cartItems method, which returns the list of cart items in the state, followed by cartTotal, which returns the computed total amount of the cart items available for checkout. Finally, you made the cartQuantity method, which returns the quantity of items in the cart.

Exporting the Module

The final part of the product and cart modules will export the state, mutations, actions, and getters objects so that other parts of the application can access them.

In product/index.js, add the following code at the end of the file:

vuex-shopping-cart/src/store/modules/product/index.js
...
const productModule = {
  state,
  mutations,
  actions,
  getters
}

export default productModule;

This collects all your state objects into the productModule object, then exports it as a module. Save product/index.js and close the file.

Next, add similar code to cart/index.js:

vuex-shopping-cart/src/store/modules/cart/index.js
...
const cartModule = {
  state,
  mutations,
  actions,
  getters
}

export default cartModule;

This exports the module as cartModule.
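To see what the cartTotal and cartQuantity getters compute, you can run the same reduce logic over plain data outside of Vuex (a quick sketch with sample prices):

```javascript
// Sketch: the arithmetic behind the cartTotal and cartQuantity getters,
// applied to a plain array instead of the Vuex state.
const cartItems = [
  { price: 650.9, quantity: 1 }, // one diesel engine
  { price: 619.9, quantity: 2 }, // two Sefang engines
];

// Sum of price * quantity, fixed to two decimals (note: toFixed returns a string)
const cartTotal = cartItems.reduce((acc, cartItem) => {
  return (cartItem.quantity * cartItem.price) + acc;
}, 0).toFixed(2);

// Total number of units across all line items
const cartQuantity = cartItems.reduce((acc, cartItem) => {
  return cartItem.quantity + acc;
}, 0);

console.log(cartTotal, cartQuantity); // 1890.70 3
```

Because toFixed(2) produces a string, the template can interpolate cartTotal directly without further formatting.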
Setting up the Store

With the state, mutations, actions, and getters all set up, the final part of integrating Vuex into your application is creating the store. Here you will harness the Vuex modules to split your application store into two manageable fragments.

To create your store, open up the index.js file in your store folder:

- nano src/store/index.js

Add the following highlighted lines:

vuex-shopping-cart/src/store/index.js
import { createStore } from 'vuex'
import product from './modules/product';
import cart from './modules/cart';

export default createStore({
  modules: {
    product,
    cart
  }
})

Save the file, then exit the text editor. You have now created the methods needed for state management and have created the store for your shopping cart. Next you will create user interface (UI) components to consume the data.

Step 4 — Creating Interface Components

Now that you have the store for your shopping cart set up, you can move onto making the components for the user interface (UI). This will include making some changes to the router and making front-end components for your navigation bar and list and item views of your products and your cart.

First, you will update your vue-router setup. Remember that when you used the Vue CLI tool to scaffold your application, you chose the router option, which allowed Vue to automatically set up the router for you. Now you can re-configure the router to provide paths for Cart_List.vue and Product_List.vue, which are Vue components you will make later.
Open up the router file with the following command:

- nano vuex-shopping-cart/src/router/index.js

Add the following highlighted lines:

vuex-shopping-cart/src/router/index.js
import { createRouter, createWebHashHistory } from 'vue-router'
import CartList from '../components/cart/Cart_List.vue';
import ProductList from '../components/product/Product_List.vue';

const routes = [
  { path: '/inventory', component: ProductList },
  { path: '/cart', component: CartList },
  { path: '/', redirect: '/inventory' },
]

const router = createRouter({
  history: createWebHashHistory(),
  routes
})

export default router

This creates the /inventory route for your products and the /cart route for the items in your cart. It also redirects your root path / to the product view. Once you have added this code, save and close the file.

Now you can set up your UI component directories. Run this command on your terminal to move to the component's directory:

- cd src/components

Run this command to create three new sub-folders under the component's directory:

- mkdir core cart product

core will hold essential parts of your application, such as the navigation bar. cart and product will hold the item and list views of the shopping cart and the total inventory.

Under the core directory, create the Navbar.vue file by running this command:

- touch core/Navbar.vue

Under the cart directory, create the files Cart_List_Item.vue and Cart_List.vue:

- touch cart/Cart_List_Item.vue cart/Cart_List.vue

Finally, under the product directory, create these two files:

- touch product/Product_List_Item.vue product/Product_List.vue

Now that the file structure has been outlined, you can move on to creating the individual components of your front-end app.

Navbar Component

In the navbar, the cart navigation link will display the quantity of items in your cart. You will use the Vuex mapGetters helper method to directly map store getters with component computed properties, allowing your app to get this data from the store's getters to the Navbar component.
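Conceptually, the dispatch-commit-getter cycle that the Navbar relies on can be sketched without Vuex at all. This hand-rolled miniature store is an illustration only (the real app uses Vuex's createStore), but it shows the same one-way data flow:

```javascript
// Miniature one-way data flow, mirroring Vuex: a component dispatches an
// action, the action commits a mutation, and getters read the new state.
// Illustration only; the real app wires these pieces through createStore.
const state = { cartItems: [] };

const mutations = {
  UPDATE_CART_ITEMS(state, payload) {
    state.cartItems = payload;
  },
};

const actions = {
  // In the real app this would be an axios GET to /api/cart.
  getCartItems({ commit }) {
    const fakeServerResponse = [{ id: 2, quantity: 1 }, { id: 3, quantity: 1 }];
    commit('UPDATE_CART_ITEMS', fakeServerResponse);
  },
};

const getters = {
  cartQuantity: state =>
    state.cartItems.reduce((acc, item) => acc + item.quantity, 0),
};

// A minimal "store" that wires the three layers together:
const store = {
  dispatch(name) {
    actions[name]({ commit: (type, payload) => mutations[type](state, payload) });
  },
  get(name) {
    return getters[name](state);
  },
};

store.dispatch('getCartItems');          // what Navbar's created() hook does
console.log(store.get('cartQuantity'));  // 2
```

The key property is that components never write state directly; they dispatch actions, and only mutations touch the state object.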
Open the navbar file:

- nano core/Navbar.vue

Replace the code with the following:

vuex-shopping-cart/src/components/core/Navbar.vue
<template>
  <nav class="navbar" role="navigation" aria-label="main navigation">
    <div class="navbar-brand">
      <div class="navbar-menu">
        <div class="navbar-item">
          <div class="buttons">
            <router-link to="/inventory" class="button is-light">
              <strong>Inventory</strong>
            </router-link>
            <router-link to="/cart" class="button is-light">
              <p>
                Total cart items:
                <span>{{ cartQuantity }}</span>
              </p>
            </router-link>
          </div>
        </div>
      </div>
    </div>
  </nav>
</template>

<script>
import { mapGetters } from "vuex"

export default {
  name: "Navbar",
  computed: {
    ...mapGetters([
      'cartQuantity'
    ])
  },
  created() {
    this.$store.dispatch("getCartItems");
  }
}
</script>

As a Vue component, this file starts out with a template element, which holds the HTML for the component. This snippet includes multiple navbar classes that use pre-made styles from the Bulma CSS framework. For more information, check out the Bulma documentation. This also uses the router-link elements to connect the app to your products and cart, and uses cartQuantity as a computed property to dynamically keep track of the number of items in your cart.

The JavaScript is held in the script element, which also handles state management and exports the component. The getCartItems action gets dispatched when the navbar component is created, updating the store state with all the cart items from the response data received from the server. After this, the store getters recompute their return values and the cartQuantity gets rendered in the template. Without dispatching the getCartItems action on the created life cycle hook, the value of cartQuantity will be 0 until the store state is modified.

Save and close the file.

Product_List Component

This component is the parent to the Product_List_Item component. It will be responsible for passing down the product items as props to the Product_List_Item (child) component.
First, open up the file:

- nano product/Product_List.vue

Update Product_List.vue as follows:

vuex-shopping-cart/src/components/product/Product_List.vue
<template>
  <div class="container is-fluid">
    <div class="tile is-ancestor">
      <div class="tile is-parent" v-for="productItem in productItems" :key="productItem.id">
        <ProductListItem :product-item="productItem" />
      </div>
    </div>
  </div>
</template>

<script>
import { mapGetters } from 'vuex';
import Product_List_Item from './Product_List_Item'

export default {
  name: "ProductList",
  components: {
    ProductListItem: Product_List_Item
  },
  computed: {
    ...mapGetters([
      'productItems'
    ])
  },
  created() {
    this.$store.dispatch('getProductItems');
  }
};
</script>

Similar to the Navbar component logic discussed earlier, here the Vuex mapGetters helper method directly maps store getters with component computed properties to get the productItems data from the store. The getProductItems action gets dispatched when the ProductList component is created, updating the store state with all the product items from the response data received from the server. After this, the store getters recompute their return values and the productItems get rendered in the template. Without dispatching the getProductItems action on the created life cycle hook, there would be no product items displayed in the template until the store state is modified.

Product_List_Item Component

This component will be the direct child component to the Product_List component. It will receive the productItem data as props from its parent and render them in the template.
Open Product_List_Item.vue:

- nano product/Product_List_Item.vue

Then add the following code:

vuex-shopping-cart/src/components/product/Product_List_Item.vue
<template>
  <div class="card">
    <div class="card-content">
      <div class="content">
        <h4>{{ productItem.title }}</h4>
        <a class="button is-rounded is-pulled-left" @click="addCartItem(productItem)">
          <strong>Add to Cart</strong>
        </a>
        <br />
        <p class="mt-4">
          {{ productItem.description }}
        </p>
      </div>
      <div class="media">
        <div class="media-content">
          <p class="title is-6">{{ productItem.owner }}</p>
          <p class="subtitle is-7">{{ productItem.email }}</p>
        </div>
        <div class="media-right">
          <a class="button is-primary is-light">
            <strong>$ {{ productItem.price }}</strong>
          </a>
        </div>
      </div>
    </div>
  </div>
</template>

<script>
import { mapActions } from 'vuex'

export default {
  name: "ProductListItem",
  props: ["productItem"],
  methods: {
    ...mapActions(["addCartItem"]),
  },
};
</script>

In addition to the mapGetters helper function used in the previous components, Vuex also provides you with the mapActions helper function to directly map component methods to the store's actions. In this case, you use the mapActions helper function to map the component method to the addCartItem action in the store. Now you can add items to the cart.

Save and close the file.

Cart_List Component

This component is responsible for displaying all the product items added to the cart and also the removal of all the items from the cart.

To create this component, first open the file:

- nano cart/Cart_List.vue

Next, update Cart_List.vue as follows:

vuex-shopping-cart/src/components/cart/Cart_List.vue
<template>
  <div id="cart">
    <div class="cart--header has-text-centered">
      <i class="fa fa-2x fa-shopping-cart"></i>
    </div>
    <p v-if="!cartItems.length" class="has-text-centered">
      Add some items to the cart!
</p> <ul> <li class="cart-item" v- <CartListItem : </li> <div class="notification is-success"> <button class="delete"></button> <p> Total Quantity: <span class="has-text-weight-bold">{{ cartQuantity }}</span> </p> </div> <br> </ul> <div class="buttons"> <button : Checkout (<span class="has-text-weight-bold">${{ cartTotal }}</span>) </button> <button class="button is-danger is-outlined" @ <span>Delete All items</span> <span class="icon is-small"> <i class="fas fa-times"></i> </span> </button> </div> </div> </template> <script> import { mapGetters, mapActions } from "vuex"; import CartListItem from "./Cart_List_Item"; export default { name: "CartList", components: { CartListItem }, computed: { ...mapGetters(["cartItems", "cartTotal", "cartQuantity"]), }, created() { this.$store.dispatch("getCartItems"); }, methods: { ...mapActions(["removeAllCartItems"]), } }; </script> This code uses a v-if statement in the template to conditionally render a message if the cart is empty. Otherwise, it iterates through the store of cart items and renders them to the page. You also loaded in the cartItems, cartTotal, and cartQuantity getters to compute the data properties, and brought in the removeAllCartItems action. Save and close the file. Cart_List_Item Component This component is the direct child component of the Cart_List component. It receives the cartItem data as props from its parent and renders them in the template. It is also responsible for incrementing and decrementing the quantity of items in the cart. 
Open up the file:

- nano cart/Cart_List_Item.vue

Update Cart_List_Item.vue as follows:

vuex-shopping-cart/src/components/cart/Cart_List_Item.vue

```
<template>
  <div class="box">
    <div class="cart-item__details">
      <p class="is-inline">{{cartItem.title}}</p>
      <div>
        <span class="cart-item--price has-text-info has-text-weight-bold">
          ${{cartItem.price}} X {{cartItem.quantity}}
        </span>
        <span>
          <i class="fa fa-arrow-circle-up cart-item__modify" @click="addCartItem(cartItem)"></i>
          <i class="fa fa-arrow-circle-down cart-item__modify" @click="removeCartItem(cartItem)"></i>
        </span>
      </div>
    </div>
  </div>
</template>

<script>
import { mapActions } from 'vuex';

export default {
  name: 'CartListItem',
  props: ['cartItem'],
  methods: {
    ...mapActions([
      'addCartItem',
      'removeCartItem'
    ])
  }
}
</script>
```

Here, you are using the mapActions helper function to map the component methods to the addCartItem and removeCartItem actions in the store.

Save and close the file.

Lastly, you will update the App.vue file to bring these components into your app. First, move back to the root folder of your project and open the file App.vue.

Replace the contents with the following code:

vuex-shopping-cart/src/App.vue

```
<template>
  <div>
    <Navbar/>
    <div class="container mt-6">
      <div class="columns">
        <div class="column is-12 column--align-center">
          <router-view></router-view>
        </div>
      </div>
    </div>
  </div>
</template>

<script>
import Navbar from './components/core/Navbar'

export default {
  name: 'App',
  components: {
    Navbar
  }
}
</script>

<style>
html,
body {
  height: 100%;
  background: #f2f6fa;
}
</style>
```

App.vue is the root of your application, defined in the Vue single-file component format. Once you have made the changes, save and close the file.

In this step, you set up the frontend of your shopping cart app by creating components for the navigation bar, the product inventory, and the shopping cart. You also used the store actions and getters that you created in a previous step. Next, you will get your application up and running.
Step 5 — Running the Application

Now that your app is ready, you can start the development server and try out the final product. Run the development server command in the root of your front-end project. This will start a development server that allows you to view your app in the browser. Also, make sure that your backend is running in a separate terminal by starting it from your cart-backend project.

Once your backend and your frontend are running, navigate to the development server's address in your browser. You will find your functioning shopping cart application.

Conclusion

In this tutorial, you built an online shopping cart app using Vue.js and Vuex for data management. These techniques can be reused to form the basis of an e-commerce shopping application. If you would like to learn more about Vue.js, check out our Vue.js topic page.
WO2000057315A2 - Extended File System (PCT/US2000/007973)

EXTENDED FILE SYSTEM

FIELD OF THE INVENTION

The present invention relates generally to computer devices and networks, and more particularly to file storage and access by a computer-related device.

BACKGROUND OF THE INVENTION

Consumer devices such as Pocket PCs or palm-sized and handheld computers are limited in their available storage space. These devices are capable of loading and executing software packages in much the same way as a desktop computer, but lack the storage necessary to have several of these packages loaded onto the system concurrently along with other data needed by a user. Other devices such as cable television set-top boxes, satellite receivers and so forth have the same lack-of-memory problems. As access to the Internet via such devices is being planned and to some extent implemented, the lack of storage on the devices creates problems not seen in home or business computers. For example, personal site customizations, favorites, saved data such as credit card information, cookies and so forth are typically stored on computing devices having relatively large hard disks, wherein storage is not normally an issue. E-mail files on a device such as a single set-top box will differ for (possibly multiple) individual users of that device. However, saving such data along with other needed information would quickly fill up the available storage on many devices, and if, for example, a relatively large file were downloaded to the device, the saved data would have to be discarded in order to fit the large file.
Indeed, in at least one contemporary cable television set-top box, only 128 kilobytes are available for persisting user data, which is several orders of magnitude smaller than the hundreds of megabytes to dozens of gigabytes typically provided by contemporary personal computers. Contemporary pocket-size devices have somewhat more memory, but are still on the order of tens of megabytes or less, of which the operating system and stored programs consume a considerable amount. While network shares allow greater amounts of storage to be accessed via remote drive connections, their implementations require a constant connection to the network in order to access a network share. Among other drawbacks, this makes network shares unsuitable for use with the Internet. For example, NetBIOS and other drive-sharing (redirector) systems currently require constant communication between the server and the client. Data is not cached, but instead is used directly off the shared file system, and is updated immediately. This is not acceptable for Internet-based file sharing, as the Internet is unreliable, and can be susceptible to long delays in transmission. The NetBIOS service and SMB protocol are also point-to-point, relatively heavy, and do not scale well to large numbers of remote users and multiple servers. Other existing services are unable and/or impractical to provide a solution to these low memory problems.

SUMMARY OF THE INVENTION

Briefly, the present invention provides a method and system for transparently combining remote and local storage to act as one or more virtual local drives for a computer system client, such as a pocket-sized personal computer or a set-top box. When a connection to an extended file system server is present, the extended file system provides automatic downloading of information that is not locally cached, and automatic uploading of information that has been modified on the client.
Providing such a remote drive allows any client device to load file system objects, storing the directories and files remotely, and retrieving the files only when required. Via its local storage, the extended file system handles unreliable connections and delays, particularly with small files such as cookies, e-mail text and so forth. To provide the extended file system, the client includes components that determine via object attributes the remote/local location of file system data, and when appropriate, download or upload the data in a manner that is transparent from the perspective of the application. Thus, an application makes normal file / operating system application programming calls or the like, and the client components determine the source and retrieve / update the data appropriately. Data that is updated (e.g., written) locally is automatically synchronized with the remote server. Moreover, communication is fast by use of a relatively lightweight protocol using straightforward primitives described herein, and may be made secure via authentication and encryption. The system scales to large networks as it employs the lightweight protocol and establishes a connection only to retrieve and submit data. Other advantages will become apparent from the following detailed description when taken in conjunction with the drawings, in which:

BRIEF DESCRIPTION OF THE DRAWINGS

FIGURE 1 is a block diagram representing one exemplary computer system into which the present invention may be incorporated;

FIG. 2 is a block diagram representing a television set-top box including a computer system into which the present invention may be incorporated;

FIG. 3 is a block diagram generally representing an extended file system installation in accordance with one aspect of the present invention;

FIG. 4 is a block diagram generally representing logical components in a client and server for remotely accessing objects in accordance with one aspect of the present invention;

FIG. 5 is a flow diagram generally representing logical steps when enlisting a server to participate in an extended file system in accordance with one aspect of the present invention;

FIG. 6 is a flow diagram generally representing logical steps when defecting a server from participation in an extended file system in accordance with one aspect of the present invention;

FIG. 7 is a representation of communications between a client device and a server to initiate access to remote objects and perform file system-related operations thereto in accordance with one aspect of the present invention;

FIG. 8 is a flow diagram generally representing logical steps when enlisting a client to participate in an extended file system in accordance with one aspect of the present invention;

FIG. 9 is a flow diagram generally representing logical steps when a client attempts to locate a selected server for accessing an extended file system in accordance with one aspect of the present invention;

FIGS. 10-12 are representations of how the client components access local objects locally and remote objects remotely in accordance with one aspect of the present invention; and

FIG. 13 is a flow diagram generally representing logical steps when determining the source of an object in accordance with one aspect of the present invention.

DETAILED DESCRIPTION

EXEMPLARY OPERATING ENVIRONMENTS

FIGURE 1 and the following discussion are intended to provide a brief, general description of one suitable computing environment in which the invention may be implemented. Although not required, the invention will be described in the general context of computer-executable instructions, such as program modules, in one alternative being executed by a pocket-sized computing device such as a personal desktop assistant.
Generally, program modules include routines, programs, objects, components, data structures and the like that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including hand-held, laptop or desktop personal computers, mobile devices such as pagers and telephones, multi-processor systems, microprocessor-based or programmable consumer electronics including a cable or satellite set-top box (FIG. 2), network PCs, minicomputers, mainframe computers and the like. Part of the invention is also practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices, as described below. With reference to FIG. 1, one exemplary system for implementing the invention includes a general purpose computing device in the form of a pocket-sized personal computing device 20, with memory including ROM 24 and RAM 25, the RAM typically being non-volatile (e.g., battery-backed up) in a pocket-sized personal computing device. A basic input/output system 26 (BIOS), containing the basic routines that help to transfer information between elements within the hand-held computer 20, such as during start-up, is stored in the ROM 24. A number of program modules are stored in the ROM 24 and/or RAM 25, including an operating system 28 (such as Windows® CE), one or more application programs 29, other program modules 30, program data 31 and a file system manager 32. In accordance with one aspect of the present invention, a virtual local drive using local memory is provided by an XFS client component 33, which includes an XFS Ramdisk manager and storage 34 (XFSDISK), and other components (described below).
A user may enter commands and information into the handheld computer 20 through input devices such as a touch-sensitive display screen 35 with suitable input detection circuitry 36. Other input devices may include a microphone 37 connected through a suitable audio interface 38 and a physical (hardware) or logical keyboard (not shown). Additional other devices (not shown), such as LED displays or other peripheral devices controlled by the computer, may be included. The output circuitry of the touch-sensitive display 35 is also connected to the system bus 23 via video driving circuitry 39. In addition to the display 35, the device may include other peripheral output devices, such as at least one speaker 40 and printers (not shown). Other external input or output devices 42 such as a joystick, game pad, satellite dish, modem or the like (satellite, cable or DSL interface), scanner or the like may be connected to the processing unit 21 through an RS-232 or the like serial port 40 and serial port interface 41 that is coupled to the system bus 23, but may be connected by other interfaces, such as a parallel port, game port or universal serial bus (USB). Such devices may also be internal. The hand-held device 20 may further include or be capable of connecting to a flash card memory (not shown) through an appropriate connection port (e.g., slot) 43 and interface 44. A number of hardware buttons 45 such as switches, buttons (e.g., for switching applications) and the like may be further provided to facilitate user operation of the device 20, and are also connected to the system via a suitable interface 46. An infrared port 47 and corresponding interface/driver 48 are provided to facilitate communication with other peripheral devices 49, including other computers, network connection mechanisms (e.g., modems or the like), printers, and so on (not shown).
It will be appreciated that the various components and connections shown are exemplary and other components and means of establishing communications links may be used. Turning to FIG. 2 of the drawings, there is shown an alternate computer system into which the present invention may be incorporated, implemented in a set-top box 54 connected to a television receiver / monitor 56. In FIG. 2, an application 58 which may, for example, provide a user interface configured to control set-up, parental control, tuning, timed operation, and/or the like is provided. The application may also provide a user interface via which a user is able to access the Internet, and may include a browser, although as is known, the browser may be integrated into the operating system 60 of the set-top box 54. A user interacts with the application 58 and/or operating system 60 (such as Windows® CE) via a user input device 62 (such as an attached keypad, infrared remote control and/or hardwired keyboard) and suitable device interface 64. As is known, one of the functions of a contemporary set-top box 54 is to output to the receiver / monitor 56 television programming and Internet content received from a provider 66. To this end, some signal processing mechanism 68 or the like is generally provided, such as including one or more splitters, filters, multiplexers, demultiplexers, mixers, tuners and so forth as required to output appropriate video to the receiver / monitor 56, and to both output and input Internet-related data via a cable / satellite modem 70. Of course, consumer satellite dishes only receive content, and thus in a satellite system an additional mechanism (e.g., telephone line, not shown) is required to output data to the provider 66. Other components 72 such as to display closed-captioning, allow parental control, provide onscreen program guides, control video recorders and so forth may be provided as is also known. 
In any event, these functions of set-top boxes are known, and are not described herein for purposes of simplicity, except to the extent that they relate to the extended file system of the present invention.

EXTENDED FILE SYSTEM

In accordance with one aspect of the present invention, to provide access to remote client-owned objects (directories and/or files therein) maintained in remote storage 74 by one or more XFS file servers 76, the set-top box includes (e.g., in system memory) an XFS client 33 comprising a number of components (described below) including the XFS Ramdisk manager / virtual local drive 34. A file system manager 32 is also provided, as described below. For example, in the Windows® CE operating system, a suitable file system manager is known as "FSDMGR." An exemplary extended file system (XFS) installation is represented in FIG. 3, and typically comprises a large number (e.g., millions) of client devices 801-80n (for example, the pocket computing device 20 or the set-top box 54). The client devices 801-80n are capable of connecting to one or more of the servers (761-76m in FIG. 3) over a network 84 via a service provider 86. The servers 761-76m participate in XFS as name servers, access controllers and permission managers, or a combination of access controller, permission manager and name server as described below with reference to FIG. 4. The servers 761-76m (more particularly the access controllers) point to a common remote file system for storing files in one or more XFS storage devices 74 implemented using DFS shares. DFS is a feature of Windows® 2000 (or Windows® NT®) that provides file replication (used for providing redundancy of data) and load balancing for a file system. In one preferred implementation, the remote file system is the Windows® NTFS file system, which among other benefits, is considered secure.
As will be understood, however, the XFS file system of the client is independent of the remote file system / server configuration, and thus virtually any operating and/or file system (e.g., UNIX, FAT, FAT32) or combination thereof that works with the server-side storage media 74 will suffice for purposes of the present invention. In the set-top box implementation, the client devices 54 will normally be physically connected to the servers 761-76m at all times via the cable / satellite modem 70 therein. Indeed, since broadband is in use, remote files may be quickly accessed by the client, as described below, even though logical connections are preferably made on a per-access basis. In keeping with the present invention, however, the client device provides local storage for caching some of the data maintained at the remote storage device 74, thereby enabling operation without a physical connection. Synchronization may be performed at some later time or on demand. As can be appreciated, this is particularly useful with client devices such as pocket-sized computing devices (e.g., 20), digital cameras, and so forth wherein a physical connection is occasional. Moreover, local caching is generally valuable when dealing with Internet content, as even when physically connected to a provider, the Internet is unreliable and can be susceptible to long delays in transmission; caching also helps in optimizing bandwidth utilization. As generally represented in FIG. 4, the extended file system (XFS) comprises the XFS-Client portion 33 and an XFS-Server portion 92, which together generally include the XFS Ramdisk manager / virtual local drive 34 and other components 94-102 (described below). Note that the various components 94-102 are logical components, and it is likely that several of the components may be integrated into and handled by a single program. For example, the XFS server portion 92 may comprise a single physical component servicing the requests for its logical components.
For extremely large installations, however, it may be desirable for the components to be implemented separately for scalability reasons. Similarly, the virtual local drive of XFS (managed by the XFSDISK 34) may be at any physical or virtual location or locations in system memory, not necessarily adjoining or within the memory allocated to the other XFS client components. The XFSDISK RAMdisk manager 34 that provides the virtual local drive is a complete, thread-safe implementation of a stream interface driver (as defined in the "Windows® CE DDK," available from Microsoft® Corporation, Redmond, Washington). The XFSDISK 34 is loaded at boot time, and is configured based on information provided in the system registry. The XFSDISK 34 is capable of loading a file system device on itself, thereby appearing as an actual folder off of the root folder of a hierarchically organized file system. To provide accessible memory, the XFSDISK 34 creates a specified number of heaps of a specified size and then "stitches" them together to give the appearance of a single, contiguous, addressable block of memory which serves as a local cache of the virtual local drive. This address space is shared by the threads and processes which access XFSDISK, either through the associated file system device (e.g., the file system manager 32) or by directly reading from or writing to the disk locations. XFSDISK serves as the local cache for the remote file system of the present invention. Two XFS-Client 33 components include the XFS Client Interface (XFSCLNT) 94 and the XFS File System Driver (XFSFSD) 96. The XFS Client Interface 94 is the interface to the XFS Server 92, and is responsible for translating file system requests into XFS primitives (XFS network functions) and marshaling the primitives across to the server. As will be described below, the XFS Client Interface (XFSCLNT) 94 performs initialization operations.
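The heap "stitching" that XFSDISK performs, described earlier in this section, amounts to presenting several fixed-size heaps behind a single logical offset space by translating each offset into a (heap, offset-within-heap) pair. The following is an illustrative sketch of that idea, not the patent's implementation; the class and method names are invented for the example.

```python
# Sketch of XFSDISK-style heap "stitching": several fixed-size heaps are
# presented as one contiguous, addressable block of memory.

class StitchedDisk:
    def __init__(self, heap_count, heap_size):
        self.heap_size = heap_size
        self.heaps = [bytearray(heap_size) for _ in range(heap_count)]

    @property
    def capacity(self):
        return len(self.heaps) * self.heap_size

    def _locate(self, offset):
        # Translate a logical disk offset into (heap index, offset in heap).
        return divmod(offset, self.heap_size)

    def write(self, offset, data):
        # A write may span heap boundaries; place each byte in its heap.
        for i, b in enumerate(data):
            h, o = self._locate(offset + i)
            self.heaps[h][o] = b

    def read(self, offset, length):
        return bytes(self.heaps[self._locate(offset + i)[0]]
                     [self._locate(offset + i)[1]] for i in range(length))

disk = StitchedDisk(heap_count=4, heap_size=16)
disk.write(14, b"hello")          # spans the boundary between heaps 0 and 1
assert disk.read(14, 5) == b"hello"
```

A caller addressing the disk never sees the individual heaps, which is the property XFSDISK relies on to share the address space among threads and processes.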
The XFS File System Driver (XFSFSD) 96 is an installable file system driver, which in one implementation is modeled after the well-documented FAT file system. In keeping with the present invention, a remotely maintained file system is presented as a local file system through XFSFSD 96. As the local disk 33 fills up, the XFSFSD 96 implements a Least Recently Used (LRU) algorithm to make space available. As described below, if it is not possible to make space, the files presented as available in the local file system are marked as remote and for those files, the file system essentially behaves like a redirector. The local cache of files is thus intelligently managed. The XFS server portion 92 includes an XFS Access Controller 98, an XFS Permissions manager 100, and an XFS Name Resolution Manager (name services module) 102. The access controller 98 is responsible for receiving primitives from the client and taking actions on them, although when the access controller 98 receives name- server primitives, it routes them to name services module 102. As described below, the access controller 98 translates primitives to appropriate actions to be taken on the file system and sends the response back to the client. The Permissions manager 100 is responsible for authenticating clients and users on the clients. Having authenticated the client, and a specified user, the permissions manager 100 provides access to the private folder for a given client. This is done as a part of PRIMITIVE_CALL, described below. The permissions manager 100 may use the standard X509-based authentication scheme for validating clients. In addition to validating client devices, the permissions manager 100 enables multiple users of a common device (e.g., a single set-top box) to share the same device while isolating the files of one user from each other user. 
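The LRU-based cache management that XFSFSD performs, described above, can be sketched in a few lines: when the local cache is full, the least recently used file is evicted and marked remote, so a later access to it falls back to a redirector-style fetch from the server. This is an illustrative sketch (the patent does not give the data structures, and the names here are invented).

```python
from collections import OrderedDict

# Sketch of a local cache that evicts the least recently used file when
# space runs out; an evicted file is marked "remote" rather than forgotten.

class LocalFileCache:
    def __init__(self, max_files):
        self.max_files = max_files
        self.cached = OrderedDict()   # name -> data, kept in LRU order
        self.remote = set()           # names known but not cached locally

    def access(self, name, fetch_remote):
        if name in self.cached:
            self.cached.move_to_end(name)      # mark as recently used
            return self.cached[name]
        data = fetch_remote(name)              # redirector-like behavior
        self.remote.discard(name)
        self.cached[name] = data
        if len(self.cached) > self.max_files:  # make space via LRU
            victim, _ = self.cached.popitem(last=False)
            self.remote.add(victim)
        return data

cache = LocalFileCache(max_files=2)
fetch = lambda name: b"data:" + name.encode()
for name in ("a", "b", "a", "c"):
    cache.access(name, fetch)
assert "b" in cache.remote and "b" not in cache.cached
```

After the four accesses, "b" is the least recently used file, so it is the one marked remote when "c" is cached.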
SQL-based authentication, the Windows® 2000 Active Directory that specifies domain users or any custom authentication scheme may be used for authentication. The name services module 102 provides enlistment and name resolution services, as also described below, by maintaining (e.g., in the local server registry) a local directory of the name servers and access controllers. To enlist, when a server starts up, it sends a UDP broadcast of an enlistment request as described below. If the server gets an appropriate response from one of the other servers, it then sends a directed enlistment for confirming the entries, after which the local directory is synchronized via a directed resolve. The process of sending resolves across to known servers is done at periodic intervals of time to ensure that any server that is added is reflected in the local directory. The name services module 102 also handles defection (withdrawal from participation) of servers. When a defection is initiated for a specific server, the name services module 102 sends directed defects to the other servers in the local directory. Once the other servers have acknowledged the deletion of the defecting server, no more requests are processed. For the purpose of XFS communications, there are three specific sets of network functions, called primitives: a set of name resolution primitives, which are UDP/TCP packets used to locate XFS components on the network; a set of control primitives, which are UDP/TCP packets used for management of the XFS system; and a set of session primitives, which are TCP streams used to transfer data among XFS components. Session primitives are conducted on TCP connections from machine to machine. TCP provides a minimal Quality of Service (QoS) scenario for the connection. Primitives have two distinct states, request and response. Thus, for example, the response to a Resolve request will be a Resolve response.
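One way to picture the request/response framing of a primitive is as a small typed header followed by a payload. The patent does not specify a field layout, so the format below (one byte for the primitive type, one byte for the request/response state, a two-byte payload length) is purely an assumption for the sketch; only the 512-byte UDP limit comes from the specification text.

```python
import struct

# Illustrative wire format for an XFS primitive: the field layout here is
# an assumption; the 512-byte UDP size limit is stated in the specification.

REQUEST, RESPONSE = 0, 1
ENLIST, DEFECT, RESOLVE, LOCATE = 1, 2, 3, 4
MAX_UDP_PRIMITIVE = 512

def encode_primitive(ptype, state, payload=b""):
    packet = struct.pack("!BBH", ptype, state, len(payload)) + payload
    if len(packet) > MAX_UDP_PRIMITIVE:
        raise ValueError("primitive exceeds 512-byte UDP limit")
    return packet

def decode_primitive(packet):
    ptype, state, length = struct.unpack("!BBH", packet[:4])
    return ptype, state, packet[4:4 + length]

# A Resolve request is answered by a Resolve response of the same type.
req = encode_primitive(RESOLVE, REQUEST, b"XFS-NS")
assert decode_primitive(req) == (RESOLVE, REQUEST, b"XFS-NS")
resp = encode_primitive(RESOLVE, RESPONSE, b"10.0.0.1,10.0.0.2")
assert decode_primitive(resp)[1] == RESPONSE
```

The type-plus-state pairing captures the rule that every primitive exists in both a request form and a matching response form.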
The maximum size for a primitive is 512 bytes for UDP-transported primitives and 1024 bytes for TCP-transported primitives. One control primitive is the Enlist primitive, which is used to enlist clients (as described below), and also by servers that are attempting to participate in an XFS installation. A field in the primitive identifies whether a client or server sent the Enlist request. More particularly, to enlist a server, an XFS server (e.g., 763) sends an Enlist primitive to notify the name servers (XFS-NS) that it wants to begin participation in the XFS system. The server 763 does not begin processing requests until it has received an Enlist response primitive from the name services module of at least one other server. After receiving an Enlist response primitive, the XFS server 763 may begin processing requests; however, it should continue to send Enlist primitives until it has received an Enlist response primitive from every name services module 102 server participating on the system. Servers (as well as clients) should maintain lists of resolved server IPs, and preferably update the list in a Time To Live (TTL) manner, where TTL denotes the amount of time that a piece of data may be considered valid before it must be verified. It is recommended that TTLs be no less than 256 seconds for each XFS-NS, and 128 seconds for other servers. In the event that no XFS-NS can be located to resolve requests, the list should be invalidated, and an Enlist primitive should be sent via UDP broadcast to retrieve the network topography. After the first Enlist response, the name services module of the server 763 should send its Enlist requests to unresponsive XFS-NS servers directly, instead of broadcasting the requests on the network. This will help to reduce network traffic and avoid responses from XFS-NS servers which have already responded to the earlier Enlist request. For the server control primitive "Enlist," the logical flow generally described with reference to FIG. 5 should be used to minimize network traffic. As represented in FIG. 5, beginning at step 500, a server sends the Enlist request primitive via a UDP broadcast. This is necessary because the server has no idea as to the locations of XFS-NSs on the network. The server then provides some time duration for responses, as generally represented via step 502. For each response received, (if any), at step 504 the server records the IP address of the responding server. In general, UDP-transported primitives expect a UDP response verifying their transmission; if no UDP response is received within a reasonable amount of time, the primitive send is considered to have failed, and should be re-issued some number of times before considering the primitive to have failed completely. Thus, when the time for waiting is over, step 506 tests whether any responses were received and, if not, branches back to step 500 to reissue the Enlist request primitive via UDP broadcast. If at least one response was received, step 506 branches to step 508, where the number of responding servers is compared with the reported number of XFS-NS servers; if they match, the enlistment process ends. If the number responding is less than the reported number, step 508 branches to step 510, wherein a Resolve request for servers of type XFS-NS is sent to one of the at least one known XFS-NS. When the response is received, step 512 sends a UDP directed (i.e., non-broadcast) Enlist request to each XFS-NS which did not respond to the broadcast request. Step 514 saves the IP addresses for servers that respond to the Enlist requests. Note that some wait time (not represented in FIG. 5) to obtain the responses may be provided between steps 510 and 512, and between steps 512 and 514. Note that as long as at least one XFS-NS has responded, the server should begin processing requests, except in the case that the enlisting server is itself an XFS-NS.
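The FIG. 5 enlistment flow just described can be condensed into a single decision procedure: broadcast until someone answers, Resolve the full name-server list from a responder, then send directed Enlists to the silent servers. The sketch below factors the network I/O out into caller-supplied functions so only the decision logic is shown; the function and parameter names are invented for the example.

```python
# Sketch of the FIG. 5 enlistment decision logic as a pure function; the
# UDP broadcast, directed sends, and Resolve exchange are supplied by the
# caller, so no real network I/O happens here.

def enlist(broadcast, directed, resolve, max_retries=3):
    # Steps 500-504: broadcast until at least one XFS-NS responds.
    responders = []
    for _ in range(max_retries):
        responders = broadcast()          # list of responding XFS-NS IPs
        if responders:
            break
    if not responders:
        return None                       # enlistment failed completely
    # Step 510: ask a known XFS-NS for the full list of XFS-NS servers.
    all_ns = resolve(responders[0])
    # Steps 512-514: directed (non-broadcast) Enlists to the silent ones.
    for ip in all_ns:
        if ip not in responders and directed(ip):
            responders.append(ip)
    return responders

# Simulated network: three XFS-NS servers, only one answers the broadcast.
ns = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
result = enlist(
    broadcast=lambda: ["10.0.0.1"],
    directed=lambda ip: True,
    resolve=lambda ip: list(ns),
)
assert sorted(result) == ns
```

Keeping the transport behind function parameters mirrors the text's separation between the flow (FIG. 5) and the UDP mechanics.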
The server is to complete enlistment with the other XFS-NSs, and system implementers should strive to ensure that enlistment will be completed even in the case of server and/or network outages. To withdraw from participation, an XFS server (e.g., 762) sends a Defect primitive to notify the XFS-NS that it no longer wishes to participate in the XFS system. Note that defection is not intended for temporary removal of the server from the XFS system, but rather is used to remove a server from the XFS for extended or indefinite periods. As described below, the name resolution primitive "Locate" will be used to determine server availability. Further, note that the server may quit responding to XFS name resolution and session primitives at this time, but is not to shut down until a Defect response primitive is received from each of the known XFS-NSs in the system. For the server control primitive "Defect," the logical flow generally described in FIG. 6 should be used to minimize network traffic. In FIG. 6, beginning at step 600, a server sends the Defect request primitive via a UDP broadcast. After some time (step 602), the server normally receives a number of Defect responses from the XFS-NS servers (step 604). At step 606, if the number of responses matches the number of known XFS-NS servers, the defection process ends. Otherwise, (i.e., the number is less than the total number of known XFS-NS servers), step 606 branches to step 608 to send a UDP-directed (non-broadcast) Defect request to each XFS-NS which did not respond to the broadcast request, and then record the IP address of each responding XFS-NS at step 610. Note that until all known XFS-NSs have responded, the server should continue processing requests (step 612), i.e., the server is to complete defection with each XFS-NS. System implementers are to ensure that the defection will be completed, even in the case of server and/or network outages. Turning to an explanation of the flow of information between one client 80 and one server 76, FIG.
7 shows (via numerically labeled arrows) how and in which direction communication generally takes place. In FIG. 7, it is assumed that the server with which the client 80 is communicating has already enlisted, as described above. As generally represented in FIG. 7, the client sends an enlist request primitive via UDP broadcast, as represented by the arrow (1), although as can be appreciated, this primitive likely reaches other servers, not shown. This is performed because the client 80 has no idea as to the locations of XFS-NSs on the network. The client receives a number of Enlist responses from XFS-NSs, such as an XFS-NS name service module of the server 76 (arrow (2)). The client 80 records the IP address of each server from which an appropriate response was received. In addition to enlistment, any other custom method can be used to identify the XFS server to the client, in which case the client enlistment process can be bypassed. FIG. 8 generally represents the logical flow for client enlistment, (similar in a number of steps to the server enlistment described above with respect to FIG. 5). For the client control primitive "Enlist," as represented in FIG. 8, beginning at step 800, a client 80 sends the enlist request primitive via a UDP broadcast. The client 80 then provides some time duration for responses, as generally represented via step 802. For each response received, (if any), at step 804 the client 80 records the IP address of the responding server. When the wait time is up, step 806 tests if no responses were received, and if not, branches back to step 800 to reissue the enlist request primitive via UDP broadcast, at least for some number of reissue attempts. Alternatively, if at least one response is received, step 806 branches to step 808; if the number of responses equals the reported number, the client has located the full set of servers, and the enlistment process ends via step 808. Note that the XFS-NS will not remember the enlistment of XFS clients.
The client enlistment scenario is only for network topography discovery. Thus, the XFS clients have no need to defect from the system, though it is not considered an error for a client to do so. If at step 808 the number responding does not equal (i.e., is less than) the reported number, step 808 branches to step 810 wherein a resolve primitive (arrow (3) in FIG. 7) is sent to an XFS-NS (one of the at least one known) to request a list of IP addresses of the specified XFS server type participating on the system. Returning to FIG. 7, when the Resolve response is received (arrow (4) in FIG. 7), the client saves the IP addresses for servers from the ResolveResponse data at step 812. The client 80 may select one of the resolved servers (e.g., the server 76) via a random process or the like so that the total load of a set of clients is randomly distributed across multiple servers. A client Locate primitive is then sent by the XFS client 80 to the selected XFS server 76 in order to verify the existence of that server on the network (arrow (5) in FIG. 7), and if it exists, the server responds (arrow (6)). More particularly, prior to establishing a TCP session, an XFS client should perform the logical flow represented in the steps of FIG. 9 described below. At step 900 of FIG. 9, the client selects a first XFS access controller, (e.g., from a randomly-ordered list), and at step 902 sends a Locate request to the selected XFS access controller via UDP/TCP. If at step 904 there is no response, (e.g., within a suitable delay), and if at least one other access controller is listed, (step 906), the client selects the next XFS access controller at step 908 and returns to step 902 to repeat the process. If at step 906 there are no more XFS access controllers in the XFS client's list of servers, the client sends a Resolve request at step 910 to a known XFS-NS to update the list, and after receiving a response, returns to step 902 for at least some number of retry attempts.
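The FIG. 9 selection loop can be sketched as follows. This is a minimal sketch in which the Locate round trips are abstracted into an array of per-controller "responded" flags; the function name and signature are assumptions for illustration only.

```c
#include <assert.h>
#include <stddef.h>

/* Walk a (randomly-ordered) list of access controllers, trying each in turn
 * (steps 900-908), and return the index of the first one that answered its
 * Locate request, or -1 if the list is exhausted - at which point a Resolve
 * to a known XFS-NS would be sent to refresh the list (step 910).
 * alive[i] is nonzero if controller i responded to Locate. */
int select_access_controller(const int *alive, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        if (alive[i])
            return (int)i;   /* step 904: response received */
    }
    return -1;               /* step 910: update the list and retry */
}
```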
In the event that no XFS-NS can be located to respond to the resolve request, (not represented in FIG. 9), the list should be invalidated, and an Enlist primitive should be sent (as described above) via a UDP broadcast to retrieve the network topography. Thus, to summarize, at startup time, the XFS Client Interface (XFSCLNT) 94 of a client 80 sends a client enlistment broadcast on UDP to get the network topology of the servers. Once the enlistment response is received, a directed resolve is sent to the server that responded to enlistment to get a list of access controllers and name servers. Once the client receives a list of name servers and access controllers, the initialization is complete and other primitives can be sent. The other primitives are wrapped in PRIMITIVE_CALL and PRIMITIVE_HANGUP, described below. The session primitives include a call primitive, which initiates a session with a server that is listening. Authentication will be performed during the call, which may include several round trips on the network. For example, the first client call request primitive may include a device id, user-id, user password and a "ticket" (arrow (7) in FIG. 7). The ticket may not be present if it is the first CALL, for instance after power up. The server retrieves credentials from the CALL primitive. If the ticket is not present, the server makes a call into the permissions manager to verify the credentials. If the credentials are not valid, the session is dropped. If the credentials are valid, the server constructs a ticket, which consists of an expiration time, box id, user id, and a password, encrypts the ticket and sends it back to the client. In the case when the ticket is present in the CALL primitive, the server decrypts the ticket and makes sure that the expiration time is greater than the current time and that the box id, user id and password in the decrypted ticket match the credentials passed.
If everything is valid, the same ticket is passed back to the client (e.g., arrow (8) in FIG. 7). Otherwise the credentials are checked against the permissions manager, and if they are valid, a new ticket is generated and passed back to the client. The client caches the ticket and uses it in the future when sending the CALL primitive. The ticket in the described scheme serves as a scalability component, which greatly reduces hits to the authentication mechanism. In order to further decrease hits, the expiration period is set to a random value between predefined minimum and maximum values (typically between 3-8 hours). Additionally, the underlying channel is secured using the standard PKI infrastructure. When a client makes a TCP connection to the server, the client sends over an "Establish secure channel" message. Then, the client sends over its certificate containing its public signature key. The server validates the certificate for trust, and if it finds the certificate is not valid, disconnects. The server then sends back a block of random data. The client computes an MD5 hash of the block of random data, signs the MD5 hash using the client's private signature key, and then sends the signature to the server. The server computes an MD5 hash of the same block of random data. The server validates the signature passed over by the client using the public signature key buried in the client's certificate. If the validation fails, the client is considered an imposter, and the server disconnects. The server encrypts its two secret RC4 keys with the client's public key exchange key and sends over the encrypted RC4 keys - SEND key first and then RECEIVE key. The client decrypts the RC4 keys using its private key exchange key. The client stores the first RC4 key as its RECEIVE key and the second as its SEND key (i.e., opposite of the server). The channel is now secure.
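The server-side ticket check described above can be sketched as follows. The Ticket layout, field names and sizes are assumptions for illustration: the text only states that a ticket consists of an expiration time, box id, user id and password, and that it travels encrypted.

```c
#include <assert.h>
#include <string.h>

/* Illustrative decrypted-ticket layout; field names are assumptions. */
typedef struct {
    unsigned long expires;   /* expiration time                 */
    unsigned long box_id;    /* device/box identifier           */
    unsigned long user_id;   /* user identifier                 */
    char password[16];       /* password captured at issue time */
} Ticket;

/* Returns 1 if the (already decrypted) ticket is still valid for the
 * credentials presented in the CALL primitive, else 0 - in which case the
 * credentials are re-checked against the permissions manager. */
int ticket_valid(const Ticket *t, unsigned long now, unsigned long box_id,
                 unsigned long user_id, const char *password)
{
    if (t->expires <= now)   /* expiration must be greater than current time */
        return 0;
    return t->box_id == box_id &&
           t->user_id == user_id &&
           strncmp(t->password, password, sizeof t->password) == 0;
}
```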
Any data to pass through the secure connection must be encrypted with the SEND key on the client and then decrypted using the RECEIVE key on the server. The same two keys are shared between all clients connecting to a given server. There is a provision that the server may expire its RC4 keys at any time, forcing the client to re-negotiate a new set of RC4 keys. This rotation of keys helps keep the channel from being compromised. Once the server authenticates the client and vice-versa, the virtual file system of the present invention is made available to the client. Although not necessary, an automatic directory request (arrow (9) in FIG. 7) is sent on behalf of the client to retrieve the first level of directories (arrow (10)) under the root. One public folder is provided to supply clients with common information, (e.g., an updated component), and each client has a subdirectory at the server with a unique name. It is this subdirectory that essentially serves as the root for each client. The client then ends the call via a hangup request (arrow (11)) and response (arrow (12)). As shown in FIG. 10, the retrieved directory has whatever subdirectories and files (objects) under it that the user of the client device has stored therein under the first level. The XFS file system adds an attribute (flag) to each object indicating whether the file or directory data stream is in the local storage, or is remote. As represented in FIGS. 10-12, this is indicated by a circled "R" for remote or "L" for local. As can be understood from FIG. 10, at this time, each directory and file are remote. The user can request typical file system operations on objects via session primitives in a new session, (represented in FIG. 7 by arrows numbered (13 - 18)). As shown in FIG.
7, these XFS-related session primitives (arrows (15) and (16)) are generally wrapped in PRIMITIVE_CALL (arrows (13) and (14)) and PRIMITIVE_HANGUP (arrows (17) and (18)) primitives, and are set forth in the table below: As described above, the Call and Hangup primitives are used so that the system can scale to large networks, i.e., XFS establishes a connection only to retrieve and submit data, and then closes (hangs up) the connection. Thus, unlike existing file systems, when the user requests a file system operation on an object, the extended file system of the present invention evaluates the Local/Remote attribute to determine whether the object can be retrieved locally or needs to be retrieved from remote storage. Any changes to a local object are synchronized with the remote file system, however reads and the like that do not change an object may be performed locally, without any need to communicate with the server. Note that as described below, some files are too large to be stored locally, and such files are marked by setting another attribute, i.e., a "synchronize only" attribute (circled "S" as represented in FIG. 12). By way of example, consider a user presented with the locally-downloaded directory listing 110a when the user (or some entity such as a remote server) wants to access (e.g., open) a particular file, e.g., via the path \DIR2\DIR3\File11. When the user selects the DIR2 directory, or when the path\filename is provided, the system determines from the Local/Remote file attribute that the directory \DIR2 is remote. For example, in a Windows® CE environment, an application places an API call to the operating system kernel, which passes the request to the file system manager 32 (FIG. 4).
In turn, the file system manager 32 (e.g., FSDMGR in Windows® CE) sends the request to the XFSFSD 96, which analyzes the call and calls back to the file system manager 32 with the information (track and sector) needed to locate the attribute information on the XFSDISK 34. Note that the track equals one on a RAMDisk. When the file system manager returns the attribute information, the XFSFSD 96 determines that the directory data stream is remote, and calls the XFSCLNT 94 to retrieve the data from the remote server. XFSCLNT issues a DIRECTORY primitive to the server and fetches the remote data. As can be readily appreciated, other operating systems and/or file systems may perform essentially equivalent operations, and there is no intent to limit the present invention to the Windows® CE operating system. When the requested data returns, the XFSCLNT 94 provides it to the XFSFSD 96, which stores it in the XFSDISK 34. At this time, the information is generally arranged as shown in listing 110b of FIG. 11, i.e., DIR2 is local, and the objects under it remote. The process continues as above to remotely retrieve the DIR3 subdirectory data (listing 110c of FIG. 12), and then again to remotely retrieve the data of File11. The next time that access to File11 is needed, DIR2 and DIR3 may still be local, in which event the data may be locally retrieved from the XFSDISK 34, i.e., once data is local, the extended file system essentially behaves in the same manner as any local file system. Note that from the perspective of the application and user, there is no knowledge as to where the objects are stored. Indeed, with fast, broadband connections and small files, any delay in retrieval may go unnoticed. Unlike a simple redirector, however, the locally stored information is used whenever the information is present locally.
Similarly, on the server end, the access controller may perform normal access checks and the like, and if appropriate to return / update the server-maintained data, translates the primitive into whatever command corresponds based on the remote file system in use, (e.g., the access controller issues an API call that equates to the primitive). One of the files in FIG. 12, namely FILE12, is shown as having its synchronize-always ("S") attribute set. Note that the other files (and also directories) have this attribute, but it is only shown in FIGS. 10-12 for the file (FILE12) where it is active. This attribute is used for files that are too large for local memory; their information is always retrieved from the remote storage, providing the user with as much data as possible at a given time given available memory, but without maintaining the file in the local XFSDISK 34. In other words, the extended file system operates as a redirector for such objects. Some threshold size (e.g., less than the available RAMDisk size) may be used to determine when a file is synchronize-always. Note that it is also feasible to cache partial files in the XFSDISK 34, and provide the application with an appropriate window to the data, however this is not presently implemented. For example, one present implementation uses a file object as the unit of remote or local storage, however it is equivalent to use something smaller or larger than a single object, e.g., resolution may be to a sector, part of a stream (useful for streaming audio or video) and so forth. As used herein, "object data", such as local or remote object data, includes any size, fixed or variable, into which the data may be divided or combined.
Similarly, objects may have a "local-always" attribute set therefor, i.e., if an object is not too large, (e.g., not over a certain threshold, which may be up to the entire size of the local RAMDisk), the object may be marked so as to not remove it from the cache via the least-recently-used algorithm or otherwise. The local-always "LA" attribute is present for each file, but is only shown in FIGS. 11 and 12 for one file. To summarize, FIG. 13 shows the general logic performed by the extended file system when retrieving (e.g., reading) or updating (e.g., writing) data, beginning at step 1300. At step 1300, the local-always attribute corresponding to the requested object is evaluated. If set to local-always, then the object will be cached locally, and thus step 1300 branches to step 1308 to retrieve (or update) the data from (or to) the local RAMDisk as described above. If not local-always at step 1300, the synchronize-always attribute corresponding to the requested object is evaluated at step 1302. If set to synchronize-always, then the object will not be cached locally, and thus step 1302 branches to step 1304 to retrieve (or update) the data from (or to) the remote source using appropriate primitives as described above. If instead at step 1302 the synchronize-always attribute is not set, the extended file system evaluates the Local/Remote attribute at step 1306 to see if the information to be retrieved or updated is local. If local, step 1308 is performed to retrieve or update the local data. Steps 1310 and 1312 handle the synchronization of the remote data for updates (e.g., writes) to the local data. Note that it is possible that an update to the local data may result in the file becoming too big for local storage by the XFSDISK. In such an event, the object is set to synchronize-always, and is no longer supported locally unless its data later shrinks.
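The FIG. 13 decision cascade (local-always, then synchronize-always, then Local/Remote) can be sketched as a pure function; the type and function names are illustrative assumptions.

```c
#include <assert.h>

/* Per-object attributes described above; names are illustrative. */
typedef struct {
    int local_always; /* "LA": pinned in the local cache           */
    int sync_always;  /* "S":  too large to cache, always remote   */
    int is_local;     /* Local/Remote flag: data stream is cached  */
} ObjAttrs;

typedef enum { USE_LOCAL, USE_REMOTE } DataPath;

/* Sketch of the FIG. 13 retrieval/update decision. */
DataPath choose_path(const ObjAttrs *a)
{
    if (a->local_always)  /* step 1300 -> step 1308 */
        return USE_LOCAL;
    if (a->sync_always)   /* step 1302 -> step 1304 */
        return USE_REMOTE;
    /* step 1306: fall back to the Local/Remote attribute */
    return a->is_local ? USE_LOCAL : USE_REMOTE;  /* step 1308 or 1314 */
}
```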
Thus, as used herein, "always synchronize," "synchronize-always," "always local," "local-always" and the like are not necessarily attributes that are permanent; instead, they can vary as appropriate based on the file size, available storage, and/or other factors. Returning to step 1306, if the Local/Remote attribute indicates that the data is remote, the process instead branches to step 1314 to handle the operation remotely, i.e., to retrieve or update the object data from or to the remote store. Step 1316 then stores the data locally, unless it is too large for the local storage, in which event the synchronize-always bit is set (if not already set) and the data handled remotely thereafter unless and until it shrinks.

PRIMITIVE FORMATS AND C STRUCTURES

The information below is a description of the packet formats used in the transmission protocols, and their corresponding C/C++ structures. Each packet comprises a primitive header followed by the data of the primitive. The format of the data section depends on the type of primitive. The maximum size for a single primitive (header and data) is 512 bytes. A plurality of data types are used in the transmission of data, including those set forth below:

unsigned char - an 8-bit unsigned integer in the range of 0-255. The bits are arranged most significant bit first.
DWORD - a 32-bit unsigned integer in the range of 0-4,294,967,295. The format is little-endian, that is, the most significant byte first.
CRC - a 32-bit unsigned integer in the range of 0-4,294,967,295. The format is little-endian.
For graphical representation of structures, the following format will be used, where n is the size of the field in bits:

N
n

The primitive header structure is set forth below:

wPrimitive wRequest wMore wSenderType wReserved
5 1 1 3 6

wID wSize wSequence
16 10 6

typedef struct _tPrimitiveHeader {
    WORD wReserved : 6;
    WORD wSenderType : 3;
    WORD wMore : 1;
    WORD wRequest : 1;
    WORD wPrimitive : 5;
    WORD wID;
    WORD wSize : 10;
    WORD wSequence : 6;
} PrimitiveHeader;

Valid values for the primitive header fields are:

wPrimitive - one of the following set:
PRIMITIVE_RESOLVE = 0
PRIMITIVE_LOCATE = 1
PRIMITIVE_CALL = 2
PRIMITIVE_CONTINUE = 3
PRIMITIVE_HANGUP = 4
PRIMITIVE_SEND = 5
PRIMITIVE_RETRIEVE = 6
PRIMITIVE_DIRECTORY = 7
PRIMITIVE_CHANGEDIR = 8
PRIMITIVE_ENLIST = 9
PRIMITIVE_DEFECT = 10
PRIMITIVE_CREATEDIR = 11
PRIMITIVE_CREATEFILE = 12
PRIMITIVE_REMOVEDIR = 13
PRIMITIVE_DELFILE = 14
PRIMITIVE_CLOSEFILE = 15
PRIMITIVE_MOVEFILE = 16
PRIMITIVE_GETFILEATTR = 17
PRIMITIVE_SETFILEATTR = 18
PRIMITIVE_GETFILESIZE = 19
PRIMITIVE_SETEOF = 20
Values 21-31 are reserved and should not be used.

wRequest: 1 = request, 0 = response

wSenderType:
SENDER_XFSC = 0
SENDER_XFSAC = 1
SENDER_XFSNS = 2

The primitive header is followed by 0 or more data structures. The type of structure following the header is determined by the wPrimitive and wRequest fields.

PRIMITIVE DATA STRUCTURES

Structures for the data fields are listed according to Type=ttt, Request=r, where ttt is one of the defined PRIMITIVE_ values, and r is 1 for a request primitive and 0 for a response primitive.

Type=PRIMITIVE_RESOLVE, Request=1

typedef struct _tResolveRequest {
    unsigned char cName;
    unsigned char szNUID[];
} ResolveRequest;

where cName is one of the following values:
XFS_C = 1
XFS_AC = 2
XFS_NS = 3
XFS_DS = 4
XFS_PM = 5
Values 6-255 are reserved at this time and should not be used.

The szNUID field is the name of the XFS system to resolve against. XFS-NSs are the only servers that are to respond to Resolve.
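Because C bitfield layout is implementation-defined, an interoperable implementation would typically pack and unpack the first header WORD explicitly. The sketch below assumes wPrimitive occupies the most significant 5 bits, followed by wRequest, wMore, wSenderType and wReserved, matching the field widths in the structure above; the exact on-the-wire bit order is an assumption, not stated in the text.

```c
#include <assert.h>
#include <stdint.h>

#define PRIMITIVE_ENLIST 9
#define SENDER_XFSNS     2

/* Pack the first 16-bit word of the primitive header.
 * Bit positions (msb-first: wPrimitive:5, wRequest:1, wMore:1,
 * wSenderType:3, wReserved:6) are an illustrative assumption. */
static uint16_t pack_header_word(unsigned prim, unsigned req,
                                 unsigned more, unsigned sender)
{
    return (uint16_t)(((prim   & 0x1Fu) << 11) |  /* wPrimitive : 5 */
                      ((req    & 0x01u) << 10) |  /* wRequest   : 1 */
                      ((more   & 0x01u) <<  9) |  /* wMore      : 1 */
                      ((sender & 0x07u) <<  6));  /* wSenderType: 3 */
}

static unsigned header_primitive(uint16_t w) { return (w >> 11) & 0x1F; }
static unsigned header_request(uint16_t w)   { return (w >> 10) & 0x01; }
static unsigned header_sender(uint16_t w)    { return (w >>  6) & 0x07; }
```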
Type=PRIMITIVE_RESOLVE, Request=0

typedef struct _tResolveResponse {
    unsigned char cType;
    unsigned char rgcIP[4];
} ResolveResponse;

The values cIP1 through cIP4 are the numbers in the IP address for the requested servers in cIP1.cIP2.cIP3.cIP4 format. The number of IP addresses is determined by |{IP}| = (wSize)/sizeof(ResolveResponse). If the final IP address is IP_BROADCAST (255.255.255.255) and the wMore flag is 0, then there are more IPs available, and the requester should send a Continue primitive to retrieve the next block of IP addresses. If the wMore flag is 1, the final address will not be IP_BROADCAST, and the requester should expect another Resolve response primitive. cType is the same as sent with PRIMITIVE_RESOLVE, Request=1. This is returned for convenience.

Type=PRIMITIVE_LOCATE, Request=1

No data is associated with the Locate request primitive. It is simply a "ping" to make sure that the requested machine is still available.

Type=PRIMITIVE_LOCATE, Request=0

No data is associated with the Locate response primitive. The fact that a reply is generated is sufficient to imply that the located machine is processing requests.

Type=PRIMITIVE_CALL, Request=1

The data associated with the Call request primitive is implementation specific. It should contain information about the client (such as name/password) or some information used to begin arbitration of credentials.

typedef unsigned char[] CallRequest;

While wSize could be 0 (no data), this is highly discouraged for open systems, as no security model will be implementable and no user information or state will be known.

Type=PRIMITIVE_CALL, Request=0

The data associated with the Call response primitive is implementation specific. It should contain information about the client (such as name/password) or some information used to begin arbitration of credentials.
typedef unsigned char[] CallResponse;

Type=PRIMITIVE_CONTINUE, Request=1

The data associated with the Continue request primitive is dependent on the last non-continue request primitive issued for this connection. E.g., the data type for a Continue request primitive sent in response to a Continue response primitive returned for a Send request primitive is the same as for a Send request primitive.

Type=PRIMITIVE_CONTINUE, Request=0

A Continue response primitive is sent in response to an X primitive request. For example, if a Call request primitive is issued, and a continuation is required, the Call is answered with a Continue response primitive. The caller would then provide additional data according to the needs of the session, and return a Continue request primitive. The data associated with the Continue response primitive is a set of data according to the last non-continue primitive request issued on this connection. If there were no prior non-continue requests on the connection, the Hangup primitive (with error) should be returned and the session terminated.

Type=PRIMITIVE_HANGUP, Request=1

dwErrorCode strError
32 n

typedef struct _tHangupRequest {
    DWORD dwErrorCode;
    unsigned char strError[];
} HangupRequest;

The Hangup request primitive requests termination of the current session. The receiver should note the dwErrorCode field, return the Hangup response primitive, and gracefully terminate the session. If dwErrorCode is not 0 (ERROR_SUCCESS) then dwErrorCode is an implementation specific error about the reason for termination. Win32 error codes should be used by implementations for interoperability. The string strError is a nul-terminated human readable description of the error. While it is not required (strError[0] == 0), an application should attempt to provide an error string whenever possible. Multiple error codes are allowed; the receiver of a Hangup request primitive should continue parsing HangupRequest data structures until wSize bytes have been consumed.
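The "parse HangupRequest structures until wSize bytes are consumed" rule can be sketched as follows. The wire layout assumed here - a 32-bit error code immediately followed by a nul-terminated string, repeated - follows the structure above, but the helper name and exact framing are illustrative assumptions.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Count the chained HangupRequest records in a wSize-byte data section:
 * each record is a 4-byte dwErrorCode followed by a nul-terminated
 * strError (possibly empty). Stops on a truncated/unterminated record. */
static int count_hangup_errors(const uint8_t *data, size_t wSize)
{
    size_t off = 0;
    int n = 0;
    while (off + 4 < wSize) {
        off += 4;                      /* skip dwErrorCode */
        size_t len = 0;
        while (off + len < wSize && data[off + len] != '\0')
            len++;
        if (off + len >= wSize)
            break;                     /* unterminated string - stop */
        off += len + 1;                /* skip strError and its nul */
        n++;
    }
    return n;
}
```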
Type=PRIMITIVE_HANGUP, Request=0

dwErrorCode strError
32 n

typedef struct _tHangupResponse {
    DWORD dwErrorCode;
    unsigned char strError[];
} HangupResponse;

The Hangup response primitive verifies receipt of the request for termination of the current session. The receiver should note the dwErrorCode field. If the field is not 0 (ERROR_SUCCESS) the receiver should terminate the session immediately (non-graceful shutdown) because an error was encountered while closing the session. The string strError is a nul-terminated human readable description of the error. While it is not required (strError[0] == 0), an application should attempt to provide an error string whenever possible. Only one Hangup response data field is allowed.

Type=PRIMITIVE_SEND, Request=1

The Send primitive sends part or all of an object to an XFS service. The system is designed so that portions of the object may be updated without transmission of the entire object. It is not necessary that an XFS service send partial objects, but all XFS systems must be able to receive them.

typedef struct _tSendRequest {
    DWORD dwLength;
    DWORD dwSectionStart;
    CRC crc;
    DWORD dwFileID;
    unsigned char rgcObjectData[];  /* wSize - (sizeof(DWORD) * 4) */
} SendRequest;

A send request contains the length and the start of section identifier. dwFileID is the file identifier returned by PRIMITIVE_CREATEFILE. A CRC is calculated over the object data and sent across with the primitive. This ensures correct receipt of data. The receiver of this primitive must validate the CRC and only then commit the object to the persistent store. If the CRC does not match, the response will contain an appropriate error code and the sender should re-send the primitive. A send request primitive containing dwLength = 0, dwSectionStart = 0 denotes the end of the request. In a chained send, this will inform the receiver that the send is complete and it should reply with a Send response primitive.
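The exact CRC formula is not legible in the available text, so the sketch below uses the common reflected CRC-32 (polynomial 0xEDB88320) as an assumed stand-in for computing the crc field over rgcObjectData before sending, and for validating it on receipt.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Bitwise reflected CRC-32; the choice of this particular CRC is an
 * assumption, as the patent's formula is garbled in the source text. */
static uint32_t crc32_bytes(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int)(crc & 1u));
    }
    return ~crc;
}
```

A receiver would recompute crc32_bytes over the received rgcObjectData and compare it to the crc field, committing the data only on a match.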
In a send/continue scenario, this informs the receiver that no more sections are required to be sent, and the transaction should be terminated with the send response primitive.

Type=PRIMITIVE_SEND, Request=0

The Send response primitive returns an error code that specifies the result of the operation. A value of 0 (ERROR_SUCCESS) indicates a successful completion of the operation. ERROR_CRC indicates that the CRC did not verify successfully.

dwError
32

typedef struct _tSendResponse {
    DWORD dwError;
} SendResponse;

Type=PRIMITIVE_RETRIEVE, Request=1

The Retrieve request primitive is used to start the process of retrieving an object. It specifies the name of the object as well as the portion(s) it wishes to retrieve.

dwFileID dwSectionStart dwLength
32 32 32

typedef struct _tRetrieveSection {
    DWORD dwOffset;
    DWORD dwLength;
} RetrieveSection;

typedef struct _tRetrieveRequest {
    DWORD dwFileID;
    RetrieveSection Section[];
} RetrieveRequest;

The dwFileID field contains the file identifier returned by PRIMITIVE_CREATEFILE. The RetrieveSection array contains the sections, their starting positions, and their lengths to be retrieved. Certain combinations of values for dwOffset and dwLength have special meanings. A section start of 0 and length of 0 indicates end of retrieval by the client. Note that at present, this primitive implements only one retrieve section per request.

Type=PRIMITIVE_RETRIEVE, Request=0

The Retrieve response primitive returns the CRC and (if requested) the data from a section of the object, or the CRC of the entire object if so requested.

typedef struct _tRetrieveResponse {
    CRC crc;
    DWORD dwSectionStart;
    DWORD dwLength;
    DWORD dwError;
    unsigned char rgcData[];
} RetrieveResponse;

The dwError field indicates if the retrieve operation was successful. A return value of 0 (ERROR_SUCCESS) means that the operation completed successfully. The value of the crc field is one of two values. If dwLength == 0, the crc field should be ignored since no data was sent back with the response.
Otherwise, the crc field contains the CRC of the rgcData field. Once the client gets the retrieve response, it should verify the crc. If it does not match, it should re-send the primitive. A Retrieve sequence is terminated by the server with either a retrieve response, a return value other than 0 in dwError, or a length less than the requested length. If the length is less than the requested length, a retrieve response is sent back. Otherwise, a continue is sent back from the server. The client can terminate the retrieve sequence by sending a sectionstart = 0 and dwLength = 0 with the retrieve request.

Type=PRIMITIVE_DIRECTORY, Request=1

The Directory request primitive requests a list of some or all objects and sub folders from 1) the current working directory of the session, 2) a directory relative to the current working directory, or 3) a specific directory.

typedef struct _tDirectoryRequest {
    unsigned char cTypeMask;
    unsigned char cFiller;
    unsigned char szNameMask[];
    unsigned char szDirectory[];
} DirectoryRequest;

The szDirectory field is a null-terminated string in one of the following formats:

.                 the current working directory
..                the parent of the current working directory
[.|..]\<name>     a directory relative to the current working directory; <name> is the name of the relative directory and may include "\" for multiple levels of indirection
\<name>           a directory relative to the root "\" directory, that is, a specific directory

The cTypeMask field contains a bitmask describing the types of objects to list. The value of the cTypeMask field is a bitwise OR of one or more of the following values:

ATTR_READONLY = 0x01
ATTR_DIRECTORY = 0x02
ATTR_ALLOBJECTS = 0xFF

The szNameMask field is a string that is used to filter the list of objects returned by name. The szNameMask field may be empty, in which case all located objects matching the cTypeMask parameter are to be enumerated. The szNameMask field may contain the "wildcard" characters '?' and '*', where ?
= Any character in this location is a match.
* = Any set of characters starting at this location is a match.

cFiller is padding for 16-byte alignment required by many processors. This is analogous to the DIR command under DOS.

Type=PRIMITIVE_DIRECTORY, Request=0

The Directory response primitive contains the names and flags of objects located by the Directory request primitive's masks.

cFlags cFiller szName
8 8 n

typedef struct _tDirectoryResponse {
    unsigned char cFlags;
    unsigned char cFiller;
    unsigned char szName[];
} DirectoryResponse;

The cFlags field of the DirectoryResponse structure contains a bitwise OR of the attributes of the named object. Currently, only the flags ATTR_READONLY and ATTR_DIRECTORY are defined - see "Type=PRIMITIVE_DIRECTORY, Request=1" for values of these flags. cFiller is padding for 16-byte alignment required by many processors. The szName field is a nul-terminated string giving the canonical name of the object, sans directory information.

Type=PRIMITIVE_CHANGEDIR, Request=1

The ChangeDir request primitive requests the changing of the current working directory for the session.

szDirectory
n

typedef struct _tChangedirRequest {
    unsigned char szDirectory[];
} ChangedirRequest;

This is the only primitive for which an error does not generate a Hangup request in response. A ChangeDir request primitive will be answered with a ChangeDir response primitive with szDirectory != szNewDirectory (see "Type=PRIMITIVE_CHANGEDIR, Request=0" below for details on the ChangeDir response primitive).

Type=PRIMITIVE_CHANGEDIR, Request=0

On success:

szNewDirectory
n

On error:

typedef struct _tChangedirError {
    DWORD dwError;
    unsigned char szErrorString[];
} ChangedirError;

If no error occurs, szNewDirectory should be equal to the szDirectory parameter from the ChangeDir request primitive, and only a ChangedirResponse is returned in the data portion.
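The szNameMask wildcard semantics ('?' matches any single character, '*' matches any run of characters) can be sketched with a small recursive matcher; case handling and other details are assumptions, as the text does not specify them.

```c
#include <assert.h>

/* Illustrative matcher for the szNameMask wildcards described above:
 * '?' matches exactly one character; '*' matches zero or more characters. */
static int mask_match(const char *mask, const char *name)
{
    if (*mask == '\0')
        return *name == '\0';
    if (*mask == '*')
        return mask_match(mask + 1, name) ||          /* '*' matches nothing */
               (*name && mask_match(mask, name + 1)); /* or one more char    */
    if (*name && (*mask == '?' || *mask == *name))
        return mask_match(mask + 1, name + 1);
    return 0;
}
```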
If an error occurs, szNewDirectory should be the new current working directory (even if the new CWD is the same as the original CWD) and the data segment will contain a ChangedirResponse structure followed by a ChangedirError structure.

Type=PRIMITIVE_ENLIST, Request=1

The Enlist request primitive is used to register new XFS servers with the XFS-NSs. The enlistment/defect scenarios are meant for permanent addition and removal of servers. Limited removal from the system is accomplished through the fact that XFS servers will not respond to a Locate request primitive when inactive or disabled.

typedef struct _tEnlistRequest {
    unsigned char cType;
    unsigned char cIP[4];
    unsigned char szNUID[];
} EnlistRequest;

The cType location contains the type of XFS server being registered. It can be one of the following values:

XFS_C = 1
XFS_AC = 2
XFS_DS = 3
XFS_NS = 4
XFS_PM = 5
Values 6-255 are reserved at this time and should not be used.

The enlistment of an XFS-C is not maintained in the XFS-NS. It is supplied so that clients may locate the XFS-NSs without prior knowledge of the network topography. As such, there is no need to defect an XFS-C from the network. The cIP fields contain the IP address of the enlisting box in cIP1.cIP2.cIP3.cIP4 format. The szNUID field is the name of the XFS system to enlist with. XFS-NSs are the only servers that are to respond to Enlist. Multiple EnlistRequest structures may be present. This allows a multi-homed box to register several IP addresses, or a package implementing multiple XFS services to register all services at the same time. If more enlistments are required than the data segment of the datagram will support, the enlisting service(s) must send separate Enlist requests for each block of IPs. Neither datagram chaining nor the continue scenario is supported for enlistment.

Type=PRIMITIVE_ENLIST, Request=0

The Enlist response primitive is sent from an XFS-NS to notify an enlisting server that the enlistment into the XFS system has succeeded.
nNSCount: 16 bits

typedef struct _tEnlistResponse {
    unsigned short nNSCount;
} EnlistResponse;

The nNSCount field contains the number of XFS-NS's currently known to the system so that the enlisting box will know when all XFS-NS's have succeeded in registering the enlistment.

Type= PRIMITIVE_DEFECT, Request=1

The Defect request primitive is sent to remove the requester from the XFS-NS's namespace. The requestor must issue a Defect request for every Enlist request that was previously registered.

typedef struct _tDefectRequest {
    unsigned char cType;
    unsigned char cIP[4];
} DefectRequest;

The Defect request primitive data is substantially identical to the EnlistRequest primitive data. See the above information in "Type= PRIMITIVE_ENLIST, Request=1" for details on data values and semantics.

Type= PRIMITIVE_DEFECT, Request=0

The Defect response primitive notifies the defecting server that the defect has been registered on a XFS-NS. There is no data associated with the Defect response primitive.

Type= PRIMITIVE_CREATEDIR, Request=1

The CreateDir primitive requests the creation of a new directory. The directory could be a new directory in the current directory for the session, a new directory relative to the current directory for the session or a specific directory.

szNewDirectory: n bytes

typedef struct _tCreateDirRequest {
    unsigned char szNewDirectory[];
} CreateDirRequest;

szNewDirectory contains the new directory name.

Type= PRIMITIVE_CREATEDIR, Request=0

The CreateDir response indicates the result of the operation.

dwErrorCode: 32 bits

A return value of 0 (ERROR_SUCCESS) indicates the operation completed successfully.

Type= PRIMITIVE_CREATEFILE, Request=1

The CreateFile primitive requests creation of a new file or opens an existing file.

typedef struct _tCreateFileRequest {
    unsigned char szFileName[];
    DWORD dwDesiredAccess;
    DWORD dwShareMode;
    DWORD dwCreateDisposition;
    DWORD dwFileAttributes;
} CreateFileRequest;

szFileName can be a file in the current directory for the session, a directory relative to the current directory for the session or a specific directory in the object store.
dwDesiredAccess specifies the type of access to the file. This is an implementation specific parameter that goes across with the primitive. Typical types of access would be read, write or both. dwShareMode specifies how the file can be shared. Setting this field to 0 implies the file cannot be shared. Other sharing modes are implementation specific. Typical types of sharing modes would be share for read and share for write. dwCreateDisposition specifies the actions that can be taken on files that exist and files that do not exist. The following actions may be supported: Create New, Create Always, Open Existing, Open Always and Truncate Existing. The implementation of these actions is left to the developer. dwFileAttributes specifies the attributes for the file. This is an implementation specific parameter.

Type= PRIMITIVE_CREATEFILE, Request=0

The CreateFile response indicates the result of the operation.

typedef struct _tCreateFileResponse {
    DWORD dwFileID;
    DWORD dwError;
} CreateFileResponse;

dwFileID is the ID of the newly created or opened file. This is set to 0xFFFFFFFF (INVALID_HANDLE_VALUE) if the operation is unsuccessful. dwError is set to 0 (ERROR_SUCCESS) if the primitive succeeds. A non-zero value indicates an error in operation.

Type= PRIMITIVE_REMOVEDIR, Request=1

The RemoveDir primitive requests deletion of an existing empty directory. The directory to be removed can be relative to the current directory for the session, a directory in the current directory of the session or a specific directory.

szDirectoryName: n bytes

typedef struct _tRemoveDirRequest {
    unsigned char szDirectoryName[];
} RemoveDirRequest;

szDirectoryName is the name of the directory to be removed.

Type= PRIMITIVE_REMOVEDIR, Request=0

The RemoveDir response indicates the result of the RemoveDir operation.

dwError: 32 bits

typedef struct _tRemoveDirResponse {
    DWORD dwError;
} RemoveDirResponse;

dwError is set to 0 (ERROR_SUCCESS) if the operation succeeds.

Type= PRIMITIVE_DELFILE, Request=1

The DelFile primitive requests deletion of an existing file.
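Stepping back to the CreateFile dispositions described above: since the patent leaves their implementation open, the sketch below is only one plausible reading, modeled in Python with ordinary file operations. The numeric codes are invented for the example.

```python
import os

# Hypothetical disposition codes; the patent leaves the actual values open.
CREATE_NEW, CREATE_ALWAYS, OPEN_EXISTING, OPEN_ALWAYS, TRUNCATE_EXISTING = range(1, 6)

def open_with_disposition(path, disposition):
    """Model the five CreateFile disposition actions with ordinary opens."""
    exists = os.path.exists(path)
    if disposition == CREATE_NEW and exists:
        raise FileExistsError(path)          # must not already exist
    if disposition in (OPEN_EXISTING, TRUNCATE_EXISTING) and not exists:
        raise FileNotFoundError(path)        # must already exist
    if disposition in (CREATE_NEW, CREATE_ALWAYS, TRUNCATE_EXISTING):
        mode = "w+b"                         # create or truncate
    elif disposition == OPEN_ALWAYS and not exists:
        mode = "w+b"                         # create if missing
    else:
        mode = "r+b"                         # open, keep contents
    return open(path, mode)
```

An XFS server implementing CreateFile would make the same existence checks before touching its object store, then report failure through dwError rather than an exception.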
The file to be deleted can be in the current directory for the session, a directory relative to the current directory for the session or a specific directory.

szFileName: n bytes

typedef struct _tDelFileRequest {
    unsigned char szFileName[];
} DelFileRequest;

szFileName is the name of the file to be deleted.

Type= PRIMITIVE_DELFILE, Request=0

The DelFile response indicates the result of the DelFile operation.

dwError: 32 bits

typedef struct _tDelFileResponse {
    DWORD dwError;
} DelFileResponse;

dwError is set to 0 (ERROR_SUCCESS) if the operation succeeds.

Type= PRIMITIVE_CLOSEFILE, Request=1

CloseFile closes the file either created or opened by the CreateFile primitive.

dwFileID: 32 bits

typedef struct _tCloseFileRequest {
    DWORD dwFileID;
} CloseFileRequest;

dwFileID is the file identifier returned by the CreateFile primitive.

Type= PRIMITIVE_CLOSEFILE, Request=0

The CloseFile response identifies the result of the CloseFile operation. If the file could not be closed, it must be closed once the session terminates.

dwError: 32 bits

typedef struct _tCloseFileResponse {
    DWORD dwError;
} CloseFileResponse;

dwError is set to 0 (ERROR_SUCCESS) if the operation succeeds.

Type= PRIMITIVE_MOVEFILE, Request=1

The MoveFile primitive renames an existing file or a directory.

typedef struct _tMoveFileRequest {
    unsigned char szExistingFileName[];
    unsigned char szNewFileName[];
} MoveFileRequest;

szExistingFileName is a null terminated string that names an existing file or directory. szNewFileName is a null terminated string that specifies a new name for the file or directory.

Type= PRIMITIVE_MOVEFILE, Request=0

The MoveFile response indicates the result of the MoveFile operation.

dwError: 32 bits

typedef struct _tMoveFileResponse {
    DWORD dwError;
} MoveFileResponse;

dwError is set to 0 (ERROR_SUCCESS) if the operation succeeds.

Type= PRIMITIVE_GETFILEATTR, Request=1

GetFileAttr requests the attributes for a specified file or a directory.

szFileName: null terminated string

typedef struct _tGetFileAttrRequest {
    unsigned char szFileName[];
} GetFileAttrRequest;

szFileName is a null terminated string that specifies the name of the file or directory.
Type= PRIMITIVE_GETFILEATTR, Request=0

The GetFileAttr response contains the attributes of the requested file or directory and an error code.

typedef struct _tGetFileAttrResponse {
    DWORD dwAttr;
    DWORD dwError;
} GetFileAttrResponse;

dwAttr is a 32 bit value specifying the attributes for the file. This value is meaningful if dwError is set to 0 (ERROR_SUCCESS). The actual values of the attributes are implementation specific and are the same as implemented in the CreateFile primitive. dwError is set to 0 if the operation succeeded.

Type= PRIMITIVE_SETFILEATTR, Request=1

SetFileAttr requests setting of the file attributes.

typedef struct _tSetFileAttrRequest {
    unsigned char szFileName[];
    DWORD dwAttr;
} SetFileAttrRequest;

szFileName is a null terminated string that specifies the name of the file whose attributes are to be set. dwAttr is a 32 bit value specifying the attributes for the file. The actual values of the attributes are implementation specific and are the same as implemented in the CreateFile primitive.

Type= PRIMITIVE_SETFILEATTR, Request=0

The SetFileAttr response indicates the result of the SetFileAttr operation.

dwError: 32 bits

typedef struct _tSetFileAttrResponse {
    DWORD dwError;
} SetFileAttrResponse;

dwError is set to 0 (ERROR_SUCCESS) if the operation succeeds.

Type= PRIMITIVE_GETFILESIZE, Request=1

GetFileSize requests the file size for a given file ID. The file should be opened using the CreateFile primitive prior to invoking this primitive.

dwFileID: 32 bits

typedef struct _tGetFileSizeRequest {
    DWORD dwFileID;    // File ID
} GetFileSizeRequest;

dwFileID is set to the file ID returned by the CreateFile primitive.

Type= PRIMITIVE_GETFILESIZE, Request=0

The GetFileSize response indicates the result of the GetFileSize operation.

typedef struct _tGetFileSizeResponse {
    DWORD dwFileSize;    // File size - -1 if error
    DWORD dwRetCode;     // Return code for the operation
} GetFileSizeResponse;

dwFileSize contains the file size if the primitive is successful. In case of failure, this is set to 0xFFFFFFFF. dwRetCode specifies if the operation is successful. This field is set to ERROR_SUCCESS if successful.
Otherwise, an implementation specific error code must be returned in this field.

Type= PRIMITIVE_SETEOF, Request=1

The SetEOF primitive sets the EOF at the current position in the file.

dwFileID: 32 bits

typedef struct _tSetEOFRequest {
    DWORD dwFileID;    // File ID
} SetEOFRequest;

dwFileID is set to the file ID returned by the CreateFile primitive.

Type= PRIMITIVE_SETEOF, Request=0

The SetEOF response indicates the result of the SetEOF operation.

dwRetCode: 32 bits

typedef struct _tSetEOFResponse {
    DWORD dwRetCode;    // Return code of the operation
} SetEOFResponse;

dwRetCode is set to ERROR_SUCCESS if the operation completes successfully. In case of error, an implementation specific error is set in this field.

Type = XfsTICKET

The following structure defines the ticket sent across by the server in the PRIMITIVE_CALL response after authentication.

struct XfsTicket {
    enum { SIZE=256 };
    DWORD m_dwLength;
    unsigned char m_bData[SIZE];
};

XFS COMMUNICATIONS - IP AND LINK LAYER

The XFS system specifies two types of transport for primitives, UDP and TCP. UDP and TCP communications are conducted on separate IP ports. It is recommended that port 171 be used for UDP communications and port 172 used for TCP communications. However, any available port could be configured for TCP and UDP communications. Session primitives are restricted to TCP transport. Control primitives - which are UDP capable - are capable of being used with UDP broadcast. Primitives are listed in the table below with the types of transports that may be used with them. Available transports are denoted by an 'x' in the transport column. The preferred transport(s) for the primitive is denoted by a capital 'X'.

As can be seen from the foregoing detailed description, there is provided a method and system wherein a client device has access to an entire file system with large storage capacity when a physical connection is present, even with limited memory resources. The system and method are fast, efficient, scalable and secure.
The client device works with locally-cached data, and thus may work without a physical connection, and then upload any changes at a later time. While the present invention thus provides particular benefit with the Internet, it also provides numerous other benefits to computer users in general. Note further that the present invention need not be limited to hierarchically arranged directories of files, but may alternatively be used with other arrangements of data.
https://patents.google.com/patent/WO2000057315A2/en
CC-MAIN-2018-43
refinedweb
11,748
50.87
Working with Ionic Native - Shake, Rattle, and Roll (Follow Up)

Last month I wrote a tutorial on using Ionic Native and the Device Motion plugin (Working with Ionic Native - Shake, Rattle, and Roll). In that post I detailed how to use the device's accelerometer to recognize a "shake" gesture and then reload data from a service. A reader (on the Ionic blog version of my article) had a great question:

Thats really useful and it works :-) Can anyone suggest how to implement subscription.unsubscribe(); when the page is navigated away from and then restarted when the user returns to this page?

My demo was a one page app which isn't very practical, but kept things simple for the demo. However, as soon as you add a new page to the app, you may (or may not!) notice something bad about my code - it continues to listen to the accelerometer after you've left the page. That's going to drain the device battery and make the user angry. You wouldn't like the user when they're angry - trust me.

I began by modifying my previous demo such that the list of cats actually linked to a detail page. In case you don't remember, this is how the list looked:

So I simply created a new page (don't forget, the Ionic CLI has a cool "generate" feature to make that easy!) and then linked my cats to the detail. So first I added a click event to my list item:

<ion-item *ngFor="let cat of cats" (click)="loadCat(cat)">
  {{ cat.name }}
</ion-item>

And then added a handler for it:

loadCat(cat) {
  this.navController.push(DetailPage, {cat:cat});
}

Ok, so how do we fix our code so we only listen to the accelerometer when the view is visible? Easy - we use a view event! The Ionic docs do not do a good job of making it easy to find them, but if you look at the API docs for NavController, you'll find a list of view-related events you can listen to. For my demo, I just needed ionViewWillEnter and ionViewWillLeave. So I simply moved my "listen for device motion" code out of the constructor and into the enter event.
Here’s the complete home.ts code: import {Component} from '@angular/core'; import {NavController,Platform} from 'ionic-angular'; import {CatProvider} from '../../providers/cat-provider/cat-provider'; import {DeviceMotion} from 'ionic-native'; import {DetailPage} from '../detail/detail'; @Component({ providers: [CatProvider], templateUrl: 'build/pages/home/home.html' }) export class HomePage { public cats:Array<Object>; private lastX:number; private lastY:number; private lastZ:number; private moveCounter:number = 0; private subscription:any; constructor(public catProvider:CatProvider, private navController: NavController, public platform:Platform) { this.loadCats(); } loadMore() { console.log('load more cats'); this.loadCats(); } loadCats() { this.catProvider.load().then(result => { this.cats = result; }); } loadCat(cat) { this.navController.push(DetailPage, {cat:cat}); } ionViewWillEnter() { console.log('view will enter'); this.platform.ready().then(() => { this; }); }); } ionViewWillLeave() { console.log('view will leave'); this.subscription.unsubscribe(); } } So ionViewWillEnter simply has the code I used before. No real difference there - but do note I’m storing subscription globally to the component. That let’s me then use it in ionViewWillLeave to handle unsubscribing from the accelerometer. I created a new folder for this version in my Cordova Demos repository - you can find it here:
https://www.raymondcamden.com/2016/08/22/working-with-ionic-native-shake-rattle-and-roll-follow-up
CC-MAIN-2018-30
refinedweb
549
57.98
Choosing a Javascript library for Zope

Background

What's AJAX? (If you know it, you should skip this section)

Wikipedia says:

Like DHTML, LAMP, or SPA, Ajax is not a technology in itself, but a term that refers to the use of a group of technologies together. In fact, derivative/composite technologies based substantially upon Ajax, such as AFLAX, are already appearing. (full definition here: What's AJAX?)

In other words, AJAX allows a developer to make calls to the server from the loaded page with Javascript, and change the page based on the server's answer without having to do a complete page reload. Since the publishing machinery is not involved, an asynchronous round-trip is very fast and allows big ergonomic improvements. This is due to the fact that asynchronous calls are made to server methods that just quickly render the needed data, as opposed to a regular call that calculates and renders a full page.

Why should we care about AJAX in Zope applications?

AJAX isn't just the latest buzz word out there that you can show off in your applications (the come-here-we-have-some effect); AJAX is not the latest and coolest technological thing all hype developers should know about. AJAX is not the web UI silver bullet either. AJAX is just a tool that lets you focus on something that often gets lost when creating web applications: people. I believe this is the main reason for AJAX's success: developers are able to greatly enhance their users' experience with a few drops of Javascript.

Examples of use cases:

- A select box value is changed, the values of a second one are reloaded.
- Direct edition of page parts (see CPSWiki for instance)
- Content panel that changes on user actions
- Forms are checked before actually being sent (see CPSMailAccess email editor)
- etc.
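The server side of such a round-trip is easy to picture: the asynchronous endpoint renders only the data the page fragment needs, while a regular request rebuilds the whole page. A minimal sketch in Python (the endpoint and payload are invented for illustration):

```python
import json

# Toy data standing in for whatever the page displays.
CATS = [{"id": 1, "name": "Tiger"}, {"id": 2, "name": "Mittens"}]

def render_full_page():
    """A regular request: layout, navigation and content are all recomputed."""
    items = "".join("<li>%s</li>" % c["name"] for c in CATS)
    return "<html><body><ul>%s</ul></body></html>" % items

def ajax_cat_list():
    """The asynchronous endpoint: just the data, ready for DOM insertion."""
    return json.dumps({"cats": CATS})

payload = json.loads(ajax_cat_list())
```

A page script would call the second endpoint and update a single element; the dependent select boxes, inline edits and form checks from the list above all follow this shape.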
People can argue there are a lot of caveats to this (for example, using the 'back' button of the browser can break it) but it improves the flexibility of a web application so much and makes web apps as reactive as desktop apps.

A recent Ajax Survey shows us that AJAX is used in real applications everywhere. Even if the number of developers that answered this survey is not big, it gives a good idea of what tools are actually used to do AJAX. Zope is not apart from the main web developer stream, and can use any AJAX library out there. AJAX libraries have different approaches, explained in the next section.

Different library approaches

There are several kinds of libraries available to work with AJAX:

- Low-level client-side frameworks (LLF)
- Client-side application frameworks (CSF)
- Server-side Javascript generation frameworks (SSF)
- XForm approach frameworks (XFF)
- Other frameworks

Low-level client-side frameworks

Low-level Javascript libraries provide simple piping to access the server by wrapping XMLHttpRequest objects, and let the developer perform DOM manipulation with the server's answer. Most of these libraries are a thin layer that provides a portable API for these actions, and other utilities like:

- XPath query APIs
- XSLT APIs
- etc.

Examples of such libraries are: Sarissa, XHConn, LibXMLHttpRequest (non GPL), etc.
Here's a small example of a simple AJAX call in Sarissa:

Sarissa.updateContentFromURI = function(sFromUrl, oTargetElement, xsltproc) {
    try {
        oTargetElement.style.cursor = "wait";
        var xmlhttp = new XMLHttpRequest();
        xmlhttp.open("GET", sFromUrl);
        function sarissa_dhtml_loadHandler() {
            if (xmlhttp.readyState == 4) {
                oTargetElement.style.cursor = "auto";
                Sarissa.updateContentFromNode(xmlhttp.responseXML, oTargetElement, xsltproc);
            };
        };
        xmlhttp.onreadystatechange = sarissa_dhtml_loadHandler;
        xmlhttp.send(null);
        oTargetElement.style.cursor = "auto";
    } catch(e) {
        oTargetElement.style.cursor = "auto";
        throw e;
    };
};

Sarissa takes care of all portability aspects to allow this code to work under most browsers that implement Javascript. These simple libraries provide a quick and simple way to use asynchronous calls in a web view. Google has also released its XSLT/XPath JavaScript engine as an open source project: AJAXSLT

Client-side application frameworks

Most of the time, client-side application frameworks provide the same features as low-level client-side frameworks, but also bring a higher-level, component-oriented set of APIs. These APIs let the developer work on the client side like he or she would do in a desktop app with a classical GUI toolkit. The style and the amount of Javascript code produced are very dependent on the library, and the developer is quite driven by the toolkit. For example, toolkits like Prototype provide an object-oriented way to work in Javascript, and therefore change a lot how Javascript is used in a web application and its importance in the architecture.

This example shows how to create a class hierarchy with Prototype:

function Manager() {
    this.reports = [];
}
Manager.prototype = new Employee;

function WorkerBee() {
    this.projects = [];
}
WorkerBee.prototype = new Employee;

function SalesPerson() {
    this.dept = "sales";
    this.quota = 100;
}
SalesPerson.prototype = new WorkerBee;

etc...
Some toolkits also provide very high-level functionality that lets the developer implement common AJAX use cases in a few lines. OpenRico, which is built on top of Prototype, is one of those. For example, adding a nice fade effect in OpenRico is done with a single command:

new Effect.FadeTo('fadeMe',
    .2,    // 20% opacity
    500,   // 500ms (1/2 second)
    10,    // 10 steps
    {complete: function() {setStatus('done fading element.', 1500);}}
);

Complex use cases, like live grids, can be provided as well. See the OpenRico live grid demo.

Client-side application frameworks also provide other developer tools like:

- Unit-testing framework integration, like ECMAUnit or Test.Simple
- Debuggers
- Different protocols for client-server dialogs (XML, JSON)

The learning curve involved by these frameworks is worth it, as they bring to Javascript what we have in Python.

- Javascript? huh! I don't want to use it
- What if you can actually use object-oriented programming in it?
- I always get thousands of errors in a few lines of Javascript
- What if you can debug it?
- Is there any popcorn left?

Server-side Javascript generation frameworks

Server-side Javascript generation is generally based on the no-javascript-skills-needed paradigm: the developer creates code on the server side, using the server's language with specific APIs, or a specific model. The framework then automatically generates the right HTML and Javascript elements and sends them to the client. Sometimes this generation is done on the client side with a Javascript engine.
CrackAjax is one of those for Python, and lets the developer create AJAX views:

import crackajax
import cherrypy
import ituneslib

class iTunesAjaxPage(crackajax.AjaxPage):
    ...

crackajax.init("jsolait")
cherrypy.root = iTunesAjaxPage(ituneslib.Library("Library.xml"), "")
cherrypy.server.start()

The serverside decorator tags the methods to become an XML-RPC call, and the clientside decorator automatically converts Python code into client-side Javascript, using a Python-to-JScript .NET compiler.

There are also less radical approaches, where the framework provides APIs to describe the Javascript that needs to be created, using for example a descriptive language that covers explicit behaviors (see the Azax approach).

XForm approach frameworks

Section to be completed (see FormFaces)

Other frameworks

There are many other hybrid approaches that mix server-side and client-side, or approaches that provide even lower-level mechanisms. They are not detailed here because they are often tied to a particular web framework or don't bring more features than what we would have in the other types.

Choosing a library

For CPS, and moreover for Zope-based applications, an AJAX library has to be taken from the CSF or SSF category, since all functionality available in LLF exists in CSF. The next sections provide a very quick review of existing libraries.

SSF Libraries

CrackAjax

- Pros:
  - straight-forward: a server-side approach, like CrackAjax or Azax, provides a straightforward tool to add Ajax features to a web application, by hiding the Javascript layers from the developer with techniques that can be compared to template metaprogramming, and providing a meta language or a set of APIs in the web application's native language.
- Cons:
  - not so hidden: as you can see in this CrackAjax example, the code written in Python is very similar to what you would have done in Javascript.
Furthermore, it's a bit of magic, since the given Python code cannot be unit tested at all (the 'document' variable does not exist on the Python side).

Azax

- Pros:
  - clean architecture: Sometimes the SSF approach is also meant to drive the developer to clearly separate the Controller and the Interface. For example, Azax xml templates provide a way to separate the behavior of a page and the UI; example of separation: Azax page behavior / Azax page template
  - includes a link to MochiKit, allowing hybrid behaviors
- Cons:
  - one more layer: This clean separation can be done straightforwardly in Javascript, without adding a layer, but describing behaviors in XML can also be seen as more portable and readable, because the code could work without Javascript, which is just the last layer. This approach is very similar to what XUL has in XBL Events, but this is a bit of Not Invented Here.

CSF Libraries
https://www.nuxeo.com/blog/choosing-javascript/
CC-MAIN-2018-05
refinedweb
1,630
50.67
Dynamic global and instance settings for your django project

Project description

Warning

There is a critical bug in version 1.2 that can result in data loss. Please upgrade to 1.3 as soon as possible and do not use 1.2 in production. See #81 for more details.

, 3.5 and 3.6, with django >=1.8.

Features

- Simple to set up
- Admin integration
- Forms integration
- Bundled with global and per-user preferences
- Can be extended to other models if needed (e.g. per-site preferences)
- Integrates with django caching mechanisms to improve performance

Documentation

The full documentation is at.

Contributing

See

Changelog

1.4.2 (06-11-2017)

- Fix #121: reverted Section import missing from dynamic_preferences.types

Contributors:

- @okolimar
- @swalladge

1.4.1 (03-11-2017)

- Section verbose name and filter in django admin (#114)
- Fixed wrong import in Quickstart documentation (#113)
- Fix #111: use path as returned by storage save method (#112)

Contributors:

- @okolimar
- @swalladge

1.4 (15-10-2017)

- Fix #8: we now have date, datetime and duration preferences
- Fix #108: Dropped tests and guaranteed compatibility with django 1.8 and 1.9, though
- Fix #103: bugged filtering of user preferences via REST API
- Fix #78: removed create_default_per_instance_preferences.
This is not considered a backward-incompatible change as this method did nothing at all and was not documented.

Contributors:

- @rvignesh89
- @haroon-sheikh

1.3.3 (25-09-2017)

- Fix #97 where the API serializer could crash during preference update because of incomplete parsing

Contributors:

- @rvignesh89

1.3.2 (11-09-2017)

- Should fix Python 3.3 complaints in CI, also add tests on Python 3.6 (#94)
- Fixed #75: Fix checkpreferences command that was not deleting obsolete preferences anymore (#93)
- Retrieve existing preferences in bulk (#92)
- Cache values when queried in all() (#91)

Contributors:

- @czlee

1.3.1 (30-07-2017)

- Fix #84: serialization error for preferences with None value (@swalladge)
- More documentation about preferences form fields

1.3 (03-07-2017)

This release fixes a critical bug in 1.2 that can result in data loss. Please upgrade to 1.3 as soon as possible and never use 1.2 in production. See #81 for more details.

1.2 (06-07-2017)

Warning

There is a critical bug in this release that can result in data loss. Please upgrade to 1.3 as soon as possible and never use 1.2 in production. See #81 for more details.

- important performance improvements (less database and cache queries)
- A brand new REST API based on Django REST Framework, to interact with preferences (this is an optional, opt-in feature)
- A new FilePreference [original work by @macolo]

1.1.1 (11-05-2017)

Bugfix release to restore disabled user preferences admin (#77).

1.1 (06-03-2017)

- Fixed #49 and #71 by passing full section objects in templates (and not just the section identifiers). This means it's easier to write templates that use sections, for example if you have i18n in your project and want to display the translated section's name. URL reversing for sections is also more reliable in templates.
If you subclassed PreferenceRegistry to implement your own preference class and use the built-in templates, you need to add a section_url_namespace attribute to your registry class to benefit from the new URL reversing.

[Major release] 1.0 (21-02-2017)

Dynamic-preferences was released more than two years ago, and since then, more than 20 feature and bugfix releases have been published. But even after two years the project was still advertised as in Alpha-state on PyPi, and the tags used for the releases were implicitly saying that the project was not production-ready. Today, we're changing that by releasing the first major version of dynamic-preferences, the 1.0 release. We will stick to semantic versioning and keep backward compatibility until the next major version.

Dynamic-preferences is already used in various production applications. The implemented features are stable, working, and address many of the use cases the project was designed for:

- painless and efficient global configuration for your project
- painless and efficient per-user (or any other model) settings
- ease-of-use, both for end-users (via the admin interface) and developers (settings are easy to create and to manage)
- more than decent performance, thanks to caching

By making a major release, we want to show that the project is trustworthy and, in the end, to attract new users and develop the community around it. Development will go on as before, with an increased focus on stability and backward compatibility. Because of the major version switch, some dirt was removed from the code, and manual intervention is required for the upgrade. Please have a look

Thanks to all the people who contributed over the years by reporting bugs, asking for new features, working on the documentation or on implementing solutions!

0.8.4 (10-01-2017)

This version is an emergency release to restore backward compatibility that was broken in 0.8.3, as described in issue #67.
for the detailed instructions.

Please upgrade as soon as possible if you use 0.8.3. Special thanks to czlee for reporting this!

0.8.3 (06-01-2017) (DO NOT USE: BACKWARD INCOMPATIBLE)

This release introduced by mistake a backward incompatible change (commit 723f2e). Please upgrade to 0.8.4 or higher to restore backward compatibility with earlier versions.

This is a small bugfix release. Happy new year everyone!

- Now fetch model default value using the get_default method
- Fixed #50: now use real apps path for autodiscovering, should fix some strange errors when using AppConfig and explicit AppConfig path in INSTALLED_APPS
- Fix #63: Added initial doc to explain how to bind preferences to arbitrary models (#65)
- Added test to ensure form submission works when no section filter is applied, see #53
- Example project now works with latest django versions
- Added missing max_length on example model
- Fixed a few typos in example project

0.8.2 (23-08-2016)

- Added django 1.10 compatibility [ricard33]
- Fixed tests for django 1.7
- Fix issue #57: PreferenceManager.get() returns value [ricard33]
- Fixed missing comma in boolean serializer [czlee]
- Added some documentation and examples [JetUni]

0.8.1 (25-02-2016)

- Fixed still inconsistent preference order in form builder (#44) [czlee]

0.8 (23-02-2016)

Warning: there is a backward incompatible change in this release. To address #45 and #46, an import statement was removed from __init__.py.
Please refer to the documentation for upgrade instructions:

0.7.2 (23-02-2016)

- Fix #45: import error on pip install, and removed useless import
- Replaced built-in registries by persisting_theory, this will maintain a consistent order for preferences, see #44

0.7.1 (12-02-2016)

- Removed useless sections and fixed typos/structure in documentation, fix #39
- Added setting to disable user preferences admin, see #33
- Added setting to disable preference caching, fix #7
- Added validation against section and preference names, fix #28; it could introduce backward incompatible behaviour, since invalid names will stop execution by default

0.7 (12-01-2016)

- Added by_name and get_by_name methods on manager to retrieve preferences without using sections, fix #34
- Added float preference, fix #31 [philipbelesky]
- Made name, section read-only in django admin, fix #36 [what-digital]
- Fixed typos in documentation [philipbelesky].
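The by_name and get_by_name helpers mentioned in the 0.7 notes can be pictured with a tiny stand-in registry — a simplified model of the idea, not dynamic-preferences' actual implementation:

```python
class ToyRegistry:
    """Preferences keyed by (section, name), with section__name style lookup."""

    def __init__(self):
        self._prefs = {}

    def register(self, section, name, default):
        self._prefs[(section, name)] = default

    def get(self, key):
        """Normal lookup: 'section__name'."""
        section, _, name = key.partition("__")
        return self._prefs[(section, name)]

    def by_name(self, name):
        """Every preference with this name, whatever its section."""
        return {s: v for (s, n), v in self._prefs.items() if n == name}

registry = ToyRegistry()
registry.register("general", "maintenance_mode", False)
registry.register("blog", "maintenance_mode", True)
```

The point of the feature is the last method: when a preference name is unique enough, callers can skip spelling out the section.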
https://pypi.org/project/django-dynamic-preferences/1.4.2/
CC-MAIN-2019-51
refinedweb
1,209
55.64
Sambit Supriya Dash

Statistics
RANK 1,939 of 257,551
REPUTATION 22
CONTRIBUTIONS 5 Questions, 25 Answers
ANSWER ACCEPTANCE 20.0%
VOTES RECEIVED 1
RANK 16,238 of 17,744
REPUTATION 2
AVERAGE RATING 0.00
DOWNLOADS 5
ALL TIME DOWNLOADS 15
CONTRIBUTIONS 0 Posts
CONTRIBUTIONS 0 Public Channels
CONTRIBUTIONS 0 Highlights

Content Feed

extracting files having names with the same date from a dataset
a = 196611110428;
strA = string(a);
d = datetime(strA,'InputFormat','yyyyMMddHHmm');
disp(d)
Month = month(d);
Day = day(d)...
3 months ago | 0

How can I use boxplot for 1000 dataset in x and y with certain defined number of boxes?
ax = gca;
hold on
boxplot(ax, a, 'Position', 1)
boxplot(ax, b, 'Position', 2)
ax.XLim = [min(a), max(a)];
ax.YLim = [min(...
3 months ago | 1 | accepted

Im trying to graph f(x,y)=x+2y-2 and g(x,y)x^2+4y^2-4 on the same graph
fsurf(@(x,y) x+(2*y)-2)
hold on
fsurf(@(x,y) (x^2)+(4*(y^2))-4)
hold off
6 months ago | 0 | accepted

How to plot graph of differential equations?
This lin...
7 months ago | 0

how to load multiple mat files which has file names in 'YYYYMMDD.mat' file format?
load YYYYMMDD.mat
Date = datetime(YYYYMMDD,'ConvertFrom','yyyymmdd');
7 months ago | 0 | accepted

Hi, I am looking to purchase Matlab license and would like some advice on which will be a better option?
If you have any resource person who is pursuing any degrees from any universities, you can ask for their credentials for the max...
7 months ago | 0

Question: How to find the centroid of curve ?
I have a curve fitted to a given data and got this curve(x). curve(x) = a*exp(b*x) + c*exp(d*x) Coefficients (with 95% co...
8 months ago | 1 answer | 0

To reduce the number of lines
This may help you, in terms of looping, addition and substraction,
x = [-2 -1 1 3];
y = [-1 3 -1 19];
syms t;
for i = 1:leng...
8 months ago | 0

Question: Can anyone suggest how to write/develop/ from where to refer 3D Space Frame Loading (non-orthogonal structures) code ?
A trussed frame is there whose members are non-orthogonals to each other and legs are fixed with the base (fixed joints), I want...
8 months ago | 0 answers | 0

Question: How to find Area Under the Curve of a Smoothing Spline Curve Fitted Model from Discrete Data Points ?
For example: I have 12 y values for different 12 x values and by using MATLAB inbuild app of Curve Fitting I got very good resul...
8 months ago | 1 answer | 0

Statistical comparison between matrices
After converting all the 24 matrices of 32x30 into 24 vectors of 960 elements each, take each vector of 960 elements and sort th...
9 months ago | 0 | accepted

Question: How to convert FORTRAN files (.F) of before 1990s to MATLAB files ?
I have files which were developed before the era of .f90 (file format of FORTRAN after 1990). I want to compile them in MATLAB. ...
9 months ago | 1 answer | 0

Quick Reference - MATLAB Fundamentals
Yes that's available now. One can download whole 44 sheets of paper without the record projects of the course.
9 months ago | 0

skewness and kurtosis of a weighted distribution
The function is developed in Matlab now, Kurtosis - MATLAB kurtosis (mathworks.com), one must check this link Kurtosis - MATLAB...
10 months ago | 0

unable to select data-points in a plot
[FDT,ROL] = rmoutliers(y);
OL = y(ROL) % this would be your detected outliers in your dataset (dependent variable)
10 months ago | 0

Numerical Analysis of Turbulent Flow over a Backward Facing Step
Follow this comment for initialization of the values,
11 months ago | 0

Help with a CFD code
Follow these codes,
1- mathworks.com/matlabcentral/fileexchange/57064-lid-driven-cavity-flow
2- mathworks.com/matlabcentral/fi...
11 months ago | 0

Recommendations for book / tutorial on Probability & Statistics in MATALB
I am pretty sure, you are receiving the answer very lately, Still I would strongly recommend you to go through the courses, 1- ...
11 months ago | 0

Question: How to solve a system of 3 or more ODEs having interdependent independent variables as independent variables of the dependent variable on the same derivative ?
Suppose, t1 = t2^2; dX/dt1 = z+4; dY/dt1 = 2*z+1; dX/dt2 = 2+3*z; dY/dt2 = 3+(z^2); (May be the question doesn't make any s...
11 months ago | 0 answers | 0

How can I cite MATLAB in my paper?
Suppose you want to cite a reference from MATLAB for example a mesh over space shuttle nose in a research article publication in...
11 months ago | 0

i am confused with commands for loop
r = (randi([0 10],1,10))';
r0 = r == 0;
if sum(r0) == 0
    disp('Zero is not found')
else
    for i = 1:length(r)
    ...
11 months ago | 0

How to substitute x and y with X and Y respectively and get the same result as Test1(last line of my code)
Just before your major calculations (where you are willing to use the variables X & Y), you can add a line, x = X; y = Y; The...
11 months ago | 0

How to plot with a reciprocal (1/x) scale
Supposing this example from the MATLAB previous codes,
A = [0.5,0.65,0.7,0.66,0.81]; % The plotting function values
B = [5,10...
1 year ago | 0

import data from word to MATLAB and save to other word file?
For Importing, refer, ...
1 year ago | 0

Add titles to excel file columns
This link would work for you, ...
1 year ago | 0

how can i use for loop for this script
According to your given document, As per this formula, Your parameters in the code should be, alpha(b) = -((rho*w^2)/E)-((rho...
1 year ago | 0

Concatenate 3-D matrix in a for loop
With an example, I would like to answer this. Suppose, "a" is the given 3D Matrix and "b" is the 2D matrix returns the concaten...
1 year ago | 0

how can i use for loop for this script
As per your given expression in the question, the assumptions of the constants here taken as 1, The running code is, roh = 1;...
1 year ago | 0 | accepted

how can i use for loop for this script
k_j(b) =(-1)^(j(b)/2)*([sqrt(alpa+(-1)^j(b))*sqrt((alpa)^2+4*beta)/2]);
Provide the original formula in a text or written manne...
1 year ago | 0

Wave simulation
This answer may not be useful for the author (it's getting answered after a decade), but could possibly useful for others... T...
1 year ago | 0
https://in.mathworks.com/matlabcentral/profile/authors/20949372?detail=answers
Post-truth era news article metadata service.

Project description

Metadoc

Metadoc is a post-truth era news article metadata retrieval service. It does social media activity lookup, source authenticity rating, checksum creation, json-ld and metatag parsing as well as information extraction for named entities, pullquotes, fulltext and other useful things based off of arbitrary article URLs. Also, Metadoc is built to be relatively fast.

Example

You just throw it any news article URL, and Metadoc will yield.

from metadoc import Metadoc
url = ""
metadoc = Metadoc(url=url)
res = metadoc.query()

=> {'__version__': '0.9.0',
    'authors': ['Kim Zetter'],
    'canonical_url': '',
    'domain': {
        'credibility': {'fake_confidence': '0.00', 'is_blacklisted': False},
        'date_registered': None,
        'favicon': '',
        'name': 'theintercept.com'},
    'entities': {
        'keywords': ['cellebrite', 'fbi', 'skype', 'intercept' ...]},
    'image': '',
    'language': 'en',
    'modified_date': None,
    'published_date': '2016-11-17T11:00:36+00:00',
    'scraped_date': '2018-07-10T12:13:46+00:00',
    'social': [{'metrics': [{'count': 7340, 'label': 'sharecount'}], 'provider': 'facebook'}],
    'text': {
        'contenthash': '940a62c70db255b4aec378529ae7a2c8',
        'fulltext': 'a guardian of user privacy this year after fighting FBI demands to help crack into San Bernardino shooter Syed ...',
        'reading_time': 439,
        'summary': 'Your call logs get sent to Apple’s servers whenever iCloud is on — something Apple does not disclose.'},
    'title': 'iPhones Secretly Send Call\xa0History to Apple, Security Firm Says',
    'url': ''}

Trustworthiness Check

Metadoc does a basic background check on article sources. This means a simple blacklist lookup via whois data on the domain. Blacklists taken into account include the controversial PropOrNot. Thus, only if a domain is found on every blacklist do we spit out a fake_confidence of 1. The resulting metadata should be taken with a grain of salt.
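The scoring rule described above (a domain only reaches a fake_confidence of 1 when every consulted blacklist contains it) can be sketched in a few lines of Python. Everything below is illustrative: the blacklist contents and the helper function are made up for the example and are not Metadoc's actual code.

```python
# Illustrative stand-in for the blacklist scoring Metadoc describes;
# not Metadoc's actual implementation.
def fake_confidence(domain, blacklists):
    """Fraction of blacklists containing the domain, formatted like Metadoc's output."""
    if not blacklists:
        return "0.00"
    hits = sum(1 for bl in blacklists if domain in bl)
    return "{:.2f}".format(hits / len(blacklists))

blacklists = [
    {"hoax.example", "totallyfake.example"},  # e.g. a PropOrNot-style list
    {"hoax.example"},                         # a second, stricter list
]
print(fake_confidence("hoax.example", blacklists))         # 1.00: on every list
print(fake_confidence("totallyfake.example", blacklists))  # 0.50: on one of two
print(fake_confidence("theintercept.com", blacklists))     # 0.00: on none
```

Only a domain present on every list reaches 1.00, which is why a hit on a single controversial list alone never produces a confident "fake" verdict.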
Part-of-speech tagging

For speed and simplicity, we decided against nltk and instead rely on the Averaged Perceptron as imagined by Matthew Honnibal @explosion. The pip install comes pre-trained with a CoNLL 2000 training set which works reasonably well to detect proper nouns. Since training is non-deterministic, unwanted stopwords might slip through. If you want to try out other datasets, simply replace metadoc/extract/data/training_set.txt with your own and run metadoc.extract.pos.do_train.

Purpose

This library is used in the context of a news-related software undertaking called Praise. We're building the first social network dedicated to quality journalism recommendations, synthesizing what we dub "audience-evaluated content" with automated metadata. If you're intrigued and might want to work with us, feel free to drop a line to a@praise.press.

Install

Requires python 3.5.

Using pip:
pip install metadoc

Develop

Mac OS:
brew install python3 libxml2 libxslt libtiff libjpeg webp little-cms2

Ubuntu:
apt-get install -y python3 libxml2-dev libxslt-dev libtiff-dev libjpeg-dev webp whois

Fedora/Redhat:
dnf install libxml2-devel libxslt-devel libtiff-devel libjpeg-devel libjpeg-turbo-devel libwebp whois

Then:
pip3 install -r requirements-dev.txt
python serve.py
=> serving @ 6060

Test

py.test -v tests

If you happen to run into an error with OSX 10.11 concerning a lazy-bound library in PIL, just remove /PIL/.dylibs/liblzma.5.dylib.

Todo

- Page concatenation is needed in order to properly calculate wordcount and reading time.
- Authenticity heuristic with sharecount deviance detection (requires state).

Perf: Worst offender is nltk's pos tagger; roll our own with an Averaged Perceptron. Newspaper's summarize produces pullquotes; fulltext takes a while. Move to libextract?

Contributors

Martin Borho
Paul Solbach

Metadoc is a software product of Praise Internet UG, Hamburg. Metadoc stems from a pedigree of nice libraries like goose3, langdetect and nltk.
Metadoc leans on this perceptron implementation inspired by Matthew Honnibal. Metadoc is a work-in-progress.
https://pypi.org/project/metadoc/
Apple’s open-source CommonCrypto isn’t shabby for anyone looking to implement encryption in their app, but it isn’t very “Swifty” to use. Luckily, Danny Keogan wrote a nice wrapper called IDZSwiftCommonCrypto, which renders Swift encryption a much friendlier beast.

Introduction (0:00)

In this post, I’m going to discuss a wrapper I wrote around CommonCrypto called IDZSwiftCommonCrypto, which makes it a lot Swiftier to use. Upfront, I’ll mention that you can find all the below code on GitHub, and if you have any questions about it, you can find me @iOSDevZone on Twitter.

For a quick outline, this post will cover:

- Intro to CommonCrypto
- How to Access CommonCrypto in Swift
- IDZSwiftCommonCrypto Design Goals
- IDZSwiftCommonCrypto API
- Some words of caution
- Other Libraries/Projects
- Summary

What is CommonCrypto? (1:26)

CommonCrypto is Apple’s Open Source cryptography library that is included in iOS & OS X. You can find it at opensource.apple.com. When you’re choosing a crypto library, it’s important to choose one that’s open source, because otherwise you don’t really know what’s going on in there.

CommonCrypto is a C library, so that makes it a little bit unpalatable to use in Swift. It is part of System.framework and, unfortunately, it’s not directly accessible by default in Swift. But we can work around that. It provides a number of features: message digests, hash-based message authentication codes, cryptors (which are basically a catch-all for encryptors and decryptors), and then a couple of utility routines, such as key derivation routines and random number generation routines.

API Design Goals (2:23)

When designing my API, I had several goals in mind. First of all, I wanted it to be implementation independent. I didn’t want to be tied into CommonCrypto, because there are certain things it can’t do that other libraries like OpenSSL can.
The user-facing API of it doesn’t actually bleed through any of CommonCrypto; it’s pretty much independent. There is also an IDZSwiftOpenSSL, but that’s not quite ready for prime time yet. I also wanted to make the layer as thin as possible to make it Swifty, while avoiding the introduction of any security issues. It’s extremely easy to introduce security problems if you’re meddling with a crypto library. Finally, I wanted it to be easy to use. There are a lot of inconsistencies in Apple’s CommonCrypto API, and even in C, it’s not that pleasant to use.

IDZSwiftCommonCrypto API (3:32)

The first problem I ran up against when trying to use CommonCrypto was that Apple does not provide a module map. I kept getting a “No such module” error. You might say “it’s in the System.framework module!”, but somehow they don’t export the right symbols. After a bit of digging about, I found that the solution to this was to create a fake module map for the CommonCrypto library.

module CommonCrypto [system] {
    header "/Applications/Xcode-7.0-7A218.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/CommonCrypto/CommonCrypto.h"
    header "/Applications/Xcode-7.0-7A218.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/CommonCrypto/CommonRandom.h"
    export *
}

This turns out to be a pain when you’re sending out a library, because the header entries require absolute paths. The example here is from my Mac, and you can see that I put my Xcode in a non-standard place because, like every Swift developer, I have three or four different versions going at the same time. This is basically something that you want to write a script to do. Originally, I wrote a Bash script. There exists a newer version, as I’ve added support for tvOS and watchOS, that’s all written in Swift.
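For illustration, the shape of such a generator script is simple. The Python sketch below is hypothetical (the real scripts in the repository are Bash and, later, Swift), and the SDK path argument is a placeholder; on a real machine you might obtain it from `xcrun --show-sdk-path`.

```python
# Hypothetical sketch of a module-map generator; the SDK path is a placeholder.
def make_module_map(sdk_path, headers=("CommonCrypto.h", "CommonRandom.h")):
    """Build a module map whose header entries use the given absolute SDK path."""
    lines = ["module CommonCrypto [system] {"]
    for h in headers:
        lines.append('    header "%s/usr/include/CommonCrypto/%s"' % (sdk_path, h))
    lines += ["    export *", "}"]
    return "\n".join(lines)

print(make_module_map(
    "/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform"
    "/Developer/SDKs/MacOSX10.11.sdk"))
```

Regenerating the file per machine sidesteps the absolute-path problem entirely.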
Once you’ve got access to the various routines, we can look and find out what they are, what we can do with them, and how I wrapped them up in Swift.

Message Digests (4:52)

Basically, this is a cryptographic hash function. It takes a message in and produces a digest on the other side. It has a few properties that make it a cryptographic hash function:

- Given m, easy to calculate h
- Given h, difficult to find m
- Given m1, difficult to find m2 such that hash(m1) == hash(m2)
- Difficult to find a pair (m1, m2) such that hash(m1) == hash(m2)

So, given a message, it should be quick and easy to calculate the hash, h. Also, given the hash h, it should be difficult (or “infeasible”) to find m, the message. Furthermore, if you have a message, m1, it should be extremely difficult to find another message m2 that hashes to the same value. The final thing is that it should be very difficult to come up with two messages, m1 and m2, that hash to the same value. There are fancy names for all of those, like “preimage resistance” and “second preimage resistance,” but that’s the gist of what they are.

CommonDigest.h (6:04)

If we were trying to do this in C, what would this look like? Here is a quick excerpt from CommonDigest.h:

public func CC_MD2_Init(c: UnsafeMutablePointer<CC_MD2_CTX>) -> Int32
public func CC_MD2_Update(c: UnsafeMutablePointer<CC_MD2_CTX>, _ data: UnsafePointer<Void>, _ len: CC_LONG) -> Int32
public func CC_MD2_Final(md: UnsafeMutablePointer<UInt8>, _ c: UnsafeMutablePointer<CC_MD2_CTX>) -> Int32

There’s an initialization routine that initializes some context, there’s an update which takes in a buffer and updates the calculation, and then there’s a final routine, which gives you the actual digest. Notice that the context is the first argument of the first two routines and the last one in the third. In general, you’ll see that they’re completely inconsistent about where they put their arguments.
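The same init/update/final shape exists in Python’s standard hashlib module, which makes the pattern easy to see without the unsafe pointers. The message below is the one the article uses later as m1:

```python
import hashlib

# The same init / update / final pattern, minus the unsafe pointers.
m1 = "The quick brown fox jumps over the lazy dog."

d = hashlib.sha1()            # "init": allocate and initialize the context
d.update(m1.encode("utf-8"))  # "update": feed in a buffer (can be repeated)
digest = d.hexdigest()        # "final": read out the digest

print(digest)  # 408d94384216f890ff7a0c3528e8bed1e0b01621
```

Feeding the message in as several update calls produces exactly the same digest as one call, which is what makes the pattern suitable for streaming input.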
For digest, though, they’re particularly bad, because this is for an old digest function called MD2, which you probably shouldn’t use anymore. Of course, they add more for MD4 and for MD5. Eventually, you’ve got eight algorithms, each with three different routines, and it’s a mess. We can definitely do better when we’re making this Swifty. We don’t want to bring this all in as it is.

Simplify the Types (7:04)

One thing that I found Swift is doing for me is that I think a lot more about types. If we step back from those functions a bit, we can see that there were basically three functions:

typealias Initializer = (Context) -> (Int32)
typealias Updater = (Context, Buffer, CC_LONG) -> (Int32)
typealias Finalizer = (Digest, Context) -> (Int32)

And if we simplify the types, we can see that this is what they look like. This just gives a much clearer view of what’s going on:

class DigestEngineCC<C> : DigestEngine {
    typealias Context = UnsafeMutablePointer<C>
    typealias Buffer = UnsafePointer<Void>
    typealias Digest = UnsafeMutablePointer<UInt8>
    typealias Initializer = (Context) -> (Int32)
    typealias Updater = (Context, Buffer, CC_LONG) -> (Int32)
    typealias Finalizer = (Digest, Context) -> (Int32)
    /* . . . */
    init(initializer : Initializer, updater : Updater, finalizer : Finalizer, length : Int32)
    func update(buffer: Buffer, _ byteCount: CC_LONG)
    func final() -> [UInt8]
}

We have to peel it back a little bit, of course, because there are those nasty pointers and such. Then, if we wrap that up in a class, this all becomes rather nice. This is only parameterized by the context structure of the individual algorithm. Otherwise, we can abstract away all that complexity. If we create one of these for each of the algorithms, we’re then going to be able to use it.
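The wrapping idea itself is language-independent. Here is a sketch of the same design in Python, where a toy byte-sum checksum stands in for an algorithm's Init/Update/Final triple (the toy functions are illustrative, not cryptographic):

```python
# Wrap an (initializer, updater, finalizer) triple behind one interface,
# analogous to DigestEngineCC. The toy checksum below is a stand-in, not crypto.
class DigestEngine:
    def __init__(self, initializer, updater, finalizer):
        self.ctx = initializer()                 # like CC_XX_Init
        self.updater = updater
        self.finalizer = finalizer

    def update(self, data):
        self.ctx = self.updater(self.ctx, data)  # like CC_XX_Update
        return self                              # allow chaining

    def final(self):
        return self.finalizer(self.ctx)          # like CC_XX_Final

# Toy "algorithm": a running byte sum reduced modulo 256.
toy_init = lambda: 0
toy_update = lambda ctx, data: ctx + sum(data)
toy_final = lambda ctx: ctx % 256

engine = DigestEngine(toy_init, toy_update, toy_final)
digest = engine.update(b"abc").final()
print(digest)  # 38, i.e. (97 + 98 + 99) % 256
```

The rest of a program only ever talks to DigestEngine, so the per-algorithm function triples never leak past the constructor, which is exactly the effect the Swift class achieves.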
protocol DigestEngine {
    func update(buffer: UnsafePointer<Void>, _ byteCount: CC_LONG)
    func final() -> [UInt8]
}

And if we create a protocol, which I have called DigestEngine, then the rest of the code can speak to our engine without having to worry about generics or the particular algorithm. This gave me the first view of my Digest API.

The Digest API 1.2 and 2.0 (8:08)

public class Digest {
    public enum Algorithm {
        case MD2, MD4, MD5, SHA1, SHA224, SHA256, SHA384, SHA512
    }
    public init(algorithm: Algorithm)
    public func update(data : NSData) -> Digest?
    public func update(byteArray : [UInt8]) -> Digest?
    public func update(string : String) -> Digest?
    public func final() -> [UInt8]

    var engine: DigestEngine
}

I think you’ll agree, it looks a little bit nicer than the C version. Basically, the init routine populates the engine based on what algorithm you pass in. From then on, it’s able to talk to the digest engine protocol, and it doesn’t care what algorithm you’re using.

Above is the version as it was in Swift 1.2, but we can actually tidy things up a little bit more, because in Swift 2.0, they introduced protocol extensions. If we define a protocol Updateable, we can then factor out all the code that deals with updating from different types.

public protocol Updateable {
    var status : Status { get }
    func update(buffer : UnsafePointer<Void>, _ byteCount : size_t) -> Self?
}

extension Updateable {
    public func update(data: NSData) -> Self? {
        update(data.bytes, size_t(data.length))
        return self.status == Status.Success ? self : nil
    }

    public func update(byteArray : [UInt8]) -> Self? {
        update(byteArray, size_t(byteArray.count))
        return self.status == Status.Success ? self : nil
    }

    public func update(string: String) -> Self? {
        update(string, size_t(string.lengthOfBytesUsingEncoding(NSUTF8StringEncoding)))
        return self.status == Status.Success ? self : nil
    }
}

You have all these different types, and it becomes annoying to use the API if you’ve got some data that’s NSData, some that’s a UInt8 array, and some that’s a String. This just makes it a little bit nicer. With protocol extensions, I don’t have to repeat this code in every single class, which is what I had to do in Swift 1.2. So, a win for Swift 2.0.

public class Digest : Updateable {
    public enum Algorithm {
        case MD2, MD4, MD5, SHA1, SHA224, SHA256, SHA384, SHA512
    }
    public init(algorithm: Algorithm)
    public func update(buffer: UnsafePointer<Void>, _ byteCount: size_t) -> Self?
    public func final() -> [UInt8]
}

Using Digest (9:34)

What does it look like now if I want to calculate a digest?

let m1 = "The quick brown fox jumps over the lazy dog."
let sha1 = Digest(algorithm: .SHA1).update(m1)?.final()

let d = Digest(algorithm: .SHA1)
d.update(m1)
let sha1 = d.final()

It’s fairly straightforward. The top version shows how, if you’re just calculating the digest of a short buffer, you can use optional chaining to put it all on one line. For the lower version, say you were generating a digest over something coming in off the network: although I only call update once here, you would call update as each block comes in, and eventually read off your digest at the end.

So those are message digests; they’re great for storing passwords in a database (as long as you add salt), and they’re great for testing whether something has changed. If you use Git, of course, you’re familiar with this. One thing they’re not good for, though, is detecting whether somebody else has intercepted the message and modified it. All you need in order to calculate the digest is the message and knowledge of the algorithm. If you want protection, then you need to use hash-based message authentication codes, or HMAC.
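To make that last point concrete before moving on: in Python, anyone who knows the algorithm can recompute a bare digest for a substituted message, while an HMAC (shown with the stdlib hmac module, using the same key as the article's HMAC example) cannot be forged without the key.

```python
import hashlib
import hmac

msg = b"The quick brown fox jumps over the lazy dog."
key = bytes.fromhex("408d94384216f890ff7a0c3528e8bed1e0b01621")

# A bare digest offers no protection against substitution: an attacker who
# intercepts (msg, tag) can replace both, since no secret is involved.
tampered = b"The quick brown fox naps under the lazy dog."
forged_tag = hashlib.sha1(tampered).hexdigest()

# An HMAC folds the shared key into the hash, so a valid tag cannot be
# produced (or verified) without the key.
tag = hmac.new(key, msg, hashlib.sha1).hexdigest()
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha1).hexdigest())
print(ok)  # True; compare_digest avoids timing side channels
```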
HMAC (10:51)

Basically, an HMAC takes a message digest and permutes it with a key, such that to calculate the HMAC, you must be in possession of the key, and to verify it, you also have to be in possession of the key. That in itself causes a little bit of a problem, because you have to make sure that the keys are securely transmitted. Luckily, there are key exchange protocols. (Key exchange protocols fall outside the scope of this present post, but if you’re interested in cryptography, these are amazing to look into. The most common one is called Diffie-Hellman key exchange, and it’s absolutely mind-blowing.)

An HMAC gives you the additional ability to verify that the message is as it was intended by the person who had control of the key. If you have good trust that the key belongs to that person, or that only you and the other person share the key, then you can know that the message came from them and that it is the message they sent.

Alright, let’s have a look at the header file:

public func CCHmacInit(ctx: UnsafeMutablePointer<CCHmacContext>, _ algorithm: CCHmacAlgorithm, _ key: UnsafePointer<Void>, _ keyLength: Int)
public func CCHmacUpdate(ctx: UnsafeMutablePointer<CCHmacContext>, _ data: UnsafePointer<Void>, _ dataLength: Int)
public func CCHmacFinal(ctx: UnsafeMutablePointer<CCHmacContext>, _ macOut: UnsafeMutablePointer<Void>)

The interesting thing here is that it’s completely inconsistent with the other API. In fact, it looks a little bit more like the message digest API. In particular, there’s this algorithm argument, so it looks like it should be much easier to Swiftify. However, there’s one problem: when Swift imports constants from C, it doesn’t import them as literals, and RawRepresentable enums can’t handle non-literals. Luckily, because Swift enums can have methods attached to them, we can solve it. It’s not that pretty, but if you just ignore that, you can bury all the complexity down a level.
public class HMAC {
    public enum Algorithm {
        case MD5, SHA1, SHA224, SHA256, SHA384, SHA512

        static let fromNative : [CCHmacAlgorithm: Algorithm] = [
            CCHmacAlgorithm(kCCHmacAlgSHA1):.SHA1,
            CCHmacAlgorithm(kCCHmacAlgMD5):.MD5,
            CCHmacAlgorithm(kCCHmacAlgSHA256):.SHA256,
            CCHmacAlgorithm(kCCHmacAlgSHA384):.SHA384,
            CCHmacAlgorithm(kCCHmacAlgSHA512):.SHA512,
            CCHmacAlgorithm(kCCHmacAlgSHA224):.SHA224
        ]

        func nativeValue() -> CCHmacAlgorithm {
            switch self {
            case .SHA1: return CCHmacAlgorithm(kCCHmacAlgSHA1)
            case .MD5: return CCHmacAlgorithm(kCCHmacAlgMD5)
            case .SHA224: return CCHmacAlgorithm(kCCHmacAlgSHA224)
            case .SHA256: return CCHmacAlgorithm(kCCHmacAlgSHA256)
            case .SHA384: return CCHmacAlgorithm(kCCHmacAlgSHA384)
            case .SHA512: return CCHmacAlgorithm(kCCHmacAlgSHA512)
            }
        }

        static func fromNativeValue(nativeAlg : CCHmacAlgorithm) -> Algorithm? {
            return fromNative[nativeAlg]
        }
    }
}

Once you do that, the HMAC API looks like this. As you can see, it’s beginning to look extremely similar to the digest one.

public class HMAC : Updateable {
    public enum Algorithm {
        case MD5, SHA1, SHA224, SHA256, SHA384, SHA512
    }
    public init(algorithm: Algorithm, key: NSData)
    public init(algorithm: Algorithm, key: [UInt8])
    public init(algorithm: Algorithm, key: String)
    public func update(buffer: UnsafePointer<Void>, _ byteCount: size_t) -> Self?
    public func final() -> [UInt8]
}

The only thing that makes it a little bit different is that I’ve given some convenience initializers, so you don’t have to cast anything if your key happens to be coming from Objective-C as NSData, or if you want to use a string as your key.

Using HMAC (13:41)

Using it is fairly straightforward. In fact, using it is exactly the same as the message digest, except for the additional initialization parameter of the key:

let key = arrayFromHexString("408d94384216f890ff7a0c3528e8bed1e0b01621")
let m1 = "The quick brown fox jumps over the lazy dog."
let hmac1 = HMAC(algorithm: .SHA1, key: key).update(m1)?.final()

let hmac2 = HMAC(algorithm: .SHA1, key: key)
hmac2.update(m1)
let hmac2final = hmac2.final()

Hopefully you agree that this is a bit better than the C interface.

Cryptor (13:58)

When you think of cryptography, you generally don’t think much of message digests and message authentication codes. Most people think of sending secret messages with keys, and all that sort of thing. The classes that look after that part are the cryptor classes. Adopting CommonCrypto’s terminology, I’m using “cryptor” to encompass both an encryptor and a decryptor.

When sending a message with cryptography, we start with a sender who has a message and a key. As with message authentication codes, both the sender and the receiver have to have a shared key, shared by some means (we assume a key exchange protocol, so whatever an eavesdropper sees of the exchange is not a problem). For some modes of operation, there will also be an initialization vector, which is just a random block used to start off the transmission.

The key, the plaintext, and the initialization vector go into the encryption. Out the other side comes the encrypted message, or ciphertext, and you transmit the initialization vector and the ciphertext. On the receiving side, the receiver passes in the key, the initialization vector, and the ciphertext, and if everything works out well, they get the plaintext back. As I mentioned, an eavesdropper can see the initialization vector, and it wouldn’t matter.

The initialization vector serves two purposes. The first one is that, in certain modes, the ciphertext for the current block depends on the previous block. Obviously, that begs the question of what the first block uses, since there’s no previous block. It uses the initialization vector.
However, because it’s random, it also serves another purpose: if all your messages were to start off with the same thing, then an attacker might notice that the first block is always the same ciphertext. Whereas, if you choose your initialization vector randomly, they won’t be able to notice that.

Cryptor in CommonCrypto (16:25)

public func CCCryptorCreate(op: CCOperation, _ alg: CCAlgorithm, _ options: CCOptions, _ key: UnsafePointer<Void>, _ keyLength: Int, _ iv: UnsafePointer<Void>, _ cryptorRef: UnsafeMutablePointer<CCCryptorRef>) -> CCCryptorStatus
public func CCCryptorUpdate(cryptorRef: CCCryptorRef, _ dataIn: UnsafePointer<Void>, _ dataInLength: Int, _ dataOut: UnsafeMutablePointer<Void>, _ dataOutAvailable: Int, _ dataOutMoved: UnsafeMutablePointer<Int>) -> CCCryptorStatus
public func CCCryptorFinal(cryptorRef: CCCryptorRef, _ dataOut: UnsafeMutablePointer<Void>, _ dataOutAvailable: Int, _ dataOutMoved: UnsafeMutablePointer<Int>) -> CCCryptorStatus
public func CCCryptorRelease(cryptorRef: CCCryptorRef) -> CCCryptorStatus

It’s a little more complicated now, because when we create the cryptor, not only is there a key, as there was with the HMAC, but we also have the initialization vector, and we have to tell it whether we’re encrypting or decrypting. Also, unlike HMACs and digests, we’re not just trying to calculate a single answer here; as we feed in data, we’re going to be getting data out, so each of the updates now has both input buffers and output buffers. Then, when we get to the end, there could still be some data in the buffers, so we get some additional data out at the end.

The other thing to note is that there’s also this CCCryptorRelease. To make sure that we clean up all the resources, we have to have a Swift deinit that calls this, so that everything’s looked after correctly.

Swift 2.0 OptionSetType (17:34)

Now for the options parameter. In C, this is a set of flags that can be bitwise OR’d together.
In Swift 1.2, this was a nightmare to deal with, because you might think that you could use enums, but you can’t bitwise OR enums together. (There was a way of doing it, but it took endless amounts of code to get it to work correctly.) I’m not going to go through the Swift 1.2 version here, it was so horrible, but in Swift 2.0, there’s OptionSetType, which makes it really easy to do.

public struct Options : OptionSetType {
    public typealias RawValue = Int
    public let rawValue: RawValue

    public init(rawValue: RawValue) { self.rawValue = rawValue }
    public init(_ rawValue: RawValue) { self.init(rawValue: rawValue) }

    public static let None = Options(rawValue: 0)
    public static let PKCS7Padding = Options(rawValue:kCCOptionPKCS7Padding)
    public static let ECBMode = Options(rawValue:kCCOptionECBMode)
}

At the bottom, those are the three options, and that’s all the code you have to do to bring them in. When you call it from Swift, though, you don’t bitwise OR them together; instead, you create an array containing the flags that you need.

Now, the padding flag. All the currently implemented cryptography algorithms in CommonCrypto are what are called “block-based algorithms”, which means that you pass in a block of a particular size and you get a block out. If you pass in less than that block, it can’t produce anything, and if your input isn’t an integral number of blocks long, obviously the final block is going to get truncated. To work around that, there’s a particular sort of padding, which is also designed so that it doesn’t leak too much information about how long your message is. Unless you’re coding to a particular protocol and you know that you’re going to have an integral number of block lengths, you probably want to specify this flag.

The other flag is the complete opposite. There are two modes that CommonCrypto’s cryptors can operate in. The first one (the default) is Cipher Block Chaining.
That’s the one that uses the initialization vector I mentioned previously, where the ciphertext of the current block depends not only on the current plaintext, but also on the ciphertext of the previous block.

Why is this important? Suppose I have a highly secret message, and I’m going to encrypt it. If I use Cipher Block Chaining, it will end up as essentially garbled white noise, and will be unintelligible. If I’d specified Electronic CodeBook mode, it would come out as readable. In Electronic CodeBook mode, if you put in the same plaintext, you’re going to get the same ciphertext out for a given key. So, although it’s taking blocks of pixels and encrypting them, lots of the blocks are the same, and there’s enough information leakage that you can make out what the original was.

The StreamCryptor API (20:56)

Okay, well if we put it all together then, what does the StreamCryptor look like?

public class StreamCryptor {
    public enum Operation {
        case Encrypt, Decrypt
    }
    public enum Algorithm {
        case AES, DES, TripleDES, CAST, RC2, Blowfish
    }
    public struct Options : OptionSetType {
        static public let None, PKCS7Padding, ECBMode: Options
    }
    public convenience init(operation: Operation, algorithm: Algorithm, options: Options, key: [UInt8], iv: [UInt8])
    public convenience init(operation: Operation, algorithm: Algorithm, options: Options, key: String, iv: String)
    public func update(dataIn: NSData, inout byteArrayOut: [UInt8]) -> (Int, Status)
    public func update(byteArrayIn: [UInt8], inout byteArrayOut: [UInt8]) -> (Int, Status)
    public func update(stringIn: String, inout byteArrayOut: [UInt8]) -> (Int, Status)
    public func final(inout byteArrayOut: [UInt8]) -> (Int, Status)
}

This class is quite complicated, and it has to be that way. First of all, you have to pass in all the parameters for the initialization, and then, as you pass in each block, you’ve got to get a block out as well.
If it happens that you’re not getting data nicely aligned, it’ll handle it correctly, as long as you specified the padding: it’ll apply the padding in the final block and make sure that everything gets through correctly. You’ll notice as well that I’m using Swift’s ability to return a tuple, because there are two things you might care about here. There’s a Status, because unlike the message digests and HMACs, according to the documentation there are cases in which this can fail. I’ve never actually seen it do so, but I suppose we should put it in there. The other thing is that it may not produce a full block, depending on what’s going on, so the other return value is a count of how much ciphertext was produced in that iteration.

So, if you’re decrypting data coming over the network, or if it’s a big file, this is the interface you want to use. If you’re only doing a small block of data, I also provided a simpler version:

public class Cryptor : StreamCryptor, Updateable {
    internal var accumulator: [UInt8]

    public func final() -> [UInt8]?
    public func update(buffer: UnsafePointer<Void>, _ byteCount: Int) -> Self?
}

You can see that Cryptor itself is actually pretty straightforward.

Using Cryptor (22:56)

var aesKey1Bytes = arrayFromHexString("2b7e151628aed2a6abf7158809cf4f3c")
var aesIV = arrayFromHexString("deadfacedeadfacedeadfacedeadface")

let encryptor = Cryptor(operation: .Encrypt, algorithm: .AES, options: [.PKCS7Padding], key: aesKey1Bytes, iv: aesIV)
let ciphertext = encryptor.update(m1)?.final()

let decryptor = Cryptor(operation: .Decrypt, algorithm: .AES, options: [.PKCS7Padding], key: aesKey1Bytes, iv: aesIV)
let plaintext = decryptor.update(ciphertext!)?.final()

This is how you would encrypt to a ciphertext and then decrypt the ciphertext back to plaintext.
The plaintext, in this case, will be an array of bytes, so you'll have to do a little bit more munging to get it into a string to prove that it is equal to what you originally passed in.

The PBKDF API (23:16)

It might be tempting, when you're trying to come up with a key for something, particularly if you're just encrypting a file to disk, to use a pass phrase and to use the ASCII encoding of the pass phrase or something like that. In the past, that may well have been done, but with modern computers you probably don't want to do that, because they can guess too many thousands or millions of keys a second. A better way to do it is to use a password-based key derivation function, or PBKDF.

public class PBKDF {
    public enum PseudoRandomAlgorithm { case SHA1, SHA224, SHA256, SHA384, SHA512 }

    public class func calibrate(passwordLength: Int, saltLength: Int,
                                algorithm: PseudoRandomAlgorithm,
                                derivedKeyLength: Int, msec : UInt32) -> UInt

    public class func deriveKey(password : String, salt : String,
                                prf: PseudoRandomAlgorithm,
                                rounds: uint, derivedKeyLength: UInt) -> [UInt8]
}

This is a function specifically designed to be expensive to compute. Normally, these functions strike a balance between being computationally expensive and memory expensive, so that they slow down an attacker's ability to recover the key. I'm not going to bother looking at the header file here, but these are basically Swifty translations of the two functions related to this functionality. The first one is calibrate: given a specific password length, a specific salt length, a desired derived key length, and the amount of time that you're willing to wait on a device, it tells you how many rounds of the function you can run. Basically, the higher the number of rounds, the longer it's going to take an attacker to try and guess any one password.
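Python's standard library happens to expose the same primitive, which makes the idea easy to experiment with outside Swift. A sketch using hashlib.pbkdf2_hmac (the stdlib's PBKDF2, not the CommonCrypto wrapper discussed here):

```python
import hashlib

def derive_key(password: str, salt: bytes, rounds: int, key_length: int) -> bytes:
    # The round count is the knob that makes each password guess expensive
    # for an attacker while staying tolerable for the legitimate user.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, rounds, dklen=key_length)
```

The same password and salt always yield the same key, so only the salt, never the derived key, needs to be stored alongside the data.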
Now, you're probably not going to do this as part of an app, but it's useful to have it so that when you bring out the next version of your app, you can up the number of rounds if the available computing power has gone up enough, without inconveniencing your user. Once you've decided on the number of rounds, you use deriveKey to derive key material from the password. It takes the password itself and the salt, which is a bit like the initialization vector that I mentioned for the cryptors. Essentially, the salt is a random buffer that's used to permute your password. It's normally stored with the password or transmitted with it, but the main thing is that it means that if two people have the same password, it won't appear the same.

The Random API (26:34)

public class Random {
    public class func generateBytes(byteCount : Int) throws -> [UInt8]
}

For both salts and initialization vectors, it's important that they're random. It's also important that these random numbers are of sufficiently high quality, because low-quality randomization could mean that an attacker could have an easier time gaining information about your system.

Some Cautionary Words (27:15)

IDZSwiftCommonCrypto should be considered beta

I've tested it reasonably well, but I think there are still some problems in a few places that I'm looking for good tests for. If you're using it, please treat it as a beta.

Don't try to invent your own protocol; there be dragons!

Instead, use one of the existing ones. Use a peer-reviewed one. It's incredibly easy to shoot yourself in the foot with this. Even some of the existing, well-tested, well-reviewed algorithms like SSH have their problems. Many of the implementations of OpenSSH use a limited version of Diffie-Hellman key exchange, and recent papers have suggested that this is exploitable. If professionals in the area can occasionally screw things up, then the chances are, if you're rolling your own, you're going to make a mess of it.
If the stuff you're dealing with is very sensitive for your users, that could be a disaster.

Don't write your own crypto libraries

I should say that I didn't write any crypto algorithms here, I just wrapped them. I've said this to a few people and they said, "Well, why? I mean, what could possibly go wrong?" Anybody who knows this field will giggle to themselves.

Check local jurisdiction for laws governing use/sale/export of products containing/using cryptography

There are generally laws governing particularly strong cryptography. If you're submitting to the U.S. App Store and you've crossed a certain threshold, you'll have to provide the information. You'll need to deal with the Bureau of Industry and Security to get your export license.

Other Projects (32:56)

- RNCryptor (Rob Napier) - Supports C++, C#, Java, PHP, Python, Javascript, and Ruby
- CryptoSwift (Marcin Krzyżanowski) - Pure Swift implementation
- Crypto (Sam Soffes) - Extensions to NSData and NSString

Summary (34:01)

- Use module maps to generate Swift prototypes for forgotten functions
- Use generic classes to unify related functions and structures
- Use protocol extensions to factor out repetitive code
- Use protocols to bridge non-generic to generic classes
- Use customized enums to work around RawRepresentable limitations
- Use OptionSetType to wrap bitwise flags

Q&A (35:36)

Q: One thing I'm curious about is when submitting things to the App Store, there's a "Does your app contain cryptography?" question, but I've never been in a position where I've needed to check Yes on that. What is that process like, say, here in the United States? How painful is it if you want to get something approved to the App Store that has cryptography in it?

Danny: I can only speak to part of that. One of my apps in the App Store does have cryptography in it, but doesn't have a level of cryptography that requires me to have a registration number. When you click the Yes button, it asks you a number of other questions.
In particular, and this is one of the deciding factors, is something like, "Is it only used to keep user information private?" I was using it for protecting user passwords, and there was a small amount of cryptography, and I think Diffie-Hellman key exchange as well. Mine was below the threshold, where I simply said, "Yes, it's in there, but it's below this threshold, and this is what it does." That was sufficient for the App Store. If that's not enough, the first stage is you register with the Bureau of Industry and Security, then you have to fill out a SNAP-R application with a whole bunch of information about how strong the cryptography is and what it's used for. Depending on that, you get either an ERN or a CCATS. I think you have to be doing really high-level stuff to get a CCATS. I haven't really had to go through the full process. My initial registration took very little time and was very painless.

Q: Did you consider writing this as a bunch of extensions for NSData and NSString? It looks like you've written it as a standalone. Also, did you consider using a sequence type or generator, because it looks like random gives you a sequence of bytes? It might be possible to implement it using a sequence type.

Danny: Basically, I was trying to provide building blocks. It is really easy to implement the extensions using Sam Soffes's library. It's like three lines of code for each one. The one big reason you may not want to do it, though, is that some of this stuff takes a while, especially if your data is in any way big. It can take quite a few seconds to encrypt, so you wouldn't want that happening on your main thread. You'd have all sorts of synchronization and things, but it would be fairly easy to do. As for the sequence type thing, you could, but the underlying mechanism expects you to be requesting a specific length, and you normally know a priori what that length is because you're asking for it for an initialization vector or a salt.
You're not going to really ask for an endless stream of random numbers. However, there would be no reason why you couldn't build up such a thing quite easily from this.

About the content

This content has been published here with the express permission of the author.
https://academy.realm.io/posts/danny-keogan-swift-cryptography/
CC-MAIN-2018-22
refinedweb
5,455
52.49
> Hello, this question is very stupid. But I am making a padlock script that is unlocked with 4 different integers, and I was wondering if there is any way of making it so that, for example, if you have a main int called currentCode, what I want to do is set all of the 4 integers into 1. So option1, option2, option3 and option4 would be, for example, (6 4 7 8) as the separate integers. Then I want currentCode to display the 4 options (6478). If there isn't any way of doing this that's fine, as long as someone can guide me in the direction I need to go towards. Like if I could do something like

currentCode = option1, option2, option3, option4;

That would be great, but that doesn't seem to be the case.

There is no need to consider this as far as the specifics of the coding go. Only for display purposes. You can do something like this, however, using an Array.

Thank you for the quick reply. Okay, so is there any way of doing this with arrays then? So like being able to set 4 options of the array to the 4 integers? Also, was just thinking: would I be able to do this with strings instead?

You can make an array of anything, more or less.

string[]   // 1d string array
string[][] // 2d string array

Answer by fafase · Mar 11, 2018 at 11:02 AM

The solution to your problem is an enum with the Flags attribute.

[System.Flags]
public enum Option
{
    First = 1,
    Second = 2,
    Third = 4,
    Fourth = 8,
    Fifth = 16
}

Notice the given values are powers of two. You can then declare an enum reference and add values to it:

Option options = Option.First;
options |= Option.Third;
options |= Option.Fifth;

This contains First, Third and Fifth. You can figure out if it contains a specific value:

private bool ContainsValue(Option value)
{
    return ((options & value) == value);
}

And you can remove a flag with:

options &= ~Option.Third;
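The same flags idea carries over to other languages. Here is a sketch of the answer's approach in Python using the standard enum module (names and helper chosen here just for illustration):

```python
from enum import Flag, auto

class Option(Flag):
    # auto() assigns successive powers of two, mirroring [System.Flags]
    FIRST = auto()   # 1
    SECOND = auto()  # 2
    THIRD = auto()   # 4
    FOURTH = auto()  # 8
    FIFTH = auto()   # 16

def contains_value(flags: Option, value: Option) -> bool:
    # Same idiom as the C# (options & value) == value test
    return (flags & value) == value

options = Option.FIRST | Option.THIRD | Option.FIFTH  # add flags with |
options &= ~Option.THIRD                              # remove a flag
```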
https://answers.unity.com/questions/1478684/storing-multiple-integers-into-one.html
If you're reading this you're probably familiar with the idea of functions. When to use and when to not use a function probably comes pretty natural to you. In this post we're going to learn how to leverage that knowledge to build better user interfaces.

One of the best parts of React.js is that you can use the same intuition that you have about functions for when to create new React components. However, instead of your function taking in some arguments and returning a value, your function is going to take in some arguments and return some UI. This idea can be summed up in the following formula, f(d)=V. A Function takes in some Data and returns a View. This is a beautiful way to think about developing user interfaces because now your UI is just composed of different function invocations, which is how you're already used to building applications, and every benefit that you get from functions is now transferred over to your UI.

Let's look at some actual code now.

var getProfilePic = function (username) {
  return "" + username;
};

var getProfileLink = function (username) {
  return "" + username;
};

var getProfileData = function (username) {
  return {
    pic: getProfilePic(username),
    link: getProfileLink(username),
  };
};

getProfileData("tylermcginnis");

Looking at the code above, we have three functions and one function invocation. You'll notice our code is very clean and organized because we've separated everything out into different functions. Each function has a specific purpose, and we're composing our functions by having one function (getProfileData) which leverages the other two functions (getProfilePic and getProfileLink). Now when we invoke getProfileData we'll get an object back which represents our user. You should be very comfortable with the code above. But now what I want to do, instead of having those functions return some value, is modify them a bit to return some UI (in the form of JSX). Here you'll really see the beauty of React's render method.
class ProfilePic extends React.Component {
  render() {
    return (
      <img src={'' + this.props.username} />
    )
  }
}

class ProfileLink extends React.Component {
  render() {
    return (
      <a href={"" + this.props.username}>
        {this.props.username}
      </a>
    );
  }
}

class Avatar extends React.Component {
  render() {
    return (
      <div>
        <ProfilePic username={this.props.username} />
        <ProfileLink username={this.props.username} />
      </div>
    );
  }
}

<Avatar username="tylermcginnis" />

Now, instead of composing functions to get some value, we're composing functions to get some UI. This idea is so important in React that React 0.14 introduced Stateless Functional Components, which allow the code above to be written as normal functions (and which we'll cover more in depth later in the course).

var ProfilePic = function (props) {
  return <img src={'' + props.username} />;
};

var ProfileLink = function (props) {
  return (
    <a href={"" + props.username}>
      {props.username}
    </a>
  );
};

var Avatar = function (props) {
  return (
    <div>
      <ProfilePic username={props.username} />
      <ProfileLink username={props.username} />
    </div>
  );
};

<Avatar username="tylermcginnis" />

One thing each of the functions and components above has in common is they're all "pure functions". Perhaps one of my favorite things about React is that it's given me a light introduction to functional programming (FP), and a fundamental piece of FP is pure functions. The whole concept of a pure function is consistency and predictability (which IMO are keys to writing great software). The reason for the consistency and predictability is that pure functions have the following characteristics. When you call a function that is "pure", you can predict exactly what's going to happen based on its input. This makes functions that are pure easy to reason about and testable. Let's look at some examples.

function add(x, y) {
  return x + y;
}

Though simple, add is a pure function. There are no side effects. It will always give us the same result given the same arguments.
Let's now look at two native JavaScript methods, .slice and .splice.

var friends = ["Mikenzi", "Merrick", "Dan"];
friends.slice(0, 1); // 'Mikenzi'
friends.slice(0, 1); // 'Mikenzi'
friends.slice(0, 1); // 'Mikenzi'

Notice slice is also a pure function. Given the same arguments, it will always return the same value. It's predictable. Let's compare this to .slice's friend, .splice.

var friends = ["Mikenzi", "Merrick", "Dan"];
friends.splice(0, 1); // ["Mikenzi"]
friends.splice(0, 1); // ["Merrick"]
friends.splice(0, 1); // ["Dan"]

.splice is not a pure function, since each time we invoke it passing in the same arguments, we get a different result. It's also modifying state.

Why is this important for React? Well, the main reason is that React's render method needs to be a pure function, and because it's a pure function, all of the benefits of pure functions now apply to your UI as well. Another reason is that it's a good idea to get used to making your functions pure and pushing "side effects" to the boundaries of your program. I'll say this throughout the course: React will make you a better developer if you learn React the right way. Learning to write pure functions is the first step on that journey.
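The .slice/.splice contrast is not unique to JavaScript. For instance, the same pure-versus-mutating split shows up in Python (an illustrative sketch):

```python
friends = ["Mikenzi", "Merrick", "Dan"]

def first_of(xs):
    # Pure: builds a new list and never touches its input,
    # so repeated calls with the same argument agree.
    return xs[0:1]

def shift(xs):
    # Impure: pop() mutates the list, so each call
    # returns something different.
    return xs.pop(0)
```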
https://ui.dev/building-user-interfaces-with-pure-functions-and-function-composition-in-react-js/
Martin Sebor commented on STDCXX-140:
-------------------------------------

I have seen a similar issue on Cygwin. Here are my notes:

The library builds just fine now (at least 11s does), but I ran into a runtime issue that I haven't had time to analyze yet. A simple integer insertion followed by std::endl hangs:

#include <iostream>

int main ()
{
    std::cout << 0 << std::endl;
}

I can't even stop the debugger by pressing Ctrl+C, so it's kind of tricky to figure out what's going on. It seems to hang on return from ctype::widen() somewhere in cygwin.dll. (This is with gcc 3.3.3 and gdb 6.3.50_2004-12-28-cvs on CYGWIN_NT-5.0 RW1266 1.5.12(0.116/4/2)).

> [NetBSD 3.0] accum hangs during execution
> -----------------------------------------
>
> Key: STDCXX-140
> URL:
> Project: C++ Standard Library
> Type: Bug
> Components: Examples
> Versions: 4.1.4
> Environment: NetBSD 3.0
> GCC 3.3.3
> Reporter: Andrew Black
http://mail-archives.apache.org/mod_mbox/stdcxx-dev/200602.mbox/%3C262920273.1139335145675.JavaMail.jira@ajax.apache.org%3E
jk_logging

This is a logging framework.

Introduction

This python module provides a logging infrastructure. It contains various classes to implement logging and aid in debugging. Information about this module can be found here:

How to use this module

Basic Architecture

Documentation of Log Objects

In order to use loggers you need to know which classes there are and what kind of methods they offer for use. The next subchapters will provide you with that information.

Common Methods

Every log object will provide the following methods for use:

#
# Perform logging with log level DEBUG.
#
# @param string text The text to write to this logger.
#
def debug(self, text)

#
# Perform logging with log level NOTICE.
#
# @param string text The text to write to this logger.
#
def notice(self, text)

#
# Perform logging with log level INFO.
#
# @param string text The text to write to this logger.
#
def info(self, text)

#
# Perform logging with log level STDOUT.
# This method is intended to be used in conjunction with STDOUT handlers.
#
# @param string text The text to write to this logger.
#
def stdout(self, text)

#
# Perform logging with log level WARNING.
#
# @param string text The text to write to this logger.
#
def warning(self, text)

#
# Perform logging with log level ERROR.
#
# @param string text The text to write to this logger.
#
def error(self, text)

#
# Perform logging with log level STDERR.
# This method is intended to be used in conjunction with STDERR handlers.
#
# @param string text The text to write to this logger.
#
def stderr(self, text)

#
# Perform logging with log level EXCEPTION.
#
# @param Exception exception The exception to write to this logger.
#
def exception(self, exception)

#
# If this logger is buffering log messages, clear all log messages from this buffer.
# If this logger has references to other loggers, such as a <c>FilterLogger</c>
# or a <c>MulticastLogger</c>
#
def clear(self)

Other log objects will provide additional methods.
BufferLogger

Objects of type BufferLogger will provide the following additional methods:

#
# Return a list of strings that contains the data stored in this logger.
# Standard formatting is used for output.
#
# @return string[] Returns an array of strings ready to be written to the console or a file.
#
def getBufferDataAsStrList(self)

#
# Return a list of tuples that contains the data stored in this logger.
#
# @return tuple[] Returns an array of tuples. Each tuple will contain the following fields:
#   * int timeStamp : The time stamp since Epoch in seconds.
#   * EnumLogLevel logLevel : The log level of this log entry.
#   * string|Exception textOrException : A log message or an exception object.
#
def getBufferDataAsTupleList(self)

#
# Return a single string that contains the data stored in this logger.
# Standard formatting is used for output.
#
# @return string Returns a single string ready to be written to the console or a file.
#
def getBufferDataAsStr(self)

#
# Forward the log data stored in this logger to another logger.
#
# @param AbstractLogger logger Another logger that will receive the log data.
#
def forwardTo(self, logger, bClear = False)

Instantiation Based on Configuration Information Provided

Often it is convenient for applications to provide some detailed way of specifying how data should be logged. For exactly that reason this logging framework provides a function that is capable of creating loggers from some kind of description.
Example:

import jk_logging

logger = jk_logging.instantiate({
    "type": "MulticastLogger",
    "nested": [
        {
            "type": "ConsoleLogger"
        },
        {
            "type": "FileLogger",
            "filePath": "mylogfile-%Y-%m-%d.log",
            "rollOver": "day"
        }
    ]
})

(more description coming soon)

Examples

Here is some example code that demonstrates the use of the various loggers available:

print()
print("-- ConsoleLogger --")
print()

clog = ConsoleLogger.create()
clog.debug("This is a test for DEBUG.")
clog.notice("This is a test for NOTICE.")
clog.info("This is a test for INFO.")
clog.warning("This is a test for WARNING.")
clog.error("This is a test for ERROR.")

print()
print("-- Exception Handling --")
print()

def produceError():
    a = 5
    b = 0
    c = a / b

try:
    clog.notice("Now let's try a calculation that will fail ...")
    produceError()
except Exception as ee:
    clog.error(ee)

print()
print("-- FilterLogger --")
print()

flog = FilterLogger.create(clog, EnumLogLevel.WARNING)
flog.notice("This message will not appear in the log output.")
flog.error("This message will appear in the log output.")

print()
print("-- DetectionLogger --")
print()

dlog = DetectionLogger.create(clog)
dlog.notice("A notice.")
dlog.debug("A debug message.")
dlog.info("An informational message.")
dlog.debug("Another debug message.")

print(dlog.getLogMsgCountsStrMap())
print("Do we have debug messages? Answer: " + str(dlog.hasDebug()))
print("Do we have error messages? Answer: " + str(dlog.hasError()))

print()
print("-- BufferLogger --")
print()

blog = BufferLogger.create()
blog.info("First we write something to a buffer.")
blog.info("And something more.")
blog.notice("And more.")
blog.debug("And even more.")
blog.info("Then we write it to the console by forwarding the data to another logger.")
blog.forwardTo(clog)

print()
print("-- MulticastLogger --")
print()

mlog = MulticastLogger.create(clog, clog)
mlog.info("This message gets written twice.")

print()
print("-- NamedMulticastLogger --")
print()

nmlog = NamedMulticastLogger.create()
nmlog.addLogger("log1", clog)
nmlog.addLogger("log2", clog)
nmlog.info("This message gets written twice.")
nmlog.removeLogger("log1")
nmlog.info("This message gets written once.")

Things to do

Any help in implementing additional log classes and improving on the existing ones is appreciated. Feel free to contact me if you are interested in collaborating.
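The buffer-then-forward pattern demonstrated with BufferLogger above is easy to mimic in a few lines of plain Python. This toy (not the library's actual implementation) stores entries and replays them into another logger on demand:

```python
class ToyBufferLogger:
    """Toy analogue of BufferLogger: accumulate (level, message)
    tuples, then replay them into any object with a log() method."""

    def __init__(self):
        self._entries = []

    def log(self, level, message):
        self._entries.append((level, message))

    def forward_to(self, sink, clear=False):
        # Replay buffered entries in order; optionally empty the buffer.
        for level, message in self._entries:
            sink.log(level, message)
        if clear:
            self._entries.clear()

class ListLogger:
    """Minimal sink that just collects formatted lines."""

    def __init__(self):
        self.lines = []

    def log(self, level, message):
        self.lines.append("[%s] %s" % (level, message))
```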
https://pypi.org/project/jk-logging/0.2021.1.18/
Hi, what I'm trying to do is to write a script which would open an application only in the process list, meaning it would be "hidden". I don't even know if it's possible in Python. If not, there has to be a function that would allow for a program to be opened with Python in a minimized state, like "wsMinimized" from Delphi. Something like this:

import subprocess

def startProgram():
    subrpocess.minimize(subprocess.Popen('C:\test.exe'))  # ??? I know this is wrong but you get the idea...

startProgram()
https://www.daniweb.com/programming/software-development/threads/262417/open-a-program-with-python-minimized-and-or-hidden
CC-MAIN-2017-43
refinedweb
120
72.46
Recursion Recursion is a process where a method calls itself repeatedly until it arrives on a desired value. Recursion is a complex topic in programming and mastering it is not that easy. Recursion must stop at some point or else, it will call itself for an infinite number of times. For this lesson, I will show a very simple example of recursion.The factorial of a positive whole number n (denoted as n!) is that the product of all positive integers but or capable n. contemplate the factorial of five. 5! = 5 * 4 * 3 * 2 * 1 = 120 So to make a recursive method, we need to analyze how our method will stop its recursion. Based on the description of recursion, the factorial only accepts a positive integer. The lowest positive integer we can give is 1. We will use this value to stop the recursion. using System; namespace FactorialDemo { public class Program { static long Factorial(int number) { if (number == 1) return 1; return number * Factorial(number - 1); } public static void Main() { Console.WriteLine(Factorial(5)); } } } Example 1 – Factorial Using Recursion 120 The method returns a long value because factorial calculations can be very big. The method accepts one argument which is the number that will be involved in the calculation. Inside the method, we first wrote an if statement in line 9 telling that if the argument provided is equal to 1, then we will return 1 as the value. If it is not 1, then we will proceed in the next line. This condition is also the one that will determine if the repetition of calls should stop. Line 12 multiplies the current value of number and then the method calls itself providing an argument which 1 less than the current value of number. Inside the method called inside the original method, processes will again occur, and if the argument passed is not equal to 1, then it will do another recursive step. The diagram below shows the how we arrived at the final value. The above code can be rewritten using a for loop. 
factorial = 1; for ( int counter = number; counter >= 1; counter-- ) factorial *= counter; The above code is easier than its equivalent code that uses recursion. Recursion has its specific use in the field of computer science. Recursion is used when it seems more natural to use it than using a non-recursion version (called Iteration). Recursion consumes more memory so do not use recursion if speed is important.
https://compitionpoint.com/recursion/
CC-MAIN-2021-31
refinedweb
412
55.03
Includes enterprise licensing and support Note: Before proceeding with this tutorial, make sure you've installed the Maps API for Flash SDK and obtained a Maps API key as indicated in Setting Up Your Development Environment. This section discusses how to obtain and set up Adobe FlexBuilder, how to reference the Google Maps API for Flash library, and how to set up your development environment to get you ready to program in ActionScript and build your first Google Maps API for Flash map using Adobe's FlexBuilder IDE. Note: If you wish to use the Free Flex SDK, see the Flex SDK Tutorial instead. Incorporating Google Maps into your Flex application requires requires understanding not only ActionScript code but Flex MXML files. This tutorial illustrates how to get a map application up and running within FlexBuilder —Adobe's IDE for Flex development — available at the following URL: This tutorial assumes you have purchased and set up the FlexBuilder application. Before you begin coding your application, you should first set up a project within FlexBuilder. To do so, follow these simple steps: Click Finish and your project will be created. FlexBuilder will also automatically create a template MXML file, as shown below: Before you can compile your code, you will need to link it to the Google Maps API for Flash SWC file. To do so, select Project->Properties. A Properties dialog box will appear for your project. Click on Flex Build Path and then select the Library Path tab: Click Add SWC... within the Library Path pane. OK. FlexBuilder will update your project and you return you to the Flex Development Perspective showing your skeleton MXML file. You may wish to test compilation of your skeleton MXML file at this point to ensure your development environment is working correctly. New! The Google Maps API for Flash now offers native support for the com.google.maps.Map object within Flex. 
You no longer need to extend the Map class to define your Map application class, and you can use a Map component directly within Flex. Flex applications are defined within MXML declarations. Generally, you should provide these MXML application definitions in the root of your source code directory. By default, FlexBuilder creates a src directory within your project to place source code, and MXML files should reside at the root of that directory. Create a new MXML file by selecting File->New->MXML Application. name the MXML file HelloWorld.mxml. We'll fill in this skeletal MXML application in the following sections. Components that extend any existing Google Maps API for Flash components should generally be provided as ActionScript files within a unique namespace. Google recommends that you use a namespace which you own to prevent collisions with other applications; this is especially important if you will have many developers working on multiple flash applications at the same time. Using namespaces also allows you to bundle your application code into packages, which allow easier sharing of common code. Packages and namespaces should be defined using your top level domain, your organization domain, and sub-domain. For example, the Google Maps namespace is defined as com.google.maps and an examples package within that namespace would be defined as com.google.maps.examples. You can then use this namespace to implicitly define the directory structure of your application (e.g. com/google/maps/examples/). You use this namespace to define a package within your ActionScript code, and to define your application within the MXML declaration. Generally, ActionScript code ( *.as files) reside within the bottommost directory of whatever namespace is defined, while the MXML declarations ( *.mxml files) reside at the "root" of the directory structure. The easiest way to start learning about the Google Maps API for Flash is to see a simple example. 
In this tutorial, using FlexBuilder we'll create a simple MXML file, add some ActionScript code, compile that file into a SWF file using the Flex SDK, and launch the file for visual inspection. The MXML declaration defines UI elements within a Flex application, while embedded ActionScript code within the <mx:Script> tag defines actions on those UI elements. In the simplest case, you simply declare a com.google.maps.Map object within MXML and initialize the map with ActionScript code: Modify your HelloWorld.mxml file until it appears as shown below. We'll walk through and explain the code within this file. <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns: <maps:Map xmlns:maps="com.google.maps.*" id="map" mapevent_mapready="onMapReady(event)" width="100%" height="100%":Application> This example is located at gmaps-samples-flash.googlecode.com/svn/trunk/examples/HelloWorld.html. (MXML source is located here.) Note that you'd need to build your own SWF file, with your own API key, for this example to appear on your website. Even in this simple example, there are several things to note: <mx:Application>to hold all of our code, as is required for Flex applications. Within this <mx:Application>object, we declare an xmnamespace to reference the standard Flex components. Mapobject as a child of the <mx:Application>and define a mapsnamespace to reference code from com.google.maps.*. We also define parameters including an id, a mapevent_mapreadyhandler, and an API key. These parameters will be explained later. <mx:Script>object. mapevent_mapreadyevent. These steps are explained in more detail below. <mx:Application xmlns: Google Maps Flash applications require not only ActionScript code but also a user interface framework to display the map and any associated UI elements on a web page. Within the Flex framework, these UI components are specified within an MXML declaration. An MXML declaration consists of a configuration file with the .mxml suffix. 
This MXML file commonly resides in the root of your ActionScript development directory. To display your Flash map on a webpage, you will need at least one MXML declaration. All MXML declarations require a root <mx:Application> component. Additionally, the <mx:Application> also registers the mx prefix to reference standard Flex components. Note that MXML declarations can be quite complex, and the layout of UI components within an MXML declaration is beyond the scope of this documentation. For more information, see the provided examples and consult the Flex SDK documentation. <maps:Map xmlns: The Google Maps API for Flash now includes native support for the Map object defined within the com.google.maps.* package. We add a Map here as a child of the <mx:Application>, define its namespace as maps (linking it to the com.google.maps.* code), set an id which we can reference the map from within the ActionScript code, and define a mapevent_mapready handler. (See Event Handling below.) The <maps:Map> declaration also specifies width and height parameters to define how the Map will appear within the application. More importantly, the MXML declaration acts as a convenient location to place your unique API key. <mx:Script> <![CDATA[ // ActionScript Code ]]> </mx:Script> Maps within the Google Maps API for Flash are manipulated using ActionScript 3.0 code. This tutorial will not attempt to teach the nuances of ActionScript. Online tutorials on ActionScript are available at the following locations: The <mx:Script> object is a Flex component that holds a reference to the ActionScript code. Because MXML is a variant of XML, we also need to inform the MXML parser to ignore the ActionScript code within this file through use of the <![CDATA[][ and ]]> delimiters. Note: you may instead wish to provide your ActionScript within separate ActionScript ( *.as) files. 
If you do so, you can reference the ActionScript file by including a source parameter in the <mx:Script> tag:

<mx:Script source="yourCode.as"/>

Placing code within separate files often makes sense if you have large or complex applications. However, within this document base, we will show all code inline within MXML declarations for simplicity.

import com.google.maps.LatLng;
import com.google.maps.Map;
import com.google.maps.MapEvent;
import com.google.maps.MapType;

ActionScript libraries are imported with the import declaration. In the sample code above, we import several Google Maps Flash libraries. You should ensure that you import libraries for the types that you use within your sample code. We recommend that you import only those classes you need. In most of the examples within this documentation, we structure our code so that the <mx:Application> does not use Flex UI components. Doing so provides a basic lightweight map. Note that including any Flex libraries may substantially increase the size of your SWF files, even if you only need one particular Flex component (such as a button).

ActionScript, like JavaScript, is an event-driven programming language. Interactions with users within Flash objects are handled by registering event listeners on objects for specific events. In the code snippet in the previous section, the Map declaration added an event listener to the Map object for the MapEvent.MAP_READY event through the use of the special parameter mapevent_mapready. This event handler acts as an initialization routine for the map. In general, it is good to "initialize" your map in such a manner by intercepting and handling the MapEvent.MAP_READY event. Events are discussed in more detail within the Map Events section of the Google Maps Flash documentation.

We now have a HelloWorld.mxml within our source's root directory and ActionScript code within that file's <mx:Script> object. We are ready to compile our code into a SWF (Shockwave Flash) file. We can do so directly within FlexBuilder.
To execute FlexBuilder's compiler and launch the debug version of the Flash player, click one of the Run Tools located in FlexBuilder's toolbar. There are options to launch an optimized version, a debug version, or a version for profiling. FlexBuilder will compile the MXML application, build a SWF file, and automatically bring up your browser, displaying the resulting Flash SWF. The HelloWorld.swf file appears below.

For the map to display on a web page, we must reserve a spot for it. We do so in this example by creating a named div element and adding the object element to it.

<div id="map_canvas" name="map_canvas">
  <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="" width="800px" height="600px">
    <param name="movie" value="helloworld.swf">
    <param name="quality" value="high">
    <param name="flashVars" value="key=your_api_key">
    <embed width="800px" height="600px" src="helloworld.swf" quality="high" flashVars="key=your_api_key" pluginspage="" type="application/x-shockwave-flash">
    </embed>
  </object>
</div>

Note that we add the API key parameter within a flashVars parameter. The resulting HTML page is shown below. View Source (HelloWorld.mxml)

Congratulations! You've written your first Google Maps Flash application!

The Google Maps API for Flash now not only supports development of Flash applications within the browser, but there's also an experimental feature that supports Adobe AIR® applications running on the desktop. AIR applications provide additional capabilities not available to applications running within a browser, including file and local network access, application state persistence, and data access. AIR is a runtime environment that allows applications to run natively within a variety of operating systems from a common code base. An AIR Installer compiles this code into a format usable by the runtime environment.
AIR applications allow you the freedom to interact directly with the desktop, providing file access, data storage, and robust user interface components across operating systems. More information on Adobe AIR is located at.

To run Adobe AIR applications, you will need to download and install Adobe's AIR runtime environment. This installation includes Adobe's AIR Installer, which converts Adobe AIR executable files into applications which can run on your operating system. Make sure you're also using version 1.8 or higher of the Google Maps API for Flash SWC file, which adds AIR support. The latest AIR release is available at:

Download the file from this location and install it per Adobe's instructions. This tutorial was developed using the Adobe AIR 1.1 Installer. (Note: Mac OS X developers who have trouble running AIR might want to check out this fix.)

FlexBuilder is capable of creating either Flash applications for use within the Flash plugin within a browser, or standalone AIR applications for use within the operating system. Before you begin coding your AIR application, you should first set up an AIR-compatible project within FlexBuilder. To do so, follow these simple steps:

Click Next, specify an output folder (leaving the default as bin-debug is fine), and then click Next again. You will be brought to the Source path and Library path configuration panel. Select a name for your Main application file and add an Application ID, which should generally be "package-like" (for example: com.google.maps.examples.air). Before you can compile your code, you will need to link it to the Google Maps API for Flash SWC file. To do so, click the Library Path tab, click Add SWC..., select the Google Maps API for Flash SWC file, and click Finish. FlexBuilder will update your project and return you to the Flex Development Perspective showing a skeleton MXML file. You may wish to test compilation of your skeleton MXML file at this point to ensure your development environment is working correctly.
Since the file is empty except for a single mx:WindowedApplication object, you will only see a blank window appear.

Creating Google Maps API for Flash AIR applications is similar to creating SWF files for use within a browser, with two exceptions:
- Instead of an mx:Application, the root of the AIR application is an mx:WindowedApplication. This should be created by default when you create your initial project.
- Map objects within AIR applications must have a url property set to an online location where the purpose and scope of the application is described. This url must be the same URL that you registered when signing up for an API key.

Because AIR applications have access to the file system, they can perform tasks that browser-based applications cannot easily do. We'll add some code to our test application that responds to user clicks by writing the clicked latitude/longitude value to a text file.

Within the Flex Development Perspective of FlexBuilder, copy the following code (this code is the same as the code in the "Hello World" tutorial above, except that the mx:Application has been changed to an mx:WindowedApplication and the Map.url property has been set):

<?xml version="1.0" encoding="utf-8"?>
<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml">
  <maps:Map xmlns:maps="com.google.maps.*" id="map" mapevent_mapready="onMapReady(event)" width="100%" height="100%" url="" key="your_api_key"/>
  <mx:Script>
    <![CDATA[
      import flash.events.Event;
      import com.google.maps.LatLng;
      import com.google.maps.MapEvent;
      import com.google.maps.MapType;

      private function onMapReady(event:Event):void {
        this.map.setCenter(new LatLng(40.736072,-73.992062), 14, MapType.NORMAL_MAP_TYPE);
      }
    ]]>
  </mx:Script>
</mx:WindowedApplication>

Now that we have this skeleton file, we'll add the capability to process event clicks and write the current location to a text file.
Add this additional import statement to add MapMouseEvent functionality:

import com.google.maps.MapMouseEvent;

Within the existing onMapReady event handler, add the following code to register an event listener for mouse clicks, and create an onMapClick event handler:

private function onMapReady(event:Event):void {
  this.map.setCenter(new LatLng(40.736072,-73.992062), 14, MapType.NORMAL_MAP_TYPE);
  this.map.addEventListener(MapMouseEvent.CLICK, onMapClick);
}

private function onMapClick(event:MapMouseEvent):void {
}

Within the onMapClick function, add the following code to process the current mouse location as a latitude/longitude and write the output to a file named "locations.txt". Note that because AIR applications are operating-system neutral, we need to obtain a reference to this file via the resolvePath method.

private function onMapClick(event:MapMouseEvent):void {
  var file:File = File.documentsDirectory.resolvePath("airTest/locations.txt");
  var fileStream:FileStream = new FileStream();
  fileStream.open(file, FileMode.APPEND);
  fileStream.writeUTF(event.latLng.toString());
  fileStream.writeUTF("\n");
  fileStream.close();
}

Now we can test our application by clicking the Run button in FlexBuilder's menu bar: FlexBuilder will compile the application and bring up a window displaying Manhattan. Click on any point on the map. The application will then write a "locations.txt" file within an "airTest" directory in your Documents directory (/Users/username/Documents/ in Mac OS, C:\My Documents\ in Windows). Opening that text file, you should see a simple latitude/longitude for the location you clicked:

AIR applications run natively within the operating system, and often have access to sensitive information, such as the file system. Because of this, it is important to trust the applications that run on the desktop.
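The append-to-file pattern in onMapClick translates directly to other environments. Below is a minimal Python sketch of the same idea: resolve a per-user path, create the folder on demand, and append one line per click. The function name is mine, and the airTest/locations.txt layout simply mirrors the tutorial.

```python
import os

def log_location(base_dir, latlng):
    """Append a latitude/longitude string to airTest/locations.txt under base_dir."""
    path = os.path.join(base_dir, "airTest", "locations.txt")
    # Like File.documentsDirectory.resolvePath, build the path and ensure it exists.
    os.makedirs(os.path.dirname(path), exist_ok=True)
    # Open mode "a" is the FileMode.APPEND equivalent.
    with open(path, "a", encoding="utf-8") as f:
        f.write(str(latlng) + "\n")
    return path
```

Each call appends one line, so repeated clicks accumulate a log just as in the AIR example.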
All AIR applications must be digitally signed by a certificate authority, and FlexBuilder allows you to specify this certificate when generating the release build. FlexBuilder also allows you to "self-sign" your own applications for testing purposes or internal applications, but if you plan to provide your application to other users, you should obtain a commercial certificate from a certification authority such as Thawte or VeriSign. More information about digitally signing your Adobe AIR applications is available at:. We'll now want to export an AIR installation file (denoted with an *.air suffix). To do so, we'll need to create a test certificate and generate a release build from FlexBuilder. Within FlexBuilder, select Project->Export Release Build. The Export Release Build dialog box will appear: Make sure the correct project and application are selected and click Next. The Digital Signature panel will appear. Ensure that Export and sign an AIR file with a digital certificate is currently selected. If you have a commercial certificate, you can browse and select that certificate, supply its password and click Finish. However, if you don't currently have a certificate, select Create and the Create Self-Signed Digital Certificate dialog box will appear: Within the Create Self-Signed Digital Certificate dialog box, enter a Publisher name, a password, and a filename for the certificate. Click OK and the Export Release Build dialog box will be populated with this certificate information. Click Finish. FlexBuilder will compile your application and generate an *.air file. This file can then be installed as an application via the AIR Application Installer. We now have an AIR installation file located in our output directory. To install this AIR application: Double-click on the MapSimple.air file. 
The Application Install dialog box will appear: Since this application is self-signed, the AIR Installer will warn you that the Publisher of the application is UNKNOWN and that the application has UNRESTRICTED System Access. As we've generated this file ourselves, we don't need to worry, so click Install. The Installation dialog box will then ask you to specify an installation location: Select an Installation Location, leave the Start application after installation checkbox checked, and the MapSimple application will appear: Congratulations! You've written your first Google Maps AIR application!
http://code.google.com/apis/maps/documentation/flash/tutorial-flexbuilder.html
Returning HTTP Status Codes in a Rails API
2:14 with Naomi Freeman

As we've been going through and implementing our API responses, we've introduced a common but subtle mistake in how we send back status codes. In this video, we're going to fix it.

Code Samples

Here is a small snippet in our create method:

def create
  list = TodoList.new(list_params)
  if list.save
    render json: { status: 200, message: "Successfully created todo list.", todo_list: list }.to_json
  end
end

We fix it to properly return the correct status code by moving the status key to be an argument to the render method rather than a hash key in the json argument:

render status: 200, json: { message: "Successfully created todo list.", todo_list: list }.to_json

We then apply this to the rest of the controller actions.

- 0:00 As we've been going along coding our API responses,
- 0:03 we've accidentally introduced one of the most common errors when creating an API.
- 0:07 If we look at our code for each of the responses, we're returning a JSON object
- 0:11 with a status key and the corresponding HTTP status code that we want.
- 0:16 While it isn't necessarily a problem to send along a status code
- 0:19 as part of our API, let's take a look at the response.
- 0:24 Notice the very first line.
- 0:27 The HTTP response is a 200 OK status when in fact we wanted it to have a 500 status.
- 0:34 Luckily, this is an easy fix.
- 0:37 We'll move the status key out of the response object.
- 0:41 [BLANK_AUDIO]
- 0:44 And instead, put it as an argument to the render method.
- 0:49 Now if we redo the cURL request, we get the proper response.
- 0:54 Well, it's almost the proper response.
- 0:56 A 500 error means that the server had some kind of internal error.
- 1:00 However, the server, our Rails app, is just fine.
- 1:04 It's actually the data that isn't correct for what we're doing.
- 1:08 If we look at the HTTP standards, it looks like the status code, - 1:13 422, Unprocessable Entity, is more appropriate and - 1:17 is also the status most preferred by Rails. - 1:20 Other acceptable codes would be 406, which stands for Not Acceptable, and 403, - 1:25 Forbidden, which says that the server understood the request but - 1:28 is refusing to fulfill it. - 1:31 There's some debate over whether or not 400 Bad Request is acceptable, and - 1:35 it's up to you which you choose for your API. - 1:38 We're going to use 422, Unprocessable Entity, for ours. - 1:43 Now when we perform the cURL request again, we get the correct status code. - 1:51 One last thing, we'll go back and fix this for the successful status code as well. - 1:57 Now we just have to go back and fix the rest of our methods. - 2:00 [BLANK_AUDIO] - 2:05 Now I've gone through and fixed all of the status codes. - 2:09 Now your code should look something like this. - 2:13 Perfect
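The status codes weighed in the video (422, 406, 403) are standard HTTP statuses with fixed reason phrases. As a quick cross-check outside of Rails, Python's stdlib http.HTTPStatus enum carries the same numbers and names discussed here:

```python
from http import HTTPStatus

# The candidate statuses discussed for a failed create action.
CANDIDATES = [
    HTTPStatus.UNPROCESSABLE_ENTITY,  # 422, the choice preferred by Rails
    HTTPStatus.NOT_ACCEPTABLE,        # 406
    HTTPStatus.FORBIDDEN,             # 403
]

for status in CANDIDATES:
    # Each enum member knows its numeric value and standard reason phrase.
    print(status.value, status.phrase)
```

Whichever code you pick, the key point from the video still holds: it must travel as the response's status line, not as a field inside the JSON body.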
https://teamtreehouse.com/library/build-a-rails-api/coding-the-api/returning-http-status-codes-in-a-rails-api
iCelRegion Struct Reference

#include <propclass/zone.h>

Detailed Description

A region. A region is a collection of map files which will be loaded in one CS region (iRegion).

Definition at line 98 of file zone.h.

Member Function Documentation

Register an entity that should be removed as soon as this region is unloaded.

Determine whether the given entity is in this region.

Create a map file for this region and associate it with this region.

Unregister an entity from this region.

Find the specified map file.

Get the VFS path for the cache manager. Returns 0 if the path is not used.

Get the CS region that is used for this region.

Get the name of the CS region that is used for this region. This will be the name of the entity containing the zone manager appended with the name of the region, i.e. like: <entity>_<region>

Get the specified map file.

Get the count of map files in this region.

Get the name of this region.

Remove all map files from this region.

Delete the given map file from this region. Returns false if the map file could not be found in this region.

Set the VFS path that will be used during the call to engine->Prepare() after loading this region. By default this is not set, which means that the lightmaps are supposed to come from every individual map file. If you set this path then you say that the cache manager is set up at this directory instead.

The documentation for this struct was generated from the following file:

Generated for CEL: Crystal Entity Layer 1.4.1 by doxygen 1.7.1
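The member descriptions imply a simple lifecycle: entities registered with a region are tracked while it is loaded and removed when it unloads, and the backing CS region is named <entity>_<region>. The following is a language-neutral Python sketch of that contract; the class and method names are mine and are not part of the actual CEL C++ API.

```python
class RegionSketch:
    """Toy model of iCelRegion's entity-tracking contract (not the real CEL API)."""

    def __init__(self, entity_name, region_name):
        # Mirrors the <entity>_<region> naming rule for the CS region.
        self.cs_region_name = f"{entity_name}_{region_name}"
        self._entities = set()

    def register_entity(self, entity):
        """Track an entity to be removed when this region is unloaded."""
        self._entities.add(entity)

    def unregister_entity(self, entity):
        self._entities.discard(entity)

    def contains_entity(self, entity):
        return entity in self._entities

    def unload(self):
        """On unload, every registered entity is removed from the region."""
        removed = sorted(self._entities)
        self._entities.clear()
        return removed
```

This only models the bookkeeping; the real interface additionally manages map files and VFS cache paths as described above.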
http://crystalspace3d.org/cel/docs/online/api-1.4.1/structiCelRegion.html
NAME
signalfd - create a file descriptor for accepting signals

SYNOPSIS
#include <sys/signalfd.h>

int signalfd(int fd, const sigset_t *mask, int flags);

DESCRIPTION
signalfd() creates a file descriptor that can be used to accept signals targeted at the caller. This provides an alternative to the use of a signal handler or sigwaitinfo(2), and has the advantage that the file descriptor may be monitored by select(2), poll(2), and epoll(7).

close(2)
When the file descriptor is no longer required it should be closed. When all file descriptors associated with the same signalfd object have been closed, the resources for the object are freed by the kernel.

The signalfd_siginfo structure
Reads from a signalfd file descriptor return one or more signalfd_siginfo structures; the ssi_signo field of this structure carries the number of the delivered signal.

fork(2) semantics
After a fork(2), the child inherits a copy of the signalfd file descriptor. A read(2) from the file descriptor in the child will return information about signals queued to the child.

Semantics of file descriptor passing

Thread semantics

epoll(7) semantics

ERRORS
ENOMEM - There was insufficient memory to create a new signalfd file descriptor.

VERSIONS
signalfd() is available on Linux since kernel 2.6.22. Working support is provided in glibc since version 2.8. The signalfd4() system call (see NOTES) is available on Linux since kernel 2.6.27.

CONFORMING TO
signalfd() and signalfd4() are Linux-specific.

NOTES

Limitations

C library/kernel differences

BUGS
In kernels before 2.6.25, the ssi_ptr and ssi_int fields are not filled in with the data accompanying a signal sent by sigqueue(3).

EXAMPLE
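signalfd() is a Linux-specific C API, but the underlying idea - receiving signals as readable data on a file descriptor instead of in an async handler - can be sketched portably. The snippet below uses Python's signal.set_wakeup_fd, which writes each delivered signal's number as one byte to a pipe. This is an analogy to signalfd on POSIX systems, not a binding for it: there is no signalfd_siginfo structure here, only the raw signal number.

```python
import os
import signal

# Create a pipe; the write end must be non-blocking for set_wakeup_fd.
read_fd, write_fd = os.pipe()
os.set_blocking(write_fd, False)

# Install a no-op handler so SIGUSR1 doesn't terminate the process.
# The byte is written to the fd at the C level, before this handler runs.
signal.signal(signal.SIGUSR1, lambda signum, frame: None)
signal.set_wakeup_fd(write_fd)

# Deliver a signal to ourselves, then read it back as ordinary data.
os.kill(os.getpid(), signal.SIGUSR1)
data = os.read(read_fd, 16)
print([signal.Signals(b).name for b in data])
```

As with signalfd, the read end of the pipe can then be watched with select(2)/poll(2)-style multiplexing alongside other descriptors.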
https://man.archlinux.org/man/signalfd4.2.en
Need to know your download trial is x64?

Hi,

Thank you for considering Aspose. Please download the latest version of Aspose.Cells (MSI installer) from the following location:

If you have installed the Aspose.Cells for .NET component using the Aspose.Cells.msi installer, there will be two DLLs on the installation path. You may use the DLL in the NET2.0 folder for a 64-bit machine. Please see the following documentation link for your reference.

You may also check the following links which may help you regarding use of the DLL and referring to it in your project.

Also, I have attached the latest DLL, which can be used with a 64-bit machine.

If you need any further assistance, please do let us know. We will be happy to help you out.

Thank You & Best Regards,
https://forum.aspose.com/t/64-bit-component/140668
About this project

Measuring distances from our robot to other objects is one of the most common data we want to obtain. For example, if we are building an autonomous vehicle, we want to check its distance from obstacles to help it make the right decision about its movement. The most common way to do that is with ultrasonic sensors. These sensors have a transmitter that sends a short ultrasonic burst and a receiver that senses the ultrasound upon its return. Knowing the speed of sound in the air (approximately 343 m/sec), we can calculate the distance it traveled, if we measure the time passed for the ultrasound to return to the sensor.

Working Principle

All ultrasonic sensors operate in a similar way. They send a short (a few microseconds long) ultrasonic burst from the transmitter and measure the time it takes for the sound to return to the receiver. Let's say that it took 10 milliseconds for the ultrasound to return to the sensor. That means:
- The time in seconds is 0.01.
- Knowing that sound travels 343 meters in the air for every second, we can calculate the distance in meters by simply multiplying seconds by 343; in our case 0.01 x 343 = 3.43 meters.
- This is the distance that the ultrasound traveled to the obstacle and back to the sensor, so the obstacle is 3.43 / 2 = 1.715 meters away from the sensor.

Pros and Cons

The main advantages of using ultrasonic sensors to measure distance are:
- They are cheap and there is a plethora of choices in the market,
- They are not affected by the color or the transparency of obstacles,
- They are not affected by lighting conditions,
- They are pretty accurate in their measurements.

Their drawbacks are:
- They don't work well enough on obstacles with small surfaces.
- The angle of the surface of the obstacle is crucial for the sensor.
- Obstacles made from sound-absorbing materials (for example sponges) are hard for the sensor to detect, since they absorb sound.
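The worked example above (a 10 ms round trip giving 1.715 m) is easy to wrap in a small helper. Here is a minimal Python sketch of the calculation, assuming the same fixed 343 m/s speed of sound the article uses:

```python
SPEED_OF_SOUND_M_S = 343  # approximate speed of sound in air

def distance_m(roundtrip_seconds):
    """One-way distance to the obstacle from an ultrasonic round-trip time."""
    travelled = roundtrip_seconds * SPEED_OF_SOUND_M_S  # total path: out and back
    return travelled / 2                                # keep only the one-way leg

print(distance_m(0.010))  # 10 ms round trip -> 1.715 m
```

Halving the result is the step that is easiest to forget: the echo time always covers the path to the obstacle and back.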
Choosing a Sensor

There is a wide variety of ultrasonic sensors on the market for most robotics platforms. For those who prefer working with the Lego platform, both EV3 and the older NXT include ultrasonic sensors. Some examples of using them in the classroom are:

If you are an Arduino or Raspberry Pi fan and want to dive more into how these sensors work, there are several options that you can find online. The most common and affordable choice is the HC-SR04, which costs less than a euro on ebay (August 2018). For more details and comparative tests with various ultrasonic sensors, I advise you to watch two detailed videos (here and here) from Andreas Spiess' channel on Youtube.

Connecting the Sensor to Arduino and Programming

The HC-SR04 sensor has 4 pins:
- VCC, which is connected to 5V,
- GND, which is connected to Ground,
- TRIG (Trigger), which is connected to the transmitter to send the ultrasonic burst,
- ECHO, which is connected to the receiver.

There are two ways to connect the sensor to Arduino:
- Connect TRIG and ECHO to different digital pins and do all the hard work and calculations in our program,
- Connect TRIG and ECHO to the same digital pin and use a library to make all the calculations.

I am going to start from the second way (easy) and then stay longer on the first, which gives the programmer more control, and as an educator I find it more interesting.

One Pin Connection and the NewPing Library

For my program to work I will need to install the NewPing library to my Arduino IDE, using the library manager. Now I can write a simple program to print the distance obtained by the sensor to the Serial monitor.
#include <NewPing.h> //Include the NewPing library

#define PING_PIN 4 //The digital pin I use for connecting the sensor

NewPing sonar(PING_PIN, PING_PIN); //Create a NewPing object

void setup() {
  Serial.begin(9600); //Start serial communication
}

void loop() {
  unsigned int uS = sonar.ping();
  int cm = uS / US_ROUNDTRIP_CM; //Get the distance in cm using the library
  Serial.println(cm); //Print the distance on the serial monitor
  delay(50); //Delay 50 milliseconds for the next distance measurement
}

I uploaded the program to the board and started testing.

Two Pin Connection – Calculating Distance From Time

As an educator, I find it more interesting to dig into the working principle of things, even if that means more work for my students. In order to do so, in this example we will have to forget the luxury of the NewPing library and make all the calculations ourselves. First of all I changed the schematic by connecting TRIG and ECHO to different digital pins.

Before I can start coding there are some things I need to clarify:
- TRIG (Trigger) has a default LOW state and when we change it to HIGH it starts sending the ultrasonic burst.
- When ECHO receives the bouncing sound it returns a HIGH pulse to the Arduino.
- I will use the pulseIn function to measure the time the ECHO pin stays in the HIGH state. This function returns time in microseconds.

Now I can start my algorithm:
- Set the TRIG pin to HIGH.
- Wait for a short period of time (10 microseconds).
- Set the TRIG pin to LOW. Now I have sent a short ultrasonic burst.
- Get the time from the ECHO pin in microseconds.
- Convert microseconds to seconds (division by 1,000,000).
- Calculate the distance the sound traveled in meters. Multiply seconds by 343 m/sec.
- Now I have the distance in meters. I will convert it to centimeters by multiplying by 100.
- This is the distance the sound traveled to the obstacle and back. So the distance of the obstacle from the sensor is half of that. So I divide the distance by 2.
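The conversion steps of the algorithm above can be checked on paper or in any language before touching the hardware. This Python sketch follows steps 4 through 8 exactly, taking the microsecond value that pulseIn would return and producing centimeters:

```python
SPEED_OF_SOUND_M_S = 343

def echo_us_to_cm(echo_microseconds):
    """Mirror the article's steps: microseconds -> seconds -> meters -> cm -> halve."""
    seconds = echo_microseconds / 1_000_000   # step 5: divide by 1,000,000
    meters = seconds * SPEED_OF_SOUND_M_S     # step 6: multiply by 343 m/sec
    cm = meters * 100                         # step 7: meters to centimeters
    return cm / 2                             # step 8: round trip -> one-way distance

print(echo_us_to_cm(10000))  # a 10,000 us echo corresponds to 171.5 cm
```

Working the pipeline through once by hand like this makes the eventual Arduino sketch easier to debug, since any wrong reading must then come from the wiring or the timing, not the math.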
#define TRIG_PIN 4 //Connect TRIG pin to digital pin 4
#define ECHO_PIN 5 //Connect ECHO pin to digital pin 5

void setup() {
  pinMode(TRIG_PIN, OUTPUT); //Set the TRIG pin to OUTPUT mode
  pinMode(ECHO_PIN, INPUT); //Set the ECHO pin to INPUT mode
  Serial.begin(9600); //Begin serial communication
}

void loop() {
  digitalWrite(TRIG_PIN, HIGH); //Start sending the ultrasonic burst
  delayMicroseconds(10); //Wait for a short period of time
  digitalWrite(TRIG_PIN, LOW); //Stop the burst
  float duration = pulseIn(ECHO_PIN, HIGH); //Get the time from the ECHO pin in microseconds
  float seconds = duration / 1000000; //Convert microseconds to seconds
  float meters = seconds * 343; //Get the distance in meters using the speed of sound (343m/s)
  float cm = meters * 100; //Convert meters to cm
  cm = cm/2; //We only want the distance to the obstacle and not the roundtrip
  Serial.println(cm); //Print distance in cm to the serial monitor
  delay(50); //Delay 50 milliseconds until the next measurement
}

I upload the program to my board and the sensor works as with the NewPing library, returning decimal values since all my variables are float.

Digging Even Further – Finding the Actual Speed of Sound Based on Temperature and Humidity

So far I used the speed of sound to calculate distance from time, assuming that this is a constant value of 343 m/sec. That is not actually true. The speed of sound depends on the "density" of the medium it travels through. In solid materials the speed of sound is greater than in liquids, and in liquids sound travels faster than through gases. The ultrasonic sensor sends sound through air, which is a gas. In gases the speed of sound is affected mostly by the gas temperature, less by the gas humidity and even less by the gas pressure. For example, in air with pressure of 1 Atm and
- Temperature of 0 degrees Celsius (32 F) and 50% humidity, the speed of sound is 331.61 m/sec,
- Temperature of 20 degrees Celsius (68 F) and 50% humidity, the speed of sound is 343.99 m/sec,
- Temperature of 30 degrees Celsius (86 F) and 50% humidity, the speed of sound is 350.31 m/sec, and
- Temperature of 30 degrees Celsius (86 F) and 90% humidity, the speed of sound is 351.24 m/sec.

There are many online calculators for the speed of sound; I used one of them to get the above results.

Since I had a cheap temperature – humidity sensor lying around (DHT11 Temperature and Humidity Sensor), I decided to improve the calculations in my code, using these two values to estimate a more accurate speed of sound value. First of all I embedded the new sensor in my schematic. Typical DHT11 sensors have either 3 pins (5V, GND and Signal) or 4 pins (5V, GND, Signal and NULL). The connections are as follows:
- 5V DHT11 –> 5V Arduino
- GND DHT11 –> GND Arduino
- SIGNAL DHT11 –> A0 pin Arduino

The next step was to add the measurement of temperature and humidity in my code using the dht.h library, which you can download from here. I followed the step by step tutorial from Brainy Bits and now the only thing I needed was to calculate the actual speed of sound. After a long search, I found that the formula needed was originally published in 1993 by Owen Cramer in his work "The variation of the specific heat ratio and the speed of sound in air with temperature, pressure, humidity, and CO2 concentration". I was also happy to find a Java implementation by a research team from the University of São Paulo, Brazil.
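Before reaching for the full Cramer formula, it is worth noting that the dominant effect in the table above is temperature. A common simplified dry-air approximation is v ≈ 331.3 + 0.606·T (with T in °C); it ignores humidity and pressure entirely, so it lands slightly below the 50%-humidity values quoted above, but within about 1 m/s of them:

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s), linear in temperature (Celsius)."""
    return 331.3 + 0.606 * temp_c

for t in (0, 20, 30):
    # Compare against the article's humid-air values: 331.61, 343.99, 350.31 m/s.
    print(t, round(speed_of_sound(t), 2))
```

For centimeter-scale measurements this linear model is usually accurate enough; the humidity-aware formula below squeezes out the remaining fraction of a percent.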
With a few tweaks to adjust it to my code, the full program is as follows:

#include <dht.h> //Include the DHT library for the temperature - humidity sensor

#define TRIG_PIN 4 //Connect TRIG pin to digital pin 4
#define ECHO_PIN 5 //Connect ECHO pin to digital pin 5
#define dht_apin A0 //Connect Signal pin from DHT11 sensor to analog pin A0

dht DHT; //Create a DHT object

void setup() {
  pinMode(TRIG_PIN, OUTPUT); //Set the TRIG pin to OUTPUT mode
  pinMode(ECHO_PIN, INPUT); //Set the ECHO pin to INPUT mode
  Serial.begin(9600); //Begin serial communication
  delay(500); //Delay to let system boot
  Serial.println("Accurate distance sensing\n\n"); //Write a welcoming message to serial monitor
  delay(1000); //Wait before accessing the DHT11 Sensor
}

void loop() {
  DHT.read11(dht_apin); //Read the data from the DHT sensor
  float p = 101000; //Set atmospheric pressure to 101,000 Pa
  float temp = DHT.temperature; //Get temperature from sensor
  float humidity = DHT.humidity; //Get humidity from sensor

  //Use the Cramer (1993) formula to evaluate the speed of sound
  float a0 = 331.5024;
  float a1 = 0.603055;
  float a2 = -0.000528;
  float a3 = 51.471935;
  float a4 = 0.1495874;
  float a5 = -0.000782;
  float a6 = -1.82e-7;
  float a7 = 3.73e-8;
  float a8 = -2.93e-10;
  float a9 = -85.20931;
  float a10 = -0.228525;
  float a11 = 5.91e-5;
  float a12 = -2.835149;
  float a13 = -2.15e-13;
  float a14 = 29.179762;
  float a15 = 0.000486;

  float T = temp + 273.15;
  float h = humidity / 100.0;
  float f = 1.00062 + 0.0000000314 * p + 0.00000056 * temp * temp;
  float Psv = exp(0.000012811805 * T * T - 0.019509874 * T + 34.04926034 - 6353.6311 / T);
  float Xw = h * f * Psv / p;
  float c = 331.45 - a0 - p * a6 - a13 * p * p;
  c = sqrt(a9 * a9 + 4 * a14 * c);
  float Xc = ((-1) * a9 - c) / (2 * a14);
  float speedOfSound = a0 + a1 * temp + a2 * temp * temp
    + (a3 + a4 * temp + a5 * temp * temp) * Xw
    + (a6 + a7 * temp + a8 * temp * temp) * p
    + (a9 + a10 * temp + a11 * temp * temp) * Xc
    + a12 * Xw * Xw + a13 * p * p + a14 * Xc * Xc + a15 * Xw * p * Xc;

  //Send the ultrasonic burst and time the echo
  digitalWrite(TRIG_PIN, HIGH); //Start sending the ultrasonic burst
  delayMicroseconds(10); //Wait for a short period of time
  digitalWrite(TRIG_PIN, LOW); //Stop the burst
  float duration = pulseIn(ECHO_PIN, HIGH); //Get the time from the ECHO pin in microseconds
  float seconds = duration / 1000000; //Convert microseconds to seconds
  float meters = seconds * speedOfSound; //Get the distance in meters using the speed of sound calculated earlier
  float cm = meters * 100; //Convert meters to cm
  cm = cm/2; //We only want the distance to the obstacle and not the roundtrip

  //Write results to serial monitor
  Serial.print("Obstacle distance = ");
  Serial.print(cm);
  Serial.print("cm ");
  Serial.print("Current humidity = ");
  Serial.print(humidity);
  Serial.print("% ");
  Serial.print("temperature = ");
  Serial.print(temp);
  Serial.print("C ");
  Serial.print("Speed of Sound = ");
  Serial.print(speedOfSound);
  Serial.println("m/sec ");

  //Wait 2 seconds before accessing the sensor again.
  delay(2000);
}

I uploaded the program to my board and started testing. I was happy to have more accurate measurements, even if that does not play a significant role in small distances of a few cm.

Using the Ultrasonic Sensor in Class

The use of ultrasonic sensors in educational robotics is very common and there are hundreds of examples over the internet, either using the Lego platform or Arduino and Raspberry Pi. Recently Tinkercad added a new command block for getting the distance from an ultrasonic sensor. I find particularly interesting, for educational purposes, the analytical way of calculating the distance from the time the sound takes to travel to the obstacle and back. In a previous project (smart trash can), that we implemented with my students from the evening club Young Hackers, we spent a lot of time to fully understand the algorithm that calculates distance from time using an analytical worksheet (in Greek). We implemented the algorithm using a block-style language (Ardublockly) that helped students a lot to understand every step of the way.

Author: Giannis Arvanitakis
Published on August 21, 2018
https://create.arduino.cc/projecthub/ioarvanit/measuring-distance-with-sound-353c17
Some extensions don't work with 0.48.2 and OSX 10.7.3

Hi - some render extensions (such as Gear / Foldable Box) etc. don't work with my Inkscape 0.48.2 on OSX 10.7.3. The error I get within Inkscape is "The fantastic lxml wrapper for libxml2 is required by inkex.py and therefore this extension. Please download and install the latest version from http://

any thoughts or ideas appreciated!

Question information
- Language: English
- Status: Answered
- For: Inkscape
- Assignee: No assignee
- Last query: 2013-02-26
- Last reply: 2014-09-19
This question was reopened 2013-02-26 by Darren Lennard

OK - many thanks. I have seen mention of workarounds, and downloaded Windell Oskay's Eggbot extension 2.2.2.r2, but either it doesn't work for OSX 10.7.3 or I have no idea how to correctly execute it (the latter is more likely in my opinion)

Does any of these guides help you? https:/

@mafiaz - ever used a Mac?

@Darren - I did not and still would NOT recommend to you to try to install lxml yourself (there is no installer nor DMG for osx).

@Darren - to get python extensions working with the official Mac OS X package of Inkscape 0.48.2, a single line needs to be inserted into a "text file" (actually a shell script) inside the application bundle. I can try to guide you to do this edit yourself if you want - one prerequisite though would be that you install a 'real' plain-text editor (Apple's TextEdit won't work because the new 'Autosave' feature on Lion can potentially modify such files in a way that would prevent the application from launching again). "Free" text editors I could recommend:
- TextWrangler <http://
- Sublime 2 <http://
- Smultron (the old "free" and open-source version, not the one from the App Store): <http://

Another prerequisite would be that you install the original Inkscape app again (because we don't know what exactly happened (or not) when you tried to run the installer package from the Egg-Bot project).
@mahfiaz - I got as far as looking for solutions to the apparent lxml issue but did not try to follow any - none appeared to include, or be specific for, OSX 10.7.3 anyway.

@suv - I would much appreciate an attempt to get them working, yes. I also note that you do not recommend an attempt to install lxml myself. Darren

Steps to work around the known issue of python-based extensions not working with the official Inkscape 0.48.2 package on OS X 10.7 Lion:

Prerequisites:
- unmodified version of Inkscape 0.48.2 installed
- alternative text editor is available (see earlier comment)
- quit all open Inkscape windows

1) Open a new Finder window and browse to the directory where you installed Inkscape to (most likely '/Applications').
2) Select 'Inkscape', open the context menu (with the right mouse button, 'Ctrl+mouse button' for a single-button mouse, or 'Ctrl+tap' with the trackpad) and choose the entry 'Show Package Contents'.
3) Within the package contents, browse to 'Contents > Resources > bin'.
4) In 'Contents > Resources > bin', open the file 'inkscape' in a plain-text editor (use drag&drop for example, or 'Open with…' from the context menu).
5) Once you have the file (a shell script) open in the text editor, go to line 32. The content of line 32 is: export VERSIONER_
6) Above line 32, insert a new line with this text: export VERSIONER_
7) Save the changes (make sure that no (hidden) file extension is added).
8) Test Inkscape: launch the app and use for example 'Extensions > Render > Gear…' as a test.

If all went well, all python-based extensions (those from the menu 'Extensions' as well as those used for input/output extensions to support other file formats) will now work as expected. If something went wrong, trash the modified application, install a fresh copy and report back here.

Thank you so much, this worked perfectly. Many thanks, Darren

Hi suv, I did modify the file and was able to get past the lxml message.
Now I get this error message:

Traceback (most recent call last):
  File "sozi.py", line 27, in <module>
    import pygtk
ImportError: No module named pygtk

I have installed pygtk through MacPorts, and have also been successful in loading the module in python manually (i.e. executing python and then invoking "import pygtk"). But still when I call sozi from Inkscape I get the above noted error. I am using the latest stable version of Inkscape, 0.48.2, and Mac OS X Lion 10.7.3. Any suggestions would be highly appreciated.

a orchard wrote:
> But still when I call sozi from inkscape I get the above noted error.

If you already have MacPorts experience, why don't you install inkscape with MacPorts as well? Then you'll have no conflicts between various python versions and python modules. Getting the precompiled version of Inkscape working with sozi is a hack at best, see <https:/ for an example.

@aorchard - sozi-related issues on OS X Lion would better be asked in a new question, or continued in <https:/ - This one here (Q 194132) covers extensions shipping with Inkscape, and how to get them working for basic usage. Extensions like sozi or textext are externally developed custom extensions with a lot of additional dependencies which are not and cannot be fully covered by the official pre-compiled package of Inkscape for Mac OS X (it includes only the minimal requirements - libraries as well as python modules). Questions about sozi and other custom extensions which spawn external dialogs are beyond the scope of this question (Q 194132) IMHO.

@suv, thanks for the response, and I agree. Will follow up on the other list you noted.

It works also for me, on Mac OS Mountain Lion, thank you ~suv (suv-lp).

I'm back I'm afraid, with the same "fantastic lxml wrapper..." error.
Inkscape .48.2 (freshly installed from inkscape.org - although not obviously .48.4 as indicated on the download page), OS 10.8.2, Python 2.7.3 (from python -V), and lines 32 and 33 of 'inkscape' in Contents/ read: export VERSIONER_ / export VERSIONER_ - any help appreciated, especially a process that might help identify the problem.

Hi, I have tried to go to line 32 in the "inkscape" as you recommended, but I don't have the same text:

Last login: Tue Dec 3 13:10:11 on ttys000
/Volumes/ iMac-de- Setting Language: .UTF-8
(process:1968): Gtk-WARNING **: Locale not supported by C library. Using the fallback 'C' locale.
/Users/ /Users/ /Users/
** (inkscape- ** (inkscape- . ** (inkscape- (inkscape- (inkscape- (inkscape- was not found either, perhaps you need to install it. You can get a copy from: http://
(inkscape- (inkscape-
[Opération terminée] ("operation finished")

Can you help? I really would like to use the extensions. Annabelle

I am having the same issue. I have the latest XQuartz X11 window manager, 2.7.5, and the latest Inkscape installed - 0.48 - using the package from Inkscape. I am running OSX Mavericks 10.9.1 and Python based on MacPorts' version: python --version reports Python 2.7.6. I installed lxml via MacPorts and I should be able to use the filter to embed all images, but I get "The fantastic lxml wrapper for libxml2 is required by inkex.py and therefore this extension. Please download and install the latest version from http:// Any ideas?

Never mind - adding the export VERSIONER_ line fixed it.

This has fixed my issue with DXF, many thanks. Dave

This solved my problem, too. Many thanks.

Thanks suv! That worked fine on my Mac running OS X 10.9.3 and Inkscape 0.48.2 r9819.

I suppose there is a PATH problem, but I cannot find where I have to add the needed one.
[many thanks, Ignacio]

On 2014-09-19 17:18, Ignacio Alvarez wrote:
> Question #194132 on Inkscape changed:
> https:/
>
> Ignacio Alvarez requested more information:
>://
> package manager by a command like: sudo apt-get install python-lxml"
>
> I suppose there is a PATH problem, but I cannot find where I have to add
> the needed one.
>
> Could you help me? [many thanks, Ignacio]

Sozi is not supported out-of-the-box with Inkscape 0.48.5 on OS X - this custom external extension requires PyGtk, which is not included in the application bundle because inkscape's own extensions use the internal GUI and not PyGtk for dialogs. I assume that you followed some random instructions found on the web, likely based on some older package of Inkscape for OS X - without details on what you already modified (inside the app bundle) and installed outside of the app bundle, I'm sorry but there is no easy help to provide (too many unknowns). Your question is also off-topic for this thread (which is about a known issue with an older package (Inkscape 0.48.2 for Mac OS X) - this known issue was fixed in the current stable release package 0.48.5 for Mac OS X).

O.K., many thanks and sorry to introduce noise to the thread.

On 2014-09-19 19:18, Ignacio Alvarez wrote:
> O.K., many thanks and sorry to introduce noise to the thread.

Sorry for not being more supportive. For starters, see https:/ (the instructions had been written for 0.48.2, but probably work with 0.48.5 too: install the dependencies for sozi via MacPorts (at least py27-lxml, py27-numpy, py27-pygtk), and create/modify the two files as described).

Known issue with Inkscape 0.48.2 on OS X Lion: Bug #819209 "Extensions do not work with Mac OS X Lion" <https://bugs.launchpad.net/inkscape/+bug/819209>. Unfortunately, we don't yet have packages of Inkscape 0.48.3.1 for Mac OS X available for downloading.
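The single-line edit ~suv describes (steps 4 to 7) can also be made from Terminal instead of a text editor. The sketch below is illustrative only: it operates on a stand-in temp file rather than the real launcher, and NEW_LINE is a deliberately fake placeholder, since the exact "export VERSIONER_..." line is given (truncated) in the steps above. In real use you would back up and then point the commands at /Applications/Inkscape.app/Contents/Resources/bin/inkscape.

```shell
# Stand-in target: a file with 40 numbered lines, playing the role of the
# launcher script (in real use: .../Inkscape.app/Contents/Resources/bin/inkscape).
SCRIPT=$(mktemp)
seq 1 40 | sed 's/^/original line /' > "$SCRIPT"

# Placeholder only -- substitute the exact "export VERSIONER_..." line
# from the steps above.
NEW_LINE='export VERSIONER_EXAMPLE=placeholder'

# Insert NEW_LINE above line 32. The '-i.bak' form keeps a backup and is
# accepted by both the BSD sed shipped with OS X and GNU sed.
sed -i.bak "32i\\
$NEW_LINE" "$SCRIPT"

sed -n '31,33p' "$SCRIPT"   # show the inserted line in context
```

The same `sed` invocation against the real launcher performs steps 5 to 7 in one go, with the pre-edit file preserved as `inkscape.bak`.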
https://answers.launchpad.net/inkscape/+question/194132
Purpose

Returns the caller's kernel thread ID.

Library

Standard C library (libc.a)

Syntax

#include <sys/thread.h>

tid_t thread_self ()

Description

The thread_self subroutine returns the caller's kernel thread ID. The kernel thread ID may be useful for the bindprocessor and ptrace subroutines. The ps, trace, and vmstat commands also report kernel thread IDs, thus this subroutine can be useful for debugging multi-threaded programs. The kernel thread ID is unrelated to the thread ID used in the threads library (libpthreads.a) and returned by the pthread_self subroutine.

Return Values

The thread_self subroutine returns the caller's kernel thread ID.

Implementation Specifics

This subroutine is part of the Base Operating System (BOS) Runtime.

Related Information

The bindprocessor subroutine, pthread_self subroutine, ptrace subroutine.
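A minimal usage sketch (my own illustration, not from the man page): on AIX, thread_self() comes from <sys/thread.h>; so that the example also compiles elsewhere, it falls back to the closest Linux analogue, gettid(2) via syscall(). It illustrates the point above that the kernel thread ID and the pthread_self() ID are unrelated values.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <assert.h>
#include <pthread.h>

#ifdef _AIX
#include <sys/thread.h>                 /* tid_t thread_self(void) */
static long kernel_tid(void) { return (long)thread_self(); }
#else
/* Not AIX: gettid(2) is the closest Linux analogue of thread_self(). */
#include <unistd.h>
#include <sys/syscall.h>
static long kernel_tid(void) { return (long)syscall(SYS_gettid); }
#endif

/* Print both IDs for the calling thread. On AIX, the kernel thread ID is
 * the value reported by ps/trace/vmstat, handy when debugging which OS
 * thread a pthread maps to. */
static void print_thread_ids(void)
{
    printf("kernel thread ID: %ld\n", kernel_tid());
    printf("pthread ID:       %lu\n", (unsigned long)pthread_self());
}
```

Calling print_thread_ids() from several threads shows each thread's distinct kernel ID alongside its library-level pthread ID.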
http://ps-2.kev009.com/wisclibrary/aix51/usr/share/man/info/en_US/a_doc_lib/libs/basetrf2/thread_self.htm
Olympics 2012 talk - Ross Tucker, 19 October 2015

The hidden side of the Olympic Games (slide-deck fragments, grouped by topic)

Overview
- Technology: in the pool and on the track, technology influences performance. Where is the line of fairness?
- Doping: the "pharmacological arms race" has never been fiercer. Are we turning a corner?
- Pistorius: the blade runner is both the most inspirational and controversial athlete at the Games. The scientific evidence, however, is less grey than the PR suggests.
- 12 in 2012? We've been promised 12 medals. What should we expect?

Topics: the role of science; technological warfare; doping; Pistorius; "12 in 2012".

Doping: the war on drug cheats has evolved
- A look inside the doping cabinet: doping basics - prohibited substances and prohibited methods.
- 2001/2002, pre-EPO test: an abnormally high percentage of samples with a HIGH reticulocyte percentage (Ret%); about 1 in 8 samples "abnormal".
- EPO test developed: about 1 in 10 samples "abnormal", now with an abnormally LOW percentage showing HIGH Ret%.
- 2003-07 (post-EPO test, pre-biological passport): about 1 in 40 samples "abnormal". Doping has been squeezed down.
- 2008-present: the biological passport introduced.

The Athlete Biological Passport system
- The "smoking gun" fallacy: in hindsight, naive to expect that doping control would produce clear & incontrovertible evidence. Anti-doping has evolved to be (a) long term and (b) repeated.
- Rather than finding the foreign substance in the body, look for its effects. Longitudinal: multiple tests per year. There is natural biological variation in blood variables (reticulocytes, hemoglobin, hematocrit, and a calculated "off score"), and unusually large deviations are flagged as "suspicious".
- Reticulocytes are "immature" red blood cells; the normal range is about 0.5% - 1.5% (illustrations not to scale):
  - if blood is removed, the body responds by making more new blood, so Ret% rises;
  - if blood is re-infused, the body stops producing new blood, so Ret% falls;
  - if EPO is used, it causes more red blood cell production, so Ret% rises.
- By tracking every athlete over time, anti-doping gets a "fingerprint" that allows any future changes to be detected in the context of individual physiology.
- Anti-doping practice changes performance - a changing sport: 1993-2006, average > 6 W/kg, maximum > 6.3 W/kg; 2007-present, average 5.7 W/kg, maximum 5.9 W/kg.

Technological warfare: the swimsuit era
- Companies always make claims about their latest equipment. Most of it is marketing. But once in a while, they hit the "jackpot" and the sport is transformed - for better or worse. The Olympics are as much a showcase of sports science as they are the human spirit: when perfection matters, every millisecond counts.
- Swimming performances have always been susceptible to equipment innovation: the tumble-turn, swimming goggles, the Speedo Fastskin ("faster than ever?").
- 2008: the LZR Racer by Speedo. No seams (ultrasonically bonded), polyurethane panels with zero resistance, and a core-compression "corset" to hold the swimmer's body in optimal shape. Primary objective: reduce drag from the skin. The suit, cap & goggles were "engineered to work in unison".
- The approximate average size of the advantage in the suit: 2%. It also gave a disproportionately large benefit to larger body shapes, creating a competitive imbalance between those wearing the LZR and those wearing anything but the LZR (94% of races won).
- Clearance sale on records - everything must go: men, 37 Olympic and 14 world records; women, 29 Olympic and 11 world records; 178 world records set between the introduction and ban of the high-tech swimsuit (I think...). The 100m freestyle world record: a bumpy evolution/revolution.
- As of Jan 1st 2010, new stipulations.
- On land, the Nike Turbospeed suit: wind-tunnel tests suggest a 0.023 s improvement in a 100 m race, about 0.24%.

The science of the Blade Runner: "a 12-second advantage" vs "common sense"
- The scientific process: establish the theory (known theory, hypothesis), research / gather data, evaluate, revise the hypothesis, confirm the hypothesis.
- The theory: Flex-Foot characteristics (8% energy loss, 2.95 kg) against human limb characteristics (40-60% energy loss, 5-8 kg). That is roughly 7-fold greater energy return and 2-5 kg lighter mass (F = ma): less work (energy) to run, hence a lower metabolic demand at given speeds.
- Research study #1: at 400m sprint speeds, Pistorius uses 25% LESS oxygen than able-bodied sprinters. This confirms the theory, but with important fine print: during sprinting, energy comes from two sources, oxygen-dependent and oxygen-independent.
- For the purposes of appeal (CAS), Pistorius needed to cast doubt on this "metabolic difference": repeat the running test at slower speeds and hope Pistorius is similar to able-bodied sprinters, comparing him not to sprinters but to a group padded with elite marathon runners from scattered studies. Why, if the data exist for sprinters, would they have compared him to a group of long-distance runners? There was plenty of comparable data on sprinters, so why not use it? When compared to the correct control group, Pistorius is 14% (2.3 SD) more economical.
- "…" - Pistorius, quoted in The Guardian, July 17 2012.
- "We conclude that running on modern, lower-limb sprinting prostheses appears to be physiologically similar but mechanically different from running with intact limbs."
- Mechanism - the mechanics of Oscar Pistorius. How do humans get faster? Studies show that we get faster by applying more force to the ground, not by swinging the legs faster: the ability to swing the leg faster is LIMITED, so we need to apply more and more force to the ground - the athletic limit. As sprinters get faster, their swing times level off: a "speed limit" on how rapidly the legs can move at top speed.
- Pistorius's time to reposition the limbs: 0.284 s. That is 21% faster than the average of athletes tested in the lab, 17.4% faster than two men in the 100m final at the World Championships, and 10% faster than the fastest individuals recorded in history: repositioning limbs "off the biological charts". Adjusting his swing times to "average" makes him about 12 seconds slower; adjusting to the fastest ever, about 6 seconds slower.
- Are they the "same blades"? "A company called Flex-Foot debuted the Cheetah in 1996, but the prosthetic blades remained a bit crude until Flex-Foot was acquired by the Icelandic firm Ossur in 2000..." - written by "Wired" magazine, March 2008.
- The disadvantages: no calf muscles (the calf muscle contributes about 6% of propulsion; on energy return the figures given are 92% vs 41.4%), and a slow start (worth at most about 0.5 s; 100m best: 10.91 s).
- Conclusion: lighter limbs, more energy return; lower cost of running, mechanical advantage; limb repositioning "off biological charts". Confirmed.

"12 in 2012"? The economics of medals
- "I see 12 medals in the team. There is no doubt about it that there are a lot of potential medalists. When I was going through the team I said to myself that there is a lot of potential here." Beijing was South Africa's worst Olympic showing ever. Will London be our best? Expectations, hopes, dreams - naive or delusional?
- Within a FORMALIZED system, money in = medals out. South Africa, 1992-2008: 19 medals (4 / 6 / 9).
- Over-achievers punching above their weight: Zimbabwe wins the most medals per million people; the Caribbean islands dominate per billion $.
- Great Britain: Olympic/high-performance spend of £265 million (about R3.47 billion) in the four-year cycle; projected medal count of 48, from at least 12 sports, for a top-4 ranking in the table.
- Brazil: R2.6 billion, a huge increase ahead of Rio 2016; target 15 medals in London, 30 in Rio. "It's no use getting R$1 billion (US$ 510 million) in 2015, because I can't buy medals. I have to make medal winners, and that takes eight years, 10 years. I need, at the very least, six years."
- Other benchmarks: R1.2 billion for the historically best performing "big nation"; R1.1 billion as part of a 2028 Olympic vision.
- South Africa: R59 million spent for Beijing, R78 million for London, at roughly R59 - 65.6 million per medal; perhaps 1.5 medals in London? If the price of medals were uniform, 11 in Beijing would have cost R660 million and 14 would have cost R840 million (R720 million for 11 in Beijing, 9.2 times more than the actual spend). Fortunately, the cost of medals is a function of the system in which the money is spent.
- What sets the price of medals? Supply & demand; a formal system from production to sale (athletes, coaches, management, funding); on a black market, the price is variable. Doing the right things will increase spend; increased spending does not necessarily do the right thing.
- The difference between gold and anonymity: 0.5% separates gold from nothing. Equivalent in running time: Usain Bolt slowing from 9.58 s to 10.06 s. If SA wants 12 medals at our current price-point, does more make sense?

South Africa's chances in London
- "Not winning a medal would disappoint" (about 80% probability): Cameron van der Burgh - consistency and a world silver medal in 2011 mean his best will be a medal guarantee, though his strongest event, the 50m breaststroke, is not an Olympic event; Sunette Viljoen - recent form and exceptional preparation, in an unpredictable event with weather variability; Burry Stander - credentials at global level and experience, in an event susceptible to 'luck' and competitor strength.
- "Realistic chance, but a lot must go right" (20 - 50% probability): Khotso Mokoena - experience, against an unpredictable event and patchy form since 2009; Richard Murray - nearly unmatched running ability, confidence and recent form, against a swim that may force him to work excessively hard on the bike, no team-mates, and the Brownlees; LJ van Zyl and Caster Semenya - historical credentials whose best performances would medal, against injuries, 2011 form and a lack of competition in 2011; Chad le Clos - precocious talent, confidence, nothing to lose, against age, a lack of experience on the truly global stage, and Phelps & Lochte.
- "Realistic chance, but a lot must go right" (about 10% probability): the men's lightweight four, the women's pair, the 4 x 100m medley relay, the 4 x 400m relay, and Bridgette Hartley.
https://prezi.com/ne4jakfjeth6/olympics-2012-talk/
They call it a calling frequency for a reason. Why is it that folks just can't bring themselves to QSY off of it? Many times each summer I hear West Coast stations calling the East Coast, only to be drowned out by the same stations who just can't pry themselves off of 50.125. A word to the unknowing- lots of folks hear better than you do. Get some hardline and listen for a change instead of calling endlessly or running pileups on the calling frequency. Remember, just because you are not hearing anything but local stations does not mean others nearby aren't either. Bottom line, listen more than you call, and if you call on .125, QSY off ASAP.

To all the DX stations out there- in case you forgot, the intercontinental calling frequency is for calling, not running pileups. We all appreciate working you but trust me, at least three other DX stations you can't hear will be using the same frequency and thus it will degenerate into a battle of signal strength. Do everyone a favor & QSY to some other frequency. I promise, we will find you and spot you there.

For U.S. stations, use of this frequency is a subject that is touchy at best. Personally I hate to use it at all, but in some areas of the world (namely South America & the Caribbean) no one ever listens anywhere else, so no other choice exists. If I have propagation indicators from South America or Europe & no one seems to be aware of them, I use it- sparingly, but yes, I use it. As with 50.125 however, promptly QSYing off of it after getting a response is the best policy.

This one is probably laughable for some folks, but I get tired of beginning a DX pileup to South America or Europe, only to have some local U.S. station butt in & try to work everyone who responds to me. If I know the station would be a new one for someone, or is a rare entity, I have no problem leaving the frequency to the DX.
However, my patience has limits, & during the fantastic 2001-2002 F2 season, I reached it after allowing myself to be bumped from 5 different frequencies in a one hour period. The moral here- get the hell off my frequency & run your own damn pileup, especially if you have a decent station & DXCC on 6 meters. If you want to work someone who calls me, ask them to QSY with you, but don't expect me to move- I won't.

Though it should make me smile, this one still makes me cringe. When I first began DXing on 6 meters I heard Iceland and Ireland at least 5 times each before ever completing a QSO with either country. Why? At the time I had only 100 watts out on 6 meters, and each time these (same) stations were in, the same local stations with more power would have to work them again and again and again. In truth, I owe these DX hogs a measure of thanks- they forced me to get an amp. Though I now have little trouble working any DX I attempt to, I have not forgotten the lesson. Under most circumstances, endeavor not to compete in a DX pileup unless you have yet to work the country, and if you rework many of the same DX stations, do so when no one else is. Never rework the truly rare DX, or you will likely destroy someone else's chance to do so, as a W8 did to me by duping KH2JU during the winter of 2001-2002- I'm glad he was 599 again for you, but he was 539 for me & gone for the cycle here after you duped him. So, to all the dupers & DX hogs, kiss my amp.

This should be a no-brainer, but stupidity hath no bounds like a DXer on the hunt for a new one. This idiot behavior comes in two forms. The most pernicious comes from Lids who want desperately to break the pileup by making call after call after call after call. They don't make it the first time, so they QRM everyone else (not to mention the DX) by repeatedly giving their callsign and calling over others intentionally.
They usually keep calling even during those periods they cannot hear the DX, or anything else happening on the frequency for that matter. They are bringers of disorder, and are best suited to the HF bands from whence they came. The other kind of DX QRMers are more comical, and display their lack of understanding of the 6 meter band for all the DXing world to hear. These guys first see a 50MHz DX spot on the internet or packet cluster. After hearing nothing on the frequency in question, these idiots proceed to call CQ on the same frequency in the hope of a miracle. It is usually laughable, except for those times you yourself are actually hearing the target DX and waiting for them to get strong enough to call. The moral here- shut up and listen, you'll do everyone a favor and not make a fool of yourself.

Another no-brainer, but with the advent of the HF + 6 meter rigs, the clueless are out in droves these days. When I get on HF, admittedly a rare occasion, I use a frequency chart to figure out where I should operate. I also listen- it is relatively easy to tell what frequencies are traditionally used for a given purpose. During a domestic E opening when the band is full from 50.125 (yep, you guessed it, the calling frequency) to 50.200+, it is not too hard to figure out that there is a reason the frequencies from 50.100 to 50.124 (inclusive) are empty- and it is not so that stations new to 50MHz will have a place to operate close to the calling frequency. It is a safe bet that more people monitor the DX window even during a domestic Es opening than are actually working it, but it never ceases to amaze me how many domestic QSOs occur here each opening. Do your part & ask these folks nicely to QSY, and explain to them why.

Nothing is quite so disturbing as losing that once-in-a-cycle opportunity at working a rare entity while listening to some idiot already in the log tie the DX station up for a ragchew or needless questions.
We are all happy for the guy who worked him for a new one and hope he gets his card, but look the damn address up on the net or ask someone local to do it- just don't ask the DX station in the middle of the opening. Sound like a no-brainer? Well, just listen to some of my sound files... Other stupid time-wasters include needless repetition of information like names, towns, mother's maiden name & etc. Some DX stations ask for this type stuff so, oh well, but don't bother if not asked- it might be you who misses out next time but for this type of nonsense. Also, if a station does not give his grid square, don't bother giving one either. If not asked for it the other station obviously couldn't care less anyway, and the time saved can allow many extra QSOs for fellow DXers. It can be looked up later.

I admit it, I am a hypocrite. For a few years I answered every card received, whether or not an SASE was enclosed. In fact, I returned a few SASEs as well, and griped about those who refused to QSL without an SASE. However, the basic laws of economics caught up with me- at 37 cents a pop I can't afford to respond to QSLs that don't contain return postage. I dare not attempt to make an exact calculation, but suffice it to say that I could have bought a new ICOM 756 PRO II for what I have spent in postage over the last few years. When I QSL I always enclose an SASE, and even this does not guarantee a return (there are even some deadbeat U.S. 6m DXers out there too). The other straw that broke the camel's back (aside from my job as a 3rd call bureau sorter) was the volume of cards I received for activating FM28 & Delaware on 50MHz- at least 16 cards were received without return postage. Bottom line, if you want my grid, or any grid I activate, enclose an SASE or forget it.
http://www.qsl.net/n3db/annoy.htm
Recently, I have attended several presentations in which the SAP speaker states that when it comes to SAP user experience, "everything is Fiori." Many of my customers and colleagues have asked me what this means, especially in the context of SAP Screen Personas. In this blog, I will attempt to clarify this statement and show how SAP Screen Personas complements SAP Fiori UX.

The definition of Fiori has changed

When we originally launched SAP Fiori, it was a collection of apps to replace the most commonly used functions of SAP Business Suite. These apps had several very important characteristics. SAP Fiori means: simple, role-based, responsive, coherent, and delightful. The market embraced this approach to disaggregating SAP software, such as CRM, SRM, HCM, and SCM, into relevant tasks and activities for end user roles across the enterprise. In response, we created more apps for more user roles to make more functions of SAP ERP accessible through a variety of devices. Based on continued positive feedback, we expanded the meaning of SAP Fiori to include not just the apps, but also represent the concept, design and technology. Now, SAP Fiori is the new user experience for SAP software, across solutions such as S/4HANA, SAP Simple Finance and SAP Business Suite. This is what we mean by "everything is Fiori."

A consistent user experience across SAP functionality

Frequently, I hear customers express their desire for a consistent user experience across SAP. For some organizations, this means a single entry point. For others, they want all SAP applications to have some family resemblance. For example, when you use Microsoft Office, Word, Excel, and PowerPoint all do different things, but they have a similar look and feel. Several solutions have emerged over the years from SAP that provide elements of a unified user experience.
NetWeaver Business Client (NWBC), the SAP Portal, and now the Fiori Launchpad are all essentially shells to hold other content such as Dynpro-based transaction screens or Web Dynpro applications, or even new Fiori applications.

The UX strategy covers new and existing applications

If you think about the SAP UX strategy, we offer a three-pronged approach: New, Renew, Enable. With the updated Fiori definition, we now deliver all new applications with the Fiori user experience. And many renewed parts of SAP ERP are available with the Fiori user experience (the aforementioned apps). What about all the remaining functionality in the Business Suite? There are over 10,000 SAP-provided transaction codes in the SAP GUI, through which over 80% of users still access SAP. This is where SAP Screen Personas fits in. It enables you to modify any SAP GUI transaction so it has the Fiori characteristics. SAP Screen Personas lets you make SAP ERP transactions look like Fiori apps.

Making everything look like Fiori with SAP Screen Personas is often faster and easier than developing custom applications, if the underlying transactions do most of what you want. While many customers are embracing the Fiori design, you can use whatever visual design you want to make the screens consistent with how you consume enterprise applications at your company. I have seen some customers integrate SAP transactions into their corporate portal using SAP Screen Personas. The resulting screens do not look anything like classic SAP. Some users do not even realize they are using SAP. I have heard some people say that SAP Screen Personas and Fiori are ways to deliver cloud-like usability for on-premise systems. Certainly, they are a great way to prepare your company for the cloud and help you run simple.

Everything is Fiori means better usability for you

Who would not want their enterprise software to be intuitive, optimized for specific user roles, and run on a variety of devices?
This is what the Fiori user experience is all about. SAP Screen Personas is a great way for you to deliver this capability to your users. Here is a style guide that allows you to build Fiori designs using SAP Screen Personas 3.0.

For the SAP Screen Personas product team, Peter Spielvogel.

hi Peter, in your last statement you mention we can produce Fiori-like applications using Personas. which version can actually do that? - I understand "Fiori-like" as the ability to produce applications using the design guidelines of Fiori.. please correct me if not accurate, I still did not find an easy way to do that.

on the topic, Fiori seems to be a good choice only when dealing with mobile or smaller form factors than a desktop.. I hardly think applying the guidelines of Fiori to a desktop form factor is a good design decision - it seems to produce a lot of waste and unbalanced screen elements, and even in the app framework the master-detail and full-screen modes are not as smart, or provide far less usability, than Material Design from Google.. check Google Inbox for instance, the way they handle lists and inner elements inside lists; even Material Design forms seem to utilize screen real estate a lot better.

on the other hand, some stuff I've seen from SAP uses the ux3 namespace components which, IMO, provide a far better result for the desktop form factor than the m namespace, and the ux3 namespace seems to be vastly used by SAP to deliver applications, but they are not mobile ready - wouldn't it make more sense for Personas to utilize more of ux3 and follow ux3 patterns rather than Fiori patterns? - Personas has an extremely heavy load of resources compared to any other small-footprint framework (or application if you like), which already kind of restricts its use to the desktop, so why not focus on something other than Fiori to produce better results?
last, I’d be really keen to see SAP split the scripting engine from the flavor management in Personas – this would be awesome since a developer would be capable of easily ‘mapping’ whatever he developed into a SAP GUI screen, execute functions and read/parse back the result.. in fact, I believe this is the real power behind of Personas, as I find the elements to use to design screens extremely limited. Cheers, D. Hi Daniel, You raise many points here, so let me address them individually. It is possible to build screens that look like Fiori in both SAP Screen Personas 2.0 and 3.0. The style guide for 2.0 is available now. The Fiori style guide for SAP Screen Personas 3.0 is being developed and will come out as part of the planned RDS (rapid deployment solution). Fiori can be used across all form factors – phone, tablet, and desktop. The responsive design presents more content on the screen the larger the available real estate. If want to put more information on the desktop than the standard Fiori design, then you can modify the Fiori screens using Web IDE. In SAP Screen Personas 3.0, we have separated the design and scripting functionality. Each has its own menu optimized for the roles that will generally be using them. You can now build the screens using a more intuitive menu structure and theming engine. Scripts are built in a separate window, built in more of a development environment style. Once the scripts are written and tested, you then go back to the editing mode to attach scripts to the buttons. I will ask someone from the development organization to address your namespace question. Regards, Peter hi Peter, “It is possible to build screens that look like Fiori in both SAP Screen Personas 2.0 and 3.0. The style guide for 2.0 is available now.” – would you mind hyperlink the documentation here? – I simply can’t find this information.. 
– it would be pretty great to be honest, we're in a process with two customers deciding to adopt Personas 2 and I still have time to do something about it.

well, I understand Fiori can be used across all form factors, this is not what I questioned.. – all I said is that no one in their sane mind would develop on sap.m / blue_crystal (which is what you folks in SAP named Fiori) when dealing with desktop.. don't you find it strange that all the players who actually 'lead' UX are doing something different than what SAP is doing at this moment? – the boat has sort of passed, the patterns applied in Fiori are old patterns, and forcing a developer to end up with an old pattern for UX is not really nice..

about Personas 3, I'm part of the ramp-up.. – I understand we have two windows, one with an extremely limited WYSIWYG (which is what we use to edit the flavor) and another pull-up panel with the 'script editor'… however, my question was not this, it was something else. – why not release the js objects used in the Personas composer application to be used outside of Personas, as a standalone technology? – this way we could use the scripting engine outside Personas.. of course Personas could still use it internally, but this way SAP would extend its possible usage. Cheers, D.

Hi Daniel, Your point about the scripting engine is a very good one. As usual, what I am saying next is not a promise of a feature or a product. If you look at how scripting has changed from 2.0 to 3.0, you can see that it changed from being a feature in the editor into a separate component that is part of Personas but separated from the editor. If you check the back-end tables, you can also see that the scripts have their own structure and are not part of the standard flavor changes anymore. However, at this point in time, they are still linked to a flavor.
We are looking into the possibility of detaching them from a flavor and making them globally available (within Personas) to be called from other flavor-specific event handlers.

Why would you want to use the scripting engine outside of Personas? With Personas you can bind the scripts to control/screen events; without it you would have to write your own application that remote-controls the WebGUI. If you write your own application, wouldn't it be better to build it using a Gateway service to fetch/manipulate data? Since you are part of the ramp-up already, why not discuss this in the next ramp-up accelerator call? Cheers! — Tobias Queck SAP Screen Personas Lead Architect

hi Tobias, yeah, I understand how the scripting engine works, the objects you guys created in the ui5 composer app for Personas.. there is some stuff I could actually play around with and enhance to abstract it a little bit to be used 'outside' the Personas shell.. – the main reason why I believe something like this could be good is due to the fact that some 'processes' are already coded in the dynpros – and by this, I mean update processes pretty much; fetching would usually become some ICF SREST node (I personally don't like the GW oData since I find most of the stuff built in there slower than coding it myself) – but perhaps, for one or two 'queries' for which there is a dynpro screen built already, it could even be a target for such too.
– having such, someone could pretty much code using whatever toolset they want and finalize a dynpro transaction, and considering the dynpro screen does its job it may actually save some development time (of course, it's a performance trade-off)

in regard to your first question, I agree: writing a custom application from scratch will always provide better results, but there is no such thing as a 'clear API' in ABAP, so this way we always end up having to code a lot of back-end logic that somehow is already there in the dynpro screens; it's just that the 'format' of that code is not really re-usable – also, as Peter mentioned in his blog, "10,000 SAP-provided transaction codes" – meaning we would have access to utilize some of these.. and don't get me wrong here, from a developer's view I understand all the benefits of coding the services and opening up an API that should have already existed in SAP for the last 15 years, but there is some appeal to business in being able to 'face-lift' a dynpro screen, the reason why Personas exists. =)

"Since you are part of the ramp-up already, why not discuss this in the next ramp-up accelerator call?" – I have no idea what this is about, but I will check with the folks here how we can join in.. I have used Personas 2 and Personas 3 and I believe Personas 3 is far superior; more than happy to provide some feedback on the correct channel. Cheers, D.

Hi Daniel, You should have received an invitation for the next call by now. I am looking forward to continuing the discussion on the phone. Cheers! — Tobias Queck SAP Screen Personas Lead Architect

hi Tobias, yeah I did, thanks for that.. will try to join in, it's pretty early for me in NZ but hopefully I will be able to attend. Cheers, D.

Agree… where is this information??

Hi Peter, I am also interested in finding the "style guide for 2.0".
I got a Fiori style guide for Personas many months ago but it was very much a draft – I think I got it from the link draft of a style guide document within the SAP Screen Personas Practitioner Forum session 13 summary, but the link was only available until 8/8/2014. The only thing I can find is the generic Fiori design guide at: Rob

Hi Rob, I posted it here: SAP Screen Personas style guide for Fiori Feb 2015. Regards, Peter

Peter, Thanks for the information. I just went through the style guide. I do have two questions: 1. You reference numerous image files – like Fiori_LaunchPad_Hdr_1200x48.png. Where can we find those files? 2. You reference the Fiori Icon Font. Can the font be used in lieu of an image, or do you have to download the image as a file to use it? Thanks, Todd

Hi Todd, I have the same question. Have you found the resources yet, wherein we can find all those images mentioned in the Style Guide? @Peter – Please shed some light as to where these resources are available. I have recently taken your openSAP course. And it's really wonderful. — Regards Saurabha J

Hi Saurabha, You can find the assets here: Regards, Peter

Hi Peter. Do you have a link accessible to people outside SAP's corporate network?

Hi Peter, The link is not working — Regards Saurabha J

Try this one: SAP Screen Personas 3.0 Style Guide

Hi Tamas, I have gone through that, hence I was able to point out that the resources mentioned in that post (say, for Launchpad-Background, use Fiori_Gradient.png) are not listed. The guide lists various resources like Fiori_BasicPage_Hdr_1200x48.png or Fiori_Gradient.png and many more. But there is no link to download these resources so that they can be used in Personas screens. — Regards Saurabha J

Images are here in a zip file.

Thanks Peter !!!

Hi, I can't agree more on this, Daniel!

Fiori on desktop

On desktop (especially with the huge resolutions people have nowadays), Fiori apps look rather "empty" so to say.
It may even seem counter-productive since the process is split up into several screens where – on desktop – you could easily have reduced it to 1 single screen.

Theming Screen Personas using Fiori Guidelines

Besides, AFAIK, SAP Fiori can change the theme using the UI Theme Designer but Screen Personas follows a different concept* with its own internal theming (correct me if I'm wrong). Thus, each time you update the theme using UI Theme Designer, you will end up changing all of this manually in Screen Personas?

Last point, I very much liked the approach by Tobias about "Screen Personas responsiveness": Giving an SAP transaction a responsive design with SAP Screen Personas 3.0. But, to tell the truth, isn't it a clever trick compared to what Fiori offers "for free" (assuming you use the sap.m library)? Imagine you create variants of your Flavor for n population categories: you will end up having to maintain 3*n flavors in case you want to support 3 different resolutions?

*just wanted to know if UI Theme Designer support for Screen Personas is planned/part of the roadmap? Best regards, Guillaume

SAP Screen Personas does not work with Theme Designer. Since our themes must work across the different SAP GUIs, we had to create our own theme engine so the designs you create in Web GUI will also render properly in SAP GUI.

Hi Peter, Thanks for your answer. There is some confusion (at least, on my side) because we can read on UX Explorer () that UI Theme Designer is supported by SAP GUI for HTML, and SAP Screen Personas is HTML-based. Is it possible to create a flavor based on SAP GUI for HTML that would be themed using UI Theme Designer? If yes, the theme would then be "frozen" into this particular flavor, right? Thanks in advance for your insights. Best regards, Guillaume

SAP Screen Personas cannot use Theme Designer because the themes we build must work across both SAP GUI for HTML (Web GUI) and SAP GUI for Windows (SAP GUI).
SAP GUI for Windows cannot use CSS (cascading style sheets) as it is not browser-based. Regards, Peter

Thank you, Peter! PS: I enrolled in the SAP Screen Personas course on openSAP and I must say that – for a non-native English speaker like me – you're SO easy to understand (no offense to some of your German colleagues…)
advent-of-code-2018 Code for adventofcode.com/2018/ I'm trying to learn Julia so I'm using AoC to force myself to learn. Python solutions! part 1 part 2 -- this one feels clunky to me. 10 points for the variable name “thrice!” My part one ended up looking super similar! I ended up using zipto help with the comparison if the strings in part 2: Nice! Definitely think that's the easiest way to do number 1. Zip also makes sense for the second, though not using it allowed me to do the early return! True! As a lispy thinker, my brain wants to tell you to make those for loops into comprehensions, but my work brain is telling me that my coworkers would have me drawn and quartered to make your "find-one-difference" a one-liner! Kudos on how neat this looks. I find python and ruby to be difficult to make look "clean" when I write it. Part 1 You can probably see my Python shining through as I implemented a custom Counter struct to help me out. Part 2 Double for loop boooooooo! But it works and it's fast enough for now. I'm pretty happy with the mileage I got out of Iterators for this part. This is very well documented and clear, easy-to-read code. This also makes me want to jump into Rust again (I've only hobbied around with it here and there). Thanks! That really encouraging! I love how you've used enumerate and skip together in your nested for loop. I struggled to find a clean solution like this. Thanks! Yeah, I originally had a very manual nested for-loop set up, but after I got the tests passing, I decided to make an effort to do everything I could with iterators instead :) I've decided that the iterator module in Rust is where most of the magic that I'm missing from Python and Ruby lives. This was still bothering me, but I found the Itertoolscrate and the tuple_combinationsmethod. Check out my updated solution in the thread. Itertoolsmakes iterators even more powerful. 
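For readers following along without one of these languages handy, the part 1 checksum that the Counter/zip discussion above describes can be sketched in a few lines of Python; the box IDs are the example ones from the Day 2 puzzle statement:

```python
from collections import Counter

def checksum(ids):
    """Multiply the count of IDs with some letter appearing exactly twice
    by the count of IDs with some letter appearing exactly three times."""
    twos = threes = 0
    for box_id in ids:
        counts = Counter(box_id).values()
        # membership test, not a count: an ID with two different doubled
        # letters still only contributes once to the "twos" total
        twos += 2 in counts
        threes += 3 in counts
    return twos * threes

example = ["abcdef", "bababc", "abbcde", "abcccd",
           "aabcdd", "abcdee", "ababab"]
print(checksum(example))  # 12
```

The membership test (`2 in counts`) is what keeps an ID like "aabcdd" from being counted twice, which several commenters above ran into.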
Clojure (inefficient part 2) Part 1: Part 2: I like the threaded use of updatehere in part 1 - my method used a transient map and returned a persistent copy at the end: Nice one. Is definitely faster than mine. Thanks for hosting the private leaderboard! Never been on a leaderboard before lol so that'll be fun. :) I am curious - how is everyone posting their code? Is there a code tag on here, like there is on Slack? Is everyone sharing screenshots? I haven't posted a whole lot on here yet, so I'm not sure of the best way to share code. I'm using JS this year, so here's my day 2 solutions: not the prettiest / most succinct, but they work! Part 1: Part 2: There's an enhanced form of markdown for code blocks: triple backticks for start and end, and if you immediately follow the opening backticks with the language you get syntax highlighting. Difficult to show raw markdown in markdown unfortunately. Excellent, thank you! Much better than screenshots. :) Javascript lazy solution I don't have much time to solve the challenges :( So I'm just trying to get the stars. part 1 Part 2 PHP Ended up learning about a bunch of useful array functions like array_values, array_count_valuesand array_diff_assocfor this one! Part 1: Part 2: Enjoyed this one. My Kotlin solution: Were you also annoyed that Kotlin has .groupBybut not .frequencies? Have you thought about looking into Sequence? You could make your pairs function lazily? Using Listmeans you're materializing the entirety of your double for-loop right away. The lack of frequenciesdidn't bother me - it's easy to add. And yes, I've been thinking for the rest of the day that I should use lazy sequences. In this case the execution time remains O(N²) but as you say the memory footprint becomes more like constant. Definitely a good practice when you can't find a better algorithm. I’m trying to use a broader range of languages than I do usually, so I figured I’d try not to use one I’d already used before through the days. 
I use bash all the time, so I thought I'd get it out of the way early. This was not one of my better decisions, but worked fine! Part 1 uses regular expressions; I could have used an associative array like some other people in the thread, but for some reason I went here first. The sort function wasn't necessary, but helped with debugging. Part 2 uses the same double for loop lamented elsewhere, but it gets the job done. Neither of these is what I'd call "fast".

Woah, nice! It's always really cool to see bigger things done with Bash :) P.S. Depending on your Bash version, you can save yourself some typing with {a..z}. Oh yes, good call, I missed that compaction.

A little late to the party. I tried really hard to think of a solution to part 2 that only involved iterating the list once, but no luck. Here is my solution in Elixir. Part one: Part 2:

Part 1 Part 2 And from the output, I just copied and pasted the necessary characters that matched. That was faster than coming up with a custom method to do so.

Nice! Did you implement editdistance yourself, or is that an external library? It is external. I found it via a quick google search. The edit distance measures how many operations - insertion, deletion or substitution - it takes to get from one string to the other. Since all the strings in the puzzle input have the same length, insertion and deletion do not come into play and it works out perfectly. Ok that's cool, just checking. Neat!

So this was a pain. I also ended up with a double loop (O(n²)) and couldn't think of anything better. One thing I discovered during part two was that Crystal has Levenshtein distance as part of the standard library. It might have been a bit heavy going for what I needed, but it did the trick! Bring on day 3!

High five for a beefy standard library! That's awesome 😎

My solution with Python, not sure about the second part; probably not the fastest solution in terms of performance. Nice!
Don’t forget about collections.Counter for part 1! I didn’t know about difflib though. Cool! Thanks for the hint! Will do that later! My edited solution for part one with collections Counter Part 1: C# + LINQ = one-liner Part 2 Terrible C++ solution for part 1 ! Terrible is better than never finished! And this looks pretty good to me, not knowing C++ if that makes you feel better 😄 I am using Advent of Code to learn Golang, and here is the solution I came up with. Suggestions for improvements are always welcome! Part 1: Part 2: My idea was to use a dictionary and store the subnames and see if we encounter one we have already visited. Since there should only be one match we can immediately return it. I learned a lot about using maps in Golang! I am also using Python that I have more experience with to cross check solutions. I have tried to implement it with readability in mind, not performance. Part 1: Part 2: I did my solutions at midnight last night, but I was surviving on very little sleep at the time, so the resulting code was below standard. I tried again this morning and felt better about it. For anyone reading this, I'm still using the simple linesfunction from Day 1 which reads a file into a sequence of strings. Part 1 This was my solution last night: This is embarrassing code. I totally forgot about the frequenciesfunction, which is why I used group-byfollowed by count. But the 2 filteroperations in the final calculation meant that the results of the mapget processed twice. My morning attempt fixed these: This time I accumulated the 2/3 count values while processing the list, so it only goes through it once. Part 2 Since each element needs to be compared to every other element, I can't see any way around the O(n2 ) complexity. Of course, each element should only be compared to the ones after it, so as to avoid comparing each one twice (since comparing A/B has the same result as comparing B/A). 
When doing list processing, the only ways I know to refer to everything following an element are by using indexes (yuck) or with a loop. Unfortunately, I got fixated on the loop construct, and nested it: The other way you can tell that I wrote this on very little sleep was the use of ridiculously terse var names. On reflection this morning, I realized that the inner loop should have been a someoperation. This does a predicate test and terminates processing early, which is what I was doing manually with the inner loop. Also, my closefunction has several issues. First is the name! I was thinking of "A is close to B", but when I saw it again this morning I realized that it's the same function name for closing a resource. Something else that bothers me is that it processes the entirety of each string before returning, when a false result can usuall terminate early. Finally, a minor issue is that the anonymous function #(when (= %1 %2) %1)would be more idiomatic to use a singleton set on %1to compare to %2. The nearly=function now returns a string, rather than the array of characters, but hasn't changed much. I was still unsatisfied with it not terminating the test early, so I rewrote it to be faster. However, the resulting code lacks any kind of elegance, and I'm not convinced that it's worth the effort: Hopefully I'll get some sleep before attempting day 3. 😊 A bit late to the party, here are my solutions on Ruby: Part 1: Part 2: I learnt about the combination method :) My solution in JavaScript / Node 11, using the readlineinterface: readLines.js 02a.js 02b.js My solutions for part 1 Python Go Benchmark difference Giving it a go in good ol' JavaScript. Part 1 First creates an Object to track the # of times a letter appears in a string. That gets converted to a Set to filter out any duplicate values to account for situations where 2 or more letters appeared twice (as the string only gets counted once for the check sum). 
Part 2 I struggled with this one, so I'm sure there's a cleaner/more efficient way to do this. This takes each line of the input and compares it to the other lines, checking the characters and tracking the # of differences and the position of the last accounted for difference. The loops are set to break when a result with only 1 difference is found to prevent unnecessary looping. My solution to day 2, in Elixir. The double for-loop in part 2 is certainly not optimal, but it works. The available time was short today. :) Part one: Part two: Common: Part 1 in Ruby: I enjoyed playing around with the one liner which produces a frequency chart something like this: i.e. there are exactly 2 occurrences of the letter s in str Part 2 in Ruby Not happy about the double loop through the input file... but also can't think of a way to avoid it! It occurred to me that sorting the list first might improve the chances of finding a match earlier in the loop since all similar strings would be together, but thinking about it, perhaps there's equal chance that the similar strings would end up being sorted to the bottom of the list and it wouldn't help at all :/ Another thought - many of the input strings are v. similar, maybe they could be grouped into sets early on (e.g. based on the first 4 chars or something) and then you only look for similar strings in the same set? Going to read into hamming distances now! I lost about 10 minutes to my son stalling getting into bed. Kotlin Solution Answer 1 Once again a fairly straightforward opener. Just run through, do some simple frequency checking and spit back the results. I think this is technically O(n2) but it moved fast enough for me. (And in a more lazy language, it ends up being closer to O(n) anyway) Answer 2 As Ryan said, this is just Hamming distance with the added wrinkle that you need to throw away the differences while counting them. 
Lots of optimization room here, but once again, we shave off just under half the possibilities by only doing unique combinations and going from a raw n^2to (n*(n+1))/2. At around 10 ms (calculated by shuffling my input 1000 times), I don't think there's an easy way to make this noticeably faster at this point without a more esoteric method. Node.js Common async generator. Read the file in chunk by chunk and yield each product ID based on new lines. Part 1 was fun with some ES6 array -> Set -> Map to get the value counts Part 2 got interesting. I needed to generate all pairs for every product ID. I made my Hamming Distance function also return the common letters. Then tied it all together by running each pair through the Hamming Distance function and getting the lowest. Putting it all together: Full code here: github.com/MattMorgis/Advent-Of-Co... Making hamming distance return common letters is a slick way to go. I was looking at the duplication between the hamming distance and common letters functionality in my solution and was a little bummed about it, but I couldn't figure out a good way to do it. I like this! Parts 1 and 2 in Ruby: Rust Part 1 Part 2 I'm still searching for a nice iterator adapter to replace the nested loop.I finally used the itertoolscrate and it's AMAZING!!! I didn't see any C# glancing through: (Note AddIfNotNullis from my extension methods package nuget.org/packages/Kritner.Extensi... Part 2: I dunno how I'm gonna keep up during the week, but putting my solutions in the repo github.com/Kritner/Kritner.AdventO... Late to the party! Still digging F#, but I'm definitely hindered by my lack of familiarity with what's available in .NET I'm not super thrilled with my Day 2 code, but I haven't really had the time to tweak it with everything going on at work currently. Swift Solutions Part 1 This one was fairly simple. Just count how many times each letter appears in each Stringand act appropriately. 
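A language-neutral sketch of the trick described above, a Hamming-distance helper that also hands back the matching characters; the function name is mine, not taken from the Node.js code:

```python
def hamming_with_common(a, b):
    """Return (number of differing positions, string of matching characters)
    for two equal-length strings."""
    common = "".join(x for x, y in zip(a, b) if x == y)
    return len(a) - len(common), common

print(hamming_with_common("fghij", "fguij"))  # (1, 'fgij')
```

When a pair comes back with distance 1, the second element of the tuple is already the puzzle answer, so no second pass over the strings is needed.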
I do like the fact that a Swift Stringtype is really an Arrayof Characters. Part 2 This one feels clunky, if I get a chance I'll revisit it. I break on the first hit for a solution to short circuit everything and return the answer, this can help a lot with so many Charactersto test. I use zip(_:_:)with a filter and count to quickly test how many differences there are in the same positions. When I see two strings that differ by one character in the same position I move to the next step. In the second part I cast the Arrays into Settypes so that I can use the Collection Algebra features to quickly find the actual character to remove by subtracting one collection from the other. With that done I can just remove it from the source Arrayand return what's left. Normally I would import Foundationhere so that I could just use NSOrderedSetand skip a few steps. I wanted to try and keep it all in the Swift Standard Library though, so I didn't! Here's my C# .Net solution: Hosting my solutions on my github... Code for advent-of-code-2018 Code for adventofcode.com/2018/ I'm trying to learn Julia so I'm using AoC to force myself to learn. I'm pretty sure part 2 has to be O(n2) but you can cut down on the total number of iterations by only looking forward...here's my solution (implemented in Julia) Julia's a weird language...it's so close to python that I forget it does weird things like "Arrays are 1-indexed", and it spells None/ nullas nothing I agree that Julia is weird, but I actually love it because it adds some functional stuff that I miss in python like the pipe operator! We need better cover images 😂 I was kind of hoping that the magical resizing was an automated resizing thing that happens and not one of you guys fixing it for me. I can actually put a background on it and scale it. What are the optimal dimensions? Is the white text on black ok or should I come up with something more fancy? 
I have very little aesthetic skill, so I’d appreciate any suggestions you or anyone else has. Ryan, do you want me to design something? The only hard part is that I probably won't be up at midnight most nights to add specifics. Do you have any design software? If so, I can transfer you a file! We could also use Canva. That would be amazing! I have a Figma account, but that’s it. I’ve never heard of Canva and pretty sure I don’t have any design software, but if you tell me the best way to handle it, I’ll happily do whatever you suggest. I’m happy to learn. Cool -- sending you a DM via /connect! This one was difficult for me, but I eventually got it! I'm also not happy about that double for loop in part 2, but I think I sped it up by removing the elements as I compared them? github.com/stevieoberg/advent-of-c...
Yes, this is about machine learning and not some weird fetish. This post is totally safe for work, promise. With that out of the way:

What is bumping? Bumping is a simple algorithm that can help your classifier escape from a local minimum. Huh? Read on, after a few imports you will see what I mean.

%matplotlib inline
import numpy as np
np.random.seed(457)
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_graphviz

def make_chessboard(N=1000, xbins=(0., 0.5, 1.), ybins=(0., 0.5, 1.)):
    """Chessboard pattern data"""
    X = np.random.uniform(size=(N, 2))
    xcategory = np.digitize(X[:, 0], xbins) % 2
    ycategory = np.digitize(X[:, 1], ybins) % 2
    y = np.logical_xor(xcategory, ycategory)
    y = np.where(y, -1., 1.)
    return X, y

def plot_data(X, y):
    fig, ax = plt.subplots()
    ax.scatter(X[:, 0], X[:, 1], c=y)
    ax.set_xlabel("Feature 1")
    ax.set_ylabel("Feature 2")
    ax.set_ylim([0, 1])
    ax.set_xlim([0, 1])
    return (fig, ax)

X_, y_ = make_chessboard(N=1000)
p = plot_data(X_, y_)

Let's train a simple decision tree and see how well it does. Then we will inspect the tree to see what it learnt. A neat feature of scikit-learn is the export_graphviz function which will draw a decision tree for you. Interpretability heaven!

dt = DecisionTreeClassifier(max_depth=2)
dt.fit(X_, y_)

DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=2,
            max_features=None, max_leaf_nodes=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            random_state=None, splitter='best')

The following two functions help with interpreting the decision tree. The first one, visualise_tree, draws a decision tree. We can look at each node's decision function, how many samples were sent to the left and the right, etc. The second function, draw_decision_regions, superimposes the prediction of the decision tree on the samples. I find this more intuitive for deep trees.
def visualise_tree(tree):
    """Draw the decision tree"""
    export_graphviz(tree, out_file="/tmp/tree.dot",
                    feature_names=["Feature 1", "Feature 2"])
    !dot -Tpng -o /tmp/tree.png /tmp/tree.dot
    from IPython.display import Image
    return Image("/tmp/tree.png")

def draw_decision_regions(X, y, estimator, resolution=0.01):
    """Draw samples and decision regions

    The true label is indicated by the colour of each marker.
    The decision tree's predicted label is shown by the colour
    of each region.
    """
    plot_step = resolution
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
                         np.arange(y_min, y_max, plot_step))
    Z = estimator.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    fig, axis = plot_data(X, y)
    cs = axis.contourf(xx, yy, Z, alpha=0.5, cmap=plt.cm.RdBu)

with sns.axes_style('white'):
    draw_decision_regions(X_, y_, dt)

visualise_tree(dt)

As you can see most of the figure is red! What happened? There is no way for the tree to decide where to split along either of the two features. Each split is as good as any other. In the end it picks one at random, which often leads to a suboptimal choice. In this case it split on feature 2 <= 0.0137. Not a smart move. For the second split it does not do much better either.

The idea behind bumping is that we can break the symmetry of the problem (or escape the local minimum) by training a decision tree on a random subsample. This is similar to bagging. The hope is that in the subsample there will be a preferred split so the tree can pick it. We fit several trees on different bootstrap samples (sampling with replacement) and choose the one with the best performance on the full training set as the winner. The more rounds of bumping we do, the more likely we are to escape. It costs more CPU time as well though.
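That symmetry-breaking can be made concrete with a small standalone sketch (plain NumPy, variable names are mine): on the full chessboard, both sides of a split at 0.5 are an almost perfect 50/50 class mix, so every threshold looks equally useless; in a bootstrap resample the balance fluctuates, so some split looks slightly better than the rest and the tree can latch onto it.

```python
import numpy as np

rng = np.random.RandomState(42)

# Chessboard labels, as in make_chessboard: XOR of the two half-plane indicators
X = rng.uniform(size=(1000, 2))
y = np.logical_xor(X[:, 0] > 0.5, X[:, 1] > 0.5)

# On the full data a split at feature 1 = 0.5 is uninformative:
# the left side is an (almost) perfect 50/50 mix of the two classes.
full_balance = abs(y[X[:, 0] <= 0.5].mean() - 0.5)

# In a bootstrap resample (sampling with replacement) the same split
# is no longer exactly balanced, which breaks the tie between splits.
idx = rng.randint(0, len(y), len(y))
boot_balance = abs(y[idx][X[idx, 0] <= 0.5].mean() - 0.5)

print(full_balance, boot_balance)
```

Both imbalances are small, of order 1/sqrt(n), but the tree's split search only needs a tie-breaker, not a large effect.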
from sklearn.base import ClassifierMixin, MetaEstimatorMixin
from sklearn.base import clone
from sklearn.utils import check_random_state

class Bumper(ClassifierMixin, MetaEstimatorMixin):
    def __init__(self,
                 estimator=DecisionTreeClassifier(max_depth=2),
                 n_bumps=20,
                 random_state=None):
        self.estimator = estimator
        self.n_bumps = n_bumps
        self.random_state = random_state

    def fit(self, X, y, sample_weight=None):
        random_state = check_random_state(self.random_state)
        n_samples, n_features = X.shape
        self.best_estimator_ = None
        best_score = -np.inf
        best_estimator = None
        for n in range(self.n_bumps):
            # draw a bootstrap sample (sampling with replacement)
            indices = random_state.randint(0, n_samples, n_samples)
            estimator = clone(self.estimator)
            estimator.fit(X[indices], y[indices])
            # performance is measured on all samples
            score = estimator.score(X, y)
            if score > best_score:
                best_score = score
                best_estimator = estimator
        self.best_estimator_ = best_estimator
        return self

    def predict(self, X):
        return self.best_estimator_.predict(X)

from IPython.display import YouTubeVideo
YouTubeVideo('I5Zk2vUmjpk')

Exactly. Though actually it is bumping, not pumping. Let's try this out. A simple decision tree with a maximum depth of two and 50 rounds of bumping.

bumper = Bumper(n_bumps=50, random_state=5572)
bumper.fit(X_, y_)
draw_decision_regions(X_, y_, bumper.best_estimator_)
visualise_tree(bumper.best_estimator_)

Voila! We find the perfect split. So what, my gradient boosted decision trees/deep neural networks/ensemble of NN and random forests could have done this as well. This is true. The point here is that we do not need a complicated model to make a perfect decision in this case. A complex model could have done it, however you would have to think quite hard to understand what it is doing. Bumping is another tool to add to your toolbox if you want a simple model. During training it takes extra CPU time; once you have your model it is a simple one that is fast to evaluate.
Let's try bumping on a more complicated pattern: a chessboard with four rows and four columns. As you can see we do considerably more rounds of bumping. The brief wait is worth it as we are rewarded with a nearly perfect pattern.

X4by4, y4by4 = make_chessboard(N=5000,
                               xbins=(0., 0.25, 0.5, 0.75, 1.),
                               ybins=(0., 0.25, 0.5, 0.75, 1.))
bumper2 = Bumper(n_bumps=1000,
                 estimator=DecisionTreeClassifier(max_depth=4),
                 random_state=457*31)
bumper2.fit(X4by4, y4by4)

with sns.axes_style('white'):
    draw_decision_regions(X4by4, y4by4, bumper2.best_estimator_)

visualise_tree(bumper2.best_estimator_)

With this: have fun bumping!

I found this nice little algorithm at the end of chapter 8 of Elements of statistical learning. Some more details are in a paper written by Tibshirani and Knight in 1999. If you find a mistake or want to tell me something else get in touch on twitter @betatim
lamine_ba's Blog

Building RESTful web services with JAX-RS, JAXB and Groovy

Any intelligent fool can make things bigger, more complex, and more violent. But it takes a touch of genius and a lot of courage to move in the opposite direction. Albert Einstein

Introduction

Ladies and Gentlemen, beyond the fact that I'm going today to share with you most of the things a developer should know when building a JAX-RS web service with Groovy, this blog post is first and foremost a farewell to the Java language. It was a fun ride, but now I think it is high time for me to move on. And how odd and ridiculous is it to build the dynamic parts of my application with a static language?

How to set up the environment?

Unless you are using Grails and its grails-jaxrs plugin or a proprietary solution, I have no answer to how to write a Groovy service which gets compiled and registered at runtime within the JAX-RS system. But on the other hand, if you are asking me how to do so in my own framework, my answer would be that "there is nothing to configure, everything has already been set up and the core classes of JAX-RS and JAXB have been transparently imported". So the next thing, Ladies and Gentlemen, is to just write a Groovy service to see what we will get...

- CompilerConfiguration configuration=new CompilerConfiguration();
- ImportCustomizer customizer=new ImportCustomizer();
- customizer.addStarImports("javax.ws.rs","javax.ws.rs.core","javax.xml.bind.annotation");
- configuration.addCompilationCustomizers(customizer);

The Problem

Beyond the classical hello world service which might be written with Groovy like this, and beyond the simple statement that any String with an embedded expression is a GString, today we are going to follow my dear friend Joe in his obsession to make the list of his customers available through REST.
And at the same time, we will also look at the potential errors he might encounter in the resolution of his problem and the set of techniques he really has under his belt.

- class CustomerDao {
- def customers=[]
- CustomerDao() {
- customers << new Customer(id:1,firstName:"Bob",lastName:"Lee")
- customers << new Customer(id:2,firstName:"Jim",lastName:"Dry")
- customers << new Customer(id:3,firstName:"Joe",lastName:"Hart")
- customers << new Customer(id:4,firstName:"Frank",lastName:"Lucas")
- customers << new Customer(id:5,firstName:"Mark",lastName:"Zuckerberg")
- customers << new Customer(id:6,firstName:"Alexis",lastName:"Sanchez")
- }
- }

Sending Representations to the target client using the Groovy Builders

XML with MarkupBuilder

- @Path("/customers")
- class Service {
- @Produces(["application/xml"])
- @GET
- def getCustomers() {
- def writer = new StringWriter()
- def builder = new groovy.xml.MarkupBuilder(writer)
- builder.customers { dao.customers.each() { entity ->
- customer(){
- id entity.id
- firstName entity.firstName
- lastName entity.lastName
- }
- }
- }
- writer.toString()
- }
- }

Output

- <customers>
- <customer>
- <id>1</id>
- <firstName>Bob</firstName>
- <lastName>Lee</lastName>
- </customer>
- <customer>
- <id>2</id>
- <firstName>Jim</firstName>
- <lastName>Dry</lastName>
- </customer>
- ....
- </customers>

JSON with JsonBuilder

- @Path("/customers")
- class Service {
- @Produces(["application/json"])
- @GET
- def getCustomers() {
- def builder = new groovy.json.JsonBuilder()
- builder(customers : dao.customers)
- builder.toString()
- }
- }

Output

- {"customers":[{"firstName":"Bob","id":1,"lastName":"Lee"},{"firstName":"Jim","id":2,"lastName":"Dry"},
- {"firstName":"Joe","id":3,"lastName":"Hart"},
- {"firstName":"Frank","id":4,"lastName":"Lucas"},{"firstName":"Mark","id":5,"lastName":"Zuckerberg"},{"firstName":"Alexis","id":6,"lastName":"Sanchez"}]}

If you are not impressed and shocked at the same time by the brutality of Groovy, Ladies and Gentlemen, this time I don't even know what to say to you, but please always remember this: as a rule of thumb, a service should always be decoupled from the code that marshals and unmarshals the data. So if you are aware of the concept of a MessageBodyWriter, I think you know perfectly how to refactor the code. And one more thing: maybe we can use an XmlSlurper or a JsonSlurper for the opposite operation?

XML

- def c = """<customer>
- <id>1</id>
- <firstName>John</firstName>
- <lastName>Doe</lastName>
- </customer>
- """
- def customer = new XmlSlurper().parseText(c)

JSON

- def slurper = new JsonSlurper()
- def result = slurper.parseText('{"customer":{"id":"1","firstName":"John","lastName":"Doe"}}')

Using JAXB for the Marshaling/Unmarshaling

That is where the funny part is and also where all the real problems start.

- @Path("/customers")
- class Service {
- @GET
- @Produces([MediaType.APPLICATION_JSON,MediaType.APPLICATION_XML])
- def getCustomers() {
- dao.customers
- }
- }

When running this example and accessing the service through its URL, here is the first explicit error we will get: A message body writer for Java class java.util.ArrayList was not found.
And this error is caused by the following declaration in the Dao:

- class CustomerDao {
- def customers=[]
- }

and by the fact that we haven't explicitly set the return type in the Service.

- @Path("/customers")
- class Service {
- @GET
- @Produces([MediaType.APPLICATION_JSON,MediaType.APPLICATION_XML])
- List<Customer> getCustomers() {
- dao.customers
- }
- }

Groovy making the type optional does not mean that JAX-RS does it also. So Ladies and Gentlemen, you must pay attention to that. The object type and the media type will ultimately determine how to select a MessageBodyWriter or a MessageBodyReader. Now, to close this post, let's run the example again one last time and get this following error: groovy.lang.MetaClass is an interface, and JAXB can't handle interfaces. We can easily solve it by setting the XmlAccessType to NONE on the domain class (@XmlAccessorType(XmlAccessType.NONE)) and by explicitly doing the mapping of its fields by ourselves with @XmlElement. So what you have to bear in mind here is that: there is NO MORE configuration by exception.....

A Service is a Module: Let's develop it from the cloud

You can find this module deployed in my application on the cloud at this url so that you can complete your knowledge there and download it if you want. Also don't hesitate to click on the WADL button to see everything that your service can do. If you click on the customer : application/json link at the right or on another link, all the attributes of your entity will be listed. And next time for sure, I will show you how to build a web client for this service and how to consume it within a JSF environment with my JAX-RS web application framework YouControl and an Ajax library like jQuery. So until then, Ladies and Gentlemen, have a nice week....

Documentation

No one can create something without taking inspiration somewhere........

- JAX-RS as the one Java web framework to rule them all?
- Leaving JSPs in the dust: moving LinkedIn to dust.js client-side templates
- Modern Web Apps using JAX-RS, MongoDB, JSON, and jQuery
- RESTful services with jQuery and Java using JAX-RS and Jersey
- Tutorial: HTML Templates with Mustache.js
- Client-Side MVC frameworks compared
- The client-side templating throwdown: mustache, handlebars, dust.js, and more
- Asynchronous UIs - the future of web user interfaces

If JAX-RS had an MVC framework?

Waiting for more? Let's first start with the idea you are expecting in JSF 2.2...

Multi-Templating

In a recent interview, the JSF spec lead did talk about it and, as he said, this feature has gone through some iterations on the EG. And the lovely templates below are really the confirmation that a JSF developer will never build the user interface of his JSF application from scratch. He will just download a template from the web and hop, he will be ready to go. And that is one reason why I totally disagree with the idea of putting the templates in a jar. Frankly, if I do so, how can I edit my templates with my cloud web-based editor? And with the rise of Java EE 7, isn't it time to move our IDE to the cloud......

Templates

100.Jf_Texturia

YouControl

Let's come now to the interesting part of this blog post. I'm asking if it is possible to build a JSF application which does not save its state? No? Better: is it possible to build a JSF application which has a lifecycle of one phase (Render Phase) and which has no <h:form> inside its views? No? Let's move the level higher: can one build a JSF application with a set of UI components that can render themselves on the client side? If your answer is no, I think it is high time for me to introduce you to my JAX-RS web application framework: YouControl....
If you have any doubt that this JSF application is in fact a JAX-RS application, just display the list of my modules and click on the wadl link to see it for yourselves. To finish, JSF 2.0 already has support for HTML5 and CSS3. If you put a <canvas> tag in your view, will Facelets complain, or just your browser if it does not support it? And at last, if you are wondering how my client-side UI components are made, just imagine yourself consuming your Customer service with Html 6 or Html 7:

- @Path("/customers")
- public class CustomerService {
- @GET
- @Produces({MediaType.APPLICATION_JSON,MediaType.APPLICATION_XML})
- public List<Customer> getCustomers(){
- return dao.getCustomers();
- }
- }

- <c:table>
- <thead>
- <tr>
- <th>ID</th>
- <th>First Name</th>
- <th>Last Name</th>
- </tr>
- </thead>
- <tbody>
- <tr>
- <td> {{id}} </td>
- <td> {{firstName}} </td>
- <td> {{lastName}} </td>
- </tr>
- </tbody>
- </c:table>

If you are too lazy to design the table, set the auto-create attribute to true and let me do the Rest for you. Yes, Ladies and Gentlemen, Javascript, Json and client-side templating systems like Mustache are really awesome.... I sincerely hope that at this moment, the JSF guys are really asking this question: Hey, wait a minute, where is my Managed Bean? (#{customerBean.customer.id}, #{customerBean.customer.firstName}, #{customerBean.customer.lastName}...)

Before reading this, please read my response to your ... by edburns - 2012-04-11 07:38

Before reading this, please read my response to your 20111127 post. Hello Lamine, thanks for this nice blog post. You say,

> That is one reason why, I totally disagree the idea of putting the
> templates in a jar. Frankly, If I do so, how can I edit my templates
> with my cloud web-based editor?

Putting templates in a jar is a deployment time expedient.
The idea is for templates to be hosted in a maven repo. If you want to edit them, you'd have to manually download it and bundle it into your app, or you could re-package it as a jar and consume it that way. Regarding YouControl, I agree with Cay, keep at it! I hope you find a productive partnership coming from this work. Thanks, Ed

Hello, Ed Thanks for your nice comment. Yes I know, I ... by lamine_ba - 2012-04-11 10:27

Hello, Ed Thanks for your nice comment. Yes, I know, I was very radical in my statement but it was just a way to show to the JSF users the opposite face of the vision you have. In fact what you have read is just a projection of a near future from my perspective. So the key thing is to design the Multi-Templating feature abstract enough to let the conflicting visions coexist...... Regarding YouControl, yes I will keep at it while hoping to have a productive partnership soon....... Thanks, Lamine

This definitely an interesting way to go. I'll definitely ... by pandavr - 2012-04-11 08:30

This is definitely an interesting way to go. I'll definitely take a look at it. Ivan

Hi, I am glad you are thinking along these lines. I just had ... by cayhorstmann - 2012-04-10 21:56

Hi, I am glad you are thinking along these lines. I just had to make a service accessible to people, and I thought, heck, I'll just use JAX-RS and some forgettable UI. Much to my surprise, that was a smart move. Yes, I want to go the next step with JSF to use a pretty UI, but I also don't want to lose the benefits of a service-oriented interface. Keep at it!

Hi Cay, Thank you very much for your comment. It has set ... by lamine_ba - 2012-04-11 04:50

Hi Cay, Thank you very much for your comment. It has set up the real vision, which is to replace the programming model of JSF by the programming model of JAX-RS while keeping all the parts related to the UI that JSF did right (Composite components, Resource Handling, Facelets Templating, Multi-templating...).
By the way, Core JavaServer Faces is one of the books from which I have learned JSF. My sincere congratulations for all the good things in it.... Thanks, Lamine

What JSF should become?

Perfection is achieved through time. Anonymous

Introduction

How can someone improve if he has no clear vision of what he could become? I'm asking this tough question to myself while reading the definition of JSF. As it is written in its specification, JavaServer Faces is a user interface framework designed to significantly ease the burden of writing and maintaining applications that run on a Java application server and render their UIs back to a target client. Woow! That is really a lovely definition, but is it still relevant in 2011 and beyond? Ladies and Gentlemen, are you sure that we don't need more? Our standard web framework is built around the greatest design patterns, but it definitely lacks the fundamentals. I have read nowhere that JSF has provided any guidance to developers through the form of "development patterns". And we still have the huge pretension to ask you to use a web framework which has no standard development models coming through the form of conventions. Let me now adopt a more professional attitude in order to clarify my position with some concrete examples; thus you will better understand what I'm talking about.

Views

Let's begin with this simple question. Where must one store his views? Inside a folder put in WEB-INF or in the root directory? Shall we name this folder "pages" like Steve or "views" like Mike? No, let's name it "bobmarley" like Crazy Joe. And with this lack of convention, we have lost one possibility to have some nice automations to ease the development and to make our JSF applications homogeneous. And when someone has the requirement to move to another project to work with another team, he must always take the time to first learn their own conventions.
Isn't it a waste of time, and frankly, does one understand the meaning of a knowledge if it cannot be shared?

Cross-Site Request Forgery Protection (CSRF)

This feature is mostly specified and it will come with JSF 2.2. That is a standardization of how JSF provides CSRF protections. If you haven't read the proposal yet, here is how it will be implemented. Within the <faces-config-extension> element, a new <protected-views> element will be introduced. This element contains zero or more <url-pattern> elements.

- <faces-config-extension>
- <protected-views>
- <url-pattern>/views/deleteBlogEntry.xhtml</url-pattern>
- <url-pattern>/views/deleteComment.xhtml</url-pattern>
- </protected-views>
- </faces-config-extension>

Any view that matches any of the url-patterns may only be reached from another JSF view in the same web application. And because the runtime is aware of which views are protected, any navigation from an unprotected view to a protected view is automatically subject to protection. That is really a lovely theory, but in practice it means this: if you have 100 views to protect, you have 100 XML lines to write, and that is really lovely, annoying stuff made for bureaucrats. I would rather have this convention combined with URL rewriting in order to have this simple automation: any view stored in the views/protected directory is automatically protected. And doing so, the API which has been added to the ViewHandler for a support of this feature at runtime will now be full of sense.

Facelets Templating

Ladies and Gentlemen, have you watched "Saving Private Ryan" by Steven Spielberg? I'm using this analogy as a way to undoubtedly say that Facelets has saved the life of JSF. And I'm truly in love with its templating approach, but what I hate the most is:

- There is no convention about where to store the template. A view can only use one template and it has a huge dependency with it.
And if you are so crazy as to change its location, prepare yourself for a cascading update of your views.

- <html xmlns="http://www.w3.org/1999/xhtml" xmlns:
- <body>
- <ui:composition template="/templates/template.xhtml">
- </ui:composition>
- </body>
- </html>

- There is no clear separation between the web designer and the developers. They are dependent on each other. Who is waiting for whom? Such is the question.
- The template and its resources (css, js, images) cannot be externalized, packaged and shared.

Multi-Templating

But these complaints are no longer true with JSF 2.2 and its Multi-Templating system. But don't be too confused by this feature. It is only an extension of the Facelets templating system for a better modeling of the concept, so that:

- A view can use more than one template. And with this new abstraction, it cannot even guess the location of the templates. Who knows, maybe they are stored somewhere in the Cloud.

- <html xmlns="http://www.w3.org/1999/xhtml" xmlns:
- <body>
- <ui:composition>
- </ui:composition>
- </body>
- </html>

- The web designer and the developers have no dependency. No one is waiting for anyone.
- A template and its resources (css, js, images) can now be externalized, packaged and shared.

To finish, I think it is well-established knowledge that in a JSF application, a view will always use a template. So I don't see any valuable reason to waste our time by writing this <ui:composition> boilerplate in our views. Don't you agree that once a knowledge is set, things should be automatic? But fortunately, such will be the case for JSF 2.2.

Security Model

I will end with a feature I'm battling hard to have. So if you see any value in it, please leave a comment for our spec lead. But take the time to first read the conversation below:

- Hey template authors, it is very nice to see you designing templates for us, but can we delegate more work to you? Can you design the login form for us?
- Of course, we can design it in the template or in a separate page. And we don't care whether you will use JAAS, Spring Acegi or such.
The only thing we need is an authentication model coming through the form of a standard managed bean for the connection with your security logic.

- Ahh! I understand. You want to have an abstraction like SeamLoginModule....

- <h:form>
- <h:panelGrid columns="2">
- <h:outputLabel>Username</h:outputLabel>
- <h:inputText value="#{credentials.username}"/>
- <h:outputLabel>Password</h:outputLabel>
- <h:inputSecret value="#{credentials.password}"/>
- </h:panelGrid>
- <h:commandButton value="Login" action="#{identity.login}"/>
- </h:form>

And to complete the whole thing, maybe we can add to this a view-level security in order to prevent non-authenticated users from accessing restricted views. Nonetheless, the decision does not belong to me, but undoubtedly the JSF community has a louder voice. So Ladies and Gentlemen, feel free to contribute and be hard on yourself if you want to improve. Yes, truly I love JSF, but today I'm not tender with it...

Hello Lamine, Reading your 20120410 post, I had cause ... by edburns - 2012-04-11 07:35

Hello Lamine, Reading your 20120410 post, I had cause to re-read your 20111127 post and I think it's appropriate to reply first to 20111127 and then to 20120410. First, let's look at the convention for where to put your Facelet pages. Aside from establishing a project format convention, which is arguably the most valuable thing maven has brought to the world, what value would it add? In order to make it worth doing this, there would have to be some value over and above just establishing a convention. For example, let's say that we add an item to the list of conditions that cause the FacesServlet to be automatically mapped, as shown in JSF 2.1 [1]. The existing list is shown there. We can add this one: there exists a non-empty "views" subdirectory of the web app root. In this case, we have both created a convention and introduced a feature that leverages that convention. Is that sufficient value? I'm not sure. Now let's look at your CSRF proposal. This one is clearly valuable.
And, in your proposal, it depends on the creation of a "views" concept. On to Multi-templating. I think the naming convention I started out my response with falls neatly under this feature. As far as login, yes, we certainly need to do this. Thanks, Ed [1]

Hello Ed First of all, you have to forget most of the ideas ... by lamine_ba - 2012-04-11 11:24

Hello Ed, First of all, you have to forget most of the ideas I have shared here and you have to keep only this one: storing the views in a "views" directory. Now the problem is how to add this feature in JSF 2.2 without breaking the backward compatibility stuff. Here is the solution: give only the behavior below to the default Facelets Resolver.

- public class ViewResolver extends DefaultResourceResolver {
- public URL resolveUrl(String path) {
- URL url = super.resolveUrl("/views" + path);
- return url != null ? url : super.resolveUrl(path);
- }
- }

If the user sends a request for /index.xhtml, the view resolver will first try to resolve index.xhtml in the views directory by appending "/views" to the path (/views/index.xhtml). And if we have nothing, we come back to the original behavior with the original path....

Hi Ba, it would be great for sure! What about new ... by axcdnt - 2011-11-28 04:36

Hi Ba, it would be great for sure! What about new versions of Spring Roo, soon supporting JSF 2.0 and Primefaces? I think it will fill some of these needs. Check this out java.dzone.com/articles/jsf-20-spring-roo Thanks for this great post!

Hi axcdnt, Thank you very much for this nice and clever ... by lamine_ba - 2011-11-28 09:56

Hi axcdnt, Thank you very much for this nice and clever comment. In this blog post, I did not write a conclusion this time because my goal was to let you discover by yourselves what JSF could be. The next step for a web user interface framework is to become a tiny web application framework that one can extend. And no framework can reach this level without having conventions.
And once those standard development patterns are established, its community will better see the value and the power of app generators like Spring Roo. Yes, its next versions will rock, and I can't wait for the one which will come with this right combination: JSF 2.2 + Primefaces (views) + Multi-Templating (templates) + Task Flows (modules) + Security Model. A Task Flow is a module written by a web developer and, like a template, it is shareable, but it would be nice if we could write its business logic with scripting languages so that we can customize it. We are also coming soon with an integration model with JAX-RS 2.0, so be ready to add it to the stack. Regards, Lamine

Hi Ba! nice blog again! The concept "Convention over ... by papesdiop - 2011-11-27 12:54

Hi Ba! Nice blog again! The concept "Convention over Configuration" that you want to adopt with JSF will ease web development and improve maintainability in Java EE. I don't see any constraints to block the community in adopting these conventions quickly. The goal is to facilitate and improve the development (the time-to-market). Continue this way; with some guys like you, JSF will become a great web framework. Regards Pape S. Diop

Hi Diop! Thanks for the nice comment. I really appreciate ... by lamine_ba - 2011-11-27 13:21

Hi Diop! Thanks for the nice comment. I really appreciate it. Yes, with conventions, things will become easier. JSF is already a great web framework, but it could be more.. Regards Lamine

A Multi-templating System in JSF 2.2 : what does it mean for you?

For prosperity to be sustained it must be shared. thegatesnotes

Introduction

Ladies and Gentlemen, today, while I'm writing this blog post, I'm one-eyed. Last Saturday, I woke up early in the morning and I noticed that I couldn't see out of my right eye. I went to see an ophthalmologist to find the cause and the solution, and now I'm taking drugs. I'm sharing this crazy experience with you only because it has taught me something.
That life is all about having the right vision, and you can do that only if you are in the right perspective to see things right. Now let's come back to the subject of my blog post: a Multi-templating system in JSF 2.2, what does it mean for you? But before answering this question, could I have an insight into your perspective? Who are you in regard to the JSF audience? A Page Author, a Component Writer, an Application Developer, a Tool Provider or a JSF Implementor? If your role belongs to any of these categories, it is fair to call you a "Web Developer". And I can fairly say that you are not the primary target of this feature. We have built this Multi-templating system, inspired by Joomla's, the most popular CMS in the PHP world, to correct a mistake that Java EE and its web frameworks have been making ever since. Why did they put the web designers aside? Why does Java EE have the ugliest user interfaces I have ever seen? Why are most of the web sites PHP-based? Truly, on the web and in terms of audience, a weak language is kicking the ass of a Platform. Woow! That is really amazing! A real remake of David and Goliath. And as a confirmation, I'm going to share with you a little part of the discussion I had yesterday with Akintayo Abiona Olusegun, a Nigerian developer who has converted his PHP application into a JSF one. I asked him why he did that and also if I could have an insight into the state of the art in the area of web development in his country. And below are his answers.

"I did decide to use JSF not because I have any particular problems with PHP, no. The website needs a facelift and since I have just studied JSF and all the goodness, I decided to use JSF. Primefaces is also one of the things that helped my decision. Most of the things I have to write css and/or javascript for (or look for plugins upon plugins for jquery) are made very easy using primefaces. Java EE 6 was the icing on the cake.
It is really now very easy to do a Java EE APP compared to, say, 5 years ago......"

"Here in Nigeria, the predominant web design language is still PHP though. A few people use Java for the web really; most people just use either PHP or ASP.NET. The reason is because hosting a Java APP is still very expensive and most of the independent Java developers I know can't afford it. But Java is still the most popular programming language; ALL the companies I know use Java somehow. The biggest companies here (telecoms, oil and gas and banking) are all using Java and Oracle tools, so to get a job in these companies, Java knowledge is a MUST. But a very high percentage of the freelancers I know use PHP for the web..."

But soon such will not be the case, because a metamorphosis is happening and we were clever enough to include the web designers in the audience of the upcoming 2.2 version of JSF. And depending on the perspective you are in, it means these things for you: like for Joomla and PHP, JSF and the Java EE world are moving towards a worldwide collaborative development. A JSF developer will no longer create the user interface of his web application; he will just download a template made by a template author from anywhere in the world. And you can be sure that it will happen, because it is much easier now to design a JSF template than a Joomla template. No knowledge of JSF and Java is required, only a knowledge of html, css, javascript, photoshop and such. And you can be sure that soon we will have all these templates available to us: Joomla Templates. Very cool, no?

Let's meet in the Cloud

Let's come now to the other part of the problem, and here is how I will start. Am I the only one who has the vision that once we have released an implementation of JSF 2.2 and its Multi-Templating system, these two questions will arise:
- From a template author perspective : Where can I publish free and commercial templates to make my own business? Don't you see that if we do nothing, we will end up by having like for Joomla this awful distributed model : "Do it yourself. Build your own solution and host it on your own infrastructure and don't forget to reference your web site if you want the developers to find your templates through a Google search." So can you understand our desire to make things simple by building a service in the Cloud to connect these two entities into a central place : - so that a web developer can find templates easily without searching the whole web - so that a professional web design individual or company will not build and host a solution he cannot afford. And to have this dream fullfilled, we are planning to collaborate with a Cloud provider so that we can push JSF and Java EE to the next level. So be ready to meet us soon in the Cloud.... JSF 2.2 : The Movie [ Coming Soon ] Following our desire to innovate, JSF 2.2 will be the first JSR to have a video presentation. It will be made available after the Final Draft but below are some screenshots.I definitely recommend you to do the same for your project. You can use After Effects for that. Professionnal video designers are using this tool for most of the animations and the movies you are watching on TV. If you want to learn more about it and to have an insight of the video design universe, I suggest you to follow the tutorials of Andrew Kramer on his videocopilot.net web site. Definitely worth giving it a look, this guy is truly awesome.... Conclusion Ladies and Gentlemen, everything has been said and done so I will not waste your time anymore. Let's just meet here next time for another blog post. So until then, I'm going to take my drugs. Hope I will recover soon my whole vision... 
Hi bro, hope my message find you in good heath that allow ... by papesdiop - 2011-11-15 06:07

Hi bro, hope my message finds you in good health and allows you to continue to solve a big problem in Java EE. This blog post is very nicely explained. JSF 2.2 + PrimeFaces will greatly improve web development on the Java EE platform. I still believe in JSF and I continue to believe in it, because it will be the solution. Regards!!

Hi papesdiop Thank you very much for your nice comment and ... by lamine_ba - 2011-11-15 06:50

Hi papesdiop, Thank you very much for your nice comment and I appreciate your prayer. Yes, continue to believe in Java EE and JSF. Having a bad start in the starting-block does not mean you cannot be the winner of the race. The JSF community, you included, is currently doing an amazing work. But more are coming. Hope to read you soon. Regards Lamine

Good day, This is a really awesome idea and it has my ... by mpashworth - 2011-11-14 01:03

Good day, This is a really awesome idea and it has my vote to be included in the JSF spec. I wonder if it would be possible to have the templates externalized from the actual WAR file, which has a number of advantages. Regards, Mark P Ashworth

Hello Mark, First of all, thank you very much for your vote ... by lamine_ba - 2011-11-14 09:46

Hello Mark, First of all, thank you very much for your vote and for your clever comment. Yes, having the templates externalized from the actual WAR file has a number of advantages, and that is a part of a topic I have called "the deployment scenarios". When I built the prototype of this Multi-templating system, I covered and solved this problem at last. I was just waiting for people to understand the idea and I'm really glad they did. Let me now share with you the different deployment scenarios available to us so that you can choose the right one depending on your needs.
- First scenario : you have stored your templates in a directory in the war file. Can we call this : " the evident and non-efficient way ". With this model, how one can share his templates among his JSF applications? Surely with a copy and paste? The use of this model will lead to a lot of duplication. And when a developer downloads his templates from the web on his computer, he stores them already in an external directory. So why can't he just reference in the web.xml the location of this directory to have the templates loaded by any of his JSF applications? That is one solution but it means these things for you : you are working alone and your applications are running in a local mode. And when you deploy your JSF application in a remote web server and the templates are still in your local external directory, you must also upload this directory and make the right change (location) in the web.xml. But at least, using this solution, you have a central place and any of your JSF applications hosted on this remote web server can share your templates. Thus you have no resources duplication.To finish, when you are collaborating with a team using this solution in the development of a JSF application, my suggestion is to create first a svn or a git repository for only the storage of templates. - Second Scenario : Here is another cool way to externalize your templates. Did you know that you can use my own templates without having them installed on your computer? How I did that? I have a template service built with JAX-RS available worldwide. You can see it in action in my prototype. My JSF applications are using these templates remotely because each template has an url and I can reference one of them in my web.xml. And after that, I let the two entities (JSF and JAX-RS) to talk and make my will. And with this solution I can build another template clients like an android client to view my templates on my phone. cool, no? 
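Both deployment scenarios above boil down to resolving where a template's main facelet lives from a single configured base location. Here is a minimal plain-Java sketch of that resolution logic; the class, method, and file names are my own illustration (not any JSF 2.2 API), and the "one folder per template" layout is the convention described in the post:

```java
// Hypothetical sketch of resolving a template's main facelet from a single
// configured base location, covering both deployment scenarios above.
// Class, method and file names are illustrative, not part of any JSF API.
public class TemplateLocationResolver {

    // Scenario 2: the configured location is a remote http(s) URL.
    public static boolean isRemote(String baseLocation) {
        return baseLocation.startsWith("http://") || baseLocation.startsWith("https://");
    }

    // Scenario 1: a local external directory holding one folder per template.
    public static String resolve(String baseLocation, String templateName) {
        if (isRemote(baseLocation)) {
            return baseLocation + "/" + templateName + "/template.xhtml";
        }
        return baseLocation + "/templates/" + templateName + "/template.xhtml";
    }

    public static void main(String[] args) {
        System.out.println(resolve("/opt/shared", "rockstar"));
        System.out.println(resolve("http://example.com/templateservice", "rockstar"));
    }
}
```

With this kind of indirection, the same application can switch between a local shared directory and a remote template service by changing only one configuration value.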
JSF and JAX-RS playing seamlessly - JSF application : Prototype - Template Service : Template Service . Please delete in the url the added by this site for a better view of the xml content If you find another deployment scenario, It would be nice if you can share it to us. I have also the project of an administration console so that you can manage easily your templates and set the default one at runtime without the use of the the web.xml file so that you can avoid a restart of your application and a waste of your precious time. At last, I think that you will be the guest star of one of my next blog posts because many people will not read your clever comment and my reply. Thank you again Mark Regards Lamine I concur. Having the template outside the war file ... by trinisoftinc - 2011-11-14 02:18 I concur. Having the template outside the war file means you can change the LAF of your page without having to recompile the whole app. This would be very advantageous. I think this would be also be easy if one can link to the template using say href. Right now what I do is host my images and css file in apache, so that I can change the images and css files easily to change the LAF of my web page without having to recompile/re-deploy. Thanks Lamine. Hi trinisoftinc, Thank you very much for your nice comment. ... by lamine_ba - 2011-11-14 09:32 Hi trinisoftinc, Thank you very much for your nice comment. Yes, Having the template outside the war file means you can change the LAF of your page without having to recompile the whole app and this would be very advantageous. And you can be sure that you will have it. Please read my reply to Mark or wait for my next blog post. Regarding your statement "I think this would be also be easy if one can link to the template using say href", Let me reveal you that we have already this feature and it working pretty well on my prototype. We call it "Dynamic Template Selection". 
You can change the LAF of your application while navigating to another page of course if your add in the url the id of the template you want to use as a request parameter following this scheme :. Please see it in action in my prototype . Thanks One page using dynamically three different templates - - href=" - Regards Lamine hi all . i ask what it is El resolver into ... by naciu45 - 2011-11-14 11:12 hi all . i ask what it is El resolver into JSF . I study JSF self me and i not understand . Tank you all. naciu Hi naciu45 I'm very glad to hear that you are learning JSF ... by lamine_ba - 2011-11-14 12:46 Hi naciu45 I'm very glad to hear that you are learning JSF and I like very much your approach. When you want to have a clear understanding of something, you must always try to understand first its concepts. Google can be your best friend in your learning process, just type what you are looking for in the input text or you can consider to buy a recent JSF book covering the 2.0 version of JSF. I recommend you "JSF 2.0 the complete reference" from Ed Burns. But here are some links I have found in the web. Try first to understand what is an Expression Language (EL) and after it will be very easy for you to understand what is an EL Resolver. These concepts were first introduced in JSP. - - Thanks Lamine Conventional UI design with Facelets and JSF 2.2 An experience is relevant only if it can prevent you to repeat the same mistake. Anonymous Introduction Make it compliant! Yes, Ladies and Gentlemen, Make it compliant! That is my tip of the day and the statement I woke up with today. Like many of you, I have read the Jsf-spec pdf file and as you may know, JSF 2.2 is coming with two major features : Faces Flows and Multi-templating. Did you see that it will be possible to package a Faces Flow and a template as a jar file, or as an exploded directory? 
And that it will be possible to offer faces flows and templates as re-usable components in a software repository, such as a maven repository ? In other words, the new vision of JSF is “drop in and go”. There is nothing wrong and suspicious in that strategy but I'm only curious to see how this will gonna happen if we haven't any conventions yet. You would be in a big dream if you think that my view can reuse your template if it does not bring well-known areas. Just take a look at this sample to be convinced : Your Template - <ui:insert - XEN Template - </ui:insert> - <ui:insert - top design - </ui:insert> - <ui:insert - login-area design - </ui:insert> - <ui:insert - search-area design - </ui:insert> - <ui:insert - </ui:insert> - <ui:insert - </ui:insert> - <ui:insert - bottom design - </ui:insert> My view using another Template - <ui:define - My View - </ui:define> - <ui:define - header design - </ui:define> - <ui:define - login design - </ui:define> - <ui:define - search design - </ui:define> - <ui:define - menu design - </ui:define> - <ui:define - content design - </ui:define> - <ui:define - footer design - </ui:define> In other words, your template is not compliant with my application because you and I have chosen the stupid strategy of using a different name to name the same thing. Crazy no! I think it is high time to be homogeneous and to escape from this insanity by making our new components "compliant" with any JSF application . For the case of our templates, that is very easy. Let's design a template according to the basic structure of the user interface of a web application. Let's bring this simple contract and let's all use these well-know values : title, header, login, search, menu, content, footer. But if you want, we can even further reduce this list if we all agree that a view will only override the title and the content areas. And also if we all agree that the other areas minus the menu area must be designed statically in the template. 
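The "compliance" idea above can be checked mechanically: a view is compatible with a template only if every area name it defines is one the template actually exposes. Below is a hypothetical plain-Java sketch of such a check; the regex-based parsing is purely for illustration (a real implementation would walk the Facelets component tree, not scan text):

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of the "compliance" check described above: a view is
// compatible with a template only if every area it defines (ui:define) is an
// area the template exposes (ui:insert). Regex parsing is only for
// illustration; a real check would use the Facelets compiler.
public class TemplateComplianceChecker {

    private static final Pattern INSERT = Pattern.compile("<ui:insert\\s+name=\"([^\"]+)\"");
    private static final Pattern DEFINE = Pattern.compile("<ui:define\\s+name=\"([^\"]+)\"");

    static Set<String> names(Pattern pattern, String xhtml) {
        Set<String> found = new LinkedHashSet<>();
        Matcher m = pattern.matcher(xhtml);
        while (m.find()) {
            found.add(m.group(1));
        }
        return found;
    }

    // Every defined area must be among the template's inserted areas.
    public static boolean isCompliant(String templateXhtml, String viewXhtml) {
        return names(INSERT, templateXhtml).containsAll(names(DEFINE, viewXhtml));
    }
}
```

With the shared vocabulary proposed above (title, header, login, search, menu, content, footer), any view would pass this check against any conforming template.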
Isn't it the whole idea behind customization? You don't like the design of the header, just go in the template and change it. But if you were a true web designer, you would know that this customization is even done with css. Here is now the strategy we are expecting to see in any template : - <div id="header"> - common header design - </div> - <div id="footer"> - common footer design - </div> or - <div id="header"> - <ui:insert - common header design - </ui:insert> - </div> - <div id="footer"> - <ui:insert - common footer design - </ui:insert> - </div> but not this - <div id="header"> - <ui:include - </div> - <div id="footer"> - <ui:include - </div> nor this - <div id="header"> - <ui:insert name="header" - <ui:include - </ui:insert> - </div> - <div id="footer"> - <ui:insert name="footer" - <ui:include - </ui:insert> - </div> The design technique above must only be applied for the menu area because a menu is the only thing a template cannot come with - <div id="menu"> - <ui:insert name="menu" - <ui:include - </ui:insert> - </div> We are also expecting the menu to be designed like this, with <ul> and <li> menu.xhtml - <ui:composition xmlns="" - xmlns:h="" - xmlns: - <ul> - <li> - <h:link - </li> - <li> - <h:link - </li> - <li> - <h:link - </li> - <li> - <h:link - </li> - </ul> - </ui:composition> So that a template author can give it its style and layout with css : template.css - #menu { height: 49px; width:978px;background: #FF7805 } - #menu ul { width: auto; float: right; list-style-type:none; } - #menu ul li{ margin: 0; padding: 0; height: 49px; float: left; position: relative; } - #menu ul li a{ color: #FFFFFF; font-weight: bold; text-decoration: none; } - #menu ul li a:hover{ background-color: rgb(245,235,207);color: #444444; } So that you can get according to the template the following menu : or this one If you are ready to follow these conventions, I'm ready to share all my templates with you and even this application ( ) so that any of you can deploy it 
worldwide. If you download it, you will be able to manage your own repository through a nice web interface built with JSF and to share your templates with your users through REST. Yes, it also has a Template Service built with JAX-RS ( ) Template Service (serving XML representations) - <templates> - <template> - <id>1</id> - <name>world_cup</name> - <creationDate>2010-06-15</creationDate> - <author>Themza Team</author> - <authorEmail>templates@themza.com</authorEmail> - <authorUrl></authorUrl> - <license>ThemZa TOS</license> - <description>World Cup Heroes</description> - <link></link> - <thumbnail></thumbnail> - <download></download> - </template> - ...... - ...... - </templates> Build your own Template client To make things fun, you can even build your own client or download my upcoming Google Android application to consume this service. Using a Template Remotely No need to download a template, your application can use it remotely. Isn't it the whole idea behind "Cloud"? You can also use this strategy to avoid resource duplication when your application is running in a clustered environment. - <?xml version="1.0" encoding="UTF-8"?> - <web-app> - <context-param> - <param-name>javax.faces.view.TEMPLATE</param-name> - <param-value></param-value> - </context-param> - </web-app> Template Highlighting The ui:insert tags can now be highlighted in the UI. Just send the highlight parameter in your request and set its value to true. You can see this feature live here : Template Highlighting Conclusion Ladies and Gentlemen, that's all for today. But remember, having conventions is the only way for us to collaborate so that we can move fast. Multi-Templating with JSF 2 : The Prototype Revolution is only the next step to Evolution Anonymous Introduction Here we are, Yes ladies and Gentlemen, Here we are.
After having shared the story with you, here is the presentation of the prototype of the multi-templating system of JSF 2.2. Meanwhile, an application has been deployed worldwide with google app engine and below are the instructions on how to use it both remotely and locally. We sincerely hope that you will enjoy this concept and as always we value your comments. How to use the application remotely? - With your browser point to this url : to display the home page and because of the configuration below, - <?xml version="1.0" encoding="UTF-8"?> - <web-app> - <context-param> - <param-name>javax.faces.view.TEMPLATE</param-name> - <param-value>rockstar</param-value> - </context-param> - </web-app> the rockstar template is selected - If you want to select another template, click on the templates link to display the templates list. - Point your cursor on the name of one template to see its thumbnail or click on it to select this template - Wanna see the templates gallery? click on the gallery link to display it. To enlarge a thumbnail, just click on it. To select a template, click on the name. - To finish, you have an about link to show the About page which says that the JSR 344 EG is working very hard to give you an outstanding new version of JavaServer Faces. So please, be patient and be ready to check it out once available.. Multi-Templating System Author(s): JSR 344 Expert Group Version: 1.0 Requirements: JSF 2.2 Description: Change the look and feel of your JSF applications. How to run the application locally with maven and the embedded glassfish plugin ? - With svn, check out the application at this url : - With maven, run mvn package embedded-glassfish:run to deploy the application - And finally with your browser point to this url : Conclusion Ladies and Gentlemen, we wish you a nice try and undoubtedly, Reuse is the only way to speed up development and boost productivity. Be sure that soon we will move to a higher concept like Application as a Template... 
by edburns - 2011-08-29 07:35 If you want to see this in JSF 2.2, please vote for the corresponding issue tracker issue: java.net/jira/browse/JAVASERVERFACES_SPEC_PUBLIC-971. Multi-Templating with JSF 2 : The Story One thing can have several representations but each of the representations means the same thing. Everything is an object and every object has a virtual representation. You don't believe me! Well, look at your shadow... Anonymous Introduction When describing a feature, one must always show his readers the set of contexts in which its use can bring significant value. If you can't do that, there is no way to convince users to give it a try. The feature I'm about to present is called "Multi-templating with JSF 2" and to open the presentation, I'm starting today with a simple question. Does it make sense to have a dynamic templating system inside the JavaServer Faces framework? What is the set of contexts in which its application can bring significant value to its users? The answer is self-evident for those who are in the right perspective to see the rationale of this idea. For the others, it might be wise to take the time to find it through the story of this web designer. Let's start by reading his profile. The Profile The Story Jim is working with a team of 19 web developers and they have the mission to build web applications using the JSF framework. The separation of concerns has been made as easy as to assign to Jim the simple task of designing the master page and defining the look and feel of the web user interface. The web developers, on the other hand, have the heavy task of building the application logic broken into a set of modules and creating the views that use the master page.
Since the release of JSF 2.0 and the advent of its resource handling mechanism, the life of Jim has never been so easy. Where to store the css, the js and the images files was not anymore his question. And since he has found the right convention with his team about the right place to store the master page (facelets template), their integration process has never been so easy. The root directory has been chosen. - <html xmlns="" - xmlns:h="" - xmlns: - <body> - <ui:composition - </ui:composition> - </body> - </html> And Jim, like any perfect innocent was living in a perfect world until......... The Mutation Until his boss came one day with a new requirement : their JSF applications must now be able to change dynamically their look and feel. And Jim to wonder how he could achieve this miracle without a little bit of programming. He has already understood that changing dynamically the look and feel of an application means only building several themes for this application. And that a theme is nothing more than what he calls a master page or a template. And that for each template, he has only to create a facelets page and the resources (css, js,images) it needs and store them in the right places. But what Jim didn't know was, the right places he thought , were not in fact the right places to meet a clean separation. The proof is the message he was getting. His IDE was warning him that there is a file that already exists in the selected destination and if he wants to overwrite it And Jim to exclaim "oh what a bad and monolithic approach!" and Jim to ask "Hey how to be modular? How to avoid resources name collision?". And surprisingly the inspiration which came to him was to create a folder for each template and to surprisingly store the whole thing in a directory named "templates". But what Jim didn't know was, all he did were just the beginning of a great inspiration. 
And again he couldn't help asking himself "Hey, if a template has a thumbnail, who can stop me from building a template gallery for one to see what it looks like?" And to complete his work, Jim brought the rule that a template must always have metadata for one to know its designer. - <template> - <name>Roses</name> - <version>1.0</version> - <creationDate>05/06/2011</creationDate> - <author>Jim DRY</author> - <authorEmail>jim@yahoo.com</authorEmail> - <authorUrl></authorUrl> - <description>Blogstyle JSF Template</description> - </template> You want to become famous! Wait for the release of JSF 2.2 and its multi-templating system so you can, like Jim, design and share your templates with the JSF community. And maybe someday I will blog about you to share your story.... Dynamic Resolution of a template - <html xmlns="" - xmlns:h="" - xmlns: - <body> - <ui:composition - </ui:composition> - </body> - </html> Selecting a template at startup - <?xml version="1.0" encoding="UTF-8"?> - <web-app> - <context-param> - <param-name>javax.faces.view.TEMPLATE</param-name> - <param-value>mybusiness</param-value> - </context-param> - </web-app> The Movie For a better illustration of this feature, you can watch the video presentation of my project "Metamorfaces". I have created this video with After Effects and some plugins like Trapcode and Knoll Light Factory. The sounds have been composed by my little brother. The Prototype Things are getting long and I'm getting tired. The presentation of the prototype will be the subject of my next blog post. So until then, ladies and gentlemen, have a good night..... Moonlight Thoughts In computer science, in the context of data storage and transmission, serialization is the process of converting a data structure or object into a format that can be stored and "resurrected" later in the same or another computer environment.
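The serialization round trip quoted above can be seen in a few lines of plain JDK code: mark a class Serializable, write it to bytes, and "resurrect" it from them. The TemplateInfo class below is a stand-in of my own invention (inspired by Jim's template metadata), not part of any JSF API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Minimal sketch of the serialization round trip: an object marked
// Serializable is converted to bytes and "resurrected" from them.
// TemplateInfo is a stand-in class, not part of any JSF API.
public class SerializationDemo {

    static class TemplateInfo implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        final String author;
        TemplateInfo(String name, String author) {
            this.name = name;
            this.author = author;
        }
    }

    // Serialize a TemplateInfo and read it back; null signals a failure.
    public static String roundTrip(String name, String author) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(new TemplateInfo(name, author));
            }
            try (ObjectInputStream in =
                    new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                TemplateInfo copy = (TemplateInfo) in.readObject();
                return copy.name + " by " + copy.author;
            }
        } catch (Exception e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("Roses", "Jim DRY"));
    }
}
```

This is exactly the mechanism a state manager relies on when it saves objects along with the view state.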
Java provides automatic serialization which requires that the object be marked by implementing the java.io.Serializable interface. Implementing the interface marks the class as "okay to serialize.. Wikipedia Introduction Last week, while letting a beautiful moon shine, my friend Bob and I were discussing. We didn't find in that evening, any interesting subject other than my previous blog post, which was if you remember, the writing expression of a skeleton. The skeleton of a solution. The solution of how to "implement a cascading dropdown using the JSF framework". And today, coming again with the tremendous desire to share, my friend Bob and I are very pleased to give you a replay of that conversation so you can touch our moonlight thoughts. Furthermore, when all things are said and done, what else from an ending whisper than to fall into a deep silence and let the story begin... The Conversation Me: Hey Bob! Have you read my last blog post on java.net? Bob: Which one? the one talking about how to implement a cascading dropdown with JSF 2? Me : Yes this one. In the case you read it, did you have your question answered? Bob: I'm really sorry, I can't remember what it was. Could you be more accurate? Me: Of course, bob. Last time, you were asking me what was the correlation between an UISelectOne component and a bean put in any scope after having asked you why I was getting this validation error. Bob: Oh yeah! I can still remember this one and like I often say to my coworkers, the solution comes always when the question is better asked. In the way it is designed, the validation of the value of an UISelectOne or an UISelectMany component will always fail when having your bean put in the request scope. 
- public class UISelectOne extends UIInput { - - // Skip validation if it is not necessary - super.validateValue(context, value); - if (!isValid() || (value == null)) { - return; - } - // Ensure that the value matches one of the available options - boolean found = SelectUtils.matchValue(getFacesContext(), - this, - value, - new SelectItemsIterator(context, this), - getConverter()); - boolean isNoSelection = SelectUtils.valueIsNoSelectionOption(getFacesContext(), - this, - value, - new SelectItemsIterator(context, this), - getConverter()); - // Enqueue an error message if an invalid value was specified - if ((!found) || - (isRequired() && isNoSelection)) { - FacesMessage message = - MessageFactory.getMessage(context, INVALID_MESSAGE_ID, - MessageFactory.getLabel(context, this)); - context.addMessage(getClientId(context), message); - setValid(false); - } - } - } JSF like any logical system is unable to validate a value against a list of values which becomes unavailable at the end of the request.Your only advice could be to keep the state and that is exactly what it does when the bean is put in the view scope. Me: But bob, do you see any reason why the value is not validated against the tree? Aren't the set of valid values simply children of the UISelectOne? Bob: Yes they are but let me show you where the problem is. No one can do a value comparison against the UISelectItems component because we only save the value expression which means a regeneration of the data model through a costly EL evaluation at any phase we need the data. One must now question why we cannot replace in the tree an UISelectItems by a list of UISelectItem? Having not myself the answer, I leave it to your appreciation. Me: And I'll forward it to Ed Burns. Now let's come back to the ViewScope. As you state, the only advice we can give to JSF is to keep the state and all of this reminds me the UISaveState component from MyFaces Tomahawk project. Does it work the same? 
Bob: The Tomahawk t:saveState tag provides an alternative "view" scope. It allows objects to be associated with the current JSF view, so that they are kept alive while the user stays on the same view, but are automatically discarded when the user moves to another view. Note that the t:saveState tag does require that all the objects it stores are Serializable or implement the JSF StateHolder interface.When JSF is configured to use "client-side-state-saving" - <context-param> - <param-name>javax.faces.STATE_SAVING_METHOD</param-name> - <param-value>client</param-value> - </context-param> then objects in "view" scope are automatically sent to the browser along with the rest of the JSF view state.However of course the network bandwidth needed does increase as the view-scoped objects must be transferred back and forth on each request. In the other hand, when JSF is configured to use "server-side-state-saving" - <context-param> - <param-name>javax.faces.STATE_SAVING_METHOD</param-name> - <param-value>server</param-value> - </context-param> then objects in "view" scope are automatically stored in the HttpSession along with the rest of the JSF view state. Me : Having now the proof that we are talking about the same concept, it is self evident that I did a mistake in my previous blog post. If state saving means really Serialization, how JSF could save the state of a bean and its members if they don't implement the Serializable interface? - @ManagedBean - @ViewScoped - public class Bean implements Serializable { - } Bob: Very good question. The ViewScope is attached to the UIViewRoot and when the UIViewRoot state is saved, it may be serialized by the StateManager implementation. 
- public class UIViewRoot extends UIComponentBase implements UniqueIdVendor { - private Map<String, Object> viewScope = null; - public Map<String, Object> getViewMap() { - return getViewMap(true); - } - public Map<String, Object> getViewMap(boolean create) { - if (create && viewScope == null) { - viewScope = new ViewMap(getFacesContext().getApplication().getProjectStage()); - getFacesContext().getApplication() - .publishEvent(getFacesContext(), - PostConstructViewMapEvent.class, - this); - } - return viewScope; - } - } Me: Shame on my reviewer! But the real truth is that I haven't tried the solution myself until yesterday and guess what, I got this nasty java.io.NotSerializableException error Bob: ooh! I'm really sorry for the frustration but did the CDI @Inject annotation work for you? What were your dependencies? SeamFaces or Myfaces CODI? And by the way, aren't these annotations that share the same unqualified name confusing to you? Me: Yes from the perspective of a beginner, they are. I use SeamFaces to enable the CDI injection on my bean Bob: and did you replace @ManagedBean by @Named ? Me: of course, after having got a traumatic NullPointerException. You have to use the @Named annotation so that CDI is used instead of JSF but it is really intriguing why we cannot automatically map JSF managed bean annotations to CDI annotations at startup. It is high time to think about what to do with JSF's own managed bean mechanism. Bob: Yes I agree and like it is stated in its goal, JSF is a user interface framework for Java Web applications designed to significantly ease the burden of writing and maintaining applications that run on a Java application server and render their UIs back to a target client. Me: So if I'm following you, JSF must now focus on the UI, delegate the DI task and deprecate its managed bean mechanism. Bob: You already know my position about this debate so why talking more about it. 
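The NotSerializableException discussed above can be reproduced with nothing but the JDK: a Serializable "bean" holding a member that does not implement Serializable fails to serialize, and marking that member transient is one classic way out. The Dao class below is a stand-in of my own, not a real dependency:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Reproducing the failure discussed above with the plain JDK: a Serializable
// "bean" holding a member that does not implement Serializable throws
// java.io.NotSerializableException. Dao is a stand-in, not a real dependency.
public class NotSerializableDemo {

    static class Dao { } // deliberately NOT Serializable

    static class BrokenBean implements Serializable {
        private static final long serialVersionUID = 1L;
        Dao dao = new Dao(); // dragged into serialization -> failure
    }

    static class FixedBean implements Serializable {
        private static final long serialVersionUID = 1L;
        transient Dao dao = new Dao(); // transient members are skipped
    }

    public static boolean serializes(Object bean) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(bean);
            return true;
        } catch (NotSerializableException e) {
            return false;
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }

    public static boolean brokenBeanSerializes() { return serializes(new BrokenBean()); }
    public static boolean fixedBeanSerializes()  { return serializes(new FixedBean()); }

    public static void main(String[] args) {
        System.out.println(brokenBeanSerializes()); // false
        System.out.println(fixedBeanSerializes());  // true
    }
}
```

Note that with a transient field the reference is simply null after deserialization, so a real bean would have to reacquire its dependency, for example through injection.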
I would rather talk about why you haven't packaged your solution into a reusable unit we can download, install and try. Me: Oh Bob! I wish I could but this feature is currently under prototyping in JSF 2.2. We call it "Support for modular, reusable fragments of a web application". My apologies for the time wasted and for the copy and paste. But believe me, soon all this pain will belong to the past...... Implementing a Cascading DropDown with JSF 2 The Beginning The term "Cascading DropDown" refers to dynamic dependent list boxes that allow a "child" list box to refresh when a selection is made in a "parent" list box. It is a recurrent problem in the software space which has only one solution, but sadly there are several implementations of that solution. It depends on the tools you use and their limitations, which can force you to invent ways that shouldn't be invented. This blog entry is all about how to implement a cascading dropdown using the JSF framework, and today we have the simple requirement to display a list of countries and to update a list of cities once a country is selected. Having now the scenario, I think we can start to present the actors : The Entities - public class Country { - - - } - public class City { - - - private Country country; - } The DAO [Provide your own implementation] The Managed Bean - @ManagedBean - public class Bean { - - - private @Inject Dao dao; - public List<Country> getCountries() { - return dao.getCountries(); - } - public List<City> getCities() { - return dao.getCities(country_id); - } - .................................................................. - } The Facelets view - :form> - </body> - </html> The End And that's all. We have created a cascading dropdown without using a programmatic approach. We select a country in the first combo and automatically an ajax request is fired behind the scenes to update the other combo.
This feature is really a powerful one until the idea to add a commandButton in your form to make a postback and save things comes to you. - :commandButton - </h:form> - </body> - </html> And boom! You get a validation error. Sadly, the combo displaying your cities is saying that the value you have picked in its list is not a valid one because this same value is not anymore in its list. Isn't it a crazy statement? After some long and hard hours debugging the jsf.js file and looking at the tree printed in the console by my PhaseListener, I came across no rational explanation. Everything was fine. The partial view processing and rendering were done perfectly and the state of the combo was updated and saved. And suddenly, when I was about to lose hope, came this ironic idea : "Hey, put the managed bean in a higher scope". And definitely that was the solution, and it is not ironic at all once you understand how the value of a UISelectOne is validated and what it means 'to put a managed bean in the request scope'. There is a big security concern to take into account when validating the value of a UISelectOne. This security concern is summarized in this simple question : How to prevent the client side from inserting values that were not valid choices? No, don't worry, the JSF framework is not asking you to provide this answer. This validation has been made as transparent as to make another call of your getCities() method for a regeneration of the data model in order to do a value comparison. - public List<City> getCities() { - return dao.getCities(country_id); - } But wait a minute, the bean is created at each request and the Validation Phase is before the Update Model Values Phase. What will JSF get when running getCities() at this point of time? Just an empty list, because the bean can't remember the country_id. And that is why we get this validation error ("The value is not valid") and that is why we have to put the bean in a higher scope.
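The scope problem above can be sketched without any JSF classes: conceptually, the validation is just a membership test against the regenerated list, so a request-scoped bean that has forgotten country_id hands the validator an empty list. The names below are my own illustration, not the actual UISelectOne internals:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// A plain-Java sketch (no JSF classes, names illustrative) of the validation
// rule behind "The value is not valid": the submitted value must be a member
// of the list regenerated at the Process Validations phase.
public class SelectOneValidationSketch {

    public static boolean isValid(String submitted, List<String> regeneratedItems) {
        return regeneratedItems.contains(submitted);
    }

    public static void main(String[] args) {
        // Request scope: the freshly created bean has forgotten country_id,
        // so the regenerated city list is empty and validation fails.
        System.out.println(isValid("Dakar", Collections.emptyList())); // false

        // View scope: country_id survived, so the list is regenerated
        // correctly and the submitted value is found.
        List<String> cities = Arrays.asList("Dakar", "Thies", "Saint-Louis");
        System.out.println(isValid("Dakar", cities)); // true
    }
}
```

The membership test itself is sound; what breaks in the request scope is the data it runs against.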
If we put the bean in the ViewScope, it will be removed as soon as we move to another view. Definitely the right place to put it. But wait a minute, isn't JSF calling my data access logic two times? A first call at the render phase and another call at the validation phase. That is really a big performance concern to deal with when having my data stored in a database. Then how to prevent that? Three solutions come to my mind: - The EG must find another way to validate the value without a regeneration of the data model - If the EG can't, I want to have a way to validate the value myself through a light query - Again if the EG can't, it is better to write your managed bean like this and let it go - @ManagedBean - @ViewScoped - public class Bean { - - - private @Inject Dao dao; - private List<Country> countries; - private List<City> cities; - public List<Country> getCountries() { - if(countries==null) - countries=dao.getCountries(); - return countries; - } - public List<City> getCities() { - if(cities==null) - cities=dao.getCities(country_id); - return cities; - } - - this.country_id=country_id; - cities=null; - } - } I would like to finish this blog entry with an announcement: in JSF 2.2, the var attribute of the selectItems tag is now optional and by default its value is equal to 'item'. I look forward to seeing this convention adopted by any UIComponent that accepts a var attribute. - <f:selectItems by papesdiop - 2012-05-23 05:07 Nice blog, it's useful! Good conception in the YouControl app! I suggest you give it a Senegalese name! ;) Wish you good continuation! by jjviana - 2012-05-22 15:48 Hi Lamine, your approach to front-end Java development is a breath of fresh air. Question: is Youcontrol open source? If yes, where can I download it? Regards, - Juliano
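The lazy-caching bean in the third solution can be demonstrated in plain Java with a counting stand-in DAO, showing that the data access runs once per view even if the getter is called at both the validation and the render phases. The class and the Dao below are illustrative stand-ins, not JSF artifacts:

```java
import java.util.Arrays;
import java.util.List;

// Plain-Java sketch of the third solution above: cache the data model in the
// bean so the DAO is hit once, even though JSF calls the getter at both the
// Process Validations and Render Response phases. Dao is a counting stand-in.
public class LazyCachingBean {

    static class Dao {
        int calls = 0;
        List<String> getCities(int countryId) {
            calls++; // count every real data access
            return Arrays.asList("Dakar", "Thies", "Saint-Louis");
        }
    }

    private final Dao dao = new Dao();
    private int countryId;
    private List<String> cities;

    // Lazily populate the cache on first access.
    public List<String> getCities() {
        if (cities == null) {
            cities = dao.getCities(countryId);
        }
        return cities;
    }

    // Changing the country invalidates the cache, exactly like the setter
    // in the bean above.
    public void setCountryId(int countryId) {
        this.countryId = countryId;
        cities = null;
    }

    public int daoCalls() { return dao.calls; }

    public static void main(String[] args) {
        LazyCachingBean bean = new LazyCachingBean();
        bean.getCities(); // hits the DAO
        bean.getCities(); // served from the cache
        System.out.println(bean.daoCalls()); // 1
        bean.setCountryId(2);
        bean.getCities(); // cache invalidated -> DAO again
        System.out.println(bean.daoCalls()); // 2
    }
}
```

This pattern only makes sense in a scope that outlives the request (such as the view scope); in the request scope the cache would be thrown away before validation anyway.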
http://www.java.net/blogs/lamine_ba
29 June 2011 11:34 [Source: ICIS news]

SINGAPORE (ICIS)--Spot prices of linear low density polyethylene (LLDPE) film in the Middle East continued to plunge, taking the cue from the key China market.

Offers for July LLDPE film cargoes were at $1,280-1,290/tonne (€896-903/tonne). The GCC offers for July cargoes were about $130-150/tonne lower than the prices for June shipments.

Prices have fallen by 13% from the year's peak of $1,520-1,560/tonne DEL Dubai in early April, tracking the slide in Chinese prices, according to ICIS data. (Please see the graph below)

Meanwhile, mounting LLDPE inventories led Middle Eastern producers to substantially lower their offers to entice buyers, market sources said, adding that procurement by converters is very limited.

"Operating rates of regional LLDPE facilities are still high, and even at full tilt for some units," a Saudi Arabian producer said. But he is hopeful that Saudi Arabian producers will find relief from the pressure of high stocks, citing that LLDPE makers in the country did not cut production even when prices troughed in late 2008.

Regional converters do not welcome the price slump in LLDPE, as it creates difficulty in cost management. They said they would prefer a gradual price decline.

"Prices are like crashing here, this is worrying," said a Saudi converter.

Converters need to average out their costs, especially with business slowing down ahead of Ramadan - the Muslim fasting month that will start on 1 August, said a second Saudi converter.

"We can't see when the prices will bottom out," the second converter said.

"There is no peak season for the film sector this year as a result of this downtrend," a third Saudi converter said.
http://www.icis.com/Articles/2011/06/29/9473426/middle-east-lldpe-film-extends-slump-on-weak-china-market.html
I'm just starting to learn Python and have come across some trouble when trying to program a simple 1-D version of single-player Battleship. Two things I can't seem to make work: the board should show the position numbers, with misses marked as * and a hit marked as X, like this:

    1 2 3 * 5
    1 2 X * 5

Here is my code so far:

    from random import randint

    ship = randint(0, 5)
    board = ["O", "O", "O", "O", "O"]
    print("Let's play Battleship!")
    attempts = 1
    while attempts < 4:
        print(board)
        guess = int(input("Guess Where My Ship Is: "))
        if guess == ship:
            print("Congratulations Captain, you sunk my battleship!")
            break
        else:
            print("You missed my battleship!")
            if attempts < 3:
                print("Try again!")
            elif attempts == 3:
                print("Better luck next time, Captain!")
            attempts += 1

Good practice: set the board size to a variable so you can reference it regularly. Put this at the top:

    size = 5  # Can be changed later if you want to make the board bigger

Next, have your ship location be chosen based on that (1-based, so it matches the positions shown on the board):

    ship = randint(1, size)

Instead of making a board filled with "O"s, generate the board dynamically so that it's already pre-populated with the possible values:

    board = []  # Creating an empty board
    for i in range(1, size + 1):  # size + 1, so the positions run 1..size
        position = str(i)  # Converting integers to strings
        board.append(position)  # Adding those items to the board

Then, inside of the game logic, after the "You missed my battleship" line, change the relevant square on the board:

    ...
    print("You missed my battleship!")
    number_guess = int(guess) - 1  # Because lists are zero-indexed
    board[number_guess] = "*"  # Assign "*" to the spot that was guessed
    if attempts < 3:
        ...
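Putting the answer's pieces together, a full corrected version might look like this (an illustrative sketch, not from the original answer; the read_guess parameter is introduced here so the loop can be exercised without a live console):

```python
from random import randint

def build_board(size):
    # Pre-populate the board with the position numbers "1".."size",
    # so guesses can be marked in place.
    return [str(i) for i in range(1, size + 1)]

def mark_guess(board, guess, symbol):
    # Lists are zero-indexed, so position n lives at index n - 1.
    board[guess - 1] = symbol
    return board

def play(size=5, max_attempts=3, read_guess=input):
    ship = randint(1, size)  # 1-based, matching the displayed positions
    board = build_board(size)
    print("Let's play Battleship!")
    for attempt in range(1, max_attempts + 1):
        print(" ".join(board))
        guess = int(read_guess("Guess Where My Ship Is: "))
        if guess == ship:
            print(" ".join(mark_guess(board, guess, "X")))
            print("Congratulations Captain, you sunk my battleship!")
            return True
        print("You missed my battleship!")
        mark_guess(board, guess, "*")
        print("Try again!" if attempt < max_attempts
              else "Better luck next time, Captain!")
    return False
```

Calling `play()` interactively reproduces the board displays from the question, e.g. `1 2 3 * 5` after a miss on position 4.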
https://codedump.io/share/kqW3DwcGF8ae/1/list-problems-in-1-dimensional-python-battleship
Release date: to be determined, pending resolution of the below issues. Code freeze and release candidate planned for Fri Apr 25 (previously Fri Apr 18; to be determined pending resolution of the below issues). Final pre-merge repo staged at

Features:

- WiFi sleep mode
  - Status: Waiting for review of new code (posted Friday) from Daniel.
- PacketSocket application
  - Status: Waiting for Tom's review of recent revisions
- Flow Monitor IPv6 support
  - Status: Merged (was: ready to merge, Tommaso)
- Adding A-MPDU aggregation
  - Status: Daniel reviewing to recommend which chunks can be merged to ns-3-dev now, and which saved for later.

Bugs in core or build system

- bug 1900 time arithmetic consistency across platforms
  - Status: For ns-3.20, propose to solve this according to bug 1900 plan (adjust the failing tests)
- bug 1898 problem with click
  - Status: Fixed
- bug boost library detection
  - Status: Patch needs to be applied
- bug 1868 fstrict-overflow fixes
  - Status: Once patch is confirmed on clang systems, Peter plans to push the patch
- bug 1847 test.py output for failed tests
  - Status: Waiting for brief review before pushing.

Bugs in models to fix

- bug 1903 (Olsr) fix namespace usage
  - Status: Ready to apply
- bug 1770 (mesh) selected mesh tests failing
  - Status: Peter bisected this while working on bug 1868, but it hasn't been debugged yet.
- bug 1895 and bug 1872 (dsr) DSR fixes
  - Status: Seems to be nearing completion to merge.
- bug 1876 (olsr HNA access)
  - Status: just need final patch
- bug 1831 TCP slow start tracing
  - Status: Brian reviewing.
- bug 1850 TCP NewReno patch
  - Status: Fixed
- bug 1858 Wireless examples
  - Status: Assigned to Daniel to review
- bug 1873 Energy source/container confusion
  - Status: Fixed
- bug 1829 TCP socket forking
  - Status: Fixed
- bug 1791 TCP endpoint deallocation
  - Status: Fixed
- bug 1824 (bind2netdevice for IPv6)
  - Status: Waiting for review, may not make this release
- bug 1817 IP ID for each protocol
  - Status: Patch pending (Tommaso) - waiting green light.
https://www.nsnam.org/mediawiki/index.php?title=Ns-3.20&direction=prev&oldid=8505
by Robert Muth Simple Directmedia Layer (SDL) is a popular library that many games and applications use to access sound and video capabilities on end-user machines. Native Client bindings for SDL have recently become available on naclports; thus it is now possible to port SDL-based games to Native Client. This article describes how to complete such a port. The focus of the article is on writing the glue code for fusing your game with PPAPI (the bridge between Native Client modules and the browser, also known as "Pepper"). Other important aspects, such as how to load resources and files, are covered in other articles listed in the Links section. What SDL components are supported? The SDL bindings for Native Client currently support the following components: - 2D graphics (SDL_INIT_VIDEO) - audio (SDL_INIT_AUDIO) - input events (mouse, keyboard) - timer events (SDL_INIT_TIMER) At present, the SDL bindings for Native Client do not support the following components: - SDL_INIT_JOYSTICK - SDL_INIT_CDROM Step 1: Install the Native Client SDK and the SDL bindings for Native Client. In order to port an SDL-based game to Native Client, you must: - Download and install the Native Client SDK. - Install the SDL bindings for Native Client by checking out and building the SDL library. Step 2: Modify the main() function in your game's code. Native Client modules are event-driven and do not use main() as an entry point. Thus, you must rename the main() function to something like game_main(). You must also move the initialization of SDL out of main() and into your new PPAPI glue code (listed below). Thus, remove the call to SDL_Init() from main(). This is a good time to check whether the SDL bindings for Native Client support the SDL components your game uses – make sure that the arguments to SDL_Init are on the list of supported components shown above. Step 3: Write glue code to fuse your game with PPAPI. 
Native Client uses PPAPI to play audio and render graphics in the browser (see the Pepper C++ reference for additional information). The Native Client port of SDL hides most of the use of PPAPI from developers, but you still need to fuse the game code with PPAPI. The code samples below illustrate how to do so. Note that the code samples use the C++ version of PPAPI. You can put the code samples in a new file, say nacl_glue.cc, which you can compile and link with the game code as described in the next section of this article.

As with all Native Client modules, your code must include a Module class and an Instance class. These classes provide an entry point into your module, and represent multiple instances of your module that could in theory be embedded into a web page. The code fragment below shows subclasses called GameModule and GameInstance:

    class GameModule : public pp::Module {
     public:
      GameModule() : pp::Module() {}
      virtual ~GameModule() {}
      virtual pp::Instance* CreateInstance(PP_Instance instance) {
        return new GameInstance(instance);
      }
    };

    namespace pp {
    Module* CreateModule() {
      return new GameModule();
    }
    }  // namespace pp

The function pp::CreateModule() is actually the only real entry point into your module; PPAPI bootstraps all other entry points from this function. As alluded to above, in theory a Native Client module could be instantiated multiple times within the same web page; all instances would then be handled by a single process. In reality this rarely works with ported applications because of global variables and other considerations. The code fragment below explicitly guards against the creation of multiple instances:

    class GameInstance : public pp::Instance {
     private:
      static int num_instances_;    // Ensure we only create one instance.
      pthread_t game_main_thread_;  // This thread will run game_main().
      int num_changed_view_;        // Ensure we initialize an instance only once.
      int width_;
      int height_;                  // Dimension of the SDL video screen.
      pp::CompletionCallbackFactory<GameInstance> cc_factory_;

      // Launches the actual game, e.g., by calling game_main().
      static void* LaunchGame(void* data);

      // This function allows us to delay game start until all
      // resources are ready.
      void StartGameInNewThread(int32_t dummy);

     public:
      explicit GameInstance(PP_Instance instance)
          : pp::Instance(instance),
            game_main_thread_(NULL),
            num_changed_view_(0),
            width_(0),
            height_(0),
            cc_factory_(this) {
        // Game requires mouse and keyboard events; add more if necessary.
        RequestInputEvents(PP_INPUTEVENT_CLASS_MOUSE |
                           PP_INPUTEVENT_CLASS_KEYBOARD);
        ++num_instances_;
        assert(num_instances_ == 1);
      }

      virtual ~GameInstance() {
        // Wait for the game thread to finish.
        if (game_main_thread_) {
          pthread_join(game_main_thread_, NULL);
        }
      }

      // This function is called with the HTML attributes of the embed tag,
      // which can be used in lieu of command line arguments.
      virtual bool Init(uint32_t argc, const char* argn[], const char* argv[]) {
        [Process arguments and set width_ and height_]
        [Initiate the loading of resources]
        return true;
      }

      // This crucial function forwards PPAPI events to SDL.
      virtual bool HandleInputEvent(const pp::InputEvent& event) {
        SDL_NACL_PushEvent(event);
        return true;
      }

      // This function is called for various reasons, e.g. visibility and page
      // size changes. We ignore these calls except for the first
      // invocation, which we use to start the game.
      virtual void DidChangeView(const pp::Rect& position, const pp::Rect& clip) {
        ++num_changed_view_;
        if (num_changed_view_ > 1) return;
        // NOTE: It is crucial that the two calls below are run here
        // and not in a thread.
        SDL_NACL_SetInstance(pp_instance(), width_, height_);
        // This is the SDL_Init call which used to be in game_main().
        SDL_Init(SDL_INIT_TIMER | SDL_INIT_AUDIO | SDL_INIT_VIDEO);
        StartGameInNewThread(0);
      }
    };

For simplicity, the function StartGameInNewThread(), shown below, uses polling to wait until all resources are available.
In most circumstances it is possible to avoid polling and use a scheme based on PPAPI's asynchronous callbacks.

    void StartGameInNewThread(int32_t dummy) {
      if ([All Resources Are Ready]) {
        pthread_create(&game_main_thread_, NULL, &LaunchGame, this);
      } else {
        // Wait some more (here: 100ms).
        pp::Module::Get()->core()->CallOnMainThread(
            100,
            cc_factory_.NewCallback(&GameInstance::StartGameInNewThread),
            0);
      }
    }

    static void* LaunchGame(void* data) {
      // Use "thiz" to get access to the instance object.
      GameInstance* thiz = reinterpret_cast<GameInstance*>(data);
      // Craft a fake command line.
      const char* argv[] = { "game", ... };
      game_main(sizeof(argv) / sizeof(argv[0]), argv);
      return 0;
    }

Step 4: Compile and link your code.

Native Client modules are currently processor-specific, which means that you must provide both a 32-bit and a 64-bit version of your module. Assuming your SDK is located at $(NACL_SDK_ROOT), you can create the different versions of your module by using the two compiler settings shown below:

    CC = $(NACL_SDK_ROOT)/toolchain/linux_x86/bin/i686-nacl-g++ -m32

or

    CC = $(NACL_SDK_ROOT)/toolchain/linux_x86/bin/i686-nacl-g++ -m64

Note that the compiler sets the following preprocessor symbol, which you can use to enable Native Client-specific conditional compilation:

    #define __native_client__ 1

Once you've compiled your game code and the PPAPI glue code (e.g., the nacl_glue.cc file described in the previous section), you can create an executable Native Client module by linking the following files:

- nacl_glue.o - the PPAPI glue code discussed above
- -lSDL - part of the Native Client SDL port
- -lSDLmain - part of the Native Client SDL port
- -lppapi - PPAPI C bindings
- -lppapi_cpp - PPAPI C++ bindings
- -lnosys - library with stubs for common functions like kill(), which are not available in Native Client (note that these functions will cause asserts when actually called)

If you're using autoconf-based software, you can avoid typing these file names by directing the software to the correct sdl-config, e.g.:

    ./configure --with-sdl-exec-prefix=$(NACL_SDK_ROOT)/toolchain/linux_x86/i686-nacl/usr

Because you renamed the main() function, the linker might get confused and report undefined symbols during the final link (this is especially true when the exact link line is not completely under your control, e.g., when using autotools/configure). In such cases you can work around the problem by using the "-u <symbol>" option, e.g., -u game_main. Note again that you must create two versions of the Native Client executable module, e.g., game32.nexe and game64.nexe.

Step 5: Create an HTML file and a manifest file.

After you have generated the 32- and 64-bit versions of your Native Client module, you must create a manifest file to tell the browser which version of the module to load based on the end-user's processor. A sample manifest file, say game.nmf, looks as follows:

    {
      "program": {
        "x86-32": {"url": "game32.nexe"},
        "x86-64": {"url": "game64.nexe"}
      }
    }

The manifest file is in turn referenced by an HTML file, which can be as simple as this:

    <!DOCTYPE html>
    <html>
      <body>
        <!-- Note: Attributes are passed to GameInstance::Init(). -->
        <embed width="640" height="480" src="game.nmf" type="application/x-nacl" />
      </body>
    </html>

Step 6: Run your game in Chrome.

See How to Test-Run Web Applications for instructions on how to run your game.

Links

- Porting MAME to Native Client
- Porting XaoS to Native Client
https://developers.google.com/native-client/dev/community/porting/SDLgames
On 12/04/2012 02:19 PM, Michal Privoznik wrote: > Network should be notified if we plug in or unplug an > interface, so it can perform some action, e.g. set/unset > network part of QoS. However, we are doing this in very > early stage, so iface->ifname isn't filled in yet. So > whenever we want to report an error, we must use a different > identifier, e.g. the MAC address. > --- > src/Makefile.am | 7 +- > src/conf/domain_conf.h | 1 + > src/conf/network_conf.c | 21 ++++ > src/conf/network_conf.h | 4 + > src/libvirt_network.syms | 13 +++ > src/libvirt_private.syms | 8 -- > src/network/bridge_driver.c | 223 ++++++++++++++++++++++++++++++++++++++++++- > 7 files changed, 266 insertions(+), 11 deletions(-) > create mode 100644 src/libvirt_network.syms > > diff --git a/src/Makefile.am b/src/Makefile.am > index 01cb995..04378d1 100644 > --- a/src/Makefile.am > +++ b/src/Makefile.am > @@ -1373,6 +1373,10 @@ if WITH_ATOMIC_OPS_PTHREAD > USED_SYM_FILES += libvirt_atomic.syms > endif > > +if WITH_NETWORK > +USED_SYM_FILES += libvirt_network.syms > +endif > + > EXTRA_DIST += \ > libvirt_public.syms \ > libvirt_private.syms \ > @@ -1386,7 +1390,8 @@ EXTRA_DIST += \ > libvirt_sasl.syms \ > libvirt_vmx.syms \ > libvirt_xenxs.syms \ > - libvirt_libssh2.syms > + libvirt_libssh2.syms \ > + libvirt_network.syms > > GENERATED_SYM_FILES = libvirt.syms libvirt.def libvirt_qemu.def > > diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h > index 4ab15e9..b4d149b 100644 > --- a/src/conf/domain_conf.h > +++ b/src/conf/domain_conf.h > @@ -807,6 +807,7 @@ struct _virDomainActualNetDef { > virNetDevVPortProfilePtr virtPortProfile; > virNetDevBandwidthPtr bandwidth; > virNetDevVlan vlan; > + unsigned int class_id; /* class ID for bandwidth 'floor' */ > }; I also just noticed that you add this directly into the actualnetdef rather than into the virNetDevBandwidth. Did you maybe do this because you wanted the parser to be able to see the network type before allowing it? 
If so, I don't think that's necessary - the actualdef is never parsed from user config, so if it's there at an inappropriate time, it's either a program bug, or someone messing with the domain status file. I figured it would be simplest to pass around if it was part of the bandwidth object (and logically it fits there). Also, I think it's appropriate to add the bits for parsing/formatting a new element in the same patch where it was added to the struct, but you've delayed it to a separate patch at the end that only does that one thing. I'm okay with that, since it will still pass make check at each step, but I think it would make more sense to put that last patch earlier, and include this data definition with it.
https://www.redhat.com/archives/libvir-list/2012-December/msg00545.html
This document describes how to create views in BigQuery.

You can create a view in BigQuery in the following ways:

- Using the Cloud Console.
- Using the bq command-line tool's bq mk command.
- Using the client libraries.

A view name can contain the following:

- Up to 1,024 characters
- Letters (uppercase or lowercase), numbers, and underscores

For information about quotas and limits that apply to views, see View limits.

Required permissions

Creating a view

You can create a view by composing a SQL query that is used to define the data accessible to the view. To create a view:
The expiration time is set to 3600 seconds (1 hour), the description is set to This is my view, and the label is set to organization:development. The query used to create the view queries data from the USA Name Data public dataset. bq mk \ --use_legacy_sql=false \ --expiration 3600 \ --description "This is my view" \ --label organization:development \ --view \ 'SELECT name, number FROM `bigquery-public-data.usa_names.usa_1910_current` WHERE gender = "M" ORDER BY number DESC' \ mydataset.myview Enter the following command to create a view named myview in mydataset in myotherproject. The description is set to This is my view, the label is set to organization:development, and the view's expiration is set to the dataset's default table expiration. The query used to create the view queries data from the USA Name Data public dataset. bq mk \ --use_legacy_sql=false \ --description "This is my view" \ --label organization:development \ -.TableId; import com.google.cloud.bigquery.TableInfo; import com.google.cloud.bigquery.ViewDefinition; // Sample to create a view public class CreateView { public static void main(String[] args) { // TODO(developer): Replace these variables before running the sample. String datasetName = "MY_DATASET_NAME"; String tableName = "MY_TABLE_NAME"; String viewName = "MY_VIEW_NAME"; String query = String.format( "SELECT TimestampField, StringField, BooleanField FROM %s.%s", datasetName, tableName); createView(datasetName, viewName, query); } public static void createView(String datasetName, String viewName, String query) { try { // Initialize client that will be used to send requests. This client only needs to be created // once, and can be reused for multiple requests. 
BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService(); TableId tableId = TableId.of(datasetName, viewName); ViewDefinition viewDefinition = ViewDefinition.newBuilder(query).setUseLegacySql(false).build(); bigquery.create(TableInfo.of(tableId, viewDefinition)); System.out.println("View created successfully"); } catch (BigQueryException e) { System.out.println("View was not created. \n" + e.toString()); } } }() view_id = "my-project.my_dataset.my_view" source_id = "my-project.my_dataset.my_table" view = bigquery.Table(view_id) # The source table in this example is created from a CSV file in Google # Cloud Storage located at # `gs://cloud-samples-data/bigquery/us-states/us-states.csv`. It contains # 50 US states, while the view returns only those states with names # starting with the letter 'W'. view.view_query = f"SELECT name, post_abbr FROM `{source_id}` WHERE name LIKE 'W%'" # Make an API request to create the view. view = client.create_table(view) print(f"Created {view.table_type}: {str(view.reference)}") After you create the view, you query it like you query a table. Next steps - For information about creating an authorized view, see Creating authorized views. - For information about listing views, see Listing views. - For information about getting view metadata, see Getting information about views. - For information about updating views, see Updating views. - For more information about managing views, see Managing views.
https://cloud.google.com/bigquery/docs/views?hl=hu
CC-MAIN-2021-10
refinedweb
766
51.44
In the previous part of the article, we explained how to compile the Windows kernel driver. Now that we know how to compile the driver, we also have to look at how to load it into the kernel. We'll be using the Service Control Manager (SCM), which is implemented by the services.exe program under Windows and is responsible for starting, stopping and interacting with Windows service processes. The picture below shows that the services.exe program is indeed running:

In the article, we'll see different methods of interacting with the SCM: by using OSR Driver Loader, by using sc.exe, and of course by using the Win32 API functions.

The services.exe program is started early on in the system startup. After it is started, it must launch all of the services that are configured to start automatically. When the services.exe program starts, its internal database is initialized by reading the HKLM\SYSTEM\CurrentControlSet\Control\ServiceGroupOrder\List registry key, which contains the names and order of service groups [9]. This can be seen in the picture below:

Another registry key is also read: HKLM\SYSTEM\CurrentControlSet\Services, which contains the database of services and device drivers and is read into the SCM's internal database [9]. Some of the services are presented below:
Let’s first download the OSR Driver Loader and select our driver.sys (seen in the Driver Path on the picture below): After that, click on the Register Service and Start Service. As soon as this happens, we need to refresh the drivers in winobj.exe, which will now list the Example driver, as seen on the picture below: We can see that the driver has been loaded into the kernel, which is exactly what we’re trying to achieve. But this by itself doesn’t tell us much, because we can’t directly interact with the driver and see whether it’s doing anything or now. This is exactly why dbgview.exe comes in handy, because it should display the messages we’re printing with DbgPrint in the kernel driver. Right after starting dbgview.exe, we need to enable the “Capture Kernel” option, which enables logging of kernel messages (otherwise we won’t see the messages printed by our driver): If we go back to the OSR Driver Loader and click on Stop Service, then Start Service again, we will see our DbgPrint statements written in dbgview.exe. We can see that on the picture below: We’ve come to the point where all of this suddenly seems very cool, because we can actually see what we were working on. The first entry in dbgview.exe is printed by the DriverUnload function, because we’ve unloaded the driver. And the second entry is printed by the DriverEntry routine, since we’re loading the driver again. But there’s one problem with loading the driver like this: it leaves a trail in the registry under the HKLM\System\CurrentControlSet\Services\driver key, as seen below: We will explain this in more detail later. Let’s just say that by using the above approach. the entry is written to the registry, which leaves behind a trail, so a security researcher looking for an evidence of a compromise can easily find the entry in registry. We can also start the service and load the driver directly from the command prompt by using the sc.exe command. 
We can see all of the commands on the picture below, where we’re first creating the service named example and then we’re starting that same service. When the service is started it will print some of the information about itself: the type of service, which is KERNEL_DRIVER and the state, which is RUNNING, etc… We used the “sc.exe create” command to create a new entry; this command calls the underlying CreateService() function. Keep in mind that there should be a space after the ‘=’ character and before the values of the parameters. We used the start= command with the demand option, but remember that there are other options as well. They are listed below: - boot : the driver will be loaded by system boot loader winload.exe - system : the driver will be loaded by kernel ntoskrnl.exe - auto : the driver will be loaded by services.exe - demand : the driver is loaded manually - disabled : the driver cannot be loaded If we have dbgview.exe open at the same time of loading the driver with “sc.exe start”, we’ll see a new “DriverEntry Called” message that will be printed to the debug log, which proves that the driver has been successfully loaded into the kernel. After the service is created with the “sc.exe create”, it will be saved into the registry under the HKLM\System\CurrentControlSet\Services\example key as seen below: We can see that the “sc.exe create” command created a new entry example with the key-value pair as seen on the picture above. The macro names for the numbers above can be seen in the winnt.h header file located in the C:\WinDDK\7600.16385.1\inc\api folder. The Type name has the value 0x1, which is the SERVICE_KERNEL_DRIVER macro as seen below: Once the service is started, another folder will be created, the Enum folder, as seen on the picture below: But there’s also a third option to load the driver into the kernel mode: that is by using the code accessible in the LoadDriver/ directory of the example at [1]. 
To compile the example, we have to delete the makefile and create a sources file with the following contents: Once we’ve started the build environment and issued the bcz command, the main.exe executable program will be created. Let’s present the whole code taken from [1] that does this: #include <windows.h> #include <stdio.h>); CloseServiceHandle(hService); DeleteService(hService); } CloseServiceHandle(hSCManager); } return 0; } When compiling the code, you should change the path to the sys driver, so the driver can be found and loaded into the kernel. After the compilation phase is done, we can start the program and it will load the driver into the kernel, so our user application can use its services. Let’s analyze the code now. First we’re calling the OpenSCManager function to establish a connection to the service control manager and its database. The syntax of the function is presented below and was taken from [5]: The lpMachineName is the name of the target computer, which is NULL in our case. That means that we’re connecting to the service control manager on the local computer. The lpDatabaseName specifies service control manager database, which is also NULL in our case. That means that the SERVICES_ACTIVE_DATABASE database is opened by default. The dwDesiredAccess specifies the access to the service control manager: we used the SC_MANAGER_CREATE_SERVICE, which requests permissions to call the CreateService function to create a service object and add it to the database. The OpenSCManager function returns NULL on failure, otherwise a handle is returned. hSCManager = OpenSCManager(NULL, NULL, SC_MANAGER_CREATE_SERVICE); Then we’re calling the CreateService function that creates a service object and adds it to the specified service control manager database. 
The syntax can be seen below [6]: hService = CreateService(hSCManager, "Example", "Example Driver", SERVICE_START | DELETE | SERVICE_STOP, SERVICE_KERNEL_DRIVER, SERVICE_DEMAND_START, SERVICE_ERROR_IGNORE, "C:\\example.sys", NULL, NULL, NULL, NULL, NULL); The parameters to the function are pretty much self-explanatory. Let’s specifically mention only the lpBinaryPathName parameter that should contain a path to our driver. The function returns NULL on failure, otherwise it returns a handle to the service. At last, we’re also calling the OpenService function that opens an existing service. The syntax is as follows [7]: The lpServiceName specifies the name of the service to be opened and should be the same as specified as the lpServiceName when calling the CreateService function. The unction returns NULL on failure, otherwise it returns a handle to the service. hService = OpenService(hSCManager, "Example", SERVICE_START | DELETE | SERVICE_STOP); At last, the StartService function is called to actually start the service. The syntax of the function call is as follows [8]: The hService is a handle to the server, which is returned by the OpenService function call. The function returns zero on failure, otherwise a non-zero number is returned. StartService(hService, 0, NULL); When we compile and run the program, it will create the same entry as before in the registry, so even with this option there are forensic evidences left in registry, which can arise suspicion in a system administrator. When we have loaded our driver with Service Control Manager (SCM), there was an entry saved to the registry, which leaves a trail of our driver being loaded. Since we want to be stealth, we want to load the driver with as little evidence as possible. To do that without an entry being added to the registry, we need to use an export driver. 
An export driver supports a subset of the features of a real kernel driver: it doesn't have a dispatch table, it doesn't have a place in the driver stack, and it doesn't have an entry in the SCM database that defines it as a system service [10]. Because it doesn't have an entry in the SCM database, there is no trace of our driver being loaded in the registry. When we build an export driver, we must place it in the C:\WINDOWS\System32\drivers\ directory in order for its functions to be accessible. Also, the driver is only loaded into the kernel when it is used by another driver; thus, if no driver is using the export driver, the export driver is not loaded into the kernel.

Because we need another driver to load the export driver, this is not exactly the solution we're looking for: considerably more evidence is left on the system when using an export driver. We have the export driver present in the C:\WINDOWS\System32\drivers\ directory, and yet another driver, the one that loads the export driver, that leaves a footprint in the registry. This is also the reason why we don't want to use an export driver in the first place; we must find another way to load drivers into the kernel, presumably one that leaves fewer traces on the system. One way to go about it is to find a vulnerability in the Windows operating system itself and leverage it to load our driver into kernel mode.

User Mode Application

Let's also present an example, also taken from [1], of a user application that can communicate with the kernel driver. The application revolves around a single WriteFile call; the \\.\Example device name below is assumed to match the "Example" service we created earlier:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE hFile;
    DWORD dwReturn;

    /* open the device the driver exposes; the name is assumed to
       match the "Example" service created by the loader */
    hFile = CreateFile("\\\\.\\Example", GENERIC_READ | GENERIC_WRITE,
                       0, NULL, OPEN_EXISTING, 0, NULL);
    if (hFile != INVALID_HANDLE_VALUE) {
        /* send a string down to the kernel driver */
        WriteFile(hFile, "Hello from user mode!",
                  sizeof("Hello from user mode!"), &dwReturn, NULL);
        CloseHandle(hFile);
    }
    return 0;
}
```

We can see that we're dealing with a very simple program. We can compile the program with the same sources file as we used in the previous cases. Once the compilation is complete, we'll have a main.exe program, which we can run.
At the same time we have to have dbgview.exe open, so we can see any messages printed to the debugger console log. Once we run the main.exe program, several entries appear in the console log. The first entry is present because the driver was loaded into the kernel; all the other entries are printed because we've run the user mode application main.exe. We can see that the message from user mode was printed to the console, which means that we've successfully passed a string from the user application to the kernel mode driver. Notice that the "Hello from user mode!" message is exactly the message we're passing to the WriteFile function in our user mode application; this is the message that's being printed by the kernel by using the DbgPrint function.

Conclusion

In this article we've seen how to load a kernel driver into the kernel, and presented a user application that uses the driver's services.

References:

[1] Driver Development Part 1: Introduction to Drivers.
[2] IoGetCurrentIrpStackLocation routine.
[3] MmGetSystemAddressForMdlSafe macro.
[4] IRP.
[5] OpenSCManager function.
[6] CreateService function.
[7] OpenService function.
[8] StartService function.
[9] Service Control Manager.
[10] Creating Export Drivers.
http://resources.infosecinstitute.com/loading-the-windows-kernel-driver/
20 April 2007 09:02 [Source: ICIS news]

By Matt Kovac

The high cost of natural gas as a feedstock for the chemical industry was contributing to plants becoming obsolete, said David Graham, vice president of environment, health and safety at Dow Chemical, on Friday.

"When equipment becomes obsolete then you have to pretty quickly find joint ventures around the world," he said on the sidelines of a United Nations sponsored environmental summit in

"This comes at the expense of the

Research by bank HSBC Saudi Arabia showed that US Henry Hub gas prices averaged $6.73/m Btu in 2006 and around $7.20/m Btu in the first quarter of 2007, 10 times more costly than in parts of the

Saudi Basic Industries Corp (SABIC), for example, pays 75 cents/m Btu for gas in

Democrats in

The bill is likely to encounter some opposition, particularly to drilling off the coast of

However, the bill may not be enough to solve ageing crackers, which are already close to the limit of their normal operating life. The HSBC research said the average age of ethylene plants in the

"The decision to close a plant is now a lot easier than it was before, and we already see signals that operators of marginal capacity are looking to exit," the report said.

Dow recently signed a deal with

"We have the same standards wherever we are in the world and that's why we are the preferred joint venture partner with governments," said Graham.
http://www.icis.com/Articles/2007/04/20/9022232/interview-dow-exec-critiques-us-energy-policy.html
Radians are mostly used when you want to do mathematical calculations on a circular object or path. Python has built-in functions that can convert radians to degrees and degrees to radians. In this entire tutorial, you will learn how to convert degrees to radians and vice versa in Python, using various examples.

How to Convert Degrees to Radians in Python

In this section, you will learn how to convert degrees to radians in Python. Suppose I want to use a degree value with the NumPy cos() method. As this function accepts only radians, I have to convert the value first. Python has a built-in function for this conversion: math.radians().

```python
import math
import numpy as np

radians = math.radians(30)
print(np.cos(radians))
```

The above code first converts the degrees to radians, and then the result is passed as an argument to the numpy.cos() method. When you run the code, it prints the cosine of 30 degrees, approximately 0.8660254.

The other method to convert degrees to radians is to define your own custom function. Inside the function, you will use the radians conversion formula. The following are the lines of code for the function:

```python
import math
import numpy as np

def to_radian(degree):
    return degree * math.pi / 180

print(np.cos(to_radian(30)))
```

You can see that in the function I am taking a degree value as a parameter and converting it to radians using the formula degree * pi / 180. You will get the same output as with the first method.

Convert Radians to Degrees

The math module also provides a method to convert radians to degrees, and that function is math.degrees(). You pass a radian value to this method and it will convert it to degrees. Let's take the radian value for 30 degrees from the methods above and convert it back to degrees. Execute the below lines of code:

```python
import math

radians = math.radians(30)
degrees = math.degrees(radians)
print(degrees)
```

In the above code, you can see that I am taking the radian value and passing it to the degrees() method.
It will convert the radian value back to degrees, printing 30.0 (possibly with a tiny floating-point rounding error, e.g. 29.999999999999996). These are examples of how to convert degrees to radians in Python and vice versa. I hope you have liked this tutorial. If you have any queries, then you can contact us for more help.
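The two conversions are inverses of each other, which is easy to check. The sketch below compares hand-written converters against the stdlib functions; the helper names to_radians and to_degrees are invented for illustration:

```python
import math

def to_radians(degrees):
    # manual formula: degrees * pi / 180
    return degrees * math.pi / 180

def to_degrees(radians):
    # manual formula: radians * 180 / pi
    return radians * 180 / math.pi

# the manual helpers agree with the stdlib versions
print(math.isclose(to_radians(30), math.radians(30)))      # True
print(math.isclose(to_degrees(math.pi), 180.0))            # True

# converting to radians and back returns the original angle
print(math.isclose(math.degrees(math.radians(30)), 30.0))  # True
```

Using math.isclose() instead of == sidesteps the last-bit rounding differences you can get from reordering the multiplication and division.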
https://www.datasciencelearner.com/convert-degrees-to-radians-python/
Summary of Chapter 9. Platform-specific API calls

Note: Notes on this page indicate areas where Xamarin.Forms has diverged from the material presented in the book.

Note: Portable Class Libraries have been replaced by .NET Standard libraries. All the sample code from the book has been converted to use .NET Standard libraries.

A library cannot normally access classes in application projects. This restriction seems to prevent the technique shown in PlatInfoSap2 from being used in a library. However, Xamarin.Forms contains a class named DependencyService that uses .NET reflection to access public classes in the application project from the library.

The library must define an interface with the members it needs to use on each platform. Then, each of the platforms contains an implementation of that interface. The class that implements the interface must be identified with a DependencyAttribute on the assembly level. The library can then call DependencyService.Get<T>() to obtain an instance of the class that implements the interface on each platform.
https://docs.microsoft.com/en-us/xamarin/xamarin-forms/creating-mobile-apps-xamarin-forms/summaries/chapter09
This page describes parts of the Path class design which are in discussion. It is meant to show the current state of the discussion, so when we reach a consensus, we can delete all the discussion details and just write the decision. Please write your opinions below in the appropriate section, or start a new section. Also indicate what you agree with, so we know how close to consensus we are. This discussion will be used to write a PEP (an alternative to PEP 355) and a reference implementation.

Work coordination

Mike (2006/08/20): I'll be working on a reference implementation and a PEP soon. Anybody else feel inclined? We should get working now if we want this in Python 2.6.

Constructor

agreed:

{{{
Path("/a/b")     => a Path object for "/a/b".
Path()           => same as Path(os.curdir).
Path.cwd()       => same as Path(os.getcwd()).
Path(["a", "b"]) => same as Path('a/b') - since a path is a sequence of
                    items, it should be initializable from an iterable of items.
}}}

Representation

agreed: A logical representation is better than a string representation. p[:] should behave like a tuple of path components (directories and the final filename). p[n1:n2] should return a new Path containing only the sliced components. p1 + p2 should join paths. This eliminates the need for several properties/methods: .parent, .name, .join(), .split(), etc. str(p) should return a platform-specific string representing the entire path.

Joining absolute paths

Mike: p1 + p2 should return p2 if it isn't relative.

Noam: p1 + p2 should raise an exception if p2 isn't relative. Rationale:
- Explicit is better than implicit.
- It makes len(p1 + p2) == len(p1) + len(p2), which is nice.
- It's pretty easy to write p1 + p2 if p2.isrel else p2.

Mike: My proposal matches existing os.path.join() behavior. We shouldn't contradict it without a good reason. The most common use case is Path.cwd() + Path(sys.argv[1]). However, I'm not personally opposed to Noam's proposal if there's consensus for it.
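The two proposals differ only in how an absolute right-hand operand is handled. A toy sketch, with plain tuples standing in for Path objects and a leading "/" component marking an absolute path (the helper names join_mike and join_noam are invented for illustration):

```python
def is_absolute(p):
    # toy model: a path is a tuple; a first component of "/" marks it absolute
    return len(p) > 0 and p[0] == "/"

def join_mike(p1, p2):
    # Mike's proposal, matching os.path.join(): an absolute right-hand
    # operand replaces the left-hand path entirely
    return p2 if is_absolute(p2) else p1 + p2

def join_noam(p1, p2):
    # Noam's proposal: joining an absolute path on the right is an error
    if is_absolute(p2):
        raise ValueError("cannot join an absolute path on the right")
    return p1 + p2

home = ("/", "home", "user")
rel = ("docs", "notes.txt")

print(join_mike(home, rel))           # ('/', 'home', 'user', 'docs', 'notes.txt')
print(join_mike(home, ("/", "etc")))  # ('/', 'etc')

# where both are defined, the two proposals agree
print(join_noam(home, rel) == join_mike(home, rel))        # True
# Noam's variant preserves len(p1 + p2) == len(p1) + len(p2)
print(len(join_noam(home, rel)) == len(home) + len(rel))   # True
```

The only observable difference is the absolute-right-operand case, where one silently discards p1 and the other raises.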
Why does the length of the combined path matter?

Noam: About the length: I feel it's nice that addition preserves this property, that's all. About cwd() + something: the abspath method does that.

Path + string

What should Path("/a/b") + "c" do? Alternatives:
- Join paths. Same as Path("/a/b/c").
- Append to the filename (useful for extensions). Same as Path("/a/bc").
- Raise an exception, because tuple + string and list + string are illegal in Python.

Noam: I know that tuple + string is illegal, but I think that since there's an obvious way to treat the string as a path, it's OK.

Mike: Maybe. But Guido rejected / for joining; he may also reject +. Its obviousness is debatable. If we do use + for joining, we'll need APIs to modify the filename and extension without having to split/rejoin. Note: discussion of a filename/extension API is in the Filename/Extensions section below.

Noam: Another point against automatic conversion: it doesn't preserve the property that (a + b) + c == a + (b + c). But it is convenient...

One sequence or several parts?

agreed: The filename or leaf directory should be the final component of the sequence, with extensions treated as part of the filename.

Should the root and drive be encoded as the first component of the sequence, or as attributes? On POSIX there is one root: "/". On Windows, each drive has its own root and current directory: r"C:\", r"D:\". There is an implied default drive, subject to .chdir. r"C:foo" is relative to drive C:'s current directory. Should we encode all this info in the first Path component, or as .isabs/.isabsolute and .root attributes? What about Windows UNC paths (r"\\a\b\c")?

Noam: Keep it all in the sequence. Sequence slicing is simple and intuitive. Attributes storing data not found in the sequence complicate matters. The root element stores all the data of "where to start from". UNCRoot stores the host and share name.

Mike: Encoding absolute/relative and drive in the sequence may be too obscure and magical.
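Noam's "keep it all in the sequence" idea amounts to a special first element that carries the "where to start from" information. A minimal sketch (the Root class and is_absolute helper are hypothetical, invented to illustrate the proposal):

```python
class Root:
    """Toy first element of an absolute path: '/' on POSIX, r'C:\\' on
    Windows. It carries the 'where to start from' information."""
    def __init__(self, text):
        self.text = text

    def __repr__(self):
        return "Root(%r)" % self.text

def is_absolute(components):
    # a path is absolute exactly when its first element is a root
    return bool(components) and isinstance(components[0], Root)

p = (Root("/"), "usr", "local", "bin")
print(is_absolute(p))            # True
print(is_absolute(p[1:]))        # False - slicing off the root leaves a relative path
print(is_absolute(("a", "b")))   # False
```

Under this model, absoluteness is not a separate attribute of the path: it falls out of ordinary sequence operations, which is the property Noam is arguing for.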
Noam: I don't think so - it's just an attribute of the root element, "isabs".

Mike: Note that slicing off the front of an absolute path makes a relative path: Path("/a/b/c")[1:] => Path("a/b/c")

Noam: which is great.

A separate class for files and directories?

agreed: The same class will represent a file, directory, or symbolic link. (Reasons can be found in the wiki history.)

Inheritance from str to allow easy use in other functions

Noam, Mike: This won't work. Strings must slice by character, and this is incompatible with slicing by directory component.

Inheritance from tuple

Noam: I think it works well. Guido said that he didn't like it, but I don't understand why. If all the data is stored in the sequence, I think a sequence interface should be provided. As far as I can see, the tuple interface is just that: an interface for an immutable sequence. This means that it doesn't cause any unwanted restrictions, so I don't see why not to inherit from it.

Jason: I suggest making it look like a sequence without actually subclassing tuple. It is rather strange to be subclassing tuple this way.

Noam: I guess this may be left to Guido's decision. I feel that subclassing from tuple is fine, but I don't really care.

Mike: The top level can emulate tuple slicing/addition to return a new Path object. It doesn't have to *subclass* tuple.

Noam: Can you please elaborate on why not to subclass from tuple?

Mike: Containment is better than inheritance. Never subclass if you can reasonably put the value in an attribute; it leads to all sorts of potential conflicts and bugs. Subclass only if the object really is a type of the superclass, and/or if the user will be calling a lot of the superclass's methods directly.
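For comparison, a tuple subclass along the lines Noam describes is only a few lines. This is a toy sketch, not a proposed implementation; the class name and behavior are illustrative, and a real class would add root handling and many more methods:

```python
import os

class TuplePath(tuple):
    """Toy immutable path: a tuple of components, sliced by component."""

    def __new__(cls, *components):
        return super().__new__(cls, components)

    def __add__(self, other):
        # joining returns a new path rather than a plain tuple
        return TuplePath(*(tuple(self) + tuple(other)))

    def __getitem__(self, index):
        result = tuple.__getitem__(self, index)
        # slices stay TuplePath objects; single items are plain strings
        if isinstance(index, slice):
            return TuplePath(*result)
        return result

    def __str__(self):
        return os.sep.join(self)

p = TuplePath("usr", "local", "bin")
print(p[-1])                         # bin
print(str(p[:2]))                    # usr/local (on POSIX)
print(str(p + TuplePath("python")))  # usr/local/bin/python (on POSIX)
print(hash(p) == hash(TuplePath("usr", "local", "bin")))  # True - usable as a dict key
```

Immutability comes for free from tuple, which is what makes the dictionary-key use case in the Immutability section below work without any extra code.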
Immutability Noam, Jason, Mike: I think that immutable paths are somewhat easier to implement, and allow usage as dictionary keys. I think that if we have managed to live so far without mutable strings, we will manage to live without mutable paths. I don't see this as a major issue, but immutable paths can be somewhat more efficient: you can hash the string representation, and you can make sure you have a path by writing things like dst=path(dst) , and if dst is already a path, no new object will be created. In which module(s)? Mike: A new 'basepath' module would contain the common base class. The platform modules (posixpath, ntpath, etc) already exist and are the logical place for these Path classes. Noam: I think that all path OS subclasses fit nicely into one module. Most of the logic is in the base class, anyway, and it makes it easier to see what are the differences between each platform. Mike: Putting code for disparate architectures in one module is asking for trouble. What if one architecture needs to import modules which aren't needed or can't be built on other architectures, especially C modules? Plus the module would become very large due to the need to accommodate Windows's intricaies (e.g., r"C:foo", r"\\uncpath"). Noam: Ok. It's basically an implementation detail, so we can decide on it after we implement the full class. Filenames Mike: If we use + for path joining, we need a way to create a derived path from a modified filename. Example: "I want to add a prefix or suffix to the filename portion of "/a/b/filename". Splitting/rejoining the path is messy, especially if you have to modify the base name but preserve the extensions. No specific API proposal yet. Extensions agreed: extensions are critical, so the class must make it easy to query/modify them without splitting/rejoining the Path. Like directories, extensions have a platform-specific separator. 
Unlike directories, extensions are conventions rather than OS-enforced rules: not every apparent extension should be treated as such. The user must tell us when to recognize extensions, defined as N filename suffixes beginning with the platform's extension separator (.extsep). For instance, most users consider "filename.2005-05-13.tar.gz" and "filename.2006.05.13.tar.gz" as having two extensions each (".tar.gz"), even though the number of apparent extensions is larger.

We can put attributes/methods on the Path object, or on a special str/unicode subclass used for the filename (or for each directory component).

Noam: How should we distinguish between a file with an empty extension ("a.") and a file without an extension ("a")?

Mike: The legacy os.path.splitext() returns ".ext", so it presumably returns "." for an empty extension. We could stick with this. That prevents the ability to treat extensions platform-independently, though. I doubt "a." is important enough to support, though; have you ever seen it?

Mike: Subclassing str is impractical due to the string/unicode duality. Why not path properties: p.ext, p.name (name without extension)? The full filename is p[-1], so it doesn't need a property.

Noam: Why does the string/unicode duality make subclassing str impractical? On Windows we can have unicode subclasses, and on POSIX we can have str subclasses. Having extension-related methods added to elements is nice because:
- Extension is an attribute of a path element, not of the sequence of path elements. (Dirs can have extensions just as well.)
- It reduces the number of methods of the path type and makes it easier to distinguish between different kinds of methods.

What should be the interface? Mike said that adding and removing extensions is important. How should it be done?

Mike: There must be a convenient way to add/delete extensions. How about p.add_ext(*exts_without_separator) and p.del_ext(n=1), each returning new Paths?
The only other operations then are querying N extensions or splitting the filename into name + N extensions. (Note: if the extsep is attached, an empty string in the result would mean "there are not that many extensions".)

Noam: OK. Here's a suggestion. It's simple, and I think that it gives you all that you need. Path elements will be subclasses of str/unicode, with two additional properties:
- element.ext will return the string from the last dot, including the dot. If there's no dot, it will return the empty string.
- element.withoutext will return the string up to the last dot, not including the dot. If there's no dot, it will return the complete string.
- element.ext will return str/unicode. element.withoutext will return an instance of the same subclass as element, so that you'll be able to write element.withoutext.withoutext to strip off two extensions.

So, to add an extension to a path, you'll write p[:-1] + (p[-1] + '.py'). To remove an extension, you'll write p[:-1] + p[-1].withoutext. To replace an extension, you'll write p[:-1] + (p[-1].withoutext + '.py'). I've checked, and even in macpath.py, extsep is '.'. So I don't see a real problem of platform incompatibility with that scheme. Another possible name could be "stripext" instead of "withoutext". It's shorter, but perhaps less descriptive.

Mike: OK on .ext and .withoutext. Concerned about using subclasses, but maybe it'll be OK.

Noam: Excellent!

Stat

agreed: p.stat() and p.lstat() should return an enhanced version of Python's os.stat() object, with attributes like p.stat().mtime for all information traditionally provided by stat. Include Noam's additional properties from. Do not have Path methods duplicating stat attributes.

Mike: Unlike os.stat(), do not support ugly attributes like .st_mtime or tuple indexing.

Noam: I think that it would be best if os.stat(pathstr) would return the same type of object as pathobj.stat() - in other words, add the easy attributes to os.stat() too.
In that case, I guess that the ugly "st_" names will have to stay until Python 3.

Mike: No, this is supposed to be an improvement and an API we'll want permanently. Nobody using legacy stat objects will get caught up because they'll never call Path.stat().

Noam: I don't really mind - we can take this to python-dev / Guido's decision.

Mike: I formerly proposed moving all stat attributes into Path methods, because the distinction between "stat attribute" and "other file info" was arbitrarily defined by Unix tradition, but withdrew this because it's not critical. Having .stat() does let the user cache the result, and having .lstat() avoids the need for a parallel set of methods that don't follow symlinks.

Finding files

Jason Orendorff's path module has three methods returning a non-recursive list of paths: listdir, files, dirs; and three methods returning a recursive iteration of paths: walk, walkfiles, walkdirs. Noam proposed combining all these plus filename globbing into one method: glob, with a special pattern "**" meaning "any subdirectory or recursive path of subdirectories".

Nick: Swiss army methods are even more evil than wide APIs. And I consider the term 'glob' itself to be a Unixism - I've found the technique to be far more commonly known as wildcard matching in the Windows world.

Noam: Can you give examples of why this proposed method is evil? I think that the basic pattern idea is well defined. It gets three arguments. topdown is, I think, well defined and may be useful. onlyfiles and onlydirs are well defined and are only a convenience. I don't really mind omitting them. About the name "glob": I have nothing against glob, but if you find another name for the method, I might have nothing against it either.

Jason: Hard-won knowledge here: d.files('*.html') is just right. This is the common use case. glob() overgeneralizes it, forcing me to write d.glob('*.html', filesonly=True). Yuck.
Guido strongly prefers multiple APIs for distinct use cases, as opposed to a single API that serves all the use cases by providing boolean flags that toggle various aspects of its behavior.

Noam: I see what you mean. How about "glob" doing what it does in the current proposal, without the "onlyfiles" and "onlydirs" arguments, and "files" and "dirs" getting exactly the same arguments but yielding only files and directories, respectively? About the "l" versions: having glob, files, dirs, lglob, lfiles, ldirs seems ugly. Perhaps this should go in as a flag, say, "follow_symlinks=True"? (I would put it after pattern, because remembering the string "topdown" is easier. I don't think of any better name than "follow_symlinks". I also tend to think that it is more useful.)

Mike: Non-recursive lists: listdir, files, dirs, symlinks. Recursive iterators: walk, walkfiles, walkdirs, walklinks. All except *links should take a 'symlinks' argument, default True, meaning follow symlinks. If false, never return a symlink. The user can call *links to get the symlinks separately if desired. listdir should have a 'names_only' argument, default False, meaning return the same as os.listdir(). Doesn't a 'pattern' argument eliminate the need for .glob()?

Noam: Can you explain why you think that "listdir, files, dirs, walk, walkfiles, walkdirs" is better than "glob, files, dirs"? I prefer three over six. About the "links" methods - do you have examples of when they are useful? Thinking about it, it seems that dirs+files should cover all the files in the directory, when symlinks are considered directories if they point to directories in the follow_symlinks mode. About names_only: I don't like an attribute which changes the type of the result. You can always do x[-1] to get the base name.

Mike: Combining the recursive and non-recursive methods is acceptable. They would all have to be generators in that case.
.glob() is not the best name: it sounds like something else to Unix people and is incomprehensible to non-Unix people. The *links() methods are useful when you want to treat symlinks specially; they eliminate an if-stanza in the main for loop. No reason to shove disparate things into the same loop. If symlinks=True, we do follow the links and inspect the actual directory/file, so we're in agreement. We can drop names_only if we add listdir(). Sometimes you just want the names, and it's a pain (and inefficient) to unpack temporary Path objects made from those same names.

Noam: About glob: can you suggest a better name? I'm happy with glob but have nothing against a better name. About listdir: I prefer to omit that method. From my experience, you always want to add the base name to the dir name (what would you do with it otherwise?). I can live with the slight inefficiency and small pain of making a path and taking only the last element on the rare occasions when it's needed. I prefer the "one way to do it" approach here. About symlinks: I see what you mean. I prefer one iteration with an if stanza, since I then iterate over the contents of a directory only once, but it seems like a reasonable friend of "dirs" and "files". The name "link" is OK, but we should make sure that all symlinks are referred to as "links" in the method names - I don't want to remember when it's a link and when it's a symlink. If so, the "link" method should be renamed "hardlink". But "lfiles", "ldirs" are so ugly...

Mike: p.listdir() => os.listdir(str(p)) is small, simple, and unobtrusive; it won't bother anybody except purists. Say you need the filenames for a GUI list box or a menu. It's hard to find a name that means ".dirs plus .files"; maybe .walk(recursive=False) is OK. That will surprise existing users of walk functions, but we haven't found a better name. I agree we should be consistent about .(sym)links methods; maybe we should rename .link to .hardlink because it's so rarely used.
'follow_symlinks' as an argument is also acceptable; it's wordy but perhaps more self-explanatory than 'symlinks'. Down with the 'l' versions!

Noam: To get all the files, you simply write p.glob(), which is the same as p.glob('*'). Here's a suggestion. The two basic functions for finding files will be glob and lglob. They will have the same interface:

{{{
def glob(self, pattern='*', topdown=True)
def lglob(self, pattern='*', topdown=True)
}}}

In addition, you'll have two simple convenience methods:

{{{
def files(self, pattern='*', topdown=True):
    for subpath in self.glob(pattern, topdown):
        if subpath.isfile():
            yield subpath

def dirs(self, pattern='*', topdown=True):
    for subpath in self.glob(pattern, topdown):
        if subpath.isdir():
            yield subpath
}}}

We will omit "lfiles", "ldirs" and "links" because:
- They are less useful.
- They have ugly names.
- They can be replaced by two lines of code.

Mike: There will be significant opposition to "one method to rule them all", so let's have a main proposal and an alternate proposal. I think mine more closely matches the expectations of the majority of Pythoneers who favor a Path object, especially those who have used Orendorff's/interim/PEP 355. We can code your .glob() in the reference implementation (perhaps as .glob2()), and then mention that some methods will be dropped in the final.

Noam: We can have two alternatives in the PEP.

Mike: But the #1 thing is, .glob() should not call glob.glob() unless it has to. So pattern='*' should bypass glob and call os.listdir() directly.

Noam: It does.

isfile

Noam: It currently returns True if a file is a regular file. Perhaps it would be better if it returned whether a file is not a link or a dir? The rationale is that block device files and FIFO files can mostly be treated as regular files - you can write to and read from them, if permissions allow you.
It simplifies things: paths can only be:
1. Nonexisting
2. Symlinks to nonexisting paths
3. Symlinks to dirs (perhaps with a few "jumps")
4. Symlinks to files (perhaps with a few "jumps")
5. Dirs
6. Files

Each of the methods exists, isdir, isfile, lexists, lisdir, lisfile, islink returns True on a different subset of these:
- exists - 3, 4, 5, 6
- isdir - 3, 5
- isfile - 4, 6
- lexists - 2, 3, 4, 5, 6
- lisdir - 5
- lisfile - 6
- islink - 2, 3, 4

Mike: In other words, Noam is proposing that .isfile return True for all non-directories, whereas os.path.isfile(F) returns True only for regular files (not special files). This parallels the other methods (.files()). However, it does make it harder for the user to treat special files separately. Normally if not os.path.isfile(foo) and not os.path.isdir(foo), the file is a device. Since we're removing functionality, let's add .isspecial() to bring it back -- even better than os.path does! This returns True if any of the stat.S_* functions would return True.

Mike: There are no methods to check for a special file type (ischr, isblk, isfifo, issock, isdoor). Instead you have to do stat.S_ISBLK(Path("FOO").stat().mode). I'm tempted to say we should add the methods, although they're rarely used.

Noam: Great! We can add them to the stat object - no need to make them methods of Path. "isfile" is a method of path because it's quite common to write "if p.exists() and p.stat().isfile". But we don't have to add those shortcuts for rarely-used tests.

Mike: Note that .isfile() and .isspecial() follow symbolic links, so they refer to the pointed-to file. They both return False if the link is dead, because these methods should never return True when .exists() would return False. But .lisfile() and .lisspecial() return False and True respectively for a symbolic link, regardless of what the link points to.

Noam: About .isspecial(), I think that it ought to be left as an attribute of stat, as I wrote.
About lisfile(): It should return True if p.lexists() and p.lstat().isfile - it should return True only for files, not for symlinks and dirs.

Access Permissions

Noam: I think that the "access" method should be replaced by three straightforward methods, which don't require the use of a constant from somewhere: p.isreadable(), p.iswriteable(), p.isexecutable().

Mike: .canread(), .canwrite(), .canexecute(). The access() function is cumbersome and was probably added to conform to the Unix API.

Noam: These are indeed better names.

Expand

Noam: I removed expand. There's no need to use normpath, so it's equivalent to .expanduser().expandvars(), and I think that the explicit form is better.

Mike: Expand is useful though, so you don't forget one or the other.

Noam: I wouldn't want to call expandvars() by default - I think that expanding environment variables is something that should be done with care, as it may expose info about the environment which should be kept private. Anyway, I think that p.expanduser().expandvars() shows exactly what is being done and isn't a lot longer, so I prefer it.

copytree

Mike: Er, not sure I've used it, but it seems useful. Why force people to reinvent the wheel with their own recursive loops that they may get wrong?

Nick: Because the handling of exceptional cases is almost always going to be application specific. Note that even os.walk provides a callback hook for if the call to os.listdir() fails when attempting to descend into a directory. For copytree, the issues to be considered are significantly worse:
- What to do if listdir fails in the source tree?
- What to do if reading a file fails in the source tree?
- What to do if a directory doesn't exist in the target tree?
- What to do if a directory already exists in the target tree?
- What to do if a file already exists in the target tree?
- What to do if writing a file fails in the target tree?
- Should the file contents/mode/time be copied to the target tree?
- What to do with symlinks in the source tree?

Now, what might potentially be genuinely useful is paired walk methods.

Jason: I think Python needs high-level APIs to do stuff like copytree(). The current state of affairs is just awful. On Unix I can do os.system('cp ' + ...), but it's not portable. I haven't tried pairedwalkfiles(), so no opinion.

Mike: .pairedwalk() and friends may be useful. The user wants to know which files/directories to create, update, and delete. So it's essentially a diff report.

Noam: I'm not sure about pairedwalk() - it may be a bit complicated, I'm afraid. However, perhaps copytree() isn't such a big deal if it works only when the source is a directory and the destination doesn't exist. Then exceptions aren't expected, so if they happen they can simply be propagated.

Mike: I'm now thinking about a class with methods to handle each of the exceptional cases, and boolean attributes for the alternative behaviors. Then we'd define one or more default behaviors as copytree = CopyTree.__call__.

Copy

Nick: OK, this is one case where a swiss army method may make sense. Specifically, something like:

{{{
def copy_to(self, dest, copymode=True, copytime=False)
}}}

*[Mike removed the 'copyfile' argument, aka 'content'. If you want to copy just the mode or time or create an empty file, without copying the file content, use other methods.]*

Whether or not to copy the permission settings and the last access and modification times are then all independently selectable. The different method name also makes the direction of the copying clear (with a bare 'copy', it's slightly ambiguous, as the 'cp src dest' parallel isn't as strong as it is with a function).

Noam: I think the different name and arguments are a good idea.

Jason: Definitely agree with Nick.

Noam: What about copyto?
It's easier to write, I think that it's not hard to understand, and perhaps it focuses less attention on the "to", making it look like a special kind of copy.

Mike: src.copy(dest, mode=False, time=False). .copy_to is OK, .copyto is bad. Almost everybody expects .copy to mean .copy_to and not .copy_from.

Noam: About copymode vs. mode: I prefer copymode. "mode" seems like a mode specification (like in mkdir), not like a boolean.

Unicode

Noam: Someone with experience with unicode filenames, please help!

Jason: I have some experience, not a ton. In the Win32 API, paths are Unicode strings. To produce a path-string you'll have to decode any non-Unicode strings in your tuple; Python's default encoding is one option, but the operating system's default encoding is another option; I think the latter is what the os functions do on Windows. In the POSIX API, paths are char strings, which means 8-bit strings on every platform I'm familiar with. The character set varies from system to system. Some use UTF-8. It's kind of squirrely if you allow both 8-bit strings and Unicode strings in your tuple. I suggest using only Unicode within the tuple and converting to 8-bit only as needed to talk to POSIX.

Noam: Thanks for the explanation. I agree about not mixing different kinds of strings. Is there a good way to convert unicode strings into file names on POSIX? How do you know the right encoding?

Mike: At first I thought about forcing everything to Unicode on input and adding 'encoding' and 'onerror' arguments to the constructor. That doesn't solve the problem of choosing the charset to encode on output. But now I'm wondering if we should just preserve whatever type(s) the user inputs.

Noam: I don't think that preserving the type of the user input will work: You'll still have to decode it to str on POSIX. It seems to me that the only solution is to use the native "alphabet" of the system: Unicode chars on Windows, and byte chars on POSIX.
To put it more clearly: All elements on Windows will be unicode, all elements on POSIX will be str.

Noam: I checked, and it seems that on Windows, file-related functions work well with byte strings. I guess it's because there's a clearly defined system encoding. So why not use only byte strings, and convert from unicode if needed by using the system encoding?

Noam: Here's another suggestion. I think it's good. It's based on the behaviour of functions like listdir, which return a list of unicode strings when they get a unicode argument, and a list of byte strings if they get a str argument. Path objects will be homogeneous containers of either str or unicode items. This will be determined upon construction: if they are constructed from a unicode string, or from a sequence containing a unicode string, all elements will be unicode strings. Furthermore, all methods returning strings will return strings of that type, and all methods returning paths will return paths containing strings of that kind.

Obsoleting other modules

Nick: I don't believe it's a given that a nice path object will obsolete the low level operations. When translating a shell script to Python (or vice versa), having access to the comparable low level operations would be of benefit. At most, I would expect provision of an OO path API to result in a comment in the documentation of various modules (os.path, shutil, fnmatch, glob) saying that "pathlib.Path" (or whatever it ends up being called) is generally a more convenient API.

Noam: I don't mind obsoleting os.path, shutil, fnmatch, glob, as I see them as high-level operations. I don't mind not obsoleting them either - it may keep the code more organized if different operations are in different modules. I agree that most of the functions in the os module shouldn't be obsoleted - these are really low-level operating system operations, and you shouldn't need to use a complex path object in order to call them.
Jason: The new API should be the one high-level API for this type of stuff. All the other high-level APIs should be obsoleted.

Mike: We cannot deprecate the existing functions in Python 2.x; too many existing programs would break. But we can discourage them in the documentation.

Additional methods/attributes

.purge()

Mike: Delete "it" recursively if it exists, whatever it is. This is convenient when you don't care whether it's a file or directory, you just want to overwrite it, and you don't want to take six lines of code to do it.

Noam: Why six lines of code? I count four:

if p.isfile():
    p.remove()
elif p.isdir():
    p.rmtree()

We can have rmtree work also for files, and even for non-existing paths, but I'm not sure it's a good idea.

Mike: .rmtree would go away if .purge is added. So we'd have to inline its implementation. The main reason for .purge is .rmtree raises exceptions if (A) the Path is a file, or (B) the Path doesn't exist, and you don't want to clutter your code for all those cases when you just want to write or remove "it".

Noam: I feel fine with the four lines above, but I can live with another method. We can bring this to python-dev decision.

Mike: Adding the two capabilities to .rmtree would be functionally the same. I think .purge is a better name though.

Noam: I want to keep the number of methods and their complexity small enough. I'm against adding capabilities to .rmtree(). I prefer not having a method which can be clearly defined in four lines and I personally have never used, but it can be taken to python-dev.

Mike: The purpose of this module (or any Python module) is to make the calling code elegant, so it describes the level of detail that matches the conceptual operation, and is not cluttered with unnecessary details (if-stanzas) simply because the module owner refused to include commonly-used convenience functions. That forces every developer to write the same stanzas again and again, or to each write their own function encapsulating it.
Noam: Ok, let's leave this to python-dev / Guido's decision.

mkdir/rmdir

Mike: These should succeed silently if the operation is already done. Otherwise the user has to write an unnecessary "if p.exists():" around it. If the user really cares whether the item exists, he can explicitly write the if-stanza. If not, he shouldn't be forced to clutter his code, especially since that obscures whether it does matter or not that the item existed.

Noam: Simply use makedirs() and removedirs().

Mike: Those methods should go away. Their names are kludgey, and their behavior can be done with a 'parents=False' argument.

Noam: I don't agree. In this case, I like to let the basic OS calls do what they do, and keep the API of the complex methods simple (avoid the parents=False option.)

Mike: This is supposed to be an improved API. Programs are more readable if the methods reflect the operations the user really wants to do. If the user calls .mkdir it means they want to create the directory, no ifs ands or buts. The shell commands have a --parents option so why can't we? If we must have .makedirs it should be called .mkdirs.

Noam: I'm not sure about the name. makedirs/removedirs is more verbose but not too verbose, and keeps "name compatibility" with the os module. On the other hand, mkdirs/rmdirs looks more like mkdir/rmdir. We can leave this to python-dev/Guido decision. Jason wrote that Guido prefers separate functions for different cases, and in this case, I prefer it too. I think that a simple API is generally better than a complex API. I think that in this case, separate functions are simpler than one function with another option.
http://wiki.python.org/moin/AlternativePathDiscussion?action=fullsearch&context=180&value=linkto%3A%22AlternativePathDiscussion%22
References and the Imports Statement (Visual Basic)

You can make external objects available to your project by choosing the Add Reference command on the Project menu. References in Visual Basic can point to assemblies, which are like type libraries but contain more information. Assemblies include one or more namespaces. When you add a reference to an assembly, you can also add an Imports statement to a module that controls the visibility of that assembly's namespaces within the module. The Imports statement provides a scoping context that lets you use only the portion of the namespace necessary to supply a unique reference.

The Imports statement has the following syntax:

Imports [aliasname =] namespace

Imports statements must appear after any Option statements, if present, but before any other code in the module. The Imports statement makes it easier to access methods of classes by eliminating the need to explicitly type the fully qualified names of references. Aliases let you assign a friendlier name to just one part of a namespace. For example, the carriage return/line feed sequence that causes a single piece of text to be displayed on multiple lines is part of the ControlChars module in the Microsoft.VisualBasic namespace. To use this constant in a program without an alias, you would need to type its fully qualified name, Microsoft.VisualBasic.ControlChars.CrLf. A module can instead import the Microsoft.VisualBasic.ControlChars module and assign it an alias, after which references to this namespace can be considerably shorter. If an Imports statement does not include an alias name, elements defined within the imported namespace can be used in the module without qualification. If the alias name is specified, it must be used as a qualifier for names contained within that namespace.
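The contrast described above can be sketched in a few lines of Visual Basic; the alias name CtrlChrs is illustrative:

```vb
' Without an alias, the fully qualified name is required:
MsgBox("Some text" & Microsoft.VisualBasic.ControlChars.CrLf & "Some more text")

' With an alias declared at the top of the module:
Imports CtrlChrs = Microsoft.VisualBasic.ControlChars

' Later references to the namespace are considerably shorter:
MsgBox("Some text" & CtrlChrs.CrLf & "Some more text")
```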
https://msdn.microsoft.com/en-us/library/h9st4tss(v=vs.120).aspx
I know!! That's why I've been asking for help hahah Hence "my goal" All the equations and stuff work perfectly it's just the If-Then statements!

There is nothing wrong with that code. I just ran it on my computer and got the following result:

Quote:
Enter the loan amount: -100000
Enter the rate: 500
Enter the number of years: 5
ALL NUMERICAL VALUES MUST BE POSITIVE!

And then I ran it with positive values:

Quote:
Enter the loan amount: 10000
Enter the rate: 500
Enter the number of years: 5
The monthly payment is: $4,166.67
4166.66667016393

You just said that code could not create what I wanted and could you maybe screen shot it? Are you sure you didn't change it because this is exactly what I got after saving and reopening netbeans. Attachment 2373

No, I said the new code could not possibly create what you posted in the screen shot of your output. Simply looking at the code, there is no possible way "ALL NUMERICAL VALUES MUST BE POSITIVE!" can be printed before you are done accepting all the user input. I'm not saying the new code is wrong, I'm saying your output (given the new code you posted last) is physically impossible to occur.
This is the code you are running, correct (formatting might be a bit different due to my auto-formatter, but formatting would not affect execution in any way):

Code java:
import java.text.NumberFormat;
import java.util.Scanner;

/**
 * @author Akira
 */
public class MortgageCalculation2a {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        double loanAmount;
        double interestRate;
        double numberYears;
        double months;
        double monthlyPayment;
        System.out.print("Enter the loan amount: ");
        loanAmount = Double.parseDouble(in.nextLine());
        System.out.print("Enter the rate: ");
        interestRate = Double.parseDouble(in.nextLine());
        System.out.print("Enter the number of years: ");
        numberYears = Double.parseDouble(in.nextLine());
        months = numberYears * 12;
        if ((loanAmount < 0) || (interestRate < 0) || (numberYears < 0)) {
            System.out.println("ALL NUMERICAL VALUES MUST BE POSITIVE!");
        } else {
            interestRate = interestRate / 100 / 12;
            monthlyPayment = (interestRate * loanAmount * (Math.pow(1 + interestRate, months)))
                    / (Math.pow(1 + interestRate, months) - 1);
            NumberFormat defaultFormat = NumberFormat.getCurrencyInstance();
            System.out.println("The monthly payment is: " + defaultFormat.format(monthlyPayment));
            System.out.println(monthlyPayment = (interestRate * loanAmount * (Math.pow(1 + interestRate, months)))
                    / (Math.pow(1 + interestRate, months) - 1));
        }
    }
}

yes. If that code is impossible what's the code you used to get that answer?!

Ok, then netbeans must be the problem. Do you know how to compile and run from the command prompt?

Nope can you show me how?

EDIT: but that doesn't help me with the code will it? also, I have to turn it in working on netbeans sooo....

Ok, assuming that your MortgageCalculation2a class is in the mortgagecalculation2a package, do the following: Locate the MortgageCalculation2a.java class in your file system (assuming windows explorer).
Open this file with notepad and make sure it is the correct code. Once you've made sure of that, SHIFT+RIGHT-CLICK anywhere in the folder and go to: "Open command window here". Then, do the following in the command window:

1. Type: javac MortgageCalculation2a.java
2. Wait for that to finish (it may take a minute or two). The .class file should now appear in the same directory as the java file.
3. Type: java -cp .. mortgagecalculation2a.MortgageCalculation2a

And there you go, it should execute your program in the command window (it works the same as the console in netbeans).

it says javac is unrecognizable...should I just give up?

Um, I suppose you could close Netbeans, try to find the .class file Netbeans created and delete it in your file system, start up Netbeans again, recompile your file, and attempt to run it again.

So this code works perfectly for you and it just seems to not want to work on netbeans?

The code works perfectly for me, both in my IDE and in the command line. My guess is Netbeans has either cached your old file and is running that, or your old file still exists somewhere and it is running it.

I guess netbeans is broken then...I created a new file, renamed it, did everything I think I could and it's still not doing what you said you got...

--- Update ---

SMH. I guess the file was completely messed up and I just copied and pasted the code in a new project and it worked!! thank you so much seriously
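As a footnote to the thread: the payment formula being debugged can be isolated into a small method and checked against the output shown above (the class and method names here are illustrative, not from the assignment):

```java
public class PaymentFormula {
    // Standard amortized-loan monthly payment:
    //   payment = r * P * (1 + r)^n / ((1 + r)^n - 1)
    // where P is the principal, r the monthly rate, n the number of payments.
    public static double monthlyPayment(double loanAmount, double annualRatePercent, double years) {
        double r = annualRatePercent / 100 / 12; // monthly interest rate
        double n = years * 12;                   // number of monthly payments
        return (r * loanAmount * Math.pow(1 + r, n)) / (Math.pow(1 + r, n) - 1);
    }

    public static void main(String[] args) {
        // The thread's positive-input example: 10000 at 500% over 5 years.
        System.out.println(PaymentFormula.monthlyPayment(10000, 500, 5));
    }
}
```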
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/32769-if-then-statement-question-2-printingthethread.html
Bluetooth Controlled Arduino Scrapboat

Introduction: Bluetooth Controlled Arduino Scrapboat

I've been planning to make an RC boat to roam the waterways that are present in my apartment complex for a long time. But one hiccup or another kept holding me back. Most of the time, it was lack of required materials or time. Even this time, my soldering iron committed suicide under mysterious circumstances just before I started making this. But I put my foot down and decided to make it without soldering, using just jumpers, insulation tape and lots of double sided tape. So I present to you the SS Scrap v1.1.

Step 1: Parts Required

For this project I used whatever I had lying around the house, so you can improvise as needed.

Step 2: Connecting the Servo

The servo runs directly off the 5V provided by the Arduino. But in some cases, as happened to me, the Arduino cannot provide enough current to drive the servo, especially when it starts moving. This leads to random restarts and your code will never behave as you want it to. So connect the capacitor along the power leads of the servo, making sure that the positive (i.e. the longer lead) is connected to the 5V pin and the shorter pin to ground. This will store charge and provide the extra current required. The third pin of the servo is connected to pin number 9 on the Arduino. On my servo, the colours of the leads were orange, brown and red.

Orange - pin number 9
Red - +5V
Brown - ground

I stuffed the leads of the capacitor into the servo port and taped it to make sure it would stay there (no soldering iron makes one resort to such barbaric methods :P)

Step 3: Connecting the Bluesmirf

As in my previous instructable, this boat is also controlled using a serial over bluetooth link. The Bluesmirf already had headers soldered to it, so it was easier to connect than the servo, requiring only a 4 pin connector.
While connecting the bluesmirf, the RX pin should go to the TX pin (pin 1) of the arduino and the TX pin should go to the RX pin (pin 0) of the arduino. The GND goes to one of the ground pins on the Arduino and Vcc goes to 3.3V, as the 5V pin is taken up by the servo.

Vcc - 3.3V
Gnd - Gnd
Rx - Tx or pin number 1
Tx - Rx or pin number 0

Step 4: Motor Connections and Mounting

The motor was from an old toy plane and gave a lot of power for its size. I didn't have a motor driver, so this motor would be kept running while only the direction would be changed using the servo. The motor can handle only a maximum of 4-5V, hence a separate battery supply was used. I used 2 AAs in series taped together to give around 3V. This motor was peculiar as it always required an external force to start, kind of like World War I era biplanes. Hence, once it started it had to be kept running or it wouldn't start again. The motor was attached to the servo using double sided tape, over which I layered insulation tape for further safety. The whole servo+motor assembly was mounted on one end of the plastic box on top of a rectangular piece of thermocol to provide room for the propeller during its full range of motion.
Step 5: Arduino Code

#include <Servo.h>

Servo steer;
char input = 'x';
int pos[] = {1000, 1500, 2000}; // contains armature positions

void setup() {
  Serial.begin(2400); // starts serial link; baud rate should be the same as that configured in the bluesmirf
  steer.attach(9);    // for connecting servo to pin number 9 on the arduino
  steer.writeMicroseconds(pos[1]); // initial position of servo makes the motor point in the forward direction
}

void loop() {
  if (Serial.available()) {
    input = Serial.read(); // reads input character from serial over bluetooth
    switch (input) { // for steering to specific direction based on value in array pos
      case '1': {
        steer.writeMicroseconds(pos[0]);
        break;
      }
      case '2': {
        steer.writeMicroseconds(pos[1]);
        break;
      }
      case '3': {
        steer.writeMicroseconds(pos[2]);
        break;
      }
    }
  }
}

Code is a simpler version of the code found here
It is quite responsive, and the better the batteries are for the motor, the better it will handle. This was a fun project which behaved as it's supposed to for once. Please leave your comments below and vote for me in the contests I'm entering this in. Thank you for reading.

Cool Instructable. Do you have a video by chance?
http://www.instructables.com/id/Bluetooth-Controlled-Arduino-Scrapboat/
Before using any of the generic parallel algorithms, an application must first initialize the task scheduler by creating a tbb::task_scheduler_init object:

#include <tbb/task_scheduler_init.h>

int main() {
    tbb::task_scheduler_init init;
    // use of an algorithm
}

The generic parallel algorithms provided by the library are safe to mix with user threads, so they can be deployed in an already threaded application. However, each thread that invokes an Intel TBB algorithm must first create a task_scheduler_init object so that it is registered with the single shared task scheduler. The task_scheduler_init objects behave like reference-counted handles to the scheduler. The first handle constructed creates the scheduler and the last handle destroyed destroys the scheduler.

Parallelizing Simple Loops: parallel_for and parallel_reduce

In the data decomposition example earlier, I showed you a simple loop that calls a single function. To flesh out this example slightly, consider a serial loop that applies Foo to each element of an array:

void SerialApplyFoo( float a[], size_t n ) {
    for( size_t i=0; i<n; ++i )
        Foo(a[i]);
}

The parallel version packages the loop body in a class and hands it to the parallel_for template:

00: #include "tbb/blocked_range.h"
01: class ApplyFoo {
02:     float *const my_a;
03: public:
04:     void operator()( const blocked_range<size_t>& r ) const {
05:         float *a = my_a;
06:         for( size_t i=r.begin(); i!=r.end(); ++i ) {
07:             Foo(a[i]);
08:         }
09:     }
10:     ApplyFoo( float a[] ) :
11:         my_a(a) {}
12: };
13: void ParallelApplyFoo( float a[], size_t n ) {
14:     parallel_for(blocked_range<size_t>(0,n,IdealGrainSize),
15:         ApplyFoo(a) );
16: }

The call to the template function tbb::parallel_for is found at line 14. It receives two arguments, a Range and Body. In this example, the Intel TBB-provided class blocked_range<T> is used to define the range. It describes a one-dimensional iteration space over type T. Class parallel_for is templated to work with any iteration space that matches the range concept expected by the library. For example, the library also provides blocked_range2d for two-dimensional spaces, and you can define your own spaces as well.
At line 14, a blocked_range<size_t> is defined that represents the STL-style half-open interval [0, n). The IdealGrainSize argument specifies the number of iterations to be lumped together when handing out chunks of iterations as tasks. So for example, a value of 100 would mean that each task generated by the Intel TBB library would consist of about 100 iterations.

While template function parallel_for is sufficient for loops that have no dependencies between iterations, it is very common for loops to contain reductions. Reductions are expressions of the form x = x + y, where + is associative. Reductions can be parallelized by exploiting their mathematical properties; it is legal to generate local reduction results on each thread for its part of the iteration space, and then combine the local results into a final global result. The loop below performs a simple reduction, a summation:

float SerialSumFoo( const float a[], size_t n ) {
    float sum = 0;
    for( size_t i=0; i<n; ++i )
        sum += Foo(a[i]);
    return sum;
}

Given a suitable body class for parallel_reduce, Intel TBB will create and execute tasks to generate the local sums and combine their partial results into a final global result.

parallel_while

So far, all of the loops I have examined have had iteration spaces that were known before the loops commence. But not all loops are like that. The end of the iteration space may not be known in advance, or the loop body may add more iterations before the loop exits. You can deal with both situations using the template class tbb::parallel_while.
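Returning for a moment to the summation: the body class it relies on follows the standard TBB pattern of a splitting constructor plus a join method. A sketch, mirroring the ApplyFoo example and assuming the same Foo and IdealGrainSize, might look like this:

```cpp
#include "tbb/parallel_reduce.h"
#include "tbb/blocked_range.h"

class SumFoo {
    const float* my_a;
public:
    float sum;
    void operator()( const tbb::blocked_range<size_t>& r ) {
        const float* a = my_a;
        for( size_t i=r.begin(); i!=r.end(); ++i )
            sum += Foo(a[i]);
    }
    // Splitting constructor: gives a stolen subtask its own zeroed accumulator.
    SumFoo( SumFoo& x, tbb::split ) : my_a(x.my_a), sum(0) {}
    // Folds a subtask's partial sum back into this body.
    void join( const SumFoo& y ) { sum += y.sum; }
    SumFoo( const float a[] ) : my_a(a), sum(0) {}
};

float ParallelSumFoo( const float a[], size_t n ) {
    SumFoo sf(a);
    tbb::parallel_reduce( tbb::blocked_range<size_t>(0,n,IdealGrainSize), sf );
    return sf.sum;
}
```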
An example is a linked list traversal:

void SerialApplyFooToList( Item* root ) {
    for( Item* ptr=root; ptr!=NULL; ptr=ptr->next )
        Foo(ptr->data);
}

So my linked list example becomes:

void ParallelApplyFooToList( Item* root ) {
    parallel_while<ApplyFoo> w;
    ItemStream stream( root );
    ApplyFoo body;
    w.run( stream, body );
}

class ItemStream {
    Item* my_ptr;
public:
    bool pop_if_present( Item*& item ) {
        if( my_ptr ) {
            item = my_ptr;
            my_ptr = my_ptr->next;
            return true;
        } else {
            return false;
        }
    }
    ItemStream( Item* root ) : my_ptr(root) {}
};

class ApplyFoo {
public:
    void operator()( Item* item ) const {
        Foo(item->data);
    }
    typedef Item* argument_type;
};

pipeline

The last algorithm that this article will introduce is tbb::pipeline. Pipelining is a common parallel pattern that mimics a traditional manufacturing assembly line. Data flows through a series of pipeline stages, and each stage processes the data in some way. Given an incoming stream of data, some of these stages can operate in parallel and others cannot. For example, in video processing, some operations on frames do not depend on other frames, and so can be done on multiple frames at the same time. On the other hand, some operations on frames require processing prior frames first.
That is, if I can serially read n chunks very quickly, I am free to capitalize each of the n chunks in parallel, as long as they are written in the proper order to the output file. To decide whether to capitalize a letter, I inspect whether the previous character is a blank. For the first letter in each chunk, I need to inspect the last letter of the previous chunk. But doing so would introduce a complicating dependence in the middle stage. The solution is to have each chunk also store the last character of the previous chunk. The chunks overlap by one character. Using this approach, a pipeline could be created, with a serial input filter, a parallel capitalize filter and a serial output filter. As with all pipelines there are dependencies between the stages, but if the input and output stages do not dominate the overall computation time, useful concurrency can be achieved in the middle stage. Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled. Your name/nickname Your email WebSite Subject (Maximum characters: 1200). You have 1200 characters left.
http://www.devx.com/cplus/Article/32935/0/page/2
CC-MAIN-2015-27
refinedweb
1,035
52.7
Change Parent Upon Child Focus in React

Here's how to change a parent element's style when a child element is focused.

Parent Selector

It would be awesome if we could do this in CSS. Alas, there is no parent selector in CSS today. Maybe someday. In CSS, we can style the element itself when it's focused:

.element:focus {
  color: thatfocusedcolor;
}

And we can style children when a parent is focused:

.some-parent:focus .some-child {
  color: thatfocusedcolor;
}

But we can't traverse upward in pure CSS. But we can accomplish this with some extra state and event handlers in JavaScript.

Capturing Focus in React

There are two easily-understood event handlers for knowing if something is focused in React: onFocus and onBlur (for unfocusing). We attach them to the child element we're checking to see is focused or not:

<button className="some-child" onFocus={someFunction} onBlur={someOtherFunction}>Some child!</button>

Now we need to implement the functions that will be called for the onFocus and onBlur events.

Styling a Parent in React

These functions will most effectively be implemented in the parent component. This is because it is the parent that needs to know whether the child is focused or not. So the parent will store some internal isFocused state:

class SomeParent extends React.Component {
  constructor(props) {
    super(props)
    this.state = { isFocused: false }
  }
  // ...
}
'box focused' : 'box'}> {React.cloneElement(this.props.children, { onFocus: _ => this.setState({ isFocused: true }), onBlur: _ => this.setState({ isFocused: false }) })} </div> ) } } We are using React.cloneElement to allow the parent to set onFocus and onBlur props on the child without having to define them on the child outside of the SomeParent component. Usage of the component would look like: <SomeParent> <button>Some child!</button> </SomeParent> Now we are left to implement the special .focused css selector that's being toggled on and off to see the style change. Try out this example on jsbin to see it in action. Click in the "Output" pane and press the tab key to change focus. It might also be worth noting that the child must be focusable in order to fire the focus/blur events. In this case, we're using a natively-focusable element, button. But you can make other things focusable as well. How do you control parent elements' style when child elements get focus?
https://jaketrent.com/post/change-parent-on-child-focus-react/
CC-MAIN-2022-40
refinedweb
445
59.4