Announcing DataSift VEDO - Giving Structure To Social Data

Today we announced the arrival of DataSift VEDO. In this post I'll outline what this means to you as a developer or analyst. DataSift VEDO gives you a robust solution for adding structure to social data, solving one of the common challenges of working with unstructured 'big data'. VEDO lets you define rules to classify data so that it fits your business model. The data delivered to your application needs less post-processing and is much easier to work with. The new features will save you time and open up a load more possibilities for your social data.

Data Is Meaningless Without Structure

When working with big data such as social content, one challenge you will always need to tackle is giving unstructured data meaningful structure. If you're working with our platform currently, you will no doubt be extracting data to your server and running post-processing rules to organise the data to meet your needs. Processing unstructured data is expensive and not much fun, but it's where we excel. VEDO lets you offload that processing onto our platform. You can now use CSDL (the same language you use for filtering) to add custom metadata labels and scores to data specifically for your use case.

Introducing Tagging And Scoring

VEDO introduces new features which let you attach this metadata: tagging and scoring. Tagging allows you to categorize interactions to match your business model. Any interaction that matches a tagging rule is given the appropriate text label, serving as a boolean flag to indicate whether an interaction belongs to a category. Scoring builds on tagging by letting you attach numerical values to interactions rather than just labels. A score can be built up over many rules, which lets you model subtle concepts such as priority, intention and weighting. As you use tagging and scoring more and more, you will want to be able to organise your growing set of rules.
To help, we have also introduced tag namespaces and reusable tag definitions. Tag namespaces allow you to define taxonomies of tags. You can group tags at any number of levels in namespaces and build deep schemas to fully reflect your model. Reusable tag definitions allow you to perfect your rules and reuse them across any number of streams and projects.

Definition Library

Tagging and scoring are powerful features, but at this point you might not have grasped exactly how they can help you. So alongside the tagging features we've also introduced a library of definitions to get you started. Some definitions you can use immediately in your streams (and benefit from our experience), and some serve as examples to show you what is now possible. For example, we have definitions that help you score content for quality (such as how likely the content is to be a job advert) and make it easier to exclude spam. We also have an example definition that shows how you can use the new features to classify conversations for customer service teams, picking out rants, raves and enquiries. You can view the library here.

There's More...

Although tagging is the main theme of the new release, there is an awful lot more happening here at DataSift. Alongside the release of VEDO we're giving you more power, more connectivity and a wider range of sources to play with. For instance, we've just introduced delivery destinations for MySQL and PostgreSQL. These new destinations allow you to map your filtered data directly to a tabular schema and have it pushed straight into your database. We're also in the process of bringing many more sources onboard (you may have seen our recent announcements!), including many Asian social networks. Look out for improvements to help you work with a wider variety of languages, updates to our developer tools and client libraries, and much more. I'll cover these all soon.
Watch This Space

In summary, there's far too much to cover in detail here. So watch this space: over the coming weeks I'll cover every feature of the new release in depth, with worked examples and sample code so you can take advantage of all these new powers for yourself. If you can't wait, all of these new features are fully documented in our Documentation area. Again, check out the new library for inspiration. If you're new to DataSift, what's stopping you? Register now and experience the power of our platform for yourself!

Using Managed Sources

Say you have hundreds of Facebook pages about your brands, plus a body of content created by users or customers. DataSift can aggregate it all: your brand pages, campaign pages, competitors' pages, and pages from industry influencers. In this blog I'm going to focus on our UI, but you can set up and manage everything via API calls instead and, for production use, that's the way to go. To learn more about that process, read our step-by-step guide.

Just to set the scene, DataSift offers two types of data source:
- Public
- Managed

A public source (YouTube, for example) is one that anyone can access. A Managed Source is one that requires you to supply valid authentication credentials before you can use it.

Create a token

The first task is to create an OAuth token that DataSift will use for authentication. The good news is that you don't even need to know what an OAuth token is, because it's generated automatically:
1. Log in and go to Data Sources -> Managed Sources.
2. Click on the Facebook tile.
3. Click Add Token. A popup box appears, inviting you to sign in to your Facebook account. If you look at the URL in the popup's address bar, you'll see that it's served by Facebook, not by us. That means you're giving your Facebook credentials to Facebook privately, just as you do any other time you sign in.
You are not giving them to us and we cannot see them.
4. Log in to Facebook in the popup box. The popup closes and you will now see that you have a token.

From now on, any time you run a filter in DataSift against this Managed Source, DataSift will use the token to gain access. It's secure: if you want to stop using the token, you can delete it from DataSift by clicking the red X. Or, in your Apps settings in Facebook, you can revoke it. If you do that, the token becomes useless.

Create a Managed Source

5. In the Name field, specify a name for your Managed Source. Here, I've called it "Example".
6. Type a search term in the Search box and click Search. Here I'm going to monitor Ferrari cars and merchandise. DataSift lists all the accounts that match your search term.
7. Select which ones you want to include in your filtering. In this example, I've chosen the candidate with the greatest number of likes.
8. Click Save.

Create a CSDL filter for that Managed Source

9. Click the My Managed Sources tab. You will see the source you just defined. Notice that the Start button is orange whereas the other two sources, which I defined before I took this screenshot, have a Stop button. It's important that you don't click Start yet. The first time you click it, DataSift delivers a backlog of posts from the past seven days. You need to create a stream and start a recording to capture those posts, otherwise they'll be lost. The next few steps explain how to do that.
10. Click on your Managed Source, "Example" in this case. DataSift displays the definition page for the source.
11. Click How to Use. Now you can grab the CSDL code for this Managed Source. It's a simple one-line filter that uses the source.id target and the unique id for the source you just defined.
12. Copy the CSDL code to the clipboard: source.id == "c07504cc3a324848ba1fb5905287799b"
13. Create a filter with that CSDL. You're probably very familiar with this step already.
Just click the Create Stream button, paste the CSDL code in from the clipboard, and save it.

Start recording the output of the filter

Now you need to start recording the output of that filter. Recordings are under the Tasks tab in DataSift.
14. Click Start a Recording.
15. Choose the filter that you created in Step 13.
16. Click Start Now and choose an end time for your recording. For this first test, I'd recommend that you don't choose a long duration.
17. Click Continue and then click Start Task.

Start the Managed Source

18. Now go back to My Managed Sources and click Start. Your filter will start to collect data from the source and DataSift will record it automatically.

Summary

That's all you need to know to use Managed Sources from the UI. Notice that you didn't even need to write a filter to get started; the platform provided the code for you. And by starting the recording before you ran the filter, you made sure that no data was lost. For production use, there's a powerful Managed Sources API, plus the step-by-step guide that I mentioned at the beginning of this blog.

Pulling Data with the Pull (Push) Connector

We designed the Pull Connector specifically for clients who are firewalled from the public internet and prefer to keep and process data in house. The Pull Connector provides the following benefits:
- Firewalls and network security policies are no longer an issue. With Pull, there is no need to set up public endpoints, which simplifies firewall and network management on your side. For example, you no longer need to ask your operations team to loosen the firewall rules to enable connections from DataSift to a host that will receive data. They will not have to give up a precious public IP address or think of ways of redirecting traffic to a shared IP address. Also, a change of the IP address of the host receiving data does not require a call to /push/update.
- Data collection and processing at your own pace. The Pull Connector uses the Push data queuing subsystem.
Your data is stored for an hour in a Push queue, giving you the freedom to collect it as often as you want (up to twice per second per Push subscription ID) and to request as much of it as you want, in batches of up to 20MB.
- You can retrieve data again, if necessary. If you need to request data again, you can go back in time for up to an hour using the queue cursor mechanism. It lets you retrieve data from the queue again in case it gets lost. You have up to one hour to retrieve it, which should give you plenty of time to handle technical problems.

When you combine the robust foundations of the Push subsystem, the freedom to collect data at your own pace, and the ease of setting up a data collection and processing system without having to make changes to your organization's network and security setup, the Pull Connector becomes a very attractive solution. And we saved the best for last: even though the Pull Connector introduces a new endpoint, /pull, for data collection, we implemented it using the same REST API you are already familiar with. You set it up just like any other Push connector and then call /pull to get your data.
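A pull-style collection loop can be sketched in a few lines. This is an illustrative sketch only, not DataSift's client code: fetch_batch stands in for an authenticated HTTP call to the /pull endpoint, and returning None for an empty queue is an assumption of this sketch.

```python
import time

def collect(fetch_batch, handle, interval=0.5, max_polls=None):
    """Poll a queue-style endpoint at a fixed interval, handing each
    non-empty batch to a processing callback.

    fetch_batch() should return a batch of interactions, or None/empty
    when the queue has nothing to deliver right now."""
    polls = 0
    while max_polls is None or polls < max_polls:
        batch = fetch_batch()      # e.g. a GET against /pull for your subscription
        if batch:
            handle(batch)          # process in house, behind your firewall
        polls += 1
        time.sleep(interval)       # stay under the rate limit (at most 2 calls/s)
```

Because the queue retains data for an hour, a loop like this can tolerate short outages on your side: missed polls simply pick up the backlog on the next call.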
http://dev.datasift.com/blog?page=4
Hey guys, recently I started with socket programming in Python and I'm still learning. I decided to do a project: a text-based menu with different options related to networks. The trouble is getting it to loop back to the menu, whose input prompt is ">>>". I'm thinking of using a while loop to loop back, but when I tried it, the one option would just repeat in a continuous loop. Any suggestions? Thanks.

import socket, sys, urllib, urllib2, webbrowser

print "Welcome!"
print "\nType 'help' or '/help' for a list of syntax commands"
command = raw_input("\n>>>")
if command == 'ip':
    host = raw_input("Host name: ")
    # pass the variable (the original passed the literal string "host")
    result = socket.gethostbyaddr(host)
    print "Primary hostname:"
    print " " + result[0]
    print "\nAddresses:"
    for item in result[2]:
        print " " + item
    print "\nDone"
elif command == 'web':
    print "\nCreating socket for connection..."
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("", 80))
    url = ''
    webbrowser.open_new_tab(url + '/doc')
    webbrowser.open_new(url)
    print "\nComplete"
elif command == 'html':
    link = raw_input("Type the desired website for page source: ")
    def addGETdata(url, data):
        return url + '?' + urllib.urlencode(data)
    zipcode = 'S2S 7U8'
    # use the link the user typed (the original passed the literal string 'link')
    url = addGETdata(link, [('query', zipcode)])
    print "Using URL", url
    req = urllib2.Request(url)
    fd = urllib2.urlopen(req)
    while 1:
        data = fd.read(1024)
        if not len(data):
            break
        sys.stdout.write(data)
elif command == 'ip list':
    hostlookup = raw_input("Host name for matches: ")
    # again, pass the variable rather than the string 'hostlookup'
    result = socket.getaddrinfo(hostlookup, None, 0, socket.SOCK_STREAM)
    counter = 0
    for item in result:
        print "%-2d: %s" % (counter, item[4])
        counter += 1
elif command == 'socketpeer':
    print "Creating socket...",
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print "done."
    print "Looking up port number...",
    port = socket.getservbyname('http', 'tcp')
    print "done."
    print "Connecting to remote host on port %d..." % port,
    s.connect(("", port))
    print "done."
    print "Connected from", s.getsockname()
    print "Connected to", s.getpeername()
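The looping problem comes from reading input once, outside the loop: both the prompt and the dispatch need to sit inside one while loop. Here is a minimal Python 3 sketch of that structure (the original post is Python 2); the handler names are hypothetical stand-ins for the option blocks above.

```python
def menu_loop(read_command, handlers):
    """One while-loop that re-prompts after every command and dispatches
    to the matching handler; typing 'quit' or 'exit' leaves the loop."""
    log = []
    while True:
        command = read_command()       # stands in for raw_input("\n>>>") / input(">>>")
        if command in ('quit', 'exit'):
            break
        handler = handlers.get(command)
        if handler is None:
            log.append("Unknown command: %s" % command)
        else:
            log.append(handler())      # run the option, then fall back to the prompt
    return log
```

In the real program each option ('ip', 'web', 'html', ...) would become one function in the handlers dict, so control always returns to the prompt after an option finishes.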
https://www.daniweb.com/programming/software-development/threads/126423/network-help
Hi everyone, I really need help to make the code below calculate the numeric value for each name separately. For example: if I enter "Williams Smith", it should calculate the first name and give its value alone, then calculate the second name and give its value too. The code right now gives only the total value for the complete name. Thanks in advance.

def main():
    print "This program calculates a numeric value for a name.\n"
    name = raw_input("Please enter your first or last name (no spaces): ").lower()
    print "\n", "-" * 10
    total = 0
    for ch in name:
        total += ord(ch) - 96
        print ch, "=", ord(ch) - 96
    print "-" * 10, "\n", name, "=", total

main()
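To score each name separately, split the input on spaces first and run the same letter sum per part. A Python 3 sketch (the post above uses Python 2's raw_input/print):

```python
def name_value(name):
    """Sum of letter positions (a=1 ... z=26) for a single name,
    the same ord(ch) - 96 trick as in the original code."""
    return sum(ord(ch) - 96 for ch in name.lower() if ch.isalpha())

def per_name_values(full_name):
    """Value of each space-separated part, calculated separately."""
    return {part: name_value(part) for part in full_name.split()}
```

For example, per_name_values("Williams Smith") scores "Williams" and "Smith" independently instead of summing the whole string.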
https://www.daniweb.com/programming/software-development/threads/389177/help-with-numeric-value
Xander here. For our new game Endless Elevator I've been working on finding the best method of moving the character through the scenes. At its most basic, our character needs to move left and right across the screen and forwards and backwards on a couple of limited parallel z axes. The only way he can move up or down is by an escalator or an elevator, so in that case we have to take control away from the user and hand over to another controller; once the move is done, we hand control back to the user. But it was mostly the left-right, forwards-backwards movements I was concerned about for this set of tests.

I wanted a really simple set of controls and didn't want to hand them over to the Physics Engine with Rigidbodies and Colliders. But the character movement couldn't feel stiff or mechanical: we wanted a bit of slack and bounce in the movement. I scripted up three different methods of controlling the character, with flip switches to move between them during testing, so we could decide on the best looking movement. Method One was the most basic Vector move. Method Two was a damped Vector move using Mathf.SmoothDamp. Method Three was to use the Character Controller component and its Move method. I'll put the script down at the bottom of the post. Right now I want to look at the actual movement controllers and the Input into those commands.

Let's start with the Basic Move controller. We define our speed as a float and get the x and z values from our Vertical and Horizontal Input Axes (WASD or Arrow Keys on the computer).

public float speed = 10f;

var x = Input.GetAxis("Horizontal"); // a float between -1 and 1 tracking left/right input: under 0 we move left, over 0 we move right. Simple, easy to understand, and it doesn't factor into the movement at all except to define which way we go.
var z = Input.GetAxis("Vertical"); // We do the same for the z value, which governs character depth on the screen (forwards and backwards).

if (x < 0) {
    transform.position -= new Vector3(speed * Time.deltaTime, 0, 0); // Move left by speed, normalised for frame rate differences with deltaTime
}
if (x > 0) {
    transform.position += new Vector3(speed * Time.deltaTime, 0, 0); // Right
}
if (z < 0) {
    transform.position -= new Vector3(0, 0, speed * Time.deltaTime); // Towards the camera
}
if (z > 0) {
    transform.position += new Vector3(0, 0, speed * Time.deltaTime); // Away from the camera
}

Every Update we move our character in the desired direction at a speed of 10. This is fine. It works. But it looks kind of static. The character hits full speed straight away and stops on a dime. It's not particularly realistic looking (though maybe you want your game to have this look) and not what I really wanted. But it's a great comparison to judge the other two methods by.

Let's look at the Smooth Damp method next. We use the same kind of input:

x = Input.GetAxis("Horizontal");
speed = 1f; // This method is much faster, so we reduce the speed to keep all the methods at roughly the same rate.

We set some variables that control the "bounciness" of the smoothing:

public float forwardVelocity = 0;
public float sidewaysVelocity = 10; // These floats hold the value of the velocity between Updates (I only wanted some on the left-right movement)
public float movementSmoothing = 0.15f; // the higher the number, the slower it gets to top speed

Mathf.SmoothDamp takes the current position and the raw input from the Input keys. That means that those values between -1 and 1 ramp up or down over a small amount of time (depending on the direction).
This is factored together with the current velocity and the smoothing time, so the position update ended up looking like this:

transform.position += new Vector3(movementInput.x * speed, 0, movementInput.y * speed);
// transform.position += new Vector3(movementInput.x * speed * Time.deltaTime, 0, movementInput.y * speed * Time.deltaTime);
// I had that last line with deltaTime in it as well, but it looked "moochy", if that's a word, and did the opposite of smoothing the transitions.

And the smoothing call itself looked like this:

movementInput.x = Mathf.SmoothDamp(movementInput.x, Input.GetAxisRaw("Horizontal"), ref forwardVelocity, movementSmoothing);

Mathf.SmoothDamp gradually changes a value towards a desired goal over time:

public static float SmoothDamp(float current, float target, ref float currentVelocity, float smoothTime, float maxSpeed = Mathf.Infinity, float deltaTime = Time.deltaTime);

Input.GetAxisRaw("Horizontal") goes straight from -1 on left arrow back to 0 when off and +1 on right arrow, with no smoothing. Input.GetAxis("Horizontal") does the same thing but increments gradually over a short time, so that it goes from 0 to 1 in a fraction of a second. So using the Raw one and pumping it into the SmoothDamp function does pretty much the same thing; in the Editor I cannot tell the difference between them.

This controller took a bit more playing with but was really looking good. It picks up speed incrementally when you start moving and there is a controllable amount of exaggerated overshooting when you change direction. If you set the smoothing or velocity too high it gets really bouncy and a little crazy, but at low levels it looks cartoony and mimics the stretch and squash of that genre.

Lastly I looked at the Character Controller. It's part of the system, so it's really easy to set up in a simple move script to get a feel for it.

speed = 20f; // Once again the speed was modified to get a realistic similarity between the three methods.

The character direction comes straight from the input controllers and the cc.Move method handles everything else.
direction = new Vector3(Input.GetAxis("Vertical"), 0, -Input.GetAxis("Horizontal"));
direction1 = transform.TransformDirection(-direction);
cc.Move(direction1 * speed * Time.deltaTime);

Initially this looked really good. But the more I played with it, trying to limit the way it moved the character and the number of radians it could turn, the more complicated it got, and I started getting weird movement behaviour. So if you want a character controller that you can just plug in and go with, this is good. But if you want more control, be prepared for a longer haul.

As with everything, this was a great study into character control methods and one step on the path of looking for a smooth-moving character. In the end, following a massive redesign of the level system of my game, I went with a physics-based collider system for the levels and a physics controller. Not all my time was wasted, though, as I gained a better understanding of what I wanted and used elements of both the Smooth Move method and the Basic method in my final Character Control script.

This is the final edit of the script I was using, if you want to play with it. Just attach it to a player and add a Character Controller. Use the tick boxes on the bools in the editor to switch between methods while playing.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class SmoothMove : MonoBehaviour
{
    public bool basicMove;
    public bool smoothDampMove;
    public bool CCMove;

    public Vector2 movementInput = Vector2.zero;
    public float forwardVelocity = 0;
    public float sidewaysVelocity = 10;
    public float movementSmoothing = 0.15f; // the higher the number, the slower it gets to top speed
    public float speed;
    public CharacterController cc;
    public Vector3 direction = Vector3.zero;
    public Vector3 direction1; // used for debugging in the editor
    float smooth = 50.0f;

    // Update is called once per frame
    void Update()
    {
        if (basicMove)
        {
            speed = 10f;
            var x = Input.GetAxis("Horizontal");
            var z = Input.GetAxis("Vertical");
            if (x < 0) { transform.position -= new Vector3(speed * Time.deltaTime, 0, 0); }
            if (x > 0) { transform.position += new Vector3(speed * Time.deltaTime, 0, 0); }
            if (z < 0) { transform.position -= new Vector3(0, 0, speed * Time.deltaTime); }
            if (z > 0) { transform.position += new Vector3(0, 0, speed * Time.deltaTime); }
        }

        if (smoothDampMove)
        {
            speed = 1f;
            movementInput.x = Mathf.SmoothDamp(movementInput.x, Input.GetAxisRaw("Horizontal"), ref forwardVelocity, movementSmoothing);
            movementInput.y = Mathf.SmoothDamp(movementInput.y, Input.GetAxisRaw("Vertical"), ref sidewaysVelocity, movementSmoothing);
            // transform.position += new Vector3(movementInput.x * speed * Time.deltaTime, 0, movementInput.y * speed * Time.deltaTime);
            transform.position += new Vector3(movementInput.x * speed, 0, movementInput.y * speed);
        }

        if (CCMove)
        {
            speed = 20f;
            direction = new Vector3(Input.GetAxis("Vertical"), 0, -Input.GetAxis("Horizontal"));
            // direction = new Vector3(0, 0, -Input.GetAxis("Horizontal"));
            direction1 = transform.TransformDirection(direction);
            // direction1 *= speed;
            cc.Move(direction1 * speed * Time.deltaTime);
        }
    }
}
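For intuition about what Method Two is actually doing, here is a rough Python port of the widely circulated reference implementation of Mathf.SmoothDamp: a critically damped spring with a polynomial approximation of e^-x. This is a sketch for experimentation outside Unity, not Unity's exact source.

```python
def smooth_damp(current, target, velocity, smooth_time, delta_time, max_speed=float("inf")):
    """One critically damped spring step toward target.
    Python has no `ref` parameters, so the updated velocity is returned
    alongside the new value instead of being written in place."""
    smooth_time = max(1e-4, smooth_time)
    omega = 2.0 / smooth_time
    x = omega * delta_time
    # Polynomial approximation of e^-x used by the common implementation
    exp = 1.0 / (1.0 + x + 0.48 * x * x + 0.235 * x * x * x)
    change = current - target
    original_target = target
    # Clamp the maximum speed
    max_change = max_speed * smooth_time
    change = max(-max_change, min(change, max_change))
    target = current - change
    temp = (velocity + omega * change) * delta_time
    velocity = (velocity - omega * temp) * exp
    output = target + (change + temp) * exp
    # Prevent overshooting past the goal
    if (original_target - current > 0.0) == (output > original_target):
        output = original_target
        velocity = (output - original_target) / delta_time
    return output, velocity
```

Stepping this at 60 fps with smooth_time = 0.15 shows the ease-in/ease-out curve the post describes: the value ramps up, approaches the target, and settles without the stop-on-a-dime feel of the basic move.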
https://www.zuluonezero.net/2018/06/14/unity3d-smooth-moves/
I really need help. I want to make some changes in my .MP4 movies (subtitles & some logos in those movies). I tried a lot but I couldn't find a way to do it.

Thinking laterally! Import the mp4 into something like Magix Movie Editor Pro Plus; you'll see each frame. You can then add a splotch (save a frame before and a frame after (without text or graphic), cut out the small area and overlay). Alternatively you can overlay clean whole frame(s) on the graphic'd frame(s); this will give the appearance of a small pause in the video while the soundtrack still runs. Or you can put your own solid graphic over the area to be masked. Once you're happy with the fudged clean-up, resave in a format/fps/size that suits your needs. ...just ideas/food for thought etc.

File information: AVC Advanced Video Codec - High@L4.0

I'm trying to demux now. Divit Dsouza, I will try that.

If the subtitles are soft coded, you can extract them from the container and edit them. You can do this using mp4box. Download and install it from here: Click on the "Demux" tab and add your video. If the subtitles are soft coded you should see them in the list as "Timed Text" or something like that. Then you can demux/extract them and edit them in a text editor. But if they are hard coded, there's nothing you can do. Try this link:

Hard-coded subtitles cannot be edited out.

If the mp4 video is on iTunes, open iTunes, then right click on the movie you want to edit and click Get Info. You can edit the movie picture, description, etc.

If the subs are hard coded, there's not much you can do with them! Or if they're in a separate file, open it with a text editor and edit! Sometimes subtitles are part of the movie and can't be edited. Only if you can locate a separate subtitles file will you be able to edit them.
ha14 thanks, but that's not working for me; like I said, I have only one file.

Drew Butler, I have only one file. If that is the case, then it means that the subtitles are hard-coded and cannot be edited out. How to create/edit/embed subtitles in videos using Jubler and Avidemux under Windows/Linux/MAC OS

If they are hard-encoded you're stuck with them. Hard-encoded means the only file you have is the mp4 and the subtitles still show up. If you have additional files like a subtitle file, either delete it or watch the video using VLC and de-select subtitles from the toolbar options.
https://www.makeuseof.com/answers/edit-subtilties-logos-mp4-movies/
Hi, I have a DTML document with the following structure:

<dtml-var standard_html_header>
<dtml-try>
  <dtml-call "my_external_method(REQUEST)">
  <p> Report that everything was ok and wait for ack.
  <form method=POST action=index_html>
    <input type=submit>
  </form>
<dtml-except MyException>
  <p class="warning"> Warn about problem found by external method.
  <form method=POST>
    <input type=submit>
  </form>
</dtml-try>
<dtml-var standard_html_footer>

and it works wonderfully. To be honest, I thought it was quite uncanny to define and raise an exception inside the Python module and have it caught by the DTML document. Now, to improve the treatment of errors, I need to refer, inside the <dtml-except> block, to some variables that were defined in the REQUEST variable (I'm pretty confident of this because I used them inside the external method), but it seems that the original REQUEST variable is no longer set inside the <dtml-except> block. Is this correct? What actually happens (in simple terms, please :) when an exception is thrown? I also tried the very few other tricks I know with DTML (<dtml-with "_.namespace..."> outside the <dtml-try> block, plus some other voodoo) but couldn't get it to work, either. Should I return these values as error values with the exception? I can do it if this is the only option, but I'm already using the return value for information related to external method things, and I'd rather not mix the two "domains", so to speak.

Many thanks for any tips,
-- Ramiro Brito Willmersdorf [EMAIL PROTECTED]
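In plain Python terms, one common way to make extra context available to an exception handler without touching the return value is to attach the values to the exception object itself. This is a generic sketch with illustrative names, not Zope/DTML-specific API:

```python
class MyException(Exception):
    """Raised by the external method; carries the request values the
    error handler needs, so the normal return value stays untouched."""
    def __init__(self, message, request_vars=None):
        super().__init__(message)
        self.request_vars = request_vars or {}

def my_external_method(request):
    # Illustrative failure path: pass the interesting request values along.
    raise MyException('validation failed',
                      request_vars={'user': request.get('user')})

try:
    my_external_method({'user': 'ramiro'})
except MyException as exc:
    # The handler can read both the message and the attached context.
    handled = (str(exc), exc.request_vars['user'])
```

Since the context rides on the exception rather than on the return value, the two "domains" the poster mentions stay separate.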
https://www.mail-archive.com/zope@zope.org/msg08395.html
Description

RapidXml is an attempt to create the fastest XML parser possible, while retaining usability, portability and reasonable W3C compatibility. It is an in-situ parser with parsing speed approaching the speed of the strlen function executed on the same data.

User Reviews

Quite buggy! Especially with large buggy XML inputs. No namespace support (WHY?!). I ended up hand-rolling my own expat-based parser.

I have used rapidxml-1.13, and the version 'hidden' inside Boost.PropertyTree. It's simple to operate, and without many lines of code I was able to add XML Namespaces (prefix mapping) support and create a visitor to adapt to a parser-neutral SAX2 interface, so I could change over from another parser.

Good job.

Clean structure, even no docs needed. Easy to integrate - headers only.
https://sourceforge.net/projects/rapidxml/
Created on 2020-03-24 15:29 by conqp, last changed 2020-03-25 06:44 by serhiy.storchaka. This issue is now closed.

Currently only plain strings can be used as docstrings, such as:

class Foo:
    """Spamm eggs."""

For dynamic class generation, it would be useful to allow format strings as docstrings as well:

doc = 'eggs'

class Foo:
    """Spamm {}.""".format(doc)

or:

doc = 'eggs'

class Foo:
    f"""Spamm {doc}."""

A current use case in which I realized that this feature was missing is:

class OAuth2ClientMixin(Model, ClientMixin):  # pylint: disable=R0904
    """An OAuth 2.0 client mixin for peewee models."""

    <snip>

    @classmethod
    def get_related_models(cls, model=Model):
        """Yields related models."""
        for mixin, backref in CLIENT_RELATED_MIXINS:
            yield cls._get_related_model(model, mixin, backref)

    @classmethod
    def _get_related_model(cls, model, mixin, backref):
        """Returns an implementation of the related model."""

        class ClientRelatedModel(model, mixin):
            f"""Implementation of {mixin.__name__}."""

            client = ForeignKeyField(
                cls, column_name='client', backref=backref,
                on_delete='CASCADE', on_update='CASCADE')

        return ClientRelatedModel

It actually *is* possible to dynamically set the docstring via the __doc__ attribute:

doc = 'eggs'

class Foo:
    pass

Foo.__doc__ = doc

Allowing format strings would IMHO be more obvious when reading the code, as the docstring would be set where a docstring is expected, i.e. below the class / function definition.

The problem is that __doc__ is set at compile time, not run time. The ''.format call (and f-strings) are evaluated at run time. See also
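The compile-time/run-time distinction can be demonstrated directly: only a plain string literal as the first statement becomes __doc__; a .format() call (or an f-string) is just an expression statement whose value is discarded, while the workaround assigns __doc__ at run time:

```python
doc = 'eggs'

class Literal:
    """Spamm eggs."""             # a plain literal is stored as __doc__ at compile time

class Formatted:
    """Spamm {}.""".format(doc)   # an expression statement: evaluated, then discarded

class Assigned:
    pass

Assigned.__doc__ = 'Spamm {}.'.format(doc)  # the run-time workaround from the issue

print(Literal.__doc__)    # 'Spamm eggs.'
print(Formatted.__doc__)  # None
print(Assigned.__doc__)   # 'Spamm eggs.'
```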
https://bugs.python.org/issue40054
I started using ZeroMQ.

publisher.py:

import zmq
import random
import time

port = "5556"
topic = "1"
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:%s" % port)

while True:
    messagedata = random.randrange(1, 215)
    print "%s %d" % (topic, messagedata)
    socket.send("%s %d" % (topic, messagedata))
    time.sleep(.2)

subscriber.py:

import zmq

port = "5556"
topic = "1"
context = zmq.Context()
socket = context.socket(zmq.SUB)
print "Connecting..."
socket.connect("tcp://localhost:%s" % port)
socket.setsockopt(zmq.SUBSCRIBE, topic)

while True:
    if isLastMessage():  # probably based on socket.recv() analysis
        analysis_function()  # time consuming function
    else:
        simple_function()  # something simple like print and save in memory

How can I implement isLastMessage() in subscriber.py?

Sorry, I will keep the question for reference. I just found the answer: the documentation describes a NOBLOCK flag you can pass to the receiver, so the recv command doesn't block. A simple workaround, extracted from part of another answer, is the following:

while True:
    try:
        # check for a message; this will not block
        message = socket.recv(flags=zmq.NOBLOCK)
        # a message has been received
        print "Message received:", message
    except zmq.Again as e:
        print "No message received yet"

As for the real implementation: you cannot be sure any given message is the last one, so you keep receiving with the NOBLOCK flag until you enter the exception block. That translates to something like the following:

msg = subscribe(in_socket)
is_last = False
while True:
    if is_last:
        msg = subscribe(in_socket)
        is_last = False
    else:
        try:
            old_msg = msg
            msg = subscribe(in_socket, flags=zmq.NOBLOCK)
            # a new message was received, so process the old message
            process_not_last(old_msg)
        except zmq.Again as e:
            process_last(msg)
            is_last = True  # it is probably the last message
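The same drain-until-empty idea can be tested without a network: here a plain queue stands in for the SUB socket, with queue.Empty playing the role of zmq.Again. This is a transport-agnostic sketch, not PyZMQ code:

```python
from queue import Queue, Empty

def drain_latest(q):
    """Non-blocking reads until the queue is empty: returns a list of the
    earlier messages and the final (latest) one, mirroring the NOBLOCK
    loop in the answer above."""
    earlier, last = [], None
    while True:
        try:
            msg = q.get_nowait()      # plays the role of recv(flags=zmq.NOBLOCK)
        except Empty:                 # plays the role of zmq.Again
            return earlier, last
        if last is not None:
            earlier.append(last)      # older messages get the cheap treatment
        last = msg                    # the newest candidate for "last"
```

In the real subscriber, the earlier messages would go to simple_function() and the final one to the time-consuming analysis_function().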
https://codedump.io/share/wFP63cUSApQ1/1/zeromq-pubsub-action-last-element-in-queue-an-other-elements
A couple of use cases for nested static classes in Java

Whenever you can use an anonymous class, you can use a named nested static class instead. But that only makes sense if you need to instantiate the class from two or more places. That's probably not going to be the case with an iterator, but it's likely to be the case with nodes in a singly- or doubly-linked list.

Nodes in a linked list

Instead of a backing array to hold the elements of a list, like ArrayList, a linked list has linked nodes to hold the elements. So we can use a private nested static class to represent each individual node in a linked list. In the case of a singly-linked list, each node needs to hold only two things: the element and a pointer to the next node or the previous node, or a null pointer if it's the last or first element in the list (depending on whether the links are forward- or backward-pointing).

One of the advantages of linked lists is that insertions and removals to and from the middle of the list are very easy, in either direction. There's no need to move elements around in a backing array. Linked lists also have their drawbacks, but those are outside the scope of this article. The important thing is that the nodes of a linked list can be represented by a specialized class that is nested within the linked list class. In a rough draft of SinglyLinkedList<E>, since Node<E> has no need to access the enclosing class's this, it can be declared as a nested static class rather than an inner class.

Implementing class nested in a test class

Another use case is one that I thank Apache NetBeans for: using a static nested class inside the test class for an abstract class under test. As I try to be more thorough about test-driven development (TDD), I realized that in most cases abstract classes need to be tested. The first time I thought about that, I decided to give it a try in NetBeans.
If you have NetBeans generate a test class for an abstract class, it will probably include an "Impl" class nested in the test class. For example, if SomeClass is abstract and has a constructor that requires one or more parameters, NetBeans will include SomeClassImpl nested in SomeClassTest. However, NetBeans will not mark such an Impl class as static nor remind you that it can be static. But maybe you can chalk this up to NetBeans not being as "opinionated" as Eclipse or IntelliJ IDEA — that last one will definitely let you know when a nested class can be static. For example, given this abstract class, NetBeans will generate for you something like this:

package postal;

// JUnit imports go here

public class PostalCodeTest {

    // Test stub for getCountry()
    private class PostalCodeImpl extends PostalCode {

        public PostalCodeImpl(int code) {
            super(code, Locale.US);
        }

        @Override
        public String toString() {
            return "ImplPostal " + this.postalCodeNumber;
        }

    }

}

As a matter of good form you add "static" to the PostalCodeImpl declaration. I like this idea so much I use it in my IntelliJ projects as well, which is how I found out that I was missing the static declaration for this particular use case.
A supporting module for jplephem to handle data type 1

Compute position and velocity of a celestial small body, from a NASA SPICE SPK ephemeris kernel file of data type 1 (Modified Difference Arrays). You can get SPK files for many solar system small bodies from the HORIZONS system of NASA/JPL.

Usage:

from spktype01 import SPKType01
kernel = SPKType01.open('path')
position, velocity = kernel.compute_type01(center, target, jd)

where:
- center - SPKID of central body (10 for Sol)
- target - SPKID of target body
- jd - time for computation (Julian date)

Author: Shushi Uetsuki (whiskie14142)

This module has been developed based on jplephem.spk and FORTRAN source of the SPICE Toolkit of NASA/JPL/NAIF.
September 2, 2012

Getting Started with Topic Modeling and MALLET

- Reviewer(s): John Fink, Alan MacEachern and Adam Crymble

Editor's Note

This lesson requires you to use the command line. If you have no previous experience using the command line you may find it helpful to work through the Scholars' Lab Command Line Bootcamp tutorial.

Lesson Goals

We will run the topic modeller on some example files, and look at the kinds of outputs that MALLET produces. This will give us a good idea of how it can be used on a corpus of texts to identify topics found in the documents without reading them individually. Please see the MALLET users' discussion list for the full range of things one can do with the software. (We would like to thank Robert Nelson and Elijah Meeks for hints and tips in getting MALLET to run for us the first time, and for their examples of what can be done with this tool.)

What is Topic Modeling And For Whom is this Useful?

A topic modeling tool takes a single text (or corpus) and looks for patterns in the use of words; it is an attempt to inject semantic meaning into vocabulary. Before you begin with topic modeling, you should ask yourself whether or not it is likely to be useful for your project. Matthew Kirschenbaum's Distant Reading (a talk given at the 2009 National Science Foundation Symposium on the Next Generation of Data Mining and Cyber-Enabled Discovery for Innovation) and Stephen Ramsay's Reading Machines are good places for beginning to understand in which circumstances a technique such as this could be most effective. As with all tools, just because you can use it, doesn't necessarily mean that you should. If you are working with a small number of documents (or even a single document) it may well be that simple frequency counts are sufficient, in which case something like Voyant Tools might be appropriate.
However, if you have hundreds of documents from an archive and you wish to understand something of what the archive contains without necessarily reading every document, then topic modeling might be a good approach. Topic modeling works best on large collections of unstructured text. By unstructured we mean that there are no computer-readable annotations that tell the computer the semantic meaning of the words in the text.

Topic modeling programs do not know anything about the meaning of the words in a text. Instead, they assume that any piece of text is composed (by an author) by selecting words from possible baskets of words where each basket corresponds to a topic. If that is true, then it becomes possible to mathematically decompose a text into the probable baskets from whence the words first came. The tool goes through this process over and over again until it settles on the most likely distribution of words into baskets, which we call topics.

There are many different topic modeling programs available; this tutorial uses one called MALLET. If one used it on a series of political speeches for example, the program would return a list of topics and the keywords composing those topics. Each of these lists is a topic according to the algorithm. Using the example of political speeches, the list might look like:

- Job Jobs Loss Unemployment Growth
- Economy Sector Economics Stock Banks
- Afghanistan War Troops Middle-East Taliban Terror
- Election Opponent Upcoming President
- et cetera

By examining the keywords we can discern that the politician who gave the speeches was concerned with the economy, jobs, the Middle East, the upcoming election, and so on. As Scott Weingart warns, there are many dangers that face those who use topic modeling without fully understanding it. For instance, we might be interested in word use as a proxy for placement along a political spectrum.
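The "baskets of words" assumption described above can be illustrated with a short sketch. The topics, words and weights here are invented for the illustration; real topic modelers such as MALLET work in the opposite direction, inferring the most probable baskets and mixtures from the finished documents:

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# Two hypothetical "baskets" of words, each standing in for a topic.
topics = {
    "economy": ["jobs", "growth", "banks", "unemployment"],
    "war": ["troops", "taliban", "terror", "afghanistan"],
}

def compose_document(mixture, length):
    """Write a 'document' the way topic models assume authors do:
    for each word, first pick a basket, then pick a word from it."""
    words = []
    topic_names = list(mixture)
    weights = [mixture[name] for name in topic_names]
    for _ in range(length):
        basket = random.choices(topic_names, weights=weights)[0]
        words.append(random.choice(topics[basket]))
    return words

# A hypothetical speech that is 70% about the economy, 30% about the war.
doc = compose_document({"economy": 0.7, "war": 0.3}, 12)
print(doc)
```

Decomposing doc back into its two baskets and the 70/30 mixture, given only the words, is the problem a topic modeling program tries to solve statistically.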
Topic modeling could certainly help with that, but we have to remember that the proxy is not in itself the thing we seek to understand – as Andrew Gelman demonstrates in his mock study of zombies using Google Trends. Ted Underwood and Lisa Rhody (see Further Reading) argue that we as historians would do better to think of these categories as discourses; however for our purposes here we will continue to use the word: topic.

Note: You will sometimes come across the term "LDA" when looking into the bibliography of topic modeling. LDA and Topic Model are often used synonymously, but the LDA technique is actually a special case of topic modeling created by David Blei and friends in 2002. It was not the first technique now considered topic modeling, but it is by far the most popular. The myriad variations of topic modeling have resulted in an alphabet soup of techniques and programs to implement them that might be confusing or overwhelming to the uninitiated; ignore them for now. They all work in much the same way. MALLET uses LDA.

Examples of topic models employed by historians:

- Rob Nelson, Mining the Dispatch
- Cameron Blevins, "Topic Modeling Martha Ballard's Diary" Historying, April 1, 2010.
- David J Newman and Sharon Block, "Probabilistic topic decomposition of an eighteenth century American newspaper," Journal of the American Society for Information Science and Technology vol. 57, no. 6 (April 1, 2006): 753-767.

Installing MALLET

There are many tools one could use to create topic models, but at the time of this writing (summer 2012) the simplest tool to run your text through is called MALLET. MALLET uses an implementation of Gibbs sampling, a statistical technique meant to quickly construct a sample distribution, to create its topic models. MALLET requires using the command line – we'll talk about that more in a moment, although you typically use the same few commands over and over. The installation instructions are different for Windows and Mac.
Follow the instructions appropriate for you below:

Windows Instructions

- Go to the MALLET project page, and download MALLET. (As of this writing, we are working with version 2.0.7.)
- You will also need the Java developer's kit – that is, not the regular Java that's on every computer, but the one that lets you program things. Install this on your computer.
- Unzip MALLET into your C:\ directory. This is important: it cannot be anywhere else. You will then have a directory called C:\mallet-2.0.7 or similar. For simplicity's sake, rename this directory just mallet.
- MALLET uses an environment variable to tell the computer where to find all the various components of its processes when it is running. It's rather like a shortcut for the program. A programmer cannot know exactly where every user will install a program, so the programmer creates a variable in the code that will always stand in for that location. We tell the computer, once, where that location is by setting the environment variable. If you moved the program to a new location, you'd have to change the variable.

To create an environment variable in Windows 7, click on your Start Menu -> Control Panel -> System -> Advanced System Settings (Figures 1,2,3). Click new and type MALLET_HOME in the variable name box. It must be like this – all caps, with an underscore – since that is the shortcut that the programmer built into the program and all of its subroutines. Then type the exact path (location) of where you unzipped MALLET in the variable value, e.g., c:\mallet. To see if you have been successful, please read on to the next section.

Running MALLET using the Command Line

MALLET is run from the command line, also known as Command Prompt (Figure 4). If you remember MS-DOS, or have ever played with a Unix computer Terminal, this will be familiar. The command line is where you can type commands directly, rather than clicking on icons and menus.
- Click on your Start Menu -> All Programs -> Accessories -> Command Prompt. You'll get the command prompt window, which will have a cursor at c:\user\user> (or similar; see Figure 4).
- Type cd .. (that is: cd-space-period-period) to change directory. Keep doing this until you're at the C:\. (as in Figure 5)
- Then type cd mallet and you are in the MALLET directory.

Anything you type in the command prompt window is a command. There are commands like cd (change directory) and dir (list directory contents) that the computer understands. You have to tell the computer explicitly that 'this is a MALLET command' when you want to use MALLET. You do this by telling the computer to grab its instructions from the MALLET bin, a subfolder in MALLET that contains the core operating routines.

- Type bin\mallet as in Figure 6. If all has gone well, you should be presented with a list of MALLET commands – congratulations! If you get an error message, check your typing. Did you use the wrong slash? Did you set up the environment variable correctly? Is MALLET located at C:\mallet? You are now ready to skip ahead to the next section.

Mac Instructions

Many of the instructions for OS X installation are similar to Windows, with a few differences. In fact, it is a bit easier.

- Download and install MALLET (mallet-2.0.7.tar.gz as of Summer 2012).
- Download the Java Development Kit.

Unzip MALLET into a directory on your system (for ease of following along with this tutorial, your /user/ directory works but anywhere is okay). Once it is unzipped, open up your Terminal window (in the Applications directory in your Finder). Navigate to the directory where you unzipped MALLET using the Terminal (it will be mallet-2.0.7). If you unzipped it into your /user/ directory as was suggested in this lesson, you can navigate to the correct directory by typing cd mallet-2.0.7. cd is short for "change directory" when working in the Terminal.
The same command will suffice to run commands from this directory, except you need to append ./ (period-slash) before each command. This needs to be done before all MALLET commands when working on a Mac. Going forward, the commands for MALLET on a Mac will be nearly identical to those on Windows, except for the direction of slashes (there are a few other minor differences that will be noted when they arise). If on Windows a command would be bin\mallet, on a Mac you would instead type:

./bin/mallet

A list of commands should appear. If it does, congratulations – you've installed it correctly!

Typing in MALLET Commands

Now that you have MALLET installed, it is time to learn what commands are available to use with the program. There are nine MALLET commands you can use (see Figure 6 above). Sometimes you can combine multiple instructions. At the Command Prompt or Terminal (depending on your operating system), try typing:

import-dir --help

You are presented with the error message that import-dir is not recognized as an internal or external command, operable program, or batch file. This is because we forgot to tell the computer to look in the MALLET bin for it. Try again, with

bin\mallet import-dir --help

Remember, the direction of the slash matters (See Figure 7, which provides an entire transcript of what we have done so far in the tutorial). We checked to see that we had installed MALLET by typing in bin\mallet. We then made the mistake with import-dir a few lines further down. After that, we successfully called up the help file, which told us what import-dir does, and it listed all of the potential parameters you can set for this tool.

Note: there is a difference in MALLET commands between a single hyphen and a double hyphen. A single hyphen is simply part of the name; it replaces a space (e.g., import-dir rather than import dir), since spaces offset multiple commands or parameters.
These parameters let us tweak the file that is created when we import our texts into MALLET. A double hyphen (as with --help above) modifies, adds a sub-command, or specifies some sort of parameter to the command.

For Windows users, if you got the error 'exception in thread "main" java.lang.NoClassDefFoundError:' it might be because you installed MALLET somewhere other than in the C:\ directory. For instance, installing MALLET at C:\Program Files\mallet will produce this error message. The second thing to check is that your environment variable is set correctly. In either of these cases, check the Windows installation instructions and double check that you followed them properly.

Working with data

MALLET comes pre-packaged with sample .txt files with which you can practice. Type dir at the C:\mallet> prompt, and you are given the listing of the MALLET directory contents. One of those directories is called sample-data. You know it is a directory because it has the word <dir> beside it. Type cd sample-data. Type dir again. Using what you know, navigate to first the web then the en directories. You can look inside these .txt files by typing the full name of the file (with extension). Note that you cannot now run any MALLET commands from this directory. Try it:

bin\mallet import-dir --help

You get the error message. You will have to navigate back to the main MALLET folder to run the commands. This is because of the way MALLET and its components are structured.

Importing data

In the sample data directory, there are a number of .txt files. Each one of these files is a single document, the text of a number of different web pages. The entire folder can be considered to be a corpus of data. To work with this corpus and find out what the topics are that compose these individual documents, we need to transform them from several individual text files into a single MALLET format file. MALLET can import more than one file at a time.
We can import the entire directory of text files using the import command. The commands below import the directory, turn it into a MALLET file, keep the original texts in the order in which they were listed, and strip out the stop words (words such as and, the, but, and if that occur in such frequencies that they obstruct analysis) using the default English stop-words dictionary. Try the following (swapping in the correct pathway to the sample data).

bin\mallet import-dir --input pathway\to\the\directory\with\the\files --output tutorial.mallet --keep-sequence --remove-stopwords

If you type dir now (or ls for Mac), you will find a file called tutorial.mallet. (If you get an error message, you can hit the cursor up key on your keyboard to recall the last command you typed, and look carefully for typos). This file now contains all of your data, in a format that MALLET can work with.

For Mac

Mac instructions are similar to those above for Windows, but keep in mind that Unix file paths (which are used by Mac) are different: for example, if the directory was in one's home directory, one would type

./bin/mallet import-dir --input /users/username/database/ --output tutorial.mallet --keep-sequence --remove-stopwords

Issues with Big Data

If you're working with extremely large file collections – or indeed, very large files – you may run into issues with your heap space, your computer's working memory. This issue will initially arise during the import sequence, if it is relevant. By default, MALLET allows for 1GB of memory to be used. If you run into the following error message, you've run into your limit:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

If your system has more memory, you can try increasing the memory allocated to your Java virtual machine. To do so, you need to edit the code in the mallet file found in the bin subdirectory of your MALLET folder.
Using a text editor such as Komodo Edit, open the mallet.bat file (C:\Mallet\bin\mallet.bat). Find the following line:

MEMORY=1g

You can then change the 1g value upwards – to 2g, 4g, or even higher depending on your system's RAM, which you can find out by looking up the machine's system information. Save your changes. You should now be able to avoid the error. If not, increase the value again.

Your first topic model

At the command prompt in the MALLET directory, type:

bin\mallet train-topics --input tutorial.mallet

This command opens your tutorial.mallet file, and runs the topic model routine on it using only the default settings. As it iterates through the routine, trying to find the best division of words into topics, your command prompt window will fill with output from each run. When it is done, you can scroll up to see what it was outputting (as in Figure 8). The computer is printing out the key words, the words that help define a statistically significant topic, per the routine. In Figure 8, the first topic it prints out might look like this (your key words might look a bit different):

0 5 test cricket Australian hill acting England northern leading ended innings record runs scored run team batsman played society English

If you are a fan of cricket, you will recognize that all of these words could be used to describe a cricket match. What we are dealing with here is a topic related to Australian cricket. If you go to C:\mallet\sample-data\web\en\hill.txt, you will see that this file is a brief biography of the noted Australian cricketer Clem Hill. The 0 and the 5 we will talk about later in the lesson. Note that MALLET includes an element of randomness, so the keyword lists will look different every time the program is run, even if on the same set of data.

Go back to the main MALLET directory, and type dir. You will see that there is no output file.
While we successfully created a topic model, we did not save the output! At the command prompt, type

bin\mallet train-topics --input tutorial.mallet --num-topics 20 --output-state topic-state.gz --output-topic-keys tutorial_keys.txt --output-doc-topics tutorial_composition.txt

Here, we have told MALLET to create a topic model (train-topics) and everything with a double hyphen afterwards sets different parameters. This command

- opens your tutorial.mallet file
- trains MALLET to find 20 topics
- outputs every word in your corpus of materials and the topic it belongs to into a compressed file (.gz)
- outputs a text document showing you what the top key words are for each topic (tutorial_keys.txt)
- and outputs a text file indicating the breakdown, by percentage, of each topic within each original text file you imported (tutorial_composition.txt).

(To see the full range of possible parameters that you may wish to tweak, type bin\mallet train-topics --help at the prompt.)

Type dir. Your outputted files will be at the bottom of the list of files and directories in C:\Mallet. Open tutorial_keys.txt in a word processor (Figure 9). You are presented with a series of paragraphs. The first paragraph is topic 0; the second paragraph is topic 1; the third paragraph is topic 2; etc. (The output begins counting at 0 rather than 1; so if you ask it to determine 20 topics, your list will run from 0 to 19). The second number in each paragraph is the Dirichlet parameter for the topic. This is related to an option which we did not run, and so its default value was used (this is why every topic in this file has the number 2.5).
If when you ran the topic model routine you had included --optimize-interval 20 as below

bin\mallet train-topics --input tutorial.mallet --num-topics 20 --optimize-interval 20 --output-state topic-state.gz --output-topic-keys tutorial_keys.txt --output-doc-topics tutorial_composition.txt

the output might look like this:

0 0.02995 xi ness regular asia online cinema established alvida acclaim veenr commercial

That is, the first number is the topic (topic 0), and the second number gives an indication of the weight of that topic. In general, including --optimize-interval leads to better topics.

The composition of your documents

What topics compose your documents? The answer is in the tutorial_composition.txt file. To stay organized, import the tutorial_composition.txt file into a spreadsheet (Excel, Open Office, etc). You will have a spreadsheet with #doc, source, topic, and proportion columns. All subsequent columns run topic, proportion, topic, proportion, etc., as in figure 10.

You can see that doc# 0 (i.e., the first document loaded into MALLET), elizabeth_needham.txt, has topic 2 as its principal topic, at about 15%; topic 8 at 11%, and topic 1 at 8%. As we read along that first column of topics, we see that zinta.txt also has topic 2 as its largest topic, at 23%. The topic model suggests a connection between these two documents that you might not at first have suspected. If you have a corpus of text files that are arranged in chronological order (e.g., 1.txt is earlier than 2.txt), then you can graph this output in your spreadsheet program, and begin to see changes over time, as Robert Nelson has done in Mining the Dispatch.

How do you know the number of topics to search for? Is there a natural number of topics? What we have found is that one has to run the train-topics with varying numbers of topics to see how the composition file breaks down.
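If you would rather inspect the composition file with a small script instead of a spreadsheet, something like the following works. The file contents here are invented for illustration, but they follow the same tab-separated layout described above (doc#, source, then alternating topic/proportion pairs, with the highest-proportion topic listed first, as in Figure 10):

```python
# A mock tutorial_composition.txt, invented for illustration only;
# swap in the real file with open("tutorial_composition.txt").
sample = (
    "0\telizabeth_needham.txt\t2\t0.15\t8\t0.11\t1\t0.08\n"
    "1\tzinta.txt\t2\t0.23\t5\t0.10\n"
)

for line in sample.splitlines():
    fields = line.split("\t")
    doc_id, source = fields[0], fields[1]
    # Pair up the alternating topic and proportion columns.
    pairs = list(zip(fields[2::2], fields[3::2]))
    # The first pair is the document's principal topic, since the
    # pairs are listed in descending order of proportion.
    top_topic, proportion = pairs[0]
    print(f"{source}: topic {top_topic} at {float(proportion):.0%}")
```

Run against the mock data above, this reports topic 2 as the principal topic for both documents, which is the connection between elizabeth_needham.txt and zinta.txt noted in the previous paragraph.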
If we end up with the majority of our original texts all in a very limited number of topics, then we take that as a signal that we need to increase the number of topics; the settings were too coarse. There are computational ways of searching for this, including using MALLET's hlda command, but for the reader of this tutorial, it is probably just quicker to cycle through a number of iterations (but for more see Griffiths, T. L., & Steyvers, M. (2004). Finding scientific topics. Proceedings of the National Academy of Science, 101, 5228-5235).

Getting your own texts into MALLET

The sample data folder in MALLET is your guide to how you should arrange your texts. You want to put everything you wish to topic model into a single folder within c:\mallet, i.e., c:\mallet\mydata. Your texts should be in .txt format (that is, you create them with Notepad, or in Word choose Save As -> MS Dos text). You have to make some decisions. Do you want to explore topics at a paragraph by paragraph level? Then each txt file should contain one paragraph. Things like page numbers or other identifiers can be indicated in the name you give the file, e.g., pg32_paragraph1.txt. If you are working with a diary, each text file might be a single entry, e.g., april_25_1887.txt. (Note that when naming folders or files, do not leave spaces in the name. Instead use underscores to represent spaces). If the texts that you are interested in are on the web, you might be able to automate this process.

Further Reading about Topic Modeling

To see a fully worked out example of topic modeling with a body of materials culled from webpages, see Mining the Open Web with Looted Heritage Draft. You can grab the data for yourself at Figshare.com, which includes a number of .txt files.
Each individual .txt file is a single news story.

- For extensive background and bibliography on topic modeling you may wish to begin with Scott Weingart's Guided Tour to Topic Modeling
- Ted Underwood's 'Topic modeling made just simple enough' is an important discussion on interpreting the meaning of topics.
- Lisa Rhody's post on interpreting topics is also illuminating. 'Some Assembly Required' Lisa @ Work August 22, 2012.
- Clay Templeton, 'Topic Modeling in the Humanities: An Overview | Maryland Institute for Technology in the Humanities', n.d.
- David Blei, Andrew Ng, and Michael Jordan, 'Latent Dirichlet Allocation,' The Journal of Machine Learning Research 3 (2003).
- Finally, also consult David Mimno's bibliography of topic modeling articles. They're tagged by topic to make finding the right one for a particular application that much easier. Also take a look at his recent article on Computational Historiography from ACM Transactions on Computational Logic which goes through a hundred years of Classics journals to learn something about the field. While the article should be read as a good example of topic modeling, his 'Methods' section is especially important, in that it discusses preparing text for this sort of analysis.
pipeproc 0.2.1

PipeProc

Multi-process log processing for nodejs

Table of Contents
- Intro
- Example
- Installing
- Status
- Process management
- Committing logs
- Read API
- Procs
- SystemProcs
- LiveProcs
- Waiting for procs to complete
- GC
- Typings
- Tests
- Meta
- Contributing

Intro

PipeProc is a data processing system that can be embedded in nodejs applications (eg. electron). It will be run in a separate process and can be used to off-load processing logic from the main "thread" in a structured manner. Underneath it uses a structured commit log and a "topic" abstraction to categorize logs. Inspired by Apache Kafka and Redis streams. In practice it is a totally different kind of system, since it is meant to be run embedded in the main application as a single-instance node. Another key difference is that it also handles the execution of the processing logic by itself and not only the stream pipelining. It does this by using processors, which are custom-written modules/functions that can be plugged into the system, consume topic streams, execute custom logic and push the results to another topic, thus creating a processing pipeline.

Example

const PipeProc = ;const pipeProcClient = ;pipeProcClient;

Installing

npm install --save pipeproc

Status

Process management

spawn

Spawn the node and connect to it. If there is a need to spawn multiple nodes on the same host you can use the namespace option with a custom name. If a custom namespace is used, all clients that will connect() to it will need to provide it.

spawn(options?): Promise<string>;

connect

Connect to an already spawned node. Use case: connect to the same PipeProc instance from a different process (eg. electron renderer)

connect(options?): Promise<string>;

shutdown

Gracefully close the PipeProc instance.

shutdown(): Promise<string>;

Committing logs

This is how you add logs to a topic.
The topic will be created implicitly when its first log is committed. Multiple logs can be committed in a batch, either in the same topic or to different topics; in that case the write will be an atomic operation and either all logs will be successfully written or all will fail.

commit examples

Add a single log to a topic:

pipeProcClient;

commit() will return the id(s) of the log(s) committed. Ids follow a format of {timestamp}-{sequenceNumber} where timestamp is the time the log was committed in milliseconds and the sequence number is an auto-incrementing integer (starting from 0) indicating the log's position in its topic. The log's body can be an arbitrarily nested javascript object.

Adding multiple logs to the same topic:

pipeProcClient;

Notice the timestamps are the same since the two logs were inserted at the same time but the sequence number is different and auto-increments.

Adding multiple logs to different topics:

pipeProcClient;

As before, the timestamps are the same (since they were committed at the same time) but the sequence numbers are both 0 since these two logs are the first logs committed in their respective topics.

Read API

range

Get a slice of a topic.

range signature

range(topic: string, options?): Promise<...>;

range examples

pipeProcClient //timestamps only
pipeProcClient //from beginning to end
pipeProcClient //from specific timestamp to the end
pipeProcClient //with a limit
pipeProcClient //by sequence id
pipeProcClient //=> [5..15]
//by sequence id exclusive
pipeProcClient //=> [6..14]
//returns a Promise that resolves to an array of logs
id: "1518951480106-0"
body: {myData: "hello"}

revrange

Ranges through the topic in an inverted order. start and end should also be inverted. (start >= end). The API is the same as range(). eg.
to get the latest log

pipeProcClient

length

get the total logs in a topic

pipeProcClient.length("my_topic");

Procs

Procs are the way to consistently process logs of a topic. Let's start with an example and explain as we go along.

//lets add some logs
await pipeProcClient;
//run a proc on the "numbers" topic
const log = await pipeProcClient;
//=> log = {id: "1518951480106-0", body: {myNumber: 1}}
try {
    //process the log
    const incrementedNumber = log.data.myNumber + 1;
    //ack the operation and commit the result to a different topic
    await pipeProcClient;
} catch (err) {
    //something went wrong on our processing, the proc should be reclaimed
    console.error(err);
    pipeProcClient;
}

Procs are the way to consistently fetch logs from a topic, process them and commit the results in a safe and serial manner. So, what's going on in the above example?

- first we add a log to our "numbers" topic
- then we create a proc named "my_proc" with an offset of ">" (it means start fetching from the very beginning of the topic, see more below) for the "numbers" topic
- the proc returns a log (the log we added on the first commit)
- we do some processing (incrementing the number)
- we then acknowledge the operation and commit our result to a different topic
- we are also catching errors in our processing and the ack, in that case the proc must be reclaimed.

If everything goes well, the next time we call the proc it will fetch us our second log 1518951480106-1. If something goes wrong and reclaim() is called the proc will be "reset" and will fetch the first log again. Until we call ack() (or ackCommit() in this case) to move on or reclaim() to reset, the proc will not fetch us any new logs.

Here is the whole proc signature:

proc(
    //for what topic this proc is for
    topic: string,
    options
): Promise<null | ... | ...>;

offsets

offsets are how you position the proc to a specific point in the topic.

> - fetch the next log after the latest acked log for this proc.
If no logs have been acked yet, it will start from the beginning of the topic.

$> - like > but it will start from new logs and not from the beginning (logs created after the proc's creation)

{{specific_log/timestamp}} - follows the range() syntax. It can be a full log name, a partial timestamp or a sequence id (:{{id}}). The next non-acked log AFTER the match will be returned.

ack

Acking the proc is an explicit operation and should be run after the log has successfully been processed by calling ack() or ackCommit().

ack(procName: string): Promise<string>;

Returns the logId of the log we just acked. If our proc fetched multiple logs (using count > 1) all of the logs will be acknowledged as processed and the call, instead of an id, will return a range (1518951480106-0..1518951480106-1). The next time the proc is executed it will fetch the next log after the above logId (or range).

ackCommit

ackCommit() combines an ack() and a commit() in an atomic operation. If either of these fail, both will fail.

reclaim

If something goes wrong while we are processing our log(s) or a PipeProc error is raised when we ack/commit our result, we should call reclaim. This will reset the proc, allowing us to retry the operation.

reclaim settings

In the proc's signature there are some settings for the reclaims, allowing us to control how reclaims work and not retry failed operations forever or get stuck.

- maxReclaims - how many times we can call reclaim on a proc before the onMaxReclaimsReached strategy is triggered (defaults to 10, set to -1 for no limit)
- reclaimTimeout - in order not to get stuck by a bad processing (failing to call ack() or reclaim()), the proc will automatically be reclaimed after a certain amount of time by the system; this value sets the time.
- onMaxReclaimsReached - what to do when the maxReclaims are reached. By default it will "disable" the proc, which will raise an error if we try to use the proc.
It can be set to "continue" so we can keep reclaiming forever.

### destroying procs

Since procs are persisted and are not meant to be used as a one-off operation (use a simple range() for that), they need to be explicitly destroyed.

```js
// throws if it doesn't exist
pipeProcClient.destroyProc("my_proc");
```

If a destroyed proc is re-run it will be re-created anew, without maintaining the previous state.

### inspecting procs

Inspect the proc's internal state (last claimed/acked ranges etc.). Useful for debugging.

```js
// throws if it doesn't exist
pipeProcClient.inspectProc("my_proc");
```

### resuming/disabling procs

Manually disable the proc or resume it (e.g. after reaching maxReclaims):

```js
// throws if it doesn't exist
pipeProcClient.disableProc("my_proc");
//.resumeProc("my_proc") - throws if already active or doesn't exist
```

### SystemProcs

Manually executing and managing a proc can be tiresome. SystemProcs take care of all creation/execution/management of procs while also distributing the load to multiple workers. Let's take a look, using the above proc example with incrementing numbers, but now with a systemProc and a processor module.

### systemProc example

```js
// the systemProc call's arguments were not preserved here;
// it wires a topic to the processor module below
pipeProcClient.systemProc(/* ... */);
```

```js
//myProcessor.js
module.exports = function(log, done) {
    //log = {id: "1518951480106-0", body: {myNumber: 1}}
    /* ... */
};
```

Processors can publish to multiple topics by setting the `to` field to an array of topics. If the `to` field is omitted, the processor will not publish any logs (e.g. get a log, process it, write the result to a database).

Instead of using a done callback, you can also return a promise. If an error is returned in the done callback (or a rejected promise is returned), the proc will be reclaimed.

### Inline processors

Processors can also be inlined:

```js
pipeProcClient.systemProc(/* ... */);
```

### LiveProcs

With liveProcs you can react to topic changes without having to keep executing the underlying proc. LiveProcs run in the process in which they are called and are not distributed to the workers like systemProcs.
### liveProc signature

```ts
liveProc(options: { /* options elided */ }): ILiveProc
```

### liveProc example

```js
// the liveProc call's arguments were not preserved here
pipeProcClient.liveProc(/* ... */);
```

Inside the changes function you can either return a promise or use the next callback to keep listening for changes. liveProc instances also have simpler versions of all of the proc's methods (that implicitly point to the underlying proc).

### Waiting for procs to complete

When you have multiple systemProcs and/or liveProcs running, it is sometimes necessary to know when all logs in their topics have been acked, for example when we need to shut down and exit the application:

```js
await pipeProcClient.waitForProcs();
```

waitForProcs() can take a proc name or an array of proc names, and it will wait only for those to complete. If nothing is passed, it will wait for every active proc.

### GC

Logs are immutable and cannot be edited or deleted after creation, so a garbage collector is needed to make sure our topics don't grow too large. Every time it runs, it performs the following:

- for topics that have no procs attached, it collects all logs that have passed the minPruneTime
- for topics that have procs attached, it collects all logs 2 positions behind the last claimed log range, but only if they have also passed the minPruneTime

You can configure the minPruneTime and the gc interval when you spawn the PipeProc node. By default they are both set to 30000ms. The gc is disabled by default; it can be enabled by passing true in the spawn call's gc option, or an object with prune-time and interval settings.

### Caveats/problems

- topic, proc and systemProc metadata are left behind even if the topic is empty and/or no longer used
- the length() function will return an incorrect number if a part of the topic has been collected
- there seems to be a problem with the gc timers on OSX, causing the tests to sometimes fail

### Typings

Since PipeProc is written in TypeScript, all public interfaces are properly typed and should be loaded automatically in your editor.
### Tests

You can run the test suite with:

```
npm install --save-dev
npm run test
```

### Meta

Distributed under the 3-Clause BSD License. See LICENSE for more information.

### Contributing

- Fork it ()
- Create your feature branch (`git checkout -b feature/fooBar`)
- Commit your changes (`git commit -am 'Add some fooBar'`)
- Push to the branch (`git push origin feature/fooBar`)
- Create a new Pull Request

### Install

```
npm i pipeproc
```
# circe-yaml

This is a small library which translates SnakeYAML's AST into circe's AST. It enables parsing YAML 1.1 documents into circe's Json AST.

## Why?

YAML is a useful data format for many purposes in which a more readable, less verbose document is desired. One use case, for example, is human-readable configuration files.

SnakeYAML provides a Java API for parsing YAML and marshalling its structures into JVM classes. However, you might find circe's way of marshalling into a Scala ADT preferable -- using compile-time specification or derivation rather than runtime reflection. This enables you to parse YAML into Json, and use your existing (or circe's generic) Decoders to perform the ADT marshalling. You can also use circe's Encoder to obtain a Json, and print that to YAML using this library.

## Usage

The artifact is hosted by Sonatype, and release versions are synced to Maven Central:

```scala
libraryDependencies += "io.circe" %% "circe-yaml" % "0.12.0"
```

Snapshot versions are available by adding the Sonatype Snapshots resolver:

```scala
resolvers += Resolver.sonatypeRepo("snapshots")
```

## Parsing

Parsing is accomplished through the io.circe.yaml.parser package; its API is similar to that of circe-parser:

```scala
import io.circe.yaml.parser
val json: Either[ParsingFailure, Json] = parser.parse(yamlString)
```

Additionally, there is a function for parsing multiple YAML documents from a single string:

```scala
val jsons: Stream[Either[ParsingFailure, Json]] = parser.parseDocuments(multiDocumentString)
```

Both of these methods also support a "streaming" parse from a java.io.Reader – this is different from the behavior of circe-streaming (which supports fully asynchronous streaming parsing with iteratees) but does provide a convenient way to retrieve YAML from Java inputs:

```scala
val config = getClass.getClassLoader.getResourceAsStream("config.yml")
val json = parser.parse(new InputStreamReader(config))

val configs = getClass.getClassLoader.getResourceAsStream("configs.yml")
val jsons = parser.parseDocuments(new InputStreamReader(configs))
```

Once you've parsed to Json, usage is the same as circe. For example, if you have circe-generic, you can do:

```scala
import cats.syntax.either._
import io.circe._
import io.circe.generic.auto._
import io.circe.yaml

case class Nested(one: String, two: BigDecimal)
case class Foo(foo: String, bar: Nested, baz: List[String])

val json = yaml.parser.parse("""
foo: Hello, World
bar:
    one: One Third
    two: 33.333333
baz:
    - Hello
    - World
""")

val foo = json
  .leftMap(err => err: Error)
  .flatMap(_.as[Foo])
  .valueOr(throw _)
```

Other features of YAML are supported:

- Multiple documents - use `parseDocuments` rather than `parse` to obtain `Stream[Either[ParsingFailure, Json]]`
- Streaming - use `parse(reader: Reader)` or `parseDocuments(reader: Reader)` to parse from a stream. Not sure what you'll get out of it.
- References / aliases - The reference will be replaced with the complete structure of the alias
- Explicit tags (on scalar values only) are handled by converting the tag/scalar pair into a singleton json object:

```yaml
example: !foo bar
```

becomes

```json
{ "example": { "foo": "bar" } }
```

## Printing

The package io.circe.yaml.syntax provides an enrichment to Json which supports easily serializing to YAML using common options:

```scala
import cats.syntax.either._
import io.circe.yaml._
import io.circe.yaml.syntax._

val json = io.circe.jawn.parse("""{"foo":"bar"}""").valueOr(throw _)

println(json.asYaml.spaces2) // 2 spaces for each indent level
println(json.asYaml.spaces4) // 4 spaces for each indent level
```

Additionally, there is a class io.circe.yaml.Printer which (in similar fashion to circe's Printer) can be configured with many options which control the String output.
Its pretty method produces a String using the configured options:

```scala
io.circe.yaml.Printer(dropNullKeys = true, mappingStyle = Printer.FlowStyle.Block)
  .pretty(json)
```

## Limitations

Only JSON-compatible YAML can be used, for obvious reasons:

- Complex keys are not supported (only `String` keys)
- Unlike YAML collections, a JSON array is not the same as a JSON object with integral keys (given the above, it would be impossible). So, a YAML mapping with integral keys will still be a JSON object, and the keys will be strings.

## License

This is released under the Apache 2.0 license, as specified in the LICENSE file. It depends on both circe and SnakeYAML, each of which has its own license. Consult those projects to learn about their licenses.

This library is neither endorsed by, nor affiliated with, SnakeYAML.

## Contributing

As part of the circe community, circe-yaml supports the Typelevel code of conduct and wants all of its channels (Gitter, GitHub, etc.) to be welcoming environments for everyone.

Please read the circe Contributor's Guide for information about how to submit a pull request.

This circe community module is currently maintained by Jeremy Smith and Jeff May, with guidance from Travis Brown. It strives to conform as closely as possible to the style of circe itself.
Thread: Rails: Layouts & Search/Sort

Three questions for you all...

1.

Code:
def index
  list
  render :action => 'list'
end

2. I have an admin section to a site I'm building. Here's the pertinent bit of the layout that all controllers are using:

Code:
<div id="main-content">
  <%= @content_for_layout %>
</div>
<div id="sidebar">
</div>

as new, edit, list etc. My problem is that for each controller I want a specific chunk of content to show up in the sidebar. How can I accomplish this without altering the layout to look something like this:

Code:
<div id="main-content">
  <%= @content_for_layout %>
</div>
<div id="sidebar">
  <% if @controller.controller_name == 'news' %><%= render(:partial => "news_sidebar") %><% end %>
  <% if @controller.controller_name == 'users' %><%= render(:partial => "users_sidebar") %><% end %>
  <% if @controller.controller_name == 'links' %><%= render(:partial => "links_sidebar") %><% end %>
</div>

that takes the controller name as an argument? Other suggestions?

3. I'm used to building searches and sorting mechanisms with PHP that analyze a $_GET request and display data based on those parameters. URLs end up looking like:

Code:

Can someone provide some code for me to look at that does just this? I'm very much a learn-by-seeing kind of guy and would greatly appreciate it. Thanks.
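Regarding question 3, one common pattern is to whitelist the user-supplied sort parameters before they ever reach the query. This is an illustrative sketch only; the column and parameter names are made up, not taken from this thread:

```ruby
# Whitelist of columns the user may sort by; anything else falls back
# to a safe default, so raw request values never reach the SQL.
ALLOWED_SORT_FIELDS = %w[city bedrooms price].freeze

def sort_clause(params)
  field = ALLOWED_SORT_FIELDS.include?(params["sort"]) ? params["sort"] : "price"
  direction = params["dir"] == "desc" ? "DESC" : "ASC"
  "#{field} #{direction}"
end
```

In a Rails-1.x-era controller this could then be used as Listing.find(:all, :order => sort_clause(params)), so URLs like ?sort=city&dir=desc drive the ordering without interpolating raw request values into SQL.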
dos2unix: use monospace font instead of bold fonts for code.

Because all configuration takes place in a single file, there are some variables that let the user choose different configurations based on whether he is building the base system or packages from pkgsrc. These variables are:

* `BSD_PKG_MK`: Defined when a pkgsrc package is built.
* `BUILDING_HTDOCS`: Defined when the NetBSD web site is built.
* None of the above: When the base system is built. The file /usr/share/mk/bsd.README is a good place to start in this case.

A typical `mk.conf` file would look like this:
<pre><code>
# This is /etc/mk.conf
#

.if defined(BSD_PKG_MK) || defined(BUILDING_HTDOCS)
# The following lines apply to both pkgsrc and htdocs.

#...
LOCALBASE= /usr/pkg
#...

.else
# The following lines apply to the base system.

WARNS= 4

.endif
</code></pre>
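A quick way to check which branch of the conditional is in effect is to print a variable from each context. The package and source paths below are only illustrative:

```
# pkgsrc context: BSD_PKG_MK is defined, so LOCALBASE from the first branch is used
cd /usr/pkgsrc/misc/figlet && bmake show-var VARNAME=LOCALBASE

# base-system context: the .else branch applies, so WARNS is set
cd /usr/src/bin/cat && make -V WARNS
```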
Weather on SunTechDays

The SunTechDays conference will take place in Saint-Petersburg this week. Start the desktop applications to track the weather forecasts in Tallinn, Saint-Petersburg, Prague, Menlo Park and Santa Clara.

Module 6: Accessing Web Services

Consider the task of accessing web services using the WeatherForecast application that displays weather forecasts retrieved from the Yahoo! Weather RSS feed. The RSS details are available here. The HttpRequest's onInput variable is implemented to call the PullParser's parse function. The PullParser's onEvent function processes attributes of the elements with the yweather prefix. I would recommend reading the "JavaFX HTTP Networking and XML Parsing" article first.

6.1. HTTP Networking

The HttpRequest class provides an API to make asynchronous HTTP requests. It can also invoke RESTful Web Services. This class enables developers to specify the location and method variables, and to start an HTTP operation by invoking the enqueue function. Other variables of this class, such as started, connecting, writing, reading and done, change their state as the operation progresses. For operations that transfer large amounts of data, the percentage of progress can be computed using the read, toread, written and towrite variables. Callback functions are also called to indicate changes in states as the operation progresses.

This class supports the following HTTP request methods: GET, POST, PUT and DELETE. The GET method passes arguments directly to the request string and is used by default.
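A minimal GET request might look like the following sketch; the feed URL and the code variable are illustrative, not taken from the article:

```
def code = "RSXX0091";
HttpRequest {
    location: "http://weather.yahooapis.com/forecastrss?p={code}&u=c"
    method: HttpRequest.GET
}.enqueue();
```

The location string uses JavaFX Script's expression interpolation; method could be omitted here because GET is the default.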
Once the enqueue function is called, a server response gets a response Node, a header and a content. The response code resides in the responseCode integer variable. Usually it is included into the responseMessage string variable. Commonly-used response Nodes are defined in the HttpStatus class. Use the getResponseHeaderNames function to obtain a list of the HTTP header names set on the HTTP response. The values corresponding to the names can be retrieved through the getResponseHeaderValue function. A response content resides in the input variable in a form of an InputStream object. The onInput variable might be specified by a callback function that is invoked once to indicate that the request body is now available. The provided InputStream object must be closed when done reading in a finally block. Note that the HttpRequest object can be used for a single request only. If you need another request you have to create a new HttpRequest object. HttpRequest { location: url ... onInput: function(input) { try { // read data from input stream ... } finally { input.close() } } }.enqueue() A request's HTTP operation can be canceled by calling the cancel function at any time after the enqueue function has been called. This removes the request from the queue if it is queued, or interrupts the request if it has started executing. Both of these cases cause the done variable to change its value to true. 6.2. Data Processing The PulParser class provides a parser for structured data. The parser generates a sequence of Event objects as it processes the document. Its implementation is quite similar to SAXParser for XML. The PulParser supports only two formats: XML (default) and JSON. Use the documentType variable to specify the format. The parser can be used in combination with the onEvent callback function. This function notifies the application on each pull event encountered in the input stream after the parse function has been called. Note that the PullParser object can be reused. 
Setting the input variable while parsing is in progress will reset the parser.

    def parser = PullParser {
        ...
        onEvent: function(event) {
            if (event.type == PullParser.START_ELEMENT) {
                if (event.qname.prefix == "yweather") {
                    if (event.qname.name == "location") {
                        location = event.getAttributeValue("city");
                        ...
                    }
                    ...
                }
            }
        }
    }

The PullParser class can also be used in linear mode, where the application simply pulls events, discarding or skipping over those that are not processed. The current event is available in the event variable or through the onEvent callback function. For example:

    parser.seek(QName{prefix: "yweather" name: "location"});
    location = parser.event.getAttributeValue("city");

Advantages of the PullParser class:

- The client gets data only when it explicitly requests it.
- An application can use the seek function to skip over elements that are not assigned to be processed.
- A single parser API for both XML and JSON documents.
- A reusable streaming parser.

6.3. XML Support

There are two classes to work with XML: XMLConstants and QName. The XMLConstants class contains several useful constants. The QName class represents a qualified name as defined in the XML specification. It contains the following variables: namespace, name and prefix. The prefix is included in the QName class to retain lexical information that is present in an XML input source. Note that the prefix is not used in the equals function or to compute the hash code.

Module 7: Deploying a JavaFX Application

There are three basic models for deploying a JavaFX application: desktop, browser and mobile. The first two models are based on JavaSE and use the desktop profile of the JavaFX API. This profile enables developers to apply all the beneficial features, including visual effects from the javafx.scene.effect package. The mobile model is based on the extended implementation of JavaME to compile applications. JavaFX SDK 1.1 supports only the common API for mobile applications.
Use the __PROFILE__ pseudo-variable to determine the deployment model of your application:

    var applet = __PROFILE__ == "browser"; // = FX.getProperty("javafx.runtime.isApplet") != null
    var mobile = __PROFILE__ == "mobile"; // = FX.getProperty("javafx.me.profiles") != null
    var normal = __PROFILE__ == "desktop"; // = not applet and not mobile

The current implementation uses the properties given in the commented lines.

7.1. Desktop: Standalone Application

This is the easiest deployment model. All you need is a jar-file of your application; run it with one of the following commands:

    javafx -jar application.jar [arguments]
    javafx -cp application.jar package.mainclass [arguments]

A disadvantage of this model is that you have to ensure that the required JavaSE and JavaFX versions are installed on your computer.

7.2. Desktop: Java WebStart by using JNLP

Java WebStart is a tool for deploying standalone applications on the desktop by using the Java Network Launching Protocol. To deploy your application by using Java WebStart, create a jnlp-file or modify the file that has been generated automatically. Below is an example of the jnlp-file:

    <?xml version="1.0" encoding="UTF-8"?>
    <jnlp spec="1.0+" codebase="" href="Weather.jnlp">
        <!-- ... -->
        <application-desc>
            <argument>code=RSXX0091&u=c</argument>
        </application-desc>
    </jnlp>

The codebase attribute specifies the target location of the jar- and jnlp-files. You may delete the offline-allowed tag, because our application should have access to on-line web services. Arguments are passed to the application by using the argument tags.

7.3. Browser: Java Plug-in with Applet

Java Plug-in is a tool for deploying Java applets that run inside a web browser. Java 6 update 10 has a new Java Plug-in architecture that unifies deployment between Java WebStart and the Java Plug-in. JavaFX provides a wrapper class that enables executing JavaFX applications as applets.
To deploy an application as an applet, create a jnlp-file or modify the file that has been generated automatically. The file name should end with _browser.jnlp. Below is an example of the jnlp-file to run your application in a browser:

    <?xml version="1.0" encoding="UTF-8"?>
    <jnlp spec="1.0+" codebase="" href="Weather_browser.jnlp">
        <!-- ... -->
        <applet-desc>
            <param name="MainJavaFXScript" value="Weather"/>
        </applet-desc>
    </jnlp>

Whenever your JavaFX application is launched by using the Java WebStart technology, the JNLP client checks for updates. You can improve the startup time of your JavaFX application if you enable update checks to be performed in the background. Add the following line to the jnlp-file:

    <update check="background"/>

Use JavaScript to call the javafx function and embed your application into a web page:

    <script src=""></script>
    <script>
        javafx(
            {
                draggable: true,
                archive: "Weather.jar",
                width: 200,
                height: 200,
                code: "Weather",
                name: "Weather"
            },
            {
                code: "RSXX0091&u=c"
            }
        );
    </script>

Note that the second parameter is used to pass arguments to the application. You may omit it if it is not needed.

Set the draggable parameter to true to enable dragging the applet to the desktop. Once the application is launched, press the ALT key and drag the application from the browser. This is the default behavior, and it can be changed by using the AppletStageExtension:

    Stage {
        ...
        extensions: AppletStageExtension {
            useDefaultClose: false
            shouldDragStart: function(event) {
                return applet and event.primaryButtonDown
            }
            onDragStarted: function() { applet = false }
            onAppletRestored: function() { applet = true }
        }
    }

In the preceding code the applet is draggable only when it is located in the browser window. This is implemented by using the local variable named applet. The onDragStarted and onAppletRestored functions change the state of the applet variable when the applet is dragged out of the browser and restored to it, respectively.
Note that the preceding code fragment makes the dragged applet look like a desktop application; however, the AppletStageExtension class resides only in the desktop API, therefore the application can't be compiled for the mobile environment.

7.4. Mobile: JavaFX Mobile Emulator

The JavaFX Mobile Emulator is provided with the JavaFX 1.1 SDK. It displays the application as it would look on a typical mobile device. The JavaFX 1.1 Mobile Emulator supports the common API only. It also limits the size of the jar-file. Modify the descriptor as follows to pass arguments to your application:

    MIDlet-Name: Weather
    MIDlet-Version: 1.0
    MIDlet-Vendor: Sergey Malenkov
    MicroEdition-Profile: JAVAFX-1.1
    MicroEdition-Configuration: CLDC-1.1
    MIDlet-1: Weather,,Weather_MIDlet
    MIDlet-Jar-URL: Weather.jar
    MIDlet-Jar-Size: 79095
    id: RSXX0091&u=c

Note that the same descriptor is used to pass arguments to the application. For example, the last line specifies the string value for the id argument. Use the following command line to run the application on the Mobile Emulator:

    emulator -Xdescriptor:Weather.jad

7.5. Arguments Parsing

From the previous sections you've learned how to pass arguments to your application. Now consider parsing arguments in the application. It is worth mentioning that the browser and mobile deployment models pass arguments as key-value pairs. Use the following function to retrieve a value by its key. This function returns null if there is no value set for the key:

    var code = FX.getArgument("code");

You can use two methods to retrieve arguments when using the desktop model. First, you can define a global function named run:

    function run(args: String[]) {
        var code: Object;
        for (arg in args) {
            code = if (indexof arg > 0) then "{code} {arg}" else arg
        }
        println('code: {code}');
    }

Second, you can use the following utility function:

    def args = FX.getArguments();

There is no big difference between those two approaches.
When using the first one, all variables defined within the run function are local. With the second approach, all variables are global. Use the FX.getArgument function to receive a value by its key, similar to the implementation in the browser and mobile models.

7.6. JavaFX Packager

The JavaFX Packager utility creates an application in a format that is specific to a target profile, either desktop or mobile. Its documentation is included in the JavaFX 1.1 SDK but is not available on the web. The NetBeans IDE incorporates this utility and makes it available to users when they choose an execution model. Use the following command to run the JavaFX Packager utility:

    javafxpackager -src D:\projects\Weather\sources -appClass Weather

Consider the basic options you can use to run this application:

- -src | -sourcepath <source root1>[;<source root2> ...] - Specifies the root directory of the source tree holding the application to be packaged. At least one source root is required.
- -cp | -classpath | -librarypath <library1>[;<library2> ...] - Specifies the classpath.
- -res | -resourcepath <resource root1>[;<resource root2> ...] - Specifies the resource path.
- -d | -destination <target directory> - Specifies the target directory for application files. The default target directory is ./dist.
- -workDir <working directory> - Specifies the folder for temporary files. If the folder is not specified, all temporary files will be removed.
- -appClass <application class> - Specifies the main class of the application.
- -appName <application name> - Specifies the application name, which is the application class name by default.
- -appVendor <application vendor> - Specifies the application vendor, which is the user.name Java system property by default.
- -appVersion <application version> - Specifies the application version, which is 1.0 by default.
- -appWidth <applet width> - Specifies the applet width, which is 200 pixels by default. It is used to generate the html-file and the jnlp-file for the browser.
- -appHeight <applet height> - Specifies the applet height, which is 200 pixels by default. It is used to generate the html-file and the jnlp-file for the browser.
- -appCodebase <application code base URL> - Specifies the codebase for the application. It is used to generate jnlp-files. The default codebase is a local path to the generated jnlp-files.
- -p | -profile [ desktop | mobile ] - Specifies the JavaFX platform. JavaFX 1.1 SDK supports the desktop and mobile profiles. The default profile is desktop. Use the mobile profile to generate the files needed by the JavaFX Mobile Emulator. Internally this option specifies the configuration file that sets environment variables.
- -draggable - Makes the application draggable from the web browser. Adds draggable: true to the generated html-file.
- -pack200 - Compresses the application using Pack200 to a file with the jar.pack.gz extension. It is used to generate jnlp-files.
- -sign - Builds a signed application. If it is not specified, a self-signature is not created.
- -keystore <keystore for signing> - Specifies the keystore for the signed application. A temporary self-signature is created when the keystore is not specified.
- -keystorePassword <password> - Specifies the password when signing with a specific keystore. It is mandatory in that case.
- -keyalias <alias> - Specifies the keyalias when signing with a specific keystore. It is mandatory in that case.
- -keyaliasPassword <password> - Specifies the password for the specific keyalias. If it is not specified, the keystore password is used instead.
- -paramFile <property file> - Specifies the named parameters for the application, stored in the specified property file. It is used to generate the jad-file, the html-file and the jnlp-file for the desktop.
What are Data Types?

Data types determine the type of data that any variable can store. Variables belonging to different data types are allocated different amounts of space in memory. There are various data types in VB.NET. They include:

- Boolean: the allocated storage depends on the platform of implementation. Its value can be either True or False.
- Byte: allocated a storage space of 1 byte. Values range from 0 to 255 (unsigned).
- Char: allocated a space of 2 bytes. Values range from 0 to 65535 (unsigned).
- Date: allocated a storage space of 8 bytes. Values range from 0:00:00 (midnight) January 1, 0001 to 11:59:59 PM of December 31, 9999.
- Integer: has a storage space of 4 bytes. Values range between -2,147,483,648 and 2,147,483,647 (signed).
- Long: has a storage space of 8 bytes. Numbers range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 (signed).
- String: the storage space allocated depends on the platform of implementation. Values range from 0 to about 2 billion Unicode characters.

In this tutorial, you will learn:

- Data Types
- Type Conversion Functions
- Variable Declaration
- Variable Initialization
- Accepting User Values
- Lvalues and Rvalues

Type Conversion Functions

There are functions that we can use to convert from one data type to another. They include:

- CBool(expression): converts the expression to a Boolean data type.
- CDate(expression): converts the expression to a Date data type.
- CDbl(expression): converts the expression to a Double data type.
- CByte(expression): converts the expression to a Byte data type.
- CChar(expression): converts the expression to a Char data type.
- CLng(expression): converts the expression to a Long data type.
- CDec(expression): converts the expression to a Decimal data type.
- CInt(expression): converts the expression to an Integer data type.
- CObj(expression): converts the expression to an Object data type.
- CStr(expression): converts the expression to a String data type.
- CSByte(expression): converts the expression to an SByte data type.
- CShort(expression): converts the expression to a Short data type.

Variable Declaration

In VB.NET, the declaration of a variable involves giving the variable a name and defining the data type to which it belongs. We use the following syntax:

Dim Variable_Name as Data_Type

In the above syntax, Variable_Name is the variable name while Data_Type is the data type to which the variable belongs. Here is an example of a valid variable declaration in VB.NET:

Dim x As Integer

In the above example, 'x' is the variable name while Integer is the data type to which variable x belongs.

Variable Initialization

Initializing a variable means assigning a value to the variable. The following example demonstrates this:

Dim x As Integer
x = 10

Above, we have declared an integer variable named 'x' and assigned it a value of 10. Here is another example:

Dim name As String
name = "John"

Above, we have declared a string variable named name and assigned it a value of John.

If you declare a Boolean variable, its value must be either True or False. For example:

Dim checker As Boolean
checker = True

Above, we have defined a Boolean variable named checker and assigned it a value of True.

Let us demonstrate how to declare and initialize a variable using a code example:

Step 1) Create a New Project

- Open Visual Studio. Click the File menu, choose New, then Project from the toolbar. In the new window, click Visual Basic from the left vertical navigation pane and choose Windows Forms Application.
- Give it a name and click the OK button. The project will be created.

Step 2) Create a Button

- To create a button, begin by opening the design tab.
- Drag the Button control from the toolbox into the form:

Step 3) Click the other tab located to the left of the design tab. You can also double-click the button that you have added to the form.
Step 4) Add the following code to add text to the control:

Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
    Dim checker As Boolean
    checker = True
    MsgBox("The value of variable checker is : " & checker)
    Dim x As Integer
    x = 32
    MsgBox("The value of variable x is : " & x)
    Dim name As String
    name = " Guru99 "
    MsgBox("The value of variable name is : " & name)
End Sub

Step 5) You can now run the code by clicking the Start button located at the top bar:

Step 6) You should get the following form:

Step 7) Click Button 1. You should get the following dialog:

Click the OK button to move to the next dialog. It should be as follows:

Again, click the OK button to move to the next dialog. It should be as follows:

Here is a screenshot of the complete code for the above:

Explanation of code:

- Creating a public class named Form1.
- Creating a sub-procedure named Button1_Click. It will be called when the button is clicked. The sender object raises the event, while the e argument carries data for the event. EventArgs forms the base class for all VB.NET event arguments. The Handles Button1.Click part states that the sub-procedure will handle any click on the button.
- Creating a Boolean variable named checker.
- Assigning a value of True to the checker variable.
- Creating a message box dialog to show some text and the value of the variable checker.
- Creating an integer variable named x.
- Assigning a value of 32 to the variable x.
- Creating a message box dialog to show some text and the value of the variable x.
- Creating a string variable named name.
- Assigning a value of Guru99 to the variable name.
- Creating a message box dialog to show some text and the value of the variable name.
- Ending the sub-procedure.
- Ending the class.

Accepting User Values

When creating an application, you may need a way of getting input from the user. This can be done using the ReadLine function of the Console class in the System namespace.
Once you have received the input from the user, you need to assign it to a variable. For example:

    Dim user_message As String
    user_message = Console.ReadLine

In the above example, we have defined a variable named user_message. The message read from the console is assigned to that variable. Let us demonstrate this:

Step 1) Create a Console Application

- On the new window, click Visual Basic in the left vertical navigation pane and choose Console Application.
- Give it a name and click the OK button. The project will be created.

Step 2) Add the following code to the workspace:

    Module Module1
        Sub Main()
            Dim user_message As String
            Console.Write("Enter your message: ")
            user_message = Console.ReadLine
            Console.WriteLine()
            Console.WriteLine("You typed: {0}", user_message)
            Console.ReadLine()
        End Sub
    End Module

Step 3) You can now run the code by clicking the Start button located at the top bar.

Step 4) Enter your message

- On running the code, you will get a console window.
- Type in any message and hit the Enter key.

Explanation of code:

- Creating a module named Module1. It will act as the class for the code.
- Creating the main sub-procedure for the module. It will be invoked when the class is run/executed.
- Creating a string variable named user_message.
- Printing a message on the console asking the user to type a message.
- Assigning the value read from the console to the above variable.
- Printing an empty line on the console.
- Printing some text and the message typed by the user on the console.
- Reading a final line so the console stays open until the user presses Enter.
- Ending the sub-procedure.
- Ending the module.

Lvalues and Rvalues

VB.NET expressions are of two types:

- lvalue: an lvalue expression may appear on the left-hand or the right-hand side of the assignment operator.
- rvalue: an rvalue expression can appear only on the right-hand side of the assignment operator, not on the left-hand side.
Variables are lvalues, meaning that we can put them on the left side of the assignment operator. For example:

    Dim x As Integer = 32

Numeric literals, on the other hand, are rvalues: they cannot be assigned to and cannot appear on the left-hand side of the assignment operator. For example:

    32 = x

The above expression is invalid and will generate a compile-time error.

Summary

- Each variable must belong to a data type. The data type determines the amount of memory space allocated to the variable.
- We can convert a variable from one data type to another.
- Initializing a variable means assigning a value to it.
- We created a console application to get input from the user via the console using the ReadLine function.
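The lvalue/rvalue rule is not unique to VB.NET. As a quick illustration in Python (used here only because it can test the rule at runtime; the variable names are mine), compiling an assignment to a literal fails before the code ever runs, while assigning to a variable works:

```python
# A variable is an lvalue: it may appear on either side of '='.
x = 32          # legal: x on the left-hand side
y = x           # legal: x on the right-hand side

# A literal is an rvalue: "32 = x" is rejected at compile time.
try:
    compile("32 = x", "<demo>", "exec")
    literal_assignment_allowed = True
except SyntaxError:
    literal_assignment_allowed = False
```

The same program in VB.NET would be rejected by the compiler with a similar "expression is a value and cannot be the target of an assignment" error.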
Agenda

See also: IRC log

saz: background info on HTTP request/response notes
... recalls that these were to be published as working notes
... anything missing?

JK: cookies issue
... seems to be no other issues - send spec to list
... will be able to edit as working group note. Carlos will help.

SAZ: any objections to publishing as note?

resolution: HTTP request/response spec to be published as working group note.
resolution: JK will edit

saz: cookies - publish as note in spec. Will discuss later. JK will send message to list to discuss.

JK: namespace issue still unresolved?

saz: do we want a temp namespace? (for now, then switch to formal)

discussion of namespace; temp namespace was helpful in the short term

JK: used several namespaces in spec

saz: will look at what other groups are using but would like to use temp for now

<chaals> [I think it is a *terrible* idea to change namespaces. It is just an opaque string, and changing it just requires everyone to handle a bunch of extra stuff for no identifiable purpose]

<scribe> ACTION: saz will look into namespace for HTTP request/response stuff and talk to JK [recorded in]

saz: dc:location topic from CI - remove location (since using request/response)?

discussion of dc:location

<chaals> [Location in the current spec isn't used in the same way as request/response, but I don't think it is remotely necessary as currently used anyway, so support removing it]

saz: define our own earl:location?

discussion of perhaps using something like earl:uri

<JohannesK> ACK

resolution: use earl:uri instead of dc:location

ci: concerns over testcase element in earl draft (message sent to list)
... testrequirement element is OK but is more like testcase than testrequirement

saz: seems more like an editorial problem

ci: important that we have a common view of what elements mean
... does the group agree with the suggestions from the message posted to the list?
jj: agrees with ci's post to list; wording is clearer

<CarlosI> IMO a requirement is a documented need of what a particular product should be or do (what you expect from the product), and tests are processes used to ensure products are designed and produced to meet those requirements (the way you test whether you have what you expected or not).

discussion of ci's proposal

<chaals> [I think the words are much more flexible and harder to pin down in English usage than Carlos' idea of them, but I like the wording proposals anyway...]

<JohannesK> 1. test (actual test that is performed)

discussion of what the actual tests are - something specific like an HTML test suite, or something more general like a WCAG guideline SC

saz: review of evidence class
... good discussion and we should take it to the list for more consideration
... is this a showstopper (hold off publication of EARL spec)?

ci: draft is still OK, but this needs to be resolved before final status

saz: after discussion on the list we will bring it up on next week's call

<scribe> ACTION: saz will send to mailing list [recorded in]

<shadi> * Chaals make proposal for schema to describe how to add a Collection as evidence
<shadi> - <>
<shadi> * jim write up proposal for "earl:snippet" locator type
<shadi> - <>
<shadi> saz: sign up and book hotels soon!
XML Elements

An XML document contains XML elements.

What is an XML Element?

An XML element is everything from (including) the element's start tag to (including) the element's end tag. An element can contain:

- text
- attributes
- other elements
- or a mix of the above

    <bookstore>
      <book category="children">
        <title>Harry Potter</title>
        <author>J K. Rowling</author>
        <year>2005</year>
        <price>29.99</price>
      </book>
      <book category="web">
        <title>Learning XML</title>
        <author>Erik T. Ray</author>
        <year>2003</year>
        <price>39.95</price>
      </book>
    </bookstore>

In the example above:

<title>, <author>, <year>, and <price> have text content because they contain text (like 29.99).

<bookstore> and <book> have element content, because they contain elements.

<book> has an attribute (category="children").

Empty XML Elements

An element with no content is said to be empty. In XML, you can indicate an empty element like this:

    <element></element>

You can also use a so-called self-closing tag:

    <element />

The two forms produce identical results in XML software (readers, parsers, browsers). Empty elements can have attributes.

XML Naming Rules

Any name can be used; no words are reserved (except xml).

Best Naming Practices

Create descriptive names, like this: <person>, <firstname>, <lastname>.

Create short and simple names, like this: <book_title>, not like this: <the_title_of_the_book>.

Avoid "-". If you name something "first-name", some software may think you want to subtract "name" from "first".

Avoid ".". If you name something "first.name", some software may think that "name" is a property of the object "first".

Avoid ":". Colons are reserved for namespaces (more later).

Non-English letters like éòá are perfectly legal in XML, but watch out for problems if your software doesn't support them.

Naming Styles

There are no naming styles defined for XML elements, but several are commonly used; if you choose a naming style, it is good to be consistent! XML documents often have a corresponding database.
A common practice is to use the naming rules of the database for the XML elements. Camel case is a common naming rule in JavaScript.

XML Elements are Extensible

XML elements can be extended to carry more information. Look at the following XML example:

    <note>
      <to>Tove</to>
      <from>Jani</from>
      <body>Don't forget me this weekend!</body>
    </note>

Let's imagine that we created an application that extracted the <to>, <from>, and <body> elements from the XML document to produce a simple message display. Now imagine that the author of the XML document added some extra information to it, such as a <date> element: the application can still find the <to>, <from>, and <body> elements and produce the same output. This is one of the beauties of XML: it can be extended without breaking applications.
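The element, attribute, and text distinctions above are easy to check programmatically. Here is a small Python sketch using the standard-library ElementTree parser over the bookstore example (the variable names are mine):

```python
import xml.etree.ElementTree as ET

xml_text = """
<bookstore>
  <book category="children">
    <title>Harry Potter</title>
    <author>J K. Rowling</author>
    <year>2005</year>
    <price>29.99</price>
  </book>
  <book category="web">
    <title>Learning XML</title>
    <author>Erik T. Ray</author>
    <year>2003</year>
    <price>39.95</price>
  </book>
</bookstore>
"""

root = ET.fromstring(xml_text)

# <book> elements have element content and a 'category' attribute...
categories = [book.get("category") for book in root.findall("book")]

# ...while <title> (like <author>, <year>, <price>) has text content.
titles = [book.find("title").text for book in root.findall("book")]
```

Because the application navigates by element name rather than by position, adding a new child element to <book> would not break either of these lookups, which is the extensibility point made above.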
This program reverses a given string, but I can't seem to figure out how to return the entire buffer. All answers welcome.

    #include <iostream>
    #include <string.h>
    using namespace std;

    char ReverseString(char *input);

    int main()
    {
        string str;
        cout << "Please Enter a String: ";
        getline(cin, str);
        cout << "The string you entered is: " << str << endl;
        str = ReverseString(&str[0]);
        cout << str;
    }

    char ReverseString(char* input)
    {
        int size = strlen(input);
        char *it;
        char *revBuffer = new char[size];
        it = revBuffer;
        for(int i = 0; i < size; i++)
        {
            *it = input[(size - 1) - i];
            it++;
        }
        return *revBuffer; // I can't seem to figure out how to return the entire buffer instead of just the one element the pointer is pointing to.
    }

I have a program that prints out the characters of a string using a for-loop. It must also print the same characters in reverse, which is where I'm having problems. Can someone help me figure out why the second for-loop isn't executing?

    int main()
    {
        string myAnimal;
        cout << "Please enter the name of your favorite animal.\n";
        cin >> myAnimal;

        // This loop works fine
        int i;
        for(i = 0; i < myAnimal.length(); i++){
            cout << myAnimal.at(i) << endl;
        }

        // This one isn't executing
        for(i = myAnimal.length(); i > -1; i--){
            cout << myAnimal.at(i) << endl;
        }
        return 0;
    }

I'm studying Python like crazy and have many, many questions. This time it's about functions. I need to create two functions: the first, for numbers, sums up everything the user inputs into the list; the second takes a list of words the user inputs and, without touching the word indexes in the list, returns each word reversed (at the same index). I can show you my code. I think I don't have problems with the numbers function; I need your help with the reverse function. I tried some ways, even one "for" inside another, but I prefer some easy ways.
    def sum(numbers):
        acc = 0
        for numb in numbers:
            acc += numb
        return acc

    def rever(strings):
        r = []
        for i in strings:
            for n in i:
                reversed(r[n])
        return r

    numbers = [int(x) for x in input("Please input at least 5 numbers (Use space): ").split()]
    print(sum(numbers))
    strings = [str(x) for x in input("Please input at least 5 words (Use Space): ").split()]
    print(rever(strings))

I need the user to enter some text; later this text should be reversed. But while reversing, I need to print every step. I really don't know how it works, how it reverses. If the user enters "stack", in the end I need to get "kcats", but before that I also need to print all the steps. For example, if it starts from the first and the last letters, the first step should be "ktacs", and it continues like that. I have some kind of a code:

    input_string = input("\n Please Enter Text To Reverse - ")
    for i in input_string:
        print(input_string[::-1])

It does a reverse, but I can't print it out step by step. Any help please?

I'm trying to reverse a string inputted by the user via interrupt 21h function 0Ah, by placing that input into a buffer and then reversing it to show it back to the user. So far this is what I've got:

    .data
    bufferMe  DB 21             ; User Input
    buffers   DB 21 dup('$')    ; Memory Allocated for that Input
    lenStr    EQU $-buffers
    bufferMe2 DB 21
    buffers2  DB 21 dup('$')
    pos  DW ?
    msg1 DB 'Input String: ', '$'

    .code
    INITIATE:
        mov ax, @data
        mov ds, ax

        ;;; Ask for User Input
        mov ah, 09h
        mov dx, offset msg1
        int 21h

        ;;; Get User Input
        mov ah, 0Ah
        mov dx, offset bufferMe
        int 21h

        ;;; Reverse the Input
        LEA si, bufferMe2
        LEA di, bufferMe
        ;add di, 4
        mov cx, lenStr
    Reverse:
        mov al, [di]
        mov [si], al
        inc si
        dec di
        loop Reverse

The output... well, there's no output. But if I placed msg1 DB 'Input String: ', '$' at the start of the data segment of the code, it would print that in reverse but without the actual input of the user: :gnirtS tupnI_. Can anyone tell me what's wrong?
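A minimal Python sketch of the step-by-step reversal asked about above (the names are mine, not from any poster's code): it swaps the outermost remaining pair of characters and records the string after each swap, so "stack" yields "ktacs" and then "kcats", matching the example in the question. The same two-pointer swap is also what the assembly attempt is going for.

```python
def reverse_with_steps(s):
    """Reverse s by repeatedly swapping the outermost remaining pair,
    recording the intermediate string after each swap."""
    chars = list(s)
    steps = []
    i, j = 0, len(chars) - 1
    while i < j:
        chars[i], chars[j] = chars[j], chars[i]  # swap ends, move inward
        steps.append("".join(chars))
        i += 1
        j -= 1
    return steps

for step in reverse_with_steps("stack"):
    print(step)
```

If you only need the final result, `s[::-1]` does it in one expression; and the per-word reversal the earlier list question wanted is simply `[w[::-1] for w in strings]`, which keeps every word at its original index.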
Building a medium- to large-sized program usually requires you to divide tasks into several smaller, more manageable, and more interactive pieces. (A rule of thumb is that each "piece" should be about one or two printed pages in length, but this is just a general guideline.) An analogy can be made to building a microarray machine, which requires that you construct separate interacting pieces such as housing, temperature sensors and controls, robot arms to position the pipettes, hydraulic injection devices, and computer guidance for all these systems. Subroutines divide a large programming job into more manageable pieces. Modern programming languages all provide subroutines, which are also called functions, coroutines, or macros in other programming languages. A subroutine lets you write a piece of code that performs some part of a desired computation (e.g., determining the length of DNA sequence). This code is written once and then can be called frequently throughout the main program. Using subroutines speeds the time it takes to write the main program, makes it more reliable by avoiding duplicated sections (which can get out of sync and make the program longer), and makes the entire program easier to test. A useful subroutine can be used by other programs as well, saving you development time in the future. As long as the inputs and outputs to the subroutine remain the same, its internal workings can be altered and improved without worrying about how the changes will affect the rest of the program. This is known as encapsulation. The benefits of subroutines that I've just outlined also apply to other approaches in software engineering. Perl modules are a technique within a larger umbrella of techniques known as software encapsulation and reuse. Software encapsulation and reuse are fundamental to object-oriented programming. A related design principle is abstraction, which involves writing code that is usable in many different situations. 
Let's say you write a subroutine that adds the fragment TTTTT to the end of a string of DNA. If you then want to add the fragment AAAAA to the end of a string of DNA, you have to write another subroutine. To avoid writing two subroutines, you can write one that's more abstract and adds to the end of a string of DNA whatever fragment you give it as an argument. Using the principle of abstraction, you've saved yourself half the work. Here is an example of a Perl subroutine that takes two strings of DNA as inputs and returns the second one appended to the end of the first:

    sub DNAappend {
        my ($dna, $tail) = @_;
        return($dna . $tail);
    }

This subroutine can be used as follows:

    my $dna   = 'ACCGGAGTTGACTCTCCGAATA';
    my $polyT = 'TTTTTTTT';
    print DNAappend($dna, $polyT);

If you wish, you can also define subroutines polyT and polyA like so:

    sub polyT {
        my ($dna) = @_;
        return DNAappend($dna, 'TTTTTTTT');
    }

    sub polyA {
        my ($dna) = @_;
        return DNAappend($dna, 'AAAAAAAA');
    }

At this point, you should think about how to divide a problem into interacting parts; that is, an optimal (or at least good) way to define a set of subroutines that can cooperate to solve a particular problem. In my projects, I gather subroutine definitions into separate files called libraries,[1] or modules, which let me collect subroutine definitions for use in other programs. Then, instead of copying the subroutine definitions into the new program (and introducing the potential for inaccurate copies or for alternate versions proliferating), I can just insert the name of the library or module into a program, and all the subroutines are available in their original, unaltered form. This is an example of software reuse in action.

[1] Perl libraries were traditionally put in files ending with .pl, which stands for perl library; the term library is also used to refer to a collection of Perl modules. The common denominator is that a library is a collection of reusable subroutines.
To fully understand and use modules, you need to understand the simple concepts of namespaces and packages. From here on, think of a Perl module as any Perl library file that uses package declarations to create its own namespace. These simple concepts are examined in the next sections.
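The DNAappend/polyT/polyA trio is language-independent; for contrast, here is the same abstraction sketched as a Python module (the snake_case names are my illustrative translations, not from any existing library):

```python
# dna_utils: the Perl subroutines above, sketched as a reusable Python module.

def dna_append(dna, tail):
    """Return `tail` appended to the end of `dna` -- the abstract version
    that takes the fragment as an argument."""
    return dna + tail

def poly_t(dna):
    """Specialization of dna_append for a poly-T tail."""
    return dna_append(dna, "TTTTTTTT")

def poly_a(dna):
    """Specialization of dna_append for a poly-A tail."""
    return dna_append(dna, "AAAAAAAA")
```

Saved as dna_utils.py, the functions would be pulled into another program with a single `import dna_utils`, which is the same reuse-without-copying benefit the Perl `use` statement provides.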
One of the important steps in OCR is the thresholding process. It helps us separate the text regions (foreground) from the background. If you apply a thresholding algorithm like Otsu or Sauvola directly, you might end up with a lot of noise, and some of your text regions may even get categorized as background. Take a look at the below examples, where I applied the Sauvola algorithm directly on grayscale images without any pre-processing.

As you can see in the outputs, there is a lot of noise in the images, and a few characters in the binary image are also vague. Passing this image directly into an OCR engine like Tesseract may not yield the best results. To overcome these issues related to noise and loss of text regions, I usually try to adjust gamma and remove noise before thresholding. The choice of algorithms for contrast enhancement and noise removal may differ based on the image type. If the image is of low resolution, noise removal may clean up some of your regions of interest. So, it's important to use noise removal only when the image is of high resolution. Here is my pre-processing code:

    from skimage.color import rgb2gray
    import matplotlib.pyplot as plt
    from skimage.io import imread
    from skimage.filters import threshold_sauvola
    from skimage.exposure import adjust_gamma
    from skimage.restoration import denoise_tv_chambolle

    cimage = imread('')
    gamma_corrected = adjust_gamma(cimage, 1.2)
    noise_removed = denoise_tv_chambolle(gamma_corrected, multichannel=True)
    gry_img = rgb2gray(noise_removed)  # threshold the pre-processed image, not the original
    th = threshold_sauvola(gry_img, 19)
    bimage = gry_img > th

    fig, ax = plt.subplots(ncols=2, figsize=(20, 20))
    ax[0].imshow(cimage)
    ax[0].axis("off")
    ax[1].imshow(bimage, cmap="gray")
    ax[1].axis("off")

The outputs for the above sample images are as below:

What I am basically doing is enhancing the contrast first to darken text regions and then removing noise using the total variation denoising method.
If you have a better pre-processing suggestion, let me know in the comments below.
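Gamma adjustment itself is just a pointwise power law on intensities scaled to [0, 1]; skimage's adjust_gamma does essentially the following. This is a simplified sketch of mine (it ignores the integer-image rescaling the library also handles):

```python
import numpy as np

def adjust_gamma_manual(image, gamma=1.0, gain=1.0):
    """Pointwise gamma correction: out = gain * in**gamma, on [0, 1] floats.
    gamma > 1 darkens mid-tones (useful for making text stand out before
    thresholding); gamma < 1 brightens them."""
    image = np.asarray(image, dtype=float)
    return gain * image ** gamma

darkened = adjust_gamma_manual(np.array([0.0, 0.25, 1.0]), gamma=2.0)
```

With gamma=2.0 a mid-tone of 0.25 drops to 0.0625 while black and white are unchanged, which is why a gamma slightly above 1 (1.2 in the code above) pushes faint text toward the foreground side of the threshold.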
12 October 2011 08:26 [Source: ICIS news]

SINGAPORE (ICIS)--Shell has achieved stable production at its largest crude distillation unit (CDU), the 210,000 bbl/day No 5 at Bukom in Singapore.

The CDU may provide Shell's 800,000 tonne/year mixed-feed cracker at the site with sufficient feedstock to continue running, they said. There were initial expectations that the cracker would have to halt operations, as its supply of feedstock naphtha from Shell's 540,000 bbl/day refinery in Bukom was disrupted.

The refinery, which has three CDUs, was forced shut by a blaze at the manufacturing site on 28-29 September. Shell restarted the biggest of the three CDUs on 9 October, while the other two remained shut, with no clear timeline for when production can resume. The company is also expected to ramp up operating rates at its long residue catalytic cracker (LRCC), which produces light distillates, including propylene, later on Wednesday.

Meanwhile, the company's 350,000 tonne/year Group I base oils unit in Bukom sustained damage at its furnace following an emergency shutdown during the blaze that hit the site, market sources said. The base oils plant is expected to restart by the end of next week, they said.

Asked about the operating status of Shell's Bukom facilities, a company spokesperson said: "We can confirm that some operations have continued and some operations will resume at the site but we are unable to comment on operational
1. Introduction

1.1. Overview

As I explained in a previous blog post, I delivered a presentation this week as part of the Microsoft TechEd 2011 event. The presentation was titled "Windows Server 2008 R2 File Services Consolidation - Technology Update". It included two demos that showed several Windows Server 2008 R2 features and also a little SQL Server 2008 R2. You can listen to a recording of this presentation at

In this post, I am sharing the steps I used to create the demo, so you can reproduce the environment used in the demo and experiment with the technologies yourself. Since I wanted you to be able to do this on your own even if you're not already running the latest version of Windows Server or SQL Server, I used evaluation versions that you can download from the web at no cost (the links are provided below). You also only need to have a single computer (the specs are provided below).

This will require a few hours of work to complete from start to finish, but it is a great way to experiment with a fairly large set of Microsoft technologies, including:

- Windows Server 2008 R2
- Hyper-V
- Networking
- Domain Name Services (DNS)
- Active Directory Domain Services (AD-DS)
- iSCSI Software Target 3.3
- iSCSI Initiator
- File Server (SMB2)
- Failover Clustering (WSFC)
- SQL Server 2008 R2

Follow the steps and let me know how it goes in the comment section. If you run into any issues or found anything particularly interesting, don't forget to mention the number of the step.

1.2. Hardware

You will need the following hardware to install the demo:

- One computer capable of running Windows Server 2008 R2 and Hyper-V (64-bit, virtualization technology)
- At least 8 GB of RAM
- In my case, I am using a Lenovo W500 with 8GB of RAM, Intel Core2 Duo, P9600 @ 2.67 GHz
- Internet connection for downloading software and updates
- A USB stick, if you're installing Windows Server from USB and copying downloaded software around (you can also burn the software to a DVD)

1.3.
Downloadable software

You will need to download the following software to install the demo:

- Windows Server 2008 R2 with SP1 Evaluation
- SQL Server 2008 R2 Evaluation

The links above take you to the evaluation versions of the software. If you are an MSDN or TechNet subscriber, you can download from there instead. You will also use the Microsoft iSCSI Software Target for Windows Server 2008 R2. This is now a public download. Find details at

1.4. Notes and disclaimers

- This post does not include a screenshot for every single step in the process. I focused the screenshots on specific decision points where defaults are not used or the course of action is not clear.
- The text for each step also focuses on the specific actions that deviate from the default or where a clear default is not provided. If you are asked a question or required to perform an action that you do not see described in these steps, go with the default option.
- Obviously, a single-computer solution cannot be tolerant to the failure of that computer. So, the configuration described here is not really fault-tolerant. It is adequate only for demonstrations, testing or learning. You will definitely need a different configuration for a production deployment.
- A certain familiarity with Windows Server administration and configuration is assumed. If you're new to Windows Server, this post is not for you. Sorry...
- There are usually several ways to perform a specific Windows Server configuration or administration task. What I describe here is one of those many ways. It's not necessarily the best way, just one of them.

2. Install Windows Server 2008 R2 with SP1 and Hyper-V

2.1. Format a USB disk using Windows 7 or Windows Server 2008 R2, then copy the contents of the ISO to a USB stick (I used an 8 GB one). If you don't have a tool to open an ISO file, you can simply burn it to a DVD and copy from there.

2.2. Make sure your BIOS is configured for Virtualization.

2.3.
Use the boot menu to boot from the USB drive (you can also boot from a DVD, if you burned the ISO to a DVD).

2.4. Select Datacenter edition.

2.5. Select the partition on which to install the OS.

2.6. Choose an administrator password.

2.7. Use Windows Update to get all the available updates.

2.8. Optionally, rename the computer to CONTOSO-H0.

3. Add Hyper-V and configure Virtual Networks

3.1. From Server Manager, select Add Roles.

3.2. Select Hyper-V.

3.4. In the "Create Virtual Networks" page, select the physical network interface connected to the Internet.

3.5. After Hyper-V is installed, use Hyper-V Manager and open the Virtual Network Manager.

3.6. Configure 3 internal networks for communication between the VMs, in addition to the external one.

4. Create the Base VM

4.1. Create a folder for your ISO files at C:\ISO and a folder for your VMs at C:\VMS.

4.2. Copy the Windows Server 2008 R2 SP1 ISO file to C:\ISO. Since you are both connected to the External network, you can use SMB2 to copy to the VM simply using a UNC path to a VM drive: \\CONTOSO-H0\C$. Or you can use the USB stick.

4.3. Create a new VM called BASE using C:\VMS as a location.

4.4. Select 1024MB for the amount of RAM and use the External network for Networking.

4.5. Create the new VHD at C:\VMS\BASE.VHD.

4.6. Start the VM, point the DVD to the Windows Server 2008 R2 SP1 ISO file at C:\ISO, and perform a regular install like you did before for the physical machine.

4.7. Set a password and install Windows Updates, like you did for the parent partition, but don't install any roles. Don't bother renaming it.

4.8. After you have the fully configured VM, run C:\Windows\System32\Sysprep\Sysprep.exe.

4.9. Select the options to run the OOBE, generalize and shutdown.

4.10. After that, you have a new base VHD ready to use at C:\VMS\BASE.VHD, which should be a little less than 8GB in size.

4.11. You should now remove the BASE VM using Hyper-V Manager. The BASE.VHD file will not be deleted.

5.
Create 5 differencing VHDs

5.1. Use the "New…", "Hard Disk…" option in Hyper-V Manager to create a differencing VHD using the base VHD you created.

5.2. After this, you will have a new differencing VHD at VM1.VHD that's less than 400KB in size.

5.3. Since we're creating 5 VMs, copy that file into VM2.VHD, VM3.VHD, VM4.VHD and VM5.VHD.

5.4. You can now create five similarly configured VMs.

5.5. Make sure to select to use the External network. We will manually add additional network interfaces later.

5.6. When creating the VMs, make sure to select to use one of the five VHD files created previously.

5.7. After creating each VM, select the VM "Settings" and use the "Add Hardware" option to add more network adapters.

5.8. For VM1 (domain controller) and VM2 (iSCSI Target), add one more network adapter (connected to the Internal 1 network).

5.9. The end result for VM1 and VM2 will be a VM with 2 network adapters, 1 external and 1 internal.

5.10. For VM3, VM4 and VM5, add 3 more network adapters (connected to the 3 internal networks).

5.11. For those VMs, the end result will be 4 network adapters, 1 external and 3 internal.

6. Configure the 5 VMs' names and IP addresses

6.1. You can now start all 5 VMs.

6.2. Connect to each of the 5 VMs from the Hyper-V Manager, let the mini-setup complete, and set the passwords for all five.

6.3. Configure the IP addresses for each network interface as shown on the table below.

6.4. Rename the Network Connections in each guest for easy identification.

6.5. If you can't tell which network is which inside the VM, temporarily set one of the adapters to "Not Connected" in the VM Settings and see which one shows as "Network cable unplugged".

6.6. The Internal 1 network is the main network used by the DNS and Domain Controller and the iSCSI Target.

6.7. Make sure to set the subnet mask to 255.255.255.0 and the DNS to 192.168.1.1 for all 3 internal networks on all 5 computers. This will instruct them to register their names and IPs with the DNS VM.
6.8. The External network is useful only for downloading from the Internet or remotely connecting to the 5 VMs, but is not required for the demo.

6.9. To make things easier to review and demo, I disabled IPv6 on all interfaces. Everything works fine with IPv6, so you don't need to do this.

6.10. You could configure a DHCP server for the internal interfaces. However, due to the risk of accidentally creating a rogue DHCP server in my corporate network, I used fixed IPs.

6.11. Rename each computer according to the table in step 6.3.

6.12. Make sure to set the Primary DNS suffix to "contoso.local". You will need to click the "More…" button to set this.

6.13. Your "Full computer name" should show with the DNS suffix in the "Computer Name" tab of the "System Properties" window.

6.14. After renaming the computer, renaming the networks and configuring IP addresses, your VM should look like this (two examples below):

7. Configure DNS

7.1. Now, on VM1 (CONTOSO-DC), you should configure the DNS role.

7.2. You will start by adding the DNS role in Server Manager.

7.3. Then you need to create a primary, forward lookup zone for the CONTOSO.LOCAL domain and 3 primary, reverse lookup zones for 192.168.1.x, 192.168.2.x and 192.168.3.x.

7.4. Allow both non-secure and secure dynamic updates (you can change to secure updates after the domain is fully configured).

7.5. After that, from all 5 VMs, open a command prompt and run "IPCONFIG /REGISTERDNS".

7.6. Then verify in DNS that all addresses are showing up in the forward and reverse zones. If not, go troubleshoot!

8. Configure the Domain Controller

8.1. Using Server Manager, select Add Role to add the Active Directory Domain Services role to CONTOSO-DC.

8.2. Run DCPROMO.EXE to create a new domain called CONTOSO.LOCAL.

8.3. Select the option to create a new domain in a new forest.

8.4. Use CONTOSO.LOCAL as the FQDN for the forest root domain.

8.5. Select the "Windows Server 2008 R2" forest functional level.

8.6.
Say “Yes” to the dialog about having DHCP-assigned addresses on this computer (that address in the External interface is used only for internet access) 8.7. Select the “Do not create the DNS Delegation” option 8.8. Accept the default location for the database, log files and SYSVOL. 8.9. Set a restore mode administrator password 8.10. Click OK the dialog about already having a DNS zone 8.11. Finish the Active Directory install and reboot the server 8.12. After the Domain Controller reboots, for every one of the other 4 VMs, join the domain 8.13. You will need to provide the domain name (CONTOSO.LOCAL) and the Administrator credentials 8.14. From now on, always log on to any of the VMs using the domain administrator credentials CONTOSO\Administrator 8.15. In the Domain Controller, use Active Directory Users and Computers to create a new Active Directory user account called SQLService. We’ll use that later when configuring SQL Server: 8.16. Set a password for the SQLService account and make sure to uncheck “User must change password at next logon”. 9. Configure the iSCSI Target 9.1. Copy the MSI file with the downloaded iSCSI Target file to VM2 (CONTOSO-IT) 9.2. Since you are both connected to the External network, you can use SMB2 to copy from the parent to the VM simply using a UNC path to a VM drive: \\CONTOSO-IT\C$. 9.3. Make sure to log on to CONTOSO-IT using the domain administrator credentials CONTOSO\Administrator, not the local Administrator credentials CONTOSO-IT\Administrator 9.4. Run the install file iSCSITarget_Public.MSI, using the default settings 9.5. For this demo, we’ll create a single Target with 3 Devices (LUNs or VHD files) and used by 2 initiators (CONTOSO-F1 and CONTOSO-F2) 9.6. Start by creating the 3 devices. One VHD with 1GB in size for the Cluster Witness volume and two VHDs with 20GB for the data volumes. 9.7. Create the first device with the file at C:\LUN0.VHD, 1024MB in size, description “LUN0” and no target access. 9.8. 
Create the second and third devices at C:\LUN1.VHD and C:\LUN2.VHD, both 20480MB in size and with no target access.
9.9. After that, you will have 3 devices.
9.10. Next, create a single target, exposed to two initiators (the two file servers that will become the cluster nodes) by IP address (192.168.1.3 and 192.168.1.4) and using the 3 devices.
9.11. Specify a target name and description.
9.12. On the page for iSCSI Initiators Identifiers, click Advanced and add the two initiators by IP address.
9.13. Confirm the fact that you’re exposing the same target to multiple initiators. That is OK if those initiators are Windows Servers running Failover Clustering.
9.14. After the target is created, use the option to add existing virtual disks to the iSCSI Target.
9.15. Select all three disks created previously.

10. Configure the iSCSI Initiators
10.1. Now we shift to the two File Servers, which will run the iSCSI Initiator. We’ll do this on VM3 and VM4 (CONTOSO-F1 and CONTOSO-F2).
10.2. Again, make sure to log on to CONTOSO-F1 and CONTOSO-F2 using the domain administrator credentials CONTOSO\Administrator, not the local Administrator credentials.
10.3. Start the iSCSI Initiator. On the first run, confirm that you want to configure the service to start automatically.
10.4. Specify the IP address of your iSCSI Target (in this case, 192.168.1.2) and click “Quick Connect…”.
10.5. The configured target will be recognized and you only have to click “Done”. Your initiator is configured.
10.6. After configuring the iSCSI Initiator on both nodes, open the Disk Management tool on only one of the two nodes.
10.7. Online all three offline disks (the iSCSI LUNs), then initialize them (you can use MBR partitions, since they are small).
10.8. Then create a new Simple Volume on each one using all the disk space on the LUN and quick-format them with NTFS.
10.9. Assign each one a drive letter (W:, X: and Y:) and a proper volume label (Witness, Data 1 and Data 2).

11.
Configure the Roles/Services/Features of the File Servers
11.1. Now we need to configure VM3 and VM4 as file servers and cluster nodes.
11.2. For both CONTOSO-F1 and CONTOSO-F2, from Server Manager, select Add Role and check File Services.
11.3. Add DFS-N and DFS-R as role services, which are covered in part of the demo (no need to create a namespace or select a volume for monitoring).
11.4. Next, select Add Feature and check Failover Clustering.
11.5. This is what Server Manager should look like after you install the role, role services and feature:

12. Configure the Failover Cluster
12.1. On VM3 (CONTOSO-F1), open the Failover Cluster Manager and click the option to “Validate a Configuration…”.
12.2. Enter the name of each of the two file servers.
12.3. Select “Run all tests” and let the validation process run. It will take a few minutes to complete.
12.4. Validation should not return any errors. If it does, review the previous steps and make sure to address any issues listed in the validation report.
12.5. Next, select the option to “Create a cluster”. Here you also specify the two nodes to use.
12.6. Give the cluster a name (CONTOSO-FC).
12.7. Select only the Internal 2 and Internal 3 networks and use the IP addresses 192.168.2.10 and 192.168.3.10.
12.8. After the cluster is created, you will get a confirmation:
12.9. For consistency, you should rename the Cluster networks to match the names used previously.

13. Create the Clustered File Service
13.1. You can now create a clustered file service. In the Failover Cluster Manager, right-click “Configure a Service or Application…”.
13.2. In the wizard, select the “File Server” option.
13.3. Specify the name of the service (CONTOSO-FS) and the IP addresses to use (192.168.2.11 and 192.168.3.11).
13.4. Select one of the Cluster Disks available (X:).
13.5. For the next step, make sure the File Service is running on the node you are connected to, or else you won’t be able to see the X: drive.
If not, move the File Service to that node.
13.6. After the File Service is created, create a folder X:\SQLDB and create a cluster share called SQLDB (use the “Add a shared folder” option).
13.7. There are a number of options to configure when creating a share. For this demo, just make sure you grant the Administrator and SQLService accounts Full Control for both NTFS permissions and SMB share permissions:
13.8. After creating the share, you have a fully configured Clustered File Service.

14. Configure the SQL Server
14.1. Extract the SQL Server evaluation download to a folder. Copy that SQL install folder to VM5, placing it in a folder under the \\CONTOSO-DB\C$ path.
14.2. Run Setup. Click OK on the dialog to install the .NET Framework.
14.3. In the SQL Server Installation Center, select the “Installation” section on the left, then select “New installation or add features to an existing installation”.
14.4. In SQL Server Setup, let it verify that the SQL Server Setup Support Rules pass and click OK.
14.5. Select the “Evaluation” version, review the licensing terms, and click “Install” to install the Setup Support files.
14.6. In SQL Server Setup, let it verify that the second set of SQL Server Setup Support Rules pass and click Next.
14.7. In Setup Role, select SQL Server Feature Installation.
14.8. On the “Feature Selection” page, select only the Database Engine and the basic Management Tools. Use the default locations.
14.9. In SQL Server Setup, let it verify that the Installation Rules pass and click Next.
14.10. On the “Instance Configuration” page, select the default instance, instance ID and root directory.
14.11. On the “Server Configuration” page, specify CONTOSO\SQLService as the service account for the SQL instance (both Agent and Database Engine).
14.12. On the “Database Engine Configuration” page, specify CONTOSO\Administrator as the SQL Server Administrator (you can use the option to “Add current user”).
14.13. Use defaults for all the rest and perform the install.
This will take a while:

15. Create a database using the clustered file share
15.1. On the SQL Server VM, open SQL Server Management Studio. When connecting, use a period as the Server Name to indicate the local server.
15.2. Right-click the main node, select Properties and use the Database Settings page to set the database default location to the UNC path of the clustered file share: \\CONTOSO-FS\SQLDB.
15.3. Expand to find the Databases node and right-click to create a new database.
15.4. Use Orders as the database name and note the path pointing to the clustered file share:

16. Shutdown, startup and final install notes
16.1. Keep in mind that there are dependencies between the services running on each VM.
16.2. To shut them down, start with VM5 and end with VM1, waiting for each one to go down completely before moving to the next one.
16.3. To bring the VMs up, go from VM1 to VM5, waiting for the previous one to be fully up (with low to no CPU usage) before starting the next one.
16.4. As a last note, the total size of the VHD files (base plus 5 diffs), after all the steps were performed, was around 22 GB.
16.5. You might also want to take a snapshot of the VMs after you shut them down, just in case you want to bring them back to their original state after experimenting with them for a while. If you do, you should always snapshot all of them, again due to the dependencies between them. Just right-click the VM and select the “Snapshot” option.

17. Conclusion
I hope you enjoyed these step-by-step instructions. I strongly encourage you to try them out and perform the entire installation yourself. It’s a good learning experience. After you perform the steps, you have a good setup to try the demos I showed during the presentation.
You can find details at these additional blog posts:
- The overall SQL Server over SMB2 scenario is described at
- The script used to generate some activity on the server is shown at
- The multiple network configuration and durability demo are described at
- The many options for name consolidation (including DFS-N and Clustering) are described at

Comment (Brad): Can you use the Microsoft iSCSI Software Target without Microsoft Storage Server? From my understanding, Microsoft Storage Server is reserved for hardware vendors who provide it with certified hardware. If I can use the Microsoft iSCSI Software Target as a standalone product in a production environment, it would give me more options for our clients.

Reply: @Brad: Yes, the Microsoft iSCSI Software Target 3.3 is now a free download for anyone using Windows Server 2008 R2 and it's not only for Windows Storage Server anymore. For details, see blogs.technet.com/.../microsoft-iscsi-software-target-3-3-for-windows-server-2008-r2-available-for-public-download.aspx
Timers and time management in the Linux kernel. Part 7.

Time related system calls in the Linux kernel

This is the seventh and last part of the chapter which describes timers and time management related stuff in the Linux kernel. In the previous part, we discussed timers in the context of x86_64: the High Precision Event Timer and the Time Stamp Counter. Internal time management is an interesting part of the Linux kernel, but of course not only the kernel needs the concept of time. Our programs also need to know the time. In this part, we will consider the implementation of some time management related system calls. These system calls are:

* clock_gettime;
* gettimeofday;
* nanosleep.

We will start from a simple userspace C program and follow all the way from the call of the standard library function to the implementation of the corresponding system calls. As each architecture provides its own implementation of certain system calls, we will consider only the x86_64 specific implementations, as this book is related to this architecture. Additionally, we will not consider the concept of system calls in this part, but only the implementations of these three system calls in the Linux kernel. If you are interested in what a system call is, there is a special chapter about this.

So, let's start from the gettimeofday system call.

Implementation of the gettimeofday system call

As we can understand from the name, gettimeofday returns the current time. First of all, let's look at the following simple example:

    #include <time.h>
    #include <sys/time.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char buffer[40];
        struct timeval time;

        gettimeofday(&time, NULL);

        strftime(buffer, 40, "Current date/time: %m-%d-%Y/%T", localtime(&time.tv_sec));
        printf("%s\n", buffer);

        return 0;
    }

As you can see, here we call the gettimeofday function, which takes two parameters.
The first parameter is a pointer to the timeval structure, which represents an elapsed time:

    struct timeval {
        time_t      tv_sec;     /* seconds */
        suseconds_t tv_usec;    /* microseconds */
    };

The second parameter of the gettimeofday function is a pointer to the timezone structure, which represents a timezone. In our example, we pass the address of the timeval time to the gettimeofday function; the Linux kernel fills the given timeval structure and returns it back to us. Additionally, we format the time with the strftime function to get something more human readable than elapsed microseconds. Let's see the result:

    ~$ gcc date.c -o date
    ~$ ./date
    Current date/time: 03-26-2016/16:42:02

As you may already know, a userspace application does not call a system call directly from the kernel space. Before the actual system call entry is called, we call a function from the standard library. In my case it is glibc, so I will consider this case. The implementation of the gettimeofday function is located in the sysdeps/unix/sysv/linux/x86/gettimeofday.c source code file. As you may already know, gettimeofday is not a usual system call. It is located in the special area which is called vDSO (you can read more about it in the part which describes this concept).

The glibc implementation of gettimeofday tries to resolve the given symbol (in our case this symbol is __vdso_gettimeofday) with a call to the _dl_vdso_vsym internal function. If the symbol cannot be resolved, it returns NULL and we fall back to the call of the usual system call:

    return (_dl_vdso_vsym ("__vdso_gettimeofday", &linux26)
            ?: (void*) (&__gettimeofday_syscall));

The gettimeofday entry is located in the arch/x86/entry/vdso/vclock_gettime.c source code file.
As we can see, gettimeofday is a weak alias of __vdso_gettimeofday:

    int gettimeofday(struct timeval *, struct timezone *)
        __attribute__((weak, alias("__vdso_gettimeofday")));

The __vdso_gettimeofday is defined in the same source code file and calls the do_realtime function if the given timeval is not null:

    notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
    {
        if (likely(tv != NULL)) {
            if (unlikely(do_realtime((struct timespec *)tv) == VCLOCK_NONE))
                return vdso_fallback_gtod(tv, tz);
            tv->tv_usec /= 1000;
        }
        if (unlikely(tz != NULL)) {
            tz->tz_minuteswest = gtod->tz_minuteswest;
            tz->tz_dsttime = gtod->tz_dsttime;
        }
        return 0;
    }

If do_realtime fails, we fall back to the real system call by executing the syscall instruction, passing the __NR_gettimeofday system call number and the given timeval and timezone:

    notrace static long vdso_fallback_gtod(struct timeval *tv, struct timezone *tz)
    {
        long ret;

        asm("syscall" : "=a" (ret) :
            "0" (__NR_gettimeofday), "D" (tv), "S" (tz) : "memory");
        return ret;
    }

The do_realtime function gets the time data from the vsyscall_gtod_data structure, which is defined in the arch/x86/include/asm/vgtod.h header file and contains a mapping of the timespec structure and a couple of fields which are related to the current clock source in the system. This function fills the given timeval structure with values from the vsyscall_gtod_data, which contains time related data updated via the timer interrupt.

First of all, we try to get access to the gtod (global time of day) vsyscall_gtod_data structure via a call to gtod_read_begin, and retry the read until it is successful:

    notrace static int __always_inline do_realtime(struct timespec *ts)
    {
        unsigned long seq;
        u64 ns;
        int mode;

        do {
            seq = gtod_read_begin(gtod);
            mode = gtod->vclock_mode;
            ts->tv_sec = gtod->wall_time_sec;
            ns = gtod->wall_time_snsec;
            ns += vgetsns(&mode);
            ns >>= gtod->shift;
        } while (unlikely(gtod_read_retry(gtod, seq)));

        ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns);
        ts->tv_nsec = ns;

        return mode;
    }

As we get access to the gtod, we fill the ts->tv_sec with gtod->wall_time_sec, which stores the current time in seconds obtained from the real time clock during initialization of the timekeeping subsystem in the Linux kernel, and the same value but in nanoseconds.
In the end of this code, we just fill the given timespec structure with the resulting values. That's all about the gettimeofday system call. The next system call in our list is clock_gettime.

Implementation of the clock_gettime system call

The clock_gettime function gets the time which is specified by the second parameter. Generally the clock_gettime function takes two parameters:

* clk_id - clock identifier;
* timespec - address of the timespec structure which represents the elapsed time.

Let's look at the following simple example:

    #include <time.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        struct timespec elapsed_from_boot;

        clock_gettime(CLOCK_BOOTTIME, &elapsed_from_boot);

        printf("%ld - seconds elapsed from boot\n", elapsed_from_boot.tv_sec);

        return 0;
    }

which prints uptime information:

    ~$ gcc uptime.c -o uptime
    ~$ ./uptime
    14180 - seconds elapsed from boot

We can easily check the result with the help of the uptime util:

    ~$ uptime
    up 3:56

The elapsed_from_boot.tv_sec represents elapsed time in seconds, so:

    14180 / 60      = 236 (total minutes)
    14180 / 60 / 60 = 3   (hours)
    14180 / 60 % 60 = 56  (minutes)

The clock id may be one of the following:

* CLOCK_REALTIME - system wide clock which measures real or wall-clock time;
* CLOCK_REALTIME_COARSE - faster version of the CLOCK_REALTIME;
* CLOCK_MONOTONIC - represents monotonic time since some unspecified starting point;
* CLOCK_MONOTONIC_COARSE - faster version of the CLOCK_MONOTONIC;
* CLOCK_MONOTONIC_RAW - the same as the CLOCK_MONOTONIC, but provides non NTP adjusted time;
* CLOCK_BOOTTIME - the same as the CLOCK_MONOTONIC, plus the time that the system was suspended;
* CLOCK_PROCESS_CPUTIME_ID - per-process time consumed by all threads in the process;
* CLOCK_THREAD_CPUTIME_ID - thread-specific clock.

Like gettimeofday, clock_gettime is not a usual syscall either; this system call is also placed in the vDSO area. The entry of this system call is located in the same source code file (arch/x86/entry/vdso/vclock_gettime.c) as for gettimeofday. The implementation of clock_gettime depends on the clock id.
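Before looking at the kernel-side dispatch, the different clock ids are easy to compare from userspace; for instance, Python's standard time module wraps the same clock_gettime system call (a quick sketch; CLOCK_BOOTTIME is Linux-only):

```python
import time

# Each call maps to clock_gettime() (or its vDSO fast path)
# with a different clk_id argument.
realtime = time.clock_gettime(time.CLOCK_REALTIME)    # wall-clock time
monotonic = time.clock_gettime(time.CLOCK_MONOTONIC)  # since an unspecified start
boottime = time.clock_gettime(time.CLOCK_BOOTTIME)    # monotonic plus suspend time

print("realtime: ", realtime)
print("monotonic:", monotonic)
print("boottime: ", boottime)

# CLOCK_MONOTONIC never goes backwards between two consecutive reads.
first = time.clock_gettime(time.CLOCK_MONOTONIC)
second = time.clock_gettime(time.CLOCK_MONOTONIC)
assert second >= first
```

Note that boottime will usually be ahead of monotonic on a laptop that has been suspended, since CLOCK_BOOTTIME keeps counting across suspend.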
If we have passed the CLOCK_REALTIME clock id, the do_realtime function will be called:

    notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
    {
        switch (clock) {
        case CLOCK_REALTIME:
            if (do_realtime(ts) == VCLOCK_NONE)
                goto fallback;
            break;
        ...
        ...
        ...
    fallback:
        return vdso_fallback_gettime(clock, ts);
    }

In other cases, the do_{name_of_clock_id} function is called. The implementations of some of them are similar. For example, if we pass the CLOCK_MONOTONIC clock id:

    ...
    ...
    ...
    case CLOCK_MONOTONIC:
        if (do_monotonic(ts) == VCLOCK_NONE)
            goto fallback;
        break;
    ...
    ...
    ...

the do_monotonic function will be called, which is very similar to the implementation of do_realtime:

    notrace static int __always_inline do_monotonic(struct timespec *ts)
    {
        unsigned long seq;
        u64 ns;
        int mode;

        do {
            seq = gtod_read_begin(gtod);
            mode = gtod->vclock_mode;
            ts->tv_sec = gtod->monotonic_time_sec;
            ns = gtod->monotonic_time_snsec;
            ns += vgetsns(&mode);
            ns >>= gtod->shift;
        } while (unlikely(gtod_read_retry(gtod, seq)));

        ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns);
        ts->tv_nsec = ns;

        return mode;
    }

We already saw a little about the implementation of this function in the previous paragraph about gettimeofday. There is only one difference here: the sec and nsec of our timespec value will be based on gtod->monotonic_time_sec instead of gtod->wall_time_sec, which maps the value of tk->tkr_mono.xtime_nsec, or the number of nanoseconds elapsed.

That's all.

Implementation of the nanosleep system call

The last system call in our list is nanosleep. As you can understand from its name, this function provides sleeping ability. Let's look at the following simple example:

    #include <time.h>
    #include <stdio.h>

    int main(void)
    {
        struct timespec ts = {5, 0};

        printf("sleep five seconds\n");
        nanosleep(&ts, NULL);
        printf("end of sleep\n");

        return 0;
    }

If we compile and run it, we will see the first line immediately:

    ~$ gcc sleep_test.c -o sleep
    ~$ ./sleep
    sleep five seconds
    end of sleep

and the second line after five seconds.

The nanosleep is not located in the vDSO area like the gettimeofday and clock_gettime functions. So, let's look at how the real system call, which is located in kernel space, is called by the standard library. The implementation of the nanosleep system call will be called with the help of the syscall instruction.
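Before diving into the syscall mechanics, the at-least-as-long guarantee of nanosleep is easy to observe by bracketing a sleep with CLOCK_MONOTONIC reads. A sketch in Python, whose time.sleep() sits on top of the nanosleep/clock_nanosleep machinery on Linux:

```python
import time

requested = 0.05  # ask to sleep for 50 milliseconds

start = time.clock_gettime(time.CLOCK_MONOTONIC)
time.sleep(requested)  # ends up in the kernel's nanosleep path
elapsed = time.clock_gettime(time.CLOCK_MONOTONIC) - start

# The kernel suspends the caller for *at least* the requested interval;
# the high-resolution timer may fire slightly late, but never early.
assert elapsed >= requested
print(f"requested {requested}s, actually slept {elapsed:.4f}s")
```

The small overshoot you will typically see is the timer expiry plus scheduling latency, which is exactly what the do_nanosleep loop described below accounts for.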
Before the execution of the syscall instruction, the parameters of the system call must be put into processor registers according to the order described in the System V Application Binary Interface, or in other words:

* rdi - first parameter;
* rsi - second parameter;
* rdx - third parameter;
* r10 - fourth parameter;
* r8 - fifth parameter;
* r9 - sixth parameter.

The nanosleep system call has two parameters: two pointers to timespec structures. The system call suspends the calling thread until the given timeout has elapsed. Additionally, it will finish if a signal interrupts its execution. The first parameter is the timespec which represents the timeout for the sleep. The second parameter is a pointer to a timespec structure too, and it contains the remainder of the time if the call of the nanosleep was interrupted:

    int nanosleep(const struct timespec *req, struct timespec *rem);

To make the system call, we need to put req into the rdi register and rem into the rsi register. The glibc does this job in the INTERNAL_SYSCALL macro, which is located in the sysdeps/unix/sysv/linux/x86_64/sysdep.h header file. This macro takes the name of the system call, storage for a possible error during execution of the system call, the number of the system call (all x86_64 system calls you can find in the system calls table) and the arguments of the certain system call.

The INTERNAL_SYSCALL macro just expands to the call of the INTERNAL_SYSCALL_NCS macro, which prepares the arguments of the system call (puts them into the processor registers in the correct order), executes the syscall instruction and returns the result. The LOAD_ARGS_##nr macro calls the LOAD_ARGS_N macro, where N is the number of arguments of the system call. In our case, it will be the LOAD_ARGS_2 macro. Ultimately, all of these macros are expanded into inline assembly which loads the registers and executes the syscall instruction.
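The same register convention can be exercised without glibc's nanosleep wrapper by going through libc's generic syscall(3) helper, which moves its arguments into rdi, rsi, and so on before executing the syscall instruction. A hedged sketch using Python's ctypes (the call number 35 is nanosleep on x86_64 only):

```python
import ctypes
import ctypes.util

# Load libc; falling back to CDLL(None) works on glibc systems.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None, use_errno=True)

# Userspace mirror of the kernel's struct timespec
class Timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long),
                ("tv_nsec", ctypes.c_long)]

SYS_nanosleep = 35  # __NR_nanosleep in the x86_64 system call table

req = Timespec(0, 10 * 1000 * 1000)  # request a 10 ms sleep
rem = Timespec(0, 0)                 # remainder, filled in if interrupted

# syscall(3) places the call number in rax, &req in rdi and &rem in rsi,
# then executes the syscall instruction.
ret = libc.syscall(SYS_nanosleep, ctypes.byref(req), ctypes.byref(rem))
assert ret == 0

# An invalid timespec (tv_nsec >= NSEC_PER_SEC) is rejected by the
# kernel's validity check and the call fails with EINVAL (errno 22).
bad = Timespec(0, 10**9)
ret = libc.syscall(SYS_nanosleep, ctypes.byref(bad), None)
assert ret == -1 and ctypes.get_errno() == 22
```

The second call demonstrates from userspace the timespec validation the handler performs before arming any timer.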
After the syscall instruction is executed, a context switch occurs and the kernel transfers execution to the system call handler. The system call handler for the nanosleep system call is located in the kernel/time/hrtimer.c source code file and defined with the SYSCALL_DEFINE2 macro helper:

    SYSCALL_DEFINE2(nanosleep, struct timespec __user *, rqtp,
                    struct timespec __user *, rmtp)
    {
        struct timespec tu;

        if (copy_from_user(&tu, rqtp, sizeof(tu)))
            return -EFAULT;

        if (!timespec_valid(&tu))
            return -EINVAL;

        return hrtimer_nanosleep(&tu, rmtp, HRTIMER_MODE_REL, CLOCK_MONOTONIC);
    }

You may read more about the SYSCALL_DEFINE2 macro in the chapter about system calls. If we look at the implementation of the nanosleep system call, first of all we will see that it starts from the call of the copy_from_user function. This function copies the given data from userspace to kernelspace. In our case, we copy the timeout value into the kernelspace timespec structure and check that the given timespec is valid by calling the timespec_valid function:

    static inline bool timespec_valid(const struct timespec *ts)
    {
        if (ts->tv_sec < 0)
            return false;
        if ((unsigned long)ts->tv_nsec >= NSEC_PER_SEC)
            return false;
        return true;
    }

which just checks that the given timespec does not represent a date before 1970 and that the nanoseconds value does not overflow one second. The nanosleep function ends with the call of the hrtimer_nanosleep function from the same source code file. The hrtimer_nanosleep function creates a timer and calls the do_nanosleep function, which does the main job for us. It runs the following loop:

    do {
        set_current_state(TASK_INTERRUPTIBLE);
        hrtimer_start_expires(&t->timer, mode);

        if (likely(t->task))
            freezable_schedule();
    } while (t->task && !signal_pending(current));

    __set_current_state(TASK_RUNNING);

    return t->task == NULL;

which suspends the current task for the duration of the sleep.
After we set the TASK_INTERRUPTIBLE flag for the current task, the hrtimer_start_expires function starts the given high-resolution timer on the current processor. When the given high-resolution timer expires, the task will be running again.

That's all.

Conclusion

This is the end of the seventh part of the chapter that describes timers and timer management related stuff in the Linux kernel. In the previous part, we saw x86_64 specific clock sources. As I wrote in the beginning, this part is the last part of this chapter. We saw important time management related concepts like the clocksource and clockevents frameworks, the jiffies counter, etc. in this chapter. Of course, this does not cover all of the time management in the Linux kernel. Many parts of it are mostly related to scheduling, which we will see in another chapter.
apache / buildstream / HACKING.rst

Contributing
============
Some tips and guidelines for developers hacking on BuildStream


Feature additions
-----------------
Major feature additions should be proposed on the `mailing list <>`_ before being considered for inclusion; we strongly recommend proposing in advance of commencing work.

New features must be well documented and tested either in our main test suite if possible, or otherwise in the integration tests. The author of a feature is expected to remain involved for a reasonable timeframe after it lands, in order to address any regressions and bugs which may have fallen through the cracks in the review process, giving us a reasonable timeframe for identifying these.


Patch submissions
-----------------
Branches must be submitted as merge requests in gitlab and should usually be associated to an issue report on gitlab.

Commits in the branch which address specific issues must specify the issue number in the commit message.

Merge requests that are not yet ready for review must be prefixed with the ``WIP:`` identifier. A merge request is not ready for review until the submitter expects that the patch is ready to actually land.

Submitted branches must not contain a history of the work done in the feature branch. Please use git's interactive rebase feature in order to compose a clean patch series suitable for submission.

We prefer that test case and documentation changes be submitted in separate commits from the code changes which they test.

Ideally every commit in the history of master passes its test cases. This makes bisections more easy to perform, but is not always practical with more complex branches.


Commit messages
~~~~~~~~~~~~~~~
Commit messages must be formatted with a brief summary line, followed by an empty line and then a free form detailed description of the change. If the commit addresses an issue, the issue number should be referenced in the commit message.
An example commit message footer referencing an issue::

  This fixes issue #123


Coding style
------------
Coding style details for BuildStream


Style guide
~~~~~~~~~~~
Python coding style for BuildStream is pep8, which is documented here:

We have a couple of minor exceptions to this standard; we don't want to compromise code readability by being overly restrictive on line length, for instance.

The pep8 linter will run automatically when running the test suite.


Imports
~~~~~~~
Module imports inside BuildStream are done with relative ``.`` notation.

Good::

  from .context import Context

Bad::

  from buildstream.context import Context

The exception to the above rule is when authoring plugins; plugins do not reside in the same namespace, so they must address buildstream in the imports. An element plugin will derive from Element by importing::

  from buildstream import Element

When importing utilities specifically, don't import function names from there; instead import the module itself::

  from . import utils

This makes it clear when reading code that said functions are not defined in the same file but come from utils.py, for example.


Policy for private symbols
~~~~~~~~~~~~~~~~~~~~~~~~~~
Private symbols are expressed via a leading ``_`` single underscore, or in some special circumstances with a leading ``__`` double underscore.

Before understanding the naming policy, it is first important to understand that in BuildStream, there are two levels of privateness which need to be considered. These are treated subtly differently and thus need to be understood:

* API Private

  A symbol is considered to be *API private* if it is not exposed in the *public API*.

  Even if a symbol does not have any leading underscore, it may still be *API private* if the containing *class* or *module* is named with a leading underscore.

* Local private

  A symbol is considered to be *local private* if it is not intended for access outside of the defining *scope*.
  If a symbol has a leading underscore, it might not be *local private* if it is declared on a publicly visible class, but needs to be accessed internally by other modules in the BuildStream core.


Ordering
''''''''
For better readability and consistency, we try to keep private symbols below public symbols. In the case of public modules where we may have a mix of *API private* and *local private* symbols, *API private* symbols should come before *local private* symbols.


Symbol naming
'''''''''''''
Remember that with python, modules (python files) are also symbols within their containing package; as such, modules which are entirely private to BuildStream are named as such, e.g. ``_thismodule.py``.


Cases for double underscores
''''''''''''''''''''''''''''
The double underscore in python has a special function. When declaring a symbol in class scope which has a leading underscore, it can only be accessed within the class scope using the same name. Outside of class scope, it can only be accessed with a *cheat*.

We use the double underscore in cases where the type of privateness can be ambiguous:

* For private modules and classes

  We never need to disambiguate with a double underscore.

* For private symbols declared in a public *scope*

  In the case that we declare a private method on a public object, it becomes ambiguous whether:

  * The symbol is *local private*, and only used within the given scope
  * The symbol is *API private*, and will be used internally by BuildStream from other parts of the codebase

  In this case, we use a single underscore for *API private* methods which are not *local private*, and we use a double underscore for *local private* methods declared in public scope.


Documenting private symbols
'''''''''''''''''''''''''''
Any symbol which is *API Private* (regardless of whether it is also *local private*) should have some documentation for developers to better understand the codebase.
Contrary to many other python projects, we do not use docstrings to document private symbols, but prefer to keep *API Private* symbols documented in code comments placed *above* the symbol (or *beside* the symbol in some cases, such as variable declarations in a class where a shorter comment is more desirable), rather than docstrings placed *below* the symbols being documented.

Other than this detail, follow the same guidelines for documenting symbols as described below.


Documenting BuildStream
-----------------------
BuildStream starts out as a documented project from day one and uses sphinx to document itself.


Documentation formatting policy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The BuildStream documentation style is as follows:

* Titles and headings require two leading empty lines above them. Only the first word should be capitalized.
* If there is an ``.. _internal_link`` anchor, there should be two empty lines above the anchor, followed by one leading empty line.
* Within a section, paragraphs should be separated by one empty line.
* Notes are defined using: ``.. note::`` blocks, followed by an empty line and then indented (3 spaces) text.
* Code blocks are defined using: ``.. code:: LANGUAGE`` blocks, followed by an empty line and then indented (3 spaces) text. Note that the default language is `python`.
* Cross references should be of the form ``:role:`target```.
* To cross reference arbitrary locations with, for example, the anchor ``_anchor_name``, you must give the link an explicit title: ``:ref:`Link text <anchor_name>```. Note that the "_" prefix is not required.

For further information, please see the `Sphinx Documentation <>`_.


Building Docs
~~~~~~~~~~~~~
The documentation build is not integrated into the ``setup.py`` and is difficult (or impossible) to do so, so there is a little bit of setup you need to take care of first.
Before you can build the BuildStream documentation yourself, you need to first install ``sphinx`` along with some additional plugins and dependencies, using pip or some other mechanism::

  # Install sphinx
  pip3 install --user sphinx

  # Install some sphinx extensions
  pip3 install --user sphinx-click
  pip3 install --user sphinx_rtd_theme

  # Additional optional dependencies required
  pip3 install --user arpy

To build the documentation, just run the following::

  make -C doc

This will give you a ``doc/build/html`` directory with the html docs, which you can view in your browser locally to test.


Regenerating session html
'''''''''''''''''''''''''
The documentation build will build the session files if they are missing, or if explicitly asked to rebuild. We revision the generated session html files in order to reduce the burden on documentation contributors.

To explicitly rebuild the session snapshot html files, it is recommended that you first set the ``BST_SOURCE_CACHE`` environment variable to your source cache; this will make the docs build reuse already downloaded sources::

  export BST_SOURCE_CACHE=~/.cache/buildstream/sources

To force rebuild session html while building the doc, simply build the docs like this::

  make BST_FORCE_SESSION_REBUILD=1 -C doc


Man pages
~~~~~~~~~
Unfortunately it is quite difficult to integrate the man pages build into the ``setup.py``; as such, whenever the frontend command line interface changes, the static man pages should be regenerated and committed with that change.

To do this, first ensure you have ``click_man`` installed, possibly with::

  pip install --user click_man

Then, in the toplevel directory of buildstream, run the following::

  python3 setup.py --command-packages=click_man.commands man_pages

And commit the result, ensuring that you have added anything in the ``man/`` subdirectory, which will be automatically included in the buildstream distribution.
Documenting conventions
~~~~~~~~~~~~~~~~~~~~~~~
We use the sphinx.ext.napoleon extension for the purpose of having
a bit nicer docstrings than the default sphinx docstrings.

A docstring for a method, class or function should have the following
format::

  """Brief description of entity

  Args:
     argument1 (type): Description of arg
     argument2 (type): Description of arg

  Returns:
     (type): Description of returned thing of the specified type

  Raises:
     (SomeError): When some error occurs
     (SomeOtherError): When some other error occurs

  A detailed description can go here if one is needed, only
  after the above part documents the calling conventions.
  """


Documentation Examples
~~~~~~~~~~~~~~~~~~~~~~
The examples section of the documentation contains a series of standalone
examples, here are the criteria for an example addition.

* The example has a ``${name}``
* The example has a project users can copy and use

  * This project is added in the directory ``doc/examples/${name}``

* The example has a documentation component

  * This is added at ``doc/source/examples/${name}.rst``
  * A reference to ``examples/${name}`` is added to the toctree in
    ``doc/source/examples.rst``
  * This documentation discusses the project elements declared in the project
    and may provide some BuildStream command examples
  * This documentation links out to the reference manual at every opportunity

* The example has a CI test component

  * This is an integration test added at ``tests/examples/${name}``
  * This test runs BuildStream in the ways described in the example
    and asserts that we get the results which we advertise to users in
    the said examples.


Adding BuildStream command output
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As a part of building the docs, BuildStream will run itself and extract some
html for the colorized output which is produced.

If you want to run BuildStream to produce some nice html for your
documentation, then you can do so by adding new ``.run`` files to the
``doc/sessions/`` directory.
Any files added as ``doc/sessions/${example}.run`` will result in a generated
file at ``doc/source/sessions/${example}.html``, and these files can be
included in the reStructuredText documentation at any time with::

  .. raw:: html
     :file: sessions/${example}.html

The ``.run`` file format is just another YAML dictionary which consists of a
``commands`` list, instructing the program what to do command by command.

Each *command* is a dictionary, the members of which are listed here:

* ``directory``: The input file relative project directory
* ``output``: The input file relative output html file to generate (optional)
* ``fake-output``: Don't really run the command, just pretend to and pretend
  this was the output, an empty string will enable this too.
* ``command``: The command to run, without the leading ``bst``

When adding a new ``.run`` file, one should normally also commit the new
resulting generated ``.html`` file(s) into the ``doc/source/sessions-stored/``
directory at the same time, this ensures that other developers do not need to
regenerate them locally in order to build the docs.

**Example**:

.. code:: yaml

   commands:

   # Make it fetch first
   - directory: ../examples/foo
     command: fetch hello.bst

   # Capture a build output
   - directory: ../examples/foo
     output: ../source/sessions/foo-build.html
     command: build hello.bst


Protocol Buffers
----------------
BuildStream uses protobuf and gRPC for serialization and communication with
artifact cache servers. This requires ``.proto`` files and Python code
generated from the ``.proto`` files using protoc. All these files live in the
``buildstream/_protos`` directory. The generated files are included in the
git repository to avoid depending on grpcio-tools for user installations.


Regenerating code
~~~~~~~~~~~~~~~~~
When ``.proto`` files are modified, the corresponding Python code needs to
be regenerated.
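To sanity-check a new ``.run`` file before committing it, the YAML can be loaded directly. A minimal sketch using PyYAML (this is just an illustration of the format above; BuildStream's own loader is what actually consumes these files):

```python
import yaml

# The example .run file from above, inlined for illustration
run_file = """
commands:

# Make it fetch first
- directory: ../examples/foo
  command: fetch hello.bst

# Capture a build output
- directory: ../examples/foo
  output: ../source/sessions/foo-build.html
  command: build hello.bst
"""

session = yaml.safe_load(run_file)
for cmd in session['commands']:
    # 'output' is optional, so use .get() when reading it
    print(cmd['directory'], cmd['command'], cmd.get('output'))
```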
As a prerequisite for code generation you need to install ``grpcio-tools``
using pip or some other mechanism::

  pip3 install --user grpcio-tools

To actually regenerate the code::

  ./setup.py build_grpc


Testing BuildStream
-------------------
BuildStream uses pytest for regression tests and testing out
the behavior of newly added components.

The elaborate documentation for pytest can be found here:

Don't get lost in the docs if you don't need to, follow existing examples
instead.


Running tests
~~~~~~~~~~~~~
To run the tests, just type::

  ./setup.py test

At the toplevel.

When debugging a test, it can be desirable to see the stdout and stderr
generated by a test, to do this use the ``--addopts`` option to feed
arguments to pytest as such::

  ./setup.py test --addopts -s

You can always abort on the first failure by running::

  ./setup.py test --addopts -x

If you want to run a specific test or a group of tests, you
can specify a prefix to match. E.g. if you want to run all of
the frontend tests you can do::

  ./setup.py test --addopts '-k tests/frontend/'

We also have a set of slow integration tests that are disabled by
default - you will notice most of them marked with SKIP in the pytest
output. To run them, you can use::

  ./setup.py test --addopts '--integration'

By default, buildstream also runs pylint on all files. Should you want
to run just pylint (these checks are a lot faster), you can do so with::

  ./setup.py test --addopts '-m pylint'

Alternatively, any IDE plugin that uses pytest should automatically
detect the ``.pylintrc`` in the project's root directory.


Adding tests
~~~~~~~~~~~~
Tests are found in the tests subdirectory, inside of which
there is a separate directory for each *domain* of tests. All tests
are collected as::

  tests/*/*.py

If the new test is not appropriate for the existing test domains,
then simply create a new directory for it under the tests subdirectory.
Various tests may include data files to test on, there are examples
of this in the existing tests. When adding data for a test, create
a subdirectory beside your test in which to store data.

When creating a test that needs data, use the datafiles extension
to decorate your test case (again, examples exist in the existing
tests for this), documentation on the datafiles extension can
be found here:

Tests that run a sandbox should be decorated with::

  @pytest.mark.integration

and use the integration cli helper.


Measuring BuildStream performance
---------------------------------


Benchmarking framework
~~~~~~~~~~~~~~~~~~~~~~
BuildStream has a utility to measure performance which is available from a
separate repository at. This tool allows you to run a fixed set of workloads
with multiple versions of BuildStream. From this you can see whether one
version performs better or worse than another, which is useful when looking
for regressions and when testing potential optimizations.

For full documentation on how to use the benchmarking tool see the README in
the 'benchmarks' repository.


Profiling tools
~~~~~~~~~~~~~~~
When looking for ways to speed up the code you should make use of a profiling
tool.

Python provides `cProfile <>`_ which gives you a list of all functions called
during execution and how much time was spent in each function.

Here is an example of running ``bst --help`` under cProfile::

  python3 -m cProfile -o bst.cprofile -- $(which bst) --help

You can then analyze the results interactively using the 'pstats' module::

  python3 -m pstats ./bst.cprofile

For more detailed documentation of cProfile and 'pstats', see:.

For a richer visualisation of the callstack you can try `Pyflame <>`_.
Once you have followed the instructions in Pyflame's README to install the
tool, you can profile ``bst`` commands as in the following example::

  pyflame --output bst.flame --trace bst --help

You may see an ``Unexpected ptrace(2) exception:`` error.
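The same cProfile/pstats workflow can also be driven from inside Python. A small self-contained sketch profiling a toy function instead of ``bst`` (the function name here is illustrative only):

```python
import cProfile
import pstats
import io

def busy():
    # A deliberately slow toy workload to show up in the profile
    return sum(i * i for i in range(100000))

profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()

# Equivalent of loading the dump with `python3 -m pstats`:
# sort by cumulative time and print the top 5 entries
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats('cumulative').print_stats(5)
report = stream.getvalue()
print(report)
```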
Note that the ``bst`` operation will continue running in the background in
this case, you will need to wait for it to complete or kill it. Once this is
done, rerun the above command, which appears to fix the issue.

Once you have output from pyflame, you can use the ``flamegraph.pl`` script
from the `Flamegraph project <>`_ to generate an .svg image::

  ./flamegraph.pl bst.flame > bst-flamegraph.svg

The generated SVG file can then be viewed in your preferred web browser.


Profiling specific parts of BuildStream with BST_PROFILE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
BuildStream can also turn on cProfile for specific parts of execution
using BST_PROFILE.

BST_PROFILE can be set to a section name, or 'all' for all
sections. There is a list of topics in `buildstream/_profile.py`. For
example, running::

  BST_PROFILE=load-pipeline bst build bootstrap-system-x86.bst

will produce a profile in the current directory for the time taken to
call most of `initialized`, for each element.

These profile files are in the same cProfile format as those mentioned in
the previous section, and can be analysed with `pstats` or `pyflame`.


Profiling the artifact cache receiver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Since the artifact cache receiver is not normally run directly, it's
necessary to alter the ForceCommand part of sshd_config to enable profiling.
See the main documentation in `doc/source/artifacts.rst` for general
information on setting up the artifact cache.

It's also useful to change directory to a logging directory before
starting `bst-artifact-receive` with profiling on.

This is an example of a ForceCommand section of sshd_config used to
obtain profiles::

  Match user artifacts
    ForceCommand BST_PROFILE=artifact-receive cd /tmp && bst-artifact-receive --pull-url /home/artifacts/artifacts


The MANIFEST.in and setup.py
----------------------------
When adding a dependency to BuildStream, it's important to update the
setup.py accordingly.
When adding data files which need to be discovered at runtime by BuildStream,
update setup.py accordingly.

When adding data files for the purpose of docs or tests, or anything that is
not covered by setup.py, update the MANIFEST.in accordingly.

At any time, running the following command to create a source distribution
should result in creating a tarball which contains everything we want it to
include::

  ./setup.py sdist
In earlier posts (Part 1 here, Part 2 here and Part 3 here) I have set about building a "Hello World" Python/Django application. In this post I want to configure the first screen. So let's get started...

I have already added a home module to the apps folder. The contents are:

    models.py
    tests.py
    views.py

Nothing complicated. I just want to add a routine to present an html page. The Home page will be a subroutine - home. Here is the addition to the views.py file:

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    # vim: ai ts=4 sts=4 et sw=4 nu

    from django.shortcuts import render_to_response
    from django.template import RequestContext

    def home(request):
        context = {}
        return render_to_response('home/index.html', context, RequestContext(request))

When we run localhost:8000 we get an error (totally expected!). Let's fix that by updating urls.py:

    url(r'^$', 'apps.home.views.home'),

Another expected error (but it is progress). We are getting to the views.py file in the apps.home module because we are getting the error that no html file exists.

In the Templates folder I create a home folder and create index.html in there:

    cd templates
    mkdir home

When I launch I now get the index.html file loaded. Success!

Yes - we have a basic page but there is no formatting and we are not using any of the Django template features. Let's setup the base template. While we are at it we will probably need a set of include files so let's create a folder for them.

    cd templates
    mkdir include

I then create a base.html file that will contain placeholders:

    <!doctype html>
    <html class="no-js" lang="en">
    <head>
    <title>{% block pretitle %}{% endblock %}{% block title %}Hello World!{% endblock %}{% block posttitle %}{% endblock %}</title>
    <meta charset="utf-8">
    {% block head %}
    {% endblock %}
    {% block extra_head %}
    {% endblock %}
    </head>
    <body class="{% block active_nav_tab %}{% endblock %}" {% block body_load_trigger %}{% endblock %}>
    {% include "include/top-nav.html" %}
    <div id="page">
    {% include "include/messages.html" %}
    {% block featureBox %}
    {% endblock %}
    {% block extra_body %}
    {% endblock %}
    </div>
    {% include "include/footer.html" %}
    </body>
    </html>

Note the use of block and endblock tags like these {% block block_name %}{% endblock %} in the code. These will be used in the application to insert customized code.

Now that we have a template we need to go back and update our index.html file to use this template. We can include other template elements using the {% include "include/filename.html" %} tag. This allows us to break the template in to manageable chunks of code. It also makes it easier to produce variations of the basic template design.

With the base template in place we now need to edit the index.html home page to make use of the template. First let's take note of some of the sections in the template.

    {% block pretitle %}{% endblock %}{% block title %}Hello World!{% endblock %}{% block posttitle %}{% endblock %}

This section allows us to insert a custom title for a page. If we want to replace the default Hello World! we need to use the {% block title %}{% endblock %} in our html file.

To add content to the main body of the page we can use the {% block featureBox %}{% endblock %} section.

Let's change index.html so it still performs the same function but uses the base.html template. Here is the updated version:

    {% extends "base.html" %}
    {% block title %} Welcome to My World!{% endblock %}
    {% block featureBox %}
    {% load get_settings %}
    <p>Welcome to my voyage of discovery in Python/Django</p>
    @ekivemark
    <p>Can we print the STATIC_URL SETTING: [ {{ STATIC_URL }} ]</p>
    {% endblock %}

When we run this we get an error.
This is because we didn't create all of the include files. In this case {% include "include/top-nav.html" %} and {% include "include/messages.html" %} are the culprits. I create basic placeholders for those files, re-run the application and success! We now have a simple template operational.

At this stage, with only a single page, the template adds a little overhead since we could code everything in a single file. But when the application grows to tens or hundreds of pages the templates really shine, because you can incorporate all the standard design elements in to the templates and your individual pages contain just the individualized elements. When you couple this with Cascading Stylesheets and the use of some standard design classes, you can quickly and easily modify the design of your application by making changes to your CSS files or to the template files.

Let's demonstrate that by adding a simple horizontal line <hr /> to the top-nav and footer include files. Well that was easy!

So now we have a working application that is using the Django Template features. It may be time to check this code in to github. Let's do that!

I navigate to the helloworld folder on my Mac (the one that contains manage.py).

    git init
    git status

git status tells me what files are being tracked.

    git add README.md
    git add *.py
    git add apps
    git add config
    git add mainstatic
    git add templates
    git add .gitignore

Run git status again to see that we haven't missed anything. No - looks good.

    git commit -m 'first commit'
    git remote add origin
    git push -u origin master

Remember - you need to have an account on github. You will be prompted for your userid and password.

There we have the first stage completed. We have a do nothing application that uses Django templates and we have it stored in GitHub.

We aren't finished yet. The next step is to make sure Compass / SASS and Bootstrap are working. We will start out on that journey in the next post in this series.
Posted via email from ekivemark: pre-blogspot
Number of elements in IR, PR, and PI arrays

C Syntax

  #include "matrix.h"
  mwSize mxGetNzmax(const mxArray *pm);

Fortran Syntax

  #include "fintrf.h"
  mwSize mxGetNzmax(pm)
  mwPointer pm

Input Arguments

  pm
    Pointer to a sparse mxArray

Returns

  Number of elements allocated to hold nonzero entries in the specified sparse
  mxArray, on success. Returns an indeterminate value on error. The most
  likely cause of failure is that pm points to a full (nonsparse) mxArray.

Description

  Use mxGetNzmax to get the value of the nzmax field. The nzmax field holds an
  integer value that signifies the number of elements in the ir, pr, and, if
  it exists, the pi arrays. The value of nzmax is always greater than or equal
  to the number of nonzero elements in a sparse mxArray. In addition, the
  value of nzmax is always less than or equal to the number of rows times the
  number of columns.

  As you adjust the number of nonzero elements in a sparse mxArray, MATLAB®
  software often adjusts the value of the nzmax field. MATLAB adjusts nzmax to
  reduce the number of costly reallocations and to optimize its use of heap
  space.

Examples

  See these examples in: matlabroot/extern/examples/mx
I am totally new to programming and I'm feeling like an idiot. I need to write a C++ program, basically a cash register program is what I've been told it's commonly called. It needs to ask for amount owed, amount paid, and display the change and break the change down into dollars, quarters, dimes, nickels, pennies. If the amount paid is less than the amount owed it needs to display a message saying the customer owes whatever the difference is. I'm supposed to use a "nested if". Here is what I have so far:

Code:
#include<iostream>
#include<iomanip>
using namespace std;

int main()
{
    float a, b, c;
    cout << "Enter amount owed $ : ";
    cin >> a;
    cout << "Enter amount paid $ : ";
    cin >> b;
    c = a - b;
    cout << "Your change is $ : ";
    if (c >= 0)
    {
        cout << "You owe $ : ";
        cout << c * -1;
    }
    cout << fixed;
    cout << setprecision(2);
    cout << c;
    cout << endl << endl;
    system("pause");
    return 0;
}

And this is where I become totally lost, I don't really get how to break the change down. I'm not sure how to actually write the nested if.
Does another bounding box intersect with this bounding box?

Check if the bounding box comes into contact with another bounding box. This returns a Boolean that is set to true if there is an intersection between bounds. Two bounds are intersecting if there is at least one point which is contained by both bounds.

//Attach this script to an empty GameObject. Create 2 more GameObjects and attach a Collider component on each. Choose these as the "My Object" and "New Object" in the Inspector.
//This script allows you to move your main GameObject left to right. If it intersects with the other, it outputs the message to the Console.

using UnityEngine;

public class BoundsIntersectExample : MonoBehaviour
{
    public GameObject m_MyObject, m_NewObject;
    Collider m_Collider, m_Collider2;

    void Start()
    {
        //Check that the first GameObject exists in the Inspector and fetch the Collider
        if (m_MyObject != null)
            m_Collider = m_MyObject.GetComponent<Collider>();

        //Check that the second GameObject exists in the Inspector and fetch the Collider
        if (m_NewObject != null)
            m_Collider2 = m_NewObject.GetComponent<Collider>();
    }

    void Update()
    {
        //If the first GameObject's Bounds enters the second GameObject's Bounds, output the message
        if (m_Collider.bounds.Intersects(m_Collider2.bounds))
        {
            Debug.Log("Bounds intersecting");
        }
    }
}
In this codelab, you'll learn about convolutions and why they are so powerful in Computer vision scenarios. It will equip you to be ready for Lab 4, which shows you how to use convolutions in deep learning scenarios to make convolutional neural networks.

In the last lab, you saw how to train an image classifier for fashion items using the Fashion MNIST dataset. This gave you a pretty accurate classifier, but there was an obvious constraint: the images were 28x28 gray scale and the item was centered in the image.

For example here are a couple of the images in Fashion MNIST

The DNN that you created simply learned from the raw pixels what made up a sweater, and what made up a boot in this context. But consider how it might classify this image?

While it's clear that there are boots in this image, the classifier would fail for a number of reasons. First, of course, it's not 28x28 grayscale, but more importantly, the classifier was trained on the raw pixels of a left-facing boot, and not the features that make up what a boot actually is.

To solve this problem, you can use Convolutions. You'll be using TensorFlow to hand build convolutions in this lab. If you've never done this before, it's strongly recommended that.

A convolution is a filter that passes over an image, processes it, and extracts features that show a commonality in the image. In this lab you'll see how they work, by processing an image to see if you can extract features from it!

The process is very simple. You simply scan every pixel in the image and then look at its neighboring pixels. You multiply out the values of these pixels by the equivalent weights in a filter. So, for example, consider this:

In this case a 3x3 convolution matrix is specified. The current pixel value is 192, but you can calculate the new one by looking at the neighbor values, multiplying them out by the values specified in the filter, and making the new pixel value the final amount.
Let's explore how convolutions work by creating a basic convolution on a 2D gray scale image. We'll demonstrate this with the 'ascent' image from scipy. It's a nice built-in picture with lots of angles and lines.

Let's start by importing some Python libraries and get the ascent picture.

import cv2
import numpy as np
from scipy import misc
i = misc.ascent()

Next, we can use the pyplot library to draw the image so we know what it looks like.

import matplotlib.pyplot as plt
plt.grid(False)
plt.gray()
plt.axis('off')
plt.imshow(i)
plt.show()

We can see that this is an image of a stairwell. There are lots of features in here that we can play with seeing if we can isolate them -- for example there are strong vertical lines.

The image is stored as a numpy array, so we can create the transformed image by just copying that array. Let's also get the dimensions of the image so we can loop over it later.

i_transformed = np.copy(i)
size_x = i_transformed.shape[0]
size_y = i_transformed.shape[1]

First, we make a convolution matrix (or kernel) as a 3x3 array.

Now let's calculate the output pixels. We will iterate over the image, leaving a 1 pixel margin, and multiply out each of the neighbors of the current pixel by the value defined in the filter. That means the current pixel's neighbor above it and to the left will be multiplied by the top left item in the filter, etc. We'll then multiply the result by the weight, and then ensure the result is in the range 0-255.

Finally we can plot the image to see the effect of passing this filter over it!

# Plot the image. Note the size of the axes -- they are 512 by 512
plt.gray()
plt.grid(False)
plt.imshow(i_transformed)
#plt.axis('off')
plt.show()

So, consider the following filter values, and their impact on the image.

Using [-1,0,1,-2,0,2,-1,0,1] gives us a very strong set of vertical lines:

Using [-1,-2,-1,0,0,0,1,2,1] gives us horizontal lines:

Explore different values for yourself!
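The kernel definition and convolution loop described above dropped out of this extraction; here is a minimal sketch of the operation, run on a small synthetic image standing in for the ascent picture (the helper name and the sample image are illustrative only, and the vertical-edge filter values are the ones quoted in the text):

```python
import numpy as np

def convolve(image, kernel, weight=1.0):
    """Slide a 3x3 kernel over a 2D grayscale image, leaving a
    1-pixel margin, clamping each output pixel to the 0-255 range."""
    out = np.copy(image).astype(float)
    size_x, size_y = image.shape
    for x in range(1, size_x - 1):
        for y in range(1, size_y - 1):
            acc = 0.0
            # multiply each neighbor by the matching kernel entry
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    acc += image[x + dx, y + dy] * kernel[dx + 1][dy + 1]
            acc *= weight
            out[x, y] = min(255, max(0, acc))  # keep result in 0-255
    return out

# A simple image: left half dark, right half bright
img = np.zeros((6, 6))
img[:, 3:] = 255

# The vertical-line filter quoted in the text, as a 3x3 kernel
vertical = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
edges = convolve(img, vertical)
```

Uniform regions multiply out to zero while the dark-to-bright boundary saturates to 255, which is exactly why this kernel highlights vertical lines.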
Also, try different sized filters, such as 5x5 or 7x7.

As well as using convolutions, pooling helps us greatly in detecting features. The goal is to reduce the overall amount of information in an image while maintaining the features that are detected as present. There are a number of different types of pooling, but for this lab we'll use one called MAX pooling.

The idea here is to iterate over the image, and consider the pixel and its immediate neighbors to the right, beneath, and right-beneath. Take the largest of these (hence the name MAX pooling) and load it into the new image. Thus the new image will be 1/4 the size of the old.

This code will show a (2, 2) pooling. Run it to see the output, and you'll see that the image is now 256x256, 1/4 of its original size, and the detected features have been enhanced, despite less data now being in the image.

In the next lab you'll see how to add convolutions to your Fashion MNIST neural network to make it more efficient. It will classify based on features, and not on raw pixels.
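The (2, 2) pooling code referenced above was lost in this extraction; a minimal sketch of MAX pooling as described (the function name and the tiny sample array are illustrative only):

```python
import numpy as np

def max_pool_2x2(image):
    """(2, 2) MAX pooling: keep the largest value of each 2x2 block,
    producing an image 1/4 the size of the original."""
    size_x, size_y = image.shape
    pooled = np.zeros((size_x // 2, size_y // 2))
    for x in range(0, size_x, 2):
        for y in range(0, size_y, 2):
            # the pixel plus its right, beneath, and right-beneath neighbors
            pooled[x // 2, y // 2] = max(
                image[x, y], image[x, y + 1],
                image[x + 1, y], image[x + 1, y + 1])
    return pooled

img = np.array([[ 1,  2,  3,  4],
                [ 5,  6,  7,  8],
                [ 9, 10, 11, 12],
                [13, 14, 15, 16]])
small = max_pool_2x2(img)   # 2x2 result: [[6, 8], [14, 16]]
```

Applied to the 512x512 ascent image this yields the 256x256 result mentioned in the text.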
Doug Holton wrote: > It's me wrote: > >> The argument I wish to pass is either one string, or a list of >> strings, or a tuple of strings. > > def seq(x): > if hasattr(x,"__iter__"): > return x > else: > return (x,) > > def abc(arg1, arg2, arg3): > for item in seq(arg2): > print item > Note that this takes advantage of the fact that str and unicode implement iteration through the old __getitem__ protocol instead of the __iter__ protocol. It's actually probably the most concise solution to your problem, but I don't think there's any guarantee that str and unicode won't grow __iter__ methods in the future, so you might find that this code breaks in some future version of Python (probably in the fabled Python 3.0). Unfortunately, if this does happen, it won't break with an exception; it will break by treating the characters in strings as 'item's. For the moment though (and I suspect in Python 2.5 as well since I haven't heard anyone lobbying to add __iter__ methods to str or unicode), this should work fine for you. Steve
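A quick demonstration of the distinction Steve describes. In modern Python 3, str did in fact grow an __iter__ method, so the hasattr test no longer treats strings specially, and the explicit isinstance check he hints at is required (seq3 is just an illustrative name):

```python
def seq(x):
    # Doug's original: relies on str NOT having __iter__ (true on Python 2)
    if hasattr(x, "__iter__"):
        return x
    return (x,)

def seq3(x):
    # Python 3 safe version: check for strings explicitly
    if isinstance(x, (str, bytes)) or not hasattr(x, "__iter__"):
        return (x,)
    return x

print(list(seq3("abc")))        # ['abc']  -- string kept whole
print(list(seq3(["a", "b"])))   # ['a', 'b']
```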
Opened 3 years ago
Last modified 10 months ago

The attached files add CGI support for django. Maybe this is useful for somebody else as well.

Attachments:

* The CGI Handler classes
* The CGI Handler classes (replaces the old CGI.py which still had my standard file-header)
* The CGI Server classes
* An alternative cgi script to Martin's using existing WSGI

I took a different approach to Martin. Rather than add another handler, the script I've added uses the existing WSGI handler. A CGI->WSGI shim layer if you like. Tested it on the standard Apache 1.3 install that comes with Mac OS X, and python 2.4. I don't know of any reason it shouldn't work on other platforms, but you've been warned.

Has anyone tested this? Recently I switched to the "django.cgi" approach and it works fine in my setup. Which of the patches is "the good one"?

About the last one: Setting "Patch needs improvement" for that. Side note: for the one who writes the docs for that: consider putting a big big big note about the performance of running Django (or anything larger than a Hello World) in CGI mode.

to be placed in django/core/servers

to be placed in docs/

cgi.py and cgi.txt are the django.cgi split in two, notes: Hope it's ok.

I tested Marc's cgi.py code, which works fine for me with django 0.95.1. I think that Django should include this server interface, but there should be a note somewhere explaining that when using the CGI interface all dynamic page views slow down the more models and applications that your website contains, even if the page itself doesn't use them. The timing may be fine on a simple blog, but my site has 5 sub-applications and 31 tables (including all the admin ones) and it's currently taking 6 seconds to return a dynamic page, against under 0.5 seconds for a static GIF.

Works for me using 0.95.1, aside from the same issue that's behind the old, dismissed bug #285.
Since not having a working admin side would have defeated most of the point in using Django, and the blithe suggestion of a global rewrite rule was somewhere between distasteful and impossible here, I had to find a fix which allows Django to work with this as a CGI, but as usual there's a price to be paid (pseudo patch for the 1/22 cgi.py):

  + # kluge to give Django the full local part of the URL
  + # fixes problems like #285 at the cost of injecting a prefix into urls.py
  + environ['PATH_INFO'] = environ['SCRIPT_NAME'] + environ.get('PATH_INFO', '')
    result = application(environ, start_response)

This fixes the issue with the admin interface going off to points unknown after the login screen; it may also handle other places where a non-relative URL is generated. It does break the CGI standard meaning of PATH_INFO, so if this were to be adopted I should advise fixing the root of the problem in wsgi.py, where it stupidly assumes that PATH_INFO is all there is to the URL. I assume this is a relic of the way Django has traditionally been deployed (see for example the discouragement of serving anything but Django in the mod_python setup docs, etc.) I would further speculate that some similar fix would deal as well with #285 and any related issues for FCGI, SCGI, ...?

Oh, the cost. It is, of course, that the top-level urls.py has to embed the added prefix (SCRIPT_NAME) to every url. In my own experimental work I renamed urls.py to real_urls.py, and made a "dummy" urls.py that matches the prefix and chains to real_urls.py. It's a bit ugly, and, once again assuming that the project cares about working cleanly in a non-mod_python environment, should get better support. I have some handwaving-grade ideas about that; perhaps I'll work it up into a patch later. Since this is for a copious spare time project, it's likely I'll just continue as-is for at least a while.
I tried

But it doesn't run if the URL is empty, example:

It's because the file accesses "PATH_INFO" in os.environ, but my Apache2 only puts the requested path into the environ if it is not empty. Here is a small patch for this:

  Index: wsgi.py
  ===================================================================
  --- wsgi.py     (revision 4556)
  +++ wsgi.py     (working copy)
  @@ -73,7 +73,7 @@
   class WSGIRequest(http.HttpRequest):
       def __init__(self, environ):
           self.environ = environ
  -        self.path = environ['PATH_INFO']
  +        self.path = environ.get('PATH_INFO', '/')
           self.META = environ
           self.method = environ['REQUEST_METHOD'].upper()

It would also work if we change the file and insert:

  ...
   def run_with_cgi(application):
       environ = dict(os.environ.items())
  +    environ['PATH_INFO'] = environ.get('PATH_INFO', "/")
       environ['wsgi.input'] = sys.stdin
       environ['wsgi.errors'] = sys.stderr
       environ['wsgi.version'] = (1,0)
  ...

uhmm.. as per my last comment: cgi.py and cgi.txt are the django.cgi split in two, notes:

* It seems that the page is not in SVN so I cannot add a link to /cgi/ — somebody will need to do that!
* The reporter or some volunteer could please try the patch? the cgi.txt doc explains how to set the wrapper.
* How to write a test for this? no idea ;)

So, the official docs for this patch are on the cgi.txt file, they are not in the "Documentation" section of djangoproject.com as the patch is still pending to be applied. And cgi.py is the python code to run the thing ;)

The best should be the cgi.py file as it's the best choice as it's the last patch and the candidate to be checked in. Read the cgi.txt file for set-up information (it's almost like the fcgi wrapper).

About the files:

Hope this helps.

It's not a good idea to name a file cgi.py! So you override python's builtin cgi module! What's about cgi_server.py?

Replying to Jedie:

> It's not a good idea to name a file cgi.py!
> So you override python's builtin cgi module! What's about cgi_server.py?

  from django.core.servers.cgi import runcgi

Importing django.core.servers.cgi.runcgi is not "import cgi". This cgi.py is deep inside django and the documentation instructs you to import "runcgi", not "cgi", so you should have no trouble (I see no reason you would import the builtin "cgi" and this "cgi" on the wrapper script anyway). But it can be renamed anyway, something more to be discussed!

Call to volunteers: Write tests for this ticket! ;)

Yes, you are right. I had not put cgi.py into django/core/servers. I saved it into the root and did "from cgi import runcgi"... So I got a funny error, because the builtin file was overwritten ;)

Two problems remaining:

1. I must use the environ['PATH_INFO'] = environ.get('PATH_INFO', "/") fix from above.
2. My http headers were not sent in the right order. So I have made a silly patch:

  ...
  def send_content_type(response_headers):
      for no, header in enumerate(response_headers):
          if header[0].lower() == "content-type":
              sys.stdout.write('%s: %s\r\n' % header)
              del(response_headers[no])
              return response_headers
      sys.stdout.write('Content-Type: text/html\r\n')
      sys.stdout.write('Warning: Content Type not send!') # Bullshit?!?

  def write(data):
      if not headers_set:
          raise AssertionError("write() before start_response()")
      elif not headers_sent:
          # Before the first output, send the stored headers
          status, response_headers = headers_sent[:] = headers_set
          response_headers = send_content_type(response_headers) # Send Content-Type first
          sys.stdout.write('Status: %s\r\n' % status)
          for header in response_headers:
              sys.stdout.write('%s: %s\r\n' % header)
          sys.stdout.write('\r\n')
      sys.stdout.write(data)
      sys.stdout.flush()
  ...
FWIW, when using django.cgi, I had to specify the PATH_INFO like this:

environ['PATH_INFO'] = environ.get('REDIRECT_URL',"/")

instead of:

 def run_with_cgi(application):
     environ = dict(os.environ.items())
+    environ['PATH_INFO'] = environ.get('PATH_INFO',"/")
     environ['wsgi.input'] = sys.stdin
     environ['wsgi.errors'] = sys.stderr
     environ['wsgi.version'] = (1,0)

so that I got the correct path passed to django. Perhaps there is a different way to do this, but it worked for me.

Why in god's name a patch for django? There are a couple of CGI-WSGI wrappers out there; nobody has to patch django for CGI support :-/

Why not include the small CGI handler into django directly? It's small! So CGI guys need no extra files from somewhere. The empty PATH_INFO problem is the same with SCGI or FastCGI; there is ticket:3414 for this. print in python code should go to stderr (logfile):

diff -u cgi.py ~modarch/django/trunk/django/core/servers/cgi.py
--- cgi.py      2007-05-07 16:49:30.313178117 +0200
+++ /home/modarch/django/trunk/django/core/servers/cgi.py       2007-05-07 17:11:09.788411072 +0200
@@ -10,6 +10,7 @@
 def runcgi():
     environ = dict(os.environ.items())
+    environ['PATH_INFO'] = environ.get('PATH_INFO',"/")
     environ['wsgi.input'] = sys.stdin
     environ['wsgi.errors'] = sys.stderr
     environ['wsgi.version'] = (1,0)
@@ -26,6 +27,9 @@
     headers_set = []
     headers_sent = []
+
+    stdout = sys.stdout
+    sys.stdout = sys.stderr  # print should go to stderr (logfile)

     def write(data):
         if not headers_set:
@@ -34,13 +38,13 @@
         elif not headers_sent:
             # Before the first output, send the stored headers
             status, response_headers = headers_sent[:] = headers_set
-            sys.stdout.write('Status: %s\r\n' % status)
+            stdout.write('Status: %s\r\n' % status)
             for header in response_headers:
-                sys.stdout.write('%s: %s\r\n' % header)
-            sys.stdout.write('\r\n')
+                stdout.write('%s: %s\r\n' % header)
+            stdout.write('\r\n')
-        sys.stdout.write(data)
-        sys.stdout.flush()
+        stdout.write(data)
+        stdout.flush()

     def start_response(status, response_headers, exc_info=None):
         if exc_info:

Why?? stdout is stdout, and stderr is stderr; they are and mean different things. Why should stdout = stderr? If you are using print statements for debugging, please use the logging module ;)

Changes to cgi.py: include SCRIPT_NAME in PATH_INFO and connect stdout to stderr (print to logfile).

Django just isn't designed to run under CGI. It won't run under OS/2, either.

Use the wsgiref module for django CGI support: run django as CGI with wsgiref's CGIHandler.
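That closing suggestion can be sketched with nothing but the standard library. The application function below is illustrative, not Django's; wsgiref.handlers.CGIHandler is the stdlib class the ticket's resolution points at, and the environ.get() guard mirrors the PATH_INFO fix discussed throughout this thread.

```python
from wsgiref.handlers import CGIHandler

def application(environ, start_response):
    # Guard against the absent-PATH_INFO case discussed throughout this ticket.
    path = environ.get('PATH_INFO', '/')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('Hello from %s' % path).encode('utf-8')]

# A real CGI script would end with:
#     CGIHandler().run(application)
# (left commented out so the module can be imported outside a CGI context)
```

CGIHandler takes care of reading os.environ, emitting the Status line and headers, and flushing the body, which is exactly the bookkeeping the hand-rolled wrappers above keep getting wrong.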
http://code.djangoproject.com/ticket/2407
Hello group, I just downloaded the Digital Mars C/C++ compiler and was playing around with the piece of code shown at the end of this post. When I compile and run said code, I get the following output: dog = 8, dog_p points to 7 However, when I uncomment the printf("hello world") line I get the output I expect: dog = 7, dog_p points to 7 I'm confused as to what is going on. Anyone care to enlighten me? Thanks! Vic ------------------------------------------------------- #include <stdio.h> main() { const int dog = 8; int *dog_p = (int *)&dog; *dog_p = 7; //printf("hello world!\n"); printf("dog = %d, dog_p points to %d\n", dog, *dog_p); } -------------------------------------------------------
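The excerpt ends before any reply, so for the record (my explanation, not from the newsgroup): writing through a pointer that casts away const is undefined behavior in C, and the compiler is entitled to constant-fold the literal 8 into the printf call for `dog` even though the memory itself now holds 7; the extra printf call presumably just perturbs the optimizer enough to change that decision. Without the const qualifier, the write is well defined:

```c
#include <assert.h>

/* Same shape as the posted code, minus the const qualifier: writing
 * through dog_p is now well defined, so dog reliably reads back as 7. */
int demo(void)
{
    int dog = 8;
    int *dog_p = &dog;
    *dog_p = 7;
    return dog;
}
```

With const in place, both outputs the poster saw are permitted, which is the usual hallmark of undefined behavior.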
http://www.digitalmars.com/archives/cplusplus/4801.html
After launching a program through valgrind I got the following message:

==9290== Conditional jump or move depends on uninitialised value(s)
==9290==    at 0x4E82A03: vfprintf (vfprintf.c:1661)
==9290==    by 0x4EA9578: vsnprintf (vsnprintf.c:119)
==9290==    by 0x4E8B531: snprintf (snprintf.c:33)
==9290==    by 0x400820: _function (in /home/snp/prog/TEST)
==9290==    by 0x4006D5: start (in /home/snp/prog/TEST)
==9290==    by 0x40085C: main (in /home/snp/prog/TEST)
==9290==  Uninitialised value was created by a heap allocation
==9290==    at 0x4C2AB80: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9290==    by 0x400715: init (in /home/snp/prog/TEST)
==9290==    by 0x400857: main (in /home/snp/prog/TEST)

#include <net/if.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <syslog.h>

#define TARGET "8.8.8.8"
#define DEVICE "eth0"

static int _function(void);

struct remote {
    char *target;
    char device[IFNAMSIZ];
};

struct remote *st_args;

int start(void)
{
    return (_function());
}

int init(void)
{
    st_args = malloc(sizeof(struct remote));
    if (st_args == NULL)
        return (-1);
    st_args->target = malloc(sizeof(TARGET) + 1);
    if (st_args->target == NULL) {
        free(st_args);
        return (-1);
    }
    strncpy(st_args->target, TARGET, sizeof(TARGET) - 1);
    strncpy(st_args->device, DEVICE, IFNAMSIZ - 1);
    return 0;
}

void stop(void)
{
    if (st_args != NULL) {
        free(st_args->target);
        free(st_args);
    }
}

static int _function(void)
{
    char cmd[256];
    memset(cmd, 0, sizeof(cmd));
    snprintf(cmd, sizeof(cmd), "ping -I %s %s", st_args->device, st_args->target);
    return 0;
}

int main(int argc, char **argv)
{
    init();
    start();
    stop();
    return 0;
}

Valgrind's message,

==9290== Conditional jump or move depends on uninitialised value(s)

is reasonably self-explanatory: the program is observed to be relying on uninitialized memory to make a decision.
Happening in a standard library function as it does, it is natural to suppose that there is something wrong with the function arguments. Since you're specifically printing strings, the most likely cause is that one of the string arguments is unterminated. And indeed, at least one is. Consider this code:

#define TARGET "8.8.8.8"
[...]
    strncpy(st_args->target, TARGET, sizeof(TARGET) - 1);

In trying to be safe, you have shot yourself in the foot. strncpy() copies at most the specified number of bytes, but it does not append a terminator afterwards. Thus, its Linux manual page contains this warning:

    Warning: If there is no null byte among the first n bytes of src, the string placed in dest will not be null terminated.

You have ensured that the situation described in that warning takes place: no null terminator is written, and the byte immediately after the copied characters in st_args->target remains uninitialized, so snprintf() reads past it looking for a terminator.

Since you are careful to allocate enough space for the full string to begin with, including the terminator, the strncpy() is overkill anyway. Just use strcpy(). Or indeed, if your system has strdup() or you're willing to write an implementation, then strdup() is much cleaner than malloc() + strcpy().

Alternatively, if you want to use strncpy() then it's a good idea to ensure that the destination string is terminated by following up each strncpy() call by manually writing a terminator at the index where the copy stopped. In this case, that would be

    st_args->target[sizeof(TARGET) - 1] = '\0';

Note also that you actually allocate one more byte than you need, for the sizeof of a string literal already includes the terminator.
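The two suggestions in the answer can be sketched like this (helper names are mine, not from the post):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define TARGET "8.8.8.8"

/* Option 1: the buffer is sized from a string literal, so sizeof already
 * includes the '\0' and a plain strcpy is safe. */
char *copy_target_strcpy(void)
{
    char *p = malloc(sizeof(TARGET));
    if (p != NULL)
        strcpy(p, TARGET);
    return p;
}

/* Option 2: keep strncpy, but write the terminator yourself at the index
 * where the copy stopped. */
char *copy_target_strncpy(void)
{
    char *p = malloc(sizeof(TARGET));
    if (p == NULL)
        return NULL;
    strncpy(p, TARGET, sizeof(TARGET) - 1);
    p[sizeof(TARGET) - 1] = '\0';  /* strncpy never wrote this byte */
    return p;
}
```

Either version gives snprintf() a properly terminated string, which silences the valgrind complaint.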
https://codedump.io/share/oucqLnDqjPRi/1/valgrind---snprintf--conditional-jump-or-move-depends-on-uninitialised-values
Does Office 2.0 include Voice 2.0? I'm stopping by the Office 2.0 Conference Thursday and Friday. When it started, Office 2.0 was document centric, bringing Microsoft Office to the web. Last year it became more metawork (work about work) and project/workflow oriented. Realtime talk remains off topic. There are a few contrary examples. Plutext.org enables live collaborative editing of Microsoft Word docs. So where do Office 2.0 and Talk 2.0 overlap? Simply, you have... Talk with Office features might look like Skype plug-ins for document co-writing. Call centric with talk experience enhanced by office tool. Office with Talk features might include collaborative spaces that add live chat room. There's room for service-to-service interop, but we haven't seen much. Three dimensions affect the uptake of this union: - Time structures - Engagement - Packaging Time Structures Nearly all Office 2.0 services are mostly asynchronous. While most Talk 2.0 services are nearly synchronous. But we're seeing some blending. For example, Blackberries turn email into instant messages. Persistent IM chat rooms keep history so you can catch up on a conversation. The other structure to time is that Live Talk is an event. It takes place in time. Divide each conversation into periods before, during and after a call. Before a talk, you have to discover people to engage, using a namespace, group affiliations, authentication of ID, permissions, white/yellow page directories, etc. You'll also want to schedule your conversation using calendars, project deadlines and services that find common time windows. If you're exceptionally lucky, someone has tools that map to-do lists to agenda items and reminder services. During a conversation, you can augment the experience. For example, adding live chats or conferencing backchannels to desktop sharing or collaborative writing exercises. After, you can add the conversation's debris to a team/project/process/transaction workspace. 
Or publish it to a blog/vlog/wiki/microblog, becoming part of your team's institutional memory, searchable, attributable. Degrees of Engagement You are more than an email address or Skype name. The more you share digitally, the closer your experience comes to feel like face-to-face contact. The higher the fidelity (wideband audio, high quality video) the higher you climb the ladder of engagement. Engagement brings people into a call, make it more real, vivid, increasing focus and participation. When embedded in an Office application, that engagement improves the quality of the work experience. Embedability OK, so you can design solutions that exploit Talk's time, engagement, and modality attributes. How do you add talk with as little effort and as much reliability and scalability as possible? I started off saying few Office 2.0 companies have Talk 2.0 features in their products. It's a little failure of imagination. Mostly, though, it's the companies that offer Talk 2.0 components haven't made them very embedable. What does it take to make Talk readily embedable? Web services. Web services let my servers talk to your servers. To start, you want access to a metatalk command language, creating accounts, groups, sessions and getting statistics, status, and reports. More, you want access to the content of conversations; the better to index and repurpose them. A startup can't force a customer to download 20MB software clients and keep them running on a desktop; they rarely have that sort of power. Browser clients. Flash and JavaScript downloads are small and cached. So you can access your Office/Talk service from nearly anywhere. Side benefit: you aren't tied into a Talk supplier's UI, you can adapt and adjust it to meet your changing needs and your deep understanding of the workplaces you support. The customer's name spaces. Skype commands the Skype user namespace, Microsoft Microsoft's, and so on. 
As an infrastructure provider, you have to go beyond that; you no longer control the customer relationship. Each Office 2.0 service will either have their own namespace ("thank you for registering at Octopz") or administer an enterprise's namespace ("set up the call using your company directory or org chart"). Security. Your security must be better than your customers' and much better than their customers' security. Commerce. Office 2.0 companies will charge for many services, so accounting, billing, automatic payments, and revenue sharing must be part of any Talk 2.0 service offer. Fidelity and Immediacy. Skype's been spoiling people with amazing audio quality. Skype sets expectations high. Wideband spectrum, noise reduction, echo cancellation, high resolution, fast frame rates, deep color depth, smart compression and other techniques are expected in rich clients like Skype. Thin/browser clients suffer from comparison but are in demand anyway. The same applies to the problems of latency, compute demand, and network connectivity. Skype makes it all seem easy but it isn't. Media access. Many services don't let you manipulate IMs, audio or video during a live session. Others won't let you get them after a session. Your Office 2.0 application may have excellent reasons for touching those streams or files, solving real customer problems. Widgets and other user-facing components. I'm still surprised at how many Voice 2.0 vendors don't make it simple for designers to add talk without knowing three programming languages and four APIs. Delivering Talk in ready-to-install UI components expands reach and embedability. How does Skype fit in? Skype doesn't. This is an architecture Skype cannot deliver today. Should Skype strive to? I believe so. Skype's downloads earn a measure of customer lock-in. But downloading is a barrier to adoption, a problem as people use multiple devices in their onlives, and an inconvenience. 
Browser-based talk solves these problems for Skype's own customers. Should Skype offer white label talk? Others are quickly filling that gap. Jajah has 9 white labeled users for each Jajah branded user. SightSpeed is very successful in private labeling and co-branding its services. Jaduka only delivers wholesale talk. BT/Ribbit has embedding as its charter. Voxeo is years ahead of Skype on its voice platform. An embedding strategy is within Skype's reach. The theme of 2009's Office 2.0 conference? I'm betting on talkification. tags: skype, jajah, sightspeed, jaduka, ribbit, voxeo, talk, talkification, talk 2.0, office 2.0, talk20, office20, embedding, embedability, voip Follow Phil Wolff on Twitter or FriendFeed or on Skype. Labels: architecture, Blackberry, business, competition, events, identity, mobile, skype, skypenomics, strategy, technology, Voxeo 2 Comments: super super post. agree with nearly everything here. would be surprised if Ribbit and Voxeo were both not going down this exact path. You bringing up some interesting ideas about convergence of technologies. It makes me think about what Google might do with its online docs programs and Grandcentral that it bought awhile back. Or Zoho and a voip company could do something like that to build more tools for collaboration around. This post certainly opens the imagination.
http://skypejournal.com/2008/09/does-office-20-include-voice-20-could.html
Are you sure? This action might not be possible to undo. Are you sure you want to continue? Money Sense Getting Smart With Your Money Published by: 1 2 .Agensi Kaunseling dan Pengurusan Kredit Level 8.my © AKPK First Edition 2008 The copyright of this book belongs to Agensi Kaunseling dan Pengurusan Kredit (AKPK). While all efforts have been made to make the information contained in this book accurate. It is not intended as a substitute for any advice you may receive from a professional financial advisor. Agensi Kaunseling dan Pengurusan Kredit (AKPK) disclaims all and any liability to any person using the information in this book as a basis for making or taking an action. ISBN 978-983-44004-0-8 Disclaimer: The information contained in this book is solely for educational purpose. Jalan Sultan Ismail 50250 Kuala Lumpur Fax: 03-2698 1575 E-mail: enquiry@akpk. This book or parts thereof.9. AKPK seeks your understanding for any errors or omission.org. Maju Junction Mall 1001. whether direct or indirect from such reproduction.2 . No monetary gain in any form should be made or derived.8. translated. or transmitted in any form with prior written permission from AKPK only for the sole purpose of education. may be reproduced. approximately 50.000 individuals have sought the free credit counselling and debt management services of the agency. Our 40 odd counsellors from 9 locations nationwide have worked passionately to assist many of these distressed borrowers and help put them back on track with their finances. Over the last 26 months. More than 50% of those who sought our assistance were below the age of 40.2 years. done with 80 young working adult graduates in various disciplines.FOREWORD The Credit Counselling and Debt Management Agency or more commonly known as Agensi Kaunseling dan Pengurusan Kredit (AKPK) was set up by Bank Negara Malaysia in April 2006. who had been working between 1 . 
All these working adults agreed that early personal financial education is important and believe that if they had been taught personal financial education. I feel truly honoured to lead this organization from its inception. which prioritizes on promoting personal financial literacy to all Malaysians. 3 . This claim was further endorsed by the findings of a focus group study. Many of them said that they got into financial trouble because they had low level of understanding of basic financial literacy. they would be better able to manage their finances. made us to come out with ‘Money $ense. It is important that we. Money $ense – a book about getting smart with your money is indeed a common sense guide on money matters.These findings and real life encounters by our counsellors with financially-distressed young adults. as how you use your money today will have an impact on how you live tomorrow.’ Money $ense aims to teach you skills on how to manage your money wisely so that you can spend within your means and achieve your financial goals with ease. Money $ense illustrates how important it is to always be financially prepared for life’s uncertainties.” Mohamed Akwal Sultan Chief Executive Officer July 2008 4 . Besides that. The book also puts emphasis on being in control of your finances regardless of whether or not you are facing financial problems. “Make Prudent Financial Management Our Way of Life. Our deepest appreciation also goes to both the previous and present Ministers of Higher Education for their foresight in recommending personal financial education as a subject to be included in all universities. Our heartfelt thanks goes to all the AKPK counsellors who provided valuable feedback on issues affecting thousands of financially distressed individuals they have assisted. a special thank you for the dedication and continuous support towards our mutual objective of promoting financial literacy. 
We also extend our gratitude to the Vice Chancellors of the institutions of higher learning in Malaysia. To Bank Negara Malaysia. thank you to the rest of you for supporting the agency’s vision by working tirelessly towards the development of this book. without hesitation agreed to make this subject available for their respective institutions. Last but never the least. Mohamed Akwal Sultan Chief Executive Officer July 2008 5 .ACKNOWLEDGEMENT Many individuals worked tirelessly on the development of this book. who had. Their input helped us develop topics we believe will assist young adults avoid similar pitfalls. Internet Banking 39 PART B – ACHIEVING YOUR DREAMS ○ ○ Chapter 4 Building Your Wealth - 52 The Saving Habit Increasing Your Net Worth Types of Investments Financial Scams 6 .Understanding the Value of Money ○ ○ ○ ○ ○ ○ ○ Chapter 2 Power From Your Money 19 .About Financial Planning .Budgeting Basics .Living Within Your Means Chapter 3 Financial Products and Services -Banking Facilities -Islamic Banking .CONTENTS PART A – GETTING STARTED ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ Chapter 1 Why Everyone Needs Financial Planning 9 .Setting Your Financial Goals . Matters of the Heart ..Your Career . Charge Cards and Other Cards .Your Car .Your Retirement ○ ○ Chapter 8 Getting Out of Financial Trouble .In Financial Trouble .Takaful .Loans and Credit .Agensi Kaunseling dan Pengurusan Kredit (AKPK) 119 APPENDICES ○ ○ ○ ○ ○ ○ Chapter 5 Planning for Uncertainties 65 78 98 7 .Looking for the Signs .Why You Need Insurance .Types of Insurance .Credit Cards.More About Insurance PART C – MANAGING DEBT ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ Chapter 6 Borrowing Basics .Your House .Types of Loans .Repayment and Default PART D – REAL-LIFE STRATEGIES ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ ○ Chapter 7 Living Your Life . PART A GETTING STARTED 8 . ask yourself questions and then make your decision. medium-term or long-term. 
you should be looking as far ahead as your retirement years.1 CHAPTER 1 WHY EVERYONE NEEDS FINANCIAL PLANNING ABOUT FINANCIAL PLANNING What do you do when you want to get somewhere? You probably ask yourself – What is the best way to get there? Will there be traffic jams? Is it better to take the LRT or bus. Planning can be for the shortterm. Ideally. To get the best option. except that the time frame is over a longer period. 9 . It is the same in financial planning. evaluate the ones available to you. These are steps in the process that is known as planning. or should I get someone to drive me there instead? There are therefore many options open for you to choose from. Think about what you need to do. you look at how you will be budgeting. To achieve your life dreams and goals.Financial planning involves asking questions about your future. 2 Setting goals. 10 . 4 Implementing the plan. Benefits of financial planning Many people think that financial planning is a hassle and that it stops them from doing fun things. buying a car or a house. 3 Creating a financial plan. your dreams and goals. having children and planning for their education. you need to plan from the financial aspect. you can always budget your money in such a way that you have some for going out with friends and having a good time. saving and spending your money over time. wouldn’t you? Think about it. 5 Monitoring and reassessing. We will talk more about these steps in the following chapters of this book. Steps in financial planning There are five steps in financial planning: 1 Assessing where you are now in financial terms. It is thinking about what you want to do in your life. such as getting married. if you have to save. In financial planning. If you consistently live on a budget surely you would have to give up fun activities now. not only for yourself but also for the people you love. 
because you are happy with your life and you are not going around borrowing money to make ends meet or expecting handouts from others. such as your family. bankruptcy or dependence on others. have a sense of freedom from financial worries because you have planned for the future. anticipated your expenses and achieved your personal goals in life. you can always ensure your have enough for more fun in the future! With your very own financial plan. have better personal relationships with people around you. be more effective in obtaining. friends and colleagues. when you have a good personal financial plan.If you set a good financial planning habit. In other words. you will: have more control of your financial affairs and be able to avoid excessive spending. 11 . you will be more informed about your future needs and the resources that you have. You will also have peace of mind knowing that you are in control. unmanageable debts. using and protecting your financial resources throughout your lifetime. Your Personal Financial Planning Process 12 12 . Life stages and financial goals In your adult life. Discipline and perseverance play a key role in maintaining a reliable financial strategy. you need to save your money! 13 . responsibilities and financial goals. from being single to getting married. As your needs are different your financial priorities will adjust to meet your varied needs at different points of your life. having children and sometimes being single again. When you are in your 20s. Each stage of your life presents different investment opportunities and challenges. you will be looking at money and spending it differently from when you get into your 50s. Therefore. Sound financial planning can prepare you to meet them successfully. For example. from starting a career to retiring. Nonetheless. Later. After you get married. you may want to buy a house. what you do with your money as you go through your adult life depends on your financial goals. 
when you have children. it is worthwhile to point out here that to achieve your financial goals. you have different priorities. As your life changes. so do your needs and goals. At various phases in your life. We will be looking at how you can achieve your financial goals in the following chapters of this book. you probably want to have enough money to make a down payment for a car or go on a holiday with your friends. you will go through various stages. when you are single. you would want to plan for their education and maybe even start a retirement fund. This is because from now to a given time in future it would continually yield interest. at the present time.UNDERSTANDING THE VALUE OF MONEY Time value of money Imagine that you are offered a sum of money and asked to choose whether you want the money now or one year later. i. Better yet think about this. what could it buy you today and what would it be able to buy you in the future? Now would you choose the money now or money later? You can see that it is really not that tough of a question. Anyone and everyone would surely choose to have the money now.e. you would know that money you have now.000 x 1. This is a key principle of economics that states as long as money can earn interest. On the other hand. Say you deposit money into an interest bearing savings account at a 5% interest rate. also known as ‘present discounted value’. Here multiplication is used when the ringgit amount is deposited in an interest bearing account.38 today (RM1. This concept illustrates the time value of money.050 in one year (RM1.05). Now let us understand this idea. is worth more than the same amount in the future. RM1. RM1.000 received one year from now is only worth RM952.000 saved today will be worth RM1. what could RM 1 buy you in 1990. Instinctively. 14 . any amount of money is worth more the sooner it is received.000 divided by 1.05). i.000. When all three retire at age 55. instead of some time later? 
Please refer to the example of three individuals – Ahmad.Division is used to represent the losses that arise during the period that a ringgit amount is not in an interest bearing account. Siti at age 22 and Zainal at age 30. he has also invested the least amount of money. i.e. All things being equal.000. The power of compound interest How important is it to begin putting aside money for savings right now. whereas Siti has RM449. the present value of money is greater than the value of the same amount of money at any given time in the future. He has RM467.e. Ahmad has not only more money at age 55. which earns the same interest return of 10% per year.913.773 and Zainal has RM360. Siti and Zainal – in APPENDIX 1. It is that simple! From this illustration you can observe that money has a time value. Ahmad has more money than Siti and Zainal. RM15.300. RM3. Each of them consistently invests the same amount of money. But they start investing at different ages – Ahmad at age 18. 15 . 000 investment mentioned above.289! Can you believe that?! 16 . not only will you get interest on the original investment.000 in the bank that draws 5% interest per annum.000. This is called compounded interest. It is the additional interest earned on top of the original saving amount plus the interest received.025. the earlier you start saving. will be worth RM16. The RM10. Compound interest is important to investors who are able to leave their investment to grow over long periods of time. interest applied to interest. If you leave the entire amount in the bank for another year. you will then have RM11. you will have RM10.000 and Zainal who invested RM75. This simply means the more money you keep aside now.the faster you can fulfill your dreams. The power of compound interest is that with compound interest. In the second year. the greater the interest accumulated on your original investment. 
The outcome in the example above is due to the effect of compound interest.compared to Siti who invested RM24. When is the best time to start saving? Well NOW of course! How compound interest works – annually and monthly If you put RM10.e. you also receive interest on the interest you earned the prior year.500 at the end of the year. when invested for 10 years at 5% per annum. i. 42% (5% per year divided by 12 months). In contrast. Compound interest can be what we call a double-edged sword. it will be worth RM16. This is because if you delay your loan or credit card repayment for a longer time. 17 . eventually making it increasingly difficult for you to settle your loan or credit card debt. If the same amount of RM10. which is RM112 more than if invested at a yearly rate of 5%. the monthly interest rate is 0. you will gain more if you invest in an investment that pays interest on a monthly instead of yearly compounded basis.If the interest rate of 5% is compounded on a monthly basis.401.42% per month and invested for 10 years.000 is invested based on 0. you will be charged more interest. Therefore. if you have a loan or credit card debt. It can work both to your advantage and disadvantage. It can help give you more return on your investment as the benefit of compounding interest means you will earn more interest income the longer you keep your money invested. you can end up paying more interest if these debts are calculated on a compounded interest rate. It is important to begin saving your money NOW. The sooner you start to save. Compound interest is a doubleedged sword. security for your future and a better quality of life. you will end up paying more interest. You will earn more interest if your investment pays interest on a monthly instead of yearly compounded basis. 18 . If you delay payment on a loan or credit card debt. the greater the benefit of compounded interest. Financial planning is essential in achieving your life’s dreams and goals. 
money you have at the present time is worth more than the same amount in the future. Provided that money can earn interest.IN A NUTSHELL Financial planning is important to provide you with peace of mind. but worst of all. They make move after move. Goals help give you direction. not achieving anything. as they guide you towards achieving the life that you want.2 SETTING YOUR FINANCIAL GOALS CHAPTER 2 POWER FROM YOUR MONEY Why is this important? In a football game. adopt one strategy after another. the objective of the players is to get the ball into the opposing team’s net. Now could you imagine playing football with no objective? All your favorite players will simply run around this huge green field with a ball. it is important to note that money is a vital tool to help you 19 . all to bring the ball forward and into the goal. Without goals you may wake up one day just to realize that the best of your years has gone by and you have yet to achieve anything! Aside from being an essential aspect of daily life. you also need to have objectives or goals in your life. not going anywhere.not scoring any GOALS! There is no football game without an objective. Just like football players. When you understand what your values and beliefs are. What do you mean by “lots of money”? Is it RM50. Break down your goals into those that are shortterm. 20 20 .000 or RM500. Without knowing what is important to you personally. medium-term and long-term. Just having these goals in your thoughts are not enough. When setting your financial goals. it will be difficult to set satisfying financial goals. One of the steps in this process is to set your financial goals. pay attention to what you value and believe in life. Writing down financial goals will increase your chances of achieving them.000. How to set your goals Your financial goals will help you determine where your money will come from and what it will go towards. 
You are very likely to forget the goals that you have set, or you may even have unconsciously changed them in your mind. As discussed in Chapter 1, you need to manage your money through financial planning. Write down your financial goals. When writing down your financial goals, be as specific as possible. What is the point of writing: "My goal is to have lots of money in the bank"? Be specific and write your goals in terms that can be measured.

Simple Steps in Setting Your Financial Goals

Examples:
- Short-term goals (less than 1 year): To save RM5,000 in six months. To pay the deposit for a new car.
- Medium-term goal (1-3 years): To pay the deposit of RM20,000 for my first house.
- Long-term goal (more than 3 years): To save RM100,000 within five years for my retirement account.

Use the table below to help you in writing down your financial goals:

My Financial Goals
Short-term | Medium-term | Long-term

An important goal: saving for emergencies

What would happen if you suddenly could not afford to pay for your education? Would you sacrifice your goal of attaining a degree, or would you have a back-up emergency plan? In life, there are many uncertainties that you might face. Unexpected events, which may be disastrous and unfortunate in nature, can happen to you or members of your family. From a minor breakdown of your vehicle to the more serious death of the sole breadwinner in your family, unexpected events are, well, unexpected. In most of these situations, money would be needed. It is extremely important that you are always prepared with the right tools and knowledge for situations that require you to think on your feet and deal with problems you might not be used to otherwise. An emergency fund is one such tool you can use. When you list your financial goals, include saving for an emergency fund. As a general guideline, have an equivalent of at least 6 months' worth of your basic living expenses in your emergency fund.
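The emergency-fund guideline (six months of basic living expenses as a minimum, twelve if possible) is easy to turn into a quick check. A minimal sketch, using the booklet's own RM1,500-a-month example:

```python
# Emergency-fund sizing per the guideline above: 6 months minimum,
# 12 months desirable. The RM1,500 figure is the booklet's example.

def emergency_fund_range(monthly_expenses):
    """Return (minimum, desirable) emergency-fund targets in RM."""
    return monthly_expenses * 6, monthly_expenses * 12

minimum, desirable = emergency_fund_range(1_500)
print(f"Minimum (6 months)   : RM{minimum:,}")    # RM9,000
print(f"Desirable (12 months): RM{desirable:,}")  # RM18,000
```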
Your basic living expenses include fixed payments, such as housing loan or rent and insurance premiums, as well as electricity and water bills. It is desirable to put aside about 12 months' worth, if possible.

Example: If you need about RM1,500 a month to pay for your living expenses, you should have at least RM9,000 in your emergency fund (i.e. RM1,500 x 6 months). If possible, keep aside RM18,000 in the fund (RM1,500 x 12 months).

It might be hard at first, when you start working, to have that kind of money kept aside, but make sure you build it over time. Every little amount will help build your emergency fund. Remember to make a conscious effort to save.

Assets and liabilities: what you own and owe

In financial planning, you need to assess where you are now in financial terms, i.e. how much money you have, how much money is left after making the various payments, and what you own and what you owe. When doing this, two types of personal financial statements come in handy:
- Your personal balance sheet.
- Your cash flow statement (discussed in the section on "Budgeting Basics" of this chapter).

These statements help you:
- measure your progress in meeting your short-term, medium-term and long-term financial goals.
- provide data you can use when preparing tax forms or applying for a bank loan.
- maintain information about your financial activities, such as investments and spending patterns.

A personal balance sheet is your financial scorecard. It can be a reference point in making money-related decisions. Your personal balance sheet reports on what you own and what you owe:
- What you own (assets): items such as cash, savings, real estate, unit trusts or shares in companies.
- What you owe (liabilities): all types of loans, whether to your bank, family or friends, payments that are due, such as house rental and utility bills, as well as credit card debt.
These statements will help you to provide information about your current financial position and a summary of your income and expenditure, which you can use to regularly assess your financial standing.

An example of a personal balance sheet is provided below, which you can use as the basis to prepare one for yourself. It has an Asset (RM) column and a Liability (RM) column, and lists:
- Bank accounts: savings accounts, current accounts, fixed deposit accounts, cash on hand.
- Properties: apartment, house, land, jewellery, car.
- Investments: Employee Provident Fund, unit trust, shares.
- Bank loans: credit cards, study loan, borrowing from friends & family, hire purchase of furniture & electrical goods.
- Total and Net Worth.

This personal balance sheet has a positive net worth, because the value of the total assets is more than the total liabilities. If you have a positive net worth, congratulations! This means that you are in a healthy financial position. However, having a high net worth does not guarantee that you will never face financial difficulties. You can have a high net worth and still be in for a rough time. Let us see how this is possible.

Say you have a house as your asset (where you live in). Out of a job, with no possible way of making one ringgit, you need money for your daily expenses, so you decide to sell your house for money to support your expenses. Here is the problem. The house, not being a liquid asset, is not easily sold, and finding a buyer may take several months. It is also where you and maybe your family live; you really cannot sell your home unless you have somewhere else to go. The most pressing thing now is, you need money for your daily expenses. So how is it possible for someone with a positive net worth to get into problems? Assets that are not liquid! When assets are not liquid (easily converted into cash), there could be potential problems looming ahead.
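The gap between looking wealthy on paper and actually having cash to spend can be sketched by tracking which assets are liquid. All figures below are hypothetical examples, not from the booklet's balance sheet:

```python
# Sketch of the situation above: net worth vs cash you can actually reach.
# Each asset is flagged liquid (easily turned into cash) or not.

assets = {
    "savings account": (2_000, True),     # liquid
    "house":           (250_000, False),  # illiquid: selling can take months
}
liabilities = {"credit card": 5_000}

total_assets = sum(value for value, _ in assets.values())
liquid_assets = sum(value for value, liquid in assets.values() if liquid)
net_worth = total_assets - sum(liabilities.values())

print(f"Net worth    : RM{net_worth:,}")      # RM247,000, looks healthy
print(f"Liquid assets: RM{liquid_assets:,}")  # RM2,000, not enough to live on for long
```

A large positive net worth with almost no liquid assets is exactly the trap described in the house example.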
It is easy to conclude that being financially healthy means having a balanced portfolio of assets, so that you will not be short of cash at any time. That way you can ensure that financial freedom will be in your grasp.

Knowing your net worth

Your net worth is your total assets minus your total liabilities. You will have a positive net worth if you own more than what you owe. When you owe more than you own, you have a negative net worth. Your net worth gives an idea of your financial position on a given date. Do not consider your non-cash items as cash, as they may not be easily disposed of. If you do not have cash in your wallet or bank account and you have already defaulted on your credit card payments, you are unable to pay your debts when they are due, because you do not have enough money or assets that can be easily converted into cash. In this situation, you are actually in financial trouble and may be made a bankrupt (read more about insolvency and bankruptcy in Chapter 8).

Deriving your net worth

You derive your net worth by totalling what you own and subtracting what you owe, as in the personal balance sheet above. There are several ways you can increase your net worth. These include:
- Increasing your savings.
- Reducing your spending.
- Reducing your debts.
- Selling some of your non-income generating assets/belongings.

BUDGETING BASICS

Budgeting and spending plan

In financial planning, it is important to prepare a budget (a blank budget sheet is attached in APPENDIX 3). Prepare your budget at the beginning of the month or on the day when you receive your monthly salary. True, living according to a budget requires a lot of discipline, but it helps you to:
- live within your monthly income, with regular check-ups of your cash flow and net worth.
- prepare for financial emergencies.
- keep aside money or savings.
- reach your financial goals.
- develop good financial management habits.
When you prepare your personal budget:
- Refer to your financial goals. For example, if you have targeted putting a down payment on a car in one year, make sure you do monthly checks to ensure you are keeping money away towards your goal.
- Estimate your income for the budget period. This covers your salary, commissions, allowances and other sources of money.
- Put aside at least 10% of your income for your savings (20-30% of your income as savings will be better, because you are creating a bigger pool of money for your future retirement). Aside from that, put aside some money for your emergency fund.
- Estimate fixed expenses for the budget period. These are expenses that must be paid or spent, and include house rental, loan installment payments, credit card payments and insurance premiums.
- Also estimate variable expenses for the period. These cover items such as petrol, groceries, and electricity and water bills.
- Estimate your discretionary expenses, for items that you can choose whether to spend or not spend. They include gifts, hobbies, entertainment and holidays.
- Compare your budget to your financial goals to see whether or not you are achieving them.

To prepare a successful budget, remember:
- Be patient and disciplined. A good budget takes time and effort to prepare. Do not give up because you feel that there is too much to do!
- Be realistic. Do not expect to save a lot of money in a short period of time, especially if you have a moderate income. You are young and will want to have a night out with friends or to watch a movie, so set aside an amount of money to enjoy yourself.
- Be flexible. There will be unexpected expenses and changes in the prices of groceries and other items. Revise your budget when needed.

Remember: a budget will work only if you follow it!

Tracking your cash flow

Preparing your personal budget is not the end of the process. You need to monitor your actual spending every day, especially where your money is going. Another term used for this is managing cash flow: the actual inflow and outflow of cash during a given period of time.
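The budgeting steps above (pay yourself first, then estimate fixed, variable and discretionary expenses) can be sketched as a small calculation. The category amounts here are hypothetical, not the booklet's worked example:

```python
# Minimal monthly-budget sketch following the steps above:
# savings first, then fixed, variable and discretionary expense estimates.

def monthly_budget(income, fixed, variable, discretionary, save_rate=0.10):
    """Return (savings put aside, money left over) for the month."""
    savings = income * save_rate
    expenses = sum(fixed.values()) + sum(variable.values()) + sum(discretionary.values())
    return savings, income - savings - expenses

savings, left = monthly_budget(
    income=3_000,
    fixed={"rent": 500, "loan installment": 400},
    variable={"food": 450, "petrol": 200, "utilities": 150},
    discretionary={"entertainment": 150, "gifts": 50},
)
print(f"Savings put aside: RM{savings:,.0f}")  # RM300
print(f"Left over        : RM{left:,.0f}")     # RM800
```

A negative "left over" figure would be the signal, discussed below, that planned spending exceeds income and the budget needs revising.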
Your most important inflow is probably your income from employment. You may, however, have other sources of income, such as a business income and interest earned on savings and investments. Outflows would be living expenses, loans and other financial commitments.

Your budget tells you how much is your planned income, saving and spending to achieve your financial goals. Your cash flow statement tells you what you received and spent in terms of cash over a period of time. Your balance sheet outlines your financial net worth.

If you have a cash surplus, that is fantastic! Put the money away in your savings. However, if you have a cash deficit, take another look at your spending. If you need to decrease your spending, look at expenses you can do without or cut down; a good idea is to take a look at expenses involving food and entertainment. Try postponing any purchases or payments for the time being. Try not to use your emergency fund unless it is absolutely necessary. If you have to use your credit card, use it as your last resort, as using your credit card will only add towards expenses for the coming months. You may even have to revise your financial goals, if some of them are not realistic in relation to your monthly income.

In preparing next month's budget, base it on the balance brought forward from the previous month. Make sure you review and revise both your budget and spending plan regularly.
Below is an example of a statement that combines a personal budget and cash flow statement. In this example, the person has spent more than his budget due to unforeseen circumstances, i.e. a car breakdown and extra traveling using the car, resulting in additional spending on petrol, toll and parking expenses as well as food.

Personal Monthly Budget with Personal Monthly Cash Flow in one Statement

Monthly Income                                   Budget     Actual Cash Flow
Job #1 (net of EPF, SOCSO and PAYE tax)          RM3,450    RM3,450
Job #2                                           RM0        RM0
Other sources                                    RM0        RM0
Total monthly income                             RM3,450    RM3,450
Less monthly fixed savings (10% of income)       RM345      RM345
Less savings for emergency funds                 RM100      RM100
Monthly income net of savings                    RM3,005    RM3,005

Less monthly fixed expenses
Rental of room                                   RM300      RM300
Car payment                                      RM650      RM650
Car insurance                                    RM0        RM0
Total monthly fixed expenses                     RM950      RM950
which is not absolutely necessary.LIVING WITHIN YOUR MEANS Knowing your needs and wants When you live within your means. Jewelry is a want because you do not really have to wear it for survival. 36 . you are spending wisely and living within your means. consider not buying it or spending less for something similar so that you can put more into your savings. your financial future. ultimately. it is easy if you just learn to say ‘no’ to purchases you cannot afford. ask yourself – Is it something that I need? Can I afford the money to buy it? Your personal budget and cash flow will help you answer this question. The decisions that you make in regard to what you need or want will affect your budget and your monthly spending.Knowing the difference between a need and a want. If it is a want. Making sensible purchasing choices and spending wisely will prevent you from creating financial difficulties for yourself and others. Spending wisely When you want to buy something. Handle your personal finances in a responsible manner. will make a significant impact on your spending behavior and. When you do this. Check if you have previously allocated to spend on this item or have already overspent your cash for that month. it will be the sales promoter at the hypermarket asking you to buy their product. Throughout your adult life. after you finish your homework? Do you remember asking your father if he could buy you a new bicycle and he said that you could have it later. using the public transport might be a good alternative to moving around the city. You will save money and furthermore contribute positively to the environment. there will always be someone that will try to influence you into spending your money. 37 . there are always alternatives to choose from. you are one step closer towards achieving your financial dreams. Take the time to think whether it is necessary to spend the money and if it is something you can afford within your budget. 
Change your perceptions.Delaying gratification Do you remember your childhood days when you wanted an ice-cream and your mother said that you could have it later. your parents were teaching you to delay gratification. In life. Remember when you say ‘no’ to spending money now (by delaying gratification). If it is not your friend asking you to go out to dinner. if you did well in your exams? In both situations. Instead of buying a car. if necessary. Look into different substitutes for your needs and wants. Preparing a budget and tracking your cash flow is part of an ongoing process that requires patience.IN A NUTSHELL Setting financial goals are important to achieve security and financial freedom. Saving for an emergency fund should be one of your financial goals. medium-term and long-term goals. Write down your financial goals in specific terms. 38 . Your personal balance sheet shows your assets and liabilities and net worth at a given point in time. It is very important to live within your means. then categorise them into short-term. Your financial goals must reflect your values and beliefs. discipline and flexibility. bankinginfo. These pamphlets are available in any bank branches around the country.com. If you cannot get them at a bank branch near you. 39 . log on to BNM’s website at Basic banking services All banking institutions offer basic savings accounts (BSA) and basic current accounts (BCA) to Malaysians and permanent residents.3 CHAPTER 3 FINANCIAL PRODUCTS AND SERVICES BANKING FACILITIES Bank Negara Malaysia (BNM) has produced a series of informative pamphlets on banking products and services. These accounts can be operated under individual or joint names. ATM: Balance enquiries. It is available at all banking institutions in Malaysia. You can apply for an ATM card. fund transfers within the same bank and bill payments. bill payments and fund transfers. which allows you to withdraw money from the bank’s ATMs and perform other transactions. 
They operate in the same way as ordinary savings or current accounts do; the difference is that you can only perform basic transactions with these accounts, at a minimal cost or for free. You are only allowed to open one BSA and one BCA per bank.

Savings account

A savings account (SA) allows you to deposit your money into an account and receive certain interest, with no stated maturity. It is available at all banking institutions in Malaysia. The minimum deposit to open an SA differs from bank to bank. You can make deposits into or withdrawals from your SA whenever you need to. Some of the common transactions you can perform are:
- Cash or cheque deposits.
- Withdrawals.
- Fund transfers within the same banking institution.
- Interbank transfers (GIRO).
- Bill payments.
- Account enquiries.
- Online banking.

Current account

A current account (CA) is a deposit account that can be used for either personal or business purposes. It is available at all banking institutions in Malaysia. A CA allows you to use cheques to make payments. Some of these institutions offer interest-bearing CAs, while some do not. You will be able to do the following as a CA holder:
- Cheque payments.
- Cheque and cash deposits.
- Withdrawals.
- Fund transfers within the same banking institution.
- Interbank transfers (GIRO).
- Bill payments.
- Account enquiries.
- Online banking.

Monthly statements will be delivered to you free-of-charge every month to help you keep track of your transactions.
Things to remember with both your SA and CA
- Keep your account numbers and PINs confidential.
- Keep your account statements, passbook and ATM card in a safe and secure place.
- Destroy or shred cancelled cheques.
- Immediately inform your bank if you have:
  - lost your ATM card, passbook, cheque or chequebook
  - changed your address or telephone number
  - noticed any discrepancy in your bank statement.

Malaysia Deposit Insurance Corporation (Perbadanan Insurans Deposit Malaysia or PIDM)

In September 2005, the Malaysian Government introduced the deposit insurance system, which is managed by PIDM. This system protects depositors against the loss of their deposits if a member institution is unable to meet its obligations to depositors. The deposit insurance is automatic; you as a depositor do not have to do anything for it to take effect. You also do not have to pay any fees for the deposit insurance. The deposit insurance currently covers businesses and individuals up to a maximum of RM60,000 per depositor per member institution. The types of deposits covered include savings and current deposits, fixed deposits and investment deposits, including the principal balance and interest or return. For more information, go to www.pidm.gov.my
Financial Mediation Bureau

The Financial Mediation Bureau (FMB) is an independent body set up to settle complaints, disputes or claims between you and financial services providers, who are members of the FMB, as an alternative to going to court. The disputes may be related to banking, financial, insurance or takaful matters. The FMB will only deal with a complaint that has initially been lodged with the institution concerned. The following matters, however, will not be considered:
- General pricing, product policies or services of the members.
- Credit decisions (approval, rejection and rescheduling of loans).
- Fraud cases (other than fraud cases involving payment instruments, credit cards, charge cards, ATM cards and cheques for which the limit is not more than RM25,000).
- Cases that are time-barred for more than 6 years.
- Cases that have been or are referred to the court and/or for arbitration.

Click on www.fmb.org.my for more information.

Always compare amongst the various banks to get the best deal for yourself. Banks differ in the terms and conditions relating to their products and services. Make sure you know what these are.

ISLAMIC BANKING

Islamic banking is a banking system that follows Shariah (Islamic law) principles. The purpose of Islamic banking is similar to conventional banking, except that Islamic banking operates according to Shariah rules on transactions, known as Fiqh al-Muamalat. Guided by Islamic economics, it prohibits riba (collection and payment of interest), usury, trading in financial risk and haram (unlawful) business ventures. The basic principles of Islamic banking are the sharing of profit and loss and the prohibition of riba. Contrary to what some people might think, Islamic banking is for all individuals, regardless of their religious beliefs.

In Malaysia, we have the opportunity to choose between the conventional banking system and the Islamic banking system. There are banks in our country that are solely Islamic. In addition, conventional banks have also set up divisions that specialise in Islamic banking. Amongst the common Islamic concepts used in Islamic banking are profit-sharing (Mudharabah), safekeeping (Wadiah), joint-venture (Musharakah), cost-plus (Murabahah) and leasing (Ijarah). Islamic banking in Malaysia is monitored by the National Shariah Advisory Council set up by BNM. Islamic banks and conventional banks that offer Islamic banking products and services are required to establish their own Shariah advisory committee or appoint consultants to advise them and ensure that their operations and activities comply with Shariah principles.
INTERNET BANKING

Banks in Malaysia are now encouraging their customers to do as much of their banking on the Web. Internet or online banking refers to performing banking transactions over the computer through a bank's secure website. Internet banking has advantages for you as a customer. You can bank from the comfort of your home or office, and from anywhere you can get Internet access, during or outside of banking hours. No special software or hardware is usually required; all you need is a web browser and a normal Internet connection.

The services offered through Internet banking allow you to:
- check your bank balances and statements.
- transfer funds between accounts (own and third party).
- pay bills, loan instalments and insurance premiums.
- check the status of your cheques and request for stop cheque payments.
- request for cheque books and statements.
- create, change and cancel standing orders.
- place fixed deposits.
- apply for Bank Drafts and Telegraphic Transfers.
- submit applications for new accounts, credit cards and loans.

You sign up with a bank that offers the service to you with existing accounts, such as a savings account, current account, credit card and loans. There are also banks that offer special accounts designed for Internet banking, for existing as well as new customers. Shop around for the best deal. Make sure the features of the special accounts are suitable for you, and do not get attracted by special offers that may not be relevant or meet your needs. You can go to BNM's website for a list of banks offering Internet banking.
Subscription to Internet banking services is currently free, although this may change depending on the banks. Fees may also be charged for some services, such as interbank fund transfers, telegraphic transfers and requisition for new cheque books. Check with your bank regarding their fees. Overall, banks may offer more competitive fees for using the Internet to conduct your banking activities as compared to using other channels, such as the branch counter service.

Secure banking on the Internet

There are concerns about the safety of information and the security of transactions on the Internet. Those involved in the business, including banks, are continually working towards making the Internet a safe and secure channel. But the risks will always be there. Internet banking offers a safe way to conduct your banking transactions when adequate security precautions are taken:
- Keep your Login ID, password or PIN confidential.
- Change your password or PIN regularly.
- Do not store your Login ID and password or PIN on the computer.
- Always make a print-out of each banking transaction that you do using the Internet. This will allow you to refer to it if there is a mismatch of your balance with the statements the bank sends you. It also serves as an easy reference for you and the bank staff if you want to trace the transaction.
- Check that you have logged into the right website. Ensure you are in a secure website by checking that the Universal Resource Locator (URL) begins with 'https', and look for a display of a closed padlock symbol on the status bar of your browser.
- Avoid sending any personal information, particularly your password or PIN, via ordinary e-mail.
- Do not have other browser windows open while you are banking online.
- Avoid downloading files or software from sites that you are unfamiliar with, or clicking on hyperlinks sent to you by strangers.
- Avoid using shared or public personal computers to conduct your Internet banking transactions.
- Do not leave your computer unattended while connected to the Internet banking service.
- You are advised to disable the 'file and printer-sharing' feature on your operating system.
- Protect your personal computer from viruses and malicious programs by installing an up-to-date firewall and a reputable anti-virus program. These firewall and anti-virus programs must be updated regularly to be effective.
- Check your transaction history details and statements regularly.
- Always sign off at the end of each session, and clear the memory cache and history after logging out from the website.

It is good to meet the customer service officer to understand these fully before you decide to use Internet banking.
There is always the possibility of something going wrong or of you making a mistake. If you encounter any problems, contact the customer services of your bank immediately with details of the transactions and problems encountered. Your bank will have records of all your transactions and will be able to assist you in this matter. As with everything else that you do, read and understand the terms and conditions of the Internet banking service that you have signed up for, particularly in regard to unauthorised or fraudulent transactions and in case you have complaints.

IN A NUTSHELL
- Banks offer a variety of products, services and facilities to meet your financial needs. Consider various aspects when choosing the right bank for you.
- Islamic banking offers you an alternative to conventional banking. Contrary to what some people may think, Islamic banking is for everyone, regardless of religious belief.
- Internet banking is a convenient way to do your banking.

PART B
ACHIEVING YOUR DREAMS

CHAPTER 4
BUILDING WEALTH

We have discussed in Part A that the way to become smart with your money is to build your wealth by owning more than you owe. In Part B, we will go further into discussing how you can build your wealth and, at the same time, plan for uncertainties.

THE SAVING HABIT

Make your saving automatic

A saving plan is an essential part of your financial plan. Without a saving plan, you will not be able to achieve your financial goals. We suggest that you save at least 10% of your salary every month. It is even better if you can save 20-30%, because this will translate into more money for your future. Remember that the more you save now, the easier it would be to achieve your financial goals.

There are several ways that this percentage of your monthly salary can be put into your savings account in the bank. You can:
- write out a cheque every month and deposit it into a savings account.
- carry out the transfer on the ATM.
- transfer money from your current account to your savings account via Internet banking every month.
It is good if you are doing any of the above. However, after a few months, you may forget to do so or find some reason to use the money for something else instead of putting it into your savings account. You would have broken the pattern and, once broken, it is possible that you will not get back to your saving plan.

So how do you make sure that you keep to your saving plan? Simple: by making it automatic! Give an instruction to your bank to transfer at least 10% of your monthly salary from your current account to your savings account every month. Have the transfer done as soon as your salary is credited into your current account. What you do not see or have, you would not miss. In the meantime, the amount of money in your savings account will just grow and grow, bringing you closer to your financial goals. As and when you can afford it, such as after you have received a raise or promotion, instruct your bank to increase the amount to be transferred to your savings account.

When you have saved the total amount of money that you had planned, transfer the whole sum into a fixed deposit or some other account that can earn more returns. However, continue to instruct your bank to make the monthly deduction. Never stop it, as this would break your habit. The money that you are 'automatically' saving can go towards another financial goal.

INCREASING YOUR NET WORTH

Building wealth is about increasing your net worth, which we covered in Chapter 2. Your assets minus your liabilities equals your net worth, which can be positive (assets more than liabilities) or negative (liabilities more than assets). In this chapter, we will focus on increasing your assets through savings and investments.

Your investment goals

Once you are saving on a regular basis, you will need to start making important decisions about how to invest your money. You can invest money sensibly by first stating your investment goals. Now how do you do this? Easy. But how do you come up with investment goals that fit your needs?
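One of those decisions is how much to put aside and for how long. A minimal sketch of the automatic-transfer habit described above; the salary matches the budget example earlier, while the RM5,000 target echoes the short-term goal example, and both are illustrative assumptions:

```python
# Sketch of automatic saving: a fixed share of salary is transferred
# every month until a savings target is reached.

def months_to_goal(salary, save_rate, goal):
    """Count the months of fixed monthly transfers needed to reach a goal."""
    monthly = salary * save_rate
    saved, months = 0.0, 0
    while saved < goal:
        saved += monthly
        months += 1
    return months

print(months_to_goal(3_450, 0.10, 5_000))  # 15 months at RM345 a month
```

Raising the rate from 10% to 20-30%, as the booklet suggests, shortens this timeline proportionally.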
Well, there are some crucial questions you should think about when coming up with your investment goals. Let us take a look at some questions below:
- What are your financial goals?
- Why do you need to save and invest your money?
- How much money do you need to save, and how much to invest, to achieve your goals?
- How long do you have to save or invest your money to achieve your goals?
- How much risk are you willing to take?
- How much return do you expect from your savings or investments?
- What sort of sacrifices are you prepared to make to achieve these goals, e.g. changing your lifestyle and spending habits?

Be realistic when you consider your answers to the above questions. Look at your sources of income and see how much you can consistently save and invest. Remember to make your financial and investment goals reasonable and achievable.

Investment risk and return

Keeping your money in a savings or fixed deposit account with a bank is the safest form of investment. You can go to sleep at night and not be worried about your money. The return (i.e. the interest rate) is lower compared to other forms of investment, but it is not risky.
When you invest your money. depending on many factors. if you invest your money in the stock market. stamp duty and bank loan interests. by keeping your money only in savings or fixed deposits. If you buy shares at RM10 a share and the price goes up to RM10. Remember: when choosing your investment. there is a greater chance of such investments losing their value. you will only know your return after you have deducted items such as legal fees.80 after one year. but this price can go up to RM7 or it can go down to RM2. i. be after you have deducted related expenses. keep in mind that although other form of investments can give you better returns. however. and then sell it. Share prices move up and down. Similarly. you are aggressive. If you are an aggressive investor. you might consider investing in more volatile investments such as shares. 57 . When you do this. Spreading your money across a variety of investments is the key to spreading your risks.Don’t put all your eggs in one basket What happens if you put all your eggs in one basket and you drop the basket? It is very likely that all the eggs will break. when you invest your money. It is important to diversify. In other words. whether. If you are a moderate investor. How you invest is partly determined by your investor profile. The amount that you assign to an asset class could be further divided among different segments such as a bond or equity fund. you might invest a high portion of your money in different asset class such as unit trust funds and the balance in fixed income investments such as fixed deposits. moderate or conservative. do not put it all into one type of investment. you are highly likely to benefit substantially from your investments while eliminating chances of financial losses. you would lose all your money. as an investor. i. It is smarter to put your money in different types of investment.e. Many people actually fall in-between these types. plan for a balanced portfolio of investments. 
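The rate-of-return arithmetic described earlier in this section (RM10 rising to RM10.80 after one year is an 8% return, before deducting related expenses) can be written out as a short Python sketch. This example is an illustration added for this edition; the function name and the expense parameter are my own framing of the text's point, not part of the original book:

```python
def annual_return(buy_price: float, sell_price: float, expenses: float = 0.0) -> float:
    """One-year rate of return, as a percentage of the purchase price.

    Related expenses (legal fees, agent's commission, stamp duty,
    loan interest) are deducted first, as the text advises.
    """
    return (sell_price - expenses - buy_price) / buy_price * 100

# The share example from the text: bought at RM10, worth RM10.80 a year later.
print(round(annual_return(10.00, 10.80), 2))   # 8.0
```

The same function shows how expenses eat into the return: with a hypothetical RM0.30 of costs per share, the 8% gross return falls to 5%.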
On the other hand, if you are a more conservative investor, you might consider diversifying your money among less aggressive investments such as bonds and fixed deposits. No two investors are exactly alike! You are the only one who can decide which options to choose and how much to spread your savings amongst the types of investment products available.

TYPES OF INVESTMENTS

The more common types of investments available in the financial marketplace are explained below.

Cash and fixed interest investments

Cash investments are the most common form of investment in Malaysia, covering products such as bank savings accounts and fixed deposits. They provide easy access to your money when you need it, and there is no chance you could lose any capital – so they are very secure. However, while they do offer security, they usually provide very little income and no capital growth. In actuality they can be quite risky in the long-term because inflation erodes the value of your investment. For most investors, cash and fixed interest products are suitable for:
use as a transaction account.
keeping cash on hand for short-term expenses and emergencies.

Unit trusts

Unit trust funds are suitable for investors who:
are comfortable with some volatility in their investment value over the short-term, in exchange for higher returns in the long-term (in terms of dividend income and capital gain).
are happy to outsource the selection of investments to professional managers.
have a small initial amount to invest (with the option to make regular additional contributions).
are seeking investment diversification to minimise risk.

Property

Property is one asset class that most Malaysians are familiar with. Property investment offers value to investors in two ways:
Properties increase in capital value over time as house and land prices rise.
You can earn rental income from tenants.

Property is suitable for investors who:
do not require 'emergency' access to their money.
have a long-term investment time-frame.
have the ability to meet mortgage repayments if interest rates rise or if the property is not being tenanted.

Other types of investments

Other types of investments are:
Bonds: When you buy a government or corporate bond, you are 'lending' your money for a certain period of time.

FINANCIAL SCAMS

When investing your hard-earned money, be very, very careful of get-rich-quick schemes. Always be mindful of the intention behind the offer because no investment is easy and can make you lots of money quickly. Such schemes promise that you will get very high return with little risk, and that you can obtain wealth by working at home. They also promise that little skill, time or effort is needed to get the high return. These get-rich-quick schemes are frequently advertised in magazines and newspapers, while the truly illegal ones (scams) are often promoted on the Internet through spam e-mail and offered via cold calls and SMSes. Some of the features of these schemes are:
Promise of high returns of 20-30% a month with little or no risk.
The offer is for a limited period only, so you are asked to sign up immediately.
You are asked to give confidential information such as your bank account number.
The scheme is in another country and you cannot check on its office or confirm its status from the website of any regulatory body.

What must you do when you come across an attractive offer to make you rich? The golden rule is: do not be greedy and do not make instant decisions. Be suspicious of an investment that offers high return, low risk and freedom from investment costs, as it is unlikely that a business can provide all these. Find out more about the offer before you decide to invest. An investment that is not legal is not the type of investment that you want to make. Such an investment is risky and you can be caught for investing illegally. Even if the investment offer is being introduced or recommended by someone you know very well or a family member,
always check with friends, family and professionals whether such investment opportunities are genuine or not.

IN A NUTSHELL

In order to build wealth, start saving and investing NOW.
You need to increase your assets in order to increase your net worth.
The higher the return you get from an investment, the greater the risk.
Diversify your investments in order to spread your risks.
Be very careful of get-rich-quick schemes.

CHAPTER 5
PLANNING FOR UNCERTAINTIES

WHY YOU NEED INSURANCE

What is insurance?

When you make a financial commitment, such as purchasing a house by borrowing money from the bank, you have locked part of your future income. Should there be an unfortunate natural event (death or disability) or an economic catastrophe (retrenchment), where your ability to meet these commitments has been impaired, the possibility of losing your hard-earned asset is real. There is a financial instrument that you can purchase to protect you from such an eventuality – insurance. It is a means of giving you a financial buffer or protection in case something happens to you, your family or your belongings.

How does insurance work?

Upon payment of a relatively small fee (known as a premium), a licensed insurance company will replace items lost or damaged due to an insured peril, such as fire, accidents and theft. However, these incidences must occur during the insurance period, and payment is up to the limit of the sum insured. The insurance company sells policies to thousands of people and the premiums collected become part of a common fund. Not all who pay the premiums will be affected by misfortune, and when some do, they are not affected at the same time. The common fund helps all those who contribute to share risks.

Purpose of insurance

In offering you such protection, insurance is providing you with peace of mind. The money you put towards insurance will enable you to:
pay for damages to your personal belongings or to replace items that had been stolen (provided such items are insurable).
provide some financial support to your family in the event of your disability, serious illness or death, particularly if you are the breadwinner of the family.
take care of your monthly living expenses, debts and financial commitments when you are not able to work due to a serious illness or an accident.
pay for medical bills when you or your family members are hospitalised.

The insurance industry in Malaysia is regulated by Bank Negara Malaysia (BNM). To help the public know more about insurance, BNM has produced a series of booklets, which you can get from branches of insurance companies. Alternatively, you can log on to www.insuranceinfo.com.my

TYPES OF INSURANCE

Life insurance

A life insurance policy insures you and your life against risks such as premature death, illness, disability and hospitalisation. It is important to have if there are people depending on you, whether they are young children or aged parents. The coverage period is usually more than a year and you have a choice of making premium payments monthly, quarterly, semi-annually or annually throughout the coverage period. The claim amount (sum insured), including bonuses, will be paid upon death, total and permanent disability or critical illness during the term of the policy and according to the amount agreed upon when buying the policy.

The main life insurance products are:

Whole life: This offers lifelong protection but you must pay premiums throughout your life. Due to the long-term nature of the policy, the premium is higher than for term insurance and it provides cash value during the term of the policy.

Endowment: This combines protection and savings. This policy provides cash benefits at the end of a specific period or upon death or total and permanent disability during the same period.

Term life: This offers protection for a limited period of time only. The coverage period is determined by the buyer. The money will be paid only upon death, total and permanent disability or critical illness during the term of the policy.

Mortgage reducing term: This is usually a single premium policy with the coverage amount matching the scheduled outstanding balance of the loan. In case you default on the payments due to illness or disability, or upon premature death, the policy will settle the loan and the bank will release the ownership of the house to you or your beneficiaries.

Investment-linked: This combines investment and protection. Under this policy, you get to choose the type of investment fund in which you wish to place your investment and the amount of life insurance coverage you wish to have. The amount of premium is flexible.

Medical and health: This helps to cover the cost of medical treatment, particularly in regard to hospitalisation and surgery.

General insurance

General insurance protects you against losses due to theft or damages to your personal belongings. The period covered is usually one year and you have to pay a one-time premium payment on an annual basis. The main general insurance products are:

Motor: It covers your motor vehicle against theft, accident or fire. If you buy third party cover, you are insured against claims made against you by a third party for injuries or death of the other person (third party) as well as loss or damage to the property of the third party that is caused by your vehicle. If you buy comprehensive cover, you are getting the widest coverage, i.e. third party injury and death, third party property loss or damage, and loss or damage to your own vehicle due to an accident, fire or theft.

House: A basic fire policy covers the building only against fire, lightning or explosion. A house owner's policy extends coverage of the building to loss or damage due to flood, burst pipes and other calamities as well. With a house holder's policy, the contents of the house, such as furniture, are covered against theft, flood and fire. This policy does not cover damage to the house itself.

Personal belongings: This covers items such as computers, handphones, notebooks and cameras against loss or theft.

Travel: This is good to buy when you travel overseas. It protects you against travel-related accidents, accidental death or injury as well as hospitalisation, flight delays or interruptions, baggage lost in transit, and medical and other expenses.

Before deciding on an insurance policy, make sure you check the perils and risks that are covered by various policies offered by different insurance companies.

TAKAFUL

As in banking, where you can choose between conventional and Islamic banking products and services, the insurance industry in Malaysia also offers conventional insurance as well as takaful. Takaful is insurance protection based on Shariah principles. You contribute a sum of money to a common takaful fund in the form of participative contribution, and you undertake a contract (aqad) to become one of the participants by agreeing to mutually help each other, should any of the participants suffer a specified loss.

Both insurance and takaful have similar basic principles. For example, in both insurance and takaful, you must suffer a financial loss when the insured event occurs. While takaful offers products similar to conventional insurance, it has some unique features:
The surplus of the fund is shared between you and the takaful company based on a pre-agreed ratio. The amount in the surplus fund is calculated after deducting expenses such as claims, technical reserves, management expenses and re-takaful.
You are entitled to this surplus if you had not made a claim during the period of the takaful.

Family takaful

Family takaful is a combination of protection and long-term savings and usually covers a period of more than a year. It provides financial benefits if you suffer a tragedy as well as gives you investment profits. You and your beneficiary will receive financial benefits arising from death or permanent disability. There are also long-term savings, and investment profits are distributed upon claim, maturity or early surrender. Contribution payments can be paid monthly, quarterly, semi-annually or annually.

The basic types of family takaful include:

Individual family: With plans that include education, mortgage and health.

Investment-linked: Combining investment and protection, part of your contribution is used to buy investment units while the balance goes towards providing coverage in the event of death or permanent disability.

Medical and health: This covers the costs of medical treatment, including hospitalisation and surgery.

Retirement annuity: This is a plan that provides you with a regular income upon retirement.

General takaful

This protects you on a short-term basis, usually for a one-year period, for any loss or damage to your property or personal belongings. You pay a one-time contribution on an annual basis. The main types of general takaful are:

Home takaful: A house owner takaful covers your home against loss or damage caused by flood, fire and other similar perils, while a house holder takaful covers loss or damage to the contents of your house. You may participate in either one or both.

Motor takaful: You are covered against loss or damage to your vehicle due to fire, theft or an accident, as well as bodily injury or death of a third party and loss or damage of a third party's property. As with general motor insurance, the two types of cover are third party and comprehensive.

Personal accident: This provides you or your beneficiaries with compensation in the event of death, disablement or injuries arising from an accident. This plan is also available for a short duration, such as when travelling abroad.

For more on takaful, check out this website: www.insuranceinfo.com.my

MORE ABOUT INSURANCE

Before buying an insurance policy

Buying an insurance policy, whether conventional or takaful, is an important decision. You must get as much information as possible before making up your mind on the best option that would fit your lifestyle. It is very important for you to know:
the types of policies and plans that are available, especially their features and benefits.
the perils and risks covered, including pre-existing conditions and exclusions that are not covered.
the terms and conditions.
the amount of premiums that you must pay in relation to the sum insured, risks covered and period of coverage.
the situations that allow you to make a claim and how you can make such a claim.

All life and general insurance have a legal document called an insurance policy or insurance certificate that states all the terms and conditions between you and the insurance company.

For life insurance, the premium depends upon a combination of factors, the main ones being your age and health. The older you are when you purchase a life policy, the higher the premium you must pay. This is because the insurance risk for your life is higher.

How much should you insure?

The amount of life insurance to buy depends on how much money you need when you are critically ill or disabled due to an illness or accident. It depends on your lifestyle, living expenses, financial commitments and obligations. If you are buying life insurance so that there will be some money for your family in the event of your death, you will have to decide on the amount of money your family will need to live without you. You do not want to have too much or too little insurance. Your goal is to get enough, or the right amount, to help you meet your financial commitments in times of need. Always buy insurance that gives you the greatest coverage and meets your needs at the lowest premium cost.

Buying insurance

Buy only from insurance companies that are licensed by BNM. If you are buying life insurance through an agent, he or she has to be appointed by a licensed life insurance company and registered with the Life Insurance Association of Malaysia. A general insurance agent must also be appointed by a licensed general insurance company and registered with Persatuan Insuran Am Malaysia (PIAM). When seeking advice, check that the insurance consultant or broker is licensed by BNM and is a member of the Malaysian Insurance and Takaful Brokers Association. All insurance premiums are paid in the name of the insurance company and not in the name of the agent.

It pays to shop around. Use the Internet, read the product brochures, and talk to people who have experience in buying insurance policies.

Making a claim

Insurance is generally bought for protection against unfortunate events so that you or your beneficiaries will not be in serious financial difficulties. When making a claim, you must prove that the event you are insured against did happen. For example, if an insured personal belonging has been stolen, one of the documents you will need to provide is a police report. What is important is that you must notify your insurance company as soon as possible after the occurrence of the unfortunate event. Be sure of the requirements of the insurance company, and ensure that you have all the documents needed to speed up the process. The process will be quick if you produce all the necessary documents when filing your claim.
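The risk-pooling idea behind the common fund described under "How does insurance work?" can be illustrated numerically. All of the figures below are invented for illustration – the book does not give numbers for premiums, claims or the number of policyholders:

```python
# Hypothetical illustration of risk pooling: many small premiums fund
# the few large claims. Every figure here is an invented assumption.
policyholders = 1000
annual_premium = 300.0      # RM contributed by each policyholder
sum_insured = 40_000.0      # RM paid out per claim
claims = 5                  # only a few suffer a loss in a given year

common_fund = policyholders * annual_premium
payouts = claims * sum_insured

print(f"Common fund: RM{common_fund:,.0f}")                  # RM300,000
print(f"Payouts:     RM{payouts:,.0f}")                      # RM200,000
print(f"Left for expenses and reserves: RM{common_fund - payouts:,.0f}")
```

The point of the sketch is the one the text makes: no single policyholder could absorb an RM40,000 loss alone, but a thousand RM300 premiums can cover several such losses at once.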
IN A NUTSHELL

The amount of life insurance to buy depends on how much money you need to support your lifestyle and pay your expenses when you are critically ill or disabled due to an illness or accident.
A life insurance agent must be appointed by a licensed life insurance company and registered with the Life Insurance Association of Malaysia (LIAM).
An insurance consultant or broker has to be licensed by BNM and be a member of the Malaysian Insurance and Takaful Brokers Association.

PART C
MANAGING DEBT

CHAPTER 6
BORROWING BASICS

You may be tempted to spend more money than you have because of the availability of loans and credit cards offered by financial institutions. These institutions offer you money and credit on loan so you can buy a house or a car, pay your bills or go on a holiday. It is crucial to keep in mind that this money is not free. You have to pay it back – with interest!

LOANS AND CREDIT

When you want to take a loan or use a credit card, ask yourself these questions:

Is the product or service you want to buy important? Is it necessary?
If it is important and you need to have it, can you afford to pay the installments?
If it is a substantial purchase, such as a car or house, can you afford to put down a larger down payment?
If it is something you desire, can you control the feeling and delay your decision to buy it since it is not that important?
Before borrowing money, have you worked out your cash flow to see if you are able to repay the money you borrowed?
Do you know the costs of borrowing and using your credit card? There are interest rate costs as well as finance charges such as late payment fees.
Do you understand the consequences of failing to repay money you borrow? If you fail to do so, legal proceedings can be taken against you. You can even be made a bankrupt.

Remember that you want to own more than you owe. You want to build wealth, so make sure you can manage your debts. If you borrow money, you should use it to make more money. Try not to pay for things that will not create value for you. Also, never use short-term loans, such as credit cards or overdrafts, to fund long-term assets like houses. It is important not to overcommit on loans and purchases using your credit card.

If you decide to take a loan or use your credit card to buy something, you need to give up something to make your monthly loan payment, unless you are able to increase your income. Are you prepared to make this trade-off? For example, can you give up spending on your weekly entertainment to make payments on your loan and credit card debt?

No matter how carefully you have worked out your monthly cash flow to pay your loan payment or credit card debt, something unexpected or an emergency can happen and you will need extra cash. Are you able to still meet your commitments if such a thing happens? Hence, as a general rule, your total monthly payments on all your loans and credit card debt should not exceed one-third of your gross monthly salary.

Financial institutions

There are various types of financial institutions in Malaysia. We will discuss the more common ones in this chapter. Commercial banks are financial institutions licensed by Bank Negara Malaysia (BNM) to provide financial facilities to the public. Most people would usually use the products and services of commercial banks in their daily transactions. For a listing of licensed financial institutions, go to BNM's website at www.bnm.gov.my

Licensed money lenders

There are money lenders licensed by the Ministry of Housing and Local Government that provide loans to the public. However, unlike licensed financial institutions, they cannot accept deposits.

Unlicensed money lenders

There are also unlicensed money lenders, commonly referred to as loan sharks. Their operations are illegal. They charge very high interest rates and often use threats and violence on people who cannot pay back their loans. Never, ever borrow money from a loan shark because you will:
1 get a loan on very strict terms and conditions.
2 have to pay a very high rate of interest with daily compounding effect.
3 open yourself and your family to harassment if you get behind on your payments.
4 be pressured into borrowing more from the loan shark to repay one debt after another.

TYPES OF LOANS

There is a wide range of loans and credit facilities available in the market.

Student loan

Student loans are offered by private organisations, government agencies as well as the Government to college or university students to help them pay for their higher education. The loans are provided at a very low interest rate and become payable when the students gain employment. Most of the loans offered by the Government and its agencies, such as under the PTPTN fund, work on a revolving basis. This means that there is a fixed amount of money in the fund, and payments made by previous beneficiaries are used to help new applicants. Graduates who do not pay back their student loans are, in fact, denying others the same benefit.

Personal loan

This is a loan offered for your personal use – not for a large purchase such as a house or car, but more for the purchase of a personal computer or money to use towards your marriage. It is tempting to apply for this type of loan because the application process is usually fast and easy. Moreover, most banks do not require a guarantor or collateral. However, some of the interest rates can be very high. As stated earlier in this chapter, ask yourself some important questions before you apply for such a loan. Be clear about the purpose of the application and whether you can afford to make the repayments.

Car loan

Most people want to have their own car as soon as they start working. They usually buy a car using a loan (also known as hire purchase or HP). If you do so, you become the hirer of the car while the financial institution is the owner.
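This chapter's hire purchase example (a RM50,000 loan at a 5% flat rate over five years, monthly installment about RM1,042, effective rate roughly 9.15%) can be reproduced numerically. The sketch below is an illustration added for this edition; the bisection solver is my own way of recovering the effective rate, not a method from the text:

```python
def flat_rate_installment(principal: float, flat_rate: float, years: int) -> float:
    """Monthly installment under flat-rate (hire purchase) pricing:
    interest is charged on the ORIGINAL principal for the whole tenure."""
    total_interest = principal * flat_rate * years
    return (principal + total_interest) / (years * 12)

def effective_annual_rate(principal: float, installment: float, months: int) -> float:
    """Annual rate (with monthly compounding) whose ordinary amortising
    installment matches the given installment, found by bisection."""
    lo, hi = 0.0001, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        r = mid / 12
        pay = principal * r / (1 - (1 + r) ** -months)
        if pay < installment:
            lo = mid
        else:
            hi = mid
    return mid

installment = flat_rate_installment(50_000, 0.05, 5)
print(round(installment, 2))   # 1041.67 -- about RM1,042, as in the example
rate = effective_annual_rate(50_000, installment, 60)
print(f"{rate:.2%}")           # about 9.16% -- close to the quoted 9.15%
```

The gap arises because a flat rate keeps charging interest on the full RM50,000 even as the outstanding balance shrinks, which is why the effective rate is nearly double the advertised flat rate.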
As the hirer, you pay installments to the financial institution based on their terms and conditions. You become the owner after completing all your payments. As with any loan you take, ask yourself the important questions before deciding to borrow. Also work out your cash flow to see how much you can afford to pay in monthly installments. When you apply for a car loan, you can do so directly with the financial institution or through the car dealer, who will then submit your application to the financial institution.

As a hirer:
Read all the fine print in the written agreement;
Check and ensure that the purchase price and HP terms in the agreement are as agreed;
Know your rights under the Hire Purchase Act;
Know your obligations as a hirer so that you do not do anything to breach the agreement;
Keep all documents, such as the agreement and receipts, in a safe place;
Make your payments to authorised persons only, as identified by the financial institution.

However, before taking up a car loan, check on the effective interest rate as it will work out to be much higher than the flat rate offered. Look at the following example of a RM50,000 loan at 5% interest per annum with a five-year tenure. The effective interest rate works out to be 9.15%.

Monthly installment: RM1,042
Total interest payment: RM12,500
Total loan + interest: RM62,500
Approximate effective rate per annum: 9.15%

Hire Purchase Basics

Minimum deposit: This is about 10% of the cash value of the car, but financial institutions can request a higher deposit.
Interest rate: This is a fixed rate and the maximum allowed is 10%.
Effective interest rate: This is the actual interest that you pay after taking into account annual compound interest on the loan over its tenure.
Late payment charges: You will be charged a penalty if you are late in paying your installments. This interest is charged on a daily basis.
Guarantor: Financial institutions may require a guarantor who will be responsible for the unpaid portion of a loan, including interest, if you default on your loan.
Insurance: You must purchase insurance as cover for your car. Financial institutions require a car owner to undertake a comprehensive insurance policy.
Repossession: If you default on your payments, financial institutions can repossess your car as they are the legal owners.

When you do not make your car loan payments on schedule, financial institutions can repossess your car by engaging a registered repossessor. Having your car taken away from you can be a traumatic and embarrassing experience. Before taking any action, the repossessor must show you his identity and authority cards along with a repossession order issued by the financial institution. He must then make a police report and bring the repossessed car to a place indicated by the financial institution.

You will receive advance notice in writing, known as the Fourth Schedule, before your car is repossessed. This notice expires in 21 days. You will receive a second notice 14 days after the Fourth Schedule is issued – this is a reminder to pay up or your car will be repossessed. To avoid repossession, pay the outstanding arrears before the notice period expires or return the car to the financial institution before the expiry date.

If your car has been repossessed, there is still a way to get it back. The financial institution will issue you and your guarantors, if any, a Fifth Schedule notice within 21 days of the repossession. You can have the car returned to you if you pay all outstanding arrears or the due amount in full, plus other expenses incurred by the financial institution. Alternatively, you can introduce a buyer, e.g. a family member or friend, to buy the car at the price given in the Fifth Schedule. Within 21 days of the Fifth Schedule issue, if you or your guarantors do not settle the outstanding amount, the financial institution will sell your car by public auction or give you the option to buy the car at a price lower than the estimated price stated in the Fifth Schedule. You will still need to pay any outstanding amount less the value of the car.

Housing loan

The market for housing loans today is very competitive and financial institutions now offer all kinds of loans to attract customers. Some loans are even packaged with free gifts. As with other loan products, you can choose between a conventional or Islamic housing loan. Do shop around, get as much information as you can and compare items such as interest rates before deciding on the loan suitable for you.

A housing loan is a large financial commitment, one that will stretch over many years. Think very carefully about the various aspects of such a loan before making your decision, some of which are as follows:
Is the loan meant for buying a completed house or one under construction? Are you buying land to build a house?
What is the value of the house or land you want to buy?
How much can you afford to pay in monthly installments, depending on your monthly cash flow? Do you have enough money to make the down payments and the cash flow to pay the loan installments?
What are the incidental fees or costs that you have to pay? The more common ones are legal fees, stamp duties, processing fees and disbursement fees.
Is the interest rate fixed or variable with the Base Lending Rate (BLR)?
How flexible can your loan payments be? There are several payment schemes available.
Is there an early termination penalty if you repay your loan in full before the tenure expires? Financial institutions may impose such a penalty because of the attractive rates they may have packaged for the loan.

Is it better to take a loan on a fixed or variable interest rate? With a fixed rate loan, the interest is fixed and you therefore know the amount of installments you need to pay. With a variable rate loan, the rate changes according to the Base Lending Rate (BLR). If the BLR rises, your interest rate will increase and your monthly repayments will be higher. On the other hand, if the BLR decreases, you will benefit from paying lower monthly repayments. There are also variable interest rate loans with fixed monthly payments, where any changes to the interest rate will either increase or decrease the loan tenure.

A housing loan can be paid in the following ways:
Graduated payment: This allows you to pay lower installment payments at the beginning of the loan. The amount will, however, gradually increase over time. This scheme is useful if you have just started working and your salary will increase over the years.
Partial prepayment of the outstanding loan: You can shorten the loan tenure by making partial prepayments with your surplus savings or annual bonus. The principal sum of a loan is reduced each time an installment is paid. If done during the early years of the loan tenure, you can reduce your interest charges. There may, however, be restrictions on how much you can pay.

As a borrower:
Read and understand all the terms and conditions of the loan. Ask questions on all aspects of the loan to your satisfaction.
Stick to these terms and conditions.
Make payments on time. If you fail to pay your installments, the financial institution will take legal action against you to recover the loan.
Check that you have accurate information on your loan account on a regular basis.

Being a guarantor

You may be requested by a family member or friend to become a guarantor for his or her loan. Think carefully before you agree to do so, because being a guarantor for a loan means that if the borrower cannot or will not pay the loan, you are legally bound to do so. Should you agree to be a guarantor,
You are aware of your liabilities if variations are made to the terms and conditions of the loan. 88 . if you fail to pay your installments. Should you agree to be a guarantor. You do not become a guarantor to someone whose integrity you are doubtful of. Think carefully before you agree to do so because being a guarantor for a loan means that if the borrower cannot or will not pay the loan. make sure that: You read and understand the nature of the guarantee and the implications on you. You do not sign a blank or partially filled document. As with any loan. which may agree subject to conditions such as full repayment of the principal debt. This makes purchasing things a lot easier. CREDIT CARDS. You can use the statement to track your spending for budgeting purposes. Even if the borrower passes away. 89 .It is not easy to withdraw from being a guarantor. Some credit cards provide personal accident and travel insurance. The decision is up to the financial institution. Some of its benefits are as follows: It is a convenient and efficient method of payment. depending on the type of card issued. CHARGE CARDS AND OTHER CARDS Credit cards allow you to buy items and pay for services electronically without using cash. the credit card acquirer will pay the merchant on your behalf and bill you later through your issuing bank. the financial institution can seek recourse from the guarantor if there are no other sources of repayment of the loan. When you use a credit card. A credit card can be a useful payment instrument if you know how to use it properly and wisely. You can use the debit card at places where you pay for products or services. A prepaid card can be used to make purchases but there is a spending limit equivalent to the amount of money you place on the card. While a credit card allows you to make a minimum payment when you receive your monthly statement. failing which. With a charge card. you can reload up to the maximum amount as determined by the card issuer. 
Like a credit card. When the amount placed on the card gets low. you must pay the total amount due in full each month. it is convenient to use a debit card because you do not have to carry cash with you. except that you do not have to withdraw cash from an ATM. The amount spent will be immediately deducted from your bank account. a reward that is unavailable when cash payments are made. late payment charges will be imposed. A charge card is similar to a credit card. a charge card does not.Credit card issuers have introduced attractive schemes. such as zero-interest installment scheme. 90 . Debit and prepaid cards are better options for people who are not financially disciplined. It is like a prepaid phone card or a Touch & Go card where you have a fixed amount of money you can spend. A debit card is similar to an ATM card. flexi-pay scheme and 0% balance transfer. to enable you to maximise on your purchases. You can also earn loyalty points for usage of credit cards. the more money you will owe. where interest is paid on the principal amount plus the accumulated interest amount owing. Remember to always pay in full.this will ensure you keep out of financial trouble. it helps with your cash flow. The result? Huge debts in a short time frame. You read earlier in this book about the effect of compound interest when you save your money. you are effectively spending at least 2-3 months of your salary in advance. which is charged on a daily basis and compounded monthly. 91 . It is important to realize that the longer you take to settle your credit card debts by making minimum payments. most card issuers impose a finance charge of 1.Normally. At present. The credit trap of paying the monthly minimum It may be tempting to just pay the monthly minimum due on your credit card statement. If you use your card up to this limit. the credit card limit given is 2-3 times your monthly salary. 
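The credit trap described above is easy to quantify. Below is a rough Python sketch of paying only the minimum: the 1.5% monthly finance charge comes from the text, while the 5% minimum payment and RM50 floor are illustrative assumptions (real issuers accrue interest daily, not monthly).

```python
def months_paying_minimum(balance, monthly_rate=0.015, min_pct=0.05, floor=50.0):
    """Simulate clearing a card balance while paying only the minimum.

    Simplifications: interest is added once a month at 1.5%, and the
    minimum due is assumed to be 5% of the balance with an RM50 floor.
    """
    months = 0
    total_paid = 0.0
    while balance > 0:
        balance *= 1 + monthly_rate                        # monthly finance charge
        payment = min(balance, max(balance * min_pct, floor))
        balance -= payment
        total_paid += payment
        months += 1
    return months, round(total_paid, 2)

months, paid = months_paying_minimum(3_000)
print(months, paid)   # several years of payments, well beyond the original RM3,000
```

Even on a modest RM3,000 balance, the simulation runs for years, and the total paid comfortably exceeds the original debt – which is exactly the compounding effect the text warns about.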
Unfortunately, there are consequences of paying just the minimum each month – you will incur interest charges and it will take you longer to settle your outstanding balance. Compound interest also applies in this situation. With high interest rates, you will end up paying more money to the financial institution as compared to the original amount you paid for the product or service. Log on to www.bankinginfo.com.my to find the formula for calculating credit card interest payments. For examples of how two individuals manage their respective credit card debt, please refer to APPENDIX 2.

Always check your credit card monthly statement to ensure proper transactions and charges are recorded. The statement includes your transactions, any fees and charges, the due date of payment and the minimum payment. Call your bank if there is anything wrong with your statement, or you have not received it.

Tips on using credit cards

Pay the amount due in full when you get your monthly statement to avoid paying interest. If you have a cash flow problem, pay the minimum amount for the present but pay the full amount as soon as possible.

Pay before the due date to avoid late payment charges and penalty rates.

Be aware of the consequences of paying minimum amounts all the time.

Do not use a credit card if you cannot make the monthly payments.

Do not use your credit card to get cash advances from an ATM. Each time you use your credit card to withdraw money, you are increasing your loan commitments in addition to paying upfront withdrawal charges and daily interest.

Limit the number of credit cards you have.

REPAYMENT AND DEFAULT

A good paymaster is one who pays his or her monthly loan repayments on time and in the amount required according to the terms and conditions of the loan agreement.

Credit Bureau

The Credit Bureau of BNM has been in operation since 1982. It collects credit information on borrowers, including private individuals, businesses (sole proprietors and partnerships), companies and government entities, and supplies the information back to lenders. The Credit Bureau keeps information in the Central Credit Reference Information System (CCRIS), which is a computerised system that automatically processes credit data received from financial institutions and synthesises this information into credit reports. These reports are made available to financial institutions on request. Other than the Credit Bureau, there are also privately-owned companies that provide their clients, including financial institutions, with information on a borrower's repayment record and status of legal actions.

Each time you make a new application for a loan, the financial institution will check your payment history with the Credit Bureau. They will use the information to decide whether to give you a loan or not. If your CCRIS report indicates late repayment or default, a financial institution has the option of denying any new loan applications, because it indicates that you are not managing your loans well or you have financial difficulties. Keep a copy of your CCRIS report to track your loans with financial institutions and monitor your loan and credit card repayment pattern. You can check whether you have a healthy repayment schedule and any defaults or late payments appearing in your report.

It pays to be a good paymaster because this will be reflected in your credit report. In fact, effective 1 July 2008, you are rewarded for being prompt in your credit card payments – financial institutions will impose finance charges on a tiered basis depending on your repayment behaviour. For more information, log on to www.bnm.gov.my and click on: Debt repayment problem

Defaulting on loan payments and failing to settle your credit card debt can have terrible consequences:

You will be getting constant calls and letters from lawyers and lenders demanding that you settle your debts.

Your family members will be affected because they may have to help in paying your debts.

Your guarantors, if any, will also suffer because legal action can be taken against them.

Your car or property will be auctioned.

You will be sued by the financial institution. You may become a bankrupt, a topic that will be discussed in Chapter 8.

You will suffer emotionally due to stress. In such situations, you can become unproductive and your work or health may be affected.

Some signs to show that you are in financial difficulty are:

You are not in control of your money, i.e. your expenses are more than your income.

You have more debts than you can manage to pay.

You are only able to pay the minimum 5% every month on your credit card bills.

You do not have any savings to meet personal or family emergencies.

You get calls from debt collectors regularly.

You are being served with legal notice of demand.

If you are facing any of the above problems, seek help and advice on your finances from a professional financial counselor as soon as possible.

IN A NUTSHELL

When applying for a loan, ask yourself the purpose of the loan and whether you can afford to make the installments.

Be aware of the terms and conditions of the loans you take.

Always ask for the effective interest rate on all your hire purchase and fixed rate term loans.

Your total monthly payments on all your loans and credit card debt should not exceed one-third of your gross monthly salary.

Do not fall into the trap of using credit and charge cards as if they were 'free' money. Paying only the minimum monthly payment on your credit card statement can result in a huge debt due to the compounding effect.

Aim to be a good paymaster so that you will have a positive credit report.

Never ever resort to borrowing from unlicensed money lenders.

PART D
REAL-LIFE STRATEGIES

CHAPTER 7
LIVING YOUR LIFE

Living your life is essentially about making smart choices, whether it is choosing a career or buying a car or house. You would have gained a lot of ideas about planning your finances and managing your money in the earlier chapters. You will need to practise them in living your life. Making smart choices requires that you plan ahead, be informed about what is available in the market and avoid impulsive decisions.

YOUR CAREER

Like most people, you will need a job after you graduate. This job will be the first of many work experiences you will have that eventually shape your career. Irrespective of what you choose to do in life, plan properly. It is better if what you do at work is something for which you have the necessary aptitude or talent. If you dislike mathematics, do not force yourself into a career in accounting. If you enjoy dealing with people, maybe a career in sales and marketing is for you. Whatever the situation, develop an interest in your respective field and make sure you love and enjoy your job.

Planning your career

Your career is going to take up the most part of your adult life, so do take it seriously. The development of your career is going to determine the quality of your life.

No matter what your first job is, use it to learn the basics of surviving in the workplace. Your job is what you make of it. It would not be boring or mundane if you use it to develop yourself while you're contributing towards the company:

Take on assignments that can improve your knowledge and skills.

Volunteer to take part in activities for opportunities to show your talents.

Develop your communication skills, both written and spoken – people who can write and speak well are the ones who are most likely to get ahead in a company.

Practise your human interaction skills with all levels of employees.

Read as many management and self-development books as you can.

You may build your career in the same company throughout your working life or you may move from one company to another. Each time you change jobs, assess how the new job will add, not only to your income, but also to your knowledge, skills and career development. Therefore, if you are considering changing jobs, ensure that it would benefit you in the long run. Will it move you towards where you want to go, career-wise? Or would it move you closer towards your financial goals? Whatever the reason you choose to move, ensure it is a positive career move that not only increases your income earning capacity, but would also widen your prospects.

Your student loan

A student loan is an obligation and should not be taken lightly. The student loan is a loan, not a gift. It is a legal as well as a moral obligation to pay it back so that others can enjoy the same benefit that you did. Once you start working, begin paying back your student loan, if you have one. We have discussed earlier how important it is for you to do so. If you do not pay back your student loan, it may hamper your chances of working for a good employer. You may also have a problem in getting financing from financial institutions later; this could potentially delay you from achieving your life and financial goals.

Your income tax

As an employed young adult, you need to pay your income tax. Your income tax is deducted from your monthly salary and sent directly to the Inland Revenue Board (IRB) by your employer. This is now done 'Pay-As-You-Earn' (PAYE) under the Scheduler Tax Deduction Scheme. However, this does not mean that you will not pay any more taxes at the end of the year when the IRB assesses your total income earnings, including other income such as allowances or commissions. You are strongly advised to budget an amount from your monthly salary for income tax payment, in case the amount you actually have to pay at the end of the year is more than the amount that is being deducted from your monthly salary.

It is your responsibility to declare all your sources of income to the IRB and pay the appropriate amount of tax. Sources of income that are taxable include:

Salary and wages.

Allowances (in cash or otherwise).

Bonus.

Commissions.

Directors' fees.

Gratuity.

Pension and annuity.

Rental, royalty and premium income.

Dividend and interest income.

The IRB and you

The IRB knows how much you earn from the PAYE deduction made against your monthly salary. In addition, your employer must provide your name, address and income details to the IRB monthly in the Earning Advise (EA) form. Other than the EA form submitted by your employer, there is also the self-assessment tax form that you must submit to the IRB, in which you have to declare all other sources of income. At the end of the year, if your taxable income is more than what has been deducted from your PAYE, you will have to pay the difference to the IRB. Under-declaring your income for tax purposes is an offence!

Under the self-assessment tax system, you have to work out and pay your own income tax. The responsibility is on you to calculate your own tax liability. You must complete and submit the Return Form by the deadline, but no Notice of Assessment will be sent to you; instead, you must calculate your own tax and pay the full amount. You are allowed to make monthly payments to the IRB directly or through selected financial institutions. To find out what your monthly tax payment is, check out the IRB's Schedule of Monthly Deductions at: www.hasil.gov.my

If you own a business, you will need to pay taxes for the profit that you make from the business. At the same time, you have to estimate the tax to be paid for the current year and make monthly payments during the year based on that estimate. It is then advisable to have a tax consultant advise you on proper tax planning based on the appropriate business entity, such as a sole proprietorship, partnership or private limited company.

YOUR CAR

After you start working, one of your major purchasing decisions would be buying a car. A car is a convenient means of transport, not only for work but for other purposes as well, but there is more to owning a car than buying it. You need to maintain it, repair it when necessary and pay for costs such as road tax and car insurance.

Assuming that you have decided to buy a car, you then need to decide whether to buy a new or used car. People often think that it is better to buy a new car because the maintenance costs would be lower than a used car. That may be true, but a new car can also have its problems, and the depreciation of a new car is higher. If you decide to buy a used car, get a trusted mechanic to examine the car first. Ask for advice from your parents, family members or friends.

At this point in your adult life, you do not need to have a flashy or imported model. After all, you only want to get from A to B conveniently and in comfort. You can buy the car of your dreams when you can afford to pay for it later in your life. In order for you not to overstretch your financial resources, buy a used car that is in a good condition or a new car with affordable repayments.

Buying a car

The following are some useful tips in buying your car:

Shop around for a car that is reliable.

Work out the budget based on your monthly salary. Take into account the monthly car installments, monthly running costs such as petrol, toll and parking fees, annual expenses including road tax, insurance and regular service, as well as unexpected expenses to cover any repairs to the car in case of a breakdown or an accident.
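One reason to insist on knowing the effective interest rate, as the nutshell earlier advised for hire purchase and fixed rate term loans: these loans are usually quoted at a flat rate, which charges interest on the original sum even as you pay it down. A hedged sketch using the common textbook constant-ratio approximation (an estimate only, not any particular bank's exact formula):

```python
def approx_effective_rate(flat_rate, years):
    """Approximate the effective annual rate implied by a flat rate,
    using the constant-ratio estimate:
        effective ~ 2 * n * flat / (n + 1), n = number of installments.
    """
    n = years * 12
    return 2 * n * flat_rate / (n + 1)

# A hypothetical 3.5% flat rate on a 7-year car loan.
print(round(approx_effective_rate(0.035, 7), 4))   # → 0.0692, nearly double the flat rate
```

The exact effective rate depends on the repayment schedule, but the rule of thumb – roughly double the quoted flat rate – is why it always pays to ask.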
If you decide to apply for a car loan, shop around for a package based on affordability. Always ask what is the effective interest rate that you will be paying (as discussed in Chapter 6). It is always good to buy the car with cash, but if using cash is too much of a burden, try saving more cash for the downpayment and apply for a smaller loan.

Do not use your credit card to pay for the down-payment. You are not being wise, because you will be getting yourself into a loan with high interest rates.

Try to pay for petrol, annual road tax, insurance and car maintenance with cash instead of using your credit card.

Avoid upgrading or modifying your car with a sound system, better tyres or additional accessories. These will not add value to your car.

Owning a car is not cheap. Before you buy one, make sure that you can afford to maintain it!

YOUR HOUSE

Buying a house ranks at the top of the financial decisions you will be making in your adult life, mainly because of the costs involved. The first house that you buy is usually for you to live in.

Own or rent – the financial angle

Why should you buy a house instead of renting one? The following are two good reasons:

By buying a house, you will increase your net worth as you pay down your housing loan. When owning a house, each monthly payment that you make is like putting money into a savings account: the principal on your mortgage is going down with each payment you make, and each time the mortgage is paid, a percentage goes toward your net worth. This is like having money in the bank, because it is something you can draw upon later in life if needed. Plus – as the housing market grows, the value of your house increases as long as it is in a good location.

While rental rates can increase from year to year, there is also a greater sense of security in having a permanent roof over your head. Owning your own home brings a sense of pride and accomplishment.

Although buying a house sounds great, it is not for everyone. Buying a house is time-consuming, complex and may be a costly affair:

Owning a house requires investment of your time. You have to take care of the garden, handle repairs and spend countless hours to keep things running smoothly.

You are, in fact, tied down to your house. However, as a tenant in a rented home or apartment, you have freedom to move about the city or country.

Many small home improvements can add up to big dollars in upkeep expenses. If you rent, maintenance expenses are usually covered by the landlord.

If you find that you are not thrilled with your new neighbourhood, you may find yourself 'stuck' until the value of your house increases enough to get back your initial investment.

Once you become a home owner, anything can happen to unexpectedly bring down the value of the house – your area may be prone to floods during the rainy season or a highway may be built close by several years later. If you rent, you can just pack your bags and move.

When you rent, you can have freedom of movement. However, when you own a house, you will have a sense of financial freedom.

Location, location, location

When buying a house, do the following before committing yourself:

Drive around the neighbourhood at different times of the day and week to see what your potential neighbours are like, both during the week and on weekends.

Check the area for features that can add value to the house, such as schools, shops, a park and playground, public transport and surrounding businesses. Watch out for conditions that may make the area economically disadvantaged.

Check if the property is located on freehold or leasehold land. Buying a house located on freehold land means that you have the right to the land for an indefinite period of time. The market value of the house is usually higher than if it is on leasehold land. A house on leasehold land means you never really 'own' the land, because you have to return it to the government after the lease period, usually 99 years. The market value of the house decreases as the date of the end of the lease gets nearer.

Talk to property experts who can give you sound advice about the property market, suitable locations or types of property, such as apartments, condominiums or landed houses, which will increase in value.
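Location determines value, but affordability determines whether you can hold on to the house at all. A minimal sketch of the standard reducing-balance annuity formula, combined with the one-third-of-gross-salary guideline from Chapter 6 (all figures hypothetical; actual bank quotations will differ):

```python
def monthly_installment(principal, annual_rate, years):
    """Standard annuity formula for a reducing-balance housing loan:
    payment = P * r / (1 - (1 + r)**-n), with monthly rate r over n months."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def within_one_third_rule(gross_salary, total_monthly_debt):
    """The book's guideline: total monthly debt payments should not
    exceed one-third of gross monthly salary."""
    return total_monthly_debt <= gross_salary / 3

# Hypothetical RM300,000 loan at 6% a year over 30 years.
payment = monthly_installment(300_000, 0.06, 30)
print(round(payment, 2))                      # → 1798.65
print(within_one_third_rule(6_000, payment))  # → True (1798.65 <= 2000)
```

Running the numbers this way, before visiting a single showhouse, makes the "can you afford it?" question concrete rather than hopeful.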
Paying for the house

The most important question in buying a house is – can you afford it? Like most people, you will look at applying for a housing loan. The housing loan process is complex and you need to carefully analyse your financial position – your income, savings and cash on hand. Be honest about your past spending habits, present needs, and ability in not only getting a loan, but paying it off. You also need to calculate your debt-to-income ratio. Look ahead as well in terms of possible increases in your income and the money you will need to maintain the house or renovate it in the future. All these affect your affordability when buying a house.

There are many types of housing loans offered, so shop for the best deal that meets your financial commitments. Save enough money and try to pay more than 10% of the purchase price as the initial downpayment to avoid applying for a large housing loan. You also need to have enough money for expenses such as legal fees, stamp duty and insurance. You must be able to afford to buy and pay for your house. Otherwise, your dream home will turn into a financial nightmare!

So … you want to be a landlord

Investing in property can be a powerful wealth-building tool. Because of the money and time involved, do so as part of an overall investment strategy. Here are some questions to ask:

What type of property investment will increase my financial worth?

Because property is not liquid, i.e. it cannot be sold quickly, will this be a problem for me when there is a need for cash?

Will I be able to handle long-term ownership and maintenance of the property, even if my cash flow is not consistent?

What type of property investment will contribute to my retirement income? How much income do I expect from my property investment when I retire? Do I expect it to provide immediate income or long-term capital appreciation?

Are you willing to deal with the responsibilities of being a landlord? It is not an easy task. You will have to collect rent, and there might be instances where it is unpaid or paid late. There are repairs to attend to, taxes to be paid, income or deductions to be declared and paperwork to be updated. Your tenants may be unsatisfactory and you have to evict them. Of course, you can hire professionals to take care of these matters for you – but at a cost.

Start with a small piece of property first so that you can manage the financial aspects. With experience, you can move on to bigger pieces of property, especially when you have made money from the capital appreciation from your sale. The same considerations apply whether you are buying a house to live in or as an investment, the primary one being that of location. A good location helps you to attract tenants, get the rental income that you want and increase the value of your house.

MATTERS OF THE HEART

As you move through adulthood, you may meet the person whom you wish to marry. Marriage is a big commitment and should be entered into only if you feel capable of handling the responsibilities. With the romance come the real issues, including finances. Very often financial problems lead to other difficulties.

The wedding

Be realistic about your wedding expenses. Discuss with your spouse-to-be how much money you both can afford to spend. In fact, you should have been saving for your wedding as part of your overall financial plan. Like in other aspects of your life, have a budget. List the things that are necessary for the wedding. You will be surprised at how much gifts, clothes, shoes, dinners, ceremonies and wedding photos can add up to.

Do not try to live up to the expectations of other people by going to the extent of borrowing from family members or friends, or even from the bank, to pay for your wedding. There have been many cases of couples encountering problems due to the large debt incurred during their wedding. Get tips and advice from those who have been through it all on how to spend wisely for this important occasion.

Marriage and family

As money is a sensitive topic, many couples can go through their entire married life and not talk openly about it with one another. It is extremely important that you and your spouse see eye-to-eye on money matters (please note that these pointers assume both of you are working):

Your financial plan has to move from being an individual one to a shared one involving your spouse.

Decide on how to handle routine bills, household budgeting, paying for the family and children, as well as savings and investment.

To pay for common expenses, save money on a monthly basis in a joint account. Remember to review the list of the common expenses often, at least once a month, to see if these expenses are increasing due to inflation or your lifestyle as a couple.

If both of you decide to use credit cards to pay for some of the common expenses, it is important that the person who is going to use the credit card have access to the money from the joint account, so that there are no complications later in paying the credit card debt.

Decide on an amount as common savings. Any unused funds can be added to a joint retirement fund.

You may need to increase your insurance coverage – life insurance, critical illness insurance, medical insurance and personal accident insurance.

It is wise to have a common investment portfolio to meet future financial goals, such as retiring together and living a comfortable life. These investments are also usually used to pay for your children's living and education expenses. Talk about each other's investment style, including your respective risk appetite and preferences.

Just as in your individual financial plan, you need to set aside an emergency fund as a couple to take care of those unexpected expenses that can happen to you, your spouse and family.

Financial debts

This may sound harsh, but do discuss with your intended spouse the debts that both of you will bring into the marriage, if any. The earlier the better, but it is also better late than never. Some marriages are reported to fail due to financial debts incurred by spouses. You and your spouse should not hide from each other the topic of debt. If those debts cannot be paid off before you get married, both of you should make a commitment to settle personal debts within a period of time after the wedding, and both of you must agree on the terms of payment and the period to settle the debt.

Should it become necessary to apply for a loan or even a credit card once you are married, you will need to discuss your ability to pay for that new debt commitment. Talk about positive ways to improve the household financial commitments.

Financial check-up

It is a healthy practice for a couple to prepare a monthly budget and spending plan, and track spending at least once a week using a family cash flow statement. The family's net worth (assets and liabilities) should then be reviewed monthly. You and your spouse must review the household financial commitments and net worth every month. Make it a monthly affair and a fun exercise to do together; find creative ideas and reasons to grow more money for each other and your family.

YOUR RETIREMENT

You are never too young or too old to plan for retirement. Young adults normally do not think they are going to retire one day. They have obligations – there is the house, the parents' healthcare, the children's education, and 1001 things to take care of first. Even people in their 40s do not want to face the prospect of retirement. Retirement is difficult for people of all ages.

Perhaps you are afraid of being told that you are not on track when you start to plan for your retirement fund. You know that you have spent too much money, your income-earning ability is not improving and you are not able to increase your savings. There is also the fear of the unknown: not knowing what will happen to you when you are old may sometimes prevent you from taking stock of your financial welfare. You prefer to live for now rather than plan for the future.

Nonetheless, it is desirable to inculcate the saving habit at a young age; saving should be part of your lifestyle. Believe it or not, if you start saving as soon as possible, you will be in better shape than you think you would be. If you have been regularly saving some money and cultivating saving habits, you will find it empowering for your personal self-worth. By saving early in your adult life, you may find that you have enough to enjoy some of your money even before you actually retire. However, due to inflation, the money you will have in your EPF may not be enough for your retirement, so it is wise not to rely solely on your EPF as your retirement fund, because you may not have enough in your account to provide you with a comfortable lifestyle. Your retirement planning should include debt reduction, budgeting, diversifying investments and maintaining good health through diet and regular exercise.

Basics of retirement planning

As in any planning process, you need to know where you are at present:

How much savings or assets do you have now?

What is your monthly income?

What is the percentage of your income contributed to EPF or other retirement plans?

What rate of return do you want on your investments?

How many years do you have until retirement to earn your money?

After analysing your current assets and liabilities, estimate your spending needs and adjust them for inflation. Then decide when you want to retire – at age 45, 55, 60 or 65? Calculate the monthly retirement income needed for your desired retirement lifestyle. For example, if you started working at age 25 with savings of 10% of your disposable income and decide to increase your savings by 1% each year as you grow older, then at age 26 you will save 11% of your disposable income, and by age 35 you will be saving 20% of your disposable income.
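The escalating savings plan in the example above – 10% of disposable income at age 25, rising one percentage point a year – is easy to project in a few lines of Python. The figures below are illustrative assumptions, and no investment returns are included, only the raw contributions:

```python
def savings_rate_at(age, start_age=25, start_rate=0.10, step=0.01):
    """Savings rate under the escalating plan: 10% at 25, +1% each year."""
    return start_rate + step * (age - start_age)

def total_saved(annual_disposable_income, start_age, end_age):
    """Sum of yearly contributions from start_age up to (not including) end_age."""
    return sum(annual_disposable_income * savings_rate_at(a, start_age)
               for a in range(start_age, end_age))

print(round(savings_rate_at(35), 2))       # → 0.2, matching the example in the text
print(round(total_saved(24_000, 25, 35)))  # → 34800 set aside over the ten years
```

Even without any investment growth, a hypothetical RM24,000 of annual disposable income yields RM34,800 of contributions in the first ten years – and the habit, not the exact number, is the point.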
Sometimes certain illness are only partially covered or not at all covered under insurance benefits.The question of health is vital during old age. the money you will have in your EPF may not be enough for your retirement. Health costs can be a major expenditure and a drain on retirement savings. Earmark a portion of each raise or yearly bonus to your retirement savings. adequate retirement planning has to be done for maintaining and living a healthy life. you will save 11% of your disposable income and by age 35. For example. Therefore. at age 26. For each increment in your age. Retirement timeline Sign up for EPF on your first day of work. Make sure you provide in your monthly budget the amount of savings meant for retirement. increase your savings in your personal retirement fund by a certain percentage of your disposable income per year. Even if you have medical and life insurance that covers critical illness and disability. this may not include all the procedures and prescriptions that you need to have. Career planning is important so that you can increase your income-generating capability. You need to have strategies for living your life. family and spiritual wellbeing are important elements that contribute to happiness. review your investment portfolio for retirement annually to ensure that your money is growing according to your retirement plan. We should also make regular donations and give to the less fortunate to alleviate their hardships and misfortunes. Health. A balanced life While money is necessary in today’s world. 117 . Do not make it the sole reason for what you do in your life. it is not everything.While you are saving for retirement. It is very important to create a balance in your life. You and your spouse must be willing to frankly discuss financial issues and draw up a plan together. 118 . Work towards having a balance in your life. In buying a car. You are never too young or too old to plan for retirement. 
In a Nutshell
- You need to have strategies for living your life, where money is not the sole reason for your existence in this world.
- Career planning is important so that you can increase your income-generating capability.
- It is essential that you repay your student loan and pay your taxes.
- Weigh the benefits of renting versus buying a house. Buying a house is a major financial decision.
- In buying a car, affordability is key. There are other costs in maintaining a car, not just the loan installments.

CHAPTER 8
GETTING OUT OF FINANCIAL TROUBLE

If you follow the advice, tips and guidance given to you in this book, you should not get into financial trouble. You will instead be managing your money wisely and on the road to becoming financially stress-free. However, if you do not plan your finances, it is very possible for you to spend more than you earn, spending as and when you like, without thinking about the future. This leads to financial trouble, eventually putting you into serious debt.

It is just not worth it in the long run to give in to your whims and fancies. Being in financial distress will affect your reputation, but that is not all: you will also be emotionally troubled, looking for money to pay off your debts, and eventually your relationships with family and friends will strain. All these will affect your physical health and your mental and emotional stability. You will end up in a never-ending spiral of problems, all because you failed to plan.
Your creditworthiness
As explained in Chapter 6, financial institutions have various criteria to assess the creditworthiness of potential borrowers. These include the borrowers' character, attitude towards their loan obligations as well as their capacity to pay their loans. In addition, the Credit Bureau of Bank Negara Malaysia (BNM) maintains credit information on borrowers in its Central Credit Reference Information System, and financial institutions are able to check on your status. When you have a poor credit record, it will be difficult for you to obtain loans from licensed financial institutions.

What can happen
If you are in serious debt and you default on your loan, you will be seen as a bad credit risk to any banker you may approach for a loan, and the financial institution will take legal proceedings against you. It will first obtain a judgement. If it is a housing loan secured by property, the financial institution will foreclose on your property and sell it by public auction. If the loan is a car loan, the financial institution will act to repossess and sell the car. For unsecured loans, the financial institution has a number of options to execute the judgement it has obtained to recover its debts. These include writ of seizure and sale, garnishee proceedings, judgement debtor summons, and filing a bankruptcy order if the debt amount is RM30,000 and above.

If you are made a bankrupt, there are many things you are legally barred from doing. YOU CANNOT:
- Hold any public office without the approval of the Director-General (DG).
- Be a company director or carry out your own business or be involved in the management of a company without the court's or the DG's approval.
- Be involved in the management of a company or be an employee of a company that is owned by your spouse or close relatives and their spouses.
- Be a committee member of any registered body.
- Pursue any court action without the DG's permission.
- Leave the country without the court's or DG's approval.

LOOKING FOR THE SIGNS
Unfortunately, there are people who do not realise that they are in financial trouble. They carry on as they are, living in denial and making their situation worse by the day. There are some tell-tale signs to indicate that you are in financial trouble, and you must be aware of them. These are:

Credit cards
- Paying only the minimum balance each month.
- Increasing the outstanding balance every month.
- Going over your credit limit.
- Taking frequent cash advances.
Savings
- Using up your savings at an alarming rate.
- Having little or no savings to handle unexpected expenses or emergencies.

Loans
- Using the overdraft or automatic loan features on your current account frequently.
- Missing payments, paying late, or paying some bills this month and others next month.
- Receiving second or third payment notices from banks or creditors for non-payment of debts.
- Being denied credit because of a negative credit bureau report.
- Having your credit card revoked by the bank.
- Borrowing money from family or friends to pay your debts.
- Not knowing how much money you owe until the bills arrive.

Expenses
- Depending on part-time jobs, overtime, commissions or bonuses to pay for your living expenses.
- Living from paycheck to paycheck.

Ignorance
- Not talking to your spouse or family members about money problems, or arguing when you talk about money with them.

If you start experiencing any of the above, get advice immediately. Do not wait until the problem gets bigger: the earlier you seek assistance, the easier it is to get out of the situation. If you do not act immediately when you see the signs of being in financial trouble, it will only get worse.

What can you do? You can contact your bank and work out an adjusted repayment plan to suit your cash flow. You can also contact the Credit Counselling and Debt Management Agency (Agensi Kaunseling dan Pengurusan Kredit or AKPK) for financial counselling and advice.

AGENSI KAUNSELING DAN PENGURUSAN KREDIT (AKPK)
Set up by BNM, AKPK provides three main services free-of-charge to members of the public to help them to better manage their money:
- Financial education programmes and materials on the proper use of credit and basic money management skills, as well as tips on how to use credit responsibly.
- Financial counselling and advice, providing individuals with an in-depth evaluation of their problems and how these may be resolved.
- Debt Management Programme (DMP), through which AKPK works with individuals in developing personalised debt repayment plans and consults with their financial services providers to structure manageable repayment schemes.

AKPK's vision is "to make prudent financial management a way of life for all Malaysians". It seeks to equip Malaysians to live their lives with good money management skills and debts that are manageable.

The DMP is provided to individuals who:
- have positive net income after meeting their expenses;
- are not bankrupts;
- are not under any advanced legal action;
- are unable to manage their own debts;
- have loans taken from institutions regulated by BNM; and
- have loan exposure of less than RM2 million (as at time of publication).

More information about AKPK can be found at www.akpk.org.my

IN A NUTSHELL
- There are signs to indicate that you are in financial trouble. If you see any of these signs, you must take immediate action to get yourself out of the problem.
- Having unmanageable debts can affect all aspects of your life.
- Repayment track records are financial institutions' main criteria to assess creditworthiness.
- Financial institutions will take legal action against you for defaulting on your loan. Your creditor can make you a bankrupt if you fail to pay your loans, and being a bankrupt limits you in undertaking a number of things.
- The Credit Counselling and Debt Management Agency (AKPK) can assist you if you need help in managing your debt.

APPENDICES
APPENDIX 1
TIME VALUE OF MONEY – Invest now rather than later

[The original appendix tabulates, year by year up to age 55, three investors who each put in RM3,000 a year at a 10% annual investment return: Ahmad starts investing at age 18, Siti at age 22 and Zainal at age 30. The year-by-year figures are not recoverable from this extraction.]

Ahmad has more money at age 55 compared to Siti and Zainal, and he has invested the least – only RM15,000.

Important notes:
a. The totals that Ahmad, Siti and Zainal will get at the age of 55 will be as stated only if the annual investment return is consistently at 10% per year.
b. Investment return will fluctuate over the years due to economic and stock market conditions. Some years may be lower than 10% per year and some years may be higher than 10% per year. Therefore, the total investment value may be more or less than the original investment amount.
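The moral of Appendix 1 can be reproduced in a few lines of code. Since the year-by-year tables are not recoverable here, the sketch below assumes the pattern the surviving text implies: RM3,000 a year at a 10% return, with an early starter who contributes for only five years (RM15,000 in total, ages 18 to 22) still beating a later starter who contributes every year from 30 to 55 (RM78,000 in total):

```python
def future_value(contribution_ages, end_age, annual=3000, rate=0.10):
    """Contribute `annual` at the end of each contribution year,
    then let the balance compound at `rate` until end_age."""
    balance = 0.0
    for age in range(min(contribution_ages), end_age + 1):
        balance *= 1 + rate
        if age in contribution_ages:
            balance += annual
    return balance

early = future_value(range(18, 23), 55)  # contributes RM15,000 in total
late = future_value(range(30, 56), 55)   # contributes RM78,000 in total
print(round(early), round(late))         # the early starter ends up ahead
```

Under these assumptions the early starter finishes with roughly RM425,000 against roughly RM328,000, despite putting in about a fifth of the money.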
APPENDIX 2
Clearing Your Credit Card Debt – Example 1

Steven has 3 credit cards, each with a credit limit of RM5,000. He is struggling to manage his finances because he presently owes a total of RM10,000 on his credit cards, and he has only been paying the monthly minimum payment of 5% on each card. Steven's monthly salary is RM3,300. After deductions (EPF, SOCSO and tax), his take home pay is approximately RM2,882 per month. Steven has to service his car hire purchase instalment of RM525 per month, rental of RM650 per month, as well as his day-to-day living expenses, which amount to RM900 per month, leaving him with about RM300 in discretionary income, which he tries to save every month, but usually most of it goes to his parents.

[The original summary table lists each bank's credit limit, outstanding balance and 5% monthly minimum payment. The recoverable figures: the outstanding balances across Bank A (Master), Bank B (Visa) and Bank C (Master) total RM10,000, and the minimum payments total RM500, with Bank C's RM2,000 the lowest balance.]

Steven's total outstanding credit card balance of RM10,000 is equivalent to almost 25.25% of his annual gross salary of RM39,600, and the total minimum payment is 17.35% of his monthly take-home salary (this leaves Steven with only RM2,382 to live on the whole month!).

Steven wants to know how to clear all his credit card debt! Is there a solution? Thankfully, yes, but it requires self-discipline and effort. For this example, it is assumed that the interest rate on each credit card remains at 18% per year, which comes to 1.5% per month, and that Steven chooses to handle this situation on his own, without any negotiation with the bank, cancellation of cards or assistance from 3rd parties.

Firstly, Steven must acknowledge the fact that he overspends and needs to cut back. Otherwise, his spending habits will lead him into the poorhouse! Secondly, Steven needs to stop usage of all his cards. Thirdly, Steven needs to devise a plan to systematically pay off each card by adding more to the minimum payment each month. As he has more than 1 card, he will need to tackle one card at a time.

Let's begin by re-ranking his credit card balances in order of the lowest balance first and attacking the card with the lowest balance, Bank C (Master). Steven has some cash savings of RM1,000. We shall utilise RM500 of it to pay off a portion of his debt with Bank C, so that its balance becomes RM1,500; the remaining RM500 in his savings shall be retained for emergencies. Steven also agrees to commit a total of RM700 each month to pay off his debt: he makes a commitment to pay an additional RM225 per month on top of his minimum payment for Bank C, while maintaining the minimum payments on the rest of his cards. This means he has to live according to a strict budget and control his lifestyle. For the sake of simplicity, we will apply the same monthly payment amounts on Bank A and Bank B each month.

Here's how we calculate the on-going interest and outstanding balance after making payment each month, using Bank C (Master) as an example:

Outstanding balance: 1,500.00
Less payment: 300.00
Balance before interest: 1,200.00
Add interest (1.5% per month): 18.00
Balance (carry forward): 1,218.00 (see the Month 1 balance for Bank C)

Watch what happens after 6 months of following this plan (note: the balances are calculated AFTER making payment):

[The original six-month chart cannot be fully recovered from this extraction. It tracks all three balances month by month and shows Bank C falling from RM1,218.00 in Month 1 to RM0.00 by Month 6.]
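The month-by-month arithmetic in this plan is mechanical, so it is easy to check with a short script. The sketch below follows the calculation order shown in the Bank C walk-through (payment first, then 1.5% interest) and rolls a cleared card's payment onto the next card; the per-card figures used (RM1,500, RM3,500 and RM4,500 with payments of RM300, RM175 and RM225) are what the example's recoverable numbers imply, not a quote from the original table:

```python
def months_to_clear(balances, payments, monthly_rate=0.015):
    """Snowball repayment: cards are ordered lowest balance first; when a
    card is cleared, its payment rolls onto the next card in the list."""
    balances, payments = list(balances), list(payments)
    month = 0
    while any(b > 0 for b in balances):
        month += 1
        for i, b in enumerate(balances):
            if b > 0:
                b = max(0.0, b - payments[i])         # payment first...
                balances[i] = b * (1 + monthly_rate)  # ...then interest
        for i, b in enumerate(balances):              # roll freed-up payments
            if b <= 0 and payments[i] and i + 1 < len(payments):
                payments[i + 1] += payments[i]
                payments[i] = 0
    return month

# Bank C, Bank B, Bank A after the RM500 savings payment on Bank C
print(months_to_clear([1500, 3500, 4500], [300, 175, 225]))  # → 16
```

This reproduces the story's timeline: Bank C clears by Month 6, Bank B by Month 12, and everything is settled in 16 months, i.e. "a year and almost 4 months".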
In the space of just 6 months, Steven has settled his credit card with Bank C (Master) in full, leaving him to deal with only 2 more banks! This is quite an achievement, simply by re-adjusting his finances and exercising some measure of self-control. Now let's continue with his attack plan, this time targeting Bank B (Visa). The payment of RM300 that he used to make on Bank C must now be added to the minimum payment on Bank B, so RM475 per month goes to Bank B. This means that Steven's monthly commitment will not change: it remains RM700, targeting the credit cards in a systematic fashion.

[A second chart, only partially recoverable here, carries the plan through Months 7 to 12; by the end of Month 12 Bank B's balance reaches zero.]

In the space of 1 year (12 months), Steven is done paying off Bank B and has reduced his original credit debt of RM10,000 to RM2,402 – he has paid off 75.98% of his debt amount! A great achievement indeed. All he then needs to do is continue with the same strategy: take the payment of RM475 per month and add it to the minimum payment for Bank A, which would amount to RM700 per month, until all his debt is fully settled. By paying RM700 per month on Bank A (Master), Steven would settle this card in another 3.5 months. All in all, he took a year and almost 4 months to settle his credit card debt.

With his credit debt out of the way, Steven can now focus on serious savings and investments towards achievement of his life goals. He would need to ensure that he uses his credit cards more prudently and always pays off his credit card bills in full. He would also need to design a realistic budget to help him manage and monitor his cashflow each day, week and month so that he would not accidentally spend beyond his means.

Clearing Your Credit Card Debt – Example 2

John's troubles began 3 years ago, when he was earning RM5,000 per month; it was during those years that he applied for his credit cards. Due to his higher income at that time, the banks extended him credit limits twice his monthly gross salary. However, he was subsequently retrenched and struggled to obtain a new job. When he finally did get a job offer, he was forced to accept his now lower salary. The lower income
created difficulties in paying his credit cards and, as a result, his balances kept growing. John now has 5 credit cards, each with a credit limit of RM10,000, and he is struggling to make ends meet because he presently owes a total of RM37,300 on his credit cards, having only barely paid the monthly minimum payment of 5% on each card. John's monthly salary is RM3,300. After deductions (EPF, SOCSO and tax), his take home pay is approximately RM2,882 per month. The interest rate on each credit card is currently 18% per year, or 1.5% per month (see the box below on The Real Interest Rate on Your Credit Card).

The Real Interest Rate on Your Credit Card
Recall the earlier chapter that spoke about compound interest, which is interest upon interest added to the principal. The credit card interest rate of 18% per annum needs to be divided by 365 days, i.e. 0.0493% per day! Using the mathematical formula for daily compounding, [(1 + 0.000493)^365 − 1] = 0.1971, or 19.71%. Therefore, 19.71% is the effective interest rate of a credit card.

[The original summary table lists each bank's credit limit, outstanding balance and 5% monthly minimum payment; the balances total RM37,300 and the minimum payments total RM1,865.]

John's outstanding credit card balance of RM37,300 is 11.3 times his gross monthly salary. With a nett take home pay of RM2,882 per month, John has to commit 64.71% of this amount in order to meet the minimum 5% credit card payment. This will only leave John with RM1,017 to meet his other expenses for the month, such as rental, food, utilities, his car hire purchase payment, study loan repayment and so on. Clearly, John is in a financial predicament. His debt burden is too heavy and he would lack sufficient funds for his living expenses and emergencies. If he were to continue on his present path, his credit card balances would keep on increasing (credit card interest compounds DAILY!) and he would soon be unable to service his debt, default on his payments, and possibly face legal action from his financial institutions.

Thankfully, there is a solution, but it requires willpower, discipline and effort. Firstly, John must acknowledge that he has a serious debt problem and he must cancel his credit cards immediately. Secondly, he can negotiate for a lower interest rate and a longer but fixed duration to pay off his debt. He may either negotiate directly with each bank or he may opt to seek out the services of a 3rd party, such as a credit counselling agency, to assist him with the negotiations. Thirdly, he must tighten his belt, exercise self-control and discipline, and stick to the repayment plan with determination.

John recalculates his expenses and meets with all his banks to negotiate a manageable structured repayment plan. Assuming all his financial institutions agree to reduce the interest rate to 9% per annum with fixed monthly repayments, the banks would then convert all his outstanding credit card debt into term loans which, according to this example, would be paid off in approximately 4 years. John would be able to commit RM950 per month.

[The restructured repayment plan table – a term loan conversion at 9% interest per annum over 4 years – shows fixed monthly payments per bank totalling RM950. This figure was derived from a financial calculator using the 'time value of money' formula.]

At a reduced interest rate of 9% per annum, John's monthly commitment is now just about half of what it used to be, and this certainly eases the pressure off him, enabling him to meet his payments each month, take care of his own needs, and perhaps put aside a small amount each month for savings and emergencies. However, John would have to experience his credit card facilities being terminated, and he would lose the privilege and convenience of using them.
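The daily-compounding figure quoted in the box above is easy to verify:

```python
nominal = 0.18
daily = nominal / 365               # ≈ 0.0493% per day
effective = (1 + daily) ** 365 - 1  # daily compounding for one year
print(f"{effective:.2%}")           # ≈ 19.72%, the ~19.71% quoted above
```

The small difference from the book's 19.71% is just rounding of the daily rate before compounding.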
APPENDIX 3
Personal Monthly Budget with Personal Monthly Cash Flow in one Statement

(Columns: Budget | Actual Cash Flow)
Total monthly income
Less monthly fixed savings (10% of monthly income)
Less savings for emergency funds
= Monthly income net of savings
Less total monthly fixed expenses
Less total monthly variable expenses
Less total monthly discretionary expenses
= Excess (Deficit) Income

Excess of income over expenses + fixed savings + emergency funds = extra savings. (If the amount is negative, you have spent more than your monthly income.)

APPENDIX 4
Resources

Websites
- Banking: www.bankinginfo.com.my
- Insurance: www.insuranceinfo.com.my
- Bank Negara Malaysia: www.bnm.gov.my
- Agensi Kaunseling dan Pengurusan Kredit (AKPK): www.akpk.org.my
- Malaysia Deposit Insurance Corporation (Perbadanan Insurans Deposit Malaysia or PIDM)
- Ministry of Domestic Trade and Consumer Affairs
- Financial Mediation Bureau: www.fmb.org.my

Reference materials and books
- Smart Money-User by Carol Yip, published by Kanyin Publication
- The Money Book: A CAP Guide to Managing Your Money & More, published by the Consumers Association of Penang
- Banking Info and Insurance Info booklets, published by Bank Negara Malaysia
- Personal Money: The Edge Magazine on Managing Your Finances, 2007, The Edge Communications Sdn Bhd
https://www.scribd.com/doc/129422197/money-sense-pdf
[Moved] Invalid Signal signature: errorSignal(long)

Dear community,
Sometimes (not even always), my program raises the following errors:

Invalid Signal signature: errorSignal(long)
Invalid Signal signature: infoSignal(long)
Invalid Signal signature: doneSignal(long,QString)
Invalid Signal signature: aSlot()

There are two things:

1. The last one, aSlot(), is not a Signal but a Slot, defined as follows:

@Slot()
def aSlot(self):
    ... code ...

and called by linking a signal to it:

myObject.mySignal.connect(self.aSlot)

2. For the three first real signals, what is wrong with their signature? I defined my Signals as follows:

class myClass(QObject):
    errorSignal = Signal(long)
    doneSignal = Signal(long, str)
    infoSignal = Signal(long)

    def __init__(self):
        ... code ...

The problem is only raised "sometimes", and I cannot figure out when it occurs, as it does not stop the program and does not break on that error in debug mode... However, something is not working correctly, as my program keeps waiting when this error occurs. Any help?

PyQt or PySide?

PySide.

It appears from the pyside source code that there is no C++ wrapper available for this signal. I'm not exactly sure what that means. Did you try using a different signature? Things I would try include:

Signal()
Signal(int)
Signal(QString)

Oh, the empty one also failed. Yeah, I think it's not a signature problem, as the program is actually working 99% of the time. Just that 1% that I cannot figure out why... with the same program, same execution parameters... Anyway, it's occurring less and less, and I just need to retry if it occurs, but I was just wondering if there is a way to fix it.

I think I found the problem. I'm not sure, but it seems that it happens when an object is deleted too early. The problem happens with a QNetworkReply and raises weird errors such as "QNetworkReply doesn't have a createRequest() method".
I don't know if this error is linked to the previous one, but that's a new error that occurs right after the previous ones already introduced. And I think this error is raised because the QNetworkReply object was deleted too early (by a manual QObject.deleteLater(reply)).

Good to see you found it.

Well, I'm not sure that was the problem... It was finally raised even if I don't manually call QObject.deleteLater()... I got more information from the debugger this time:

Error in sys.excepthook:
Traceback (most recent call last):
  File "C:\Program Files (x86)\Programming\PyCharm 1.2.1\helpers\pydev\pydevd_breakpoints.py", line 58, in excepthook
    frame = frames[-1]
IndexError: list index out of range

Original exception was:
TypeError: errorOccured() takes exactly 2 arguments (1 given)
Invalid Signal signature: errorSignal(long,QString)
Invalid Signal signature: partPercent(long,long,long)
Invalid Signal signature: doneSignal(long,QString)
Invalid Signal signature: aSlot()
Invalid Signal signature: anotherSlot(QNetworkReply::NetworkError)

Any chance you could share the code or mock up an example that shows the same behavior?

I sent you an email with the code, thanks for your help!

It's probably useful if other people can have a look at the code as well.

Well, actually this is only one part of a not really open-source project. I was allowed to send one class only (the one raising the error) to you, but not to 'everyone'. I'll ask permission for that and might edit this post! If I can't use it directly, I'll try to rewrite a similar class, but it will need more time.

I'd go the extra mile to try and produce the error in a separate mock up then. I could not find anything odd or wrong reviewing your code, but I'm not proficient enough using PySide to notice more subtle errors. It'll be a learning experience for either of us then :)

Well, the main code of the mock up example is there: The main class it is using is there: However, the error is raised almost never...
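The TypeError: errorOccured() takes exactly 2 arguments (1 given) in that traceback is the classic arity mismatch: a handler written to receive a payload gets invoked by a signal that emits none (or vice versa). Here is a PySide-free sketch of the same failure mode; the Signal class and the Downloader name below are toy stand-ins, not Qt's, and Python 3 words the error differently, but the cause is the same:

```python
class Signal:
    """Toy stand-in for a Qt signal, just to show the arity problem."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)  # a mismatched slot signature blows up here

class Downloader:
    def __init__(self):
        self.error = Signal()

    def fail(self):
        self.error.emit()  # emitted with no arguments

d = Downloader()
d.error.connect(lambda code: print("error code:", code))  # expects one arg

try:
    d.fail()
except TypeError as e:
    print("mismatch:", e)
```

In real Qt code, the fix is to make the @Slot(...) decoration and the handler's parameters agree with what the connected signal actually emits.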
I even could not get the error after trying the mock up example more than 50 times. Maybe I already fixed the error; it used to appear only sometimes. I'm not sure this mock-up is very meaningful, but maybe some people might find some huge mistakes in the main class. This is very crappy coding by the way; this is my first PySide program and it was rapidly coded...
https://forum.qt.io/topic/5461/moved-invalid-signal-signature-errorsignal-long/15
We are about to switch to a new forum software. Until then we have removed the registration on this forum.

Hi guys, I need help to understand how to play two songs without overlapping. I am using the minim library, but I do not know how to detect the final part of the first track in order to play the second song (or maybe there are other, better ways to do it). This is my code, thanks for the tips :)

import ddf.minim.*;

Minim minim;
AudioPlayer[] player = new AudioPlayer[2];
String[] filenames = { "sound1.mp3", "sound2.mp3" };

void setup() {
  minim = new Minim(this);
  for (int i = 0; i < 2; i++) {
    player[i] = minim.loadFile(filenames[i]);
    player[i].play();
    println(filenames[i]);
  }
}

void draw() { }

void stop() {
  for (int i = 0; i < 2; i++) {
    player[i].close();
  }
  minim.stop();
  super.stop();
}

Well, don't launch playing of both sounds in setup() at the same time. Wait for the end of the first one to play the second one.

Hi PhilHo, thanks for the reply. Actually I am proceeding step by step to understand the minim functions, but it is probably better if I ask directly what I would like to achieve. I am trying to write a sort of musical player. On the top I have an alphabet composed of random images (for the moment I linked only 'a' and 'b') which are connected with the keyboard. Using the images as letters, it is possible to write on the bottom. Each letter written on the bottom should be connected with a sound track and played one after the other. What I need to know is how to link each image to a sound track and how to let the sounds play without overlapping. Can you help me? :)

Hi GoToLoop, the problem for the moment is that I do not understand how to link the image with the sound. For each letter I have three images, and each of these three images should be connected with a specific sound track (so the same for each alphabet letter). How is it possible to express in code...
"when 'a' is pressed, chose one random Aimg (here my basic way ) and load the corresponding soundtrack which it will be played in a second moment (maybe with a 'ENTER')... thanks This is an even better forum thread about it: :> Thank you very much, I updated my sketch:) here two options both that do not work.... 1) overlapping of the sound: ... { if (key=='a') { 2) it plays only the last letter drawn How is possible to let play the letter one after the other and not all together? Thanks a lott!
https://forum.processing.org/two/discussion/10508/please-some-help-to-understand-how-to-use-the-minim-sound-library
REST to collection builder using LINQ to XML

A few weeks ago, I did a post about how I handle REST responses using LINQ to XML. You can take a look at it here. Now, it is nice that we can parse REST responses with LINQ to XML, but at the same time it is kind of boring writing the mapping manually every time. In my Linq.Flickr project, I need to build objects from REST on various occasions, so it is monotonous as well as error prone to write the XML-to-object mapping manually, even though it is really easy to do using LINQ to XML.

Therefore, I have come up with a tiny class that does the task for me and, in return, all I have to do is declare some attributes on top of my class and properties. Once I have added all the property mappings, all that is left is to create the RestToCollectionBuilder object and call ToCollection to get the IEnumerable<T> result from the REST response.

RestToCollectionBuilder<SomeObject> builder = new RestToCollectionBuilder<SomeObject>();

OR

// pass the root element from which attributes
// will be passed to the SomeObject class.
RestToCollectionBuilder<SomeObject> builder = new RestToCollectionBuilder<SomeObject>(##ROOTELEMENT##);

// Where T : IDisposable
// Build the object.
IEnumerable<SomeObject> list = builder.ToCollection(##REQUESTURL##);

OR

// get the element and do some parsing of your own, if needed.
XElement element = GetElement(..)
IEnumerable<SomeObject> list = builder.ToCollection(element);

Talking about attributes and elements, let's consider the following REST response from Flickr:

<rsp stat="ok">
  <person id="12037949754@N01" nsid="12037949754@N01" isadmin="1" ispro="1" iconserver="122" iconfarm="1" gender="M" ignored="0" contact="0" friend="0" family="0" revcontact="0" revfriend="0" revfamily="0">
    <username>bees</username>
    <realname>Cal Henderson</realname>
    <mbox_sha1sum>2971b1c2fd1d4f0e8f99c167cd85d522a614b07b</mbox_sha1sum>
    <location>San Francisco, USA</location>
    <photosurl></photosurl>
    <profileurl></profileurl>
    <mobileurl></mobileurl>
  </person>
</rsp>

Here, we need to tell the collection builder which element is mapped to which property. Accordingly, I have created some custom attributes that will be used by the builder to create the map. These are:

- XElementAttribute
- XAttributeAttribute
- XNameAttribute

The first two attributes inherit from XNameAttribute. So, if our class is Person and we need to map the person element and its descendants to it, our class will look like:

// maps the element to which the class belongs
[XElement("person")]
public class Person
{
    // maps to <person nsid="xx">
    [XAttribute("nsid")]
    public string Id { get; set; }

    // maps to <person ispro="1">
    [XAttribute("ispro")]
    public bool IsPro { get; set; }

    // maps to <username>bees</username>
    [XElement("username")]
    public string Username { get; set; }

    ....
}

And that's all that is needed to map the class to REST elements. Though under the hood the RestToCollectionBuilder class does use LINQ to XML to prepare the collection, it gives out a stronger solution for working with REST responses.

I have used this technique in Linq.Flickr. You can check that out by digging into the code. Along with that, I have fused in a tiny Console App that processes the REST response, builds the object and dumps the result to the screen; you can download it here as well.
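The post's code is C#, but the idea (declare the element and attribute names once, and let a generic builder walk the XML) carries over to any language. As a rough illustration only, and not the Linq.Flickr API, here is the same pattern in Python with ElementTree, using a class-level mapping table in place of .NET attributes:

```python
import xml.etree.ElementTree as ET

class Person:
    # element name this class binds to, and field -> ("attr"|"elem", name)
    XML_ELEMENT = "person"
    XML_MAP = {
        "id":       ("attr", "nsid"),
        "is_pro":   ("attr", "ispro"),
        "username": ("elem", "username"),
    }

def to_collection(cls, xml_text):
    """Generic builder: find every bound element and map it onto cls."""
    root = ET.fromstring(xml_text)
    for node in root.iter(cls.XML_ELEMENT):
        obj = cls()
        for field, (kind, name) in cls.XML_MAP.items():
            if kind == "attr":
                value = node.get(name)
            else:
                child = node.find(name)
                value = child.text if child is not None else None
            setattr(obj, field, value)
        yield obj

rsp = """<rsp stat="ok">
  <person nsid="12037949754@N01" ispro="1">
    <username>bees</username>
  </person>
</rsp>"""

people = list(to_collection(Person, rsp))
print(people[0].id, people[0].username)  # → 12037949754@N01 bees
```

The class declares only names; the builder stays generic, which is exactly the appeal of the attribute-driven approach in the post.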
http://weblogs.asp.net/mehfuzh/rest-to-collection-builder-using-linq-to-xml
Doing BFS right-to-left means we can simply return the last node's value and don't have to keep track of the first node in the current row or even care about rows at all. Inspired by @fallcreek's solution (not published) which uses two nested loops to go row by row but already had the right-to-left idea, making it easier. I just took that further.

Python:

    def findLeftMostNode(self, root):
        queue = [root]
        for node in queue:
            queue += filter(None, (node.right, node.left))
        return node.val

Java:

    public int findLeftMostNode(TreeNode root) {
        Queue<TreeNode> queue = new LinkedList<>();
        queue.add(root);
        while (!queue.isEmpty()) {
            root = queue.poll();
            if (root.right != null) queue.add(root.right);
            if (root.left != null) queue.add(root.left);
        }
        return root.val;
    }

dude you do know a whole lotta tricks, you just simply switch the order of left and right while i spent time writing the naive level-order traversal, brilliant!

@SB.Hu Well, like I said, I got that idea from @fallcreek. Edit: I just checked again; actually @fallcreek's solution does have nested loops, the inner loop going over one whole level. But the right-to-left idea was already there, and I just took full advantage of it.

@StefanPochmann aha, didn't notice there is a description on top, good job anyway. Dude, thanks for inspiring. Just one quick question about the "(node.right, node.left)" that I did not figure out by researching: does that work like appending node.right first and then node.left to the queue? And is the type of "(node.right, node.left)" just a group of elements or something list-like? Thanks very much!

@StefanPochmann I have a question: for the below example, the code will return 4. It is not a left leaf, why is it correct?

      1
     / \
    2   3
     \ /
     4 5

@yanchun said in Right-to-Left BFS (Python + Java):

    It is not left leaf

Yes. And?

@yanchun hi, this question asks for the left-most value of the last level, it does not have to be a left leaf. Hope it helps.

@StefanPochmann Got it. I think I misunderstood the question. It is not the left node value. Thank you.

@jinping Now I got it. I think I misunderstood the question. :) It is not the left node value. Thank you.

Whoa, I didn't know the scope of a for loop variable stays with the last element. Hence we didn't have to pop anything from the queue, so we don't have to use a deque instead.

@Hellokitty_2015 What does that have to do with this thread / my solution?

@StefanPochmann brilliant idea. A C++ version:

    int findBottomLeftValue(TreeNode* root) {
        queue<TreeNode*> q;
        q.push(root);
        TreeNode* last;
        while (!q.empty()) {
            last = q.front(), q.pop();
            if (last->right) q.push(last->right);
            if (last->left) q.push(last->left);
        }
        return last->val;
    }
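Regarding the question above about `filter(None, (node.right, node.left))`: the tuple is just a pair of values, and `filter(None, ...)` keeps the truthy items in order, so the right child is appended before the left and `None` children are skipped. A standalone check, with plain strings standing in for tree nodes:

```python
# filter(None, iterable) keeps only the truthy items, preserving order.
# Stand-ins for node.right / node.left: the right child exists,
# the left child is missing (None).
right, left = "R", None
queue = ["root"]
queue += filter(None, (right, left))  # appends right first, drops the None
print(queue)  # ['root', 'R']
```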
https://discuss.leetcode.com/topic/78981/right-to-left-bfs-python-java
How to set up an IDE to use Anaconda

The following integrated development environments (IDEs) can be used to run Anaconda:

Spyder

Spyder, the Scientific PYthon Development EnviRonment, is a free interactive development environment (IDE) included with Anaconda. It includes editing, interactive testing, debugging and introspection features.

After you’ve installed Anaconda, you can start Spyder on OS X, Linux, or Windows by opening a Terminal or Command Prompt window and running the command spyder.

On OS X and Windows you can also start Spyder with the graphical Anaconda Launcher:

- OS X: go to your ~/anaconda directory and double-click Launcher.app.
- Windows: go to the Start Menu and select Anaconda Launcher.

After Anaconda Launcher has started, click the launch button to start Spyder. (You may need to click the install button next to spyder-app first. The launch button appears after installation completes.)

You are now ready to use Spyder as an IDE with Anaconda.

Python Tools for Visual Studio (PTVS)

Python Tools for Visual Studio is a free, open source plugin that turns Visual Studio into a Python IDE.

If you’ve installed Anaconda as your default Python installation and installed PTVS, your Visual Studio installation is already set to use Anaconda’s Python interpreter in PTVS. You can verify this by creating a new Python project and checking whether Anaconda is the Python it uses.

- To create a new Python project, navigate to File -> New -> Project (keyboard shortcut: CTRL+SHIFT+N) and select Python Application.

To check which Python the project uses:

- Navigate to Tools -> Python Tools -> Python Interactive (keyboard shortcut: ALT+I).
- In the Python Interactive window, type import sys, then press Enter.
- Type sys.version, then press Enter.

If PTVS is using Anaconda, you will see your Anaconda Python version displayed (for example, Anaconda 1.5.0 (64-bit)).
For more information on PTVS, check out the official PTVS documentation and the PTVS installation instructions.

PyCharm

PyCharm is an intelligent Python IDE offered in a Community version (free) and a Professional version (paid) for Windows, OS X and Linux.

Use PyCharm’s Preferences to set it to use Anaconda. There are multiple ways to get to the Preferences window, including selecting Preferences within the PyCharm file menu or clicking the Preferences icon in the top icon bar.

After the Preferences window is open, navigate to Project Interpreter -> Python Interpreters. Click the + sign to add Anaconda to the Python Interpreters list. If the Anaconda Python path is not listed, select Local... and navigate to the directory where you installed Anaconda Python. Typically the interpreter path is ~/anaconda/bin/python.

After you’ve added Anaconda, you will see it in the list of Python Interpreters, as well as the packages that were installed with it. Click OK, and the PyCharm IDE is now set up to use Anaconda.

MORE INFO: Official PyCharm documentation, and the Configuring Available Python Interpreters section.

Eclipse & PyDev

Eclipse is an open-source platform that provides an array of convenient and powerful code editing and debugging tools. PyDev is a Python IDE that runs on top of Eclipse.

If you do not already have Eclipse & PyDev installed:

- Download Eclipse and read the Eclipse resources.
- You may need to install the latest Java JDK/JRE before installing PyDev in order for PyDev to show up in the Eclipse Preferences menu after PyDev installation.
- Install PyDev.

After you have Eclipse, PyDev, and Anaconda installed, set Anaconda Python as your default by adding it as a new interpreter, and then selecting the new interpreter. Here’s how:

- Open the Eclipse Preferences window.
- Go to PyDev -> Interpreters -> Python Interpreter.
- Click the New button.
- In the “Interpreter Name” box type “Anaconda Python”.
- Browse to ~/anaconda/bin/python or wherever your Anaconda Python is installed.
- Click the OK button.
- In the next window, select all the folders and click the OK button again to select the folders to be added to the SYSTEM python path.
- The Python Interpreters window will now show Anaconda Python. Click OK.

You are now ready to use Anaconda Python with your Eclipse and PyDev installation.

Wing IDE

Wing IDE is an IDE designed specifically for the Python programming language. Wing IDE is offered in Professional, Personal, and Free 101 versions. To set up your Wing IDE installation to use Anaconda:

- Navigate to the Python configuration window: Menu -> Edit -> Configure Python
- Select the Custom option for Python Executable.
- Click Browse... and navigate to ~/anaconda/bin/python or wherever your Anaconda Python is installed.
- Select the Custom option for Python Path.
- Click Insert and browse to ~/anaconda/bin/python or wherever your Anaconda Python is installed.
- Select OK.

You may be prompted to reload your Python shell. After the reload, you should see Anaconda in the Python Shell tab. Wing IDE can use Anaconda’s Python now.

For more information, check out the official Wing IDE documentation.

Ninja IDE

Ninja IDE is a cross-platform free and open source IDE specially designed for Python application development. To use Anaconda Python with the Ninja IDE:

- Go to Project -> Open Project Properties
- In the Project Execution tab, select Anaconda Python for the Python Custom Interpreter. Enter the full path to the installed Python interpreter.
- Verify this worked by creating a test script. Enter this code:

      import sys
      print(sys.version)

- Run your test script. In the output screen, you will see your Anaconda Python version listed.

For more information, check out the Ninja-IDE documentation.
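All of the interpreter checks above amount to asking Python to identify itself. The same two-line check works in any of these IDEs' consoles, or from any terminal:

```python
import sys

# An Anaconda interpreter typically reports an executable path under
# ~/anaconda and a version string that mentions "Anaconda".
print(sys.executable)
print(sys.version)
```

If the printed path points somewhere other than your Anaconda installation, the IDE is configured to use a different interpreter.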
http://docs.continuum.io/anaconda/ide_integration
CGI::MxScreen::Screen - Ancestor for all user screens

    use base qw(CGI::MxScreen::Screen);

    sub display {    # display screen -- defined
        my $self = shift;
        my ($arg1, $arg2) = @_;
        ...
    }

This deferred class is meant to be the ancestor of all your application screens. You must inherit from CGI::MxScreen::Screen and at least define the display() routine, which will be called by the CGI::MxScreen manager when that screen is selected for display.

CGI::MxScreen::Screen objects are blessed hashes. In order to leave you as much of the key namespace as possible, all attributes defined by the deferred class start with two leading underscores. Contrary to the routine namespace (see "INTERFACE" below), this simple workaround should prevent any accidental attribute collision.

Screen objects must be serializable. That means they must never hold or refer to objects or data containing things known to be non-serializable, like GLOB and CODE references.

Screen objects are never created by the user. They are handled by the CGI::MxScreen manager, who will ensure that at most one instance of each screen name will be created during a session. That means the init() routine will be called at most once. I say at most because screens are created on demand, and if your application does not need to show some state during a session, then the screen object will never be created.

Because you need to inherit, you must be very familiar with the internals of the class, i.e. not just only the public interface, but also the routines used throughout the framework that are not meant for public consumption. Indeed, Perl will not warn you when you accidentally define a routine bearing the same name as one present in the ancestors (i.e. redefinition is automatic, which is dangerous here), and redefining routines as essential as validate() or name() would lead to havoc.
Before detailing the interface, here is a compact list of all the public features, to make it easier to see what is provided (and to know which routine names are forbidden to you). A trailing + indicates a routine that you may choose to redefine; a trailing * indicates a deferred routine, which you must define. Functions listed between (parenthesis) are action callbacks, not meant to be called directly.

The following interface is public, i.e. you may safely use those features on screen objects:

    (abort_on_error)    bgcolor           bounce
    (clear_context)     current_screen    default_button
    display*            enter+            error
    error_env           init+             leave+
    manager             name              previous_screen
    record_button       record_field      screen_title
    set_default_button  set_error         spring_screen
    (validate)          vars

The following interface is private, i.e. you should never make direct use of those features on screen objects. It is used internally by CGI::MxScreen and is documented (well, listed) so that you never define a feature bearing that name in your own classes. Those names you should absolutely never override:

    make
    remake
    relink_to_manager
    _init
    set_error_env
    _clear_internal_context

You must also know that, in order to be serializable with Storable, the screen defines the following hooks:

    STORABLE_freeze
    STORABLE_thaw

If for some reason you need to redefine those hooks, you can't simply call SUPER:: on them whilst doing your local processing in the redefinition. Look at the source code to understand what needs to be done. Because the above hooks were necessary, it means that adding other serializer support (see CGI::MxScreen::Serializer) will probably require similar hooks. Unfortunately, although we could design things so as to make this choice possible, the only serializer we knew about was Storable.

Screens are created automatically by the CGI::MxScreen manager, based on the -screens settings, as explained in "Creation Routine" in CGI::MxScreen.
The only special argument is -class, which is handled internally by CGI::MxScreen, but the others are passed verbatim to the screen creation routine. The supported arguments are:

-bgcolor => color
    Optional. Overrides the default background for this screen.

-title => screen_title
    Mandatory. Sets the screen title.

The following attributes are defined:

bgcolor
    The background color (string form, i.e. either a color name like "gray" or an hexadecimal representation of the RGB triplet like "#1e32ef") used to display this screen.

current_screen
    The current screen being displayed, along with the display() arguments, as an array reference, for instance:

        ["Welcome", 010125]

    where the first item is the screen name and the remaining are the displaying arguments. This makes it possible to use that as a -target argument for buttons (see CGI::MxScreen::Form::Button).

    Note: The notion of current_screen is maintained by the manager. Whatever screen object you query, you will always obtain the same answer. This note also applies to previous_screen and spring_screen.

default_button
    The default button recorded via set_default_button. The default button is used when the user presses <Enter> when submitting a form, without pressing one of the submit buttons. Usually, browsers allow this when there is only one CGI input field in the form. If there is no default button specified, CGI::MxScreen will remain in the current state and redisplay the screen.

error
    The error information, recorded via set_error. This is a user-defined field, i.e. it is not used by CGI::MxScreen. It is meant to be filled by action callbacks, when an error is detected. Since it will be used by your own screens, you may put whatever you wish.

error_env
    When an action callback has failed, this attribute holds the CGI::MxScreen::Action_Env object used during the execution of the callback chain. See CGI::MxScreen::Action_Env for its access interface.

    The attribute is otherwise undef, so you may test it boolean-wise to determine whether an error did occur or not.

manager
    The CGI::MxScreen object who manages this screen.

name
    The screen name, as defined by the -screens argument when CGI::MxScreen was built. See "Creation Routine" in CGI::MxScreen.

previous_screen
    The previously displayed screen, in the same form as current_screen. It will be undef the first time. If you had a transition from one screen to the same one, then previous_screen and current_screen will return the same information. However, see spring_screen.

screen_title
    This is the title of the screen, as configured during the creation of the CGI::MxScreen manager, via the -screens argument. See "Creation Routine" in CGI::MxScreen. It is not called simply title because screens may use the CGI module, and CGI exports a title() routine as part of its HTML generation routines.

spring_screen
    The screen we sprang from. This is the last stable state we were in before jumping into the current screen, transitions to the same screen notwithstanding. The format is the same as current_screen. This may be used as -target for buttons to come back to the screen that directed a transition to the current screen, whatever it was. See CGI::MxScreen::Form::Button.

vars
    This returns a reference to a global persistent hash table, available in all states. It is free for use by user screens, but all the values you will store there must be serializable (by Storable, or any other serializer you configured). By default, accesses to keys within this hash table are protected, to guard against typos (at runtime, alas). If you don't like that behaviour, you can set $mx_check_vars to false in the global configuration. See CGI::MxScreen::Config.

The following routines allow changing of some attributes:

set_default_button button_object
    Records a default button for the screen, to be used if they press <Enter> to submit the form data.
    The button_object is a CGI::MxScreen::Form::Button object, obtained via record_button().

set_error value
    Sets the error attribute, which is free for use by action callbacks, for instance to propagate some error indication to the screen.

To make your screen a concrete class, you need to define the deferred feature, display.

display args
    The routine that displays the CGI form. It is called by the CGI::MxScreen manager, with some arguments, as defined by the -initial argument (see "Creation Routine" in CGI::MxScreen) or by the -target button argument (see CGI::MxScreen::Form::Button), or, generally speaking, anything that defines a state transition (e.g. the bounce() routine, as described below).

    Before calling the routine, CGI::MxScreen has initialized the HTML headers and opened the form tag, directing the submission to the script's URL, but without the query string: if one is supplied initially, it is up to you to save the relevant information in the persistent context, or in your screen objects, since they are also persistent.

    The routine must print its generated HTML to STDOUT, and can make use of all the helper routines from the CGI module to generate form controls or emit HTML via routines. There are also some other helper routines defined in CGI::MxScreen::HTML.

    When it returns, the screen is supposed to have been fully displayed, and the form will be closed automatically by CGI::MxScreen. If you haven't read them already, you should study "Example" in CGI::MxScreen and CGI::MxScreen::Layout.

A screen is given the opportunity to redirect itself to another state, by sending a bounce exception to the manager via bounce(). However, it may do so only if it has not already emitted anything. If you left $mx_buffer_stdout at its default true setting (see CGI::MxScreen::Config), anything you output before bouncing will be discarded for you.

Usually, your screens will define fields and submit buttons. You should record them to be able to attach validation routines or action callbacks, but you may choose not to and use plain raw CGI routines, with manual hidden context propagation. However, note that it would be a waste, because CGI::MxScreen is supposed to handle that for you, and also the only display() routine called is the one for the visible screen. Any other parameters defined on other screens would not even have the opportunity to hide themselves... As for buttons, not recording them means you won't be able to make use of the state machine features. To record fields and buttons, use record_field() and record_button().

The following features have empty default implementations, and are meant to be redefined in your screens. It is not necessary to redefine all of them, or any of them, if you don't need them:

enter from_screen
    Called when we enter a screen whilst coming from another one. The screen object we sprang from is given as argument, but will be undef for the very first screen displayed (the initial screen). Note: we're passed a screen object, not a list representation like the one returned by spring_screen().

init
    Called when the screen object is created. You may do whatever initialization is necessary on your object, but remember that screen objects are created once and remain persistent across the whole session. Therefore, if you need runtime initialization each time one enters this screen, write it within enter().

leave to_screen
    Called when we leave a screen to go to to_screen, which is a screen object. Contrary to enter(), this one is always defined, by construction. Note: we're passed a screen object, not a list representation like the one returned by spring_screen().

Those features are necessary to let the screen control what's going to happen when the form is submitted. They are meant to be used during display() processing:

bounce screen_name, args
    This is an exception (sic!) to the definition given above.
    By calling bounce(), a screen redirects the state machine to the screen called screen_name, with args being the display() arguments. You should not call bounce() after having emitted something. This feature is meant to be an exception, allowing a state to be bypassed when some condition is met. To avoid endless loops, there is a hardwired limit of 20 consecutive bounces allowed. As famous people said on another occasion when talking about computer limits, "this should be sufficient for any application".

record_button args
    Records a submit button, and returns a CGI::MxScreen::Form::Button object. Please see CGI::MxScreen::Form::Button for the interface, and the description of what args can be.

record_field args
    Records a control field, and returns a CGI::MxScreen::Form::Field object. Please see CGI::MxScreen::Form::Field for the interface, and the description of what args can be.

Those features are not meant to be used directly, but are provided so that they can be used as action callbacks attached to buttons, as described in CGI::MxScreen::Form::Button. The most important one is 'validate' (spelled as a string because this is how it should be used: see "Callbacks" in CGI::MxScreen), which will trigger all the field verification and patching callbacks.

abort_on_error
    This callback returns CGI_MX_ABORT to immediately abort the callback chain if there is already an error in one of the preceding callbacks. See CGI::MxScreen::Error.

clear_context [screen_name]
    Clears a specific section of the overall context where orphan CGI parameters are saved. A CGI parameter is orphan if there was no record_field() done for it. If screen_name is not specified, this applies to the current screen. This callback is useful if you wish to discard the state of orphan CGI parameters, so that the next time they are created, they get their default value.

validate [continue]
    Runs the validation and patching callbacks on all the recorded fields for this screen. If continue is true, any error will not be fatal immediately, i.e. CGI_MX_ERROR will be returned, so that other action callbacks may execute. If not specified, it defaults to false, meaning a failed validation immediately triggers the error and the end of the action callback sequence.

Here is an example of action callback settings for a submit button:

    my $ok = $self->record_button(
        -name   => "OK",
        -target => "Next",
        -action => [
            'validate',              # Same as ['validate', 0]
            ['do_something', $self],
            'abort_on_error',
            ['clear_context', "Next"],
        ]
    );

See CGI::MxScreen::Form::Button for more information on record_button().

Raphael Manfredi <Raphael_Manfredi@pobox.com> and Christophe Dehaudt <Christophe.Dehaudt@teamlog.fr>.

CGI::MxScreen(3), CGI::MxScreen::Form::Button(3), CGI::MxScreen::Form::Field(3).
http://search.cpan.org/~ram/CGI-MxScreen-0.1.1/MxScreen/Screen.pm
Hi All,

I was wondering if it is possible to move a Software Component from one Product to another. If it is possible, can someone guide me on how to do this?

Regards,
Ashish

Hi,

This is possible: you can use the Release Transfer in the Integration Builder's menu. First import the new SC and create a namespace with the same name in that SC. Then you can do the release transfer. Make sure that you activate the SC and namespace before the release transfer. I have done this and it worked.

If you want a different namespace then you have to copy the objects from the new namespace when you have done the release transfer.

I hope this helps.

Stefan

Hi Stefan,

I created a custom entry for Product and Software Component which I do not require any more. Can one delete the custom entry of Product and Software Component?

Thanks in anticipation.

Ravinder
https://answers.sap.com/questions/845152/moving-software-component-from-one-product-to-anot.html
import java.io.File;
import java.io.IOException;

import ch.ethz.ssh2.Connection;
import ch.ethz.ssh2.LocalPortForwarder;

public class PortForwarding
{
    public static void sleepSomeTime(long milliSeconds)
    {
        try
        {
            Thread.sleep(milliSeconds);
        }
        catch (InterruptedException e)
        {
        }
    }

    public static void main(String[] args)
    {
        String hostname = "127.0.0.1";
        String username = "joe";

        File keyfile = new File("~/.ssh/id_rsa"); // or "~/.ssh/id_dsa"
        String keyfilePass = "joespass"; // will be ignored if not needed

        try
        {
            /* Create a connection instance */

            Connection conn = new Connection(hostname);

            /* Now connect */

            conn.connect();

            /* Authenticate */

            boolean isAuthenticated = conn.authenticateWithPublicKey(username, keyfile, keyfilePass);

            if (isAuthenticated == false)
                throw new IOException("Authentication failed.");

            /* ===== OK, now let's establish some local port forwardings ===== */

            /* Example Port Forwarding: -L 8080: (OpenSSH notation)
             *
             * This works by allocating a socket to listen on 8080 on the local interface (127.0.0.1).
             * Whenever a connection is made to this port (127.0.0.1:8080), the connection is forwarded
             * over the secure channel, and a connection is made to from the remote
             * machine (i.e., the ssh server).
             *
             * (the above text is based partially on the OpenSSH man page)
             */

            /* You can create as many of them as you want */

            LocalPortForwarder lpf1 = conn.createLocalPortForwarder(8080, "", 80);

            /* Now simply point your webbrowser to 127.0.0.1:8080 */
            /* (on the host where you execute this program) */

            /* ===== OK, now let's establish some remote port forwardings ===== */

            /* Example Port Forwarding: -R 127.0.0.1:8080: (OpenSSH notation)
             *
             * Specifies that the port 127.0.0.1:8080 on the remote server is to be forwarded to the
             * given host and port on the local side. This works by allocating a socket to listen to port
             * 8080 on the remote side (the ssh server), and whenever a connection is made to this port, the
             * connection is forwarded over the secure channel, and a connection is made to
             * by the Ganymed SSH-2 library.
             *
             * (the above text is based partially on the OpenSSH man page)
             */

            /* You can create as many of them as you want */

            conn.requestRemotePortForwarding("127.0.0.1", 8080, "", 80);

            /* Now, on the ssh server, if you connect to 127.0.0.1:8080, then the connection is forwarded
             * through the secure tunnel to the library, which in turn will forward the connection
             * to. */

            /* Sleep a bit... (30 seconds) */
            sleepSomeTime(30000);

            /* Stop accepting remote connections that are being forwarded to */

            conn.cancelRemotePortForwarding(8080);

            /* Sleep a bit... (20 seconds) */
            sleepSomeTime(20000);

            /* Stop accepting connections on 127.0.0.1:8080 that are being forwarded to */

            lpf1.close();

            /* Close the connection */

            conn.close();
        }
        catch (IOException e)
        {
            e.printStackTrace(System.err);
            System.exit(2);
        }
    }
}
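What createLocalPortForwarder does conceptually (listen on a local port and relay each connection to a target) can be sketched without SSH at all. The following is a plain-TCP Python model, illustration only: no encryption, one connection, hypothetical function names:

```python
import socket
import threading

def pipe(src, dst):
    # Relay bytes until the source side closes, then half-close the sink.
    while True:
        try:
            data = src.recv(4096)
        except OSError:
            break
        if not data:
            break
        try:
            dst.sendall(data)
        except OSError:
            break
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def serve_one_forward(remote_host, remote_port):
    """Listen on an ephemeral local port; when one client connects,
    connect to (remote_host, remote_port) and relay bytes both ways.
    Returns the local port that was chosen."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    local_port = srv.getsockname()[1]

    def run():
        client, _ = srv.accept()
        remote = socket.create_connection((remote_host, remote_port))
        threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
        pipe(remote, client)

    threading.Thread(target=run, daemon=True).start()
    return local_port
```

In the SSH case above, the "relay" leg between the two endpoints runs inside the encrypted channel instead of over plain TCP; the accept-and-forward structure is the same.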
http://kickjava.com/src/PortForwarding.java.htm
All modules implicitly inherit from a built-in module called UNIVERSAL and inherit the following three methods: isa(), can(), and VERSION().

For example, Rectangle->isa('Shape') returns true if the Rectangle module inherits (however indirectly) from the Shape module. Rectangle->can('draw') returns true if the Rectangle or any of its base packages contain a function called draw.

If you say,

    package Bank;
    $VERSION = 5.1;

and the user of this module says,

    use Bank 5.2;

Perl automatically calls Bank->VERSION(5.2), which can, for instance, make sure that all libraries required for version 5.2 are loaded. The default VERSION method provided by UNIVERSAL simply dies if the Bank's $VERSION variable has a lower value than that needed by the user of the module.

Because Perl allows a package to shamelessly trample on other namespaces, some packages use the UNIVERSAL module as a holding area for some global subroutines that they wish to export to everyone. I recommend that you do not use this "feature" yourself (or at least not in modules that you contribute to CPAN!).

We have mentioned two places that Perl searches when it cannot find a method in the target module: the inheritance hierarchy (@ISA) and AUTOLOAD. While checking the inheritance hierarchy, Perl checks the base classes' @ISA arrays too: a depth-first search is conducted, and the first available one is used. Let us examine the precise order in which all these subroutines are searched. Given:

    package Man;
    @ISA = qw(Mammal Social_Animal);

a call to Man->schmooze results in the following search sequence. First the normal inheritance hierarchy is checked:

    Man::schmooze
    Mammal::schmooze
    (Mammal's base classes, recursively)::schmooze
    Social_Animal::schmooze
    (Social_Animal's base classes, recursively)::schmooze
    UNIVERSAL::schmooze    (because UNIVERSAL is implicitly at the
                            end of every module's @ISA array)

Then AUTOLOAD is looked up in the same order:

    Man::AUTOLOAD
    Mammal::AUTOLOAD
    (Mammal's base classes, recursively)::AUTOLOAD
    Social_Animal::AUTOLOAD
    (Social_Animal's base classes, recursively)::AUTOLOAD
    UNIVERSAL::AUTOLOAD

The first available subroutine is given the control and the search is stopped. If all fails, Perl throws a run-time exception.
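The search order above can be modelled in a few lines. Here is a toy resolver, written in Python purely as an illustration of the dispatch order, not of Perl's actual implementation:

```python
def find_method(package, name, isa, methods):
    """Depth-first search of an @ISA-style inheritance graph.
    `isa` maps package -> list of base packages; `methods` maps
    package -> set of routine names defined there."""
    if name in methods.get(package, set()):
        return package
    for base in isa.get(package, []):
        found = find_method(base, name, isa, methods)
        if found:
            return found
    return None

def resolve(package, name, isa, methods):
    # Normal lookup first (with UNIVERSAL implicitly last in @ISA),
    # then AUTOLOAD in the same order, mirroring Perl's dispatch.
    full_isa = dict(isa)
    full_isa[package] = isa.get(package, []) + ["UNIVERSAL"]
    hit = find_method(package, name, full_isa, methods)
    if hit:
        return (hit, name)
    hit = find_method(package, "AUTOLOAD", full_isa, methods)
    if hit:
        return (hit, "AUTOLOAD")
    raise RuntimeError("Undefined subroutine %s::%s" % (package, name))

isa = {"Man": ["Mammal", "Social_Animal"]}
methods = {"Social_Animal": {"schmooze"}, "UNIVERSAL": {"isa", "can", "VERSION"}}
print(resolve("Man", "schmooze", isa, methods))  # ('Social_Animal', 'schmooze')
```

Because Mammal (and its bases) are exhausted before Social_Animal is even considered, a schmooze defined anywhere under Mammal would win, exactly as the depth-first ordering above implies.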
http://doc.novsu.ac.ru/oreilly/perl/advprog/ch07_03.htm
reloaded.

    [-help] [-jarasresource] [-noaction] [-nocasesensitivepub] [-nocleargrants]
    [-nodefiner] [-nogrant] [-norecursivejars] [-noschema] [-noserverside]
    [] [-time] [-unresolvedok] [-v | -verbose]

-oci or -thin; -oci is the default.

-oci: @<database> is optional; if you do not specify it, then dropjava uses the user's default database. If specified, then ..

The java command is analogous to the JDK java command; it invokes a class's static main() method. The class must have been loaded with loadjava. (There is no point in publishing a class that will be invoked with the java command.) The java command provides a convenient way to test Java code that runs in the database. In particular, the command catches exceptions and redirects the class's standard output and standard error to the

Table 7-6 summarizes the java arguments. Here is a java command example. Say hello and display arguments:

    package hello;

    public class World
    {
        public World()
        {
            super();
        }

        public static void main(String[] argv)
        {
            System.out.println("Hello from the Oracle9i ORB");
            if (argv.length != 0)
                System.out.println("You supplied " + argv.length + " arguments: ");
            for (int i = 0; i < argv.length; i++)
                System.out.println("  arg[" + i + "] : " + argv[i]);
        }
    }

Compile, load, publish, and run the executable as follows, substituting your userid.
http://docs.oracle.com/cd/B10500_01/java.920/a96656/newtools.htm
Name

CYGPKG_PROFILE_GPROF — eCos Support for the gprof profiling tool

Description

The GNU gprof tool provides profiling support. After a test run it can be used to find where the application spent most of its time, and that information can then be used to guide optimization effort. Typical gprof output will look something like this:

     7.60   4.22   0.78   40000   19.41   86.75   Proc_1
     6.89   4.93   0.70   40000   17.60   28.99   Proc_6
     6.77   5.62   0.69   40000   17.31   27.14   Func_2
     6.62   6.30   0.68   40000   16.92   16.92   Proc_8
     5.94   6.90   0.61                           strcmp
     5.58   7.47   0.57   40000   14.26   26.31   Proc_3
     5.01   7.99   0.51   40000   12.79   12.79   Proc_4
     4.46   8.44   0.46   40000   11.39   11.39   Func_3
     3.68   8.82   0.38   40000    9.40    9.40   Proc_5
     3.32   9.16   0.34   40000    8.48    8.48   Proc_2
     …

This output is known as the flat profile. The data is obtained by having a hardware timer generate regular interrupts. The interrupt handler stores the program counter of the interrupted code. gprof performs a statistical analysis of the resulting data and works out where the time was spent.

gprof can also provide information about the call graph, for example:

    index  % time    self   children    called         name
    …
                     0.78   2.69       40000/40000        main [1]
    [2]      34.0    0.78   2.69       40000           Proc_1 [2]
                     0.70   0.46       40000/40000        Proc_6 [5]
                     0.57   0.48       40000/40000        Proc_3 [7]
                     0.48   0.00       40000/120000       Proc_7 [3]

This shows that function Proc_1 was called only from main, and Proc_1 in turn called three other functions.

Callgraph information is obtained only if the application code is compiled with the -pg option. This causes the compiler to insert extra code into each compiled function, specifically a call to mcount, and the implementation of mcount stores away the data for subsequent processing by gprof.

The eCos profiling package requires some additional support from the HAL packages, and this may not be available on all platforms:

- There must be an implementation of the profiling timer. Typically this is provided by the variant or platform HAL using one of the hardware timers.
If there is no implementation then the configuration tools will report an unresolved conflict related to CYGINT_PROFILE_HAL_TIMER, and profiling is not possible. Some implementations overload the system clock, which means that profiling is only possible in configurations containing the eCos kernel and CYGVAR_KERNEL_COUNTERS_CLOCK.

- There should be a hardware-specific implementation of mcount, which in turn will call the generic functionality provided by this package. It is still possible to do some profiling without mcount, but the resulting data will be less useful. To check whether or not mcount is available, look at the current value of the CDL interface CYGINT_PROFILE_HAL_MCOUNT in the graphical configuration tool or in an ecos.ecc save file.

- The current profiling support is only suitable for single-core systems; it is not SMP-aware. Since profiling is driven by interrupts from a timer, samples can only be collected from the CPU to which that interrupt is bound. There is no mechanism for sampling the state of the other CPUs. The array of sample counters is not updated atomically, so updates to the same, or close, entries may result in readings being lost. For these reasons, targets that might support profiling in a single-core configuration will disable it in a multi-core configuration.

This document only describes the eCos profiling support. Full details of gprof functionality and output formats can be found in the gprof documentation. However it should be noted that that documentation describes some functionality which cannot be implemented using current versions of the gcc compiler: the section on annotated source listings describes basic block counting, which is not relevant. For basic block counting, the GNU gcov tool should be used instead.

Building Applications for Profiling

To perform application profiling the gprof package CYGPKG_PROFILE_GPROF must first be added to the eCos configuration.
On the command line this can be achieved using:

$ ecosconfig add profile_gprof
$ ecosconfig tree
$ make

Alternatively the same steps can be performed using the graphical configuration tool by adding the package "Application profile support" with the Build->Packages menu item. If the HAL packages implement mcount for the target platform then usually application code should be compiled with -pg. Optionally eCos itself can also be compiled with this option by modifying the configuration option CYGBLD_GLOBAL_CFLAGS. Compiling with -pg is optional but gives more complete profiling data. Profiling does not happen automatically. Instead it must be started explicitly by the application, using a call to profile_on. A typical example would be:

#include <pkgconf/system.h>
#ifdef CYGPKG_PROFILE_GPROF
# include <cyg/profile/profile.h>
#endif
…
int main(int argc, char** argv)
{
    …
#ifdef CYGPKG_PROFILE_GPROF
    {
        extern char _stext[], _etext[];
        profile_on(_stext, _etext, 16, 3500);
    }
#endif
    …
}

profile_on takes four arguments:

start address, end address - These specify the range of addresses that will be profiled. Usually profiling should cover the entire application. On most targets the linker script will export symbols _stext and _etext corresponding to the beginning and end of code, so these can be used as the addresses. It is possible to perform profiling on a subset of the code if that code is located contiguously in memory.

bucket size - profile_on divides the range of addresses into a number of buckets of this size. It then allocates a single array of 16-bit counters with one entry for each bucket. When the profiling timer interrupts, the interrupt handler will examine the program counter of the interrupted code and, assuming it is within the range of valid addresses, find the containing bucket and increment the appropriate counter. The size of the array of counters is determined by the range of addresses being profiled and by the bucket size.
For a bucket size of 16, one counter is needed for every 16 bytes of code. For an application with say 512K of code that means dynamically allocating a 64K array. If the target hardware is low on memory then this may be unacceptable, and the requirements can be reduced by increasing the bucket size. However this will affect the accuracy of the results and gprof is more likely to report the wrong function. It also increases the risk of a counter overflow. For the sake of run-time efficiency the bucket size must be a power of 2, and it will be adjusted if necessary.

time interval - The final argument specifies the interval between profile timer interrupts, in units of microseconds. Increasing the interrupt frequency gives more accurate profiling results, but at the cost of higher run-time overheads and a greater risk of a counter overflow. The HAL package may modify this interval because of hardware restrictions, and the generated profile data will contain the actual interval that was used. Usually it is a good idea to use an interval that is not a simple fraction of the system clock, typically 10000 microseconds. Otherwise there is a risk that the profiling timer will disproportionately sample code that runs only in response to the system clock.

profile_on can be invoked multiple times, and on subsequent invocations, it will delete profiling data and allocate a fresh profiling range. Profiling can be turned off using the function profile_off:

void profile_off(void);

This will also reset any existing profile data. If the eCos configuration includes a TCP/IP stack and if a tftp daemon will be used to extract the data from the target then the call to profile_on should happen after the network is up. profile_on will attempt to start a tftp daemon thread, and this will fail if networking has not yet been enabled.
int main(int argc, char** argv)
{
    …
    init_all_network_interfaces();
    …
#ifdef CYGPKG_PROFILE_GPROF
    {
        extern char _stext[], _etext[];
        profile_on(_stext, _etext, 16, 3000);
    }
#endif
    …
}

The application can then be linked and run as usual. When gprof is used for native development rather than for embedded targets the profiling data will automatically be written out to a file gmon.out when the program exits. This is not possible on an embedded target because the code has no direct access to the host's file system. Instead the gmon.out file has to be extracted from the target as described below. gprof can then be invoked normally:

$ gprof dhrystone
Flat profile:
…

If gmon.out does not contain call graph data, either because mcount is not supported or because this functionality was explicitly disabled, then the --no-graph option must be used:

$ gprof --no-graph dhrystone
Flat profile:
Each sample counts as 0.003003 seconds.
  %   cumulative   self              self    total
 time   seconds   seconds   calls   us/call  us/call  name
14.15      1.45     1.45                              Proc_7
11.55      2.63     1.18                              Func_1
 8.04      3.45     0.82                              main
…

Extracting the Data

By default gprof expects to find the profiling data in a file gmon.out in the current directory. This package provides two ways of extracting data: a gdb macro or tftp transfers. Using tftp is faster but requires a TCP/IP stack on the target. It also consumes some additional target-side resources, including an extra tftp daemon thread and its stack. The gdb macro can be used even when the eCos configuration does not include a TCP/IP stack. However it is much slower, typically taking tens of seconds to retrieve all the data for a non-trivial application. The gdb macro is called gprof_dump, and can be found in the file gprof.gdb in the host subdirectory of this package, and in the ECOS_INSTALL_DIR/etc subdirectory. A typical way of using this macro is:

(gdb) source <ECOS_INSTALL_DIR>/etc/gprof.gdb
(gdb) gprof_dump

This macro can be used any time after the call to profile_on.
It will store the profiling data accumulated so far to the file gmon.out in the current directory, and then reset all counts. Profiling uses only a 16-bit counter for every bucket of code. These counters can easily saturate if the profiling run goes on for a long time, or if the application code spends nearly all its time in just a few tight inner loops. The counters will not actually wrap around back to zero; instead they will stick at 0xFFFF, but this will still affect the accuracy of the gprof output. Hence it is desirable to reset the counters once the profiling data has been extracted. The file gprof.gdb contains two other macros which may prove useful. gprof_fetch extracts the profiling data and generates the file gmon.out, but does not reset the counters. gprof_reset only resets the counters, without extracting the data or overwriting gmon.out. If the configuration includes a TCP/IP stack then the profiling data can be extracted using tftp instead. There are two relevant configuration options. CYGPKG_PROFILE_TFTP controls whether or not tftp is supported. It is enabled by default if the configuration includes a TCP/IP stack, but can be disabled to save target-side resources. CYGNUM_PROFILE_TFTP_PORT controls the UDP port which will be used. This port cannot be shared with other tftp daemons. If neither application code nor any other package (for example the gcov test coverage package) provides a tftp service then the default port can be used. Otherwise it will be necessary to assign unique ports to each daemon. If enabled, the tftp daemon will be started automatically by profile_on. This should only happen once the network is up and running, typically after the call to init_all_network_interfaces. The data can then be retrieved using a standard tftp client.
There are a number of such clients available with very different interfaces, but a typical session might look something like this:

$ tftp
tftp> connect 10.1.1.134
tftp> binary
tftp> get gmon.out
Received 64712 bytes in 0.9 seconds
tftp> quit

The address 10.1.1.134 should be replaced with the target's IP address. Extracting the profiling data by tftp will automatically reset the counters.

Configuration Options

This package contains a number of configuration options. Two of these, CYGPKG_PROFILE_TFTP and CYGNUM_PROFILE_TFTP_PORT, relate to support for tftp transfers and have already been described. Support for collecting the call graph data via mcount is optional and can be controlled via CYGPKG_PROFILE_CALLGRAPH. This option will only be active if the HAL provides the underlying mcount support and implements CYGINT_PROFILE_HAL_MCOUNT. The call graph data allows gprof to produce more useful output, but at the cost of extra run-time and memory overheads. If this option is disabled then the -pg compiler flag should not be used. If CYGPKG_PROFILE_CALLGRAPH is enabled then there are two further options which can be used to control memory requirements. Collecting the data requires two blocks of memory, a simple hash table and an array of arc records. The mcount code uses the program counter address to index into the hash table, giving the first element of a singly linked list. The array of arc records contains the various linked lists for each hash slot. The required number of arc records depends on the number of function calls in the application. For example if a function Proc_7 is called from three different places in the application then three arc records will be needed. CYGNUM_PROFILE_CALLGRAPH_HASH_SHIFT controls the size of the hash table. The default value of 8 means that the program counter is shifted right by eight places to give a hash table index.
Hence each hash table slot corresponds to 256 bytes of code, and for an application with say 512K of code profile_on will dynamically allocate an 8K hash table. Increasing the shift size reduces the memory requirement, but means that each hash table slot will correspond to more code and hence mcount will need to traverse a longer linked list of arc records. CYGNUM_PROFILE_CALLGRAPH_ARC_PERCENTAGE controls how much memory profile_on will allocate for the arc records. This uses a simple heuristic, a percentage of the overall code size. By default the amount of arc record space allocated will be 5% of the code size, so for a 512K executable that requires approximately 26K. This default should suffice for most applications. In exceptional cases it may be insufficient and a diagnostic will be generated when the profiling data is extracted.

Implementing the HAL Support

The profiling package requires HAL support: a function hal_enable_profile_timer and an implementation of mcount. The profile timer is required. Typically it will be implemented by the variant or platform HAL using a spare hardware timer, and that HAL package will also implement the CDL interface CYGINT_PROFILE_HAL_TIMER. Support for mcount is optional but very desirable. Typically it will be implemented by the architectural HAL, which will also implement the CDL interface CYGINT_PROFILE_HAL_MCOUNT.

#include <pkgconf/system.h>
#ifdef CYGPKG_PROFILE_GPROF
# include <cyg/profile/profile.h>
#endif

int hal_enable_profile_timer(int resolution)
{
    …
    return actual_resolution;
}

This function takes a single argument, a time interval in microseconds. It should arrange for a timer interrupt to go off after every interval. The timer VSR or ISR should then determine the program counter of the interrupted code and register this with the profiling package:

    …
    __profile_hit(interrupted_pc);
    …

The exact details of how this is achieved, especially obtaining the interrupted PC, are left to the HAL implementor.
The HAL is allowed to modify the requested time interval because of hardware constraints, and should return the interval that is actually used. mcount can be more difficult. The calls to mcount are generated internally by the compiler and the details depend on the target architecture. In fact mcount may not use the standard calling conventions at all. Typically implementing mcount requires looking at the code that is actually generated, and possibly at the sources of the appropriate compiler back end. The HAL mcount function should call into the profiling package using standard calling conventions:

    …
    __profile_mcount((CYG_ADDRWORD) caller_pc, (CYG_ADDRWORD) callee_pc);
    …

If mcount was invoked because main called Proc_1 then the caller pc should be an address inside main, typically corresponding to the return location, and the callee pc should be an address inside Proc_1, usually near the start of the function. For some targets the compiler does additional work, for example automatically allocating a per-function word of memory to eliminate the need for the hash table. This is too target-specific and hence cannot easily be used by the generic profiling package.
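The memory-sizing rules described above are simple arithmetic. As an illustration only (the function names here are ours, not part of the eCos API, and the 4-byte hash-slot size is an assumption about pointer width), the example figures used in this document can be checked in Python:

```python
def flat_profile_bytes(code_size, bucket_size):
    """Size of the flat-profile array: one 16-bit counter per bucket of code."""
    n_buckets = (code_size + bucket_size - 1) // bucket_size  # round up
    return n_buckets * 2  # two bytes per counter

def callgraph_hash_bytes(code_size, hash_shift, slot_size=4):
    """Size of the call-graph hash table: one slot per 2**hash_shift bytes of code."""
    n_slots = (code_size + (1 << hash_shift) - 1) >> hash_shift
    return n_slots * slot_size  # slot_size is an assumption (pointer-sized slots)

# 512K of code, bucket size 16 -> the 64K counter array mentioned in the text
print(flat_profile_bytes(512 * 1024, 16))   # 65536
# 512K of code, hash shift 8 -> the 8K hash table mentioned in the text
print(callgraph_hash_bytes(512 * 1024, 8))  # 8192
```

This also shows why increasing the bucket size or the hash shift trades memory for accuracy: both divide the code range into coarser pieces.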
http://doc.ecoscentric.com/ref/gprof.html
Python: convert all sheets of Excel to CSV

Q: I am using the following code; the problem with this code is that it takes data only from the first sheet and not other sheets. I have 4 sheets and want them to be converted to csv too. Please help.

A: You will have to parse through the sheets and then get the data from each one. Refer to the following code.
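The answer's code is not preserved in this copy, so here is a hedged sketch of the approach using the pandas library (a third-party package, assumed installed; the helper name `csv_name`, the output naming scheme, and the file `workbook.xlsx` are illustrative, not from the original thread):

```python
import os

def csv_name(xlsx_path, sheet_name):
    """Build an output CSV name like 'book_Sheet1.csv' (naming is illustrative)."""
    base = os.path.splitext(os.path.basename(xlsx_path))[0]
    return "%s_%s.csv" % (base, sheet_name)

def convert_all_sheets(xlsx_path):
    """Convert every sheet of an Excel workbook to its own CSV file."""
    import pandas as pd  # third-party; assumed available
    # sheet_name=None makes read_excel return a dict of {sheet name: DataFrame},
    # so no sheet is silently skipped
    sheets = pd.read_excel(xlsx_path, sheet_name=None)
    for name, frame in sheets.items():
        frame.to_csv(csv_name(xlsx_path, name), index=False)

if __name__ == "__main__" and os.path.exists("workbook.xlsx"):
    convert_all_sheets("workbook.xlsx")  # hypothetical input file
```

The key point, matching the answer, is iterating over all sheets rather than reading only the default first one.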
https://www.edureka.co/community/37809/python-convert-all-sheets-of-excel-to-csv
See also: IRC log <trackbot> Date: 20 June 2011 <bblfish> agenda <bblfish> odd cut me off <bblfish> today I have to leave a bit early today <bblfish> perhaps we stop five minutes before the end of the hour <MacTed> ... but you see no-one on IRC? strange. <bblfish> member:Zakim, who's noisy? <MacTed> and there's the answer... Oh? <bblfish> who wants to scribe? I'll do it there is enough people today to take up the proposed agenda <bblfish> +1 <MacTed> +0 I wasn't there... <bergi> +1 +1 <bblfish> RESOLVED: Minutes approved <bblfish> agendum next we had a lot of people around the table during the workshops and of course, the WebID XG meeting danbri was "late" we had several good discussions during 3rd of June <bblfish> danbri was not that late <bblfish> it was great he came memory is just like linked data -> distributed everybody remembers bits therefore people should fill in the wiki page with stuff that's missing there's a missing discussion on the wiki, between bergi and bblfish <bblfish> yes, I can tell if people have not done this because there are bits where people should fill in what they talked about bblfish will add it at some point <danbri> I arrived exactly when I intended to arrive!
correction, the discussion is there now <bblfish> Bergi discussion is not missing <danbri> (which was as soon as possible, given flight from AMS:) <danbri> nice to be there, however briefly yes, it was great the picture danbri drew during the XG meeting should be discussed a little there's other content which can go down another line <bblfish> we are discussing that now <bblfish> very important picture there was consensus from everyone that there should be a core on which we should concentrate http vs https, syntax as well <bblfish> there was even talk of MUST requirements for the core the people representing other protocols accepted that the WebID spec doesn't have to contain a predefined list of protocols at this point there MUST be a core <bblfish> so my understanding was that there was consensus on there being a CORE small list of syntax, uri, and ontologies bblfish does not really agree with using MUST in the spec, as tests can cover exceptions <bblfish> Question: A. do we all agree there is the core? B: that it is rdf/xml, rdfa + cert/rsa ontology + http/https (ftp..) <bblfish> A:+ B:+ but not necessarily with a MUST in the spec - though we can add it later. More of a focus on these things first <scor> +1 +1 A and B <scor> +1 A and B <bblfish> have I just disappeared? <scor> proposal: remain syntax agnostic, as long as there is a way to get RDF out of it Question: is rdf/xml the best syntax to use on the long run?
<scor> so RDFa, RDF/XML, etc fit in this category <scor> I disagree with RDF/XML <bergi> +1 A and B no ftp most of us have working rdf/xml representations <scor> Deiu: I disagree, I work primarily with RDFa scor, I'm scribing :-) <scor> we've had very long and controversial discussions on the syntax in the group prior to joining the XG, and we could not settle on one syntax, that's why we have both at the moment we have consensus that we don't want to use everything from the start scor, I meant that I'm also scribing (not always stating my personal thoughts) <bblfish> RESOLVED: consensus here and in Berlin on restricting ourselves for pragmatic purposes to a core, with HTTP and HTTPS, cert and rsa close to a MUST, and format still open it's important that we "synchronize" our implementations, and this is where tests can help a lot <bblfish> anything else anyone remembers us speaking of that morning? by synchronize I mean deciding on common formats/syntax that was it for the morning session <bblfish> So in afternoon we started talking about tests <scor> +1 for test driven dev during the afternoon there was a debate between Mike and bblfish whether or not to use test driven development Mike proposed the waterfall model, whereas bblfish proposes test driven dev <bblfish> <scor> and feedback the implementations to improve the spec is important too yes, exactly bergi describes his junit webid tests different test cases -> multiple URIs, keys, etc which are verified against endpoints cache tests <bblfish> webid mercurial repo: <bblfish> I added this these are tests that clients should implement a lot of feedback is needed at this point we should concentrate on tests during the next few months <bblfish> so we can prepare the final report <bblfish> and so we can improve the specs those of us currently developing WebID implementations should try to add tests <bblfish> these machine readable tests are quite easy to generate <bblfish> so I can point at the WebID
implementation I'll try to add something as well <bblfish> So do we have consensus to make this a priority? Is the methodology ok? personal note: tests are always welcome +1 from me <bblfish> +1 for me, but I implemented them already. <bblfish> perhaps we should have a wiki page for tests +1 for wiki page <bblfish> ACTION: wiki page for tests [recorded in] <trackbot> Sorry, couldn't find user - wiki <bblfish> ACTION: bblfish, wiki page for tests [recorded in] <trackbot> Sorry, couldn't find user - bblfish, <bblfish> ACTION: bblfish wiki page for tests [recorded in] <trackbot> Created ACTION-30 - Wiki page for tests [on Henry Story - due 2011-06-27]. bblfish would like to add GRDDL support but it's complicated right now <bblfish> Request cert & rsa ontologies bblfish proposes moving the two namespaces into one <bblfish> Question is moving cert & rsa into one namespace <scor> hugely in favor <bblfish> +1 <scor> bblfish: could DSA be factored in the single vocab as well? <bergi> +1 <bblfish> yes <scor> what would be the name of the ontology? cert? <bblfish> cert: <scor> I can scribe <bblfish> html rdfa of those ontologies like that foaf ontology <bblfish> who would like to take on as a task <bblfish> ACTION: scor to lead on rdfa-izing the cert/rsa ontologies [recorded in] <trackbot> Created ACTION-31 - Lead on rdfa-izing the cert/rsa ontologies [on Stéphane Corlosquet - due 2011-06-27]. <bblfish> by the Scala in the w3c <bblfish> trackbot, end meeting

Present: bblfish, MacTed, +49.874.aabb, bergi, +1.781.866.aacc (scribe: Deiu)
People with action items: bblfish, scor
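The single merged cert: namespace resolved in these minutes can be sketched in Turtle. This is a hedged illustration only: the class and property names follow the W3C cert vocabulary as later published at http://www.w3.org/ns/auth/cert#, and the key values are made up:

```turtle
@prefix cert: <http://www.w3.org/ns/auth/cert#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

<https://example.org/profile#me> cert:key [
    a cert:RSAPublicKey ;   # the RSA key class lives in cert:, no separate rsa: namespace
    cert:modulus "00cb24ed85d64d79"^^xsd:hexBinary ;   # truncated, illustrative value
    cert:exponent 65537
] .
```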
http://www.w3.org/2011/06/20-webid-minutes.html
- MooseX::Struct
- Version
- Synopsis
- Description
- Exports
- Compile-time declaration of a struct
- Attributes
- Types
- Notes
- Author
- Bugs
- Support
- Acknowledgements

MooseX::Struct

MooseX::Struct - Struct-like interface for Moose Object creation

Version

Version 0.06

Synopsis

use MooseX::Struct;

struct 'MyClass::Foo' => (
   bar => 'Scalar',
   baz => 'Array',
);

my $obj = new MyClass::Foo;

$obj->bar(44);   # sets $obj->{bar} to 44
print $obj->bar; # prints 44

### or

package MyClass::Foo;
use MooseX::Struct;

### This will default to the current package : 'MyClass::Foo'
struct (
   bar => 'Scalar',
   baz => 'Array',
);

### or create your struct at compile-time
use MooseX::Struct 'MyClass::Foo' => (
   bar => 'Scalar',
   baz => 'Array',
);

### Immutable Moose Objects

package MyClass::Foo;
use MooseX::Struct;

immutable struct (
   bar => 'Scalar',
   baz => 'Array',
);

Description

This module is a reimplementation of the core Class::Struct package for the Moose Object System. The original Class::Struct is a very useful package but offers little to no extensibility as soon as you outgrow its features. For the Class::Struct users: This is not a drop-in replacement (though for most common cases, it is a drop-in replacement), it works somewhat differently and has different performance concerns. For Moose users: This can be used as an alternate way to create Moose objects. All exports that normally come from 'use Moose' are exported to the specified package, or the current package if none given (unless the current package is 'main'). A lot of this package passes off work to Moose and Class::MOP, so both of those should be considered good reading recommendations.

Exports

MooseX::Struct exports two functions, struct and immutable, to the caller's namespace.

immutable

immutable() is a convenience method that takes in a class name and calls CLASS->meta->make_immutable().
Since struct() returns the class name of the object it just defined, you can write out very nice looking code such as:

immutable struct 'MyClass' => ( class definition );

struct

The struct function can be passed parameters in four forms, but they all boil down to:

struct( ['Class Name',] %hash|$hashref );

Omitting the 'Class Name' argument allows MooseX::Struct to default to the current package's namespace. Because you do not need parentheses for predefined functions and the => is a synonym for ,, the above can be written in a more attractive way:

struct 'My::Class' => ( attribute => 'type', );

Thus the following three forms are:

struct 'My::Class' => { attribute => 'type', };
struct ( attribute => 'type', );
struct { attribute => 'type', };

The last two would default to the current package name.

Compile-time declaration of a struct

Like Class::Struct, MooseX::Struct allows you to specify a class at compile time by passing the appropriate definition to MooseX::Struct at import. e.g.

use MooseX::Struct 'My::Class' => ( attribute => 'type', );

Again, like Class::Struct, there is no real time savings, but you do get a more logical flow of events and it does get all of the hard work done at startup.

Attributes

Attributes all take the form of a hash key/value pair with the hash key being the name of the attribute and the default name of the accessor, and the value being a predefined type alias (see below). All attributes are read/write by default (is => 'rw'). Advanced attributes can be made by specifying a hashref of acceptable attribute specifications (see Class::MOP::Attribute) instead of a type alias, e.g.

struct 'My::Class' => (
   foo => 'Scalar',
   bar => { accessor => 'quux' },
   baz => { is => 'ro', reader => 'get_baz', [etc] },
);

Note / Warning / Not a bug

Multiple attributes can be declared at once in an array reference while being defined within parentheses as opposed to curly brackets (i.e., as a standard array of arguments as opposed to a hash / hash reference).
This is due to perl stringifying references in order to use them as hash keys and the fact that perl can't dereference them after that happens.

Types

These are used to constrain an attribute's value to a certain data type (isa => 'Type'). Types are case-insensitive for matching purposes, but you can specify a type that is not listed here and it will be passed through unchanged to Moose::Meta::Class / Class::MOP::Class. So if you are familiar with advanced types or have created your own type constraints, you can still use MooseX::Struct.

+----------------+-----------------------+
| MooseX::Struct | Moose/Class::MOP type |
+----------------+-----------------------+
| ''             | [No type constraint]  |
| ?              | [No type constraint]  |
| any            | [No type constraint]  |
| ro             | [Read Only - No Type] |
| rw             | [Read/Write - No Type]|
| !              | Bool                  |
| #              | Num                   |
| 1              | Int                   |
| $              | Value                 |
| *$             | ScalarRef             |
| @              | ArrayRef              |
| *@             | ArrayRef              |
| %              | HashRef               |
| *%             | HashRef               |
| &              | CodeRef               |
| *              | GlobRef               |
| w              | Str                   |
| rx             | RegexpRef             |
| int            | Int                   |
| integer        | Int                   |
| number         | Num                   |
| scalar         | Value                 |
| scalarref      | ScalarRef             |
| array          | ArrayRef              |
| arrayref       | ArrayRef              |
| hash           | HashRef               |
| hashref        | HashRef               |
| bool           | Bool                  |
| boolean        | Bool                  |
| glob           | GlobRef               |
| regex          | RegexpRef             |
| string         | Str                   |
+----------------+-----------------------+

Notes

strict and warnings are imported automatically

By issuing a use MooseX::Struct, same as with using Moose, strict and warnings are automatically imported into the calling package.

Differences from Class::Struct

The accessors that are created for each attribute are simple read / write accessors. They will attempt to assign any passed value to the attribute, and they will return the whole value on access.

# For an object 'foo' with an attribute 'bar' of type ArrayRef:
$foo->bar([1,2,3]); # sets bar to [1,2,3]
$foo->bar;          # returns [1,2,3]
$foo->bar(0);       # Attempts to set bar to 0 and errors out because
Class::Struct would # have given you the element at index 0; $foo->bar->[0] # Correct The types have been changed and extended. There are no '%' or '@' types that indicate 'Hash' and 'Array,' respectively. Both of those symbols now refer to the reference of the type. Author Jarrod Overson, <jsoverson at googlemail.com> Bugs Of course there could be bugs with use cases I hadn't thought of during testing, but most of this module's work passes off to Class::MOP or Moose, so if you find a bug, please do some testing to determine where the actual bug is occurring. Please report any bugs or feature requests to bug-moosex-struct::Struct You can also look for information at: RT: CPAN's request tracker AnnoCPAN: Annotated CPAN documentation CPAN Ratings Search CPAN Acknowledgements Thanks to everyone who worked on Class::Struct for providing us a very clean interface for creating intuitive, logical data structures within perl. And thanks to everyone who has worked on Moose for providing a somewhat complicated method of creating extremely powerful and extensible data structures within perl. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
https://metacpan.org/pod/MooseX::Struct
CC-MAIN-2016-44
refinedweb
1,081
51.92
Brewing in Beats: Mysqlbeat and HWsensorsbeat from the community

Welcome to Brewing in Beats! With this series, we're keeping you up to date with all that's new in Beats, from the details of work in progress pull requests to releases and learning resources. Last week we've released Beats 1.2.2. The blog post is here.

New community Beat: HWsensorsbeat

Created by Jasper Lievisse Adriaanse, HWsensorsbeat reads HW sensors information through sysctl(3). It's already part of OpenBSD ports:

New community Beat: Mysqlbeat

Created by Adi Ben-Dahan, Mysqlbeat is a fully customizable Beat for MySQL server. This beat can also ship the results of any query defined in the config file to Elasticsearch. It comes with a really cool Kibana dashboard.

Update Elasticsearch template for Beats

Starting with Elasticsearch 5.0.0-alpha1, string is replaced with the text/keyword datatypes: text is for full text, and keyword is for keyword identifiers. The string datatype will continue to work during the 5.x series, but it will be removed in 6.0. We updated the Elasticsearch templates for all Beats, including the script that generates them, to use the keyword datatype instead of string. In addition, the old Elasticsearch template is still available under *.template-es2x.json, which works for Elasticsearch 2.x and 5.x versions.

Simplifying Filebeat state logic

As Filebeat gets more features, it is also important to refresh and simplify its architecture so that it can be maintained in the long term. Nicolas spent some time last week to decouple and simplify the communication between the harvesters, the prospectors, and the registrar.

Fixed a Filebeat JSON bug

If the decoded JSON contained "null" values this would travel through Filebeat and it could break the generic filtering code. This was fixed by hardening the normalization code that is executed before filtering is executed.

Fixed a Winlogbeat bug related to large events

Fixed a panic that occurred when reading large events on Windows Vista and newer.
The bug occurred in an error recovery path, so in order for the panic to occur, first there had to have been an error rendering the event as XML with the event message string. When that error occurs Winlogbeat tries to render the event as XML, but without the message string. If the XML was larger than half the buffer size a panic would occur.

Rename the filter configuration sections to filters

This is just a cosmetic change in libbeat, but it breaks the compatibility with 5.0.0-alpha1. The filters configuration section includes a list of filtering rules that are applied one by one in the order they are defined.

Generate a single template for Metricbeat

Each Metricbeat module exports fields whose type needs to be defined in Elasticsearch via a template. Instead of loading a template for each Metricbeat module, the pull request collects all exported fields and their datatypes from each Metricbeat module and generates a single template for Metricbeat.

Use ucfg to validate the Beats configuration files

The Beats are using the ucfg library, written by Steffen, to deal with configuration file management. As simple configuration errors are often the source of troubles, Steffen greatly improved the validation of the configuration files, which are now checked before the Beat starts.

Introduce namespacing in the Packetbeat configuration file

As we're moving towards having all Beat specific settings namespaced with their Beat name (e.g. packetbeat.interfaces), Packetbeat now accepts packetbeat.interfaces and packetbeat.protocols instead of interfaces and protocols as its top-level keys. The change breaks compatibility with 5.0.0-alpha1.
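As a sketch of the two configuration changes described above, a 5.0-era Packetbeat file would use the namespaced top-level keys and a filters list. Only the key names `packetbeat.interfaces`, `packetbeat.protocols`, and `filters` come from the post; the protocol settings and rule names below are illustrative assumptions:

```yaml
# Namespaced top-level keys (was: interfaces, protocols)
packetbeat.interfaces.device: any
packetbeat.protocols.http:
  ports: [80, 8080]           # illustrative

# "filters" section: rules applied one by one, in order
filters:
  - include_fields:           # illustrative rule name
      fields: ["http.code", "http.phrase"]
```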
https://www.elastic.co/blog/brewing-in-beats-mysqlbeat-hwsensorbeat
CC-MAIN-2019-09
refinedweb
599
54.93
Rose::DBx::Garden::Catalyst - plant Roses in your Catalyst garden # create a Catalyst app > catalyst.pl MyApp # create a Rose::DBx::Garden::Catalyst script > cat mk_cat_garden.pl use Rose::DBx::Garden::Catalyst; use MyDB; # isa Rose::DB my $garden = Rose::DBx::Garden::Catalyst->new( catalyst_prefix => 'MyApp', controller_prefix => 'RDGC', garden_prefix => 'MyRDBO', db => MyDB->new, tt => 1, # make Template Toolkit files ); $garden->plant('MyApp/lib'); # run your script > perl mk_cat_garden.pl # edit your MyApp.pm file: > vi MyApp/lib/MyApp.pm # serve static assets use Catalyst qw/ Static::Simple::ByClass /; __PACKAGE__->config( 'Plugin::Static::Simple::ByClass' => { classes => [qw( CatalystX::CRUD::YUI::TT )], } ); # after __PACKAGE__->setup(); # add these lines: use MRO::Compat; use mro 'c3'; Class::C3::initialize(); # start your Catalyst dev server > cd MyApp > perl script/myapp_server.pl # enjoy the fruits at Rose::DBx::Garden::Catalyst extends Rose::DBx::Garden to create Catalyst component scaffolding. These created components use the RDBO and RHTMLO classes that the Garden class produces. The base Controller, Model and View classes are simple subclasses of CatalystX::CRUD::YUI. This module works on the assumption that 1 table == 1 form == 1 controller class == 1 model class. By default this class creates stub Template Toolkit files for use with the RDBO and RHTMLO CRUD components. If you use a different templating system, just set the tt option to 0. Only new or overridden methods are documented here. The namespace where Catalyst Controllers are created. Will also be lowercased and serve as the URI path namespace for all RDGC actions. Default: RDGC Override the base method to create methods useful to RDBO classes used in Catalyst. Custom base Form code to implement features that template will require. Defaults to 'MyApp'. Returns controller_prefix() transformed to a file path. 
Override the base method to create Catalyst-related files in addition to the basic Garden files. Does the actual file creation of Catalyst files. Called by plant(). class_names should be a hash ref of RDBO => RHTMLO class names, as returned by Rose::DBx::Garden->plant(). If you have existing RDBO and RHTMLO classes that have namespaces inconsistent with the conventions in Rose::DBx::Garden, they should still work. Just map the RDBO => RHTMLO classes in your class_names hash ref. Peter Karman, <karman at cpan.org> Should be straightforward since the Garden now puts column-type as the xhtml class value. Need a way to reliably test the JS. Known issues: This is due to issues with @INC and how the RDBO Loader requires classes. There is no known workaround at the moment. The TT templates generated depend heavily on the YUI toolkit. Graceful degradation is not implemented as yet. Please report any bugs or feature requests to bug-rose-dbx-garden-catalyst::Catalyst You can also look for information at: The Minnesota Supercomputing Institute sponsored the development of this software. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/Rose-DBx-Garden-Catalyst/lib/Rose/DBx/Garden/Catalyst.pm
CC-MAIN-2013-20
refinedweb
486
50.33
#include <algorithm> //for for_each() std::for_each (vt.begin(), vt.end(), &Task::show_pid); Step 3: Using a Function Adapter Fortunately, you don't really need a fourth argument because the member function show_pid() should be called for every object in the range [vt.begin(), vt.end()). But how do you tell for_each() to do this? The Standard Library also defines a set of function adapters that bind a member function to an object and return a matching function object. For example, std::mem_fun_ref() takes a member function's address and binds it to an object's reference, which is exactly what you need: std::for_each (vt.begin(), vt.end(), std::mem_fun_ref(&Task::show_pid)); Notice that the results of this example and the previous for-loop are identical. The benefit of using for_each() is maintenance ease and improved readability.
http://www.devx.com/getHelpOn/10MinuteSolution/19911/0/page/3
CC-MAIN-2015-14
refinedweb
167
57.77
On Fri, 2005-01-28 at 14:42 +0530, Prasad PS wrote: > Hi Luis, > I too have followed the second choice but what happened was, when I add > the root document to the xml file, I find the previous content and the > combination of the previous and the new content in the file and moreover > xml declarator is appearing twice. You don't give enough information about your problem for anyone to diagnose. Here is an example of what you described in your original message: $ python Python 2.3.2 (#1, Dec 8 2003, 07:49:35) [GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from amara import binderytools >>> doc = binderytools.bind_string(XML) >>> doc.a.xml_append(doc.xml_element(None, u'b')) #None is the namespace >>> doc.xml() '<?xml version="1.0" encoding="UTF-8"?>\n<a><b/><b/><b/></a>' >>> print doc.xml() <?xml version="1.0" encoding="UTF-8"?> <a><b/><b/><b/></a> >>> Works fine. If you tried something similar and it didn't work, let us know the details of what you tried and what went wrong. Thanks. -- -
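For comparison — and this is an assumption, since the thread is about the Amara bindery tools — the same append-then-serialize flow with the standard library's ElementTree also produces a single root and no duplicated content:

```python
import xml.etree.ElementTree as ET

# Parse an existing document and append a new child to its root element.
doc = ET.fromstring("<a><b/><b/></a>")
doc.append(ET.Element("b"))

# Serializing emits one well-formed tree; nothing is duplicated.
print(ET.tostring(doc, encoding="unicode"))
```

If content really appears twice in a file, the usual cause is writing the new serialization after the old file contents instead of overwriting them (e.g. opening the output file in append mode).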
https://mail.python.org/pipermail/xml-sig/2005-January/010864.html
CC-MAIN-2016-44
refinedweb
196
74.19
W3C/INRIA 2004, route des Lucioles BP 93 Sophia Antipolis Cedex France Keywords: XSLT, Transformations Abstract This paper describes the use of XSLT in conjunction with SVG. Various techniques and usage scenarios are presented, and the advantages of each approach are also explained, showing the benefits of combining both languages. SVG has been designed for interoperability with other W3C specifications, such as XLink and CSS. In particular the XSL Transformations (XSLT) Recommendation is a valuable tool for generating SVG graphics. This paper discusses techniques that can be used to combine SVG and XSLT and illustrates them through various examples. Part I provides an overview of the XSLT language. Originally designed for the Extensible Stylesheet Language (XSL), XSLT is now widely used as a general transformation language, which can transform any document of a given XML type into another of a different type. XML-to-HTML transformations are increasingly popular with the inclusion of XSLT processors in major Web browsers. SVG can also be generated via an XSLT transformation, providing a way to generate graphical representations of a wide range of XML data. Part II describes popular techniques for generating SVG using an XSLT transform from XML data, producing a visual presentation of it. Examples of applications include reporting statistics from structured data via graphical charts, rendering mathematical equations from MathML content or displaying molecule diagrams from Chemical Markup Language documents. Various strategies exist for defining and applying transforms that generate SVG, which are shown here. The advantages of each approach are also explained, showing all the benefit of combining XSLT and SVG, such as the separation of concerns or the definition of constraints in a graphic. 
Part III describes concrete instances where XSLT and SVG are associated, showing how XSLT can perform complex operations in a simple and concise way, and proving that it provides a valid alternative to using standard programming languages and the DOM. The first example explains how to write a transform to generate an animated board from a chess game expressed in XML format (using ChessGML). The second shows how to generate animated statistical graphs from code metrics. Originally, XSL was designed to add style to an XML document using Formatting Objects (FO), an XML vocabulary describing paged documents, with advanced formatting features such as vertical writing modes, widow/orphan control or automatic table of content generation. Early in the definition of XSL, the specification was split in two: XSL Transformations (XSLT), which defines the process to transform the XML instance into an FO, and XSL which only describes the semantics and rendering of each formatting object and property. XSLT was designed to be more general than for use within XSL, allowing the transformation of documents into documents of any XML type, not only FO. This allowed XSLT to be used in a much more general context, allowing users to transform XML documents into widely supported vocabularies, such as HTML. XSLT is a functional language, expressed in XML syntax. An XSLT transform (or stylesheet) is in essence a list of templates, each defining a mapping between an object from the source document (element, attribute, comment, etc.) to an object in the output document. On top of that, XSLT defines standard programming language constructs like conditionals and loops.
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:template match="...">
    <bar><xsl:apply-templates/></bar>
  </xsl:template>
  ...
</xsl:stylesheet>
As XSLT can produce any type of XML, SVG is a good choice as it makes it possible to produce graphical representations of documents.
<data>
  <datum>23</datum>
  <datum>94</datum>
  <datum>55</datum>
</data>
The simple stylesheet below transforms our data into a histogram in SVG. Of course the stylesheet can process any input file of that type, with any number of data. Stylesheet:
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:template match="/">
    <svg xmlns="http://www.w3.org/2000/svg">
      <xsl:for-each select="data/datum">
        <rect x="{10*position()}" y="{100 - .}" width="10" height="{.}" fill="red" stroke="black"/>
      </xsl:for-each>
    </svg>
  </xsl:template>
</xsl:stylesheet>
Result: As the example above shows, XPath expressions can be included within SVG markup, for computing the value of numeric attributes from values in the source documents. The value of the attribute is now an XPath Attribute Value Template, i.e. an XPath expression delimited by curly brackets in the output markup.
<xsl:stylesheet ...>
  <xsl:template ...>
    <svg ...>
      <rect x="{/data/datum * 10}" .../>
      ...
    </svg>
  </xsl:template>
</xsl:stylesheet>
Using the XSLT construct called "Literal Result Element as Stylesheet", it is even possible to simplify a stylesheet further, to make it look somewhat like the destination document. Below is our example rewritten in this manner:
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xsl:version="1.0">
  <xsl:for-each select="data/datum">
    <rect x="{10*position()}" y="{100 - .}" width="10" height="{.}" fill="red" stroke="black"/>
  </xsl:for-each>
</svg>
Simplifying even further, it is possible to reduce the use of XSLT/XPath in an SVG file to the use of XSLT variables and XPath expressions only. The example below shows that by defining variables in the document and by referring to them further down in the definition of primitives, it is possible to simplify the process of laying out a graphic by making the attributes of primitives depend on the value of those variables. Here, two circles are defined as well as a line connecting their centres. Modifying the graphic by changing the value of variables ensures that constraints are met, in this case that the line connects the centers of the circles.
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xsl:version="1.0">
  <xsl:variable name="x1" select="..."/>
  <xsl:variable name="y1" select="..."/>
  <xsl:variable name="x2" select="..."/>
  <xsl:variable name="y2" select="..."/>
  <circle cx="{$x1}" cy="{$y1}" r="1" .../>
  <circle cx="{$x2}" cy="{$y2}" r="1" .../>
  <line x1="{$x1}" y1="{$y1}" x2="{$x2}" y2="{$y2}" .../>
  ...
</svg>
Using this subset of XSLT/XPath has the consequence that an input document is no longer necessary to the stylesheet, as it is now self-referential. This means that a complete implementation of XSLT/XPath is not necessary and that the variable/expression mechanism could be included directly in the SVG renderer. The Batik SVG toolkit indeed supports it. XPath Attribute Value Templates can of course be used in any SVG construct, not only geometric attributes. As the example below shows, it can also be advantageous to use them in animation markup.
<animate id="move{$move}" begin="{$move - 1}.end" xlink:href="..." .../>
XSLT constructs can not only be used as AVTs in attributes but also in element content. This can be useful to refer to XSLT variables in scripts. This example shows how to insert the value of an XPath expression into ECMAScript code embedded in SVG:
<script type="text/ecmascript">
function circle_click(evt) {
  var circle = evt.target;
  circle.setAttribute("x", <xsl:value-of select="..."/> * 2);
}
</script>
Note that it is advised that code in <script> be included in a CDATA section, omitted here for clarity. We have seen that it can be interesting to use XSL stylesheets to express constraints, easily lay out graphics and manage animation element dependencies. How can this be done in practice and where does it apply? To apply XSL transformations, you can use specialized software (e.g., a command line utility) which will take an XML document as an input, the XSL stylesheet to apply and will generate the result into the desired file.
For example, the Xalan software package comes with a command-line utility which will let you apply an XSL transformation myStylesheet.xsl to an XML file myXMLFile.xml and output the result in result.svg as follows: xalan -in myXMLFile.xml -xsl myStylesheet.xsl -out result.svg However, there are many environments where it is not an option to use a stand-alone software package to perform XSL transformations and where using a software library is preferable. For example, the Java programming language defines a standard API for applying transforms to XML content (the javax.xml.transform package) and this API is part of the JDK 1.4 distribution. The following lines of code apply a transformation to a DOM Document object: import org.w3c.dom.Document; import javax.xml.transform.TransformerFactory; import javax.xml.transform.Transformer; import javax.xml.transform.dom.DOMSource; import javax.xml.transform.stream.StreamSource; import javax.xml.transform.stream.StreamResult; import java.io.FileOutputStream; .... TransformerFactory tFactory = TransformerFactory.newInstance(); String stylesheetId = ...; Transformer transformer = tFactory.newTransformer(new StreamSource(stylesheetId)); Document doc = ...; // XML DOM to transform transformer.transform (new DOMSource(doc), new StreamResult(new FileOutputStream(outFileName))); ... The above example is rather 'techy', but it is important to realize that the ability to integrate transformations in applications means that stylesheets can be applied in all kinds of contexts, client-side or server side. For example, it is easy to write a server component which would take a 'template' SVG stylesheet and populate it with custom information. Postcards are an example. Another example would be a business card which can be transformed dynamically to incorporate a user's information. A snippet of what such a stylesheet could look like: <svg ....> ... <text ...><xsl:value-of select="..."/></text> ... </svg> <game type="chess" variant="classic"> ... <moves ply- <mp> <!-- 1.
e4 e5 --> <m c="w"><p c="w" n="p"/><e2/><e4/></m> <m c="b"><p c="b" n="p"/><e7/><e5/></m> </mp> ... Used by chess software but not easily readable otherwise. And it is not understood by browsers. The stylesheet first generates the initial chess board. It also defines each chess piece as an SVG font glyph (using the proper Unicode codepoints). Then, for each move, SMIL Animation elements are generated. For example, black's second move (2. ...exf4) produces the following SMIL Animation markup: <animate id="move3" begin="move2.end" xlink:href="..." .../> <animate xlink:href="..." .../> <set xlink:href="..." .../> The first two lines describe the black pawn's move, and the third specifies that after the pawn has moved, the white pawn that has just been taken must become invisible. This example uses XSL techniques similar to the ones used in the ChessML demonstration. However, XML files and XSL documents were used in a slightly different way than in ChessML. In the ChessML example, the SVG document was created by applying an XSL transformation directly to an XML source. The Code Metrics example comes out of the Batik open-source project, where developers extract metrics from their source code. An input XML document defines the category to which each software component belongs. The tool generates metrics on the size of each component following a three-step process: When it comes to generating SVG content, a number of options are possible. SVG content can be generated through the DOM API, by technologies such as PHP, CGI scripts or Active Server Pages. What are the advantages of using XSL? First and foremost, XSL is declarative as it is itself an XML syntax. Therefore, it is a natural fit with all the other XML technologies which are so prominent today. Second, XSL is ubiquitous in the sense that it can be applied in various contexts: client-side or server side. There is no requirement as to where the transformation applies.
Third, XSL provides an elegant way to express constraints (graphical constraints such as related element widths, layout constraints such as element positioning or time constraints such as dependencies between animation elements) easily, and that is handy to generate some types of graphics (such as graphs). In summary, XSL transformations generating SVG can be applied to simple or complex input structures and produce simple or complex SVG content. There are a couple of limitations to mention relative to the previous slide. First, the constraints between graphical elements which can be expressed in an XSL stylesheet do not survive the transformation. Once the XSL transformation has been applied (and the constraints enforced), the constraints are no longer enforced if the SVG content changes. For example, imagine the following XSL snippet: <rect x="{$precedingRectX + 30}" y="..." ... /> which could produce an SVG with the following rectangles: <rect x="0" y="..." .... /> <rect x="30" y="..." ... /> <rect x="60" y="..." ... /> The rectangle at the x position '30' was originally constrained by the position of the first rectangle. Similarly, the rectangle at position '60' was constrained by the second rectangle. However, modifying the position of the first rectangle through scripting (e.g., setting the first rectangle's x attribute to '20') will not constrain the x attributes on the second and consequently third rectangles to change (i.e., they will not become '50' and '70' respectively). A second limitation to the XSL and SVG 1.0 combination is related to text flow. SVG 1.0 does not flow text and this is a problem for XSL transformations which deal with arbitrary text for which it is difficult to compute flow. Line-breaking is a very difficult issue that is not easily (or not properly) solved by XSL transformations.
The SVG working group has recognized that offering a solution to flow text is an important need and is working on adding it to the SVG recommendation (version 1.2).
http://www.w3.org/People/maxf/papers/2002-07-SVGOpen/svgopen.html
CC-MAIN-2015-35
refinedweb
2,123
54.93
What is the scope of Javascript variables? Discussion in 'Javascript' started by Trev, Oct 29.
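A short sketch of what the thread is asking about: in the JavaScript of that era, var declarations are scoped to the enclosing function (not the enclosing block), and variables declared outside any function are global:

```javascript
var site = "global"; // declared outside any function: global scope

function demo() {
  var x = 1;
  if (true) {
    var x = 2; // same variable: var is function-scoped, not block-scoped
  }
  return x; // returns 2, because the inner var re-used the outer variable
}

console.log(demo(), site);
```

Assigning to an undeclared name inside a function also creates a global, which is a common source of scope bugs in old-style JavaScript.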
http://www.thecodingforums.com/threads/what-is-the-scope-of-javascript-variables.927833/
CC-MAIN-2015-22
refinedweb
122
73.1
C Programming Strings In C programming, an array of characters is called a string. A string is terminated by a null character \0. For example: "c string tutorial" Here, "c string tutorial" is a string. When the compiler encounters a string, it appends a null character \0 at the end of the string. Declaration of strings Before we actually work with strings, we need to declare them first. Strings are declared in a similar manner as arrays. The only difference is that strings are of char type. Using arrays char s[5]; Using pointers Strings can also be declared using a pointer. char *p; Initialization of strings In C, a string can be initialized in a number of different ways. For convenience and ease, both initialization and declaration are done in the same step. Using arrays char c[] = "abcd"; OR, char c[50] = "abcd"; OR, char c[] = {'a', 'b', 'c', 'd', '\0'}; OR, char c[5] = {'a', 'b', 'c', 'd', '\0'}; Using pointers A string can also be initialized using pointers as: char *c = "abcd"; Reading Strings from user You can use the scanf() function to read a string like any other data type. However, the scanf() function only takes the first entered word. The function terminates when it encounters whitespace (or just a space). Reading words from user char c[20]; scanf("%s", c); Example #1: Using scanf() to read a string If the user enters "Dennis Ritchie", the program ignores Ritchie because scanf() takes only a single word before the whitespace, i.e. only Dennis is stored. Reading a line of text An approach to reading a full line of text is to read and store each character one by one. Example #2: Using getchar() to read a line of text 1. C program to read line of text character by character.
#include <stdio.h> int main() { char name[30], ch = 0; int i = 0; printf("Enter name: "); while(ch != '\n') // terminates if user hits enter { ch = getchar(); name[i] = ch; i++; } name[i] = '\0'; // inserting null character at end printf("Name: %s", name); return 0; } In the program above, using the function getchar(), ch gets a single character from the user each time; note that ch is initialized to 0 so that the first loop test is well defined. This process is repeated until the user enters return (enter key). Finally, the null character is inserted at the end to make it a string. This process of reading a string is tedious. Example #3: Using standard library function to read a line of text 2. C program to read line of text using gets() and puts() To make life easier, there are predefined functions gets() and puts() in C to read and display a string respectively. #include <stdio.h> int main() { char name[30]; printf("Enter name: "); gets(name); //Function to read string from user. printf("Name: "); puts(name); //Function to display string. return 0; } Both programs have the same output below: Output Enter name: Tom Hanks Name: Tom Hanks Passing Strings to Functions Strings are just char arrays. So, they can be passed to a function in a similar manner as arrays. Learn more about passing array to a function. #include <stdio.h> void displayString(char str[]); int main() { char str[50]; printf("Enter string: "); gets(str); displayString(str); // Passing string str to function. return 0; } void displayString(char str[]){ printf("String Output: "); puts(str); } Here, string str is passed from the main() function to the user-defined function displayString(). In the function declaration, str[] is the formal argument. String handling functions There are various string operations you can perform manually like: finding the length of a string, concatenating (joining) two strings etc. But, for programmer's ease, many of these library functions are already defined under the header file <string.h>. Learn more about string manipulations in C.
https://www.programiz.com/c-programming/c-strings
CC-MAIN-2016-50
refinedweb
606
71.75
Tutorial: make virtual machines available to your Azure Stack users As an Azure Stack cloud administrator, you can create offers that your users (sometimes referred to as tenants) can subscribe to. By subscribing to an offer, users can consume the Azure Stack services that an offer provides. This tutorial shows how to create an offer for a virtual machine, and then sign in as a user to test the offer. What you will learn: - Create an offer - Add an image - Test the offer In Azure Stack, services are delivered to users using subscriptions, offers, and plans. Users can subscribe to multiple offers. An offer can have one or more plans, and a plan can have one or more services. To learn more, see Key features and concepts in Azure Stack. Create an offer Offers are groups of one or more plans that providers present to users to purchase or subscribe to. The process of creating an offer has several steps. First, you're prompted to create the offer, then a plan, and finally, quotas. Sign in to the portal as a cloud administrator and then select + Create a resource > Offers + Plans > Offer. In New offer, enter a Display name and Resource name, and then select a new or existing Resource group. The Display name is the offer's friendly name. Only the cloud operator can see the Resource name which is the name that admins use to work with the offer as an Azure Resource Manager resource. Select Base plans, and in the Plan section, select Add to add a new plan to the offer. In the New plan section, fill in Display name and Resource name. The Display name is the plan's friendly name that users see. Only the cloud operator can see the Resource name which is the name that cloud operators use to work with the plan as an Azure Resource Manager resource. Select Services. From the list of Services, pick Microsoft.Compute, Microsoft.Network, and Microsoft.Storage. Choose Select to add these services to the plan. 
Select Quotas, and then select the first service that you want to create a quota for. For an IaaS quota, use the following example as a guide for configuring quotas for the Compute, Network, and Storage services. First, create a quota for the Compute service. In the namespace list, select Microsoft.Compute and then select Create new quota. In Create quota, enter a name for the quota. You can change or accept any of the quota values that are shown for the quota you're creating. In this example, we accept the default settings and select OK. Pick Microsoft.Compute in the namespace list, and then select the quota that you created. This links the quota to the Compute service. Repeat these steps for the Network and Storage services. When you're finished, select OK in Quotas to save all the quotas. In New plan, select OK. Under Plan, select the new plan and then Select. In New offer, select Create. You'll see a notification when the offer is created. On the dashboard menu, select Offers and then pick the offer you created. Select Change State, and then choose. You'll sign in as a user, subscribe to the offer, and then add a virtual machine. a. Sign in to the user portal with a user account and select the Get a Subscription tile. - For an integrated system, the URL varies based on your operator's region and external domain name, and will be in the format.<region>.<FQDN>. - If you're using the Azure Stack Development Kit, the portal address is. b. In Get a Subscription, enter a name for your subscription in the Display Name field. Select Offer, and then choose one of the offers in the Choose an offer list. Select Create. c. To view the subscription, select All services, and then under the GENERAL category select Subscriptions. Select your new subscription to see which services are part of the subscription. Note After you subscribe to an offer, you might have to refresh the portal to see which services are part of the new subscription.
Provision a virtual machine From the user portal you can provision a virtual machine using the new subscription. a. Sign in to the user portal with a user account. - For an integrated system, the URL varies based on your operator's region and external domain name, and will be in the format.<region>.<FQDN>. - If you're using the Azure Stack Development Kit, the portal address is. b. On the dashboard, select +Create a resource > Compute > Windows Server 2016 Datacenter Eval, and then select Create. c. In Basics, provide the following information: - Enter a Name - Enter a User name - Enter a Password - Choose a Subscription - Create a Resource group (or select an existing one.) - Select OK to save this information. d. In Choose a size, select A1 Standard, and then Select. e. In Settings, select Virtual network. f. In Choose virtual network, select Create new. g. In Create virtual network, accept all the defaults, and select OK. h. Select OK in Settings to save the network configuration. i. In Summary, select OK to create the virtual machine. j. To see the new virtual machine, select All resources. Search for the virtual machine and select its name from the search results. Next steps In this tutorial you learned how to: - Create an offer - Add an image - Test the offer Advance to the next tutorial to learn how to: Feedback Send feedback about:
https://docs.microsoft.com/en-us/azure-stack/operator/azure-stack-tutorial-tenant-vm
CC-MAIN-2019-22
refinedweb
921
65.62
! Please keep this My stuff out of C#. C# was perfect in 1.0. Adding different APIs for the same kind of code would make it worse. Moderation? Is this a fake marketing blog? I'm divided on this. I hope it won't, as time goes by, make the class framework "messy" with all these different ways of accomplishing things. Nonetheless the examples are pretty neat and the benefits probably outweigh the possible downsides. I think I'd like this for C# as well. I think the lion's share of VB developers will make quick use of the new My namespace. It really adds a set of tiger's claws to those developers looking to latch on to the underlying framework in a RAD manner. The fact that it will be extensible bears <g> repeating. When productivity increases, and projects are finished sooner, everybody gets to finish up and go home just that much quicker!
https://blogs.msdn.microsoft.com/somasegar/2004/09/03/oh-my/
CC-MAIN-2017-43
refinedweb
157
75.61
Python Tutorial - Week 2 In Week 1 we got started with Python. Now that we can interact with Python, let's dig deeper into it. This week we will go over some additional fundamental things common in any program - interactive input from users, adding comments to your code, use of conditional logic i.e. if - else conditions, loops, and formatted output with strings and print() statements. Python Week 2 User Inputs The input() function takes an optional prompt string, i.e. the text that will be shown when asking for input. name = input("What's your name? ") print("Nice to meet you " + name + "!") age = input("Your age? ") print("So, you are already " + age + " years old, " + name + "!") What's your name? Sadanand Nice to meet you Sadanand! Your age? 30 So, you are already 30 years old, Sadanand! What if you try to do some mathematical operation on the age? You will get a TypeError as follows: age = 12 + age --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-2-3d9ce720d6f3> in <module>() ----> 1 age = 12 + age TypeError: unsupported operand type(s) for +: 'int' and 'str' This says that by default all data is read as raw input, i.e. strings. If we want numbers we need to convert them ourselves. For example: cities_canada = input("Largest cities in Canada: ") Largest cities in Canada: ["Montreal", "Ottawa", "Calgary", "Toronto"] print(cities_canada, type(cities_canada)) ["Montreal", "Ottawa", "Calgary", "Toronto"] <class 'str'> cities_canada = eval(input("Largest cities in Canada: ")) Largest cities in Canada: ["Montreal", "Ottawa", "Calgary", "Toronto"] print(cities_canada, type(cities_canada)) ['Montreal', 'Ottawa', 'Calgary', 'Toronto'] <class 'list'> population = input("Population of Portland? ") Population of Portland? 604596 print(population, type(population)) 604596 <class 'str'> population = int(input("Population of Portland? ")) Population of Portland?
604596 print(population, type(population)) 604596 <class 'int'> pi = input("Value of PI is?") Value of PI is?3.14 print(pi, type(pi)) 3.14 <class 'str'> pi = float(input("Value of PI is?")) Value of PI is?3.14 print(pi, type(pi)) 3.14 <class 'float'> Notice the use of various methods like eval(), int() and float() to get user input in correct formats. In summary, eval() is used to get data into various native Python formats, e.g. lists, dictionaries etc. We will look at these in more detail in the next few tutorials. int() is used to convert input to integer numbers (numbers without decimals), while float() is used to get floating point numbers. Also of interest above is the type() method used in the print statements. You can get the type of any variable in Python using this method. In the output of this we see something like: <class 'float'> - if the variable is of float type. For the time being we will ignore the "class" in this. Indentation Blocks Python programs get structured through indentation, i.e. code blocks are defined by their indentation (the amount of blank space before any line). This principle makes it easier to read and understand other people's Python code. In the following sections we will see extensive use of such indentation blocks. Consider the following example to calculate Pythagorean triples. You do not need to understand the full code right here. We will revisit this code at the end of this tutorial. from math import sqrt n = input("Maximum Number? ") n = int(n)+1 for a in range(1,n): for b in range(a,n): c_square = a**2 + b**2 c = int(sqrt(c_square)) if ((c_square - c**2) == 0): print(a, b, c) Maximum Number? 10 3 4 5 6 8 10 In the above code, we see three indentation blocks: the first and second "for" loops and the third "if" condition. Python has two ways to annotate/comment Python code. One is by using comments to indicate what some part of the code does.
Single-line comments begin with the hash character ("#") and are terminated by the end of the line. Here is an example:

# This is a comment in Python before the print statement
print("Hello World") # This is also a comment in Python

Hello World

The other way is a string literal, usually in triple quotes ("""..."""), that is not assigned to anything; Python simply ignores it when running the code (such strings are also used as docstrings).

Conditionals

Conditionals - mostly in the form of if statements - are one of the essential features of a programming language. Conditional statements in Python use indentation blocks to conditionally execute certain code. Typical "condition" expressions use some of the following operations:

- mathematical comparisons like "<", ">", "<=", ">=", "=="
- object comparisons like "is", i.e. whether this is exactly something or not
- boolean logic operators like "not", "or" and "and"

The following objects are evaluated by Python as False:
- numerical zero values (0, 0.0, 0.0+0.0j),
- the Boolean value False,
- empty strings,
- empty lists and empty tuples,
- empty dictionaries,
- the special value None.

All other values are considered to be True.

Let us try to solve this simple DNA sequence problem: given an input DNA sequence, print the sequence if its length is less than or equal to 20. Print "ERROR!" if the sequence is empty or its length is larger than 25. If the length is between 21 and 25, print the last 5 bases only.

dna = "ATGCCGATTTATCGGGAACCNNNAATTCCGG"
if len(dna) <= 20:
    if len(dna) > 0:
        print(dna)
    else:
        print("ERROR!")
elif len(dna) <= 25:
    print(dna[-5:])
else:
    print("ERROR!")

ERROR!

dna = "ATGCAATGCN"
if len(dna) <= 20:
    if len(dna) > 0:
        print(dna)
    else:
        print("ERROR!")
elif len(dna) <= 25:
    print(dna[-5:])
else:
    print("ERROR!")

ATGCAATGCN

dna = ""
if len(dna) <= 20:
    if len(dna) > 0:
        print(dna)
    else:
        print("ERROR!")
elif len(dna) <= 25:
    print(dna[-5:])
else:
    print("ERROR!")

ERROR!
dna = "ATGCCGATTTATCGGGAACCNNN"
if len(dna) <= 20:
    if len(dna) > 0:
        print(dna)
    else:
        print("ERROR!")
elif len(dna) <= 25:
    print(dna[-5:])
else:
    print("ERROR!")

CCNNN

if-else conditions can also be combined in a regular assignment expression to assign values. For example, in the DNA case we want to store the length of the DNA, but only as a number if the length of the sequence is between 1 and 20; in all other cases we want to store the length as -1. A typical way to do this would be:

dna = "ATGCCGATTTATCGGGAACCNNN"
length = -1
if 0 < len(dna) <= 20:
    length = len(dna)
print(length)

-1

dna = "CCGGGAACCTCACG"
length = -1
if 0 < len(dna) <= 20:
    length = len(dna)
print(length)

14

This example can be written in a much shorter fashion as well. Such conditions are commonly called ternary if statements.

dna = "ATGCCGATTTATCGGGAACCNNN"
length = len(dna) if 0 < len(dna) <= 20 else -1
print(length)

-1

dna = "CCGGGAACCTCACG"
length = len(dna) if 0 < len(dna) <= 20 else -1
print(length)

14

Loops

Many algorithms make it necessary for a programming language to have a construct which makes it possible to carry out a sequence of statements repeatedly. The code within the loop, i.e. the code carried out repeatedly, is called the body of the loop. There are two types of loops in Python:

- while loops
- for loops

The while Loop

This is a type of loop called a "condition-controlled loop". As the name suggests, the loop is repeated until a given condition changes, i.e. from True to False or from False to True, depending on the kind of loop. Let us consider the following DNA sequence example: we want to print every base of a given sequence until we have found 2 A's.

dna = "ATGCCGATTTATCGGGAACCNNN"
countA = 0
index = 0
while countA < 2:
    print(dna[index])
    if dna[index] == 'A':
        countA = countA + 1
    index = index + 1

A
T
G
C
C
G
A

In the above example, the loop (the code under the while block) was executed as long as the countA < 2 condition remained true.
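As a further condition-controlled example (a sketch in the same spirit, not part of the original tutorial), the same while pattern can count the G and C bases of a sequence:

```python
# Count the G and C bases in a DNA sequence using a while loop.
dna = "ATGCCGATTTATCGGGAACCNNN"
gc_count = 0
index = 0
while index < len(dna):
    if dna[index] == 'G' or dna[index] == 'C':
        gc_count = gc_count + 1
    index = index + 1
print("Number of G, C bases is:", gc_count)
```

Here the loop condition compares the running index against the length of the string, so the loop always visits every base exactly once.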
A loop can be made to exit before its actual completion using the break statement. Consider the following DNA sequence example: we want to print every base of a given sequence until we have found 2 A's. However, we want to stop printing as soon as we have found an N base.

dna = "ATGCNCGATTTATCGGGAACCNNN"
countA = 0
index = 0
while countA < 2:
    if dna[index] == 'N':
        break
    if dna[index] == 'A':
        countA = countA + 1
    print(dna[index])
    index = index + 1

A
T
G
C

Now let us consider another case: while looping over something, we want to skip over a part of the code under a certain condition. In such cases the continue statement comes in handy. Consider the following example with a DNA sequence: given a DNA sequence, we do NOT want to print the base name if it is 'N'.

dna = "ATGCNCN"
index = 0
while index < len(dna):
    index = index + 1
    if dna[index-1] == 'N':
        continue
    print(dna[index-1])

A
T
G
C
C

The for Loop

A for loop is similar to a while loop, except that it is used to loop over certain elements, unlike a while loop, which continues until a certain condition is satisfied. In the case of DNA sequences, one use of a for loop would be to loop over all bases in a sequence. Consider the following example: given a DNA sequence, we want to count the number of all 'A' and 'T' bases.

dna = "ATGCNCGATTTATCGGGAACCNNN"
count = 0
for base in dna:
    if base == 'A' or base == 'T':
        count += 1
print("Number of A, T bases is:", count)

Number of A, T bases is: 10

Similar to while loops, we can use break and continue statements with for loops as well. Let us look at a somewhat more complicated use of a for loop: given a DNA sequence, we want to count the number of doublets of bases, i.e. the number of times a base occurs exactly twice in a row. If some base occurs more than twice in a row, we do not want to count it.
dna = "ATGGCNCGAATTTAAATCGGGAACCNNN"
countPairs = 0
pairFound = 0
prevBase = ''
for base in dna:
    if (base == prevBase):
        pairFound += 1
    else:
        if pairFound == 1:
            countPairs += 1
        pairFound = 0
    prevBase = base
print("Number of paired bases is:", countPairs)

Number of paired bases is: 4

Formatting of Output

The final topic for this week is the formatting of text in print statements. Consider the following case: we have the variables name = "Sadanand", age = 30 and gender = "male", and we would like to print a rather cumbersome statement like the one below. This can be done quite easily using the format method.

name = "Sadanand"
age = 30
gender = "male"
msg = "Hi {0}, You are a {1}, and you have seen {2} winters as you are {2} years old! Thanks {0}!"
print(msg.format(name, gender, age))

Hi Sadanand, You are a male, and you have seen 30 winters as you are 30 years old! Thanks Sadanand!

Thus the format method provides us with an easy way to mix different types of variables into strings. That's it for this week. Next week we will look at strings and lists in Python in more detail.

Exercise

Given the following DNA sequence - "ATGGCNCGAATTTAAATCGGGAACCNNN":
- Write a program to count the number of all triplets in it.
- Write a program that prints all non-'T' bases that come after a 'T', but stops when two or more consecutive 'T's have been found.
- Write a program to generate a new sequence with every 3rd base from the above sequence.
- Write a program to calculate the sum of all numbers from 1 to 10. HINT: Please take a look at the range method.
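Beyond positional indexes like {0}, the format method also accepts named placeholders, which can make long templates easier to read (a small extra example, not part of the original tutorial):

```python
# Same idea as the message above, but with named placeholders
# instead of positional indexes.
msg = "Hi {name}, You are a {gender}, and you are {age} years old!"
text = msg.format(name="Sadanand", gender="male", age=30)
print(text)
```

With named placeholders the order of the arguments no longer matters, and the template is self-documenting.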
Problem Statement

In this problem we are given a sequence of numbers (each may be positive, negative or zero). We start with a positive integer and then add all integers of this array to it, from left to right. We want the minimum positive integer to start with, so that at every step the current sum remains positive.

Example

nums = [-3,2,-3,4,2]
5

Explanation: We can see here that if we choose startValue = 5, all intermediate sums stay positive. We can also check startValue = 4, which is not a correct solution.

nums = [1,2]
1

Explanation: The minimum start value must be positive.

Approach

Suppose we have the array nums = [-3,2,-3,4,2]. If we choose the initial value 2 and keep adding elements from left to right, our sum will not remain positive the whole time, so we need some larger value. Let the initial value be 5. Now we can clearly see that with a starting value of 5 we can travel through the whole array while keeping our current sum positive at all times. 5 is the answer if it is the smallest integer that does so.

Let's think of what happens if we choose val = 0 at the start. Can we say that if we overcome the value of the most negative current sum (-4 in the current example), then we can clearly pass through the array without any problem? In the above example, the most negative value is -4; to overcome it, we have to lift it to 1 (because we want the smallest positive integer). So we need the value 1 - (-4) = 5 to pass the most negative situation, and we have already seen that 5 works. And if there is no negative current sum, we will just output 1, because we want a positive integral solution.

So our algorithm will be:
1. We have to search for the most negative current sum, so we will traverse the whole array.
2. In each iteration of the loop we will check whether the current sum is a new minimum, and we will update our min value accordingly.
3.
Finally, to lift this most negative value up to 1, we just subtract it from 1 (e.g. if min = -4, then val = 1 - (-4) = 5).

Implementation

C++ Program for Minimum Value to Get Positive Step by Step Sum Leetcode Solution

#include <iostream>
#include <vector>
using namespace std;

int minStartValue(vector<int>& nums)
{
    int min = 0, sum = 0;
    for (int i = 0; i < nums.size(); i++) {
        sum += nums[i];
        min = min < sum ? min : sum;
    }
    return 1 - min;
}

int main()
{
    vector<int> nums{-3,2,-3,4,2};
    cout << minStartValue(nums) << endl;
    return 0;
}

5

Java Program for Minimum Value to Get Positive Step by Step Sum Leetcode Solution

import java.util.*;

class Rextester {
    public static int minStartValue(int[] nums) {
        Integer min = 0, sum = 0;
        for (int i : nums) {
            sum += i;
            min = Math.min(min, sum);
        }
        return 1 - min;
    }

    public static void main(String args[]) {
        int[] nums = {-3,2,-3,4,2};
        int ans = minStartValue(nums);
        System.out.println(ans);
    }
}

5

Complexity Analysis for Minimum Value to Get Positive Step by Step Sum Leetcode Solution

Time Complexity

O(n): Because we traverse the given array once, our time complexity is O(n).

Space Complexity

O(1): We do not use any extra space, so our space complexity is constant.
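For readers who prefer Python, the same prefix-minimum idea can be sketched as follows (this version is ours, not part of the original article):

```python
def min_start_value(nums):
    # Track the minimum prefix sum seen so far; the answer lifts it up to 1.
    lowest = 0
    running = 0
    for x in nums:
        running += x
        lowest = min(lowest, running)
    return 1 - lowest

print(min_start_value([-3, 2, -3, 4, 2]))  # prefix sums: -3, -1, -4, 0, 2
```

Because `lowest` starts at 0, the function automatically returns 1 when no prefix sum ever goes negative.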
Hello everybody: In the following thread an issue was discussed about TTF fonts losing glyphs after window resizes: This bug was fixed in allegro-5.0.9. Unfortunately, fonts lose glyphs on Linux using allegro-5.0.11 after using al_get_text_width(). (My application's window is not resizable.) My current workaround is to load each font twice and then use al_get_text_width() only with the second one, while doing al_text_draw*() with the first one.

Could you make a minimal test case for this that illustrates the issue? Also, what OS is this?

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]

I'll create a test case next week. Beforehand some information:

System Hardware: 4th Gen i7 Quad Core 3.2 GHz, 32GiB RAM, Intel HD + nvidia Hybrid
System OS: Gentoo Linux
Compiler: gcc-4.9.2
Fonts tested: DejaVuSansMono-Bold.ttf, FreeMonoBold.ttf and FreeSansBold.ttf.

I first suspected that I had an error that accidentally writes over an array bound, garbling the font or something like that. So I tried address sanitizer and leak sanitizer plus strong stack protection. Neither came up with anything. To check whether the sanitizers missed anything crucial, I switched the fonts, so the one used for the width calculation is declared first. Currently the one for drawing is declared first. If anything corrupted any stack, the display should be garbled again, but it isn't.

Then I added font reloading before each drawing block, and see, the display is fine when reloaded. But I cannot reload the font for every frame. It was just a test. The only possibility left is that al_get_text_width() somehow corrupts the font memory, but only if the font is loaded into video memory. If I put an al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP) just before the font loading, everything is fine and the FPS rate drops from a fixed 60 FPS to ~10 FPS. See the mini screenshots for examples.
However, I just tried to run the game using primusrun/optirun (bumblebee, a system to use any program with the nvidia graphics card instead of Intel HD) and noticed something strange. I am describing it here because it might be caused by me missing some important step that is somehow responsible for the missing glyphs as well. One might never know.

The screenshots are from the "Shopping screen" of the game, which can be seen before each round. Before entering here al_get_text_width() is nowhere used, so the missing glyphs show up here first. Whenever the display is updated, it switches between the old screen (main menu of the game with loading / starting options) and the shopping screen. It is like al_flip_display() is flipping the display's backbuffer with an old state, too. So I clear the backbuffer to a green color, draw the shop on it and it is shown. The next frame has the old screen with only the shop stuff drawn to it. How can this be? Is there something special to be done when using nvidia cards?

Thanks for your help and patience!

No ideas just yet, but I'm planning on investigating this later this weekend. This wouldn't be the first time something broke in the TTF addon, so perhaps it's a real issue.

IIRC, calling al_get_text_width causes the character cache to be drawn - it is some form of lazy creation. The character creation code could be the problem, but I haven't looked at this. Could you try compiling with Allegro 5.1? There have been quite a few bug fixes in the ttf addon that haven't been backported.
What puzzles me is, that none of the gcc-4.9.2 sanitizers (leak, address or thread) nor the stack protector can find any issues. Nowhere in the code anything is written out of bounds. So my best bet is, that something in the drawing of the mentioned "shop" is actually responsible for that, or the background thread that creates the next level wreaks havoc. I already found out that allegro 5 can not handle different threads drawing something at the same time using primitives, even when the drawing takes place into different targets. So I impose a general lock and ensure the correct target is set.But is it enough? I'll investigate this further. Here is the code I wrote this morning for testing. It is very rudimentary and I left most checks out as it is just a small test. As can be suspected, it works without any glyphs going missing: I already found out that allegro 5 can not handle different threads drawing something at the same time using primitives, even when the drawing takes place into different targets. If they share the same Context that is true. If you create a separate context for each thread (in a5, a context is tied to a display). OpenGL just doesn't support letting multiple threads access the same context. D3D may be similar. I think this comes under "undefined" behavior. There are ways to overcome that, but it may not gain you much. Without using multiple contexts and "context sharing"[1], you can only render from one thread at a time, and you need to manually switch which thread owns the context, which means you gain nothing in terms of performance, and it may actually decrease performance considerably as it can cause a large number of state. I have only one display. The background thread is there so while the player(s) do their shopping, the next level can be prepared. 
Currently the outline is this for all drawing operations while these two threads work in parallel: (Within the game round itself only the main thread does any drawing operations to the display) lock the display globally using a spinlock or mutex set allegro target to the bitmap to draw on Lock the bitmap (region) if pixel putting and/or reading is done (quite a lot) Do the drawing (or reading if I need pixel colors) Unlock the bitmap set allegro target to nullptr to release the display context unlock the global lock. In the meantime I have deferred the background thread to after the shopping is done. The glyphs keep disappearing, so the background thread is not the issue. At least I think I get nearer to the bottom of this. The next change was to remove all calls to al_get_text_width() but one I really need. The others could be substituted by simply aligning the text to 'right'. I have to add that this is not a new game. I am currently porting an old allegro 4 game to allegro 5. So basically it is currently plastered with #ifdef/#else/#endif macros to allow me to compile for both worlds on demand until I can be sure that the porting is done correctly. It looks like a mess, but at least, if I screw up, I have the old working code right there. ;-) If you dare to take a look, the file in question is this one: I have not yet (fully) ported and tested the game loop, so the shop is the last progress. Once this works out, the rest will be done. *sigh* I guess I have to single step it again... What I tried to say is you just can't access the display contents or bitmap contents in any way from more than one thread at the same time. You will actually need to null the target in the owning thread before setting the target in another thread. Only one thread can own the context at a time, so you have to release it in the first thread before trying to capture it in another. 
al_get_text_width() also counts as a rendering function, so you can only call it from the thread in which you also draw the text. If you call it from another thread, either there will be no OpenGL context and so it will mess up the internal texture used for storing glyphs, or it will take OpenGL away from the drawing thread and so again mess things up. So if the only problem was that you called al_get_text_width() from another thread, then yes, that is expected to cause problems. (There is one way to make it work, which is if you use thread locking to make sure that only one thread at a time uses graphics functions (including al_get_text_width). And it may require Allegro 5.1 instead of 5.0.)

--"Either help out or stop whining" - Evert

Thank you very much for your help!

@Thomas: This is exactly what I am doing. Do a global lock, do what has to be done, null the target and then unlock. I will single step it all tonight to see whether I overlooked any place where the locking would have to be done but isn't.

@Elias: That is a good hint. Because right now I know of at least one place, and it is the only one remaining, where I use al_get_text_width() to fill a local constant, without getting the context first by (re-)setting the target.

Again, thank you very much for your input!

Edith can tell: Found it! *yay*! In a different file, far far away (okay, not that far), the description text of the shop items gets printed line by line. And guess what, al_get_text_width() is used to determine when a line is full. And at that place, the current target bitmap is NULL. So that screwed it all up! So after enclosing that check with the mentioned global lock/unlock and the target setting/releasing, all is well now.

Thanks again! You all saved my day! (Completely unrelated: How can I mark this thread as SOLVED or FIXED?)

The truth is, ideally we'd fix this in Allegro anyway - al_get_text_width could just use a separate glyph cache in a memory bitmap.
Maybe even with a lock to be thread safe, and then any number of threads could query text dimensions on the same font. In fact, it would be worth investigating if glyphs even have to be rendered to get their size. Even if so, all we'd need to keep around is the dimensions and not the actual bitmaps.

In fact, would be worth investigating if glyphs even have to be rendered to get their size. Even if so, all we'd need to keep around is the dimensions and not the actual bitmaps.

Ideally I think that's probably the best route. I'd be surprised if it didn't have a measure function that skips the rendering step (that also didn't have to do all of the calculations required of a full rendering step that make it pretty slow...)

IIRC the ttf addon renders the glyphs to bitmaps in the glyph cache just to measure them, because of kerning and other effects that make it hard to get the correct dimensions without rendering the glyph. But maybe there is a way to measure the width of the glyphs without rendering them in libfreetype2 that I don't know of...
How to use FTP in Python

In this article, you will learn how to use FTP in your Python programs. You can do this with the help of the ftplib module. FTP is the File Transfer Protocol, which is basically used for transferring files to and from a remote server. The ftplib module allows you to perform FTP tasks in your Python program and do many FTP-related jobs such as retrieving files and processing them on your system.

Use FTP in Python

To start using this feature you need to import the ftplib module into your program:

import ftplib

First, you have to connect the program to the FTP server to "open" the connection. For this, an object has to be created. When the connection is opened (made), you can use the features of the ftplib module. You can easily browse through the directory structure, download and upload files, etc. For this, use the commands:

from ftplib import FTP
ftp = FTP('domain.name')  # To connect to a server. Kindly replace domain.name with your Domain Name. (Note - place the Domain Name inside the quotes)

To check if you connected successfully:

print(ftp.getwelcome())  # This will show a message sent by the server in response to the connection request.

Now we have to log in to the server:

ftp.login()  # To log in anonymously, meaning when a username and password are not required.

If you have a username and password:

ftp.login(user="username", passwd="password")  # Kindly replace username with your User Name and password with your Password (Note - within quotes " ")

After the login command, the server will show a message generally starting with a number like 230, 430 or 226. These are standard codes; here, for example, 230 means "User logged in, proceed", while 430 means "Invalid username or password". There are many methods in the FTP class, like delete(), quit(), connect(), cwd(), mkd().

Now let us write a simple program to retrieve the file listing from the server:

ftp = FTP("domain.name")
ftp.login()
ftp.retrlines('LIST')

This will show you the files present on the server. The output will be something like this.
The output will be something like this. File Transfers in Python using FTP To transfer files from the server you would have to know if it is a single block of binary code or a text file. So here and method is used respectively. In the above code retrlines(‘LIST’) is used to retrieve files or folders from the server in ASCII transfer mode. Proper command in retrlines() must be passed such as LIST, MLSD or NLST. “LIST” is used to retrieving file names with their details. NLST only retrieve file names. from ftplib import FTP import os def writedata(info): info.write(info) info.write(os.linesep) ftp = FTP('Domain.name')('path') info = open('file_name', 'wb')('RETR file_name', writedata) info.close() Inline 1 and 2 we’re importing libraries ftplib as FTP and os module. Inline 4 to 6 we declared a function to write text to a file line by line. Now inline 8 and 9 we are connecting to the FTP server and logging in anonymously. Inline 10 we are navigating to a specified ‘path’. Inline 11 to 13 we are creating a file object to open the file in the FTP server as ‘wb’, ‘b’ for binary and “w’ as write. Pass file_name(replace with File Name) and write data into it locally on your machine. Then closing the file. And at last line 15 logging out of the server. Binary File Transfer in Python using FTP retrbinary() is used to retrieve the file in binary mode. Proper RETR command and function should be passed in retrbinary() Like ‘RETR file_name’ // Replace file_name with the name of the file you want to write to. The function is for each block of data received. from ftplib import FTP import os ftp = FTP('domain.name')('path') info = open('file_name', 'wb')('RETR file_name', info.write) info.close() Try this, its similar to last code. Shows the directory listing in your present working directory.(“pathHere”) To change the directory you are working in you can use this. //Replace pathHere with Path you want to get to. It creates a new directory taking a specific path. 
ftp.pwd()  # Shows the present working directory, i.e. the pathname where you are at present.
ftp.delete("file_name")  # Deletes a file named file_name from the server.
ftp.quit()  # Sends a request to close the connection with the server and log out.
ftp.close()  # Like quit(), but close() renders the following FTP calls useless after it is executed. No FTP commands should be executed after this.

Also read: Check if a string is a valid URL or not in Python
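Uploading works the same way in reverse, using the storbinary() method with a STOR command. Below is a minimal sketch; the host, credentials and file names are placeholders you would replace with your own:

```python
from ftplib import FTP

def upload(host, local_path, remote_name, user=None, passwd=None):
    # Minimal sketch: connect, log in, then send the file in binary mode.
    ftp = FTP(host)
    if user:
        ftp.login(user=user, passwd=passwd)
    else:
        ftp.login()  # anonymous login
    with open(local_path, 'rb') as fp:
        ftp.storbinary('STOR ' + remote_name, fp)
    ftp.quit()

# Example call (placeholders, will only work against a real server):
# upload('domain.name', 'local.txt', 'remote.txt')
```

The with statement ensures the local file is closed even if the transfer fails midway.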
Is there a command line tool out there that can record your computer screen and audio device and then save that to a file?

Mac OS X offers the ability to capture static screenshots from the command line using a utility unimaginatively called screencapture. You can find more information by running man screencapture, e.g.:

screencapture ~/Desktop/picture.png

As for recording motion and computer audio, you can do this via AppleScript using QuickTime Player X if you're using Mac OS X 10.6 or newer. (However, it's definitely not "silent" and will only run when someone is logged in.)

(Recording the screen for 5 seconds, just a possible starting point:)

tell application "QuickTime Player"
    --activate
    new screen recording
    start document 1
    delay 5
    stop document 1
    save document 1 in file "/tmp/test.mov"
    quit
end tell
MP4Box -add $1.m4a -sbr -add $1.m4v -fps $2.0 -inter 500 $1.mp4 I'm using this daily to create archive of GoToWebinar, but when next release of FFmpeg supports G2M4 codec, I won't bother anymore. Yes, there's also OSAscript to launch the webinar which also required getting rid of com.apple.quarantine flag to disable warning (Are you sure you want to open it?). links: You can use the software ffmpeg. To install it on a Mac, follow the instructions here. Then use the command: $ffmpeg -f alsa -ac 2 -i hw:0,0 -f x11grab -r 30 -s $(xwininfo -root | grep 'geometry' | awk '{print $2;}') -i :0.0 -acodec pcm_s16le -vcodec libx264 -vpre lossless_ultrafast -threads 0 -y output.mkv For capturing screen, there is several Open Source tools. You should search "screen capture" using a package manager on Linux. Tools like scrot can be called in a script. If you plane to do it programatically, there is the needed entry points in Imlib2. scrot Capturing sound is really recording for microphone sound device. It's not really a snapshot and you should manage how long the recording should be done.Any sound recorder will do. There is also tools that makes a video of your desktop. Dumping the contents of the Linux framebuffer device to a file may be something similar to what you are looking for. Below, I have quoted the relevant portion of the Linux 2.2 Framebuffer Device Programming Tutorial, take a look. You will need to configure your system to enable the framebuffer device (check if you have /dev/fb0 on your system), I don't have it on my Ubuntu 9.04.. You can use import from ImageMagick, but it's Linux only. Then you could go to a terminal and execute: import screenshot.png I am also running Snow Leopard on an aging macbook and since I am trying to learn Objective C, I spend some time to build a tool that can capture specific windows. It is a command line tool that will capture the top most active window or any window you specify. Hope that helped. 
By posting your answer, you agree to the privacy policy and terms of service. asked 4 years ago viewed 5364 times active 5 months ago
Hi,

On Mo 02 Jul 2018 12:56:32 CEST, Dashamir Hoxha wrote:

On Sun, Jul 1, 2018 at 2:13 PM Wolfgang Schweer <w.schweer@gmx.de> wrote:

I agree that level related meta-packages can be useful to DebianEdu, in addition to topic related meta-packages. This gives users better choices about what to install.

/me nods.

About the namespace of the meta-packages, I also see packages starting with `debian-edu-` (for example if you try `apt list debian-edu-*`). But if you say that `education-*` is the right namespace to use, I would agree.

The education-* bin:pkgs come from the debian-edu src:package. Ideally, you put your education-level-based meta-package also into the debian-edu src:package. All the other debian-edu-* packages relate to pre-configuring Debian Edu systems in the school network (-install and -config) or to special artwork (-artwork-*).

The meta-package `education-university` that you propose does not seem useful to me, because unlike the other levels of study, the topics of study on this level are too broad, so you cannot suggest a list of packages that can be useful for everyone (or the list would be huge). Topic meta-packages are more useful for this level of study.

I agree. Anything could be put in -university.

If we can depend on your help and support, I would encourage the student (Deepanshu) to create the relevant Debian Edu tasks (about the meta-packages that you mentioned: education-preschool, education-primaryschool, education-secondaryschool, education-highschool) on the salsa repository. This should not be a difficult task for him.

Deepanshu can file merge requests on Debian's GitLab instance salsa.debian.org.
He can sign up for a guest account on salsa.debian.org.

As far as I know, due to missing contributors Edubuntu is LTS since 2014 with supposed EOL in 2019, see: <>

Unfortunately yes, Edubuntu is not an official Ubuntu flavor anymore. Mainly because Jonathan is now working on Debian stuff and also on other topics, and Stéphane has also shifted his focus to the LXD project lead. However, I am not sure if he even works for Canonical anymore.

But as long as the meta-packages are maintained (which is not a difficult task) this is OK, since they can be installed on any Ubuntu flavor or derivative. I have also suggested to the student (Deepanshu) to think about becoming an Edubuntu maintainer, if he likes. But this is not a requirement for this GSoC project.

It took me a bit to find the project description:

Nice!

Also, to my understanding, the Virtual LTSP Server project intends to be sort of a replacement for it (as a classroom installation), obviously involving people with teaching experience.

Virtual LTSP is not exactly a replacement for Edubuntu. Rather it is an attempt to offer an easy installation and usage of LTSP in a classroom.

How do you handle the DHCP / PXE boot stuff in a classroom that already has DHCP set up for the site? How will the clients be booted?

It is inspired by the observation that secondary and high schools in Albania mostly have a classroom of old computers and that's all, no complex IT infrastructure. These computers usually are connected to a LAN and have access to the internet. They already have Microsoft Windows installed and nobody will give you permission to replace it with Linux (since they claim that learning

Urgh.
Microsoft tools is required by the official educational program).

Sigh...

Maybe they will allow you to make them dual boot, but it is too much work to install, maintain and update all of them (and nobody will pay you for work that they did not ask you to do, and you just do it for fun).

Ok.

In these conditions it would be easier to have a LTSP server on a virtual machine on your laptop, with all the programs that the students need to use (like labby, kturtle, scratch, etc.), and boot the computers of the classroom from the network.

As LTSP fat clients or LTSP thin clients? I would get scared of having 15+ students logged into a VBox instance on my notebook. With LTSP fat clients, this might work, depending on the client hardware in the computer labs.

You should definitely look at two LTSP extras:

- desktop-autoloader (makes only sense, if clients get booted before class)
- unburden-home-dir

Maybe this is specific only to Albania, no other countries are in such terrible conditions, but I think that it is still a useful and interesting project.

I am pretty sure that there are other countries that are in a similar bad or possibly even worse condition. In fact, it does not seem so much different from some school situations I encounter here in Germany.

Joining forces to identify, evaluate and package educational applications seems to be useful.

I agree with this. Thanks for your help and support.

Thanks for all the extra infos.

Greets,
riel@das-netzwerkteam.de

Attachment: pgp3HyySZonsX.pgp
Description: Digital PGP signature
https://lists.debian.org/debian-edu/2018/07/msg00002.html
For example, consider this C function:

    long foo(long x) { return x / 52223; }

The desired output is something like:

    foo(long):
            movabs rdx, -6872094784941870951
            mov    rax, rdi
            imul   rdx
            lea    rax, [rdx+rdi]
            sar    rdi, 63
            sar    rax, 15
            sub    rax, rdi
            ret

Assume for a moment that we know that this sequence of instructions will work, and we only lack the specific constants:

    foo(long):
            movabs rdx, C1
            mov    rax, rdi
            imul   rdx
            lea    rax, [rdx+rdi]
            sar    rdi, C2
            sar    rax, C3
            sub    rax, rdi
            ret

We need to find a 64-bit constant and two 6-bit constants such that the assembly code returns the same result as the C code for every value of the function parameter. Of course there's a well-known algorithm that GCC and LLVM use to compute these constants, but the synthesizer doesn't have it: it must operate from first principles.

An even more difficult example is a 64-bit Hamming weight computation, where (assuming we don't want to use vector instructions) we might hope to synthesize something like this:

    popcount64(unsigned long):
            movabs rdx, 6148914691236517205
            mov    rax, rdi
            shr    rax
            and    rax, rdx
            movabs rdx, 3689348814741910323
            sub    rdi, rax
            mov    rax, rdi
            shr    rdi, 2
            and    rax, rdx
            and    rdi, rdx
            add    rdi, rax
            mov    rax, rdi
            shr    rax, 4
            add    rax, rdi
            movabs rdi, 1085102592571150095
            and    rax, rdi
            movabs rdi, 72340172838076673
            imul   rax, rdi
            shr    rax, 56
            ret

Again assuming that we know that this sequence of instructions will work, we still need to come up with 280 bits worth of constant values. One can imagine even more challenging problems where we want to synthesize an entire lookup table.
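We can check in plain Python that these constants really do what the assembly claims (a sketch: Python's unbounded integers let us model the signed 128-bit multiply and the 64-bit wraparound directly):

```python
import random

M64 = (1 << 64) - 1

def div52223(x):
    # model of the synthesized sequence: signed high half of the
    # 128-bit product (imul rdx), then lea/sar/sar/sub
    hi = (x * -6872094784941870951) >> 64
    return ((hi + x) >> 15) - (x >> 63)

def popcount64(x):
    # model of the SWAR popcount sequence above
    x -= (x >> 1) & 0x5555555555555555
    x = (x & 0x3333333333333333) + ((x >> 2) & 0x3333333333333333)
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0F
    return ((x * 0x0101010101010101) & M64) >> 56  # imul wraps at 64 bits

random.seed(0)
for _ in range(1000):
    x = random.randint(-(1 << 63), (1 << 63) - 1)
    # C division truncates toward zero, unlike Python's floor division
    want = abs(x) // 52223 * (1 if x >= 0 else -1)
    assert div52223(x) == want
    assert popcount64(x & M64) == bin(x & M64).count("1")
```

Checks like this also show why synthesis is hard: only these exact 64-bit values satisfy the forall over the whole input space.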
The running example I'll use in this post is much easier:

    uint32_t bar1(uint32_t x) {
      return ((x << 8) >> 16) << 8;
    }

The code we want to synthesize is:

    uint32_t bar2(uint32_t x) {
      return x & 0xffff00;
    }

That is, we want to find C in this skeleton:

    uint32_t bar3(uint32_t x) {
      return x & C;
    }

Basic Constant Synthesis

We need to solve an "exists-forall" problem: does there exist a C such that the LHS (left-hand side, the original code) is equivalent to the RHS (right-hand side, the optimized code) for all values of x? In other words:

    ∃C . ∀x . LHS(x) = RHS(x, C)

(Here we'll play fast and loose with the fact that in the real world we check refinement instead of equivalence — this doesn't matter for purposes of this post.) The specific formula that we want an answer for is:

    ∃C . ∀x . ((x << 8) >> 16) << 8 = x & C

A SAT solver can natively solve either an exists query (this is the definition of SAT) or else a forall query (by seeing if there exists a solution to the negated proposition). But, by itself, a SAT solver cannot solve an exists-forall query in one go. An SMT solver, on the other hand, can natively attack an exists-forall query, but in practice we tend to get better results by doing our own quantifier elimination based only on SAT calls.

First, we ask the solver if there exists a C and x that make the LHS and RHS equivalent. If not, synthesis fails. If so, we issue a second query to see if C works for all values of x. If so, synthesis succeeds. If not, we add a constraint that this value of C doesn't work, and we start the process again. The problem is that each pair of queries only rules out a single choice for C, making this process equivalent, in the worst case, to exhaustive guessing, which we cannot do when C is wide. We need to do better.

Reducing the Number of Counterexamples using Specialization

Souper uses a technique that appears to be common knowledge among people who do this kind of work; unfortunately I don't know where it was first published (Section 4.2 of this paper is one place it has appeared).
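The guess-and-check loop just described can be sketched concretely. In this sketch the two solver queries are replaced by exhaustive search at a toy 8-bit width (the LHS is a hypothetical 8-bit analog of the running example, with the shift amounts scaled down; its answer is x & 0x3C), but the structure of the loop is the same one a SAT-backed synthesizer uses:

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def lhs(x):
    # 8-bit analog of ((x << 8) >> 16) << 8, with wraparound
    return ((((x << 2) & MASK) >> 4) << 2) & MASK

def rhs(x, c):
    return x & c

def synthesize():
    examples = []  # counterexample inputs collected so far
    while True:
        # "exists" query: find a C consistent with every example so far
        cand = next((c for c in range(MASK + 1)
                     if all(lhs(x) == rhs(x, c) for x in examples)), None)
        if cand is None:
            return None  # no constant can work: synthesis fails
        # "forall" query: search for an input where the candidate is wrong
        cex = next((x for x in range(MASK + 1)
                    if lhs(x) != rhs(x, cand)), None)
        if cex is None:
            return cand  # candidate verified for all inputs
        examples.append(cex)  # refine and try again

print(hex(synthesize()))  # finds 0x3c after a handful of iterations
```

Exhaustive search is only feasible at toy widths, which is exactly the point of the techniques below: with a 64-bit C, each failed candidate must rule out far more than one value.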
The trick is simple: we choose a few values of x and use them to specialize both the LHS and RHS of the equation; you can think of this as borrowing some constraints from the forall phase and moving them into the exists phase (which becomes an "exists-forsome" query, if you will). In many cases this adds enough extra constraints that the solver can arrive at a workable constant within a few iterations. If we specialize x with 0 and -1 then in the general case we get:

    ∃C . (LHS(0) = RHS(0, C)) ∧ (LHS(-1) = RHS(-1, C))

After constant folding, our running example becomes:

    ∃C . (0 = 0 & C) ∧ (0x00FFFF00 = 0xFFFFFFFF & C)

The first choice, 0, turns out to be an unlucky one: after further simplification it comes out to "0 = 0", giving the solver no extra information. The second choice, on the other hand, is extremely lucky: it can be rewritten as "0x00FFFF00 = C" which simply hands us the answer. In most cases things won't work out so nicely, but specializing the LHS and RHS with fixed inputs helps enormously in practice. Some open questions remain: How many specific values should we try? How should these values be chosen? An obvious constraint is that the values chosen should be as different from each other as possible, though it isn't clear what distance metric should be used. A related but less obvious criterion (this is Nuno Lopes's idea, and we haven't tried it out yet in Souper) is that the values should exercise as many different behaviors of each instruction as possible. For example, an addition instruction should have example inputs that overflow and also inputs that don't. This requirement can be satisfied syntactically for instructions that are close to the inputs, but solver calls (or other symbolic methods) will be required to reach instructions that consume the outputs of other instructions. (In Souper's case, solver calls are required anyway if there are any path conditions, which place arbitrary restrictions on the input space.)
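The two specializations can be constant-folded in a couple of lines (a quick check, modeling uint32_t with explicit 32-bit masking):

```python
def lhs(x):
    # ((x << 8) >> 16) << 8 with uint32_t wraparound semantics
    return ((((x << 8) & 0xFFFFFFFF) >> 16) << 8) & 0xFFFFFFFF

# x = 0: the constraint is lhs(0) == 0 & C, i.e. 0 == 0 -- no information
print(hex(lhs(0)))           # 0x0

# x = -1 (0xFFFFFFFF): the constraint is lhs(-1) == 0xFFFFFFFF & C,
# i.e. 0x00FFFF00 == C -- the answer falls right out
print(hex(lhs(0xFFFFFFFF)))  # 0xffff00
```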
It also seems possible that we could tailor the input values to maximize how much of the RHS folds away (the LHS always folds completely down to a constant, of course).

Synthesis at Reduced Bitwidth

Instead of solving a difficult synthesis problem, we can instead solve a problem that is structurally identical, but uses narrower bitvectors (even two or three bits are often enough to capture the essence of a computation). This results in far better solver performance and, typically, a narrow synthesis result that does not contain constants will also work at the original, wider bitwidth (though obviously this needs to be verified by the solver). When the narrow result has constants, there's a problem: we lack a principled method for deriving the wider constants from the narrow ones. One approach is to use heuristics. This paper suggests the following methods:

Here you should read BV(x,y) as "a bitvector of length y holding value x." So, for example, rule 1 would extend an 8-bit variable containing 8 to a 16-bit variable containing 16. Rule 4 would extend an 8-bit variable containing 8 to a 16-bit variable containing 8. These seem reasonable, but notice that none of them would help with our running example.

In Souper we have the additional problem that the code we're trying to optimize usually contains constants, presenting us with a constant-narrowing problem that is analogous to the constant-widening problem that we just discussed, but worse, because now any dodgy heuristics that we come up with are located in front of the expensive synthesis process instead of in back of it. It isn't clear to us that there's any satisfactory solution to this problem.

Getting More out of Counterexamples

In standard constant synthesis, each failed guess only adds the constraint that that particular constant doesn't work — this fails to make much of a dent in an exponential search space.
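Only rules 1 and 4 are spelled out above, so as a sketch (assuming rule 1 scales the value in proportion to the width and rule 4 preserves the value unchanged), we can see why neither helps here: the 8-bit analog of the running example synthesizes the narrow constant 0x3C, and neither rule turns that into 0x00FFFF00:

```python
def widen_rule1(value, old_width, new_width):
    # assumed behavior: scale the value with the width, BV(8,8) -> BV(16,16)
    return value * new_width // old_width

def widen_rule4(value, old_width, new_width):
    # assumed behavior: keep the value, just widen, BV(8,8) -> BV(8,16)
    return value

print(widen_rule1(8, 8, 16))          # 16, matching the rule-1 example
print(widen_rule4(8, 8, 16))          # 8, matching the rule-4 example

# narrow result 0x3C from the 8-bit analog, widened to 32 bits:
print(hex(widen_rule1(0x3C, 8, 32)))  # 0xf0   (not 0xffff00)
print(hex(widen_rule4(0x3C, 8, 32)))  # 0x3c   (not 0xffff00)
```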
This paper and this one propose extracting additional information from each counterexample, cutting out a larger part of the search space. This also seems like an excellent direction.

Symbolic Constants

An alternate approach that avoids both narrowing and widening problems would be to treat constants symbolically. This, however, creates two new problems: deriving preconditions for optimizations and deriving functions to compute RHS constants in terms of elements of the LHS (constants, widths, etc.). It seems clear that in some cases this will be very difficult, for example imagine trying to derive, from first principles, the algorithm for computing the output constants for an arbitrary value of the LHS constant C:

    long foo(long x) { return x / C; }

Alive-Infer helps derive preconditions but it does not yet try to come up with the functions for computing constants in the synthesized output. Since in some use cases (including my "destroy InstSimplify and InstCombine" dream) we want the generalized optimization anyway, this seems like a great direction for future work.

Conclusion

All of the techniques described in this post could be implemented inside the solver or outside it. Solvers like CVC4 already have sophisticated internal support for synthesis. The advantage of pushing more work into the solver is that it can exploit internal levers to guide the search, reuse data structures across synthesis iterations, and do other clever things that aren't exposed by the solver's API. The disadvantage is that we might have high-level information about the problem domain that would help the solver do its job better, but that is difficult to communicate effectively to it.

In summary, constant synthesis is a hard problem that hasn't received a lot of attention yet. My group is actively working on this and we have some ideas that aren't mature enough to share yet. Please drop me a line if you know of any good techniques that I've missed here.
Also, I'd be interested to hear from anyone working on benchmarks for constant synthesis. Finally, tools such as Rosette and Sketch know how to synthesize constants.

Maybe I missed it, but it sounds like the only part of the counterexample being fed back into the process is the ∃ term. My instincts make me think that using the ∀ term would be more effective:

– Start with the basic case, ∃C.∀x.(LHS(x)=RHS(x,C))
– Find a C,x₁ that works
– Fix C and find any counterexample e₁ for x
– Add a term to the problem (…∨LHS(e₁)=RHS(e₁,C))
– Repeat.

For the running example, I'd expect that would run in at worst O(log n) iterations. Is there some reason that doesn't work well in general? Or did I just skim over something in your post?

Hi Benjamin, I don't believe the worst case here is going to be better than an exponential number of guesses for the constant, but I don't have a proof of that. What you are suggesting is a way of selecting values of the input variable(s) used for specialization. Your way may well be better than what we do now, but experimentation is needed. If there is a way to synthesize an arbitrary constant in O(bitwidth) solver calls then that's huge.

– I'll concede that O(exp(bits)) is still the worst (and expected?) case for arbitrary functions. And I don't really have a feel for what differentiates random functions from functions of interest in real world domains, so I can only guess at what effect that will have.
– I was assuming n was the number of possible values which, now that I think on it, is not the normal notation. O(bitwidth) is the result I was thinking of.

In fact, it would probably be "collapse the polynomial-time hierarchy" huge. Not that I have an immediate proof for that, but it does seem plausible to me that there'd be some kind of reduction to show that.

> If so, we issue a second query to see if C works for all values of x. If so, synthesis succeeds. If not, we add a constraint that this value of C doesn't work, and we start the process again.
At this point, could you attempt to generalise the C counterexample? If you replace all bits but one of C with "don't know" values and it still produces a counterexample, then you've halved the search space.

Hi kme: that's exactly the sort of technique we're investigating.
https://blog.regehr.org/archives/1636
Overview

- Once a SAP UI5 app has been deployed to the SAP Fiori (front-end) server, we need to do some configuration to make this app accessible in the SAP Fiori Launchpad.
- In this blog, we will go through the steps to configure any custom UI5 app for SAP Fiori Launchpad access.
- This blog is part of below parent blog:

Pre-requisites:

Steps of Configuration:

In SAP Fiori Launchpad, a SAP UI5 application gets accessed using the following flow of components:

[1] Create Semantic Object in SAP Fiori (front-end) server

- T-code: /n/UI2/SEMOBJ
- Notes:
  - The semantic object is what links a 'Business Tile' and a 'Target Mapping' within a 'Business Catalog'.
- Steps:
  - Go to the t-code -> change mode -> click on 'New Entries'
  - Enter the details like object name, description etc.
  - Click on the 'save' icon to save it.

[2] Create Launchpad Role in SAP Fiori (front-end) server

- T-code: LPD_CUST
- Notes:
  - We define Launchpad roles to link a SAP UI5 application with Tiles.
  - One Launchpad role can have multiple SAP UI5 application references, each differentiated with the help of an alias name.
  - A Launchpad role plus a specific alias combination is referred in the 'Target Mapping' of the Catalog configuration to link one SAP UI5 application.
- Steps:
  - Create New Launchpad Application:
    - Go to the t-code -> click on button 'New Launchpad'
    - Provide input as shown in below screen
    - Here put some namespace such as 'Z'; it is required while including this role in a TR (Transport Request)
    - Click the ok icon -> click 'Yes' in the next window.
    - In the next window, click on 'New Application' to create a new UI5 application link
  - end.
- Create New Application, which requires below details:
  - Link Text:
    - Title of the Launchpad Role for the Fiori app
    - Note: While accessing the app from Fiori Launchpad, this LPD text appears as the header title for each page in the new SAP Fiori HANA upgrade
  - Url:
    - This entry refers to the SAP UI5 application path which, post-deployment, resides under the t-code 'SICF' path
  - Application Alias:
    - One 'Launchpad Role' can have multiple UI5 application links; each should have a unique alias
    - The Launchpad role name together with its alias name identifies one application link
  - Additional Information:
    - Here enter the UI5 app's component name defined in Component.js
    - Syntax: Component=<ui5 app component name>
    - For e.g. Component=ZTest_App01
    - Below 'Component.js' screen can be referred where the component name is present
- The LPD role's application looks like below after saving the details
- Once saved we can see the role in t-code 'LPD_CUST' as in below screen
- end.
- Steps to include the Launchpad role in a TR:
  - We can include Launchpad roles in TRs for transporting them from one system to another
  - Pre-requisites: Two transport requests (TR) are required:
    - One customizing request (CSR)
    - One workbench request (WBR)
  - To include a Launchpad role in a transport request, follow the steps below:
    - Open the Launchpad Role from t-code 'LPD_CUST' -> go to menu 'Launchpad' -> click on 'Transport'
    - In the next window enter Package details
    - In the next window provide the Customizing Request
    - In the next window provide the Workbench Request
    - Once done below screen appears
  - end.
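For reference, a minimal Component.js might look like the sketch below (all names are hypothetical; the string passed to extend(), minus the trailing ".Component", is what the blog's LPD_CUST 'Additional Information' field expects as Component=ZTest_App01). This file only runs inside the UI5 runtime:

```javascript
// Hypothetical minimal Component.js for an app named ZTest_App01.
// The component name "ZTest_App01.Component" is what LPD_CUST's
// "Additional Information" field refers to: Component=ZTest_App01
sap.ui.define([
    "sap/ui/core/UIComponent"
], function (UIComponent) {
    "use strict";

    return UIComponent.extend("ZTest_App01.Component", {
        metadata: {
            // root view of the app; view name and type are assumptions
            rootView: {
                viewName: "ZTest_App01.view.App",
                type: "XML"
            }
        }
    });
});
```

The URL field of the same LPD_CUST entry then points at the app's deployed SICF path, so the Launchpad can load this component by name.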
[3] Create Business Catalog

- SAP Fiori Designer Url:
  - http://<host>:<port>/sap/bc/ui5_ui5/sap/arsrvc_upb_admn/main.html
- Steps:
  - Login to above 'SAP Fiori Designer Url'
  - To add a Catalog, in the 'Catalog' section click on the plus '+' icon
  - Enter Catalog Id and title in the pop-up window
  - Once the catalog gets added, we can see it in the left panel; next we need to add a 'Tile'
  - Here we can add a tile of below types:
    - Static
    - Dynamic
    - New Tile
  - Select the "App Launcher – Static" Tile
  - Enter Title, Subtitle, Semantic Object, Action in the Tile and save it.
  - Once the Tile gets added it can be seen as below:
  - Now click on the 'Target Mapping' icon -> click on status bar button 'Create Target Mapping'
  - Enter the same Semantic Object name and action as in the 'Tile'
  - And save it.
  - Thus the Tile and 'Target Mapping' are linked using 'Semantic Object' + 'Action'
  - And the 'SAP UI5 application' path is linked in the 'Target Mapping' via the 'Launchpad Role':
  - Catalog -> Tile -> 'Target Mapping' -> 'Launchpad Role' -> UI5 App

[4] Create Business Group

- SAP Fiori Designer Url:
  - http://<host>:<port>/sap/bc/ui5_ui5/sap/arsrvc_upb_admn/main.html
- Notes:
  - One 'Business Group' can have a single Tile or multiple Tiles from the same Catalog or from multiple Catalogs
  - With this 'Business Group', we control app/tile accessibility for users in SAP Fiori Launchpad
- Steps:
  - Login to above 'SAP Fiori Designer Url'
  - Create Business Group -> go to Group -> click on the '+' sign to add a group
  - Enter Group details
  - Once the Group gets added, we can assign multiple tiles from different catalogs.
- Here select our test catalog and add a tile as below
- Open the Catalog F4 help pop-up from the button and search/select the test catalog
- Once the catalog is selected, all tiles of this catalog become visible
- We add a tile by selecting the '+' icon of the Tile, which changes to an 'ok' icon post addition
- Here we can select multiple 'Tiles' from the same Catalog or from multiple Catalogs
- Thus once the group has one tile assigned, it appears as below
- end

[5] Create App's PFCG Role in SAP Fiori (front-end) server

- T-code: PFCG
- Once the 'Business Catalog' and 'Business Group' get created, their access needs to be provided to user-ids; for this we create a PFCG role specific to the UI5 application which can be assigned to user-ids
- For a customized app's accessibility on the Fiori Launchpad page, one user-id will be assigned two PFCG roles, which are:
  - One PFCG Role for Fiori Launchpad accessibility, which includes below default services access
    - Note: Post activation/registration of the above standard services, they get renamed with a 'Z' prefix
  - One PFCG Role for the SAP UI5 application, which includes:
    - Business Group
    - Business Catalog
    - OData service accessibility
- Steps:
- [A] Create PFCG Role for Fiori Launchpad accessibility
  - To create PFCG roles go to t-code 'PFCG' -> enter Role name -> click on button 'Single Role'
  - Enter description
  - Go to tab 'Menu' -> click on the arrow at the end of button 'Transaction' -> select 'Authorization Default'
  - Here we add below standard registered Fiori Launchpad services with parameters as shown in the screens:
    - These are required for accessibility of Fiori Launchpad for any user
  - Add R3TR IWSG
  - Add R3TR IWSV
  - Now add the user in tab 'USER', to which app access is to be given, for e.g.
'DILIPP', and perform user comparison
  - And generate the authorization profile in tab 'Authorization'
  - Once the above activities get completed our role looks like below:
- [B] Create PFCG Role for the SAP UI5 application
  - To create PFCG roles go to t-code 'PFCG' -> enter Role name -> click on button 'Single Role'
  - Enter description
  - Go to tab 'Menu' -> click on the arrow at the end of button 'Transaction' -> select 'SAP Fiori Tile Catalog'
  - Adding 'Business Catalog' access: In the next pop-up select the 'Business Catalog' which we have created
  - Adding 'Business Group' access:
    - Go to tab 'Menu' -> click on the arrow at the end of button 'SAP Fiori Tile Catalog' -> select 'SAP Fiori Tile Group'
    - Note: here the 'Transaction' button got replaced by 'SAP Fiori Tile Catalog' because the last selection was of a catalog
  - Adding custom 'OData Service' access:
    - If the UI5 app is using a custom OData service, then add R3TR IWSG & IWSV accessibility also
    - Go to tab 'Menu' -> click on the arrow at the end of button 'SAP Fiori Tile Group' -> select 'Authorization Default'
  - Now add the user in tab 'USER', to which app access is to be given, for e.g. 'DILIPP', and perform user comparison
  - And generate the authorization profile in tab 'Authorization'
  - Once the above activities get completed our role looks like below:
- [C] Role assignment to user-id
  - Suppose one user-id 'DILIPP' needs to access our test app; then we need to assign the PFCG role to this user-id
  - In the SAP Fiori server, go to t-code 'SU01' -> tab 'Roles' -> add the roles
  - Now login to SAP Fiori Launchpad using this user-id; we get access to the UI5 app tile.

Hi Dilip, I am trying to configure a custom UI5 app for the Fiori Launchpad and am getting the below error. Please find the screen shot below for the detailed error.

Dear Mantri, First confirm whether your custom app is running within Eclipse. For the UI5 app project's structure, you can refer below link: And as per your error screen, it seems that your UI5 app doesn't have
You can also ping me your Launchpad-Role screen & UI5 app structure.
Thanks & Regards, Dilip

Dear Dilip, Thanks for your prompt response. Actually I created my app in SAP Web IDE and am trying to deploy the application using an SE38 program on the ABAP server (/ui5/ui5/repository_load). In my application I have the Component.js file. But after doing a small change it is working. Regards, Shekar.

Dear Mantri, Good to hear your issue has been resolved. Thanks & Regards, Dilip

Hello Shekar, I am also getting the same error as you mentioned in the last screenshot. Could you please help me to resolve the same? What small changes are needed in Component.js? Regards, Archana

Hi Archana, May I know how you created your custom app and how you are deploying this app? Regards, Shekar.

Hello, I created my app in SAP Web IDE and am trying to deploy the application to the ABAP server using Web IDE only. Regards, Archana

Hello, Could you please share the screenshots of the SAP Web IDE folder and the error you are facing? Regards, Shekar. mantrishekar951@gmail.com

Hello, Please find my screenshots for the Web IDE project folder structure and the error screenshot. Regards, Archana

I have a similar problem, can you help me? The version of Fiori is 1.0, the destination uses a public IP, and I execute the project where I use the created destination.

Hi, Nice blog. Just a small doubt: Is it necessary to create a separate Semantic Object for every application, or can we use the same semantic object with different actions for different apps? Thanks in advance.

Dear Irfan, One 'Semantic Object' can be referred in two applications, but they should have unique action methods. For example, see the standard Buyer apps. Here, please note, the 'semantic-object' and 'action' combination should be unique, since it is used in 'Target Mapping'. Hope the above helps. Thanks & Regards, Dilip

Hello, I made two applications, one in SAP Web IDE and another in Eclipse, and neither of them works. Can you give the link to download your app? My syntax is a little different, I don't know if that's the reason ...
Dear Naoto, Sorry for the delayed response, I was too busy with my current project. I'll try to make a Google Drive link soon to a reference test app (which has been developed using Eclipse). Thanks & Regards, Dilip

I would be very grateful.

Hi, let's say I created one catalogue for all relevant apps for a certain scenario. Now I want to hide some apps (not just the tiles but also the target mappings, since they actually prevent the app from being started) from a certain user group. Do I have to create another catalogue with the allowed apps only? Can I control that via groups, or is there a direct connection between target mappings and PFCG roles? Tobias

Dear Tobias, First let's understand the relation between PFCG roles, Groups, Catalogs, Tiles, Target Mappings and the SAP UI5 app. Thus, about your query: I hope the above makes things clear for you. Thanks and Regards, Dilip

Nice blog. Thanks for sharing the knowledge.

Hi, First of all, thank you so much for your blog. It's pretty good and easy to understand. I have an issue opening my custom app in the Fiori Launchpad. Can you please help? Below are the details: I have created a custom Fiori app to display table data from a backend OData service. I also deployed my UI5 application into the frontend gateway Fiori system and am able to run it successfully from the SICF node test run (see attached). Later I created the Semantic Object, Launchpad role and provided these details in my Catalog as described in this blog and attached it to a group. The PFCG roles are created as suggested and assigned to my user id, but I am unable to open my custom app in the Fiori Launchpad (see attached). Also attached is my Component.js code for your review (see attached). Can you please help me to resolve the issue? Any pointers on this are much appreciated. Please let me know if you need any further details. Thank you. -Raghu

Hi, Surely there is an FLP configuration issue w.r.t.
your custom app; just re-verify each of the below things step-by-step. If the above does not help you find the root cause, then, while running the app, press 'F12', get the console log and try to understand it; you can also let us know. Thanks & Regards, Dilip

Hi Dilip, Thanks for your reply. I have verified all of the above and it looks good. I think I have an issue with my UI5 application itself after adding Component.js. Earlier I had deployed my UI5 application without Component.js and it's working fine from SICF: index.html -> zsupplier_dwnld.view.js. But in order to configure it in the Fiori Launchpad I added Component.js and changed the script in index.html to call Component.js instead of zsupplier_dwnld.view.js, and now I am getting an error when I do local testing in Eclipse: index.html -> Component.js -> zsupplier_dwnld.view.js

Error: SCRIPT5022: failed to load 'zsupplier/Component.js' from resources/zsupplier/Component.js: 404 - Resource could not be found!

I am posting my complete UI5 application below for your review. Can you try the same in your system quickly if possible, as I am getting the error in local testing itself?

index.html : I have changed the script in index.html to call Component.js; earlier it was directly calling zsupplier_dwnld.view using the below script and working fine from SICF.
Component.js :
zsupplier_dwnld.view.js :
zsupplier_dwnld.controller.js :

Do you see any issue in my UI5 application, or are any script changes needed? It seems I am able to navigate well from index.html to Component.js but there is some issue between Component.js and zsupplier_dwnld.view. Can you please advise? Do we need to use only XML views (not JS views) to make it work from the Fiori Launchpad? Is there any restriction like that? It may be a stupid question, but I want to know if there are any such restrictions. Thank you. -Raghu

F12 error screen shot of my UI5 application explained in the above comment.
In the blog, its like something below: However, in you project structure, you are doing something like: Hope, above helps you understand the issue. Thanks & Regards, Dilip Hi Dilip, Thanks very much for your reply. I have done all suggested changes except converting my zsupplier_dwlnd.js to xml view which I am not sure how to do it but I still have the same issue. Please see below screen shots. Component.js: zsupplier_dwnlnd.view.js: zsupplier_dwnld.controller.js: No changes done on index.html as suggested. Error message: I hope my UI5 application works If I can make the navigation from component.js to zsupplier_dwnlnd.js successfully. I suspect there’s would be some basic script issue causing loading of component.js. Can you please suggest where would it went wrong with your expertise as I’m still facing the same issue. Can you please guide me to convert my zsupplier_dwlnd.js view to xml view and also what other changes need to be done to make it work. Thanks again. -Raghu Hi, Please have a look on below blog from me, you will get your answers surly: thanks & Regards, Dilip Hi Dilip, I followed these instructions to what i thought was “to the letter”. Unfortunately, when i click on the tile in the launchpad, i get an empty shell: In eclipse, the app is working as designed. Also, i get no errormessages at all. I suspect the LPD_Cust to be the root, so could you elaborate on which alias has to match what and what exactly has to be put in the aditional infos? or maybe i’m wrong somewhere else, in which case i will happily provide more info. Thanks in advance Dominic Hi Dominic, If you are doubting LPD_CUST config, then I suggest you to try below things first: 2nd About LPD_CUST and your query, plz refer following comments: Thanks & Regards, Dilip Hi Dilip, first of all, many thanks for the rapid response. both answers did the job. choosing URL made it possible for everyone with the link (for ex. passsed to collegue via email) to use the app. 
i put the application alias in all uppercase, as found in LPD_CUST. it also appears that "additional information" is far from ..."oh and by the way", but a vital part of the orchestration. thanks again, Dominic Happy to hear that your side issue gets resolved....... Thanks & regards, Dilip HI Dilip, I would like talk about Fiori launch pad configuration on EHP 7 only for PM on Fiori. I am trying to find ur contact but it is very diffcult. Could you pls call me once @ 8660036709 or please text me so that i can call u for 5 mins. Regards Suresh K Hi Suresh, You please write your queries here only, I'll try to address it. Thanks & Regards, Dilip Hi, I added my app as described, however, there is an empty screen. What did I skip? Hi Fedor Shestov, Thanks & Regards, Dilip Thank you for your response. This app was created automatically based on the template "List Report Application" with using Odata Service + CDS + annotations. This app works fine in SAP Web IDE. Nice blog. It helps a lot Jagtar....nice to hear that.... Regards, Dilip Nice blog Dileep, really helpful for people who are new to Fiori. I am trying to Launch the AO workbooks which are saved as Document to ABAP system (SAP NetWeaver Platform S4 system). Can we somehow call the "Object Type=Document" from Fiori Launchpad using Target Mapping? I am using following URL in the tile currently to achieve this but I believe there must be a way to create a target mapping for this as well. Any thoughts? Hi Rajkumar, I think, you can try Target Mapping’s ‘Application Type’ option as “URL” And if you succeed, please let me know, this is a new kind of requirement for me as well. Generally I prefer use of oDataService to “store/read” SAP-file (pdf/excel) via SAP-UI5-App. Regards, Dilip Very nice blog. And thanks for your prompt responses., If app’s tile is not visible in FLP, the root cause is, that, respective user-id do not have correct PFCG role access. This PFCG role should have access to app’s B. 
Catalog/Group and ODataSrv. Please re-check w.r.t. above blog’s reference. Thanks & Regards, Dilip. Why ? thanks you sammon Dear Sammon, Please remove empty space, you can not add space in ID it should be ZTest_Catalog not like below (below yellow highlighted has empty space) : Thanks & Regards, Dilip Hi Dilip , Am following your blog step by step.Does the above configuration all steps required if i am using Local Webide with Connection to Fiori/ Backend established ? Thanks. Regds, Somnath. Hi Somnath, Sorry for very late reply, till now, you may have already addressed the requirement However, this blog’s configuration steps are required when you plan to deploy your application in Fiori Launchpad. For Local testing (e.g. Eclipse or WebIDE), above is mandatory. Thanks & Regards, Dilip
https://blogs.sap.com/2017/11/19/sap-fiori-ui5-app-configuration-in-sap-fiori-launchpad/
Hi,

class Base
  def initialize
    @test = 123
  end
end

class Child < Base
  def output
    print @test
  end
  public :output
end

child = Child.new
child.output

Prints out 123. If attribute "test" is private, why am I able to access it in the Child class? If attribute "test" is not private, what the hell am I missing? How do I make it really private? I'm really new to this Ruby stuff so please be polite.

Ruby has no private/public instance variables; they are all protected. You cannot access them from outside the class, not because access is restricted, but because there is no access. Everything goes through methods that return the values of instance variables and set them. So you can't make instance variables private. Are you coming from C#?

All methods are public by default so you don't have to:

  public :output

Most people use:

class Example
  def one
    # this one is public
    # code...
  end

private

  def two
    # this is private
  end

  def three
    # private
  end

protected

  def four
    # protected
  end

  def five
    # protected
  end

public

  def six
    # public again
  end
end

So every method after protected is protected until the ruby interpreter hits another modifier.

Thanks for the reply. No private attributes? All protected. Check. I don't really know if this is a good thing in general, but I can't think of any counterexample at the moment, so maybe yes. OK, but now look (copy & paste from):

Protected access is used when objects need to access the internal state of other objects of the same class. For example, we may want to allow the individual Account objects to compare their raw balances, but may want to hide those balances from the rest of the world (perhaps because we present them in a different form).

class Account
  attr_reader :balance   # accessor method 'balance'
  protected :balance     # and make it protected

  def greaterBalanceThan(other)
    return @balance > other.balance
  end
end

Because the attribute balance is protected, it's available only within Account objects.
If what you say is true, it means that this code (attr_reader, protected mumbo jumbo) is simply not necessary, because I can (as I demonstrated before) access the attribute directly. I think this example in the docs is really misleading. No, I'm not coming from C#. I am programming mostly in PHP and Java. Thank god no J2EE.

You've got to remember that attr_reader doesn't create @balance; rather it is more a shortcut for this:

class Account
  def balance
    @balance
  end
end

So when you do protected :balance, it is exactly the same thing which happens when you do protected :some_other_method. Though this might be useful:

class Account
  def balance
    @balance / 8
  end

  def balance= (val)
    @balance = val * 8
  end

  protected :balance, :balance=   # and make it protected

  def greaterBalanceThan(other)
    return balance > other.balance   # uses self.balance instead of @balance
  end
end

Douglas

DOUGBTX: Yes, yes, I know it now. I am pointing a finger at the online book. Imagine me (a Ruby wanna-learn with some OO experience) reading this:

The Song objects we've created so far have an internal state (such as the song title and artist). That state is private to those objects: no other object can access an object's instance variables. In general, this is a Good Thing. It means that the object is solely responsible for maintaining its own consistency.

Then comes the example of how to make these "private" attributes protected. Then I try to access them directly (by mistake) and WOW, it works! Really misleading examples. Why on earth is it not written that attributes are protected by default? One sentence. Or maybe I am stupid and just can't find it.

Hmm, yes, there is some fuzziness there. I have to say, it totally passed me by. I read that book ages ago, and whenever I see @foo, I think "private", but when I think "private" I'm really thinking "protected". I've never really seen the point of private.
And besides, there is bar.instance_variables.

Well, it might be so natural for Ruby programmers that all attributes act like this, and besides, attributes are not exactly "protected". There is just no way of accessing them from the outside. It is a different scope. You can only send messages (call methods) to objects.

Well, no instance of another object can access those variables. But a subclass sure can. So yes, the variables are only protected, but the wording there isn't incorrect.

>> class MyPrivates
>>   def initialize
>>     @apple = "it"
>>     @google = "search"
>>   end
>> end
>> my = MyPrivates.new
=> #<MyPrivates:0x2cfa470 @apple="it", @google="search">
>> my.instance_variable_get(:@apple)
=> "it"
>> my.instance_variable_get(:@google)
=> "search"
>> my.instance_variables.each { |name| puts "#{name} => #{my.instance_variable_get(name)}" }
@apple => it
@google => search

Interesting choice of class name.

Nothing wrong with a bit of childish humor from time to time... ;) Anyway, if you really wanted to make your instance variables private, could you not just do:

class YourPrivates
  private :instance_variables
end

No, because attributes can be accessed directly from a subclass. You don't have to make any setters/getters.

I meant from outside of the class. See the example above on how to access instance variables from outside of the class. By making the instance_variables() method private, you should disable access to them that way.

There isn't a way to make things entirely private or protected in Ruby right now. No matter what you do to try to protect something, you can always use send or __send__ to access private or protected vars or methods. Check these commands for more info:

$ ri send
$ ri __send__

Cheers--Ezra

Here's a quote I just came across:

I started programming by learning C++. I got up to the part about public, private, protected, friend, etc... got very annoyed by the unnecessary bureaucracy of it all and dropped the language... If I can't trust the programmers around me not to muck around in my guts without good reason, I can't trust them at all. And if they're willing to perform that bad practice, they'll probably do more anyway. It's not worth worrying about.

source

It's a bit of a moot point, considering we have open classes.

>> class Foo
>>   def initialize
>>     @foo = "bar"
>>   end
>> end
=> nil
>> f = Foo.new
=> #<Foo:0x2cfbfb8 @foo="bar">
>> def f.foo
>>   @foo
>> end
=> nil
>> f.foo
=> "bar"

Oh, and good to know someone actually reads the code I post.

DOUG: There is absolutely some really rational thinking in that citation. I've never thought of access control this way. On the other hand, I think in some cases it can be better if you declare some variables as protected (or whatever that means in Ruby). Encapsulation is a good thing in general. You don't need to know the internals of classes if you are using them; it would only be misleading. We all dream about nice APIs. So why pollute them with unnecessary 'internal' attributes? That's what I'm thinking.

And if you take "protected" to mean "you've got to put in noticeable effort when you are outside," and "public" to mean "always easy to get at," then I think that covers it.
https://www.sitepoint.com/community/t/how-to-make-attributes-in-ruby-private-inaccessible/2312
Hi, I know my following question may seem rather stupid, but I'm having some problems understanding the static concept, especially in the following code, and I'd appreciate some help.

4: using System;
5:
6: public class myClass
7: {
8:     static public int sctr = 0;
9:     public int ctr = 0;
10:
11:     public void routine()
12:     {
13:         Console.WriteLine("In the routine - ctr = {0} / sctr = {1}\n",
14:             ctr, sctr );
15:     }
16:
17:     public myClass()
18:     {
19:         ctr++;
20:         sctr++;
21:         Console.WriteLine("In Constructor- ctr = {0} / sctr = {1}\n",
22:             ctr, sctr );
23:     }
24: }
25:
26: class TestApp
27: {
28:     public static void Main()
29:     {
30:         Console.WriteLine("Start of Main method...");
31:
32:         Console.WriteLine("Creating first object...");
33:         myClass first = new myClass();
34:         Console.WriteLine("Creating second object...");
35:         myClass second = new myClass();

The output is as follows:

Start of Main method...
Creating first object...
In Constructor- ctr = 1 / sctr = 1
Creating second object...
In Constructor- ctr = 1 / sctr = 2

From what I have understood, sctr is static and is therefore initialized only at the beginning of the program and never again, hence why it keeps incrementing. ctr, on the other hand, is not static, so line 9 sets it to 0 each time a myClass object is created.

If the above is right, could someone please clarify why there is a static in line 28, as the description I found in my C# book wasn't particularly useful.
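The behaviour in that output can be reproduced with a minimal sketch (written here in Java, whose static semantics match C#'s; the class and field names are mine, not from the thread). A static field belongs to the class and is shared by every instance, while an instance field is re-initialized for each new object. Main is declared static for the same reason: the runtime has to be able to call it before any object of the class exists.

```java
public class StaticDemo {
    static int sctr = 0; // one copy, shared by every instance of the class
    int ctr = 0;         // each new object gets its own copy, starting at 0

    StaticDemo() {
        ctr++;   // only ever reaches 1, because each object starts fresh
        sctr++;  // keeps counting across all objects
    }

    public static void main(String[] args) {
        StaticDemo first = new StaticDemo();
        StaticDemo second = new StaticDemo();
        System.out.println("first.ctr=" + first.ctr
                + " second.ctr=" + second.ctr
                + " sctr=" + sctr); // prints first.ctr=1 second.ctr=1 sctr=2
    }
}
```

Note that main can read sctr directly but needs an instance (first.ctr) to read the instance field, which is exactly the static/instance split the poster is asking about.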
https://www.daniweb.com/programming/software-development/threads/198484/statics-and-alot-of-confusion
Graphical User Interfaces Graphical User Interfaces  ... to the display area with a layout manager. As the Java programming language, the AWT... be preserved. AWT Basics The superclass of all graphical user interface objects Implementation code inside interfaces Implementation code inside interfaces 2001-01-25 The Java Specialists' Newsletter [Issue 006] - Implementation code inside interfaces Author: Dr. Heinz..., with more lines of pure Java code than the JDK 1.3, and actually paying me Java AWT Package Example Java AWT Package Example  ... provided are supported with tested code. Graphical User Interfaces... in the Java AWT package for developing user interface for your program. AWT AWT code for popUpmenu - Swing AWT for more information. code for popUpmenu Respected Sir/Madam, I am writing a program in JAVA/AWT.My requirement is, a Form consists of a "TextBox" and a "Button interfaces,exceptions,threads interfaces,exceptions,threads SIR,IAM JAVA BEGINER,I WANT KNOW THE COMPLETE CONEPTS OF INTERFACES,EXCEPTIONS,THREADS Interface... class. In java, multiple inheritance is achieved by using the interface java interfaces java interfaces sample code for multiple interface interface A { void test(int i); } interface B { void test(String s); } public class MultipleInterface implements A, B { public void test(int i awt - Swing AWT market chart this code made using "AWT" . in this chart one textbox when user..., For solving the problem visit to : Thanks java awt package tutorial Java AWT Package In Java, Abstract Window Toolkit(AWT) is a platform independent widget toolkit for windowing, graphics, and user-interface. As the Java.... The implementation of the user interface elements provided by the AWT is done using Java Dialogs - Swing AWT /springlayout.html... visit the following links: Dialogs a) I wish to design a frame whose layout mimics What is AWT in java What is AWT in java  ... toolkits. 
You can use the AWT package to develop user interface objects like.../api/java/awt/package-summary.html Interfaces - Java Interview Questions Interfaces two interfaces having same method names and these two interfaces are implemented by a single class. so now i would like to generate different code of same methods of different interfaces. eg: ---- interface i1 code - Swing AWT code i want example problem for menubar in swings Hi Friend, Please visit the following links: provide code - Swing AWT ); } } ------------------------------------- visit for more information. code Dear frnds please provide code for two player CHESS GAME.....using swings,awt concepts Hi friend, import java.awt. Code - Swing AWT Java Code Write a Program using Swings to Display JFileChooser that Display the Naem of Selected File and Also opens that File code - Swing AWT java code i want to open a new dialog box after clicking "upload" button, it should have a text field, browse button to browse the file from directory Hi Friend, Try the following code: 1)OpenDialog.java import java code - Swing AWT java code Hello .anyone can plz tell me code for this .First create a button wen it is clicked its enters into another window asking for Name .After... Hi Friend, Try the following code: import java.io.*; import Java Code - Swing AWT Java Code How to Display a Save Dialog Box using JFileChooser and Save the loaded Image from Panel in any Location. Hi Friend, Try the following code: import java.io.*; import java.awt.*; import java.util. Summary: Classes, Interfaces Java: Summary: Classes, Interfaces Packages package package-name; Class Definition A class definition prototype: visibility class class-name... and its fields (members). Example: This defines a structure node, then declares x java awt calender java awt calender java awt code for calender to include beside a textfield interfaces & interfaces. 
An interface declaration introduces a new reference type whose members are classes, interfaces, constants and abstract methods... to directly implement one or more interfaces, meaning that any instance of the class AWT Tutorials AWT Tutorials How can i create multiple labels using AWT???? Java Applet Example multiple labels 1)AppletExample.java: import...;BODY> <APPLET ALIGN="CENTER" CODE="AppletExample.class" width = "260" height GUI - Swing vs. AWT Java: GUI - Swing vs. AWT The original graphical user interface (GUI) for Java was called the Abstract Windowing Toolkit (AWT). Performance... in almost all cases; an AWT continues to work in Java. Must choose between Swing java swings - Swing AWT write the code for bar charts using java swings. Hi friend, I am.... swings I am doing a project for my company. I need a to show Need source code - Swing AWT Need source code Hai, In java swing, How can upload and retrieve... image in database, try the following code: import java.sql.*; import... = "com.mysql.jdbc.Driver"; String user = "root"; String pass = "root"; FileInputStream fis java - Swing AWT Java Implementing Swing with Servlet How can i implement the swing with servlet in Java? Can anyone give an Example?? Implementing Swing with Servlet Example and source Code Servlet SwingToServlet Marker interfaces in java Marker interfaces in java how marker interfaces work Getting list of Local Interfaces on a machine is the Code of the Example : InterfaceLister.java import ... Getting list of Local Interfaces on a machine... to find out the total no of list of local interfaces available on a machine. Here java swing - Swing AWT : Thanks...java swing how to add image in JPanel in Swing? Hi Friend, Try the following code: import java.awt.*; import java.awt.image. 
- Swing AWT /java/example/java/swing/AddRemoveItemFromCombo.shtml Thanks Hi Friend, Try the following code: import java.sql.*; import java.awt.*; import awt jdbc awt jdbc programm in java to accept the details of doctor (dno,dname,salary)user & insert it into the database(use prerparedstatement class&awt Collection Interfaces Collection Interfaces  ... of several interfaces, and classes that implement those interfaces, contained within... of objects. Different interfaces describe different types of functionalities JAVA - Swing AWT password generation by using swing in java . Hi Friend, Try the following code: import java.io.*; import java.util.*; import java.awt.*; import... to be generated.(5-15): "); JLabel l3=new JLabel("Type of password User wants java - Swing AWT , Try the following code: 1)LoginDemo.java import javax.swing.*; import..."; String driver = "com.mysql.jdbc.Driver"; String user = "root"; String pass...(driver); con = DriverManager.getConnection(url+db, user, pass Create a Container in Java awt Create a Container in Java awt  ... of the BorderLayout class. Here is the code of the program : ...; } } Download this example Overview of Networking through JAVA,Getting list of Local Interfaces on a machine is the Code of the Example : InterfaceLister.java import ... Getting list of Local Interfaces on a machine... to find out the total no of list of local interfaces available on a machine. Here Help Required - Swing AWT the password by searching this example's\n" + "source code... JFrame("password example in java"); frame.setDefaultCloseOperation...(); } }); } } ------------------------------- Read for more information. Authentication of password - Swing AWT information. Thanks..., I am sending running code, please implement this code. import... 
String: " + inputStr); System.out.println("Encrypt code: " + str1 java interfaces implementation java interfaces implementation hai I have defined one inter face like Maths taking methods like add(), sub(), mul()in interface I take different implementation classes for add() method and sub()and mul() when i trying JLabel font and color component in your Swing application. The most user-friendly interfaces... Java: JLabel font and color  ... on that. For example JLabel title = new JLabel("Want a Raise interfaces - Java Beginners , super class constructor runs.what about interfaces? Hi Friend... is called the base class or the parent class. To derive a class in java the keyword... to : Thanks interfaces - Java Beginners interfaces examples on interfaces Hi Friend, Please visit the following links: JList - Swing AWT is the method for that? You kindly explain with an example. Expecting solution as early... this will do.. Hi friend, i am sending working code...(); //User didn't type in a unique name... if (name.equals graphical user interface - Java Beginners graphical user interface how do i write a code for a jmenu bar, File with items like open, save , save as. that lead to another interface? .../java/example/java/swing/SwingMenu.shtml Thanks awt in java awt in java using awt in java gui programming how to false the maximization property of a frame EJB Interfaces EJB Interfaces Interface in java means a group of related methods with empty bodies. EJB have generally 4 interfaces. These are as follows 1)Remote interface:- Remote interface User Interface Toolkits by it is light weight. These elements can be incorporate in user interfaces. Click...User Interface Toolkits User Interface Toolkits / Libraries are given below... For creating application specifically for handicapped persons Java Accessibility Java AWT Java AWT What interface is extended by AWT event listeners Java: Interfaces Java: Interfaces An interface is a list of methods that must be defined... 
does, but abstract classes do allow static method definitions, and interfaces... different interfaces. If a class doesn't define all methods of the interfaces java to excel connectivity - Swing AWT java to excel connectivity give me source code to stored the value entered by user in a text field(such as name, age, sex, add. of the user etc... Sources(ODBC) 2. Open User DSN tab 3. Add a user DSN 4. Select Microsoft Swings problem - Swing AWT Java Swings problem Sir, I am facing a problem in JSplitPane. I want... pane. For example, if the split pane is of dimension (0,0,100, 400), then divider... me a complete source code to achieve this. Please Getting the implemented Interfaces Getting the implemented Interfaces  ... (that included in the program) by using the getInterfaces() method. Here is an example... retrieve Interfaces (included in the program) and store their references tree - Swing AWT tree example of tree java program Hi Friend, Please visit the following code: java-awt - Java Beginners java-awt how to include picture stored on my machine to a java frame... am sending code. import javax.swing.*; import java.awt.*; import... information, Thanks Event handling in Java AWT Event handling in Java AWT  ... events in java awt. Here, this is done through the java.awt.*; package of java. Events are the integral part of the java platform. You can see the concepts Java AWT Components components available in the Java AWT package for developing user interface for your program. Following some components of Java AWT are explained : Labels... Java AWT Components   Java - Swing AWT Java Hi friend,read for more information, Java AWT Package Example tetris game code - Swing AWT tetris game code To develop a JAVA puzzle game which is a "variation" of the Tetris game awt list item* - Swing AWT information. 
Thanks...awt list item* how do i make an item inside my listitem...); choice.add("Java "); choice.add("Jsp"); choice.add("Servlets Example Code - Java Beginners Example Code I want simple Scanner Class Example in Java and WrapperClass Example. What is the Purpose of Wrapper Class and Scanner Class . when i compile the Scanner Class Example the error occur : Can not Resolve symbol java - Swing AWT java how to use JTray in java give the answer with demonstration or example please query - Swing AWT java swing awt thread query Hi, I am just looking for a simple example of Java Swing get User Home Java get User Home In this section, you will study how to get the user home. We are providing you an example which will obtain the home directory by using JSP User-Agent illustrate an example from 'JSP User-Agent'. To understand and elaborate the example we... of User Agent. Here is the code of userAgent.jsp <%@ page... JSP User-Agent   slider - Swing AWT :// Thanks... as possible...... ??????? Hi Friend, Try the following code... Example"); Container content = frame.getContentPane(); JSlider slider,eclipse - Swing AWT java,eclipse Hi, I have made on program SimpleCalculator on eclipse 3.4.0.But while running the code I am getting the problem for Class File Editor... for NullPointerException.class.You can change the sorce attachment.and then some code as below java - Swing AWT selected image on frma or panel....so plz give me some code/sugession for upload image in JAVA SWING.... 
Hi Friend, Try the following code...java Hello Sir/Mam, I am doing my java mini AWT Java write a servlet program to create a form which accept user information and find the number of visit to the page Use this concept, to find how many times this page is accessed by users ublic class java swing - Swing AWT java swing how to save data in sql 2005 while insert in textfield Hi Friend, Try the following code: import java.sql.*; import javax.swing.*; import javax.swing.border.*; import java.awt.*; import java - Swing AWT java how can i link up these two interfaces ie CurrentAccount and Templates.so that if i click submit,am connected to the next interface.thanks. import java.awt.FlowLayout; // specifies how components are arranged DrawingCircle - Swing AWT : Thanks...DrawingCircle I want to draw circle which dynamically accepts radius, X cordinates & Y cordinates from the user through a dialouge box. and draws hi - Swing AWT information, visit the following link: Thanks... Friend, Try the following code: import java.awt.*; import javax.swing. Java swings - Swing AWT Java swings i have the following class .In that class i has two... it as it is in the panel1 and i want to drag a copy of it into panel2.Please send code after..., Try the following code: import java.awt.*; import javax.swing. Java question - Swing AWT Java question I want to create two JTable in a frame. The data... columns-Item_code,Item_name,Item_Price. When I click on one of the row in first... the following code: import javax.swing.*; import java.awt.*; import JAVA AWT BASE PROJECT JAVA AWT BASE PROJECT suggest meaningful java AWT-base project Java - Swing AWT Java I have write a program to capture images from camera and scanners,I got image in panel but how should i save this image through save dialogbox at any location ...plz help Hi Friend, Try the following code Java API of code. 
Java Application Programming Interface is a library of classes, interfaces with their methods, constructors that come with JDK. Java API included... has to use these code through Java API..In java most task are done by API java - Swing AWT java Hi, I override the mouseClicked interface in my code and get the X and Y coordinates of the event using e.getX() and e.getY(). Is there a way to obtain the text at that location? Hi Friend, Please clarify Line Drawing - Swing AWT ) { System.out.println("Line draw example using java Swing"); JFrame frame = new...Line Drawing How to Draw Line using Java Swings in Graph chart,by giving x & Y axis values Hi friend, i am sending code of draw line java - Swing AWT java what will be the code for handling button event in swing? Hi Friend, Try the following code: import java.awt.*; import javax.swing.*; import java.awt.event.*; class ButtonEvent extends JFrame Ask Questions? If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for. Ask your questions, our development team will try to give answers to your questions.
http://www.roseindia.net/tutorialhelp/comment/82875
Created attachment 146665 [details] MulticastTest.java

Previously, I reported some error messages produced by Jenkins on startup. I looked at the Jenkins source code and isolated the problem. I wrote this simple testcase (see attached MulticastTest.java), which I am also including inline:

================================================================================
/*
 * To build this test,
 * (1) Make sure that the OpenJDK is installed from ports:
 *
 *     pkg install openjdk
 *
 * (2) Rename this file to: MulticastTest.java
 * (3) Build it:
 *
 *     javac MulticastTest.java
 *
 * (4) Run it:
 *
 *     java MulticastTest
 *
 */
import java.net.InetAddress;
import java.net.MulticastSocket;

class MulticastTest {
    public static void main(String[] args) {
        try {
            int PORT = Integer.getInteger("hudson.udp", 33848);
            InetAddress MULTICAST = InetAddress.getByAddress(
                new byte[]{(byte)239, (byte)77, (byte)124, (byte)213});
            MulticastSocket mcs = new MulticastSocket(PORT);
            mcs.joinGroup(MULTICAST);
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(-1);
        }
    }
}
================================================================================

If I run this testcase, I get the same error as what I reported earlier with Jenkins:

java.net.SocketException: Invalid argument
        at java.net.PlainDatagramSocketImpl.join(Native Method)
        at java.net.AbstractPlainDatagramSocketImpl.join(AbstractPlainDatagramSocketImpl.java:178)
        at java.net.MulticastSocket.joinGroup(MulticastSocket.java:319)
        at MulticastTest.main(MulticastTest.java:31)

If I run:

    ktrace java MulticastTest

I see that the error occurs here:

 13253 java     CALL  setsockopt(0x4,0x29,0x1b,0x7fffffbfd7dc,0x4)
 13253 java     RET   setsockopt 0
 13253 java     CALL  setsockopt(0x4,SOL_SOCKET,SO_BROADCAST,0x7fffffbfd7d8,0x4)
 13253 java     RET   setsockopt 0
 13253 java     CALL  getsockopt(0x4,SOL_SOCKET,SO_TYPE,0x7fffffbfd77c,0x7fffffbfd778)
 13253 java     RET   getsockopt 0
 13253 java     CALL  setsockopt(0x4,SOL_SOCKET,SO_REUSEPORT,0x7fffffbfd7e0,0x4)
 13253 java     RET   setsockopt 0
 13253 java     CALL  setsockopt(0x4,SOL_SOCKET,SO_REUSEADDR,0x7fffffbfd7e0,0x4)
 13253 java     RET   setsockopt 0

This looks like a bug in the FreeBSD networking code for multicast, or a bug in the FreeBSD-specific code in the OpenJDK. This Java code works under Linux, Solaris, Windows, etc., so it would be good to fix this problem on FreeBSD. Can a networking person help me with this?

I did some more analysis and found that this call is happening inside the mcast_join_leave() function inside the JDK here:

From looking at the FreeBSD header files:

    IPPROTO_IPV6 = 41 (0x29)
    IPV6_MULTICAST_IF = 9 (0x9)
    IPV6_JOIN_GROUP = 12 (0xc)

Inside the mcast_join_leave() function, what is basically happening is:

    setsockopt(0x4, IPPROTO_IPV6, IPV6_MULTICAST_IF, ...)
    getsockopt(0x4, IPPROTO_IPV6, IPV6_MULTICAST_IF, ...)
    setsockopt(0x4, IPPROTO_IPV6, IPV6_JOIN_GROUP, ...)

The second setsockopt() is returning EINVAL.

I ran the same test program under truss with:

    truss java MulticastTest

and found this:

socket(PF_INET6,SOCK_DGRAM,0)                    = 4 (0x4)
setsockopt(0x4,0x29,0x1b,0x7fffffbfd7dc,0x4,0x0) = 0 (0x0)
setsockopt(0x4,0xffff,0x20,0x7fffffbfd7d8,0x4,0x0) = 0 (0x0)
getsockopt(0x4,0xffff,0x1008,0x7fffffbfd77c,0x7fffffbfd778,0x83f5292d8) = 0 (0x0)
setsockopt(0x4,0xffff,0x200,0x7fffffbfd7e0,0x4,0x83f5292d8) = 0 (0x0)
setsockopt(0x4,0xffff,0x4,0x7fffffbfd7e0,0x4,0x83f5292d8) = 0 (0x0)
bind(4,{ AF_INET6 [108c:bd00:800:0:4b41:9700:100:0]:36096 },28) = 0 (0x0)
setsockopt(0x4,0x29,0x9,0x7fffffbfd7f4,0x4,0x83f54efd0) = 0 (0x0)
getsockopt(0x4,0x29,0x9,0x7fffffbfd8ac,0x7fffffbfd864,0x83f54c798) = 0 (0x0)
setsockopt(0x4,0x29,0xc,0x7fffffbfd8c0,0x14,0x83f54c798) ERR#22 'Invalid argument'

I tracked this down some more. Inside the JDK, there is this code in

===============================================================================
/*
 * IPv6 join. If it's an IPv4 multicast group then we use an IPv4-mapped
 * address.
 */
#ifdef AF_INET6
    {
        struct ipv6_mreq mname6;
        jbyteArray ipaddress;
        jbyte caddr[16];
        jint family;
        jint address;
        family = (*env)->GetIntField(env, iaObj, ia_familyID) == IPv4 ? AF_INET : AF_INET6;
        if (family == AF_INET) { /* will convert to IPv4-mapped address */
            memset((char *) caddr, 0, 16);
            address = (*env)->GetIntField(env, iaObj, ia_addressID);
            caddr[10] = 0xff;
            caddr[11] = 0xff;
            caddr[12] = ((address >> 24) & 0xff);
            caddr[13] = ((address >> 16) & 0xff);
            caddr[14] = ((address >> 8) & 0xff);
            caddr[15] = (address & 0xff);
===============================================================================

I can confirm that the address created by this code looks something like:

    0 0 0 0 0 0 0 0 0 0 ff ff ef 4d 7c d5

In FreeBSD, in src/sys/netinet6/in6_mcast.c, inside in6p_join_group(), there is this:

    if (!IN6_IS_ADDR_MULTICAST(&gsa->sin6.sin6_addr))
        return (EINVAL);

Since IN6_IS_ADDR_MULTICAST() only checks whether the first octet is 0xff, that is what is returning the EINVAL.

So the JDK is creating an IPv4 multicast address mapped inside an IPv6 address. The FreeBSD kernel code is rejecting this because it is not a valid IPv6 multicast address. I'm not sure if it is better to fix this in the kernel or the JDK.

Don't you really want an IPv4 socket here anyway? It seems rather convoluted to create an IPv6 socket so you can listen for IPv4 multicast. My guess is that the in6_mcast code doesn't handle IPv4 multicast groups, but bms@ might know.

I'm looking at this from the perspective that 3rd-party multicast code written in Java "just works" under Linux and Solaris, but fails under FreeBSD. Sure, using an IPv4 socket from the beginning would be the way to go, but now that would require pushing patches upstream to all Java software using multicast, in order to accommodate FreeBSD. It's possible, but not practical.
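The two sides of the failure can be modelled together in plain Java (this is a sketch of the analysis above, not the JDK's actual native code path; the class and method names are mine): the JDK builds the IPv4-mapped byte layout shown, and the kernel macro only ever inspects byte 0.

```java
public class MappedMulticast {
    // Build the IPv4-mapped IPv6 form of an IPv4 address, mirroring the JDK's C code
    static byte[] mapV4(int address) {
        byte[] caddr = new byte[16];       // bytes 0..9 stay zero
        caddr[10] = (byte) 0xff;
        caddr[11] = (byte) 0xff;
        caddr[12] = (byte) ((address >> 24) & 0xff);
        caddr[13] = (byte) ((address >> 16) & 0xff);
        caddr[14] = (byte) ((address >> 8) & 0xff);
        caddr[15] = (byte) (address & 0xff);
        return caddr;
    }

    // FreeBSD's IN6_IS_ADDR_MULTICAST only inspects the first octet
    static boolean in6IsAddrMulticast(byte[] a) {
        return (a[0] & 0xff) == 0xff;
    }

    public static void main(String[] args) {
        int group = (239 << 24) | (77 << 16) | (124 << 8) | 213; // 239.77.124.213
        byte[] mapped = mapV4(group);
        // First octet of the mapped form is 0x00, so the kernel's test fails -> EINVAL
        System.out.println(in6IsAddrMulticast(mapped)); // prints false
    }
}
```

This makes the mismatch concrete: the mapped form is a perfectly good IPv4 multicast group, but by the only criterion in6p_join_group() applies, it is not an IPv6 multicast address at all.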
See comment from Andrey, If that could be made to work, then no changes to upstream code would be required.

In src/sys/netinet6, I see that there is usage of the IN6_IS_ADDR_V4MAPPED() macro in other places in the code, like in udp6 and sctp6, so v4-mapped addresses are supported for other things.

It seems to be a non-trivial amount of work. From bms@ on IRC:

<quote> There's no hard and fast reasons why it couldn't be done. The code as it stands will reject that as being an API mixup (you want v4 memberships, use the v4 APIs). The tension points are the Layer 4 ingress filtering for SSM, and actually calling Layer 2 in the right way. The nasty thing about IP6-mapped is that you need to track the memberships in v6 terms, but hand-off all the work to the v4 routines to do the right thing. The easiest way to go about doing it is to deal with the ASM case first, and just punch a hole in ingress filtering if someone tries to use SSM (which is what the stack has to do anyway). I'm not going to stick around to see what happens, though. ;-) </quote>

Not a solution, but does it work as a workaround to add this option on the commandline to java? -Djava.net.preferIPv4Stack=true

Ronald, thanks for the tip.

    java -Djava.net.preferIPv4Stack=true MulticastTest

seems to work around the problem. I would still like to see FreeBSD fixed so that this workaround is not required. Even though it is a lot of work, I would like to see Java on FreeBSD behave out of the box, "just like Linux", without the FreeBSD community needing to push lots of patches upstream to different Java software authors. I want there to be less motivation for people to migrate from FreeBSD to Linux if they are deploying Java applications. That's why I've spent the time to analyze the problem and report my findings in this bug report. I find this audit trail quite interesting.
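Besides the global -Djava.net.preferIPv4Stack=true flag, application code that controls its own sockets can side-step the IPv6 join path explicitly with NIO's DatagramChannel (a sketch, not from the bug report; binding to port 0 and omitting the actual join call are my simplifications):

```java
import java.net.InetSocketAddress;
import java.net.StandardProtocolFamily;
import java.net.StandardSocketOptions;
import java.nio.channels.DatagramChannel;

public class V4Channel {
    public static void main(String[] args) throws Exception {
        // Opening the channel with an explicit INET family creates an AF_INET
        // socket, so the failing IPv6 join path in the kernel is never involved.
        DatagramChannel ch = DatagramChannel.open(StandardProtocolFamily.INET);
        ch.setOption(StandardSocketOptions.SO_REUSEADDR, true);
        ch.bind(new InetSocketAddress(0)); // any free port; 33848 in the report
        System.out.println(ch.isOpen());   // prints true
        // A subsequent ch.join(InetAddress.getByName("239.77.124.213"), nic)
        // would issue an IPv4 IP_ADD_MEMBERSHIP instead of the IPv6 join.
        ch.close();
    }
}
```

This only helps code that can be changed, of course; the point of the bug report is that unmodified upstream Java software (like Jenkins) still hits the IPv6 path.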
:)

-------- Forwarded Message --------
Subject: Re: [Bug 193246] Bug in IPv6 multicast join(), uncovered by Jenkins
Date: Fri, 05 Sep 2014 03:05:14 +0100
From: Bruce Simpson <bms@fastmail.net>
To: bugzilla-noreply@freebsd.org

On 04/09/2014 19:16, bugzilla-noreply@freebsd.org wrote:
>.

Unfortunately I'm fully booked. But yes, that is a good summary. If Craig (or someone else) is willing to volunteer to support v4-mapped addresses: one approach would be to extend in6_mship{} to include them. Pushing state down to IGMP will need to be added as a special case. To keep it simple, assume that only the legacy any-source multicast (ASM) model will be supported, i.e. listeners will not specify source filters.

Looking at the JDK source, it appears they used to handle the v4/v6 swizzle themselves because of limitations in Linux 2.4/2.6. In other words, we do not support RFC 3493 Sec 3.7 for multicast groups at the moment. A more appropriate errno value to return would be ENOPROTOOPT. [Interestingly, Sec 5.2 discusses IPv6 multicast options, but does not contain any reference to IPv4 at all.]

There now follows a late-night writeup of the rationale behind this code -- and this is as concise as I can make it, I'm afraid.

in[6]_mcast.c is split into a top half (i.e. per-socket state) and a bottom half (i.e. stack-wide state, and the IGMPv1/2/3 and MLDv1/2 wire protocols). IPv6 mcast options are processed separately from IPv4. Both implement a transaction scheme to protect global membership state (in the bottom half) from failures at the granularity of a single socket (or PCB) in the top half.

Why all the complexity? Well, this is to support source-specific multicast (SSM, aka "inclusive mode"). To cut a long story short: as the size of an internetwork increases, it gets more difficult for routers to track the state of multicast listeners, unless they are aware of where the traffic originates. The book "Interdomain Multicast Routing" by Brian M.
Edwards discusses this in lurid detail. So, SSM was introduced to support inter-domain multicast. In this model, joining a multicast group is no longer a simple matter of registering for a channel -- you must also specify the sources you are interested in. However, the majority of multicast applications are built on the older model: i.e. they do not cross more than one IP network hop, and do not specify sources ("any-source multicast", aka ASM). The network stack must be able to cope with both of these uses. It does so by representing the legacy ASM scheme as "exclusive mode". The RFC 3678 APIs also have the advantage that the application can block unwanted senders, even if ASM is in use. [The main API it specifies, setsourcefilter(), does not explicitly mandate v4-mapped support.]

So, in the bottom half of mcast, each group has a shared RB-tree of listener state. This is created from the set union of the filter state on each subscribed socket. If there are no filters for a particular group, i.e. all of the sockets/PCBs in the system are in "exclusive" mode and have no filters, then of course the RB-tree for that group will be empty. Otherwise, if there is a mix of "exclusive" and "inclusive" mode listeners, the tree will need to be recomputed. The shared tree is then used to fill out the IGMP/MLD messages sent to on-link routers to request group subscription. It is also used to filter input traffic from within the relevant transports (e.g. UDP, raw sockets). Previously, this filtering required that the network layer take a socket-layer lock.

In closing: this isn't just a simple matter of adding a few defines. The v6 code will need to check that a v4-mapped group was passed, and make sure to perform the transaction pushdown to IGMP.

Thanks for the excellent analysis! I ran into this one today at

Hi, just to pitch in with additional argumentation for resolution of this bug. I ran into this exact problem with openHAB; the workaround works.
But as the original reporter writes, it puts off people using FreeBSD. It was also not trivial to diagnose. Cheers, Marc.

Any progress here? I can see Tomcat test failures with multicast.

I don't think anyone is actively working on this. bms@ is no longer working on FreeBSD currently, so it will require finding someone with interest in this area (and multicast is kind of an unusual case relative to most network stack use cases).

(In reply to John Baldwin from comment #16) Unfortunately, I can't. This is outside my C knowledge, and as far as I understand this issue, FreeBSD cannot use IPv6-mapped IPv4 addresses.

FreeBSD can use IPv6-mapped IPv4 addresses just fine for unicast traffic. What this bug is about is joining IPv4-addressed multicast groups on an IPv6 socket.

I think the question is not only about joining an IPv4 multicast group on an IPv6 socket, but about whether mapped addresses are supported for multicast in general... That might make the fix more complex than just fixing the join operation.
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193246
Correcting errors in user queries. This module contains helper functions for correcting typos in user queries.

Base class for spelling correction objects. Concrete sub-classes should implement the _suggestions method.

Suggests corrections based on the content of a field in a reader. Ranks suggestions by edit distance, then from highest to lowest frequency.

Suggests corrections based on the content of a raw whoosh.support.dawg.GraphReader object. By default, ranks suggestions based on edit distance.

Merges suggestions from a list of sub-correctors.

Base class for objects that correct words in a user query. Returns a Correction object representing the corrected form of the given query.

A simple query corrector based on a mapping of field names to Corrector objects, and a list of ("fieldname", "text") tuples to correct. Any terms in the query that appear in the list of term tuples are corrected using the appropriate corrector.

Represents the corrected version of a user query string. Has the following attributes:

You can also use Correction.format_string() to reformat the corrected query string using a whoosh.highlight.Formatter class. For example, to display the corrected query string as HTML with the changed words emphasized:

    from whoosh import highlight
    correction = mysearcher.correct_query(q, qstring)
    hf = highlight.HtmlFormatter(classname="change")
    html = correction.format_string(hf)
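The ranking rule just described (edit distance first, then highest-to-lowest frequency) can be sketched in a few lines of plain Python. This is an illustration of the described ordering, not Whoosh's actual implementation, and the term-frequency dictionary below is hypothetical.

```python
# Illustrative sketch of ranking candidate corrections by
# (edit distance, -frequency), as the corrector classes above describe.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suggest(word, term_freqs, limit=3, maxdist=2):
    """Rank candidate terms by edit distance, breaking ties by frequency."""
    scored = []
    for term, freq in term_freqs.items():
        d = edit_distance(word, term)
        if d <= maxdist:
            scored.append((d, -freq, term))
    scored.sort()
    return [term for _, _, term in scored[:limit]]

# Hypothetical term frequencies, as might come from an indexed field:
freqs = {"write": 50, "writer": 10, "wright": 3, "right": 40}
print(suggest("writ", freqs))  # ['write', 'writer', 'wright']
```

Note that "right" is excluded despite its high frequency: its edit distance from "writ" is 3, beyond the maxdist cutoff, which is why distance is ranked before frequency.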
http://packages.python.org/Whoosh/api/spelling.html
The headline: First lady to speak at Ala. school hit by tornado.

My interpretation: There's a woman who was the first woman to speak at an Alabama school. This is notable presumably because she had to break some historical gender barrier in order to be invited to give an address there. That woman was hit by a tornado.

My reaction: Man, what horrible luck, for a person to be hit by a tornado. I hope she's okay.

What the headline actually meant: A school in Alabama was hit by a tornado. The First Lady will visit that school.

My first take was that Eve was set to speak at a school. Then I remembered that she was a sneak and a liar and, arguably, a hoochy for messing around with that snake, so it's questionable whether one could call her a lady. Then I remembered she's been dead for quite some time, so she's speaking nowhere.

It's pathetic, Ray... By reading "First Lady", 99.99% of Americans think of "The First Lady".

mvadu: Maybe. But note that the title says "First lady", not "First Lady".

Something about a horse, uncle Jack and helping him. Is English a great language or what? I love it!

@mvadu: When I read "First Lady" (capital L), of course I think of the President's wife. When I read "First lady" (lowercase L) then I think of it along the lines of "first person", the way Raymond described. So count me in the 0.01 percent. Of course, when you wrote it, you capitalized "Lady" too. "First lady" isn't capitalized in the linked article.

I misread Raymond's blog post, episode 1.

The nice thing about Google News is that in the process of showing you multiple news sources, it also shows you multiple headlines for the story. Thus, crappy headlines will be easier to interpret.

As I recall, this is what YACC refers to as a shift/reduce ambiguity.
Bad luck using a language without well-defined operator precedence. Or grouping operators.

There might've been other women who have spoken before, but this was the first lady.

OK, how do you interpret: "First lady eats shoots and leaves"? (FWIW, I read your headline the same way you did.)

Tangentializing a bit, note that the link to the First Lady's bio is at

Am I the only one who finds overqualification to cause excruciating mental discomfort? You're already inside the "firstlady" folder; the default "namespace" is therefore already the First Lady. It should be either firstlady/bio.html or whitehouse.gov/flbio.html. I know this has little to do with the original topic, but Raymond has already set a precedent of silliness.

CtrlAltDelete: According to the preface of "The Hitchhiker's Guide to the Galaxy", Douglas Adams' hobbies are: cooking his children and leaving out commas.

I immediately read that as "a US president's wife", since I don't expect a modern-day newspaper to use the word "lady" instead of "woman".

George Jansen: YACC... You mean the old "Yet Another Compiler Compiler"? Time flies like an arrow... Hmm.

The headline seems to actually state “First lady _speaks_ at …” (which may have been changed from before), which is somewhat different; that puts it more clearly into the present indicative tense, so it can't be a modifier to the noun “First lady” at the beginning of the sentence.

This is the kind of blog post that makes a grammar nazi feel all warm and fuzzy.

@daves561: I concur. It also annoys me when people do that with code. If you're already in the foo/ folder, don't name your classes FooBar.

First, you are twisted. :) Just like me.
Perhaps, instead of making fun of the headline, reading the content of the article might have cleared up any misunderstandings?

FLOTUS is beautifully unambiguous and takes up less headline space. But, ew. FWIW, dictionary.com says capitalization is optional.

How about "First lady to speak at devastated Ala. school"?

How about "First Lady grabs shovel, helps rebuilding of Alabama school"? Nah, that would be wishful thinking.

> How about "First lady to speak at devastated Ala. school"?
Brillant!

Student A: Hey, did you hear the First Lady is coming to speak at our school?
Student B: Really? Oh, man...
http://blogs.msdn.com/oldnewthing/archive/2008/06/02/8568489.aspx
"Quinn Dunkan" <quinn at dinar.ugcs.caltech.edu> wrote in message news:slrn90i5bd.rvo.quinn at dinar.ugcs.caltech.edu... [snip] > interfaces or casts. You almost never need to know or even care what 'kind' > an object is... just call whatever method you want on it. If it accepts the > method, then that's all you need to know about its type (if it doesn't you get > an exception). I.e. if your Knobs and Dials have a 'twist' method, that's all > the client code cares about. It couldn't care less what 'class' they belong > to. I think that's what Alex meant by the "the method is the type" stuff. Yep, basically. To be fair about it -- there _are_ (small) advantages to grouping sets of methods into interfaces. For example, no "accidental naming conflicts". E.g., a classical example...: Say that you're implementing objects that need to interface to two frameworks -- frameworks that were developed independently from each other. One is a graphical fw, which handles your objects by (among other things) sometimes calling a method on them, called "draw", that tells them to re-draw themselves onscreen (your objects, to this fw, represent "drawables"). One is a lottery-administration fw, which handles your objects by (among other things) sometimes calling a method on them, called "draw", that tells them to generate another random number (your objects, to _this_ fw, represent "lotteries"). Ooops...!-) Accidental naming conflict. This is somewhat of a problem in most languages, by the way; the grouping-into-interfaces does not necessarily solve it "magically". E.g., in Java, you can't usefully have your object's class have both "implements Drawable" as well as "implements Lottery" (as that would erase the namespace distinction -- you'd have to define/override just ONE method 'draw', so the problem is on you again). 
You solve this (typically/idiomatically) with 'inner classes' -- but this also means the _frameworks_ cannot be the one 'navigating' (with casts) among your objects' interfaces... rather, things have to be arranged so it's _you_ passing either the 'Drawable' or the 'Lottery' ``view'' of your objects. It's slightly better, in a sense, in C++, though you have to "rename" one or both versions of 'draw' in an intermediate utility class; once that is done, it's again possible for the frameworks to be the ones 'navigating' (dynamic_cast) among your objects' interfaces. Visual Basic (sorry folks!-) is better in this, since when you define a class that 'implements Drawable' _and_ also 'implements Lottery', then the methods you have to define in the class are *automatically* to be named Drawable_draw and Lottery_draw in their definitions. Eiffel's powerful renaming-while-inheriting mechanisms are also ok with this, though not as simple and automatic as Visual Basic's. In the COM native object model, navigation between interfaces of an object is handled _very_ explicitly ("explicit is better than implicit" is not ONLY a Python mantra:-), by a dedicated 'QueryInterface' method that all objects _must_ implement. While a 'cast' of any kind is handled implicitly by the language and its runtime system, QueryInterface is something that YOU implement, in YOUR code, and therefore it's quite easy to handle things. COM is the object model that VB is (more automatically/implicitly/magickally) working towards, by the way (VB, like most COM C++ frameworks, handles tasks such as implementing QueryInterface on your behalf). OK, so, what is Python to do in such a case? A Python object gets to "explicitly handle navigation", in that its __getattr__ method gets called when client code asks for an attribute (including a method) that is not in its __dict__ (or inherited); in this way, it gives you even more control than COM does. *However*... 
the *granularity* of this control is *a single attribute*. There is no concept of "interface", i.e. "named group of attributes"; thus, your __getattr__ just *doesn't get enough context information* (no "interface-name"...) to be able to tell whether it should supply Drawable_draw or Lottery_draw when it's asked for a method named 'draw'! So, despite the excellent 'explicit control', you're not out of the woods. You basically need two objects (that share state), with different implementations of method draw. There _is_ no "navigation" (because there are neither casts, nor QueryInterface, in the Python object model) -- methods get called directly on the object that the framework (or, the client-code) has in hand; thus, to behave different on calls to draw from different 'sides', different objects must be originally given to the different sides. Nothing dramatic, mind you -- even though you don't get the use of Java's "inner classes", that use is mostly in helping you bypass for such special cases the barriers that Java normally erects; in Python, no barriers, thus no special need to bypass them either. But, enough verbiage, let me give an example -- a Python solution to the accidental renaming problem that just uses Python's normal 'metaprogramming' approach, to reach much the same situation you would have in Java with inner classes. 
A toy example of the Draw-vs-Lottery situation above, with a data-member to show that we are indeed sharing state:

    class MyClass:
        def __init__(self):
            self.calls = 0
        def Lottery_draw(self):
            self.calls += 1
            print 'lottery_draw', self.calls
        def Drawable_draw(self):
            self.calls += 1
            print 'drawable_draw', self.calls
        def asLottery(self):
            return Aux(self, {'draw': self.Lottery_draw})
        def asDrawable(self):
            return Aux(self, {'draw': self.Drawable_draw})

    class Aux:
        def __init__(self, delegate, auxdict):
            self.__dict__ = delegate.__dict__
            self.__dict__.update(auxdict)

Note that Aux is completely general: it creates a "synonym" for the 'delegate' object (different identity, but same state), except for the differences specified in the auxdict. The asLottery/asDrawable methods need not be methods -- you can wrap an 'Aux', with _whatever_ renaming you desire, around any object you wish, in any external piece of code (it's nicer to have them as methods where feasible, but this can also let you do "impedance matching" on other external Python libraries without changing them!). [In some cases you could do that by inheritance, but there's no _strict_ need for inheritance here.]

[In Java, by contrast, the methods returning inner-class "views" of a class do need to be methods of that class, but you can generally arrange that by inheritance... as long as somebody hasn't inappropriately used a "final" attribute in the class you're trying to wrap/reuse!]

Here's example usage...:

    >>> me = MyClass()
    >>> me.Lottery_draw()
    lottery_draw 1
    >>> me.Drawable_draw()
    drawable_draw 2
    >>> me.draw()
    Traceback (innermost last):
      File "<pyshell#61>", line 1, in ?
        me.draw()
    AttributeError: 'MyClass' instance has no attribute 'draw'
    >>> dr = me.asDrawable()
    >>> dr.draw()
    drawable_draw 3
    >>> lo = me.asLottery()
    >>> lo.draw()
    lottery_draw 4
    >>>

So, you will pass to the drawing-framework me.asDrawable(), and to the lottery-framework me.asLottery() [rather than just 'me' in either case], and live happily ever after.
Incidentally, Python metaprogramming is no end of fun, but in most cases it had better _stay_ 'just fun' -- no real point using it to provide funky syntax sugar for something that could be done much more simply in "Python native" ways (by analogy -- remember the budding C programmers that, many years ago, were using C's preprocessor for such momentous tasks as '#define { BEGIN' and '#define } END' just so as to give C some of the syntax-sugar flavour of languages they preferred...? Well, most syntax-sugar-level uses of Python are similarly silly!-). But I think that this one (basically emulating a typical Java "inner class" solution, or C++/Eiffel "renaming") *IS* a good use of the metaprogramming mechanisms (if you often find yourself using two or more frameworks and thus occasionally needing it, of course!-).

The slight-but-non-zero handiness of 'interfaces' here comes up when an interface has several methods, not just one, of course (albeit, it's hopefully unlikely that _many_ of them will conflict between interfaces...!). But we can enrich the 'class Aux' approach so that its __init__ will explicitly take a string as the interface name and a list of strings as methods to be renamed, and build its dictionary from them:

    class Renamer:
        def __init__(self, delegate, itfname, metnames):
            self.__dict__ = delegate.__dict__
            for met in metnames:
                self.__dict__[met] = getattr(delegate, itfname + '_' + met)

and change the two relevant methods in MyClass to:

    def asLottery(self):
        return Renamer(self, 'Lottery', ('draw',))
    def asDrawable(self):
        return Renamer(self, 'Drawable', ('draw',))

The example code works just the same as before... Of course, if "interface" became an official Python concept (e.g. as in, or) there would be benefits (reasonably small benefits for very little cost, if a minimalist approach is adopted; reasonably large ones for higher cost, if a more ambitious one -- "you get what you pay for"...:-).
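The Renamer approach above can be exercised end-to-end. The following is a runnable recap in modern Python 3 form, with the draw methods returning strings rather than printing so the dispatch is checkable; the behavior mirrors the post's interactive session.

```python
# Runnable recap of the Renamer sketch from the post, Python 3 style.
# State is shared via the delegate's __dict__, exactly as described.

class Renamer:
    def __init__(self, delegate, itfname, metnames):
        self.__dict__ = delegate.__dict__
        for met in metnames:
            self.__dict__[met] = getattr(delegate, itfname + '_' + met)

class MyClass:
    def __init__(self):
        self.calls = 0
    def Lottery_draw(self):
        self.calls += 1
        return 'lottery_draw %d' % self.calls
    def Drawable_draw(self):
        self.calls += 1
        return 'drawable_draw %d' % self.calls
    def asLottery(self):
        return Renamer(self, 'Lottery', ('draw',))
    def asDrawable(self):
        return Renamer(self, 'Drawable', ('draw',))

me = MyClass()
dr = me.asDrawable()
print(dr.draw())   # drawable_draw 1
lo = me.asLottery()
print(lo.draw())   # lottery_draw 2  -- the counter is shared with 'me'
```

One subtlety worth noting: because Renamer aliases the delegate's __dict__, creating a second view overwrites the 'draw' entry shared by all earlier views, so each view should be created just before it is handed to its framework.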
But, you can already get _today_ most of the uses of Java's interfaces (static, compile-time checks excepted!), out of what Python already offers. That Python may well give you even more in the future, should not deter you from enjoying it today!-) Alex
https://mail.python.org/pipermail/python-list/2000-November/036746.html
Paul Prescod wrote:
>...
> element.prefix gets the element's prefix. I don't think that the
> namespaces view that prefixes are irrelevant should obviate the XML 1.0
> view that they are NOT. Even if we accept the namespaces view of the world
> entirely, prefixes are chosen to be mnemonic so they shouldn't be
> discarded by software.

I discussed this in my other email, but wanted to emphasize the point: retaining the prefix is a *very* bad idea. I will elaborate if necessary when I return from Mexico, but I will earnestly ask that any further progress on minidom should first remove this from the API.

Cheers,
-g

--
Greg Stein,
https://mail.python.org/pipermail/xml-sig/1999-April/001128.html
Keywords: cointegration, stationary covariates, local asymptotic power, CDS basis.

Abstract: Tests for cointegration are important tools for empirical macroeconomics and finance. Residual-based tests for the null of no cointegration, pioneered by Engle & Granger (1987), have the advantages of computational ease and good small-sample size properties. These tests involve running regressions and forming simple test statistics. However, residual-based tests suffer from low power under the alternative hypothesis. Among other papers, this problem is highlighted by Pesavento (2004), who finds that while residual-based tests have good size in most cases, their power disadvantage relative to system-based cointegration tests is significant. The goal of this paper is to construct a more powerful residual-based cointegration test.

In empirical analysis, researchers often have data on variables other than the cointegration candidates. For instance, when testing for Purchasing Power Parity (PPP), time series for GDP and money growth rates are observed together with exchange rates and prices (see Amara & Papell, 2006). These variables, or covariates, may be helpful in uncovering cointegration relationships. The idea of this paper is to take advantage of these covariates in testing for cointegration.

The inclusion of stationary covariates has been shown to improve the power of tests under local-to-unity alternatives in the univariate setting. Hansen (1995) first proposed a unit root test where the leads and lags of stationary covariates are included in the inference. Elliott & Jansson (2003) provided point-optimal unit root tests that include stationary covariates in the presence of deterministic trends. In the multivariate setting, Jansson (2004) shows that stationary covariates can be used to increase the power of tests with the null of cointegration.
In addition, Seo (1998) shows that covariates significantly improve the power of Johansen rank tests, while Rahbek & Mosconi (1999) study the asymptotic implications of covariate inclusion. We add to the work described above by including stationary covariates in the construction of the Augmented Dickey-Fuller (ADF) cointegration test. Intuitively, when stationary covariates related to the cointegration candidates are included in the residual regression, the parameters of the regression are more precisely estimated, resulting in a more powerful test. The new test is named the Covariate Augmented Dickey-Fuller (CADF) test. The extent of the power improvement depends on the long-run correlations between the stationary covariates and the cointegration candidates. Asymptotic analysis shows that the local-to-unity power function of the CADF test depends critically on these long-run correlations. Not surprisingly, when the covariates and cointegration candidates have zero long-run correlation, the power functions are the same as those of the ADF test.

Large-sample Monte Carlo simulations are used to illustrate the asymptotic results, revealing two interesting facts. First, the power of the ADF test serves as a lower bound for the power of the CADF test in all experiments conducted. This means that, asymptotically, the CADF test does at least as well as the ADF test. Second, the power of the CADF test is highest when the covariates are highly correlated with both the cointegration error and the right-hand-side variables in the cointegration relationship.

Deriving asymptotic critical values for the CADF test is difficult due to the presence of nuisance parameters in the asymptotic null distribution. As pointed out by Elliott & Pesavento (2009), there is no obvious way to estimate the nuisance parameters. Therefore, we propose a bootstrap procedure to obtain critical values in finite samples.
Small-sample Monte Carlo simulations are conducted to assess the performance of the bootstrapped CADF test under various cases of deterministic trends and various correlation scenarios. They show that the CADF test has reasonable size and good power in finite samples relative not only to the ADF test, but to the Johansen test as well.

In an empirical application of the new test, we investigate whether there are cointegrating relationships between Credit Default Swap (CDS) spreads and corporate bond spreads for 24 US firms during the 2007-2009 financial crisis. Previous work -- Blanco (2005), Zhu (2006), De Wit (2006), Levin (2005), and Norden & Weber (2009), for instance -- establishes that cointegration between CDS and bond spreads holds for most firms during benign economic periods. However, it may be the case that traditional cointegration tests used in these studies cannot as easily detect the same relationships during the recent crisis, due to the unprecedented levels of market volatility and uncertainty. The CADF test allows us to partially control for such factors through the use of covariates such as VIX index returns and the Libor-OIS spread. Indeed, the CADF test finds that cointegration between CDS and bond spreads holds for most firms during the crisis. In comparison, the ADF and Johansen tests find cointegration for fewer firms.

The remainder of the paper is organized as follows: section 2 describes the model, assumptions, test statistic, and bootstrap inference. It also contains asymptotic analysis of the power of the CADF test. Section 3 investigates the power of the CADF test in large and small samples using simulations. Section 4 presents CADF tests for cointegration between CDS and bond spreads during the financial crisis, and section 5 concludes. The appendix contains mathematical proofs, tables and figures.

Consider the following system: where is a vector of scalars for , of dimension , and of dimension .
and are the candidates for cointegration; are stationary covariates to be utilized in the CADF test. For brevity, and in order to keep notation simple, the theoretical work in this paper is based on the case of no deterministic components, i.e., and are set equal to zero. In section 3, extensive simulation evidence is presented on the performance of the proposed test when deterministic components are present. The hypothesis of interest is

We also define an alternative decomposition of that is useful in presenting the asymptotic results that follow:

Suppose data is generated by (1) and (2) and assumptions 1.1, 2.1 and 2.2 are satisfied. Then the following equation holds

Notice that, unlike in the traditional ADF test, the leads and lags of the covariates, as well as those of , are included in the CADF regression. Proposition 1 provides the motivation for deriving a test based on a feasible version of (3). is typically not observed unless the cointegrating vector is pre-specified, so an estimate of is required. We consider the OLS estimate of the cointegrating vector. Let be the estimate of the cointegrating vector and be the residuals. Noting that , using (4), similar to the derivation of (3),

The asymptotic null distribution depends on difficult-to-estimate nuisance parameters (more specifically, as shown in the next section, and ). This is closely related to an issue pointed out by Elliott & Pesavento (2009) regarding the long-run correlation parameter between what would be the equivalent of and of this paper. The authors note on p. 1832 that "...in practice, this parameter is not only unknown, but also, under the null and local alternative, there is no obvious way to obtain a good estimate of this parameter". In light of this difficulty, we propose bootstrap inference instead of relying on asymptotics.
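The feasible two-step procedure just described -- estimate the cointegrating vector by OLS, then run a Dickey-Fuller-style regression on the residuals augmented with leads and lags of the stationary covariates -- can be sketched as follows. This is an illustrative numpy sketch under simplifying assumptions (no deterministic terms, a single covariate, symmetric lead/lag length p), with invented function and variable names; it is not the paper's exact estimator, and in practice inference would use the bootstrapped critical values the paper proposes rather than standard tables.

```python
import numpy as np

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def cadf_tstat(y, x, z, p=1):
    """Sketch of a covariate-augmented residual-based cointegration test:
    (1) regress y_t on x_t and keep the residuals e_t;
    (2) regress Delta e_{t+1} on e_t, lagged Delta e, and leads/lags of the
        stationary covariate z_t; return the t-statistic on e_t."""
    T = len(y)
    # Step 1: cointegrating regression (intercept plus x)
    X1 = np.column_stack([np.ones(T), x])
    e = y - X1 @ ols(X1, y)
    # Step 2: Dickey-Fuller regression augmented with the covariate
    de = np.diff(e)                       # de[t] = e[t+1] - e[t]
    rows = range(p + 1, T - 1 - p)        # leave room for p leads and lags
    Y = np.array([de[t] for t in rows])
    X2 = np.array([
        [e[t]]                                    # level term e_t
        + [de[t - j] for j in range(1, p + 1)]    # lagged differences
        + [z[t + j] for j in range(-p, p + 1)]    # covariate leads and lags
        for t in rows])
    b = ols(X2, Y)
    resid = Y - X2 @ b
    s2 = resid @ resid / (len(Y) - X2.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X2.T @ X2)[0, 0])
    return b[0] / se

rng = np.random.default_rng(0)
T = 500
x = np.cumsum(rng.normal(size=T))   # I(1) regressor
z = rng.normal(size=T)              # stationary covariate
y = 2.0 + 0.5 * x + rng.normal(size=T)   # stationary error => cointegration
print(cadf_tstat(y, x, z))          # strongly negative under cointegration
```

The statistic is left-tailed: under cointegration the residuals mean-revert and the coefficient on e_t is significantly negative.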
In particular, the bootstrap inference is designed to take into account the following cases of deterministic trends:

We are interested in the distribution of under a local-to-unity version of . This section gives precise statements as to how the distribution for differs from the distribution of the ADF test. Following Phillips (1987), Hansen (1995), and Pesavento (2004), re-define so that for some constant ,

The truncation lag in (6) satisfies as , with the bound that . For a symmetric positive definite matrix , define its Cholesky and inverse Cholesky decompositions as and . Unless otherwise stated, let for some vector stochastic process . Define

Let the data be generated by (1) and (2) and assume that assumptions 1, 2, and 3 hold. If (8) is true, then as

Thus, the influence of the covariate feeds through , the correlation between and . To further understand the role of the covariates, consider the case where the covariates have no long-run correlation with the cointegration candidates, that is, and . In this case, observe that . This means that now , where

The local-to-unity asymptotic distribution in proposition 2 can be used to assess the large-sample power of the CADF test. We numerically construct the distribution for c = -5, -10, and -20 using 3,000 samples of Gaussian innovations. Each sample has size 3,000, and the innovations are used in constructing the functionals present on the right-hand side of (6). Power is then calculated, for c = -5, -10, and -20, as the mass of the distribution to the left of the 5% critical value of the null distribution. Note that the test only depends on and . Nonetheless, it is more intuitive to express power as a function of the pairwise correlations , , and . We set and all long-run variances equal to one. As such, and . Figures 1-3 display the power surfaces across different values of , and . Figures 1, 2, and 3 each show four graphs per figure.
The horizontal axes show corr(x,z) and corr(e,z), while the vertical axis shows the square of corr(e,Q) in the top-left graph and the local-to-unity power in each of the three remaining graphs. The three graphs showing power differ by the specification of the local-to-unity parameter, c: the top-right, bottom-left, and bottom-right graphs show power when c = -5, -10, and -20, respectively. The figures differ in the specification of corr(e,x): figures 1, 2, and 3 show power when corr(e,x) = -0.5, 0, and 0.5, respectively.

As expected, for a given combination of corr(e,x), corr(e,z), and corr(x,z), the local-to-unity power increases monotonically as c decreases. Comparing the graphs in each figure with the top-left graph of that figure, it is also clear that the power function mimics the shape of the squared corr(e,Q), although the exact shape varies. Throughout the figures, in general the CADF test has high power when corr(e,x) and corr(e,z) are large in magnitude, either with different signs when corr(x,z) is positive, or with the same signs when corr(x,z) is negative. A heuristic interpretation of these conditions is that power is highest when the covariates convey different information about e than x does. Importantly, the ADF tests (corresponding to the point on the graphs where corr(e,z) = 0 and corr(x,z) = 0) always have the lowest power. For instance, when corr(e,x) = 0 and c = -5 (top-right graph of figure 2), the ADF test has a power of roughly 20%, while the power of the CADF test can reach 60%. Asymptotically, one cannot do worse in terms of power by using the CADF test instead of the ADF test.
In this section we study the small sample size and power of the CADF test, and compare them to those of the ADF and Johansen tests. This exercise is important because it is well known that residual based tests are typically less powerful than Johansen's test in small samples. Furthermore, using these simulations, we study the effects of the presence of deterministic trends. Pseudo time series of length 200 are generated in the following way: for each Repeating this procedure 2,000 times, the empirical rejection rates are obtained, representing the small sample power (where ) and size (where ). Table 1 contains the size and power results. Note: Details on the simulation setup are described in Section 2.3. Numbers are empirical rejection frequencies from 2,000 Monte Carlo simulations. Sample size in each simulation is set to 100. Deterministic cases 1, 2, and 3 are as described in Section 2.3 and this section. For the CADF and Johansen tests, power increases with . On the other hand, the power of the ADF test decreases with , and in general becomes significantly lower than the power of the CADF and Johansen tests. The power discrepancy between the ADF and CADF tests is particularly large when deterministic terms are present (cases 2 and 3), or when is large. The ADF test performs well when , but still fails to show higher power than the CADF test in all cases other than case 3 when . The low power of the ADF test in these cases is consistent with previous findings (e.g., Pesavento 2004). In terms of size (i.e., when ), the ADF test has good size in almost every case, while the CADF test tends to be under-sized when is large or under case 3. The CADF test also compares favorably with the Johansen test (see Johansen, 1988, 1991). It is particularly advantageous under cases 1 and 2 when or .5, while the Johansen test is advantageous under case 3 for . In all other instances, the powers of the two tests are similar.
The Johansen test tends to be over-sized, particularly under case 3, whereas the CADF test under case 3 is typically under-sized. Finally, we observed that there are minor discrepancies in power for the CADF test based on different combinations of , and the best combination differs depending on the deterministic case, , and . The seller of a CDS contract offers insurance to the buyer of protection against default of an underlying reference entity. In return for protection, the buyer makes regular payments over the life of the contract. Thus, the CDS "spread" is often viewed as the price of the credit risk of the underlying reference entity. Abstracting from other factors, an investor who holds a corporate bond for a given entity requires the same premium as the seller of a CDS contract, since both the bond and the CDS are exposed to the same default event of the reference entity. The deviation between the corporate bond spread (accounting for the reference rate) and the CDS spread is referred to as the CDS-bond basis. Following previous literature, we use the CDS spread minus the par asset-swap rate to measure the basis (see Kocic (2000), Houweling & Vorst (2005), Hull (2004), or see Choudhry (2006) for an explanation of alternative measures). Typically, an asset-swap consists of a fixed coupon bond and an interest-rate swap, where the bond holder pays a fixed coupon and receives a floating spread over LIBOR. It can be thought of as measuring the difference between the present value of future cash flows of the bond and the market price of the bond using zero coupon rates (Choudhry, 2006). For no-arbitrage conditions to hold, the pricing of credit risk for any underlying entity should be the same in both markets, ceteris paribus.
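To make the basis definition above concrete, here is a minimal numeric sketch; the quotes used are hypothetical and are not data from the paper:

```python
def cds_bond_basis(cds_spread_bp, asset_swap_spread_bp):
    """CDS-bond basis: CDS spread minus the par asset-swap spread, in basis points."""
    return cds_spread_bp - asset_swap_spread_bp

# Hypothetical quotes: 5-year CDS at 180 bp, par asset-swap spread at 210 bp.
basis = cds_bond_basis(180.0, 210.0)
# basis < 0 is a negative basis, which would motivate a negative basis trade
# (buy the cash bond and buy CDS protection).
print(basis)  # -30.0
```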
As noted by Zhu (2006), under the Duffie (1999) pricing framework, it is possible to replicate a CDS contract synthetically by shorting a maturity-matched par fixed coupon bond on the underlying reference entity, and investing the money in a par fixed risk-free note. Therefore, the CDS premium equals the bond spread over the reference rate, or zero basis under no arbitrage. If there exists a negative (positive) basis, arbitrage is possible through a negative (positive) basis trade by buying (shorting) the cash bond and buying protection (selling protection) on the CDS contract. Previous literature (see, for instance, Blanco (2005), Zhu (2006), De Wit (2006), Levin (2005), Norden & Weber (2009)) notes the existence of the basis and establishes that it is stationary (i.e., CDS and bond spreads are cointegrated) for most firms during benign economic periods. We revisit this cointegration relationship during the financial crisis, which we define as July 2007 to July 2009. Our conjecture is that unprecedented levels of volatility, illiquidity, and market uncertainty may impose difficulties for traditional tests to find cointegration between CDS and bond spreads. The CADF test, on the other hand, may perform better through the use of covariates to account for some of these factors. During the financial crisis, evaporation of liquidity in the market caused funding costs to rise (see Giglio, 2010; Fontana, 2010). This, coupled with surging counterparty credit risk and market volatility, drove the basis wider (see Fontana, 2010). While it is difficult to construct explicit proxies for liquidity and counterparty credit risk, our choice of covariates intends to reflect these risk factors. The first covariate considered is the HFRX Global Hedge Fund Index return (HFRXGL). Hedge funds and banks comprise the largest CDS market participants (see Anderson, 2010).
While banks often use the CDS market to hedge against loan risk, hedge funds on the other hand are important speculators in the CDS market, using CDS contracts as tools to engage in credit arbitrage. Hedge funds also hedge convertible bond positions, and cover their exposures in the CDO market with CDS contracts. It is argued by Brunnermeier (2009) and Anderson (2010) that hedge funds' access to external financing plays an important role in the liquidity of assets for which they account for a large share of market transactions. The extent and rate at which hedge funds can obtain capital is related to their returns (see Boyson, 2008), and consequently hedge fund performance affects the liquidity of the CDS market. HFRXGL is therefore used as a proxy for market-wide hedge fund performance. The second set of covariates is the S&P 500 return and the percentage change in the VIX. The S&P 500 return can be viewed as a proxy of market-wide performance as a whole, while the VIX index serves as a measure of implied market volatility. Counterparty credit risk and liquidity risk are often heightened during periods of low equity returns and high market volatility. As such, S&P 500 and VIX returns may be driven by the same factors that affect the CDS-bond basis. We also use the two covariates together in order to see how the CADF test performs when there is more than one covariate. The third covariate is the Libor-OIS spread, which is the difference between the three-month Libor and the overnight index swap (OIS) rate. The Libor-OIS spread increases with a perceived rise in bank counterparty credit risk (see Schwarz, 2009). In contrast to CDS contracts, bonds do not have counterparty credit risk. Because counterparty risk is a driver of the basis (see Choudhry, 2006), the Libor-OIS spread is chosen as a covariate. Finally, daily stock returns for each firm are used as a firm-specific covariate.
Drivers of the basis such as firm credit quality, type of institution, the rate at which a firm can obtain funding (see Choudhry, 2006), and many other factors unique to each firm may not be captured by systematic covariates. As noted by Aunon-Nerin et al. (2002), declines in stock price are associated with a rise in the CDS premium, and should be considered when assessing credit risk. Therefore, we choose stock returns as a covariate. We start with all firms listed in both the Markit Partners CDS and bond data sets between June 2007 and June 2009. Five year CDS spreads are considered as they are the most actively traded. Quotes selected from Markit Partners are for CDS spreads referencing Senior Unsecured, USD denominated debt with the Modified Restructuring (MR) clause. In order to match the remaining maturity of the bond spread to the five year CDS spreads, a generic bond is constructed for each firm from a pool of outstanding bonds similar to the methodology of Zhu (2006). Using the Fixed Income Securities Database (FISD), we constrain our analysis to a list of bonds that meet the following criteria: For bonds that meet the stated criteria, the daily bond asset-swap rate, the depth of the quote, and the type of quote for each bond are obtained from Markit. For each bond, the depth weighted average of both TRACE and Composite quotes is calculated. We eliminate all bonds with remaining maturity shorter than two and a half years or longer than seven years. There are three possible cases in constructing the generic bond for each firm-day. First, all of the firm's available bonds have a shorter remaining maturity than five years, or all available bonds have a longer remaining maturity than five years. Second, there is only one bond available. Third, there is at least one bond with maturity shorter than five years and at least one bond with maturity longer than five years. In the first case, the generic bond is the bond with the maturity closest to five years.
In the second case, the generic bond is the only available bond. In the third case, the generic bond is the linear interpolation of the closest two bonds on each side of the five year maturity, following Zhu (2006). Using ADF unit root tests, we ensure that all covariates are stationary and that the cointegration candidates are non-stationary, excluding any firms for which these conditions fail. The final set of firms has bonds with no more than 20 consecutive days of missing quotes. Based on this construction, there are 24 firms in our final list, similar in length and the number of firms to previous studies. Daily data for the S&P 500 index, firm stock price, the VIX index, the Libor-OIS spread, and the HFRXGL index are obtained from either Bloomberg or Datastream. For each firm, the weekly average of the daily series of bond asset-swap rates, CDS spreads, and each covariate series is calculated. We take the first difference of the log of each covariate, except for the Libor-OIS spread where we simply take the first difference. Four sets of CADF tests, one for each set of covariates, are performed under deterministic case 1. Critical values for the CADF test are generated using a 10,000 iteration residual based bootstrap with a block size of 5 (where ) as described in Section 2.3. To benchmark the CADF tests, we also perform ADF and Johansen cointegration tests using asymptotic critical values. Results for each test are shown in Table 2. Notes: 1: Numbers presented are test statistics. 2: ***, **, and * correspond to rejections at the 1, 5, and 10 percent confidence levels, respectively. 3: The CADF test is run under deterministic case 1, as described in Section 2.3, with a block size of 5. The Johansen and ADF tests fail to reject the null of no cointegration at the 10% confidence level for 6 and 7 of the 24 firms, respectively.
The CADF test using the S&P 500 index and the percentage change in the VIX fails to reject the null of no cointegration for 3 firms, while the CADF test using firm stock returns fails to reject the null of no cointegration for 4 of the 24 firms at the 10% confidence level. Covariate choices of the HFRXGL index and the Libor-OIS spread reject the null of no cointegration for the most firms, with each failing to reject for only 2 firms. Results at the 5% confidence level are qualitatively similar. Overall, by using covariates the CADF test is able to find more cointegrating relationships than the ADF and Johansen tests during the financial crisis. One possible explanation is that the inclusion of covariates removes part of the heightened volatility that may otherwise mask the cointegrating relationships. The strong performance of the CADF test for all sets of covariates is consistent with Anderson (2010), who concludes that during the crisis, systemic factors and market volatility significantly affected the basis. This paper introduces a residual based cointegration test with improved power. Inclusion of stationary covariates reduces the noise in the system, providing more precise parameter estimates and higher power tests. The test and its asymptotic distribution under the local-to-unity alternative are derived under a simple model and mild assumptions. Due to the dependence of the asymptotic null distribution on hard-to-estimate nuisance parameters, we provide a bootstrap framework for obtaining test critical values. Simulations based on the asymptotic results show that the CADF test has higher power than the ADF test. The magnitude of the power improvement depends on the long-run correlation between the cointegration candidates and the stationary covariates. In small samples, Monte Carlo simulations also show that the CADF test has good size and power properties in comparison to the ADF and Johansen tests, even in the presence of deterministic trends.
The CADF test is used to study the cointegration relationship between CDS and bond spreads for 24 U.S. firms during the financial crisis. Covariates are chosen to proxy various factors that may affect the CDS-bond basis. The use of covariates allows us to uncover cointegration relationships for more firms than the Johansen and ADF tests, possibly because the covariates partially control for the heightened levels of volatility and market uncertainty that may otherwise mask cointegration relationships. To prove Lemma 1, some auxiliary results are needed. Define the regressors in the CADF regression as To prove Lemma 2.4, note that by definition, Lemma 1.1 follows directly from (13) and the fact that . To prove the two statements in Lemma 1.2, re-write the CADF regression (6) as . First note that is the first element of Given this, by the diagonality of ,
The data contains the complete edit history (all revisions, all pages) of all Wikipedia since its inception till January 2008. There are two parts to the dataset: Note that the file decompresses to several (>3) Terabytes of text. Use 7zip to decompress the data on the fly. See All revisions of Wikipedia and Latest complete dump for more information about different dumps of the Wikipedia dataset. The data set contains processed metadata for all revisions of all articles extracted from the full Wikipedia XML dump as of 2008-01-03. For each specified namespace, there is a bzipped file with pre-processed data and also a file with all redirects. The output data is in the tagged multi-line format (14 lines per revision, space-delimited). Each revision record contains the following lines: For example: Anonymous editors are listed by their IP address, e.g. ip:69.17.21.242. The list of admins with simplified dates of adminship (disregarding demotions and reappointments of the same user) can be found at and Bots can often (but neither necessarily nor exclusively) be identified by the string "bot" in the username. You can create a list of bots by using the bot status page at Sometimes Wikipedia editors change their user names, which may lead to misattribution of edits (it does not seem that name changes are retroactively applied to the previously generated content). This issue may be especially important for prolific contributors. To handle name changes properly, you want to use the logs at and/or The data and the description were prepared by Gueorgi Kossinets. To examine a part of the data file, use bzcat and pipe its output to a combination of head, tail, grep, awk, sed, and so on. For example, the command

$ bzcat enwiki-20080103.talk.bz2 | head -n 1414 | tail -n 14

will print lines 1401 through 1414 from the Talk namespace data file.
Similarly, the command

$ 7z x -so enwiki-20080103-pages-meta-history.xml.7z | head -n 1414 | tail -n 14

will print lines 1401 through 1414 from the pages-meta-history file.
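Since each revision in the pre-processed files is a fixed block of 14 lines, a small script can group the decompressed stream into per-revision records. Here is a minimal Python sketch (the function name and the commented file path are illustrative, not part of the dataset documentation):

```python
def iter_revision_records(lines, record_size=14):
    """Group a stream of text lines into fixed-size revision records.

    The pre-processed Wikipedia metadata files store one revision as a
    block of 14 lines, so we buffer lines and yield each full block.
    """
    block = []
    for line in lines:
        block.append(line.rstrip("\n"))
        if len(block) == record_size:
            yield block
            block = []
    # A trailing partial block would indicate a truncated file; it is dropped.

# Usage with a bzip2-compressed namespace file (path is illustrative):
# import bz2
# with bz2.open("enwiki-20080103.talk.bz2", "rt", encoding="utf-8") as f:
#     for record in iter_revision_records(f):
#         process(record)  # 14 tagged, space-delimited lines per revision
```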
Consider the following program where the Employee class has a private string name. If we declare an instance e1 in the Program class, we would not be able to access the private string using e1.name.

Accessors

using System;

namespace Automatic
{
    public class Employee
    {
        private string name;

        public string Name
        {
            get { return this.name; }
            set { name = value; }
        }
    }

    class Program
    {
        static void Main()
        {
            Employee e1 = new Employee();
            e1.Name = "John"; // This will set name to John
            Console.WriteLine(e1.Name); // This will get the name
            Console.ReadLine();
        }
    }
}

However, in C# there are the get and set accessors where we can use them to access, in a way, the private properties of a class.

The get Accessor

The body of the get accessor is similar to that of a method. It must return a value of the property type. The execution of the get accessor is equivalent to reading the value of the field.

The set Accessor

The set accessor is similar to a method that returns void. It uses an implicit parameter called value, whose type is the type of the property.

public string Name
{
    get { return name; }
    set { name = value; }
}
Imagine of Photo lets you make a mask from a photo, select an image, and merge it with the mask. How it works: ① tap the button to pop up a dialog and select either import or export.

import

If you select import, the camera roll appears and you select an image from it.
- Large images are resized for the application.
- The loaded image is copied into the application; the original image is not changed.

Cut with the mask

The loaded image appears in the application with a black circle on top of it. You manipulate the loaded image and the black circle; the circle area is the selected area. Now tap the clip button to cut the image with the mask.

Handling the loaded image and circle

Drag with a single finger to move the circle. Drag with two fingers to move the loaded image. Pinch to change the circle size. Double-tap the screen to show a slider bar; change the slider value to resize the loaded image.

export

Once you have created the mask, tap the upper-left button and select the export image button in the pop-up dialog. Next, change the slider value to change the background color; this color is the mask color. When you tap the export button, the merged image is saved to the camera roll. Check for the "saved" alert to confirm that saving is complete.

Imagine of Photo for next

Imagine of Photo still has a few problems to fix, and I'll try to add some new features.
Introduction to ClassLoader in Java

A ClassLoader is an object responsible for dynamically loading Java classes at runtime, and it is a part of the Java Runtime Environment. It makes the JVM's life easier: the JVM loads classes into memory only when they are required by the application, rather than all at once, and it is the ClassLoader that loads each class into memory.

How to Implement ClassLoader in Java?

Let's take a look at how java.lang.ClassLoader is implemented in Java library code and what its functionalities are.

java.lang.ClassLoader:

public abstract class ClassLoader {
    public Class loadClass(String name);
    protected Class defineClass(byte[] b);
    public URL getResource(String name);
    public Enumeration getResources(String name);
    public ClassLoader getParent();
};

Let's look at the functionalities of the ClassLoader in Java:

- loadClass(): This is the important method which takes the name of a class as a string and returns an instance of a Class back. This is the class which the class loader has found on its classpath, and it is provided so that an object can be instantiated from it.
- defineClass(): This method works similarly to the loadClass method except that it takes a byte array as an argument and creates a class from that byte array; that is, it takes the class itself as a byte array. It is slightly different from the loadClass method because the class itself is given as a byte array, whereas loadClass needs to find that class to load it.
- getResource() or getResources(): These methods are key to diagnosing any problems related to loading a class, as they provide a URL (or an enumeration of URLs) back when you provide a string name and a package name. They give you the exact path from where and how your class is loading, leaving behind all your assumptions.
- getParent(): It is a key method to understand the hierarchy of class loaders.
ClassLoader is not a flat structure: you have a parent-child hierarchy with levels and levels of structure.

How ClassLoader Works in Java?

Example to demonstrate how a class loader works:

Code:

public class A {
    public void addOne() {
        B b = new B();
        b.addTwo();
    }
}

How the ClassLoader will load classes in the above scenario:

- As we can see, in the above code the class A calls a function addOne(). And inside that function, an instance of class B is created and another method from class B, addTwo(), is called.
- So the ClassLoader will load class A and then it will load class B. The resulting call looks like A.class.getClassLoader().loadClass("B");
- Due to this feature of class loaders, we can have a hierarchy of classes and can decode their connections with each other.

Types of ClassLoader

In this tutorial, we are going to talk about the different types of class loaders, their built-in functionality, and why they are used.

1. Bootstrap ClassLoader

- A Java ClassLoader is itself an instance of java.lang.ClassLoader, which is a class; just imagine who loads the ClassLoader classes themselves. This is the scenario where the Bootstrap ClassLoader comes into the picture.
- It loads rt.jar and the other core libraries from the $JAVA_HOME/jre/lib directory.
- It serves as a parent of all other ClassLoader instances.
- It is a part of the JVM and is written in native code, thus the implementation may differ for this particular ClassLoader.

2. Extension ClassLoader

The Extension ClassLoader is a child of the Bootstrap ClassLoader. It is used to load the extensions of all Java classes coming into the JVM.

3. Application ClassLoader

- It is a child of the Extension ClassLoader.
- It is used to load all application-level classes found on the classpath (the classpath environment variable, -classpath or -cp).

ClassLoader Leaks

Every class has a link to all the classes it creates. It effectively needs memory to store static fields.
If a class leaks any static field, it means you are leaking the ClassLoader; and if you do so, you leak all its classes, a bunch of objects, and all the objects they link to. ClassLoader leaks can be very dangerous. Every time we do a redeployment or add enhancements at runtime in our application, a ClassLoader will load a class, and it will never reload or unload that class. So when a new class loader loads the classes from scratch to recreate the application state, the old class loader has to hand its objects over from the old state to the new state, and in this transition there might be a leak. When you leak an object, you leak its class, and thus its class loader.

Principles of Java ClassLoader

There are 3 principles that a Java ClassLoader works upon:

- Delegation Model: It delegates class loading requests to the parent ClassLoader and loads a class only if the parent is not able to find or load the class.
- Visibility Principle: This principle states the visibility scope of loaded classes. Classes loaded by a parent are visible to its child class loaders, but classes loaded by a child are not visible to the parent class loader.
- Uniqueness Property: It ensures that there is no repetition of classes across the class loaders. If a parent loads a class, then its corresponding child does not load this class again.

Example of Custom ClassLoader

This is a custom ClassLoader example named ClassLoaderJava.java:

Code: ClassLoaderJava.java

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

public class ClassLoaderJava extends ClassLoader {
    // Created to load a class and invoke a method on it.
    public void classLoadingDemo(String classBinString, String methodName) {
        try {
            // Get the class loader that loaded this class.
            ClassLoader classLoaderInstance = this.getClass().getClassLoader();
            // Load the requested class and store it.
            Class loadedClass = classLoaderInstance.loadClass(classBinString);
            System.out.println("Loaded class name is: " + loadedClass.getName());
            // Fetch the no-argument constructor of the loaded class.
            Constructor con = loadedClass.getConstructor();
            // Create an instance on which to invoke the method.
            Object obj = con.newInstance();
            // Fetch the method to invoke from the loaded class.
            Method invokingMethod = loadedClass.getMethod(methodName);
            System.out.println("Invoked method name is: " + invokingMethod.getName());
            invokingMethod.invoke(obj);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Code: DemoClass.java

public class DemoClass {
    public void add() {
        System.out.println("This method is invoked by the classLoader.");
    }
}

Code: LoadingTest.java

public class LoadingTest {
    public static void main(String[] args) {
        ClassLoaderJava classLoader = new ClassLoaderJava();
        classLoader.classLoadingDemo("DemoClass", "add");
    }
}

Output:

Loaded class name is: DemoClass
Invoked method name is: add
This method is invoked by the classLoader.

Recommended Articles

This is a guide to ClassLoader in Java. Here we discuss the working, implementation, types, and principles of ClassLoader in Java along with its code implementation. You can also go through our other related articles to learn more.
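The delegation hierarchy discussed above can also be inspected directly. The following is a small illustrative sketch (the class name LoaderChainDemo is ours, not part of the guide): core classes such as String are loaded by the Bootstrap ClassLoader, which the Java API represents as null, while an application class sits at the bottom of a getParent() chain.

```java
public class LoaderChainDemo {

    // Counts how many loaders sit on the parent chain, starting from 'start'.
    // The bootstrap loader is represented as null, so it is not counted.
    static int chainDepth(ClassLoader start) {
        int depth = 0;
        for (ClassLoader cl = start; cl != null; cl = cl.getParent()) {
            depth++;
        }
        return depth;
    }

    public static void main(String[] args) {
        // Core classes come from the bootstrap loader, printed as null.
        System.out.println("String loader: " + String.class.getClassLoader());
        // Our own class is loaded by the application class loader, whose
        // getParent() chain leads up toward the bootstrap loader.
        ClassLoader mine = LoaderChainDemo.class.getClassLoader();
        System.out.println("Parent chain depth: " + chainDepth(mine));
    }
}
```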
Each student should end up with a bundle of software which is needed in the mandatory courses of the Cogmaster. You might skip the Atom install if and only if you are already used to an advanced text editor such as Vim, Emacs, or Sublime Text. Microsoft Office Word, LibreOffice and other document formatting programs are not text editors. The only slot in the schedule dedicated to installation of software is on Wednesday 2 September from 9:00 to 12:00. We will not answer installation questions during the following AIP lectures. We will not try to solve your problems if you were not attending on Wednesday morning. If you don't have a computer yet, tell us after the AIP presentation on Monday 31 August. Back up your computer on Monday evening. Installation procedures are currently being tested on Windows (7 64bits), MacOS (10.9 Maverick), and debian-based Linux. We have a few years of experience with installations on various Operating System versions, but there are always some computers on which the usual procedures and fixes fail. We will try our best; if it happens to you, please be patient. Some installations will require an internet access, thus don't forget to bring your login and password for the ENS wifi on Wednesday morning. Installation procedures will be put online on Tuesday. Links will be provided to download the installation files from the internet. Try as much as possible to download the software installers on your computer Tuesday evening: it will save us some time on Wednesday morning because the network at the ENS is usually slow and we will have only a few USB sticks with the software installers. If you are using a debian-based Linux such as Ubuntu, most of the installations will be made with apt; it is way safer to try the installation at home if you have a decent internet connection than on Wednesday at the ENS.
Select the installation instructions for your operating system: Linux, Mac OS, Windows. Follow carefully the instructions step by step, not skipping any. If something does not work as expected, stop there and ask for our help. It is much easier to prevent a misinstallation than to repair it. Once the installation on your computer is completed, you can browse the documents in the resource folder.

[ ] Save target as, and select an appropriate directory, for example the default Downloads folder.
[ ] Open the folder or open an explorer (windows key + e) and go to your Downloads folder
[ ] click on the Yes button.
[ ] click on the Next button, accepting default parameters (note in which directory the program will be installed) until you have to click on the Finish button.
[ ] after Finish, you should see the program running, and you could reopen it using the desktop Scratch icon. Alternatively, you can open an explorer, go to the directory in which the program was installed and double click on the Scratch icon.
[ ] Download the Atom installer by clicking on the Download Windows Installer button on [] (AtomSetup.exe)
[ ] install (click on the Next button).
[ ] Download GitHub Desktop by clicking on the Download GitHub Desktop button.
[ ] go to your Downloads folder and double click on the "GitHubSetup.exe" file
[ ] type your login, then your email, and click on Continue

If you have trouble with the anaconda installation that we told you we cannot solve, here is a link to the Christophe Gohlke 64-bit modules: []

[ ] go to your Downloads folder and double click on the "Anaconda-X.X.X-Windows-" file
[ ] select Just Me (recommended), then click on Install
[ ] click on Finish
[ ] click on All the programs and then the Anaconda (64-bit) folder; what you are looking for is the IPython (Py 2.7) entry. Click there (and not the IPython (Py 2.7) Notebook nor the IPython (Py 2.7) QTConsole).
[ ] just after the $ sign, type each of those lines one by one, each followed by a stroke on the Enter key

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
x=np.arange(-5,5,.1)
y=stats.norm.pdf(x)
plt.plot(x,y)
plt.show()

[ ] close the ipython shell by typing quit() or the keyboard shortcut ctrl + D
[ ] click on All the programs and then the Anaconda folder, then on Anaconda Command Prompt
[ ] type the following command, then press the Enter key:

conda install conda

You have to type it where a little rectangle is blinking (this is the "prompt"), after something that looks like C:\Users\your_name\AppData\Local\continuum\Anaconda>. You will see some text messages during the installation of some python modules, don't worry!
[ ] at Proceed ([y]/n), press on the Enter key (because yes is the default)
[ ] when you are back to the blinking little rectangle, type this text, then press the Enter key:

conda install -c pygame

[ ] When the installation of pygame is over, you can even type exit and press on Enter to close the window, how spooky!
[ ] click on All the programs and then the Anaconda folder, then on IPython (Py 2.7) QTConsole
[ ] after the "IPython window" has opened, you can copy and paste the following seven lines just after the In [1]:, then press twice on Enter. You should see a little window appear, change color and then disappear.

import pygame
pygame.init()
w=pygame.display.set_mode([300,300])
w.fill([128,37,213])
pygame.display.flip()
pygame.time.wait(3000)
pygame.quit()

[ ] use ctrl+D to quit the ipython console
[ ] click on the Windows icon (or just press the Windows key on your keyboard), then on All the programs and then the Anaconda folder, then on Anaconda Command Prompt
[ ] at the prompt, type the following text then press on Enter

python Lib\site-packages\pygame\examples\chimp.py

You should be able to play a silly game, including sound (make sure the sound is on, but not too loud).
[ ] click on Finder next to the Apple logo on the top left corner of your screen, then on Preferences, then in the Side Bar check the first unchecked box under DEVICES, and close the Finder Preferences window.
[ ] go to the Application folder and then to the Utilities subfolder, grab the Terminal icon and put it on the second place on your "Dock", right next to the Finder icon.
[ ] click on the Terminal icon you just placed in the "Dock".
[ ] In this window copy and paste the following text then press on the Enter key (from now on this will be called executing a command in the terminal)

xcode-select --install

[ ] This should make a window pop up to ask you if you want to install the "Command Line Tools"; answer Yes, and wait until completion of the installation.
[ ] Download XQuartz-2.7.7.dmg from []
[ ] click on Continue and Agree until you can click on Install
[ ] Download GitHub Desktop by clicking on the Download GitHub Desktop button.
[ ] go to your Downloads folder
[ ] unpack the .zip archive if needed
[ ] double click on the GitHub Desktop icon
[ ] click on the Open button at the security pop up window
[ ] click on Move to Application Folder
[ ] click on Continue
[ ] click on Continue
[ ] click on Install Command Line Tools, then on the pop-up window, type down your mac account password and click on Install Helper
[ ] click on OK upon completion of the Helper install
[ ] click on Continue on the "Welcome to GitHub Desktop" screen
[ ] click on Done

### Atom

Download the Atom installer by clicking on the big red Download For Mac button on []
Install as usual
Enjoy!

[ ] open the Downloads folder from the Dock
[ ] drag the Scratch icon to the Applications folder in the pop-up window
[ ] open the Applications folder from the Dock, then the Scratch1.4 folder
[ ] double click on the Scratch.app icon
[ ] Go to [] and download either the "R-3.2.2.pkg" or the "R-3.2.1-snowleopard.pkg" depending on the version of your OS (check About this mac on the Apple menu on the top left of your screen if needed).
[ ] download RStudio-0.99.473.dmg from or an appropriate older version from []
[ ] click on the Apple icon again, then on "About This Mac"; now click on "More info…" and in the window that opens up seek the "Processor Name" entry in the "Hardware Overview".
If it says "PowerPC", "Intel Core Solo" or "Intel Core Duo", then stop right there before doing anything else, because you will need to wait until the Wednesday install party to get a different version of Python.
[ ] alternatively, open a terminal and type the following text, then press on the Enter key:
    sysctl hw.cpu64bit_capable
    The output tells you whether your processor is 64-bit capable.
[ ] if and only if your Mac passes these tests, you can carry on.
[ ] open the Downloads folder and double-click on the file Anaconda-X.X.X-MacOSX-x86_64.pkg in order to start the installation.
[ ] click Continue several times and agree on licence terms until the installation is completed; if at some point you see the error "You cannot install Anaconda in this location", then just click on Install for me only and you should be able to continue.
[ ] click the Close button
[ ] check that ipython-notebook and ipython-qtconsole are installed (their icon should be Update | Launch); otherwise, click on the Install buttons.
[ ] launch the Terminal application from your "Dock"
[ ] after the $ sign, type ipython then press on the Enter key in order to launch an ipython interpreter
[ ] in the ipython shell, type each of those lines one by one followed by Enter:
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats
    x=np.arange(-5,5,.1)
    y=stats.norm.pdf(x)
    plt.plot(x,y)
    plt.show()
[ ] exit with quit() or the keyboard shortcut Ctrl+D
[ ] you are now back to the command line in the Terminal application.

Warning: now the Mac Python install procedure starts to be tricky. If you don't feel confident with typing commands in a terminal, or if you'd like to sleep, stop right now, we will carry on tomorrow morning. Otherwise, stay up for some more fun with the terminal!
[ ] in a terminal, copy-paste or type this command:
    ruby -e "$(curl -fsSL)"
[ ] if you ever have an error about certificates using curl, execute the two following commands and restart the "Homebrew" install of the previous step:
    export CURL_CA_BUNDLE=/usr/local/curl/
    curl -o cacert.pem
[ ] once the installation is over, type in the terminal:
    brew doctor
[ ] when the doctor has given you its check-up diagnosis, it should tell you that your system is ready for brewing stuff or something similar. IF THERE IS SOME CRITICAL ERROR AND NOT JUST WARNINGS, STOP THE INSTALLATION PROCESS NOW AND ASK US WHAT TO DO.
[ ] if and only if the doctor gave its green light, close all your instances of the terminal application (by typing exit and then closing the windows with the cmd+W key stroke combination), quit the application with cmd+Q, and relaunch it.
[ ] in a terminal, execute:
    brew install sdl sdl_image sdl_mixer sdl_ttf portmidi
[ ] in a terminal, execute:
    conda install conda
    When asked Proceed ([y]/n)?, press the Enter key (because yes is the default)
[ ] wait
[ ] then execute:
    conda install -c pygame
    When asked Proceed ([y]/n)?, press the Enter key (because yes is the default)
[ ] in a terminal, type:
    ipython qtconsole
[ ] copy and paste the following seven lines just after the In [1]:, then press twice on Enter:
    import pygame
    pygame.init()
    w=pygame.display.set_mode([300,300])
    w.fill([128,37,213])
    pygame.display.flip()
    pygame.time.wait(3000)
    pygame.quit()
[ ] press Ctrl+D to quit the ipython console
[ ] to further check the installation, in a Terminal window, type:
    python ~/anaconda/lib/python2.7/site-packages/pygame/examples/chimp.py
    You should be able to play a silly game, including sound (make sure the sound is on, but not too loud).

You must be connected to the Internet! Execute the following commands:
[ ] download and install Python and friends:
    sudo apt-get install python2.7
    sudo apt-get install python-numpy python-scipy python-matplotlib python-pandas
    sudo apt-get install ipython ipython-notebook
    sudo apt-get install python2.7-doc python2.7-examples
    sudo apt-get install python-numpy-doc python-matplotlib-doc
    sudo apt-get install ipython-doc
    sudo apt-get install python-pygame
[ ] type ipython in order to launch an ipython interpreter, then type each of those lines one by one followed by Enter:
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats
    x=np.arange(-5,5,.1)
    y=stats.norm.pdf(x)
    plt.plot(x,y)
    plt.show()
[ ] exit the ipython shell by typing quit() or the keyboard shortcut Ctrl+D
[ ] then you are back to the terminal shell, where you can browse the documentation:
    iceweasel /usr/share/doc/python2.7-doc/html/index.html
    iceweasel /usr/share/doc/python-pygame/index.html
[ ] install and configure git:
    sudo apt-get install git-core
    git config --global user.name "your_user_name"
    git config --global user.email your_email@example.com
[ ] install and test Scratch:
    sudo apt-get install scratch
    scratch
    You should see a new window, where you should be able to grab and move the little mascot.

Instructions to install R are available here:
[ ] check your distribution:
    lsb_release -da
    You should see an output like this one:
    Distributor ID: Debian
    Description: Debian GNU/Linux 7.8 (wheezy)
    Release: 7.8
    Codename: wheezy
[ ] add the CRAN repository to /etc/apt/sources.list:
    sudo sh -c 'echo deb wheezy-cran3/ >> /etc/apt/sources.list'
    For Ubuntu, you might have to leave out the -cran3 after the version codename:
    sudo sh -c 'echo deb vivid/ >> /etc/apt/sources.list'
[ ] run sudo apt-get update
[ ] download the .deb installer for Debian/Ubuntu. If your system is not that recent, go to [] to find the appropriate installer file.
[ ] install R and RStudio:
    sudo apt-get install r-base r-base-core r-base-html
    sudo apt-get install libjpeg62
    sudo dpkg -i rstudio-X.XX.XXX-amd64.deb
[ ] type rstudio in a console to launch RStudio
[ ] install Atom:
    sudo apt-get install atom
    But if you are using Linux, you might already be using a decent text editor and thus won't need Atom.
http://www.pallier.org/lectures/AIP2015/00_Installation/install.html
Opened 5 years ago

Last modified 4 years ago

#12303 new bug

nl_langinfo(CODESET) seems to always return US-ASCII whatever the environment values

Description

My guess is that under Haiku, this function should return UTF-8. I found a discussion about this in the haikuports mailing list. But it is not clear to me what needs to be done (or if there is something to do). Maybe it is not worth the time. Here is a sample program to test this function:

#include <stdio.h>
#include <langinfo.h>

int main()
{
    printf("%s\n", nl_langinfo(CODESET));
    return 0;
}

Here is a sample run:

/boot/src/fpc/tests> printenv LC_TYPE
en_US.UTF-8
/boot/src/fpc/tests> nl_lang_info
US-ASCII

Change History (2)

comment:1 by , 5 years ago

comment:2 by , 4 years ago

I had a similar problem under FreeBSD 10.1 and my test program - just like yours - didn't call setlocale() before calling nl_langinfo(). Here is the corrected test code, which works now under FreeBSD. Just thought I would mention it, in case it fixes your issue under Haiku too.

/* Compile with "gcc48 -W -Wall -o langinfo langinfo_test.c" */
#include <stdio.h>
#include <langinfo.h>
#include <locale.h>

int main()
{
    setlocale(LC_ALL, "");
    printf("%s\n", nl_langinfo(CODESET));
    return 0;
}

This ends up returning the charset computed in src/system/libroot/add-ons/icu/ICUCategoryData.cpp. I have no idea how the code there can end up saying "US-ASCII", however. So it sounds like an actual bug.
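[Editor's aside, not part of the original ticket: on Unix-like systems the same check can be reproduced from Python, whose locale module wraps the same C calls, which makes it convenient for comparing behavior across platforms.]

```python
import locale

# Honor the environment (LANG / LC_* variables), mirroring the
# C call setlocale(LC_ALL, "") from the corrected test program.
locale.setlocale(locale.LC_ALL, "")

# CODESET is the character encoding of the current locale. With an
# en_US.UTF-8 environment this should print "UTF-8"; the bare "C"
# locale typically reports an ASCII codeset instead.
codeset = locale.nl_langinfo(locale.CODESET)
print(codeset)
```

Note that `locale.nl_langinfo` is only available on Unix-like platforms.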
https://dev.haiku-os.org/ticket/12303
In Windows 8, you can still keep writing classic .NET applications as you have done for the past decade. Windows 8, however, adds a new family of applications often referred to as "Metro applications." To be precise, "Metro" is the blanket term used to indicate a collection of design principles that have inspired Microsoft in the creation of the new interface of Windows 8, running side by side with the classic Windows interface. Glimpses of the Metro style are visible in Windows Phone and will likely be visible in the new releases of Microsoft's flagship products in the months to come. I agree with those who say that we can measure the real commitment of Microsoft to Metro with the next version of Office. According to Wikipedia, Metro is a "design language" inspired by the "principles of classic Swiss graphic design" that emphasize cleanliness and readability. If Metro is mainly a design language, then a Metro application is a Windows application whose user interface and user experience are inspired by the Metro principles. In light of this, what's the role of Windows 8, and why does Metro seem to be so tightly bound to it? What's really going to be different for developers?

Meet WinRT

The .NET Framework runs on top of the old Win32 API, and it uses the P/Invoke mechanism to make calls from the dazzling world of managed code to the wilds of COM and C-based APIs. With Windows 8, Microsoft is undertaking an important move: replacing the underlying layer through which the core of the OS functions is exposed to applications. In other words, Microsoft is attempting to touch and restructure some of the pillars of all Windows applications. They're doing so by introducing WinRT, short for Windows Runtime. WinRT is ultimately a new layer of code that is expected to provide the same core services as Win32/COM. You should be able to see immediately the deep impact of such a change: WinRT can't just be plugged in silently while keeping existing .NET applications working as usual.
WinRT is the dawn of a new generation of applications. It goes beyond .NET in the sense that it would ideally require a new, revisited .NET. Not really a realistic scenario, is it? Or, at least, not a scenario that will be realistic in the space of a Windows release; and not a scenario you can enable overnight. That's why Windows 8 is presented as a dual operating system: the classic Windows/.NET model and the new WinRT-based model. WinRT-powered applications are also expected to show off a new and more modern user interface and user experience: the Metro style. However, WinRT and Metro are distinct things. It is mere guesswork at this time, but it seems that you can have WinRT applications equipped with a non-Metro UI; likewise, you could have Metro-inspired applications written for classic .NET. If past history holds, it will take only a few months to see Metro-inspired UI components from major vendors. At that point, the circle will be closed and the message will be clear to everybody: Metro is the recommended UI/UX of the next Windows, and WinRT is the new runtime for building next-generation Windows applications.

WinRT and .NET

What's the relationship between WinRT and today's .NET Framework? As I said, my guess is that in an ideal world, Microsoft would have shipped a brand new .NET Framework working on top of WinRT and would have educated the masses to forget the old and embrace the new. This is ideal, but not realistic, and some common ground must be found. WinRT offers a framework that looks like the familiar .NET Framework, but still has some key differences, clear signs of the underlying idea of reworking the programming framework developers deal with. Some classes are very similar to .NET Framework classes, with only minor differences; some .NET Framework classes have a counterpart in WinRT, but in different namespaces (and renamed and refactored a bit); and some of the .NET Framework classes are not exposed to WinRT applications and are unavailable.
Finally, some new classes make their debut in WinRT. In addition, WinRT is driven by a new set of policies, and this is reflected in the organization of the programming API. For example, all APIs expected to run for longer than a few milliseconds have been designed to be asynchronous. This is a huge change that introduces a significant paradigm shift for most developers. The shift, however, is partially mitigated by some new facilities available at the language level, such as the new await/async keywords, which you may have already seen in F#. Direct file access is also restricted in the name of sandboxing and making new applications ready for a Windows-specific marketplace. That doesn't mean that you cannot permanently store data in WinRT applications: you just have to use a different API, such as the classes stored in the new Windows.Storage namespace. To open and save documents and files, you have new file picker components that replace the standard common dialogs of Windows. You also have an ad hoc WinRT API for accessing the registry. In a nutshell, you have a new API, only a part of which exactly matches the .NET API. WinRT native components can be written with primary .NET languages such as C# and Visual Basic as well as with C++ and JavaScript.

Conclusion

Anybody outside Microsoft today who claims to have a vision regarding WinRT and what Windows 8 will become might be proved wrong in a few months. This is due to the lack of specific and detailed information at this early stage. At any rate, I feel confident stating that the new Windows runtime takes on a huge challenge: improving the Windows programming model and, subsequently, the foundation of Windows applications. To make this happen and push a more modern view of applications, Microsoft had to touch the ground occupied by the .NET Framework. In doing so, it considered and rejected breaking changes in the framework that serves millions of applications today.
That's why we are now getting this "new" thing: the new Windows Runtime.

Dino Esposito is a frequent contributor on Microsoft topics and has written several developer-oriented books for Microsoft Press.
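[Editor's aside: the "async everywhere" policy described above is easiest to feel in code. The following sketch uses Python's analogous async/await machinery purely as an illustration; WinRT itself targets C#, Visual Basic, C++ and JavaScript, and fetch_sensor here is a made-up stand-in, not a real WinRT API.]

```python
import asyncio

# Made-up stand-in for any WinRT-style call that may take more than a
# few milliseconds: instead of blocking, it is awaitable.
async def fetch_sensor():
    await asyncio.sleep(0.01)  # simulate a slow I/O operation
    return 42

async def main():
    # The calling thread is not blocked; execution resumes here
    # once the awaited operation completes.
    reading = await fetch_sensor()
    return reading

result = asyncio.run(main())
print(result)  # 42
```

The design point being illustrated is that long-running operations never block the UI thread: callers must explicitly await them.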
http://www.drdobbs.com/tools/winrt-the-new-runtime-in-windows-8/232200577
About four months ago, when I was working on a project, I wanted to add some buttons on the title bar of windows, near the standard Minimize/Maximize/Close buttons. To do that, I had to draw on the title bar, that is, the non-client area of the window. Although my work was useless on that project and I just wasted my time, at least I am now writing this article about what I have done. In this article, I'll show you how to create special Graphics objects in C# that you can use to draw anywhere on your window, and in my next article, I will explain how to add buttons on the title bar.

Graphics

Well, as you know, in Microsoft .NET you need a Graphics object whenever you want to draw something. The Graphics object must belong to the object you want to draw on. Usually, you have two options to obtain a Graphics for a control (Form or whatever is derived from the System.Windows.Forms.Control class). First, you can handle the Paint message of the control, and second, you can call the CreateGraphics method of the object. In the former case, your drawing will always be on the object! This is because the Paint event is always raised by the .NET subsystem (wrapping the win32 WM_PAINT message) whenever any portion of your window needs to be drawn (i.e. is invalidated, in terms of C++ win32 programming). In the latter case, you draw on your window whenever you want, not whenever your OS wants. But the drawback is that your painting is there only until something passes over your window! I think I am going too far from the subject. For more information about the painting mechanism in .NET and Windows, refer to the MSDN Library (the Paint event, the CreateGraphics method, and the Painting and Drawing subsection of Windows GDI in the Windows SDK Documentation).
Whichever of the two methods you use to obtain the graphics object, you can only draw on the working area of the window (called the client area), that is, anywhere except the border and title bar of your window.

In this article, I am designing a class called WindowGraphics that creates a Graphics object for your entire window, covering both the client and non-client areas; by using it, you can draw wherever you want on your window. You can also use it to draw on portions of some controls that you had no control over before. For example, any place on a TextBox (with usual Graphics objects, you cannot draw on the borders of a TextBox). The class also handles a problem with normal Graphics objects when the RightToLeft and RightToLeftLayout properties of your form are both true. Details of this problem are explained later on.

After downloading the source files attached to the article, you can easily start using the WindowGraphics class by including the WindowGraphics.cs file in your project. It is very simple. You start by creating an instance of the WindowGraphics class, passing to the constructor the control you want to draw on:

// ...
// create a Graphics for the entire form
WindowGraphics wg = new WindowGraphics( this );
// I assume that this line is put somewhere in your form class,
// so the 'this' keyword refers to an instance of the Form class.

Then, simply use the Graphics property of the newly created object to do your drawing:

//.....
wg.Graphics.DrawLine( Pens.Blue, 0, 0, 100, 100 );
//.....

// or, if you have to call many drawing functions, here is the way to reduce
// your typing. This is what I always do...
Graphics g = wg.Graphics;
g.DrawString( "I am on the title bar!",
              new Font( "Tahoma", 10, FontStyle.Bold ),
              Brushes.Black, 0, 4 );
g.FillEllipse( Brushes.Black, this.Width - 40, this.Height - 40, 80, 80 );
// .... other drawing commands...
Finally, call the Dispose method of the object to free any resources it is using. Because this class uses unmanaged resources, it is highly recommended that you do not forget to call the Dispose method!

....
wg.Dispose();
....

You can also use the using block of C#, which calls the Dispose method automatically for you. It is the recommended way:

using ( WindowGraphics wg = new WindowGraphics( this ) )
{
    Graphics g = wg.Graphics;
    // ...
    // do your drawing with 'g'
    // ...
}

Also note that, when you work with this Graphics, the origin is the upper left corner of the whole form rectangle, not its client area. The sample project included in the source code was created using Visual Studio 2008, but the project file should also open in Visual Studio 2005 with no problem, because it is configured to use .NET Framework 2.0. If you are not interested in the details, you can leave the article here and use the class in your project. But if you are, continue reading.

Creating this kind of Graphics object is not a trivial task in .NET. You have to use native methods and win32 calls. As some of you know, the easiest way to do this is to call the GetWindowDC function, passing the handle of your window. The GetWindowDC function belongs to the Windows user32 library. You must import this function first:

using System;
using System.Runtime.InteropServices;
//...
[DllImport( "user32" )]
private static extern IntPtr GetWindowDC( IntPtr hwnd );
// you also need ReleaseDC
[DllImport( "user32" )]
private static extern IntPtr ReleaseDC( IntPtr hwnd, IntPtr hdc );

Then, call the method to create a DC* for the entire window, and then create a Graphics object from the DC:

* DC: Device Context - the objects that in the GDI world are used to draw things, roughly equivalent to Graphics in GDI+.

IntPtr hdc = GetWindowDC( this.Handle );
Graphics g = Graphics.FromHdc( hdc );
// ....
// do your drawing
// ....
// free the resources
g.Dispose();
ReleaseDC( this.Handle, hdc );

I had used this approach in my project at first. Now the problem... If you will never use right-to-left forms, you can skip this section; but if you make software for a right-to-left language (as in my case), or are interested in the subject, read on. Although this problem is not related to this kind of Graphics, I am explaining it because I encountered it here.

You can see that when you set the RightToLeftLayout and RightToLeft properties both to true, the whole coordinate system at the top level gets mirrored. The origin is no longer the upper left corner of the client area of the form; it is the upper right corner instead. I said "at the top level" because it occurs only in the form itself, not for the child controls. For example, when you put a Panel in your form, inside the Panel the origin is still the upper left corner of the panel.

Well, you may guess that when this happens, any Graphics objects you construct for the form must be mirrored. Yes, that's true, but not completely! Try this:

private void Form1_Paint( object sender, PaintEventArgs e )
{
    e.Graphics.DrawLine( Pens.Blue, 0, 0, 100, 100 );
}

By running the project, you see that the line starts from the upper right corner of the form, as expected. Now move another window over the form. What do you see? The line will be drawn from the upper left corner, as if the Graphics were not right to left. Now, when the form is behind another window on your desktop, click its icon on the taskbar to bring it to front (or minimize the form and restore it). The line again is drawn from the upper right corner! I have done a lot of investigation, and tried whatever you can think of to find the reason, and I didn't find anything! It may be a bug in the .NET Framework or Windows.
I am using the latest version of the .NET Framework and Windows XP at the time of writing this article (Service Pack 3 of XP, and .NET 3.5 SP1, which includes .NET 2 Service Pack 2), and the problem is still there. It happens even in Windows Vista. So I decided to do something else.

Here is my solution. I tried to create the Graphics from something that is never right to left: the whole desktop. First, I get the DC for the entire screen, and then do the required transformations and clippings to fit the DC on the visible region of the window. Here is the step-by-step explanation:

IntPtr hdc = GetDC( IntPtr.Zero ); // get DC for the entire screen

By passing zero to the GetDC function as the window handle, we get a DC for the entire screen.

IntPtr hrgn = GetVisibleRgn( hWnd ); // obtain visible clipping region for the window
SelectClipRgn( hdc, hrgn );          // clip the DC with the region

We must retrieve the clipping region of the form and clip the DC with that. Without doing this, we may draw on other windows - places that we do not own. The GetVisibleRgn method is a private method of our class. It returns a handle to the region that the window is currently clipped to. I'll explain the method later. SelectClipRgn is a Win32 API function. It clips the given DC to the given region.

Now the origin of the DC is the upper left corner of the screen, but we want it to be the upper left corner of our form. So we must move the origin:

Rect rect = new Rect();
GetWindowRect( hWnd, rect );
// move the origin from the upper left corner of the screen
// to the upper left corner of the form
SetWindowOrgEx( hdc, -rect.left, -rect.top, IntPtr.Zero );

Finally, create your Graphics from the DC:

Graphics graphics = Graphics.FromHdc( hdc );

You're done.
Now the GetVisibleRgn method:

private IntPtr GetVisibleRgn( IntPtr hWnd )
{
    IntPtr hrgn, hdc;
    hrgn = CreateRectRgn( 0, 0, 0, 0 );
    hdc = GetWindowDC( hWnd );
    int res = GetRandomRgn( hdc, hrgn, 4 ); // the value of SYSRGN is 4.
                                            // Refer to the Windows SDK Documentation.
    ReleaseDC( hWnd, hdc );
    return hrgn;
}

We create an empty region, get the DC of the window, pass the DC to a special function to retrieve the visible region associated with the window, and release the DC. That special function is GetRandomRgn. I don't know what the philosophy behind the name of the function is, but I think that at first it was supposed to do a lot more than just retrieve the clipping region of the window. In any case, it works for us.

The final thing is wrapping it all up in a reusable class. You can see the final class in the code attached to the article. The first reason for me was that I wanted to write something! And I found this class simple enough and well designed. I hope that you learn some tips about object-oriented design, in addition to a few things about Windows GDI.

This is my first article on CodeProject. It has been a long time since I wanted to write about my projects (almost a few years!). I think I have a lot of good experiences to share with others. This is my first step, and I hope that it will continue well enough. I don't know when I will write my next article, but I think it will be about buttons on the title bar. If you liked the article, please vote for it and leave some comments.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
http://www.codeproject.com/script/Articles/View.aspx?aid=29443
The Things Network

In order to use The Things Network (TTN), you should navigate to their website and create/register an account. Enter a username and an email address to verify with their platform. Once an account has been registered, you can register your Pycom module as either a node or a nano-gateway. The steps below detail how to do this.

Create an application

In order to register your device to connect to The Things Network, you must first create an application for the device to belong to. This way the network will know where to send the device's data. Selecting the Applications tab at the top of the TTN console will bring up a screen for registering applications. Click register and a new page, similar to the one below, will open. Enter a unique Application ID as well as a Description & Handler Registration. Now the Pycom module nodes can be registered to send data up to the new application.

Register a Device

To connect nodes to The Things Network, each device needs a globally unique identifier. You can run the following on your Pycom module to retrieve its EUI:

from network import LoRa
import ubinascii

lora = LoRa()
print("DevEUI: %s" % (ubinascii.hexlify(lora.mac()).decode('ascii')))

Once the device has been added, change the Activation Method between OTAA and ABP depending on user preference. This option can be found under the Settings tab.

Register a Nano-Gateway

You can also set up your Pycom module to act as a gateway with The Things Network. The code required to do this can be found here.
https://docs.pycom.io/gettingstarted/registration/lora/ttn.html
Structuring datasets to facilitate analysis (Wickham 2014)

So, you've sat down to analyze a new dataset. What do you do first? In episode 11 of Not So Standard Deviations, Hilary and Roger discussed their approaches. I'm with Hilary on this one: you should make sure your data is tidy. Before you do any plots, filtering, transformations, summary statistics, regressions... Once you get to a tidy dataset, it's relatively easy to do all of those. Today we'll talk about what makes a dataset tidy and how to get there.

Hadley Wickham (who I think is tied with Wes for number of shoutouts in this series...) kindly summarized tidiness as a dataset where

- each variable forms a column,
- each observation forms a row, and
- each type of observational unit forms a table.

And most of the time it's just the first two that matter. As quoted at the top, this really is about facilitating analysis: going as quickly as possible from question to answer. We'll cover a few methods that help get you there.

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

pd.options.display.max_rows = 10
sns.set(style='ticks', context='talk')
%matplotlib inline

Note: I'm reusing this example from an earlier post, based on this StackOverflow question. We'll grab some NBA game data from basketball-reference.com using pandas' read_html function, which returns a list of DataFrames.

tables = pd.read_html("")
games = tables[0]
games.head()

Side note: pandas' read_html is pretty good. On simple websites it almost always works. It provides a couple of parameters for controlling what gets selected from the webpage if the defaults fail. I'll always use it first, before moving on to BeautifulSoup or lxml if the page is more complicated.

As you can see, we have a bit of general munging to do before tidying. Each month slips in an extra row of mostly NaNs, the column names aren't too useful, and we have some dtypes to fix up.
column_names = {'Date': 'date', 'Start (ET)': 'start', 'Unamed: 2': 'box',
                'Visitor/Neutral': 'away_team', 'PTS': 'away_points',
                'Home/Neutral': 'home_team', 'PTS.1': 'home_points',
                'Unamed: 7': 'n_ot'}

games = (games.rename(columns=column_names)
    .dropna(thresh=4)
    [['date', 'away_team', 'away_points', 'home_team', 'home_points']]
    .assign(date=lambda x: pd.to_datetime(x['date'], format='%a, %b %d, %Y'))
    .set_index('date', append=True)
    .rename_axis(["game_id", "date"])
    .sort_index())
games.head()

A quick aside on that last block:

- dropna has a thresh argument: rows with fewer than thresh non-missing values are dropped. We used it to remove the "Month headers" that slipped into the table.
- assign can take a callable. This lets us refer to the column in the previous step of the chain. Otherwise we would have to assign temp_df = games.dropna()... and then do the pd.to_datetime on that.
- set_index has an append keyword. We keep the original index around since it will be our unique identifier per game.
- we use .rename_axis to set the index names (this behavior is new in pandas 0.18; before, .rename_axis only took a mapping for changing labels).

The Question: How many days of rest did each team get between each game?

Whether or not your dataset is tidy depends on your question. Given our question, what is an observation? In this case, an observation is a (team, game) pair. Unfortunately we don't have that yet. We have two observations per row, one for home and one for away. We'll fix that with pd.melt.

melt works by taking observations that are spread across columns (away_team, home_team) and melting them down into one column with multiple rows. However, we don't want to lose the metadata (like game_id and date) that is shared between the observations. By including those columns as id_vars, the values will be repeated as many times as needed to stay with their observations.
tidy = pd.melt(games.reset_index(),
               id_vars=['game_id', 'date'],
               value_vars=['away_team', 'home_team'],
               value_name='team')
tidy.head()

The DataFrame tidy meets our rules for tidiness: each variable is in a column, and each observation (team, date pair) is on its own row. Now the translation from question ("How many days of rest?") to operation ("date of today's game - date of previous game - 1") is direct:

# For each team... get number of days between games
tidy.groupby('team')['date'].diff().dt.days - 1

0       NaN
1       NaN
2       NaN
3       NaN
4       NaN
         ...
2455    7.0
2456    1.0
2457    1.0
2458    3.0
2459    2.0
dtype: float64

That's the essence of tidy data, the reason why it's worth considering what shape your data should be in. It's about setting yourself up for success, so that the answers naturally flow from the data (just kidding, it's still difficult. But hopefully less so).

Let's assign that back into our DataFrame:

tidy['rest'] = tidy.sort_values('date').groupby('team').date.diff().dt.days - 1
tidy.dropna().head()

To show the inverse of melt, let's take the rest values we just calculated and place them back in the original DataFrame with a pivot_table.

by_game = (pd.pivot_table(tidy, values='rest',
                          index=['game_id', 'date'],
                          columns='variable')
             .rename(columns={'away_team': 'away_rest',
                              'home_team': 'home_rest'}))
df = pd.concat([games, by_game], axis=1)
df.dropna().head()

One somewhat subtle point: an "observation" depends on the question being asked. So really, we have two tidy datasets: tidy for answering team-level things, and df for game-level things. One potentially interesting question is "what was each team's average days of rest, at home and on the road?"
With a tidy dataset (the DataFrame tidy, since it's team-level), Seaborn makes this easy: g = sns.FacetGrid(tidy, col='team', col_wrap=6, hue='team') g.map(sns.barplot, 'variable', 'rest'); An example of a game-level statistic is the distribution of rest differences in games: df['home_win'] = df['home_points'] > df['away_points'] df['rest_spread'] = df['home_rest'] - df['away_rest'] df.dropna().head() delta = (by_game.home_rest - by_game.away_rest).dropna().astype(int) ax = (delta.value_counts() .reindex(np.arange(delta.min(), delta.max() + 1), fill_value=0) .sort_index() .plot(kind='bar', color='k', width=.9, rot=0, figsize=(12, 6)) ) sns.despine() ax.set(xlabel='Difference in Rest (Home - Away)', ylabel='Games') plt.savefig('../content/images/tidy_diff_in_rest.svg', transparent=True) Or the win percent by rest difference: fig, ax = plt.subplots(figsize=(12, 6)) sns.barplot(x='rest_spread', y='home_win', data=df.query('-3 <= rest_spread <= 3'), color='#4c72b0', ax=ax) sns.despine() rest = (tidy.groupby(['date', 'variable']) .rest.mean() .dropna()) rest.head() date variable 2015-10-28 away_team 0.000000 home_team 0.000000 2015-10-29 away_team 0.333333 home_team 0.000000 2015-10-30 away_team 1.083333 Name: rest, dtype: float64 rest is in a "long" form since we have a single column of data, with multiple "columns" of metadata (in the MultiIndex). We use .unstack to move from long to wide. rest.unstack().head() unstack moves a level of a MultiIndex (innermost by default) up to the columns. stack is the inverse. rest.unstack().stack() date variable 2015-10-28 away_team 0.000000 home_team 0.000000 2015-10-29 away_team 0.333333 home_team 0.000000 2015-10-30 away_team 1.083333 ... 2016-04-11 home_team 0.666667 2016-04-12 away_team 1.000000 home_team 1.400000 2016-04-13 away_team 0.500000 home_team 1.214286 dtype: float64 With .unstack you can move between those APIs that expect their data in long format and those APIs that work with wide-format data.
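The stack/unstack round trip can be checked on a small hand-built Series shaped like `rest` (the values below are made up):

```python
import pandas as pd

# Long-form Series with a (date, variable) MultiIndex, mirroring `rest`.
long = pd.Series(
    [0.0, 0.5, 1.0, 0.25],
    index=pd.MultiIndex.from_product(
        [["2015-10-28", "2015-10-29"], ["away_team", "home_team"]],
        names=["date", "variable"]),
    name="rest")

# unstack moves the innermost index level ('variable') up to the columns.
wide = long.unstack()
print(wide.shape)                    # (2, 2)

# stack is the inverse: the same values come back in long form.
print((wide.stack() == long).all())  # True
```
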
For example, DataFrame.plot() works with wide-form data, one line per column. ax = (rest.unstack() .query('away_team < 7') .rolling(7) .mean().plot(figsize=(12, 6), linewidth=3, legend=False)) ax.annotate("Home", (rest.index[-1][0], 1.02), color='g', size=14) ax.annotate("Away", (rest.index[-1][0], 0.82), color='b', size=14) sns.despine() The most convenient form will depend on exactly what you're doing. When interacting with databases you'll often deal with long-form data. Pandas' DataFrame.plot often expects wide-form data, while seaborn often expects long-form data (more on that in a future post). Regressions will expect wide-form data. Either way, it's good to be comfortable with stack and unstack (and MultiIndexes) to quickly move between the two. df['home_win'] = df.home_points > df.away_points wins = ( pd.melt(df.reset_index(), id_vars=['game_id', 'date', 'home_win'], value_name='team', var_name='is_home', value_vars=['home_team', 'away_team']) .assign(win=lambda x: x.home_win == (x.is_home == 'home_team')) .groupby(['team', 'is_home']) .win .agg({'n_wins': 'sum', 'n_games': 'count', 'win_pct': 'mean'}) ) wins.head() Pause for visualization, because why not: g = sns.FacetGrid(wins.reset_index(), hue='team', size=10, aspect=.5, palette=['k']) g.map(sns.pointplot, 'is_home', 'win_pct').set(ylim=(0, 1)); (It'd be great if there were a library built on top of matplotlib that auto-labeled each point decently well. Apparently this is a difficult problem to do in general). g = sns.FacetGrid(wins.reset_index(), col='team', hue='team', col_wrap=5) g.map(sns.pointplot, 'is_home', 'win_pct'); Those two graphs show that most teams have a higher win percent at home than away. So we can continue to investigate. Let's aggregate over home / away to get an overall win percent per team.
win_percent = ( # Use sum(wins) / sum(games) since I don't # know if teams play the same number of games at # home as away wins.groupby(level='team', as_index=True) .apply(lambda x: x.n_wins.sum() / x.n_games.sum()) ) win_percent.head() team Atlanta Hawks 0.585366 Boston Celtics 0.585366 Brooklyn Nets 0.256098 Charlotte Hornets 0.585366 Chicago Bulls 0.512195 dtype: float64 win_percent.sort_values().plot.barh(figsize=(6, 12), width=.85, color='k') plt.tight_layout() sns.despine() plt.xlabel("Win Percent"); Is there a relationship between overall team strength and their home-court advantage? (wins.win_pct .unstack() .assign(**{'Home Win % - Away %': lambda x: x.home_team - x.away_team, 'Overall %': lambda x: (x.home_team + x.away_team) / 2}) .pipe((sns.regplot, 'data'), x='Overall %', y='Home Win % - Away %') ) sns.despine() And get that back into df. You could use pd.merge, but I prefer .map when joining a Series. df = df.assign(away_strength=df['away_team'].map(win_percent), home_strength=df['home_team'].map(win_percent), point_diff=df['home_points'] - df['away_points'], rest_diff=df['home_rest'] - df['away_rest']) df.head() import statsmodels.formula.api as sm df['home_win'] = df.home_win.astype(int) # for statsmodels mod = sm.logit('home_win ~ home_strength + away_strength + home_rest + away_rest', df) res = mod.fit() res.summary() Optimization terminated successfully. Current function value: 0.552792 Iterations 6 The strength variables both have large coefficients (really we should be using some independent measure of team strength here; win_percent is showing up on both the left and right side of the equation). The rest variables don't seem to matter as much. It's easy to explore variations in the formula. (sm.Logit.from_formula('home_win ~ strength_diff + rest_spread', df.assign(strength_diff=df.home_strength - df.away_strength)) .fit().summary()) Optimization terminated successfully.
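The `.map`-as-join trick mentioned above works like a one-column lookup: each value in the column is looked up in the Series index. A tiny sketch with made-up win percentages:

```python
import pandas as pd

# Hypothetical per-team win percentages, indexed by team name.
win_percent = pd.Series({"Hawks": 0.585, "Bulls": 0.512})

games = pd.DataFrame({"home_team": ["Hawks", "Bulls", "Hawks"]})

# .map looks each team name up in the Series index -- a lightweight
# alternative to pd.merge when joining a single Series.
games["home_strength"] = games["home_team"].map(win_percent)
print(games["home_strength"].tolist())  # [0.585, 0.512, 0.585]
```

Teams missing from the Series would simply get NaN, which is also the behavior of a left merge.
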
Current function value: 0.553499 Iterations 6 mod = sm.Logit.from_formula('home_win ~ home_rest + away_rest', df) res = mod.fit() res.summary() Optimization terminated successfully. Current function value: 0.676549 Iterations 4
http://nbviewer.jupyter.org/gist/TomAugspurger/1dc8c1a2469cef04a7d8feb5c1d5a3a8
11 May 2010 09:20 [Source: ICIS news] By Fanny Zhang GUANGZHOU (ICIS news)--China must further tighten access to credit to contain the rapid increase in consumer prices, while remaining mindful of the need to ensure healthy growth in industrial production and investment, analysts said on Tuesday. A slew of economic data in April showed sustained recovery in economic activity, but this was accompanied by a strong pick-up in inflation. The country’s consumer price index (CPI) rose at an accelerated pace of 2.8% year on year in April, while its producer price index (PPI) also grew faster, at 6.8%, according to the National Bureau of Statistics (NBS). In March, CPI was up 2.4% year on year, while PPI was up 5.9%. "The data indicated that inflation (CPI) is nearing the 3% warning level and we expect that prices would keep rising in the next few months,” said Wang Hu, a Shanghai-based analyst at Guotai Junan Securities (GTJA). The trend makes the need for tighter control on credit urgent, Wang said. Meanwhile, growth in industrial production moderated. The same trend was seen in fixed-asset investment, which grew at 25.4% in April, down from 26.3% in March, NBS data showed. Higher production went hand in hand with an export recovery, with shipments last month jumping 30.5% year on year to $119.9bn, based on official data. Overall imports jumped 49.7% to $118.2bn in April. "[The] overall economy is in good shape. Both investment and consumption have kept rising since the beginning of this year,” said Liu Qiyuan, chief analyst at Shenzhen-based China Merchants Securities (CMS). Liu said that the potential risk of overheating had been significantly reduced. Aggressive lending, however, had left too much money floating around in China, and consumer prices had begun to spike.
To contain inflation and prevent asset bubbles, the Chinese authorities used a combination of moral suasion, to induce banks to lend less, and outright mopping-up operations through the issuance of central bank bills. But new loans continued to grow at unprecedented speed, logging a 30.1% surge last month to CNY774bn, based on central bank data released on Tuesday. Liu said a lot of funds still needed to be siphoned off from the system to effectively keep a lid on soaring prices. “We feel that the government would take measures, like long-term bill issuance and debt yield hikes, to speed up liquidity grips. Otherwise, inflation may go out of control,” Liu said.
http://www.icis.com/Articles/2010/05/11/9358086/china-needs-tighter-credit-controls-to-stave-off-inflation.html
Python - List sort() Method The Python list sort() method sorts the elements of the list in ascending or descending order. This method has an optional argument which can be used to pass a function specifying the sorting criteria.
Syntax
list.sort(reverse = True | False, key = function)
Parameters
reverse - Optional. If set to True, the list is sorted in descending order. Defaults to False (ascending order).
key - Optional. A function that takes a single element and returns the value to sort by.
Return Value
None.
Example: Reverse order sorting
In the example below, the sort() method is used to sort all elements of the list called MyList in descending order.
MyList = [1, 10, 5, 7, 3, 6, 5]
MyList.sort(reverse = True)
print(MyList)
The output of the above code will be:
[10, 7, 6, 5, 5, 3, 1]
Example: Reverse order sorting with function criteria
In the example below, a function called name_length() is used with the list sort() method, which specifies the length of each element as the sorting criterion.
def name_length(x):
    return len(x)

MyList = ['Marry', 'Sam', 'John', 'Jo']
MyList.sort(reverse = True, key = name_length)
print(MyList)
The output of the above code will be:
['Marry', 'John', 'Sam', 'Jo']
❮ Python List Methods
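As a further hedged example of the key parameter (not from the original page): a built-in like str.lower can be passed directly, giving a case-insensitive sort. list.sort() is stable, so elements with equal keys keep their original relative order.

```python
# Case-insensitive sort using str.lower as the key function.
names = ["banana", "Apple", "cherry"]
names.sort(key=str.lower)
print(names)  # ['Apple', 'banana', 'cherry']
```
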
https://www.alphacodingskills.com/python/notes/python-list-sort.php
I've seen numerous posts about this on the list. The FAQ says that it's a Xerces problem, but after I tried many different parsers/versions, none of them seemed to help. So I traced the problem and yes, it has nothing to do with the parser I was using! To fix the problem in version 2.3.1, open org.apache.soap.Fault.java, go to line 316 and add these lines:

if (null != namespaceURI && namespaceURI.length() == 0) {
    namespaceURI = null;
}

They go right before the line

if (namespaceURI == null || namespaceURI.equals(Constants.NS_URI_SOAP_ENV)) {

which is the problem, as it checks namespaceURI for null but not for an empty string. After this I tried different versions of Xerces, Crimson and Weblogic's parser and they all worked fine.

Nedim Colic Software Architect MediaOcean p: 404-879-2537 f: 404-885-9949
http://mail-archives.apache.org/mod_mbox/xml-soap-user/200206.mbox/%3C1AFCE2A881EAD34A92736B11E4AF7E9E099153@moexchg.mediaocean.com%3E
Toolkit: A Front-End Framework for the Modern Web Titon Toolkit, or simply Toolkit, is a project that I’ve been working on in my free time for the past 4 years. It started out as a MooTools UI framework, which slowly transitioned to jQuery, with plans to be vendorless for 3.0. So why did I write another framework? At its inception, the world of “CSS/JavaScript frameworks” was still young, with Bootstrap and Foundation being about a year old. I was intrigued with the concept of a front-end framework and set out to build my own, with the main selling point being customizability and extensibility. So, what is Toolkit exactly? Toolkit is a front-end framework that provides a collection of powerful state-based, role-specific user interface components and utility classes for the responsive, mobile, and modern web. It makes use of the latest and greatest in technology — HTML5 for semantics, CSS3 for animations and styles, Sass for CSS pre-processing, Gulp for task and package management, and powerful new browser APIs for the JavaScript layer, just to name a few. The core of Toolkit is based on strict but important design principles, which include responsive design, mobile-first design, semantic markup, progressive enhancement, graceful degradation, continuous integration, and configuration over convention. These principles ultimately shape the decisions behind Toolkit. So, is Toolkit just another front-end UI framework? Yes but, as mentioned, with some key differences: Toolkit was built to be extremely extensible, easily customizable, and efficiently architected. Let’s look at some of its unique features. Decoupled JavaScript, CSS, and HTML A running paradigm in front-end development is tying JavaScript to a fixed CSS structure via class names as well as a fixed HTML structure. Toolkit disagrees with this approach and strives to decouple the CSS, JavaScript, and HTML as much as possible, which opens up the possibility of customizable alternatives. 
Toolkit mitigates many coupling issues by requiring specific data attributes — all of which are used for element traversal, look-up, and event binding. The following example uses Toolkit’s carousel component as a proof of concept. <div class="carousel" data-carousel> <div class="carousel-items"> <ul data-carousel-items> <li>...</li> <li>...</li> <li>...</li> </ul> </div> <button type="button" class="carousel-prev" data-carousel-prev> </button> <button type="button" class="carousel-next" data-carousel-next> </button> </div> With decoupling in place, custom HTML is now possible, which isn’t the case when using alternative frameworks. No longer is the markup tied to the JavaScript component; the JavaScript component is now tied to the markup via data-* attributes. Want to add new markup to a component? Go for it! Want to change the markup to match the project? Feel free! Want to remove component functionality? Remove away! Data attributes provide what we think is a much better level of customization. Easier CSS Styling Tired of having to overwrite styles? Or dealing with bloat? Toolkit sure was. Because of this, the CSS found in Toolkit is extremely lightweight as it only defines the very bare minimum for the component to function correctly — mainly layout and structural styles. You could say that Toolkit is a themeless and styleless front-end framework. By being themeless, Toolkit is easy to style, and even easier to integrate. Furthermore, Toolkit opted out of providing Sass variables for CSS theme customization (e.g. for border size, background color, text size, font family, etc). If you want to style an element, you can do so the old fashioned way, using CSS (or Sass, or Less)! You can also take this a step further by integrating Toolkit as a Compass extension, which allows for Toolkit’s Sass files to be imported into scope and compiled directly in your own Sass files. 
Customizable CSS Class Names Another pain point found in existing frameworks is CSS class name collision. Toolkit resolves this issue in one of two ways. The first way is through customizable Sass variables that allow most CSS class names to be customized. Using this approach will require compilation of the Sass files, either through a Compass extension, or in the source directly. // Built-in BEM classes! $carousel-class-items: bem("carousel", "slides"); The second approach allows for global namespacing by prefixing classes. This works wonders when integrating the framework into an existing codebase where collisions are abundant. Enabling namespaces is as easy as modifying a Sass variable and a JavaScript property. $namespace: "tk-"; // Sass Toolkit.namespace = 'tk-'; // JavaScript Do note, however, that namespaces are not applied to state, animation, or behavioral class names. Extensible JavaScript The entire JavaScript layer in Toolkit is built around a flexible inheritance based object-oriented class system. Each class manages its own state, events, and behaviors, which allow for complex interactions as each instance is unique. Since this class layer is so flexible, it allows for custom classes to be written, or existing classes to be extended via inheritance. var Toolkit.Tooltip = Toolkit.Component.extend({ constructor: function() { // ... } }); On top of this, each class supports a set of options for customizability. These options can be set globally, through the constructor, or as data attributes. Option groups and even responsive options are built into the core. 
$('.carousel').carousel({ itemsToShow: 1, itemsToCycle: 1, autoCycle: false, responsive: { tablet: { breakpoint: '(min-width: 641px)', itemsToShow: 2 }, desktop: { breakpoint: '(min-width: 1281px)', itemsToShow: 3 } }, groups: { static: { infinite: false, loop: false }, dynamic: { infinite: true, autoCycle: true } } }); Flexbox Support Although experimental, Toolkit offers built-in flexbox support through the Flex component. The Flex component shines in the building of layout and grid based structures through the concept of regions and blocks. A region is an object that contains blocks or other regions, while a block is an object that contains content and is aligned within the main and cross axis. Although being analogous to rows and columns, regions and blocks are packaged with additional support for growing, shrinking, ordering, wrapping, nesting, alignment, and responsiveness. <div class="region region--grid flow-center"> <div class="block no-shrink">...</div> <div class="block">...</div> <div class="block order-2">...</div> <div class="block order-1">...</div> <div class="block">...</div> <div class="block no-grow">...</div> </div> Feature Packed Besides the highlights already mentioned, Toolkit supports an array of features that include: - ARIA support - Right-to-left (RTL) support - BEM naming convention - Option groups - Responsive options - Nested components - Integrated debugging - Event hooks - Customizable templates - Automatic conflict resolution - New jQuery events and methods - New Sass mixins and functions - Modular architecture with custom builds - Smaller CSS and JS filesizes - Robust testing suite - Travis CI support - And more… If you’d like to test out some of Toolkit’s components, you can visit this interactive demo page. Down the Pipeline The JavaScript ecosphere is constantly evolving with new technology, functionality, and specifications. 
Toolkit aims to be a part of this evolution by continuously staying in sync with the latest JavaScript developments. The roadmap as it currently stands includes the following breaking, but interesting, changes for the next 3.0 major release, some of which have already started development. - Target evergreen browsers and their previous 3 releases. - Remove jQuery as a dependency and polyfill any functionality not present. - Rewrite the JavaScript layer using ECMAScript 6 functionality like classes, arrow functions, modules, generators, promises, and more. - Integrate Babel for ES6 -> ES5 compilation. - Integrate template rendering instead of DOM manipulation. - Look into using webpack as the bundler and build tool. - Add Less support by integrating a Sass to Less transpiler. - Rewrite components using a flux-based uni-directional data flow system. There’s also some discussion about integrating with external frameworks, but this is currently under RFC. - Polyfill integration for missing browser features - Custom web components through Polymer or another service - React and Toolkit component integration Why not help with my work on Toolkit by offering some advice on the direction it should take? Community feedback and contributions are very much appreciated! I’m always looking for new team members and contributors, so if you’re interested, come chat in #titon on freenode.net. In Closing It’s been a wonderful experience showcasing Toolkit and its features to all of you. I hope you enjoyed it and find as much use out of Toolkit as I have. If you’re looking for any more information on Toolkit, I suggest visiting the official website, our Twitter account, or the GitHub repo. Cheers! Replies Honestly, thanks for reading the article. I appreciate it. I do agree that this article is a self promotion type of article, but I wouldn’t label it shameless. I shouldn’t be ashamed of my work, or this article, as I’m simply talking about it, not forcing anyone to use it. Just my two cents.
I’m the editor for the HTML/CSS content and I can tell you right now that we don’t have enough of this type of “self promotion”. I would love it if more framework, library, plugin, and tool authors would write honest, down-to-earth articles on their experience building their projects and how those projects can help developers. When the tool is open source, it’s hardly “shameless”. In fact, we pay for these articles just like any others. Of course, that doesn’t mean we only want promo articles. We aren’t going to publish too many like this, but I don’t think we do enough of these. Anything to help new tools get noticed is more than fine by me. No problem, don’t worry about it. And just to be clear: I kind of took your comment to be somewhat half-joking anyhow, so I don’t think it’s a big deal. You did say “nice” so to me you were saying “good job, self promoting is shameless, but that’s ok”… Maybe @RyanReese was a little overly sensitive on this one. First off, to call this “shameless self-promotion” is disingenuous. Besides, when someone has put this much effort to something - in their free time, no less - they’re quite entitled to shout about it! As for the framework itself. I’ll be honest, my first thought was predictable - “another front-end framework?!?”. But then I looked at it in more detail, and I like a lot of what I see. I’d like to make some comments and observations in list-form, if I may? I’m not all that keen on some of it visually, but that’s a moot point - since it’s so customisable. I love that the styles are primarily structural, which should make it much easier to style and make unique. Having said that, the demo site might benefit from having alternative “themes” to look at; partly because really good-looking examples will “grab” people, partly because it demonstrates the flexibility. My biggest bug-bear about a lot of JQuery plugins and the like is that they force very specific markup structure on you. Things like carousels in particular. 
If Toolkit provides more flexibility, that’s a great plus-point in my book. The JS being class/component based is great. In fact, modern approaches all round. I particularly like the fact that it uses CSS3 animations where possible. From a personal point-of-view, being RequireJS-friendly is a big plus point. No shims for Backbone work, which is great! Again personal preference, but being SASS-based rather than Less is alright by me! Personal preference once again, but I’d have preferred Grunt to Gulp - but you can’t please everyone! Lastly, if there’s one thing I think might really help the accompanying website, it’s trying to cram as many of the components and styles onto a single page as possible, to make it easier to see what it offers at a glance; having a drop-down and forward/back arrows to browse through the framework is good for viewing components in isolation, but a “single page demo” would, I think, make it more accessible. Overall though, really good work! Two things, First I want to thank Sitepoint for publishing this article on Toolkit. I enjoy reading these types of articles. Second I want to thank the author, milesj for making this framework available to us all and for an outstanding job of writing the article. I fully intend to give this framework workout once a current project is completed. I like what I’ve read and am eager to try out the framework. Thanks Steve 32 more replies
https://www.sitepoint.com/toolkit-front-end-framework-modern-web/
3. Semantics, structure, and APIs of HTML documents

3.1. Documents

Every XML and HTML document in an HTML user agent is represented by a Document object. [DOM] The document’s address is the URL associated with a Document (as defined in the DOM standard). It is initially set when the Document is created, but it can change during the lifetime of the Document; for example, it changes when the user navigates to a fragment on the page and when the pushState() method is called with a new URL. [DOM]

Let responsible document be the document specified by the script’s settings object, and let CSP list be the CSP list of the browsing context’s active document. Navigate the browsing context to a new response whose body is source, whose CSP list is CSP list and whose HTTPS state is HTTPS state, with the exceptions enabled flag set.

The referrer attribute returns the address of the Document from which the user navigated to this one, unless it was blocked or there was no such document, in which case it returns the empty string. The noreferrer link type can be used to block the referrer.

A Document object that falls into one of the following conditions is a cookie-averse Document object:
- A Document that has no browsing context.
- A Document whose address does not use a server-based naming authority.

The lastModified attribute returns the date and time of the Document's source file's last modification.

A Document is said to have an active parser if it is associated with an HTML parser or an XML parser that has not yet been stopped or aborted. The readyState IDL attribute must, on getting, return the current document readiness.

3.1.3. DOM tree accessors

The html element of a document is the document’s root element, if there is one and it’s an html element, or null otherwise.

- document . head - Returns the head element.

The head element of a document is the first head element that is a child of the html element, if there is one, or null otherwise. The head attribute, on getting, must return the head element of the document (a head element or null).
The title attribute must, on getting, run the following algorithm:

If the root element is an svg element in the SVG namespace, then let value be a concatenation of the data of all the child Text nodes of the first title element in the SVG namespace that is a child of the root element. [SVG] Otherwise, let value be a concatenation of the data of all the child Text nodes of the title element, in tree order, or the empty string if the title element is null. Strip and collapse whitespace in value. Return value.

On setting, the steps corresponding to the first matching condition in the following list must be run:

- If the root element is an svg element in the SVG namespace [SVG]: Let element be the first title element in the SVG namespace that is a child of the root element, if any. If there isn’t one, create a title element in the SVG namespace, insert it as the first child of the root element, and let element be that element. [SVG] Act as if the textContent IDL attribute of element was set to the new value being assigned.

- If the root element is in the HTML namespace: If the title element is null and the head element is null, then abort these steps. If the title element is null, then create a new title element and append it to the head element, and let element be the newly created element; otherwise, let element be the title element. Act as if the textContent IDL attribute of element was set to the new value being assigned.

The body element of a document is the first child of the html element that is either a body element or a frameset element. If there is no such element, it is null.

The body attribute, on getting, must return the body element of the document (either a body element, a frameset element, or null). On setting, the following algorithm must be run:

If the new value is not a body or frameset element, then throw a HierarchyRequestError exception and abort these steps. Otherwise, if the new value is the same as the body element, do nothing and abort these steps.
Otherwise, if the body element is not null, then replace that element with the new value in the DOM, as if the root element’s replaceChild() method had been called with the new value and the incumbent body element as its two arguments respectively, then abort these steps. Otherwise, if there is no root element, throw a HierarchyRequestError exception and abort these steps. Otherwise, the body element is null, but there’s a root element. Append the new value to the root.

The images attribute must return an HTMLCollection rooted at the Document node, whose filter matches only img elements.

The getElementsByName(name) method takes a string name, and must return a live NodeList containing all the HTML elements in that document that have a name attribute whose value is equal to the name argument (in a case-sensitive manner), in tree order. When the method is invoked on a Document object again with the same argument, the user agent may return the same object as the one returned by the earlier call. In other cases, a new NodeList object must be returned.

- document . currentScript - Returns the script element that is currently executing. In the case of reentrant script execution, returns the one that most recently started executing amongst those that have not yet finished executing. Returns null if the Document is not currently executing a script element (e.g., because the running script is an event handler, or a timeout).

The currentScript attribute, on getting, must return the value to which it was most recently initialized. When the Document is created, the currentScript must be initialized to null.

If the element is an iframe element, then return the WindowProxy object of the nested browsing context represented by that iframe element, and abort these steps. Otherwise, if elements has only one element, return that element and abort these steps.

3.1.4.
Loading XML documents

partial interface XMLDocument {
  boolean load(DOMString url);
};

The load(url) method must run the following steps:

Let document be the XMLDocument object on which the method was invoked. Parse url, relative to the entry settings object. If this is not successful, throw a "SyntaxError" DOMException and abort these steps. Otherwise, let urlRecord be the resulting URL record. If urlRecord’s origin is not the same as the origin of document, throw a "SecurityError" DOMException and abort these steps. Remove all child nodes of document, without firing any mutation events. Set the current document readiness of document to "loading". Run the remainder of these steps in parallel, and return true from the method. Let result be a Document object. Let success be false. Let request be a new request whose URL is urlRecord, client is entry settings object, destination is "subresource", synchronous flag is set, mode is "same-origin", credentials mode is "same-origin", and whose use-URL-credentials flag is set. Let response be the result of fetching request. If response’s Content-Type metadata is an XML MIME type, then run these substeps: Create a new XML parser associated with the result document. Pass this parser response’s body. ... containing the new children had been inserted. Fire a simple event named load at document.

[SameObject] readonly attribute DOMStringMap dataset;

// user interaction
attribute boolean hidden;
void click();
attribute long tabIndex;
void focus();
void blur();
attribute DOMString accessKey;
attribute boolean draggable;
[SameObject, PutForwards=value] readonly attribute DOMTokenList dropzone;
attribute HTMLMenuElement?
contextMenu;
attribute boolean spellcheck;
void forceSpellCheck();
};

HTMLElement implements GlobalEventHandlers;
HTMLElement implements DocumentAndElementEventHandlers;
HTMLElement implements ElementContentEditable;

interface HTMLUnknownElement : HTMLElement { };

The HTMLElement interface holds methods and attributes related to a number of disparate features, and the members of this interface are therefore described in various different sections of this specification.

The HTMLUnknownElement interface must be used for HTML elements that are not defined by this specification (or other applicable specifications). Except where otherwise specified, attributes on HTML elements may have any string value, including the empty string. Except where explicitly stated, there is no restriction on what text can be specified in such attributes.

..., except for template elements, where the children are those in the template contents (a separate DocumentFragment assigned to the element when the element is created):

- §3.2.4.2.1 Metadata content
- §3.2.4.2.2 Flow content
- §3.2.4.2.3 Sectioning content
- §3.2.4.2.4 Heading content
- §3.2.4.2.5 Phrasing content
- §3.2.4.2.6 Embedded content
- §3.2.4.2.7 Interactive content

meter nav noscript object ol output p picture pre progress q ruby s samp script section small span strong sub sup §4.3.10 meter noscript object output picture progress q ruby s samp script small span strong sub sup

content model: Text.)
[WAI-ARIA] The following event handler content attributes may be specified on any HTML element: onabort onblur* oncancel oncanplay oncanplaythrough onchange onclick onclose oncontextmenu oncopy oncuechange oncutwheel onpaste onpause onplay onplaying onprogress onratechange onreset onresize* onscroll* onseeked onseeking onselect onshow onstalled onsubmit onsuspend ontimeupdate ontoggle. 3.2.5.1. The id attribute. idattribute. 3.2.5. If the element is a link, style, dfn, abbr, or menuitemelement,.3. language of HTML documents is indicated using a lang attribute (on the HTML element itself, to indicate the primary language of the document, and on individual elements, to indicate a change in language). It provides an explicit indication to user agents about the language of content, so an appropriate language dictionary can be used and, in the case of screen readers and similar assistive technologies with voice output, the content is pronounced using the correct voice / language library (where available). Setting of a language using the lang attribute menuit, submit button, or reset button state.5. The xml:base attribute (XML only) The xml:base attribute is defined in XML Base. [XMLBASE] The xml:base attribute may be used on html elements of XML documents. Authors must not use the xml:base attribute on html elements in HTML documents. 3.2.5 root element and the dirattribute is not in a defined state (i.e., it is not present or has an invalid value) - If the element is an inputelement whose typeattribute is in the Telephone state, E-mail state, and root menuit. dirIDL attribute on an element must reflect the dircontent.7. The class attribute Every HTML element may have a class attribute specified. The attribute, if specified, must have a value that is a set of space-separated tokens representing the various classes that the element belongs to. classattribute] 3.2.5.8. The style attribute All html elements may have the style content attribute set. 
This is a CSS styling attribute as defined by the CSS Styling Attribute Syntax specification. [CSS-STYLE-ATTR] In user agents that support CSS, the attribute’s value must be parsed when the attribute is added or has its value changed, according to the rules given for CSS styling attributes. [CSS-STYLE-ATTR] However, if the Should element’s inline behavior be blocked by Content Security Policy? algorithm returns " Blocked" when executed upon the attribute’s element and " style attribute",. spanelement and the styleattribute to make those words show up in the relevant colors in visual media. <p>My sweat suit is <span style="color: green; background: transparent">green</span> and my eyes are <span style="color: blue; background: transparent">blue</span>.</p> 3.2.5.9.. - element . dataset - Returns a DOMStringMapobject for the element’s data-*attributes. Hyphenated names are converted to dromedaryCase (which is the same as CamelCase except the initial letter is not uppercased). For example, data-foo-bar=""becomes element.dataset.fooBar. datasetIDL attribute provides convenient accessors for all the data-*attributes on an element. On getting, the datasetIDL attribute must return a DOMStringMapobject, associated with the following algorithms, which expose these attributes on their element: The algorithm for getting the list of name-value pairs Let list be an empty list of name-value pairs. For each content attribute on the converted to ASCII uppercase. Return list. The algorithm for setting names to certain values Let name be the name passed to the algorithm. Let value be the value passed to the algorithm. If name contains a U+002D HYPHEN-MINUS character (-) followed by a lowercase ASCII letter, throw a " SyntaxError" DOMExceptionand abort these steps. For each uppercase ASCII letter in name, insert a U+002D HYPHEN-MINUS character (-) before the character and replace the character with the same character converted to ASCII lowercase. 
Insert the string data-at the front of name.. For each uppercase ASCII letter in name, insert a U+002D HYPHEN-MINUS character (-) before the character and replace the character with the same character converted to ASCII lowercase. Insert the string data-at the front of name. Remove the attribute with the name name, if such an attribute exists. Do nothing otherwise. This algorithm will only get invoked by the Web IDL specification for names that are given by the earlier algorithm for getting the list of name-value pairs. [WEBIDL] classattribute along with data-*attributes: <div class="spaceship" data- ). 3.2.6. Requirements relating to the bidirectional algorithm 3.2: - If node is a Textnode - Append the text data of node to string. - If node is a brelement - If node is an HTML element that is flow content but that is not also phrasing content - If string is not the empty string, push string onto output, and let string be empty string. - Otherwise - Do nothing.. 3.2 §10 §10 Rendering section that, due to the requirements in this section, are requirements on all user agents (not just those that support the suggested default rendering): 3.2.7. WAI-ARIA and HTML Accessibility API Mappings 3.2.7.1. ARIA Authoring RequirementsAuthors may use the ARIA role] In the majority of cases setting an ARIA role and/or aria-* attribute that matches the default implicit ARIA semantics is unnecessary and not recommended as these properties are already set by the browser. Authors are encouraged to make use of the following documents for guidance on using ARIA in HTML beyond that which is provided in this section: - Notes on Using ARIA in HTML - A practical guide for developers on how to to add accessibility information to HTML elements using the Accessible Rich Internet Applications specification [WAI-ARIA]. - WAI-ARIA 1.1 Authoring Practices - An author’s guide to understanding and implementing Accessible Rich Internet Applications. 3.2.7.2. 
Conformance Checker Implementation Requirements Conformance checkers are required to implement document conformance requirements for use of the ARIA role and aria-* attributes on HTML elements , as defined in ARIA in HTML. [html-aria] 3.2.7.3. User Agent Implementation Requirements User agents are required to implement ARIA semantics on all HTML elements , as defined in the ARIA specifications [WAI-ARIA] and [core-aam-1.1]. User agents are required to.7.3.1. ARIA Role AttributeEvery HTML element may have an ARIA roleattribute specified. This is an ARIA Role attribute as defined by [WAI-ARIA]. The attribute, if specified, must have a value that is a set of space-separated tokens; each token must be a non-abstract role defined in the WAI-ARIA specification [WAI-ARIA]. roleattribute is split on spaces. 3.2.7.3.2. State and Property AttributesEvery HTML element may have ARIA state and property attributes specified. These attributes are defined by [WAI-ARIA]. A subset of the ARIA State and Property attributes are defined as "Global States and Properties" in the [WAI-ARIA] Specification. These attributes, if specified, must have a value that is the ARIA value type in the "Value" field of the definition for the state or property, mapped to the appropriate HTML value type according to [WAI-ARIA].] and , as defined in the ARIA specifications [WAI-ARIA] and [core-aam-1.1]. 3.2.7.4. Allowed ARIA roles, states and properties] definitions.
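The hyphenated-name conversion performed by the dataset accessors (described in §3.2.5.9 above) can be sketched in Java. DatasetNames is a hypothetical standalone illustration of the two algorithms, not DOM code:

```java
// Sketch of the data-* <-> dataset name conversions from HTML §3.2.5.9.
public class DatasetNames {

    // Getter direction: "data-foo-bar" -> "fooBar" (dromedaryCase)
    static String toCamel(String attr) {
        String name = attr.substring("data-".length());
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            // A '-' followed by a lowercase letter is dropped and the letter uppercased
            if (c == '-' && i + 1 < name.length()
                    && name.charAt(i + 1) >= 'a' && name.charAt(i + 1) <= 'z') {
                out.append(Character.toUpperCase(name.charAt(++i)));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    // Setter direction: "fooBar" -> "data-foo-bar"
    static String toAttribute(String name) {
        // Per the spec, '-' followed by a lowercase letter is a SyntaxError
        for (int i = 0; i + 1 < name.length(); i++) {
            if (name.charAt(i) == '-'
                    && name.charAt(i + 1) >= 'a' && name.charAt(i + 1) <= 'z') {
                throw new IllegalArgumentException("SyntaxError: " + name);
            }
        }
        StringBuilder out = new StringBuilder("data-");
        for (char c : name.toCharArray()) {
            // Each uppercase letter becomes '-' plus its lowercase form
            if (c >= 'A' && c <= 'Z') out.append('-').append(Character.toLowerCase(c));
            else out.append(c);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(toCamel("data-foo-bar"));  // fooBar
        System.out.println(toAttribute("fooBar"));    // data-foo-bar
    }
}
```

The two directions are deliberately inverse: round-tripping a valid name through toAttribute and toCamel yields the original name.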
Defining and Using Stored Procedures

This chapter describes how to define and use stored procedures in Caché SQL. It discusses the following:

- An overview of the types of stored procedures
- How to define stored procedures
- How to use stored procedures
- How to list stored procedures and their parameters

Overview

An SQL routine is an executable unit of code that can be invoked by the SQL query processor. There are two types of SQL routines: functions and stored procedures. Functions are invoked from any SQL statement that supports functionname() syntax. Stored procedures can only be invoked by a CALL statement. Functions accept some number of input arguments and return a single result value. Stored procedures accept some number of input, input-output, and output arguments. A stored procedure can be a user-defined function, returning a single value; such a function can also be invoked by a CALL statement. Like most relational database systems, Caché allows you to create SQL stored procedures. A stored procedure (SP) provides a callable routine that is stored in the database and can be invoked within an SQL context (for example, by using the CALL statement or via ODBC or JDBC). Unlike most relational databases, Caché lets you define stored procedures as methods of classes. In fact, a stored procedure is nothing more than a class method that is made available to SQL. Within a stored procedure, you can use the full range of Caché object-based features. You can define a stored procedure as a query that returns a single result set of data by querying the database. You can define a stored procedure as a function procedure that can serve as a user-defined function, returning a single value. You can define a stored procedure as a method that can modify the database data and return either a single value or one or more result sets. You can determine if a procedure already exists using the $SYSTEM.SQL.ProcedureExists() method.
This method also returns the procedure type: “function” or “query”. Defining Stored Procedures As with most aspects of Caché SQL, there are two ways of defining stored procedures: using DDL and using classes. These are described in the following sections. Defining a Stored Procedure Using DDL Caché SQL supports the following commands to create a query: CREATE PROCEDURE can create a query that is always projected as a stored procedure. A query can return a single result set. CREATE QUERY creates a query that can optionally be projected as a stored procedure. A query can return a single result set. Caché SQL supports the following commands to create a method or function: CREATE PROCEDURE can create a method that is always projected as a stored procedure. A method can return a single value, or one or more result sets. CREATE METHOD can create a method that can optionally be projected as a stored procedure. A method can return a single value, or one or more result sets. CREATE FUNCTION can create a function procedure that can optionally be projected as a stored procedure. A function can return a single value. The block of executable code specified within these commands can be written either in Caché SQL or ObjectScript. You can include Embedded SQL within an ObjectScript code block. SQL to Class Name Transformations When you use DDL to create a stored procedure, the name you specify is transformed into a class name. If the class does not exist, the system creates it. If the name is unqualified and no FOR clause is provided: the system-wide default schema name is used as the package name, followed by a dot, followed by a generated class name consisting of the string ‘func’, ‘meth’, ‘proc’, or ‘query’, followed by the SQL name stripped of punctuation characters. For example, the unqualified procedure name Store_Name results in a class name such as the following: User.procStoreName. This procedure class contains the method StoreName(). 
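As a rough illustration, the unqualified-name transformation just described can be sketched as a string mapping. ProcNames is a hypothetical helper written for this illustration only (the real name generation performed by Caché handles more cases than shown here):

```java
// Illustrative sketch of the SQL-name-to-class-name mapping described above.
public class ProcNames {

    // A schema name maps to a package name; '_' denotes a subpackage dot.
    static String schemaToPackage(String schema) {
        return schema.replace('_', '.');
    }

    // Punctuation is stripped from the SQL name to form the class-name part.
    static String stripPunctuation(String sqlName) {
        return sqlName.replaceAll("[^A-Za-z0-9]", "");
    }

    // Unqualified procedure name -> generated class name, e.g.
    // schema "User", prefix "proc", name "Store_Name" -> "User.procStoreName"
    static String classFor(String schema, String prefix, String sqlName) {
        return schemaToPackage(schema) + "." + prefix + stripPunctuation(sqlName);
    }

    public static void main(String[] args) {
        System.out.println(classFor("User", "proc", "Store_Name")); // User.procStoreName
        System.out.println(schemaToPackage("my_procs"));            // my.procs
    }
}
```

This reproduces the Store_Name example above and the my_procs subpackage rule described below; the prefix would be 'func', 'meth', 'proc', or 'query' depending on how the routine was created.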
If the name is qualified and no FOR clause is provided: the name of the schema is converted to a package name, followed by a dot, followed by the string 'func', 'meth', 'proc', or 'query', followed by the SQL name stripped of punctuation characters. If necessary, the specified package name is converted to a valid package name. If the name is qualified and a FOR clause is provided: the qualified class name specified in the FOR clause overrides the schema name specified in the function, method, procedure, or query name. SQL stored procedure names follow identifier naming conventions. Caché strips punctuation characters from the SQL name to generate unique class entity names for the procedure class and its class methods. The following rule governs the transformation of a schema name to a valid package name: if the schema name contains an underscore, this character is converted to a dot, denoting a subpackage. For example, the qualified name myprocs.myname creates the package myprocs. The qualified name my_procs.myname creates the package my containing the subpackage procs.

The following example shows how the punctuation differs in a class name and its SQL invocation. It defines a method with a class name containing two dots. When invoked from SQL, the example replaces the first dot with an underscore character:

Class tmp.test.sql Extends %RegisteredObject
{
ClassMethod myfunc(dummy As %String) As %String [ SqlProc ]
{
    /* method code */
    Quit "abc"
}
}

SELECT tmp_test.sql_myfunc(Name) FROM Sample.Person

Defining a Method Stored Procedure using Classes

Class methods can be exposed as stored procedures. These are ideal for actions that do not return data, such as a stored procedure that calculates a value and stores it in the database. Almost all classes can expose methods as stored procedures; the exception is generator classes, such as a data type class ([ClassType = datatype]). Generator classes do not have a runtime context.
It is only valid to use a datatype context within the runtime of some other entity, such as a property. To define a method stored procedure, simply define a class method and set its SqlProc keyword:

Class MyApp.Person Extends %Persistent [ DdlAllowed ]
{
/// This procedure finds total sales for a territory
ClassMethod FindTotal(territory As %String) As %Integer [ SqlProc ]
{
    // use embedded sql to find total sales
    &sql(SELECT SUM(SalesAmount) INTO :total
         FROM Sales
         WHERE Territory = :territory)
    Quit total
}
}

After this class is compiled, the FindTotal() method will be projected to SQL as the stored procedure MyApp.Person_FindTotal(). You can change the name that SQL uses for the procedure using the SqlName keyword of the method. The method uses a procedure context handler to pass the procedure context back and forth between the procedure and its caller (for example, the ODBC server). This procedure context handler is automatically generated by Caché (as %qHandle:%SQLProcContext) using the %sqlcontext object. %sqlcontext consists of properties for the SQLCODE error status, the SQL row count, an error message, and so forth, which are set using the corresponding SQL variables, as follows:

SET %sqlcontext.%SQLCode=SQLCODE
SET %sqlcontext.%ROWCOUNT=%ROWCOUNT
SET %sqlcontext.%Message=%msg

There is no need to do anything with these values, but their values will be interpreted by the client. The %sqlcontext object is reset before each execution. The method should return no value. The maximum number of user-defined methods for a class is 2000.
For instance, suppose there is a CalcAvgScore() method:

ClassMethod CalcAvgScore(firstname As %String, lastname As %String) [ SqlProc ]
{
    New SQLCODE,%ROWID
    &sql(UPDATE students
         SET avgscore =
             (SELECT AVG(sc.score)
              FROM scores sc, students st
              WHERE sc.student_id=st.student_id
                AND st.lastname=:lastname AND st.firstname=:firstname)
         WHERE students.lastname=:lastname AND students.firstname=:firstname)
    IF ($GET(%sqlcontext)'="") {
        SET %sqlcontext.%SQLCODE = SQLCODE
        SET %sqlcontext.%ROWCOUNT = %ROWCOUNT
    }
    QUIT
}

Defining a Query Stored Procedure using Classes

Many stored procedures that return data from the database can be implemented through the standard query interface. This approach works well as long as the procedure can be written in embedded SQL. Note the use of the Embedded SQL host variable to supply a value to the WHERE clause in the following example:

Class MyApp.Person Extends %Persistent [ DdlAllowed ]
{
/// This procedure result set is the persons in a specified Home_State, ordered by Name
Query ListPersons(state As %String = "") As %SQLQuery [ SqlProc ]
{
    SELECT ID,Name,Home_State
    FROM Sample.Person
    WHERE Home_State = :state
    ORDER BY Name
}
}

To expose a query as a stored procedure, either change the value of the SQLProc field to True in the Studio Inspector's entry for the query or add the "[ SqlProc ]" string to the query definition:

Query QueryName() As %SQLQuery( ... query definition ... ) [ SqlProc ]

After this class is compiled, the ListPersons query will be projected to SQL as the stored procedure MyApp.Person_ListPersons. You can change the name that SQL uses for the procedure using the SqlName keyword of the query. When MyApp.Person_ListPersons is called from SQL, it will automatically return the result set defined by the query's SQL statement.
The following example is a stored procedure using a result set:

Class apc.OpiLLS.SpCollectResults1 [ Abstract ]
{
/// This SP returns a number of rows (pNumRecs) from WebService.LLSResults, and updates a property for each record
Query MyQuery(pNumRecs As %Integer) As %Query(ROWSPEC = "Name:%String,DOB:%Date") [ SqlProc ]
{
}

/// You put initial code here in the Execute method
ClassMethod MyQueryExecute(ByRef qHandle As %Binary, pNumRecs As %Integer) As %Status
{
    SET mysql="SELECT TOP ? Name,DOB FROM Sample.Person"
    SET rset=##class(%SQL.Statement).%ExecDirect(,mysql,pNumRecs)
    IF rset.%SQLCODE'=0 {QUIT rset.%SQLCODE}
    SET qHandle=rset
    QUIT $$$OK
}

/// This code is called by the SQL framework for each row, until no more rows are returned
ClassMethod MyQueryFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = MyQueryExecute ]
{
    SET rset=qHandle
    SET tSC=$$$OK
    FOR {
        /// Get next row, quit if end of result set
        IF 'rset.%Next() {
            SET Row = "", AtEnd = 1
            SET tSC=$$$OK
            QUIT
        }
        SET name=rset.Name
        SET dob=rset.DOB
        SET Row = $LISTBUILD(name,dob)
        QUIT
    }
    QUIT tSC
}

ClassMethod MyQueryClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = MyQueryExecute ]
{
    KILL qHandle  // probably not necessary, as it is killed by the SQL CALL framework
    QUIT $$$OK
}
}

If it is possible to write the query as a simple SQL statement and create it through the Query Wizard, it is not necessary to know anything about the underlying methods that implement the query. Behind the scenes, for each query the class compiler generates methods based on the name of the stored procedure, including:

stored-procedure-nameExecute()
stored-procedure-nameFetch()
stored-procedure-nameFetchRows()
stored-procedure-nameGetInfo()
stored-procedure-nameClose()

If the query is of type %SQLQuery, the class compiler automatically inserts some embedded SQL into the generated methods. Execute() declares and opens a stored cursor for the SQL.
Fetch() is called repeatedly until it returns an empty row (SET Row=""). You can, optionally, also have Fetch() return an AtEnd=1 boolean flag to indicate that the current Fetch constitutes the last row and the next Fetch is expected to return an empty row. However, an empty row (Row="") should always be used as the test to determine when the result set has ended; Row="" should always be set when setting AtEnd=1. FetchRows() is logically equivalent to repeated calls to Fetch(). GetInfo() is called to return details of the signature for the stored procedure. Close() closes the cursor. All these methods are called automatically when a stored procedure is invoked from a client, but could in theory be called directly from ObjectScript running on the server. To pass an object from Execute() to a Fetch(), or from a Fetch() to the next invocation of Fetch(), you can set the query handler to the object reference (oref) of the object you wish to pass. To pass multiple objects, you can set qHandle as an array:

SET qHandle(1)=oref1,qHandle(2)=oref2

It is possible to create a result set stored procedure that is based on custom-written code (not an SQL statement). The maximum number of user-defined queries for a class is 200.

Customized Class Queries

For complex queries, or for stored procedures that do not fit the query model, it is often necessary to customize the query by replacing some or all of its methods. You can use %Library.Query, as described in this section. It is often easier to implement the query if you choose type %Query (%Library.Query) instead of %SQLQuery (%Library.SQLQuery). This generates the same five methods, but now FetchRows() is simply a repeated invocation of Fetch() (%SQLQuery has some optimization that causes other behavior). GetInfo() simply gets information from the signature, so it is very unlikely that the code will need to be changed. This reduces the problem to creating class methods for each of the other three.
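The Execute/Fetch/Close contract just described can be illustrated outside ObjectScript with a small Java sketch. FetchSketch is a hypothetical stand-in written for this illustration only: an Iterator plays the role of qHandle, and an empty string plays the role of the empty Row that terminates the result set:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Language-neutral sketch of the Execute/Fetch/Close contract.
public class FetchSketch {

    // "Execute": build the query state; a canned list stands in for a cursor.
    // The returned iterator plays the role of qHandle.
    static Iterator<String> execute(int numRecs) {
        List<String> rows = new ArrayList<>();
        for (int i = 1; i <= numRecs; i++) rows.add("row" + i);
        return rows.iterator();
    }

    // "Fetch": return the next row, or "" when the result set is exhausted.
    static String fetch(Iterator<String> qHandle) {
        return qHandle.hasNext() ? qHandle.next() : "";
    }

    public static void main(String[] args) {
        Iterator<String> q = execute(3);
        List<String> got = new ArrayList<>();
        // The caller loops until Fetch returns an empty row
        for (String row = fetch(q); !row.isEmpty(); row = fetch(q)) got.add(row);
        // "Close" would release the cursor here; nothing to free in this sketch
        System.out.println(got); // [row1, row2, row3]
    }
}
```

The key point the sketch captures is that state lives in the handle passed between calls, and the empty row, not an out-of-band flag, is the authoritative end-of-set signal.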
Note that when the class is compiled, the compiler detects the presence of these methods, and does not overwrite them. The methods need specific signatures. They all take a qHandle (query handler) of type %Binary. This is a pointer to a structure holding the nature and state of the query. It is passed by reference to Execute() and Fetch() and by value to Close():

ClassMethod SP1Close(qHandle As %Binary) As %Status
{
    // ...
}

ClassMethod SP1Execute(ByRef qHandle As %Binary, p1 As %String) As %Status
{
    // ...
}

ClassMethod SP1Fetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status
{
    // ...
}

Query SP1(p1 As %String) As %Query(CONTAINID=0, ROWSPEC="lastname:%String") [ SqlProc ]
{
}

The code usually includes declaration and use of an SQL cursor. Cursors generated from queries of type %SQLQuery automatically have names such as Q14. You must ensure that your queries are given distinct names. The class compiler must find a cursor declaration before making any attempt to use the cursor. Therefore the DECLARE statement (usually in Execute) must be in the same MAC routine as the Close and Fetch methods and must come before either of them. If you edit the source directly, use the method keyword PLACEAFTER in both the Close and the Fetch definitions to make sure this happens. Error messages refer to the internal cursor name, which typically has an extra digit; an error message for cursor Q140 therefore probably refers to Q14.

Using Stored Procedures

You can use stored procedures in two distinct ways: you can invoke a stored procedure using the SQL CALL statement (see the CALL statement in the Caché SQL Reference for more details), or you can use a stored function (that is, a method-based stored procedure that returns a single value) as if it were a built-in function within an SQL query.
When executing a stored procedure that takes an SQL function as an argument, invoke the stored procedure using CALL, as in the following example:

CALL sp.MyProc(CURRENT_DATE)

A SELECT query does not support executing a stored procedure with an SQL function argument. SELECT does support executing a stored function with an SQL function argument. xDBC does not support executing a stored procedure with an SQL function argument using either SELECT or CALL.

Stored Functions

A stored function is a method-based stored procedure that returns a single value. For example, the following class defines a stored function, Square, that returns the square of a given value:

Class MyApp.Utils Extends %Persistent [ DdlAllowed ]
{
ClassMethod Square(val As %Integer) As %Integer [ SqlProc ]
{
    Quit val * val
}
}

A stored function is simply a class method with the SqlProc keyword specified. For a stored function, the ReturnResultsets keyword must either be not specified (the default) or prefaced by the keyword Not. You can use a stored function within an SQL query as if it were a built-in SQL function. The name of the function is the SQL name of the stored function (in this case "Square") qualified by the schema (package) name in which it was defined (in this case "MyApp"). The following query uses the Square function:

SELECT Cost, MyApp.Utils_Square(Cost) As SquareCost FROM Products

If you define multiple stored functions within the same package (schema), you must make sure that they have unique SQL names.
The following example defines a table named Sample.Wages that has two defined data fields (properties) and two defined stored functions, TimePlus and DTime:

Class Sample.Wages Extends %Persistent [ DdlAllowed ]
{
Property Name As %String(MAXLEN = 50) [ Required ];
Property Salary As %Integer;

ClassMethod TimePlus(val As %Integer) As %Integer [ SqlProc ]
{
    QUIT val * 1.5
}

ClassMethod DTime(val As %Integer) As %Integer [ SqlProc ]
{
    QUIT val * 2
}
}

The following query uses these stored procedures to return the regular salary, time-and-a-half, and double time salary rates for each employee in the same table, Sample.Wages:

SELECT Name, Salary,
       Sample.Wages_TimePlus(Salary) AS Overtime,
       Sample.Wages_DTime(Salary) AS DoubleTime
FROM Sample.Wages

The following query uses these stored procedures to return the regular salary, time-and-a-half, and double time salary rates for each employee in a different table, Sample.Employee:

SELECT Name, Salary,
       Sample.Wages_TimePlus(Salary) AS Overtime,
       Sample.Wages_DTime(Salary) AS DoubleTime
FROM Sample.Employee

Privileges

To execute a procedure, a user must have EXECUTE privilege for that procedure. Use the GRANT command or the %SYSTEM.SQL GrantObjPriv() method to assign EXECUTE privilege for a specified procedure to a specified user. You can determine if a specified user has EXECUTE privilege for a specified procedure by invoking the $SYSTEM.SQL.CheckPriv() method. To list all the procedures for which a user has EXECUTE privilege, go to the Management Portal. From System Administration select Security, then select either Users (System, Security Management, Users) or Roles (System, Security Management, Roles). Select Edit for the desired user or role, then select the SQL Procedures tab. Select the desired Namespace from the drop-down list.

Listing Procedures

The INFORMATION.SCHEMA.ROUTINES persistent class displays information about all routines and procedures in the current namespace.
When specified in Embedded SQL, INFORMATION.SCHEMA.ROUTINES requires the #include %occInclude macro preprocessor directive. This directive is not required for Dynamic SQL. The following example returns the routine name, method or query name, routine type (PROCEDURE or FUNCTION), routine body (SQL=class query with SQL, EXTERNAL=not a class query with SQL), the return data type, and the routine definition for all routines in the schema "Sample" in the current namespace:

SELECT ROUTINE_NAME, METHOD_OR_QUERY_NAME, ROUTINE_TYPE, ROUTINE_BODY,
       SQL_DATA_ACCESS, IS_USER_DEFINED_CAST,
       DATA_TYPE||' '||CHARACTER_MAXIMUM_LENGTH AS Returns,
       NUMERIC_PRECISION||':'||NUMERIC_SCALE AS PrecisionScale,
       ROUTINE_DEFINITION
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_SCHEMA='Sample'

The INFORMATION.SCHEMA.PARAMETERS persistent class displays information about input and output parameters for all routines and procedures in the current namespace. The following example returns the routine name, parameter name, whether it is an input or output parameter, and the parameter data type information for all routines in the schema "Sample" in the current namespace:

SELECT SPECIFIC_NAME, PARAMETER_NAME, PARAMETER_MODE, ORDINAL_POSITION,
       DATA_TYPE, CHARACTER_MAXIMUM_LENGTH AS MaxLen,
       NUMERIC_PRECISION||':'||NUMERIC_SCALE AS PrecisionScale
FROM INFORMATION_SCHEMA.PARAMETERS
WHERE SPECIFIC_SCHEMA='Sample'

You can display much of the same information for a single procedure using the Catalog Details tab in the Management Portal SQL interface. The Catalog Details for a procedure include the procedure type (query or function), class name, method or query name, the description, and the number of input and output parameters. The Catalog Details Stored Procedure Info display also provides an option to run the stored procedure.
The abstract URLStreamHandler class is a superclass for classes that handle specific protocols, such as HTTP. You rarely call the methods of the URLStreamHandler class; they are called by other methods in the URL and URLConnection classes. By overriding the URLStreamHandler methods in your own subclass, you teach the URL class how to handle new protocols. Therefore, I'll focus on overriding the methods of URLStreamHandler rather than calling them. You do not create URLStreamHandler objects directly. Instead, when a URL is constructed with a protocol that hasn't been seen before, Java asks the application's URLStreamHandlerFactory to create the appropriate URLStreamHandler subclass for the protocol. If that fails, Java guesses at the fully package-qualified name of the URLStreamHandler class and uses Class.forName() to attempt to construct such an object. This means each concrete subclass should have a no-args constructor. The single constructor for URLStreamHandler doesn't take any arguments:

public URLStreamHandler()

Because URLStreamHandler is an abstract class, this constructor is never called directly; it is only called from the constructors of subclasses. The first responsibility of a URLStreamHandler is to split a string representation of a URL into its component parts and use those parts to set the various fields of the URL object. The parseURL() method splits the URL into parts, possibly using setURL() to assign values to the URL's fields. It is very difficult to imagine a situation in which you would call parseURL() directly; instead, you override it to change the behavior of the URL class. This method parses the String spec into a URL u. All characters in the spec string before start should already have been parsed into the URL u. Characters after limit are ignored.
Generally, the protocol will have already been parsed and stored in u before this method is invoked, and start will be adjusted so that it starts with the character after the colon that delimits the protocol. The task of parseURL() is to set u's protocol, host, port, file, and ref fields. It can assume that any parts of the String that are before start and after limit have already been parsed or can be ignored. The parseURL() method that Java supplies assumes that the URL looks more or less like an http or other hierarchical URL:

protocol://host:port/file#ref

This works for ftp and gopher URLs. It does not work for mailto or news URLs and may not be appropriate for any new URL schemes you define. If the protocol handler uses URLs that fit this hierarchical form, you don't have to override parseURL() at all; the method inherited from URLStreamHandler works just fine. If the URLs are completely different, you must supply a parseURL() method that parses the URL completely. However, there's often a middle ground that can make your task easier. If your URL looks somewhat like a standard URL, you can implement a parseURL() method that handles the nonstandard portion of the URL and then calls super.parseURL() to do the rest of the work, setting the offset and limit arguments to indicate the portion of the URL that you didn't parse. For example, a mailto URL looks like mailto:elharo@metalab.unc.edu. First, you need to figure out how to map this into the URL class's protocol, host, port, file, and ref fields. The protocol is clearly mailto. Everything after the @ can be the host. The hard question is what to do with the username. Since a mailto URL really doesn't have a file portion, we will use the URL class's file field to hold the username. The ref can be set to the empty string or null.
The parseURL() method that follows implements this scheme:

public void parseURL(URL u, String spec, int start, int limit) {
  String protocol = u.getProtocol();
  String host = "";
  int port = u.getPort();
  String file = "";  // really username
  String fragmentID = null;
  if (start < limit) {
    String address = spec.substring(start, limit);
    int atSign = address.indexOf('@');
    if (atSign >= 0) {
      host = address.substring(atSign + 1);
      file = address.substring(0, atSign);
    }
  }
  this.setURL(u, protocol, host, port, file, fragmentID);
}

Rather than borrowing an unused field from the URL object, it's possibly a better idea to store protocol-specific parts of the URL, such as the username, in fields of the URLStreamHandler subclass. The disadvantage of this approach is that such fields can be seen only by your own code; in this example, you couldn't use the getFile() method in the URL class to retrieve the username. Here's a version of parseURL() that stores the username in a field of the Handler subclass. When the connection is opened, the username can be copied into the MailtoURLConnection object that results. That class would provide some sort of getUserName() method:

String username = "";

public void parseURL(URL u, String spec, int start, int limit) {
  String protocol = u.getProtocol();
  String host = "";
  int port = u.getPort();
  String file = "";
  String fragmentID = null;
  if (start < limit) {
    String address = spec.substring(start, limit);
    int atSign = address.indexOf('@');
    if (atSign >= 0) {
      host = address.substring(atSign + 1);
      this.username = address.substring(0, atSign);
    }
  }
  this.setURL(u, protocol, host, port, file, fragmentID);
}

protected String toExternalForm(URL u)

This method puts the pieces of the URL u (that is, its protocol, host, port, file, and ref fields) back together in a String. A class that overrides parseURL() should also override toExternalForm().
Here's a toExternalForm() method for a mailto URL; it assumes that the username has been stored in the URL's file field:

    protected String toExternalForm(URL u) {
      return "mailto:" + u.getFile() + "@" + u.getHost();
    }

Since toExternalForm() is protected, you probably won't call this method directly. However, it is called by the public toExternalForm() and toString() methods of the URL class, so any change you make here is reflected when you convert URL objects to strings.

The setURL() method sets the protocol, host, port, authority, userInfo, path, query, and ref fields of the URL u to the given values. parseURL() uses this method to set these fields to the values it has found by parsing the URL. You need to call this method at the end of the parseURL() method when you subclass URLStreamHandler. This method is a little flaky, since the host, port, and user info together make up the authority. In the event of a conflict between them, they're all stored separately, but the host, port, and user info are used in preference to the authority when deciding which site to connect to. This is actually quite relevant to the mailto example, since mailto URLs often have query strings that indicate the subject or other headers; for example, mailto:elharo@metalab.unc.edu?subject=JavaReading. Here the query string is subject=JavaReading. Rewriting the parseURL() method to support mailto URLs in this format, the result looks like this:

    public void parseURL(URL u, String spec, int start, int limit) {

      String protocol = u.getProtocol();
      String host = "";
      int port = u.getPort();
      String file = "";
      String userInfo = null;
      String query = null;
      String fragmentID = null;

      if (start < limit) {
        String address = spec.substring(start, limit);
        int atSign = address.indexOf('@');
        int questionMark = address.indexOf('?');
        int hostEnd = questionMark >= 0 ? questionMark : address.length();
        if (atSign >= 0) {
          host = address.substring(atSign+1, hostEnd);
          userInfo = address.substring(0, atSign);
        }
        if (questionMark >= 0 && questionMark > atSign) {
          query = address.substring(questionMark + 1);
        }
      }

      String authority = "";
      if (userInfo != null) authority += userInfo + '@';
      authority += host;
      if (port >= 0) authority += ":" + port;

      this.setURL(u, protocol, host, port, authority, userInfo, file, query, fragmentID);
    }

The getDefaultPort() method returns the default port for the protocol, e.g., 80 for HTTP. The default implementation of this method simply returns -1, but each subclass should override that with the appropriate default port for the protocol it handles. For example, here's a getDefaultPort() method for the finger protocol, which normally operates on port 79:

    public int getDefaultPort() {
      return 79;
    }

As well as providing the right port for finger, overriding this method also makes getDefaultPort() public. Although there's only a default implementation of this method in Java 1.3, there's no reason you can't provide it in your own subclasses in any version of Java. You simply won't be able to invoke it polymorphically from a reference typed as the superclass.

The getHostAddress() method returns an InetAddress object pointing to the server in the URL. This requires a DNS lookup, and the method does block while the lookup is made. However, it does not throw any exceptions. If the host can't be located, whether because the URL does not contain host information, because of a DNS failure, or because of a SecurityException, it simply returns null. The default implementation of this method is sufficient for any reasonable case. It shouldn't be necessary to override it.

The hostsEqual() method determines whether two URLs refer to the same server. This method does use DNS to look up the hosts. If the DNS lookups succeed, it can tell that, for example, www.ibiblio.org/Dave/this-week.html and ibiblio.org/Dave/this-week.html point to the same host.
However, if the DNS lookup fails for any reason, then hostsEqual() falls back to a simple case-insensitive string comparison, in which case it would think these were two different hosts. The default implementation of this method is sufficient for most cases. You probably won't need to override it. The only case I can imagine where you might want to is if you were trying to make mirror sites on different servers appear equal.

The sameFile() method determines whether two URLs point to the same file. It does this by comparing the protocol, host, port, and path. The files are considered to be the same only if each of those four pieces is the same. However, it does not consider the query string or the fragment identifier. Furthermore, the hosts are compared by the hostsEqual() method, so that www.metalab.unc.edu and metalab.unc.edu can be recognized as the same if DNS can resolve them. This is similar to the sameFile() method of the URL class. Indeed, that sameFile() method just calls this sameFile() method. The default implementation of this method is sufficient for most cases. You probably won't need to override it. You might perhaps want to do so if you need a more sophisticated test that converts paths to canonical paths or follows redirects before determining whether two URLs have the same file part.

The equals() method tests almost the entire URL, including protocol, host, file, path, and fragment identifier. Only the query string is ignored. All five of these must be equal for the two URLs to be considered equal. Everything except the fragment identifier is compared by the sameFile() method, so overriding that method changes the behavior of this one. The fragment identifiers are compared by simple string equality. Since the sameFile() method uses hostsEqual() to compare hosts, this method does too. Thus, it performs a DNS lookup if possible and may block. The equals() method of the URL class calls this method to compare two URL objects for equality.
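The comparison rules just described can be restated as plain Java over already-extracted URL parts, which keeps DNS out of the picture entirely. The Parts class and method names below are illustrative, not part of java.net; a case-insensitive host comparison stands in for hostsEqual()'s string fallback.

```java
import java.util.Objects;

public class UrlEquality {

    // Illustrative holder for the pieces of a parsed URL.
    static final class Parts {
        final String protocol, host, path, query, ref;
        final int port;
        Parts(String protocol, String host, int port,
              String path, String query, String ref) {
            this.protocol = protocol; this.host = host; this.port = port;
            this.path = path; this.query = query; this.ref = ref;
        }
    }

    // sameFile: protocol, host, port, and path must all match;
    // the query string and fragment identifier are ignored.
    static boolean sameFile(Parts a, Parts b) {
        return a.protocol.equalsIgnoreCase(b.protocol)
            && a.host.equalsIgnoreCase(b.host) // stand-in for hostsEqual()
            && a.port == b.port
            && a.path.equals(b.path);
    }

    // equals: everything sameFile checks, plus the fragment identifier;
    // only the query string is still ignored.
    static boolean urlsEqual(Parts a, Parts b) {
        return sameFile(a, b) && Objects.equals(a.ref, b.ref);
    }

    public static void main(String[] args) {
        Parts p1 = new Parts("http", "metalab.unc.edu", 80, "/index.html", "q=1", null);
        Parts p2 = new Parts("http", "METALAB.unc.edu", 80, "/index.html", "q=2", null);
        Parts p3 = new Parts("http", "metalab.unc.edu", 80, "/index.html", null, "top");
        System.out.println(sameFile(p1, p2));  // true: query differs but is ignored
        System.out.println(urlsEqual(p1, p3)); // false: fragment identifiers differ
    }
}
```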
Again, you probably won't need to override this method. The default implementation should suffice for most purposes.

URLStreamHandler subclasses can change the default hash code calculation by overriding the hashCode() method. You should do this if you override equals(), sameFile(), or hostsEqual(), to make sure that two equal URL objects will have the same hash code, and two unequal URL objects will not have the same hash code, at least to a very high degree of probability.

The second responsibility of a URLStreamHandler is to create a URLConnection object appropriate to the URL. This is done with the abstract openConnection() method. This method must be overridden in each subclass of URLStreamHandler. It takes a single argument, u, which is the URL to connect to. It returns an unopened URLConnection, directed at the resource u points to. Each subclass of URLStreamHandler should know how to find the right subclass of URLConnection for the protocol it handles.

The openConnection() method is protected, so you usually do not call it directly; it is called by the openConnection() method of the URL class. The URL u that is passed as an argument is the URL that needs a connection. Subclasses override this method to handle a specific protocol. The subclass's openConnection() method is usually extremely simple; in most cases, it just calls the constructor for the appropriate subclass of URLConnection. For example, a URLStreamHandler for the mailto protocol might have an openConnection() method that looks like this:

    protected URLConnection openConnection(URL u) throws IOException {
      return new com.macfaq.net.www.protocol.mailto.MailtoURLConnection(u);
    }

Example 16-1 demonstrates a complete URLStreamHandler for mailto URLs. The name of the class is Handler, following Sun's naming conventions. It assumes the existence of a MailtoURLConnection class.
    package com.macfaq.net.www.protocol.mailto;

    import java.net.*;
    import java.io.*;
    import java.util.*;

    public class Handler extends URLStreamHandler {

      protected URLConnection openConnection(URL u) throws IOException {
        return new MailtoURLConnection(u);
      }

      public void parseURL(URL u, String spec, int start, int limit) {

        String protocol = u.getProtocol();
        String host = "";
        int port = u.getPort();
        String file = ""; // really username
        String userInfo = null;
        String authority = null;
        String query = null;
        String fragmentID = null;

        if (start < limit) {
          String address = spec.substring(start, limit);
          int atSign = address.indexOf('@');
          if (atSign >= 0) {
            host = address.substring(atSign+1);
            file = address.substring(0, atSign);
          }
        }

        // For Java 1.3 and later:
        this.setURL(u, protocol, host, port, authority, userInfo, file, query, fragmentID);
        // In Java 1.2 and earlier, use this line instead:
        // this.setURL(u, protocol, host, port, file, fragmentID);
      }

      protected String toExternalForm(URL u) {
        return "mailto:" + u.getFile() + "@" + u.getHost();
      }
    }

Java 1.5 overloads the openConnection() method to allow you to specify a proxy server for the connection. The java.net.Proxy class (also new in Java 1.5) encapsulates the address of a proxy server. Rather than connecting to the host directly, this URLConnection connects to the specified proxy server, which relays data back and forth between the client and the server. Protocols that do not support proxies can simply ignore the second argument. Normally, connections are opened with the usual proxy server settings within that VM. Calling this method is only necessary if you want to use a different proxy server. If you want to bypass the usual proxy server and connect directly instead, pass the constant Proxy.NO_PROXY as the second argument.
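A hedged sketch of the proxy-aware overload in practice: the stub connection and the helper below are invented for illustration (a real MailtoURLConnection would open an SMTP connection instead), but they show a protocol that has no use for a proxy simply ignoring the Proxy argument, as the text suggests.

```java
import java.io.*;
import java.net.*;

public class ProxyHandlerDemo {

    // Stub stand-in for a real MailtoURLConnection; serves a canned body.
    static class StubMailtoConnection extends URLConnection {
        StubMailtoConnection(URL u) { super(u); }
        @Override public void connect() { connected = true; }
        @Override public InputStream getInputStream() {
            return new ByteArrayInputStream("ok".getBytes());
        }
    }

    static class Handler extends URLStreamHandler {
        @Override protected URLConnection openConnection(URL u) {
            return new StubMailtoConnection(u);
        }
        // Java 1.5+ overload: mailto cannot make use of a proxy server,
        // so the second argument is simply ignored.
        @Override protected URLConnection openConnection(URL u, Proxy p) {
            return openConnection(u);
        }
    }

    public static String open(String spec) {
        try {
            // Handler passed directly, so no system-wide registration needed.
            URL u = new URL(null, spec, new Handler());
            BufferedReader in = new BufferedReader(
                new InputStreamReader(u.openConnection().getInputStream()));
            return in.readLine();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(open("mailto:elharo@metalab.unc.edu")); // ok
    }
}
```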
https://flylib.com/books/en/1.135.1.109/1/
Facebook game 8 Ball Pool smeared v>=60 for GMA 4500
Categories (Core :: Canvas: WebGL, defect, P1)
People (Reporter: Tonnes, Assigned: jgilbert)
Details (Keywords: nightly-community, regression, Whiteboard: [gfx-noted] [sci-exclude])
Attachments (3 files)

Facebook 8-Ball Pool started to show smeared parts on Win 7 in nightly builds as of March 3. Perhaps related to bug 1442608 or 1421818?
Last good build: BuildID=20180302220122
First bad build: BuildID=20180303100406

Can you still reproduce the issue?

Yes, even in today's build. Do you need any console or other info?

I can't reproduce on Mac or Windows. Since you have a regression range, can you narrow it down using mozregression?

Last good build: 9f87ddff4b02b89cc5530b4cb8ec767b14f4c687
First bad build: 2c120740c252c63b781cb99ebf02cfa46c23ff32
Pushlog:

So the ANGLE update in bug 1440849 seems the most suspicious. Can you reproduce this without webrender on?

How? Fwiw, all prefs containing webrender are set as default, hence gfx.webrender.enabled is false, gfx.webrender.force-angle is true. Same in any new profile; flipping either of them makes no difference. Hw acceleration is partly disabled though, i.e. the onboard chipset is blocked for D2D; its pref is always enabled by default in Options nevertheless, and I need to disable it manually for any profile to prevent certain issues. Doing so here makes no difference for this issue (it just turned up with hw accel already disabled).

I'm not sure how widespread this problem is, but we should try to find out.

So is this WebGL or not? If WebGL, I'll help hunt this down, but otherwise this should be investigated from the WebRender side.

This is WebGL. It reproduces with WebRender off; see:

(In reply to Jeff Gilbert [:jgilbert] from comment #9)
> So is this WebGL or not?
I'm not sure, you probably know better. Is there anything I could test or do more to help out? Copy of today's nightly en-US Graphics section in TS info, as requested by philipp.
This doesn't feel like it would be dot-release material, so setting status to wontfix for Firefox 60.

Seems like this is Jeff Gilbert's area. Jeff, please reassign if I'm wrong. Do you think this was the Angle update?

Used mozregression once again and I may have actually submitted the wrong pushlog_url (sorry). Bisecting pointed to this one:
Last good build: d9f8ec82354d03faa5e0475004ffb8cc31bad8d6
First bad build: 58b3bc37c7f8a18abb2cf2c669fe5547b87ed0a6
Pushlog:
End of log view:
2018-05-18T20:02:26: INFO : Narrowed inbound regression window from [d9f8ec82, 58b3bc37] (3 builds) to [d9f8ec82, 8ee92682] (2 builds) (~1 steps left)
2018-05-18T20:02:26: DEBUG : Starting merge handling...
2018-05-18T20:02:26: DEBUG : Using url:
2018-05-18T20:02:27: DEBUG : Found commit message: Bug 1440849 - Update ANGLE to firefox-60 branch.

I cannot reproduce this issue. Are there any more details? The initial break scene rendered fine for me. I see the driver date for you is from 2012. Can you update your graphics driver? We may need to push you off the GPU onto WARP to ensure things render properly.

First, I cannot reproduce the issue myself at this point in (latest) 60 and 61 nightlies, nor current Nightly 62. FB recently upgraded the 8 Ball Pool webapp from v 1.x.x to v 2.0, which is probably the reason. For reproducing in general, I would advise to try on a machine with the same Intel chipset (G41/G43/G45). Basically there is no newer graphics driver available than the one in use/reported, although a newer one with the following changes shows up at Intel when attempting to download v 8.15.10.2869 (=8.15.10.2993):
- Fixed artifacts seen when rendering triangle strips in an OpenGL* MATLAB* sample.
- Fixed issue where an OpenGL sample application extends off-screen when the application is changed to full-screen.
- Fixed issue where some laptop screens may be blank after updating the graphics driver and rebooting.
I'm not sure whether or not the above would fix issues resulting from any changes affecting Firefox around the given regression range, either related to Angle or not. Updating the driver however requires some effort (the system is permanently in use) and may be to no avail without possible reproduction. And as odd as it may seem, I have not tested in 60 release with the older webapp, as I don't use that version as my default browser yet. Too many parameters.

The fear of course is that the issue may show up for other webapps and for various chipsets/drivers (from mainboard manufacturers rather than Intel), something that may have become clear when verifying/triaging a bit earlier around the time of reporting. For now, it may be safe to mark this WFM - it can be reopened when the issue returns or comparable ones turn up.

Thanks for the details. We'll mark this WFM for now.

I'm afraid someone ran into the same issue on 60 esr with another game. Can you please have a look at?

Reopening as the issue was kind of confirmed - forcing to WARP by the user seems to be a workaround - see the support question. (As written there, trying other methods as a fix beforehand rather than forcing to use WARP if possible would be appreciated.)

Moving to p3 because no activity for at least 24 weeks. See for more information

Alright, so I think I know what this is.
We're not even supposed to be running d3d11 on this hardware: ("Intel(R) G41 Express Chipset" appears to be GMA X4500)

There's a ton of spaghetti code to get to here: FEATURE_DIRECT3D_11_ANGLE => Feature::D3D11_HW_ANGLE =>:

    if (gfxConfig::IsForcedOnByUser(Feature::D3D11_HW_ANGLE)) {
      return GetAndInitDisplay(egl, LOCAL_EGL_D3D11_ONLY_DISPLAY_ANGLE);
    }
    if (d3d11ANGLE.IsEnabled()) {
      ret = GetAndInitDisplay(egl, LOCAL_EGL_D3D11_ELSE_D3D9_DISPLAY_ANGLE);
    }
    if (!ret) {
      ret = GetAndInitDisplay(egl, EGL_DEFAULT_DISPLAY);
    }

So normal users aren't force-enabled, but they shouldn't be d3d11ANGLE.IsEnabled() either, so we should take the EGL_DEFAULT_DISPLAY path. Tracing that into ANGLE:

    // The default display is requested, try the D3D9 and D3D11 renderers,
    // order them using the definition of ANGLE_DEFAULT_D3D11
    #if ANGLE_DEFAULT_D3D11
    #  if defined(ANGLE_ENABLE_D3D11)
         rendererCreationFunctions.push_back(CreateTypedRendererD3D<Renderer11>);
    #  endif
    #  if defined(ANGLE_ENABLE_D3D9)
         rendererCreationFunctions.push_back(CreateTypedRendererD3D<Renderer9>);
    #  endif
    #else
    #  if defined(ANGLE_ENABLE_D3D9)
         rendererCreationFunctions.push_back(CreateTypedRendererD3D<Renderer9>);
    #  endif
    #  if defined(ANGLE_ENABLE_D3D11)
         rendererCreationFunctions.push_back(CreateTypedRendererD3D<Renderer11>);
    #  endif
    #endif

Oh, let's check on ANGLE_DEFAULT_D3D11:
ESR52 (working):
ESR60 (broken):
Oops.

The GLLibraryEGL code is treating EGL_DEFAULT_DISPLAY as d3d9, which stopped being the case when we lost this moz.build define in the ANGLE update/refactor in 60. We should really ask for d3d9 explicitly when we want d3d9.

I don't like relying on the magic DEFINES['ANGLE_DEFAULT_D3D11'] = "0" line to keep working, particularly since this was the second time we've accidentally deleted it during an ANGLE update. So let's ask for d3d9 explicitly instead.

Pushed by jgilbert@mozilla.com: If ANGLE D3D11 disabled, ask for D3D9 explicitly.
r=lsalzman

[Tracking Requested - why for this release]: Users on "Intel(R) G41 Express Chipset" (GMA X4500, which is a lot of people) have reported bad rendering on some games, and now we can help them.

Given that it's a longstanding issue, I don't think we need to start tracking it now just because we have a patch. That said, I'm happy to entertain an approval request if you think we should consider uplifting to Fx69/ESR68 :-).

I'll be asking for uplifts after verification!

I could not reproduce the issue on the original affected build, Nightly v60.0a1 from 2018-03-03. Also, I do not have the "Intel(R) G41 Express Chipset" that is also a cause of this issue. Moreover, the reporter states that he could not reproduce the issue either, in Firefox 60, 61, and 62 even before the fix, but the test page (the Facebook game 8 Ball Pool) got an update, so the test page may be invalid now. This being said, this bug's fix cannot be verified unless another method to reproduce is found. Jeff, do you know any other test page to reproduce this issue?

I don't. I would like to take this blacklisting fix in esr68 though, since that'll be with us for some time.

Comment on attachment 9076704 [details]
Bug 1443181 - If ANGLE D3D11 disabled, ask for D3D9 explicitly.

ESR Uplift Approval Request
- If this is not a sec:{high,crit} bug, please state case for ESR consideration: Fixes a regression from v60, where users of a fairly common GPU were supposed to have d3d11 blocklisted due to unreliable d3d11 on that device, but became accidentally unblocked in v60.
- User impact if declined: Users on a fairly common GPU may run into WebGL content (e.g. Facebook games) that behaves badly, including at the very least bad rendering.
- Fix Landed on Version: 70
- Risk to taking this patch: Low
- Why is the change risky/not risky? (and alternatives if risky): Low risk, fixes a regression, but we don't have a device to test this on.
- String or UUID changes made by this patch: none

Comment on attachment 9076704 [details]
Bug 1443181 - If ANGLE D3D11 disabled, ask for D3D9 explicitly.

Beta/Release Uplift Approval Request
- User impact if declined: Users on a fairly common GPU may run into WebGL content (e.g. Facebook games) that behaves badly, including at the very least bad rendering.
- Is this code covered by automated tests?: No
- Has the fix been verified in Nightly?: No
- Needs manual test from QE?: No
- If yes, steps to reproduce:
- List of other uplifts needed: esr68
- Risk to taking this patch: Low
- Why is the change risky/not risky? (and alternatives if risky): Low risk, fixes a regression, but we don't have a device to test this on.
- String changes made/needed: none

I have attempted to reproduce it again on 2 other, more low-end systems, but still with no success. We don't have the chipset that the reporter has AND the game the issue was logged for appears to have been updated. This given, we cannot correctly verify the fix of this issue. Removing the "qe-verify+" tag.

Comment on attachment 9076704 [details]
Bug 1443181 - If ANGLE D3D11 disabled, ask for D3D9 explicitly.

Seems like a low-risk fix, but the inability to verify is a bit concerning. Let's take this for 69.0b5 and let it bake for a bit.

Comment on attachment 9076704 [details]
Bug 1443181 - If ANGLE D3D11 disabled, ask for D3D9 explicitly.

I'm not convinced there's enough need for this fix on ESR to justify uplift, and regressions may not be immediately obvious given the lack of test coverage. Let's let this ride the regular trains.

This was backed out from 69 & 70 for causing bug 1571838. There's an r+ patch which didn't land and no activity in this bug for 2 weeks. :jgilbert, could you have a look please? For more information, please visit the auto_nag documentation.
https://bugzilla.mozilla.org/show_bug.cgi?id=1443181
is TR1 and boost the same thing? or does TR1 belong to boost? This is a discussion on "is TR1 and boost the same thing?" within the C++ Programming forums, part of the General Programming Boards category.

or does TR1 belong to boost?

No, TR1 is the next version of the C++ standard. Boost is just a set of free libraries, although some of the things in TR1 came from Boost.

May I ask why it is called boost::TR1?

TR1 is "Technical Report 1", which was approved by the standards committee as a semi-official addition to augment the standard library. So a bunch of things were approved as part of TR1 that library implementors can implement and distribute to users or with compilers. TR1 defines a common interface much like the regular standard does, so that users can write code to the TR1 interface and expect it to work even when they switch to different implementations of TR1. Note that TR1 is not the next version of the standard, as cpjust mentioned. It is just a group of approved libraries to hold people over until the next standard is official.

Boost is a group of libraries created and approved by the Boost organization. Boost is not directly affiliated with the standards committee. Libraries in Boost are generally only implemented in Boost, not in other vendors' libraries.

The relationship between them is that many of the interfaces from the Boost library were used in TR1. The reason is that Boost is widely used and tested in actual code, and interfaces that have already been implemented and are already being used are generally better to standardize than new interfaces. So some things, like shared_ptr, were implemented in Boost, then included in TR1 as well. So if you want to use shared_ptr, you can use shared_ptr from Boost, or you can use shared_ptr from any other implementation of TR1. In addition, there are other parts of TR1 that aren't in Boost (at least I think so; Boost may have finished a complete implementation of the TR1 libraries).
There are many libraries in Boost that are not in TR1. In the end, if something is part of TR1, then using it is a good idea, since TR1 is almost standard. If you can't find what you need in TR1, then try Boost, because even if it doesn't become standard, Boost is highly respected and widely used.

The reason you see Boost::TR1 is because, as I sort of mentioned, Boost has an implementation of TR1. If you want to use TR1, you can use Boost's version. You can also use the one from Dinkumware, and if any other library vendors implement it, you can use those as well.

Thanks all. May I ask if there is "std::TR1"? What in TR1 do you most often use?

Yes, that is what is officially known as TR1.

When (if) they ever do finally release the new C++ standard, where are all those TR1 libraries going to be? Will they keep them in a TR1 namespace forever, or will they be moved to std:: or something other than TR1?

That's up to the committee. However, in general terms, it's a fair bet those things in TR1 that make it into the final standard will probably be moved into std. I've seen experts give advice on how to use std::tr1 so that code will require minimal changes when those libraries are added to std. Something about making a namespace alias. However, even when they put them in std, I would imagine that they would remain in std::tr1 as well, but deprecated. Of course, this all assumes that the libraries make it into the standard without any changes, which is not a guarantee.

What did they do in previous standards before & after they were released? Did they ever have any sort of TR1 before, or was everything in std:: right from the start?

I'd be more inclined to bet on a situation like the <iostream.h> header. That header only ever existed in draft standards, but the standard (as ratified) only has reference to <iostream>.
Several compilers (or, more accurately, their libraries) support <iostream.h> for historical reasons (such as avoiding breaking code that dates from the time of those draft standards and was supported by compilers at the time). However, <iostream.h>, as far as the ratified standard is concerned, is never mentioned and has never existed.

That's certainly possible. But given that the same standards committee was the one that approved TR1, I think they might also make mention of it. Either way will effectively be the same, though, as library writers will likely still provide implementations in the std::tr1 namespace for a while after C++0x is implemented.

What did they do in previous standards before & after they were released? Did they ever have any sort of TR1 before, or was everything in std:: right from the start?

They had std from the start. There has been only one standard. Before the language was standardized, there was no std namespace. Libraries that supported both standard and pre-standard C++ provided both options, but they were in different headers.
http://cboard.cprogramming.com/cplusplus-programming/100530-tr1-boost-same-thing.html
count·er1 (koun'tər)
adj. Contrary; opposing: moves and counter moves on the checkerboard.
v., -tered, -ter·ing, -ters.
v.tr.
- To meet or return (a blow) by another blow.
- To move or act in opposition to; oppose.
- To offer in response: countered that she was too busy to be thorough.
v.intr. To move, act, or respond so as to be in opposition.
adv.
- In a contrary manner or direction.
- To or toward an opposite or dissimilar course or outcome: a method running counter to traditional techniques.
[Middle English countre, from Old French contre, from Latin contrā. See counter–.]

count·er2 (koun'tər)
n.
- A flat surface on which money is counted, business is transacted, or food is prepared or served.
- Games. A piece, as of wood or ivory, used for keeping a count or a place.
- An imitation coin; a token.
- A piece of money.
over the counter
- Without being listed or available on an officially recognized stock exchange but traded in direct negotiation between buyers and sellers: bought stocks over the counter.
- Without a doctor's prescription being legally required: cold medicine that is available over the counter.
under the counter
- In an illegal or surreptitious manner; illicitly: arrested for selling prescription drugs under the counter.
[Middle English countour, from Anglo-Norman counteour, from Medieval Latin computātōrium, countinghouse, from Latin computāre, to calculate. See count1.]

count·er3 (koun'tər)
n. One that counts, especially an electronic or mechanical device that automatically counts occurrences or repetitions of phenomena or events.
http://www.answers.com/counter
Hey guys. Today I would like to add a new entry to my on-going series on Large Scale Application development with jQuery – printable tools for application architecture charts. A lot of you replied back to posts in the series with additional tools you found useful, and I thought it might be handy to create a single chart for you that breaks down some of my updated recommendations for tasks like structuring client-side MVC apps, templating, unit and BDD testing, debouncing and more. For readers that might prefer something more list-like, I've also posted up a list version and linked up to the official sites for most of the recommendations on the list inside this post.

Tools for jQuery Application Architecture – List Version
- Re-usable components
- Standard jQuery Plugin
- Patterns
- Event-driven pattern
- Object-driven pattern
- In-loop driven pattern
- Class and Function pattern
- Extend pattern
- $.widget factory jQuery Plugin
- jQuery.Controller
- Single-page Applications
- Framework
- Backbone.js
- Utilities
- Sammy.js
- Templating
- jQuery templates plugin
- Mustache.js
- Handlebars
- Dust.js
- ICanHaz.js
- PURE
- MicroTemplates
- Closure Templates
- jQuery View (jQuery templates with inheritance)
- Large-scale applications
- Patterns
- Architecture
- Code Structure
- Module Pattern
- Revealing Module Pattern
- Inheritance Pattern
- Prototype Pattern
- Sandbox Pattern
- Observer (PubSub) Pattern
- Events, Routing and Tools
- pub/sub
- Hash/Routing & Bookmarking
- Tools
- Feature Detection
- Debouncing
- LocalStorage
- Unit Testing
- QUnit
- Jasmine (BDD)
- FireUnit
- Crosscheck
- JSSpec (BDD)
- jsTestDriver
- Script Loading
- Build Tools
- Concatenation
- Minification

and that's it! If you're currently building any large-scale applications using jQuery, please feel free to get in touch, as I'm always looking for success stories that may prove useful to the rest of the community.
Remember that if you found the charts useful, you can easily share them with your friends and colleagues by clicking the 'retweet' button below. Until next time, good luck with your projects!

Last Updated: March 15th 2011.

Awesome, thanks Addy. Will refer ppl to this at our conf!

Thanks Simon!

Good job Addy!

Great job all around! Thanks for sharing, Addy!

Excellent! Any reason why FuncUnit isn't listed above? For more information, see.

Hey David – I remember looking at FuncUnit while I was evaluating what to include on the chart. Whilst it does offer a level of further QUnit automation through env.js, I wasn't all that comfortable slotting it into the 'unit testing' block, given that it describes itself as a functional testing kit and that's its main focus. Would you disagree?

I agree. I believe devs would find FuncUnit useful though, since it simplifies testing the user interaction. Perhaps it doesn't belong in the section on unit testing, but I think it qualifies to be on the list somewhere. While we're on the subject, how about adding Amplify (amplifyjs.com) to the list? (request, store, pub/sub)

Thanks for a great list, and please keep updating it. I will use it as a primary source when looking for recommended tools.

angularjs.org is missing. It is an MVC framework with templating, validation, testing, binding and many other features built in. Check it out.

Nice job Addy!! Would you mind including my pub/sub plugin? You can see some demos here: I've submitted this proposal to the jQuery team (1.6 roadmap).
Pingback: Fundamenty tworzenia aplikacji w JS « Wiadomości o technologiach IT Pingback: Tools for jQuery Application Architecture – The Printable Chart | Brent Sordyl's blog great post again, jquery community is hungry of guidelines to structure medium to large js applications, thanks The fact that jQuery is so easy to use has led to an explosion of new sites (even very large projects) that use simple dhtml-ish effects as well as ajax and other stuff extensively. Now everbody needs "js-developers". What happens? Just find someone "who programs is jQuery" get started. The first couple of things work fine, but as things get more complex, that person can't handle it. We really need more developers who understand javascript much more thouroughly. BTW there are more and more of these people, and they will always find good work. But there are also very many "jQuery users", who are involved in projects way beyond their capabilities. Good evening, I want to know how to use MVC Framework with jQuery syntax with the quickest to learn. Depending on your testing what are your feelings? Your personal opinions on the subject ? Hey gtraxx. I would recommend reading up on my Building Large jQuery Applications post as I do cover your options for using MVC with jQuery there. Hope it helps! Thank you, and when it is the most appropriate MVC framework with a quick start for a light application or average? Pingback: 30 Fantastic New jQuery Plugins-Speckyboy Design Magazine Pingback: 316i « Mindless Chatter Hey you people have to bump this up jQuery course up by voting. Right now it is tied with the Yeti course at a measly 10 points. What technology do you want to use with QuickBase – jQuery technology or Yeti technology? I have consistently followed a blog thank you Pingback: Quelques plugs jquery « SpagMedia Thanks Andy! This is awesome. Will spread this post. Very helpful! 
Great stuff! It might be a good idea to add html5boilerplate somewhere. It's a huge time saver, and it integrates a bunch of the items above into a cohesive build process.

I've been thinking about that : ) The 'tools' branch of the chart might get a little bit of an overhaul with some improved sections, and if so, I'll definitely include html5bp in there.

WOW! What about lawnchair for storage?

Nice! I haven't used lawnchair before, but Brian does seem to have written a fairly clean implementation for storage. I'll check it out and consider it for inclusion.

Is it necessary to know jQuery concepts before starting the work? I am a beginner, which is why I am asking about this tool.

Good job Addy. Looking forward to more.

Hi mate, not sure if you did, but perhaps you linked to the wrong History.js – there are two: 1. The fortes one you linked to, whose development stopped last year. 2. The balupton one, which was created this year.

Is there something like underscore.js but with deep variable testing? So I don't have to include my own utils with only this highly used piece of code:

    /**
     * Check if a deeply set property is set
     *
     * @param strVar string representation of property path, i.e. foo.bar.something.id
     * @param context optional, defaults to window
     * @return boolean if set
     */
    my.isset = function (strVar, context) {
        if (!context) {
            context = window;
        }
        var namespaces = strVar.split(".");
        for (var i = 0; i < namespaces.length; i++) {
            if (context[namespaces[i]]) {
                context = context[namespaces[i]];
            } else {
                return false;
            }
        }
        return true;
    };

Thank you for posting and sharing your intellect, Addy. I shall keep checking out new posts on this blog.

Great, thank you so much for sharing. I'm so intrigued by this that I want to learn more JavaScript now.

For me, I will be using Backbone, jQueryMobile, __, RequireJs.
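As a runnable usage sketch of the my.isset helper from the comment above (the my namespace object and the sample app object are assumptions for illustration; globalThis stands in for window so the snippet also runs outside a browser):

```javascript
// Namespace object assumed, as in the comment above.
var my = {};

// Same helper as in the comment, with globalThis as the default context
// so it runs in Node as well as in a browser (where globalThis === window).
my.isset = function (strVar, context) {
    if (!context) {
        context = globalThis;
    }
    var namespaces = strVar.split(".");
    for (var i = 0; i < namespaces.length; i++) {
        if (context[namespaces[i]]) {
            context = context[namespaces[i]];
        } else {
            return false;
        }
    }
    return true;
};

// Check deeply nested properties without chains of && guards.
var app = { config: { api: { url: "https://example.org" } } };
console.log(my.isset("config.api.url", app)); // true
console.log(my.isset("config.api.key", app)); // false
```

One caveat worth knowing: because the walk relies on plain truthiness, properties set to falsy values such as 0, "" or false are reported as unset; a hasOwnProperty or 'in' check would be needed to distinguish those cases.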
http://addyosmani.com/blog/tools-for-jquery-application-architecture-the-printable-chart/
You probably already know VLC. As you might know, and as j-b mentioned in his talk during FOSDEM, «VLC doesn't exist». Indeed, VLC is basically a 100-line wrapper around libvlc. This library is the entry point to another one, libvlccore, which does all the heavy lifting by coordinating modules with each other. libvlc is a public API, and can be used by anyone wishing to add multimedia functionalities to their application. While it's a fairly easy-to-use API, it contains quite a few things that can be annoying to C++ developers: for instance, non-automatic ref-counting, callbacks through raw function pointers, raw C strings, …

Over the last few years, VideoLAN has had quite a bunch of C++ wrappers. I can count 4 off the top of my head, and a quick search on Google gives out a few more. All have their specificities, advantages, and drawbacks. Since most of them are not maintained, nor do they expose the full set of functionalities available within libvlc, j-b and I decided to write one that will, hopefully, be the "official" one. You can clone it here: git://git.videolan.org/vlc-bindings/libvlcpp.git

It has just been tagged as 0.1.0-rc1, and is now waiting for applications to use it! Before showing a quick sample, here's a quick summary of libvlcpp:

- Automatic VLC objects ref-counting
- Handles events with lambdas
- Header-only implementation
- VLC 2.2 full feature parity
- C++11

Now for a quick example, which is included in the git as the "helloworld" project:

    #include "vlcpp/vlc.hpp"
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main(int ac, char** av)
    {
        if (ac < 2)
        {
            std::cerr << "usage: " << av[0] << " <file to play>" << std::endl;
            return 1;
        }
        auto instance = VLC::Instance(0, nullptr);
        auto media = VLC::Media(instance, av[1], VLC::Media::FromPath);
        auto mp = VLC::MediaPlayer(media);
        mp.play();
        std::this_thread::sleep_for(std::chrono::seconds(10));
    }

This is only going to create a new window and display the video you provided in it.
There are obviously ways to make things look better, but that will be part of some other example projects :) As said earlier, this has been tagged as an rc1, so we need your reviews & comments! I hope you'll enjoy using libvlcpp as much as I enjoyed writing it!
https://www.beauzee.fr/2015/03/05/introducing-libvlcpp/